path: root/scripts/link-vmlinux.sh
Age        | Commit message                                                                    | Author
2022-06-05 | kbuild: factor out the common objtool arguments                                   | Masahiro Yamada
2022-06-05 | kbuild: move vmlinux.o link to scripts/Makefile.vmlinux_o                         | Masahiro Yamada
2022-06-05 | kbuild: clean .tmp_* pattern by make clean                                        | Masahiro Yamada
2022-06-01 | kbuild: remove redundant cleanups in scripts/link-vmlinux.sh                      | Masahiro Yamada
2022-05-29 | kbuild: do not try to parse *.cmd files for objects provided by compiler          | Masahiro Yamada
2022-05-26 | Merge tag 'kbuild-v5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mas...  | Linus Torvalds
2022-05-24 | Merge tag 'objtool-core-2022-05-23' of git://git.kernel.org/pub/scm/linux/ker...  | Linus Torvalds
2022-05-24 | kbuild: stop merging *.symversions                                                | Masahiro Yamada
2022-05-24 | kbuild: link symbol CRCs at final link, removing CONFIG_MODULE_REL_CRCS           | Masahiro Yamada
2022-05-23 | Merge tag 'x86_cpu_for_v5.19_rc1' of git://git.kernel.org/pub/scm/linux/kerne...  | Linus Torvalds
2022-05-11 | kbuild: generate a list of objects in vmlinux                                     | Masahiro Yamada
2022-04-22 | objtool: Remove --lto and --vmlinux in favor of --link                            | Josh Poimboeuf
2022-04-22 | objtool: Rename "VMLINUX_VALIDATION" -> "NOINSTR_VALIDATION"                      | Josh Poimboeuf
2022-04-22 | objtool: Make noinstr hacks optional                                              | Josh Poimboeuf
2022-04-22 | objtool: Make jump label hack optional                                            | Josh Poimboeuf
2022-04-22 | objtool: Make static call annotation optional                                     | Josh Poimboeuf
2022-04-22 | objtool: Make stack validation frame-pointer-specific                             | Josh Poimboeuf
2022-04-22 | objtool: Add CONFIG_OBJTOOL                                                       | Josh Poimboeuf
2022-04-22 | objtool: Make stack validation optional                                           | Josh Poimboeuf
2022-04-22 | objtool: Ditch subcommands                                                        | Josh Poimboeuf
2022-04-22 | objtool: Reorganize cmdline options                                               | Josh Poimboeuf
2022-04-19 | objtool: Enable unreachable warnings for CLANG LTO                                | Josh Poimboeuf
2022-04-04 | x86/cpu: Remove CONFIG_X86_SMAP and "nosmap"                                      | Borislav Petkov
2022-04-02 | kbuild: fix empty ${PYTHON} in scripts/link-vmlinux.sh                            | Masahiro Yamada
2022-03-15 | x86/alternative: Use .ibt_endbr_seal to seal indirect calls                       | Peter Zijlstra
2022-03-15 | objtool: Rename --duplicate to --lto                                              | Peter Zijlstra
2022-01-19 | Merge tag 'kbuild-v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/mas...  | Linus Torvalds
2022-01-16 | Merge tag 'trace-v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rost...  | Linus Torvalds
2022-01-13 | scripts: ftrace - move the sort-processing in ftrace_init                         | Yinan Liu
2022-01-08 | kbuild: do not include include/config/auto.conf from shell scripts                | Masahiro Yamada
2021-12-09 | x86: Add straight-line-speculation mitigation                                     | Peter Zijlstra
2021-11-08 | Merge tag 'kbuild-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/mas...  | Linus Torvalds
2021-11-01 | kbuild: Unify options for BTF generation for vmlinux and modules                  | Jiri Olsa
2021-10-12 | scripts: update the comments of kallsyms support                                  | Hui Su
2021-09-03 | kbuild: merge vmlinux_link() between ARCH=um and other architectures              | Masahiro Yamada
2021-09-03 | kbuild: do not remove 'linux' link in scripts/link-vmlinux.sh                     | Masahiro Yamada
2021-09-03 | kbuild: merge vmlinux_link() between the ordinary link and Clang LTO              | Masahiro Yamada
2021-07-10 | Merge tag 'kbuild-v5.14' of git://git.kernel.org/pub/scm/linux/kernel/git/mas...  | Linus Torvalds
2021-06-29 | kbuild: skip per-CPU BTF generation for pahole v1.18-v1.21                        | Andrii Nakryiko
2021-05-27 | kbuild: Quote OBJCOPY var to avoid a pahole call break the build                  | Javier Martinez Canillas
2021-05-27 | kbuild: clean up ${quiet} checks in shell scripts                                 | Masahiro Yamada
2021-05-06 | kbuild: Don't remove link-vmlinux temporary files on exit/signal                  | Andi Kleen
2021-04-29 | Merge tag 'kbuild-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/mas...  | Linus Torvalds
2021-04-25 | kbuild: apply fixdep logic to link-vmlinux.sh                                     | Rasmus Villemoes
2021-04-25 | kbuild: add CONFIG_VMLINUX_MAP expert option                                      | Rasmus Villemoes
2021-04-13 | bpf: Generate BTF_KIND_FLOAT when linking vmlinux                                 | Ilya Leoshkevich
2021-02-23 | kbuild: lto: postpone objtool                                                     | Sami Tolvanen
2021-02-23 | objtool: Split noinstr validation from --vmlinux                                  | Sami Tolvanen
2021-02-23 | objtool: Don't autodetect vmlinux.o                                               | Sami Tolvanen
2021-01-14 | init: lto: ensure initcall ordering                                               | Sami Tolvanen
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z16/pai_crypto.json14
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z16/transaction.json28
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z17/basic.json58
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z17/crypto6.json142
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z17/extended.json541
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z17/pai_crypto.json1213
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z17/pai_ext.json261
-rw-r--r--tools/perf/pmu-events/arch/s390/cf_z17/transaction.json72
-rw-r--r--tools/perf/pmu-events/arch/s390/mapfile.csv3
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json1722
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/cache.json663
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/floating-point.json69
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/frontend.json111
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/memory.json144
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/metricgroups.json42
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/other.json113
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/pipeline.json455
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/uncore-interconnect.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json25
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/uncore-other.json1
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlake/virtual-memory.json59
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json707
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/cache.json346
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/floating-point.json19
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/frontend.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/memory.json81
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/other.json30
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/pipeline.json197
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/uncore-interconnect.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json25
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/uncore-other.json1
-rw-r--r--tools/perf/pmu-events/arch/x86/alderlaken/virtual-memory.json36
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen4/cache.json56
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/branch-prediction.json93
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/data-fabric.json1634
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/decode.json115
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/execution.json174
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/floating-point.json812
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/inst-cache.json72
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/l2-cache.json266
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/l3-cache.json177
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/load-store.json517
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/memory-controller.json101
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/pipeline.json99
-rw-r--r--tools/perf/pmu-events/arch/x86/amdzen5/recommended.json457
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/arl-metrics.json2795
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/cache.json1744
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/floating-point.json532
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/frontend.json745
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/memory.json401
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/metricgroups.json150
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/other.json90
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/pipeline.json2500
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/uncore-cache.json20
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/uncore-interconnect.json47
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/uncore-memory.json160
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/uncore-other.json (renamed from tools/perf/pmu-events/arch/x86/haswell/uncore-other.json)3
-rw-r--r--tools/perf/pmu-events/arch/x86/arrowlake/virtual-memory.json522
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/cache.json93
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/floating-point.json32
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/frontend.json11
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/memory.json19
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/other.json62
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/pipeline.json52
-rw-r--r--tools/perf/pmu-events/arch/x86/bonnell/virtual-memory.json15
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json332
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/cache.json285
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/counter.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/floating-point.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/frontend.json32
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/memory.json246
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/other.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/pipeline.json147
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/uncore-cache.json24
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/uncore-interconnect.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwell/virtual-memory.json38
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json333
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/cache.json86
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/counter.json42
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/floating-point.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/frontend.json32
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/memory.json45
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/other.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/pipeline.json147
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/uncore-cache.json410
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json86
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/uncore-io.json62
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/uncore-memory.json322
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/uncore-power.json60
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellde/virtual-memory.json38
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json364
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/cache.json98
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/counter.json57
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/floating-point.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/frontend.json32
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/memory.json64
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/other.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/pipeline.json147
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json427
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json486
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/uncore-io.json62
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json327
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/uncore-power.json60
-rw-r--r--tools/perf/pmu-events/arch/x86/broadwellx/virtual-memory.json38
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/cache.json1649
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json904
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/counter.json52
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/floating-point.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/frontend.json59
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/memory.json745
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/metricgroups.json32
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/other.json490
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/pipeline.json104
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json2293
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/uncore-interconnect.json2559
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/uncore-io.json703
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json985
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/uncore-power.json53
-rw-r--r--tools/perf/pmu-events/arch/x86/cascadelakex/virtual-memory.json30
-rw-r--r--tools/perf/pmu-events/arch/x86/clearwaterforest/cache.json179
-rw-r--r--tools/perf/pmu-events/arch/x86/clearwaterforest/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/clearwaterforest/frontend.json18
-rw-r--r--tools/perf/pmu-events/arch/x86/clearwaterforest/memory.json24
-rw-r--r--tools/perf/pmu-events/arch/x86/clearwaterforest/pipeline.json115
-rw-r--r--tools/perf/pmu-events/arch/x86/clearwaterforest/virtual-memory.json29
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/cache.json397
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/floating-point.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/frontend.json9
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/memory.json301
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/other.json387
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/pipeline.json91
-rw-r--r--tools/perf/pmu-events/arch/x86/elkhartlake/virtual-memory.json35
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json434
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/counter.json82
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json2286
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/floating-point.json28
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/frontend.json107
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/memory.json263
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/metricgroups.json145
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/other.json299
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/pipeline.json251
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json1563
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cxl.json110
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-interconnect.json1453
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-io.json809
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json866
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json47
-rw-r--r--tools/perf/pmu-events/arch/x86/emeraldrapids/virtual-memory.json20
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/cache.json103
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/floating-point.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/frontend.json8
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/memory.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/other.json5
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/pipeline.json40
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmont/virtual-memory.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/cache.json101
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/floating-point.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json8
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/memory.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/other.json5
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json42
-rw-r--r--tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json18
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/cache.json407
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/counter.json42
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/floating-point.json110
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/frontend.json27
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/grr-metrics.json789
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/memory.json77
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/metricgroups.json23
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/other.json20
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/pipeline.json494
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/uncore-cache.json2117
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/uncore-interconnect.json275
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/uncore-io.json1380
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/uncore-memory.json789
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/uncore-power.json11
-rw-r--r--tools/perf/pmu-events/arch/x86/grandridge/virtual-memory.json130
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/cache.json1184
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/counter.json82
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/floating-point.json242
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/frontend.json470
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/gnr-metrics.json2383
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/memory.json387
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/metricgroups.json145
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/other.json64
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/pipeline.json1061
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/uncore-cache.json3736
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/uncore-cxl.json29
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/uncore-interconnect.json1979
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/uncore-io.json1925
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/uncore-memory.json890
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/uncore-power.json109
-rw-r--r--tools/perf/pmu-events/arch/x86/graniterapids/virtual-memory.json159
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/cache.json94
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/counter.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/floating-point.json10
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/frontend.json29
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json292
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/memory.json60
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/other.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/pipeline.json130
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/uncore-cache.json33
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/uncore-interconnect.json6
-rw-r--r--tools/perf/pmu-events/arch/x86/haswell/virtual-memory.json49
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/cache.json97
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/counter.json57
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/floating-point.json10
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/frontend.json29
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json324
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/memory.json67
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/other.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/pipeline.json130
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/uncore-cache.json426
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json482
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/uncore-io.json59
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json325
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/uncore-power.json65
-rw-r--r--tools/perf/pmu-events/arch/x86/haswellx/virtual-memory.json49
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/cache.json203
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/counter.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/floating-point.json13
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/frontend.json58
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json834
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/memory.json216
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/metricgroups.json33
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/other.json205
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/pipeline.json134
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/uncore-interconnect.json58
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/uncore-other.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json38
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/cache.json420
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/counter.json57
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/floating-point.json13
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/frontend.json57
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json1009
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/memory.json247
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/metricgroups.json33
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/other.json421
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/pipeline.json124
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/uncore-cache.json2169
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/uncore-interconnect.json3408
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/uncore-io.json1840
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/uncore-memory.json338
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/uncore-power.json54
-rw-r--r--tools/perf/pmu-events/arch/x86/icelakex/virtual-memory.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/cache.json104
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/counter.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/floating-point.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/frontend.json30
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json309
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/memory.json19
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/other.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/pipeline.json126
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/uncore-cache.json25
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/uncore-interconnect.json9
-rw-r--r--tools/perf/pmu-events/arch/x86/ivybridge/virtual-memory.json18
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/cache.json118
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/counter.json52
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/floating-point.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/frontend.json30
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/ivt-metrics.json321
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/memory.json41
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/other.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/pipeline.json126
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/uncore-cache.json349
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json385
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/uncore-io.json61
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/uncore-memory.json198
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json77
-rw-r--r--tools/perf/pmu-events/arch/x86/ivytown/virtual-memory.json20
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/cache.json123
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/counter.json52
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/floating-point.json15
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/frontend.json40
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/jkt-metrics.json130
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/memory.json35
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/other.json12
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/pipeline.json127
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/uncore-cache.json205
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json207
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/uncore-io.json36
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/uncore-memory.json51
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json42
-rw-r--r--tools/perf/pmu-events/arch/x86/jaketown/virtual-memory.json16
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/cache.json213
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/counter.json37
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/floating-point.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/frontend.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/memory.json101
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/pipeline.json45
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/uncore-cache.json421
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/uncore-io.json24
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/uncore-memory.json14
-rw-r--r--tools/perf/pmu-events/arch/x86/knightslanding/virtual-memory.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/cache.json1561
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/floating-point.json484
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/frontend.json658
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/lnl-metrics.json2754
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/memory.json312
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/metricgroups.json150
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/other.json196
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json2096
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/uncore-interconnect.json (renamed from tools/perf/pmu-events/arch/x86/broadwell/uncore-other.json)6
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/uncore-memory.json44
-rw-r--r--tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json416
-rw-r--r--tools/perf/pmu-events/arch/x86/mapfile.csv50
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/cache.json649
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/floating-point.json162
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/frontend.json207
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/memory.json179
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/metricgroups.json150
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/mtl-metrics.json2825
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/other.json91
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/pipeline.json580
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/uncore-cache.json2
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/uncore-interconnect.json30
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/uncore-memory.json34
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/uncore-other.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/meteorlake/virtual-memory.json73
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/cache.json352
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json28
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/frontend.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/memory.json67
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/other.json48
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/pipeline.json109
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemep/virtual-memory.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/cache.json347
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json28
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/frontend.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/memory.json67
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/other.json48
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/pipeline.json109
-rw-r--r--tools/perf/pmu-events/arch/x86/nehalemex/virtual-memory.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/cache.json1375
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/counter.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/floating-point.json286
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/frontend.json565
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/memory.json293
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/other.json44
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/pipeline.json1891
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/uncore-memory.json26
-rw-r--r--tools/perf/pmu-events/arch/x86/pantherlake/virtual-memory.json310
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/cache.json203
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/counter.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/floating-point.json13
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/frontend.json58
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/memory.json216
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/metricgroups.json33
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/other.json205
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/pipeline.json134
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/rkl-metrics.json839
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/uncore-interconnect.json34
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/uncore-other.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/rocketlake/virtual-memory.json38
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/cache.json173
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/counter.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/floating-point.json15
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/frontend.json40
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/memory.json37
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/metricgroups.json21
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/other.json12
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/pipeline.json128
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/snb-metrics.json130
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/uncore-cache.json25
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/uncore-interconnect.json9
-rw-r--r--tools/perf/pmu-events/arch/x86/sandybridge/virtual-memory.json16
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/cache.json496
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/counter.json82
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/floating-point.json28
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/frontend.json107
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/memory.json263
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/metricgroups.json33
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/other.json344
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/pipeline.json267
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/spr-metrics.json1127
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cache.json1367
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-cxl.json110
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-interconnect.json1453
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-io.json825
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-memory.json866
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/uncore-power.json47
-rw-r--r--tools/perf/pmu-events/arch/x86/sapphirerapids/virtual-memory.json20
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/cache.json448
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/counter.json77
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/floating-point.json110
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/frontend.json91
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/memory.json99
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/metricgroups.json23
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/other.json20
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/pipeline.json502
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/srf-metrics.json897
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/uncore-cache.json3405
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/uncore-cxl.json29
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/uncore-interconnect.json1625
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/uncore-io.json1925
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/uncore-memory.json852
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/uncore-power.json109
-rw-r--r--tools/perf/pmu-events/arch/x86/sierraforest/virtual-memory.json130
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/cache.json77
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/floating-point.json1
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/frontend.json8
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/memory.json1
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/other.json2
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/pipeline.json34
-rw-r--r--tools/perf/pmu-events/arch/x86/silvermont/virtual-memory.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/cache.json250
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/counter.json22
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/floating-point.json10
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/frontend.json59
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/memory.json133
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/metricgroups.json32
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/other.json2
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/pipeline.json101
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/skl-metrics.json785
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/uncore-cache.json23
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/uncore-interconnect.json8
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/uncore-other.json10
-rw-r--r--tools/perf/pmu-events/arch/x86/skylake/virtual-memory.json30
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/cache.json238
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/counter.json52
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/floating-point.json13
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/frontend.json59
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/memory.json117
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/metricgroups.json32
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/other.json73
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/pipeline.json104
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json863
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/uncore-cache.json2274
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/uncore-interconnect.json2544
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/uncore-io.json705
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/uncore-memory.json804
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/uncore-power.json53
-rw-r--r--tools/perf/pmu-events/arch/x86/skylakex/virtual-memory.json30
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/cache.json397
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/counter.json47
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/floating-point.json4
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/frontend.json9
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/memory.json301
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/other.json387
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/pipeline.json91
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/uncore-cache.json1599
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/uncore-interconnect.json1409
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/uncore-io.json1754
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/uncore-memory.json103
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/uncore-power.json54
-rw-r--r--tools/perf/pmu-events/arch/x86/snowridgex/virtual-memory.json35
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/cache.json118
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/counter.json17
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/floating-point.json13
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/frontend.json58
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/memory.json35
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/metricgroups.json33
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/other.json6
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/pipeline.json137
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/tgl-metrics.json839
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/uncore-interconnect.json25
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/uncore-memory.json6
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/uncore-other.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/tigerlake/virtual-memory.json38
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/cache.json314
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json28
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/frontend.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/memory.json69
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/other.json58
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/pipeline.json111
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-dp/virtual-memory.json29
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/cache.json353
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json28
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/frontend.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/memory.json67
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/other.json58
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/pipeline.json111
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereep-sp/virtual-memory.json26
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/cache.json352
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/counter.json7
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/floating-point.json28
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/frontend.json3
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/memory.json68
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/other.json58
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/pipeline.json111
-rw-r--r--tools/perf/pmu-events/arch/x86/westmereex/virtual-memory.json29
-rw-r--r--tools/perf/pmu-events/empty-pmu-events.c1061
-rwxr-xr-xtools/perf/pmu-events/jevents.py223
-rwxr-xr-xtools/perf/pmu-events/models.py73
-rw-r--r--tools/perf/pmu-events/pmu-events.h41
-rwxr-xr-xtools/perf/python/counting.py36
-rwxr-xr-xtools/perf/python/ilist.py495
-rwxr-xr-xtools/perf/python/tracepoint.py29
-rw-r--r--tools/perf/scripts/Build30
-rw-r--r--tools/perf/scripts/perl/Perf-Trace-Util/Build4
-rw-r--r--tools/perf/scripts/python/Perf-Trace-Util/Build2
-rw-r--r--tools/perf/scripts/python/Perf-Trace-Util/Context.c31
-rwxr-xr-xtools/perf/scripts/python/arm-cs-trace-disasm.py152
-rwxr-xr-xtools/perf/scripts/python/bin/flamegraph-report2
-rwxr-xr-xtools/perf/scripts/python/exported-sql-viewer.py5
-rwxr-xr-xtools/perf/scripts/python/flamegraph.py82
-rw-r--r--tools/perf/scripts/python/mem-phys-addr.py177
-rw-r--r--tools/perf/scripts/python/netdev-times.py3
-rwxr-xr-xtools/perf/scripts/python/parallel-perf.py989
-rw-r--r--tools/perf/tests/Build176
-rw-r--r--tools/perf/tests/attr.c218
-rw-r--r--tools/perf/tests/backward-ring-buffer.c1
-rw-r--r--tools/perf/tests/bitmap.c13
-rw-r--r--tools/perf/tests/bp_account.c5
-rw-r--r--tools/perf/tests/bp_signal.c3
-rw-r--r--tools/perf/tests/bp_signal_overflow.c3
-rw-r--r--tools/perf/tests/builtin-test-list.c207
-rw-r--r--tools/perf/tests/builtin-test-list.h12
-rw-r--r--tools/perf/tests/builtin-test.c763
-rw-r--r--tools/perf/tests/code-reading.c254
-rw-r--r--tools/perf/tests/config-fragments/config3
-rw-r--r--tools/perf/tests/cpumap.c68
-rw-r--r--tools/perf/tests/demangle-java-test.c25
-rw-r--r--tools/perf/tests/demangle-ocaml-test.c7
-rw-r--r--tools/perf/tests/demangle-rust-v0-test.c74
-rw-r--r--tools/perf/tests/dlfilter-test.c53
-rw-r--r--tools/perf/tests/dso-data.c81
-rw-r--r--tools/perf/tests/dwarf-unwind.c47
-rw-r--r--tools/perf/tests/event-times.c13
-rw-r--r--tools/perf/tests/event_groups.c31
-rw-r--r--tools/perf/tests/event_update.c12
-rw-r--r--tools/perf/tests/evsel-roundtrip-name.c4
-rw-r--r--tools/perf/tests/evsel-tp-sched.c42
-rw-r--r--tools/perf/tests/expand-cgroup.c27
-rw-r--r--tools/perf/tests/expr.c24
-rw-r--r--tools/perf/tests/hists_common.c6
-rw-r--r--tools/perf/tests/hists_cumulate.c12
-rw-r--r--tools/perf/tests/hists_filter.c8
-rw-r--r--tools/perf/tests/hists_link.c8
-rw-r--r--tools/perf/tests/hists_output.c12
-rw-r--r--tools/perf/tests/hwmon_pmu.c356
-rw-r--r--tools/perf/tests/keep-tracking.c2
-rw-r--r--tools/perf/tests/make33
-rw-r--r--tools/perf/tests/maps.c7
-rw-r--r--tools/perf/tests/mem.c11
-rw-r--r--tools/perf/tests/mmap-basic.c294
-rw-r--r--tools/perf/tests/mmap-thread-lookup.c10
-rw-r--r--tools/perf/tests/openat-syscall-all-cpus.c2
-rw-r--r--tools/perf/tests/openat-syscall-tp-fields.c24
-rw-r--r--tools/perf/tests/openat-syscall.c2
-rw-r--r--tools/perf/tests/parse-events.c198
-rw-r--r--tools/perf/tests/parse-metric.c16
-rw-r--r--tools/perf/tests/parse-no-sample-id-all.c6
-rw-r--r--tools/perf/tests/pe-file-parsing.c6
-rw-r--r--tools/perf/tests/perf-record.c43
-rwxr-xr-xtools/perf/tests/perf-targz-src-pkg2
-rw-r--r--tools/perf/tests/perf-time-to-tsc.c4
-rw-r--r--tools/perf/tests/pmu-events.c197
-rw-r--r--tools/perf/tests/pmu.c663
-rw-r--r--tools/perf/tests/python-use.c27
-rw-r--r--tools/perf/tests/sample-parsing.c76
-rw-r--r--tools/perf/tests/sdt.c6
-rwxr-xr-xtools/perf/tests/shell/amd-ibs-swfilt.sh92
-rwxr-xr-xtools/perf/tests/shell/annotate.sh113
-rwxr-xr-xtools/perf/tests/shell/attr.sh22
-rw-r--r--tools/perf/tests/shell/attr/README (renamed from tools/perf/tests/attr/README)2
-rw-r--r--tools/perf/tests/shell/attr/base-record (renamed from tools/perf/tests/attr/base-record)0
-rw-r--r--tools/perf/tests/shell/attr/base-record-spe (renamed from tools/perf/tests/attr/base-record-spe)0
-rw-r--r--tools/perf/tests/shell/attr/base-stat (renamed from tools/perf/tests/attr/base-stat)0
-rw-r--r--tools/perf/tests/shell/attr/system-wide-dummy (renamed from tools/perf/tests/attr/system-wide-dummy)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-C0 (renamed from tools/perf/tests/attr/test-record-C0)2
-rw-r--r--tools/perf/tests/shell/attr/test-record-basic (renamed from tools/perf/tests/attr/test-record-basic)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-any (renamed from tools/perf/tests/attr/test-record-branch-any)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-filter-any (renamed from tools/perf/tests/attr/test-record-branch-filter-any)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-filter-any_call (renamed from tools/perf/tests/attr/test-record-branch-filter-any_call)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-filter-any_ret (renamed from tools/perf/tests/attr/test-record-branch-filter-any_ret)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-filter-hv (renamed from tools/perf/tests/attr/test-record-branch-filter-hv)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-filter-ind_call (renamed from tools/perf/tests/attr/test-record-branch-filter-ind_call)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-filter-k (renamed from tools/perf/tests/attr/test-record-branch-filter-k)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-branch-filter-u (renamed from tools/perf/tests/attr/test-record-branch-filter-u)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-count (renamed from tools/perf/tests/attr/test-record-count)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-data (renamed from tools/perf/tests/attr/test-record-data)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-dummy-C0 (renamed from tools/perf/tests/attr/test-record-dummy-C0)4
-rw-r--r--tools/perf/tests/shell/attr/test-record-freq (renamed from tools/perf/tests/attr/test-record-freq)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-graph-default (renamed from tools/perf/tests/attr/test-record-graph-default)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-graph-default-aarch64 (renamed from tools/perf/tests/attr/test-record-graph-default-aarch64)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-graph-dwarf (renamed from tools/perf/tests/attr/test-record-graph-dwarf)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-graph-fp (renamed from tools/perf/tests/attr/test-record-graph-fp)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-graph-fp-aarch64 (renamed from tools/perf/tests/attr/test-record-graph-fp-aarch64)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-group-sampling (renamed from tools/perf/tests/attr/test-record-group-sampling)3
-rw-r--r--tools/perf/tests/shell/attr/test-record-group-sampling150
-rw-r--r--tools/perf/tests/shell/attr/test-record-group-sampling261
-rw-r--r--tools/perf/tests/shell/attr/test-record-group1 (renamed from tools/perf/tests/attr/test-record-group1)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-group2 (renamed from tools/perf/tests/attr/test-record-group2)1
-rw-r--r--tools/perf/tests/shell/attr/test-record-group331
-rw-r--r--tools/perf/tests/shell/attr/test-record-no-buffering (renamed from tools/perf/tests/attr/test-record-no-buffering)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-no-inherit (renamed from tools/perf/tests/attr/test-record-no-inherit)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-no-samples (renamed from tools/perf/tests/attr/test-record-no-samples)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-period (renamed from tools/perf/tests/attr/test-record-period)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-pfm-period (renamed from tools/perf/tests/attr/test-record-pfm-period)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-raw (renamed from tools/perf/tests/attr/test-record-raw)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-spe-period (renamed from tools/perf/tests/attr/test-record-spe-period)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-spe-period-term (renamed from tools/perf/tests/attr/test-record-spe-period-term)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-spe-physical-address (renamed from tools/perf/tests/attr/test-record-spe-physical-address)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-user-regs-no-sve-aarch64 (renamed from tools/perf/tests/attr/test-record-user-regs-no-sve-aarch64)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-user-regs-old-sve-aarch64 (renamed from tools/perf/tests/attr/test-record-user-regs-old-sve-aarch64)0
-rw-r--r--tools/perf/tests/shell/attr/test-record-user-regs-sve-aarch64 (renamed from tools/perf/tests/attr/test-record-user-regs-sve-aarch64)0
-rw-r--r--tools/perf/tests/shell/attr/test-stat-C0 (renamed from tools/perf/tests/attr/test-stat-C0)0
-rw-r--r--tools/perf/tests/shell/attr/test-stat-basic (renamed from tools/perf/tests/attr/test-stat-basic)0
-rw-r--r--tools/perf/tests/shell/attr/test-stat-default (renamed from tools/perf/tests/attr/test-stat-default)97
-rw-r--r--tools/perf/tests/shell/attr/test-stat-detailed-1 (renamed from tools/perf/tests/attr/test-stat-detailed-1)113
-rw-r--r--tools/perf/tests/shell/attr/test-stat-detailed-2 (renamed from tools/perf/tests/attr/test-stat-detailed-2)137
-rw-r--r--tools/perf/tests/shell/attr/test-stat-detailed-3 (renamed from tools/perf/tests/attr/test-stat-detailed-3)145
-rw-r--r--tools/perf/tests/shell/attr/test-stat-group1 (renamed from tools/perf/tests/attr/test-stat-group1)0
-rw-r--r--tools/perf/tests/shell/attr/test-stat-no-inherit (renamed from tools/perf/tests/attr/test-stat-no-inherit)0
-rwxr-xr-xtools/perf/tests/shell/base_probe/test_adding_blacklisted.sh114
-rwxr-xr-xtools/perf/tests/shell/base_probe/test_adding_kernel.sh346
-rwxr-xr-xtools/perf/tests/shell/base_probe/test_basic.sh93
-rwxr-xr-xtools/perf/tests/shell/base_probe/test_invalid_options.sh86
-rwxr-xr-xtools/perf/tests/shell/base_probe/test_line_semantics.sh59
-rwxr-xr-xtools/perf/tests/shell/base_report/setup.sh52
-rw-r--r--tools/perf/tests/shell/base_report/stderr-whitelist.txt5
-rwxr-xr-xtools/perf/tests/shell/base_report/test_basic.sh285
-rwxr-xr-xtools/perf/tests/shell/buildid.sh2
-rwxr-xr-xtools/perf/tests/shell/common/check_all_lines_matched.pl39
-rwxr-xr-xtools/perf/tests/shell/common/check_all_patterns_found.pl34
-rwxr-xr-xtools/perf/tests/shell/common/check_errors_whitelisted.pl51
-rwxr-xr-xtools/perf/tests/shell/common/check_no_patterns_found.pl34
-rw-r--r--tools/perf/tests/shell/common/init.sh143
-rw-r--r--tools/perf/tests/shell/common/patterns.sh268
-rw-r--r--tools/perf/tests/shell/common/settings.sh105
-rw-r--r--tools/perf/tests/shell/coresight/Makefile2
-rwxr-xr-xtools/perf/tests/shell/coresight/asm_pure_loop.sh4
-rw-r--r--tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S2
-rw-r--r--tools/perf/tests/shell/coresight/memcpy_thread/memcpy_thread.c2
-rwxr-xr-xtools/perf/tests/shell/coresight/memcpy_thread_16k_10.sh4
-rw-r--r--tools/perf/tests/shell/coresight/thread_loop/thread_loop.c4
-rwxr-xr-xtools/perf/tests/shell/coresight/thread_loop_check_tid_10.sh4
-rwxr-xr-xtools/perf/tests/shell/coresight/thread_loop_check_tid_2.sh4
-rw-r--r--tools/perf/tests/shell/coresight/unroll_loop_thread/unroll_loop_thread.c4
-rwxr-xr-xtools/perf/tests/shell/coresight/unroll_loop_thread_10.sh4
-rwxr-xr-xtools/perf/tests/shell/diff.sh14
-rwxr-xr-xtools/perf/tests/shell/drm_pmu.sh78
-rwxr-xr-xtools/perf/tests/shell/ftrace.sh86
-rwxr-xr-xtools/perf/tests/shell/header.sh74
-rw-r--r--tools/perf/tests/shell/lib/attr.py (renamed from tools/perf/tests/attr.py)26
-rw-r--r--tools/perf/tests/shell/lib/coresight.sh2
-rw-r--r--tools/perf/tests/shell/lib/perf_has_symbol.sh4
-rw-r--r--tools/perf/tests/shell/lib/perf_json_output_lint.py30
-rw-r--r--tools/perf/tests/shell/lib/perf_metric_validation.py251
-rw-r--r--tools/perf/tests/shell/lib/probe_vfs_getname.sh29
-rw-r--r--tools/perf/tests/shell/lib/setup_python.sh2
-rw-r--r--tools/perf/tests/shell/lib/stat_output.sh27
-rw-r--r--tools/perf/tests/shell/lib/waiting.sh2
-rwxr-xr-xtools/perf/tests/shell/list.sh7
-rwxr-xr-xtools/perf/tests/shell/lock_contention.sh35
-rwxr-xr-xtools/perf/tests/shell/perf-report-hierarchy.sh43
-rwxr-xr-xtools/perf/tests/shell/perftool-testsuite_probe.sh24
-rwxr-xr-xtools/perf/tests/shell/perftool-testsuite_report.sh23
-rwxr-xr-xtools/perf/tests/shell/pipe_test.sh130
-rwxr-xr-xtools/perf/tests/shell/probe_vfs_getname.sh13
-rwxr-xr-xtools/perf/tests/shell/python-use.sh36
-rwxr-xr-xtools/perf/tests/shell/record+probe_libc_inet_pton.sh53
-rwxr-xr-xtools/perf/tests/shell/record+script_probe_vfs_getname.sh18
-rwxr-xr-xtools/perf/tests/shell/record+zstd_comp_decomp.sh2
-rwxr-xr-xtools/perf/tests/shell/record.sh272
-rwxr-xr-xtools/perf/tests/shell/record_bpf_filter.sh92
-rwxr-xr-xtools/perf/tests/shell/record_lbr.sh176
-rwxr-xr-xtools/perf/tests/shell/record_offcpu.sh75
-rwxr-xr-xtools/perf/tests/shell/record_sideband.sh2
-rwxr-xr-xtools/perf/tests/shell/sched.sh116
-rwxr-xr-xtools/perf/tests/shell/script.sh31
-rwxr-xr-xtools/perf/tests/shell/stat+csv_output.sh4
-rwxr-xr-xtools/perf/tests/shell/stat+csv_summary.sh2
-rwxr-xr-xtools/perf/tests/shell/stat+event_uniquifying.sh66
-rwxr-xr-xtools/perf/tests/shell/stat+json_output.sh29
-rwxr-xr-xtools/perf/tests/shell/stat+shadow_stat.sh2
-rwxr-xr-xtools/perf/tests/shell/stat+std_output.sh16
-rwxr-xr-xtools/perf/tests/shell/stat.sh100
-rwxr-xr-xtools/perf/tests/shell/stat_all_metricgroups.sh36
-rwxr-xr-xtools/perf/tests/shell/stat_all_metrics.sh103
-rwxr-xr-xtools/perf/tests/shell/stat_all_pfm.sh2
-rwxr-xr-xtools/perf/tests/shell/stat_all_pmu.sh78
-rwxr-xr-xtools/perf/tests/shell/stat_bpf_counters.sh85
-rwxr-xr-xtools/perf/tests/shell/stat_bpf_counters_cgrp.sh15
-rwxr-xr-xtools/perf/tests/shell/stat_metrics_values.sh15
-rwxr-xr-xtools/perf/tests/shell/test_arm_callgraph_fp.sh39
-rwxr-xr-xtools/perf/tests/shell/test_arm_coresight.sh8
-rwxr-xr-xtools/perf/tests/shell/test_arm_coresight_disasm.sh65
-rwxr-xr-xtools/perf/tests/shell/test_arm_spe.sh36
-rwxr-xr-xtools/perf/tests/shell/test_arm_spe_fork.sh4
-rwxr-xr-xtools/perf/tests/shell/test_bpf_metadata.sh76
-rwxr-xr-xtools/perf/tests/shell/test_brstack.sh168
-rwxr-xr-xtools/perf/tests/shell/test_data_symbol.sh63
-rwxr-xr-xtools/perf/tests/shell/test_intel_pt.sh39
-rwxr-xr-xtools/perf/tests/shell/test_stat_intel_tpebs.sh85
-rwxr-xr-xtools/perf/tests/shell/test_task_analyzer.sh9
-rwxr-xr-xtools/perf/tests/shell/test_uprobe_from_different_cu.sh8
-rwxr-xr-xtools/perf/tests/shell/trace+probe_vfs_getname.sh16
-rwxr-xr-xtools/perf/tests/shell/trace_btf_enum.sh79
-rwxr-xr-xtools/perf/tests/shell/trace_btf_general.sh94
-rwxr-xr-xtools/perf/tests/shell/trace_exit_race.sh52
-rwxr-xr-xtools/perf/tests/shell/trace_record_replay.sh21
-rwxr-xr-xtools/perf/tests/shell/trace_summary.sh77
-rw-r--r--tools/perf/tests/sigtrap.c20
-rw-r--r--tools/perf/tests/stat.c22
-rw-r--r--tools/perf/tests/subcmd-help.c108
-rw-r--r--tools/perf/tests/sw-clock.c3
-rw-r--r--tools/perf/tests/switch-tracking.c20
-rw-r--r--tools/perf/tests/symbols.c82
-rw-r--r--tools/perf/tests/task-exit.c10
-rw-r--r--tools/perf/tests/tests-scripts.c294
-rw-r--r--tools/perf/tests/tests-scripts.h9
-rw-r--r--tools/perf/tests/tests.h52
-rw-r--r--tools/perf/tests/thread-map.c4
-rw-r--r--tools/perf/tests/thread-maps-share.c8
-rw-r--r--tools/perf/tests/tool_pmu.c111
-rw-r--r--tools/perf/tests/topology.c67
-rw-r--r--tools/perf/tests/util.c45
-rw-r--r--tools/perf/tests/vmlinux-kallsyms.c21
-rw-r--r--tools/perf/tests/workloads/Build15
-rw-r--r--tools/perf/tests/workloads/datasym.c46
-rw-r--r--tools/perf/tests/workloads/landlock.c66
-rw-r--r--tools/perf/tests/workloads/leafloop.c20
-rw-r--r--tools/perf/tests/workloads/noploop.c2
-rw-r--r--tools/perf/tests/workloads/traploop.c31
-rw-r--r--tools/perf/tests/wp.c5
-rw-r--r--tools/perf/trace/beauty/Build15
-rw-r--r--tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h (renamed from tools/arch/x86/include/asm/irq_vectors.h)14
-rw-r--r--tools/perf/trace/beauty/arch/x86/include/uapi/asm/prctl.h (renamed from tools/arch/x86/include/uapi/asm/prctl.h)0
-rwxr-xr-xtools/perf/trace/beauty/arch_errno_names.sh11
-rw-r--r--tools/perf/trace/beauty/beauty.h18
-rw-r--r--tools/perf/trace/beauty/clone.c46
-rwxr-xr-xtools/perf/trace/beauty/clone.sh17
-rw-r--r--tools/perf/trace/beauty/fcntl.c2
-rw-r--r--tools/perf/trace/beauty/flock.c2
-rw-r--r--tools/perf/trace/beauty/fs_at_flags.c58
-rwxr-xr-xtools/perf/trace/beauty/fs_at_flags.sh27
-rwxr-xr-xtools/perf/trace/beauty/fsconfig.sh6
-rw-r--r--tools/perf/trace/beauty/fsmount.c9
-rwxr-xr-xtools/perf/trace/beauty/fsmount.sh6
-rwxr-xr-xtools/perf/trace/beauty/fspick.sh6
-rw-r--r--tools/perf/trace/beauty/include/linux/socket.h20
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/fcntl.h180
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/fs.h655
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/mount.h235
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/prctl.h379
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/sched.h (renamed from tools/include/uapi/linux/sched.h)1
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/stat.h260
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/usbdevice_fs.h (renamed from tools/include/uapi/linux/usbdevice_fs.h)0
-rw-r--r--tools/perf/trace/beauty/include/uapi/linux/vhost.h (renamed from tools/include/uapi/linux/vhost.h)59
-rw-r--r--tools/perf/trace/beauty/include/uapi/sound/asound.h (renamed from tools/include/uapi/sound/asound.h)36
-rwxr-xr-xtools/perf/trace/beauty/mount_flags.sh6
-rwxr-xr-xtools/perf/trace/beauty/move_mount_flags.sh6
-rw-r--r--tools/perf/trace/beauty/msg_flags.c4
-rw-r--r--tools/perf/trace/beauty/perf_event_open.c6
-rw-r--r--tools/perf/trace/beauty/prctl.c2
-rwxr-xr-xtools/perf/trace/beauty/prctl_option.sh6
-rwxr-xr-xtools/perf/trace/beauty/rename_flags.sh2
-rwxr-xr-xtools/perf/trace/beauty/sndrv_ctl_ioctl.sh4
-rwxr-xr-xtools/perf/trace/beauty/sndrv_pcm_ioctl.sh4
-rw-r--r--tools/perf/trace/beauty/sockaddr.c2
-rw-r--r--tools/perf/trace/beauty/statx.c67
-rwxr-xr-xtools/perf/trace/beauty/statx_mask.sh23
-rw-r--r--tools/perf/trace/beauty/sync_file_range.c11
-rwxr-xr-xtools/perf/trace/beauty/sync_file_range.sh2
-rwxr-xr-xtools/perf/trace/beauty/syscalltbl.sh274
-rw-r--r--tools/perf/trace/beauty/timespec.c2
-rwxr-xr-xtools/perf/trace/beauty/tracepoints/x86_irq_vectors.sh6
-rwxr-xr-xtools/perf/trace/beauty/usbdevfs_ioctl.sh6
-rwxr-xr-xtools/perf/trace/beauty/vhost_virtio_ioctl.sh6
-rwxr-xr-xtools/perf/trace/beauty/x86_arch_prctl.sh4
-rw-r--r--tools/perf/ui/Build19
-rw-r--r--tools/perf/ui/browser.c16
-rw-r--r--tools/perf/ui/browser.h7
-rw-r--r--tools/perf/ui/browsers/Build13
-rw-r--r--tools/perf/ui/browsers/annotate-data.c614
-rw-r--r--tools/perf/ui/browsers/annotate.c318
-rw-r--r--tools/perf/ui/browsers/header.c5
-rw-r--r--tools/perf/ui/browsers/hists.c195
-rw-r--r--tools/perf/ui/browsers/map.c8
-rw-r--r--tools/perf/ui/browsers/res_sample.c2
-rw-r--r--tools/perf/ui/browsers/scripts.c179
-rw-r--r--tools/perf/ui/gtk/annotate.c31
-rw-r--r--tools/perf/ui/hist.c608
-rw-r--r--tools/perf/ui/keysyms.c44
-rw-r--r--tools/perf/ui/keysyms.h2
-rw-r--r--tools/perf/ui/libslang.h4
-rw-r--r--tools/perf/ui/stdio/hist.c70
-rw-r--r--tools/perf/ui/tui/Build8
-rw-r--r--tools/perf/ui/tui/setup.c2
-rw-r--r--tools/perf/util/Build460
-rw-r--r--tools/perf/util/addr2line.c439
-rw-r--r--tools/perf/util/addr2line.h20
-rw-r--r--tools/perf/util/addr_location.c1
-rw-r--r--tools/perf/util/addr_location.h6
-rw-r--r--tools/perf/util/affinity.c18
-rw-r--r--tools/perf/util/affinity.h2
-rw-r--r--tools/perf/util/amd-sample-raw.c81
-rw-r--r--tools/perf/util/annotate-data.c1631
-rw-r--r--tools/perf/util/annotate-data.h180
-rw-r--r--tools/perf/util/annotate.c3165
-rw-r--r--tools/perf/util/annotate.h320
-rw-r--r--tools/perf/util/arm-spe-decoder/Build2
-rw-r--r--tools/perf/util/arm-spe-decoder/arm-spe-decoder.c48
-rw-r--r--tools/perf/util/arm-spe-decoder/arm-spe-decoder.h84
-rw-r--r--tools/perf/util/arm-spe-decoder/arm-spe-pkt-decoder.c51
-rw-r--r--tools/perf/util/arm-spe-decoder/arm-spe-pkt-decoder.h19
-rw-r--r--tools/perf/util/arm-spe.c896
-rw-r--r--tools/perf/util/arm-spe.h40
-rw-r--r--tools/perf/util/arm64-frame-pointer-unwind-support.c29
-rw-r--r--tools/perf/util/auxtrace.c137
-rw-r--r--tools/perf/util/auxtrace.h36
-rw-r--r--tools/perf/util/block-info.c82
-rw-r--r--tools/perf/util/block-info.h23
-rw-r--r--tools/perf/util/bpf-event.c440
-rw-r--r--tools/perf/util/bpf-event.h13
-rw-r--r--tools/perf/util/bpf-filter.c698
-rw-r--r--tools/perf/util/bpf-filter.h27
-rw-r--r--tools/perf/util/bpf-filter.l94
-rw-r--r--tools/perf/util/bpf-filter.y35
-rw-r--r--tools/perf/util/bpf-prologue.h37
-rw-r--r--tools/perf/util/bpf-trace-summary.c464
-rw-r--r--tools/perf/util/bpf-utils.c61
-rw-r--r--tools/perf/util/bpf-utils.h10
-rw-r--r--tools/perf/util/bpf_counter.c123
-rw-r--r--tools/perf/util/bpf_counter.h74
-rw-r--r--tools/perf/util/bpf_counter_cgroup.c15
-rw-r--r--tools/perf/util/bpf_ftrace.c111
-rw-r--r--tools/perf/util/bpf_kwork.c27
-rw-r--r--tools/perf/util/bpf_kwork_top.c21
-rw-r--r--tools/perf/util/bpf_lock_contention.c499
-rw-r--r--tools/perf/util/bpf_map.c3
-rw-r--r--tools/perf/util/bpf_off_cpu.c141
-rw-r--r--tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c266
-rw-r--r--tools/perf/util/bpf_skel/bench_uprobe.bpf.c16
-rw-r--r--tools/perf/util/bpf_skel/bperf_cgroup.bpf.c2
-rw-r--r--tools/perf/util/bpf_skel/bperf_follower.bpf.c98
-rw-r--r--tools/perf/util/bpf_skel/bperf_u.h5
-rw-r--r--tools/perf/util/bpf_skel/func_latency.bpf.c145
-rw-r--r--tools/perf/util/bpf_skel/kwork_top.bpf.c4
-rw-r--r--tools/perf/util/bpf_skel/kwork_trace.bpf.c7
-rw-r--r--tools/perf/util/bpf_skel/lock_contention.bpf.c517
-rw-r--r--tools/perf/util/bpf_skel/lock_data.h32
-rw-r--r--tools/perf/util/bpf_skel/off_cpu.bpf.c107
-rw-r--r--tools/perf/util/bpf_skel/perf_version.h17
-rw-r--r--tools/perf/util/bpf_skel/sample-filter.h51
-rw-r--r--tools/perf/util/bpf_skel/sample_filter.bpf.c178
-rw-r--r--tools/perf/util/bpf_skel/syscall_summary.bpf.c153
-rw-r--r--tools/perf/util/bpf_skel/syscall_summary.h27
-rw-r--r--tools/perf/util/bpf_skel/vmlinux/vmlinux.h31
-rw-r--r--tools/perf/util/bpf_trace_augment.c143
-rw-r--r--tools/perf/util/branch.c2
-rw-r--r--tools/perf/util/branch.h4
-rw-r--r--tools/perf/util/btf.c27
-rw-r--r--tools/perf/util/btf.h10
-rw-r--r--tools/perf/util/build-id.c236
-rw-r--r--tools/perf/util/build-id.h18
-rw-r--r--tools/perf/util/callchain.c53
-rw-r--r--tools/perf/util/callchain.h6
-rw-r--r--tools/perf/util/cap.c60
-rw-r--r--tools/perf/util/cap.h26
-rw-r--r--tools/perf/util/capstone.c471
-rw-r--r--tools/perf/util/capstone.h24
-rw-r--r--tools/perf/util/cgroup.c29
-rw-r--r--tools/perf/util/cgroup.h3
-rw-r--r--tools/perf/util/color.c28
-rw-r--r--tools/perf/util/color.h16
-rw-r--r--tools/perf/util/color_config.c11
-rw-r--r--tools/perf/util/comm.c222
-rw-r--r--tools/perf/util/compress.h20
-rw-r--r--tools/perf/util/config.c57
-rw-r--r--tools/perf/util/config.h3
-rw-r--r--tools/perf/util/cpumap.c119
-rw-r--r--tools/perf/util/cpumap.h21
-rw-r--r--tools/perf/util/cs-etm-decoder/Build2
-rw-r--r--tools/perf/util/cs-etm-decoder/cs-etm-decoder.c43
-rw-r--r--tools/perf/util/cs-etm-decoder/cs-etm-decoder.h2
-rw-r--r--tools/perf/util/cs-etm.c734
-rw-r--r--tools/perf/util/cs-etm.h12
-rw-r--r--tools/perf/util/data-convert-bt.c62
-rw-r--r--tools/perf/util/data-convert-json.c95
-rw-r--r--tools/perf/util/data.c37
-rw-r--r--tools/perf/util/data.h7
-rw-r--r--tools/perf/util/db-export.c17
-rw-r--r--tools/perf/util/debug.c85
-rw-r--r--tools/perf/util/debug.h3
-rw-r--r--tools/perf/util/debuginfo.c16
-rw-r--r--tools/perf/util/debuginfo.h8
-rw-r--r--tools/perf/util/demangle-cxx.h2
-rw-r--r--tools/perf/util/demangle-rust-v0.c2042
-rw-r--r--tools/perf/util/demangle-rust-v0.h88
-rw-r--r--tools/perf/util/demangle-rust.c269
-rw-r--r--tools/perf/util/demangle-rust.h8
-rw-r--r--tools/perf/util/disasm.c1747
-rw-r--r--tools/perf/util/disasm.h129
-rw-r--r--tools/perf/util/dlfilter.c17
-rw-r--r--tools/perf/util/drm_pmu.c688
-rw-r--r--tools/perf/util/drm_pmu.h39
-rw-r--r--tools/perf/util/dso.c792
-rw-r--r--tools/perf/util/dso.h711
-rw-r--r--tools/perf/util/dsos.c557
-rw-r--r--tools/perf/util/dsos.h42
-rw-r--r--tools/perf/util/dump-insn.c2
-rw-r--r--tools/perf/util/dump-insn.h3
-rw-r--r--tools/perf/util/dwarf-aux.c558
-rw-r--r--tools/perf/util/dwarf-aux.h66
-rw-r--r--tools/perf/util/dwarf-regs-csky.c (renamed from tools/perf/arch/csky/util/dwarf-regs.c)  19
-rw-r--r--tools/perf/util/dwarf-regs-powerpc.c61
-rw-r--r--tools/perf/util/dwarf-regs-x86.c50
-rw-r--r--tools/perf/util/dwarf-regs.c38
-rw-r--r--tools/perf/util/env.c228
-rw-r--r--tools/perf/util/env.h25
-rw-r--r--tools/perf/util/event.c106
-rw-r--r--tools/perf/util/event.h77
-rw-r--r--tools/perf/util/events_stats.h20
-rw-r--r--tools/perf/util/evlist.c313
-rw-r--r--tools/perf/util/evlist.h38
-rw-r--r--tools/perf/util/evsel.c1534
-rw-r--r--tools/perf/util/evsel.h127
-rw-r--r--tools/perf/util/evsel_config.h3
-rw-r--r--tools/perf/util/evsel_fprintf.c8
-rw-r--r--tools/perf/util/expr.c134
-rw-r--r--tools/perf/util/expr.l9
-rw-r--r--tools/perf/util/fncache.c69
-rw-r--r--tools/perf/util/fncache.h1
-rw-r--r--tools/perf/util/ftrace.h19
-rw-r--r--tools/perf/util/genelf.c96
-rw-r--r--tools/perf/util/genelf.h5
-rwxr-xr-xtools/perf/util/generate-cmdlist.sh4
-rw-r--r--tools/perf/util/get_current_dir_name.c18
-rw-r--r--tools/perf/util/get_current_dir_name.h8
-rw-r--r--tools/perf/util/hashmap.h20
-rw-r--r--tools/perf/util/header.c540
-rw-r--r--tools/perf/util/header.h49
-rw-r--r--tools/perf/util/help-unknown-cmd.c51
-rw-r--r--tools/perf/util/hisi-ptt-decoder/Build2
-rw-r--r--tools/perf/util/hisi-ptt.c11
-rw-r--r--tools/perf/util/hist.c438
-rw-r--r--tools/perf/util/hist.h303
-rw-r--r--tools/perf/util/hwmon_pmu.c836
-rw-r--r--tools/perf/util/hwmon_pmu.h167
-rw-r--r--tools/perf/util/include/dwarf-regs.h127
-rw-r--r--tools/perf/util/include/linux/linkage.h6
-rw-r--r--tools/perf/util/intel-bts.c45
-rw-r--r--tools/perf/util/intel-pt-decoder/Build20
-rw-r--r--tools/perf/util/intel-pt-decoder/intel-pt-decoder.c2
-rw-r--r--tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.c19
-rw-r--r--tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c2
-rw-r--r--tools/perf/util/intel-pt.c399
-rw-r--r--tools/perf/util/intel-tpebs.c661
-rw-r--r--tools/perf/util/intel-tpebs.h25
-rw-r--r--tools/perf/util/jit.h3
-rw-r--r--tools/perf/util/jitdump.c54
-rw-r--r--tools/perf/util/kvm-stat.c70
-rw-r--r--tools/perf/util/kvm-stat.h13
-rw-r--r--tools/perf/util/kwork.h7
-rw-r--r--tools/perf/util/libbfd.c600
-rw-r--r--tools/perf/util/libbfd.h83
-rw-r--r--tools/perf/util/llvm-c-helpers.cpp196
-rw-r--r--tools/perf/util/llvm-c-helpers.h60
-rw-r--r--tools/perf/util/llvm.c273
-rw-r--r--tools/perf/util/llvm.h21
-rw-r--r--tools/perf/util/lock-contention.c143
-rw-r--r--tools/perf/util/lock-contention.h36
-rw-r--r--tools/perf/util/lzma.c31
-rw-r--r--tools/perf/util/machine.c885
-rw-r--r--tools/perf/util/machine.h83
-rw-r--r--tools/perf/util/map.c151
-rw-r--r--tools/perf/util/map.h34
-rw-r--r--tools/perf/util/map_symbol.c18
-rw-r--r--tools/perf/util/map_symbol.h3
-rw-r--r--tools/perf/util/maps.c1422
-rw-r--r--tools/perf/util/maps.h65
-rw-r--r--tools/perf/util/mem-events.c494
-rw-r--r--tools/perf/util/mem-events.h106
-rw-r--r--tools/perf/util/mem-info.c48
-rw-r--r--tools/perf/util/mem-info.h55
-rw-r--r--tools/perf/util/metricgroup.c443
-rw-r--r--tools/perf/util/metricgroup.h11
-rw-r--r--tools/perf/util/mmap.c19
-rw-r--r--tools/perf/util/mmap.h3
-rw-r--r--tools/perf/util/mutex.h19
-rw-r--r--tools/perf/util/namespaces.c14
-rw-r--r--tools/perf/util/namespaces.h3
-rw-r--r--tools/perf/util/off_cpu.h3
-rw-r--r--tools/perf/util/parse-events.c1494
-rw-r--r--tools/perf/util/parse-events.h105
-rw-r--r--tools/perf/util/parse-events.l236
-rw-r--r--tools/perf/util/parse-events.y250
-rw-r--r--tools/perf/util/parse-regs-options.c8
-rw-r--r--tools/perf/util/path.c8
-rw-r--r--tools/perf/util/path.h2
-rw-r--r--tools/perf/util/perf-regs-arch/Build18
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_aarch64.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_arm.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_csky.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_loongarch.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_mips.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_powerpc.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_riscv.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_s390.c4
-rw-r--r--tools/perf/util/perf-regs-arch/perf_regs_x86.c4
-rw-r--r--tools/perf/util/perf_event_attr_fprintf.c129
-rw-r--r--tools/perf/util/perf_regs.c11
-rw-r--r--tools/perf/util/perf_regs.h34
-rw-r--r--tools/perf/util/pfm.c10
-rw-r--r--tools/perf/util/pmu.c1089
-rw-r--r--tools/perf/util/pmu.h71
-rw-r--r--tools/perf/util/pmus.c515
-rw-r--r--tools/perf/util/pmus.h21
-rw-r--r--tools/perf/util/powerpc-vpadtl.c734
-rw-r--r--tools/perf/util/powerpc-vpadtl.h23
-rw-r--r--tools/perf/util/print-events.c310
-rw-r--r--tools/perf/util/print-events.h8
-rw-r--r--tools/perf/util/print_insn.c67
-rw-r--r--tools/perf/util/print_insn.h22
-rw-r--r--tools/perf/util/probe-event.c275
-rw-r--r--tools/perf/util/probe-event.h4
-rw-r--r--tools/perf/util/probe-file.c23
-rw-r--r--tools/perf/util/probe-file.h1
-rw-r--r--tools/perf/util/probe-finder.c87
-rw-r--r--tools/perf/util/probe-finder.h19
-rw-r--r--tools/perf/util/pstack.c14
-rw-r--r--tools/perf/util/pstack.h1
-rw-r--r--tools/perf/util/python-ext-sources  52
-rw-r--r--tools/perf/util/python.c1378
-rw-r--r--tools/perf/util/rb_resort.h151
-rw-r--r--tools/perf/util/record.c2
-rw-r--r--tools/perf/util/record.h2
-rw-r--r--tools/perf/util/rwsem.c4
-rw-r--r--tools/perf/util/rwsem.h10
-rw-r--r--tools/perf/util/s390-cpumsf.c21
-rw-r--r--tools/perf/util/s390-sample-raw.c8
-rw-r--r--tools/perf/util/sample-raw.c7
-rw-r--r--tools/perf/util/sample-raw.h2
-rw-r--r--tools/perf/util/sample.c43
-rw-r--r--tools/perf/util/sample.h17
-rw-r--r--tools/perf/util/scripting-engines/Build6
-rw-r--r--tools/perf/util/scripting-engines/trace-event-perl.c11
-rw-r--r--tools/perf/util/scripting-engines/trace-event-python.c178
-rw-r--r--tools/perf/util/session.c626
-rw-r--r--tools/perf/util/session.h72
-rw-r--r--tools/perf/util/setup.py51
-rw-r--r--tools/perf/util/sha1.c97
-rw-r--r--tools/perf/util/sha1.h6
-rw-r--r--tools/perf/util/sort.c675
-rw-r--r--tools/perf/util/sort.h205
-rw-r--r--tools/perf/util/spark.c8
-rw-r--r--tools/perf/util/spark.h1
-rw-r--r--tools/perf/util/srccode.c4
-rw-r--r--tools/perf/util/srcline.c766
-rw-r--r--tools/perf/util/srcline.h9
-rw-r--r--tools/perf/util/stat-display.c537
-rw-r--r--tools/perf/util/stat-shadow.c265
-rw-r--r--tools/perf/util/stat.c93
-rw-r--r--tools/perf/util/stat.h38
-rw-r--r--tools/perf/util/stream.c7
-rw-r--r--tools/perf/util/stream.h10
-rw-r--r--tools/perf/util/string.c115
-rw-r--r--tools/perf/util/string2.h2
-rw-r--r--tools/perf/util/svghelper.c21
-rw-r--r--tools/perf/util/symbol-elf.c517
-rw-r--r--tools/perf/util/symbol-minimal.c202
-rw-r--r--tools/perf/util/symbol.c594
-rw-r--r--tools/perf/util/symbol.h14
-rw-r--r--tools/perf/util/symbol_conf.h13
-rw-r--r--tools/perf/util/symbol_fprintf.c4
-rw-r--r--tools/perf/util/synthetic-events.c311
-rw-r--r--tools/perf/util/synthetic-events.h91
-rw-r--r--tools/perf/util/syscalltbl.c205
-rw-r--r--tools/perf/util/syscalltbl.h22
-rw-r--r--tools/perf/util/target.c54
-rw-r--r--tools/perf/util/target.h16
-rw-r--r--tools/perf/util/thread.c141
-rw-r--r--tools/perf/util/thread.h33
-rw-r--r--tools/perf/util/thread_map.c43
-rw-r--r--tools/perf/util/thread_map.h6
-rw-r--r--tools/perf/util/threads.c190
-rw-r--r--tools/perf/util/threads.h35
-rw-r--r--tools/perf/util/time-utils.c4
-rw-r--r--tools/perf/util/tool.c315
-rw-r--r--tools/perf/util/tool.h22
-rw-r--r--tools/perf/util/tool_pmu.c552
-rw-r--r--tools/perf/util/tool_pmu.h56
-rw-r--r--tools/perf/util/top.c4
-rw-r--r--tools/perf/util/top.h1
-rw-r--r--tools/perf/util/tp_pmu.c208
-rw-r--r--tools/perf/util/tp_pmu.h19
-rw-r--r--tools/perf/util/trace-event-parse.c117
-rw-r--r--tools/perf/util/trace-event-read.c2
-rw-r--r--tools/perf/util/trace-event-scripting.c243
-rw-r--r--tools/perf/util/trace-event.c2
-rw-r--r--tools/perf/util/trace-event.h17
-rw-r--r--tools/perf/util/trace.h38
-rw-r--r--tools/perf/util/trace_augment.h66
-rw-r--r--tools/perf/util/tracepoint.c56
-rw-r--r--tools/perf/util/tracepoint.h3
-rw-r--r--tools/perf/util/tsc.c4
-rw-r--r--tools/perf/util/tsc.h2
-rw-r--r--tools/perf/util/units.c2
-rw-r--r--tools/perf/util/unwind-libdw.c28
-rw-r--r--tools/perf/util/unwind-libunwind-local.c69
-rw-r--r--tools/perf/util/unwind-libunwind.c9
-rw-r--r--tools/perf/util/util.c120
-rw-r--r--tools/perf/util/util.h26
-rw-r--r--tools/perf/util/values.c106
-rw-r--r--tools/perf/util/values.h10
-rw-r--r--tools/perf/util/vdso.c60
-rw-r--r--tools/perf/util/zlib.c2
-rw-r--r--tools/power/acpi/common/cmfsize.c2
-rw-r--r--tools/power/acpi/common/getopt.c2
-rw-r--r--tools/power/acpi/os_specific/service_layers/oslinuxtbl.c8
-rw-r--r--tools/power/acpi/os_specific/service_layers/osunixdir.c2
-rw-r--r--tools/power/acpi/os_specific/service_layers/osunixmap.c2
-rw-r--r--tools/power/acpi/os_specific/service_layers/osunixxf.c2
-rw-r--r--tools/power/acpi/tools/acpidump/acpidump.h2
-rw-r--r--tools/power/acpi/tools/acpidump/apdump.c5
-rw-r--r--tools/power/acpi/tools/acpidump/apfiles.c2
-rw-r--r--tools/power/acpi/tools/acpidump/apmain.c2
-rw-r--r--tools/power/acpi/tools/pfrut/pfrut.c2
-rw-r--r--tools/power/cpupower/Makefile104
-rw-r--r--tools/power/cpupower/README188
-rw-r--r--tools/power/cpupower/bench/Makefile5
-rw-r--r--tools/power/cpupower/bench/parse.c9
-rw-r--r--tools/power/cpupower/bindings/python/.gitignore7
-rw-r--r--tools/power/cpupower/bindings/python/Makefile43
-rw-r--r--tools/power/cpupower/bindings/python/README87
-rw-r--r--tools/power/cpupower/bindings/python/raw_pylibcpupower.swg252
-rwxr-xr-xtools/power/cpupower/bindings/python/test_raw_pylibcpupower.py58
-rw-r--r--tools/power/cpupower/cpupower-service.conf32
-rw-r--r--tools/power/cpupower/cpupower.service.in16
-rw-r--r--tools/power/cpupower/cpupower.sh26
-rw-r--r--tools/power/cpupower/lib/cpufreq.c18
-rw-r--r--tools/power/cpupower/lib/cpufreq.h8
-rw-r--r--tools/power/cpupower/lib/cpuidle.c13
-rw-r--r--tools/power/cpupower/lib/cpuidle.h2
-rw-r--r--tools/power/cpupower/lib/cpupower.c50
-rw-r--r--tools/power/cpupower/lib/cpupower.h3
-rw-r--r--tools/power/cpupower/lib/powercap.c8
-rw-r--r--tools/power/cpupower/man/cpupower-frequency-info.1  2
-rw-r--r--tools/power/cpupower/man/cpupower-monitor.1  13
-rw-r--r--tools/power/cpupower/man/cpupower-set.1  39
-rw-r--r--tools/power/cpupower/po/zh_CN.po942
-rw-r--r--tools/power/cpupower/utils/cpufreq-info.c52
-rw-r--r--tools/power/cpupower/utils/cpuidle-info.c4
-rw-r--r--tools/power/cpupower/utils/cpupower-set.c5
-rw-r--r--tools/power/cpupower/utils/helpers/amd.c44
-rw-r--r--tools/power/cpupower/utils/helpers/helpers.h14
-rw-r--r--tools/power/cpupower/utils/helpers/misc.c76
-rw-r--r--tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c46
-rw-r--r--tools/power/cpupower/utils/idle_monitor/hsw_ext_idle.c4
-rw-r--r--tools/power/cpupower/utils/idle_monitor/mperf_monitor.c19
-rw-r--r--tools/power/cpupower/utils/idle_monitor/nhm_idle.c2
-rw-r--r--tools/power/cpupower/utils/idle_monitor/snb_idle.c4
-rw-r--r--tools/power/pm-graph/.gitignore3
-rw-r--r--tools/power/pm-graph/Makefile111
-rwxr-xr-xtools/power/pm-graph/bootgraph.py16
-rw-r--r--tools/power/pm-graph/config/custom-timeline-functions.cfg4
-rw-r--r--tools/power/pm-graph/sleepgraph.8  3
-rwxr-xr-xtools/power/pm-graph/sleepgraph.py1152
-rwxr-xr-xtools/power/x86/amd_pstate_tracer/amd_pstate_trace.py2
-rw-r--r--tools/power/x86/intel-speed-select/Makefile2
-rw-r--r--tools/power/x86/intel-speed-select/isst-config.c48
-rw-r--r--tools/power/x86/intel-speed-select/isst-core-mbox.c3
-rw-r--r--tools/power/x86/intel-speed-select/isst-core-tpmi.c24
-rw-r--r--tools/power/x86/intel-speed-select/isst-core.c7
-rw-r--r--tools/power/x86/intel-speed-select/isst-display.c59
-rw-r--r--tools/power/x86/intel-speed-select/isst.h5
-rw-r--r--tools/power/x86/turbostat/Makefile32
-rw-r--r--tools/power/x86/turbostat/turbostat.8  196
-rw-r--r--tools/power/x86/turbostat/turbostat.c6284
-rw-r--r--tools/power/x86/x86_energy_perf_policy/Makefile29
-rw-r--r--tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.8  15
-rw-r--r--tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c134
-rwxr-xr-xtools/rcu/rcu-updaters.sh50
-rw-r--r--tools/sched/dl_bw_dump.py57
-rw-r--r--tools/sched/root_domains_dump.py68
-rw-r--r--tools/sched_ext/.gitignore2
-rw-r--r--tools/sched_ext/Makefile257
-rw-r--r--tools/sched_ext/README.md270
-rw-r--r--tools/sched_ext/include/bpf-compat/gnu/stubs.h11
-rw-r--r--tools/sched_ext/include/scx/bpf_arena_common.bpf.h175
-rw-r--r--tools/sched_ext/include/scx/bpf_arena_common.h33
-rw-r--r--tools/sched_ext/include/scx/common.bpf.h771
-rw-r--r--tools/sched_ext/include/scx/common.h83
-rw-r--r--tools/sched_ext/include/scx/compat.bpf.h260
-rw-r--r--tools/sched_ext/include/scx/compat.h199
-rw-r--r--tools/sched_ext/include/scx/enum_defs.autogen.h123
-rw-r--r--tools/sched_ext/include/scx/enums.autogen.bpf.h129
-rw-r--r--tools/sched_ext/include/scx/enums.autogen.h49
-rw-r--r--tools/sched_ext/include/scx/enums.bpf.h12
-rw-r--r--tools/sched_ext/include/scx/enums.h28
-rw-r--r--tools/sched_ext/include/scx/user_exit_info.bpf.h40
-rw-r--r--tools/sched_ext/include/scx/user_exit_info.h73
-rw-r--r--tools/sched_ext/include/scx/user_exit_info_common.h30
-rw-r--r--tools/sched_ext/scx_central.bpf.c356
-rw-r--r--tools/sched_ext/scx_central.c146
-rw-r--r--tools/sched_ext/scx_flatcg.bpf.c954
-rw-r--r--tools/sched_ext/scx_flatcg.c236
-rw-r--r--tools/sched_ext/scx_flatcg.h51
-rw-r--r--tools/sched_ext/scx_qmap.bpf.c897
-rw-r--r--tools/sched_ext/scx_qmap.c159
-rw-r--r--tools/sched_ext/scx_show_state.py42
-rw-r--r--tools/sched_ext/scx_simple.bpf.c151
-rw-r--r--tools/sched_ext/scx_simple.c109
-rw-r--r--tools/scripts/Makefile.arch4
-rw-r--r--tools/scripts/Makefile.include39
-rw-r--r--tools/scripts/syscall.tbl412
-rwxr-xr-xtools/sound/dapm-graph  329
-rw-r--r--tools/spi/spidev_fdx.c2
-rw-r--r--tools/spi/spidev_test.c11
-rw-r--r--tools/testing/crypto/chacha20-s390/test-cipher.c10
-rw-r--r--tools/testing/cxl/Kbuild13
-rw-r--r--tools/testing/cxl/config_check.c1
-rw-r--r--tools/testing/cxl/cxl_core_exports.c24
-rw-r--r--tools/testing/cxl/exports.h13
-rw-r--r--tools/testing/cxl/mock_acpi.c2
-rw-r--r--tools/testing/cxl/test/cxl.c429
-rw-r--r--tools/testing/cxl/test/mem.c400
-rw-r--r--tools/testing/cxl/test/mock.c137
-rw-r--r--tools/testing/cxl/test/mock.h10
-rwxr-xr-x[-rw-r--r--]tools/testing/fault-injection/failcmd.sh  12
-rw-r--r--tools/testing/ktest/examples/include/defaults.conf2
-rwxr-xr-xtools/testing/ktest/ktest.pl178
-rw-r--r--tools/testing/ktest/sample.conf2
-rw-r--r--tools/testing/kunit/configs/all_tests.config17
-rw-r--r--tools/testing/kunit/configs/arch_uml.config5
-rwxr-xr-xtools/testing/kunit/kunit.py43
-rw-r--r--tools/testing/kunit/kunit_json.py10
-rw-r--r--tools/testing/kunit/kunit_kernel.py19
-rw-r--r--tools/testing/kunit/kunit_parser.py153
-rw-r--r--tools/testing/kunit/kunit_printer.py14
-rwxr-xr-xtools/testing/kunit/kunit_tool_test.py66
-rw-r--r--tools/testing/kunit/qemu_configs/arm64.py2
-rw-r--r--tools/testing/kunit/qemu_configs/loongarch.py21
-rw-r--r--tools/testing/kunit/qemu_configs/mips.py18
-rw-r--r--tools/testing/kunit/qemu_configs/mips64.py19
-rw-r--r--tools/testing/kunit/qemu_configs/mips64el.py19
-rw-r--r--tools/testing/kunit/qemu_configs/mipsel.py18
-rw-r--r--tools/testing/kunit/qemu_configs/powerpc.py1
-rw-r--r--tools/testing/kunit/qemu_configs/powerpc32.py17
-rw-r--r--tools/testing/kunit/qemu_configs/powerpcle.py14
-rw-r--r--tools/testing/kunit/qemu_configs/riscv.py2
-rw-r--r--tools/testing/kunit/qemu_configs/riscv32.py17
-rw-r--r--tools/testing/kunit/qemu_configs/sh.py4
-rw-r--r--tools/testing/kunit/qemu_configs/sparc.py7
-rw-r--r--tools/testing/kunit/qemu_configs/sparc64.py16
-rw-r--r--tools/testing/kunit/qemu_configs/x86_64.py4
-rw-r--r--tools/testing/kunit/test_data/test_is_test_passed-kselftest.log3
-rw-r--r--tools/testing/memblock/Makefile2
-rw-r--r--tools/testing/memblock/internal.h8
-rw-r--r--tools/testing/memblock/linux/kernel.h2
-rw-r--r--tools/testing/memblock/linux/mmzone.h1
-rw-r--r--tools/testing/memblock/linux/mutex.h14
-rw-r--r--tools/testing/memblock/tests/alloc_api.c22
-rw-r--r--tools/testing/memblock/tests/alloc_helpers_api.c4
-rw-r--r--tools/testing/memblock/tests/alloc_nid_api.c20
-rw-r--r--tools/testing/memblock/tests/basic_api.c416
-rw-r--r--tools/testing/memblock/tests/common.c8
-rw-r--r--tools/testing/memblock/tests/common.h4
-rw-r--r--tools/testing/nvdimm/pmem-dax.c6
-rw-r--r--tools/testing/nvdimm/test/iomap.c12
-rw-r--r--tools/testing/nvdimm/test/ndtest.c19
-rw-r--r--tools/testing/nvdimm/test/ndtest.h31
-rw-r--r--tools/testing/nvdimm/test/nfit.c1
-rw-r--r--tools/testing/nvdimm/test/nfit_test.h1
-rw-r--r--tools/testing/radix-tree/.gitignore1
-rw-r--r--tools/testing/radix-tree/Makefile73
-rw-r--r--tools/testing/radix-tree/bitmap.c23
-rw-r--r--tools/testing/radix-tree/idr-test.c17
-rw-r--r--tools/testing/radix-tree/linux/idr.h1
-rw-r--r--tools/testing/radix-tree/linux/init.h2
-rw-r--r--tools/testing/radix-tree/linux/maple_tree.h7
-rw-r--r--tools/testing/radix-tree/maple.c801
-rw-r--r--tools/testing/radix-tree/multiorder.c4
-rw-r--r--tools/testing/radix-tree/xarray.c9
-rw-r--r--tools/testing/rbtree/Makefile33
-rw-r--r--tools/testing/rbtree/interval_tree_test.c58
-rw-r--r--tools/testing/rbtree/rbtree_test.c48
-rw-r--r--tools/testing/rbtree/test.h4
-rw-r--r--tools/testing/scatterlist/linux/mm.h1
-rw-r--r--tools/testing/selftests/.gitignore1
-rw-r--r--tools/testing/selftests/Makefile67
-rw-r--r--tools/testing/selftests/acct/.gitignore3
-rw-r--r--tools/testing/selftests/acct/Makefile5
-rw-r--r--tools/testing/selftests/acct/acct_syscall.c78
-rw-r--r--tools/testing/selftests/alsa/.gitignore2
-rw-r--r--tools/testing/selftests/alsa/Makefile12
-rw-r--r--tools/testing/selftests/alsa/conf.c2
-rw-r--r--tools/testing/selftests/alsa/global-timer.c87
-rw-r--r--tools/testing/selftests/alsa/mixer-test.c151
-rw-r--r--tools/testing/selftests/alsa/pcm-test.c78
-rw-r--r--tools/testing/selftests/alsa/test-pcmtest-driver.c4
-rw-r--r--tools/testing/selftests/alsa/utimer-test.c165
-rw-r--r--tools/testing/selftests/arm64/Makefile4
-rw-r--r--tools/testing/selftests/arm64/abi/Makefile2
-rw-r--r--tools/testing/selftests/arm64/abi/hwcap.c521
-rw-r--r--tools/testing/selftests/arm64/abi/ptrace.c8
-rw-r--r--tools/testing/selftests/arm64/abi/syscall-abi-asm.S32
-rw-r--r--tools/testing/selftests/arm64/abi/syscall-abi.c8
-rw-r--r--tools/testing/selftests/arm64/abi/tpidr2.c156
-rw-r--r--tools/testing/selftests/arm64/bti/assembler.h1
-rw-r--r--tools/testing/selftests/arm64/fp/.gitignore2
-rw-r--r--tools/testing/selftests/arm64/fp/Makefile6
-rw-r--r--tools/testing/selftests/arm64/fp/assembler.h15
-rw-r--r--tools/testing/selftests/arm64/fp/fp-ptrace-asm.S292
-rw-r--r--tools/testing/selftests/arm64/fp/fp-ptrace.c1695
-rw-r--r--tools/testing/selftests/arm64/fp/fp-ptrace.h25
-rw-r--r--tools/testing/selftests/arm64/fp/fp-stress.c81
-rw-r--r--tools/testing/selftests/arm64/fp/fpsimd-test.S6
-rw-r--r--tools/testing/selftests/arm64/fp/kernel-test.c326
-rw-r--r--tools/testing/selftests/arm64/fp/sme-inst.h2
-rw-r--r--tools/testing/selftests/arm64/fp/sve-ptrace.c132
-rw-r--r--tools/testing/selftests/arm64/fp/sve-test.S10
-rw-r--r--tools/testing/selftests/arm64/fp/vec-syscfg.c1
-rw-r--r--tools/testing/selftests/arm64/fp/za-ptrace.c8
-rw-r--r--tools/testing/selftests/arm64/fp/za-test.S15
-rw-r--r--tools/testing/selftests/arm64/fp/zt-ptrace.c9
-rw-r--r--tools/testing/selftests/arm64/fp/zt-test.S15
-rw-r--r--tools/testing/selftests/arm64/gcs/.gitignore7
-rw-r--r--tools/testing/selftests/arm64/gcs/Makefile30
-rw-r--r--tools/testing/selftests/arm64/gcs/asm-offsets.h0
-rw-r--r--tools/testing/selftests/arm64/gcs/basic-gcs.c420
-rw-r--r--tools/testing/selftests/arm64/gcs/gcs-locking.c199
-rw-r--r--tools/testing/selftests/arm64/gcs/gcs-stress-thread.S311
-rw-r--r--tools/testing/selftests/arm64/gcs/gcs-stress.c530
-rw-r--r--tools/testing/selftests/arm64/gcs/gcs-util.h100
-rw-r--r--tools/testing/selftests/arm64/gcs/gcspushm.S96
-rw-r--r--tools/testing/selftests/arm64/gcs/gcsstr.S99
-rw-r--r--tools/testing/selftests/arm64/gcs/libc-gcs.c728
-rw-r--r--tools/testing/selftests/arm64/mte/check_buffer_fill.c16
-rw-r--r--tools/testing/selftests/arm64/mte/check_child_memory.c8
-rw-r--r--tools/testing/selftests/arm64/mte/check_hugetlb_options.c296
-rw-r--r--tools/testing/selftests/arm64/mte/check_ksm_options.c6
-rw-r--r--tools/testing/selftests/arm64/mte/check_mmap_options.c896
-rw-r--r--tools/testing/selftests/arm64/mte/check_prctl.c31
-rw-r--r--tools/testing/selftests/arm64/mte/check_tags_inclusion.c14
-rw-r--r--tools/testing/selftests/arm64/mte/check_user_mem.c4
-rw-r--r--tools/testing/selftests/arm64/mte/mte_common_util.c113
-rw-r--r--tools/testing/selftests/arm64/mte/mte_common_util.h15
-rw-r--r--tools/testing/selftests/arm64/mte/mte_def.h8
-rw-r--r--tools/testing/selftests/arm64/pauth/Makefile6
-rw-r--r--tools/testing/selftests/arm64/pauth/exec_target.c7
-rw-r--r--tools/testing/selftests/arm64/pauth/pac.c5
-rw-r--r--tools/testing/selftests/arm64/signal/.gitignore3
-rw-r--r--tools/testing/selftests/arm64/signal/Makefile4
-rw-r--r--tools/testing/selftests/arm64/signal/sve_helpers.c56
-rw-r--r--tools/testing/selftests/arm64/signal/sve_helpers.h34
-rw-r--r--tools/testing/selftests/arm64/signal/test_signals.c17
-rw-r--r--tools/testing/selftests/arm64/signal/test_signals.h6
-rw-r--r--tools/testing/selftests/arm64/signal/test_signals_utils.c32
-rw-r--r--tools/testing/selftests/arm64/signal/test_signals_utils.h39
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sme_change_vl.c46
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/fake_sigreturn_sve_change_vl.c30
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/fpmr_siginfo.c82
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/gcs_exception_fault.c62
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/gcs_frame.c88
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/gcs_write_fault.c67
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/poe_siginfo.c86
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/ssve_regs.c41
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/ssve_za_regs.c36
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/sve_regs.c32
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/testcases.c42
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/testcases.h30
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/za_no_regs.c32
-rw-r--r--tools/testing/selftests/arm64/signal/testcases/za_regs.c41
-rw-r--r--tools/testing/selftests/arm64/tags/Makefile1
-rwxr-xr-xtools/testing/selftests/arm64/tags/run_tags_test.sh12
-rw-r--r--tools/testing/selftests/arm64/tags/tags_test.c12
-rw-r--r--tools/testing/selftests/bpf/.gitignore13
-rw-r--r--tools/testing/selftests/bpf/DENYLIST.aarch64  13
-rw-r--r--tools/testing/selftests/bpf/DENYLIST.riscv64  3
-rw-r--r--tools/testing/selftests/bpf/DENYLIST.s390x  1
-rw-r--r--tools/testing/selftests/bpf/Makefile393
-rw-r--r--tools/testing/selftests/bpf/Makefile.docs6
-rw-r--r--tools/testing/selftests/bpf/README.rst64
-rw-r--r--tools/testing/selftests/bpf/bench.c128
-rw-r--r--tools/testing/selftests/bpf/bench.h2
-rw-r--r--tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c185
-rw-r--r--tools/testing/selftests/bpf/benchs/bench_htab_mem.c3
-rw-r--r--tools/testing/selftests/bpf/benchs/bench_local_storage_create.c2
-rw-r--r--tools/testing/selftests/bpf/benchs/bench_lpm_trie_map.c555
-rw-r--r--tools/testing/selftests/bpf/benchs/bench_sockmap.c599
-rw-r--r--tools/testing/selftests/bpf/benchs/bench_trigger.c596
-rwxr-xr-xtools/testing/selftests/bpf/benchs/run_bench_trigger.sh22
-rwxr-xr-xtools/testing/selftests/bpf/benchs/run_bench_uprobes.sh9
-rw-r--r--tools/testing/selftests/bpf/bpf_arena_alloc.h67
-rw-r--r--tools/testing/selftests/bpf/bpf_arena_common.h75
-rw-r--r--tools/testing/selftests/bpf/bpf_arena_htab.h100
-rw-r--r--tools/testing/selftests/bpf/bpf_arena_list.h92
-rw-r--r--tools/testing/selftests/bpf/bpf_atomic.h140
-rw-r--r--tools/testing/selftests/bpf/bpf_experimental.h272
-rw-r--r--tools/testing/selftests/bpf/bpf_kfuncs.h54
-rw-r--r--tools/testing/selftests/bpf/bpf_tcp_helpers.h241
-rw-r--r--tools/testing/selftests/bpf/bpf_testmod/Makefile20
-rw-r--r--tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c555
-rw-r--r--tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h31
-rw-r--r--tools/testing/selftests/bpf/bpf_util.h15
-rw-r--r--tools/testing/selftests/bpf/cap_helpers.c8
-rw-r--r--tools/testing/selftests/bpf/cap_helpers.h1
-rw-r--r--tools/testing/selftests/bpf/cgroup_helpers.c48
-rw-r--r--tools/testing/selftests/bpf/cgroup_helpers.h5
-rw-r--r--tools/testing/selftests/bpf/config  36
-rw-r--r--tools/testing/selftests/bpf/config.aarch64  13
-rw-r--r--tools/testing/selftests/bpf/config.ppc64el  92
-rw-r--r--tools/testing/selftests/bpf/config.riscv64  83
-rw-r--r--tools/testing/selftests/bpf/config.s390x  12
-rw-r--r--tools/testing/selftests/bpf/config.vm  7
-rw-r--r--tools/testing/selftests/bpf/config.x86_64  6
-rw-r--r--tools/testing/selftests/bpf/disasm_helpers.c69
-rw-r--r--tools/testing/selftests/bpf/disasm_helpers.h12
-rw-r--r--tools/testing/selftests/bpf/get_cgroup_id_user.c151
-rw-r--r--tools/testing/selftests/bpf/io_helpers.c21
-rw-r--r--tools/testing/selftests/bpf/io_helpers.h7
-rw-r--r--tools/testing/selftests/bpf/jit_disasm_helpers.c245
-rw-r--r--tools/testing/selftests/bpf/jit_disasm_helpers.h10
-rw-r--r--tools/testing/selftests/bpf/map_tests/htab_map_batch_ops.c2
-rw-r--r--tools/testing/selftests/bpf/map_tests/lpm_trie_map_basic_ops.c (renamed from tools/testing/selftests/bpf/test_lpm_map.c)  423
-rw-r--r--tools/testing/selftests/bpf/map_tests/lpm_trie_map_batch_ops.c2
-rw-r--r--tools/testing/selftests/bpf/map_tests/lpm_trie_map_get_next_key.c109
-rw-r--r--tools/testing/selftests/bpf/map_tests/map_in_map_batch_ops.c62
-rw-r--r--tools/testing/selftests/bpf/map_tests/map_percpu_stats.c18
-rw-r--r--tools/testing/selftests/bpf/map_tests/sk_storage_map.c2
-rw-r--r--tools/testing/selftests/bpf/map_tests/task_storage_map.c7
-rw-r--r--tools/testing/selftests/bpf/network_helpers.c936
-rw-r--r--tools/testing/selftests/bpf/network_helpers.h177
-rw-r--r--tools/testing/selftests/bpf/prog_tests/align.c189
-rw-r--r--tools/testing/selftests/bpf/prog_tests/arena_atomics.c268
-rw-r--r--tools/testing/selftests/bpf/prog_tests/arena_htab.c90
-rw-r--r--tools/testing/selftests/bpf/prog_tests/arena_list.c71
-rw-r--r--tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c123
-rw-r--r--tools/testing/selftests/bpf/prog_tests/atomics.c10
-rw-r--r--tools/testing/selftests/bpf/prog_tests/attach_probe.c120
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bad_struct_ops.c67
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c13
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_cookie.c167
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_iter.c103
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_iter_setsockopt.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_mod_race.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_nf.c22
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c231
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c368
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c8
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf.c92
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_dedup_split.c101
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_distill.c692
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_dump.c273
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_field_iter.c161
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_map_in_map.c26
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_skc_cls_ingress.c264
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_split.c58
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf_sysfs.c81
-rw-r--r--tools/testing/selftests/bpf/prog_tests/build_id.c118
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cb_refs.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup1_hierarchy.c7
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_ancestor.c141
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_dev.c125
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_get_current_cgroup_id.c46
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_mprog_opts.c617
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_mprog_ordering.c77
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_preorder.c128
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_skb_direct_packet_access.c28
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_storage.c96
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c27
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgroup_xattr.c72
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgrp_kfunc.c71
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c10
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cls_redirect.c38
-rw-r--r--tools/testing/selftests/bpf/prog_tests/compute_live_registers.c9
-rw-r--r--tools/testing/selftests/bpf/prog_tests/core_reloc.c11
-rw-r--r--tools/testing/selftests/bpf/prog_tests/core_reloc_raw.c125
-rw-r--r--tools/testing/selftests/bpf/prog_tests/cpumask.c17
-rw-r--r--tools/testing/selftests/bpf/prog_tests/crypto_sanity.c196
-rw-r--r--tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c128
-rw-r--r--tools/testing/selftests/bpf/prog_tests/decap_sanity.c3
-rw-r--r--tools/testing/selftests/bpf/prog_tests/dmabuf_iter.c285
-rw-r--r--tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c34
-rw-r--r--tools/testing/selftests/bpf/prog_tests/dynptr.c87
-rw-r--r--tools/testing/selftests/bpf/prog_tests/empty_skb.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fd_array.c441
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fd_htab_lookup.c192
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fentry_fexit.c15
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fentry_test.c9
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fexit_sleep.c8
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fexit_stress.c7
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fexit_test.c9
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fib_lookup.c134
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fill_link_info.c154
-rw-r--r--tools/testing/selftests/bpf/prog_tests/find_vma.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/flow_dissector.c344
-rw-r--r--tools/testing/selftests/bpf/prog_tests/flow_dissector_classification.c797
-rw-r--r--tools/testing/selftests/bpf/prog_tests/for_each.c99
-rw-r--r--tools/testing/selftests/bpf/prog_tests/free_timer.c169
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fs_kfuncs.c159
-rw-r--r--tools/testing/selftests/bpf/prog_tests/ip_check_defrag.c20
-rw-r--r--tools/testing/selftests/bpf/prog_tests/iters.c10
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kernel_flag.c43
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kfree_skb.c1
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kfunc_call.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kfunc_module_order.c55
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kfunc_param_nullable.c11
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kmem_cache_iter.c126
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c280
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kptr_xchg_inline.c52
-rw-r--r--tools/testing/selftests/bpf/prog_tests/ksyms.c30
-rw-r--r--tools/testing/selftests/bpf/prog_tests/libbpf_probes.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/libbpf_str.c6
-rw-r--r--tools/testing/selftests/bpf/prog_tests/linked_funcs.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/linked_list.c20
-rw-r--r--tools/testing/selftests/bpf/prog_tests/log_buf.c16
-rw-r--r--tools/testing/selftests/bpf/prog_tests/log_fixup.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/lsm_cgroup.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/lwt_helpers.h31
-rw-r--r--tools/testing/selftests/bpf/prog_tests/lwt_ip_encap.c540
-rw-r--r--tools/testing/selftests/bpf/prog_tests/lwt_redirect.c5
-rw-r--r--tools/testing/selftests/bpf/prog_tests/lwt_reroute.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/lwt_seg6local.c176
-rw-r--r--tools/testing/selftests/bpf/prog_tests/map_excl.c54
-rw-r--r--tools/testing/selftests/bpf/prog_tests/map_in_map.c132
-rw-r--r--tools/testing/selftests/bpf/prog_tests/mem_rdonly_untrusted.c9
-rw-r--r--tools/testing/selftests/bpf/prog_tests/missed.c1
-rw-r--r--tools/testing/selftests/bpf/prog_tests/module_attach.c8
-rw-r--r--tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c3
-rw-r--r--tools/testing/selftests/bpf/prog_tests/mptcp.c160
-rw-r--r--tools/testing/selftests/bpf/prog_tests/nested_trust.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/net_timestamping.c239
-rw-r--r--tools/testing/selftests/bpf/prog_tests/netfilter_link_attach.c42
-rw-r--r--tools/testing/selftests/bpf/prog_tests/netns_cookie.c44
-rw-r--r--tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c205
-rw-r--r--tools/testing/selftests/bpf/prog_tests/parse_tcp_hdr_opt.c1
-rw-r--r--tools/testing/selftests/bpf/prog_tests/perf_link.c15
-rw-r--r--tools/testing/selftests/bpf/prog_tests/perf_skip.c137
-rw-r--r--tools/testing/selftests/bpf/prog_tests/pinning_devmap_reuse.c50
-rw-r--r--tools/testing/selftests/bpf/prog_tests/pinning_htab.c36
-rw-r--r--tools/testing/selftests/bpf/prog_tests/preempt_lock.c9
-rw-r--r--tools/testing/selftests/bpf/prog_tests/prepare.c99
-rw-r--r--tools/testing/selftests/bpf/prog_tests/pro_epilogue.c62
-rw-r--r--tools/testing/selftests/bpf/prog_tests/prog_tests_framework.c125
-rw-r--r--tools/testing/selftests/bpf/prog_tests/raw_tp_null.c28
-rw-r--r--tools/testing/selftests/bpf/prog_tests/raw_tp_writable_reject_nbd_invalid.c3
-rw-r--r--tools/testing/selftests/bpf/prog_tests/raw_tp_writable_test_run.c5
-rw-r--r--tools/testing/selftests/bpf/prog_tests/rbtree.c53
-rw-r--r--tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c13
-rw-r--r--tools/testing/selftests/bpf/prog_tests/read_vsyscall.c59
-rw-r--r--tools/testing/selftests/bpf/prog_tests/recursive_attach.c67
-rw-r--r--tools/testing/selftests/bpf/prog_tests/reg_bounds.c52
-rw-r--r--tools/testing/selftests/bpf/prog_tests/res_spin_lock.c117
-rw-r--r--tools/testing/selftests/bpf/prog_tests/resolve_btfids.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/ringbuf.c121
-rw-r--r--tools/testing/selftests/bpf/prog_tests/select_reuseport.c37
-rw-r--r--tools/testing/selftests/bpf/prog_tests/send_signal.c151
-rw-r--r--tools/testing/selftests/bpf/prog_tests/setget_sockopt.c49
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sha256.c52
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sk_assign.c59
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sk_lookup.c193
-rw-r--r--tools/testing/selftests/bpf/prog_tests/snprintf.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sock_addr.c2350
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sock_create.c348
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sock_destroy.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sock_iter_batch.c889
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sock_post_bind.c (renamed from tools/testing/selftests/bpf/test_sock.c)  254
-rw-r--r--tools/testing/selftests/bpf/prog_tests/socket_helpers.h468
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_basic.c362
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h355
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c389
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_listen.c565
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_redir.c465
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_strp.c454
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockopt.c65
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockopt_inherit.c64
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockopt_sk.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/spin_lock.c17
-rw-r--r--tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/stacktrace_map.c71
-rw-r--r--tools/testing/selftests/bpf/prog_tests/stacktrace_map_raw_tp.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/stacktrace_map_skip.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/stream.c108
-rw-r--r--tools/testing/selftests/bpf/prog_tests/string_kfuncs.c66
-rw-r--r--tools/testing/selftests/bpf/prog_tests/struct_ops_autocreate.c159
-rw-r--r--tools/testing/selftests/bpf/prog_tests/struct_ops_private_stack.c106
-rw-r--r--tools/testing/selftests/bpf/prog_tests/subskeleton.c76
-rw-r--r--tools/testing/selftests/bpf/prog_tests/summarization.c144
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tailcalls.c486
-rw-r--r--tools/testing/selftests/bpf/prog_tests/task_kfunc.c80
-rw-r--r--tools/testing/selftests/bpf/prog_tests/task_local_data.h386
-rw-r--r--tools/testing/selftests/bpf/prog_tests/task_local_storage.c294
-rw-r--r--tools/testing/selftests/bpf/prog_tests/task_work_stress.c130
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_change_tail.c62
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_helpers.h28
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_links.c87
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_netkit.c201
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_opts.c42
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_redirect.c143
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tcp_custom_syncookie.c150
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tcp_rtt.c15
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_bpf_syscall_macro.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_bprm_opts.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_btf_ext.c64
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_csum_diff.c408
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_lsm.c46
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_mmap_inner_array.c57
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_skb_pkt_end.c1
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_strncmp.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_struct_ops_id_ops_mapping.c74
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_struct_ops_kptr_return.c16
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_struct_ops_maybe_null.c46
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_struct_ops_module.c317
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_struct_ops_multi_pages.c30
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_struct_ops_no_cfi.c35
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_struct_ops_refcounted.c14
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_sysctl.c (renamed from tools/testing/selftests/bpf/test_sysctl.c)  37
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_task_local_data.c297
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_task_work.c157
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_tunnel.c647
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_veristat.c261
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_xdp_veth.c599
-rw-r--r--tools/testing/selftests/bpf/prog_tests/timer.c73
-rw-r--r--tools/testing/selftests/bpf/prog_tests/timer_crash.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/timer_lockup.c101
-rw-r--r--tools/testing/selftests/bpf/prog_tests/timer_mim.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/token.c1197
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tp_btf_nullable.c14
-rw-r--r--tools/testing/selftests/bpf/prog_tests/trace_printk.c36
-rw-r--r--tools/testing/selftests/bpf/prog_tests/trace_vprintk.c36
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tracing_failure.c89
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tracing_struct.c71
-rw-r--r--tools/testing/selftests/bpf/prog_tests/unpriv_bpf_disabled.c3
-rw-r--r--tools/testing/selftests/bpf/prog_tests/uprobe.c156
-rw-r--r--tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c866
-rw-r--r--tools/testing/selftests/bpf/prog_tests/uprobe_syscall.c803
-rw-r--r--tools/testing/selftests/bpf/prog_tests/uretprobe_stack.c186
-rw-r--r--tools/testing/selftests/bpf/prog_tests/usdt.c144
-rw-r--r--tools/testing/selftests/bpf/prog_tests/user_ringbuf.c13
-rw-r--r--tools/testing/selftests/bpf/prog_tests/verifier.c56
-rw-r--r--tools/testing/selftests/bpf/prog_tests/verifier_kfunc_prog_types.c11
-rw-r--r--tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/wq.c40
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c118
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_bonding.c6
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c392
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c44
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c156
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c183
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_flowtable.c168
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_metadata.c62
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_pull_data.c179
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_vlan.c175
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdpwall.c2
-rw-r--r--tools/testing/selftests/bpf/progs/arena_atomics.c397
-rw-r--r--tools/testing/selftests/bpf/progs/arena_htab.c59
-rw-r--r--tools/testing/selftests/bpf/progs/arena_htab_asm.c5
-rw-r--r--tools/testing/selftests/bpf/progs/arena_list.c88
-rw-r--r--tools/testing/selftests/bpf/progs/arena_spin_lock.c54
-rw-r--r--tools/testing/selftests/bpf/progs/async_stack_depth.c4
-rw-r--r--tools/testing/selftests/bpf/progs/bad_struct_ops.c25
-rw-r--r--tools/testing/selftests/bpf/progs/bad_struct_ops2.c14
-rw-r--r--tools/testing/selftests/bpf/progs/bench_local_storage_create.c5
-rw-r--r--tools/testing/selftests/bpf/progs/bench_sockmap_prog.c65
-rw-r--r--tools/testing/selftests/bpf/progs/bind4_prog.c24
-rw-r--r--tools/testing/selftests/bpf/progs/bind6_prog.c24
-rw-r--r--tools/testing/selftests/bpf/progs/bind_prog.h19
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h542
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_cc_cubic.c189
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_compiler.h33
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_cubic.c80
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_dctcp.c108
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_dctcp_release.c10
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter.h167
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_array_map.c8
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_hash_map.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_link.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_map.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_percpu_array_map.c8
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_percpu_hash_map.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_helpers.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_bpf_sk_storage_map.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_ipv6_route.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_ksym.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_map_elem.c22
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_netlink.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_setsockopt.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_setsockopt_unix.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_sockmap.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_task_btf.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_task_file.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_task_stack.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_tasks.c112
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_tcp4.c6
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_tcp6.c6
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_test_kern3.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_test_kern4.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_test_kern5.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_test_kern6.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_test_kern_common.h2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_udp4.c5
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_udp6.c6
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_unix.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_misc.h155
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_qdisc_common.h27
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_qdisc_fail__incompl_ops.c41
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_qdisc_fifo.c126
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_qdisc_fq.c756
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_syscall_macro.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_tcp_nogpl.c8
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_test_utils.h18
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_tracing_net.h74
-rw-r--r--tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_bad_signed_arr_elem_sz.c3
-rw-r--r--tools/testing/selftests/bpf/progs/btf_dump_test_case_multidim.c4
-rw-r--r--tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c4
-rw-r--r--tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c22
-rw-r--r--tools/testing/selftests/bpf/progs/cb_refs.c2
-rw-r--r--tools/testing/selftests/bpf/progs/cg_storage_multi.h2
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_ancestor.c40
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c9
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_iter.c3
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_mprog.c30
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_preorder.c41
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_read_xattr.c158
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_skb_direct_packet_access.c15
-rw-r--r--tools/testing/selftests/bpf/progs/cgroup_storage.c24
-rw-r--r--tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h2
-rw-r--r--tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c12
-rw-r--r--tools/testing/selftests/bpf/progs/cgrp_ls_recursion.c26
-rw-r--r--tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c3
-rw-r--r--tools/testing/selftests/bpf/progs/compute_live_registers.c440
-rw-r--r--tools/testing/selftests/bpf/progs/connect4_dropper.c4
-rw-r--r--tools/testing/selftests/bpf/progs/connect4_prog.c12
-rw-r--r--tools/testing/selftests/bpf/progs/connect6_prog.c6
-rw-r--r--tools/testing/selftests/bpf/progs/connect_unix_prog.c9
-rw-r--r--tools/testing/selftests/bpf/progs/core_reloc_types.h10
-rw-r--r--tools/testing/selftests/bpf/progs/cpumask_common.h65
-rw-r--r--tools/testing/selftests/bpf/progs/cpumask_failure.c76
-rw-r--r--tools/testing/selftests/bpf/progs/cpumask_success.c365
-rw-r--r--tools/testing/selftests/bpf/progs/crypto_basic.c68
-rw-r--r--tools/testing/selftests/bpf/progs/crypto_bench.c107
-rw-r--r--tools/testing/selftests/bpf/progs/crypto_common.h66
-rw-r--r--tools/testing/selftests/bpf/progs/crypto_sanity.c179
-rw-r--r--tools/testing/selftests/bpf/progs/csum_diff_test.c42
-rw-r--r--tools/testing/selftests/bpf/progs/dev_cgroup.c4
-rw-r--r--tools/testing/selftests/bpf/progs/dmabuf_iter.c101
-rw-r--r--tools/testing/selftests/bpf/progs/dummy_st_ops_success.c15
-rw-r--r--tools/testing/selftests/bpf/progs/dynptr_fail.c351
-rw-r--r--tools/testing/selftests/bpf/progs/dynptr_success.c601
-rw-r--r--tools/testing/selftests/bpf/progs/epilogue_exit.c82
-rw-r--r--tools/testing/selftests/bpf/progs/epilogue_tailcall.c58
-rw-r--r--tools/testing/selftests/bpf/progs/err.h10
-rw-r--r--tools/testing/selftests/bpf/progs/exceptions_assert.c34
-rw-r--r--tools/testing/selftests/bpf/progs/exceptions_fail.c4
-rw-r--r--tools/testing/selftests/bpf/progs/fd_htab_lookup.c25
-rw-r--r--tools/testing/selftests/bpf/progs/fib_lookup.c2
-rw-r--r--tools/testing/selftests/bpf/progs/find_vma.c2
-rw-r--r--tools/testing/selftests/bpf/progs/for_each_hash_modify.c30
-rw-r--r--tools/testing/selftests/bpf/progs/for_each_multi_maps.c49
-rw-r--r--tools/testing/selftests/bpf/progs/free_timer.c71
-rw-r--r--tools/testing/selftests/bpf/progs/freplace_connect_v4_prog.c2
-rw-r--r--tools/testing/selftests/bpf/progs/get_cgroup_id_kern.c26
-rw-r--r--tools/testing/selftests/bpf/progs/get_func_ip_test.c7
-rw-r--r--tools/testing/selftests/bpf/progs/getpeername4_prog.c24
-rw-r--r--tools/testing/selftests/bpf/progs/getpeername6_prog.c31
-rw-r--r--tools/testing/selftests/bpf/progs/getpeername_unix_prog.c3
-rw-r--r--tools/testing/selftests/bpf/progs/getsockname4_prog.c24
-rw-r--r--tools/testing/selftests/bpf/progs/getsockname6_prog.c31
-rw-r--r--tools/testing/selftests/bpf/progs/getsockname_unix_prog.c3
-rw-r--r--tools/testing/selftests/bpf/progs/ip_check_defrag.c10
-rw-r--r--tools/testing/selftests/bpf/progs/irq.c566
-rw-r--r--tools/testing/selftests/bpf/progs/iters.c521
-rw-r--r--tools/testing/selftests/bpf/progs/iters_state_safety.c20
-rw-r--r--tools/testing/selftests/bpf/progs/iters_task.c12
-rw-r--r--tools/testing/selftests/bpf/progs/iters_task_failure.c4
-rw-r--r--tools/testing/selftests/bpf/progs/iters_testmod.c171
-rw-r--r--tools/testing/selftests/bpf/progs/iters_testmod_seq.c56
-rw-r--r--tools/testing/selftests/bpf/progs/jeq_infer_not_null_fail.c4
-rw-r--r--tools/testing/selftests/bpf/progs/jit_probe_mem.c2
-rw-r--r--tools/testing/selftests/bpf/progs/kfunc_call_destructive.c2
-rw-r--r--tools/testing/selftests/bpf/progs/kfunc_call_fail.c9
-rw-r--r--tools/testing/selftests/bpf/progs/kfunc_call_race.c2
-rw-r--r--tools/testing/selftests/bpf/progs/kfunc_call_test.c39
-rw-r--r--tools/testing/selftests/bpf/progs/kfunc_call_test_subprog.c2
-rw-r--r--tools/testing/selftests/bpf/progs/kfunc_module_order.c30
-rw-r--r--tools/testing/selftests/bpf/progs/kmem_cache_iter.c108
-rw-r--r--tools/testing/selftests/bpf/progs/kprobe_multi_session.c78
-rw-r--r--tools/testing/selftests/bpf/progs/kprobe_multi_session_cookie.c58
-rw-r--r--tools/testing/selftests/bpf/progs/kprobe_multi_verifier.c31
-rw-r--r--tools/testing/selftests/bpf/progs/kprobe_write_ctx.c22
-rw-r--r--tools/testing/selftests/bpf/progs/kptr_xchg_inline.c48
-rw-r--r--tools/testing/selftests/bpf/progs/linked_funcs1.c8
-rw-r--r--tools/testing/selftests/bpf/progs/linked_funcs2.c8
-rw-r--r--tools/testing/selftests/bpf/progs/linked_list.c47
-rw-r--r--tools/testing/selftests/bpf/progs/linked_list_fail.c5
-rw-r--r--tools/testing/selftests/bpf/progs/linked_list_peek.c113
-rw-r--r--tools/testing/selftests/bpf/progs/local_kptr_stash.c32
-rw-r--r--tools/testing/selftests/bpf/progs/local_storage.c20
-rw-r--r--tools/testing/selftests/bpf/progs/loop1.c7
-rw-r--r--tools/testing/selftests/bpf/progs/loop2.c7
-rw-r--r--tools/testing/selftests/bpf/progs/loop3.c7
-rw-r--r--tools/testing/selftests/bpf/progs/loop4.c4
-rw-r--r--tools/testing/selftests/bpf/progs/loop6.c21
-rw-r--r--tools/testing/selftests/bpf/progs/lpm_trie.h30
-rw-r--r--tools/testing/selftests/bpf/progs/lpm_trie_bench.c230
-rw-r--r--tools/testing/selftests/bpf/progs/lpm_trie_map.c19
-rw-r--r--tools/testing/selftests/bpf/progs/lsm_cgroup.c8
-rw-r--r--tools/testing/selftests/bpf/progs/lsm_tailcall.c34
-rw-r--r--tools/testing/selftests/bpf/progs/map_excl.c34
-rw-r--r--tools/testing/selftests/bpf/progs/map_kptr.c12
-rw-r--r--tools/testing/selftests/bpf/progs/map_kptr_fail.c4
-rw-r--r--tools/testing/selftests/bpf/progs/map_percpu_stats.c2
-rw-r--r--tools/testing/selftests/bpf/progs/map_ptr_kern.c2
-rw-r--r--tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c229
-rw-r--r--tools/testing/selftests/bpf/progs/missed_kprobe.c2
-rw-r--r--tools/testing/selftests/bpf/progs/missed_kprobe_recursion.c8
-rw-r--r--tools/testing/selftests/bpf/progs/mmap_inner_array.c57
-rw-r--r--tools/testing/selftests/bpf/progs/mptcp_bpf.h42
-rw-r--r--tools/testing/selftests/bpf/progs/mptcp_sock.c4
-rw-r--r--tools/testing/selftests/bpf/progs/mptcp_subflow.c128
-rw-r--r--tools/testing/selftests/bpf/progs/mptcpify.c4
-rw-r--r--tools/testing/selftests/bpf/progs/nested_acquire.c33
-rw-r--r--tools/testing/selftests/bpf/progs/nested_trust_common.h2
-rw-r--r--tools/testing/selftests/bpf/progs/nested_trust_failure.c8
-rw-r--r--tools/testing/selftests/bpf/progs/nested_trust_success.c8
-rw-r--r--tools/testing/selftests/bpf/progs/net_timestamping.c248
-rw-r--r--tools/testing/selftests/bpf/progs/netif_receive_skb.c5
-rw-r--r--tools/testing/selftests/bpf/progs/netns_cookie_prog.c19
-rw-r--r--tools/testing/selftests/bpf/progs/preempt_lock.c212
-rw-r--r--tools/testing/selftests/bpf/progs/prepare.c27
-rw-r--r--tools/testing/selftests/bpf/progs/priv_freplace_prog.c13
-rw-r--r--tools/testing/selftests/bpf/progs/priv_map.c13
-rw-r--r--tools/testing/selftests/bpf/progs/priv_prog.c13
-rw-r--r--tools/testing/selftests/bpf/progs/pro_epilogue.c154
-rw-r--r--tools/testing/selftests/bpf/progs/pro_epilogue_goto_start.c149
-rw-r--r--tools/testing/selftests/bpf/progs/pro_epilogue_with_kfunc.c88
-rw-r--r--tools/testing/selftests/bpf/progs/profiler.inc.h24
-rw-r--r--tools/testing/selftests/bpf/progs/pyperf.h7
-rw-r--r--tools/testing/selftests/bpf/progs/raw_tp_null.c31
-rw-r--r--tools/testing/selftests/bpf/progs/raw_tp_null_fail.c24
-rw-r--r--tools/testing/selftests/bpf/progs/rbtree.c91
-rw-r--r--tools/testing/selftests/bpf/progs/rbtree_fail.c31
-rw-r--r--tools/testing/selftests/bpf/progs/rbtree_search.c206
-rw-r--r--tools/testing/selftests/bpf/progs/rcu_read_lock.c186
-rw-r--r--tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c4
-rw-r--r--tools/testing/selftests/bpf/progs/read_cgroupfs_xattr.c60
-rw-r--r--tools/testing/selftests/bpf/progs/read_vsyscall.c59
-rw-r--r--tools/testing/selftests/bpf/progs/recvmsg_unix_prog.c3
-rw-r--r--tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c4
-rw-r--r--tools/testing/selftests/bpf/progs/res_spin_lock.c147
-rw-r--r--tools/testing/selftests/bpf/progs/res_spin_lock_fail.c244
-rw-r--r--tools/testing/selftests/bpf/progs/security_bpf_map.c69
-rw-r--r--tools/testing/selftests/bpf/progs/sendmsg4_prog.c6
-rw-r--r--tools/testing/selftests/bpf/progs/sendmsg6_prog.c57
-rw-r--r--tools/testing/selftests/bpf/progs/sendmsg_unix_prog.c9
-rw-r--r--tools/testing/selftests/bpf/progs/set_global_vars.c106
-rw-r--r--tools/testing/selftests/bpf/progs/setget_sockopt.c45
-rw-r--r--tools/testing/selftests/bpf/progs/sk_storage_omem_uncharge.c4
-rw-r--r--tools/testing/selftests/bpf/progs/skb_pkt_end.c13
-rw-r--r--tools/testing/selftests/bpf/progs/sock_addr_kern.c65
-rw-r--r--tools/testing/selftests/bpf/progs/sock_iter_batch.c56
-rw-r--r--tools/testing/selftests/bpf/progs/sockopt_qos_to_cc.c16
-rw-r--r--tools/testing/selftests/bpf/progs/stacktrace_map.c (renamed from tools/testing/selftests/bpf/progs/test_stacktrace_map.c)2
-rw-r--r--tools/testing/selftests/bpf/progs/stream.c237
-rw-r--r--tools/testing/selftests/bpf/progs/stream_fail.c33
-rw-r--r--tools/testing/selftests/bpf/progs/string_kfuncs_failure1.c93
-rw-r--r--tools/testing/selftests/bpf/progs/string_kfuncs_failure2.c24
-rw-r--r--tools/testing/selftests/bpf/progs/string_kfuncs_success.c46
-rw-r--r--tools/testing/selftests/bpf/progs/strncmp_bench.c5
-rw-r--r--tools/testing/selftests/bpf/progs/strobemeta.h22
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_autocreate.c52
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_autocreate2.c32
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_detach.c22
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_forgotten_cb.c19
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping1.c59
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_id_ops_mapping2.c59
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_kptr_return.c30
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_kptr_return_fail__invalid_scalar.c26
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_kptr_return_fail__local_kptr.c34
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_kptr_return_fail__nonzero_offset.c25
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_kptr_return_fail__wrong_type.c30
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_maybe_null.c29
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_maybe_null_fail.c24
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_module.c90
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_multi_pages.c102
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_nulled_out_cb.c22
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_private_stack.c62
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_private_stack_fail.c62
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_private_stack_recur.c50
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_refcounted.c31
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_refcounted_fail__global_subprog.c39
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_refcounted_fail__ref_leak.c22
-rw-r--r--tools/testing/selftests/bpf/progs/struct_ops_refcounted_fail__tail_call.c36
-rw-r--r--tools/testing/selftests/bpf/progs/summarization.c78
-rw-r--r--tools/testing/selftests/bpf/progs/summarization_freplace.c33
-rw-r--r--tools/testing/selftests/bpf/progs/syscall.c9
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c37
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c73
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c65
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy_fentry.c38
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_fail.c64
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_freplace.c23
-rw-r--r--tools/testing/selftests/bpf/progs/task_kfunc_common.h3
-rw-r--r--tools/testing/selftests/bpf/progs/task_kfunc_failure.c14
-rw-r--r--tools/testing/selftests/bpf/progs/task_kfunc_success.c107
-rw-r--r--tools/testing/selftests/bpf/progs/task_local_data.bpf.h237
-rw-r--r--tools/testing/selftests/bpf/progs/task_ls_recursion.c17
-rw-r--r--tools/testing/selftests/bpf/progs/task_ls_uptr.c63
-rw-r--r--tools/testing/selftests/bpf/progs/task_storage_nodeadlock.c4
-rw-r--r--tools/testing/selftests/bpf/progs/task_work.c107
-rw-r--r--tools/testing/selftests/bpf/progs/task_work_fail.c96
-rw-r--r--tools/testing/selftests/bpf/progs/task_work_stress.c73
-rw-r--r--tools/testing/selftests/bpf/progs/tc_bpf2bpf.c25
-rw-r--r--tools/testing/selftests/bpf/progs/tc_dummy.c12
-rw-r--r--tools/testing/selftests/bpf/progs/tcp_ca_incompl_cong_ops.c12
-rw-r--r--tools/testing/selftests/bpf/progs/tcp_ca_kfunc.c121
-rw-r--r--tools/testing/selftests/bpf/progs/tcp_ca_unsupp_cong_op.c2
-rw-r--r--tools/testing/selftests/bpf/progs/tcp_ca_update.c18
-rw-r--r--tools/testing/selftests/bpf/progs/tcp_ca_write_sk_pacing.c20
-rw-r--r--tools/testing/selftests/bpf/progs/tcp_rtt.c6
-rw-r--r--tools/testing/selftests/bpf/progs/test_access_variable_array.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_attach_probe.c64
-rw-r--r--tools/testing/selftests/bpf/progs/test_bpf_cookie.c16
-rw-r--r--tools/testing/selftests/bpf/progs/test_bpf_ma.c4
-rw-r--r--tools/testing/selftests/bpf/progs/test_bpf_nf.c109
-rw-r--r--tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c1
-rw-r--r--tools/testing/selftests/bpf/progs/test_btf_ext.c22
-rw-r--r--tools/testing/selftests/bpf/progs/test_btf_skc_cls_ingress.c98
-rw-r--r--tools/testing/selftests/bpf/progs/test_build_id.c31
-rw-r--r--tools/testing/selftests/bpf/progs/test_cgroup1_hierarchy.c4
-rw-r--r--tools/testing/selftests/bpf/progs/test_cls_redirect.c15
-rw-r--r--tools/testing/selftests/bpf/progs/test_cls_redirect.h2
-rw-r--r--tools/testing/selftests/bpf/progs/test_cls_redirect_dynptr.c8
-rw-r--r--tools/testing/selftests/bpf/progs/test_core_read_macros.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c5
-rw-r--r--tools/testing/selftests/bpf/progs/test_core_reloc_type_id.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_fill_link_info.c19
-rw-r--r--tools/testing/selftests/bpf/progs/test_get_xattr.c61
-rw-r--r--tools/testing/selftests/bpf/progs/test_global_func1.c8
-rw-r--r--tools/testing/selftests/bpf/progs/test_global_func10.c6
-rw-r--r--tools/testing/selftests/bpf/progs/test_global_func15.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_global_func_ctx_args.c19
-rw-r--r--tools/testing/selftests/bpf/progs/test_global_map_resize.c34
-rw-r--r--tools/testing/selftests/bpf/progs/test_kernel_flag.c28
-rw-r--r--tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c8
-rw-r--r--tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c43
-rw-r--r--tools/testing/selftests/bpf/progs/test_libbpf_get_fd_by_id_opts.c1
-rw-r--r--tools/testing/selftests/bpf/progs/test_lookup_key.c6
-rw-r--r--tools/testing/selftests/bpf/progs/test_lwt_redirect.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_lwt_seg6local.c6
-rw-r--r--tools/testing/selftests/bpf/progs/test_map_in_map.c26
-rw-r--r--tools/testing/selftests/bpf/progs/test_module_attach.c31
-rw-r--r--tools/testing/selftests/bpf/progs/test_ns_current_pid_tgid.c31
-rw-r--r--tools/testing/selftests/bpf/progs/test_overhead.c5
-rw-r--r--tools/testing/selftests/bpf/progs/test_perf_skip.c15
-rw-r--r--tools/testing/selftests/bpf/progs/test_pinning_devmap.c20
-rw-r--r--tools/testing/selftests/bpf/progs/test_pinning_htab.c25
-rw-r--r--tools/testing/selftests/bpf/progs/test_ptr_untrusted.c8
-rw-r--r--tools/testing/selftests/bpf/progs/test_rdonly_maps.c3
-rw-r--r--tools/testing/selftests/bpf/progs/test_ringbuf_n.c47
-rw-r--r--tools/testing/selftests/bpf/progs/test_ringbuf_write.c46
-rw-r--r--tools/testing/selftests/bpf/progs/test_seg6_loop.c4
-rw-r--r--tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c1
-rw-r--r--tools/testing/selftests/bpf/progs/test_send_signal_kern.c35
-rw-r--r--tools/testing/selftests/bpf/progs/test_set_remove_xattr.c133
-rw-r--r--tools/testing/selftests/bpf/progs/test_sig_in_xattr.c6
-rw-r--r--tools/testing/selftests/bpf/progs/test_siphash.h64
-rw-r--r--tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_skb_cgroup_id_kern.c45
-rw-r--r--tools/testing/selftests/bpf/progs/test_skb_ctx.c4
-rw-r--r--tools/testing/selftests/bpf/progs/test_skmsg_load_helpers.c27
-rw-r--r--tools/testing/selftests/bpf/progs/test_sock_fields.c5
-rw-r--r--tools/testing/selftests/bpf/progs/test_sockmap_change_tail.c45
-rw-r--r--tools/testing/selftests/bpf/progs/test_sockmap_kern.h20
-rw-r--r--tools/testing/selftests/bpf/progs/test_sockmap_ktls.c40
-rw-r--r--tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c17
-rw-r--r--tools/testing/selftests/bpf/progs/test_sockmap_redir.c68
-rw-r--r--tools/testing/selftests/bpf/progs/test_sockmap_skb_verdict_attach.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_sockmap_strp.c53
-rw-r--r--tools/testing/selftests/bpf/progs/test_spin_lock.c65
-rw-r--r--tools/testing/selftests/bpf/progs/test_spin_lock_fail.c117
-rw-r--r--tools/testing/selftests/bpf/progs/test_subprogs_extable.c6
-rw-r--r--tools/testing/selftests/bpf/progs/test_sysctl_loop1.c9
-rw-r--r--tools/testing/selftests/bpf/progs/test_sysctl_loop2.c9
-rw-r--r--tools/testing/selftests/bpf/progs/test_sysctl_prog.c11
-rw-r--r--tools/testing/selftests/bpf/progs/test_task_local_data.c65
-rw-r--r--tools/testing/selftests/bpf/progs/test_task_under_cgroup.c2
-rw-r--r--tools/testing/selftests/bpf/progs/test_tc_change_tail.c106
-rw-r--r--tools/testing/selftests/bpf/progs/test_tc_dtime.c39
-rw-r--r--tools/testing/selftests/bpf/progs/test_tc_link.c62
-rw-r--r--tools/testing/selftests/bpf/progs/test_tc_tunnel.c5
-rw-r--r--tools/testing/selftests/bpf/progs/test_tcp_check_syncookie_kern.c167
-rw-r--r--tools/testing/selftests/bpf/progs/test_tcp_custom_syncookie.c591
-rw-r--r--tools/testing/selftests/bpf/progs/test_tcp_custom_syncookie.h138
-rw-r--r--tools/testing/selftests/bpf/progs/test_tcp_hdr_options.c5
-rw-r--r--tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c15
-rw-r--r--tools/testing/selftests/bpf/progs/test_tcpnotify_kern.c1
-rw-r--r--tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c24
-rw-r--r--tools/testing/selftests/bpf/progs/test_tunnel_kern.c74
-rw-r--r--tools/testing/selftests/bpf/progs/test_uprobe.c38
-rw-r--r--tools/testing/selftests/bpf/progs/test_usdt.c45
-rw-r--r--tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c12
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp.c3
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c10
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_devmap_tailcall.c29
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c12
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_dynptr.c10
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_loop.c3
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_meta.c476
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_noinline.c32
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_pull_data.c48
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_redirect.c26
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_vlan.c22
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_with_cpumap_helpers.c7
-rw-r--r--tools/testing/selftests/bpf/progs/test_xdp_with_devmap_helpers.c2
-rw-r--r--tools/testing/selftests/bpf/progs/timer.c37
-rw-r--r--tools/testing/selftests/bpf/progs/timer_failure.c2
-rw-r--r--tools/testing/selftests/bpf/progs/timer_interrupt.c48
-rw-r--r--tools/testing/selftests/bpf/progs/timer_lockup.c87
-rw-r--r--tools/testing/selftests/bpf/progs/timer_mim.c2
-rw-r--r--tools/testing/selftests/bpf/progs/timer_mim_reject.c2
-rw-r--r--tools/testing/selftests/bpf/progs/token_lsm.c32
-rw-r--r--tools/testing/selftests/bpf/progs/tracing_failure.c32
-rw-r--r--tools/testing/selftests/bpf/progs/tracing_struct.c65
-rw-r--r--tools/testing/selftests/bpf/progs/tracing_struct_many_args.c95
-rw-r--r--tools/testing/selftests/bpf/progs/trigger_bench.c132
-rw-r--r--tools/testing/selftests/bpf/progs/type_cast.c13
-rw-r--r--tools/testing/selftests/bpf/progs/uninit_stack.c5
-rw-r--r--tools/testing/selftests/bpf/progs/unsupported_ops.c22
-rw-r--r--tools/testing/selftests/bpf/progs/update_map_in_htab.c30
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi.c50
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi_consumers.c39
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi_pid_filter.c40
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi_session.c71
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi_session_cookie.c48
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi_session_recursive.c44
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi_session_single.c44
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_multi_verifier.c31
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_syscall.c15
-rw-r--r--tools/testing/selftests/bpf/progs/uprobe_syscall_executed.c73
-rw-r--r--tools/testing/selftests/bpf/progs/uptr_failure.c105
-rw-r--r--tools/testing/selftests/bpf/progs/uptr_map_failure.c27
-rw-r--r--tools/testing/selftests/bpf/progs/uptr_update_failure.c42
-rw-r--r--tools/testing/selftests/bpf/progs/uretprobe_stack.c96
-rw-r--r--tools/testing/selftests/bpf/progs/user_ringbuf_fail.c22
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_and.c8
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_arena.c257
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_arena_large.c274
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_array_access.c206
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_basic_stack.c2
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_bits_iter.c232
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_bounds.c598
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_bounds_deduction.c11
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c888
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_bpf_trap.c71
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c52
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_const.c98
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_const_or.c4
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_ctx.c76
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_d_path.c4
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_direct_packet_access.c2
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_div_overflow.c4
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_global_ptr_args.c310
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_global_subprogs.c43
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_gotol.c6
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_helper_access_var_len.c12
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_helper_restricted.c8
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_int_ptr.c17
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c550
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_jit_convergence.c114
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_kfunc_prog_types.c170
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_ldsx.c290
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_linked_scalars.c34
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_live_stack.c294
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_load_acquire.c234
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_loops1.c45
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_lsm.c162
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_map_in_map.c120
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_map_ptr.c7
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_may_goto_1.c109
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_may_goto_2.c28
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_movsx.c115
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_mtu.c20
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_mul.c38
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_netfilter_ctx.c6
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_or_jmp32_k.c41
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_precision.c179
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_private_stack.c359
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_raw_stack.c4
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_ref_tracking.c6
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_scalar_ids.c323
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_sdiv.c439
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_search_pruning.c23
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_sock.c164
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_sock_addr.c331
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_sockmap_mutate.c187
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_spill_fill.c626
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_spin_lock.c30
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_stack_ptr.c52
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_store_release.c301
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_subprog_precision.c95
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_tailcall.c31
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_tailcall_jit.c105
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_unpriv.c235
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_value_illegal_alu.c47
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_value_ptr_arith.c38
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_var_off.c14
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_vfs_accept.c103
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_vfs_reject.c176
-rw-r--r--tools/testing/selftests/bpf/progs/wq.c189
-rw-r--r--tools/testing/selftests/bpf/progs/wq_failures.c144
-rw-r--r--tools/testing/selftests/bpf/progs/xdp_flowtable.c148
-rw-r--r--tools/testing/selftests/bpf/progs/xdp_metadata.c13
-rw-r--r--tools/testing/selftests/bpf/progs/xdp_redirect_map.c94
-rw-r--r--tools/testing/selftests/bpf/progs/xdp_redirect_multi_kern.c41
-rw-r--r--tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c10
-rw-r--r--tools/testing/selftests/bpf/progs/xdping_kern.c3
-rw-r--r--tools/testing/selftests/bpf/progs/xfrm_info.c1
-rw-r--r--tools/testing/selftests/bpf/progs/xsk_xdp_progs.c50
-rw-r--r--tools/testing/selftests/bpf/sdt.h2
-rwxr-xr-xtools/testing/selftests/bpf/test_bpftool_map.sh398
-rwxr-xr-xtools/testing/selftests/bpf/test_bpftool_synctypes.py28
-rw-r--r--tools/testing/selftests/bpf/test_btf.h6
-rw-r--r--tools/testing/selftests/bpf/test_cgroup_storage.c174
-rw-r--r--tools/testing/selftests/bpf/test_cpp.cpp9
-rw-r--r--tools/testing/selftests/bpf/test_dev_cgroup.c85
-rw-r--r--tools/testing/selftests/bpf/test_flow_dissector.c780
-rwxr-xr-xtools/testing/selftests/bpf/test_flow_dissector.sh178
-rw-r--r--tools/testing/selftests/bpf/test_kmods/.gitignore (renamed from tools/testing/selftests/bpf/bpf_testmod/.gitignore)0
-rw-r--r--tools/testing/selftests/bpf/test_kmods/Makefile21
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_test_modorder_x.c39
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_test_modorder_y.c39
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_test_no_cfi.c84
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_test_rqspinlock.c209
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_testmod-events.h (renamed from tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h)14
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_testmod.c1761
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_testmod.h125
-rw-r--r--tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h (renamed from tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h)57
-rw-r--r--tools/testing/selftests/bpf/test_lirc_mode2_user.c2
-rw-r--r--tools/testing/selftests/bpf/test_loader.c804
-rw-r--r--tools/testing/selftests/bpf/test_lru_map.c108
-rwxr-xr-xtools/testing/selftests/bpf/test_lwt_ip_encap.sh476
-rwxr-xr-xtools/testing/selftests/bpf/test_lwt_seg6local.sh156
-rw-r--r--tools/testing/selftests/bpf/test_maps.c25
-rw-r--r--tools/testing/selftests/bpf/test_progs.c532
-rw-r--r--tools/testing/selftests/bpf/test_progs.h118
-rwxr-xr-xtools/testing/selftests/bpf/test_skb_cgroup_id.sh63
-rw-r--r--tools/testing/selftests/bpf/test_skb_cgroup_id_user.c183
-rw-r--r--tools/testing/selftests/bpf/test_sock_addr.c1433
-rwxr-xr-xtools/testing/selftests/bpf/test_sock_addr.sh58
-rw-r--r--tools/testing/selftests/bpf/test_sockmap.c359
-rwxr-xr-xtools/testing/selftests/bpf/test_tc_tunnel.sh14
-rwxr-xr-xtools/testing/selftests/bpf/test_tcp_check_syncookie.sh85
-rw-r--r--tools/testing/selftests/bpf/test_tcp_check_syncookie_user.c299
-rw-r--r--tools/testing/selftests/bpf/test_tcpnotify_user.c20
-rwxr-xr-xtools/testing/selftests/bpf/test_tunnel.sh645
-rw-r--r--tools/testing/selftests/bpf/test_verifier.c77
-rwxr-xr-xtools/testing/selftests/bpf/test_xdp_meta.sh58
-rwxr-xr-xtools/testing/selftests/bpf/test_xdp_redirect.sh79
-rwxr-xr-xtools/testing/selftests/bpf/test_xdp_redirect_multi.sh214
-rwxr-xr-xtools/testing/selftests/bpf/test_xdp_veth.sh121
-rwxr-xr-xtools/testing/selftests/bpf/test_xdp_vlan.sh233
-rwxr-xr-xtools/testing/selftests/bpf/test_xdp_vlan_mode_generic.sh9
-rwxr-xr-xtools/testing/selftests/bpf/test_xdp_vlan_mode_native.sh9
-rwxr-xr-xtools/testing/selftests/bpf/test_xsk.sh2
-rw-r--r--tools/testing/selftests/bpf/testing_helpers.c159
-rw-r--r--tools/testing/selftests/bpf/testing_helpers.h13
-rw-r--r--tools/testing/selftests/bpf/trace_helpers.c454
-rw-r--r--tools/testing/selftests/bpf/trace_helpers.h12
-rw-r--r--tools/testing/selftests/bpf/unpriv_helpers.c95
-rw-r--r--tools/testing/selftests/bpf/uprobe_multi.c47
-rw-r--r--tools/testing/selftests/bpf/uprobe_multi.ld11
-rw-r--r--tools/testing/selftests/bpf/uptr_test_common.h63
-rw-r--r--tools/testing/selftests/bpf/usdt.h545
-rw-r--r--tools/testing/selftests/bpf/verifier/bpf_loop_inline.c6
-rw-r--r--tools/testing/selftests/bpf/verifier/bpf_st_mem.c4
-rw-r--r--tools/testing/selftests/bpf/verifier/calls.c49
-rw-r--r--tools/testing/selftests/bpf/verifier/dead_code.c3
-rw-r--r--tools/testing/selftests/bpf/verifier/jmp32.c33
-rw-r--r--tools/testing/selftests/bpf/verifier/jset.c10
-rw-r--r--tools/testing/selftests/bpf/verifier/map_kptr.c4
-rw-r--r--tools/testing/selftests/bpf/verifier/precise.c4
-rwxr-xr-xtools/testing/selftests/bpf/verify_sig_setup.sh11
-rw-r--r--tools/testing/selftests/bpf/veristat.c1297
-rw-r--r--tools/testing/selftests/bpf/veristat.cfg1
-rwxr-xr-xtools/testing/selftests/bpf/vmtest.sh116
-rwxr-xr-xtools/testing/selftests/bpf/with_addr.sh54
-rwxr-xr-xtools/testing/selftests/bpf/with_tunnels.sh36
-rw-r--r--tools/testing/selftests/bpf/xdp_hw_metadata.c191
-rw-r--r--tools/testing/selftests/bpf/xdp_redirect_multi.c226
-rw-r--r--tools/testing/selftests/bpf/xdping.c2
-rw-r--r--tools/testing/selftests/bpf/xsk.h4
-rw-r--r--tools/testing/selftests/bpf/xsk_xdp_common.h1
-rw-r--r--tools/testing/selftests/bpf/xskxceiver.c375
-rw-r--r--tools/testing/selftests/bpf/xskxceiver.h18
-rw-r--r--tools/testing/selftests/breakpoints/step_after_suspend_test.c43
-rw-r--r--tools/testing/selftests/cachestat/test_cachestat.c63
-rw-r--r--tools/testing/selftests/capabilities/test_execve.c12
-rw-r--r--tools/testing/selftests/capabilities/validate_cap.c7
-rw-r--r--tools/testing/selftests/cgroup/.gitignore11
-rw-r--r--tools/testing/selftests/cgroup/Makefile36
-rw-r--r--tools/testing/selftests/cgroup/config1
-rw-r--r--tools/testing/selftests/cgroup/lib/cgroup_util.c (renamed from tools/testing/selftests/cgroup/cgroup_util.c)154
-rw-r--r--tools/testing/selftests/cgroup/lib/include/cgroup_util.h (renamed from tools/testing/selftests/cgroup/cgroup_util.h)25
-rw-r--r--tools/testing/selftests/cgroup/lib/libcgroup.mk19
-rw-r--r--tools/testing/selftests/cgroup/test_core.c89
-rw-r--r--tools/testing/selftests/cgroup/test_cpu.c144
-rw-r--r--tools/testing/selftests/cgroup/test_cpuset.c2
-rwxr-xr-xtools/testing/selftests/cgroup/test_cpuset_prs.sh689
-rwxr-xr-xtools/testing/selftests/cgroup/test_cpuset_v1_base.sh77
-rwxr-xr-xtools/testing/selftests/cgroup/test_cpuset_v1_hp.sh46
-rw-r--r--tools/testing/selftests/cgroup/test_freezer.c665
-rw-r--r--tools/testing/selftests/cgroup/test_hugetlb_memcg.c2
-rw-r--r--tools/testing/selftests/cgroup/test_kill.c2
-rw-r--r--tools/testing/selftests/cgroup/test_kmem.c11
-rw-r--r--tools/testing/selftests/cgroup/test_memcontrol.c370
-rw-r--r--tools/testing/selftests/cgroup/test_pids.c181
-rw-r--r--tools/testing/selftests/cgroup/test_zswap.c293
-rw-r--r--tools/testing/selftests/clone3/clone3.c7
-rw-r--r--tools/testing/selftests/clone3/clone3_cap_checkpoint_restore.c2
-rw-r--r--tools/testing/selftests/clone3/clone3_clear_sighand.c2
-rw-r--r--tools/testing/selftests/clone3/clone3_selftests.h2
-rw-r--r--tools/testing/selftests/clone3/clone3_set_tid.c121
-rw-r--r--tools/testing/selftests/core/.gitignore1
-rw-r--r--tools/testing/selftests/core/Makefile2
-rw-r--r--tools/testing/selftests/core/close_range_test.c129
-rw-r--r--tools/testing/selftests/core/unshare_test.c94
-rw-r--r--tools/testing/selftests/coredump/Makefile7
-rw-r--r--tools/testing/selftests/coredump/README.rst50
-rw-r--r--tools/testing/selftests/coredump/config3
-rwxr-xr-xtools/testing/selftests/coredump/stackdump14
-rw-r--r--tools/testing/selftests/coredump/stackdump_test.c1827
-rwxr-xr-xtools/testing/selftests/cpu-hotplug/cpu-on-off-test.sh4
-rw-r--r--tools/testing/selftests/cpufreq/.gitignore2
-rw-r--r--tools/testing/selftests/cpufreq/Makefile1
-rwxr-xr-xtools/testing/selftests/cpufreq/cpufreq.sh36
-rwxr-xr-xtools/testing/selftests/cpufreq/main.sh60
-rwxr-xr-xtools/testing/selftests/cpufreq/module.sh6
-rw-r--r--tools/testing/selftests/damon/.gitignore3
-rw-r--r--tools/testing/selftests/damon/Makefile27
-rw-r--r--tools/testing/selftests/damon/_chk_dependency.sh38
-rw-r--r--tools/testing/selftests/damon/_common.sh11
-rw-r--r--tools/testing/selftests/damon/_damon_sysfs.py610
-rw-r--r--tools/testing/selftests/damon/_debugfs_common.sh52
-rw-r--r--tools/testing/selftests/damon/access_memory.c2
-rw-r--r--tools/testing/selftests/damon/access_memory_even.c39
-rw-r--r--tools/testing/selftests/damon/config1
-rwxr-xr-xtools/testing/selftests/damon/damon_nr_regions.py147
-rwxr-xr-xtools/testing/selftests/damon/damos_apply_interval.py67
-rwxr-xr-xtools/testing/selftests/damon/damos_quota.py70
-rwxr-xr-xtools/testing/selftests/damon/damos_quota_goal.py80
-rwxr-xr-xtools/testing/selftests/damon/damos_tried_regions.py65
-rwxr-xr-xtools/testing/selftests/damon/debugfs_attrs.sh17
-rwxr-xr-xtools/testing/selftests/damon/debugfs_duplicate_context_creation.sh27
-rwxr-xr-xtools/testing/selftests/damon/debugfs_empty_targets.sh13
-rwxr-xr-xtools/testing/selftests/damon/debugfs_huge_count_read_write.sh22
-rwxr-xr-xtools/testing/selftests/damon/debugfs_rm_non_contexts.sh19
-rwxr-xr-xtools/testing/selftests/damon/debugfs_schemes.sh19
-rwxr-xr-xtools/testing/selftests/damon/debugfs_target_ids.sh19
-rwxr-xr-xtools/testing/selftests/damon/drgn_dump_damon_status.py222
-rw-r--r--tools/testing/selftests/damon/huge_count_read_write.c48
-rwxr-xr-xtools/testing/selftests/damon/lru_sort.sh8
-rwxr-xr-xtools/testing/selftests/damon/reclaim.sh8
-rwxr-xr-xtools/testing/selftests/damon/sysfs.py272
-rwxr-xr-xtools/testing/selftests/damon/sysfs.sh11
-rwxr-xr-xtools/testing/selftests/damon/sysfs_memcg_path_leak.sh43
-rwxr-xr-xtools/testing/selftests/damon/sysfs_no_op_commit_break.py72
-rwxr-xr-xtools/testing/selftests/damon/sysfs_update_removed_scheme_dir.sh8
-rwxr-xr-x[-rw-r--r--]tools/testing/selftests/damon/sysfs_update_schemes_tried_regions_hang.py2
-rwxr-xr-x[-rw-r--r--]tools/testing/selftests/damon/sysfs_update_schemes_tried_regions_wss_estimation.py2
-rw-r--r--tools/testing/selftests/devices/error_logs/Makefile3
-rwxr-xr-xtools/testing/selftests/devices/error_logs/test_device_error_logs.py85
-rw-r--r--tools/testing/selftests/devices/probe/Makefile4
-rw-r--r--tools/testing/selftests/devices/probe/boards/Dell Inc.,XPS 13 9300.yaml40
-rw-r--r--tools/testing/selftests/devices/probe/boards/google,spherion.yaml54
-rwxr-xr-xtools/testing/selftests/devices/probe/test_discoverable_devices.py358
-rw-r--r--tools/testing/selftests/dma/dma_map_benchmark.c1
-rw-r--r--tools/testing/selftests/dmabuf-heaps/config3
-rw-r--r--tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c251
-rw-r--r--tools/testing/selftests/drivers/dma-buf/udmabuf.c232
-rw-r--r--tools/testing/selftests/drivers/net/.gitignore3
-rw-r--r--tools/testing/selftests/drivers/net/Makefile39
-rw-r--r--tools/testing/selftests/drivers/net/README.rst136
-rw-r--r--tools/testing/selftests/drivers/net/bonding/Makefile19
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond-break-lacpdu-tx.sh19
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond-eth-type-change.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond-lladdr-target.sh21
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond_ipsec_offload.sh156
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond_lacp_prio.sh108
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond_macvlan.sh99
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond_macvlan_ipvlan.sh96
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond_options.sh287
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/bond_passive_lacp.sh105
-rw-r--r--tools/testing/selftests/drivers/net/bonding/bond_topo_2d1c.sh11
-rw-r--r--tools/testing/selftests/drivers/net/bonding/bond_topo_3d1c.sh2
-rw-r--r--tools/testing/selftests/drivers/net/bonding/config7
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/dev_addr_lists.sh2
-rw-r--r--tools/testing/selftests/drivers/net/bonding/lag_lib.sh7
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/mode-1-recovery-updelay.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/bonding/mode-2-recovery-updelay.sh2
l---------tools/testing/selftests/drivers/net/bonding/net_forwarding_lib.sh1
-rw-r--r--tools/testing/selftests/drivers/net/config10
-rw-r--r--tools/testing/selftests/drivers/net/dsa/Makefile26
l---------tools/testing/selftests/drivers/net/dsa/bridge_locked_port.sh2
l---------tools/testing/selftests/drivers/net/dsa/bridge_mdb.sh2
l---------tools/testing/selftests/drivers/net/dsa/bridge_mld.sh2
l---------tools/testing/selftests/drivers/net/dsa/bridge_vlan_aware.sh2
l---------tools/testing/selftests/drivers/net/dsa/bridge_vlan_mcast.sh2
l---------tools/testing/selftests/drivers/net/dsa/bridge_vlan_unaware.sh2
l---------tools/testing/selftests/drivers/net/dsa/lib.sh1
l---------tools/testing/selftests/drivers/net/dsa/local_termination.sh2
l---------tools/testing/selftests/drivers/net/dsa/no_forwarding.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/dsa/run_net_forwarding_test.sh9
l---------tools/testing/selftests/drivers/net/dsa/tc_actions.sh2
l---------tools/testing/selftests/drivers/net/dsa/tc_common.sh1
l---------tools/testing/selftests/drivers/net/dsa/tc_taprio.sh1
-rwxr-xr-xtools/testing/selftests/drivers/net/dsa/test_bridge_fdb_stress.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/hds.py329
-rw-r--r--tools/testing/selftests/drivers/net/hw/.gitignore3
-rw-r--r--tools/testing/selftests/drivers/net/hw/Makefile57
-rw-r--r--tools/testing/selftests/drivers/net/hw/config11
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/csum.py110
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/devlink_port_split.py (renamed from tools/testing/selftests/net/devlink_port_split.py)0
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/devlink_rate_tc_bw.py465
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/devmem.py77
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/ethtool.sh (renamed from tools/testing/selftests/net/forwarding/ethtool.sh)20
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/ethtool_extended_state.sh (renamed from tools/testing/selftests/net/forwarding/ethtool_extended_state.sh)5
-rw-r--r--tools/testing/selftests/drivers/net/hw/ethtool_lib.sh (renamed from tools/testing/selftests/net/forwarding/ethtool_lib.sh)0
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/ethtool_mm.sh (renamed from tools/testing/selftests/net/forwarding/ethtool_mm.sh)3
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/ethtool_rmon.sh (renamed from tools/testing/selftests/net/forwarding/ethtool_rmon.sh)8
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/hw_stats_l3.sh (renamed from tools/testing/selftests/net/forwarding/hw_stats_l3.sh)20
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/hw_stats_l3_gre.sh (renamed from tools/testing/selftests/net/forwarding/hw_stats_l3_gre.sh)8
-rw-r--r--tools/testing/selftests/drivers/net/hw/iou-zcrx.c464
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/iou-zcrx.py145
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/irq.py99
-rw-r--r--tools/testing/selftests/drivers/net/hw/lib/py/__init__.py33
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/loopback.sh (renamed from tools/testing/selftests/net/forwarding/loopback.sh)5
-rw-r--r--tools/testing/selftests/drivers/net/hw/ncdevmem.c1524
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/nic_timestamp.py113
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/pp_alloc_fail.py144
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/rss_api.py476
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/rss_ctx.py832
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/rss_flow_label.py167
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/rss_input_xfrm.py92
-rw-r--r--tools/testing/selftests/drivers/net/hw/settings1
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/tso.py261
-rwxr-xr-xtools/testing/selftests/drivers/net/hw/xsk_reconfig.py60
-rw-r--r--tools/testing/selftests/drivers/net/lib/py/__init__.py54
-rw-r--r--tools/testing/selftests/drivers/net/lib/py/env.py285
-rw-r--r--tools/testing/selftests/drivers/net/lib/py/load.py71
-rw-r--r--tools/testing/selftests/drivers/net/lib/py/remote.py15
-rw-r--r--tools/testing/selftests/drivers/net/lib/py/remote_netns.py21
-rw-r--r--tools/testing/selftests/drivers/net/lib/py/remote_ssh.py39
-rw-r--r--tools/testing/selftests/drivers/net/lib/sh/lib_netcons.sh371
-rwxr-xr-xtools/testing/selftests/drivers/net/microchip/ksz9477_qos.sh668
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_drops.sh4
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap_l3_exceptions.sh12
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap_policer.sh88
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap_tunnel_ipip.sh4
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap_tunnel_ipip6.sh4
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap_tunnel_vxlan.sh4
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/devlink_trap_tunnel_vxlan_ipv6.sh4
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/ethtool_lanes.sh17
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/mirror_gre.sh71
-rw-r--r--tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh18
-rw-r--r--tools/testing/selftests/drivers/net/mlxsw/mlxsw_lib.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/qos_ets_strict.sh171
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/qos_max_descriptors.sh121
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/qos_mc_aware.sh142
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/rif_bridge.sh1
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/rif_lag.sh1
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/rif_lag_vlan.sh1
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/rtnetlink.sh10
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/sch_ets.sh26
-rw-r--r--tools/testing/selftests/drivers/net/mlxsw/sch_red_core.sh215
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/sch_red_ets.sh32
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/sch_red_root.sh18
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh55
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/spectrum-2/resource_scale.sh3
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh57
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh3
-rwxr-xr-xtools/testing/selftests/drivers/net/mlxsw/tc_sample.sh4
-rwxr-xr-xtools/testing/selftests/drivers/net/napi_id.py23
-rw-r--r--tools/testing/selftests/drivers/net/napi_id_helper.c100
-rwxr-xr-xtools/testing/selftests/drivers/net/napi_threaded.py143
-rwxr-xr-xtools/testing/selftests/drivers/net/netcons_basic.sh73
-rwxr-xr-xtools/testing/selftests/drivers/net/netcons_cmdline.sh65
-rwxr-xr-xtools/testing/selftests/drivers/net/netcons_fragmented_msg.sh122
-rwxr-xr-xtools/testing/selftests/drivers/net/netcons_overflow.sh67
-rwxr-xr-xtools/testing/selftests/drivers/net/netcons_sysdata.sh272
-rw-r--r--tools/testing/selftests/drivers/net/netdevsim/Makefile23
-rw-r--r--tools/testing/selftests/drivers/net/netdevsim/config1
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/devlink.sh57
-rw-r--r--tools/testing/selftests/drivers/net/netdevsim/ethtool-features.sh31
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/ethtool-fec.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/fib_notifications.sh6
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/macsec-offload.sh117
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/nexthop.sh2
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/peer.sh144
-rw-r--r--tools/testing/selftests/drivers/net/netdevsim/settings1
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/tc-mq-visibility.sh9
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh79
-rwxr-xr-xtools/testing/selftests/drivers/net/netpoll_basic.py396
-rwxr-xr-xtools/testing/selftests/drivers/net/ocelot/psfp.sh8
-rwxr-xr-xtools/testing/selftests/drivers/net/ping.py241
-rwxr-xr-xtools/testing/selftests/drivers/net/psp.py627
-rw-r--r--tools/testing/selftests/drivers/net/psp_responder.c483
-rwxr-xr-xtools/testing/selftests/drivers/net/queues.py125
-rwxr-xr-xtools/testing/selftests/drivers/net/shaper.py461
-rwxr-xr-xtools/testing/selftests/drivers/net/stats.py320
-rw-r--r--tools/testing/selftests/drivers/net/team/Makefile16
-rw-r--r--tools/testing/selftests/drivers/net/team/config2
-rwxr-xr-xtools/testing/selftests/drivers/net/team/dev_addr_lists.sh4
l---------tools/testing/selftests/drivers/net/team/lag_lib.sh1
l---------tools/testing/selftests/drivers/net/team/net_forwarding_lib.sh1
-rwxr-xr-xtools/testing/selftests/drivers/net/team/options.sh188
-rwxr-xr-xtools/testing/selftests/drivers/net/team/propagation.sh80
-rw-r--r--tools/testing/selftests/drivers/net/virtio_net/Makefile12
-rwxr-xr-xtools/testing/selftests/drivers/net/virtio_net/basic_features.sh131
-rw-r--r--tools/testing/selftests/drivers/net/virtio_net/config8
-rw-r--r--tools/testing/selftests/drivers/net/virtio_net/virtio_net_common.sh99
-rwxr-xr-xtools/testing/selftests/drivers/net/xdp.py790
-rw-r--r--tools/testing/selftests/drivers/ntsync/.gitignore1
-rw-r--r--tools/testing/selftests/drivers/ntsync/Makefile7
-rw-r--r--tools/testing/selftests/drivers/ntsync/config1
-rw-r--r--tools/testing/selftests/drivers/ntsync/ntsync.c1343
-rw-r--r--tools/testing/selftests/drivers/platform/x86/intel/ifs/Makefile6
-rwxr-xr-xtools/testing/selftests/drivers/platform/x86/intel/ifs/test_ifs.sh494
-rw-r--r--tools/testing/selftests/drivers/s390x/uvdevice/test_uvdevice.c6
-rw-r--r--tools/testing/selftests/dt/Makefile2
-rw-r--r--tools/testing/selftests/dt/ktap_helpers.sh70
-rwxr-xr-xtools/testing/selftests/dt/test_unprobed_devices.sh21
-rwxr-xr-xtools/testing/selftests/efivarfs/efivarfs.sh168
-rw-r--r--tools/testing/selftests/exec/.gitignore7
-rw-r--r--tools/testing/selftests/exec/Makefile43
-rwxr-xr-xtools/testing/selftests/exec/binfmt_script.py10
-rwxr-xr-xtools/testing/selftests/exec/check-exec-tests.sh205
-rw-r--r--tools/testing/selftests/exec/check-exec.c463
-rw-r--r--tools/testing/selftests/exec/config2
-rw-r--r--tools/testing/selftests/exec/execveat.c91
-rw-r--r--tools/testing/selftests/exec/false.c (renamed from tools/build/feature/test-libslang-include-subdir.c)4
-rw-r--r--tools/testing/selftests/exec/load_address.c83
-rw-r--r--tools/testing/selftests/exec/recursion-depth.c53
-rw-r--r--tools/testing/selftests/fchmodat2/Makefile11
-rw-r--r--tools/testing/selftests/filesystems/.gitignore4
-rw-r--r--tools/testing/selftests/filesystems/Makefile2
-rw-r--r--tools/testing/selftests/filesystems/anon_inode_test.c69
-rw-r--r--tools/testing/selftests/filesystems/binderfs/Makefile2
-rw-r--r--tools/testing/selftests/filesystems/binderfs/binderfs_test.c3
-rw-r--r--tools/testing/selftests/filesystems/eventfd/.gitignore2
-rw-r--r--tools/testing/selftests/filesystems/eventfd/Makefile7
-rw-r--r--tools/testing/selftests/filesystems/eventfd/eventfd_test.c311
-rw-r--r--tools/testing/selftests/filesystems/fclog.c130
-rw-r--r--tools/testing/selftests/filesystems/file_stressor.c194
-rw-r--r--tools/testing/selftests/filesystems/fuse/.gitignore3
-rw-r--r--tools/testing/selftests/filesystems/fuse/Makefile21
-rw-r--r--tools/testing/selftests/filesystems/fuse/fuse_mnt.c146
-rw-r--r--tools/testing/selftests/filesystems/fuse/fusectl_test.c140
-rw-r--r--tools/testing/selftests/filesystems/kernfs_test.c38
-rw-r--r--tools/testing/selftests/filesystems/mount-notify/.gitignore3
-rw-r--r--tools/testing/selftests/filesystems/mount-notify/Makefile11
-rw-r--r--tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c528
-rw-r--r--tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c555
-rw-r--r--tools/testing/selftests/filesystems/nsfs/.gitignore (renamed from tools/testing/selftests/nsfs/.gitignore)1
-rw-r--r--tools/testing/selftests/filesystems/nsfs/Makefile (renamed from tools/testing/selftests/nsfs/Makefile)4
-rw-r--r--tools/testing/selftests/filesystems/nsfs/config (renamed from tools/testing/selftests/nsfs/config)0
-rw-r--r--tools/testing/selftests/filesystems/nsfs/iterate_mntns.c163
-rw-r--r--tools/testing/selftests/filesystems/nsfs/owner.c (renamed from tools/testing/selftests/nsfs/owner.c)0
-rw-r--r--tools/testing/selftests/filesystems/nsfs/pidns.c (renamed from tools/testing/selftests/nsfs/pidns.c)0
-rw-r--r--tools/testing/selftests/filesystems/overlayfs/.gitignore1
-rw-r--r--tools/testing/selftests/filesystems/overlayfs/Makefile11
-rw-r--r--tools/testing/selftests/filesystems/overlayfs/dev_in_maps.c28
-rw-r--r--tools/testing/selftests/filesystems/overlayfs/set_layers_via_fds.c720
-rw-r--r--tools/testing/selftests/filesystems/statmount/.gitignore1
-rw-r--r--tools/testing/selftests/filesystems/statmount/Makefile8
-rw-r--r--tools/testing/selftests/filesystems/statmount/listmount_test.c66
-rw-r--r--tools/testing/selftests/filesystems/statmount/statmount.h82
-rw-r--r--tools/testing/selftests/filesystems/statmount/statmount_test.c190
-rw-r--r--tools/testing/selftests/filesystems/statmount/statmount_test_ns.c291
-rw-r--r--tools/testing/selftests/filesystems/utils.c589
-rw-r--r--tools/testing/selftests/filesystems/utils.h48
-rw-r--r--tools/testing/selftests/filesystems/wrappers.h108
-rw-r--r--tools/testing/selftests/ftrace/.gitignore1
-rw-r--r--tools/testing/selftests/ftrace/Makefile2
-rw-r--r--tools/testing/selftests/ftrace/config27
-rwxr-xr-xtools/testing/selftests/ftrace/ftracetest10
-rwxr-xr-xtools/testing/selftests/ftrace/ftracetest-ktap2
-rw-r--r--tools/testing/selftests/ftrace/poll.c74
-rw-r--r--tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc101
-rw-r--r--tools/testing/selftests/ftrace/test.d/00basic/test_ownership.tc42
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/add_remove_btfarg.tc2
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe.tc66
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/add_remove_fprobe_repeat.tc19
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/add_remove_tprobe.tc14
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/add_remove_tprobe_module.tc61
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/add_remove_uprobe.tc32
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/dynevent_limitations.tc63
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/fprobe_args_vfs.tc41
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/fprobe_entry_arg.tc18
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/fprobe_syntax_errors.tc9
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/test_duplicates.tc2
-rw-r--r--tools/testing/selftests/ftrace/test.d/dynevent/tprobe_syntax_errors.tc1
-rw-r--r--tools/testing/selftests/ftrace/test.d/event/event-mod.tc191
-rw-r--r--tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc30
-rw-r--r--tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc42
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/fgraph-multi-filter.tc177
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/fgraph-multi.tc103
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/fgraph-profiler.tc31
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/fgraph-retval.tc2
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc2
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/func-filter-pid.tc29
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/func_hotplug.tc42
-rw-r--r--tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc9
-rw-r--r--tools/testing/selftests/ftrace/test.d/functions33
-rw-r--r--tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_char.tc2
-rw-r--r--tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_string.tc2
-rw-r--r--tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_vfs.tc40
-rw-r--r--tools/testing/selftests/ftrace/test.d/kprobe/kprobe_eventname.tc3
-rw-r--r--tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc4
-rw-r--r--tools/testing/selftests/ftrace/test.d/kprobe/kretprobe_entry_arg.tc18
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-action-hist-xfail.tc1
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onchange-action-hist.tc3
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-snapshot-action-hist.tc3
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-hist-expressions.tc1
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-hist-mod.tc2
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-hist-poll.tc74
-rw-r--r--tools/testing/selftests/futex/Makefile2
-rw-r--r--tools/testing/selftests/futex/functional/.gitignore7
-rw-r--r--tools/testing/selftests/futex/functional/Makefile15
-rw-r--r--tools/testing/selftests/futex/functional/futex_numa.c263
-rw-r--r--tools/testing/selftests/futex/functional/futex_numa_mpol.c219
-rw-r--r--tools/testing/selftests/futex/functional/futex_priv_hash.c270
-rw-r--r--tools/testing/selftests/futex/functional/futex_requeue.c76
-rw-r--r--tools/testing/selftests/futex/functional/futex_requeue_pi.c259
-rw-r--r--tools/testing/selftests/futex/functional/futex_requeue_pi_mismatched_ops.c86
-rw-r--r--tools/testing/selftests/futex/functional/futex_requeue_pi_signal_restart.c129
-rw-r--r--tools/testing/selftests/futex/functional/futex_wait.c103
-rw-r--r--tools/testing/selftests/futex/functional/futex_wait_private_mapped_file.c83
-rw-r--r--tools/testing/selftests/futex/functional/futex_wait_timeout.c139
-rw-r--r--tools/testing/selftests/futex/functional/futex_wait_uninitialized_heap.c76
-rw-r--r--tools/testing/selftests/futex/functional/futex_wait_wouldblock.c78
-rw-r--r--tools/testing/selftests/futex/functional/futex_waitv.c99
-rwxr-xr-xtools/testing/selftests/futex/functional/run.sh59
-rw-r--r--tools/testing/selftests/futex/include/futex2test.h78
-rw-r--r--tools/testing/selftests/futex/include/futextest.h22
-rw-r--r--tools/testing/selftests/futex/include/logging.h148
-rw-r--r--tools/testing/selftests/gpio/Makefile2
-rw-r--r--tools/testing/selftests/gpio/config1
-rwxr-xr-xtools/testing/selftests/gpio/gpio-aggregator.sh727
-rwxr-xr-xtools/testing/selftests/gpio/gpio-mockup.sh9
-rwxr-xr-xtools/testing/selftests/gpio/gpio-sim.sh31
-rw-r--r--tools/testing/selftests/hid/.gitignore2
-rw-r--r--tools/testing/selftests/hid/Makefile9
-rw-r--r--tools/testing/selftests/hid/config.common3
-rw-r--r--tools/testing/selftests/hid/hid_bpf.c1028
-rw-r--r--tools/testing/selftests/hid/hid_common.h480
-rw-r--r--tools/testing/selftests/hid/hidraw.c694
-rw-r--r--tools/testing/selftests/hid/progs/hid.c438
-rw-r--r--tools/testing/selftests/hid/progs/hid_bpf_helpers.h63
-rwxr-xr-xtools/testing/selftests/hid/run-hid-tools-tests.sh16
-rw-r--r--tools/testing/selftests/hid/tests/base.py116
-rw-r--r--tools/testing/selftests/hid/tests/base_device.py448
-rw-r--r--tools/testing/selftests/hid/tests/base_gamepad.py238
-rw-r--r--tools/testing/selftests/hid/tests/test_apple_keyboard.py3
-rw-r--r--tools/testing/selftests/hid/tests/test_gamepad.py458
-rw-r--r--tools/testing/selftests/hid/tests/test_ite_keyboard.py3
-rw-r--r--tools/testing/selftests/hid/tests/test_mouse.py70
-rw-r--r--tools/testing/selftests/hid/tests/test_multitouch.py2
-rw-r--r--tools/testing/selftests/hid/tests/test_sony.py7
-rw-r--r--tools/testing/selftests/hid/tests/test_tablet.py724
-rw-r--r--tools/testing/selftests/hid/tests/test_wacom_generic.py445
-rwxr-xr-xtools/testing/selftests/hid/vmtest.sh668
-rw-r--r--tools/testing/selftests/intel_pstate/Makefile2
-rwxr-xr-xtools/testing/selftests/intel_pstate/run.sh9
-rw-r--r--tools/testing/selftests/iommu/Makefile3
-rw-r--r--tools/testing/selftests/iommu/config7
-rw-r--r--tools/testing/selftests/iommu/iommufd.c1308
-rw-r--r--tools/testing/selftests/iommu/iommufd_fail_nth.c134
-rw-r--r--tools/testing/selftests/iommu/iommufd_utils.h588
-rw-r--r--tools/testing/selftests/ipc/msgque.c60
-rw-r--r--tools/testing/selftests/kcmp/kcmp_test.c2
-rw-r--r--tools/testing/selftests/kexec/.gitignore2
-rw-r--r--tools/testing/selftests/kexec/Makefile7
-rw-r--r--tools/testing/selftests/kexec/test_kexec_jump.c72
-rwxr-xr-xtools/testing/selftests/kexec/test_kexec_jump.sh42
-rw-r--r--tools/testing/selftests/kho/arm64.conf9
-rw-r--r--tools/testing/selftests/kho/init.c95
-rwxr-xr-xtools/testing/selftests/kho/vmtest.sh185
-rw-r--r--tools/testing/selftests/kho/x86.conf7
-rw-r--r--tools/testing/selftests/kmod/config5
-rw-r--r--tools/testing/selftests/kselftest.h164
-rw-r--r--tools/testing/selftests/kselftest/ksft.py93
-rw-r--r--tools/testing/selftests/kselftest/ktap_helpers.sh126
-rwxr-xr-xtools/testing/selftests/kselftest/module.sh2
-rw-r--r--tools/testing/selftests/kselftest/runner.sh7
-rwxr-xr-xtools/testing/selftests/kselftest_deps.sh1
-rw-r--r--tools/testing/selftests/kselftest_harness.h461
-rw-r--r--tools/testing/selftests/kselftest_harness/.gitignore2
-rw-r--r--tools/testing/selftests/kselftest_harness/Makefile8
-rw-r--r--tools/testing/selftests/kselftest_harness/harness-selftest.c136
-rw-r--r--tools/testing/selftests/kselftest_harness/harness-selftest.expected64
-rwxr-xr-xtools/testing/selftests/kselftest_harness/harness-selftest.sh13
-rw-r--r--tools/testing/selftests/kvm/.gitignore5
-rw-r--r--tools/testing/selftests/kvm/Makefile312
-rw-r--r--tools/testing/selftests/kvm/Makefile.kvm350
-rw-r--r--tools/testing/selftests/kvm/aarch64/arch_timer.c480
-rw-r--r--tools/testing/selftests/kvm/aarch64/set_id_regs.c481
-rw-r--r--tools/testing/selftests/kvm/access_tracking_perf_test.c281
-rw-r--r--tools/testing/selftests/kvm/arch_timer.c252
-rw-r--r--tools/testing/selftests/kvm/arm64/aarch32_id_regs.c (renamed from tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c)14
-rw-r--r--tools/testing/selftests/kvm/arm64/arch_timer.c215
-rw-r--r--tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c1059
-rw-r--r--tools/testing/selftests/kvm/arm64/debug-exceptions.c (renamed from tools/testing/selftests/kvm/aarch64/debug-exceptions.c)32
-rw-r--r--tools/testing/selftests/kvm/arm64/external_aborts.c372
-rw-r--r--tools/testing/selftests/kvm/arm64/get-reg-list.c (renamed from tools/testing/selftests/kvm/aarch64/get-reg-list.c)194
-rw-r--r--tools/testing/selftests/kvm/arm64/hello_el2.c71
-rw-r--r--tools/testing/selftests/kvm/arm64/host_sve.c127
-rw-r--r--tools/testing/selftests/kvm/arm64/hypercalls.c (renamed from tools/testing/selftests/kvm/aarch64/hypercalls.c)56
-rw-r--r--tools/testing/selftests/kvm/arm64/kvm-uuid.c70
-rw-r--r--tools/testing/selftests/kvm/arm64/no-vgic-v3.c177
-rw-r--r--tools/testing/selftests/kvm/arm64/page_fault_test.c (renamed from tools/testing/selftests/kvm/aarch64/page_fault_test.c)19
-rw-r--r--tools/testing/selftests/kvm/arm64/psci_test.c (renamed from tools/testing/selftests/kvm/aarch64/psci_test.c)113
-rw-r--r--tools/testing/selftests/kvm/arm64/set_id_regs.c806
-rw-r--r--tools/testing/selftests/kvm/arm64/smccc_filter.c (renamed from tools/testing/selftests/kvm/aarch64/smccc_filter.c)17
-rw-r--r--tools/testing/selftests/kvm/arm64/vcpu_width_config.c (renamed from tools/testing/selftests/kvm/aarch64/vcpu_width_config.c)0
-rw-r--r--tools/testing/selftests/kvm/arm64/vgic_init.c (renamed from tools/testing/selftests/kvm/aarch64/vgic_init.c)311
-rw-r--r--tools/testing/selftests/kvm/arm64/vgic_irq.c (renamed from tools/testing/selftests/kvm/aarch64/vgic_irq.c)42
-rw-r--r--tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c408
-rw-r--r--tools/testing/selftests/kvm/arm64/vpmu_counter_access.c (renamed from tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c)134
-rw-r--r--tools/testing/selftests/kvm/coalesced_io_test.c236
-rw-r--r--tools/testing/selftests/kvm/config1
-rw-r--r--tools/testing/selftests/kvm/demand_paging_test.c94
-rw-r--r--tools/testing/selftests/kvm/dirty_log_perf_test.c48
-rw-r--r--tools/testing/selftests/kvm/dirty_log_test.c550
-rw-r--r--tools/testing/selftests/kvm/get-reg-list.c9
-rw-r--r--tools/testing/selftests/kvm/guest_memfd_test.c243
-rw-r--r--tools/testing/selftests/kvm/guest_print_test.c20
-rw-r--r--tools/testing/selftests/kvm/hardware_disable_test.c4
-rw-r--r--tools/testing/selftests/kvm/include/aarch64/gic_v3.h82
-rw-r--r--tools/testing/selftests/kvm/include/arm64/arch_timer.h (renamed from tools/testing/selftests/kvm/include/aarch64/arch_timer.h)42
-rw-r--r--tools/testing/selftests/kvm/include/arm64/delay.h (renamed from tools/testing/selftests/kvm/include/aarch64/delay.h)0
-rw-r--r--tools/testing/selftests/kvm/include/arm64/gic.h (renamed from tools/testing/selftests/kvm/include/aarch64/gic.h)21
-rw-r--r--tools/testing/selftests/kvm/include/arm64/gic_v3.h604
-rw-r--r--tools/testing/selftests/kvm/include/arm64/gic_v3_its.h19
-rw-r--r--tools/testing/selftests/kvm/include/arm64/kvm_util_arch.h10
-rw-r--r--tools/testing/selftests/kvm/include/arm64/processor.h (renamed from tools/testing/selftests/kvm/include/aarch64/processor.h)186
-rw-r--r--tools/testing/selftests/kvm/include/arm64/spinlock.h (renamed from tools/testing/selftests/kvm/include/aarch64/spinlock.h)0
-rw-r--r--tools/testing/selftests/kvm/include/arm64/ucall.h (renamed from tools/testing/selftests/kvm/include/aarch64/ucall.h)2
-rw-r--r--tools/testing/selftests/kvm/include/arm64/vgic.h (renamed from tools/testing/selftests/kvm/include/aarch64/vgic.h)8
-rw-r--r--tools/testing/selftests/kvm/include/kvm_test_harness.h36
-rw-r--r--tools/testing/selftests/kvm/include/kvm_util.h1271
-rw-r--r--tools/testing/selftests/kvm/include/kvm_util_base.h1084
-rw-r--r--tools/testing/selftests/kvm/include/kvm_util_types.h20
-rw-r--r--tools/testing/selftests/kvm/include/loongarch/kvm_util_arch.h7
-rw-r--r--tools/testing/selftests/kvm/include/loongarch/processor.h141
-rw-r--r--tools/testing/selftests/kvm/include/loongarch/ucall.h20
-rw-r--r--tools/testing/selftests/kvm/include/lru_gen_util.h51
-rw-r--r--tools/testing/selftests/kvm/include/memstress.h1
-rw-r--r--tools/testing/selftests/kvm/include/riscv/arch_timer.h71
-rw-r--r--tools/testing/selftests/kvm/include/riscv/kvm_util_arch.h7
-rw-r--r--tools/testing/selftests/kvm/include/riscv/processor.h135
-rw-r--r--tools/testing/selftests/kvm/include/riscv/sbi.h141
-rw-r--r--tools/testing/selftests/kvm/include/riscv/ucall.h1
-rw-r--r--tools/testing/selftests/kvm/include/s390/debug_print.h69
-rw-r--r--tools/testing/selftests/kvm/include/s390/diag318_test_handler.h (renamed from tools/testing/selftests/kvm/include/s390x/diag318_test_handler.h)0
-rw-r--r--tools/testing/selftests/kvm/include/s390/facility.h50
-rw-r--r--tools/testing/selftests/kvm/include/s390/kvm_util_arch.h7
-rw-r--r--tools/testing/selftests/kvm/include/s390/processor.h (renamed from tools/testing/selftests/kvm/include/s390x/processor.h)11
-rw-r--r--tools/testing/selftests/kvm/include/s390/sie.h240
-rw-r--r--tools/testing/selftests/kvm/include/s390/ucall.h (renamed from tools/testing/selftests/kvm/include/s390x/ucall.h)2
-rw-r--r--tools/testing/selftests/kvm/include/sparsebit.h56
-rw-r--r--tools/testing/selftests/kvm/include/test_util.h24
-rw-r--r--tools/testing/selftests/kvm/include/timer_test.h45
-rw-r--r--tools/testing/selftests/kvm/include/userfaultfd_util.h19
-rw-r--r--tools/testing/selftests/kvm/include/x86/apic.h (renamed from tools/testing/selftests/kvm/include/x86_64/apic.h)31
-rw-r--r--tools/testing/selftests/kvm/include/x86/evmcs.h (renamed from tools/testing/selftests/kvm/include/x86_64/evmcs.h)3
-rw-r--r--tools/testing/selftests/kvm/include/x86/hyperv.h (renamed from tools/testing/selftests/kvm/include/x86_64/hyperv.h)21
-rw-r--r--tools/testing/selftests/kvm/include/x86/kvm_util_arch.h51
-rw-r--r--tools/testing/selftests/kvm/include/x86/mce.h (renamed from tools/testing/selftests/kvm/include/x86_64/mce.h)2
-rw-r--r--tools/testing/selftests/kvm/include/x86/pmu.h123
-rw-r--r--tools/testing/selftests/kvm/include/x86/processor.h (renamed from tools/testing/selftests/kvm/include/x86_64/processor.h)307
-rw-r--r--tools/testing/selftests/kvm/include/x86/sev.h147
-rw-r--r--tools/testing/selftests/kvm/include/x86/svm.h (renamed from tools/testing/selftests/kvm/include/x86_64/svm.h)6
-rw-r--r--tools/testing/selftests/kvm/include/x86/svm_util.h (renamed from tools/testing/selftests/kvm/include/x86_64/svm_util.h)3
-rw-r--r--tools/testing/selftests/kvm/include/x86/ucall.h (renamed from tools/testing/selftests/kvm/include/x86_64/ucall.h)2
-rw-r--r--tools/testing/selftests/kvm/include/x86/vmx.h (renamed from tools/testing/selftests/kvm/include/x86_64/vmx.h)2
-rw-r--r--tools/testing/selftests/kvm/irqfd_test.c135
-rw-r--r--tools/testing/selftests/kvm/kvm_binary_stats_test.c2
-rw-r--r--tools/testing/selftests/kvm/kvm_create_max_vcpus.c30
-rw-r--r--tools/testing/selftests/kvm/kvm_page_table_test.c4
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/gic.c (renamed from tools/testing/selftests/kvm/lib/aarch64/gic.c)18
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/gic_private.h (renamed from tools/testing/selftests/kvm/lib/aarch64/gic_private.h)4
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/gic_v3.c (renamed from tools/testing/selftests/kvm/lib/aarch64/gic_v3.c)99
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c248
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/handlers.S (renamed from tools/testing/selftests/kvm/lib/aarch64/handlers.S)0
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/processor.c (renamed from tools/testing/selftests/kvm/lib/aarch64/processor.c)212
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/spinlock.c (renamed from tools/testing/selftests/kvm/lib/aarch64/spinlock.c)0
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/ucall.c (renamed from tools/testing/selftests/kvm/lib/aarch64/ucall.c)0
-rw-r--r--tools/testing/selftests/kvm/lib/arm64/vgic.c (renamed from tools/testing/selftests/kvm/lib/aarch64/vgic.c)100
-rw-r--r--tools/testing/selftests/kvm/lib/assert.c3
-rw-r--r--tools/testing/selftests/kvm/lib/kvm_util.c466
-rw-r--r--tools/testing/selftests/kvm/lib/loongarch/exception.S59
-rw-r--r--tools/testing/selftests/kvm/lib/loongarch/processor.c346
-rw-r--r--tools/testing/selftests/kvm/lib/loongarch/ucall.c38
-rw-r--r--tools/testing/selftests/kvm/lib/lru_gen_util.c387
-rw-r--r--tools/testing/selftests/kvm/lib/memstress.c15
-rw-r--r--tools/testing/selftests/kvm/lib/riscv/handlers.S104
-rw-r--r--tools/testing/selftests/kvm/lib/riscv/processor.c175
-rw-r--r--tools/testing/selftests/kvm/lib/riscv/ucall.c1
-rw-r--r--tools/testing/selftests/kvm/lib/s390/diag318_test_handler.c (renamed from tools/testing/selftests/kvm/lib/s390x/diag318_test_handler.c)0
-rw-r--r--tools/testing/selftests/kvm/lib/s390/facility.c14
-rw-r--r--tools/testing/selftests/kvm/lib/s390/processor.c (renamed from tools/testing/selftests/kvm/lib/s390x/processor.c)23
-rw-r--r--tools/testing/selftests/kvm/lib/s390/ucall.c (renamed from tools/testing/selftests/kvm/lib/s390x/ucall.c)0
-rw-r--r--tools/testing/selftests/kvm/lib/sparsebit.c52
-rw-r--r--tools/testing/selftests/kvm/lib/test_util.c44
-rw-r--r--tools/testing/selftests/kvm/lib/ucall_common.c8
-rw-r--r--tools/testing/selftests/kvm/lib/userfaultfd_util.c158
-rw-r--r--tools/testing/selftests/kvm/lib/x86/apic.c (renamed from tools/testing/selftests/kvm/lib/x86_64/apic.c)0
-rw-r--r--tools/testing/selftests/kvm/lib/x86/handlers.S (renamed from tools/testing/selftests/kvm/lib/x86_64/handlers.S)0
-rw-r--r--tools/testing/selftests/kvm/lib/x86/hyperv.c113
-rw-r--r--tools/testing/selftests/kvm/lib/x86/memstress.c (renamed from tools/testing/selftests/kvm/lib/x86_64/memstress.c)2
-rw-r--r--tools/testing/selftests/kvm/lib/x86/pmu.c80
-rw-r--r--tools/testing/selftests/kvm/lib/x86/processor.c (renamed from tools/testing/selftests/kvm/lib/x86_64/processor.c)518
-rw-r--r--tools/testing/selftests/kvm/lib/x86/sev.c199
-rw-r--r--tools/testing/selftests/kvm/lib/x86/svm.c (renamed from tools/testing/selftests/kvm/lib/x86_64/svm.c)1
-rw-r--r--tools/testing/selftests/kvm/lib/x86/ucall.c (renamed from tools/testing/selftests/kvm/lib/x86_64/ucall.c)0
-rw-r--r--tools/testing/selftests/kvm/lib/x86/vmx.c (renamed from tools/testing/selftests/kvm/lib/x86_64/vmx.c)4
-rw-r--r--tools/testing/selftests/kvm/lib/x86_64/hyperv.c46
-rw-r--r--tools/testing/selftests/kvm/memslot_modification_stress_test.c31
-rw-r--r--tools/testing/selftests/kvm/memslot_perf_test.c21
-rw-r--r--tools/testing/selftests/kvm/mmu_stress_test.c (renamed from tools/testing/selftests/kvm/max_guest_memory_test.c)172
-rw-r--r--tools/testing/selftests/kvm/pre_fault_memory_test.c146
-rw-r--r--tools/testing/selftests/kvm/riscv/arch_timer.c109
-rw-r--r--tools/testing/selftests/kvm/riscv/ebreak_test.c83
-rw-r--r--tools/testing/selftests/kvm/riscv/get-reg-list.c277
-rw-r--r--tools/testing/selftests/kvm/riscv/sbi_pmu_test.c731
-rw-r--r--tools/testing/selftests/kvm/rseq_test.c69
-rw-r--r--tools/testing/selftests/kvm/s390/cmma_test.c (renamed from tools/testing/selftests/kvm/s390x/cmma_test.c)16
-rw-r--r--tools/testing/selftests/kvm/s390/config2
-rw-r--r--tools/testing/selftests/kvm/s390/cpumodel_subfuncs_test.c301
-rw-r--r--tools/testing/selftests/kvm/s390/debug_test.c (renamed from tools/testing/selftests/kvm/s390x/debug_test.c)4
-rw-r--r--tools/testing/selftests/kvm/s390/memop.c (renamed from tools/testing/selftests/kvm/s390x/memop.c)38
-rw-r--r--tools/testing/selftests/kvm/s390/resets.c (renamed from tools/testing/selftests/kvm/s390x/resets.c)2
-rw-r--r--tools/testing/selftests/kvm/s390/shared_zeropage_test.c111
-rw-r--r--tools/testing/selftests/kvm/s390/sync_regs_test.c (renamed from tools/testing/selftests/kvm/s390x/sync_regs_test.c)2
-rw-r--r--tools/testing/selftests/kvm/s390/tprot.c (renamed from tools/testing/selftests/kvm/s390x/tprot.c)6
-rw-r--r--tools/testing/selftests/kvm/s390/ucontrol_test.c800
-rw-r--r--tools/testing/selftests/kvm/set_memory_region_test.c121
-rw-r--r--tools/testing/selftests/kvm/steal_time.c58
-rw-r--r--tools/testing/selftests/kvm/x86/amx_test.c (renamed from tools/testing/selftests/kvm/x86_64/amx_test.c)27
-rw-r--r--tools/testing/selftests/kvm/x86/aperfmperf_test.c213
-rw-r--r--tools/testing/selftests/kvm/x86/apic_bus_clock_test.c194
-rw-r--r--tools/testing/selftests/kvm/x86/cpuid_test.c (renamed from tools/testing/selftests/kvm/x86_64/cpuid_test.c)69
-rw-r--r--tools/testing/selftests/kvm/x86/cr4_cpuid_sync_test.c100
-rw-r--r--tools/testing/selftests/kvm/x86/debug_regs.c (renamed from tools/testing/selftests/kvm/x86_64/debug_regs.c)13
-rw-r--r--tools/testing/selftests/kvm/x86/dirty_log_page_splitting_test.c (renamed from tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c)7
-rw-r--r--tools/testing/selftests/kvm/x86/exit_on_emulation_failure_test.c (renamed from tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c)5
-rw-r--r--tools/testing/selftests/kvm/x86/fastops_test.c209
-rw-r--r--tools/testing/selftests/kvm/x86/feature_msrs_test.c113
-rw-r--r--tools/testing/selftests/kvm/x86/fix_hypercall_test.c (renamed from tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c)27
-rw-r--r--tools/testing/selftests/kvm/x86/flds_emulation.h (renamed from tools/testing/selftests/kvm/x86_64/flds_emulation.h)0
-rw-r--r--tools/testing/selftests/kvm/x86/hwcr_msr_test.c (renamed from tools/testing/selftests/kvm/x86_64/hwcr_msr_test.c)2
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_clock.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_clock.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_cpuid.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c)72
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_evmcs.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c)5
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_extended_hypercalls.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_features.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_features.c)22
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_ipi.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_ipi.c)11
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_svm_test.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c)3
-rw-r--r--tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c (renamed from tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush.c)2
-rw-r--r--tools/testing/selftests/kvm/x86/kvm_buslock_test.c135
-rw-r--r--tools/testing/selftests/kvm/x86/kvm_clock_test.c (renamed from tools/testing/selftests/kvm/x86_64/kvm_clock_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/kvm_pv_test.c (renamed from tools/testing/selftests/kvm/x86_64/kvm_pv_test.c)70
-rw-r--r--tools/testing/selftests/kvm/x86/max_vcpuid_cap_test.c (renamed from tools/testing/selftests/kvm/x86_64/max_vcpuid_cap_test.c)22
-rw-r--r--tools/testing/selftests/kvm/x86/monitor_mwait_test.c (renamed from tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c)117
-rw-r--r--tools/testing/selftests/kvm/x86/msrs_test.c489
-rw-r--r--tools/testing/selftests/kvm/x86/nested_emulation_test.c146
-rw-r--r--tools/testing/selftests/kvm/x86/nested_exceptions_test.c (renamed from tools/testing/selftests/kvm/x86_64/nested_exceptions_test.c)4
-rw-r--r--tools/testing/selftests/kvm/x86/nx_huge_pages_test.c (renamed from tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c)7
-rwxr-xr-xtools/testing/selftests/kvm/x86/nx_huge_pages_test.sh (renamed from tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh)13
-rw-r--r--tools/testing/selftests/kvm/x86/platform_info_test.c78
-rw-r--r--tools/testing/selftests/kvm/x86/pmu_counters_test.c697
-rw-r--r--tools/testing/selftests/kvm/x86/pmu_event_filter_test.c (renamed from tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c)185
-rw-r--r--tools/testing/selftests/kvm/x86/private_mem_conversions_test.c (renamed from tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c)3
-rw-r--r--tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c (renamed from tools/testing/selftests/kvm/x86_64/private_mem_kvm_exits_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/recalc_apic_map_test.c (renamed from tools/testing/selftests/kvm/x86_64/recalc_apic_map_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/set_boot_cpu_id.c (renamed from tools/testing/selftests/kvm/x86_64/set_boot_cpu_id.c)17
-rw-r--r--tools/testing/selftests/kvm/x86/set_sregs_test.c (renamed from tools/testing/selftests/kvm/x86_64/set_sregs_test.c)64
-rw-r--r--tools/testing/selftests/kvm/x86/sev_init2_tests.c165
-rw-r--r--tools/testing/selftests/kvm/x86/sev_migrate_tests.c (renamed from tools/testing/selftests/kvm/x86_64/sev_migrate_tests.c)60
-rw-r--r--tools/testing/selftests/kvm/x86/sev_smoke_test.c229
-rw-r--r--tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c (renamed from tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c)8
-rw-r--r--tools/testing/selftests/kvm/x86/smm_test.c (renamed from tools/testing/selftests/kvm/x86_64/smm_test.c)1
-rw-r--r--tools/testing/selftests/kvm/x86/state_test.c (renamed from tools/testing/selftests/kvm/x86_64/state_test.c)6
-rw-r--r--tools/testing/selftests/kvm/x86/svm_int_ctl_test.c (renamed from tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c)8
-rw-r--r--tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c (renamed from tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c)5
-rw-r--r--tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c (renamed from tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c)5
-rw-r--r--tools/testing/selftests/kvm/x86/svm_vmcall_test.c (renamed from tools/testing/selftests/kvm/x86_64/svm_vmcall_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/sync_regs_test.c (renamed from tools/testing/selftests/kvm/x86_64/sync_regs_test.c)123
-rw-r--r--tools/testing/selftests/kvm/x86/triple_fault_event_test.c (renamed from tools/testing/selftests/kvm/x86_64/triple_fault_event_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/tsc_msrs_test.c (renamed from tools/testing/selftests/kvm/x86_64/tsc_msrs_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/tsc_scaling_sync.c (renamed from tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/ucna_injection_test.c (renamed from tools/testing/selftests/kvm/x86_64/ucna_injection_test.c)9
-rw-r--r--tools/testing/selftests/kvm/x86/userspace_io_test.c (renamed from tools/testing/selftests/kvm/x86_64/userspace_io_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c (renamed from tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c)93
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_apic_access_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_apic_access_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_close_while_nested_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_close_while_nested_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c)63
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_exception_with_invalid_guest_state.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c)5
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_invalid_nested_guest_state.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_msrs_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_msrs_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_nested_tsc_scaling_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c)84
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_preemption_timer_test.c)1
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_set_nested_state_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_set_nested_state_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c (renamed from tools/testing/selftests/kvm/x86_64/vmx_tsc_adjust_test.c)0
-rw-r--r--tools/testing/selftests/kvm/x86/xapic_ipi_test.c (renamed from tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c)20
-rw-r--r--tools/testing/selftests/kvm/x86/xapic_state_test.c (renamed from tools/testing/selftests/kvm/x86_64/xapic_state_test.c)91
-rw-r--r--tools/testing/selftests/kvm/x86/xcr0_cpuid_test.c (renamed from tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c)22
-rw-r--r--tools/testing/selftests/kvm/x86/xen_shinfo_test.c (renamed from tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c)144
-rw-r--r--tools/testing/selftests/kvm/x86/xen_vmcall_test.c (renamed from tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c)1
-rw-r--r--tools/testing/selftests/kvm/x86/xss_msr_test.c (renamed from tools/testing/selftests/kvm/x86_64/xss_msr_test.c)2
-rw-r--r--tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c85
-rw-r--r--tools/testing/selftests/kvm/x86_64/get_msr_index_features.c35
-rw-r--r--tools/testing/selftests/kvm/x86_64/platform_info_test.c81
-rw-r--r--tools/testing/selftests/landlock/.gitignore3
-rw-r--r--tools/testing/selftests/landlock/Makefile10
-rw-r--r--tools/testing/selftests/landlock/audit.h478
-rw-r--r--tools/testing/selftests/landlock/audit_test.c672
-rw-r--r--tools/testing/selftests/landlock/base_test.c206
-rw-r--r--tools/testing/selftests/landlock/common.h187
-rw-r--r--tools/testing/selftests/landlock/config5
-rw-r--r--tools/testing/selftests/landlock/fs_test.c1453
-rw-r--r--tools/testing/selftests/landlock/net_test.c291
-rw-r--r--tools/testing/selftests/landlock/ptrace_test.c149
-rw-r--r--tools/testing/selftests/landlock/sandbox-and-launch.c82
-rw-r--r--tools/testing/selftests/landlock/scoped_abstract_unix_test.c1152
-rw-r--r--tools/testing/selftests/landlock/scoped_base_variants.h156
-rw-r--r--tools/testing/selftests/landlock/scoped_common.h28
-rw-r--r--tools/testing/selftests/landlock/scoped_multiple_domain_variants.h152
-rw-r--r--tools/testing/selftests/landlock/scoped_signal_test.c562
-rw-r--r--tools/testing/selftests/landlock/scoped_test.c33
-rw-r--r--tools/testing/selftests/landlock/wait-pipe-sandbox.c131
-rw-r--r--tools/testing/selftests/landlock/wait-pipe.c42
-rw-r--r--tools/testing/selftests/landlock/wrappers.h47
-rw-r--r--tools/testing/selftests/lib.mk97
-rw-r--r--tools/testing/selftests/lib/Makefile3
-rw-r--r--tools/testing/selftests/lib/config3
-rwxr-xr-xtools/testing/selftests/lib/prime_numbers.sh4
-rwxr-xr-xtools/testing/selftests/lib/printf.sh4
-rwxr-xr-xtools/testing/selftests/lib/scanf.sh4
-rwxr-xr-xtools/testing/selftests/lib/strscpy.sh3
-rw-r--r--tools/testing/selftests/livepatch/.gitignore1
-rw-r--r--tools/testing/selftests/livepatch/Makefile6
-rw-r--r--tools/testing/selftests/livepatch/README25
-rw-r--r--tools/testing/selftests/livepatch/config1
-rw-r--r--tools/testing/selftests/livepatch/functions.sh128
-rwxr-xr-xtools/testing/selftests/livepatch/test-callbacks.sh76
-rwxr-xr-xtools/testing/selftests/livepatch/test-ftrace.sh42
-rwxr-xr-xtools/testing/selftests/livepatch/test-kprobe.sh64
-rwxr-xr-xtools/testing/selftests/livepatch/test-livepatch.sh143
-rwxr-xr-xtools/testing/selftests/livepatch/test-shadow-vars.sh2
-rwxr-xr-xtools/testing/selftests/livepatch/test-state.sh26
-rwxr-xr-xtools/testing/selftests/livepatch/test-syscall.sh56
-rwxr-xr-xtools/testing/selftests/livepatch/test-sysfs.sh129
-rw-r--r--tools/testing/selftests/livepatch/test_klp-call_getpid.c44
-rw-r--r--tools/testing/selftests/livepatch/test_modules/Makefile27
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_atomic_replace.c57
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_busy.c70
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo.c121
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_demo2.c93
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_callbacks_mod.c24
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_kprobe.c38
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_livepatch.c51
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_shadow_vars.c301
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_state.c162
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_state2.c191
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_state3.c5
-rw-r--r--tools/testing/selftests/livepatch/test_modules/test_klp_syscall.c116
-rw-r--r--tools/testing/selftests/lkdtm/config2
-rw-r--r--tools/testing/selftests/lkdtm/tests.txt1
-rw-r--r--tools/testing/selftests/lsm/common.h6
-rw-r--r--tools/testing/selftests/lsm/lsm_get_self_attr_test.c10
-rw-r--r--tools/testing/selftests/lsm/lsm_list_modules_test.c17
-rw-r--r--tools/testing/selftests/lsm/lsm_set_self_attr_test.c13
-rw-r--r--tools/testing/selftests/media_tests/regression_test.txt8
-rw-r--r--tools/testing/selftests/membarrier/membarrier_test_multi_thread.c2
-rw-r--r--tools/testing/selftests/membarrier/membarrier_test_single_thread.c2
-rw-r--r--tools/testing/selftests/memfd/fuse_test.c2
-rw-r--r--tools/testing/selftests/memfd/memfd_test.c71
-rw-r--r--tools/testing/selftests/mincore/mincore_selftest.c19
-rw-r--r--tools/testing/selftests/mm/.gitignore14
-rw-r--r--tools/testing/selftests/mm/Makefile73
-rwxr-xr-xtools/testing/selftests/mm/charge_reserved_hugetlb.sh10
-rw-r--r--tools/testing/selftests/mm/compaction_test.c130
-rw-r--r--tools/testing/selftests/mm/config4
-rw-r--r--tools/testing/selftests/mm/cow.c538
-rw-r--r--tools/testing/selftests/mm/droppable.c53
-rw-r--r--tools/testing/selftests/mm/guard-regions.c2141
-rw-r--r--tools/testing/selftests/mm/gup_longterm.c182
-rw-r--r--tools/testing/selftests/mm/gup_test.c9
-rw-r--r--tools/testing/selftests/mm/hmm-tests.c11
-rw-r--r--tools/testing/selftests/mm/hugepage-mmap.c18
-rw-r--r--tools/testing/selftests/mm/hugepage-mremap.c16
-rw-r--r--tools/testing/selftests/mm/hugepage-shm.c18
-rw-r--r--tools/testing/selftests/mm/hugepage-vmemmap.c17
-rw-r--r--tools/testing/selftests/mm/hugetlb-madvise.c10
-rw-r--r--tools/testing/selftests/mm/hugetlb-soft-offline.c228
-rw-r--r--tools/testing/selftests/mm/hugetlb_dio.c125
-rw-r--r--tools/testing/selftests/mm/hugetlb_fault_after_madv.c48
-rw-r--r--tools/testing/selftests/mm/hugetlb_madv_vs_map.c126
-rwxr-xr-xtools/testing/selftests/mm/hugetlb_reparenting_test.sh109
-rw-r--r--tools/testing/selftests/mm/khugepaged.c15
-rw-r--r--tools/testing/selftests/mm/ksm_functional_tests.c370
-rw-r--r--tools/testing/selftests/mm/ksm_tests.c40
-rw-r--r--tools/testing/selftests/mm/madv_populate.c41
-rw-r--r--tools/testing/selftests/mm/map_fixed_noreplace.c104
-rw-r--r--tools/testing/selftests/mm/map_hugetlb.c60
-rw-r--r--tools/testing/selftests/mm/map_populate.c42
-rw-r--r--tools/testing/selftests/mm/memfd_secret.c51
-rw-r--r--tools/testing/selftests/mm/merge.c1174
-rw-r--r--tools/testing/selftests/mm/migration.c137
-rw-r--r--tools/testing/selftests/mm/mkdirty.c3
-rw-r--r--tools/testing/selftests/mm/mlock-random-test.c136
-rw-r--r--tools/testing/selftests/mm/mlock2-tests.c295
-rw-r--r--tools/testing/selftests/mm/mlock2.h19
-rw-r--r--tools/testing/selftests/mm/mrelease_test.c80
-rw-r--r--tools/testing/selftests/mm/mremap_dontunmap.c32
-rw-r--r--tools/testing/selftests/mm/mremap_test.c838
-rw-r--r--tools/testing/selftests/mm/mseal_helpers.h41
-rw-r--r--tools/testing/selftests/mm/mseal_test.c1989
-rw-r--r--tools/testing/selftests/mm/on-fault-limit.c36
-rw-r--r--tools/testing/selftests/mm/page_frag/Makefile18
-rw-r--r--tools/testing/selftests/mm/page_frag/page_frag_test.c198
-rw-r--r--tools/testing/selftests/mm/pagemap_ioctl.c228
-rw-r--r--tools/testing/selftests/mm/pfnmap.c269
-rw-r--r--tools/testing/selftests/mm/pkey-arm64.h140
-rw-r--r--tools/testing/selftests/mm/pkey-helpers.h75
-rw-r--r--tools/testing/selftests/mm/pkey-powerpc.h21
-rw-r--r--tools/testing/selftests/mm/pkey-x86.h12
-rw-r--r--tools/testing/selftests/mm/pkey_sighandler_tests.c546
-rw-r--r--tools/testing/selftests/mm/pkey_util.c41
-rw-r--r--tools/testing/selftests/mm/prctl_thp_disable.c291
-rw-r--r--tools/testing/selftests/mm/process_madv.c344
-rw-r--r--tools/testing/selftests/mm/protection_keys.c333
-rw-r--r--tools/testing/selftests/mm/rmap.c433
-rwxr-xr-xtools/testing/selftests/mm/run_vmtests.sh214
-rw-r--r--tools/testing/selftests/mm/settings2
-rw-r--r--tools/testing/selftests/mm/soft-dirty.c20
-rw-r--r--tools/testing/selftests/mm/split_huge_page_test.c832
-rwxr-xr-xtools/testing/selftests/mm/test_page_frag.sh175
-rwxr-xr-xtools/testing/selftests/mm/test_vmalloc.sh6
-rw-r--r--tools/testing/selftests/mm/thp_settings.c68
-rw-r--r--tools/testing/selftests/mm/thp_settings.h16
-rw-r--r--tools/testing/selftests/mm/thuge-gen.c193
-rw-r--r--tools/testing/selftests/mm/transhuge-stress.c36
-rw-r--r--tools/testing/selftests/mm/uffd-common.c274
-rw-r--r--tools/testing/selftests/mm/uffd-common.h79
-rw-r--r--tools/testing/selftests/mm/uffd-stress.c251
-rw-r--r--tools/testing/selftests/mm/uffd-unit-tests.c760
-rw-r--r--tools/testing/selftests/mm/uffd-wp-mremap.c380
-rw-r--r--tools/testing/selftests/mm/va_high_addr_switch.c468
-rwxr-xr-xtools/testing/selftests/mm/va_high_addr_switch.sh81
-rw-r--r--tools/testing/selftests/mm/virtual_address_range.c161
-rw-r--r--tools/testing/selftests/mm/vm_util.c390
-rw-r--r--tools/testing/selftests/mm/vm_util.h96
-rw-r--r--tools/testing/selftests/mm/write_to_hugetlbfs.c23
-rw-r--r--tools/testing/selftests/module/Makefile (renamed from tools/testing/selftests/user/Makefile)7
-rw-r--r--tools/testing/selftests/module/config3
-rwxr-xr-xtools/testing/selftests/module/find_symbol.sh81
-rw-r--r--tools/testing/selftests/mount_setattr/Makefile2
-rw-r--r--tools/testing/selftests/mount_setattr/mount_setattr_test.c712
-rw-r--r--tools/testing/selftests/move_mount_set_group/move_mount_set_group_test.c4
-rw-r--r--tools/testing/selftests/mqueue/mq_perf_tests.c6
-rw-r--r--tools/testing/selftests/mqueue/setting1
-rw-r--r--tools/testing/selftests/mseal_system_mappings/.gitignore2
-rw-r--r--tools/testing/selftests/mseal_system_mappings/Makefile6
-rw-r--r--tools/testing/selftests/mseal_system_mappings/config1
-rw-r--r--tools/testing/selftests/mseal_system_mappings/sysmap_is_sealed.c119
-rw-r--r--tools/testing/selftests/namespaces/.gitignore3
-rw-r--r--tools/testing/selftests/namespaces/Makefile7
-rw-r--r--tools/testing/selftests/namespaces/config7
-rw-r--r--tools/testing/selftests/namespaces/file_handle_test.c1429
-rw-r--r--tools/testing/selftests/namespaces/init_ino_test.c61
-rw-r--r--tools/testing/selftests/namespaces/nsid_test.c986
-rw-r--r--tools/testing/selftests/nci/nci_dev.c2
-rw-r--r--tools/testing/selftests/net/.gitignore15
-rw-r--r--tools/testing/selftests/net/Makefile340
-rw-r--r--tools/testing/selftests/net/af_unix/Makefile12
-rw-r--r--tools/testing/selftests/net/af_unix/config3
-rw-r--r--tools/testing/selftests/net/af_unix/msg_oob.c891
-rw-r--r--tools/testing/selftests/net/af_unix/scm_inq.c123
-rw-r--r--tools/testing/selftests/net/af_unix/scm_pidfd.c215
-rw-r--r--tools/testing/selftests/net/af_unix/scm_rights.c381
-rw-r--r--tools/testing/selftests/net/af_unix/test_unix_oob.c436
-rwxr-xr-xtools/testing/selftests/net/amt.sh42
-rwxr-xr-xtools/testing/selftests/net/arp_ndisc_untracked_subnets.sh53
-rwxr-xr-xtools/testing/selftests/net/bareudp.sh49
-rw-r--r--tools/testing/selftests/net/bench/Makefile7
-rw-r--r--tools/testing/selftests/net/bench/page_pool/Makefile17
-rw-r--r--tools/testing/selftests/net/bench/page_pool/bench_page_pool_simple.c267
-rw-r--r--tools/testing/selftests/net/bench/page_pool/time_bench.c394
-rw-r--r--tools/testing/selftests/net/bench/page_pool/time_bench.h238
-rwxr-xr-xtools/testing/selftests/net/bench/test_bench_page_pool.sh32
-rw-r--r--tools/testing/selftests/net/bind_bhash.c4
-rw-r--r--tools/testing/selftests/net/bind_wildcard.c783
-rw-r--r--tools/testing/selftests/net/bpf.mk53
-rwxr-xr-xtools/testing/selftests/net/bpf_offload.py (renamed from tools/testing/selftests/bpf/test_offload.py)166
-rwxr-xr-xtools/testing/selftests/net/broadcast_ether_dst.sh83
-rwxr-xr-xtools/testing/selftests/net/broadcast_pmtu.sh47
-rwxr-xr-xtools/testing/selftests/net/busy_poll_test.sh165
-rw-r--r--tools/testing/selftests/net/busy_poller.c358
-rw-r--r--tools/testing/selftests/net/can/.gitignore2
-rw-r--r--tools/testing/selftests/net/can/Makefile11
-rw-r--r--tools/testing/selftests/net/can/config3
-rw-r--r--tools/testing/selftests/net/can/test_raw_filter.c405
-rwxr-xr-xtools/testing/selftests/net/can/test_raw_filter.sh45
-rwxr-xr-xtools/testing/selftests/net/cmsg_ip.sh187
-rwxr-xr-xtools/testing/selftests/net/cmsg_ipv6.sh154
-rw-r--r--tools/testing/selftests/net/cmsg_sender.c185
-rwxr-xr-xtools/testing/selftests/net/cmsg_so_priority.sh151
-rwxr-xr-xtools/testing/selftests/net/cmsg_time.sh32
-rw-r--r--tools/testing/selftests/net/config134
-rwxr-xr-xtools/testing/selftests/net/drop_monitor_tests.sh2
-rw-r--r--tools/testing/selftests/net/epoll_busy_poll.c320
-rwxr-xr-xtools/testing/selftests/net/fcnal-ipv4.sh2
-rwxr-xr-xtools/testing/selftests/net/fcnal-ipv6.sh2
-rwxr-xr-xtools/testing/selftests/net/fcnal-other.sh2
-rwxr-xr-xtools/testing/selftests/net/fcnal-test.sh482
-rwxr-xr-xtools/testing/selftests/net/fdb_flush.sh2
-rwxr-xr-xtools/testing/selftests/net/fdb_notify.sh96
-rwxr-xr-xtools/testing/selftests/net/fib_nexthops.sh122
-rwxr-xr-xtools/testing/selftests/net/fib_rule_tests.sh470
-rwxr-xr-xtools/testing/selftests/net/fib_tests.sh303
-rw-r--r--tools/testing/selftests/net/forwarding/Makefile73
-rw-r--r--tools/testing/selftests/net/forwarding/README50
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_activity_notify.sh170
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh18
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_fdb_local_vlan_0.sh387
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_igmp.sh86
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_mdb.sh2
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_mld.sh87
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_vlan_aware.sh148
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh25
-rw-r--r--tools/testing/selftests/net/forwarding/config52
-rwxr-xr-xtools/testing/selftests/net/forwarding/custom_multipath_hash.sh24
-rw-r--r--tools/testing/selftests/net/forwarding/devlink_lib.sh4
-rw-r--r--tools/testing/selftests/net/forwarding/forwarding.config.sample51
-rwxr-xr-xtools/testing/selftests/net/forwarding/gre_custom_multipath_hash.sh24
-rwxr-xr-xtools/testing/selftests/net/forwarding/gre_inner_v4_multipath.sh2
-rwxr-xr-xtools/testing/selftests/net/forwarding/gre_inner_v6_multipath.sh6
-rwxr-xr-xtools/testing/selftests/net/forwarding/gre_multipath.sh2
-rwxr-xr-xtools/testing/selftests/net/forwarding/gre_multipath_nh.sh41
-rwxr-xr-xtools/testing/selftests/net/forwarding/gre_multipath_nh_res.sh42
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh6
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_custom_multipath_hash.sh24
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_flat.sh14
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_flat_key.sh14
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_flat_keys.sh14
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_hier.sh14
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_hier_key.sh14
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_hier_keys.sh14
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_inner_v4_multipath.sh2
-rwxr-xr-xtools/testing/selftests/net/forwarding/ip6gre_inner_v6_multipath.sh6
-rw-r--r--tools/testing/selftests/net/forwarding/ip6gre_lib.sh84
-rw-r--r--tools/testing/selftests/net/forwarding/ipip_lib.sh1
-rw-r--r--tools/testing/selftests/net/forwarding/lib.sh672
-rwxr-xr-xtools/testing/selftests/net/forwarding/lib_sh_test.sh208
-rwxr-xr-xtools/testing/selftests/net/forwarding/local_termination.sh434
-rwxr-xr-xtools/testing/selftests/net/forwarding/min_max_mtu.sh283
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre.sh45
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_bound.sh23
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh21
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh21
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh21
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh32
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_changes.sh73
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_flower.sh43
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_lag_lacp.sh66
-rw-r--r--tools/testing/selftests/net/forwarding/mirror_gre_lib.sh92
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_neigh.sh39
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_nh.sh35
-rw-r--r--tools/testing/selftests/net/forwarding/mirror_gre_topo_lib.sh2
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_vlan.sh21
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh73
-rw-r--r--tools/testing/selftests/net/forwarding/mirror_lib.sh79
-rwxr-xr-xtools/testing/selftests/net/forwarding/mirror_vlan.sh43
-rwxr-xr-xtools/testing/selftests/net/forwarding/no_forwarding.sh5
-rwxr-xr-xtools/testing/selftests/net/forwarding/router.sh29
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_bridge_1d_lag.sh1
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_bridge_lag.sh1
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_mpath_nh.sh121
-rw-r--r--tools/testing/selftests/net/forwarding/router_mpath_nh_lib.sh132
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_mpath_nh_res.sh106
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_mpath_seed.sh333
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_multicast.sh35
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_multipath.sh45
-rwxr-xr-xtools/testing/selftests/net/forwarding/router_nh.sh14
-rwxr-xr-xtools/testing/selftests/net/forwarding/sch_ets.sh8
-rw-r--r--tools/testing/selftests/net/forwarding/sch_ets_core.sh84
-rw-r--r--tools/testing/selftests/net/forwarding/sch_ets_tests.sh41
-rwxr-xr-xtools/testing/selftests/net/forwarding/sch_red.sh117
-rw-r--r--tools/testing/selftests/net/forwarding/sch_tbf_core.sh95
-rw-r--r--tools/testing/selftests/net/forwarding/sch_tbf_etsprio.sh7
-rwxr-xr-xtools/testing/selftests/net/forwarding/sch_tbf_root.sh3
-rwxr-xr-xtools/testing/selftests/net/forwarding/tc_actions.sh49
-rw-r--r--tools/testing/selftests/net/forwarding/tc_common.sh2
-rwxr-xr-xtools/testing/selftests/net/forwarding/tc_flower.sh52
-rwxr-xr-xtools/testing/selftests/net/forwarding/tc_flower_port_range.sh46
-rwxr-xr-xtools/testing/selftests/net/forwarding/tc_police.sh24
-rwxr-xr-xtools/testing/selftests/net/forwarding/tc_taprio.sh421
-rwxr-xr-xtools/testing/selftests/net/forwarding/tc_tunnel_key.sh2
-rw-r--r--tools/testing/selftests/net/forwarding/tsn_lib.sh26
-rwxr-xr-xtools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh22
-rwxr-xr-xtools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh8
-rwxr-xr-xtools/testing/selftests/net/forwarding/vxlan_bridge_1q.sh25
-rwxr-xr-xtools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh4
-rwxr-xr-xtools/testing/selftests/net/forwarding/vxlan_bridge_1q_mc_ul.sh766
-rwxr-xr-xtools/testing/selftests/net/forwarding/vxlan_reserved.sh347
-rwxr-xr-xtools/testing/selftests/net/fq_band_pktlimit.sh14
-rwxr-xr-xtools/testing/selftests/net/gre_ipv6_lladdr.sh184
-rw-r--r--tools/testing/selftests/net/gro.c197
-rwxr-xr-xtools/testing/selftests/net/gro.sh9
-rw-r--r--tools/testing/selftests/net/hsr/Makefile7
-rw-r--r--tools/testing/selftests/net/hsr/config4
-rw-r--r--tools/testing/selftests/net/hsr/hsr_common.sh84
-rwxr-xr-xtools/testing/selftests/net/hsr/hsr_ping.sh211
-rwxr-xr-xtools/testing/selftests/net/hsr/hsr_redbox.sh136
-rw-r--r--tools/testing/selftests/net/hsr/settings1
-rwxr-xr-xtools/testing/selftests/net/icmp_redirect.sh2
-rwxr-xr-xtools/testing/selftests/net/ioam6.sh1834
-rw-r--r--tools/testing/selftests/net/ioam6_parser.c1056
-rw-r--r--tools/testing/selftests/net/ip_local_port_range.c8
-rwxr-xr-xtools/testing/selftests/net/ip_local_port_range.sh4
-rw-r--r--tools/testing/selftests/net/ipsec.c3
-rwxr-xr-xtools/testing/selftests/net/ipv6_force_forwarding.sh105
-rw-r--r--tools/testing/selftests/net/ipv6_fragmentation.c114
-rwxr-xr-xtools/testing/selftests/net/ipv6_route_update_soft_lockup.sh261
-rw-r--r--tools/testing/selftests/net/lib.sh639
-rw-r--r--tools/testing/selftests/net/lib/.gitignore3
-rw-r--r--tools/testing/selftests/net/lib/Makefile23
-rw-r--r--tools/testing/selftests/net/lib/csum.c (renamed from tools/testing/selftests/net/csum.c)34
-rw-r--r--tools/testing/selftests/net/lib/ksft.h56
-rw-r--r--tools/testing/selftests/net/lib/py/__init__.py9
-rw-r--r--tools/testing/selftests/net/lib/py/consts.py9
-rw-r--r--tools/testing/selftests/net/lib/py/ksft.py291
-rw-r--r--tools/testing/selftests/net/lib/py/netns.py49
-rw-r--r--tools/testing/selftests/net/lib/py/nsim.py135
-rw-r--r--tools/testing/selftests/net/lib/py/utils.py274
-rw-r--r--tools/testing/selftests/net/lib/py/ynl.py68
-rw-r--r--tools/testing/selftests/net/lib/sh/defer.sh131
-rw-r--r--tools/testing/selftests/net/lib/xdp_dummy.bpf.c (renamed from tools/testing/selftests/net/xdp_dummy.c)6
-rw-r--r--tools/testing/selftests/net/lib/xdp_helper.c131
-rw-r--r--tools/testing/selftests/net/lib/xdp_native.bpf.c679
-rwxr-xr-xtools/testing/selftests/net/link_netns.py141
-rwxr-xr-xtools/testing/selftests/net/lwt_dst_cache_ref_loop.sh246
-rw-r--r--tools/testing/selftests/net/mptcp/.gitignore1
-rw-r--r--tools/testing/selftests/net/mptcp/Makefile31
-rw-r--r--tools/testing/selftests/net/mptcp/config44
-rwxr-xr-xtools/testing/selftests/net/mptcp/diag.sh243
-rw-r--r--tools/testing/selftests/net/mptcp/mptcp_connect.c91
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_connect.sh290
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh5
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh5
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh5
-rw-r--r--tools/testing/selftests/net/mptcp/mptcp_diag.c435
-rw-r--r--tools/testing/selftests/net/mptcp/mptcp_inq.c28
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_join.sh1522
-rw-r--r--tools/testing/selftests/net/mptcp/mptcp_lib.sh435
-rw-r--r--tools/testing/selftests/net/mptcp/mptcp_sockopt.c44
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_sockopt.sh156
-rwxr-xr-xtools/testing/selftests/net/mptcp/pm_netlink.sh356
-rw-r--r--tools/testing/selftests/net/mptcp/pm_nl_ctl.c76
-rwxr-xr-xtools/testing/selftests/net/mptcp/simult_flows.sh107
-rwxr-xr-xtools/testing/selftests/net/mptcp/userspace_pm.sh262
-rw-r--r--tools/testing/selftests/net/msg_zerocopy.c36
-rwxr-xr-xtools/testing/selftests/net/msg_zerocopy.sh84
-rw-r--r--tools/testing/selftests/net/nat6to4.bpf.c (renamed from tools/testing/selftests/net/nat6to4.c)0
-rwxr-xr-xtools/testing/selftests/net/nat6to4.sh15
-rw-r--r--tools/testing/selftests/net/net_helper.sh25
-rwxr-xr-xtools/testing/selftests/net/netdev-l2addr.sh59
-rwxr-xr-xtools/testing/selftests/net/netdevice.sh60
-rw-r--r--tools/testing/selftests/net/netfilter/.gitignore (renamed from tools/testing/selftests/netfilter/.gitignore)6
-rw-r--r--tools/testing/selftests/net/netfilter/Makefile73
-rw-r--r--tools/testing/selftests/net/netfilter/audit_logread.c (renamed from tools/testing/selftests/netfilter/audit_logread.c)0
-rwxr-xr-xtools/testing/selftests/net/netfilter/br_netfilter.sh175
-rwxr-xr-xtools/testing/selftests/net/netfilter/br_netfilter_queue.sh85
-rwxr-xr-xtools/testing/selftests/net/netfilter/bridge_brouter.sh120
-rw-r--r--tools/testing/selftests/net/netfilter/config101
-rw-r--r--tools/testing/selftests/net/netfilter/connect_close.c (renamed from tools/testing/selftests/netfilter/connect_close.c)0
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_clash.sh174
-rw-r--r--tools/testing/selftests/net/netfilter/conntrack_dump_flush.c (renamed from tools/testing/selftests/netfilter/conntrack_dump_flush.c)23
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_dump_flush.sh3
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_icmp_related.sh (renamed from tools/testing/selftests/netfilter/conntrack_icmp_related.sh)179
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_ipip_mtu.sh (renamed from tools/testing/selftests/netfilter/ipip-conntrack-mtu.sh)118
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_resize.sh515
-rw-r--r--tools/testing/selftests/net/netfilter/conntrack_reverse_clash.c125
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_reverse_clash.sh51
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_sctp_collision.sh87
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_tcp_unreplied.sh164
-rwxr-xr-xtools/testing/selftests/net/netfilter/conntrack_vrf.sh (renamed from tools/testing/selftests/netfilter/conntrack_vrf.sh)117
-rwxr-xr-xtools/testing/selftests/net/netfilter/ipvs.sh205
-rw-r--r--tools/testing/selftests/net/netfilter/lib.sh10
-rwxr-xr-xtools/testing/selftests/net/netfilter/nf_conntrack_packetdrill.sh71
-rwxr-xr-xtools/testing/selftests/net/netfilter/nf_nat_edemux.sh121
-rw-r--r--tools/testing/selftests/net/netfilter/nf_queue.c (renamed from tools/testing/selftests/netfilter/nf-queue.c)0
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_audit.sh (renamed from tools/testing/selftests/netfilter/nft_audit.sh)88
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_concat_range.sh (renamed from tools/testing/selftests/netfilter/nft_concat_range.sh)638
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_concat_range_perf.sh9
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_conntrack_helper.sh171
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_fib.sh850
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_flowtable.sh713
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_interface_stress.sh157
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_meta.sh (renamed from tools/testing/selftests/netfilter/nft_meta.sh)4
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_nat.sh (renamed from tools/testing/selftests/netfilter/nft_nat.sh)561
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_nat_zones.sh (renamed from tools/testing/selftests/netfilter/nft_nat_zones.sh)194
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_queue.sh672
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_synproxy.sh96
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_tproxy_tcp.sh358
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_tproxy_udp.sh262
-rwxr-xr-xtools/testing/selftests/net/netfilter/nft_zones_many.sh (renamed from tools/testing/selftests/netfilter/nft_zones_many.sh)97
-rwxr-xr-xtools/testing/selftests/net/netfilter/packetdrill/common.sh33
-rw-r--r--tools/testing/selftests/net/netfilter/packetdrill/conntrack_ack_loss_stall.pkt118
-rw-r--r--tools/testing/selftests/net/netfilter/packetdrill/conntrack_inexact_rst.pkt62
-rw-r--r--tools/testing/selftests/net/netfilter/packetdrill/conntrack_rst_invalid.pkt59
-rw-r--r--tools/testing/selftests/net/netfilter/packetdrill/conntrack_syn_challenge_ack.pkt44
-rw-r--r--tools/testing/selftests/net/netfilter/packetdrill/conntrack_synack_old.pkt51
-rw-r--r--tools/testing/selftests/net/netfilter/packetdrill/conntrack_synack_reuse.pkt34
-rwxr-xr-xtools/testing/selftests/net/netfilter/rpath.sh (renamed from tools/testing/selftests/netfilter/rpath.sh)46
-rw-r--r--tools/testing/selftests/net/netfilter/sctp_collision.c (renamed from tools/testing/selftests/netfilter/sctp_collision.c)0
-rw-r--r--tools/testing/selftests/net/netfilter/settings1
-rw-r--r--tools/testing/selftests/net/netfilter/udpclash.c158
-rwxr-xr-xtools/testing/selftests/net/netfilter/vxlan_mtu_frag.sh121
-rwxr-xr-xtools/testing/selftests/net/netfilter/xt_string.sh (renamed from tools/testing/selftests/netfilter/xt_string.sh)89
-rw-r--r--tools/testing/selftests/net/netlink-dumps.c262
-rwxr-xr-xtools/testing/selftests/net/netns-name.sh23
-rwxr-xr-xtools/testing/selftests/net/netns-sysctl.sh40
-rw-r--r--tools/testing/selftests/net/nettest.c12
-rwxr-xr-xtools/testing/selftests/net/nl_netdev.py254
-rw-r--r--tools/testing/selftests/net/openvswitch/Makefile2
-rwxr-xr-xtools/testing/selftests/net/openvswitch/openvswitch.sh338
-rw-r--r--tools/testing/selftests/net/openvswitch/ovs-dpctl.py661
-rw-r--r--tools/testing/selftests/net/openvswitch/settings1
-rw-r--r--tools/testing/selftests/net/ovpn/.gitignore2
-rw-r--r--tools/testing/selftests/net/ovpn/Makefile34
-rw-r--r--tools/testing/selftests/net/ovpn/common.sh108
-rw-r--r--tools/testing/selftests/net/ovpn/config10
-rw-r--r--tools/testing/selftests/net/ovpn/data64.key5
-rw-r--r--tools/testing/selftests/net/ovpn/ovpn-cli.c2387
-rw-r--r--tools/testing/selftests/net/ovpn/tcp_peers.txt5
-rwxr-xr-xtools/testing/selftests/net/ovpn/test-chachapoly.sh9
-rwxr-xr-xtools/testing/selftests/net/ovpn/test-close-socket-tcp.sh9
-rwxr-xr-xtools/testing/selftests/net/ovpn/test-close-socket.sh45
-rwxr-xr-xtools/testing/selftests/net/ovpn/test-float.sh9
-rwxr-xr-xtools/testing/selftests/net/ovpn/test-large-mtu.sh9
-rwxr-xr-xtools/testing/selftests/net/ovpn/test-tcp.sh9
-rwxr-xr-xtools/testing/selftests/net/ovpn/test.sh117
-rw-r--r--tools/testing/selftests/net/ovpn/udp_peers.txt6
-rw-r--r--tools/testing/selftests/net/packetdrill/Makefile12
-rw-r--r--tools/testing/selftests/net/packetdrill/config11
-rwxr-xr-xtools/testing/selftests/net/packetdrill/defaults.sh64
-rwxr-xr-xtools/testing/selftests/net/packetdrill/ksft_runner.sh62
-rwxr-xr-xtools/testing/selftests/net/packetdrill/set_sysctls.py38
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_blocking_blocking-accept.pkt18
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_blocking_blocking-connect.pkt13
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_blocking_blocking-read.pkt31
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_blocking_blocking-write.pkt35
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_close_close-local-close-then-remote-fin.pkt23
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_close_close-on-syn-sent.pkt21
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_close_close-remote-fin-then-close.pkt36
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_close_no_rst.pkt32
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_dsack_mult.pkt45
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_ecn_ecn-uses-ect0.pkt21
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_eor_no-coalesce-large.pkt38
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_eor_no-coalesce-retrans.pkt72
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_eor_no-coalesce-small.pkt36
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_eor_no-coalesce-subsequent.pkt66
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fast_recovery_prr-ss-10pkt-lost-1.pkt72
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fast_recovery_prr-ss-30pkt-lost-1_4-11_16.pkt50
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fast_recovery_prr-ss-30pkt-lost1_4.pkt43
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fast_recovery_prr-ss-ack-below-snd_una-cubic.pkt41
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_basic-cookie-not-reqd.pkt32
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_basic-no-setsockopt.pkt21
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_basic-non-tfo-listener.pkt26
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_basic-pure-syn-data.pkt50
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_basic-rw.pkt23
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_basic-zero-payload.pkt26
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_client-ack-dropped-then-recovery-ms-timestamps.pkt46
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_experimental_option.pkt37
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_fin-close-socket.pkt30
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_icmp-before-accept.pkt49
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_reset-after-accept.pkt37
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_reset-before-accept.pkt32
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_reset-close-with-unread-data.pkt32
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_reset-non-tfo-socket.pkt37
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_sockopt-fastopen-key.pkt74
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_trigger-rst-listener-closed.pkt21
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_trigger-rst-reconnect.pkt30
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_fastopen_server_trigger-rst-unread-data-closed.pkt23
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_inq_client.pkt54
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_inq_server.pkt54
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_limited_transmit_limited-transmit-no-sack.pkt53
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_limited_transmit_limited-transmit-sack.pkt50
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_md5_md5-only-on-client-ack.pkt28
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_nagle_https_client.pkt40
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_nagle_sendmsg_msg_more.pkt66
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_nagle_sockopt_cork_nodelay.pkt43
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_ooo-before-and-after-accept.pkt53
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_ooo_rcv_mss.pkt27
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_rcv_big_endseq.pkt44
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_rcv_toobig.pkt33
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_sack_sack-route-refresh-ip-tos.pkt37
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_sack_sack-shift-sacked-2-6-8-3-9-nofack.pkt64
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_sack_sack-shift-sacked-7-3-4-8-9-fack.pkt66
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_sack_sack-shift-sacked-7-5-6-8-9-fack.pkt62
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_sendfile_sendfile-simple.pkt26
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-ack-per-1pkt.pkt56
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-ack-per-2pkt-send-5pkt.pkt33
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-ack-per-2pkt-send-6pkt.pkt34
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-ack-per-2pkt.pkt42
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-ack-per-4pkt.pkt35
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-after-idle.pkt39
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-after-win-update.pkt50
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-app-limited-9-packets-out.pkt38
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-app-limited.pkt36
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_slow_start_slow-start-fq-ack-per-2pkt.pkt63
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_splice_tcp_splice_loop_test.pkt20
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_syscall_bad_arg_fastopen-invalid-buf-ptr.pkt42
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_syscall_bad_arg_sendmsg-empty-iov.pkt30
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_syscall_bad_arg_syscall-invalid-buf-ptr.pkt25
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_tcp_info_tcp-info-last_data_recv.pkt20
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_tcp_info_tcp-info-rwnd-limited.pkt54
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_tcp_info_tcp-info-sndbuf-limited.pkt38
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_timestamping_client-only-last-byte.pkt92
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_timestamping_partial.pkt91
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_timestamping_server.pkt145
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_ts_recent_fin_tsval.pkt23
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_ts_recent_invalid_ack.pkt25
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_ts_recent_reset_tsval.pkt25
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_user_timeout_user-timeout-probe.pkt37
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_user_timeout_user_timeout.pkt32
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_validate_validate-established-no-flags.pkt24
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_basic.pkt55
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_batch.pkt41
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_client.pkt30
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_closed.pkt44
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_epoll_edge.pkt61
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_epoll_exclusive.pkt63
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_epoll_oneshot.pkt66
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_fastopen-client.pkt56
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_fastopen-server.pkt44
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_maxfrags.pkt118
-rw-r--r--tools/testing/selftests/net/packetdrill/tcp_zerocopy_small.pkt57
-rwxr-xr-xtools/testing/selftests/net/pmtu.sh271
-rw-r--r--tools/testing/selftests/net/proc_net_pktgen.c690
-rw-r--r--tools/testing/selftests/net/psock_fanout.c84
-rw-r--r--tools/testing/selftests/net/psock_lib.h4
-rw-r--r--tools/testing/selftests/net/psock_tpacket.c6
-rw-r--r--tools/testing/selftests/net/rds/.gitignore1
-rw-r--r--tools/testing/selftests/net/rds/Makefile18
-rw-r--r--tools/testing/selftests/net/rds/README.txt41
-rwxr-xr-xtools/testing/selftests/net/rds/config.sh53
-rwxr-xr-xtools/testing/selftests/net/rds/run.sh224
-rwxr-xr-xtools/testing/selftests/net/rds/test.py265
-rw-r--r--tools/testing/selftests/net/reuseaddr_conflict.c2
-rw-r--r--tools/testing/selftests/net/reuseaddr_ports_exhausted.c2
-rw-r--r--tools/testing/selftests/net/reuseport_addr_any.c36
-rwxr-xr-xtools/testing/selftests/net/route_hint.sh79
-rwxr-xr-xtools/testing/selftests/net/rps_default_mask.sh12
-rwxr-xr-xtools/testing/selftests/net/rtnetlink.py30
-rwxr-xr-xtools/testing/selftests/net/rtnetlink.sh286
-rwxr-xr-xtools/testing/selftests/net/rtnetlink_notification.sh112
-rw-r--r--tools/testing/selftests/net/rxtimestamp.c18
-rw-r--r--tools/testing/selftests/net/sample_map_ret0.bpf.c (renamed from tools/testing/selftests/bpf/progs/sample_map_ret0.c)2
-rw-r--r--tools/testing/selftests/net/sample_ret0.bpf.c (renamed from tools/testing/selftests/bpf/progs/sample_ret0.c)3
-rw-r--r--tools/testing/selftests/net/setup_veth.sh3
-rw-r--r--tools/testing/selftests/net/sk_so_peek_off.c202
-rw-r--r--tools/testing/selftests/net/skf_net_off.c244
-rwxr-xr-xtools/testing/selftests/net/skf_net_off.sh30
-rw-r--r--tools/testing/selftests/net/so_rcv_listener.c168
-rw-r--r--tools/testing/selftests/net/so_txtime.c7
-rw-r--r--tools/testing/selftests/net/socket.c11
-rwxr-xr-xtools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh5
-rwxr-xr-xtools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh5
-rwxr-xr-xtools/testing/selftests/net/srv6_end_dx4_netfilter_test.sh335
-rwxr-xr-xtools/testing/selftests/net/srv6_end_dx6_netfilter_test.sh340
-rwxr-xr-xtools/testing/selftests/net/srv6_end_flavors_test.sh4
-rwxr-xr-xtools/testing/selftests/net/srv6_end_next_csid_l3vpn_test.sh79
-rwxr-xr-xtools/testing/selftests/net/srv6_end_x_next_csid_l3vpn_test.sh133
-rwxr-xr-xtools/testing/selftests/net/srv6_hencap_red_l3vpn_test.sh76
-rwxr-xr-xtools/testing/selftests/net/srv6_hl2encap_red_l2vpn_test.sh85
-rw-r--r--tools/testing/selftests/net/tcp_ao/Makefile5
-rw-r--r--tools/testing/selftests/net/tcp_ao/bench-lookups.c2
-rw-r--r--tools/testing/selftests/net/tcp_ao/config3
-rw-r--r--tools/testing/selftests/net/tcp_ao/connect-deny.c83
-rw-r--r--tools/testing/selftests/net/tcp_ao/connect.c28
-rw-r--r--tools/testing/selftests/net/tcp_ao/icmps-discard.c19
-rw-r--r--tools/testing/selftests/net/tcp_ao/key-management.c94
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/aolib.h295
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/ftrace-tcp.c556
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/ftrace.c543
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/kconfig.c31
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/proc.c2
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/setup.c29
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/sock.c316
-rw-r--r--tools/testing/selftests/net/tcp_ao/lib/utils.c26
-rw-r--r--tools/testing/selftests/net/tcp_ao/restore.c101
-rw-r--r--tools/testing/selftests/net/tcp_ao/rst.c70
-rw-r--r--tools/testing/selftests/net/tcp_ao/self-connect.c58
-rw-r--r--tools/testing/selftests/net/tcp_ao/seq-ext.c62
-rw-r--r--tools/testing/selftests/net/tcp_ao/setsockopt-closed.c192
-rw-r--r--tools/testing/selftests/net/tcp_ao/unsigned-md5.c153
-rw-r--r--tools/testing/selftests/net/tcp_port_share.c258
-rwxr-xr-xtools/testing/selftests/net/test_blackhole_dev.sh11
-rwxr-xr-xtools/testing/selftests/net/test_bridge_backup_port.sh31
-rwxr-xr-xtools/testing/selftests/net/test_bridge_neigh_suppress.sh139
-rwxr-xr-xtools/testing/selftests/net/test_neigh.sh366
-rwxr-xr-xtools/testing/selftests/net/test_so_rcv.sh73
-rwxr-xr-xtools/testing/selftests/net/test_vxlan_fdb_changelink.sh111
-rwxr-xr-xtools/testing/selftests/net/test_vxlan_mdb.sh241
-rwxr-xr-xtools/testing/selftests/net/test_vxlan_nh.sh223
-rwxr-xr-xtools/testing/selftests/net/test_vxlan_vnifiltering.sh9
-rw-r--r--tools/testing/selftests/net/tfo.c171
-rwxr-xr-xtools/testing/selftests/net/tfo_passive.sh112
-rw-r--r--tools/testing/selftests/net/tls.c994
-rwxr-xr-xtools/testing/selftests/net/traceroute.sh250
-rw-r--r--tools/testing/selftests/net/txtimestamp.c53
-rwxr-xr-xtools/testing/selftests/net/txtimestamp.sh24
-rwxr-xr-xtools/testing/selftests/net/udpgro.sh57
-rwxr-xr-xtools/testing/selftests/net/udpgro_bench.sh4
-rwxr-xr-xtools/testing/selftests/net/udpgro_frglist.sh10
-rwxr-xr-xtools/testing/selftests/net/udpgro_fwd.sh18
-rw-r--r--tools/testing/selftests/net/udpgso.c202
-rwxr-xr-xtools/testing/selftests/net/udpgso.sh92
-rwxr-xr-xtools/testing/selftests/net/udpgso_bench.sh3
-rwxr-xr-xtools/testing/selftests/net/unicast_extensions.sh9
-rwxr-xr-xtools/testing/selftests/net/veth.sh38
-rwxr-xr-xtools/testing/selftests/net/vlan_bridge_binding.sh256
-rwxr-xr-xtools/testing/selftests/net/vlan_hw_filter.sh98
-rwxr-xr-xtools/testing/selftests/net/vrf_route_leaking.sh98
-rwxr-xr-xtools/testing/selftests/net/xfrm_policy.sh4
-rwxr-xr-xtools/testing/selftests/net/xfrm_policy_add_speed.sh83
-rw-r--r--tools/testing/selftests/net/ynl.mk40
-rw-r--r--tools/testing/selftests/netfilter/Makefile20
-rwxr-xr-xtools/testing/selftests/netfilter/bridge_brouter.sh146
-rw-r--r--tools/testing/selftests/netfilter/config9
-rwxr-xr-xtools/testing/selftests/netfilter/conntrack_sctp_collision.sh89
-rwxr-xr-xtools/testing/selftests/netfilter/conntrack_tcp_unreplied.sh167
-rwxr-xr-xtools/testing/selftests/netfilter/ipvs.sh228
-rwxr-xr-xtools/testing/selftests/netfilter/nf_nat_edemux.sh127
-rwxr-xr-xtools/testing/selftests/netfilter/nft_conntrack_helper.sh197
-rwxr-xr-xtools/testing/selftests/netfilter/nft_fib.sh273
-rwxr-xr-xtools/testing/selftests/netfilter/nft_flowtable.sh672
-rwxr-xr-xtools/testing/selftests/netfilter/nft_queue.sh449
-rwxr-xr-xtools/testing/selftests/netfilter/nft_synproxy.sh117
-rwxr-xr-xtools/testing/selftests/netfilter/nft_trans_stress.sh151
-rw-r--r--tools/testing/selftests/netfilter/settings1
-rw-r--r--tools/testing/selftests/nolibc/Makefile283
-rw-r--r--tools/testing/selftests/nolibc/Makefile.include10
-rw-r--r--tools/testing/selftests/nolibc/Makefile.nolibc386
-rw-r--r--tools/testing/selftests/nolibc/nolibc-test-linkage.c8
-rw-r--r--tools/testing/selftests/nolibc/nolibc-test.c700
-rwxr-xr-xtools/testing/selftests/nolibc/run-tests.sh67
-rw-r--r--tools/testing/selftests/openat2/Makefile14
-rw-r--r--tools/testing/selftests/openat2/openat2_test.c1
-rw-r--r--tools/testing/selftests/pci_endpoint/.gitignore2
-rw-r--r--tools/testing/selftests/pci_endpoint/Makefile7
-rw-r--r--tools/testing/selftests/pci_endpoint/config4
-rw-r--r--tools/testing/selftests/pci_endpoint/pci_endpoint_test.c264
-rw-r--r--tools/testing/selftests/pcie_bwctrl/Makefile3
-rwxr-xr-xtools/testing/selftests/pcie_bwctrl/set_pcie_cooling_state.sh122
-rwxr-xr-xtools/testing/selftests/pcie_bwctrl/set_pcie_speed.sh67
-rw-r--r--tools/testing/selftests/perf_events/.gitignore2
-rw-r--r--tools/testing/selftests/perf_events/Makefile2
-rw-r--r--tools/testing/selftests/perf_events/mmap.c236
-rw-r--r--tools/testing/selftests/perf_events/watermark_signal.c144
-rw-r--r--tools/testing/selftests/pid_namespace/.gitignore1
-rw-r--r--tools/testing/selftests/pid_namespace/Makefile2
-rw-r--r--tools/testing/selftests/pid_namespace/pid_max.c359
-rw-r--r--tools/testing/selftests/pidfd/.gitignore6
-rw-r--r--tools/testing/selftests/pidfd/Makefile8
-rw-r--r--tools/testing/selftests/pidfd/config1
-rw-r--r--tools/testing/selftests/pidfd/pidfd.h192
-rw-r--r--tools/testing/selftests/pidfd/pidfd_bind_mount.c116
-rw-r--r--tools/testing/selftests/pidfd/pidfd_exec_helper.c12
-rw-r--r--tools/testing/selftests/pidfd/pidfd_fdinfo_test.c3
-rw-r--r--tools/testing/selftests/pidfd/pidfd_file_handle_test.c563
-rw-r--r--tools/testing/selftests/pidfd/pidfd_getfd_test.c31
-rw-r--r--tools/testing/selftests/pidfd/pidfd_info_test.c693
-rw-r--r--tools/testing/selftests/pidfd/pidfd_open_test.c60
-rw-r--r--tools/testing/selftests/pidfd/pidfd_poll_test.c2
-rw-r--r--tools/testing/selftests/pidfd/pidfd_setattr_test.c69
-rw-r--r--tools/testing/selftests/pidfd/pidfd_setns_test.c262
-rw-r--r--tools/testing/selftests/pidfd/pidfd_test.c80
-rw-r--r--tools/testing/selftests/pidfd/pidfd_wait.c47
-rw-r--r--tools/testing/selftests/pidfd/pidfd_xattr_test.c132
-rw-r--r--tools/testing/selftests/power_supply/Makefile4
-rw-r--r--tools/testing/selftests/power_supply/helpers.sh178
-rwxr-xr-xtools/testing/selftests/power_supply/test_power_supply_properties.sh114
-rw-r--r--tools/testing/selftests/powerpc/Makefile11
-rw-r--r--tools/testing/selftests/powerpc/alignment/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/alignment/settings1
-rw-r--r--tools/testing/selftests/powerpc/benchmarks/Makefile7
-rw-r--r--tools/testing/selftests/powerpc/benchmarks/exec_target.c16
-rw-r--r--tools/testing/selftests/powerpc/benchmarks/gettimeofday.c2
-rw-r--r--tools/testing/selftests/powerpc/cache_shape/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/cache_shape/settings1
-rw-r--r--tools/testing/selftests/powerpc/copyloops/Makefile21
-rw-r--r--tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h12
-rw-r--r--tools/testing/selftests/powerpc/copyloops/settings1
-rw-r--r--tools/testing/selftests/powerpc/dexcr/.gitignore2
-rw-r--r--tools/testing/selftests/powerpc/dexcr/Makefile9
-rw-r--r--tools/testing/selftests/powerpc/dexcr/chdexcr.c112
-rw-r--r--tools/testing/selftests/powerpc/dexcr/dexcr.c40
-rw-r--r--tools/testing/selftests/powerpc/dexcr/dexcr.h57
-rw-r--r--tools/testing/selftests/powerpc/dexcr/dexcr_test.c215
-rw-r--r--tools/testing/selftests/powerpc/dexcr/hashchk_test.c8
-rw-r--r--tools/testing/selftests/powerpc/dexcr/lsdexcr.c103
-rw-r--r--tools/testing/selftests/powerpc/dexcr/settings1
-rw-r--r--tools/testing/selftests/powerpc/dscr/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/dscr/settings1
-rw-r--r--tools/testing/selftests/powerpc/eeh/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/flags.mk9
-rw-r--r--tools/testing/selftests/powerpc/include/instructions.h2
-rw-r--r--tools/testing/selftests/powerpc/include/pkeys.h13
-rw-r--r--tools/testing/selftests/powerpc/lib/settings1
-rw-r--r--tools/testing/selftests/powerpc/math/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/math/fpu_signal.c16
-rw-r--r--tools/testing/selftests/powerpc/math/settings1
-rw-r--r--tools/testing/selftests/powerpc/mce/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/mce/settings1
-rw-r--r--tools/testing/selftests/powerpc/mm/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/mm/pkey_exec_prot.c2
-rw-r--r--tools/testing/selftests/powerpc/mm/pkey_siginfo.c2
-rw-r--r--tools/testing/selftests/powerpc/mm/settings1
-rw-r--r--tools/testing/selftests/powerpc/mm/stack_expansion_ldst.c2
-rw-r--r--tools/testing/selftests/powerpc/mm/subpage_prot.c4
-rw-r--r--tools/testing/selftests/powerpc/mm/tlbie_test.c10
-rw-r--r--tools/testing/selftests/powerpc/nx-gzip/Makefile5
-rw-r--r--tools/testing/selftests/powerpc/nx-gzip/settings1
-rw-r--r--tools/testing/selftests/powerpc/papr_attributes/Makefile3
-rw-r--r--tools/testing/selftests/powerpc/papr_attributes/settings1
-rw-r--r--tools/testing/selftests/powerpc/papr_sysparm/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/papr_sysparm/settings1
-rw-r--r--tools/testing/selftests/powerpc/papr_vpd/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/papr_vpd/papr_vpd.c2
-rw-r--r--tools/testing/selftests/powerpc/papr_vpd/settings1
-rw-r--r--tools/testing/selftests/powerpc/pmu/Makefile44
-rw-r--r--tools/testing/selftests/powerpc/pmu/count_stcx_fail.c3
-rw-r--r--tools/testing/selftests/powerpc/pmu/ebb/Makefile21
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/Makefile5
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/event_alternatives_tests_p10.c3
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/generic_events_valid_test.c3
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_l2l3_sel_test.c2
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_radix_scope_qual_test.c2
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_thresh_cmp_test.c2
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/invalid_event_code_test.c4
-rw-r--r--tools/testing/selftests/powerpc/pmu/event_code_tests/reserved_bits_mmcra_sample_elig_mode_test.c5
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/Makefile8
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/bhrb_filter_map_test.c17
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/check_extended_reg_test.c35
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/misc.c20
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/misc.h12
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_cond_test.c2
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_no_branch_test.c2
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_test.c2
-rw-r--r--tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_ind_call_test.c2
-rw-r--r--tools/testing/selftests/powerpc/pmu/settings1
-rw-r--r--tools/testing/selftests/powerpc/primitives/Makefile5
-rw-r--r--tools/testing/selftests/powerpc/primitives/linux/bitops.h0
l---------tools/testing/selftests/powerpc/primitives/linux/wordpart.h1
-rw-r--r--tools/testing/selftests/powerpc/primitives/settings1
-rw-r--r--tools/testing/selftests/powerpc/ptrace/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/ptrace/core-pkey.c37
-rw-r--r--tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c32
-rw-r--r--tools/testing/selftests/powerpc/ptrace/settings1
-rw-r--r--tools/testing/selftests/powerpc/scripts/settings1
-rw-r--r--tools/testing/selftests/powerpc/security/Makefile5
-rwxr-xr-xtools/testing/selftests/powerpc/security/mitigation-patching.sh8
-rw-r--r--tools/testing/selftests/powerpc/security/settings1
-rw-r--r--tools/testing/selftests/powerpc/signal/Makefile4
-rw-r--r--tools/testing/selftests/powerpc/signal/sigfuz.c2
-rw-r--r--tools/testing/selftests/powerpc/stringloops/Makefile11
-rw-r--r--tools/testing/selftests/powerpc/stringloops/settings1
-rw-r--r--tools/testing/selftests/powerpc/switch_endian/Makefile5
-rw-r--r--tools/testing/selftests/powerpc/switch_endian/settings1
-rw-r--r--tools/testing/selftests/powerpc/syscalls/Makefile5
-rw-r--r--tools/testing/selftests/powerpc/syscalls/settings1
-rw-r--r--tools/testing/selftests/powerpc/tm/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/tm/tm-signal-context-force-tm.c2
-rw-r--r--tools/testing/selftests/powerpc/tm/tm-signal-sigreturn-nt.c3
-rw-r--r--tools/testing/selftests/powerpc/vphn/Makefile5
-rw-r--r--tools/testing/selftests/powerpc/vphn/settings1
-rw-r--r--tools/testing/selftests/powerpc/vphn/test-vphn.c2
-rw-r--r--tools/testing/selftests/proc/.gitignore5
-rw-r--r--tools/testing/selftests/proc/Makefile7
-rw-r--r--tools/testing/selftests/proc/proc-2-is-kthread.c53
-rw-r--r--tools/testing/selftests/proc/proc-empty-vm.c3
-rw-r--r--tools/testing/selftests/proc/proc-maps-race.c806
-rw-r--r--tools/testing/selftests/proc/proc-net-dev-lseek.c68
-rw-r--r--tools/testing/selftests/proc/proc-pid-vm.c98
-rw-r--r--tools/testing/selftests/proc/proc-pidns.c211
-rw-r--r--tools/testing/selftests/proc/proc-self-isnt-kthread.c37
-rw-r--r--tools/testing/selftests/ptp/testptp.c104
-rw-r--r--tools/testing/selftests/ptrace/.gitignore1
-rw-r--r--tools/testing/selftests/ptrace/Makefile2
-rw-r--r--tools/testing/selftests/ptrace/peeksiginfo.c2
-rw-r--r--tools/testing/selftests/ptrace/set_syscall_info.c519
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/console-badness.sh2
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/jitter.sh27
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-build.sh2
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-remote.sh25
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-test-1-run-batch.sh43
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh4
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm.sh21
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/mktestid.sh29
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/parse-console.sh2
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/srcu_lockdep.sh44
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/torture.sh208
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/BUSTED3
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/CFcommon2
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/CFcommon.i6862
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/CFcommon.ppc64le1
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/CFcommon.x86_642
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/SRCU-N.boot1
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot1
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE012
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE01.boot2
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot1
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE05.boot6
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE073
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE07.boot1
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE095
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE105
-rw-r--r--tools/testing/selftests/rcutorture/configs/refscale/TINY20
-rw-r--r--tools/testing/selftests/resctrl/Makefile7
-rw-r--r--tools/testing/selftests/resctrl/cache.c287
-rw-r--r--tools/testing/selftests/resctrl/cat_test.c435
-rw-r--r--tools/testing/selftests/resctrl/cmt_test.c133
-rw-r--r--tools/testing/selftests/resctrl/fill_buf.c147
-rw-r--r--tools/testing/selftests/resctrl/mba_test.c116
-rw-r--r--tools/testing/selftests/resctrl/mbm_test.c101
-rw-r--r--tools/testing/selftests/resctrl/resctrl.h238
-rw-r--r--tools/testing/selftests/resctrl/resctrl_tests.c303
-rw-r--r--tools/testing/selftests/resctrl/resctrl_val.c714
-rw-r--r--tools/testing/selftests/resctrl/resctrlfs.c612
-rw-r--r--tools/testing/selftests/ring-buffer/.gitignore1
-rw-r--r--tools/testing/selftests/ring-buffer/Makefile7
-rw-r--r--tools/testing/selftests/ring-buffer/config2
-rw-r--r--tools/testing/selftests/ring-buffer/map_test.c324
-rw-r--r--tools/testing/selftests/riscv/Makefile2
-rw-r--r--tools/testing/selftests/riscv/README24
-rw-r--r--tools/testing/selftests/riscv/abi/.gitignore1
-rw-r--r--tools/testing/selftests/riscv/abi/Makefile10
-rw-r--r--tools/testing/selftests/riscv/abi/pointer_masking.c348
-rw-r--r--tools/testing/selftests/riscv/hwprobe/.gitignore2
-rw-r--r--tools/testing/selftests/riscv/hwprobe/cbo.c68
-rw-r--r--tools/testing/selftests/riscv/hwprobe/hwprobe.h10
-rw-r--r--tools/testing/selftests/riscv/mm/Makefile2
-rw-r--r--tools/testing/selftests/riscv/mm/mmap_bottomup.c23
-rw-r--r--tools/testing/selftests/riscv/mm/mmap_default.c23
-rw-r--r--tools/testing/selftests/riscv/mm/mmap_test.h56
-rw-r--r--tools/testing/selftests/riscv/sigreturn/.gitignore1
-rw-r--r--tools/testing/selftests/riscv/sigreturn/Makefile12
-rw-r--r--tools/testing/selftests/riscv/sigreturn/sigreturn.c82
-rw-r--r--tools/testing/selftests/riscv/vector/.gitignore3
-rw-r--r--tools/testing/selftests/riscv/vector/Makefile17
-rw-r--r--tools/testing/selftests/riscv/vector/v_exec_initval_nolibc.c90
-rw-r--r--tools/testing/selftests/riscv/vector/v_helpers.c68
-rw-r--r--tools/testing/selftests/riscv/vector/v_helpers.h8
-rw-r--r--tools/testing/selftests/riscv/vector/v_initval.c22
-rw-r--r--tools/testing/selftests/riscv/vector/v_initval_nolibc.c68
-rw-r--r--tools/testing/selftests/riscv/vector/vstate_exec_nolibc.c20
-rw-r--r--tools/testing/selftests/riscv/vector/vstate_prctl.c305
-rw-r--r--tools/testing/selftests/rseq/.gitignore1
-rw-r--r--tools/testing/selftests/rseq/Makefile9
-rw-r--r--tools/testing/selftests/rseq/param_test.c24
-rw-r--r--tools/testing/selftests/rseq/rseq-or1k-bits.h412
-rw-r--r--tools/testing/selftests/rseq/rseq-or1k-thread-pointer.h13
-rw-r--r--tools/testing/selftests/rseq/rseq-or1k.h181
-rw-r--r--tools/testing/selftests/rseq/rseq-riscv-bits.h6
-rw-r--r--tools/testing/selftests/rseq/rseq-riscv.h5
-rw-r--r--tools/testing/selftests/rseq/rseq-thread-pointer.h2
-rw-r--r--tools/testing/selftests/rseq/rseq.c173
-rw-r--r--tools/testing/selftests/rseq/rseq.h26
-rwxr-xr-xtools/testing/selftests/rseq/run_syscall_errors_test.sh5
-rw-r--r--tools/testing/selftests/rseq/syscall_errors_test.c124
-rw-r--r--tools/testing/selftests/rtc/.gitignore1
-rw-r--r--tools/testing/selftests/rtc/Makefile4
-rw-r--r--tools/testing/selftests/rtc/rtctest.c91
-rw-r--r--tools/testing/selftests/rtc/setdate.c77
-rwxr-xr-xtools/testing/selftests/run_kselftest.sh11
-rw-r--r--tools/testing/selftests/rust/Makefile4
-rw-r--r--tools/testing/selftests/rust/config6
-rwxr-xr-xtools/testing/selftests/rust/test_probe_samples.sh41
-rw-r--r--tools/testing/selftests/sched/config2
-rw-r--r--tools/testing/selftests/sched/cs_prctl_test.c12
-rw-r--r--tools/testing/selftests/sched_ext/.gitignore6
-rw-r--r--tools/testing/selftests/sched_ext/Makefile213
-rw-r--r--tools/testing/selftests/sched_ext/allowed_cpus.bpf.c144
-rw-r--r--tools/testing/selftests/sched_ext/allowed_cpus.c84
-rw-r--r--tools/testing/selftests/sched_ext/config8
-rw-r--r--tools/testing/selftests/sched_ext/create_dsq.bpf.c58
-rw-r--r--tools/testing/selftests/sched_ext/create_dsq.c57
-rw-r--r--tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c42
-rw-r--r--tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.c60
-rw-r--r--tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c39
-rw-r--r--tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.c59
-rw-r--r--tools/testing/selftests/sched_ext/dsp_local_on.bpf.c68
-rw-r--r--tools/testing/selftests/sched_ext/dsp_local_on.c60
-rw-r--r--tools/testing/selftests/sched_ext/enq_last_no_enq_fails.bpf.c29
-rw-r--r--tools/testing/selftests/sched_ext/enq_last_no_enq_fails.c64
-rw-r--r--tools/testing/selftests/sched_ext/enq_select_cpu.bpf.c74
-rw-r--r--tools/testing/selftests/sched_ext/enq_select_cpu.c88
-rw-r--r--tools/testing/selftests/sched_ext/exit.bpf.c86
-rw-r--r--tools/testing/selftests/sched_ext/exit.c64
-rw-r--r--tools/testing/selftests/sched_ext/exit_test.h20
-rw-r--r--tools/testing/selftests/sched_ext/hotplug.bpf.c61
-rw-r--r--tools/testing/selftests/sched_ext/hotplug.c169
-rw-r--r--tools/testing/selftests/sched_ext/hotplug_test.h15
-rw-r--r--tools/testing/selftests/sched_ext/init_enable_count.bpf.c53
-rw-r--r--tools/testing/selftests/sched_ext/init_enable_count.c157
-rw-r--r--tools/testing/selftests/sched_ext/maximal.bpf.c171
-rw-r--r--tools/testing/selftests/sched_ext/maximal.c54
-rw-r--r--tools/testing/selftests/sched_ext/maybe_null.bpf.c36
-rw-r--r--tools/testing/selftests/sched_ext/maybe_null.c49
-rw-r--r--tools/testing/selftests/sched_ext/maybe_null_fail_dsp.bpf.c25
-rw-r--r--tools/testing/selftests/sched_ext/maybe_null_fail_yld.bpf.c28
-rw-r--r--tools/testing/selftests/sched_ext/minimal.bpf.c21
-rw-r--r--tools/testing/selftests/sched_ext/minimal.c58
-rw-r--r--tools/testing/selftests/sched_ext/numa.bpf.c100
-rw-r--r--tools/testing/selftests/sched_ext/numa.c59
-rw-r--r--tools/testing/selftests/sched_ext/prog_run.bpf.c33
-rw-r--r--tools/testing/selftests/sched_ext/prog_run.c78
-rw-r--r--tools/testing/selftests/sched_ext/reload_loop.c74
-rw-r--r--tools/testing/selftests/sched_ext/runner.c212
-rw-r--r--tools/testing/selftests/sched_ext/scx_test.h131
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c40
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dfl.c75
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c89
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c75
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c41
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dispatch.c73
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c37
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.c59
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c38
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.c59
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c92
-rw-r--r--tools/testing/selftests/sched_ext/select_cpu_vtime.c62
-rw-r--r--tools/testing/selftests/sched_ext/test_example.c49
-rw-r--r--tools/testing/selftests/sched_ext/util.c71
-rw-r--r--tools/testing/selftests/sched_ext/util.h13
-rw-r--r--tools/testing/selftests/seccomp/seccomp_benchmark.c46
-rw-r--r--tools/testing/selftests/seccomp/seccomp_bpf.c595
-rw-r--r--tools/testing/selftests/seccomp/settings2
-rw-r--r--tools/testing/selftests/sgx/Makefile2
-rw-r--r--tools/testing/selftests/signal/.gitignore3
-rw-r--r--tools/testing/selftests/signal/Makefile (renamed from tools/testing/selftests/sigaltstack/Makefile)3
-rw-r--r--tools/testing/selftests/signal/current_stack_pointer.h (renamed from tools/testing/selftests/sigaltstack/current_stack_pointer.h)2
-rw-r--r--tools/testing/selftests/signal/mangle_uc_sigmask.c184
-rw-r--r--tools/testing/selftests/signal/sas.c (renamed from tools/testing/selftests/sigaltstack/sas.c)0
-rw-r--r--tools/testing/selftests/sync/sync_test.c3
-rw-r--r--tools/testing/selftests/syscall_user_dispatch/sud_test.c154
-rwxr-xr-xtools/testing/selftests/sysctl/sysctl.sh42
-rw-r--r--tools/testing/selftests/tc-testing/config3
-rwxr-xr-xtools/testing/selftests/tc-testing/scripts/sfq_rejects_limit_1.py21
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json403
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/nat.json14
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/police.json12
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/basic.json6
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/cgroup.json6
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/flow.json6
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/route.json2
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/u32.json24
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/infra/actions.json22
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json965
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/codel.json24
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/drr.json25
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/dualpi2.json254
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fifo.json23
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json24
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_codel.json22
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json22
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/hhf.json22
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/netem.json81
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/pie.json24
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfq.json92
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json46
-rwxr-xr-xtools/testing/selftests/tc-testing/tdc.py3
-rwxr-xr-xtools/testing/selftests/tc-testing/tdc.sh11
-rw-r--r--tools/testing/selftests/thermal/intel/power_floor/.gitignore1
-rw-r--r--tools/testing/selftests/thermal/intel/power_floor/power_floor_test.c2
-rw-r--r--tools/testing/selftests/thermal/intel/workload_hint/.gitignore1
-rw-r--r--tools/testing/selftests/thermal/intel/workload_hint/workload_hint_test.c18
-rw-r--r--tools/testing/selftests/timens/clock_nanosleep.c4
-rw-r--r--tools/testing/selftests/timens/exec.c8
-rw-r--r--tools/testing/selftests/timens/futex.c2
-rw-r--r--tools/testing/selftests/timens/gettime_perf.c2
-rw-r--r--tools/testing/selftests/timens/procfs.c2
-rw-r--r--tools/testing/selftests/timens/timens.c2
-rw-r--r--tools/testing/selftests/timens/timer.c6
-rw-r--r--tools/testing/selftests/timens/timerfd.c8
-rw-r--r--tools/testing/selftests/timens/vfork_exec.c6
-rw-r--r--tools/testing/selftests/timers/Makefile2
-rw-r--r--tools/testing/selftests/timers/adjtick.c10
-rw-r--r--tools/testing/selftests/timers/alarmtimer-suspend.c26
-rw-r--r--tools/testing/selftests/timers/change_skew.c7
-rw-r--r--tools/testing/selftests/timers/clocksource-switch.c6
-rw-r--r--tools/testing/selftests/timers/freq-step.c4
-rw-r--r--tools/testing/selftests/timers/inconsistency-check.c21
-rw-r--r--tools/testing/selftests/timers/leap-a-day.c12
-rw-r--r--tools/testing/selftests/timers/leapcrash.c4
-rw-r--r--tools/testing/selftests/timers/mqueue-lat.c6
-rw-r--r--tools/testing/selftests/timers/nanosleep.c21
-rw-r--r--tools/testing/selftests/timers/nsleep-lat.c22
-rw-r--r--tools/testing/selftests/timers/posix_timers.c687
-rw-r--r--tools/testing/selftests/timers/raw_skew.c10
-rw-r--r--tools/testing/selftests/timers/rtcpie.c3
-rw-r--r--tools/testing/selftests/timers/set-2038.c7
-rw-r--r--tools/testing/selftests/timers/set-tai.c4
-rw-r--r--tools/testing/selftests/timers/set-timer-lat.c25
-rw-r--r--tools/testing/selftests/timers/set-tz.c4
-rw-r--r--tools/testing/selftests/timers/skew_consistency.c8
-rw-r--r--tools/testing/selftests/timers/threadtest.c6
-rw-r--r--tools/testing/selftests/timers/valid-adjtimex.c83
-rw-r--r--tools/testing/selftests/tmpfs/Makefile1
-rw-r--r--tools/testing/selftests/tmpfs/bug-link-o-tmpfile.c41
-rw-r--r--tools/testing/selftests/tpm2/.gitignore3
-rwxr-xr-xtools/testing/selftests/tpm2/test_async.sh2
-rwxr-xr-xtools/testing/selftests/tpm2/test_smoke.sh4
-rwxr-xr-xtools/testing/selftests/tpm2/test_space.sh2
-rw-r--r--tools/testing/selftests/tty/tty_tstamp_update.c48
-rwxr-xr-xtools/testing/selftests/turbostat/added_perf_counters.py178
-rwxr-xr-xtools/testing/selftests/turbostat/defcolumns.py60
-rwxr-xr-xtools/testing/selftests/turbostat/smi_aperf_mperf.py157
-rw-r--r--tools/testing/selftests/ublk/.gitignore3
-rw-r--r--tools/testing/selftests/ublk/Makefile51
-rw-r--r--tools/testing/selftests/ublk/common.c55
-rw-r--r--tools/testing/selftests/ublk/config1
-rw-r--r--tools/testing/selftests/ublk/fault_inject.c106
-rw-r--r--tools/testing/selftests/ublk/file_backed.c182
-rw-r--r--tools/testing/selftests/ublk/kublk.c1715
-rw-r--r--tools/testing/selftests/ublk/kublk.h424
-rw-r--r--tools/testing/selftests/ublk/null.c152
-rw-r--r--tools/testing/selftests/ublk/stripe.c391
-rwxr-xr-xtools/testing/selftests/ublk/test_common.sh384
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_01.sh48
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_02.sh48
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_03.sh28
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_04.sh40
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_05.sh44
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_06.sh41
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_07.sh28
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_08.sh32
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_09.sh28
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_10.sh30
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_11.sh44
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_12.sh59
-rwxr-xr-xtools/testing/selftests/ublk/test_generic_13.sh20
-rwxr-xr-xtools/testing/selftests/ublk/test_loop_01.sh26
-rwxr-xr-xtools/testing/selftests/ublk/test_loop_02.sh20
-rwxr-xr-xtools/testing/selftests/ublk/test_loop_03.sh25
-rwxr-xr-xtools/testing/selftests/ublk/test_loop_04.sh21
-rwxr-xr-xtools/testing/selftests/ublk/test_loop_05.sh26
-rwxr-xr-xtools/testing/selftests/ublk/test_null_01.sh24
-rwxr-xr-xtools/testing/selftests/ublk/test_null_02.sh24
-rwxr-xr-xtools/testing/selftests/ublk/test_stress_01.sh34
-rwxr-xr-xtools/testing/selftests/ublk/test_stress_02.sh36
-rwxr-xr-xtools/testing/selftests/ublk/test_stress_03.sh54
-rwxr-xr-xtools/testing/selftests/ublk/test_stress_04.sh51
-rwxr-xr-xtools/testing/selftests/ublk/test_stress_05.sh84
-rwxr-xr-xtools/testing/selftests/ublk/test_stripe_01.sh26
-rwxr-xr-xtools/testing/selftests/ublk/test_stripe_02.sh21
-rwxr-xr-xtools/testing/selftests/ublk/test_stripe_03.sh26
-rwxr-xr-xtools/testing/selftests/ublk/test_stripe_04.sh21
-rw-r--r--tools/testing/selftests/ublk/trace/count_ios_per_tid.bt11
-rw-r--r--tools/testing/selftests/ublk/trace/seq_io.bt25
-rw-r--r--tools/testing/selftests/ublk/ublk_dep.h18
-rw-r--r--tools/testing/selftests/ublk/utils.h68
-rw-r--r--tools/testing/selftests/uevent/.gitignore1
-rw-r--r--tools/testing/selftests/user/config1
-rwxr-xr-xtools/testing/selftests/user/test_user_copy.sh18
-rw-r--r--tools/testing/selftests/user_events/abi_test.c134
-rw-r--r--tools/testing/selftests/user_events/dyn_test.c2
-rw-r--r--tools/testing/selftests/user_events/ftrace_test.c8
-rw-r--r--tools/testing/selftests/vDSO/.gitignore3
-rw-r--r--tools/testing/selftests/vDSO/Makefile57
-rw-r--r--tools/testing/selftests/vDSO/parse_vdso.c159
-rw-r--r--tools/testing/selftests/vDSO/parse_vdso.h1
-rw-r--r--tools/testing/selftests/vDSO/vdso_call.h69
-rw-r--r--tools/testing/selftests/vDSO/vdso_config.h26
l---------[-rw-r--r--]tools/testing/selftests/vDSO/vdso_standalone_test_x86.c127
-rw-r--r--tools/testing/selftests/vDSO/vdso_test_abi.c115
-rw-r--r--tools/testing/selftests/vDSO/vdso_test_chacha.c133
-rw-r--r--tools/testing/selftests/vDSO/vdso_test_clock_getres.c124
-rw-r--r--tools/testing/selftests/vDSO/vdso_test_correctness.c23
-rw-r--r--tools/testing/selftests/vDSO/vdso_test_getcpu.c19
-rw-r--r--tools/testing/selftests/vDSO/vdso_test_getrandom.c322
-rw-r--r--tools/testing/selftests/vDSO/vdso_test_gettimeofday.c33
-rw-r--r--tools/testing/selftests/vDSO/vgetrandom-chacha.S20
-rw-r--r--tools/testing/selftests/vfio/.gitignore10
-rw-r--r--tools/testing/selftests/vfio/Makefile21
-rw-r--r--tools/testing/selftests/vfio/lib/drivers/dsa/dsa.c416
l---------tools/testing/selftests/vfio/lib/drivers/dsa/registers.h1
l---------tools/testing/selftests/vfio/lib/drivers/ioat/hw.h1
-rw-r--r--tools/testing/selftests/vfio/lib/drivers/ioat/ioat.c235
l---------tools/testing/selftests/vfio/lib/drivers/ioat/registers.h1
-rw-r--r--tools/testing/selftests/vfio/lib/include/vfio_util.h295
-rw-r--r--tools/testing/selftests/vfio/lib/libvfio.mk24
-rw-r--r--tools/testing/selftests/vfio/lib/vfio_pci_device.c594
-rw-r--r--tools/testing/selftests/vfio/lib/vfio_pci_driver.c126
-rwxr-xr-xtools/testing/selftests/vfio/run.sh109
-rw-r--r--tools/testing/selftests/vfio/vfio_dma_mapping_test.c199
-rw-r--r--tools/testing/selftests/vfio/vfio_iommufd_setup_test.c127
-rw-r--r--tools/testing/selftests/vfio/vfio_pci_device_test.c176
-rw-r--r--tools/testing/selftests/vfio/vfio_pci_driver_test.c244
-rw-r--r--tools/testing/selftests/vsock/.gitignore2
-rw-r--r--tools/testing/selftests/vsock/Makefile17
-rw-r--r--tools/testing/selftests/vsock/config111
-rw-r--r--tools/testing/selftests/vsock/settings1
-rwxr-xr-xtools/testing/selftests/vsock/vmtest.sh487
-rw-r--r--tools/testing/selftests/watchdog/watchdog-test.c12
-rwxr-xr-xtools/testing/selftests/wireguard/netns.sh30
-rw-r--r--tools/testing/selftests/wireguard/qemu/Makefile11
-rw-r--r--tools/testing/selftests/wireguard/qemu/arch/riscv32.config3
-rw-r--r--tools/testing/selftests/wireguard/qemu/arch/riscv64.config3
-rw-r--r--tools/testing/selftests/wireguard/qemu/debug.config3
-rw-r--r--tools/testing/selftests/wireguard/qemu/kernel.config7
-rw-r--r--tools/testing/selftests/x86/Makefile42
-rw-r--r--tools/testing/selftests/x86/amx.c479
-rw-r--r--tools/testing/selftests/x86/apx.c10
-rw-r--r--tools/testing/selftests/x86/avx.c12
-rw-r--r--tools/testing/selftests/x86/bugs/Makefile3
-rwxr-xr-xtools/testing/selftests/x86/bugs/common.py164
-rwxr-xr-xtools/testing/selftests/x86/bugs/its_indirect_alignment.py150
-rwxr-xr-xtools/testing/selftests/x86/bugs/its_permutations.py109
-rwxr-xr-xtools/testing/selftests/x86/bugs/its_ret_alignment.py139
-rwxr-xr-xtools/testing/selftests/x86/bugs/its_sysfs.py65
-rw-r--r--tools/testing/selftests/x86/clang_helpers_32.S11
-rw-r--r--tools/testing/selftests/x86/clang_helpers_64.S28
-rw-r--r--tools/testing/selftests/x86/corrupt_xstate_header.c14
-rw-r--r--tools/testing/selftests/x86/entry_from_vm86.c24
-rw-r--r--tools/testing/selftests/x86/fsgsbase.c30
-rw-r--r--tools/testing/selftests/x86/fsgsbase_restore.c11
-rw-r--r--tools/testing/selftests/x86/helpers.h28
-rw-r--r--tools/testing/selftests/x86/ioperm.c25
-rw-r--r--tools/testing/selftests/x86/iopl.c25
-rw-r--r--tools/testing/selftests/x86/lam.c166
-rw-r--r--tools/testing/selftests/x86/ldt_gdt.c18
-rw-r--r--tools/testing/selftests/x86/mov_ss_trap.c14
-rw-r--r--tools/testing/selftests/x86/ptrace_syscall.c24
-rw-r--r--tools/testing/selftests/x86/sigaltstack.c26
-rw-r--r--tools/testing/selftests/x86/sigreturn.c26
-rw-r--r--tools/testing/selftests/x86/sigtrap_loop.c101
-rw-r--r--tools/testing/selftests/x86/single_step_syscall.c22
-rw-r--r--tools/testing/selftests/x86/srso.c70
-rw-r--r--tools/testing/selftests/x86/syscall_arg_fault.c13
-rw-r--r--tools/testing/selftests/x86/syscall_nt.c12
-rw-r--r--tools/testing/selftests/x86/syscall_numbering.c3
-rw-r--r--tools/testing/selftests/x86/sysret_rip.c42
-rw-r--r--tools/testing/selftests/x86/test_FISTTP.c8
-rw-r--r--tools/testing/selftests/x86/test_mremap_vdso.c86
-rw-r--r--tools/testing/selftests/x86/test_shadow_stack.c212
-rw-r--r--tools/testing/selftests/x86/test_vsyscall.c498
-rw-r--r--tools/testing/selftests/x86/unwind_vdso.c12
-rw-r--r--tools/testing/selftests/x86/vdso_restorer.c2
-rw-r--r--tools/testing/selftests/x86/xstate.c478
-rw-r--r--tools/testing/selftests/x86/xstate.h197
-rw-r--r--tools/testing/selftests/zram/.gitignore (renamed from tools/testing/selftests/sigaltstack/.gitignore)2
-rw-r--r--tools/testing/selftests/zram/README1
-rw-r--r--tools/testing/shared/autoconf.h (renamed from tools/testing/radix-tree/generated/autoconf.h)0
-rw-r--r--tools/testing/shared/interval_tree-shim.c5
-rw-r--r--tools/testing/shared/linux.c (renamed from tools/testing/radix-tree/linux.c)154
-rw-r--r--tools/testing/shared/linux/bug.h (renamed from tools/testing/radix-tree/linux/bug.h)0
-rw-r--r--tools/testing/shared/linux/cleanup.h2
-rw-r--r--tools/testing/shared/linux/cpu.h (renamed from tools/testing/radix-tree/linux/cpu.h)0
-rw-r--r--tools/testing/shared/linux/idr.h5
-rw-r--r--tools/testing/shared/linux/interval_tree.h7
-rw-r--r--tools/testing/shared/linux/interval_tree_generic.h2
-rw-r--r--tools/testing/shared/linux/kconfig.h (renamed from tools/testing/radix-tree/linux/kconfig.h)0
-rw-r--r--tools/testing/shared/linux/kernel.h (renamed from tools/testing/radix-tree/linux/kernel.h)2
-rw-r--r--tools/testing/shared/linux/kmemleak.h (renamed from tools/testing/radix-tree/linux/kmemleak.h)0
-rw-r--r--tools/testing/shared/linux/local_lock.h (renamed from tools/testing/radix-tree/linux/local_lock.h)0
-rw-r--r--tools/testing/shared/linux/lockdep.h (renamed from tools/testing/radix-tree/linux/lockdep.h)0
-rw-r--r--tools/testing/shared/linux/maple_tree.h5
-rw-r--r--tools/testing/shared/linux/percpu.h (renamed from tools/testing/radix-tree/linux/percpu.h)0
-rw-r--r--tools/testing/shared/linux/preempt.h (renamed from tools/testing/radix-tree/linux/preempt.h)0
-rw-r--r--tools/testing/shared/linux/radix-tree.h (renamed from tools/testing/radix-tree/linux/radix-tree.h)0
-rw-r--r--tools/testing/shared/linux/rbtree.h8
-rw-r--r--tools/testing/shared/linux/rbtree_augmented.h7
-rw-r--r--tools/testing/shared/linux/rbtree_types.h8
-rw-r--r--tools/testing/shared/linux/rcupdate.h (renamed from tools/testing/radix-tree/linux/rcupdate.h)0
-rw-r--r--tools/testing/shared/linux/xarray.h (renamed from tools/testing/radix-tree/linux/xarray.h)0
-rw-r--r--tools/testing/shared/maple-shared.h24
-rw-r--r--tools/testing/shared/maple-shim.c14
-rw-r--r--tools/testing/shared/rbtree-shim.c6
-rw-r--r--tools/testing/shared/shared.h37
-rw-r--r--tools/testing/shared/shared.mk79
-rw-r--r--tools/testing/shared/trace/events/maple_tree.h (renamed from tools/testing/radix-tree/trace/events/maple_tree.h)0
-rw-r--r--tools/testing/shared/xarray-shared.c5
-rw-r--r--tools/testing/shared/xarray-shared.h8
-rw-r--r--tools/testing/vma/.gitignore7
-rw-r--r--tools/testing/vma/Makefile18
-rw-r--r--tools/testing/vma/linux/mmzone.h38
-rw-r--r--tools/testing/vma/vma.c1715
-rw-r--r--tools/testing/vma/vma_internal.h1410
-rw-r--r--tools/testing/vsock/Makefile14
-rw-r--r--tools/testing/vsock/README15
-rw-r--r--tools/testing/vsock/control.c9
-rw-r--r--tools/testing/vsock/msg_zerocopy_common.c10
-rw-r--r--tools/testing/vsock/msg_zerocopy_common.h1
-rw-r--r--tools/testing/vsock/timeout.c18
-rw-r--r--tools/testing/vsock/timeout.h1
-rw-r--r--tools/testing/vsock/util.c409
-rw-r--r--tools/testing/vsock/util.h55
-rw-r--r--tools/testing/vsock/vsock_diag_test.c23
-rw-r--r--tools/testing/vsock/vsock_perf.c20
-rw-r--r--tools/testing/vsock/vsock_test.c1086
-rw-r--r--tools/testing/vsock/vsock_test_zerocopy.c14
-rw-r--r--tools/testing/vsock/vsock_uring_test.c19
-rw-r--r--tools/thermal/lib/Makefile15
-rw-r--r--tools/thermal/thermal-engine/thermal-engine.c105
-rw-r--r--tools/thermal/thermometer/thermometer.c7
-rw-r--r--tools/tracing/latency/.gitignore5
-rw-r--r--tools/tracing/latency/Build1
-rw-r--r--tools/tracing/latency/Makefile99
-rw-r--r--tools/tracing/latency/Makefile.config39
-rw-r--r--tools/tracing/latency/latency-collector.c8
-rw-r--r--tools/tracing/rtla/.gitignore8
-rw-r--r--tools/tracing/rtla/Build1
-rw-r--r--tools/tracing/rtla/Makefile235
-rw-r--r--tools/tracing/rtla/Makefile.config108
-rw-r--r--tools/tracing/rtla/Makefile.rtla93
-rw-r--r--tools/tracing/rtla/Makefile.standalone26
-rw-r--r--tools/tracing/rtla/README.txt11
-rw-r--r--tools/tracing/rtla/sample/timerlat_load.py78
-rw-r--r--tools/tracing/rtla/src/Build14
-rw-r--r--tools/tracing/rtla/src/actions.c260
-rw-r--r--tools/tracing/rtla/src/actions.h52
-rw-r--r--tools/tracing/rtla/src/common.c344
-rw-r--r--tools/tracing/rtla/src/common.h154
-rw-r--r--tools/tracing/rtla/src/osnoise.c159
-rw-r--r--tools/tracing/rtla/src/osnoise.h86
-rw-r--r--tools/tracing/rtla/src/osnoise_hist.c473
-rw-r--r--tools/tracing/rtla/src/osnoise_top.c416
-rw-r--r--tools/tracing/rtla/src/timerlat.bpf.c154
-rw-r--r--tools/tracing/rtla/src/timerlat.c230
-rw-r--r--tools/tracing/rtla/src/timerlat.h39
-rw-r--r--tools/tracing/rtla/src/timerlat_aa.c111
-rw-r--r--tools/tracing/rtla/src/timerlat_bpf.c180
-rw-r--r--tools/tracing/rtla/src/timerlat_bpf.h62
-rw-r--r--tools/tracing/rtla/src/timerlat_hist.c975
-rw-r--r--tools/tracing/rtla/src/timerlat_top.c844
-rw-r--r--tools/tracing/rtla/src/trace.c74
-rw-r--r--tools/tracing/rtla/src/trace.h8
-rw-r--r--tools/tracing/rtla/src/utils.c188
-rw-r--r--tools/tracing/rtla/src/utils.h21
-rw-r--r--tools/tracing/rtla/tests/engine.sh140
-rw-r--r--tools/tracing/rtla/tests/hwnoise.t22
-rw-r--r--tools/tracing/rtla/tests/osnoise.t50
-rwxr-xr-xtools/tracing/rtla/tests/scripts/check-priority.sh8
-rw-r--r--tools/tracing/rtla/tests/timerlat.t72
-rwxr-xr-xtools/usb/p9_fwd.py243
-rw-r--r--tools/usb/usbip/src/usbip_detach.c1
-rw-r--r--tools/usb/usbip/src/usbipd.c4
-rw-r--r--tools/verification/dot2/Makefile26
-rw-r--r--tools/verification/dot2/dot2k45
-rw-r--r--tools/verification/dot2/dot2k.py177
-rw-r--r--tools/verification/dot2/dot2k_templates/main_global.c91
-rw-r--r--tools/verification/dot2/dot2k_templates/main_per_cpu.c91
-rw-r--r--tools/verification/dot2/dot2k_templates/main_per_task.c91
-rw-r--r--tools/verification/models/rtapp/pagefault.ltl1
-rw-r--r--tools/verification/models/rtapp/sleep.ltl22
-rw-r--r--tools/verification/models/sched/nrp.dot29
-rw-r--r--tools/verification/models/sched/opid.dot35
-rw-r--r--tools/verification/models/sched/sco.dot18
-rw-r--r--tools/verification/models/sched/scpd.dot18
-rw-r--r--tools/verification/models/sched/snep.dot18
-rw-r--r--tools/verification/models/sched/snroc.dot18
-rw-r--r--tools/verification/models/sched/sssw.dot30
-rw-r--r--tools/verification/models/sched/sts.dot38
-rw-r--r--tools/verification/rv/.gitignore6
-rw-r--r--tools/verification/rv/Build1
-rw-r--r--tools/verification/rv/Makefile203
-rw-r--r--tools/verification/rv/Makefile.config48
-rw-r--r--tools/verification/rv/Makefile.rv51
-rw-r--r--tools/verification/rv/include/in_kernel.h2
-rw-r--r--tools/verification/rv/include/rv.h3
-rw-r--r--tools/verification/rv/src/Build4
-rw-r--r--tools/verification/rv/src/in_kernel.c264
-rw-r--r--tools/verification/rv/src/rv.c39
-rw-r--r--tools/verification/rv/src/trace.c2
-rw-r--r--tools/verification/rvgen/.gitignore3
-rw-r--r--tools/verification/rvgen/Makefile27
-rw-r--r--tools/verification/rvgen/__main__.py67
-rw-r--r--tools/verification/rvgen/dot2c (renamed from tools/verification/dot2/dot2c)2
-rw-r--r--tools/verification/rvgen/rvgen/automata.py (renamed from tools/verification/dot2/automata.py)54
-rw-r--r--tools/verification/rvgen/rvgen/container.py32
-rw-r--r--tools/verification/rvgen/rvgen/dot2c.py (renamed from tools/verification/dot2/dot2c.py)26
-rw-r--r--tools/verification/rvgen/rvgen/dot2k.py129
-rw-r--r--tools/verification/rvgen/rvgen/generator.py270
-rw-r--r--tools/verification/rvgen/rvgen/ltl2ba.py566
-rw-r--r--tools/verification/rvgen/rvgen/ltl2k.py271
-rw-r--r--tools/verification/rvgen/rvgen/templates/Kconfig9
-rw-r--r--tools/verification/rvgen/rvgen/templates/container/Kconfig5
-rw-r--r--tools/verification/rvgen/rvgen/templates/container/main.c37
-rw-r--r--tools/verification/rvgen/rvgen/templates/container/main.h3
-rw-r--r--tools/verification/rvgen/rvgen/templates/dot2k/main.c90
-rw-r--r--tools/verification/rvgen/rvgen/templates/dot2k/trace.h13
-rw-r--r--tools/verification/rvgen/rvgen/templates/ltl2k/main.c102
-rw-r--r--tools/verification/rvgen/rvgen/templates/ltl2k/trace.h14
-rw-r--r--tools/virtio/.gitignore1
-rw-r--r--tools/virtio/Makefile8
-rw-r--r--tools/virtio/linux/compiler.h25
-rw-r--r--tools/virtio/linux/dma-mapping.h13
-rw-r--r--tools/virtio/linux/kmsan.h2
-rw-r--r--tools/virtio/linux/module.h7
-rw-r--r--tools/virtio/linux/virtio_config.h4
-rw-r--r--tools/virtio/ringtest/main.c2
-rw-r--r--tools/virtio/vhost_net_test.c532
-rw-r--r--tools/virtio/vringh_test.c11
-rw-r--r--tools/workqueue/wq_dump.py104
-rw-r--r--tools/workqueue/wq_monitor.py9
-rw-r--r--tools/writeback/wb_monitor.py172
5203 files changed, 530655 insertions, 94296 deletions
diff --git a/tools/Makefile b/tools/Makefile
index 37e9f6804832..c31cbbd12c45 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -11,7 +11,6 @@ help:
@echo ''
@echo ' acpi - ACPI tools'
@echo ' bpf - misc BPF tools'
- @echo ' cgroup - cgroup tools'
@echo ' counter - counter tools'
@echo ' cpupower - a tool for all things x86 CPU power'
@echo ' debugging - tools for debugging'
@@ -26,9 +25,9 @@ help:
@echo ' leds - LEDs tools'
@echo ' nolibc - nolibc headers testing and installation'
@echo ' objtool - an ELF object analysis tool'
- @echo ' pci - PCI tools'
@echo ' perf - Linux performance measurement and analysis tool'
@echo ' selftests - various kernel selftests'
+ @echo ' sched_ext - sched_ext example schedulers'
@echo ' bootconfig - boot config tool'
@echo ' spi - spi tools'
@echo ' tmon - thermal monitoring and tuning tool'
@@ -42,6 +41,7 @@ help:
@echo ' mm - misc mm tools'
@echo ' wmi - WMI interface examples'
@echo ' x86_energy_perf_policy - Intel energy policy tool'
+ @echo ' ynl - ynl headers, library, and python tool'
@echo ''
@echo 'You can do:'
@echo ' $$ make -C tools/ <tool>_install'
@@ -69,7 +69,7 @@ acpi: FORCE
cpupower: FORCE
$(call descend,power/$@)
-cgroup counter firewire hv guest bootconfig spi usb virtio mm bpf iio gpio objtool leds wmi pci firmware debugging tracing: FORCE
+counter firewire hv guest bootconfig spi usb virtio mm bpf iio gpio objtool leds wmi firmware debugging tracing: FORCE
$(call descend,$@)
bpf/%: FORCE
@@ -92,6 +92,9 @@ perf: FORCE
$(Q)mkdir -p $(PERF_O) .
$(Q)$(MAKE) --no-print-directory -C perf O=$(PERF_O) subdir=
+sched_ext: FORCE
+ $(call descend,sched_ext)
+
selftests: FORCE
$(call descend,testing/$@)
@@ -116,11 +119,14 @@ freefall: FORCE
kvm_stat: FORCE
$(call descend,kvm/$@)
-all: acpi cgroup counter cpupower gpio hv firewire \
+ynl: FORCE
+ $(call descend,net/ynl)
+
+all: acpi counter cpupower gpio hv firewire \
perf selftests bootconfig spi turbostat usb \
virtio mm bpf x86_energy_perf_policy \
tmon freefall iio objtool kvm_stat wmi \
- pci debugging tracing thermal thermometer thermal-engine
+ debugging tracing thermal thermometer thermal-engine ynl
acpi_install:
$(call descend,power/$(@:_install=),install)
@@ -128,7 +134,7 @@ acpi_install:
cpupower_install:
$(call descend,power/$(@:_install=),install)
-cgroup_install counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install mm_install bpf_install objtool_install wmi_install pci_install debugging_install tracing_install:
+counter_install firewire_install gpio_install hv_install iio_install perf_install bootconfig_install spi_install usb_install virtio_install mm_install bpf_install objtool_install wmi_install debugging_install tracing_install:
$(call descend,$(@:_install=),install)
selftests_install:
@@ -155,13 +161,16 @@ freefall_install:
kvm_stat_install:
$(call descend,kvm/$(@:_install=),install)
-install: acpi_install cgroup_install counter_install cpupower_install gpio_install \
+ynl_install:
+ $(call descend,net/$(@:_install=),install)
+
+install: acpi_install counter_install cpupower_install gpio_install \
hv_install firewire_install iio_install \
perf_install selftests_install turbostat_install usb_install \
virtio_install mm_install bpf_install x86_energy_perf_policy_install \
tmon_install freefall_install objtool_install kvm_stat_install \
- wmi_install pci_install debugging_install intel-speed-select_install \
- tracing_install thermometer_install thermal-engine_install
+ wmi_install debugging_install intel-speed-select_install \
+ tracing_install thermometer_install thermal-engine_install ynl_install
acpi_clean:
$(call descend,power/acpi,clean)
@@ -169,7 +178,7 @@ acpi_clean:
cpupower_clean:
$(call descend,power/cpupower,clean)
-cgroup_clean counter_clean hv_clean firewire_clean bootconfig_clean spi_clean usb_clean virtio_clean mm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean firmware_clean debugging_clean tracing_clean:
+counter_clean hv_clean firewire_clean bootconfig_clean spi_clean usb_clean virtio_clean mm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean firmware_clean debugging_clean tracing_clean:
$(call descend,$(@:_clean=),clean)
libapi_clean:
@@ -185,6 +194,9 @@ perf_clean:
$(Q)mkdir -p $(PERF_O) .
$(Q)$(MAKE) --no-print-directory -C perf O=$(PERF_O) subdir= clean
+sched_ext_clean:
+ $(call descend,sched_ext,clean)
+
selftests_clean:
$(call descend,testing/$(@:_clean=),clean)
@@ -209,11 +221,15 @@ freefall_clean:
build_clean:
$(call descend,build,clean)
-clean: acpi_clean cgroup_clean counter_clean cpupower_clean hv_clean firewire_clean \
+ynl_clean:
+ $(call descend,net/$(@:_clean=),clean)
+
+clean: acpi_clean counter_clean cpupower_clean hv_clean firewire_clean \
perf_clean selftests_clean turbostat_clean bootconfig_clean spi_clean usb_clean virtio_clean \
mm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \
freefall_clean build_clean libbpf_clean libsubcmd_clean \
- gpio_clean objtool_clean leds_clean wmi_clean pci_clean firmware_clean debugging_clean \
- intel-speed-select_clean tracing_clean thermal_clean thermometer_clean thermal-engine_clean
+ gpio_clean objtool_clean leds_clean wmi_clean firmware_clean debugging_clean \
+ intel-speed-select_clean tracing_clean thermal_clean thermometer_clean thermal-engine_clean \
+ sched_ext_clean ynl_clean
.PHONY: FORCE
diff --git a/tools/accounting/Makefile b/tools/accounting/Makefile
index 11def1ad046c..20bbd461515e 100644
--- a/tools/accounting/Makefile
+++ b/tools/accounting/Makefile
@@ -2,7 +2,7 @@
CC := $(CROSS_COMPILE)gcc
CFLAGS := -I../../usr/include
-PROGS := getdelays procacct
+PROGS := getdelays procacct delaytop
all: $(PROGS)
diff --git a/tools/accounting/delaytop.c b/tools/accounting/delaytop.c
new file mode 100644
index 000000000000..72cc500b44b1
--- /dev/null
+++ b/tools/accounting/delaytop.c
@@ -0,0 +1,1145 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * delaytop.c - system-wide delay monitoring tool.
+ *
+ * This tool provides real-time monitoring and statistics of
+ * system, container, and task-level delays, including CPU,
+ * memory, IO, and IRQ. It supports both interactive (top-like),
+ * and can output delay information for the whole system, specific
+ * containers (cgroups), or individual tasks (PIDs).
+ *
+ * Key features:
+ * - Collects per-task delay accounting statistics via taskstats.
+ * - Collects system-wide PSI information.
+ * - Supports sorting and filtering.
+ * - Supports both interactive (screen refresh) and one-shot output.
+ *
+ * Copyright (C) Fan Yu, ZTE Corp. 2025
+ * Copyright (C) Wang Yaxin, ZTE Corp. 2025
+ *
+ * Compile with
+ * gcc -I/usr/src/linux/include delaytop.c -o delaytop
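+ *
+ * Example (illustrative): ./delaytop -d 1 -n 10 -s mem refreshes
+ * every second, ten times, sorted by memory delay.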
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <getopt.h>
+#include <signal.h>
+#include <time.h>
+#include <dirent.h>
+#include <ctype.h>
+#include <stdbool.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/socket.h>
+#include <sys/select.h>
+#include <termios.h>
+#include <limits.h>
+#include <linux/genetlink.h>
+#include <linux/taskstats.h>
+#include <linux/cgroupstats.h>
+#include <stddef.h>
+
+#define PSI_PATH "/proc/pressure"
+#define PSI_CPU_PATH "/proc/pressure/cpu"
+#define PSI_MEMORY_PATH "/proc/pressure/memory"
+#define PSI_IO_PATH "/proc/pressure/io"
+#define PSI_IRQ_PATH "/proc/pressure/irq"
+
+#define NLA_NEXT(na) ((struct nlattr *)((char *)(na) + NLA_ALIGN((na)->nla_len)))
+#define NLA_DATA(na) ((void *)((char *)(na) + NLA_HDRLEN))
+#define NLA_PAYLOAD(len) (len - NLA_HDRLEN)
+
+#define GENLMSG_DATA(glh) ((void *)(NLMSG_DATA(glh) + GENL_HDRLEN))
+#define GENLMSG_PAYLOAD(glh) (NLMSG_PAYLOAD(glh, 0) - GENL_HDRLEN)
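+
+/*
+ * Layout note: the helpers above walk the usual generic netlink framing:
+ * struct nlmsghdr, then struct genlmsghdr (GENL_HDRLEN), then a sequence
+ * of struct nlattr headers, each followed by its payload and padded to
+ * NLA_ALIGN. NLA_NEXT() steps to the next attribute, while NLA_DATA()
+ * and NLA_PAYLOAD() expose the current attribute's payload and length.
+ */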
+
+#define TASK_COMM_LEN 16
+#define MAX_MSG_SIZE 1024
+#define MAX_TASKS 1000
+#define MAX_BUF_LEN 256
+#define SET_TASK_STAT(task_count, field) tasks[task_count].field = stats.field
+#define BOOL_FPRINT(stream, fmt, ...) \
+({ \
+ int ret = fprintf(stream, fmt, ##__VA_ARGS__); \
+ ret >= 0; \
+})
+#define TASK_AVG(task, field) average_ms((task).field##_delay_total, (task).field##_count)
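+/*
+ * BOOL_FPRINT() is a GCC statement expression that evaluates to true when
+ * the fprintf() call succeeded (returned >= 0). TASK_AVG(task, field)
+ * expands to average_ms((task).field_delay_total, (task).field_count),
+ * i.e. the average delay per recorded event for that field.
+ */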
+#define PSI_LINE_FORMAT "%-12s %6.1f%%/%6.1f%%/%6.1f%%/%8llu(ms)\n"
+#define DELAY_FMT_DEFAULT "%8.2f %8.2f %8.2f %8.2f\n"
+#define DELAY_FMT_MEMVERBOSE "%8.2f %8.2f %8.2f %8.2f %8.2f %8.2f\n"
+#define SORT_FIELD(name, cmd, modes) \
+ {#name, #cmd, \
+ offsetof(struct task_info, name##_delay_total), \
+ offsetof(struct task_info, name##_count), \
+ modes}
+#define END_FIELD {NULL, 0, 0}
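+/*
+ * Worked example: SORT_FIELD(cpu, c, MODE_DEFAULT) expands to
+ *   { "cpu", "c",
+ *     offsetof(struct task_info, cpu_delay_total),
+ *     offsetof(struct task_info, cpu_count),
+ *     MODE_DEFAULT },
+ * and END_FIELD provides the NULL-name sentinel that terminates the
+ * sort_fields[] table defined below.
+ */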
+
+/* Display mode types */
+#define MODE_TYPE_ALL (0xFFFFFFFF)
+#define MODE_DEFAULT (1 << 0)
+#define MODE_MEMVERBOSE (1 << 1)
+
+/* PSI statistics structure */
+struct psi_stats {
+ double cpu_some_avg10, cpu_some_avg60, cpu_some_avg300;
+ unsigned long long cpu_some_total;
+ double cpu_full_avg10, cpu_full_avg60, cpu_full_avg300;
+ unsigned long long cpu_full_total;
+ double memory_some_avg10, memory_some_avg60, memory_some_avg300;
+ unsigned long long memory_some_total;
+ double memory_full_avg10, memory_full_avg60, memory_full_avg300;
+ unsigned long long memory_full_total;
+ double io_some_avg10, io_some_avg60, io_some_avg300;
+ unsigned long long io_some_total;
+ double io_full_avg10, io_full_avg60, io_full_avg300;
+ unsigned long long io_full_total;
+ double irq_full_avg10, irq_full_avg60, irq_full_avg300;
+ unsigned long long irq_full_total;
+};
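+
+/*
+ * The fields above mirror the /proc/pressure/* line format, e.g.:
+ *   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
+ *   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
+ * (the irq file reports only a "full" line).
+ */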
+
+/* Task delay information structure */
+struct task_info {
+ int pid;
+ int tgid;
+ char command[TASK_COMM_LEN];
+ unsigned long long cpu_count;
+ unsigned long long cpu_delay_total;
+ unsigned long long blkio_count;
+ unsigned long long blkio_delay_total;
+ unsigned long long swapin_count;
+ unsigned long long swapin_delay_total;
+ unsigned long long freepages_count;
+ unsigned long long freepages_delay_total;
+ unsigned long long thrashing_count;
+ unsigned long long thrashing_delay_total;
+ unsigned long long compact_count;
+ unsigned long long compact_delay_total;
+ unsigned long long wpcopy_count;
+ unsigned long long wpcopy_delay_total;
+ unsigned long long irq_count;
+ unsigned long long irq_delay_total;
+ unsigned long long mem_count;
+ unsigned long long mem_delay_total;
+};
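+
+/*
+ * Note: the *_delay_total / *_count pairs follow the taskstats delay
+ * accounting naming (see linux/taskstats.h); totals are cumulative delay
+ * times and counts are the number of delay events, so TASK_AVG() above
+ * yields a per-event average.
+ */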
+
+/* Container statistics structure */
+struct container_stats {
+ int nr_sleeping; /* Number of sleeping processes */
+ int nr_running; /* Number of running processes */
+ int nr_stopped; /* Number of stopped processes */
+ int nr_uninterruptible; /* Number of uninterruptible processes */
+ int nr_io_wait; /* Number of processes in IO wait */
+};
+
+/* Delay field structure */
+struct field_desc {
+ const char *name; /* Field name for cmdline argument */
+ const char *cmd_char; /* Interactive command */
+ unsigned long total_offset; /* Offset of total delay in task_info */
+ unsigned long count_offset; /* Offset of count in task_info */
+ size_t supported_modes; /* Supported display modes */
+};
+
+/* Program settings structure */
+struct config {
+ int delay; /* Update interval in seconds */
+ int iterations; /* Number of iterations, 0 == infinite */
+ int max_processes; /* Maximum number of processes to show */
+ int output_one_time; /* Output once and exit */
+ int monitor_pid; /* Monitor specific PID */
+ char *container_path; /* Path to container cgroup */
+ const struct field_desc *sort_field; /* Current sort field */
+ size_t display_mode; /* Current display mode */
+};
+
+/* Global variables */
+static struct config cfg;
+static struct psi_stats psi;
+static struct task_info tasks[MAX_TASKS];
+static int task_count;
+static int running = 1;
+static struct container_stats container_stats;
+static const struct field_desc sort_fields[] = {
+ SORT_FIELD(cpu, c, MODE_DEFAULT),
+ SORT_FIELD(blkio, i, MODE_DEFAULT),
+ SORT_FIELD(irq, q, MODE_DEFAULT),
+ SORT_FIELD(mem, m, MODE_DEFAULT | MODE_MEMVERBOSE),
+ SORT_FIELD(swapin, s, MODE_MEMVERBOSE),
+ SORT_FIELD(freepages, r, MODE_MEMVERBOSE),
+ SORT_FIELD(thrashing, t, MODE_MEMVERBOSE),
+ SORT_FIELD(compact, p, MODE_MEMVERBOSE),
+ SORT_FIELD(wpcopy, w, MODE_MEMVERBOSE),
+ END_FIELD
+};
+static int sort_selected;
+
+/* Netlink socket variables */
+static int nl_sd = -1;
+static int family_id;
+
+/* Switch the terminal to non-canonical, no-echo mode for interactive key handling */
+static struct termios orig_termios;
+static void enable_raw_mode(void)
+{
+ struct termios raw;
+
+ tcgetattr(STDIN_FILENO, &orig_termios);
+ raw = orig_termios;
+ raw.c_lflag &= ~(ICANON | ECHO);
+ tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);
+}
+static void disable_raw_mode(void)
+{
+ tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig_termios);
+}
+
+/* Find field descriptor by its interactive command character */
+static const struct field_desc *get_field_by_cmd_char(char ch)
+{
+ const struct field_desc *field;
+
+ for (field = sort_fields; field->name != NULL; field++) {
+ if (field->cmd_char[0] == ch)
+ return field;
+ }
+
+ return NULL;
+}
+
+/* Find field descriptor by exact name match */
+static const struct field_desc *get_field_by_name(const char *name)
+{
+ const struct field_desc *field;
+ size_t field_len;
+
+ for (field = sort_fields; field->name != NULL; field++) {
+ field_len = strlen(field->name);
+ if (field_len != strlen(name))
+ continue;
+ if (strncmp(field->name, name, field_len) == 0)
+ return field;
+ }
+
+ return NULL;
+}
+
+/* Find display name for a field descriptor */
+static const char *get_name_by_field(const struct field_desc *field)
+{
+ return field ? field->name : "UNKNOWN";
+}
+
+/* Print the field names available in the given display mode */
+static void display_available_fields(size_t mode)
+{
+ const struct field_desc *field;
+ char buf[MAX_BUF_LEN];
+
+ buf[0] = '\0';
+
+ for (field = sort_fields; field->name != NULL; field++) {
+ if (!(field->supported_modes & mode))
+ continue;
+ strncat(buf, "|", MAX_BUF_LEN - strlen(buf) - 1);
+ strncat(buf, field->name, MAX_BUF_LEN - strlen(buf) - 1);
+ buf[MAX_BUF_LEN - 1] = '\0';
+ }
+
+ fprintf(stderr, "Available fields: %s\n", buf);
+}
+
+/* Display usage information and command line options */
+static void usage(void)
+{
+ printf("Usage: delaytop [Options]\n"
+ "Options:\n"
+ " -h, --help Show this help message and exit\n"
+ " -d, --delay=SECONDS Set refresh interval (default: 2 seconds, min: 1)\n"
+ " -n, --iterations=COUNT Set number of updates (default: 0 = infinite)\n"
+ " -P, --processes=NUMBER Set maximum number of processes to show (default: 20, max: 1000)\n"
+ " -o, --once Display once and exit\n"
+ " -p, --pid=PID Monitor only the specified PID\n"
+ " -C, --container=PATH Monitor the container at specified cgroup path\n"
+ " -s, --sort=FIELD Sort by delay field (default: cpu)\n"
+ " -M, --memverbose Display memory detailed information\n");
+ exit(0);
+}
+
+/* Parse command line arguments and set configuration */
+static void parse_args(int argc, char **argv)
+{
+ int c;
+ const struct field_desc *field;
+ struct option long_options[] = {
+ {"help", no_argument, 0, 'h'},
+ {"delay", required_argument, 0, 'd'},
+ {"iterations", required_argument, 0, 'n'},
+ {"pid", required_argument, 0, 'p'},
+ {"once", no_argument, 0, 'o'},
+ {"processes", required_argument, 0, 'P'},
+ {"sort", required_argument, 0, 's'},
+ {"container", required_argument, 0, 'C'},
+ {"memverbose", no_argument, 0, 'M'},
+ {0, 0, 0, 0}
+ };
+
+ /* Set defaults */
+ cfg.delay = 2;
+ cfg.iterations = 0;
+ cfg.max_processes = 20;
+ cfg.sort_field = &sort_fields[0]; /* Default sorted by CPU delay */
+ cfg.output_one_time = 0;
+ cfg.monitor_pid = 0; /* 0 means monitor all PIDs */
+ cfg.container_path = NULL;
+ cfg.display_mode = MODE_DEFAULT;
+
+ while (1) {
+ int option_index = 0;
+
+ c = getopt_long(argc, argv, "hd:n:p:oP:C:s:M", long_options, &option_index);
+ if (c == -1)
+ break;
+
+ switch (c) {
+ case 'h':
+ usage();
+ break;
+ case 'd':
+ cfg.delay = atoi(optarg);
+ if (cfg.delay < 1) {
+ fprintf(stderr, "Error: delay must be >= 1.\n");
+ exit(1);
+ }
+ break;
+ case 'n':
+ cfg.iterations = atoi(optarg);
+ if (cfg.iterations < 0) {
+ fprintf(stderr, "Error: iterations must be >= 0.\n");
+ exit(1);
+ }
+ break;
+ case 'p':
+ cfg.monitor_pid = atoi(optarg);
+ if (cfg.monitor_pid < 1) {
+ fprintf(stderr, "Error: pid must be >= 1.\n");
+ exit(1);
+ }
+ break;
+ case 'o':
+ cfg.output_one_time = 1;
+ break;
+ case 'P':
+ cfg.max_processes = atoi(optarg);
+ if (cfg.max_processes < 1) {
+ fprintf(stderr, "Error: processes must be >= 1.\n");
+ exit(1);
+ }
+ if (cfg.max_processes > MAX_TASKS) {
+ fprintf(stderr, "Warning: processes capped to %d.\n",
+ MAX_TASKS);
+ cfg.max_processes = MAX_TASKS;
+ }
+ break;
+ case 'C':
+ cfg.container_path = strdup(optarg);
+ break;
+ case 's':
+ if (strlen(optarg) == 0) {
+ fprintf(stderr, "Error: empty sort field\n");
+ exit(1);
+ }
+
+ field = get_field_by_name(optarg);
+ /* Show available fields if invalid option provided */
+ if (!field) {
+ fprintf(stderr, "Error: invalid sort field '%s'\n", optarg);
+ display_available_fields(MODE_TYPE_ALL);
+ exit(1);
+ }
+
+ cfg.sort_field = field;
+ break;
+ case 'M':
+ cfg.display_mode = MODE_MEMVERBOSE;
+ cfg.sort_field = get_field_by_name("mem");
+ break;
+ default:
+ fprintf(stderr, "Try 'delaytop --help' for more information.\n");
+ exit(1);
+ }
+ }
+}
+
+/* Sum the memory-related delay totals into mem_delay_total */
+static void set_mem_delay_total(struct task_info *t)
+{
+ t->mem_delay_total = t->swapin_delay_total +
+ t->freepages_delay_total +
+ t->thrashing_delay_total +
+ t->compact_delay_total +
+ t->wpcopy_delay_total;
+}
+
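+/* Sum the memory-related delay counts into mem_count */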
+static void set_mem_count(struct task_info *t)
+{
+ t->mem_count = t->swapin_count +
+ t->freepages_count +
+ t->thrashing_count +
+ t->compact_count +
+ t->wpcopy_count;
+}
+
+/* Create a raw netlink socket and bind */
+static int create_nl_socket(void)
+{
+ int fd;
+ struct sockaddr_nl local;
+
+ fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
+ if (fd < 0)
+ return -1;
+
+ memset(&local, 0, sizeof(local));
+ local.nl_family = AF_NETLINK;
+
+ if (bind(fd, (struct sockaddr *) &local, sizeof(local)) < 0) {
+ fprintf(stderr, "Failed to bind socket when create nl_socket\n");
+ close(fd);
+ return -1;
+ }
+
+ return fd;
+}
+
+/* Send a command via netlink */
+static int send_cmd(int sd, __u16 nlmsg_type, __u32 nlmsg_pid,
+ __u8 genl_cmd, __u16 nla_type,
+ void *nla_data, int nla_len)
+{
+ struct sockaddr_nl nladdr;
+ struct nlattr *na;
+ int r, buflen;
+ char *buf;
+
+ struct {
+ struct nlmsghdr n;
+ struct genlmsghdr g;
+ char buf[MAX_MSG_SIZE];
+ } msg;
+
+ msg.n.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN);
+ msg.n.nlmsg_type = nlmsg_type;
+ msg.n.nlmsg_flags = NLM_F_REQUEST;
+ msg.n.nlmsg_seq = 0;
+ msg.n.nlmsg_pid = nlmsg_pid;
+ msg.g.cmd = genl_cmd;
+ msg.g.version = 0x1;
+ na = (struct nlattr *) GENLMSG_DATA(&msg);
+ na->nla_type = nla_type;
+ na->nla_len = nla_len + NLA_HDRLEN;
+ memcpy(NLA_DATA(na), nla_data, nla_len);
+ msg.n.nlmsg_len += NLMSG_ALIGN(na->nla_len);
+
+ buf = (char *) &msg;
+ buflen = msg.n.nlmsg_len;
+ memset(&nladdr, 0, sizeof(nladdr));
+ nladdr.nl_family = AF_NETLINK;
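+ /* Loop until fully sent: advance past short writes, retry on EAGAIN */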
+ while ((r = sendto(sd, buf, buflen, 0, (struct sockaddr *) &nladdr,
+ sizeof(nladdr))) < buflen) {
+ if (r > 0) {
+ buf += r;
+ buflen -= r;
+ } else if (errno != EAGAIN)
+ return -1;
+ }
+ return 0;
+}
+
+/* Get family ID for taskstats via netlink */
+static int get_family_id(int sd)
+{
+ struct {
+ struct nlmsghdr n;
+ struct genlmsghdr g;
+ char buf[256];
+ } ans;
+
+ int id = 0, rc;
+ struct nlattr *na;
+ int rep_len;
+ char name[100];
+
+ strncpy(name, TASKSTATS_GENL_NAME, sizeof(name) - 1);
+ name[sizeof(name) - 1] = '\0';
+ rc = send_cmd(sd, GENL_ID_CTRL, getpid(), CTRL_CMD_GETFAMILY,
+ CTRL_ATTR_FAMILY_NAME, (void *)name,
+ strlen(TASKSTATS_GENL_NAME)+1);
+ if (rc < 0) {
+ fprintf(stderr, "Failed to send cmd for family id\n");
+ return 0;
+ }
+
+ rep_len = recv(sd, &ans, sizeof(ans), 0);
+ if (ans.n.nlmsg_type == NLMSG_ERROR ||
+ (rep_len < 0) || !NLMSG_OK((&ans.n), rep_len)) {
+ fprintf(stderr, "Failed to receive response for family id\n");
+ return 0;
+ }
+
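+ /* Skip the echoed family-name attribute; the family id follows it */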
+ na = (struct nlattr *) GENLMSG_DATA(&ans);
+ na = (struct nlattr *) ((char *) na + NLA_ALIGN(na->nla_len));
+ if (na->nla_type == CTRL_ATTR_FAMILY_ID)
+ id = *(__u16 *) NLA_DATA(na);
+ return id;
+}
+
+static int read_psi_stats(void)
+{
+ FILE *fp;
+ char line[256];
+ int ret = 0;
+ int error_count = 0;
+
+ /* Check if PSI path exists */
+ if (access(PSI_PATH, F_OK) != 0) {
+ fprintf(stderr, "Error: PSI interface not found at %s\n", PSI_PATH);
+ fprintf(stderr, "Please ensure your kernel supports PSI (Pressure Stall Information)\n");
+ return -1;
+ }
+
+ /* Zero all fields */
+ memset(&psi, 0, sizeof(psi));
+
+ /* CPU pressure */
+ fp = fopen(PSI_CPU_PATH, "r");
+ if (fp) {
+ while (fgets(line, sizeof(line), fp)) {
+ if (strncmp(line, "some", 4) == 0) {
+ ret = sscanf(line, "some avg10=%lf avg60=%lf avg300=%lf total=%llu",
+ &psi.cpu_some_avg10, &psi.cpu_some_avg60,
+ &psi.cpu_some_avg300, &psi.cpu_some_total);
+ if (ret != 4) {
+ fprintf(stderr, "Failed to parse CPU some PSI data\n");
+ error_count++;
+ }
+ } else if (strncmp(line, "full", 4) == 0) {
+ ret = sscanf(line, "full avg10=%lf avg60=%lf avg300=%lf total=%llu",
+ &psi.cpu_full_avg10, &psi.cpu_full_avg60,
+ &psi.cpu_full_avg300, &psi.cpu_full_total);
+ if (ret != 4) {
+ fprintf(stderr, "Failed to parse CPU full PSI data\n");
+ error_count++;
+ }
+ }
+ }
+ fclose(fp);
+ } else {
+ fprintf(stderr, "Warning: Failed to open %s\n", PSI_CPU_PATH);
+ error_count++;
+ }
+
+ /* Memory pressure */
+ fp = fopen(PSI_MEMORY_PATH, "r");
+ if (fp) {
+ while (fgets(line, sizeof(line), fp)) {
+ if (strncmp(line, "some", 4) == 0) {
+ ret = sscanf(line, "some avg10=%lf avg60=%lf avg300=%lf total=%llu",
+ &psi.memory_some_avg10, &psi.memory_some_avg60,
+ &psi.memory_some_avg300, &psi.memory_some_total);
+ if (ret != 4) {
+ fprintf(stderr, "Failed to parse Memory some PSI data\n");
+ error_count++;
+ }
+ } else if (strncmp(line, "full", 4) == 0) {
+ ret = sscanf(line, "full avg10=%lf avg60=%lf avg300=%lf total=%llu",
+ &psi.memory_full_avg10, &psi.memory_full_avg60,
+ &psi.memory_full_avg300, &psi.memory_full_total);
+ if (ret != 4) {
+ fprintf(stderr, "Failed to parse Memory full PSI data\n");
+ error_count++;
+ }
+ }
+ }
+ fclose(fp);
+ } else {
+ fprintf(stderr, "Warning: Failed to open %s\n", PSI_MEMORY_PATH);
+ error_count++;
+ }
+
+ /* IO pressure */
+ fp = fopen(PSI_IO_PATH, "r");
+ if (fp) {
+ while (fgets(line, sizeof(line), fp)) {
+ if (strncmp(line, "some", 4) == 0) {
+ ret = sscanf(line, "some avg10=%lf avg60=%lf avg300=%lf total=%llu",
+ &psi.io_some_avg10, &psi.io_some_avg60,
+ &psi.io_some_avg300, &psi.io_some_total);
+ if (ret != 4) {
+ fprintf(stderr, "Failed to parse IO some PSI data\n");
+ error_count++;
+ }
+ } else if (strncmp(line, "full", 4) == 0) {
+ ret = sscanf(line, "full avg10=%lf avg60=%lf avg300=%lf total=%llu",
+ &psi.io_full_avg10, &psi.io_full_avg60,
+ &psi.io_full_avg300, &psi.io_full_total);
+ if (ret != 4) {
+ fprintf(stderr, "Failed to parse IO full PSI data\n");
+ error_count++;
+ }
+ }
+ }
+ fclose(fp);
+ } else {
+ fprintf(stderr, "Warning: Failed to open %s\n", PSI_IO_PATH);
+ error_count++;
+ }
+
+ /* IRQ pressure (only full) */
+ fp = fopen(PSI_IRQ_PATH, "r");
+ if (fp) {
+ while (fgets(line, sizeof(line), fp)) {
+ if (strncmp(line, "full", 4) == 0) {
+ ret = sscanf(line, "full avg10=%lf avg60=%lf avg300=%lf total=%llu",
+ &psi.irq_full_avg10, &psi.irq_full_avg60,
+ &psi.irq_full_avg300, &psi.irq_full_total);
+ if (ret != 4) {
+ fprintf(stderr, "Failed to parse IRQ full PSI data\n");
+ error_count++;
+ }
+ }
+ }
+ fclose(fp);
+ } else {
+ fprintf(stderr, "Warning: Failed to open %s\n", PSI_IRQ_PATH);
+ error_count++;
+ }
+
+ /* Return error count: 0 means success, >0 means warnings, -1 means fatal error */
+ if (error_count > 0) {
+ fprintf(stderr, "PSI stats reading completed with %d warnings\n", error_count);
+ return error_count;
+ }
+
+ return 0;
+}
+
+static int read_comm(int pid, char *comm_buf, size_t buf_size)
+{
+ char path[64];
+ int ret = -1;
+ size_t len;
+ FILE *fp;
+
+ snprintf(path, sizeof(path), "/proc/%d/comm", pid);
+ fp = fopen(path, "r");
+ if (!fp) {
+ fprintf(stderr, "Failed to open comm file /proc/%d/comm\n", pid);
+ return ret;
+ }
+
+ if (fgets(comm_buf, buf_size, fp)) {
+ len = strlen(comm_buf);
+ if (len > 0 && comm_buf[len - 1] == '\n')
+ comm_buf[len - 1] = '\0';
+ ret = 0;
+ }
+
+ fclose(fp);
+
+ return ret;
+}
+
+static void fetch_and_fill_task_info(int pid, const char *comm)
+{
+ struct {
+ struct nlmsghdr n;
+ struct genlmsghdr g;
+ char buf[MAX_MSG_SIZE];
+ } resp;
+ struct taskstats stats;
+ struct nlattr *nested;
+ struct nlattr *na;
+ int nested_len;
+ int nl_len;
+ int rc;
+
+ /* Send request for task stats */
+ if (send_cmd(nl_sd, family_id, getpid(), TASKSTATS_CMD_GET,
+ TASKSTATS_CMD_ATTR_PID, &pid, sizeof(pid)) < 0) {
+ fprintf(stderr, "Failed to send request for task stats\n");
+ return;
+ }
+
+ /* Receive response */
+ rc = recv(nl_sd, &resp, sizeof(resp), 0);
+ if (rc < 0 || resp.n.nlmsg_type == NLMSG_ERROR) {
+ fprintf(stderr, "Failed to receive response for task stats\n");
+ return;
+ }
+
+ /* Parse response */
+ nl_len = GENLMSG_PAYLOAD(&resp.n);
+ na = (struct nlattr *) GENLMSG_DATA(&resp);
+ while (nl_len > 0) {
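+ /* The taskstats payload is nested inside a TASKSTATS_TYPE_AGGR_PID attribute */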
+ if (na->nla_type == TASKSTATS_TYPE_AGGR_PID) {
+ nested = (struct nlattr *) NLA_DATA(na);
+ nested_len = NLA_PAYLOAD(na->nla_len);
+ while (nested_len > 0) {
+ if (nested->nla_type == TASKSTATS_TYPE_STATS) {
+ memcpy(&stats, NLA_DATA(nested), sizeof(stats));
+ if (task_count < MAX_TASKS) {
+ tasks[task_count].pid = pid;
+ tasks[task_count].tgid = pid;
+ strncpy(tasks[task_count].command, comm,
+ TASK_COMM_LEN - 1);
+ tasks[task_count].command[TASK_COMM_LEN - 1] = '\0';
+ SET_TASK_STAT(task_count, cpu_count);
+ SET_TASK_STAT(task_count, cpu_delay_total);
+ SET_TASK_STAT(task_count, blkio_count);
+ SET_TASK_STAT(task_count, blkio_delay_total);
+ SET_TASK_STAT(task_count, swapin_count);
+ SET_TASK_STAT(task_count, swapin_delay_total);
+ SET_TASK_STAT(task_count, freepages_count);
+ SET_TASK_STAT(task_count, freepages_delay_total);
+ SET_TASK_STAT(task_count, thrashing_count);
+ SET_TASK_STAT(task_count, thrashing_delay_total);
+ SET_TASK_STAT(task_count, compact_count);
+ SET_TASK_STAT(task_count, compact_delay_total);
+ SET_TASK_STAT(task_count, wpcopy_count);
+ SET_TASK_STAT(task_count, wpcopy_delay_total);
+ SET_TASK_STAT(task_count, irq_count);
+ SET_TASK_STAT(task_count, irq_delay_total);
+ set_mem_count(&tasks[task_count]);
+ set_mem_delay_total(&tasks[task_count]);
+ task_count++;
+ }
+ break;
+ }
+ nested_len -= NLA_ALIGN(nested->nla_len);
+ nested = NLA_NEXT(nested);
+ }
+ }
+ nl_len -= NLA_ALIGN(na->nla_len);
+ na = NLA_NEXT(na);
+ }
+ return;
+}
+
+static void get_task_delays(void)
+{
+ char comm[TASK_COMM_LEN];
+ struct dirent *entry;
+ DIR *dir;
+ int pid;
+
+ task_count = 0;
+ if (cfg.monitor_pid > 0) {
+ if (read_comm(cfg.monitor_pid, comm, sizeof(comm)) == 0)
+ fetch_and_fill_task_info(cfg.monitor_pid, comm);
+ return;
+ }
+
+ dir = opendir("/proc");
+ if (!dir) {
+ fprintf(stderr, "Error opening /proc directory\n");
+ return;
+ }
+
+ while ((entry = readdir(dir)) != NULL && task_count < MAX_TASKS) {
+ if (!isdigit(entry->d_name[0]))
+ continue;
+ pid = atoi(entry->d_name);
+ if (pid == 0)
+ continue;
+ if (read_comm(pid, comm, sizeof(comm)) != 0)
+ continue;
+ fetch_and_fill_task_info(pid, comm);
+ }
+ closedir(dir);
+}
+
+/* Calculate average delay in milliseconds */
+static double average_ms(unsigned long long total, unsigned long long count)
+{
+ if (count == 0)
+ return 0;
+ return (double)total / 1000000.0 / count;
+}
+
+/* Comparison function for sorting tasks */
+static int compare_tasks(const void *a, const void *b)
+{
+ const struct task_info *t1 = (const struct task_info *)a;
+ const struct task_info *t2 = (const struct task_info *)b;
+ unsigned long long total1;
+ unsigned long long total2;
+ unsigned long long count1;
+ unsigned long long count2;
+ double avg1, avg2;
+
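+ /* Read the active sort field's total and count through the offsets in field_desc */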
+ total1 = *(unsigned long long *)((char *)t1 + cfg.sort_field->total_offset);
+ total2 = *(unsigned long long *)((char *)t2 + cfg.sort_field->total_offset);
+ count1 = *(unsigned long long *)((char *)t1 + cfg.sort_field->count_offset);
+ count2 = *(unsigned long long *)((char *)t2 + cfg.sort_field->count_offset);
+
+ avg1 = average_ms(total1, count1);
+ avg2 = average_ms(total2, count2);
+ if (avg1 != avg2)
+ return avg2 > avg1 ? 1 : -1;
+
+ return 0;
+}
+
+/* Sort tasks by selected field */
+static void sort_tasks(void)
+{
+ if (task_count > 0)
+ qsort(tasks, task_count, sizeof(struct task_info), compare_tasks);
+}
+
+/* Get container statistics via cgroupstats */
+static void get_container_stats(void)
+{
+ int rc, cfd;
+ struct {
+ struct nlmsghdr n;
+ struct genlmsghdr g;
+ char buf[MAX_MSG_SIZE];
+ } req, resp;
+ struct nlattr *na;
+ int nl_len;
+ struct cgroupstats stats;
+
+ /* Check if container path is set */
+ if (!cfg.container_path)
+ return;
+
+ /* Open container cgroup */
+ cfd = open(cfg.container_path, O_RDONLY);
+ if (cfd < 0) {
+ fprintf(stderr, "Error opening container path: %s\n", cfg.container_path);
+ return;
+ }
+
+ /* Send request for container stats */
+ if (send_cmd(nl_sd, family_id, getpid(), CGROUPSTATS_CMD_GET,
+ CGROUPSTATS_CMD_ATTR_FD, &cfd, sizeof(__u32)) < 0) {
+ fprintf(stderr, "Failed to send request for container stats\n");
+ close(cfd);
+ return;
+ }
+
+ /* Receive response */
+ rc = recv(nl_sd, &resp, sizeof(resp), 0);
+ if (rc < 0 || resp.n.nlmsg_type == NLMSG_ERROR) {
+ fprintf(stderr, "Failed to receive response for container stats\n");
+ close(cfd);
+ return;
+ }
+
+ /* Parse response */
+ nl_len = GENLMSG_PAYLOAD(&resp.n);
+ na = (struct nlattr *) GENLMSG_DATA(&resp);
+ while (nl_len > 0) {
+ if (na->nla_type == CGROUPSTATS_TYPE_CGROUP_STATS) {
+ /* Get the cgroupstats structure */
+ memcpy(&stats, NLA_DATA(na), sizeof(stats));
+
+ /* Fill container stats */
+ container_stats.nr_sleeping = stats.nr_sleeping;
+ container_stats.nr_running = stats.nr_running;
+ container_stats.nr_stopped = stats.nr_stopped;
+ container_stats.nr_uninterruptible = stats.nr_uninterruptible;
+ container_stats.nr_io_wait = stats.nr_io_wait;
+ break;
+ }
+ nl_len -= NLA_ALIGN(na->nla_len);
+ na = (struct nlattr *) ((char *) na + NLA_ALIGN(na->nla_len));
+ }
+
+ close(cfd);
+}
+
+/* Display results to stdout */
+static void display_results(int psi_ret)
+{
+ time_t now = time(NULL);
+ struct tm *tm_now = localtime(&now);
+ FILE *out = stdout;
+ char timestamp[32];
+ bool suc = true;
+ int i, count;
+
+ /* Clear terminal screen */
+ suc &= BOOL_FPRINT(out, "\033[H\033[J");
+
+ /* PSI summary: one line per pressure resource */
+ suc &= BOOL_FPRINT(out, "System Pressure Information: (avg10/avg60vg300/total)\n");
+ if (psi_ret) {
+ suc &= BOOL_FPRINT(out, " PSI not found: check if psi=1 enabled in cmdline\n");
+ } else {
+ suc &= BOOL_FPRINT(out, PSI_LINE_FORMAT,
+ "CPU some:",
+ psi.cpu_some_avg10,
+ psi.cpu_some_avg60,
+ psi.cpu_some_avg300,
+ psi.cpu_some_total / 1000);
+ suc &= BOOL_FPRINT(out, PSI_LINE_FORMAT,
+ "CPU full:",
+ psi.cpu_full_avg10,
+ psi.cpu_full_avg60,
+ psi.cpu_full_avg300,
+ psi.cpu_full_total / 1000);
+ suc &= BOOL_FPRINT(out, PSI_LINE_FORMAT,
+ "Memory full:",
+ psi.memory_full_avg10,
+ psi.memory_full_avg60,
+ psi.memory_full_avg300,
+ psi.memory_full_total / 1000);
+ suc &= BOOL_FPRINT(out, PSI_LINE_FORMAT,
+ "Memory some:",
+ psi.memory_some_avg10,
+ psi.memory_some_avg60,
+ psi.memory_some_avg300,
+ psi.memory_some_total / 1000);
+ suc &= BOOL_FPRINT(out, PSI_LINE_FORMAT,
+ "IO full:",
+ psi.io_full_avg10,
+ psi.io_full_avg60,
+ psi.io_full_avg300,
+ psi.io_full_total / 1000);
+ suc &= BOOL_FPRINT(out, PSI_LINE_FORMAT,
+ "IO some:",
+ psi.io_some_avg10,
+ psi.io_some_avg60,
+ psi.io_some_avg300,
+ psi.io_some_total / 1000);
+ suc &= BOOL_FPRINT(out, PSI_LINE_FORMAT,
+ "IRQ full:",
+ psi.irq_full_avg10,
+ psi.irq_full_avg60,
+ psi.irq_full_avg300,
+ psi.irq_full_total / 1000);
+ }
+
+ if (cfg.container_path) {
+ suc &= BOOL_FPRINT(out, "Container Information (%s):\n", cfg.container_path);
+ suc &= BOOL_FPRINT(out, "Processes: running=%d, sleeping=%d, ",
+ container_stats.nr_running, container_stats.nr_sleeping);
+ suc &= BOOL_FPRINT(out, "stopped=%d, uninterruptible=%d, io_wait=%d\n\n",
+ container_stats.nr_stopped, container_stats.nr_uninterruptible,
+ container_stats.nr_io_wait);
+ }
+
+ /* Interactive commands */
+ suc &= BOOL_FPRINT(out, "[o]sort [M]memverbose [q]quit\n");
+ if (sort_selected) {
+ if (cfg.display_mode == MODE_MEMVERBOSE)
+ suc &= BOOL_FPRINT(out,
+ "sort selection: [m]MEM [r]RCL [t]THR [p]CMP [w]WP\n");
+ else
+ suc &= BOOL_FPRINT(out,
+ "sort selection: [c]CPU [i]IO [m]MEM [q]IRQ\n");
+ }
+
+ /* Task delay output */
+ suc &= BOOL_FPRINT(out, "Top %d processes (sorted by %s delay):\n",
+ cfg.max_processes, get_name_by_field(cfg.sort_field));
+
+ suc &= BOOL_FPRINT(out, "%8s %8s %-17s", "PID", "TGID", "COMMAND");
+ if (cfg.display_mode == MODE_MEMVERBOSE) {
+ suc &= BOOL_FPRINT(out, "%8s %8s %8s %8s %8s %8s\n",
+ "MEM(ms)", "SWAP(ms)", "RCL(ms)",
+ "THR(ms)", "CMP(ms)", "WP(ms)");
+ suc &= BOOL_FPRINT(out, "-----------------------");
+ suc &= BOOL_FPRINT(out, "-----------------------");
+ suc &= BOOL_FPRINT(out, "-----------------------");
+ suc &= BOOL_FPRINT(out, "---------------------\n");
+ } else {
+ suc &= BOOL_FPRINT(out, "%8s %8s %8s %8s\n",
+ "CPU(ms)", "IO(ms)", "IRQ(ms)", "MEM(ms)");
+ suc &= BOOL_FPRINT(out, "-----------------------");
+ suc &= BOOL_FPRINT(out, "-----------------------");
+ suc &= BOOL_FPRINT(out, "--------------------------\n");
+ }
+
+ count = task_count < cfg.max_processes ? task_count : cfg.max_processes;
+
+ for (i = 0; i < count; i++) {
+ suc &= BOOL_FPRINT(out, "%8d %8d %-15s",
+ tasks[i].pid, tasks[i].tgid, tasks[i].command);
+ if (cfg.display_mode == MODE_MEMVERBOSE) {
+ suc &= BOOL_FPRINT(out, DELAY_FMT_MEMVERBOSE,
+ TASK_AVG(tasks[i], mem),
+ TASK_AVG(tasks[i], swapin),
+ TASK_AVG(tasks[i], freepages),
+ TASK_AVG(tasks[i], thrashing),
+ TASK_AVG(tasks[i], compact),
+ TASK_AVG(tasks[i], wpcopy));
+ } else {
+ suc &= BOOL_FPRINT(out, DELAY_FMT_DEFAULT,
+ TASK_AVG(tasks[i], cpu),
+ TASK_AVG(tasks[i], blkio),
+ TASK_AVG(tasks[i], irq),
+ TASK_AVG(tasks[i], mem));
+ }
+ }
+
+ suc &= BOOL_FPRINT(out, "\n");
+
+ if (!suc)
+ perror("Error writing to output");
+}
+
+/* Check for keyboard input with timeout based on cfg.delay */
+static char check_for_keypress(void)
+{
+ struct timeval tv = {cfg.delay, 0};
+ fd_set readfds;
+ char ch = 0;
+
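+ /* select() doubles as the refresh sleep: wait up to cfg.delay seconds for a key */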
+ FD_ZERO(&readfds);
+ FD_SET(STDIN_FILENO, &readfds);
+ int r = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
+
+ if (r > 0 && FD_ISSET(STDIN_FILENO, &readfds)) {
+ read(STDIN_FILENO, &ch, 1);
+ return ch;
+ }
+
+ return 0;
+}
+
+#define MAX_MODE_SIZE 2
+static void toggle_display_mode(void)
+{
+ static const size_t modes[MAX_MODE_SIZE] = {MODE_DEFAULT, MODE_MEMVERBOSE};
+ static size_t cur_index;
+
+ cur_index = (cur_index + 1) % MAX_MODE_SIZE;
+ cfg.display_mode = modes[cur_index];
+}
+
+/* Handle keyboard input: sorting selection, mode toggle, or quit */
+static void handle_keypress(char ch, int *running)
+{
+ const struct field_desc *field;
+
+ /* Change sort field */
+ if (sort_selected) {
+ field = get_field_by_cmd_char(ch);
+ if (field && (field->supported_modes & cfg.display_mode))
+ cfg.sort_field = field;
+
+ sort_selected = 0;
+ /* Handle mode changes or quit */
+ } else {
+ switch (ch) {
+ case 'o':
+ sort_selected = 1;
+ break;
+ case 'M':
+ toggle_display_mode();
+ for (field = sort_fields; field->name != NULL; field++) {
+ if (field->supported_modes & cfg.display_mode) {
+ cfg.sort_field = field;
+ break;
+ }
+ }
+ break;
+ case 'q':
+ case 'Q':
+ *running = 0;
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+/* Main function */
+int main(int argc, char **argv)
+{
+ const struct field_desc *field;
+ int iterations = 0;
+ int psi_ret = 0;
+ char keypress;
+
+ /* Parse command line arguments */
+ parse_args(argc, argv);
+
+ /* Setup netlink socket */
+ nl_sd = create_nl_socket();
+ if (nl_sd < 0) {
+ fprintf(stderr, "Error creating netlink socket\n");
+ exit(1);
+ }
+
+ /* Get family ID for taskstats via netlink */
+ family_id = get_family_id(nl_sd);
+ if (!family_id) {
+ fprintf(stderr, "Error getting taskstats family ID\n");
+ close(nl_sd);
+ exit(1);
+ }
+
+ /* Set terminal to non-canonical mode for interaction */
+ enable_raw_mode();
+
+ /* Main loop */
+ while (running) {
+ /* Auto-switch sort field when not matching display mode */
+ if (!(cfg.sort_field->supported_modes & cfg.display_mode)) {
+ for (field = sort_fields; field->name != NULL; field++) {
+ if (field->supported_modes & cfg.display_mode) {
+ cfg.sort_field = field;
+ printf("Auto-switched sort field to: %s\n", field->name);
+ break;
+ }
+ }
+ }
+
+ /* Read PSI statistics */
+ psi_ret = read_psi_stats();
+
+ /* Get container stats if container path provided */
+ if (cfg.container_path)
+ get_container_stats();
+
+ /* Get task delays */
+ get_task_delays();
+
+ /* Sort tasks */
+ sort_tasks();
+
+ /* Display results for this iteration */
+ display_results(psi_ret);
+
+ /* Check for iterations */
+ if (cfg.iterations > 0 && ++iterations >= cfg.iterations)
+ break;
+
+ /* Exit if output_one_time is set */
+ if (cfg.output_one_time)
+ break;
+
+ /* Keypress for interactive usage */
+ keypress = check_for_keypress();
+ if (keypress)
+ handle_keypress(keypress, &running);
+ }
+
+ /* Restore terminal mode */
+ disable_raw_mode();
+
+ /* Cleanup */
+ close(nl_sd);
+ if (cfg.container_path)
+ free(cfg.container_path);
+
+ return 0;
+}
diff --git a/tools/accounting/getdelays.c b/tools/accounting/getdelays.c
index 1334214546d7..21cb3c3d1331 100644
--- a/tools/accounting/getdelays.c
+++ b/tools/accounting/getdelays.c
@@ -192,60 +192,110 @@ static int get_family_id(int sd)
}
#define average_ms(t, c) (t / 1000000ULL / (c ? c : 1))
+#define delay_ms(t) (t / 1000000ULL)
+
+/*
+ * Version compatibility note:
+ * Field availability depends on the taskstats version (t->version),
+ * which corresponds to TASKSTATS_VERSION in the kernel headers;
+ * see include/uapi/linux/taskstats.h.
+ *
+ * Version feature mapping:
+ * version >= 11 - supports COMPACT statistics
+ * version >= 13 - supports WPCOPY statistics
+ * version >= 14 - supports IRQ statistics
+ * version >= 16 - supports *_max and *_min delay statistics
+ *
+ * Always verify version before accessing version-dependent fields
+ * to maintain backward compatibility.
+ */
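+/*
+ * Illustrative sketch only (not used by getdelays itself): a reader of a
+ * version-dependent field would gate on t->version first, e.g.
+ *
+ *	if (t->version >= 14)
+ *		printf("IRQ delay total: %llu\n",
+ *		       (unsigned long long)t->irq_delay_total);
+ */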
+#define PRINT_CPU_DELAY(version, t) \
+ do { \
+ if (version >= 16) { \
+ printf("%-10s%15s%15s%15s%15s%15s%15s%15s\n", \
+ "CPU", "count", "real total", "virtual total", \
+ "delay total", "delay average", "delay max", "delay min"); \
+ printf(" %15llu%15llu%15llu%15llu%15.3fms%13.6fms%13.6fms\n", \
+ (unsigned long long)(t)->cpu_count, \
+ (unsigned long long)(t)->cpu_run_real_total, \
+ (unsigned long long)(t)->cpu_run_virtual_total, \
+ (unsigned long long)(t)->cpu_delay_total, \
+ average_ms((double)(t)->cpu_delay_total, (t)->cpu_count), \
+ delay_ms((double)(t)->cpu_delay_max), \
+ delay_ms((double)(t)->cpu_delay_min)); \
+ } else { \
+ printf("%-10s%15s%15s%15s%15s%15s\n", \
+ "CPU", "count", "real total", "virtual total", \
+ "delay total", "delay average"); \
+ printf(" %15llu%15llu%15llu%15llu%15.3fms\n", \
+ (unsigned long long)(t)->cpu_count, \
+ (unsigned long long)(t)->cpu_run_real_total, \
+ (unsigned long long)(t)->cpu_run_virtual_total, \
+ (unsigned long long)(t)->cpu_delay_total, \
+ average_ms((double)(t)->cpu_delay_total, (t)->cpu_count)); \
+ } \
+ } while (0)
+#define PRINT_FIELD_DELAY(name, version, t, count, total, max, min) \
+ do { \
+ if (version >= 16) { \
+ printf("%-10s%15s%15s%15s%15s%15s\n", \
+ name, "count", "delay total", "delay average", \
+ "delay max", "delay min"); \
+ printf(" %15llu%15llu%15.3fms%13.6fms%13.6fms\n", \
+ (unsigned long long)(t)->count, \
+ (unsigned long long)(t)->total, \
+ average_ms((double)(t)->total, (t)->count), \
+ delay_ms((double)(t)->max), \
+ delay_ms((double)(t)->min)); \
+ } else { \
+ printf("%-10s%15s%15s%15s\n", \
+ name, "count", "delay total", "delay average"); \
+ printf(" %15llu%15llu%15.3fms\n", \
+ (unsigned long long)(t)->count, \
+ (unsigned long long)(t)->total, \
+ average_ms((double)(t)->total, (t)->count)); \
+ } \
+ } while (0)
static void print_delayacct(struct taskstats *t)
{
- printf("\n\nCPU %15s%15s%15s%15s%15s\n"
- " %15llu%15llu%15llu%15llu%15.3fms\n"
- "IO %15s%15s%15s\n"
- " %15llu%15llu%15.3fms\n"
- "SWAP %15s%15s%15s\n"
- " %15llu%15llu%15.3fms\n"
- "RECLAIM %12s%15s%15s\n"
- " %15llu%15llu%15.3fms\n"
- "THRASHING%12s%15s%15s\n"
- " %15llu%15llu%15.3fms\n"
- "COMPACT %12s%15s%15s\n"
- " %15llu%15llu%15.3fms\n"
- "WPCOPY %12s%15s%15s\n"
- " %15llu%15llu%15.3fms\n"
- "IRQ %15s%15s%15s\n"
- " %15llu%15llu%15.3fms\n",
- "count", "real total", "virtual total",
- "delay total", "delay average",
- (unsigned long long)t->cpu_count,
- (unsigned long long)t->cpu_run_real_total,
- (unsigned long long)t->cpu_run_virtual_total,
- (unsigned long long)t->cpu_delay_total,
- average_ms((double)t->cpu_delay_total, t->cpu_count),
- "count", "delay total", "delay average",
- (unsigned long long)t->blkio_count,
- (unsigned long long)t->blkio_delay_total,
- average_ms((double)t->blkio_delay_total, t->blkio_count),
- "count", "delay total", "delay average",
- (unsigned long long)t->swapin_count,
- (unsigned long long)t->swapin_delay_total,
- average_ms((double)t->swapin_delay_total, t->swapin_count),
- "count", "delay total", "delay average",
- (unsigned long long)t->freepages_count,
- (unsigned long long)t->freepages_delay_total,
- average_ms((double)t->freepages_delay_total, t->freepages_count),
- "count", "delay total", "delay average",
- (unsigned long long)t->thrashing_count,
- (unsigned long long)t->thrashing_delay_total,
- average_ms((double)t->thrashing_delay_total, t->thrashing_count),
- "count", "delay total", "delay average",
- (unsigned long long)t->compact_count,
- (unsigned long long)t->compact_delay_total,
- average_ms((double)t->compact_delay_total, t->compact_count),
- "count", "delay total", "delay average",
- (unsigned long long)t->wpcopy_count,
- (unsigned long long)t->wpcopy_delay_total,
- average_ms((double)t->wpcopy_delay_total, t->wpcopy_count),
- "count", "delay total", "delay average",
- (unsigned long long)t->irq_count,
- (unsigned long long)t->irq_delay_total,
- average_ms((double)t->irq_delay_total, t->irq_count));
+ printf("\n\n");
+
+ PRINT_CPU_DELAY(t->version, t);
+
+ PRINT_FILED_DELAY("IO", t->version, t,
+ blkio_count, blkio_delay_total,
+ blkio_delay_max, blkio_delay_min);
+
+ PRINT_FILED_DELAY("SWAP", t->version, t,
+ swapin_count, swapin_delay_total,
+ swapin_delay_max, swapin_delay_min);
+
+ PRINT_FILED_DELAY("RECLAIM", t->version, t,
+ freepages_count, freepages_delay_total,
+ freepages_delay_max, freepages_delay_min);
+
+ PRINT_FILED_DELAY("THRASHING", t->version, t,
+ thrashing_count, thrashing_delay_total,
+ thrashing_delay_max, thrashing_delay_min);
+
+ if (t->version >= 11) {
+ PRINT_FILED_DELAY("COMPACT", t->version, t,
+ compact_count, compact_delay_total,
+ compact_delay_max, compact_delay_min);
+ }
+
+ if (t->version >= 13) {
+ PRINT_FILED_DELAY("WPCOPY", t->version, t,
+ wpcopy_count, wpcopy_delay_total,
+ wpcopy_delay_max, wpcopy_delay_min);
+ }
+
+ if (t->version >= 14) {
+ PRINT_FILED_DELAY("IRQ", t->version, t,
+ irq_count, irq_delay_total,
+ irq_delay_max, irq_delay_min);
+ }
}
static void task_context_switch_counts(struct taskstats *t)
diff --git a/tools/accounting/procacct.c b/tools/accounting/procacct.c
index 90c4a37f53d9..e8dee05a6264 100644
--- a/tools/accounting/procacct.c
+++ b/tools/accounting/procacct.c
@@ -274,12 +274,11 @@ int main(int argc, char *argv[])
int maskset = 0;
char *logfile = NULL;
int cfd = 0;
- int forking = 0;
struct msgtemplate msg;
- while (!forking) {
- c = getopt(argc, argv, "m:vr:");
+ while (1) {
+ c = getopt(argc, argv, "m:vr:w:");
if (c < 0)
break;
diff --git a/tools/arch/arm/include/uapi/asm/kvm.h b/tools/arch/arm/include/uapi/asm/kvm.h
deleted file mode 100644
index 03cd7c19a683..000000000000
--- a/tools/arch/arm/include/uapi/asm/kvm.h
+++ /dev/null
@@ -1,314 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-/*
- * Copyright (C) 2012 - Virtual Open Systems and Columbia University
- * Author: Christoffer Dall <c.dall@virtualopensystems.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
- */
-
-#ifndef __ARM_KVM_H__
-#define __ARM_KVM_H__
-
-#include <linux/types.h>
-#include <linux/psci.h>
-#include <asm/ptrace.h>
-
-#define __KVM_HAVE_GUEST_DEBUG
-#define __KVM_HAVE_IRQ_LINE
-#define __KVM_HAVE_READONLY_MEM
-#define __KVM_HAVE_VCPU_EVENTS
-
-#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
-
-#define KVM_REG_SIZE(id) \
- (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
-
-/* Valid for svc_regs, abt_regs, und_regs, irq_regs in struct kvm_regs */
-#define KVM_ARM_SVC_sp svc_regs[0]
-#define KVM_ARM_SVC_lr svc_regs[1]
-#define KVM_ARM_SVC_spsr svc_regs[2]
-#define KVM_ARM_ABT_sp abt_regs[0]
-#define KVM_ARM_ABT_lr abt_regs[1]
-#define KVM_ARM_ABT_spsr abt_regs[2]
-#define KVM_ARM_UND_sp und_regs[0]
-#define KVM_ARM_UND_lr und_regs[1]
-#define KVM_ARM_UND_spsr und_regs[2]
-#define KVM_ARM_IRQ_sp irq_regs[0]
-#define KVM_ARM_IRQ_lr irq_regs[1]
-#define KVM_ARM_IRQ_spsr irq_regs[2]
-
-/* Valid only for fiq_regs in struct kvm_regs */
-#define KVM_ARM_FIQ_r8 fiq_regs[0]
-#define KVM_ARM_FIQ_r9 fiq_regs[1]
-#define KVM_ARM_FIQ_r10 fiq_regs[2]
-#define KVM_ARM_FIQ_fp fiq_regs[3]
-#define KVM_ARM_FIQ_ip fiq_regs[4]
-#define KVM_ARM_FIQ_sp fiq_regs[5]
-#define KVM_ARM_FIQ_lr fiq_regs[6]
-#define KVM_ARM_FIQ_spsr fiq_regs[7]
-
-struct kvm_regs {
- struct pt_regs usr_regs; /* R0_usr - R14_usr, PC, CPSR */
- unsigned long svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
- unsigned long abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
- unsigned long und_regs[3]; /* SP_und, LR_und, SPSR_und */
- unsigned long irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
- unsigned long fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
-};
-
-/* Supported Processor Types */
-#define KVM_ARM_TARGET_CORTEX_A15 0
-#define KVM_ARM_TARGET_CORTEX_A7 1
-#define KVM_ARM_NUM_TARGETS 2
-
-/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
-#define KVM_ARM_DEVICE_TYPE_SHIFT 0
-#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
-#define KVM_ARM_DEVICE_ID_SHIFT 16
-#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
-
-/* Supported device IDs */
-#define KVM_ARM_DEVICE_VGIC_V2 0
-
-/* Supported VGIC address types */
-#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
-#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
-
-#define KVM_VGIC_V2_DIST_SIZE 0x1000
-#define KVM_VGIC_V2_CPU_SIZE 0x2000
-
-/* Supported VGICv3 address types */
-#define KVM_VGIC_V3_ADDR_TYPE_DIST 2
-#define KVM_VGIC_V3_ADDR_TYPE_REDIST 3
-#define KVM_VGIC_ITS_ADDR_TYPE 4
-#define KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION 5
-
-#define KVM_VGIC_V3_DIST_SIZE SZ_64K
-#define KVM_VGIC_V3_REDIST_SIZE (2 * SZ_64K)
-#define KVM_VGIC_V3_ITS_SIZE (2 * SZ_64K)
-
-#define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
-#define KVM_ARM_VCPU_PSCI_0_2 1 /* CPU uses PSCI v0.2 */
-
-struct kvm_vcpu_init {
- __u32 target;
- __u32 features[7];
-};
-
-struct kvm_sregs {
-};
-
-struct kvm_fpu {
-};
-
-struct kvm_guest_debug_arch {
-};
-
-struct kvm_debug_exit_arch {
-};
-
-struct kvm_sync_regs {
- /* Used with KVM_CAP_ARM_USER_IRQ */
- __u64 device_irq_level;
-};
-
-struct kvm_arch_memory_slot {
-};
-
-/* for KVM_GET/SET_VCPU_EVENTS */
-struct kvm_vcpu_events {
- struct {
- __u8 serror_pending;
- __u8 serror_has_esr;
- __u8 ext_dabt_pending;
- /* Align it to 8 bytes */
- __u8 pad[5];
- __u64 serror_esr;
- } exception;
- __u32 reserved[12];
-};
-
-/* If you need to interpret the index values, here is the key: */
-#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
-#define KVM_REG_ARM_COPROC_SHIFT 16
-#define KVM_REG_ARM_32_OPC2_MASK 0x0000000000000007
-#define KVM_REG_ARM_32_OPC2_SHIFT 0
-#define KVM_REG_ARM_OPC1_MASK 0x0000000000000078
-#define KVM_REG_ARM_OPC1_SHIFT 3
-#define KVM_REG_ARM_CRM_MASK 0x0000000000000780
-#define KVM_REG_ARM_CRM_SHIFT 7
-#define KVM_REG_ARM_32_CRN_MASK 0x0000000000007800
-#define KVM_REG_ARM_32_CRN_SHIFT 11
-/*
- * For KVM currently all guest registers are nonsecure, but we reserve a bit
- * in the encoding to distinguish secure from nonsecure for AArch32 system
- * registers that are banked by security. This is 1 for the secure banked
- * register, and 0 for the nonsecure banked register or if the register is
- * not banked by security.
- */
-#define KVM_REG_ARM_SECURE_MASK 0x0000000010000000
-#define KVM_REG_ARM_SECURE_SHIFT 28
-
-#define ARM_CP15_REG_SHIFT_MASK(x,n) \
- (((x) << KVM_REG_ARM_ ## n ## _SHIFT) & KVM_REG_ARM_ ## n ## _MASK)
-
-#define __ARM_CP15_REG(op1,crn,crm,op2) \
- (KVM_REG_ARM | (15 << KVM_REG_ARM_COPROC_SHIFT) | \
- ARM_CP15_REG_SHIFT_MASK(op1, OPC1) | \
- ARM_CP15_REG_SHIFT_MASK(crn, 32_CRN) | \
- ARM_CP15_REG_SHIFT_MASK(crm, CRM) | \
- ARM_CP15_REG_SHIFT_MASK(op2, 32_OPC2))
-
-#define ARM_CP15_REG32(...) (__ARM_CP15_REG(__VA_ARGS__) | KVM_REG_SIZE_U32)
-
-#define __ARM_CP15_REG64(op1,crm) \
- (__ARM_CP15_REG(op1, 0, crm, 0) | KVM_REG_SIZE_U64)
-#define ARM_CP15_REG64(...) __ARM_CP15_REG64(__VA_ARGS__)
-
-/* PL1 Physical Timer Registers */
-#define KVM_REG_ARM_PTIMER_CTL ARM_CP15_REG32(0, 14, 2, 1)
-#define KVM_REG_ARM_PTIMER_CNT ARM_CP15_REG64(0, 14)
-#define KVM_REG_ARM_PTIMER_CVAL ARM_CP15_REG64(2, 14)
-
-/* Virtual Timer Registers */
-#define KVM_REG_ARM_TIMER_CTL ARM_CP15_REG32(0, 14, 3, 1)
-#define KVM_REG_ARM_TIMER_CNT ARM_CP15_REG64(1, 14)
-#define KVM_REG_ARM_TIMER_CVAL ARM_CP15_REG64(3, 14)
-
-/* Normal registers are mapped as coprocessor 16. */
-#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
-#define KVM_REG_ARM_CORE_REG(name) (offsetof(struct kvm_regs, name) / 4)
-
-/* Some registers need more space to represent values. */
-#define KVM_REG_ARM_DEMUX (0x0011 << KVM_REG_ARM_COPROC_SHIFT)
-#define KVM_REG_ARM_DEMUX_ID_MASK 0x000000000000FF00
-#define KVM_REG_ARM_DEMUX_ID_SHIFT 8
-#define KVM_REG_ARM_DEMUX_ID_CCSIDR (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
-#define KVM_REG_ARM_DEMUX_VAL_MASK 0x00000000000000FF
-#define KVM_REG_ARM_DEMUX_VAL_SHIFT 0
-
-/* VFP registers: we could overload CP10 like ARM does, but that's ugly. */
-#define KVM_REG_ARM_VFP (0x0012 << KVM_REG_ARM_COPROC_SHIFT)
-#define KVM_REG_ARM_VFP_MASK 0x000000000000FFFF
-#define KVM_REG_ARM_VFP_BASE_REG 0x0
-#define KVM_REG_ARM_VFP_FPSID 0x1000
-#define KVM_REG_ARM_VFP_FPSCR 0x1001
-#define KVM_REG_ARM_VFP_MVFR1 0x1006
-#define KVM_REG_ARM_VFP_MVFR0 0x1007
-#define KVM_REG_ARM_VFP_FPEXC 0x1008
-#define KVM_REG_ARM_VFP_FPINST 0x1009
-#define KVM_REG_ARM_VFP_FPINST2 0x100A
-
-/* KVM-as-firmware specific pseudo-registers */
-#define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT)
-#define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM | KVM_REG_SIZE_U64 | \
- KVM_REG_ARM_FW | ((r) & 0xffff))
-#define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0)
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1 KVM_REG_ARM_FW_REG(1)
- /* Higher values mean better protection. */
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL 0
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL 1
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED 2
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2 KVM_REG_ARM_FW_REG(2)
- /* Higher values mean better protection. */
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL 0
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN 1
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL 2
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED 3
-#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED (1U << 4)
-
-/* Device Control API: ARM VGIC */
-#define KVM_DEV_ARM_VGIC_GRP_ADDR 0
-#define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
-#define KVM_DEV_ARM_VGIC_GRP_CPU_REGS 2
-#define KVM_DEV_ARM_VGIC_CPUID_SHIFT 32
-#define KVM_DEV_ARM_VGIC_CPUID_MASK (0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
-#define KVM_DEV_ARM_VGIC_V3_MPIDR_SHIFT 32
-#define KVM_DEV_ARM_VGIC_V3_MPIDR_MASK \
- (0xffffffffULL << KVM_DEV_ARM_VGIC_V3_MPIDR_SHIFT)
-#define KVM_DEV_ARM_VGIC_OFFSET_SHIFT 0
-#define KVM_DEV_ARM_VGIC_OFFSET_MASK (0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
-#define KVM_DEV_ARM_VGIC_SYSREG_INSTR_MASK (0xffff)
-#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS 3
-#define KVM_DEV_ARM_VGIC_GRP_CTRL 4
-#define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5
-#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
-#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
-#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
-#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
-#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
- (0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
-#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INTID_MASK 0x3ff
-#define VGIC_LEVEL_INFO_LINE_LEVEL 0
-
-/* Device Control API on vcpu fd */
-#define KVM_ARM_VCPU_PMU_V3_CTRL 0
-#define KVM_ARM_VCPU_PMU_V3_IRQ 0
-#define KVM_ARM_VCPU_PMU_V3_INIT 1
-#define KVM_ARM_VCPU_TIMER_CTRL 1
-#define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0
-#define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
-
-#define KVM_DEV_ARM_VGIC_CTRL_INIT 0
-#define KVM_DEV_ARM_ITS_SAVE_TABLES 1
-#define KVM_DEV_ARM_ITS_RESTORE_TABLES 2
-#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3
-#define KVM_DEV_ARM_ITS_CTRL_RESET 4
-
-/* KVM_IRQ_LINE irq field index values */
-#define KVM_ARM_IRQ_VCPU2_SHIFT 28
-#define KVM_ARM_IRQ_VCPU2_MASK 0xf
-#define KVM_ARM_IRQ_TYPE_SHIFT 24
-#define KVM_ARM_IRQ_TYPE_MASK 0xf
-#define KVM_ARM_IRQ_VCPU_SHIFT 16
-#define KVM_ARM_IRQ_VCPU_MASK 0xff
-#define KVM_ARM_IRQ_NUM_SHIFT 0
-#define KVM_ARM_IRQ_NUM_MASK 0xffff
-
-/* irq_type field */
-#define KVM_ARM_IRQ_TYPE_CPU 0
-#define KVM_ARM_IRQ_TYPE_SPI 1
-#define KVM_ARM_IRQ_TYPE_PPI 2
-
-/* out-of-kernel GIC cpu interrupt injection irq_number field */
-#define KVM_ARM_IRQ_CPU_IRQ 0
-#define KVM_ARM_IRQ_CPU_FIQ 1
-
-/*
- * This used to hold the highest supported SPI, but it is now obsolete
- * and only here to provide source code level compatibility with older
- * userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
- */
-#ifndef __KERNEL__
-#define KVM_ARM_IRQ_GIC_MAX 127
-#endif
-
-/* One single KVM irqchip, ie. the VGIC */
-#define KVM_NR_IRQCHIPS 1
-
-/* PSCI interface */
-#define KVM_PSCI_FN_BASE 0x95c1ba5e
-#define KVM_PSCI_FN(n) (KVM_PSCI_FN_BASE + (n))
-
-#define KVM_PSCI_FN_CPU_SUSPEND KVM_PSCI_FN(0)
-#define KVM_PSCI_FN_CPU_OFF KVM_PSCI_FN(1)
-#define KVM_PSCI_FN_CPU_ON KVM_PSCI_FN(2)
-#define KVM_PSCI_FN_MIGRATE KVM_PSCI_FN(3)
-
-#define KVM_PSCI_RET_SUCCESS PSCI_RET_SUCCESS
-#define KVM_PSCI_RET_NI PSCI_RET_NOT_SUPPORTED
-#define KVM_PSCI_RET_INVAL PSCI_RET_INVALID_PARAMS
-#define KVM_PSCI_RET_DENIED PSCI_RET_DENIED
-
-#endif /* __ARM_KVM_H__ */
diff --git a/tools/arch/arm64/include/asm/brk-imm.h b/tools/arch/arm64/include/asm/brk-imm.h
new file mode 100644
index 000000000000..beb42c62b6ac
--- /dev/null
+++ b/tools/arch/arm64/include/asm/brk-imm.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_BRK_IMM_H
+#define __ASM_BRK_IMM_H
+
+/*
+ * #imm16 values used for BRK instruction generation
+ * 0x004: for installing kprobes
+ * 0x005: for installing uprobes
+ * 0x006: for kprobe software single-step
+ * 0x007: for kretprobe return
+ * Allowed values for kgdb are 0x400 - 0x7ff
+ * 0x100: for triggering a fault on purpose (reserved)
+ * 0x400: for dynamic BRK instruction
+ * 0x401: for compile time BRK instruction
+ * 0x800: kernel-mode BUG() and WARN() traps
+ * 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff)
+ * 0x55xx: Undefined Behavior Sanitizer traps ('U' << 8)
+ * 0x8xxx: Control-Flow Integrity traps
+ */
+#define KPROBES_BRK_IMM 0x004
+#define UPROBES_BRK_IMM 0x005
+#define KPROBES_BRK_SS_IMM 0x006
+#define KRETPROBES_BRK_IMM 0x007
+#define FAULT_BRK_IMM 0x100
+#define KGDB_DYN_DBG_BRK_IMM 0x400
+#define KGDB_COMPILED_DBG_BRK_IMM 0x401
+#define BUG_BRK_IMM 0x800
+#define KASAN_BRK_IMM 0x900
+#define KASAN_BRK_MASK 0x0ff
+#define UBSAN_BRK_IMM 0x5500
+#define UBSAN_BRK_MASK 0x00ff
+
+#define CFI_BRK_IMM_TARGET GENMASK(4, 0)
+#define CFI_BRK_IMM_TYPE GENMASK(9, 5)
+#define CFI_BRK_IMM_BASE 0x8000
+#define CFI_BRK_IMM_MASK (CFI_BRK_IMM_TARGET | CFI_BRK_IMM_TYPE)
+
+#endif
diff --git a/tools/arch/arm64/include/asm/cputype.h b/tools/arch/arm64/include/asm/cputype.h
index 7c7493cb571f..139d5e87dc95 100644
--- a/tools/arch/arm64/include/asm/cputype.h
+++ b/tools/arch/arm64/include/asm/cputype.h
@@ -61,6 +61,7 @@
#define ARM_CPU_IMP_HISI 0x48
#define ARM_CPU_IMP_APPLE 0x61
#define ARM_CPU_IMP_AMPERE 0xC0
+#define ARM_CPU_IMP_MICROSOFT 0x6D
#define ARM_CPU_PART_AEM_V8 0xD0F
#define ARM_CPU_PART_FOUNDATION 0xD00
@@ -74,17 +75,28 @@
#define ARM_CPU_PART_CORTEX_A76 0xD0B
#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
#define ARM_CPU_PART_CORTEX_A77 0xD0D
+#define ARM_CPU_PART_CORTEX_A76AE 0xD0E
#define ARM_CPU_PART_NEOVERSE_V1 0xD40
#define ARM_CPU_PART_CORTEX_A78 0xD41
#define ARM_CPU_PART_CORTEX_A78AE 0xD42
#define ARM_CPU_PART_CORTEX_X1 0xD44
#define ARM_CPU_PART_CORTEX_A510 0xD46
#define ARM_CPU_PART_CORTEX_A520 0xD80
#define ARM_CPU_PART_CORTEX_A710 0xD47
#define ARM_CPU_PART_CORTEX_A715 0xD4D
#define ARM_CPU_PART_CORTEX_X2 0xD48
#define ARM_CPU_PART_NEOVERSE_N2 0xD49
#define ARM_CPU_PART_CORTEX_A78C 0xD4B
+#define ARM_CPU_PART_CORTEX_X1C 0xD4C
+#define ARM_CPU_PART_CORTEX_X3 0xD4E
+#define ARM_CPU_PART_NEOVERSE_V2 0xD4F
+#define ARM_CPU_PART_CORTEX_A720 0xD81
+#define ARM_CPU_PART_CORTEX_X4 0xD82
+#define ARM_CPU_PART_NEOVERSE_V3 0xD84
+#define ARM_CPU_PART_CORTEX_X925 0xD85
+#define ARM_CPU_PART_CORTEX_A725 0xD87
+#define ARM_CPU_PART_NEOVERSE_N3 0xD8E
#define APM_CPU_PART_XGENE 0x000
#define APM_CPU_VAR_POTENZA 0x00
@@ -109,9 +121,11 @@
#define QCOM_CPU_PART_KRYO 0x200
#define QCOM_CPU_PART_KRYO_2XX_GOLD 0x800
#define QCOM_CPU_PART_KRYO_2XX_SILVER 0x801
+#define QCOM_CPU_PART_KRYO_3XX_GOLD 0x802
#define QCOM_CPU_PART_KRYO_3XX_SILVER 0x803
#define QCOM_CPU_PART_KRYO_4XX_GOLD 0x804
#define QCOM_CPU_PART_KRYO_4XX_SILVER 0x805
+#define QCOM_CPU_PART_ORYON_X1 0x001
#define NVIDIA_CPU_PART_DENVER 0x003
#define NVIDIA_CPU_PART_CARMEL 0x004
@@ -119,6 +133,8 @@
#define FUJITSU_CPU_PART_A64FX 0x001
#define HISI_CPU_PART_TSV110 0xD01
+#define HISI_CPU_PART_HIP09 0xD02
+#define HISI_CPU_PART_HIP12 0xD06
#define APPLE_CPU_PART_M1_ICESTORM 0x022
#define APPLE_CPU_PART_M1_FIRESTORM 0x023
@@ -134,6 +150,9 @@
#define APPLE_CPU_PART_M2_AVALANCHE_MAX 0x039
#define AMPERE_CPU_PART_AMPERE1 0xAC3
+#define AMPERE_CPU_PART_AMPERE1A 0xAC4
+
+#define MICROSOFT_CPU_PART_AZURE_COBALT_100 0xD49 /* Based on r0p0 of ARM Neoverse N2 */
#define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
#define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
@@ -145,17 +164,28 @@
#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
#define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
+#define MIDR_CORTEX_A76AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76AE)
#define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
#define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
#define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
#define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
#define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
#define MIDR_CORTEX_A520 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A520)
#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
#define MIDR_CORTEX_A715 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A715)
#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)
#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
#define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C)
+#define MIDR_CORTEX_X1C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1C)
+#define MIDR_CORTEX_X3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X3)
+#define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2)
+#define MIDR_CORTEX_A720 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A720)
+#define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4)
+#define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3)
+#define MIDR_CORTEX_X925 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X925)
+#define MIDR_CORTEX_A725 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A725)
+#define MIDR_NEOVERSE_N3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N3)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
@@ -173,13 +203,27 @@
#define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
#define MIDR_QCOM_KRYO_2XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD)
#define MIDR_QCOM_KRYO_2XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER)
+#define MIDR_QCOM_KRYO_3XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_GOLD)
#define MIDR_QCOM_KRYO_3XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_SILVER)
#define MIDR_QCOM_KRYO_4XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD)
#define MIDR_QCOM_KRYO_4XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_SILVER)
+#define MIDR_QCOM_ORYON_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_ORYON_X1)
+
+/*
+ * NOTES:
+ * - Qualcomm Kryo 5XX Prime / Gold ID themselves as MIDR_CORTEX_A77
+ * - Qualcomm Kryo 5XX Silver IDs itself as MIDR_QCOM_KRYO_4XX_SILVER
+ * - Qualcomm Kryo 6XX Prime IDs itself as MIDR_CORTEX_X1
+ * - Qualcomm Kryo 6XX Gold IDs itself as ARM_CPU_PART_CORTEX_A78
+ * - Qualcomm Kryo 6XX Silver IDs itself as MIDR_CORTEX_A55
+ */
+
#define MIDR_NVIDIA_DENVER MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER)
#define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
#define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX)
#define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
+#define MIDR_HISI_HIP09 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP09)
+#define MIDR_HISI_HIP12 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP12)
#define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
#define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
#define MIDR_APPLE_M1_ICESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_PRO)
@@ -193,6 +237,8 @@
#define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX)
#define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX)
#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
+#define MIDR_AMPERE1A MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1A)
+#define MIDR_MICROSOFT_AZURE_COBALT_100 MIDR_CPU_MODEL(ARM_CPU_IMP_MICROSOFT, MICROSOFT_CPU_PART_AZURE_COBALT_100)
/* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
#define MIDR_FUJITSU_ERRATUM_010001 MIDR_FUJITSU_A64FX
@@ -265,6 +311,14 @@ static inline u32 __attribute_const__ read_cpuid_id(void)
return read_cpuid(MIDR_EL1);
}
+struct target_impl_cpu {
+ u64 midr;
+ u64 revidr;
+ u64 aidr;
+};
+
+bool cpu_errata_set_target_impl(u64 num, void *impl_cpus);
+
static inline u64 __attribute_const__ read_cpuid_mpidr(void)
{
return read_cpuid(MPIDR_EL1);
diff --git a/tools/arch/arm64/include/asm/esr.h b/tools/arch/arm64/include/asm/esr.h
new file mode 100644
index 000000000000..bd592ca81571
--- /dev/null
+++ b/tools/arch/arm64/include/asm/esr.h
@@ -0,0 +1,455 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2013 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ */
+
+#ifndef __ASM_ESR_H
+#define __ASM_ESR_H
+
+#include <asm/sysreg.h>
+
+#define ESR_ELx_EC_UNKNOWN UL(0x00)
+#define ESR_ELx_EC_WFx UL(0x01)
+/* Unallocated EC: 0x02 */
+#define ESR_ELx_EC_CP15_32 UL(0x03)
+#define ESR_ELx_EC_CP15_64 UL(0x04)
+#define ESR_ELx_EC_CP14_MR UL(0x05)
+#define ESR_ELx_EC_CP14_LS UL(0x06)
+#define ESR_ELx_EC_FP_ASIMD UL(0x07)
+#define ESR_ELx_EC_CP10_ID UL(0x08) /* EL2 only */
+#define ESR_ELx_EC_PAC UL(0x09) /* EL2 and above */
+/* Unallocated EC: 0x0A - 0x0B */
+#define ESR_ELx_EC_CP14_64 UL(0x0C)
+#define ESR_ELx_EC_BTI UL(0x0D)
+#define ESR_ELx_EC_ILL UL(0x0E)
+/* Unallocated EC: 0x0F - 0x10 */
+#define ESR_ELx_EC_SVC32 UL(0x11)
+#define ESR_ELx_EC_HVC32 UL(0x12) /* EL2 only */
+#define ESR_ELx_EC_SMC32 UL(0x13) /* EL2 and above */
+/* Unallocated EC: 0x14 */
+#define ESR_ELx_EC_SVC64 UL(0x15)
+#define ESR_ELx_EC_HVC64 UL(0x16) /* EL2 and above */
+#define ESR_ELx_EC_SMC64 UL(0x17) /* EL2 and above */
+#define ESR_ELx_EC_SYS64 UL(0x18)
+#define ESR_ELx_EC_SVE UL(0x19)
+#define ESR_ELx_EC_ERET UL(0x1a) /* EL2 only */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC UL(0x1C) /* EL1 and above */
+#define ESR_ELx_EC_SME UL(0x1D)
+/* Unallocated EC: 0x1E */
+#define ESR_ELx_EC_IMP_DEF UL(0x1f) /* EL3 only */
+#define ESR_ELx_EC_IABT_LOW UL(0x20)
+#define ESR_ELx_EC_IABT_CUR UL(0x21)
+#define ESR_ELx_EC_PC_ALIGN UL(0x22)
+/* Unallocated EC: 0x23 */
+#define ESR_ELx_EC_DABT_LOW UL(0x24)
+#define ESR_ELx_EC_DABT_CUR UL(0x25)
+#define ESR_ELx_EC_SP_ALIGN UL(0x26)
+#define ESR_ELx_EC_MOPS UL(0x27)
+#define ESR_ELx_EC_FP_EXC32 UL(0x28)
+/* Unallocated EC: 0x29 - 0x2B */
+#define ESR_ELx_EC_FP_EXC64 UL(0x2C)
+/* Unallocated EC: 0x2D - 0x2E */
+#define ESR_ELx_EC_SERROR UL(0x2F)
+#define ESR_ELx_EC_BREAKPT_LOW UL(0x30)
+#define ESR_ELx_EC_BREAKPT_CUR UL(0x31)
+#define ESR_ELx_EC_SOFTSTP_LOW UL(0x32)
+#define ESR_ELx_EC_SOFTSTP_CUR UL(0x33)
+#define ESR_ELx_EC_WATCHPT_LOW UL(0x34)
+#define ESR_ELx_EC_WATCHPT_CUR UL(0x35)
+/* Unallocated EC: 0x36 - 0x37 */
+#define ESR_ELx_EC_BKPT32 UL(0x38)
+/* Unallocated EC: 0x39 */
+#define ESR_ELx_EC_VECTOR32 UL(0x3A) /* EL2 only */
+/* Unallocated EC: 0x3B */
+#define ESR_ELx_EC_BRK64 UL(0x3C)
+/* Unallocated EC: 0x3D - 0x3F */
+#define ESR_ELx_EC_MAX UL(0x3F)
+
+#define ESR_ELx_EC_SHIFT (26)
+#define ESR_ELx_EC_WIDTH (6)
+#define ESR_ELx_EC_MASK (UL(0x3F) << ESR_ELx_EC_SHIFT)
+#define ESR_ELx_EC(esr) (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
+
+#define ESR_ELx_IL_SHIFT (25)
+#define ESR_ELx_IL (UL(1) << ESR_ELx_IL_SHIFT)
+#define ESR_ELx_ISS_MASK (GENMASK(24, 0))
+#define ESR_ELx_ISS(esr) ((esr) & ESR_ELx_ISS_MASK)
+#define ESR_ELx_ISS2_SHIFT (32)
+#define ESR_ELx_ISS2_MASK (GENMASK_ULL(55, 32))
+#define ESR_ELx_ISS2(esr) (((esr) & ESR_ELx_ISS2_MASK) >> ESR_ELx_ISS2_SHIFT)
+
+/* ISS field definitions shared by different classes */
+#define ESR_ELx_WNR_SHIFT (6)
+#define ESR_ELx_WNR (UL(1) << ESR_ELx_WNR_SHIFT)
+
+/* Asynchronous Error Type */
+#define ESR_ELx_IDS_SHIFT (24)
+#define ESR_ELx_IDS (UL(1) << ESR_ELx_IDS_SHIFT)
+#define ESR_ELx_AET_SHIFT (10)
+#define ESR_ELx_AET (UL(0x7) << ESR_ELx_AET_SHIFT)
+
+#define ESR_ELx_AET_UC (UL(0) << ESR_ELx_AET_SHIFT)
+#define ESR_ELx_AET_UEU (UL(1) << ESR_ELx_AET_SHIFT)
+#define ESR_ELx_AET_UEO (UL(2) << ESR_ELx_AET_SHIFT)
+#define ESR_ELx_AET_UER (UL(3) << ESR_ELx_AET_SHIFT)
+#define ESR_ELx_AET_CE (UL(6) << ESR_ELx_AET_SHIFT)
+
+/* Shared ISS field definitions for Data/Instruction aborts */
+#define ESR_ELx_SET_SHIFT (11)
+#define ESR_ELx_SET_MASK (UL(3) << ESR_ELx_SET_SHIFT)
+#define ESR_ELx_FnV_SHIFT (10)
+#define ESR_ELx_FnV (UL(1) << ESR_ELx_FnV_SHIFT)
+#define ESR_ELx_EA_SHIFT (9)
+#define ESR_ELx_EA (UL(1) << ESR_ELx_EA_SHIFT)
+#define ESR_ELx_S1PTW_SHIFT (7)
+#define ESR_ELx_S1PTW (UL(1) << ESR_ELx_S1PTW_SHIFT)
+
+/* Shared ISS fault status code (IFSC/DFSC) for Data/Instruction aborts */
+#define ESR_ELx_FSC (0x3F)
+#define ESR_ELx_FSC_TYPE (0x3C)
+#define ESR_ELx_FSC_LEVEL (0x03)
+#define ESR_ELx_FSC_EXTABT (0x10)
+#define ESR_ELx_FSC_MTE (0x11)
+#define ESR_ELx_FSC_SERROR (0x11)
+#define ESR_ELx_FSC_ACCESS (0x08)
+#define ESR_ELx_FSC_FAULT (0x04)
+#define ESR_ELx_FSC_PERM (0x0C)
+#define ESR_ELx_FSC_SEA_TTW(n) (0x14 + (n))
+#define ESR_ELx_FSC_SECC (0x18)
+#define ESR_ELx_FSC_SECC_TTW(n) (0x1c + (n))
+
+/* Status codes for individual page table levels */
+#define ESR_ELx_FSC_ACCESS_L(n) (ESR_ELx_FSC_ACCESS + (n))
+#define ESR_ELx_FSC_PERM_L(n) (ESR_ELx_FSC_PERM + (n))
+
+#define ESR_ELx_FSC_FAULT_nL (0x2C)
+#define ESR_ELx_FSC_FAULT_L(n) (((n) < 0 ? ESR_ELx_FSC_FAULT_nL : \
+ ESR_ELx_FSC_FAULT) + (n))
+
+/* ISS field definitions for Data Aborts */
+#define ESR_ELx_ISV_SHIFT (24)
+#define ESR_ELx_ISV (UL(1) << ESR_ELx_ISV_SHIFT)
+#define ESR_ELx_SAS_SHIFT (22)
+#define ESR_ELx_SAS (UL(3) << ESR_ELx_SAS_SHIFT)
+#define ESR_ELx_SSE_SHIFT (21)
+#define ESR_ELx_SSE (UL(1) << ESR_ELx_SSE_SHIFT)
+#define ESR_ELx_SRT_SHIFT (16)
+#define ESR_ELx_SRT_MASK (UL(0x1F) << ESR_ELx_SRT_SHIFT)
+#define ESR_ELx_SF_SHIFT (15)
+#define ESR_ELx_SF (UL(1) << ESR_ELx_SF_SHIFT)
+#define ESR_ELx_AR_SHIFT (14)
+#define ESR_ELx_AR (UL(1) << ESR_ELx_AR_SHIFT)
+#define ESR_ELx_CM_SHIFT (8)
+#define ESR_ELx_CM (UL(1) << ESR_ELx_CM_SHIFT)
+
+/* ISS2 field definitions for Data Aborts */
+#define ESR_ELx_TnD_SHIFT (10)
+#define ESR_ELx_TnD (UL(1) << ESR_ELx_TnD_SHIFT)
+#define ESR_ELx_TagAccess_SHIFT (9)
+#define ESR_ELx_TagAccess (UL(1) << ESR_ELx_TagAccess_SHIFT)
+#define ESR_ELx_GCS_SHIFT (8)
+#define ESR_ELx_GCS (UL(1) << ESR_ELx_GCS_SHIFT)
+#define ESR_ELx_Overlay_SHIFT (6)
+#define ESR_ELx_Overlay (UL(1) << ESR_ELx_Overlay_SHIFT)
+#define ESR_ELx_DirtyBit_SHIFT (5)
+#define ESR_ELx_DirtyBit (UL(1) << ESR_ELx_DirtyBit_SHIFT)
+#define ESR_ELx_Xs_SHIFT (0)
+#define ESR_ELx_Xs_MASK (GENMASK_ULL(4, 0))
+
+/* ISS field definitions for exceptions taken into Hyp */
+#define ESR_ELx_FSC_ADDRSZ (0x00)
+#define ESR_ELx_FSC_ADDRSZ_L(n) (ESR_ELx_FSC_ADDRSZ + (n))
+#define ESR_ELx_CV (UL(1) << 24)
+#define ESR_ELx_COND_SHIFT (20)
+#define ESR_ELx_COND_MASK (UL(0xF) << ESR_ELx_COND_SHIFT)
+#define ESR_ELx_WFx_ISS_RN (UL(0x1F) << 5)
+#define ESR_ELx_WFx_ISS_RV (UL(1) << 2)
+#define ESR_ELx_WFx_ISS_TI (UL(3) << 0)
+#define ESR_ELx_WFx_ISS_WFxT (UL(2) << 0)
+#define ESR_ELx_WFx_ISS_WFI (UL(0) << 0)
+#define ESR_ELx_WFx_ISS_WFE (UL(1) << 0)
+#define ESR_ELx_xVC_IMM_MASK ((UL(1) << 16) - 1)
+
+#define DISR_EL1_IDS (UL(1) << 24)
+/*
+ * DISR_EL1 and ESR_ELx share the bottom 13 bits, but the RES0 bits may mean
+ * different things in the future...
+ */
+#define DISR_EL1_ESR_MASK (ESR_ELx_AET | ESR_ELx_EA | ESR_ELx_FSC)
+
+/* ESR value templates for specific events */
+#define ESR_ELx_WFx_MASK (ESR_ELx_EC_MASK | \
+ (ESR_ELx_WFx_ISS_TI & ~ESR_ELx_WFx_ISS_WFxT))
+#define ESR_ELx_WFx_WFI_VAL ((ESR_ELx_EC_WFx << ESR_ELx_EC_SHIFT) | \
+ ESR_ELx_WFx_ISS_WFI)
+
+/* BRK instruction trap from AArch64 state */
+#define ESR_ELx_BRK64_ISS_COMMENT_MASK 0xffff
+
+/* ISS field definitions for System instruction traps */
+#define ESR_ELx_SYS64_ISS_RES0_SHIFT 22
+#define ESR_ELx_SYS64_ISS_RES0_MASK (UL(0x7) << ESR_ELx_SYS64_ISS_RES0_SHIFT)
+#define ESR_ELx_SYS64_ISS_DIR_MASK 0x1
+#define ESR_ELx_SYS64_ISS_DIR_READ 0x1
+#define ESR_ELx_SYS64_ISS_DIR_WRITE 0x0
+
+#define ESR_ELx_SYS64_ISS_RT_SHIFT 5
+#define ESR_ELx_SYS64_ISS_RT_MASK (UL(0x1f) << ESR_ELx_SYS64_ISS_RT_SHIFT)
+#define ESR_ELx_SYS64_ISS_CRM_SHIFT 1
+#define ESR_ELx_SYS64_ISS_CRM_MASK (UL(0xf) << ESR_ELx_SYS64_ISS_CRM_SHIFT)
+#define ESR_ELx_SYS64_ISS_CRN_SHIFT 10
+#define ESR_ELx_SYS64_ISS_CRN_MASK (UL(0xf) << ESR_ELx_SYS64_ISS_CRN_SHIFT)
+#define ESR_ELx_SYS64_ISS_OP1_SHIFT 14
+#define ESR_ELx_SYS64_ISS_OP1_MASK (UL(0x7) << ESR_ELx_SYS64_ISS_OP1_SHIFT)
+#define ESR_ELx_SYS64_ISS_OP2_SHIFT 17
+#define ESR_ELx_SYS64_ISS_OP2_MASK (UL(0x7) << ESR_ELx_SYS64_ISS_OP2_SHIFT)
+#define ESR_ELx_SYS64_ISS_OP0_SHIFT 20
+#define ESR_ELx_SYS64_ISS_OP0_MASK (UL(0x3) << ESR_ELx_SYS64_ISS_OP0_SHIFT)
+#define ESR_ELx_SYS64_ISS_SYS_MASK (ESR_ELx_SYS64_ISS_OP0_MASK | \
+ ESR_ELx_SYS64_ISS_OP1_MASK | \
+ ESR_ELx_SYS64_ISS_OP2_MASK | \
+ ESR_ELx_SYS64_ISS_CRN_MASK | \
+ ESR_ELx_SYS64_ISS_CRM_MASK)
+#define ESR_ELx_SYS64_ISS_SYS_VAL(op0, op1, op2, crn, crm) \
+ (((op0) << ESR_ELx_SYS64_ISS_OP0_SHIFT) | \
+ ((op1) << ESR_ELx_SYS64_ISS_OP1_SHIFT) | \
+ ((op2) << ESR_ELx_SYS64_ISS_OP2_SHIFT) | \
+ ((crn) << ESR_ELx_SYS64_ISS_CRN_SHIFT) | \
+ ((crm) << ESR_ELx_SYS64_ISS_CRM_SHIFT))
+
+#define ESR_ELx_SYS64_ISS_SYS_OP_MASK (ESR_ELx_SYS64_ISS_SYS_MASK | \
+ ESR_ELx_SYS64_ISS_DIR_MASK)
+#define ESR_ELx_SYS64_ISS_RT(esr) \
+ (((esr) & ESR_ELx_SYS64_ISS_RT_MASK) >> ESR_ELx_SYS64_ISS_RT_SHIFT)
+/*
+ * User space cache operations have the following sysreg encoding
+ * in System instructions.
+ * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 13, 14 }, WRITE (L=0)
+ */
+#define ESR_ELx_SYS64_ISS_CRM_DC_CIVAC 14
+#define ESR_ELx_SYS64_ISS_CRM_DC_CVADP 13
+#define ESR_ELx_SYS64_ISS_CRM_DC_CVAP 12
+#define ESR_ELx_SYS64_ISS_CRM_DC_CVAU 11
+#define ESR_ELx_SYS64_ISS_CRM_DC_CVAC 10
+#define ESR_ELx_SYS64_ISS_CRM_IC_IVAU 5
+
+#define ESR_ELx_SYS64_ISS_EL0_CACHE_OP_MASK (ESR_ELx_SYS64_ISS_OP0_MASK | \
+ ESR_ELx_SYS64_ISS_OP1_MASK | \
+ ESR_ELx_SYS64_ISS_OP2_MASK | \
+ ESR_ELx_SYS64_ISS_CRN_MASK | \
+ ESR_ELx_SYS64_ISS_DIR_MASK)
+#define ESR_ELx_SYS64_ISS_EL0_CACHE_OP_VAL \
+ (ESR_ELx_SYS64_ISS_SYS_VAL(1, 3, 1, 7, 0) | \
+ ESR_ELx_SYS64_ISS_DIR_WRITE)
+/*
+ * User space MRS operations which are supported for emulation
+ * have the following sysreg encoding in System instructions.
+ * op0 = 3, op1 = 0, crn = 0, {crm = 0, 4-7}, READ (L = 1)
+ */
+#define ESR_ELx_SYS64_ISS_SYS_MRS_OP_MASK (ESR_ELx_SYS64_ISS_OP0_MASK | \
+ ESR_ELx_SYS64_ISS_OP1_MASK | \
+ ESR_ELx_SYS64_ISS_CRN_MASK | \
+ ESR_ELx_SYS64_ISS_DIR_MASK)
+#define ESR_ELx_SYS64_ISS_SYS_MRS_OP_VAL \
+ (ESR_ELx_SYS64_ISS_SYS_VAL(3, 0, 0, 0, 0) | \
+ ESR_ELx_SYS64_ISS_DIR_READ)
+
+#define ESR_ELx_SYS64_ISS_SYS_CTR ESR_ELx_SYS64_ISS_SYS_VAL(3, 3, 1, 0, 0)
+#define ESR_ELx_SYS64_ISS_SYS_CTR_READ (ESR_ELx_SYS64_ISS_SYS_CTR | \
+ ESR_ELx_SYS64_ISS_DIR_READ)
+
+#define ESR_ELx_SYS64_ISS_SYS_CNTVCT (ESR_ELx_SYS64_ISS_SYS_VAL(3, 3, 2, 14, 0) | \
+ ESR_ELx_SYS64_ISS_DIR_READ)
+
+#define ESR_ELx_SYS64_ISS_SYS_CNTVCTSS (ESR_ELx_SYS64_ISS_SYS_VAL(3, 3, 6, 14, 0) | \
+ ESR_ELx_SYS64_ISS_DIR_READ)
+
+#define ESR_ELx_SYS64_ISS_SYS_CNTFRQ (ESR_ELx_SYS64_ISS_SYS_VAL(3, 3, 0, 14, 0) | \
+ ESR_ELx_SYS64_ISS_DIR_READ)
+
+#define esr_sys64_to_sysreg(e) \
+ sys_reg((((e) & ESR_ELx_SYS64_ISS_OP0_MASK) >> \
+ ESR_ELx_SYS64_ISS_OP0_SHIFT), \
+ (((e) & ESR_ELx_SYS64_ISS_OP1_MASK) >> \
+ ESR_ELx_SYS64_ISS_OP1_SHIFT), \
+ (((e) & ESR_ELx_SYS64_ISS_CRN_MASK) >> \
+ ESR_ELx_SYS64_ISS_CRN_SHIFT), \
+ (((e) & ESR_ELx_SYS64_ISS_CRM_MASK) >> \
+ ESR_ELx_SYS64_ISS_CRM_SHIFT), \
+ (((e) & ESR_ELx_SYS64_ISS_OP2_MASK) >> \
+ ESR_ELx_SYS64_ISS_OP2_SHIFT))
+
+#define esr_cp15_to_sysreg(e) \
+ sys_reg(3, \
+ (((e) & ESR_ELx_SYS64_ISS_OP1_MASK) >> \
+ ESR_ELx_SYS64_ISS_OP1_SHIFT), \
+ (((e) & ESR_ELx_SYS64_ISS_CRN_MASK) >> \
+ ESR_ELx_SYS64_ISS_CRN_SHIFT), \
+ (((e) & ESR_ELx_SYS64_ISS_CRM_MASK) >> \
+ ESR_ELx_SYS64_ISS_CRM_SHIFT), \
+ (((e) & ESR_ELx_SYS64_ISS_OP2_MASK) >> \
+ ESR_ELx_SYS64_ISS_OP2_SHIFT))
+
+/* ISS field definitions for ERET/ERETAA/ERETAB trapping */
+#define ESR_ELx_ERET_ISS_ERET 0x2
+#define ESR_ELx_ERET_ISS_ERETA 0x1
+
+/*
+ * ISS field definitions for floating-point exception traps
+ * (FP_EXC_32/FP_EXC_64).
+ *
+ * (The FPEXC_* constants are used instead for common bits.)
+ */
+
+#define ESR_ELx_FP_EXC_TFV (UL(1) << 23)
+
+/*
+ * ISS field definitions for CP15 accesses
+ */
+#define ESR_ELx_CP15_32_ISS_DIR_MASK 0x1
+#define ESR_ELx_CP15_32_ISS_DIR_READ 0x1
+#define ESR_ELx_CP15_32_ISS_DIR_WRITE 0x0
+
+#define ESR_ELx_CP15_32_ISS_RT_SHIFT 5
+#define ESR_ELx_CP15_32_ISS_RT_MASK (UL(0x1f) << ESR_ELx_CP15_32_ISS_RT_SHIFT)
+#define ESR_ELx_CP15_32_ISS_CRM_SHIFT 1
+#define ESR_ELx_CP15_32_ISS_CRM_MASK (UL(0xf) << ESR_ELx_CP15_32_ISS_CRM_SHIFT)
+#define ESR_ELx_CP15_32_ISS_CRN_SHIFT 10
+#define ESR_ELx_CP15_32_ISS_CRN_MASK (UL(0xf) << ESR_ELx_CP15_32_ISS_CRN_SHIFT)
+#define ESR_ELx_CP15_32_ISS_OP1_SHIFT 14
+#define ESR_ELx_CP15_32_ISS_OP1_MASK (UL(0x7) << ESR_ELx_CP15_32_ISS_OP1_SHIFT)
+#define ESR_ELx_CP15_32_ISS_OP2_SHIFT 17
+#define ESR_ELx_CP15_32_ISS_OP2_MASK (UL(0x7) << ESR_ELx_CP15_32_ISS_OP2_SHIFT)
+
+#define ESR_ELx_CP15_32_ISS_SYS_MASK (ESR_ELx_CP15_32_ISS_OP1_MASK | \
+ ESR_ELx_CP15_32_ISS_OP2_MASK | \
+ ESR_ELx_CP15_32_ISS_CRN_MASK | \
+ ESR_ELx_CP15_32_ISS_CRM_MASK | \
+ ESR_ELx_CP15_32_ISS_DIR_MASK)
+#define ESR_ELx_CP15_32_ISS_SYS_VAL(op1, op2, crn, crm) \
+ (((op1) << ESR_ELx_CP15_32_ISS_OP1_SHIFT) | \
+ ((op2) << ESR_ELx_CP15_32_ISS_OP2_SHIFT) | \
+ ((crn) << ESR_ELx_CP15_32_ISS_CRN_SHIFT) | \
+ ((crm) << ESR_ELx_CP15_32_ISS_CRM_SHIFT))
+
+#define ESR_ELx_CP15_64_ISS_DIR_MASK 0x1
+#define ESR_ELx_CP15_64_ISS_DIR_READ 0x1
+#define ESR_ELx_CP15_64_ISS_DIR_WRITE 0x0
+
+#define ESR_ELx_CP15_64_ISS_RT_SHIFT 5
+#define ESR_ELx_CP15_64_ISS_RT_MASK (UL(0x1f) << ESR_ELx_CP15_64_ISS_RT_SHIFT)
+
+#define ESR_ELx_CP15_64_ISS_RT2_SHIFT 10
+#define ESR_ELx_CP15_64_ISS_RT2_MASK (UL(0x1f) << ESR_ELx_CP15_64_ISS_RT2_SHIFT)
+
+#define ESR_ELx_CP15_64_ISS_OP1_SHIFT 16
+#define ESR_ELx_CP15_64_ISS_OP1_MASK (UL(0xf) << ESR_ELx_CP15_64_ISS_OP1_SHIFT)
+#define ESR_ELx_CP15_64_ISS_CRM_SHIFT 1
+#define ESR_ELx_CP15_64_ISS_CRM_MASK (UL(0xf) << ESR_ELx_CP15_64_ISS_CRM_SHIFT)
+
+#define ESR_ELx_CP15_64_ISS_SYS_VAL(op1, crm) \
+ (((op1) << ESR_ELx_CP15_64_ISS_OP1_SHIFT) | \
+ ((crm) << ESR_ELx_CP15_64_ISS_CRM_SHIFT))
+
+#define ESR_ELx_CP15_64_ISS_SYS_MASK (ESR_ELx_CP15_64_ISS_OP1_MASK | \
+ ESR_ELx_CP15_64_ISS_CRM_MASK | \
+ ESR_ELx_CP15_64_ISS_DIR_MASK)
+
+#define ESR_ELx_CP15_64_ISS_SYS_CNTVCT (ESR_ELx_CP15_64_ISS_SYS_VAL(1, 14) | \
+ ESR_ELx_CP15_64_ISS_DIR_READ)
+
+#define ESR_ELx_CP15_64_ISS_SYS_CNTVCTSS (ESR_ELx_CP15_64_ISS_SYS_VAL(9, 14) | \
+ ESR_ELx_CP15_64_ISS_DIR_READ)
+
+#define ESR_ELx_CP15_32_ISS_SYS_CNTFRQ (ESR_ELx_CP15_32_ISS_SYS_VAL(0, 0, 14, 0) |\
+ ESR_ELx_CP15_32_ISS_DIR_READ)
+
+/*
+ * ISS values for SME traps
+ */
+
+#define ESR_ELx_SME_ISS_SME_DISABLED 0
+#define ESR_ELx_SME_ISS_ILL 1
+#define ESR_ELx_SME_ISS_SM_DISABLED 2
+#define ESR_ELx_SME_ISS_ZA_DISABLED 3
+#define ESR_ELx_SME_ISS_ZT_DISABLED 4
+
+/* ISS field definitions for MOPS exceptions */
+#define ESR_ELx_MOPS_ISS_MEM_INST (UL(1) << 24)
+#define ESR_ELx_MOPS_ISS_FROM_EPILOGUE (UL(1) << 18)
+#define ESR_ELx_MOPS_ISS_WRONG_OPTION (UL(1) << 17)
+#define ESR_ELx_MOPS_ISS_OPTION_A (UL(1) << 16)
+#define ESR_ELx_MOPS_ISS_DESTREG(esr) (((esr) & (UL(0x1f) << 10)) >> 10)
+#define ESR_ELx_MOPS_ISS_SRCREG(esr) (((esr) & (UL(0x1f) << 5)) >> 5)
+#define ESR_ELx_MOPS_ISS_SIZEREG(esr) (((esr) & (UL(0x1f) << 0)) >> 0)
+
+#ifndef __ASSEMBLY__
+#include <asm/types.h>
+
+static inline unsigned long esr_brk_comment(unsigned long esr)
+{
+ return esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
+}
+
+static inline bool esr_is_data_abort(unsigned long esr)
+{
+ const unsigned long ec = ESR_ELx_EC(esr);
+
+ return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
+}
+
+static inline bool esr_is_cfi_brk(unsigned long esr)
+{
+ return ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
+ (esr_brk_comment(esr) & ~CFI_BRK_IMM_MASK) == CFI_BRK_IMM_BASE;
+}
+
+static inline bool esr_fsc_is_translation_fault(unsigned long esr)
+{
+ esr = esr & ESR_ELx_FSC;
+
+ return (esr == ESR_ELx_FSC_FAULT_L(3)) ||
+ (esr == ESR_ELx_FSC_FAULT_L(2)) ||
+ (esr == ESR_ELx_FSC_FAULT_L(1)) ||
+ (esr == ESR_ELx_FSC_FAULT_L(0)) ||
+ (esr == ESR_ELx_FSC_FAULT_L(-1));
+}
+
+static inline bool esr_fsc_is_permission_fault(unsigned long esr)
+{
+ esr = esr & ESR_ELx_FSC;
+
+ return (esr == ESR_ELx_FSC_PERM_L(3)) ||
+ (esr == ESR_ELx_FSC_PERM_L(2)) ||
+ (esr == ESR_ELx_FSC_PERM_L(1)) ||
+ (esr == ESR_ELx_FSC_PERM_L(0));
+}
+
+static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
+{
+ esr = esr & ESR_ELx_FSC;
+
+ return (esr == ESR_ELx_FSC_ACCESS_L(3)) ||
+ (esr == ESR_ELx_FSC_ACCESS_L(2)) ||
+ (esr == ESR_ELx_FSC_ACCESS_L(1)) ||
+ (esr == ESR_ELx_FSC_ACCESS_L(0));
+}
+
+/* Indicate whether ESR.EC==0x1A is for an ERETAx instruction */
+static inline bool esr_iss_is_eretax(unsigned long esr)
+{
+ return esr & ESR_ELx_ERET_ISS_ERET;
+}
+
+/* Indicate which key is used for ERETAx (false: A-Key, true: B-Key) */
+static inline bool esr_iss_is_eretab(unsigned long esr)
+{
+ return esr & ESR_ELx_ERET_ISS_ERETA;
+}
+
+const char *esr_get_class_string(unsigned long esr);
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_ESR_H */
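A minimal, hypothetical sketch of how the decode helpers above fit together (classify_fault() is an illustrative name, not something added by this patch; every macro and helper it uses comes from the header shown here):

static const char *classify_fault(unsigned long esr)
{
	/* esr_is_data_abort() checks the 6-bit exception class in ESR[31:26] */
	if (esr_is_data_abort(esr)) {
		/* the low 6 ISS bits hold the IFSC/DFSC fault status code */
		if (esr_fsc_is_translation_fault(esr))
			return "data abort: translation fault";
		if (esr_fsc_is_permission_fault(esr))
			return "data abort: permission fault";
		if (esr_fsc_is_access_flag_fault(esr))
			return "data abort: access flag fault";
		return "data abort: other fault status code";
	}

	if (ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64)
		return "BRK64";	/* esr_brk_comment(esr) yields the 16-bit immediate */

	return "unhandled exception class";
}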
diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index ccc13e991376..65f2759ea27a 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -11,6 +11,7 @@
#include <linux/bits.h>
#include <linux/stringify.h>
+#include <linux/kasan-tags.h>
#include <asm/gpr-num.h>
@@ -108,11 +109,15 @@
#define set_pstate_ssbs(x) asm volatile(SET_PSTATE_SSBS(x))
#define set_pstate_dit(x) asm volatile(SET_PSTATE_DIT(x))
+/* Register-based PAN access, for save/restore purposes */
+#define SYS_PSTATE_PAN sys_reg(3, 0, 4, 2, 3)
+
#define __SYS_BARRIER_INSN(CRm, op2, Rt) \
__emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
#define SB_BARRIER_INSN __SYS_BARRIER_INSN(0, 7, 31)
+/* Data cache zero operations */
#define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2)
#define SYS_DC_IGSW sys_insn(1, 0, 7, 6, 4)
#define SYS_DC_IGDSW sys_insn(1, 0, 7, 6, 6)
@@ -123,6 +128,39 @@
#define SYS_DC_CIGSW sys_insn(1, 0, 7, 14, 4)
#define SYS_DC_CIGDSW sys_insn(1, 0, 7, 14, 6)
+#define SYS_IC_IALLUIS sys_insn(1, 0, 7, 1, 0)
+#define SYS_IC_IALLU sys_insn(1, 0, 7, 5, 0)
+#define SYS_IC_IVAU sys_insn(1, 3, 7, 5, 1)
+
+#define SYS_DC_IVAC sys_insn(1, 0, 7, 6, 1)
+#define SYS_DC_IGVAC sys_insn(1, 0, 7, 6, 3)
+#define SYS_DC_IGDVAC sys_insn(1, 0, 7, 6, 5)
+
+#define SYS_DC_CVAC sys_insn(1, 3, 7, 10, 1)
+#define SYS_DC_CGVAC sys_insn(1, 3, 7, 10, 3)
+#define SYS_DC_CGDVAC sys_insn(1, 3, 7, 10, 5)
+
+#define SYS_DC_CVAU sys_insn(1, 3, 7, 11, 1)
+
+#define SYS_DC_CVAP sys_insn(1, 3, 7, 12, 1)
+#define SYS_DC_CGVAP sys_insn(1, 3, 7, 12, 3)
+#define SYS_DC_CGDVAP sys_insn(1, 3, 7, 12, 5)
+
+#define SYS_DC_CVADP sys_insn(1, 3, 7, 13, 1)
+#define SYS_DC_CGVADP sys_insn(1, 3, 7, 13, 3)
+#define SYS_DC_CGDVADP sys_insn(1, 3, 7, 13, 5)
+
+#define SYS_DC_CIVAC sys_insn(1, 3, 7, 14, 1)
+#define SYS_DC_CIGVAC sys_insn(1, 3, 7, 14, 3)
+#define SYS_DC_CIGDVAC sys_insn(1, 3, 7, 14, 5)
+
+#define SYS_DC_ZVA sys_insn(1, 3, 7, 4, 1)
+#define SYS_DC_GVA sys_insn(1, 3, 7, 4, 3)
+#define SYS_DC_GZVA sys_insn(1, 3, 7, 4, 4)
+
+#define SYS_DC_CIVAPS sys_insn(1, 0, 7, 15, 1)
+#define SYS_DC_CIGDVAPS sys_insn(1, 0, 7, 15, 5)
+
/*
* Automatically generated definitions for system registers, the
* manual encodings below are in the process of being converted to
@@ -162,6 +200,84 @@
#define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0)
#define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0)
+#define SYS_BRBINF_EL1(n) sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 0))
+#define SYS_BRBINFINJ_EL1 sys_reg(2, 1, 9, 1, 0)
+#define SYS_BRBSRC_EL1(n) sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 1))
+#define SYS_BRBSRCINJ_EL1 sys_reg(2, 1, 9, 1, 1)
+#define SYS_BRBTGT_EL1(n) sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 2))
+#define SYS_BRBTGTINJ_EL1 sys_reg(2, 1, 9, 1, 2)
+#define SYS_BRBTS_EL1 sys_reg(2, 1, 9, 0, 2)
+
+#define SYS_BRBCR_EL1 sys_reg(2, 1, 9, 0, 0)
+#define SYS_BRBFCR_EL1 sys_reg(2, 1, 9, 0, 1)
+#define SYS_BRBIDR0_EL1 sys_reg(2, 1, 9, 2, 0)
+
+#define SYS_TRCITECR_EL1 sys_reg(3, 0, 1, 2, 3)
+#define SYS_TRCACATR(m) sys_reg(2, 1, 2, ((m & 7) << 1), (2 | (m >> 3)))
+#define SYS_TRCACVR(m) sys_reg(2, 1, 2, ((m & 7) << 1), (0 | (m >> 3)))
+#define SYS_TRCAUTHSTATUS sys_reg(2, 1, 7, 14, 6)
+#define SYS_TRCAUXCTLR sys_reg(2, 1, 0, 6, 0)
+#define SYS_TRCBBCTLR sys_reg(2, 1, 0, 15, 0)
+#define SYS_TRCCCCTLR sys_reg(2, 1, 0, 14, 0)
+#define SYS_TRCCIDCCTLR0 sys_reg(2, 1, 3, 0, 2)
+#define SYS_TRCCIDCCTLR1 sys_reg(2, 1, 3, 1, 2)
+#define SYS_TRCCIDCVR(m) sys_reg(2, 1, 3, ((m & 7) << 1), 0)
+#define SYS_TRCCLAIMCLR sys_reg(2, 1, 7, 9, 6)
+#define SYS_TRCCLAIMSET sys_reg(2, 1, 7, 8, 6)
+#define SYS_TRCCNTCTLR(m) sys_reg(2, 1, 0, (4 | (m & 3)), 5)
+#define SYS_TRCCNTRLDVR(m) sys_reg(2, 1, 0, (0 | (m & 3)), 5)
+#define SYS_TRCCNTVR(m) sys_reg(2, 1, 0, (8 | (m & 3)), 5)
+#define SYS_TRCCONFIGR sys_reg(2, 1, 0, 4, 0)
+#define SYS_TRCDEVARCH sys_reg(2, 1, 7, 15, 6)
+#define SYS_TRCDEVID sys_reg(2, 1, 7, 2, 7)
+#define SYS_TRCEVENTCTL0R sys_reg(2, 1, 0, 8, 0)
+#define SYS_TRCEVENTCTL1R sys_reg(2, 1, 0, 9, 0)
+#define SYS_TRCEXTINSELR(m) sys_reg(2, 1, 0, (8 | (m & 3)), 4)
+#define SYS_TRCIDR0 sys_reg(2, 1, 0, 8, 7)
+#define SYS_TRCIDR10 sys_reg(2, 1, 0, 2, 6)
+#define SYS_TRCIDR11 sys_reg(2, 1, 0, 3, 6)
+#define SYS_TRCIDR12 sys_reg(2, 1, 0, 4, 6)
+#define SYS_TRCIDR13 sys_reg(2, 1, 0, 5, 6)
+#define SYS_TRCIDR1 sys_reg(2, 1, 0, 9, 7)
+#define SYS_TRCIDR2 sys_reg(2, 1, 0, 10, 7)
+#define SYS_TRCIDR3 sys_reg(2, 1, 0, 11, 7)
+#define SYS_TRCIDR4 sys_reg(2, 1, 0, 12, 7)
+#define SYS_TRCIDR5 sys_reg(2, 1, 0, 13, 7)
+#define SYS_TRCIDR6 sys_reg(2, 1, 0, 14, 7)
+#define SYS_TRCIDR7 sys_reg(2, 1, 0, 15, 7)
+#define SYS_TRCIDR8 sys_reg(2, 1, 0, 0, 6)
+#define SYS_TRCIDR9 sys_reg(2, 1, 0, 1, 6)
+#define SYS_TRCIMSPEC(m) sys_reg(2, 1, 0, (m & 7), 7)
+#define SYS_TRCITEEDCR sys_reg(2, 1, 0, 2, 1)
+#define SYS_TRCOSLSR sys_reg(2, 1, 1, 1, 4)
+#define SYS_TRCPRGCTLR sys_reg(2, 1, 0, 1, 0)
+#define SYS_TRCQCTLR sys_reg(2, 1, 0, 1, 1)
+#define SYS_TRCRSCTLR(m) sys_reg(2, 1, 1, (m & 15), (0 | (m >> 4)))
+#define SYS_TRCRSR sys_reg(2, 1, 0, 10, 0)
+#define SYS_TRCSEQEVR(m) sys_reg(2, 1, 0, (m & 3), 4)
+#define SYS_TRCSEQRSTEVR sys_reg(2, 1, 0, 6, 4)
+#define SYS_TRCSEQSTR sys_reg(2, 1, 0, 7, 4)
+#define SYS_TRCSSCCR(m) sys_reg(2, 1, 1, (m & 7), 2)
+#define SYS_TRCSSCSR(m) sys_reg(2, 1, 1, (8 | (m & 7)), 2)
+#define SYS_TRCSSPCICR(m) sys_reg(2, 1, 1, (m & 7), 3)
+#define SYS_TRCSTALLCTLR sys_reg(2, 1, 0, 11, 0)
+#define SYS_TRCSTATR sys_reg(2, 1, 0, 3, 0)
+#define SYS_TRCSYNCPR sys_reg(2, 1, 0, 13, 0)
+#define SYS_TRCTRACEIDR sys_reg(2, 1, 0, 0, 1)
+#define SYS_TRCTSCTLR sys_reg(2, 1, 0, 12, 0)
+#define SYS_TRCVICTLR sys_reg(2, 1, 0, 0, 2)
+#define SYS_TRCVIIECTLR sys_reg(2, 1, 0, 1, 2)
+#define SYS_TRCVIPCSSCTLR sys_reg(2, 1, 0, 3, 2)
+#define SYS_TRCVISSCTLR sys_reg(2, 1, 0, 2, 2)
+#define SYS_TRCVMIDCCTLR0 sys_reg(2, 1, 3, 2, 2)
+#define SYS_TRCVMIDCCTLR1 sys_reg(2, 1, 3, 3, 2)
+#define SYS_TRCVMIDCVR(m) sys_reg(2, 1, 3, ((m & 7) << 1), 1)
+
+/* ETM */
+#define SYS_TRCOSLAR sys_reg(2, 1, 1, 0, 4)
+
+#define SYS_BRBCR_EL2 sys_reg(2, 4, 9, 0, 0)
+
#define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
#define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
#define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
@@ -170,8 +286,6 @@
#define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5)
#define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6)
-#define SYS_TRFCR_EL1 sys_reg(3, 0, 1, 2, 1)
-
#define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2)
#define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0)
@@ -202,15 +316,38 @@
#define SYS_ERXCTLR_EL1 sys_reg(3, 0, 5, 4, 1)
#define SYS_ERXSTATUS_EL1 sys_reg(3, 0, 5, 4, 2)
#define SYS_ERXADDR_EL1 sys_reg(3, 0, 5, 4, 3)
+#define SYS_ERXPFGF_EL1 sys_reg(3, 0, 5, 4, 4)
+#define SYS_ERXPFGCTL_EL1 sys_reg(3, 0, 5, 4, 5)
+#define SYS_ERXPFGCDN_EL1 sys_reg(3, 0, 5, 4, 6)
#define SYS_ERXMISC0_EL1 sys_reg(3, 0, 5, 5, 0)
#define SYS_ERXMISC1_EL1 sys_reg(3, 0, 5, 5, 1)
+#define SYS_ERXMISC2_EL1 sys_reg(3, 0, 5, 5, 2)
+#define SYS_ERXMISC3_EL1 sys_reg(3, 0, 5, 5, 3)
#define SYS_TFSR_EL1 sys_reg(3, 0, 5, 6, 0)
#define SYS_TFSRE0_EL1 sys_reg(3, 0, 5, 6, 1)
#define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0)
#define SYS_PAR_EL1_F BIT(0)
+/* When PAR_EL1.F == 1 */
#define SYS_PAR_EL1_FST GENMASK(6, 1)
+#define SYS_PAR_EL1_PTW BIT(8)
+#define SYS_PAR_EL1_S BIT(9)
+#define SYS_PAR_EL1_AssuredOnly BIT(12)
+#define SYS_PAR_EL1_TopLevel BIT(13)
+#define SYS_PAR_EL1_Overlay BIT(14)
+#define SYS_PAR_EL1_DirtyBit BIT(15)
+#define SYS_PAR_EL1_F1_IMPDEF GENMASK_ULL(63, 48)
+#define SYS_PAR_EL1_F1_RES0 (BIT(7) | BIT(10) | GENMASK_ULL(47, 16))
+#define SYS_PAR_EL1_RES1 BIT(11)
+/* When PAR_EL1.F == 0 */
+#define SYS_PAR_EL1_SH GENMASK_ULL(8, 7)
+#define SYS_PAR_EL1_NS BIT(9)
+#define SYS_PAR_EL1_F0_IMPDEF BIT(10)
+#define SYS_PAR_EL1_NSE BIT(11)
+#define SYS_PAR_EL1_PA GENMASK_ULL(51, 12)
+#define SYS_PAR_EL1_ATTR GENMASK_ULL(63, 56)
+#define SYS_PAR_EL1_F0_RES0 (GENMASK_ULL(6, 1) | GENMASK_ULL(55, 52))
/*** Statistical Profiling Extension ***/
#define PMSEVFR_EL1_RES0_IMP \
@@ -274,6 +411,8 @@
#define SYS_ICC_IGRPEN0_EL1 sys_reg(3, 0, 12, 12, 6)
#define SYS_ICC_IGRPEN1_EL1 sys_reg(3, 0, 12, 12, 7)
+#define SYS_ACCDATA_EL1 sys_reg(3, 0, 13, 0, 5)
+
#define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0)
#define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7)
@@ -286,7 +425,6 @@
#define SYS_PMCNTENCLR_EL0 sys_reg(3, 3, 9, 12, 2)
#define SYS_PMOVSCLR_EL0 sys_reg(3, 3, 9, 12, 3)
#define SYS_PMSWINC_EL0 sys_reg(3, 3, 9, 12, 4)
-#define SYS_PMSELR_EL0 sys_reg(3, 3, 9, 12, 5)
#define SYS_PMCEID0_EL0 sys_reg(3, 3, 9, 12, 6)
#define SYS_PMCEID1_EL0 sys_reg(3, 3, 9, 12, 7)
#define SYS_PMCCNTR_EL0 sys_reg(3, 3, 9, 13, 0)
@@ -340,6 +478,7 @@
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CNTPCT_EL0 sys_reg(3, 3, 14, 0, 1)
+#define SYS_CNTVCT_EL0 sys_reg(3, 3, 14, 0, 2)
#define SYS_CNTPCTSS_EL0 sys_reg(3, 3, 14, 0, 5)
#define SYS_CNTVCTSS_EL0 sys_reg(3, 3, 14, 0, 6)
@@ -347,28 +486,42 @@
#define SYS_CNTP_CTL_EL0 sys_reg(3, 3, 14, 2, 1)
#define SYS_CNTP_CVAL_EL0 sys_reg(3, 3, 14, 2, 2)
+#define SYS_CNTV_TVAL_EL0 sys_reg(3, 3, 14, 3, 0)
#define SYS_CNTV_CTL_EL0 sys_reg(3, 3, 14, 3, 1)
#define SYS_CNTV_CVAL_EL0 sys_reg(3, 3, 14, 3, 2)
#define SYS_AARCH32_CNTP_TVAL sys_reg(0, 0, 14, 2, 0)
#define SYS_AARCH32_CNTP_CTL sys_reg(0, 0, 14, 2, 1)
#define SYS_AARCH32_CNTPCT sys_reg(0, 0, 0, 14, 0)
+#define SYS_AARCH32_CNTVCT sys_reg(0, 1, 0, 14, 0)
#define SYS_AARCH32_CNTP_CVAL sys_reg(0, 2, 0, 14, 0)
#define SYS_AARCH32_CNTPCTSS sys_reg(0, 8, 0, 14, 0)
+#define SYS_AARCH32_CNTVCTSS sys_reg(0, 9, 0, 14, 0)
#define __PMEV_op2(n) ((n) & 0x7)
#define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3))
+#define SYS_PMEVCNTSVRn_EL1(n) sys_reg(2, 0, 14, __CNTR_CRm(n), __PMEV_op2(n))
#define SYS_PMEVCNTRn_EL0(n) sys_reg(3, 3, 14, __CNTR_CRm(n), __PMEV_op2(n))
#define __TYPER_CRm(n) (0xc | (((n) >> 3) & 0x3))
#define SYS_PMEVTYPERn_EL0(n) sys_reg(3, 3, 14, __TYPER_CRm(n), __PMEV_op2(n))
#define SYS_PMCCFILTR_EL0 sys_reg(3, 3, 14, 15, 7)
+#define SYS_SPMCGCRn_EL1(n) sys_reg(2, 0, 9, 13, ((n) & 1))
+
+#define __SPMEV_op2(n) ((n) & 0x7)
+#define __SPMEV_crm(p, n) ((((p) & 7) << 1) | (((n) >> 3) & 1))
+#define SYS_SPMEVCNTRn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b000, n), __SPMEV_op2(n))
+#define SYS_SPMEVFILT2Rn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b011, n), __SPMEV_op2(n))
+#define SYS_SPMEVFILTRn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b010, n), __SPMEV_op2(n))
+#define SYS_SPMEVTYPERn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b001, n), __SPMEV_op2(n))
+
#define SYS_VPIDR_EL2 sys_reg(3, 4, 0, 0, 0)
#define SYS_VMPIDR_EL2 sys_reg(3, 4, 0, 0, 5)
#define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0)
#define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1)
+#define SYS_SCTLR2_EL2 sys_reg(3, 4, 1, 0, 3)
#define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0)
#define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1)
#define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2)
@@ -381,13 +534,14 @@
#define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0)
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
-#define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1)
-#define SYS_HDFGRTR_EL2 sys_reg(3, 4, 3, 1, 4)
-#define SYS_HDFGWTR_EL2 sys_reg(3, 4, 3, 1, 5)
-#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
+#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
#define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0)
+#define SYS_SPSR_irq sys_reg(3, 4, 4, 3, 0)
+#define SYS_SPSR_abt sys_reg(3, 4, 4, 3, 1)
+#define SYS_SPSR_und sys_reg(3, 4, 4, 3, 2)
+#define SYS_SPSR_fiq sys_reg(3, 4, 4, 3, 3)
#define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1)
#define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0)
#define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1)
@@ -420,9 +574,6 @@
#define SYS_ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4)
#define SYS_ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5)
-#define SYS_ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0)
-#define SYS_ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1)
-#define SYS_ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2)
#define SYS_ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3)
#define SYS_ICH_ELRSR_EL2 sys_reg(3, 4, 12, 11, 5)
#define SYS_ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7)
@@ -449,24 +600,39 @@
#define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1)
#define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2)
+#define SYS_SCXTNUM_EL2 sys_reg(3, 4, 13, 0, 7)
+
+#define __AMEV_op2(m) (m & 0x7)
+#define __AMEV_CRm(n, m) (n | ((m & 0x8) >> 3))
+#define __SYS__AMEVCNTVOFF0n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0x8, m), __AMEV_op2(m))
+#define SYS_AMEVCNTVOFF0n_EL2(m) __SYS__AMEVCNTVOFF0n_EL2(m)
+#define __SYS__AMEVCNTVOFF1n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0xA, m), __AMEV_op2(m))
+#define SYS_AMEVCNTVOFF1n_EL2(m) __SYS__AMEVCNTVOFF1n_EL2(m)
#define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3)
#define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0)
+#define SYS_CNTHP_TVAL_EL2 sys_reg(3, 4, 14, 2, 0)
+#define SYS_CNTHP_CTL_EL2 sys_reg(3, 4, 14, 2, 1)
+#define SYS_CNTHP_CVAL_EL2 sys_reg(3, 4, 14, 2, 2)
+#define SYS_CNTHV_TVAL_EL2 sys_reg(3, 4, 14, 3, 0)
+#define SYS_CNTHV_CTL_EL2 sys_reg(3, 4, 14, 3, 1)
+#define SYS_CNTHV_CVAL_EL2 sys_reg(3, 4, 14, 3, 2)
/* VHE encodings for architectural EL0/1 system registers */
-#define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0)
+#define SYS_BRBCR_EL12 sys_reg(2, 5, 9, 0, 0)
#define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0)
#define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1)
-#define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2)
#define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0)
#define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1)
#define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0)
#define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1)
#define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0)
#define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0)
+#define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0)
#define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0)
#define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0)
#define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0)
+#define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7)
#define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0)
#define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0)
#define SYS_CNTP_CTL_EL02 sys_reg(3, 5, 14, 2, 1)
@@ -477,6 +643,183 @@
#define SYS_SP_EL2 sys_reg(3, 6, 4, 1, 0)
+/* AT instructions */
+#define AT_Op0 1
+#define AT_CRn 7
+
+#define OP_AT_S1E1R sys_insn(AT_Op0, 0, AT_CRn, 8, 0)
+#define OP_AT_S1E1W sys_insn(AT_Op0, 0, AT_CRn, 8, 1)
+#define OP_AT_S1E0R sys_insn(AT_Op0, 0, AT_CRn, 8, 2)
+#define OP_AT_S1E0W sys_insn(AT_Op0, 0, AT_CRn, 8, 3)
+#define OP_AT_S1E1RP sys_insn(AT_Op0, 0, AT_CRn, 9, 0)
+#define OP_AT_S1E1WP sys_insn(AT_Op0, 0, AT_CRn, 9, 1)
+#define OP_AT_S1E1A sys_insn(AT_Op0, 0, AT_CRn, 9, 2)
+#define OP_AT_S1E2R sys_insn(AT_Op0, 4, AT_CRn, 8, 0)
+#define OP_AT_S1E2W sys_insn(AT_Op0, 4, AT_CRn, 8, 1)
+#define OP_AT_S12E1R sys_insn(AT_Op0, 4, AT_CRn, 8, 4)
+#define OP_AT_S12E1W sys_insn(AT_Op0, 4, AT_CRn, 8, 5)
+#define OP_AT_S12E0R sys_insn(AT_Op0, 4, AT_CRn, 8, 6)
+#define OP_AT_S12E0W sys_insn(AT_Op0, 4, AT_CRn, 8, 7)
+#define OP_AT_S1E2A sys_insn(AT_Op0, 4, AT_CRn, 9, 2)
+
+/* TLBI instructions */
+#define TLBI_Op0 1
+
+#define TLBI_Op1_EL1 0 /* Accessible from EL1 or higher */
+#define TLBI_Op1_EL2 4 /* Accessible from EL2 or higher */
+
+#define TLBI_CRn_XS 8 /* Extra Slow (the common one) */
+#define TLBI_CRn_nXS 9 /* not Extra Slow (which nobody uses) */
+
+#define TLBI_CRm_IPAIS 0 /* S2 Inner-Shareable */
+#define TLBI_CRm_nROS 1 /* non-Range, Outer-Shareable */
+#define TLBI_CRm_RIS 2 /* Range, Inner-Shareable */
+#define TLBI_CRm_nRIS 3 /* non-Range, Inner-Shareable */
+#define TLBI_CRm_IPAONS 4 /* S2 Outer and Non-Shareable */
+#define TLBI_CRm_ROS 5 /* Range, Outer-Shareable */
+#define TLBI_CRm_RNS 6 /* Range, Non-Shareable */
+#define TLBI_CRm_nRNS 7 /* non-Range, Non-Shareable */
+
+#define OP_TLBI_VMALLE1OS sys_insn(1, 0, 8, 1, 0)
+#define OP_TLBI_VAE1OS sys_insn(1, 0, 8, 1, 1)
+#define OP_TLBI_ASIDE1OS sys_insn(1, 0, 8, 1, 2)
+#define OP_TLBI_VAAE1OS sys_insn(1, 0, 8, 1, 3)
+#define OP_TLBI_VALE1OS sys_insn(1, 0, 8, 1, 5)
+#define OP_TLBI_VAALE1OS sys_insn(1, 0, 8, 1, 7)
+#define OP_TLBI_RVAE1IS sys_insn(1, 0, 8, 2, 1)
+#define OP_TLBI_RVAAE1IS sys_insn(1, 0, 8, 2, 3)
+#define OP_TLBI_RVALE1IS sys_insn(1, 0, 8, 2, 5)
+#define OP_TLBI_RVAALE1IS sys_insn(1, 0, 8, 2, 7)
+#define OP_TLBI_VMALLE1IS sys_insn(1, 0, 8, 3, 0)
+#define OP_TLBI_VAE1IS sys_insn(1, 0, 8, 3, 1)
+#define OP_TLBI_ASIDE1IS sys_insn(1, 0, 8, 3, 2)
+#define OP_TLBI_VAAE1IS sys_insn(1, 0, 8, 3, 3)
+#define OP_TLBI_VALE1IS sys_insn(1, 0, 8, 3, 5)
+#define OP_TLBI_VAALE1IS sys_insn(1, 0, 8, 3, 7)
+#define OP_TLBI_RVAE1OS sys_insn(1, 0, 8, 5, 1)
+#define OP_TLBI_RVAAE1OS sys_insn(1, 0, 8, 5, 3)
+#define OP_TLBI_RVALE1OS sys_insn(1, 0, 8, 5, 5)
+#define OP_TLBI_RVAALE1OS sys_insn(1, 0, 8, 5, 7)
+#define OP_TLBI_RVAE1 sys_insn(1, 0, 8, 6, 1)
+#define OP_TLBI_RVAAE1 sys_insn(1, 0, 8, 6, 3)
+#define OP_TLBI_RVALE1 sys_insn(1, 0, 8, 6, 5)
+#define OP_TLBI_RVAALE1 sys_insn(1, 0, 8, 6, 7)
+#define OP_TLBI_VMALLE1 sys_insn(1, 0, 8, 7, 0)
+#define OP_TLBI_VAE1 sys_insn(1, 0, 8, 7, 1)
+#define OP_TLBI_ASIDE1 sys_insn(1, 0, 8, 7, 2)
+#define OP_TLBI_VAAE1 sys_insn(1, 0, 8, 7, 3)
+#define OP_TLBI_VALE1 sys_insn(1, 0, 8, 7, 5)
+#define OP_TLBI_VAALE1 sys_insn(1, 0, 8, 7, 7)
+#define OP_TLBI_VMALLE1OSNXS sys_insn(1, 0, 9, 1, 0)
+#define OP_TLBI_VAE1OSNXS sys_insn(1, 0, 9, 1, 1)
+#define OP_TLBI_ASIDE1OSNXS sys_insn(1, 0, 9, 1, 2)
+#define OP_TLBI_VAAE1OSNXS sys_insn(1, 0, 9, 1, 3)
+#define OP_TLBI_VALE1OSNXS sys_insn(1, 0, 9, 1, 5)
+#define OP_TLBI_VAALE1OSNXS sys_insn(1, 0, 9, 1, 7)
+#define OP_TLBI_RVAE1ISNXS sys_insn(1, 0, 9, 2, 1)
+#define OP_TLBI_RVAAE1ISNXS sys_insn(1, 0, 9, 2, 3)
+#define OP_TLBI_RVALE1ISNXS sys_insn(1, 0, 9, 2, 5)
+#define OP_TLBI_RVAALE1ISNXS sys_insn(1, 0, 9, 2, 7)
+#define OP_TLBI_VMALLE1ISNXS sys_insn(1, 0, 9, 3, 0)
+#define OP_TLBI_VAE1ISNXS sys_insn(1, 0, 9, 3, 1)
+#define OP_TLBI_ASIDE1ISNXS sys_insn(1, 0, 9, 3, 2)
+#define OP_TLBI_VAAE1ISNXS sys_insn(1, 0, 9, 3, 3)
+#define OP_TLBI_VALE1ISNXS sys_insn(1, 0, 9, 3, 5)
+#define OP_TLBI_VAALE1ISNXS sys_insn(1, 0, 9, 3, 7)
+#define OP_TLBI_RVAE1OSNXS sys_insn(1, 0, 9, 5, 1)
+#define OP_TLBI_RVAAE1OSNXS sys_insn(1, 0, 9, 5, 3)
+#define OP_TLBI_RVALE1OSNXS sys_insn(1, 0, 9, 5, 5)
+#define OP_TLBI_RVAALE1OSNXS sys_insn(1, 0, 9, 5, 7)
+#define OP_TLBI_RVAE1NXS sys_insn(1, 0, 9, 6, 1)
+#define OP_TLBI_RVAAE1NXS sys_insn(1, 0, 9, 6, 3)
+#define OP_TLBI_RVALE1NXS sys_insn(1, 0, 9, 6, 5)
+#define OP_TLBI_RVAALE1NXS sys_insn(1, 0, 9, 6, 7)
+#define OP_TLBI_VMALLE1NXS sys_insn(1, 0, 9, 7, 0)
+#define OP_TLBI_VAE1NXS sys_insn(1, 0, 9, 7, 1)
+#define OP_TLBI_ASIDE1NXS sys_insn(1, 0, 9, 7, 2)
+#define OP_TLBI_VAAE1NXS sys_insn(1, 0, 9, 7, 3)
+#define OP_TLBI_VALE1NXS sys_insn(1, 0, 9, 7, 5)
+#define OP_TLBI_VAALE1NXS sys_insn(1, 0, 9, 7, 7)
+#define OP_TLBI_IPAS2E1IS sys_insn(1, 4, 8, 0, 1)
+#define OP_TLBI_RIPAS2E1IS sys_insn(1, 4, 8, 0, 2)
+#define OP_TLBI_IPAS2LE1IS sys_insn(1, 4, 8, 0, 5)
+#define OP_TLBI_RIPAS2LE1IS sys_insn(1, 4, 8, 0, 6)
+#define OP_TLBI_ALLE2OS sys_insn(1, 4, 8, 1, 0)
+#define OP_TLBI_VAE2OS sys_insn(1, 4, 8, 1, 1)
+#define OP_TLBI_ALLE1OS sys_insn(1, 4, 8, 1, 4)
+#define OP_TLBI_VALE2OS sys_insn(1, 4, 8, 1, 5)
+#define OP_TLBI_VMALLS12E1OS sys_insn(1, 4, 8, 1, 6)
+#define OP_TLBI_RVAE2IS sys_insn(1, 4, 8, 2, 1)
+#define OP_TLBI_RVALE2IS sys_insn(1, 4, 8, 2, 5)
+#define OP_TLBI_ALLE2IS sys_insn(1, 4, 8, 3, 0)
+#define OP_TLBI_VAE2IS sys_insn(1, 4, 8, 3, 1)
+#define OP_TLBI_ALLE1IS sys_insn(1, 4, 8, 3, 4)
+#define OP_TLBI_VALE2IS sys_insn(1, 4, 8, 3, 5)
+#define OP_TLBI_VMALLS12E1IS sys_insn(1, 4, 8, 3, 6)
+#define OP_TLBI_IPAS2E1OS sys_insn(1, 4, 8, 4, 0)
+#define OP_TLBI_IPAS2E1 sys_insn(1, 4, 8, 4, 1)
+#define OP_TLBI_RIPAS2E1 sys_insn(1, 4, 8, 4, 2)
+#define OP_TLBI_RIPAS2E1OS sys_insn(1, 4, 8, 4, 3)
+#define OP_TLBI_IPAS2LE1OS sys_insn(1, 4, 8, 4, 4)
+#define OP_TLBI_IPAS2LE1 sys_insn(1, 4, 8, 4, 5)
+#define OP_TLBI_RIPAS2LE1 sys_insn(1, 4, 8, 4, 6)
+#define OP_TLBI_RIPAS2LE1OS sys_insn(1, 4, 8, 4, 7)
+#define OP_TLBI_RVAE2OS sys_insn(1, 4, 8, 5, 1)
+#define OP_TLBI_RVALE2OS sys_insn(1, 4, 8, 5, 5)
+#define OP_TLBI_RVAE2 sys_insn(1, 4, 8, 6, 1)
+#define OP_TLBI_RVALE2 sys_insn(1, 4, 8, 6, 5)
+#define OP_TLBI_ALLE2 sys_insn(1, 4, 8, 7, 0)
+#define OP_TLBI_VAE2 sys_insn(1, 4, 8, 7, 1)
+#define OP_TLBI_ALLE1 sys_insn(1, 4, 8, 7, 4)
+#define OP_TLBI_VALE2 sys_insn(1, 4, 8, 7, 5)
+#define OP_TLBI_VMALLS12E1 sys_insn(1, 4, 8, 7, 6)
+#define OP_TLBI_IPAS2E1ISNXS sys_insn(1, 4, 9, 0, 1)
+#define OP_TLBI_RIPAS2E1ISNXS sys_insn(1, 4, 9, 0, 2)
+#define OP_TLBI_IPAS2LE1ISNXS sys_insn(1, 4, 9, 0, 5)
+#define OP_TLBI_RIPAS2LE1ISNXS sys_insn(1, 4, 9, 0, 6)
+#define OP_TLBI_ALLE2OSNXS sys_insn(1, 4, 9, 1, 0)
+#define OP_TLBI_VAE2OSNXS sys_insn(1, 4, 9, 1, 1)
+#define OP_TLBI_ALLE1OSNXS sys_insn(1, 4, 9, 1, 4)
+#define OP_TLBI_VALE2OSNXS sys_insn(1, 4, 9, 1, 5)
+#define OP_TLBI_VMALLS12E1OSNXS sys_insn(1, 4, 9, 1, 6)
+#define OP_TLBI_RVAE2ISNXS sys_insn(1, 4, 9, 2, 1)
+#define OP_TLBI_RVALE2ISNXS sys_insn(1, 4, 9, 2, 5)
+#define OP_TLBI_ALLE2ISNXS sys_insn(1, 4, 9, 3, 0)
+#define OP_TLBI_VAE2ISNXS sys_insn(1, 4, 9, 3, 1)
+#define OP_TLBI_ALLE1ISNXS sys_insn(1, 4, 9, 3, 4)
+#define OP_TLBI_VALE2ISNXS sys_insn(1, 4, 9, 3, 5)
+#define OP_TLBI_VMALLS12E1ISNXS sys_insn(1, 4, 9, 3, 6)
+#define OP_TLBI_IPAS2E1OSNXS sys_insn(1, 4, 9, 4, 0)
+#define OP_TLBI_IPAS2E1NXS sys_insn(1, 4, 9, 4, 1)
+#define OP_TLBI_RIPAS2E1NXS sys_insn(1, 4, 9, 4, 2)
+#define OP_TLBI_RIPAS2E1OSNXS sys_insn(1, 4, 9, 4, 3)
+#define OP_TLBI_IPAS2LE1OSNXS sys_insn(1, 4, 9, 4, 4)
+#define OP_TLBI_IPAS2LE1NXS sys_insn(1, 4, 9, 4, 5)
+#define OP_TLBI_RIPAS2LE1NXS sys_insn(1, 4, 9, 4, 6)
+#define OP_TLBI_RIPAS2LE1OSNXS sys_insn(1, 4, 9, 4, 7)
+#define OP_TLBI_RVAE2OSNXS sys_insn(1, 4, 9, 5, 1)
+#define OP_TLBI_RVALE2OSNXS sys_insn(1, 4, 9, 5, 5)
+#define OP_TLBI_RVAE2NXS sys_insn(1, 4, 9, 6, 1)
+#define OP_TLBI_RVALE2NXS sys_insn(1, 4, 9, 6, 5)
+#define OP_TLBI_ALLE2NXS sys_insn(1, 4, 9, 7, 0)
+#define OP_TLBI_VAE2NXS sys_insn(1, 4, 9, 7, 1)
+#define OP_TLBI_ALLE1NXS sys_insn(1, 4, 9, 7, 4)
+#define OP_TLBI_VALE2NXS sys_insn(1, 4, 9, 7, 5)
+#define OP_TLBI_VMALLS12E1NXS sys_insn(1, 4, 9, 7, 6)
+
+/* Misc instructions */
+#define OP_GCSPUSHX sys_insn(1, 0, 7, 7, 4)
+#define OP_GCSPOPCX sys_insn(1, 0, 7, 7, 5)
+#define OP_GCSPOPX sys_insn(1, 0, 7, 7, 6)
+#define OP_GCSPUSHM sys_insn(1, 3, 7, 7, 0)
+
+#define OP_BRB_IALL sys_insn(1, 1, 7, 2, 4)
+#define OP_BRB_INJ sys_insn(1, 1, 7, 2, 5)
+#define OP_CFP_RCTX sys_insn(1, 3, 7, 3, 4)
+#define OP_DVP_RCTX sys_insn(1, 3, 7, 3, 5)
+#define OP_COSP_RCTX sys_insn(1, 3, 7, 3, 6)
+#define OP_CPP_RCTX sys_insn(1, 3, 7, 3, 7)
+
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_ENTP2 (BIT(60))
#define SCTLR_ELx_DSSBS (BIT(44))
@@ -555,16 +898,14 @@
/* Position the attr at the correct index */
#define MAIR_ATTRIDX(attr, idx) ((attr) << ((idx) * 8))
-/* id_aa64pfr0 */
-#define ID_AA64PFR0_EL1_ELx_64BIT_ONLY 0x1
-#define ID_AA64PFR0_EL1_ELx_32BIT_64BIT 0x2
-
/* id_aa64mmfr0 */
#define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 0x0
+#define ID_AA64MMFR0_EL1_TGRAN4_LPA2 ID_AA64MMFR0_EL1_TGRAN4_52_BIT
#define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 0x7
#define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 0x0
#define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 0x7
#define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 0x1
+#define ID_AA64MMFR0_EL1_TGRAN16_LPA2 ID_AA64MMFR0_EL1_TGRAN16_52_BIT
#define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 0xf
#define ARM64_MIN_PARANGE_BITS 32
@@ -572,6 +913,7 @@
#define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_DEFAULT 0x0
#define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_NONE 0x1
#define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN 0x2
+#define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2 0x3
#define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX 0x7
#ifdef CONFIG_ARM64_PA_BITS_52
@@ -582,11 +924,13 @@
#if defined(CONFIG_ARM64_4K_PAGES)
#define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN4_SHIFT
+#define ID_AA64MMFR0_EL1_TGRAN_LPA2 ID_AA64MMFR0_EL1_TGRAN4_52_BIT
#define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN
#define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX
#define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT
#elif defined(CONFIG_ARM64_16K_PAGES)
#define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN16_SHIFT
+#define ID_AA64MMFR0_EL1_TGRAN_LPA2 ID_AA64MMFR0_EL1_TGRAN16_52_BIT
#define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN
#define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX
#define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT
@@ -610,6 +954,19 @@
#define SYS_GCR_EL1_RRND (BIT(16))
#define SYS_GCR_EL1_EXCL_MASK 0xffffUL
+#ifdef CONFIG_KASAN_HW_TAGS
+/*
+ * KASAN always uses a whole byte for its tags. With CONFIG_KASAN_HW_TAGS it
+ * only uses tags in the range 0xF0-0xFF, which we map to MTE tags 0x0-0xF.
+ */
+#define __MTE_TAG_MIN (KASAN_TAG_MIN & 0xf)
+#define __MTE_TAG_MAX (KASAN_TAG_MAX & 0xf)
+#define __MTE_TAG_INCL GENMASK(__MTE_TAG_MAX, __MTE_TAG_MIN)
+#define KERNEL_GCR_EL1_EXCL (SYS_GCR_EL1_EXCL_MASK & ~__MTE_TAG_INCL)
+#else
+#define KERNEL_GCR_EL1_EXCL SYS_GCR_EL1_EXCL_MASK
+#endif
+
#define KERNEL_GCR_EL1 (SYS_GCR_EL1_RRND | KERNEL_GCR_EL1_EXCL)
/* RGSR_EL1 Definitions */
@@ -626,20 +983,7 @@
/* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:0 */
#define SYS_MPIDR_SAFE_VAL (BIT(31))
-#define TRFCR_ELx_TS_SHIFT 5
-#define TRFCR_ELx_TS_MASK ((0x3UL) << TRFCR_ELx_TS_SHIFT)
-#define TRFCR_ELx_TS_VIRTUAL ((0x1UL) << TRFCR_ELx_TS_SHIFT)
-#define TRFCR_ELx_TS_GUEST_PHYSICAL ((0x2UL) << TRFCR_ELx_TS_SHIFT)
-#define TRFCR_ELx_TS_PHYSICAL ((0x3UL) << TRFCR_ELx_TS_SHIFT)
-#define TRFCR_EL2_CX BIT(3)
-#define TRFCR_ELx_ExTRE BIT(1)
-#define TRFCR_ELx_E0TRE BIT(0)
-
/* GIC Hypervisor interface registers */
-/* ICH_MISR_EL2 bit definitions */
-#define ICH_MISR_EOI (1 << 0)
-#define ICH_MISR_U (1 << 1)
-
/* ICH_LR*_EL2 bit definitions */
#define ICH_LR_VIRTUAL_ID_MASK ((1ULL << 32) - 1)
@@ -654,17 +998,6 @@
#define ICH_LR_PRIORITY_SHIFT 48
#define ICH_LR_PRIORITY_MASK (0xffULL << ICH_LR_PRIORITY_SHIFT)
-/* ICH_HCR_EL2 bit definitions */
-#define ICH_HCR_EN (1 << 0)
-#define ICH_HCR_UIE (1 << 1)
-#define ICH_HCR_NPIE (1 << 3)
-#define ICH_HCR_TC (1 << 10)
-#define ICH_HCR_TALL0 (1 << 11)
-#define ICH_HCR_TALL1 (1 << 12)
-#define ICH_HCR_TDIR (1 << 14)
-#define ICH_HCR_EOIcount_SHIFT 27
-#define ICH_HCR_EOIcount_MASK (0x1f << ICH_HCR_EOIcount_SHIFT)
-
/* ICH_VMCR_EL2 bit definitions */
#define ICH_VMCR_ACK_CTL_SHIFT 2
#define ICH_VMCR_ACK_CTL_MASK (1 << ICH_VMCR_ACK_CTL_SHIFT)
@@ -685,41 +1018,67 @@
#define ICH_VMCR_ENG1_SHIFT 1
#define ICH_VMCR_ENG1_MASK (1 << ICH_VMCR_ENG1_SHIFT)
-/* ICH_VTR_EL2 bit definitions */
-#define ICH_VTR_PRI_BITS_SHIFT 29
-#define ICH_VTR_PRI_BITS_MASK (7 << ICH_VTR_PRI_BITS_SHIFT)
-#define ICH_VTR_ID_BITS_SHIFT 23
-#define ICH_VTR_ID_BITS_MASK (7 << ICH_VTR_ID_BITS_SHIFT)
-#define ICH_VTR_SEIS_SHIFT 22
-#define ICH_VTR_SEIS_MASK (1 << ICH_VTR_SEIS_SHIFT)
-#define ICH_VTR_A3V_SHIFT 21
-#define ICH_VTR_A3V_MASK (1 << ICH_VTR_A3V_SHIFT)
-#define ICH_VTR_TDS_SHIFT 19
-#define ICH_VTR_TDS_MASK (1 << ICH_VTR_TDS_SHIFT)
-
/*
* Permission Indirection Extension (PIE) permission encodings.
* Encodings with the _O suffix, have overlays applied (Permission Overlay Extension).
*/
-#define PIE_NONE_O 0x0
-#define PIE_R_O 0x1
-#define PIE_X_O 0x2
-#define PIE_RX_O 0x3
-#define PIE_RW_O 0x5
-#define PIE_RWnX_O 0x6
-#define PIE_RWX_O 0x7
-#define PIE_R 0x8
-#define PIE_GCS 0x9
-#define PIE_RX 0xa
-#define PIE_RW 0xc
-#define PIE_RWX 0xe
-
-#define PIRx_ELx_PERM(idx, perm) ((perm) << ((idx) * 4))
+#define PIE_NONE_O UL(0x0)
+#define PIE_R_O UL(0x1)
+#define PIE_X_O UL(0x2)
+#define PIE_RX_O UL(0x3)
+#define PIE_RW_O UL(0x5)
+#define PIE_RWnX_O UL(0x6)
+#define PIE_RWX_O UL(0x7)
+#define PIE_R UL(0x8)
+#define PIE_GCS UL(0x9)
+#define PIE_RX UL(0xa)
+#define PIE_RW UL(0xc)
+#define PIE_RWX UL(0xe)
+#define PIE_MASK UL(0xf)
+
+#define PIRx_ELx_BITS_PER_IDX 4
+#define PIRx_ELx_PERM_SHIFT(idx) ((idx) * PIRx_ELx_BITS_PER_IDX)
+#define PIRx_ELx_PERM_PREP(idx, perm) (((perm) & PIE_MASK) << PIRx_ELx_PERM_SHIFT(idx))
-#define ARM64_FEATURE_FIELD_BITS 4
+/*
+ * Permission Overlay Extension (POE) permission encodings.
+ */
+#define POE_NONE UL(0x0)
+#define POE_R UL(0x1)
+#define POE_X UL(0x2)
+#define POE_RX UL(0x3)
+#define POE_W UL(0x4)
+#define POE_RW UL(0x5)
+#define POE_WX UL(0x6)
+#define POE_RWX UL(0x7)
+#define POE_MASK UL(0xf)
+
+#define POR_ELx_BITS_PER_IDX 4
+#define POR_ELx_PERM_SHIFT(idx) ((idx) * POR_ELx_BITS_PER_IDX)
+#define POR_ELx_PERM_GET(idx, reg) (((reg) >> POR_ELx_PERM_SHIFT(idx)) & POE_MASK)
+#define POR_ELx_PERM_PREP(idx, perm) (((perm) & POE_MASK) << POR_ELx_PERM_SHIFT(idx))
-/* Defined for compatibility only, do not add new users. */
-#define ARM64_FEATURE_MASK(x) (x##_MASK)
+/*
+ * Definitions for Guarded Control Stack
+ */
+
+#define GCS_CAP_ADDR_MASK GENMASK(63, 12)
+#define GCS_CAP_ADDR_SHIFT 12
+#define GCS_CAP_ADDR_WIDTH 52
+#define GCS_CAP_ADDR(x) FIELD_GET(GCS_CAP_ADDR_MASK, x)
+
+#define GCS_CAP_TOKEN_MASK GENMASK(11, 0)
+#define GCS_CAP_TOKEN_SHIFT 0
+#define GCS_CAP_TOKEN_WIDTH 12
+#define GCS_CAP_TOKEN(x) FIELD_GET(GCS_CAP_TOKEN_MASK, x)
+
+#define GCS_CAP_VALID_TOKEN 0x1
+#define GCS_CAP_IN_PROGRESS_TOKEN 0x5
+
+#define GCS_CAP(x) ((((unsigned long)x) & GCS_CAP_ADDR_MASK) | \
+ GCS_CAP_VALID_TOKEN)
+
+#define ARM64_FEATURE_FIELD_BITS 4
#ifdef __ASSEMBLY__
@@ -789,15 +1148,21 @@
/*
* For registers without architectural names, or simply unsupported by
* GAS.
+ *
+ * __check_r forces warnings to be generated by the compiler when
+ * evaluating r which wouldn't normally happen due to being passed to
+ * the assembler via __stringify(r).
*/
#define read_sysreg_s(r) ({ \
u64 __val; \
+ u32 __maybe_unused __check_r = (u32)(r); \
asm volatile(__mrs_s("%0", r) : "=r" (__val)); \
__val; \
})
#define write_sysreg_s(v, r) do { \
u64 __val = (u64)(v); \
+ u32 __maybe_unused __check_r = (u32)(r); \
asm volatile(__msr_s(r, "%x0") : : "rZ" (__val)); \
} while (0)
@@ -827,6 +1192,8 @@
par; \
})
+#define SYS_FIELD_VALUE(reg, field, val) reg##_##field##_##val
+
#define SYS_FIELD_GET(reg, field, val) \
FIELD_GET(reg##_##field##_MASK, val)
@@ -834,7 +1201,8 @@
FIELD_PREP(reg##_##field##_MASK, val)
#define SYS_FIELD_PREP_ENUM(reg, field, val) \
- FIELD_PREP(reg##_##field##_MASK, reg##_##field##_##val)
+ FIELD_PREP(reg##_##field##_MASK, \
+ SYS_FIELD_VALUE(reg, field, val))
#endif
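A brief sketch of how the accessors and the new PAR_EL1 field masks above might be combined (read_virtual_counter() and par_to_pa() are illustrative names; FIELD_GET comes from linux/bitfield.h, and read_sysreg_s() only compiles in an arm64 build environment):

static inline u64 read_virtual_counter(void)
{
	/* CNTVCT_EL0, encoded as sys_reg(3, 3, 14, 0, 2) above */
	return read_sysreg_s(SYS_CNTVCT_EL0);
}

static inline u64 par_to_pa(u64 par)
{
	/* PA bits are only valid when the translation succeeded (F == 0) */
	if (par & SYS_PAR_EL1_F)
		return 0;

	/* SYS_PAR_EL1_PA covers bits [51:12]; shift back to a byte address */
	return FIELD_GET(SYS_PAR_EL1_PA, par) << 12;
}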
diff --git a/tools/arch/arm64/include/uapi/asm/kvm.h b/tools/arch/arm64/include/uapi/asm/kvm.h
index 89d2fc872d9f..ed5f3892674c 100644
--- a/tools/arch/arm64/include/uapi/asm/kvm.h
+++ b/tools/arch/arm64/include/uapi/asm/kvm.h
@@ -37,17 +37,12 @@
#include <asm/ptrace.h>
#include <asm/sve_context.h>
-#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_IRQ_LINE
-#define __KVM_HAVE_READONLY_MEM
#define __KVM_HAVE_VCPU_EVENTS
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
#define KVM_DIRTY_LOG_PAGE_OFFSET 64
-#define KVM_REG_SIZE(id) \
- (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
-
struct kvm_regs {
struct user_pt_regs regs; /* sp = sp_el0 */
@@ -76,11 +71,11 @@ struct kvm_regs {
/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
#define KVM_ARM_DEVICE_TYPE_SHIFT 0
-#define KVM_ARM_DEVICE_TYPE_MASK GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \
- KVM_ARM_DEVICE_TYPE_SHIFT)
+#define KVM_ARM_DEVICE_TYPE_MASK __GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \
+ KVM_ARM_DEVICE_TYPE_SHIFT)
#define KVM_ARM_DEVICE_ID_SHIFT 16
-#define KVM_ARM_DEVICE_ID_MASK GENMASK(KVM_ARM_DEVICE_ID_SHIFT + 15, \
- KVM_ARM_DEVICE_ID_SHIFT)
+#define KVM_ARM_DEVICE_ID_MASK __GENMASK(KVM_ARM_DEVICE_ID_SHIFT + 15, \
+ KVM_ARM_DEVICE_ID_SHIFT)
/* Supported device IDs */
#define KVM_ARM_DEVICE_VGIC_V2 0
@@ -110,6 +105,7 @@ struct kvm_regs {
#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */
#define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */
#define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */
+#define KVM_ARM_VCPU_HAS_EL2_E2H0 8 /* Limit NV support to E2H RES0 */
struct kvm_vcpu_init {
__u32 target;
@@ -162,6 +158,11 @@ struct kvm_sync_regs {
__u64 device_irq_level;
};
+/* Bits for run->s.regs.device_irq_level */
+#define KVM_ARM_DEV_EL1_VTIMER (1 << 0)
+#define KVM_ARM_DEV_EL1_PTIMER (1 << 1)
+#define KVM_ARM_DEV_PMU (1 << 2)
+
/*
* PMU filter structure. Describe a range of events with a particular
* action. To be used with KVM_ARM_VCPU_PMU_V3_FILTER.
@@ -371,6 +372,7 @@ enum {
#endif
};
+/* Vendor hyper call function numbers 0-63 */
#define KVM_REG_ARM_VENDOR_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(2)
enum {
@@ -381,6 +383,17 @@ enum {
#endif
};
+/* Vendor hyper call function numbers 64-127 */
+#define KVM_REG_ARM_VENDOR_HYP_BMAP_2 KVM_REG_ARM_FW_FEAT_BMAP_REG(3)
+
+enum {
+ KVM_REG_ARM_VENDOR_HYP_BIT_DISCOVER_IMPL_VER = 0,
+ KVM_REG_ARM_VENDOR_HYP_BIT_DISCOVER_IMPL_CPUS = 1,
+#ifdef __KERNEL__
+ KVM_REG_ARM_VENDOR_HYP_BMAP_2_BIT_COUNT,
+#endif
+};
+
/* Device Control API on vm fd */
#define KVM_ARM_VM_SMCCC_CTRL 0
#define KVM_ARM_VM_SMCCC_FILTER 0
@@ -403,6 +416,7 @@ enum {
#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
+#define KVM_DEV_ARM_VGIC_GRP_MAINT_IRQ 9
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
(0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
@@ -417,10 +431,11 @@ enum {
/* Device Control API on vcpu fd */
#define KVM_ARM_VCPU_PMU_V3_CTRL 0
-#define KVM_ARM_VCPU_PMU_V3_IRQ 0
-#define KVM_ARM_VCPU_PMU_V3_INIT 1
-#define KVM_ARM_VCPU_PMU_V3_FILTER 2
-#define KVM_ARM_VCPU_PMU_V3_SET_PMU 3
+#define KVM_ARM_VCPU_PMU_V3_IRQ 0
+#define KVM_ARM_VCPU_PMU_V3_INIT 1
+#define KVM_ARM_VCPU_PMU_V3_FILTER 2
+#define KVM_ARM_VCPU_PMU_V3_SET_PMU 3
+#define KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS 4
#define KVM_ARM_VCPU_TIMER_CTRL 1
#define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0
#define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
@@ -481,6 +496,12 @@ enum {
*/
#define KVM_SYSTEM_EVENT_RESET_FLAG_PSCI_RESET2 (1ULL << 0)
+/*
+ * Shutdown caused by a PSCI v1.3 SYSTEM_OFF2 call.
+ * Valid only when the system event has a type of KVM_SYSTEM_EVENT_SHUTDOWN.
+ */
+#define KVM_SYSTEM_EVENT_SHUTDOWN_FLAG_PSCI_OFF2 (1ULL << 0)
+
/* run->fail_entry.hardware_entry_failure_reason codes. */
#define KVM_EXIT_FAIL_ENTRY_CPU_UNSUPPORTED (1ULL << 0)
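The device TYPE/ID split documented earlier in this header packs into a single 64-bit id for KVM_ARM_SET_DEVICE_ADDR; a hedged sketch of that packing (kvm_arm_device_addr_id() and addr_type are illustrative, not defined by this header):

static inline __u64 kvm_arm_device_addr_id(__u32 device, __u32 addr_type)
{
	/* device goes in bits [31:16], the per-device address type in bits [15:0] */
	return (((__u64)device << KVM_ARM_DEVICE_ID_SHIFT) & KVM_ARM_DEVICE_ID_MASK) |
	       (((__u64)addr_type << KVM_ARM_DEVICE_TYPE_SHIFT) & KVM_ARM_DEVICE_TYPE_MASK);
}

/* e.g. kvm_arm_device_addr_id(KVM_ARM_DEVICE_VGIC_V2, 0) */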
diff --git a/tools/arch/arm64/include/uapi/asm/unistd.h b/tools/arch/arm64/include/uapi/asm/unistd.h
index ce2ee8f1e361..df36f23876e8 100644
--- a/tools/arch/arm64/include/uapi/asm/unistd.h
+++ b/tools/arch/arm64/include/uapi/asm/unistd.h
@@ -1,25 +1,2 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-/*
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-
-#define __ARCH_WANT_RENAMEAT
-#define __ARCH_WANT_NEW_STAT
-#define __ARCH_WANT_SET_GET_RLIMIT
-#define __ARCH_WANT_TIME32_SYSCALLS
-#define __ARCH_WANT_SYS_CLONE3
-#define __ARCH_WANT_MEMFD_SECRET
-
-#include <asm-generic/unistd.h>
+#include <asm/unistd_64.h>
diff --git a/tools/arch/arm64/tools/Makefile b/tools/arch/arm64/tools/Makefile
index 7b42feedf647..de4f1b66ef01 100644
--- a/tools/arch/arm64/tools/Makefile
+++ b/tools/arch/arm64/tools/Makefile
@@ -13,12 +13,6 @@ AWK ?= awk
MKDIR ?= mkdir
RM ?= rm
-ifeq ($(V),1)
-Q =
-else
-Q = @
-endif
-
arm64_tools_dir = $(top_srcdir)/arch/arm64/tools
arm64_sysreg_tbl = $(arm64_tools_dir)/sysreg
arm64_gen_sysreg = $(arm64_tools_dir)/gen-sysreg.awk
diff --git a/tools/arch/loongarch/include/asm/inst.h b/tools/arch/loongarch/include/asm/inst.h
new file mode 100644
index 000000000000..d68fad63c8b7
--- /dev/null
+++ b/tools/arch/loongarch/include/asm/inst.h
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
+ */
+#ifndef _ASM_INST_H
+#define _ASM_INST_H
+
+#include <linux/bitops.h>
+
+#define LOONGARCH_INSN_NOP 0x03400000
+
+enum reg0i15_op {
+ break_op = 0x54,
+};
+
+enum reg0i26_op {
+ b_op = 0x14,
+ bl_op = 0x15,
+};
+
+enum reg1i21_op {
+ beqz_op = 0x10,
+ bnez_op = 0x11,
+ bceqz_op = 0x12, /* bits[9:8] = 0x00 */
+ bcnez_op = 0x12, /* bits[9:8] = 0x01 */
+};
+
+enum reg2_op {
+ ertn_op = 0x1920e,
+};
+
+enum reg2i12_op {
+ addid_op = 0x0b,
+ andi_op = 0x0d,
+ ldd_op = 0xa3,
+ std_op = 0xa7,
+};
+
+enum reg2i14_op {
+ ldptrd_op = 0x26,
+ stptrd_op = 0x27,
+};
+
+enum reg2i16_op {
+ jirl_op = 0x13,
+ beq_op = 0x16,
+ bne_op = 0x17,
+ blt_op = 0x18,
+ bge_op = 0x19,
+ bltu_op = 0x1a,
+ bgeu_op = 0x1b,
+};
+
+enum reg3_op {
+ amswapw_op = 0x70c0,
+};
+
+struct reg0i15_format {
+ unsigned int immediate : 15;
+ unsigned int opcode : 17;
+};
+
+struct reg0i26_format {
+ unsigned int immediate_h : 10;
+ unsigned int immediate_l : 16;
+ unsigned int opcode : 6;
+};
+
+struct reg1i21_format {
+ unsigned int immediate_h : 5;
+ unsigned int rj : 5;
+ unsigned int immediate_l : 16;
+ unsigned int opcode : 6;
+};
+
+struct reg2_format {
+ unsigned int rd : 5;
+ unsigned int rj : 5;
+ unsigned int opcode : 22;
+};
+
+struct reg2i12_format {
+ unsigned int rd : 5;
+ unsigned int rj : 5;
+ unsigned int immediate : 12;
+ unsigned int opcode : 10;
+};
+
+struct reg2i14_format {
+ unsigned int rd : 5;
+ unsigned int rj : 5;
+ unsigned int immediate : 14;
+ unsigned int opcode : 8;
+};
+
+struct reg2i16_format {
+ unsigned int rd : 5;
+ unsigned int rj : 5;
+ unsigned int immediate : 16;
+ unsigned int opcode : 6;
+};
+
+struct reg3_format {
+ unsigned int rd : 5;
+ unsigned int rj : 5;
+ unsigned int rk : 5;
+ unsigned int opcode : 17;
+};
+
+union loongarch_instruction {
+ unsigned int word;
+ struct reg0i15_format reg0i15_format;
+ struct reg0i26_format reg0i26_format;
+ struct reg1i21_format reg1i21_format;
+ struct reg2_format reg2_format;
+ struct reg2i12_format reg2i12_format;
+ struct reg2i14_format reg2i14_format;
+ struct reg2i16_format reg2i16_format;
+ struct reg3_format reg3_format;
+};
+
+#define LOONGARCH_INSN_SIZE sizeof(union loongarch_instruction)
+
+enum loongarch_gpr {
+ LOONGARCH_GPR_ZERO = 0,
+ LOONGARCH_GPR_RA = 1,
+ LOONGARCH_GPR_TP = 2,
+ LOONGARCH_GPR_SP = 3,
+ LOONGARCH_GPR_A0 = 4, /* Reused as V0 for return value */
+ LOONGARCH_GPR_A1, /* Reused as V1 for return value */
+ LOONGARCH_GPR_A2,
+ LOONGARCH_GPR_A3,
+ LOONGARCH_GPR_A4,
+ LOONGARCH_GPR_A5,
+ LOONGARCH_GPR_A6,
+ LOONGARCH_GPR_A7,
+ LOONGARCH_GPR_T0 = 12,
+ LOONGARCH_GPR_T1,
+ LOONGARCH_GPR_T2,
+ LOONGARCH_GPR_T3,
+ LOONGARCH_GPR_T4,
+ LOONGARCH_GPR_T5,
+ LOONGARCH_GPR_T6,
+ LOONGARCH_GPR_T7,
+ LOONGARCH_GPR_T8,
+ LOONGARCH_GPR_FP = 22,
+ LOONGARCH_GPR_S0 = 23,
+ LOONGARCH_GPR_S1,
+ LOONGARCH_GPR_S2,
+ LOONGARCH_GPR_S3,
+ LOONGARCH_GPR_S4,
+ LOONGARCH_GPR_S5,
+ LOONGARCH_GPR_S6,
+ LOONGARCH_GPR_S7,
+ LOONGARCH_GPR_S8,
+ LOONGARCH_GPR_MAX
+};
+
+#define DEF_EMIT_REG2I16_FORMAT(NAME, OP) \
+static inline void emit_##NAME(union loongarch_instruction *insn, \
+ enum loongarch_gpr rj, \
+ enum loongarch_gpr rd, \
+ int offset) \
+{ \
+ insn->reg2i16_format.opcode = OP; \
+ insn->reg2i16_format.immediate = offset; \
+ insn->reg2i16_format.rj = rj; \
+ insn->reg2i16_format.rd = rd; \
+}
+
+DEF_EMIT_REG2I16_FORMAT(jirl, jirl_op)
+
+#endif /* _ASM_INST_H */
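A small usage sketch for the generated emitter (encode_jirl_ra_t0() is an illustrative name): it encodes "jirl $ra, $t0, 0", an indirect jump through $t0 that stores the return address in $ra. As the macro shows, the offset argument is written verbatim into the 16-bit immediate field.

static unsigned int encode_jirl_ra_t0(void)
{
	union loongarch_instruction insn;

	/* jirl rd, rj, offset: here rd = $ra, rj = $t0, immediate field = 0 */
	emit_jirl(&insn, LOONGARCH_GPR_T0, LOONGARCH_GPR_RA, 0);

	return insn.word;	/* the packed 32-bit instruction encoding */
}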
diff --git a/tools/arch/loongarch/include/asm/orc_types.h b/tools/arch/loongarch/include/asm/orc_types.h
new file mode 100644
index 000000000000..d5fa98d1d177
--- /dev/null
+++ b/tools/arch/loongarch/include/asm/orc_types.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _ORC_TYPES_H
+#define _ORC_TYPES_H
+
+#include <linux/types.h>
+
+/*
+ * The ORC_REG_* registers are base registers which are used to find other
+ * registers on the stack.
+ *
+ * ORC_REG_PREV_SP, also known as DWARF Call Frame Address (CFA), is the
+ * address of the previous frame: the caller's SP before it called the current
+ * function.
+ *
+ * ORC_REG_UNDEFINED means the corresponding register's value didn't change in
+ * the current frame.
+ *
+ * The most commonly used base registers are SP and FP -- which the previous SP
+ * is usually based on -- and PREV_SP and UNDEFINED -- which the previous FP is
+ * usually based on.
+ *
+ * The rest of the base registers are needed for special cases like entry code
+ * and GCC realigned stacks.
+ */
+#define ORC_REG_UNDEFINED 0
+#define ORC_REG_PREV_SP 1
+#define ORC_REG_SP 2
+#define ORC_REG_FP 3
+#define ORC_REG_MAX 4
+
+#define ORC_TYPE_UNDEFINED 0
+#define ORC_TYPE_END_OF_STACK 1
+#define ORC_TYPE_CALL 2
+#define ORC_TYPE_REGS 3
+#define ORC_TYPE_REGS_PARTIAL 4
+
+#ifndef __ASSEMBLER__
+/*
+ * This struct is more or less a vastly simplified version of the DWARF Call
+ * Frame Information standard. It contains only the necessary parts of DWARF
+ * CFI, simplified for ease of access by the in-kernel unwinder. It tells the
+ * unwinder how to find the previous SP and FP (and sometimes entry regs) on
+ * the stack for a given code address. Each instance of the struct corresponds
+ * to one or more code locations.
+ */
+struct orc_entry {
+ s16 sp_offset;
+ s16 fp_offset;
+ s16 ra_offset;
+ unsigned int sp_reg:4;
+ unsigned int fp_reg:4;
+ unsigned int ra_reg:4;
+ unsigned int type:3;
+ unsigned int signal:1;
+};
+#endif /* __ASSEMBLER__ */
+
+#endif /* _ORC_TYPES_H */
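For illustration only, an ORC entry for an ordinary call frame could look roughly like this; the offsets are invented example values, not data from this patch:

static const struct orc_entry example_call_frame = {
	.sp_reg    = ORC_REG_SP,	/* CFA is sp_offset above the current SP */
	.sp_offset = 16,
	.fp_reg    = ORC_REG_PREV_SP,	/* saved FP stored relative to the CFA */
	.fp_offset = -16,
	.ra_reg    = ORC_REG_PREV_SP,	/* saved return address likewise */
	.ra_offset = -8,
	.type      = ORC_TYPE_CALL,
};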
diff --git a/tools/arch/loongarch/include/uapi/asm/unistd.h b/tools/arch/loongarch/include/uapi/asm/unistd.h
index 0c743344e92d..8eeaac0087c3 100644
--- a/tools/arch/loongarch/include/uapi/asm/unistd.h
+++ b/tools/arch/loongarch/include/uapi/asm/unistd.h
@@ -4,6 +4,5 @@
*/
#define __ARCH_WANT_SYS_CLONE
-#define __ARCH_WANT_SYS_CLONE3
#include <asm-generic/unistd.h>
diff --git a/tools/arch/powerpc/include/uapi/asm/kvm.h b/tools/arch/powerpc/include/uapi/asm/kvm.h
index 9f18fa090f1f..077c5437f521 100644
--- a/tools/arch/powerpc/include/uapi/asm/kvm.h
+++ b/tools/arch/powerpc/include/uapi/asm/kvm.h
@@ -1,18 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
- *
* Copyright IBM Corp. 2007
*
* Authors: Hollis Blanchard <hollisb@us.ibm.com>
@@ -28,7 +15,6 @@
#define __KVM_HAVE_PPC_SMT
#define __KVM_HAVE_IRQCHIP
#define __KVM_HAVE_IRQ_LINE
-#define __KVM_HAVE_GUEST_DEBUG
/* Not always available, but if it is, this is the correct offset. */
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
@@ -646,6 +632,9 @@ struct kvm_ppc_cpu_char {
#define KVM_REG_PPC_SIER3 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc3)
#define KVM_REG_PPC_DAWR1 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc4)
#define KVM_REG_PPC_DAWRX1 (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc5)
+#define KVM_REG_PPC_DEXCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc6)
+#define KVM_REG_PPC_HASHKEYR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc7)
+#define KVM_REG_PPC_HASHPKEYR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xc8)
/* Transactional Memory checkpointed state:
* This is all GPRs, all VSX regs and a subset of SPRs
@@ -733,4 +722,48 @@ struct kvm_ppc_xive_eq {
#define KVM_XIVE_TIMA_PAGE_OFFSET 0
#define KVM_XIVE_ESB_PAGE_OFFSET 4
+/* for KVM_PPC_GET_PVINFO */
+
+#define KVM_PPC_PVINFO_FLAGS_EV_IDLE (1<<0)
+
+struct kvm_ppc_pvinfo {
+ /* out */
+ __u32 flags;
+ __u32 hcall[4];
+ __u8 pad[108];
+};
+
+/* for KVM_PPC_GET_SMMU_INFO */
+#define KVM_PPC_PAGE_SIZES_MAX_SZ 8
+
+struct kvm_ppc_one_page_size {
+ __u32 page_shift; /* Page shift (or 0) */
+ __u32 pte_enc; /* Encoding in the HPTE (>>12) */
+};
+
+struct kvm_ppc_one_seg_page_size {
+ __u32 page_shift; /* Base page shift of segment (or 0) */
+ __u32 slb_enc; /* SLB encoding for BookS */
+ struct kvm_ppc_one_page_size enc[KVM_PPC_PAGE_SIZES_MAX_SZ];
+};
+
+#define KVM_PPC_PAGE_SIZES_REAL 0x00000001
+#define KVM_PPC_1T_SEGMENTS 0x00000002
+#define KVM_PPC_NO_HASH 0x00000004
+
+struct kvm_ppc_smmu_info {
+ __u64 flags;
+ __u32 slb_size;
+ __u16 data_keys; /* # storage keys supported for data */
+ __u16 instr_keys; /* # storage keys supported for instructions */
+ struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ];
+};
+
+/* for KVM_PPC_RESIZE_HPT_{PREPARE,COMMIT} */
+struct kvm_ppc_resize_hpt {
+ __u64 flags;
+ __u32 shift;
+ __u32 pad;
+};
+
#endif /* __LINUX_KVM_POWERPC_H */
diff --git a/tools/arch/riscv/include/asm/barrier.h b/tools/arch/riscv/include/asm/barrier.h
new file mode 100644
index 000000000000..6997f197086d
--- /dev/null
+++ b/tools/arch/riscv/include/asm/barrier.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copied from the kernel sources to tools/arch/riscv:
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ */
+
+#ifndef _TOOLS_LINUX_ASM_RISCV_BARRIER_H
+#define _TOOLS_LINUX_ASM_RISCV_BARRIER_H
+
+#include <asm/fence.h>
+#include <linux/compiler.h>
+
+/* These barriers need to enforce ordering on both devices and memory. */
+#define mb() RISCV_FENCE(iorw, iorw)
+#define rmb() RISCV_FENCE(ir, ir)
+#define wmb() RISCV_FENCE(ow, ow)
+
+/* These barriers do not need to enforce ordering on devices, just memory. */
+#define smp_mb() RISCV_FENCE(rw, rw)
+#define smp_rmb() RISCV_FENCE(r, r)
+#define smp_wmb() RISCV_FENCE(w, w)
+
+#define smp_store_release(p, v) \
+do { \
+ RISCV_FENCE(rw, w); \
+ WRITE_ONCE(*p, v); \
+} while (0)
+
+#define smp_load_acquire(p) \
+({ \
+ typeof(*p) ___p1 = READ_ONCE(*p); \
+ RISCV_FENCE(r, rw); \
+ ___p1; \
+})
+
+#endif /* _TOOLS_LINUX_ASM_RISCV_BARRIER_H */
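
Since the acquire/release helpers above are the pair tools code typically uses together, here is a hedged single-producer/single-consumer sketch; the shared variables and function names are illustrative, not taken from this patch:

/* Illustrative only: a one-shot handoff using the helpers defined above. */
static int shared_data;
static int ready;

static void sketch_publish(int value)
{
	shared_data = value;		/* plain store, ordered by the release below */
	smp_store_release(&ready, 1);	/* RISCV_FENCE(rw, w) then WRITE_ONCE() */
}

static int sketch_consume(void)
{
	while (!smp_load_acquire(&ready))	/* READ_ONCE() then RISCV_FENCE(r, rw) */
		;				/* spin */
	return shared_data;			/* guaranteed to observe the store above */
}
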
diff --git a/tools/arch/riscv/include/asm/csr.h b/tools/arch/riscv/include/asm/csr.h
new file mode 100644
index 000000000000..56d7367ee344
--- /dev/null
+++ b/tools/arch/riscv/include/asm/csr.h
@@ -0,0 +1,541 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <linux/bits.h>
+
+/* Status register flags */
+#define SR_SIE _AC(0x00000002, UL) /* Supervisor Interrupt Enable */
+#define SR_MIE _AC(0x00000008, UL) /* Machine Interrupt Enable */
+#define SR_SPIE _AC(0x00000020, UL) /* Previous Supervisor IE */
+#define SR_MPIE _AC(0x00000080, UL) /* Previous Machine IE */
+#define SR_SPP _AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_MPP _AC(0x00001800, UL) /* Previously Machine */
+#define SR_SUM _AC(0x00040000, UL) /* Supervisor User Memory Access */
+
+#define SR_FS _AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF _AC(0x00000000, UL)
+#define SR_FS_INITIAL _AC(0x00002000, UL)
+#define SR_FS_CLEAN _AC(0x00004000, UL)
+#define SR_FS_DIRTY _AC(0x00006000, UL)
+
+#define SR_VS _AC(0x00000600, UL) /* Vector Status */
+#define SR_VS_OFF _AC(0x00000000, UL)
+#define SR_VS_INITIAL _AC(0x00000200, UL)
+#define SR_VS_CLEAN _AC(0x00000400, UL)
+#define SR_VS_DIRTY _AC(0x00000600, UL)
+
+#define SR_XS _AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF _AC(0x00000000, UL)
+#define SR_XS_INITIAL _AC(0x00008000, UL)
+#define SR_XS_CLEAN _AC(0x00010000, UL)
+#define SR_XS_DIRTY _AC(0x00018000, UL)
+
+#define SR_FS_VS (SR_FS | SR_VS) /* Vector and Floating-Point Unit */
+
+#ifndef CONFIG_64BIT
+#define SR_SD _AC(0x80000000, UL) /* FS/VS/XS dirty */
+#else
+#define SR_SD _AC(0x8000000000000000, UL) /* FS/VS/XS dirty */
+#endif
+
+#ifdef CONFIG_64BIT
+#define SR_UXL _AC(0x300000000, UL) /* XLEN mask for U-mode */
+#define SR_UXL_32 _AC(0x100000000, UL) /* XLEN = 32 for U-mode */
+#define SR_UXL_64 _AC(0x200000000, UL) /* XLEN = 64 for U-mode */
+#endif
+
+/* SATP flags */
+#ifndef CONFIG_64BIT
+#define SATP_PPN _AC(0x003FFFFF, UL)
+#define SATP_MODE_32 _AC(0x80000000, UL)
+#define SATP_MODE_SHIFT 31
+#define SATP_ASID_BITS 9
+#define SATP_ASID_SHIFT 22
+#define SATP_ASID_MASK _AC(0x1FF, UL)
+#else
+#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UL)
+#define SATP_MODE_39 _AC(0x8000000000000000, UL)
+#define SATP_MODE_48 _AC(0x9000000000000000, UL)
+#define SATP_MODE_57 _AC(0xa000000000000000, UL)
+#define SATP_MODE_SHIFT 60
+#define SATP_ASID_BITS 16
+#define SATP_ASID_SHIFT 44
+#define SATP_ASID_MASK _AC(0xFFFF, UL)
+#endif
+
+/* Exception cause high bit - is an interrupt if set */
+#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
+
+/* Interrupt causes (minus the high bit) */
+#define IRQ_S_SOFT 1
+#define IRQ_VS_SOFT 2
+#define IRQ_M_SOFT 3
+#define IRQ_S_TIMER 5
+#define IRQ_VS_TIMER 6
+#define IRQ_M_TIMER 7
+#define IRQ_S_EXT 9
+#define IRQ_VS_EXT 10
+#define IRQ_M_EXT 11
+#define IRQ_S_GEXT 12
+#define IRQ_PMU_OVF 13
+#define IRQ_LOCAL_MAX (IRQ_PMU_OVF + 1)
+#define IRQ_LOCAL_MASK GENMASK((IRQ_LOCAL_MAX - 1), 0)
+
+/* Exception causes */
+#define EXC_INST_MISALIGNED 0
+#define EXC_INST_ACCESS 1
+#define EXC_INST_ILLEGAL 2
+#define EXC_BREAKPOINT 3
+#define EXC_LOAD_MISALIGNED 4
+#define EXC_LOAD_ACCESS 5
+#define EXC_STORE_MISALIGNED 6
+#define EXC_STORE_ACCESS 7
+#define EXC_SYSCALL 8
+#define EXC_HYPERVISOR_SYSCALL 9
+#define EXC_SUPERVISOR_SYSCALL 10
+#define EXC_INST_PAGE_FAULT 12
+#define EXC_LOAD_PAGE_FAULT 13
+#define EXC_STORE_PAGE_FAULT 15
+#define EXC_INST_GUEST_PAGE_FAULT 20
+#define EXC_LOAD_GUEST_PAGE_FAULT 21
+#define EXC_VIRTUAL_INST_FAULT 22
+#define EXC_STORE_GUEST_PAGE_FAULT 23
+
+/* PMP configuration */
+#define PMP_R 0x01
+#define PMP_W 0x02
+#define PMP_X 0x04
+#define PMP_A 0x18
+#define PMP_A_TOR 0x08
+#define PMP_A_NA4 0x10
+#define PMP_A_NAPOT 0x18
+#define PMP_L 0x80
+
+/* HSTATUS flags */
+#ifdef CONFIG_64BIT
+#define HSTATUS_VSXL _AC(0x300000000, UL)
+#define HSTATUS_VSXL_SHIFT 32
+#endif
+#define HSTATUS_VTSR _AC(0x00400000, UL)
+#define HSTATUS_VTW _AC(0x00200000, UL)
+#define HSTATUS_VTVM _AC(0x00100000, UL)
+#define HSTATUS_VGEIN _AC(0x0003f000, UL)
+#define HSTATUS_VGEIN_SHIFT 12
+#define HSTATUS_HU _AC(0x00000200, UL)
+#define HSTATUS_SPVP _AC(0x00000100, UL)
+#define HSTATUS_SPV _AC(0x00000080, UL)
+#define HSTATUS_GVA _AC(0x00000040, UL)
+#define HSTATUS_VSBE _AC(0x00000020, UL)
+
+/* HGATP flags */
+#define HGATP_MODE_OFF _AC(0, UL)
+#define HGATP_MODE_SV32X4 _AC(1, UL)
+#define HGATP_MODE_SV39X4 _AC(8, UL)
+#define HGATP_MODE_SV48X4 _AC(9, UL)
+#define HGATP_MODE_SV57X4 _AC(10, UL)
+
+#define HGATP32_MODE_SHIFT 31
+#define HGATP32_VMID_SHIFT 22
+#define HGATP32_VMID GENMASK(28, 22)
+#define HGATP32_PPN GENMASK(21, 0)
+
+#define HGATP64_MODE_SHIFT 60
+#define HGATP64_VMID_SHIFT 44
+#define HGATP64_VMID GENMASK(57, 44)
+#define HGATP64_PPN GENMASK(43, 0)
+
+#define HGATP_PAGE_SHIFT 12
+
+#ifdef CONFIG_64BIT
+#define HGATP_PPN HGATP64_PPN
+#define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT
+#define HGATP_VMID HGATP64_VMID
+#define HGATP_MODE_SHIFT HGATP64_MODE_SHIFT
+#else
+#define HGATP_PPN HGATP32_PPN
+#define HGATP_VMID_SHIFT HGATP32_VMID_SHIFT
+#define HGATP_VMID HGATP32_VMID
+#define HGATP_MODE_SHIFT HGATP32_MODE_SHIFT
+#endif
+
+/* VSIP & HVIP relation */
+#define VSIP_TO_HVIP_SHIFT (IRQ_VS_SOFT - IRQ_S_SOFT)
+#define VSIP_VALID_MASK ((_AC(1, UL) << IRQ_S_SOFT) | \
+ (_AC(1, UL) << IRQ_S_TIMER) | \
+ (_AC(1, UL) << IRQ_S_EXT))
+
+/* AIA CSR bits */
+#define TOPI_IID_SHIFT 16
+#define TOPI_IID_MASK GENMASK(11, 0)
+#define TOPI_IPRIO_MASK GENMASK(7, 0)
+#define TOPI_IPRIO_BITS 8
+
+#define TOPEI_ID_SHIFT 16
+#define TOPEI_ID_MASK GENMASK(10, 0)
+#define TOPEI_PRIO_MASK GENMASK(10, 0)
+
+#define ISELECT_IPRIO0 0x30
+#define ISELECT_IPRIO15 0x3f
+#define ISELECT_MASK GENMASK(8, 0)
+
+#define HVICTL_VTI BIT(30)
+#define HVICTL_IID GENMASK(27, 16)
+#define HVICTL_IID_SHIFT 16
+#define HVICTL_DPR BIT(9)
+#define HVICTL_IPRIOM BIT(8)
+#define HVICTL_IPRIO GENMASK(7, 0)
+
+/* xENVCFG flags */
+#define ENVCFG_STCE (_AC(1, ULL) << 63)
+#define ENVCFG_PBMTE (_AC(1, ULL) << 62)
+#define ENVCFG_CBZE (_AC(1, UL) << 7)
+#define ENVCFG_CBCFE (_AC(1, UL) << 6)
+#define ENVCFG_CBIE_SHIFT 4
+#define ENVCFG_CBIE (_AC(0x3, UL) << ENVCFG_CBIE_SHIFT)
+#define ENVCFG_CBIE_ILL _AC(0x0, UL)
+#define ENVCFG_CBIE_FLUSH _AC(0x1, UL)
+#define ENVCFG_CBIE_INV _AC(0x3, UL)
+#define ENVCFG_FIOM _AC(0x1, UL)
+
+/* Smstateen bits */
+#define SMSTATEEN0_AIA_IMSIC_SHIFT 58
+#define SMSTATEEN0_AIA_IMSIC (_ULL(1) << SMSTATEEN0_AIA_IMSIC_SHIFT)
+#define SMSTATEEN0_AIA_SHIFT 59
+#define SMSTATEEN0_AIA (_ULL(1) << SMSTATEEN0_AIA_SHIFT)
+#define SMSTATEEN0_AIA_ISEL_SHIFT 60
+#define SMSTATEEN0_AIA_ISEL (_ULL(1) << SMSTATEEN0_AIA_ISEL_SHIFT)
+#define SMSTATEEN0_HSENVCFG_SHIFT 62
+#define SMSTATEEN0_HSENVCFG (_ULL(1) << SMSTATEEN0_HSENVCFG_SHIFT)
+#define SMSTATEEN0_SSTATEEN0_SHIFT 63
+#define SMSTATEEN0_SSTATEEN0 (_ULL(1) << SMSTATEEN0_SSTATEEN0_SHIFT)
+
+/* symbolic CSR names: */
+#define CSR_CYCLE 0xc00
+#define CSR_TIME 0xc01
+#define CSR_INSTRET 0xc02
+#define CSR_HPMCOUNTER3 0xc03
+#define CSR_HPMCOUNTER4 0xc04
+#define CSR_HPMCOUNTER5 0xc05
+#define CSR_HPMCOUNTER6 0xc06
+#define CSR_HPMCOUNTER7 0xc07
+#define CSR_HPMCOUNTER8 0xc08
+#define CSR_HPMCOUNTER9 0xc09
+#define CSR_HPMCOUNTER10 0xc0a
+#define CSR_HPMCOUNTER11 0xc0b
+#define CSR_HPMCOUNTER12 0xc0c
+#define CSR_HPMCOUNTER13 0xc0d
+#define CSR_HPMCOUNTER14 0xc0e
+#define CSR_HPMCOUNTER15 0xc0f
+#define CSR_HPMCOUNTER16 0xc10
+#define CSR_HPMCOUNTER17 0xc11
+#define CSR_HPMCOUNTER18 0xc12
+#define CSR_HPMCOUNTER19 0xc13
+#define CSR_HPMCOUNTER20 0xc14
+#define CSR_HPMCOUNTER21 0xc15
+#define CSR_HPMCOUNTER22 0xc16
+#define CSR_HPMCOUNTER23 0xc17
+#define CSR_HPMCOUNTER24 0xc18
+#define CSR_HPMCOUNTER25 0xc19
+#define CSR_HPMCOUNTER26 0xc1a
+#define CSR_HPMCOUNTER27 0xc1b
+#define CSR_HPMCOUNTER28 0xc1c
+#define CSR_HPMCOUNTER29 0xc1d
+#define CSR_HPMCOUNTER30 0xc1e
+#define CSR_HPMCOUNTER31 0xc1f
+#define CSR_CYCLEH 0xc80
+#define CSR_TIMEH 0xc81
+#define CSR_INSTRETH 0xc82
+#define CSR_HPMCOUNTER3H 0xc83
+#define CSR_HPMCOUNTER4H 0xc84
+#define CSR_HPMCOUNTER5H 0xc85
+#define CSR_HPMCOUNTER6H 0xc86
+#define CSR_HPMCOUNTER7H 0xc87
+#define CSR_HPMCOUNTER8H 0xc88
+#define CSR_HPMCOUNTER9H 0xc89
+#define CSR_HPMCOUNTER10H 0xc8a
+#define CSR_HPMCOUNTER11H 0xc8b
+#define CSR_HPMCOUNTER12H 0xc8c
+#define CSR_HPMCOUNTER13H 0xc8d
+#define CSR_HPMCOUNTER14H 0xc8e
+#define CSR_HPMCOUNTER15H 0xc8f
+#define CSR_HPMCOUNTER16H 0xc90
+#define CSR_HPMCOUNTER17H 0xc91
+#define CSR_HPMCOUNTER18H 0xc92
+#define CSR_HPMCOUNTER19H 0xc93
+#define CSR_HPMCOUNTER20H 0xc94
+#define CSR_HPMCOUNTER21H 0xc95
+#define CSR_HPMCOUNTER22H 0xc96
+#define CSR_HPMCOUNTER23H 0xc97
+#define CSR_HPMCOUNTER24H 0xc98
+#define CSR_HPMCOUNTER25H 0xc99
+#define CSR_HPMCOUNTER26H 0xc9a
+#define CSR_HPMCOUNTER27H 0xc9b
+#define CSR_HPMCOUNTER28H 0xc9c
+#define CSR_HPMCOUNTER29H 0xc9d
+#define CSR_HPMCOUNTER30H 0xc9e
+#define CSR_HPMCOUNTER31H 0xc9f
+
+#define CSR_SSCOUNTOVF 0xda0
+
+#define CSR_SSTATUS 0x100
+#define CSR_SIE 0x104
+#define CSR_STVEC 0x105
+#define CSR_SCOUNTEREN 0x106
+#define CSR_SENVCFG 0x10a
+#define CSR_SSTATEEN0 0x10c
+#define CSR_SSCRATCH 0x140
+#define CSR_SEPC 0x141
+#define CSR_SCAUSE 0x142
+#define CSR_STVAL 0x143
+#define CSR_SIP 0x144
+#define CSR_SATP 0x180
+
+#define CSR_STIMECMP 0x14D
+#define CSR_STIMECMPH 0x15D
+
+/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_SISELECT 0x150
+#define CSR_SIREG 0x151
+
+/* Supervisor-Level Interrupts (AIA) */
+#define CSR_STOPEI 0x15c
+#define CSR_STOPI 0xdb0
+
+/* Supervisor-Level High-Half CSRs (AIA) */
+#define CSR_SIEH 0x114
+#define CSR_SIPH 0x154
+
+#define CSR_VSSTATUS 0x200
+#define CSR_VSIE 0x204
+#define CSR_VSTVEC 0x205
+#define CSR_VSSCRATCH 0x240
+#define CSR_VSEPC 0x241
+#define CSR_VSCAUSE 0x242
+#define CSR_VSTVAL 0x243
+#define CSR_VSIP 0x244
+#define CSR_VSATP 0x280
+#define CSR_VSTIMECMP 0x24D
+#define CSR_VSTIMECMPH 0x25D
+
+#define CSR_HSTATUS 0x600
+#define CSR_HEDELEG 0x602
+#define CSR_HIDELEG 0x603
+#define CSR_HIE 0x604
+#define CSR_HTIMEDELTA 0x605
+#define CSR_HCOUNTEREN 0x606
+#define CSR_HGEIE 0x607
+#define CSR_HENVCFG 0x60a
+#define CSR_HTIMEDELTAH 0x615
+#define CSR_HENVCFGH 0x61a
+#define CSR_HTVAL 0x643
+#define CSR_HIP 0x644
+#define CSR_HVIP 0x645
+#define CSR_HTINST 0x64a
+#define CSR_HGATP 0x680
+#define CSR_HGEIP 0xe12
+
+/* Virtual Interrupts and Interrupt Priorities (H-extension with AIA) */
+#define CSR_HVIEN 0x608
+#define CSR_HVICTL 0x609
+#define CSR_HVIPRIO1 0x646
+#define CSR_HVIPRIO2 0x647
+
+/* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA) */
+#define CSR_VSISELECT 0x250
+#define CSR_VSIREG 0x251
+
+/* VS-Level Interrupts (H-extension with AIA) */
+#define CSR_VSTOPEI 0x25c
+#define CSR_VSTOPI 0xeb0
+
+/* Hypervisor and VS-Level High-Half CSRs (H-extension with AIA) */
+#define CSR_HIDELEGH 0x613
+#define CSR_HVIENH 0x618
+#define CSR_HVIPH 0x655
+#define CSR_HVIPRIO1H 0x656
+#define CSR_HVIPRIO2H 0x657
+#define CSR_VSIEH 0x214
+#define CSR_VSIPH 0x254
+
+/* Hypervisor stateen CSRs */
+#define CSR_HSTATEEN0 0x60c
+#define CSR_HSTATEEN0H 0x61c
+
+#define CSR_MSTATUS 0x300
+#define CSR_MISA 0x301
+#define CSR_MIDELEG 0x303
+#define CSR_MIE 0x304
+#define CSR_MTVEC 0x305
+#define CSR_MENVCFG 0x30a
+#define CSR_MENVCFGH 0x31a
+#define CSR_MSCRATCH 0x340
+#define CSR_MEPC 0x341
+#define CSR_MCAUSE 0x342
+#define CSR_MTVAL 0x343
+#define CSR_MIP 0x344
+#define CSR_PMPCFG0 0x3a0
+#define CSR_PMPADDR0 0x3b0
+#define CSR_MVENDORID 0xf11
+#define CSR_MARCHID 0xf12
+#define CSR_MIMPID 0xf13
+#define CSR_MHARTID 0xf14
+
+/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
+#define CSR_MISELECT 0x350
+#define CSR_MIREG 0x351
+
+/* Machine-Level Interrupts (AIA) */
+#define CSR_MTOPEI 0x35c
+#define CSR_MTOPI 0xfb0
+
+/* Virtual Interrupts for Supervisor Level (AIA) */
+#define CSR_MVIEN 0x308
+#define CSR_MVIP 0x309
+
+/* Machine-Level High-Half CSRs (AIA) */
+#define CSR_MIDELEGH 0x313
+#define CSR_MIEH 0x314
+#define CSR_MVIENH 0x318
+#define CSR_MVIPH 0x319
+#define CSR_MIPH 0x354
+
+#define CSR_VSTART 0x8
+#define CSR_VCSR 0xf
+#define CSR_VL 0xc20
+#define CSR_VTYPE 0xc21
+#define CSR_VLENB 0xc22
+
+#ifdef CONFIG_RISCV_M_MODE
+# define CSR_STATUS CSR_MSTATUS
+# define CSR_IE CSR_MIE
+# define CSR_TVEC CSR_MTVEC
+# define CSR_SCRATCH CSR_MSCRATCH
+# define CSR_EPC CSR_MEPC
+# define CSR_CAUSE CSR_MCAUSE
+# define CSR_TVAL CSR_MTVAL
+# define CSR_IP CSR_MIP
+
+# define CSR_IEH CSR_MIEH
+# define CSR_ISELECT CSR_MISELECT
+# define CSR_IREG CSR_MIREG
+# define CSR_IPH CSR_MIPH
+# define CSR_TOPEI CSR_MTOPEI
+# define CSR_TOPI CSR_MTOPI
+
+# define SR_IE SR_MIE
+# define SR_PIE SR_MPIE
+# define SR_PP SR_MPP
+
+# define RV_IRQ_SOFT IRQ_M_SOFT
+# define RV_IRQ_TIMER IRQ_M_TIMER
+# define RV_IRQ_EXT IRQ_M_EXT
+#else /* CONFIG_RISCV_M_MODE */
+# define CSR_STATUS CSR_SSTATUS
+# define CSR_IE CSR_SIE
+# define CSR_TVEC CSR_STVEC
+# define CSR_SCRATCH CSR_SSCRATCH
+# define CSR_EPC CSR_SEPC
+# define CSR_CAUSE CSR_SCAUSE
+# define CSR_TVAL CSR_STVAL
+# define CSR_IP CSR_SIP
+
+# define CSR_IEH CSR_SIEH
+# define CSR_ISELECT CSR_SISELECT
+# define CSR_IREG CSR_SIREG
+# define CSR_IPH CSR_SIPH
+# define CSR_TOPEI CSR_STOPEI
+# define CSR_TOPI CSR_STOPI
+
+# define SR_IE SR_SIE
+# define SR_PIE SR_SPIE
+# define SR_PP SR_SPP
+
+# define RV_IRQ_SOFT IRQ_S_SOFT
+# define RV_IRQ_TIMER IRQ_S_TIMER
+# define RV_IRQ_EXT IRQ_S_EXT
+# define RV_IRQ_PMU IRQ_PMU_OVF
+# define SIP_LCOFIP (_AC(0x1, UL) << IRQ_PMU_OVF)
+
+#endif /* !CONFIG_RISCV_M_MODE */
+
+/* IE/IP (Supervisor/Machine Interrupt Enable/Pending) flags */
+#define IE_SIE (_AC(0x1, UL) << RV_IRQ_SOFT)
+#define IE_TIE (_AC(0x1, UL) << RV_IRQ_TIMER)
+#define IE_EIE (_AC(0x1, UL) << RV_IRQ_EXT)
+
+#ifdef __ASSEMBLER__
+#define __ASM_STR(x) x
+#else
+#define __ASM_STR(x) #x
+#endif
+
+#ifndef __ASSEMBLER__
+
+#define csr_swap(csr, val) \
+({ \
+ unsigned long __v = (unsigned long)(val); \
+ __asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\
+ : "=r" (__v) : "rK" (__v) \
+ : "memory"); \
+ __v; \
+})
+
+#define csr_read(csr) \
+({ \
+ register unsigned long __v; \
+ __asm__ __volatile__ ("csrr %0, " __ASM_STR(csr) \
+ : "=r" (__v) : \
+ : "memory"); \
+ __v; \
+})
+
+#define csr_write(csr, val) \
+({ \
+ unsigned long __v = (unsigned long)(val); \
+ __asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0" \
+ : : "rK" (__v) \
+ : "memory"); \
+})
+
+#define csr_read_set(csr, val) \
+({ \
+ unsigned long __v = (unsigned long)(val); \
+ __asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\
+ : "=r" (__v) : "rK" (__v) \
+ : "memory"); \
+ __v; \
+})
+
+#define csr_set(csr, val) \
+({ \
+ unsigned long __v = (unsigned long)(val); \
+ __asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0" \
+ : : "rK" (__v) \
+ : "memory"); \
+})
+
+#define csr_read_clear(csr, val) \
+({ \
+ unsigned long __v = (unsigned long)(val); \
+ __asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\
+ : "=r" (__v) : "rK" (__v) \
+ : "memory"); \
+ __v; \
+})
+
+#define csr_clear(csr, val) \
+({ \
+ unsigned long __v = (unsigned long)(val); \
+ __asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0" \
+ : : "rK" (__v) \
+ : "memory"); \
+})
+
+#endif /* __ASSEMBLER__ */
+
+#endif /* _ASM_RISCV_CSR_H */
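
The csr_* accessors above expand to single Zicsr instructions via __ASM_STR(). A hedged usage sketch follows, assuming S-mode (so CSR_SIE/CSR_SCAUSE and the IE_*/CAUSE_* constants defined above apply) and a RISC-V compiler so that __riscv_xlen is defined; the function names are hypothetical:

/* Illustrative only -- not part of the patch above. */
static void sketch_enable_timer_irq(void)
{
	csr_set(CSR_SIE, IE_TIE);			/* csrs 0x104, <IE_TIE> */
}

static unsigned long sketch_trap_cause(void)
{
	unsigned long cause = csr_read(CSR_SCAUSE);	/* csrr <reg>, 0x142 */

	if (cause & CAUSE_IRQ_FLAG)			/* interrupt vs. exception */
		return cause & ~CAUSE_IRQ_FLAG;		/* interrupt number */
	return cause;					/* exception code */
}
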
diff --git a/tools/arch/riscv/include/asm/fence.h b/tools/arch/riscv/include/asm/fence.h
new file mode 100644
index 000000000000..37860e86771d
--- /dev/null
+++ b/tools/arch/riscv/include/asm/fence.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copied from the kernel sources to tools/arch/riscv:
+ */
+
+#ifndef _ASM_RISCV_FENCE_H
+#define _ASM_RISCV_FENCE_H
+
+#define RISCV_FENCE_ASM(p, s) "\tfence " #p "," #s "\n"
+#define RISCV_FENCE(p, s) \
+ ({ __asm__ __volatile__ (RISCV_FENCE_ASM(p, s) : : : "memory"); })
+
+#endif /* _ASM_RISCV_FENCE_H */
diff --git a/tools/arch/riscv/include/asm/vdso/processor.h b/tools/arch/riscv/include/asm/vdso/processor.h
new file mode 100644
index 000000000000..0665b117f30f
--- /dev/null
+++ b/tools/arch/riscv/include/asm/vdso/processor.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_VDSO_PROCESSOR_H
+#define __ASM_VDSO_PROCESSOR_H
+
+#ifndef __ASSEMBLER__
+
+#include <asm-generic/barrier.h>
+
+static inline void cpu_relax(void)
+{
+#ifdef __riscv_muldiv
+ int dummy;
+ /* In lieu of a halt instruction, induce a long-latency stall. */
+ __asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+#endif
+
+#ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE
+ /*
+ * Reduce instruction retirement.
+ * This assumes the PC changes.
+ */
+ __asm__ __volatile__ ("pause");
+#else
+ /* Encoding of the pause instruction */
+ __asm__ __volatile__ (".4byte 0x100000F");
+#endif
+ barrier();
+}
+
+#endif /* __ASSEMBLER__ */
+
+#endif /* __ASM_VDSO_PROCESSOR_H */
diff --git a/tools/arch/s390/include/uapi/asm/kvm.h b/tools/arch/s390/include/uapi/asm/kvm.h
index abe926d43cbe..60345dd2cba2 100644
--- a/tools/arch/s390/include/uapi/asm/kvm.h
+++ b/tools/arch/s390/include/uapi/asm/kvm.h
@@ -12,7 +12,320 @@
#include <linux/types.h>
#define __KVM_S390
-#define __KVM_HAVE_GUEST_DEBUG
+
+struct kvm_s390_skeys {
+ __u64 start_gfn;
+ __u64 count;
+ __u64 skeydata_addr;
+ __u32 flags;
+ __u32 reserved[9];
+};
+
+#define KVM_S390_CMMA_PEEK (1 << 0)
+
+/**
+ * kvm_s390_cmma_log - Used for CMMA migration.
+ *
+ * Used both for input and output.
+ *
+ * @start_gfn: Guest page number to start from.
+ * @count: Size of the result buffer.
+ * @flags: Control operation mode via KVM_S390_CMMA_* flags
+ * @remaining: Used with KVM_S390_GET_CMMA_BITS. Indicates how many dirty
+ * pages are still remaining.
+ * @mask: Used with KVM_S390_SET_CMMA_BITS. Bitmap of bits to actually set
+ * in the PGSTE.
+ * @values: Pointer to the values buffer.
+ *
+ * Used in KVM_S390_{G,S}ET_CMMA_BITS ioctls.
+ */
+struct kvm_s390_cmma_log {
+ __u64 start_gfn;
+ __u32 count;
+ __u32 flags;
+ union {
+ __u64 remaining;
+ __u64 mask;
+ };
+ __u64 values;
+};
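
As a hedged illustration of the kerneldoc above, a minimal userspace call to KVM_S390_GET_CMMA_BITS; vm_fd, the buffer, and the helper name are assumptions, not part of this patch:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustrative only: vm_fd comes from KVM_CREATE_VM elsewhere. */
static int sketch_peek_cmma(int vm_fd, __u8 *buf, __u32 buf_len)
{
	struct kvm_s390_cmma_log log = {
		.start_gfn = 0,
		.count     = buf_len,			/* size of the values buffer */
		.flags     = KVM_S390_CMMA_PEEK,	/* peek, do not clear the state */
		.values    = (__u64)(uintptr_t)buf,
	};

	if (ioctl(vm_fd, KVM_S390_GET_CMMA_BITS, &log))
		return -1;
	return (int)log.count;				/* number of values returned */
}
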
+
+#define KVM_S390_RESET_POR 1
+#define KVM_S390_RESET_CLEAR 2
+#define KVM_S390_RESET_SUBSYSTEM 4
+#define KVM_S390_RESET_CPU_INIT 8
+#define KVM_S390_RESET_IPL 16
+
+/* for KVM_S390_MEM_OP */
+struct kvm_s390_mem_op {
+ /* in */
+ __u64 gaddr; /* the guest address */
+ __u64 flags; /* flags */
+ __u32 size; /* amount of bytes */
+ __u32 op; /* type of operation */
+ __u64 buf; /* buffer in userspace */
+ union {
+ struct {
+ __u8 ar; /* the access register number */
+ __u8 key; /* access key, ignored if flag unset */
+ __u8 pad1[6]; /* ignored */
+ __u64 old_addr; /* ignored if cmpxchg flag unset */
+ };
+ __u32 sida_offset; /* offset into the sida */
+ __u8 reserved[32]; /* ignored */
+ };
+};
+/* types for kvm_s390_mem_op->op */
+#define KVM_S390_MEMOP_LOGICAL_READ 0
+#define KVM_S390_MEMOP_LOGICAL_WRITE 1
+#define KVM_S390_MEMOP_SIDA_READ 2
+#define KVM_S390_MEMOP_SIDA_WRITE 3
+#define KVM_S390_MEMOP_ABSOLUTE_READ 4
+#define KVM_S390_MEMOP_ABSOLUTE_WRITE 5
+#define KVM_S390_MEMOP_ABSOLUTE_CMPXCHG 6
+
+/* flags for kvm_s390_mem_op->flags */
+#define KVM_S390_MEMOP_F_CHECK_ONLY (1ULL << 0)
+#define KVM_S390_MEMOP_F_INJECT_EXCEPTION (1ULL << 1)
+#define KVM_S390_MEMOP_F_SKEY_PROTECTION (1ULL << 2)
+
+/* flags specifying extension support via KVM_CAP_S390_MEM_OP_EXTENSION */
+#define KVM_S390_MEMOP_EXTENSION_CAP_BASE (1 << 0)
+#define KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG (1 << 1)
+
+struct kvm_s390_psw {
+ __u64 mask;
+ __u64 addr;
+};
+
+/* valid values for type in kvm_s390_interrupt */
+#define KVM_S390_SIGP_STOP 0xfffe0000u
+#define KVM_S390_PROGRAM_INT 0xfffe0001u
+#define KVM_S390_SIGP_SET_PREFIX 0xfffe0002u
+#define KVM_S390_RESTART 0xfffe0003u
+#define KVM_S390_INT_PFAULT_INIT 0xfffe0004u
+#define KVM_S390_INT_PFAULT_DONE 0xfffe0005u
+#define KVM_S390_MCHK 0xfffe1000u
+#define KVM_S390_INT_CLOCK_COMP 0xffff1004u
+#define KVM_S390_INT_CPU_TIMER 0xffff1005u
+#define KVM_S390_INT_VIRTIO 0xffff2603u
+#define KVM_S390_INT_SERVICE 0xffff2401u
+#define KVM_S390_INT_EMERGENCY 0xffff1201u
+#define KVM_S390_INT_EXTERNAL_CALL 0xffff1202u
+/* Anything below 0xfffe0000u is taken by INT_IO */
+#define KVM_S390_INT_IO(ai,cssid,ssid,schid) \
+ (((schid)) | \
+ ((ssid) << 16) | \
+ ((cssid) << 18) | \
+ ((ai) << 26))
+#define KVM_S390_INT_IO_MIN 0x00000000u
+#define KVM_S390_INT_IO_MAX 0xfffdffffu
+#define KVM_S390_INT_IO_AI_MASK 0x04000000u
+
+
+struct kvm_s390_interrupt {
+ __u32 type;
+ __u32 parm;
+ __u64 parm64;
+};
+
+struct kvm_s390_io_info {
+ __u16 subchannel_id;
+ __u16 subchannel_nr;
+ __u32 io_int_parm;
+ __u32 io_int_word;
+};
+
+struct kvm_s390_ext_info {
+ __u32 ext_params;
+ __u32 pad;
+ __u64 ext_params2;
+};
+
+struct kvm_s390_pgm_info {
+ __u64 trans_exc_code;
+ __u64 mon_code;
+ __u64 per_address;
+ __u32 data_exc_code;
+ __u16 code;
+ __u16 mon_class_nr;
+ __u8 per_code;
+ __u8 per_atmid;
+ __u8 exc_access_id;
+ __u8 per_access_id;
+ __u8 op_access_id;
+#define KVM_S390_PGM_FLAGS_ILC_VALID 0x01
+#define KVM_S390_PGM_FLAGS_ILC_0 0x02
+#define KVM_S390_PGM_FLAGS_ILC_1 0x04
+#define KVM_S390_PGM_FLAGS_ILC_MASK 0x06
+#define KVM_S390_PGM_FLAGS_NO_REWIND 0x08
+ __u8 flags;
+ __u8 pad[2];
+};
+
+struct kvm_s390_prefix_info {
+ __u32 address;
+};
+
+struct kvm_s390_extcall_info {
+ __u16 code;
+};
+
+struct kvm_s390_emerg_info {
+ __u16 code;
+};
+
+#define KVM_S390_STOP_FLAG_STORE_STATUS 0x01
+struct kvm_s390_stop_info {
+ __u32 flags;
+};
+
+struct kvm_s390_mchk_info {
+ __u64 cr14;
+ __u64 mcic;
+ __u64 failing_storage_address;
+ __u32 ext_damage_code;
+ __u32 pad;
+ __u8 fixed_logout[16];
+};
+
+struct kvm_s390_irq {
+ __u64 type;
+ union {
+ struct kvm_s390_io_info io;
+ struct kvm_s390_ext_info ext;
+ struct kvm_s390_pgm_info pgm;
+ struct kvm_s390_emerg_info emerg;
+ struct kvm_s390_extcall_info extcall;
+ struct kvm_s390_prefix_info prefix;
+ struct kvm_s390_stop_info stop;
+ struct kvm_s390_mchk_info mchk;
+ char reserved[64];
+ } u;
+};
+
+struct kvm_s390_irq_state {
+ __u64 buf;
+ __u32 flags; /* will stay unused for compatibility reasons */
+ __u32 len;
+ __u32 reserved[4]; /* will stay unused for compatibility reasons */
+};
+
+struct kvm_s390_ucas_mapping {
+ __u64 user_addr;
+ __u64 vcpu_addr;
+ __u64 length;
+};
+
+struct kvm_s390_pv_sec_parm {
+ __u64 origin;
+ __u64 length;
+};
+
+struct kvm_s390_pv_unp {
+ __u64 addr;
+ __u64 size;
+ __u64 tweak;
+};
+
+enum pv_cmd_dmp_id {
+ KVM_PV_DUMP_INIT,
+ KVM_PV_DUMP_CONFIG_STOR_STATE,
+ KVM_PV_DUMP_COMPLETE,
+ KVM_PV_DUMP_CPU,
+};
+
+struct kvm_s390_pv_dmp {
+ __u64 subcmd;
+ __u64 buff_addr;
+ __u64 buff_len;
+ __u64 gaddr; /* For dump storage state */
+ __u64 reserved[4];
+};
+
+enum pv_cmd_info_id {
+ KVM_PV_INFO_VM,
+ KVM_PV_INFO_DUMP,
+};
+
+struct kvm_s390_pv_info_dump {
+ __u64 dump_cpu_buffer_len;
+ __u64 dump_config_mem_buffer_per_1m;
+ __u64 dump_config_finalize_len;
+};
+
+struct kvm_s390_pv_info_vm {
+ __u64 inst_calls_list[4];
+ __u64 max_cpus;
+ __u64 max_guests;
+ __u64 max_guest_addr;
+ __u64 feature_indication;
+};
+
+struct kvm_s390_pv_info_header {
+ __u32 id;
+ __u32 len_max;
+ __u32 len_written;
+ __u32 reserved;
+};
+
+struct kvm_s390_pv_info {
+ struct kvm_s390_pv_info_header header;
+ union {
+ struct kvm_s390_pv_info_dump dump;
+ struct kvm_s390_pv_info_vm vm;
+ };
+};
+
+enum pv_cmd_id {
+ KVM_PV_ENABLE,
+ KVM_PV_DISABLE,
+ KVM_PV_SET_SEC_PARMS,
+ KVM_PV_UNPACK,
+ KVM_PV_VERIFY,
+ KVM_PV_PREP_RESET,
+ KVM_PV_UNSHARE_ALL,
+ KVM_PV_INFO,
+ KVM_PV_DUMP,
+ KVM_PV_ASYNC_CLEANUP_PREPARE,
+ KVM_PV_ASYNC_CLEANUP_PERFORM,
+};
+
+struct kvm_pv_cmd {
+ __u32 cmd; /* Command to be executed */
+ __u16 rc; /* Ultravisor return code */
+ __u16 rrc; /* Ultravisor return reason code */
+ __u64 data; /* Data or address */
+ __u32 flags; /* flags for future extensions. Must be 0 for now */
+ __u32 reserved[3];
+};
+
+struct kvm_s390_zpci_op {
+ /* in */
+ __u32 fh; /* target device */
+ __u8 op; /* operation to perform */
+ __u8 pad[3];
+ union {
+ /* for KVM_S390_ZPCIOP_REG_AEN */
+ struct {
+ __u64 ibv; /* Guest addr of interrupt bit vector */
+ __u64 sb; /* Guest addr of summary bit */
+ __u32 flags;
+ __u32 noi; /* Number of interrupts */
+ __u8 isc; /* Guest interrupt subclass */
+ __u8 sbo; /* Offset of guest summary bit vector */
+ __u16 pad;
+ } reg_aen;
+ __u64 reserved[8];
+ } u;
+};
+
+/* types for kvm_s390_zpci_op->op */
+#define KVM_S390_ZPCIOP_REG_AEN 0
+#define KVM_S390_ZPCIOP_DEREG_AEN 1
+
+/* flags for kvm_s390_zpci_op->u.reg_aen.flags */
+#define KVM_S390_ZPCIOP_REGAEN_HOST (1 << 0)
/* Device control API: s390-specific devices */
#define KVM_DEV_FLIC_GET_ALL_IRQS 1
@@ -156,7 +469,8 @@ struct kvm_s390_vm_cpu_subfunc {
__u8 kdsa[16]; /* with MSA9 */
__u8 sortl[32]; /* with STFLE.150 */
__u8 dfltcc[32]; /* with STFLE.151 */
- __u8 reserved[1728];
+ __u8 pfcr[16]; /* with STFLE.201 */
+ __u8 reserved[1712];
};
#define KVM_S390_VM_CPU_PROCESSOR_UV_FEAT_GUEST 6
diff --git a/tools/arch/s390/include/uapi/asm/kvm_perf.h b/tools/arch/s390/include/uapi/asm/kvm_perf.h
deleted file mode 100644
index 84606b8cc49e..000000000000
--- a/tools/arch/s390/include/uapi/asm/kvm_perf.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-/*
- * Definitions for perf-kvm on s390
- *
- * Copyright 2014 IBM Corp.
- * Author(s): Alexander Yarygin <yarygin@linux.vnet.ibm.com>
- */
-
-#ifndef __LINUX_KVM_PERF_S390_H
-#define __LINUX_KVM_PERF_S390_H
-
-#include <asm/sie.h>
-
-#define DECODE_STR_LEN 40
-
-#define VCPU_ID "id"
-
-#define KVM_ENTRY_TRACE "kvm:kvm_s390_sie_enter"
-#define KVM_EXIT_TRACE "kvm:kvm_s390_sie_exit"
-#define KVM_EXIT_REASON "icptcode"
-
-#endif
diff --git a/tools/arch/x86/dell-uart-backlight-emulator/.gitignore b/tools/arch/x86/dell-uart-backlight-emulator/.gitignore
new file mode 100644
index 000000000000..5c8cad8d72b9
--- /dev/null
+++ b/tools/arch/x86/dell-uart-backlight-emulator/.gitignore
@@ -0,0 +1 @@
+dell-uart-backlight-emulator
diff --git a/tools/arch/x86/dell-uart-backlight-emulator/Makefile b/tools/arch/x86/dell-uart-backlight-emulator/Makefile
new file mode 100644
index 000000000000..6ea1d9fd534b
--- /dev/null
+++ b/tools/arch/x86/dell-uart-backlight-emulator/Makefile
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for the Dell UART backlight emulator
+
+dell-uart-backlight-emulator: dell-uart-backlight-emulator.c
+
+BINDIR ?= /usr/bin
+
+override CFLAGS += -O2 -Wall
+
+%: %.c
+ $(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)
+
+.PHONY : clean
+clean :
+ @rm -f dell-uart-backlight-emulator
+
+install : dell-uart-backlight-emulator
+ install -d $(DESTDIR)$(BINDIR)
+ install -m 755 -p dell-uart-backlight-emulator $(DESTDIR)$(BINDIR)/dell-uart-backlight-emulator
diff --git a/tools/arch/x86/dell-uart-backlight-emulator/README b/tools/arch/x86/dell-uart-backlight-emulator/README
new file mode 100644
index 000000000000..c0d8e52046ee
--- /dev/null
+++ b/tools/arch/x86/dell-uart-backlight-emulator/README
@@ -0,0 +1,46 @@
+Emulator for DELL0501 UART attached backlight controller
+--------------------------------------------------------
+
+Dell All In One (AIO) models released after 2017 use a backlight controller
+board connected to a UART.
+
+In the DSDT this UART port is defined as:
+
+ Name (_HID, "DELL0501")
+ Name (_CID, EisaId ("PNP0501"))
+
+The DELL0501 _HID indicates that we are dealing with a UART with
+the backlight controller board attached.
+
+This small emulator allows testing
+the drivers/platform/x86/dell/dell-uart-backlight.c driver without access
+to an actual Dell All In One.
+
+This requires:
+1. A (desktop) PC with a 16550 UART on the motherboard and a standard DB9
+ connector connected to this UART.
+2. A DB9 NULL modem cable.
+3. A second DB9 serial port, this can e.g. be a USB to serial converter
+ with a DB9 connector plugged into the same desktop PC.
+4. A DSDT overlay for the desktop PC replacing the _HID of the 16550 UART
+ ACPI Device() with "DELL0501" and adding a _CID of "PNP0501", see
+ DSDT.patch for an example of the necessary DSDT changes.
+
+With everything set up and the NULL modem cable connected between
+the two serial ports, run:
+
+./dell-uart-backlight-emulator <path-to-/dev/tty*S#-for-second-port>
+
+For example, when using a USB to serial converter for the second port:
+
+./dell-uart-backlight-emulator /dev/ttyUSB0
+
+And then (re)load the dell-uart-backlight driver:
+
+sudo rmmod dell-uart-backlight; sudo modprobe dell-uart-backlight dyndbg
+
+After this, check "dmesg" to see if the driver correctly received
+the firmware version string from the emulator. If this works, there
+should now be a /sys/class/backlight/dell_uart_backlight/ directory,
+and writes to the brightness or bl_power files should be reflected
+by matching output from the emulator.
diff --git a/tools/arch/x86/dell-uart-backlight-emulator/dell-uart-backlight-emulator.c b/tools/arch/x86/dell-uart-backlight-emulator/dell-uart-backlight-emulator.c
new file mode 100644
index 000000000000..655b6c96d8cf
--- /dev/null
+++ b/tools/arch/x86/dell-uart-backlight-emulator/dell-uart-backlight-emulator.c
@@ -0,0 +1,163 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Dell AIO Serial Backlight board emulator for testing
+ * the Linux dell-uart-backlight driver.
+ *
+ * Copyright (C) 2024 Hans de Goede <hansg@kernel.org>
+ */
+#include <errno.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <sys/un.h>
+#include <termios.h>
+#include <unistd.h>
+
+int serial_fd;
+int brightness = 50;
+
+static unsigned char dell_uart_checksum(unsigned char *buf, int len)
+{
+ unsigned char val = 0;
+
+ while (len-- > 0)
+ val += buf[len];
+
+ return val ^ 0xff;
+}
+
+/* read() will return -1 on SIGINT / SIGTERM causing the mainloop to cleanly exit */
+void signalhdlr(int signum)
+{
+}
+
+int main(int argc, char *argv[])
+{
+ struct sigaction sigact = { .sa_handler = signalhdlr };
+ unsigned char buf[4], csum, response[32];
+ const char *version_str = "PHI23-V321";
+ struct termios tty, saved_tty;
+ int ret, idx, len = 0;
+
+ if (argc != 2) {
+ fprintf(stderr, "Invalid or missing arguments\n");
+ fprintf(stderr, "Usage: %s <serial-port>\n", argv[0]);
+ return 1;
+ }
+
+ serial_fd = open(argv[1], O_RDWR | O_NOCTTY);
+ if (serial_fd == -1) {
+ fprintf(stderr, "Error opening %s: %s\n", argv[1], strerror(errno));
+ return 1;
+ }
+
+ ret = tcgetattr(serial_fd, &tty);
+ if (ret == -1) {
+ fprintf(stderr, "Error getting tcattr: %s\n", strerror(errno));
+ goto out_close;
+ }
+ saved_tty = tty;
+
+ cfsetspeed(&tty, 9600);
+ cfmakeraw(&tty);
+ tty.c_cflag &= ~CSTOPB;
+ tty.c_cflag &= ~CRTSCTS;
+ tty.c_cflag |= CLOCAL | CREAD;
+
+ ret = tcsetattr(serial_fd, TCSANOW, &tty);
+ if (ret == -1) {
+ fprintf(stderr, "Error setting tcattr: %s\n", strerror(errno));
+ goto out_restore;
+ }
+
+ sigaction(SIGINT, &sigact, 0);
+ sigaction(SIGTERM, &sigact, 0);
+
+ idx = 0;
+ while (read(serial_fd, &buf[idx], 1) == 1) {
+ if (idx == 0) {
+ switch (buf[0]) {
+ /* 3 MSB bits: cmd-len + 01010 SOF marker */
+ case 0x6a: len = 3; break;
+ case 0x8a: len = 4; break;
+ default:
+ fprintf(stderr, "Error unexpected first byte: 0x%02x\n", buf[0]);
+ continue; /* Try to sync up with sender */
+ }
+ }
+
+ /* Process msg when len bytes have been received */
+ if (idx != (len - 1)) {
+ idx++;
+ continue;
+ }
+
+ /* Reset idx for next command */
+ idx = 0;
+
+ csum = dell_uart_checksum(buf, len - 1);
+ if (buf[len - 1] != csum) {
+ fprintf(stderr, "Error checksum mismatch got 0x%02x expected 0x%02x\n",
+ buf[len - 1], csum);
+ continue;
+ }
+
+ switch ((buf[0] << 8) | buf[1]) {
+ case 0x6a06: /* cmd = 0x06, get version */
+ len = strlen(version_str);
+ strcpy((char *)&response[2], version_str);
+ printf("Get version, reply: %s\n", version_str);
+ break;
+ case 0x8a0b: /* cmd = 0x0b, set brightness */
+ if (buf[2] > 100) {
+ fprintf(stderr, "Error invalid brightness param: %d\n", buf[2]);
+ continue;
+ }
+
+ len = 0;
+ brightness = buf[2];
+ printf("Set brightness %d\n", brightness);
+ break;
+ case 0x6a0c: /* cmd = 0x0c, get brightness */
+ len = 1;
+ response[2] = brightness;
+ printf("Get brightness, reply: %d\n", brightness);
+ break;
+ case 0x8a0e: /* cmd = 0x0e, set backlight power */
+ if (buf[2] != 0 && buf[2] != 1) {
+ fprintf(stderr, "Error invalid set power param: %d\n", buf[2]);
+ continue;
+ }
+
+ len = 0;
+ printf("Set power %d\n", buf[2]);
+ break;
+ default:
+ fprintf(stderr, "Error unknown cmd 0x%04x\n",
+ (buf[0] << 8) | buf[1]);
+ continue;
+ }
+
+ /* Respond with <total-len> <cmd> <data...> <csum> */
+ response[0] = len + 3; /* response length in bytes */
+ response[1] = buf[1]; /* ack cmd */
+ csum = dell_uart_checksum(response, len + 2);
+ response[len + 2] = csum;
+ ret = write(serial_fd, response, response[0]);
+ if (ret != (response[0]))
+ fprintf(stderr, "Error writing %d bytes: %d\n",
+ response[0], ret);
+ }
+
+ ret = 0;
+out_restore:
+ tcsetattr(serial_fd, TCSANOW, &saved_tty);
+out_close:
+ close(serial_fd);
+ return ret;
+}
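
For reference, a hedged worked example of the frame format the emulator above parses: a request is <SOF/len byte> <cmd> [<param>] <csum>, with the checksum computed exactly as in dell_uart_checksum(). The standalone snippet below (not part of the patch) builds the 3-byte "get version" request:

#include <stdio.h>

int main(void)
{
	unsigned char req[3] = { 0x6a, 0x06, 0x00 };	/* SOF/len byte, cmd 0x06 */
	unsigned char sum = 0;

	for (int i = 0; i < 2; i++)
		sum += req[i];
	req[2] = sum ^ 0xff;				/* 0x8f for this request */

	printf("%02x %02x %02x\n", req[0], req[1], req[2]);
	return 0;
}
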
diff --git a/tools/arch/x86/include/asm/amd-ibs.h b/tools/arch/x86/include/asm/amd/ibs.h
index 93807b437e4d..cbce54fec7b9 100644
--- a/tools/arch/x86/include/asm/amd-ibs.h
+++ b/tools/arch/x86/include/asm/amd/ibs.h
@@ -1,10 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_AMD_IBS_H
+#define _ASM_X86_AMD_IBS_H
+
/*
* From PPR Vol 1 for AMD Family 19h Model 01h B1
* 55898 Rev 0.35 - Feb 5, 2021
*/
-#include "msr-index.h"
+#include "../msr-index.h"
/* IBS_OP_DATA2 DataSrc */
#define IBS_DATA_SRC_LOC_CACHE 2
@@ -64,7 +67,8 @@ union ibs_op_ctl {
opmaxcnt_ext:7, /* 20-26: upper 7 bits of periodic op maximum count */
reserved0:5, /* 27-31: reserved */
opcurcnt:27, /* 32-58: periodic op counter current count */
- reserved1:5; /* 59-63: reserved */
+ ldlat_thrsh:4, /* 59-62: Load Latency threshold */
+ ldlat_en:1; /* 63: Load Latency enabled */
};
};
@@ -150,3 +154,5 @@ struct perf_ibs_data {
};
u64 regs[MSR_AMD64_IBS_REG_COUNT_MAX];
};
+
+#endif /* _ASM_X86_AMD_IBS_H */
diff --git a/tools/arch/x86/include/asm/asm.h b/tools/arch/x86/include/asm/asm.h
index 3ad3da9a7d97..6e1b357c374b 100644
--- a/tools/arch/x86/include/asm/asm.h
+++ b/tools/arch/x86/include/asm/asm.h
@@ -2,7 +2,7 @@
#ifndef _ASM_X86_ASM_H
#define _ASM_X86_ASM_H
-#ifdef __ASSEMBLY__
+#ifdef __ASSEMBLER__
# define __ASM_FORM(x, ...) x,## __VA_ARGS__
# define __ASM_FORM_RAW(x, ...) x,## __VA_ARGS__
# define __ASM_FORM_COMMA(x, ...) x,## __VA_ARGS__,
@@ -108,22 +108,10 @@
#endif
-/*
- * Macros to generate condition code outputs from inline assembly,
- * The output operand must be type "bool".
- */
-#ifdef __GCC_ASM_FLAG_OUTPUTS__
-# define CC_SET(c) "\n\t/* output condition code " #c "*/\n"
-# define CC_OUT(c) "=@cc" #c
-#else
-# define CC_SET(c) "\n\tset" #c " %[_cc_" #c "]\n"
-# define CC_OUT(c) [_cc_ ## c] "=qm"
-#endif
-
#ifdef __KERNEL__
/* Exception table entry */
-#ifdef __ASSEMBLY__
+#ifdef __ASSEMBLER__
# define _ASM_EXTABLE_HANDLE(from, to, handler) \
.pushsection "__ex_table","a" ; \
.balign 4 ; \
@@ -154,7 +142,7 @@
# define _ASM_NOKPROBE(entry)
# endif
-#else /* ! __ASSEMBLY__ */
+#else /* ! __ASSEMBLER__ */
# define _EXPAND_EXTABLE_HANDLE(x) #x
# define _ASM_EXTABLE_HANDLE(from, to, handler) \
" .pushsection \"__ex_table\",\"a\"\n" \
@@ -186,7 +174,7 @@
*/
register unsigned long current_stack_pointer asm(_ASM_SP);
#define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
#endif /* __KERNEL__ */
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
index 29cb275a219d..06fc0479a23f 100644
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -2,188 +2,178 @@
#ifndef _ASM_X86_CPUFEATURES_H
#define _ASM_X86_CPUFEATURES_H
-#ifndef _ASM_X86_REQUIRED_FEATURES_H
-#include <asm/required-features.h>
-#endif
-
-#ifndef _ASM_X86_DISABLED_FEATURES_H
-#include <asm/disabled-features.h>
-#endif
-
/*
* Defines x86 CPU feature bits
*/
-#define NCAPINTS 21 /* N 32-bit words worth of info */
+#define NCAPINTS 22 /* N 32-bit words worth of info */
#define NBUGINTS 2 /* N 32-bit bug flags */
/*
* Note: If the comment begins with a quoted string, that string is used
- * in /proc/cpuinfo instead of the macro name. If the string is "",
- * this feature bit is not displayed in /proc/cpuinfo at all.
+ * in /proc/cpuinfo instead of the macro name. Otherwise, this feature
+ * bit is not displayed in /proc/cpuinfo at all.
*
* When adding new features here that depend on other features,
* please update the table in kernel/cpu/cpuid-deps.c as well.
*/
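
As the note above spells out, a quoted string at the start of the comment is what shows up in /proc/cpuinfo; without one the bit is hidden. A hedged illustration with made-up feature bits (not real definitions):

#define X86_FEATURE_WIDGET	(20*32+ 9) /* "widget" Hypothetical bit, shown as "widget" */
#define X86_FEATURE_WIDGET_HW	(20*32+10) /* Hypothetical bit, not shown in /proc/cpuinfo */
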
/* Intel-defined CPU features, CPUID level 0x00000001 (EDX), word 0 */
-#define X86_FEATURE_FPU ( 0*32+ 0) /* Onboard FPU */
-#define X86_FEATURE_VME ( 0*32+ 1) /* Virtual Mode Extensions */
-#define X86_FEATURE_DE ( 0*32+ 2) /* Debugging Extensions */
-#define X86_FEATURE_PSE ( 0*32+ 3) /* Page Size Extensions */
-#define X86_FEATURE_TSC ( 0*32+ 4) /* Time Stamp Counter */
-#define X86_FEATURE_MSR ( 0*32+ 5) /* Model-Specific Registers */
-#define X86_FEATURE_PAE ( 0*32+ 6) /* Physical Address Extensions */
-#define X86_FEATURE_MCE ( 0*32+ 7) /* Machine Check Exception */
-#define X86_FEATURE_CX8 ( 0*32+ 8) /* CMPXCHG8 instruction */
-#define X86_FEATURE_APIC ( 0*32+ 9) /* Onboard APIC */
-#define X86_FEATURE_SEP ( 0*32+11) /* SYSENTER/SYSEXIT */
-#define X86_FEATURE_MTRR ( 0*32+12) /* Memory Type Range Registers */
-#define X86_FEATURE_PGE ( 0*32+13) /* Page Global Enable */
-#define X86_FEATURE_MCA ( 0*32+14) /* Machine Check Architecture */
-#define X86_FEATURE_CMOV ( 0*32+15) /* CMOV instructions (plus FCMOVcc, FCOMI with FPU) */
-#define X86_FEATURE_PAT ( 0*32+16) /* Page Attribute Table */
-#define X86_FEATURE_PSE36 ( 0*32+17) /* 36-bit PSEs */
-#define X86_FEATURE_PN ( 0*32+18) /* Processor serial number */
-#define X86_FEATURE_CLFLUSH ( 0*32+19) /* CLFLUSH instruction */
+#define X86_FEATURE_FPU ( 0*32+ 0) /* "fpu" Onboard FPU */
+#define X86_FEATURE_VME ( 0*32+ 1) /* "vme" Virtual Mode Extensions */
+#define X86_FEATURE_DE ( 0*32+ 2) /* "de" Debugging Extensions */
+#define X86_FEATURE_PSE ( 0*32+ 3) /* "pse" Page Size Extensions */
+#define X86_FEATURE_TSC ( 0*32+ 4) /* "tsc" Time Stamp Counter */
+#define X86_FEATURE_MSR ( 0*32+ 5) /* "msr" Model-Specific Registers */
+#define X86_FEATURE_PAE ( 0*32+ 6) /* "pae" Physical Address Extensions */
+#define X86_FEATURE_MCE ( 0*32+ 7) /* "mce" Machine Check Exception */
+#define X86_FEATURE_CX8 ( 0*32+ 8) /* "cx8" CMPXCHG8 instruction */
+#define X86_FEATURE_APIC ( 0*32+ 9) /* "apic" Onboard APIC */
+#define X86_FEATURE_SEP ( 0*32+11) /* "sep" SYSENTER/SYSEXIT */
+#define X86_FEATURE_MTRR ( 0*32+12) /* "mtrr" Memory Type Range Registers */
+#define X86_FEATURE_PGE ( 0*32+13) /* "pge" Page Global Enable */
+#define X86_FEATURE_MCA ( 0*32+14) /* "mca" Machine Check Architecture */
+#define X86_FEATURE_CMOV ( 0*32+15) /* "cmov" CMOV instructions (plus FCMOVcc, FCOMI with FPU) */
+#define X86_FEATURE_PAT ( 0*32+16) /* "pat" Page Attribute Table */
+#define X86_FEATURE_PSE36 ( 0*32+17) /* "pse36" 36-bit PSEs */
+#define X86_FEATURE_PN ( 0*32+18) /* "pn" Processor serial number */
+#define X86_FEATURE_CLFLUSH ( 0*32+19) /* "clflush" CLFLUSH instruction */
#define X86_FEATURE_DS ( 0*32+21) /* "dts" Debug Store */
-#define X86_FEATURE_ACPI ( 0*32+22) /* ACPI via MSR */
-#define X86_FEATURE_MMX ( 0*32+23) /* Multimedia Extensions */
-#define X86_FEATURE_FXSR ( 0*32+24) /* FXSAVE/FXRSTOR, CR4.OSFXSR */
+#define X86_FEATURE_ACPI ( 0*32+22) /* "acpi" ACPI via MSR */
+#define X86_FEATURE_MMX ( 0*32+23) /* "mmx" Multimedia Extensions */
+#define X86_FEATURE_FXSR ( 0*32+24) /* "fxsr" FXSAVE/FXRSTOR, CR4.OSFXSR */
#define X86_FEATURE_XMM ( 0*32+25) /* "sse" */
#define X86_FEATURE_XMM2 ( 0*32+26) /* "sse2" */
#define X86_FEATURE_SELFSNOOP ( 0*32+27) /* "ss" CPU self snoop */
-#define X86_FEATURE_HT ( 0*32+28) /* Hyper-Threading */
+#define X86_FEATURE_HT ( 0*32+28) /* "ht" Hyper-Threading */
#define X86_FEATURE_ACC ( 0*32+29) /* "tm" Automatic clock control */
-#define X86_FEATURE_IA64 ( 0*32+30) /* IA-64 processor */
-#define X86_FEATURE_PBE ( 0*32+31) /* Pending Break Enable */
+#define X86_FEATURE_IA64 ( 0*32+30) /* "ia64" IA-64 processor */
+#define X86_FEATURE_PBE ( 0*32+31) /* "pbe" Pending Break Enable */
/* AMD-defined CPU features, CPUID level 0x80000001, word 1 */
/* Don't duplicate feature flags which are redundant with Intel! */
-#define X86_FEATURE_SYSCALL ( 1*32+11) /* SYSCALL/SYSRET */
-#define X86_FEATURE_MP ( 1*32+19) /* MP Capable */
-#define X86_FEATURE_NX ( 1*32+20) /* Execute Disable */
-#define X86_FEATURE_MMXEXT ( 1*32+22) /* AMD MMX extensions */
-#define X86_FEATURE_FXSR_OPT ( 1*32+25) /* FXSAVE/FXRSTOR optimizations */
+#define X86_FEATURE_SYSCALL ( 1*32+11) /* "syscall" SYSCALL/SYSRET */
+#define X86_FEATURE_MP ( 1*32+19) /* "mp" MP Capable */
+#define X86_FEATURE_NX ( 1*32+20) /* "nx" Execute Disable */
+#define X86_FEATURE_MMXEXT ( 1*32+22) /* "mmxext" AMD MMX extensions */
+#define X86_FEATURE_FXSR_OPT ( 1*32+25) /* "fxsr_opt" FXSAVE/FXRSTOR optimizations */
#define X86_FEATURE_GBPAGES ( 1*32+26) /* "pdpe1gb" GB pages */
-#define X86_FEATURE_RDTSCP ( 1*32+27) /* RDTSCP */
-#define X86_FEATURE_LM ( 1*32+29) /* Long Mode (x86-64, 64-bit support) */
-#define X86_FEATURE_3DNOWEXT ( 1*32+30) /* AMD 3DNow extensions */
-#define X86_FEATURE_3DNOW ( 1*32+31) /* 3DNow */
+#define X86_FEATURE_RDTSCP ( 1*32+27) /* "rdtscp" RDTSCP */
+#define X86_FEATURE_LM ( 1*32+29) /* "lm" Long Mode (x86-64, 64-bit support) */
+#define X86_FEATURE_3DNOWEXT ( 1*32+30) /* "3dnowext" AMD 3DNow extensions */
+#define X86_FEATURE_3DNOW ( 1*32+31) /* "3dnow" 3DNow */
/* Transmeta-defined CPU features, CPUID level 0x80860001, word 2 */
-#define X86_FEATURE_RECOVERY ( 2*32+ 0) /* CPU in recovery mode */
-#define X86_FEATURE_LONGRUN ( 2*32+ 1) /* Longrun power control */
-#define X86_FEATURE_LRTI ( 2*32+ 3) /* LongRun table interface */
+#define X86_FEATURE_RECOVERY ( 2*32+ 0) /* "recovery" CPU in recovery mode */
+#define X86_FEATURE_LONGRUN ( 2*32+ 1) /* "longrun" Longrun power control */
+#define X86_FEATURE_LRTI ( 2*32+ 3) /* "lrti" LongRun table interface */
/* Other features, Linux-defined mapping, word 3 */
/* This range is used for feature bits which conflict or are synthesized */
-#define X86_FEATURE_CXMMX ( 3*32+ 0) /* Cyrix MMX extensions */
-#define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* AMD K6 nonstandard MTRRs */
-#define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* Cyrix ARRs (= MTRRs) */
-#define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* Centaur MCRs (= MTRRs) */
-
-/* CPU types for specific tunings: */
-#define X86_FEATURE_K8 ( 3*32+ 4) /* "" Opteron, Athlon64 */
-/* FREE, was #define X86_FEATURE_K7 ( 3*32+ 5) "" Athlon */
-#define X86_FEATURE_P3 ( 3*32+ 6) /* "" P3 */
-#define X86_FEATURE_P4 ( 3*32+ 7) /* "" P4 */
-#define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */
-#define X86_FEATURE_UP ( 3*32+ 9) /* SMP kernel running on UP */
-#define X86_FEATURE_ART ( 3*32+10) /* Always running timer (ART) */
-#define X86_FEATURE_ARCH_PERFMON ( 3*32+11) /* Intel Architectural PerfMon */
-#define X86_FEATURE_PEBS ( 3*32+12) /* Precise-Event Based Sampling */
-#define X86_FEATURE_BTS ( 3*32+13) /* Branch Trace Store */
-#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* "" syscall in IA32 userspace */
-#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
-#define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
-#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
-/* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */
-#define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
-#define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
-#define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
-#define X86_FEATURE_XTOPOLOGY ( 3*32+22) /* CPU topology enum extensions */
-#define X86_FEATURE_TSC_RELIABLE ( 3*32+23) /* TSC is known to be reliable */
-#define X86_FEATURE_NONSTOP_TSC ( 3*32+24) /* TSC does not stop in C states */
-#define X86_FEATURE_CPUID ( 3*32+25) /* CPU has CPUID instruction itself */
-#define X86_FEATURE_EXTD_APICID ( 3*32+26) /* Extended APICID (8 bits) */
-#define X86_FEATURE_AMD_DCM ( 3*32+27) /* AMD multi-node processor */
-#define X86_FEATURE_APERFMPERF ( 3*32+28) /* P-State hardware coordination feedback capability (APERF/MPERF MSRs) */
-#define X86_FEATURE_RAPL ( 3*32+29) /* AMD/Hygon RAPL interface */
-#define X86_FEATURE_NONSTOP_TSC_S3 ( 3*32+30) /* TSC doesn't stop in S3 state */
-#define X86_FEATURE_TSC_KNOWN_FREQ ( 3*32+31) /* TSC has known frequency */
+#define X86_FEATURE_CXMMX ( 3*32+ 0) /* "cxmmx" Cyrix MMX extensions */
+#define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* "k6_mtrr" AMD K6 nonstandard MTRRs */
+#define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* "cyrix_arr" Cyrix ARRs (= MTRRs) */
+#define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* "centaur_mcr" Centaur MCRs (= MTRRs) */
+#define X86_FEATURE_K8 ( 3*32+ 4) /* Opteron, Athlon64 */
+#define X86_FEATURE_ZEN5 ( 3*32+ 5) /* CPU based on Zen5 microarchitecture */
+#define X86_FEATURE_ZEN6 ( 3*32+ 6) /* CPU based on Zen6 microarchitecture */
+/* Free ( 3*32+ 7) */
+#define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* "constant_tsc" TSC ticks at a constant rate */
+#define X86_FEATURE_UP ( 3*32+ 9) /* "up" SMP kernel running on UP */
+#define X86_FEATURE_ART ( 3*32+10) /* "art" Always running timer (ART) */
+#define X86_FEATURE_ARCH_PERFMON ( 3*32+11) /* "arch_perfmon" Intel Architectural PerfMon */
+#define X86_FEATURE_PEBS ( 3*32+12) /* "pebs" Precise-Event Based Sampling */
+#define X86_FEATURE_BTS ( 3*32+13) /* "bts" Branch Trace Store */
+#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* syscall in IA32 userspace */
+#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* sysenter in IA32 userspace */
+#define X86_FEATURE_REP_GOOD ( 3*32+16) /* "rep_good" REP microcode works well */
+#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* "amd_lbr_v2" AMD Last Branch Record Extension Version 2 */
+#define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* Clear CPU buffers using VERW */
+#define X86_FEATURE_ACC_POWER ( 3*32+19) /* "acc_power" AMD Accumulated Power Mechanism */
+#define X86_FEATURE_NOPL ( 3*32+20) /* "nopl" The NOPL (0F 1F) instructions */
+#define X86_FEATURE_ALWAYS ( 3*32+21) /* Always-present feature */
+#define X86_FEATURE_XTOPOLOGY ( 3*32+22) /* "xtopology" CPU topology enum extensions */
+#define X86_FEATURE_TSC_RELIABLE ( 3*32+23) /* "tsc_reliable" TSC is known to be reliable */
+#define X86_FEATURE_NONSTOP_TSC ( 3*32+24) /* "nonstop_tsc" TSC does not stop in C states */
+#define X86_FEATURE_CPUID ( 3*32+25) /* "cpuid" CPU has CPUID instruction itself */
+#define X86_FEATURE_EXTD_APICID ( 3*32+26) /* "extd_apicid" Extended APICID (8 bits) */
+#define X86_FEATURE_AMD_DCM ( 3*32+27) /* "amd_dcm" AMD multi-node processor */
+#define X86_FEATURE_APERFMPERF ( 3*32+28) /* "aperfmperf" P-State hardware coordination feedback capability (APERF/MPERF MSRs) */
+#define X86_FEATURE_RAPL ( 3*32+29) /* "rapl" AMD/Hygon RAPL interface */
+#define X86_FEATURE_NONSTOP_TSC_S3 ( 3*32+30) /* "nonstop_tsc_s3" TSC doesn't stop in S3 state */
+#define X86_FEATURE_TSC_KNOWN_FREQ ( 3*32+31) /* "tsc_known_freq" TSC has known frequency */
/* Intel-defined CPU features, CPUID level 0x00000001 (ECX), word 4 */
#define X86_FEATURE_XMM3 ( 4*32+ 0) /* "pni" SSE-3 */
-#define X86_FEATURE_PCLMULQDQ ( 4*32+ 1) /* PCLMULQDQ instruction */
-#define X86_FEATURE_DTES64 ( 4*32+ 2) /* 64-bit Debug Store */
+#define X86_FEATURE_PCLMULQDQ ( 4*32+ 1) /* "pclmulqdq" PCLMULQDQ instruction */
+#define X86_FEATURE_DTES64 ( 4*32+ 2) /* "dtes64" 64-bit Debug Store */
#define X86_FEATURE_MWAIT ( 4*32+ 3) /* "monitor" MONITOR/MWAIT support */
#define X86_FEATURE_DSCPL ( 4*32+ 4) /* "ds_cpl" CPL-qualified (filtered) Debug Store */
-#define X86_FEATURE_VMX ( 4*32+ 5) /* Hardware virtualization */
-#define X86_FEATURE_SMX ( 4*32+ 6) /* Safer Mode eXtensions */
-#define X86_FEATURE_EST ( 4*32+ 7) /* Enhanced SpeedStep */
-#define X86_FEATURE_TM2 ( 4*32+ 8) /* Thermal Monitor 2 */
-#define X86_FEATURE_SSSE3 ( 4*32+ 9) /* Supplemental SSE-3 */
-#define X86_FEATURE_CID ( 4*32+10) /* Context ID */
-#define X86_FEATURE_SDBG ( 4*32+11) /* Silicon Debug */
-#define X86_FEATURE_FMA ( 4*32+12) /* Fused multiply-add */
-#define X86_FEATURE_CX16 ( 4*32+13) /* CMPXCHG16B instruction */
-#define X86_FEATURE_XTPR ( 4*32+14) /* Send Task Priority Messages */
-#define X86_FEATURE_PDCM ( 4*32+15) /* Perf/Debug Capabilities MSR */
-#define X86_FEATURE_PCID ( 4*32+17) /* Process Context Identifiers */
-#define X86_FEATURE_DCA ( 4*32+18) /* Direct Cache Access */
+#define X86_FEATURE_VMX ( 4*32+ 5) /* "vmx" Hardware virtualization */
+#define X86_FEATURE_SMX ( 4*32+ 6) /* "smx" Safer Mode eXtensions */
+#define X86_FEATURE_EST ( 4*32+ 7) /* "est" Enhanced SpeedStep */
+#define X86_FEATURE_TM2 ( 4*32+ 8) /* "tm2" Thermal Monitor 2 */
+#define X86_FEATURE_SSSE3 ( 4*32+ 9) /* "ssse3" Supplemental SSE-3 */
+#define X86_FEATURE_CID ( 4*32+10) /* "cid" Context ID */
+#define X86_FEATURE_SDBG ( 4*32+11) /* "sdbg" Silicon Debug */
+#define X86_FEATURE_FMA ( 4*32+12) /* "fma" Fused multiply-add */
+#define X86_FEATURE_CX16 ( 4*32+13) /* "cx16" CMPXCHG16B instruction */
+#define X86_FEATURE_XTPR ( 4*32+14) /* "xtpr" Send Task Priority Messages */
+#define X86_FEATURE_PDCM ( 4*32+15) /* "pdcm" Perf/Debug Capabilities MSR */
+#define X86_FEATURE_PCID ( 4*32+17) /* "pcid" Process Context Identifiers */
+#define X86_FEATURE_DCA ( 4*32+18) /* "dca" Direct Cache Access */
#define X86_FEATURE_XMM4_1 ( 4*32+19) /* "sse4_1" SSE-4.1 */
#define X86_FEATURE_XMM4_2 ( 4*32+20) /* "sse4_2" SSE-4.2 */
-#define X86_FEATURE_X2APIC ( 4*32+21) /* X2APIC */
-#define X86_FEATURE_MOVBE ( 4*32+22) /* MOVBE instruction */
-#define X86_FEATURE_POPCNT ( 4*32+23) /* POPCNT instruction */
-#define X86_FEATURE_TSC_DEADLINE_TIMER ( 4*32+24) /* TSC deadline timer */
-#define X86_FEATURE_AES ( 4*32+25) /* AES instructions */
-#define X86_FEATURE_XSAVE ( 4*32+26) /* XSAVE/XRSTOR/XSETBV/XGETBV instructions */
-#define X86_FEATURE_OSXSAVE ( 4*32+27) /* "" XSAVE instruction enabled in the OS */
-#define X86_FEATURE_AVX ( 4*32+28) /* Advanced Vector Extensions */
-#define X86_FEATURE_F16C ( 4*32+29) /* 16-bit FP conversions */
-#define X86_FEATURE_RDRAND ( 4*32+30) /* RDRAND instruction */
-#define X86_FEATURE_HYPERVISOR ( 4*32+31) /* Running on a hypervisor */
+#define X86_FEATURE_X2APIC ( 4*32+21) /* "x2apic" X2APIC */
+#define X86_FEATURE_MOVBE ( 4*32+22) /* "movbe" MOVBE instruction */
+#define X86_FEATURE_POPCNT ( 4*32+23) /* "popcnt" POPCNT instruction */
+#define X86_FEATURE_TSC_DEADLINE_TIMER ( 4*32+24) /* "tsc_deadline_timer" TSC deadline timer */
+#define X86_FEATURE_AES ( 4*32+25) /* "aes" AES instructions */
+#define X86_FEATURE_XSAVE ( 4*32+26) /* "xsave" XSAVE/XRSTOR/XSETBV/XGETBV instructions */
+#define X86_FEATURE_OSXSAVE ( 4*32+27) /* XSAVE instruction enabled in the OS */
+#define X86_FEATURE_AVX ( 4*32+28) /* "avx" Advanced Vector Extensions */
+#define X86_FEATURE_F16C ( 4*32+29) /* "f16c" 16-bit FP conversions */
+#define X86_FEATURE_RDRAND ( 4*32+30) /* "rdrand" RDRAND instruction */
+#define X86_FEATURE_HYPERVISOR ( 4*32+31) /* "hypervisor" Running on a hypervisor */
/* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
#define X86_FEATURE_XSTORE ( 5*32+ 2) /* "rng" RNG present (xstore) */
#define X86_FEATURE_XSTORE_EN ( 5*32+ 3) /* "rng_en" RNG enabled */
#define X86_FEATURE_XCRYPT ( 5*32+ 6) /* "ace" on-CPU crypto (xcrypt) */
#define X86_FEATURE_XCRYPT_EN ( 5*32+ 7) /* "ace_en" on-CPU crypto enabled */
-#define X86_FEATURE_ACE2 ( 5*32+ 8) /* Advanced Cryptography Engine v2 */
-#define X86_FEATURE_ACE2_EN ( 5*32+ 9) /* ACE v2 enabled */
-#define X86_FEATURE_PHE ( 5*32+10) /* PadLock Hash Engine */
-#define X86_FEATURE_PHE_EN ( 5*32+11) /* PHE enabled */
-#define X86_FEATURE_PMM ( 5*32+12) /* PadLock Montgomery Multiplier */
-#define X86_FEATURE_PMM_EN ( 5*32+13) /* PMM enabled */
+#define X86_FEATURE_ACE2 ( 5*32+ 8) /* "ace2" Advanced Cryptography Engine v2 */
+#define X86_FEATURE_ACE2_EN ( 5*32+ 9) /* "ace2_en" ACE v2 enabled */
+#define X86_FEATURE_PHE ( 5*32+10) /* "phe" PadLock Hash Engine */
+#define X86_FEATURE_PHE_EN ( 5*32+11) /* "phe_en" PHE enabled */
+#define X86_FEATURE_PMM ( 5*32+12) /* "pmm" PadLock Montgomery Multiplier */
+#define X86_FEATURE_PMM_EN ( 5*32+13) /* "pmm_en" PMM enabled */
/* More extended AMD flags: CPUID level 0x80000001, ECX, word 6 */
-#define X86_FEATURE_LAHF_LM ( 6*32+ 0) /* LAHF/SAHF in long mode */
-#define X86_FEATURE_CMP_LEGACY ( 6*32+ 1) /* If yes HyperThreading not valid */
-#define X86_FEATURE_SVM ( 6*32+ 2) /* Secure Virtual Machine */
-#define X86_FEATURE_EXTAPIC ( 6*32+ 3) /* Extended APIC space */
-#define X86_FEATURE_CR8_LEGACY ( 6*32+ 4) /* CR8 in 32-bit mode */
-#define X86_FEATURE_ABM ( 6*32+ 5) /* Advanced bit manipulation */
-#define X86_FEATURE_SSE4A ( 6*32+ 6) /* SSE-4A */
-#define X86_FEATURE_MISALIGNSSE ( 6*32+ 7) /* Misaligned SSE mode */
-#define X86_FEATURE_3DNOWPREFETCH ( 6*32+ 8) /* 3DNow prefetch instructions */
-#define X86_FEATURE_OSVW ( 6*32+ 9) /* OS Visible Workaround */
-#define X86_FEATURE_IBS ( 6*32+10) /* Instruction Based Sampling */
-#define X86_FEATURE_XOP ( 6*32+11) /* extended AVX instructions */
-#define X86_FEATURE_SKINIT ( 6*32+12) /* SKINIT/STGI instructions */
-#define X86_FEATURE_WDT ( 6*32+13) /* Watchdog timer */
-#define X86_FEATURE_LWP ( 6*32+15) /* Light Weight Profiling */
-#define X86_FEATURE_FMA4 ( 6*32+16) /* 4 operands MAC instructions */
-#define X86_FEATURE_TCE ( 6*32+17) /* Translation Cache Extension */
-#define X86_FEATURE_NODEID_MSR ( 6*32+19) /* NodeId MSR */
-#define X86_FEATURE_TBM ( 6*32+21) /* Trailing Bit Manipulations */
-#define X86_FEATURE_TOPOEXT ( 6*32+22) /* Topology extensions CPUID leafs */
-#define X86_FEATURE_PERFCTR_CORE ( 6*32+23) /* Core performance counter extensions */
-#define X86_FEATURE_PERFCTR_NB ( 6*32+24) /* NB performance counter extensions */
-#define X86_FEATURE_BPEXT ( 6*32+26) /* Data breakpoint extension */
-#define X86_FEATURE_PTSC ( 6*32+27) /* Performance time-stamp counter */
-#define X86_FEATURE_PERFCTR_LLC ( 6*32+28) /* Last Level Cache performance counter extensions */
-#define X86_FEATURE_MWAITX ( 6*32+29) /* MWAIT extension (MONITORX/MWAITX instructions) */
+#define X86_FEATURE_LAHF_LM ( 6*32+ 0) /* "lahf_lm" LAHF/SAHF in long mode */
+#define X86_FEATURE_CMP_LEGACY ( 6*32+ 1) /* "cmp_legacy" If yes HyperThreading not valid */
+#define X86_FEATURE_SVM ( 6*32+ 2) /* "svm" Secure Virtual Machine */
+#define X86_FEATURE_EXTAPIC ( 6*32+ 3) /* "extapic" Extended APIC space */
+#define X86_FEATURE_CR8_LEGACY ( 6*32+ 4) /* "cr8_legacy" CR8 in 32-bit mode */
+#define X86_FEATURE_ABM ( 6*32+ 5) /* "abm" Advanced bit manipulation */
+#define X86_FEATURE_SSE4A ( 6*32+ 6) /* "sse4a" SSE-4A */
+#define X86_FEATURE_MISALIGNSSE ( 6*32+ 7) /* "misalignsse" Misaligned SSE mode */
+#define X86_FEATURE_3DNOWPREFETCH ( 6*32+ 8) /* "3dnowprefetch" 3DNow prefetch instructions */
+#define X86_FEATURE_OSVW ( 6*32+ 9) /* "osvw" OS Visible Workaround */
+#define X86_FEATURE_IBS ( 6*32+10) /* "ibs" Instruction Based Sampling */
+#define X86_FEATURE_XOP ( 6*32+11) /* "xop" Extended AVX instructions */
+#define X86_FEATURE_SKINIT ( 6*32+12) /* "skinit" SKINIT/STGI instructions */
+#define X86_FEATURE_WDT ( 6*32+13) /* "wdt" Watchdog timer */
+#define X86_FEATURE_LWP ( 6*32+15) /* "lwp" Light Weight Profiling */
+#define X86_FEATURE_FMA4 ( 6*32+16) /* "fma4" 4 operands MAC instructions */
+#define X86_FEATURE_TCE ( 6*32+17) /* "tce" Translation Cache Extension */
+#define X86_FEATURE_NODEID_MSR ( 6*32+19) /* "nodeid_msr" NodeId MSR */
+#define X86_FEATURE_TBM ( 6*32+21) /* "tbm" Trailing Bit Manipulations */
+#define X86_FEATURE_TOPOEXT ( 6*32+22) /* "topoext" Topology extensions CPUID leafs */
+#define X86_FEATURE_PERFCTR_CORE ( 6*32+23) /* "perfctr_core" Core performance counter extensions */
+#define X86_FEATURE_PERFCTR_NB ( 6*32+24) /* "perfctr_nb" NB performance counter extensions */
+#define X86_FEATURE_BPEXT ( 6*32+26) /* "bpext" Data breakpoint extension */
+#define X86_FEATURE_PTSC ( 6*32+27) /* "ptsc" Performance time-stamp counter */
+#define X86_FEATURE_PERFCTR_LLC ( 6*32+28) /* "perfctr_llc" Last Level Cache performance counter extensions */
+#define X86_FEATURE_MWAITX ( 6*32+29) /* "mwaitx" MWAIT extension (MONITORX/MWAITX instructions) */
/*
* Auxiliary flags: Linux defined - For features scattered in various
@@ -191,93 +181,93 @@
*
* Reuse free bits when adding new feature flags!
*/
-#define X86_FEATURE_RING3MWAIT ( 7*32+ 0) /* Ring 3 MONITOR/MWAIT instructions */
-#define X86_FEATURE_CPUID_FAULT ( 7*32+ 1) /* Intel CPUID faulting */
-#define X86_FEATURE_CPB ( 7*32+ 2) /* AMD Core Performance Boost */
-#define X86_FEATURE_EPB ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
-#define X86_FEATURE_CAT_L3 ( 7*32+ 4) /* Cache Allocation Technology L3 */
-#define X86_FEATURE_CAT_L2 ( 7*32+ 5) /* Cache Allocation Technology L2 */
-#define X86_FEATURE_CDP_L3 ( 7*32+ 6) /* Code and Data Prioritization L3 */
-#define X86_FEATURE_TDX_HOST_PLATFORM ( 7*32+ 7) /* Platform supports being a TDX host */
-#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
-#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
-#define X86_FEATURE_XCOMPACTED ( 7*32+10) /* "" Use compacted XSTATE (XSAVES or XSAVEC) */
-#define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
-#define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
-#define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* "" Fill RSB on VM-Exit */
-#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
-#define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
-#define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
-#define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */
-#define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
-#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */
-#define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* AMD Performance Monitoring Version 2 */
-#define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
-#define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
-#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */
-#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* "" AMD SSBD implementation via LS_CFG MSR */
-#define X86_FEATURE_IBRS ( 7*32+25) /* Indirect Branch Restricted Speculation */
-#define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */
-#define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */
-#define X86_FEATURE_ZEN ( 7*32+28) /* "" Generic flag for all Zen and newer */
-#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */
-#define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* Enhanced IBRS */
-#define X86_FEATURE_MSR_IA32_FEAT_CTL ( 7*32+31) /* "" MSR IA32_FEAT_CTL configured */
+#define X86_FEATURE_RING3MWAIT ( 7*32+ 0) /* "ring3mwait" Ring 3 MONITOR/MWAIT instructions */
+#define X86_FEATURE_CPUID_FAULT ( 7*32+ 1) /* "cpuid_fault" Intel CPUID faulting */
+#define X86_FEATURE_CPB ( 7*32+ 2) /* "cpb" AMD Core Performance Boost */
+#define X86_FEATURE_EPB ( 7*32+ 3) /* "epb" IA32_ENERGY_PERF_BIAS support */
+#define X86_FEATURE_CAT_L3 ( 7*32+ 4) /* "cat_l3" Cache Allocation Technology L3 */
+#define X86_FEATURE_CAT_L2 ( 7*32+ 5) /* "cat_l2" Cache Allocation Technology L2 */
+#define X86_FEATURE_CDP_L3 ( 7*32+ 6) /* "cdp_l3" Code and Data Prioritization L3 */
+#define X86_FEATURE_TDX_HOST_PLATFORM ( 7*32+ 7) /* "tdx_host_platform" Platform supports being a TDX host */
+#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* "hw_pstate" AMD HW-PState */
+#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* "proc_feedback" AMD ProcFeedbackInterface */
+#define X86_FEATURE_XCOMPACTED ( 7*32+10) /* Use compacted XSTATE (XSAVES or XSAVEC) */
+#define X86_FEATURE_PTI ( 7*32+11) /* "pti" Kernel Page Table Isolation enabled */
+#define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* Set/clear IBRS on kernel entry/exit */
+#define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* Fill RSB on VM-Exit */
+#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* "intel_ppin" Intel Processor Inventory Number */
+#define X86_FEATURE_CDP_L2 ( 7*32+15) /* "cdp_l2" Code and Data Prioritization L2 */
+#define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* MSR SPEC_CTRL is implemented */
+#define X86_FEATURE_SSBD ( 7*32+17) /* "ssbd" Speculative Store Bypass Disable */
+#define X86_FEATURE_MBA ( 7*32+18) /* "mba" Memory Bandwidth Allocation */
+#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */
+#define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* "perfmon_v2" AMD Performance Monitoring Version 2 */
+#define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* Use IBRS during runtime firmware calls */
+#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* Disable Speculative Store Bypass. */
+#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* AMD SSBD implementation via LS_CFG MSR */
+#define X86_FEATURE_IBRS ( 7*32+25) /* "ibrs" Indirect Branch Restricted Speculation */
+#define X86_FEATURE_IBPB ( 7*32+26) /* "ibpb" Indirect Branch Prediction Barrier without a guaranteed RSB flush */
+#define X86_FEATURE_STIBP ( 7*32+27) /* "stibp" Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_ZEN ( 7*32+28) /* Generic flag for all Zen and newer */
+#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* L1TF workaround PTE inversion */
+#define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* "ibrs_enhanced" Enhanced IBRS */
+#define X86_FEATURE_MSR_IA32_FEAT_CTL ( 7*32+31) /* MSR IA32_FEAT_CTL configured */
/* Virtualization flags: Linux defined, word 8 */
-#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
-#define X86_FEATURE_FLEXPRIORITY ( 8*32+ 1) /* Intel FlexPriority */
-#define X86_FEATURE_EPT ( 8*32+ 2) /* Intel Extended Page Table */
-#define X86_FEATURE_VPID ( 8*32+ 3) /* Intel Virtual Processor ID */
+#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* "tpr_shadow" Intel TPR Shadow */
+#define X86_FEATURE_FLEXPRIORITY ( 8*32+ 1) /* "flexpriority" Intel FlexPriority */
+#define X86_FEATURE_EPT ( 8*32+ 2) /* "ept" Intel Extended Page Table */
+#define X86_FEATURE_VPID ( 8*32+ 3) /* "vpid" Intel Virtual Processor ID */
+#define X86_FEATURE_COHERENCY_SFW_NO ( 8*32+ 4) /* SNP cache coherency software work around not needed */
-#define X86_FEATURE_VMMCALL ( 8*32+15) /* Prefer VMMCALL to VMCALL */
-#define X86_FEATURE_XENPV ( 8*32+16) /* "" Xen paravirtual guest */
-#define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */
-#define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
-#define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
-#define X86_FEATURE_PVUNLOCK ( 8*32+20) /* "" PV unlock function */
-#define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* "" PV vcpu_is_preempted function */
-#define X86_FEATURE_TDX_GUEST ( 8*32+22) /* Intel Trust Domain Extensions Guest */
+#define X86_FEATURE_VMMCALL ( 8*32+15) /* "vmmcall" Prefer VMMCALL to VMCALL */
+#define X86_FEATURE_XENPV ( 8*32+16) /* Xen paravirtual guest */
+#define X86_FEATURE_EPT_AD ( 8*32+17) /* "ept_ad" Intel Extended Page Table access-dirty bit */
+#define X86_FEATURE_VMCALL ( 8*32+18) /* Hypervisor supports the VMCALL instruction */
+#define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* VMware prefers VMMCALL hypercall instruction */
+#define X86_FEATURE_PVUNLOCK ( 8*32+20) /* PV unlock function */
+#define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* PV vcpu_is_preempted function */
+#define X86_FEATURE_TDX_GUEST ( 8*32+22) /* "tdx_guest" Intel Trust Domain Extensions Guest */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
-#define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
-#define X86_FEATURE_TSC_ADJUST ( 9*32+ 1) /* TSC adjustment MSR 0x3B */
-#define X86_FEATURE_SGX ( 9*32+ 2) /* Software Guard Extensions */
-#define X86_FEATURE_BMI1 ( 9*32+ 3) /* 1st group bit manipulation extensions */
-#define X86_FEATURE_HLE ( 9*32+ 4) /* Hardware Lock Elision */
-#define X86_FEATURE_AVX2 ( 9*32+ 5) /* AVX2 instructions */
-#define X86_FEATURE_FDP_EXCPTN_ONLY ( 9*32+ 6) /* "" FPU data pointer updated only on x87 exceptions */
-#define X86_FEATURE_SMEP ( 9*32+ 7) /* Supervisor Mode Execution Protection */
-#define X86_FEATURE_BMI2 ( 9*32+ 8) /* 2nd group bit manipulation extensions */
-#define X86_FEATURE_ERMS ( 9*32+ 9) /* Enhanced REP MOVSB/STOSB instructions */
-#define X86_FEATURE_INVPCID ( 9*32+10) /* Invalidate Processor Context ID */
-#define X86_FEATURE_RTM ( 9*32+11) /* Restricted Transactional Memory */
-#define X86_FEATURE_CQM ( 9*32+12) /* Cache QoS Monitoring */
-#define X86_FEATURE_ZERO_FCS_FDS ( 9*32+13) /* "" Zero out FPU CS and FPU DS */
-#define X86_FEATURE_MPX ( 9*32+14) /* Memory Protection Extension */
-#define X86_FEATURE_RDT_A ( 9*32+15) /* Resource Director Technology Allocation */
-#define X86_FEATURE_AVX512F ( 9*32+16) /* AVX-512 Foundation */
-#define X86_FEATURE_AVX512DQ ( 9*32+17) /* AVX-512 DQ (Double/Quad granular) Instructions */
-#define X86_FEATURE_RDSEED ( 9*32+18) /* RDSEED instruction */
-#define X86_FEATURE_ADX ( 9*32+19) /* ADCX and ADOX instructions */
-#define X86_FEATURE_SMAP ( 9*32+20) /* Supervisor Mode Access Prevention */
-#define X86_FEATURE_AVX512IFMA ( 9*32+21) /* AVX-512 Integer Fused Multiply-Add instructions */
-#define X86_FEATURE_CLFLUSHOPT ( 9*32+23) /* CLFLUSHOPT instruction */
-#define X86_FEATURE_CLWB ( 9*32+24) /* CLWB instruction */
-#define X86_FEATURE_INTEL_PT ( 9*32+25) /* Intel Processor Trace */
-#define X86_FEATURE_AVX512PF ( 9*32+26) /* AVX-512 Prefetch */
-#define X86_FEATURE_AVX512ER ( 9*32+27) /* AVX-512 Exponential and Reciprocal */
-#define X86_FEATURE_AVX512CD ( 9*32+28) /* AVX-512 Conflict Detection */
-#define X86_FEATURE_SHA_NI ( 9*32+29) /* SHA1/SHA256 Instruction Extensions */
-#define X86_FEATURE_AVX512BW ( 9*32+30) /* AVX-512 BW (Byte/Word granular) Instructions */
-#define X86_FEATURE_AVX512VL ( 9*32+31) /* AVX-512 VL (128/256 Vector Length) Extensions */
+#define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* "fsgsbase" RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions */
+#define X86_FEATURE_TSC_ADJUST ( 9*32+ 1) /* "tsc_adjust" TSC adjustment MSR 0x3B */
+#define X86_FEATURE_SGX ( 9*32+ 2) /* "sgx" Software Guard Extensions */
+#define X86_FEATURE_BMI1 ( 9*32+ 3) /* "bmi1" 1st group bit manipulation extensions */
+#define X86_FEATURE_HLE ( 9*32+ 4) /* "hle" Hardware Lock Elision */
+#define X86_FEATURE_AVX2 ( 9*32+ 5) /* "avx2" AVX2 instructions */
+#define X86_FEATURE_FDP_EXCPTN_ONLY ( 9*32+ 6) /* FPU data pointer updated only on x87 exceptions */
+#define X86_FEATURE_SMEP ( 9*32+ 7) /* "smep" Supervisor Mode Execution Protection */
+#define X86_FEATURE_BMI2 ( 9*32+ 8) /* "bmi2" 2nd group bit manipulation extensions */
+#define X86_FEATURE_ERMS ( 9*32+ 9) /* "erms" Enhanced REP MOVSB/STOSB instructions */
+#define X86_FEATURE_INVPCID ( 9*32+10) /* "invpcid" Invalidate Processor Context ID */
+#define X86_FEATURE_RTM ( 9*32+11) /* "rtm" Restricted Transactional Memory */
+#define X86_FEATURE_CQM ( 9*32+12) /* "cqm" Cache QoS Monitoring */
+#define X86_FEATURE_ZERO_FCS_FDS ( 9*32+13) /* Zero out FPU CS and FPU DS */
+#define X86_FEATURE_MPX ( 9*32+14) /* "mpx" Memory Protection Extension */
+#define X86_FEATURE_RDT_A ( 9*32+15) /* "rdt_a" Resource Director Technology Allocation */
+#define X86_FEATURE_AVX512F ( 9*32+16) /* "avx512f" AVX-512 Foundation */
+#define X86_FEATURE_AVX512DQ ( 9*32+17) /* "avx512dq" AVX-512 DQ (Double/Quad granular) Instructions */
+#define X86_FEATURE_RDSEED ( 9*32+18) /* "rdseed" RDSEED instruction */
+#define X86_FEATURE_ADX ( 9*32+19) /* "adx" ADCX and ADOX instructions */
+#define X86_FEATURE_SMAP ( 9*32+20) /* "smap" Supervisor Mode Access Prevention */
+#define X86_FEATURE_AVX512IFMA ( 9*32+21) /* "avx512ifma" AVX-512 Integer Fused Multiply-Add instructions */
+#define X86_FEATURE_CLFLUSHOPT ( 9*32+23) /* "clflushopt" CLFLUSHOPT instruction */
+#define X86_FEATURE_CLWB ( 9*32+24) /* "clwb" CLWB instruction */
+#define X86_FEATURE_INTEL_PT ( 9*32+25) /* "intel_pt" Intel Processor Trace */
+#define X86_FEATURE_AVX512PF ( 9*32+26) /* "avx512pf" AVX-512 Prefetch */
+#define X86_FEATURE_AVX512ER ( 9*32+27) /* "avx512er" AVX-512 Exponential and Reciprocal */
+#define X86_FEATURE_AVX512CD ( 9*32+28) /* "avx512cd" AVX-512 Conflict Detection */
+#define X86_FEATURE_SHA_NI ( 9*32+29) /* "sha_ni" SHA1/SHA256 Instruction Extensions */
+#define X86_FEATURE_AVX512BW ( 9*32+30) /* "avx512bw" AVX-512 BW (Byte/Word granular) Instructions */
+#define X86_FEATURE_AVX512VL ( 9*32+31) /* "avx512vl" AVX-512 VL (128/256 Vector Length) Extensions */
/* Extended state features, CPUID level 0x0000000d:1 (EAX), word 10 */
-#define X86_FEATURE_XSAVEOPT (10*32+ 0) /* XSAVEOPT instruction */
-#define X86_FEATURE_XSAVEC (10*32+ 1) /* XSAVEC instruction */
-#define X86_FEATURE_XGETBV1 (10*32+ 2) /* XGETBV with ECX = 1 instruction */
-#define X86_FEATURE_XSAVES (10*32+ 3) /* XSAVES/XRSTORS instructions */
-#define X86_FEATURE_XFD (10*32+ 4) /* "" eXtended Feature Disabling */
+#define X86_FEATURE_XSAVEOPT (10*32+ 0) /* "xsaveopt" XSAVEOPT instruction */
+#define X86_FEATURE_XSAVEC (10*32+ 1) /* "xsavec" XSAVEC instruction */
+#define X86_FEATURE_XGETBV1 (10*32+ 2) /* "xgetbv1" XGETBV with ECX = 1 instruction */
+#define X86_FEATURE_XSAVES (10*32+ 3) /* "xsaves" XSAVES/XRSTORS instructions */
+#define X86_FEATURE_XFD (10*32+ 4) /* eXtended Feature Disabling */
/*
* Extended auxiliary flags: Linux defined - for features scattered in various
@@ -285,224 +275,280 @@
*
* Reuse free bits when adding new feature flags!
*/
-#define X86_FEATURE_CQM_LLC (11*32+ 0) /* LLC QoS if 1 */
-#define X86_FEATURE_CQM_OCCUP_LLC (11*32+ 1) /* LLC occupancy monitoring */
-#define X86_FEATURE_CQM_MBM_TOTAL (11*32+ 2) /* LLC Total MBM monitoring */
-#define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* LLC Local MBM monitoring */
-#define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
-#define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
-#define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */
-#define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */
-#define X86_FEATURE_SGX1 (11*32+ 8) /* "" Basic SGX */
-#define X86_FEATURE_SGX2 (11*32+ 9) /* "" SGX Enclave Dynamic Memory Management (EDMM) */
-#define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */
-#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */
-#define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */
-#define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */
-#define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
-#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
-#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
-#define X86_FEATURE_SGX_EDECCSSA (11*32+18) /* "" SGX EDECCSSA user leaf function */
-#define X86_FEATURE_CALL_DEPTH (11*32+19) /* "" Call depth tracking for RSB stuffing */
-#define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
-#define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
-#define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
-#define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
-#define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
-#define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
-#define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
-#define X86_FEATURE_APIC_MSRS_FENCE (11*32+27) /* "" IA32_TSC_DEADLINE and X2APIC MSRs need fencing */
-#define X86_FEATURE_ZEN2 (11*32+28) /* "" CPU based on Zen2 microarchitecture */
-#define X86_FEATURE_ZEN3 (11*32+29) /* "" CPU based on Zen3 microarchitecture */
-#define X86_FEATURE_ZEN4 (11*32+30) /* "" CPU based on Zen4 microarchitecture */
-#define X86_FEATURE_ZEN1 (11*32+31) /* "" CPU based on Zen1 microarchitecture */
+#define X86_FEATURE_CQM_LLC (11*32+ 0) /* "cqm_llc" LLC QoS if 1 */
+#define X86_FEATURE_CQM_OCCUP_LLC (11*32+ 1) /* "cqm_occup_llc" LLC occupancy monitoring */
+#define X86_FEATURE_CQM_MBM_TOTAL (11*32+ 2) /* "cqm_mbm_total" LLC Total MBM monitoring */
+#define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* "cqm_mbm_local" LLC Local MBM monitoring */
+#define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* LFENCE in user entry SWAPGS path */
+#define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* LFENCE in kernel entry SWAPGS path */
+#define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* "split_lock_detect" #AC for split lock */
+#define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* Per-thread Memory Bandwidth Allocation */
+#define X86_FEATURE_SGX1 (11*32+ 8) /* Basic SGX */
+#define X86_FEATURE_SGX2 (11*32+ 9) /* SGX Enclave Dynamic Memory Management (EDMM) */
+#define X86_FEATURE_ENTRY_IBPB (11*32+10) /* Issue an IBPB on kernel entry */
+#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* RET prediction control */
+#define X86_FEATURE_RETPOLINE (11*32+12) /* Generic Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* Use LFENCE for Spectre variant 2 */
+#define X86_FEATURE_RETHUNK (11*32+14) /* Use REturn THUNK */
+#define X86_FEATURE_UNRET (11*32+15) /* AMD BTB untrain return */
+#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* Use IBPB during runtime firmware calls */
+#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* Fill RSB on VM exit when EIBRS is enabled */
+#define X86_FEATURE_SGX_EDECCSSA (11*32+18) /* SGX EDECCSSA user leaf function */
+#define X86_FEATURE_CALL_DEPTH (11*32+19) /* Call depth tracking for RSB stuffing */
+#define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* MSR IA32_TSX_CTRL (Intel) implemented */
+#define X86_FEATURE_SMBA (11*32+21) /* Slow Memory Bandwidth Allocation */
+#define X86_FEATURE_BMEC (11*32+22) /* Bandwidth Monitoring Event Configuration */
+#define X86_FEATURE_USER_SHSTK (11*32+23) /* "user_shstk" Shadow stack support for user mode applications */
+#define X86_FEATURE_SRSO (11*32+24) /* AMD BTB untrain RETs */
+#define X86_FEATURE_SRSO_ALIAS (11*32+25) /* AMD BTB untrain RETs through aliasing */
+#define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_APIC_MSRS_FENCE (11*32+27) /* IA32_TSC_DEADLINE and X2APIC MSRs need fencing */
+#define X86_FEATURE_ZEN2 (11*32+28) /* CPU based on Zen2 microarchitecture */
+#define X86_FEATURE_ZEN3 (11*32+29) /* CPU based on Zen3 microarchitecture */
+#define X86_FEATURE_ZEN4 (11*32+30) /* CPU based on Zen4 microarchitecture */
+#define X86_FEATURE_ZEN1 (11*32+31) /* CPU based on Zen1 microarchitecture */
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
-#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
-#define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */
-#define X86_FEATURE_CMPCCXADD (12*32+ 7) /* "" CMPccXADD instructions */
-#define X86_FEATURE_ARCH_PERFMON_EXT (12*32+ 8) /* "" Intel Architectural PerfMon Extension */
-#define X86_FEATURE_FZRM (12*32+10) /* "" Fast zero-length REP MOVSB */
-#define X86_FEATURE_FSRS (12*32+11) /* "" Fast short REP STOSB */
-#define X86_FEATURE_FSRC (12*32+12) /* "" Fast short REP {CMPSB,SCASB} */
-#define X86_FEATURE_LKGS (12*32+18) /* "" Load "kernel" (userspace) GS */
-#define X86_FEATURE_AMX_FP16 (12*32+21) /* "" AMX fp16 Support */
-#define X86_FEATURE_AVX_IFMA (12*32+23) /* "" Support for VPMADD52[H,L]UQ */
-#define X86_FEATURE_LAM (12*32+26) /* Linear Address Masking */
+#define X86_FEATURE_SHA512 (12*32+ 0) /* SHA512 instructions */
+#define X86_FEATURE_SM3 (12*32+ 1) /* SM3 instructions */
+#define X86_FEATURE_SM4 (12*32+ 2) /* SM4 instructions */
+#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* "avx_vnni" AVX VNNI instructions */
+#define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* "avx512_bf16" AVX512 BFLOAT16 instructions */
+#define X86_FEATURE_CMPCCXADD (12*32+ 7) /* CMPccXADD instructions */
+#define X86_FEATURE_ARCH_PERFMON_EXT (12*32+ 8) /* Intel Architectural PerfMon Extension */
+#define X86_FEATURE_FZRM (12*32+10) /* Fast zero-length REP MOVSB */
+#define X86_FEATURE_FSRS (12*32+11) /* Fast short REP STOSB */
+#define X86_FEATURE_FSRC (12*32+12) /* Fast short REP {CMPSB,SCASB} */
+#define X86_FEATURE_FRED (12*32+17) /* "fred" Flexible Return and Event Delivery */
+#define X86_FEATURE_LKGS (12*32+18) /* Load "kernel" (userspace) GS */
+#define X86_FEATURE_WRMSRNS (12*32+19) /* Non-serializing WRMSR */
+#define X86_FEATURE_AMX_FP16 (12*32+21) /* AMX fp16 Support */
+#define X86_FEATURE_AVX_IFMA (12*32+23) /* Support for VPMADD52[H,L]UQ */
+#define X86_FEATURE_LAM (12*32+26) /* "lam" Linear Address Masking */
/* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
-#define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
-#define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */
-#define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */
-#define X86_FEATURE_RDPRU (13*32+ 4) /* Read processor register at user level */
-#define X86_FEATURE_WBNOINVD (13*32+ 9) /* WBNOINVD instruction */
-#define X86_FEATURE_AMD_IBPB (13*32+12) /* "" Indirect Branch Prediction Barrier */
-#define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */
-#define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */
-#define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
-#define X86_FEATURE_AMD_PPIN (13*32+23) /* Protected Processor Inventory Number */
-#define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
-#define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
-#define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
-#define X86_FEATURE_CPPC (13*32+27) /* Collaborative Processor Performance Control */
-#define X86_FEATURE_AMD_PSFD (13*32+28) /* "" Predictive Store Forwarding Disable */
-#define X86_FEATURE_BTC_NO (13*32+29) /* "" Not vulnerable to Branch Type Confusion */
-#define X86_FEATURE_BRS (13*32+31) /* Branch Sampling available */
+#define X86_FEATURE_CLZERO (13*32+ 0) /* "clzero" CLZERO instruction */
+#define X86_FEATURE_IRPERF (13*32+ 1) /* "irperf" Instructions Retired Count */
+#define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* "xsaveerptr" Always save/restore FP error pointers */
+#define X86_FEATURE_INVLPGB (13*32+ 3) /* INVLPGB and TLBSYNC instructions supported */
+#define X86_FEATURE_RDPRU (13*32+ 4) /* "rdpru" Read processor register at user level */
+#define X86_FEATURE_WBNOINVD (13*32+ 9) /* "wbnoinvd" WBNOINVD instruction */
+#define X86_FEATURE_AMD_IBPB (13*32+12) /* Indirect Branch Prediction Barrier */
+#define X86_FEATURE_AMD_IBRS (13*32+14) /* Indirect Branch Restricted Speculation */
+#define X86_FEATURE_AMD_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* Single Thread Indirect Branch Predictors always-on preferred */
+#define X86_FEATURE_AMD_IBRS_SAME_MODE (13*32+19) /* Indirect Branch Restricted Speculation same mode protection */
+#define X86_FEATURE_AMD_PPIN (13*32+23) /* "amd_ppin" Protected Processor Inventory Number */
+#define X86_FEATURE_AMD_SSBD (13*32+24) /* Speculative Store Bypass Disable */
+#define X86_FEATURE_VIRT_SSBD (13*32+25) /* "virt_ssbd" Virtualized Speculative Store Bypass Disable */
+#define X86_FEATURE_AMD_SSB_NO (13*32+26) /* Speculative Store Bypass is fixed in hardware. */
+#define X86_FEATURE_CPPC (13*32+27) /* "cppc" Collaborative Processor Performance Control */
+#define X86_FEATURE_AMD_PSFD (13*32+28) /* Predictive Store Forwarding Disable */
+#define X86_FEATURE_BTC_NO (13*32+29) /* Not vulnerable to Branch Type Confusion */
+#define X86_FEATURE_AMD_IBPB_RET (13*32+30) /* IBPB clears return address predictor */
+#define X86_FEATURE_BRS (13*32+31) /* "brs" Branch Sampling available */
/* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
-#define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
-#define X86_FEATURE_IDA (14*32+ 1) /* Intel Dynamic Acceleration */
-#define X86_FEATURE_ARAT (14*32+ 2) /* Always Running APIC Timer */
-#define X86_FEATURE_PLN (14*32+ 4) /* Intel Power Limit Notification */
-#define X86_FEATURE_PTS (14*32+ 6) /* Intel Package Thermal Status */
-#define X86_FEATURE_HWP (14*32+ 7) /* Intel Hardware P-states */
-#define X86_FEATURE_HWP_NOTIFY (14*32+ 8) /* HWP Notification */
-#define X86_FEATURE_HWP_ACT_WINDOW (14*32+ 9) /* HWP Activity Window */
-#define X86_FEATURE_HWP_EPP (14*32+10) /* HWP Energy Perf. Preference */
-#define X86_FEATURE_HWP_PKG_REQ (14*32+11) /* HWP Package Level Request */
-#define X86_FEATURE_HFI (14*32+19) /* Hardware Feedback Interface */
+#define X86_FEATURE_DTHERM (14*32+ 0) /* "dtherm" Digital Thermal Sensor */
+#define X86_FEATURE_IDA (14*32+ 1) /* "ida" Intel Dynamic Acceleration */
+#define X86_FEATURE_ARAT (14*32+ 2) /* "arat" Always Running APIC Timer */
+#define X86_FEATURE_PLN (14*32+ 4) /* "pln" Intel Power Limit Notification */
+#define X86_FEATURE_PTS (14*32+ 6) /* "pts" Intel Package Thermal Status */
+#define X86_FEATURE_HWP (14*32+ 7) /* "hwp" Intel Hardware P-states */
+#define X86_FEATURE_HWP_NOTIFY (14*32+ 8) /* "hwp_notify" HWP Notification */
+#define X86_FEATURE_HWP_ACT_WINDOW (14*32+ 9) /* "hwp_act_window" HWP Activity Window */
+#define X86_FEATURE_HWP_EPP (14*32+10) /* "hwp_epp" HWP Energy Perf. Preference */
+#define X86_FEATURE_HWP_PKG_REQ (14*32+11) /* "hwp_pkg_req" HWP Package Level Request */
+#define X86_FEATURE_HWP_HIGHEST_PERF_CHANGE (14*32+15) /* HWP Highest perf change */
+#define X86_FEATURE_HFI (14*32+19) /* "hfi" Hardware Feedback Interface */
/* AMD SVM Feature Identification, CPUID level 0x8000000a (EDX), word 15 */
-#define X86_FEATURE_NPT (15*32+ 0) /* Nested Page Table support */
-#define X86_FEATURE_LBRV (15*32+ 1) /* LBR Virtualization support */
+#define X86_FEATURE_NPT (15*32+ 0) /* "npt" Nested Page Table support */
+#define X86_FEATURE_LBRV (15*32+ 1) /* "lbrv" LBR Virtualization support */
#define X86_FEATURE_SVML (15*32+ 2) /* "svm_lock" SVM locking MSR */
#define X86_FEATURE_NRIPS (15*32+ 3) /* "nrip_save" SVM next_rip save */
#define X86_FEATURE_TSCRATEMSR (15*32+ 4) /* "tsc_scale" TSC scaling support */
#define X86_FEATURE_VMCBCLEAN (15*32+ 5) /* "vmcb_clean" VMCB clean bits support */
-#define X86_FEATURE_FLUSHBYASID (15*32+ 6) /* flush-by-ASID support */
-#define X86_FEATURE_DECODEASSISTS (15*32+ 7) /* Decode Assists support */
-#define X86_FEATURE_PAUSEFILTER (15*32+10) /* filtered pause intercept */
-#define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */
-#define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */
-#define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */
-#define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */
-#define X86_FEATURE_X2AVIC (15*32+18) /* Virtual x2apic */
-#define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* Virtual SPEC_CTRL */
-#define X86_FEATURE_VNMI (15*32+25) /* Virtual NMI */
-#define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */
+#define X86_FEATURE_FLUSHBYASID (15*32+ 6) /* "flushbyasid" Flush-by-ASID support */
+#define X86_FEATURE_DECODEASSISTS (15*32+ 7) /* "decodeassists" Decode Assists support */
+#define X86_FEATURE_PAUSEFILTER (15*32+10) /* "pausefilter" Filtered pause intercept */
+#define X86_FEATURE_PFTHRESHOLD (15*32+12) /* "pfthreshold" Pause filter threshold */
+#define X86_FEATURE_AVIC (15*32+13) /* "avic" Virtual Interrupt Controller */
+#define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* "v_vmsave_vmload" Virtual VMSAVE VMLOAD */
+#define X86_FEATURE_VGIF (15*32+16) /* "vgif" Virtual GIF */
+#define X86_FEATURE_X2AVIC (15*32+18) /* "x2avic" Virtual x2apic */
+#define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */
+#define X86_FEATURE_VNMI (15*32+25) /* "vnmi" Virtual NMI */
+#define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* SVME addr check */
+#define X86_FEATURE_BUS_LOCK_THRESHOLD (15*32+29) /* Bus lock threshold */
+#define X86_FEATURE_IDLE_HLT (15*32+30) /* IDLE HLT intercept */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
-#define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
-#define X86_FEATURE_UMIP (16*32+ 2) /* User Mode Instruction Protection */
-#define X86_FEATURE_PKU (16*32+ 3) /* Protection Keys for Userspace */
-#define X86_FEATURE_OSPKE (16*32+ 4) /* OS Protection Keys Enable */
-#define X86_FEATURE_WAITPKG (16*32+ 5) /* UMONITOR/UMWAIT/TPAUSE Instructions */
-#define X86_FEATURE_AVX512_VBMI2 (16*32+ 6) /* Additional AVX512 Vector Bit Manipulation Instructions */
-#define X86_FEATURE_SHSTK (16*32+ 7) /* "" Shadow stack */
-#define X86_FEATURE_GFNI (16*32+ 8) /* Galois Field New Instructions */
-#define X86_FEATURE_VAES (16*32+ 9) /* Vector AES */
-#define X86_FEATURE_VPCLMULQDQ (16*32+10) /* Carry-Less Multiplication Double Quadword */
-#define X86_FEATURE_AVX512_VNNI (16*32+11) /* Vector Neural Network Instructions */
-#define X86_FEATURE_AVX512_BITALG (16*32+12) /* Support for VPOPCNT[B,W] and VPSHUF-BITQMB instructions */
-#define X86_FEATURE_TME (16*32+13) /* Intel Total Memory Encryption */
-#define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */
-#define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */
-#define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */
-#define X86_FEATURE_BUS_LOCK_DETECT (16*32+24) /* Bus Lock detect */
-#define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */
-#define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */
-#define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */
-#define X86_FEATURE_ENQCMD (16*32+29) /* ENQCMD and ENQCMDS instructions */
-#define X86_FEATURE_SGX_LC (16*32+30) /* Software Guard Extensions Launch Control */
+#define X86_FEATURE_AVX512VBMI (16*32+ 1) /* "avx512vbmi" AVX512 Vector Bit Manipulation instructions */
+#define X86_FEATURE_UMIP (16*32+ 2) /* "umip" User Mode Instruction Protection */
+#define X86_FEATURE_PKU (16*32+ 3) /* "pku" Protection Keys for Userspace */
+#define X86_FEATURE_OSPKE (16*32+ 4) /* "ospke" OS Protection Keys Enable */
+#define X86_FEATURE_WAITPKG (16*32+ 5) /* "waitpkg" UMONITOR/UMWAIT/TPAUSE Instructions */
+#define X86_FEATURE_AVX512_VBMI2 (16*32+ 6) /* "avx512_vbmi2" Additional AVX512 Vector Bit Manipulation Instructions */
+#define X86_FEATURE_SHSTK (16*32+ 7) /* Shadow stack */
+#define X86_FEATURE_GFNI (16*32+ 8) /* "gfni" Galois Field New Instructions */
+#define X86_FEATURE_VAES (16*32+ 9) /* "vaes" Vector AES */
+#define X86_FEATURE_VPCLMULQDQ (16*32+10) /* "vpclmulqdq" Carry-Less Multiplication Double Quadword */
+#define X86_FEATURE_AVX512_VNNI (16*32+11) /* "avx512_vnni" Vector Neural Network Instructions */
+#define X86_FEATURE_AVX512_BITALG (16*32+12) /* "avx512_bitalg" Support for VPOPCNT[B,W] and VPSHUF-BITQMB instructions */
+#define X86_FEATURE_TME (16*32+13) /* "tme" Intel Total Memory Encryption */
+#define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* "avx512_vpopcntdq" POPCNT for vectors of DW/QW */
+#define X86_FEATURE_LA57 (16*32+16) /* "la57" 5-level page tables */
+#define X86_FEATURE_RDPID (16*32+22) /* "rdpid" RDPID instruction */
+#define X86_FEATURE_BUS_LOCK_DETECT (16*32+24) /* "bus_lock_detect" Bus Lock detect */
+#define X86_FEATURE_CLDEMOTE (16*32+25) /* "cldemote" CLDEMOTE instruction */
+#define X86_FEATURE_MOVDIRI (16*32+27) /* "movdiri" MOVDIRI instruction */
+#define X86_FEATURE_MOVDIR64B (16*32+28) /* "movdir64b" MOVDIR64B instruction */
+#define X86_FEATURE_ENQCMD (16*32+29) /* "enqcmd" ENQCMD and ENQCMDS instructions */
+#define X86_FEATURE_SGX_LC (16*32+30) /* "sgx_lc" Software Guard Extensions Launch Control */
/* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */
-#define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
-#define X86_FEATURE_SUCCOR (17*32+ 1) /* Uncorrectable error containment and recovery */
-#define X86_FEATURE_SMCA (17*32+ 3) /* Scalable MCA */
+#define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* "overflow_recov" MCA overflow recovery support */
+#define X86_FEATURE_SUCCOR (17*32+ 1) /* "succor" Uncorrectable error containment and recovery */
+#define X86_FEATURE_SMCA (17*32+ 3) /* "smca" Scalable MCA */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
-#define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
-#define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
-#define X86_FEATURE_FSRM (18*32+ 4) /* Fast Short Rep Mov */
-#define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
-#define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* "" SRBDS mitigation MSR available */
-#define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */
-#define X86_FEATURE_RTM_ALWAYS_ABORT (18*32+11) /* "" RTM transaction always aborts */
-#define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */
-#define X86_FEATURE_SERIALIZE (18*32+14) /* SERIALIZE instruction */
-#define X86_FEATURE_HYBRID_CPU (18*32+15) /* "" This part has CPUs of more than one type */
-#define X86_FEATURE_TSXLDTRK (18*32+16) /* TSX Suspend Load Address Tracking */
-#define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
-#define X86_FEATURE_ARCH_LBR (18*32+19) /* Intel ARCH LBR */
-#define X86_FEATURE_IBT (18*32+20) /* Indirect Branch Tracking */
-#define X86_FEATURE_AMX_BF16 (18*32+22) /* AMX bf16 Support */
-#define X86_FEATURE_AVX512_FP16 (18*32+23) /* AVX512 FP16 */
-#define X86_FEATURE_AMX_TILE (18*32+24) /* AMX tile Support */
-#define X86_FEATURE_AMX_INT8 (18*32+25) /* AMX int8 Support */
-#define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
-#define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
-#define X86_FEATURE_FLUSH_L1D (18*32+28) /* Flush L1D cache */
-#define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
-#define X86_FEATURE_CORE_CAPABILITIES (18*32+30) /* "" IA32_CORE_CAPABILITIES MSR */
-#define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */
+#define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* "avx512_4vnniw" AVX-512 Neural Network Instructions */
+#define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* "avx512_4fmaps" AVX-512 Multiply Accumulation Single precision */
+#define X86_FEATURE_FSRM (18*32+ 4) /* "fsrm" Fast Short Rep Mov */
+#define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* "avx512_vp2intersect" AVX-512 Intersect for D/Q */
+#define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* SRBDS mitigation MSR available */
+#define X86_FEATURE_MD_CLEAR (18*32+10) /* "md_clear" VERW clears CPU buffers */
+#define X86_FEATURE_RTM_ALWAYS_ABORT (18*32+11) /* RTM transaction always aborts */
+#define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* TSX_FORCE_ABORT */
+#define X86_FEATURE_SERIALIZE (18*32+14) /* "serialize" SERIALIZE instruction */
+#define X86_FEATURE_HYBRID_CPU (18*32+15) /* This part has CPUs of more than one type */
+#define X86_FEATURE_TSXLDTRK (18*32+16) /* "tsxldtrk" TSX Suspend Load Address Tracking */
+#define X86_FEATURE_PCONFIG (18*32+18) /* "pconfig" Intel PCONFIG */
+#define X86_FEATURE_ARCH_LBR (18*32+19) /* "arch_lbr" Intel ARCH LBR */
+#define X86_FEATURE_IBT (18*32+20) /* "ibt" Indirect Branch Tracking */
+#define X86_FEATURE_AMX_BF16 (18*32+22) /* "amx_bf16" AMX bf16 Support */
+#define X86_FEATURE_AVX512_FP16 (18*32+23) /* "avx512_fp16" AVX512 FP16 */
+#define X86_FEATURE_AMX_TILE (18*32+24) /* "amx_tile" AMX tile Support */
+#define X86_FEATURE_AMX_INT8 (18*32+25) /* "amx_int8" AMX int8 Support */
+#define X86_FEATURE_SPEC_CTRL (18*32+26) /* Speculation Control (IBRS + IBPB) */
+#define X86_FEATURE_INTEL_STIBP (18*32+27) /* Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_FLUSH_L1D (18*32+28) /* "flush_l1d" Flush L1D cache */
+#define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* "arch_capabilities" IA32_ARCH_CAPABILITIES MSR (Intel) */
+#define X86_FEATURE_CORE_CAPABILITIES (18*32+30) /* IA32_CORE_CAPABILITIES MSR */
+#define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* Speculative Store Bypass Disable */
/* AMD-defined memory encryption features, CPUID level 0x8000001f (EAX), word 19 */
-#define X86_FEATURE_SME (19*32+ 0) /* AMD Secure Memory Encryption */
-#define X86_FEATURE_SEV (19*32+ 1) /* AMD Secure Encrypted Virtualization */
-#define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* "" VM Page Flush MSR is supported */
-#define X86_FEATURE_SEV_ES (19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
-#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* "" Virtual TSC_AUX */
-#define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */
-#define X86_FEATURE_DEBUG_SWAP (19*32+14) /* AMD SEV-ES full debug state swap support */
+#define X86_FEATURE_SME (19*32+ 0) /* "sme" Secure Memory Encryption */
+#define X86_FEATURE_SEV (19*32+ 1) /* "sev" Secure Encrypted Virtualization */
+#define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* VM Page Flush MSR is supported */
+#define X86_FEATURE_SEV_ES (19*32+ 3) /* "sev_es" Secure Encrypted Virtualization - Encrypted State */
+#define X86_FEATURE_SEV_SNP (19*32+ 4) /* "sev_snp" Secure Encrypted Virtualization - Secure Nested Paging */
+#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* Virtual TSC_AUX */
+#define X86_FEATURE_SME_COHERENT (19*32+10) /* AMD hardware-enforced cache coherency */
+#define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" SEV-ES full debug state swap support */
+#define X86_FEATURE_RMPREAD (19*32+21) /* RMPREAD instruction */
+#define X86_FEATURE_SEGMENTED_RMP (19*32+23) /* Segmented RMP support */
+#define X86_FEATURE_ALLOWED_SEV_FEATURES (19*32+27) /* Allowed SEV Features */
+#define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */
+#define X86_FEATURE_HV_INUSE_WR_ALLOWED (19*32+30) /* Allow Write to in-use hypervisor-owned pages */
/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */
-#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */
-#define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* "" WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */
-#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */
-#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */
-#define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */
-#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* "" SMM_CTL MSR is not present */
+#define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */
+#define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */
+#define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */
+#define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* The memory form of VERW mitigates TSA */
+#define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* Null Selector Clears Base */
+
+#define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */
+#define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */
+
+#define X86_FEATURE_GP_ON_USER_CPUID (20*32+17) /* User CPUID faulting */
-#define X86_FEATURE_SBPB (20*32+27) /* "" Selective Branch Prediction Barrier */
-#define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
-#define X86_FEATURE_SRSO_NO (20*32+29) /* "" CPU is not affected by SRSO */
+#define X86_FEATURE_PREFETCHI (20*32+20) /* Prefetch Data/Instruction to Cache Level */
+#define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */
+#define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */
+#define X86_FEATURE_SRSO_NO (20*32+29) /* CPU is not affected by SRSO */
+#define X86_FEATURE_SRSO_USER_KERNEL_NO (20*32+30) /* CPU is not affected by SRSO across user/kernel boundaries */
+#define X86_FEATURE_SRSO_BP_SPEC_REDUCE (20*32+31) /*
+ * BP_CFG[BpSpecReduce] can be used to mitigate SRSO for VMs.
+ * (SRSO_MSR_FIX in the official doc).
+ */
+
+/*
+ * Extended auxiliary flags: Linux defined - for features scattered in various
+ * CPUID levels like 0x80000022, etc and Linux defined features.
+ *
+ * Reuse free bits when adding new feature flags!
+ */
+#define X86_FEATURE_AMD_LBR_PMC_FREEZE (21*32+ 0) /* "amd_lbr_pmc_freeze" AMD LBR and PMC Freeze */
+#define X86_FEATURE_CLEAR_BHB_LOOP (21*32+ 1) /* Clear branch history at syscall entry using SW loop */
+#define X86_FEATURE_BHI_CTRL (21*32+ 2) /* BHI_DIS_S HW control available */
+#define X86_FEATURE_CLEAR_BHB_HW (21*32+ 3) /* BHI_DIS_S HW control enabled */
+#define X86_FEATURE_CLEAR_BHB_VMEXIT (21*32+ 4) /* Clear branch history at vmexit using SW loop */
+#define X86_FEATURE_AMD_FAST_CPPC (21*32+ 5) /* Fast CPPC */
+#define X86_FEATURE_AMD_HTR_CORES (21*32+ 6) /* Heterogeneous Core Topology */
+#define X86_FEATURE_AMD_WORKLOAD_CLASS (21*32+ 7) /* Workload Classification */
+#define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */
+#define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */
+#define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */
+#define X86_FEATURE_TSA_SQ_NO (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
+#define X86_FEATURE_TSA_L1_NO (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
+#define X86_FEATURE_CLEAR_CPU_BUF_VM (21*32+13) /* Clear CPU buffers using VERW before VMRUN */
/*
* BUG word(s)
*/
#define X86_BUG(x) (NCAPINTS*32 + (x))
-#define X86_BUG_F00F X86_BUG(0) /* Intel F00F */
-#define X86_BUG_FDIV X86_BUG(1) /* FPU FDIV */
-#define X86_BUG_COMA X86_BUG(2) /* Cyrix 6x86 coma */
+#define X86_BUG_F00F X86_BUG(0) /* "f00f" Intel F00F */
+#define X86_BUG_FDIV X86_BUG(1) /* "fdiv" FPU FDIV */
+#define X86_BUG_COMA X86_BUG(2) /* "coma" Cyrix 6x86 coma */
#define X86_BUG_AMD_TLB_MMATCH X86_BUG(3) /* "tlb_mmatch" AMD Erratum 383 */
#define X86_BUG_AMD_APIC_C1E X86_BUG(4) /* "apic_c1e" AMD Erratum 400 */
-#define X86_BUG_11AP X86_BUG(5) /* Bad local APIC aka 11AP */
-#define X86_BUG_FXSAVE_LEAK X86_BUG(6) /* FXSAVE leaks FOP/FIP/FOP */
-#define X86_BUG_CLFLUSH_MONITOR X86_BUG(7) /* AAI65, CLFLUSH required before MONITOR */
-#define X86_BUG_SYSRET_SS_ATTRS X86_BUG(8) /* SYSRET doesn't fix up SS attrs */
+#define X86_BUG_11AP X86_BUG(5) /* "11ap" Bad local APIC aka 11AP */
+#define X86_BUG_FXSAVE_LEAK X86_BUG(6) /* "fxsave_leak" FXSAVE leaks FOP/FIP/FOP */
+#define X86_BUG_CLFLUSH_MONITOR X86_BUG(7) /* "clflush_monitor" AAI65, CLFLUSH required before MONITOR */
+#define X86_BUG_SYSRET_SS_ATTRS X86_BUG(8) /* "sysret_ss_attrs" SYSRET doesn't fix up SS attrs */
#ifdef CONFIG_X86_32
/*
* 64-bit kernels don't use X86_BUG_ESPFIX. Make the define conditional
* to avoid confusion.
*/
-#define X86_BUG_ESPFIX X86_BUG(9) /* "" IRET to 16-bit SS corrupts ESP/RSP high bits */
+#define X86_BUG_ESPFIX X86_BUG(9) /* IRET to 16-bit SS corrupts ESP/RSP high bits */
#endif
-#define X86_BUG_NULL_SEG X86_BUG(10) /* Nulling a selector preserves the base */
-#define X86_BUG_SWAPGS_FENCE X86_BUG(11) /* SWAPGS without input dep on GS */
-#define X86_BUG_MONITOR X86_BUG(12) /* IPI required to wake up remote CPU */
-#define X86_BUG_AMD_E400 X86_BUG(13) /* CPU is among the affected by Erratum 400 */
-#define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */
-#define X86_BUG_SPECTRE_V1 X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
-#define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
-#define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* CPU is affected by speculative store bypass attack */
-#define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
-#define X86_BUG_MDS X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
-#define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */
-#define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
-#define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
-#define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
-#define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
-#define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
-#define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
-#define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */
-#define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
-#define X86_BUG_SMT_RSB X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */
-#define X86_BUG_GDS X86_BUG(30) /* CPU is affected by Gather Data Sampling */
-#define X86_BUG_TDX_PW_MCE X86_BUG(31) /* CPU may incur #MC if non-TD software does partial write to TDX private memory */
+#define X86_BUG_NULL_SEG X86_BUG(10) /* "null_seg" Nulling a selector preserves the base */
+#define X86_BUG_SWAPGS_FENCE X86_BUG(11) /* "swapgs_fence" SWAPGS without input dep on GS */
+#define X86_BUG_MONITOR X86_BUG(12) /* "monitor" IPI required to wake up remote CPU */
+#define X86_BUG_AMD_E400 X86_BUG(13) /* "amd_e400" CPU is among those affected by Erratum 400 */
+#define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* "cpu_meltdown" CPU is affected by meltdown attack and needs kernel page table isolation */
+#define X86_BUG_SPECTRE_V1 X86_BUG(15) /* "spectre_v1" CPU is affected by Spectre variant 1 attack with conditional branches */
+#define X86_BUG_SPECTRE_V2 X86_BUG(16) /* "spectre_v2" CPU is affected by Spectre variant 2 attack with indirect branches */
+#define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* "spec_store_bypass" CPU is affected by speculative store bypass attack */
+#define X86_BUG_L1TF X86_BUG(18) /* "l1tf" CPU is affected by L1 Terminal Fault */
+#define X86_BUG_MDS X86_BUG(19) /* "mds" CPU is affected by Microarchitectural data sampling */
+#define X86_BUG_MSBDS_ONLY X86_BUG(20) /* "msbds_only" CPU is only affected by the MSBDS variant of BUG_MDS */
+#define X86_BUG_SWAPGS X86_BUG(21) /* "swapgs" CPU is affected by speculation through SWAPGS */
+#define X86_BUG_TAA X86_BUG(22) /* "taa" CPU is affected by TSX Async Abort (TAA) */
+#define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* "itlb_multihit" CPU may incur MCE during certain page attribute changes */
+#define X86_BUG_SRBDS X86_BUG(24) /* "srbds" CPU may leak RNG bits if not mitigated */
+#define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* "mmio_stale_data" CPU is affected by Processor MMIO Stale Data vulnerabilities */
+/* unused, was #define X86_BUG_MMIO_UNKNOWN X86_BUG(26) "mmio_unknown" CPU is too old and its MMIO Stale Data status is unknown */
+#define X86_BUG_RETBLEED X86_BUG(27) /* "retbleed" CPU is affected by RETBleed */
+#define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* "eibrs_pbrsb" EIBRS is vulnerable to Post Barrier RSB Predictions */
+#define X86_BUG_SMT_RSB X86_BUG(29) /* "smt_rsb" CPU is vulnerable to Cross-Thread Return Address Predictions */
+#define X86_BUG_GDS X86_BUG(30) /* "gds" CPU is affected by Gather Data Sampling */
+#define X86_BUG_TDX_PW_MCE X86_BUG(31) /* "tdx_pw_mce" CPU may incur #MC if non-TD software does partial write to TDX private memory */
/* BUG word 2 */
-#define X86_BUG_SRSO X86_BUG(1*32 + 0) /* AMD SRSO bug */
-#define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* AMD DIV0 speculation bug */
+#define X86_BUG_SRSO X86_BUG( 1*32+ 0) /* "srso" AMD SRSO bug */
+#define X86_BUG_DIV0 X86_BUG( 1*32+ 1) /* "div0" AMD DIV0 speculation bug */
+#define X86_BUG_RFDS X86_BUG( 1*32+ 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
+#define X86_BUG_BHI X86_BUG( 1*32+ 3) /* "bhi" CPU is affected by Branch History Injection */
+#define X86_BUG_IBPB_NO_RET X86_BUG( 1*32+ 4) /* "ibpb_no_ret" IBPB omits return target predictions */
+#define X86_BUG_SPECTRE_V2_USER X86_BUG( 1*32+ 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
+#define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */
+#define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */
+#define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
+#define X86_BUG_TSA X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
#endif /* _ASM_X86_CPUFEATURES_H */
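Note on the comment churn in the cpufeatures.h hunks above: it appears to track the newer x86 convention in which a feature shows up in /proc/cpuinfo only when its comment carries an explicit quoted name, whereas the old scheme printed every flag by default and used a leading "" to hide one. The word/bit coordinates themselves are unchanged and still map onto CPUID leaves; word 9, for instance, corresponds to CPUID leaf 0x7, sub-leaf 0, EBX. The fragment below is an illustration only, not part of the patch: a userspace probe of one of those raw CPUID bits (X86_FEATURE_SMAP, 9*32+20, i.e. EBX bit 20 of leaf 0x7). In-kernel code would use cpu_feature_enabled() or boot_cpu_has() instead; __get_cpuid_count() comes from the compiler-provided <cpuid.h>.

#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* Word 9 above == CPUID(EAX=0x7, ECX=0).EBX; X86_FEATURE_SMAP is ( 9*32+20). */
static bool cpu_has_smap(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x7, 0, &eax, &ebx, &ecx, &edx))
		return false;	/* leaf 0x7 not supported on this CPU */
	return ebx & (1u << 20);
}

int main(void)
{
	printf("smap: %s\n", cpu_has_smap() ? "yes" : "no");
	return 0;
}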
diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h
deleted file mode 100644
index 702d93fdd10e..000000000000
--- a/tools/arch/x86/include/asm/disabled-features.h
+++ /dev/null
@@ -1,148 +0,0 @@
-#ifndef _ASM_X86_DISABLED_FEATURES_H
-#define _ASM_X86_DISABLED_FEATURES_H
-
-/* These features, although they might be available in a CPU
- * will not be used because the compile options to support
- * them are not present.
- *
- * This code allows them to be checked and disabled at
- * compile time without an explicit #ifdef. Use
- * cpu_feature_enabled().
- */
-
-#ifdef CONFIG_X86_UMIP
-# define DISABLE_UMIP 0
-#else
-# define DISABLE_UMIP (1<<(X86_FEATURE_UMIP & 31))
-#endif
-
-#ifdef CONFIG_X86_64
-# define DISABLE_VME (1<<(X86_FEATURE_VME & 31))
-# define DISABLE_K6_MTRR (1<<(X86_FEATURE_K6_MTRR & 31))
-# define DISABLE_CYRIX_ARR (1<<(X86_FEATURE_CYRIX_ARR & 31))
-# define DISABLE_CENTAUR_MCR (1<<(X86_FEATURE_CENTAUR_MCR & 31))
-# define DISABLE_PCID 0
-#else
-# define DISABLE_VME 0
-# define DISABLE_K6_MTRR 0
-# define DISABLE_CYRIX_ARR 0
-# define DISABLE_CENTAUR_MCR 0
-# define DISABLE_PCID (1<<(X86_FEATURE_PCID & 31))
-#endif /* CONFIG_X86_64 */
-
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-# define DISABLE_PKU 0
-# define DISABLE_OSPKE 0
-#else
-# define DISABLE_PKU (1<<(X86_FEATURE_PKU & 31))
-# define DISABLE_OSPKE (1<<(X86_FEATURE_OSPKE & 31))
-#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
-
-#ifdef CONFIG_X86_5LEVEL
-# define DISABLE_LA57 0
-#else
-# define DISABLE_LA57 (1<<(X86_FEATURE_LA57 & 31))
-#endif
-
-#ifdef CONFIG_PAGE_TABLE_ISOLATION
-# define DISABLE_PTI 0
-#else
-# define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31))
-#endif
-
-#ifdef CONFIG_RETPOLINE
-# define DISABLE_RETPOLINE 0
-#else
-# define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \
- (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)))
-#endif
-
-#ifdef CONFIG_RETHUNK
-# define DISABLE_RETHUNK 0
-#else
-# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31))
-#endif
-
-#ifdef CONFIG_CPU_UNRET_ENTRY
-# define DISABLE_UNRET 0
-#else
-# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31))
-#endif
-
-#ifdef CONFIG_CALL_DEPTH_TRACKING
-# define DISABLE_CALL_DEPTH_TRACKING 0
-#else
-# define DISABLE_CALL_DEPTH_TRACKING (1 << (X86_FEATURE_CALL_DEPTH & 31))
-#endif
-
-#ifdef CONFIG_ADDRESS_MASKING
-# define DISABLE_LAM 0
-#else
-# define DISABLE_LAM (1 << (X86_FEATURE_LAM & 31))
-#endif
-
-#ifdef CONFIG_INTEL_IOMMU_SVM
-# define DISABLE_ENQCMD 0
-#else
-# define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31))
-#endif
-
-#ifdef CONFIG_X86_SGX
-# define DISABLE_SGX 0
-#else
-# define DISABLE_SGX (1 << (X86_FEATURE_SGX & 31))
-#endif
-
-#ifdef CONFIG_XEN_PV
-# define DISABLE_XENPV 0
-#else
-# define DISABLE_XENPV (1 << (X86_FEATURE_XENPV & 31))
-#endif
-
-#ifdef CONFIG_INTEL_TDX_GUEST
-# define DISABLE_TDX_GUEST 0
-#else
-# define DISABLE_TDX_GUEST (1 << (X86_FEATURE_TDX_GUEST & 31))
-#endif
-
-#ifdef CONFIG_X86_USER_SHADOW_STACK
-#define DISABLE_USER_SHSTK 0
-#else
-#define DISABLE_USER_SHSTK (1 << (X86_FEATURE_USER_SHSTK & 31))
-#endif
-
-#ifdef CONFIG_X86_KERNEL_IBT
-#define DISABLE_IBT 0
-#else
-#define DISABLE_IBT (1 << (X86_FEATURE_IBT & 31))
-#endif
-
-/*
- * Make sure to add features to the correct mask
- */
-#define DISABLED_MASK0 (DISABLE_VME)
-#define DISABLED_MASK1 0
-#define DISABLED_MASK2 0
-#define DISABLED_MASK3 (DISABLE_CYRIX_ARR|DISABLE_CENTAUR_MCR|DISABLE_K6_MTRR)
-#define DISABLED_MASK4 (DISABLE_PCID)
-#define DISABLED_MASK5 0
-#define DISABLED_MASK6 0
-#define DISABLED_MASK7 (DISABLE_PTI)
-#define DISABLED_MASK8 (DISABLE_XENPV|DISABLE_TDX_GUEST)
-#define DISABLED_MASK9 (DISABLE_SGX)
-#define DISABLED_MASK10 0
-#define DISABLED_MASK11 (DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET| \
- DISABLE_CALL_DEPTH_TRACKING|DISABLE_USER_SHSTK)
-#define DISABLED_MASK12 (DISABLE_LAM)
-#define DISABLED_MASK13 0
-#define DISABLED_MASK14 0
-#define DISABLED_MASK15 0
-#define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \
- DISABLE_ENQCMD)
-#define DISABLED_MASK17 0
-#define DISABLED_MASK18 (DISABLE_IBT)
-#define DISABLED_MASK19 0
-#define DISABLED_MASK20 0
-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
-
-#endif /* _ASM_X86_DISABLED_FEATURES_H */
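The header deleted above documented, in its opening comment, the compile-time disable mechanism: features the kernel was not configured to support are collected into per-word DISABLED_MASKn values so that cpu_feature_enabled() can reject them without an explicit #ifdef at every call site. In recent kernels those masks appear to be generated from Kconfig rather than maintained by hand, which is presumably why the hand-written tools/ copy is dropped here. The sketch below is a simplified, stand-alone illustration of that pattern only; MY_CONFIG_FOO, FEATURE_FOO_BIT and my_feature_enabled() are invented names for illustration, not kernel APIs.

#include <stdbool.h>

#define FEATURE_FOO_BIT		7		/* made-up feature bit */

#ifdef MY_CONFIG_FOO				/* stands in for a CONFIG_* option */
# define DISABLED_MASK		0u
#else
# define DISABLED_MASK		(1u << FEATURE_FOO_BIT)
#endif

static inline bool my_feature_enabled(unsigned int bit, unsigned int runtime_mask)
{
	/*
	 * When the bit is compile-time disabled, DISABLED_MASK is a constant,
	 * so the compiler folds the whole check to "false" and can drop any
	 * code guarded by it, which is the effect the removed comment describes.
	 */
	if (DISABLED_MASK & (1u << bit))
		return false;
	return runtime_mask & (1u << bit);
}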
diff --git a/tools/arch/x86/include/asm/inat.h b/tools/arch/x86/include/asm/inat.h
index a61051400311..099e926595bd 100644
--- a/tools/arch/x86/include/asm/inat.h
+++ b/tools/arch/x86/include/asm/inat.h
@@ -35,6 +35,10 @@
#define INAT_PFX_VEX2 13 /* 2-bytes VEX prefix */
#define INAT_PFX_VEX3 14 /* 3-bytes VEX prefix */
#define INAT_PFX_EVEX 15 /* EVEX prefix */
+/* x86-64 REX2 prefix */
+#define INAT_PFX_REX2 16 /* 0xD5 */
+/* AMD XOP prefix */
+#define INAT_PFX_XOP 17 /* 0x8F */
#define INAT_LSTPFX_MAX 3
#define INAT_LGCPFX_MAX 11
@@ -50,7 +54,7 @@
/* Legacy prefix */
#define INAT_PFX_OFFS 0
-#define INAT_PFX_BITS 4
+#define INAT_PFX_BITS 5
#define INAT_PFX_MAX ((1 << INAT_PFX_BITS) - 1)
#define INAT_PFX_MASK (INAT_PFX_MAX << INAT_PFX_OFFS)
/* Escape opcodes */
@@ -75,8 +79,13 @@
#define INAT_MOFFSET (1 << (INAT_FLAG_OFFS + 3))
#define INAT_VARIANT (1 << (INAT_FLAG_OFFS + 4))
#define INAT_VEXOK (1 << (INAT_FLAG_OFFS + 5))
+#define INAT_XOPOK INAT_VEXOK
#define INAT_VEXONLY (1 << (INAT_FLAG_OFFS + 6))
#define INAT_EVEXONLY (1 << (INAT_FLAG_OFFS + 7))
+#define INAT_NO_REX2 (1 << (INAT_FLAG_OFFS + 8))
+#define INAT_REX2_VARIANT (1 << (INAT_FLAG_OFFS + 9))
+#define INAT_EVEX_SCALABLE (1 << (INAT_FLAG_OFFS + 10))
+#define INAT_INV64 (1 << (INAT_FLAG_OFFS + 11))
/* Attribute making macros for attribute tables */
#define INAT_MAKE_PREFIX(pfx) (pfx << INAT_PFX_OFFS)
#define INAT_MAKE_ESCAPE(esc) (esc << INAT_ESC_OFFS)
@@ -105,6 +114,8 @@ extern insn_attr_t inat_get_group_attribute(insn_byte_t modrm,
extern insn_attr_t inat_get_avx_attribute(insn_byte_t opcode,
insn_byte_t vex_m,
insn_byte_t vex_pp);
+extern insn_attr_t inat_get_xop_attribute(insn_byte_t opcode,
+ insn_byte_t map_select);
/* Attribute checking functions */
static inline int inat_is_legacy_prefix(insn_attr_t attr)
@@ -128,6 +139,11 @@ static inline int inat_is_rex_prefix(insn_attr_t attr)
return (attr & INAT_PFX_MASK) == INAT_PFX_REX;
}
+static inline int inat_is_rex2_prefix(insn_attr_t attr)
+{
+ return (attr & INAT_PFX_MASK) == INAT_PFX_REX2;
+}
+
static inline int inat_last_prefix_id(insn_attr_t attr)
{
if ((attr & INAT_PFX_MASK) > INAT_LSTPFX_MAX)
@@ -153,6 +169,11 @@ static inline int inat_is_vex3_prefix(insn_attr_t attr)
return (attr & INAT_PFX_MASK) == INAT_PFX_VEX3;
}
+static inline int inat_is_xop_prefix(insn_attr_t attr)
+{
+ return (attr & INAT_PFX_MASK) == INAT_PFX_XOP;
+}
+
static inline int inat_is_escape(insn_attr_t attr)
{
return attr & INAT_ESC_MASK;
@@ -218,6 +239,11 @@ static inline int inat_accept_vex(insn_attr_t attr)
return attr & INAT_VEXOK;
}
+static inline int inat_accept_xop(insn_attr_t attr)
+{
+ return attr & INAT_XOPOK;
+}
+
static inline int inat_must_vex(insn_attr_t attr)
{
return attr & (INAT_VEXONLY | INAT_EVEXONLY);
@@ -227,4 +253,14 @@ static inline int inat_must_evex(insn_attr_t attr)
{
return attr & INAT_EVEXONLY;
}
+
+static inline int inat_evex_scalable(insn_attr_t attr)
+{
+ return attr & INAT_EVEX_SCALABLE;
+}
+
+static inline int inat_is_invalid64(insn_attr_t attr)
+{
+ return attr & INAT_INV64;
+}
#endif
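With INAT_PFX_BITS widened from 4 to 5, the new 0xD5 (REX2) and 0x8F (XOP) table entries fit alongside the legacy prefix ids, and the helpers added above let a decoder classify a candidate byte. A hedged sketch of that use, assuming inat_get_opcode_attribute() and inat_is_vex_prefix() from the unchanged parts of this header; the function and enum below are invented for illustration:

#include "inat.h"       /* tools/arch/x86/include/asm/inat.h */

enum pfx_kind { PFX_NONE, PFX_VEX, PFX_REX2, PFX_XOP };

static enum pfx_kind classify_prefix(insn_byte_t b)
{
        insn_attr_t attr = inat_get_opcode_attribute(b);

        if (inat_is_rex2_prefix(attr))  /* 0xD5, INAT_PFX_REX2 */
                return PFX_REX2;
        if (inat_is_xop_prefix(attr))   /* 0x8F, INAT_PFX_XOP */
                return PFX_XOP;
        if (inat_is_vex_prefix(attr))   /* VEX2/VEX3/EVEX */
                return PFX_VEX;
        return PFX_NONE;
}

Note that 0x8F doubles as the POP r/m opcode; as X86_XOP_M_MIN in the companion insn.h change suggests, it only acts as an XOP escape when the following map field falls in the XOP range.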
diff --git a/tools/arch/x86/include/asm/insn.h b/tools/arch/x86/include/asm/insn.h
index 65c0d9ce1e29..c683d609934b 100644
--- a/tools/arch/x86/include/asm/insn.h
+++ b/tools/arch/x86/include/asm/insn.h
@@ -71,7 +71,10 @@ struct insn {
* prefixes.bytes[3]: last prefix
*/
struct insn_field rex_prefix; /* REX prefix */
- struct insn_field vex_prefix; /* VEX prefix */
+ union {
+ struct insn_field vex_prefix; /* VEX prefix */
+ struct insn_field xop_prefix; /* XOP prefix */
+ };
struct insn_field opcode; /*
* opcode.bytes[0]: opcode1
* opcode.bytes[1]: opcode2
@@ -112,10 +115,15 @@ struct insn {
#define X86_SIB_INDEX(sib) (((sib) & 0x38) >> 3)
#define X86_SIB_BASE(sib) ((sib) & 0x07)
-#define X86_REX_W(rex) ((rex) & 8)
-#define X86_REX_R(rex) ((rex) & 4)
-#define X86_REX_X(rex) ((rex) & 2)
-#define X86_REX_B(rex) ((rex) & 1)
+#define X86_REX2_M(rex) ((rex) & 0x80) /* REX2 M0 */
+#define X86_REX2_R(rex) ((rex) & 0x40) /* REX2 R4 */
+#define X86_REX2_X(rex) ((rex) & 0x20) /* REX2 X4 */
+#define X86_REX2_B(rex) ((rex) & 0x10) /* REX2 B4 */
+
+#define X86_REX_W(rex) ((rex) & 8) /* REX or REX2 W */
+#define X86_REX_R(rex) ((rex) & 4) /* REX or REX2 R3 */
+#define X86_REX_X(rex) ((rex) & 2) /* REX or REX2 X3 */
+#define X86_REX_B(rex) ((rex) & 1) /* REX or REX2 B3 */
/* VEX bit flags */
#define X86_VEX_W(vex) ((vex) & 0x80) /* VEX3 Byte2 */
@@ -130,6 +138,17 @@ struct insn {
#define X86_VEX_V(vex) (((vex) & 0x78) >> 3) /* VEX3 Byte2, VEX2 Byte1 */
#define X86_VEX_P(vex) ((vex) & 0x03) /* VEX3 Byte2, VEX2 Byte1 */
#define X86_VEX_M_MAX 0x1f /* VEX3.M Maximum value */
+/* XOP bit fields */
+#define X86_XOP_R(xop) ((xop) & 0x80) /* XOP Byte2 */
+#define X86_XOP_X(xop) ((xop) & 0x40) /* XOP Byte2 */
+#define X86_XOP_B(xop) ((xop) & 0x20) /* XOP Byte2 */
+#define X86_XOP_M(xop) ((xop) & 0x1f) /* XOP Byte2 */
+#define X86_XOP_W(xop) ((xop) & 0x80) /* XOP Byte3 */
+#define X86_XOP_V(xop) ((xop) & 0x78) /* XOP Byte3 */
+#define X86_XOP_L(xop) ((xop) & 0x04) /* XOP Byte3 */
+#define X86_XOP_P(xop) ((xop) & 0x03) /* XOP Byte3 */
+#define X86_XOP_M_MIN 0x08 /* Min of XOP.M */
+#define X86_XOP_M_MAX 0x1f /* Max of XOP.M */
extern void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64);
extern int insn_get_prefixes(struct insn *insn);
@@ -161,7 +180,19 @@ static inline void insn_get_attribute(struct insn *insn)
/* Instruction uses RIP-relative addressing */
extern int insn_rip_relative(struct insn *insn);
-static inline int insn_is_avx(struct insn *insn)
+static inline int insn_is_rex2(struct insn *insn)
+{
+ if (!insn->prefixes.got)
+ insn_get_prefixes(insn);
+ return insn->rex_prefix.nbytes == 2;
+}
+
+static inline insn_byte_t insn_rex2_m_bit(struct insn *insn)
+{
+ return X86_REX2_M(insn->rex_prefix.bytes[1]);
+}
+
+static inline int insn_is_avx_or_xop(struct insn *insn)
{
if (!insn->prefixes.got)
insn_get_prefixes(insn);
@@ -175,6 +206,22 @@ static inline int insn_is_evex(struct insn *insn)
return (insn->vex_prefix.nbytes == 4);
}
+/* If we already know this is AVX/XOP encoded */
+static inline int avx_insn_is_xop(struct insn *insn)
+{
+ insn_attr_t attr = inat_get_opcode_attribute(insn->vex_prefix.bytes[0]);
+
+ return inat_is_xop_prefix(attr);
+}
+
+static inline int insn_is_xop(struct insn *insn)
+{
+ if (!insn_is_avx_or_xop(insn))
+ return 0;
+
+ return avx_insn_is_xop(insn);
+}
+
static inline int insn_has_emulate_prefix(struct insn *insn)
{
return !!insn->emulate_prefix_size;
@@ -198,11 +245,33 @@ static inline insn_byte_t insn_vex_p_bits(struct insn *insn)
return X86_VEX_P(insn->vex_prefix.bytes[2]);
}
+static inline insn_byte_t insn_vex_w_bit(struct insn *insn)
+{
+ if (insn->vex_prefix.nbytes < 3)
+ return 0;
+ return X86_VEX_W(insn->vex_prefix.bytes[2]);
+}
+
+static inline insn_byte_t insn_xop_map_bits(struct insn *insn)
+{
+ if (insn->xop_prefix.nbytes < 3) /* XOP is 3 bytes */
+ return 0;
+ return X86_XOP_M(insn->xop_prefix.bytes[1]);
+}
+
+static inline insn_byte_t insn_xop_p_bits(struct insn *insn)
+{
+ return X86_XOP_P(insn->vex_prefix.bytes[2]);
+}
+
/* Get the last prefix id from last prefix or VEX prefix */
static inline int insn_last_prefix_id(struct insn *insn)
{
- if (insn_is_avx(insn))
+ if (insn_is_avx_or_xop(insn)) {
+ if (avx_insn_is_xop(insn))
+ return insn_xop_p_bits(insn);
return insn_vex_p_bits(insn); /* VEX_p is a SIMD prefix id */
+ }
if (insn->prefixes.bytes[3])
return inat_get_last_prefix_id(insn->prefixes.bytes[3]);
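The REX2 prefix is two bytes, 0xD5 followed by a payload byte, which is why insn_is_rex2() simply checks rex_prefix.nbytes == 2. The payload's high nibble holds M0 plus the new R4/X4/B4 extensions, while its low nibble mirrors the classic REX W/R3/X3/B3 bits, so register indexes grow to 5 bits. A hedged sketch of assembling one such index (X86_MODRM_REG() comes from the unchanged part of this header; the function is illustrative and assumes the caller already verified insn_is_rex2()):

#include "insn.h"       /* tools/arch/x86/include/asm/insn.h */

/* Build the 5-bit ModRM.reg index of a REX2-prefixed instruction. */
static int rex2_modrm_reg(const struct insn *insn)
{
        insn_byte_t payload = insn->rex_prefix.bytes[1];        /* byte after 0xD5 */
        int reg = X86_MODRM_REG(insn->modrm.bytes[0]);          /* bits 2:0 */

        if (X86_REX_R(payload))         /* legacy R3 -> bit 3 */
                reg |= 1 << 3;
        if (X86_REX2_R(payload))        /* new R4 -> bit 4 */
                reg |= 1 << 4;
        return reg;
}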
diff --git a/tools/arch/x86/include/asm/io.h b/tools/arch/x86/include/asm/io.h
new file mode 100644
index 000000000000..ecad61a3ea52
--- /dev/null
+++ b/tools/arch/x86/include/asm/io.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOOLS_ASM_X86_IO_H
+#define _TOOLS_ASM_X86_IO_H
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+#include "special_insns.h"
+
+#define build_mmio_read(name, size, type, reg, barrier) \
+static inline type name(const volatile void __iomem *addr) \
+{ type ret; asm volatile("mov" size " %1,%0":reg (ret) \
+:"m" (*(volatile type __force *)addr) barrier); return ret; }
+
+#define build_mmio_write(name, size, type, reg, barrier) \
+static inline void name(type val, volatile void __iomem *addr) \
+{ asm volatile("mov" size " %0,%1": :reg (val), \
+"m" (*(volatile type __force *)addr) barrier); }
+
+build_mmio_read(readb, "b", unsigned char, "=q", :"memory")
+build_mmio_read(readw, "w", unsigned short, "=r", :"memory")
+build_mmio_read(readl, "l", unsigned int, "=r", :"memory")
+
+build_mmio_read(__readb, "b", unsigned char, "=q", )
+build_mmio_read(__readw, "w", unsigned short, "=r", )
+build_mmio_read(__readl, "l", unsigned int, "=r", )
+
+build_mmio_write(writeb, "b", unsigned char, "q", :"memory")
+build_mmio_write(writew, "w", unsigned short, "r", :"memory")
+build_mmio_write(writel, "l", unsigned int, "r", :"memory")
+
+build_mmio_write(__writeb, "b", unsigned char, "q", )
+build_mmio_write(__writew, "w", unsigned short, "r", )
+build_mmio_write(__writel, "l", unsigned int, "r", )
+
+#define readb readb
+#define readw readw
+#define readl readl
+#define readb_relaxed(a) __readb(a)
+#define readw_relaxed(a) __readw(a)
+#define readl_relaxed(a) __readl(a)
+#define __raw_readb __readb
+#define __raw_readw __readw
+#define __raw_readl __readl
+
+#define writeb writeb
+#define writew writew
+#define writel writel
+#define writeb_relaxed(v, a) __writeb(v, a)
+#define writew_relaxed(v, a) __writew(v, a)
+#define writel_relaxed(v, a) __writel(v, a)
+#define __raw_writeb __writeb
+#define __raw_writew __writew
+#define __raw_writel __writel
+
+#ifdef __x86_64__
+
+build_mmio_read(readq, "q", u64, "=r", :"memory")
+build_mmio_read(__readq, "q", u64, "=r", )
+build_mmio_write(writeq, "q", u64, "r", :"memory")
+build_mmio_write(__writeq, "q", u64, "r", )
+
+#define readq_relaxed(a) __readq(a)
+#define writeq_relaxed(v, a) __writeq(v, a)
+
+#define __raw_readq __readq
+#define __raw_writeq __writeq
+
+/* Let people know that we have them */
+#define readq readq
+#define writeq writeq
+
+#endif /* __x86_64__ */
+
+#include <asm-generic/io.h>
+
+/**
+ * iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units
+ * @dst: destination, in MMIO space (must be 512-bit aligned)
+ * @src: source
+ * @count: number of 512 bits quantities to submit
+ *
+ * Submit data from kernel space to MMIO space, in units of 512 bits at a
+ * time. Order of access is not guaranteed, nor is a memory barrier
+ * performed afterwards.
+ *
+ * Warning: Do not use this helper unless your driver has checked that the CPU
+ * instruction is supported on the platform.
+ */
+static inline void iosubmit_cmds512(void __iomem *dst, const void *src,
+ size_t count)
+{
+ const u8 *from = src;
+ const u8 *end = from + count * 64;
+
+ while (from < end) {
+ movdir64b(dst, from);
+ from += 64;
+ }
+}
+
+#endif /* _TOOLS_ASM_X86_IO_H */
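iosubmit_cmds512() above just walks the source buffer in 64-byte steps and issues one MOVDIR64B per step; as its comment says, it gives no ordering or completion guarantee. A hedged usage sketch (the portal name and descriptor type are invented, and a real caller must first confirm the MOVDIR64B CPUID feature bit):

#include <stddef.h>     /* size_t */

struct wq_desc { unsigned char bytes[64]; };    /* one 512-bit work descriptor */

/* Submit n descriptors to a device portal; dst must be 512-bit aligned. */
static void submit_batch(void __iomem *wq_portal, const struct wq_desc *descs, size_t n)
{
        /* descs[] is assumed to be fully populated by the caller. */
        iosubmit_cmds512(wq_portal, descs, n);  /* n x 64-byte direct stores */
}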
diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
index f1bd7b91b3c6..f627196eb796 100644
--- a/tools/arch/x86/include/asm/msr-index.h
+++ b/tools/arch/x86/include/asm/msr-index.h
@@ -25,6 +25,7 @@
#define _EFER_SVME 12 /* Enable virtualization */
#define _EFER_LMSLE 13 /* Long Mode Segment Limit Enable */
#define _EFER_FFXSR 14 /* Enable Fast FXSAVE/FXRSTOR */
+#define _EFER_TCE 15 /* Enable Translation Cache Extensions */
#define _EFER_AUTOIBRS 21 /* Enable Automatic IBRS */
#define EFER_SCE (1<<_EFER_SCE)
@@ -34,10 +35,36 @@
#define EFER_SVME (1<<_EFER_SVME)
#define EFER_LMSLE (1<<_EFER_LMSLE)
#define EFER_FFXSR (1<<_EFER_FFXSR)
+#define EFER_TCE (1<<_EFER_TCE)
#define EFER_AUTOIBRS (1<<_EFER_AUTOIBRS)
-/* Intel MSRs. Some also available on other CPUs */
+/*
+ * Architectural memory types that are common to MTRRs, PAT, VMX MSRs, etc.
+ * Most MSRs support/allow only a subset of memory types, but the values
+ * themselves are common across all relevant MSRs.
+ */
+#define X86_MEMTYPE_UC 0ull /* Uncacheable, a.k.a. Strong Uncacheable */
+#define X86_MEMTYPE_WC 1ull /* Write Combining */
+/* RESERVED 2 */
+/* RESERVED 3 */
+#define X86_MEMTYPE_WT 4ull /* Write Through */
+#define X86_MEMTYPE_WP 5ull /* Write Protected */
+#define X86_MEMTYPE_WB 6ull /* Write Back */
+#define X86_MEMTYPE_UC_MINUS 7ull /* Weak Uncacheable (PAT only) */
+
+/* FRED MSRs */
+#define MSR_IA32_FRED_RSP0 0x1cc /* Level 0 stack pointer */
+#define MSR_IA32_FRED_RSP1 0x1cd /* Level 1 stack pointer */
+#define MSR_IA32_FRED_RSP2 0x1ce /* Level 2 stack pointer */
+#define MSR_IA32_FRED_RSP3 0x1cf /* Level 3 stack pointer */
+#define MSR_IA32_FRED_STKLVLS 0x1d0 /* Exception stack levels */
+#define MSR_IA32_FRED_SSP0 MSR_IA32_PL0_SSP /* Level 0 shadow stack pointer */
+#define MSR_IA32_FRED_SSP1 0x1d1 /* Level 1 shadow stack pointer */
+#define MSR_IA32_FRED_SSP2 0x1d2 /* Level 2 shadow stack pointer */
+#define MSR_IA32_FRED_SSP3 0x1d3 /* Level 3 shadow stack pointer */
+#define MSR_IA32_FRED_CONFIG 0x1d4 /* Entrypoint and interrupt stack level */
+/* Intel MSRs. Some also available on other CPUs */
#define MSR_TEST_CTRL 0x00000033
#define MSR_TEST_CTRL_SPLIT_LOCK_DETECT_BIT 29
#define MSR_TEST_CTRL_SPLIT_LOCK_DETECT BIT(MSR_TEST_CTRL_SPLIT_LOCK_DETECT_BIT)
@@ -50,10 +77,13 @@
#define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */
#define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */
#define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+#define SPEC_CTRL_BHI_DIS_S_SHIFT 10 /* Disable Branch History Injection behavior */
+#define SPEC_CTRL_BHI_DIS_S BIT(SPEC_CTRL_BHI_DIS_S_SHIFT)
/* A mask for bits which the kernel toggles when controlling mitigations */
#define SPEC_CTRL_MITIGATIONS_MASK (SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD \
- | SPEC_CTRL_RRSBA_DIS_S)
+ | SPEC_CTRL_RRSBA_DIS_S \
+ | SPEC_CTRL_BHI_DIS_S)
#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
#define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
@@ -152,6 +182,14 @@
* are restricted to targets in
* kernel.
*/
+#define ARCH_CAP_BHI_NO BIT(20) /*
+ * CPU is not affected by Branch
+ * History Injection.
+ */
+#define ARCH_CAP_XAPIC_DISABLE BIT(21) /*
+ * IA32_XAPIC_DISABLE_STATUS MSR
+ * supported
+ */
#define ARCH_CAP_PBRSB_NO BIT(24) /*
* Not susceptible to Post-Barrier
* Return Stack Buffer Predictions.
@@ -165,11 +203,22 @@
* CPU is not vulnerable to Gather
* Data Sampling (GDS).
*/
-
-#define ARCH_CAP_XAPIC_DISABLE BIT(21) /*
- * IA32_XAPIC_DISABLE_STATUS MSR
- * supported
+#define ARCH_CAP_RFDS_NO BIT(27) /*
+ * Not susceptible to Register
+ * File Data Sampling.
+ */
+#define ARCH_CAP_RFDS_CLEAR BIT(28) /*
+ * VERW clears CPU Register
+ * File.
*/
+#define ARCH_CAP_ITS_NO BIT_ULL(62) /*
+ * Not susceptible to
+ * Indirect Target Selection.
+ * This bit is not set by
+ * HW, but is synthesized by
+ * VMMs for guests to know
+ * their affected status.
+ */
#define MSR_IA32_FLUSH_CMD 0x0000010b
#define L1D_FLUSH BIT(0) /*
@@ -222,6 +271,8 @@
#define MSR_INTEGRITY_CAPS_ARRAY_BIST BIT(MSR_INTEGRITY_CAPS_ARRAY_BIST_BIT)
#define MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT 4
#define MSR_INTEGRITY_CAPS_PERIODIC_BIST BIT(MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT)
+#define MSR_INTEGRITY_CAPS_SBAF_BIT 8
+#define MSR_INTEGRITY_CAPS_SBAF BIT(MSR_INTEGRITY_CAPS_SBAF_BIT)
#define MSR_INTEGRITY_CAPS_SAF_GEN_MASK GENMASK_ULL(10, 9)
#define MSR_LBR_NHM_FROM 0x00000680
@@ -264,12 +315,14 @@
#define PERF_CAP_PT_IDX 16
#define MSR_PEBS_LD_LAT_THRESHOLD 0x000003f6
-#define PERF_CAP_PEBS_TRAP BIT_ULL(6)
-#define PERF_CAP_ARCH_REG BIT_ULL(7)
-#define PERF_CAP_PEBS_FORMAT 0xf00
-#define PERF_CAP_PEBS_BASELINE BIT_ULL(14)
-#define PERF_CAP_PEBS_MASK (PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \
- PERF_CAP_PEBS_FORMAT | PERF_CAP_PEBS_BASELINE)
+#define PERF_CAP_PEBS_TRAP BIT_ULL(6)
+#define PERF_CAP_ARCH_REG BIT_ULL(7)
+#define PERF_CAP_PEBS_FORMAT 0xf00
+#define PERF_CAP_PEBS_BASELINE BIT_ULL(14)
+#define PERF_CAP_PEBS_TIMING_INFO BIT_ULL(17)
+#define PERF_CAP_PEBS_MASK (PERF_CAP_PEBS_TRAP | PERF_CAP_ARCH_REG | \
+ PERF_CAP_PEBS_FORMAT | PERF_CAP_PEBS_BASELINE | \
+ PERF_CAP_PEBS_TIMING_INFO)
#define MSR_IA32_RTIT_CTL 0x00000570
#define RTIT_CTL_TRACEEN BIT(0)
@@ -338,6 +391,12 @@
#define MSR_IA32_CR_PAT 0x00000277
+#define PAT_VALUE(p0, p1, p2, p3, p4, p5, p6, p7) \
+ ((X86_MEMTYPE_ ## p0) | (X86_MEMTYPE_ ## p1 << 8) | \
+ (X86_MEMTYPE_ ## p2 << 16) | (X86_MEMTYPE_ ## p3 << 24) | \
+ (X86_MEMTYPE_ ## p4 << 32) | (X86_MEMTYPE_ ## p5 << 40) | \
+ (X86_MEMTYPE_ ## p6 << 48) | (X86_MEMTYPE_ ## p7 << 56))
+
#define MSR_IA32_DEBUGCTLMSR 0x000001d9
#define MSR_IA32_LASTBRANCHFROMIP 0x000001db
#define MSR_IA32_LASTBRANCHTOIP 0x000001dc
@@ -348,7 +407,8 @@
#define MSR_IA32_PASID_VALID BIT_ULL(31)
/* DEBUGCTLMSR bits (others vary by model): */
-#define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */
+#define DEBUGCTLMSR_LBR_BIT 0 /* last branch recording */
+#define DEBUGCTLMSR_LBR (1UL << DEBUGCTLMSR_LBR_BIT)
#define DEBUGCTLMSR_BTF_SHIFT 1
#define DEBUGCTLMSR_BTF (1UL << 1) /* single-step on branches */
#define DEBUGCTLMSR_BUS_LOCK_DETECT (1UL << 2)
@@ -361,6 +421,7 @@
#define DEBUGCTLMSR_FREEZE_PERFMON_ON_PMI (1UL << 12)
#define DEBUGCTLMSR_FREEZE_IN_SMM_BIT 14
#define DEBUGCTLMSR_FREEZE_IN_SMM (1UL << DEBUGCTLMSR_FREEZE_IN_SMM_BIT)
+#define DEBUGCTLMSR_RTM_DEBUG BIT(15)
#define MSR_PEBS_FRONTEND 0x000003f7
@@ -475,7 +536,7 @@
#define MSR_HWP_CAPABILITIES 0x00000771
#define MSR_HWP_REQUEST_PKG 0x00000772
#define MSR_HWP_INTERRUPT 0x00000773
-#define MSR_HWP_REQUEST 0x00000774
+#define MSR_HWP_REQUEST 0x00000774
#define MSR_HWP_STATUS 0x00000777
/* CPUID.6.EAX */
@@ -492,16 +553,16 @@
#define HWP_LOWEST_PERF(x) (((x) >> 24) & 0xff)
/* IA32_HWP_REQUEST */
-#define HWP_MIN_PERF(x) (x & 0xff)
-#define HWP_MAX_PERF(x) ((x & 0xff) << 8)
+#define HWP_MIN_PERF(x) (x & 0xff)
+#define HWP_MAX_PERF(x) ((x & 0xff) << 8)
#define HWP_DESIRED_PERF(x) ((x & 0xff) << 16)
-#define HWP_ENERGY_PERF_PREFERENCE(x) (((unsigned long long) x & 0xff) << 24)
+#define HWP_ENERGY_PERF_PREFERENCE(x) (((u64)x & 0xff) << 24)
#define HWP_EPP_PERFORMANCE 0x00
#define HWP_EPP_BALANCE_PERFORMANCE 0x80
#define HWP_EPP_BALANCE_POWERSAVE 0xC0
#define HWP_EPP_POWERSAVE 0xFF
-#define HWP_ACTIVITY_WINDOW(x) ((unsigned long long)(x & 0xff3) << 32)
-#define HWP_PACKAGE_CONTROL(x) ((unsigned long long)(x & 0x1) << 42)
+#define HWP_ACTIVITY_WINDOW(x) ((u64)(x & 0xff3) << 32)
+#define HWP_PACKAGE_CONTROL(x) ((u64)(x & 0x1) << 42)
/* IA32_HWP_STATUS */
#define HWP_GUARANTEED_CHANGE(x) (x & 0x1)
@@ -541,6 +602,16 @@
#define MSR_RELOAD_PMC0 0x000014c1
#define MSR_RELOAD_FIXED_CTR0 0x00001309
+/* V6 PMON MSR range */
+#define MSR_IA32_PMC_V6_GP0_CTR 0x1900
+#define MSR_IA32_PMC_V6_GP0_CFG_A 0x1901
+#define MSR_IA32_PMC_V6_GP0_CFG_B 0x1902
+#define MSR_IA32_PMC_V6_GP0_CFG_C 0x1903
+#define MSR_IA32_PMC_V6_FX0_CTR 0x1980
+#define MSR_IA32_PMC_V6_FX0_CFG_B 0x1982
+#define MSR_IA32_PMC_V6_FX0_CFG_C 0x1983
+#define MSR_IA32_PMC_V6_STEP 4
+
/* KeyID partitioning between MKTME and TDX */
#define MSR_IA32_MKTME_KEYID_PARTITIONING 0x00000087
@@ -555,10 +626,12 @@
#define MSR_AMD_PERF_CTL 0xc0010062
#define MSR_AMD_PERF_STATUS 0xc0010063
#define MSR_AMD_PSTATE_DEF_BASE 0xc0010064
+#define MSR_AMD64_GUEST_TSC_FREQ 0xc0010134
#define MSR_AMD64_OSVW_ID_LENGTH 0xc0010140
#define MSR_AMD64_OSVW_STATUS 0xc0010141
#define MSR_AMD_PPIN_CTL 0xc00102f0
#define MSR_AMD_PPIN 0xc00102f1
+#define MSR_AMD64_CPUID_FN_7 0xc0011002
#define MSR_AMD64_CPUID_FN_1 0xc0011004
#define MSR_AMD64_LS_CFG 0xc0011020
#define MSR_AMD64_DC_CFG 0xc0011022
@@ -591,36 +664,53 @@
#define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */
#define MSR_AMD64_SVM_AVIC_DOORBELL 0xc001011b
#define MSR_AMD64_VM_PAGE_FLUSH 0xc001011e
+#define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f
#define MSR_AMD64_SEV_ES_GHCB 0xc0010130
#define MSR_AMD64_SEV 0xc0010131
#define MSR_AMD64_SEV_ENABLED_BIT 0
-#define MSR_AMD64_SEV_ES_ENABLED_BIT 1
-#define MSR_AMD64_SEV_SNP_ENABLED_BIT 2
#define MSR_AMD64_SEV_ENABLED BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
+#define MSR_AMD64_SEV_ES_ENABLED_BIT 1
#define MSR_AMD64_SEV_ES_ENABLED BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT)
+#define MSR_AMD64_SEV_SNP_ENABLED_BIT 2
#define MSR_AMD64_SEV_SNP_ENABLED BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT)
-
-/* SNP feature bits enabled by the hypervisor */
-#define MSR_AMD64_SNP_VTOM BIT_ULL(3)
-#define MSR_AMD64_SNP_REFLECT_VC BIT_ULL(4)
-#define MSR_AMD64_SNP_RESTRICTED_INJ BIT_ULL(5)
-#define MSR_AMD64_SNP_ALT_INJ BIT_ULL(6)
-#define MSR_AMD64_SNP_DEBUG_SWAP BIT_ULL(7)
-#define MSR_AMD64_SNP_PREVENT_HOST_IBS BIT_ULL(8)
-#define MSR_AMD64_SNP_BTB_ISOLATION BIT_ULL(9)
-#define MSR_AMD64_SNP_VMPL_SSS BIT_ULL(10)
-#define MSR_AMD64_SNP_SECURE_TSC BIT_ULL(11)
-#define MSR_AMD64_SNP_VMGEXIT_PARAM BIT_ULL(12)
-#define MSR_AMD64_SNP_IBS_VIRT BIT_ULL(14)
-#define MSR_AMD64_SNP_VMSA_REG_PROTECTION BIT_ULL(16)
-#define MSR_AMD64_SNP_SMT_PROTECTION BIT_ULL(17)
-
-/* SNP feature bits reserved for future use. */
-#define MSR_AMD64_SNP_RESERVED_BIT13 BIT_ULL(13)
-#define MSR_AMD64_SNP_RESERVED_BIT15 BIT_ULL(15)
-#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 18)
-
-#define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f
+#define MSR_AMD64_SNP_VTOM_BIT 3
+#define MSR_AMD64_SNP_VTOM BIT_ULL(MSR_AMD64_SNP_VTOM_BIT)
+#define MSR_AMD64_SNP_REFLECT_VC_BIT 4
+#define MSR_AMD64_SNP_REFLECT_VC BIT_ULL(MSR_AMD64_SNP_REFLECT_VC_BIT)
+#define MSR_AMD64_SNP_RESTRICTED_INJ_BIT 5
+#define MSR_AMD64_SNP_RESTRICTED_INJ BIT_ULL(MSR_AMD64_SNP_RESTRICTED_INJ_BIT)
+#define MSR_AMD64_SNP_ALT_INJ_BIT 6
+#define MSR_AMD64_SNP_ALT_INJ BIT_ULL(MSR_AMD64_SNP_ALT_INJ_BIT)
+#define MSR_AMD64_SNP_DEBUG_SWAP_BIT 7
+#define MSR_AMD64_SNP_DEBUG_SWAP BIT_ULL(MSR_AMD64_SNP_DEBUG_SWAP_BIT)
+#define MSR_AMD64_SNP_PREVENT_HOST_IBS_BIT 8
+#define MSR_AMD64_SNP_PREVENT_HOST_IBS BIT_ULL(MSR_AMD64_SNP_PREVENT_HOST_IBS_BIT)
+#define MSR_AMD64_SNP_BTB_ISOLATION_BIT 9
+#define MSR_AMD64_SNP_BTB_ISOLATION BIT_ULL(MSR_AMD64_SNP_BTB_ISOLATION_BIT)
+#define MSR_AMD64_SNP_VMPL_SSS_BIT 10
+#define MSR_AMD64_SNP_VMPL_SSS BIT_ULL(MSR_AMD64_SNP_VMPL_SSS_BIT)
+#define MSR_AMD64_SNP_SECURE_TSC_BIT 11
+#define MSR_AMD64_SNP_SECURE_TSC BIT_ULL(MSR_AMD64_SNP_SECURE_TSC_BIT)
+#define MSR_AMD64_SNP_VMGEXIT_PARAM_BIT 12
+#define MSR_AMD64_SNP_VMGEXIT_PARAM BIT_ULL(MSR_AMD64_SNP_VMGEXIT_PARAM_BIT)
+#define MSR_AMD64_SNP_RESERVED_BIT13 BIT_ULL(13)
+#define MSR_AMD64_SNP_IBS_VIRT_BIT 14
+#define MSR_AMD64_SNP_IBS_VIRT BIT_ULL(MSR_AMD64_SNP_IBS_VIRT_BIT)
+#define MSR_AMD64_SNP_RESERVED_BIT15 BIT_ULL(15)
+#define MSR_AMD64_SNP_VMSA_REG_PROT_BIT 16
+#define MSR_AMD64_SNP_VMSA_REG_PROT BIT_ULL(MSR_AMD64_SNP_VMSA_REG_PROT_BIT)
+#define MSR_AMD64_SNP_SMT_PROT_BIT 17
+#define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT)
+#define MSR_AMD64_SNP_RESV_BIT 18
+#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT)
+#define MSR_AMD64_RMP_BASE 0xc0010132
+#define MSR_AMD64_RMP_END 0xc0010133
+#define MSR_AMD64_RMP_CFG 0xc0010136
+#define MSR_AMD64_SEG_RMP_ENABLED_BIT 0
+#define MSR_AMD64_SEG_RMP_ENABLED BIT_ULL(MSR_AMD64_SEG_RMP_ENABLED_BIT)
+#define MSR_AMD64_RMP_SEGMENT_SHIFT(x) (((x) & GENMASK_ULL(13, 8)) >> 8)
+
+#define MSR_SVSM_CAA 0xc001f000
/* AMD Collaborative Processor Performance Control MSRs */
#define MSR_AMD_CPPC_CAP1 0xc00102b0
@@ -629,26 +719,34 @@
#define MSR_AMD_CPPC_REQ 0xc00102b3
#define MSR_AMD_CPPC_STATUS 0xc00102b4
-#define AMD_CPPC_LOWEST_PERF(x) (((x) >> 0) & 0xff)
-#define AMD_CPPC_LOWNONLIN_PERF(x) (((x) >> 8) & 0xff)
-#define AMD_CPPC_NOMINAL_PERF(x) (((x) >> 16) & 0xff)
-#define AMD_CPPC_HIGHEST_PERF(x) (((x) >> 24) & 0xff)
+/* Masks for use with MSR_AMD_CPPC_CAP1 */
+#define AMD_CPPC_LOWEST_PERF_MASK GENMASK(7, 0)
+#define AMD_CPPC_LOWNONLIN_PERF_MASK GENMASK(15, 8)
+#define AMD_CPPC_NOMINAL_PERF_MASK GENMASK(23, 16)
+#define AMD_CPPC_HIGHEST_PERF_MASK GENMASK(31, 24)
-#define AMD_CPPC_MAX_PERF(x) (((x) & 0xff) << 0)
-#define AMD_CPPC_MIN_PERF(x) (((x) & 0xff) << 8)
-#define AMD_CPPC_DES_PERF(x) (((x) & 0xff) << 16)
-#define AMD_CPPC_ENERGY_PERF_PREF(x) (((x) & 0xff) << 24)
+/* Masks for use with MSR_AMD_CPPC_REQ */
+#define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0)
+#define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8)
+#define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16)
+#define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24)
/* AMD Performance Counter Global Status and Control MSRs */
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300
#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302
+/* AMD Hardware Feedback Support MSRs */
+#define MSR_AMD_WORKLOAD_CLASS_CONFIG 0xc0000500
+#define MSR_AMD_WORKLOAD_CLASS_ID 0xc0000501
+#define MSR_AMD_WORKLOAD_HRST 0xc0000502
+
/* AMD Last Branch Record MSRs */
#define MSR_AMD64_LBR_SELECT 0xc000010e
/* Zen4 */
#define MSR_ZEN4_BP_CFG 0xc001102e
+#define MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT 4
#define MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT 5
/* Fam 19h MSRs */
@@ -708,8 +806,15 @@
#define MSR_K8_TOP_MEM1 0xc001001a
#define MSR_K8_TOP_MEM2 0xc001001d
#define MSR_AMD64_SYSCFG 0xc0010010
-#define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23
+#define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23
#define MSR_AMD64_SYSCFG_MEM_ENCRYPT BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT)
+#define MSR_AMD64_SYSCFG_SNP_EN_BIT 24
+#define MSR_AMD64_SYSCFG_SNP_EN BIT_ULL(MSR_AMD64_SYSCFG_SNP_EN_BIT)
+#define MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT 25
+#define MSR_AMD64_SYSCFG_SNP_VMPL_EN BIT_ULL(MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT)
+#define MSR_AMD64_SYSCFG_MFDM_BIT 19
+#define MSR_AMD64_SYSCFG_MFDM BIT_ULL(MSR_AMD64_SYSCFG_MFDM_BIT)
+
#define MSR_K8_INT_PENDING_MSG 0xc0010055
/* C1E active bits in int pending message */
#define K8_INTP_C1E_ACTIVE_MASK 0x18000000
@@ -734,8 +839,11 @@
#define MSR_K7_HWCR_SMMLOCK BIT_ULL(MSR_K7_HWCR_SMMLOCK_BIT)
#define MSR_K7_HWCR_IRPERF_EN_BIT 30
#define MSR_K7_HWCR_IRPERF_EN BIT_ULL(MSR_K7_HWCR_IRPERF_EN_BIT)
+#define MSR_K7_HWCR_CPUID_USER_DIS_BIT 35
#define MSR_K7_FID_VID_CTL 0xc0010041
#define MSR_K7_FID_VID_STATUS 0xc0010042
+#define MSR_K7_HWCR_CPB_DIS_BIT 25
+#define MSR_K7_HWCR_CPB_DIS BIT_ULL(MSR_K7_HWCR_CPB_DIS_BIT)
/* K6 MSRs */
#define MSR_K6_WHCR 0xc0000082
@@ -1102,15 +1210,6 @@
#define MSR_IA32_VMX_VMFUNC 0x00000491
#define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492
-/* VMX_BASIC bits and bitmasks */
-#define VMX_BASIC_VMCS_SIZE_SHIFT 32
-#define VMX_BASIC_TRUE_CTLS (1ULL << 55)
-#define VMX_BASIC_64 0x0001000000000000LLU
-#define VMX_BASIC_MEM_TYPE_SHIFT 50
-#define VMX_BASIC_MEM_TYPE_MASK 0x003c000000000000LLU
-#define VMX_BASIC_MEM_TYPE_WB 6LLU
-#define VMX_BASIC_INOUT 0x0040000000000000LLU
-
/* Resctrl MSRs: */
/* - Intel: */
#define MSR_IA32_L3_QOS_CFG 0xc81
@@ -1119,6 +1218,7 @@
#define MSR_IA32_QM_CTR 0xc8e
#define MSR_IA32_PQR_ASSOC 0xc8f
#define MSR_IA32_L3_CBM_BASE 0xc90
+#define MSR_RMID_SNC_CONFIG 0xca0
#define MSR_IA32_L2_CBM_BASE 0xd10
#define MSR_IA32_MBA_THRTL_BASE 0xd50
@@ -1127,11 +1227,6 @@
#define MSR_IA32_SMBA_BW_BASE 0xc0000280
#define MSR_IA32_EVT_CFG_BASE 0xc0000400
-/* MSR_IA32_VMX_MISC bits */
-#define MSR_IA32_VMX_MISC_INTEL_PT (1ULL << 14)
-#define MSR_IA32_VMX_MISC_VMWRITE_SHADOW_RO_FIELDS (1ULL << 29)
-#define MSR_IA32_VMX_MISC_PREEMPTION_TIMER_SCALE 0x1F
-
/* AMD-V MSRs */
#define MSR_VM_CR 0xc0010114
#define MSR_VM_IGNNE 0xc0010115
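Two of the additions above lend themselves to quick worked examples. PAT_VALUE() packs eight X86_MEMTYPE_* codes into a 64-bit IA32_PAT image, one byte per entry, and the AMD_CPPC_*_MASK definitions are GENMASK() fields meant for the generic bitfield helpers rather than open-coded shifts. A hedged sketch of both, in kernel context (FIELD_GET() comes from <linux/bitfield.h>; the PAT layout shown is the documented power-on default, not a recommendation):

#include <linux/bitfield.h>     /* FIELD_GET() */

/* WB, WT, UC-, UC repeated twice is the architectural reset layout;
 * expanding PAT_VALUE() byte by byte gives 0x0007040600070406. */
#define PAT_RESET_IMAGE \
        PAT_VALUE(WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC)

/* Pull the highest-performance field out of MSR_AMD_CPPC_CAP1. */
static inline unsigned int cppc_highest_perf(u64 cap1)
{
        return FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, cap1);
}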
diff --git a/tools/arch/x86/include/asm/nops.h b/tools/arch/x86/include/asm/nops.h
index 1c1b7550fa55..cd94221d8335 100644
--- a/tools/arch/x86/include/asm/nops.h
+++ b/tools/arch/x86/include/asm/nops.h
@@ -82,7 +82,7 @@
#define ASM_NOP7 _ASM_BYTES(BYTES_NOP7)
#define ASM_NOP8 _ASM_BYTES(BYTES_NOP8)
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
extern const unsigned char * const x86_nops[];
#endif
diff --git a/tools/arch/x86/include/asm/orc_types.h b/tools/arch/x86/include/asm/orc_types.h
index 46d7e06763c9..e0125afa53fb 100644
--- a/tools/arch/x86/include/asm/orc_types.h
+++ b/tools/arch/x86/include/asm/orc_types.h
@@ -45,7 +45,7 @@
#define ORC_TYPE_REGS 3
#define ORC_TYPE_REGS_PARTIAL 4
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
#include <asm/byteorder.h>
/*
@@ -73,6 +73,6 @@ struct orc_entry {
#endif
} __packed;
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
#endif /* _ORC_TYPES_H */
diff --git a/tools/arch/x86/include/asm/pvclock-abi.h b/tools/arch/x86/include/asm/pvclock-abi.h
index 1436226efe3e..b9fece5fc96d 100644
--- a/tools/arch/x86/include/asm/pvclock-abi.h
+++ b/tools/arch/x86/include/asm/pvclock-abi.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_PVCLOCK_ABI_H
#define _ASM_X86_PVCLOCK_ABI_H
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
/*
* These structs MUST NOT be changed.
@@ -44,5 +44,5 @@ struct pvclock_wall_clock {
#define PVCLOCK_GUEST_STOPPED (1 << 1)
/* PVCLOCK_COUNTS_FROM_ZERO broke ABI and can't be used anymore. */
#define PVCLOCK_COUNTS_FROM_ZERO (1 << 2)
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
#endif /* _ASM_X86_PVCLOCK_ABI_H */
diff --git a/tools/arch/x86/include/asm/required-features.h b/tools/arch/x86/include/asm/required-features.h
deleted file mode 100644
index 7ba1726b71c7..000000000000
--- a/tools/arch/x86/include/asm/required-features.h
+++ /dev/null
@@ -1,104 +0,0 @@
-#ifndef _ASM_X86_REQUIRED_FEATURES_H
-#define _ASM_X86_REQUIRED_FEATURES_H
-
-/* Define minimum CPUID feature set for kernel These bits are checked
- really early to actually display a visible error message before the
- kernel dies. Make sure to assign features to the proper mask!
-
- Some requirements that are not in CPUID yet are also in the
- CONFIG_X86_MINIMUM_CPU_FAMILY which is checked too.
-
- The real information is in arch/x86/Kconfig.cpu, this just converts
- the CONFIGs into a bitmask */
-
-#ifndef CONFIG_MATH_EMULATION
-# define NEED_FPU (1<<(X86_FEATURE_FPU & 31))
-#else
-# define NEED_FPU 0
-#endif
-
-#if defined(CONFIG_X86_PAE) || defined(CONFIG_X86_64)
-# define NEED_PAE (1<<(X86_FEATURE_PAE & 31))
-#else
-# define NEED_PAE 0
-#endif
-
-#ifdef CONFIG_X86_CMPXCHG64
-# define NEED_CX8 (1<<(X86_FEATURE_CX8 & 31))
-#else
-# define NEED_CX8 0
-#endif
-
-#if defined(CONFIG_X86_CMOV) || defined(CONFIG_X86_64)
-# define NEED_CMOV (1<<(X86_FEATURE_CMOV & 31))
-#else
-# define NEED_CMOV 0
-#endif
-
-# define NEED_3DNOW 0
-
-#if defined(CONFIG_X86_P6_NOP) || defined(CONFIG_X86_64)
-# define NEED_NOPL (1<<(X86_FEATURE_NOPL & 31))
-#else
-# define NEED_NOPL 0
-#endif
-
-#ifdef CONFIG_MATOM
-# define NEED_MOVBE (1<<(X86_FEATURE_MOVBE & 31))
-#else
-# define NEED_MOVBE 0
-#endif
-
-#ifdef CONFIG_X86_64
-#ifdef CONFIG_PARAVIRT_XXL
-/* Paravirtualized systems may not have PSE or PGE available */
-#define NEED_PSE 0
-#define NEED_PGE 0
-#else
-#define NEED_PSE (1<<(X86_FEATURE_PSE) & 31)
-#define NEED_PGE (1<<(X86_FEATURE_PGE) & 31)
-#endif
-#define NEED_MSR (1<<(X86_FEATURE_MSR & 31))
-#define NEED_FXSR (1<<(X86_FEATURE_FXSR & 31))
-#define NEED_XMM (1<<(X86_FEATURE_XMM & 31))
-#define NEED_XMM2 (1<<(X86_FEATURE_XMM2 & 31))
-#define NEED_LM (1<<(X86_FEATURE_LM & 31))
-#else
-#define NEED_PSE 0
-#define NEED_MSR 0
-#define NEED_PGE 0
-#define NEED_FXSR 0
-#define NEED_XMM 0
-#define NEED_XMM2 0
-#define NEED_LM 0
-#endif
-
-#define REQUIRED_MASK0 (NEED_FPU|NEED_PSE|NEED_MSR|NEED_PAE|\
- NEED_CX8|NEED_PGE|NEED_FXSR|NEED_CMOV|\
- NEED_XMM|NEED_XMM2)
-#define SSE_MASK (NEED_XMM|NEED_XMM2)
-
-#define REQUIRED_MASK1 (NEED_LM|NEED_3DNOW)
-
-#define REQUIRED_MASK2 0
-#define REQUIRED_MASK3 (NEED_NOPL)
-#define REQUIRED_MASK4 (NEED_MOVBE)
-#define REQUIRED_MASK5 0
-#define REQUIRED_MASK6 0
-#define REQUIRED_MASK7 0
-#define REQUIRED_MASK8 0
-#define REQUIRED_MASK9 0
-#define REQUIRED_MASK10 0
-#define REQUIRED_MASK11 0
-#define REQUIRED_MASK12 0
-#define REQUIRED_MASK13 0
-#define REQUIRED_MASK14 0
-#define REQUIRED_MASK15 0
-#define REQUIRED_MASK16 0
-#define REQUIRED_MASK17 0
-#define REQUIRED_MASK18 0
-#define REQUIRED_MASK19 0
-#define REQUIRED_MASK20 0
-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
-
-#endif /* _ASM_X86_REQUIRED_FEATURES_H */
diff --git a/tools/arch/x86/include/asm/special_insns.h b/tools/arch/x86/include/asm/special_insns.h
new file mode 100644
index 000000000000..04af42a99c38
--- /dev/null
+++ b/tools/arch/x86/include/asm/special_insns.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOOLS_ASM_X86_SPECIAL_INSNS_H
+#define _TOOLS_ASM_X86_SPECIAL_INSNS_H
+
+/* The dst parameter must be 64-byte aligned */
+static inline void movdir64b(void *dst, const void *src)
+{
+ const struct { char _[64]; } *__src = src;
+ struct { char _[64]; } *__dst = dst;
+
+ /*
+ * MOVDIR64B %(rdx), rax.
+ *
+ * Both __src and __dst must be memory constraints in order to tell the
+ * compiler that no other memory accesses should be reordered around
+ * this one.
+ *
+ * Also, both must be supplied as lvalues because this tells
+ * the compiler what the object is (its size) the instruction accesses.
+ * I.e., not the pointers but what they point to, thus the deref'ing '*'.
+ */
+ asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+ : "+m" (*__dst)
+ : "m" (*__src), "a" (__dst), "d" (__src));
+}
+
+#endif /* _TOOLS_ASM_X86_SPECIAL_INSNS_H */
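movdir64b() above depends on two properties its comment spells out: both operands are passed as memory operands so the compiler keeps surrounding accesses ordered relative to the store, and the destination must be 64-byte aligned. A hedged guard a caller might wrap around it (illustrative only; IS_ALIGNED() is the usual kernel helper, assumed available here, and the sparse __iomem cast is glossed over):

#include <errno.h>
#include <linux/align.h>        /* IS_ALIGNED(), assumed present in tools/include */

static int movdir64b_checked(void __iomem *dst, const void *src)
{
        if (!IS_ALIGNED((unsigned long)dst, 64))        /* 64-byte MMIO slot required */
                return -EINVAL;

        movdir64b(dst, src);
        return 0;
}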
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index a448d0964fc0..0f15d683817d 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -7,6 +7,8 @@
*
*/
+#include <linux/const.h>
+#include <linux/bits.h>
#include <linux/types.h>
#include <linux/ioctl.h>
#include <linux/stddef.h>
@@ -40,7 +42,6 @@
#define __KVM_HAVE_IRQ_LINE
#define __KVM_HAVE_MSI
#define __KVM_HAVE_USER_NMI
-#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_MSIX
#define __KVM_HAVE_MCE
#define __KVM_HAVE_PIT_STATE2
@@ -49,7 +50,6 @@
#define __KVM_HAVE_DEBUGREGS
#define __KVM_HAVE_XSAVE
#define __KVM_HAVE_XCRS
-#define __KVM_HAVE_READONLY_MEM
/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
@@ -106,6 +106,7 @@ struct kvm_ioapic_state {
#define KVM_RUN_X86_SMM (1 << 0)
#define KVM_RUN_X86_BUS_LOCK (1 << 1)
+#define KVM_RUN_X86_GUEST_MODE (1 << 2)
/* for KVM_GET_REGS and KVM_SET_REGS */
struct kvm_regs {
@@ -438,6 +439,9 @@ struct kvm_sync_regs {
#define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4)
#define KVM_X86_QUIRK_FIX_HYPERCALL_INSN (1 << 5)
#define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS (1 << 6)
+#define KVM_X86_QUIRK_SLOT_ZAP_ALL (1 << 7)
+#define KVM_X86_QUIRK_STUFF_FEATURE_MSRS (1 << 8)
+#define KVM_X86_QUIRK_IGNORE_GUEST_PAT (1 << 9)
#define KVM_STATE_NESTED_FORMAT_VMX 0
#define KVM_STATE_NESTED_FORMAT_SVM 1
@@ -457,8 +461,13 @@ struct kvm_sync_regs {
#define KVM_STATE_VMX_PREEMPTION_TIMER_DEADLINE 0x00000001
-/* attributes for system fd (group 0) */
-#define KVM_X86_XCOMP_GUEST_SUPP 0
+/* vendor-independent attributes for system fd (group 0) */
+#define KVM_X86_GRP_SYSTEM 0
+# define KVM_X86_XCOMP_GUEST_SUPP 0
+
+/* vendor-specific groups and attributes for system fd */
+#define KVM_X86_GRP_SEV 1
+# define KVM_X86_SEV_VMSA_FEATURES 0
struct kvm_vmx_nested_state_data {
__u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE];
@@ -526,9 +535,363 @@ struct kvm_pmu_event_filter {
#define KVM_PMU_EVENT_ALLOW 0
#define KVM_PMU_EVENT_DENY 1
-#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS BIT(0)
+#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS _BITUL(0)
#define KVM_PMU_EVENT_FLAGS_VALID_MASK (KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+/* for KVM_CAP_MCE */
+struct kvm_x86_mce {
+ __u64 status;
+ __u64 addr;
+ __u64 misc;
+ __u64 mcg_status;
+ __u8 bank;
+ __u8 pad1[7];
+ __u64 pad2[3];
+};
+
+/* for KVM_CAP_XEN_HVM */
+#define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR (1 << 0)
+#define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL (1 << 1)
+#define KVM_XEN_HVM_CONFIG_SHARED_INFO (1 << 2)
+#define KVM_XEN_HVM_CONFIG_RUNSTATE (1 << 3)
+#define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL (1 << 4)
+#define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5)
+#define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG (1 << 6)
+#define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE (1 << 7)
+#define KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA (1 << 8)
+
+#define KVM_XEN_MSR_MIN_INDEX 0x40000000u
+#define KVM_XEN_MSR_MAX_INDEX 0x4fffffffu
+
+struct kvm_xen_hvm_config {
+ __u32 flags;
+ __u32 msr;
+ __u64 blob_addr_32;
+ __u64 blob_addr_64;
+ __u8 blob_size_32;
+ __u8 blob_size_64;
+ __u8 pad2[30];
+};
+
+struct kvm_xen_hvm_attr {
+ __u16 type;
+ __u16 pad[3];
+ union {
+ __u8 long_mode;
+ __u8 vector;
+ __u8 runstate_update_flag;
+ union {
+ __u64 gfn;
+#define KVM_XEN_INVALID_GFN ((__u64)-1)
+ __u64 hva;
+ } shared_info;
+ struct {
+ __u32 send_port;
+ __u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */
+ __u32 flags;
+#define KVM_XEN_EVTCHN_DEASSIGN (1 << 0)
+#define KVM_XEN_EVTCHN_UPDATE (1 << 1)
+#define KVM_XEN_EVTCHN_RESET (1 << 2)
+ /*
+ * Events sent by the guest are either looped back to
+ * the guest itself (potentially on a different port#)
+ * or signalled via an eventfd.
+ */
+ union {
+ struct {
+ __u32 port;
+ __u32 vcpu;
+ __u32 priority;
+ } port;
+ struct {
+ __u32 port; /* Zero for eventfd */
+ __s32 fd;
+ } eventfd;
+ __u32 padding[4];
+ } deliver;
+ } evtchn;
+ __u32 xen_version;
+ __u64 pad[8];
+ } u;
+};
+
+
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
+#define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0
+#define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1
+#define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR 0x2
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
+#define KVM_XEN_ATTR_TYPE_EVTCHN 0x3
+#define KVM_XEN_ATTR_TYPE_XEN_VERSION 0x4
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */
+#define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG 0x5
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */
+#define KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA 0x6
+
+struct kvm_xen_vcpu_attr {
+ __u16 type;
+ __u16 pad[3];
+ union {
+ __u64 gpa;
+#define KVM_XEN_INVALID_GPA ((__u64)-1)
+ __u64 hva;
+ __u64 pad[8];
+ struct {
+ __u64 state;
+ __u64 state_entry_time;
+ __u64 time_running;
+ __u64 time_runnable;
+ __u64 time_blocked;
+ __u64 time_offline;
+ } runstate;
+ __u32 vcpu_id;
+ struct {
+ __u32 port;
+ __u32 priority;
+ __u64 expires_ns;
+ } timer;
+ __u8 vector;
+ } u;
+};
+
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO 0x0
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO 0x1
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR 0x2
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT 0x3
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA 0x4
+#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID 0x6
+#define KVM_XEN_VCPU_ATTR_TYPE_TIMER 0x7
+#define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR 0x8
+/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */
+#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO_HVA 0x9
+
+/* Secure Encrypted Virtualization command */
+enum sev_cmd_id {
+ /* Guest initialization commands */
+ KVM_SEV_INIT = 0,
+ KVM_SEV_ES_INIT,
+ /* Guest launch commands */
+ KVM_SEV_LAUNCH_START,
+ KVM_SEV_LAUNCH_UPDATE_DATA,
+ KVM_SEV_LAUNCH_UPDATE_VMSA,
+ KVM_SEV_LAUNCH_SECRET,
+ KVM_SEV_LAUNCH_MEASURE,
+ KVM_SEV_LAUNCH_FINISH,
+ /* Guest migration commands (outgoing) */
+ KVM_SEV_SEND_START,
+ KVM_SEV_SEND_UPDATE_DATA,
+ KVM_SEV_SEND_UPDATE_VMSA,
+ KVM_SEV_SEND_FINISH,
+ /* Guest migration commands (incoming) */
+ KVM_SEV_RECEIVE_START,
+ KVM_SEV_RECEIVE_UPDATE_DATA,
+ KVM_SEV_RECEIVE_UPDATE_VMSA,
+ KVM_SEV_RECEIVE_FINISH,
+ /* Guest status and debug commands */
+ KVM_SEV_GUEST_STATUS,
+ KVM_SEV_DBG_DECRYPT,
+ KVM_SEV_DBG_ENCRYPT,
+ /* Guest certificates commands */
+ KVM_SEV_CERT_EXPORT,
+ /* Attestation report */
+ KVM_SEV_GET_ATTESTATION_REPORT,
+ /* Guest Migration Extension */
+ KVM_SEV_SEND_CANCEL,
+
+ /* Second time is the charm; improved versions of the above ioctls. */
+ KVM_SEV_INIT2,
+
+ /* SNP-specific commands */
+ KVM_SEV_SNP_LAUNCH_START = 100,
+ KVM_SEV_SNP_LAUNCH_UPDATE,
+ KVM_SEV_SNP_LAUNCH_FINISH,
+
+ KVM_SEV_NR_MAX,
+};
+
+struct kvm_sev_cmd {
+ __u32 id;
+ __u32 pad0;
+ __u64 data;
+ __u32 error;
+ __u32 sev_fd;
+};
+
+struct kvm_sev_init {
+ __u64 vmsa_features;
+ __u32 flags;
+ __u16 ghcb_version;
+ __u16 pad1;
+ __u32 pad2[8];
+};
+
+struct kvm_sev_launch_start {
+ __u32 handle;
+ __u32 policy;
+ __u64 dh_uaddr;
+ __u32 dh_len;
+ __u32 pad0;
+ __u64 session_uaddr;
+ __u32 session_len;
+ __u32 pad1;
+};
+
+struct kvm_sev_launch_update_data {
+ __u64 uaddr;
+ __u32 len;
+ __u32 pad0;
+};
+
+
+struct kvm_sev_launch_secret {
+ __u64 hdr_uaddr;
+ __u32 hdr_len;
+ __u32 pad0;
+ __u64 guest_uaddr;
+ __u32 guest_len;
+ __u32 pad1;
+ __u64 trans_uaddr;
+ __u32 trans_len;
+ __u32 pad2;
+};
+
+struct kvm_sev_launch_measure {
+ __u64 uaddr;
+ __u32 len;
+ __u32 pad0;
+};
+
+struct kvm_sev_guest_status {
+ __u32 handle;
+ __u32 policy;
+ __u32 state;
+};
+
+struct kvm_sev_dbg {
+ __u64 src_uaddr;
+ __u64 dst_uaddr;
+ __u32 len;
+ __u32 pad0;
+};
+
+struct kvm_sev_attestation_report {
+ __u8 mnonce[16];
+ __u64 uaddr;
+ __u32 len;
+ __u32 pad0;
+};
+
+struct kvm_sev_send_start {
+ __u32 policy;
+ __u32 pad0;
+ __u64 pdh_cert_uaddr;
+ __u32 pdh_cert_len;
+ __u32 pad1;
+ __u64 plat_certs_uaddr;
+ __u32 plat_certs_len;
+ __u32 pad2;
+ __u64 amd_certs_uaddr;
+ __u32 amd_certs_len;
+ __u32 pad3;
+ __u64 session_uaddr;
+ __u32 session_len;
+ __u32 pad4;
+};
+
+struct kvm_sev_send_update_data {
+ __u64 hdr_uaddr;
+ __u32 hdr_len;
+ __u32 pad0;
+ __u64 guest_uaddr;
+ __u32 guest_len;
+ __u32 pad1;
+ __u64 trans_uaddr;
+ __u32 trans_len;
+ __u32 pad2;
+};
+
+struct kvm_sev_receive_start {
+ __u32 handle;
+ __u32 policy;
+ __u64 pdh_uaddr;
+ __u32 pdh_len;
+ __u32 pad0;
+ __u64 session_uaddr;
+ __u32 session_len;
+ __u32 pad1;
+};
+
+struct kvm_sev_receive_update_data {
+ __u64 hdr_uaddr;
+ __u32 hdr_len;
+ __u32 pad0;
+ __u64 guest_uaddr;
+ __u32 guest_len;
+ __u32 pad1;
+ __u64 trans_uaddr;
+ __u32 trans_len;
+ __u32 pad2;
+};
+
+struct kvm_sev_snp_launch_start {
+ __u64 policy;
+ __u8 gosvw[16];
+ __u16 flags;
+ __u8 pad0[6];
+ __u64 pad1[4];
+};
+
+/* Kept in sync with firmware values for simplicity. */
+#define KVM_SEV_PAGE_TYPE_INVALID 0x0
+#define KVM_SEV_SNP_PAGE_TYPE_NORMAL 0x1
+#define KVM_SEV_SNP_PAGE_TYPE_ZERO 0x3
+#define KVM_SEV_SNP_PAGE_TYPE_UNMEASURED 0x4
+#define KVM_SEV_SNP_PAGE_TYPE_SECRETS 0x5
+#define KVM_SEV_SNP_PAGE_TYPE_CPUID 0x6
+
+struct kvm_sev_snp_launch_update {
+ __u64 gfn_start;
+ __u64 uaddr;
+ __u64 len;
+ __u8 type;
+ __u8 pad0;
+ __u16 flags;
+ __u32 pad1;
+ __u64 pad2[4];
+};
+
+#define KVM_SEV_SNP_ID_BLOCK_SIZE 96
+#define KVM_SEV_SNP_ID_AUTH_SIZE 4096
+#define KVM_SEV_SNP_FINISH_DATA_SIZE 32
+
+struct kvm_sev_snp_launch_finish {
+ __u64 id_block_uaddr;
+ __u64 id_auth_uaddr;
+ __u8 id_block_en;
+ __u8 auth_key_en;
+ __u8 vcek_disabled;
+ __u8 host_data[KVM_SEV_SNP_FINISH_DATA_SIZE];
+ __u8 pad0[3];
+ __u16 flags;
+ __u64 pad1[4];
+};
+
+#define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0)
+#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1)
+
+struct kvm_hyperv_eventfd {
+ __u32 conn_id;
+ __s32 fd;
+ __u32 flags;
+ __u32 padding[3];
+};
+
+#define KVM_HYPERV_CONN_ID_MASK 0x00ffffff
+#define KVM_HYPERV_EVENTFD_DEASSIGN (1 << 0)
+
/*
* Masked event layout.
* Bits Description
@@ -549,10 +912,10 @@ struct kvm_pmu_event_filter {
((__u64)(!!(exclude)) << 55))
#define KVM_PMU_MASKED_ENTRY_EVENT_SELECT \
- (GENMASK_ULL(7, 0) | GENMASK_ULL(35, 32))
-#define KVM_PMU_MASKED_ENTRY_UMASK_MASK (GENMASK_ULL(63, 56))
-#define KVM_PMU_MASKED_ENTRY_UMASK_MATCH (GENMASK_ULL(15, 8))
-#define KVM_PMU_MASKED_ENTRY_EXCLUDE (BIT_ULL(55))
+ (__GENMASK_ULL(7, 0) | __GENMASK_ULL(35, 32))
+#define KVM_PMU_MASKED_ENTRY_UMASK_MASK (__GENMASK_ULL(63, 56))
+#define KVM_PMU_MASKED_ENTRY_UMASK_MATCH (__GENMASK_ULL(15, 8))
+#define KVM_PMU_MASKED_ENTRY_EXCLUDE (_BITULL(55))
#define KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT (56)
/* for KVM_{GET,SET,HAS}_DEVICE_ATTR */
@@ -560,9 +923,89 @@ struct kvm_pmu_event_filter {
#define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
/* x86-specific KVM_EXIT_HYPERCALL flags. */
-#define KVM_EXIT_HYPERCALL_LONG_MODE BIT(0)
+#define KVM_EXIT_HYPERCALL_LONG_MODE _BITULL(0)
#define KVM_X86_DEFAULT_VM 0
#define KVM_X86_SW_PROTECTED_VM 1
+#define KVM_X86_SEV_VM 2
+#define KVM_X86_SEV_ES_VM 3
+#define KVM_X86_SNP_VM 4
+#define KVM_X86_TDX_VM 5
+
+/* Trust Domain eXtension sub-ioctl() commands. */
+enum kvm_tdx_cmd_id {
+ KVM_TDX_CAPABILITIES = 0,
+ KVM_TDX_INIT_VM,
+ KVM_TDX_INIT_VCPU,
+ KVM_TDX_INIT_MEM_REGION,
+ KVM_TDX_FINALIZE_VM,
+ KVM_TDX_GET_CPUID,
+
+ KVM_TDX_CMD_NR_MAX,
+};
+
+struct kvm_tdx_cmd {
+ /* enum kvm_tdx_cmd_id */
+ __u32 id;
+ /* flags for sub-command. If sub-command doesn't use this, set zero. */
+ __u32 flags;
+ /*
+ * data for each sub-command. An immediate or a pointer to the actual
+ * data in process virtual address. If sub-command doesn't use it,
+ * set zero.
+ */
+ __u64 data;
+ /*
+ * Auxiliary error code. The sub-command may return TDX SEAMCALL
+ * status code in addition to -Exxx.
+ */
+ __u64 hw_error;
+};
+
+struct kvm_tdx_capabilities {
+ __u64 supported_attrs;
+ __u64 supported_xfam;
+
+ __u64 kernel_tdvmcallinfo_1_r11;
+ __u64 user_tdvmcallinfo_1_r11;
+ __u64 kernel_tdvmcallinfo_1_r12;
+ __u64 user_tdvmcallinfo_1_r12;
+
+ __u64 reserved[250];
+
+ /* Configurable CPUID bits for userspace */
+ struct kvm_cpuid2 cpuid;
+};
+
+struct kvm_tdx_init_vm {
+ __u64 attributes;
+ __u64 xfam;
+ __u64 mrconfigid[6]; /* sha384 digest */
+ __u64 mrowner[6]; /* sha384 digest */
+ __u64 mrownerconfig[6]; /* sha384 digest */
+
+ /* The total space for TD_PARAMS before the CPUIDs is 256 bytes */
+ __u64 reserved[12];
+
+ /*
+ * Call KVM_TDX_INIT_VM before vcpu creation, thus before
+ * KVM_SET_CPUID2.
+ * This configuration supersedes KVM_SET_CPUID2s for VCPUs because the
+ * TDX module directly virtualizes those CPUIDs without VMM. The user
+ * space VMM, e.g. qemu, should make KVM_SET_CPUID2 consistent with
+ * those values. If it doesn't, KVM may have wrong idea of vCPUIDs of
+ * the guest, and KVM may wrongly emulate CPUIDs or MSRs that the TDX
+ * module doesn't virtualize.
+ */
+ struct kvm_cpuid2 cpuid;
+};
+
+#define KVM_TDX_MEASURE_MEMORY_REGION _BITULL(0)
+
+struct kvm_tdx_init_mem_region {
+ __u64 source_addr;
+ __u64 gpa;
+ __u64 nr_pages;
+};
#endif /* _ASM_X86_KVM_H */
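The masked-event layout documented above (event select, umask mask, umask match, exclude in bit 55) is what KVM_PMU_ENCODE_MASKED_ENTRY() packs into each events[] slot once KVM_PMU_EVENT_FLAG_MASKED_EVENTS is set. A hedged sketch of building such a filter and handing it to the VM fd (field names follow the uapi struct kvm_pmu_event_filter defined earlier in this header; the event values are arbitrary examples):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Deny event select 0xC0 whenever its unit mask is exactly 0x01. */
static int set_masked_deny_filter(int vm_fd)
{
        struct kvm_pmu_event_filter *f;
        int ret;

        f = calloc(1, sizeof(*f) + sizeof(__u64));
        if (!f)
                return -1;

        f->action    = KVM_PMU_EVENT_DENY;
        f->flags     = KVM_PMU_EVENT_FLAG_MASKED_EVENTS;
        f->nevents   = 1;
        f->events[0] = KVM_PMU_ENCODE_MASKED_ENTRY(0xC0,  /* event select */
                                                   0xFF,  /* umask mask   */
                                                   0x01,  /* umask match  */
                                                   0);    /* exclude      */

        ret = ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, f);
        free(f);
        return ret;
}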
diff --git a/tools/arch/x86/include/uapi/asm/kvm_perf.h b/tools/arch/x86/include/uapi/asm/kvm_perf.h
deleted file mode 100644
index 125cf5cdf6c5..000000000000
--- a/tools/arch/x86/include/uapi/asm/kvm_perf.h
+++ /dev/null
@@ -1,17 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef _ASM_X86_KVM_PERF_H
-#define _ASM_X86_KVM_PERF_H
-
-#include <asm/svm.h>
-#include <asm/vmx.h>
-#include <asm/kvm.h>
-
-#define DECODE_STR_LEN 20
-
-#define VCPU_ID "vcpu_id"
-
-#define KVM_ENTRY_TRACE "kvm:kvm_entry"
-#define KVM_EXIT_TRACE "kvm:kvm_exit"
-#define KVM_EXIT_REASON "exit_reason"
-
-#endif /* _ASM_X86_KVM_PERF_H */
diff --git a/tools/arch/x86/include/uapi/asm/svm.h b/tools/arch/x86/include/uapi/asm/svm.h
index 80e1df482337..9c640a521a67 100644
--- a/tools/arch/x86/include/uapi/asm/svm.h
+++ b/tools/arch/x86/include/uapi/asm/svm.h
@@ -95,6 +95,8 @@
#define SVM_EXIT_CR14_WRITE_TRAP 0x09e
#define SVM_EXIT_CR15_WRITE_TRAP 0x09f
#define SVM_EXIT_INVPCID 0x0a2
+#define SVM_EXIT_BUS_LOCK 0x0a5
+#define SVM_EXIT_IDLE_HLT 0x0a6
#define SVM_EXIT_NPF 0x400
#define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401
#define SVM_EXIT_AVIC_UNACCELERATED_ACCESS 0x402
@@ -115,6 +117,7 @@
#define SVM_VMGEXIT_AP_CREATE_ON_INIT 0
#define SVM_VMGEXIT_AP_CREATE 1
#define SVM_VMGEXIT_AP_DESTROY 2
+#define SVM_VMGEXIT_SNP_RUN_VMPL 0x80000018
#define SVM_VMGEXIT_HV_FEATURES 0x8000fffd
#define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe
#define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \
@@ -223,6 +226,8 @@
{ SVM_EXIT_CR4_WRITE_TRAP, "write_cr4_trap" }, \
{ SVM_EXIT_CR8_WRITE_TRAP, "write_cr8_trap" }, \
{ SVM_EXIT_INVPCID, "invpcid" }, \
+ { SVM_EXIT_BUS_LOCK, "buslock" }, \
+ { SVM_EXIT_IDLE_HLT, "idle-halt" }, \
{ SVM_EXIT_NPF, "npf" }, \
{ SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \
diff --git a/tools/arch/x86/include/uapi/asm/unistd_32.h b/tools/arch/x86/include/uapi/asm/unistd_32.h
index 9de35df1afc3..63182a023e9d 100644
--- a/tools/arch/x86/include/uapi/asm/unistd_32.h
+++ b/tools/arch/x86/include/uapi/asm/unistd_32.h
@@ -11,6 +11,9 @@
#ifndef __NR_getpgid
#define __NR_getpgid 132
#endif
+#ifndef __NR_capget
+#define __NR_capget 184
+#endif
#ifndef __NR_gettid
#define __NR_gettid 224
#endif
diff --git a/tools/arch/x86/include/uapi/asm/unistd_64.h b/tools/arch/x86/include/uapi/asm/unistd_64.h
index d0f2043d7132..77311e8d1b5d 100644
--- a/tools/arch/x86/include/uapi/asm/unistd_64.h
+++ b/tools/arch/x86/include/uapi/asm/unistd_64.h
@@ -11,6 +11,9 @@
#ifndef __NR_getpgid
#define __NR_getpgid 121
#endif
+#ifndef __NR_capget
+#define __NR_capget 125
+#endif
#ifndef __NR_gettid
#define __NR_gettid 186
#endif
diff --git a/tools/arch/x86/include/uapi/asm/vmx.h b/tools/arch/x86/include/uapi/asm/vmx.h
index a5faf6d88f1b..f0f4a4cf84a7 100644
--- a/tools/arch/x86/include/uapi/asm/vmx.h
+++ b/tools/arch/x86/include/uapi/asm/vmx.h
@@ -34,6 +34,7 @@
#define EXIT_REASON_TRIPLE_FAULT 2
#define EXIT_REASON_INIT_SIGNAL 3
#define EXIT_REASON_SIPI_SIGNAL 4
+#define EXIT_REASON_OTHER_SMI 6
#define EXIT_REASON_INTERRUPT_WINDOW 7
#define EXIT_REASON_NMI_WINDOW 8
@@ -92,6 +93,7 @@
#define EXIT_REASON_TPAUSE 68
#define EXIT_REASON_BUS_LOCK 74
#define EXIT_REASON_NOTIFY 75
+#define EXIT_REASON_TDCALL 77
#define VMX_EXIT_REASONS \
{ EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \
@@ -155,7 +157,8 @@
{ EXIT_REASON_UMWAIT, "UMWAIT" }, \
{ EXIT_REASON_TPAUSE, "TPAUSE" }, \
{ EXIT_REASON_BUS_LOCK, "BUS_LOCK" }, \
- { EXIT_REASON_NOTIFY, "NOTIFY" }
+ { EXIT_REASON_NOTIFY, "NOTIFY" }, \
+ { EXIT_REASON_TDCALL, "TDCALL" }
#define VMX_EXIT_REASON_FLAGS \
{ VMX_EXIT_REASONS_FAILED_VMENTRY, "FAILED_VMENTRY" }
diff --git a/tools/arch/x86/intel_sdsi/intel_sdsi.c b/tools/arch/x86/intel_sdsi/intel_sdsi.c
index 2cd92761f171..766a5d26f534 100644
--- a/tools/arch/x86/intel_sdsi/intel_sdsi.c
+++ b/tools/arch/x86/intel_sdsi/intel_sdsi.c
@@ -43,7 +43,7 @@
#define METER_CERT_MAX_SIZE 4096
#define STATE_MAX_NUM_LICENSES 16
#define STATE_MAX_NUM_IN_BUNDLE (uint32_t)8
-#define METER_MAX_NUM_BUNDLES 8
+#define FEAT_LEN 5 /* 4 plus NUL terminator */
#define __round_mask(x, y) ((__typeof__(x))((y) - 1))
#define round_up(x, y) ((((x) - 1) | __round_mask(x, y)) + 1)
@@ -154,11 +154,12 @@ struct bundle_encoding {
};
struct meter_certificate {
- uint32_t block_signature;
- uint32_t counter_unit;
+ uint32_t signature;
+ uint32_t version;
uint64_t ppin;
+ uint32_t counter_unit;
uint32_t bundle_length;
- uint32_t reserved;
+ uint64_t reserved;
uint32_t mmrc_encoding;
uint32_t mmrc_counter;
};
@@ -167,6 +168,11 @@ struct bundle_encoding_counter {
uint32_t encoding;
uint32_t counter;
};
+#define METER_BUNDLE_SIZE sizeof(struct bundle_encoding_counter)
+#define BUNDLE_COUNT(length) ((length) / METER_BUNDLE_SIZE)
+#define METER_MAX_NUM_BUNDLES \
+ ((METER_CERT_MAX_SIZE - sizeof(struct meter_certificate)) / \
+ sizeof(struct bundle_encoding_counter))
struct sdsi_dev {
struct sdsi_regs regs;
@@ -179,6 +185,7 @@ struct sdsi_dev {
enum command {
CMD_SOCKET_INFO,
CMD_METER_CERT,
+ CMD_METER_CURRENT_CERT,
CMD_STATE_CERT,
CMD_PROV_AKC,
CMD_PROV_CAP,
@@ -316,24 +323,27 @@ static char *content_type(uint32_t type)
}
}
-static void get_feature(uint32_t encoding, char *feature)
+static void get_feature(uint32_t encoding, char feature[5])
{
char *name = (char *)&encoding;
+ feature[4] = '\0';
feature[3] = name[0];
feature[2] = name[1];
feature[1] = name[2];
feature[0] = name[3];
}
-static int sdsi_meter_cert_show(struct sdsi_dev *s)
+static int sdsi_meter_cert_show(struct sdsi_dev *s, bool show_current)
{
char buf[METER_CERT_MAX_SIZE] = {0};
struct bundle_encoding_counter *bec;
struct meter_certificate *mc;
uint32_t count = 0;
FILE *cert_ptr;
+ char *cert_fname;
int ret, size;
+ char name[FEAT_LEN];
ret = sdsi_update_registers(s);
if (ret)
@@ -341,7 +351,6 @@ static int sdsi_meter_cert_show(struct sdsi_dev *s)
if (!s->regs.en_features.sdsi) {
fprintf(stderr, "SDSi feature is present but not enabled.\n");
- fprintf(stderr, " Unable to read meter certificate\n");
return -1;
}
@@ -356,15 +365,17 @@ static int sdsi_meter_cert_show(struct sdsi_dev *s)
return ret;
}
- cert_ptr = fopen("meter_certificate", "r");
+ cert_fname = show_current ? "meter_current" : "meter_certificate";
+ cert_ptr = fopen(cert_fname, "r");
+
if (!cert_ptr) {
- perror("Could not open 'meter_certificate' file");
+ fprintf(stderr, "Could not open '%s' file: %s", cert_fname, strerror(errno));
return -1;
}
size = fread(buf, 1, sizeof(buf), cert_ptr);
if (!size) {
- fprintf(stderr, "Could not read 'meter_certificate' file\n");
+ fprintf(stderr, "Could not read '%s' file\n", cert_fname);
fclose(cert_ptr);
return -1;
}
@@ -375,32 +386,39 @@ static int sdsi_meter_cert_show(struct sdsi_dev *s)
printf("\n");
printf("Meter certificate for device %s\n", s->dev_name);
printf("\n");
- printf("Block Signature: 0x%x\n", mc->block_signature);
- printf("Count Unit: %dms\n", mc->counter_unit);
- printf("PPIN: 0x%lx\n", mc->ppin);
- printf("Feature Bundle Length: %d\n", mc->bundle_length);
- printf("MMRC encoding: %d\n", mc->mmrc_encoding);
- printf("MMRC counter: %d\n", mc->mmrc_counter);
- if (mc->bundle_length % 8) {
+
+ get_feature(mc->signature, name);
+ printf("Signature: %s\n", name);
+
+ printf("Version: %d\n", mc->version);
+ printf("Count Unit: %dms\n", mc->counter_unit);
+ printf("PPIN: 0x%lx\n", mc->ppin);
+ printf("Feature Bundle Length: %d\n", mc->bundle_length);
+
+ get_feature(mc->mmrc_encoding, name);
+ printf("MMRC encoding: %s\n", name);
+
+ printf("MMRC counter: %d\n", mc->mmrc_counter);
+ if (mc->bundle_length % METER_BUNDLE_SIZE) {
fprintf(stderr, "Invalid bundle length\n");
return -1;
}
- if (mc->bundle_length > METER_MAX_NUM_BUNDLES * 8) {
- fprintf(stderr, "More than %d bundles: %d\n",
- METER_MAX_NUM_BUNDLES, mc->bundle_length / 8);
+ if (mc->bundle_length > METER_MAX_NUM_BUNDLES * METER_BUNDLE_SIZE) {
+ fprintf(stderr, "More than %ld bundles: actual %ld\n",
+ METER_MAX_NUM_BUNDLES, BUNDLE_COUNT(mc->bundle_length));
return -1;
}
- bec = (void *)(mc) + sizeof(mc);
+ bec = (struct bundle_encoding_counter *)(mc + 1);
- printf("Number of Feature Counters: %d\n", mc->bundle_length / 8);
- while (count++ < mc->bundle_length / 8) {
- char feature[5];
+ printf("Number of Feature Counters: %ld\n", BUNDLE_COUNT(mc->bundle_length));
+ while (count < BUNDLE_COUNT(mc->bundle_length)) {
+ char feature[FEAT_LEN];
- feature[4] = '\0';
get_feature(bec[count].encoding, feature);
printf(" %s: %d\n", feature, bec[count].counter);
+ ++count;
}
return 0;
@@ -480,7 +498,7 @@ static int sdsi_state_cert_show(struct sdsi_dev *s)
sizeof(*lki) + // size of the license key info
offset; // offset to this blob content
struct bundle_encoding *bundle = (void *)(lbc) + sizeof(*lbc);
- char feature[5];
+ char feature[FEAT_LEN];
uint32_t i;
printf(" Blob %d:\n", count - 1);
@@ -493,8 +511,6 @@ static int sdsi_state_cert_show(struct sdsi_dev *s)
printf(" Blob revision ID: %u\n", lbc->rev_id);
printf(" Number of Features: %u\n", lbc->num_bundles);
- feature[4] = '\0';
-
for (i = 0; i < min(lbc->num_bundles, STATE_MAX_NUM_IN_BUNDLE); i++) {
get_feature(bundle[i].encoding, feature);
printf(" Feature %d: %s\n", i, feature);
@@ -725,7 +741,7 @@ static void sdsi_free_dev(struct sdsi_dev *s)
static void usage(char *prog)
{
- printf("Usage: %s [-l] [-d DEVNO [-i] [-s] [-m] [-a FILE] [-c FILE]]\n", prog);
+ printf("Usage: %s [-l] [-d DEVNO [-i] [-s] [-m | -C] [-a FILE] [-c FILE]\n", prog);
}
static void show_help(void)
@@ -734,8 +750,9 @@ static void show_help(void)
printf(" %-18s\t%s\n", "-l, --list", "list available On Demand devices");
printf(" %-18s\t%s\n", "-d, --devno DEVNO", "On Demand device number");
printf(" %-18s\t%s\n", "-i, --info", "show socket information");
- printf(" %-18s\t%s\n", "-s, --state", "show state certificate");
- printf(" %-18s\t%s\n", "-m, --meter", "show meter certificate");
+ printf(" %-18s\t%s\n", "-s, --state", "show state certificate data");
+ printf(" %-18s\t%s\n", "-m, --meter", "show meter certificate data");
+ printf(" %-18s\t%s\n", "-C, --meter_current", "show live unattested meter data");
printf(" %-18s\t%s\n", "-a, --akc FILE", "provision socket with AKC FILE");
printf(" %-18s\t%s\n", "-c, --cap FILE>", "provision socket with CAP FILE");
}
@@ -751,21 +768,22 @@ int main(int argc, char *argv[])
int option_index = 0;
static struct option long_options[] = {
- {"akc", required_argument, 0, 'a'},
- {"cap", required_argument, 0, 'c'},
- {"devno", required_argument, 0, 'd'},
- {"help", no_argument, 0, 'h'},
- {"info", no_argument, 0, 'i'},
- {"list", no_argument, 0, 'l'},
- {"meter", no_argument, 0, 'm'},
- {"state", no_argument, 0, 's'},
- {0, 0, 0, 0 }
+ {"akc", required_argument, 0, 'a'},
+ {"cap", required_argument, 0, 'c'},
+ {"devno", required_argument, 0, 'd'},
+ {"help", no_argument, 0, 'h'},
+ {"info", no_argument, 0, 'i'},
+ {"list", no_argument, 0, 'l'},
+ {"meter", no_argument, 0, 'm'},
+ {"meter_current", no_argument, 0, 'C'},
+ {"state", no_argument, 0, 's'},
+ {0, 0, 0, 0 }
};
progname = argv[0];
- while ((opt = getopt_long_only(argc, argv, "+a:c:d:hilms", long_options,
+ while ((opt = getopt_long_only(argc, argv, "+a:c:d:hilmCs", long_options,
&option_index)) != -1) {
switch (opt) {
case 'd':
@@ -781,6 +799,9 @@ int main(int argc, char *argv[])
case 'm':
command = CMD_METER_CERT;
break;
+ case 'C':
+ command = CMD_METER_CURRENT_CERT;
+ break;
case 's':
command = CMD_STATE_CERT;
break;
@@ -819,7 +840,10 @@ int main(int argc, char *argv[])
ret = sdsi_read_reg(s);
break;
case CMD_METER_CERT:
- ret = sdsi_meter_cert_show(s);
+ ret = sdsi_meter_cert_show(s, false);
+ break;
+ case CMD_METER_CURRENT_CERT:
+ ret = sdsi_meter_cert_show(s, true);
break;
case CMD_STATE_CERT:
ret = sdsi_state_cert_show(s);
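
Usage-wise, the new flag mirrors -m: -m still dumps the signed meter certificate, while -C requests the current, unattested counter values (per the help text above). Assuming the tool builds under its usual name, intel_sdsi, a read of device 0 would look like:

    $ intel_sdsi -d 0 -C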
diff --git a/tools/arch/x86/kcpuid/Makefile b/tools/arch/x86/kcpuid/Makefile
index 87b554fab14b..d0b4b0ed10ff 100644
--- a/tools/arch/x86/kcpuid/Makefile
+++ b/tools/arch/x86/kcpuid/Makefile
@@ -19,6 +19,6 @@ clean :
@rm -f kcpuid
install : kcpuid
- install -d $(DESTDIR)$(BINDIR)
+ install -d $(DESTDIR)$(BINDIR) $(DESTDIR)$(HWDATADIR)
install -m 755 -p kcpuid $(DESTDIR)$(BINDIR)/kcpuid
- install -m 444 -p cpuid.csv $(HWDATADIR)/cpuid.csv
+ install -m 444 -p cpuid.csv $(DESTDIR)$(HWDATADIR)/cpuid.csv
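
The effect of the two hunks above: a staged install now keeps every file under DESTDIR instead of writing cpuid.csv straight to the host's HWDATADIR. A quick check (the staging path is just an example; BINDIR and HWDATADIR resolve to whatever the Makefile defines):

    $ make -C tools/arch/x86/kcpuid
    $ make -C tools/arch/x86/kcpuid install DESTDIR=/tmp/stage
    # kcpuid    lands in /tmp/stage$(BINDIR)
    # cpuid.csv lands in /tmp/stage$(HWDATADIR), not in the live $(HWDATADIR)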
diff --git a/tools/arch/x86/kcpuid/cpuid.csv b/tools/arch/x86/kcpuid/cpuid.csv
index e0c25b75327e..8d925ce9750f 100644
--- a/tools/arch/x86/kcpuid/cpuid.csv
+++ b/tools/arch/x86/kcpuid/cpuid.csv
@@ -1,451 +1,1172 @@
-# The basic row format is:
-# LEAF, SUBLEAF, register_name, bits, short_name, long_description
-
-# Leaf 00H
- 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
-
-# Leaf 01H
- 1, 0, EAX, 3:0, stepping, Stepping ID
- 1, 0, EAX, 7:4, model, Model
- 1, 0, EAX, 11:8, family, Family ID
- 1, 0, EAX, 13:12, processor, Processor Type
- 1, 0, EAX, 19:16, model_ext, Extended Model ID
- 1, 0, EAX, 27:20, family_ext, Extended Family ID
-
- 1, 0, EBX, 7:0, brand, Brand Index
- 1, 0, EBX, 15:8, clflush_size, CLFLUSH line size (value * 8) in bytes
- 1, 0, EBX, 23:16, max_cpu_id, Maxim number of addressable logic cpu in this package
- 1, 0, EBX, 31:24, apic_id, Initial APIC ID
-
- 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
- 1, 0, ECX, 1, pclmulqdq, PCLMULQDQ instruction supported
- 1, 0, ECX, 2, dtes64, DS area uses 64-bit layout
- 1, 0, ECX, 3, mwait, MONITOR/MWAIT supported
- 1, 0, ECX, 4, ds_cpl, CPL Qualified Debug Store which allows for branch message storage qualified by CPL
- 1, 0, ECX, 5, vmx, Virtual Machine Extensions supported
- 1, 0, ECX, 6, smx, Safer Mode Extension supported
- 1, 0, ECX, 7, eist, Enhanced Intel SpeedStep Technology
- 1, 0, ECX, 8, tm2, Thermal Monitor 2
- 1, 0, ECX, 9, ssse3, Supplemental Streaming SIMD Extensions 3 (SSSE3)
- 1, 0, ECX, 10, l1_ctx_id, L1 data cache could be set to either adaptive mode or shared mode (check IA32_MISC_ENABLE bit 24 definition)
- 1, 0, ECX, 11, sdbg, IA32_DEBUG_INTERFACE MSR for silicon debug supported
- 1, 0, ECX, 12, fma, FMA extensions using YMM state supported
- 1, 0, ECX, 13, cmpxchg16b, 'CMPXCHG16B - Compare and Exchange Bytes' supported
- 1, 0, ECX, 14, xtpr_update, xTPR Update Control supported
- 1, 0, ECX, 15, pdcm, Perfmon and Debug Capability present
- 1, 0, ECX, 17, pcid, Process-Context Identifiers feature present
- 1, 0, ECX, 18, dca, Prefetching data from a memory mapped device supported
- 1, 0, ECX, 19, sse4_1, SSE4.1 feature present
- 1, 0, ECX, 20, sse4_2, SSE4.2 feature present
- 1, 0, ECX, 21, x2apic, x2APIC supported
- 1, 0, ECX, 22, movbe, MOVBE instruction supported
- 1, 0, ECX, 23, popcnt, POPCNT instruction supported
- 1, 0, ECX, 24, tsc_deadline_timer, LAPIC supports one-shot operation using a TSC deadline value
- 1, 0, ECX, 25, aesni, AESNI instruction supported
- 1, 0, ECX, 26, xsave, XSAVE/XRSTOR processor extended states (XSETBV/XGETBV/XCR0)
- 1, 0, ECX, 27, osxsave, OS has set CR4.OSXSAVE bit to enable XSETBV/XGETBV/XCR0
- 1, 0, ECX, 28, avx, AVX instruction supported
- 1, 0, ECX, 29, f16c, 16-bit floating-point conversion instruction supported
- 1, 0, ECX, 30, rdrand, RDRAND instruction supported
-
- 1, 0, EDX, 0, fpu, x87 FPU on chip
- 1, 0, EDX, 1, vme, Virtual-8086 Mode Enhancement
- 1, 0, EDX, 2, de, Debugging Extensions
- 1, 0, EDX, 3, pse, Page Size Extensions
- 1, 0, EDX, 4, tsc, Time Stamp Counter
- 1, 0, EDX, 5, msr, RDMSR and WRMSR Support
- 1, 0, EDX, 6, pae, Physical Address Extensions
- 1, 0, EDX, 7, mce, Machine Check Exception
- 1, 0, EDX, 8, cx8, CMPXCHG8B instr
- 1, 0, EDX, 9, apic, APIC on Chip
- 1, 0, EDX, 11, sep, SYSENTER and SYSEXIT instrs
- 1, 0, EDX, 12, mtrr, Memory Type Range Registers
- 1, 0, EDX, 13, pge, Page Global Bit
- 1, 0, EDX, 14, mca, Machine Check Architecture
- 1, 0, EDX, 15, cmov, Conditional Move Instrs
- 1, 0, EDX, 16, pat, Page Attribute Table
- 1, 0, EDX, 17, pse36, 36-Bit Page Size Extension
- 1, 0, EDX, 18, psn, Processor Serial Number
- 1, 0, EDX, 19, clflush, CLFLUSH instr
-# 1, 0, EDX, 20,
- 1, 0, EDX, 21, ds, Debug Store
- 1, 0, EDX, 22, acpi, Thermal Monitor and Software Controlled Clock Facilities
- 1, 0, EDX, 23, mmx, Intel MMX Technology
- 1, 0, EDX, 24, fxsr, XSAVE and FXRSTOR Instrs
- 1, 0, EDX, 25, sse, SSE
- 1, 0, EDX, 26, sse2, SSE2
- 1, 0, EDX, 27, ss, Self Snoop
- 1, 0, EDX, 28, hit, Max APIC IDs
- 1, 0, EDX, 29, tm, Thermal Monitor
-# 1, 0, EDX, 30,
- 1, 0, EDX, 31, pbe, Pending Break Enable
-
-# Leaf 02H
-# cache and TLB descriptor info
-
-# Leaf 03H
-# Precessor Serial Number, introduced on Pentium III, not valid for
-# latest models
-
-# Leaf 04H
-# thread/core and cache topology
- 4, 0, EAX, 4:0, cache_type, Cache type like instr/data or unified
- 4, 0, EAX, 7:5, cache_level, Cache Level (starts at 1)
- 4, 0, EAX, 8, cache_self_init, Cache Self Initialization
- 4, 0, EAX, 9, fully_associate, Fully Associative cache
-# 4, 0, EAX, 13:10, resvd, resvd
- 4, 0, EAX, 25:14, max_logical_id, Max number of addressable IDs for logical processors sharing the cache
- 4, 0, EAX, 31:26, max_phy_id, Max number of addressable IDs for processors in phy package
-
- 4, 0, EBX, 11:0, cache_linesize, Size of a cache line in bytes
- 4, 0, EBX, 21:12, cache_partition, Physical Line partitions
- 4, 0, EBX, 31:22, cache_ways, Ways of associativity
- 4, 0, ECX, 31:0, cache_sets, Number of Sets - 1
- 4, 0, EDX, 0, c_wbinvd, 1 means WBINVD/INVD is not ganranteed to act upon lower level caches of non-originating threads sharing this cache
- 4, 0, EDX, 1, c_incl, Whether cache is inclusive of lower cache level
- 4, 0, EDX, 2, c_comp_index, Complex Cache Indexing
-
-# Leaf 05H
-# MONITOR/MWAIT
- 5, 0, EAX, 15:0, min_mon_size, Smallest monitor line size in bytes
- 5, 0, EBX, 15:0, max_mon_size, Largest monitor line size in bytes
- 5, 0, ECX, 0, mwait_ext, Enum of Monitor-Mwait extensions supported
- 5, 0, ECX, 1, mwait_irq_break, Largest monitor line size in bytes
- 5, 0, EDX, 3:0, c0_sub_stats, Number of C0* sub C-states supported using MWAIT
- 5, 0, EDX, 7:4, c1_sub_stats, Number of C1* sub C-states supported using MWAIT
- 5, 0, EDX, 11:8, c2_sub_stats, Number of C2* sub C-states supported using MWAIT
- 5, 0, EDX, 15:12, c3_sub_stats, Number of C3* sub C-states supported using MWAIT
- 5, 0, EDX, 19:16, c4_sub_stats, Number of C4* sub C-states supported using MWAIT
- 5, 0, EDX, 23:20, c5_sub_stats, Number of C5* sub C-states supported using MWAIT
- 5, 0, EDX, 27:24, c6_sub_stats, Number of C6* sub C-states supported using MWAIT
- 5, 0, EDX, 31:28, c7_sub_stats, Number of C7* sub C-states supported using MWAIT
-
-# Leaf 06H
-# Thermal & Power Management
-
- 6, 0, EAX, 0, dig_temp, Digital temperature sensor supported
- 6, 0, EAX, 1, turbo, Intel Turbo Boost
- 6, 0, EAX, 2, arat, Always running APIC timer
-# 6, 0, EAX, 3, resv, Reserved
- 6, 0, EAX, 4, pln, Power limit notifications supported
- 6, 0, EAX, 5, ecmd, Clock modulation duty cycle extension supported
- 6, 0, EAX, 6, ptm, Package thermal management supported
- 6, 0, EAX, 7, hwp, HWP base register
- 6, 0, EAX, 8, hwp_notify, HWP notification
- 6, 0, EAX, 9, hwp_act_window, HWP activity window
- 6, 0, EAX, 10, hwp_energy, HWP energy performance preference
- 6, 0, EAX, 11, hwp_pkg_req, HWP package level request
-# 6, 0, EAX, 12, resv, Reserved
- 6, 0, EAX, 13, hdc, HDC base registers supported
- 6, 0, EAX, 14, turbo3, Turbo Boost Max 3.0
- 6, 0, EAX, 15, hwp_cap, Highest Performance change supported
- 6, 0, EAX, 16, hwp_peci, HWP PECI override is supported
- 6, 0, EAX, 17, hwp_flex, Flexible HWP is supported
- 6, 0, EAX, 18, hwp_fast, Fast access mode for the IA32_HWP_REQUEST MSR is supported
-# 6, 0, EAX, 19, resv, Reserved
- 6, 0, EAX, 20, hwp_ignr, Ignoring Idle Logical Processor HWP request is supported
-
- 6, 0, EBX, 3:0, therm_irq_thresh, Number of Interrupt Thresholds in Digital Thermal Sensor
- 6, 0, ECX, 0, aperfmperf, Presence of IA32_MPERF and IA32_APERF
- 6, 0, ECX, 3, energ_bias, Performance-energy bias preference supported
-
-# Leaf 07H
-# ECX == 0
-# AVX512 refers to https://en.wikipedia.org/wiki/AVX-512
-# XXX: Do we really need to enumerate each and every AVX512 sub features
-
- 7, 0, EBX, 0, fsgsbase, RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE supported
- 7, 0, EBX, 1, tsc_adjust, TSC_ADJUST MSR supported
- 7, 0, EBX, 2, sgx, Software Guard Extensions
- 7, 0, EBX, 3, bmi1, BMI1
- 7, 0, EBX, 4, hle, Hardware Lock Elision
- 7, 0, EBX, 5, avx2, AVX2
-# 7, 0, EBX, 6, fdp_excp_only, x87 FPU Data Pointer updated only on x87 exceptions
- 7, 0, EBX, 7, smep, Supervisor-Mode Execution Prevention
- 7, 0, EBX, 8, bmi2, BMI2
- 7, 0, EBX, 9, rep_movsb, Enhanced REP MOVSB/STOSB
- 7, 0, EBX, 10, invpcid, INVPCID instruction
- 7, 0, EBX, 11, rtm, Restricted Transactional Memory
- 7, 0, EBX, 12, rdt_m, Intel RDT Monitoring capability
- 7, 0, EBX, 13, depc_fpu_cs_ds, Deprecates FPU CS and FPU DS
- 7, 0, EBX, 14, mpx, Memory Protection Extensions
- 7, 0, EBX, 15, rdt_a, Intel RDT Allocation capability
- 7, 0, EBX, 16, avx512f, AVX512 Foundation instr
- 7, 0, EBX, 17, avx512dq, AVX512 Double and Quadword AVX512 instr
- 7, 0, EBX, 18, rdseed, RDSEED instr
- 7, 0, EBX, 19, adx, ADX instr
- 7, 0, EBX, 20, smap, Supervisor Mode Access Prevention
- 7, 0, EBX, 21, avx512ifma, AVX512 Integer Fused Multiply Add
-# 7, 0, EBX, 22, resvd, resvd
- 7, 0, EBX, 23, clflushopt, CLFLUSHOPT instr
- 7, 0, EBX, 24, clwb, CLWB instr
- 7, 0, EBX, 25, intel_pt, Intel Processor Trace instr
- 7, 0, EBX, 26, avx512pf, Prefetch
- 7, 0, EBX, 27, avx512er, AVX512 Exponent Reciproca instr
- 7, 0, EBX, 28, avx512cd, AVX512 Conflict Detection instr
- 7, 0, EBX, 29, sha, Intel Secure Hash Algorithm Extensions instr
- 7, 0, EBX, 30, avx512bw, AVX512 Byte & Word instr
- 7, 0, EBX, 31, avx512vl, AVX512 Vector Length Extentions (VL)
- 7, 0, ECX, 0, prefetchwt1, X
- 7, 0, ECX, 1, avx512vbmi, AVX512 Vector Byte Manipulation Instructions
- 7, 0, ECX, 2, umip, User-mode Instruction Prevention
-
- 7, 0, ECX, 3, pku, Protection Keys for User-mode pages
- 7, 0, ECX, 4, ospke, CR4 PKE set to enable protection keys
-# 7, 0, ECX, 16:5, resvd, resvd
- 7, 0, ECX, 21:17, mawau, The value of MAWAU used by the BNDLDX and BNDSTX instructions in 64-bit mode
- 7, 0, ECX, 22, rdpid, RDPID and IA32_TSC_AUX
-# 7, 0, ECX, 29:23, resvd, resvd
- 7, 0, ECX, 30, sgx_lc, SGX Launch Configuration
-# 7, 0, ECX, 31, resvd, resvd
-
-# Leaf 08H
-#
-
-
-# Leaf 09H
-# Direct Cache Access (DCA) information
- 9, 0, ECX, 31:0, dca_cap, The value of IA32_PLATFORM_DCA_CAP
+# SPDX-License-Identifier: CC0-1.0
+# Generator: x86-cpuid-db v2.4
-# Leaf 0AH
-# Architectural Performance Monitoring
#
-# Do we really need to print out the PMU related stuff?
-# Does normal user really care about it?
+# Auto-generated file.
+# Please submit all updates and bugfixes to https://x86-cpuid.org
#
- 0xA, 0, EAX, 7:0, pmu_ver, Performance Monitoring Unit version
- 0xA, 0, EAX, 15:8, pmu_gp_cnt_num, Numer of general-purose PMU counters per logical CPU
- 0xA, 0, EAX, 23:16, pmu_cnt_bits, Bit wideth of PMU counter
- 0xA, 0, EAX, 31:24, pmu_ebx_bits, Length of EBX bit vector to enumerate PMU events
-
- 0xA, 0, EBX, 0, pmu_no_core_cycle_evt, Core cycle event not available
- 0xA, 0, EBX, 1, pmu_no_instr_ret_evt, Instruction retired event not available
- 0xA, 0, EBX, 2, pmu_no_ref_cycle_evt, Reference cycles event not available
- 0xA, 0, EBX, 3, pmu_no_llc_ref_evt, Last-level cache reference event not available
- 0xA, 0, EBX, 4, pmu_no_llc_mis_evt, Last-level cache misses event not available
- 0xA, 0, EBX, 5, pmu_no_br_instr_ret_evt, Branch instruction retired event not available
- 0xA, 0, EBX, 6, pmu_no_br_mispredict_evt, Branch mispredict retired event not available
-
- 0xA, 0, ECX, 4:0, pmu_fixed_cnt_num, Performance Monitoring Unit version
- 0xA, 0, ECX, 12:5, pmu_fixed_cnt_bits, Numer of PMU counters per logical CPU
-
-# Leaf 0BH
-# Extended Topology Enumeration Leaf
-#
-
- 0xB, 0, EAX, 4:0, id_shift, Number of bits to shift right on x2APIC ID to get a unique topology ID of the next level type
- 0xB, 0, EBX, 15:0, cpu_nr, Number of logical processors at this level type
- 0xB, 0, ECX, 15:8, lvl_type, 0-Invalid 1-SMT 2-Core
- 0xB, 0, EDX, 31:0, x2apic_id, x2APIC ID the current logical processor
-
-
-# Leaf 0DH
-# Processor Extended State
- 0xD, 0, EAX, 0, x87, X87 state
- 0xD, 0, EAX, 1, sse, SSE state
- 0xD, 0, EAX, 2, avx, AVX state
- 0xD, 0, EAX, 4:3, mpx, MPX state
- 0xD, 0, EAX, 7:5, avx512, AVX-512 state
- 0xD, 0, EAX, 9, pkru, PKRU state
-
- 0xD, 0, EBX, 31:0, max_sz_xcr0, Maximum size (bytes) required by enabled features in XCR0
- 0xD, 0, ECX, 31:0, max_sz_xsave, Maximum size (bytes) of the XSAVE/XRSTOR save area
-
- 0xD, 1, EAX, 0, xsaveopt, XSAVEOPT available
- 0xD, 1, EAX, 1, xsavec, XSAVEC and compacted form supported
- 0xD, 1, EAX, 2, xgetbv, XGETBV supported
- 0xD, 1, EAX, 3, xsaves, XSAVES/XRSTORS and IA32_XSS supported
-
- 0xD, 1, EBX, 31:0, max_sz_xcr0, Maximum size (bytes) required by enabled features in XCR0
- 0xD, 1, ECX, 8, pt, PT state
- 0xD, 1, ECX, 11, cet_usr, CET user state
- 0xD, 1, ECX, 12, cet_supv, CET supervisor state
- 0xD, 1, ECX, 13, hdc, HDC state
- 0xD, 1, ECX, 16, hwp, HWP state
-
-# Leaf 0FH
-# Intel RDT Monitoring
-
- 0xF, 0, EBX, 31:0, rmid_range, Maximum range (zero-based) of RMID within this physical processor of all types
- 0xF, 0, EDX, 1, l3c_rdt_mon, L3 Cache RDT Monitoring supported
-
- 0xF, 1, ECX, 31:0, rmid_range, Maximum range (zero-based) of RMID of this types
- 0xF, 1, EDX, 0, l3c_ocp_mon, L3 Cache occupancy Monitoring supported
- 0xF, 1, EDX, 1, l3c_tbw_mon, L3 Cache Total Bandwidth Monitoring supported
- 0xF, 1, EDX, 2, l3c_lbw_mon, L3 Cache Local Bandwidth Monitoring supported
+# The basic row format is:
+# LEAF, SUBLEAVES, reg, bits, short_name , long_description
+
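Since the rows are plain comma-separated text, a consumer can split each one into leaf, subleaf range, register, bit range, short name and description. A minimal parsing sketch follows; it is illustrative only (not the parser kcpuid actually uses) and it skips the whitespace trimming a real reader would do:

    #include <stdio.h>

    /* Parse one cpuid.csv row of the form:
     *   LEAF, SUBLEAVES, reg, bits, short_name , long_description
     * '#' lines and blank lines are skipped. Returns 1 on success. */
    static int parse_row(const char *line, unsigned int *leaf, char *subleaves,
                         char *reg, char *bits, char *name, char *desc)
    {
        while (*line == ' ' || *line == '\t')
            line++;
        if (*line == '#' || *line == '\0' || *line == '\n')
            return 0;

        /* %[^,] reads everything up to (not including) the next comma. */
        return sscanf(line, "%x, %15[^,], %7[^,], %15[^,], %63[^,], %255[^\n]",
                      leaf, subleaves, reg, bits, name, desc) == 6;
    }

    int main(void)
    {
        unsigned int leaf;
        char sub[16], reg[8], bits[16], name[64], desc[256];
        const char *row = "0x16, 0, eax, 15:0, cpu_base_mhz , Processor base frequency, in MHz";

        if (parse_row(row, &leaf, sub, reg, bits, name, desc))
            printf("leaf 0x%x, %s[%s], %s: %s\n", leaf, reg, bits, name, desc);
        return 0;
    }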
+# Leaf 0H
+# Maximum standard leaf number + CPU vendor string
+
+ 0x0, 0, eax, 31:0, max_std_leaf , Highest standard CPUID leaf supported
+ 0x0, 0, ebx, 31:0, cpu_vendorid_0 , CPU vendor ID string bytes 0 - 3
+ 0x0, 0, ecx, 31:0, cpu_vendorid_2 , CPU vendor ID string bytes 8 - 11
+ 0x0, 0, edx, 31:0, cpu_vendorid_1 , CPU vendor ID string bytes 4 - 7
+
+# Leaf 1H
+# CPU FMS (Family/Model/Stepping) + standard feature flags
+
+ 0x1, 0, eax, 3:0, stepping , Stepping ID
+ 0x1, 0, eax, 7:4, base_model , Base CPU model ID
+ 0x1, 0, eax, 11:8, base_family_id , Base CPU family ID
+ 0x1, 0, eax, 13:12, cpu_type , CPU type
+ 0x1, 0, eax, 19:16, ext_model , Extended CPU model ID
+ 0x1, 0, eax, 27:20, ext_family , Extended CPU family ID
+ 0x1, 0, ebx, 7:0, brand_id , Brand index
+ 0x1, 0, ebx, 15:8, clflush_size , CLFLUSH instruction cache line size
+ 0x1, 0, ebx, 23:16, n_logical_cpu , Logical CPU count
+ 0x1, 0, ebx, 31:24, local_apic_id , Initial local APIC physical ID
+ 0x1, 0, ecx, 0, pni , Streaming SIMD Extensions 3 (SSE3)
+ 0x1, 0, ecx, 1, pclmulqdq , PCLMULQDQ instruction support
+ 0x1, 0, ecx, 2, dtes64 , 64-bit DS save area
+ 0x1, 0, ecx, 3, monitor , MONITOR/MWAIT support
+ 0x1, 0, ecx, 4, ds_cpl , CPL Qualified Debug Store
+ 0x1, 0, ecx, 5, vmx , Virtual Machine Extensions
+ 0x1, 0, ecx, 6, smx , Safer Mode Extensions
+ 0x1, 0, ecx, 7, est , Enhanced Intel SpeedStep
+ 0x1, 0, ecx, 8, tm2 , Thermal Monitor 2
+ 0x1, 0, ecx, 9, ssse3 , Supplemental SSE3
+ 0x1, 0, ecx, 10, cid , L1 Context ID
+ 0x1, 0, ecx, 11, sdbg , Silicon Debug
+ 0x1, 0, ecx, 12, fma , FMA extensions using YMM state
+ 0x1, 0, ecx, 13, cx16 , CMPXCHG16B instruction support
+ 0x1, 0, ecx, 14, xtpr , xTPR Update Control
+ 0x1, 0, ecx, 15, pdcm , Perfmon and Debug Capability
+ 0x1, 0, ecx, 17, pcid , Process-context identifiers
+ 0x1, 0, ecx, 18, dca , Direct Cache Access
+ 0x1, 0, ecx, 19, sse4_1 , SSE4.1
+ 0x1, 0, ecx, 20, sse4_2 , SSE4.2
+ 0x1, 0, ecx, 21, x2apic , X2APIC support
+ 0x1, 0, ecx, 22, movbe , MOVBE instruction support
+ 0x1, 0, ecx, 23, popcnt , POPCNT instruction support
+ 0x1, 0, ecx, 24, tsc_deadline_timer , APIC timer one-shot operation
+ 0x1, 0, ecx, 25, aes , AES instructions
+ 0x1, 0, ecx, 26, xsave , XSAVE (and related instructions) support
+ 0x1, 0, ecx, 27, osxsave , XSAVE (and related instructions) are enabled by OS
+ 0x1, 0, ecx, 28, avx , AVX instructions support
+ 0x1, 0, ecx, 29, f16c , Half-precision floating-point conversion support
+ 0x1, 0, ecx, 30, rdrand , RDRAND instruction support
+ 0x1, 0, ecx, 31, guest_status , System is running as guest; (para-)virtualized system
+ 0x1, 0, edx, 0, fpu , Floating-Point Unit on-chip (x87)
+ 0x1, 0, edx, 1, vme , Virtual-8086 Mode Extensions
+ 0x1, 0, edx, 2, de , Debugging Extensions
+ 0x1, 0, edx, 3, pse , Page Size Extension
+ 0x1, 0, edx, 4, tsc , Time Stamp Counter
+ 0x1, 0, edx, 5, msr , Model-Specific Registers (RDMSR and WRMSR support)
+ 0x1, 0, edx, 6, pae , Physical Address Extensions
+ 0x1, 0, edx, 7, mce , Machine Check Exception
+ 0x1, 0, edx, 8, cx8 , CMPXCHG8B instruction
+ 0x1, 0, edx, 9, apic , APIC on-chip
+ 0x1, 0, edx, 11, sep , SYSENTER, SYSEXIT, and associated MSRs
+ 0x1, 0, edx, 12, mtrr , Memory Type Range Registers
+ 0x1, 0, edx, 13, pge , Page Global Extensions
+ 0x1, 0, edx, 14, mca , Machine Check Architecture
+ 0x1, 0, edx, 15, cmov , Conditional Move Instruction
+ 0x1, 0, edx, 16, pat , Page Attribute Table
+ 0x1, 0, edx, 17, pse36 , Page Size Extension (36-bit)
+ 0x1, 0, edx, 18, pn , Processor Serial Number
+ 0x1, 0, edx, 19, clflush , CLFLUSH instruction
+ 0x1, 0, edx, 21, dts , Debug Store
+ 0x1, 0, edx, 22, acpi , Thermal monitor and clock control
+ 0x1, 0, edx, 23, mmx , MMX instructions
+ 0x1, 0, edx, 24, fxsr , FXSAVE and FXRSTOR instructions
+ 0x1, 0, edx, 25, sse , SSE instructions
+ 0x1, 0, edx, 26, sse2 , SSE2 instructions
+ 0x1, 0, edx, 27, ss , Self Snoop
+ 0x1, 0, edx, 28, ht , Hyper-threading
+ 0x1, 0, edx, 29, tm , Thermal Monitor
+ 0x1, 0, edx, 30, ia64 , Legacy IA-64 (Itanium) support bit, now reserved
+ 0x1, 0, edx, 31, pbe , Pending Break Enable
+
+# Leaf 2H
+# Intel cache and TLB information one-byte descriptors
+
+ 0x2, 0, eax, 7:0, iteration_count , Number of times this leaf must be queried
+ 0x2, 0, eax, 15:8, desc1 , Descriptor #1
+ 0x2, 0, eax, 23:16, desc2 , Descriptor #2
+ 0x2, 0, eax, 30:24, desc3 , Descriptor #3
+ 0x2, 0, eax, 31, eax_invalid , Descriptors 1-3 are invalid if set
+ 0x2, 0, ebx, 7:0, desc4 , Descriptor #4
+ 0x2, 0, ebx, 15:8, desc5 , Descriptor #5
+ 0x2, 0, ebx, 23:16, desc6 , Descriptor #6
+ 0x2, 0, ebx, 30:24, desc7 , Descriptor #7
+ 0x2, 0, ebx, 31, ebx_invalid , Descriptors 4-7 are invalid if set
+ 0x2, 0, ecx, 7:0, desc8 , Descriptor #8
+ 0x2, 0, ecx, 15:8, desc9 , Descriptor #9
+ 0x2, 0, ecx, 23:16, desc10 , Descriptor #10
+ 0x2, 0, ecx, 30:24, desc11 , Descriptor #11
+ 0x2, 0, ecx, 31, ecx_invalid , Descriptors 8-11 are invalid if set
+ 0x2, 0, edx, 7:0, desc12 , Descriptor #12
+ 0x2, 0, edx, 15:8, desc13 , Descriptor #13
+ 0x2, 0, edx, 23:16, desc14 , Descriptor #14
+ 0x2, 0, edx, 30:24, desc15 , Descriptor #15
+ 0x2, 0, edx, 31, edx_invalid , Descriptors 12-15 are invalid if set
+
+# Leaf 4H
+# Intel deterministic cache parameters
+
+ 0x4, 31:0, eax, 4:0, cache_type , Cache type field
+ 0x4, 31:0, eax, 7:5, cache_level , Cache level (1-based)
+ 0x4, 31:0, eax, 8, cache_self_init , Self-initializing cache level
+ 0x4, 31:0, eax, 9, fully_associative , Fully-associative cache
+ 0x4, 31:0, eax, 25:14, num_threads_sharing , Number logical CPUs sharing this cache
+ 0x4, 31:0, eax, 31:26, num_cores_on_die , Number of cores in the physical package
+ 0x4, 31:0, ebx, 11:0, cache_linesize , System coherency line size (0-based)
+ 0x4, 31:0, ebx, 21:12, cache_npartitions , Physical line partitions (0-based)
+ 0x4, 31:0, ebx, 31:22, cache_nways , Ways of associativity (0-based)
+ 0x4, 31:0, ecx, 30:0, cache_nsets , Cache number of sets (0-based)
+ 0x4, 31:0, edx, 0, wbinvd_rll_no_guarantee, WBINVD/INVD not guaranteed for Remote Lower-Level caches
+ 0x4, 31:0, edx, 1, ll_inclusive , Cache is inclusive of Lower-Level caches
+ 0x4, 31:0, edx, 2, complex_indexing , Not a direct-mapped cache (complex function)
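
Everything size-related in these rows is minus-one encoded, so the cache described by one subleaf works out to (cache_nways + 1) * (cache_npartitions + 1) * (cache_linesize + 1) * (cache_nsets + 1) bytes. A worked example with made-up but typical values: fields of 7, 0, 63 and 63 describe an 8-way cache with 64-byte lines and 64 sets, i.e. 8 * 1 * 64 * 64 = 32768 bytes (a 32 KiB L1).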
+
+# Leaf 5H
+# MONITOR/MWAIT instructions enumeration
+
+ 0x5, 0, eax, 15:0, min_mon_size , Smallest monitor-line size, in bytes
+ 0x5, 0, ebx, 15:0, max_mon_size , Largest monitor-line size, in bytes
+ 0x5, 0, ecx, 0, mwait_ext , Enumeration of MONITOR/MWAIT extensions is supported
+ 0x5, 0, ecx, 1, mwait_irq_break , Interrupts as a break-event for MWAIT is supported
+ 0x5, 0, edx, 3:0, n_c0_substates , Number of C0 sub C-states supported using MWAIT
+ 0x5, 0, edx, 7:4, n_c1_substates , Number of C1 sub C-states supported using MWAIT
+ 0x5, 0, edx, 11:8, n_c2_substates , Number of C2 sub C-states supported using MWAIT
+ 0x5, 0, edx, 15:12, n_c3_substates , Number of C3 sub C-states supported using MWAIT
+ 0x5, 0, edx, 19:16, n_c4_substates , Number of C4 sub C-states supported using MWAIT
+ 0x5, 0, edx, 23:20, n_c5_substates , Number of C5 sub C-states supported using MWAIT
+ 0x5, 0, edx, 27:24, n_c6_substates , Number of C6 sub C-states supported using MWAIT
+ 0x5, 0, edx, 31:28, n_c7_substates , Number of C7 sub C-states supported using MWAIT
+
+# Leaf 6H
+# Thermal and Power Management enumeration
+
+ 0x6, 0, eax, 0, dtherm , Digital temperature sensor
+ 0x6, 0, eax, 1, turbo_boost , Intel Turbo Boost
+ 0x6, 0, eax, 2, arat , Always-Running APIC Timer (not affected by p-state)
+ 0x6, 0, eax, 4, pln , Power Limit Notification (PLN) event
+ 0x6, 0, eax, 5, ecmd , Clock modulation duty cycle extension
+ 0x6, 0, eax, 6, pts , Package thermal management
+ 0x6, 0, eax, 7, hwp , HWP (Hardware P-states) base registers are supported
+ 0x6, 0, eax, 8, hwp_notify , HWP notification (IA32_HWP_INTERRUPT MSR)
+ 0x6, 0, eax, 9, hwp_act_window , HWP activity window (IA32_HWP_REQUEST[bits 41:32]) supported
+ 0x6, 0, eax, 10, hwp_epp , HWP Energy Performance Preference
+ 0x6, 0, eax, 11, hwp_pkg_req , HWP Package Level Request
+ 0x6, 0, eax, 13, hdc_base_regs , HDC base registers are supported
+ 0x6, 0, eax, 14, turbo_boost_3_0 , Intel Turbo Boost Max 3.0
+ 0x6, 0, eax, 15, hwp_capabilities , HWP Highest Performance change
+ 0x6, 0, eax, 16, hwp_peci_override , HWP PECI override
+ 0x6, 0, eax, 17, hwp_flexible , Flexible HWP
+ 0x6, 0, eax, 18, hwp_fast , IA32_HWP_REQUEST MSR fast access mode
+ 0x6, 0, eax, 19, hfi , HW_FEEDBACK MSRs supported
+ 0x6, 0, eax, 20, hwp_ignore_idle , Ignoring idle logical CPU HWP req is supported
+ 0x6, 0, eax, 23, thread_director , Intel thread director support
+ 0x6, 0, eax, 24, therm_interrupt_bit25 , IA32_THERM_INTERRUPT MSR bit 25 is supported
+ 0x6, 0, ebx, 3:0, n_therm_thresholds , Digital thermometer thresholds
+ 0x6, 0, ecx, 0, aperfmperf , MPERF/APERF MSRs (effective frequency interface)
+ 0x6, 0, ecx, 3, epb , IA32_ENERGY_PERF_BIAS MSR support
+ 0x6, 0, ecx, 15:8, thrd_director_nclasses , Number of classes, Intel thread director
+ 0x6, 0, edx, 0, perfcap_reporting , Performance capability reporting
+ 0x6, 0, edx, 1, encap_reporting , Energy efficiency capability reporting
+ 0x6, 0, edx, 11:8, feedback_sz , Feedback interface structure size, in 4K pages
+ 0x6, 0, edx, 31:16, this_lcpu_hwfdbk_idx , This logical CPU hardware feedback interface index
+
+# Leaf 7H
+# Extended CPU features enumeration
+
+ 0x7, 0, eax, 31:0, leaf7_n_subleaves , Number of leaf 0x7 subleaves
+ 0x7, 0, ebx, 0, fsgsbase , FSBASE/GSBASE read/write support
+ 0x7, 0, ebx, 1, tsc_adjust , IA32_TSC_ADJUST MSR supported
+ 0x7, 0, ebx, 2, sgx , Intel SGX (Software Guard Extensions)
+ 0x7, 0, ebx, 3, bmi1 , Bit manipulation extensions group 1
+ 0x7, 0, ebx, 4, hle , Hardware Lock Elision
+ 0x7, 0, ebx, 5, avx2 , AVX2 instruction set
+ 0x7, 0, ebx, 6, fdp_excptn_only , FPU Data Pointer updated only on x87 exceptions
+ 0x7, 0, ebx, 7, smep , Supervisor Mode Execution Protection
+ 0x7, 0, ebx, 8, bmi2 , Bit manipulation extensions group 2
+ 0x7, 0, ebx, 9, erms , Enhanced REP MOVSB/STOSB
+ 0x7, 0, ebx, 10, invpcid , INVPCID instruction (Invalidate Processor Context ID)
+ 0x7, 0, ebx, 11, rtm , Intel restricted transactional memory
+ 0x7, 0, ebx, 12, cqm , Intel RDT-CMT / AMD Platform-QoS cache monitoring
+ 0x7, 0, ebx, 13, zero_fcs_fds , Deprecated FPU CS/DS (stored as zero)
+ 0x7, 0, ebx, 14, mpx , Intel memory protection extensions
+ 0x7, 0, ebx, 15, rdt_a , Intel RDT / AMD Platform-QoS Enforcement
+ 0x7, 0, ebx, 16, avx512f , AVX-512 foundation instructions
+ 0x7, 0, ebx, 17, avx512dq , AVX-512 double/quadword instructions
+ 0x7, 0, ebx, 18, rdseed , RDSEED instruction
+ 0x7, 0, ebx, 19, adx , ADCX/ADOX instructions
+ 0x7, 0, ebx, 20, smap , Supervisor mode access prevention
+ 0x7, 0, ebx, 21, avx512ifma , AVX-512 integer fused multiply add
+ 0x7, 0, ebx, 23, clflushopt , CLFLUSHOPT instruction
+ 0x7, 0, ebx, 24, clwb , CLWB instruction
+ 0x7, 0, ebx, 25, intel_pt , Intel processor trace
+ 0x7, 0, ebx, 26, avx512pf , AVX-512 prefetch instructions
+ 0x7, 0, ebx, 27, avx512er , AVX-512 exponent/reciprocal instructions
+ 0x7, 0, ebx, 28, avx512cd , AVX-512 conflict detection instructions
+ 0x7, 0, ebx, 29, sha_ni , SHA/SHA256 instructions
+ 0x7, 0, ebx, 30, avx512bw , AVX-512 byte/word instructions
+ 0x7, 0, ebx, 31, avx512vl , AVX-512 VL (128/256 vector length) extensions
+ 0x7, 0, ecx, 0, prefetchwt1 , PREFETCHWT1 (Intel Xeon Phi only)
+ 0x7, 0, ecx, 1, avx512vbmi , AVX-512 Vector byte manipulation instructions
+ 0x7, 0, ecx, 2, umip , User mode instruction protection
+ 0x7, 0, ecx, 3, pku , Protection keys for user-space
+ 0x7, 0, ecx, 4, ospke , OS protection keys enable
+ 0x7, 0, ecx, 5, waitpkg , WAITPKG instructions
+ 0x7, 0, ecx, 6, avx512_vbmi2 , AVX-512 vector byte manipulation instructions group 2
+ 0x7, 0, ecx, 7, cet_ss , CET shadow stack features
+ 0x7, 0, ecx, 8, gfni , Galois field new instructions
+ 0x7, 0, ecx, 9, vaes , Vector AES instructions
+ 0x7, 0, ecx, 10, vpclmulqdq , VPCLMULQDQ 256-bit instruction support
+ 0x7, 0, ecx, 11, avx512_vnni , Vector neural network instructions
+ 0x7, 0, ecx, 12, avx512_bitalg , AVX-512 bitwise algorithms
+ 0x7, 0, ecx, 13, tme , Intel total memory encryption
+ 0x7, 0, ecx, 14, avx512_vpopcntdq , AVX-512: POPCNT for vectors of DWORD/QWORD
+ 0x7, 0, ecx, 16, la57 , 57-bit linear addresses (five-level paging)
+ 0x7, 0, ecx, 21:17, mawau_val_lm , BNDLDX/BNDSTX MAWAU value in 64-bit mode
+ 0x7, 0, ecx, 22, rdpid , RDPID instruction
+ 0x7, 0, ecx, 23, key_locker , Intel key locker support
+ 0x7, 0, ecx, 24, bus_lock_detect , OS bus-lock detection
+ 0x7, 0, ecx, 25, cldemote , CLDEMOTE instruction
+ 0x7, 0, ecx, 27, movdiri , MOVDIRI instruction
+ 0x7, 0, ecx, 28, movdir64b , MOVDIR64B instruction
+ 0x7, 0, ecx, 29, enqcmd , Enqueue stores supported (ENQCMD{,S})
+ 0x7, 0, ecx, 30, sgx_lc , Intel SGX launch configuration
+ 0x7, 0, ecx, 31, pks , Protection keys for supervisor-mode pages
+ 0x7, 0, edx, 1, sgx_keys , Intel SGX attestation services
+ 0x7, 0, edx, 2, avx512_4vnniw , AVX-512 neural network instructions
+ 0x7, 0, edx, 3, avx512_4fmaps , AVX-512 multiply accumulation single precision
+ 0x7, 0, edx, 4, fsrm , Fast short REP MOV
+ 0x7, 0, edx, 5, uintr , CPU supports user interrupts
+ 0x7, 0, edx, 8, avx512_vp2intersect , VP2INTERSECT{D,Q} instructions
+ 0x7, 0, edx, 9, srdbs_ctrl , SRBDS mitigation MSR available
+ 0x7, 0, edx, 10, md_clear , VERW MD_CLEAR microcode support
+ 0x7, 0, edx, 11, rtm_always_abort , XBEGIN (RTM transaction) always aborts
+ 0x7, 0, edx, 13, tsx_force_abort , MSR TSX_FORCE_ABORT, RTM_ABORT bit, supported
+ 0x7, 0, edx, 14, serialize , SERIALIZE instruction
+ 0x7, 0, edx, 15, hybrid_cpu , The CPU is identified as a 'hybrid part'
+ 0x7, 0, edx, 16, tsxldtrk , TSX suspend/resume load address tracking
+ 0x7, 0, edx, 18, pconfig , PCONFIG instruction
+ 0x7, 0, edx, 19, arch_lbr , Intel architectural LBRs
+ 0x7, 0, edx, 20, ibt , CET indirect branch tracking
+ 0x7, 0, edx, 22, amx_bf16 , AMX-BF16: tile bfloat16 support
+ 0x7, 0, edx, 23, avx512_fp16 , AVX-512 FP16 instructions
+ 0x7, 0, edx, 24, amx_tile , AMX-TILE: tile architecture support
+ 0x7, 0, edx, 25, amx_int8 , AMX-INT8: tile 8-bit integer support
+ 0x7, 0, edx, 26, spec_ctrl , Speculation Control (IBRS/IBPB: indirect branch restrictions)
+ 0x7, 0, edx, 27, intel_stibp , Single thread indirect branch predictors
+ 0x7, 0, edx, 28, flush_l1d , FLUSH L1D cache: IA32_FLUSH_CMD MSR
+ 0x7, 0, edx, 29, arch_capabilities , Intel IA32_ARCH_CAPABILITIES MSR
+ 0x7, 0, edx, 30, core_capabilities , IA32_CORE_CAPABILITIES MSR
+ 0x7, 0, edx, 31, spec_ctrl_ssbd , Speculative store bypass disable
+ 0x7, 1, eax, 4, avx_vnni , AVX-VNNI instructions
+ 0x7, 1, eax, 5, avx512_bf16 , AVX-512 bfloat16 instructions
+ 0x7, 1, eax, 6, lass , Linear address space separation
+ 0x7, 1, eax, 7, cmpccxadd , CMPccXADD instructions
+ 0x7, 1, eax, 8, arch_perfmon_ext , ArchPerfmonExt: leaf 0x23 is supported
+ 0x7, 1, eax, 10, fzrm , Fast zero-length REP MOVSB
+ 0x7, 1, eax, 11, fsrs , Fast short REP STOSB
+ 0x7, 1, eax, 12, fsrc , Fast Short REP CMPSB/SCASB
+ 0x7, 1, eax, 17, fred , FRED: Flexible return and event delivery transitions
+ 0x7, 1, eax, 18, lkgs , LKGS: Load 'kernel' (userspace) GS
+ 0x7, 1, eax, 19, wrmsrns , WRMSRNS instruction (WRMSR-non-serializing)
+ 0x7, 1, eax, 20, nmi_src , NMI-source reporting with FRED event data
+ 0x7, 1, eax, 21, amx_fp16 , AMX-FP16: FP16 tile operations
+ 0x7, 1, eax, 22, hreset , History reset support
+ 0x7, 1, eax, 23, avx_ifma , Integer fused multiply add
+ 0x7, 1, eax, 26, lam , Linear address masking
+ 0x7, 1, eax, 27, rd_wr_msrlist , RDMSRLIST/WRMSRLIST instructions
+ 0x7, 1, ebx, 0, intel_ppin , Protected processor inventory number (PPIN{,_CTL} MSRs)
+ 0x7, 1, edx, 4, avx_vnni_int8 , AVX-VNNI-INT8 instructions
+ 0x7, 1, edx, 5, avx_ne_convert , AVX-NE-CONVERT instructions
+ 0x7, 1, edx, 8, amx_complex , AMX-COMPLEX instructions (starting from Granite Rapids)
+ 0x7, 1, edx, 14, prefetchit_0_1 , PREFETCHIT0/1 instructions
+ 0x7, 1, edx, 18, cet_sss , CET supervisor shadow stacks safe to use
+ 0x7, 2, edx, 0, intel_psfd , Intel predictive store forward disable
+ 0x7, 2, edx, 1, ipred_ctrl , MSR bits IA32_SPEC_CTRL.IPRED_DIS_{U,S}
+ 0x7, 2, edx, 2, rrsba_ctrl , MSR bits IA32_SPEC_CTRL.RRSBA_DIS_{U,S}
+ 0x7, 2, edx, 3, ddp_ctrl , MSR bit IA32_SPEC_CTRL.DDPD_U
+ 0x7, 2, edx, 4, bhi_ctrl , MSR bit IA32_SPEC_CTRL.BHI_DIS_S
+ 0x7, 2, edx, 5, mcdt_no , MCDT mitigation not needed
+ 0x7, 2, edx, 6, uclock_disable , UC-lock disable is supported
+
+# Leaf 9H
+# Intel DCA (Direct Cache Access) enumeration
+
+ 0x9, 0, eax, 0, dca_enabled_in_bios , DCA is enabled in BIOS
+
+# Leaf AH
+# Intel PMU (Performance Monitoring Unit) enumeration
+
+ 0xa, 0, eax, 7:0, pmu_version , Performance monitoring unit version ID
+ 0xa, 0, eax, 15:8, pmu_n_gcounters , Number of general PMU counters per logical CPU
+ 0xa, 0, eax, 23:16, pmu_gcounters_nbits , Bitwidth of PMU general counters
+ 0xa, 0, eax, 31:24, pmu_cpuid_ebx_bits , Length of leaf 0xa EBX bit vector
+ 0xa, 0, ebx, 0, no_core_cycle_evt , Core cycle event not available
+ 0xa, 0, ebx, 1, no_insn_retired_evt , Instruction retired event not available
+ 0xa, 0, ebx, 2, no_refcycle_evt , Reference cycles event not available
+ 0xa, 0, ebx, 3, no_llc_ref_evt , LLC-reference event not available
+ 0xa, 0, ebx, 4, no_llc_miss_evt , LLC-misses event not available
+ 0xa, 0, ebx, 5, no_br_insn_ret_evt , Branch instruction retired event not available
+ 0xa, 0, ebx, 6, no_br_mispredict_evt , Branch mispredict retired event not available
+ 0xa, 0, ebx, 7, no_td_slots_evt , Topdown slots event not available
+ 0xa, 0, ecx, 31:0, pmu_fcounters_bitmap , Fixed-function PMU counters support bitmap
+ 0xa, 0, edx, 4:0, pmu_n_fcounters , Number of fixed PMU counters
+ 0xa, 0, edx, 12:5, pmu_fcounters_nbits , Bitwidth of PMU fixed counters
+ 0xa, 0, edx, 15, anythread_depr , AnyThread deprecation
+
+# Leaf BH
+# CPUs v1 extended topology enumeration
+
+ 0xb, 1:0, eax, 4:0, x2apic_id_shift , Bit width of this level (previous levels inclusive)
+ 0xb, 1:0, ebx, 15:0, domain_lcpus_count , Logical CPUs count across all instances of this domain
+ 0xb, 1:0, ecx, 7:0, domain_nr , This domain level (subleaf ID)
+ 0xb, 1:0, ecx, 15:8, domain_type , This domain type
+ 0xb, 1:0, edx, 31:0, x2apic_id , x2APIC ID of current logical CPU
+
+# Leaf DH
+# Processor extended state enumeration
+
+ 0xd, 0, eax, 0, xcr0_x87 , XCR0.X87 (bit 0) supported
+ 0xd, 0, eax, 1, xcr0_sse , XCR0.SSE (bit 1) supported
+ 0xd, 0, eax, 2, xcr0_avx , XCR0.AVX (bit 2) supported
+ 0xd, 0, eax, 3, xcr0_mpx_bndregs , XCR0.BNDREGS (bit 3) supported (MPX BND0-BND3 registers)
+ 0xd, 0, eax, 4, xcr0_mpx_bndcsr , XCR0.BNDCSR (bit 4) supported (MPX BNDCFGU/BNDSTATUS registers)
+ 0xd, 0, eax, 5, xcr0_avx512_opmask , XCR0.OPMASK (bit 5) supported (AVX-512 k0-k7 registers)
+ 0xd, 0, eax, 6, xcr0_avx512_zmm_hi256 , XCR0.ZMM_Hi256 (bit 6) supported (AVX-512 ZMM0->ZMM7/15 registers)
+ 0xd, 0, eax, 7, xcr0_avx512_hi16_zmm , XCR0.HI16_ZMM (bit 7) supported (AVX-512 ZMM16->ZMM31 registers)
+ 0xd, 0, eax, 9, xcr0_pkru , XCR0.PKRU (bit 9) supported (XSAVE PKRU registers)
+ 0xd, 0, eax, 11, xcr0_cet_u , XCR0.CET_U (bit 11) supported (CET user state)
+ 0xd, 0, eax, 12, xcr0_cet_s , XCR0.CET_S (bit 12) supported (CET supervisor state)
+ 0xd, 0, eax, 17, xcr0_tileconfig , XCR0.TILECONFIG (bit 17) supported (AMX can manage TILECONFIG)
+ 0xd, 0, eax, 18, xcr0_tiledata , XCR0.TILEDATA (bit 18) supported (AMX can manage TILEDATA)
+ 0xd, 0, ebx, 31:0, xsave_sz_xcr0_enabled , XSAVE/XRSTOR area byte size, for XCR0 enabled features
+ 0xd, 0, ecx, 31:0, xsave_sz_max , XSAVE/XRSTOR area max byte size, all CPU features
+ 0xd, 0, edx, 30, xcr0_lwp , AMD XCR0.LWP (bit 62) supported (Light-weight Profiling)
+ 0xd, 1, eax, 0, xsaveopt , XSAVEOPT instruction
+ 0xd, 1, eax, 1, xsavec , XSAVEC instruction
+ 0xd, 1, eax, 2, xgetbv1 , XGETBV instruction with ECX = 1
+ 0xd, 1, eax, 3, xsaves , XSAVES/XRSTORS instructions (and XSS MSR)
+ 0xd, 1, eax, 4, xfd , Extended feature disable support
+ 0xd, 1, ebx, 31:0, xsave_sz_xcr0_xmms_enabled, XSAVE area size, all XCR0 and XMMS features enabled
+ 0xd, 1, ecx, 8, xss_pt , PT state, supported
+ 0xd, 1, ecx, 10, xss_pasid , PASID state, supported
+ 0xd, 1, ecx, 11, xss_cet_u , CET user state, supported
+ 0xd, 1, ecx, 12, xss_cet_p , CET supervisor state, supported
+ 0xd, 1, ecx, 13, xss_hdc , HDC state, supported
+ 0xd, 1, ecx, 14, xss_uintr , UINTR state, supported
+ 0xd, 1, ecx, 15, xss_lbr , LBR state, supported
+ 0xd, 1, ecx, 16, xss_hwp , HWP state, supported
+ 0xd, 63:2, eax, 31:0, xsave_sz , Size of save area for subleaf-N feature, in bytes
+ 0xd, 63:2, ebx, 31:0, xsave_offset , Offset of save area for subleaf-N feature, in bytes
+ 0xd, 63:2, ecx, 0, is_xss_bit , Subleaf N describes an XSS bit, otherwise XCR0 bit
+ 0xd, 63:2, ecx, 1, compacted_xsave_64byte_aligned, When compacted, subleaf-N feature XSAVE area is 64-byte aligned
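
The last four rows (subleaves 63:2) are what make the per-component XSAVE layout discoverable at runtime: for a state component n >= 2, CPUID leaf 0xD subleaf n returns the component's save-area size in EAX and, for the standard (non-compacted) format, its offset in EBX. A small probe, assuming a GCC/Clang toolchain where <cpuid.h> provides __get_cpuid_count():

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx, n;

        /* Components 2..18 cover AVX through AMX TILEDATA per the rows above. */
        for (n = 2; n <= 18; n++) {
            if (!__get_cpuid_count(0xd, n, &eax, &ebx, &ecx, &edx))
                break;
            if (!eax)
                continue;   /* component not enumerated on this CPU */
            printf("component %2u: %5u bytes, offset %5u, %s-managed\n",
                   n, eax, ebx, (ecx & 1) ? "XSS" : "XCR0");
        }
        return 0;
    }

For XSS-managed components the offset value is not meaningful, since those are only saved in the compacted format.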
+
+# Leaf FH
+# Intel RDT / AMD PQoS resource monitoring
+
+ 0xf, 0, ebx, 31:0, core_rmid_max , RMID max, within this core, all types (0-based)
+ 0xf, 0, edx, 1, cqm_llc , LLC QoS-monitoring supported
+ 0xf, 1, eax, 7:0, l3c_qm_bitwidth , L3 QoS-monitoring counter bitwidth (24-based)
+ 0xf, 1, eax, 8, l3c_qm_overflow_bit , QM_CTR MSR bit 61 is an overflow bit
+ 0xf, 1, ebx, 31:0, l3c_qm_conver_factor , QM_CTR MSR conversion factor to bytes
+ 0xf, 1, ecx, 31:0, l3c_qm_rmid_max , L3 QoS-monitoring max RMID
+ 0xf, 1, edx, 0, cqm_occup_llc , L3 QoS occupancy monitoring supported
+ 0xf, 1, edx, 1, cqm_mbm_total , L3 QoS total bandwidth monitoring supported
+ 0xf, 1, edx, 2, cqm_mbm_local , L3 QoS local bandwidth monitoring supported
# Leaf 10H
-# Intel RDT Allocation
-
- 0x10, 0, EBX, 1, l3c_rdt_alloc, L3 Cache Allocation supported
- 0x10, 0, EBX, 2, l2c_rdt_alloc, L2 Cache Allocation supported
- 0x10, 0, EBX, 3, mem_bw_alloc, Memory Bandwidth Allocation supported
-
+# Intel RDT / AMD PQoS allocation enumeration
+
+ 0x10, 0, ebx, 1, cat_l3 , L3 Cache Allocation Technology supported
+ 0x10, 0, ebx, 2, cat_l2 , L2 Cache Allocation Technology supported
+ 0x10, 0, ebx, 3, mba , Memory Bandwidth Allocation supported
+ 0x10, 2:1, eax, 4:0, cat_cbm_len , L3/L2_CAT capacity bitmask length, minus-one notation
+ 0x10, 2:1, ebx, 31:0, cat_units_bitmap , L3/L2_CAT bitmap of allocation units
+ 0x10, 2:1, ecx, 1, l3_cat_cos_infreq_updates, L3_CAT COS updates should be infrequent
+ 0x10, 2:1, ecx, 2, cdp_l3 , L3/L2_CAT CDP (Code and Data Prioritization)
+ 0x10, 2:1, ecx, 3, cat_sparse_1s , L3/L2_CAT non-contiguous 1s value supported
+ 0x10, 2:1, edx, 15:0, cat_cos_max , L3/L2_CAT max COS (Class of Service) supported
+ 0x10, 3, eax, 11:0, mba_max_delay , Max MBA throttling value; minus-one notation
+ 0x10, 3, ecx, 0, per_thread_mba , Per-thread MBA controls are supported
+ 0x10, 3, ecx, 2, mba_delay_linear , Delay values are linear
+ 0x10, 3, edx, 15:0, mba_cos_max , MBA max Class of Service supported
# Leaf 12H
-# SGX Capability
-#
-# Some detailed SGX features not added yet
-
- 0x12, 0, EAX, 0, sgx1, L3 Cache Allocation supported
- 0x12, 1, EAX, 0, sgx2, L3 Cache Allocation supported
-
+# Intel Software Guard Extensions (SGX) enumeration
+
+ 0x12, 0, eax, 0, sgx1 , SGX1 leaf functions supported
+ 0x12, 0, eax, 1, sgx2 , SGX2 leaf functions supported
+ 0x12, 0, eax, 5, enclv_leaves , ENCLV leaves (E{INC,DEC}VIRTCHILD, ESETCONTEXT) supported
+ 0x12, 0, eax, 6, encls_leaves , ENCLS leaves (ENCLS ETRACKC, ERDINFO, ELDBC, ELDUC) supported
+ 0x12, 0, eax, 7, enclu_everifyreport2 , ENCLU leaf EVERIFYREPORT2 supported
+ 0x12, 0, eax, 10, encls_eupdatesvn , ENCLS leaf EUPDATESVN supported
+ 0x12, 0, eax, 11, sgx_edeccssa , ENCLU leaf EDECCSSA supported
+ 0x12, 0, ebx, 0, miscselect_exinfo , SSA.MISC frame: reporting #PF and #GP exceptions inside enclave supported
+ 0x12, 0, ebx, 1, miscselect_cpinfo , SSA.MISC frame: reporting #CP exceptions inside enclave supported
+ 0x12, 0, edx, 7:0, max_enclave_sz_not64 , Maximum enclave size in non-64-bit mode (log2)
+ 0x12, 0, edx, 15:8, max_enclave_sz_64 , Maximum enclave size in 64-bit mode (log2)
+ 0x12, 1, eax, 0, secs_attr_init , ATTRIBUTES.INIT supported (enclave initialized by EINIT)
+ 0x12, 1, eax, 1, secs_attr_debug , ATTRIBUTES.DEBUG supported (enclave permits debugger read/write)
+ 0x12, 1, eax, 2, secs_attr_mode64bit , ATTRIBUTES.MODE64BIT supported (enclave runs in 64-bit mode)
+ 0x12, 1, eax, 4, secs_attr_provisionkey , ATTRIBUTES.PROVISIONKEY supported (provisioning key available)
+ 0x12, 1, eax, 5, secs_attr_einittoken_key, ATTRIBUTES.EINITTOKEN_KEY supported (EINIT token key available)
+ 0x12, 1, eax, 6, secs_attr_cet , ATTRIBUTES.CET supported (enable CET attributes)
+ 0x12, 1, eax, 7, secs_attr_kss , ATTRIBUTES.KSS supported (Key Separation and Sharing enabled)
+ 0x12, 1, eax, 10, secs_attr_aexnotify , ATTRIBUTES.AEXNOTIFY supported (enclave threads may get AEX notifications)
+ 0x12, 1, ecx, 0, xfrm_x87 , Enclave XFRM.X87 (bit 0) supported
+ 0x12, 1, ecx, 1, xfrm_sse , Enclave XFRM.SSE (bit 1) supported
+ 0x12, 1, ecx, 2, xfrm_avx , Enclave XFRM.AVX (bit 2) supported
+ 0x12, 1, ecx, 3, xfrm_mpx_bndregs , Enclave XFRM.BNDREGS (bit 3) supported (MPX BND0-BND3 registers)
+ 0x12, 1, ecx, 4, xfrm_mpx_bndcsr , Enclave XFRM.BNDCSR (bit 4) supported (MPX BNDCFGU/BNDSTATUS registers)
+ 0x12, 1, ecx, 5, xfrm_avx512_opmask , Enclave XFRM.OPMASK (bit 5) supported (AVX-512 k0-k7 registers)
+ 0x12, 1, ecx, 6, xfrm_avx512_zmm_hi256 , Enclave XFRM.ZMM_Hi256 (bit 6) supported (AVX-512 ZMM0->ZMM7/15 registers)
+ 0x12, 1, ecx, 7, xfrm_avx512_hi16_zmm , Enclave XFRM.HI16_ZMM (bit 7) supported (AVX-512 ZMM16->ZMM31 registers)
+ 0x12, 1, ecx, 9, xfrm_pkru , Enclave XFRM.PKRU (bit 9) supported (XSAVE PKRU registers)
+ 0x12, 1, ecx, 17, xfrm_tileconfig , Enclave XFRM.TILECONFIG (bit 17) supported (AMX can manage TILECONFIG)
+ 0x12, 1, ecx, 18, xfrm_tiledata , Enclave XFRM.TILEDATA (bit 18) supported (AMX can manage TILEDATA)
+ 0x12, 31:2, eax, 3:0, subleaf_type , Subleaf type (dictates output layout)
+ 0x12, 31:2, eax, 31:12, epc_sec_base_addr_0 , EPC section base address, bits[12:31]
+ 0x12, 31:2, ebx, 19:0, epc_sec_base_addr_1 , EPC section base address, bits[32:51]
+ 0x12, 31:2, ecx, 3:0, epc_sec_type , EPC section type / property encoding
+ 0x12, 31:2, ecx, 31:12, epc_sec_size_0 , EPC section size, bits[12:31]
+ 0x12, 31:2, edx, 19:0, epc_sec_size_1 , EPC section size, bits[32:51]
# Leaf 14H
-# Intel Processor Tracer
-#
+# Intel Processor Trace enumeration
+
+ 0x14, 0, eax, 31:0, pt_max_subleaf , Maximum leaf 0x14 subleaf
+ 0x14, 0, ebx, 0, cr3_filtering , IA32_RTIT_CR3_MATCH is accessible
+ 0x14, 0, ebx, 1, psb_cyc , Configurable PSB and cycle-accurate mode
+ 0x14, 0, ebx, 2, ip_filtering , IP/TraceStop filtering; Warm-reset PT MSRs preservation
+ 0x14, 0, ebx, 3, mtc_timing , MTC timing packet; COFI-based packets suppression
+ 0x14, 0, ebx, 4, ptwrite , PTWRITE support
+ 0x14, 0, ebx, 5, power_event_trace , Power Event Trace support
+ 0x14, 0, ebx, 6, psb_pmi_preserve , PSB and PMI preservation support
+ 0x14, 0, ebx, 7, event_trace , Event Trace packet generation through IA32_RTIT_CTL.EventEn
+ 0x14, 0, ebx, 8, tnt_disable , TNT packet generation disable through IA32_RTIT_CTL.DisTNT
+ 0x14, 0, ecx, 0, topa_output , ToPA output scheme support
+ 0x14, 0, ecx, 1, topa_multiple_entries , ToPA tables can hold multiple entries
+ 0x14, 0, ecx, 2, single_range_output , Single-range output scheme supported
+ 0x14, 0, ecx, 3, trance_transport_output, Trace Transport subsystem output support
+ 0x14, 0, ecx, 31, ip_payloads_lip , IP payloads have LIP values (CS base included)
+ 0x14, 1, eax, 2:0, num_address_ranges , Filtering number of configurable Address Ranges
+ 0x14, 1, eax, 31:16, mtc_periods_bmp , Bitmap of supported MTC period encodings
+ 0x14, 1, ebx, 15:0, cycle_thresholds_bmp , Bitmap of supported Cycle Threshold encodings
+ 0x14, 1, ebx, 31:16, psb_periods_bmp , Bitmap of supported Configurable PSB frequency encodings
# Leaf 15H
-# Time Stamp Counter and Nominal Core Crystal Clock Information
+# Intel TSC (Time Stamp Counter) enumeration
- 0x15, 0, EAX, 31:0, tsc_denominator, The denominator of the TSC/”core crystal clock” ratio
- 0x15, 0, EBX, 31:0, tsc_numerator, The numerator of the TSC/”core crystal clock” ratio
- 0x15, 0, ECX, 31:0, nom_freq, Nominal frequency of the core crystal clock in Hz
+ 0x15, 0, eax, 31:0, tsc_denominator , Denominator of the TSC/'core crystal clock' ratio
+ 0x15, 0, ebx, 31:0, tsc_numerator , Numerator of the TSC/'core crystal clock' ratio
+ 0x15, 0, ecx, 31:0, cpu_crystal_hz , Core crystal clock nominal frequency, in Hz
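
Taken together these three values pin down the TSC frequency: TSC Hz = cpu_crystal_hz * tsc_numerator / tsc_denominator (ECX * EBX / EAX), valid when all three are non-zero. A made-up but representative example: a 24 MHz crystal with EAX = 2 and EBX = 250 gives 24,000,000 * 250 / 2 = 3,000,000,000 Hz, i.e. a 3 GHz TSC.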
# Leaf 16H
-# Processor Frequency Information
+# Intel processor frequency enumeration
- 0x16, 0, EAX, 15:0, cpu_base_freq, Processor Base Frequency in MHz
- 0x16, 0, EBX, 15:0, cpu_max_freq, Maximum Frequency in MHz
- 0x16, 0, ECX, 15:0, bus_freq, Bus (Reference) Frequency in MHz
+ 0x16, 0, eax, 15:0, cpu_base_mhz , Processor base frequency, in MHz
+ 0x16, 0, ebx, 15:0, cpu_max_mhz , Processor max frequency, in MHz
+ 0x16, 0, ecx, 15:0, bus_mhz , Bus reference frequency, in MHz
# Leaf 17H
-# System-On-Chip Vendor Attribute
-
- 0x17, 0, EAX, 31:0, max_socid, Maximum input value of supported sub-leaf
- 0x17, 0, EBX, 15:0, soc_vid, SOC Vendor ID
- 0x17, 0, EBX, 16, std_vid, SOC Vendor ID is assigned via an industry standard scheme
- 0x17, 0, ECX, 31:0, soc_pid, SOC Project ID assigned by vendor
- 0x17, 0, EDX, 31:0, soc_sid, SOC Stepping ID
+# Intel SoC vendor attributes enumeration
+
+ 0x17, 0, eax, 31:0, soc_max_subleaf , Maximum leaf 0x17 subleaf
+ 0x17, 0, ebx, 15:0, soc_vendor_id , SoC vendor ID
+ 0x17, 0, ebx, 16, is_vendor_scheme , Assigned by industry enumeration scheme (not Intel)
+ 0x17, 0, ecx, 31:0, soc_proj_id , SoC project ID, assigned by vendor
+ 0x17, 0, edx, 31:0, soc_stepping_id , SoC project stepping ID, assigned by vendor
+ 0x17, 3:1, eax, 31:0, vendor_brand_a , Vendor Brand ID string, bytes subleaf_nr * (0 -> 3)
+ 0x17, 3:1, ebx, 31:0, vendor_brand_b , Vendor Brand ID string, bytes subleaf_nr * (4 -> 7)
+ 0x17, 3:1, ecx, 31:0, vendor_brand_c , Vendor Brand ID string, bytes subleaf_nr * (8 -> 11)
+ 0x17, 3:1, edx, 31:0, vendor_brand_d , Vendor Brand ID string, bytes subleaf_nr * (12 -> 15)
# Leaf 18H
-# Deterministic Address Translation Parameters
-
+# Intel deterministic address translation (TLB) parameters
+
+ 0x18, 31:0, eax, 31:0, tlb_max_subleaf , Maximum leaf 0x18 subleaf
+ 0x18, 31:0, ebx, 0, tlb_4k_page , TLB 4KB-page entries supported
+ 0x18, 31:0, ebx, 1, tlb_2m_page , TLB 2MB-page entries supported
+ 0x18, 31:0, ebx, 2, tlb_4m_page , TLB 4MB-page entries supported
+ 0x18, 31:0, ebx, 3, tlb_1g_page , TLB 1GB-page entries supported
+ 0x18, 31:0, ebx, 10:8, hard_partitioning , (Hard/Soft) partitioning between logical CPUs sharing this structure
+ 0x18, 31:0, ebx, 31:16, n_way_associative , Ways of associativity
+ 0x18, 31:0, ecx, 31:0, n_sets , Number of sets
+ 0x18, 31:0, edx, 4:0, tlb_type , Translation cache type (TLB type)
+ 0x18, 31:0, edx, 7:5, tlb_cache_level , Translation cache level (1-based)
+ 0x18, 31:0, edx, 8, is_fully_associative , Fully-associative structure
+ 0x18, 31:0, edx, 25:14, tlb_max_addressible_ids, Max number of addressable IDs for logical CPUs sharing this TLB - 1
# Leaf 19H
-# Key Locker Leaf
+# Intel Key Locker enumeration
+ 0x19, 0, eax, 0, kl_cpl0_only , CPL0-only Key Locker restriction supported
+ 0x19, 0, eax, 1, kl_no_encrypt , No-encrypt Key Locker restriction supported
+ 0x19, 0, eax, 2, kl_no_decrypt , No-decrypt Key Locker restriction supported
+ 0x19, 0, ebx, 0, aes_keylocker , AES Key Locker instructions supported
+ 0x19, 0, ebx, 2, aes_keylocker_wide , AES wide Key Locker instructions supported
+ 0x19, 0, ebx, 4, kl_msr_iwkey , Key Locker MSRs and IWKEY backups supported
+ 0x19, 0, ecx, 0, loadiwkey_no_backup , LOADIWKEY NoBackup parameter supported
+ 0x19, 0, ecx, 1, iwkey_rand , IWKEY randomization (KeySource encoding 1) supported
# Leaf 1AH
-# Hybrid Information
-
- 0x1A, 0, EAX, 31:24, core_type, 20H-Intel_Atom 40H-Intel_Core
-
+# Intel hybrid CPUs identification (e.g. Atom, Core)
+
+ 0x1a, 0, eax, 23:0, core_native_model , This core's native model ID
+ 0x1a, 0, eax, 31:24, core_type , This core's type
+
+# Leaf 1BH
+# Intel PCONFIG (Platform configuration) enumeration
+
+ 0x1b, 31:0, eax, 11:0, pconfig_subleaf_type , CPUID 0x1b subleaf type
+ 0x1b, 31:0, ebx, 31:0, pconfig_target_id_x , A supported PCONFIG target ID
+ 0x1b, 31:0, ecx, 31:0, pconfig_target_id_y , A supported PCONFIG target ID
+ 0x1b, 31:0, edx, 31:0, pconfig_target_id_z , A supported PCONFIG target ID
+
+# Leaf 1CH
+# Intel LBR (Last Branch Record) enumeration
+
+ 0x1c, 0, eax, 0, lbr_depth_8 , Max stack depth (number of LBR entries) = 8
+ 0x1c, 0, eax, 1, lbr_depth_16 , Max stack depth (number of LBR entries) = 16
+ 0x1c, 0, eax, 2, lbr_depth_24 , Max stack depth (number of LBR entries) = 24
+ 0x1c, 0, eax, 3, lbr_depth_32 , Max stack depth (number of LBR entries) = 32
+ 0x1c, 0, eax, 4, lbr_depth_40 , Max stack depth (number of LBR entries) = 40
+ 0x1c, 0, eax, 5, lbr_depth_48 , Max stack depth (number of LBR entries) = 48
+ 0x1c, 0, eax, 6, lbr_depth_56 , Max stack depth (number of LBR entries) = 56
+ 0x1c, 0, eax, 7, lbr_depth_64 , Max stack depth (number of LBR entries) = 64
+ 0x1c, 0, eax, 30, lbr_deep_c_reset , LBRs may be cleared on MWAIT C-state > C1
+ 0x1c, 0, eax, 31, lbr_ip_is_lip , LBR IP contains Last IP, otherwise effective IP
+ 0x1c, 0, ebx, 0, lbr_cpl , CPL filtering (non-zero IA32_LBR_CTL[2:1]) supported
+ 0x1c, 0, ebx, 1, lbr_branch_filter , Branch filtering (non-zero IA32_LBR_CTL[22:16]) supported
+ 0x1c, 0, ebx, 2, lbr_call_stack , Call-stack mode (IA32_LBR_CTL[3] = 1) supported
+ 0x1c, 0, ecx, 0, lbr_mispredict , Branch misprediction bit supported (IA32_LBR_x_INFO[63])
+ 0x1c, 0, ecx, 1, lbr_timed_lbr , Timed LBRs (CPU cycles since last LBR entry) supported
+ 0x1c, 0, ecx, 2, lbr_branch_type , Branch type field (IA32_LBR_INFO_x[59:56]) supported
+ 0x1c, 0, ecx, 19:16, lbr_events_gpc_bmp , LBR PMU-events logging support; bitmap for first 4 GP (general-purpose) Counters
+
+# Leaf 1DH
+# Intel AMX (Advanced Matrix Extensions) tile information
+
+ 0x1d, 0, eax, 31:0, amx_max_palette , Highest palette ID / subleaf ID
+ 0x1d, 1, eax, 15:0, amx_palette_size , AMX palette total tiles size, in bytes
+ 0x1d, 1, eax, 31:16, amx_tile_size , AMX single tile's size, in bytes
+ 0x1d, 1, ebx, 15:0, amx_tile_row_size , AMX tile single row's size, in bytes
+ 0x1d, 1, ebx, 31:16, amx_palette_nr_tiles , AMX palette number of tiles
+ 0x1d, 1, ecx, 15:0, amx_tile_nr_rows , AMX tile max number of rows
+
+# Leaf 1EH
+# Intel AMX, TMUL (Tile-matrix MULtiply) accelerator unit enumeration
+
+ 0x1e, 0, ebx, 7:0, tmul_maxk , TMUL unit maximum height, K (rows or columns)
+ 0x1e, 0, ebx, 23:8, tmul_maxn , TMUL unit maximum SIMD dimension, N (column bytes)
# Leaf 1FH
-# V2 Extended Topology - A preferred superset to leaf 0BH
-
-
-# According to SDM
-# 40000000H - 4FFFFFFFH is invalid range
+# Intel extended topology enumeration v2
+
+ 0x1f, 5:0, eax, 4:0, x2apic_id_shift , Bit width of this level (previous levels inclusive)
+ 0x1f, 5:0, ebx, 15:0, domain_lcpus_count , Logical CPUs count across all instances of this domain
+ 0x1f, 5:0, ecx, 7:0, domain_level , This domain level (subleaf ID)
+ 0x1f, 5:0, ecx, 15:8, domain_type , This domain type
+ 0x1f, 5:0, edx, 31:0, x2apic_id , x2APIC ID of current logical CPU
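
The x2apic_id_shift values are what let software slice an x2APIC ID into topology domains: the shift reported at a level covers that level and everything below it. A rough sketch that pulls out SMT, core and package IDs (it assumes a GCC/Clang <cpuid.h>, that both an SMT and a Core level are enumerated, and it lumps any Module/Tile/Die levels into the package part):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx, sub;
        unsigned int smt_shift = 0, core_shift = 0, x2apic_id = 0;

        for (sub = 0; sub < 6; sub++) {
            if (!__get_cpuid_count(0x1f, sub, &eax, &ebx, &ecx, &edx))
                break;
            unsigned int domain_type = (ecx >> 8) & 0xff;

            if (!domain_type)           /* type 0 terminates the list */
                break;
            if (domain_type == 1)       /* SMT */
                smt_shift = eax & 0x1f;
            else if (domain_type == 2)  /* Core */
                core_shift = eax & 0x1f;
            x2apic_id = edx;
        }

        printf("SMT ID:     %u\n", x2apic_id & ((1u << smt_shift) - 1));
        printf("core ID:    %u\n", (x2apic_id >> smt_shift) &
                                   ((1u << (core_shift - smt_shift)) - 1));
        printf("package ID: %u\n", x2apic_id >> core_shift);
        return 0;
    }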
+
+# Leaf 20H
+# Intel HRESET (History Reset) enumeration
+
+ 0x20, 0, eax, 31:0, hreset_nr_subleaves , CPUID 0x20 max subleaf + 1
+ 0x20, 0, ebx, 0, hreset_thread_director , HRESET of Intel thread director is supported
+
+# Leaf 21H
+# Intel TD (Trust Domain) guest execution environment enumeration
+
+ 0x21, 0, ebx, 31:0, tdx_vendorid_0 , TDX vendor ID string bytes 0 - 3
+ 0x21, 0, ecx, 31:0, tdx_vendorid_2 , TDX vendor ID string bytes 8 - 11
+ 0x21, 0, edx, 31:0, tdx_vendorid_1 , TDX vendor ID string bytes 4 - 7
+
+# Leaf 23H
+# Intel Architectural Performance Monitoring Extended (ArchPerfmonExt)
+
+ 0x23, 0, eax, 1, subleaf_1_counters , Subleaf 1, PMU counters bitmaps, is valid
+ 0x23, 0, eax, 3, subleaf_3_events , Subleaf 3, PMU events bitmaps, is valid
+ 0x23, 0, ebx, 0, unitmask2 , IA32_PERFEVTSELx MSRs UnitMask2 is supported
+ 0x23, 0, ebx, 1, zbit , IA32_PERFEVTSELx MSRs Z-bit is supported
+ 0x23, 1, eax, 31:0, pmu_gp_counters_bitmap , General-purpose PMU counters bitmap
+ 0x23, 1, ebx, 31:0, pmu_f_counters_bitmap , Fixed PMU counters bitmap
+ 0x23, 3, eax, 0, core_cycles_evt , Core cycles event supported
+ 0x23, 3, eax, 1, insn_retired_evt , Instructions retired event supported
+ 0x23, 3, eax, 2, ref_cycles_evt , Reference cycles event supported
+ 0x23, 3, eax, 3, llc_refs_evt , Last-level cache references event supported
+ 0x23, 3, eax, 4, llc_misses_evt , Last-level cache misses event supported
+ 0x23, 3, eax, 5, br_insn_ret_evt , Branch instruction retired event supported
+ 0x23, 3, eax, 6, br_mispr_evt , Branch mispredict retired event supported
+ 0x23, 3, eax, 7, td_slots_evt , Topdown slots event supported
+ 0x23, 3, eax, 8, td_backend_bound_evt , Topdown backend bound event supported
+ 0x23, 3, eax, 9, td_bad_spec_evt , Topdown bad speculation event supported
+ 0x23, 3, eax, 10, td_frontend_bound_evt , Topdown frontend bound event supported
+ 0x23, 3, eax, 11, td_retiring_evt , Topdown retiring event support
+
+# Leaf 40000000H
+# Maximum hypervisor standard leaf + hypervisor vendor string
+
+0x40000000, 0, eax, 31:0, max_hyp_leaf , Maximum hypervisor standard leaf number
+0x40000000, 0, ebx, 31:0, hypervisor_id_0 , Hypervisor ID string bytes 0 - 3
+0x40000000, 0, ecx, 31:0, hypervisor_id_1 , Hypervisor ID string bytes 4 - 7
+0x40000000, 0, edx, 31:0, hypervisor_id_2 , Hypervisor ID string bytes 8 - 11
+
+# Leaf 80000000H
+# Maximum extended leaf number + AMD/Transmeta CPU vendor string
+
+0x80000000, 0, eax, 31:0, max_ext_leaf , Maximum extended CPUID leaf supported
+0x80000000, 0, ebx, 31:0, cpu_vendorid_0 , Vendor ID string bytes 0 - 3
+0x80000000, 0, ecx, 31:0, cpu_vendorid_2 , Vendor ID string bytes 8 - 11
+0x80000000, 0, edx, 31:0, cpu_vendorid_1 , Vendor ID string bytes 4 - 7
# Leaf 80000001H
-# Extended Processor Signature and Feature Bits
-
-0x80000001, 0, EAX, 27:20, extfamily, Extended family
-0x80000001, 0, EAX, 19:16, extmodel, Extended model
-0x80000001, 0, EAX, 11:8, basefamily, Description of Family
-0x80000001, 0, EAX, 11:8, basemodel, Model numbers vary with product
-0x80000001, 0, EAX, 3:0, stepping, Processor stepping (revision) for a specific model
-
-0x80000001, 0, EBX, 31:28, pkgtype, Specifies the package type
-
-0x80000001, 0, ECX, 0, lahf_lm, LAHF/SAHF available in 64-bit mode
-0x80000001, 0, ECX, 1, cmplegacy, Core multi-processing legacy mode
-0x80000001, 0, ECX, 2, svm, Indicates support for: VMRUN, VMLOAD, VMSAVE, CLGI, VMMCALL, and INVLPGA
-0x80000001, 0, ECX, 3, extapicspace, Extended APIC register space
-0x80000001, 0, ECX, 4, altmovecr8, Indicates support for LOCK MOV CR0 means MOV CR8
-0x80000001, 0, ECX, 5, lzcnt, LZCNT
-0x80000001, 0, ECX, 6, sse4a, EXTRQ, INSERTQ, MOVNTSS, and MOVNTSD instruction support
-0x80000001, 0, ECX, 7, misalignsse, Misaligned SSE Mode
-0x80000001, 0, ECX, 8, prefetchw, PREFETCHW
-0x80000001, 0, ECX, 9, osvw, OS Visible Work-around support
-0x80000001, 0, ECX, 10, ibs, Instruction Based Sampling
-0x80000001, 0, ECX, 11, xop, Extended operation support
-0x80000001, 0, ECX, 12, skinit, SKINIT and STGI support
-0x80000001, 0, ECX, 13, wdt, Watchdog timer support
-0x80000001, 0, ECX, 15, lwp, Lightweight profiling support
-0x80000001, 0, ECX, 16, fma4, Four-operand FMA instruction support
-0x80000001, 0, ECX, 17, tce, Translation cache extension
-0x80000001, 0, ECX, 22, TopologyExtensions, Indicates support for Core::X86::Cpuid::CachePropEax0 and Core::X86::Cpuid::ExtApicId
-0x80000001, 0, ECX, 23, perfctrextcore, Indicates support for Core::X86::Msr::PERF_CTL0 - 5 and Core::X86::Msr::PERF_CTR
-0x80000001, 0, ECX, 24, perfctrextdf, Indicates support for Core::X86::Msr::DF_PERF_CTL and Core::X86::Msr::DF_PERF_CTR
-0x80000001, 0, ECX, 26, databreakpointextension, Indicates data breakpoint support for Core::X86::Msr::DR0_ADDR_MASK, Core::X86::Msr::DR1_ADDR_MASK, Core::X86::Msr::DR2_ADDR_MASK and Core::X86::Msr::DR3_ADDR_MASK
-0x80000001, 0, ECX, 27, perftsc, Performance time-stamp counter supported
-0x80000001, 0, ECX, 28, perfctrextllc, Indicates support for L3 performance counter extensions
-0x80000001, 0, ECX, 29, mwaitextended, MWAITX and MONITORX capability is supported
-0x80000001, 0, ECX, 30, admskextn, Indicates support for address mask extension (to 32 bits and to all 4 DRs) for instruction breakpoints
-
-0x80000001, 0, EDX, 0, fpu, x87 floating point unit on-chip
-0x80000001, 0, EDX, 1, vme, Virtual-mode enhancements
-0x80000001, 0, EDX, 2, de, Debugging extensions, IO breakpoints, CR4.DE
-0x80000001, 0, EDX, 3, pse, Page-size extensions (4 MB pages)
-0x80000001, 0, EDX, 4, tsc, Time stamp counter, RDTSC/RDTSCP instructions, CR4.TSD
-0x80000001, 0, EDX, 5, msr, Model-specific registers (MSRs), with RDMSR and WRMSR instructions
-0x80000001, 0, EDX, 6, pae, Physical-address extensions (PAE)
-0x80000001, 0, EDX, 7, mce, Machine Check Exception, CR4.MCE
-0x80000001, 0, EDX, 8, cmpxchg8b, CMPXCHG8B instruction
-0x80000001, 0, EDX, 9, apic, advanced programmable interrupt controller (APIC) exists and is enabled
-0x80000001, 0, EDX, 11, sysret, SYSCALL/SYSRET supported
-0x80000001, 0, EDX, 12, mtrr, Memory-type range registers
-0x80000001, 0, EDX, 13, pge, Page global extension, CR4.PGE
-0x80000001, 0, EDX, 14, mca, Machine check architecture, MCG_CAP
-0x80000001, 0, EDX, 15, cmov, Conditional move instructions, CMOV, FCOMI, FCMOV
-0x80000001, 0, EDX, 16, pat, Page attribute table
-0x80000001, 0, EDX, 17, pse36, Page-size extensions
-0x80000001, 0, EDX, 20, exec_dis, Execute Disable Bit available
-0x80000001, 0, EDX, 22, mmxext, AMD extensions to MMX instructions
-0x80000001, 0, EDX, 23, mmx, MMX instructions
-0x80000001, 0, EDX, 24, fxsr, FXSAVE and FXRSTOR instructions
-0x80000001, 0, EDX, 25, ffxsr, FXSAVE and FXRSTOR instruction optimizations
-0x80000001, 0, EDX, 26, 1gb_page, 1GB page supported
-0x80000001, 0, EDX, 27, rdtscp, RDTSCP and IA32_TSC_AUX are available
-0x80000001, 0, EDX, 29, lm, 64b Architecture supported
-0x80000001, 0, EDX, 30, threednowext, AMD extensions to 3DNow! instructions
-0x80000001, 0, EDX, 31, threednow, 3DNow! instructions
-
-# Leaf 80000002H/80000003H/80000004H
-# Processor Brand String
+# Extended CPU feature identifiers
+
+0x80000001, 0, eax, 3:0, e_stepping_id , Stepping ID
+0x80000001, 0, eax, 7:4, e_base_model , Base processor model
+0x80000001, 0, eax, 11:8, e_base_family , Base processor family
+0x80000001, 0, eax, 13:12, e_base_type , Base processor type (Transmeta)
+0x80000001, 0, eax, 19:16, e_ext_model , Extended processor model
+0x80000001, 0, eax, 27:20, e_ext_family , Extended processor family
+0x80000001, 0, ebx, 15:0, brand_id , Brand ID
+0x80000001, 0, ebx, 31:28, pkg_type , Package type
+0x80000001, 0, ecx, 0, lahf_lm , LAHF and SAHF in 64-bit mode
+0x80000001, 0, ecx, 1, cmp_legacy , Multi-processing legacy mode (No HT)
+0x80000001, 0, ecx, 2, svm , Secure Virtual Machine
+0x80000001, 0, ecx, 3, extapic , Extended APIC space
+0x80000001, 0, ecx, 4, cr8_legacy , LOCK MOV CR0 means MOV CR8
+0x80000001, 0, ecx, 5, abm , LZCNT advanced bit manipulation
+0x80000001, 0, ecx, 6, sse4a , SSE4A support
+0x80000001, 0, ecx, 7, misalignsse , Misaligned SSE mode
+0x80000001, 0, ecx, 8, 3dnowprefetch , 3DNow PREFETCH/PREFETCHW support
+0x80000001, 0, ecx, 9, osvw , OS visible workaround
+0x80000001, 0, ecx, 10, ibs , Instruction based sampling
+0x80000001, 0, ecx, 11, xop , XOP: extended operation (AVX instructions)
+0x80000001, 0, ecx, 12, skinit , SKINIT/STGI support
+0x80000001, 0, ecx, 13, wdt , Watchdog timer support
+0x80000001, 0, ecx, 15, lwp , Lightweight profiling
+0x80000001, 0, ecx, 16, fma4 , 4-operand FMA instruction
+0x80000001, 0, ecx, 17, tce , Translation cache extension
+0x80000001, 0, ecx, 19, nodeid_msr , NodeId MSR (0xc001100c)
+0x80000001, 0, ecx, 21, tbm , Trailing bit manipulations
+0x80000001, 0, ecx, 22, topoext , Topology Extensions (leaf 0x8000001d)
+0x80000001, 0, ecx, 23, perfctr_core , Core performance counter extensions
+0x80000001, 0, ecx, 24, perfctr_nb , NB/DF performance counter extensions
+0x80000001, 0, ecx, 26, bpext , Data access breakpoint extension
+0x80000001, 0, ecx, 27, ptsc , Performance time-stamp counter
+0x80000001, 0, ecx, 28, perfctr_llc , LLC (L3) performance counter extensions
+0x80000001, 0, ecx, 29, mwaitx , MWAITX/MONITORX support
+0x80000001, 0, ecx, 30, addr_mask_ext , Breakpoint address mask extension (to bit 31)
+0x80000001, 0, edx, 0, e_fpu , Floating-Point Unit on-chip (x87)
+0x80000001, 0, edx, 1, e_vme , Virtual-8086 Mode Extensions
+0x80000001, 0, edx, 2, e_de , Debugging Extensions
+0x80000001, 0, edx, 3, e_pse , Page Size Extension
+0x80000001, 0, edx, 4, e_tsc , Time Stamp Counter
+0x80000001, 0, edx, 5, e_msr , Model-Specific Registers (RDMSR and WRMSR support)
+0x80000001, 0, edx, 6, pae , Physical Address Extensions
+0x80000001, 0, edx, 7, mce , Machine Check Exception
+0x80000001, 0, edx, 8, cx8 , CMPXCHG8B instruction
+0x80000001, 0, edx, 9, apic , APIC on-chip
+0x80000001, 0, edx, 11, syscall , SYSCALL and SYSRET instructions
+0x80000001, 0, edx, 12, mtrr , Memory Type Range Registers
+0x80000001, 0, edx, 13, pge , Page Global Extensions
+0x80000001, 0, edx, 14, mca , Machine Check Architecture
+0x80000001, 0, edx, 15, cmov , Conditional Move Instruction
+0x80000001, 0, edx, 16, pat , Page Attribute Table
+0x80000001, 0, edx, 17, pse36 , Page Size Extension (36-bit)
+0x80000001, 0, edx, 19, mp , Out-of-spec AMD Multiprocessing bit
+0x80000001, 0, edx, 20, nx , No-execute page protection
+0x80000001, 0, edx, 22, mmxext , AMD MMX extensions
+0x80000001, 0, edx, 23, e_mmx , MMX instructions
+0x80000001, 0, edx, 24, e_fxsr , FXSAVE and FXRSTOR instructions
+0x80000001, 0, edx, 25, fxsr_opt , FXSAVE and FXRSTOR optimizations
+0x80000001, 0, edx, 26, pdpe1gb , 1-GB large page support
+0x80000001, 0, edx, 27, rdtscp , RDTSCP instruction
+0x80000001, 0, edx, 29, lm , Long mode (x86-64, 64-bit support)
+0x80000001, 0, edx, 30, 3dnowext , AMD 3DNow extensions
+0x80000001, 0, edx, 31, 3dnow , 3DNow instructions
+
+# Leaf 80000002H
+# CPU brand ID string, bytes 0 - 15
+
+0x80000002, 0, eax, 31:0, cpu_brandid_0 , CPU brand ID string, bytes 0 - 3
+0x80000002, 0, ebx, 31:0, cpu_brandid_1 , CPU brand ID string, bytes 4 - 7
+0x80000002, 0, ecx, 31:0, cpu_brandid_2 , CPU brand ID string, bytes 8 - 11
+0x80000002, 0, edx, 31:0, cpu_brandid_3 , CPU brand ID string, bytes 12 - 15
+
+# Leaf 80000003H
+# CPU brand ID string, bytes 16 - 31
+
+0x80000003, 0, eax, 31:0, cpu_brandid_4          , CPU brand ID string, bytes 16 - 19
+0x80000003, 0, ebx, 31:0, cpu_brandid_5          , CPU brand ID string, bytes 20 - 23
+0x80000003, 0, ecx, 31:0, cpu_brandid_6          , CPU brand ID string, bytes 24 - 27
+0x80000003, 0, edx, 31:0, cpu_brandid_7          , CPU brand ID string, bytes 28 - 31
+
+# Leaf 80000004H
+# CPU brand ID string, bytes 32 - 47
+
+0x80000004, 0, eax, 31:0, cpu_brandid_8 , CPU brand ID string, bytes 32 - 35
+0x80000004, 0, ebx, 31:0, cpu_brandid_9 , CPU brand ID string, bytes 36 - 39
+0x80000004, 0, ecx, 31:0, cpu_brandid_10 , CPU brand ID string, bytes 40 - 43
+0x80000004, 0, edx, 31:0, cpu_brandid_11 , CPU brand ID string, bytes 44 - 47
# Leaf 80000005H
-# Reserved
+# AMD/Transmeta L1 cache and L1 TLB enumeration
+
+0x80000005, 0, eax, 7:0, l1_itlb_2m_4m_nentries , L1 ITLB #entries, 2M and 4M pages
+0x80000005, 0, eax, 15:8, l1_itlb_2m_4m_assoc , L1 ITLB associativity, 2M and 4M pages
+0x80000005, 0, eax, 23:16, l1_dtlb_2m_4m_nentries , L1 DTLB #entries, 2M and 4M pages
+0x80000005, 0, eax, 31:24, l1_dtlb_2m_4m_assoc , L1 DTLB associativity, 2M and 4M pages
+0x80000005, 0, ebx, 7:0, l1_itlb_4k_nentries , L1 ITLB #entries, 4K pages
+0x80000005, 0, ebx, 15:8, l1_itlb_4k_assoc , L1 ITLB associativity, 4K pages
+0x80000005, 0, ebx, 23:16, l1_dtlb_4k_nentries , L1 DTLB #entries, 4K pages
+0x80000005, 0, ebx, 31:24, l1_dtlb_4k_assoc , L1 DTLB associativity, 4K pages
+0x80000005, 0, ecx, 7:0, l1_dcache_line_size , L1 dcache line size, in bytes
+0x80000005, 0, ecx, 15:8, l1_dcache_nlines , L1 dcache lines per tag
+0x80000005, 0, ecx, 23:16, l1_dcache_assoc , L1 dcache associativity
+0x80000005, 0, ecx, 31:24, l1_dcache_size_kb , L1 dcache size, in KB
+0x80000005, 0, edx, 7:0, l1_icache_line_size , L1 icache line size, in bytes
+0x80000005, 0, edx, 15:8, l1_icache_nlines , L1 icache lines per tag
+0x80000005, 0, edx, 23:16, l1_icache_assoc , L1 icache associativity
+0x80000005, 0, edx, 31:24, l1_icache_size_kb , L1 icache size, in KB
# Leaf 80000006H
-# Extended L2 Cache Features
-
-0x80000006, 0, ECX, 7:0, clsize, Cache Line size in bytes
-0x80000006, 0, ECX, 15:12, l2c_assoc, L2 Associativity
-0x80000006, 0, ECX, 31:16, csize, Cache size in 1K units
-
+# (Mostly AMD) L2 TLB, L2 cache, and L3 cache enumeration
+
+0x80000006, 0, eax, 11:0, l2_itlb_2m_4m_nentries , L2 iTLB #entries, 2M and 4M pages
+0x80000006, 0, eax, 15:12, l2_itlb_2m_4m_assoc , L2 iTLB associativity, 2M and 4M pages
+0x80000006, 0, eax, 27:16, l2_dtlb_2m_4m_nentries , L2 dTLB #entries, 2M and 4M pages
+0x80000006, 0, eax, 31:28, l2_dtlb_2m_4m_assoc , L2 dTLB associativity, 2M and 4M pages
+0x80000006, 0, ebx, 11:0, l2_itlb_4k_nentries , L2 iTLB #entries, 4K pages
+0x80000006, 0, ebx, 15:12, l2_itlb_4k_assoc , L2 iTLB associativity, 4K pages
+0x80000006, 0, ebx, 27:16, l2_dtlb_4k_nentries , L2 dTLB #entries, 4K pages
+0x80000006, 0, ebx, 31:28, l2_dtlb_4k_assoc , L2 dTLB associativity, 4K pages
+0x80000006, 0, ecx, 7:0, l2_line_size , L2 cache line size, in bytes
+0x80000006, 0, ecx, 11:8, l2_nlines , L2 cache number of lines per tag
+0x80000006, 0, ecx, 15:12, l2_assoc , L2 cache associativity
+0x80000006, 0, ecx, 31:16, l2_size_kb , L2 cache size, in KB
+0x80000006, 0, edx, 7:0, l3_line_size , L3 cache line size, in bytes
+0x80000006, 0, edx, 11:8, l3_nlines , L3 cache number of lines per tag
+0x80000006, 0, edx, 15:12, l3_assoc , L3 cache associativity
+0x80000006, 0, edx, 31:18, l3_size_range , L3 cache size range
# Leaf 80000007H
-
-0x80000007, 0, EDX, 8, nonstop_tsc, Invariant TSC available
-
+# CPU power management (mostly AMD) and AMD RAS enumeration
+
+0x80000007, 0, ebx, 0, overflow_recov , MCA overflow conditions not fatal
+0x80000007, 0, ebx, 1, succor , Software containment of uncorrectable errors
+0x80000007, 0, ebx, 2, hw_assert , Hardware assert MSRs
+0x80000007, 0, ebx, 3, smca , Scalable MCA (MCAX MSRs)
+0x80000007, 0, ecx, 31:0, cpu_pwr_sample_ratio , CPU power sample time ratio
+0x80000007, 0, edx, 0, digital_temp , Digital temperature sensor
+0x80000007, 0, edx, 1, powernow_freq_id , PowerNOW! frequency scaling
+0x80000007, 0, edx, 2, powernow_volt_id , PowerNOW! voltage scaling
+0x80000007, 0, edx, 3, thermal_trip , THERMTRIP (Thermal Trip)
+0x80000007, 0, edx, 4, hw_thermal_control , Hardware thermal control
+0x80000007, 0, edx, 5, sw_thermal_control , Software thermal control
+0x80000007, 0, edx, 6, 100mhz_steps , 100 MHz multiplier control
+0x80000007, 0, edx, 7, hw_pstate , Hardware P-state control
+0x80000007, 0, edx, 8, constant_tsc , TSC ticks at constant rate across all P and C states
+0x80000007, 0, edx, 9, cpb , Core performance boost
+0x80000007, 0, edx, 10, eff_freq_ro , Read-only effective frequency interface
+0x80000007, 0, edx, 11, proc_feedback , Processor feedback interface (deprecated)
+0x80000007, 0, edx, 12, acc_power , Processor power reporting interface
+0x80000007, 0, edx, 13, connected_standby , CPU Connected Standby support
+0x80000007, 0, edx, 14, rapl , Runtime Average Power Limit interface
# Leaf 80000008H
-
-0x80000008, 0, EAX, 7:0, phy_adr_bits, Physical Address Bits
-0x80000008, 0, EAX, 15:8, lnr_adr_bits, Linear Address Bits
-0x80000007, 0, EBX, 9, wbnoinvd, WBNOINVD
-
-# 0x8000001E
-# EAX: Extended APIC ID
-0x8000001E, 0, EAX, 31:0, extended_apic_id, Extended APIC ID
-# EBX: Core Identifiers
-0x8000001E, 0, EBX, 7:0, core_id, Identifies the logical core ID
-0x8000001E, 0, EBX, 15:8, threads_per_core, The number of threads per core is threads_per_core + 1
-# ECX: Node Identifiers
-0x8000001E, 0, ECX, 7:0, node_id, Node ID
-0x8000001E, 0, ECX, 10:8, nodes_per_processor, Nodes per processor { 0: 1 node, else reserved }
-
-# 8000001F: AMD Secure Encryption
-0x8000001F, 0, EAX, 0, sme, Secure Memory Encryption
-0x8000001F, 0, EAX, 1, sev, Secure Encrypted Virtualization
-0x8000001F, 0, EAX, 2, vmpgflush, VM Page Flush MSR
-0x8000001F, 0, EAX, 3, seves, SEV Encrypted State
-0x8000001F, 0, EBX, 5:0, c-bit, Page table bit number used to enable memory encryption
-0x8000001F, 0, EBX, 11:6, mem_encrypt_physaddr_width, Reduction of physical address space in bits with SME enabled
-0x8000001F, 0, ECX, 31:0, num_encrypted_guests, Maximum ASID value that may be used for an SEV-enabled guest
-0x8000001F, 0, EDX, 31:0, minimum_sev_asid, Minimum ASID value that must be used for an SEV-enabled, SEV-ES-disabled guest
+# CPU capacity parameters and extended feature flags (mostly AMD)
+
+0x80000008, 0, eax, 7:0, phys_addr_bits , Max physical address bits
+0x80000008, 0, eax, 15:8, virt_addr_bits , Max virtual address bits
+0x80000008, 0, eax, 23:16, guest_phys_addr_bits , Max nested-paging guest physical address bits
+0x80000008, 0, ebx, 0, clzero , CLZERO supported
+0x80000008, 0, ebx, 1, irperf , Instruction retired counter MSR
+0x80000008, 0, ebx, 2, xsaveerptr , XSAVE/XRSTOR always saves/restores FPU error pointers
+0x80000008, 0, ebx, 3, invlpgb , INVLPGB broadcasts a TLB invalidate to all threads
+0x80000008, 0, ebx, 4, rdpru , RDPRU (Read Processor Register at User level) supported
+0x80000008, 0, ebx, 6, mba , Memory Bandwidth Allocation (AMD bit)
+0x80000008, 0, ebx, 8, mcommit , MCOMMIT (Memory commit) supported
+0x80000008, 0, ebx, 9, wbnoinvd , WBNOINVD supported
+0x80000008, 0, ebx, 12, amd_ibpb , Indirect Branch Prediction Barrier
+0x80000008, 0, ebx, 13, wbinvd_int , Interruptible WBINVD/WBNOINVD
+0x80000008, 0, ebx, 14, amd_ibrs , Indirect Branch Restricted Speculation
+0x80000008, 0, ebx, 15, amd_stibp , Single Thread Indirect Branch Prediction mode
+0x80000008, 0, ebx, 16, ibrs_always_on , IBRS always-on preferred
+0x80000008, 0, ebx, 17, amd_stibp_always_on , STIBP always-on preferred
+0x80000008, 0, ebx, 18, ibrs_fast , IBRS is preferred over software solution
+0x80000008, 0, ebx, 19, ibrs_same_mode , IBRS provides same mode protection
+0x80000008, 0, ebx, 20, no_efer_lmsle , EFER[LMSLE] bit (Long-Mode Segment Limit Enable) unsupported
+0x80000008, 0, ebx, 21, tlb_flush_nested , INVLPGB RAX[5] bit can be set (nested translations)
+0x80000008, 0, ebx, 23, amd_ppin , Protected Processor Inventory Number
+0x80000008, 0, ebx, 24, amd_ssbd , Speculative Store Bypass Disable
+0x80000008, 0, ebx, 25, virt_ssbd , virtualized SSBD (Speculative Store Bypass Disable)
+0x80000008, 0, ebx, 26, amd_ssb_no , SSBD is not needed (fixed in hardware)
+0x80000008, 0, ebx, 27, cppc , Collaborative Processor Performance Control
+0x80000008, 0, ebx, 28, amd_psfd , Predictive Store Forward Disable
+0x80000008, 0, ebx, 29, btc_no , CPU not affected by Branch Type Confusion
+0x80000008, 0, ebx, 30, ibpb_ret , IBPB clears RSB/RAS too
+0x80000008, 0, ebx, 31, brs , Branch Sampling supported
+0x80000008, 0, ecx, 7:0, cpu_nthreads , Number of physical threads - 1
+0x80000008, 0, ecx, 15:12, apicid_coreid_len , Number of thread core ID bits (shift) in APIC ID
+0x80000008, 0, ecx, 17:16, perf_tsc_len , Performance time-stamp counter size
+0x80000008, 0, edx, 15:0, invlpgb_max_pages , INVLPGB maximum page count
+0x80000008, 0, edx, 31:16, rdpru_max_reg_id , RDPRU max register ID (ECX input)
+
+# Leaf 8000000AH
+# AMD SVM (Secure Virtual Machine) enumeration
+
+0x8000000a, 0, eax, 7:0, svm_version , SVM revision number
+0x8000000a, 0, ebx, 31:0, svm_nasid , Number of address space identifiers (ASID)
+0x8000000a, 0, edx, 0, npt , Nested paging
+0x8000000a, 0, edx, 1, lbrv , LBR virtualization
+0x8000000a, 0, edx, 2, svm_lock , SVM lock
+0x8000000a, 0, edx, 3, nrip_save , NRIP save support on #VMEXIT
+0x8000000a, 0, edx, 4, tsc_scale , MSR based TSC rate control
+0x8000000a, 0, edx, 5, vmcb_clean , VMCB clean bits support
+0x8000000a, 0, edx, 6, flushbyasid , Flush by ASID + Extended VMCB TLB_Control
+0x8000000a, 0, edx, 7, decodeassists , Decode Assists support
+0x8000000a, 0, edx, 10, pausefilter , Pause intercept filter
+0x8000000a, 0, edx, 12, pfthreshold , Pause filter threshold
+0x8000000a, 0, edx, 13, avic , Advanced virtual interrupt controller
+0x8000000a, 0, edx, 15, v_vmsave_vmload , Virtual VMSAVE/VMLOAD (nested virtualization)
+0x8000000a, 0, edx, 16, vgif , Virtualize the Global Interrupt Flag
+0x8000000a, 0, edx, 17, gmet , Guest mode execution trap
+0x8000000a, 0, edx, 18, x2avic , Virtual x2APIC
+0x8000000a, 0, edx, 19, sss_check , Supervisor Shadow Stack restrictions
+0x8000000a, 0, edx, 20, v_spec_ctrl , Virtual SPEC_CTRL
+0x8000000a, 0, edx, 21, ro_gpt , Read-Only guest page table support
+0x8000000a, 0, edx, 23, h_mce_override , Host MCE override
+0x8000000a, 0, edx, 24, tlbsync_int , TLBSYNC intercept + INVLPGB/TLBSYNC in VMCB
+0x8000000a, 0, edx, 25, vnmi , NMI virtualization
+0x8000000a, 0, edx, 26, ibs_virt , IBS Virtualization
+0x8000000a, 0, edx, 27, ext_lvt_off_chg , Extended LVT offset fault change
+0x8000000a, 0, edx, 28, svme_addr_chk , Guest SVME address check
+
+# Leaf 80000019H
+# AMD TLB 1G-pages enumeration
+
+0x80000019, 0, eax, 11:0, l1_itlb_1g_nentries , L1 iTLB #entries, 1G pages
+0x80000019, 0, eax, 15:12, l1_itlb_1g_assoc , L1 iTLB associativity, 1G pages
+0x80000019, 0, eax, 27:16, l1_dtlb_1g_nentries , L1 dTLB #entries, 1G pages
+0x80000019, 0, eax, 31:28, l1_dtlb_1g_assoc , L1 dTLB associativity, 1G pages
+0x80000019, 0, ebx, 11:0, l2_itlb_1g_nentries , L2 iTLB #entries, 1G pages
+0x80000019, 0, ebx, 15:12, l2_itlb_1g_assoc , L2 iTLB associativity, 1G pages
+0x80000019, 0, ebx, 27:16, l2_dtlb_1g_nentries , L2 dTLB #entries, 1G pages
+0x80000019, 0, ebx, 31:28, l2_dtlb_1g_assoc , L2 dTLB associativity, 1G pages
+
+# Leaf 8000001AH
+# AMD instruction optimizations enumeration
+
+0x8000001a, 0, eax, 0, fp_128 , Internal FP/SIMD exec data path is 128-bits wide
+0x8000001a, 0, eax, 1, movu_preferred , SSE: MOVU* better than MOVL*/MOVH*
+0x8000001a, 0, eax, 2, fp_256                  , Internal FP/SSE exec data path is 256-bits wide
+
+# Leaf 8000001BH
+# AMD IBS (Instruction-Based Sampling) enumeration
+
+0x8000001b, 0, eax, 0, ibs_flags_valid , IBS feature flags valid
+0x8000001b, 0, eax, 1, ibs_fetch_sampling , IBS fetch sampling supported
+0x8000001b, 0, eax, 2, ibs_op_sampling , IBS execution sampling supported
+0x8000001b, 0, eax, 3, ibs_rdwr_op_counter , IBS read/write of op counter supported
+0x8000001b, 0, eax, 4, ibs_op_count , IBS OP counting mode supported
+0x8000001b, 0, eax, 5, ibs_branch_target , IBS branch target address reporting supported
+0x8000001b, 0, eax, 6, ibs_op_counters_ext , IBS IbsOpCurCnt/IbsOpMaxCnt extend by 7 bits
+0x8000001b, 0, eax, 7, ibs_rip_invalid_chk , IBS invalid RIP indication supported
+0x8000001b, 0, eax, 8, ibs_op_branch_fuse , IBS fused branch micro-op indication supported
+0x8000001b, 0, eax, 9, ibs_fetch_ctl_ext , IBS Fetch Control Extended MSR (0xc001103c) supported
+0x8000001b, 0, eax, 10, ibs_op_data_4 , IBS op data 4 MSR supported
+0x8000001b, 0, eax, 11, ibs_l3_miss_filter , IBS L3-miss filtering supported (Zen4+)
+
+# Leaf 8000001CH
+# AMD LWP (Lightweight Profiling)
+
+0x8000001c, 0, eax, 0, os_lwp_avail , LWP is available to application programs (supported by OS)
+0x8000001c, 0, eax, 1, os_lwpval               , LWPVAL instruction is supported by OS
+0x8000001c, 0, eax, 2, os_lwp_ire , Instructions Retired Event is supported by OS
+0x8000001c, 0, eax, 3, os_lwp_bre , Branch Retired Event is supported by OS
+0x8000001c, 0, eax, 4, os_lwp_dme , Dcache Miss Event is supported by OS
+0x8000001c, 0, eax, 5, os_lwp_cnh , CPU Clocks Not Halted event is supported by OS
+0x8000001c, 0, eax, 6, os_lwp_rnh , CPU Reference clocks Not Halted event is supported by OS
+0x8000001c, 0, eax, 29, os_lwp_cont , LWP sampling in continuous mode is supported by OS
+0x8000001c, 0, eax, 30, os_lwp_ptsc , Performance Time Stamp Counter in event records is supported by OS
+0x8000001c, 0, eax, 31, os_lwp_int , Interrupt on threshold overflow is supported by OS
+0x8000001c, 0, ebx, 7:0, lwp_lwpcb_sz , LWP Control Block size, in quadwords
+0x8000001c, 0, ebx, 15:8, lwp_event_sz , LWP event record size, in bytes
+0x8000001c, 0, ebx, 23:16, lwp_max_events , LWP max supported EventID value (EventID 255 not included)
+0x8000001c, 0, ebx, 31:24, lwp_event_offset , LWP events area offset in the LWP Control Block
+0x8000001c, 0, ecx, 4:0, lwp_latency_max , Number of bits in cache latency counters (10 to 31)
+0x8000001c, 0, ecx, 5, lwp_data_addr           , Cache miss events report the data address of the reference
+0x8000001c, 0, ecx, 8:6, lwp_latency_rnd , Amount by which cache latency is rounded
+0x8000001c, 0, ecx, 15:9, lwp_version , LWP implementation version
+0x8000001c, 0, ecx, 23:16, lwp_buf_min_sz , LWP event ring buffer min size, in units of 32 event records
+0x8000001c, 0, ecx, 28, lwp_branch_predict , Branches Retired events can be filtered
+0x8000001c, 0, ecx, 29, lwp_ip_filtering        , IP filtering (IPI, IPF, BaseIP, and LimitIP @ LWPCB) supported
+0x8000001c, 0, ecx, 30, lwp_cache_levels , Cache-related events can be filtered by cache level
+0x8000001c, 0, ecx, 31, lwp_cache_latency , Cache-related events can be filtered by latency
+0x8000001c, 0, edx, 0, hw_lwp_avail , LWP is available in hardware
+0x8000001c, 0, edx, 1, hw_lwpval               , LWPVAL instruction is available in hardware
+0x8000001c, 0, edx, 2, hw_lwp_ire , Instructions Retired Event is available in hardware
+0x8000001c, 0, edx, 3, hw_lwp_bre , Branch Retired Event is available in hardware
+0x8000001c, 0, edx, 4, hw_lwp_dme , Dcache Miss Event is available in hardware
+0x8000001c, 0, edx, 5, hw_lwp_cnh , Clocks Not Halted event is available in hardware
+0x8000001c, 0, edx, 6, hw_lwp_rnh , Reference clocks Not Halted event is available in hardware
+0x8000001c, 0, edx, 29, hw_lwp_cont , LWP sampling in continuous mode is available in hardware
+0x8000001c, 0, edx, 30, hw_lwp_ptsc , Performance Time Stamp Counter in event records is available in hardware
+0x8000001c, 0, edx, 31, hw_lwp_int , Interrupt on threshold overflow is available in hardware
+
+# Leaf 8000001DH
+# AMD deterministic cache parameters
+
+0x8000001d, 31:0, eax, 4:0, cache_type , Cache type field
+0x8000001d, 31:0, eax, 7:5, cache_level , Cache level (1-based)
+0x8000001d, 31:0, eax, 8, cache_self_init , Self-initializing cache level
+0x8000001d, 31:0, eax, 9, fully_associative , Fully-associative cache
+0x8000001d, 31:0, eax, 25:14, num_threads_sharing , Number of logical CPUs sharing cache
+0x8000001d, 31:0, ebx, 11:0, cache_linesize , System coherency line size (0-based)
+0x8000001d, 31:0, ebx, 21:12, cache_npartitions , Physical line partitions (0-based)
+0x8000001d, 31:0, ebx, 31:22, cache_nways , Ways of associativity (0-based)
+0x8000001d, 31:0, ecx, 30:0, cache_nsets , Cache number of sets (0-based)
+0x8000001d, 31:0, edx, 0, wbinvd_rll_no_guarantee, WBINVD/INVD not guaranteed for Remote Lower-Level caches
+0x8000001d, 31:0, edx, 1, ll_inclusive , Cache is inclusive of Lower-Level caches
+
+# Leaf 8000001EH
+# AMD CPU topology enumeration
+
+0x8000001e, 0, eax, 31:0, ext_apic_id , Extended APIC ID
+0x8000001e, 0, ebx, 7:0, core_id , Unique per-socket logical core unit ID
+0x8000001e, 0, ebx, 15:8, core_nthreads          , #Threads per core (zero-based)
+0x8000001e, 0, ecx, 7:0, node_id , Node (die) ID of invoking logical CPU
+0x8000001e, 0, ecx, 10:8, nnodes_per_socket , #nodes in invoking logical CPU's package/socket
+
+# Leaf 8000001FH
+# AMD encrypted memory capabilities enumeration (SME/SEV)
+
+0x8000001f, 0, eax, 0, sme , Secure Memory Encryption supported
+0x8000001f, 0, eax, 1, sev , Secure Encrypted Virtualization supported
+0x8000001f, 0, eax, 2, vm_page_flush , VM Page Flush MSR (0xc001011e) available
+0x8000001f, 0, eax, 3, sev_es , SEV Encrypted State supported
+0x8000001f, 0, eax, 4, sev_nested_paging , SEV secure nested paging supported
+0x8000001f, 0, eax, 5, vm_permission_levels , VMPL supported
+0x8000001f, 0, eax, 6, rmpquery                , RMPQUERY instruction supported
+0x8000001f, 0, eax, 7, vmpl_sss , VMPL supervisor shadow stack supported
+0x8000001f, 0, eax, 8, secure_tsc , Secure TSC supported
+0x8000001f, 0, eax, 9, v_tsc_aux , Hardware virtualizes TSC_AUX
+0x8000001f, 0, eax, 10, sme_coherent , Cache coherency is enforced across encryption domains
+0x8000001f, 0, eax, 11, req_64bit_hypervisor , SEV guest mandates 64-bit hypervisor
+0x8000001f, 0, eax, 12, restricted_injection , Restricted Injection supported
+0x8000001f, 0, eax, 13, alternate_injection , Alternate Injection supported
+0x8000001f, 0, eax, 14, debug_swap , SEV-ES: full debug state swap is supported
+0x8000001f, 0, eax, 15, disallow_host_ibs , SEV-ES: Disallowing IBS use by the host is supported
+0x8000001f, 0, eax, 16, virt_transparent_enc , Virtual Transparent Encryption
+0x8000001f, 0, eax, 17, vmgexit_parameter       , VmgexitParameter is supported in SEV_FEATURES
+0x8000001f, 0, eax, 18, virt_tom_msr , Virtual TOM MSR is supported
+0x8000001f, 0, eax, 19, virt_ibs , IBS state virtualization is supported for SEV-ES guests
+0x8000001f, 0, eax, 24, vmsa_reg_protection , VMSA register protection is supported
+0x8000001f, 0, eax, 25, smt_protection , SMT protection is supported
+0x8000001f, 0, eax, 28, svsm_page_msr , SVSM communication page MSR (0xc001f000) is supported
+0x8000001f, 0, eax, 29, nested_virt_snp_msr , VIRT_RMPUPDATE/VIRT_PSMASH MSRs are supported
+0x8000001f, 0, ebx, 5:0, pte_cbit_pos , PTE bit number used to enable memory encryption
+0x8000001f, 0, ebx, 11:6, phys_addr_reduction_nbits, Reduction of phys address space when encryption is enabled, in bits
+0x8000001f, 0, ebx, 15:12, vmpl_count , Number of VM permission levels (VMPL) supported
+0x8000001f, 0, ecx, 31:0, enc_guests_max , Max supported number of simultaneous encrypted guests
+0x8000001f, 0, edx, 31:0, min_sev_asid_no_sev_es , Minimum ASID for SEV-enabled SEV-ES-disabled guest
+
+# Leaf 80000020H
+# AMD Platform QoS extended feature IDs
+
+0x80000020, 0, ebx, 1, mba , Memory Bandwidth Allocation support
+0x80000020, 0, ebx, 2, smba , Slow Memory Bandwidth Allocation support
+0x80000020, 0, ebx, 3, bmec , Bandwidth Monitoring Event Configuration support
+0x80000020, 0, ebx, 4, l3rr , L3 Range Reservation support
+0x80000020, 0, ebx, 5, abmc , Assignable Bandwidth Monitoring Counters
+0x80000020, 0, ebx, 6, sdciae , Smart Data Cache Injection (SDCI) Allocation Enforcement
+0x80000020, 1, eax, 31:0, mba_limit_len , MBA enforcement limit size
+0x80000020, 1, edx, 31:0, mba_cos_max , MBA max Class of Service number (zero-based)
+0x80000020, 2, eax, 31:0, smba_limit_len , SMBA enforcement limit size
+0x80000020, 2, edx, 31:0, smba_cos_max , SMBA max Class of Service number (zero-based)
+0x80000020, 3, ebx, 7:0, bmec_num_events , BMEC number of bandwidth events available
+0x80000020, 3, ecx, 0, bmec_local_reads , Local NUMA reads can be tracked
+0x80000020, 3, ecx, 1, bmec_remote_reads , Remote NUMA reads can be tracked
+0x80000020, 3, ecx, 2, bmec_local_nontemp_wr , Local NUMA non-temporal writes can be tracked
+0x80000020, 3, ecx, 3, bmec_remote_nontemp_wr , Remote NUMA non-temporal writes can be tracked
+0x80000020, 3, ecx, 4, bmec_local_slow_mem_rd , Local NUMA slow-memory reads can be tracked
+0x80000020, 3, ecx, 5, bmec_remote_slow_mem_rd, Remote NUMA slow-memory reads can be tracked
+0x80000020, 3, ecx, 6, bmec_all_dirty_victims , Dirty QoS victims to all types of memory can be tracked
+
+# Leaf 80000021H
+# AMD extended features enumeration 2
+
+0x80000021, 0, eax, 0, no_nested_data_bp , No nested data breakpoints
+0x80000021, 0, eax, 1, fsgs_non_serializing , WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing
+0x80000021, 0, eax, 2, lfence_rdtsc , LFENCE always serializing / synchronizes RDTSC
+0x80000021, 0, eax, 3, smm_page_cfg_lock , SMM paging configuration lock
+0x80000021, 0, eax, 6, null_sel_clr_base , Null selector clears base
+0x80000021, 0, eax, 7, upper_addr_ignore , EFER MSR Upper Address Ignore
+0x80000021, 0, eax, 8, autoibrs , EFER MSR Automatic IBRS
+0x80000021, 0, eax, 9, no_smm_ctl_msr , SMM_CTL MSR (0xc0010116) is not available
+0x80000021, 0, eax, 10, fsrs , Fast Short Rep STOSB
+0x80000021, 0, eax, 11, fsrc , Fast Short Rep CMPSB
+0x80000021, 0, eax, 13, prefetch_ctl_msr , Prefetch control MSR is available
+0x80000021, 0, eax, 16, opcode_reclaim , Reserves opcode space
+0x80000021, 0, eax, 17, user_cpuid_disable , #GP when executing CPUID at CPL > 0 is supported
+0x80000021, 0, eax, 18, epsf , Enhanced Predictive Store Forwarding
+0x80000021, 0, eax, 22, wl_feedback , Workload-based heuristic feedback to OS
+0x80000021, 0, eax, 24, eraps , Enhanced Return Address Predictor Security
+0x80000021, 0, eax, 27, sbpb , Selective Branch Predictor Barrier
+0x80000021, 0, eax, 28, ibpb_brtype             , MSR PRED_CMD[IBPB] flushes all branch type predictions
+0x80000021, 0, eax, 29, srso_no , CPU is not subject to the SRSO vulnerability
+0x80000021, 0, eax, 30, srso_uk_no , CPU is not vulnerable to SRSO at user-kernel boundary
+0x80000021, 0, eax, 31, srso_msr_fix , Software may use MSR BP_CFG[BpSpecReduce] to mitigate SRSO
+0x80000021, 0, ebx, 15:0, microcode_patch_size , Size of microcode patch, in 16-byte units
+0x80000021, 0, ebx, 23:16, rap_size , Return Address Predictor size
+
+# Leaf 80000022H
+# AMD Performance Monitoring v2 enumeration
+
+0x80000022, 0, eax, 0, perfmon_v2 , Performance monitoring v2 supported
+0x80000022, 0, eax, 1, lbr_v2 , Last Branch Record v2 extensions (LBR Stack)
+0x80000022, 0, eax, 2, lbr_pmc_freeze , Freezing core performance counters / LBR Stack supported
+0x80000022, 0, ebx, 3:0, n_pmc_core , Number of core performance counters
+0x80000022, 0, ebx, 9:4, lbr_v2_stack_size , Number of available LBR stack entries
+0x80000022, 0, ebx, 15:10, n_pmc_northbridge , Number of available northbridge (data fabric) performance counters
+0x80000022, 0, ebx, 21:16, n_pmc_umc , Number of available UMC performance counters
+0x80000022, 0, ecx, 31:0, active_umc_bitmask , Active UMCs bitmask
+
+# Leaf 80000023H
+# AMD Secure Multi-key Encryption enumeration
+
+0x80000023, 0, eax, 0, mem_hmk_mode , MEM-HMK encryption mode is supported
+0x80000023, 0, ebx, 15:0, mem_hmk_avail_keys , MEM-HMK mode: total number of available encryption keys
+
+# Leaf 80000026H
+# AMD extended topology enumeration v2
+
+0x80000026, 3:0, eax, 4:0, x2apic_id_shift , Bit width of this level (previous levels inclusive)
+0x80000026, 3:0, eax, 29, core_has_pwreff_ranking, This core has a power efficiency ranking
+0x80000026, 3:0, eax, 30, domain_has_hybrid_cores, This domain level has hybrid (E, P) cores
+0x80000026, 3:0, eax, 31, domain_core_count_asymm, The 'Core' domain has an asymmetric core count
+0x80000026, 3:0, ebx, 15:0, domain_lcpus_count , Number of logical CPUs at this domain instance
+0x80000026, 3:0, ebx, 23:16, core_pwreff_ranking , This core's static power efficiency ranking
+0x80000026, 3:0, ebx, 27:24, core_native_model_id , This core's native model ID
+0x80000026, 3:0, ebx, 31:28, core_type , This core's type
+0x80000026, 3:0, ecx, 7:0, domain_level , This domain level (subleaf ID)
+0x80000026, 3:0, ecx, 15:8, domain_type , This domain type
+0x80000026, 3:0, edx, 31:0, x2apic_id , x2APIC ID of current logical CPU
+
+# Leaf 80860000H
+# Maximum Transmeta leaf number + CPU vendor ID string
+
+0x80860000, 0, eax, 31:0, max_tra_leaf , Maximum supported Transmeta leaf number
+0x80860000, 0, ebx, 31:0, cpu_vendorid_0 , Transmeta Vendor ID string bytes 0 - 3
+0x80860000, 0, ecx, 31:0, cpu_vendorid_2 , Transmeta Vendor ID string bytes 8 - 11
+0x80860000, 0, edx, 31:0, cpu_vendorid_1 , Transmeta Vendor ID string bytes 4 - 7
+
+# Leaf 80860001H
+# Transmeta extended CPU information
+
+0x80860001, 0, eax, 3:0, stepping , Stepping ID
+0x80860001, 0, eax, 7:4, base_model , Base CPU model ID
+0x80860001, 0, eax, 11:8, base_family_id , Base CPU family ID
+0x80860001, 0, eax, 13:12, cpu_type , CPU type
+0x80860001, 0, ebx, 7:0, cpu_rev_mask_minor , CPU revision ID, mask minor
+0x80860001, 0, ebx, 15:8, cpu_rev_mask_major , CPU revision ID, mask major
+0x80860001, 0, ebx, 23:16, cpu_rev_minor , CPU revision ID, minor
+0x80860001, 0, ebx, 31:24, cpu_rev_major , CPU revision ID, major
+0x80860001, 0, ecx, 31:0, cpu_base_mhz , CPU nominal frequency, in MHz
+0x80860001, 0, edx, 0, recovery , Recovery CMS is active (after bad flush)
+0x80860001, 0, edx, 1, longrun , LongRun power management capabilities
+0x80860001, 0, edx, 3, lrti , LongRun Table Interface
+
+# Leaf 80860002H
+# Transmeta Code Morphing Software (CMS) enumeration
+
+0x80860002, 0, eax, 31:0, cpu_rev_id , CPU revision ID
+0x80860002, 0, ebx, 7:0, cms_rev_mask_2 , CMS revision ID, mask component 2
+0x80860002, 0, ebx, 15:8, cms_rev_mask_1 , CMS revision ID, mask component 1
+0x80860002, 0, ebx, 23:16, cms_rev_minor , CMS revision ID, minor
+0x80860002, 0, ebx, 31:24, cms_rev_major , CMS revision ID, major
+0x80860002, 0, ecx, 31:0, cms_rev_mask_3 , CMS revision ID, mask component 3
+
+# Leaf 80860003H
+# Transmeta CPU information string, bytes 0 - 15
+
+0x80860003, 0, eax, 31:0, cpu_info_0 , CPU info string bytes 0 - 3
+0x80860003, 0, ebx, 31:0, cpu_info_1 , CPU info string bytes 4 - 7
+0x80860003, 0, ecx, 31:0, cpu_info_2 , CPU info string bytes 8 - 11
+0x80860003, 0, edx, 31:0, cpu_info_3 , CPU info string bytes 12 - 15
+
+# Leaf 80860004H
+# Transmeta CPU information string, bytes 16 - 31
+
+0x80860004, 0, eax, 31:0, cpu_info_4 , CPU info string bytes 16 - 19
+0x80860004, 0, ebx, 31:0, cpu_info_5 , CPU info string bytes 20 - 23
+0x80860004, 0, ecx, 31:0, cpu_info_6 , CPU info string bytes 24 - 27
+0x80860004, 0, edx, 31:0, cpu_info_7 , CPU info string bytes 28 - 31
+
+# Leaf 80860005H
+# Transmeta CPU information string, bytes 32 - 47
+
+0x80860005, 0, eax, 31:0, cpu_info_8 , CPU info string bytes 32 - 35
+0x80860005, 0, ebx, 31:0, cpu_info_9 , CPU info string bytes 36 - 39
+0x80860005, 0, ecx, 31:0, cpu_info_10 , CPU info string bytes 40 - 43
+0x80860005, 0, edx, 31:0, cpu_info_11 , CPU info string bytes 44 - 47
+
+# Leaf 80860006H
+# Transmeta CPU information string, bytes 48 - 63
+
+0x80860006, 0, eax, 31:0, cpu_info_12 , CPU info string bytes 48 - 51
+0x80860006, 0, ebx, 31:0, cpu_info_13 , CPU info string bytes 52 - 55
+0x80860006, 0, ecx, 31:0, cpu_info_14 , CPU info string bytes 56 - 59
+0x80860006, 0, edx, 31:0, cpu_info_15 , CPU info string bytes 60 - 63
+
+# Leaf 80860007H
+# Transmeta live CPU information
+
+0x80860007, 0, eax, 31:0, cpu_cur_mhz , Current CPU frequency, in MHz
+0x80860007, 0, ebx, 31:0, cpu_cur_voltage , Current CPU voltage, in millivolts
+0x80860007, 0, ecx, 31:0, cpu_cur_perf_pctg , Current CPU performance percentage, 0 - 100
+0x80860007, 0, edx, 31:0, cpu_cur_gate_delay , Current CPU gate delay, in femtoseconds
+
+# Leaf C0000000H
+# Maximum Centaur/Zhaoxin leaf number
+
+0xc0000000, 0, eax, 31:0, max_cntr_leaf , Maximum Centaur/Zhaoxin leaf number
+
+# Leaf C0000001H
+# Centaur/Zhaoxin extended CPU features
+
+0xc0000001, 0, edx, 0, ccs_sm2 , CCS SM2 instructions
+0xc0000001, 0, edx, 1, ccs_sm2_en , CCS SM2 enabled
+0xc0000001, 0, edx, 2, xstore , Random Number Generator
+0xc0000001, 0, edx, 3, xstore_en , RNG enabled
+0xc0000001, 0, edx, 4, ccs_sm3_sm4 , CCS SM3 and SM4 instructions
+0xc0000001, 0, edx, 5, ccs_sm3_sm4_en , CCS SM3/SM4 enabled
+0xc0000001, 0, edx, 6, ace , Advanced Cryptography Engine
+0xc0000001, 0, edx, 7, ace_en , ACE enabled
+0xc0000001, 0, edx, 8, ace2 , Advanced Cryptography Engine v2
+0xc0000001, 0, edx, 9, ace2_en , ACE v2 enabled
+0xc0000001, 0, edx, 10, phe , PadLock Hash Engine
+0xc0000001, 0, edx, 11, phe_en , PHE enabled
+0xc0000001, 0, edx, 12, pmm , PadLock Montgomery Multiplier
+0xc0000001, 0, edx, 13, pmm_en , PMM enabled
+0xc0000001, 0, edx, 16, parallax , Parallax auto adjust processor voltage
+0xc0000001, 0, edx, 17, parallax_en , Parallax enabled
+0xc0000001, 0, edx, 20, tm3 , Thermal Monitor v3
+0xc0000001, 0, edx, 21, tm3_en , TM v3 enabled
+0xc0000001, 0, edx, 25, phe2 , PadLock Hash Engine v2 (SHA384/SHA512)
+0xc0000001, 0, edx, 26, phe2_en , PHE v2 enabled
+0xc0000001, 0, edx, 27, rsa , RSA instructions (XMODEXP/MONTMUL2)
+0xc0000001, 0, edx, 28, rsa_en , RSA instructions enabled
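
The leaf 0x8000001d fields documented above are all reported zero-based, so the usable size of each cache level is (ways + 1) * (partitions + 1) * (line size + 1) * (sets + 1). A minimal sketch of reading those fields with the compiler's <cpuid.h>, assuming an AMD CPU that implements the leaf; the program below is illustrative and not part of the patch:

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		/* Walk CPUID leaf 0x8000001d until cache_type (EAX[4:0]) reads 0. */
		for (unsigned int subleaf = 0; ; subleaf++) {
			unsigned int eax, ebx, ecx, edx;
			unsigned long long ways, parts, line, sets;

			__cpuid_count(0x8000001d, subleaf, eax, ebx, ecx, edx);
			if ((eax & 0x1f) == 0)
				break;

			/* All four size fields are zero-based, as documented above. */
			ways  = ((ebx >> 22) & 0x3ff) + 1;	/* cache_nways       */
			parts = ((ebx >> 12) & 0x3ff) + 1;	/* cache_npartitions */
			line  = (ebx & 0xfff) + 1;		/* cache_linesize    */
			sets  = (ecx & 0x7fffffff) + 1ULL;	/* cache_nsets       */

			printf("L%u cache: %llu bytes\n",
			       (eax >> 5) & 0x7, ways * parts * line * sets);
		}
		return 0;
	}
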
diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
index 24b7d017ec2c..7dc6b9235d02 100644
--- a/tools/arch/x86/kcpuid/kcpuid.c
+++ b/tools/arch/x86/kcpuid/kcpuid.c
@@ -1,13 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
-#include <stdio.h>
+#include <cpuid.h>
+#include <err.h>
+#include <getopt.h>
#include <stdbool.h>
+#include <stdio.h>
#include <stdlib.h>
#include <string.h>
-#include <getopt.h>
-#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#define min(a, b) (((a) < (b)) ? (a) : (b))
+#define __noreturn __attribute__((__noreturn__))
typedef unsigned int u32;
typedef unsigned long long u64;
@@ -48,7 +52,7 @@ static const char * const reg_names[] = {
struct subleaf {
u32 index;
u32 sub;
- u32 eax, ebx, ecx, edx;
+ u32 output[NR_REGS];
struct reg_desc info[NR_REGS];
};
@@ -62,22 +66,64 @@ struct cpuid_func {
int nr;
};
+enum range_index {
+ RANGE_STD = 0, /* Standard */
+ RANGE_EXT = 0x80000000, /* Extended */
+ RANGE_TSM = 0x80860000, /* Transmeta */
+ RANGE_CTR = 0xc0000000, /* Centaur/Zhaoxin */
+};
+
+#define CPUID_INDEX_MASK 0xffff0000
+#define CPUID_FUNCTION_MASK (~CPUID_INDEX_MASK)
+
struct cpuid_range {
/* array of main leafs */
struct cpuid_func *funcs;
/* number of valid leafs */
int nr;
- bool is_ext;
+ enum range_index index;
};
-/*
- * basic: basic functions range: [0... ]
- * ext: extended functions range: [0x80000000... ]
- */
-struct cpuid_range *leafs_basic, *leafs_ext;
+static struct cpuid_range ranges[] = {
+ { .index = RANGE_STD, },
+ { .index = RANGE_EXT, },
+ { .index = RANGE_TSM, },
+ { .index = RANGE_CTR, },
+};
+
+static char *range_to_str(struct cpuid_range *range)
+{
+ switch (range->index) {
+ case RANGE_STD: return "Standard";
+ case RANGE_EXT: return "Extended";
+ case RANGE_TSM: return "Transmeta";
+ case RANGE_CTR: return "Centaur";
+ default: return NULL;
+ }
+}
+
+#define __for_each_cpuid_range(range, __condition) \
+ for (unsigned int i = 0; \
+ i < ARRAY_SIZE(ranges) && ((range) = &ranges[i]) && (__condition); \
+ i++)
+
+#define for_each_valid_cpuid_range(range) __for_each_cpuid_range(range, (range)->nr != 0)
+#define for_each_cpuid_range(range) __for_each_cpuid_range(range, true)
+
+struct cpuid_range *index_to_cpuid_range(u32 index)
+{
+ u32 func_idx = index & CPUID_FUNCTION_MASK;
+ u32 range_idx = index & CPUID_INDEX_MASK;
+ struct cpuid_range *range;
+
+ for_each_valid_cpuid_range(range) {
+ if (range->index == range_idx && (u32)range->nr > func_idx)
+ return range;
+ }
+
+ return NULL;
+}
-static int num_leafs;
-static bool is_amd;
static bool show_details;
static bool show_raw;
static bool show_flags_only = true;
@@ -85,40 +131,30 @@ static u32 user_index = 0xFFFFFFFF;
static u32 user_sub = 0xFFFFFFFF;
static int flines;
-static inline void cpuid(u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
-{
- /* ecx is often an input as well as an output. */
- asm volatile("cpuid"
- : "=a" (*eax),
- "=b" (*ebx),
- "=c" (*ecx),
- "=d" (*edx)
- : "0" (*eax), "2" (*ecx));
-}
+/*
+ * Force using <cpuid.h> __cpuid_count() instead of __cpuid(). The
+ * latter leaves ECX uninitialized, which can break CPUID queries.
+ */
+
+#define cpuid(leaf, a, b, c, d) \
+ __cpuid_count(leaf, 0, a, b, c, d)
+
+#define cpuid_count(leaf, subleaf, a, b, c, d) \
+ __cpuid_count(leaf, subleaf, a, b, c, d)
static inline bool has_subleafs(u32 f)
{
- if (f == 0x7 || f == 0xd)
- return true;
-
- if (is_amd) {
- if (f == 0x8000001d)
+ u32 with_subleaves[] = {
+ 0x4, 0x7, 0xb, 0xd, 0xf, 0x10, 0x12,
+ 0x14, 0x17, 0x18, 0x1b, 0x1d, 0x1f, 0x23,
+ 0x8000001d, 0x80000020, 0x80000026,
+ };
+
+ for (unsigned i = 0; i < ARRAY_SIZE(with_subleaves); i++)
+ if (f == with_subleaves[i])
return true;
- return false;
- }
- switch (f) {
- case 0x4:
- case 0xb:
- case 0xf:
- case 0x10:
- case 0x14:
- case 0x18:
- case 0x1f:
- return true;
- default:
- return false;
- }
+ return false;
}
static void leaf_print_raw(struct subleaf *leaf)
@@ -127,11 +163,11 @@ static void leaf_print_raw(struct subleaf *leaf)
if (leaf->sub == 0)
printf("0x%08x: subleafs:\n", leaf->index);
- printf(" %2d: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
- leaf->sub, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ printf(" %2d: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n", leaf->sub,
+ leaf->output[0], leaf->output[1], leaf->output[2], leaf->output[3]);
} else {
- printf("0x%08x: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n",
- leaf->index, leaf->eax, leaf->ebx, leaf->ecx, leaf->edx);
+ printf("0x%08x: EAX=0x%08x, EBX=0x%08x, ECX=0x%08x, EDX=0x%08x\n", leaf->index,
+ leaf->output[0], leaf->output[1], leaf->output[2], leaf->output[3]);
}
}
@@ -150,19 +186,19 @@ static bool cpuid_store(struct cpuid_range *range, u32 f, int subleaf,
* Cut off vendor-prefix from CPUID function as we're using it as an
* index into ->funcs.
*/
- func = &range->funcs[f & 0xffff];
+ func = &range->funcs[f & CPUID_FUNCTION_MASK];
if (!func->leafs) {
func->leafs = malloc(sizeof(struct subleaf));
if (!func->leafs)
- perror("malloc func leaf");
+ err(EXIT_FAILURE, NULL);
func->nr = 1;
} else {
s = func->nr;
func->leafs = realloc(func->leafs, (s + 1) * sizeof(*leaf));
if (!func->leafs)
- perror("realloc f->leafs");
+ err(EXIT_FAILURE, NULL);
func->nr++;
}
@@ -171,113 +207,99 @@ static bool cpuid_store(struct cpuid_range *range, u32 f, int subleaf,
leaf->index = f;
leaf->sub = subleaf;
- leaf->eax = a;
- leaf->ebx = b;
- leaf->ecx = c;
- leaf->edx = d;
+ leaf->output[R_EAX] = a;
+ leaf->output[R_EBX] = b;
+ leaf->output[R_ECX] = c;
+ leaf->output[R_EDX] = d;
return false;
}
static void raw_dump_range(struct cpuid_range *range)
{
- u32 f;
- int i;
-
- printf("%s Leafs :\n", range->is_ext ? "Extended" : "Basic");
+ printf("%s Leafs :\n", range_to_str(range));
printf("================\n");
- for (f = 0; (int)f < range->nr; f++) {
+ for (u32 f = 0; (int)f < range->nr; f++) {
struct cpuid_func *func = &range->funcs[f];
- u32 index = f;
-
- if (range->is_ext)
- index += 0x80000000;
/* Skip leaf without valid items */
if (!func->nr)
continue;
/* First item is the main leaf, followed by all subleafs */
- for (i = 0; i < func->nr; i++)
+ for (int i = 0; i < func->nr; i++)
leaf_print_raw(&func->leafs[i]);
}
}
-#define MAX_SUBLEAF_NUM 32
-struct cpuid_range *setup_cpuid_range(u32 input_eax)
+#define MAX_SUBLEAF_NUM 64
+#define MAX_RANGE_INDEX_OFFSET 0xff
+void setup_cpuid_range(struct cpuid_range *range)
{
- u32 max_func, idx_func;
- int subleaf;
- struct cpuid_range *range;
+ u32 max_func, range_funcs_sz;
u32 eax, ebx, ecx, edx;
- u32 f = input_eax;
- int max_subleaf;
- bool allzero;
-
- eax = input_eax;
- ebx = ecx = edx = 0;
- cpuid(&eax, &ebx, &ecx, &edx);
- max_func = eax;
- idx_func = (max_func & 0xffff) + 1;
+ cpuid(range->index, max_func, ebx, ecx, edx);
- range = malloc(sizeof(struct cpuid_range));
- if (!range)
- perror("malloc range");
+ /*
+ * If the CPUID range's maximum function value is garbage, then it
+ * is not recognized by this CPU. Set the range's number of valid
+	 * leaves to zero so that for_each_valid_cpuid_range() can ignore it.
+ */
+ if (max_func < range->index || max_func > (range->index + MAX_RANGE_INDEX_OFFSET)) {
+ range->nr = 0;
+ return;
+ }
- if (input_eax & 0x80000000)
- range->is_ext = true;
- else
- range->is_ext = false;
+ range->nr = (max_func & CPUID_FUNCTION_MASK) + 1;
+ range_funcs_sz = range->nr * sizeof(struct cpuid_func);
- range->funcs = malloc(sizeof(struct cpuid_func) * idx_func);
+ range->funcs = malloc(range_funcs_sz);
if (!range->funcs)
- perror("malloc range->funcs");
+ err(EXIT_FAILURE, NULL);
+
+ memset(range->funcs, 0, range_funcs_sz);
- range->nr = idx_func;
- memset(range->funcs, 0, sizeof(struct cpuid_func) * idx_func);
+ for (u32 f = range->index; f <= max_func; f++) {
+ u32 max_subleaf = MAX_SUBLEAF_NUM;
+ bool allzero;
- for (; f <= max_func; f++) {
- eax = f;
- subleaf = ecx = 0;
+ cpuid(f, eax, ebx, ecx, edx);
- cpuid(&eax, &ebx, &ecx, &edx);
- allzero = cpuid_store(range, f, subleaf, eax, ebx, ecx, edx);
+ allzero = cpuid_store(range, f, 0, eax, ebx, ecx, edx);
if (allzero)
continue;
- num_leafs++;
if (!has_subleafs(f))
continue;
- max_subleaf = MAX_SUBLEAF_NUM;
-
/*
* Some can provide the exact number of subleafs,
* others have to be tried (0xf)
*/
- if (f == 0x7 || f == 0x14 || f == 0x17 || f == 0x18)
- max_subleaf = (eax & 0xff) + 1;
-
+ if (f == 0x7 || f == 0x14 || f == 0x17 || f == 0x18 || f == 0x1d)
+ max_subleaf = min((eax & 0xff) + 1, max_subleaf);
if (f == 0xb)
max_subleaf = 2;
-
- for (subleaf = 1; subleaf < max_subleaf; subleaf++) {
- eax = f;
- ecx = subleaf;
-
- cpuid(&eax, &ebx, &ecx, &edx);
- allzero = cpuid_store(range, f, subleaf,
- eax, ebx, ecx, edx);
+ if (f == 0x1f)
+ max_subleaf = 6;
+ if (f == 0x23)
+ max_subleaf = 4;
+ if (f == 0x80000020)
+ max_subleaf = 4;
+ if (f == 0x80000026)
+ max_subleaf = 5;
+
+ for (u32 subleaf = 1; subleaf < max_subleaf; subleaf++) {
+ cpuid_count(f, subleaf, eax, ebx, ecx, edx);
+
+ allzero = cpuid_store(range, f, subleaf, eax, ebx, ecx, edx);
if (allzero)
continue;
- num_leafs++;
}
}
-
- return range;
}
/*
@@ -288,15 +310,13 @@ struct cpuid_range *setup_cpuid_range(u32 input_eax)
* 0, 0, EAX, 31:0, max_basic_leafs, Max input value for supported subleafs
* 1, 0, ECX, 0, sse3, Streaming SIMD Extensions 3(SSE3)
*/
-static int parse_line(char *line)
+static void parse_line(char *line)
{
char *str;
- int i;
struct cpuid_range *range;
struct cpuid_func *func;
struct subleaf *leaf;
u32 index;
- u32 sub;
char buffer[512];
char *buf;
/*
@@ -313,15 +333,17 @@ static int parse_line(char *line)
struct bits_desc *bdesc;
int reg_index;
char *start, *end;
+ u32 subleaf_start, subleaf_end;
+ unsigned bit_start, bit_end;
/* Skip comments and NULL line */
if (line[0] == '#' || line[0] == '\n')
- return 0;
+ return;
strncpy(buffer, line, 511);
buffer[511] = 0;
str = buffer;
- for (i = 0; i < 5; i++) {
+ for (int i = 0; i < 5; i++) {
tokens[i] = strtok(str, ",");
if (!tokens[i])
goto err_exit;
@@ -334,30 +356,40 @@ static int parse_line(char *line)
/* index/main-leaf */
index = strtoull(tokens[0], NULL, 0);
- if (index & 0x80000000)
- range = leafs_ext;
- else
- range = leafs_basic;
-
- index &= 0x7FFFFFFF;
- /* Skip line parsing for non-existing indexes */
- if ((int)index >= range->nr)
- return -1;
+ /*
+ * Skip line parsing if the index is not covered by known-valid
+ * CPUID ranges on this CPU.
+ */
+ range = index_to_cpuid_range(index);
+ if (!range)
+ return;
+ /* Skip line parsing if the index CPUID output is all zero */
+ index &= CPUID_FUNCTION_MASK;
func = &range->funcs[index];
-
- /* Return if the index has no valid item on this platform */
if (!func->nr)
- return 0;
+ return;
/* subleaf */
- sub = strtoul(tokens[1], NULL, 0);
- if ((int)sub > func->nr)
- return -1;
+ buf = tokens[1];
+ end = strtok(buf, ":");
+ start = strtok(NULL, ":");
+ subleaf_end = strtoul(end, NULL, 0);
- leaf = &func->leafs[sub];
- buf = tokens[2];
+ /* A subleaf range is given? */
+ if (start) {
+ subleaf_start = strtoul(start, NULL, 0);
+ subleaf_end = min(subleaf_end, (u32)(func->nr - 1));
+ if (subleaf_start > subleaf_end)
+ return;
+ } else {
+ subleaf_start = subleaf_end;
+ if (subleaf_start > (u32)(func->nr - 1))
+ return;
+ }
+ /* register */
+ buf = tokens[2];
if (strcasestr(buf, "EAX"))
reg_index = R_EAX;
else if (strcasestr(buf, "EBX"))
@@ -369,29 +401,28 @@ static int parse_line(char *line)
else
goto err_exit;
- reg = &leaf->info[reg_index];
- bdesc = &reg->descs[reg->nr++];
-
/* bit flag or bits field */
buf = tokens[3];
-
end = strtok(buf, ":");
- bdesc->end = strtoul(end, NULL, 0);
- bdesc->start = bdesc->end;
-
- /* start != NULL means it is bit fields */
start = strtok(NULL, ":");
- if (start)
- bdesc->start = strtoul(start, NULL, 0);
-
- strcpy(bdesc->simp, tokens[4]);
- strcpy(bdesc->detail, tokens[5]);
- return 0;
+ bit_end = strtoul(end, NULL, 0);
+ bit_start = (start) ? strtoul(start, NULL, 0) : bit_end;
+
+ for (u32 sub = subleaf_start; sub <= subleaf_end; sub++) {
+ leaf = &func->leafs[sub];
+ reg = &leaf->info[reg_index];
+ bdesc = &reg->descs[reg->nr++];
+
+ bdesc->end = bit_end;
+ bdesc->start = bit_start;
+ strcpy(bdesc->simp, strtok(tokens[4], " \t"));
+ strcpy(bdesc->detail, tokens[5]);
+ }
+ return;
err_exit:
- printf("Warning: wrong line format:\n");
- printf("\tline[%d]: %s\n", flines, line);
- return -1;
+ warnx("Wrong line format:\n"
+ "\tline[%d]: %s", flines, line);
}
/* Parse csv file, and construct the array of all leafs and subleafs */
@@ -412,10 +443,8 @@ static void parse_text(void)
file = fopen("./cpuid.csv", "r");
}
- if (!file) {
- printf("Fail to open '%s'\n", filename);
- return;
- }
+ if (!file)
+ err(EXIT_FAILURE, "%s", filename);
while (1) {
ret = getline(&line, &len, file);
@@ -430,21 +459,13 @@ static void parse_text(void)
fclose(file);
}
-
-/* Decode every eax/ebx/ecx/edx */
-static void decode_bits(u32 value, struct reg_desc *rdesc, enum cpuid_reg reg)
+static void show_reg(const struct reg_desc *rdesc, u32 value)
{
- struct bits_desc *bdesc;
- int start, end, i;
+ const struct bits_desc *bdesc;
+ int start, end;
u32 mask;
- if (!rdesc->nr) {
- if (show_details)
- printf("\t %s: 0x%08x\n", reg_names[reg], value);
- return;
- }
-
- for (i = 0; i < rdesc->nr; i++) {
+ for (int i = 0; i < rdesc->nr; i++) {
bdesc = &rdesc->descs[i];
start = bdesc->start;
@@ -452,8 +473,9 @@ static void decode_bits(u32 value, struct reg_desc *rdesc, enum cpuid_reg reg)
if (start == end) {
/* single bit flag */
if (value & (1 << start))
- printf("\t%-20s %s%s\n",
+ printf("\t%-20s %s%s%s\n",
bdesc->simp,
+ show_flags_only ? "" : "\t\t\t",
show_details ? "-" : "",
show_details ? bdesc->detail : ""
);
@@ -473,23 +495,21 @@ static void decode_bits(u32 value, struct reg_desc *rdesc, enum cpuid_reg reg)
}
}
-static void show_leaf(struct subleaf *leaf)
+static void show_reg_header(bool has_entries, u32 leaf, u32 subleaf, const char *reg_name)
{
- if (!leaf)
- return;
+ if (show_details && has_entries)
+ printf("CPUID_0x%x_%s[0x%x]:\n", leaf, reg_name, subleaf);
+}
- if (show_raw) {
+static void show_leaf(struct subleaf *leaf)
+{
+ if (show_raw)
leaf_print_raw(leaf);
- } else {
- if (show_details)
- printf("CPUID_0x%x_ECX[0x%x]:\n",
- leaf->index, leaf->sub);
- }
- decode_bits(leaf->eax, &leaf->info[R_EAX], R_EAX);
- decode_bits(leaf->ebx, &leaf->info[R_EBX], R_EBX);
- decode_bits(leaf->ecx, &leaf->info[R_ECX], R_ECX);
- decode_bits(leaf->edx, &leaf->info[R_EDX], R_EDX);
+ for (int i = R_EAX; i < NR_REGS; i++) {
+ show_reg_header((leaf->info[i].nr > 0), leaf->index, leaf->sub, reg_names[i]);
+ show_reg(&leaf->info[i], leaf->output[i]);
+ }
if (!show_raw && show_details)
printf("\n");
@@ -497,46 +517,37 @@ static void show_leaf(struct subleaf *leaf)
static void show_func(struct cpuid_func *func)
{
- int i;
-
- if (!func)
- return;
-
- for (i = 0; i < func->nr; i++)
+ for (int i = 0; i < func->nr; i++)
show_leaf(&func->leafs[i]);
}
static void show_range(struct cpuid_range *range)
{
- int i;
-
- for (i = 0; i < range->nr; i++)
+ for (int i = 0; i < range->nr; i++)
show_func(&range->funcs[i]);
}
static inline struct cpuid_func *index_to_func(u32 index)
{
+ u32 func_idx = index & CPUID_FUNCTION_MASK;
struct cpuid_range *range;
- u32 func_idx;
-
- range = (index & 0x80000000) ? leafs_ext : leafs_basic;
- func_idx = index & 0xffff;
- if ((func_idx + 1) > (u32)range->nr) {
- printf("ERR: invalid input index (0x%x)\n", index);
+ range = index_to_cpuid_range(index);
+ if (!range)
return NULL;
- }
+
return &range->funcs[func_idx];
}
static void show_info(void)
{
+ struct cpuid_range *range;
struct cpuid_func *func;
if (show_raw) {
/* Show all of the raw output of 'cpuid' instr */
- raw_dump_range(leafs_basic);
- raw_dump_range(leafs_ext);
+ for_each_valid_cpuid_range(range)
+ raw_dump_range(range);
return;
}
@@ -544,18 +555,19 @@ static void show_info(void)
/* Only show specific leaf/subleaf info */
func = index_to_func(user_index);
if (!func)
- return;
+ errx(EXIT_FAILURE, "Invalid input leaf (0x%x)", user_index);
/* Dump the raw data also */
show_raw = true;
if (user_sub != 0xFFFFFFFF) {
- if (user_sub + 1 <= (u32)func->nr) {
- show_leaf(&func->leafs[user_sub]);
- return;
+ if (user_sub + 1 > (u32)func->nr) {
+ errx(EXIT_FAILURE, "Leaf 0x%x has no valid subleaf = 0x%x",
+ user_index, user_sub);
}
- printf("ERR: invalid input subleaf (0x%x)\n", user_sub);
+ show_leaf(&func->leafs[user_sub]);
+ return;
}
show_func(func);
@@ -563,38 +575,21 @@ static void show_info(void)
}
printf("CPU features:\n=============\n\n");
- show_range(leafs_basic);
- show_range(leafs_ext);
-}
-
-static void setup_platform_cpuid(void)
-{
- u32 eax, ebx, ecx, edx;
-
- /* Check vendor */
- eax = ebx = ecx = edx = 0;
- cpuid(&eax, &ebx, &ecx, &edx);
-
- /* "htuA" */
- if (ebx == 0x68747541)
- is_amd = true;
-
- /* Setup leafs for the basic and extended range */
- leafs_basic = setup_cpuid_range(0x0);
- leafs_ext = setup_cpuid_range(0x80000000);
+ for_each_valid_cpuid_range(range)
+ show_range(range);
}
-static void usage(void)
+static void __noreturn usage(int exit_code)
{
- printf("kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
- "\t-a|--all Show both bit flags and complex bit fields info\n"
- "\t-b|--bitflags Show boolean flags only\n"
- "\t-d|--detail Show details of the flag/fields (default)\n"
- "\t-f|--flags Specify the cpuid csv file\n"
- "\t-h|--help Show usage info\n"
- "\t-l|--leaf=index Specify the leaf you want to check\n"
- "\t-r|--raw Show raw cpuid data\n"
- "\t-s|--subleaf=sub Specify the subleaf you want to check\n"
+ errx(exit_code, "kcpuid [-abdfhr] [-l leaf] [-s subleaf]\n"
+ "\t-a|--all Show both bit flags and complex bit fields info\n"
+ "\t-b|--bitflags Show boolean flags only\n"
+ "\t-d|--detail Show details of the flag/fields (default)\n"
+ "\t-f|--flags Specify the CPUID CSV file\n"
+ "\t-h|--help Show usage info\n"
+ "\t-l|--leaf=index Specify the leaf you want to check\n"
+ "\t-r|--raw Show raw CPUID data\n"
+ "\t-s|--subleaf=sub Specify the subleaf you want to check"
);
}
@@ -610,7 +605,7 @@ static struct option opts[] = {
{ NULL, 0, NULL, 0 }
};
-static int parse_options(int argc, char *argv[])
+static void parse_options(int argc, char *argv[])
{
int c;
@@ -630,9 +625,7 @@ static int parse_options(int argc, char *argv[])
user_csv = optarg;
break;
case 'h':
- usage();
- exit(1);
- break;
+ usage(EXIT_SUCCESS);
case 'l':
/* main leaf */
user_index = strtoul(optarg, NULL, 0);
@@ -645,11 +638,8 @@ static int parse_options(int argc, char *argv[])
user_sub = strtoul(optarg, NULL, 0);
break;
default:
- printf("%s: Invalid option '%c'\n", argv[0], optopt);
- return -1;
- }
-
- return 0;
+ usage(EXIT_FAILURE);
+ }
}
/*
@@ -662,11 +652,13 @@ static int parse_options(int argc, char *argv[])
*/
int main(int argc, char *argv[])
{
- if (parse_options(argc, argv))
- return -1;
+ struct cpuid_range *range;
+
+ parse_options(argc, argv);
/* Setup the cpuid leafs of current platform */
- setup_platform_cpuid();
+ for_each_cpuid_range(range)
+ setup_cpuid_range(range);
/* Read and parse the 'cpuid.csv' */
parse_text();
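
The parse_line() rework above lets the second CSV column name a subleaf range: the token is split on ':', the first number is taken as the last subleaf and the second as the first, so "3:0" covers subleafs 0 through 3 and a bare "0" covers only subleaf 0. A stripped-down sketch of that interpretation, using a hypothetical helper rather than the tool's own data structures:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/*
	 * Hypothetical helper mirroring parse_line(): split "last:first"
	 * on ':'; a token without ':' names a single subleaf.
	 */
	static void parse_subleaf_token(const char *token,
					unsigned int *first, unsigned int *last)
	{
		char buf[32];
		char *end, *start;

		strncpy(buf, token, sizeof(buf) - 1);
		buf[sizeof(buf) - 1] = '\0';

		end = strtok(buf, ":");
		start = strtok(NULL, ":");

		*last = strtoul(end, NULL, 0);
		*first = start ? strtoul(start, NULL, 0) : *last;
	}

	int main(void)
	{
		unsigned int first, last;

		parse_subleaf_token("3:0", &first, &last);
		printf("subleafs %u..%u\n", first, last);	/* 0..3 */

		parse_subleaf_token("0", &first, &last);
		printf("subleafs %u..%u\n", first, last);	/* 0..0 */
		return 0;
	}
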
diff --git a/tools/arch/x86/lib/inat.c b/tools/arch/x86/lib/inat.c
index dfbcc6405941..ffcb0e27453b 100644
--- a/tools/arch/x86/lib/inat.c
+++ b/tools/arch/x86/lib/inat.c
@@ -81,3 +81,16 @@ insn_attr_t inat_get_avx_attribute(insn_byte_t opcode, insn_byte_t vex_m,
return table[opcode];
}
+insn_attr_t inat_get_xop_attribute(insn_byte_t opcode, insn_byte_t map_select)
+{
+ const insn_attr_t *table;
+
+ if (map_select < X86_XOP_M_MIN || map_select > X86_XOP_M_MAX)
+ return 0;
+ map_select -= X86_XOP_M_MIN;
+ /* At first, this checks the master table */
+ table = inat_xop_tables[map_select];
+ if (!table)
+ return 0;
+ return table[opcode];
+}
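
The new inat_get_xop_attribute() above is consumed by the insn.c opcode fetch below: an XOP prefix (opcode 0x8f with ModRM.reg != 0) now resolves through inat_xop_tables instead of the VEX path. A minimal sketch of driving the decoder for an ordinary REX.W instruction; the include path and building against the tools/arch/x86 copies are assumptions, not part of the patch:

	#include <stdio.h>

	#include "../include/asm/insn.h"	/* path relative to tools/arch/x86/lib/, illustrative */

	/*
	 * Decode a 64-bit "mov rax, rbx" (48 89 d8) and observe that REX.W
	 * widened the operand size to 8 bytes.  An XOP-prefixed sequence
	 * would instead be routed through inat_get_xop_attribute().
	 */
	int main(void)
	{
		const unsigned char buf[] = { 0x48, 0x89, 0xd8 };
		struct insn insn;

		insn_init(&insn, buf, sizeof(buf), /* x86_64 */ 1);
		if (insn_get_modrm(&insn))
			return 1;

		printf("rex bytes: %d, opcode: 0x%02x, operand size: %d\n",
		       insn.rex_prefix.nbytes, insn.opcode.bytes[0], insn.opnd_bytes);
		return 0;
	}
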
diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c
index 8fd63a067308..1d1c57c74d1f 100644
--- a/tools/arch/x86/lib/insn.c
+++ b/tools/arch/x86/lib/insn.c
@@ -13,7 +13,7 @@
#endif
#include "../include/asm/inat.h" /* __ignore_sync_check__ */
#include "../include/asm/insn.h" /* __ignore_sync_check__ */
-#include "../include/asm-generic/unaligned.h" /* __ignore_sync_check__ */
+#include <linux/unaligned.h> /* __ignore_sync_check__ */
#include <linux/errno.h>
#include <linux/kconfig.h>
@@ -71,7 +71,7 @@ void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64)
insn->kaddr = kaddr;
insn->end_kaddr = kaddr + buf_len;
insn->next_byte = kaddr;
- insn->x86_64 = x86_64 ? 1 : 0;
+ insn->x86_64 = x86_64;
insn->opnd_bytes = 4;
if (x86_64)
insn->addr_bytes = 8;
@@ -185,16 +185,30 @@ found:
if (X86_REX_W(b))
/* REX.W overrides opnd_size */
insn->opnd_bytes = 8;
+ } else if (inat_is_rex2_prefix(attr)) {
+ insn_set_byte(&insn->rex_prefix, 0, b);
+ b = peek_nbyte_next(insn_byte_t, insn, 1);
+ insn_set_byte(&insn->rex_prefix, 1, b);
+ insn->rex_prefix.nbytes = 2;
+ insn->next_byte += 2;
+ if (X86_REX_W(b))
+ /* REX.W overrides opnd_size */
+ insn->opnd_bytes = 8;
+ insn->rex_prefix.got = 1;
+ goto vex_end;
}
}
insn->rex_prefix.got = 1;
- /* Decode VEX prefix */
+ /* Decode VEX/XOP prefix */
b = peek_next(insn_byte_t, insn);
- attr = inat_get_opcode_attribute(b);
- if (inat_is_vex_prefix(attr)) {
+ if (inat_is_vex_prefix(attr) || inat_is_xop_prefix(attr)) {
insn_byte_t b2 = peek_nbyte_next(insn_byte_t, insn, 1);
- if (!insn->x86_64) {
+
+ if (inat_is_xop_prefix(attr) && X86_MODRM_REG(b2) == 0) {
+ /* Grp1A.0 is always POP Ev */
+ goto vex_end;
+ } else if (!insn->x86_64) {
/*
* In 32-bits mode, if the [7:6] bits (mod bits of
* ModRM) on the second byte are not 11b, it is
@@ -215,13 +229,13 @@ found:
if (insn->x86_64 && X86_VEX_W(b2))
/* VEX.W overrides opnd_size */
insn->opnd_bytes = 8;
- } else if (inat_is_vex3_prefix(attr)) {
+ } else if (inat_is_vex3_prefix(attr) || inat_is_xop_prefix(attr)) {
b2 = peek_nbyte_next(insn_byte_t, insn, 2);
insn_set_byte(&insn->vex_prefix, 2, b2);
insn->vex_prefix.nbytes = 3;
insn->next_byte += 3;
if (insn->x86_64 && X86_VEX_W(b2))
- /* VEX.W overrides opnd_size */
+ /* VEX.W/XOP.W overrides opnd_size */
insn->opnd_bytes = 8;
} else {
/*
@@ -268,23 +282,38 @@ int insn_get_opcode(struct insn *insn)
if (opcode->got)
return 0;
- if (!insn->prefixes.got) {
- ret = insn_get_prefixes(insn);
- if (ret)
- return ret;
- }
+ ret = insn_get_prefixes(insn);
+ if (ret)
+ return ret;
/* Get first opcode */
op = get_next(insn_byte_t, insn);
insn_set_byte(opcode, 0, op);
opcode->nbytes = 1;
- /* Check if there is VEX prefix or not */
- if (insn_is_avx(insn)) {
+ /* Check if there is VEX/XOP prefix or not */
+ if (insn_is_avx_or_xop(insn)) {
insn_byte_t m, p;
+
+ /* XOP prefix has different encoding */
+ if (unlikely(avx_insn_is_xop(insn))) {
+ m = insn_xop_map_bits(insn);
+ insn->attr = inat_get_xop_attribute(op, m);
+ if (!inat_accept_xop(insn->attr)) {
+ insn->attr = 0;
+ return -EINVAL;
+ }
+ /* XOP has only 1 byte for opcode */
+ goto end;
+ }
+
m = insn_vex_m_bits(insn);
p = insn_vex_p_bits(insn);
insn->attr = inat_get_avx_attribute(op, m, p);
+ /* SCALABLE EVEX uses p bits to encode operand size */
+ if (inat_evex_scalable(insn->attr) && !insn_vex_w_bit(insn) &&
+ p == INAT_PFX_OPNDSZ)
+ insn->opnd_bytes = 2;
if ((inat_must_evex(insn->attr) && !insn_is_evex(insn)) ||
(!inat_accept_vex(insn->attr) &&
!inat_is_group(insn->attr))) {
@@ -296,7 +325,26 @@ int insn_get_opcode(struct insn *insn)
goto end;
}
+ /* Check if there is REX2 prefix or not */
+ if (insn_is_rex2(insn)) {
+ if (insn_rex2_m_bit(insn)) {
+ /* map 1 is escape 0x0f */
+ insn_attr_t esc_attr = inat_get_opcode_attribute(0x0f);
+
+ pfx_id = insn_last_prefix_id(insn);
+ insn->attr = inat_get_escape_attribute(op, pfx_id, esc_attr);
+ } else {
+ insn->attr = inat_get_opcode_attribute(op);
+ }
+ goto end;
+ }
+
insn->attr = inat_get_opcode_attribute(op);
+ if (insn->x86_64 && inat_is_invalid64(insn->attr)) {
+ /* This instruction is invalid, like UD2. Stop decoding. */
+ insn->attr &= INAT_INV64;
+ }
+
while (inat_is_escape(insn->attr)) {
/* Get escaped opcode */
op = get_next(insn_byte_t, insn);
@@ -310,6 +358,7 @@ int insn_get_opcode(struct insn *insn)
insn->attr = 0;
return -EINVAL;
}
+
end:
opcode->got = 1;
return 0;
@@ -339,11 +388,9 @@ int insn_get_modrm(struct insn *insn)
if (modrm->got)
return 0;
- if (!insn->opcode.got) {
- ret = insn_get_opcode(insn);
- if (ret)
- return ret;
- }
+ ret = insn_get_opcode(insn);
+ if (ret)
+ return ret;
if (inat_has_modrm(insn->attr)) {
mod = get_next(insn_byte_t, insn);
@@ -352,7 +399,8 @@ int insn_get_modrm(struct insn *insn)
pfx_id = insn_last_prefix_id(insn);
insn->attr = inat_get_group_attribute(mod, pfx_id,
insn->attr);
- if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) {
+ if (insn_is_avx_or_xop(insn) && !inat_accept_vex(insn->attr) &&
+ !inat_accept_xop(insn->attr)) {
/* Bad insn */
insn->attr = 0;
return -EINVAL;
@@ -386,11 +434,9 @@ int insn_rip_relative(struct insn *insn)
if (!insn->x86_64)
return 0;
- if (!modrm->got) {
- ret = insn_get_modrm(insn);
- if (ret)
- return 0;
- }
+ ret = insn_get_modrm(insn);
+ if (ret)
+ return 0;
/*
* For rip-relative instructions, the mod field (top 2 bits)
* is zero and the r/m field (bottom 3 bits) is 0x5.
@@ -417,11 +463,9 @@ int insn_get_sib(struct insn *insn)
if (insn->sib.got)
return 0;
- if (!insn->modrm.got) {
- ret = insn_get_modrm(insn);
- if (ret)
- return ret;
- }
+ ret = insn_get_modrm(insn);
+ if (ret)
+ return ret;
if (insn->modrm.nbytes) {
modrm = insn->modrm.bytes[0];
@@ -460,11 +504,9 @@ int insn_get_displacement(struct insn *insn)
if (insn->displacement.got)
return 0;
- if (!insn->sib.got) {
- ret = insn_get_sib(insn);
- if (ret)
- return ret;
- }
+ ret = insn_get_sib(insn);
+ if (ret)
+ return ret;
if (insn->modrm.nbytes) {
/*
@@ -628,11 +670,9 @@ int insn_get_immediate(struct insn *insn)
if (insn->immediate.got)
return 0;
- if (!insn->displacement.got) {
- ret = insn_get_displacement(insn);
- if (ret)
- return ret;
- }
+ ret = insn_get_displacement(insn);
+ if (ret)
+ return ret;
if (inat_has_moffset(insn->attr)) {
if (!__get_moffset(insn))
@@ -641,7 +681,6 @@ int insn_get_immediate(struct insn *insn)
}
if (!inat_has_immediate(insn->attr))
- /* no immediates */
goto done;
switch (inat_immediate_size(insn->attr)) {
@@ -703,11 +742,9 @@ int insn_get_length(struct insn *insn)
if (insn->length)
return 0;
- if (!insn->immediate.got) {
- ret = insn_get_immediate(insn);
- if (ret)
- return ret;
- }
+ ret = insn_get_immediate(insn);
+ if (ret)
+ return ret;
insn->length = (unsigned char)((unsigned long)insn->next_byte
- (unsigned long)insn->kaddr);
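As a usage sketch only (decode_one() and its error handling are illustrative, not part of this patch), the simplification of the per-stage guards means a single top-level call is enough to drive the whole decoder:

#include "../include/asm/insn.h"

/* Returns the instruction length, or -1 on decode failure. '*avx_or_xop'
 * is set when the opcode came from the VEX/EVEX/XOP opcode space. */
static int decode_one(const unsigned char *buf, int buf_len, int *avx_or_xop)
{
	struct insn insn;

	insn_init(&insn, buf, buf_len, /* x86_64 */ 1);

	/* insn_get_length() now runs every earlier stage itself, so the
	 * old "if (!insn->immediate.got)" style guards are unnecessary. */
	if (insn_get_length(&insn) || !insn_complete(&insn))
		return -1;

	*avx_or_xop = insn_is_avx_or_xop(&insn);
	return insn.length;
}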
diff --git a/tools/arch/x86/lib/memcpy_64.S b/tools/arch/x86/lib/memcpy_64.S
index 59cf6f9065aa..ccc3d923fc1e 100644
--- a/tools/arch/x86/lib/memcpy_64.S
+++ b/tools/arch/x86/lib/memcpy_64.S
@@ -40,6 +40,7 @@ SYM_FUNC_END(__memcpy)
EXPORT_SYMBOL(__memcpy)
SYM_FUNC_ALIAS_MEMFUNC(memcpy, __memcpy)
+SYM_PIC_ALIAS(memcpy)
EXPORT_SYMBOL(memcpy)
SYM_FUNC_START_LOCAL(memcpy_orig)
diff --git a/tools/arch/x86/lib/memset_64.S b/tools/arch/x86/lib/memset_64.S
index 0199d56cb479..fb5a03cf5ab7 100644
--- a/tools/arch/x86/lib/memset_64.S
+++ b/tools/arch/x86/lib/memset_64.S
@@ -3,6 +3,7 @@
#include <linux/export.h>
#include <linux/linkage.h>
+#include <linux/cfi_types.h>
#include <asm/cpufeatures.h>
#include <asm/alternative.h>
@@ -28,7 +29,7 @@
* only for the return value that is the same as the source input,
* which the compiler could/should do much better anyway.
*/
-SYM_FUNC_START(__memset)
+SYM_TYPED_FUNC_START(__memset)
ALTERNATIVE "jmp memset_orig", "", X86_FEATURE_FSRS
movq %rdi,%r9
@@ -41,6 +42,7 @@ SYM_FUNC_END(__memset)
EXPORT_SYMBOL(__memset)
SYM_FUNC_ALIAS_MEMFUNC(memset, __memset)
+SYM_PIC_ALIAS(memset)
EXPORT_SYMBOL(memset)
SYM_FUNC_START_LOCAL(memset_orig)
diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
index 5168ee0360b2..2a4e69ecc2de 100644
--- a/tools/arch/x86/lib/x86-opcode-map.txt
+++ b/tools/arch/x86/lib/x86-opcode-map.txt
@@ -23,9 +23,15 @@
#
# AVX Superscripts
# (ev): this opcode requires EVEX prefix.
+# (es): this opcode requires EVEX prefix and is SCALABLE.
# (evo): this opcode is changed by EVEX prefix (EVEX opcode)
# (v): this opcode requires VEX prefix.
# (v1): this opcode only supports 128bit VEX.
+# (xop): this opcode accepts XOP prefix.
+#
+# XOP Superscripts
+# (W=0): this opcode requires XOP.W == 0
+# (W=1): this opcode requires XOP.W == 1
#
# Last Prefix Superscripts
# - (66): the last prefix is 0x66
@@ -33,6 +39,10 @@
# - (F2): the last prefix is 0xF2
# - (!F3) : the last prefix is not 0xF3 (including non-last prefix case)
# - (66&F2): Both 0x66 and 0xF2 prefixes are specified.
+#
+# REX2 Prefix Superscripts
+# - (!REX2): REX2 is not allowed
+# - (REX2): REX2 variant e.g. JMPABS
Table: one byte opcode
Referrer:
@@ -142,13 +152,13 @@ AVXcode:
# 0x60 - 0x6f
60: PUSHA/PUSHAD (i64)
61: POPA/POPAD (i64)
-62: BOUND Gv,Ma (i64) | EVEX (Prefix)
+62: BOUND Gv,Ma (i64) | EVEX (Prefix),(o64)
63: ARPL Ew,Gw (i64) | MOVSXD Gv,Ev (o64)
64: SEG=FS (Prefix)
65: SEG=GS (Prefix)
66: Operand-Size (Prefix)
67: Address-Size (Prefix)
-68: PUSH Iz (d64)
+68: PUSH Iz
69: IMUL Gv,Ev,Iz
6a: PUSH Ib (d64)
6b: IMUL Gv,Ev,Ib
@@ -157,22 +167,22 @@ AVXcode:
6e: OUTS/OUTSB DX,Xb
6f: OUTS/OUTSW/OUTSD DX,Xz
# 0x70 - 0x7f
-70: JO Jb
-71: JNO Jb
-72: JB/JNAE/JC Jb
-73: JNB/JAE/JNC Jb
-74: JZ/JE Jb
-75: JNZ/JNE Jb
-76: JBE/JNA Jb
-77: JNBE/JA Jb
-78: JS Jb
-79: JNS Jb
-7a: JP/JPE Jb
-7b: JNP/JPO Jb
-7c: JL/JNGE Jb
-7d: JNL/JGE Jb
-7e: JLE/JNG Jb
-7f: JNLE/JG Jb
+70: JO Jb (!REX2)
+71: JNO Jb (!REX2)
+72: JB/JNAE/JC Jb (!REX2)
+73: JNB/JAE/JNC Jb (!REX2)
+74: JZ/JE Jb (!REX2)
+75: JNZ/JNE Jb (!REX2)
+76: JBE/JNA Jb (!REX2)
+77: JNBE/JA Jb (!REX2)
+78: JS Jb (!REX2)
+79: JNS Jb (!REX2)
+7a: JP/JPE Jb (!REX2)
+7b: JNP/JPO Jb (!REX2)
+7c: JL/JNGE Jb (!REX2)
+7d: JNL/JGE Jb (!REX2)
+7e: JLE/JNG Jb (!REX2)
+7f: JNLE/JG Jb (!REX2)
# 0x80 - 0x8f
80: Grp1 Eb,Ib (1A)
81: Grp1 Ev,Iz (1A)
@@ -189,7 +199,7 @@ AVXcode:
8c: MOV Ev,Sw
8d: LEA Gv,M
8e: MOV Sw,Ew
-8f: Grp1A (1A) | POP Ev (d64)
+8f: Grp1A (1A) | POP Ev (d64) | XOP (Prefix)
# 0x90 - 0x9f
90: NOP | PAUSE (F3) | XCHG r8,rAX
91: XCHG rCX/r9,rAX
@@ -208,24 +218,24 @@ AVXcode:
9e: SAHF
9f: LAHF
# 0xa0 - 0xaf
-a0: MOV AL,Ob
-a1: MOV rAX,Ov
-a2: MOV Ob,AL
-a3: MOV Ov,rAX
-a4: MOVS/B Yb,Xb
-a5: MOVS/W/D/Q Yv,Xv
-a6: CMPS/B Xb,Yb
-a7: CMPS/W/D Xv,Yv
-a8: TEST AL,Ib
-a9: TEST rAX,Iz
-aa: STOS/B Yb,AL
-ab: STOS/W/D/Q Yv,rAX
-ac: LODS/B AL,Xb
-ad: LODS/W/D/Q rAX,Xv
-ae: SCAS/B AL,Yb
+a0: MOV AL,Ob (!REX2)
+a1: MOV rAX,Ov (!REX2) | JMPABS O (REX2),(o64)
+a2: MOV Ob,AL (!REX2)
+a3: MOV Ov,rAX (!REX2)
+a4: MOVS/B Yb,Xb (!REX2)
+a5: MOVS/W/D/Q Yv,Xv (!REX2)
+a6: CMPS/B Xb,Yb (!REX2)
+a7: CMPS/W/D Xv,Yv (!REX2)
+a8: TEST AL,Ib (!REX2)
+a9: TEST rAX,Iz (!REX2)
+aa: STOS/B Yb,AL (!REX2)
+ab: STOS/W/D/Q Yv,rAX (!REX2)
+ac: LODS/B AL,Xb (!REX2)
+ad: LODS/W/D/Q rAX,Xv (!REX2)
+ae: SCAS/B AL,Yb (!REX2)
# Note: The May 2011 Intel manual shows Xv for the second parameter of the
# next instruction but Yv is correct
-af: SCAS/W/D/Q rAX,Yv
+af: SCAS/W/D/Q rAX,Yv (!REX2)
# 0xb0 - 0xbf
b0: MOV AL/R8L,Ib
b1: MOV CL/R9L,Ib
@@ -248,8 +258,8 @@ c0: Grp2 Eb,Ib (1A)
c1: Grp2 Ev,Ib (1A)
c2: RETN Iw (f64)
c3: RETN
-c4: LES Gz,Mp (i64) | VEX+2byte (Prefix)
-c5: LDS Gz,Mp (i64) | VEX+1byte (Prefix)
+c4: LES Gz,Mp (i64) | VEX+2byte (Prefix),(o64)
+c5: LDS Gz,Mp (i64) | VEX+1byte (Prefix),(o64)
c6: Grp11A Eb,Ib (1A)
c7: Grp11B Ev,Iz (1A)
c8: ENTER Iw,Ib
@@ -266,7 +276,7 @@ d1: Grp2 Ev,1 (1A)
d2: Grp2 Eb,CL (1A)
d3: Grp2 Ev,CL (1A)
d4: AAM Ib (i64)
-d5: AAD Ib (i64)
+d5: AAD Ib (i64) | REX2 (Prefix),(o64)
d6:
d7: XLAT/XLATB
d8: ESC
@@ -281,26 +291,26 @@ df: ESC
# Note: "forced64" is Intel CPU behavior: they ignore 0x66 prefix
# in 64-bit mode. AMD CPUs accept 0x66 prefix, it causes RIP truncation
# to 16 bits. In 32-bit mode, 0x66 is accepted by both Intel and AMD.
-e0: LOOPNE/LOOPNZ Jb (f64)
-e1: LOOPE/LOOPZ Jb (f64)
-e2: LOOP Jb (f64)
-e3: JrCXZ Jb (f64)
-e4: IN AL,Ib
-e5: IN eAX,Ib
-e6: OUT Ib,AL
-e7: OUT Ib,eAX
+e0: LOOPNE/LOOPNZ Jb (f64),(!REX2)
+e1: LOOPE/LOOPZ Jb (f64),(!REX2)
+e2: LOOP Jb (f64),(!REX2)
+e3: JrCXZ Jb (f64),(!REX2)
+e4: IN AL,Ib (!REX2)
+e5: IN eAX,Ib (!REX2)
+e6: OUT Ib,AL (!REX2)
+e7: OUT Ib,eAX (!REX2)
# With 0x66 prefix in 64-bit mode, for AMD CPUs immediate offset
# in "near" jumps and calls is 16-bit. For CALL,
# push of return address is 16-bit wide, RSP is decremented by 2
# but is not truncated to 16 bits, unlike RIP.
-e8: CALL Jz (f64)
-e9: JMP-near Jz (f64)
-ea: JMP-far Ap (i64)
-eb: JMP-short Jb (f64)
-ec: IN AL,DX
-ed: IN eAX,DX
-ee: OUT DX,AL
-ef: OUT DX,eAX
+e8: CALL Jz (f64),(!REX2)
+e9: JMP-near Jz (f64),(!REX2)
+ea: JMP-far Ap (i64),(!REX2)
+eb: JMP-short Jb (f64),(!REX2)
+ec: IN AL,DX (!REX2)
+ed: IN eAX,DX (!REX2)
+ee: OUT DX,AL (!REX2)
+ef: OUT DX,eAX (!REX2)
# 0xf0 - 0xff
f0: LOCK (Prefix)
f1:
@@ -386,14 +396,14 @@ AVXcode: 1
2e: vucomiss Vss,Wss (v1) | vucomisd Vsd,Wsd (66),(v1)
2f: vcomiss Vss,Wss (v1) | vcomisd Vsd,Wsd (66),(v1)
# 0x0f 0x30-0x3f
-30: WRMSR
-31: RDTSC
-32: RDMSR
-33: RDPMC
-34: SYSENTER
-35: SYSEXIT
+30: WRMSR (!REX2)
+31: RDTSC (!REX2)
+32: RDMSR (!REX2)
+33: RDPMC (!REX2)
+34: SYSENTER (!REX2)
+35: SYSEXIT (!REX2)
36:
-37: GETSEC
+37: GETSEC (!REX2)
38: escape # 3-byte escape 1
39:
3a: escape # 3-byte escape 2
@@ -473,22 +483,22 @@ AVXcode: 1
7f: movq Qq,Pq | vmovdqa Wx,Vx (66) | vmovdqa32/64 Wx,Vx (66),(evo) | vmovdqu Wx,Vx (F3) | vmovdqu32/64 Wx,Vx (F3),(evo) | vmovdqu8/16 Wx,Vx (F2),(ev)
# 0x0f 0x80-0x8f
# Note: "forced64" is Intel CPU behavior (see comment about CALL insn).
-80: JO Jz (f64)
-81: JNO Jz (f64)
-82: JB/JC/JNAE Jz (f64)
-83: JAE/JNB/JNC Jz (f64)
-84: JE/JZ Jz (f64)
-85: JNE/JNZ Jz (f64)
-86: JBE/JNA Jz (f64)
-87: JA/JNBE Jz (f64)
-88: JS Jz (f64)
-89: JNS Jz (f64)
-8a: JP/JPE Jz (f64)
-8b: JNP/JPO Jz (f64)
-8c: JL/JNGE Jz (f64)
-8d: JNL/JGE Jz (f64)
-8e: JLE/JNG Jz (f64)
-8f: JNLE/JG Jz (f64)
+80: JO Jz (f64),(!REX2)
+81: JNO Jz (f64),(!REX2)
+82: JB/JC/JNAE Jz (f64),(!REX2)
+83: JAE/JNB/JNC Jz (f64),(!REX2)
+84: JE/JZ Jz (f64),(!REX2)
+85: JNE/JNZ Jz (f64),(!REX2)
+86: JBE/JNA Jz (f64),(!REX2)
+87: JA/JNBE Jz (f64),(!REX2)
+88: JS Jz (f64),(!REX2)
+89: JNS Jz (f64),(!REX2)
+8a: JP/JPE Jz (f64),(!REX2)
+8b: JNP/JPO Jz (f64),(!REX2)
+8c: JL/JNGE Jz (f64),(!REX2)
+8d: JNL/JGE Jz (f64),(!REX2)
+8e: JLE/JNG Jz (f64),(!REX2)
+8f: JNLE/JG Jz (f64),(!REX2)
# 0x0f 0x90-0x9f
90: SETO Eb | kmovw/q Vk,Wk | kmovb/d Vk,Wk (66)
91: SETNO Eb | kmovw/q Mv,Vk | kmovb/d Mv,Vk (66)
@@ -698,17 +708,17 @@ AVXcode: 2
4d: vrcp14ss/d Vsd,Hpd,Wsd (66),(ev)
4e: vrsqrt14ps/d Vpd,Wpd (66),(ev)
4f: vrsqrt14ss/d Vsd,Hsd,Wsd (66),(ev)
-50: vpdpbusd Vx,Hx,Wx (66),(ev)
-51: vpdpbusds Vx,Hx,Wx (66),(ev)
-52: vdpbf16ps Vx,Hx,Wx (F3),(ev) | vpdpwssd Vx,Hx,Wx (66),(ev) | vp4dpwssd Vdqq,Hdqq,Wdq (F2),(ev)
-53: vpdpwssds Vx,Hx,Wx (66),(ev) | vp4dpwssds Vdqq,Hdqq,Wdq (F2),(ev)
+50: vpdpbusd Vx,Hx,Wx (66) | vpdpbssd Vx,Hx,Wx (F2),(v) | vpdpbsud Vx,Hx,Wx (F3),(v) | vpdpbuud Vx,Hx,Wx (v)
+51: vpdpbusds Vx,Hx,Wx (66) | vpdpbssds Vx,Hx,Wx (F2),(v) | vpdpbsuds Vx,Hx,Wx (F3),(v) | vpdpbuuds Vx,Hx,Wx (v)
+52: vdpbf16ps Vx,Hx,Wx (F3),(ev) | vpdpwssd Vx,Hx,Wx (66) | vp4dpwssd Vdqq,Hdqq,Wdq (F2),(ev)
+53: vpdpwssds Vx,Hx,Wx (66) | vp4dpwssds Vdqq,Hdqq,Wdq (F2),(ev)
54: vpopcntb/w Vx,Wx (66),(ev)
55: vpopcntd/q Vx,Wx (66),(ev)
58: vpbroadcastd Vx,Wx (66),(v)
59: vpbroadcastq Vx,Wx (66),(v) | vbroadcasti32x2 Vx,Wx (66),(evo)
5a: vbroadcasti128 Vqq,Mdq (66),(v) | vbroadcasti32x4/64x2 Vx,Wx (66),(evo)
5b: vbroadcasti32x8/64x4 Vqq,Mdq (66),(ev)
-5c: TDPBF16PS Vt,Wt,Ht (F3),(v1)
+5c: TDPBF16PS Vt,Wt,Ht (F3),(v1) | TDPFP16PS Vt,Wt,Ht (F2),(v1),(o64)
# Skip 0x5d
5e: TDPBSSD Vt,Wt,Ht (F2),(v1) | TDPBSUD Vt,Wt,Ht (F3),(v1) | TDPBUSD Vt,Wt,Ht (66),(v1) | TDPBUUD Vt,Wt,Ht (v1)
# Skip 0x5f-0x61
@@ -718,10 +728,12 @@ AVXcode: 2
65: vblendmps/d Vx,Hx,Wx (66),(ev)
66: vpblendmb/w Vx,Hx,Wx (66),(ev)
68: vp2intersectd/q Kx,Hx,Wx (F2),(ev)
-# Skip 0x69-0x6f
+# Skip 0x69-0x6b
+6c: TCMMIMFP16PS Vt,Wt,Ht (66),(v1),(o64) | TCMMRLFP16PS Vt,Wt,Ht (v1),(o64)
+# Skip 0x6d-0x6f
70: vpshldvw Vx,Hx,Wx (66),(ev)
71: vpshldvd/q Vx,Hx,Wx (66),(ev)
-72: vcvtne2ps2bf16 Vx,Hx,Wx (F2),(ev) | vcvtneps2bf16 Vx,Wx (F3),(ev) | vpshrdvw Vx,Hx,Wx (66),(ev)
+72: vcvtne2ps2bf16 Vx,Hx,Wx (F2),(ev) | vcvtneps2bf16 Vx,Wx (F3) | vpshrdvw Vx,Hx,Wx (66),(ev)
73: vpshrdvd/q Vx,Hx,Wx (66),(ev)
75: vpermi2b/w Vx,Hx,Wx (66),(ev)
76: vpermi2d/q Vx,Hx,Wx (66),(ev)
@@ -777,8 +789,10 @@ ac: vfnmadd213ps/d Vx,Hx,Wx (66),(v)
ad: vfnmadd213ss/d Vx,Hx,Wx (66),(v),(v1)
ae: vfnmsub213ps/d Vx,Hx,Wx (66),(v)
af: vfnmsub213ss/d Vx,Hx,Wx (66),(v),(v1)
-b4: vpmadd52luq Vx,Hx,Wx (66),(ev)
-b5: vpmadd52huq Vx,Hx,Wx (66),(ev)
+b0: vcvtneebf162ps Vx,Mx (F3),(!11B),(v) | vcvtneeph2ps Vx,Mx (66),(!11B),(v) | vcvtneobf162ps Vx,Mx (F2),(!11B),(v) | vcvtneoph2ps Vx,Mx (!11B),(v)
+b1: vbcstnebf162ps Vx,Mw (F3),(!11B),(v) | vbcstnesh2ps Vx,Mw (66),(!11B),(v)
+b4: vpmadd52luq Vx,Hx,Wx (66)
+b5: vpmadd52huq Vx,Hx,Wx (66)
b6: vfmaddsub231ps/d Vx,Hx,Wx (66),(v)
b7: vfmsubadd231ps/d Vx,Hx,Wx (66),(v)
b8: vfmadd231ps/d Vx,Hx,Wx (66),(v)
@@ -796,15 +810,35 @@ c7: Grp19 (1A)
c8: sha1nexte Vdq,Wdq | vexp2ps/d Vx,Wx (66),(ev)
c9: sha1msg1 Vdq,Wdq
ca: sha1msg2 Vdq,Wdq | vrcp28ps/d Vx,Wx (66),(ev)
-cb: sha256rnds2 Vdq,Wdq | vrcp28ss/d Vx,Hx,Wx (66),(ev)
-cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev)
-cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev)
+cb: sha256rnds2 Vdq,Wdq | vrcp28ss/d Vx,Hx,Wx (66),(ev) | vsha512rnds2 Vqq,Hqq,Udq (F2),(11B),(v)
+cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev) | vsha512msg1 Vqq,Udq (F2),(11B),(v)
+cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev) | vsha512msg2 Vqq,Uqq (F2),(11B),(v)
cf: vgf2p8mulb Vx,Wx (66)
+d2: vpdpwsud Vx,Hx,Wx (F3),(v) | vpdpwusd Vx,Hx,Wx (66),(v) | vpdpwuud Vx,Hx,Wx (v)
+d3: vpdpwsuds Vx,Hx,Wx (F3),(v) | vpdpwusds Vx,Hx,Wx (66),(v) | vpdpwuuds Vx,Hx,Wx (v)
+d8: AESENCWIDE128KL Qpi (F3),(000),(00B) | AESENCWIDE256KL Qpi (F3),(000),(10B) | AESDECWIDE128KL Qpi (F3),(000),(01B) | AESDECWIDE256KL Qpi (F3),(000),(11B)
+da: vsm3msg1 Vdq,Hdq,Udq (v1) | vsm3msg2 Vdq,Hdq,Udq (66),(v1) | vsm4key4 Vx,Hx,Wx (F3),(v) | vsm4rnds4 Vx,Hx,Wx (F2),(v)
db: VAESIMC Vdq,Wdq (66),(v1)
-dc: vaesenc Vx,Hx,Wx (66)
-dd: vaesenclast Vx,Hx,Wx (66)
-de: vaesdec Vx,Hx,Wx (66)
-df: vaesdeclast Vx,Hx,Wx (66)
+dc: vaesenc Vx,Hx,Wx (66) | LOADIWKEY Vx,Hx (F3) | AESENC128KL Vpd,Qpi (F3)
+dd: vaesenclast Vx,Hx,Wx (66) | AESDEC128KL Vpd,Qpi (F3)
+de: vaesdec Vx,Hx,Wx (66) | AESENC256KL Vpd,Qpi (F3)
+df: vaesdeclast Vx,Hx,Wx (66) | AESDEC256KL Vpd,Qpi (F3)
+e0: CMPOXADD My,Gy,By (66),(v1),(o64)
+e1: CMPNOXADD My,Gy,By (66),(v1),(o64)
+e2: CMPBXADD My,Gy,By (66),(v1),(o64)
+e3: CMPNBXADD My,Gy,By (66),(v1),(o64)
+e4: CMPZXADD My,Gy,By (66),(v1),(o64)
+e5: CMPNZXADD My,Gy,By (66),(v1),(o64)
+e6: CMPBEXADD My,Gy,By (66),(v1),(o64)
+e7: CMPNBEXADD My,Gy,By (66),(v1),(o64)
+e8: CMPSXADD My,Gy,By (66),(v1),(o64)
+e9: CMPNSXADD My,Gy,By (66),(v1),(o64)
+ea: CMPPXADD My,Gy,By (66),(v1),(o64)
+eb: CMPNPXADD My,Gy,By (66),(v1),(o64)
+ec: CMPLXADD My,Gy,By (66),(v1),(o64)
+ed: CMPNLXADD My,Gy,By (66),(v1),(o64)
+ee: CMPLEXADD My,Gy,By (66),(v1),(o64)
+ef: CMPNLEXADD My,Gy,By (66),(v1),(o64)
f0: MOVBE Gy,My | MOVBE Gw,Mw (66) | CRC32 Gd,Eb (F2) | CRC32 Gd,Eb (66&F2)
f1: MOVBE My,Gy | MOVBE Mw,Gw (66) | CRC32 Gd,Ey (F2) | CRC32 Gd,Ew (66&F2)
f2: ANDN Gy,By,Ey (v)
@@ -812,8 +846,11 @@ f3: Grp17 (1A)
f5: BZHI Gy,Ey,By (v) | PEXT Gy,By,Ey (F3),(v) | PDEP Gy,By,Ey (F2),(v) | WRUSSD/Q My,Gy (66)
f6: ADCX Gy,Ey (66) | ADOX Gy,Ey (F3) | MULX By,Gy,rDX,Ey (F2),(v) | WRSSD/Q My,Gy
f7: BEXTR Gy,Ey,By (v) | SHLX Gy,Ey,By (66),(v) | SARX Gy,Ey,By (F3),(v) | SHRX Gy,Ey,By (F2),(v)
-f8: MOVDIR64B Gv,Mdqq (66) | ENQCMD Gv,Mdqq (F2) | ENQCMDS Gv,Mdqq (F3)
+f8: MOVDIR64B Gv,Mdqq (66) | ENQCMD Gv,Mdqq (F2) | ENQCMDS Gv,Mdqq (F3) | URDMSR Rq,Gq (F2),(11B) | UWRMSR Gq,Rq (F3),(11B)
f9: MOVDIRI My,Gy
+fa: ENCODEKEY128 Ew,Ew (F3)
+fb: ENCODEKEY256 Ew,Ew (F3)
+fc: AADD My,Gy | AAND My,Gy (66) | AOR My,Gy (F2) | AXOR My,Gy (F3)
EndTable
Table: 3-byte opcode 2 (0x0f 0x3a)
@@ -893,10 +930,103 @@ c2: vcmpph Vx,Hx,Wx,Ib (ev) | vcmpsh Vx,Hx,Wx,Ib (F3),(ev)
cc: sha1rnds4 Vdq,Wdq,Ib
ce: vgf2p8affineqb Vx,Wx,Ib (66)
cf: vgf2p8affineinvqb Vx,Wx,Ib (66)
+de: vsm3rnds2 Vdq,Hdq,Wdq,Ib (66),(v1)
df: VAESKEYGEN Vdq,Wdq,Ib (66),(v1)
f0: RORX Gy,Ey,Ib (F2),(v) | HRESET Gv,Ib (F3),(000),(11B)
EndTable
+Table: EVEX map 4
+Referrer:
+AVXcode: 4
+00: ADD Eb,Gb (ev)
+01: ADD Ev,Gv (es) | ADD Ev,Gv (66),(es)
+02: ADD Gb,Eb (ev)
+03: ADD Gv,Ev (es) | ADD Gv,Ev (66),(es)
+08: OR Eb,Gb (ev)
+09: OR Ev,Gv (es) | OR Ev,Gv (66),(es)
+0a: OR Gb,Eb (ev)
+0b: OR Gv,Ev (es) | OR Gv,Ev (66),(es)
+10: ADC Eb,Gb (ev)
+11: ADC Ev,Gv (es) | ADC Ev,Gv (66),(es)
+12: ADC Gb,Eb (ev)
+13: ADC Gv,Ev (es) | ADC Gv,Ev (66),(es)
+18: SBB Eb,Gb (ev)
+19: SBB Ev,Gv (es) | SBB Ev,Gv (66),(es)
+1a: SBB Gb,Eb (ev)
+1b: SBB Gv,Ev (es) | SBB Gv,Ev (66),(es)
+20: AND Eb,Gb (ev)
+21: AND Ev,Gv (es) | AND Ev,Gv (66),(es)
+22: AND Gb,Eb (ev)
+23: AND Gv,Ev (es) | AND Gv,Ev (66),(es)
+24: SHLD Ev,Gv,Ib (es) | SHLD Ev,Gv,Ib (66),(es)
+28: SUB Eb,Gb (ev)
+29: SUB Ev,Gv (es) | SUB Ev,Gv (66),(es)
+2a: SUB Gb,Eb (ev)
+2b: SUB Gv,Ev (es) | SUB Gv,Ev (66),(es)
+2c: SHRD Ev,Gv,Ib (es) | SHRD Ev,Gv,Ib (66),(es)
+30: XOR Eb,Gb (ev)
+31: XOR Ev,Gv (es) | XOR Ev,Gv (66),(es)
+32: XOR Gb,Eb (ev)
+33: XOR Gv,Ev (es) | XOR Gv,Ev (66),(es)
+# CCMPSCC instructions are: CCOMB, CCOMBE, CCOMF, CCOML, CCOMLE, CCOMNB, CCOMNBE, CCOMNL, CCOMNLE,
+# CCOMNO, CCOMNS, CCOMNZ, CCOMO, CCOMS, CCOMT, CCOMZ
+38: CCMPSCC Eb,Gb (ev)
+39: CCMPSCC Ev,Gv (es) | CCMPSCC Ev,Gv (66),(es)
+3a: CCMPSCC Gv,Ev (ev)
+3b: CCMPSCC Gv,Ev (es) | CCMPSCC Gv,Ev (66),(es)
+40: CMOVO Gv,Ev (es) | CMOVO Gv,Ev (66),(es) | CFCMOVO Ev,Ev (es) | CFCMOVO Ev,Ev (66),(es) | SETO Eb (F2),(ev)
+41: CMOVNO Gv,Ev (es) | CMOVNO Gv,Ev (66),(es) | CFCMOVNO Ev,Ev (es) | CFCMOVNO Ev,Ev (66),(es) | SETNO Eb (F2),(ev)
+42: CMOVB Gv,Ev (es) | CMOVB Gv,Ev (66),(es) | CFCMOVB Ev,Ev (es) | CFCMOVB Ev,Ev (66),(es) | SETB Eb (F2),(ev)
+43: CMOVNB Gv,Ev (es) | CMOVNB Gv,Ev (66),(es) | CFCMOVNB Ev,Ev (es) | CFCMOVNB Ev,Ev (66),(es) | SETNB Eb (F2),(ev)
+44: CMOVZ Gv,Ev (es) | CMOVZ Gv,Ev (66),(es) | CFCMOVZ Ev,Ev (es) | CFCMOVZ Ev,Ev (66),(es) | SETZ Eb (F2),(ev)
+45: CMOVNZ Gv,Ev (es) | CMOVNZ Gv,Ev (66),(es) | CFCMOVNZ Ev,Ev (es) | CFCMOVNZ Ev,Ev (66),(es) | SETNZ Eb (F2),(ev)
+46: CMOVBE Gv,Ev (es) | CMOVBE Gv,Ev (66),(es) | CFCMOVBE Ev,Ev (es) | CFCMOVBE Ev,Ev (66),(es) | SETBE Eb (F2),(ev)
+47: CMOVNBE Gv,Ev (es) | CMOVNBE Gv,Ev (66),(es) | CFCMOVNBE Ev,Ev (es) | CFCMOVNBE Ev,Ev (66),(es) | SETNBE Eb (F2),(ev)
+48: CMOVS Gv,Ev (es) | CMOVS Gv,Ev (66),(es) | CFCMOVS Ev,Ev (es) | CFCMOVS Ev,Ev (66),(es) | SETS Eb (F2),(ev)
+49: CMOVNS Gv,Ev (es) | CMOVNS Gv,Ev (66),(es) | CFCMOVNS Ev,Ev (es) | CFCMOVNS Ev,Ev (66),(es) | SETNS Eb (F2),(ev)
+4a: CMOVP Gv,Ev (es) | CMOVP Gv,Ev (66),(es) | CFCMOVP Ev,Ev (es) | CFCMOVP Ev,Ev (66),(es) | SETP Eb (F2),(ev)
+4b: CMOVNP Gv,Ev (es) | CMOVNP Gv,Ev (66),(es) | CFCMOVNP Ev,Ev (es) | CFCMOVNP Ev,Ev (66),(es) | SETNP Eb (F2),(ev)
+4c: CMOVL Gv,Ev (es) | CMOVL Gv,Ev (66),(es) | CFCMOVL Ev,Ev (es) | CFCMOVL Ev,Ev (66),(es) | SETL Eb (F2),(ev)
+4d: CMOVNL Gv,Ev (es) | CMOVNL Gv,Ev (66),(es) | CFCMOVNL Ev,Ev (es) | CFCMOVNL Ev,Ev (66),(es) | SETNL Eb (F2),(ev)
+4e: CMOVLE Gv,Ev (es) | CMOVLE Gv,Ev (66),(es) | CFCMOVLE Ev,Ev (es) | CFCMOVLE Ev,Ev (66),(es) | SETLE Eb (F2),(ev)
+4f: CMOVNLE Gv,Ev (es) | CMOVNLE Gv,Ev (66),(es) | CFCMOVNLE Ev,Ev (es) | CFCMOVNLE Ev,Ev (66),(es) | SETNLE Eb (F2),(ev)
+60: MOVBE Gv,Ev (es) | MOVBE Gv,Ev (66),(es)
+61: MOVBE Ev,Gv (es) | MOVBE Ev,Gv (66),(es)
+65: WRUSSD Md,Gd (66),(ev) | WRUSSQ Mq,Gq (66),(ev)
+66: ADCX Gy,Ey (66),(ev) | ADOX Gy,Ey (F3),(ev) | WRSSD Md,Gd (ev) | WRSSQ Mq,Gq (66),(ev)
+69: IMUL Gv,Ev,Iz (es) | IMUL Gv,Ev,Iz (66),(es)
+6b: IMUL Gv,Ev,Ib (es) | IMUL Gv,Ev,Ib (66),(es)
+80: Grp1 Eb,Ib (1A),(ev)
+81: Grp1 Ev,Iz (1A),(es)
+83: Grp1 Ev,Ib (1A),(es)
+# CTESTSCC instructions are: CTESTB, CTESTBE, CTESTF, CTESTL, CTESTLE, CTESTNB, CTESTNBE, CTESTNL,
+# CTESTNLE, CTESTNO, CTESTNS, CTESTNZ, CTESTO, CTESTS, CTESTT, CTESTZ
+84: CTESTSCC Eb,Gb (ev)
+85: CTESTSCC Ev,Gv (es) | CTESTSCC Ev,Gv (66),(es)
+88: POPCNT Gv,Ev (es) | POPCNT Gv,Ev (66),(es)
+8f: POP2 Bq,Rq (000),(11B),(ev)
+a5: SHLD Ev,Gv,CL (es) | SHLD Ev,Gv,CL (66),(es)
+ad: SHRD Ev,Gv,CL (es) | SHRD Ev,Gv,CL (66),(es)
+af: IMUL Gv,Ev (es) | IMUL Gv,Ev (66),(es)
+c0: Grp2 Eb,Ib (1A),(ev)
+c1: Grp2 Ev,Ib (1A),(es)
+d0: Grp2 Eb,1 (1A),(ev)
+d1: Grp2 Ev,1 (1A),(es)
+d2: Grp2 Eb,CL (1A),(ev)
+d3: Grp2 Ev,CL (1A),(es)
+f0: CRC32 Gy,Eb (es) | INVEPT Gq,Mdq (F3),(ev)
+f1: CRC32 Gy,Ey (es) | CRC32 Gy,Ey (66),(es) | INVVPID Gy,Mdq (F3),(ev)
+f2: INVPCID Gy,Mdq (F3),(ev)
+f4: TZCNT Gv,Ev (es) | TZCNT Gv,Ev (66),(es)
+f5: LZCNT Gv,Ev (es) | LZCNT Gv,Ev (66),(es)
+f6: Grp3_1 Eb (1A),(ev)
+f7: Grp3_2 Ev (1A),(es)
+f8: MOVDIR64B Gv,Mdqq (66),(ev) | ENQCMD Gv,Mdqq (F2),(ev) | ENQCMDS Gv,Mdqq (F3),(ev) | URDMSR Rq,Gq (F2),(11B),(ev) | UWRMSR Gq,Rq (F3),(11B),(ev)
+f9: MOVDIRI My,Gy (ev)
+fe: Grp4 (1A),(ev)
+ff: Grp5 (1A),(es) | PUSH2 Bq,Rq (110),(11B),(ev)
+EndTable
+
Table: EVEX map 5
Referrer:
AVXcode: 5
@@ -975,6 +1105,90 @@ d6: vfcmulcph Vx,Hx,Wx (F2),(ev) | vfmulcph Vx,Hx,Wx (F3),(ev)
d7: vfcmulcsh Vx,Hx,Wx (F2),(ev) | vfmulcsh Vx,Hx,Wx (F3),(ev)
EndTable
+Table: VEX map 7
+Referrer:
+AVXcode: 7
+f8: URDMSR Rq,Id (F2),(v1),(11B) | UWRMSR Id,Rq (F3),(v1),(11B)
+EndTable
+
+# From AMD64 Architecture Programmer's Manual Vol3, Appendix A.1.5
+Table: XOP map 8h
+Referrer:
+XOPcode: 0
+85: VPMACSSWW Vo,Ho,Wo,Lo
+86: VPMACSSWD Vo,Ho,Wo,Lo
+87: VPMACSSDQL Vo,Ho,Wo,Lo
+8e: VPMACSSDD Vo,Ho,Wo,Lo
+8f: VPMACSSDQH Vo,Ho,Wo,Lo
+95: VPMACSWW Vo,Ho,Wo,Lo
+96: VPMACSWD Vo,Ho,Wo,Lo
+97: VPMACSDQL Vo,Ho,Wo,Lo
+9e: VPMACSDD Vo,Ho,Wo,Lo
+9f: VPMACSDQH Vo,Ho,Wo,Lo
+a2: VPCMOV Vx,Hx,Wx,Lx (W=0) | VPCMOV Vx,Hx,Lx,Wx (W=1)
+a3: VPPERM Vo,Ho,Wo,Lo (W=0) | VPPERM Vo,Ho,Lo,Wo (W=1)
+a6: VPMADCSSWD Vo,Ho,Wo,Lo
+b6: VPMADCSWD Vo,Ho,Wo,Lo
+c0: VPROTB Vo,Wo,Ib
+c1: VPROTW Vo,Wo,Ib
+c2: VPROTD Vo,Wo,Ib
+c3: VPROTQ Vo,Wo,Ib
+cc: VPCOMccB Vo,Ho,Wo,Ib
+cd: VPCOMccW Vo,Ho,Wo,Ib
+ce: VPCOMccD Vo,Ho,Wo,Ib
+cf: VPCOMccQ Vo,Ho,Wo,Ib
+ec: VPCOMccUB Vo,Ho,Wo,Ib
+ed: VPCOMccUW Vo,Ho,Wo,Ib
+ee: VPCOMccUD Vo,Ho,Wo,Ib
+ef: VPCOMccUQ Vo,Ho,Wo,Ib
+EndTable
+
+Table: XOP map 9h
+Referrer:
+XOPcode: 1
+01: GrpXOP1
+02: GrpXOP2
+12: GrpXOP3
+80: VFRCZPS Vx,Wx
+81: VFRCZPD Vx,Wx
+82: VFRCZSS Vq,Wss
+83: VFRCZSD Vq,Wsd
+90: VPROTB Vo,Wo,Ho (W=0) | VPROTB Vo,Ho,Wo (W=1)
+91: VPROTW Vo,Wo,Ho (W=0) | VPROTW Vo,Ho,Wo (W=1)
+92: VPROTD Vo,Wo,Ho (W=0) | VPROTD Vo,Ho,Wo (W=1)
+93: VPROTQ Vo,Wo,Ho (W=0) | VPROTQ Vo,Ho,Wo (W=1)
+94: VPSHLB Vo,Wo,Ho (W=0) | VPSHLB Vo,Ho,Wo (W=1)
+95: VPSHLW Vo,Wo,Ho (W=0) | VPSHLW Vo,Ho,Wo (W=1)
+96: VPSHLD Vo,Wo,Ho (W=0) | VPSHLD Vo,Ho,Wo (W=1)
+97: VPSHLQ Vo,Wo,Ho (W=0) | VPSHLQ Vo,Ho,Wo (W=1)
+98: VPSHAB Vo,Wo,Ho (W=0) | VPSHAB Vo,Ho,Wo (W=1)
+99: VPSHAW Vo,Wo,Ho (W=0) | VPSHAW Vo,Ho,Wo (W=1)
+9a: VPSHAD Vo,Wo,Ho (W=0) | VPSHAD Vo,Ho,Wo (W=1)
+9b: VPSHAQ Vo,Wo,Ho (W=0) | VPSHAQ Vo,Ho,Wo (W=1)
+c1: VPHADDBW Vo,Wo
+c2: VPHADDBD Vo,Wo
+c3: VPHADDBQ Vo,Wo
+c6: VPHADDWD Vo,Wo
+c7: VPHADDWQ Vo,Wo
+cb: VPHADDDQ Vo,Wo
+d1: VPHADDUBWD Vo,Wo
+d2: VPHADDUBD Vo,Wo
+d3: VPHADDUBQ Vo,Wo
+d6: VPHADDUWD Vo,Wo
+d7: VPHADDUWQ Vo,Wo
+db: VPHADDUDQ Vo,Wo
+e1: VPHSUBBW Vo,Wo
+e2: VPHSUBWD Vo,Wo
+e3: VPHSUBDQ Vo,Wo
+EndTable
+
+Table: XOP map Ah
+Referrer:
+XOPcode: 2
+10: BEXTR Gy,Ey,Id
+12: GrpXOP4
+EndTable
+
GrpTable: Grp1
0: ADD
1: OR
@@ -1051,8 +1265,8 @@ GrpTable: Grp6
EndTable
GrpTable: Grp7
-0: SGDT Ms | VMCALL (001),(11B) | VMLAUNCH (010),(11B) | VMRESUME (011),(11B) | VMXOFF (100),(11B) | PCONFIG (101),(11B) | ENCLV (000),(11B)
-1: SIDT Ms | MONITOR (000),(11B) | MWAIT (001),(11B) | CLAC (010),(11B) | STAC (011),(11B) | ENCLS (111),(11B)
+0: SGDT Ms | VMCALL (001),(11B) | VMLAUNCH (010),(11B) | VMRESUME (011),(11B) | VMXOFF (100),(11B) | PCONFIG (101),(11B) | ENCLV (000),(11B) | WRMSRNS (110),(11B) | RDMSRLIST (F2),(110),(11B) | WRMSRLIST (F3),(110),(11B) | PBNDKB (111),(11B)
+1: SIDT Ms | MONITOR (000),(11B) | MWAIT (001),(11B) | CLAC (010),(11B) | STAC (011),(11B) | ENCLS (111),(11B) | ERETU (F3),(010),(11B) | ERETS (F2),(010),(11B)
2: LGDT Ms | XGETBV (000),(11B) | XSETBV (001),(11B) | VMFUNC (100),(11B) | XEND (101)(11B) | XTEST (110)(11B) | ENCLU (111),(11B)
3: LIDT Ms
4: SMSW Mw/Rv
@@ -1137,6 +1351,8 @@ GrpTable: Grp16
1: prefetch T0
2: prefetch T1
3: prefetch T2
+6: prefetch IT1
+7: prefetch IT0
EndTable
GrpTable: Grp17
@@ -1187,3 +1403,29 @@ GrpTable: GrpRNG
4: xcrypt-cfb
5: xcrypt-ofb
EndTable
+
+# GrpXOP1-4 are shown in AMD APM Vol.3 Appendix A as XOP groups #1-4
+GrpTable: GrpXOP1
+1: BLCFILL By,Ey (xop)
+2: BLSFILL By,Ey (xop)
+3: BLCS By,Ey (xop)
+4: TZMSK By,Ey (xop)
+5: BLCIC By,Ey (xop)
+6: BLSIC By,Ey (xop)
+7: T1MSKC By,Ey (xop)
+EndTable
+
+GrpTable: GrpXOP2
+1: BLCMSK By,Ey (xop)
+6: BLCI By,Ey (xop)
+EndTable
+
+GrpTable: GrpXOP3
+0: LLWPCB Ry (xop)
+1: SLWPCB Ry (xop)
+EndTable
+
+GrpTable: GrpXOP4
+0: LWPINS By,Ed,Id (xop)
+1: LWPVAL By,Ed,Id (xop)
+EndTable
diff --git a/tools/arch/x86/tools/gen-insn-attr-x86.awk b/tools/arch/x86/tools/gen-insn-attr-x86.awk
index af38469afd14..7ea1b75e59b7 100644
--- a/tools/arch/x86/tools/gen-insn-attr-x86.awk
+++ b/tools/arch/x86/tools/gen-insn-attr-x86.awk
@@ -21,6 +21,7 @@ function clear_vars() {
eid = -1 # escape id
gid = -1 # group id
aid = -1 # AVX id
+ xopid = -1 # XOP id
tname = ""
}
@@ -39,9 +40,11 @@ BEGIN {
ggid = 1
geid = 1
gaid = 0
+ gxopid = 0
delete etable
delete gtable
delete atable
+ delete xoptable
opnd_expr = "^[A-Za-z/]"
ext_expr = "^\\("
@@ -61,10 +64,15 @@ BEGIN {
imm_flag["Ob"] = "INAT_MOFFSET"
imm_flag["Ov"] = "INAT_MOFFSET"
imm_flag["Lx"] = "INAT_MAKE_IMM(INAT_IMM_BYTE)"
+ imm_flag["Lo"] = "INAT_MAKE_IMM(INAT_IMM_BYTE)"
modrm_expr = "^([CDEGMNPQRSUVW/][a-z]+|NTA|T[012])"
force64_expr = "\\([df]64\\)"
- rex_expr = "^REX(\\.[XRWB]+)*"
+ invalid64_expr = "\\(i64\\)"
+ only64_expr = "\\(o64\\)"
+ rex_expr = "^((REX(\\.[XRWB]+)+)|(REX$))"
+ rex2_expr = "\\(REX2\\)"
+ no_rex2_expr = "\\(!REX2\\)"
fpu_expr = "^ESC" # TODO
lprefix1_expr = "\\((66|!F3)\\)"
@@ -81,6 +89,10 @@ BEGIN {
vexonly_expr = "\\(v\\)"
# All opcodes with (ev) superscript supports *only* EVEX prefix
evexonly_expr = "\\(ev\\)"
+ # (es) is the same as (ev) but also "SCALABLE" i.e. W and pp determine operand size
+ evex_scalable_expr = "\\(es\\)"
+ # All opcodes in XOP table or with (xop) superscript accept XOP prefix
+ xopok_expr = "\\(xop\\)"
prefix_expr = "\\(Prefix\\)"
prefix_num["Operand-Size"] = "INAT_PFX_OPNDSZ"
@@ -99,6 +111,8 @@ BEGIN {
prefix_num["VEX+1byte"] = "INAT_PFX_VEX2"
prefix_num["VEX+2byte"] = "INAT_PFX_VEX3"
prefix_num["EVEX"] = "INAT_PFX_EVEX"
+ prefix_num["REX2"] = "INAT_PFX_REX2"
+ prefix_num["XOP"] = "INAT_PFX_XOP"
clear_vars()
}
@@ -140,6 +154,7 @@ function array_size(arr, i,c) {
if (NF != 1) {
# AVX/escape opcode table
aid = $2
+ xopid = -1
if (gaid <= aid)
gaid = aid + 1
if (tname == "") # AVX only opcode table
@@ -149,6 +164,20 @@ function array_size(arr, i,c) {
tname = "inat_primary_table"
}
+/^XOPcode:/ {
+ if (NF != 1) {
+ # XOP opcode table
+ xopid = $2
+ aid = -1
+ if (gxopid <= xopid)
+ gxopid = xopid + 1
+ if (tname == "") # XOP only opcode table
+ tname = sprintf("inat_xop_table_%d", $2)
+ }
+ if (xopid == -1 && eid == -1) # primary opcode table
+ tname = "inat_primary_table"
+}
+
/^GrpTable:/ {
print "/* " $0 " */"
if (!($2 in group))
@@ -199,6 +228,8 @@ function print_table(tbl,name,fmt,n)
etable[eid,0] = tname
if (aid >= 0)
atable[aid,0] = tname
+ else if (xopid >= 0)
+ xoptable[xopid] = tname
}
if (array_size(lptable1) != 0) {
print_table(lptable1,tname "_1[INAT_OPCODE_TABLE_SIZE]",
@@ -314,6 +345,15 @@ function convert_operands(count,opnd, i,j,imm,mod)
if (match(ext, force64_expr))
flags = add_flags(flags, "INAT_FORCE64")
+ # check invalid in 64-bit (and no only64)
+ if (match(ext, invalid64_expr) &&
+ !match($0, only64_expr))
+ flags = add_flags(flags, "INAT_INV64")
+
+ # check REX2 not allowed
+ if (match(ext, no_rex2_expr))
+ flags = add_flags(flags, "INAT_NO_REX2")
+
# check REX prefix
if (match(opcode, rex_expr))
flags = add_flags(flags, "INAT_MAKE_PREFIX(INAT_PFX_REX)")
@@ -325,10 +365,14 @@ function convert_operands(count,opnd, i,j,imm,mod)
# check VEX codes
if (match(ext, evexonly_expr))
flags = add_flags(flags, "INAT_VEXOK | INAT_EVEXONLY")
+ else if (match(ext, evex_scalable_expr))
+ flags = add_flags(flags, "INAT_VEXOK | INAT_EVEXONLY | INAT_EVEX_SCALABLE")
else if (match(ext, vexonly_expr))
flags = add_flags(flags, "INAT_VEXOK | INAT_VEXONLY")
else if (match(ext, vexok_expr) || match(opcode, vexok_opcode_expr))
flags = add_flags(flags, "INAT_VEXOK")
+ else if (match(ext, xopok_expr) || xopid >= 0)
+ flags = add_flags(flags, "INAT_XOPOK")
# check prefixes
if (match(ext, prefix_expr)) {
@@ -351,6 +395,8 @@ function convert_operands(count,opnd, i,j,imm,mod)
lptable3[idx] = add_flags(lptable3[idx],flags)
variant = "INAT_VARIANT"
}
+ if (match(ext, rex2_expr))
+ table[idx] = add_flags(table[idx], "INAT_REX2_VARIANT")
if (!match(ext, lprefix_expr)){
table[idx] = add_flags(table[idx],flags)
}
@@ -393,6 +439,14 @@ END {
print " ["i"]["j"] = "atable[i,j]","
print "};\n"
+ print "/* XOP opcode map array */"
+ print "const insn_attr_t * const inat_xop_tables[X86_XOP_M_MAX - X86_XOP_M_MIN + 1]" \
+ " = {"
+ for (i = 0; i < gxopid; i++)
+ if (xoptable[i])
+ print " ["i"] = "xoptable[i]","
+ print "};"
+
print "#else /* !__BOOT_COMPRESSED */\n"
print "/* Escape opcode map array */"
@@ -410,6 +464,10 @@ END {
"[INAT_LSTPFX_MAX + 1];"
print ""
+ print "/* XOP opcode map array */"
+ print "static const insn_attr_t *inat_xop_tables[X86_XOP_M_MAX - X86_XOP_M_MIN + 1];"
+ print ""
+
print "static void inat_init_tables(void)"
print "{"
@@ -435,6 +493,12 @@ END {
if (atable[i,j])
print "\tinat_avx_tables["i"]["j"] = "atable[i,j]";"
+ print ""
+ print "\t/* Print XOP opcode map array */"
+ for (i = 0; i < gxopid; i++)
+ if (xoptable[i])
+ print "\tinat_xop_tables["i"] = "xoptable[i]";"
+
print "}"
print "#endif"
}
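As a sanity check on the generator changes, the fragment below is roughly what the END block above should emit into the generated inat-tables.c for the three XOP maps (table names follow the inat_xop_table_%d pattern used by the script; the exact entries depend on the map content):

/* XOP opcode map array (sketch of generated output, not hand-written code) */
const insn_attr_t * const inat_xop_tables[X86_XOP_M_MAX - X86_XOP_M_MIN + 1] = {
	[0] = inat_xop_table_0,	/* XOP map 8h */
	[1] = inat_xop_table_1,	/* XOP map 9h */
	[2] = inat_xop_table_2,	/* XOP map Ah */
};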
diff --git a/tools/bootconfig/Makefile b/tools/bootconfig/Makefile
index 566c3e0ee561..90eb47c9d8de 100644
--- a/tools/bootconfig/Makefile
+++ b/tools/bootconfig/Makefile
@@ -10,7 +10,7 @@ srctree := $(patsubst %/,%,$(dir $(srctree)))
endif
LIBSRC = $(srctree)/lib/bootconfig.c $(srctree)/include/linux/bootconfig.h
-CFLAGS = -Wall -g -I$(CURDIR)/include
+override CFLAGS += -Wall -g -I$(CURDIR)/include
ALL_TARGETS := bootconfig
ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
@@ -18,7 +18,7 @@ ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
all: $(ALL_PROGRAMS) test
$(OUTPUT)bootconfig: main.c include/linux/bootconfig.h $(LIBSRC)
- $(CC) $(filter %.c,$^) $(CFLAGS) -o $@
+ $(CC) $(filter %.c,$^) $(CFLAGS) $(LDFLAGS) -o $@
test: $(ALL_PROGRAMS) test-bootconfig.sh
./test-bootconfig.sh $(OUTPUT)
diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
index 156b62a163c5..55d59ed507d5 100644
--- a/tools/bootconfig/main.c
+++ b/tools/bootconfig/main.c
@@ -11,11 +11,16 @@
#include <string.h>
#include <errno.h>
#include <endian.h>
+#include <assert.h>
#include <linux/bootconfig.h>
#define pr_err(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)
+/* Bootconfig footer is [size][csum][BOOTCONFIG_MAGIC]. */
+#define BOOTCONFIG_FOOTER_SIZE \
+ (sizeof(uint32_t) * 2 + BOOTCONFIG_MAGIC_LEN)
+
static int xbc_show_value(struct xbc_node *node, bool semicolon)
{
const char *val, *eol;
@@ -185,10 +190,10 @@ static int load_xbc_from_initrd(int fd, char **buf)
if (ret < 0)
return -errno;
- if (stat.st_size < 8 + BOOTCONFIG_MAGIC_LEN)
+ if (stat.st_size < BOOTCONFIG_FOOTER_SIZE)
return 0;
- if (lseek(fd, -BOOTCONFIG_MAGIC_LEN, SEEK_END) < 0)
+ if (lseek(fd, -(off_t)BOOTCONFIG_MAGIC_LEN, SEEK_END) < 0)
return pr_errno("Failed to lseek for magic", -errno);
if (read(fd, magic, BOOTCONFIG_MAGIC_LEN) < 0)
@@ -198,7 +203,7 @@ static int load_xbc_from_initrd(int fd, char **buf)
if (memcmp(magic, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN) != 0)
return 0;
- if (lseek(fd, -(8 + BOOTCONFIG_MAGIC_LEN), SEEK_END) < 0)
+ if (lseek(fd, -(off_t)BOOTCONFIG_FOOTER_SIZE, SEEK_END) < 0)
return pr_errno("Failed to lseek for size", -errno);
if (read(fd, &size, sizeof(uint32_t)) < 0)
@@ -210,12 +215,12 @@ static int load_xbc_from_initrd(int fd, char **buf)
csum = le32toh(csum);
/* Wrong size error */
- if (stat.st_size < size + 8 + BOOTCONFIG_MAGIC_LEN) {
+ if (stat.st_size < size + BOOTCONFIG_FOOTER_SIZE) {
pr_err("bootconfig size is too big\n");
return -E2BIG;
}
- if (lseek(fd, stat.st_size - (size + 8 + BOOTCONFIG_MAGIC_LEN),
+ if (lseek(fd, stat.st_size - (size + BOOTCONFIG_FOOTER_SIZE),
SEEK_SET) < 0)
return pr_errno("Failed to lseek", -errno);
@@ -226,7 +231,7 @@ static int load_xbc_from_initrd(int fd, char **buf)
/* Wrong Checksum */
rcsum = xbc_calc_checksum(*buf, size);
if (csum != rcsum) {
- pr_err("checksum error: %d != %d\n", csum, rcsum);
+ pr_err("checksum error: %u != %u\n", csum, rcsum);
return -EINVAL;
}
@@ -346,7 +351,7 @@ static int delete_xbc(const char *path)
ret = fstat(fd, &stat);
if (!ret)
ret = ftruncate(fd, stat.st_size
- - size - 8 - BOOTCONFIG_MAGIC_LEN);
+ - size - BOOTCONFIG_FOOTER_SIZE);
if (ret)
ret = -errno;
} /* Ignore if there is no boot config in initrd */
@@ -359,7 +364,12 @@ static int delete_xbc(const char *path)
static int apply_xbc(const char *path, const char *xbc_path)
{
- char *buf, *data, *p;
+ struct {
+ uint32_t size;
+ uint32_t csum;
+ char magic[BOOTCONFIG_MAGIC_LEN];
+ } footer;
+ char *buf, *data;
size_t total_size;
struct stat stat;
const char *msg;
@@ -376,8 +386,7 @@ static int apply_xbc(const char *path, const char *xbc_path)
csum = xbc_calc_checksum(buf, size);
/* Backup the bootconfig data */
- data = calloc(size + BOOTCONFIG_ALIGN +
- sizeof(uint32_t) + sizeof(uint32_t) + BOOTCONFIG_MAGIC_LEN, 1);
+ data = calloc(size + BOOTCONFIG_ALIGN + BOOTCONFIG_FOOTER_SIZE, 1);
if (!data)
return -ENOMEM;
memcpy(data, buf, size);
@@ -395,7 +404,7 @@ static int apply_xbc(const char *path, const char *xbc_path)
xbc_get_info(&ret, NULL);
printf("\tNumber of nodes: %d\n", ret);
printf("\tSize: %u bytes\n", (unsigned int)size);
- printf("\tChecksum: %d\n", (unsigned int)csum);
+ printf("\tChecksum: %u\n", (unsigned int)csum);
/* TODO: Check the options by schema */
xbc_exit();
@@ -425,22 +434,18 @@ static int apply_xbc(const char *path, const char *xbc_path)
}
/* To align up the total size to BOOTCONFIG_ALIGN, get padding size */
- total_size = stat.st_size + size + sizeof(uint32_t) * 2 + BOOTCONFIG_MAGIC_LEN;
+ total_size = stat.st_size + size + BOOTCONFIG_FOOTER_SIZE;
pad = ((total_size + BOOTCONFIG_ALIGN - 1) & (~BOOTCONFIG_ALIGN_MASK)) - total_size;
size += pad;
/* Add a footer */
- p = data + size;
- *(uint32_t *)p = htole32(size);
- p += sizeof(uint32_t);
-
- *(uint32_t *)p = htole32(csum);
- p += sizeof(uint32_t);
-
- memcpy(p, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN);
- p += BOOTCONFIG_MAGIC_LEN;
+ footer.size = htole32(size);
+ footer.csum = htole32(csum);
+ memcpy(footer.magic, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN);
+ static_assert(sizeof(footer) == BOOTCONFIG_FOOTER_SIZE);
+ memcpy(data + size, &footer, BOOTCONFIG_FOOTER_SIZE);
- total_size = p - data;
+ total_size = size + BOOTCONFIG_FOOTER_SIZE;
ret = write(fd, data, total_size);
if (ret < total_size) {
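For context, a minimal sketch of reading that footer back from an in-memory initrd image (struct xbc_footer and read_footer() are illustrative names; the layout mirrors the BOOTCONFIG_FOOTER_SIZE definition above):

#include <stdint.h>
#include <string.h>
#include <endian.h>
#include <linux/bootconfig.h>

struct xbc_footer {			/* [size][csum][magic], little-endian */
	uint32_t size;
	uint32_t csum;
	char magic[BOOTCONFIG_MAGIC_LEN];
};

static int read_footer(const char *image, size_t image_size,
			uint32_t *size, uint32_t *csum)
{
	struct xbc_footer f;

	if (image_size < sizeof(f))
		return -1;
	memcpy(&f, image + image_size - sizeof(f), sizeof(f));
	if (memcmp(f.magic, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN) != 0)
		return -1;	/* no bootconfig footer present */
	*size = le32toh(f.size);
	*csum = le32toh(f.csum);
	return 0;
}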
diff --git a/tools/bootconfig/scripts/ftrace.sh b/tools/bootconfig/scripts/ftrace.sh
index 186eed923041..cc5250c64699 100644
--- a/tools/bootconfig/scripts/ftrace.sh
+++ b/tools/bootconfig/scripts/ftrace.sh
@@ -1,3 +1,4 @@
+#!/bin/sh
# SPDX-License-Identifier: GPL-2.0-only
clear_trace() { # reset trace output
diff --git a/tools/bootconfig/test-bootconfig.sh b/tools/bootconfig/test-bootconfig.sh
index a2c484c243f5..7594659af1e1 100755
--- a/tools/bootconfig/test-bootconfig.sh
+++ b/tools/bootconfig/test-bootconfig.sh
@@ -27,16 +27,16 @@ NO=1
xpass() { # pass test command
echo "test case $NO ($*)... "
- if ! ($@ && echo "\t\t[OK]"); then
- echo "\t\t[NG]"; NG=$((NG + 1))
+ if ! ($@ && printf "\t\t[OK]\n"); then
+ printf "\t\t[NG]\n"; NG=$((NG + 1))
fi
NO=$((NO + 1))
}
xfail() { # fail test command
echo "test case $NO ($*)... "
- if ! (! $@ && echo "\t\t[OK]"); then
- echo "\t\t[NG]"; NG=$((NG + 1))
+ if ! (! $@ && printf "\t\t[OK]\n"); then
+ printf "\t\t[NG]\n"; NG=$((NG + 1))
fi
NO=$((NO + 1))
}
@@ -48,13 +48,13 @@ echo "Delete command should success without bootconfig"
xpass $BOOTCONF -d $INITRD
dd if=/dev/zero of=$INITRD bs=4096 count=1
-echo "key = value;" > $TEMPCONF
-bconf_size=$(stat -c %s $TEMPCONF)
-initrd_size=$(stat -c %s $INITRD)
+printf "key = value;" > $TEMPCONF
+bconf_size=$(wc -c < $TEMPCONF)
+initrd_size=$(wc -c < $INITRD)
echo "Apply command test"
xpass $BOOTCONF -a $TEMPCONF $INITRD
-new_size=$(stat -c %s $INITRD)
+new_size=$(wc -c < $INITRD)
echo "Show command test"
xpass $BOOTCONF $INITRD
@@ -69,13 +69,13 @@ echo "Apply command repeat test"
xpass $BOOTCONF -a $TEMPCONF $INITRD
echo "File size check"
-xpass test $new_size -eq $(stat -c %s $INITRD)
+xpass test $new_size -eq $(wc -c < $INITRD)
echo "Delete command check"
xpass $BOOTCONF -d $INITRD
echo "File size check"
-new_size=$(stat -c %s $INITRD)
+new_size=$(wc -c < $INITRD)
xpass test $new_size -eq $initrd_size
echo "No error messge while applying"
@@ -97,19 +97,20 @@ BEGIN {
' > $TEMPCONF
xpass $BOOTCONF -a $TEMPCONF $INITRD
-echo "badnode" >> $TEMPCONF
+printf "badnode\n" >> $TEMPCONF
xfail $BOOTCONF -a $TEMPCONF $INITRD
echo "Max filesize check"
# Max size is 32767 (including terminal byte)
-echo -n "data = \"" > $TEMPCONF
+printf "data = \"" > $TEMPCONF
dd if=/dev/urandom bs=768 count=32 | base64 -w0 >> $TEMPCONF
-echo "\"" >> $TEMPCONF
+printf "\"\n" >> $TEMPCONF
xfail $BOOTCONF -a $TEMPCONF $INITRD
-truncate -s 32764 $TEMPCONF
-echo "\"" >> $TEMPCONF # add 2 bytes + terminal ('\"\n\0')
+dd if=$TEMPCONF of=$OUTFILE bs=1 count=32764
+cp $OUTFILE $TEMPCONF
+printf "\"\n" >> $TEMPCONF # add 2 bytes + terminal ('\"\n\0')
xpass $BOOTCONF -a $TEMPCONF $INITRD
echo "Adding same-key values"
@@ -139,7 +140,7 @@ xfail grep -q "baz" $OUTFILE
xpass grep -q "qux" $OUTFILE
echo "Double/single quotes test"
-echo "key = '\"string\"';" > $TEMPCONF
+printf "key = '\"string\"';" > $TEMPCONF
$BOOTCONF -a $TEMPCONF $INITRD
$BOOTCONF $INITRD > $TEMPCONF
cat $TEMPCONF
@@ -167,8 +168,8 @@ echo > $INITRD
xpass $BOOTCONF -a $TEMPCONF $INITRD
$BOOTCONF $INITRD > $OUTFILE
-xfail grep -q val[[:space:]] $OUTFILE
-xpass grep -q val2[[:space:]] $OUTFILE
+xfail grep -q 'val[[:space:]]' $OUTFILE
+xpass grep -q 'val2[[:space:]]' $OUTFILE
echo "=== expected failure cases ==="
for i in samples/bad-* ; do
diff --git a/tools/bpf/Makefile b/tools/bpf/Makefile
index 243b79f2b451..062bbd6cd048 100644
--- a/tools/bpf/Makefile
+++ b/tools/bpf/Makefile
@@ -27,12 +27,6 @@ srctree := $(patsubst %/,%,$(dir $(CURDIR)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
endif
-ifeq ($(V),1)
- Q =
-else
- Q = @
-endif
-
FEATURE_USER = .bpf
FEATURE_TESTS = libbfd disassembler-four-args disassembler-init-styled
FEATURE_DISPLAY = libbfd
diff --git a/tools/bpf/bpf_jit_disasm.c b/tools/bpf/bpf_jit_disasm.c
index a90a5d110f92..5ab8f80e2834 100644
--- a/tools/bpf/bpf_jit_disasm.c
+++ b/tools/bpf/bpf_jit_disasm.c
@@ -45,6 +45,8 @@ static void get_exec_path(char *tpath, size_t size)
assert(path);
len = readlink(path, tpath, size);
+ if (len < 0)
+ len = 0;
tpath[len] = 0;
free(path);
@@ -210,7 +212,7 @@ static uint8_t *get_last_jit_image(char *haystack, size_t hlen,
return NULL;
}
if (proglen > 1000000) {
- printf("proglen of %d too big, stopping\n", proglen);
+ printf("proglen of %u too big, stopping\n", proglen);
return NULL;
}
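The clamp added above follows the usual readlink(2) pattern; a standalone sketch for reference (read_self_exe() and the /proc/self/exe path are illustrative, the real code builds a /proc/<pid>/exe path):

#include <unistd.h>

/* readlink(2) does not NUL-terminate and returns -1 on error, so clamp
 * the length before using it as an index into the output buffer. */
static void read_self_exe(char *tpath, size_t size)
{
	ssize_t len = readlink("/proc/self/exe", tpath, size - 1);

	if (len < 0)
		len = 0;
	tpath[len] = '\0';
}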
diff --git a/tools/bpf/bpftool/Documentation/Makefile b/tools/bpf/bpftool/Documentation/Makefile
index ac8487dcff1d..bf843f328812 100644
--- a/tools/bpf/bpftool/Documentation/Makefile
+++ b/tools/bpf/bpftool/Documentation/Makefile
@@ -5,12 +5,6 @@ INSTALL ?= install
RM ?= rm -f
RMDIR ?= rmdir --ignore-fail-on-non-empty
-ifeq ($(V),1)
- Q =
-else
- Q = @
-endif
-
prefix ?= /usr/local
mandir ?= $(prefix)/man
man8dir = $(mandir)/man8
@@ -31,9 +25,9 @@ see_also = $(subst " ",, \
"\n" \
"SEE ALSO\n" \
"========\n" \
- "\t**bpf**\ (2),\n" \
- "\t**bpf-helpers**\\ (7)" \
- $(foreach page,$(call list_pages,$(1)),",\n\t**$(page)**\\ (8)") \
+ "**bpf**\ (2),\n" \
+ "**bpf-helpers**\\ (7)" \
+ $(foreach page,$(call list_pages,$(1)),",\n**$(page)**\\ (8)") \
"\n")
$(OUTPUT)%.8: %.rst
diff --git a/tools/bpf/bpftool/Documentation/bpftool-btf.rst b/tools/bpf/bpftool/Documentation/bpftool-btf.rst
index 342716f74ec4..d47dddc2b4ee 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-btf.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-btf.rst
@@ -14,82 +14,83 @@ tool for inspection of BTF data
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **btf** *COMMAND*
+**bpftool** [*OPTIONS*] **btf** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| | { **-B** | **--base-btf** } }
+*OPTIONS* := { |COMMON_OPTIONS| | { **-B** | **--base-btf** } }
- *COMMANDS* := { **dump** | **help** }
+*COMMANDS* := { **dump** | **help** }
BTF COMMANDS
=============
-| **bpftool** **btf** { **show** | **list** } [**id** *BTF_ID*]
-| **bpftool** **btf dump** *BTF_SRC* [**format** *FORMAT*]
-| **bpftool** **btf help**
+| **bpftool** **btf** { **show** | **list** } [**id** *BTF_ID*]
+| **bpftool** **btf dump** *BTF_SRC* [**format** *FORMAT*] [**root_id** *ROOT_ID*]
+| **bpftool** **btf help**
|
-| *BTF_SRC* := { **id** *BTF_ID* | **prog** *PROG* | **map** *MAP* [{**key** | **value** | **kv** | **all**}] | **file** *FILE* }
-| *FORMAT* := { **raw** | **c** }
-| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
-| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* }
+| *BTF_SRC* := { **id** *BTF_ID* | **prog** *PROG* | **map** *MAP* [{**key** | **value** | **kv** | **all**}] | **file** *FILE* }
+| *FORMAT* := { **raw** | **c** [**unsorted**] }
+| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
+| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* | **name** *PROG_NAME* }
DESCRIPTION
===========
- **bpftool btf { show | list }** [**id** *BTF_ID*]
- Show information about loaded BTF objects. If a BTF ID is
- specified, show information only about given BTF object,
- otherwise list all BTF objects currently loaded on the
- system.
+bpftool btf { show | list } [id *BTF_ID*]
+ Show information about loaded BTF objects. If a BTF ID is specified, show
+ information only about given BTF object, otherwise list all BTF objects
+ currently loaded on the system.
- Since Linux 5.8 bpftool is able to discover information about
- processes that hold open file descriptors (FDs) against BTF
- objects. On such kernels bpftool will automatically emit this
- information as well.
+ Since Linux 5.8 bpftool is able to discover information about processes
+ that hold open file descriptors (FDs) against BTF objects. On such kernels
+ bpftool will automatically emit this information as well.
- **bpftool btf dump** *BTF_SRC*
- Dump BTF entries from a given *BTF_SRC*.
+bpftool btf dump *BTF_SRC* [format *FORMAT*] [root_id *ROOT_ID*]
+ Dump BTF entries from a given *BTF_SRC*.
- When **id** is specified, BTF object with that ID will be
- loaded and all its BTF types emitted.
+ When **id** is specified, BTF object with that ID will be loaded and all
+ its BTF types emitted.
- When **map** is provided, it's expected that map has
- associated BTF object with BTF types describing key and
- value. It's possible to select whether to dump only BTF
- type(s) associated with key (**key**), value (**value**),
- both key and value (**kv**), or all BTF types present in
- associated BTF object (**all**). If not specified, **kv**
- is assumed.
+ When **map** is provided, it's expected that map has associated BTF object
+ with BTF types describing key and value. It's possible to select whether to
+ dump only BTF type(s) associated with key (**key**), value (**value**),
+ both key and value (**kv**), or all BTF types present in associated BTF
+ object (**all**). If not specified, **kv** is assumed.
- When **prog** is provided, it's expected that program has
- associated BTF object with BTF types.
+ When **prog** is provided, it's expected that program has associated BTF
+ object with BTF types.
- When specifying *FILE*, an ELF file is expected, containing
- .BTF section with well-defined BTF binary format data,
- typically produced by clang or pahole.
+ When specifying *FILE*, an ELF file is expected, containing .BTF section
+ with well-defined BTF binary format data, typically produced by clang or
+ pahole.
- **format** option can be used to override default (raw)
- output format. Raw (**raw**) or C-syntax (**c**) output
- formats are supported.
+ **format** option can be used to override default (raw) output format. Raw
+ (**raw**) or C-syntax (**c**) output formats are supported. With C-style
+ formatting, the output is sorted by default. Use the **unsorted** option
+ to avoid sorting the output.
- **bpftool btf help**
- Print short help message.
+ **root_id** option can be used to filter a dump to a single type and all
+ its dependent types. It cannot be used with any other types of filtering
+ (such as the "key", "value", or "kv" arguments when dumping BTF for a map).
+ It can be passed multiple times to dump multiple types.
+
+bpftool btf help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
-
- -B, --base-btf *FILE*
- Pass a base BTF object. Base BTF objects are typically used
- with BTF objects for kernel modules. To avoid duplicating
- all kernel symbols required by modules, BTF objects for
- modules are "split", they are built incrementally on top of
- the kernel (vmlinux) BTF object. So the base BTF reference
- should usually point to the kernel BTF.
-
- When the main BTF object to process (for example, the
- module BTF to dump) is passed as a *FILE*, bpftool attempts
- to autodetect the path for the base object, and passing
- this option is optional. When the main BTF object is passed
- through other handles, this option becomes necessary.
+.. include:: common_options.rst
+
+-B, --base-btf *FILE*
+ Pass a base BTF object. Base BTF objects are typically used with BTF
+ objects for kernel modules. To avoid duplicating all kernel symbols
+ required by modules, BTF objects for modules are "split", they are
+ built incrementally on top of the kernel (vmlinux) BTF object. So the
+ base BTF reference should usually point to the kernel BTF.
+
+ When the main BTF object to process (for example, the module BTF to
+ dump) is passed as a *FILE*, bpftool attempts to autodetect the path
+ for the base object, and passing this option is optional. When the main
+ BTF object is passed through other handles, this option becomes
+ necessary.
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
index 2ce900f66d6e..e8185596a759 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
@@ -14,134 +14,125 @@ tool for inspection and simple manipulation of eBPF progs
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **cgroup** *COMMAND*
+**bpftool** [*OPTIONS*] **cgroup** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| | { **-f** | **--bpffs** } }
+*OPTIONS* := { |COMMON_OPTIONS| | { **-f** | **--bpffs** } }
- *COMMANDS* :=
- { **show** | **list** | **tree** | **attach** | **detach** | **help** }
+*COMMANDS* :=
+{ **show** | **list** | **tree** | **attach** | **detach** | **help** }
CGROUP COMMANDS
===============
-| **bpftool** **cgroup** { **show** | **list** } *CGROUP* [**effective**]
-| **bpftool** **cgroup tree** [*CGROUP_ROOT*] [**effective**]
-| **bpftool** **cgroup attach** *CGROUP* *ATTACH_TYPE* *PROG* [*ATTACH_FLAGS*]
-| **bpftool** **cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG*
-| **bpftool** **cgroup help**
+| **bpftool** **cgroup** { **show** | **list** } *CGROUP* [**effective**]
+| **bpftool** **cgroup tree** [*CGROUP_ROOT*] [**effective**]
+| **bpftool** **cgroup attach** *CGROUP* *ATTACH_TYPE* *PROG* [*ATTACH_FLAGS*]
+| **bpftool** **cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG*
+| **bpftool** **cgroup help**
|
-| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* }
-| *ATTACH_TYPE* := { **cgroup_inet_ingress** | **cgroup_inet_egress** |
-| **cgroup_inet_sock_create** | **cgroup_sock_ops** |
-| **cgroup_device** | **cgroup_inet4_bind** | **cgroup_inet6_bind** |
-| **cgroup_inet4_post_bind** | **cgroup_inet6_post_bind** |
-| **cgroup_inet4_connect** | **cgroup_inet6_connect** |
-| **cgroup_unix_connect** | **cgroup_inet4_getpeername** |
-| **cgroup_inet6_getpeername** | **cgroup_unix_getpeername** |
-| **cgroup_inet4_getsockname** | **cgroup_inet6_getsockname** |
-| **cgroup_unix_getsockname** | **cgroup_udp4_sendmsg** |
-| **cgroup_udp6_sendmsg** | **cgroup_unix_sendmsg** |
-| **cgroup_udp4_recvmsg** | **cgroup_udp6_recvmsg** |
-| **cgroup_unix_recvmsg** | **cgroup_sysctl** |
-| **cgroup_getsockopt** | **cgroup_setsockopt** |
-| **cgroup_inet_sock_release** }
-| *ATTACH_FLAGS* := { **multi** | **override** }
+| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* | **name** *PROG_NAME* }
+| *ATTACH_TYPE* := { **cgroup_inet_ingress** | **cgroup_inet_egress** |
+| **cgroup_inet_sock_create** | **cgroup_sock_ops** |
+| **cgroup_device** | **cgroup_inet4_bind** | **cgroup_inet6_bind** |
+| **cgroup_inet4_post_bind** | **cgroup_inet6_post_bind** |
+| **cgroup_inet4_connect** | **cgroup_inet6_connect** |
+| **cgroup_unix_connect** | **cgroup_inet4_getpeername** |
+| **cgroup_inet6_getpeername** | **cgroup_unix_getpeername** |
+| **cgroup_inet4_getsockname** | **cgroup_inet6_getsockname** |
+| **cgroup_unix_getsockname** | **cgroup_udp4_sendmsg** |
+| **cgroup_udp6_sendmsg** | **cgroup_unix_sendmsg** |
+| **cgroup_udp4_recvmsg** | **cgroup_udp6_recvmsg** |
+| **cgroup_unix_recvmsg** | **cgroup_sysctl** |
+| **cgroup_getsockopt** | **cgroup_setsockopt** |
+| **cgroup_inet_sock_release** }
+| *ATTACH_FLAGS* := { **multi** | **override** }
DESCRIPTION
===========
- **bpftool cgroup { show | list }** *CGROUP* [**effective**]
- List all programs attached to the cgroup *CGROUP*.
-
- Output will start with program ID followed by attach type,
- attach flags and program name.
-
- If **effective** is specified retrieve effective programs that
- will execute for events within a cgroup. This includes
- inherited along with attached ones.
-
- **bpftool cgroup tree** [*CGROUP_ROOT*] [**effective**]
- Iterate over all cgroups in *CGROUP_ROOT* and list all
- attached programs. If *CGROUP_ROOT* is not specified,
- bpftool uses cgroup v2 mountpoint.
-
- The output is similar to the output of cgroup show/list
- commands: it starts with absolute cgroup path, followed by
- program ID, attach type, attach flags and program name.
-
- If **effective** is specified retrieve effective programs that
- will execute for events within a cgroup. This includes
- inherited along with attached ones.
-
- **bpftool cgroup attach** *CGROUP* *ATTACH_TYPE* *PROG* [*ATTACH_FLAGS*]
- Attach program *PROG* to the cgroup *CGROUP* with attach type
- *ATTACH_TYPE* and optional *ATTACH_FLAGS*.
-
- *ATTACH_FLAGS* can be one of: **override** if a sub-cgroup installs
- some bpf program, the program in this cgroup yields to sub-cgroup
- program; **multi** if a sub-cgroup installs some bpf program,
- that cgroup program gets run in addition to the program in this
- cgroup.
-
- Only one program is allowed to be attached to a cgroup with
- no attach flags or the **override** flag. Attaching another
- program will release old program and attach the new one.
-
- Multiple programs are allowed to be attached to a cgroup with
- **multi**. They are executed in FIFO order (those that were
- attached first, run first).
-
- Non-default *ATTACH_FLAGS* are supported by kernel version 4.14
- and later.
-
- *ATTACH_TYPE* can be on of:
- **ingress** ingress path of the inet socket (since 4.10);
- **egress** egress path of the inet socket (since 4.10);
- **sock_create** opening of an inet socket (since 4.10);
- **sock_ops** various socket operations (since 4.12);
- **device** device access (since 4.15);
- **bind4** call to bind(2) for an inet4 socket (since 4.17);
- **bind6** call to bind(2) for an inet6 socket (since 4.17);
- **post_bind4** return from bind(2) for an inet4 socket (since 4.17);
- **post_bind6** return from bind(2) for an inet6 socket (since 4.17);
- **connect4** call to connect(2) for an inet4 socket (since 4.17);
- **connect6** call to connect(2) for an inet6 socket (since 4.17);
- **connect_unix** call to connect(2) for a unix socket (since 6.7);
- **sendmsg4** call to sendto(2), sendmsg(2), sendmmsg(2) for an
- unconnected udp4 socket (since 4.18);
- **sendmsg6** call to sendto(2), sendmsg(2), sendmmsg(2) for an
- unconnected udp6 socket (since 4.18);
- **sendmsg_unix** call to sendto(2), sendmsg(2), sendmmsg(2) for
- an unconnected unix socket (since 6.7);
- **recvmsg4** call to recvfrom(2), recvmsg(2), recvmmsg(2) for
- an unconnected udp4 socket (since 5.2);
- **recvmsg6** call to recvfrom(2), recvmsg(2), recvmmsg(2) for
- an unconnected udp6 socket (since 5.2);
- **recvmsg_unix** call to recvfrom(2), recvmsg(2), recvmmsg(2) for
- an unconnected unix socket (since 6.7);
- **sysctl** sysctl access (since 5.2);
- **getsockopt** call to getsockopt (since 5.3);
- **setsockopt** call to setsockopt (since 5.3);
- **getpeername4** call to getpeername(2) for an inet4 socket (since 5.8);
- **getpeername6** call to getpeername(2) for an inet6 socket (since 5.8);
- **getpeername_unix** call to getpeername(2) for a unix socket (since 6.7);
- **getsockname4** call to getsockname(2) for an inet4 socket (since 5.8);
- **getsockname6** call to getsockname(2) for an inet6 socket (since 5.8).
- **getsockname_unix** call to getsockname(2) for a unix socket (since 6.7);
- **sock_release** closing an userspace inet socket (since 5.9).
-
- **bpftool cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG*
- Detach *PROG* from the cgroup *CGROUP* and attach type
- *ATTACH_TYPE*.
-
- **bpftool prog help**
- Print short help message.
+bpftool cgroup { show | list } *CGROUP* [effective]
+ List all programs attached to the cgroup *CGROUP*.
+
+ Output will start with program ID followed by attach type, attach flags and
+ program name.
+
+ If **effective** is specified retrieve effective programs that will execute
+ for events within a cgroup. This includes inherited along with attached
+ ones.
+
+bpftool cgroup tree [*CGROUP_ROOT*] [effective]
+ Iterate over all cgroups in *CGROUP_ROOT* and list all attached programs.
+ If *CGROUP_ROOT* is not specified, bpftool uses cgroup v2 mountpoint.
+
+ The output is similar to the output of cgroup show/list commands: it starts
+ with absolute cgroup path, followed by program ID, attach type, attach
+ flags and program name.
+
+ If **effective** is specified retrieve effective programs that will execute
+ for events within a cgroup. This includes inherited along with attached
+ ones.
+
+bpftool cgroup attach *CGROUP* *ATTACH_TYPE* *PROG* [*ATTACH_FLAGS*]
+ Attach program *PROG* to the cgroup *CGROUP* with attach type *ATTACH_TYPE*
+ and optional *ATTACH_FLAGS*.
+
+ *ATTACH_FLAGS* can be one of: **override** if a sub-cgroup installs some
+ bpf program, the program in this cgroup yields to sub-cgroup program;
+ **multi** if a sub-cgroup installs some bpf program, that cgroup program
+ gets run in addition to the program in this cgroup.
+
+ Only one program is allowed to be attached to a cgroup with no attach flags
+ or the **override** flag. Attaching another program will release the old
+ program and attach the new one.
+
+ Multiple programs are allowed to be attached to a cgroup with **multi**.
+ They are executed in FIFO order (those that were attached first, run
+ first).
+
+ Non-default *ATTACH_FLAGS* are supported by kernel version 4.14 and later.
+
+ *ATTACH_TYPE* can be one of:
+
+ - **ingress** ingress path of the inet socket (since 4.10)
+ - **egress** egress path of the inet socket (since 4.10)
+ - **sock_create** opening of an inet socket (since 4.10)
+ - **sock_ops** various socket operations (since 4.12)
+ - **device** device access (since 4.15)
+ - **bind4** call to bind(2) for an inet4 socket (since 4.17)
+ - **bind6** call to bind(2) for an inet6 socket (since 4.17)
+ - **post_bind4** return from bind(2) for an inet4 socket (since 4.17)
+ - **post_bind6** return from bind(2) for an inet6 socket (since 4.17)
+ - **connect4** call to connect(2) for an inet4 socket (since 4.17)
+ - **connect6** call to connect(2) for an inet6 socket (since 4.17)
+ - **connect_unix** call to connect(2) for a unix socket (since 6.7)
+ - **sendmsg4** call to sendto(2), sendmsg(2), sendmmsg(2) for an unconnected udp4 socket (since 4.18)
+ - **sendmsg6** call to sendto(2), sendmsg(2), sendmmsg(2) for an unconnected udp6 socket (since 4.18)
+ - **sendmsg_unix** call to sendto(2), sendmsg(2), sendmmsg(2) for an unconnected unix socket (since 6.7)
+ - **recvmsg4** call to recvfrom(2), recvmsg(2), recvmmsg(2) for an unconnected udp4 socket (since 5.2)
+ - **recvmsg6** call to recvfrom(2), recvmsg(2), recvmmsg(2) for an unconnected udp6 socket (since 5.2)
+ - **recvmsg_unix** call to recvfrom(2), recvmsg(2), recvmmsg(2) for an unconnected unix socket (since 6.7)
+ - **sysctl** sysctl access (since 5.2)
+ - **getsockopt** call to getsockopt (since 5.3)
+ - **setsockopt** call to setsockopt (since 5.3)
+ - **getpeername4** call to getpeername(2) for an inet4 socket (since 5.8)
+ - **getpeername6** call to getpeername(2) for an inet6 socket (since 5.8)
+ - **getpeername_unix** call to getpeername(2) for a unix socket (since 6.7)
+ - **getsockname4** call to getsockname(2) for an inet4 socket (since 5.8)
+ - **getsockname6** call to getsockname(2) for an inet6 socket (since 5.8)
+ - **getsockname_unix** call to getsockname(2) for a unix socket (since 6.7)
+ - **sock_release** closing a userspace inet socket (since 5.9)
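+
+ As an illustration (the cgroup path, pin path and program type are only
+ examples), a cgroup_skb program pinned in bpffs could be attached to the
+ egress hook of a cgroup with the **multi** flag as follows::
+
+   # bpftool cgroup attach /sys/fs/cgroup/user.slice egress pinned /sys/fs/bpf/my_prog multi
+   # bpftool cgroup show /sys/fs/cgroup/user.slice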
+
+bpftool cgroup detach *CGROUP* *ATTACH_TYPE* *PROG*
+ Detach *PROG* from the cgroup *CGROUP* and attach type *ATTACH_TYPE*.
+
+bpftool cgroup help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
- -f, --bpffs
- Show file names of pinned programs.
+-f, --bpffs
+ Show file names of pinned programs.
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-feature.rst b/tools/bpf/bpftool/Documentation/bpftool-feature.rst
index e44039f89be7..c7f837898bc7 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-feature.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-feature.rst
@@ -14,77 +14,70 @@ tool for inspection of eBPF-related parameters for Linux kernel or net device
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **feature** *COMMAND*
+**bpftool** [*OPTIONS*] **feature** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| }
+*OPTIONS* := { |COMMON_OPTIONS| }
- *COMMANDS* := { **probe** | **help** }
+*COMMANDS* := { **probe** | **help** }
FEATURE COMMANDS
================
-| **bpftool** **feature probe** [*COMPONENT*] [**full**] [**unprivileged**] [**macros** [**prefix** *PREFIX*]]
-| **bpftool** **feature list_builtins** *GROUP*
-| **bpftool** **feature help**
+| **bpftool** **feature probe** [*COMPONENT*] [**full**] [**unprivileged**] [**macros** [**prefix** *PREFIX*]]
+| **bpftool** **feature list_builtins** *GROUP*
+| **bpftool** **feature help**
|
-| *COMPONENT* := { **kernel** | **dev** *NAME* }
-| *GROUP* := { **prog_types** | **map_types** | **attach_types** | **link_types** | **helpers** }
+| *COMPONENT* := { **kernel** | **dev** *NAME* }
+| *GROUP* := { **prog_types** | **map_types** | **attach_types** | **link_types** | **helpers** }
DESCRIPTION
===========
- **bpftool feature probe** [**kernel**] [**full**] [**macros** [**prefix** *PREFIX*]]
- Probe the running kernel and dump a number of eBPF-related
- parameters, such as availability of the **bpf**\ () system call,
- JIT status, eBPF program types availability, eBPF helper
- functions availability, and more.
-
- By default, bpftool **does not run probes** for
- **bpf_probe_write_user**\ () and **bpf_trace_printk**\()
- helpers which print warnings to kernel logs. To enable them
- and run all probes, the **full** keyword should be used.
-
- If the **macros** keyword (but not the **-j** option) is
- passed, a subset of the output is dumped as a list of
- **#define** macros that are ready to be included in a C
- header file, for example. If, additionally, **prefix** is
- used to define a *PREFIX*, the provided string will be used
- as a prefix to the names of the macros: this can be used to
- avoid conflicts on macro names when including the output of
- this command as a header file.
-
- Keyword **kernel** can be omitted. If no probe target is
- specified, probing the kernel is the default behaviour.
-
- When the **unprivileged** keyword is used, bpftool will dump
- only the features available to a user who does not have the
- **CAP_SYS_ADMIN** capability set. The features available in
- that case usually represent a small subset of the parameters
- supported by the system. Unprivileged users MUST use the
- **unprivileged** keyword: This is to avoid misdetection if
- bpftool is inadvertently run as non-root, for example. This
- keyword is unavailable if bpftool was compiled without
- libcap.
-
- **bpftool feature probe dev** *NAME* [**full**] [**macros** [**prefix** *PREFIX*]]
- Probe network device for supported eBPF features and dump
- results to the console.
-
- The keywords **full**, **macros** and **prefix** have the
- same role as when probing the kernel.
-
- **bpftool feature list_builtins** *GROUP*
- List items known to bpftool. These can be BPF program types
- (**prog_types**), BPF map types (**map_types**), attach types
- (**attach_types**), link types (**link_types**), or BPF helper
- functions (**helpers**). The command does not probe the system, but
- simply lists the elements that bpftool knows from compilation time,
- as provided from libbpf (for all object types) or from the BPF UAPI
- header (list of helpers). This can be used in scripts to iterate over
- BPF types or helpers.
-
- **bpftool feature help**
- Print short help message.
+bpftool feature probe [kernel] [full] [macros [prefix *PREFIX*]]
+ Probe the running kernel and dump a number of eBPF-related parameters, such
+ as availability of the **bpf**\ () system call, JIT status, eBPF program
+ types availability, eBPF helper functions availability, and more.
+
+ By default, bpftool **does not run probes** for **bpf_probe_write_user**\
+ () and **bpf_trace_printk**\() helpers which print warnings to kernel logs.
+ To enable them and run all probes, the **full** keyword should be used.
+
+ If the **macros** keyword (but not the **-j** option) is passed, a subset
+ of the output is dumped as a list of **#define** macros that are ready to
+ be included in a C header file, for example. If, additionally, **prefix**
+ is used to define a *PREFIX*, the provided string will be used as a prefix
+ to the names of the macros: this can be used to avoid conflicts on macro
+ names when including the output of this command as a header file.
+
+ Keyword **kernel** can be omitted. If no probe target is specified, probing
+ the kernel is the default behaviour.
+
+ When the **unprivileged** keyword is used, bpftool will dump only the
+ features available to a user who does not have the **CAP_SYS_ADMIN**
+ capability set. The features available in that case usually represent a
+ small subset of the parameters supported by the system. Unprivileged users
+ MUST use the **unprivileged** keyword: This is to avoid misdetection if
+ bpftool is inadvertently run as non-root, for example. This keyword is
+ unavailable if bpftool was compiled without libcap.
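+
+ For example, the following emits the probe results as prefixed macros
+ suitable for inclusion in a C header (the prefix string and output file
+ name are arbitrary)::
+
+   # bpftool feature probe kernel macros prefix BPFTOOL_ > bpftool_features.h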
+
+bpftool feature probe dev *NAME* [full] [macros [prefix *PREFIX*]]
+ Probe network device for supported eBPF features and dump results to the
+ console.
+
+ The keywords **full**, **macros** and **prefix** have the same role as when
+ probing the kernel.
+
+bpftool feature list_builtins *GROUP*
+ List items known to bpftool. These can be BPF program types
+ (**prog_types**), BPF map types (**map_types**), attach types
+ (**attach_types**), link types (**link_types**), or BPF helper functions
+ (**helpers**). The command does not probe the system, but simply lists the
+ elements that bpftool knows from compilation time, as provided from libbpf
+ (for all object types) or from the BPF UAPI header (list of helpers). This
+ can be used in scripts to iterate over BPF types or helpers.
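+
+ As a small scripting sketch (not a bpftool feature itself), each known
+ program type can be iterated over in a shell loop::
+
+   # for t in $(bpftool feature list_builtins prog_types); do echo "$t"; done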
+
+bpftool feature help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
diff --git a/tools/bpf/bpftool/Documentation/bpftool-gen.rst b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
index 5006e724d1bc..d0a36f442db7 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-gen.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
@@ -14,199 +14,188 @@ tool for BPF code-generation
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **gen** *COMMAND*
+**bpftool** [*OPTIONS*] **gen** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| | { **-L** | **--use-loader** } }
+*OPTIONS* := { |COMMON_OPTIONS| | { **-L** | **--use-loader** } | [ { **-S** | **--sign** } {**-k** <private_key.pem>} **-i** <certificate.x509> ] }
- *COMMAND* := { **object** | **skeleton** | **help** }
+*COMMAND* := { **object** | **skeleton** | **help** }
GEN COMMANDS
=============
-| **bpftool** **gen object** *OUTPUT_FILE* *INPUT_FILE* [*INPUT_FILE*...]
-| **bpftool** **gen skeleton** *FILE* [**name** *OBJECT_NAME*]
-| **bpftool** **gen subskeleton** *FILE* [**name** *OBJECT_NAME*]
-| **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
-| **bpftool** **gen help**
+| **bpftool** **gen object** *OUTPUT_FILE* *INPUT_FILE* [*INPUT_FILE*...]
+| **bpftool** **gen skeleton** *FILE* [**name** *OBJECT_NAME*]
+| **bpftool** **gen subskeleton** *FILE* [**name** *OBJECT_NAME*]
+| **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
+| **bpftool** **gen help**
DESCRIPTION
===========
- **bpftool gen object** *OUTPUT_FILE* *INPUT_FILE* [*INPUT_FILE*...]
- Statically link (combine) together one or more *INPUT_FILE*'s
- into a single resulting *OUTPUT_FILE*. All the files involved
- are BPF ELF object files.
-
- The rules of BPF static linking are mostly the same as for
- user-space object files, but in addition to combining data
- and instruction sections, .BTF and .BTF.ext (if present in
- any of the input files) data are combined together. .BTF
- data is deduplicated, so all the common types across
- *INPUT_FILE*'s will only be represented once in the resulting
- BTF information.
-
- BPF static linking allows to partition BPF source code into
- individually compiled files that are then linked into
- a single resulting BPF object file, which can be used to
- generated BPF skeleton (with **gen skeleton** command) or
- passed directly into **libbpf** (using **bpf_object__open()**
- family of APIs).
-
- **bpftool gen skeleton** *FILE*
- Generate BPF skeleton C header file for a given *FILE*.
-
- BPF skeleton is an alternative interface to existing libbpf
- APIs for working with BPF objects. Skeleton code is intended
- to significantly shorten and simplify code to load and work
- with BPF programs from userspace side. Generated code is
- tailored to specific input BPF object *FILE*, reflecting its
- structure by listing out available maps, program, variables,
- etc. Skeleton eliminates the need to lookup mentioned
- components by name. Instead, if skeleton instantiation
- succeeds, they are populated in skeleton structure as valid
- libbpf types (e.g., **struct bpf_map** pointer) and can be
- passed to existing generic libbpf APIs.
-
- In addition to simple and reliable access to maps and
- programs, skeleton provides a storage for BPF links (**struct
- bpf_link**) for each BPF program within BPF object. When
- requested, supported BPF programs will be automatically
- attached and resulting BPF links stored for further use by
- user in pre-allocated fields in skeleton struct. For BPF
- programs that can't be automatically attached by libbpf,
- user can attach them manually, but store resulting BPF link
- in per-program link field. All such set up links will be
- automatically destroyed on BPF skeleton destruction. This
- eliminates the need for users to manage links manually and
- rely on libbpf support to detach programs and free up
- resources.
-
- Another facility provided by BPF skeleton is an interface to
- global variables of all supported kinds: mutable, read-only,
- as well as extern ones. This interface allows to pre-setup
- initial values of variables before BPF object is loaded and
- verified by kernel. For non-read-only variables, the same
- interface can be used to fetch values of global variables on
- userspace side, even if they are modified by BPF code.
-
- During skeleton generation, contents of source BPF object
- *FILE* is embedded within generated code and is thus not
- necessary to keep around. This ensures skeleton and BPF
- object file are matching 1-to-1 and always stay in sync.
- Generated code is dual-licensed under LGPL-2.1 and
- BSD-2-Clause licenses.
-
- It is a design goal and guarantee that skeleton interfaces
- are interoperable with generic libbpf APIs. User should
- always be able to use skeleton API to create and load BPF
- object, and later use libbpf APIs to keep working with
- specific maps, programs, etc.
-
- As part of skeleton, few custom functions are generated.
- Each of them is prefixed with object name. Object name can
- either be derived from object file name, i.e., if BPF object
- file name is **example.o**, BPF object name will be
- **example**. Object name can be also specified explicitly
- through **name** *OBJECT_NAME* parameter. The following
- custom functions are provided (assuming **example** as
- the object name):
-
- - **example__open** and **example__open_opts**.
- These functions are used to instantiate skeleton. It
- corresponds to libbpf's **bpf_object__open**\ () API.
- **_opts** variants accepts extra **bpf_object_open_opts**
- options.
-
- - **example__load**.
- This function creates maps, loads and verifies BPF
- programs, initializes global data maps. It corresponds to
- libppf's **bpf_object__load**\ () API.
-
- - **example__open_and_load** combines **example__open** and
- **example__load** invocations in one commonly used
- operation.
-
- - **example__attach** and **example__detach**
- This pair of functions allow to attach and detach,
- correspondingly, already loaded BPF object. Only BPF
- programs of types supported by libbpf for auto-attachment
- will be auto-attached and their corresponding BPF links
- instantiated. For other BPF programs, user can manually
- create a BPF link and assign it to corresponding fields in
- skeleton struct. **example__detach** will detach both
- links created automatically, as well as those populated by
- user manually.
-
- - **example__destroy**
- Detach and unload BPF programs, free up all the resources
- used by skeleton and BPF object.
-
- If BPF object has global variables, corresponding structs
- with memory layout corresponding to global data data section
- layout will be created. Currently supported ones are: *.data*,
- *.bss*, *.rodata*, and *.kconfig* structs/data sections.
- These data sections/structs can be used to set up initial
- values of variables, if set before **example__load**.
- Afterwards, if target kernel supports memory-mapped BPF
- arrays, same structs can be used to fetch and update
- (non-read-only) data from userspace, with same simplicity
- as for BPF side.
-
- **bpftool gen subskeleton** *FILE*
- Generate BPF subskeleton C header file for a given *FILE*.
-
- Subskeletons are similar to skeletons, except they do not own
- the corresponding maps, programs, or global variables. They
- require that the object file used to generate them is already
- loaded into a *bpf_object* by some other means.
-
- This functionality is useful when a library is included into a
- larger BPF program. A subskeleton for the library would have
- access to all objects and globals defined in it, without
- having to know about the larger program.
-
- Consequently, there are only two functions defined
- for subskeletons:
-
- - **example__open(bpf_object\*)**
- Instantiates a subskeleton from an already opened (but not
- necessarily loaded) **bpf_object**.
-
- - **example__destroy()**
- Frees the storage for the subskeleton but *does not* unload
- any BPF programs or maps.
-
- **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
- Generate a minimum BTF file as *OUTPUT*, derived from a given
- *INPUT* BTF file, containing all needed BTF types so one, or
- more, given eBPF objects CO-RE relocations may be satisfied.
-
- When kernels aren't compiled with CONFIG_DEBUG_INFO_BTF,
- libbpf, when loading an eBPF object, has to rely on external
- BTF files to be able to calculate CO-RE relocations.
-
- Usually, an external BTF file is built from existing kernel
- DWARF data using pahole. It contains all the types used by
- its respective kernel image and, because of that, is big.
-
- The min_core_btf feature builds smaller BTF files, customized
- to one or multiple eBPF objects, so they can be distributed
- together with an eBPF CO-RE based application, turning the
- application portable to different kernel versions.
-
- Check examples bellow for more information how to use it.
-
- **bpftool gen help**
- Print short help message.
+bpftool gen object *OUTPUT_FILE* *INPUT_FILE* [*INPUT_FILE*...]
+ Statically link (combine) together one or more *INPUT_FILE*'s into a single
+ resulting *OUTPUT_FILE*. All the files involved are BPF ELF object files.
+
+ The rules of BPF static linking are mostly the same as for user-space
+ object files, but in addition to combining data and instruction sections,
+ .BTF and .BTF.ext (if present in any of the input files) data are combined
+ together. .BTF data is deduplicated, so all the common types across
+ *INPUT_FILE*'s will only be represented once in the resulting BTF
+ information.
+
+ BPF static linking allows partitioning BPF source code into individually
+ compiled files that are then linked into a single resulting BPF object
+ file, which can be used to generate a BPF skeleton (with the **gen
+ skeleton** command) or passed directly into **libbpf** (using the
+ **bpf_object__open()** family of APIs).
+
+bpftool gen skeleton *FILE*
+ Generate BPF skeleton C header file for a given *FILE*.
+
+ BPF skeleton is an alternative interface to existing libbpf APIs for
+ working with BPF objects. Skeleton code is intended to significantly
+ shorten and simplify code to load and work with BPF programs from userspace
+ side. Generated code is tailored to specific input BPF object *FILE*,
+ reflecting its structure by listing out available maps, programs, variables,
+ etc. Skeleton eliminates the need to look up mentioned components by name.
+ Instead, if skeleton instantiation succeeds, they are populated in skeleton
+ structure as valid libbpf types (e.g., **struct bpf_map** pointer) and can
+ be passed to existing generic libbpf APIs.
+
+ In addition to simple and reliable access to maps and programs, skeleton
+ provides a storage for BPF links (**struct bpf_link**) for each BPF program
+ within BPF object. When requested, supported BPF programs will be
+ automatically attached and resulting BPF links stored for further use by
+ user in pre-allocated fields in skeleton struct. For BPF programs that
+ can't be automatically attached by libbpf, user can attach them manually,
+ but store resulting BPF link in per-program link field. All such set up
+ links will be automatically destroyed on BPF skeleton destruction. This
+ eliminates the need for users to manage links manually and rely on libbpf
+ support to detach programs and free up resources.
+
+ Another facility provided by BPF skeleton is an interface to global
+ variables of all supported kinds: mutable, read-only, as well as extern
+ ones. This interface allows setting up initial values of variables before
+ BPF object is loaded and verified by kernel. For non-read-only variables,
+ the same interface can be used to fetch values of global variables on
+ userspace side, even if they are modified by BPF code.
+
+ During skeleton generation, the contents of the source BPF object *FILE*
+ are embedded within the generated code, so the file need not be kept around.
+ This ensures skeleton and BPF object file are matching 1-to-1 and always
+ stay in sync. Generated code is dual-licensed under LGPL-2.1 and
+ BSD-2-Clause licenses.
+
+ It is a design goal and guarantee that skeleton interfaces are
+ interoperable with generic libbpf APIs. User should always be able to use
+ skeleton API to create and load BPF object, and later use libbpf APIs to
+ keep working with specific maps, programs, etc.
+
+ As part of skeleton, few custom functions are generated. Each of them is
+ prefixed with object name. Object name can either be derived from object
+ file name, i.e., if BPF object file name is **example.o**, BPF object name
+ will be **example**. Object name can be also specified explicitly through
+ **name** *OBJECT_NAME* parameter. The following custom functions are
+ provided (assuming **example** as the object name):
+
+ - **example__open** and **example__open_opts**.
+ These functions are used to instantiate the skeleton. They correspond to
+ libbpf's **bpf_object__open**\ () API. The **_opts** variant accepts extra
+ **bpf_object_open_opts** options.
+
+ - **example__load**.
+ This function creates maps, loads and verifies BPF programs, initializes
+ global data maps. It corresponds to libbpf's **bpf_object__load**\ ()
+ API.
+
+ - **example__open_and_load** combines **example__open** and
+ **example__load** invocations in one commonly used operation.
+
+ - **example__attach** and **example__detach**.
+ This pair of functions allows attaching and detaching, respectively, an
+ already loaded BPF object. Only BPF programs of types supported by libbpf
+ for auto-attachment will be auto-attached and their corresponding BPF
+ links instantiated. For other BPF programs, user can manually create a
+ BPF link and assign it to corresponding fields in skeleton struct.
+ **example__detach** will detach both links created automatically, as well
+ as those populated by user manually.
+
+ - **example__destroy**.
+ Detach and unload BPF programs, free up all the resources used by
+ skeleton and BPF object.
+
+ If the BPF object has global variables, corresponding structs with a
+ memory layout matching the global data section layout will be created.
+ Currently supported ones are: *.data*, *.bss*, *.rodata*, and *.kconfig*
+ structs/data sections. These data sections/structs can be used to set up
+ initial values of variables, if set before **example__load**. Afterwards,
+ if target kernel supports memory-mapped BPF arrays, same structs can be
+ used to fetch and update (non-read-only) data from userspace, with same
+ simplicity as for BPF side.
+
+bpftool gen subskeleton *FILE*
+ Generate BPF subskeleton C header file for a given *FILE*.
+
+ Subskeletons are similar to skeletons, except they do not own the
+ corresponding maps, programs, or global variables. They require that the
+ object file used to generate them is already loaded into a *bpf_object* by
+ some other means.
+
+ This functionality is useful when a library is included into a larger BPF
+ program. A subskeleton for the library would have access to all objects and
+ globals defined in it, without having to know about the larger program.
+
+ Consequently, there are only two functions defined for subskeletons:
+
+ - **example__open(bpf_object\*)**.
+ Instantiates a subskeleton from an already opened (but not necessarily
+ loaded) **bpf_object**.
+
+ - **example__destroy()**.
+ Frees the storage for the subskeleton but *does not* unload any BPF
+ programs or maps.
+
+bpftool gen min_core_btf *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
+ Generate a minimum BTF file as *OUTPUT*, derived from a given *INPUT* BTF
+ file, containing all the BTF types needed to satisfy the CO-RE relocations
+ of one or more given eBPF objects.
+
+ When kernels aren't compiled with CONFIG_DEBUG_INFO_BTF, libbpf, when
+ loading an eBPF object, has to rely on external BTF files to be able to
+ calculate CO-RE relocations.
+
+ Usually, an external BTF file is built from existing kernel DWARF data
+ using pahole. It contains all the types used by its respective kernel image
+ and, because of that, is big.
+
+ The min_core_btf feature builds smaller BTF files, customized to one or
+ multiple eBPF objects, so they can be distributed together with an eBPF
+ CO-RE based application, making the application portable across different
+ kernel versions.
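+
+ For illustration only (all file names below are hypothetical), a reduced
+ BTF file could be generated from a full kernel BTF dump like this::
+
+   # bpftool gen min_core_btf vmlinux.btf app.min.btf app.bpf.o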
+
+ Check examples below for more information on how to use it.
+
+bpftool gen help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
- -L, --use-loader
- For skeletons, generate a "light" skeleton (also known as "loader"
- skeleton). A light skeleton contains a loader eBPF program. It does
- not use the majority of the libbpf infrastructure, and does not need
- libelf.
+-L, --use-loader
+ For skeletons, generate a "light" skeleton (also known as "loader"
+ skeleton). A light skeleton contains a loader eBPF program. It does not use
+ the majority of the libbpf infrastructure, and does not need libelf.
+
+-S, --sign
+ For skeletons, generate a signed skeleton. This option must be used with
+ **-k** and **-i**. Using this flag implicitly enables **--use-loader**.
+
+-k <private_key.pem>
+ Path to the private key file in PEM format, required for signing.
+
+-i <certificate.x509>
+ Path to the X.509 certificate file in PEM or DER format, required for
+ signing.
EXAMPLES
========
@@ -257,18 +246,48 @@ EXAMPLES
return 0;
}
-This is example BPF application with two BPF programs and a mix of BPF maps
-and global variables. Source code is split across two source code files.
+**$ cat example3.bpf.c**
+
+::
+
+ #include <linux/ptrace.h>
+ #include <linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
+ /* This header file is provided by the bpf_testmod module. */
+ #include "bpf_testmod.h"
+
+ int test_2_result = 0;
+
+ /* bpf_testmod.ko calls this function, passing a "4"
+ * and testmod_map->data.
+ */
+ SEC("struct_ops/test_2")
+ void BPF_PROG(test_2, int a, int b)
+ {
+ test_2_result = a + b;
+ }
+
+ SEC(".struct_ops")
+ struct bpf_testmod_ops testmod_map = {
+ .test_2 = (void *)test_2,
+ .data = 0x1,
+ };
+
+This is an example BPF application with three BPF programs and a mix of BPF
+maps and global variables. The source code is split across three source code
+files.
**$ clang --target=bpf -g example1.bpf.c -o example1.bpf.o**
**$ clang --target=bpf -g example2.bpf.c -o example2.bpf.o**
-**$ bpftool gen object example.bpf.o example1.bpf.o example2.bpf.o**
+**$ clang --target=bpf -g example3.bpf.c -o example3.bpf.o**
+
+**$ bpftool gen object example.bpf.o example1.bpf.o example2.bpf.o example3.bpf.o**
-This set of commands compiles *example1.bpf.c* and *example2.bpf.c*
-individually and then statically links respective object files into the final
-BPF ELF object file *example.bpf.o*.
+This set of commands compiles *example1.bpf.c*, *example2.bpf.c* and
+*example3.bpf.c* individually and then statically links respective object
+files into the final BPF ELF object file *example.bpf.o*.
**$ bpftool gen skeleton example.bpf.o name example | tee example.skel.h**
@@ -291,7 +310,15 @@ BPF ELF object file *example.bpf.o*.
struct bpf_map *data;
struct bpf_map *bss;
struct bpf_map *my_map;
+ struct bpf_map *testmod_map;
} maps;
+ struct {
+ struct example__testmod_map__bpf_testmod_ops {
+ const struct bpf_program *test_1;
+ const struct bpf_program *test_2;
+ int data;
+ } *testmod_map;
+ } struct_ops;
struct {
struct bpf_program *handle_sys_enter;
struct bpf_program *handle_sys_exit;
@@ -304,6 +331,7 @@ BPF ELF object file *example.bpf.o*.
struct {
int x;
} data;
+ int test_2_result;
} *bss;
struct example__data {
_Bool global_flag;
@@ -342,10 +370,16 @@ BPF ELF object file *example.bpf.o*.
skel->rodata->param1 = 128;
+ /* Change the value through the pointer of shadow type */
+ skel->struct_ops.testmod_map->data = 13;
+
err = example__load(skel);
if (err)
goto cleanup;
+ /* The result of the function test_2() */
+ printf("test_2_result: %d\n", skel->bss->test_2_result);
+
err = example__attach(skel);
if (err)
goto cleanup;
@@ -372,6 +406,7 @@ BPF ELF object file *example.bpf.o*.
::
+ test_2_result: 17
my_map name: my_map
sys_enter prog FD: 8
my_static_var: 7
diff --git a/tools/bpf/bpftool/Documentation/bpftool-iter.rst b/tools/bpf/bpftool/Documentation/bpftool-iter.rst
index 84839d488621..2e5d81c906dc 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-iter.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-iter.rst
@@ -14,50 +14,46 @@ tool to create BPF iterators
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **iter** *COMMAND*
+**bpftool** [*OPTIONS*] **iter** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| }
+*OPTIONS* := { |COMMON_OPTIONS| }
- *COMMANDS* := { **pin** | **help** }
+*COMMANDS* := { **pin** | **help** }
ITER COMMANDS
-===================
+=============
-| **bpftool** **iter pin** *OBJ* *PATH* [**map** *MAP*]
-| **bpftool** **iter help**
+| **bpftool** **iter pin** *OBJ* *PATH* [**map** *MAP*]
+| **bpftool** **iter help**
|
-| *OBJ* := /a/file/of/bpf_iter_target.o
-| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
+| *OBJ* := /a/file/of/bpf_iter_target.o
+| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
DESCRIPTION
===========
- **bpftool iter pin** *OBJ* *PATH* [**map** *MAP*]
- A bpf iterator combines a kernel iterating of
- particular kernel data (e.g., tasks, bpf_maps, etc.)
- and a bpf program called for each kernel data object
- (e.g., one task, one bpf_map, etc.). User space can
- *read* kernel iterator output through *read()* syscall.
-
- The *pin* command creates a bpf iterator from *OBJ*,
- and pin it to *PATH*. The *PATH* should be located
- in *bpffs* mount. It must not contain a dot
- character ('.'), which is reserved for future extensions
- of *bpffs*.
-
- Map element bpf iterator requires an additional parameter
- *MAP* so bpf program can iterate over map elements for
- that map. User can have a bpf program in kernel to run
- with each map element, do checking, filtering, aggregation,
- etc. without copying data to user space.
-
- User can then *cat PATH* to see the bpf iterator output.
-
- **bpftool iter help**
- Print short help message.
+bpftool iter pin *OBJ* *PATH* [map *MAP*]
+ A bpf iterator combines kernel iteration over particular kernel data (e.g.,
+ tasks, bpf_maps, etc.) with a bpf program called for each kernel data object
+ (e.g., one task, one bpf_map, etc.). User space can *read* the kernel
+ iterator output through the *read()* syscall.
+
+ The *pin* command creates a bpf iterator from *OBJ* and pins it to *PATH*.
+ The *PATH* should be located in a *bpffs* mount. It must not contain a dot
+ character ('.'), which is reserved for future extensions of *bpffs*.
+
+ Map element bpf iterator requires an additional parameter *MAP* so bpf
+ program can iterate over map elements for that map. User can have a bpf
+ program in kernel to run with each map element, do checking, filtering,
+ aggregation, etc. without copying data to user space.
+
+ User can then *cat PATH* to see the bpf iterator output.
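+
+ A minimal sketch, with an illustrative object file and pin path::
+
+   # bpftool iter pin bpf_iter_task.o /sys/fs/bpf/my_task_iter
+   # cat /sys/fs/bpf/my_task_iter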
+
+bpftool iter help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-link.rst b/tools/bpf/bpftool/Documentation/bpftool-link.rst
index 52a4eee4af54..6f09d4405ed8 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-link.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-link.rst
@@ -14,67 +14,62 @@ tool for inspection and simple manipulation of eBPF links
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **link** *COMMAND*
+**bpftool** [*OPTIONS*] **link** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| | { **-f** | **--bpffs** } | { **-n** | **--nomount** } }
+*OPTIONS* := { |COMMON_OPTIONS| | { **-f** | **--bpffs** } | { **-n** | **--nomount** } }
- *COMMANDS* := { **show** | **list** | **pin** | **help** }
+*COMMANDS* := { **show** | **list** | **pin** | **help** }
LINK COMMANDS
=============
-| **bpftool** **link { show | list }** [*LINK*]
-| **bpftool** **link pin** *LINK* *FILE*
-| **bpftool** **link detach** *LINK*
-| **bpftool** **link help**
+| **bpftool** **link { show | list }** [*LINK*]
+| **bpftool** **link pin** *LINK* *FILE*
+| **bpftool** **link detach** *LINK*
+| **bpftool** **link help**
|
-| *LINK* := { **id** *LINK_ID* | **pinned** *FILE* }
+| *LINK* := { **id** *LINK_ID* | **pinned** *FILE* }
DESCRIPTION
===========
- **bpftool link { show | list }** [*LINK*]
- Show information about active links. If *LINK* is
- specified show information only about given link,
- otherwise list all links currently active on the system.
+bpftool link { show | list } [*LINK*]
+ Show information about active links. If *LINK* is specified show
+ information only about given link, otherwise list all links currently
+ active on the system.
- Output will start with link ID followed by link type and
- zero or more named attributes, some of which depend on type
- of link.
+ Output will start with link ID followed by link type and zero or more named
+ attributes, some of which depend on type of link.
- Since Linux 5.8 bpftool is able to discover information about
- processes that hold open file descriptors (FDs) against BPF
- links. On such kernels bpftool will automatically emit this
- information as well.
+ Since Linux 5.8 bpftool is able to discover information about processes
+ that hold open file descriptors (FDs) against BPF links. On such kernels
+ bpftool will automatically emit this information as well.
- **bpftool link pin** *LINK* *FILE*
- Pin link *LINK* as *FILE*.
+bpftool link pin *LINK* *FILE*
+ Pin link *LINK* as *FILE*.
- Note: *FILE* must be located in *bpffs* mount. It must not
- contain a dot character ('.'), which is reserved for future
- extensions of *bpffs*.
+ Note: *FILE* must be located in *bpffs* mount. It must not contain a dot
+ character ('.'), which is reserved for future extensions of *bpffs*.
- **bpftool link detach** *LINK*
- Force-detach link *LINK*. BPF link and its underlying BPF
- program will stay valid, but they will be detached from the
- respective BPF hook and BPF link will transition into
- a defunct state until last open file descriptor for that
- link is closed.
+bpftool link detach *LINK*
+ Force-detach link *LINK*. BPF link and its underlying BPF program will stay
+ valid, but they will be detached from the respective BPF hook and BPF link
+ will transition into a defunct state until last open file descriptor for
+ that link is closed.
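+
+ For example (link ID and pin path are illustrative only), a link can be
+ pinned by ID and later force-detached through its pin::
+
+   # bpftool link pin id 10 /sys/fs/bpf/my_link
+   # bpftool link detach pinned /sys/fs/bpf/my_link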
- **bpftool link help**
- Print short help message.
+bpftool link help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
- -f, --bpffs
- When showing BPF links, show file names of pinned
- links.
+-f, --bpffs
+ When showing BPF links, show file names of pinned links.
- -n, --nomount
- Do not automatically attempt to mount any virtual file system
- (such as tracefs or BPF virtual file system) when necessary.
+-n, --nomount
+ Do not automatically attempt to mount any virtual file system (such as
+ tracefs or BPF virtual file system) when necessary.
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-map.rst b/tools/bpf/bpftool/Documentation/bpftool-map.rst
index 3b7ba037af95..252e4c538edb 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-map.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-map.rst
@@ -14,166 +14,160 @@ tool for inspection and simple manipulation of eBPF maps
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **map** *COMMAND*
+**bpftool** [*OPTIONS*] **map** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| | { **-f** | **--bpffs** } | { **-n** | **--nomount** } }
+*OPTIONS* := { |COMMON_OPTIONS| | { **-f** | **--bpffs** } | { **-n** | **--nomount** } }
- *COMMANDS* :=
- { **show** | **list** | **create** | **dump** | **update** | **lookup** | **getnext** |
- **delete** | **pin** | **help** }
+*COMMANDS* :=
+{ **show** | **list** | **create** | **dump** | **update** | **lookup** | **getnext** |
+**delete** | **pin** | **help** }
MAP COMMANDS
=============
-| **bpftool** **map** { **show** | **list** } [*MAP*]
-| **bpftool** **map create** *FILE* **type** *TYPE* **key** *KEY_SIZE* **value** *VALUE_SIZE* \
-| **entries** *MAX_ENTRIES* **name** *NAME* [**flags** *FLAGS*] [**inner_map** *MAP*] \
-| [**offload_dev** *NAME*]
-| **bpftool** **map dump** *MAP*
-| **bpftool** **map update** *MAP* [**key** *DATA*] [**value** *VALUE*] [*UPDATE_FLAGS*]
-| **bpftool** **map lookup** *MAP* [**key** *DATA*]
-| **bpftool** **map getnext** *MAP* [**key** *DATA*]
-| **bpftool** **map delete** *MAP* **key** *DATA*
-| **bpftool** **map pin** *MAP* *FILE*
-| **bpftool** **map event_pipe** *MAP* [**cpu** *N* **index** *M*]
-| **bpftool** **map peek** *MAP*
-| **bpftool** **map push** *MAP* **value** *VALUE*
-| **bpftool** **map pop** *MAP*
-| **bpftool** **map enqueue** *MAP* **value** *VALUE*
-| **bpftool** **map dequeue** *MAP*
-| **bpftool** **map freeze** *MAP*
-| **bpftool** **map help**
+| **bpftool** **map** { **show** | **list** } [*MAP*]
+| **bpftool** **map create** *FILE* **type** *TYPE* **key** *KEY_SIZE* **value** *VALUE_SIZE* \
+| **entries** *MAX_ENTRIES* **name** *NAME* [**flags** *FLAGS*] [**inner_map** *MAP*] \
+| [**offload_dev** *NAME*]
+| **bpftool** **map dump** *MAP*
+| **bpftool** **map update** *MAP* [**key** *DATA*] [**value** *VALUE*] [*UPDATE_FLAGS*]
+| **bpftool** **map lookup** *MAP* [**key** *DATA*]
+| **bpftool** **map getnext** *MAP* [**key** *DATA*]
+| **bpftool** **map delete** *MAP* **key** *DATA*
+| **bpftool** **map pin** *MAP* *FILE*
+| **bpftool** **map event_pipe** *MAP* [**cpu** *N* **index** *M*]
+| **bpftool** **map peek** *MAP*
+| **bpftool** **map push** *MAP* **value** *VALUE*
+| **bpftool** **map pop** *MAP*
+| **bpftool** **map enqueue** *MAP* **value** *VALUE*
+| **bpftool** **map dequeue** *MAP*
+| **bpftool** **map freeze** *MAP*
+| **bpftool** **map help**
|
-| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* | **name** *MAP_NAME* }
-| *DATA* := { [**hex**] *BYTES* }
-| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* | **name** *PROG_NAME* }
-| *VALUE* := { *DATA* | *MAP* | *PROG* }
-| *UPDATE_FLAGS* := { **any** | **exist** | **noexist** }
-| *TYPE* := { **hash** | **array** | **prog_array** | **perf_event_array** | **percpu_hash**
-| | **percpu_array** | **stack_trace** | **cgroup_array** | **lru_hash**
-| | **lru_percpu_hash** | **lpm_trie** | **array_of_maps** | **hash_of_maps**
-| | **devmap** | **devmap_hash** | **sockmap** | **cpumap** | **xskmap** | **sockhash**
-| | **cgroup_storage** | **reuseport_sockarray** | **percpu_cgroup_storage**
-| | **queue** | **stack** | **sk_storage** | **struct_ops** | **ringbuf** | **inode_storage**
-| | **task_storage** | **bloom_filter** | **user_ringbuf** | **cgrp_storage** }
+| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* | **name** *MAP_NAME* }
+| *DATA* := { [**hex**] *BYTES* }
+| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* | **name** *PROG_NAME* }
+| *VALUE* := { *DATA* | *MAP* | *PROG* }
+| *UPDATE_FLAGS* := { **any** | **exist** | **noexist** }
+| *TYPE* := { **hash** | **array** | **prog_array** | **perf_event_array** | **percpu_hash**
+| | **percpu_array** | **stack_trace** | **cgroup_array** | **lru_hash**
+| | **lru_percpu_hash** | **lpm_trie** | **array_of_maps** | **hash_of_maps**
+| | **devmap** | **devmap_hash** | **sockmap** | **cpumap** | **xskmap** | **sockhash**
+| | **cgroup_storage** | **reuseport_sockarray** | **percpu_cgroup_storage**
+| | **queue** | **stack** | **sk_storage** | **struct_ops** | **ringbuf** | **inode_storage**
+| | **task_storage** | **bloom_filter** | **user_ringbuf** | **cgrp_storage** | **arena** }
DESCRIPTION
===========
- **bpftool map { show | list }** [*MAP*]
- Show information about loaded maps. If *MAP* is specified
- show information only about given maps, otherwise list all
- maps currently loaded on the system. In case of **name**,
- *MAP* may match several maps which will all be shown.
+bpftool map { show | list } [*MAP*]
+ Show information about loaded maps. If *MAP* is specified show information
+ only about given maps, otherwise list all maps currently loaded on the
+ system. In case of **name**, *MAP* may match several maps which will all
+ be shown.
- Output will start with map ID followed by map type and
- zero or more named attributes (depending on kernel version).
+ Output will start with map ID followed by map type and zero or more named
+ attributes (depending on kernel version).
- Since Linux 5.8 bpftool is able to discover information about
- processes that hold open file descriptors (FDs) against BPF
- maps. On such kernels bpftool will automatically emit this
- information as well.
+ Since Linux 5.8 bpftool is able to discover information about processes
+ that hold open file descriptors (FDs) against BPF maps. On such kernels
+ bpftool will automatically emit this information as well.
- **bpftool map create** *FILE* **type** *TYPE* **key** *KEY_SIZE* **value** *VALUE_SIZE* **entries** *MAX_ENTRIES* **name** *NAME* [**flags** *FLAGS*] [**inner_map** *MAP*] [**offload_dev** *NAME*]
- Create a new map with given parameters and pin it to *bpffs*
- as *FILE*.
+bpftool map create *FILE* type *TYPE* key *KEY_SIZE* value *VALUE_SIZE* entries *MAX_ENTRIES* name *NAME* [flags *FLAGS*] [inner_map *MAP*] [offload_dev *NAME*]
+ Create a new map with given parameters and pin it to *bpffs* as *FILE*.
- *FLAGS* should be an integer which is the combination of
- desired flags, e.g. 1024 for **BPF_F_MMAPABLE** (see bpf.h
- UAPI header for existing flags).
+ *FLAGS* should be an integer which is the combination of desired flags,
+ e.g. 1024 for **BPF_F_MMAPABLE** (see bpf.h UAPI header for existing
+ flags).
- To create maps of type array-of-maps or hash-of-maps, the
- **inner_map** keyword must be used to pass an inner map. The
- kernel needs it to collect metadata related to the inner maps
- that the new map will work with.
+ To create maps of type array-of-maps or hash-of-maps, the **inner_map**
+ keyword must be used to pass an inner map. The kernel needs it to collect
+ metadata related to the inner maps that the new map will work with.
- Keyword **offload_dev** expects a network interface name,
- and is used to request hardware offload for the map.
+ Keyword **offload_dev** expects a network interface name, and is used to
+ request hardware offload for the map.
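+
+ For example (pin path, sizes and name are arbitrary), a hash map with
+ 4-byte keys and 8-byte values could be created and pinned with::
+
+   # bpftool map create /sys/fs/bpf/my_hash type hash key 4 value 8 entries 1024 name my_hash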
- **bpftool map dump** *MAP*
- Dump all entries in a given *MAP*. In case of **name**,
- *MAP* may match several maps which will all be dumped.
+bpftool map dump *MAP*
+ Dump all entries in a given *MAP*. In case of **name**, *MAP* may match
+ several maps which will all be dumped.
- **bpftool map update** *MAP* [**key** *DATA*] [**value** *VALUE*] [*UPDATE_FLAGS*]
- Update map entry for a given *KEY*.
+bpftool map update *MAP* [key *DATA*] [value *VALUE*] [*UPDATE_FLAGS*]
+ Update map entry for a given *KEY*.
- *UPDATE_FLAGS* can be one of: **any** update existing entry
- or add if doesn't exit; **exist** update only if entry already
- exists; **noexist** update only if entry doesn't exist.
+ *UPDATE_FLAGS* can be one of: **any** update existing entry or add if
+ doesn't exist; **exist** update only if entry already exists; **noexist**
+ update only if entry doesn't exist.
- If the **hex** keyword is provided in front of the bytes
- sequence, the bytes are parsed as hexadecimal values, even if
- no "0x" prefix is added. If the keyword is not provided, then
- the bytes are parsed as decimal values, unless a "0x" prefix
- (for hexadecimal) or a "0" prefix (for octal) is provided.
+ If the **hex** keyword is provided in front of the bytes sequence, the
+ bytes are parsed as hexadecimal values, even if no "0x" prefix is added. If
+ the keyword is not provided, then the bytes are parsed as decimal values,
+ unless a "0x" prefix (for hexadecimal) or a "0" prefix (for octal) is
+ provided.
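+
+ For instance, both commands below update the same 4-byte key of a map with
+ 8-byte values (the map ID is hypothetical), first with decimal bytes and
+ then with the **hex** keyword::
+
+   # bpftool map update id 64 key 0 0 0 1 value 1 2 3 4 5 6 7 8
+   # bpftool map update id 64 key hex 00 00 00 01 value hex 01 02 03 04 05 06 07 08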
- **bpftool map lookup** *MAP* [**key** *DATA*]
- Lookup **key** in the map.
+bpftool map lookup *MAP* [key *DATA*]
+ Lookup **key** in the map.
- **bpftool map getnext** *MAP* [**key** *DATA*]
- Get next key. If *key* is not specified, get first key.
+bpftool map getnext *MAP* [key *DATA*]
+ Get next key. If *key* is not specified, get first key.
- **bpftool map delete** *MAP* **key** *DATA*
- Remove entry from the map.
+bpftool map delete *MAP* key *DATA*
+ Remove entry from the map.
- **bpftool map pin** *MAP* *FILE*
- Pin map *MAP* as *FILE*.
+bpftool map pin *MAP* *FILE*
+ Pin map *MAP* as *FILE*.
- Note: *FILE* must be located in *bpffs* mount. It must not
- contain a dot character ('.'), which is reserved for future
- extensions of *bpffs*.
+ Note: *FILE* must be located in *bpffs* mount. It must not contain a dot
+ character ('.'), which is reserved for future extensions of *bpffs*.
- **bpftool** **map event_pipe** *MAP* [**cpu** *N* **index** *M*]
- Read events from a **BPF_MAP_TYPE_PERF_EVENT_ARRAY** map.
+bpftool map event_pipe *MAP* [cpu *N* index *M*]
+ Read events from a **BPF_MAP_TYPE_PERF_EVENT_ARRAY** map.
- Install perf rings into a perf event array map and dump
- output of any **bpf_perf_event_output**\ () call in the kernel.
- By default read the number of CPUs on the system and
- install perf ring for each CPU in the corresponding index
- in the array.
+ Install perf rings into a perf event array map and dump output of any
+ **bpf_perf_event_output**\ () call in the kernel. By default read the
+ number of CPUs on the system and install perf ring for each CPU in the
+ corresponding index in the array.
- If **cpu** and **index** are specified, install perf ring
- for given **cpu** at **index** in the array (single ring).
+ If **cpu** and **index** are specified, install perf ring for given **cpu**
+ at **index** in the array (single ring).
- Note that installing a perf ring into an array will silently
- replace any existing ring. Any other application will stop
- receiving events if it installed its rings earlier.
+ Note that installing a perf ring into an array will silently replace any
+ existing ring. Any other application will stop receiving events if it
+ installed its rings earlier.
- **bpftool map peek** *MAP*
- Peek next value in the queue or stack.
+bpftool map peek *MAP*
+ Peek next value in the queue or stack.
- **bpftool map push** *MAP* **value** *VALUE*
- Push *VALUE* onto the stack.
+bpftool map push *MAP* value *VALUE*
+ Push *VALUE* onto the stack.
- **bpftool map pop** *MAP*
- Pop and print value from the stack.
+bpftool map pop *MAP*
+ Pop and print value from the stack.
- **bpftool map enqueue** *MAP* **value** *VALUE*
- Enqueue *VALUE* into the queue.
+bpftool map enqueue *MAP* value *VALUE*
+ Enqueue *VALUE* into the queue.
- **bpftool map dequeue** *MAP*
- Dequeue and print value from the queue.
+bpftool map dequeue *MAP*
+ Dequeue and print value from the queue.
- **bpftool map freeze** *MAP*
- Freeze the map as read-only from user space. Entries from a
- frozen map can not longer be updated or deleted with the
- **bpf**\ () system call. This operation is not reversible,
- and the map remains immutable from user space until its
- destruction. However, read and write permissions for BPF
- programs to the map remain unchanged.
+bpftool map freeze *MAP*
+ Freeze the map as read-only from user space. Entries from a frozen map can
+ no longer be updated or deleted with the **bpf**\ () system call. This
+ operation is not reversible, and the map remains immutable from user space
+ until its destruction. However, read and write permissions for BPF programs
+ to the map remain unchanged.
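+
+ A typical sequence (the pinned map below is only an example) populates the
+ map from user space and then seals it::
+
+   # bpftool map update pinned /sys/fs/bpf/my_hash key 0 0 0 1 value 1 2 3 4 5 6 7 8
+   # bpftool map freeze pinned /sys/fs/bpf/my_hash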
- **bpftool map help**
- Print short help message.
+bpftool map help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
- -f, --bpffs
- Show file names of pinned maps.
+-f, --bpffs
+ Show file names of pinned maps.
- -n, --nomount
- Do not automatically attempt to mount any virtual file system
- (such as tracefs or BPF virtual file system) when necessary.
+-n, --nomount
+ Do not automatically attempt to mount any virtual file system (such as
+ tracefs or BPF virtual file system) when necessary.
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-net.rst b/tools/bpf/bpftool/Documentation/bpftool-net.rst
index dd3f9469765b..a9ed8992800f 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-net.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-net.rst
@@ -14,76 +14,76 @@ tool for inspection of networking related bpf prog attachments
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **net** *COMMAND*
+**bpftool** [*OPTIONS*] **net** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| }
+*OPTIONS* := { |COMMON_OPTIONS| }
- *COMMANDS* :=
- { **show** | **list** | **attach** | **detach** | **help** }
+*COMMANDS* := { **show** | **list** | **attach** | **detach** | **help** }
NET COMMANDS
============
-| **bpftool** **net** { **show** | **list** } [ **dev** *NAME* ]
-| **bpftool** **net attach** *ATTACH_TYPE* *PROG* **dev** *NAME* [ **overwrite** ]
-| **bpftool** **net detach** *ATTACH_TYPE* **dev** *NAME*
-| **bpftool** **net help**
+| **bpftool** **net** { **show** | **list** } [ **dev** *NAME* ]
+| **bpftool** **net attach** *ATTACH_TYPE* *PROG* **dev** *NAME* [ **overwrite** ]
+| **bpftool** **net detach** *ATTACH_TYPE* **dev** *NAME*
+| **bpftool** **net help**
|
-| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* }
-| *ATTACH_TYPE* := { **xdp** | **xdpgeneric** | **xdpdrv** | **xdpoffload** }
+| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* | **name** *PROG_NAME* }
+| *ATTACH_TYPE* := { **xdp** | **xdpgeneric** | **xdpdrv** | **xdpoffload** | **tcx_ingress** | **tcx_egress** }
DESCRIPTION
===========
- **bpftool net { show | list }** [ **dev** *NAME* ]
- List bpf program attachments in the kernel networking subsystem.
-
- Currently, device driver xdp attachments, tcx, netkit and old-style tc
- classifier/action attachments, flow_dissector as well as netfilter
- attachments are implemented, i.e., for
- program types **BPF_PROG_TYPE_XDP**, **BPF_PROG_TYPE_SCHED_CLS**,
- **BPF_PROG_TYPE_SCHED_ACT**, **BPF_PROG_TYPE_FLOW_DISSECTOR**,
- **BPF_PROG_TYPE_NETFILTER**.
-
- For programs attached to a particular cgroup, e.g.,
- **BPF_PROG_TYPE_CGROUP_SKB**, **BPF_PROG_TYPE_CGROUP_SOCK**,
- **BPF_PROG_TYPE_SOCK_OPS** and **BPF_PROG_TYPE_CGROUP_SOCK_ADDR**,
- users can use **bpftool cgroup** to dump cgroup attachments.
- For sk_{filter, skb, msg, reuseport} and lwt/seg6
- bpf programs, users should consult other tools, e.g., iproute2.
-
- The current output will start with all xdp program attachments, followed by
- all tcx, netkit, then tc class/qdisc bpf program attachments, then flow_dissector
- and finally netfilter programs. Both xdp programs and tcx/netkit/tc programs are
- ordered based on ifindex number. If multiple bpf programs attached
- to the same networking device through **tc**, the order will be first
- all bpf programs attached to tcx, netkit, then tc classes, then all bpf programs
- attached to non clsact qdiscs, and finally all bpf programs attached
- to root and clsact qdisc.
-
- **bpftool** **net attach** *ATTACH_TYPE* *PROG* **dev** *NAME* [ **overwrite** ]
- Attach bpf program *PROG* to network interface *NAME* with
- type specified by *ATTACH_TYPE*. Previously attached bpf program
- can be replaced by the command used with **overwrite** option.
- Currently, only XDP-related modes are supported for *ATTACH_TYPE*.
-
- *ATTACH_TYPE* can be of:
- **xdp** - try native XDP and fallback to generic XDP if NIC driver does not support it;
- **xdpgeneric** - Generic XDP. runs at generic XDP hook when packet already enters receive path as skb;
- **xdpdrv** - Native XDP. runs earliest point in driver's receive path;
- **xdpoffload** - Offload XDP. runs directly on NIC on each packet reception;
-
- **bpftool** **net detach** *ATTACH_TYPE* **dev** *NAME*
- Detach bpf program attached to network interface *NAME* with
- type specified by *ATTACH_TYPE*. To detach bpf program, same
- *ATTACH_TYPE* previously used for attach must be specified.
- Currently, only XDP-related modes are supported for *ATTACH_TYPE*.
-
- **bpftool net help**
- Print short help message.
+bpftool net { show | list } [ dev *NAME* ]
+ List bpf program attachments in the kernel networking subsystem.
+
+ Currently, device driver xdp attachments, tcx, netkit and old-style tc
+ classifier/action attachments, flow_dissector as well as netfilter
+ attachments are implemented, i.e., for program types **BPF_PROG_TYPE_XDP**,
+ **BPF_PROG_TYPE_SCHED_CLS**, **BPF_PROG_TYPE_SCHED_ACT**,
+ **BPF_PROG_TYPE_FLOW_DISSECTOR**, **BPF_PROG_TYPE_NETFILTER**.
+
+ For programs attached to a particular cgroup, e.g.,
+ **BPF_PROG_TYPE_CGROUP_SKB**, **BPF_PROG_TYPE_CGROUP_SOCK**,
+ **BPF_PROG_TYPE_SOCK_OPS** and **BPF_PROG_TYPE_CGROUP_SOCK_ADDR**, users
+ can use **bpftool cgroup** to dump cgroup attachments. For sk_{filter, skb,
+ msg, reuseport} and lwt/seg6 bpf programs, users should consult other
+ tools, e.g., iproute2.
+
+ The current output will start with all xdp program attachments, followed by
+ all tcx, netkit, then tc class/qdisc bpf program attachments, then
+ flow_dissector and finally netfilter programs. Both xdp programs and
+ tcx/netkit/tc programs are ordered based on ifindex number. If multiple bpf
+ programs are attached to the same networking device through **tc**, the order
+ will be first all bpf programs attached to tcx, netkit, then tc classes,
+ then all bpf programs attached to non clsact qdiscs, and finally all bpf
+ programs attached to root and clsact qdisc.
+
+bpftool net attach *ATTACH_TYPE* *PROG* dev *NAME* [ overwrite ]
+ Attach bpf program *PROG* to network interface *NAME* with type specified
+ by *ATTACH_TYPE*. Previously attached bpf program can be replaced by the
+ command used with **overwrite** option. Currently, only XDP- and
+ TCX-related modes are supported for *ATTACH_TYPE*.
+
+ *ATTACH_TYPE* can be one of:
+ **xdp** - try native XDP and fall back to generic XDP if the NIC driver does not support it;
+ **xdpgeneric** - Generic XDP. runs at generic XDP hook when packet already enters receive path as skb;
+ **xdpdrv** - Native XDP. runs earliest point in driver's receive path;
+ **xdpoffload** - Offload XDP. runs directly on NIC on each packet reception;
+ **tcx_ingress** - Ingress TCX. runs on ingress net traffic;
+ **tcx_egress** - Egress TCX. runs on egress net traffic;
+
+bpftool net detach *ATTACH_TYPE* dev *NAME*
+ Detach bpf program attached to network interface *NAME* with type specified
+ by *ATTACH_TYPE*. To detach bpf program, same *ATTACH_TYPE* previously used
+ for attach must be specified. Currently, only XDP- and TCX-related modes
+ are supported for *ATTACH_TYPE*.
+
+bpftool net help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
EXAMPLES
========
@@ -180,3 +180,23 @@ EXAMPLES
::
xdp:
+
+|
+| **# bpftool net attach tcx_ingress name tc_prog dev lo**
+| **# bpftool net**
+|
+
+::
+
+ tc:
+ lo(1) tcx/ingress tc_prog prog_id 29
+
+|
+| **# bpftool net attach tcx_ingress name tc_prog dev lo**
+| **# bpftool net detach tcx_ingress dev lo**
+| **# bpftool net**
+|
+
+::
+
+ tc:
diff --git a/tools/bpf/bpftool/Documentation/bpftool-perf.rst b/tools/bpf/bpftool/Documentation/bpftool-perf.rst
index 5fea633a82f1..8c1ae55be596 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-perf.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-perf.rst
@@ -14,37 +14,37 @@ tool for inspection of perf related bpf prog attachments
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **perf** *COMMAND*
+**bpftool** [*OPTIONS*] **perf** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| }
+*OPTIONS* := { |COMMON_OPTIONS| }
- *COMMANDS* :=
- { **show** | **list** | **help** }
+*COMMANDS* :=
+{ **show** | **list** | **help** }
PERF COMMANDS
=============
-| **bpftool** **perf** { **show** | **list** }
-| **bpftool** **perf help**
+| **bpftool** **perf** { **show** | **list** }
+| **bpftool** **perf help**
DESCRIPTION
===========
- **bpftool perf { show | list }**
- List all raw_tracepoint, tracepoint, kprobe attachment in the system.
+bpftool perf { show | list }
+  List all raw_tracepoint, tracepoint, and kprobe attachments in the system.
- Output will start with process id and file descriptor in that process,
- followed by bpf program id, attachment information, and attachment point.
- The attachment point for raw_tracepoint/tracepoint is the trace probe name.
- The attachment point for k[ret]probe is either symbol name and offset,
- or a kernel virtual address.
- The attachment point for u[ret]probe is the file name and the file offset.
+ Output will start with process id and file descriptor in that process,
+ followed by bpf program id, attachment information, and attachment point.
+ The attachment point for raw_tracepoint/tracepoint is the trace probe name.
+ The attachment point for k[ret]probe is either symbol name and offset, or a
+ kernel virtual address. The attachment point for u[ret]probe is the file
+ name and the file offset.
- **bpftool perf help**
- Print short help message.
+bpftool perf help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
index 58e6a5b10ef7..009633294b09 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
@@ -14,250 +14,251 @@ tool for inspection and simple manipulation of eBPF progs
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **prog** *COMMAND*
+**bpftool** [*OPTIONS*] **prog** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| |
- { **-f** | **--bpffs** } | { **-m** | **--mapcompat** } | { **-n** | **--nomount** } |
- { **-L** | **--use-loader** } }
+*OPTIONS* := { |COMMON_OPTIONS| |
+{ **-f** | **--bpffs** } | { **-m** | **--mapcompat** } | { **-n** | **--nomount** } |
+{ **-L** | **--use-loader** } | [ { **-S** | **--sign** } **-k** <private_key.pem> **-i** <certificate.x509> ] }
- *COMMANDS* :=
- { **show** | **list** | **dump xlated** | **dump jited** | **pin** | **load** |
- **loadall** | **help** }
+*COMMANDS* :=
+{ **show** | **list** | **dump xlated** | **dump jited** | **pin** | **load** |
+**loadall** | **help** }
PROG COMMANDS
=============
-| **bpftool** **prog** { **show** | **list** } [*PROG*]
-| **bpftool** **prog dump xlated** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] [**visual**] }]
-| **bpftool** **prog dump jited** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] }]
-| **bpftool** **prog pin** *PROG* *FILE*
-| **bpftool** **prog** { **load** | **loadall** } *OBJ* *PATH* [**type** *TYPE*] [**map** { **idx** *IDX* | **name** *NAME* } *MAP*] [{ **offload_dev** | **xdpmeta_dev** } *NAME*] [**pinmaps** *MAP_DIR*] [**autoattach**]
-| **bpftool** **prog attach** *PROG* *ATTACH_TYPE* [*MAP*]
-| **bpftool** **prog detach** *PROG* *ATTACH_TYPE* [*MAP*]
-| **bpftool** **prog tracelog**
-| **bpftool** **prog run** *PROG* **data_in** *FILE* [**data_out** *FILE* [**data_size_out** *L*]] [**ctx_in** *FILE* [**ctx_out** *FILE* [**ctx_size_out** *M*]]] [**repeat** *N*]
-| **bpftool** **prog profile** *PROG* [**duration** *DURATION*] *METRICs*
-| **bpftool** **prog help**
+| **bpftool** **prog** { **show** | **list** } [*PROG*]
+| **bpftool** **prog dump xlated** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] [**visual**] }]
+| **bpftool** **prog dump jited** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] }]
+| **bpftool** **prog pin** *PROG* *FILE*
+| **bpftool** **prog** { **load** | **loadall** } *OBJ* *PATH* [**type** *TYPE*] [**map** { **idx** *IDX* | **name** *NAME* } *MAP*] [{ **offload_dev** | **xdpmeta_dev** } *NAME*] [**pinmaps** *MAP_DIR*] [**autoattach**] [**kernel_btf** *BTF_FILE*]
+| **bpftool** **prog attach** *PROG* *ATTACH_TYPE* [*MAP*]
+| **bpftool** **prog detach** *PROG* *ATTACH_TYPE* [*MAP*]
+| **bpftool** **prog tracelog**
+| **bpftool** **prog tracelog** [ { **stdout** | **stderr** } *PROG* ]
+| **bpftool** **prog run** *PROG* **data_in** *FILE* [**data_out** *FILE* [**data_size_out** *L*]] [**ctx_in** *FILE* [**ctx_out** *FILE* [**ctx_size_out** *M*]]] [**repeat** *N*]
+| **bpftool** **prog profile** *PROG* [**duration** *DURATION*] *METRICs*
+| **bpftool** **prog help**
|
-| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
-| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* | **name** *PROG_NAME* }
-| *TYPE* := {
-| **socket** | **kprobe** | **kretprobe** | **classifier** | **action** |
-| **tracepoint** | **raw_tracepoint** | **xdp** | **perf_event** | **cgroup/skb** |
-| **cgroup/sock** | **cgroup/dev** | **lwt_in** | **lwt_out** | **lwt_xmit** |
-| **lwt_seg6local** | **sockops** | **sk_skb** | **sk_msg** | **lirc_mode2** |
-| **cgroup/bind4** | **cgroup/bind6** | **cgroup/post_bind4** | **cgroup/post_bind6** |
-| **cgroup/connect4** | **cgroup/connect6** | **cgroup/connect_unix** |
-| **cgroup/getpeername4** | **cgroup/getpeername6** | **cgroup/getpeername_unix** |
-| **cgroup/getsockname4** | **cgroup/getsockname6** | **cgroup/getsockname_unix** |
-| **cgroup/sendmsg4** | **cgroup/sendmsg6** | **cgroup/sendmsg_unix** |
-| **cgroup/recvmsg4** | **cgroup/recvmsg6** | **cgroup/recvmsg_unix** | **cgroup/sysctl** |
-| **cgroup/getsockopt** | **cgroup/setsockopt** | **cgroup/sock_release** |
-| **struct_ops** | **fentry** | **fexit** | **freplace** | **sk_lookup**
-| }
-| *ATTACH_TYPE* := {
-| **sk_msg_verdict** | **sk_skb_verdict** | **sk_skb_stream_verdict** |
-| **sk_skb_stream_parser** | **flow_dissector**
-| }
-| *METRICs* := {
-| **cycles** | **instructions** | **l1d_loads** | **llc_misses** |
-| **itlb_misses** | **dtlb_misses**
-| }
+| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* | **name** *MAP_NAME* }
+| *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* | **name** *PROG_NAME* }
+| *TYPE* := {
+| **socket** | **kprobe** | **kretprobe** | **classifier** | **action** |
+| **tracepoint** | **raw_tracepoint** | **xdp** | **perf_event** | **cgroup/skb** |
+| **cgroup/sock** | **cgroup/dev** | **lwt_in** | **lwt_out** | **lwt_xmit** |
+| **lwt_seg6local** | **sockops** | **sk_skb** | **sk_msg** | **lirc_mode2** |
+| **cgroup/bind4** | **cgroup/bind6** | **cgroup/post_bind4** | **cgroup/post_bind6** |
+| **cgroup/connect4** | **cgroup/connect6** | **cgroup/connect_unix** |
+| **cgroup/getpeername4** | **cgroup/getpeername6** | **cgroup/getpeername_unix** |
+| **cgroup/getsockname4** | **cgroup/getsockname6** | **cgroup/getsockname_unix** |
+| **cgroup/sendmsg4** | **cgroup/sendmsg6** | **cgroup/sendmsg_unix** |
+| **cgroup/recvmsg4** | **cgroup/recvmsg6** | **cgroup/recvmsg_unix** | **cgroup/sysctl** |
+| **cgroup/getsockopt** | **cgroup/setsockopt** | **cgroup/sock_release** |
+| **struct_ops** | **fentry** | **fexit** | **freplace** | **sk_lookup**
+| }
+| *ATTACH_TYPE* := {
+| **sk_msg_verdict** | **sk_skb_verdict** | **sk_skb_stream_verdict** |
+| **sk_skb_stream_parser** | **flow_dissector**
+| }
+| *METRICs* := {
+| **cycles** | **instructions** | **l1d_loads** | **llc_misses** |
+| **itlb_misses** | **dtlb_misses**
+| }
DESCRIPTION
===========
- **bpftool prog { show | list }** [*PROG*]
- Show information about loaded programs. If *PROG* is
- specified show information only about given programs,
- otherwise list all programs currently loaded on the system.
- In case of **tag** or **name**, *PROG* may match several
- programs which will all be shown.
-
- Output will start with program ID followed by program type and
- zero or more named attributes (depending on kernel version).
-
- Since Linux 5.1 the kernel can collect statistics on BPF
- programs (such as the total time spent running the program,
- and the number of times it was run). If available, bpftool
- shows such statistics. However, the kernel does not collect
- them by defaults, as it slightly impacts performance on each
- program run. Activation or deactivation of the feature is
- performed via the **kernel.bpf_stats_enabled** sysctl knob.
-
- Since Linux 5.8 bpftool is able to discover information about
- processes that hold open file descriptors (FDs) against BPF
- programs. On such kernels bpftool will automatically emit this
- information as well.
-
- **bpftool prog dump xlated** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] [**visual**] }]
- Dump eBPF instructions of the programs from the kernel. By
- default, eBPF will be disassembled and printed to standard
- output in human-readable format. In this case, **opcodes**
- controls if raw opcodes should be printed as well.
-
- In case of **tag** or **name**, *PROG* may match several
- programs which will all be dumped. However, if **file** or
- **visual** is specified, *PROG* must match a single program.
-
- If **file** is specified, the binary image will instead be
- written to *FILE*.
-
- If **visual** is specified, control flow graph (CFG) will be
- built instead, and eBPF instructions will be presented with
- CFG in DOT format, on standard output.
-
- If the programs have line_info available, the source line will
- be displayed. If **linum** is specified, the filename, line
- number and line column will also be displayed.
-
- **bpftool prog dump jited** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] }]
- Dump jited image (host machine code) of the program.
-
- If *FILE* is specified image will be written to a file,
- otherwise it will be disassembled and printed to stdout.
- *PROG* must match a single program when **file** is specified.
-
- **opcodes** controls if raw opcodes will be printed.
-
- If the prog has line_info available, the source line will
- be displayed. If **linum** is specified, the filename, line
- number and line column will also be displayed.
-
- **bpftool prog pin** *PROG* *FILE*
- Pin program *PROG* as *FILE*.
-
- Note: *FILE* must be located in *bpffs* mount. It must not
- contain a dot character ('.'), which is reserved for future
- extensions of *bpffs*.
-
- **bpftool prog { load | loadall }** *OBJ* *PATH* [**type** *TYPE*] [**map** { **idx** *IDX* | **name** *NAME* } *MAP*] [{ **offload_dev** | **xdpmeta_dev** } *NAME*] [**pinmaps** *MAP_DIR*] [**autoattach**]
- Load bpf program(s) from binary *OBJ* and pin as *PATH*.
- **bpftool prog load** pins only the first program from the
- *OBJ* as *PATH*. **bpftool prog loadall** pins all programs
- from the *OBJ* under *PATH* directory.
- **type** is optional, if not specified program type will be
- inferred from section names.
- By default bpftool will create new maps as declared in the ELF
- object being loaded. **map** parameter allows for the reuse
- of existing maps. It can be specified multiple times, each
- time for a different map. *IDX* refers to index of the map
- to be replaced in the ELF file counting from 0, while *NAME*
- allows to replace a map by name. *MAP* specifies the map to
- use, referring to it by **id** or through a **pinned** file.
- If **offload_dev** *NAME* is specified program will be loaded
- onto given networking device (offload).
- If **xdpmeta_dev** *NAME* is specified program will become
- device-bound without offloading, this facilitates access
- to XDP metadata.
- Optional **pinmaps** argument can be provided to pin all
- maps under *MAP_DIR* directory.
-
- If **autoattach** is specified program will be attached
- before pin. In that case, only the link (representing the
- program attached to its hook) is pinned, not the program as
- such, so the path won't show in **bpftool prog show -f**,
- only show in **bpftool link show -f**. Also, this only works
- when bpftool (libbpf) is able to infer all necessary
- information from the object file, in particular, it's not
- supported for all program types. If a program does not
- support autoattach, bpftool falls back to regular pinning
- for that program instead.
-
- Note: *PATH* must be located in *bpffs* mount. It must not
- contain a dot character ('.'), which is reserved for future
- extensions of *bpffs*.
-
- **bpftool prog attach** *PROG* *ATTACH_TYPE* [*MAP*]
- Attach bpf program *PROG* (with type specified by
- *ATTACH_TYPE*). Most *ATTACH_TYPEs* require a *MAP*
- parameter, with the exception of *flow_dissector* which is
- attached to current networking name space.
-
- **bpftool prog detach** *PROG* *ATTACH_TYPE* [*MAP*]
- Detach bpf program *PROG* (with type specified by
- *ATTACH_TYPE*). Most *ATTACH_TYPEs* require a *MAP*
- parameter, with the exception of *flow_dissector* which is
- detached from the current networking name space.
-
- **bpftool prog tracelog**
- Dump the trace pipe of the system to the console (stdout).
- Hit <Ctrl+C> to stop printing. BPF programs can write to this
- trace pipe at runtime with the **bpf_trace_printk**\ () helper.
- This should be used only for debugging purposes. For
- streaming data from BPF programs to user space, one can use
- perf events (see also **bpftool-map**\ (8)).
-
- **bpftool prog run** *PROG* **data_in** *FILE* [**data_out** *FILE* [**data_size_out** *L*]] [**ctx_in** *FILE* [**ctx_out** *FILE* [**ctx_size_out** *M*]]] [**repeat** *N*]
- Run BPF program *PROG* in the kernel testing infrastructure
- for BPF, meaning that the program works on the data and
- context provided by the user, and not on actual packets or
- monitored functions etc. Return value and duration for the
- test run are printed out to the console.
-
- Input data is read from the *FILE* passed with **data_in**.
- If this *FILE* is "**-**", input data is read from standard
- input. Input context, if any, is read from *FILE* passed with
- **ctx_in**. Again, "**-**" can be used to read from standard
- input, but only if standard input is not already in use for
- input data. If a *FILE* is passed with **data_out**, output
- data is written to that file. Similarly, output context is
- written to the *FILE* passed with **ctx_out**. For both
- output flows, "**-**" can be used to print to the standard
- output (as plain text, or JSON if relevant option was
- passed). If output keywords are omitted, output data and
- context are discarded. Keywords **data_size_out** and
- **ctx_size_out** are used to pass the size (in bytes) for the
- output buffers to the kernel, although the default of 32 kB
- should be more than enough for most cases.
-
- Keyword **repeat** is used to indicate the number of
- consecutive runs to perform. Note that output data and
- context printed to files correspond to the last of those
- runs. The duration printed out at the end of the runs is an
- average over all runs performed by the command.
-
- Not all program types support test run. Among those which do,
- not all of them can take the **ctx_in**/**ctx_out**
- arguments. bpftool does not perform checks on program types.
-
- **bpftool prog profile** *PROG* [**duration** *DURATION*] *METRICs*
- Profile *METRICs* for bpf program *PROG* for *DURATION*
- seconds or until user hits <Ctrl+C>. *DURATION* is optional.
- If *DURATION* is not specified, the profiling will run up to
- **UINT_MAX** seconds.
-
- **bpftool prog help**
- Print short help message.
+bpftool prog { show | list } [*PROG*]
+  Show information about loaded programs. If *PROG* is specified, show
+  information only about the given programs, otherwise list all programs
+  currently loaded on the system. In case of **tag** or **name**, *PROG* may
+  match several programs, which will all be shown.
+
+ Output will start with program ID followed by program type and zero or more
+ named attributes (depending on kernel version).
+
+  Since Linux 5.1 the kernel can collect statistics on BPF programs (such as
+  the total time spent running the program, and the number of times it was
+  run). If available, bpftool shows such statistics. However, the kernel does
+  not collect them by default, as this slightly impacts performance on each
+  program run. The feature is switched on or off via the
+  **kernel.bpf_stats_enabled** sysctl knob (see also the sketch after this
+  entry).
+
+ Since Linux 5.8 bpftool is able to discover information about processes
+ that hold open file descriptors (FDs) against BPF programs. On such kernels
+ bpftool will automatically emit this information as well.
+
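The sysctl is the administrative knob mentioned above; for completeness, here is a minimal, illustrative libbpf sketch of the per-file-descriptor alternative (not something this man page documents)::

  #include <bpf/bpf.h>

  /* Sketch: enable run-time statistics (the run_time_ns / run_cnt fields
   * shown by "bpftool prog show") for as long as the returned fd stays
   * open; closing it falls back to the kernel.bpf_stats_enabled setting.
   */
  static int enable_run_time_stats(void)
  {
          return bpf_enable_stats(BPF_STATS_RUN_TIME);
  }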
+bpftool prog dump xlated *PROG* [{ file *FILE* | [opcodes] [linum] [visual] }]
+ Dump eBPF instructions of the programs from the kernel. By default, eBPF
+ will be disassembled and printed to standard output in human-readable
+ format. In this case, **opcodes** controls if raw opcodes should be printed
+ as well.
+
+ In case of **tag** or **name**, *PROG* may match several programs which
+ will all be dumped. However, if **file** or **visual** is specified,
+ *PROG* must match a single program.
+
+ If **file** is specified, the binary image will instead be written to
+ *FILE*.
+
+ If **visual** is specified, control flow graph (CFG) will be built instead,
+ and eBPF instructions will be presented with CFG in DOT format, on standard
+ output.
+
+ If the programs have line_info available, the source line will be
+ displayed. If **linum** is specified, the filename, line number and line
+ column will also be displayed.
+
+bpftool prog dump jited *PROG* [{ file *FILE* | [opcodes] [linum] }]
+ Dump jited image (host machine code) of the program.
+
+  If *FILE* is specified, the image will be written to a file; otherwise it
+  will be disassembled and printed to stdout. *PROG* must match a single
+  program when **file** is specified.
+
+ **opcodes** controls if raw opcodes will be printed.
+
+ If the prog has line_info available, the source line will be displayed. If
+ **linum** is specified, the filename, line number and line column will also
+ be displayed.
+
+bpftool prog pin *PROG* *FILE*
+ Pin program *PROG* as *FILE*.
+
+ Note: *FILE* must be located in *bpffs* mount. It must not contain a dot
+ character ('.'), which is reserved for future extensions of *bpffs*.
+
+bpftool prog { load | loadall } *OBJ* *PATH* [type *TYPE*] [map { idx *IDX* | name *NAME* } *MAP*] [{ offload_dev | xdpmeta_dev } *NAME*] [pinmaps *MAP_DIR*] [autoattach] [kernel_btf *BTF_FILE*]
+  Load bpf program(s) from binary *OBJ* and pin as *PATH*. **bpftool prog
+  load** pins only the first program from the *OBJ* as *PATH*. **bpftool prog
+  loadall** pins all programs from the *OBJ* under the *PATH* directory.
+  **type** is optional; if not specified, the program type will be inferred
+  from section names. By default bpftool will create new maps as declared in
+  the ELF object being loaded. The **map** parameter allows for the reuse of
+  existing maps. It can be specified multiple times, each time for a
+  different map. *IDX* refers to the index of the map to be replaced in the
+  ELF file, counting from 0, while *NAME* allows replacing a map by name.
+  *MAP* specifies the map to use, referring to it by **id** or through a
+  **pinned** file. If **offload_dev** *NAME* is specified, the program will
+  be loaded onto the given networking device (offload). If **xdpmeta_dev**
+  *NAME* is specified, the program will become device-bound without
+  offloading; this facilitates access to XDP metadata. The optional
+  **pinmaps** argument can be provided to pin all maps under the *MAP_DIR*
+  directory.
+
+  If **autoattach** is specified, the program will be attached before
+  pinning. In that case, only the link (representing the program attached to
+  its hook) is pinned, not the program as such, so the path will not show up
+  in **bpftool prog show -f**, only in **bpftool link show -f**. Also, this
+  only works when bpftool (libbpf) is able to infer all necessary information
+  from the object file; in particular, it is not supported for all program
+  types. If a program does not support autoattach, bpftool falls back to
+  regular pinning for that program instead.
+
+  The **kernel_btf** option allows specifying an external BTF file to replace
+  the system's own vmlinux BTF file for CO-RE relocations (see the sketch
+  after this entry). Note that any other feature relying on BTF (such as
+  fentry/fexit programs or struct_ops) requires the BTF file for the actual
+  kernel running on the host, often exposed at /sys/kernel/btf/vmlinux.
+
+ Note: *PATH* must be located in *bpffs* mount. It must not contain a dot
+ character ('.'), which is reserved for future extensions of *bpffs*.
+
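The sketch referenced above: the libbpf mechanism behind **kernel_btf** is the custom BTF path in the object open options. Paths in this illustrative snippet are placeholders::

  #include <bpf/libbpf.h>

  /* Sketch: open an object against an external BTF file, as
   * "bpftool prog load prog.o /sys/fs/bpf/prog kernel_btf ./vmlinux.btf"
   * would do before loading.
   */
  static struct bpf_object *open_with_custom_btf(const char *obj_path,
                                                 const char *btf_path)
  {
          LIBBPF_OPTS(bpf_object_open_opts, opts,
                  /* used instead of /sys/kernel/btf/vmlinux for CO-RE */
                  .btf_custom_path = btf_path,
          );

          return bpf_object__open_file(obj_path, &opts);
  }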
+bpftool prog attach *PROG* *ATTACH_TYPE* [*MAP*]
+ Attach bpf program *PROG* (with type specified by *ATTACH_TYPE*). Most
+ *ATTACH_TYPEs* require a *MAP* parameter, with the exception of
+  *flow_dissector*, which is attached to the current network namespace.
+
+bpftool prog detach *PROG* *ATTACH_TYPE* [*MAP*]
+ Detach bpf program *PROG* (with type specified by *ATTACH_TYPE*). Most
+ *ATTACH_TYPEs* require a *MAP* parameter, with the exception of
+  *flow_dissector*, which is detached from the current network namespace.
+
+bpftool prog tracelog
+ Dump the trace pipe of the system to the console (stdout). Hit <Ctrl+C> to
+ stop printing. BPF programs can write to this trace pipe at runtime with
+ the **bpf_trace_printk**\ () helper. This should be used only for debugging
+ purposes. For streaming data from BPF programs to user space, one can use
+ perf events (see also **bpftool-map**\ (8)).
+
+bpftool prog tracelog { stdout | stderr } *PROG*
+ Dump the BPF stream of the program. BPF programs can write to these streams
+ at runtime with the **bpf_stream_vprintk**\ () kfunc. The kernel may write
+ error messages to the standard error stream. This facility should be used
+ only for debugging purposes.
+
+bpftool prog run *PROG* data_in *FILE* [data_out *FILE* [data_size_out *L*]] [ctx_in *FILE* [ctx_out *FILE* [ctx_size_out *M*]]] [repeat *N*]
+  Run BPF program *PROG* in the kernel testing infrastructure for BPF,
+  meaning that the program works on the data and context provided by the
+  user, and not on actual packets or monitored functions, etc. The return
+  value and duration of the test run are printed out to the console (a sketch
+  of the underlying test-run call follows this entry).
+
+ Input data is read from the *FILE* passed with **data_in**. If this *FILE*
+ is "**-**", input data is read from standard input. Input context, if any,
+ is read from *FILE* passed with **ctx_in**. Again, "**-**" can be used to
+ read from standard input, but only if standard input is not already in use
+ for input data. If a *FILE* is passed with **data_out**, output data is
+ written to that file. Similarly, output context is written to the *FILE*
+ passed with **ctx_out**. For both output flows, "**-**" can be used to
+ print to the standard output (as plain text, or JSON if relevant option was
+ passed). If output keywords are omitted, output data and context are
+ discarded. Keywords **data_size_out** and **ctx_size_out** are used to pass
+ the size (in bytes) for the output buffers to the kernel, although the
+ default of 32 kB should be more than enough for most cases.
+
+ Keyword **repeat** is used to indicate the number of consecutive runs to
+ perform. Note that output data and context printed to files correspond to
+ the last of those runs. The duration printed out at the end of the runs is
+ an average over all runs performed by the command.
+
+ Not all program types support test run. Among those which do, not all of
+ them can take the **ctx_in**/**ctx_out** arguments. bpftool does not
+ perform checks on program types.
+
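The sketch referenced above: the command drives the kernel's BPF_PROG_TEST_RUN facility, which libbpf exposes as **bpf_prog_test_run_opts**\ (). Buffers, sizes and the single-repeat value below are illustrative placeholders::

  #include <bpf/bpf.h>

  /* Sketch: run a program once on a caller-supplied packet buffer. The
   * kernel fills in retval and the average duration over all repeats.
   */
  static int run_once(int prog_fd, void *pkt, __u32 pkt_sz,
                      void *out, __u32 out_sz)
  {
          LIBBPF_OPTS(bpf_test_run_opts, opts,
                  .data_in = pkt,               /* "data_in FILE"    */
                  .data_size_in = pkt_sz,
                  .data_out = out,              /* "data_out FILE"   */
                  .data_size_out = out_sz,      /* "data_size_out L" */
                  .repeat = 1,                  /* "repeat N"        */
          );
          int err = bpf_prog_test_run_opts(prog_fd, &opts);

          return err ? err : (int)opts.retval;
  }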
+bpftool prog profile *PROG* [duration *DURATION*] *METRICs*
+ Profile *METRICs* for bpf program *PROG* for *DURATION* seconds or until
+ user hits <Ctrl+C>. *DURATION* is optional. If *DURATION* is not specified,
+ the profiling will run up to **UINT_MAX** seconds.
+
+bpftool prog help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
-
- -f, --bpffs
- When showing BPF programs, show file names of pinned
- programs.
-
- -m, --mapcompat
- Allow loading maps with unknown map definitions.
-
- -n, --nomount
- Do not automatically attempt to mount any virtual file system
- (such as tracefs or BPF virtual file system) when necessary.
-
- -L, --use-loader
- Load program as a "loader" program. This is useful to debug
- the generation of such programs. When this option is in
- use, bpftool attempts to load the programs from the object
- file into the kernel, but does not pin them (therefore, the
- *PATH* must not be provided).
-
- When combined with the **-d**\ \|\ **--debug** option,
- additional debug messages are generated, and the execution
- of the loader program will use the **bpf_trace_printk**\ ()
- helper to log each step of loading BTF, creating the maps,
- and loading the programs (see **bpftool prog tracelog** as
- a way to dump those messages).
+.. include:: common_options.rst
+
+-f, --bpffs
+ When showing BPF programs, show file names of pinned programs.
+
+-m, --mapcompat
+ Allow loading maps with unknown map definitions.
+
+-n, --nomount
+ Do not automatically attempt to mount any virtual file system (such as
+ tracefs or BPF virtual file system) when necessary.
+
+-L, --use-loader
+ Load program as a "loader" program. This is useful to debug the generation
+ of such programs. When this option is in use, bpftool attempts to load the
+ programs from the object file into the kernel, but does not pin them
+ (therefore, the *PATH* must not be provided).
+
+ When combined with the **-d**\ \|\ **--debug** option, additional debug
+ messages are generated, and the execution of the loader program will use
+ the **bpf_trace_printk**\ () helper to log each step of loading BTF,
+ creating the maps, and loading the programs (see **bpftool prog tracelog**
+ as a way to dump those messages).
+
+-S, --sign
+ Enable signing of the BPF program before loading. This option must be
+ used with **-k** and **-i**. Using this flag implicitly enables
+ **--use-loader**.
+
+-k <private_key.pem>
+ Path to the private key file in PEM format, required when signing.
+
+-i <certificate.x509>
+ Path to the X.509 certificate file in PEM or DER format, required when
+ signing.
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-struct_ops.rst b/tools/bpf/bpftool/Documentation/bpftool-struct_ops.rst
index 8022b5321dbe..e871b9539ac7 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-struct_ops.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-struct_ops.rst
@@ -14,61 +14,60 @@ tool to register/unregister/introspect BPF struct_ops
SYNOPSIS
========
- **bpftool** [*OPTIONS*] **struct_ops** *COMMAND*
+**bpftool** [*OPTIONS*] **struct_ops** *COMMAND*
- *OPTIONS* := { |COMMON_OPTIONS| }
+*OPTIONS* := { |COMMON_OPTIONS| }
- *COMMANDS* :=
- { **show** | **list** | **dump** | **register** | **unregister** | **help** }
+*COMMANDS* :=
+{ **show** | **list** | **dump** | **register** | **unregister** | **help** }
STRUCT_OPS COMMANDS
===================
-| **bpftool** **struct_ops { show | list }** [*STRUCT_OPS_MAP*]
-| **bpftool** **struct_ops dump** [*STRUCT_OPS_MAP*]
-| **bpftool** **struct_ops register** *OBJ* [*LINK_DIR*]
-| **bpftool** **struct_ops unregister** *STRUCT_OPS_MAP*
-| **bpftool** **struct_ops help**
+| **bpftool** **struct_ops { show | list }** [*STRUCT_OPS_MAP*]
+| **bpftool** **struct_ops dump** [*STRUCT_OPS_MAP*]
+| **bpftool** **struct_ops register** *OBJ* [*LINK_DIR*]
+| **bpftool** **struct_ops unregister** *STRUCT_OPS_MAP*
+| **bpftool** **struct_ops help**
|
-| *STRUCT_OPS_MAP* := { **id** *STRUCT_OPS_MAP_ID* | **name** *STRUCT_OPS_MAP_NAME* }
-| *OBJ* := /a/file/of/bpf_struct_ops.o
+| *STRUCT_OPS_MAP* := { **id** *STRUCT_OPS_MAP_ID* | **name** *STRUCT_OPS_MAP_NAME* }
+| *OBJ* := /a/file/of/bpf_struct_ops.o
DESCRIPTION
===========
- **bpftool struct_ops { show | list }** [*STRUCT_OPS_MAP*]
- Show brief information about the struct_ops in the system.
- If *STRUCT_OPS_MAP* is specified, it shows information only
- for the given struct_ops. Otherwise, it lists all struct_ops
- currently existing in the system.
-
- Output will start with struct_ops map ID, followed by its map
- name and its struct_ops's kernel type.
-
- **bpftool struct_ops dump** [*STRUCT_OPS_MAP*]
- Dump details information about the struct_ops in the system.
- If *STRUCT_OPS_MAP* is specified, it dumps information only
- for the given struct_ops. Otherwise, it dumps all struct_ops
- currently existing in the system.
-
- **bpftool struct_ops register** *OBJ* [*LINK_DIR*]
- Register bpf struct_ops from *OBJ*. All struct_ops under
- the ELF section ".struct_ops" and ".struct_ops.link" will
- be registered to its kernel subsystem. For each
- struct_ops in the ".struct_ops.link" section, a link
- will be created. You can give *LINK_DIR* to provide a
- directory path where these links will be pinned with the
- same name as their corresponding map name.
-
- **bpftool struct_ops unregister** *STRUCT_OPS_MAP*
- Unregister the *STRUCT_OPS_MAP* from the kernel subsystem.
-
- **bpftool struct_ops help**
- Print short help message.
+bpftool struct_ops { show | list } [*STRUCT_OPS_MAP*]
+ Show brief information about the struct_ops in the system. If
+ *STRUCT_OPS_MAP* is specified, it shows information only for the given
+ struct_ops. Otherwise, it lists all struct_ops currently existing in the
+ system.
+
+ Output will start with struct_ops map ID, followed by its map name and its
+ struct_ops's kernel type.
+
+bpftool struct_ops dump [*STRUCT_OPS_MAP*]
+  Dump detailed information about the struct_ops in the system. If
+ *STRUCT_OPS_MAP* is specified, it dumps information only for the given
+ struct_ops. Otherwise, it dumps all struct_ops currently existing in the
+ system.
+
+bpftool struct_ops register *OBJ* [*LINK_DIR*]
+  Register bpf struct_ops from *OBJ*. All struct_ops under the ELF sections
+  ".struct_ops" and ".struct_ops.link" will be registered to their kernel
+  subsystem. For each struct_ops in the ".struct_ops.link" section, a link
+  will be created. You can give *LINK_DIR* to provide a directory path where
+  these links will be pinned with the same name as their corresponding map
+  name (see the sketch after this entry).
+
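The sketch referenced above: per map in the ".struct_ops.link" section, registration plus *LINK_DIR* pinning corresponds roughly to the libbpf calls below; the buffer size and error handling are illustrative only::

  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* Sketch: register one struct_ops map and pin the resulting link under
   * link_dir using the map's name.
   */
  static int register_and_pin(const struct bpf_map *map, const char *link_dir)
  {
          struct bpf_link *link;
          char path[4096];

          link = bpf_map__attach_struct_ops(map);  /* registers with the kernel */
          if (!link)
                  return -1;

          snprintf(path, sizeof(path), "%s/%s", link_dir, bpf_map__name(map));
          return bpf_link__pin(link, path);
  }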
+bpftool struct_ops unregister *STRUCT_OPS_MAP*
+ Unregister the *STRUCT_OPS_MAP* from the kernel subsystem.
+
+bpftool struct_ops help
+ Print short help message.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-token.rst b/tools/bpf/bpftool/Documentation/bpftool-token.rst
new file mode 100644
index 000000000000..d082c499cfe3
--- /dev/null
+++ b/tools/bpf/bpftool/Documentation/bpftool-token.rst
@@ -0,0 +1,64 @@
+.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+
+================
+bpftool-token
+================
+-------------------------------------------------------------------------------
+tool for inspection and simple manipulation of eBPF tokens
+-------------------------------------------------------------------------------
+
+:Manual section: 8
+
+.. include:: substitutions.rst
+
+SYNOPSIS
+========
+
+**bpftool** [*OPTIONS*] **token** *COMMAND*
+
+*OPTIONS* := { |COMMON_OPTIONS| }
+
+*COMMANDS* := { **show** | **list** | **help** }
+
+TOKEN COMMANDS
+==============
+
+| **bpftool** **token** { **show** | **list** }
+| **bpftool** **token help**
+|
+
+DESCRIPTION
+===========
+bpftool token { show | list }
+  List BPF token information for each *bpffs* mount point containing token
+  information on the system. The information includes the mount point path,
+  the allowed **bpf**\ () system call commands, maps, programs, and attach
+  types for the token.
+
+bpftool token help
+ Print short help message.
+
+OPTIONS
+=======
+.. include:: common_options.rst
+
+EXAMPLES
+========
+|
+| **# mkdir -p /sys/fs/bpf/token**
+| **# mount -t bpf bpffs /sys/fs/bpf/token** \
+| **-o delegate_cmds=prog_load:map_create** \
+| **-o delegate_progs=kprobe** \
+| **-o delegate_attachs=xdp**
+| **# bpftool token list**
+
+::
+
+ token_info /sys/fs/bpf/token
+ allowed_cmds:
+ map_create prog_load
+ allowed_maps:
+ allowed_progs:
+ kprobe
+ allowed_attachs:
+ xdp
diff --git a/tools/bpf/bpftool/Documentation/bpftool.rst b/tools/bpf/bpftool/Documentation/bpftool.rst
index 09e4f2ff5658..f38ae5c40439 100644
--- a/tools/bpf/bpftool/Documentation/bpftool.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool.rst
@@ -14,57 +14,57 @@ tool for inspection and simple manipulation of eBPF programs and maps
SYNOPSIS
========
- **bpftool** [*OPTIONS*] *OBJECT* { *COMMAND* | **help** }
+**bpftool** [*OPTIONS*] *OBJECT* { *COMMAND* | **help** }
- **bpftool** **batch file** *FILE*
+**bpftool** **batch file** *FILE*
- **bpftool** **version**
+**bpftool** **version**
- *OBJECT* := { **map** | **prog** | **link** | **cgroup** | **perf** | **net** | **feature** |
- **btf** | **gen** | **struct_ops** | **iter** }
+*OBJECT* := { **map** | **prog** | **link** | **cgroup** | **perf** | **net** | **feature** |
+**btf** | **gen** | **struct_ops** | **iter** }
- *OPTIONS* := { { **-V** | **--version** } | |COMMON_OPTIONS| }
+*OPTIONS* := { { **-V** | **--version** } | |COMMON_OPTIONS| }
- *MAP-COMMANDS* :=
- { **show** | **list** | **create** | **dump** | **update** | **lookup** | **getnext** |
- **delete** | **pin** | **event_pipe** | **help** }
+*MAP-COMMANDS* :=
+{ **show** | **list** | **create** | **dump** | **update** | **lookup** | **getnext** |
+**delete** | **pin** | **event_pipe** | **help** }
- *PROG-COMMANDS* := { **show** | **list** | **dump jited** | **dump xlated** | **pin** |
- **load** | **attach** | **detach** | **help** }
+*PROG-COMMANDS* := { **show** | **list** | **dump jited** | **dump xlated** | **pin** |
+**load** | **attach** | **detach** | **help** }
- *LINK-COMMANDS* := { **show** | **list** | **pin** | **detach** | **help** }
+*LINK-COMMANDS* := { **show** | **list** | **pin** | **detach** | **help** }
- *CGROUP-COMMANDS* := { **show** | **list** | **attach** | **detach** | **help** }
+*CGROUP-COMMANDS* := { **show** | **list** | **attach** | **detach** | **help** }
- *PERF-COMMANDS* := { **show** | **list** | **help** }
+*PERF-COMMANDS* := { **show** | **list** | **help** }
- *NET-COMMANDS* := { **show** | **list** | **help** }
+*NET-COMMANDS* := { **show** | **list** | **help** }
- *FEATURE-COMMANDS* := { **probe** | **help** }
+*FEATURE-COMMANDS* := { **probe** | **help** }
- *BTF-COMMANDS* := { **show** | **list** | **dump** | **help** }
+*BTF-COMMANDS* := { **show** | **list** | **dump** | **help** }
- *GEN-COMMANDS* := { **object** | **skeleton** | **min_core_btf** | **help** }
+*GEN-COMMANDS* := { **object** | **skeleton** | **min_core_btf** | **help** }
- *STRUCT-OPS-COMMANDS* := { **show** | **list** | **dump** | **register** | **unregister** | **help** }
+*STRUCT-OPS-COMMANDS* := { **show** | **list** | **dump** | **register** | **unregister** | **help** }
- *ITER-COMMANDS* := { **pin** | **help** }
+*ITER-COMMANDS* := { **pin** | **help** }
DESCRIPTION
===========
- *bpftool* allows for inspection and simple modification of BPF objects
- on the system.
+*bpftool* allows for inspection and simple modification of BPF objects on the
+system.
- Note that format of the output of all tools is not guaranteed to be
- stable and should not be depended upon.
+Note that the format of the output of all tools is not guaranteed to be
+stable and should not be depended upon.
OPTIONS
=======
- .. include:: common_options.rst
+.. include:: common_options.rst
- -m, --mapcompat
- Allow loading maps with unknown map definitions.
+-m, --mapcompat
+ Allow loading maps with unknown map definitions.
- -n, --nomount
- Do not automatically attempt to mount any virtual file system
- (such as tracefs or BPF virtual file system) when necessary.
+-n, --nomount
+ Do not automatically attempt to mount any virtual file system (such as
+ tracefs or BPF virtual file system) when necessary.
diff --git a/tools/bpf/bpftool/Documentation/common_options.rst b/tools/bpf/bpftool/Documentation/common_options.rst
index 30df7a707f02..9234b9dab768 100644
--- a/tools/bpf/bpftool/Documentation/common_options.rst
+++ b/tools/bpf/bpftool/Documentation/common_options.rst
@@ -1,25 +1,23 @@
.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
-h, --help
- Print short help message (similar to **bpftool help**).
+ Print short help message (similar to **bpftool help**).
-V, --version
- Print bpftool's version number (similar to **bpftool version**), the
- number of the libbpf version in use, and optional features that were
- included when bpftool was compiled. Optional features include linking
- against LLVM or libbfd to provide the disassembler for JIT-ted
- programs (**bpftool prog dump jited**) and usage of BPF skeletons
- (some features like **bpftool prog profile** or showing pids
- associated to BPF objects may rely on it).
+ Print bpftool's version number (similar to **bpftool version**), the number
+ of the libbpf version in use, and optional features that were included when
+ bpftool was compiled. Optional features include linking against LLVM or
+ libbfd to provide the disassembler for JIT-ted programs (**bpftool prog
+ dump jited**) and usage of BPF skeletons (some features like **bpftool prog
+ profile** or showing pids associated to BPF objects may rely on it).
-j, --json
- Generate JSON output. For commands that cannot produce JSON, this
- option has no effect.
+ Generate JSON output. For commands that cannot produce JSON, this option
+ has no effect.
-p, --pretty
- Generate human-readable JSON output. Implies **-j**.
+ Generate human-readable JSON output. Implies **-j**.
-d, --debug
- Print all logs available, even debug-level information. This includes
- logs from libbpf as well as from the verifier, when attempting to
- load programs.
+ Print all logs available, even debug-level information. This includes logs
+ from libbpf as well as from the verifier, when attempting to load programs.
diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
index e9154ace80ff..586d1b2595d1 100644
--- a/tools/bpf/bpftool/Makefile
+++ b/tools/bpf/bpftool/Makefile
@@ -7,12 +7,6 @@ srctree := $(patsubst %/,%,$(dir $(srctree)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
endif
-ifeq ($(V),1)
- Q =
-else
- Q = @
-endif
-
BPF_DIR = $(srctree)/tools/lib/bpf
ifneq ($(OUTPUT),)
@@ -71,7 +65,12 @@ prefix ?= /usr/local
bash_compdir ?= /usr/share/bash-completion/completions
CFLAGS += -O2
-CFLAGS += -W -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers
+CFLAGS += -W
+CFLAGS += -Wall
+CFLAGS += -Wextra
+CFLAGS += -Wformat-signedness
+CFLAGS += -Wno-unused-parameter
+CFLAGS += -Wno-missing-field-initializers
CFLAGS += $(filter-out -Wswitch-enum -Wnested-externs,$(EXTRA_WARNINGS))
CFLAGS += -DPACKAGE='"bpftool"' -D__EXPORTED_HEADERS__ \
-I$(or $(OUTPUT),.) \
@@ -89,6 +88,10 @@ ifneq ($(EXTRA_LDFLAGS),)
LDFLAGS += $(EXTRA_LDFLAGS)
endif
+HOST_CFLAGS := $(subst -I$(LIBBPF_INCLUDE),-I$(LIBBPF_BOOTSTRAP_INCLUDE),\
+ $(subst $(CLANG_CROSS_FLAGS),,$(CFLAGS)))
+HOST_LDFLAGS := $(LDFLAGS)
+
INSTALL ?= install
RM ?= rm -f
@@ -102,6 +105,7 @@ FEATURE_TESTS += libbfd-liberty
FEATURE_TESTS += libbfd-liberty-z
FEATURE_TESTS += disassembler-four-args
FEATURE_TESTS += disassembler-init-styled
+FEATURE_TESTS += libelf-zstd
FEATURE_DISPLAY := clang-bpf-co-re
FEATURE_DISPLAY += llvm
@@ -126,8 +130,14 @@ include $(FEATURES_DUMP)
endif
endif
-LIBS = $(LIBBPF) -lelf -lz
-LIBS_BOOTSTRAP = $(LIBBPF_BOOTSTRAP) -lelf -lz
+LIBS = $(LIBBPF) -lelf -lz -lcrypto
+LIBS_BOOTSTRAP = $(LIBBPF_BOOTSTRAP) -lelf -lz -lcrypto
+
+ifeq ($(feature-libelf-zstd),1)
+LIBS += -lzstd
+LIBS_BOOTSTRAP += -lzstd
+endif
+
ifeq ($(feature-libcap), 1)
CFLAGS += -DUSE_LIBCAP
LIBS += -lcap
@@ -143,7 +153,11 @@ ifeq ($(feature-llvm),1)
# If LLVM is available, use it for JIT disassembly
CFLAGS += -DHAVE_LLVM_SUPPORT
LLVM_CONFIG_LIB_COMPONENTS := mcdisassembler all-targets
- CFLAGS += $(shell $(LLVM_CONFIG) --cflags --libs $(LLVM_CONFIG_LIB_COMPONENTS))
+ # llvm-config always adds -D_GNU_SOURCE, however, it may already be in CFLAGS
+ # (e.g. when bpftool build is called from selftests build as selftests
+ # Makefile includes lib.mk which sets -D_GNU_SOURCE) which would cause
+ # compilation error due to redefinition. Let's filter it out here.
+ CFLAGS += $(filter-out -D_GNU_SOURCE,$(shell $(LLVM_CONFIG) --cflags))
LIBS += $(shell $(LLVM_CONFIG) --libs $(LLVM_CONFIG_LIB_COMPONENTS))
ifeq ($(shell $(LLVM_CONFIG) --shared-mode),static)
LIBS += $(shell $(LLVM_CONFIG) --system-libs $(LLVM_CONFIG_LIB_COMPONENTS))
@@ -178,12 +192,9 @@ ifeq ($(filter -DHAVE_LLVM_SUPPORT -DHAVE_LIBBFD_SUPPORT,$(CFLAGS)),)
SRCS := $(filter-out jit_disasm.c,$(SRCS))
endif
-HOST_CFLAGS = $(subst -I$(LIBBPF_INCLUDE),-I$(LIBBPF_BOOTSTRAP_INCLUDE),\
- $(subst $(CLANG_CROSS_FLAGS),,$(CFLAGS)))
-
BPFTOOL_BOOTSTRAP := $(BOOTSTRAP_OUTPUT)bpftool
-BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o xlated_dumper.o btf_dumper.o disasm.o)
+BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o sign.o)
$(BOOTSTRAP_OBJS): $(LIBBPF_BOOTSTRAP)
OBJS = $(patsubst %.c,$(OUTPUT)%.o,$(SRCS)) $(OUTPUT)disasm.o
@@ -203,10 +214,11 @@ ifeq ($(feature-clang-bpf-co-re),1)
BUILD_BPF_SKELS := 1
-$(OUTPUT)vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL_BOOTSTRAP)
ifeq ($(VMLINUX_H),)
+$(OUTPUT)vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL_BOOTSTRAP)
$(QUIET_GEN)$(BPFTOOL_BOOTSTRAP) btf dump file $< format c > $@
else
+$(OUTPUT)vmlinux.h: $(VMLINUX_H)
$(Q)cp "$(VMLINUX_H)" $@
endif
@@ -231,14 +243,11 @@ endif
CFLAGS += $(if $(BUILD_BPF_SKELS),,-DBPFTOOL_WITHOUT_SKELETONS)
-$(BOOTSTRAP_OUTPUT)disasm.o: $(srctree)/kernel/bpf/disasm.c
- $(QUIET_CC)$(HOSTCC) $(HOST_CFLAGS) -c -MMD $< -o $@
-
$(OUTPUT)disasm.o: $(srctree)/kernel/bpf/disasm.c
$(QUIET_CC)$(CC) $(CFLAGS) -c -MMD $< -o $@
$(BPFTOOL_BOOTSTRAP): $(BOOTSTRAP_OBJS) $(LIBBPF_BOOTSTRAP)
- $(QUIET_LINK)$(HOSTCC) $(HOST_CFLAGS) $(LDFLAGS) $(BOOTSTRAP_OBJS) $(LIBS_BOOTSTRAP) -o $@
+ $(QUIET_LINK)$(HOSTCC) $(HOST_CFLAGS) $(HOST_LDFLAGS) $(BOOTSTRAP_OBJS) $(LIBS_BOOTSTRAP) -o $@
$(OUTPUT)bpftool: $(OBJS) $(LIBBPF)
$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $(OBJS) $(LIBS) -o $@
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 6e4f7ce6bc01..53bcfeb1a76e 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -106,19 +106,19 @@ _bpftool_get_link_ids()
_bpftool_get_obj_map_names()
{
- local obj
+ local obj maps
obj=$1
- maps=$(objdump -j maps -t $obj 2>/dev/null | \
- command awk '/g . maps/ {print $NF}')
+ maps=$(objdump -j .maps -t $obj 2>/dev/null | \
+ command awk '/g . .maps/ {print $NF}')
COMPREPLY+=( $( compgen -W "$maps" -- "$cur" ) )
}
_bpftool_get_obj_map_idxs()
{
- local obj
+ local obj nmaps
obj=$1
@@ -136,7 +136,7 @@ _sysfs_get_netdevs()
# Retrieve type of the map that we are operating on.
_bpftool_map_guess_map_type()
{
- local keyword ref
+ local keyword idx ref=""
for (( idx=3; idx < ${#words[@]}-1; idx++ )); do
case "${words[$((idx-2))]}" in
lookup|update)
@@ -255,13 +255,14 @@ _bpftool_map_update_get_name()
_bpftool()
{
- local cur prev words objword json=0
- _init_completion || return
+ local cur prev words cword comp_args
+ local json=0
+ _init_completion -- "$@" || return
# Deal with options
if [[ ${words[cword]} == -* ]]; then
local c='--version --json --pretty --bpffs --mapcompat --debug \
- --use-loader --base-btf'
+ --use-loader --base-btf --sign -i -k'
COMPREPLY=( $( compgen -W "$c" -- "$cur" ) )
return 0
fi
@@ -282,7 +283,7 @@ _bpftool()
_sysfs_get_netdevs
return 0
;;
- file|pinned|-B|--base-btf)
+ file|pinned|-B|--base-btf|-i|-k)
_filedir
return 0
;;
@@ -293,21 +294,29 @@ _bpftool()
esac
# Remove all options so completions don't have to deal with them.
- local i
+ local i pprev
for (( i=1; i < ${#words[@]}; )); do
- if [[ ${words[i]::1} == - ]] &&
- [[ ${words[i]} != "-B" ]] && [[ ${words[i]} != "--base-btf" ]]; then
- words=( "${words[@]:0:i}" "${words[@]:i+1}" )
- [[ $i -le $cword ]] && cword=$(( cword - 1 ))
- else
- i=$(( ++i ))
- fi
+ case ${words[i]} in
+ # Remove option and its argument
+ -B|--base-btf|-i|-k)
+ words=( "${words[@]:0:i}" "${words[@]:i+2}" )
+ [[ $i -le $(($cword + 1)) ]] && cword=$(( cword - 2 ))
+ ;;
+ # No argument, remove option only
+ -*)
+ words=( "${words[@]:0:i}" "${words[@]:i+1}" )
+ [[ $i -le $cword ]] && cword=$(( cword - 1 ))
+ ;;
+ *)
+ i=$(( ++i ))
+ ;;
+ esac
done
cur=${words[cword]}
prev=${words[cword - 1]}
pprev=${words[cword - 2]}
- local object=${words[1]} command=${words[2]}
+ local object=${words[1]}
if [[ -z $object || $cword -eq 1 ]]; then
case $cur in
@@ -324,8 +333,12 @@ _bpftool()
esac
fi
+ local command=${words[2]}
[[ $command == help ]] && return 0
+ local MAP_TYPE='id pinned name'
+ local PROG_TYPE='id pinned tag name'
+
# Completion depends on object and command in use
case $object in
prog)
@@ -346,8 +359,6 @@ _bpftool()
;;
esac
- local PROG_TYPE='id pinned tag name'
- local MAP_TYPE='id pinned name'
local METRIC_TYPE='cycles instructions l1d_loads llc_misses \
itlb_misses dtlb_misses'
case $command in
@@ -457,7 +468,7 @@ _bpftool()
obj=${words[3]}
if [[ ${words[-4]} == "map" ]]; then
- COMPREPLY=( $( compgen -W "id pinned" -- "$cur" ) )
+ COMPREPLY=( $( compgen -W "$MAP_TYPE" -- "$cur" ) )
return 0
fi
if [[ ${words[-3]} == "map" ]]; then
@@ -502,20 +513,34 @@ _bpftool()
_bpftool_get_map_names
return 0
;;
- pinned|pinmaps)
+ pinned|pinmaps|kernel_btf)
_filedir
return 0
;;
*)
COMPREPLY=( $( compgen -W "map" -- "$cur" ) )
- _bpftool_once_attr 'type pinmaps autoattach'
+ _bpftool_once_attr 'type pinmaps autoattach kernel_btf'
_bpftool_one_of_list 'offload_dev xdpmeta_dev'
return 0
;;
esac
;;
tracelog)
- return 0
+ case $prev in
+ $command)
+ COMPREPLY+=( $( compgen -W "stdout stderr" -- \
+ "$cur" ) )
+ return 0
+ ;;
+ stdout|stderr)
+ COMPREPLY=( $( compgen -W "$PROG_TYPE" -- \
+ "$cur" ) )
+ return 0
+ ;;
+ *)
+ return 0
+ ;;
+ esac
;;
profile)
case $cword in
@@ -541,20 +566,9 @@ _bpftool()
COMPREPLY=( $( compgen -W "$METRIC_TYPE duration" -- "$cur" ) )
return 0
;;
- 6)
- case $prev in
- duration)
- return 0
- ;;
- *)
- COMPREPLY=( $( compgen -W "$METRIC_TYPE" -- "$cur" ) )
- return 0
- ;;
- esac
- return 0
- ;;
*)
- COMPREPLY=( $( compgen -W "$METRIC_TYPE" -- "$cur" ) )
+ [[ $prev == duration ]] && return 0
+ _bpftool_once_attr "$METRIC_TYPE"
return 0
;;
esac
@@ -612,7 +626,7 @@ _bpftool()
return 0
;;
register)
- _filedir
+ [[ $prev == $command ]] && _filedir
return 0
;;
*)
@@ -638,9 +652,12 @@ _bpftool()
pinned)
_filedir
;;
- *)
+ map)
_bpftool_one_of_list $MAP_TYPE
;;
+ *)
+ _bpftool_once_attr 'map'
+ ;;
esac
return 0
;;
@@ -652,7 +669,6 @@ _bpftool()
esac
;;
map)
- local MAP_TYPE='id pinned name'
case $command in
show|list|dump|peek|pop|dequeue|freeze)
case $prev in
@@ -793,13 +809,11 @@ _bpftool()
# map, depending on the type of the map to update.
case "$(_bpftool_map_guess_map_type)" in
array_of_maps|hash_of_maps)
- local MAP_TYPE='id pinned name'
COMPREPLY+=( $( compgen -W "$MAP_TYPE" \
-- "$cur" ) )
return 0
;;
prog_array)
- local PROG_TYPE='id pinned tag name'
COMPREPLY+=( $( compgen -W "$PROG_TYPE" \
-- "$cur" ) )
return 0
@@ -821,7 +835,7 @@ _bpftool()
esac
_bpftool_once_attr 'key'
- local UPDATE_FLAGS='any exist noexist'
+ local UPDATE_FLAGS='any exist noexist' idx
for (( idx=3; idx < ${#words[@]}-1; idx++ )); do
if [[ ${words[idx]} == 'value' ]]; then
# 'value' is present, but is not the last
@@ -893,7 +907,6 @@ _bpftool()
esac
;;
btf)
- local PROG_TYPE='id pinned tag name'
local MAP_TYPE='id pinned name'
case $command in
dump)
@@ -939,16 +952,24 @@ _bpftool()
format)
COMPREPLY=( $( compgen -W "c raw" -- "$cur" ) )
;;
+ root_id)
+ return 0;
+ ;;
+ c)
+ COMPREPLY=( $( compgen -W "unsorted root_id" -- "$cur" ) )
+ ;;
*)
# emit extra options
case ${words[3]} in
id|file)
+ COMPREPLY=( $( compgen -W "root_id" -- "$cur" ) )
_bpftool_once_attr 'format'
;;
map|prog)
if [[ ${words[3]} == "map" ]] && [[ $cword == 6 ]]; then
COMPREPLY+=( $( compgen -W "key value kv all" -- "$cur" ) )
fi
+ COMPREPLY=( $( compgen -W "root_id" -- "$cur" ) )
_bpftool_once_attr 'format'
;;
*)
@@ -1033,7 +1054,6 @@ _bpftool()
local BPFTOOL_CGROUP_ATTACH_TYPES="$(bpftool feature list_builtins attach_types 2>/dev/null | \
grep '^cgroup_')"
local ATTACH_FLAGS='multi override'
- local PROG_TYPE='id pinned tag name'
# Check for $prev = $command first
if [ $prev = $command ]; then
_filedir
@@ -1086,8 +1106,7 @@ _bpftool()
esac
;;
net)
- local PROG_TYPE='id pinned tag name'
- local ATTACH_TYPES='xdp xdpgeneric xdpdrv xdpoffload'
+ local ATTACH_TYPES='xdp xdpgeneric xdpdrv xdpoffload tcx_ingress tcx_egress'
case $command in
show|list)
[[ $prev != "$command" ]] && return 0
@@ -1193,17 +1212,28 @@ _bpftool()
pin|detach)
if [[ $prev == "$command" ]]; then
COMPREPLY=( $( compgen -W "$LINK_TYPE" -- "$cur" ) )
- else
+ elif [[ $pprev == "$command" ]]; then
_filedir
fi
return 0
;;
*)
[[ $prev == $object ]] && \
- COMPREPLY=( $( compgen -W 'help pin show list' -- "$cur" ) )
+ COMPREPLY=( $( compgen -W 'help pin detach show list' -- "$cur" ) )
;;
esac
;;
+ token)
+ case $command in
+ show|list)
+ return 0
+ ;;
+ *)
+ [[ $prev == $object ]] && \
+ COMPREPLY=( $( compgen -W 'help show list' -- "$cur" ) )
+ ;;
+ esac
+ ;;
esac
} &&
complete -F _bpftool bpftool
diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
index 91fcb75babe3..946612029dee 100644
--- a/tools/bpf/bpftool/btf.c
+++ b/tools/bpf/bpftool/btf.c
@@ -1,11 +1,15 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/* Copyright (C) 2019 Facebook */
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
#include <errno.h>
#include <fcntl.h>
#include <linux/err.h>
#include <stdbool.h>
#include <stdio.h>
+#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <linux/btf.h>
@@ -20,6 +24,11 @@
#include "json_writer.h"
#include "main.h"
+#define KFUNC_DECL_TAG "bpf_kfunc"
+#define FASTCALL_DECL_TAG "bpf_fastcall"
+
+#define MAX_ROOT_IDS 16
+
static const char * const btf_kind_str[NR_BTF_KINDS] = {
[BTF_KIND_UNKN] = "UNKNOWN",
[BTF_KIND_INT] = "INT",
@@ -43,6 +52,14 @@ static const char * const btf_kind_str[NR_BTF_KINDS] = {
[BTF_KIND_ENUM64] = "ENUM64",
};
+struct sort_datum {
+ int index;
+ int type_rank;
+ const char *sort_name;
+ const char *own_name;
+ __u64 disambig_hash;
+};
+
static const char *btf_int_enc_str(__u8 encoding)
{
switch (encoding) {
@@ -236,7 +253,7 @@ static int dump_btf_type(const struct btf *btf, __u32 id,
if (btf_kflag(t))
printf("\n\t'%s' val=%d", name, v->val);
else
- printf("\n\t'%s' val=%u", name, v->val);
+ printf("\n\t'%s' val=%u", name, (__u32)v->val);
}
}
if (json_output)
@@ -274,7 +291,7 @@ static int dump_btf_type(const struct btf *btf, __u32 id,
} else {
if (btf_kflag(t))
printf("\n\t'%s' val=%lldLL", name,
- (unsigned long long)val);
+ (long long)val);
else
printf("\n\t'%s' val=%lluULL", name,
(unsigned long long)val);
@@ -454,15 +471,307 @@ static int dump_btf_raw(const struct btf *btf,
return 0;
}
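+
+/* Minimal growable array of pointers, used below to collect the BTF types
+ * of kfuncs and bpf_fastcall-tagged functions before sorting and printing.
+ */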
+struct ptr_array {
+ __u32 cnt;
+ __u32 cap;
+ const void **elems;
+};
+
+static int ptr_array_push(const void *ptr, struct ptr_array *arr)
+{
+ __u32 new_cap;
+ void *tmp;
+
+ if (arr->cnt == arr->cap) {
+ new_cap = (arr->cap ?: 16) * 2;
+ tmp = realloc(arr->elems, sizeof(*arr->elems) * new_cap);
+ if (!tmp)
+ return -ENOMEM;
+ arr->elems = tmp;
+ arr->cap = new_cap;
+ }
+ arr->elems[arr->cnt++] = ptr;
+ return 0;
+}
+
+static void ptr_array_free(struct ptr_array *arr)
+{
+ free(arr->elems);
+}
+
+static int cmp_kfuncs(const void *pa, const void *pb, void *ctx)
+{
+ struct btf *btf = ctx;
+ const struct btf_type *a = *(void **)pa;
+ const struct btf_type *b = *(void **)pb;
+
+ return strcmp(btf__str_by_offset(btf, a->name_off),
+ btf__str_by_offset(btf, b->name_off));
+}
+
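+/* Emit "extern ... __weak __ksym;" declarations for every kfunc, i.e. every
+ * FUNC carrying a "bpf_kfunc" decl tag, sorted by name; kfuncs also tagged
+ * "bpf_fastcall" get the __bpf_fastcall attribute.
+ */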
+static int dump_btf_kfuncs(struct btf_dump *d, const struct btf *btf)
+{
+ LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts);
+ __u32 cnt = btf__type_cnt(btf), i, j;
+ struct ptr_array fastcalls = {};
+ struct ptr_array kfuncs = {};
+ int err = 0;
+
+ printf("\n/* BPF kfuncs */\n");
+ printf("#ifndef BPF_NO_KFUNC_PROTOTYPES\n");
+
+ for (i = 1; i < cnt; i++) {
+ const struct btf_type *t = btf__type_by_id(btf, i);
+ const struct btf_type *ft;
+ const char *name;
+
+ if (!btf_is_decl_tag(t))
+ continue;
+
+ if (btf_decl_tag(t)->component_idx != -1)
+ continue;
+
+ ft = btf__type_by_id(btf, t->type);
+ if (!btf_is_func(ft))
+ continue;
+
+ name = btf__name_by_offset(btf, t->name_off);
+ if (strncmp(name, KFUNC_DECL_TAG, sizeof(KFUNC_DECL_TAG)) == 0) {
+ err = ptr_array_push(ft, &kfuncs);
+ if (err)
+ goto out;
+ }
+
+ if (strncmp(name, FASTCALL_DECL_TAG, sizeof(FASTCALL_DECL_TAG)) == 0) {
+ err = ptr_array_push(ft, &fastcalls);
+ if (err)
+ goto out;
+ }
+ }
+
+ /* Sort kfuncs by name for improved vmlinux.h stability */
+ qsort_r(kfuncs.elems, kfuncs.cnt, sizeof(*kfuncs.elems), cmp_kfuncs, (void *)btf);
+ for (i = 0; i < kfuncs.cnt; i++) {
+ const struct btf_type *t = kfuncs.elems[i];
+
+ printf("extern ");
+
+ /* Assume small amount of fastcall kfuncs */
+ for (j = 0; j < fastcalls.cnt; j++) {
+ if (fastcalls.elems[j] == t) {
+ printf("__bpf_fastcall ");
+ break;
+ }
+ }
+
+ opts.field_name = btf__name_by_offset(btf, t->name_off);
+ err = btf_dump__emit_type_decl(d, t->type, &opts);
+ if (err)
+ goto out;
+
+ printf(" __weak __ksym;\n");
+ }
+
+ printf("#endif\n\n");
+
+out:
+ ptr_array_free(&fastcalls);
+ ptr_array_free(&kfuncs);
+ return err;
+}
+
static void __printf(2, 0) btf_dump_printf(void *ctx,
const char *fmt, va_list args)
{
vfprintf(stdout, fmt, args);
}
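+
+/* Order in which type kinds appear in the sorted C dump: anonymous enums
+ * first, then named enums, ints and floats, named structs/unions and named
+ * function prototypes; named modifiers, typedefs and arrays inherit the rank
+ * of the type they reference, and everything else sinks to max_rank.
+ */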
+static int btf_type_rank(const struct btf *btf, __u32 index, bool has_name)
+{
+ const struct btf_type *t = btf__type_by_id(btf, index);
+ const int kind = btf_kind(t);
+ const int max_rank = 10;
+
+ if (t->name_off)
+ has_name = true;
+
+ switch (kind) {
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ return has_name ? 1 : 0;
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ return 2;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ return has_name ? 3 : max_rank;
+ case BTF_KIND_FUNC_PROTO:
+ return has_name ? 4 : max_rank;
+ case BTF_KIND_ARRAY:
+ if (has_name)
+ return btf_type_rank(btf, btf_array(t)->type, has_name);
+ return max_rank;
+ case BTF_KIND_TYPE_TAG:
+ case BTF_KIND_CONST:
+ case BTF_KIND_PTR:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_DECL_TAG:
+ if (has_name)
+ return btf_type_rank(btf, t->type, has_name);
+ return max_rank;
+ default:
+ return max_rank;
+ }
+}
+
+static const char *btf_type_sort_name(const struct btf *btf, __u32 index, bool from_ref)
+{
+ const struct btf_type *t = btf__type_by_id(btf, index);
+
+ switch (btf_kind(t)) {
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64: {
+ int name_off = t->name_off;
+
+ if (!from_ref && !name_off && btf_vlen(t))
+ name_off = btf_kind(t) == BTF_KIND_ENUM64 ?
+ btf_enum64(t)->name_off :
+ btf_enum(t)->name_off;
+
+ return btf__name_by_offset(btf, name_off);
+ }
+ case BTF_KIND_ARRAY:
+ return btf_type_sort_name(btf, btf_array(t)->type, true);
+ case BTF_KIND_TYPE_TAG:
+ case BTF_KIND_CONST:
+ case BTF_KIND_PTR:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_DECL_TAG:
+ return btf_type_sort_name(btf, t->type, true);
+ default:
+ return btf__name_by_offset(btf, t->name_off);
+ }
+ return NULL;
+}
+
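+/* Polynomial (31 * hash + val) combiner used to build the per-type
+ * disambiguation hash below.
+ */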
+static __u64 hasher(__u64 hash, __u64 val)
+{
+ return hash * 31 + val;
+}
+
+static __u64 btf_name_hasher(__u64 hash, const struct btf *btf, __u32 name_off)
+{
+ if (!name_off)
+ return hash;
+
+ return hasher(hash, str_hash(btf__name_by_offset(btf, name_off)));
+}
+
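+/* Mix in the type's own name and, for enums, its enumerator names; for
+ * structs and unions (when include_members is set) also member names and
+ * the names of member types, so that same-named types keep a stable
+ * relative order under qsort.
+ */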
+static __u64 btf_type_disambig_hash(const struct btf *btf, __u32 id, bool include_members)
+{
+ const struct btf_type *t = btf__type_by_id(btf, id);
+ int i;
+ size_t hash = 0;
+
+ hash = btf_name_hasher(hash, btf, t->name_off);
+
+ switch (btf_kind(t)) {
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ for (i = 0; i < btf_vlen(t); i++) {
+ __u32 name_off = btf_is_enum(t) ?
+ btf_enum(t)[i].name_off :
+ btf_enum64(t)[i].name_off;
+
+ hash = btf_name_hasher(hash, btf, name_off);
+ }
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ if (!include_members)
+ break;
+ for (i = 0; i < btf_vlen(t); i++) {
+ const struct btf_member *m = btf_members(t) + i;
+
+ hash = btf_name_hasher(hash, btf, m->name_off);
+ /* resolve field type's name and hash it as well */
+ hash = hasher(hash, btf_type_disambig_hash(btf, m->type, false));
+ }
+ break;
+ case BTF_KIND_TYPE_TAG:
+ case BTF_KIND_CONST:
+ case BTF_KIND_PTR:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_DECL_TAG:
+ hash = hasher(hash, btf_type_disambig_hash(btf, t->type, include_members));
+ break;
+ case BTF_KIND_ARRAY: {
+ struct btf_array *arr = btf_array(t);
+
+ hash = hasher(hash, arr->nelems);
+ hash = hasher(hash, btf_type_disambig_hash(btf, arr->type, include_members));
+ break;
+ }
+ default:
+ break;
+ }
+ return hash;
+}
+
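+/* Order types by rank, then by the resolved sort name, then by the type's
+ * own name; remaining ties are broken by the disambiguation hash and,
+ * finally, by the original type ID so the result is deterministic.
+ */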
+static int btf_type_compare(const void *left, const void *right)
+{
+ const struct sort_datum *d1 = (const struct sort_datum *)left;
+ const struct sort_datum *d2 = (const struct sort_datum *)right;
+ int r;
+
+ r = d1->type_rank - d2->type_rank;
+ r = r ?: strcmp(d1->sort_name, d2->sort_name);
+ r = r ?: strcmp(d1->own_name, d2->own_name);
+ if (r)
+ return r;
+
+ if (d1->disambig_hash != d2->disambig_hash)
+ return d1->disambig_hash < d2->disambig_hash ? -1 : 1;
+
+ return d1->index - d2->index;
+}
+
+static struct sort_datum *sort_btf_c(const struct btf *btf)
+{
+ struct sort_datum *datums;
+ int n;
+
+ n = btf__type_cnt(btf);
+ datums = malloc(sizeof(struct sort_datum) * n);
+ if (!datums)
+ return NULL;
+
+ for (int i = 0; i < n; ++i) {
+ struct sort_datum *d = datums + i;
+ const struct btf_type *t = btf__type_by_id(btf, i);
+
+ d->index = i;
+ d->type_rank = btf_type_rank(btf, i, false);
+ d->sort_name = btf_type_sort_name(btf, i, false);
+ d->own_name = btf__name_by_offset(btf, t->name_off);
+ d->disambig_hash = btf_type_disambig_hash(btf, i, true);
+ }
+
+ qsort(datums, n, sizeof(struct sort_datum), btf_type_compare);
+
+ return datums;
+}
+
static int dump_btf_c(const struct btf *btf,
- __u32 *root_type_ids, int root_type_cnt)
+ __u32 *root_type_ids, int root_type_cnt, bool sort_dump)
{
+ struct sort_datum *datums = NULL;
struct btf_dump *d;
int err = 0, i;
@@ -476,6 +785,19 @@ static int dump_btf_c(const struct btf *btf,
printf("#ifndef BPF_NO_PRESERVE_ACCESS_INDEX\n");
printf("#pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)\n");
printf("#endif\n\n");
+ printf("#ifndef __ksym\n");
+ printf("#define __ksym __attribute__((section(\".ksyms\")))\n");
+ printf("#endif\n\n");
+ printf("#ifndef __weak\n");
+ printf("#define __weak __attribute__((weak))\n");
+ printf("#endif\n\n");
+ printf("#ifndef __bpf_fastcall\n");
+ printf("#if __has_attribute(bpf_fastcall)\n");
+ printf("#define __bpf_fastcall __attribute__((bpf_fastcall))\n");
+ printf("#else\n");
+ printf("#define __bpf_fastcall\n");
+ printf("#endif\n");
+ printf("#endif\n\n");
if (root_type_cnt) {
for (i = 0; i < root_type_cnt; i++) {
@@ -486,11 +808,19 @@ static int dump_btf_c(const struct btf *btf,
} else {
int cnt = btf__type_cnt(btf);
+ if (sort_dump)
+ datums = sort_btf_c(btf);
for (i = 1; i < cnt; i++) {
- err = btf_dump__dump_type(d, i);
+ int idx = datums ? datums[i].index : i;
+
+ err = btf_dump__dump_type(d, idx);
if (err)
goto done;
}
+
+ err = dump_btf_kfuncs(d, btf);
+ if (err)
+ goto done;
}
printf("#ifndef BPF_NO_PRESERVE_ACCESS_INDEX\n");
@@ -500,6 +830,7 @@ static int dump_btf_c(const struct btf *btf,
printf("#endif /* __VMLINUX_H__ */\n");
done:
+ free(datums);
btf_dump__free(d);
return err;
}
@@ -549,14 +880,16 @@ static bool btf_is_kernel_module(__u32 btf_id)
static int do_dump(int argc, char **argv)
{
+ bool dump_c = false, sort_dump_c = true;
struct btf *btf = NULL, *base = NULL;
- __u32 root_type_ids[2];
+ __u32 root_type_ids[MAX_ROOT_IDS];
+ bool have_id_filtering;
int root_type_cnt = 0;
- bool dump_c = false;
__u32 btf_id = -1;
const char *src;
int fd = -1;
int err = 0;
+ int i;
if (!REQ_ARGS(2)) {
usage();
@@ -572,7 +905,8 @@ static int do_dump(int argc, char **argv)
return -1;
}
- fd = map_parse_fd_and_info(&argc, &argv, &info, &len);
+ fd = map_parse_fd_and_info(&argc, &argv, &info, &len,
+ BPF_F_RDONLY);
if (fd < 0)
return -1;
@@ -644,6 +978,8 @@ static int do_dump(int argc, char **argv)
goto done;
}
+ have_id_filtering = !!root_type_cnt;
+
while (argc) {
if (is_prefix(*argv, "format")) {
NEXT_ARG();
@@ -663,6 +999,39 @@ static int do_dump(int argc, char **argv)
goto done;
}
NEXT_ARG();
+ } else if (is_prefix(*argv, "root_id")) {
+ __u32 root_id;
+ char *end;
+
+ if (have_id_filtering) {
+ p_err("cannot use root_id with other type filtering");
+ err = -EINVAL;
+ goto done;
+ } else if (root_type_cnt == MAX_ROOT_IDS) {
+ p_err("only %d root_id are supported", MAX_ROOT_IDS);
+ err = -E2BIG;
+ goto done;
+ }
+
+ NEXT_ARG();
+ root_id = strtoul(*argv, &end, 0);
+ if (*end) {
+ err = -1;
+ p_err("can't parse %s as root ID", *argv);
+ goto done;
+ }
+ for (i = 0; i < root_type_cnt; i++) {
+ if (root_type_ids[i] == root_id) {
+ err = -EINVAL;
+ p_err("duplicate root_id %u supplied", root_id);
+ goto done;
+ }
+ }
+ root_type_ids[root_type_cnt++] = root_id;
+ NEXT_ARG();
+ } else if (is_prefix(*argv, "unsorted")) {
+ sort_dump_c = false;
+ NEXT_ARG();
} else {
p_err("unrecognized option: '%s'", *argv);
err = -EINVAL;
@@ -685,13 +1054,24 @@ static int do_dump(int argc, char **argv)
}
}
+	/* Invalid root IDs cause half-emitted boilerplate followed by an unclean
+	 * exit. It's an ugly user experience, so handle this common error here.
+ */
+ for (i = 0; i < root_type_cnt; i++) {
+ if (root_type_ids[i] >= btf__type_cnt(btf)) {
+ err = -EINVAL;
+ p_err("invalid root ID: %u", root_type_ids[i]);
+ goto done;
+ }
+ }
+
if (dump_c) {
if (json_output) {
p_err("JSON output for C-syntax dump is not supported");
err = -ENOTSUP;
goto done;
}
- err = dump_btf_c(btf, root_type_ids, root_type_cnt);
+ err = dump_btf_c(btf, root_type_ids, root_type_cnt, sort_dump_c);
} else {
err = dump_btf_raw(btf, root_type_ids, root_type_cnt);
}
@@ -739,10 +1119,13 @@ build_btf_type_table(struct hashmap *tab, enum bpf_obj_type type,
[BPF_OBJ_PROG] = "prog",
[BPF_OBJ_MAP] = "map",
};
+ LIBBPF_OPTS(bpf_get_fd_by_id_opts, opts_ro);
__u32 btf_id, id = 0;
int err;
int fd;
+ opts_ro.open_flags = BPF_F_RDONLY;
+
while (true) {
switch (type) {
case BPF_OBJ_PROG:
@@ -753,7 +1136,7 @@ build_btf_type_table(struct hashmap *tab, enum bpf_obj_type type,
break;
default:
err = -1;
- p_err("unexpected object type: %d", type);
+ p_err("unexpected object type: %u", type);
goto err_free;
}
if (err) {
@@ -772,11 +1155,11 @@ build_btf_type_table(struct hashmap *tab, enum bpf_obj_type type,
fd = bpf_prog_get_fd_by_id(id);
break;
case BPF_OBJ_MAP:
- fd = bpf_map_get_fd_by_id(id);
+ fd = bpf_map_get_fd_by_id_opts(id, &opts_ro);
break;
default:
err = -1;
- p_err("unexpected object type: %d", type);
+ p_err("unexpected object type: %u", type);
goto err_free;
}
if (fd < 0) {
@@ -809,7 +1192,7 @@ build_btf_type_table(struct hashmap *tab, enum bpf_obj_type type,
break;
default:
err = -1;
- p_err("unexpected object type: %d", type);
+ p_err("unexpected object type: %u", type);
goto err_free;
}
if (!btf_id)
@@ -875,12 +1258,12 @@ show_btf_plain(struct bpf_btf_info *info, int fd,
n = 0;
hashmap__for_each_key_entry(btf_prog_table, entry, info->id) {
- printf("%s%lu", n++ == 0 ? " prog_ids " : ",", entry->value);
+ printf("%s%lu", n++ == 0 ? " prog_ids " : ",", (unsigned long)entry->value);
}
n = 0;
hashmap__for_each_key_entry(btf_map_table, entry, info->id) {
- printf("%s%lu", n++ == 0 ? " map_ids " : ",", entry->value);
+ printf("%s%lu", n++ == 0 ? " map_ids " : ",", (unsigned long)entry->value);
}
emit_obj_refs_plain(refs_table, info->id, "\n\tpids ");
@@ -1059,11 +1442,11 @@ static int do_help(int argc, char **argv)
fprintf(stderr,
"Usage: %1$s %2$s { show | list } [id BTF_ID]\n"
- " %1$s %2$s dump BTF_SRC [format FORMAT]\n"
+ " %1$s %2$s dump BTF_SRC [format FORMAT] [root_id ROOT_ID]\n"
" %1$s %2$s help\n"
"\n"
" BTF_SRC := { id BTF_ID | prog PROG | map MAP [{key | value | kv | all}] | file FILE }\n"
- " FORMAT := { raw | c }\n"
+ " FORMAT := { raw | c [unsorted] }\n"
" " HELP_SPEC_MAP "\n"
" " HELP_SPEC_PROGRAM "\n"
" " HELP_SPEC_OPTIONS " |\n"
diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
index 527fe867a8fb..ff12628593ae 100644
--- a/tools/bpf/bpftool/btf_dumper.c
+++ b/tools/bpf/bpftool/btf_dumper.c
@@ -38,7 +38,7 @@ static int dump_prog_id_as_func_ptr(const struct btf_dumper *d,
__u32 info_len = sizeof(info);
const char *prog_name = NULL;
struct btf *prog_btf = NULL;
- struct bpf_func_info finfo;
+ struct bpf_func_info finfo = {};
__u32 finfo_rec_size;
char prog_str[1024];
int err;
@@ -653,7 +653,7 @@ static int __btf_dumper_type_only(const struct btf *btf, __u32 type_id,
case BTF_KIND_ARRAY:
array = (struct btf_array *)(t + 1);
BTF_PRINT_TYPE(array->type);
- BTF_PRINT_ARG("[%d]", array->nelems);
+ BTF_PRINT_ARG("[%u]", array->nelems);
break;
case BTF_KIND_PTR:
BTF_PRINT_TYPE(t->type);
diff --git a/tools/bpf/bpftool/cfg.c b/tools/bpf/bpftool/cfg.c
index eec437cca2ea..e3785f9a697d 100644
--- a/tools/bpf/bpftool/cfg.c
+++ b/tools/bpf/bpftool/cfg.c
@@ -302,6 +302,7 @@ static bool func_add_bb_edges(struct func_node *func)
insn = bb->tail;
if (!is_jmp_insn(insn->code) ||
+ BPF_OP(insn->code) == BPF_CALL ||
BPF_OP(insn->code) == BPF_EXIT) {
e->dst = bb_next(bb);
e->flags |= EDGE_FLAG_FALLTHROUGH;
diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
index af6898c0f388..ec356deb27c9 100644
--- a/tools/bpf/bpftool/cgroup.c
+++ b/tools/bpf/bpftool/cgroup.c
@@ -2,6 +2,10 @@
// Copyright (C) 2017 Facebook
// Author: Roman Gushchin <guro@fb.com>
+#undef GCC_VERSION
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
#define _XOPEN_SOURCE 500
#include <errno.h>
#include <fcntl.h>
@@ -19,6 +23,38 @@
#include "main.h"
+static const int cgroup_attach_types[] = {
+ BPF_CGROUP_INET_INGRESS,
+ BPF_CGROUP_INET_EGRESS,
+ BPF_CGROUP_INET_SOCK_CREATE,
+ BPF_CGROUP_INET_SOCK_RELEASE,
+ BPF_CGROUP_INET4_BIND,
+ BPF_CGROUP_INET6_BIND,
+ BPF_CGROUP_INET4_POST_BIND,
+ BPF_CGROUP_INET6_POST_BIND,
+ BPF_CGROUP_INET4_CONNECT,
+ BPF_CGROUP_INET6_CONNECT,
+ BPF_CGROUP_UNIX_CONNECT,
+ BPF_CGROUP_INET4_GETPEERNAME,
+ BPF_CGROUP_INET6_GETPEERNAME,
+ BPF_CGROUP_UNIX_GETPEERNAME,
+ BPF_CGROUP_INET4_GETSOCKNAME,
+ BPF_CGROUP_INET6_GETSOCKNAME,
+ BPF_CGROUP_UNIX_GETSOCKNAME,
+ BPF_CGROUP_UDP4_SENDMSG,
+ BPF_CGROUP_UDP6_SENDMSG,
+ BPF_CGROUP_UNIX_SENDMSG,
+ BPF_CGROUP_UDP4_RECVMSG,
+ BPF_CGROUP_UDP6_RECVMSG,
+ BPF_CGROUP_UNIX_RECVMSG,
+ BPF_CGROUP_SOCK_OPS,
+ BPF_CGROUP_DEVICE,
+ BPF_CGROUP_SYSCTL,
+ BPF_CGROUP_GETSOCKOPT,
+ BPF_CGROUP_SETSOCKOPT,
+ BPF_LSM_CGROUP
+};
+
#define HELP_SPEC_ATTACH_FLAGS \
"ATTACH_FLAGS := { multi | override }"
@@ -159,7 +195,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type,
if (attach_btf_name)
printf(" %-15s", attach_btf_name);
else if (info.attach_btf_id)
- printf(" attach_btf_obj_id=%d attach_btf_id=%d",
+ printf(" attach_btf_obj_id=%u attach_btf_id=%u",
info.attach_btf_obj_id, info.attach_btf_id);
printf("\n");
}
@@ -183,11 +219,11 @@ static int count_attached_bpf_progs(int cgroup_fd, enum bpf_attach_type type)
static int cgroup_has_attached_progs(int cgroup_fd)
{
- enum bpf_attach_type type;
+ unsigned int i = 0;
bool no_prog = true;
- for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) {
- int count = count_attached_bpf_progs(cgroup_fd, type);
+ for (i = 0; i < ARRAY_SIZE(cgroup_attach_types); i++) {
+ int count = count_attached_bpf_progs(cgroup_fd, cgroup_attach_types[i]);
if (count < 0 && errno != EINVAL)
return -1;
@@ -286,11 +322,11 @@ static int show_bpf_progs(int cgroup_fd, enum bpf_attach_type type,
static int do_show(int argc, char **argv)
{
- enum bpf_attach_type type;
int has_attached_progs;
const char *path;
int cgroup_fd;
int ret = -1;
+ unsigned int i;
query_flags = 0;
@@ -338,14 +374,14 @@ static int do_show(int argc, char **argv)
"AttachFlags", "Name");
btf_vmlinux = libbpf_find_kernel_btf();
- for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) {
+ for (i = 0; i < ARRAY_SIZE(cgroup_attach_types); i++) {
/*
* Not all attach types may be supported, so it's expected,
* that some requests will fail.
* If we were able to get the show for at least one
* attach type, let's return 0.
*/
- if (show_bpf_progs(cgroup_fd, type, 0) == 0)
+ if (show_bpf_progs(cgroup_fd, cgroup_attach_types[i], 0) == 0)
ret = 0;
}
@@ -368,9 +404,9 @@ exit:
static int do_show_tree_fn(const char *fpath, const struct stat *sb,
int typeflag, struct FTW *ftw)
{
- enum bpf_attach_type type;
int has_attached_progs;
int cgroup_fd;
+ unsigned int i;
if (typeflag != FTW_D)
return 0;
@@ -402,8 +438,8 @@ static int do_show_tree_fn(const char *fpath, const struct stat *sb,
}
btf_vmlinux = libbpf_find_kernel_btf();
- for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++)
- show_bpf_progs(cgroup_fd, type, ftw->level);
+ for (i = 0; i < ARRAY_SIZE(cgroup_attach_types); i++)
+ show_bpf_progs(cgroup_fd, cgroup_attach_types[i], ftw->level);
if (errno == EINVAL)
/* Last attach type does not support query.
diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c
index cc6e6aae2447..e8daf963ecef 100644
--- a/tools/bpf/bpftool/common.c
+++ b/tools/bpf/bpftool/common.c
@@ -4,6 +4,7 @@
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
+#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
@@ -20,6 +21,7 @@
#include <sys/resource.h>
#include <sys/stat.h>
#include <sys/vfs.h>
+#include <sys/utsname.h>
#include <linux/filter.h>
#include <linux/limits.h>
@@ -30,6 +32,7 @@
#include <bpf/hashmap.h>
#include <bpf/libbpf.h> /* libbpf_num_possible_cpus */
#include <bpf/btf.h>
+#include <zlib.h>
#include "main.h"
@@ -193,7 +196,8 @@ int mount_tracefs(const char *target)
return err;
}
-int open_obj_pinned(const char *path, bool quiet)
+int open_obj_pinned(const char *path, bool quiet,
+ const struct bpf_obj_get_opts *opts)
{
char *pname;
int fd = -1;
@@ -205,7 +209,7 @@ int open_obj_pinned(const char *path, bool quiet)
goto out_ret;
}
- fd = bpf_obj_get(pname);
+ fd = bpf_obj_get_opts(pname, opts);
if (fd < 0) {
if (!quiet)
p_err("bpf obj get (%s): %s", pname,
@@ -221,12 +225,13 @@ out_ret:
return fd;
}
-int open_obj_pinned_any(const char *path, enum bpf_obj_type exp_type)
+int open_obj_pinned_any(const char *path, enum bpf_obj_type exp_type,
+ const struct bpf_obj_get_opts *opts)
{
enum bpf_obj_type type;
int fd;
- fd = open_obj_pinned(path, false);
+ fd = open_obj_pinned(path, false, opts);
if (fd < 0)
return -1;
@@ -244,29 +249,101 @@ int open_obj_pinned_any(const char *path, enum bpf_obj_type exp_type)
return fd;
}
-int mount_bpffs_for_pin(const char *name, bool is_dir)
+int create_and_mount_bpffs_dir(const char *dir_name)
{
char err_str[ERR_MAX_LEN];
- char *file;
- char *dir;
+ bool dir_exists;
int err = 0;
- if (is_dir && is_bpffs(name))
+ if (is_bpffs(dir_name))
return err;
- file = malloc(strlen(name) + 1);
- if (!file) {
+ dir_exists = access(dir_name, F_OK) == 0;
+
+ if (!dir_exists) {
+ char *temp_name;
+ char *parent_name;
+
+ temp_name = strdup(dir_name);
+ if (!temp_name) {
+ p_err("mem alloc failed");
+ return -1;
+ }
+
+ parent_name = dirname(temp_name);
+
+ if (is_bpffs(parent_name)) {
+ /* nothing to do if already mounted */
+ free(temp_name);
+ return err;
+ }
+
+ if (access(parent_name, F_OK) == -1) {
+ p_err("can't create dir '%s' to pin BPF object: parent dir '%s' doesn't exist",
+ dir_name, parent_name);
+ free(temp_name);
+ return -1;
+ }
+
+ free(temp_name);
+ }
+
+ if (block_mount) {
+ p_err("no BPF file system found, not mounting it due to --nomount option");
+ return -1;
+ }
+
+ if (!dir_exists) {
+ err = mkdir(dir_name, S_IRWXU);
+ if (err) {
+ p_err("failed to create dir '%s': %s", dir_name, strerror(errno));
+ return err;
+ }
+ }
+
+ err = mnt_fs(dir_name, "bpf", err_str, ERR_MAX_LEN);
+ if (err) {
+ err_str[ERR_MAX_LEN - 1] = '\0';
+ p_err("can't mount BPF file system on given dir '%s': %s",
+ dir_name, err_str);
+
+ if (!dir_exists)
+ rmdir(dir_name);
+ }
+
+ return err;
+}
+
+int mount_bpffs_for_file(const char *file_name)
+{
+ char err_str[ERR_MAX_LEN];
+ char *temp_name;
+ char *dir;
+ int err = 0;
+
+ if (access(file_name, F_OK) != -1) {
+ p_err("can't pin BPF object: path '%s' already exists", file_name);
+ return -1;
+ }
+
+ temp_name = strdup(file_name);
+ if (!temp_name) {
p_err("mem alloc failed");
return -1;
}
- strcpy(file, name);
- dir = dirname(file);
+ dir = dirname(temp_name);
if (is_bpffs(dir))
/* nothing to do if already mounted */
goto out_free;
+ if (access(dir, F_OK) == -1) {
+ p_err("can't pin BPF object: dir '%s' doesn't exist", dir);
+ err = -1;
+ goto out_free;
+ }
+
if (block_mount) {
p_err("no BPF file system found, not mounting it due to --nomount option");
err = -1;
@@ -276,12 +353,12 @@ int mount_bpffs_for_pin(const char *name, bool is_dir)
err = mnt_fs(dir, "bpf", err_str, ERR_MAX_LEN);
if (err) {
err_str[ERR_MAX_LEN - 1] = '\0';
- p_err("can't mount BPF file system to pin the object (%s): %s",
- name, err_str);
+ p_err("can't mount BPF file system to pin the object '%s': %s",
+ file_name, err_str);
}
out_free:
- free(file);
+ free(temp_name);
return err;
}
@@ -289,7 +366,7 @@ int do_pin_fd(int fd, const char *name)
{
int err;
- err = mount_bpffs_for_pin(name, false);
+ err = mount_bpffs_for_file(name);
if (err)
return err;
@@ -338,7 +415,7 @@ void get_prog_full_name(const struct bpf_prog_info *prog_info, int prog_fd,
{
const char *prog_name = prog_info->name;
const struct btf_type *func_type;
- const struct bpf_func_info finfo = {};
+ struct bpf_func_info finfo = {};
struct bpf_prog_info info = {};
__u32 info_len = sizeof(info);
struct btf *prog_btf = NULL;
@@ -389,10 +466,11 @@ int get_fd_type(int fd)
p_err("can't read link type: %s", strerror(errno));
return -1;
}
- if (n == sizeof(path)) {
+ if (n == sizeof(buf)) {
p_err("can't read link type: path too long!");
return -1;
}
+ buf[n] = '\0';
if (strstr(buf, "bpf-map"))
return BPF_OBJ_MAP;
@@ -482,7 +560,7 @@ static int do_build_table_cb(const char *fpath, const struct stat *sb,
if (typeflag != FTW_F)
goto out_ret;
- fd = open_obj_pinned(fpath, true);
+ fd = open_obj_pinned(fpath, true, NULL);
if (fd < 0)
goto out_ret;
@@ -641,7 +719,7 @@ ifindex_to_arch(__u32 ifindex, __u64 ns_dev, __u64 ns_ino, const char **opt)
int vendor_id;
if (!ifindex_to_name_ns(ifindex, ns_dev, ns_ino, devname)) {
- p_err("Can't get net device name for ifindex %d: %s", ifindex,
+ p_err("Can't get net device name for ifindex %u: %s", ifindex,
strerror(errno));
return NULL;
}
@@ -666,7 +744,7 @@ ifindex_to_arch(__u32 ifindex, __u64 ns_dev, __u64 ns_ino, const char **opt)
/* No NFP support in LLVM, we have no valid triple to return. */
default:
p_err("Can't get arch name for device vendor id 0x%04x",
- vendor_id);
+ (unsigned int)vendor_id);
return NULL;
}
}
@@ -855,7 +933,7 @@ int prog_parse_fds(int *argc, char ***argv, int **fds)
path = **argv;
NEXT_ARGP();
- (*fds)[0] = open_obj_pinned_any(path, BPF_OBJ_PROG);
+ (*fds)[0] = open_obj_pinned_any(path, BPF_OBJ_PROG, NULL);
if ((*fds)[0] < 0)
return -1;
return 1;
@@ -892,7 +970,8 @@ exit_free:
return fd;
}
-static int map_fd_by_name(char *name, int **fds)
+static int map_fd_by_name(char *name, int **fds,
+ const struct bpf_get_fd_by_id_opts *opts)
{
unsigned int id = 0;
int fd, nb_fds = 0;
@@ -900,6 +979,7 @@ static int map_fd_by_name(char *name, int **fds)
int err;
while (true) {
+ LIBBPF_OPTS(bpf_get_fd_by_id_opts, opts_ro);
struct bpf_map_info info = {};
__u32 len = sizeof(info);
@@ -912,7 +992,9 @@ static int map_fd_by_name(char *name, int **fds)
return nb_fds;
}
- fd = bpf_map_get_fd_by_id(id);
+ /* Request a read-only fd to query the map info */
+ opts_ro.open_flags = BPF_F_RDONLY;
+ fd = bpf_map_get_fd_by_id_opts(id, &opts_ro);
if (fd < 0) {
p_err("can't get map by id (%u): %s",
id, strerror(errno));
@@ -931,6 +1013,19 @@ static int map_fd_by_name(char *name, int **fds)
continue;
}
+ /* Get an fd with the requested options, if they differ
+ * from the read-only options used to get the fd above.
+ */
+ if (memcmp(opts, &opts_ro, sizeof(opts_ro))) {
+ close(fd);
+ fd = bpf_map_get_fd_by_id_opts(id, opts);
+ if (fd < 0) {
+ p_err("can't get map by id (%u): %s", id,
+ strerror(errno));
+ goto err_close_fds;
+ }
+ }
+
if (nb_fds > 0) {
tmp = realloc(*fds, (nb_fds + 1) * sizeof(int));
if (!tmp) {
@@ -950,8 +1045,13 @@ err_close_fds:
return -1;
}
-int map_parse_fds(int *argc, char ***argv, int **fds)
+int map_parse_fds(int *argc, char ***argv, int **fds, __u32 open_flags)
{
+ LIBBPF_OPTS(bpf_get_fd_by_id_opts, opts);
+
+ assert((open_flags & ~BPF_F_RDONLY) == 0);
+ opts.open_flags = open_flags;
+
if (is_prefix(**argv, "id")) {
unsigned int id;
char *endptr;
@@ -965,7 +1065,7 @@ int map_parse_fds(int *argc, char ***argv, int **fds)
}
NEXT_ARGP();
- (*fds)[0] = bpf_map_get_fd_by_id(id);
+ (*fds)[0] = bpf_map_get_fd_by_id_opts(id, &opts);
if ((*fds)[0] < 0) {
p_err("get map by id (%u): %s", id, strerror(errno));
return -1;
@@ -983,16 +1083,18 @@ int map_parse_fds(int *argc, char ***argv, int **fds)
}
NEXT_ARGP();
- return map_fd_by_name(name, fds);
+ return map_fd_by_name(name, fds, &opts);
} else if (is_prefix(**argv, "pinned")) {
char *path;
+ LIBBPF_OPTS(bpf_obj_get_opts, get_opts);
+ get_opts.file_flags = open_flags;
NEXT_ARGP();
path = **argv;
NEXT_ARGP();
- (*fds)[0] = open_obj_pinned_any(path, BPF_OBJ_MAP);
+ (*fds)[0] = open_obj_pinned_any(path, BPF_OBJ_MAP, &get_opts);
if ((*fds)[0] < 0)
return -1;
return 1;
@@ -1002,7 +1104,7 @@ int map_parse_fds(int *argc, char ***argv, int **fds)
return -1;
}
-int map_parse_fd(int *argc, char ***argv)
+int map_parse_fd(int *argc, char ***argv, __u32 open_flags)
{
int *fds = NULL;
int nb_fds, fd;
@@ -1012,7 +1114,7 @@ int map_parse_fd(int *argc, char ***argv)
p_err("mem alloc failed");
return -1;
}
- nb_fds = map_parse_fds(argc, argv, &fds);
+ nb_fds = map_parse_fds(argc, argv, &fds, open_flags);
if (nb_fds != 1) {
if (nb_fds > 1) {
p_err("several maps match this handle");
@@ -1030,12 +1132,12 @@ exit_free:
}
int map_parse_fd_and_info(int *argc, char ***argv, struct bpf_map_info *info,
- __u32 *info_len)
+ __u32 *info_len, __u32 open_flags)
{
int err;
int fd;
- fd = map_parse_fd(argc, argv);
+ fd = map_parse_fd(argc, argv, open_flags);
if (fd < 0)
return -1;
@@ -1108,3 +1210,94 @@ int pathname_concat(char *buf, int buf_sz, const char *path,
return 0;
}
+
+static bool read_next_kernel_config_option(gzFile file, char *buf, size_t n,
+ char **value)
+{
+ char *sep;
+
+ while (gzgets(file, buf, n)) {
+ if (strncmp(buf, "CONFIG_", 7))
+ continue;
+
+ sep = strchr(buf, '=');
+ if (!sep)
+ continue;
+
+ /* Trim ending '\n' */
+ buf[strlen(buf) - 1] = '\0';
+
+ /* Split on '=' and ensure that a value is present. */
+ *sep = '\0';
+ if (!sep[1])
+ continue;
+
+ *value = sep + 1;
+ return true;
+ }
+
+ return false;
+}
+
+int read_kernel_config(const struct kernel_config_option *requested_options,
+ size_t num_options, char **out_values,
+ const char *define_prefix)
+{
+ struct utsname utsn;
+ char path[PATH_MAX];
+ gzFile file = NULL;
+ char buf[4096];
+ char *value;
+ size_t i;
+ int ret = 0;
+
+ if (!requested_options || !out_values || num_options == 0)
+ return -1;
+
+ if (!uname(&utsn)) {
+ snprintf(path, sizeof(path), "/boot/config-%s", utsn.release);
+
+ /* gzopen also accepts uncompressed files. */
+ file = gzopen(path, "r");
+ }
+
+ if (!file) {
+ /* Some distributions build with CONFIG_IKCONFIG=y and put the
+ * config file at /proc/config.gz.
+ */
+ file = gzopen("/proc/config.gz", "r");
+ }
+
+ if (!file) {
+ p_info("skipping kernel config, can't open file: %s",
+ strerror(errno));
+ return -1;
+ }
+
+ if (!gzgets(file, buf, sizeof(buf)) || !gzgets(file, buf, sizeof(buf))) {
+ p_info("skipping kernel config, can't read from file: %s",
+ strerror(errno));
+ ret = -1;
+ goto end_parse;
+ }
+
+ if (strcmp(buf, "# Automatically generated file; DO NOT EDIT.\n")) {
+ p_info("skipping kernel config, can't find correct file");
+ ret = -1;
+ goto end_parse;
+ }
+
+ while (read_next_kernel_config_option(file, buf, sizeof(buf), &value)) {
+ for (i = 0; i < num_options; i++) {
+ if ((define_prefix && !requested_options[i].macro_dump) ||
+ out_values[i] || strcmp(buf, requested_options[i].name))
+ continue;
+
+ out_values[i] = strdup(value);
+ }
+ }
+
+end_parse:
+ gzclose(file);
+ return ret;
+}
diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
index 708733b0ea06..0f6070a0c8e7 100644
--- a/tools/bpf/bpftool/feature.c
+++ b/tools/bpf/bpftool/feature.c
@@ -10,7 +10,6 @@
#ifdef USE_LIBCAP
#include <sys/capability.h>
#endif
-#include <sys/utsname.h>
#include <sys/vfs.h>
#include <linux/filter.h>
@@ -18,7 +17,6 @@
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
-#include <zlib.h>
#include "main.h"
@@ -196,7 +194,7 @@ static void probe_unprivileged_disabled(void)
{
long res;
- /* No support for C-style ouptut */
+ /* No support for C-style output */
res = read_procfs("/proc/sys/kernel/unprivileged_bpf_disabled");
if (json_output) {
@@ -225,7 +223,7 @@ static void probe_jit_enable(void)
{
long res;
- /* No support for C-style ouptut */
+ /* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_enable");
if (json_output) {
@@ -255,7 +253,7 @@ static void probe_jit_harden(void)
{
long res;
- /* No support for C-style ouptut */
+ /* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_harden");
if (json_output) {
@@ -285,7 +283,7 @@ static void probe_jit_kallsyms(void)
{
long res;
- /* No support for C-style ouptut */
+ /* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_kallsyms");
if (json_output) {
@@ -311,7 +309,7 @@ static void probe_jit_limit(void)
{
long res;
- /* No support for C-style ouptut */
+ /* No support for C-style output */
res = read_procfs("/proc/sys/net/core/bpf_jit_limit");
if (json_output) {
@@ -327,40 +325,9 @@ static void probe_jit_limit(void)
}
}
-static bool read_next_kernel_config_option(gzFile file, char *buf, size_t n,
- char **value)
-{
- char *sep;
-
- while (gzgets(file, buf, n)) {
- if (strncmp(buf, "CONFIG_", 7))
- continue;
-
- sep = strchr(buf, '=');
- if (!sep)
- continue;
-
- /* Trim ending '\n' */
- buf[strlen(buf) - 1] = '\0';
-
- /* Split on '=' and ensure that a value is present. */
- *sep = '\0';
- if (!sep[1])
- continue;
-
- *value = sep + 1;
- return true;
- }
-
- return false;
-}
-
static void probe_kernel_image_config(const char *define_prefix)
{
- static const struct {
- const char * const name;
- bool macro_dump;
- } options[] = {
+ struct kernel_config_option options[] = {
/* Enable BPF */
{ "CONFIG_BPF", },
/* Enable bpf() syscall */
@@ -435,52 +402,11 @@ static void probe_kernel_image_config(const char *define_prefix)
{ "CONFIG_HZ", true, }
};
char *values[ARRAY_SIZE(options)] = { };
- struct utsname utsn;
- char path[PATH_MAX];
- gzFile file = NULL;
- char buf[4096];
- char *value;
size_t i;
- if (!uname(&utsn)) {
- snprintf(path, sizeof(path), "/boot/config-%s", utsn.release);
-
- /* gzopen also accepts uncompressed files. */
- file = gzopen(path, "r");
- }
-
- if (!file) {
- /* Some distributions build with CONFIG_IKCONFIG=y and put the
- * config file at /proc/config.gz.
- */
- file = gzopen("/proc/config.gz", "r");
- }
- if (!file) {
- p_info("skipping kernel config, can't open file: %s",
- strerror(errno));
- goto end_parse;
- }
- /* Sanity checks */
- if (!gzgets(file, buf, sizeof(buf)) ||
- !gzgets(file, buf, sizeof(buf))) {
- p_info("skipping kernel config, can't read from file: %s",
- strerror(errno));
- goto end_parse;
- }
- if (strcmp(buf, "# Automatically generated file; DO NOT EDIT.\n")) {
- p_info("skipping kernel config, can't find correct file");
- goto end_parse;
- }
-
- while (read_next_kernel_config_option(file, buf, sizeof(buf), &value)) {
- for (i = 0; i < ARRAY_SIZE(options); i++) {
- if ((define_prefix && !options[i].macro_dump) ||
- values[i] || strcmp(buf, options[i].name))
- continue;
-
- values[i] = strdup(value);
- }
- }
+ if (read_kernel_config(options, ARRAY_SIZE(options), values,
+ define_prefix))
+ return;
for (i = 0; i < ARRAY_SIZE(options); i++) {
if (define_prefix && !options[i].macro_dump)
@@ -488,10 +414,6 @@ static void probe_kernel_image_config(const char *define_prefix)
print_kernel_option(options[i].name, values[i], define_prefix);
free(values[i]);
}
-
-end_parse:
- if (file)
- gzclose(file);
}
static bool probe_bpf_syscall(const char *define_prefix)
@@ -664,7 +586,8 @@ probe_helper_ifindex(enum bpf_func_id id, enum bpf_prog_type prog_type,
probe_prog_load_ifindex(prog_type, insns, ARRAY_SIZE(insns), buf,
sizeof(buf), ifindex);
- res = !grep(buf, "invalid func ") && !grep(buf, "unknown func ");
+ res = !grep(buf, "invalid func ") && !grep(buf, "unknown func ") &&
+ !grep(buf, "program of this type cannot use helper ");
switch (get_vendor_id(ifindex)) {
case 0x19ee: /* Netronome specific */
@@ -884,6 +807,28 @@ probe_v3_isa_extension(const char *define_prefix, __u32 ifindex)
"V3_ISA_EXTENSION");
}
+/*
+ * Probe for the v4 instruction set extension introduced in commit 1f9a1ea821ff
+ * ("bpf: Support new sign-extension load insns").
+ */
+static void
+probe_v4_isa_extension(const char *define_prefix, __u32 ifindex)
+{
+ struct bpf_insn insns[5] = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_JMP32_IMM(BPF_JEQ, BPF_REG_0, 1, 1),
+ BPF_JMP32_A(1),
+ BPF_MOV64_IMM(BPF_REG_0, 1),
+ BPF_EXIT_INSN()
+ };
+
+ probe_misc_feature(insns, ARRAY_SIZE(insns),
+ define_prefix, ifindex,
+ "have_v4_isa_extension",
+ "ISA extension v4",
+ "V4_ISA_EXTENSION");
+}
+
static void
section_system_config(enum probe_component target, const char *define_prefix)
{
@@ -1028,6 +973,7 @@ static void section_misc(const char *define_prefix, __u32 ifindex)
probe_bounded_loops(define_prefix, ifindex);
probe_v2_isa_extension(define_prefix, ifindex);
probe_v3_isa_extension(define_prefix, ifindex);
+ probe_v4_isa_extension(define_prefix, ifindex);
print_end_section();
}
diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
index ee3ce2b8000d..993c7d9484a4 100644
--- a/tools/bpf/bpftool/gen.c
+++ b/tools/bpf/bpftool/gen.c
@@ -7,6 +7,7 @@
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
+#include <libgen.h>
#include <linux/err.h>
#include <stdbool.h>
#include <stdio.h>
@@ -54,11 +55,27 @@ static bool str_has_suffix(const char *str, const char *suffix)
return true;
}
+static const struct btf_type *
+resolve_func_ptr(const struct btf *btf, __u32 id, __u32 *res_id)
+{
+ const struct btf_type *t;
+
+ t = skip_mods_and_typedefs(btf, id, NULL);
+ if (!btf_is_ptr(t))
+ return NULL;
+
+ t = skip_mods_and_typedefs(btf, t->type, res_id);
+
+ return btf_is_func_proto(t) ? t : NULL;
+}
+
static void get_obj_name(char *name, const char *file)
{
- /* Using basename() GNU version which doesn't modify arg. */
- strncpy(name, basename(file), MAX_OBJ_NAME_LEN - 1);
- name[MAX_OBJ_NAME_LEN - 1] = '\0';
+ char file_copy[PATH_MAX];
+
+ /* Using basename() POSIX version to be more portable. */
+ strncpy(file_copy, file, PATH_MAX - 1)[PATH_MAX - 1] = '\0';
+ strncpy(name, basename(file_copy), MAX_OBJ_NAME_LEN - 1)[MAX_OBJ_NAME_LEN - 1] = '\0';
if (str_has_suffix(name, ".o"))
name[strlen(name) - 2] = '\0';
sanitize_identifier(name);
@@ -103,6 +120,12 @@ static bool get_datasec_ident(const char *sec_name, char *buf, size_t buf_sz)
static const char *pfxs[] = { ".data", ".rodata", ".bss", ".kconfig" };
int i, n;
+ /* recognize hard coded LLVM section name */
+ if (strcmp(sec_name, ".addr_space.1") == 0) {
+ /* this is the name to use in skeleton */
+ snprintf(buf, buf_sz, "arena");
+ return true;
+ }
for (i = 0, n = ARRAY_SIZE(pfxs); i < n; i++) {
const char *pfx = pfxs[i];
@@ -231,8 +254,15 @@ static const struct btf_type *find_type_for_map(struct btf *btf, const char *map
return NULL;
}
-static bool is_internal_mmapable_map(const struct bpf_map *map, char *buf, size_t sz)
+static bool is_mmapable_map(const struct bpf_map *map, char *buf, size_t sz)
{
+ size_t tmp_sz;
+
+ if (bpf_map__type(map) == BPF_MAP_TYPE_ARENA && bpf_map__initial_value(map, &tmp_sz)) {
+ snprintf(buf, sz, "arena");
+ return true;
+ }
+
if (!bpf_map__is_internal(map) || !(bpf_map__map_flags(map) & BPF_F_MMAPABLE))
return false;
@@ -257,7 +287,7 @@ static int codegen_datasecs(struct bpf_object *obj, const char *obj_name)
bpf_object__for_each_map(map, obj) {
/* only generate definitions for memory-mapped internal maps */
- if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident)))
+ if (!is_mmapable_map(map, map_ident, sizeof(map_ident)))
continue;
sec = find_type_for_map(btf, map_ident);
@@ -310,7 +340,7 @@ static int codegen_subskel_datasecs(struct bpf_object *obj, const char *obj_name
bpf_object__for_each_map(map, obj) {
/* only generate definitions for memory-mapped internal maps */
- if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident)))
+ if (!is_mmapable_map(map, map_ident, sizeof(map_ident)))
continue;
sec = find_type_for_map(btf, map_ident);
@@ -356,7 +386,7 @@ static int codegen_subskel_datasecs(struct bpf_object *obj, const char *obj_name
*/
needs_typeof = btf_is_array(var) || btf_is_ptr_to_func_proto(btf, var);
if (needs_typeof)
- printf("typeof(");
+ printf("__typeof__(");
err = btf_dump__emit_type_decl(d, var_type_id, &opts);
if (err)
@@ -487,7 +517,7 @@ static void codegen_asserts(struct bpf_object *obj, const char *obj_name)
", obj_name);
bpf_object__for_each_map(map, obj) {
- if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident)))
+ if (!is_mmapable_map(map, map_ident, sizeof(map_ident)))
continue;
sec = find_type_for_map(btf, map_ident);
@@ -640,7 +670,7 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name)
continue;
if (bpf_map__is_internal(map) &&
(bpf_map__map_flags(map) & BPF_F_MMAPABLE))
- printf("\tskel_free_map_data(skel->%1$s, skel->maps.%1$s.initial_value, %2$zd);\n",
+ printf("\tskel_free_map_data(skel->%1$s, skel->maps.%1$s.initial_value, %2$zu);\n",
ident, bpf_map_mmap_sz(map));
codegen("\
\n\
@@ -658,10 +688,17 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name)
static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard)
{
DECLARE_LIBBPF_OPTS(gen_loader_opts, opts);
+ struct bpf_load_and_run_opts sopts = {};
+ char sig_buf[MAX_SIG_SIZE];
+ __u8 prog_sha[SHA256_DIGEST_LENGTH];
struct bpf_map *map;
+
char ident[256];
int err = 0;
+ if (sign_progs)
+ opts.gen_hash = true;
+
err = bpf_object__gen_loader(obj, &opts);
if (err)
return err;
@@ -671,6 +708,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
p_err("failed to load object file");
goto out;
}
+
/* If there was no error during load then gen_loader_opts
* are populated with the loader program.
*/
@@ -703,7 +741,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
const void *mmap_data = NULL;
size_t mmap_size = 0;
- if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
+ if (!is_mmapable_map(map, ident, sizeof(ident)))
continue;
codegen("\
@@ -750,8 +788,52 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
print_hex(opts.insns, opts.insns_sz);
codegen("\
\n\
- \"; \n\
- \n\
+ \";\n");
+
+ if (sign_progs) {
+ sopts.insns = opts.insns;
+ sopts.insns_sz = opts.insns_sz;
+ sopts.excl_prog_hash = prog_sha;
+ sopts.excl_prog_hash_sz = sizeof(prog_sha);
+ sopts.signature = sig_buf;
+ sopts.signature_sz = MAX_SIG_SIZE;
+
+ err = bpftool_prog_sign(&sopts);
+ if (err < 0) {
+ p_err("failed to sign program");
+ goto out;
+ }
+
+ codegen("\
+ \n\
+ static const char opts_sig[] __attribute__((__aligned__(8))) = \"\\\n\
+ ");
+ print_hex((const void *)sig_buf, sopts.signature_sz);
+ codegen("\
+ \n\
+ \";\n");
+
+ codegen("\
+ \n\
+ static const char opts_excl_hash[] __attribute__((__aligned__(8))) = \"\\\n\
+ ");
+ print_hex((const void *)prog_sha, sizeof(prog_sha));
+ codegen("\
+ \n\
+ \";\n");
+
+ codegen("\
+ \n\
+ opts.signature = (void *)opts_sig; \n\
+ opts.signature_sz = sizeof(opts_sig) - 1; \n\
+ opts.excl_prog_hash = (void *)opts_excl_hash; \n\
+ opts.excl_prog_hash_sz = sizeof(opts_excl_hash) - 1; \n\
+ opts.keyring_id = skel->keyring_id; \n\
+ ");
+ }
+
+ codegen("\
+ \n\
opts.ctx = (struct bpf_loader_ctx *)skel; \n\
opts.data_sz = sizeof(opts_data) - 1; \n\
opts.data = (void *)opts_data; \n\
@@ -765,7 +847,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
bpf_object__for_each_map(map, obj) {
const char *mmap_flags;
- if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
+ if (!is_mmapable_map(map, ident, sizeof(ident)))
continue;
if (bpf_map__map_flags(map) & BPF_F_RDONLY_PROG)
@@ -818,28 +900,45 @@ out:
}
static void
-codegen_maps_skeleton(struct bpf_object *obj, size_t map_cnt, bool mmaped)
+codegen_maps_skeleton(struct bpf_object *obj, size_t map_cnt, bool mmaped, bool populate_links)
{
struct bpf_map *map;
char ident[256];
- size_t i;
+ size_t i, map_sz;
if (!map_cnt)
return;
+	/* For backward compatibility with old libbpf versions that don't know
+	 * about the new struct bpf_map_skeleton definition (which adds the
+	 * link field), avoid specifying the increased size unless we
+	 * absolutely have to, i.e. when struct_ops maps are present.
+ */
+ map_sz = offsetof(struct bpf_map_skeleton, link);
+ if (populate_links) {
+ bpf_object__for_each_map(map, obj) {
+ if (bpf_map__type(map) == BPF_MAP_TYPE_STRUCT_OPS) {
+ map_sz = sizeof(struct bpf_map_skeleton);
+ break;
+ }
+ }
+ }
+
codegen("\
\n\
- \n\
+ \n\
/* maps */ \n\
s->map_cnt = %zu; \n\
- s->map_skel_sz = sizeof(*s->maps); \n\
- s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);\n\
+ s->map_skel_sz = %zu; \n\
+ s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt,\n\
+ sizeof(*s->maps) > %zu ? sizeof(*s->maps) : %zu);\n\
if (!s->maps) { \n\
err = -ENOMEM; \n\
goto err; \n\
} \n\
",
- map_cnt
+ map_cnt, map_sz, map_sz, map_sz
);
i = 0;
bpf_object__for_each_map(map, obj) {
@@ -848,15 +947,22 @@ codegen_maps_skeleton(struct bpf_object *obj, size_t map_cnt, bool mmaped)
codegen("\
\n\
- \n\
- s->maps[%zu].name = \"%s\"; \n\
- s->maps[%zu].map = &obj->maps.%s; \n\
+ \n\
+ map = (struct bpf_map_skeleton *)((char *)s->maps + %zu * s->map_skel_sz);\n\
+ map->name = \"%s\"; \n\
+ map->map = &obj->maps.%s; \n\
",
- i, bpf_map__name(map), i, ident);
+ i, bpf_map__name(map), ident);
/* memory-mapped internal maps */
- if (mmaped && is_internal_mmapable_map(map, ident, sizeof(ident))) {
- printf("\ts->maps[%zu].mmaped = (void **)&obj->%s;\n",
- i, ident);
+ if (mmaped && is_mmapable_map(map, ident, sizeof(ident))) {
+ printf("\tmap->mmaped = (void **)&obj->%s;\n", ident);
+ }
+
+ if (populate_links && bpf_map__type(map) == BPF_MAP_TYPE_STRUCT_OPS) {
+ codegen("\
+ \n\
+ map->link = &obj->links.%s; \n\
+ ", ident);
}
i++;
}
@@ -906,10 +1012,212 @@ codegen_progs_skeleton(struct bpf_object *obj, size_t prog_cnt, bool populate_li
}
}
+static int walk_st_ops_shadow_vars(struct btf *btf, const char *ident,
+ const struct btf_type *map_type, __u32 map_type_id)
+{
+ LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts, .indent_level = 3);
+ const struct btf_type *member_type;
+ __u32 offset, next_offset = 0;
+ const struct btf_member *m;
+ struct btf_dump *d = NULL;
+ const char *member_name;
+ __u32 member_type_id;
+ int i, err = 0, n;
+ int size;
+
+ d = btf_dump__new(btf, codegen_btf_dump_printf, NULL, NULL);
+ if (!d)
+ return -errno;
+
+ n = btf_vlen(map_type);
+ for (i = 0, m = btf_members(map_type); i < n; i++, m++) {
+ member_type = skip_mods_and_typedefs(btf, m->type, &member_type_id);
+ member_name = btf__name_by_offset(btf, m->name_off);
+
+ offset = m->offset / 8;
+ if (next_offset < offset)
+ printf("\t\t\tchar __padding_%d[%u];\n", i, offset - next_offset);
+
+ switch (btf_kind(member_type)) {
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ /* scalar type */
+ printf("\t\t\t");
+ opts.field_name = member_name;
+ err = btf_dump__emit_type_decl(d, member_type_id, &opts);
+ if (err) {
+ p_err("Failed to emit type declaration for %s: %d", member_name, err);
+ goto out;
+ }
+ printf(";\n");
+
+ size = btf__resolve_size(btf, member_type_id);
+ if (size < 0) {
+ p_err("Failed to resolve size of %s: %d\n", member_name, size);
+ err = size;
+ goto out;
+ }
+
+ next_offset = offset + size;
+ break;
+
+ case BTF_KIND_PTR:
+ if (resolve_func_ptr(btf, m->type, NULL)) {
+ /* Function pointer */
+ printf("\t\t\tstruct bpf_program *%s;\n", member_name);
+
+ next_offset = offset + sizeof(void *);
+ break;
+ }
+ /* All pointer types are unsupported except for
+ * function pointers.
+ */
+ fallthrough;
+
+ default:
+ /* Unsupported types
+ *
+ * Types other than scalar types and function
+ * pointers are currently not supported in order to
+ * prevent conflicts in the generated code caused
+ * by multiple definitions. For instance, if the
+ * struct type FOO is used in a struct_ops map,
+ * bpftool has to generate definitions for FOO,
+ * which may result in conflicts if FOO is defined
+ * in different skeleton files.
+ */
+ size = btf__resolve_size(btf, member_type_id);
+ if (size < 0) {
+ p_err("Failed to resolve size of %s: %d\n", member_name, size);
+ err = size;
+ goto out;
+ }
+ printf("\t\t\tchar __unsupported_%d[%d];\n", i, size);
+
+ next_offset = offset + size;
+ break;
+ }
+ }
+
+ /* Cannot fail since it must be a struct type */
+ size = btf__resolve_size(btf, map_type_id);
+ if (next_offset < (__u32)size)
+ printf("\t\t\tchar __padding_end[%u];\n", size - next_offset);
+
+out:
+ btf_dump__free(d);
+
+ return err;
+}
+
+/* Generate the pointer of the shadow type for a struct_ops map.
+ *
+ * This function adds a pointer of the shadow type for a struct_ops map.
+ * The members of a struct_ops map can be exported through a pointer to a
+ * shadow type. The user can access these members through the pointer.
+ *
+ * A shadow type does not include all members, only those of supported
+ * types: scalar types and function pointers. Function pointers are
+ * translated to pointers to struct bpf_program, and scalar types are
+ * emitted with their original type, stripped of any modifiers.
+ *
+ * Unsupported types will be translated to a char array to occupy the same
+ * space as the original field, being renamed as __unsupported_*. The user
+ * should treat these fields as opaque data.
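+ *
+ * For illustration, with hypothetical names: a struct_ops map "ops" in an
+ * object "test" whose value type has one int member and one function
+ * pointer member would get a shadow type roughly like:
+ *
+ *     struct test__ops__some_ops {
+ *             int flags;
+ *             struct bpf_program *handler;
+ *     } *ops;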
+ */
+static int gen_st_ops_shadow_type(const char *obj_name, struct btf *btf, const char *ident,
+ const struct bpf_map *map)
+{
+ const struct btf_type *map_type;
+ const char *type_name;
+ __u32 map_type_id;
+ int err;
+
+ map_type_id = bpf_map__btf_value_type_id(map);
+ if (map_type_id == 0)
+ return -EINVAL;
+ map_type = btf__type_by_id(btf, map_type_id);
+ if (!map_type)
+ return -EINVAL;
+
+ type_name = btf__name_by_offset(btf, map_type->name_off);
+
+ printf("\t\tstruct %s__%s__%s {\n", obj_name, ident, type_name);
+
+ err = walk_st_ops_shadow_vars(btf, ident, map_type, map_type_id);
+ if (err)
+ return err;
+
+ printf("\t\t} *%s;\n", ident);
+
+ return 0;
+}
+
+static int gen_st_ops_shadow(const char *obj_name, struct btf *btf, struct bpf_object *obj)
+{
+ int err, st_ops_cnt = 0;
+ struct bpf_map *map;
+ char ident[256];
+
+ if (!btf)
+ return 0;
+
+ /* Generate the pointers to shadow types of
+ * struct_ops maps.
+ */
+ bpf_object__for_each_map(map, obj) {
+ if (bpf_map__type(map) != BPF_MAP_TYPE_STRUCT_OPS)
+ continue;
+ if (!get_map_ident(map, ident, sizeof(ident)))
+ continue;
+
+ if (st_ops_cnt == 0) /* first struct_ops map */
+ printf("\tstruct {\n");
+ st_ops_cnt++;
+
+ err = gen_st_ops_shadow_type(obj_name, btf, ident, map);
+ if (err)
+ return err;
+ }
+
+ if (st_ops_cnt)
+ printf("\t} struct_ops;\n");
+
+ return 0;
+}
+
+/* Generate the code to initialize the pointers of shadow types. */
+static void gen_st_ops_shadow_init(struct btf *btf, struct bpf_object *obj)
+{
+ struct bpf_map *map;
+ char ident[256];
+
+ if (!btf)
+ return;
+
+	/* Initialize the pointers to shadow types of
+	 * struct_ops maps.
+ */
+ bpf_object__for_each_map(map, obj) {
+ if (bpf_map__type(map) != BPF_MAP_TYPE_STRUCT_OPS)
+ continue;
+ if (!get_map_ident(map, ident, sizeof(ident)))
+ continue;
+ codegen("\
+ \n\
+ obj->struct_ops.%1$s = (__typeof__(obj->struct_ops.%1$s))\n\
+ bpf_map__initial_value(obj->maps.%1$s, NULL);\n\
+ \n\
+ ", ident);
+ }
+}
+
static int do_skeleton(int argc, char **argv)
{
char header_guard[MAX_OBJ_NAME_LEN + sizeof("__SKEL_H__")];
- size_t map_cnt = 0, prog_cnt = 0, file_sz, mmap_sz;
+ size_t map_cnt = 0, prog_cnt = 0, attach_map_cnt = 0, file_sz, mmap_sz;
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts);
char obj_name[MAX_OBJ_NAME_LEN] = "", *obj_data;
struct bpf_object *obj = NULL;
@@ -984,7 +1292,7 @@ static int do_skeleton(int argc, char **argv)
err = -errno;
libbpf_strerror(err, err_buf, sizeof(err_buf));
p_err("failed to open BPF object file: %s", err_buf);
- goto out;
+ goto out_obj;
}
bpf_object__for_each_map(map, obj) {
@@ -993,6 +1301,10 @@ static int do_skeleton(int argc, char **argv)
bpf_map__name(map));
continue;
}
+
+ if (bpf_map__type(map) == BPF_MAP_TYPE_STRUCT_OPS)
+ attach_map_cnt++;
+
map_cnt++;
}
bpf_object__for_each_program(prog, obj) {
@@ -1028,6 +1340,8 @@ static int do_skeleton(int argc, char **argv)
#include <stdlib.h> \n\
#include <bpf/libbpf.h> \n\
\n\
+ #define BPF_SKEL_SUPPORTS_MAP_AUTO_ATTACH 1 \n\
+ \n\
struct %1$s { \n\
struct bpf_object_skeleton *skeleton; \n\
struct bpf_object *obj; \n\
@@ -1049,6 +1363,11 @@ static int do_skeleton(int argc, char **argv)
printf("\t} maps;\n");
}
+ btf = bpf_object__btf(obj);
+ err = gen_st_ops_shadow(obj_name, btf, obj);
+ if (err)
+ goto out;
+
if (prog_cnt) {
printf("\tstruct {\n");
bpf_object__for_each_program(prog, obj) {
@@ -1060,6 +1379,9 @@ static int do_skeleton(int argc, char **argv)
bpf_program__name(prog));
}
printf("\t} progs;\n");
+ }
+
+ if (prog_cnt + attach_map_cnt) {
printf("\tstruct {\n");
bpf_object__for_each_program(prog, obj) {
if (use_loader)
@@ -1069,10 +1391,29 @@ static int do_skeleton(int argc, char **argv)
printf("\t\tstruct bpf_link *%s;\n",
bpf_program__name(prog));
}
+
+ bpf_object__for_each_map(map, obj) {
+ if (!get_map_ident(map, ident, sizeof(ident)))
+ continue;
+ if (bpf_map__type(map) != BPF_MAP_TYPE_STRUCT_OPS)
+ continue;
+
+ if (use_loader)
+ printf("t\tint %s_fd;\n", ident);
+ else
+ printf("\t\tstruct bpf_link *%s;\n", ident);
+ }
+
printf("\t} links;\n");
}
- btf = bpf_object__btf(obj);
+ if (sign_progs) {
+ codegen("\
+ \n\
+ __s32 keyring_id; \n\
+ ");
+ }
+
if (btf) {
err = codegen_datasecs(obj, obj_name);
if (err)
@@ -1130,6 +1471,12 @@ static int do_skeleton(int argc, char **argv)
if (err) \n\
goto err_out; \n\
\n\
+ ", obj_name);
+
+ gen_st_ops_shadow_init(btf, obj);
+
+ codegen("\
+ \n\
return obj; \n\
err_out: \n\
%1$s__destroy(obj); \n\
@@ -1191,6 +1538,7 @@ static int do_skeleton(int argc, char **argv)
%1$s__create_skeleton(struct %1$s *obj) \n\
{ \n\
struct bpf_object_skeleton *s; \n\
+ struct bpf_map_skeleton *map __attribute__((unused));\n\
int err; \n\
\n\
s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\
@@ -1206,7 +1554,7 @@ static int do_skeleton(int argc, char **argv)
obj_name
);
- codegen_maps_skeleton(obj, map_cnt, true /*mmaped*/);
+ codegen_maps_skeleton(obj, map_cnt, true /*mmaped*/, true /*links*/);
codegen_progs_skeleton(obj, prog_cnt, true /*populate_links*/);
codegen("\
@@ -1263,6 +1611,7 @@ static int do_skeleton(int argc, char **argv)
err = 0;
out:
bpf_object__close(obj);
+out_obj:
if (obj_data)
munmap(obj_data, mmap_sz);
close(fd);
@@ -1389,7 +1738,7 @@ static int do_subskeleton(int argc, char **argv)
/* Also count all maps that have a name */
map_cnt++;
- if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
+ if (!is_mmapable_map(map, ident, sizeof(ident)))
continue;
map_type_id = bpf_map__btf_value_type_id(map);
@@ -1439,6 +1788,10 @@ static int do_subskeleton(int argc, char **argv)
printf("\t} maps;\n");
}
+ err = gen_st_ops_shadow(obj_name, btf, obj);
+ if (err)
+ goto out;
+
if (prog_cnt) {
printf("\tstruct {\n");
bpf_object__for_each_program(prog, obj) {
@@ -1477,6 +1830,7 @@ static int do_subskeleton(int argc, char **argv)
{ \n\
struct %1$s *obj; \n\
struct bpf_object_subskeleton *s; \n\
+ struct bpf_map_skeleton *map __attribute__((unused));\n\
int err; \n\
\n\
obj = (struct %1$s *)calloc(1, sizeof(*obj)); \n\
@@ -1507,7 +1861,7 @@ static int do_subskeleton(int argc, char **argv)
/* walk through each symbol and emit the runtime representation */
bpf_object__for_each_map(map, obj) {
- if (!is_internal_mmapable_map(map, ident, sizeof(ident)))
+ if (!is_mmapable_map(map, ident, sizeof(ident)))
continue;
map_type_id = bpf_map__btf_value_type_id(map);
@@ -1540,7 +1894,7 @@ static int do_subskeleton(int argc, char **argv)
}
}
- codegen_maps_skeleton(obj, map_cnt, false /*mmaped*/);
+ codegen_maps_skeleton(obj, map_cnt, false /*mmaped*/, false /*links*/);
codegen_progs_skeleton(obj, prog_cnt, false /*links*/);
codegen("\
@@ -1550,6 +1904,12 @@ static int do_subskeleton(int argc, char **argv)
if (err) \n\
goto err; \n\
\n\
+ ");
+
+ gen_st_ops_shadow_init(btf, obj);
+
+ codegen("\
+ \n\
return obj; \n\
err: \n\
%1$s__destroy(obj); \n\
@@ -1630,7 +1990,7 @@ static int do_help(int argc, char **argv)
" %1$s %2$s help\n"
"\n"
" " HELP_SPEC_OPTIONS " |\n"
- " {-L|--use-loader} }\n"
+ " {-L|--use-loader} | [ {-S|--sign } {-k} <private_key.pem> {-i} <certificate.x509> ]}\n"
"",
bin_name, "gen");
@@ -1795,7 +2155,7 @@ btfgen_mark_type(struct btfgen_info *info, unsigned int type_id, bool follow_poi
break;
/* tells if some other type needs to be handled */
default:
- p_err("unsupported kind: %s (%d)", btf_kind_str(btf_type), type_id);
+ p_err("unsupported kind: %s (%u)", btf_kind_str(btf_type), type_id);
return -EINVAL;
}
@@ -1847,7 +2207,7 @@ static int btfgen_record_field_relo(struct btfgen_info *info, struct bpf_core_sp
btf_type = btf__type_by_id(btf, type_id);
break;
default:
- p_err("unsupported kind: %s (%d)",
+ p_err("unsupported kind: %s (%u)",
btf_kind_str(btf_type), btf_type->type);
return -EINVAL;
}
@@ -1946,7 +2306,7 @@ static int btfgen_mark_type_match(struct btfgen_info *info, __u32 type_id, bool
}
/* tells if some other type needs to be handled */
default:
- p_err("unsupported kind: %s (%d)", btf_kind_str(btf_type), type_id);
+ p_err("unsupported kind: %s (%u)", btf_kind_str(btf_type), type_id);
return -EINVAL;
}
@@ -2127,15 +2487,6 @@ out:
return err;
}
-static int btfgen_remap_id(__u32 *type_id, void *ctx)
-{
- unsigned int *ids = ctx;
-
- *type_id = ids[*type_id];
-
- return 0;
-}
-
/* Generate BTF from relocation information previously recorded */
static struct btf *btfgen_get_btf(struct btfgen_info *info)
{
@@ -2215,10 +2566,15 @@ static struct btf *btfgen_get_btf(struct btfgen_info *info)
/* second pass: fix up type ids */
for (i = 1; i < btf__type_cnt(btf_new); i++) {
struct btf_type *btf_type = (struct btf_type *) btf__type_by_id(btf_new, i);
+ struct btf_field_iter it;
+ __u32 *type_id;
- err = btf_type_visit_type_ids(btf_type, btfgen_remap_id, ids);
+ err = btf_field_iter_init(&it, btf_type, BTF_FIELD_ITER_IDS);
if (err)
goto err_out;
+
+ while ((type_id = btf_field_iter_next(&it)))
+ *type_id = ids[*type_id];
}
free(ids);
diff --git a/tools/bpf/bpftool/iter.c b/tools/bpf/bpftool/iter.c
index 6b0e5202ca7a..df5f0d1e07e8 100644
--- a/tools/bpf/bpftool/iter.c
+++ b/tools/bpf/bpftool/iter.c
@@ -37,7 +37,7 @@ static int do_pin(int argc, char **argv)
return -1;
}
- map_fd = map_parse_fd(&argc, &argv);
+ map_fd = map_parse_fd(&argc, &argv, BPF_F_RDONLY);
if (map_fd < 0)
return -1;
@@ -76,7 +76,7 @@ static int do_pin(int argc, char **argv)
goto close_obj;
}
- err = mount_bpffs_for_pin(path, false);
+ err = mount_bpffs_for_file(path);
if (err)
goto close_link;
diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
index 7b8d9ec89ebd..8895b4e1f690 100644
--- a/tools/bpf/bpftool/jit_disasm.c
+++ b/tools/bpf/bpftool/jit_disasm.c
@@ -80,7 +80,8 @@ symbol_lookup_callback(__maybe_unused void *disasm_info,
static int
init_context(disasm_ctx_t *ctx, const char *arch,
__maybe_unused const char *disassembler_options,
- __maybe_unused unsigned char *image, __maybe_unused ssize_t len)
+ __maybe_unused unsigned char *image, __maybe_unused ssize_t len,
+ __maybe_unused __u64 func_ksym)
{
char *triple;
@@ -109,12 +110,13 @@ static void destroy_context(disasm_ctx_t *ctx)
}
static int
-disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc)
+disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc,
+ __u64 func_ksym)
{
char buf[256];
int count;
- count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, pc,
+ count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, func_ksym + pc,
buf, sizeof(buf));
if (json_output)
printf_json(buf);
@@ -136,8 +138,21 @@ int disasm_init(void)
#ifdef HAVE_LIBBFD_SUPPORT
#define DISASM_SPACER "\t"
+struct disasm_info {
+ struct disassemble_info info;
+ __u64 func_ksym;
+};
+
+static void disasm_print_addr(bfd_vma addr, struct disassemble_info *info)
+{
+ struct disasm_info *dinfo = container_of(info, struct disasm_info, info);
+
+ addr += dinfo->func_ksym;
+ generic_print_address(addr, info);
+}
+
typedef struct {
- struct disassemble_info *info;
+ struct disasm_info *info;
disassembler_ftype disassemble;
bfd *bfdf;
} disasm_ctx_t;
@@ -215,7 +230,7 @@ static int fprintf_json_styled(void *out,
static int init_context(disasm_ctx_t *ctx, const char *arch,
const char *disassembler_options,
- unsigned char *image, ssize_t len)
+ unsigned char *image, ssize_t len, __u64 func_ksym)
{
struct disassemble_info *info;
char tpath[PATH_MAX];
@@ -238,12 +253,13 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
}
bfdf = ctx->bfdf;
- ctx->info = malloc(sizeof(struct disassemble_info));
+ ctx->info = malloc(sizeof(struct disasm_info));
if (!ctx->info) {
p_err("mem alloc failed");
goto err_close;
}
- info = ctx->info;
+ ctx->info->func_ksym = func_ksym;
+ info = &ctx->info->info;
if (json_output)
init_disassemble_info_compat(info, stdout,
@@ -272,6 +288,7 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
info->disassembler_options = disassembler_options;
info->buffer = image;
info->buffer_length = len;
+ info->print_address_func = disasm_print_addr;
disassemble_init_for_target(info);
@@ -304,9 +321,10 @@ static void destroy_context(disasm_ctx_t *ctx)
static int
disassemble_insn(disasm_ctx_t *ctx, __maybe_unused unsigned char *image,
- __maybe_unused ssize_t len, int pc)
+ __maybe_unused ssize_t len, int pc,
+ __maybe_unused __u64 func_ksym)
{
- return ctx->disassemble(pc, ctx->info);
+ return ctx->disassemble(pc, &ctx->info->info);
}
int disasm_init(void)
@@ -325,13 +343,14 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
{
const struct bpf_line_info *linfo = NULL;
unsigned int nr_skip = 0;
- int count, i, pc = 0;
+ int count, i;
+ unsigned int pc = 0;
disasm_ctx_t ctx;
if (!len)
return -1;
- if (init_context(&ctx, arch, disassembler_options, image, len))
+ if (init_context(&ctx, arch, disassembler_options, image, len, func_ksym))
return -1;
if (json_output)
@@ -360,7 +379,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
printf("%4x:" DISASM_SPACER, pc);
}
- count = disassemble_insn(&ctx, image, len, pc);
+ count = disassemble_insn(&ctx, image, len, pc, func_ksym);
if (json_output) {
/* Operand array, was started in fprintf_json. Before
diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index cb46667a6b2e..bdcd717b0348 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -107,7 +107,7 @@ static int link_parse_fd(int *argc, char ***argv)
fd = bpf_link_get_fd_by_id(id);
if (fd < 0)
- p_err("failed to get link with ID %d: %s", id, strerror(errno));
+ p_err("failed to get link with ID %u: %s", id, strerror(errno));
return fd;
} else if (is_prefix(**argv, "pinned")) {
char *path;
@@ -117,7 +117,7 @@ static int link_parse_fd(int *argc, char ***argv)
path = **argv;
NEXT_ARGP();
- return open_obj_pinned_any(path, BPF_OBJ_LINK);
+ return open_obj_pinned_any(path, BPF_OBJ_LINK, NULL);
}
p_err("expected 'id' or 'pinned', got: '%s'?", **argv);
@@ -249,18 +249,85 @@ static int get_prog_info(int prog_id, struct bpf_prog_info *info)
return err;
}
-static int cmp_u64(const void *A, const void *B)
+struct addr_cookie {
+ __u64 addr;
+ __u64 cookie;
+};
+
+static int cmp_addr_cookie(const void *A, const void *B)
{
- const __u64 *a = A, *b = B;
+ const struct addr_cookie *a = A, *b = B;
+
+ if (a->addr == b->addr)
+ return 0;
+ return a->addr < b->addr ? -1 : 1;
+}
+
+static struct addr_cookie *
+get_addr_cookie_array(__u64 *addrs, __u64 *cookies, __u32 count)
+{
+ struct addr_cookie *data;
+ __u32 i;
+
+ data = calloc(count, sizeof(data[0]));
+ if (!data) {
+ p_err("mem alloc failed");
+ return NULL;
+ }
+ for (i = 0; i < count; i++) {
+ data[i].addr = addrs[i];
+ data[i].cookie = cookies[i];
+ }
+ qsort(data, count, sizeof(data[0]), cmp_addr_cookie);
+ return data;
+}
+
+static bool is_x86_ibt_enabled(void)
+{
+#if defined(__x86_64__)
+ struct kernel_config_option options[] = {
+ { "CONFIG_X86_KERNEL_IBT", },
+ };
+ char *values[ARRAY_SIZE(options)] = { };
+ bool ret;
+
+ if (read_kernel_config(options, ARRAY_SIZE(options), values, NULL))
+ return false;
+
+ ret = !!values[0];
+ free(values[0]);
+ return ret;
+#else
+ return false;
+#endif
+}
- return *a - *b;
+static bool
+symbol_matches_target(__u64 sym_addr, __u64 target_addr, bool is_ibt_enabled)
+{
+ if (sym_addr == target_addr)
+ return true;
+
+ /*
+ * On x86_64 architectures with CET (Control-flow Enforcement Technology),
+ * function entry points have a 4-byte 'endbr' instruction prefix.
+ * This causes kprobe hooks to target the address *after* 'endbr'
+ * (symbol address + 4), preserving the CET instruction.
+ * Here we check if the symbol address matches the hook target address
+ * minus 4, indicating a CET-enabled function entry point.
+ */
+ if (is_ibt_enabled && sym_addr == target_addr - 4)
+ return true;
+
+ return false;
}
static void
show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
{
+ struct addr_cookie *data;
__u32 i, j = 0;
- __u64 *addrs;
+ bool is_ibt_enabled;
jsonw_bool_field(json_wtr, "retprobe",
info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN);
@@ -268,17 +335,25 @@ show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
jsonw_uint_field(json_wtr, "missed", info->kprobe_multi.missed);
jsonw_name(json_wtr, "funcs");
jsonw_start_array(json_wtr);
- addrs = u64_to_ptr(info->kprobe_multi.addrs);
- qsort(addrs, info->kprobe_multi.count, sizeof(addrs[0]), cmp_u64);
+ data = get_addr_cookie_array(u64_to_ptr(info->kprobe_multi.addrs),
+ u64_to_ptr(info->kprobe_multi.cookies),
+ info->kprobe_multi.count);
+ if (!data)
+ return;
/* Load it once for all. */
if (!dd.sym_count)
kernel_syms_load(&dd);
+ if (!dd.sym_count)
+ goto error;
+
+ is_ibt_enabled = is_x86_ibt_enabled();
for (i = 0; i < dd.sym_count; i++) {
- if (dd.sym_mapping[i].address != addrs[j])
+ if (!symbol_matches_target(dd.sym_mapping[i].address,
+ data[j].addr, is_ibt_enabled))
continue;
jsonw_start_object(json_wtr);
- jsonw_uint_field(json_wtr, "addr", dd.sym_mapping[i].address);
+ jsonw_uint_field(json_wtr, "addr", (unsigned long)data[j].addr);
jsonw_string_field(json_wtr, "func", dd.sym_mapping[i].name);
/* Print null if it is vmlinux */
if (dd.sym_mapping[i].module[0] == '\0') {
@@ -287,11 +362,14 @@ show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
} else {
jsonw_string_field(json_wtr, "module", dd.sym_mapping[i].module);
}
+ jsonw_uint_field(json_wtr, "cookie", data[j].cookie);
jsonw_end_object(json_wtr);
if (j++ == info->kprobe_multi.count)
break;
}
jsonw_end_array(json_wtr);
+error:
+ free(data);
}
static __u64 *u64_to_arr(__u64 val)
@@ -334,6 +412,7 @@ show_perf_event_kprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
u64_to_ptr(info->perf_event.kprobe.func_name));
jsonw_uint_field(wtr, "offset", info->perf_event.kprobe.offset);
jsonw_uint_field(wtr, "missed", info->perf_event.kprobe.missed);
+ jsonw_uint_field(wtr, "cookie", info->perf_event.kprobe.cookie);
}
static void
@@ -343,6 +422,8 @@ show_perf_event_uprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
jsonw_string_field(wtr, "file",
u64_to_ptr(info->perf_event.uprobe.file_name));
jsonw_uint_field(wtr, "offset", info->perf_event.uprobe.offset);
+ jsonw_uint_field(wtr, "cookie", info->perf_event.uprobe.cookie);
+ jsonw_uint_field(wtr, "ref_ctr_offset", info->perf_event.uprobe.ref_ctr_offset);
}
static void
@@ -350,6 +431,7 @@ show_perf_event_tracepoint_json(struct bpf_link_info *info, json_writer_t *wtr)
{
jsonw_string_field(wtr, "tracepoint",
u64_to_ptr(info->perf_event.tracepoint.tp_name));
+ jsonw_uint_field(wtr, "cookie", info->perf_event.tracepoint.cookie);
}
static char *perf_config_hw_cache_str(__u64 config)
@@ -366,7 +448,7 @@ static char *perf_config_hw_cache_str(__u64 config)
if (hw_cache)
snprintf(str, PERF_HW_CACHE_LEN, "%s-", hw_cache);
else
- snprintf(str, PERF_HW_CACHE_LEN, "%lld-", config & 0xff);
+ snprintf(str, PERF_HW_CACHE_LEN, "%llu-", config & 0xff);
op = perf_event_name(evsel__hw_cache_op, (config >> 8) & 0xff);
if (op)
@@ -374,7 +456,7 @@ static char *perf_config_hw_cache_str(__u64 config)
"%s-", op);
else
snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
- "%lld-", (config >> 8) & 0xff);
+ "%llu-", (config >> 8) & 0xff);
result = perf_event_name(evsel__hw_cache_result, config >> 16);
if (result)
@@ -382,7 +464,7 @@ static char *perf_config_hw_cache_str(__u64 config)
"%s", result);
else
snprintf(str + strlen(str), PERF_HW_CACHE_LEN - strlen(str),
- "%lld", config >> 16);
+ "%llu", config >> 16);
return str;
}
@@ -426,6 +508,8 @@ show_perf_event_event_json(struct bpf_link_info *info, json_writer_t *wtr)
else
jsonw_uint_field(wtr, "event_config", config);
+ jsonw_uint_field(wtr, "cookie", info->perf_event.event.cookie);
+
if (type == PERF_TYPE_HW_CACHE && perf_config)
free((void *)perf_config);
}
@@ -444,6 +528,7 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
case BPF_LINK_TYPE_RAW_TRACEPOINT:
jsonw_string_field(json_wtr, "tp_name",
u64_to_ptr(info->raw_tracepoint.tp_name));
+ jsonw_uint_field(json_wtr, "cookie", info->raw_tracepoint.cookie);
break;
case BPF_LINK_TYPE_TRACING:
err = get_prog_info(info->prog_id, &prog_info);
@@ -461,6 +546,7 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
json_wtr);
jsonw_uint_field(json_wtr, "target_obj_id", info->tracing.target_obj_id);
jsonw_uint_field(json_wtr, "target_btf_id", info->tracing.target_btf_id);
+ jsonw_uint_field(json_wtr, "cookie", info->tracing.cookie);
break;
case BPF_LINK_TYPE_CGROUP:
jsonw_lluint_field(json_wtr, "cgroup_id",
@@ -486,6 +572,10 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
show_link_ifindex_json(info->netkit.ifindex, json_wtr);
show_link_attach_type_json(info->netkit.attach_type, json_wtr);
break;
+ case BPF_LINK_TYPE_SOCKMAP:
+ jsonw_uint_field(json_wtr, "map_id", info->sockmap.map_id);
+ show_link_attach_type_json(info->sockmap.attach_type, json_wtr);
+ break;
case BPF_LINK_TYPE_XDP:
show_link_ifindex_json(info->xdp.ifindex, json_wtr);
break;
@@ -579,7 +669,7 @@ static void show_link_ifindex_plain(__u32 ifindex)
else
snprintf(devname, sizeof(devname), "(detached)");
if (ret)
- snprintf(devname, sizeof(devname), "%s(%d)",
+ snprintf(devname, sizeof(devname), "%s(%u)",
tmpname, ifindex);
printf("ifindex %s ", devname);
}
@@ -655,7 +745,7 @@ void netfilter_dump_plain(const struct bpf_link_info *info)
if (pfname)
printf("\n\t%s", pfname);
else
- printf("\n\tpf: %d", pf);
+ printf("\n\tpf: %u", pf);
if (hookname)
printf(" %s", hookname);
@@ -670,8 +760,9 @@ void netfilter_dump_plain(const struct bpf_link_info *info)
static void show_kprobe_multi_plain(struct bpf_link_info *info)
{
+ struct addr_cookie *data;
__u32 i, j = 0;
- __u64 *addrs;
+ bool is_ibt_enabled;
if (!info->kprobe_multi.count)
return;
@@ -683,21 +774,26 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
printf("func_cnt %u ", info->kprobe_multi.count);
if (info->kprobe_multi.missed)
printf("missed %llu ", info->kprobe_multi.missed);
- addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs);
- qsort(addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64);
+ data = get_addr_cookie_array(u64_to_ptr(info->kprobe_multi.addrs),
+ u64_to_ptr(info->kprobe_multi.cookies),
+ info->kprobe_multi.count);
+ if (!data)
+ return;
/* Load it once for all. */
if (!dd.sym_count)
kernel_syms_load(&dd);
if (!dd.sym_count)
- return;
+ goto error;
- printf("\n\t%-16s %s", "addr", "func [module]");
+ is_ibt_enabled = is_x86_ibt_enabled();
+ printf("\n\t%-16s %-16s %s", "addr", "cookie", "func [module]");
for (i = 0; i < dd.sym_count; i++) {
- if (dd.sym_mapping[i].address != addrs[j])
+ if (!symbol_matches_target(dd.sym_mapping[i].address,
+ data[j].addr, is_ibt_enabled))
continue;
- printf("\n\t%016lx %s",
- dd.sym_mapping[i].address, dd.sym_mapping[i].name);
+ printf("\n\t%016lx %-16llx %s",
+ (unsigned long)data[j].addr, data[j].cookie, dd.sym_mapping[i].name);
if (dd.sym_mapping[i].module[0] != '\0')
printf(" [%s] ", dd.sym_mapping[i].module);
else
@@ -706,6 +802,8 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
if (j++ == info->kprobe_multi.count)
break;
}
+error:
+ free(data);
}
static void show_uprobe_multi_plain(struct bpf_link_info *info)
@@ -724,7 +822,7 @@ static void show_uprobe_multi_plain(struct bpf_link_info *info)
printf("func_cnt %u ", info->uprobe_multi.count);
if (info->uprobe_multi.pid)
- printf("pid %d ", info->uprobe_multi.pid);
+ printf("pid %u ", info->uprobe_multi.pid);
printf("\n\t%-16s %-16s %-16s", "offset", "ref_ctr_offset", "cookies");
for (i = 0; i < info->uprobe_multi.count; i++) {
@@ -754,6 +852,8 @@ static void show_perf_event_kprobe_plain(struct bpf_link_info *info)
printf("+%#x", info->perf_event.kprobe.offset);
if (info->perf_event.kprobe.missed)
printf(" missed %llu", info->perf_event.kprobe.missed);
+ if (info->perf_event.kprobe.cookie)
+ printf(" cookie %llu", info->perf_event.kprobe.cookie);
printf(" ");
}
@@ -770,6 +870,10 @@ static void show_perf_event_uprobe_plain(struct bpf_link_info *info)
else
printf("\n\tuprobe ");
printf("%s+%#x ", buf, info->perf_event.uprobe.offset);
+ if (info->perf_event.uprobe.cookie)
+ printf("cookie %llu ", info->perf_event.uprobe.cookie);
+ if (info->perf_event.uprobe.ref_ctr_offset)
+ printf("ref_ctr_offset 0x%llx ", info->perf_event.uprobe.ref_ctr_offset);
}
static void show_perf_event_tracepoint_plain(struct bpf_link_info *info)
@@ -781,6 +885,8 @@ static void show_perf_event_tracepoint_plain(struct bpf_link_info *info)
return;
printf("\n\ttracepoint %s ", buf);
+ if (info->perf_event.tracepoint.cookie)
+ printf("cookie %llu ", info->perf_event.tracepoint.cookie);
}
static void show_perf_event_event_plain(struct bpf_link_info *info)
@@ -802,6 +908,9 @@ static void show_perf_event_event_plain(struct bpf_link_info *info)
else
printf("%llu ", config);
+ if (info->perf_event.event.cookie)
+ printf("cookie %llu ", info->perf_event.event.cookie);
+
if (type == PERF_TYPE_HW_CACHE && perf_config)
free((void *)perf_config);
}
@@ -818,6 +927,8 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
case BPF_LINK_TYPE_RAW_TRACEPOINT:
printf("\n\ttp '%s' ",
(const char *)u64_to_ptr(info->raw_tracepoint.tp_name));
+ if (info->raw_tracepoint.cookie)
+ printf("cookie %llu ", info->raw_tracepoint.cookie);
break;
case BPF_LINK_TYPE_TRACING:
err = get_prog_info(info->prog_id, &prog_info);
@@ -836,6 +947,8 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
printf("\n\ttarget_obj_id %u target_btf_id %u ",
info->tracing.target_obj_id,
info->tracing.target_btf_id);
+ if (info->tracing.cookie)
+ printf("\n\tcookie %llu ", info->tracing.cookie);
break;
case BPF_LINK_TYPE_CGROUP:
printf("\n\tcgroup_id %zu ", (size_t)info->cgroup.cgroup_id);
@@ -861,6 +974,11 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
show_link_ifindex_plain(info->netkit.ifindex);
show_link_attach_type_plain(info->netkit.attach_type);
break;
+ case BPF_LINK_TYPE_SOCKMAP:
+ printf("\n\t");
+ printf("map_id %u ", info->sockmap.map_id);
+ show_link_attach_type_plain(info->sockmap.attach_type);
+ break;
case BPF_LINK_TYPE_XDP:
printf("\n\t");
show_link_ifindex_plain(info->xdp.ifindex);
@@ -952,6 +1070,14 @@ again:
return -ENOMEM;
}
info.kprobe_multi.addrs = ptr_to_u64(addrs);
+ cookies = calloc(count, sizeof(__u64));
+ if (!cookies) {
+ p_err("mem alloc failed");
+ free(addrs);
+ close(fd);
+ return -ENOMEM;
+ }
+ info.kprobe_multi.cookies = ptr_to_u64(cookies);
goto again;
}
}
@@ -977,7 +1103,7 @@ again:
cookies = calloc(count, sizeof(__u64));
if (!cookies) {
p_err("mem alloc failed");
- free(cookies);
+ free(ref_ctr_offsets);
free(offsets);
close(fd);
return -ENOMEM;
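
The cmp_u64 -> addr_cookie change above matters because the kernel reports kprobe_multi addresses and cookies as two parallel arrays: sorting the address array on its own would leave each cookie attached to the wrong address. A minimal standalone sketch of the same pairing-before-sorting idea, with illustrative names rather than the bpftool code itself:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Keep two parallel arrays paired while sorting by address. */
struct pair { uint64_t addr, cookie; };

static int cmp_pair(const void *A, const void *B)
{
	const struct pair *a = A, *b = B;

	/* Compare explicitly instead of returning a->addr - b->addr,
	 * which would truncate a 64-bit difference to int. */
	if (a->addr == b->addr)
		return 0;
	return a->addr < b->addr ? -1 : 1;
}

int main(void)
{
	uint64_t addrs[]   = { 0xffffffffc0002000ULL, 0xffffffffc0001000ULL };
	uint64_t cookies[] = { 2, 1 };
	struct pair p[2];
	int i;

	for (i = 0; i < 2; i++)
		p[i] = (struct pair){ addrs[i], cookies[i] };
	qsort(p, 2, sizeof(p[0]), cmp_pair);
	for (i = 0; i < 2; i++)
		printf("%#llx -> cookie %llu\n",
		       (unsigned long long)p[i].addr,
		       (unsigned long long)p[i].cookie);
	return 0;
}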
diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
index 08d0ac543c67..a829a6a49037 100644
--- a/tools/bpf/bpftool/main.c
+++ b/tools/bpf/bpftool/main.c
@@ -33,6 +33,9 @@ bool relaxed_maps;
bool use_loader;
struct btf *base_btf;
struct hashmap *refs_table;
+bool sign_progs;
+const char *private_key_path;
+const char *cert_path;
static void __noreturn clean_and_exit(int i)
{
@@ -61,7 +64,7 @@ static int do_help(int argc, char **argv)
" %s batch file FILE\n"
" %s version\n"
"\n"
- " OBJECT := { prog | map | link | cgroup | perf | net | feature | btf | gen | struct_ops | iter }\n"
+ " OBJECT := { prog | map | link | cgroup | perf | net | feature | btf | gen | struct_ops | iter | token }\n"
" " HELP_SPEC_OPTIONS " |\n"
" {-V|--version} }\n"
"",
@@ -87,6 +90,7 @@ static const struct cmd commands[] = {
{ "gen", do_gen },
{ "struct_ops", do_struct_ops },
{ "iter", do_iter },
+ { "token", do_token },
{ "version", do_version },
{ 0 }
};
@@ -152,7 +156,7 @@ static int do_version(int argc, char **argv)
BPFTOOL_MINOR_VERSION, BPFTOOL_PATCH_VERSION);
#endif
jsonw_name(json_wtr, "libbpf_version");
- jsonw_printf(json_wtr, "\"%d.%d\"",
+ jsonw_printf(json_wtr, "\"%u.%u\"",
libbpf_major_version(), libbpf_minor_version());
jsonw_name(json_wtr, "features");
@@ -370,7 +374,7 @@ static int do_batch(int argc, char **argv)
while ((cp = strstr(buf, "\\\n")) != NULL) {
if (!fgets(contline, sizeof(contline), fp) ||
strlen(contline) == 0) {
- p_err("missing continuation line on command %d",
+ p_err("missing continuation line on command %u",
lines);
err = -1;
goto err_close;
@@ -381,7 +385,7 @@ static int do_batch(int argc, char **argv)
*cp = '\0';
if (strlen(buf) + strlen(contline) + 1 > sizeof(buf)) {
- p_err("command %d is too long", lines);
+ p_err("command %u is too long", lines);
err = -1;
goto err_close;
}
@@ -423,7 +427,7 @@ static int do_batch(int argc, char **argv)
err = -1;
} else {
if (!json_output)
- printf("processed %d commands\n", lines);
+ printf("processed %u commands\n", lines);
}
err_close:
if (fp != stdin)
@@ -447,6 +451,7 @@ int main(int argc, char **argv)
{ "nomount", no_argument, NULL, 'n' },
{ "debug", no_argument, NULL, 'd' },
{ "use-loader", no_argument, NULL, 'L' },
+ { "sign", no_argument, NULL, 'S' },
{ "base-btf", required_argument, NULL, 'B' },
{ 0 }
};
@@ -473,7 +478,7 @@ int main(int argc, char **argv)
bin_name = "bpftool";
opterr = 0;
- while ((opt = getopt_long(argc, argv, "VhpjfLmndB:l",
+ while ((opt = getopt_long(argc, argv, "VhpjfLmndSi:k:B:l",
options, NULL)) >= 0) {
switch (opt) {
case 'V':
@@ -519,6 +524,16 @@ int main(int argc, char **argv)
case 'L':
use_loader = true;
break;
+ case 'S':
+ sign_progs = true;
+ use_loader = true;
+ break;
+ case 'k':
+ private_key_path = optarg;
+ break;
+ case 'i':
+ cert_path = optarg;
+ break;
default:
p_err("unrecognized option '%s'", argv[optind - 1]);
if (json_output)
@@ -533,10 +548,20 @@ int main(int argc, char **argv)
if (argc < 0)
usage();
- if (version_requested)
- return do_version(argc, argv);
+ if (sign_progs && (private_key_path == NULL || cert_path == NULL)) {
+ p_err("-i <identity_x509_cert> and -k <private_key> must be supplied with -S for signing");
+ return -EINVAL;
+ }
+
+ if (!sign_progs && (private_key_path != NULL || cert_path != NULL)) {
+ p_err("--sign (or -S) must be explicitly passed with -i <identity_x509_cert> and -k <private_key> to sign the programs");
+ return -EINVAL;
+ }
- ret = cmd_select(commands, argc, argv, do_help);
+ if (version_requested)
+ ret = do_version(argc, argv);
+ else
+ ret = cmd_select(commands, argc, argv, do_help);
if (json_output)
jsonw_destroy(&json_wtr);
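
Based on the option parsing above (-S sets both sign_progs and use_loader, -k takes the private key, -i the X.509 certificate), a signing invocation would look roughly like the following; the file names are placeholders, not files shipped with bpftool:

	# Sign the loader program and load it; -S implies -L (light skeleton loader).
	bpftool -S -k signing_key.pem -i certificate.x509 prog load prog.bpf.o /sys/fs/bpf/prog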
diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
index b8bb08d10dec..1130299cede0 100644
--- a/tools/bpf/bpftool/main.h
+++ b/tools/bpf/bpftool/main.h
@@ -6,15 +6,21 @@
/* BFD and kernel.h both define GCC_VERSION, differently */
#undef GCC_VERSION
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
#include <stdbool.h>
#include <stdio.h>
+#include <errno.h>
#include <stdlib.h>
+#include <bpf/skel_internal.h>
#include <linux/bpf.h>
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <bpf/hashmap.h>
#include <bpf/libbpf.h>
+#include <bpf/bpf.h>
#include "json_writer.h"
@@ -51,6 +57,7 @@ static inline void *u64_to_ptr(__u64 ptr)
})
#define ERR_MAX_LEN 1024
+#define MAX_SIG_SIZE 4096
#define BPF_TAG_FMT "%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
@@ -84,6 +91,9 @@ extern bool relaxed_maps;
extern bool use_loader;
extern struct btf *base_btf;
extern struct hashmap *refs_table;
+extern bool sign_progs;
+extern const char *private_key_path;
+extern const char *cert_path;
void __printf(1, 2) p_err(const char *fmt, ...);
void __printf(1, 2) p_info(const char *fmt, ...);
@@ -140,9 +150,12 @@ void get_prog_full_name(const struct bpf_prog_info *prog_info, int prog_fd,
int get_fd_type(int fd);
const char *get_fd_type_name(enum bpf_obj_type type);
char *get_fdinfo(int fd, const char *key);
-int open_obj_pinned(const char *path, bool quiet);
-int open_obj_pinned_any(const char *path, enum bpf_obj_type exp_type);
-int mount_bpffs_for_pin(const char *name, bool is_dir);
+int open_obj_pinned(const char *path, bool quiet,
+ const struct bpf_obj_get_opts *opts);
+int open_obj_pinned_any(const char *path, enum bpf_obj_type exp_type,
+ const struct bpf_obj_get_opts *opts);
+int mount_bpffs_for_file(const char *file_name);
+int create_and_mount_bpffs_dir(const char *dir_name);
int do_pin_any(int argc, char **argv, int (*get_fd_by_id)(int *, char ***));
int do_pin_fd(int fd, const char *name);
@@ -162,14 +175,15 @@ int do_tracelog(int argc, char **arg) __weak;
int do_feature(int argc, char **argv) __weak;
int do_struct_ops(int argc, char **argv) __weak;
int do_iter(int argc, char **argv) __weak;
+int do_token(int argc, char **argv) __weak;
int parse_u32_arg(int *argc, char ***argv, __u32 *val, const char *what);
int prog_parse_fd(int *argc, char ***argv);
int prog_parse_fds(int *argc, char ***argv, int **fds);
-int map_parse_fd(int *argc, char ***argv);
-int map_parse_fds(int *argc, char ***argv, int **fds);
+int map_parse_fd(int *argc, char ***argv, __u32 open_flags);
+int map_parse_fds(int *argc, char ***argv, int **fds, __u32 open_flags);
int map_parse_fd_and_info(int *argc, char ***argv, struct bpf_map_info *info,
- __u32 *info_len);
+ __u32 *info_len, __u32 open_flags);
struct bpf_prog_linfo;
#if defined(HAVE_LLVM_SUPPORT) || defined(HAVE_LIBBFD_SUPPORT)
@@ -270,4 +284,15 @@ int pathname_concat(char *buf, int buf_sz, const char *path,
/* print netfilter bpf_link info */
void netfilter_dump_plain(const struct bpf_link_info *info);
void netfilter_dump_json(const struct bpf_link_info *info, json_writer_t *wtr);
+
+struct kernel_config_option {
+ const char *name;
+ bool macro_dump;
+};
+
+int read_kernel_config(const struct kernel_config_option *requested_options,
+ size_t num_options, char **out_values,
+ const char *define_prefix);
+int bpftool_prog_sign(struct bpf_load_and_run_opts *opts);
+__u32 register_session_key(const char *key_der_path);
#endif
diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
index f98f7bbea2b1..c9de44a45778 100644
--- a/tools/bpf/bpftool/map.c
+++ b/tools/bpf/bpftool/map.c
@@ -285,7 +285,7 @@ static void print_entry_plain(struct bpf_map_info *info, unsigned char *key,
}
if (info->value_size) {
for (i = 0; i < n; i++) {
- printf("value (CPU %02d):%c",
+ printf("value (CPU %02u):%c",
i, info->value_size > 16 ? '\n' : ' ');
fprint_hex(stdout, value + i * step,
info->value_size, " ");
@@ -316,7 +316,7 @@ static char **parse_bytes(char **argv, const char *name, unsigned char *val,
}
if (i != n) {
- p_err("%s expected %d bytes got %d", name, n, i);
+ p_err("%s expected %u bytes got %u", name, n, i);
return NULL;
}
@@ -337,9 +337,9 @@ static void fill_per_cpu_value(struct bpf_map_info *info, void *value)
memcpy(value + i * step, value, info->value_size);
}
-static int parse_elem(char **argv, struct bpf_map_info *info,
- void *key, void *value, __u32 key_size, __u32 value_size,
- __u32 *flags, __u32 **value_fd)
+static int parse_elem(char **argv, struct bpf_map_info *info, void *key,
+ void *value, __u32 key_size, __u32 value_size,
+ __u32 *flags, __u32 **value_fd, __u32 open_flags)
{
if (!*argv) {
if (!key && !value)
@@ -362,7 +362,7 @@ static int parse_elem(char **argv, struct bpf_map_info *info,
return -1;
return parse_elem(argv, info, NULL, value, key_size, value_size,
- flags, value_fd);
+ flags, value_fd, open_flags);
} else if (is_prefix(*argv, "value")) {
int fd;
@@ -388,7 +388,7 @@ static int parse_elem(char **argv, struct bpf_map_info *info,
return -1;
}
- fd = map_parse_fd(&argc, &argv);
+ fd = map_parse_fd(&argc, &argv, open_flags);
if (fd < 0)
return -1;
@@ -424,7 +424,7 @@ static int parse_elem(char **argv, struct bpf_map_info *info,
}
return parse_elem(argv, info, key, NULL, key_size, value_size,
- flags, NULL);
+ flags, NULL, open_flags);
} else if (is_prefix(*argv, "any") || is_prefix(*argv, "noexist") ||
is_prefix(*argv, "exist")) {
if (!flags) {
@@ -440,7 +440,7 @@ static int parse_elem(char **argv, struct bpf_map_info *info,
*flags = BPF_EXIST;
return parse_elem(argv + 1, info, key, value, key_size,
- value_size, NULL, value_fd);
+ value_size, NULL, value_fd, open_flags);
}
p_err("expected key or value, got: %s", *argv);
@@ -462,7 +462,7 @@ static void show_map_header_json(struct bpf_map_info *info, json_writer_t *wtr)
jsonw_string_field(wtr, "name", info->name);
jsonw_name(wtr, "flags");
- jsonw_printf(wtr, "%d", info->map_flags);
+ jsonw_printf(wtr, "%u", info->map_flags);
}
static int show_map_close_json(int fd, struct bpf_map_info *info)
@@ -588,7 +588,7 @@ static int show_map_close_plain(int fd, struct bpf_map_info *info)
if (prog_type_str)
printf("owner_prog_type %s ", prog_type_str);
else
- printf("owner_prog_type %d ", prog_type);
+ printf("owner_prog_type %u ", prog_type);
}
if (owner_jited)
printf("owner%s jited",
@@ -615,7 +615,7 @@ static int show_map_close_plain(int fd, struct bpf_map_info *info)
printf("\n\t");
if (info->btf_id)
- printf("btf_id %d", info->btf_id);
+ printf("btf_id %u", info->btf_id);
if (frozen)
printf("%sfrozen", info->btf_id ? " " : "");
@@ -639,7 +639,7 @@ static int do_show_subset(int argc, char **argv)
p_err("mem alloc failed");
return -1;
}
- nb_fds = map_parse_fds(&argc, &argv, &fds);
+ nb_fds = map_parse_fds(&argc, &argv, &fds, BPF_F_RDONLY);
if (nb_fds < 1)
goto exit_free;
@@ -672,12 +672,15 @@ exit_free:
static int do_show(int argc, char **argv)
{
+ LIBBPF_OPTS(bpf_get_fd_by_id_opts, opts);
struct bpf_map_info info = {};
__u32 len = sizeof(info);
__u32 id = 0;
int err;
int fd;
+ opts.open_flags = BPF_F_RDONLY;
+
if (show_pinned) {
map_table = hashmap__new(hash_fn_for_key_as_id,
equal_fn_for_key_as_id, NULL);
@@ -707,7 +710,7 @@ static int do_show(int argc, char **argv)
break;
}
- fd = bpf_map_get_fd_by_id(id);
+ fd = bpf_map_get_fd_by_id_opts(id, &opts);
if (fd < 0) {
if (errno == ENOENT)
continue;
@@ -909,7 +912,7 @@ static int do_dump(int argc, char **argv)
p_err("mem alloc failed");
return -1;
}
- nb_fds = map_parse_fds(&argc, &argv, &fds);
+ nb_fds = map_parse_fds(&argc, &argv, &fds, BPF_F_RDONLY);
if (nb_fds < 1)
goto exit_free;
@@ -997,7 +1000,7 @@ static int do_update(int argc, char **argv)
if (argc < 2)
usage();
- fd = map_parse_fd_and_info(&argc, &argv, &info, &len);
+ fd = map_parse_fd_and_info(&argc, &argv, &info, &len, 0);
if (fd < 0)
return -1;
@@ -1006,7 +1009,7 @@ static int do_update(int argc, char **argv)
goto exit_free;
err = parse_elem(argv, &info, key, value, info.key_size,
- info.value_size, &flags, &value_fd);
+ info.value_size, &flags, &value_fd, 0);
if (err)
goto exit_free;
@@ -1076,7 +1079,7 @@ static int do_lookup(int argc, char **argv)
if (argc < 2)
usage();
- fd = map_parse_fd_and_info(&argc, &argv, &info, &len);
+ fd = map_parse_fd_and_info(&argc, &argv, &info, &len, BPF_F_RDONLY);
if (fd < 0)
return -1;
@@ -1084,7 +1087,8 @@ static int do_lookup(int argc, char **argv)
if (err)
goto exit_free;
- err = parse_elem(argv, &info, key, NULL, info.key_size, 0, NULL, NULL);
+ err = parse_elem(argv, &info, key, NULL, info.key_size, 0, NULL, NULL,
+ BPF_F_RDONLY);
if (err)
goto exit_free;
@@ -1127,7 +1131,7 @@ static int do_getnext(int argc, char **argv)
if (argc < 2)
usage();
- fd = map_parse_fd_and_info(&argc, &argv, &info, &len);
+ fd = map_parse_fd_and_info(&argc, &argv, &info, &len, BPF_F_RDONLY);
if (fd < 0)
return -1;
@@ -1140,8 +1144,8 @@ static int do_getnext(int argc, char **argv)
}
if (argc) {
- err = parse_elem(argv, &info, key, NULL, info.key_size, 0,
- NULL, NULL);
+ err = parse_elem(argv, &info, key, NULL, info.key_size, 0, NULL,
+ NULL, BPF_F_RDONLY);
if (err)
goto exit_free;
} else {
@@ -1198,7 +1202,7 @@ static int do_delete(int argc, char **argv)
if (argc < 2)
usage();
- fd = map_parse_fd_and_info(&argc, &argv, &info, &len);
+ fd = map_parse_fd_and_info(&argc, &argv, &info, &len, 0);
if (fd < 0)
return -1;
@@ -1209,7 +1213,8 @@ static int do_delete(int argc, char **argv)
goto exit_free;
}
- err = parse_elem(argv, &info, key, NULL, info.key_size, 0, NULL, NULL);
+ err = parse_elem(argv, &info, key, NULL, info.key_size, 0, NULL, NULL,
+ 0);
if (err)
goto exit_free;
@@ -1226,11 +1231,16 @@ exit_free:
return err;
}
+static int map_parse_read_only_fd(int *argc, char ***argv)
+{
+ return map_parse_fd(argc, argv, BPF_F_RDONLY);
+}
+
static int do_pin(int argc, char **argv)
{
int err;
- err = do_pin_any(argc, argv, map_parse_fd);
+ err = do_pin_any(argc, argv, map_parse_read_only_fd);
if (!err && json_output)
jsonw_null(json_wtr);
return err;
@@ -1270,6 +1280,10 @@ static int do_create(int argc, char **argv)
} else if (is_prefix(*argv, "name")) {
NEXT_ARG();
map_name = GET_ARG();
+ if (strlen(map_name) > BPF_OBJ_NAME_LEN - 1) {
+ p_info("Warning: map name is longer than %u characters, it will be truncated.",
+ BPF_OBJ_NAME_LEN - 1);
+ }
} else if (is_prefix(*argv, "key")) {
if (parse_u32_arg(&argc, &argv, &key_size,
"key size"))
@@ -1315,7 +1329,7 @@ offload_dev:
if (!REQ_ARGS(2))
usage();
inner_map_fd = map_parse_fd_and_info(&argc, &argv,
- &info, &len);
+ &info, &len, BPF_F_RDONLY);
if (inner_map_fd < 0)
return -1;
attr.inner_map_fd = inner_map_fd;
@@ -1364,7 +1378,7 @@ static int do_pop_dequeue(int argc, char **argv)
if (argc < 2)
usage();
- fd = map_parse_fd_and_info(&argc, &argv, &info, &len);
+ fd = map_parse_fd_and_info(&argc, &argv, &info, &len, 0);
if (fd < 0)
return -1;
@@ -1403,7 +1417,7 @@ static int do_freeze(int argc, char **argv)
if (!REQ_ARGS(2))
return -1;
- fd = map_parse_fd(&argc, &argv);
+ fd = map_parse_fd(&argc, &argv, 0);
if (fd < 0)
return -1;
@@ -1463,7 +1477,7 @@ static int do_help(int argc, char **argv)
" devmap | devmap_hash | sockmap | cpumap | xskmap | sockhash |\n"
" cgroup_storage | reuseport_sockarray | percpu_cgroup_storage |\n"
" queue | stack | sk_storage | struct_ops | ringbuf | inode_storage |\n"
- " task_storage | bloom_filter | user_ringbuf | cgrp_storage }\n"
+ " task_storage | bloom_filter | user_ringbuf | cgrp_storage | arena }\n"
" " HELP_SPEC_OPTIONS " |\n"
" {-f|--bpffs} | {-n|--nomount} }\n"
"",
diff --git a/tools/bpf/bpftool/map_perf_ring.c b/tools/bpf/bpftool/map_perf_ring.c
index 21d7d447e1f3..bcb767e2d673 100644
--- a/tools/bpf/bpftool/map_perf_ring.c
+++ b/tools/bpf/bpftool/map_perf_ring.c
@@ -91,15 +91,15 @@ print_bpf_output(void *private_data, int cpu, struct perf_event_header *event)
jsonw_end_object(json_wtr);
} else {
if (e->header.type == PERF_RECORD_SAMPLE) {
- printf("== @%lld.%09lld CPU: %d index: %d =====\n",
+ printf("== @%llu.%09llu CPU: %d index: %d =====\n",
e->time / 1000000000ULL, e->time % 1000000000ULL,
cpu, idx);
fprint_hex(stdout, e->data, e->size, " ");
printf("\n");
} else if (e->header.type == PERF_RECORD_LOST) {
- printf("lost %lld events\n", lost->lost);
+ printf("lost %llu events\n", lost->lost);
} else {
- printf("unknown event type=%d size=%d\n",
+ printf("unknown event type=%u size=%u\n",
e->header.type, e->header.size);
}
}
@@ -128,7 +128,8 @@ int do_event_pipe(int argc, char **argv)
int err, map_fd;
map_info_len = sizeof(map_info);
- map_fd = map_parse_fd_and_info(&argc, &argv, &map_info, &map_info_len);
+ map_fd = map_parse_fd_and_info(&argc, &argv, &map_info, &map_info_len,
+ 0);
if (map_fd < 0)
return -1;
diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
index 968714b4c3d4..cfc6f944f7c3 100644
--- a/tools/bpf/bpftool/net.c
+++ b/tools/bpf/bpftool/net.c
@@ -67,6 +67,8 @@ enum net_attach_type {
NET_ATTACH_TYPE_XDP_GENERIC,
NET_ATTACH_TYPE_XDP_DRIVER,
NET_ATTACH_TYPE_XDP_OFFLOAD,
+ NET_ATTACH_TYPE_TCX_INGRESS,
+ NET_ATTACH_TYPE_TCX_EGRESS,
};
static const char * const attach_type_strings[] = {
@@ -74,6 +76,8 @@ static const char * const attach_type_strings[] = {
[NET_ATTACH_TYPE_XDP_GENERIC] = "xdpgeneric",
[NET_ATTACH_TYPE_XDP_DRIVER] = "xdpdrv",
[NET_ATTACH_TYPE_XDP_OFFLOAD] = "xdpoffload",
+ [NET_ATTACH_TYPE_TCX_INGRESS] = "tcx_ingress",
+ [NET_ATTACH_TYPE_TCX_EGRESS] = "tcx_egress",
};
static const char * const attach_loc_strings[] = {
@@ -362,17 +366,18 @@ static int dump_link_nlmsg(void *cookie, void *msg, struct nlattr **tb)
{
struct bpf_netdev_t *netinfo = cookie;
struct ifinfomsg *ifinfo = msg;
+ struct ip_devname_ifindex *tmp;
if (netinfo->filter_idx > 0 && netinfo->filter_idx != ifinfo->ifi_index)
return 0;
if (netinfo->used_len == netinfo->array_len) {
- netinfo->devices = realloc(netinfo->devices,
- (netinfo->array_len + 16) *
- sizeof(struct ip_devname_ifindex));
- if (!netinfo->devices)
+ tmp = realloc(netinfo->devices,
+ (netinfo->array_len + 16) * sizeof(struct ip_devname_ifindex));
+ if (!tmp)
return -ENOMEM;
+ netinfo->devices = tmp;
netinfo->array_len += 16;
}
netinfo->devices[netinfo->used_len].ifindex = ifinfo->ifi_index;
@@ -391,6 +396,7 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
{
struct bpf_tcinfo_t *tcinfo = cookie;
struct tcmsg *info = msg;
+ struct tc_kind_handle *tmp;
if (tcinfo->is_qdisc) {
/* skip clsact qdisc */
@@ -402,11 +408,12 @@ static int dump_class_qdisc_nlmsg(void *cookie, void *msg, struct nlattr **tb)
}
if (tcinfo->used_len == tcinfo->array_len) {
- tcinfo->handle_array = realloc(tcinfo->handle_array,
+ tmp = realloc(tcinfo->handle_array,
(tcinfo->array_len + 16) * sizeof(struct tc_kind_handle));
- if (!tcinfo->handle_array)
+ if (!tmp)
return -ENOMEM;
+ tcinfo->handle_array = tmp;
tcinfo->array_len += 16;
}
tcinfo->handle_array[tcinfo->used_len].handle = info->tcm_handle;
@@ -472,7 +479,7 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
for (i = 0; i < optq.count; i++) {
NET_START_OBJECT;
NET_DUMP_STR("devname", "%s", dev->devname);
- NET_DUMP_UINT("ifindex", "(%u)", dev->ifindex);
+ NET_DUMP_UINT("ifindex", "(%u)", (unsigned int)dev->ifindex);
NET_DUMP_STR("kind", " %s", attach_loc_strings[loc]);
ret = __show_dev_tc_bpf_name(prog_ids[i], prog_name,
sizeof(prog_name));
@@ -482,9 +489,9 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
if (prog_flags[i] || json_output) {
NET_START_ARRAY("prog_flags", "%s ");
for (j = 0; prog_flags[i] && j < 32; j++) {
- if (!(prog_flags[i] & (1 << j)))
+ if (!(prog_flags[i] & (1U << j)))
continue;
- NET_DUMP_UINT_ONLY(1 << j);
+ NET_DUMP_UINT_ONLY(1U << j);
}
NET_END_ARRAY("");
}
@@ -493,9 +500,9 @@ static void __show_dev_tc_bpf(const struct ip_devname_ifindex *dev,
if (link_flags[i] || json_output) {
NET_START_ARRAY("link_flags", "%s ");
for (j = 0; link_flags[i] && j < 32; j++) {
- if (!(link_flags[i] & (1 << j)))
+ if (!(link_flags[i] & (1U << j)))
continue;
- NET_DUMP_UINT_ONLY(1 << j);
+ NET_DUMP_UINT_ONLY(1U << j);
}
NET_END_ARRAY("");
}
@@ -647,6 +654,32 @@ static int do_attach_detach_xdp(int progfd, enum net_attach_type attach_type,
return bpf_xdp_attach(ifindex, progfd, flags, NULL);
}
+static int get_tcx_type(enum net_attach_type attach_type)
+{
+ switch (attach_type) {
+ case NET_ATTACH_TYPE_TCX_INGRESS:
+ return BPF_TCX_INGRESS;
+ case NET_ATTACH_TYPE_TCX_EGRESS:
+ return BPF_TCX_EGRESS;
+ default:
+ return -1;
+ }
+}
+
+static int do_attach_tcx(int progfd, enum net_attach_type attach_type, int ifindex)
+{
+ int type = get_tcx_type(attach_type);
+
+ return bpf_prog_attach(progfd, ifindex, type, 0);
+}
+
+static int do_detach_tcx(int targetfd, enum net_attach_type attach_type)
+{
+ int type = get_tcx_type(attach_type);
+
+ return bpf_prog_detach(targetfd, type);
+}
+
static int do_attach(int argc, char **argv)
{
enum net_attach_type attach_type;
@@ -684,10 +717,23 @@ static int do_attach(int argc, char **argv)
}
}
+ switch (attach_type) {
/* attach xdp prog */
- if (is_prefix("xdp", attach_type_strings[attach_type]))
- err = do_attach_detach_xdp(progfd, attach_type, ifindex,
- overwrite);
+ case NET_ATTACH_TYPE_XDP:
+ case NET_ATTACH_TYPE_XDP_GENERIC:
+ case NET_ATTACH_TYPE_XDP_DRIVER:
+ case NET_ATTACH_TYPE_XDP_OFFLOAD:
+ err = do_attach_detach_xdp(progfd, attach_type, ifindex, overwrite);
+ break;
+ /* attach tcx prog */
+ case NET_ATTACH_TYPE_TCX_INGRESS:
+ case NET_ATTACH_TYPE_TCX_EGRESS:
+ err = do_attach_tcx(progfd, attach_type, ifindex);
+ break;
+ default:
+ break;
+ }
+
if (err) {
p_err("interface %s attach failed: %s",
attach_type_strings[attach_type], strerror(-err));
@@ -721,10 +767,23 @@ static int do_detach(int argc, char **argv)
if (ifindex < 1)
return -EINVAL;
+ switch (attach_type) {
/* detach xdp prog */
- progfd = -1;
- if (is_prefix("xdp", attach_type_strings[attach_type]))
+ case NET_ATTACH_TYPE_XDP:
+ case NET_ATTACH_TYPE_XDP_GENERIC:
+ case NET_ATTACH_TYPE_XDP_DRIVER:
+ case NET_ATTACH_TYPE_XDP_OFFLOAD:
+ progfd = -1;
err = do_attach_detach_xdp(progfd, attach_type, ifindex, NULL);
+ break;
+ /* detach tcx prog */
+ case NET_ATTACH_TYPE_TCX_INGRESS:
+ case NET_ATTACH_TYPE_TCX_EGRESS:
+ err = do_detach_tcx(ifindex, attach_type);
+ break;
+ default:
+ break;
+ }
if (err < 0) {
p_err("interface %s detach failed: %s",
@@ -775,7 +834,7 @@ static void show_link_netfilter(void)
if (err) {
if (errno == ENOENT)
break;
- p_err("can't get next link: %s (id %d)", strerror(errno), id);
+ p_err("can't get next link: %s (id %u)", strerror(errno), id);
break;
}
@@ -824,6 +883,9 @@ static void show_link_netfilter(void)
nf_link_count++;
}
+ if (!nf_link_info)
+ return;
+
qsort(nf_link_info, nf_link_count, sizeof(*nf_link_info), netfilter_link_compar);
for (id = 0; id < nf_link_count; id++) {
@@ -928,7 +990,8 @@ static int do_help(int argc, char **argv)
" %1$s %2$s help\n"
"\n"
" " HELP_SPEC_PROGRAM "\n"
- " ATTACH_TYPE := { xdp | xdpgeneric | xdpdrv | xdpoffload }\n"
+ " ATTACH_TYPE := { xdp | xdpgeneric | xdpdrv | xdpoffload | tcx_ingress\n"
+ " | tcx_egress }\n"
" " HELP_SPEC_OPTIONS " }\n"
"\n"
"Note: Only xdp, tcx, tc, netkit, flow_dissector and netfilter attachments\n"
diff --git a/tools/bpf/bpftool/netlink_dumper.c b/tools/bpf/bpftool/netlink_dumper.c
index 5f65140b003b..0a3c7e96c797 100644
--- a/tools/bpf/bpftool/netlink_dumper.c
+++ b/tools/bpf/bpftool/netlink_dumper.c
@@ -45,7 +45,7 @@ static int do_xdp_dump_one(struct nlattr *attr, unsigned int ifindex,
NET_START_OBJECT;
if (name)
NET_DUMP_STR("devname", "%s", name);
- NET_DUMP_UINT("ifindex", "(%d)", ifindex);
+ NET_DUMP_UINT("ifindex", "(%u)", ifindex);
if (mode == XDP_ATTACHED_MULTI) {
if (json_output) {
@@ -74,7 +74,7 @@ int do_xdp_dump(struct ifinfomsg *ifinfo, struct nlattr **tb)
if (!tb[IFLA_XDP])
return 0;
- return do_xdp_dump_one(tb[IFLA_XDP], ifinfo->ifi_index,
+ return do_xdp_dump_one(tb[IFLA_XDP], (unsigned int)ifinfo->ifi_index,
libbpf_nla_getattr_str(tb[IFLA_IFNAME]));
}
@@ -168,7 +168,7 @@ int do_filter_dump(struct tcmsg *info, struct nlattr **tb, const char *kind,
NET_START_OBJECT;
if (devname[0] != '\0')
NET_DUMP_STR("devname", "%s", devname);
- NET_DUMP_UINT("ifindex", "(%u)", ifindex);
+ NET_DUMP_UINT("ifindex", "(%u)", (unsigned int)ifindex);
NET_DUMP_STR("kind", " %s", kind);
ret = do_bpf_filter_dump(tb[TCA_OPTIONS]);
NET_END_OBJECT_FINAL;
diff --git a/tools/bpf/bpftool/pids.c b/tools/bpf/bpftool/pids.c
index 00c77edb6331..23f488cf1740 100644
--- a/tools/bpf/bpftool/pids.c
+++ b/tools/bpf/bpftool/pids.c
@@ -54,6 +54,7 @@ static void add_ref(struct hashmap *map, struct pid_iter_entry *e)
ref = &refs->refs[refs->ref_cnt];
ref->pid = e->pid;
memcpy(ref->comm, e->comm, sizeof(ref->comm));
+ ref->comm[sizeof(ref->comm) - 1] = '\0';
refs->ref_cnt++;
return;
@@ -77,6 +78,7 @@ static void add_ref(struct hashmap *map, struct pid_iter_entry *e)
ref = &refs->refs[0];
ref->pid = e->pid;
memcpy(ref->comm, e->comm, sizeof(ref->comm));
+ ref->comm[sizeof(ref->comm) - 1] = '\0';
refs->ref_cnt = 1;
refs->has_bpf_cookie = e->has_bpf_cookie;
refs->bpf_cookie = e->bpf_cookie;
@@ -101,7 +103,6 @@ int build_obj_refs_table(struct hashmap **map, enum bpf_obj_type type)
char buf[4096 / sizeof(*e) * sizeof(*e)];
struct pid_iter_bpf *skel;
int err, ret, fd = -1, i;
- libbpf_print_fn_t default_print;
*map = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL);
if (IS_ERR(*map)) {
@@ -118,12 +119,18 @@ int build_obj_refs_table(struct hashmap **map, enum bpf_obj_type type)
skel->rodata->obj_type = type;
- /* we don't want output polluted with libbpf errors if bpf_iter is not
- * supported
- */
- default_print = libbpf_set_print(libbpf_print_none);
- err = pid_iter_bpf__load(skel);
- libbpf_set_print(default_print);
+ if (!verifier_logs) {
+ libbpf_print_fn_t default_print;
+
+ /* Unless debug information is on, we don't want the output to
+ * be polluted with libbpf errors if bpf_iter is not supported.
+ */
+ default_print = libbpf_set_print(libbpf_print_none);
+ err = pid_iter_bpf__load(skel);
+ libbpf_set_print(default_print);
+ } else {
+ err = pid_iter_bpf__load(skel);
+ }
if (err) {
/* too bad, kernel doesn't support BPF iterators yet */
err = 0;
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index feb8e305804f..6daf19809ca4 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -23,6 +23,7 @@
#include <linux/err.h>
#include <linux/perf_event.h>
#include <linux/sizes.h>
+#include <linux/keyctl.h>
#include <bpf/bpf.h>
#include <bpf/btf.h>
@@ -521,10 +522,10 @@ static void print_prog_header_plain(struct bpf_prog_info *info, int fd)
print_dev_plain(info->ifindex, info->netns_dev, info->netns_ino);
printf("%s", info->gpl_compatible ? " gpl" : "");
if (info->run_time_ns)
- printf(" run_time_ns %lld run_cnt %lld",
+ printf(" run_time_ns %llu run_cnt %llu",
info->run_time_ns, info->run_cnt);
if (info->recursion_misses)
- printf(" recursion_misses %lld", info->recursion_misses);
+ printf(" recursion_misses %llu", info->recursion_misses);
printf("\n");
}
@@ -569,7 +570,7 @@ static void print_prog_plain(struct bpf_prog_info *info, int fd, bool orphaned)
}
if (info->btf_id)
- printf("\n\tbtf_id %d", info->btf_id);
+ printf("\n\tbtf_id %u", info->btf_id);
emit_obj_refs_plain(refs_table, info->id, "\n\tpids ");
@@ -714,7 +715,7 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
if (mode == DUMP_JITED) {
if (info->jited_prog_len == 0 || !info->jited_prog_insns) {
- p_info("no instructions returned");
+ p_err("error retrieving jit dump: no instructions returned or kernel.kptr_restrict set?");
return -1;
}
buf = u64_to_ptr(info->jited_prog_insns);
@@ -822,11 +823,18 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
printf("%s:\n", sym_name);
}
- if (disasm_print_insn(img, lens[i], opcodes,
- name, disasm_opt, btf,
- prog_linfo, ksyms[i], i,
- linum))
- goto exit_free;
+ if (ksyms) {
+ if (disasm_print_insn(img, lens[i], opcodes,
+ name, disasm_opt, btf,
+ prog_linfo, ksyms[i], i,
+ linum))
+ goto exit_free;
+ } else {
+ if (disasm_print_insn(img, lens[i], opcodes,
+ name, disasm_opt, btf,
+ NULL, 0, 0, false))
+ goto exit_free;
+ }
img += lens[i];
@@ -1055,7 +1063,7 @@ static int parse_attach_detach_args(int argc, char **argv, int *progfd,
if (!REQ_ARGS(2))
return -EINVAL;
- *mapfd = map_parse_fd(&argc, &argv);
+ *mapfd = map_parse_fd(&argc, &argv, 0);
if (*mapfd < 0)
return *mapfd;
@@ -1106,6 +1114,52 @@ static int do_detach(int argc, char **argv)
return 0;
}
+enum prog_tracelog_mode {
+ TRACE_STDOUT,
+ TRACE_STDERR,
+};
+
+static int
+prog_tracelog_stream(int prog_fd, enum prog_tracelog_mode mode)
+{
+ FILE *file = mode == TRACE_STDOUT ? stdout : stderr;
+ int stream_id = mode == TRACE_STDOUT ? 1 : 2;
+ char buf[512];
+ int ret;
+
+ ret = 0;
+ do {
+ ret = bpf_prog_stream_read(prog_fd, stream_id, buf, sizeof(buf), NULL);
+ if (ret > 0)
+ fwrite(buf, sizeof(buf[0]), ret, file);
+ } while (ret > 0);
+
+ fflush(file);
+ return ret ? -1 : 0;
+}
+
+static int do_tracelog_any(int argc, char **argv)
+{
+ enum prog_tracelog_mode mode;
+ int fd;
+
+ if (argc == 0)
+ return do_tracelog(argc, argv);
+ if (!is_prefix(*argv, "stdout") && !is_prefix(*argv, "stderr"))
+ usage();
+ mode = is_prefix(*argv, "stdout") ? TRACE_STDOUT : TRACE_STDERR;
+ NEXT_ARG();
+
+ if (!REQ_ARGS(2))
+ return -1;
+
+ fd = prog_parse_fd(&argc, &argv);
+ if (fd < 0)
+ return -1;
+
+ return prog_tracelog_stream(fd, mode);
+}
+
static int check_single_stdin(char *file_data_in, char *file_ctx_in)
{
if (file_data_in && file_ctx_in &&
@@ -1157,7 +1211,7 @@ static int get_run_data(const char *fname, void **data_ptr, unsigned int *size)
}
if (nb_read > buf_size - block_size) {
if (buf_size == UINT32_MAX) {
- p_err("data_in/ctx_in is too long (max: %d)",
+ p_err("data_in/ctx_in is too long (max: %u)",
UINT32_MAX);
goto err_free;
}
@@ -1601,7 +1655,7 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
}
NEXT_ARG();
- fd = map_parse_fd(&argc, &argv);
+ fd = map_parse_fd(&argc, &argv, 0);
if (fd < 0)
goto err_free_reuse_maps;
@@ -1674,8 +1728,17 @@ offload_dev:
} else if (is_prefix(*argv, "autoattach")) {
auto_attach = true;
NEXT_ARG();
+ } else if (is_prefix(*argv, "kernel_btf")) {
+ NEXT_ARG();
+
+ if (!REQ_ARGS(1))
+ goto err_free_reuse_maps;
+
+ open_opts.btf_custom_path = GET_ARG();
} else {
- p_err("expected no more arguments, 'type', 'map' or 'dev', got: '%s'?",
+ p_err("expected no more arguments, "
+ "'type', 'map', 'offload_dev', 'xdpmeta_dev', 'pinmaps', "
+ "'autoattach', or 'kernel_btf', got: '%s'?",
*argv);
goto err_free_reuse_maps;
}
@@ -1778,7 +1841,10 @@ offload_dev:
goto err_close_obj;
}
- err = mount_bpffs_for_pin(pinfile, !first_prog_only);
+ if (first_prog_only)
+ err = mount_bpffs_for_file(pinfile);
+ else
+ err = create_and_mount_bpffs_dir(pinfile);
if (err)
goto err_close_obj;
@@ -1810,6 +1876,10 @@ offload_dev:
}
if (pinmaps) {
+ err = create_and_mount_bpffs_dir(pinmaps);
+ if (err)
+ goto err_unpin;
+
err = bpf_object__pin_maps(obj, pinmaps);
if (err) {
p_err("failed to pin all maps");
@@ -1861,6 +1931,8 @@ static int try_loader(struct gen_loader_opts *gen)
{
struct bpf_load_and_run_opts opts = {};
struct bpf_loader_ctx *ctx;
+ char sig_buf[MAX_SIG_SIZE];
+ __u8 prog_sha[SHA256_DIGEST_LENGTH];
int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct bpf_map_desc),
sizeof(struct bpf_prog_desc));
int log_buf_sz = (1u << 24) - 1;
@@ -1884,6 +1956,26 @@ static int try_loader(struct gen_loader_opts *gen)
opts.insns = gen->insns;
opts.insns_sz = gen->insns_sz;
fds_before = count_open_fds();
+
+ if (sign_progs) {
+ opts.excl_prog_hash = prog_sha;
+ opts.excl_prog_hash_sz = sizeof(prog_sha);
+ opts.signature = sig_buf;
+ opts.signature_sz = MAX_SIG_SIZE;
+ opts.keyring_id = KEY_SPEC_SESSION_KEYRING;
+
+ err = bpftool_prog_sign(&opts);
+ if (err < 0) {
+ p_err("failed to sign program");
+ goto out;
+ }
+
+ err = register_session_key(cert_path);
+ if (err < 0) {
+ p_err("failed to add session key");
+ goto out;
+ }
+ }
err = bpf_load_and_run(&opts);
fd_delta = count_open_fds() - fds_before;
if (err < 0 || verifier_logs) {
@@ -1892,6 +1984,7 @@ static int try_loader(struct gen_loader_opts *gen)
fprintf(stderr, "loader prog leaked %d FDs\n",
fd_delta);
}
+out:
free(log_buf);
return err;
}
@@ -1914,10 +2007,14 @@ static int do_loader(int argc, char **argv)
obj = bpf_object__open_file(file, &open_opts);
if (!obj) {
+ err = -1;
p_err("failed to open object file");
goto err_close_obj;
}
+ if (sign_progs)
+ gen.gen_hash = true;
+
err = bpf_object__gen_loader(obj, &gen);
if (err)
goto err_close_obj;
@@ -2078,7 +2175,7 @@ static int profile_parse_metrics(int argc, char **argv)
NEXT_ARG();
}
if (selected_cnt > MAX_NUM_PROFILE_METRICS) {
- p_err("too many (%d) metrics, please specify no more than %d metrics at at time",
+ p_err("too many (%d) metrics, please specify no more than %d metrics at a time",
selected_cnt, MAX_NUM_PROFILE_METRICS);
return -1;
}
@@ -2192,7 +2289,7 @@ static void profile_print_readings(void)
static char *profile_target_name(int tgt_fd)
{
- struct bpf_func_info func_info;
+ struct bpf_func_info func_info = {};
struct bpf_prog_info info = {};
__u32 info_len = sizeof(info);
const struct btf_type *t;
@@ -2237,7 +2334,7 @@ static char *profile_target_name(int tgt_fd)
t = btf__type_by_id(btf, func_info.type_id);
if (!t) {
- p_err("btf %d doesn't have type %d",
+ p_err("btf %u doesn't have type %u",
info.btf_id, func_info.type_id);
goto out;
}
@@ -2298,7 +2395,7 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
int map_fd;
profile_perf_events = calloc(
- sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
+ obj->rodata->num_cpu * obj->rodata->num_metric, sizeof(int));
if (!profile_perf_events) {
p_err("failed to allocate memory for perf_event array: %s",
strerror(errno));
@@ -2315,7 +2412,7 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
continue;
for (cpu = 0; cpu < obj->rodata->num_cpu; cpu++) {
if (profile_open_perf_event(m, cpu, map_fd)) {
- p_err("failed to create event %s on cpu %d",
+ p_err("failed to create event %s on cpu %u",
metrics[m].name, cpu);
return -1;
}
@@ -2459,6 +2556,7 @@ static int do_help(int argc, char **argv)
" [map { idx IDX | name NAME } MAP]\\\n"
" [pinmaps MAP_DIR]\n"
" [autoattach]\n"
+ " [kernel_btf BTF_FILE]\n"
" %1$s %2$s attach PROG ATTACH_TYPE [MAP]\n"
" %1$s %2$s detach PROG ATTACH_TYPE [MAP]\n"
" %1$s %2$s run PROG \\\n"
@@ -2468,6 +2566,7 @@ static int do_help(int argc, char **argv)
" [repeat N]\n"
" %1$s %2$s profile PROG [duration DURATION] METRICs\n"
" %1$s %2$s tracelog\n"
+ " %1$s %2$s tracelog { stdout | stderr } PROG\n"
" %1$s %2$s help\n"
"\n"
" " HELP_SPEC_MAP "\n"
@@ -2482,7 +2581,7 @@ static int do_help(int argc, char **argv)
" cgroup/connect_unix | cgroup/getpeername4 | cgroup/getpeername6 |\n"
" cgroup/getpeername_unix | cgroup/getsockname4 | cgroup/getsockname6 |\n"
" cgroup/getsockname_unix | cgroup/sendmsg4 | cgroup/sendmsg6 |\n"
- " cgroup/sendmsg°unix | cgroup/recvmsg4 | cgroup/recvmsg6 | cgroup/recvmsg_unix |\n"
+ " cgroup/sendmsg_unix | cgroup/recvmsg4 | cgroup/recvmsg6 | cgroup/recvmsg_unix |\n"
" cgroup/getsockopt | cgroup/setsockopt | cgroup/sock_release |\n"
" struct_ops | fentry | fexit | freplace | sk_lookup }\n"
" ATTACH_TYPE := { sk_msg_verdict | sk_skb_verdict | sk_skb_stream_verdict |\n"
@@ -2490,7 +2589,7 @@ static int do_help(int argc, char **argv)
" METRIC := { cycles | instructions | l1d_loads | llc_misses | itlb_misses | dtlb_misses }\n"
" " HELP_SPEC_OPTIONS " |\n"
" {-f|--bpffs} | {-m|--mapcompat} | {-n|--nomount} |\n"
- " {-L|--use-loader} }\n"
+ " {-L|--use-loader} | [ {-S|--sign } {-k} <private_key.pem> {-i} <certificate.x509> ] \n"
"",
bin_name, argv[-2]);
@@ -2507,7 +2606,7 @@ static const struct cmd cmds[] = {
{ "loadall", do_loadall },
{ "attach", do_attach },
{ "detach", do_detach },
- { "tracelog", do_tracelog },
+ { "tracelog", do_tracelog_any },
{ "run", do_run },
{ "profile", do_profile },
{ 0 }
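
The new "tracelog { stdout | stderr } PROG" form reads the program's BPF stream via bpf_prog_stream_read() instead of tailing trace_pipe; usage would be along these lines (the program id and name are placeholders):

	# Dump whatever the program has written to its stdout stream.
	bpftool prog tracelog stdout id 42
	# Same for the error stream, selecting the program by name.
	bpftool prog tracelog stderr name my_prog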
diff --git a/tools/bpf/bpftool/sign.c b/tools/bpf/bpftool/sign.c
new file mode 100644
index 000000000000..b34f74d210e9
--- /dev/null
+++ b/tools/bpf/bpftool/sign.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/*
+ * Copyright (C) 2025 Google LLC.
+ */
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <string.h>
+#include <getopt.h>
+#include <err.h>
+#include <openssl/opensslv.h>
+#include <openssl/bio.h>
+#include <openssl/evp.h>
+#include <openssl/pem.h>
+#include <openssl/err.h>
+#include <openssl/cms.h>
+#include <linux/keyctl.h>
+#include <errno.h>
+
+#include <bpf/skel_internal.h>
+
+#include "main.h"
+
+#define OPEN_SSL_ERR_BUF_LEN 256
+
+static void display_openssl_errors(int l)
+{
+ char buf[OPEN_SSL_ERR_BUF_LEN];
+ const char *file;
+ const char *data;
+ unsigned long e;
+ int flags;
+ int line;
+
+ while ((e = ERR_get_error_all(&file, &line, NULL, &data, &flags))) {
+ ERR_error_string_n(e, buf, sizeof(buf));
+ if (data && (flags & ERR_TXT_STRING)) {
+ p_err("OpenSSL %s: %s:%d: %s", buf, file, line, data);
+ } else {
+ p_err("OpenSSL %s: %s:%d", buf, file, line);
+ }
+ }
+}
+
+#define DISPLAY_OSSL_ERR(cond) \
+ do { \
+ bool __cond = (cond); \
+ if (__cond && ERR_peek_error()) \
+ display_openssl_errors(__LINE__);\
+ } while (0)
+
+static EVP_PKEY *read_private_key(const char *pkey_path)
+{
+ EVP_PKEY *private_key = NULL;
+ BIO *b;
+
+ b = BIO_new_file(pkey_path, "rb");
+ private_key = PEM_read_bio_PrivateKey(b, NULL, NULL, NULL);
+ BIO_free(b);
+ DISPLAY_OSSL_ERR(!private_key);
+ return private_key;
+}
+
+static X509 *read_x509(const char *x509_name)
+{
+ unsigned char buf[2];
+ X509 *x509 = NULL;
+ BIO *b;
+ int n;
+
+ b = BIO_new_file(x509_name, "rb");
+ if (!b)
+ goto cleanup;
+
+ /* Look at the first two bytes of the file to determine the encoding */
+ n = BIO_read(b, buf, 2);
+ if (n != 2)
+ goto cleanup;
+
+ if (BIO_reset(b) != 0)
+ goto cleanup;
+
+ if (buf[0] == 0x30 && buf[1] >= 0x81 && buf[1] <= 0x84)
+ /* Assume raw DER encoded X.509 */
+ x509 = d2i_X509_bio(b, NULL);
+ else
+ /* Assume PEM encoded X.509 */
+ x509 = PEM_read_bio_X509(b, NULL, NULL, NULL);
+
+cleanup:
+ BIO_free(b);
+ DISPLAY_OSSL_ERR(!x509);
+ return x509;
+}
+
+__u32 register_session_key(const char *key_der_path)
+{
+ unsigned char *der_buf = NULL;
+ X509 *x509 = NULL;
+ int key_id = -1;
+ int der_len;
+
+ if (!key_der_path)
+ return key_id;
+ x509 = read_x509(key_der_path);
+ if (!x509)
+ goto cleanup;
+ der_len = i2d_X509(x509, &der_buf);
+ if (der_len < 0)
+ goto cleanup;
+ key_id = syscall(__NR_add_key, "asymmetric", key_der_path, der_buf,
+ (size_t)der_len, KEY_SPEC_SESSION_KEYRING);
+cleanup:
+ X509_free(x509);
+ OPENSSL_free(der_buf);
+ DISPLAY_OSSL_ERR(key_id == -1);
+ return key_id;
+}
+
+int bpftool_prog_sign(struct bpf_load_and_run_opts *opts)
+{
+ BIO *bd_in = NULL, *bd_out = NULL;
+ EVP_PKEY *private_key = NULL;
+ CMS_ContentInfo *cms = NULL;
+ long actual_sig_len = 0;
+ X509 *x509 = NULL;
+ int err = 0;
+
+ bd_in = BIO_new_mem_buf(opts->insns, opts->insns_sz);
+ if (!bd_in) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+
+ private_key = read_private_key(private_key_path);
+ if (!private_key) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ x509 = read_x509(cert_path);
+ if (!x509) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ cms = CMS_sign(NULL, NULL, NULL, NULL,
+ CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY | CMS_DETACHED |
+ CMS_STREAM);
+ if (!cms) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ if (!CMS_add1_signer(cms, x509, private_key, EVP_sha256(),
+ CMS_NOCERTS | CMS_BINARY | CMS_NOSMIMECAP |
+ CMS_USE_KEYID | CMS_NOATTR)) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ if (CMS_final(cms, bd_in, NULL, CMS_NOCERTS | CMS_BINARY) != 1) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ EVP_Digest(opts->insns, opts->insns_sz, opts->excl_prog_hash,
+ &opts->excl_prog_hash_sz, EVP_sha256(), NULL);
+
+ bd_out = BIO_new(BIO_s_mem());
+ if (!bd_out) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+
+ if (!i2d_CMS_bio_stream(bd_out, cms, NULL, 0)) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ actual_sig_len = BIO_get_mem_data(bd_out, NULL);
+ if (actual_sig_len <= 0) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ if ((size_t)actual_sig_len > opts->signature_sz) {
+ err = -ENOSPC;
+ goto cleanup;
+ }
+
+ if (BIO_read(bd_out, opts->signature, actual_sig_len) != actual_sig_len) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ opts->signature_sz = actual_sig_len;
+cleanup:
+ BIO_free(bd_out);
+ CMS_ContentInfo_free(cms);
+ X509_free(x509);
+ EVP_PKEY_free(private_key);
+ BIO_free(bd_in);
+ DISPLAY_OSSL_ERR(err < 0);
+ return err;
+}
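
register_session_key() above amounts to loading the X.509 certificate onto the session keyring as an "asymmetric" key so the kernel can verify the CMS signature. Assuming keyutils is installed, a roughly equivalent manual step would be (cert.der is a placeholder for a DER-encoded certificate):

	# Add the certificate to the session keyring by hand.
	keyctl padd asymmetric "" @s < cert.der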
diff --git a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
index 26004f0c5a6a..948dde25034e 100644
--- a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
+++ b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
@@ -29,6 +29,7 @@ enum bpf_link_type___local {
};
extern const void bpf_link_fops __ksym;
+extern const void bpf_link_fops_poll __ksym __weak;
extern const void bpf_map_fops __ksym;
extern const void bpf_prog_fops __ksym;
extern const void btf_fops __ksym;
@@ -84,7 +85,11 @@ int iter(struct bpf_iter__task_file *ctx)
fops = &btf_fops;
break;
case BPF_OBJ_LINK:
- fops = &bpf_link_fops;
+ if (&bpf_link_fops_poll &&
+ file->f_op == &bpf_link_fops_poll)
+ fops = &bpf_link_fops_poll;
+ else
+ fops = &bpf_link_fops;
break;
default:
return 0;
@@ -102,8 +107,8 @@ int iter(struct bpf_iter__task_file *ctx)
BPF_LINK_TYPE_PERF_EVENT___local)) {
struct bpf_link *link = (struct bpf_link *) file->private_data;
- if (link->type == bpf_core_enum_value(enum bpf_link_type___local,
- BPF_LINK_TYPE_PERF_EVENT___local)) {
+ if (BPF_CORE_READ(link, type) == bpf_core_enum_value(enum bpf_link_type___local,
+ BPF_LINK_TYPE_PERF_EVENT___local)) {
e.has_bpf_cookie = true;
e.bpf_cookie = get_bpf_cookie(link);
}
diff --git a/tools/bpf/bpftool/skeleton/profiler.bpf.c b/tools/bpf/bpftool/skeleton/profiler.bpf.c
index 2f80edc682f1..f48c783cb9f7 100644
--- a/tools/bpf/bpftool/skeleton/profiler.bpf.c
+++ b/tools/bpf/bpftool/skeleton/profiler.bpf.c
@@ -40,17 +40,17 @@ struct {
const volatile __u32 num_cpu = 1;
const volatile __u32 num_metric = 1;
-#define MAX_NUM_MATRICS 4
+#define MAX_NUM_METRICS 4
SEC("fentry/XXX")
int BPF_PROG(fentry_XXX)
{
- struct bpf_perf_event_value___local *ptrs[MAX_NUM_MATRICS];
+ struct bpf_perf_event_value___local *ptrs[MAX_NUM_METRICS];
u32 key = bpf_get_smp_processor_id();
u32 i;
/* look up before reading, to reduce error */
- for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) {
+ for (i = 0; i < num_metric && i < MAX_NUM_METRICS; i++) {
u32 flag = i;
ptrs[i] = bpf_map_lookup_elem(&fentry_readings, &flag);
@@ -58,7 +58,7 @@ int BPF_PROG(fentry_XXX)
return 0;
}
- for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) {
+ for (i = 0; i < num_metric && i < MAX_NUM_METRICS; i++) {
struct bpf_perf_event_value___local reading;
int err;
@@ -99,14 +99,14 @@ fexit_update_maps(u32 id, struct bpf_perf_event_value___local *after)
SEC("fexit/XXX")
int BPF_PROG(fexit_XXX)
{
- struct bpf_perf_event_value___local readings[MAX_NUM_MATRICS];
+ struct bpf_perf_event_value___local readings[MAX_NUM_METRICS];
u32 cpu = bpf_get_smp_processor_id();
u32 i, zero = 0;
int err;
u64 *count;
/* read all events before updating the maps, to reduce error */
- for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) {
+ for (i = 0; i < num_metric && i < MAX_NUM_METRICS; i++) {
err = bpf_perf_event_read_value(&events, cpu + i * num_cpu,
(void *)(readings + i),
sizeof(*readings));
@@ -116,7 +116,7 @@ int BPF_PROG(fexit_XXX)
count = bpf_map_lookup_elem(&counts, &zero);
if (count) {
*count += 1;
- for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++)
+ for (i = 0; i < num_metric && i < MAX_NUM_METRICS; i++)
fexit_update_maps(i, &readings[i]);
}
return 0;
diff --git a/tools/bpf/bpftool/struct_ops.c b/tools/bpf/bpftool/struct_ops.c
index d573f2640d8e..aa43dead249c 100644
--- a/tools/bpf/bpftool/struct_ops.c
+++ b/tools/bpf/bpftool/struct_ops.c
@@ -515,7 +515,7 @@ static int do_register(int argc, char **argv)
if (argc == 1)
linkdir = GET_ARG();
- if (linkdir && mount_bpffs_for_pin(linkdir, true)) {
+ if (linkdir && create_and_mount_bpffs_dir(linkdir)) {
p_err("can't mount bpffs for pinning");
return -1;
}
diff --git a/tools/bpf/bpftool/token.c b/tools/bpf/bpftool/token.c
new file mode 100644
index 000000000000..c08f34b9d51b
--- /dev/null
+++ b/tools/bpf/bpftool/token.c
@@ -0,0 +1,210 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2025 Didi Technology Co., Tao Chen */
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+#include <errno.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <mntent.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+
+#include "json_writer.h"
+#include "main.h"
+
+#define MOUNTS_FILE "/proc/mounts"
+
+static struct {
+ const char *header;
+ const char *key;
+} sets[] = {
+ {"allowed_cmds", "delegate_cmds"},
+ {"allowed_maps", "delegate_maps"},
+ {"allowed_progs", "delegate_progs"},
+ {"allowed_attachs", "delegate_attachs"},
+};
+
+static bool has_delegate_options(const char *mnt_ops)
+{
+ return strstr(mnt_ops, "delegate_cmds") ||
+ strstr(mnt_ops, "delegate_maps") ||
+ strstr(mnt_ops, "delegate_progs") ||
+ strstr(mnt_ops, "delegate_attachs");
+}
+
+static char *get_delegate_value(char *opts, const char *key)
+{
+ char *token, *rest, *ret = NULL;
+
+ if (!opts)
+ return NULL;
+
+ for (token = strtok_r(opts, ",", &rest); token;
+ token = strtok_r(NULL, ",", &rest)) {
+ if (strncmp(token, key, strlen(key)) == 0 &&
+ token[strlen(key)] == '=') {
+ ret = token + strlen(key) + 1;
+ break;
+ }
+ }
+
+ return ret;
+}
+
+static void print_items_per_line(char *input, int items_per_line)
+{
+ char *str, *rest;
+ int cnt = 0;
+
+ if (!input)
+ return;
+
+ for (str = strtok_r(input, ":", &rest); str;
+ str = strtok_r(NULL, ":", &rest)) {
+ if (cnt % items_per_line == 0)
+ printf("\n\t ");
+
+ printf("%-20s", str);
+ cnt++;
+ }
+}
+
+#define ITEMS_PER_LINE 4
+static void show_token_info_plain(struct mntent *mntent)
+{
+ size_t i;
+
+ printf("token_info %s", mntent->mnt_dir);
+
+ for (i = 0; i < ARRAY_SIZE(sets); i++) {
+ char *opts, *value;
+
+ printf("\n\t%s:", sets[i].header);
+ opts = strdup(mntent->mnt_opts);
+ value = get_delegate_value(opts, sets[i].key);
+ print_items_per_line(value, ITEMS_PER_LINE);
+ free(opts);
+ }
+
+ printf("\n");
+}
+
+static void split_json_array_str(char *input)
+{
+ char *str, *rest;
+
+ if (!input) {
+ jsonw_start_array(json_wtr);
+ jsonw_end_array(json_wtr);
+ return;
+ }
+
+ jsonw_start_array(json_wtr);
+ for (str = strtok_r(input, ":", &rest); str;
+ str = strtok_r(NULL, ":", &rest)) {
+ jsonw_string(json_wtr, str);
+ }
+ jsonw_end_array(json_wtr);
+}
+
+static void show_token_info_json(struct mntent *mntent)
+{
+ size_t i;
+
+ jsonw_start_object(json_wtr);
+ jsonw_string_field(json_wtr, "token_info", mntent->mnt_dir);
+
+ for (i = 0; i < ARRAY_SIZE(sets); i++) {
+ char *opts, *value;
+
+ jsonw_name(json_wtr, sets[i].header);
+ opts = strdup(mntent->mnt_opts);
+ value = get_delegate_value(opts, sets[i].key);
+ split_json_array_str(value);
+ free(opts);
+ }
+
+ jsonw_end_object(json_wtr);
+}
+
+static int __show_token_info(struct mntent *mntent)
+{
+ if (json_output)
+ show_token_info_json(mntent);
+ else
+ show_token_info_plain(mntent);
+
+ return 0;
+}
+
+static int show_token_info(void)
+{
+ FILE *fp;
+ struct mntent *ent;
+
+ fp = setmntent(MOUNTS_FILE, "r");
+ if (!fp) {
+ p_err("Failed to open: %s", MOUNTS_FILE);
+ return -1;
+ }
+
+ if (json_output)
+ jsonw_start_array(json_wtr);
+
+ while ((ent = getmntent(fp)) != NULL) {
+ if (strncmp(ent->mnt_type, "bpf", 3) == 0) {
+ if (has_delegate_options(ent->mnt_opts))
+ __show_token_info(ent);
+ }
+ }
+
+ if (json_output)
+ jsonw_end_array(json_wtr);
+
+ endmntent(fp);
+
+ return 0;
+}
+
+static int do_show(int argc, char **argv)
+{
+ if (argc)
+ return BAD_ARG();
+
+ return show_token_info();
+}
+
+static int do_help(int argc, char **argv)
+{
+ if (json_output) {
+ jsonw_null(json_wtr);
+ return 0;
+ }
+
+ fprintf(stderr,
+ "Usage: %1$s %2$s { show | list }\n"
+ " %1$s %2$s help\n"
+ " " HELP_SPEC_OPTIONS " }\n"
+ "\n"
+ "",
+ bin_name, argv[-2]);
+ return 0;
+}
+
+static const struct cmd cmds[] = {
+ { "show", do_show },
+ { "list", do_show },
+ { "help", do_help },
+ { 0 }
+};
+
+int do_token(int argc, char **argv)
+{
+ return cmd_select(cmds, argc, argv, do_help);
+}
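
token.c's get_delegate_value() pulls a single delegate_* value out of a comma-separated bpffs mount option string. A small standalone sketch of that key=value extraction, run against a made-up options line (an illustration of the parsing, not bpftool code):

#include <stdio.h>
#include <string.h>

static char *get_value(char *opts, const char *key)
{
	char *tok, *rest;

	/* Walk the comma-separated options and return the text after "key=". */
	for (tok = strtok_r(opts, ",", &rest); tok; tok = strtok_r(NULL, ",", &rest)) {
		if (!strncmp(tok, key, strlen(key)) && tok[strlen(key)] == '=')
			return tok + strlen(key) + 1;
	}
	return NULL;
}

int main(void)
{
	/* Hypothetical /proc/mounts options for a bpffs instance. */
	char opts[] = "rw,relatime,delegate_cmds=map_create:prog_load,delegate_maps=any";
	char *val = get_value(opts, "delegate_cmds");

	printf("%s\n", val ? val : "(not set)");	/* map_create:prog_load */
	return 0;
}
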
diff --git a/tools/bpf/bpftool/tracelog.c b/tools/bpf/bpftool/tracelog.c
index bf1f02212797..573a8d99f009 100644
--- a/tools/bpf/bpftool/tracelog.c
+++ b/tools/bpf/bpftool/tracelog.c
@@ -57,10 +57,8 @@ find_tracefs_mnt_single(unsigned long magic, char *mnt, const char *mntpt)
static bool get_tracefs_pipe(char *mnt)
{
static const char * const known_mnts[] = {
- "/sys/kernel/debug/tracing",
"/sys/kernel/tracing",
- "/tracing",
- "/trace",
+ "/sys/kernel/debug/tracing",
};
const char *pipe_name = "/trace_pipe";
const char *fstype = "tracefs";
@@ -78,7 +76,7 @@ static bool get_tracefs_pipe(char *mnt)
return false;
/* Allow room for NULL terminating byte and pipe file name */
- snprintf(format, sizeof(format), "%%*s %%%zds %%99s %%*s %%*d %%*d\\n",
+ snprintf(format, sizeof(format), "%%*s %%%zus %%99s %%*s %%*d %%*d\\n",
PATH_MAX - strlen(pipe_name) - 1);
while (fscanf(fp, format, mnt, type) == 2)
if (strcmp(type, fstype) == 0) {
@@ -95,12 +93,7 @@ static bool get_tracefs_pipe(char *mnt)
return false;
p_info("could not find tracefs, attempting to mount it now");
- /* Most of the time, tracefs is automatically mounted by debugfs at
- * /sys/kernel/debug/tracing when we try to access it. If we could not
- * find it, it is likely that debugfs is not mounted. Let's give one
- * attempt at mounting just tracefs at /sys/kernel/tracing.
- */
- strcpy(mnt, known_mnts[1]);
+ strcpy(mnt, known_mnts[0]);
if (mount_tracefs(mnt))
return false;
diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c
index 567f56dfd9f1..5e7cb8b36fef 100644
--- a/tools/bpf/bpftool/xlated_dumper.c
+++ b/tools/bpf/bpftool/xlated_dumper.c
@@ -199,13 +199,13 @@ static const char *print_imm(void *private_data,
if (insn->src_reg == BPF_PSEUDO_MAP_FD)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
- "map[id:%u]", insn->imm);
+ "map[id:%d]", insn->imm);
else if (insn->src_reg == BPF_PSEUDO_MAP_VALUE)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
- "map[id:%u][0]+%u", insn->imm, (insn + 1)->imm);
+ "map[id:%d][0]+%d", insn->imm, (insn + 1)->imm);
else if (insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
- "map[idx:%u]+%u", insn->imm, (insn + 1)->imm);
+ "map[idx:%d]+%d", insn->imm, (insn + 1)->imm);
else if (insn->src_reg == BPF_PSEUDO_FUNC)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
"subprog[%+d]", insn->imm);
@@ -349,7 +349,7 @@ void dump_xlated_plain(struct dump_data *dd, void *buf, unsigned int len,
double_insn = insn[i].code == (BPF_LD | BPF_IMM | BPF_DW);
- printf("% 4d: ", i);
+ printf("%4u: ", i);
print_bpf_insn(&cbs, insn + i, true);
if (opcodes) {
@@ -415,7 +415,7 @@ void dump_xlated_for_graph(struct dump_data *dd, void *buf_start, void *buf_end,
}
}
- printf("%d: ", insn_off);
+ printf("%u: ", insn_off);
print_bpf_insn(&cbs, cur, true);
if (opcodes) {
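
The format-string changes above match the signedness of the fields being printed: insn->imm is a signed 32-bit value, while instruction indices are unsigned. A quick illustration of why the conversion matters for negative immediates (plain C, not bpftool code):

#include <stdio.h>

int main(void)
{
	int imm = -1;	/* e.g. a sign-extended immediate */

	printf("map[id:%u]\n", (unsigned int)imm);	/* map[id:4294967295] */
	printf("map[id:%d]\n", imm);			/* map[id:-1] */
	return 0;
}
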
diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
index 4b8079f294f6..ce1b556dfa90 100644
--- a/tools/bpf/resolve_btfids/Makefile
+++ b/tools/bpf/resolve_btfids/Makefile
@@ -5,10 +5,8 @@ include ../../scripts/Makefile.arch
srctree := $(abspath $(CURDIR)/../../../)
ifeq ($(V),1)
- Q =
msg =
else
- Q = @
ifeq ($(silent),1)
msg =
else
@@ -19,7 +17,7 @@ endif
# Overrides for the prepare step libraries.
HOST_OVERRIDES := AR="$(HOSTAR)" CC="$(HOSTCC)" LD="$(HOSTLD)" ARCH="$(HOSTARCH)" \
- CROSS_COMPILE="" EXTRA_CFLAGS="$(HOSTCFLAGS)"
+ CROSS_COMPILE="" CLANG_CROSS_FLAGS="" EXTRA_CFLAGS="$(HOSTCFLAGS)"
RM ?= rm
HOSTCC ?= gcc
diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index 27a23196d58e..d47191c6e55e 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -70,6 +70,7 @@
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
+#include <linux/btf_ids.h>
#include <linux/rbtree.h>
#include <linux/zalloc.h>
#include <linux/err.h>
@@ -78,7 +79,7 @@
#include <subcmd/parse-options.h>
#define BTF_IDS_SECTION ".BTF_ids"
-#define BTF_ID "__BTF_ID__"
+#define BTF_ID_PREFIX "__BTF_ID__"
#define BTF_STRUCT "struct"
#define BTF_UNION "union"
@@ -89,6 +90,14 @@
#define ADDR_CNT 100
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+# define ELFDATANATIVE ELFDATA2LSB
+#elif __BYTE_ORDER == __BIG_ENDIAN
+# define ELFDATANATIVE ELFDATA2MSB
+#else
+# error "Unknown machine endianness!"
+#endif
+
struct btf_id {
struct rb_node rb_node;
char *name;
@@ -116,6 +125,7 @@ struct object {
int idlist_shndx;
size_t strtabidx;
unsigned long idlist_addr;
+ int encoding;
} efile;
struct rb_root sets;
@@ -131,6 +141,7 @@ struct object {
};
static int verbose;
+static int warnings;
static int eprintf(int level, int var, const char *fmt, ...)
{
@@ -161,7 +172,7 @@ static int eprintf(int level, int var, const char *fmt, ...)
static bool is_btf_id(const char *name)
{
- return name && !strncmp(name, BTF_ID, sizeof(BTF_ID) - 1);
+ return name && !strncmp(name, BTF_ID_PREFIX, sizeof(BTF_ID_PREFIX) - 1);
}
static struct btf_id *btf_id__find(struct rb_root *root, const char *name)
@@ -319,6 +330,7 @@ static int elf_collect(struct object *obj)
{
Elf_Scn *scn = NULL;
size_t shdrstrndx;
+ GElf_Ehdr ehdr;
int idx = 0;
Elf *elf;
int fd;
@@ -350,6 +362,13 @@ static int elf_collect(struct object *obj)
return -1;
}
+ if (gelf_getehdr(obj->efile.elf, &ehdr) == NULL) {
+ pr_err("FAILED cannot get ELF header: %s\n",
+ elf_errmsg(-1));
+ return -1;
+ }
+ obj->efile.encoding = ehdr.e_ident[EI_DATA];
+
/*
* Scan all the elf sections and look for save data
* from .BTF_ids section and symbols.
@@ -391,6 +410,14 @@ static int elf_collect(struct object *obj)
obj->efile.idlist = data;
obj->efile.idlist_shndx = idx;
obj->efile.idlist_addr = sh.sh_addr;
+ } else if (!strcmp(name, BTF_BASE_ELF_SEC)) {
+ /* If a .BTF.base section is found, do not resolve
+ * BTF ids relative to vmlinux; resolve relative
+ * to the .BTF.base section instead. btf__parse_split()
+ * will take care of this once the base BTF it is
+ * passed is NULL.
+ */
+ obj->base_btf_path = NULL;
}
if (compressed_section_fix(elf, scn, &sh))
@@ -441,7 +468,7 @@ static int symbols_collect(struct object *obj)
* __BTF_ID__TYPE__vfs_truncate__0
* prefix = ^
*/
- prefix = name + sizeof(BTF_ID) - 1;
+ prefix = name + sizeof(BTF_ID_PREFIX) - 1;
/* struct */
if (!strncmp(prefix, BTF_STRUCT, sizeof(BTF_STRUCT) - 1)) {
@@ -578,6 +605,7 @@ static int symbols_resolve(struct object *obj)
if (id->id) {
pr_info("WARN: multiple IDs found for '%s': %d, %d - using %d\n",
str, id->id, type_id, id->id);
+ warnings++;
} else {
id->id = type_id;
(*nr)--;
@@ -599,8 +627,10 @@ static int id_patch(struct object *obj, struct btf_id *id)
int i;
/* For set, set8, id->id may be 0 */
- if (!id->id && !id->is_set && !id->is_set8)
+ if (!id->id && !id->is_set && !id->is_set8) {
pr_err("WARN: resolve_btfids: unresolved symbol %s\n", id->name);
+ warnings++;
+ }
for (i = 0; i < id->addr_cnt; i++) {
unsigned long addr = id->addr[i];
@@ -649,19 +679,18 @@ static int cmp_id(const void *pa, const void *pb)
static int sets_patch(struct object *obj)
{
Elf_Data *data = obj->efile.idlist;
- int *ptr = data->d_buf;
struct rb_node *next;
next = rb_first(&obj->sets);
while (next) {
- unsigned long addr, idx;
+ struct btf_id_set8 *set8 = NULL;
+ struct btf_id_set *set = NULL;
+ unsigned long addr, off;
struct btf_id *id;
- int *base;
- int cnt;
id = rb_entry(next, struct btf_id, rb_node);
addr = id->addr[0];
- idx = addr - obj->efile.idlist_addr;
+ off = addr - obj->efile.idlist_addr;
/* sets are unique */
if (id->addr_cnt != 1) {
@@ -670,14 +699,39 @@ static int sets_patch(struct object *obj)
return -1;
}
- idx = idx / sizeof(int);
- base = &ptr[idx] + (id->is_set8 ? 2 : 1);
- cnt = ptr[idx];
+ if (id->is_set) {
+ set = data->d_buf + off;
+ qsort(set->ids, set->cnt, sizeof(set->ids[0]), cmp_id);
+ } else {
+ set8 = data->d_buf + off;
+ /*
+ * Make sure id is at the beginning of the pairs
+ * struct, otherwise the below qsort would not work.
+ */
+ BUILD_BUG_ON((u32 *)set8->pairs != &set8->pairs[0].id);
+ qsort(set8->pairs, set8->cnt, sizeof(set8->pairs[0]), cmp_id);
- pr_debug("sorting addr %5lu: cnt %6d [%s]\n",
- (idx + 1) * sizeof(int), cnt, id->name);
+ /*
+ * When ELF endianness does not match endianness of the
+ * host, libelf will do the translation when updating
+ * the ELF. This, however, corrupts SET8 flags which are
+ * already in the target endianness. So, let's bswap
+ * them to the host endianness and libelf will then
+ * correctly translate everything.
+ */
+ if (obj->efile.encoding != ELFDATANATIVE) {
+ int i;
+
+ set8->flags = bswap_32(set8->flags);
+ for (i = 0; i < set8->cnt; i++) {
+ set8->pairs[i].flags =
+ bswap_32(set8->pairs[i].flags);
+ }
+ }
+ }
- qsort(base, cnt, id->is_set8 ? sizeof(uint64_t) : sizeof(int), cmp_id);
+ pr_debug("sorting addr %5lu: cnt %6d [%s]\n",
+ off, id->is_set ? set->cnt : set8->cnt, id->name);
next = rb_next(next);
}
@@ -686,7 +740,7 @@ static int sets_patch(struct object *obj)
static int symbols_patch(struct object *obj)
{
- int err;
+ off_t err;
if (__symbols_patch(obj, &obj->structs) ||
__symbols_patch(obj, &obj->unions) ||
@@ -732,6 +786,7 @@ int main(int argc, const char **argv)
.funcs = RB_ROOT,
.sets = RB_ROOT,
};
+ bool fatal_warnings = false;
struct option btfid_options[] = {
OPT_INCR('v', "verbose", &verbose,
"be more verbose (show errors, etc)"),
@@ -739,6 +794,8 @@ int main(int argc, const char **argv)
"BTF data"),
OPT_STRING('b', "btf_base", &obj.base_btf_path, "file",
"path of file providing base BTF"),
+ OPT_BOOLEAN(0, "fatal_warnings", &fatal_warnings,
+ "turn warnings into errors"),
OPT_END()
};
int err = -1;
@@ -773,7 +830,8 @@ int main(int argc, const char **argv)
if (symbols_patch(&obj))
goto out;
- err = 0;
+ if (!(fatal_warnings && warnings))
+ err = 0;
out:
if (obj.efile.elf) {
elf_end(obj.efile.elf);
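
The reworked sets_patch() sorts each BTF ID set in place inside the ELF data buffer; for set8 the qsort() runs over whole {id, flags} pairs, which is only valid because id is the first member of every pair (hence the BUILD_BUG_ON). A hedged sketch of that invariant using simplified local types (not the resolve_btfids structures):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct pair {
	uint32_t id;	/* must stay first: the comparator reads it directly */
	uint32_t flags;
};

static int cmp_id(const void *pa, const void *pb)
{
	const uint32_t *a = pa, *b = pb;

	return (*a > *b) - (*a < *b);
}

int main(void)
{
	struct pair pairs[] = { { 30, 1 }, { 10, 2 }, { 20, 3 } };
	int i;

	/* Sorting the pairs by id keeps each id's flags attached to it. */
	qsort(pairs, 3, sizeof(pairs[0]), cmp_id);
	for (i = 0; i < 3; i++)
		printf("%u/%u ", pairs[i].id, pairs[i].flags);
	printf("\n");	/* 10/2 20/3 30/1 */
	return 0;
}
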
diff --git a/tools/bpf/runqslower/Makefile b/tools/bpf/runqslower/Makefile
index d8288936c912..78a436c4072e 100644
--- a/tools/bpf/runqslower/Makefile
+++ b/tools/bpf/runqslower/Makefile
@@ -6,6 +6,7 @@ OUTPUT ?= $(abspath .output)/
BPFTOOL_OUTPUT := $(OUTPUT)bpftool/
DEFAULT_BPFTOOL := $(BPFTOOL_OUTPUT)bootstrap/bpftool
BPFTOOL ?= $(DEFAULT_BPFTOOL)
+BPF_TARGET_ENDIAN ?= --target=bpf
LIBBPF_SRC := $(abspath ../../lib/bpf)
BPFOBJ_OUTPUT := $(OUTPUT)libbpf/
BPFOBJ := $(BPFOBJ_OUTPUT)libbpf.a
@@ -15,6 +16,7 @@ INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../include/uapi)
CFLAGS := -g -Wall $(CLANG_CROSS_FLAGS)
CFLAGS += $(EXTRA_CFLAGS)
LDFLAGS += $(EXTRA_LDFLAGS)
+LDLIBS += -lelf -lz
# Try to detect best kernel BTF source
KERNEL_REL := $(shell uname -r)
@@ -25,10 +27,7 @@ VMLINUX_BTF_PATHS := $(if $(O),$(O)/vmlinux) \
VMLINUX_BTF_PATH := $(or $(VMLINUX_BTF),$(firstword \
$(wildcard $(VMLINUX_BTF_PATHS))))
-ifeq ($(V),1)
-Q =
-else
-Q = @
+ifneq ($(V),1)
MAKEFLAGS += --no-print-directory
submake_extras := feature_display=0
endif
@@ -51,7 +50,7 @@ clean:
libbpf_hdrs: $(BPFOBJ)
$(OUTPUT)/runqslower: $(OUTPUT)/runqslower.o $(BPFOBJ)
- $(QUIET_LINK)$(CC) $(CFLAGS) $^ -lelf -lz -o $@
+ $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
$(OUTPUT)/runqslower.o: runqslower.h $(OUTPUT)/runqslower.skel.h \
$(OUTPUT)/runqslower.bpf.o | libbpf_hdrs
@@ -62,7 +61,7 @@ $(OUTPUT)/%.skel.h: $(OUTPUT)/%.bpf.o | $(BPFTOOL)
$(QUIET_GEN)$(BPFTOOL) gen skeleton $< > $@
$(OUTPUT)/%.bpf.o: %.bpf.c $(BPFOBJ) | $(OUTPUT)
- $(QUIET_GEN)$(CLANG) -g -O2 --target=bpf $(INCLUDES) \
+ $(QUIET_GEN)$(CLANG) -g -O2 $(BPF_TARGET_ENDIAN) $(INCLUDES) \
-c $(filter %.c,$^) -o $@ && \
$(LLVM_STRIP) -g $@
diff --git a/tools/bpf/runqslower/runqslower.bpf.c b/tools/bpf/runqslower/runqslower.bpf.c
index 9a5c1f008fe6..fced54a3adf6 100644
--- a/tools/bpf/runqslower/runqslower.bpf.c
+++ b/tools/bpf/runqslower/runqslower.bpf.c
@@ -70,7 +70,6 @@ int handle__sched_switch(u64 *ctx)
struct task_struct *next = (struct task_struct *)ctx[2];
struct runq_event event = {};
u64 *tsp, delta_us;
- long state;
u32 pid;
/* ivcsw: treat like an enqueue event and store timestamp */
diff --git a/tools/build/Build b/tools/build/Build
deleted file mode 100644
index 76d1a4960973..000000000000
--- a/tools/build/Build
+++ /dev/null
@@ -1,3 +0,0 @@
-hostprogs := fixdep
-
-fixdep-y := fixdep.o
diff --git a/tools/build/Build.include b/tools/build/Build.include
index c2a95ab47379..e45b2eb0d24a 100644
--- a/tools/build/Build.include
+++ b/tools/build/Build.include
@@ -13,6 +13,8 @@
comma := ,
squote := '
pound := \#
+empty :=
+space := $(empty) $(empty)
###
# Name of target with a '.' as filename prefix. foo/bar.o => foo/.bar.o
diff --git a/tools/build/Makefile b/tools/build/Makefile
index 17cdf01e29a0..63ef21878761 100644
--- a/tools/build/Makefile
+++ b/tools/build/Makefile
@@ -17,13 +17,7 @@ $(call allow-override,LD,$(CROSS_COMPILE)ld)
export HOSTCC HOSTLD HOSTAR
-ifeq ($(V),1)
- Q =
-else
- Q = @
-endif
-
-export Q srctree CC LD
+export srctree CC LD
MAKEFLAGS := --no-print-directory
build := -f $(srctree)/tools/build/Makefile.build dir=. obj
@@ -43,12 +37,5 @@ ifneq ($(wildcard $(TMP_O)),)
$(Q)$(MAKE) -C feature OUTPUT=$(TMP_O) clean >/dev/null
endif
-$(OUTPUT)fixdep-in.o: FORCE
- $(Q)$(MAKE) $(build)=fixdep
-
-$(OUTPUT)fixdep: $(OUTPUT)fixdep-in.o
- $(QUIET_LINK)$(HOSTCC) $(KBUILD_HOSTLDFLAGS) -o $@ $<
-
-FORCE:
-
-.PHONY: FORCE
+$(OUTPUT)fixdep: $(srctree)/tools/build/fixdep.c
+ $(QUIET_CC)$(HOSTCC) $(KBUILD_HOSTCFLAGS) $(KBUILD_HOSTLDFLAGS) -o $@ $<
diff --git a/tools/build/Makefile.build b/tools/build/Makefile.build
index 5fb3fb3d97e0..3584ff308607 100644
--- a/tools/build/Makefile.build
+++ b/tools/build/Makefile.build
@@ -12,26 +12,6 @@
PHONY := __build
__build:
-ifeq ($(V),1)
- quiet =
- Q =
-else
- quiet=quiet_
- Q=@
-endif
-
-# If the user is running make -s (silent mode), suppress echoing of commands
-# make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS.
-ifeq ($(filter 3.%,$(MAKE_VERSION)),)
-short-opts := $(firstword -$(MAKEFLAGS))
-else
-short-opts := $(filter-out --%,$(MAKEFLAGS))
-endif
-
-ifneq ($(findstring s,$(short-opts)),)
- quiet=silent_
-endif
-
build-dir := $(srctree)/tools/build
# Define $(fixdep) for dep-cmd function
@@ -149,6 +129,10 @@ objprefix := $(subst ./,,$(OUTPUT)$(dir)/)
obj-y := $(addprefix $(objprefix),$(obj-y))
subdir-obj-y := $(addprefix $(objprefix),$(subdir-obj-y))
+# Separate out test log files from real build objects.
+test-y := $(filter %_log, $(obj-y))
+obj-y := $(filter-out %_log, $(obj-y))
+
# Final '$(obj)-in.o' object
in-target := $(objprefix)$(obj)-in.o
@@ -159,7 +143,7 @@ $(subdir-y):
$(sort $(subdir-obj-y)): $(subdir-y) ;
-$(in-target): $(obj-y) FORCE
+$(in-target): $(obj-y) $(test-y) FORCE
$(call rule_mkdir)
$(call if_changed,$(host)ld_multi)
diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
index 64df118376df..32bbe29fe5f6 100644
--- a/tools/build/Makefile.feature
+++ b/tools/build/Makefile.feature
@@ -28,39 +28,66 @@ endef
# the rule that uses them - an example for that is the 'bionic'
# feature check. ]
#
+# These + the ones in FEATURE_TESTS_EXTRA are included in
+# tools/build/feature/test-all.c and we try to build it all together; if
+# that succeeds, all those features are set to '1', meaning they are all
+# enabled.
+#
+# There are things like fortify-source that will be set to 1 because test-all
+# is built with the flags needed to test if it's enabled, resulting in
+#
+# $ rm -rf /tmp/b ; mkdir /tmp/b ; make -C tools/perf O=/tmp/b feature-dump
+# $ grep fortify-source /tmp/b/FEATURE-DUMP
+# feature-fortify-source=1
+# $
+#
+# All the others should have lines in tools/build/feature/test-all.c like:
+#
+# #define main main_test_disassembler_init_styled
+# # include "test-disassembler-init-styled.c"
+# #undef main
+#
+# #define main main_test_libzstd
+# # include "test-libzstd.c"
+# #undef main
+#
+# int main(int argc, char *argv[])
+# {
+# main_test_disassembler_four_args();
+# main_test_libzstd();
+# return 0;
+# }
+#
+# If the sample above works, then we end up with these lines in the FEATURE-DUMP
+# file:
+#
+# feature-disassembler-four-args=1
+# feature-libzstd=1
+#
FEATURE_TESTS_BASIC := \
backtrace \
- dwarf \
- dwarf_getlocations \
- dwarf_getcfi \
+ libdw \
eventfd \
fortify-source \
- get_current_dir_name \
gettid \
glibc \
libbfd \
libbfd-buildid \
- libcap \
libelf \
libelf-getphdrnum \
libelf-gelf_getnote \
libelf-getshdrstrndx \
+ libelf-zstd \
libnuma \
numa_num_possible_cpus \
- libperl \
libpython \
libslang \
- libslang-include-subdir \
libtraceevent \
- libtracefs \
- libcrypto \
- libunwind \
+ libcpupower \
pthread-attr-setaffinity-np \
pthread-barrier \
reallocarray \
stackprotector-all \
timerfd \
- libdw-dwarf-unwind \
zlib \
lzma \
get_cpuid \
@@ -87,30 +114,19 @@ FEATURE_TESTS_EXTRA := \
gtk2-infobar \
hello \
libbabeltrace \
+ libcapstone \
libbfd-liberty \
libbfd-liberty-z \
libopencsd \
- libunwind-x86 \
- libunwind-x86_64 \
- libunwind-arm \
- libunwind-aarch64 \
- libunwind-debug-frame \
- libunwind-debug-frame-arm \
- libunwind-debug-frame-aarch64 \
+ libperl \
cxx \
llvm \
- llvm-version \
clang \
libbpf \
- libbpf-btf__load_from_kernel_by_id \
- libbpf-bpf_prog_load \
- libbpf-bpf_object__next_program \
- libbpf-bpf_object__next_map \
- libbpf-bpf_program__set_insns \
- libbpf-bpf_create_map \
libpfm4 \
libdebuginfod \
- clang-bpf-co-re
+ clang-bpf-co-re \
+ bpftool-skeletons
FEATURE_TESTS ?= $(FEATURE_TESTS_BASIC)
@@ -120,20 +136,14 @@ ifeq ($(FEATURE_TESTS),all)
endif
FEATURE_DISPLAY ?= \
- dwarf \
- dwarf_getlocations \
+ libdw \
glibc \
- libbfd \
- libbfd-buildid \
- libcap \
libelf \
libnuma \
numa_num_possible_cpus \
- libperl \
libpython \
- libcrypto \
- libunwind \
- libdw-dwarf-unwind \
+ libcapstone \
+ llvm-perf \
zlib \
lzma \
get_cpuid \
@@ -147,6 +157,24 @@ FEATURE_DISPLAY ?= \
#
FEATURE_GROUP_MEMBERS-libbfd = libbfd-liberty libbfd-liberty-z
+#
+# Declare list of feature dependency packages that provide pkg-config files.
+#
+FEATURE_PKG_CONFIG ?= \
+ libtraceevent \
+ libtracefs
+
+feature_pkg_config = $(eval $(feature_pkg_config_code))
+define feature_pkg_config_code
+ FEATURE_CHECK_CFLAGS-$(1) := $(shell $(PKG_CONFIG) --cflags $(1) 2>/dev/null)
+ FEATURE_CHECK_LDFLAGS-$(1) := $(shell $(PKG_CONFIG) --libs $(1) 2>/dev/null)
+endef
+
+# Set FEATURE_CHECK_(C|LD)FLAGS-$(package) for packages using pkg-config.
+ifneq ($(PKG_CONFIG),)
+ $(foreach package,$(FEATURE_PKG_CONFIG),$(call feature_pkg_config,$(package)))
+endif
+
# Set FEATURE_CHECK_(C|LD)FLAGS-all for all FEATURE_TESTS features.
# If in the future we need per-feature checks/flags for features not
# mentioned in this list we need to refactor this ;-).
@@ -213,7 +241,7 @@ endef
#
# generates feature value assignment for name, like:
-# $(call feature_assign,dwarf) == feature-dwarf=1
+# $(call feature_assign,libdw) == feature-libdw=1
#
feature_assign = feature-$(1)=$(feature-$(1))
diff --git a/tools/build/Makefile.include b/tools/build/Makefile.include
index 8dadaa0fbb43..0e4de83400ac 100644
--- a/tools/build/Makefile.include
+++ b/tools/build/Makefile.include
@@ -1,8 +1,18 @@
# SPDX-License-Identifier: GPL-2.0-only
build := -f $(srctree)/tools/build/Makefile.build dir=. obj
+# More than just $(Q), we sometimes want to suppress all command output from a
+# recursive make -- even the 'up to date' printout.
+ifeq ($(V),1)
+ Q ?=
+ SILENT_MAKE = +$(Q)$(MAKE)
+else
+ Q ?= @
+ SILENT_MAKE = +$(Q)$(MAKE) --silent
+endif
+
fixdep:
- $(Q)$(MAKE) -C $(srctree)/tools/build CFLAGS= LDFLAGS= $(OUTPUT)fixdep
+ $(SILENT_MAKE) -C $(srctree)/tools/build CFLAGS= LDFLAGS= $(OUTPUT)fixdep
fixdep-clean:
$(Q)$(MAKE) -C $(srctree)/tools/build clean
diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
index 37722e509eb9..49b0add392b1 100644
--- a/tools/build/feature/Makefile
+++ b/tools/build/feature/Makefile
@@ -5,17 +5,13 @@ FILES= \
test-all.bin \
test-backtrace.bin \
test-bionic.bin \
- test-dwarf.bin \
- test-dwarf_getlocations.bin \
- test-dwarf_getcfi.bin \
+ test-libdw.bin \
test-eventfd.bin \
test-fortify-source.bin \
- test-get_current_dir_name.bin \
test-glibc.bin \
test-gtk2.bin \
test-gtk2-infobar.bin \
test-hello.bin \
- test-libaudit.bin \
test-libbfd.bin \
test-libbfd-buildid.bin \
test-disassembler-four-args.bin \
@@ -30,16 +26,16 @@ FILES= \
test-libelf-getphdrnum.bin \
test-libelf-gelf_getnote.bin \
test-libelf-getshdrstrndx.bin \
+ test-libelf-zstd.bin \
test-libdebuginfod.bin \
test-libnuma.bin \
test-numa_num_possible_cpus.bin \
test-libperl.bin \
test-libpython.bin \
test-libslang.bin \
- test-libslang-include-subdir.bin \
test-libtraceevent.bin \
+ test-libcpupower.bin \
test-libtracefs.bin \
- test-libcrypto.bin \
test-libunwind.bin \
test-libunwind-debug-frame.bin \
test-libunwind-x86.bin \
@@ -52,8 +48,8 @@ FILES= \
test-pthread-barrier.bin \
test-stackprotector-all.bin \
test-timerfd.bin \
- test-libdw-dwarf-unwind.bin \
test-libbabeltrace.bin \
+ test-libcapstone.bin \
test-compile-32.bin \
test-compile-x32.bin \
test-zlib.bin \
@@ -72,7 +68,7 @@ FILES= \
test-libopencsd.bin \
test-clang.bin \
test-llvm.bin \
- test-llvm-version.bin \
+ test-llvm-perf.bin \
test-libaio.bin \
test-libzstd.bin \
test-clang-bpf-co-re.bin \
@@ -81,14 +77,37 @@ FILES= \
FILES := $(addprefix $(OUTPUT),$(FILES))
-PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config
+# Some distros provide the command $(CROSS_COMPILE)pkg-config for
+# searching packages installed with Multiarch. Use it for cross
+# compilation if it exists.
+ifneq (, $(shell which $(CROSS_COMPILE)pkg-config))
+ PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config
+else
+ PKG_CONFIG ?= pkg-config
+
+ # PKG_CONFIG_PATH or PKG_CONFIG_LIBDIR, alongside PKG_CONFIG_SYSROOT_DIR
+ # for modified system root, are required for the cross compilation.
+ # If these PKG_CONFIG environment variables are not set, Multiarch library
+ # paths are used instead.
+ ifdef CROSS_COMPILE
+ ifeq ($(PKG_CONFIG_LIBDIR)$(PKG_CONFIG_PATH)$(PKG_CONFIG_SYSROOT_DIR),)
+ CROSS_ARCH = $(notdir $(CROSS_COMPILE:%-=%))
+ PKG_CONFIG_LIBDIR := /usr/local/$(CROSS_ARCH)/lib/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/local/lib/$(CROSS_ARCH)/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/lib/$(CROSS_ARCH)/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/local/share/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/share/pkgconfig/
+ export PKG_CONFIG_LIBDIR
+ endif
+ endif
+endif
all: $(FILES)
__BUILD = $(CC) $(CFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.c,$(@F)) $(LDFLAGS)
BUILD = $(__BUILD) > $(@:.bin=.make.output) 2>&1
BUILD_BFD = $(BUILD) -DPACKAGE='"perf"' -lbfd -ldl
- BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd -lcap
+ BUILD_ALL = $(BUILD) -fstack-protector-all -O2 -D_FORTIFY_SOURCE=2 -ldw -lelf -lnuma -lelf -lslang $(FLAGS_PERL_EMBED) $(FLAGS_PYTHON_EMBED) -DPACKAGE='"perf"' -lbfd -ldl -lz -llzma -lzstd
__BUILDXX = $(CXX) $(CXXFLAGS) -MD -Wall -Werror -o $@ $(patsubst %.bin,%.cpp,$(@F)) $(LDFLAGS)
BUILDXX = $(__BUILDXX) > $(@:.bin=.make.output) 2>&1
@@ -125,9 +144,6 @@ $(OUTPUT)test-libelf.bin:
$(OUTPUT)test-eventfd.bin:
$(BUILD)
-$(OUTPUT)test-get_current_dir_name.bin:
- $(BUILD)
-
$(OUTPUT)test-glibc.bin:
$(BUILD)
@@ -144,19 +160,26 @@ $(OUTPUT)test-libopencsd.bin:
$(BUILD) # -lopencsd_c_api -lopencsd provided by
# $(FEATURE_CHECK_LDFLAGS-libopencsd)
-DWARFLIBS := -ldw
+DWLIBS := -ldw
ifeq ($(findstring -static,${LDFLAGS}),-static)
-DWARFLIBS += -lelf -lebl -lz -llzma -lbz2
-endif
+ DWLIBS += -lelf -lz -llzma -lbz2 -lzstd
+
+ LIBDW_VERSION := $(shell $(PKG_CONFIG) --modversion libdw).0.0
+ LIBDW_VERSION_1 := $(word 1, $(subst ., ,$(LIBDW_VERSION)))
+ LIBDW_VERSION_2 := $(word 2, $(subst ., ,$(LIBDW_VERSION)))
-$(OUTPUT)test-dwarf.bin:
- $(BUILD) $(DWARFLIBS)
+ # Elfutils merged libebl.a into libdw.a starting from version 0.177,
+ # Link libebl.a only if libdw is older than this version.
+ ifeq ($(shell test $(LIBDW_VERSION_2) -lt 177; echo $$?),0)
+ DWLIBS += -lebl
+ endif
-$(OUTPUT)test-dwarf_getlocations.bin:
- $(BUILD) $(DWARFLIBS)
+ # Must put -ldl after -lebl for dependency
+ DWLIBS += -ldl
+endif
-$(OUTPUT)test-dwarf_getcfi.bin:
- $(BUILD) $(DWARFLIBS)
+$(OUTPUT)test-libdw.bin:
+ $(BUILD) $(DWLIBS)
$(OUTPUT)test-libelf-getphdrnum.bin:
$(BUILD) -lelf
@@ -167,6 +190,9 @@ $(OUTPUT)test-libelf-gelf_getnote.bin:
$(OUTPUT)test-libelf-getshdrstrndx.bin:
$(BUILD) -lelf
+$(OUTPUT)test-libelf-zstd.bin:
+ $(BUILD) -lelf -lz -lzstd
+
$(OUTPUT)test-libdebuginfod.bin:
$(BUILD) -ldebuginfod
@@ -177,45 +203,39 @@ $(OUTPUT)test-numa_num_possible_cpus.bin:
$(BUILD) -lnuma
$(OUTPUT)test-libunwind.bin:
- $(BUILD) -lelf
+ $(BUILD) -lelf -llzma
$(OUTPUT)test-libunwind-debug-frame.bin:
- $(BUILD) -lelf
+ $(BUILD) -lelf -llzma
$(OUTPUT)test-libunwind-x86.bin:
- $(BUILD) -lelf -lunwind-x86
+ $(BUILD) -lelf -llzma -lunwind-x86
$(OUTPUT)test-libunwind-x86_64.bin:
- $(BUILD) -lelf -lunwind-x86_64
+ $(BUILD) -lelf -llzma -lunwind-x86_64
$(OUTPUT)test-libunwind-arm.bin:
- $(BUILD) -lelf -lunwind-arm
+ $(BUILD) -lelf -llzma -lunwind-arm
$(OUTPUT)test-libunwind-aarch64.bin:
- $(BUILD) -lelf -lunwind-aarch64
+ $(BUILD) -lelf -llzma -lunwind-aarch64
$(OUTPUT)test-libunwind-debug-frame-arm.bin:
- $(BUILD) -lelf -lunwind-arm
+ $(BUILD) -lelf -llzma -lunwind-arm
$(OUTPUT)test-libunwind-debug-frame-aarch64.bin:
- $(BUILD) -lelf -lunwind-aarch64
-
-$(OUTPUT)test-libaudit.bin:
- $(BUILD) -laudit
+ $(BUILD) -lelf -llzma -lunwind-aarch64
$(OUTPUT)test-libslang.bin:
$(BUILD) -lslang
-$(OUTPUT)test-libslang-include-subdir.bin:
- $(BUILD) -lslang
-
$(OUTPUT)test-libtraceevent.bin:
$(BUILD) -ltraceevent
-$(OUTPUT)test-libtracefs.bin:
- $(BUILD) $(shell $(PKG_CONFIG) --cflags libtraceevent 2>/dev/null) -ltracefs
+$(OUTPUT)test-libcpupower.bin:
+ $(BUILD) -lcpupower
-$(OUTPUT)test-libcrypto.bin:
- $(BUILD) -lcrypto
+$(OUTPUT)test-libtracefs.bin:
+ $(BUILD) $(shell $(PKG_CONFIG) --cflags libtracefs 2>/dev/null) -ltracefs
$(OUTPUT)test-gtk2.bin:
$(BUILD) $(shell $(PKG_CONFIG) --libs --cflags gtk+-2.0 2>/dev/null) -Wno-deprecated-declarations
@@ -280,17 +300,17 @@ $(OUTPUT)test-backtrace.bin:
$(OUTPUT)test-timerfd.bin:
$(BUILD)
-$(OUTPUT)test-libdw-dwarf-unwind.bin:
- $(BUILD) # -ldw provided by $(FEATURE_CHECK_LDFLAGS-libdw-dwarf-unwind)
-
$(OUTPUT)test-libbabeltrace.bin:
$(BUILD) # -lbabeltrace provided by $(FEATURE_CHECK_LDFLAGS-libbabeltrace)
+$(OUTPUT)test-libcapstone.bin:
+ $(BUILD) # -lcapstone provided by $(FEATURE_CHECK_LDFLAGS-libcapstone)
+
$(OUTPUT)test-compile-32.bin:
- $(CC) -m32 -o $@ test-compile.c
+ $(CC) -m32 -Wall -Werror -o $@ test-compile.c
$(OUTPUT)test-compile-x32.bin:
- $(CC) -mx32 -o $@ test-compile.c
+ $(CC) -mx32 -Wall -Werror -o $@ test-compile.c
$(OUTPUT)test-zlib.bin:
$(BUILD) -lz
@@ -307,27 +327,6 @@ $(OUTPUT)test-bpf.bin:
$(OUTPUT)test-libbpf.bin:
$(BUILD) -lbpf
-$(OUTPUT)test-libbpf-btf__load_from_kernel_by_id.bin:
- $(BUILD) -lbpf
-
-$(OUTPUT)test-libbpf-bpf_prog_load.bin:
- $(BUILD) -lbpf
-
-$(OUTPUT)test-libbpf-bpf_map_create.bin:
- $(BUILD) -lbpf
-
-$(OUTPUT)test-libbpf-bpf_object__next_program.bin:
- $(BUILD) -lbpf
-
-$(OUTPUT)test-libbpf-bpf_object__next_map.bin:
- $(BUILD) -lbpf
-
-$(OUTPUT)test-libbpf-bpf_program__set_insns.bin:
- $(BUILD) -lbpf
-
-$(OUTPUT)test-libbpf-btf__raw_data.bin:
- $(BUILD) -lbpf
-
$(OUTPUT)test-sdt.bin:
$(BUILD)
@@ -351,9 +350,12 @@ $(OUTPUT)test-llvm.bin:
$(shell $(LLVM_CONFIG) --system-libs) \
> $(@:.bin=.make.output) 2>&1
-$(OUTPUT)test-llvm-version.bin:
- $(BUILDXX) -std=gnu++17 \
- -I$(shell $(LLVM_CONFIG) --includedir) \
+$(OUTPUT)test-llvm-perf.bin:
+ $(BUILDXX) -std=gnu++17 \
+ -I$(shell $(LLVM_CONFIG) --includedir) \
+ -L$(shell $(LLVM_CONFIG) --libdir) \
+ $(shell $(LLVM_CONFIG) --libs Core BPF) \
+ $(shell $(LLVM_CONFIG) --system-libs) \
> $(@:.bin=.make.output) 2>&1
$(OUTPUT)test-clang.bin:
@@ -383,6 +385,9 @@ $(OUTPUT)test-file-handle.bin:
$(OUTPUT)test-libpfm4.bin:
$(BUILD) -lpfm
+$(OUTPUT)test-bpftool-skeletons.bin:
+ $(SYSTEM_BPFTOOL) version | grep '^features:.*skeletons' \
+ > $(@:.bin=.make.output) 2>&1
###############################
clean:
diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
index 6f4bf386a3b5..8a354b81417c 100644
--- a/tools/build/feature/test-all.c
+++ b/tools/build/feature/test-all.c
@@ -7,17 +7,13 @@
*/
/*
- * Quirk: Python and Perl headers cannot be in arbitrary places, so keep
- * these 3 testcases at the top:
+ * Quirk: Python headers cannot be in arbitrary places, so keep this testcase at
+ * the top:
*/
#define main main_test_libpython
# include "test-libpython.c"
#undef main
-#define main main_test_libperl
-# include "test-libperl.c"
-#undef main
-
#define main main_test_hello
# include "test-hello.c"
#undef main
@@ -26,10 +22,6 @@
# include "test-libelf.c"
#undef main
-#define main main_test_get_current_dir_name
-# include "test-get_current_dir_name.c"
-#undef main
-
#define main main_test_gettid
# include "test-gettid.c"
#undef main
@@ -38,12 +30,8 @@
# include "test-glibc.c"
#undef main
-#define main main_test_dwarf
-# include "test-dwarf.c"
-#undef main
-
-#define main main_test_dwarf_getlocations
-# include "test-dwarf_getlocations.c"
+#define main main_test_libdw
+# include "test-libdw.c"
#undef main
#define main main_test_eventfd
@@ -62,22 +50,14 @@
# include "test-libelf-getshdrstrndx.c"
#undef main
-#define main main_test_libunwind
-# include "test-libunwind.c"
+#define main main_test_libelf_zstd
+# include "test-libelf-zstd.c"
#undef main
#define main main_test_libslang
# include "test-libslang.c"
#undef main
-#define main main_test_libbfd
-# include "test-libbfd.c"
-#undef main
-
-#define main main_test_libbfd_buildid
-# include "test-libbfd-buildid.c"
-#undef main
-
#define main main_test_backtrace
# include "test-backtrace.c"
#undef main
@@ -98,10 +78,6 @@
# include "test-stackprotector-all.c"
#undef main
-#define main main_test_libdw_dwarf_unwind
-# include "test-libdw-dwarf-unwind.c"
-#undef main
-
#define main main_test_zlib
# include "test-zlib.c"
#undef main
@@ -146,10 +122,6 @@
# include "test-bpf.c"
#undef main
-#define main main_test_libcrypto
-# include "test-libcrypto.c"
-#undef main
-
#define main main_test_sdt
# include "test-sdt.c"
#undef main
@@ -166,58 +138,46 @@
# include "test-reallocarray.c"
#undef main
-#define main main_test_disassembler_four_args
-# include "test-disassembler-four-args.c"
-#undef main
-
-#define main main_test_disassembler_init_styled
-# include "test-disassembler-init-styled.c"
-#undef main
-
#define main main_test_libzstd
# include "test-libzstd.c"
#undef main
+#define main main_test_libtraceevent
+# include "test-libtraceevent.c"
+#undef main
+
int main(int argc, char *argv[])
{
main_test_libpython();
- main_test_libperl();
main_test_hello();
main_test_libelf();
- main_test_get_current_dir_name();
main_test_gettid();
main_test_glibc();
- main_test_dwarf();
- main_test_dwarf_getlocations();
+ main_test_libdw();
main_test_eventfd();
main_test_libelf_getphdrnum();
main_test_libelf_gelf_getnote();
main_test_libelf_getshdrstrndx();
- main_test_libunwind();
main_test_libslang();
- main_test_libbfd();
- main_test_libbfd_buildid();
main_test_backtrace();
main_test_libnuma();
main_test_numa_num_possible_cpus();
main_test_timerfd();
main_test_stackprotector_all();
- main_test_libdw_dwarf_unwind();
main_test_zlib();
main_test_pthread_attr_setaffinity_np();
main_test_pthread_barrier();
main_test_lzma();
main_test_get_cpuid();
main_test_bpf();
- main_test_libcrypto();
main_test_scandirat();
main_test_sched_getcpu();
main_test_sdt();
main_test_setns();
main_test_libaio();
main_test_reallocarray();
- main_test_disassembler_four_args();
main_test_libzstd();
+ main_test_libtraceevent();
return 0;
}
diff --git a/tools/build/feature/test-backtrace.c b/tools/build/feature/test-backtrace.c
index e9ddd27c69c3..7962fbad6401 100644
--- a/tools/build/feature/test-backtrace.c
+++ b/tools/build/feature/test-backtrace.c
@@ -5,7 +5,7 @@
int main(void)
{
void *backtrace_fns[10];
- size_t entries;
+ int entries;
entries = backtrace(backtrace_fns, 10);
backtrace_symbols_fd(backtrace_fns, entries, 1);
diff --git a/tools/build/feature/test-bpf.c b/tools/build/feature/test-bpf.c
index 727d22e34a6e..e7a405f83af6 100644
--- a/tools/build/feature/test-bpf.c
+++ b/tools/build/feature/test-bpf.c
@@ -44,5 +44,5 @@ int main(void)
* Test existence of __NR_bpf and BPF_PROG_LOAD.
* This call should fail if we run the testcase.
*/
- return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
+ return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)) == 0;
}
diff --git a/tools/build/feature/test-dwarf.c b/tools/build/feature/test-dwarf.c
deleted file mode 100644
index 8d474bd7371b..000000000000
--- a/tools/build/feature/test-dwarf.c
+++ /dev/null
@@ -1,11 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <dwarf.h>
-#include <elfutils/libdw.h>
-#include <elfutils/version.h>
-
-int main(void)
-{
- Dwarf *dbg = dwarf_begin(0, DWARF_C_READ);
-
- return (long)dbg;
-}
diff --git a/tools/build/feature/test-dwarf_getcfi.c b/tools/build/feature/test-dwarf_getcfi.c
deleted file mode 100644
index 50e7d7cb7bdf..000000000000
--- a/tools/build/feature/test-dwarf_getcfi.c
+++ /dev/null
@@ -1,9 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <stdio.h>
-#include <elfutils/libdw.h>
-
-int main(void)
-{
- Dwarf *dwarf = NULL;
- return dwarf_getcfi(dwarf) == NULL;
-}
diff --git a/tools/build/feature/test-dwarf_getlocations.c b/tools/build/feature/test-dwarf_getlocations.c
deleted file mode 100644
index 78fb4a1fa68c..000000000000
--- a/tools/build/feature/test-dwarf_getlocations.c
+++ /dev/null
@@ -1,13 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <stdlib.h>
-#include <elfutils/libdw.h>
-
-int main(void)
-{
- Dwarf_Addr base, start, end;
- Dwarf_Attribute attr;
- Dwarf_Op *op;
- size_t nops;
- ptrdiff_t offset = 0;
- return (int)dwarf_getlocations(&attr, offset, &base, &start, &end, &op, &nops);
-}
diff --git a/tools/build/feature/test-get_current_dir_name.c b/tools/build/feature/test-get_current_dir_name.c
deleted file mode 100644
index c3c201691b4f..000000000000
--- a/tools/build/feature/test-get_current_dir_name.c
+++ /dev/null
@@ -1,11 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#define _GNU_SOURCE
-#include <unistd.h>
-#include <stdlib.h>
-
-int main(void)
-{
- free(get_current_dir_name());
- return 0;
-}
-#undef _GNU_SOURCE
diff --git a/tools/build/feature/test-glibc.c b/tools/build/feature/test-glibc.c
index 9ab8e90e7b88..20a250419f31 100644
--- a/tools/build/feature/test-glibc.c
+++ b/tools/build/feature/test-glibc.c
@@ -16,5 +16,5 @@ int main(void)
const char *version = XSTR(__GLIBC__) "." XSTR(__GLIBC_MINOR__);
#endif
- return (long)version;
+ return version == NULL;
}
diff --git a/tools/build/feature/test-libaudit.c b/tools/build/feature/test-libaudit.c
deleted file mode 100644
index f5b0863fa1ec..000000000000
--- a/tools/build/feature/test-libaudit.c
+++ /dev/null
@@ -1,11 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <libaudit.h>
-
-extern int printf(const char *format, ...);
-
-int main(void)
-{
- printf("error message: %s\n", audit_errno_to_name(0));
-
- return audit_open();
-}
diff --git a/tools/build/feature/test-libcapstone.c b/tools/build/feature/test-libcapstone.c
new file mode 100644
index 000000000000..fbe8dba189e9
--- /dev/null
+++ b/tools/build/feature/test-libcapstone.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <capstone/capstone.h>
+
+int main(void)
+{
+ csh handle;
+
+ cs_open(CS_ARCH_X86, CS_MODE_64, &handle);
+ return 0;
+}
diff --git a/tools/build/feature/test-libcpupower.c b/tools/build/feature/test-libcpupower.c
new file mode 100644
index 000000000000..a346aa332a71
--- /dev/null
+++ b/tools/build/feature/test-libcpupower.c
@@ -0,0 +1,8 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <cpuidle.h>
+
+int main(void)
+{
+ int rv = cpuidle_state_count(0);
+ return rv;
+}
diff --git a/tools/build/feature/test-libcrypto.c b/tools/build/feature/test-libcrypto.c
deleted file mode 100644
index bc34a5bbb504..000000000000
--- a/tools/build/feature/test-libcrypto.c
+++ /dev/null
@@ -1,25 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <openssl/evp.h>
-#include <openssl/sha.h>
-#include <openssl/md5.h>
-
-int main(void)
-{
- EVP_MD_CTX *mdctx;
- unsigned char md[MD5_DIGEST_LENGTH + SHA_DIGEST_LENGTH];
- unsigned char dat[] = "12345";
- unsigned int digest_len;
-
- mdctx = EVP_MD_CTX_new();
- if (!mdctx)
- return 0;
-
- EVP_DigestInit_ex(mdctx, EVP_md5(), NULL);
- EVP_DigestUpdate(mdctx, &dat[0], sizeof(dat));
- EVP_DigestFinal_ex(mdctx, &md[0], &digest_len);
- EVP_MD_CTX_free(mdctx);
-
- SHA1(&dat[0], sizeof(dat), &md[0]);
-
- return 0;
-}
diff --git a/tools/build/feature/test-libdebuginfod.c b/tools/build/feature/test-libdebuginfod.c
index da22548b8413..823f9fa9391d 100644
--- a/tools/build/feature/test-libdebuginfod.c
+++ b/tools/build/feature/test-libdebuginfod.c
@@ -4,5 +4,5 @@
int main(void)
{
debuginfod_client* c = debuginfod_begin();
- return (long)c;
+ return !!c;
}
diff --git a/tools/build/feature/test-libdw-dwarf-unwind.c b/tools/build/feature/test-libdw-dwarf-unwind.c
deleted file mode 100644
index ed03d9505609..000000000000
--- a/tools/build/feature/test-libdw-dwarf-unwind.c
+++ /dev/null
@@ -1,14 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-
-#include <elfutils/libdwfl.h>
-
-int main(void)
-{
- /*
- * This function is guarded via: __nonnull_attribute__ (1, 2).
- * Passing '1' as arguments value. This code is never executed,
- * only compiled.
- */
- dwfl_thread_getframes((void *) 1, (void *) 1, NULL);
- return 0;
-}
diff --git a/tools/build/feature/test-libdw.c b/tools/build/feature/test-libdw.c
new file mode 100644
index 000000000000..aabd63ca76b4
--- /dev/null
+++ b/tools/build/feature/test-libdw.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdlib.h>
+#include <dwarf.h>
+#include <elfutils/libdw.h>
+#include <elfutils/libdwfl.h>
+#include <elfutils/version.h>
+
+int test_libdw(void)
+{
+ Dwarf *dbg = dwarf_begin(0, DWARF_C_READ);
+
+ return dbg == NULL;
+}
+
+int test_libdw_unwind(void)
+{
+ /*
+ * This function is guarded via: __nonnull_attribute__ (1, 2).
+ * Passing '1' as arguments value. This code is never executed,
+ * only compiled.
+ */
+ dwfl_thread_getframes((void *) 1, (void *) 1, NULL);
+ return 0;
+}
+
+int test_libdw_getlocations(void)
+{
+ Dwarf_Addr base, start, end;
+ Dwarf_Attribute attr;
+ Dwarf_Op *op;
+ size_t nops;
+ ptrdiff_t offset = 0;
+
+ return (int)dwarf_getlocations(&attr, offset, &base, &start, &end, &op, &nops);
+}
+
+int test_libdw_getcfi(void)
+{
+ Dwarf *dwarf = NULL;
+
+ return dwarf_getcfi(dwarf) == NULL;
+}
+
+int test_elfutils(void)
+{
+ Dwarf_CFI *cfi = NULL;
+
+ dwarf_cfi_end(cfi);
+ return 0;
+}
+
+int main(void)
+{
+ return test_libdw() + test_libdw_unwind() + test_libdw_getlocations() +
+ test_libdw_getcfi() + test_elfutils();
+}
diff --git a/tools/build/feature/test-libelf-gelf_getnote.c b/tools/build/feature/test-libelf-gelf_getnote.c
index 075d062fe841..e06121161161 100644
--- a/tools/build/feature/test-libelf-gelf_getnote.c
+++ b/tools/build/feature/test-libelf-gelf_getnote.c
@@ -4,5 +4,5 @@
int main(void)
{
- return gelf_getnote(NULL, 0, NULL, NULL, NULL);
+ return gelf_getnote(NULL, 0, NULL, NULL, NULL) == 0;
}
diff --git a/tools/build/feature/test-libelf-zstd.c b/tools/build/feature/test-libelf-zstd.c
new file mode 100644
index 000000000000..a1324a1db3bb
--- /dev/null
+++ b/tools/build/feature/test-libelf-zstd.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stddef.h>
+#include <libelf.h>
+
+int main(void)
+{
+ elf_compress(NULL, ELFCOMPRESS_ZSTD, 0);
+ return 0;
+}
diff --git a/tools/build/feature/test-libelf.c b/tools/build/feature/test-libelf.c
index 905044127d56..2dbb6ea870f3 100644
--- a/tools/build/feature/test-libelf.c
+++ b/tools/build/feature/test-libelf.c
@@ -5,5 +5,5 @@ int main(void)
{
Elf *elf = elf_begin(0, ELF_C_READ, 0);
- return (long)elf;
+ return !!elf;
}
diff --git a/tools/build/feature/test-libtraceevent.c b/tools/build/feature/test-libtraceevent.c
index 416b11ffd4b4..804ad80dbbd9 100644
--- a/tools/build/feature/test-libtraceevent.c
+++ b/tools/build/feature/test-libtraceevent.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-#include <traceevent/trace-seq.h>
+#include <trace-seq.h>
int main(void)
{
diff --git a/tools/build/feature/test-libtracefs.c b/tools/build/feature/test-libtracefs.c
index 8eff16c0c10b..29a757a7d848 100644
--- a/tools/build/feature/test-libtracefs.c
+++ b/tools/build/feature/test-libtracefs.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-#include <tracefs/tracefs.h>
+#include <tracefs.h>
int main(void)
{
diff --git a/tools/build/feature/test-llvm-perf.cpp b/tools/build/feature/test-llvm-perf.cpp
new file mode 100644
index 000000000000..a8cbb67e335e
--- /dev/null
+++ b/tools/build/feature/test-llvm-perf.cpp
@@ -0,0 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "llvm/Support/ManagedStatic.h"
+#include "llvm/Support/raw_ostream.h"
+
+#if LLVM_VERSION_MAJOR < 13
+# error "Perf requires llvm-devel/llvm-dev version 13 or greater"
+#endif
+
+int main()
+{
+ llvm::errs() << "Hello World!\n";
+ llvm::llvm_shutdown();
+ return 0;
+}
diff --git a/tools/build/feature/test-lzma.c b/tools/build/feature/test-lzma.c
index 78682bb01d57..b57103774e8e 100644
--- a/tools/build/feature/test-lzma.c
+++ b/tools/build/feature/test-lzma.c
@@ -4,7 +4,7 @@
int main(void)
{
lzma_stream strm = LZMA_STREAM_INIT;
- int ret;
+ lzma_ret ret;
ret = lzma_stream_decoder(&strm, UINT64_MAX, LZMA_CONCATENATED);
return ret ? -1 : 0;
diff --git a/tools/cgroup/memcg_slabinfo.py b/tools/cgroup/memcg_slabinfo.py
index 1d3a90d93fe2..6bf4bde77903 100644
--- a/tools/cgroup/memcg_slabinfo.py
+++ b/tools/cgroup/memcg_slabinfo.py
@@ -146,12 +146,11 @@ def detect_kernel_config():
def for_each_slab(prog):
- PGSlab = 1 << prog.constant('PG_slab')
- PGHead = 1 << prog.constant('PG_head')
+ slabtype = prog.constant('PGTY_slab')
for page in for_each_page(prog):
try:
- if page.flags.value_() & PGSlab:
+ if (page.page_type.value_() >> 24) == slabtype:
yield cast('struct slab *', page)
except FaultError:
pass
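
The slab check now reads the page type stored in the top byte of page.page_type instead of testing a PG_slab flag bit. A small sketch of that top-byte comparison, using an illustrative type value (the real PGTY_slab constant comes from the running kernel, not from this example):

#include <stdint.h>
#include <stdio.h>

#define PGTY_SLAB 0xf5u	/* illustrative value; the script reads the real one via drgn */

static int is_slab(uint32_t page_type)
{
	return (page_type >> 24) == PGTY_SLAB;	/* the type tag lives in bits 31..24 */
}

int main(void)
{
	printf("%d\n", is_slab(0xf5000123u));	/* 1 */
	printf("%d\n", is_slab(0x00000123u));	/* 0 */
	return 0;
}
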
diff --git a/tools/counter/.gitignore b/tools/counter/.gitignore
index 9fd290d4bf43..22d8727d2696 100644
--- a/tools/counter/.gitignore
+++ b/tools/counter/.gitignore
@@ -1,2 +1,3 @@
/counter_example
+/counter_watch_events
/include/linux/counter.h
diff --git a/tools/counter/counter_watch_events.c b/tools/counter/counter_watch_events.c
index 107631e0f2e3..15e21b0c5ffd 100644
--- a/tools/counter/counter_watch_events.c
+++ b/tools/counter/counter_watch_events.c
@@ -38,6 +38,7 @@ static const char * const counter_event_type_name[] = {
"COUNTER_EVENT_INDEX",
"COUNTER_EVENT_CHANGE_OF_STATE",
"COUNTER_EVENT_CAPTURE",
+ "COUNTER_EVENT_DIRECTION_CHANGE",
};
static const char * const counter_component_type_name[] = {
@@ -118,6 +119,7 @@ static void print_usage(void)
" evt_index (COUNTER_EVENT_INDEX)\n"
" evt_change_of_state (COUNTER_EVENT_CHANGE_OF_STATE)\n"
" evt_capture (COUNTER_EVENT_CAPTURE)\n"
+ " evt_direction_change (COUNTER_EVENT_DIRECTION_CHANGE)\n"
"\n"
" chan=<n> channel <n> for this watch [default: 0]\n"
" id=<n> component id <n> for this watch [default: 0]\n"
@@ -157,6 +159,7 @@ enum {
WATCH_EVENT_INDEX,
WATCH_EVENT_CHANGE_OF_STATE,
WATCH_EVENT_CAPTURE,
+ WATCH_EVENT_DIRECTION_CHANGE,
WATCH_CHANNEL,
WATCH_ID,
WATCH_PARENT,
@@ -183,6 +186,7 @@ static char * const counter_watch_subopts[WATCH_SUBOPTS_MAX + 1] = {
[WATCH_EVENT_INDEX] = "evt_index",
[WATCH_EVENT_CHANGE_OF_STATE] = "evt_change_of_state",
[WATCH_EVENT_CAPTURE] = "evt_capture",
+ [WATCH_EVENT_DIRECTION_CHANGE] = "evt_direction_change",
/* channel, id, parent */
[WATCH_CHANNEL] = "chan",
[WATCH_ID] = "id",
@@ -278,6 +282,7 @@ int main(int argc, char **argv)
case WATCH_EVENT_INDEX:
case WATCH_EVENT_CHANGE_OF_STATE:
case WATCH_EVENT_CAPTURE:
+ case WATCH_EVENT_DIRECTION_CHANGE:
/* match counter_event_type: subtract enum value */
ret -= WATCH_EVENT_OVERFLOW;
watches[i].event = ret;
diff --git a/tools/crypto/ccp/dbc.c b/tools/crypto/ccp/dbc.c
index a807df0f0597..80248d3d3a5a 100644
--- a/tools/crypto/ccp/dbc.c
+++ b/tools/crypto/ccp/dbc.c
@@ -57,7 +57,6 @@ int process_param(int fd, int msg_index, __u8 *signature, int *data)
.msg_index = msg_index,
.param = *data,
};
- int ret;
assert(signature);
assert(data);
diff --git a/tools/crypto/ccp/test_dbc.py b/tools/crypto/ccp/test_dbc.py
index 79de3638a01a..bb0e671be96d 100755
--- a/tools/crypto/ccp/test_dbc.py
+++ b/tools/crypto/ccp/test_dbc.py
@@ -138,12 +138,14 @@ class TestInvalidSignature(DynamicBoostControlTest):
def test_authenticated_nonce(self) -> None:
"""fetch authenticated nonce"""
+ get_nonce(self.d, None)
with self.assertRaises(OSError) as error:
get_nonce(self.d, self.signature)
- self.assertEqual(error.exception.errno, 1)
+ self.assertEqual(error.exception.errno, 22)
def test_set_uid(self) -> None:
"""set uid"""
+ get_nonce(self.d, None)
with self.assertRaises(OSError) as error:
set_uid(self.d, self.uid, self.signature)
self.assertEqual(error.exception.errno, 1)
@@ -152,13 +154,13 @@ class TestInvalidSignature(DynamicBoostControlTest):
"""fetch a parameter"""
with self.assertRaises(OSError) as error:
process_param(self.d, PARAM_GET_SOC_PWR_CUR, self.signature)
- self.assertEqual(error.exception.errno, 1)
+ self.assertEqual(error.exception.errno, 11)
def test_set_param(self) -> None:
"""set a parameter"""
with self.assertRaises(OSError) as error:
process_param(self.d, PARAM_SET_PWR_CAP, self.signature, 1000)
- self.assertEqual(error.exception.errno, 1)
+ self.assertEqual(error.exception.errno, 11)
class TestUnFusedSystem(DynamicBoostControlTest):
diff --git a/tools/debugging/kernel-chktaint b/tools/debugging/kernel-chktaint
index 279be06332be..e7da0909d097 100755
--- a/tools/debugging/kernel-chktaint
+++ b/tools/debugging/kernel-chktaint
@@ -204,6 +204,14 @@ else
echo " * an in-kernel test (such as a KUnit test) has been run (#18)"
fi
+T=`expr $T / 2`
+if [ `expr $T % 2` -eq 0 ]; then
+ addout " "
+else
+ addout "J"
+ echo " * fwctl's mutating debug interface was used (#19)"
+fi
+
echo "For a more detailed explanation of the various taint flags see"
echo " Documentation/admin-guide/tainted-kernels.rst in the Linux kernel sources"
echo " or https://kernel.org/doc/html/latest/admin-guide/tainted-kernels.html"
diff --git a/tools/docs/gen-redirects.py b/tools/docs/gen-redirects.py
new file mode 100755
index 000000000000..6a6ebf6f42dc
--- /dev/null
+++ b/tools/docs/gen-redirects.py
@@ -0,0 +1,54 @@
+#! /usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright © 2025, Oracle and/or its affiliates.
+# Author: Vegard Nossum <vegard.nossum@oracle.com>
+
+"""Generate HTML redirects for renamed Documentation/**.rst files using
+the output of tools/docs/gen-renames.py.
+
+Example:
+
+ tools/docs/gen-redirects.py --output Documentation/output/ < Documentation/.renames.txt
+"""
+
+import argparse
+import os
+import sys
+
+parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
+parser.add_argument('-o', '--output', help='output directory')
+
+args = parser.parse_args()
+
+for line in sys.stdin:
+ line = line.rstrip('\n')
+
+ old_name, new_name = line.split(' ', 2)
+
+ old_html_path = os.path.join(args.output, old_name + '.html')
+ new_html_path = os.path.join(args.output, new_name + '.html')
+
+ if not os.path.exists(new_html_path):
+ print(f"warning: target does not exist: {new_html_path} (redirect from {old_html_path})")
+ continue
+
+ old_html_dir = os.path.dirname(old_html_path)
+ if not os.path.exists(old_html_dir):
+ os.makedirs(old_html_dir)
+
+ relpath = os.path.relpath(new_name, os.path.dirname(old_name)) + '.html'
+
+ with open(old_html_path, 'w') as f:
+ print(f"""\
+<!DOCTYPE html>
+
+<html lang="en">
+<head>
+ <title>This page has moved</title>
+ <meta http-equiv="refresh" content="0; url={relpath}">
+</head>
+<body>
+<p>This page has moved to <a href="{relpath}">{new_name}</a>.</p>
+</body>
+</html>""", file=f)
diff --git a/tools/docs/gen-renames.py b/tools/docs/gen-renames.py
new file mode 100755
index 000000000000..8cb3b2157d83
--- /dev/null
+++ b/tools/docs/gen-renames.py
@@ -0,0 +1,130 @@
+#! /usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright © 2025, Oracle and/or its affiliates.
+# Author: Vegard Nossum <vegard.nossum@oracle.com>
+
+"""Trawl repository history for renames of Documentation/**.rst files.
+
+Example:
+
+ tools/docs/gen-renames.py --rev HEAD > Documentation/.renames.txt
+"""
+
+import argparse
+import itertools
+import os
+import subprocess
+import sys
+
+parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
+parser.add_argument('--rev', default='HEAD', help='generate renames up to this revision')
+
+args = parser.parse_args()
+
+def normalize(path):
+ prefix = 'Documentation/'
+ suffix = '.rst'
+
+ assert path.startswith(prefix)
+ assert path.endswith(suffix)
+
+ return path[len(prefix):-len(suffix)]
+
+class Name(object):
+ def __init__(self, name):
+ self.names = [name]
+
+ def rename(self, new_name):
+ self.names.append(new_name)
+
+names = {
+}
+
+for line in subprocess.check_output([
+ 'git', 'log',
+ '--reverse',
+ '--oneline',
+ '--find-renames',
+ '--diff-filter=RD',
+ '--name-status',
+ '--format=commit %H',
+ # ~v4.8-ish is when Sphinx/.rst was added in the first place
+ f'v4.8..{args.rev}',
+ '--',
+ 'Documentation/'
+], text=True).splitlines():
+ # rename
+ if line.startswith('R'):
+ _, old, new = line[1:].split('\t', 2)
+
+ if old.endswith('.rst') and new.endswith('.rst'):
+ old = normalize(old)
+ new = normalize(new)
+
+ name = names.get(old)
+ if name is None:
+ name = Name(old)
+ else:
+ del names[old]
+
+ name.rename(new)
+ names[new] = name
+
+ continue
+
+ # delete
+ if line.startswith('D'):
+ _, old = line.split('\t', 1)
+
+ if old.endswith('.rst'):
+ old = normalize(old)
+
+ # TODO: we could save added/modified files as well and propose
+ # them as alternatives
+ name = names.get(old)
+ if name is None:
+ pass
+ else:
+ del names[old]
+
+ continue
+
+#
+# Get the set of current files so we can sanity check that we aren't
+# redirecting any of those
+#
+
+current_files = set()
+for line in subprocess.check_output([
+ 'git', 'ls-tree',
+ '-r',
+ '--name-only',
+ args.rev,
+ 'Documentation/',
+], text=True).splitlines():
+ if line.endswith('.rst'):
+ current_files.add(normalize(line))
+
+#
+# Format/group/output result
+#
+
+result = []
+for _, v in names.items():
+ old_names = v.names[:-1]
+ new_name = v.names[-1]
+
+ for old_name in old_names:
+ if old_name == new_name:
+ # A file was renamed to its new name twice; don't redirect that
+ continue
+
+ if old_name in current_files:
+ # A file was recreated with a former name; don't redirect those
+ continue
+
+ result.append((old_name, new_name))
+
+for old_name, new_name in sorted(result):
+ print(f"{old_name} {new_name}")
diff --git a/tools/docs/lib/__init__.py b/tools/docs/lib/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
--- /dev/null
+++ b/tools/docs/lib/__init__.py
diff --git a/tools/docs/lib/enrich_formatter.py b/tools/docs/lib/enrich_formatter.py
new file mode 100644
index 000000000000..bb171567a4ca
--- /dev/null
+++ b/tools/docs/lib/enrich_formatter.py
@@ -0,0 +1,70 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2025 by Mauro Carvalho Chehab <mchehab@kernel.org>.
+
+"""
+Ancillary argparse HelpFormatter class that works in a similar way to
+argparse.RawDescriptionHelpFormatter, i.e. the description maintains line
+breaks, but it also implements transformations to the help text. The
+actual transformations are given by enrich_text(), if the output is a tty.
+
+Currently, the following transformations are done:
+
+ - Positional arguments are shown in upper case;
+ - if the output is a TTY, ``var`` and positional arguments are shown prepended
+ by an ANSI SGR code. This is usually rendered as bold. On some
+ terminals, like konsole, this is rendered as colored bold text.
+"""
+
+import argparse
+import re
+import sys
+
+class EnrichFormatter(argparse.HelpFormatter):
+ """
+ Better format the output, making it easier to identify the positional args
+ and how they're used in the __doc__ description.
+ """
+ def __init__(self, *args, **kwargs):
+ """Initialize class and check if is TTY"""
+ super().__init__(*args, **kwargs)
+ self._tty = sys.stdout.isatty()
+
+ def enrich_text(self, text):
+ """Handle ReST markups (currently, only ``foo``)"""
+ if self._tty and text:
+ # Replace ``text`` with ANSI SGR (bold)
+ return re.sub(r'\`\`(.+?)\`\`',
+ lambda m: f'\033[1m{m.group(1)}\033[0m', text)
+ return text
+
+ def _fill_text(self, text, width, indent):
+ """Enrich descriptions with markups on it"""
+ enriched = self.enrich_text(text)
+ return "\n".join(indent + line for line in enriched.splitlines())
+
+ def _format_usage(self, usage, actions, groups, prefix):
+ """Enrich positional arguments at usage: line"""
+
+ prog = self._prog
+ parts = []
+
+ for action in actions:
+ if action.option_strings:
+ opt = action.option_strings[0]
+ if action.nargs != 0:
+ opt += f" {action.dest.upper()}"
+ parts.append(f"[{opt}]")
+ else:
+ # Positional argument
+ parts.append(self.enrich_text(f"``{action.dest.upper()}``"))
+
+ usage_text = f"{prefix or 'usage: '} {prog} {' '.join(parts)}\n"
+ return usage_text
+
+ def _format_action_invocation(self, action):
+ """Enrich argument names"""
+ if not action.option_strings:
+ return self.enrich_text(f"``{action.dest.upper()}``")
+
+ return ", ".join(action.option_strings)
diff --git a/tools/docs/lib/parse_data_structs.py b/tools/docs/lib/parse_data_structs.py
new file mode 100755
index 000000000000..a5aa2e182052
--- /dev/null
+++ b/tools/docs/lib/parse_data_structs.py
@@ -0,0 +1,452 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2016-2025 by Mauro Carvalho Chehab <mchehab@kernel.org>.
+# pylint: disable=R0912,R0915
+
+"""
+Parse a source file or header, creating ReStructuredText cross-references.
+
+It accepts an optional file to change the default symbol reference or to
+suppress symbols from the output.
+
+It is capable of identifying defines, functions, structs, typedefs,
+enums and enum symbols, and of creating cross-references for all of them.
+It is also capable of distinguishing a #define used for specifying a Linux
+ioctl.
+
+The optional rules file contains a set of rules like:
+
+ ignore ioctl VIDIOC_ENUM_FMT
+ replace ioctl VIDIOC_DQBUF vidioc_qbuf
+ replace define V4L2_EVENT_MD_FL_HAVE_FRAME_SEQ :c:type:`v4l2_event_motion_det`
+"""
+
+import os
+import re
+import sys
+
+
+class ParseDataStructs:
+ """
+ Creates an enriched version of a Kernel header file with cross-links
+ to each C data structure type.
+
+    It is meant to allow a more comprehensive documentation, where
+    uAPI headers gain cross-reference links to the code.
+
+    It is capable of identifying defines, functions, structs, typedefs,
+    enums and enum symbols, and of creating cross-references for all of them.
+    It is also capable of distinguishing a #define used for specifying a Linux
+    ioctl.
+
+    By default, it creates rules for all symbols and defines, but it also
+    allows parsing an exceptions file. Such a file contains a set of rules
+    using the syntax below:
+
+ 1. Ignore rules:
+
+       ignore <type> <symbol>
+
+ Removes the symbol from reference generation.
+
+ 2. Replace rules:
+
+ replace <type> <old_symbol> <new_reference>
+
+    Replaces how old_symbol is referenced, pointing it to new_reference. The new_reference can be:
+ - A simple symbol name;
+ - A full Sphinx reference.
+
+    In both cases, <type> can be:
+    - ioctl: for defines whose value starts with _IO*, i.e. ioctl definitions
+ - define: for other defines
+ - symbol: for symbols defined within enums;
+ - typedef: for typedefs;
+ - enum: for the name of a non-anonymous enum;
+ - struct: for structs.
+
+ Examples:
+
+ ignore define __LINUX_MEDIA_H
+ ignore ioctl VIDIOC_ENUM_FMT
+ replace ioctl VIDIOC_DQBUF vidioc_qbuf
+ replace define V4L2_EVENT_MD_FL_HAVE_FRAME_SEQ :c:type:`v4l2_event_motion_det`
+ """
+
+ # Parser regexes with multiple ways to capture enums and structs
+ RE_ENUMS = [
+ re.compile(r"^\s*enum\s+([\w_]+)\s*\{"),
+ re.compile(r"^\s*enum\s+([\w_]+)\s*$"),
+ re.compile(r"^\s*typedef\s*enum\s+([\w_]+)\s*\{"),
+ re.compile(r"^\s*typedef\s*enum\s+([\w_]+)\s*$"),
+ ]
+ RE_STRUCTS = [
+ re.compile(r"^\s*struct\s+([_\w][\w\d_]+)\s*\{"),
+ re.compile(r"^\s*struct\s+([_\w][\w\d_]+)$"),
+ re.compile(r"^\s*typedef\s*struct\s+([_\w][\w\d_]+)\s*\{"),
+ re.compile(r"^\s*typedef\s*struct\s+([_\w][\w\d_]+)$"),
+ ]
+
+    # FIXME: the original code was written a long time before the Sphinx C
+    # domain gained support for multiple namespaces. To avoid too much churn
+    # in the existing hyperlinks, the code kept using "c:type" instead of the
+    # right types. To change that, we need to change the types not only
+    # here, but also in the uAPI media documentation.
+ DEF_SYMBOL_TYPES = {
+ "ioctl": {
+ "prefix": "\\ ",
+ "suffix": "\\ ",
+ "ref_type": ":ref",
+ "description": "IOCTL Commands",
+ },
+ "define": {
+ "prefix": "\\ ",
+ "suffix": "\\ ",
+ "ref_type": ":ref",
+ "description": "Macros and Definitions",
+ },
+        # We call each definition inside an enum a "symbol"
+ "symbol": {
+ "prefix": "\\ ",
+ "suffix": "\\ ",
+ "ref_type": ":ref",
+ "description": "Enumeration values",
+ },
+ "typedef": {
+ "prefix": "\\ ",
+ "suffix": "\\ ",
+ "ref_type": ":c:type",
+ "description": "Type Definitions",
+ },
+ # This is the description of the enum itself
+ "enum": {
+ "prefix": "\\ ",
+ "suffix": "\\ ",
+ "ref_type": ":c:type",
+ "description": "Enumerations",
+ },
+ "struct": {
+ "prefix": "\\ ",
+ "suffix": "\\ ",
+ "ref_type": ":c:type",
+ "description": "Structures",
+ },
+ }
+
+ def __init__(self, debug: bool = False):
+ """Initialize internal vars"""
+ self.debug = debug
+ self.data = ""
+
+ self.symbols = {}
+
+ for symbol_type in self.DEF_SYMBOL_TYPES:
+ self.symbols[symbol_type] = {}
+
+ def store_type(self, symbol_type: str, symbol: str,
+ ref_name: str = None, replace_underscores: bool = True):
+ """
+ Stores a new symbol at self.symbols under symbol_type.
+
+ By default, underscores are replaced by "-"
+ """
+ defs = self.DEF_SYMBOL_TYPES[symbol_type]
+
+ prefix = defs.get("prefix", "")
+ suffix = defs.get("suffix", "")
+ ref_type = defs.get("ref_type")
+
+ # Determine ref_link based on symbol type
+ if ref_type:
+ if symbol_type == "enum":
+ ref_link = f"{ref_type}:`{symbol}`"
+ else:
+ if not ref_name:
+ ref_name = symbol.lower()
+
+ # c-type references don't support hash
+ if ref_type == ":ref" and replace_underscores:
+ ref_name = ref_name.replace("_", "-")
+
+ ref_link = f"{ref_type}:`{symbol} <{ref_name}>`"
+ else:
+ ref_link = symbol
+
+ self.symbols[symbol_type][symbol] = f"{prefix}{ref_link}{suffix}"
+
+ def store_line(self, line):
+ """Stores a line at self.data, properly indented"""
+ line = " " + line.expandtabs()
+ self.data += line.rstrip(" ")
+
+ def parse_file(self, file_in: str):
+ """Reads a C source file and get identifiers"""
+ self.data = ""
+ is_enum = False
+ is_comment = False
+ multiline = ""
+
+ with open(file_in, "r",
+ encoding="utf-8", errors="backslashreplace") as f:
+ for line_no, line in enumerate(f):
+ self.store_line(line)
+ line = line.strip("\n")
+
+ # Handle continuation lines
+                if line.endswith("\\"):
+                    multiline += line[:-1]
+ continue
+
+ if multiline:
+ line = multiline + line
+ multiline = ""
+
+                # Handle comments. They can span multiple lines
+ if not is_comment:
+ if re.search(r"/\*.*", line):
+ is_comment = True
+ else:
+ # Strip C99-style comments
+ line = re.sub(r"(//.*)", "", line)
+
+ if is_comment:
+ if re.search(r".*\*/", line):
+ is_comment = False
+ else:
+ multiline = line
+ continue
+
+                # At this point, the line variable may hold a multi-line
+                # statement, if lines end with \ or contain multi-line comments.
+                # With that, we can safely remove entire comments, and there's
+                # no need to use re.DOTALL for the logic below.
+
+ line = re.sub(r"(/\*.*\*/)", "", line)
+ if not line.strip():
+ continue
+
+ # It can be useful for debug purposes to print the file after
+ # having comments stripped and multi-lines grouped.
+ if self.debug > 1:
+ print(f"line {line_no + 1}: {line}")
+
+ # Now the fun begins: parse each type and store it.
+
+                # We opted for a two-pass parsing logic here because:
+                # 1. it makes it easier to debug symbols that were not parsed;
+                # 2. we want symbol replacement over the entire content, not
+                #    just where the symbol is detected.
+
+ if is_enum:
+ match = re.match(r"^\s*([_\w][\w\d_]+)\s*[\,=]?", line)
+ if match:
+ self.store_type("symbol", match.group(1))
+ if "}" in line:
+ is_enum = False
+ continue
+
+ match = re.match(r"^\s*#\s*define\s+([\w_]+)\s+_IO", line)
+ if match:
+ self.store_type("ioctl", match.group(1),
+ replace_underscores=False)
+ continue
+
+ match = re.match(r"^\s*#\s*define\s+([\w_]+)(\s+|$)", line)
+ if match:
+ self.store_type("define", match.group(1))
+ continue
+
+ match = re.match(r"^\s*typedef\s+([_\w][\w\d_]+)\s+(.*)\s+([_\w][\w\d_]+);",
+ line)
+ if match:
+ name = match.group(2).strip()
+ symbol = match.group(3)
+ self.store_type("typedef", symbol, ref_name=name)
+ continue
+
+ for re_enum in self.RE_ENUMS:
+ match = re_enum.match(line)
+ if match:
+ self.store_type("enum", match.group(1))
+ is_enum = True
+ break
+
+ for re_struct in self.RE_STRUCTS:
+ match = re_struct.match(line)
+ if match:
+ self.store_type("struct", match.group(1))
+ break
+
+ def process_exceptions(self, fname: str):
+ """
+ Process exceptions file with rules to ignore or replace references.
+ """
+ if not fname:
+ return
+
+ name = os.path.basename(fname)
+
+ with open(fname, "r", encoding="utf-8", errors="backslashreplace") as f:
+ for ln, line in enumerate(f):
+ ln += 1
+ line = line.strip()
+ if not line or line.startswith("#"):
+ continue
+
+ # Handle ignore rules
+ match = re.match(r"^ignore\s+(\w+)\s+(\S+)", line)
+ if match:
+ c_type = match.group(1)
+ symbol = match.group(2)
+
+ if c_type not in self.DEF_SYMBOL_TYPES:
+ sys.exit(f"{name}:{ln}: {c_type} is invalid")
+
+ d = self.symbols[c_type]
+ if symbol in d:
+ del d[symbol]
+
+ continue
+
+ # Handle replace rules
+ match = re.match(r"^replace\s+(\S+)\s+(\S+)\s+(\S+)", line)
+ if not match:
+ sys.exit(f"{name}:{ln}: invalid line: {line}")
+
+ c_type, old, new = match.groups()
+
+ if c_type not in self.DEF_SYMBOL_TYPES:
+ sys.exit(f"{name}:{ln}: {c_type} is invalid")
+
+ reftype = None
+
+ # Parse reference type when the type is specified
+
+ match = re.match(r"^\:c\:(data|func|macro|type)\:\`(.+)\`", new)
+ if match:
+ reftype = f":c:{match.group(1)}"
+ new = match.group(2)
+ else:
+ match = re.search(r"(\:ref)\:\`(.+)\`", new)
+ if match:
+ reftype = match.group(1)
+ new = match.group(2)
+
+ # If the replacement rule doesn't have a type, get default
+ if not reftype:
+ reftype = self.DEF_SYMBOL_TYPES[c_type].get("ref_type")
+ if not reftype:
+ reftype = self.DEF_SYMBOL_TYPES[c_type].get("real_type")
+
+ new_ref = f"{reftype}:`{old} <{new}>`"
+
+ # Change self.symbols to use the replacement rule
+ if old in self.symbols[c_type]:
+ self.symbols[c_type][old] = new_ref
+ else:
+ print(f"{name}:{ln}: Warning: can't find {old} {c_type}")
+
+ def debug_print(self):
+ """
+ Print debug information containing the replacement rules per symbol.
+        To make it easier to check, they are grouped per type.
+ """
+ if not self.debug:
+ return
+
+ for c_type, refs in self.symbols.items():
+ if not refs: # Skip empty dictionaries
+ continue
+
+ print(f"{c_type}:")
+
+ for symbol, ref in sorted(refs.items()):
+ print(f" {symbol} -> {ref}")
+
+ print()
+
+ def gen_output(self):
+ """Write the formatted output to a file."""
+
+ # Avoid extra blank lines
+ text = re.sub(r"\s+$", "", self.data) + "\n"
+ text = re.sub(r"\n\s+\n", "\n\n", text)
+
+ # Escape Sphinx special characters
+ text = re.sub(r"([\_\`\*\<\>\&\\\\:\/\|\%\$\#\{\}\~\^])", r"\\\1", text)
+
+ # Source uAPI files may have special notes. Use bold font for them
+ text = re.sub(r"DEPRECATED", "**DEPRECATED**", text)
+
+ # Delimiters to catch the entire symbol after escaped
+ start_delim = r"([ \n\t\(=\*\@])"
+ end_delim = r"(\s|,|\\=|\\:|\;|\)|\}|\{)"
+
+ # Process all reference types
+ for ref_dict in self.symbols.values():
+ for symbol, replacement in ref_dict.items():
+ symbol = re.escape(re.sub(r"([\_\`\*\<\>\&\\\\:\/])", r"\\\1", symbol))
+ text = re.sub(fr'{start_delim}{symbol}{end_delim}',
+ fr'\1{replacement}\2', text)
+
+ # Remove "\ " where not needed: before spaces and at the end of lines
+ text = re.sub(r"\\ ([\n ])", r"\1", text)
+ text = re.sub(r" \\ ", " ", text)
+
+ return text
+
+ def gen_toc(self):
+ """
+ Create a TOC table pointing to each symbol from the header
+ """
+ text = []
+
+ # Add header
+ text.append(".. contents:: Table of Contents")
+ text.append(" :depth: 2")
+ text.append(" :local:")
+ text.append("")
+
+ # Sort symbol types per description
+ symbol_descriptions = []
+ for k, v in self.DEF_SYMBOL_TYPES.items():
+ symbol_descriptions.append((v['description'], k))
+
+ symbol_descriptions.sort()
+
+ # Process each category
+ for description, c_type in symbol_descriptions:
+
+ refs = self.symbols[c_type]
+ if not refs: # Skip empty categories
+ continue
+
+ text.append(f"{description}")
+ text.append("-" * len(description))
+ text.append("")
+
+ # Sort symbols alphabetically
+ for symbol, ref in sorted(refs.items()):
+ text.append(f"* :{ref}:")
+
+ text.append("") # Add empty line between categories
+
+ return "\n".join(text)
+
+ def write_output(self, file_in: str, file_out: str, toc: bool):
+ title = os.path.basename(file_in)
+
+ if toc:
+ text = self.gen_toc()
+ else:
+ text = self.gen_output()
+
+ with open(file_out, "w", encoding="utf-8", errors="backslashreplace") as f:
+ f.write(".. -*- coding: utf-8; mode: rst -*-\n\n")
+ f.write(f"{title}\n")
+ f.write("=" * len(title) + "\n\n")
+
+ if not toc:
+ f.write(".. parsed-literal::\n\n")
+
+ f.write(text)
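For illustration, a minimal sketch of the intended pipeline around
ParseDataStructs, mirroring what tools/docs/parse-headers.py (added below)
does; the header and rules file names are only placeholders:

    from lib.parse_data_structs import ParseDataStructs

    parser = ParseDataStructs()
    parser.parse_file("videodev2.h")                      # collect symbols and raw text
    parser.process_exceptions("videodev2.h.exceptions")   # optional ignore/replace rules
    parser.debug_print()                                  # no-op unless debug is set

    # Write the cross-referenced literal block, or a TOC variant of it
    parser.write_output("videodev2.h", "videodev2.h.rst", toc=False)
    parser.write_output("videodev2.h", "videodev2.h.toc.rst", toc=True)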
diff --git a/tools/docs/parse-headers.py b/tools/docs/parse-headers.py
new file mode 100755
index 000000000000..bfa4e46a53e3
--- /dev/null
+++ b/tools/docs/parse-headers.py
@@ -0,0 +1,60 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (c) 2016, 2025 by Mauro Carvalho Chehab <mchehab@kernel.org>.
+# pylint: disable=C0103
+
+"""
+Convert a C header or source file ``FILE_IN`` into a ReStructuredText file,
+included via a ``.. parsed-literal::`` block, with cross-references for the
+documentation files that describe the API. It accepts an optional
+``FILE_RULES`` file that describes which elements will be either ignored or
+pointed to a non-default reference type/name.
+
+The output is written to ``FILE_OUT``.
+
+It is capable of identifying defines, functions, structs, typedefs,
+enums and enum symbols, and of creating cross-references for all of them.
+It is also capable of distinguishing a #define used for specifying a Linux
+ioctl.
+
+The optional ``FILE_RULES`` contains a set of rules like:
+
+ ignore ioctl VIDIOC_ENUM_FMT
+ replace ioctl VIDIOC_DQBUF vidioc_qbuf
+ replace define V4L2_EVENT_MD_FL_HAVE_FRAME_SEQ :c:type:`v4l2_event_motion_det`
+"""
+
+import argparse
+
+from lib.parse_data_structs import ParseDataStructs
+from lib.enrich_formatter import EnrichFormatter
+
+def main():
+ """Main function"""
+ parser = argparse.ArgumentParser(description=__doc__,
+ formatter_class=EnrichFormatter)
+
+ parser.add_argument("-d", "--debug", action="count", default=0,
+ help="Increase debug level. Can be used multiple times")
+ parser.add_argument("-t", "--toc", action="store_true",
+                        help="instead of a literal block, output a TOC table in the RST file")
+
+ parser.add_argument("file_in", help="Input C file")
+ parser.add_argument("file_out", help="Output RST file")
+ parser.add_argument("file_rules", nargs="?",
+ help="Exceptions file (optional)")
+
+ args = parser.parse_args()
+
+ parser = ParseDataStructs(debug=args.debug)
+ parser.parse_file(args.file_in)
+
+ if args.file_rules:
+ parser.process_exceptions(args.file_rules)
+
+ parser.debug_print()
+ parser.write_output(args.file_in, args.file_out, args.toc)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/tools/edid/1024x768.S b/tools/edid/1024x768.S
deleted file mode 100644
index 4aed3f9ab88a..000000000000
--- a/tools/edid/1024x768.S
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- 1024x768.S: EDID data set for standard 1024x768 60 Hz monitor
-
- Copyright (C) 2011 Carsten Emde <C.Emde@osadl.org>
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2
- of the License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
-*/
-
-/* EDID */
-#define VERSION 1
-#define REVISION 3
-
-/* Display */
-#define CLOCK 65000 /* kHz */
-#define XPIX 1024
-#define YPIX 768
-#define XY_RATIO XY_RATIO_4_3
-#define XBLANK 320
-#define YBLANK 38
-#define XOFFSET 8
-#define XPULSE 144
-#define YOFFSET 3
-#define YPULSE 6
-#define DPI 72
-#define VFREQ 60 /* Hz */
-#define TIMING_NAME "Linux XGA"
-#define ESTABLISHED_TIMING2_BITS 0x08 /* Bit 3 -> 1024x768 @60 Hz */
-#define HSYNC_POL 0
-#define VSYNC_POL 0
-
-#include "edid.S"
diff --git a/tools/edid/1280x1024.S b/tools/edid/1280x1024.S
deleted file mode 100644
index b26dd424cad7..000000000000
--- a/tools/edid/1280x1024.S
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- 1280x1024.S: EDID data set for standard 1280x1024 60 Hz monitor
-
- Copyright (C) 2011 Carsten Emde <C.Emde@osadl.org>
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2
- of the License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
-*/
-
-/* EDID */
-#define VERSION 1
-#define REVISION 3
-
-/* Display */
-#define CLOCK 108000 /* kHz */
-#define XPIX 1280
-#define YPIX 1024
-#define XY_RATIO XY_RATIO_5_4
-#define XBLANK 408
-#define YBLANK 42
-#define XOFFSET 48
-#define XPULSE 112
-#define YOFFSET 1
-#define YPULSE 3
-#define DPI 72
-#define VFREQ 60 /* Hz */
-#define TIMING_NAME "Linux SXGA"
-/* No ESTABLISHED_TIMINGx_BITS */
-#define HSYNC_POL 1
-#define VSYNC_POL 1
-
-#include "edid.S"
diff --git a/tools/edid/1600x1200.S b/tools/edid/1600x1200.S
deleted file mode 100644
index 0d091b282768..000000000000
--- a/tools/edid/1600x1200.S
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- 1600x1200.S: EDID data set for standard 1600x1200 60 Hz monitor
-
- Copyright (C) 2013 Carsten Emde <C.Emde@osadl.org>
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2
- of the License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
-*/
-
-/* EDID */
-#define VERSION 1
-#define REVISION 3
-
-/* Display */
-#define CLOCK 162000 /* kHz */
-#define XPIX 1600
-#define YPIX 1200
-#define XY_RATIO XY_RATIO_4_3
-#define XBLANK 560
-#define YBLANK 50
-#define XOFFSET 64
-#define XPULSE 192
-#define YOFFSET 1
-#define YPULSE 3
-#define DPI 72
-#define VFREQ 60 /* Hz */
-#define TIMING_NAME "Linux UXGA"
-/* No ESTABLISHED_TIMINGx_BITS */
-#define HSYNC_POL 1
-#define VSYNC_POL 1
-
-#include "edid.S"
diff --git a/tools/edid/1680x1050.S b/tools/edid/1680x1050.S
deleted file mode 100644
index 7dfed9a33eab..000000000000
--- a/tools/edid/1680x1050.S
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- 1680x1050.S: EDID data set for standard 1680x1050 60 Hz monitor
-
- Copyright (C) 2012 Carsten Emde <C.Emde@osadl.org>
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2
- of the License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
-*/
-
-/* EDID */
-#define VERSION 1
-#define REVISION 3
-
-/* Display */
-#define CLOCK 146250 /* kHz */
-#define XPIX 1680
-#define YPIX 1050
-#define XY_RATIO XY_RATIO_16_10
-#define XBLANK 560
-#define YBLANK 39
-#define XOFFSET 104
-#define XPULSE 176
-#define YOFFSET 3
-#define YPULSE 6
-#define DPI 96
-#define VFREQ 60 /* Hz */
-#define TIMING_NAME "Linux WSXGA"
-/* No ESTABLISHED_TIMINGx_BITS */
-#define HSYNC_POL 1
-#define VSYNC_POL 1
-
-#include "edid.S"
diff --git a/tools/edid/1920x1080.S b/tools/edid/1920x1080.S
deleted file mode 100644
index d6ffbba28e95..000000000000
--- a/tools/edid/1920x1080.S
+++ /dev/null
@@ -1,43 +0,0 @@
-/*
- 1920x1080.S: EDID data set for standard 1920x1080 60 Hz monitor
-
- Copyright (C) 2012 Carsten Emde <C.Emde@osadl.org>
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2
- of the License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
-*/
-
-/* EDID */
-#define VERSION 1
-#define REVISION 3
-
-/* Display */
-#define CLOCK 148500 /* kHz */
-#define XPIX 1920
-#define YPIX 1080
-#define XY_RATIO XY_RATIO_16_9
-#define XBLANK 280
-#define YBLANK 45
-#define XOFFSET 88
-#define XPULSE 44
-#define YOFFSET 4
-#define YPULSE 5
-#define DPI 96
-#define VFREQ 60 /* Hz */
-#define TIMING_NAME "Linux FHD"
-/* No ESTABLISHED_TIMINGx_BITS */
-#define HSYNC_POL 1
-#define VSYNC_POL 1
-
-#include "edid.S"
diff --git a/tools/edid/800x600.S b/tools/edid/800x600.S
deleted file mode 100644
index a5616588de08..000000000000
--- a/tools/edid/800x600.S
+++ /dev/null
@@ -1,40 +0,0 @@
-/*
- 800x600.S: EDID data set for standard 800x600 60 Hz monitor
-
- Copyright (C) 2011 Carsten Emde <C.Emde@osadl.org>
- Copyright (C) 2014 Linaro Limited
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2
- of the License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-*/
-
-/* EDID */
-#define VERSION 1
-#define REVISION 3
-
-/* Display */
-#define CLOCK 40000 /* kHz */
-#define XPIX 800
-#define YPIX 600
-#define XY_RATIO XY_RATIO_4_3
-#define XBLANK 256
-#define YBLANK 28
-#define XOFFSET 40
-#define XPULSE 128
-#define YOFFSET 1
-#define YPULSE 4
-#define DPI 72
-#define VFREQ 60 /* Hz */
-#define TIMING_NAME "Linux SVGA"
-#define ESTABLISHED_TIMING1_BITS 0x01 /* Bit 0: 800x600 @ 60Hz */
-#define HSYNC_POL 1
-#define VSYNC_POL 1
-
-#include "edid.S"
diff --git a/tools/edid/Makefile b/tools/edid/Makefile
deleted file mode 100644
index 85a927dfab02..000000000000
--- a/tools/edid/Makefile
+++ /dev/null
@@ -1,37 +0,0 @@
-
-SOURCES := $(wildcard [0-9]*x[0-9]*.S)
-
-BIN := $(patsubst %.S, %.bin, $(SOURCES))
-
-IHEX := $(patsubst %.S, %.bin.ihex, $(SOURCES))
-
-CODE := $(patsubst %.S, %.c, $(SOURCES))
-
-all: $(BIN) $(IHEX) $(CODE)
-
-clean:
- @rm -f *.o *.bin.ihex *.bin *.c
-
-%.o: %.S
- @cc -c $^
-
-%.bin.nocrc: %.o
- @objcopy -Obinary $^ $@
-
-%.crc: %.bin.nocrc
- @list=$$(for i in `seq 1 127`; do head -c$$i $^ | tail -c1 \
- | hexdump -v -e '/1 "%02X+"'; done); \
- echo "ibase=16;100-($${list%?})%100" | bc >$@
-
-%.p: %.crc %.S
- @cc -c -DCRC="$$(cat $*.crc)" -o $@ $*.S
-
-%.bin: %.p
- @objcopy -Obinary $^ $@
-
-%.bin.ihex: %.p
- @objcopy -Oihex $^ $@
- @dos2unix $@ 2>/dev/null
-
-%.c: %.bin
- @echo "{" >$@; hexdump -f hex $^ >>$@; echo "};" >>$@
diff --git a/tools/edid/edid.S b/tools/edid/edid.S
deleted file mode 100644
index c3d13815526d..000000000000
--- a/tools/edid/edid.S
+++ /dev/null
@@ -1,274 +0,0 @@
-/*
- edid.S: EDID data template
-
- Copyright (C) 2012 Carsten Emde <C.Emde@osadl.org>
-
- This program is free software; you can redistribute it and/or
- modify it under the terms of the GNU General Public License
- as published by the Free Software Foundation; either version 2
- of the License, or (at your option) any later version.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
-*/
-
-
-/* Manufacturer */
-#define MFG_LNX1 'L'
-#define MFG_LNX2 'N'
-#define MFG_LNX3 'X'
-#define SERIAL 0
-#define YEAR 2012
-#define WEEK 5
-
-/* EDID 1.3 standard definitions */
-#define XY_RATIO_16_10 0b00
-#define XY_RATIO_4_3 0b01
-#define XY_RATIO_5_4 0b10
-#define XY_RATIO_16_9 0b11
-
-/* Provide defaults for the timing bits */
-#ifndef ESTABLISHED_TIMING1_BITS
-#define ESTABLISHED_TIMING1_BITS 0x00
-#endif
-#ifndef ESTABLISHED_TIMING2_BITS
-#define ESTABLISHED_TIMING2_BITS 0x00
-#endif
-#ifndef ESTABLISHED_TIMING3_BITS
-#define ESTABLISHED_TIMING3_BITS 0x00
-#endif
-
-#define mfgname2id(v1,v2,v3) \
- ((((v1-'@')&0x1f)<<10)+(((v2-'@')&0x1f)<<5)+((v3-'@')&0x1f))
-#define swap16(v1) ((v1>>8)+((v1&0xff)<<8))
-#define lsbs2(v1,v2) (((v1&0x0f)<<4)+(v2&0x0f))
-#define msbs2(v1,v2) ((((v1>>8)&0x0f)<<4)+((v2>>8)&0x0f))
-#define msbs4(v1,v2,v3,v4) \
- ((((v1>>8)&0x03)<<6)+(((v2>>8)&0x03)<<4)+\
- (((v3>>4)&0x03)<<2)+((v4>>4)&0x03))
-#define pixdpi2mm(pix,dpi) ((pix*25)/dpi)
-#define xsize pixdpi2mm(XPIX,DPI)
-#define ysize pixdpi2mm(YPIX,DPI)
-
- .data
-
-/* Fixed header pattern */
-header: .byte 0x00,0xff,0xff,0xff,0xff,0xff,0xff,0x00
-
-mfg_id: .hword swap16(mfgname2id(MFG_LNX1, MFG_LNX2, MFG_LNX3))
-
-prod_code: .hword 0
-
-/* Serial number. 32 bits, little endian. */
-serial_number: .long SERIAL
-
-/* Week of manufacture */
-week: .byte WEEK
-
-/* Year of manufacture, less 1990. (1990-2245)
- If week=255, it is the model year instead */
-year: .byte YEAR-1990
-
-version: .byte VERSION /* EDID version, usually 1 (for 1.3) */
-revision: .byte REVISION /* EDID revision, usually 3 (for 1.3) */
-
-/* If Bit 7=1 Digital input. If set, the following bit definitions apply:
- Bits 6-1 Reserved, must be 0
- Bit 0 Signal is compatible with VESA DFP 1.x TMDS CRGB,
- 1 pixel per clock, up to 8 bits per color, MSB aligned,
- If Bit 7=0 Analog input. If clear, the following bit definitions apply:
- Bits 6-5 Video white and sync levels, relative to blank
- 00=+0.7/-0.3 V; 01=+0.714/-0.286 V;
- 10=+1.0/-0.4 V; 11=+0.7/0 V
- Bit 4 Blank-to-black setup (pedestal) expected
- Bit 3 Separate sync supported
- Bit 2 Composite sync (on HSync) supported
- Bit 1 Sync on green supported
- Bit 0 VSync pulse must be serrated when somposite or
- sync-on-green is used. */
-video_parms: .byte 0x6d
-
-/* Maximum horizontal image size, in centimetres
- (max 292 cm/115 in at 16:9 aspect ratio) */
-max_hor_size: .byte xsize/10
-
-/* Maximum vertical image size, in centimetres.
- If either byte is 0, undefined (e.g. projector) */
-max_vert_size: .byte ysize/10
-
-/* Display gamma, minus 1, times 100 (range 1.00-3.5 */
-gamma: .byte 120
-
-/* Bit 7 DPMS standby supported
- Bit 6 DPMS suspend supported
- Bit 5 DPMS active-off supported
- Bits 4-3 Display type: 00=monochrome; 01=RGB colour;
- 10=non-RGB multicolour; 11=undefined
- Bit 2 Standard sRGB colour space. Bytes 25-34 must contain
- sRGB standard values.
- Bit 1 Preferred timing mode specified in descriptor block 1.
- Bit 0 GTF supported with default parameter values. */
-dsp_features: .byte 0xea
-
-/* Chromaticity coordinates. */
-/* Red and green least-significant bits
- Bits 7-6 Red x value least-significant 2 bits
- Bits 5-4 Red y value least-significant 2 bits
- Bits 3-2 Green x value lst-significant 2 bits
- Bits 1-0 Green y value least-significant 2 bits */
-red_green_lsb: .byte 0x5e
-
-/* Blue and white least-significant 2 bits */
-blue_white_lsb: .byte 0xc0
-
-/* Red x value most significant 8 bits.
- 0-255 encodes 0-0.996 (255/256); 0-0.999 (1023/1024) with lsbits */
-red_x_msb: .byte 0xa4
-
-/* Red y value most significant 8 bits */
-red_y_msb: .byte 0x59
-
-/* Green x and y value most significant 8 bits */
-green_x_y_msb: .byte 0x4a,0x98
-
-/* Blue x and y value most significant 8 bits */
-blue_x_y_msb: .byte 0x25,0x20
-
-/* Default white point x and y value most significant 8 bits */
-white_x_y_msb: .byte 0x50,0x54
-
-/* Established timings */
-/* Bit 7 720x400 @ 70 Hz
- Bit 6 720x400 @ 88 Hz
- Bit 5 640x480 @ 60 Hz
- Bit 4 640x480 @ 67 Hz
- Bit 3 640x480 @ 72 Hz
- Bit 2 640x480 @ 75 Hz
- Bit 1 800x600 @ 56 Hz
- Bit 0 800x600 @ 60 Hz */
-estbl_timing1: .byte ESTABLISHED_TIMING1_BITS
-
-/* Bit 7 800x600 @ 72 Hz
- Bit 6 800x600 @ 75 Hz
- Bit 5 832x624 @ 75 Hz
- Bit 4 1024x768 @ 87 Hz, interlaced (1024x768)
- Bit 3 1024x768 @ 60 Hz
- Bit 2 1024x768 @ 72 Hz
- Bit 1 1024x768 @ 75 Hz
- Bit 0 1280x1024 @ 75 Hz */
-estbl_timing2: .byte ESTABLISHED_TIMING2_BITS
-
-/* Bit 7 1152x870 @ 75 Hz (Apple Macintosh II)
- Bits 6-0 Other manufacturer-specific display mod */
-estbl_timing3: .byte ESTABLISHED_TIMING3_BITS
-
-/* Standard timing */
-/* X resolution, less 31, divided by 8 (256-2288 pixels) */
-std_xres: .byte (XPIX/8)-31
-/* Y resolution, X:Y pixel ratio
- Bits 7-6 X:Y pixel ratio: 00=16:10; 01=4:3; 10=5:4; 11=16:9.
- Bits 5-0 Vertical frequency, less 60 (60-123 Hz) */
-std_vres: .byte (XY_RATIO<<6)+VFREQ-60
- .fill 7,2,0x0101 /* Unused */
-
-descriptor1:
-/* Pixel clock in 10 kHz units. (0.-655.35 MHz, little-endian) */
-clock: .hword CLOCK/10
-
-/* Horizontal active pixels 8 lsbits (0-4095) */
-x_act_lsb: .byte XPIX&0xff
-/* Horizontal blanking pixels 8 lsbits (0-4095)
- End of active to start of next active. */
-x_blk_lsb: .byte XBLANK&0xff
-/* Bits 7-4 Horizontal active pixels 4 msbits
- Bits 3-0 Horizontal blanking pixels 4 msbits */
-x_msbs: .byte msbs2(XPIX,XBLANK)
-
-/* Vertical active lines 8 lsbits (0-4095) */
-y_act_lsb: .byte YPIX&0xff
-/* Vertical blanking lines 8 lsbits (0-4095) */
-y_blk_lsb: .byte YBLANK&0xff
-/* Bits 7-4 Vertical active lines 4 msbits
- Bits 3-0 Vertical blanking lines 4 msbits */
-y_msbs: .byte msbs2(YPIX,YBLANK)
-
-/* Horizontal sync offset pixels 8 lsbits (0-1023) From blanking start */
-x_snc_off_lsb: .byte XOFFSET&0xff
-/* Horizontal sync pulse width pixels 8 lsbits (0-1023) */
-x_snc_pls_lsb: .byte XPULSE&0xff
-/* Bits 7-4 Vertical sync offset lines 4 lsbits (0-63)
- Bits 3-0 Vertical sync pulse width lines 4 lsbits (0-63) */
-y_snc_lsb: .byte lsbs2(YOFFSET, YPULSE)
-/* Bits 7-6 Horizontal sync offset pixels 2 msbits
- Bits 5-4 Horizontal sync pulse width pixels 2 msbits
- Bits 3-2 Vertical sync offset lines 2 msbits
- Bits 1-0 Vertical sync pulse width lines 2 msbits */
-xy_snc_msbs: .byte msbs4(XOFFSET,XPULSE,YOFFSET,YPULSE)
-
-/* Horizontal display size, mm, 8 lsbits (0-4095 mm, 161 in) */
-x_dsp_size: .byte xsize&0xff
-
-/* Vertical display size, mm, 8 lsbits (0-4095 mm, 161 in) */
-y_dsp_size: .byte ysize&0xff
-
-/* Bits 7-4 Horizontal display size, mm, 4 msbits
- Bits 3-0 Vertical display size, mm, 4 msbits */
-dsp_size_mbsb: .byte msbs2(xsize,ysize)
-
-/* Horizontal border pixels (each side; total is twice this) */
-x_border: .byte 0
-/* Vertical border lines (each side; total is twice this) */
-y_border: .byte 0
-
-/* Bit 7 Interlaced
- Bits 6-5 Stereo mode: 00=No stereo; other values depend on bit 0:
- Bit 0=0: 01=Field sequential, sync=1 during right; 10=similar,
- sync=1 during left; 11=4-way interleaved stereo
- Bit 0=1 2-way interleaved stereo: 01=Right image on even lines;
- 10=Left image on even lines; 11=side-by-side
- Bits 4-3 Sync type: 00=Analog composite; 01=Bipolar analog composite;
- 10=Digital composite (on HSync); 11=Digital separate
- Bit 2 If digital separate: Vertical sync polarity (1=positive)
- Other types: VSync serrated (HSync during VSync)
- Bit 1 If analog sync: Sync on all 3 RGB lines (else green only)
- Digital: HSync polarity (1=positive)
- Bit 0 2-way line-interleaved stereo, if bits 4-3 are not 00. */
-features: .byte 0x18+(VSYNC_POL<<2)+(HSYNC_POL<<1)
-
-descriptor2: .byte 0,0 /* Not a detailed timing descriptor */
- .byte 0 /* Must be zero */
- .byte 0xff /* Descriptor is monitor serial number (text) */
- .byte 0 /* Must be zero */
-start1: .ascii "Linux #0"
-end1: .byte 0x0a /* End marker */
- .fill 12-(end1-start1), 1, 0x20 /* Padded spaces */
-descriptor3: .byte 0,0 /* Not a detailed timing descriptor */
- .byte 0 /* Must be zero */
- .byte 0xfd /* Descriptor is monitor range limits */
- .byte 0 /* Must be zero */
-start2: .byte VFREQ-1 /* Minimum vertical field rate (1-255 Hz) */
- .byte VFREQ+1 /* Maximum vertical field rate (1-255 Hz) */
- .byte (CLOCK/(XPIX+XBLANK))-1 /* Minimum horizontal line rate
- (1-255 kHz) */
- .byte (CLOCK/(XPIX+XBLANK))+1 /* Maximum horizontal line rate
- (1-255 kHz) */
- .byte (CLOCK/10000)+1 /* Maximum pixel clock rate, rounded up
- to 10 MHz multiple (10-2550 MHz) */
- .byte 0 /* No extended timing information type */
-end2: .byte 0x0a /* End marker */
- .fill 12-(end2-start2), 1, 0x20 /* Padded spaces */
-descriptor4: .byte 0,0 /* Not a detailed timing descriptor */
- .byte 0 /* Must be zero */
- .byte 0xfc /* Descriptor is text */
- .byte 0 /* Must be zero */
-start3: .ascii TIMING_NAME
-end3: .byte 0x0a /* End marker */
- .fill 12-(end3-start3), 1, 0x20 /* Padded spaces */
-extensions: .byte 0 /* Number of extensions to follow */
-checksum: .byte CRC /* Sum of all bytes must be 0 */
diff --git a/tools/edid/hex b/tools/edid/hex
deleted file mode 100644
index 8873ebb618af..000000000000
--- a/tools/edid/hex
+++ /dev/null
@@ -1 +0,0 @@
-"\t" 8/1 "0x%02x, " "\n"
diff --git a/tools/firewire/decode-fcp.c b/tools/firewire/decode-fcp.c
index b67ebc88434d..f115a3be8d1e 100644
--- a/tools/firewire/decode-fcp.c
+++ b/tools/firewire/decode-fcp.c
@@ -160,7 +160,7 @@ decode_avc(struct link_transaction *t)
name = info->name;
}
- printf("av/c %s, subunit_type=%s, subunit_id=%d, opcode=%s",
+ printf("av/c %s, subunit_type=%s, subunit_id=%u, opcode=%s",
ctype_names[frame->ctype], subunit_type_names[frame->subunit_type],
frame->subunit_id, name);
diff --git a/tools/firewire/nosy-dump.c b/tools/firewire/nosy-dump.c
index 156e0356e814..9a906de3a9ef 100644
--- a/tools/firewire/nosy-dump.c
+++ b/tools/firewire/nosy-dump.c
@@ -771,7 +771,7 @@ print_packet(uint32_t *data, size_t length)
if (pp->phy_config.set_root)
printf(" set_root_id=%02x", pp->phy_config.root_id);
if (pp->phy_config.set_gap_count)
- printf(" set_gap_count=%d", pp->phy_config.gap_count);
+ printf(" set_gap_count=%u", pp->phy_config.gap_count);
}
break;
@@ -781,13 +781,13 @@ print_packet(uint32_t *data, size_t length)
case PHY_PACKET_SELF_ID:
if (pp->self_id.extended) {
- printf("extended self id: phy_id=%02x, seq=%d",
+ printf("extended self id: phy_id=%02x, seq=%u",
pp->ext_self_id.phy_id, pp->ext_self_id.sequence);
} else {
static const char * const speed_names[] = {
"S100", "S200", "S400", "BETA"
};
- printf("self id: phy_id=%02x, link %s, gap_count=%d, speed=%s%s%s",
+ printf("self id: phy_id=%02x, link %s, gap_count=%u speed=%s%s%s",
pp->self_id.phy_id,
(pp->self_id.link_active ? "active" : "not active"),
pp->self_id.gap_count,
diff --git a/tools/gpio/Makefile b/tools/gpio/Makefile
index d29c9c49e251..342e056c8c66 100644
--- a/tools/gpio/Makefile
+++ b/tools/gpio/Makefile
@@ -77,8 +77,8 @@ $(OUTPUT)gpio-watch: $(GPIO_WATCH_IN)
clean:
rm -f $(ALL_PROGRAMS)
- rm -f $(OUTPUT)include/linux/gpio.h
- find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
+ rm -rf $(OUTPUT)include
+ find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete
install: $(ALL_PROGRAMS)
install -d -m 755 $(DESTDIR)$(bindir); \
diff --git a/tools/gpio/gpio-event-mon.c b/tools/gpio/gpio-event-mon.c
index 5dee2b98ab60..b70813b0bf8e 100644
--- a/tools/gpio/gpio-event-mon.c
+++ b/tools/gpio/gpio-event-mon.c
@@ -69,14 +69,14 @@ int monitor_device(const char *device_name,
}
if (num_lines == 1) {
- fprintf(stdout, "Monitoring line %d on %s\n", lines[0], device_name);
+ fprintf(stdout, "Monitoring line %u on %s\n", lines[0], device_name);
fprintf(stdout, "Initial line value: %d\n",
gpiotools_test_bit(values.bits, 0));
} else {
- fprintf(stdout, "Monitoring lines %d", lines[0]);
+ fprintf(stdout, "Monitoring lines %u", lines[0]);
for (i = 1; i < num_lines - 1; i++)
- fprintf(stdout, ", %d", lines[i]);
- fprintf(stdout, " and %d on %s\n", lines[i], device_name);
+ fprintf(stdout, ", %u", lines[i]);
+ fprintf(stdout, " and %u on %s\n", lines[i], device_name);
fprintf(stdout, "Initial line values: %d",
gpiotools_test_bit(values.bits, 0));
for (i = 1; i < num_lines - 1; i++)
diff --git a/tools/gpio/gpio-hammer.c b/tools/gpio/gpio-hammer.c
index 54fdf59dd320..ba0866eb3581 100644
--- a/tools/gpio/gpio-hammer.c
+++ b/tools/gpio/gpio-hammer.c
@@ -54,7 +54,7 @@ int hammer_device(const char *device_name, unsigned int *lines, int num_lines,
fprintf(stdout, "Hammer lines [");
for (i = 0; i < num_lines; i++) {
- fprintf(stdout, "%d", lines[i]);
+ fprintf(stdout, "%u", lines[i]);
if (i != (num_lines - 1))
fprintf(stdout, ", ");
}
@@ -89,7 +89,7 @@ int hammer_device(const char *device_name, unsigned int *lines, int num_lines,
fprintf(stdout, "[");
for (i = 0; i < num_lines; i++) {
- fprintf(stdout, "%d: %d", lines[i],
+ fprintf(stdout, "%u: %d", lines[i],
gpiotools_test_bit(values.bits, i));
if (i != (num_lines - 1))
fprintf(stdout, ", ");
diff --git a/tools/gpio/gpio-sloppy-logic-analyzer.sh b/tools/gpio/gpio-sloppy-logic-analyzer.sh
new file mode 100755
index 000000000000..3ef2278e49f9
--- /dev/null
+++ b/tools/gpio/gpio-sloppy-logic-analyzer.sh
@@ -0,0 +1,246 @@
+#!/bin/sh -eu
+# SPDX-License-Identifier: GPL-2.0
+#
+# Helper script for the Linux Kernel GPIO sloppy logic analyzer
+#
+# Copyright (C) Wolfram Sang <wsa@sang-engineering.com>
+# Copyright (C) Renesas Electronics Corporation
+
+samplefreq=1000000
+numsamples=250000
+cpusetdefaultdir='/sys/fs/cgroup'
+cpusetprefix='cpuset.'
+debugdir='/sys/kernel/debug'
+ladirname='gpio-sloppy-logic-analyzer'
+outputdir="$PWD"
+neededcmds='taskset zip'
+max_chans=8
+duration=
+initcpu=
+listinstances=0
+lainstance=
+lasysfsdir=
+triggerdat=
+trigger_bindat=
+progname="${0##*/}"
+print_help()
+{
+ cat << EOF
+$progname - helper script for the Linux Kernel Sloppy GPIO Logic Analyzer
+Available options:
+ -c|--cpu <n>: which CPU to isolate for sampling. Only needed once. Default <1>.
+ Remember that a more powerful CPU gives you higher sampling speeds.
+ Also CPU0 is not recommended as it usually does extra bookkeeping.
+ -d|--duration-us <SI-n>: number of microseconds to sample. Overrides -n, no default value.
+ -h|--help: print this help
+ -i|--instance <str>: name of the logic analyzer in case you have multiple instances. Default
+ to first instance found
+ -k|--kernel-debug-dir <str>: path to the kernel debugfs mountpoint. Default: <$debugdir>
+ -l|--list-instances: list all available instances
+ -n|--num_samples <SI-n>: number of samples to acquire. Default <$numsamples>
+ -o|--output-dir <str>: directory to put the result files. Default: current dir
+ -s|--sample_freq <SI-n>: desired sampling frequency. Might be capped if too large.
+ Default: <1000000>
+ -t|--trigger <str>: pattern to use as trigger. <str> consists of two-char pairs. First
+ char is channel number starting at "1". Second char is trigger level:
+ "L" - low; "H" - high; "R" - rising; "F" - falling
+ These pairs can be combined with "+", so "1H+2F" triggers when probe 1
+ is high while probe 2 has a falling edge. You can have multiple triggers
+ combined with ",". So, "1H+2F,1H+2R" is like the example before but it
+ waits for a rising edge on probe 2 while probe 1 is still high after the
+ first trigger has been met.
+ Trigger data will only be used for the next capture and then be erased.
+
+<SI-n> is an integer value where SI units "T", "G", "M", "K" are recognized, e.g. '1M500K' is 1500000.
+
+Examples:
+Samples $numsamples values at 1MHz with an already prepared CPU, or automatically prepares CPU1 if needed,
+using the first logic analyzer instance found:
+ '$progname'
+Samples 50us at 2MHz waiting for a falling edge on channel 2. CPU and instance as above:
+ '$progname -d 50 -s 2M -t "2F"'
+
+Note that the process exits after checking all parameters but a sub-process still works in
+the background. The result is only available once the sub-process finishes.
+
+Result is a .sr file to be consumed with PulseView from the free Sigrok project. It is
+a zip file which also contains the binary sample data which may be consumed by others.
+The filename is the logic analyzer instance name plus a since-epoch timestamp.
+EOF
+}
+
+fail()
+{
+ echo "$1"
+ exit 1
+}
+
+parse_si()
+{
+ conv_si="$(printf $1 | sed 's/[tT]+\?/*1000G+/g; s/[gG]+\?/*1000M+/g; s/[mM]+\?/*1000K+/g; s/[kK]+\?/*1000+/g; s/+$//')"
+ si_val="$((conv_si))"
+}
+set_newmask()
+{
+ for f in $(find "$1" -iname "$2"); do echo "$newmask" > "$f" 2>/dev/null || true; done
+}
+
+init_cpu()
+{
+ isol_cpu="$1"
+
+ [ -d "$lacpusetdir" ] || mkdir "$lacpusetdir"
+
+ cur_cpu=$(cat "${lacpusetfile}cpus")
+ [ "$cur_cpu" = "$isol_cpu" ] && return
+ [ -z "$cur_cpu" ] || fail "CPU$isol_cpu requested but CPU$cur_cpu already isolated"
+
+ echo "$isol_cpu" > "${lacpusetfile}cpus" || fail "Could not isolate CPU$isol_cpu. Does it exist?"
+ echo 1 > "${lacpusetfile}cpu_exclusive"
+ echo 0 > "${lacpusetfile}mems"
+
+ oldmask=$(cat /proc/irq/default_smp_affinity)
+ newmask=$(printf "%x" $((0x$oldmask & ~(1 << isol_cpu))))
+
+ set_newmask '/proc/irq' '*smp_affinity'
+ set_newmask '/sys/devices/virtual/workqueue/' 'cpumask'
+
+ # Move tasks away from isolated CPU
+ for p in $(ps -o pid | tail -n +2); do
+ mask=$(taskset -p "$p") || continue
+ # Ignore tasks with a custom mask, i.e. not equal $oldmask
+ [ "${mask##*: }" = "$oldmask" ] || continue
+ taskset -p "$newmask" "$p" || continue
+ done 2>/dev/null >/dev/null
+
+ # Big hammer! Working with 'rcu_momentary_eqs()' for a more fine-grained solution
+ # still printed warnings. Same for re-enabling the stall detector after sampling.
+ echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress
+
+ cpufreqgov="/sys/devices/system/cpu/cpu$isol_cpu/cpufreq/scaling_governor"
+ [ -w "$cpufreqgov" ] && echo 'performance' > "$cpufreqgov" || true
+}
+
+parse_triggerdat()
+{
+ oldifs="$IFS"
+ IFS=','; for trig in $1; do
+ mask=0; val1=0; val2=0
+ IFS='+'; for elem in $trig; do
+ chan=${elem%[lhfrLHFR]}
+ mode=${elem#$chan}
+ # Check if we could parse something and the channel number fits
+ [ "$chan" != "$elem" ] && [ "$chan" -le $max_chans ] || fail "Trigger syntax error: $elem"
+ bit=$((1 << (chan - 1)))
+ mask=$((mask | bit))
+ case $mode in
+ [hH]) val1=$((val1 | bit)); val2=$((val2 | bit));;
+ [fF]) val1=$((val1 | bit));;
+ [rR]) val2=$((val2 | bit));;
+ esac
+ done
+ trigger_bindat="$trigger_bindat$(printf '\\%o\\%o' $mask $val1)"
+ [ $val1 -ne $val2 ] && trigger_bindat="$trigger_bindat$(printf '\\%o\\%o' $mask $val2)"
+ done
+ IFS="$oldifs"
+}
+
+do_capture()
+{
+ taskset "$1" echo 1 > "$lasysfsdir"/capture || fail "Capture error! Check kernel log"
+
+ srtmp=$(mktemp -d)
+ echo 1 > "$srtmp"/version
+ cp "$lasysfsdir"/sample_data "$srtmp"/logic-1-1
+ cat > "$srtmp"/metadata << EOF
+[global]
+sigrok version=0.2.0
+
+[device 1]
+capturefile=logic-1
+total probes=$(wc -l < "$lasysfsdir"/meta_data)
+samplerate=${samplefreq}Hz
+unitsize=1
+EOF
+ cat "$lasysfsdir"/meta_data >> "$srtmp"/metadata
+
+ zipname="$outputdir/${lasysfsdir##*/}-$(date +%s).sr"
+ zip -jq "$zipname" "$srtmp"/*
+ rm -rf "$srtmp"
+ delay_ack=$(cat "$lasysfsdir"/delay_ns_acquisition)
+ [ "$delay_ack" -eq 0 ] && delay_ack=1
+ echo "Logic analyzer done. Saved '$zipname'"
+ echo "Max sample frequency this time: $((1000000000 / delay_ack))Hz."
+}
+
+rep=$(getopt -a -l cpu:,duration-us:,help,instance:,list-instances,kernel-debug-dir:,num_samples:,output-dir:,sample_freq:,trigger: -o c:d:hi:k:ln:o:s:t: -- "$@") || exit 1
+eval set -- "$rep"
+while true; do
+ case "$1" in
+ -c|--cpu) initcpu="$2"; shift;;
+ -d|--duration-us) parse_si $2; duration=$si_val; shift;;
+ -h|--help) print_help; exit 0;;
+ -i|--instance) lainstance="$2"; shift;;
+ -k|--kernel-debug-dir) debugdir="$2"; shift;;
+ -l|--list-instances) listinstances=1;;
+ -n|--num_samples) parse_si $2; numsamples=$si_val; shift;;
+ -o|--output-dir) outputdir="$2"; shift;;
+ -s|--sample_freq) parse_si $2; samplefreq=$si_val; shift;;
+ -t|--trigger) triggerdat="$2"; shift;;
+ --) break;;
+ *) fail "error parsing command line: $*";;
+ esac
+ shift
+done
+
+for f in $neededcmds; do
+ command -v "$f" >/dev/null || fail "Command '$f' not found"
+done
+
+# print cpuset mountpoint if any, errorcode > 0 if noprefix option was found
+cpusetdir=$(awk '$3 == "cgroup" && $4 ~ /cpuset/ { print $2; exit (match($4, /noprefix/) > 0) }' /proc/self/mounts) || cpusetprefix=''
+if [ -z "$cpusetdir" ]; then
+ cpusetdir="$cpusetdefaultdir"
+ [ -d $cpusetdir ] || mkdir $cpusetdir
+ mount -t cgroup -o cpuset none $cpusetdir || fail "Couldn't mount cpusets. Not in kernel or already in use?"
+fi
+
+lacpusetdir="$cpusetdir/$ladirname"
+lacpusetfile="$lacpusetdir/$cpusetprefix"
+sysfsdir="$debugdir/$ladirname"
+
+[ "$samplefreq" -ne 0 ] || fail "Invalid sample frequency"
+
+[ -d "$sysfsdir" ] || fail "Could not find logic analyzer root dir '$sysfsdir'. Module loaded?"
+[ -x "$sysfsdir" ] || fail "Could not access logic analyzer root dir '$sysfsdir'. Need root?"
+
+[ $listinstances -gt 0 ] && find "$sysfsdir" -mindepth 1 -type d | sed 's|.*/||' && exit 0
+
+if [ -n "$lainstance" ]; then
+ lasysfsdir="$sysfsdir/$lainstance"
+else
+ lasysfsdir=$(find "$sysfsdir" -mindepth 1 -type d -print -quit)
+fi
+[ -d "$lasysfsdir" ] || fail "Logic analyzer directory '$lasysfsdir' not found!"
+[ -d "$outputdir" ] || fail "Output directory '$outputdir' not found!"
+
+[ -n "$initcpu" ] && init_cpu "$initcpu"
+[ -d "$lacpusetdir" ] || { echo "Auto-Isolating CPU1"; init_cpu 1; }
+
+ndelay=$((1000000000 / samplefreq))
+echo "$ndelay" > "$lasysfsdir"/delay_ns
+
+[ -n "$duration" ] && numsamples=$((samplefreq * duration / 1000000))
+echo $numsamples > "$lasysfsdir"/buf_size
+
+if [ -n "$triggerdat" ]; then
+ parse_triggerdat "$triggerdat"
+ printf "$trigger_bindat" > "$lasysfsdir"/trigger 2>/dev/null || fail "Trigger data '$triggerdat' rejected"
+fi
+
+workcpu=$(cat "${lacpusetfile}effective_cpus")
+[ -n "$workcpu" ] || fail "No isolated CPU found"
+cpumask=$(printf '%x' $((1 << workcpu)))
+instance=${lasysfsdir##*/}
+echo "Setting up '$instance': $numsamples samples at ${samplefreq}Hz with ${triggerdat:-no} trigger using CPU$workcpu"
+do_capture "$cpumask" &
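To make the trigger syntax from the script's help text concrete, here is a
minimal Python sketch of the encoding that parse_triggerdat() performs: each
comma-separated trigger becomes a (mask, value) byte pair written to the
trigger file, plus a second pair when an edge (R/F) requires two consecutive
snapshots. The function name and the sample string are only for illustration:

    MAX_CHANS = 8

    def encode_triggers(triggers: str) -> bytes:
        """Encode e.g. '1H+2F,1H+2R' into (mask, value) byte pairs."""
        out = bytearray()
        for trig in triggers.split(","):
            mask = val1 = val2 = 0
            for elem in trig.split("+"):
                chan, mode = int(elem[:-1]), elem[-1].lower()
                if not 1 <= chan <= MAX_CHANS or mode not in "lhfr":
                    raise ValueError(f"Trigger syntax error: {elem}")
                bit = 1 << (chan - 1)
                mask |= bit
                if mode == "h":        # level high: set in both snapshots
                    val1 |= bit
                    val2 |= bit
                elif mode == "f":      # falling edge: high first, then low
                    val1 |= bit
                elif mode == "r":      # rising edge: low first, then high
                    val2 |= bit
                # "l" (level low) only contributes to the mask
            out += bytes((mask, val1))
            if val1 != val2:           # an edge was requested: add second snapshot
                out += bytes((mask, val2))
        return out

    print(encode_triggers("1H+2F").hex())   # 03030301: probe 1 high, falling edge on probe 2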
diff --git a/tools/hv/.gitignore b/tools/hv/.gitignore
new file mode 100644
index 000000000000..0c5bc15d602f
--- /dev/null
+++ b/tools/hv/.gitignore
@@ -0,0 +1,3 @@
+hv_fcopy_uio_daemon
+hv_kvp_daemon
+hv_vss_daemon
diff --git a/tools/hv/Build b/tools/hv/Build
index 6cf51fa4b306..7d1f1698069b 100644
--- a/tools/hv/Build
+++ b/tools/hv/Build
@@ -1,3 +1,4 @@
hv_kvp_daemon-y += hv_kvp_daemon.o
hv_vss_daemon-y += hv_vss_daemon.o
-hv_fcopy_daemon-y += hv_fcopy_daemon.o
+hv_fcopy_uio_daemon-y += hv_fcopy_uio_daemon.o
+hv_fcopy_uio_daemon-y += vmbus_bufring.o
diff --git a/tools/hv/Makefile b/tools/hv/Makefile
index fe770e679ae8..34ffcec264ab 100644
--- a/tools/hv/Makefile
+++ b/tools/hv/Makefile
@@ -2,6 +2,7 @@
# Makefile for Hyper-V tools
include ../scripts/Makefile.include
+ARCH := $(shell uname -m 2>/dev/null)
sbindir ?= /usr/sbin
libexecdir ?= /usr/libexec
sharedstatedir ?= /var/lib
@@ -16,8 +17,12 @@ endif
MAKEFLAGS += -r
override CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include
+override CFLAGS += -Wno-address-of-packed-member
-ALL_TARGETS := hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon
+ALL_TARGETS := hv_kvp_daemon hv_vss_daemon
+ifneq ($(ARCH), aarch64)
+ALL_TARGETS += hv_fcopy_uio_daemon
+endif
ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
ALL_SCRIPTS := hv_get_dhcp_info.sh hv_get_dns_info.sh hv_set_ifconfig.sh
@@ -39,15 +44,15 @@ $(HV_VSS_DAEMON_IN): FORCE
$(OUTPUT)hv_vss_daemon: $(HV_VSS_DAEMON_IN)
$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
-HV_FCOPY_DAEMON_IN := $(OUTPUT)hv_fcopy_daemon-in.o
-$(HV_FCOPY_DAEMON_IN): FORCE
- $(Q)$(MAKE) $(build)=hv_fcopy_daemon
-$(OUTPUT)hv_fcopy_daemon: $(HV_FCOPY_DAEMON_IN)
+HV_FCOPY_UIO_DAEMON_IN := $(OUTPUT)hv_fcopy_uio_daemon-in.o
+$(HV_FCOPY_UIO_DAEMON_IN): FORCE
+ $(Q)$(MAKE) $(build)=hv_fcopy_uio_daemon
+$(OUTPUT)hv_fcopy_uio_daemon: $(HV_FCOPY_UIO_DAEMON_IN)
$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
clean:
rm -f $(ALL_PROGRAMS)
- find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
+ find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete
install: $(ALL_PROGRAMS)
install -d -m 755 $(DESTDIR)$(sbindir); \
diff --git a/tools/hv/hv_fcopy_daemon.c b/tools/hv/hv_fcopy_daemon.c
deleted file mode 100644
index 16d629b22c25..000000000000
--- a/tools/hv/hv_fcopy_daemon.c
+++ /dev/null
@@ -1,266 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * An implementation of host to guest copy functionality for Linux.
- *
- * Copyright (C) 2014, Microsoft, Inc.
- *
- * Author : K. Y. Srinivasan <kys@microsoft.com>
- */
-
-
-#include <sys/types.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <unistd.h>
-#include <string.h>
-#include <errno.h>
-#include <linux/hyperv.h>
-#include <linux/limits.h>
-#include <syslog.h>
-#include <sys/stat.h>
-#include <fcntl.h>
-#include <getopt.h>
-
-static int target_fd;
-static char target_fname[PATH_MAX];
-static unsigned long long filesize;
-
-static int hv_start_fcopy(struct hv_start_fcopy *smsg)
-{
- int error = HV_E_FAIL;
- char *q, *p;
-
- filesize = 0;
- p = (char *)smsg->path_name;
- snprintf(target_fname, sizeof(target_fname), "%s/%s",
- (char *)smsg->path_name, (char *)smsg->file_name);
-
- syslog(LOG_INFO, "Target file name: %s", target_fname);
- /*
- * Check to see if the path is already in place; if not,
- * create if required.
- */
- while ((q = strchr(p, '/')) != NULL) {
- if (q == p) {
- p++;
- continue;
- }
- *q = '\0';
- if (access((char *)smsg->path_name, F_OK)) {
- if (smsg->copy_flags & CREATE_PATH) {
- if (mkdir((char *)smsg->path_name, 0755)) {
- syslog(LOG_ERR, "Failed to create %s",
- (char *)smsg->path_name);
- goto done;
- }
- } else {
- syslog(LOG_ERR, "Invalid path: %s",
- (char *)smsg->path_name);
- goto done;
- }
- }
- p = q + 1;
- *q = '/';
- }
-
- if (!access(target_fname, F_OK)) {
- syslog(LOG_INFO, "File: %s exists", target_fname);
- if (!(smsg->copy_flags & OVER_WRITE)) {
- error = HV_ERROR_ALREADY_EXISTS;
- goto done;
- }
- }
-
- target_fd = open(target_fname,
- O_RDWR | O_CREAT | O_TRUNC | O_CLOEXEC, 0744);
- if (target_fd == -1) {
- syslog(LOG_INFO, "Open Failed: %s", strerror(errno));
- goto done;
- }
-
- error = 0;
-done:
- if (error)
- target_fname[0] = '\0';
- return error;
-}
-
-static int hv_copy_data(struct hv_do_fcopy *cpmsg)
-{
- ssize_t bytes_written;
- int ret = 0;
-
- bytes_written = pwrite(target_fd, cpmsg->data, cpmsg->size,
- cpmsg->offset);
-
- filesize += cpmsg->size;
- if (bytes_written != cpmsg->size) {
- switch (errno) {
- case ENOSPC:
- ret = HV_ERROR_DISK_FULL;
- break;
- default:
- ret = HV_E_FAIL;
- break;
- }
- syslog(LOG_ERR, "pwrite failed to write %llu bytes: %ld (%s)",
- filesize, (long)bytes_written, strerror(errno));
- }
-
- return ret;
-}
-
-/*
- * Reset target_fname to "" in the two below functions for hibernation: if
- * the fcopy operation is aborted by hibernation, the daemon should remove the
- * partially-copied file; to achieve this, the hv_utils driver always fakes a
- * CANCEL_FCOPY message upon suspend, and later when the VM resumes back,
- * the daemon calls hv_copy_cancel() to remove the file; if a file is copied
- * successfully before suspend, hv_copy_finished() must reset target_fname to
- * avoid that the file can be incorrectly removed upon resume, since the faked
- * CANCEL_FCOPY message is spurious in this case.
- */
-static int hv_copy_finished(void)
-{
- close(target_fd);
- target_fname[0] = '\0';
- return 0;
-}
-static int hv_copy_cancel(void)
-{
- close(target_fd);
- if (strlen(target_fname) > 0) {
- unlink(target_fname);
- target_fname[0] = '\0';
- }
- return 0;
-
-}
-
-void print_usage(char *argv[])
-{
- fprintf(stderr, "Usage: %s [options]\n"
- "Options are:\n"
- " -n, --no-daemon stay in foreground, don't daemonize\n"
- " -h, --help print this help\n", argv[0]);
-}
-
-int main(int argc, char *argv[])
-{
- int fcopy_fd = -1;
- int error;
- int daemonize = 1, long_index = 0, opt;
- int version = FCOPY_CURRENT_VERSION;
- union {
- struct hv_fcopy_hdr hdr;
- struct hv_start_fcopy start;
- struct hv_do_fcopy copy;
- __u32 kernel_modver;
- } buffer = { };
- int in_handshake;
-
- static struct option long_options[] = {
- {"help", no_argument, 0, 'h' },
- {"no-daemon", no_argument, 0, 'n' },
- {0, 0, 0, 0 }
- };
-
- while ((opt = getopt_long(argc, argv, "hn", long_options,
- &long_index)) != -1) {
- switch (opt) {
- case 'n':
- daemonize = 0;
- break;
- case 'h':
- default:
- print_usage(argv);
- exit(EXIT_FAILURE);
- }
- }
-
- if (daemonize && daemon(1, 0)) {
- syslog(LOG_ERR, "daemon() failed; error: %s", strerror(errno));
- exit(EXIT_FAILURE);
- }
-
- openlog("HV_FCOPY", 0, LOG_USER);
- syslog(LOG_INFO, "starting; pid is:%d", getpid());
-
-reopen_fcopy_fd:
- if (fcopy_fd != -1)
- close(fcopy_fd);
- /* Remove any possible partially-copied file on error */
- hv_copy_cancel();
- in_handshake = 1;
- fcopy_fd = open("/dev/vmbus/hv_fcopy", O_RDWR);
-
- if (fcopy_fd < 0) {
- syslog(LOG_ERR, "open /dev/vmbus/hv_fcopy failed; error: %d %s",
- errno, strerror(errno));
- exit(EXIT_FAILURE);
- }
-
- /*
- * Register with the kernel.
- */
- if ((write(fcopy_fd, &version, sizeof(int))) != sizeof(int)) {
- syslog(LOG_ERR, "Registration failed: %s", strerror(errno));
- exit(EXIT_FAILURE);
- }
-
- while (1) {
- /*
- * In this loop we process fcopy messages after the
- * handshake is complete.
- */
- ssize_t len;
-
- len = pread(fcopy_fd, &buffer, sizeof(buffer), 0);
- if (len < 0) {
- syslog(LOG_ERR, "pread failed: %s", strerror(errno));
- goto reopen_fcopy_fd;
- }
-
- if (in_handshake) {
- if (len != sizeof(buffer.kernel_modver)) {
- syslog(LOG_ERR, "invalid version negotiation");
- exit(EXIT_FAILURE);
- }
- in_handshake = 0;
- syslog(LOG_INFO, "kernel module version: %u",
- buffer.kernel_modver);
- continue;
- }
-
- switch (buffer.hdr.operation) {
- case START_FILE_COPY:
- error = hv_start_fcopy(&buffer.start);
- break;
- case WRITE_TO_FILE:
- error = hv_copy_data(&buffer.copy);
- break;
- case COMPLETE_FCOPY:
- error = hv_copy_finished();
- break;
- case CANCEL_FCOPY:
- error = hv_copy_cancel();
- break;
-
- default:
- error = HV_E_FAIL;
- syslog(LOG_ERR, "Unknown operation: %d",
- buffer.hdr.operation);
-
- }
-
- /*
- * pwrite() may return an error due to the faked CANCEL_FCOPY
- * message upon hibernation. Ignore the error by resetting the
- * dev file, i.e. closing and re-opening it.
- */
- if (pwrite(fcopy_fd, &error, sizeof(int), 0) != sizeof(int)) {
- syslog(LOG_ERR, "pwrite failed: %s", strerror(errno));
- goto reopen_fcopy_fd;
- }
- }
-}
diff --git a/tools/hv/hv_fcopy_uio_daemon.c b/tools/hv/hv_fcopy_uio_daemon.c
new file mode 100644
index 000000000000..92e8307b2a46
--- /dev/null
+++ b/tools/hv/hv_fcopy_uio_daemon.c
@@ -0,0 +1,559 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * An implementation of host to guest copy functionality for Linux.
+ *
+ * Copyright (C) 2023, Microsoft, Inc.
+ *
+ * Author : K. Y. Srinivasan <kys@microsoft.com>
+ * Author : Saurabh Sengar <ssengar@microsoft.com>
+ *
+ */
+
+#include <dirent.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <getopt.h>
+#include <locale.h>
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <syslog.h>
+#include <unistd.h>
+#include <wchar.h>
+#include <sys/stat.h>
+#include <linux/hyperv.h>
+#include <linux/limits.h>
+#include "vmbus_bufring.h"
+
+#define ICMSGTYPE_NEGOTIATE 0
+#define ICMSGTYPE_FCOPY 7
+
+#define WIN8_SRV_MAJOR 1
+#define WIN8_SRV_MINOR 1
+#define WIN8_SRV_VERSION (WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
+
+#define FCOPY_DEVICE_PATH(subdir) \
+ "/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/" #subdir
+#define FCOPY_UIO_PATH FCOPY_DEVICE_PATH(uio)
+#define FCOPY_CHANNELS_PATH FCOPY_DEVICE_PATH(channels)
+
+#define FCOPY_VER_COUNT 1
+static const int fcopy_versions[] = {
+ WIN8_SRV_VERSION
+};
+
+#define FW_VER_COUNT 1
+static const int fw_versions[] = {
+ UTIL_FW_VERSION
+};
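
The framework and service version constants above pack the major number into the upper 16 bits and the minor number into the lower 16 bits, and the negotiation path later unpacks them the same way (version >> 16, version & 0xFFFF). A minimal standalone sketch of that packing, using a hypothetical pack_ic_version() helper that is not part of this patch:

	#include <stdio.h>
	#include <stdint.h>

	/* Hypothetical helper mirroring the (major << 16 | minor) layout used above. */
	static uint32_t pack_ic_version(uint16_t major, uint16_t minor)
	{
		return ((uint32_t)major << 16) | minor;
	}

	int main(void)
	{
		uint32_t ver = pack_ic_version(1, 1);	/* same layout as WIN8_SRV_VERSION */

		printf("FCopy IC version %u.%u\n", ver >> 16, ver & 0xFFFF);
		return 0;
	}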
+
+static uint32_t get_ring_buffer_size(void)
+{
+ char ring_path[PATH_MAX];
+ DIR *dir;
+ struct dirent *entry;
+ struct stat st;
+ uint32_t ring_size = 0;
+ int retry_count = 0;
+
+ /* Find the channel directory */
+ dir = opendir(FCOPY_CHANNELS_PATH);
+ if (!dir) {
+ usleep(100 * 1000); /* Avoid race with kernel, wait 100ms and retry once */
+ dir = opendir(FCOPY_CHANNELS_PATH);
+ if (!dir) {
+ syslog(LOG_ERR, "Failed to open channels directory: %s", strerror(errno));
+ return 0;
+ }
+ }
+
+retry_once:
+ while ((entry = readdir(dir)) != NULL) {
+ if (entry->d_type == DT_DIR && strcmp(entry->d_name, ".") != 0 &&
+ strcmp(entry->d_name, "..") != 0) {
+ snprintf(ring_path, sizeof(ring_path), "%s/%s/ring",
+ FCOPY_CHANNELS_PATH, entry->d_name);
+
+ if (stat(ring_path, &st) == 0) {
+ /*
+			 * stat returns the combined size of the Tx and Rx rings,
+			 * so take half of it as the individual ring size.
+ */
+ ring_size = (uint32_t)st.st_size / 2;
+ syslog(LOG_INFO, "Ring buffer size from %s: %u bytes",
+ ring_path, ring_size);
+ break;
+ }
+ }
+ }
+
+ if (!ring_size && retry_count == 0) {
+ retry_count = 1;
+ rewinddir(dir);
+ usleep(100 * 1000); /* Wait 100ms and retry once */
+ goto retry_once;
+ }
+
+ closedir(dir);
+
+ if (!ring_size)
+ syslog(LOG_ERR, "Could not determine ring size");
+
+ return ring_size;
+}
+
+static unsigned char *desc;
+
+static int target_fd;
+static char target_fname[PATH_MAX];
+static unsigned long long filesize;
+
+static int hv_fcopy_create_file(char *file_name, char *path_name, __u32 flags)
+{
+ int error = HV_E_FAIL;
+ char *q, *p;
+
+ filesize = 0;
+ p = path_name;
+ if (snprintf(target_fname, sizeof(target_fname), "%s/%s",
+ path_name, file_name) >= sizeof(target_fname)) {
+ syslog(LOG_ERR, "target file name is too long: %s/%s", path_name, file_name);
+ goto done;
+ }
+
+ /*
+ * Check to see if the path is already in place; if not,
+	 * create it if CREATE_PATH is set.
+ */
+ while ((q = strchr(p, '/')) != NULL) {
+ if (q == p) {
+ p++;
+ continue;
+ }
+ *q = '\0';
+ if (access(path_name, F_OK)) {
+ if (flags & CREATE_PATH) {
+ if (mkdir(path_name, 0755)) {
+ syslog(LOG_ERR, "Failed to create %s",
+ path_name);
+ goto done;
+ }
+ } else {
+ syslog(LOG_ERR, "Invalid path: %s", path_name);
+ goto done;
+ }
+ }
+ p = q + 1;
+ *q = '/';
+ }
+
+ if (!access(target_fname, F_OK)) {
+ syslog(LOG_INFO, "File: %s exists", target_fname);
+ if (!(flags & OVER_WRITE)) {
+ error = HV_ERROR_ALREADY_EXISTS;
+ goto done;
+ }
+ }
+
+ target_fd = open(target_fname,
+ O_RDWR | O_CREAT | O_TRUNC | O_CLOEXEC, 0744);
+ if (target_fd == -1) {
+ syslog(LOG_INFO, "Open Failed: %s", strerror(errno));
+ goto done;
+ }
+
+ error = 0;
+done:
+ if (error)
+ target_fname[0] = '\0';
+ return error;
+}
+
+/* copy the data into the file */
+static int hv_copy_data(struct hv_do_fcopy *cpmsg)
+{
+ ssize_t len;
+ int ret = 0;
+
+ len = pwrite(target_fd, cpmsg->data, cpmsg->size, cpmsg->offset);
+
+ filesize += cpmsg->size;
+ if (len != cpmsg->size) {
+ switch (errno) {
+ case ENOSPC:
+ ret = HV_ERROR_DISK_FULL;
+ break;
+ default:
+ ret = HV_E_FAIL;
+ break;
+ }
+ syslog(LOG_ERR, "pwrite failed to write %llu bytes: %ld (%s)",
+ filesize, (long)len, strerror(errno));
+ }
+
+ return ret;
+}
+
+static int hv_copy_finished(void)
+{
+ close(target_fd);
+ target_fname[0] = '\0';
+
+ return 0;
+}
+
+static void print_usage(char *argv[])
+{
+ fprintf(stderr, "Usage: %s [options]\n"
+ "Options are:\n"
+ " -n, --no-daemon stay in foreground, don't daemonize\n"
+ " -h, --help print this help\n", argv[0]);
+}
+
+static bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp, unsigned char *buf,
+ unsigned int buflen, const int *fw_version, int fw_vercnt,
+ const int *srv_version, int srv_vercnt,
+ int *nego_fw_version, int *nego_srv_version)
+{
+ int icframe_major, icframe_minor;
+ int icmsg_major, icmsg_minor;
+ int fw_major, fw_minor;
+ int srv_major, srv_minor;
+ int i, j;
+ bool found_match = false;
+ struct icmsg_negotiate *negop;
+
+ /* Check that there's enough space for icframe_vercnt, icmsg_vercnt */
+ if (buflen < ICMSG_HDR + offsetof(struct icmsg_negotiate, reserved)) {
+ syslog(LOG_ERR, "Invalid icmsg negotiate");
+ return false;
+ }
+
+ icmsghdrp->icmsgsize = 0x10;
+ negop = (struct icmsg_negotiate *)&buf[ICMSG_HDR];
+
+ icframe_major = negop->icframe_vercnt;
+ icframe_minor = 0;
+
+ icmsg_major = negop->icmsg_vercnt;
+ icmsg_minor = 0;
+
+ /* Validate negop packet */
+ if (icframe_major > IC_VERSION_NEGOTIATION_MAX_VER_COUNT ||
+ icmsg_major > IC_VERSION_NEGOTIATION_MAX_VER_COUNT ||
+ ICMSG_NEGOTIATE_PKT_SIZE(icframe_major, icmsg_major) > buflen) {
+ syslog(LOG_ERR, "Invalid icmsg negotiate - icframe_major: %u, icmsg_major: %u\n",
+ icframe_major, icmsg_major);
+ goto fw_error;
+ }
+
+ /*
+ * Select the framework version number we will
+ * support.
+ */
+
+ for (i = 0; i < fw_vercnt; i++) {
+ fw_major = (fw_version[i] >> 16);
+ fw_minor = (fw_version[i] & 0xFFFF);
+
+ for (j = 0; j < negop->icframe_vercnt; j++) {
+ if (negop->icversion_data[j].major == fw_major &&
+ negop->icversion_data[j].minor == fw_minor) {
+ icframe_major = negop->icversion_data[j].major;
+ icframe_minor = negop->icversion_data[j].minor;
+ found_match = true;
+ break;
+ }
+ }
+
+ if (found_match)
+ break;
+ }
+
+ if (!found_match)
+ goto fw_error;
+
+ found_match = false;
+
+ for (i = 0; i < srv_vercnt; i++) {
+ srv_major = (srv_version[i] >> 16);
+ srv_minor = (srv_version[i] & 0xFFFF);
+
+ for (j = negop->icframe_vercnt;
+ (j < negop->icframe_vercnt + negop->icmsg_vercnt);
+ j++) {
+ if (negop->icversion_data[j].major == srv_major &&
+ negop->icversion_data[j].minor == srv_minor) {
+ icmsg_major = negop->icversion_data[j].major;
+ icmsg_minor = negop->icversion_data[j].minor;
+ found_match = true;
+ break;
+ }
+ }
+
+ if (found_match)
+ break;
+ }
+
+ /*
+ * Respond with the framework and service
+ * version numbers we can support.
+ */
+fw_error:
+ if (!found_match) {
+ negop->icframe_vercnt = 0;
+ negop->icmsg_vercnt = 0;
+ } else {
+ negop->icframe_vercnt = 1;
+ negop->icmsg_vercnt = 1;
+ }
+
+ if (nego_fw_version)
+ *nego_fw_version = (icframe_major << 16) | icframe_minor;
+
+ if (nego_srv_version)
+ *nego_srv_version = (icmsg_major << 16) | icmsg_minor;
+
+ negop->icversion_data[0].major = icframe_major;
+ negop->icversion_data[0].minor = icframe_minor;
+ negop->icversion_data[1].major = icmsg_major;
+ negop->icversion_data[1].minor = icmsg_minor;
+
+ return found_match;
+}
+
+static void wcstoutf8(char *dest, const __u16 *src, size_t dest_size)
+{
+ size_t len = 0;
+
+ while (len < dest_size && *src) {
+ if (src[len] < 0x80)
+ dest[len++] = (char)(*src++);
+ else
+ dest[len++] = 'X';
+ }
+
+ dest[len] = '\0';
+}
+
+static int hv_fcopy_start(struct hv_start_fcopy *smsg_in)
+{
+ /*
+	 * file_name and path_name should have the same length as the
+	 * corresponding members of hv_start_fcopy.
+ */
+ char file_name[W_MAX_PATH], path_name[W_MAX_PATH];
+
+ setlocale(LC_ALL, "en_US.utf8");
+ wcstoutf8(file_name, smsg_in->file_name, W_MAX_PATH - 1);
+ wcstoutf8(path_name, smsg_in->path_name, W_MAX_PATH - 1);
+
+ return hv_fcopy_create_file(file_name, path_name, smsg_in->copy_flags);
+}
+
+static int hv_fcopy_send_data(struct hv_fcopy_hdr *fcopy_msg, int recvlen)
+{
+ int operation = fcopy_msg->operation;
+
+ /*
+	 * The strings sent from the host are encoded in
+	 * utf16; convert them to utf8 strings.
+	 * The host assures us that the utf16 strings will not exceed
+	 * the max lengths specified. We will, however, reserve room
+	 * for the string terminating character - in wcstoutf8() we
+	 * limit the size of the buffer where the converted string is
+	 * placed to W_MAX_PATH - 1 to guarantee that the strings can
+	 * be properly terminated!
+ */
+
+ switch (operation) {
+ case START_FILE_COPY:
+ return hv_fcopy_start((struct hv_start_fcopy *)fcopy_msg);
+ case WRITE_TO_FILE:
+ return hv_copy_data((struct hv_do_fcopy *)fcopy_msg);
+ case COMPLETE_FCOPY:
+ return hv_copy_finished();
+ }
+
+ return HV_E_FAIL;
+}
+
+/* process the packet recv from host */
+static int fcopy_pkt_process(struct vmbus_br *txbr)
+{
+ int ret, offset, pktlen;
+ int fcopy_srv_version;
+ const struct vmbus_chanpkt_hdr *pkt;
+ struct hv_fcopy_hdr *fcopy_msg;
+ struct icmsg_hdr *icmsghdr;
+
+ pkt = (const struct vmbus_chanpkt_hdr *)desc;
+ offset = pkt->hlen << 3;
+ pktlen = (pkt->tlen << 3) - offset;
+ icmsghdr = (struct icmsg_hdr *)&desc[offset + sizeof(struct vmbuspipe_hdr)];
+ icmsghdr->status = HV_E_FAIL;
+
+ if (icmsghdr->icmsgtype == ICMSGTYPE_NEGOTIATE) {
+ if (vmbus_prep_negotiate_resp(icmsghdr, desc + offset, pktlen, fw_versions,
+ FW_VER_COUNT, fcopy_versions, FCOPY_VER_COUNT,
+ NULL, &fcopy_srv_version)) {
+ syslog(LOG_INFO, "FCopy IC version %d.%d",
+ fcopy_srv_version >> 16, fcopy_srv_version & 0xFFFF);
+ icmsghdr->status = 0;
+ }
+ } else if (icmsghdr->icmsgtype == ICMSGTYPE_FCOPY) {
+		/* Ensure the packet is big enough to contain hv_fcopy_hdr */
+ if (pktlen < ICMSG_HDR + sizeof(struct hv_fcopy_hdr)) {
+ syslog(LOG_ERR, "Invalid Fcopy hdr. Packet length too small: %u",
+ pktlen);
+ return -ENOBUFS;
+ }
+
+ fcopy_msg = (struct hv_fcopy_hdr *)&desc[offset + ICMSG_HDR];
+ icmsghdr->status = hv_fcopy_send_data(fcopy_msg, pktlen);
+ }
+
+ icmsghdr->icflags = ICMSGHDRFLAG_TRANSACTION | ICMSGHDRFLAG_RESPONSE;
+ ret = rte_vmbus_chan_send(txbr, 0x6, desc + offset, pktlen, 0);
+ if (ret) {
+ syslog(LOG_ERR, "Write to ringbuffer failed err: %d", ret);
+ return ret;
+ }
+
+ return 0;
+}
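
For readers unfamiliar with the VMBus descriptor fields used in fcopy_pkt_process() above: hlen and tlen are expressed in 8-byte units, so the payload offset and length fall out of two shifts. A small worked sketch with assumed values (not taken from a real trace):

	#include <stdio.h>

	int main(void)
	{
		/* Assumed example packet: hlen = 2 and tlen = 8, both in 8-byte units. */
		unsigned int hlen = 2, tlen = 8;
		unsigned int offset = hlen << 3;		/* 16-byte descriptor header */
		unsigned int pktlen = (tlen << 3) - offset;	/* 48 bytes of payload */

		printf("payload starts at byte %u, length %u\n", offset, pktlen);
		return 0;
	}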
+
+static void fcopy_get_first_folder(char *path, char *chan_no)
+{
+ DIR *dir = opendir(path);
+ struct dirent *entry;
+
+ if (!dir) {
+ syslog(LOG_ERR, "Failed to open directory (errno=%s).\n", strerror(errno));
+ return;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (entry->d_type == DT_DIR && strcmp(entry->d_name, ".") != 0 &&
+ strcmp(entry->d_name, "..") != 0) {
+ strcpy(chan_no, entry->d_name);
+ break;
+ }
+ }
+
+ closedir(dir);
+}
+
+int main(int argc, char *argv[])
+{
+ int fcopy_fd = -1, tmp = 1;
+ int daemonize = 1, long_index = 0, opt, ret = -EINVAL;
+ struct vmbus_br txbr, rxbr;
+ void *ring;
+ uint32_t ring_size, len;
+ char uio_name[NAME_MAX] = {0};
+ char uio_dev_path[PATH_MAX] = {0};
+
+ static struct option long_options[] = {
+ {"help", no_argument, 0, 'h' },
+ {"no-daemon", no_argument, 0, 'n' },
+ {0, 0, 0, 0 }
+ };
+
+ while ((opt = getopt_long(argc, argv, "hn", long_options,
+ &long_index)) != -1) {
+ switch (opt) {
+ case 'n':
+ daemonize = 0;
+ break;
+ case 'h':
+ default:
+ print_usage(argv);
+ goto exit;
+ }
+ }
+
+ if (daemonize && daemon(1, 0)) {
+ syslog(LOG_ERR, "daemon() failed; error: %s", strerror(errno));
+ goto exit;
+ }
+
+ openlog("HV_UIO_FCOPY", 0, LOG_USER);
+ syslog(LOG_INFO, "starting; pid is:%d", getpid());
+
+ ring_size = get_ring_buffer_size();
+ if (!ring_size) {
+ ret = -ENODEV;
+ goto exit;
+ }
+
+ desc = malloc(ring_size * sizeof(unsigned char));
+ if (!desc) {
+ syslog(LOG_ERR, "malloc failed for desc buffer");
+ ret = -ENOMEM;
+ goto exit;
+ }
+
+ fcopy_get_first_folder(FCOPY_UIO_PATH, uio_name);
+ snprintf(uio_dev_path, sizeof(uio_dev_path), "/dev/%s", uio_name);
+ fcopy_fd = open(uio_dev_path, O_RDWR);
+
+ if (fcopy_fd < 0) {
+ syslog(LOG_ERR, "open %s failed; error: %d %s",
+ uio_dev_path, errno, strerror(errno));
+ ret = fcopy_fd;
+ goto free_desc;
+ }
+
+ ring = vmbus_uio_map(&fcopy_fd, ring_size);
+ if (!ring) {
+ ret = errno;
+ syslog(LOG_ERR, "mmap ringbuffer failed; error: %d %s", ret, strerror(ret));
+ goto close;
+ }
+ vmbus_br_setup(&txbr, ring, ring_size);
+ vmbus_br_setup(&rxbr, (char *)ring + ring_size, ring_size);
+
+ rxbr.vbr->imask = 0;
+
+ while (1) {
+ /*
+ * In this loop we process fcopy messages after the
+ * handshake is complete.
+ */
+ ret = pread(fcopy_fd, &tmp, sizeof(int), 0);
+ if (ret < 0) {
+ if (errno == EINTR || errno == EAGAIN)
+ continue;
+ syslog(LOG_ERR, "pread failed: %s", strerror(errno));
+ goto close;
+ }
+
+ len = ring_size;
+ ret = rte_vmbus_chan_recv_raw(&rxbr, desc, &len);
+ if (unlikely(ret <= 0)) {
+ /* This indicates a failure to communicate (or worse) */
+ syslog(LOG_ERR, "VMBus channel recv error: %d", ret);
+ } else {
+ ret = fcopy_pkt_process(&txbr);
+ if (ret < 0)
+ goto close;
+
+ /* Signal host */
+ if ((write(fcopy_fd, &tmp, sizeof(int))) != sizeof(int)) {
+ ret = errno;
+ syslog(LOG_ERR, "Signal to host failed: %s\n", strerror(ret));
+ goto close;
+ }
+ }
+ }
+close:
+ close(fcopy_fd);
+free_desc:
+ free(desc);
+exit:
+ return ret;
+}
diff --git a/tools/hv/hv_get_dns_info.sh b/tools/hv/hv_get_dns_info.sh
index 058c17b46ffc..268521234d4b 100755
--- a/tools/hv/hv_get_dns_info.sh
+++ b/tools/hv/hv_get_dns_info.sh
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/sh
 # This example script parses /etc/resolv.conf to retrieve DNS information.
# In the interest of keeping the KVP daemon code free of distro specific
@@ -10,4 +10,4 @@
# this script can be based on the Network Manager APIs for retrieving DNS
# entries.
-cat /etc/resolv.conf 2>/dev/null | awk '/^nameserver/ { print $2 }'
+exec awk '/^nameserver/ { print $2 }' /etc/resolv.conf 2>/dev/null
diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
index 318e2dad27e0..1f64c680be13 100644
--- a/tools/hv/hv_kvp_daemon.c
+++ b/tools/hv/hv_kvp_daemon.c
@@ -24,6 +24,7 @@
#include <sys/poll.h>
#include <sys/utsname.h>
+#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
@@ -76,7 +77,14 @@ enum {
DNS
};
+enum {
+ IPV4 = 1,
+ IPV6,
+ IP_TYPE_MAX
+};
+
static int in_hand_shake;
+static int debug;
static char *os_name = "";
static char *os_major = "";
@@ -102,6 +110,11 @@ static struct utsname uts_buf;
#define MAX_FILE_NAME 100
#define ENTRIES_PER_BLOCK 50
+/*
+ * Change this entry if the number of addresses increases in the future
+ */
+#define MAX_IP_ENTRIES 64
+#define OUTSTR_BUF_SIZE ((INET6_ADDRSTRLEN + 1) * MAX_IP_ENTRIES)
struct kvp_record {
char key[HV_KVP_EXCHANGE_MAX_KEY_SIZE];
@@ -172,6 +185,20 @@ static void kvp_update_file(int pool)
kvp_release_lock(pool);
}
+static void kvp_dump_initial_pools(int pool)
+{
+ int i;
+
+ syslog(LOG_DEBUG, "===Start dumping the contents of pool %d ===\n",
+ pool);
+
+ for (i = 0; i < kvp_file_info[pool].num_records; i++)
+ syslog(LOG_DEBUG, "pool: %d, %d/%d key=%s val=%s\n",
+ pool, i + 1, kvp_file_info[pool].num_records,
+ kvp_file_info[pool].records[i].key,
+ kvp_file_info[pool].records[i].value);
+}
+
static void kvp_update_mem_state(int pool)
{
FILE *filep;
@@ -259,6 +286,8 @@ static int kvp_file_init(void)
return 1;
kvp_file_info[i].num_records = 0;
kvp_update_mem_state(i);
+ if (debug)
+ kvp_dump_initial_pools(i);
}
return 0;
@@ -286,6 +315,9 @@ static int kvp_key_delete(int pool, const __u8 *key, int key_size)
* Found a match; just move the remaining
* entries up.
*/
+ if (debug)
+ syslog(LOG_DEBUG, "%s: deleting the KVP: pool=%d key=%s val=%s",
+ __func__, pool, record[i].key, record[i].value);
if (i == (num_records - 1)) {
kvp_file_info[pool].num_records--;
kvp_update_file(pool);
@@ -304,20 +336,36 @@ static int kvp_key_delete(int pool, const __u8 *key, int key_size)
kvp_update_file(pool);
return 0;
}
+
+ if (debug)
+ syslog(LOG_DEBUG, "%s: could not delete KVP: pool=%d key=%s. Record not found",
+ __func__, pool, key);
+
return 1;
}
static int kvp_key_add_or_modify(int pool, const __u8 *key, int key_size,
const __u8 *value, int value_size)
{
- int i;
- int num_records;
struct kvp_record *record;
+ int num_records;
int num_blocks;
+ int i;
+
+ if (debug)
+ syslog(LOG_DEBUG, "%s: got a KVP: pool=%d key=%s val=%s",
+ __func__, pool, key, value);
if ((key_size > HV_KVP_EXCHANGE_MAX_KEY_SIZE) ||
- (value_size > HV_KVP_EXCHANGE_MAX_VALUE_SIZE))
+ (value_size > HV_KVP_EXCHANGE_MAX_VALUE_SIZE)) {
+ syslog(LOG_ERR, "%s: Too long key or value: key=%s, val=%s",
+ __func__, key, value);
+
+ if (debug)
+ syslog(LOG_DEBUG, "%s: Too long key or value: pool=%d, key=%s, val=%s",
+ __func__, pool, key, value);
return 1;
+ }
/*
* First update the in-memory state.
@@ -337,6 +385,9 @@ static int kvp_key_add_or_modify(int pool, const __u8 *key, int key_size,
*/
memcpy(record[i].value, value, value_size);
kvp_update_file(pool);
+ if (debug)
+ syslog(LOG_DEBUG, "%s: updated: pool=%d key=%s val=%s",
+ __func__, pool, key, value);
return 0;
}
@@ -348,8 +399,10 @@ static int kvp_key_add_or_modify(int pool, const __u8 *key, int key_size,
record = realloc(record, sizeof(struct kvp_record) *
ENTRIES_PER_BLOCK * (num_blocks + 1));
- if (record == NULL)
+ if (!record) {
+ syslog(LOG_ERR, "%s: Memory alloc failure", __func__);
return 1;
+ }
kvp_file_info[pool].num_blocks++;
}
@@ -357,6 +410,11 @@ static int kvp_key_add_or_modify(int pool, const __u8 *key, int key_size,
memcpy(record[i].key, key, key_size);
kvp_file_info[pool].records = record;
kvp_file_info[pool].num_records++;
+
+ if (debug)
+ syslog(LOG_DEBUG, "%s: added: pool=%d key=%s val=%s",
+ __func__, pool, key, value);
+
kvp_update_file(pool);
return 0;
}
@@ -666,6 +724,88 @@ static void kvp_process_ipconfig_file(char *cmd,
pclose(file);
}
+static bool kvp_verify_ip_address(const void *address_string)
+{
+ char verify_buf[sizeof(struct in6_addr)];
+
+ if (inet_pton(AF_INET, address_string, verify_buf) == 1)
+ return true;
+ if (inet_pton(AF_INET6, address_string, verify_buf) == 1)
+ return true;
+ return false;
+}
+
+static void kvp_extract_routes(const char *line, void **output, size_t *remaining)
+{
+ static const char needle[] = "via ";
+ const char *match, *haystack = line;
+
+ while ((match = strstr(haystack, needle))) {
+ const char *address, *next_char;
+
+ /* Address starts after needle. */
+ address = match + strlen(needle);
+
+		/* The address ends at whitespace, a backslash, or end of line. */
+ next_char = strpbrk(address, " \t\\");
+ if (!next_char)
+ next_char = address + strlen(address) + 1;
+
+ /* Enough room for address and semicolon. */
+ if (*remaining >= (next_char - address) + 1) {
+ memcpy(*output, address, next_char - address);
+ /* Terminate string for verification. */
+ memcpy(*output + (next_char - address), "", 1);
+ if (kvp_verify_ip_address(*output)) {
+ /* Advance output buffer. */
+ *output += next_char - address;
+ *remaining -= next_char - address;
+
+ /* Each address needs a trailing semicolon. */
+ memcpy(*output, ";", 1);
+ *output += 1;
+ *remaining -= 1;
+ }
+ }
+ haystack = next_char;
+ }
+}
+
+static void kvp_get_gateway(void *buffer, size_t buffer_len)
+{
+ static const char needle[] = "default ";
+ FILE *f;
+ void *output = buffer;
+ char *line = NULL;
+ size_t alloc_size = 0, remaining = buffer_len - 1;
+ ssize_t num_chars;
+
+ /* Show route information in a single line, for each address family */
+ f = popen("ip --oneline -4 route show;ip --oneline -6 route show", "r");
+ if (!f) {
+ /* Convert buffer into C-String. */
+ memcpy(output, "", 1);
+ return;
+ }
+ while ((num_chars = getline(&line, &alloc_size, f)) > 0) {
+ /* Skip short lines. */
+ if (num_chars <= strlen(needle))
+ continue;
+ /* Skip lines without default route. */
+ if (memcmp(line, needle, strlen(needle)))
+ continue;
+ /* Remove trailing newline to simplify further parsing. */
+ if (line[num_chars - 1] == '\n')
+ line[num_chars - 1] = '\0';
+ /* Search routes after match. */
+ kvp_extract_routes(line + strlen(needle), &output, &remaining);
+ }
+ /* Convert buffer into C-String. */
+ memcpy(output, "", 1);
+ free(line);
+ pclose(f);
+}
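
To illustrate what kvp_extract_routes() pulls out of a default-route line, here is a self-contained sketch; the sample line is only an assumption of typical `ip --oneline route show` output and is not part of the patch:

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		/* Assumed sample line, already stripped of the leading "default ". */
		const char *line = "via 192.168.1.1 dev eth0 proto dhcp metric 100";
		const char *match = strstr(line, "via ");

		if (match) {
			const char *addr = match + strlen("via ");
			/* The address ends at whitespace or a '\' continuation. */
			size_t len = strcspn(addr, " \t\\");

			printf("gateway: %.*s;\n", (int)len, addr);	/* 192.168.1.1; */
		}
		return 0;
	}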
+
static void kvp_get_ipconfig_info(char *if_name,
struct hv_kvp_ipaddr_value *buffer)
{
@@ -674,30 +814,7 @@ static void kvp_get_ipconfig_info(char *if_name,
char *p;
FILE *file;
- /*
- * Get the address of default gateway (ipv4).
- */
- sprintf(cmd, "%s %s", "ip route show dev", if_name);
- strcat(cmd, " | awk '/default/ {print $3 }'");
-
- /*
- * Execute the command to gather gateway info.
- */
- kvp_process_ipconfig_file(cmd, (char *)buffer->gate_way,
- (MAX_GATEWAY_SIZE * 2), INET_ADDRSTRLEN, 0);
-
- /*
- * Get the address of default gateway (ipv6).
- */
- sprintf(cmd, "%s %s", "ip -f inet6 route show dev", if_name);
- strcat(cmd, " | awk '/default/ {print $3 }'");
-
- /*
- * Execute the command to gather gateway info (ipv6).
- */
- kvp_process_ipconfig_file(cmd, (char *)buffer->gate_way,
- (MAX_GATEWAY_SIZE * 2), INET6_ADDRSTRLEN, 1);
-
+ kvp_get_gateway(buffer->gate_way, sizeof(buffer->gate_way));
/*
* Gather the DNS state.
@@ -714,7 +831,7 @@ static void kvp_get_ipconfig_info(char *if_name,
* .
*/
- sprintf(cmd, KVP_SCRIPTS_PATH "%s", "hv_get_dns_info");
+ sprintf(cmd, "exec %s %s", KVP_SCRIPTS_PATH "hv_get_dns_info", if_name);
/*
* Execute the command to gather DNS info.
@@ -731,7 +848,7 @@ static void kvp_get_ipconfig_info(char *if_name,
* Enabled: DHCP enabled.
*/
- sprintf(cmd, KVP_SCRIPTS_PATH "%s %s", "hv_get_dhcp_info", if_name);
+ sprintf(cmd, "exec %s %s", KVP_SCRIPTS_PATH "hv_get_dhcp_info", if_name);
file = popen(cmd, "r");
if (file == NULL)
@@ -1171,6 +1288,18 @@ static int process_ip_string(FILE *f, char *ip_string, int type)
return 0;
}
+int ip_version_check(const char *input_addr)
+{
+ struct in6_addr addr;
+
+ if (inet_pton(AF_INET, input_addr, &addr))
+ return IPV4;
+ else if (inet_pton(AF_INET6, input_addr, &addr))
+ return IPV6;
+
+ return -EINVAL;
+}
+
/*
  * Only IPv4 subnet strings need to be converted to plen
  * For IPv6 the subnet is already provided in plen format
@@ -1197,14 +1326,75 @@ static int kvp_subnet_to_plen(char *subnet_addr_str)
return plen;
}
+static int process_dns_gateway_nm(FILE *f, char *ip_string, int type,
+ int ip_sec)
+{
+ char addr[INET6_ADDRSTRLEN], *output_str;
+ int ip_offset = 0, error = 0, ip_ver;
+ char *param_name;
+
+ if (type == DNS)
+ param_name = "dns";
+ else if (type == GATEWAY)
+ param_name = "gateway";
+ else
+ return -EINVAL;
+
+ output_str = (char *)calloc(OUTSTR_BUF_SIZE, sizeof(char));
+ if (!output_str)
+ return -ENOMEM;
+
+ while (1) {
+ memset(addr, 0, sizeof(addr));
+
+ if (!parse_ip_val_buffer(ip_string, &ip_offset, addr,
+ (MAX_IP_ADDR_SIZE * 2)))
+ break;
+
+ ip_ver = ip_version_check(addr);
+ if (ip_ver < 0)
+ continue;
+
+ if ((ip_ver == IPV4 && ip_sec == IPV4) ||
+ (ip_ver == IPV6 && ip_sec == IPV6)) {
+ /*
+			 * do a bounds check to avoid out-of-bounds writes
+ */
+ if ((OUTSTR_BUF_SIZE - strlen(output_str)) >
+ (strlen(addr) + 1)) {
+ strncat(output_str, addr,
+ OUTSTR_BUF_SIZE -
+ strlen(output_str) - 1);
+ strncat(output_str, ",",
+ OUTSTR_BUF_SIZE -
+ strlen(output_str) - 1);
+ }
+ } else {
+ continue;
+ }
+ }
+
+ if (strlen(output_str)) {
+ /*
+		 * This is to get rid of the extra comma character
+		 * at the end of the string
+ */
+ output_str[strlen(output_str) - 1] = '\0';
+ error = fprintf(f, "%s=%s\n", param_name, output_str);
+ }
+
+ free(output_str);
+ return error;
+}
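
With process_dns_gateway_nm() above, each address family ends up with one comma-separated value per parameter in the generated keyfile. Purely for illustration, with made-up addresses, the [ipv4] section could contain lines such as:

	gateway=192.168.1.1
	dns=10.0.0.1,10.0.0.2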
+
static int process_ip_string_nm(FILE *f, char *ip_string, char *subnet,
- int is_ipv6)
+ int ip_sec)
{
char addr[INET6_ADDRSTRLEN];
char subnet_addr[INET6_ADDRSTRLEN];
- int error, i = 0;
+ int error = 0, i = 0;
int ip_offset = 0, subnet_offset = 0;
- int plen;
+ int plen, ip_ver;
memset(addr, 0, sizeof(addr));
memset(subnet_addr, 0, sizeof(subnet_addr));
@@ -1216,10 +1406,16 @@ static int process_ip_string_nm(FILE *f, char *ip_string, char *subnet,
subnet_addr,
(MAX_IP_ADDR_SIZE *
2))) {
- if (!is_ipv6)
+ ip_ver = ip_version_check(addr);
+ if (ip_ver < 0)
+ continue;
+
+ if (ip_ver == IPV4 && ip_sec == IPV4)
plen = kvp_subnet_to_plen((char *)subnet_addr);
- else
+ else if (ip_ver == IPV6 && ip_sec == IPV6)
plen = atoi(subnet_addr);
+ else
+ continue;
if (plen < 0)
return plen;
@@ -1233,17 +1429,16 @@ static int process_ip_string_nm(FILE *f, char *ip_string, char *subnet,
memset(subnet_addr, 0, sizeof(subnet_addr));
}
- return 0;
+ return error;
}
static int kvp_set_ip_info(char *if_name, struct hv_kvp_ipaddr_value *new_val)
{
- int error = 0;
+ int error = 0, ip_ver;
char if_filename[PATH_MAX];
char nm_filename[PATH_MAX];
FILE *ifcfg_file, *nmfile;
char cmd[PATH_MAX];
- int is_ipv6 = 0;
char *mac_addr;
int str_len;
@@ -1421,52 +1616,94 @@ static int kvp_set_ip_info(char *if_name, struct hv_kvp_ipaddr_value *new_val)
if (error)
goto setval_error;
- if (new_val->addr_family & ADDR_FAMILY_IPV6) {
- error = fprintf(nmfile, "\n[ipv6]\n");
- if (error < 0)
- goto setval_error;
- is_ipv6 = 1;
- } else {
- error = fprintf(nmfile, "\n[ipv4]\n");
- if (error < 0)
- goto setval_error;
- }
-
/*
* Now we populate the keyfile format
+ *
+ * The keyfile format expects the IPv6 and IPv4 configuration in
+ * different sections. Therefore we iterate through the list twice,
+	 * once to populate the IPv4 section and a second time for IPv6.
*/
+ ip_ver = IPV4;
+ do {
+ if (ip_ver == IPV4) {
+ error = fprintf(nmfile, "\n[ipv4]\n");
+ if (error < 0)
+ goto setval_error;
+ } else {
+ error = fprintf(nmfile, "\n[ipv6]\n");
+ if (error < 0)
+ goto setval_error;
+ }
- if (new_val->dhcp_enabled) {
- error = kvp_write_file(nmfile, "method", "", "auto");
- if (error < 0)
- goto setval_error;
- } else {
- error = kvp_write_file(nmfile, "method", "", "manual");
+ /*
+ * Write the configuration for ipaddress, netmask, gateway and
+ * name services
+ */
+ error = process_ip_string_nm(nmfile, (char *)new_val->ip_addr,
+ (char *)new_val->sub_net,
+ ip_ver);
if (error < 0)
goto setval_error;
- }
- /*
- * Write the configuration for ipaddress, netmask, gateway and
- * name services
- */
- error = process_ip_string_nm(nmfile, (char *)new_val->ip_addr,
- (char *)new_val->sub_net, is_ipv6);
- if (error < 0)
- goto setval_error;
+ /*
+		 * As dhcp_enabled is only valid for ipv4, we do not set dhcp
+		 * methods for ipv6 based on the dhcp_enabled flag.
+		 *
+		 * For ipv4, set method to manual only when dhcp_enabled is
+		 * false and specific ipv4 addresses are configured. If
+		 * dhcp_enabled is false and no ipv4 addresses are configured,
+		 * set method to 'disabled'.
+ *
+ * For ipv6, set method to manual when we configure ipv6
+ * addresses. Otherwise set method to 'auto' so that SLAAC from
+ * RA may be used.
+ */
+ if (ip_ver == IPV4) {
+ if (new_val->dhcp_enabled) {
+ error = kvp_write_file(nmfile, "method", "",
+ "auto");
+ if (error < 0)
+ goto setval_error;
+ } else if (error) {
+ error = kvp_write_file(nmfile, "method", "",
+ "manual");
+ if (error < 0)
+ goto setval_error;
+ } else {
+ error = kvp_write_file(nmfile, "method", "",
+ "disabled");
+ if (error < 0)
+ goto setval_error;
+ }
+ } else if (ip_ver == IPV6) {
+ if (error) {
+ error = kvp_write_file(nmfile, "method", "",
+ "manual");
+ if (error < 0)
+ goto setval_error;
+ } else {
+ error = kvp_write_file(nmfile, "method", "",
+ "auto");
+ if (error < 0)
+ goto setval_error;
+ }
+ }
- /* we do not want ipv4 addresses in ipv6 section and vice versa */
- if (is_ipv6 != is_ipv4((char *)new_val->gate_way)) {
- error = fprintf(nmfile, "gateway=%s\n", (char *)new_val->gate_way);
+ error = process_dns_gateway_nm(nmfile,
+ (char *)new_val->gate_way,
+ GATEWAY, ip_ver);
if (error < 0)
goto setval_error;
- }
- if (is_ipv6 != is_ipv4((char *)new_val->dns_addr)) {
- error = fprintf(nmfile, "dns=%s\n", (char *)new_val->dns_addr);
+ error = process_dns_gateway_nm(nmfile,
+ (char *)new_val->dns_addr, DNS,
+ ip_ver);
if (error < 0)
goto setval_error;
- }
+
+ ip_ver++;
+ } while (ip_ver < IP_TYPE_MAX);
+
fclose(nmfile);
fclose(ifcfg_file);
@@ -1475,8 +1712,9 @@ static int kvp_set_ip_info(char *if_name, struct hv_kvp_ipaddr_value *new_val)
* invoke the external script to do its magic.
*/
- str_len = snprintf(cmd, sizeof(cmd), KVP_SCRIPTS_PATH "%s %s %s",
- "hv_set_ifconfig", if_filename, nm_filename);
+ str_len = snprintf(cmd, sizeof(cmd), "exec %s %s %s",
+ KVP_SCRIPTS_PATH "hv_set_ifconfig",
+ if_filename, nm_filename);
/*
* This is a little overcautious, but it's necessary to suppress some
* false warnings from gcc 8.0.1.
@@ -1530,6 +1768,7 @@ void print_usage(char *argv[])
fprintf(stderr, "Usage: %s [options]\n"
"Options are:\n"
" -n, --no-daemon stay in foreground, don't daemonize\n"
+	 " -d, --debug         Enable debug logs (syslog debug by default)\n"
" -h, --help print this help\n", argv[0]);
}
@@ -1551,10 +1790,11 @@ int main(int argc, char *argv[])
static struct option long_options[] = {
{"help", no_argument, 0, 'h' },
{"no-daemon", no_argument, 0, 'n' },
+ {"debug", no_argument, 0, 'd' },
{0, 0, 0, 0 }
};
- while ((opt = getopt_long(argc, argv, "hn", long_options,
+ while ((opt = getopt_long(argc, argv, "hnd", long_options,
&long_index)) != -1) {
switch (opt) {
case 'n':
@@ -1563,6 +1803,9 @@ int main(int argc, char *argv[])
case 'h':
print_usage(argv);
exit(0);
+ case 'd':
+ debug = 1;
+ break;
default:
print_usage(argv);
exit(EXIT_FAILURE);
@@ -1585,6 +1828,9 @@ int main(int argc, char *argv[])
*/
kvp_get_domain_name(full_domain_name, sizeof(full_domain_name));
+ if (debug)
+ syslog(LOG_INFO, "Logging debug info in syslog(debug)");
+
if (kvp_file_init()) {
syslog(LOG_ERR, "Failed to initialize the pools");
exit(EXIT_FAILURE);
diff --git a/tools/hv/hv_set_ifconfig.sh b/tools/hv/hv_set_ifconfig.sh
index 440a91b35823..2f8baed2b8f7 100755
--- a/tools/hv/hv_set_ifconfig.sh
+++ b/tools/hv/hv_set_ifconfig.sh
@@ -81,7 +81,7 @@ echo "ONBOOT=yes" >> $1
cp $1 /etc/sysconfig/network-scripts/
-chmod 600 $2
+umask 0177
interface=$(echo $2 | awk -F - '{ print $2 }')
filename="${2##*/}"
diff --git a/tools/hv/lsvmbus b/tools/hv/lsvmbus
index 099f2c44dbed..f83698f14da2 100644..100755
--- a/tools/hv/lsvmbus
+++ b/tools/hv/lsvmbus
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
import os
diff --git a/tools/hv/vmbus_bufring.c b/tools/hv/vmbus_bufring.c
new file mode 100644
index 000000000000..bac32c1109df
--- /dev/null
+++ b/tools/hv/vmbus_bufring.c
@@ -0,0 +1,318 @@
+// SPDX-License-Identifier: BSD-3-Clause
+/*
+ * Copyright (c) 2009-2012,2016,2023 Microsoft Corp.
+ * Copyright (c) 2012 NetApp Inc.
+ * Copyright (c) 2012 Citrix Inc.
+ * All rights reserved.
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <emmintrin.h>
+#include <linux/limits.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/uio.h>
+#include <unistd.h>
+#include "vmbus_bufring.h"
+
+/**
+ * Compiler barrier.
+ *
+ * Guarantees that operation reordering does not occur at compile time
+ * for operations directly before and after the barrier.
+ */
+#define rte_compiler_barrier() ({ asm volatile ("" : : : "memory"); })
+
+#define VMBUS_RQST_ERROR 0xFFFFFFFFFFFFFFFF
+#define ALIGN(val, align) ((typeof(val))((val) & (~((typeof(val))((align) - 1)))))
+
+void *vmbus_uio_map(int *fd, int size)
+{
+ void *map;
+
+ map = mmap(NULL, 2 * size, PROT_READ | PROT_WRITE, MAP_SHARED, *fd, 0);
+ if (map == MAP_FAILED)
+ return NULL;
+
+ return map;
+}
+
+/* Increase bufring index by inc with wraparound */
+static inline uint32_t vmbus_br_idxinc(uint32_t idx, uint32_t inc, uint32_t sz)
+{
+ idx += inc;
+ if (idx >= sz)
+ idx -= sz;
+
+ return idx;
+}
+
+void vmbus_br_setup(struct vmbus_br *br, void *buf, unsigned int blen)
+{
+ br->vbr = buf;
+ br->windex = br->vbr->windex;
+ br->dsize = blen - sizeof(struct vmbus_bufring);
+}
+
+static inline __always_inline void
+rte_smp_mb(void)
+{
+ asm volatile("lock addl $0, -128(%%rsp); " ::: "memory");
+}
+
+static inline int
+rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
+{
+ uint8_t res;
+
+ asm volatile("lock ; "
+ "cmpxchgl %[src], %[dst];"
+ "sete %[res];"
+ : [res] "=a" (res), /* output */
+ [dst] "=m" (*dst)
+ : [src] "r" (src), /* input */
+ "a" (exp),
+ "m" (*dst)
+ : "memory"); /* no-clobber list */
+ return res;
+}
+
+static inline uint32_t
+vmbus_txbr_copyto(const struct vmbus_br *tbr, uint32_t windex,
+ const void *src0, uint32_t cplen)
+{
+ uint8_t *br_data = tbr->vbr->data;
+ uint32_t br_dsize = tbr->dsize;
+ const uint8_t *src = src0;
+
+ /* XXX use double mapping like Linux kernel? */
+ if (cplen > br_dsize - windex) {
+ uint32_t fraglen = br_dsize - windex;
+
+ /* Wrap-around detected */
+ memcpy(br_data + windex, src, fraglen);
+ memcpy(br_data, src + fraglen, cplen - fraglen);
+ } else {
+ memcpy(br_data + windex, src, cplen);
+ }
+
+ return vmbus_br_idxinc(windex, cplen, br_dsize);
+}
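
A worked example of the wrap-around branch in vmbus_txbr_copyto(), using assumed numbers rather than anything captured from a ring:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* Assumed 4096-byte data area with the write index near the end. */
		uint32_t br_dsize = 4096, windex = 4090, cplen = 16;
		uint32_t fraglen = br_dsize - windex;	/* 6 bytes fit before the wrap */

		printf("copy %u bytes at offset %u, then %u bytes at offset 0\n",
		       fraglen, windex, cplen - fraglen);
		printf("new windex = %u\n", (windex + cplen) % br_dsize);	/* 10 */
		return 0;
	}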
+
+/*
+ * Write scattered channel packet to TX bufring.
+ *
+ * The offset of this channel packet is written as a 64-bit value
+ * immediately after this channel packet.
+ *
+ * The write goes through three stages:
+ * 1. Reserve space in ring buffer for the new data.
+ * Writer atomically moves priv_write_index.
+ * 2. Copy the new data into the ring.
+ * 3. Update the tail of the ring (visible to host) that indicates
+ *     the next read location. The writer updates write_index.
+ */
+static int
+vmbus_txbr_write(struct vmbus_br *tbr, const struct iovec iov[], int iovlen)
+{
+ struct vmbus_bufring *vbr = tbr->vbr;
+ uint32_t ring_size = tbr->dsize;
+ uint32_t old_windex, next_windex, windex, total;
+ uint64_t save_windex;
+ int i;
+
+ total = 0;
+ for (i = 0; i < iovlen; i++)
+ total += iov[i].iov_len;
+ total += sizeof(save_windex);
+
+ /* Reserve space in ring */
+ do {
+ uint32_t avail;
+
+ /* Get current free location */
+ old_windex = tbr->windex;
+
+ /* Prevent compiler reordering this with calculation */
+ rte_compiler_barrier();
+
+ avail = vmbus_br_availwrite(tbr, old_windex);
+
+ /* If not enough space in ring, then tell caller. */
+ if (avail <= total)
+ return -EAGAIN;
+
+ next_windex = vmbus_br_idxinc(old_windex, total, ring_size);
+
+ /* Atomic update of next write_index for other threads */
+ } while (!rte_atomic32_cmpset(&tbr->windex, old_windex, next_windex));
+
+ /* Space from old..new is now reserved */
+ windex = old_windex;
+ for (i = 0; i < iovlen; i++)
+ windex = vmbus_txbr_copyto(tbr, windex, iov[i].iov_base, iov[i].iov_len);
+
+ /* Set the offset of the current channel packet. */
+ save_windex = ((uint64_t)old_windex) << 32;
+ windex = vmbus_txbr_copyto(tbr, windex, &save_windex,
+ sizeof(save_windex));
+
+ /* The region reserved should match region used */
+ if (windex != next_windex)
+ return -EINVAL;
+
+ /* Ensure that data is available before updating host index */
+ rte_compiler_barrier();
+
+	/* Check in for our reservation; wait for our turn to update the host */
+ while (!rte_atomic32_cmpset(&vbr->windex, old_windex, next_windex))
+ _mm_pause();
+
+ return 0;
+}
+
+int rte_vmbus_chan_send(struct vmbus_br *txbr, uint16_t type, void *data,
+ uint32_t dlen, uint32_t flags)
+{
+ struct vmbus_chanpkt pkt;
+ unsigned int pktlen, pad_pktlen;
+ const uint32_t hlen = sizeof(pkt);
+ uint64_t pad = 0;
+ struct iovec iov[3];
+ int error;
+
+ pktlen = hlen + dlen;
+ pad_pktlen = ALIGN(pktlen, sizeof(uint64_t));
+
+ pkt.hdr.type = type;
+ pkt.hdr.flags = flags;
+ pkt.hdr.hlen = hlen >> VMBUS_CHANPKT_SIZE_SHIFT;
+ pkt.hdr.tlen = pad_pktlen >> VMBUS_CHANPKT_SIZE_SHIFT;
+ pkt.hdr.xactid = VMBUS_RQST_ERROR;
+
+ iov[0].iov_base = &pkt;
+ iov[0].iov_len = hlen;
+ iov[1].iov_base = data;
+ iov[1].iov_len = dlen;
+ iov[2].iov_base = &pad;
+ iov[2].iov_len = pad_pktlen - pktlen;
+
+ error = vmbus_txbr_write(txbr, iov, 3);
+
+ return error;
+}
+
+static inline uint32_t
+vmbus_rxbr_copyfrom(const struct vmbus_br *rbr, uint32_t rindex,
+ void *dst0, size_t cplen)
+{
+ const uint8_t *br_data = rbr->vbr->data;
+ uint32_t br_dsize = rbr->dsize;
+ uint8_t *dst = dst0;
+
+ if (cplen > br_dsize - rindex) {
+ uint32_t fraglen = br_dsize - rindex;
+
+ /* Wrap-around detected. */
+ memcpy(dst, br_data + rindex, fraglen);
+ memcpy(dst + fraglen, br_data, cplen - fraglen);
+ } else {
+ memcpy(dst, br_data + rindex, cplen);
+ }
+
+ return vmbus_br_idxinc(rindex, cplen, br_dsize);
+}
+
+/* Copy data from receive ring but don't change index */
+static int
+vmbus_rxbr_peek(const struct vmbus_br *rbr, void *data, size_t dlen)
+{
+ uint32_t avail;
+
+ /*
+	 * At least the requested data and the 64-bit channel packet
+	 * offset should be available.
+ */
+ avail = vmbus_br_availread(rbr);
+ if (avail < dlen + sizeof(uint64_t))
+ return -EAGAIN;
+
+ vmbus_rxbr_copyfrom(rbr, rbr->vbr->rindex, data, dlen);
+ return 0;
+}
+
+/*
+ * Copy data from receive ring and change index
+ * NOTE:
+ * We assume (dlen + skip) == sizeof(channel packet).
+ */
+static int
+vmbus_rxbr_read(struct vmbus_br *rbr, void *data, size_t dlen, size_t skip)
+{
+ struct vmbus_bufring *vbr = rbr->vbr;
+ uint32_t br_dsize = rbr->dsize;
+ uint32_t rindex;
+
+ if (vmbus_br_availread(rbr) < dlen + skip + sizeof(uint64_t))
+ return -EAGAIN;
+
+ /* Record where host was when we started read (for debug) */
+ rbr->windex = rbr->vbr->windex;
+
+ /*
+ * Copy channel packet from RX bufring.
+ */
+ rindex = vmbus_br_idxinc(rbr->vbr->rindex, skip, br_dsize);
+ rindex = vmbus_rxbr_copyfrom(rbr, rindex, data, dlen);
+
+ /*
+	 * Discard this channel packet's 64-bit offset, which is useless to us.
+ */
+ rindex = vmbus_br_idxinc(rindex, sizeof(uint64_t), br_dsize);
+
+ /* Update the read index _after_ the channel packet is fetched. */
+ rte_compiler_barrier();
+
+ vbr->rindex = rindex;
+
+ return 0;
+}
+
+int rte_vmbus_chan_recv_raw(struct vmbus_br *rxbr,
+ void *data, uint32_t *len)
+{
+ struct vmbus_chanpkt_hdr pkt;
+ uint32_t dlen, bufferlen = *len;
+ int error;
+
+ error = vmbus_rxbr_peek(rxbr, &pkt, sizeof(pkt));
+ if (error)
+ return error;
+
+ if (unlikely(pkt.hlen < VMBUS_CHANPKT_HLEN_MIN))
+ /* XXX this channel is dead actually. */
+ return -EIO;
+
+ if (unlikely(pkt.hlen > pkt.tlen))
+ return -EIO;
+
+	/* Lengths are in quad words (8-byte units) */
+ dlen = pkt.tlen << VMBUS_CHANPKT_SIZE_SHIFT;
+ *len = dlen;
+
+ /* If caller buffer is not large enough */
+ if (unlikely(dlen > bufferlen))
+ return -ENOBUFS;
+
+ /* Read data and skip packet header */
+ error = vmbus_rxbr_read(rxbr, data, dlen, 0);
+ if (error)
+ return error;
+
+ /* Return the number of bytes read */
+ return dlen + sizeof(uint64_t);
+}
diff --git a/tools/hv/vmbus_bufring.h b/tools/hv/vmbus_bufring.h
new file mode 100644
index 000000000000..6e7caacfff57
--- /dev/null
+++ b/tools/hv/vmbus_bufring.h
@@ -0,0 +1,158 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+
+#ifndef _VMBUS_BUF_H_
+#define _VMBUS_BUF_H_
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#define __packed __attribute__((__packed__))
+#define unlikely(x) __builtin_expect(!!(x), 0)
+
+#define ICMSGHDRFLAG_TRANSACTION 1
+#define ICMSGHDRFLAG_REQUEST 2
+#define ICMSGHDRFLAG_RESPONSE 4
+
+#define IC_VERSION_NEGOTIATION_MAX_VER_COUNT 100
+#define ICMSG_HDR (sizeof(struct vmbuspipe_hdr) + sizeof(struct icmsg_hdr))
+#define ICMSG_NEGOTIATE_PKT_SIZE(icframe_vercnt, icmsg_vercnt) \
+ (ICMSG_HDR + sizeof(struct icmsg_negotiate) + \
+ (((icframe_vercnt) + (icmsg_vercnt)) * sizeof(struct ic_version)))
+
+/*
+ * Channel packets
+ */
+
+/* Channel packet flags */
+#define VMBUS_CHANPKT_TYPE_INBAND 0x0006
+#define VMBUS_CHANPKT_TYPE_RXBUF 0x0007
+#define VMBUS_CHANPKT_TYPE_GPA 0x0009
+#define VMBUS_CHANPKT_TYPE_COMP 0x000b
+
+#define VMBUS_CHANPKT_FLAG_NONE 0
+#define VMBUS_CHANPKT_FLAG_RC 0x0001 /* report completion */
+
+#define VMBUS_CHANPKT_SIZE_SHIFT 3
+#define VMBUS_CHANPKT_SIZE_ALIGN BIT(VMBUS_CHANPKT_SIZE_SHIFT)
+#define VMBUS_CHANPKT_HLEN_MIN \
+ (sizeof(struct vmbus_chanpkt_hdr) >> VMBUS_CHANPKT_SIZE_SHIFT)
+
+/*
+ * Buffer ring
+ */
+struct vmbus_bufring {
+ volatile uint32_t windex;
+ volatile uint32_t rindex;
+
+ /*
+ * Interrupt mask {0,1}
+ *
+	 * For the TX bufring, the host sets this to 1 when it is processing
+	 * the TX bufring, so that we can safely skip the TX event
+	 * notification to the host.
+	 *
+	 * For the RX bufring, once we set this to 1, the host will not
+	 * dispatch further interrupts to us, even if there is data
+	 * pending on the RX bufring. This effectively disables the
+ * interrupt of the channel to which this RX bufring is attached.
+ */
+ volatile uint32_t imask;
+
+ /*
+ * Win8 uses some of the reserved bits to implement
+ * interrupt driven flow management. On the send side
+ * we can request that the receiver interrupt the sender
+ * when the ring transitions from being full to being able
+ * to handle a message of size "pending_send_sz".
+ *
+ * Add necessary state for this enhancement.
+ */
+ volatile uint32_t pending_send;
+ uint32_t reserved1[12];
+
+ union {
+ struct {
+ uint32_t feat_pending_send_sz:1;
+ };
+ uint32_t value;
+ } feature_bits;
+
+	/* Pad it to rte_mem_page_size() so that data starts on a page boundary */
+ uint8_t reserved2[4028];
+
+ /*
+ * Ring data starts here + RingDataStartOffset
+ * !!! DO NOT place any fields below this !!!
+ */
+ uint8_t data[];
+} __packed;
+
+struct vmbus_br {
+ struct vmbus_bufring *vbr;
+ uint32_t dsize;
+ uint32_t windex; /* next available location */
+};
+
+struct vmbus_chanpkt_hdr {
+ uint16_t type; /* VMBUS_CHANPKT_TYPE_ */
+ uint16_t hlen; /* header len, in 8 bytes */
+ uint16_t tlen; /* total len, in 8 bytes */
+ uint16_t flags; /* VMBUS_CHANPKT_FLAG_ */
+ uint64_t xactid;
+} __packed;
+
+struct vmbus_chanpkt {
+ struct vmbus_chanpkt_hdr hdr;
+} __packed;
+
+struct vmbuspipe_hdr {
+ unsigned int flags;
+ unsigned int msgsize;
+} __packed;
+
+struct ic_version {
+ unsigned short major;
+ unsigned short minor;
+} __packed;
+
+struct icmsg_negotiate {
+ unsigned short icframe_vercnt;
+ unsigned short icmsg_vercnt;
+ unsigned int reserved;
+ struct ic_version icversion_data[]; /* any size array */
+} __packed;
+
+struct icmsg_hdr {
+ struct ic_version icverframe;
+ unsigned short icmsgtype;
+ struct ic_version icvermsg;
+ unsigned short icmsgsize;
+ unsigned int status;
+ unsigned char ictransaction_id;
+ unsigned char icflags;
+ unsigned char reserved[2];
+} __packed;
+
+int rte_vmbus_chan_recv_raw(struct vmbus_br *rxbr, void *data, uint32_t *len);
+int rte_vmbus_chan_send(struct vmbus_br *txbr, uint16_t type, void *data,
+ uint32_t dlen, uint32_t flags);
+void vmbus_br_setup(struct vmbus_br *br, void *buf, unsigned int blen);
+void *vmbus_uio_map(int *fd, int size);
+
+/* Amount of space available for write */
+static inline uint32_t vmbus_br_availwrite(const struct vmbus_br *br, uint32_t windex)
+{
+ uint32_t rindex = br->vbr->rindex;
+
+ if (windex >= rindex)
+ return br->dsize - (windex - rindex);
+ else
+ return rindex - windex;
+}
+
+static inline uint32_t vmbus_br_availread(const struct vmbus_br *br)
+{
+ return br->dsize - vmbus_br_availwrite(br, br->vbr->windex);
+}
+
+#endif /* !_VMBUS_BUF_H_ */
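
As a quick sanity check of the two ring-space helpers above, a numeric sketch with assumed indices (the real values live in the shared ring):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* Assumed snapshot: 4096-byte data area, rindex = 1000, windex = 3500. */
		uint32_t dsize = 4096, rindex = 1000, windex = 3500;
		uint32_t availwrite = (windex >= rindex) ? dsize - (windex - rindex)
							 : rindex - windex;	/* 1596 */
		uint32_t availread = dsize - availwrite;			/* 2500 */

		printf("availwrite=%u availread=%u\n", availwrite, availread);
		return 0;
	}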
diff --git a/tools/iio/Makefile b/tools/iio/Makefile
index fa720f062229..3bcce0b7d10f 100644
--- a/tools/iio/Makefile
+++ b/tools/iio/Makefile
@@ -58,7 +58,7 @@ $(OUTPUT)iio_generic_buffer: $(IIO_GENERIC_BUFFER_IN)
clean:
rm -f $(ALL_PROGRAMS)
rm -rf $(OUTPUT)include/linux/iio
- find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
+ find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete -o -name '\.*.cmd' -delete
install: $(ALL_PROGRAMS)
install -d -m 755 $(DESTDIR)$(bindir); \
diff --git a/tools/iio/iio_event_monitor.c b/tools/iio/iio_event_monitor.c
index 8073c9e4fe46..03ca33869ce8 100644
--- a/tools/iio/iio_event_monitor.c
+++ b/tools/iio/iio_event_monitor.c
@@ -63,6 +63,8 @@ static const char * const iio_chan_type_name_spec[] = {
[IIO_DELTA_VELOCITY] = "deltavelocity",
[IIO_COLORTEMP] = "colortemp",
[IIO_CHROMATICITY] = "chromaticity",
+ [IIO_ATTENTION] = "attention",
+ [IIO_ALTCURRENT] = "altcurrent",
};
static const char * const iio_ev_type_text[] = {
@@ -74,6 +76,7 @@ static const char * const iio_ev_type_text[] = {
[IIO_EV_TYPE_CHANGE] = "change",
[IIO_EV_TYPE_MAG_REFERENCED] = "mag_referenced",
[IIO_EV_TYPE_GESTURE] = "gesture",
+ [IIO_EV_TYPE_FAULT] = "fault",
};
static const char * const iio_ev_dir_text[] = {
@@ -82,6 +85,7 @@ static const char * const iio_ev_dir_text[] = {
[IIO_EV_DIR_FALLING] = "falling",
[IIO_EV_DIR_SINGLETAP] = "singletap",
[IIO_EV_DIR_DOUBLETAP] = "doubletap",
+ [IIO_EV_DIR_FAULT_OPENWIRE] = "openwire",
};
static const char * const iio_modifier_names[] = {
@@ -137,6 +141,10 @@ static const char * const iio_modifier_names[] = {
[IIO_MOD_PITCH] = "pitch",
[IIO_MOD_YAW] = "yaw",
[IIO_MOD_ROLL] = "roll",
+ [IIO_MOD_RMS] = "rms",
+ [IIO_MOD_ACTIVE] = "active",
+ [IIO_MOD_REACTIVE] = "reactive",
+ [IIO_MOD_APPARENT] = "apparent",
};
static bool event_is_known(struct iio_event_data *event)
@@ -183,6 +191,8 @@ static bool event_is_known(struct iio_event_data *event)
case IIO_DELTA_VELOCITY:
case IIO_COLORTEMP:
case IIO_CHROMATICITY:
+ case IIO_ATTENTION:
+ case IIO_ALTCURRENT:
break;
default:
return false;
@@ -234,6 +244,10 @@ static bool event_is_known(struct iio_event_data *event)
case IIO_MOD_PM4:
case IIO_MOD_PM10:
case IIO_MOD_O2:
+ case IIO_MOD_RMS:
+ case IIO_MOD_ACTIVE:
+ case IIO_MOD_REACTIVE:
+ case IIO_MOD_APPARENT:
break;
default:
return false;
@@ -247,6 +261,7 @@ static bool event_is_known(struct iio_event_data *event)
case IIO_EV_TYPE_MAG_ADAPTIVE:
case IIO_EV_TYPE_CHANGE:
case IIO_EV_TYPE_GESTURE:
+ case IIO_EV_TYPE_FAULT:
break;
default:
return false;
@@ -258,6 +273,7 @@ static bool event_is_known(struct iio_event_data *event)
case IIO_EV_DIR_FALLING:
case IIO_EV_DIR_SINGLETAP:
case IIO_EV_DIR_DOUBLETAP:
+ case IIO_EV_DIR_FAULT_OPENWIRE:
case IIO_EV_DIR_NONE:
break;
default:
@@ -449,6 +465,7 @@ error_free_chrdev_name:
enable_events(dev_dir_name, 0);
free(chrdev_name);
+ free(dev_dir_name);
return ret;
}
diff --git a/tools/iio/iio_generic_buffer.c b/tools/iio/iio_generic_buffer.c
index 0d0a7a19d6f9..bc82bb6a7a2a 100644
--- a/tools/iio/iio_generic_buffer.c
+++ b/tools/iio/iio_generic_buffer.c
@@ -335,7 +335,7 @@ static const struct option longopts[] = {
{ "device-num", 1, 0, 'N' },
{ "trigger-name", 1, 0, 't' },
{ "trigger-num", 1, 0, 'T' },
- { },
+ { }
};
int main(int argc, char **argv)
@@ -498,6 +498,10 @@ int main(int argc, char **argv)
return -ENOMEM;
}
trigger_name = malloc(IIO_MAX_NAME_LENGTH);
+ if (!trigger_name) {
+ ret = -ENOMEM;
+ goto error;
+ }
ret = read_sysfs_string("name", trig_dev_name, trigger_name);
free(trig_dev_name);
if (ret < 0) {
diff --git a/tools/iio/iio_utils.c b/tools/iio/iio_utils.c
index 6a00a6eecaef..c5c5082cb24e 100644
--- a/tools/iio/iio_utils.c
+++ b/tools/iio/iio_utils.c
@@ -376,7 +376,7 @@ int build_channel_array(const char *device_dir, int buffer_idx,
goto error_close_dir;
}
- seekdir(dp, 0);
+ rewinddir(dp);
while (ent = readdir(dp), ent) {
if (strcmp(ent->d_name + strlen(ent->d_name) - strlen("_en"),
"_en") == 0) {
diff --git a/tools/include/asm-generic/bitops/__ffs.h b/tools/include/asm-generic/bitops/__ffs.h
index 9d1310519497..2d94c1e9b2f3 100644
--- a/tools/include/asm-generic/bitops/__ffs.h
+++ b/tools/include/asm-generic/bitops/__ffs.h
@@ -11,9 +11,9 @@
*
* Undefined if no bit exists, so code should check against 0 first.
*/
-static __always_inline unsigned long __ffs(unsigned long word)
+static __always_inline unsigned int __ffs(unsigned long word)
{
- int num = 0;
+ unsigned int num = 0;
#if __BITS_PER_LONG == 64
if ((word & 0xffffffff) == 0) {
diff --git a/tools/include/asm-generic/bitops/__fls.h b/tools/include/asm-generic/bitops/__fls.h
index 03f721a8a2b1..e974ec932ec1 100644
--- a/tools/include/asm-generic/bitops/__fls.h
+++ b/tools/include/asm-generic/bitops/__fls.h
@@ -5,14 +5,14 @@
#include <asm/types.h>
/**
- * __fls - find last (most-significant) set bit in a long word
+ * generic___fls - find last (most-significant) set bit in a long word
* @word: the word to search
*
* Undefined if no set bit exists, so code should check against 0 first.
*/
-static __always_inline unsigned long __fls(unsigned long word)
+static __always_inline unsigned int generic___fls(unsigned long word)
{
- int num = BITS_PER_LONG - 1;
+ unsigned int num = BITS_PER_LONG - 1;
#if BITS_PER_LONG == 64
if (!(word & (~0ul << 32))) {
@@ -41,4 +41,8 @@ static __always_inline unsigned long __fls(unsigned long word)
return num;
}
+#ifndef __HAVE_ARCH___FLS
+#define __fls(word) generic___fls(word)
+#endif
+
#endif /* _ASM_GENERIC_BITOPS___FLS_H_ */
diff --git a/tools/include/asm-generic/bitops/fls.h b/tools/include/asm-generic/bitops/fls.h
index b168bb10e1be..26f3ce1dd6e4 100644
--- a/tools/include/asm-generic/bitops/fls.h
+++ b/tools/include/asm-generic/bitops/fls.h
@@ -3,14 +3,14 @@
#define _ASM_GENERIC_BITOPS_FLS_H_
/**
- * fls - find last (most-significant) bit set
+ * generic_fls - find last (most-significant) bit set
* @x: the word to search
*
* This is defined the same way as ffs.
* Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32.
*/
-static __always_inline int fls(unsigned int x)
+static __always_inline int generic_fls(unsigned int x)
{
int r = 32;
@@ -39,4 +39,8 @@ static __always_inline int fls(unsigned int x)
return r;
}
+#ifndef __HAVE_ARCH_FLS
+#define fls(x) generic_fls(x)
+#endif
+
#endif /* _ASM_GENERIC_BITOPS_FLS_H_ */
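
The kernel-doc above already spells out the edge cases (fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32); a standalone re-implementation, written only to illustrate those values and not taken from the header itself, confirms them:

	#include <stdio.h>

	/* Illustrative equivalent of the generic fls() fallback. */
	static int my_fls(unsigned int x)
	{
		int r = 32;

		if (!x)
			return 0;
		while (!(x & 0x80000000u)) {
			x <<= 1;
			r--;
		}
		return r;
	}

	int main(void)
	{
		printf("%d %d %d\n", my_fls(0), my_fls(1), my_fls(0x80000000u));	/* 0 1 32 */
		return 0;
	}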
diff --git a/tools/include/asm-generic/io.h b/tools/include/asm-generic/io.h
new file mode 100644
index 000000000000..e5a0b07ad452
--- /dev/null
+++ b/tools/include/asm-generic/io.h
@@ -0,0 +1,482 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOOLS_ASM_GENERIC_IO_H
+#define _TOOLS_ASM_GENERIC_IO_H
+
+#include <asm/barrier.h>
+#include <asm/byteorder.h>
+
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#ifndef mmiowb_set_pending
+#define mmiowb_set_pending() do { } while (0)
+#endif
+
+#ifndef __io_br
+#define __io_br() barrier()
+#endif
+
+/* prevent prefetching of coherent DMA data ahead of a dma-complete */
+#ifndef __io_ar
+#ifdef rmb
+#define __io_ar(v) rmb()
+#else
+#define __io_ar(v) barrier()
+#endif
+#endif
+
+/* flush writes to coherent DMA data before possibly triggering a DMA read */
+#ifndef __io_bw
+#ifdef wmb
+#define __io_bw() wmb()
+#else
+#define __io_bw() barrier()
+#endif
+#endif
+
+/* serialize device access against a spin_unlock, usually handled there. */
+#ifndef __io_aw
+#define __io_aw() mmiowb_set_pending()
+#endif
+
+#ifndef __io_pbw
+#define __io_pbw() __io_bw()
+#endif
+
+#ifndef __io_paw
+#define __io_paw() __io_aw()
+#endif
+
+#ifndef __io_pbr
+#define __io_pbr() __io_br()
+#endif
+
+#ifndef __io_par
+#define __io_par(v) __io_ar(v)
+#endif
+
+#ifndef _THIS_IP_
+#define _THIS_IP_ 0
+#endif
+
+static inline void log_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
+ unsigned long caller_addr, unsigned long caller_addr0) {}
+static inline void log_post_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
+ unsigned long caller_addr, unsigned long caller_addr0) {}
+static inline void log_read_mmio(u8 width, const volatile void __iomem *addr,
+ unsigned long caller_addr, unsigned long caller_addr0) {}
+static inline void log_post_read_mmio(u64 val, u8 width, const volatile void __iomem *addr,
+ unsigned long caller_addr, unsigned long caller_addr0) {}
+
+/*
+ * __raw_{read,write}{b,w,l,q}() access memory in native endianness.
+ *
+ * On some architectures memory mapped IO needs to be accessed differently.
+ * On the simple architectures, we just read/write the memory location
+ * directly.
+ */
+
+#ifndef __raw_readb
+#define __raw_readb __raw_readb
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+ return *(const volatile u8 __force *)addr;
+}
+#endif
+
+#ifndef __raw_readw
+#define __raw_readw __raw_readw
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+ return *(const volatile u16 __force *)addr;
+}
+#endif
+
+#ifndef __raw_readl
+#define __raw_readl __raw_readl
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+ return *(const volatile u32 __force *)addr;
+}
+#endif
+
+#ifndef __raw_readq
+#define __raw_readq __raw_readq
+static inline u64 __raw_readq(const volatile void __iomem *addr)
+{
+ return *(const volatile u64 __force *)addr;
+}
+#endif
+
+#ifndef __raw_writeb
+#define __raw_writeb __raw_writeb
+static inline void __raw_writeb(u8 value, volatile void __iomem *addr)
+{
+ *(volatile u8 __force *)addr = value;
+}
+#endif
+
+#ifndef __raw_writew
+#define __raw_writew __raw_writew
+static inline void __raw_writew(u16 value, volatile void __iomem *addr)
+{
+ *(volatile u16 __force *)addr = value;
+}
+#endif
+
+#ifndef __raw_writel
+#define __raw_writel __raw_writel
+static inline void __raw_writel(u32 value, volatile void __iomem *addr)
+{
+ *(volatile u32 __force *)addr = value;
+}
+#endif
+
+#ifndef __raw_writeq
+#define __raw_writeq __raw_writeq
+static inline void __raw_writeq(u64 value, volatile void __iomem *addr)
+{
+ *(volatile u64 __force *)addr = value;
+}
+#endif
+
+/*
+ * {read,write}{b,w,l,q}() access little endian memory and return result in
+ * native endianness.
+ */
+
+#ifndef readb
+#define readb readb
+static inline u8 readb(const volatile void __iomem *addr)
+{
+ u8 val;
+
+ log_read_mmio(8, addr, _THIS_IP_, _RET_IP_);
+ __io_br();
+ val = __raw_readb(addr);
+ __io_ar(val);
+ log_post_read_mmio(val, 8, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#ifndef readw
+#define readw readw
+static inline u16 readw(const volatile void __iomem *addr)
+{
+ u16 val;
+
+ log_read_mmio(16, addr, _THIS_IP_, _RET_IP_);
+ __io_br();
+ val = __le16_to_cpu((__le16 __force)__raw_readw(addr));
+ __io_ar(val);
+ log_post_read_mmio(val, 16, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#ifndef readl
+#define readl readl
+static inline u32 readl(const volatile void __iomem *addr)
+{
+ u32 val;
+
+ log_read_mmio(32, addr, _THIS_IP_, _RET_IP_);
+ __io_br();
+ val = __le32_to_cpu((__le32 __force)__raw_readl(addr));
+ __io_ar(val);
+ log_post_read_mmio(val, 32, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#ifndef readq
+#define readq readq
+static inline u64 readq(const volatile void __iomem *addr)
+{
+ u64 val;
+
+ log_read_mmio(64, addr, _THIS_IP_, _RET_IP_);
+ __io_br();
+ val = __le64_to_cpu((__le64 __force)__raw_readq(addr));
+ __io_ar(val);
+ log_post_read_mmio(val, 64, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#ifndef writeb
+#define writeb writeb
+static inline void writeb(u8 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 8, addr, _THIS_IP_, _RET_IP_);
+ __io_bw();
+ __raw_writeb(value, addr);
+ __io_aw();
+ log_post_write_mmio(value, 8, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+#ifndef writew
+#define writew writew
+static inline void writew(u16 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 16, addr, _THIS_IP_, _RET_IP_);
+ __io_bw();
+ __raw_writew((u16 __force)cpu_to_le16(value), addr);
+ __io_aw();
+ log_post_write_mmio(value, 16, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+#ifndef writel
+#define writel writel
+static inline void writel(u32 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 32, addr, _THIS_IP_, _RET_IP_);
+ __io_bw();
+ __raw_writel((u32 __force)__cpu_to_le32(value), addr);
+ __io_aw();
+ log_post_write_mmio(value, 32, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+#ifndef writeq
+#define writeq writeq
+static inline void writeq(u64 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 64, addr, _THIS_IP_, _RET_IP_);
+ __io_bw();
+ __raw_writeq((u64 __force)__cpu_to_le64(value), addr);
+ __io_aw();
+ log_post_write_mmio(value, 64, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+/*
+ * {read,write}{b,w,l,q}_relaxed() are like the regular version, but
+ * are not guaranteed to provide ordering against spinlocks or memory
+ * accesses.
+ */
+#ifndef readb_relaxed
+#define readb_relaxed readb_relaxed
+static inline u8 readb_relaxed(const volatile void __iomem *addr)
+{
+ u8 val;
+
+ log_read_mmio(8, addr, _THIS_IP_, _RET_IP_);
+ val = __raw_readb(addr);
+ log_post_read_mmio(val, 8, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#ifndef readw_relaxed
+#define readw_relaxed readw_relaxed
+static inline u16 readw_relaxed(const volatile void __iomem *addr)
+{
+ u16 val;
+
+ log_read_mmio(16, addr, _THIS_IP_, _RET_IP_);
+ val = __le16_to_cpu((__le16 __force)__raw_readw(addr));
+ log_post_read_mmio(val, 16, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#ifndef readl_relaxed
+#define readl_relaxed readl_relaxed
+static inline u32 readl_relaxed(const volatile void __iomem *addr)
+{
+ u32 val;
+
+ log_read_mmio(32, addr, _THIS_IP_, _RET_IP_);
+ val = __le32_to_cpu((__le32 __force)__raw_readl(addr));
+ log_post_read_mmio(val, 32, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#if defined(readq) && !defined(readq_relaxed)
+#define readq_relaxed readq_relaxed
+static inline u64 readq_relaxed(const volatile void __iomem *addr)
+{
+ u64 val;
+
+ log_read_mmio(64, addr, _THIS_IP_, _RET_IP_);
+ val = __le64_to_cpu((__le64 __force)__raw_readq(addr));
+ log_post_read_mmio(val, 64, addr, _THIS_IP_, _RET_IP_);
+ return val;
+}
+#endif
+
+#ifndef writeb_relaxed
+#define writeb_relaxed writeb_relaxed
+static inline void writeb_relaxed(u8 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 8, addr, _THIS_IP_, _RET_IP_);
+ __raw_writeb(value, addr);
+ log_post_write_mmio(value, 8, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+#ifndef writew_relaxed
+#define writew_relaxed writew_relaxed
+static inline void writew_relaxed(u16 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 16, addr, _THIS_IP_, _RET_IP_);
+ __raw_writew((u16 __force)cpu_to_le16(value), addr);
+ log_post_write_mmio(value, 16, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+#ifndef writel_relaxed
+#define writel_relaxed writel_relaxed
+static inline void writel_relaxed(u32 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 32, addr, _THIS_IP_, _RET_IP_);
+ __raw_writel((u32 __force)__cpu_to_le32(value), addr);
+ log_post_write_mmio(value, 32, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+#if defined(writeq) && !defined(writeq_relaxed)
+#define writeq_relaxed writeq_relaxed
+static inline void writeq_relaxed(u64 value, volatile void __iomem *addr)
+{
+ log_write_mmio(value, 64, addr, _THIS_IP_, _RET_IP_);
+ __raw_writeq((u64 __force)__cpu_to_le64(value), addr);
+ log_post_write_mmio(value, 64, addr, _THIS_IP_, _RET_IP_);
+}
+#endif
+
+/*
+ * {read,write}s{b,w,l,q}() repeatedly access the same memory address in
+ * native endianness in 8-, 16-, 32- or 64-bit chunks (@count times).
+ */
+#ifndef readsb
+#define readsb readsb
+static inline void readsb(const volatile void __iomem *addr, void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ u8 *buf = buffer;
+
+ do {
+ u8 x = __raw_readb(addr);
+ *buf++ = x;
+ } while (--count);
+ }
+}
+#endif
+
+#ifndef readsw
+#define readsw readsw
+static inline void readsw(const volatile void __iomem *addr, void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ u16 *buf = buffer;
+
+ do {
+ u16 x = __raw_readw(addr);
+ *buf++ = x;
+ } while (--count);
+ }
+}
+#endif
+
+#ifndef readsl
+#define readsl readsl
+static inline void readsl(const volatile void __iomem *addr, void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ u32 *buf = buffer;
+
+ do {
+ u32 x = __raw_readl(addr);
+ *buf++ = x;
+ } while (--count);
+ }
+}
+#endif
+
+#ifndef readsq
+#define readsq readsq
+static inline void readsq(const volatile void __iomem *addr, void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ u64 *buf = buffer;
+
+ do {
+ u64 x = __raw_readq(addr);
+ *buf++ = x;
+ } while (--count);
+ }
+}
+#endif
+
+#ifndef writesb
+#define writesb writesb
+static inline void writesb(volatile void __iomem *addr, const void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ const u8 *buf = buffer;
+
+ do {
+ __raw_writeb(*buf++, addr);
+ } while (--count);
+ }
+}
+#endif
+
+#ifndef writesw
+#define writesw writesw
+static inline void writesw(volatile void __iomem *addr, const void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ const u16 *buf = buffer;
+
+ do {
+ __raw_writew(*buf++, addr);
+ } while (--count);
+ }
+}
+#endif
+
+#ifndef writesl
+#define writesl writesl
+static inline void writesl(volatile void __iomem *addr, const void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ const u32 *buf = buffer;
+
+ do {
+ __raw_writel(*buf++, addr);
+ } while (--count);
+ }
+}
+#endif
+
+#ifndef writesq
+#define writesq writesq
+static inline void writesq(volatile void __iomem *addr, const void *buffer,
+ unsigned int count)
+{
+ if (count) {
+ const u64 *buf = buffer;
+
+ do {
+ __raw_writeq(*buf++, addr);
+ } while (--count);
+ }
+}
+#endif
+
+#endif /* _TOOLS_ASM_GENERIC_IO_H */
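For illustration, a minimal sketch of how the string accessors above are typically used, assuming a hypothetical device with a 32-bit data FIFO register; this is not part of the patch.

/* Illustrative sketch only -- not part of the patch. The FIFO register and
 * the helper names are hypothetical; readsl()/writesl() are the accessors
 * defined above.
 */
#include <linux/io.h>
#include <linux/types.h>

static void fifo_drain(const volatile void __iomem *fifo, u32 *buf,
		       unsigned int words)
{
	readsl(fifo, buf, words);	/* 'words' 32-bit reads from one address */
}

static void fifo_fill(volatile void __iomem *fifo, const u32 *buf,
		      unsigned int words)
{
	writesl(fifo, buf, words);	/* 'words' 32-bit writes to one address */
}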
diff --git a/tools/include/asm/alternative.h b/tools/include/asm/alternative.h
index 7ce02a223732..8e548ac8f740 100644
--- a/tools/include/asm/alternative.h
+++ b/tools/include/asm/alternative.h
@@ -2,8 +2,18 @@
#ifndef _TOOLS_ASM_ALTERNATIVE_ASM_H
#define _TOOLS_ASM_ALTERNATIVE_ASM_H
+#if defined(__s390x__)
+#ifdef __ASSEMBLY__
+.macro ALTERNATIVE oldinstr, newinstr, feature
+ \oldinstr
+.endm
+#endif
+#else
+
/* Just disable it so we can build arch/x86/lib/memcpy_64.S for perf bench: */
#define ALTERNATIVE #
#endif
+
+#endif
diff --git a/tools/include/asm/barrier.h b/tools/include/asm/barrier.h
index 8d378c57cb01..0c21678ac5e6 100644
--- a/tools/include/asm/barrier.h
+++ b/tools/include/asm/barrier.h
@@ -8,6 +8,8 @@
#include "../../arch/arm64/include/asm/barrier.h"
#elif defined(__powerpc__)
#include "../../arch/powerpc/include/asm/barrier.h"
+#elif defined(__riscv)
+#include "../../arch/riscv/include/asm/barrier.h"
#elif defined(__s390__)
#include "../../arch/s390/include/asm/barrier.h"
#elif defined(__sh__)
diff --git a/tools/include/asm/io.h b/tools/include/asm/io.h
new file mode 100644
index 000000000000..eed5066f25c4
--- /dev/null
+++ b/tools/include/asm/io.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOOLS_ASM_IO_H
+#define _TOOLS_ASM_IO_H
+
+#if defined(__i386__) || defined(__x86_64__)
+#include "../../arch/x86/include/asm/io.h"
+#else
+#include <asm-generic/io.h>
+#endif
+
+#endif /* _TOOLS_ASM_IO_H */
diff --git a/tools/include/asm/rwonce.h b/tools/include/asm/rwonce.h
new file mode 100644
index 000000000000..e69de29bb2d1
--- /dev/null
+++ b/tools/include/asm/rwonce.h
diff --git a/tools/include/asm/timex.h b/tools/include/asm/timex.h
new file mode 100644
index 000000000000..5adfe3c6d326
--- /dev/null
+++ b/tools/include/asm/timex.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __TOOLS_LINUX_ASM_TIMEX_H
+#define __TOOLS_LINUX_ASM_TIMEX_H
+
+#include <time.h>
+
+#define cycles_t clock_t
+
+static inline cycles_t get_cycles(void)
+{
+ return clock();
+}
+#endif // __TOOLS_LINUX_ASM_TIMEX_H
diff --git a/tools/include/generated/asm-offsets.h b/tools/include/generated/asm-offsets.h
new file mode 100644
index 000000000000..e69de29bb2d1
--- /dev/null
+++ b/tools/include/generated/asm-offsets.h
diff --git a/tools/include/generated/asm/cpucap-defs.h b/tools/include/generated/asm/cpucap-defs.h
new file mode 100644
index 000000000000..e69de29bb2d1
--- /dev/null
+++ b/tools/include/generated/asm/cpucap-defs.h
diff --git a/tools/include/generated/asm/sysreg-defs.h b/tools/include/generated/asm/sysreg-defs.h
new file mode 100644
index 000000000000..e69de29bb2d1
--- /dev/null
+++ b/tools/include/generated/asm/sysreg-defs.h
diff --git a/tools/include/linux/align.h b/tools/include/linux/align.h
new file mode 100644
index 000000000000..14e34ace80dd
--- /dev/null
+++ b/tools/include/linux/align.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _TOOLS_LINUX_ALIGN_H
+#define _TOOLS_LINUX_ALIGN_H
+
+#include <uapi/linux/const.h>
+
+#define ALIGN(x, a) __ALIGN_KERNEL((x), (a))
+#define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a))
+#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
+
+#endif /* _TOOLS_LINUX_ALIGN_H */
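For illustration, the three helpers evaluate as follows, assuming the tools uapi headers are on the include path; the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch. */
#include <linux/align.h>

_Static_assert(ALIGN(13, 8) == 16, "rounds up to the next multiple of 8");
_Static_assert(ALIGN_DOWN(13, 8) == 8, "rounds down to the previous multiple of 8");
_Static_assert(IS_ALIGNED(4096, 64), "4096 is a multiple of 64");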
diff --git a/tools/include/linux/args.h b/tools/include/linux/args.h
new file mode 100644
index 000000000000..2e8e65d975c7
--- /dev/null
+++ b/tools/include/linux/args.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_ARGS_H
+#define _LINUX_ARGS_H
+
+/*
+ * How do these macros work?
+ *
+ * In __COUNT_ARGS() _0 to _12 are just placeholders from the start
+ * in order to make sure _n is positioned over the correct number
+ * from 12 to 0 (depending on X, which is a variadic argument list).
+ * They serve no purpose other than occupying a position. Since each
+ * macro parameter must have a distinct identifier, those identifiers
+ * are as good as any.
+ *
+ * In COUNT_ARGS() we use actual integers, so __COUNT_ARGS() returns
+ * that as _n.
+ */
+
+/* This counts to 15. Any more and it will return the 16th argument. */
+#define __COUNT_ARGS(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _n, X...) _n
+#define COUNT_ARGS(X...) __COUNT_ARGS(, ##X, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
+
+/* Concatenate two parameters, but allow them to be expanded beforehand. */
+#define __CONCAT(a, b) a ## b
+#define CONCATENATE(a, b) __CONCAT(a, b)
+
+#endif /* _LINUX_ARGS_H */
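For illustration, the two macros expand as in the following sketch; MAKE_REG() is a hypothetical wrapper and the snippet is not part of the patch.

/* Illustrative sketch only -- not part of the patch. */
#include <linux/args.h>

#define MAKE_REG(n)	CONCATENATE(reg_, n)

static int MAKE_REG(3);		/* expands to: static int reg_3; */

_Static_assert(COUNT_ARGS() == 0, "empty argument list");
_Static_assert(COUNT_ARGS(a, b, c) == 3, "three arguments");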
diff --git a/tools/include/linux/atomic.h b/tools/include/linux/atomic.h
index 01907b33537e..50c66ba9ada5 100644
--- a/tools/include/linux/atomic.h
+++ b/tools/include/linux/atomic.h
@@ -12,4 +12,26 @@ void atomic_long_set(atomic_long_t *v, long i);
#define atomic_cmpxchg_release atomic_cmpxchg
#endif /* atomic_cmpxchg_relaxed */
+static inline bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new)
+{
+ int ret, old = *oldp;
+
+ ret = atomic_cmpxchg(ptr, old, new);
+ if (ret != old)
+ *oldp = ret;
+ return ret == old;
+}
+
+static inline bool atomic_inc_unless_negative(atomic_t *v)
+{
+ int c = atomic_read(v);
+
+ do {
+ if (unlikely(c < 0))
+ return false;
+ } while (!atomic_try_cmpxchg(v, &c, c + 1));
+
+ return true;
+}
+
#endif /* __TOOLS_LINUX_ATOMIC_H */
diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index f3566ea0f932..0d992245c600 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -3,6 +3,8 @@
#define _TOOLS_LINUX_BITMAP_H
#include <string.h>
+#include <asm-generic/bitsperlong.h>
+#include <linux/align.h>
#include <linux/bitops.h>
#include <linux/find.h>
#include <stdlib.h>
@@ -18,20 +20,22 @@ bool __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, unsigned int bits);
bool __bitmap_equal(const unsigned long *bitmap1,
const unsigned long *bitmap2, unsigned int bits);
-void bitmap_clear(unsigned long *map, unsigned int start, int len);
+void __bitmap_set(unsigned long *map, unsigned int start, int len);
+void __bitmap_clear(unsigned long *map, unsigned int start, int len);
bool __bitmap_intersects(const unsigned long *bitmap1,
const unsigned long *bitmap2, unsigned int bits);
#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
+#define bitmap_size(nbits) (ALIGN(nbits, BITS_PER_LONG) / BITS_PER_BYTE)
+
static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
{
if (small_const_nbits(nbits))
*dst = 0UL;
else {
- int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
- memset(dst, 0, len);
+ memset(dst, 0, bitmap_size(nbits));
}
}
@@ -77,13 +81,18 @@ static inline void bitmap_or(unsigned long *dst, const unsigned long *src1,
__bitmap_or(dst, src1, src2, nbits);
}
+static inline unsigned long *bitmap_alloc(unsigned int nbits, gfp_t flags __maybe_unused)
+{
+ return malloc(bitmap_size(nbits));
+}
+
/**
* bitmap_zalloc - Allocate bitmap
* @nbits: Number of bits
*/
static inline unsigned long *bitmap_zalloc(int nbits)
{
- return calloc(1, BITS_TO_LONGS(nbits) * sizeof(unsigned long));
+ return calloc(1, bitmap_size(nbits));
}
/*
@@ -126,7 +135,6 @@ static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1,
#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
#endif
#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
-#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
static inline bool bitmap_equal(const unsigned long *src1,
const unsigned long *src2, unsigned int nbits)
@@ -149,4 +157,34 @@ static inline bool bitmap_intersects(const unsigned long *src1,
return __bitmap_intersects(src1, src2, nbits);
}
+static inline void bitmap_set(unsigned long *map, unsigned int start, unsigned int nbits)
+{
+ if (__builtin_constant_p(nbits) && nbits == 1)
+ __set_bit(start, map);
+ else if (small_const_nbits(start + nbits))
+ *map |= GENMASK(start + nbits - 1, start);
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
+ memset((char *)map + start / 8, 0xff, nbits / 8);
+ else
+ __bitmap_set(map, start, nbits);
+}
+
+static inline void bitmap_clear(unsigned long *map, unsigned int start,
+ unsigned int nbits)
+{
+ if (__builtin_constant_p(nbits) && nbits == 1)
+ __clear_bit(start, map);
+ else if (small_const_nbits(start + nbits))
+ *map &= ~GENMASK(start + nbits - 1, start);
+ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
+ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
+ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
+ memset((char *)map + start / 8, 0, nbits / 8);
+ else
+ __bitmap_clear(map, start, nbits);
+}
#endif /* _TOOLS_LINUX_BITMAP_H */
diff --git a/tools/include/linux/bitops.h b/tools/include/linux/bitops.h
index f18683b95ea6..b4e4cd071f8c 100644
--- a/tools/include/linux/bitops.h
+++ b/tools/include/linux/bitops.h
@@ -20,6 +20,8 @@
#define BITS_TO_U32(nr) DIV_ROUND_UP(nr, BITS_PER_TYPE(u32))
#define BITS_TO_BYTES(nr) DIV_ROUND_UP(nr, BITS_PER_TYPE(char))
+#define BYTES_TO_BITS(nb) ((nb) * BITS_PER_BYTE)
+
extern unsigned int __sw_hweight8(unsigned int w);
extern unsigned int __sw_hweight16(unsigned int w);
extern unsigned int __sw_hweight32(unsigned int w);
@@ -70,7 +72,7 @@ static inline unsigned long hweight_long(unsigned long w)
return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
}
-static inline unsigned fls_long(unsigned long l)
+static inline unsigned int fls_long(unsigned long l)
{
if (sizeof(l) == 4)
return fls(l);
@@ -87,4 +89,15 @@ static inline __u32 rol32(__u32 word, unsigned int shift)
return (word << shift) | (word >> ((-shift) & 31));
}
+/**
+ * sign_extend64 - sign extend a 64-bit value using specified bit as sign-bit
+ * @value: value to sign extend
+ * @index: 0 based bit index (0<=index<64) to sign bit
+ */
+static __always_inline __s64 sign_extend64(__u64 value, int index)
+{
+ __u8 shift = 63 - index;
+ return (__s64)(value << shift) >> shift;
+}
+
#endif
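sign_extend64() takes the zero-based index of the sign bit, not a field width; a hedged sketch of decoding a hypothetical 8-bit signed field follows (not part of the patch).

/* Illustrative sketch only -- not part of the patch; the 8-bit field is hypothetical. */
#include <linux/bitops.h>
#include <linux/types.h>

static inline s64 decode_s8_field(u64 raw)
{
	/* Bit 7 is the sign bit: 0x80 decodes to -128, 0x7f stays 127. */
	return sign_extend64(raw & 0xff, 7);
}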
diff --git a/tools/include/linux/bits.h b/tools/include/linux/bits.h
index 7c0cf5031abe..a40cc861b3a7 100644
--- a/tools/include/linux/bits.h
+++ b/tools/include/linux/bits.h
@@ -2,15 +2,15 @@
#ifndef __LINUX_BITS_H
#define __LINUX_BITS_H
-#include <linux/const.h>
#include <vdso/bits.h>
-#include <asm/bitsperlong.h>
+#include <uapi/linux/bits.h>
#define BIT_MASK(nr) (UL(1) << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
#define BIT_ULL_MASK(nr) (ULL(1) << ((nr) % BITS_PER_LONG_LONG))
#define BIT_ULL_WORD(nr) ((nr) / BITS_PER_LONG_LONG)
#define BITS_PER_BYTE 8
+#define BITS_PER_TYPE(type) (sizeof(type) * BITS_PER_BYTE)
/*
* Create a contiguous bitmask starting at bit position @l and ending at
@@ -18,28 +18,72 @@
* GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
*/
#if !defined(__ASSEMBLY__)
+
+/*
+ * Missing asm support
+ *
+ * GENMASK_U*() and BIT_U*() depend on BITS_PER_TYPE() which relies on sizeof(),
+ * something not available in asm. Nevertheless, fixed width integers is a C
+ * concept. Assembly code can rely on the long and long long versions instead.
+ */
+
#include <linux/build_bug.h>
-#define GENMASK_INPUT_CHECK(h, l) \
- (BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
- __is_constexpr((l) > (h)), (l) > (h), 0)))
-#else
+#include <linux/compiler.h>
+#include <linux/overflow.h>
+
+#define GENMASK_INPUT_CHECK(h, l) BUILD_BUG_ON_ZERO(const_true((l) > (h)))
+
+/*
+ * Generate a mask for the specified type @t. Additional checks are made to
+ * guarantee the value returned fits in that type, relying on
+ * -Wshift-count-overflow compiler check to detect incompatible arguments.
+ * For example, all these create build errors or warnings:
+ *
+ * - GENMASK(15, 20): wrong argument order
+ * - GENMASK(72, 15): doesn't fit unsigned long
+ * - GENMASK_U32(33, 15): doesn't fit in a u32
+ */
+#define GENMASK_TYPE(t, h, l) \
+ ((t)(GENMASK_INPUT_CHECK(h, l) + \
+ (type_max(t) << (l) & \
+ type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h)))))
+
+#define GENMASK(h, l) GENMASK_TYPE(unsigned long, h, l)
+#define GENMASK_ULL(h, l) GENMASK_TYPE(unsigned long long, h, l)
+
+#define GENMASK_U8(h, l) GENMASK_TYPE(u8, h, l)
+#define GENMASK_U16(h, l) GENMASK_TYPE(u16, h, l)
+#define GENMASK_U32(h, l) GENMASK_TYPE(u32, h, l)
+#define GENMASK_U64(h, l) GENMASK_TYPE(u64, h, l)
+#define GENMASK_U128(h, l) GENMASK_TYPE(u128, h, l)
+
+/*
+ * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE(). The
+ * following examples generate compiler warnings due to -Wshift-count-overflow:
+ *
+ * - BIT_U8(8)
+ * - BIT_U32(-1)
+ * - BIT_U32(40)
+ */
+#define BIT_INPUT_CHECK(type, nr) \
+ BUILD_BUG_ON_ZERO(const_true((nr) >= BITS_PER_TYPE(type)))
+
+#define BIT_TYPE(type, nr) ((type)(BIT_INPUT_CHECK(type, nr) + BIT_ULL(nr)))
+
+#define BIT_U8(nr) BIT_TYPE(u8, nr)
+#define BIT_U16(nr) BIT_TYPE(u16, nr)
+#define BIT_U32(nr) BIT_TYPE(u32, nr)
+#define BIT_U64(nr) BIT_TYPE(u64, nr)
+
+#else /* defined(__ASSEMBLY__) */
+
/*
* BUILD_BUG_ON_ZERO is not available in h files included from asm files,
* disable the input check if that is the case.
*/
-#define GENMASK_INPUT_CHECK(h, l) 0
-#endif
-
-#define __GENMASK(h, l) \
- (((~UL(0)) - (UL(1) << (l)) + 1) & \
- (~UL(0) >> (BITS_PER_LONG - 1 - (h))))
-#define GENMASK(h, l) \
- (GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l))
-
-#define __GENMASK_ULL(h, l) \
- (((~ULL(0)) - (ULL(1) << (l)) + 1) & \
- (~ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h))))
-#define GENMASK_ULL(h, l) \
- (GENMASK_INPUT_CHECK(h, l) + __GENMASK_ULL(h, l))
+#define GENMASK(h, l) __GENMASK(h, l)
+#define GENMASK_ULL(h, l) __GENMASK_ULL(h, l)
+
+#endif /* !defined(__ASSEMBLY__) */
#endif /* __LINUX_BITS_H */
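For illustration, the fixed-width variants carry the width in the resulting type; the register field names below are made up and the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch; the field names are made up. */
#include <linux/bits.h>
#include <linux/types.h>

#define CTRL_MODE_MASK	GENMASK_U32(15, 8)	/* 0x0000ff00, typed as u32 */
#define CTRL_ENABLE	BIT_U8(3)		/* 0x08, typed as u8 */

/*
 * GENMASK_U32(33, 15) or BIT_U8(8) would trip -Wshift-count-overflow at
 * build time, as the comments above describe.
 */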
diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h
index 2f882d5cb30f..72ea363d434d 100644
--- a/tools/include/linux/btf_ids.h
+++ b/tools/include/linux/btf_ids.h
@@ -3,11 +3,22 @@
#ifndef _LINUX_BTF_IDS_H
#define _LINUX_BTF_IDS_H
+#include <linux/types.h> /* for u32 */
+
struct btf_id_set {
u32 cnt;
u32 ids[];
};
+struct btf_id_set8 {
+ u32 cnt;
+ u32 flags;
+ struct {
+ u32 id;
+ u32 flags;
+ } pairs[];
+};
+
#ifdef CONFIG_DEBUG_INFO_BTF
#include <linux/compiler.h> /* for __PASTE */
diff --git a/tools/include/linux/build_bug.h b/tools/include/linux/build_bug.h
index b4898ff085de..ab2aa97bd8ce 100644
--- a/tools/include/linux/build_bug.h
+++ b/tools/include/linux/build_bug.h
@@ -4,17 +4,17 @@
#include <linux/compiler.h>
-#ifdef __CHECKER__
-#define BUILD_BUG_ON_ZERO(e) (0)
-#else /* __CHECKER__ */
/*
* Force a compilation error if condition is true, but also produce a
* result (of value 0 and type int), so the expression can be used
* e.g. in a structure initializer (or where-ever else comma expressions
* aren't permitted).
+ *
+ * Takes an error message as an optional second argument. If omitted, it
+ * defaults to the stringification of the tested expression.
*/
-#define BUILD_BUG_ON_ZERO(e) ((int)(sizeof(struct { int:(-!!(e)); })))
-#endif /* __CHECKER__ */
+#define BUILD_BUG_ON_ZERO(e, ...) \
+ __BUILD_BUG_ON_ZERO_MSG(e, ##__VA_ARGS__, #e " is true")
/* Force a compilation error if a constant expression is not a power of 2 */
#define __BUILD_BUG_ON_NOT_POWER_OF_2(n) \
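For illustration, the optional message argument reads like this in practice; the macro below is hypothetical and the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch; the macro is hypothetical. */
#include <linux/build_bug.h>

/* Evaluates to 0 when 'n' fits in 16 bits; otherwise the build fails with the message. */
#define CHECK_FITS_U16(n) \
	BUILD_BUG_ON_ZERO((n) > 0xffff, "entry count must fit in 16 bits")

#define TABLE_NR_ENTRIES(n)	((n) + CHECK_FITS_U16(n))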
diff --git a/tools/include/linux/cfi_types.h b/tools/include/linux/cfi_types.h
new file mode 100644
index 000000000000..a86af9bc8bdc
--- /dev/null
+++ b/tools/include/linux/cfi_types.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Clang Control Flow Integrity (CFI) type definitions.
+ */
+#ifndef _LINUX_CFI_TYPES_H
+#define _LINUX_CFI_TYPES_H
+
+#ifdef __ASSEMBLY__
+#include <linux/linkage.h>
+
+#ifdef CONFIG_CFI
+/*
+ * Use the __kcfi_typeid_<function> type identifier symbol to
+ * annotate indirectly called assembly functions. The compiler emits
+ * these symbols for all address-taken function declarations in C
+ * code.
+ */
+#ifndef __CFI_TYPE
+#define __CFI_TYPE(name) \
+ .4byte __kcfi_typeid_##name
+#endif
+
+#define SYM_TYPED_ENTRY(name, linkage, align...) \
+ linkage(name) ASM_NL \
+ align ASM_NL \
+ __CFI_TYPE(name) ASM_NL \
+ name:
+
+#define SYM_TYPED_START(name, linkage, align...) \
+ SYM_TYPED_ENTRY(name, linkage, align)
+
+#else /* CONFIG_CFI */
+
+#define SYM_TYPED_START(name, linkage, align...) \
+ SYM_START(name, linkage, align)
+
+#endif /* CONFIG_CFI */
+
+#ifndef SYM_TYPED_FUNC_START
+#define SYM_TYPED_FUNC_START(name) \
+ SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+#else /* __ASSEMBLY__ */
+
+#ifdef CONFIG_CFI
+#define DEFINE_CFI_TYPE(name, func) \
+ /* \
+ * Force a reference to the function so the compiler generates \
+ * __kcfi_typeid_<func>. \
+ */ \
+ __ADDRESSABLE(func); \
+ /* u32 name __ro_after_init = __kcfi_typeid_<func> */ \
+ extern u32 name; \
+ asm ( \
+ " .pushsection .data..ro_after_init,\"aw\",\%progbits \n" \
+ " .type " #name ",\%object \n" \
+ " .globl " #name " \n" \
+ " .p2align 2, 0x0 \n" \
+ #name ": \n" \
+ " .4byte __kcfi_typeid_" #func " \n" \
+ " .size " #name ", 4 \n" \
+ " .popsection \n" \
+ );
+#endif
+
+#endif /* __ASSEMBLY__ */
+#endif /* _LINUX_CFI_TYPES_H */
diff --git a/tools/include/linux/compiler-gcc.h b/tools/include/linux/compiler-gcc.h
index 62e7c901ac28..e20f98e14e81 100644
--- a/tools/include/linux/compiler-gcc.h
+++ b/tools/include/linux/compiler-gcc.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _TOOLS_LINUX_COMPILER_H_
-#error "Please don't include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead."
+#error "Please do not include <linux/compiler-gcc.h> directly, include <linux/compiler.h> instead."
#endif
/*
diff --git a/tools/include/linux/compiler.h b/tools/include/linux/compiler.h
index 7b65566f3e42..f40bd2b04c29 100644
--- a/tools/include/linux/compiler.h
+++ b/tools/include/linux/compiler.h
@@ -2,6 +2,8 @@
#ifndef _TOOLS_LINUX_COMPILER_H_
#define _TOOLS_LINUX_COMPILER_H_
+#ifndef __ASSEMBLY__
+
#include <linux/compiler_types.h>
#ifndef __compiletime_error
@@ -58,6 +60,14 @@
#define noinline
#endif
+#ifndef __nocf_check
+#define __nocf_check __attribute__((nocf_check))
+#endif
+
+#ifndef __naked
+#define __naked __attribute__((__naked__))
+#endif
+
/* Are two types/vars the same type (ignoring qualifiers)? */
#ifndef __same_type
# define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
@@ -71,6 +81,28 @@
#define __is_constexpr(x) \
(sizeof(int) == sizeof(*(8 ? ((void *)((long)(x) * 0l)) : (int *)8)))
+/*
+ * Similar to statically_true() but produces a constant expression
+ *
+ * To be used in conjunction with macros, such as BUILD_BUG_ON_ZERO(),
+ * which require their input to be a constant expression and for which
+ * statically_true() would otherwise fail.
+ *
+ * This is a trade-off: const_true() requires all its operands to be
+ * compile time constants. Otherwise, it would always return false, even for
+ * the most trivial cases like:
+ *
+ * true || non_const_var
+ *
+ * By contrast, statically_true() is able to fold more complex
+ * tautologies and will return true for expressions such as:
+ *
+ * !(non_const_var * 8 % 4)
+ *
+ * For the general case, statically_true() is better.
+ */
+#define const_true(x) __builtin_choose_expr(__is_constexpr(x), x, false)
+
#ifdef __ANDROID__
/*
* FIXME: Big hammer to get rid of tons of:
@@ -106,6 +138,10 @@
# define __force
#endif
+#ifndef __iomem
+# define __iomem
+#endif
+
#ifndef __weak
# define __weak __attribute__((weak))
#endif
@@ -118,10 +154,6 @@
# define unlikely(x) __builtin_expect(!!(x), 0)
#endif
-#ifndef __init
-# define __init
-#endif
-
#include <linux/types.h>
/*
@@ -216,4 +248,14 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
__asm__ ("" : "=r" (var) : "0" (var))
#endif
+#ifndef __BUILD_BUG_ON_ZERO_MSG
+#if defined(__clang__)
+#define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) ((int)(sizeof(struct { int:(-!!(e)); })))
+#else
+#define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) ((int)sizeof(struct {_Static_assert(!(e), msg);}))
+#endif
+#endif
+
+#endif /* __ASSEMBLY__ */
+
#endif /* _TOOLS_LINUX_COMPILER_H */
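For illustration, const_true() is what lets a check like the one below remain a constant expression; the length check is hypothetical and the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch; the length check is hypothetical. */
#include <linux/build_bug.h>
#include <linux/compiler.h>

/*
 * Fails the build when 'len' is a compile-time constant below 16. For a
 * runtime-variable 'len', const_true() folds to false and the check is
 * skipped, mirroring how GENMASK_INPUT_CHECK() uses it.
 */
#define CHECK_MIN_LEN(len) \
	BUILD_BUG_ON_ZERO(const_true((len) < 16), "buffer shorter than 16 bytes")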
diff --git a/tools/include/linux/container_of.h b/tools/include/linux/container_of.h
new file mode 100644
index 000000000000..c879e14c3dd6
--- /dev/null
+++ b/tools/include/linux/container_of.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOOLS_LINUX_CONTAINER_OF_H
+#define _TOOLS_LINUX_CONTAINER_OF_H
+
+#ifndef container_of
+/**
+ * container_of - cast a member of a structure out to the containing structure
+ * @ptr: the pointer to the member.
+ * @type: the type of the container struct this is embedded in.
+ * @member: the name of the member within the struct.
+ *
+ */
+#define container_of(ptr, type, member) ({ \
+ const typeof(((type *)0)->member) * __mptr = (ptr); \
+ (type *)((char *)__mptr - offsetof(type, member)); })
+#endif
+
+#endif /* _TOOLS_LINUX_CONTAINER_OF_H */
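For illustration, the usual intrusive-structure pattern the macro supports; the list types are hypothetical and the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch; the types are hypothetical. */
#include <stddef.h>
#include <linux/container_of.h>

struct list_node {
	struct list_node *next;
};

struct item {
	int value;
	struct list_node node;
};

static inline struct item *item_from_node(struct list_node *n)
{
	/* Recover the enclosing struct item from a pointer to its 'node' member. */
	return container_of(n, struct item, node);
}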
diff --git a/tools/include/linux/coresight-pmu.h b/tools/include/linux/coresight-pmu.h
index 51ac441a37c3..89b0ac0014b0 100644
--- a/tools/include/linux/coresight-pmu.h
+++ b/tools/include/linux/coresight-pmu.h
@@ -49,12 +49,21 @@
* Interpretation of the PERF_RECORD_AUX_OUTPUT_HW_ID payload.
* Used to associate a CPU with the CoreSight Trace ID.
* [07:00] - Trace ID - uses 8 bits to make value easy to read in file.
- * [59:08] - Unused (SBZ)
- * [63:60] - Version
+ * [39:08] - Sink ID - as reported in /sys/bus/event_source/devices/cs_etm/sinks/
+ * Added in minor version 1.
+ * [55:40] - Unused (SBZ)
+ * [59:56] - Minor Version - previously existing fields are compatible with
+ * all minor versions.
+ * [63:60] - Major Version - previously existing fields mean different things
+ * in new major versions.
*/
#define CS_AUX_HW_ID_TRACE_ID_MASK GENMASK_ULL(7, 0)
-#define CS_AUX_HW_ID_VERSION_MASK GENMASK_ULL(63, 60)
+#define CS_AUX_HW_ID_SINK_ID_MASK GENMASK_ULL(39, 8)
-#define CS_AUX_HW_ID_CURR_VERSION 0
+#define CS_AUX_HW_ID_MINOR_VERSION_MASK GENMASK_ULL(59, 56)
+#define CS_AUX_HW_ID_MAJOR_VERSION_MASK GENMASK_ULL(63, 60)
+
+#define CS_AUX_HW_ID_MAJOR_VERSION 0
+#define CS_AUX_HW_ID_MINOR_VERSION 1
#endif
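For illustration, decoding the new payload layout with the masks above; the helper names are hypothetical and the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch; 'hw_id' is a raw
 * PERF_RECORD_AUX_OUTPUT_HW_ID payload and the helpers are hypothetical.
 */
#include <linux/bits.h>
#include <linux/coresight-pmu.h>
#include <linux/types.h>

static inline unsigned int cs_aux_hw_id_major(u64 hw_id)
{
	return (unsigned int)((hw_id & CS_AUX_HW_ID_MAJOR_VERSION_MASK) >> 60);
}

static inline unsigned int cs_aux_hw_id_sink(u64 hw_id)
{
	return (unsigned int)((hw_id & CS_AUX_HW_ID_SINK_ID_MASK) >> 8);
}

static inline u8 cs_aux_hw_id_trace_id(u64 hw_id)
{
	return (u8)(hw_id & CS_AUX_HW_ID_TRACE_ID_MASK);
}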
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index 736bdeccdfe4..bcc6df79301a 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -111,6 +111,24 @@
.off = 0, \
.imm = IMM })
+/* Short form of movsx, dst_reg = (s8,s16,s32)src_reg */
+
+#define BPF_MOVSX64_REG(DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU64 | BPF_MOV | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
+#define BPF_MOVSX32_REG(DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU | BPF_MOV | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
/* Short form of mov based on type, BPF_X: dst_reg = src_reg, BPF_K: dst_reg = imm32 */
#define BPF_MOV64_RAW(TYPE, DST, SRC, IMM) \
@@ -255,6 +273,16 @@
.off = OFF, \
.imm = 0 })
+/* Unconditional jumps, gotol pc + imm32 */
+
+#define BPF_JMP32_A(IMM) \
+ ((struct bpf_insn) { \
+ .code = BPF_JMP32 | BPF_JA, \
+ .dst_reg = 0, \
+ .src_reg = 0, \
+ .off = 0, \
+ .imm = IMM })
+
/* Function call */
#define BPF_EMIT_CALL(FUNC) \
diff --git a/tools/include/linux/gfp_types.h b/tools/include/linux/gfp_types.h
index 5f9f1ed190a0..65db9349f905 100644
--- a/tools/include/linux/gfp_types.h
+++ b/tools/include/linux/gfp_types.h
@@ -1 +1,392 @@
-#include "../../../include/linux/gfp_types.h"
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_GFP_TYPES_H
+#define __LINUX_GFP_TYPES_H
+
+#include <linux/bits.h>
+
+/* The typedef is in types.h but we want the documentation here */
+#if 0
+/**
+ * typedef gfp_t - Memory allocation flags.
+ *
+ * GFP flags are commonly used throughout Linux to indicate how memory
+ * should be allocated. The GFP acronym stands for get_free_pages(),
+ * the underlying memory allocation function. Not every GFP flag is
+ * supported by every function which may allocate memory. Most users
+ * will want to use a plain ``GFP_KERNEL``.
+ */
+typedef unsigned int __bitwise gfp_t;
+#endif
+
+/*
+ * In case of changes, please don't forget to update
+ * include/trace/events/mmflags.h and tools/perf/builtin-kmem.c
+ */
+
+enum {
+ ___GFP_DMA_BIT,
+ ___GFP_HIGHMEM_BIT,
+ ___GFP_DMA32_BIT,
+ ___GFP_MOVABLE_BIT,
+ ___GFP_RECLAIMABLE_BIT,
+ ___GFP_HIGH_BIT,
+ ___GFP_IO_BIT,
+ ___GFP_FS_BIT,
+ ___GFP_ZERO_BIT,
+ ___GFP_UNUSED_BIT, /* 0x200u unused */
+ ___GFP_DIRECT_RECLAIM_BIT,
+ ___GFP_KSWAPD_RECLAIM_BIT,
+ ___GFP_WRITE_BIT,
+ ___GFP_NOWARN_BIT,
+ ___GFP_RETRY_MAYFAIL_BIT,
+ ___GFP_NOFAIL_BIT,
+ ___GFP_NORETRY_BIT,
+ ___GFP_MEMALLOC_BIT,
+ ___GFP_COMP_BIT,
+ ___GFP_NOMEMALLOC_BIT,
+ ___GFP_HARDWALL_BIT,
+ ___GFP_THISNODE_BIT,
+ ___GFP_ACCOUNT_BIT,
+ ___GFP_ZEROTAGS_BIT,
+#ifdef CONFIG_KASAN_HW_TAGS
+ ___GFP_SKIP_ZERO_BIT,
+ ___GFP_SKIP_KASAN_BIT,
+#endif
+#ifdef CONFIG_LOCKDEP
+ ___GFP_NOLOCKDEP_BIT,
+#endif
+#ifdef CONFIG_SLAB_OBJ_EXT
+ ___GFP_NO_OBJ_EXT_BIT,
+#endif
+ ___GFP_LAST_BIT
+};
+
+/* Plain integer GFP bitmasks. Do not use this directly. */
+#define ___GFP_DMA BIT(___GFP_DMA_BIT)
+#define ___GFP_HIGHMEM BIT(___GFP_HIGHMEM_BIT)
+#define ___GFP_DMA32 BIT(___GFP_DMA32_BIT)
+#define ___GFP_MOVABLE BIT(___GFP_MOVABLE_BIT)
+#define ___GFP_RECLAIMABLE BIT(___GFP_RECLAIMABLE_BIT)
+#define ___GFP_HIGH BIT(___GFP_HIGH_BIT)
+#define ___GFP_IO BIT(___GFP_IO_BIT)
+#define ___GFP_FS BIT(___GFP_FS_BIT)
+#define ___GFP_ZERO BIT(___GFP_ZERO_BIT)
+/* 0x200u unused */
+#define ___GFP_DIRECT_RECLAIM BIT(___GFP_DIRECT_RECLAIM_BIT)
+#define ___GFP_KSWAPD_RECLAIM BIT(___GFP_KSWAPD_RECLAIM_BIT)
+#define ___GFP_WRITE BIT(___GFP_WRITE_BIT)
+#define ___GFP_NOWARN BIT(___GFP_NOWARN_BIT)
+#define ___GFP_RETRY_MAYFAIL BIT(___GFP_RETRY_MAYFAIL_BIT)
+#define ___GFP_NOFAIL BIT(___GFP_NOFAIL_BIT)
+#define ___GFP_NORETRY BIT(___GFP_NORETRY_BIT)
+#define ___GFP_MEMALLOC BIT(___GFP_MEMALLOC_BIT)
+#define ___GFP_COMP BIT(___GFP_COMP_BIT)
+#define ___GFP_NOMEMALLOC BIT(___GFP_NOMEMALLOC_BIT)
+#define ___GFP_HARDWALL BIT(___GFP_HARDWALL_BIT)
+#define ___GFP_THISNODE BIT(___GFP_THISNODE_BIT)
+#define ___GFP_ACCOUNT BIT(___GFP_ACCOUNT_BIT)
+#define ___GFP_ZEROTAGS BIT(___GFP_ZEROTAGS_BIT)
+#ifdef CONFIG_KASAN_HW_TAGS
+#define ___GFP_SKIP_ZERO BIT(___GFP_SKIP_ZERO_BIT)
+#define ___GFP_SKIP_KASAN BIT(___GFP_SKIP_KASAN_BIT)
+#else
+#define ___GFP_SKIP_ZERO 0
+#define ___GFP_SKIP_KASAN 0
+#endif
+#ifdef CONFIG_LOCKDEP
+#define ___GFP_NOLOCKDEP BIT(___GFP_NOLOCKDEP_BIT)
+#else
+#define ___GFP_NOLOCKDEP 0
+#endif
+#ifdef CONFIG_SLAB_OBJ_EXT
+#define ___GFP_NO_OBJ_EXT BIT(___GFP_NO_OBJ_EXT_BIT)
+#else
+#define ___GFP_NO_OBJ_EXT 0
+#endif
+
+/*
+ * Physical address zone modifiers (see linux/mmzone.h - low four bits)
+ *
+ * Do not put any conditional on these. If necessary modify the definitions
+ * without the underscores and use them consistently. The definitions here may
+ * be used in bit comparisons.
+ */
+#define __GFP_DMA ((__force gfp_t)___GFP_DMA)
+#define __GFP_HIGHMEM ((__force gfp_t)___GFP_HIGHMEM)
+#define __GFP_DMA32 ((__force gfp_t)___GFP_DMA32)
+#define __GFP_MOVABLE ((__force gfp_t)___GFP_MOVABLE) /* ZONE_MOVABLE allowed */
+#define GFP_ZONEMASK (__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)
+
+/**
+ * DOC: Page mobility and placement hints
+ *
+ * Page mobility and placement hints
+ * ---------------------------------
+ *
+ * These flags provide hints about how mobile the page is. Pages with similar
+ * mobility are placed within the same pageblocks to minimise problems due
+ * to external fragmentation.
+ *
+ * %__GFP_MOVABLE (also a zone modifier) indicates that the page can be
+ * moved by page migration during memory compaction or can be reclaimed.
+ *
+ * %__GFP_RECLAIMABLE is used for slab allocations that specify
+ * SLAB_RECLAIM_ACCOUNT and whose pages can be freed via shrinkers.
+ *
+ * %__GFP_WRITE indicates the caller intends to dirty the page. Where possible,
+ * these pages will be spread between local zones to avoid all the dirty
+ * pages being in one zone (fair zone allocation policy).
+ *
+ * %__GFP_HARDWALL enforces the cpuset memory allocation policy.
+ *
+ * %__GFP_THISNODE forces the allocation to be satisfied from the requested
+ * node with no fallbacks or placement policy enforcements.
+ *
+ * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.
+ *
+ * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.
+ */
+#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE)
+#define __GFP_WRITE ((__force gfp_t)___GFP_WRITE)
+#define __GFP_HARDWALL ((__force gfp_t)___GFP_HARDWALL)
+#define __GFP_THISNODE ((__force gfp_t)___GFP_THISNODE)
+#define __GFP_ACCOUNT ((__force gfp_t)___GFP_ACCOUNT)
+#define __GFP_NO_OBJ_EXT ((__force gfp_t)___GFP_NO_OBJ_EXT)
+
+/**
+ * DOC: Watermark modifiers
+ *
+ * Watermark modifiers -- controls access to emergency reserves
+ * ------------------------------------------------------------
+ *
+ * %__GFP_HIGH indicates that the caller is high-priority and that granting
+ * the request is necessary before the system can make forward progress.
+ * For example creating an IO context to clean pages and requests
+ * from atomic context.
+ *
+ * %__GFP_MEMALLOC allows access to all memory. This should only be used when
+ * the caller guarantees the allocation will allow more memory to be freed
+ * very shortly, e.g. a process exiting or swapping. Users should either
+ * be the MM or co-ordinate closely with the VM (e.g. swap over NFS).
+ * Users of this flag have to be extremely careful to not deplete the reserve
+ * completely and implement a throttling mechanism which controls the
+ * consumption of the reserve based on the amount of freed memory.
+ * Usage of a pre-allocated pool (e.g. mempool) should always be considered
+ * before using this flag.
+ *
+ * %__GFP_NOMEMALLOC is used to explicitly forbid access to emergency reserves.
+ * This takes precedence over the %__GFP_MEMALLOC flag if both are set.
+ */
+#define __GFP_HIGH ((__force gfp_t)___GFP_HIGH)
+#define __GFP_MEMALLOC ((__force gfp_t)___GFP_MEMALLOC)
+#define __GFP_NOMEMALLOC ((__force gfp_t)___GFP_NOMEMALLOC)
+
+/**
+ * DOC: Reclaim modifiers
+ *
+ * Reclaim modifiers
+ * -----------------
+ * Please note that all the following flags are only applicable to sleepable
+ * allocations (e.g. %GFP_NOWAIT and %GFP_ATOMIC will ignore them).
+ *
+ * %__GFP_IO can start physical IO.
+ *
+ * %__GFP_FS can call down to the low-level FS. Clearing the flag avoids the
+ * allocator recursing into the filesystem which might already be holding
+ * locks.
+ *
+ * %__GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim.
+ * This flag can be cleared to avoid unnecessary delays when a fallback
+ * option is available.
+ *
+ * %__GFP_KSWAPD_RECLAIM indicates that the caller wants to wake kswapd when
+ * the low watermark is reached and have it reclaim pages until the high
+ * watermark is reached. A caller may wish to clear this flag when fallback
+ * options are available and the reclaim is likely to disrupt the system. The
+ * canonical example is THP allocation where a fallback is cheap but
+ * reclaim/compaction may cause indirect stalls.
+ *
+ * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
+ *
+ * The default allocator behavior depends on the request size. We have a concept
+ * of so-called costly allocations (with order > %PAGE_ALLOC_COSTLY_ORDER).
+ * !costly allocations are too essential to fail so they are implicitly
+ * non-failing by default (with some exceptions, such as OOM victims, which
+ * might fail, so the caller still has to check for failures), while costly
+ * requests try not to be disruptive and back off even without invoking the
+ * OOM killer.
+ * The following three modifiers might be used to override some of these
+ * implicit rules. Please note that all of them must be used along with the
+ * %__GFP_DIRECT_RECLAIM flag.
+ *
+ * %__GFP_NORETRY: The VM implementation will try only very lightweight
+ * memory direct reclaim to get some memory under memory pressure (thus
+ * it can sleep). It will avoid disruptive actions like OOM killer. The
+ * caller must handle the failure which is quite likely to happen under
+ * heavy memory pressure. The flag is suitable when failure can easily be
+ * handled at small cost, such as reduced throughput.
+ *
+ * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
+ * procedures that have previously failed if there is some indication
+ * that progress has been made elsewhere. It can wait for other
+ * tasks to attempt high-level approaches to freeing memory such as
+ * compaction (which removes fragmentation) and page-out.
+ * There is still a definite limit to the number of retries, but it is
+ * a larger limit than with %__GFP_NORETRY.
+ * Allocations with this flag may fail, but only when there is
+ * genuinely little unused memory. While these allocations do not
+ * directly trigger the OOM killer, their failure indicates that
+ * the system is likely to need to use the OOM killer soon. The
+ * caller must handle failure, but can reasonably do so by failing
+ * a higher-level request, or completing it only in a much less
+ * efficient manner.
+ * If the allocation does fail, and the caller is in a position to
+ * free some non-essential memory, doing so could benefit the system
+ * as a whole.
+ *
+ * %__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
+ * cannot handle allocation failures. The allocation could block
+ * indefinitely but will never return with failure. Testing for
+ * failure is pointless.
+ * It _must_ be blockable and used together with __GFP_DIRECT_RECLAIM.
+ * It should _never_ be used in non-sleepable contexts.
+ * New users should be evaluated carefully (and the flag should be
+ * used only when there is no reasonable failure policy) but it is
+ *	definitely preferable to use the flag rather than open-coding an endless
+ *	loop around the allocator.
+ * Allocating pages from the buddy with __GFP_NOFAIL and order > 1 is
+ * not supported. Please consider using kvmalloc() instead.
+ */
+#define __GFP_IO ((__force gfp_t)___GFP_IO)
+#define __GFP_FS ((__force gfp_t)___GFP_FS)
+#define __GFP_DIRECT_RECLAIM ((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */
+#define __GFP_KSWAPD_RECLAIM ((__force gfp_t)___GFP_KSWAPD_RECLAIM) /* kswapd can wake */
+#define __GFP_RECLAIM ((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM))
+#define __GFP_RETRY_MAYFAIL ((__force gfp_t)___GFP_RETRY_MAYFAIL)
+#define __GFP_NOFAIL ((__force gfp_t)___GFP_NOFAIL)
+#define __GFP_NORETRY ((__force gfp_t)___GFP_NORETRY)
+
+/**
+ * DOC: Action modifiers
+ *
+ * Action modifiers
+ * ----------------
+ *
+ * %__GFP_NOWARN suppresses allocation failure reports.
+ *
+ * %__GFP_COMP addresses compound page metadata.
+ *
+ * %__GFP_ZERO returns a zeroed page on success.
+ *
+ * %__GFP_ZEROTAGS zeroes memory tags at allocation time if the memory itself
+ * is being zeroed (either via __GFP_ZERO or via init_on_alloc, provided that
+ * __GFP_SKIP_ZERO is not set). This flag is intended for optimization: setting
+ * memory tags at the same time as zeroing memory has minimal additional
+ * performance impact.
+ *
+ * %__GFP_SKIP_KASAN makes KASAN skip unpoisoning on page allocation.
+ * Used for userspace and vmalloc pages; the latter are unpoisoned by
+ * kasan_unpoison_vmalloc instead. For userspace pages, results in
+ * poisoning being skipped as well, see should_skip_kasan_poison for
+ * details. Only effective in HW_TAGS mode.
+ */
+#define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
+#define __GFP_COMP ((__force gfp_t)___GFP_COMP)
+#define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
+#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO)
+#define __GFP_SKIP_KASAN ((__force gfp_t)___GFP_SKIP_KASAN)
+
+/* Disable lockdep for GFP context tracking */
+#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
+
+/* Room for N __GFP_FOO bits */
+#define __GFP_BITS_SHIFT ___GFP_LAST_BIT
+#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
+
+/**
+ * DOC: Useful GFP flag combinations
+ *
+ * Useful GFP flag combinations
+ * ----------------------------
+ *
+ * Useful GFP flag combinations that are commonly used. It is recommended
+ * that subsystems start with one of these combinations and then set/clear
+ * %__GFP_FOO flags as necessary.
+ *
+ * %GFP_ATOMIC users cannot sleep and need the allocation to succeed. A lower
+ * watermark is applied to allow access to "atomic reserves".
+ * The current implementation doesn't support NMI and a few other strict
+ * non-preemptive contexts (e.g. raw_spin_lock). The same applies to %GFP_NOWAIT.
+ *
+ * %GFP_KERNEL is typical for kernel-internal allocations. The caller requires
+ * %ZONE_NORMAL or a lower zone for direct access but can direct reclaim.
+ *
+ * %GFP_KERNEL_ACCOUNT is the same as GFP_KERNEL, except the allocation is
+ * accounted to kmemcg.
+ *
+ * %GFP_NOWAIT is for kernel allocations that should not stall for direct
+ * reclaim, start physical IO or use any filesystem callback. It is very
+ * likely to fail to allocate memory, even for very small allocations.
+ *
+ * %GFP_NOIO will use direct reclaim to discard clean pages or slab pages
+ * that do not require the starting of any physical IO.
+ * Please try to avoid using this flag directly and instead use
+ * memalloc_noio_{save,restore} to mark the whole scope which cannot
+ * perform any IO with a short explanation why. All allocation requests
+ * will inherit GFP_NOIO implicitly.
+ *
+ * %GFP_NOFS will use direct reclaim but will not use any filesystem interfaces.
+ * Please try to avoid using this flag directly and instead use
+ * memalloc_nofs_{save,restore} to mark the whole scope which cannot/shouldn't
+ * recurse into the FS layer with a short explanation why. All allocation
+ * requests will inherit GFP_NOFS implicitly.
+ *
+ * %GFP_USER is for userspace allocations that also need to be directly
+ * accessible by the kernel or hardware. It is typically used by hardware
+ * for buffers that are mapped to userspace (e.g. graphics) that hardware
+ * still must DMA to. cpuset limits are enforced for these allocations.
+ *
+ * %GFP_DMA exists for historical reasons and should be avoided where possible.
+ * The flag indicates that the caller requires that the lowest zone be
+ * used (%ZONE_DMA or 16M on x86-64). Ideally, this would be removed but
+ * it would require careful auditing as some users really require it and
+ * others use the flag to avoid lowmem reserves in %ZONE_DMA and treat the
+ * lowest zone as a type of emergency reserve.
+ *
+ * %GFP_DMA32 is similar to %GFP_DMA except that the caller requires a 32-bit
+ * address. Note that kmalloc(..., GFP_DMA32) does not return DMA32 memory
+ * because the DMA32 kmalloc cache array is not implemented.
+ * (Reason: there is no such user in kernel).
+ *
+ * %GFP_HIGHUSER is for userspace allocations that may be mapped to userspace,
+ * do not need to be directly accessible by the kernel but that cannot
+ * move once in use. An example may be a hardware allocation that maps
+ * data directly into userspace but has no addressing limitations.
+ *
+ * %GFP_HIGHUSER_MOVABLE is for userspace allocations that the kernel does not
+ * need direct access to but can use kmap() when access is required. They
+ * are expected to be movable via page reclaim or page migration. Typically,
+ * pages on the LRU would also be allocated with %GFP_HIGHUSER_MOVABLE.
+ *
+ * %GFP_TRANSHUGE and %GFP_TRANSHUGE_LIGHT are used for THP allocations. They
+ * are compound allocations that will generally fail quickly if memory is not
+ * available and will not wake kswapd/kcompactd on failure. The _LIGHT
+ * version does not attempt reclaim/compaction at all and is by default used
+ * in page fault path, while the non-light is used by khugepaged.
+ */
+#define GFP_ATOMIC (__GFP_HIGH|__GFP_KSWAPD_RECLAIM)
+#define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
+#define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)
+#define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM | __GFP_NOWARN)
+#define GFP_NOIO (__GFP_RECLAIM)
+#define GFP_NOFS (__GFP_RECLAIM | __GFP_IO)
+#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
+#define GFP_DMA __GFP_DMA
+#define GFP_DMA32 __GFP_DMA32
+#define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM)
+#define GFP_HIGHUSER_MOVABLE (GFP_HIGHUSER | __GFP_MOVABLE | __GFP_SKIP_KASAN)
+#define GFP_TRANSHUGE_LIGHT ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
+ __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
+#define GFP_TRANSHUGE (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
+
+#endif /* __LINUX_GFP_TYPES_H */
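For illustration, picking a combination per the documentation above, assuming gfp_t is provided by <linux/types.h> as the comment at the top of the file notes; the variable names are made up and the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch; the names are made up. */
#include <linux/gfp_types.h>
#include <linux/types.h>

static const gfp_t sleepable_flags	= GFP_KERNEL;	/* may reclaim, do IO and FS calls */
static const gfp_t atomic_flags		= GFP_ATOMIC;	/* no sleeping, may tap atomic reserves */
static const gfp_t opportunistic_flags	= GFP_NOWAIT;	/* fails fast; already includes __GFP_NOWARN here */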
diff --git a/tools/testing/memblock/linux/init.h b/tools/include/linux/init.h
index 828e0ee0bc6c..51b5cde28639 100644
--- a/tools/testing/memblock/linux/init.h
+++ b/tools/include/linux/init.h
@@ -1,10 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_INIT_H
-#define _LINUX_INIT_H
+#ifndef _TOOLS_LINUX_INIT_H_
+#define _TOOLS_LINUX_INIT_H_
#include <linux/compiler.h>
-#include <asm/export.h>
-#include <linux/memory_hotplug.h>
+
+#ifndef __init
+# define __init
+#endif
+
+#ifndef __exit
+# define __exit
+#endif
#define __section(section) __attribute__((__section__(section)))
@@ -28,7 +34,10 @@ struct obs_kernel_param {
__aligned(__alignof__(struct obs_kernel_param)) = \
{ __setup_str_##unique_id, fn, early }
+#define __setup(str, fn) \
+ __setup_param(str, fn, fn, 0)
+
#define early_param(str, fn) \
__setup_param(str, fn, fn, 1)
-#endif
+#endif /* _TOOLS_LINUX_INIT_H_ */
diff --git a/tools/include/linux/io.h b/tools/include/linux/io.h
index e129871fe661..4b94b84160b8 100644
--- a/tools/include/linux/io.h
+++ b/tools/include/linux/io.h
@@ -2,4 +2,6 @@
#ifndef _TOOLS_IO_H
#define _TOOLS_IO_H
-#endif
+#include <asm/io.h>
+
+#endif /* _TOOLS_IO_H */
diff --git a/tools/include/linux/kallsyms.h b/tools/include/linux/kallsyms.h
index 5a37ccbec54f..f61a01dd7eb7 100644
--- a/tools/include/linux/kallsyms.h
+++ b/tools/include/linux/kallsyms.h
@@ -18,6 +18,7 @@ static inline const char *kallsyms_lookup(unsigned long addr,
return NULL;
}
+#ifdef HAVE_BACKTRACE_SUPPORT
#include <execinfo.h>
#include <stdlib.h>
static inline void print_ip_sym(const char *loglvl, unsigned long ip)
@@ -30,5 +31,8 @@ static inline void print_ip_sym(const char *loglvl, unsigned long ip)
free(name);
}
+#else
+static inline void print_ip_sym(const char *loglvl, unsigned long ip) {}
+#endif
#endif
diff --git a/tools/include/linux/kasan-tags.h b/tools/include/linux/kasan-tags.h
new file mode 100644
index 000000000000..4f85f562512c
--- /dev/null
+++ b/tools/include/linux/kasan-tags.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_KASAN_TAGS_H
+#define _LINUX_KASAN_TAGS_H
+
+#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */
+#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */
+#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */
+
+#ifdef CONFIG_KASAN_HW_TAGS
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
+#else
+#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
+#endif
+
+#endif /* LINUX_KASAN_TAGS_H */
diff --git a/tools/include/linux/kernel.h b/tools/include/linux/kernel.h
index 4b0673bf52c2..c8c18d3908a9 100644
--- a/tools/include/linux/kernel.h
+++ b/tools/include/linux/kernel.h
@@ -8,8 +8,10 @@
#include <linux/build_bug.h>
#include <linux/compiler.h>
#include <linux/math.h>
+#include <linux/panic.h>
#include <endian.h>
#include <byteswap.h>
+#include <linux/container_of.h>
#ifndef UINT_MAX
#define UINT_MAX (~0U)
@@ -24,19 +26,6 @@
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
#endif
-#ifndef container_of
-/**
- * container_of - cast a member of a structure out to the containing structure
- * @ptr: the pointer to the member.
- * @type: the type of the container struct this is embedded in.
- * @member: the name of the member within the struct.
- *
- */
-#define container_of(ptr, type, member) ({ \
- const typeof(((type *)0)->member) * __mptr = (ptr); \
- (type *)((char *)__mptr - offsetof(type, member)); })
-#endif
-
#ifndef max
#define max(x, y) ({ \
typeof(x) _max1 = (x); \
diff --git a/tools/include/linux/linkage.h b/tools/include/linux/linkage.h
index bc763d500262..7baaa5898ca2 100644
--- a/tools/include/linux/linkage.h
+++ b/tools/include/linux/linkage.h
@@ -1,4 +1,12 @@
#ifndef _TOOLS_INCLUDE_LINUX_LINKAGE_H
#define _TOOLS_INCLUDE_LINUX_LINKAGE_H
+#include <linux/export.h>
+
+#define SYM_FUNC_START(x) .globl x; x:
+#define SYM_FUNC_END(x)
+#define SYM_DATA_START(x) .globl x; x:
+#define SYM_DATA_START_LOCAL(x) x:
+#define SYM_DATA_END(x)
+
#endif /* _TOOLS_INCLUDE_LINUX_LINKAGE_H */
diff --git a/tools/include/linux/math64.h b/tools/include/linux/math64.h
index 4ad45d5943dc..8a67d478bf19 100644
--- a/tools/include/linux/math64.h
+++ b/tools/include/linux/math64.h
@@ -72,4 +72,9 @@ static inline u64 mul_u64_u64_div64(u64 a, u64 b, u64 c)
}
#endif
+static inline u64 div_u64(u64 dividend, u32 divisor)
+{
+ return dividend / divisor;
+}
+
#endif /* _LINUX_MATH64_H */
diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index f3c82ab5b14c..677c37e4a18c 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -2,8 +2,8 @@
#ifndef _TOOLS_LINUX_MM_H
#define _TOOLS_LINUX_MM_H
+#include <linux/align.h>
#include <linux/mmzone.h>
-#include <uapi/linux/const.h>
#define PAGE_SHIFT 12
#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
@@ -11,10 +11,8 @@
#define PHYS_ADDR_MAX (~(phys_addr_t)0)
-#define ALIGN(x, a) __ALIGN_KERNEL((x), (a))
-#define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a))
-
#define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
+#define PAGE_ALIGN_DOWN(addr) ALIGN_DOWN(addr, PAGE_SIZE)
#define __va(x) ((void *)((unsigned long)(x)))
#define __pa(x) ((unsigned long)(x))
@@ -27,6 +25,12 @@ static inline void *phys_to_virt(unsigned long address)
return __va(address);
}
+#define virt_to_phys virt_to_phys
+static inline phys_addr_t virt_to_phys(volatile void *address)
+{
+ return (phys_addr_t)address;
+}
+
void reserve_bootmem_region(phys_addr_t start, phys_addr_t end, int nid);
static inline void totalram_pages_inc(void)
@@ -37,4 +41,9 @@ static inline void totalram_pages_add(long count)
{
}
+static inline int early_pfn_to_nid(unsigned long pfn)
+{
+ return 0;
+}
+
#endif
diff --git a/tools/include/linux/moduleparam.h b/tools/include/linux/moduleparam.h
new file mode 100644
index 000000000000..4c4d05bef0cb
--- /dev/null
+++ b/tools/include/linux/moduleparam.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOOLS_LINUX_MODULE_PARAMS_H
+#define _TOOLS_LINUX_MODULE_PARAMS_H
+
+#define MODULE_PARM_DESC(parm, desc)
+
+#endif // _TOOLS_LINUX_MODULE_PARAMS_H
diff --git a/tools/include/linux/numa.h b/tools/include/linux/numa.h
index 110b0e5d0fb0..c8b9369335e0 100644
--- a/tools/include/linux/numa.h
+++ b/tools/include/linux/numa.h
@@ -13,4 +13,9 @@
#define NUMA_NO_NODE (-1)
+static inline bool numa_valid_node(int nid)
+{
+ return nid >= 0 && nid < MAX_NUMNODES;
+}
+
#endif /* _LINUX_NUMA_H */
diff --git a/tools/include/linux/objtool_types.h b/tools/include/linux/objtool_types.h
index 453a4f4ef39d..aceac94632c8 100644
--- a/tools/include/linux/objtool_types.h
+++ b/tools/include/linux/objtool_types.h
@@ -54,4 +54,17 @@ struct unwind_hint {
#define UNWIND_HINT_TYPE_SAVE 6
#define UNWIND_HINT_TYPE_RESTORE 7
+/*
+ * Annotate types
+ */
+#define ANNOTYPE_NOENDBR 1
+#define ANNOTYPE_RETPOLINE_SAFE 2
+#define ANNOTYPE_INSTR_BEGIN 3
+#define ANNOTYPE_INSTR_END 4
+#define ANNOTYPE_UNRET_BEGIN 5
+#define ANNOTYPE_IGNORE_ALTS 6
+#define ANNOTYPE_INTRA_FUNCTION_CALL 7
+#define ANNOTYPE_REACHABLE 8
+#define ANNOTYPE_NOCFI 9
+
#endif /* _LINUX_OBJTOOL_TYPES_H */
diff --git a/tools/include/linux/panic.h b/tools/include/linux/panic.h
new file mode 100644
index 000000000000..9c8f17a41ce8
--- /dev/null
+++ b/tools/include/linux/panic.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _TOOLS_LINUX_PANIC_H
+#define _TOOLS_LINUX_PANIC_H
+
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+static inline void panic(const char *fmt, ...)
+{
+ va_list argp;
+
+ va_start(argp, fmt);
+ vfprintf(stderr, fmt, argp);
+ va_end(argp);
+ exit(-1);
+}
+
+#endif
diff --git a/tools/include/linux/pci_ids.h b/tools/include/linux/pci_ids.h
new file mode 120000
index 000000000000..1c9e88f41261
--- /dev/null
+++ b/tools/include/linux/pci_ids.h
@@ -0,0 +1 @@
+../../../include/linux/pci_ids.h \ No newline at end of file
diff --git a/tools/include/linux/pfn.h b/tools/include/linux/pfn.h
index 7512a58189eb..f77a30d70152 100644
--- a/tools/include/linux/pfn.h
+++ b/tools/include/linux/pfn.h
@@ -7,4 +7,5 @@
#define PFN_UP(x) (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
#define PFN_PHYS(x) ((phys_addr_t)(x) << PAGE_SHIFT)
+#define PHYS_PFN(x) ((unsigned long)((x) >> PAGE_SHIFT))
#endif
diff --git a/tools/include/linux/poison.h b/tools/include/linux/poison.h
index 2e6338ac5eed..e530e54046c9 100644
--- a/tools/include/linux/poison.h
+++ b/tools/include/linux/poison.h
@@ -47,11 +47,8 @@
* Magic nums for obj red zoning.
* Placed in the first word before and the first word after an obj.
*/
-#define RED_INACTIVE 0x09F911029D74E35BULL /* when obj is inactive */
-#define RED_ACTIVE 0xD84156C5635688C0ULL /* when obj is active */
-
-#define SLUB_RED_INACTIVE 0xbb
-#define SLUB_RED_ACTIVE 0xcc
+#define SLUB_RED_INACTIVE 0xbb /* when obj is inactive */
+#define SLUB_RED_ACTIVE 0xcc /* when obj is active */
/* ...and for poisoning */
#define POISON_INUSE 0x5a /* for use-uninitialised poisoning */
diff --git a/tools/include/linux/prandom.h b/tools/include/linux/prandom.h
new file mode 100644
index 000000000000..b745041ccd6a
--- /dev/null
+++ b/tools/include/linux/prandom.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __TOOLS_LINUX_PRANDOM_H
+#define __TOOLS_LINUX_PRANDOM_H
+
+#include <linux/types.h>
+
+struct rnd_state {
+ __u32 s1, s2, s3, s4;
+};
+
+/*
+ * Handle minimum values for seeds
+ */
+static inline u32 __seed(u32 x, u32 m)
+{
+ return (x < m) ? x + m : x;
+}
+
+/**
+ * prandom_seed_state - set seed for prandom_u32_state().
+ * @state: pointer to state structure to receive the seed.
+ * @seed: arbitrary 64-bit value to use as a seed.
+ */
+static inline void prandom_seed_state(struct rnd_state *state, u64 seed)
+{
+ u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL;
+
+ state->s1 = __seed(i, 2U);
+ state->s2 = __seed(i, 8U);
+ state->s3 = __seed(i, 16U);
+ state->s4 = __seed(i, 128U);
+}
+
+/**
+ * prandom_u32_state - seeded pseudo-random number generator.
+ * @state: pointer to state structure holding seeded state.
+ *
+ * This is used for pseudo-randomness with no outside seeding.
+ * For more random results, use get_random_u32().
+ */
+static inline u32 prandom_u32_state(struct rnd_state *state)
+{
+#define TAUSWORTHE(s, a, b, c, d) (((s & c) << d) ^ (((s << a) ^ s) >> b))
+ state->s1 = TAUSWORTHE(state->s1, 6U, 13U, 4294967294U, 18U);
+ state->s2 = TAUSWORTHE(state->s2, 2U, 27U, 4294967288U, 2U);
+ state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U, 7U);
+ state->s4 = TAUSWORTHE(state->s4, 3U, 12U, 4294967168U, 13U);
+
+ return (state->s1 ^ state->s2 ^ state->s3 ^ state->s4);
+}
+#endif // __TOOLS_LINUX_PRANDOM_H
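For illustration, a reproducible sequence for a test, seeded once from an arbitrary fixed value; the helper is hypothetical and the sketch is not part of the patch.

/* Illustrative sketch only -- not part of the patch; the helper and seed are arbitrary. */
#include <stdbool.h>
#include <linux/prandom.h>
#include <linux/types.h>

static u32 next_test_value(void)
{
	static struct rnd_state state;
	static bool seeded;

	if (!seeded) {
		prandom_seed_state(&state, 0x0123456789abcdefULL);
		seeded = true;
	}
	return prandom_u32_state(&state);
}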
diff --git a/tools/include/linux/rbtree_augmented.h b/tools/include/linux/rbtree_augmented.h
index 570bb9794421..95483c7d81df 100644
--- a/tools/include/linux/rbtree_augmented.h
+++ b/tools/include/linux/rbtree_augmented.h
@@ -158,13 +158,13 @@ RB_DECLARE_CALLBACKS(RBSTATIC, RBNAME, \
static inline void rb_set_parent(struct rb_node *rb, struct rb_node *p)
{
- rb->__rb_parent_color = rb_color(rb) | (unsigned long)p;
+ rb->__rb_parent_color = rb_color(rb) + (unsigned long)p;
}
static inline void rb_set_parent_color(struct rb_node *rb,
struct rb_node *p, int color)
{
- rb->__rb_parent_color = (unsigned long)p | color;
+ rb->__rb_parent_color = (unsigned long)p + color;
}
static inline void
diff --git a/tools/include/linux/refcount.h b/tools/include/linux/refcount.h
index 36cb29bc57c2..1f30956e070d 100644
--- a/tools/include/linux/refcount.h
+++ b/tools/include/linux/refcount.h
@@ -60,6 +60,11 @@ static inline void refcount_set(refcount_t *r, unsigned int n)
atomic_set(&r->refs, n);
}
+static inline void refcount_set_release(refcount_t *r, unsigned int n)
+{
+ atomic_set(&r->refs, n);
+}
+
static inline unsigned int refcount_read(const refcount_t *r)
{
return atomic_read(&r->refs);
diff --git a/tools/include/linux/ring_buffer.h b/tools/include/linux/ring_buffer.h
index 6c02617377c2..a74c397359c7 100644
--- a/tools/include/linux/ring_buffer.h
+++ b/tools/include/linux/ring_buffer.h
@@ -55,7 +55,7 @@ static inline u64 ring_buffer_read_head(struct perf_event_mmap_page *base)
* READ_ONCE() + smp_mb() pair.
*/
#if defined(__x86_64__) || defined(__aarch64__) || defined(__powerpc64__) || \
- defined(__ia64__) || defined(__sparc__) && defined(__arch64__)
+ defined(__ia64__) || defined(__sparc__) && defined(__arch64__) || defined(__riscv)
return smp_load_acquire(&base->data_head);
#else
u64 head = READ_ONCE(base->data_head);
diff --git a/tools/include/linux/slab.h b/tools/include/linux/slab.h
index 311759ea25e9..94937a699402 100644
--- a/tools/include/linux/slab.h
+++ b/tools/include/linux/slab.h
@@ -4,25 +4,137 @@
#include <linux/types.h>
#include <linux/gfp.h>
+#include <pthread.h>
-#define SLAB_PANIC 2
#define SLAB_RECLAIM_ACCOUNT 0x00020000UL /* Objects are reclaimable */
#define kzalloc_node(size, flags, node) kmalloc(size, flags)
+enum _slab_flag_bits {
+ _SLAB_KMALLOC,
+ _SLAB_HWCACHE_ALIGN,
+ _SLAB_PANIC,
+ _SLAB_TYPESAFE_BY_RCU,
+ _SLAB_ACCOUNT,
+ _SLAB_FLAGS_LAST_BIT
+};
+
+#define __SLAB_FLAG_BIT(nr) ((unsigned int __force)(1U << (nr)))
+#define __SLAB_FLAG_UNUSED ((unsigned int __force)(0U))
+
+#define SLAB_HWCACHE_ALIGN __SLAB_FLAG_BIT(_SLAB_HWCACHE_ALIGN)
+#define SLAB_PANIC __SLAB_FLAG_BIT(_SLAB_PANIC)
+#define SLAB_TYPESAFE_BY_RCU __SLAB_FLAG_BIT(_SLAB_TYPESAFE_BY_RCU)
+#ifdef CONFIG_MEMCG
+# define SLAB_ACCOUNT __SLAB_FLAG_BIT(_SLAB_ACCOUNT)
+#else
+# define SLAB_ACCOUNT __SLAB_FLAG_UNUSED
+#endif
void *kmalloc(size_t size, gfp_t gfp);
void kfree(void *p);
+void *kmalloc_array(size_t n, size_t size, gfp_t gfp);
bool slab_is_available(void);
enum slab_state {
DOWN,
PARTIAL,
- PARTIAL_NODE,
UP,
FULL
};
+struct kmem_cache {
+ pthread_mutex_t lock;
+ unsigned int size;
+ unsigned int align;
+ unsigned int sheaf_capacity;
+ int nr_objs;
+ void *objs;
+ void (*ctor)(void *);
+ bool non_kernel_enabled;
+ unsigned int non_kernel;
+ unsigned long nr_allocated;
+ unsigned long nr_tallocated;
+ bool exec_callback;
+ void (*callback)(void *);
+ void *private;
+};
+
+struct kmem_cache_args {
+ /**
+ * @align: The required alignment for the objects.
+ *
+ * %0 means no specific alignment is requested.
+ */
+ unsigned int align;
+ /**
+ * @sheaf_capacity: The maximum size of the sheaf.
+ */
+ unsigned int sheaf_capacity;
+ /**
+ * @useroffset: Usercopy region offset.
+ *
+ * %0 is a valid offset, when @usersize is non-%0
+ */
+ unsigned int useroffset;
+ /**
+ * @usersize: Usercopy region size.
+ *
+ * %0 means no usercopy region is specified.
+ */
+ unsigned int usersize;
+ /**
+ * @freeptr_offset: Custom offset for the free pointer
+ * in &SLAB_TYPESAFE_BY_RCU caches
+ *
+ * By default &SLAB_TYPESAFE_BY_RCU caches place the free pointer
+ * outside of the object. This might cause the object to grow in size.
+ * Cache creators that have a reason to avoid this can specify a custom
+ * free pointer offset in their struct where the free pointer will be
+ * placed.
+ *
+ * Note that placing the free pointer inside the object requires the
+ * caller to ensure that no fields are invalidated that are required to
+ * guard against object recycling (See &SLAB_TYPESAFE_BY_RCU for
+ * details).
+ *
+ * Using %0 as a value for @freeptr_offset is valid. If @freeptr_offset
+ * is specified, %use_freeptr_offset must be set %true.
+ *
+ * Note that @ctor currently isn't supported with custom free pointers
+ * as a @ctor requires an external free pointer.
+ */
+ unsigned int freeptr_offset;
+ /**
+ * @use_freeptr_offset: Whether a @freeptr_offset is used.
+ */
+ bool use_freeptr_offset;
+ /**
+ * @ctor: A constructor for the objects.
+ *
+ * The constructor is invoked for each object in a newly allocated slab
+ * page. It is the cache user's responsibility to free the object in the
+ * same state as after calling the constructor, or deal appropriately
+ * with any differences between a freshly constructed and a reallocated
+ * object.
+ *
+ * %NULL means no constructor.
+ */
+ void (*ctor)(void *);
+};
+
+struct slab_sheaf {
+ union {
+ struct list_head barn_list;
+ /* only used for prefilled sheafs */
+ unsigned int capacity;
+ };
+ struct kmem_cache *cache;
+ unsigned int size;
+ int node; /* only used for rcu_sheaf */
+ void *objects[];
+};
+
static inline void *kzalloc(size_t size, gfp_t gfp)
{
return kmalloc(size, gfp | __GFP_ZERO);
@@ -37,12 +149,57 @@ static inline void *kmem_cache_alloc(struct kmem_cache *cachep, int flags)
}
void kmem_cache_free(struct kmem_cache *cachep, void *objp);
-struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
- unsigned int align, unsigned int flags,
- void (*ctor)(void *));
+
+struct kmem_cache *
+__kmem_cache_create_args(const char *name, unsigned int size,
+ struct kmem_cache_args *args, unsigned int flags);
+
+/* If NULL is passed for @args, use this variant with default arguments. */
+static inline struct kmem_cache *
+__kmem_cache_default_args(const char *name, unsigned int size,
+ struct kmem_cache_args *args, unsigned int flags)
+{
+ struct kmem_cache_args kmem_default_args = {};
+
+ return __kmem_cache_create_args(name, size, &kmem_default_args, flags);
+}
+
+static inline struct kmem_cache *
+__kmem_cache_create(const char *name, unsigned int size, unsigned int align,
+ unsigned int flags, void (*ctor)(void *))
+{
+ struct kmem_cache_args kmem_args = {
+ .align = align,
+ .ctor = ctor,
+ };
+
+ return __kmem_cache_create_args(name, size, &kmem_args, flags);
+}
+
+#define kmem_cache_create(__name, __object_size, __args, ...) \
+ _Generic((__args), \
+ struct kmem_cache_args *: __kmem_cache_create_args, \
+ void *: __kmem_cache_default_args, \
+ default: __kmem_cache_create)(__name, __object_size, __args, __VA_ARGS__)
void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **list);
int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
void **list);
+struct slab_sheaf *
+kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size);
+
+void *
+kmem_cache_alloc_from_sheaf(struct kmem_cache *s, gfp_t gfp,
+ struct slab_sheaf *sheaf);
+
+void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
+ struct slab_sheaf *sheaf);
+int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
+ struct slab_sheaf **sheafp, unsigned int size);
+
+static inline unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
+{
+ return sheaf->size;
+}
#endif /* _TOOLS_SLAB_H */
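A rough sketch of the three call shapes accepted by the _Generic-based kmem_cache_create() wrapper above, assuming this header is already included; the "foo" type, its constructor and the cache names are made up:

struct foo { int a; };

static void foo_ctor(void *obj)
{
	((struct foo *)obj)->a = 0;
}

static void create_caches(void)
{
	struct kmem_cache_args args = {
		.align          = __alignof__(struct foo),
		.sheaf_capacity = 32,
	};
	struct kmem_cache *c1, *c2, *c3;

	/* a struct kmem_cache_args * selects __kmem_cache_create_args() */
	c1 = kmem_cache_create("foo-args", sizeof(struct foo), &args, SLAB_PANIC);

	/* NULL (a void * null pointer constant) selects __kmem_cache_default_args() */
	c2 = kmem_cache_create("foo-default", sizeof(struct foo), NULL, SLAB_PANIC);

	/* the legacy 5-argument form falls through to __kmem_cache_create() */
	c3 = kmem_cache_create("foo-legacy", sizeof(struct foo),
			       __alignof__(struct foo), SLAB_PANIC, foo_ctor);

	(void)c1; (void)c2; (void)c3;
}

A prefilled sheaf would then be obtained with kmem_cache_prefill_sheaf() and consumed with kmem_cache_alloc_from_sheaf(), per the declarations above.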
diff --git a/tools/include/linux/string.h b/tools/include/linux/string.h
index db5c99318c79..8499f509f03e 100644
--- a/tools/include/linux/string.h
+++ b/tools/include/linux/string.h
@@ -12,6 +12,8 @@ void argv_free(char **argv);
int strtobool(const char *s, bool *res);
+#define strscpy strcpy
+
/*
* glibc based builds needs the extern while uClibc doesn't.
* However uClibc headers also define __GLIBC__ hence the hack below
@@ -46,5 +48,8 @@ extern char * __must_check skip_spaces(const char *);
extern char *strim(char *);
+extern void remove_spaces(char *s);
+
extern void *memchr_inv(const void *start, int c, size_t bytes);
+extern unsigned long long memparse(const char *ptr, char **retptr);
#endif /* _TOOLS_LINUX_STRING_H_ */
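memparse() is only declared here; assuming its definition is provided by tools/lib, it follows the kernel semantics of parsing a size with an optional K/M/G suffix and advancing *retptr past the parsed part, e.g.:

#include <stdio.h>
#include <linux/string.h>

int main(void)
{
	const char *arg = "128K,rest";
	char *end;
	unsigned long long bytes = memparse(arg, &end);

	/* Prints: 131072 bytes, remainder ",rest" */
	printf("%llu bytes, remainder \"%s\"\n", bytes, end);
	return 0;
}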
diff --git a/tools/include/linux/types.h b/tools/include/linux/types.h
index 8519386acd23..4928e33d44ac 100644
--- a/tools/include/linux/types.h
+++ b/tools/include/linux/types.h
@@ -42,6 +42,8 @@ typedef __s16 s16;
typedef __u8 u8;
typedef __s8 s8;
+typedef unsigned long long ullong;
+
#ifdef __CHECKER__
#define __bitwise __attribute__((bitwise))
#else
diff --git a/tools/include/asm-generic/unaligned.h b/tools/include/linux/unaligned.h
index cdd2fd078027..395a4464fe73 100644
--- a/tools/include/asm-generic/unaligned.h
+++ b/tools/include/linux/unaligned.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_GENERIC_UNALIGNED_H
-#define __ASM_GENERIC_UNALIGNED_H
+#ifndef __LINUX_UNALIGNED_H
+#define __LINUX_UNALIGNED_H
/*
* This is the most generic implementation of unaligned accesses
@@ -9,16 +9,7 @@
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wpacked"
#pragma GCC diagnostic ignored "-Wattributes"
-
-#define __get_unaligned_t(type, ptr) ({ \
- const struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \
- __pptr->x; \
-})
-
-#define __put_unaligned_t(type, val, ptr) do { \
- struct { type x; } __packed *__pptr = (typeof(__pptr))(ptr); \
- __pptr->x = (val); \
-} while (0)
+#include <vdso/unaligned.h>
#define get_unaligned(ptr) __get_unaligned_t(typeof(*(ptr)), (ptr))
#define put_unaligned(val, ptr) __put_unaligned_t(typeof(*(ptr)), (val), (ptr))
@@ -154,4 +145,4 @@ static inline u64 get_unaligned_be48(const void *p)
}
#pragma GCC diagnostic pop
-#endif /* __ASM_GENERIC_UNALIGNED_H */
+#endif /* __LINUX_UNALIGNED_H */
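A small sketch of the accessors now provided through <vdso/unaligned.h>, assuming the tools include path; the buffer contents are arbitrary:

#include <stdio.h>
#include <linux/types.h>
#include <linux/unaligned.h>

int main(void)
{
	unsigned char buf[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };
	u32 v;

	/* 32-bit load from a misaligned address via the packed-struct trick. */
	v = get_unaligned((u32 *)(buf + 1));
	printf("native: %#x\n", v);

	/* Store back, again with no alignment requirement. */
	put_unaligned(v, (u32 *)(buf + 3));
	return 0;
}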
diff --git a/tools/include/nolibc/Makefile b/tools/include/nolibc/Makefile
index e69c26abe1ea..143c2d2c2ba6 100644
--- a/tools/include/nolibc/Makefile
+++ b/tools/include/nolibc/Makefile
@@ -23,22 +23,47 @@ else
Q=@
endif
-nolibc_arch := $(patsubst arm64,aarch64,$(ARCH))
-arch_file := arch-$(nolibc_arch).h
+arch_file := arch-$(ARCH).h
all_files := \
compiler.h \
crt.h \
ctype.h \
+ dirent.h \
+ elf.h \
errno.h \
+ fcntl.h \
+ getopt.h \
+ limits.h \
+ math.h \
nolibc.h \
+ poll.h \
+ sched.h \
signal.h \
stackprotector.h \
std.h \
stdarg.h \
+ stdbool.h \
+ stddef.h \
stdint.h \
stdlib.h \
string.h \
sys.h \
+ sys/auxv.h \
+ sys/ioctl.h \
+ sys/mman.h \
+ sys/mount.h \
+ sys/prctl.h \
+ sys/random.h \
+ sys/reboot.h \
+ sys/resource.h \
+ sys/stat.h \
+ sys/syscall.h \
+ sys/sysmacros.h \
+ sys/time.h \
+ sys/timerfd.h \
+ sys/types.h \
+ sys/utsname.h \
+ sys/wait.h \
time.h \
types.h \
unistd.h \
@@ -65,18 +90,12 @@ help:
@echo " OUTPUT = $(OUTPUT)"
@echo ""
-# Note: when ARCH is "x86" we concatenate both x86_64 and i386
headers:
$(Q)mkdir -p $(OUTPUT)sysroot
$(Q)mkdir -p $(OUTPUT)sysroot/include
- $(Q)cp $(all_files) $(OUTPUT)sysroot/include/
- $(Q)if [ "$(ARCH)" = "x86" ]; then \
- sed -e \
- 's,^#ifndef _NOLIBC_ARCH_X86_64_H,#if !defined(_NOLIBC_ARCH_X86_64_H) \&\& defined(__x86_64__),' \
- arch-x86_64.h; \
- sed -e \
- 's,^#ifndef _NOLIBC_ARCH_I386_H,#if !defined(_NOLIBC_ARCH_I386_H) \&\& !defined(__x86_64__),' \
- arch-i386.h; \
+ $(Q)cp --parents $(all_files) $(OUTPUT)sysroot/include/
+ $(Q)if [ "$(ARCH)" = "i386" -o "$(ARCH)" = "x86_64" ]; then \
+ cat arch-x86.h; \
elif [ -e "$(arch_file)" ]; then \
cat $(arch_file); \
else \
@@ -88,5 +107,11 @@ headers_standalone: headers
$(Q)$(MAKE) -C $(srctree) headers
$(Q)$(MAKE) -C $(srctree) headers_install INSTALL_HDR_PATH=$(OUTPUT)sysroot
+headers_check: headers_standalone
+ $(Q)for header in $(filter-out crt.h std.h,$(all_files)); do \
+ $(CC) $(CLANG_CROSS_FLAGS) -Wall -Werror -nostdinc -fsyntax-only -x c /dev/null \
+ -I$(or $(objtree),$(srctree))/usr/include -include $$header -include $$header || exit 1; \
+ done
+
clean:
$(call QUIET_CLEAN, nolibc) rm -rf "$(OUTPUT)sysroot"
diff --git a/tools/include/nolibc/arch-arm.h b/tools/include/nolibc/arch-arm.h
index cae4afa7c1c7..1f66e7e5a444 100644
--- a/tools/include/nolibc/arch-arm.h
+++ b/tools/include/nolibc/arch-arm.h
@@ -185,15 +185,13 @@
})
/* startup code */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
{
__asm__ volatile (
- "mov %r0, sp\n" /* save stack pointer to %r0, as arg1 of _start_c */
- "and ip, %r0, #-8\n" /* sp must be 8-byte aligned in the callee */
- "mov sp, ip\n"
+ "mov r0, sp\n" /* save stack pointer to %r0, as arg1 of _start_c */
"bl _start_c\n" /* transfer to c runtime */
);
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
#endif /* _NOLIBC_ARCH_ARM_H */
diff --git a/tools/include/nolibc/arch-aarch64.h b/tools/include/nolibc/arch-arm64.h
index b23ac1f04035..02a3f74c8ec8 100644
--- a/tools/include/nolibc/arch-aarch64.h
+++ b/tools/include/nolibc/arch-arm64.h
@@ -1,16 +1,16 @@
/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
/*
- * AARCH64 specific definitions for NOLIBC
+ * ARM64 specific definitions for NOLIBC
* Copyright (C) 2017-2022 Willy Tarreau <w@1wt.eu>
*/
-#ifndef _NOLIBC_ARCH_AARCH64_H
-#define _NOLIBC_ARCH_AARCH64_H
+#ifndef _NOLIBC_ARCH_ARM64_H
+#define _NOLIBC_ARCH_ARM64_H
#include "compiler.h"
#include "crt.h"
-/* Syscalls for AARCH64 :
+/* Syscalls for ARM64 :
* - registers are 64-bit
* - stack is 16-byte aligned
* - syscall number is passed in x8
@@ -142,13 +142,12 @@
})
/* startup code */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
{
__asm__ volatile (
"mov x0, sp\n" /* save stack pointer to x0, as arg1 of _start_c */
- "and sp, x0, -16\n" /* sp must be 16-byte aligned in the callee */
"bl _start_c\n" /* transfer to c runtime */
);
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
-#endif /* _NOLIBC_ARCH_AARCH64_H */
+#endif /* _NOLIBC_ARCH_ARM64_H */
diff --git a/tools/include/nolibc/arch-i386.h b/tools/include/nolibc/arch-i386.h
deleted file mode 100644
index 28c26a00a762..000000000000
--- a/tools/include/nolibc/arch-i386.h
+++ /dev/null
@@ -1,180 +0,0 @@
-/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
-/*
- * i386 specific definitions for NOLIBC
- * Copyright (C) 2017-2022 Willy Tarreau <w@1wt.eu>
- */
-
-#ifndef _NOLIBC_ARCH_I386_H
-#define _NOLIBC_ARCH_I386_H
-
-#include "compiler.h"
-#include "crt.h"
-
-/* Syscalls for i386 :
- * - mostly similar to x86_64
- * - registers are 32-bit
- * - syscall number is passed in eax
- * - arguments are in ebx, ecx, edx, esi, edi, ebp respectively
- * - all registers are preserved (except eax of course)
- * - the system call is performed by calling int $0x80
- * - syscall return comes in eax
- * - the arguments are cast to long and assigned into the target registers
- * which are then simply passed as registers to the asm code, so that we
- * don't have to experience issues with register constraints.
- * - the syscall number is always specified last in order to allow to force
- * some registers before (gcc refuses a %-register at the last position).
- *
- * Also, i386 supports the old_select syscall if newselect is not available
- */
-#define __ARCH_WANT_SYS_OLD_SELECT
-
-#define my_syscall0(num) \
-({ \
- long _ret; \
- register long _num __asm__ ("eax") = (num); \
- \
- __asm__ volatile ( \
- "int $0x80\n" \
- : "=a" (_ret) \
- : "0"(_num) \
- : "memory", "cc" \
- ); \
- _ret; \
-})
-
-#define my_syscall1(num, arg1) \
-({ \
- long _ret; \
- register long _num __asm__ ("eax") = (num); \
- register long _arg1 __asm__ ("ebx") = (long)(arg1); \
- \
- __asm__ volatile ( \
- "int $0x80\n" \
- : "=a" (_ret) \
- : "r"(_arg1), \
- "0"(_num) \
- : "memory", "cc" \
- ); \
- _ret; \
-})
-
-#define my_syscall2(num, arg1, arg2) \
-({ \
- long _ret; \
- register long _num __asm__ ("eax") = (num); \
- register long _arg1 __asm__ ("ebx") = (long)(arg1); \
- register long _arg2 __asm__ ("ecx") = (long)(arg2); \
- \
- __asm__ volatile ( \
- "int $0x80\n" \
- : "=a" (_ret) \
- : "r"(_arg1), "r"(_arg2), \
- "0"(_num) \
- : "memory", "cc" \
- ); \
- _ret; \
-})
-
-#define my_syscall3(num, arg1, arg2, arg3) \
-({ \
- long _ret; \
- register long _num __asm__ ("eax") = (num); \
- register long _arg1 __asm__ ("ebx") = (long)(arg1); \
- register long _arg2 __asm__ ("ecx") = (long)(arg2); \
- register long _arg3 __asm__ ("edx") = (long)(arg3); \
- \
- __asm__ volatile ( \
- "int $0x80\n" \
- : "=a" (_ret) \
- : "r"(_arg1), "r"(_arg2), "r"(_arg3), \
- "0"(_num) \
- : "memory", "cc" \
- ); \
- _ret; \
-})
-
-#define my_syscall4(num, arg1, arg2, arg3, arg4) \
-({ \
- long _ret; \
- register long _num __asm__ ("eax") = (num); \
- register long _arg1 __asm__ ("ebx") = (long)(arg1); \
- register long _arg2 __asm__ ("ecx") = (long)(arg2); \
- register long _arg3 __asm__ ("edx") = (long)(arg3); \
- register long _arg4 __asm__ ("esi") = (long)(arg4); \
- \
- __asm__ volatile ( \
- "int $0x80\n" \
- : "=a" (_ret) \
- : "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), \
- "0"(_num) \
- : "memory", "cc" \
- ); \
- _ret; \
-})
-
-#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
-({ \
- long _ret; \
- register long _num __asm__ ("eax") = (num); \
- register long _arg1 __asm__ ("ebx") = (long)(arg1); \
- register long _arg2 __asm__ ("ecx") = (long)(arg2); \
- register long _arg3 __asm__ ("edx") = (long)(arg3); \
- register long _arg4 __asm__ ("esi") = (long)(arg4); \
- register long _arg5 __asm__ ("edi") = (long)(arg5); \
- \
- __asm__ volatile ( \
- "int $0x80\n" \
- : "=a" (_ret) \
- : "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), \
- "0"(_num) \
- : "memory", "cc" \
- ); \
- _ret; \
-})
-
-#define my_syscall6(num, arg1, arg2, arg3, arg4, arg5, arg6) \
-({ \
- long _eax = (long)(num); \
- long _arg6 = (long)(arg6); /* Always in memory */ \
- __asm__ volatile ( \
- "pushl %[_arg6]\n\t" \
- "pushl %%ebp\n\t" \
- "movl 4(%%esp),%%ebp\n\t" \
- "int $0x80\n\t" \
- "popl %%ebp\n\t" \
- "addl $4,%%esp\n\t" \
- : "+a"(_eax) /* %eax */ \
- : "b"(arg1), /* %ebx */ \
- "c"(arg2), /* %ecx */ \
- "d"(arg3), /* %edx */ \
- "S"(arg4), /* %esi */ \
- "D"(arg5), /* %edi */ \
- [_arg6]"m"(_arg6) /* memory */ \
- : "memory", "cc" \
- ); \
- _eax; \
-})
-
-/* startup code */
-/*
- * i386 System V ABI mandates:
- * 1) last pushed argument must be 16-byte aligned.
- * 2) The deepest stack frame should be set to zero
- *
- */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
-{
- __asm__ volatile (
- "xor %ebp, %ebp\n" /* zero the stack frame */
- "mov %esp, %eax\n" /* save stack pointer to %eax, as arg1 of _start_c */
- "add $12, %esp\n" /* avoid over-estimating after the 'and' & 'sub' below */
- "and $-16, %esp\n" /* the %esp must be 16-byte aligned on 'call' */
- "sub $12, %esp\n" /* sub 12 to keep it aligned after the push %eax */
- "push %eax\n" /* push arg1 on stack to support plain stack modes too */
- "call _start_c\n" /* transfer to c runtime */
- "hlt\n" /* ensure it does not return */
- );
- __builtin_unreachable();
-}
-
-#endif /* _NOLIBC_ARCH_I386_H */
diff --git a/tools/include/nolibc/arch-loongarch.h b/tools/include/nolibc/arch-loongarch.h
index 3f8ef8f86c0f..5511705303ea 100644
--- a/tools/include/nolibc/arch-loongarch.h
+++ b/tools/include/nolibc/arch-loongarch.h
@@ -142,21 +142,14 @@
_arg1; \
})
-#if __loongarch_grlen == 32
-#define LONG_BSTRINS "bstrins.w"
-#else /* __loongarch_grlen == 64 */
-#define LONG_BSTRINS "bstrins.d"
-#endif
-
/* startup code */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
{
__asm__ volatile (
"move $a0, $sp\n" /* save stack pointer to $a0, as arg1 of _start_c */
- LONG_BSTRINS " $sp, $zero, 3, 0\n" /* $sp must be 16-byte aligned */
"bl _start_c\n" /* transfer to c runtime */
);
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
#endif /* _NOLIBC_ARCH_LOONGARCH_H */
diff --git a/tools/include/nolibc/arch-m68k.h b/tools/include/nolibc/arch-m68k.h
new file mode 100644
index 000000000000..6dac1845f298
--- /dev/null
+++ b/tools/include/nolibc/arch-m68k.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * m68k specific definitions for NOLIBC
+ * Copyright (C) 2025 Daniel Palmer <daniel@thingy.jp>
+ *
+ * Roughly based on one or more of the other arch files.
+ *
+ */
+
+#ifndef _NOLIBC_ARCH_M68K_H
+#define _NOLIBC_ARCH_M68K_H
+
+#include "compiler.h"
+#include "crt.h"
+
+#define _NOLIBC_SYSCALL_CLOBBERLIST "memory"
+
+#define my_syscall0(num) \
+({ \
+ register long _num __asm__ ("d0") = (num); \
+ \
+ __asm__ volatile ( \
+ "trap #0\n" \
+ : "+r"(_num) \
+ : "r"(_num) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _num; \
+})
+
+#define my_syscall1(num, arg1) \
+({ \
+ register long _num __asm__ ("d0") = (num); \
+ register long _arg1 __asm__ ("d1") = (long)(arg1); \
+ \
+ __asm__ volatile ( \
+ "trap #0\n" \
+ : "+r"(_num) \
+ : "r"(_arg1) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _num; \
+})
+
+#define my_syscall2(num, arg1, arg2) \
+({ \
+ register long _num __asm__ ("d0") = (num); \
+ register long _arg1 __asm__ ("d1") = (long)(arg1); \
+ register long _arg2 __asm__ ("d2") = (long)(arg2); \
+ \
+ __asm__ volatile ( \
+ "trap #0\n" \
+ : "+r"(_num) \
+ : "r"(_arg1), "r"(_arg2) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _num; \
+})
+
+#define my_syscall3(num, arg1, arg2, arg3) \
+({ \
+ register long _num __asm__ ("d0") = (num); \
+ register long _arg1 __asm__ ("d1") = (long)(arg1); \
+ register long _arg2 __asm__ ("d2") = (long)(arg2); \
+ register long _arg3 __asm__ ("d3") = (long)(arg3); \
+ \
+ __asm__ volatile ( \
+ "trap #0\n" \
+ : "+r"(_num) \
+ : "r"(_arg1), "r"(_arg2), "r"(_arg3) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _num; \
+})
+
+#define my_syscall4(num, arg1, arg2, arg3, arg4) \
+({ \
+ register long _num __asm__ ("d0") = (num); \
+ register long _arg1 __asm__ ("d1") = (long)(arg1); \
+ register long _arg2 __asm__ ("d2") = (long)(arg2); \
+ register long _arg3 __asm__ ("d3") = (long)(arg3); \
+ register long _arg4 __asm__ ("d4") = (long)(arg4); \
+ \
+ __asm__ volatile ( \
+ "trap #0\n" \
+ : "+r" (_num) \
+ : "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _num; \
+})
+
+#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
+({ \
+ register long _num __asm__ ("d0") = (num); \
+ register long _arg1 __asm__ ("d1") = (long)(arg1); \
+ register long _arg2 __asm__ ("d2") = (long)(arg2); \
+ register long _arg3 __asm__ ("d3") = (long)(arg3); \
+ register long _arg4 __asm__ ("d4") = (long)(arg4); \
+ register long _arg5 __asm__ ("d5") = (long)(arg5); \
+ \
+ __asm__ volatile ( \
+ "trap #0\n" \
+ : "+r" (_num) \
+ : "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _num; \
+})
+
+#define my_syscall6(num, arg1, arg2, arg3, arg4, arg5, arg6) \
+({ \
+ register long _num __asm__ ("d0") = (num); \
+ register long _arg1 __asm__ ("d1") = (long)(arg1); \
+ register long _arg2 __asm__ ("d2") = (long)(arg2); \
+ register long _arg3 __asm__ ("d3") = (long)(arg3); \
+ register long _arg4 __asm__ ("d4") = (long)(arg4); \
+ register long _arg5 __asm__ ("d5") = (long)(arg5); \
+ register long _arg6 __asm__ ("a0") = (long)(arg6); \
+ \
+ __asm__ volatile ( \
+ "trap #0\n" \
+ : "+r" (_num) \
+ : "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), \
+ "r"(_arg6) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _num; \
+})
+
+void _start(void);
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
+{
+ __asm__ volatile (
+ "movel %sp, %sp@-\n"
+ "jsr _start_c\n"
+ );
+ __nolibc_entrypoint_epilogue();
+}
+
+#endif /* _NOLIBC_ARCH_M68K_H */
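The arch headers only define the my_syscall0..my_syscall6 macros and the startup stub; the libc-style wrappers are built on top in the generic nolibc headers. A sketch of that pattern (the real write()/sys_write() live in sys.h, not here):

static __attribute__((unused))
ssize_t sys_write(int fd, const void *buf, size_t count)
{
	/* d0 = __NR_write, d1/d2/d3 = arguments, "trap #0" enters the kernel */
	return my_syscall3(__NR_write, fd, buf, count);
}

static __attribute__((unused))
ssize_t write(int fd, const void *buf, size_t count)
{
	/* __sysret() converts a -errno return into -1 with errno set */
	return __sysret(sys_write(fd, buf, count));
}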
diff --git a/tools/include/nolibc/arch-mips.h b/tools/include/nolibc/arch-mips.h
index 62cc50ef3288..0cbac63b249a 100644
--- a/tools/include/nolibc/arch-mips.h
+++ b/tools/include/nolibc/arch-mips.h
@@ -10,7 +10,7 @@
#include "compiler.h"
#include "crt.h"
-#if !defined(_ABIO32)
+#if !defined(_ABIO32) && !defined(_ABIN32) && !defined(_ABI64)
#error Unsupported MIPS ABI
#endif
@@ -32,11 +32,32 @@
* - the arguments are cast to long and assigned into the target registers
* which are then simply passed as registers to the asm code, so that we
* don't have to experience issues with register constraints.
+ *
+ * Syscalls for MIPS ABI N32, same as ABI O32 with the following differences :
+ * - arguments are in a0, a1, a2, a3, t0, t1, t2, t3.
+ * t0..t3 are also known as a4..a7.
+ * - stack is 16-byte aligned
*/
+#if defined(_ABIO32)
+
#define _NOLIBC_SYSCALL_CLOBBERLIST \
"memory", "cc", "at", "v1", "hi", "lo", \
"t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8", "t9"
+#define _NOLIBC_SYSCALL_STACK_RESERVE "addiu $sp, $sp, -32\n"
+#define _NOLIBC_SYSCALL_STACK_UNRESERVE "addiu $sp, $sp, 32\n"
+
+#else /* _ABIN32 || _ABI64 */
+
+/* binutils, GCC and clang disagree about register aliases, use numbers instead. */
+#define _NOLIBC_SYSCALL_CLOBBERLIST \
+ "memory", "cc", "at", "v1", \
+ "10", "11", "12", "13", "14", "15", "24", "25"
+
+#define _NOLIBC_SYSCALL_STACK_RESERVE
+#define _NOLIBC_SYSCALL_STACK_UNRESERVE
+
+#endif /* _ABIO32 */
#define my_syscall0(num) \
({ \
@@ -44,9 +65,9 @@
register long _arg4 __asm__ ("a3"); \
\
__asm__ volatile ( \
- "addiu $sp, $sp, -32\n" \
+ _NOLIBC_SYSCALL_STACK_RESERVE \
"syscall\n" \
- "addiu $sp, $sp, 32\n" \
+ _NOLIBC_SYSCALL_STACK_UNRESERVE \
: "=r"(_num), "=r"(_arg4) \
: "r"(_num) \
: _NOLIBC_SYSCALL_CLOBBERLIST \
@@ -61,9 +82,9 @@
register long _arg4 __asm__ ("a3"); \
\
__asm__ volatile ( \
- "addiu $sp, $sp, -32\n" \
+ _NOLIBC_SYSCALL_STACK_RESERVE \
"syscall\n" \
- "addiu $sp, $sp, 32\n" \
+ _NOLIBC_SYSCALL_STACK_UNRESERVE \
: "=r"(_num), "=r"(_arg4) \
: "0"(_num), \
"r"(_arg1) \
@@ -80,9 +101,9 @@
register long _arg4 __asm__ ("a3"); \
\
__asm__ volatile ( \
- "addiu $sp, $sp, -32\n" \
+ _NOLIBC_SYSCALL_STACK_RESERVE \
"syscall\n" \
- "addiu $sp, $sp, 32\n" \
+ _NOLIBC_SYSCALL_STACK_UNRESERVE \
: "=r"(_num), "=r"(_arg4) \
: "0"(_num), \
"r"(_arg1), "r"(_arg2) \
@@ -100,9 +121,9 @@
register long _arg4 __asm__ ("a3"); \
\
__asm__ volatile ( \
- "addiu $sp, $sp, -32\n" \
+ _NOLIBC_SYSCALL_STACK_RESERVE \
"syscall\n" \
- "addiu $sp, $sp, 32\n" \
+ _NOLIBC_SYSCALL_STACK_UNRESERVE \
: "=r"(_num), "=r"(_arg4) \
: "0"(_num), \
"r"(_arg1), "r"(_arg2), "r"(_arg3) \
@@ -120,9 +141,9 @@
register long _arg4 __asm__ ("a3") = (long)(arg4); \
\
__asm__ volatile ( \
- "addiu $sp, $sp, -32\n" \
+ _NOLIBC_SYSCALL_STACK_RESERVE \
"syscall\n" \
- "addiu $sp, $sp, 32\n" \
+ _NOLIBC_SYSCALL_STACK_UNRESERVE \
: "=r" (_num), "=r"(_arg4) \
: "0"(_num), \
"r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4) \
@@ -131,6 +152,8 @@
_arg4 ? -_num : _num; \
})
+#if defined(_ABIO32)
+
#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
({ \
register long _num __asm__ ("v0") = (num); \
@@ -141,10 +164,10 @@
register long _arg5 = (long)(arg5); \
\
__asm__ volatile ( \
- "addiu $sp, $sp, -32\n" \
+ _NOLIBC_SYSCALL_STACK_RESERVE \
"sw %7, 16($sp)\n" \
"syscall\n" \
- "addiu $sp, $sp, 32\n" \
+ _NOLIBC_SYSCALL_STACK_UNRESERVE \
: "=r" (_num), "=r"(_arg4) \
: "0"(_num), \
"r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5) \
@@ -164,11 +187,53 @@
register long _arg6 = (long)(arg6); \
\
__asm__ volatile ( \
- "addiu $sp, $sp, -32\n" \
+ _NOLIBC_SYSCALL_STACK_RESERVE \
"sw %7, 16($sp)\n" \
"sw %8, 20($sp)\n" \
"syscall\n" \
- "addiu $sp, $sp, 32\n" \
+ _NOLIBC_SYSCALL_STACK_UNRESERVE \
+ : "=r" (_num), "=r"(_arg4) \
+ : "0"(_num), \
+ "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), \
+ "r"(_arg6) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _arg4 ? -_num : _num; \
+})
+
+#else /* _ABIN32 || _ABI64 */
+
+#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
+({ \
+ register long _num __asm__ ("v0") = (num); \
+ register long _arg1 __asm__ ("$4") = (long)(arg1); \
+ register long _arg2 __asm__ ("$5") = (long)(arg2); \
+ register long _arg3 __asm__ ("$6") = (long)(arg3); \
+ register long _arg4 __asm__ ("$7") = (long)(arg4); \
+ register long _arg5 __asm__ ("$8") = (long)(arg5); \
+ \
+ __asm__ volatile ( \
+ "syscall\n" \
+ : "=r" (_num), "=r"(_arg4) \
+ : "0"(_num), \
+ "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5) \
+ : _NOLIBC_SYSCALL_CLOBBERLIST \
+ ); \
+ _arg4 ? -_num : _num; \
+})
+
+#define my_syscall6(num, arg1, arg2, arg3, arg4, arg5, arg6) \
+({ \
+ register long _num __asm__ ("v0") = (num); \
+ register long _arg1 __asm__ ("$4") = (long)(arg1); \
+ register long _arg2 __asm__ ("$5") = (long)(arg2); \
+ register long _arg3 __asm__ ("$6") = (long)(arg3); \
+ register long _arg4 __asm__ ("$7") = (long)(arg4); \
+ register long _arg5 __asm__ ("$8") = (long)(arg5); \
+ register long _arg6 __asm__ ("$9") = (long)(arg6); \
+ \
+ __asm__ volatile ( \
+ "syscall\n" \
: "=r" (_num), "=r"(_arg4) \
: "0"(_num), \
"r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), \
@@ -178,27 +243,28 @@
_arg4 ? -_num : _num; \
})
+#endif /* _ABIO32 */
+
/* startup code, note that it's called __start on MIPS */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector __start(void)
+void __start(void);
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector __start(void)
{
__asm__ volatile (
- ".set push\n"
- ".set noreorder\n"
- "bal 1f\n" /* prime $ra for .cpload */
- "nop\n"
- "1:\n"
- ".cpload $ra\n"
"move $a0, $sp\n" /* save stack pointer to $a0, as arg1 of _start_c */
- "addiu $sp, $sp, -4\n" /* space for .cprestore to store $gp */
- ".cprestore 0\n"
- "li $t0, -8\n"
- "and $sp, $sp, $t0\n" /* $sp must be 8-byte aligned */
+#if defined(_ABIO32)
"addiu $sp, $sp, -16\n" /* the callee expects to save a0..a3 there */
- "jal _start_c\n" /* transfer to c runtime */
- " nop\n" /* delayed slot */
- ".set pop\n"
+#endif /* _ABIO32 */
+ "lui $t9, %hi(_start_c)\n" /* ABI requires current function address in $t9 */
+ "ori $t9, %lo(_start_c)\n"
+#if defined(_ABI64)
+ "lui $t0, %highest(_start_c)\n"
+ "ori $t0, %higher(_start_c)\n"
+ "dsll $t0, 0x20\n"
+ "or $t9, $t0\n"
+#endif /* _ABI64 */
+ "jalr $t9\n" /* transfer to c runtime */
);
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
#endif /* _NOLIBC_ARCH_MIPS_H */
diff --git a/tools/include/nolibc/arch-powerpc.h b/tools/include/nolibc/arch-powerpc.h
index ac212e6185b2..204564bbcd32 100644
--- a/tools/include/nolibc/arch-powerpc.h
+++ b/tools/include/nolibc/arch-powerpc.h
@@ -172,7 +172,7 @@
_ret; \
})
-#ifndef __powerpc64__
+#if !defined(__powerpc64__) && !defined(__clang__)
/* FIXME: For 32-bit PowerPC, with newer gcc compilers (e.g. gcc 13.1.0),
* "omit-frame-pointer" fails with __attribute__((no_stack_protector)) but
* works with __attribute__((__optimize__("-fno-stack-protector")))
@@ -184,7 +184,7 @@
#endif /* !__powerpc64__ */
/* startup code */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
{
#ifdef __powerpc64__
#if _CALL_ELF == 2
@@ -201,7 +201,6 @@ void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_
__asm__ volatile (
"mr 3, 1\n" /* save stack pointer to r3, as arg1 of _start_c */
- "clrrdi 1, 1, 4\n" /* align the stack to 16 bytes */
"li 0, 0\n" /* zero the frame pointer */
"stdu 1, -32(1)\n" /* the initial stack frame */
"bl _start_c\n" /* transfer to c runtime */
@@ -209,13 +208,12 @@ void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_
#else
__asm__ volatile (
"mr 3, 1\n" /* save stack pointer to r3, as arg1 of _start_c */
- "clrrwi 1, 1, 4\n" /* align the stack to 16 bytes */
"li 0, 0\n" /* zero the frame pointer */
"stwu 1, -16(1)\n" /* the initial stack frame */
"bl _start_c\n" /* transfer to c runtime */
);
#endif
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
#endif /* _NOLIBC_ARCH_POWERPC_H */
diff --git a/tools/include/nolibc/arch-riscv.h b/tools/include/nolibc/arch-riscv.h
index 1927c643c739..885383a86c38 100644
--- a/tools/include/nolibc/arch-riscv.h
+++ b/tools/include/nolibc/arch-riscv.h
@@ -140,7 +140,7 @@
})
/* startup code */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
{
__asm__ volatile (
".option push\n"
@@ -148,10 +148,9 @@ void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_
"lla gp, __global_pointer$\n"
".option pop\n"
"mv a0, sp\n" /* save stack pointer to a0, as arg1 of _start_c */
- "andi sp, a0, -16\n" /* sp must be 16-byte aligned */
"call _start_c\n" /* transfer to c runtime */
);
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
#endif /* _NOLIBC_ARCH_RISCV_H */
diff --git a/tools/include/nolibc/arch-s390.h b/tools/include/nolibc/arch-s390.h
index 5d60fd43f883..df4c3cc713ac 100644
--- a/tools/include/nolibc/arch-s390.h
+++ b/tools/include/nolibc/arch-s390.h
@@ -5,11 +5,12 @@
#ifndef _NOLIBC_ARCH_S390_H
#define _NOLIBC_ARCH_S390_H
-#include <asm/signal.h>
-#include <asm/unistd.h>
+#include <linux/signal.h>
+#include <linux/unistd.h>
#include "compiler.h"
#include "crt.h"
+#include "std.h"
/* Syscalls for s390:
* - registers are 64-bit
@@ -139,15 +140,20 @@
})
/* startup code */
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
{
__asm__ volatile (
+#ifdef __s390x__
"lgr %r2, %r15\n" /* save stack pointer to %r2, as arg1 of _start_c */
"aghi %r15, -160\n" /* allocate new stackframe */
+#else
+ "lr %r2, %r15\n"
+ "ahi %r15, -96\n"
+#endif
"xc 0(8,%r15), 0(%r15)\n" /* clear backchain */
"brasl %r14, _start_c\n" /* transfer to c runtime */
);
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
struct s390_mmap_arg_struct {
diff --git a/tools/include/nolibc/arch-sh.h b/tools/include/nolibc/arch-sh.h
new file mode 100644
index 000000000000..a96b8914607e
--- /dev/null
+++ b/tools/include/nolibc/arch-sh.h
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * SuperH specific definitions for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <linux@weissschuh.net>
+ */
+
+#ifndef _NOLIBC_ARCH_SH_H
+#define _NOLIBC_ARCH_SH_H
+
+#include "compiler.h"
+#include "crt.h"
+
+/*
+ * Syscalls for SuperH:
+ * - registers are 32bit wide
+ * - syscall number is passed in r3
+ * - arguments are in r4, r5, r6, r7, r0, r1, r2
+ * - the system call is performed by calling trapa #31
+ * - syscall return value is in r0
+ */
+
+#define my_syscall0(num) \
+({ \
+ register long _num __asm__ ("r3") = (num); \
+ register long _ret __asm__ ("r0"); \
+ \
+ __asm__ volatile ( \
+ "trapa #31" \
+ : "=r"(_ret) \
+ : "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall1(num, arg1) \
+({ \
+ register long _num __asm__ ("r3") = (num); \
+ register long _ret __asm__ ("r0"); \
+ register long _arg1 __asm__ ("r4") = (long)(arg1); \
+ \
+ __asm__ volatile ( \
+ "trapa #31" \
+ : "=r"(_ret) \
+ : "r"(_num), "r"(_arg1) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall2(num, arg1, arg2) \
+({ \
+ register long _num __asm__ ("r3") = (num); \
+ register long _ret __asm__ ("r0"); \
+ register long _arg1 __asm__ ("r4") = (long)(arg1); \
+ register long _arg2 __asm__ ("r5") = (long)(arg2); \
+ \
+ __asm__ volatile ( \
+ "trapa #31" \
+ : "=r"(_ret) \
+ : "r"(_num), "r"(_arg1), "r"(_arg2) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall3(num, arg1, arg2, arg3) \
+({ \
+ register long _num __asm__ ("r3") = (num); \
+ register long _ret __asm__ ("r0"); \
+ register long _arg1 __asm__ ("r4") = (long)(arg1); \
+ register long _arg2 __asm__ ("r5") = (long)(arg2); \
+ register long _arg3 __asm__ ("r6") = (long)(arg3); \
+ \
+ __asm__ volatile ( \
+ "trapa #31" \
+ : "=r"(_ret) \
+ : "r"(_num), "r"(_arg1), "r"(_arg2), "r"(_arg3) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall4(num, arg1, arg2, arg3, arg4) \
+({ \
+ register long _num __asm__ ("r3") = (num); \
+ register long _ret __asm__ ("r0"); \
+ register long _arg1 __asm__ ("r4") = (long)(arg1); \
+ register long _arg2 __asm__ ("r5") = (long)(arg2); \
+ register long _arg3 __asm__ ("r6") = (long)(arg3); \
+ register long _arg4 __asm__ ("r7") = (long)(arg4); \
+ \
+ __asm__ volatile ( \
+ "trapa #31" \
+ : "=r"(_ret) \
+ : "r"(_num), "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
+({ \
+ register long _num __asm__ ("r3") = (num); \
+ register long _ret __asm__ ("r0"); \
+ register long _arg1 __asm__ ("r4") = (long)(arg1); \
+ register long _arg2 __asm__ ("r5") = (long)(arg2); \
+ register long _arg3 __asm__ ("r6") = (long)(arg3); \
+ register long _arg4 __asm__ ("r7") = (long)(arg4); \
+ register long _arg5 __asm__ ("r0") = (long)(arg5); \
+ \
+ __asm__ volatile ( \
+ "trapa #31" \
+ : "=r"(_ret) \
+ : "r"(_num), "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), \
+ "r"(_arg5) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall6(num, arg1, arg2, arg3, arg4, arg5, arg6) \
+({ \
+ register long _num __asm__ ("r3") = (num); \
+ register long _ret __asm__ ("r0"); \
+ register long _arg1 __asm__ ("r4") = (long)(arg1); \
+ register long _arg2 __asm__ ("r5") = (long)(arg2); \
+ register long _arg3 __asm__ ("r6") = (long)(arg3); \
+ register long _arg4 __asm__ ("r7") = (long)(arg4); \
+ register long _arg5 __asm__ ("r0") = (long)(arg5); \
+ register long _arg6 __asm__ ("r1") = (long)(arg6); \
+ \
+ __asm__ volatile ( \
+ "trapa #31" \
+ : "=r"(_ret) \
+ : "r"(_num), "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), \
+ "r"(_arg5), "r"(_arg6) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+/* startup code */
+void _start_wrapper(void);
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start_wrapper(void)
+{
+ __asm__ volatile (
+ ".global _start\n" /* The C function will have a prologue, */
+ ".type _start, @function\n" /* corrupting "sp" */
+ ".weak _start\n"
+ "_start:\n"
+
+ "mov sp, r4\n" /* save argc pointer to r4, as arg1 of _start_c */
+ "bsr _start_c\n" /* transfer to c runtime */
+ "nop\n" /* delay slot */
+
+ ".size _start, .-_start\n"
+ );
+ __nolibc_entrypoint_epilogue();
+}
+
+#endif /* _NOLIBC_ARCH_SH_H */
diff --git a/tools/include/nolibc/arch-sparc.h b/tools/include/nolibc/arch-sparc.h
new file mode 100644
index 000000000000..ca420d843e25
--- /dev/null
+++ b/tools/include/nolibc/arch-sparc.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * SPARC (32bit and 64bit) specific definitions for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <linux@weissschuh.net>
+ */
+
+#ifndef _NOLIBC_ARCH_SPARC_H
+#define _NOLIBC_ARCH_SPARC_H
+
+#include <linux/unistd.h>
+
+#include "compiler.h"
+#include "crt.h"
+
+/*
+ * Syscalls for SPARC:
+ * - registers are native word size
+ * - syscall number is passed in g1
+ * - arguments are in o0-o5
+ * - the system call is performed by calling a trap instruction
+ * - syscall return value is in o0
+ * - syscall error flag is in the carry bit of the processor status register
+ */
+
+#ifdef __arch64__
+
+#define _NOLIBC_SYSCALL "t 0x6d\n" \
+ "bcs,a %%xcc, 1f\n" \
+ "sub %%g0, %%o0, %%o0\n" \
+ "1:\n"
+
+#else
+
+#define _NOLIBC_SYSCALL "t 0x10\n" \
+ "bcs,a 1f\n" \
+ "sub %%g0, %%o0, %%o0\n" \
+ "1:\n"
+
+#endif /* __arch64__ */
+
+#define my_syscall0(num) \
+({ \
+ register long _num __asm__ ("g1") = (num); \
+ register long _arg1 __asm__ ("o0"); \
+ \
+ __asm__ volatile ( \
+ _NOLIBC_SYSCALL \
+ : "+r"(_arg1) \
+ : "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _arg1; \
+})
+
+#define my_syscall1(num, arg1) \
+({ \
+ register long _num __asm__ ("g1") = (num); \
+ register long _arg1 __asm__ ("o0") = (long)(arg1); \
+ \
+ __asm__ volatile ( \
+ _NOLIBC_SYSCALL \
+ : "+r"(_arg1) \
+ : "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _arg1; \
+})
+
+#define my_syscall2(num, arg1, arg2) \
+({ \
+ register long _num __asm__ ("g1") = (num); \
+ register long _arg1 __asm__ ("o0") = (long)(arg1); \
+ register long _arg2 __asm__ ("o1") = (long)(arg2); \
+ \
+ __asm__ volatile ( \
+ _NOLIBC_SYSCALL \
+ : "+r"(_arg1) \
+ : "r"(_arg2), "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _arg1; \
+})
+
+#define my_syscall3(num, arg1, arg2, arg3) \
+({ \
+ register long _num __asm__ ("g1") = (num); \
+ register long _arg1 __asm__ ("o0") = (long)(arg1); \
+ register long _arg2 __asm__ ("o1") = (long)(arg2); \
+ register long _arg3 __asm__ ("o2") = (long)(arg3); \
+ \
+ __asm__ volatile ( \
+ _NOLIBC_SYSCALL \
+ : "+r"(_arg1) \
+ : "r"(_arg2), "r"(_arg3), "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _arg1; \
+})
+
+#define my_syscall4(num, arg1, arg2, arg3, arg4) \
+({ \
+ register long _num __asm__ ("g1") = (num); \
+ register long _arg1 __asm__ ("o0") = (long)(arg1); \
+ register long _arg2 __asm__ ("o1") = (long)(arg2); \
+ register long _arg3 __asm__ ("o2") = (long)(arg3); \
+ register long _arg4 __asm__ ("o3") = (long)(arg4); \
+ \
+ __asm__ volatile ( \
+ _NOLIBC_SYSCALL \
+ : "+r"(_arg1) \
+ : "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _arg1; \
+})
+
+#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
+({ \
+ register long _num __asm__ ("g1") = (num); \
+ register long _arg1 __asm__ ("o0") = (long)(arg1); \
+ register long _arg2 __asm__ ("o1") = (long)(arg2); \
+ register long _arg3 __asm__ ("o2") = (long)(arg3); \
+ register long _arg4 __asm__ ("o3") = (long)(arg4); \
+ register long _arg5 __asm__ ("o4") = (long)(arg5); \
+ \
+ __asm__ volatile ( \
+ _NOLIBC_SYSCALL \
+ : "+r"(_arg1) \
+ : "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _arg1; \
+})
+
+#define my_syscall6(num, arg1, arg2, arg3, arg4, arg5, arg6) \
+({ \
+ register long _num __asm__ ("g1") = (num); \
+ register long _arg1 __asm__ ("o0") = (long)(arg1); \
+ register long _arg2 __asm__ ("o1") = (long)(arg2); \
+ register long _arg3 __asm__ ("o2") = (long)(arg3); \
+ register long _arg4 __asm__ ("o3") = (long)(arg4); \
+ register long _arg5 __asm__ ("o4") = (long)(arg5); \
+ register long _arg6 __asm__ ("o5") = (long)(arg6); \
+ \
+ __asm__ volatile ( \
+ _NOLIBC_SYSCALL \
+ : "+r"(_arg1) \
+ : "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), "r"(_arg6), \
+ "r"(_num) \
+ : "memory", "cc" \
+ ); \
+ _arg1; \
+})
+
+/* startup code */
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
+{
+ __asm__ volatile (
+ /*
+ * Save argc pointer to o0, as arg1 of _start_c.
+ * Account for the window save area, which is 16 registers wide.
+ */
+#ifdef __arch64__
+ "add %sp, 128 + 2047, %o0\n" /* on sparc64 / v9 the stack is offset by 2047 */
+#else
+ "add %sp, 64, %o0\n"
+#endif
+ "b,a _start_c\n" /* transfer to c runtime */
+ );
+ __nolibc_entrypoint_epilogue();
+}
+
+static pid_t getpid(void);
+
+static __attribute__((unused))
+pid_t sys_fork(void)
+{
+ pid_t parent, ret;
+
+ parent = getpid();
+ ret = my_syscall0(__NR_fork);
+
+ /* The syscall returns the parent pid in the child instead of 0 */
+ if (ret == parent)
+ return 0;
+ else
+ return ret;
+}
+#define sys_fork sys_fork
+
+static __attribute__((unused))
+pid_t sys_vfork(void)
+{
+ pid_t parent, ret;
+
+ parent = getpid();
+ ret = my_syscall0(__NR_vfork);
+
+ /* The syscall returns the parent pid in the child instead of 0 */
+ if (ret == parent)
+ return 0;
+ else
+ return ret;
+}
+#define sys_vfork sys_vfork
+
+#endif /* _NOLIBC_ARCH_SPARC_H */
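Because the raw SPARC fork/vfork syscalls return the parent's pid in the child, the wrappers above compare against getpid() to synthesize the usual 0-in-child convention, so callers keep the standard idiom; a sketch, assuming the generic nolibc fork() wraps sys_fork():

int main(void)
{
	pid_t pid = fork();	/* assumed to resolve to __sysret(sys_fork()) */

	if (pid < 0)
		return 1;	/* fork failed, errno is set */
	if (pid == 0)
		return 0;	/* child: sees 0 thanks to the getpid() compare */
	return 0;		/* parent: sees the child's pid */
}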
diff --git a/tools/include/nolibc/arch-x86_64.h b/tools/include/nolibc/arch-x86.h
index 68609f421934..d3efc0c3b8ad 100644
--- a/tools/include/nolibc/arch-x86_64.h
+++ b/tools/include/nolibc/arch-x86.h
@@ -1,15 +1,184 @@
/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
/*
- * x86_64 specific definitions for NOLIBC
- * Copyright (C) 2017-2022 Willy Tarreau <w@1wt.eu>
+ * x86 specific definitions for NOLIBC (both 32- and 64-bit)
+ * Copyright (C) 2017-2025 Willy Tarreau <w@1wt.eu>
*/
-#ifndef _NOLIBC_ARCH_X86_64_H
-#define _NOLIBC_ARCH_X86_64_H
+#ifndef _NOLIBC_ARCH_X86_H
+#define _NOLIBC_ARCH_X86_H
#include "compiler.h"
#include "crt.h"
+#if !defined(__x86_64__)
+
+/* Syscalls for i386 :
+ * - mostly similar to x86_64
+ * - registers are 32-bit
+ * - syscall number is passed in eax
+ * - arguments are in ebx, ecx, edx, esi, edi, ebp respectively
+ * - all registers are preserved (except eax of course)
+ * - the system call is performed by calling int $0x80
+ * - syscall return comes in eax
+ * - the arguments are cast to long and assigned into the target registers
+ * which are then simply passed as registers to the asm code, so that we
+ * don't have to experience issues with register constraints.
+ * - the syscall number is always specified last in order to allow to force
+ * some registers before (gcc refuses a %-register at the last position).
+ *
+ * Also, i386 supports the old_select syscall if newselect is not available
+ */
+#define __ARCH_WANT_SYS_OLD_SELECT
+
+#define my_syscall0(num) \
+({ \
+ long _ret; \
+ register long _num __asm__ ("eax") = (num); \
+ \
+ __asm__ volatile ( \
+ "int $0x80\n" \
+ : "=a" (_ret) \
+ : "0"(_num) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall1(num, arg1) \
+({ \
+ long _ret; \
+ register long _num __asm__ ("eax") = (num); \
+ register long _arg1 __asm__ ("ebx") = (long)(arg1); \
+ \
+ __asm__ volatile ( \
+ "int $0x80\n" \
+ : "=a" (_ret) \
+ : "r"(_arg1), \
+ "0"(_num) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall2(num, arg1, arg2) \
+({ \
+ long _ret; \
+ register long _num __asm__ ("eax") = (num); \
+ register long _arg1 __asm__ ("ebx") = (long)(arg1); \
+ register long _arg2 __asm__ ("ecx") = (long)(arg2); \
+ \
+ __asm__ volatile ( \
+ "int $0x80\n" \
+ : "=a" (_ret) \
+ : "r"(_arg1), "r"(_arg2), \
+ "0"(_num) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall3(num, arg1, arg2, arg3) \
+({ \
+ long _ret; \
+ register long _num __asm__ ("eax") = (num); \
+ register long _arg1 __asm__ ("ebx") = (long)(arg1); \
+ register long _arg2 __asm__ ("ecx") = (long)(arg2); \
+ register long _arg3 __asm__ ("edx") = (long)(arg3); \
+ \
+ __asm__ volatile ( \
+ "int $0x80\n" \
+ : "=a" (_ret) \
+ : "r"(_arg1), "r"(_arg2), "r"(_arg3), \
+ "0"(_num) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall4(num, arg1, arg2, arg3, arg4) \
+({ \
+ long _ret; \
+ register long _num __asm__ ("eax") = (num); \
+ register long _arg1 __asm__ ("ebx") = (long)(arg1); \
+ register long _arg2 __asm__ ("ecx") = (long)(arg2); \
+ register long _arg3 __asm__ ("edx") = (long)(arg3); \
+ register long _arg4 __asm__ ("esi") = (long)(arg4); \
+ \
+ __asm__ volatile ( \
+ "int $0x80\n" \
+ : "=a" (_ret) \
+ : "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), \
+ "0"(_num) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
+({ \
+ long _ret; \
+ register long _num __asm__ ("eax") = (num); \
+ register long _arg1 __asm__ ("ebx") = (long)(arg1); \
+ register long _arg2 __asm__ ("ecx") = (long)(arg2); \
+ register long _arg3 __asm__ ("edx") = (long)(arg3); \
+ register long _arg4 __asm__ ("esi") = (long)(arg4); \
+ register long _arg5 __asm__ ("edi") = (long)(arg5); \
+ \
+ __asm__ volatile ( \
+ "int $0x80\n" \
+ : "=a" (_ret) \
+ : "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), \
+ "0"(_num) \
+ : "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define my_syscall6(num, arg1, arg2, arg3, arg4, arg5, arg6) \
+({ \
+ long _eax = (long)(num); \
+ long _arg6 = (long)(arg6); /* Always in memory */ \
+ __asm__ volatile ( \
+ "pushl %[_arg6]\n\t" \
+ "pushl %%ebp\n\t" \
+ "movl 4(%%esp),%%ebp\n\t" \
+ "int $0x80\n\t" \
+ "popl %%ebp\n\t" \
+ "addl $4,%%esp\n\t" \
+ : "+a"(_eax) /* %eax */ \
+ : "b"(arg1), /* %ebx */ \
+ "c"(arg2), /* %ecx */ \
+ "d"(arg3), /* %edx */ \
+ "S"(arg4), /* %esi */ \
+ "D"(arg5), /* %edi */ \
+ [_arg6]"m"(_arg6) /* memory */ \
+ : "memory", "cc" \
+ ); \
+ _eax; \
+})
+
+/* startup code */
+/*
+ * i386 System V ABI mandates:
+ * 1) last pushed argument must be 16-byte aligned.
+ * 2) The deepest stack frame should be set to zero
+ *
+ */
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
+{
+ __asm__ volatile (
+ "xor %ebp, %ebp\n" /* zero the stack frame */
+ "mov %esp, %eax\n" /* save stack pointer to %eax, as arg1 of _start_c */
+ "sub $12, %esp\n" /* sub 12 to keep it aligned after the push %eax */
+ "push %eax\n" /* push arg1 on stack to support plain stack modes too */
+ "call _start_c\n" /* transfer to c runtime */
+ "hlt\n" /* ensure it does not return */
+ );
+ __nolibc_entrypoint_epilogue();
+}
+
+#else /* !defined(__x86_64__) */
+
/* Syscalls for x86_64 :
* - registers are 64-bit
* - syscall number is passed in rax
@@ -161,16 +330,15 @@
* 2) The deepest stack frame should be zero (the %rbp).
*
*/
-void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector _start(void)
{
__asm__ volatile (
"xor %ebp, %ebp\n" /* zero the stack frame */
"mov %rsp, %rdi\n" /* save stack pointer to %rdi, as arg1 of _start_c */
- "and $-16, %rsp\n" /* %rsp must be 16-byte aligned before call */
"call _start_c\n" /* transfer to c runtime */
"hlt\n" /* ensure it does not return */
);
- __builtin_unreachable();
+ __nolibc_entrypoint_epilogue();
}
#define NOLIBC_ARCH_HAS_MEMMOVE
@@ -193,10 +361,10 @@ __asm__ (
"movq %rdi, %rdx\n\t"
"subq %rsi, %rdx\n\t"
"cmpq %rcx, %rdx\n\t"
- "jb .Lbackward_copy\n\t"
+ "jb 1f\n\t"
"rep movsb\n\t"
"retq\n"
-".Lbackward_copy:"
+"1:" /* backward copy */
"leaq -1(%rdi, %rcx, 1), %rdi\n\t"
"leaq -1(%rsi, %rcx, 1), %rsi\n\t"
"std\n\t"
@@ -215,4 +383,5 @@ __asm__ (
"retq\n"
);
-#endif /* _NOLIBC_ARCH_X86_64_H */
+#endif /* !defined(__x86_64__) */
+#endif /* _NOLIBC_ARCH_X86_H */
diff --git a/tools/include/nolibc/arch.h b/tools/include/nolibc/arch.h
index c8f4e5d3add9..426c89198135 100644
--- a/tools/include/nolibc/arch.h
+++ b/tools/include/nolibc/arch.h
@@ -15,24 +15,28 @@
#ifndef _NOLIBC_ARCH_H
#define _NOLIBC_ARCH_H
-#if defined(__x86_64__)
-#include "arch-x86_64.h"
-#elif defined(__i386__) || defined(__i486__) || defined(__i586__) || defined(__i686__)
-#include "arch-i386.h"
+#if defined(__x86_64__) || defined(__i386__) || defined(__i486__) || defined(__i586__) || defined(__i686__)
+#include "arch-x86.h"
#elif defined(__ARM_EABI__)
#include "arch-arm.h"
#elif defined(__aarch64__)
-#include "arch-aarch64.h"
+#include "arch-arm64.h"
#elif defined(__mips__)
#include "arch-mips.h"
#elif defined(__powerpc__)
#include "arch-powerpc.h"
#elif defined(__riscv)
#include "arch-riscv.h"
-#elif defined(__s390x__)
+#elif defined(__s390x__) || defined(__s390__)
#include "arch-s390.h"
#elif defined(__loongarch__)
#include "arch-loongarch.h"
+#elif defined(__sparc__)
+#include "arch-sparc.h"
+#elif defined(__m68k__)
+#include "arch-m68k.h"
+#elif defined(__sh__)
+#include "arch-sh.h"
#else
#error Unsupported Architecture
#endif
diff --git a/tools/include/nolibc/compiler.h b/tools/include/nolibc/compiler.h
index beddc3665d69..369cfb5a0e78 100644
--- a/tools/include/nolibc/compiler.h
+++ b/tools/include/nolibc/compiler.h
@@ -6,20 +6,45 @@
#ifndef _NOLIBC_COMPILER_H
#define _NOLIBC_COMPILER_H
+#if defined(__has_attribute)
+# define __nolibc_has_attribute(attr) __has_attribute(attr)
+#else
+# define __nolibc_has_attribute(attr) 0
+#endif
+
+#if defined(__has_feature)
+# define __nolibc_has_feature(feature) __has_feature(feature)
+#else
+# define __nolibc_has_feature(feature) 0
+#endif
+
+#define __nolibc_aligned(alignment) __attribute__((aligned(alignment)))
+#define __nolibc_aligned_as(type) __nolibc_aligned(__alignof__(type))
+
+#if __nolibc_has_attribute(naked)
+# define __nolibc_entrypoint __attribute__((naked))
+# define __nolibc_entrypoint_epilogue()
+#else
+# define __nolibc_entrypoint __attribute__((optimize("Os", "omit-frame-pointer")))
+# define __nolibc_entrypoint_epilogue() __builtin_unreachable()
+#endif /* __nolibc_has_attribute(naked) */
+
#if defined(__SSP__) || defined(__SSP_STRONG__) || defined(__SSP_ALL__) || defined(__SSP_EXPLICIT__)
#define _NOLIBC_STACKPROTECTOR
#endif /* defined(__SSP__) ... */
-#if defined(__has_attribute)
-# if __has_attribute(no_stack_protector)
-# define __no_stack_protector __attribute__((no_stack_protector))
-# else
-# define __no_stack_protector __attribute__((__optimize__("-fno-stack-protector")))
-# endif
+#if __nolibc_has_attribute(no_stack_protector)
+# define __no_stack_protector __attribute__((no_stack_protector))
#else
# define __no_stack_protector __attribute__((__optimize__("-fno-stack-protector")))
-#endif /* defined(__has_attribute) */
+#endif /* __nolibc_has_attribute(no_stack_protector) */
+
+#if __nolibc_has_attribute(fallthrough)
+# define __nolibc_fallthrough do { } while (0); __attribute__((fallthrough))
+#else
+# define __nolibc_fallthrough do { } while (0)
+#endif /* __nolibc_has_attribute(fallthrough) */
#endif /* _NOLIBC_COMPILER_H */
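As an illustration of the last helper, __nolibc_fallthrough is used as a statement at the end of a case label that deliberately falls through; the switch below is a made-up example:

static int count_leading_marks(int c)
{
	int n = 0;

	switch (c) {
	case '-':
		n++;			/* count the sign ... */
		__nolibc_fallthrough;	/* ... and fall into the next case */
	case '+':
		n++;
		break;
	default:
		break;
	}
	return n;
}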
diff --git a/tools/include/nolibc/crt.h b/tools/include/nolibc/crt.h
index 43b551468c2a..961cfe777c35 100644
--- a/tools/include/nolibc/crt.h
+++ b/tools/include/nolibc/crt.h
@@ -7,29 +7,37 @@
#ifndef _NOLIBC_CRT_H
#define _NOLIBC_CRT_H
+#include "compiler.h"
+
char **environ __attribute__((weak));
const unsigned long *_auxv __attribute__((weak));
+void _start(void);
static void __stack_chk_init(void);
static void exit(int);
-extern void (*const __preinit_array_start[])(void) __attribute__((weak));
-extern void (*const __preinit_array_end[])(void) __attribute__((weak));
+extern void (*const __preinit_array_start[])(int, char **, char**) __attribute__((weak));
+extern void (*const __preinit_array_end[])(int, char **, char**) __attribute__((weak));
-extern void (*const __init_array_start[])(void) __attribute__((weak));
-extern void (*const __init_array_end[])(void) __attribute__((weak));
+extern void (*const __init_array_start[])(int, char **, char**) __attribute__((weak));
+extern void (*const __init_array_end[])(int, char **, char**) __attribute__((weak));
extern void (*const __fini_array_start[])(void) __attribute__((weak));
extern void (*const __fini_array_end[])(void) __attribute__((weak));
-__attribute__((weak))
+void _start_c(long *sp);
+__attribute__((weak,used))
+#if __nolibc_has_feature(undefined_behavior_sanitizer)
+ __attribute__((no_sanitize("function")))
+#endif
void _start_c(long *sp)
{
long argc;
char **argv;
char **envp;
int exitcode;
- void (* const *func)(void);
+ void (* const *ctor_func)(int, char **, char **);
+ void (* const *dtor_func)(void);
const unsigned long *auxv;
/* silence potential warning: conflicting types for 'main' */
int _nolibc_main(int, char **, char **) __asm__ ("main");
@@ -66,16 +74,16 @@ void _start_c(long *sp)
;
_auxv = auxv;
- for (func = __preinit_array_start; func < __preinit_array_end; func++)
- (*func)();
- for (func = __init_array_start; func < __init_array_end; func++)
- (*func)();
+ for (ctor_func = __preinit_array_start; ctor_func < __preinit_array_end; ctor_func++)
+ (*ctor_func)(argc, argv, envp);
+ for (ctor_func = __init_array_start; ctor_func < __init_array_end; ctor_func++)
+ (*ctor_func)(argc, argv, envp);
/* go to application */
exitcode = _nolibc_main(argc, argv, envp);
- for (func = __fini_array_end; func > __fini_array_start;)
- (*--func)();
+ for (dtor_func = __fini_array_end; dtor_func > __fini_array_start;)
+ (*--dtor_func)();
exit(exitcode);
}
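With this change the .preinit_array/.init_array entries are invoked with (argc, argv, envp), matching the glibc convention. A hedged sketch of a constructor that takes advantage of it; the function and its manual .init_array registration are hypothetical, and printf comes from nolibc's stdio.h:

static void print_argv0(int argc, char **argv, char **envp)
{
	(void)envp;
	if (argc > 0)
		printf("starting %s\n", argv[0]);	/* runs before main() */
}

/* Drop a pointer into .init_array so _start_c() above calls it between
 * __init_array_start and __init_array_end.
 */
__attribute__((section(".init_array"), used))
static void (*print_argv0_ctor)(int, char **, char **) = print_argv0;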
diff --git a/tools/include/nolibc/ctype.h b/tools/include/nolibc/ctype.h
index 6f90706d0644..470fdf34394a 100644
--- a/tools/include/nolibc/ctype.h
+++ b/tools/include/nolibc/ctype.h
@@ -4,6 +4,9 @@
* Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_CTYPE_H
#define _NOLIBC_CTYPE_H
@@ -96,7 +99,4 @@ int ispunct(int c)
return isgraph(c) && !isalnum(c);
}
-/* make sure to include all global symbols */
-#include "nolibc.h"
-
#endif /* _NOLIBC_CTYPE_H */
diff --git a/tools/include/nolibc/dirent.h b/tools/include/nolibc/dirent.h
new file mode 100644
index 000000000000..758b95c48e7a
--- /dev/null
+++ b/tools/include/nolibc/dirent.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Directory access for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <linux@weissschuh.net>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_DIRENT_H
+#define _NOLIBC_DIRENT_H
+
+#include "compiler.h"
+#include "stdint.h"
+#include "types.h"
+#include "fcntl.h"
+
+#include <linux/limits.h>
+
+struct dirent {
+ ino_t d_ino;
+ char d_name[NAME_MAX + 1];
+};
+
+/* See comment of FILE in stdio.h */
+typedef struct {
+ char dummy[1];
+} DIR;
+
+static __attribute__((unused))
+DIR *fdopendir(int fd)
+{
+ if (fd < 0) {
+ SET_ERRNO(EBADF);
+ return NULL;
+ }
+ return (DIR *)(intptr_t)~fd;
+}
+
+static __attribute__((unused))
+DIR *opendir(const char *name)
+{
+ int fd;
+
+ fd = open(name, O_RDONLY);
+ if (fd == -1)
+ return NULL;
+ return fdopendir(fd);
+}
+
+static __attribute__((unused))
+int closedir(DIR *dirp)
+{
+ intptr_t i = (intptr_t)dirp;
+
+ if (i >= 0) {
+ SET_ERRNO(EBADF);
+ return -1;
+ }
+ return close(~i);
+}
+
+static __attribute__((unused))
+int readdir_r(DIR *dirp, struct dirent *entry, struct dirent **result)
+{
+ char buf[sizeof(struct linux_dirent64) + NAME_MAX + 1] __nolibc_aligned_as(struct linux_dirent64);
+ struct linux_dirent64 *ldir = (void *)buf;
+ intptr_t i = (intptr_t)dirp;
+ int fd, ret;
+
+ if (i >= 0)
+ return EBADF;
+
+ fd = ~i;
+
+ ret = sys_getdents64(fd, ldir, sizeof(buf));
+ if (ret < 0)
+ return -ret;
+ if (ret == 0) {
+ *result = NULL;
+ return 0;
+ }
+
+ /*
+ * getdents64() returns as many entries as fit in the buffer.
+ * readdir() can only return one entry at a time.
+ * Make sure the non-returned ones are not skipped.
+ */
+ ret = lseek(fd, ldir->d_off, SEEK_SET);
+ if (ret == -1)
+ return errno;
+
+ entry->d_ino = ldir->d_ino;
+ /* the destination should always be big enough */
+ strlcpy(entry->d_name, ldir->d_name, sizeof(entry->d_name));
+ *result = entry;
+ return 0;
+}
+
+#endif /* _NOLIBC_DIRENT_H */
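
Usage sketch (illustrative only, not part of the patch): listing the current directory with the new opendir()/readdir_r()/closedir() helpers. It assumes a program built against nolibc (e.g. -nostdlib -include nolibc.h -lgcc) and a readable ".".

int main(void)
{
	struct dirent entry, *result;
	DIR *dir;

	dir = opendir(".");
	if (!dir)
		return 1;

	/* readdir_r() returns 0 and sets *result to NULL at end of directory */
	while (!readdir_r(dir, &entry, &result) && result)
		printf("%s\n", entry.d_name);

	return closedir(dir);
}
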
diff --git a/tools/include/nolibc/elf.h b/tools/include/nolibc/elf.h
new file mode 100644
index 000000000000..3e2c5228bf3d
--- /dev/null
+++ b/tools/include/nolibc/elf.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Shim elf.h header for NOLIBC.
+ * Copyright (C) 2025 Thomas Weißschuh <thomas.weissschuh@linutronix.de>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_SYS_ELF_H
+#define _NOLIBC_SYS_ELF_H
+
+#include <linux/elf.h>
+
+#endif /* _NOLIBC_SYS_ELF_H */
diff --git a/tools/include/nolibc/errno.h b/tools/include/nolibc/errno.h
index a44486ff0477..08a33c40ec0c 100644
--- a/tools/include/nolibc/errno.h
+++ b/tools/include/nolibc/errno.h
@@ -4,10 +4,13 @@
* Copyright (C) 2017-2022 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_ERRNO_H
#define _NOLIBC_ERRNO_H
-#include <asm/errno.h>
+#include <linux/errno.h>
#ifndef NOLIBC_IGNORE_ERRNO
#define SET_ERRNO(v) do { errno = (v); } while (0)
@@ -22,7 +25,4 @@ int errno __attribute__((weak));
*/
#define MAX_ERRNO 4095
-/* make sure to include all global symbols */
-#include "nolibc.h"
-
#endif /* _NOLIBC_ERRNO_H */
diff --git a/tools/include/nolibc/fcntl.h b/tools/include/nolibc/fcntl.h
new file mode 100644
index 000000000000..bff2e542f20f
--- /dev/null
+++ b/tools/include/nolibc/fcntl.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * fcntl definition for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_FCNTL_H
+#define _NOLIBC_FCNTL_H
+
+#include "arch.h"
+#include "types.h"
+#include "sys.h"
+
+/*
+ * int openat(int dirfd, const char *path, int flags[, mode_t mode]);
+ */
+
+static __attribute__((unused))
+int sys_openat(int dirfd, const char *path, int flags, mode_t mode)
+{
+ return my_syscall4(__NR_openat, dirfd, path, flags, mode);
+}
+
+static __attribute__((unused))
+int openat(int dirfd, const char *path, int flags, ...)
+{
+ mode_t mode = 0;
+
+ if (flags & O_CREAT) {
+ va_list args;
+
+ va_start(args, flags);
+ mode = va_arg(args, mode_t);
+ va_end(args);
+ }
+
+ return __sysret(sys_openat(dirfd, path, flags, mode));
+}
+
+/*
+ * int open(const char *path, int flags[, mode_t mode]);
+ */
+
+static __attribute__((unused))
+int sys_open(const char *path, int flags, mode_t mode)
+{
+ return my_syscall4(__NR_openat, AT_FDCWD, path, flags, mode);
+}
+
+static __attribute__((unused))
+int open(const char *path, int flags, ...)
+{
+ mode_t mode = 0;
+
+ if (flags & O_CREAT) {
+ va_list args;
+
+ va_start(args, flags);
+ mode = va_arg(args, mode_t);
+ va_end(args);
+ }
+
+ return __sysret(sys_open(path, flags, mode));
+}
+
+#endif /* _NOLIBC_FCNTL_H */
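
Usage sketch (illustrative only, not part of the patch): the variadic open()/openat() wrappers above. As with a regular libc, O_CREAT requires the third mode argument; the file name here is arbitrary.

int main(void)
{
	int fd;

	fd = openat(AT_FDCWD, "example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return 1;

	write(fd, "hello\n", 6);
	return close(fd);
}
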
diff --git a/tools/include/nolibc/getopt.h b/tools/include/nolibc/getopt.h
new file mode 100644
index 000000000000..217abb95264b
--- /dev/null
+++ b/tools/include/nolibc/getopt.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * getopt function definitions for NOLIBC, adapted from musl libc
+ * Copyright (C) 2005-2020 Rich Felker, et al.
+ * Copyright (C) 2025 Thomas Weißschuh <linux@weissschuh.net>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_GETOPT_H
+#define _NOLIBC_GETOPT_H
+
+struct FILE;
+static struct FILE *const stderr;
+static int fprintf(struct FILE *stream, const char *fmt, ...);
+
+__attribute__((weak,unused,section(".data.nolibc_getopt")))
+char *optarg;
+
+__attribute__((weak,unused,section(".data.nolibc_getopt")))
+int optind = 1, opterr = 1, optopt;
+
+static __attribute__((unused))
+int getopt(int argc, char * const argv[], const char *optstring)
+{
+ static int __optpos;
+ int i;
+ char c, d;
+ char *optchar;
+
+ if (!optind) {
+ __optpos = 0;
+ optind = 1;
+ }
+
+ if (optind >= argc || !argv[optind])
+ return -1;
+
+ if (argv[optind][0] != '-') {
+ if (optstring[0] == '-') {
+ optarg = argv[optind++];
+ return 1;
+ }
+ return -1;
+ }
+
+ if (!argv[optind][1])
+ return -1;
+
+ if (argv[optind][1] == '-' && !argv[optind][2])
+ return optind++, -1;
+
+ if (!__optpos)
+ __optpos++;
+ c = argv[optind][__optpos];
+ optchar = argv[optind] + __optpos;
+ __optpos++;
+
+ if (!argv[optind][__optpos]) {
+ optind++;
+ __optpos = 0;
+ }
+
+ if (optstring[0] == '-' || optstring[0] == '+')
+ optstring++;
+
+ i = 0;
+ d = 0;
+ do {
+ d = optstring[i++];
+ } while (d && d != c);
+
+ if (d != c || c == ':') {
+ optopt = c;
+ if (optstring[0] != ':' && opterr)
+ fprintf(stderr, "%s: unrecognized option: %c\n", argv[0], *optchar);
+ return '?';
+ }
+ if (optstring[i] == ':') {
+ optarg = 0;
+ if (optstring[i + 1] != ':' || __optpos) {
+ optarg = argv[optind++];
+ if (__optpos)
+ optarg += __optpos;
+ __optpos = 0;
+ }
+ if (optind > argc) {
+ optopt = c;
+ if (optstring[0] == ':')
+ return ':';
+ if (opterr)
+ fprintf(stderr, "%s: option requires argument: %c\n",
+ argv[0], *optchar);
+ return '?';
+ }
+ }
+ return c;
+}
+
+#endif /* _NOLIBC_GETOPT_H */
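
Usage sketch (illustrative only, not part of the patch): a conventional option loop over the getopt() above. The option string "vo:" accepts -v and -o <arg>; anything else makes getopt() return '?' after printing a diagnostic on stderr (unless opterr is cleared).

int main(int argc, char **argv)
{
	const char *out = NULL;
	int opt, verbose = 0;

	while ((opt = getopt(argc, argv, "vo:")) != -1) {
		switch (opt) {
		case 'v':
			verbose = 1;
			break;
		case 'o':
			out = optarg;	/* points into argv[] */
			break;
		default:
			return 1;
		}
	}
	printf("verbose=%d out=%s\n", verbose, out ? out : "(none)");
	return 0;
}
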
diff --git a/tools/include/nolibc/limits.h b/tools/include/nolibc/limits.h
new file mode 100644
index 000000000000..306d4141f4d2
--- /dev/null
+++ b/tools/include/nolibc/limits.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Shim limits.h header for NOLIBC.
+ * Copyright (C) 2025 Thomas Weißschuh <thomas.weissschuh@linutronix.de>
+ */
+
+#include "nolibc.h"
diff --git a/tools/include/nolibc/math.h b/tools/include/nolibc/math.h
new file mode 100644
index 000000000000..9df823ddd412
--- /dev/null
+++ b/tools/include/nolibc/math.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * math definitions for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <thomas.weissschuh@linutronix.de>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_SYS_MATH_H
+#define _NOLIBC_SYS_MATH_H
+
+static __inline__
+double fabs(double x)
+{
+ return x >= 0 ? x : -x;
+}
+
+static __inline__
+float fabsf(float x)
+{
+ return x >= 0 ? x : -x;
+}
+
+static __inline__
+long double fabsl(long double x)
+{
+ return x >= 0 ? x : -x;
+}
+
+#endif /* _NOLIBC_SYS_MATH_H */
diff --git a/tools/include/nolibc/nolibc.h b/tools/include/nolibc/nolibc.h
index 989e707263a4..d2f5aa085f8e 100644
--- a/tools/include/nolibc/nolibc.h
+++ b/tools/include/nolibc/nolibc.h
@@ -31,8 +31,7 @@
* - The third level is the libc call definition. It exposes the lower raw
* sys_<name>() calls in a way that looks like what a libc usually does,
* takes care of specific input values, and of setting errno upon error.
- * There can be minor variations compared to standard libc calls. For
- * example the open() call always takes 3 args here.
+ * There can be minor variations compared to standard libc calls.
*
* The errno variable is declared static and unused. This way it can be
* optimized away if not used. However this means that a program made of
@@ -74,7 +73,8 @@
* -I../nolibc -o hello hello.c -lgcc
*
* The available standard (but limited) include files are:
- * ctype.h, errno.h, signal.h, stdarg.h, stdio.h, stdlib.h, string.h, time.h
+ * ctype.h, errno.h, signal.h, stdarg.h, stdbool.h, stdio.h, stdlib.h,
+ * string.h, time.h
*
* In addition, the following ones are expected to be provided by the compiler:
* float.h, stddef.h
@@ -96,14 +96,37 @@
#include "arch.h"
#include "types.h"
#include "sys.h"
+#include "sys/auxv.h"
+#include "sys/ioctl.h"
+#include "sys/mman.h"
+#include "sys/mount.h"
+#include "sys/prctl.h"
+#include "sys/random.h"
+#include "sys/reboot.h"
+#include "sys/resource.h"
+#include "sys/stat.h"
+#include "sys/syscall.h"
+#include "sys/sysmacros.h"
+#include "sys/time.h"
+#include "sys/timerfd.h"
+#include "sys/utsname.h"
+#include "sys/wait.h"
#include "ctype.h"
+#include "elf.h"
+#include "sched.h"
#include "signal.h"
#include "unistd.h"
+#include "stdbool.h"
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include "time.h"
#include "stackprotector.h"
+#include "dirent.h"
+#include "fcntl.h"
+#include "getopt.h"
+#include "poll.h"
+#include "math.h"
/* Used by programs to avoid std includes */
#define NOLIBC
diff --git a/tools/include/nolibc/poll.h b/tools/include/nolibc/poll.h
new file mode 100644
index 000000000000..0d053f93ea99
--- /dev/null
+++ b/tools/include/nolibc/poll.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * poll definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_POLL_H
+#define _NOLIBC_POLL_H
+
+#include "arch.h"
+#include "sys.h"
+
+#include <linux/poll.h>
+#include <linux/time.h>
+
+/*
+ * int poll(struct pollfd *fds, int nfds, int timeout);
+ */
+
+static __attribute__((unused))
+int sys_poll(struct pollfd *fds, int nfds, int timeout)
+{
+#if defined(__NR_ppoll)
+ struct timespec t;
+
+ if (timeout >= 0) {
+ t.tv_sec = timeout / 1000;
+ t.tv_nsec = (timeout % 1000) * 1000000;
+ }
+ return my_syscall5(__NR_ppoll, fds, nfds, (timeout >= 0) ? &t : NULL, NULL, 0);
+#elif defined(__NR_ppoll_time64)
+ struct __kernel_timespec t;
+
+ if (timeout >= 0) {
+ t.tv_sec = timeout / 1000;
+ t.tv_nsec = (timeout % 1000) * 1000000;
+ }
+ return my_syscall5(__NR_ppoll_time64, fds, nfds, (timeout >= 0) ? &t : NULL, NULL, 0);
+#else
+ return my_syscall3(__NR_poll, fds, nfds, timeout);
+#endif
+}
+
+static __attribute__((unused))
+int poll(struct pollfd *fds, int nfds, int timeout)
+{
+ return __sysret(sys_poll(fds, nfds, timeout));
+}
+
+#endif /* _NOLIBC_POLL_H */
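
Usage sketch (illustrative only, not part of the patch): waiting up to one second for data on stdin through the poll() wrapper above, which maps to ppoll(), ppoll_time64() or poll() depending on what the architecture provides.

int main(void)
{
	struct pollfd pfd = { .fd = 0, .events = POLLIN };
	int ret;

	ret = poll(&pfd, 1, 1000);	/* timeout in milliseconds */
	if (ret < 0)
		return 1;
	if (!ret)
		printf("timed out\n");
	else if (pfd.revents & POLLIN)
		printf("stdin is readable\n");
	return 0;
}
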
diff --git a/tools/include/nolibc/sched.h b/tools/include/nolibc/sched.h
new file mode 100644
index 000000000000..32221562c166
--- /dev/null
+++ b/tools/include/nolibc/sched.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * sched function definitions for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <linux@weissschuh.net>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_SCHED_H
+#define _NOLIBC_SCHED_H
+
+#include "sys.h"
+
+#include <linux/sched.h>
+
+/*
+ * int setns(int fd, int nstype);
+ */
+
+static __attribute__((unused))
+int sys_setns(int fd, int nstype)
+{
+ return my_syscall2(__NR_setns, fd, nstype);
+}
+
+static __attribute__((unused))
+int setns(int fd, int nstype)
+{
+ return __sysret(sys_setns(fd, nstype));
+}
+
+
+/*
+ * int unshare(int flags);
+ */
+
+static __attribute__((unused))
+int sys_unshare(int flags)
+{
+ return my_syscall1(__NR_unshare, flags);
+}
+
+static __attribute__((unused))
+int unshare(int flags)
+{
+ return __sysret(sys_unshare(flags));
+}
+
+#endif /* _NOLIBC_SCHED_H */
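
Usage sketch (illustrative only, not part of the patch): detaching into a private UTS namespace with the new unshare() wrapper. This needs CAP_SYS_ADMIN, so it is only expected to succeed as root or inside a user namespace.

int main(void)
{
	if (unshare(CLONE_NEWUTS) < 0) {
		perror("unshare");
		return 1;
	}
	printf("now in a private UTS namespace\n");
	return 0;
}
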
diff --git a/tools/include/nolibc/signal.h b/tools/include/nolibc/signal.h
index 137552216e46..ac13e53ac31d 100644
--- a/tools/include/nolibc/signal.h
+++ b/tools/include/nolibc/signal.h
@@ -4,6 +4,9 @@
* Copyright (C) 2017-2022 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_SIGNAL_H
#define _NOLIBC_SIGNAL_H
@@ -13,13 +16,11 @@
#include "sys.h"
/* This one is not marked static as it's needed by libgcc for divide by zero */
+int raise(int signal);
__attribute__((weak,unused,section(".text.nolibc_raise")))
int raise(int signal)
{
return sys_kill(sys_getpid(), signal);
}
-/* make sure to include all global symbols */
-#include "nolibc.h"
-
#endif /* _NOLIBC_SIGNAL_H */
diff --git a/tools/include/nolibc/stackprotector.h b/tools/include/nolibc/stackprotector.h
index 13f1d0e60387..c71a2c257177 100644
--- a/tools/include/nolibc/stackprotector.h
+++ b/tools/include/nolibc/stackprotector.h
@@ -18,7 +18,8 @@
* triggering stack protector errors themselves
*/
-__attribute__((weak,noreturn,section(".text.nolibc_stack_chk")))
+void __stack_chk_fail(void);
+__attribute__((weak,used,noreturn,section(".text.nolibc_stack_chk")))
void __stack_chk_fail(void)
{
pid_t pid;
@@ -28,13 +29,14 @@ void __stack_chk_fail(void)
for (;;);
}
+void __stack_chk_fail_local(void);
__attribute__((weak,noreturn,section(".text.nolibc_stack_chk")))
void __stack_chk_fail_local(void)
{
__stack_chk_fail();
}
-__attribute__((weak,section(".data.nolibc_stack_chk")))
+__attribute__((weak,used,section(".data.nolibc_stack_chk")))
uintptr_t __stack_chk_guard;
static __no_stack_protector void __stack_chk_init(void)
diff --git a/tools/include/nolibc/std.h b/tools/include/nolibc/std.h
index 933bc0be7e1c..2c1ad23b9b5c 100644
--- a/tools/include/nolibc/std.h
+++ b/tools/include/nolibc/std.h
@@ -13,12 +13,10 @@
* syscall-specific stuff, as this file is expected to be included very early.
*/
-/* note: may already be defined */
-#ifndef NULL
-#define NULL ((void *)0)
-#endif
-
#include "stdint.h"
+#include "stddef.h"
+
+#include <linux/types.h>
/* those are commonly provided by sys/types.h */
typedef unsigned int dev_t;
@@ -31,6 +29,6 @@ typedef unsigned long nlink_t;
typedef signed long off_t;
typedef signed long blksize_t;
typedef signed long blkcnt_t;
-typedef signed long time_t;
+typedef __kernel_time_t time_t;
#endif /* _NOLIBC_STD_H */
diff --git a/tools/include/nolibc/stdbool.h b/tools/include/nolibc/stdbool.h
new file mode 100644
index 000000000000..60feece22f17
--- /dev/null
+++ b/tools/include/nolibc/stdbool.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Boolean types support for NOLIBC
+ * Copyright (C) 2024 Thomas Weißschuh <linux@weissschuh.net>
+ */
+
+#ifndef _NOLIBC_STDBOOL_H
+#define _NOLIBC_STDBOOL_H
+
+#define bool _Bool
+#define true 1
+#define false 0
+
+#define __bool_true_false_are_defined 1
+
+#endif /* _NOLIBC_STDBOOL_H */
diff --git a/tools/include/nolibc/stddef.h b/tools/include/nolibc/stddef.h
new file mode 100644
index 000000000000..ecbd13eab1f5
--- /dev/null
+++ b/tools/include/nolibc/stddef.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Stddef definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
+#ifndef _NOLIBC_STDDEF_H
+#define _NOLIBC_STDDEF_H
+
+#include "stdint.h"
+
+/* note: may already be defined */
+#ifndef NULL
+#define NULL ((void *)0)
+#endif
+
+#ifndef offsetof
+#define offsetof(TYPE, FIELD) ((size_t) &((TYPE *)0)->FIELD)
+#endif
+
+#endif /* _NOLIBC_STDDEF_H */
diff --git a/tools/include/nolibc/stdint.h b/tools/include/nolibc/stdint.h
index 6665e272e213..b052ad6303c3 100644
--- a/tools/include/nolibc/stdint.h
+++ b/tools/include/nolibc/stdint.h
@@ -39,8 +39,8 @@ typedef size_t uint_fast32_t;
typedef int64_t int_fast64_t;
typedef uint64_t uint_fast64_t;
-typedef int64_t intmax_t;
-typedef uint64_t uintmax_t;
+typedef __INTMAX_TYPE__ intmax_t;
+typedef __UINTMAX_TYPE__ uintmax_t;
/* limits of integral types */
@@ -96,6 +96,10 @@ typedef uint64_t uintmax_t;
#define UINT_FAST32_MAX SIZE_MAX
#define UINT_FAST64_MAX UINT64_MAX
+#define INTMAX_MIN INT64_MIN
+#define INTMAX_MAX INT64_MAX
+#define UINTMAX_MAX UINT64_MAX
+
#ifndef INT_MIN
#define INT_MIN (-__INT_MAX__ - 1)
#endif
@@ -110,4 +114,19 @@ typedef uint64_t uintmax_t;
#define LONG_MAX __LONG_MAX__
#endif
+#ifndef ULONG_MAX
+#define ULONG_MAX ((unsigned long)(__LONG_MAX__) * 2 + 1)
+#endif
+
+#ifndef LLONG_MIN
+#define LLONG_MIN (-__LONG_LONG_MAX__ - 1)
+#endif
+#ifndef LLONG_MAX
+#define LLONG_MAX __LONG_LONG_MAX__
+#endif
+
+#ifndef ULLONG_MAX
+#define ULLONG_MAX ((unsigned long long)(__LONG_LONG_MAX__) * 2 + 1)
+#endif
+
#endif /* _NOLIBC_STDINT_H */
diff --git a/tools/include/nolibc/stdio.h b/tools/include/nolibc/stdio.h
index 16cd4d807251..7630234408c5 100644
--- a/tools/include/nolibc/stdio.h
+++ b/tools/include/nolibc/stdio.h
@@ -4,17 +4,24 @@
* Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_STDIO_H
#define _NOLIBC_STDIO_H
#include "std.h"
#include "arch.h"
#include "errno.h"
+#include "fcntl.h"
#include "types.h"
#include "sys.h"
#include "stdarg.h"
#include "stdlib.h"
#include "string.h"
+#include "compiler.h"
+
+static const char *strerror(int errnum);
#ifndef EOF
#define EOF (-1)
@@ -49,6 +56,32 @@ FILE *fdopen(int fd, const char *mode __attribute__((unused)))
return (FILE*)(intptr_t)~fd;
}
+static __attribute__((unused))
+FILE *fopen(const char *pathname, const char *mode)
+{
+ int flags, fd;
+
+ switch (*mode) {
+ case 'r':
+ flags = O_RDONLY;
+ break;
+ case 'w':
+ flags = O_WRONLY | O_CREAT | O_TRUNC;
+ break;
+ case 'a':
+ flags = O_WRONLY | O_CREAT | O_APPEND;
+ break;
+ default:
+ SET_ERRNO(EINVAL); return NULL;
+ }
+
+ if (mode[1] == '+')
+ flags = (flags & ~(O_RDONLY | O_WRONLY)) | O_RDWR;
+
+ fd = open(pathname, flags, 0666);
+ return fdopen(fd, mode);
+}
+
/* provides the fd of stream. */
static __attribute__((unused))
int fileno(FILE *stream)
@@ -207,28 +240,40 @@ char *fgets(char *s, int size, FILE *stream)
}
-/* minimal vfprintf(). It supports the following formats:
+/* minimal printf(). It supports the following formats:
* - %[l*]{d,u,c,x,p}
* - %s
* - unknown modifiers are ignored.
*/
-static __attribute__((unused, format(printf, 2, 0)))
-int vfprintf(FILE *stream, const char *fmt, va_list args)
+typedef int (*__nolibc_printf_cb)(intptr_t state, const char *buf, size_t size);
+
+static __attribute__((unused, format(printf, 4, 0)))
+int __nolibc_printf(__nolibc_printf_cb cb, intptr_t state, size_t n, const char *fmt, va_list args)
{
char escape, lpref, c;
unsigned long long v;
- unsigned int written;
- size_t len, ofs;
+ unsigned int written, width;
+ size_t len, ofs, w;
char tmpbuf[21];
const char *outstr;
written = ofs = escape = lpref = 0;
while (1) {
c = fmt[ofs++];
+ width = 0;
if (escape) {
/* we're in an escape sequence, ofs == 1 */
escape = 0;
+
+ /* width */
+ while (c >= '0' && c <= '9') {
+ width *= 10;
+ width += c - '0';
+
+ c = fmt[ofs++];
+ }
+
if (c == 'c' || c == 'd' || c == 'u' || c == 'x' || c == 'p') {
char *out = tmpbuf;
@@ -264,7 +309,7 @@ int vfprintf(FILE *stream, const char *fmt, va_list args)
case 'p':
*(out++) = '0';
*(out++) = 'x';
- /* fall through */
+ __nolibc_fallthrough;
default: /* 'x' and 'p' above */
u64toh_r(v, out);
break;
@@ -276,6 +321,11 @@ int vfprintf(FILE *stream, const char *fmt, va_list args)
if (!outstr)
outstr="(null)";
}
+#ifndef NOLIBC_IGNORE_ERRNO
+ else if (c == 'm') {
+ outstr = strerror(errno);
+ }
+#endif /* NOLIBC_IGNORE_ERRNO */
else if (c == '%') {
/* queue it verbatim */
continue;
@@ -285,6 +335,8 @@ int vfprintf(FILE *stream, const char *fmt, va_list args)
if (c == 'l') {
/* long format prefix, maintain the escape */
lpref++;
+ } else if (c == 'j') {
+ lpref = 2;
}
escape = 1;
goto do_escape;
@@ -301,8 +353,17 @@ int vfprintf(FILE *stream, const char *fmt, va_list args)
outstr = fmt;
len = ofs - 1;
flush_str:
- if (_fwrite(outstr, len, stream) != 0)
- break;
+ if (n) {
+ w = len < n ? len : n;
+ n -= w;
+ while (width-- > w) {
+ if (cb(state, " ", 1) != 0)
+ return -1;
+ written += 1;
+ }
+ if (cb(state, outstr, w) != 0)
+ return -1;
+ }
written += len;
do_escape:
@@ -318,6 +379,17 @@ int vfprintf(FILE *stream, const char *fmt, va_list args)
return written;
}
+static int __nolibc_fprintf_cb(intptr_t state, const char *buf, size_t size)
+{
+ return _fwrite(buf, size, (FILE *)state);
+}
+
+static __attribute__((unused, format(printf, 2, 0)))
+int vfprintf(FILE *stream, const char *fmt, va_list args)
+{
+ return __nolibc_printf(__nolibc_fprintf_cb, (intptr_t)stream, SIZE_MAX, fmt, args);
+}
+
static __attribute__((unused, format(printf, 1, 0)))
int vprintf(const char *fmt, va_list args)
{
@@ -348,6 +420,183 @@ int printf(const char *fmt, ...)
return ret;
}
+static __attribute__((unused, format(printf, 2, 0)))
+int vdprintf(int fd, const char *fmt, va_list args)
+{
+ FILE *stream;
+
+ stream = fdopen(fd, NULL);
+ if (!stream)
+ return -1;
+ /* Technically 'stream' is leaked, but as it's only a wrapper around 'fd' that is fine */
+ return vfprintf(stream, fmt, args);
+}
+
+static __attribute__((unused, format(printf, 2, 3)))
+int dprintf(int fd, const char *fmt, ...)
+{
+ va_list args;
+ int ret;
+
+ va_start(args, fmt);
+ ret = vdprintf(fd, fmt, args);
+ va_end(args);
+
+ return ret;
+}
+
+static int __nolibc_sprintf_cb(intptr_t _state, const char *buf, size_t size)
+{
+ char **state = (char **)_state;
+
+ memcpy(*state, buf, size);
+ *state += size;
+ return 0;
+}
+
+static __attribute__((unused, format(printf, 3, 0)))
+int vsnprintf(char *buf, size_t size, const char *fmt, va_list args)
+{
+ char *state = buf;
+ int ret;
+
+ ret = __nolibc_printf(__nolibc_sprintf_cb, (intptr_t)&state, size, fmt, args);
+ if (ret < 0)
+ return ret;
+ buf[(size_t)ret < size ? (size_t)ret : size - 1] = '\0';
+ return ret;
+}
+
+static __attribute__((unused, format(printf, 3, 4)))
+int snprintf(char *buf, size_t size, const char *fmt, ...)
+{
+ va_list args;
+ int ret;
+
+ va_start(args, fmt);
+ ret = vsnprintf(buf, size, fmt, args);
+ va_end(args);
+
+ return ret;
+}
+
+static __attribute__((unused, format(printf, 2, 0)))
+int vsprintf(char *buf, const char *fmt, va_list args)
+{
+ return vsnprintf(buf, SIZE_MAX, fmt, args);
+}
+
+static __attribute__((unused, format(printf, 2, 3)))
+int sprintf(char *buf, const char *fmt, ...)
+{
+ va_list args;
+ int ret;
+
+ va_start(args, fmt);
+ ret = vsprintf(buf, fmt, args);
+ va_end(args);
+
+ return ret;
+}
+
+static __attribute__((unused))
+int vsscanf(const char *str, const char *format, va_list args)
+{
+ uintmax_t uval;
+ intmax_t ival;
+ int base;
+ char *endptr;
+ int matches;
+ int lpref;
+
+ matches = 0;
+
+ while (1) {
+ if (*format == '%') {
+ /* start of pattern */
+ lpref = 0;
+ format++;
+
+ if (*format == 'l') {
+ /* same as in printf() */
+ lpref = 1;
+ format++;
+ if (*format == 'l') {
+ lpref = 2;
+ format++;
+ }
+ }
+
+ if (*format == '%') {
+ /* literal % */
+ if ('%' != *str)
+ goto done;
+ str++;
+ format++;
+ continue;
+ } else if (*format == 'd') {
+ ival = strtoll(str, &endptr, 10);
+ if (lpref == 0)
+ *va_arg(args, int *) = ival;
+ else if (lpref == 1)
+ *va_arg(args, long *) = ival;
+ else if (lpref == 2)
+ *va_arg(args, long long *) = ival;
+ } else if (*format == 'u' || *format == 'x' || *format == 'X') {
+ base = *format == 'u' ? 10 : 16;
+ uval = strtoull(str, &endptr, base);
+ if (lpref == 0)
+ *va_arg(args, unsigned int *) = uval;
+ else if (lpref == 1)
+ *va_arg(args, unsigned long *) = uval;
+ else if (lpref == 2)
+ *va_arg(args, unsigned long long *) = uval;
+ } else if (*format == 'p') {
+ *va_arg(args, void **) = (void *)strtoul(str, &endptr, 16);
+ } else {
+ SET_ERRNO(EILSEQ);
+ goto done;
+ }
+
+ format++;
+ str = endptr;
+ matches++;
+
+ } else if (*format == '\0') {
+ goto done;
+ } else if (isspace(*format)) {
+ /* skip spaces in format and str */
+ while (isspace(*format))
+ format++;
+ while (isspace(*str))
+ str++;
+ } else if (*format == *str) {
+ /* literal match */
+ format++;
+ str++;
+ } else {
+ if (!matches)
+ matches = EOF;
+ goto done;
+ }
+ }
+
+done:
+ return matches;
+}
+
+static __attribute__((unused, format(scanf, 2, 3)))
+int sscanf(const char *str, const char *format, ...)
+{
+ va_list args;
+ int ret;
+
+ va_start(args, format);
+ ret = vsscanf(str, format, args);
+ va_end(args);
+ return ret;
+}
+
static __attribute__((unused))
void perror(const char *msg)
{
@@ -376,7 +625,14 @@ int setvbuf(FILE *stream __attribute__((unused)),
return 0;
}
-/* make sure to include all global symbols */
-#include "nolibc.h"
+static __attribute__((unused))
+const char *strerror(int errnum)
+{
+ static char buf[18] = "errno=";
+
+ i64toa_r(errnum, &buf[6]);
+
+ return buf;
+}
#endif /* _NOLIBC_STDIO_H */
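
Usage sketch (illustrative only, not part of the patch): the newly added snprintf()/sscanf() pair. The shared printf core now honors field widths, and vsscanf() understands %d/%u/%x/%p with optional l/ll modifiers plus literal and whitespace matching.

int main(void)
{
	char buf[32];
	int a = 0, b = 0;

	snprintf(buf, sizeof(buf), "%4d/%4d", 12, 345);	/* width padding */
	printf("formatted: '%s'\n", buf);

	if (sscanf("12/345", "%d/%d", &a, &b) == 2)
		printf("parsed a=%d b=%d\n", a, b);
	return 0;
}
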
diff --git a/tools/include/nolibc/stdlib.h b/tools/include/nolibc/stdlib.h
index bacfd35c5156..5fd99a480f82 100644
--- a/tools/include/nolibc/stdlib.h
+++ b/tools/include/nolibc/stdlib.h
@@ -4,6 +4,9 @@
* Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_STDLIB_H
#define _NOLIBC_STDLIB_H
@@ -29,7 +32,26 @@ static __attribute__((unused)) char itoa_buffer[21];
* As much as possible, please keep functions alphabetically sorted.
*/
+static __inline__
+int abs(int j)
+{
+ return j >= 0 ? j : -j;
+}
+
+static __inline__
+long labs(long j)
+{
+ return j >= 0 ? j : -j;
+}
+
+static __inline__
+long long llabs(long long j)
+{
+ return j >= 0 ? j : -j;
+}
+
/* must be exported, as it's used by libgcc for various divide functions */
+void abort(void);
__attribute__((weak,unused,noreturn,section(".text.nolibc_abort")))
void abort(void)
{
@@ -102,32 +124,6 @@ char *getenv(const char *name)
}
static __attribute__((unused))
-unsigned long getauxval(unsigned long type)
-{
- const unsigned long *auxv = _auxv;
- unsigned long ret;
-
- if (!auxv)
- return 0;
-
- while (1) {
- if (!auxv[0] && !auxv[1]) {
- ret = 0;
- break;
- }
-
- if (auxv[0] == type) {
- ret = auxv[1];
- break;
- }
-
- auxv += 2;
- }
-
- return ret;
-}
-
-static __attribute__((unused))
void *malloc(size_t len)
{
struct nolibc_heap *heap;
@@ -185,7 +181,7 @@ void *realloc(void *old_ptr, size_t new_size)
if (__builtin_expect(!ret, 0))
return NULL;
- memcpy(ret, heap->user_p, heap->len);
+ memcpy(ret, heap->user_p, user_p_len);
munmap(heap, heap->len);
return ret;
}
@@ -274,7 +270,7 @@ int itoa_r(long in, char *buffer)
int len = 0;
if (in < 0) {
- in = -in;
+ in = -(unsigned long)in;
*(ptr++) = '-';
len++;
}
@@ -410,7 +406,7 @@ int i64toa_r(int64_t in, char *buffer)
int len = 0;
if (in < 0) {
- in = -in;
+ in = -(uint64_t)in;
*(ptr++) = '-';
len++;
}
@@ -438,7 +434,113 @@ char *u64toa(uint64_t in)
return itoa_buffer;
}
-/* make sure to include all global symbols */
-#include "nolibc.h"
+static __attribute__((unused))
+uintmax_t __strtox(const char *nptr, char **endptr, int base, intmax_t lower_limit, uintmax_t upper_limit)
+{
+ const char signed_ = lower_limit != 0;
+ unsigned char neg = 0, overflow = 0;
+ uintmax_t val = 0, limit, old_val;
+ char c;
+
+ if (base < 0 || base > 36) {
+ SET_ERRNO(EINVAL);
+ goto out;
+ }
+
+ while (isspace(*nptr))
+ nptr++;
+
+ if (*nptr == '+') {
+ nptr++;
+ } else if (*nptr == '-') {
+ neg = 1;
+ nptr++;
+ }
+
+ if (signed_ && neg)
+ limit = -(uintmax_t)lower_limit;
+ else
+ limit = upper_limit;
+
+ if ((base == 0 || base == 16) &&
+ (strncmp(nptr, "0x", 2) == 0 || strncmp(nptr, "0X", 2) == 0)) {
+ base = 16;
+ nptr += 2;
+ } else if (base == 0 && strncmp(nptr, "0", 1) == 0) {
+ base = 8;
+ nptr += 1;
+ } else if (base == 0) {
+ base = 10;
+ }
+
+ while (*nptr) {
+ c = *nptr;
+
+ if (c >= '0' && c <= '9')
+ c -= '0';
+ else if (c >= 'a' && c <= 'z')
+ c = c - 'a' + 10;
+ else if (c >= 'A' && c <= 'Z')
+ c = c - 'A' + 10;
+ else
+ goto out;
+
+ if (c >= base)
+ goto out;
+
+ nptr++;
+ old_val = val;
+ val *= base;
+ val += c;
+
+ if (val > limit || val < old_val)
+ overflow = 1;
+ }
+
+out:
+ if (overflow) {
+ SET_ERRNO(ERANGE);
+ val = limit;
+ }
+ if (endptr)
+ *endptr = (char *)nptr;
+ return neg ? -val : val;
+}
+
+static __attribute__((unused))
+long strtol(const char *nptr, char **endptr, int base)
+{
+ return __strtox(nptr, endptr, base, LONG_MIN, LONG_MAX);
+}
+
+static __attribute__((unused))
+unsigned long strtoul(const char *nptr, char **endptr, int base)
+{
+ return __strtox(nptr, endptr, base, 0, ULONG_MAX);
+}
+
+static __attribute__((unused))
+long long strtoll(const char *nptr, char **endptr, int base)
+{
+ return __strtox(nptr, endptr, base, LLONG_MIN, LLONG_MAX);
+}
+
+static __attribute__((unused))
+unsigned long long strtoull(const char *nptr, char **endptr, int base)
+{
+ return __strtox(nptr, endptr, base, 0, ULLONG_MAX);
+}
+
+static __attribute__((unused))
+intmax_t strtoimax(const char *nptr, char **endptr, int base)
+{
+ return __strtox(nptr, endptr, base, INTMAX_MIN, INTMAX_MAX);
+}
+
+static __attribute__((unused))
+uintmax_t strtoumax(const char *nptr, char **endptr, int base)
+{
+ return __strtox(nptr, endptr, base, 0, UINTMAX_MAX);
+}
#endif /* _NOLIBC_STDLIB_H */
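
Usage sketch (illustrative only, not part of the patch): the new strto*() family built on __strtox(). Base 0 auto-detects the 0x and 0 prefixes, *endptr is left on the first unparsed character, and out-of-range input saturates to the limit and sets errno to ERANGE.

int main(void)
{
	unsigned long v;
	char *end;

	v = strtoul("0x1fz42", &end, 0);
	printf("value=%lx rest='%s'\n", v, end);

	errno = 0;
	strtol("99999999999999999999999", NULL, 10);
	if (errno == ERANGE)
		printf("overflow detected\n");
	return 0;
}
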
diff --git a/tools/include/nolibc/string.h b/tools/include/nolibc/string.h
index a01c69dd495f..163a17e7dd38 100644
--- a/tools/include/nolibc/string.h
+++ b/tools/include/nolibc/string.h
@@ -4,9 +4,13 @@
* Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_STRING_H
#define _NOLIBC_STRING_H
+#include "arch.h"
#include "std.h"
static void *malloc(size_t len);
@@ -31,6 +35,7 @@ int memcmp(const void *s1, const void *s2, size_t n)
/* might be ignored by the compiler without -ffreestanding, then found as
* missing.
*/
+void *memmove(void *dst, const void *src, size_t len);
__attribute__((weak,unused,section(".text.nolibc_memmove")))
void *memmove(void *dst, const void *src, size_t len)
{
@@ -55,6 +60,7 @@ void *memmove(void *dst, const void *src, size_t len)
#ifndef NOLIBC_ARCH_HAS_MEMCPY
/* must be exported, as it's used by libgcc on ARM */
+void *memcpy(void *dst, const void *src, size_t len);
__attribute__((weak,unused,section(".text.nolibc_memcpy")))
void *memcpy(void *dst, const void *src, size_t len)
{
@@ -72,6 +78,7 @@ void *memcpy(void *dst, const void *src, size_t len)
/* might be ignored by the compiler without -ffreestanding, then found as
* missing.
*/
+void *memset(void *dst, int b, size_t len);
__attribute__((weak,unused,section(".text.nolibc_memset")))
void *memset(void *dst, int b, size_t len)
{
@@ -123,7 +130,8 @@ char *strcpy(char *dst, const char *src)
* thus itself, hence the asm() statement below that's meant to disable this
* confusing practice.
*/
-static __attribute__((unused))
+size_t strlen(const char *str);
+__attribute__((weak,unused,section(".text.nolibc_strlen")))
size_t strlen(const char *str)
{
size_t len;
@@ -187,22 +195,26 @@ char *strndup(const char *str, size_t maxlen)
static __attribute__((unused))
size_t strlcat(char *dst, const char *src, size_t size)
{
- size_t len;
- char c;
-
- for (len = 0; dst[len]; len++)
- ;
-
- for (;;) {
- c = *src;
- if (len < size)
- dst[len] = c;
- if (!c)
+ size_t len = strnlen(dst, size);
+
+ /*
+ * We want len < size-1. But as size is unsigned and can wrap
+ * around, we use len + 1 instead.
+ */
+ while (len + 1 < size) {
+ dst[len] = *src;
+ if (*src == '\0')
break;
len++;
src++;
}
+ if (len < size)
+ dst[len] = '\0';
+
+ while (*src++)
+ len++;
+
return len;
}
@@ -210,16 +222,18 @@ static __attribute__((unused))
size_t strlcpy(char *dst, const char *src, size_t size)
{
size_t len;
- char c;
- for (len = 0;;) {
- c = src[len];
- if (len < size)
- dst[len] = c;
- if (!c)
- break;
- len++;
+ for (len = 0; len < size; len++) {
+ dst[len] = src[len];
+ if (!dst[len])
+ return len;
}
+ if (size)
+ dst[size-1] = '\0';
+
+ while (src[len])
+ len++;
+
return len;
}
@@ -278,7 +292,40 @@ char *strrchr(const char *s, int c)
return (char *)ret;
}
-/* make sure to include all global symbols */
-#include "nolibc.h"
+static __attribute__((unused))
+char *strstr(const char *haystack, const char *needle)
+{
+ size_t len_haystack, len_needle;
+
+ len_needle = strlen(needle);
+ if (!len_needle)
+ return NULL;
+
+ len_haystack = strlen(haystack);
+ while (len_haystack >= len_needle) {
+ if (!memcmp(haystack, needle, len_needle))
+ return (char *)haystack;
+ haystack++;
+ len_haystack--;
+ }
+
+ return NULL;
+}
+
+static __attribute__((unused))
+int tolower(int c)
+{
+ if (c >= 'A' && c <= 'Z')
+ return c - 'A' + 'a';
+ return c;
+}
+
+static __attribute__((unused))
+int toupper(int c)
+{
+ if (c >= 'a' && c <= 'z')
+ return c - 'a' + 'A';
+ return c;
+}
#endif /* _NOLIBC_STRING_H */
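
Usage sketch (illustrative only, not part of the patch): the reworked strlcpy() together with the new strstr()/toupper() helpers. strlcpy() truncates to the destination size but always NUL-terminates, and returns the full source length so callers can detect truncation.

int main(void)
{
	char dst[8];
	size_t len;

	len = strlcpy(dst, "hello world", sizeof(dst));
	printf("copied '%s', source length %lu\n", dst, (unsigned long)len);

	if (strstr("nolibc string helpers", "string"))
		printf("substring found\n");

	printf("upper: %c\n", toupper('q'));
	return 0;
}
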
diff --git a/tools/include/nolibc/sys.h b/tools/include/nolibc/sys.h
index dda9dffd1d74..c5564f57deec 100644
--- a/tools/include/nolibc/sys.h
+++ b/tools/include/nolibc/sys.h
@@ -4,26 +4,27 @@
* Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_SYS_H
#define _NOLIBC_SYS_H
#include "std.h"
/* system includes */
-#include <asm/unistd.h>
-#include <asm/signal.h> /* for SIGCHLD */
-#include <asm/ioctls.h>
-#include <asm/mman.h>
+#include <linux/unistd.h>
+#include <linux/signal.h> /* for SIGCHLD */
+#include <linux/termios.h>
+#include <linux/mman.h>
#include <linux/fs.h>
#include <linux/loop.h>
#include <linux/time.h>
#include <linux/auxvec.h>
#include <linux/fcntl.h> /* for O_* and AT_* */
+#include <linux/sched.h> /* for clone_args */
#include <linux/stat.h> /* for statx() */
-#include <linux/prctl.h>
-#include <linux/resource.h>
-#include "arch.h"
#include "errno.h"
#include "stdarg.h"
#include "types.h"
@@ -139,12 +140,10 @@ int chdir(const char *path)
static __attribute__((unused))
int sys_chmod(const char *path, mode_t mode)
{
-#ifdef __NR_fchmodat
+#if defined(__NR_fchmodat)
return my_syscall4(__NR_fchmodat, AT_FDCWD, path, mode, 0);
-#elif defined(__NR_chmod)
- return my_syscall2(__NR_chmod, path, mode);
#else
- return __nolibc_enosys(__func__, path, mode);
+ return my_syscall2(__NR_chmod, path, mode);
#endif
}
@@ -162,12 +161,10 @@ int chmod(const char *path, mode_t mode)
static __attribute__((unused))
int sys_chown(const char *path, uid_t owner, gid_t group)
{
-#ifdef __NR_fchownat
+#if defined(__NR_fchownat)
return my_syscall5(__NR_fchownat, AT_FDCWD, path, owner, group, 0);
-#elif defined(__NR_chown)
- return my_syscall3(__NR_chown, path, owner, group);
#else
- return __nolibc_enosys(__func__, path, owner, group);
+ return my_syscall3(__NR_chown, path, owner, group);
#endif
}
@@ -236,12 +233,23 @@ int dup(int fd)
static __attribute__((unused))
int sys_dup2(int old, int new)
{
-#ifdef __NR_dup3
+#if defined(__NR_dup3)
+ int ret, nr_fcntl;
+
+#ifdef __NR_fcntl64
+ nr_fcntl = __NR_fcntl64;
+#else
+ nr_fcntl = __NR_fcntl;
+#endif
+
+ if (old == new) {
+ ret = my_syscall2(nr_fcntl, old, F_GETFD);
+ return ret < 0 ? ret : old;
+ }
+
return my_syscall3(__NR_dup3, old, new, 0);
-#elif defined(__NR_dup2)
- return my_syscall2(__NR_dup2, old, new);
#else
- return __nolibc_enosys(__func__, old, new);
+ return my_syscall2(__NR_dup2, old, new);
#endif
}
@@ -256,7 +264,7 @@ int dup2(int old, int new)
* int dup3(int old, int new, int flags);
*/
-#ifdef __NR_dup3
+#if defined(__NR_dup3)
static __attribute__((unused))
int sys_dup3(int old, int new, int flags)
{
@@ -300,11 +308,17 @@ void sys_exit(int status)
}
static __attribute__((noreturn,unused))
-void exit(int status)
+void _exit(int status)
{
sys_exit(status);
}
+static __attribute__((noreturn,unused))
+void exit(int status)
+{
+ _exit(status);
+}
+
/*
* pid_t fork(void);
@@ -314,16 +328,14 @@ void exit(int status)
static __attribute__((unused))
pid_t sys_fork(void)
{
-#ifdef __NR_clone
+#if defined(__NR_clone)
/* note: some archs only have clone() and not fork(). Different archs
* have a different API, but most archs have the flags on first arg and
* will not use the rest with no other flag.
*/
return my_syscall5(__NR_clone, SIGCHLD, 0, 0, 0, 0);
-#elif defined(__NR_fork)
- return my_syscall0(__NR_fork);
#else
- return __nolibc_enosys(__func__);
+ return my_syscall0(__NR_fork);
#endif
}
#endif
@@ -334,6 +346,32 @@ pid_t fork(void)
return __sysret(sys_fork());
}
+#ifndef sys_vfork
+static __attribute__((unused))
+pid_t sys_vfork(void)
+{
+#if defined(__NR_vfork)
+ return my_syscall0(__NR_vfork);
+#else
+ /*
+ * clone() could be used but has different argument orders per
+ * architecture.
+ */
+ struct clone_args args = {
+ .flags = CLONE_VM | CLONE_VFORK,
+ .exit_signal = SIGCHLD,
+ };
+
+ return my_syscall2(__NR_clone3, &args, sizeof(args));
+#endif
+}
+#endif
+
+static __attribute__((unused))
+pid_t vfork(void)
+{
+ return __sysret(sys_vfork());
+}
/*
* int fsync(int fd);
@@ -376,7 +414,7 @@ int getdents64(int fd, struct linux_dirent64 *dirp, int count)
static __attribute__((unused))
uid_t sys_geteuid(void)
{
-#ifdef __NR_geteuid32
+#if defined(__NR_geteuid32)
return my_syscall0(__NR_geteuid32);
#else
return my_syscall0(__NR_geteuid);
@@ -488,34 +526,13 @@ int getpagesize(void)
/*
- * int gettimeofday(struct timeval *tv, struct timezone *tz);
- */
-
-static __attribute__((unused))
-int sys_gettimeofday(struct timeval *tv, struct timezone *tz)
-{
-#ifdef __NR_gettimeofday
- return my_syscall2(__NR_gettimeofday, tv, tz);
-#else
- return __nolibc_enosys(__func__, tv, tz);
-#endif
-}
-
-static __attribute__((unused))
-int gettimeofday(struct timeval *tv, struct timezone *tz)
-{
- return __sysret(sys_gettimeofday(tv, tz));
-}
-
-
-/*
* uid_t getuid(void);
*/
static __attribute__((unused))
uid_t sys_getuid(void)
{
-#ifdef __NR_getuid32
+#if defined(__NR_getuid32)
return my_syscall0(__NR_getuid32);
#else
return my_syscall0(__NR_getuid);
@@ -530,22 +547,6 @@ uid_t getuid(void)
/*
- * int ioctl(int fd, unsigned long req, void *value);
- */
-
-static __attribute__((unused))
-int sys_ioctl(int fd, unsigned long req, void *value)
-{
- return my_syscall3(__NR_ioctl, fd, req, value);
-}
-
-static __attribute__((unused))
-int ioctl(int fd, unsigned long req, void *value)
-{
- return __sysret(sys_ioctl(fd, req, value));
-}
-
-/*
* int kill(pid_t pid, int signal);
*/
@@ -569,12 +570,10 @@ int kill(pid_t pid, int signal)
static __attribute__((unused))
int sys_link(const char *old, const char *new)
{
-#ifdef __NR_linkat
+#if defined(__NR_linkat)
return my_syscall5(__NR_linkat, AT_FDCWD, old, AT_FDCWD, new, 0);
-#elif defined(__NR_link)
- return my_syscall2(__NR_link, old, new);
#else
- return __nolibc_enosys(__func__, old, new);
+ return my_syscall2(__NR_link, old, new);
#endif
}
@@ -592,10 +591,23 @@ int link(const char *old, const char *new)
static __attribute__((unused))
off_t sys_lseek(int fd, off_t offset, int whence)
{
-#ifdef __NR_lseek
+#if defined(__NR_lseek)
return my_syscall3(__NR_lseek, fd, offset, whence);
#else
- return __nolibc_enosys(__func__, fd, offset, whence);
+ __kernel_loff_t loff = 0;
+ off_t result;
+ int ret;
+
+ /* __NR_llseek only exists on 32-bit, where nolibc's off_t is also 32-bit */
+ ret = my_syscall5(__NR_llseek, fd, 0, offset, &loff, whence);
+ if (ret < 0)
+ result = ret;
+ else if (loff != (off_t)loff)
+ result = -EOVERFLOW;
+ else
+ result = loff;
+
+ return result;
#endif
}
@@ -613,12 +625,10 @@ off_t lseek(int fd, off_t offset, int whence)
static __attribute__((unused))
int sys_mkdir(const char *path, mode_t mode)
{
-#ifdef __NR_mkdirat
+#if defined(__NR_mkdirat)
return my_syscall3(__NR_mkdirat, AT_FDCWD, path, mode);
-#elif defined(__NR_mkdir)
- return my_syscall2(__NR_mkdir, path, mode);
#else
- return __nolibc_enosys(__func__, path, mode);
+ return my_syscall2(__NR_mkdir, path, mode);
#endif
}
@@ -635,12 +645,10 @@ int mkdir(const char *path, mode_t mode)
static __attribute__((unused))
int sys_rmdir(const char *path)
{
-#ifdef __NR_rmdir
+#if defined(__NR_rmdir)
return my_syscall1(__NR_rmdir, path);
-#elif defined(__NR_unlinkat)
- return my_syscall3(__NR_unlinkat, AT_FDCWD, path, AT_REMOVEDIR);
#else
- return __nolibc_enosys(__func__, path);
+ return my_syscall3(__NR_unlinkat, AT_FDCWD, path, AT_REMOVEDIR);
#endif
}
@@ -658,12 +666,10 @@ int rmdir(const char *path)
static __attribute__((unused))
long sys_mknod(const char *path, mode_t mode, dev_t dev)
{
-#ifdef __NR_mknodat
+#if defined(__NR_mknodat)
return my_syscall4(__NR_mknodat, AT_FDCWD, path, mode, dev);
-#elif defined(__NR_mknod)
- return my_syscall3(__NR_mknod, path, mode, dev);
#else
- return __nolibc_enosys(__func__, path, mode, dev);
+ return my_syscall3(__NR_mknod, path, mode, dev);
#endif
}
@@ -673,106 +679,6 @@ int mknod(const char *path, mode_t mode, dev_t dev)
return __sysret(sys_mknod(path, mode, dev));
}
-#ifndef sys_mmap
-static __attribute__((unused))
-void *sys_mmap(void *addr, size_t length, int prot, int flags, int fd,
- off_t offset)
-{
- int n;
-
-#if defined(__NR_mmap2)
- n = __NR_mmap2;
- offset >>= 12;
-#else
- n = __NR_mmap;
-#endif
-
- return (void *)my_syscall6(n, addr, length, prot, flags, fd, offset);
-}
-#endif
-
-/* Note that on Linux, MAP_FAILED is -1 so we can use the generic __sysret()
- * which returns -1 upon error and still satisfy user land that checks for
- * MAP_FAILED.
- */
-
-static __attribute__((unused))
-void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset)
-{
- void *ret = sys_mmap(addr, length, prot, flags, fd, offset);
-
- if ((unsigned long)ret >= -4095UL) {
- SET_ERRNO(-(long)ret);
- ret = MAP_FAILED;
- }
- return ret;
-}
-
-static __attribute__((unused))
-int sys_munmap(void *addr, size_t length)
-{
- return my_syscall2(__NR_munmap, addr, length);
-}
-
-static __attribute__((unused))
-int munmap(void *addr, size_t length)
-{
- return __sysret(sys_munmap(addr, length));
-}
-
-/*
- * int mount(const char *source, const char *target,
- * const char *fstype, unsigned long flags,
- * const void *data);
- */
-static __attribute__((unused))
-int sys_mount(const char *src, const char *tgt, const char *fst,
- unsigned long flags, const void *data)
-{
- return my_syscall5(__NR_mount, src, tgt, fst, flags, data);
-}
-
-static __attribute__((unused))
-int mount(const char *src, const char *tgt,
- const char *fst, unsigned long flags,
- const void *data)
-{
- return __sysret(sys_mount(src, tgt, fst, flags, data));
-}
-
-
-/*
- * int open(const char *path, int flags[, mode_t mode]);
- */
-
-static __attribute__((unused))
-int sys_open(const char *path, int flags, mode_t mode)
-{
-#ifdef __NR_openat
- return my_syscall4(__NR_openat, AT_FDCWD, path, flags, mode);
-#elif defined(__NR_open)
- return my_syscall3(__NR_open, path, flags, mode);
-#else
- return __nolibc_enosys(__func__, path, flags, mode);
-#endif
-}
-
-static __attribute__((unused))
-int open(const char *path, int flags, ...)
-{
- mode_t mode = 0;
-
- if (flags & O_CREAT) {
- va_list args;
-
- va_start(args, flags);
- mode = va_arg(args, int);
- va_end(args);
- }
-
- return __sysret(sys_open(path, flags, mode));
-}
-
/*
* int pipe2(int pipefd[2], int flags);
@@ -799,26 +705,6 @@ int pipe(int pipefd[2])
/*
- * int prctl(int option, unsigned long arg2, unsigned long arg3,
- * unsigned long arg4, unsigned long arg5);
- */
-
-static __attribute__((unused))
-int sys_prctl(int option, unsigned long arg2, unsigned long arg3,
- unsigned long arg4, unsigned long arg5)
-{
- return my_syscall5(__NR_prctl, option, arg2, arg3, arg4, arg5);
-}
-
-static __attribute__((unused))
-int prctl(int option, unsigned long arg2, unsigned long arg3,
- unsigned long arg4, unsigned long arg5)
-{
- return __sysret(sys_prctl(option, arg2, arg3, arg4, arg5));
-}
-
-
-/*
* int pivot_root(const char *new, const char *old);
*/
@@ -836,35 +722,6 @@ int pivot_root(const char *new, const char *old)
/*
- * int poll(struct pollfd *fds, int nfds, int timeout);
- */
-
-static __attribute__((unused))
-int sys_poll(struct pollfd *fds, int nfds, int timeout)
-{
-#if defined(__NR_ppoll)
- struct timespec t;
-
- if (timeout >= 0) {
- t.tv_sec = timeout / 1000;
- t.tv_nsec = (timeout % 1000) * 1000000;
- }
- return my_syscall5(__NR_ppoll, fds, nfds, (timeout >= 0) ? &t : NULL, NULL, 0);
-#elif defined(__NR_poll)
- return my_syscall3(__NR_poll, fds, nfds, timeout);
-#else
- return __nolibc_enosys(__func__, fds, nfds, timeout);
-#endif
-}
-
-static __attribute__((unused))
-int poll(struct pollfd *fds, int nfds, int timeout)
-{
- return __sysret(sys_poll(fds, nfds, timeout));
-}
-
-
-/*
* ssize_t read(int fd, void *buf, size_t count);
*/
@@ -882,61 +739,6 @@ ssize_t read(int fd, void *buf, size_t count)
/*
- * int reboot(int cmd);
- * <cmd> is among LINUX_REBOOT_CMD_*
- */
-
-static __attribute__((unused))
-ssize_t sys_reboot(int magic1, int magic2, int cmd, void *arg)
-{
- return my_syscall4(__NR_reboot, magic1, magic2, cmd, arg);
-}
-
-static __attribute__((unused))
-int reboot(int cmd)
-{
- return __sysret(sys_reboot(LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2, cmd, 0));
-}
-
-
-/*
- * int getrlimit(int resource, struct rlimit *rlim);
- * int setrlimit(int resource, const struct rlimit *rlim);
- */
-
-static __attribute__((unused))
-int sys_prlimit64(pid_t pid, int resource,
- const struct rlimit64 *new_limit, struct rlimit64 *old_limit)
-{
- return my_syscall4(__NR_prlimit64, pid, resource, new_limit, old_limit);
-}
-
-static __attribute__((unused))
-int getrlimit(int resource, struct rlimit *rlim)
-{
- struct rlimit64 rlim64;
- int ret;
-
- ret = __sysret(sys_prlimit64(0, resource, NULL, &rlim64));
- rlim->rlim_cur = rlim64.rlim_cur;
- rlim->rlim_max = rlim64.rlim_max;
-
- return ret;
-}
-
-static __attribute__((unused))
-int setrlimit(int resource, const struct rlimit *rlim)
-{
- struct rlimit64 rlim64 = {
- .rlim_cur = rlim->rlim_cur,
- .rlim_max = rlim->rlim_max,
- };
-
- return __sysret(sys_prlimit64(0, resource, &rlim64, NULL));
-}
-
-
-/*
* int sched_yield(void);
*/
@@ -981,7 +783,13 @@ int sys_select(int nfds, fd_set *rfds, fd_set *wfds, fd_set *efds, struct timeva
}
return my_syscall6(__NR_pselect6, nfds, rfds, wfds, efds, timeout ? &t : NULL, NULL);
#else
- return __nolibc_enosys(__func__, nfds, rfds, wfds, efds, timeout);
+ struct __kernel_timespec t;
+
+ if (timeout) {
+ t.tv_sec = timeout->tv_sec;
+ t.tv_nsec = timeout->tv_usec * 1000;
+ }
+ return my_syscall6(__NR_pselect6_time64, nfds, rfds, wfds, efds, timeout ? &t : NULL, NULL);
#endif
}
@@ -1008,77 +816,31 @@ int setpgid(pid_t pid, pid_t pgid)
return __sysret(sys_setpgid(pid, pgid));
}
-
/*
- * pid_t setsid(void);
+ * pid_t setpgrp(void)
*/
static __attribute__((unused))
-pid_t sys_setsid(void)
+pid_t setpgrp(void)
{
- return my_syscall0(__NR_setsid);
+ return setpgid(0, 0);
}
-static __attribute__((unused))
-pid_t setsid(void)
-{
- return __sysret(sys_setsid());
-}
/*
- * int statx(int fd, const char *path, int flags, unsigned int mask, struct statx *buf);
- * int stat(const char *path, struct stat *buf);
+ * pid_t setsid(void);
*/
static __attribute__((unused))
-int sys_statx(int fd, const char *path, int flags, unsigned int mask, struct statx *buf)
+pid_t sys_setsid(void)
{
-#ifdef __NR_statx
- return my_syscall5(__NR_statx, fd, path, flags, mask, buf);
-#else
- return __nolibc_enosys(__func__, fd, path, flags, mask, buf);
-#endif
+ return my_syscall0(__NR_setsid);
}
static __attribute__((unused))
-int statx(int fd, const char *path, int flags, unsigned int mask, struct statx *buf)
-{
- return __sysret(sys_statx(fd, path, flags, mask, buf));
-}
-
-
-static __attribute__((unused))
-int stat(const char *path, struct stat *buf)
+pid_t setsid(void)
{
- struct statx statx;
- long ret;
-
- ret = __sysret(sys_statx(AT_FDCWD, path, AT_NO_AUTOMOUNT, STATX_BASIC_STATS, &statx));
- if (ret == -1)
- return ret;
-
- buf->st_dev = ((statx.stx_dev_minor & 0xff)
- | (statx.stx_dev_major << 8)
- | ((statx.stx_dev_minor & ~0xff) << 12));
- buf->st_ino = statx.stx_ino;
- buf->st_mode = statx.stx_mode;
- buf->st_nlink = statx.stx_nlink;
- buf->st_uid = statx.stx_uid;
- buf->st_gid = statx.stx_gid;
- buf->st_rdev = ((statx.stx_rdev_minor & 0xff)
- | (statx.stx_rdev_major << 8)
- | ((statx.stx_rdev_minor & ~0xff) << 12));
- buf->st_size = statx.stx_size;
- buf->st_blksize = statx.stx_blksize;
- buf->st_blocks = statx.stx_blocks;
- buf->st_atim.tv_sec = statx.stx_atime.tv_sec;
- buf->st_atim.tv_nsec = statx.stx_atime.tv_nsec;
- buf->st_mtim.tv_sec = statx.stx_mtime.tv_sec;
- buf->st_mtim.tv_nsec = statx.stx_mtime.tv_nsec;
- buf->st_ctim.tv_sec = statx.stx_ctime.tv_sec;
- buf->st_ctim.tv_nsec = statx.stx_ctime.tv_nsec;
-
- return 0;
+ return __sysret(sys_setsid());
}
@@ -1089,12 +851,10 @@ int stat(const char *path, struct stat *buf)
static __attribute__((unused))
int sys_symlink(const char *old, const char *new)
{
-#ifdef __NR_symlinkat
+#if defined(__NR_symlinkat)
return my_syscall3(__NR_symlinkat, old, AT_FDCWD, new);
-#elif defined(__NR_symlink)
- return my_syscall2(__NR_symlink, old, new);
#else
- return __nolibc_enosys(__func__, old, new);
+ return my_syscall2(__NR_symlink, old, new);
#endif
}
@@ -1146,12 +906,10 @@ int umount2(const char *path, int flags)
static __attribute__((unused))
int sys_unlink(const char *path)
{
-#ifdef __NR_unlinkat
+#if defined(__NR_unlinkat)
return my_syscall3(__NR_unlinkat, AT_FDCWD, path, 0);
-#elif defined(__NR_unlink)
- return my_syscall1(__NR_unlink, path);
#else
- return __nolibc_enosys(__func__, path);
+ return my_syscall1(__NR_unlink, path);
#endif
}
@@ -1163,42 +921,6 @@ int unlink(const char *path)
/*
- * pid_t wait(int *status);
- * pid_t wait4(pid_t pid, int *status, int options, struct rusage *rusage);
- * pid_t waitpid(pid_t pid, int *status, int options);
- */
-
-static __attribute__((unused))
-pid_t sys_wait4(pid_t pid, int *status, int options, struct rusage *rusage)
-{
-#ifdef __NR_wait4
- return my_syscall4(__NR_wait4, pid, status, options, rusage);
-#else
- return __nolibc_enosys(__func__, pid, status, options, rusage);
-#endif
-}
-
-static __attribute__((unused))
-pid_t wait(int *status)
-{
- return __sysret(sys_wait4(-1, status, 0, NULL));
-}
-
-static __attribute__((unused))
-pid_t wait4(pid_t pid, int *status, int options, struct rusage *rusage)
-{
- return __sysret(sys_wait4(pid, status, options, rusage));
-}
-
-
-static __attribute__((unused))
-pid_t waitpid(pid_t pid, int *status, int options)
-{
- return __sysret(sys_wait4(pid, status, options, NULL));
-}
-
-
-/*
* ssize_t write(int fd, const void *buf, size_t count);
*/
@@ -1231,7 +953,4 @@ int memfd_create(const char *name, unsigned int flags)
return __sysret(sys_memfd_create(name, flags));
}
-/* make sure to include all global symbols */
-#include "nolibc.h"
-
#endif /* _NOLIBC_SYS_H */
diff --git a/tools/include/nolibc/sys/auxv.h b/tools/include/nolibc/sys/auxv.h
new file mode 100644
index 000000000000..c52463d6c18d
--- /dev/null
+++ b/tools/include/nolibc/sys/auxv.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * auxv definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_AUXV_H
+#define _NOLIBC_SYS_AUXV_H
+
+#include "../crt.h"
+
+static __attribute__((unused))
+unsigned long getauxval(unsigned long type)
+{
+ const unsigned long *auxv = _auxv;
+ unsigned long ret;
+
+ if (!auxv)
+ return 0;
+
+ while (1) {
+ if (!auxv[0] && !auxv[1]) {
+ ret = 0;
+ break;
+ }
+
+ if (auxv[0] == type) {
+ ret = auxv[1];
+ break;
+ }
+
+ auxv += 2;
+ }
+
+ return ret;
+}
+
+#endif /* _NOLIBC_SYS_AUXV_H */
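
Usage sketch (illustrative only, not part of the patch): getauxval() walks the auxiliary vector that the nolibc startup code stored in _auxv; AT_PAGESZ comes from linux/auxvec.h, which sys.h already includes.

int main(void)
{
	unsigned long pagesz = getauxval(AT_PAGESZ);

	printf("page size: %lu\n", pagesz);
	return 0;
}
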
diff --git a/tools/include/nolibc/sys/ioctl.h b/tools/include/nolibc/sys/ioctl.h
new file mode 100644
index 000000000000..fc880687e02a
--- /dev/null
+++ b/tools/include/nolibc/sys/ioctl.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Ioctl definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_IOCTL_H
+#define _NOLIBC_SYS_IOCTL_H
+
+#include "../sys.h"
+
+#include <linux/ioctl.h>
+
+/*
+ * int ioctl(int fd, unsigned long cmd, ... arg);
+ */
+
+static __attribute__((unused))
+long sys_ioctl(unsigned int fd, unsigned int cmd, unsigned long arg)
+{
+ return my_syscall3(__NR_ioctl, fd, cmd, arg);
+}
+
+#define ioctl(fd, cmd, arg) __sysret(sys_ioctl(fd, cmd, (unsigned long)(arg)))
+
+#endif /* _NOLIBC_SYS_IOCTL_H */
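
Usage sketch (illustrative only, not part of the patch): the ioctl() macro passes any pointer or integer argument through as an unsigned long. This example queries the terminal size, so it assumes stdout is a tty; TIOCGWINSZ and struct winsize are expected to come in via linux/termios.h.

int main(void)
{
	struct winsize ws;

	if (ioctl(1, TIOCGWINSZ, &ws) == 0)
		printf("%u rows x %u cols\n", ws.ws_row, ws.ws_col);
	return 0;
}
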
diff --git a/tools/include/nolibc/sys/mman.h b/tools/include/nolibc/sys/mman.h
new file mode 100644
index 000000000000..5228751b458c
--- /dev/null
+++ b/tools/include/nolibc/sys/mman.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * mm definition for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_MMAN_H
+#define _NOLIBC_SYS_MMAN_H
+
+#include "../arch.h"
+#include "../sys.h"
+
+#ifndef sys_mmap
+static __attribute__((unused))
+void *sys_mmap(void *addr, size_t length, int prot, int flags, int fd,
+ off_t offset)
+{
+ int n;
+
+#if defined(__NR_mmap2)
+ n = __NR_mmap2;
+ offset >>= 12;
+#else
+ n = __NR_mmap;
+#endif
+
+ return (void *)my_syscall6(n, addr, length, prot, flags, fd, offset);
+}
+#endif
+
+/* Note that on Linux, MAP_FAILED is -1 so we can use the generic __sysret()
+ * which returns -1 upon error and still satisfy user land that checks for
+ * MAP_FAILED.
+ */
+
+static __attribute__((unused))
+void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset)
+{
+ void *ret = sys_mmap(addr, length, prot, flags, fd, offset);
+
+ if ((unsigned long)ret >= -4095UL) {
+ SET_ERRNO(-(long)ret);
+ ret = MAP_FAILED;
+ }
+ return ret;
+}
+
+static __attribute__((unused))
+void *sys_mremap(void *old_address, size_t old_size, size_t new_size, int flags, void *new_address)
+{
+ return (void *)my_syscall5(__NR_mremap, old_address, old_size,
+ new_size, flags, new_address);
+}
+
+static __attribute__((unused))
+void *mremap(void *old_address, size_t old_size, size_t new_size, int flags, void *new_address)
+{
+ void *ret = sys_mremap(old_address, old_size, new_size, flags, new_address);
+
+ if ((unsigned long)ret >= -4095UL) {
+ SET_ERRNO(-(long)ret);
+ ret = MAP_FAILED;
+ }
+ return ret;
+}
+
+static __attribute__((unused))
+int sys_munmap(void *addr, size_t length)
+{
+ return my_syscall2(__NR_munmap, addr, length);
+}
+
+static __attribute__((unused))
+int munmap(void *addr, size_t length)
+{
+ return __sysret(sys_munmap(addr, length));
+}
+
+#endif /* _NOLIBC_SYS_MMAN_H */
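
Usage sketch (illustrative only, not part of the patch): an anonymous mapping exercised through the relocated mmap()/munmap() and the newly added mremap(). As in a regular libc, errors are reported as MAP_FAILED with errno set.

int main(void)
{
	size_t len = 4096;
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	p[0] = 'x';

	/* grow the mapping, letting the kernel move it if needed */
	p = mremap(p, len, 2 * len, MREMAP_MAYMOVE, NULL);
	if (p == MAP_FAILED)
		return 1;

	return munmap(p, 2 * len);
}
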
diff --git a/tools/include/nolibc/sys/mount.h b/tools/include/nolibc/sys/mount.h
new file mode 100644
index 000000000000..e39ec02ea24c
--- /dev/null
+++ b/tools/include/nolibc/sys/mount.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Mount definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_MOUNT_H
+#define _NOLIBC_SYS_MOUNT_H
+
+#include "../sys.h"
+
+#include <linux/mount.h>
+
+/*
+ * int mount(const char *source, const char *target,
+ * const char *fstype, unsigned long flags,
+ * const void *data);
+ */
+static __attribute__((unused))
+int sys_mount(const char *src, const char *tgt, const char *fst,
+ unsigned long flags, const void *data)
+{
+ return my_syscall5(__NR_mount, src, tgt, fst, flags, data);
+}
+
+static __attribute__((unused))
+int mount(const char *src, const char *tgt,
+ const char *fst, unsigned long flags,
+ const void *data)
+{
+ return __sysret(sys_mount(src, tgt, fst, flags, data));
+}
+
+#endif /* _NOLIBC_SYS_MOUNT_H */
diff --git a/tools/include/nolibc/sys/prctl.h b/tools/include/nolibc/sys/prctl.h
new file mode 100644
index 000000000000..0205907b6ac8
--- /dev/null
+++ b/tools/include/nolibc/sys/prctl.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Prctl definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_PRCTL_H
+#define _NOLIBC_SYS_PRCTL_H
+
+#include "../sys.h"
+
+#include <linux/prctl.h>
+
+/*
+ * int prctl(int option, unsigned long arg2, unsigned long arg3,
+ * unsigned long arg4, unsigned long arg5);
+ */
+
+static __attribute__((unused))
+int sys_prctl(int option, unsigned long arg2, unsigned long arg3,
+ unsigned long arg4, unsigned long arg5)
+{
+ return my_syscall5(__NR_prctl, option, arg2, arg3, arg4, arg5);
+}
+
+static __attribute__((unused))
+int prctl(int option, unsigned long arg2, unsigned long arg3,
+ unsigned long arg4, unsigned long arg5)
+{
+ return __sysret(sys_prctl(option, arg2, arg3, arg4, arg5));
+}
+
+#endif /* _NOLIBC_SYS_PRCTL_H */
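
Usage sketch (illustrative only, not part of the patch): setting and reading back the task comm name through the prctl() wrapper; PR_SET_NAME/PR_GET_NAME come from linux/prctl.h and the chosen name is arbitrary.

int main(void)
{
	char name[16] = "";

	prctl(PR_SET_NAME, (unsigned long)"nolibc-demo", 0, 0, 0);
	prctl(PR_GET_NAME, (unsigned long)name, 0, 0, 0);
	printf("comm: %s\n", name);
	return 0;
}
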
diff --git a/tools/include/nolibc/sys/random.h b/tools/include/nolibc/sys/random.h
new file mode 100644
index 000000000000..cd5d25c571a8
--- /dev/null
+++ b/tools/include/nolibc/sys/random.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * random definitions for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <thomas.weissschuh@linutronix.de>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_RANDOM_H
+#define _NOLIBC_SYS_RANDOM_H
+
+#include "../arch.h"
+#include "../sys.h"
+
+#include <linux/random.h>
+
+/*
+ * ssize_t getrandom(void *buf, size_t buflen, unsigned int flags);
+ */
+
+static __attribute__((unused))
+ssize_t sys_getrandom(void *buf, size_t buflen, unsigned int flags)
+{
+ return my_syscall3(__NR_getrandom, buf, buflen, flags);
+}
+
+static __attribute__((unused))
+ssize_t getrandom(void *buf, size_t buflen, unsigned int flags)
+{
+ return __sysret(sys_getrandom(buf, buflen, flags));
+}
+
+#endif /* _NOLIBC_SYS_RANDOM_H */
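
Usage sketch (illustrative only, not part of the patch): filling a small buffer from the kernel CSPRNG with the new getrandom() wrapper. With flags == 0 the call may block early at boot until the entropy pool is initialized.

int main(void)
{
	unsigned int seed;

	if (getrandom(&seed, sizeof(seed), 0) != sizeof(seed))
		return 1;
	printf("seed=%x\n", seed);
	return 0;
}
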
diff --git a/tools/include/nolibc/sys/reboot.h b/tools/include/nolibc/sys/reboot.h
new file mode 100644
index 000000000000..4a1e435be669
--- /dev/null
+++ b/tools/include/nolibc/sys/reboot.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Reboot definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_REBOOT_H
+#define _NOLIBC_SYS_REBOOT_H
+
+#include "../sys.h"
+
+#include <linux/reboot.h>
+
+/*
+ * int reboot(int cmd);
+ * <cmd> is among LINUX_REBOOT_CMD_*
+ */
+
+static __attribute__((unused))
+ssize_t sys_reboot(int magic1, int magic2, int cmd, void *arg)
+{
+ return my_syscall4(__NR_reboot, magic1, magic2, cmd, arg);
+}
+
+static __attribute__((unused))
+int reboot(int cmd)
+{
+ return __sysret(sys_reboot(LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2, cmd, 0));
+}
+
+#endif /* _NOLIBC_SYS_REBOOT_H */
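
The wrapper fills in the two magic values, so callers only pass the command.
A hypothetical power-off helper (requires CAP_SYS_BOOT), for illustration only:

    #include <sys/reboot.h>

    static void power_off(void)
    {
            reboot(LINUX_REBOOT_CMD_POWER_OFF);  /* does not return on success */
    }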
diff --git a/tools/include/nolibc/sys/resource.h b/tools/include/nolibc/sys/resource.h
new file mode 100644
index 000000000000..b990f914dc56
--- /dev/null
+++ b/tools/include/nolibc/sys/resource.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Resource definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_RESOURCE_H
+#define _NOLIBC_SYS_RESOURCE_H
+
+#include "../sys.h"
+
+#include <linux/resource.h>
+
+/*
+ * int getrlimit(int resource, struct rlimit *rlim);
+ * int setrlimit(int resource, const struct rlimit *rlim);
+ */
+
+static __attribute__((unused))
+int sys_prlimit64(pid_t pid, int resource,
+ const struct rlimit64 *new_limit, struct rlimit64 *old_limit)
+{
+ return my_syscall4(__NR_prlimit64, pid, resource, new_limit, old_limit);
+}
+
+static __attribute__((unused))
+int getrlimit(int resource, struct rlimit *rlim)
+{
+ struct rlimit64 rlim64;
+ int ret;
+
+ ret = __sysret(sys_prlimit64(0, resource, NULL, &rlim64));
+ rlim->rlim_cur = rlim64.rlim_cur;
+ rlim->rlim_max = rlim64.rlim_max;
+
+ return ret;
+}
+
+static __attribute__((unused))
+int setrlimit(int resource, const struct rlimit *rlim)
+{
+ struct rlimit64 rlim64 = {
+ .rlim_cur = rlim->rlim_cur,
+ .rlim_max = rlim->rlim_max,
+ };
+
+ return __sysret(sys_prlimit64(0, resource, &rlim64, NULL));
+}
+
+#endif /* _NOLIBC_SYS_RESOURCE_H */
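
Both wrappers funnel through prlimit64() on the current process, so the usual
libc idiom of raising a soft limit to its hard limit works unchanged. Sketch
only; RLIMIT_NOFILE is assumed to come from the <linux/resource.h> chain:

    #include <sys/resource.h>

    static int raise_nofile_limit(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
                    return -1;
            rl.rlim_cur = rl.rlim_max;      /* soft limit up to the hard limit */
            return setrlimit(RLIMIT_NOFILE, &rl);
    }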
diff --git a/tools/include/nolibc/sys/stat.h b/tools/include/nolibc/sys/stat.h
new file mode 100644
index 000000000000..8b4d80e3ea03
--- /dev/null
+++ b/tools/include/nolibc/sys/stat.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * stat definition for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_STAT_H
+#define _NOLIBC_SYS_STAT_H
+
+#include "../arch.h"
+#include "../types.h"
+#include "../sys.h"
+
+/*
+ * int statx(int fd, const char *path, int flags, unsigned int mask, struct statx *buf);
+ * int stat(const char *path, struct stat *buf);
+ * int fstatat(int fd, const char *path, struct stat *buf, int flag);
+ * int fstat(int fildes, struct stat *buf);
+ * int lstat(const char *path, struct stat *buf);
+ */
+
+static __attribute__((unused))
+int sys_statx(int fd, const char *path, int flags, unsigned int mask, struct statx *buf)
+{
+#ifdef __NR_statx
+ return my_syscall5(__NR_statx, fd, path, flags, mask, buf);
+#else
+ return __nolibc_enosys(__func__, fd, path, flags, mask, buf);
+#endif
+}
+
+static __attribute__((unused))
+int statx(int fd, const char *path, int flags, unsigned int mask, struct statx *buf)
+{
+ return __sysret(sys_statx(fd, path, flags, mask, buf));
+}
+
+
+static __attribute__((unused))
+int fstatat(int fd, const char *path, struct stat *buf, int flag)
+{
+ struct statx statx;
+ long ret;
+
+ ret = __sysret(sys_statx(fd, path, flag | AT_NO_AUTOMOUNT, STATX_BASIC_STATS, &statx));
+ if (ret == -1)
+ return ret;
+
+ buf->st_dev = ((statx.stx_dev_minor & 0xff)
+ | (statx.stx_dev_major << 8)
+ | ((statx.stx_dev_minor & ~0xff) << 12));
+ buf->st_ino = statx.stx_ino;
+ buf->st_mode = statx.stx_mode;
+ buf->st_nlink = statx.stx_nlink;
+ buf->st_uid = statx.stx_uid;
+ buf->st_gid = statx.stx_gid;
+ buf->st_rdev = ((statx.stx_rdev_minor & 0xff)
+ | (statx.stx_rdev_major << 8)
+ | ((statx.stx_rdev_minor & ~0xff) << 12));
+ buf->st_size = statx.stx_size;
+ buf->st_blksize = statx.stx_blksize;
+ buf->st_blocks = statx.stx_blocks;
+ buf->st_atim.tv_sec = statx.stx_atime.tv_sec;
+ buf->st_atim.tv_nsec = statx.stx_atime.tv_nsec;
+ buf->st_mtim.tv_sec = statx.stx_mtime.tv_sec;
+ buf->st_mtim.tv_nsec = statx.stx_mtime.tv_nsec;
+ buf->st_ctim.tv_sec = statx.stx_ctime.tv_sec;
+ buf->st_ctim.tv_nsec = statx.stx_ctime.tv_nsec;
+
+ return 0;
+}
+
+static __attribute__((unused))
+int stat(const char *path, struct stat *buf)
+{
+ return fstatat(AT_FDCWD, path, buf, 0);
+}
+
+static __attribute__((unused))
+int fstat(int fildes, struct stat *buf)
+{
+ return fstatat(fildes, "", buf, AT_EMPTY_PATH);
+}
+
+static __attribute__((unused))
+int lstat(const char *path, struct stat *buf)
+{
+ return fstatat(AT_FDCWD, path, buf, AT_SYMLINK_NOFOLLOW);
+}
+
+#endif /* _NOLIBC_SYS_STAT_H */
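
stat(), fstat() and lstat() are thin layers over the statx()-based fstatat()
above, so the usual idioms keep working. Illustrative sketch; S_ISDIR() is
assumed to be provided by <linux/stat.h> pulled in through types.h:

    #include <sys/stat.h>

    static int is_directory(const char *path)
    {
            struct stat st;

            if (stat(path, &st) < 0)
                    return 0;
            return S_ISDIR(st.st_mode);
    }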
diff --git a/tools/include/nolibc/sys/syscall.h b/tools/include/nolibc/sys/syscall.h
new file mode 100644
index 000000000000..4bf97f1386a0
--- /dev/null
+++ b/tools/include/nolibc/sys/syscall.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * syscall() definition for NOLIBC
+ * Copyright (C) 2024 Thomas Weißschuh <linux@weissschuh.net>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_SYSCALL_H
+#define _NOLIBC_SYS_SYSCALL_H
+
+#define __syscall_narg(_0, _1, _2, _3, _4, _5, _6, N, ...) N
+#define _syscall_narg(...) __syscall_narg(__VA_ARGS__, 6, 5, 4, 3, 2, 1, 0)
+#define _syscall(N, ...) __sysret(my_syscall##N(__VA_ARGS__))
+#define _syscall_n(N, ...) _syscall(N, __VA_ARGS__)
+#define syscall(...) _syscall_n(_syscall_narg(__VA_ARGS__), ##__VA_ARGS__)
+
+#endif /* _NOLIBC_SYS_SYSCALL_H */
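
These macros count the arguments that follow the syscall number and dispatch
to the matching my_syscallN() helper. A sketch of what an invocation expands
to (helper name invented for the example):

    #include <sys/syscall.h>
    #include <unistd.h>

    static void hello(void)
    {
            static const char msg[] = "hello from nolibc\n";

            /* three arguments follow the number, so _syscall_narg() yields 3
             * and this becomes __sysret(my_syscall3(__NR_write, 1, msg, len))
             */
            syscall(__NR_write, STDOUT_FILENO, msg, sizeof(msg) - 1);
    }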
diff --git a/tools/include/nolibc/sys/sysmacros.h b/tools/include/nolibc/sys/sysmacros.h
new file mode 100644
index 000000000000..37c33f030f02
--- /dev/null
+++ b/tools/include/nolibc/sys/sysmacros.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Sysmacro definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_SYSMACROS_H
+#define _NOLIBC_SYS_SYSMACROS_H
+
+#include "../std.h"
+
+/* WARNING: these only deal with the first 4096 majors and the first 256 minors */
+#define makedev(major, minor) ((dev_t)((((major) & 0xfff) << 8) | ((minor) & 0xff)))
+#define major(dev) ((unsigned int)(((dev) >> 8) & 0xfff))
+#define minor(dev) ((unsigned int)((dev) & 0xff))
+
+#endif /* _NOLIBC_SYS_SYSMACROS_H */
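
The packed layout keeps the major number in bits 8..19 and the minor in bits
0..7, hence the warning above. A small illustration:

    #include <sys/sysmacros.h>

    static void dev_example(void)
    {
            dev_t dev = makedev(1, 3);      /* /dev/null is char device 1:3 */
            unsigned int maj = major(dev);  /* == 1 */
            unsigned int min = minor(dev);  /* == 3 */

            (void)maj;
            (void)min;
    }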
diff --git a/tools/include/nolibc/sys/time.h b/tools/include/nolibc/sys/time.h
new file mode 100644
index 000000000000..33782a19aae9
--- /dev/null
+++ b/tools/include/nolibc/sys/time.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * time definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_TIME_H
+#define _NOLIBC_SYS_TIME_H
+
+#include "../arch.h"
+#include "../sys.h"
+
+static int sys_clock_gettime(clockid_t clockid, struct timespec *tp);
+
+/*
+ * int gettimeofday(struct timeval *tv, struct timezone *tz);
+ */
+
+static __attribute__((unused))
+int sys_gettimeofday(struct timeval *tv, struct timezone *tz)
+{
+#ifdef __NR_gettimeofday
+ return my_syscall2(__NR_gettimeofday, tv, tz);
+#else
+ (void) tz; /* Non-NULL tz is undefined behaviour */
+
+ struct timespec tp;
+ int ret;
+
+ ret = sys_clock_gettime(CLOCK_REALTIME, &tp);
+ if (!ret && tv) {
+ tv->tv_sec = tp.tv_sec;
+ tv->tv_usec = tp.tv_nsec / 1000;
+ }
+
+ return ret;
+#endif
+}
+
+static __attribute__((unused))
+int gettimeofday(struct timeval *tv, struct timezone *tz)
+{
+ return __sysret(sys_gettimeofday(tv, tz));
+}
+
+#endif /* _NOLIBC_SYS_TIME_H */
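
Note that gettimeofday() silently falls back to clock_gettime(CLOCK_REALTIME)
on architectures without __NR_gettimeofday. A millisecond-timestamp sketch
(helper name invented for the example):

    #include <sys/time.h>

    static uint64_t now_ms(void)
    {
            struct timeval tv;

            gettimeofday(&tv, NULL);        /* tz must stay NULL */
            return (uint64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    }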
diff --git a/tools/include/nolibc/sys/timerfd.h b/tools/include/nolibc/sys/timerfd.h
new file mode 100644
index 000000000000..5dd61030c991
--- /dev/null
+++ b/tools/include/nolibc/sys/timerfd.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * timerfd definitions for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <thomas.weissschuh@linutronix.de>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_TIMERFD_H
+#define _NOLIBC_SYS_TIMERFD_H
+
+#include "../sys.h"
+#include "../time.h"
+
+#include <linux/timerfd.h>
+
+
+static __attribute__((unused))
+int sys_timerfd_create(int clockid, int flags)
+{
+ return my_syscall2(__NR_timerfd_create, clockid, flags);
+}
+
+static __attribute__((unused))
+int timerfd_create(int clockid, int flags)
+{
+ return __sysret(sys_timerfd_create(clockid, flags));
+}
+
+
+static __attribute__((unused))
+int sys_timerfd_gettime(int fd, struct itimerspec *curr_value)
+{
+#if defined(__NR_timerfd_gettime)
+ return my_syscall2(__NR_timerfd_gettime, fd, curr_value);
+#else
+ struct __kernel_itimerspec kcurr_value;
+ int ret;
+
+ ret = my_syscall2(__NR_timerfd_gettime64, fd, &kcurr_value);
+ __nolibc_timespec_kernel_to_user(&kcurr_value.it_interval, &curr_value->it_interval);
+ __nolibc_timespec_kernel_to_user(&kcurr_value.it_value, &curr_value->it_value);
+ return ret;
+#endif
+}
+
+static __attribute__((unused))
+int timerfd_gettime(int fd, struct itimerspec *curr_value)
+{
+ return __sysret(sys_timerfd_gettime(fd, curr_value));
+}
+
+
+static __attribute__((unused))
+int sys_timerfd_settime(int fd, int flags,
+ const struct itimerspec *new_value, struct itimerspec *old_value)
+{
+#if defined(__NR_timerfd_settime)
+ return my_syscall4(__NR_timerfd_settime, fd, flags, new_value, old_value);
+#else
+ struct __kernel_itimerspec knew_value, kold_value;
+ int ret;
+
+ __nolibc_timespec_user_to_kernel(&new_value->it_value, &knew_value.it_value);
+ __nolibc_timespec_user_to_kernel(&new_value->it_interval, &knew_value.it_interval);
+ ret = my_syscall4(__NR_timerfd_settime64, fd, flags, &knew_value, &kold_value);
+ if (old_value) {
+ __nolibc_timespec_kernel_to_user(&kold_value.it_interval, &old_value->it_interval);
+ __nolibc_timespec_kernel_to_user(&kold_value.it_value, &old_value->it_value);
+ }
+ return ret;
+#endif
+}
+
+static __attribute__((unused))
+int timerfd_settime(int fd, int flags,
+ const struct itimerspec *new_value, struct itimerspec *old_value)
+{
+ return __sysret(sys_timerfd_settime(fd, flags, new_value, old_value));
+}
+
+#endif /* _NOLIBC_SYS_TIMERFD_H */
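
On architectures that only have the *_time64 syscalls the wrappers convert
to/from struct __kernel_itimerspec, so callers keep using plain struct
itimerspec. A 1 Hz periodic timer, as a sketch with simplified error paths:

    #include <sys/timerfd.h>

    static int make_tick_fd(void)
    {
            struct itimerspec its = {
                    .it_value    = { .tv_sec = 1 },  /* first expiry after 1s */
                    .it_interval = { .tv_sec = 1 },  /* then every second */
            };
            int fd = timerfd_create(CLOCK_MONOTONIC, 0);

            if (fd < 0)
                    return -1;
            if (timerfd_settime(fd, 0, &its, NULL) < 0)
                    return -1;
            return fd;      /* read() returns an 8-byte expiration count */
    }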
diff --git a/tools/include/nolibc/sys/types.h b/tools/include/nolibc/sys/types.h
new file mode 100644
index 000000000000..8a264a13275c
--- /dev/null
+++ b/tools/include/nolibc/sys/types.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * sys/types.h shim for NOLIBC
+ * Copyright (C) 2025 Thomas Weißschuh <thomas.weissschuh@linutronix.de>
+ */
+
+#include "../types.h"
diff --git a/tools/include/nolibc/sys/utsname.h b/tools/include/nolibc/sys/utsname.h
new file mode 100644
index 000000000000..01023e1bb439
--- /dev/null
+++ b/tools/include/nolibc/sys/utsname.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Utsname definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_UTSNAME_H
+#define _NOLIBC_SYS_UTSNAME_H
+
+#include "../sys.h"
+
+#include <linux/utsname.h>
+
+/*
+ * int uname(struct utsname *buf);
+ */
+
+struct utsname {
+ char sysname[65];
+ char nodename[65];
+ char release[65];
+ char version[65];
+ char machine[65];
+ char domainname[65];
+};
+
+static __attribute__((unused))
+int sys_uname(struct utsname *buf)
+{
+ return my_syscall1(__NR_uname, buf);
+}
+
+static __attribute__((unused))
+int uname(struct utsname *buf)
+{
+ return __sysret(sys_uname(buf));
+}
+
+#endif /* _NOLIBC_SYS_UTSNAME_H */
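
The fixed 65-byte fields mirror the kernel's __NEW_UTS_LEN + 1 layout expected
by __NR_uname. A sketch printing the running kernel release, assuming nolibc's
own printf() is available via <stdio.h>:

    #include <sys/utsname.h>
    #include <stdio.h>

    static void print_release(void)
    {
            struct utsname u;

            if (uname(&u) == 0)
                    printf("%s\n", u.release);
    }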
diff --git a/tools/include/nolibc/sys/wait.h b/tools/include/nolibc/sys/wait.h
new file mode 100644
index 000000000000..4e66e1f7a03e
--- /dev/null
+++ b/tools/include/nolibc/sys/wait.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * wait definitions for NOLIBC
+ * Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
+ */
+
+/* make sure to include all global symbols */
+#include "../nolibc.h"
+
+#ifndef _NOLIBC_SYS_WAIT_H
+#define _NOLIBC_SYS_WAIT_H
+
+#include "../arch.h"
+#include "../std.h"
+#include "../types.h"
+
+/*
+ * pid_t wait(int *status);
+ * pid_t waitpid(pid_t pid, int *status, int options);
+ * int waitid(idtype_t idtype, id_t id, siginfo_t *infop, int options);
+ */
+
+static __attribute__((unused))
+int sys_waitid(int which, pid_t pid, siginfo_t *infop, int options, struct rusage *rusage)
+{
+ return my_syscall5(__NR_waitid, which, pid, infop, options, rusage);
+}
+
+static __attribute__((unused))
+int waitid(int which, pid_t pid, siginfo_t *infop, int options)
+{
+ return __sysret(sys_waitid(which, pid, infop, options, NULL));
+}
+
+
+static __attribute__((unused))
+pid_t waitpid(pid_t pid, int *status, int options)
+{
+ int idtype, ret;
+ siginfo_t info;
+ pid_t id;
+
+ if (pid == INT_MIN) {
+ SET_ERRNO(ESRCH);
+ return -1;
+ } else if (pid < -1) {
+ idtype = P_PGID;
+ id = -pid;
+ } else if (pid == -1) {
+ idtype = P_ALL;
+ id = 0;
+ } else if (pid == 0) {
+ idtype = P_PGID;
+ id = 0;
+ } else {
+ idtype = P_PID;
+ id = pid;
+ }
+
+ options |= WEXITED;
+
+ ret = waitid(idtype, id, &info, options);
+ if (ret)
+ return -1;
+
+ switch (info.si_code) {
+ case 0:
+ *status = 0;
+ break;
+ case CLD_EXITED:
+ *status = (info.si_status & 0xff) << 8;
+ break;
+ case CLD_KILLED:
+ *status = info.si_status & 0x7f;
+ break;
+ case CLD_DUMPED:
+ *status = (info.si_status & 0x7f) | 0x80;
+ break;
+ case CLD_STOPPED:
+ case CLD_TRAPPED:
+ *status = (info.si_status << 8) + 0x7f;
+ break;
+ case CLD_CONTINUED:
+ *status = 0xffff;
+ break;
+ default:
+ return -1;
+ }
+
+ return info.si_pid;
+}
+
+static __attribute__((unused))
+pid_t wait(int *status)
+{
+ return waitpid(-1, status, 0);
+}
+
+#endif /* _NOLIBC_SYS_WAIT_H */
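
waitpid() is emulated on top of waitid() and re-encodes si_code/si_status into
the traditional wait status word, so the classic macros keep working. Sketch
only; WIFEXITED()/WEXITSTATUS() are assumed to be provided elsewhere in nolibc:

    #include <sys/wait.h>

    /* returns the child's exit code, or -1 on error or abnormal termination */
    static int reap(pid_t pid)
    {
            int status;

            if (waitpid(pid, &status, 0) < 0)
                    return -1;
            return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }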
diff --git a/tools/include/nolibc/time.h b/tools/include/nolibc/time.h
index 84655361b9ad..6c276b8d646a 100644
--- a/tools/include/nolibc/time.h
+++ b/tools/include/nolibc/time.h
@@ -4,6 +4,9 @@
* Copyright (C) 2017-2022 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_TIME_H
#define _NOLIBC_TIME_H
@@ -12,6 +15,137 @@
#include "types.h"
#include "sys.h"
+#include <linux/signal.h>
+#include <linux/time.h>
+
+static __inline__
+void __nolibc_timespec_user_to_kernel(const struct timespec *ts, struct __kernel_timespec *kts)
+{
+ kts->tv_sec = ts->tv_sec;
+ kts->tv_nsec = ts->tv_nsec;
+}
+
+static __inline__
+void __nolibc_timespec_kernel_to_user(const struct __kernel_timespec *kts, struct timespec *ts)
+{
+ ts->tv_sec = kts->tv_sec;
+ ts->tv_nsec = kts->tv_nsec;
+}
+
+/*
+ * int clock_getres(clockid_t clockid, struct timespec *res);
+ * int clock_gettime(clockid_t clockid, struct timespec *tp);
+ * int clock_settime(clockid_t clockid, const struct timespec *tp);
+ * int clock_nanosleep(clockid_t clockid, int flags, const struct timespec *rqtp,
+ * struct timespec *rmtp)
+ */
+
+static __attribute__((unused))
+int sys_clock_getres(clockid_t clockid, struct timespec *res)
+{
+#if defined(__NR_clock_getres)
+ return my_syscall2(__NR_clock_getres, clockid, res);
+#else
+ struct __kernel_timespec kres;
+ int ret;
+
+ ret = my_syscall2(__NR_clock_getres_time64, clockid, &kres);
+ if (res)
+ __nolibc_timespec_kernel_to_user(&kres, res);
+ return ret;
+#endif
+}
+
+static __attribute__((unused))
+int clock_getres(clockid_t clockid, struct timespec *res)
+{
+ return __sysret(sys_clock_getres(clockid, res));
+}
+
+static __attribute__((unused))
+int sys_clock_gettime(clockid_t clockid, struct timespec *tp)
+{
+#if defined(__NR_clock_gettime)
+ return my_syscall2(__NR_clock_gettime, clockid, tp);
+#else
+ struct __kernel_timespec ktp;
+ int ret;
+
+ ret = my_syscall2(__NR_clock_gettime64, clockid, &ktp);
+ if (tp)
+ __nolibc_timespec_kernel_to_user(&ktp, tp);
+ return ret;
+#endif
+}
+
+static __attribute__((unused))
+int clock_gettime(clockid_t clockid, struct timespec *tp)
+{
+ return __sysret(sys_clock_gettime(clockid, tp));
+}
+
+static __attribute__((unused))
+int sys_clock_settime(clockid_t clockid, struct timespec *tp)
+{
+#if defined(__NR_clock_settime)
+ return my_syscall2(__NR_clock_settime, clockid, tp);
+#elif defined(__NR_clock_settime64)
+ struct __kernel_timespec ktp;
+
+ __nolibc_timespec_user_to_kernel(tp, &ktp);
+ return my_syscall2(__NR_clock_settime64, clockid, &ktp);
+#else
+ return __nolibc_enosys(__func__, clockid, tp);
+#endif
+}
+
+static __attribute__((unused))
+int clock_settime(clockid_t clockid, struct timespec *tp)
+{
+ return __sysret(sys_clock_settime(clockid, tp));
+}
+
+static __attribute__((unused))
+int sys_clock_nanosleep(clockid_t clockid, int flags, const struct timespec *rqtp,
+ struct timespec *rmtp)
+{
+#if defined(__NR_clock_nanosleep)
+ return my_syscall4(__NR_clock_nanosleep, clockid, flags, rqtp, rmtp);
+#elif defined(__NR_clock_nanosleep_time64)
+ struct __kernel_timespec krqtp, krmtp;
+ int ret;
+
+ __nolibc_timespec_user_to_kernel(rqtp, &krqtp);
+ ret = my_syscall4(__NR_clock_nanosleep_time64, clockid, flags, &krqtp, &krmtp);
+ if (rmtp)
+ __nolibc_timespec_kernel_to_user(&krmtp, rmtp);
+ return ret;
+#else
+ return __nolibc_enosys(__func__, clockid, flags, rqtp, rmtp);
+#endif
+}
+
+static __attribute__((unused))
+int clock_nanosleep(clockid_t clockid, int flags, const struct timespec *rqtp,
+ struct timespec *rmtp)
+{
+ /* Directly return a positive error number */
+ return -sys_clock_nanosleep(clockid, flags, rqtp, rmtp);
+}
+
+static __inline__
+double difftime(time_t time1, time_t time2)
+{
+ return time1 - time2;
+}
+
+static __inline__
+int nanosleep(const struct timespec *rqtp, struct timespec *rmtp)
+{
+ return __sysret(sys_clock_nanosleep(CLOCK_REALTIME, 0, rqtp, rmtp));
+}
+
+
static __attribute__((unused))
time_t time(time_t *tptr)
{
@@ -25,7 +159,89 @@ time_t time(time_t *tptr)
return tv.tv_sec;
}
-/* make sure to include all global symbols */
-#include "nolibc.h"
+
+/*
+ * int timer_create(clockid_t clockid, struct sigevent *evp, timer_t *timerid);
+ * int timer_gettime(timer_t timerid, struct itimerspec *curr_value);
+ * int timer_settime(timer_t timerid, int flags, const struct itimerspec *new_value, struct itimerspec *old_value);
+ */
+
+static __attribute__((unused))
+int sys_timer_create(clockid_t clockid, struct sigevent *evp, timer_t *timerid)
+{
+ return my_syscall3(__NR_timer_create, clockid, evp, timerid);
+}
+
+static __attribute__((unused))
+int timer_create(clockid_t clockid, struct sigevent *evp, timer_t *timerid)
+{
+ return __sysret(sys_timer_create(clockid, evp, timerid));
+}
+
+static __attribute__((unused))
+int sys_timer_delete(timer_t timerid)
+{
+ return my_syscall1(__NR_timer_delete, timerid);
+}
+
+static __attribute__((unused))
+int timer_delete(timer_t timerid)
+{
+ return __sysret(sys_timer_delete(timerid));
+}
+
+static __attribute__((unused))
+int sys_timer_gettime(timer_t timerid, struct itimerspec *curr_value)
+{
+#if defined(__NR_timer_gettime)
+ return my_syscall2(__NR_timer_gettime, timerid, curr_value);
+#elif defined(__NR_timer_gettime64)
+ struct __kernel_itimerspec kcurr_value;
+ int ret;
+
+ ret = my_syscall2(__NR_timer_gettime64, timerid, &kcurr_value);
+ __nolibc_timespec_kernel_to_user(&kcurr_value.it_interval, &curr_value->it_interval);
+ __nolibc_timespec_kernel_to_user(&kcurr_value.it_value, &curr_value->it_value);
+ return ret;
+#else
+ return __nolibc_enosys(__func__, timerid, curr_value);
+#endif
+}
+
+static __attribute__((unused))
+int timer_gettime(timer_t timerid, struct itimerspec *curr_value)
+{
+ return __sysret(sys_timer_gettime(timerid, curr_value));
+}
+
+static __attribute__((unused))
+int sys_timer_settime(timer_t timerid, int flags,
+ const struct itimerspec *new_value, struct itimerspec *old_value)
+{
+#if defined(__NR_timer_settime)
+ return my_syscall4(__NR_timer_settime, timerid, flags, new_value, old_value);
+#elif defined(__NR_timer_settime64)
+ struct __kernel_itimerspec knew_value, kold_value;
+ int ret;
+
+ __nolibc_timespec_user_to_kernel(&new_value->it_value, &knew_value.it_value);
+ __nolibc_timespec_user_to_kernel(&new_value->it_interval, &knew_value.it_interval);
+ ret = my_syscall4(__NR_timer_settime64, timerid, flags, &knew_value, &kold_value);
+ if (old_value) {
+ __nolibc_timespec_kernel_to_user(&kold_value.it_interval, &old_value->it_interval);
+ __nolibc_timespec_kernel_to_user(&kold_value.it_value, &old_value->it_value);
+ }
+ return ret;
+#else
+ return __nolibc_enosys(__func__, timerid, flags, new_value, old_value);
+#endif
+}
+
+static __attribute__((unused))
+int timer_settime(timer_t timerid, int flags,
+ const struct itimerspec *new_value, struct itimerspec *old_value)
+{
+ return __sysret(sys_timer_settime(timerid, flags, new_value, old_value));
+}
#endif /* _NOLIBC_TIME_H */
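
With evp == NULL, timer_create() uses the default sigevent (SIGEV_SIGNAL with
SIGALRM), so a minimal one-shot timer needs no sigevent setup at all. Sketch
with an invented helper name:

    #include <time.h>

    static int arm_alarm_in(timer_t *id, int seconds)
    {
            struct itimerspec its = {
                    .it_value = { .tv_sec = seconds },  /* one-shot: no interval */
            };

            if (timer_create(CLOCK_MONOTONIC, NULL, id) < 0)
                    return -1;
            return timer_settime(*id, 0, &its, NULL);
    }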
diff --git a/tools/include/nolibc/types.h b/tools/include/nolibc/types.h
index b26a5d0c417c..16c6e9ec9451 100644
--- a/tools/include/nolibc/types.h
+++ b/tools/include/nolibc/types.h
@@ -4,16 +4,17 @@
* Copyright (C) 2017-2021 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_TYPES_H
#define _NOLIBC_TYPES_H
#include "std.h"
#include <linux/mman.h>
-#include <linux/reboot.h> /* for LINUX_REBOOT_* */
#include <linux/stat.h>
#include <linux/time.h>
#include <linux/wait.h>
-#include <linux/resource.h>
/* Only the generic macros and types may be defined here. The arch-specific
@@ -127,7 +128,7 @@ typedef struct {
int __fd = (fd); \
if (__fd >= 0) \
__set->fds[__fd / FD_SETIDXMASK] &= \
- ~(1U << (__fd & FX_SETBITMASK)); \
+ ~(1U << (__fd & FD_SETBITMASK)); \
} while (0)
#define FD_SET(fd, set) do { \
@@ -144,7 +145,7 @@ typedef struct {
int __r = 0; \
if (__fd >= 0) \
__r = !!(__set->fds[__fd / FD_SETIDXMASK] & \
-1U << (__fd & FD_SET_BITMASK)); \
+1U << (__fd & FD_SETBITMASK)); \
__r; \
})
@@ -156,20 +157,6 @@ typedef struct {
__set->fds[__idx] = 0; \
} while (0)
-/* for poll() */
-#define POLLIN 0x0001
-#define POLLPRI 0x0002
-#define POLLOUT 0x0004
-#define POLLERR 0x0008
-#define POLLHUP 0x0010
-#define POLLNVAL 0x0020
-
-struct pollfd {
- int fd;
- short int events;
- short int revents;
-};
-
/* for getdents64() */
struct linux_dirent64 {
uint64_t d_ino;
@@ -198,14 +185,8 @@ struct stat {
union { time_t st_ctime; struct timespec st_ctim; }; /* time of last status change */
};
-/* WARNING, it only deals with the 4096 first majors and 256 first minors */
-#define makedev(major, minor) ((dev_t)((((major) & 0xfff) << 8) | ((minor) & 0xff)))
-#define major(dev) ((unsigned int)(((dev) >> 8) & 0xfff))
-#define minor(dev) ((unsigned int)(((dev) & 0xff))
-
-#ifndef offsetof
-#define offsetof(TYPE, FIELD) ((size_t) &((TYPE *)0)->FIELD)
-#endif
+typedef __kernel_clockid_t clockid_t;
+typedef int timer_t;
#ifndef container_of
#define container_of(PTR, TYPE, FIELD) ({ \
@@ -214,7 +195,4 @@ struct stat {
})
#endif
-/* make sure to include all global symbols */
-#include "nolibc.h"
-
#endif /* _NOLIBC_TYPES_H */
diff --git a/tools/include/nolibc/unistd.h b/tools/include/nolibc/unistd.h
index e38f3660c051..7405fa2b89ba 100644
--- a/tools/include/nolibc/unistd.h
+++ b/tools/include/nolibc/unistd.h
@@ -4,6 +4,9 @@
* Copyright (C) 2017-2022 Willy Tarreau <w@1wt.eu>
*/
+/* make sure to include all global symbols */
+#include "nolibc.h"
+
#ifndef _NOLIBC_UNISTD_H
#define _NOLIBC_UNISTD_H
@@ -17,6 +20,34 @@
#define STDOUT_FILENO 1
#define STDERR_FILENO 2
+#define F_OK 0
+#define X_OK 1
+#define W_OK 2
+#define R_OK 4
+
+/*
+ * int access(const char *path, int amode);
+ * int faccessat(int fd, const char *path, int amode, int flag);
+ */
+
+static __attribute__((unused))
+int sys_faccessat(int fd, const char *path, int amode, int flag)
+{
+ return my_syscall4(__NR_faccessat, fd, path, amode, flag);
+}
+
+static __attribute__((unused))
+int faccessat(int fd, const char *path, int amode, int flag)
+{
+ return __sysret(sys_faccessat(fd, path, amode, flag));
+}
+
+static __attribute__((unused))
+int access(const char *path, int amode)
+{
+ return faccessat(AT_FDCWD, path, amode, 0);
+}
+
static __attribute__((unused))
int msleep(unsigned int msecs)
@@ -56,13 +87,4 @@ int tcsetpgrp(int fd, pid_t pid)
return ioctl(fd, TIOCSPGRP, &pid);
}
-#define __syscall_narg(_0, _1, _2, _3, _4, _5, _6, N, ...) N
-#define _syscall_narg(...) __syscall_narg(__VA_ARGS__, 6, 5, 4, 3, 2, 1, 0)
-#define _syscall(N, ...) __sysret(my_syscall##N(__VA_ARGS__))
-#define _syscall_n(N, ...) _syscall(N, __VA_ARGS__)
-#define syscall(...) _syscall_n(_syscall_narg(__VA_ARGS__), ##__VA_ARGS__)
-
-/* make sure to include all global symbols */
-#include "nolibc.h"
-
#endif /* _NOLIBC_UNISTD_H */
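
access() is implemented via faccessat(AT_FDCWD, ...), mirroring the kernel's
own mapping. A trivial executability check, for illustration only:

    #include <unistd.h>

    static int have_helper(const char *path)
    {
            return access(path, X_OK) == 0;
    }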
diff --git a/tools/include/uapi/README b/tools/include/uapi/README
new file mode 100644
index 000000000000..7147b1b2cb28
--- /dev/null
+++ b/tools/include/uapi/README
@@ -0,0 +1,73 @@
+Why do we want a copy of kernel headers in tools?
+=================================================
+
+There used to be no copies, with tools/ code using kernel headers
+directly. From time to time tools/perf/ broke due to legitimate kernel
+hacking. At some point Linus complained about such direct usage. Then we
+adopted the current model.
+
+The way these headers are used in perf is not restricted to just
+including them to compile something.
+
+They are sometimes used in scripts that convert defines into string
+tables, etc., so a change may break one of these scripts, or new MSRs
+may use a different #define pattern, etc.
+
+E.g.:
+
+ $ ls -1 tools/perf/trace/beauty/*.sh | head -5
+ tools/perf/trace/beauty/arch_errno_names.sh
+ tools/perf/trace/beauty/drm_ioctl.sh
+ tools/perf/trace/beauty/fadvise.sh
+ tools/perf/trace/beauty/fsconfig.sh
+ tools/perf/trace/beauty/fsmount.sh
+ $
+ $ tools/perf/trace/beauty/fadvise.sh
+ static const char *fadvise_advices[] = {
+ [0] = "NORMAL",
+ [1] = "RANDOM",
+ [2] = "SEQUENTIAL",
+ [3] = "WILLNEED",
+ [4] = "DONTNEED",
+ [5] = "NOREUSE",
+ };
+ $
+
+The tools/perf/check-headers.sh script, part of the tools/ build
+process, points out changes in the original files.
+
+So it's important not to touch the copies in tools/ when changing the
+original kernel headers; the copies will be updated later, once
+check-headers.sh informs the perf tools hackers about the change.
+
+Another explanation from Ingo Molnar:
+It's better than all the alternatives we tried so far:
+
+ - Symbolic links and direct #includes: this was the original approach but
+ was pushed back on from the kernel side, when tooling modified the
+ headers and broke them accidentally for kernel builds.
+
+ - Duplicate self-defined ABI headers like glibc: double the maintenance
+ burden, double the chance for mistakes, plus there's no tech-driven
+ notification mechanism to look at new kernel side changes.
+
+What we are doing now is a third option:
+
+ - A software-enforced copy-on-write mechanism of kernel headers to
+ tooling, driven by non-fatal warnings on the tooling side build when
+ kernel headers get modified:
+
+ Warning: Kernel ABI header differences:
+ diff -u tools/include/uapi/drm/i915_drm.h include/uapi/drm/i915_drm.h
+ diff -u tools/include/uapi/linux/fs.h include/uapi/linux/fs.h
+ diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
+ ...
+
+ The tooling policy is to always pick up the kernel side headers as-is,
+ and integrate them into the tooling build. The warnings above serve as a
+ notification to tooling maintainers that there are changes on the kernel
+ side.
+
+We've been using this for many years now, and it might seem hacky, but
+it works surprisingly well.
+
diff --git a/tools/include/uapi/asm-generic/bitsperlong.h b/tools/include/uapi/asm-generic/bitsperlong.h
index 352cb81947b8..fadb3f857f28 100644
--- a/tools/include/uapi/asm-generic/bitsperlong.h
+++ b/tools/include/uapi/asm-generic/bitsperlong.h
@@ -24,4 +24,8 @@
#endif
#endif
+#ifndef __BITS_PER_LONG_LONG
+#define __BITS_PER_LONG_LONG 64
+#endif
+
#endif /* _UAPI__ASM_GENERIC_BITS_PER_LONG */
diff --git a/tools/include/uapi/asm-generic/fcntl.h b/tools/include/uapi/asm-generic/fcntl.h
deleted file mode 100644
index 1c7a0f6632c0..000000000000
--- a/tools/include/uapi/asm-generic/fcntl.h
+++ /dev/null
@@ -1,221 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef _ASM_GENERIC_FCNTL_H
-#define _ASM_GENERIC_FCNTL_H
-
-#include <linux/types.h>
-
-/*
- * FMODE_EXEC is 0x20
- * FMODE_NONOTIFY is 0x4000000
- * These cannot be used by userspace O_* until internal and external open
- * flags are split.
- * -Eric Paris
- */
-
-/*
- * When introducing new O_* bits, please check its uniqueness in fcntl_init().
- */
-
-#define O_ACCMODE 00000003
-#define O_RDONLY 00000000
-#define O_WRONLY 00000001
-#define O_RDWR 00000002
-#ifndef O_CREAT
-#define O_CREAT 00000100 /* not fcntl */
-#endif
-#ifndef O_EXCL
-#define O_EXCL 00000200 /* not fcntl */
-#endif
-#ifndef O_NOCTTY
-#define O_NOCTTY 00000400 /* not fcntl */
-#endif
-#ifndef O_TRUNC
-#define O_TRUNC 00001000 /* not fcntl */
-#endif
-#ifndef O_APPEND
-#define O_APPEND 00002000
-#endif
-#ifndef O_NONBLOCK
-#define O_NONBLOCK 00004000
-#endif
-#ifndef O_DSYNC
-#define O_DSYNC 00010000 /* used to be O_SYNC, see below */
-#endif
-#ifndef FASYNC
-#define FASYNC 00020000 /* fcntl, for BSD compatibility */
-#endif
-#ifndef O_DIRECT
-#define O_DIRECT 00040000 /* direct disk access hint */
-#endif
-#ifndef O_LARGEFILE
-#define O_LARGEFILE 00100000
-#endif
-#ifndef O_DIRECTORY
-#define O_DIRECTORY 00200000 /* must be a directory */
-#endif
-#ifndef O_NOFOLLOW
-#define O_NOFOLLOW 00400000 /* don't follow links */
-#endif
-#ifndef O_NOATIME
-#define O_NOATIME 01000000
-#endif
-#ifndef O_CLOEXEC
-#define O_CLOEXEC 02000000 /* set close_on_exec */
-#endif
-
-/*
- * Before Linux 2.6.33 only O_DSYNC semantics were implemented, but using
- * the O_SYNC flag. We continue to use the existing numerical value
- * for O_DSYNC semantics now, but using the correct symbolic name for it.
- * This new value is used to request true Posix O_SYNC semantics. It is
- * defined in this strange way to make sure applications compiled against
- * new headers get at least O_DSYNC semantics on older kernels.
- *
- * This has the nice side-effect that we can simply test for O_DSYNC
- * wherever we do not care if O_DSYNC or O_SYNC is used.
- *
- * Note: __O_SYNC must never be used directly.
- */
-#ifndef O_SYNC
-#define __O_SYNC 04000000
-#define O_SYNC (__O_SYNC|O_DSYNC)
-#endif
-
-#ifndef O_PATH
-#define O_PATH 010000000
-#endif
-
-#ifndef __O_TMPFILE
-#define __O_TMPFILE 020000000
-#endif
-
-/* a horrid kludge trying to make sure that this will fail on old kernels */
-#define O_TMPFILE (__O_TMPFILE | O_DIRECTORY)
-
-#ifndef O_NDELAY
-#define O_NDELAY O_NONBLOCK
-#endif
-
-#define F_DUPFD 0 /* dup */
-#define F_GETFD 1 /* get close_on_exec */
-#define F_SETFD 2 /* set/clear close_on_exec */
-#define F_GETFL 3 /* get file->f_flags */
-#define F_SETFL 4 /* set file->f_flags */
-#ifndef F_GETLK
-#define F_GETLK 5
-#define F_SETLK 6
-#define F_SETLKW 7
-#endif
-#ifndef F_SETOWN
-#define F_SETOWN 8 /* for sockets. */
-#define F_GETOWN 9 /* for sockets. */
-#endif
-#ifndef F_SETSIG
-#define F_SETSIG 10 /* for sockets. */
-#define F_GETSIG 11 /* for sockets. */
-#endif
-
-#if __BITS_PER_LONG == 32 || defined(__KERNEL__)
-#ifndef F_GETLK64
-#define F_GETLK64 12 /* using 'struct flock64' */
-#define F_SETLK64 13
-#define F_SETLKW64 14
-#endif
-#endif /* __BITS_PER_LONG == 32 || defined(__KERNEL__) */
-
-#ifndef F_SETOWN_EX
-#define F_SETOWN_EX 15
-#define F_GETOWN_EX 16
-#endif
-
-#ifndef F_GETOWNER_UIDS
-#define F_GETOWNER_UIDS 17
-#endif
-
-/*
- * Open File Description Locks
- *
- * Usually record locks held by a process are released on *any* close and are
- * not inherited across a fork().
- *
- * These cmd values will set locks that conflict with process-associated
- * record locks, but are "owned" by the open file description, not the
- * process. This means that they are inherited across fork() like BSD (flock)
- * locks, and they are only released automatically when the last reference to
- * the open file against which they were acquired is put.
- */
-#define F_OFD_GETLK 36
-#define F_OFD_SETLK 37
-#define F_OFD_SETLKW 38
-
-#define F_OWNER_TID 0
-#define F_OWNER_PID 1
-#define F_OWNER_PGRP 2
-
-struct f_owner_ex {
- int type;
- __kernel_pid_t pid;
-};
-
-/* for F_[GET|SET]FL */
-#define FD_CLOEXEC 1 /* actually anything with low bit set goes */
-
-/* for posix fcntl() and lockf() */
-#ifndef F_RDLCK
-#define F_RDLCK 0
-#define F_WRLCK 1
-#define F_UNLCK 2
-#endif
-
-/* for old implementation of bsd flock () */
-#ifndef F_EXLCK
-#define F_EXLCK 4 /* or 3 */
-#define F_SHLCK 8 /* or 4 */
-#endif
-
-/* operations for bsd flock(), also used by the kernel implementation */
-#define LOCK_SH 1 /* shared lock */
-#define LOCK_EX 2 /* exclusive lock */
-#define LOCK_NB 4 /* or'd with one of the above to prevent
- blocking */
-#define LOCK_UN 8 /* remove lock */
-
-/*
- * LOCK_MAND support has been removed from the kernel. We leave the symbols
- * here to not break legacy builds, but these should not be used in new code.
- */
-#define LOCK_MAND 32 /* This is a mandatory flock ... */
-#define LOCK_READ 64 /* which allows concurrent read operations */
-#define LOCK_WRITE 128 /* which allows concurrent write operations */
-#define LOCK_RW 192 /* which allows concurrent read & write ops */
-
-#define F_LINUX_SPECIFIC_BASE 1024
-
-#ifndef HAVE_ARCH_STRUCT_FLOCK
-struct flock {
- short l_type;
- short l_whence;
- __kernel_off_t l_start;
- __kernel_off_t l_len;
- __kernel_pid_t l_pid;
-#ifdef __ARCH_FLOCK_EXTRA_SYSID
- __ARCH_FLOCK_EXTRA_SYSID
-#endif
-#ifdef __ARCH_FLOCK_PAD
- __ARCH_FLOCK_PAD
-#endif
-};
-
-struct flock64 {
- short l_type;
- short l_whence;
- __kernel_loff_t l_start;
- __kernel_loff_t l_len;
- __kernel_pid_t l_pid;
-#ifdef __ARCH_FLOCK64_PAD
- __ARCH_FLOCK64_PAD
-#endif
-};
-#endif /* HAVE_ARCH_STRUCT_FLOCK */
-
-#endif /* _ASM_GENERIC_FCNTL_H */
diff --git a/tools/include/uapi/asm-generic/mman-common.h b/tools/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..ef1c27fa3c57 100644
--- a/tools/include/uapi/asm-generic/mman-common.h
+++ b/tools/include/uapi/asm-generic/mman-common.h
@@ -79,9 +79,13 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_INSTALL 102 /* fatal signal on access to range */
+#define MADV_GUARD_REMOVE 103 /* unguard range */
+
/* compatibility flags */
#define MAP_FILE 0
+#define PKEY_UNRESTRICTED 0x0
#define PKEY_DISABLE_ACCESS 0x1
#define PKEY_DISABLE_WRITE 0x2
#define PKEY_ACCESS_MASK (PKEY_DISABLE_ACCESS |\
diff --git a/tools/include/uapi/asm-generic/mman.h b/tools/include/uapi/asm-generic/mman.h
index 406f7718f9ad..51d2556af54a 100644
--- a/tools/include/uapi/asm-generic/mman.h
+++ b/tools/include/uapi/asm-generic/mman.h
@@ -19,4 +19,8 @@
#define MCL_FUTURE 2 /* lock all future mappings */
#define MCL_ONFAULT 4 /* lock all pages that are faulted in */
+#define SHADOW_STACK_SET_TOKEN (1ULL << 0) /* Set up a restore token in the shadow stack */
+#define SHADOW_STACK_SET_MARKER (1ULL << 1) /* Set up a top of stack marker in the shadow stack */
+
+
#endif /* __ASM_GENERIC_MMAN_H */
diff --git a/tools/include/uapi/asm-generic/socket.h b/tools/include/uapi/asm-generic/socket.h
index 54d9c8bf7c55..f333a0ac4ee4 100644
--- a/tools/include/uapi/asm-generic/socket.h
+++ b/tools/include/uapi/asm-generic/socket.h
@@ -119,11 +119,34 @@
#define SO_DETACH_REUSEPORT_BPF 68
+#define SO_PREFER_BUSY_POLL 69
+#define SO_BUSY_POLL_BUDGET 70
+
+#define SO_NETNS_COOKIE 71
+
+#define SO_BUF_LOCK 72
+
+#define SO_RESERVE_MEM 73
+
+#define SO_TXREHASH 74
+
#define SO_RCVMARK 75
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77
+#define SO_DEVMEM_LINEAR 78
+#define SCM_DEVMEM_LINEAR SO_DEVMEM_LINEAR
+#define SO_DEVMEM_DMABUF 79
+#define SCM_DEVMEM_DMABUF SO_DEVMEM_DMABUF
+#define SO_DEVMEM_DONTNEED 80
+
+#define SCM_TS_OPT_ID 81
+
+#define SO_RCVPRIORITY 82
+
+#define SO_PASSRIGHTS 83
+
#if !defined(__KERNEL__)
#if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__))
diff --git a/tools/include/uapi/asm-generic/unistd.h b/tools/include/uapi/asm-generic/unistd.h
index 75f00965ab15..04e0077fb4c9 100644
--- a/tools/include/uapi/asm-generic/unistd.h
+++ b/tools/include/uapi/asm-generic/unistd.h
@@ -737,7 +737,7 @@ __SC_COMP(__NR_pselect6_time64, sys_pselect6, compat_sys_pselect6_time64)
#define __NR_ppoll_time64 414
__SC_COMP(__NR_ppoll_time64, sys_ppoll, compat_sys_ppoll_time64)
#define __NR_io_pgetevents_time64 416
-__SYSCALL(__NR_io_pgetevents_time64, sys_io_pgetevents)
+__SC_COMP(__NR_io_pgetevents_time64, sys_io_pgetevents, compat_sys_io_pgetevents_time64)
#define __NR_recvmmsg_time64 417
__SC_COMP(__NR_recvmmsg_time64, sys_recvmmsg, compat_sys_recvmmsg_time64)
#define __NR_mq_timedsend_time64 418
@@ -776,12 +776,8 @@ __SYSCALL(__NR_fsmount, sys_fsmount)
__SYSCALL(__NR_fspick, sys_fspick)
#define __NR_pidfd_open 434
__SYSCALL(__NR_pidfd_open, sys_pidfd_open)
-
-#ifdef __ARCH_WANT_SYS_CLONE3
#define __NR_clone3 435
__SYSCALL(__NR_clone3, sys_clone3)
-#endif
-
#define __NR_close_range 436
__SYSCALL(__NR_close_range, sys_close_range)
#define __NR_openat2 437
@@ -842,8 +838,28 @@ __SYSCALL(__NR_lsm_set_self_attr, sys_lsm_set_self_attr)
#define __NR_lsm_list_modules 461
__SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
+#define __NR_mseal 462
+__SYSCALL(__NR_mseal, sys_mseal)
+
+#define __NR_setxattrat 463
+__SYSCALL(__NR_setxattrat, sys_setxattrat)
+#define __NR_getxattrat 464
+__SYSCALL(__NR_getxattrat, sys_getxattrat)
+#define __NR_listxattrat 465
+__SYSCALL(__NR_listxattrat, sys_listxattrat)
+#define __NR_removexattrat 466
+__SYSCALL(__NR_removexattrat, sys_removexattrat)
+#define __NR_open_tree_attr 467
+__SYSCALL(__NR_open_tree_attr, sys_open_tree_attr)
+
+/* fs/inode.c */
+#define __NR_file_getattr 468
+__SYSCALL(__NR_file_getattr, sys_file_getattr)
+#define __NR_file_setattr 469
+__SYSCALL(__NR_file_setattr, sys_file_setattr)
+
#undef __NR_syscalls
-#define __NR_syscalls 462
+#define __NR_syscalls 470
/*
* 32 bit systems traditionally used different
diff --git a/tools/include/uapi/drm/drm.h b/tools/include/uapi/drm/drm.h
index 16122819edfe..e63a71d3c607 100644
--- a/tools/include/uapi/drm/drm.h
+++ b/tools/include/uapi/drm/drm.h
@@ -905,13 +905,17 @@ struct drm_syncobj_destroy {
};
#define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE (1 << 1)
#define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE (1 << 0)
+#define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE (1 << 1)
struct drm_syncobj_handle {
__u32 handle;
__u32 flags;
__s32 fd;
__u32 pad;
+
+ __u64 point;
};
struct drm_syncobj_transfer {
@@ -1024,6 +1028,13 @@ struct drm_crtc_queue_sequence {
__u64 user_data; /* user data passed to event */
};
+#define DRM_CLIENT_NAME_MAX_LEN 64
+struct drm_set_client_name {
+ __u64 name_len;
+ __u64 name;
+};
+
+
#if defined(__cplusplus)
}
#endif
@@ -1288,6 +1299,16 @@ extern "C" {
*/
#define DRM_IOCTL_MODE_CLOSEFB DRM_IOWR(0xD0, struct drm_mode_closefb)
+/**
+ * DRM_IOCTL_SET_CLIENT_NAME - Attach a name to a drm_file
+ *
+ * Having a name allows for easier tracking and debugging.
+ * The length of the name (without the terminating NUL character) must be
+ * <= DRM_CLIENT_NAME_MAX_LEN.
+ * The call will fail if the name contains whitespace or non-printable characters.
+ */
+#define DRM_IOCTL_SET_CLIENT_NAME DRM_IOWR(0xD1, struct drm_set_client_name)
+
/*
* Device specific ioctls should only be in their respective headers
* The device specific ioctl range is from 0x40 to 0x9f.
diff --git a/tools/include/uapi/drm/i915_drm.h b/tools/include/uapi/drm/i915_drm.h
index fd4f9574d177..535cb68fdb5c 100644
--- a/tools/include/uapi/drm/i915_drm.h
+++ b/tools/include/uapi/drm/i915_drm.h
@@ -806,6 +806,12 @@ typedef struct drm_i915_irq_wait {
*/
#define I915_PARAM_PXP_STATUS 58
+/*
+ * Query if kernel allows marking a context to send a Freq hint to SLPC. This
+ * will enable use of the strategies allowed by the SLPC algorithm.
+ */
+#define I915_PARAM_HAS_CONTEXT_FREQ_HINT 59
+
/* Must be kept compact -- no holes and well documented */
/**
@@ -2148,6 +2154,24 @@ struct drm_i915_gem_context_param {
* -EIO: The firmware did not succeed in creating the protected context.
*/
#define I915_CONTEXT_PARAM_PROTECTED_CONTENT 0xd
+
+/*
+ * I915_CONTEXT_PARAM_LOW_LATENCY:
+ *
+ * Mark this context as a low latency workload which requires aggressive GT
+ * frequency scaling. Use I915_PARAM_HAS_CONTEXT_FREQ_HINT to check if the kernel
+ * supports this per context flag.
+ */
+#define I915_CONTEXT_PARAM_LOW_LATENCY 0xe
+
+/*
+ * I915_CONTEXT_PARAM_CONTEXT_IMAGE:
+ *
+ * Allows userspace to provide its own context images.
+ *
+ * Note that this is a debug API not available on production kernel builds.
+ */
+#define I915_CONTEXT_PARAM_CONTEXT_IMAGE 0xf
/* Must be kept compact -- no holes and well documented */
/** @value: Context parameter value to be set or queried */
@@ -2549,6 +2573,24 @@ struct i915_context_param_engines {
struct i915_engine_class_instance engines[N__]; \
} __attribute__((packed)) name__
+struct i915_gem_context_param_context_image {
+ /** @engine: Engine class & instance to be configured. */
+ struct i915_engine_class_instance engine;
+
+ /** @flags: One of the supported flags or zero. */
+ __u32 flags;
+#define I915_CONTEXT_IMAGE_FLAG_ENGINE_INDEX (1u << 0)
+
+ /** @size: Size of the image blob pointed to by @image. */
+ __u32 size;
+
+ /** @mbz: Must be zero. */
+ __u32 mbz;
+
+ /** @image: Userspace memory containing the context image. */
+ __u64 image;
+} __attribute__((packed));
+
/**
* struct drm_i915_gem_context_create_ext_setparam - Context parameter
* to set or query during context creation.
@@ -2623,19 +2665,29 @@ struct drm_i915_reg_read {
*
*/
+/*
+ * struct drm_i915_reset_stats - Return global reset and other context stats
+ *
+ * The driver keeps a few stats per context and also a global reset count.
+ * This struct can be used to query those stats.
+ */
struct drm_i915_reset_stats {
+ /** @ctx_id: ID of the requested context */
__u32 ctx_id;
+
+ /** @flags: MBZ */
__u32 flags;
- /* All resets since boot/module reload, for all contexts */
+ /** @reset_count: All resets since boot/module reload, for all contexts */
__u32 reset_count;
- /* Number of batches lost when active in GPU, for this context */
+ /** @batch_active: Number of batches lost when active in GPU, for this context */
__u32 batch_active;
- /* Number of batches lost pending for execution, for this context */
+ /** @batch_pending: Number of batches lost pending for execution, for this context */
__u32 batch_pending;
+ /** @pad: MBZ */
__u32 pad;
};
@@ -3013,6 +3065,7 @@ struct drm_i915_query_item {
* - %DRM_I915_QUERY_MEMORY_REGIONS (see struct drm_i915_query_memory_regions)
* - %DRM_I915_QUERY_HWCONFIG_BLOB (see `GuC HWCONFIG blob uAPI`)
* - %DRM_I915_QUERY_GEOMETRY_SUBSLICES (see struct drm_i915_query_topology_info)
+ * - %DRM_I915_QUERY_GUC_SUBMISSION_VERSION (see struct drm_i915_query_guc_submission_version)
*/
__u64 query_id;
#define DRM_I915_QUERY_TOPOLOGY_INFO 1
@@ -3021,6 +3074,7 @@ struct drm_i915_query_item {
#define DRM_I915_QUERY_MEMORY_REGIONS 4
#define DRM_I915_QUERY_HWCONFIG_BLOB 5
#define DRM_I915_QUERY_GEOMETRY_SUBSLICES 6
+#define DRM_I915_QUERY_GUC_SUBMISSION_VERSION 7
/* Must be kept compact -- no holes and well documented */
/**
@@ -3567,6 +3621,20 @@ struct drm_i915_query_memory_regions {
};
/**
+ * struct drm_i915_query_guc_submission_version - query GuC submission interface version
+ */
+struct drm_i915_query_guc_submission_version {
+ /** @branch: Firmware branch version. */
+ __u32 branch;
+ /** @major: Firmware major version. */
+ __u32 major;
+ /** @minor: Firmware minor version. */
+ __u32 minor;
+ /** @patch: Firmware patch version. */
+ __u32 patch;
+};
+
+/**
* DOC: GuC HWCONFIG blob uAPI
*
* The GuC produces a blob with information about the current device.
diff --git a/tools/include/uapi/linux/bits.h b/tools/include/uapi/linux/bits.h
new file mode 100644
index 000000000000..a04afef9efca
--- /dev/null
+++ b/tools/include/uapi/linux/bits.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* bits.h: Macros for dealing with bitmasks. */
+
+#ifndef _UAPI_LINUX_BITS_H
+#define _UAPI_LINUX_BITS_H
+
+#define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h))))
+
+#define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h))))
+
+#define __GENMASK_U128(h, l) \
+ ((_BIT128((h)) << 1) - (_BIT128(l)))
+
+#endif /* _UAPI_LINUX_BITS_H */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 7f24d898efbb..6829936d33f5 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -42,6 +42,7 @@
#define BPF_JSGE 0x70 /* SGE is signed '>=', GE in x86 */
#define BPF_JSLT 0xc0 /* SLT is signed, '<' */
#define BPF_JSLE 0xd0 /* SLE is signed, '<=' */
+#define BPF_JCOND 0xe0 /* conditional pseudo jumps: may_goto, goto_or_nop */
#define BPF_CALL 0x80 /* function call */
#define BPF_EXIT 0x90 /* function return */
@@ -50,6 +51,13 @@
#define BPF_XCHG (0xe0 | BPF_FETCH) /* atomic exchange */
#define BPF_CMPXCHG (0xf0 | BPF_FETCH) /* atomic compare-and-write */
+#define BPF_LOAD_ACQ 0x100 /* load-acquire */
+#define BPF_STORE_REL 0x110 /* store-release */
+
+enum bpf_cond_pseudo_jmp {
+ BPF_MAY_GOTO = 0,
+};
+
/* Register numbers */
enum {
BPF_REG_0 = 0,
@@ -77,12 +85,29 @@ struct bpf_insn {
__s32 imm; /* signed immediate constant */
};
-/* Key of an a BPF_MAP_TYPE_LPM_TRIE entry */
+/* Deprecated: use struct bpf_lpm_trie_key_u8 (when the "data" member is needed for
+ * byte access) or struct bpf_lpm_trie_key_hdr (when using an alternative type for
+ * the trailing flexible array member) instead.
+ */
struct bpf_lpm_trie_key {
__u32 prefixlen; /* up to 32 for AF_INET, 128 for AF_INET6 */
__u8 data[0]; /* Arbitrary size */
};
+/* Header for bpf_lpm_trie_key structs */
+struct bpf_lpm_trie_key_hdr {
+ __u32 prefixlen;
+};
+
+/* Key of a BPF_MAP_TYPE_LPM_TRIE entry, with trailing byte array. */
+struct bpf_lpm_trie_key_u8 {
+ union {
+ struct bpf_lpm_trie_key_hdr hdr;
+ __u32 prefixlen;
+ };
+ __u8 data[]; /* Arbitrary size */
+};
+
struct bpf_cgroup_storage_key {
__u64 cgroup_inode_id; /* cgroup inode id */
__u32 attach_type; /* program attach type (enum bpf_attach_type) */
@@ -425,6 +450,7 @@ union bpf_iter_link_info {
* * **struct bpf_map_info**
* * **struct bpf_btf_info**
* * **struct bpf_link_info**
+ * * **struct bpf_token_info**
*
* Return
* Returns zero on success. On error, -1 is returned and *errno*
@@ -617,7 +643,11 @@ union bpf_iter_link_info {
* to NULL to begin the batched operation. After each subsequent
* **BPF_MAP_LOOKUP_BATCH**, the caller should pass the resultant
* *out_batch* as the *in_batch* for the next operation to
- * continue iteration from the current point.
+ * continue iteration from the current point. Both *in_batch* and
+ * *out_batch* must point to memory large enough to hold a key,
+ * except for maps of type **BPF_MAP_TYPE_{HASH, PERCPU_HASH,
+ * LRU_HASH, LRU_PERCPU_HASH}**, for which batch parameters
+ * must be at least 4 bytes wide regardless of key size.
*
* The *keys* and *values* are output parameters which must point
* to memory large enough to hold *count* items based on the key
@@ -847,6 +877,47 @@ union bpf_iter_link_info {
* Returns zero on success. On error, -1 is returned and *errno*
* is set appropriately.
*
+ * BPF_TOKEN_CREATE
+ * Description
+ * Create BPF token with embedded information about what
+ * BPF-related functionality it allows:
+ * - a set of allowed bpf() syscall commands;
+ * - a set of allowed BPF map types to be created with
+ * BPF_MAP_CREATE command, if BPF_MAP_CREATE itself is allowed;
+ * - a set of allowed BPF program types and BPF program attach
+ * types to be loaded with BPF_PROG_LOAD command, if
+ * BPF_PROG_LOAD itself is allowed.
+ *
+ * BPF token is created (derived) from an instance of BPF FS,
+ * assuming it has necessary delegation mount options specified.
+ * This BPF token can be passed as an extra parameter to various
+ * bpf() syscall commands to grant BPF subsystem functionality to
+ * unprivileged processes.
+ *
+ * When created, BPF token is "associated" with the owning
+ * user namespace of BPF FS instance (super block) that it was
+ * derived from, and subsequent BPF operations performed with
+ * BPF token would be performing capabilities checks (i.e.,
+ * CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, CAP_SYS_ADMIN) within
+ * that user namespace. Without BPF token, such capabilities
+ * have to be granted in init user namespace, making bpf()
+ * syscall incompatible with user namespace, for the most part.
+ *
+ * Return
+ * A new file descriptor (a nonnegative integer), or -1 if an
+ * error occurred (in which case, *errno* is set appropriately).
+ *
+ * BPF_PROG_STREAM_READ_BY_FD
+ * Description
+ * Read data of a program's BPF stream. The program is identified
+ * by *prog_fd*, and the stream is identified by the *stream_id*.
+ * The data is copied to a buffer pointed to by *stream_buf*; at most
+ * *stream_buf_len* bytes are filled.
+ *
+ * Return
+ * Number of bytes read from the stream on success, or -1 if an
+ * error occurred (in which case, *errno* is set appropriately).
+ *
* NOTES
* eBPF objects (maps and programs) can be shared between processes.
*
@@ -901,6 +972,9 @@ enum bpf_cmd {
BPF_ITER_CREATE,
BPF_LINK_DETACH,
BPF_PROG_BIND_MAP,
+ BPF_TOKEN_CREATE,
+ BPF_PROG_STREAM_READ_BY_FD,
+ __MAX_BPF_CMD,
};
enum bpf_map_type {
@@ -951,6 +1025,8 @@ enum bpf_map_type {
BPF_MAP_TYPE_BLOOM_FILTER,
BPF_MAP_TYPE_USER_RINGBUF,
BPF_MAP_TYPE_CGRP_STORAGE,
+ BPF_MAP_TYPE_ARENA,
+ __MAX_BPF_MAP_TYPE
};
/* Note that tracing related programs such as
@@ -995,6 +1071,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_SK_LOOKUP,
BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */
BPF_PROG_TYPE_NETFILTER,
+ __MAX_BPF_PROG_TYPE
};
enum bpf_attach_type {
@@ -1054,11 +1131,16 @@ enum bpf_attach_type {
BPF_CGROUP_UNIX_GETSOCKNAME,
BPF_NETKIT_PRIMARY,
BPF_NETKIT_PEER,
+ BPF_TRACE_KPROBE_SESSION,
+ BPF_TRACE_UPROBE_SESSION,
__MAX_BPF_ATTACH_TYPE
};
#define MAX_BPF_ATTACH_TYPE __MAX_BPF_ATTACH_TYPE
+/* Add BPF_LINK_TYPE(type, name) in bpf_types.h to keep bpf_link_type_strs[]
+ * in sync with the definitions below.
+ */
enum bpf_link_type {
BPF_LINK_TYPE_UNSPEC = 0,
BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
@@ -1074,6 +1156,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_TCX = 11,
BPF_LINK_TYPE_UPROBE_MULTI = 12,
BPF_LINK_TYPE_NETKIT = 13,
+ BPF_LINK_TYPE_SOCKMAP = 14,
__MAX_BPF_LINK_TYPE,
};
@@ -1140,6 +1223,7 @@ enum bpf_perf_event_type {
#define BPF_F_BEFORE (1U << 3)
#define BPF_F_AFTER (1U << 4)
#define BPF_F_ID (1U << 5)
+#define BPF_F_PREORDER (1U << 6)
#define BPF_F_LINK BPF_F_LINK /* 1 << 13 */
/* If BPF_F_STRICT_ALIGNMENT is used in BPF_PROG_LOAD command, the
@@ -1278,6 +1362,10 @@ enum {
*/
#define BPF_PSEUDO_KFUNC_CALL 2
+enum bpf_addr_space_cast {
+ BPF_ADDR_SPACE_CAST = 1,
+};
+
/* flags for BPF_MAP_UPDATE_ELEM command */
enum {
BPF_ANY = 0, /* create new element or update existing */
@@ -1330,6 +1418,18 @@ enum {
/* Get path from provided FD in BPF_OBJ_PIN/BPF_OBJ_GET commands */
BPF_F_PATH_FD = (1U << 14),
+
+/* Flag for value_type_btf_obj_fd, the fd is available */
+ BPF_F_VTYPE_BTF_OBJ_FD = (1U << 15),
+
+/* BPF token FD is passed in a corresponding command's token_fd field */
+ BPF_F_TOKEN_FD = (1U << 16),
+
+/* When user space page faults in bpf_arena send SIGSEGV instead of inserting new page */
+ BPF_F_SEGV_ON_FAULT = (1U << 17),
+
+/* Do not translate kernel bpf_arena pointers to user pointers */
+ BPF_F_NO_USER_CONV = (1U << 18),
};
/* Flags for BPF_PROG_QUERY. */
@@ -1346,6 +1446,8 @@ enum {
#define BPF_F_TEST_RUN_ON_CPU (1U << 0)
/* If set, XDP frames will be transmitted after processing */
#define BPF_F_TEST_XDP_LIVE_FRAMES (1U << 1)
+/* If set, apply CHECKSUM_COMPLETE to skb and validate the checksum */
+#define BPF_F_TEST_SKB_CHECKSUM_COMPLETE (1U << 2)
/* type for BPF_ENABLE_STATS */
enum bpf_stats_type {
@@ -1374,6 +1476,11 @@ struct bpf_stack_build_id {
#define BPF_OBJ_NAME_LEN 16U
+enum {
+ BPF_STREAM_STDOUT = 1,
+ BPF_STREAM_STDERR = 2,
+};
+
union bpf_attr {
struct { /* anonymous struct used by BPF_MAP_CREATE command */
__u32 map_type; /* one of enum bpf_map_type */
@@ -1401,11 +1508,29 @@ union bpf_attr {
* BPF_MAP_TYPE_BLOOM_FILTER - the lowest 4 bits indicate the
* number of hash functions (if 0, the bloom filter will default
* to using 5 hash functions).
+ *
+ * BPF_MAP_TYPE_ARENA - contains the address where user space
+ * is going to mmap() the arena. It has to be page aligned.
*/
__u64 map_extra;
+
+ __s32 value_type_btf_obj_fd; /* fd pointing to a BTF
+ * type data for
+ * btf_vmlinux_value_type_id.
+ */
+ /* BPF token FD to use with BPF_MAP_CREATE operation.
+ * If provided, map_flags should have BPF_F_TOKEN_FD flag set.
+ */
+ __s32 map_token_fd;
+
+ /* Hash of the program that has exclusive access to the map.
+ */
+ __aligned_u64 excl_prog_hash;
+ /* Size of the passed excl_prog_hash. */
+ __u32 excl_prog_hash_size;
};
- struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */
+ struct { /* anonymous struct used by BPF_MAP_*_ELEM and BPF_MAP_FREEZE commands */
__u32 map_fd;
__aligned_u64 key;
union {
@@ -1472,6 +1597,30 @@ union bpf_attr {
* truncated), or smaller (if log buffer wasn't filled completely).
*/
__u32 log_true_size;
+ /* BPF token FD to use with BPF_PROG_LOAD operation.
+ * If provided, prog_flags should have BPF_F_TOKEN_FD flag set.
+ */
+ __s32 prog_token_fd;
+ /* The fd_array_cnt can be used to pass the length of the
+ * fd_array array. In this case all the [map] file descriptors
+ * passed in this array will be bound to the program, even if
+ * the maps are not referenced directly. The functionality is
+ * similar to the BPF_PROG_BIND_MAP syscall, but maps can be
+ * used by the verifier during the program load. If provided,
+ * then the fd_array[0,...,fd_array_cnt-1] is expected to be
+ * continuous.
+ */
+ __u32 fd_array_cnt;
+ /* Pointer to a buffer containing the signature of the BPF
+ * program.
+ */
+ __aligned_u64 signature;
+ /* Size of the signature buffer in bytes. */
+ __u32 signature_size;
+ /* ID of the kernel keyring to be used for signature
+ * verification.
+ */
+ __s32 keyring_id;
};
struct { /* anonymous struct used by BPF_OBJ_* commands */
@@ -1537,6 +1686,7 @@ union bpf_attr {
};
__u32 next_id;
__u32 open_flags;
+ __s32 fd_by_id_token_fd;
};
struct { /* anonymous struct used by BPF_OBJ_GET_INFO_BY_FD */
@@ -1569,8 +1719,10 @@ union bpf_attr {
} query;
struct { /* anonymous struct used by BPF_RAW_TRACEPOINT_OPEN command */
- __u64 name;
- __u32 prog_fd;
+ __u64 name;
+ __u32 prog_fd;
+ __u32 :32;
+ __aligned_u64 cookie;
} raw_tracepoint;
struct { /* anonymous struct for BPF_BTF_LOAD */
@@ -1584,6 +1736,11 @@ union bpf_attr {
* truncated), or smaller (if log buffer wasn't filled completely).
*/
__u32 btf_log_true_size;
+ __u32 btf_flags;
+ /* BPF token FD to use with BPF_BTF_LOAD operation.
+ * If provided, btf_flags should have BPF_F_TOKEN_FD flag set.
+ */
+ __s32 btf_token_fd;
};
struct {
@@ -1671,6 +1828,13 @@ union bpf_attr {
};
__u64 expected_revision;
} netkit;
+ struct {
+ union {
+ __u32 relative_fd;
+ __u32 relative_id;
+ };
+ __u64 expected_revision;
+ } cgroup;
};
} link_create;
@@ -1714,6 +1878,18 @@ union bpf_attr {
__u32 flags; /* extra flags */
} prog_bind_map;
+ struct { /* struct used by BPF_TOKEN_CREATE command */
+ __u32 flags;
+ __u32 bpffs_fd;
+ } token_create;
+
+ struct {
+ __aligned_u64 stream_buf;
+ __u32 stream_buf_len;
+ __u32 stream_id;
+ __u32 prog_fd;
+ } prog_stream_read;
+
} __attribute__((aligned(8)));
/* The description below is an attempt at providing documentation to eBPF
@@ -1861,15 +2037,21 @@ union bpf_attr {
* program.
* Return
* The SMP id of the processor running the program.
+ * Attributes
+ * __bpf_fastcall
*
* long bpf_skb_store_bytes(struct sk_buff *skb, u32 offset, const void *from, u32 len, u64 flags)
* Description
* Store *len* bytes from address *from* into the packet
- * associated to *skb*, at *offset*. *flags* are a combination of
- * **BPF_F_RECOMPUTE_CSUM** (automatically recompute the
- * checksum for the packet after storing the bytes) and
- * **BPF_F_INVALIDATE_HASH** (set *skb*\ **->hash**, *skb*\
- * **->swhash** and *skb*\ **->l4hash** to 0).
+ * associated to *skb*, at *offset*. The *flags* are a combination
+ * of the following values:
+ *
+ * **BPF_F_RECOMPUTE_CSUM**
+ * Automatically update *skb*\ **->csum** after storing the
+ * bytes.
+ * **BPF_F_INVALIDATE_HASH**
+ * Set *skb*\ **->hash**, *skb*\ **->swhash** and *skb*\
+ * **->l4hash** to 0.
*
* A call to this helper is susceptible to change the underlying
* packet buffer. Therefore, at load time, all checks on pointers
@@ -1921,7 +2103,8 @@ union bpf_attr {
* untouched (unless **BPF_F_MARK_ENFORCE** is added as well), and
* for updates resulting in a null checksum the value is set to
* **CSUM_MANGLED_0** instead. Flag **BPF_F_PSEUDO_HDR** indicates
- * the checksum is to be computed against a pseudo-header.
+ * that the modified header field is part of the pseudo-header.
+ * Flag **BPF_F_IPV6** should be set for IPv6 packets.
*
* This helper works in combination with **bpf_csum_diff**\ (),
* which does not update the checksum in-place, but offers more
@@ -2268,7 +2451,7 @@ union bpf_attr {
* into it. An example is available in file
* *samples/bpf/trace_output_user.c* in the Linux kernel source
* tree (the eBPF program counterpart is in
- * *samples/bpf/trace_output_kern.c*).
+ * *samples/bpf/trace_output.bpf.c*).
*
* **bpf_perf_event_output**\ () achieves better performance
* than **bpf_trace_printk**\ () for sharing data with user
@@ -2742,7 +2925,7 @@ union bpf_attr {
* **TCP_SYNCNT**, **TCP_USER_TIMEOUT**, **TCP_NOTSENT_LOWAT**,
* **TCP_NODELAY**, **TCP_MAXSEG**, **TCP_WINDOW_CLAMP**,
* **TCP_THIN_LINEAR_TIMEOUTS**, **TCP_BPF_DELACK_MAX**,
- * **TCP_BPF_RTO_MIN**.
+ * **TCP_BPF_RTO_MIN**, **TCP_BPF_SOCK_OPS_CB_FLAGS**.
* * **IPPROTO_IP**, which supports *optname* **IP_TOS**.
* * **IPPROTO_IPV6**, which supports the following *optname*\ s:
* **IPV6_TCLASS**, **IPV6_AUTOFLOWLABEL**.
@@ -2992,10 +3175,6 @@ union bpf_attr {
* with the **CONFIG_BPF_KPROBE_OVERRIDE** configuration
* option, and in this case it only works on functions tagged with
* **ALLOW_ERROR_INJECTION** in the kernel code.
- *
- * Also, the helper is only available for the architectures having
- * the CONFIG_FUNCTION_ERROR_INJECTION option. As of this writing,
- * x86 architecture is the only one to support this feature.
* Return
* 0
*
@@ -3289,6 +3468,10 @@ union bpf_attr {
* for the nexthop. If the src addr cannot be derived,
* **BPF_FIB_LKUP_RET_NO_SRC_ADDR** is returned. In this
* case, *params*->dmac and *params*->smac are not set either.
+ * **BPF_FIB_LOOKUP_MARK**
+ * Use the mark present in *params*->mark for the fib lookup.
+ * This option should not be used with BPF_FIB_LOOKUP_DIRECT,
+ * as it only has meaning for full lookups.
*
* *ctx* is either **struct xdp_md** for XDP programs or
* **struct sk_buff** tc cls_act programs.
@@ -4708,7 +4891,7 @@ union bpf_attr {
*
* **-ENOENT** if the bpf_local_storage cannot be found.
*
- * long bpf_d_path(struct path *path, char *buf, u32 sz)
+ * long bpf_d_path(const struct path *path, char *buf, u32 sz)
* Description
* Return full path for given **struct path** object, which
* needs to be the kernel BTF *path* object. The path is
@@ -4838,10 +5021,13 @@ union bpf_attr {
* the netns switch takes place from ingress to ingress without
* going through the CPU's backlog queue.
*
+ * *skb*\ **->mark** and *skb*\ **->tstamp** are not cleared during
+ * the netns switch.
+ *
* The *flags* argument is reserved and must be 0. The helper is
- * currently only supported for tc BPF program types at the ingress
- * hook and for veth device types. The peer device must reside in a
- * different network namespace.
+ * currently only supported for tc BPF program types at the
+ * ingress hook and for veth and netkit target device types. The
+ * peer device must reside in a different network namespace.
* Return
* The helper returns **TC_ACT_REDIRECT** on success or
* **TC_ACT_SHOT** on error.
@@ -4917,7 +5103,7 @@ union bpf_attr {
* bytes will be copied to *dst*
* Return
* The **hash_algo** is returned on success,
- * **-EOPNOTSUP** if IMA is disabled or **-EINVAL** if
+ * **-EOPNOTSUPP** if IMA is disabled or **-EINVAL** if
* invalid arguments are passed.
*
* struct socket *bpf_sock_from_file(struct file *file)
@@ -5256,7 +5442,7 @@ union bpf_attr {
* Currently, the **flags** must be 0. Currently, nr_loops is
* limited to 1 << 23 (~8 million) loops.
*
- * long (\*callback_fn)(u32 index, void \*ctx);
+ * long (\*callback_fn)(u64 index, void \*ctx);
*
* where **index** is the current index in the loop. The index
* is zero-indexed.
@@ -5403,14 +5589,15 @@ union bpf_attr {
* bytes will be copied to *dst*
* Return
* The **hash_algo** is returned on success,
- * **-EOPNOTSUP** if the hash calculation failed or **-EINVAL** if
+ * **-EOPNOTSUPP** if the hash calculation failed or **-EINVAL** if
* invalid arguments are passed.
*
- * void *bpf_kptr_xchg(void *map_value, void *ptr)
+ * void *bpf_kptr_xchg(void *dst, void *ptr)
* Description
- * Exchange kptr at pointer *map_value* with *ptr*, and return the
- * old value. *ptr* can be NULL, otherwise it must be a referenced
- * pointer which will be released when this helper is called.
+ * Exchange kptr at pointer *dst* with *ptr*, and return the old value.
+ * *dst* can be map value or local kptr. *ptr* can be NULL, otherwise
+ * it must be a referenced pointer which will be released when this helper
+ * is called.
* Return
* The old value of kptr (which can be NULL). The returned pointer
* if not NULL, is a reference which must be released using its
@@ -5893,7 +6080,10 @@ union bpf_attr {
FN(user_ringbuf_drain, 209, ##ctx) \
FN(cgrp_storage_get, 210, ##ctx) \
FN(cgrp_storage_delete, 211, ##ctx) \
- /* */
+ /* This helper list is effectively frozen. If you are trying to \
+ * add a new helper, you should add a kfunc instead, which has \
+ * weaker stability guarantees. See Documentation/bpf/kfuncs.rst \
+ */
/* backwards-compatibility macros for users of __BPF_FUNC_MAPPER that don't
* know or care about integer value that is now passed as second argument
@@ -5931,11 +6121,7 @@ enum {
BPF_F_PSEUDO_HDR = (1ULL << 4),
BPF_F_MARK_MANGLED_0 = (1ULL << 5),
BPF_F_MARK_ENFORCE = (1ULL << 6),
-};
-
-/* BPF_FUNC_clone_redirect and BPF_FUNC_redirect flags. */
-enum {
- BPF_F_INGRESS = (1ULL << 0),
+ BPF_F_IPV6 = (1ULL << 7),
};
/* BPF_FUNC_skb_set_tunnel_key and BPF_FUNC_skb_get_tunnel_key flags. */
@@ -6084,10 +6270,12 @@ enum {
BPF_F_BPRM_SECUREEXEC = (1ULL << 0),
};
-/* Flags for bpf_redirect_map helper */
+/* Flags for bpf_redirect and bpf_redirect_map helpers */
enum {
- BPF_F_BROADCAST = (1ULL << 3),
- BPF_F_EXCLUDE_INGRESS = (1ULL << 4),
+ BPF_F_INGRESS = (1ULL << 0), /* used for skb path */
+ BPF_F_BROADCAST = (1ULL << 3), /* used for XDP path */
+ BPF_F_EXCLUDE_INGRESS = (1ULL << 4), /* used for XDP path */
+#define BPF_F_REDIRECT_FLAGS (BPF_F_INGRESS | BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS)
};
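/*
 * Illustrative XDP sketch, not part of this patch, showing the flags above
 * with bpf_redirect_map(). Assumes libbpf's bpf_helpers.h for SEC(), the
 * map definition macros and the helper declaration.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 32);
	__type(key, __u32);
	__type(value, __u32);
} tx_ports SEC(".maps");

SEC("xdp")
int xdp_broadcast(struct xdp_md *ctx)
{
	/* Flood the frame to every device in tx_ports except the ingress
	 * one; the key argument is ignored when BPF_F_BROADCAST is set.
	 */
	return bpf_redirect_map(&tx_ports, 0,
				BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
}

char _license[] SEC("license") = "GPL";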
#define __bpf_md_ptr(type, name) \
@@ -6096,12 +6284,17 @@ union { \
__u64 :64; \
} __attribute__((aligned(8)))
+/* The enum used in skb->tstamp_type. It specifies the clock type
+ * of the time stored in the skb->tstamp.
+ */
enum {
- BPF_SKB_TSTAMP_UNSPEC,
- BPF_SKB_TSTAMP_DELIVERY_MONO, /* tstamp has mono delivery time */
- /* For any BPF_SKB_TSTAMP_* that the bpf prog cannot handle,
- * the bpf prog should handle it like BPF_SKB_TSTAMP_UNSPEC
- * and try to deduce it by ingress, egress or skb->sk->sk_clockid.
+ BPF_SKB_TSTAMP_UNSPEC = 0, /* DEPRECATED */
+ BPF_SKB_TSTAMP_DELIVERY_MONO = 1, /* DEPRECATED */
+ BPF_SKB_CLOCK_REALTIME = 0,
+ BPF_SKB_CLOCK_MONOTONIC = 1,
+ BPF_SKB_CLOCK_TAI = 2,
+ /* For any future BPF_SKB_CLOCK_* that the bpf prog cannot handle,
+ * the bpf prog can try to deduce it by ingress/egress/skb->sk->sk_clockid.
*/
};
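/*
 * Illustrative tc (cls_act) sketch, not part of this patch: the clock
 * constants above are the values bpf_skb_set_tstamp() takes in its
 * tstamp_type argument (BPF_SKB_CLOCK_MONOTONIC has the same value as the
 * deprecated BPF_SKB_TSTAMP_DELIVERY_MONO). Assumes libbpf's bpf_helpers.h;
 * TC_ACT_OK comes from linux/pkt_cls.h.
 */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int set_delivery_time(struct __sk_buff *skb)
{
	/* Stamp an earliest-departure time 1ms out on the monotonic clock. */
	__u64 edt = bpf_ktime_get_ns() + 1000000;

	bpf_skb_set_tstamp(skb, edt, BPF_SKB_CLOCK_MONOTONIC);
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";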
@@ -6487,8 +6680,10 @@ struct bpf_map_info {
__u32 btf_id;
__u32 btf_key_type_id;
__u32 btf_value_type_id;
- __u32 :32; /* alignment pad */
+ __u32 btf_vmlinux_id;
__u64 map_extra;
+ __aligned_u64 hash;
+ __u32 hash_size;
} __attribute__((aligned(8)));
struct bpf_btf_info {
@@ -6508,11 +6703,15 @@ struct bpf_link_info {
struct {
__aligned_u64 tp_name; /* in/out: tp_name buffer ptr */
__u32 tp_name_len; /* in/out: tp_name buffer len */
+ __u32 :32;
+ __u64 cookie;
} raw_tracepoint;
struct {
__u32 attach_type;
__u32 target_obj_id; /* prog_id for PROG_EXT, otherwise btf object id */
__u32 target_btf_id; /* BTF type id inside the object */
+ __u32 :32;
+ __u64 cookie;
} tracing;
struct {
__u64 cgroup_id;
@@ -6563,6 +6762,7 @@ struct bpf_link_info {
__u32 count; /* in/out: kprobe_multi function count */
__u32 flags;
__u64 missed;
+ __aligned_u64 cookies;
} kprobe_multi;
struct {
__aligned_u64 path;
@@ -6582,6 +6782,8 @@ struct bpf_link_info {
__aligned_u64 file_name; /* in/out */
__u32 name_len;
__u32 offset; /* offset from file_name */
+ __u64 cookie;
+ __u64 ref_ctr_offset;
} uprobe; /* BPF_PERF_EVENT_UPROBE, BPF_PERF_EVENT_URETPROBE */
struct {
__aligned_u64 func_name; /* in/out */
@@ -6589,14 +6791,19 @@ struct bpf_link_info {
__u32 offset; /* offset from func_name */
__u64 addr;
__u64 missed;
+ __u64 cookie;
} kprobe; /* BPF_PERF_EVENT_KPROBE, BPF_PERF_EVENT_KRETPROBE */
struct {
__aligned_u64 tp_name; /* in/out */
__u32 name_len;
+ __u32 :32;
+ __u64 cookie;
} tracepoint; /* BPF_PERF_EVENT_TRACEPOINT */
struct {
__u64 config;
__u32 type;
+ __u32 :32;
+ __u64 cookie;
} event; /* BPF_PERF_EVENT_EVENT */
};
} perf_event;
@@ -6608,9 +6815,20 @@ struct bpf_link_info {
__u32 ifindex;
__u32 attach_type;
} netkit;
+ struct {
+ __u32 map_id;
+ __u32 attach_type;
+ } sockmap;
};
} __attribute__((aligned(8)));
+struct bpf_token_info {
+ __u64 allowed_cmds;
+ __u64 allowed_maps;
+ __u64 allowed_progs;
+ __u64 allowed_attachs;
+} __attribute__((aligned(8)));
+
/* User bpf_sock_addr struct to access socket fields and sockaddr struct passed
* by user and intended to be used by socket (e.g. to bind to, depends on
* attach type).
@@ -6774,6 +6992,12 @@ enum {
BPF_SOCK_OPS_ALL_CB_FLAGS = 0x7F,
};
+enum {
+ SK_BPF_CB_TX_TIMESTAMPING = 1<<0,
+ SK_BPF_CB_MASK = (SK_BPF_CB_TX_TIMESTAMPING - 1) |
+ SK_BPF_CB_TX_TIMESTAMPING
+};
+
/* List of known BPF sock_ops operators.
* New entries can only be added at the end
*/
@@ -6826,6 +7050,8 @@ enum {
* socket transition to LISTEN state.
*/
BPF_SOCK_OPS_RTT_CB, /* Called on every RTT.
+ * Arg1: measured RTT input (mrtt)
+ * Arg2: updated srtt
*/
BPF_SOCK_OPS_PARSE_HDR_OPT_CB, /* Parse the header option.
* It will be called to handle
@@ -6884,6 +7110,29 @@ enum {
* by the kernel or the
* earlier bpf-progs.
*/
+ BPF_SOCK_OPS_TSTAMP_SCHED_CB, /* Called when skb is passing
+ * through dev layer when
+ * SK_BPF_CB_TX_TIMESTAMPING
+ * feature is on.
+ */
+ BPF_SOCK_OPS_TSTAMP_SND_SW_CB, /* Called when the skb is about to be sent
+ * to the NIC when SK_BPF_CB_TX_TIMESTAMPING
+ * feature is on.
+ */
+ BPF_SOCK_OPS_TSTAMP_SND_HW_CB, /* Called in hardware phase when
+ * SK_BPF_CB_TX_TIMESTAMPING feature
+ * is on.
+ */
+ BPF_SOCK_OPS_TSTAMP_ACK_CB, /* Called when all the skbs in the
+ * same sendmsg call are acked
+ * when SK_BPF_CB_TX_TIMESTAMPING
+ * feature is on.
+ */
+ BPF_SOCK_OPS_TSTAMP_SENDMSG_CB, /* Called when every sendmsg syscall
+ * is triggered. It's used to correlate
+ * sendmsg timestamp with corresponding
+ * tskey.
+ */
};
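/*
 * Illustrative sock_ops sketch, not part of this patch, opting a TCP socket
 * into the BPF_SOCK_OPS_TSTAMP_* callbacks above. It assumes the new
 * SK_BPF_CB_FLAGS optname is handled by bpf_setsockopt() at the SOL_SOCKET
 * level (spelled out as 1 here, since BPF objects do not pull in libc
 * headers); treat that level as an assumption, not something this header
 * guarantees.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define SOL_SOCKET 1

SEC("sockops")
int enable_tx_tstamp(struct bpf_sock_ops *skops)
{
	int flags = SK_BPF_CB_TX_TIMESTAMPING;

	switch (skops->op) {
	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
		/* Request the TSTAMP_* callbacks for this socket. */
		bpf_setsockopt(skops, SOL_SOCKET, SK_BPF_CB_FLAGS,
			       &flags, sizeof(flags));
		break;
	case BPF_SOCK_OPS_TSTAMP_SND_SW_CB:
	case BPF_SOCK_OPS_TSTAMP_ACK_CB:
		/* Timestamp events for previously sent data arrive here. */
		break;
	}
	return 1;
}

char _license[] SEC("license") = "GPL";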
/* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
@@ -6904,6 +7153,7 @@ enum {
BPF_TCP_LISTEN,
BPF_TCP_CLOSING, /* Now a valid state */
BPF_TCP_NEW_SYN_RECV,
+ BPF_TCP_BOUND_INACTIVE,
BPF_TCP_MAX_STATES /* Leave at the end! */
};
@@ -6948,6 +7198,8 @@ enum {
TCP_BPF_SYN = 1005, /* Copy the TCP header */
TCP_BPF_SYN_IP = 1006, /* Copy the IP[46] and TCP header */
TCP_BPF_SYN_MAC = 1007, /* Copy the MAC, IP[46], and TCP header */
+ TCP_BPF_SOCK_OPS_CB_FLAGS = 1008, /* Get or Set TCP sock ops flags */
+ SK_BPF_CB_FLAGS = 1009, /* Get or set sock ops flags in socket */
};
enum {
@@ -7007,6 +7259,7 @@ enum {
BPF_FIB_LOOKUP_SKIP_NEIGH = (1U << 2),
BPF_FIB_LOOKUP_TBID = (1U << 3),
BPF_FIB_LOOKUP_SRC = (1U << 4),
+ BPF_FIB_LOOKUP_MARK = (1U << 5),
};
enum {
@@ -7039,7 +7292,7 @@ struct bpf_fib_lookup {
/* output: MTU value */
__u16 mtu_result;
- };
+ } __attribute__((packed, aligned(2)));
/* input: L3 device index for lookup
* output: device index from FIB lookup
*/
@@ -7084,8 +7337,19 @@ struct bpf_fib_lookup {
__u32 tbid;
};
- __u8 smac[6]; /* ETH_ALEN */
- __u8 dmac[6]; /* ETH_ALEN */
+ union {
+ /* input */
+ struct {
+ __u32 mark; /* policy routing */
+ /* 2 4-byte holes for input */
+ };
+
+ /* output: source and dest mac */
+ struct {
+ __u8 smac[6]; /* ETH_ALEN */
+ __u8 dmac[6]; /* ETH_ALEN */
+ };
+ };
};
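/*
 * Illustrative sketch, not part of this patch: feeding a policy-routing
 * mark through the new 'mark' input field. Packet parsing is omitted;
 * saddr/daddr come from the caller, and AF_INET is spelled out because
 * BPF objects do not pull in libc headers.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define AF_INET 2

static __always_inline int fib_lookup_with_mark(struct xdp_md *ctx,
						__be32 saddr, __be32 daddr,
						__u32 fwmark)
{
	struct bpf_fib_lookup params = {};

	params.family	= AF_INET;
	params.ifindex	= ctx->ingress_ifindex;
	params.ipv4_src	= saddr;
	params.ipv4_dst	= daddr;
	params.mark	= fwmark;	/* only honoured with BPF_FIB_LOOKUP_MARK */

	/* Full lookup: BPF_FIB_LOOKUP_MARK must not be combined with
	 * BPF_FIB_LOOKUP_DIRECT (see the helper documentation above).
	 */
	return bpf_fib_lookup(ctx, &params, sizeof(params), BPF_FIB_LOOKUP_MARK);
}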
struct bpf_redir_neigh {
@@ -7172,6 +7436,14 @@ struct bpf_timer {
__u64 __opaque[2];
} __attribute__((aligned(8)));
+struct bpf_task_work {
+ __u64 __opaque;
+} __attribute__((aligned(8)));
+
+struct bpf_wq {
+ __u64 __opaque[2];
+} __attribute__((aligned(8)));
+
struct bpf_dynptr {
__u64 __opaque[2];
} __attribute__((aligned(8)));
@@ -7364,4 +7636,13 @@ struct bpf_iter_num {
__u64 __opaque[1];
} __attribute__((aligned(8)));
+/*
+ * Flags to control BPF kfunc behaviour.
+ * - BPF_F_PAD_ZEROS: Pad destination buffer with zeros. (See the respective
+ * helper documentation for details.)
+ */
+enum bpf_kfunc_flags {
+ BPF_F_PAD_ZEROS = (1ULL << 0),
+};
+
#endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/tools/include/uapi/linux/btf.h b/tools/include/uapi/linux/btf.h
index ec1798b6d3ff..266d4ffa6c07 100644
--- a/tools/include/uapi/linux/btf.h
+++ b/tools/include/uapi/linux/btf.h
@@ -36,7 +36,8 @@ struct btf_type {
* bits 24-28: kind (e.g. int, ptr, array...etc)
* bits 29-30: unused
* bit 31: kind_flag, currently used by
- * struct, union, enum, fwd and enum64
+ * struct, union, enum, fwd, enum64,
+ * decl_tag and type_tag
*/
__u32 info;
/* "size" is used by INT, ENUM, STRUCT, UNION, DATASEC and ENUM64.
diff --git a/tools/include/uapi/linux/const.h b/tools/include/uapi/linux/const.h
index a429381e7ca5..b8f629ef135f 100644
--- a/tools/include/uapi/linux/const.h
+++ b/tools/include/uapi/linux/const.h
@@ -28,6 +28,23 @@
#define _BITUL(x) (_UL(1) << (x))
#define _BITULL(x) (_ULL(1) << (x))
+#if !defined(__ASSEMBLY__)
+/*
+ * Missing asm support
+ *
+ * __BIT128() would not work in the asm code, as it shifts an
+ * 'unsigned __int128' data type as direct representation of
+ * 128 bit constants is not supported in the gcc compiler, as
+ * they get silently truncated.
+ *
+ * TODO: Please revisit this implementation when gcc compiler
+ * starts representing 128 bit constants directly like long
+ * and unsigned long etc. Subsequently drop the comment for
+ * GENMASK_U128() which would then start supporting asm code.
+ */
+#define _BIT128(x) ((unsigned __int128)(1) << (x))
+#endif
+
#define __ALIGN_KERNEL(x, a) __ALIGN_KERNEL_MASK(x, (__typeof__(x))(a) - 1)
#define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask))
diff --git a/tools/include/uapi/linux/coredump.h b/tools/include/uapi/linux/coredump.h
new file mode 100644
index 000000000000..dc3789b78af0
--- /dev/null
+++ b/tools/include/uapi/linux/coredump.h
@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+
+#ifndef _UAPI_LINUX_COREDUMP_H
+#define _UAPI_LINUX_COREDUMP_H
+
+#include <linux/types.h>
+
+/**
+ * coredump_{req,ack} flags
+ * @COREDUMP_KERNEL: kernel writes coredump
+ * @COREDUMP_USERSPACE: userspace writes coredump
+ * @COREDUMP_REJECT: don't generate coredump
+ * @COREDUMP_WAIT: wait for coredump server
+ */
+enum {
+ COREDUMP_KERNEL = (1ULL << 0),
+ COREDUMP_USERSPACE = (1ULL << 1),
+ COREDUMP_REJECT = (1ULL << 2),
+ COREDUMP_WAIT = (1ULL << 3),
+};
+
+/**
+ * struct coredump_req - message kernel sends to userspace
+ * @size: size of struct coredump_req
+ * @size_ack: known size of struct coredump_ack on this kernel
+ * @mask: supported features
+ *
+ * When a coredump happens the kernel will connect to the coredump
+ * socket and send a coredump request to the coredump server. The @size
+ * member is set to the size of struct coredump_req and provides a hint
+ * to userspace how much data can be read. Userspace may use MSG_PEEK to
+ * peek the size of struct coredump_req and then choose to consume it in
+ * one go. Userspace may also simply read a COREDUMP_ACK_SIZE_VER0
+ * request. If the size the kernel sends is larger, userspace simply
+ * discards any remaining data.
+ *
+ * The coredump_req->mask member is set to the currently known features.
+ * Userspace may only set coredump_ack->mask to the bits raised by the
+ * kernel in coredump_req->mask.
+ *
+ * The coredump_req->size_ack member is set by the kernel to the size of
+ * struct coredump_ack the kernel knows. Userspace may only send up to
+ * coredump_req->size_ack bytes to the kernel and must set
+ * coredump_ack->size accordingly.
+ */
+struct coredump_req {
+ __u32 size;
+ __u32 size_ack;
+ __u64 mask;
+};
+
+enum {
+ COREDUMP_REQ_SIZE_VER0 = 16U, /* size of first published struct */
+};
+
+/**
+ * struct coredump_ack - message userspace sends to kernel
+ * @size: size of the struct
+ * @spare: unused
+ * @mask: features kernel is supposed to use
+ *
+ * The @size member must be set to the size of struct coredump_ack. It
+ * may never exceed what the kernel returned in coredump_req->size_ack
+ * but it may of course be smaller (>= COREDUMP_ACK_SIZE_VER0 and <=
+ * coredump_req->size_ack).
+ *
+ * The @mask member must be set to the features the coredump server
+ * wants the kernel to use. Only bits the kernel returned in
+ * coredump_req->mask may be set.
+ */
+struct coredump_ack {
+ __u32 size;
+ __u32 spare;
+ __u64 mask;
+};
+
+enum {
+ COREDUMP_ACK_SIZE_VER0 = 16U, /* size of first published struct */
+};
+
+/**
+ * enum coredump_mark - Markers for the coredump socket
+ *
+ * The kernel will place a single byte on the coredump socket. The
+ * markers notify userspace whether the coredump ack succeeded or
+ * failed.
+ *
+ * @COREDUMP_MARK_MINSIZE: the provided coredump_ack size was too small
+ * @COREDUMP_MARK_MAXSIZE: the provided coredump_ack size was too big
+ * @COREDUMP_MARK_UNSUPPORTED: the provided coredump_ack mask was invalid
+ * @COREDUMP_MARK_CONFLICTING: the provided coredump_ack mask has conflicting options
+ * @COREDUMP_MARK_REQACK: the coredump request and ack was successful
+ * @__COREDUMP_MARK_MAX: the maximum coredump mark value
+ */
+enum coredump_mark {
+ COREDUMP_MARK_REQACK = 0U,
+ COREDUMP_MARK_MINSIZE = 1U,
+ COREDUMP_MARK_MAXSIZE = 2U,
+ COREDUMP_MARK_UNSUPPORTED = 3U,
+ COREDUMP_MARK_CONFLICTING = 4U,
+ __COREDUMP_MARK_MAX = (1U << 31),
+};
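/*
 * Illustrative userspace sketch, not part of this patch, of the handshake
 * described above. 'fd' is assumed to be an already-accepted connection
 * from the kernel on the coredump socket; socket setup, and the choice of
 * COREDUMP_KERNEL | COREDUMP_WAIT in the ack, are illustrative only, and
 * error handling is trimmed.
 */
#include <linux/coredump.h>
#include <sys/socket.h>
#include <string.h>

static int handle_coredump_conn(int fd)
{
	union {
		struct coredump_req req;
		char buf[256];	/* room for a future, larger coredump_req */
	} r;
	struct coredump_ack ack;
	__u8 mark;

	/* Consume the request in one go; bytes beyond the fields we know
	 * about are simply ignored, as the comment above allows.
	 */
	memset(&r, 0, sizeof(r));
	if (recv(fd, &r, sizeof(r), 0) < (ssize_t)COREDUMP_REQ_SIZE_VER0)
		return -1;

	/* Reply with at most req.size_ack bytes and only bits the kernel
	 * raised in req.mask.
	 */
	memset(&ack, 0, sizeof(ack));
	if (r.req.size_ack < sizeof(ack))
		return -1;
	ack.size = sizeof(ack);
	ack.mask = r.req.mask & (COREDUMP_KERNEL | COREDUMP_WAIT);
	if (send(fd, &ack, ack.size, 0) < 0)
		return -1;

	/* The kernel answers with a single coredump_mark byte. */
	if (recv(fd, &mark, sizeof(mark), 0) != sizeof(mark))
		return -1;
	return mark == COREDUMP_MARK_REQACK ? 0 : -1;
}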
+
+#endif /* _UAPI_LINUX_COREDUMP_H */
diff --git a/tools/include/uapi/linux/elf.h b/tools/include/uapi/linux/elf.h
new file mode 100644
index 000000000000..5834b83d7f9a
--- /dev/null
+++ b/tools/include/uapi/linux/elf.h
@@ -0,0 +1,524 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_ELF_H
+#define _LINUX_ELF_H
+
+#include <linux/types.h>
+#include <linux/elf-em.h>
+
+/* 32-bit ELF base types. */
+typedef __u32 Elf32_Addr;
+typedef __u16 Elf32_Half;
+typedef __u32 Elf32_Off;
+typedef __s32 Elf32_Sword;
+typedef __u32 Elf32_Word;
+typedef __u16 Elf32_Versym;
+
+/* 64-bit ELF base types. */
+typedef __u64 Elf64_Addr;
+typedef __u16 Elf64_Half;
+typedef __s16 Elf64_SHalf;
+typedef __u64 Elf64_Off;
+typedef __s32 Elf64_Sword;
+typedef __u32 Elf64_Word;
+typedef __u64 Elf64_Xword;
+typedef __s64 Elf64_Sxword;
+typedef __u16 Elf64_Versym;
+
+/* These constants are for the segment types stored in the image headers */
+#define PT_NULL 0
+#define PT_LOAD 1
+#define PT_DYNAMIC 2
+#define PT_INTERP 3
+#define PT_NOTE 4
+#define PT_SHLIB 5
+#define PT_PHDR 6
+#define PT_TLS 7 /* Thread local storage segment */
+#define PT_LOOS 0x60000000 /* OS-specific */
+#define PT_HIOS 0x6fffffff /* OS-specific */
+#define PT_LOPROC 0x70000000
+#define PT_HIPROC 0x7fffffff
+#define PT_GNU_EH_FRAME (PT_LOOS + 0x474e550)
+#define PT_GNU_STACK (PT_LOOS + 0x474e551)
+#define PT_GNU_RELRO (PT_LOOS + 0x474e552)
+#define PT_GNU_PROPERTY (PT_LOOS + 0x474e553)
+
+
+/* ARM MTE memory tag segment type */
+#define PT_AARCH64_MEMTAG_MTE (PT_LOPROC + 0x2)
+
+/*
+ * Extended Numbering
+ *
+ * If the real number of program header table entries is larger than
+ * or equal to PN_XNUM (0xffff), it is stored in the sh_info field of the
+ * section header at index 0, and PN_XNUM is stored in the e_phnum
+ * field. Otherwise, the section header at index 0 is zero
+ * initialized, if it exists.
+ *
+ * Specifications are available in:
+ *
+ * - Oracle: Linker and Libraries.
+ * Part No: 817–1984–19, August 2011.
+ * https://docs.oracle.com/cd/E18752_01/pdf/817-1984.pdf
+ *
+ * - System V ABI AMD64 Architecture Processor Supplement
+ * Draft Version 0.99.4,
+ * January 13, 2010.
+ * http://www.cs.washington.edu/education/courses/cse351/12wi/supp-docs/abi.pdf
+ */
+#define PN_XNUM 0xffff
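/*
 * Illustrative sketch, not part of this patch, of how a reader resolves the
 * extended numbering described above: when e_phnum holds PN_XNUM, the real
 * count lives in sh_info of section header 0 (Elf64_Ehdr and Elf64_Shdr are
 * defined further down in this file).
 */
static __u32 elf64_phnum(const Elf64_Ehdr *ehdr, const Elf64_Shdr *shdr0)
{
	if (ehdr->e_phnum != PN_XNUM)
		return ehdr->e_phnum;
	return shdr0 ? shdr0->sh_info : 0;
}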
+
+/* These constants define the different elf file types */
+#define ET_NONE 0
+#define ET_REL 1
+#define ET_EXEC 2
+#define ET_DYN 3
+#define ET_CORE 4
+#define ET_LOPROC 0xff00
+#define ET_HIPROC 0xffff
+
+/* This is the info that is needed to parse the dynamic section of the file */
+#define DT_NULL 0
+#define DT_NEEDED 1
+#define DT_PLTRELSZ 2
+#define DT_PLTGOT 3
+#define DT_HASH 4
+#define DT_STRTAB 5
+#define DT_SYMTAB 6
+#define DT_RELA 7
+#define DT_RELASZ 8
+#define DT_RELAENT 9
+#define DT_STRSZ 10
+#define DT_SYMENT 11
+#define DT_INIT 12
+#define DT_FINI 13
+#define DT_SONAME 14
+#define DT_RPATH 15
+#define DT_SYMBOLIC 16
+#define DT_REL 17
+#define DT_RELSZ 18
+#define DT_RELENT 19
+#define DT_PLTREL 20
+#define DT_DEBUG 21
+#define DT_TEXTREL 22
+#define DT_JMPREL 23
+#define DT_ENCODING 32
+#define OLD_DT_LOOS 0x60000000
+#define DT_LOOS 0x6000000d
+#define DT_HIOS 0x6ffff000
+#define DT_VALRNGLO 0x6ffffd00
+#define DT_VALRNGHI 0x6ffffdff
+#define DT_ADDRRNGLO 0x6ffffe00
+#define DT_GNU_HASH 0x6ffffef5
+#define DT_ADDRRNGHI 0x6ffffeff
+#define DT_VERSYM 0x6ffffff0
+#define DT_RELACOUNT 0x6ffffff9
+#define DT_RELCOUNT 0x6ffffffa
+#define DT_FLAGS_1 0x6ffffffb
+#define DT_VERDEF 0x6ffffffc
+#define DT_VERDEFNUM 0x6ffffffd
+#define DT_VERNEED 0x6ffffffe
+#define DT_VERNEEDNUM 0x6fffffff
+#define OLD_DT_HIOS 0x6fffffff
+#define DT_LOPROC 0x70000000
+#define DT_HIPROC 0x7fffffff
+
+/* This info is needed when parsing the symbol table */
+#define STB_LOCAL 0
+#define STB_GLOBAL 1
+#define STB_WEAK 2
+
+#define STN_UNDEF 0
+
+#define STT_NOTYPE 0
+#define STT_OBJECT 1
+#define STT_FUNC 2
+#define STT_SECTION 3
+#define STT_FILE 4
+#define STT_COMMON 5
+#define STT_TLS 6
+
+#define VER_FLG_BASE 0x1
+#define VER_FLG_WEAK 0x2
+
+#define ELF_ST_BIND(x) ((x) >> 4)
+#define ELF_ST_TYPE(x) ((x) & 0xf)
+#define ELF32_ST_BIND(x) ELF_ST_BIND(x)
+#define ELF32_ST_TYPE(x) ELF_ST_TYPE(x)
+#define ELF64_ST_BIND(x) ELF_ST_BIND(x)
+#define ELF64_ST_TYPE(x) ELF_ST_TYPE(x)
+
+typedef struct {
+ Elf32_Sword d_tag;
+ union {
+ Elf32_Sword d_val;
+ Elf32_Addr d_ptr;
+ } d_un;
+} Elf32_Dyn;
+
+typedef struct {
+ Elf64_Sxword d_tag; /* entry tag value */
+ union {
+ Elf64_Xword d_val;
+ Elf64_Addr d_ptr;
+ } d_un;
+} Elf64_Dyn;
+
+/* The following are used with relocations */
+#define ELF32_R_SYM(x) ((x) >> 8)
+#define ELF32_R_TYPE(x) ((x) & 0xff)
+
+#define ELF64_R_SYM(i) ((i) >> 32)
+#define ELF64_R_TYPE(i) ((i) & 0xffffffff)
+
+typedef struct elf32_rel {
+ Elf32_Addr r_offset;
+ Elf32_Word r_info;
+} Elf32_Rel;
+
+typedef struct elf64_rel {
+ Elf64_Addr r_offset; /* Location at which to apply the action */
+ Elf64_Xword r_info; /* index and type of relocation */
+} Elf64_Rel;
+
+typedef struct elf32_rela {
+ Elf32_Addr r_offset;
+ Elf32_Word r_info;
+ Elf32_Sword r_addend;
+} Elf32_Rela;
+
+typedef struct elf64_rela {
+ Elf64_Addr r_offset; /* Location at which to apply the action */
+ Elf64_Xword r_info; /* index and type of relocation */
+ Elf64_Sxword r_addend; /* Constant addend used to compute value */
+} Elf64_Rela;
+
+typedef struct elf32_sym {
+ Elf32_Word st_name;
+ Elf32_Addr st_value;
+ Elf32_Word st_size;
+ unsigned char st_info;
+ unsigned char st_other;
+ Elf32_Half st_shndx;
+} Elf32_Sym;
+
+typedef struct elf64_sym {
+ Elf64_Word st_name; /* Symbol name, index in string tbl */
+ unsigned char st_info; /* Type and binding attributes */
+ unsigned char st_other; /* No defined meaning, 0 */
+ Elf64_Half st_shndx; /* Associated section index */
+ Elf64_Addr st_value; /* Value of the symbol */
+ Elf64_Xword st_size; /* Associated symbol size */
+} Elf64_Sym;
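/*
 * Illustrative sketch, not part of this patch, showing the intended use of
 * the ELF_ST_* accessors defined above on a symbol table entry.
 */
static int is_global_function(const Elf64_Sym *sym)
{
	return ELF64_ST_BIND(sym->st_info) == STB_GLOBAL &&
	       ELF64_ST_TYPE(sym->st_info) == STT_FUNC;
}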
+
+
+#define EI_NIDENT 16
+
+typedef struct elf32_hdr {
+ unsigned char e_ident[EI_NIDENT];
+ Elf32_Half e_type;
+ Elf32_Half e_machine;
+ Elf32_Word e_version;
+ Elf32_Addr e_entry; /* Entry point */
+ Elf32_Off e_phoff;
+ Elf32_Off e_shoff;
+ Elf32_Word e_flags;
+ Elf32_Half e_ehsize;
+ Elf32_Half e_phentsize;
+ Elf32_Half e_phnum;
+ Elf32_Half e_shentsize;
+ Elf32_Half e_shnum;
+ Elf32_Half e_shstrndx;
+} Elf32_Ehdr;
+
+typedef struct elf64_hdr {
+ unsigned char e_ident[EI_NIDENT]; /* ELF "magic number" */
+ Elf64_Half e_type;
+ Elf64_Half e_machine;
+ Elf64_Word e_version;
+ Elf64_Addr e_entry; /* Entry point virtual address */
+ Elf64_Off e_phoff; /* Program header table file offset */
+ Elf64_Off e_shoff; /* Section header table file offset */
+ Elf64_Word e_flags;
+ Elf64_Half e_ehsize;
+ Elf64_Half e_phentsize;
+ Elf64_Half e_phnum;
+ Elf64_Half e_shentsize;
+ Elf64_Half e_shnum;
+ Elf64_Half e_shstrndx;
+} Elf64_Ehdr;
+
+/* These constants define the permissions on sections in the program
+ header, p_flags. */
+#define PF_R 0x4
+#define PF_W 0x2
+#define PF_X 0x1
+
+typedef struct elf32_phdr {
+ Elf32_Word p_type;
+ Elf32_Off p_offset;
+ Elf32_Addr p_vaddr;
+ Elf32_Addr p_paddr;
+ Elf32_Word p_filesz;
+ Elf32_Word p_memsz;
+ Elf32_Word p_flags;
+ Elf32_Word p_align;
+} Elf32_Phdr;
+
+typedef struct elf64_phdr {
+ Elf64_Word p_type;
+ Elf64_Word p_flags;
+ Elf64_Off p_offset; /* Segment file offset */
+ Elf64_Addr p_vaddr; /* Segment virtual address */
+ Elf64_Addr p_paddr; /* Segment physical address */
+ Elf64_Xword p_filesz; /* Segment size in file */
+ Elf64_Xword p_memsz; /* Segment size in memory */
+ Elf64_Xword p_align; /* Segment alignment, file & memory */
+} Elf64_Phdr;
+
+/* sh_type */
+#define SHT_NULL 0
+#define SHT_PROGBITS 1
+#define SHT_SYMTAB 2
+#define SHT_STRTAB 3
+#define SHT_RELA 4
+#define SHT_HASH 5
+#define SHT_DYNAMIC 6
+#define SHT_NOTE 7
+#define SHT_NOBITS 8
+#define SHT_REL 9
+#define SHT_SHLIB 10
+#define SHT_DYNSYM 11
+#define SHT_NUM 12
+#define SHT_LOPROC 0x70000000
+#define SHT_HIPROC 0x7fffffff
+#define SHT_LOUSER 0x80000000
+#define SHT_HIUSER 0xffffffff
+
+/* sh_flags */
+#define SHF_WRITE 0x1
+#define SHF_ALLOC 0x2
+#define SHF_EXECINSTR 0x4
+#define SHF_RELA_LIVEPATCH 0x00100000
+#define SHF_RO_AFTER_INIT 0x00200000
+#define SHF_MASKPROC 0xf0000000
+
+/* special section indexes */
+#define SHN_UNDEF 0
+#define SHN_LORESERVE 0xff00
+#define SHN_LOPROC 0xff00
+#define SHN_HIPROC 0xff1f
+#define SHN_LIVEPATCH 0xff20
+#define SHN_ABS 0xfff1
+#define SHN_COMMON 0xfff2
+#define SHN_HIRESERVE 0xffff
+
+typedef struct elf32_shdr {
+ Elf32_Word sh_name;
+ Elf32_Word sh_type;
+ Elf32_Word sh_flags;
+ Elf32_Addr sh_addr;
+ Elf32_Off sh_offset;
+ Elf32_Word sh_size;
+ Elf32_Word sh_link;
+ Elf32_Word sh_info;
+ Elf32_Word sh_addralign;
+ Elf32_Word sh_entsize;
+} Elf32_Shdr;
+
+typedef struct elf64_shdr {
+ Elf64_Word sh_name; /* Section name, index in string tbl */
+ Elf64_Word sh_type; /* Type of section */
+ Elf64_Xword sh_flags; /* Miscellaneous section attributes */
+ Elf64_Addr sh_addr; /* Section virtual addr at execution */
+ Elf64_Off sh_offset; /* Section file offset */
+ Elf64_Xword sh_size; /* Size of section in bytes */
+ Elf64_Word sh_link; /* Index of another section */
+ Elf64_Word sh_info; /* Additional section information */
+ Elf64_Xword sh_addralign; /* Section alignment */
+ Elf64_Xword sh_entsize; /* Entry size if section holds table */
+} Elf64_Shdr;
+
+#define EI_MAG0 0 /* e_ident[] indexes */
+#define EI_MAG1 1
+#define EI_MAG2 2
+#define EI_MAG3 3
+#define EI_CLASS 4
+#define EI_DATA 5
+#define EI_VERSION 6
+#define EI_OSABI 7
+#define EI_PAD 8
+
+#define ELFMAG0 0x7f /* EI_MAG */
+#define ELFMAG1 'E'
+#define ELFMAG2 'L'
+#define ELFMAG3 'F'
+#define ELFMAG "\177ELF"
+#define SELFMAG 4
+
+#define ELFCLASSNONE 0 /* EI_CLASS */
+#define ELFCLASS32 1
+#define ELFCLASS64 2
+#define ELFCLASSNUM 3
+
+#define ELFDATANONE 0 /* e_ident[EI_DATA] */
+#define ELFDATA2LSB 1
+#define ELFDATA2MSB 2
+
+#define EV_NONE 0 /* e_version, EI_VERSION */
+#define EV_CURRENT 1
+#define EV_NUM 2
+
+#define ELFOSABI_NONE 0
+#define ELFOSABI_LINUX 3
+
+#ifndef ELF_OSABI
+#define ELF_OSABI ELFOSABI_NONE
+#endif
+
+/*
+ * Notes used in ET_CORE. Architectures export some of the arch register sets
+ * using the corresponding note types via the PTRACE_GETREGSET and
+ * PTRACE_SETREGSET requests.
+ * The note name for these types is "LINUX", except NT_PRFPREG that is named
+ * "CORE".
+ */
+#define NT_PRSTATUS 1
+#define NT_PRFPREG 2
+#define NT_PRPSINFO 3
+#define NT_TASKSTRUCT 4
+#define NT_AUXV 6
+/*
+ * Note to userspace developers: size of NT_SIGINFO note may increase
+ * in the future to accommodate more fields, don't assume it is fixed!
+ */
+#define NT_SIGINFO 0x53494749
+#define NT_FILE 0x46494c45
+#define NT_PRXFPREG 0x46e62b7f /* copied from gdb5.1/include/elf/common.h */
+#define NT_PPC_VMX 0x100 /* PowerPC Altivec/VMX registers */
+#define NT_PPC_SPE 0x101 /* PowerPC SPE/EVR registers */
+#define NT_PPC_VSX 0x102 /* PowerPC VSX registers */
+#define NT_PPC_TAR 0x103 /* Target Address Register */
+#define NT_PPC_PPR 0x104 /* Program Priority Register */
+#define NT_PPC_DSCR 0x105 /* Data Stream Control Register */
+#define NT_PPC_EBB 0x106 /* Event Based Branch Registers */
+#define NT_PPC_PMU 0x107 /* Performance Monitor Registers */
+#define NT_PPC_TM_CGPR 0x108 /* TM checkpointed GPR Registers */
+#define NT_PPC_TM_CFPR 0x109 /* TM checkpointed FPR Registers */
+#define NT_PPC_TM_CVMX 0x10a /* TM checkpointed VMX Registers */
+#define NT_PPC_TM_CVSX 0x10b /* TM checkpointed VSX Registers */
+#define NT_PPC_TM_SPR 0x10c /* TM Special Purpose Registers */
+#define NT_PPC_TM_CTAR 0x10d /* TM checkpointed Target Address Register */
+#define NT_PPC_TM_CPPR 0x10e /* TM checkpointed Program Priority Register */
+#define NT_PPC_TM_CDSCR 0x10f /* TM checkpointed Data Stream Control Register */
+#define NT_PPC_PKEY 0x110 /* Memory Protection Keys registers */
+#define NT_PPC_DEXCR 0x111 /* PowerPC DEXCR registers */
+#define NT_PPC_HASHKEYR 0x112 /* PowerPC HASHKEYR register */
+#define NT_386_TLS 0x200 /* i386 TLS slots (struct user_desc) */
+#define NT_386_IOPERM 0x201 /* x86 io permission bitmap (1=deny) */
+#define NT_X86_XSTATE 0x202 /* x86 extended state using xsave */
+/* Old binutils treats 0x203 as a CET state */
+#define NT_X86_SHSTK 0x204 /* x86 SHSTK state */
+#define NT_X86_XSAVE_LAYOUT 0x205 /* XSAVE layout description */
+#define NT_S390_HIGH_GPRS 0x300 /* s390 upper register halves */
+#define NT_S390_TIMER 0x301 /* s390 timer register */
+#define NT_S390_TODCMP 0x302 /* s390 TOD clock comparator register */
+#define NT_S390_TODPREG 0x303 /* s390 TOD programmable register */
+#define NT_S390_CTRS 0x304 /* s390 control registers */
+#define NT_S390_PREFIX 0x305 /* s390 prefix register */
+#define NT_S390_LAST_BREAK 0x306 /* s390 breaking event address */
+#define NT_S390_SYSTEM_CALL 0x307 /* s390 system call restart data */
+#define NT_S390_TDB 0x308 /* s390 transaction diagnostic block */
+#define NT_S390_VXRS_LOW 0x309 /* s390 vector registers 0-15 upper half */
+#define NT_S390_VXRS_HIGH 0x30a /* s390 vector registers 16-31 */
+#define NT_S390_GS_CB 0x30b /* s390 guarded storage registers */
+#define NT_S390_GS_BC 0x30c /* s390 guarded storage broadcast control block */
+#define NT_S390_RI_CB 0x30d /* s390 runtime instrumentation */
+#define NT_S390_PV_CPU_DATA 0x30e /* s390 protvirt cpu dump data */
+#define NT_ARM_VFP 0x400 /* ARM VFP/NEON registers */
+#define NT_ARM_TLS 0x401 /* ARM TLS register */
+#define NT_ARM_HW_BREAK 0x402 /* ARM hardware breakpoint registers */
+#define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
+#define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
+#define NT_ARM_SVE 0x405 /* ARM Scalable Vector Extension registers */
+#define NT_ARM_PAC_MASK 0x406 /* ARM pointer authentication code masks */
+#define NT_ARM_PACA_KEYS 0x407 /* ARM pointer authentication address keys */
+#define NT_ARM_PACG_KEYS 0x408 /* ARM pointer authentication generic key */
+#define NT_ARM_TAGGED_ADDR_CTRL 0x409 /* arm64 tagged address control (prctl()) */
+#define NT_ARM_PAC_ENABLED_KEYS 0x40a /* arm64 ptr auth enabled keys (prctl()) */
+#define NT_ARM_SSVE 0x40b /* ARM Streaming SVE registers */
+#define NT_ARM_ZA 0x40c /* ARM SME ZA registers */
+#define NT_ARM_ZT 0x40d /* ARM SME ZT registers */
+#define NT_ARM_FPMR 0x40e /* ARM floating point mode register */
+#define NT_ARM_POE 0x40f /* ARM POE registers */
+#define NT_ARM_GCS 0x410 /* ARM GCS state */
+#define NT_ARC_V2 0x600 /* ARCv2 accumulator/extra registers */
+#define NT_VMCOREDD 0x700 /* Vmcore Device Dump Note */
+#define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */
+#define NT_MIPS_FP_MODE 0x801 /* MIPS floating-point mode */
+#define NT_MIPS_MSA 0x802 /* MIPS SIMD registers */
+#define NT_RISCV_CSR 0x900 /* RISC-V Control and Status Registers */
+#define NT_RISCV_VECTOR 0x901 /* RISC-V vector registers */
+#define NT_RISCV_TAGGED_ADDR_CTRL 0x902 /* RISC-V tagged address control (prctl()) */
+#define NT_LOONGARCH_CPUCFG 0xa00 /* LoongArch CPU config registers */
+#define NT_LOONGARCH_CSR 0xa01 /* LoongArch control and status registers */
+#define NT_LOONGARCH_LSX 0xa02 /* LoongArch Loongson SIMD Extension registers */
+#define NT_LOONGARCH_LASX 0xa03 /* LoongArch Loongson Advanced SIMD Extension registers */
+#define NT_LOONGARCH_LBT 0xa04 /* LoongArch Loongson Binary Translation registers */
+#define NT_LOONGARCH_HW_BREAK 0xa05 /* LoongArch hardware breakpoint registers */
+#define NT_LOONGARCH_HW_WATCH 0xa06 /* LoongArch hardware watchpoint registers */
+
+/* Note types with note name "GNU" */
+#define NT_GNU_PROPERTY_TYPE_0 5
+
+/* Note header in a PT_NOTE section */
+typedef struct elf32_note {
+ Elf32_Word n_namesz; /* Name size */
+ Elf32_Word n_descsz; /* Content size */
+ Elf32_Word n_type; /* Content type */
+} Elf32_Nhdr;
+
+/* Note header in a PT_NOTE section */
+typedef struct elf64_note {
+ Elf64_Word n_namesz; /* Name size */
+ Elf64_Word n_descsz; /* Content size */
+ Elf64_Word n_type; /* Content type */
+} Elf64_Nhdr;
+
+/* .note.gnu.property types for EM_AARCH64: */
+#define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000
+
+/* Bits for GNU_PROPERTY_AARCH64_FEATURE_1_BTI */
+#define GNU_PROPERTY_AARCH64_FEATURE_1_BTI (1U << 0)
+
+typedef struct {
+ Elf32_Half vd_version;
+ Elf32_Half vd_flags;
+ Elf32_Half vd_ndx;
+ Elf32_Half vd_cnt;
+ Elf32_Word vd_hash;
+ Elf32_Word vd_aux;
+ Elf32_Word vd_next;
+} Elf32_Verdef;
+
+typedef struct {
+ Elf64_Half vd_version;
+ Elf64_Half vd_flags;
+ Elf64_Half vd_ndx;
+ Elf64_Half vd_cnt;
+ Elf64_Word vd_hash;
+ Elf64_Word vd_aux;
+ Elf64_Word vd_next;
+} Elf64_Verdef;
+
+typedef struct {
+ Elf32_Word vda_name;
+ Elf32_Word vda_next;
+} Elf32_Verdaux;
+
+typedef struct {
+ Elf64_Word vda_name;
+ Elf64_Word vda_next;
+} Elf64_Verdaux;
+
+#endif /* _LINUX_ELF_H */
diff --git a/tools/include/uapi/linux/ethtool.h b/tools/include/uapi/linux/ethtool.h
deleted file mode 100644
index 47afae3895ec..000000000000
--- a/tools/include/uapi/linux/ethtool.h
+++ /dev/null
@@ -1,104 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-/*
- * ethtool.h: Defines for Linux ethtool.
- *
- * Copyright (C) 1998 David S. Miller (davem@redhat.com)
- * Copyright 2001 Jeff Garzik <jgarzik@pobox.com>
- * Portions Copyright 2001 Sun Microsystems (thockin@sun.com)
- * Portions Copyright 2002 Intel (eli.kupermann@intel.com,
- * christopher.leech@intel.com,
- * scott.feldman@intel.com)
- * Portions Copyright (C) Sun Microsystems 2008
- */
-
-#ifndef _UAPI_LINUX_ETHTOOL_H
-#define _UAPI_LINUX_ETHTOOL_H
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/if_ether.h>
-
-#define ETHTOOL_GCHANNELS 0x0000003c /* Get no of channels */
-
-/**
- * struct ethtool_channels - configuring number of network channel
- * @cmd: ETHTOOL_{G,S}CHANNELS
- * @max_rx: Read only. Maximum number of receive channel the driver support.
- * @max_tx: Read only. Maximum number of transmit channel the driver support.
- * @max_other: Read only. Maximum number of other channel the driver support.
- * @max_combined: Read only. Maximum number of combined channel the driver
- * support. Set of queues RX, TX or other.
- * @rx_count: Valid values are in the range 1 to the max_rx.
- * @tx_count: Valid values are in the range 1 to the max_tx.
- * @other_count: Valid values are in the range 1 to the max_other.
- * @combined_count: Valid values are in the range 1 to the max_combined.
- *
- * This can be used to configure RX, TX and other channels.
- */
-
-struct ethtool_channels {
- __u32 cmd;
- __u32 max_rx;
- __u32 max_tx;
- __u32 max_other;
- __u32 max_combined;
- __u32 rx_count;
- __u32 tx_count;
- __u32 other_count;
- __u32 combined_count;
-};
-
-#define ETHTOOL_FWVERS_LEN 32
-#define ETHTOOL_BUSINFO_LEN 32
-#define ETHTOOL_EROMVERS_LEN 32
-
-/**
- * struct ethtool_drvinfo - general driver and device information
- * @cmd: Command number = %ETHTOOL_GDRVINFO
- * @driver: Driver short name. This should normally match the name
- * in its bus driver structure (e.g. pci_driver::name). Must
- * not be an empty string.
- * @version: Driver version string; may be an empty string
- * @fw_version: Firmware version string; may be an empty string
- * @erom_version: Expansion ROM version string; may be an empty string
- * @bus_info: Device bus address. This should match the dev_name()
- * string for the underlying bus device, if there is one. May be
- * an empty string.
- * @reserved2: Reserved for future use; see the note on reserved space.
- * @n_priv_flags: Number of flags valid for %ETHTOOL_GPFLAGS and
- * %ETHTOOL_SPFLAGS commands; also the number of strings in the
- * %ETH_SS_PRIV_FLAGS set
- * @n_stats: Number of u64 statistics returned by the %ETHTOOL_GSTATS
- * command; also the number of strings in the %ETH_SS_STATS set
- * @testinfo_len: Number of results returned by the %ETHTOOL_TEST
- * command; also the number of strings in the %ETH_SS_TEST set
- * @eedump_len: Size of EEPROM accessible through the %ETHTOOL_GEEPROM
- * and %ETHTOOL_SEEPROM commands, in bytes
- * @regdump_len: Size of register dump returned by the %ETHTOOL_GREGS
- * command, in bytes
- *
- * Users can use the %ETHTOOL_GSSET_INFO command to get the number of
- * strings in any string set (from Linux 2.6.34).
- *
- * Drivers should set at most @driver, @version, @fw_version and
- * @bus_info in their get_drvinfo() implementation. The ethtool
- * core fills in the other fields using other driver operations.
- */
-struct ethtool_drvinfo {
- __u32 cmd;
- char driver[32];
- char version[32];
- char fw_version[ETHTOOL_FWVERS_LEN];
- char bus_info[ETHTOOL_BUSINFO_LEN];
- char erom_version[ETHTOOL_EROMVERS_LEN];
- char reserved2[12];
- __u32 n_priv_flags;
- __u32 n_stats;
- __u32 testinfo_len;
- __u32 eedump_len;
- __u32 regdump_len;
-};
-
-#define ETHTOOL_GDRVINFO 0x00000003
-
-#endif /* _UAPI_LINUX_ETHTOOL_H */
diff --git a/tools/include/uapi/linux/fanotify.h b/tools/include/uapi/linux/fanotify.h
new file mode 100644
index 000000000000..e710967c7c26
--- /dev/null
+++ b/tools/include/uapi/linux/fanotify.h
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_FANOTIFY_H
+#define _UAPI_LINUX_FANOTIFY_H
+
+#include <linux/types.h>
+
+/* The following are events that user-space can register for */
+#define FAN_ACCESS 0x00000001 /* File was accessed */
+#define FAN_MODIFY 0x00000002 /* File was modified */
+#define FAN_ATTRIB 0x00000004 /* Metadata changed */
+#define FAN_CLOSE_WRITE 0x00000008 /* Writable file closed */
+#define FAN_CLOSE_NOWRITE 0x00000010 /* Unwritable file closed */
+#define FAN_OPEN 0x00000020 /* File was opened */
+#define FAN_MOVED_FROM 0x00000040 /* File was moved from X */
+#define FAN_MOVED_TO 0x00000080 /* File was moved to Y */
+#define FAN_CREATE 0x00000100 /* Subfile was created */
+#define FAN_DELETE 0x00000200 /* Subfile was deleted */
+#define FAN_DELETE_SELF 0x00000400 /* Self was deleted */
+#define FAN_MOVE_SELF 0x00000800 /* Self was moved */
+#define FAN_OPEN_EXEC 0x00001000 /* File was opened for exec */
+
+#define FAN_Q_OVERFLOW 0x00004000 /* Event queue overflowed */
+#define FAN_FS_ERROR 0x00008000 /* Filesystem error */
+
+#define FAN_OPEN_PERM 0x00010000 /* File open in perm check */
+#define FAN_ACCESS_PERM 0x00020000 /* File accessed in perm check */
+#define FAN_OPEN_EXEC_PERM 0x00040000 /* File open/exec in perm check */
+/* #define FAN_DIR_MODIFY 0x00080000 */ /* Deprecated (reserved) */
+
+#define FAN_PRE_ACCESS 0x00100000 /* Pre-content access hook */
+#define FAN_MNT_ATTACH 0x01000000 /* Mount was attached */
+#define FAN_MNT_DETACH 0x02000000 /* Mount was detached */
+
+#define FAN_EVENT_ON_CHILD 0x08000000 /* Interested in child events */
+
+#define FAN_RENAME 0x10000000 /* File was renamed */
+
+#define FAN_ONDIR 0x40000000 /* Event occurred against dir */
+
+/* helper events */
+#define FAN_CLOSE (FAN_CLOSE_WRITE | FAN_CLOSE_NOWRITE) /* close */
+#define FAN_MOVE (FAN_MOVED_FROM | FAN_MOVED_TO) /* moves */
+
+/* flags used for fanotify_init() */
+#define FAN_CLOEXEC 0x00000001
+#define FAN_NONBLOCK 0x00000002
+
+/* These are NOT bitwise flags. Both bits are used together. */
+#define FAN_CLASS_NOTIF 0x00000000
+#define FAN_CLASS_CONTENT 0x00000004
+#define FAN_CLASS_PRE_CONTENT 0x00000008
+
+/* Deprecated - do not use this in programs and do not add new flags here! */
+#define FAN_ALL_CLASS_BITS (FAN_CLASS_NOTIF | FAN_CLASS_CONTENT | \
+ FAN_CLASS_PRE_CONTENT)
+
+#define FAN_UNLIMITED_QUEUE 0x00000010
+#define FAN_UNLIMITED_MARKS 0x00000020
+#define FAN_ENABLE_AUDIT 0x00000040
+
+/* Flags to determine fanotify event format */
+#define FAN_REPORT_PIDFD 0x00000080 /* Report pidfd for event->pid */
+#define FAN_REPORT_TID 0x00000100 /* event->pid is thread id */
+#define FAN_REPORT_FID 0x00000200 /* Report unique file id */
+#define FAN_REPORT_DIR_FID 0x00000400 /* Report unique directory id */
+#define FAN_REPORT_NAME 0x00000800 /* Report events with name */
+#define FAN_REPORT_TARGET_FID 0x00001000 /* Report dirent target id */
+#define FAN_REPORT_FD_ERROR 0x00002000 /* event->fd can report error */
+#define FAN_REPORT_MNT 0x00004000 /* Report mount events */
+
+/* Convenience macro - FAN_REPORT_NAME requires FAN_REPORT_DIR_FID */
+#define FAN_REPORT_DFID_NAME (FAN_REPORT_DIR_FID | FAN_REPORT_NAME)
+/* Convenience macro - FAN_REPORT_TARGET_FID requires all other FID flags */
+#define FAN_REPORT_DFID_NAME_TARGET (FAN_REPORT_DFID_NAME | \
+ FAN_REPORT_FID | FAN_REPORT_TARGET_FID)
+
+/* Deprecated - do not use this in programs and do not add new flags here! */
+#define FAN_ALL_INIT_FLAGS (FAN_CLOEXEC | FAN_NONBLOCK | \
+ FAN_ALL_CLASS_BITS | FAN_UNLIMITED_QUEUE |\
+ FAN_UNLIMITED_MARKS)
+
+/* flags used for fanotify_modify_mark() */
+#define FAN_MARK_ADD 0x00000001
+#define FAN_MARK_REMOVE 0x00000002
+#define FAN_MARK_DONT_FOLLOW 0x00000004
+#define FAN_MARK_ONLYDIR 0x00000008
+/* FAN_MARK_MOUNT is 0x00000010 */
+#define FAN_MARK_IGNORED_MASK 0x00000020
+#define FAN_MARK_IGNORED_SURV_MODIFY 0x00000040
+#define FAN_MARK_FLUSH 0x00000080
+/* FAN_MARK_FILESYSTEM is 0x00000100 */
+#define FAN_MARK_EVICTABLE 0x00000200
+/* This bit is mutually exclusive with FAN_MARK_IGNORED_MASK bit */
+#define FAN_MARK_IGNORE 0x00000400
+
+/* These are NOT bitwise flags. Both bits can be used together. */
+#define FAN_MARK_INODE 0x00000000
+#define FAN_MARK_MOUNT 0x00000010
+#define FAN_MARK_FILESYSTEM 0x00000100
+#define FAN_MARK_MNTNS 0x00000110
+
+/*
+ * Convenience macro - FAN_MARK_IGNORE requires FAN_MARK_IGNORED_SURV_MODIFY
+ * for non-inode mark types.
+ */
+#define FAN_MARK_IGNORE_SURV (FAN_MARK_IGNORE | FAN_MARK_IGNORED_SURV_MODIFY)
+
+/* Deprecated - do not use this in programs and do not add new flags here! */
+#define FAN_ALL_MARK_FLAGS (FAN_MARK_ADD |\
+ FAN_MARK_REMOVE |\
+ FAN_MARK_DONT_FOLLOW |\
+ FAN_MARK_ONLYDIR |\
+ FAN_MARK_MOUNT |\
+ FAN_MARK_IGNORED_MASK |\
+ FAN_MARK_IGNORED_SURV_MODIFY |\
+ FAN_MARK_FLUSH)
+
+/* Deprecated - do not use this in programs and do not add new flags here! */
+#define FAN_ALL_EVENTS (FAN_ACCESS |\
+ FAN_MODIFY |\
+ FAN_CLOSE |\
+ FAN_OPEN)
+
+/*
+ * All events which require a permission response from userspace
+ */
+/* Deprecated - do not use this in programs and do not add new flags here! */
+#define FAN_ALL_PERM_EVENTS (FAN_OPEN_PERM |\
+ FAN_ACCESS_PERM)
+
+/* Deprecated - do not use this in programs and do not add new flags here! */
+#define FAN_ALL_OUTGOING_EVENTS (FAN_ALL_EVENTS |\
+ FAN_ALL_PERM_EVENTS |\
+ FAN_Q_OVERFLOW)
+
+#define FANOTIFY_METADATA_VERSION 3
+
+struct fanotify_event_metadata {
+ __u32 event_len;
+ __u8 vers;
+ __u8 reserved;
+ __u16 metadata_len;
+ __aligned_u64 mask;
+ __s32 fd;
+ __s32 pid;
+};
+
+#define FAN_EVENT_INFO_TYPE_FID 1
+#define FAN_EVENT_INFO_TYPE_DFID_NAME 2
+#define FAN_EVENT_INFO_TYPE_DFID 3
+#define FAN_EVENT_INFO_TYPE_PIDFD 4
+#define FAN_EVENT_INFO_TYPE_ERROR 5
+#define FAN_EVENT_INFO_TYPE_RANGE 6
+#define FAN_EVENT_INFO_TYPE_MNT 7
+
+/* Special info types for FAN_RENAME */
+#define FAN_EVENT_INFO_TYPE_OLD_DFID_NAME 10
+/* Reserved for FAN_EVENT_INFO_TYPE_OLD_DFID 11 */
+#define FAN_EVENT_INFO_TYPE_NEW_DFID_NAME 12
+/* Reserved for FAN_EVENT_INFO_TYPE_NEW_DFID 13 */
+
+/* Variable length info record following event metadata */
+struct fanotify_event_info_header {
+ __u8 info_type;
+ __u8 pad;
+ __u16 len;
+};
+
+/*
+ * Unique file identifier info record.
+ * This structure is used for records of types FAN_EVENT_INFO_TYPE_FID,
+ * FAN_EVENT_INFO_TYPE_DFID and FAN_EVENT_INFO_TYPE_DFID_NAME.
+ * For FAN_EVENT_INFO_TYPE_DFID_NAME there is additionally a null terminated
+ * name immediately after the file handle.
+ */
+struct fanotify_event_info_fid {
+ struct fanotify_event_info_header hdr;
+ __kernel_fsid_t fsid;
+ /*
+ * Following is an opaque struct file_handle that can be passed as
+ * an argument to open_by_handle_at(2).
+ */
+ unsigned char handle[];
+};
+
+/*
+ * This structure is used for info records of type FAN_EVENT_INFO_TYPE_PIDFD.
+ * It holds a pidfd for the pid that was responsible for generating an event.
+ */
+struct fanotify_event_info_pidfd {
+ struct fanotify_event_info_header hdr;
+ __s32 pidfd;
+};
+
+struct fanotify_event_info_error {
+ struct fanotify_event_info_header hdr;
+ __s32 error;
+ __u32 error_count;
+};
+
+struct fanotify_event_info_range {
+ struct fanotify_event_info_header hdr;
+ __u32 pad;
+ __u64 offset;
+ __u64 count;
+};
+
+struct fanotify_event_info_mnt {
+ struct fanotify_event_info_header hdr;
+ __u64 mnt_id;
+};
+
+/*
+ * User space may need to record additional information about its decision.
+ * The extra information type records what kind of information is included.
+ * The default is none. We also define an extra information buffer whose
+ * size is determined by the extra information type.
+ *
+ * If the information type is Audit Rule, then the information following
+ * is the rule number that triggered the user space decision that
+ * requires auditing.
+ */
+
+#define FAN_RESPONSE_INFO_NONE 0
+#define FAN_RESPONSE_INFO_AUDIT_RULE 1
+
+struct fanotify_response {
+ __s32 fd;
+ __u32 response;
+};
+
+struct fanotify_response_info_header {
+ __u8 type;
+ __u8 pad;
+ __u16 len;
+};
+
+struct fanotify_response_info_audit_rule {
+ struct fanotify_response_info_header hdr;
+ __u32 rule_number;
+ __u32 subj_trust;
+ __u32 obj_trust;
+};
+
+/* Legit userspace responses to a _PERM event */
+#define FAN_ALLOW 0x01
+#define FAN_DENY 0x02
+/* An errno other than EPERM can be specified in the upper byte of a deny response */
+#define FAN_ERRNO_BITS 8
+#define FAN_ERRNO_SHIFT (32 - FAN_ERRNO_BITS)
+#define FAN_ERRNO_MASK ((1 << FAN_ERRNO_BITS) - 1)
+#define FAN_DENY_ERRNO(err) \
+ (FAN_DENY | ((((__u32)(err)) & FAN_ERRNO_MASK) << FAN_ERRNO_SHIFT))
+
+#define FAN_AUDIT 0x10 /* Bitmask to create audit record for result */
+#define FAN_INFO 0x20 /* Bitmask to indicate additional information */
+
+/* No fd set in event */
+#define FAN_NOFD -1
+#define FAN_NOPIDFD FAN_NOFD
+#define FAN_EPIDFD -2
+
+/* Helper functions to deal with fanotify_event_metadata buffers */
+#define FAN_EVENT_METADATA_LEN (sizeof(struct fanotify_event_metadata))
+
+#define FAN_EVENT_NEXT(meta, len) ((len) -= (meta)->event_len, \
+ (struct fanotify_event_metadata*)(((char *)(meta)) + \
+ (meta)->event_len))
+
+#define FAN_EVENT_OK(meta, len) ((long)(len) >= (long)FAN_EVENT_METADATA_LEN && \
+ (long)(meta)->event_len >= (long)FAN_EVENT_METADATA_LEN && \
+ (long)(meta)->event_len <= (long)(len))
+
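/*
 * Illustrative userspace sketch, not part of this patch, of the intended
 * FAN_EVENT_OK/FAN_EVENT_NEXT read loop. It uses the glibc wrappers from
 * <sys/fanotify.h>; error handling is trimmed for brevity.
 */
#include <sys/fanotify.h>
#include <fcntl.h>
#include <unistd.h>

static void watch_opens(const char *path)
{
	struct fanotify_event_metadata *meta;
	char buf[4096] __attribute__((aligned(8)));
	ssize_t len;
	int fd;

	fd = fanotify_init(FAN_CLASS_NOTIF | FAN_CLOEXEC, O_RDONLY);
	fanotify_mark(fd, FAN_MARK_ADD, FAN_OPEN | FAN_CLOSE_WRITE,
		      AT_FDCWD, path);

	while ((len = read(fd, buf, sizeof(buf))) > 0) {
		meta = (struct fanotify_event_metadata *)buf;
		for (; FAN_EVENT_OK(meta, len); meta = FAN_EVENT_NEXT(meta, len)) {
			/* meta->mask says what happened; meta->fd (when not
			 * FAN_NOFD) is an open descriptor for the object.
			 */
			if (meta->fd >= 0)
				close(meta->fd);
		}
	}
	close(fd);
}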
+#endif /* _UAPI_LINUX_FANOTIFY_H */
diff --git a/tools/include/uapi/linux/fcntl.h b/tools/include/uapi/linux/fcntl.h
deleted file mode 100644
index 282e90aeb163..000000000000
--- a/tools/include/uapi/linux/fcntl.h
+++ /dev/null
@@ -1,123 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef _UAPI_LINUX_FCNTL_H
-#define _UAPI_LINUX_FCNTL_H
-
-#include <asm/fcntl.h>
-#include <linux/openat2.h>
-
-#define F_SETLEASE (F_LINUX_SPECIFIC_BASE + 0)
-#define F_GETLEASE (F_LINUX_SPECIFIC_BASE + 1)
-
-/*
- * Cancel a blocking posix lock; internal use only until we expose an
- * asynchronous lock api to userspace:
- */
-#define F_CANCELLK (F_LINUX_SPECIFIC_BASE + 5)
-
-/* Create a file descriptor with FD_CLOEXEC set. */
-#define F_DUPFD_CLOEXEC (F_LINUX_SPECIFIC_BASE + 6)
-
-/*
- * Request nofications on a directory.
- * See below for events that may be notified.
- */
-#define F_NOTIFY (F_LINUX_SPECIFIC_BASE+2)
-
-/*
- * Set and get of pipe page size array
- */
-#define F_SETPIPE_SZ (F_LINUX_SPECIFIC_BASE + 7)
-#define F_GETPIPE_SZ (F_LINUX_SPECIFIC_BASE + 8)
-
-/*
- * Set/Get seals
- */
-#define F_ADD_SEALS (F_LINUX_SPECIFIC_BASE + 9)
-#define F_GET_SEALS (F_LINUX_SPECIFIC_BASE + 10)
-
-/*
- * Types of seals
- */
-#define F_SEAL_SEAL 0x0001 /* prevent further seals from being set */
-#define F_SEAL_SHRINK 0x0002 /* prevent file from shrinking */
-#define F_SEAL_GROW 0x0004 /* prevent file from growing */
-#define F_SEAL_WRITE 0x0008 /* prevent writes */
-#define F_SEAL_FUTURE_WRITE 0x0010 /* prevent future writes while mapped */
-#define F_SEAL_EXEC 0x0020 /* prevent chmod modifying exec bits */
-/* (1U << 31) is reserved for signed error codes */
-
-/*
- * Set/Get write life time hints. {GET,SET}_RW_HINT operate on the
- * underlying inode, while {GET,SET}_FILE_RW_HINT operate only on
- * the specific file.
- */
-#define F_GET_RW_HINT (F_LINUX_SPECIFIC_BASE + 11)
-#define F_SET_RW_HINT (F_LINUX_SPECIFIC_BASE + 12)
-#define F_GET_FILE_RW_HINT (F_LINUX_SPECIFIC_BASE + 13)
-#define F_SET_FILE_RW_HINT (F_LINUX_SPECIFIC_BASE + 14)
-
-/*
- * Valid hint values for F_{GET,SET}_RW_HINT. 0 is "not set", or can be
- * used to clear any hints previously set.
- */
-#define RWH_WRITE_LIFE_NOT_SET 0
-#define RWH_WRITE_LIFE_NONE 1
-#define RWH_WRITE_LIFE_SHORT 2
-#define RWH_WRITE_LIFE_MEDIUM 3
-#define RWH_WRITE_LIFE_LONG 4
-#define RWH_WRITE_LIFE_EXTREME 5
-
-/*
- * The originally introduced spelling is remained from the first
- * versions of the patch set that introduced the feature, see commit
- * v4.13-rc1~212^2~51.
- */
-#define RWF_WRITE_LIFE_NOT_SET RWH_WRITE_LIFE_NOT_SET
-
-/*
- * Types of directory notifications that may be requested.
- */
-#define DN_ACCESS 0x00000001 /* File accessed */
-#define DN_MODIFY 0x00000002 /* File modified */
-#define DN_CREATE 0x00000004 /* File created */
-#define DN_DELETE 0x00000008 /* File removed */
-#define DN_RENAME 0x00000010 /* File renamed */
-#define DN_ATTRIB 0x00000020 /* File changed attibutes */
-#define DN_MULTISHOT 0x80000000 /* Don't remove notifier */
-
-/*
- * The constants AT_REMOVEDIR and AT_EACCESS have the same value. AT_EACCESS is
- * meaningful only to faccessat, while AT_REMOVEDIR is meaningful only to
- * unlinkat. The two functions do completely different things and therefore,
- * the flags can be allowed to overlap. For example, passing AT_REMOVEDIR to
- * faccessat would be undefined behavior and thus treating it equivalent to
- * AT_EACCESS is valid undefined behavior.
- */
-#define AT_FDCWD -100 /* Special value used to indicate
- openat should use the current
- working directory. */
-#define AT_SYMLINK_NOFOLLOW 0x100 /* Do not follow symbolic links. */
-#define AT_EACCESS 0x200 /* Test access permitted for
- effective IDs, not real IDs. */
-#define AT_REMOVEDIR 0x200 /* Remove directory instead of
- unlinking file. */
-#define AT_SYMLINK_FOLLOW 0x400 /* Follow symbolic links. */
-#define AT_NO_AUTOMOUNT 0x800 /* Suppress terminal automount traversal */
-#define AT_EMPTY_PATH 0x1000 /* Allow empty relative pathname */
-
-#define AT_STATX_SYNC_TYPE 0x6000 /* Type of synchronisation required from statx() */
-#define AT_STATX_SYNC_AS_STAT 0x0000 /* - Do whatever stat() does */
-#define AT_STATX_FORCE_SYNC 0x2000 /* - Force the attributes to be sync'd with the server */
-#define AT_STATX_DONT_SYNC 0x4000 /* - Don't sync attributes with the server */
-
-#define AT_RECURSIVE 0x8000 /* Apply to the entire subtree */
-
-/* Flags for name_to_handle_at(2). We reuse AT_ flag space to save bits... */
-#define AT_HANDLE_FID AT_REMOVEDIR /* file handle is needed to
- compare object identity and may not
- be usable to open_by_handle_at(2) */
-#if defined(__KERNEL__)
-#define AT_GETATTR_NOSEC 0x80000000
-#endif
-
-#endif /* _UAPI_LINUX_FCNTL_H */
diff --git a/tools/include/uapi/linux/fs.h b/tools/include/uapi/linux/fs.h
index 48ad69f7722e..24ddf7bc4f25 100644
--- a/tools/include/uapi/linux/fs.h
+++ b/tools/include/uapi/linux/fs.h
@@ -28,8 +28,8 @@
* nr_file rlimit, so it's safe to set up a ridiculously high absolute
* upper limit on files-per-process.
*
- * Some programs (notably those using select()) may have to be
- * recompiled to take full advantage of the new limits..
+ * Some programs (notably those using select()) may have to be
+ * recompiled to take full advantage of the new limits..
*/
/* Fixed constants first: */
@@ -40,6 +40,15 @@
#define BLOCK_SIZE_BITS 10
#define BLOCK_SIZE (1<<BLOCK_SIZE_BITS)
+/* flags for integrity meta */
+#define IO_INTEGRITY_CHK_GUARD (1U << 0) /* enforce guard check */
+#define IO_INTEGRITY_CHK_REFTAG (1U << 1) /* enforce ref check */
+#define IO_INTEGRITY_CHK_APPTAG (1U << 2) /* enforce app check */
+
+#define IO_INTEGRITY_VALID_FLAGS (IO_INTEGRITY_CHK_GUARD | \
+ IO_INTEGRITY_CHK_REFTAG | \
+ IO_INTEGRITY_CHK_APPTAG)
+
#define SEEK_SET 0 /* seek relative to beginning of file */
#define SEEK_CUR 1 /* seek relative to current file position */
#define SEEK_END 2 /* seek relative to end of file */
@@ -64,6 +73,24 @@ struct fstrim_range {
__u64 minlen;
};
+/*
+ * We include a length field because some filesystems (vfat) have an identifier
+ * that we do want to expose as a UUID but that doesn't have the standard length.
+ *
+ * We use a fixed-size buffer because this interface will, by fiat, never
+ * support "UUIDs" longer than 16 bytes; we don't want to force all downstream
+ * users to have to deal with that.
+ */
+struct fsuuid2 {
+ __u8 len;
+ __u8 uuid[16];
+};
+
+struct fs_sysfs_path {
+ __u8 len;
+ __u8 name[128];
+};
+
/* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */
#define FILE_DEDUPE_RANGE_SAME 0
#define FILE_DEDUPE_RANGE_DIFFERS 1
@@ -215,6 +242,13 @@ struct fsxattr {
#define FS_IOC_FSSETXATTR _IOW('X', 32, struct fsxattr)
#define FS_IOC_GETFSLABEL _IOR(0x94, 49, char[FSLABEL_MAX])
#define FS_IOC_SETFSLABEL _IOW(0x94, 50, char[FSLABEL_MAX])
+/* Returns the external filesystem UUID, the same one blkid returns */
+#define FS_IOC_GETFSUUID _IOR(0x15, 0, struct fsuuid2)
+/*
+ * Returns the path component under /sys/fs/ that refers to this filesystem;
+ * also /sys/kernel/debug/ for filesystems with debugfs exports
+ */
+#define FS_IOC_GETFSSYSFSPATH _IOR(0x15, 1, struct fs_sysfs_path)
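/*
 * Illustrative userspace sketch, not part of this patch, calling the two
 * ioctls above on an fd opened anywhere on the filesystem of interest.
 * Error handling is trimmed for brevity.
 */
#include <linux/fs.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void print_fs_ids(const char *mountpoint)
{
	struct fsuuid2 uuid = {};
	struct fs_sysfs_path sysfs = {};
	int fd = open(mountpoint, O_RDONLY);

	if (ioctl(fd, FS_IOC_GETFSUUID, &uuid) == 0)
		printf("uuid is %u bytes long\n", uuid.len);
	if (ioctl(fd, FS_IOC_GETFSSYSFSPATH, &sysfs) == 0)
		printf("/sys/fs/%.*s\n", sysfs.len, (const char *)sysfs.name);
	close(fd);
}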
/*
* Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS)
@@ -301,12 +335,24 @@ typedef int __bitwise __kernel_rwf_t;
/* per-IO O_APPEND */
#define RWF_APPEND ((__force __kernel_rwf_t)0x00000010)
+/* per-IO negation of O_APPEND */
+#define RWF_NOAPPEND ((__force __kernel_rwf_t)0x00000020)
+
+/* Atomic Write */
+#define RWF_ATOMIC ((__force __kernel_rwf_t)0x00000040)
+
+/* buffered IO that drops the cache after reading or writing data */
+#define RWF_DONTCACHE ((__force __kernel_rwf_t)0x00000080)
+
/* mask of flags supported by the kernel */
#define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
- RWF_APPEND)
+ RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC |\
+ RWF_DONTCACHE)
+
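
The RWF_* bits are passed per call through preadv2()/pwritev2(). A minimal sketch using the new RWF_NOAPPEND flag to write at an explicit offset on an O_APPEND descriptor, assuming kernel and libc support for the flag; RWF_DONTCACHE could be or'ed in the same way:

    #define _GNU_SOURCE
    #include <sys/uio.h>

    /* Write len bytes at offset off, ignoring O_APPEND for this call only. */
    static ssize_t write_at(int fd, const void *buf, size_t len, off_t off)
    {
            struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };

            return pwritev2(fd, &iov, 1, off, RWF_NOAPPEND);
    }
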
+#define PROCFS_IOCTL_MAGIC 'f'
/* Pagemap ioctl */
-#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
+#define PAGEMAP_SCAN _IOWR(PROCFS_IOCTL_MAGIC, 16, struct pm_scan_arg)
/* Bitmasks provided in pm_scan_args masks and reported in page_region.categories. */
#define PAGE_IS_WPALLOWED (1 << 0)
@@ -317,6 +363,7 @@ typedef int __bitwise __kernel_rwf_t;
#define PAGE_IS_PFNZERO (1 << 5)
#define PAGE_IS_HUGE (1 << 6)
#define PAGE_IS_SOFT_DIRTY (1 << 7)
+#define PAGE_IS_GUARD (1 << 8)
/*
* struct page_region - Page region with flags
@@ -365,4 +412,158 @@ struct pm_scan_arg {
__u64 return_mask;
};
+/* /proc/<pid>/maps ioctl */
+#define PROCMAP_QUERY _IOWR(PROCFS_IOCTL_MAGIC, 17, struct procmap_query)
+
+enum procmap_query_flags {
+ /*
+ * VMA permission flags.
+ *
+ * Can be used as part of procmap_query.query_flags field to look up
+ * only VMAs satisfying specified subset of permissions. E.g., specifying
+ * PROCMAP_QUERY_VMA_READABLE only will return both readable and read/write VMAs,
+ * while having PROCMAP_QUERY_VMA_READABLE | PROCMAP_QUERY_VMA_WRITABLE will only
+ * return read/write VMAs, though both executable/non-executable and
+ * private/shared will be ignored.
+ *
+ * PROCMAP_QUERY_VMA_* flags are also returned in procmap_query.vma_flags
+ * field to specify actual VMA permissions.
+ */
+ PROCMAP_QUERY_VMA_READABLE = 0x01,
+ PROCMAP_QUERY_VMA_WRITABLE = 0x02,
+ PROCMAP_QUERY_VMA_EXECUTABLE = 0x04,
+ PROCMAP_QUERY_VMA_SHARED = 0x08,
+ /*
+ * Query modifier flags.
+ *
+ * By default VMA that covers provided address is returned, or -ENOENT
+ * is returned. With PROCMAP_QUERY_COVERING_OR_NEXT_VMA flag set, closest
+ * VMA with vma_start > addr will be returned if no covering VMA is
+ * found.
+ *
+ * PROCMAP_QUERY_FILE_BACKED_VMA instructs query to consider only VMAs that
+ * have file backing. Can be combined with PROCMAP_QUERY_COVERING_OR_NEXT_VMA
+ * to iterate all VMAs with file backing.
+ */
+ PROCMAP_QUERY_COVERING_OR_NEXT_VMA = 0x10,
+ PROCMAP_QUERY_FILE_BACKED_VMA = 0x20,
+};
+
+/*
+ * Input/output argument structure passed into the ioctl() call. It can be used
+ * to query a set of VMAs (Virtual Memory Areas) of a process.
+ *
+ * Each field can be one of three kinds, marked in a short comment to the
+ * right of the field:
+ * - "in", input argument, user has to provide this value, kernel doesn't modify it;
+ * - "out", output argument, kernel sets this field with VMA data;
+ * - "in/out", input and output argument; user provides initial value (used
+ * to specify maximum allowable buffer size), and kernel sets it to actual
+ * amount of data written (or zero, if there is no data).
+ *
+ * If a matching VMA is found (according to the criteria specified by
+ * query_addr/query_flags), all the out fields are filled in and ioctl()
+ * returns 0. If there is no matching VMA, -ENOENT will be returned.
+ * In case of any other error, a negative error code other than -ENOENT is
+ * returned.
+ *
+ * Most of the data is similar to what is returned as text in the /proc/<pid>/maps
+ * file, but procmap_query provides more querying flexibility. There are no
+ * consistency guarantees between subsequent ioctl() calls, but data returned
+ * for matched VMA is self-consistent.
+ */
+struct procmap_query {
+ /* Query struct size, for backwards/forward compatibility */
+ __u64 size;
+ /*
+ * Query flags, a combination of enum procmap_query_flags values.
+ * Defines query filtering and behavior, see enum procmap_query_flags.
+ *
+ * Input argument, provided by user. Kernel doesn't modify it.
+ */
+ __u64 query_flags; /* in */
+ /*
+ * Query address. By default, VMA that covers this address will
+ * be looked up. PROCMAP_QUERY_* flags above modify this default
+ * behavior further.
+ *
+ * Input argument, provided by user. Kernel doesn't modify it.
+ */
+ __u64 query_addr; /* in */
+ /* VMA starting (inclusive) and ending (exclusive) address, if VMA is found. */
+ __u64 vma_start; /* out */
+ __u64 vma_end; /* out */
+ /* VMA permissions flags. A combination of PROCMAP_QUERY_VMA_* flags. */
+ __u64 vma_flags; /* out */
+ /* VMA backing page size granularity. */
+ __u64 vma_page_size; /* out */
+ /*
+ * VMA file offset. If VMA has file backing, this specifies offset
+ * within the file that VMA's start address corresponds to.
+ * Is set to zero if VMA has no backing file.
+ */
+ __u64 vma_offset; /* out */
+ /* Backing file's inode number, or zero, if VMA has no backing file. */
+ __u64 inode; /* out */
+ /* Backing file's device major/minor number, or zero, if VMA has no backing file. */
+ __u32 dev_major; /* out */
+ __u32 dev_minor; /* out */
+ /*
+ * If set to non-zero value, signals the request to return VMA name
+ * (i.e., VMA's backing file's absolute path, with " (deleted)" suffix
+ * appended, if file was unlinked from FS) for matched VMA. VMA name
+ * can also be some special name (e.g., "[heap]", "[stack]") or could
+ * be even user-supplied with prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME).
+ *
+ * Kernel will set this field to zero, if VMA has no associated name.
+ * Otherwise kernel will return the actual number of bytes filled into the
+ * user-supplied buffer (see vma_name_addr field below), including the
+ * terminating zero.
+ *
+ * If VMA name is longer than the user-supplied maximum buffer size,
+ * -E2BIG error is returned.
+ *
+ * If this field is set to non-zero value, vma_name_addr should point
+ * to valid user space memory buffer of at least vma_name_size bytes.
+ * If set to zero, vma_name_addr should be set to zero as well.
+ */
+ __u32 vma_name_size; /* in/out */
+ /*
+ * If set to non-zero value, signals the request to extract and return
+ * VMA's backing file's build ID, if the backing file is an ELF file
+ * and it contains embedded build ID.
+ *
+ * Kernel will set this field to zero, if VMA has no backing file,
+ * backing file is not an ELF file, or ELF file has no build ID
+ * embedded.
+ *
+ * Build ID is a binary value (not a string). Kernel will set
+ * build_id_size field to exact number of bytes used for build ID.
+ * If build ID is requested and present, but needs more bytes than
+ * user-supplied maximum buffer size (see build_id_addr field below),
+ * -E2BIG error will be returned.
+ *
+ * If this field is set to non-zero value, build_id_addr should point
+ * to valid user space memory buffer of at least build_id_size bytes.
+ * If set to zero, build_id_addr should be set to zero as well.
+ */
+ __u32 build_id_size; /* in/out */
+ /*
+ * User-supplied address of a buffer of at least vma_name_size bytes
+ * for kernel to fill with matched VMA's name (see vma_name_size field
+ * description above for details).
+ *
+ * Should be set to zero if VMA name should not be returned.
+ */
+ __u64 vma_name_addr; /* in */
+ /*
+ * User-supplied address of a buffer of at least build_id_size bytes
+ * for kernel to fill with matched VMA's ELF build ID, if available
+ * (see build_id_size field description above for details).
+ *
+ * Should be set to zero if build ID should not be returned.
+ */
+ __u64 build_id_addr; /* in */
+};
+
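
A minimal sketch of querying the VMA that covers a given address in the current process, assuming a kernel that exposes PROCMAP_QUERY:

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    static int query_vma(unsigned long long addr)
    {
            struct procmap_query q;
            int fd = open("/proc/self/maps", O_RDONLY);
            int err;

            if (fd < 0)
                    return -1;

            memset(&q, 0, sizeof(q));
            q.size = sizeof(q);        /* for backwards/forward compatibility */
            q.query_addr = addr;
            q.query_flags = 0;         /* covering VMA only, any permissions */

            err = ioctl(fd, PROCMAP_QUERY, &q);
            if (err == 0)
                    printf("vma: %llx-%llx flags %llx\n",
                           q.vma_start, q.vma_end, q.vma_flags);
            close(fd);
            return err;
    }
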
#endif /* _UAPI_LINUX_FS_H */
diff --git a/tools/include/uapi/linux/fscrypt.h b/tools/include/uapi/linux/fscrypt.h
index 7a8f4c290187..3aff99f2696a 100644
--- a/tools/include/uapi/linux/fscrypt.h
+++ b/tools/include/uapi/linux/fscrypt.h
@@ -119,7 +119,7 @@ struct fscrypt_key_specifier {
*/
struct fscrypt_provisioning_key_payload {
__u32 type;
- __u32 __reserved;
+ __u32 flags;
__u8 raw[];
};
@@ -128,7 +128,9 @@ struct fscrypt_add_key_arg {
struct fscrypt_key_specifier key_spec;
__u32 raw_size;
__u32 key_id;
- __u32 __reserved[8];
+#define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED 0x00000001
+ __u32 flags;
+ __u32 __reserved[7];
__u8 raw[];
};
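
A minimal sketch of how the new flags field might be filled when adding a hardware-wrapped key with the existing FS_IOC_ADD_ENCRYPTION_KEY ioctl. Whether the running kernel and filesystem accept FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED is an assumption here; key_blob and key_len are caller-supplied:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/fscrypt.h>

    static int add_hw_wrapped_key(int mnt_fd, const void *key_blob, __u32 key_len)
    {
            struct fscrypt_add_key_arg *arg;
            int err;

            arg = calloc(1, sizeof(*arg) + key_len);
            if (!arg)
                    return -1;
            arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
            arg->raw_size = key_len;                  /* size of the wrapped blob */
            arg->flags = FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED;
            memcpy(arg->raw, key_blob, key_len);

            err = ioctl(mnt_fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
            free(arg);
            return err;
    }
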
diff --git a/tools/include/uapi/linux/genetlink.h b/tools/include/uapi/linux/genetlink.h
new file mode 100644
index 000000000000..ddba3ca01e39
--- /dev/null
+++ b/tools/include/uapi/linux/genetlink.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI__LINUX_GENERIC_NETLINK_H
+#define _UAPI__LINUX_GENERIC_NETLINK_H
+
+#include <linux/types.h>
+#include <linux/netlink.h>
+
+#define GENL_NAMSIZ 16 /* length of family name */
+
+#define GENL_MIN_ID NLMSG_MIN_TYPE
+#define GENL_MAX_ID 1023
+
+struct genlmsghdr {
+ __u8 cmd;
+ __u8 version;
+ __u16 reserved;
+};
+
+#define GENL_HDRLEN NLMSG_ALIGN(sizeof(struct genlmsghdr))
+
+#define GENL_ADMIN_PERM 0x01
+#define GENL_CMD_CAP_DO 0x02
+#define GENL_CMD_CAP_DUMP 0x04
+#define GENL_CMD_CAP_HASPOL 0x08
+#define GENL_UNS_ADMIN_PERM 0x10
+
+/*
+ * List of reserved static generic netlink identifiers:
+ */
+#define GENL_ID_CTRL NLMSG_MIN_TYPE
+#define GENL_ID_VFS_DQUOT (NLMSG_MIN_TYPE + 1)
+#define GENL_ID_PMCRAID (NLMSG_MIN_TYPE + 2)
+/* must be last reserved + 1 */
+#define GENL_START_ALLOC (NLMSG_MIN_TYPE + 3)
+
+/**************************************************************************
+ * Controller
+ **************************************************************************/
+
+enum {
+ CTRL_CMD_UNSPEC,
+ CTRL_CMD_NEWFAMILY,
+ CTRL_CMD_DELFAMILY,
+ CTRL_CMD_GETFAMILY,
+ CTRL_CMD_NEWOPS,
+ CTRL_CMD_DELOPS,
+ CTRL_CMD_GETOPS,
+ CTRL_CMD_NEWMCAST_GRP,
+ CTRL_CMD_DELMCAST_GRP,
+ CTRL_CMD_GETMCAST_GRP, /* unused */
+ CTRL_CMD_GETPOLICY,
+ __CTRL_CMD_MAX,
+};
+
+#define CTRL_CMD_MAX (__CTRL_CMD_MAX - 1)
+
+enum {
+ CTRL_ATTR_UNSPEC,
+ CTRL_ATTR_FAMILY_ID,
+ CTRL_ATTR_FAMILY_NAME,
+ CTRL_ATTR_VERSION,
+ CTRL_ATTR_HDRSIZE,
+ CTRL_ATTR_MAXATTR,
+ CTRL_ATTR_OPS,
+ CTRL_ATTR_MCAST_GROUPS,
+ CTRL_ATTR_POLICY,
+ CTRL_ATTR_OP_POLICY,
+ CTRL_ATTR_OP,
+ __CTRL_ATTR_MAX,
+};
+
+#define CTRL_ATTR_MAX (__CTRL_ATTR_MAX - 1)
+
+enum {
+ CTRL_ATTR_OP_UNSPEC,
+ CTRL_ATTR_OP_ID,
+ CTRL_ATTR_OP_FLAGS,
+ __CTRL_ATTR_OP_MAX,
+};
+
+#define CTRL_ATTR_OP_MAX (__CTRL_ATTR_OP_MAX - 1)
+
+enum {
+ CTRL_ATTR_MCAST_GRP_UNSPEC,
+ CTRL_ATTR_MCAST_GRP_NAME,
+ CTRL_ATTR_MCAST_GRP_ID,
+ __CTRL_ATTR_MCAST_GRP_MAX,
+};
+
+#define CTRL_ATTR_MCAST_GRP_MAX (__CTRL_ATTR_MCAST_GRP_MAX - 1)
+
+enum {
+ CTRL_ATTR_POLICY_UNSPEC,
+ CTRL_ATTR_POLICY_DO,
+ CTRL_ATTR_POLICY_DUMP,
+
+ __CTRL_ATTR_POLICY_DUMP_MAX,
+ CTRL_ATTR_POLICY_DUMP_MAX = __CTRL_ATTR_POLICY_DUMP_MAX - 1
+};
+
+#define CTRL_ATTR_POLICY_MAX (__CTRL_ATTR_POLICY_DUMP_MAX - 1)
+
+#endif /* _UAPI__LINUX_GENERIC_NETLINK_H */
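
These controller definitions are what userspace uses to resolve a generic netlink family name to a numeric family ID. A minimal sketch that only builds (does not send) a CTRL_CMD_GETFAMILY request; the buffer handling is deliberately simplified:

    #include <string.h>
    #include <linux/netlink.h>
    #include <linux/genetlink.h>

    static int build_getfamily(void *buf, size_t buflen, const char *family)
    {
            struct nlmsghdr *nlh = buf;
            struct genlmsghdr *genl;
            struct nlattr *attr;
            size_t name_len = strlen(family) + 1;

            if (buflen < NLMSG_SPACE(GENL_HDRLEN) + NLA_HDRLEN + NLA_ALIGN(name_len))
                    return -1;

            memset(buf, 0, buflen);
            nlh->nlmsg_type = GENL_ID_CTRL;
            nlh->nlmsg_flags = NLM_F_REQUEST;

            genl = NLMSG_DATA(nlh);
            genl->cmd = CTRL_CMD_GETFAMILY;
            genl->version = 1;

            /* single attribute: the family name we want resolved */
            attr = (struct nlattr *)((char *)genl + GENL_HDRLEN);
            attr->nla_type = CTRL_ATTR_FAMILY_NAME;
            attr->nla_len = NLA_HDRLEN + name_len;
            memcpy((char *)attr + NLA_HDRLEN, family, name_len);

            nlh->nlmsg_len = NLMSG_SPACE(GENL_HDRLEN) + NLA_ALIGN(attr->nla_len);
            return (int)nlh->nlmsg_len;
    }
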
diff --git a/tools/include/uapi/linux/if_addr.h b/tools/include/uapi/linux/if_addr.h
new file mode 100644
index 000000000000..aa7958b4e41d
--- /dev/null
+++ b/tools/include/uapi/linux/if_addr.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI__LINUX_IF_ADDR_H
+#define _UAPI__LINUX_IF_ADDR_H
+
+#include <linux/types.h>
+#include <linux/netlink.h>
+
+struct ifaddrmsg {
+ __u8 ifa_family;
+ __u8 ifa_prefixlen; /* The prefix length */
+ __u8 ifa_flags; /* Flags */
+ __u8 ifa_scope; /* Address scope */
+ __u32 ifa_index; /* Link index */
+};
+
+/*
+ * Important comment:
+ * IFA_ADDRESS is prefix address, rather than local interface address.
+ * It makes no difference for normally configured broadcast interfaces,
+ * but for point-to-point IFA_ADDRESS is DESTINATION address,
+ * local address is supplied in IFA_LOCAL attribute.
+ *
+ * IFA_FLAGS is a u32 attribute that extends the u8 field ifa_flags.
+ * If present, the value from struct ifaddrmsg will be ignored.
+ */
+enum {
+ IFA_UNSPEC,
+ IFA_ADDRESS,
+ IFA_LOCAL,
+ IFA_LABEL,
+ IFA_BROADCAST,
+ IFA_ANYCAST,
+ IFA_CACHEINFO,
+ IFA_MULTICAST,
+ IFA_FLAGS,
+ IFA_RT_PRIORITY, /* u32, priority/metric for prefix route */
+ IFA_TARGET_NETNSID,
+ IFA_PROTO, /* u8, address protocol */
+ __IFA_MAX,
+};
+
+#define IFA_MAX (__IFA_MAX - 1)
+
+/* ifa_flags */
+#define IFA_F_SECONDARY 0x01
+#define IFA_F_TEMPORARY IFA_F_SECONDARY
+
+#define IFA_F_NODAD 0x02
+#define IFA_F_OPTIMISTIC 0x04
+#define IFA_F_DADFAILED 0x08
+#define IFA_F_HOMEADDRESS 0x10
+#define IFA_F_DEPRECATED 0x20
+#define IFA_F_TENTATIVE 0x40
+#define IFA_F_PERMANENT 0x80
+#define IFA_F_MANAGETEMPADDR 0x100
+#define IFA_F_NOPREFIXROUTE 0x200
+#define IFA_F_MCAUTOJOIN 0x400
+#define IFA_F_STABLE_PRIVACY 0x800
+
+struct ifa_cacheinfo {
+ __u32 ifa_prefered;
+ __u32 ifa_valid;
+ __u32 cstamp; /* created timestamp, hundredths of seconds */
+ __u32 tstamp; /* updated timestamp, hundredths of seconds */
+};
+
+/* backwards compatibility for userspace */
+#ifndef __KERNEL__
+#define IFA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct ifaddrmsg))))
+#define IFA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct ifaddrmsg))
+#endif
+
+/* ifa_proto */
+#define IFAPROT_UNSPEC 0
+#define IFAPROT_KERNEL_LO 1 /* loopback */
+#define IFAPROT_KERNEL_RA 2 /* set by kernel from router announcement */
+#define IFAPROT_KERNEL_LL 3 /* link-local set by kernel */
+
+#endif
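
struct ifaddrmsg is the fixed header of RTM_NEWADDR/RTM_DELADDR/RTM_GETADDR messages. A minimal sketch of the fixed part of an address dump request; RTM_GETADDR and NLM_F_DUMP come from <linux/rtnetlink.h> and <linux/netlink.h>:

    #include <string.h>
    #include <linux/netlink.h>
    #include <linux/rtnetlink.h>
    #include <linux/if_addr.h>

    struct addr_dump_req {
            struct nlmsghdr nlh;
            struct ifaddrmsg ifa;
    };

    static void init_addr_dump(struct addr_dump_req *req, unsigned char family)
    {
            memset(req, 0, sizeof(*req));
            req->nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifaddrmsg));
            req->nlh.nlmsg_type = RTM_GETADDR;
            req->nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
            req->ifa.ifa_family = family;   /* e.g. AF_INET, or AF_UNSPEC for all */
    }
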
diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h
index a0aa05a28cf2..7e46ca4cd31b 100644
--- a/tools/include/uapi/linux/if_link.h
+++ b/tools/include/uapi/linux/if_link.h
@@ -377,6 +377,7 @@ enum {
IFLA_GSO_IPV4_MAX_SIZE,
IFLA_GRO_IPV4_MAX_SIZE,
IFLA_DPLL_PIN,
+ IFLA_MAX_PACING_OFFLOAD_HORIZON,
__IFLA_MAX
};
@@ -461,6 +462,286 @@ enum in6_addr_gen_mode {
/* Bridge section */
+/**
+ * DOC: Bridge enum definition
+ *
+ * Please *note* that the timer values in the following section are expected
+ * in clock_t format, which is seconds multiplied by USER_HZ (generally
+ * defined as 100).
+ *
+ * @IFLA_BR_FORWARD_DELAY
+ * The bridge forwarding delay is the time spent in LISTENING state
+ * (before moving to LEARNING) and in LEARNING state (before moving
+ * to FORWARDING). Only relevant if STP is enabled.
+ *
+ * The valid values are between (2 * USER_HZ) and (30 * USER_HZ).
+ * The default value is (15 * USER_HZ).
+ *
+ * @IFLA_BR_HELLO_TIME
+ * The time between hello packets sent by the bridge, when it is a root
+ * bridge or a designated bridge. Only relevant if STP is enabled.
+ *
+ * The valid values are between (1 * USER_HZ) and (10 * USER_HZ).
+ * The default value is (2 * USER_HZ).
+ *
+ * @IFLA_BR_MAX_AGE
+ * The hello packet timeout is the time until another bridge in the
+ * spanning tree is assumed to be dead, after reception of its last hello
+ * message. Only relevant if STP is enabled.
+ *
+ * The valid values are between (6 * USER_HZ) and (40 * USER_HZ).
+ * The default value is (20 * USER_HZ).
+ *
+ * @IFLA_BR_AGEING_TIME
+ * Configure the bridge's FDB entries aging time. It is the time a MAC
+ * address will be kept in the FDB after a packet has been received from
+ * that address. After this time has passed, entries are cleaned up.
+ * Allow values outside the 802.1 standard specification for special cases:
+ *
+ * * 0 - entry never ages (all permanent)
+ * * 1 - entry disappears (no persistence)
+ *
+ * The default value is (300 * USER_HZ).
+ *
+ * @IFLA_BR_STP_STATE
+ * Turn spanning tree protocol on (*IFLA_BR_STP_STATE* > 0) or off
+ * (*IFLA_BR_STP_STATE* == 0) for this bridge.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_PRIORITY
+ * Set this bridge's spanning tree priority, used during STP root bridge
+ * election.
+ *
+ * The valid values are between 0 and 65535.
+ *
+ * @IFLA_BR_VLAN_FILTERING
+ * Turn VLAN filtering on (*IFLA_BR_VLAN_FILTERING* > 0) or off
+ * (*IFLA_BR_VLAN_FILTERING* == 0). When disabled, the bridge will not
+ * consider the VLAN tag when handling packets.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_VLAN_PROTOCOL
+ * Set the protocol used for VLAN filtering.
+ *
+ * The valid values are 0x8100(802.1Q) or 0x88A8(802.1AD). The default value
+ * is 0x8100(802.1Q).
+ *
+ * @IFLA_BR_GROUP_FWD_MASK
+ * The group forwarding mask. This is the bitmask that is applied to
+ * decide whether to forward incoming frames destined to link-local
+ * addresses (of the form 01:80:C2:00:00:0X).
+ *
+ * The default value is 0, which means the bridge does not forward any
+ * link-local frames coming on this port.
+ *
+ * @IFLA_BR_ROOT_ID
+ * The bridge root id, read only.
+ *
+ * @IFLA_BR_BRIDGE_ID
+ * The bridge id, read only.
+ *
+ * @IFLA_BR_ROOT_PORT
+ * The bridge root port, read only.
+ *
+ * @IFLA_BR_ROOT_PATH_COST
+ * The bridge root path cost, read only.
+ *
+ * @IFLA_BR_TOPOLOGY_CHANGE
+ * The bridge topology change, read only.
+ *
+ * @IFLA_BR_TOPOLOGY_CHANGE_DETECTED
+ * The bridge topology change detected, read only.
+ *
+ * @IFLA_BR_HELLO_TIMER
+ * The bridge hello timer, read only.
+ *
+ * @IFLA_BR_TCN_TIMER
+ * The bridge tcn timer, read only.
+ *
+ * @IFLA_BR_TOPOLOGY_CHANGE_TIMER
+ * The bridge topology change timer, read only.
+ *
+ * @IFLA_BR_GC_TIMER
+ * The bridge gc timer, read only.
+ *
+ * @IFLA_BR_GROUP_ADDR
+ * Set the MAC address of the multicast group this bridge uses for STP.
+ * The address must be a link-local address in standard Ethernet MAC address
+ * format. It is an address of the form 01:80:C2:00:00:0X, with X in [0, 4..f].
+ *
+ * The default value is 0.
+ *
+ * @IFLA_BR_FDB_FLUSH
+ * Flush bridge's fdb dynamic entries.
+ *
+ * @IFLA_BR_MCAST_ROUTER
+ * Set bridge's multicast router if IGMP snooping is enabled.
+ * The valid values are:
+ *
+ * * 0 - disabled.
+ * * 1 - automatic (queried).
+ * * 2 - permanently enabled.
+ *
+ * The default value is 1.
+ *
+ * @IFLA_BR_MCAST_SNOOPING
+ * Turn multicast snooping on (*IFLA_BR_MCAST_SNOOPING* > 0) or off
+ * (*IFLA_BR_MCAST_SNOOPING* == 0).
+ *
+ * The default value is 1.
+ *
+ * @IFLA_BR_MCAST_QUERY_USE_IFADDR
+ * If enabled use the bridge's own IP address as source address for IGMP
+ * queries (*IFLA_BR_MCAST_QUERY_USE_IFADDR* > 0) or the default of 0.0.0.0
+ * (*IFLA_BR_MCAST_QUERY_USE_IFADDR* == 0).
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_MCAST_QUERIER
+ * Enable (*IFLA_BR_MULTICAST_QUERIER* > 0) or disable
+ * (*IFLA_BR_MULTICAST_QUERIER* == 0) IGMP querier, i.e. sending of multicast
+ * queries by the bridge.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_MCAST_HASH_ELASTICITY
+ * Set the multicast database hash elasticity. It is the maximum chain length in
+ * the multicast hash table. This attribute is *deprecated* and the value
+ * is always 16.
+ *
+ * @IFLA_BR_MCAST_HASH_MAX
+ * Set the maximum size of the multicast hash table.
+ *
+ * The default value is 4096; the value must be a power of 2.
+ *
+ * @IFLA_BR_MCAST_LAST_MEMBER_CNT
+ * The Last Member Query Count is the number of Group-Specific Queries
+ * sent before the router assumes there are no local members. The Last
+ * Member Query Count is also the number of Group-and-Source-Specific
+ * Queries sent before the router assumes there are no listeners for a
+ * particular source.
+ *
+ * The default value is 2.
+ *
+ * @IFLA_BR_MCAST_STARTUP_QUERY_CNT
+ * The Startup Query Count is the number of Queries sent out on startup,
+ * separated by the Startup Query Interval.
+ *
+ * The default value is 2.
+ *
+ * @IFLA_BR_MCAST_LAST_MEMBER_INTVL
+ * The Last Member Query Interval is the Max Response Time inserted into
+ * Group-Specific Queries sent in response to Leave Group messages, and
+ * is also the amount of time between Group-Specific Query messages.
+ *
+ * The default value is (1 * USER_HZ).
+ *
+ * @IFLA_BR_MCAST_MEMBERSHIP_INTVL
+ * The interval after which the bridge will leave a group, if no membership
+ * reports for this group are received.
+ *
+ * The default value is (260 * USER_HZ).
+ *
+ * @IFLA_BR_MCAST_QUERIER_INTVL
+ * The interval between queries sent by other routers. If no queries are
+ * seen after this delay has passed, the bridge will start to send its own
+ * queries (as if *IFLA_BR_MCAST_QUERIER_INTVL* was enabled).
+ *
+ * The default value is (255 * USER_HZ).
+ *
+ * @IFLA_BR_MCAST_QUERY_INTVL
+ * The Query Interval is the interval between General Queries sent by
+ * the Querier.
+ *
+ * The default value is (125 * USER_HZ). The minimum value is (1 * USER_HZ).
+ *
+ * @IFLA_BR_MCAST_QUERY_RESPONSE_INTVL
+ * The Max Response Time used to calculate the Max Resp Code inserted
+ * into the periodic General Queries.
+ *
+ * The default value is (10 * USER_HZ).
+ *
+ * @IFLA_BR_MCAST_STARTUP_QUERY_INTVL
+ * The interval between queries in the startup phase.
+ *
+ * The default value is (125 * USER_HZ) / 4. The minimum value is (1 * USER_HZ).
+ *
+ * @IFLA_BR_NF_CALL_IPTABLES
+ * Enable (*NF_CALL_IPTABLES* > 0) or disable (*NF_CALL_IPTABLES* == 0)
+ * iptables hooks on the bridge.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_NF_CALL_IP6TABLES
+ * Enable (*NF_CALL_IP6TABLES* > 0) or disable (*NF_CALL_IP6TABLES* == 0)
+ * ip6tables hooks on the bridge.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_NF_CALL_ARPTABLES
+ * Enable (*NF_CALL_ARPTABLES* > 0) or disable (*NF_CALL_ARPTABLES* == 0)
+ * arptables hooks on the bridge.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_VLAN_DEFAULT_PVID
+ * VLAN ID applied to untagged and priority-tagged incoming packets.
+ *
+ * The default value is 1. Setting to the special value 0 makes all ports of
+ * this bridge not have a PVID by default, which means that they will
+ * not accept VLAN-untagged traffic.
+ *
+ * @IFLA_BR_PAD
+ * Bridge attribute padding type for netlink message.
+ *
+ * @IFLA_BR_VLAN_STATS_ENABLED
+ * Enable (*IFLA_BR_VLAN_STATS_ENABLED* == 1) or disable
+ * (*IFLA_BR_VLAN_STATS_ENABLED* == 0) per-VLAN stats accounting.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_MCAST_STATS_ENABLED
+ * Enable (*IFLA_BR_MCAST_STATS_ENABLED* > 0) or disable
+ * (*IFLA_BR_MCAST_STATS_ENABLED* == 0) multicast (IGMP/MLD) stats
+ * accounting.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_MCAST_IGMP_VERSION
+ * Set the IGMP version.
+ *
+ * The valid values are 2 and 3. The default value is 2.
+ *
+ * @IFLA_BR_MCAST_MLD_VERSION
+ * Set the MLD version.
+ *
+ * The valid values are 1 and 2. The default value is 1.
+ *
+ * @IFLA_BR_VLAN_STATS_PER_PORT
+ * Enable (*IFLA_BR_VLAN_STATS_PER_PORT* == 1) or disable
+ * (*IFLA_BR_VLAN_STATS_PER_PORT* == 0) per-VLAN per-port stats accounting.
+ * Can be changed only when there are no port VLANs configured.
+ *
+ * The default value is 0 (disabled).
+ *
+ * @IFLA_BR_MULTI_BOOLOPT
+ * The multi_boolopt is used to control new boolean options to avoid adding
+ * new netlink attributes. You can look at ``enum br_boolopt_id`` for those
+ * options.
+ *
+ * @IFLA_BR_MCAST_QUERIER_STATE
+ * Bridge mcast querier states, read only.
+ *
+ * @IFLA_BR_FDB_N_LEARNED
+ * The number of dynamically learned FDB entries for the current bridge,
+ * read only.
+ *
+ * @IFLA_BR_FDB_MAX_LEARNED
+ * Set the number of max dynamically learned FDB entries for the current
+ * bridge.
+ */
enum {
IFLA_BR_UNSPEC,
IFLA_BR_FORWARD_DELAY,
@@ -510,6 +791,8 @@ enum {
IFLA_BR_VLAN_STATS_PER_PORT,
IFLA_BR_MULTI_BOOLOPT,
IFLA_BR_MCAST_QUERIER_STATE,
+ IFLA_BR_FDB_N_LEARNED,
+ IFLA_BR_FDB_MAX_LEARNED,
__IFLA_BR_MAX,
};
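
As the DOC block above notes, the bridge timer attributes are expressed in clock_t units, i.e. seconds multiplied by USER_HZ. A small worked example, assuming the usual USER_HZ of 100:

    #define BR_USER_HZ      100UL   /* assumption: USER_HZ is 100 on this system */

    /* Encode a 15 second forward delay for IFLA_BR_FORWARD_DELAY: 15 * 100 = 1500. */
    static unsigned long br_seconds_to_clock_t(unsigned long secs)
    {
            return secs * BR_USER_HZ;
    }
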
@@ -520,11 +803,252 @@ struct ifla_bridge_id {
__u8 addr[6]; /* ETH_ALEN */
};
+/**
+ * DOC: Bridge mode enum definition
+ *
+ * @BRIDGE_MODE_HAIRPIN
+ * Controls whether traffic may be sent back out of the port on which it
+ * was received. This option is also called reflective relay mode, and is
+ * used to support basic VEPA (Virtual Ethernet Port Aggregator)
+ * capabilities. By default, this flag is turned off and the bridge will
+ * not forward traffic back out of the receiving port.
+ */
enum {
BRIDGE_MODE_UNSPEC,
BRIDGE_MODE_HAIRPIN,
};
+/**
+ * DOC: Bridge port enum definition
+ *
+ * @IFLA_BRPORT_STATE
+ * The operation state of the port. Here are the valid values.
+ *
+ * * 0 - port is in STP *DISABLED* state. Make this port completely
+ * inactive for STP. This is also called BPDU filter and could be used
+ * to disable STP on an untrusted port, like a leaf virtual device.
+ * The traffic forwarding is also stopped on this port.
+ * * 1 - port is in STP *LISTENING* state. Only valid if STP is enabled
+ * on the bridge. In this state the port listens for STP BPDUs and
+ * drops all other traffic frames.
+ * * 2 - port is in STP *LEARNING* state. Only valid if STP is enabled on
+ * the bridge. In this state the port will accept traffic only for the
+ * purpose of updating MAC address tables.
+ * * 3 - port is in STP *FORWARDING* state. Port is fully active.
+ * * 4 - port is in STP *BLOCKING* state. Only valid if STP is enabled on
+ * the bridge. This state is used during the STP election process.
+ * In this state, port will only process STP BPDUs.
+ *
+ * @IFLA_BRPORT_PRIORITY
+ * The STP port priority. The valid values are between 0 and 255.
+ *
+ * @IFLA_BRPORT_COST
+ * The STP path cost of the port. The valid values are between 1 and 65535.
+ *
+ * @IFLA_BRPORT_MODE
+ * Set the bridge port mode. See *BRIDGE_MODE_HAIRPIN* for more details.
+ *
+ * @IFLA_BRPORT_GUARD
+ * Controls whether STP BPDUs will be processed by the bridge port. By
+ * default, the flag is turned off to allow BPDU processing. Turning this
+ * flag on will disable the bridge port if a STP BPDU packet is received.
+ *
+ * If the bridge has Spanning Tree enabled, hostile devices on the network
+ * may send BPDU on a port and cause network failure. Setting *guard on*
+ * will detect and stop this by disabling the port. The port will be
+ * restarted if the link is brought down, or removed and reattached.
+ *
+ * @IFLA_BRPORT_PROTECT
+ * Controls whether a given port is allowed to become a root port or not.
+ * Only used when STP is enabled on the bridge. By default the flag is off.
+ *
+ * This feature is also called root port guard. If BPDU is received from a
+ * leaf (edge) port, it should not be elected as root port. This could
+ * be used if using STP on a bridge and the downstream bridges are not fully
+ * trusted; this prevents a hostile guest from rerouting traffic.
+ *
+ * @IFLA_BRPORT_FAST_LEAVE
+ * This flag allows the bridge to immediately stop multicast traffic
+ * forwarding on a port that receives an IGMP Leave message. It is only used
+ * when IGMP snooping is enabled on the bridge. By default the flag is off.
+ *
+ * @IFLA_BRPORT_LEARNING
+ * Controls whether a given port will learn *source* MAC addresses from
+ * received traffic or not. Also controls whether dynamic FDB entries
+ * (which can also be added by software) will be refreshed by incoming
+ * traffic. By default this flag is on.
+ *
+ * @IFLA_BRPORT_UNICAST_FLOOD
+ * Controls whether unicast traffic for which there is no FDB entry will
+ * be flooded towards this port. By default this flag is on.
+ *
+ * @IFLA_BRPORT_PROXYARP
+ * Enable proxy ARP on this port.
+ *
+ * @IFLA_BRPORT_LEARNING_SYNC
+ * Controls whether a given port will sync MAC addresses learned on device
+ * port to bridge FDB.
+ *
+ * @IFLA_BRPORT_PROXYARP_WIFI
+ * Enable proxy ARP on this port which meets extended requirements by
+ * IEEE 802.11 and Hotspot 2.0 specifications.
+ *
+ * @IFLA_BRPORT_ROOT_ID
+ *
+ * @IFLA_BRPORT_BRIDGE_ID
+ *
+ * @IFLA_BRPORT_DESIGNATED_PORT
+ *
+ * @IFLA_BRPORT_DESIGNATED_COST
+ *
+ * @IFLA_BRPORT_ID
+ *
+ * @IFLA_BRPORT_NO
+ *
+ * @IFLA_BRPORT_TOPOLOGY_CHANGE_ACK
+ *
+ * @IFLA_BRPORT_CONFIG_PENDING
+ *
+ * @IFLA_BRPORT_MESSAGE_AGE_TIMER
+ *
+ * @IFLA_BRPORT_FORWARD_DELAY_TIMER
+ *
+ * @IFLA_BRPORT_HOLD_TIMER
+ *
+ * @IFLA_BRPORT_FLUSH
+ * Flush bridge ports' fdb dynamic entries.
+ *
+ * @IFLA_BRPORT_MULTICAST_ROUTER
+ * Configure the port's multicast router presence. A port with
+ * a multicast router will receive all multicast traffic.
+ * The valid values are:
+ *
+ * * 0 disable multicast routers on this port
+ * * 1 let the system detect the presence of routers (default)
+ * * 2 permanently enable multicast traffic forwarding on this port
+ * * 3 enable multicast routers temporarily on this port, not depending
+ * on incoming queries.
+ *
+ * @IFLA_BRPORT_PAD
+ *
+ * @IFLA_BRPORT_MCAST_FLOOD
+ * Controls whether a given port will flood multicast traffic for which
+ * there is no MDB entry. By default this flag is on.
+ *
+ * @IFLA_BRPORT_MCAST_TO_UCAST
+ * Controls whether a given port will replicate packets using unicast
+ * instead of multicast. By default this flag is off.
+ *
+ * This is done by copying the packet per host and changing the multicast
+ * destination MAC to a unicast one accordingly.
+ *
+ * *mcast_to_unicast* works on top of the multicast snooping feature of the
+ * bridge, which means unicast copies are only delivered to hosts which
+ * are interested in unicast and signaled this via IGMP/MLD reports previously.
+ *
+ * This feature is intended for interface types which have a more reliable
+ * and/or efficient way to deliver unicast packets than broadcast ones
+ * (e.g. WiFi).
+ *
+ * However, it should only be enabled on interfaces where no IGMPv2/MLDv1
+ * report suppression takes place. IGMP/MLD report suppression issue is
+ * usually overcome by the network daemon (supplicant) enabling AP isolation
+ * and by that separating all STAs.
+ *
+ * Delivery of STA-to-STA IP multicast is made possible again by enabling
+ * and utilizing the bridge hairpin mode, which considers the incoming port
+ * as a potential outgoing port, too (see *BRIDGE_MODE_HAIRPIN* option).
+ * Hairpin mode is performed after multicast snooping, so reports are
+ * delivered only to STAs running a multicast router.
+ *
+ * @IFLA_BRPORT_VLAN_TUNNEL
+ * Controls whether vlan to tunnel mapping is enabled on the port.
+ * By default this flag is off.
+ *
+ * @IFLA_BRPORT_BCAST_FLOOD
+ * Controls flooding of broadcast traffic on the given port. By default
+ * this flag is on.
+ *
+ * @IFLA_BRPORT_GROUP_FWD_MASK
+ * Set the group forward mask. This is a bitmask that is applied to
+ * decide whether to forward incoming frames destined to link-local
+ * addresses. The addresses are of the form 01:80:C2:00:00:0X. The default
+ * is 0, which means the bridge does not forward any link-local frames
+ * coming in on this port.
+ *
+ * @IFLA_BRPORT_NEIGH_SUPPRESS
+ * Controls whether neighbor discovery (arp and nd) proxy and suppression
+ * is enabled on the port. By default this flag is off.
+ *
+ * @IFLA_BRPORT_ISOLATED
+ * Controls whether a given port will be isolated, which means it will be
+ * able to communicate with non-isolated ports only. By default this
+ * flag is off.
+ *
+ * @IFLA_BRPORT_BACKUP_PORT
+ * Set a backup port. If the port loses carrier all traffic will be
+ * redirected to the configured backup port. Set the value to 0 to disable
+ * it.
+ *
+ * @IFLA_BRPORT_MRP_RING_OPEN
+ *
+ * @IFLA_BRPORT_MRP_IN_OPEN
+ *
+ * @IFLA_BRPORT_MCAST_EHT_HOSTS_LIMIT
+ * The number of per-port EHT hosts limit. The default value is 512.
+ * Setting to 0 is not allowed.
+ *
+ * @IFLA_BRPORT_MCAST_EHT_HOSTS_CNT
+ * The current number of tracked hosts, read only.
+ *
+ * @IFLA_BRPORT_LOCKED
+ * Controls whether a port will be locked, meaning that hosts behind the
+ * port will not be able to communicate through the port unless an FDB
+ * entry with the unit's MAC address is in the FDB. The common use case is
+ * that hosts are allowed access through authentication with the IEEE 802.1X
+ * protocol or based on whitelists. By default this flag is off.
+ *
+ * Please note that secure 802.1X deployments should always use the
+ * *BR_BOOLOPT_NO_LL_LEARN* flag, to not permit the bridge to populate its
+ * FDB based on link-local (EAPOL) traffic received on the port.
+ *
+ * @IFLA_BRPORT_MAB
+ * Controls whether a port will use MAC Authentication Bypass (MAB), a
+ * technique through which select MAC addresses may be allowed on a locked
+ * port, without using 802.1X authentication. Packets with an unknown source
+ * MAC address generate a "locked" FDB entry on the incoming bridge port.
+ * The common use case is for user space to react to these bridge FDB
+ * notifications and optionally replace the locked FDB entry with a normal
+ * one, allowing traffic to pass for whitelisted MAC addresses.
+ *
+ * Setting this flag also requires *IFLA_BRPORT_LOCKED* and
+ * *IFLA_BRPORT_LEARNING*. *IFLA_BRPORT_LOCKED* ensures that unauthorized
+ * data packets are dropped, and *IFLA_BRPORT_LEARNING* allows the dynamic
+ * FDB entries installed by user space (as replacements for the locked FDB
+ * entries) to be refreshed and/or aged out.
+ *
+ * @IFLA_BRPORT_MCAST_N_GROUPS
+ *
+ * @IFLA_BRPORT_MCAST_MAX_GROUPS
+ * Sets the maximum number of MDB entries that can be registered for a
+ * given port. Attempts to register more MDB entries at the port than this
+ * limit allows will be rejected, whether they are done through netlink
+ * (e.g. the bridge tool), or IGMP or MLD membership reports. Setting a
+ * limit of 0 disables the limit. The default value is 0.
+ *
+ * @IFLA_BRPORT_NEIGH_VLAN_SUPPRESS
+ * Controls whether neighbor discovery (arp and nd) proxy and suppression is
+ * enabled for a given port. By default this flag is off.
+ *
+ * Note that this option only takes effect when *IFLA_BRPORT_NEIGH_SUPPRESS*
+ * is enabled for a given port.
+ *
+ * @IFLA_BRPORT_BACKUP_NHID
+ * The FDB nexthop object ID to attach to packets being redirected to a
+ * backup port that has VLAN tunnel mapping enabled (via the
+ * *IFLA_BRPORT_VLAN_TUNNEL* option). Setting a value of 0 (default) has
+ * the effect of not attaching any ID.
+ */
enum {
IFLA_BRPORT_UNSPEC,
IFLA_BRPORT_STATE, /* Spanning tree state */
@@ -769,6 +1293,19 @@ enum netkit_mode {
NETKIT_L3,
};
+/* NETKIT_SCRUB_NONE leaves clearing skb->{mark,priority} up to
+ * the BPF program if attached. This also means the latter can
+ * consume the two fields if they were populated earlier.
+ *
+ * NETKIT_SCRUB_DEFAULT zeroes skb->{mark,priority} fields before
+ * invoking the attached BPF program when the peer device resides
+ * in a different network namespace. This is the default behavior.
+ */
+enum netkit_scrub {
+ NETKIT_SCRUB_NONE,
+ NETKIT_SCRUB_DEFAULT,
+};
+
enum {
IFLA_NETKIT_UNSPEC,
IFLA_NETKIT_PEER_INFO,
@@ -776,6 +1313,10 @@ enum {
IFLA_NETKIT_POLICY,
IFLA_NETKIT_PEER_POLICY,
IFLA_NETKIT_MODE,
+ IFLA_NETKIT_SCRUB,
+ IFLA_NETKIT_PEER_SCRUB,
+ IFLA_NETKIT_HEADROOM,
+ IFLA_NETKIT_TAILROOM,
__IFLA_NETKIT_MAX,
};
#define IFLA_NETKIT_MAX (__IFLA_NETKIT_MAX - 1)
@@ -854,6 +1395,7 @@ enum {
IFLA_VXLAN_DF,
IFLA_VXLAN_VNIFILTER, /* only applicable with COLLECT_METADATA mode */
IFLA_VXLAN_LOCALBYPASS,
+ IFLA_VXLAN_LABEL_POLICY, /* IPv6 flow label policy; ifla_vxlan_label_policy */
__IFLA_VXLAN_MAX
};
#define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1)
@@ -871,6 +1413,13 @@ enum ifla_vxlan_df {
VXLAN_DF_MAX = __VXLAN_DF_END - 1,
};
+enum ifla_vxlan_label_policy {
+ VXLAN_LABEL_FIXED = 0,
+ VXLAN_LABEL_INHERIT = 1,
+ __VXLAN_LABEL_END,
+ VXLAN_LABEL_MAX = __VXLAN_LABEL_END - 1,
+};
+
/* GENEVE section */
enum {
IFLA_GENEVE_UNSPEC,
@@ -935,6 +1484,8 @@ enum {
IFLA_GTP_ROLE,
IFLA_GTP_CREATE_SOCKETS,
IFLA_GTP_RESTART_COUNT,
+ IFLA_GTP_LOCAL,
+ IFLA_GTP_LOCAL6,
__IFLA_GTP_MAX,
};
#define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1)
@@ -974,6 +1525,7 @@ enum {
IFLA_BOND_AD_LACP_ACTIVE,
IFLA_BOND_MISSED_MAX,
IFLA_BOND_NS_IP6_TARGET,
+ IFLA_BOND_COUPLED_CONTROL,
__IFLA_BOND_MAX,
};
@@ -1239,6 +1791,7 @@ enum {
IFLA_HSR_PROTOCOL, /* Indicate different protocol than
* HSR. For example PRP.
*/
+ IFLA_HSR_INTERLINK, /* HSR interlink network device */
__IFLA_HSR_MAX,
};
@@ -1416,7 +1969,9 @@ enum {
enum {
IFLA_DSA_UNSPEC,
- IFLA_DSA_MASTER,
+ IFLA_DSA_CONDUIT,
+ /* Deprecated, use IFLA_DSA_CONDUIT instead */
+ IFLA_DSA_MASTER = IFLA_DSA_CONDUIT,
__IFLA_DSA_MAX,
};
diff --git a/tools/include/uapi/linux/if_xdp.h b/tools/include/uapi/linux/if_xdp.h
index 638c606dfa74..23a062781468 100644
--- a/tools/include/uapi/linux/if_xdp.h
+++ b/tools/include/uapi/linux/if_xdp.h
@@ -7,8 +7,8 @@
* Magnus Karlsson <magnus.karlsson@intel.com>
*/
-#ifndef _LINUX_IF_XDP_H
-#define _LINUX_IF_XDP_H
+#ifndef _UAPI_LINUX_IF_XDP_H
+#define _UAPI_LINUX_IF_XDP_H
#include <linux/types.h>
@@ -41,6 +41,10 @@
*/
#define XDP_UMEM_TX_SW_CSUM (1 << 1)
+/* Request to reserve tx_metadata_len bytes of per-chunk metadata.
+ */
+#define XDP_UMEM_TX_METADATA_LEN (1 << 2)
+
struct sockaddr_xdp {
__u16 sxdp_family;
__u16 sxdp_flags;
@@ -75,6 +79,7 @@ struct xdp_mmap_offsets {
#define XDP_UMEM_COMPLETION_RING 6
#define XDP_STATISTICS 7
#define XDP_OPTIONS 8
+#define XDP_MAX_TX_SKB_BUDGET 9
struct xdp_umem_reg {
__u64 addr; /* Start of packet data area */
@@ -113,16 +118,22 @@ struct xdp_options {
((1ULL << XSK_UNALIGNED_BUF_OFFSET_SHIFT) - 1)
/* Request transmit timestamp. Upon completion, put it into tx_timestamp
- * field of union xsk_tx_metadata.
+ * field of struct xsk_tx_metadata.
*/
#define XDP_TXMD_FLAGS_TIMESTAMP (1 << 0)
/* Request transmit checksum offload. Checksum start position and offset
- * are communicated via csum_start and csum_offset fields of union
+ * are communicated via csum_start and csum_offset fields of struct
* xsk_tx_metadata.
*/
#define XDP_TXMD_FLAGS_CHECKSUM (1 << 1)
+/* Request launch time hardware offload. The device will schedule the packet for
+ * transmission at a pre-determined time called launch time. The value of
+ * launch time is communicated via launch_time field of struct xsk_tx_metadata.
+ */
+#define XDP_TXMD_FLAGS_LAUNCH_TIME (1 << 2)
+
/* AF_XDP offloads request. 'request' union member is consumed by the driver
* when the packet is being transmitted. 'completion' union member is
* filled by the driver when the transmit completion arrives.
@@ -138,6 +149,10 @@ struct xsk_tx_metadata {
__u16 csum_start;
/* Offset from csum_start where checksum should be stored. */
__u16 csum_offset;
+
+ /* XDP_TXMD_FLAGS_LAUNCH_TIME */
+ /* Launch time in nanosecond against the PTP HW Clock */
+ __u64 launch_time;
} request;
struct {
@@ -166,4 +181,4 @@ struct xdp_desc {
/* TX packet carries valid metadata. */
#define XDP_TX_METADATA (1 << 1)
-#endif /* _LINUX_IF_XDP_H */
+#endif /* _UAPI_LINUX_IF_XDP_H */
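
A minimal sketch of requesting a hardware launch time for an AF_XDP TX frame. Here meta points at the tx_metadata_len bytes reserved in front of the packet data (see XDP_UMEM_TX_METADATA_LEN), and launch_ns is a PTP hardware clock timestamp chosen by the caller:

    #include <linux/if_xdp.h>

    static void set_launch_time(struct xsk_tx_metadata *meta, __u64 launch_ns)
    {
            meta->flags = XDP_TXMD_FLAGS_LAUNCH_TIME;
            meta->request.launch_time = launch_ns;
    }
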
diff --git a/tools/include/uapi/linux/in.h b/tools/include/uapi/linux/in.h
index e682ab628dfa..ced0fc3c3aa5 100644
--- a/tools/include/uapi/linux/in.h
+++ b/tools/include/uapi/linux/in.h
@@ -79,8 +79,12 @@ enum {
#define IPPROTO_MPLS IPPROTO_MPLS
IPPROTO_ETHERNET = 143, /* Ethernet-within-IPv6 Encapsulation */
#define IPPROTO_ETHERNET IPPROTO_ETHERNET
+ IPPROTO_AGGFRAG = 144, /* AGGFRAG in ESP (RFC 9347) */
+#define IPPROTO_AGGFRAG IPPROTO_AGGFRAG
IPPROTO_RAW = 255, /* Raw IP packets */
#define IPPROTO_RAW IPPROTO_RAW
+ IPPROTO_SMC = 256, /* Shared Memory Communications */
+#define IPPROTO_SMC IPPROTO_SMC
IPPROTO_MPTCP = 262, /* Multipath TCP connection */
#define IPPROTO_MPTCP IPPROTO_MPTCP
IPPROTO_MAX
@@ -139,7 +143,7 @@ struct in_addr {
*/
#define IP_PMTUDISC_INTERFACE 4
/* weaker version of IP_PMTUDISC_INTERFACE, which allows packets to get
- * fragmented if they exeed the interface mtu
+ * fragmented if they exceed the interface mtu
*/
#define IP_PMTUDISC_OMIT 5
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index c3308536482b..f0f0d49d2544 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -16,6 +16,11 @@
#define KVM_API_VERSION 12
+/*
+ * Backwards-compatible definitions.
+ */
+#define __KVM_HAVE_GUEST_DEBUG
+
/* for KVM_SET_USER_MEMORY_REGION */
struct kvm_userspace_memory_region {
__u32 slot;
@@ -85,43 +90,6 @@ struct kvm_pit_config {
#define KVM_PIT_SPEAKER_DUMMY 1
-struct kvm_s390_skeys {
- __u64 start_gfn;
- __u64 count;
- __u64 skeydata_addr;
- __u32 flags;
- __u32 reserved[9];
-};
-
-#define KVM_S390_CMMA_PEEK (1 << 0)
-
-/**
- * kvm_s390_cmma_log - Used for CMMA migration.
- *
- * Used both for input and output.
- *
- * @start_gfn: Guest page number to start from.
- * @count: Size of the result buffer.
- * @flags: Control operation mode via KVM_S390_CMMA_* flags
- * @remaining: Used with KVM_S390_GET_CMMA_BITS. Indicates how many dirty
- * pages are still remaining.
- * @mask: Used with KVM_S390_SET_CMMA_BITS. Bitmap of bits to actually set
- * in the PGSTE.
- * @values: Pointer to the values buffer.
- *
- * Used in KVM_S390_{G,S}ET_CMMA_BITS ioctls.
- */
-struct kvm_s390_cmma_log {
- __u64 start_gfn;
- __u32 count;
- __u32 flags;
- union {
- __u64 remaining;
- __u64 mask;
- };
- __u64 values;
-};
-
struct kvm_hyperv_exit {
#define KVM_EXIT_HYPERV_SYNIC 1
#define KVM_EXIT_HYPERV_HCALL 2
@@ -210,6 +178,7 @@ struct kvm_xen_exit {
#define KVM_EXIT_NOTIFY 37
#define KVM_EXIT_LOONGARCH_IOCSR 38
#define KVM_EXIT_MEMORY_FAULT 39
+#define KVM_EXIT_TDX 40
/* For KVM_EXIT_INTERNAL_ERROR */
/* Emulate instruction failed. */
@@ -224,11 +193,24 @@ struct kvm_xen_exit {
/* Flags that describe what fields in emulation_failure hold valid data. */
#define KVM_INTERNAL_ERROR_EMULATION_FLAG_INSTRUCTION_BYTES (1ULL << 0)
+/*
+ * struct kvm_run can be modified by userspace at any time, so KVM must be
+ * careful to avoid TOCTOU bugs. In order to protect KVM, HINT_UNSAFE_IN_KVM()
+ * renames fields in struct kvm_run from <symbol> to <symbol>__unsafe when
+ * compiled into the kernel, ensuring that any use within KVM is obvious and
+ * gets extra scrutiny.
+ */
+#ifdef __KERNEL__
+#define HINT_UNSAFE_IN_KVM(_symbol) _symbol##__unsafe
+#else
+#define HINT_UNSAFE_IN_KVM(_symbol) _symbol
+#endif
+
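
Because the rename only happens under __KERNEL__, userspace keeps addressing the field by its plain name. A minimal sketch, where run is the mmap()ed struct kvm_run of a vCPU fd:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Ask KVM_RUN to return immediately; the kernel sees immediate_exit__unsafe,
     * userspace still writes immediate_exit.
     */
    static int kvm_run_once(int vcpu_fd, struct kvm_run *run)
    {
            run->immediate_exit = 1;
            return ioctl(vcpu_fd, KVM_RUN, 0);
    }
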
/* for KVM_RUN, returned by mmap(vcpu_fd, offset=0) */
struct kvm_run {
/* in */
__u8 request_interrupt_window;
- __u8 immediate_exit;
+ __u8 HINT_UNSAFE_IN_KVM(immediate_exit);
__u8 padding1[6];
/* out */
@@ -315,11 +297,6 @@ struct kvm_run {
__u32 ipb;
} s390_sieic;
/* KVM_EXIT_S390_RESET */
-#define KVM_S390_RESET_POR 1
-#define KVM_S390_RESET_CLEAR 2
-#define KVM_S390_RESET_SUBSYSTEM 4
-#define KVM_S390_RESET_CPU_INIT 8
-#define KVM_S390_RESET_IPL 16
__u64 s390_reset_flags;
/* KVM_EXIT_S390_UCONTROL */
struct {
@@ -399,6 +376,7 @@ struct kvm_run {
#define KVM_SYSTEM_EVENT_WAKEUP 4
#define KVM_SYSTEM_EVENT_SUSPEND 5
#define KVM_SYSTEM_EVENT_SEV_TERM 6
+#define KVM_SYSTEM_EVENT_TDX_FATAL 7
__u32 type;
__u32 ndata;
union {
@@ -470,6 +448,31 @@ struct kvm_run {
__u64 gpa;
__u64 size;
} memory_fault;
+ /* KVM_EXIT_TDX */
+ struct {
+ __u64 flags;
+ __u64 nr;
+ union {
+ struct {
+ __u64 ret;
+ __u64 data[5];
+ } unknown;
+ struct {
+ __u64 ret;
+ __u64 gpa;
+ __u64 size;
+ } get_quote;
+ struct {
+ __u64 ret;
+ __u64 leaf;
+ __u64 r11, r12, r13, r14;
+ } get_tdvmcall_info;
+ struct {
+ __u64 ret;
+ __u64 vector;
+ } setup_event_notify;
+ };
+ } tdx;
/* Fix the size of the union. */
char padding[256];
};
@@ -536,43 +539,6 @@ struct kvm_translation {
__u8 pad[5];
};
-/* for KVM_S390_MEM_OP */
-struct kvm_s390_mem_op {
- /* in */
- __u64 gaddr; /* the guest address */
- __u64 flags; /* flags */
- __u32 size; /* amount of bytes */
- __u32 op; /* type of operation */
- __u64 buf; /* buffer in userspace */
- union {
- struct {
- __u8 ar; /* the access register number */
- __u8 key; /* access key, ignored if flag unset */
- __u8 pad1[6]; /* ignored */
- __u64 old_addr; /* ignored if cmpxchg flag unset */
- };
- __u32 sida_offset; /* offset into the sida */
- __u8 reserved[32]; /* ignored */
- };
-};
-/* types for kvm_s390_mem_op->op */
-#define KVM_S390_MEMOP_LOGICAL_READ 0
-#define KVM_S390_MEMOP_LOGICAL_WRITE 1
-#define KVM_S390_MEMOP_SIDA_READ 2
-#define KVM_S390_MEMOP_SIDA_WRITE 3
-#define KVM_S390_MEMOP_ABSOLUTE_READ 4
-#define KVM_S390_MEMOP_ABSOLUTE_WRITE 5
-#define KVM_S390_MEMOP_ABSOLUTE_CMPXCHG 6
-
-/* flags for kvm_s390_mem_op->flags */
-#define KVM_S390_MEMOP_F_CHECK_ONLY (1ULL << 0)
-#define KVM_S390_MEMOP_F_INJECT_EXCEPTION (1ULL << 1)
-#define KVM_S390_MEMOP_F_SKEY_PROTECTION (1ULL << 2)
-
-/* flags specifying extension support via KVM_CAP_S390_MEM_OP_EXTENSION */
-#define KVM_S390_MEMOP_EXTENSION_CAP_BASE (1 << 0)
-#define KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG (1 << 1)
-
/* for KVM_INTERRUPT */
struct kvm_interrupt {
/* in */
@@ -637,124 +603,6 @@ struct kvm_mp_state {
__u32 mp_state;
};
-struct kvm_s390_psw {
- __u64 mask;
- __u64 addr;
-};
-
-/* valid values for type in kvm_s390_interrupt */
-#define KVM_S390_SIGP_STOP 0xfffe0000u
-#define KVM_S390_PROGRAM_INT 0xfffe0001u
-#define KVM_S390_SIGP_SET_PREFIX 0xfffe0002u
-#define KVM_S390_RESTART 0xfffe0003u
-#define KVM_S390_INT_PFAULT_INIT 0xfffe0004u
-#define KVM_S390_INT_PFAULT_DONE 0xfffe0005u
-#define KVM_S390_MCHK 0xfffe1000u
-#define KVM_S390_INT_CLOCK_COMP 0xffff1004u
-#define KVM_S390_INT_CPU_TIMER 0xffff1005u
-#define KVM_S390_INT_VIRTIO 0xffff2603u
-#define KVM_S390_INT_SERVICE 0xffff2401u
-#define KVM_S390_INT_EMERGENCY 0xffff1201u
-#define KVM_S390_INT_EXTERNAL_CALL 0xffff1202u
-/* Anything below 0xfffe0000u is taken by INT_IO */
-#define KVM_S390_INT_IO(ai,cssid,ssid,schid) \
- (((schid)) | \
- ((ssid) << 16) | \
- ((cssid) << 18) | \
- ((ai) << 26))
-#define KVM_S390_INT_IO_MIN 0x00000000u
-#define KVM_S390_INT_IO_MAX 0xfffdffffu
-#define KVM_S390_INT_IO_AI_MASK 0x04000000u
-
-
-struct kvm_s390_interrupt {
- __u32 type;
- __u32 parm;
- __u64 parm64;
-};
-
-struct kvm_s390_io_info {
- __u16 subchannel_id;
- __u16 subchannel_nr;
- __u32 io_int_parm;
- __u32 io_int_word;
-};
-
-struct kvm_s390_ext_info {
- __u32 ext_params;
- __u32 pad;
- __u64 ext_params2;
-};
-
-struct kvm_s390_pgm_info {
- __u64 trans_exc_code;
- __u64 mon_code;
- __u64 per_address;
- __u32 data_exc_code;
- __u16 code;
- __u16 mon_class_nr;
- __u8 per_code;
- __u8 per_atmid;
- __u8 exc_access_id;
- __u8 per_access_id;
- __u8 op_access_id;
-#define KVM_S390_PGM_FLAGS_ILC_VALID 0x01
-#define KVM_S390_PGM_FLAGS_ILC_0 0x02
-#define KVM_S390_PGM_FLAGS_ILC_1 0x04
-#define KVM_S390_PGM_FLAGS_ILC_MASK 0x06
-#define KVM_S390_PGM_FLAGS_NO_REWIND 0x08
- __u8 flags;
- __u8 pad[2];
-};
-
-struct kvm_s390_prefix_info {
- __u32 address;
-};
-
-struct kvm_s390_extcall_info {
- __u16 code;
-};
-
-struct kvm_s390_emerg_info {
- __u16 code;
-};
-
-#define KVM_S390_STOP_FLAG_STORE_STATUS 0x01
-struct kvm_s390_stop_info {
- __u32 flags;
-};
-
-struct kvm_s390_mchk_info {
- __u64 cr14;
- __u64 mcic;
- __u64 failing_storage_address;
- __u32 ext_damage_code;
- __u32 pad;
- __u8 fixed_logout[16];
-};
-
-struct kvm_s390_irq {
- __u64 type;
- union {
- struct kvm_s390_io_info io;
- struct kvm_s390_ext_info ext;
- struct kvm_s390_pgm_info pgm;
- struct kvm_s390_emerg_info emerg;
- struct kvm_s390_extcall_info extcall;
- struct kvm_s390_prefix_info prefix;
- struct kvm_s390_stop_info stop;
- struct kvm_s390_mchk_info mchk;
- char reserved[64];
- } u;
-};
-
-struct kvm_s390_irq_state {
- __u64 buf;
- __u32 flags; /* will stay unused for compatibility reasons */
- __u32 len;
- __u32 reserved[4]; /* will stay unused for compatibility reasons */
-};
-
/* for KVM_SET_GUEST_DEBUG */
#define KVM_GUESTDBG_ENABLE 0x00000001
@@ -796,10 +644,7 @@ struct kvm_ioeventfd {
#define KVM_X86_DISABLE_EXITS_HLT (1 << 1)
#define KVM_X86_DISABLE_EXITS_PAUSE (1 << 2)
#define KVM_X86_DISABLE_EXITS_CSTATE (1 << 3)
-#define KVM_X86_DISABLE_VALID_EXITS (KVM_X86_DISABLE_EXITS_MWAIT | \
- KVM_X86_DISABLE_EXITS_HLT | \
- KVM_X86_DISABLE_EXITS_PAUSE | \
- KVM_X86_DISABLE_EXITS_CSTATE)
+#define KVM_X86_DISABLE_EXITS_APERFMPERF (1 << 4)
/* for KVM_ENABLE_CAP */
struct kvm_enable_cap {
@@ -810,50 +655,6 @@ struct kvm_enable_cap {
__u8 pad[64];
};
-/* for KVM_PPC_GET_PVINFO */
-
-#define KVM_PPC_PVINFO_FLAGS_EV_IDLE (1<<0)
-
-struct kvm_ppc_pvinfo {
- /* out */
- __u32 flags;
- __u32 hcall[4];
- __u8 pad[108];
-};
-
-/* for KVM_PPC_GET_SMMU_INFO */
-#define KVM_PPC_PAGE_SIZES_MAX_SZ 8
-
-struct kvm_ppc_one_page_size {
- __u32 page_shift; /* Page shift (or 0) */
- __u32 pte_enc; /* Encoding in the HPTE (>>12) */
-};
-
-struct kvm_ppc_one_seg_page_size {
- __u32 page_shift; /* Base page shift of segment (or 0) */
- __u32 slb_enc; /* SLB encoding for BookS */
- struct kvm_ppc_one_page_size enc[KVM_PPC_PAGE_SIZES_MAX_SZ];
-};
-
-#define KVM_PPC_PAGE_SIZES_REAL 0x00000001
-#define KVM_PPC_1T_SEGMENTS 0x00000002
-#define KVM_PPC_NO_HASH 0x00000004
-
-struct kvm_ppc_smmu_info {
- __u64 flags;
- __u32 slb_size;
- __u16 data_keys; /* # storage keys supported for data */
- __u16 instr_keys; /* # storage keys supported for instructions */
- struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ];
-};
-
-/* for KVM_PPC_RESIZE_HPT_{PREPARE,COMMIT} */
-struct kvm_ppc_resize_hpt {
- __u64 flags;
- __u32 shift;
- __u32 pad;
-};
-
#define KVMIO 0xAE
/* machine type bits, to be used as argument to KVM_CREATE_VM */
@@ -923,9 +724,7 @@ struct kvm_ppc_resize_hpt {
/* Bug in KVM_SET_USER_MEMORY_REGION fixed: */
#define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21
#define KVM_CAP_USER_NMI 22
-#ifdef __KVM_HAVE_GUEST_DEBUG
#define KVM_CAP_SET_GUEST_DEBUG 23
-#endif
#ifdef __KVM_HAVE_PIT
#define KVM_CAP_REINJECT_CONTROL 24
#endif
@@ -1155,8 +954,14 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_MEMORY_ATTRIBUTES 233
#define KVM_CAP_GUEST_MEMFD 234
#define KVM_CAP_VM_TYPES 235
-
-#ifdef KVM_CAP_IRQ_ROUTING
+#define KVM_CAP_PRE_FAULT_MEMORY 236
+#define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
+#define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_ARM_EL2 240
+#define KVM_CAP_ARM_EL2_E2H0 241
+#define KVM_CAP_RISCV_MP_STATE_RESET 242
+#define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243
struct kvm_irq_routing_irqchip {
__u32 irqchip;
@@ -1222,42 +1027,6 @@ struct kvm_irq_routing {
struct kvm_irq_routing_entry entries[];
};
-#endif
-
-#ifdef KVM_CAP_MCE
-/* x86 MCE */
-struct kvm_x86_mce {
- __u64 status;
- __u64 addr;
- __u64 misc;
- __u64 mcg_status;
- __u8 bank;
- __u8 pad1[7];
- __u64 pad2[3];
-};
-#endif
-
-#ifdef KVM_CAP_XEN_HVM
-#define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR (1 << 0)
-#define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL (1 << 1)
-#define KVM_XEN_HVM_CONFIG_SHARED_INFO (1 << 2)
-#define KVM_XEN_HVM_CONFIG_RUNSTATE (1 << 3)
-#define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL (1 << 4)
-#define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5)
-#define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG (1 << 6)
-#define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE (1 << 7)
-
-struct kvm_xen_hvm_config {
- __u32 flags;
- __u32 msr;
- __u64 blob_addr_32;
- __u64 blob_addr_64;
- __u8 blob_size_32;
- __u8 blob_size_64;
- __u8 pad2[30];
-};
-#endif
-
#define KVM_IRQFD_FLAG_DEASSIGN (1 << 0)
/*
* Available with KVM_CAP_IRQFD_RESAMPLE
@@ -1330,6 +1099,10 @@ struct kvm_dirty_tlb {
#define KVM_REG_SIZE_SHIFT 52
#define KVM_REG_SIZE_MASK 0x00f0000000000000ULL
+
+#define KVM_REG_SIZE(id) \
+ (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
+
#define KVM_REG_SIZE_U8 0x0000000000000000ULL
#define KVM_REG_SIZE_U16 0x0010000000000000ULL
#define KVM_REG_SIZE_U32 0x0020000000000000ULL
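
The new KVM_REG_SIZE() helper decodes the size exponent stored in bits 52..55 of a register id; a small compile-time check of the arithmetic:

    #include <linux/kvm.h>

    /* 0x0020... >> 52 == 2, and 1U << 2 == 4 bytes for a U32 register id. */
    _Static_assert(KVM_REG_SIZE(KVM_REG_SIZE_U32) == 4,
                   "KVM_REG_SIZE_U32 decodes to 4 bytes");
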
@@ -1418,7 +1191,15 @@ enum kvm_device_type {
#define KVM_DEV_TYPE_ARM_PV_TIME KVM_DEV_TYPE_ARM_PV_TIME
KVM_DEV_TYPE_RISCV_AIA,
#define KVM_DEV_TYPE_RISCV_AIA KVM_DEV_TYPE_RISCV_AIA
+ KVM_DEV_TYPE_LOONGARCH_IPI,
+#define KVM_DEV_TYPE_LOONGARCH_IPI KVM_DEV_TYPE_LOONGARCH_IPI
+ KVM_DEV_TYPE_LOONGARCH_EIOINTC,
+#define KVM_DEV_TYPE_LOONGARCH_EIOINTC KVM_DEV_TYPE_LOONGARCH_EIOINTC
+ KVM_DEV_TYPE_LOONGARCH_PCHPIC,
+#define KVM_DEV_TYPE_LOONGARCH_PCHPIC KVM_DEV_TYPE_LOONGARCH_PCHPIC
+
KVM_DEV_TYPE_MAX,
+
};
struct kvm_vfio_spapr_tce {
@@ -1442,11 +1223,6 @@ struct kvm_vfio_spapr_tce {
struct kvm_userspace_memory_region2)
/* enable ucontrol for s390 */
-struct kvm_s390_ucas_mapping {
- __u64 user_addr;
- __u64 vcpu_addr;
- __u64 length;
-};
#define KVM_S390_UCAS_MAP _IOW(KVMIO, 0x50, struct kvm_s390_ucas_mapping)
#define KVM_S390_UCAS_UNMAP _IOW(KVMIO, 0x51, struct kvm_s390_ucas_mapping)
#define KVM_S390_VCPU_FAULT _IOW(KVMIO, 0x52, unsigned long)
@@ -1502,9 +1278,9 @@ struct kvm_s390_ucas_mapping {
/* Available with KVM_CAP_SPAPR_RESIZE_HPT */
#define KVM_PPC_RESIZE_HPT_PREPARE _IOR(KVMIO, 0xad, struct kvm_ppc_resize_hpt)
#define KVM_PPC_RESIZE_HPT_COMMIT _IOR(KVMIO, 0xae, struct kvm_ppc_resize_hpt)
-/* Available with KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_HASH_MMU_V3 */
+/* Available with KVM_CAP_PPC_MMU_RADIX or KVM_CAP_PPC_MMU_HASH_V3 */
#define KVM_PPC_CONFIGURE_V3_MMU _IOW(KVMIO, 0xaf, struct kvm_ppc_mmuv3_cfg)
-/* Available with KVM_CAP_PPC_RADIX_MMU */
+/* Available with KVM_CAP_PPC_MMU_RADIX */
#define KVM_PPC_GET_RMMU_INFO _IOW(KVMIO, 0xb0, struct kvm_ppc_rmmu_info)
/* Available with KVM_CAP_PPC_GET_CPU_CHAR */
#define KVM_PPC_GET_CPU_CHAR _IOR(KVMIO, 0xb1, struct kvm_ppc_cpu_char)
@@ -1641,89 +1417,6 @@ struct kvm_enc_region {
#define KVM_S390_NORMAL_RESET _IO(KVMIO, 0xc3)
#define KVM_S390_CLEAR_RESET _IO(KVMIO, 0xc4)
-struct kvm_s390_pv_sec_parm {
- __u64 origin;
- __u64 length;
-};
-
-struct kvm_s390_pv_unp {
- __u64 addr;
- __u64 size;
- __u64 tweak;
-};
-
-enum pv_cmd_dmp_id {
- KVM_PV_DUMP_INIT,
- KVM_PV_DUMP_CONFIG_STOR_STATE,
- KVM_PV_DUMP_COMPLETE,
- KVM_PV_DUMP_CPU,
-};
-
-struct kvm_s390_pv_dmp {
- __u64 subcmd;
- __u64 buff_addr;
- __u64 buff_len;
- __u64 gaddr; /* For dump storage state */
- __u64 reserved[4];
-};
-
-enum pv_cmd_info_id {
- KVM_PV_INFO_VM,
- KVM_PV_INFO_DUMP,
-};
-
-struct kvm_s390_pv_info_dump {
- __u64 dump_cpu_buffer_len;
- __u64 dump_config_mem_buffer_per_1m;
- __u64 dump_config_finalize_len;
-};
-
-struct kvm_s390_pv_info_vm {
- __u64 inst_calls_list[4];
- __u64 max_cpus;
- __u64 max_guests;
- __u64 max_guest_addr;
- __u64 feature_indication;
-};
-
-struct kvm_s390_pv_info_header {
- __u32 id;
- __u32 len_max;
- __u32 len_written;
- __u32 reserved;
-};
-
-struct kvm_s390_pv_info {
- struct kvm_s390_pv_info_header header;
- union {
- struct kvm_s390_pv_info_dump dump;
- struct kvm_s390_pv_info_vm vm;
- };
-};
-
-enum pv_cmd_id {
- KVM_PV_ENABLE,
- KVM_PV_DISABLE,
- KVM_PV_SET_SEC_PARMS,
- KVM_PV_UNPACK,
- KVM_PV_VERIFY,
- KVM_PV_PREP_RESET,
- KVM_PV_UNSHARE_ALL,
- KVM_PV_INFO,
- KVM_PV_DUMP,
- KVM_PV_ASYNC_CLEANUP_PREPARE,
- KVM_PV_ASYNC_CLEANUP_PERFORM,
-};
-
-struct kvm_pv_cmd {
- __u32 cmd; /* Command to be executed */
- __u16 rc; /* Ultravisor return code */
- __u16 rrc; /* Ultravisor return reason code */
- __u64 data; /* Data or address */
- __u32 flags; /* flags for future extensions. Must be 0 for now */
- __u32 reserved[3];
-};
-
/* Available with KVM_CAP_S390_PROTECTED */
#define KVM_S390_PV_COMMAND _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
@@ -1737,58 +1430,6 @@ struct kvm_pv_cmd {
#define KVM_XEN_HVM_GET_ATTR _IOWR(KVMIO, 0xc8, struct kvm_xen_hvm_attr)
#define KVM_XEN_HVM_SET_ATTR _IOW(KVMIO, 0xc9, struct kvm_xen_hvm_attr)
-struct kvm_xen_hvm_attr {
- __u16 type;
- __u16 pad[3];
- union {
- __u8 long_mode;
- __u8 vector;
- __u8 runstate_update_flag;
- struct {
- __u64 gfn;
-#define KVM_XEN_INVALID_GFN ((__u64)-1)
- } shared_info;
- struct {
- __u32 send_port;
- __u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */
- __u32 flags;
-#define KVM_XEN_EVTCHN_DEASSIGN (1 << 0)
-#define KVM_XEN_EVTCHN_UPDATE (1 << 1)
-#define KVM_XEN_EVTCHN_RESET (1 << 2)
- /*
- * Events sent by the guest are either looped back to
- * the guest itself (potentially on a different port#)
- * or signalled via an eventfd.
- */
- union {
- struct {
- __u32 port;
- __u32 vcpu;
- __u32 priority;
- } port;
- struct {
- __u32 port; /* Zero for eventfd */
- __s32 fd;
- } eventfd;
- __u32 padding[4];
- } deliver;
- } evtchn;
- __u32 xen_version;
- __u64 pad[8];
- } u;
-};
-
-
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
-#define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0
-#define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1
-#define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR 0x2
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
-#define KVM_XEN_ATTR_TYPE_EVTCHN 0x3
-#define KVM_XEN_ATTR_TYPE_XEN_VERSION 0x4
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */
-#define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG 0x5
-
/* Per-vCPU Xen attributes */
#define KVM_XEN_VCPU_GET_ATTR _IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr)
#define KVM_XEN_VCPU_SET_ATTR _IOW(KVMIO, 0xcb, struct kvm_xen_vcpu_attr)
@@ -1799,242 +1440,6 @@ struct kvm_xen_hvm_attr {
#define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2)
#define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2)
-struct kvm_xen_vcpu_attr {
- __u16 type;
- __u16 pad[3];
- union {
- __u64 gpa;
-#define KVM_XEN_INVALID_GPA ((__u64)-1)
- __u64 pad[8];
- struct {
- __u64 state;
- __u64 state_entry_time;
- __u64 time_running;
- __u64 time_runnable;
- __u64 time_blocked;
- __u64 time_offline;
- } runstate;
- __u32 vcpu_id;
- struct {
- __u32 port;
- __u32 priority;
- __u64 expires_ns;
- } timer;
- __u8 vector;
- } u;
-};
-
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
-#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO 0x0
-#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO 0x1
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR 0x2
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT 0x3
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA 0x4
-#define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5
-/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */
-#define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID 0x6
-#define KVM_XEN_VCPU_ATTR_TYPE_TIMER 0x7
-#define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR 0x8
-
-/* Secure Encrypted Virtualization command */
-enum sev_cmd_id {
- /* Guest initialization commands */
- KVM_SEV_INIT = 0,
- KVM_SEV_ES_INIT,
- /* Guest launch commands */
- KVM_SEV_LAUNCH_START,
- KVM_SEV_LAUNCH_UPDATE_DATA,
- KVM_SEV_LAUNCH_UPDATE_VMSA,
- KVM_SEV_LAUNCH_SECRET,
- KVM_SEV_LAUNCH_MEASURE,
- KVM_SEV_LAUNCH_FINISH,
- /* Guest migration commands (outgoing) */
- KVM_SEV_SEND_START,
- KVM_SEV_SEND_UPDATE_DATA,
- KVM_SEV_SEND_UPDATE_VMSA,
- KVM_SEV_SEND_FINISH,
- /* Guest migration commands (incoming) */
- KVM_SEV_RECEIVE_START,
- KVM_SEV_RECEIVE_UPDATE_DATA,
- KVM_SEV_RECEIVE_UPDATE_VMSA,
- KVM_SEV_RECEIVE_FINISH,
- /* Guest status and debug commands */
- KVM_SEV_GUEST_STATUS,
- KVM_SEV_DBG_DECRYPT,
- KVM_SEV_DBG_ENCRYPT,
- /* Guest certificates commands */
- KVM_SEV_CERT_EXPORT,
- /* Attestation report */
- KVM_SEV_GET_ATTESTATION_REPORT,
- /* Guest Migration Extension */
- KVM_SEV_SEND_CANCEL,
-
- KVM_SEV_NR_MAX,
-};
-
-struct kvm_sev_cmd {
- __u32 id;
- __u64 data;
- __u32 error;
- __u32 sev_fd;
-};
-
-struct kvm_sev_launch_start {
- __u32 handle;
- __u32 policy;
- __u64 dh_uaddr;
- __u32 dh_len;
- __u64 session_uaddr;
- __u32 session_len;
-};
-
-struct kvm_sev_launch_update_data {
- __u64 uaddr;
- __u32 len;
-};
-
-
-struct kvm_sev_launch_secret {
- __u64 hdr_uaddr;
- __u32 hdr_len;
- __u64 guest_uaddr;
- __u32 guest_len;
- __u64 trans_uaddr;
- __u32 trans_len;
-};
-
-struct kvm_sev_launch_measure {
- __u64 uaddr;
- __u32 len;
-};
-
-struct kvm_sev_guest_status {
- __u32 handle;
- __u32 policy;
- __u32 state;
-};
-
-struct kvm_sev_dbg {
- __u64 src_uaddr;
- __u64 dst_uaddr;
- __u32 len;
-};
-
-struct kvm_sev_attestation_report {
- __u8 mnonce[16];
- __u64 uaddr;
- __u32 len;
-};
-
-struct kvm_sev_send_start {
- __u32 policy;
- __u64 pdh_cert_uaddr;
- __u32 pdh_cert_len;
- __u64 plat_certs_uaddr;
- __u32 plat_certs_len;
- __u64 amd_certs_uaddr;
- __u32 amd_certs_len;
- __u64 session_uaddr;
- __u32 session_len;
-};
-
-struct kvm_sev_send_update_data {
- __u64 hdr_uaddr;
- __u32 hdr_len;
- __u64 guest_uaddr;
- __u32 guest_len;
- __u64 trans_uaddr;
- __u32 trans_len;
-};
-
-struct kvm_sev_receive_start {
- __u32 handle;
- __u32 policy;
- __u64 pdh_uaddr;
- __u32 pdh_len;
- __u64 session_uaddr;
- __u32 session_len;
-};
-
-struct kvm_sev_receive_update_data {
- __u64 hdr_uaddr;
- __u32 hdr_len;
- __u64 guest_uaddr;
- __u32 guest_len;
- __u64 trans_uaddr;
- __u32 trans_len;
-};
-
-#define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
-#define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)
-#define KVM_DEV_ASSIGN_MASK_INTX (1 << 2)
-
-struct kvm_assigned_pci_dev {
- __u32 assigned_dev_id;
- __u32 busnr;
- __u32 devfn;
- __u32 flags;
- __u32 segnr;
- union {
- __u32 reserved[11];
- };
-};
-
-#define KVM_DEV_IRQ_HOST_INTX (1 << 0)
-#define KVM_DEV_IRQ_HOST_MSI (1 << 1)
-#define KVM_DEV_IRQ_HOST_MSIX (1 << 2)
-
-#define KVM_DEV_IRQ_GUEST_INTX (1 << 8)
-#define KVM_DEV_IRQ_GUEST_MSI (1 << 9)
-#define KVM_DEV_IRQ_GUEST_MSIX (1 << 10)
-
-#define KVM_DEV_IRQ_HOST_MASK 0x00ff
-#define KVM_DEV_IRQ_GUEST_MASK 0xff00
-
-struct kvm_assigned_irq {
- __u32 assigned_dev_id;
- __u32 host_irq; /* ignored (legacy field) */
- __u32 guest_irq;
- __u32 flags;
- union {
- __u32 reserved[12];
- };
-};
-
-struct kvm_assigned_msix_nr {
- __u32 assigned_dev_id;
- __u16 entry_nr;
- __u16 padding;
-};
-
-#define KVM_MAX_MSIX_PER_DEV 256
-struct kvm_assigned_msix_entry {
- __u32 assigned_dev_id;
- __u32 gsi;
- __u16 entry; /* The index of entry in the MSI-X table */
- __u16 padding[3];
-};
-
-#define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0)
-#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1)
-
-/* Available with KVM_CAP_ARM_USER_IRQ */
-
-/* Bits for run->s.regs.device_irq_level */
-#define KVM_ARM_DEV_EL1_VTIMER (1 << 0)
-#define KVM_ARM_DEV_EL1_PTIMER (1 << 1)
-#define KVM_ARM_DEV_PMU (1 << 2)
-
-struct kvm_hyperv_eventfd {
- __u32 conn_id;
- __s32 fd;
- __u32 flags;
- __u32 padding[3];
-};
-
-#define KVM_HYPERV_CONN_ID_MASK 0x00ffffff
-#define KVM_HYPERV_EVENTFD_DEASSIGN (1 << 0)
-
#define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0)
#define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1)
@@ -2180,33 +1585,6 @@ struct kvm_stats_desc {
/* Available with KVM_CAP_S390_ZPCI_OP */
#define KVM_S390_ZPCI_OP _IOW(KVMIO, 0xd1, struct kvm_s390_zpci_op)
-struct kvm_s390_zpci_op {
- /* in */
- __u32 fh; /* target device */
- __u8 op; /* operation to perform */
- __u8 pad[3];
- union {
- /* for KVM_S390_ZPCIOP_REG_AEN */
- struct {
- __u64 ibv; /* Guest addr of interrupt bit vector */
- __u64 sb; /* Guest addr of summary bit */
- __u32 flags;
- __u32 noi; /* Number of interrupts */
- __u8 isc; /* Guest interrupt subclass */
- __u8 sbo; /* Offset of guest summary bit vector */
- __u16 pad;
- } reg_aen;
- __u64 reserved[8];
- } u;
-};
-
-/* types for kvm_s390_zpci_op->op */
-#define KVM_S390_ZPCIOP_REG_AEN 0
-#define KVM_S390_ZPCIOP_DEREG_AEN 1
-
-/* flags for kvm_s390_zpci_op->u.reg_aen.flags */
-#define KVM_S390_ZPCIOP_REGAEN_HOST (1 << 0)
-
/* Available with KVM_CAP_MEMORY_ATTRIBUTES */
#define KVM_SET_MEMORY_ATTRIBUTES _IOW(KVMIO, 0xd2, struct kvm_memory_attributes)
@@ -2227,4 +1605,13 @@ struct kvm_create_guest_memfd {
__u64 reserved[6];
};
+#define KVM_PRE_FAULT_MEMORY _IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
+
+struct kvm_pre_fault_memory {
+ __u64 gpa;
+ __u64 size;
+ __u64 flags;
+ __u64 padding[5];
+};
+
#endif /* __LINUX_KVM_H */
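
As a usage sketch of the KVM_PRE_FAULT_MEMORY ioctl added above (assuming kernel headers new enough to define it): the call takes a struct kvm_pre_fault_memory describing a guest-physical range and is issued on a vCPU file descriptor. The vcpu_fd, the range values and the minimal error handling below are illustrative assumptions, not part of this header.

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdio.h>

/* Sketch: ask KVM to pre-populate guest mappings for a 2 MiB range.
 * 'vcpu_fd' is assumed to come from KVM_CREATE_VCPU; flags must stay 0. */
static int pre_fault_guest_range(int vcpu_fd)
{
	struct kvm_pre_fault_memory range = {
		.gpa  = 0x100000ULL,	/* illustrative guest-physical address */
		.size = 0x200000ULL,	/* 2 MiB */
	};

	if (ioctl(vcpu_fd, KVM_PRE_FAULT_MEMORY, &range) < 0) {
		perror("KVM_PRE_FAULT_MEMORY");
		return -1;
	}
	return 0;
}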
diff --git a/tools/include/uapi/linux/memfd.h b/tools/include/uapi/linux/memfd.h
new file mode 100644
index 000000000000..01c0324e7733
--- /dev/null
+++ b/tools/include/uapi/linux/memfd.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_MEMFD_H
+#define _LINUX_MEMFD_H
+
+#include <asm-generic/hugetlb_encode.h>
+
+/* flags for memfd_create(2) (unsigned int) */
+#define MFD_CLOEXEC 0x0001U
+#define MFD_ALLOW_SEALING 0x0002U
+#define MFD_HUGETLB 0x0004U
+/* not executable and sealed to prevent changing to executable. */
+#define MFD_NOEXEC_SEAL 0x0008U
+/* executable */
+#define MFD_EXEC 0x0010U
+
+/*
+ * Huge page size encoding when MFD_HUGETLB is specified, and a huge page
+ * size other than the default is desired. See hugetlb_encode.h.
+ * All known huge page size encodings are provided here. It is the
+ * responsibility of the application to know which sizes are supported on
+ * the running system. See mmap(2) man page for details.
+ */
+#define MFD_HUGE_SHIFT HUGETLB_FLAG_ENCODE_SHIFT
+#define MFD_HUGE_MASK HUGETLB_FLAG_ENCODE_MASK
+
+#define MFD_HUGE_64KB HUGETLB_FLAG_ENCODE_64KB
+#define MFD_HUGE_512KB HUGETLB_FLAG_ENCODE_512KB
+#define MFD_HUGE_1MB HUGETLB_FLAG_ENCODE_1MB
+#define MFD_HUGE_2MB HUGETLB_FLAG_ENCODE_2MB
+#define MFD_HUGE_8MB HUGETLB_FLAG_ENCODE_8MB
+#define MFD_HUGE_16MB HUGETLB_FLAG_ENCODE_16MB
+#define MFD_HUGE_32MB HUGETLB_FLAG_ENCODE_32MB
+#define MFD_HUGE_256MB HUGETLB_FLAG_ENCODE_256MB
+#define MFD_HUGE_512MB HUGETLB_FLAG_ENCODE_512MB
+#define MFD_HUGE_1GB HUGETLB_FLAG_ENCODE_1GB
+#define MFD_HUGE_2GB HUGETLB_FLAG_ENCODE_2GB
+#define MFD_HUGE_16GB HUGETLB_FLAG_ENCODE_16GB
+
+#endif /* _LINUX_MEMFD_H */
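
A brief usage sketch of the flags defined above, assuming glibc 2.27+ where memfd_create(2) is declared in <sys/mman.h>; the descriptor name, size and the 2 MB page-size choice are illustrative.

#define _GNU_SOURCE
#include <linux/memfd.h>	/* MFD_* flags, including the MFD_HUGE_* sizes */
#include <sys/mman.h>		/* memfd_create() (glibc >= 2.27) */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* 2 MB explicit huge page size, encoded via HUGETLB_FLAG_ENCODE_2MB. */
	int fd = memfd_create("hugetlb-demo",
			      MFD_CLOEXEC | MFD_HUGETLB | MFD_HUGE_2MB);
	if (fd < 0) {
		perror("memfd_create");	/* e.g. no 2 MB pages reserved */
		return 1;
	}
	/* Size must be a multiple of the chosen huge page size. */
	if (ftruncate(fd, 2 * 1024 * 1024) < 0)
		perror("ftruncate");
	close(fd);
	return 0;
}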
diff --git a/tools/include/uapi/linux/mman.h b/tools/include/uapi/linux/mman.h
index a246e11988d5..e89d00528f2f 100644
--- a/tools/include/uapi/linux/mman.h
+++ b/tools/include/uapi/linux/mman.h
@@ -17,6 +17,7 @@
#define MAP_SHARED 0x01 /* Share changes */
#define MAP_PRIVATE 0x02 /* Changes are private */
#define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */
+#define MAP_DROPPABLE 0x08 /* Zero memory under memory pressure. */
/*
* Huge page size encoding when MAP_HUGETLB is specified, and a huge page
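
The MAP_DROPPABLE flag added above marks anonymous memory whose pages the kernel may free and refill with zeroes under memory pressure, which suits caches that can be regenerated. A hedged sketch, assuming a libc whose <sys/mman.h> already exposes the flag (otherwise <linux/mman.h> or a manual define is needed):

#include <sys/mman.h>
#include <stddef.h>

/* Sketch: allocate a cache whose pages may silently revert to zeroes;
 * callers must treat an all-zero entry as "not cached". */
static void *alloc_droppable(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_DROPPABLE, -1, 0);
	return p == MAP_FAILED ? NULL : p;
}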
diff --git a/tools/include/uapi/linux/mount.h b/tools/include/uapi/linux/mount.h
index ad5478dbad00..7fa67c2031a5 100644
--- a/tools/include/uapi/linux/mount.h
+++ b/tools/include/uapi/linux/mount.h
@@ -154,7 +154,7 @@ struct mount_attr {
*/
struct statmount {
__u32 size; /* Total size, including strings */
- __u32 __spare1;
+ __u32 mnt_opts; /* [str] Options (comma separated, escaped) */
__u64 mask; /* What results were written */
__u32 sb_dev_major; /* Device ID */
__u32 sb_dev_minor;
@@ -172,7 +172,19 @@ struct statmount {
__u64 propagate_from; /* Propagation from in current namespace */
__u32 mnt_root; /* [str] Root of mount relative to root of fs */
__u32 mnt_point; /* [str] Mountpoint relative to current root */
- __u64 __spare2[50];
+ __u64 mnt_ns_id; /* ID of the mount namespace */
+ __u32 fs_subtype; /* [str] Subtype of fs_type (if any) */
+ __u32 sb_source; /* [str] Source string of the mount */
+ __u32 opt_num; /* Number of fs options */
+ __u32 opt_array; /* [str] Array of nul terminated fs options */
+ __u32 opt_sec_num; /* Number of security options */
+ __u32 opt_sec_array; /* [str] Array of nul terminated security options */
+ __u64 supported_mask; /* Mask flags that this kernel supports */
+ __u32 mnt_uidmap_num; /* Number of uid mappings */
+	__u32 mnt_uidmap;		/* [str] Array of uid mappings (as seen from caller's namespace) */
+ __u32 mnt_gidmap_num; /* Number of gid mappings */
+	__u32 mnt_gidmap;		/* [str] Array of gid mappings (as seen from caller's namespace) */
+ __u64 __spare2[43];
char str[]; /* Variable size part containing strings */
};
@@ -188,10 +200,12 @@ struct mnt_id_req {
__u32 spare;
__u64 mnt_id;
__u64 param;
+ __u64 mnt_ns_id;
};
/* List of all mnt_id_req versions. */
#define MNT_ID_REQ_SIZE_VER0 24 /* sizeof first published struct */
+#define MNT_ID_REQ_SIZE_VER1 32 /* sizeof second published struct */
/*
* @mask bits for statmount(2)
@@ -202,10 +216,20 @@ struct mnt_id_req {
#define STATMOUNT_MNT_ROOT 0x00000008U /* Want/got mnt_root */
#define STATMOUNT_MNT_POINT 0x00000010U /* Want/got mnt_point */
#define STATMOUNT_FS_TYPE 0x00000020U /* Want/got fs_type */
+#define STATMOUNT_MNT_NS_ID 0x00000040U /* Want/got mnt_ns_id */
+#define STATMOUNT_MNT_OPTS 0x00000080U /* Want/got mnt_opts */
+#define STATMOUNT_FS_SUBTYPE 0x00000100U /* Want/got fs_subtype */
+#define STATMOUNT_SB_SOURCE 0x00000200U /* Want/got sb_source */
+#define STATMOUNT_OPT_ARRAY 0x00000400U /* Want/got opt_... */
+#define STATMOUNT_OPT_SEC_ARRAY 0x00000800U /* Want/got opt_sec... */
+#define STATMOUNT_SUPPORTED_MASK 0x00001000U /* Want/got supported mask flags */
+#define STATMOUNT_MNT_UIDMAP 0x00002000U /* Want/got uidmap... */
+#define STATMOUNT_MNT_GIDMAP 0x00004000U /* Want/got gidmap... */
/*
* Special @mnt_id values that can be passed to listmount
*/
#define LSMT_ROOT 0xffffffffffffffff /* root mount */
+#define LISTMOUNT_REVERSE (1 << 0) /* List later mounts first */
#endif /* _UAPI_LINUX_MOUNT_H */
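
The new statmount(2)/listmount(2) fields above are reached through the raw syscall (no glibc wrapper at the time of this sync). The sketch below prints the mount options of one mount; the fallback syscall number, the buffer size and the origin of the 64-bit mount ID (statx() STATX_MNT_ID_UNIQUE or listmount()) are assumptions for illustration.

#include <linux/mount.h>	/* struct mnt_id_req, struct statmount, STATMOUNT_* */
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

#ifndef __NR_statmount
#define __NR_statmount 457	/* assumed syscall number (Linux 6.8+) */
#endif

/* Sketch: print option and source strings of the mount with the given unique ID. */
static void print_mount_opts(__u64 mnt_id)
{
	static __u64 raw[8192];	/* 64 KiB, 8-byte aligned reply buffer */
	struct statmount *sm = (struct statmount *)raw;
	struct mnt_id_req req = {
		.size   = MNT_ID_REQ_SIZE_VER0,
		.mnt_id = mnt_id,
		.param  = STATMOUNT_MNT_OPTS | STATMOUNT_SB_SOURCE,
	};

	if (syscall(__NR_statmount, &req, sm, sizeof(raw), 0) < 0) {
		perror("statmount");
		return;
	}
	if (sm->mask & STATMOUNT_MNT_OPTS)	/* offsets index into sm->str[] */
		printf("options: %s\n", sm->str + sm->mnt_opts);
	if (sm->mask & STATMOUNT_SB_SOURCE)
		printf("source:  %s\n", sm->str + sm->sb_source);
}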
diff --git a/tools/include/uapi/linux/neighbour.h b/tools/include/uapi/linux/neighbour.h
new file mode 100644
index 000000000000..c34a81245f87
--- /dev/null
+++ b/tools/include/uapi/linux/neighbour.h
@@ -0,0 +1,229 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI__LINUX_NEIGHBOUR_H
+#define _UAPI__LINUX_NEIGHBOUR_H
+
+#include <linux/types.h>
+#include <linux/netlink.h>
+
+struct ndmsg {
+ __u8 ndm_family;
+ __u8 ndm_pad1;
+ __u16 ndm_pad2;
+ __s32 ndm_ifindex;
+ __u16 ndm_state;
+ __u8 ndm_flags;
+ __u8 ndm_type;
+};
+
+enum {
+ NDA_UNSPEC,
+ NDA_DST,
+ NDA_LLADDR,
+ NDA_CACHEINFO,
+ NDA_PROBES,
+ NDA_VLAN,
+ NDA_PORT,
+ NDA_VNI,
+ NDA_IFINDEX,
+ NDA_MASTER,
+ NDA_LINK_NETNSID,
+ NDA_SRC_VNI,
+ NDA_PROTOCOL, /* Originator of entry */
+ NDA_NH_ID,
+ NDA_FDB_EXT_ATTRS,
+ NDA_FLAGS_EXT,
+ NDA_NDM_STATE_MASK,
+ NDA_NDM_FLAGS_MASK,
+ __NDA_MAX
+};
+
+#define NDA_MAX (__NDA_MAX - 1)
+
+/*
+ * Neighbor Cache Entry Flags
+ */
+
+#define NTF_USE (1 << 0)
+#define NTF_SELF (1 << 1)
+#define NTF_MASTER (1 << 2)
+#define NTF_PROXY (1 << 3) /* == ATF_PUBL */
+#define NTF_EXT_LEARNED (1 << 4)
+#define NTF_OFFLOADED (1 << 5)
+#define NTF_STICKY (1 << 6)
+#define NTF_ROUTER (1 << 7)
+/* Extended flags under NDA_FLAGS_EXT: */
+#define NTF_EXT_MANAGED (1 << 0)
+#define NTF_EXT_LOCKED (1 << 1)
+#define NTF_EXT_EXT_VALIDATED (1 << 2)
+
+/*
+ * Neighbor Cache Entry States.
+ */
+
+#define NUD_INCOMPLETE 0x01
+#define NUD_REACHABLE 0x02
+#define NUD_STALE 0x04
+#define NUD_DELAY 0x08
+#define NUD_PROBE 0x10
+#define NUD_FAILED 0x20
+
+/* Dummy states */
+#define NUD_NOARP 0x40
+#define NUD_PERMANENT 0x80
+#define NUD_NONE 0x00
+
+/* NUD_NOARP & NUD_PERMANENT are pseudostates, they never change and make no
+ * address resolution or NUD.
+ *
+ * NUD_PERMANENT also cannot be deleted by garbage collectors. This holds true
+ * for dynamic entries with NTF_EXT_LEARNED flag as well. However, upon carrier
+ * down event, NUD_PERMANENT entries are not flushed whereas NTF_EXT_LEARNED
+ * flagged entries explicitly are (which is also consistent with the routing
+ * subsystem).
+ *
+ * When NTF_EXT_LEARNED is set for a bridge fdb entry the different cache entry
+ * states don't make sense and thus are ignored. Such entries don't age and
+ * can roam.
+ *
+ * NTF_EXT_MANAGED flagged neighbor entries are managed by the kernel on behalf
+ * of a user space control plane, and automatically refreshed so that (if
+ * possible) they remain in NUD_REACHABLE state.
+ *
+ * NTF_EXT_LOCKED flagged bridge FDB entries are entries generated by the
+ * bridge in response to a host trying to communicate via a locked bridge port
+ * with MAB enabled. Their purpose is to notify user space that a host requires
+ * authentication.
+ *
+ * NTF_EXT_EXT_VALIDATED flagged neighbor entries were externally validated by
+ * a user space control plane. The kernel will not remove or invalidate them,
+ * but it can probe them and notify user space when they become reachable.
+ */
+
+struct nda_cacheinfo {
+ __u32 ndm_confirmed;
+ __u32 ndm_used;
+ __u32 ndm_updated;
+ __u32 ndm_refcnt;
+};
+
+/*****************************************************************
+ * Neighbour tables specific messages.
+ *
+ * To retrieve the neighbour tables send RTM_GETNEIGHTBL with the
+ * NLM_F_DUMP flag set. Every neighbour table configuration is
+ * spread over multiple messages to avoid running into message
+ * size limits on systems with many interfaces. The first message
+ * in the sequence transports all not device specific data such as
+ * statistics, configuration, and the default parameter set.
+ * This message is followed by 0..n messages carrying device
+ * specific parameter sets.
+ * Although the ordering should be sufficient, NDTA_NAME can be
+ * used to identify sequences. The initial message can be identified
+ * by checking for NDTA_CONFIG. The device specific messages do
+ * not contain this TLV but have NDTPA_IFINDEX set to the
+ * corresponding interface index.
+ *
+ * To change neighbour table attributes, send RTM_SETNEIGHTBL
+ * with NDTA_NAME set. Changeable attributes include NDTA_THRESH[1-3],
+ * NDTA_GC_INTERVAL, and all TLVs in NDTA_PARMS unless marked
+ * otherwise. Device specific parameter sets can be changed by
+ * setting NDTPA_IFINDEX to the interface index of the corresponding
+ * device.
+ ****/
+
+struct ndt_stats {
+ __u64 ndts_allocs;
+ __u64 ndts_destroys;
+ __u64 ndts_hash_grows;
+ __u64 ndts_res_failed;
+ __u64 ndts_lookups;
+ __u64 ndts_hits;
+ __u64 ndts_rcv_probes_mcast;
+ __u64 ndts_rcv_probes_ucast;
+ __u64 ndts_periodic_gc_runs;
+ __u64 ndts_forced_gc_runs;
+ __u64 ndts_table_fulls;
+};
+
+enum {
+ NDTPA_UNSPEC,
+ NDTPA_IFINDEX, /* u32, unchangeable */
+ NDTPA_REFCNT, /* u32, read-only */
+ NDTPA_REACHABLE_TIME, /* u64, read-only, msecs */
+ NDTPA_BASE_REACHABLE_TIME, /* u64, msecs */
+ NDTPA_RETRANS_TIME, /* u64, msecs */
+ NDTPA_GC_STALETIME, /* u64, msecs */
+ NDTPA_DELAY_PROBE_TIME, /* u64, msecs */
+ NDTPA_QUEUE_LEN, /* u32 */
+ NDTPA_APP_PROBES, /* u32 */
+ NDTPA_UCAST_PROBES, /* u32 */
+ NDTPA_MCAST_PROBES, /* u32 */
+ NDTPA_ANYCAST_DELAY, /* u64, msecs */
+ NDTPA_PROXY_DELAY, /* u64, msecs */
+ NDTPA_PROXY_QLEN, /* u32 */
+ NDTPA_LOCKTIME, /* u64, msecs */
+ NDTPA_QUEUE_LENBYTES, /* u32 */
+ NDTPA_MCAST_REPROBES, /* u32 */
+ NDTPA_PAD,
+ NDTPA_INTERVAL_PROBE_TIME_MS, /* u64, msecs */
+ __NDTPA_MAX
+};
+#define NDTPA_MAX (__NDTPA_MAX - 1)
+
+struct ndtmsg {
+ __u8 ndtm_family;
+ __u8 ndtm_pad1;
+ __u16 ndtm_pad2;
+};
+
+struct ndt_config {
+ __u16 ndtc_key_len;
+ __u16 ndtc_entry_size;
+ __u32 ndtc_entries;
+ __u32 ndtc_last_flush; /* delta to now in msecs */
+ __u32 ndtc_last_rand; /* delta to now in msecs */
+ __u32 ndtc_hash_rnd;
+ __u32 ndtc_hash_mask;
+ __u32 ndtc_hash_chain_gc;
+ __u32 ndtc_proxy_qlen;
+};
+
+enum {
+ NDTA_UNSPEC,
+ NDTA_NAME, /* char *, unchangeable */
+ NDTA_THRESH1, /* u32 */
+ NDTA_THRESH2, /* u32 */
+ NDTA_THRESH3, /* u32 */
+ NDTA_CONFIG, /* struct ndt_config, read-only */
+ NDTA_PARMS, /* nested TLV NDTPA_* */
+ NDTA_STATS, /* struct ndt_stats, read-only */
+ NDTA_GC_INTERVAL, /* u64, msecs */
+ NDTA_PAD,
+ __NDTA_MAX
+};
+#define NDTA_MAX (__NDTA_MAX - 1)
+
+ /* FDB activity notification bits used in NFEA_ACTIVITY_NOTIFY:
+ * - FDB_NOTIFY_BIT - notify on activity/expire for any entry
+ * - FDB_NOTIFY_INACTIVE_BIT - mark as inactive to avoid multiple notifications
+ */
+enum {
+ FDB_NOTIFY_BIT = (1 << 0),
+ FDB_NOTIFY_INACTIVE_BIT = (1 << 1)
+};
+
+/* embedded into NDA_FDB_EXT_ATTRS:
+ * [NDA_FDB_EXT_ATTRS] = {
+ * [NFEA_ACTIVITY_NOTIFY]
+ * ...
+ * }
+ */
+enum {
+ NFEA_UNSPEC,
+ NFEA_ACTIVITY_NOTIFY,
+ NFEA_DONT_REFRESH,
+ __NFEA_MAX
+};
+#define NFEA_MAX (__NFEA_MAX - 1)
+
+#endif
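
The comment block above describes retrieving neighbour tables with RTM_GETNEIGHTBL dump requests over rtnetlink. Below is a minimal, hedged sketch of issuing such a request; reply handling and attribute parsing are intentionally elided.

#include <linux/netlink.h>
#include <linux/rtnetlink.h>	/* RTM_GETNEIGHTBL */
#include <linux/neighbour.h>	/* struct ndtmsg, NDTA_* */
#include <sys/socket.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: send an RTM_GETNEIGHTBL dump request; a real client would loop
 * over recv() and walk the NDTA_* attributes (NDTA_NAME, NDTA_CONFIG,
 * NDTA_PARMS, ...) of each reply message until NLMSG_DONE. */
static int request_neigh_tables(void)
{
	struct {
		struct nlmsghdr nlh;
		struct ndtmsg   ndtm;
	} req = {
		.nlh = {
			.nlmsg_len   = NLMSG_LENGTH(sizeof(struct ndtmsg)),
			.nlmsg_type  = RTM_GETNEIGHTBL,
			.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
			.nlmsg_seq   = 1,
		},
		.ndtm = { .ndtm_family = AF_UNSPEC },
	};
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	if (fd < 0 || send(fd, &req, req.nlh.nlmsg_len, 0) < 0) {
		perror("rtnetlink");
		if (fd >= 0)
			close(fd);
		return -1;
	}
	/* ... receive and parse the NLMSG_DONE-terminated replies here ... */
	close(fd);
	return 0;
}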
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 93cb411adf72..48eb49aa03d4 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -59,10 +59,13 @@ enum netdev_xdp_rx_metadata {
* by the driver.
* @NETDEV_XSK_FLAGS_TX_CHECKSUM: L3 checksum HW offload is supported by the
* driver.
+ * @NETDEV_XSK_FLAGS_TX_LAUNCH_TIME_FIFO: Launch time HW offload is supported
+ * by the driver.
*/
enum netdev_xsk_flags {
NETDEV_XSK_FLAGS_TX_TIMESTAMP = 1,
NETDEV_XSK_FLAGS_TX_CHECKSUM = 2,
+ NETDEV_XSK_FLAGS_TX_LAUNCH_TIME_FIFO = 4,
};
enum netdev_queue_type {
@@ -70,6 +73,15 @@ enum netdev_queue_type {
NETDEV_QUEUE_TYPE_TX,
};
+enum netdev_qstats_scope {
+ NETDEV_QSTATS_SCOPE_QUEUE = 1,
+};
+
+enum netdev_napi_threaded {
+ NETDEV_NAPI_THREADED_DISABLED,
+ NETDEV_NAPI_THREADED_ENABLED,
+};
+
enum {
NETDEV_A_DEV_IFINDEX = 1,
NETDEV_A_DEV_PAD,
@@ -83,12 +95,19 @@ enum {
};
enum {
+ __NETDEV_A_IO_URING_PROVIDER_INFO_MAX,
+ NETDEV_A_IO_URING_PROVIDER_INFO_MAX = (__NETDEV_A_IO_URING_PROVIDER_INFO_MAX - 1)
+};
+
+enum {
NETDEV_A_PAGE_POOL_ID = 1,
NETDEV_A_PAGE_POOL_IFINDEX,
NETDEV_A_PAGE_POOL_NAPI_ID,
NETDEV_A_PAGE_POOL_INFLIGHT,
NETDEV_A_PAGE_POOL_INFLIGHT_MEM,
NETDEV_A_PAGE_POOL_DETACH_TIME,
+ NETDEV_A_PAGE_POOL_DMABUF,
+ NETDEV_A_PAGE_POOL_IO_URING,
__NETDEV_A_PAGE_POOL_MAX,
NETDEV_A_PAGE_POOL_MAX = (__NETDEV_A_PAGE_POOL_MAX - 1)
@@ -117,22 +136,81 @@ enum {
NETDEV_A_NAPI_ID,
NETDEV_A_NAPI_IRQ,
NETDEV_A_NAPI_PID,
+ NETDEV_A_NAPI_DEFER_HARD_IRQS,
+ NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+ NETDEV_A_NAPI_IRQ_SUSPEND_TIMEOUT,
+ NETDEV_A_NAPI_THREADED,
__NETDEV_A_NAPI_MAX,
NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
};
enum {
+ __NETDEV_A_XSK_INFO_MAX,
+ NETDEV_A_XSK_INFO_MAX = (__NETDEV_A_XSK_INFO_MAX - 1)
+};
+
+enum {
NETDEV_A_QUEUE_ID = 1,
NETDEV_A_QUEUE_IFINDEX,
NETDEV_A_QUEUE_TYPE,
NETDEV_A_QUEUE_NAPI_ID,
+ NETDEV_A_QUEUE_DMABUF,
+ NETDEV_A_QUEUE_IO_URING,
+ NETDEV_A_QUEUE_XSK,
__NETDEV_A_QUEUE_MAX,
NETDEV_A_QUEUE_MAX = (__NETDEV_A_QUEUE_MAX - 1)
};
enum {
+ NETDEV_A_QSTATS_IFINDEX = 1,
+ NETDEV_A_QSTATS_QUEUE_TYPE,
+ NETDEV_A_QSTATS_QUEUE_ID,
+ NETDEV_A_QSTATS_SCOPE,
+ NETDEV_A_QSTATS_RX_PACKETS = 8,
+ NETDEV_A_QSTATS_RX_BYTES,
+ NETDEV_A_QSTATS_TX_PACKETS,
+ NETDEV_A_QSTATS_TX_BYTES,
+ NETDEV_A_QSTATS_RX_ALLOC_FAIL,
+ NETDEV_A_QSTATS_RX_HW_DROPS,
+ NETDEV_A_QSTATS_RX_HW_DROP_OVERRUNS,
+ NETDEV_A_QSTATS_RX_CSUM_COMPLETE,
+ NETDEV_A_QSTATS_RX_CSUM_UNNECESSARY,
+ NETDEV_A_QSTATS_RX_CSUM_NONE,
+ NETDEV_A_QSTATS_RX_CSUM_BAD,
+ NETDEV_A_QSTATS_RX_HW_GRO_PACKETS,
+ NETDEV_A_QSTATS_RX_HW_GRO_BYTES,
+ NETDEV_A_QSTATS_RX_HW_GRO_WIRE_PACKETS,
+ NETDEV_A_QSTATS_RX_HW_GRO_WIRE_BYTES,
+ NETDEV_A_QSTATS_RX_HW_DROP_RATELIMITS,
+ NETDEV_A_QSTATS_TX_HW_DROPS,
+ NETDEV_A_QSTATS_TX_HW_DROP_ERRORS,
+ NETDEV_A_QSTATS_TX_CSUM_NONE,
+ NETDEV_A_QSTATS_TX_NEEDS_CSUM,
+ NETDEV_A_QSTATS_TX_HW_GSO_PACKETS,
+ NETDEV_A_QSTATS_TX_HW_GSO_BYTES,
+ NETDEV_A_QSTATS_TX_HW_GSO_WIRE_PACKETS,
+ NETDEV_A_QSTATS_TX_HW_GSO_WIRE_BYTES,
+ NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS,
+ NETDEV_A_QSTATS_TX_STOP,
+ NETDEV_A_QSTATS_TX_WAKE,
+
+ __NETDEV_A_QSTATS_MAX,
+ NETDEV_A_QSTATS_MAX = (__NETDEV_A_QSTATS_MAX - 1)
+};
+
+enum {
+ NETDEV_A_DMABUF_IFINDEX = 1,
+ NETDEV_A_DMABUF_QUEUES,
+ NETDEV_A_DMABUF_FD,
+ NETDEV_A_DMABUF_ID,
+
+ __NETDEV_A_DMABUF_MAX,
+ NETDEV_A_DMABUF_MAX = (__NETDEV_A_DMABUF_MAX - 1)
+};
+
+enum {
NETDEV_CMD_DEV_GET = 1,
NETDEV_CMD_DEV_ADD_NTF,
NETDEV_CMD_DEV_DEL_NTF,
@@ -144,6 +222,10 @@ enum {
NETDEV_CMD_PAGE_POOL_STATS_GET,
NETDEV_CMD_QUEUE_GET,
NETDEV_CMD_NAPI_GET,
+ NETDEV_CMD_QSTATS_GET,
+ NETDEV_CMD_BIND_RX,
+ NETDEV_CMD_NAPI_SET,
+ NETDEV_CMD_BIND_TX,
__NETDEV_CMD_MAX,
NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1)
diff --git a/tools/include/uapi/linux/netfilter.h b/tools/include/uapi/linux/netfilter.h
new file mode 100644
index 000000000000..5a79ccb76701
--- /dev/null
+++ b/tools/include/uapi/linux/netfilter.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI__LINUX_NETFILTER_H
+#define _UAPI__LINUX_NETFILTER_H
+
+#include <linux/types.h>
+#include <linux/compiler.h>
+#include <linux/in.h>
+#include <linux/in6.h>
+
+/* Responses from hook functions. */
+#define NF_DROP 0
+#define NF_ACCEPT 1
+#define NF_STOLEN 2
+#define NF_QUEUE 3
+#define NF_REPEAT 4
+#define NF_STOP 5 /* Deprecated, for userspace nf_queue compatibility. */
+#define NF_MAX_VERDICT NF_STOP
+
+/* we overload the higher bits for encoding auxiliary data such as the queue
+ * number or errno values. Not nice, but better than additional function
+ * arguments. */
+#define NF_VERDICT_MASK 0x000000ff
+
+/* extra verdict flags have mask 0x0000ff00 */
+#define NF_VERDICT_FLAG_QUEUE_BYPASS 0x00008000
+
+/* queue number (NF_QUEUE) or errno (NF_DROP) */
+#define NF_VERDICT_QMASK 0xffff0000
+#define NF_VERDICT_QBITS 16
+
+#define NF_QUEUE_NR(x) ((((x) << 16) & NF_VERDICT_QMASK) | NF_QUEUE)
+
+#define NF_DROP_ERR(x) (((-x) << 16) | NF_DROP)
+
+/* only for userspace compatibility */
+#ifndef __KERNEL__
+
+/* NF_VERDICT_BITS should be 8 now, but userspace might break if this changes */
+#define NF_VERDICT_BITS 16
+#endif
+
+enum nf_inet_hooks {
+ NF_INET_PRE_ROUTING,
+ NF_INET_LOCAL_IN,
+ NF_INET_FORWARD,
+ NF_INET_LOCAL_OUT,
+ NF_INET_POST_ROUTING,
+ NF_INET_NUMHOOKS,
+ NF_INET_INGRESS = NF_INET_NUMHOOKS,
+};
+
+enum nf_dev_hooks {
+ NF_NETDEV_INGRESS,
+ NF_NETDEV_EGRESS,
+ NF_NETDEV_NUMHOOKS
+};
+
+enum {
+ NFPROTO_UNSPEC = 0,
+ NFPROTO_INET = 1,
+ NFPROTO_IPV4 = 2,
+ NFPROTO_ARP = 3,
+ NFPROTO_NETDEV = 5,
+ NFPROTO_BRIDGE = 7,
+ NFPROTO_IPV6 = 10,
+#ifndef __KERNEL__ /* no longer supported by kernel */
+ NFPROTO_DECNET = 12,
+#endif
+ NFPROTO_NUMPROTO,
+};
+
+union nf_inet_addr {
+ __u32 all[4];
+ __be32 ip;
+ __be32 ip6[4];
+ struct in_addr in;
+ struct in6_addr in6;
+};
+
+#endif /* _UAPI__LINUX_NETFILTER_H */
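
As the comment in this header notes, the upper verdict bits carry auxiliary data such as a queue number. A short illustration of encoding and decoding with the macros defined above:

#include <linux/netfilter.h>
#include <stdio.h>

int main(void)
{
	unsigned int verdict = NF_QUEUE_NR(5);	/* "queue this packet to queue 5" */

	printf("base verdict: %u (NF_QUEUE is %d)\n",
	       verdict & NF_VERDICT_MASK, NF_QUEUE);
	printf("queue number: %u\n",
	       (verdict & NF_VERDICT_QMASK) >> NF_VERDICT_QBITS);
	return 0;
}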
diff --git a/tools/include/uapi/linux/netfilter_arp.h b/tools/include/uapi/linux/netfilter_arp.h
new file mode 100644
index 000000000000..791dfc5ae907
--- /dev/null
+++ b/tools/include/uapi/linux/netfilter_arp.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-1.0+ WITH Linux-syscall-note */
+#ifndef __LINUX_ARP_NETFILTER_H
+#define __LINUX_ARP_NETFILTER_H
+
+/* ARP-specific defines for netfilter.
+ * (C)2002 Rusty Russell IBM -- This code is GPL.
+ */
+
+#include <linux/netfilter.h>
+
+/* There is no PF_ARP. */
+#define NF_ARP 0
+
+/* ARP Hooks */
+#define NF_ARP_IN 0
+#define NF_ARP_OUT 1
+#define NF_ARP_FORWARD 2
+
+#ifndef __KERNEL__
+#define NF_ARP_NUMHOOKS 3
+#endif
+
+#endif /* __LINUX_ARP_NETFILTER_H */
diff --git a/tools/include/uapi/linux/nsfs.h b/tools/include/uapi/linux/nsfs.h
new file mode 100644
index 000000000000..33c9b578b3b2
--- /dev/null
+++ b/tools/include/uapi/linux/nsfs.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef __LINUX_NSFS_H
+#define __LINUX_NSFS_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+#define NSIO 0xb7
+
+/* Returns a file descriptor that refers to an owning user namespace */
+#define NS_GET_USERNS _IO(NSIO, 0x1)
+/* Returns a file descriptor that refers to a parent namespace */
+#define NS_GET_PARENT _IO(NSIO, 0x2)
+/* Returns the type of namespace (CLONE_NEW* value) referred to by
+ file descriptor */
+#define NS_GET_NSTYPE _IO(NSIO, 0x3)
+/* Get owner UID (in the caller's user namespace) for a user namespace */
+#define NS_GET_OWNER_UID _IO(NSIO, 0x4)
+/* Translate pid from target pid namespace into the caller's pid namespace. */
+#define NS_GET_PID_FROM_PIDNS _IOR(NSIO, 0x6, int)
+/* Return thread-group leader id of pid in the caller's pid namespace. */
+#define NS_GET_TGID_FROM_PIDNS _IOR(NSIO, 0x7, int)
+/* Translate pid from caller's pid namespace into a target pid namespace. */
+#define NS_GET_PID_IN_PIDNS _IOR(NSIO, 0x8, int)
+/* Return thread-group leader id of pid in the target pid namespace. */
+#define NS_GET_TGID_IN_PIDNS _IOR(NSIO, 0x9, int)
+
+struct mnt_ns_info {
+ __u32 size;
+ __u32 nr_mounts;
+ __u64 mnt_ns_id;
+};
+
+#define MNT_NS_INFO_SIZE_VER0 16 /* size of first published struct */
+
+/* Get information about namespace. */
+#define NS_MNT_GET_INFO _IOR(NSIO, 10, struct mnt_ns_info)
+/* Get next namespace. */
+#define NS_MNT_GET_NEXT _IOR(NSIO, 11, struct mnt_ns_info)
+/* Get previous namespace. */
+#define NS_MNT_GET_PREV _IOR(NSIO, 12, struct mnt_ns_info)
+
+/* Retrieve namespace identifiers. */
+#define NS_GET_MNTNS_ID _IOR(NSIO, 5, __u64)
+#define NS_GET_ID _IOR(NSIO, 13, __u64)
+
+enum init_ns_ino {
+ IPC_NS_INIT_INO = 0xEFFFFFFFU,
+ UTS_NS_INIT_INO = 0xEFFFFFFEU,
+ USER_NS_INIT_INO = 0xEFFFFFFDU,
+ PID_NS_INIT_INO = 0xEFFFFFFCU,
+ CGROUP_NS_INIT_INO = 0xEFFFFFFBU,
+ TIME_NS_INIT_INO = 0xEFFFFFFAU,
+ NET_NS_INIT_INO = 0xEFFFFFF9U,
+ MNT_NS_INIT_INO = 0xEFFFFFF8U,
+};
+
+#endif /* __LINUX_NSFS_H */
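
A hedged sketch of how these ioctls are driven from user space: open a namespace file under /proc and query it. The chosen namespace, the path and the minimal error handling are illustrative.

#include <linux/nsfs.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	__u64 ns_id = 0;
	int fd = open("/proc/self/ns/mnt", O_RDONLY | O_CLOEXEC);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* 64-bit identifier of this mount namespace. */
	if (ioctl(fd, NS_GET_MNTNS_ID, &ns_id) == 0)
		printf("mnt ns id: %llu\n", (unsigned long long)ns_id);

	/* The owning user namespace comes back as a new file descriptor. */
	int userns_fd = ioctl(fd, NS_GET_USERNS);
	if (userns_fd >= 0)
		close(userns_fd);

	close(fd);
	return 0;
}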
diff --git a/tools/include/uapi/linux/openat2.h b/tools/include/uapi/linux/openat2.h
deleted file mode 100644
index a5feb7604948..000000000000
--- a/tools/include/uapi/linux/openat2.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef _UAPI_LINUX_OPENAT2_H
-#define _UAPI_LINUX_OPENAT2_H
-
-#include <linux/types.h>
-
-/*
- * Arguments for how openat2(2) should open the target path. If only @flags and
- * @mode are non-zero, then openat2(2) operates very similarly to openat(2).
- *
- * However, unlike openat(2), unknown or invalid bits in @flags result in
- * -EINVAL rather than being silently ignored. @mode must be zero unless one of
- * {O_CREAT, O_TMPFILE} are set.
- *
- * @flags: O_* flags.
- * @mode: O_CREAT/O_TMPFILE file mode.
- * @resolve: RESOLVE_* flags.
- */
-struct open_how {
- __u64 flags;
- __u64 mode;
- __u64 resolve;
-};
-
-/* how->resolve flags for openat2(2). */
-#define RESOLVE_NO_XDEV 0x01 /* Block mount-point crossings
- (includes bind-mounts). */
-#define RESOLVE_NO_MAGICLINKS 0x02 /* Block traversal through procfs-style
- "magic-links". */
-#define RESOLVE_NO_SYMLINKS 0x04 /* Block traversal through all symlinks
- (implies OEXT_NO_MAGICLINKS) */
-#define RESOLVE_BENEATH 0x08 /* Block "lexical" trickery like
- "..", symlinks, and absolute
- paths which escape the dirfd. */
-#define RESOLVE_IN_ROOT 0x10 /* Make all jumps to "/" and ".."
- be scoped inside the dirfd
- (similar to chroot(2)). */
-#define RESOLVE_CACHED 0x20 /* Only complete if resolution can be
- completed through cached lookup. May
- return -EAGAIN if that's not
- possible. */
-
-#endif /* _UAPI_LINUX_OPENAT2_H */
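
Although this copy of openat2.h is being dropped from tools/, the openat2(2) interface it documents is unchanged. A hedged sketch of a RESOLVE_BENEATH open through the raw syscall (there is no glibc wrapper; the fallback syscall number is an assumption):

#define _GNU_SOURCE
#include <linux/openat2.h>	/* struct open_how, RESOLVE_* */
#include <sys/syscall.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#ifndef __NR_openat2
#define __NR_openat2 437	/* assumed syscall number */
#endif

/* Sketch: open 'path' strictly beneath 'dirfd', rejecting "..", absolute
 * paths and magic-link traversal that would escape that directory tree. */
static int open_beneath(int dirfd, const char *path)
{
	struct open_how how;

	memset(&how, 0, sizeof(how));
	how.flags   = O_RDONLY | O_CLOEXEC;
	how.resolve = RESOLVE_BENEATH | RESOLVE_NO_MAGICLINKS;

	return syscall(__NR_openat2, dirfd, path, &how, sizeof(how));
}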
diff --git a/tools/include/uapi/linux/perf_event.h b/tools/include/uapi/linux/perf_event.h
index 3a64499b0f5d..78a362b80027 100644
--- a/tools/include/uapi/linux/perf_event.h
+++ b/tools/include/uapi/linux/perf_event.h
@@ -39,18 +39,21 @@ enum perf_type_id {
/*
* attr.config layout for type PERF_TYPE_HARDWARE and PERF_TYPE_HW_CACHE
+ *
* PERF_TYPE_HARDWARE: 0xEEEEEEEE000000AA
* AA: hardware event ID
* EEEEEEEE: PMU type ID
+ *
* PERF_TYPE_HW_CACHE: 0xEEEEEEEE00DDCCBB
* BB: hardware cache ID
* CC: hardware cache op ID
* DD: hardware cache op result ID
* EEEEEEEE: PMU type ID
- * If the PMU type ID is 0, the PERF_TYPE_RAW will be applied.
+ *
+ * If the PMU type ID is 0, PERF_TYPE_RAW will be applied.
*/
-#define PERF_PMU_TYPE_SHIFT 32
-#define PERF_HW_EVENT_MASK 0xffffffff
+#define PERF_PMU_TYPE_SHIFT 32
+#define PERF_HW_EVENT_MASK 0xffffffff
/*
* Generalized performance event event_id types, used by the
@@ -112,7 +115,7 @@ enum perf_hw_cache_op_result_id {
/*
* Special "software" events provided by the kernel, even if the hardware
* does not support performance events. These events measure various
- * physical and sw events of the kernel (and allow the profiling of them as
+ * physical and SW events of the kernel (and allow the profiling of them as
* well):
*/
enum perf_sw_ids {
@@ -167,8 +170,9 @@ enum perf_event_sample_format {
};
#define PERF_SAMPLE_WEIGHT_TYPE (PERF_SAMPLE_WEIGHT | PERF_SAMPLE_WEIGHT_STRUCT)
+
/*
- * values to program into branch_sample_type when PERF_SAMPLE_BRANCH is set
+ * Values to program into branch_sample_type when PERF_SAMPLE_BRANCH is set.
*
* If the user does not pass priv level information via branch_sample_type,
* the kernel uses the event's priv level. Branch and event priv levels do
@@ -178,20 +182,20 @@ enum perf_event_sample_format {
* of branches and therefore it supersedes all the other types.
*/
enum perf_branch_sample_type_shift {
- PERF_SAMPLE_BRANCH_USER_SHIFT = 0, /* user branches */
- PERF_SAMPLE_BRANCH_KERNEL_SHIFT = 1, /* kernel branches */
- PERF_SAMPLE_BRANCH_HV_SHIFT = 2, /* hypervisor branches */
-
- PERF_SAMPLE_BRANCH_ANY_SHIFT = 3, /* any branch types */
- PERF_SAMPLE_BRANCH_ANY_CALL_SHIFT = 4, /* any call branch */
- PERF_SAMPLE_BRANCH_ANY_RETURN_SHIFT = 5, /* any return branch */
- PERF_SAMPLE_BRANCH_IND_CALL_SHIFT = 6, /* indirect calls */
- PERF_SAMPLE_BRANCH_ABORT_TX_SHIFT = 7, /* transaction aborts */
- PERF_SAMPLE_BRANCH_IN_TX_SHIFT = 8, /* in transaction */
- PERF_SAMPLE_BRANCH_NO_TX_SHIFT = 9, /* not in transaction */
+ PERF_SAMPLE_BRANCH_USER_SHIFT = 0, /* user branches */
+ PERF_SAMPLE_BRANCH_KERNEL_SHIFT = 1, /* kernel branches */
+ PERF_SAMPLE_BRANCH_HV_SHIFT = 2, /* hypervisor branches */
+
+ PERF_SAMPLE_BRANCH_ANY_SHIFT = 3, /* any branch types */
+ PERF_SAMPLE_BRANCH_ANY_CALL_SHIFT = 4, /* any call branch */
+ PERF_SAMPLE_BRANCH_ANY_RETURN_SHIFT = 5, /* any return branch */
+ PERF_SAMPLE_BRANCH_IND_CALL_SHIFT = 6, /* indirect calls */
+ PERF_SAMPLE_BRANCH_ABORT_TX_SHIFT = 7, /* transaction aborts */
+ PERF_SAMPLE_BRANCH_IN_TX_SHIFT = 8, /* in transaction */
+ PERF_SAMPLE_BRANCH_NO_TX_SHIFT = 9, /* not in transaction */
PERF_SAMPLE_BRANCH_COND_SHIFT = 10, /* conditional branches */
- PERF_SAMPLE_BRANCH_CALL_STACK_SHIFT = 11, /* call/ret stack */
+ PERF_SAMPLE_BRANCH_CALL_STACK_SHIFT = 11, /* CALL/RET stack */
PERF_SAMPLE_BRANCH_IND_JUMP_SHIFT = 12, /* indirect jumps */
PERF_SAMPLE_BRANCH_CALL_SHIFT = 13, /* direct call */
@@ -210,96 +214,95 @@ enum perf_branch_sample_type_shift {
};
enum perf_branch_sample_type {
- PERF_SAMPLE_BRANCH_USER = 1U << PERF_SAMPLE_BRANCH_USER_SHIFT,
- PERF_SAMPLE_BRANCH_KERNEL = 1U << PERF_SAMPLE_BRANCH_KERNEL_SHIFT,
- PERF_SAMPLE_BRANCH_HV = 1U << PERF_SAMPLE_BRANCH_HV_SHIFT,
+ PERF_SAMPLE_BRANCH_USER = 1U << PERF_SAMPLE_BRANCH_USER_SHIFT,
+ PERF_SAMPLE_BRANCH_KERNEL = 1U << PERF_SAMPLE_BRANCH_KERNEL_SHIFT,
+ PERF_SAMPLE_BRANCH_HV = 1U << PERF_SAMPLE_BRANCH_HV_SHIFT,
- PERF_SAMPLE_BRANCH_ANY = 1U << PERF_SAMPLE_BRANCH_ANY_SHIFT,
- PERF_SAMPLE_BRANCH_ANY_CALL = 1U << PERF_SAMPLE_BRANCH_ANY_CALL_SHIFT,
- PERF_SAMPLE_BRANCH_ANY_RETURN = 1U << PERF_SAMPLE_BRANCH_ANY_RETURN_SHIFT,
- PERF_SAMPLE_BRANCH_IND_CALL = 1U << PERF_SAMPLE_BRANCH_IND_CALL_SHIFT,
- PERF_SAMPLE_BRANCH_ABORT_TX = 1U << PERF_SAMPLE_BRANCH_ABORT_TX_SHIFT,
- PERF_SAMPLE_BRANCH_IN_TX = 1U << PERF_SAMPLE_BRANCH_IN_TX_SHIFT,
- PERF_SAMPLE_BRANCH_NO_TX = 1U << PERF_SAMPLE_BRANCH_NO_TX_SHIFT,
- PERF_SAMPLE_BRANCH_COND = 1U << PERF_SAMPLE_BRANCH_COND_SHIFT,
+ PERF_SAMPLE_BRANCH_ANY = 1U << PERF_SAMPLE_BRANCH_ANY_SHIFT,
+ PERF_SAMPLE_BRANCH_ANY_CALL = 1U << PERF_SAMPLE_BRANCH_ANY_CALL_SHIFT,
+ PERF_SAMPLE_BRANCH_ANY_RETURN = 1U << PERF_SAMPLE_BRANCH_ANY_RETURN_SHIFT,
+ PERF_SAMPLE_BRANCH_IND_CALL = 1U << PERF_SAMPLE_BRANCH_IND_CALL_SHIFT,
+ PERF_SAMPLE_BRANCH_ABORT_TX = 1U << PERF_SAMPLE_BRANCH_ABORT_TX_SHIFT,
+ PERF_SAMPLE_BRANCH_IN_TX = 1U << PERF_SAMPLE_BRANCH_IN_TX_SHIFT,
+ PERF_SAMPLE_BRANCH_NO_TX = 1U << PERF_SAMPLE_BRANCH_NO_TX_SHIFT,
+ PERF_SAMPLE_BRANCH_COND = 1U << PERF_SAMPLE_BRANCH_COND_SHIFT,
- PERF_SAMPLE_BRANCH_CALL_STACK = 1U << PERF_SAMPLE_BRANCH_CALL_STACK_SHIFT,
- PERF_SAMPLE_BRANCH_IND_JUMP = 1U << PERF_SAMPLE_BRANCH_IND_JUMP_SHIFT,
- PERF_SAMPLE_BRANCH_CALL = 1U << PERF_SAMPLE_BRANCH_CALL_SHIFT,
+ PERF_SAMPLE_BRANCH_CALL_STACK = 1U << PERF_SAMPLE_BRANCH_CALL_STACK_SHIFT,
+ PERF_SAMPLE_BRANCH_IND_JUMP = 1U << PERF_SAMPLE_BRANCH_IND_JUMP_SHIFT,
+ PERF_SAMPLE_BRANCH_CALL = 1U << PERF_SAMPLE_BRANCH_CALL_SHIFT,
- PERF_SAMPLE_BRANCH_NO_FLAGS = 1U << PERF_SAMPLE_BRANCH_NO_FLAGS_SHIFT,
- PERF_SAMPLE_BRANCH_NO_CYCLES = 1U << PERF_SAMPLE_BRANCH_NO_CYCLES_SHIFT,
+ PERF_SAMPLE_BRANCH_NO_FLAGS = 1U << PERF_SAMPLE_BRANCH_NO_FLAGS_SHIFT,
+ PERF_SAMPLE_BRANCH_NO_CYCLES = 1U << PERF_SAMPLE_BRANCH_NO_CYCLES_SHIFT,
- PERF_SAMPLE_BRANCH_TYPE_SAVE =
- 1U << PERF_SAMPLE_BRANCH_TYPE_SAVE_SHIFT,
+ PERF_SAMPLE_BRANCH_TYPE_SAVE = 1U << PERF_SAMPLE_BRANCH_TYPE_SAVE_SHIFT,
- PERF_SAMPLE_BRANCH_HW_INDEX = 1U << PERF_SAMPLE_BRANCH_HW_INDEX_SHIFT,
+ PERF_SAMPLE_BRANCH_HW_INDEX = 1U << PERF_SAMPLE_BRANCH_HW_INDEX_SHIFT,
- PERF_SAMPLE_BRANCH_PRIV_SAVE = 1U << PERF_SAMPLE_BRANCH_PRIV_SAVE_SHIFT,
+ PERF_SAMPLE_BRANCH_PRIV_SAVE = 1U << PERF_SAMPLE_BRANCH_PRIV_SAVE_SHIFT,
- PERF_SAMPLE_BRANCH_COUNTERS = 1U << PERF_SAMPLE_BRANCH_COUNTERS_SHIFT,
+ PERF_SAMPLE_BRANCH_COUNTERS = 1U << PERF_SAMPLE_BRANCH_COUNTERS_SHIFT,
- PERF_SAMPLE_BRANCH_MAX = 1U << PERF_SAMPLE_BRANCH_MAX_SHIFT,
+ PERF_SAMPLE_BRANCH_MAX = 1U << PERF_SAMPLE_BRANCH_MAX_SHIFT,
};
/*
- * Common flow change classification
+ * Common control flow change classifications:
*/
enum {
- PERF_BR_UNKNOWN = 0, /* unknown */
- PERF_BR_COND = 1, /* conditional */
- PERF_BR_UNCOND = 2, /* unconditional */
- PERF_BR_IND = 3, /* indirect */
- PERF_BR_CALL = 4, /* function call */
- PERF_BR_IND_CALL = 5, /* indirect function call */
- PERF_BR_RET = 6, /* function return */
- PERF_BR_SYSCALL = 7, /* syscall */
- PERF_BR_SYSRET = 8, /* syscall return */
- PERF_BR_COND_CALL = 9, /* conditional function call */
- PERF_BR_COND_RET = 10, /* conditional function return */
- PERF_BR_ERET = 11, /* exception return */
- PERF_BR_IRQ = 12, /* irq */
- PERF_BR_SERROR = 13, /* system error */
- PERF_BR_NO_TX = 14, /* not in transaction */
- PERF_BR_EXTEND_ABI = 15, /* extend ABI */
+ PERF_BR_UNKNOWN = 0, /* Unknown */
+ PERF_BR_COND = 1, /* Conditional */
+ PERF_BR_UNCOND = 2, /* Unconditional */
+ PERF_BR_IND = 3, /* Indirect */
+ PERF_BR_CALL = 4, /* Function call */
+ PERF_BR_IND_CALL = 5, /* Indirect function call */
+ PERF_BR_RET = 6, /* Function return */
+ PERF_BR_SYSCALL = 7, /* Syscall */
+ PERF_BR_SYSRET = 8, /* Syscall return */
+ PERF_BR_COND_CALL = 9, /* Conditional function call */
+ PERF_BR_COND_RET = 10, /* Conditional function return */
+ PERF_BR_ERET = 11, /* Exception return */
+ PERF_BR_IRQ = 12, /* IRQ */
+ PERF_BR_SERROR = 13, /* System error */
+ PERF_BR_NO_TX = 14, /* Not in transaction */
+ PERF_BR_EXTEND_ABI = 15, /* Extend ABI */
PERF_BR_MAX,
};
/*
- * Common branch speculation outcome classification
+ * Common branch speculation outcome classifications:
*/
enum {
- PERF_BR_SPEC_NA = 0, /* Not available */
- PERF_BR_SPEC_WRONG_PATH = 1, /* Speculative but on wrong path */
- PERF_BR_NON_SPEC_CORRECT_PATH = 2, /* Non-speculative but on correct path */
- PERF_BR_SPEC_CORRECT_PATH = 3, /* Speculative and on correct path */
+ PERF_BR_SPEC_NA = 0, /* Not available */
+ PERF_BR_SPEC_WRONG_PATH = 1, /* Speculative but on wrong path */
+ PERF_BR_NON_SPEC_CORRECT_PATH = 2, /* Non-speculative but on correct path */
+ PERF_BR_SPEC_CORRECT_PATH = 3, /* Speculative and on correct path */
PERF_BR_SPEC_MAX,
};
enum {
- PERF_BR_NEW_FAULT_ALGN = 0, /* Alignment fault */
- PERF_BR_NEW_FAULT_DATA = 1, /* Data fault */
- PERF_BR_NEW_FAULT_INST = 2, /* Inst fault */
- PERF_BR_NEW_ARCH_1 = 3, /* Architecture specific */
- PERF_BR_NEW_ARCH_2 = 4, /* Architecture specific */
- PERF_BR_NEW_ARCH_3 = 5, /* Architecture specific */
- PERF_BR_NEW_ARCH_4 = 6, /* Architecture specific */
- PERF_BR_NEW_ARCH_5 = 7, /* Architecture specific */
+ PERF_BR_NEW_FAULT_ALGN = 0, /* Alignment fault */
+ PERF_BR_NEW_FAULT_DATA = 1, /* Data fault */
+ PERF_BR_NEW_FAULT_INST = 2, /* Inst fault */
+ PERF_BR_NEW_ARCH_1 = 3, /* Architecture specific */
+ PERF_BR_NEW_ARCH_2 = 4, /* Architecture specific */
+ PERF_BR_NEW_ARCH_3 = 5, /* Architecture specific */
+ PERF_BR_NEW_ARCH_4 = 6, /* Architecture specific */
+ PERF_BR_NEW_ARCH_5 = 7, /* Architecture specific */
PERF_BR_NEW_MAX,
};
enum {
- PERF_BR_PRIV_UNKNOWN = 0,
- PERF_BR_PRIV_USER = 1,
- PERF_BR_PRIV_KERNEL = 2,
- PERF_BR_PRIV_HV = 3,
+ PERF_BR_PRIV_UNKNOWN = 0,
+ PERF_BR_PRIV_USER = 1,
+ PERF_BR_PRIV_KERNEL = 2,
+ PERF_BR_PRIV_HV = 3,
};
-#define PERF_BR_ARM64_FIQ PERF_BR_NEW_ARCH_1
-#define PERF_BR_ARM64_DEBUG_HALT PERF_BR_NEW_ARCH_2
-#define PERF_BR_ARM64_DEBUG_EXIT PERF_BR_NEW_ARCH_3
-#define PERF_BR_ARM64_DEBUG_INST PERF_BR_NEW_ARCH_4
-#define PERF_BR_ARM64_DEBUG_DATA PERF_BR_NEW_ARCH_5
+#define PERF_BR_ARM64_FIQ PERF_BR_NEW_ARCH_1
+#define PERF_BR_ARM64_DEBUG_HALT PERF_BR_NEW_ARCH_2
+#define PERF_BR_ARM64_DEBUG_EXIT PERF_BR_NEW_ARCH_3
+#define PERF_BR_ARM64_DEBUG_INST PERF_BR_NEW_ARCH_4
+#define PERF_BR_ARM64_DEBUG_DATA PERF_BR_NEW_ARCH_5
#define PERF_SAMPLE_BRANCH_PLM_ALL \
(PERF_SAMPLE_BRANCH_USER|\
@@ -310,9 +313,9 @@ enum {
* Values to determine ABI of the registers dump.
*/
enum perf_sample_regs_abi {
- PERF_SAMPLE_REGS_ABI_NONE = 0,
- PERF_SAMPLE_REGS_ABI_32 = 1,
- PERF_SAMPLE_REGS_ABI_64 = 2,
+ PERF_SAMPLE_REGS_ABI_NONE = 0,
+ PERF_SAMPLE_REGS_ABI_32 = 1,
+ PERF_SAMPLE_REGS_ABI_64 = 2,
};
/*
@@ -320,21 +323,21 @@ enum perf_sample_regs_abi {
* abort events. Multiple bits can be set.
*/
enum {
- PERF_TXN_ELISION = (1 << 0), /* From elision */
- PERF_TXN_TRANSACTION = (1 << 1), /* From transaction */
- PERF_TXN_SYNC = (1 << 2), /* Instruction is related */
- PERF_TXN_ASYNC = (1 << 3), /* Instruction not related */
- PERF_TXN_RETRY = (1 << 4), /* Retry possible */
- PERF_TXN_CONFLICT = (1 << 5), /* Conflict abort */
- PERF_TXN_CAPACITY_WRITE = (1 << 6), /* Capacity write abort */
- PERF_TXN_CAPACITY_READ = (1 << 7), /* Capacity read abort */
+ PERF_TXN_ELISION = (1 << 0), /* From elision */
+ PERF_TXN_TRANSACTION = (1 << 1), /* From transaction */
+ PERF_TXN_SYNC = (1 << 2), /* Instruction is related */
+ PERF_TXN_ASYNC = (1 << 3), /* Instruction is not related */
+ PERF_TXN_RETRY = (1 << 4), /* Retry possible */
+ PERF_TXN_CONFLICT = (1 << 5), /* Conflict abort */
+ PERF_TXN_CAPACITY_WRITE = (1 << 6), /* Capacity write abort */
+ PERF_TXN_CAPACITY_READ = (1 << 7), /* Capacity read abort */
- PERF_TXN_MAX = (1 << 8), /* non-ABI */
+ PERF_TXN_MAX = (1 << 8), /* non-ABI */
- /* bits 32..63 are reserved for the abort code */
+ /* Bits 32..63 are reserved for the abort code */
- PERF_TXN_ABORT_MASK = (0xffffffffULL << 32),
- PERF_TXN_ABORT_SHIFT = 32,
+ PERF_TXN_ABORT_MASK = (0xffffffffULL << 32),
+ PERF_TXN_ABORT_SHIFT = 32,
};
/*
@@ -369,22 +372,22 @@ enum perf_event_read_format {
PERF_FORMAT_MAX = 1U << 5, /* non-ABI */
};
-#define PERF_ATTR_SIZE_VER0 64 /* sizeof first published struct */
-#define PERF_ATTR_SIZE_VER1 72 /* add: config2 */
-#define PERF_ATTR_SIZE_VER2 80 /* add: branch_sample_type */
-#define PERF_ATTR_SIZE_VER3 96 /* add: sample_regs_user */
- /* add: sample_stack_user */
-#define PERF_ATTR_SIZE_VER4 104 /* add: sample_regs_intr */
-#define PERF_ATTR_SIZE_VER5 112 /* add: aux_watermark */
-#define PERF_ATTR_SIZE_VER6 120 /* add: aux_sample_size */
-#define PERF_ATTR_SIZE_VER7 128 /* add: sig_data */
-#define PERF_ATTR_SIZE_VER8 136 /* add: config3 */
+#define PERF_ATTR_SIZE_VER0 64 /* Size of first published 'struct perf_event_attr' */
+#define PERF_ATTR_SIZE_VER1 72 /* Add: config2 */
+#define PERF_ATTR_SIZE_VER2 80 /* Add: branch_sample_type */
+#define PERF_ATTR_SIZE_VER3 96 /* Add: sample_regs_user */
+ /* Add: sample_stack_user */
+#define PERF_ATTR_SIZE_VER4 104 /* Add: sample_regs_intr */
+#define PERF_ATTR_SIZE_VER5 112 /* Add: aux_watermark */
+#define PERF_ATTR_SIZE_VER6 120 /* Add: aux_sample_size */
+#define PERF_ATTR_SIZE_VER7 128 /* Add: sig_data */
+#define PERF_ATTR_SIZE_VER8 136 /* Add: config3 */
/*
- * Hardware event_id to monitor via a performance monitoring event:
- *
- * @sample_max_stack: Max number of frame pointers in a callchain,
- * should be < /proc/sys/kernel/perf_event_max_stack
+ * 'struct perf_event_attr' contains various attributes that define
+ * a performance event - most of them hardware related configuration
+ * details, but also a lot of behavioral switches and values implemented
+ * by the kernel.
*/
struct perf_event_attr {
@@ -394,7 +397,7 @@ struct perf_event_attr {
__u32 type;
/*
- * Size of the attr structure, for fwd/bwd compat.
+ * Size of the attr structure, for forward/backwards compatibility.
*/
__u32 size;
@@ -449,21 +452,21 @@ struct perf_event_attr {
comm_exec : 1, /* flag comm events that are due to an exec */
use_clockid : 1, /* use @clockid for time fields */
context_switch : 1, /* context switch data */
- write_backward : 1, /* Write ring buffer from end to beginning */
+ write_backward : 1, /* write ring buffer from end to beginning */
namespaces : 1, /* include namespaces data */
ksymbol : 1, /* include ksymbol events */
- bpf_event : 1, /* include bpf events */
+ bpf_event : 1, /* include BPF events */
aux_output : 1, /* generate AUX records instead of events */
cgroup : 1, /* include cgroup events */
text_poke : 1, /* include text poke events */
- build_id : 1, /* use build id in mmap2 events */
+ build_id : 1, /* use build ID in mmap2 events */
inherit_thread : 1, /* children only inherit if cloned with CLONE_THREAD */
remove_on_exec : 1, /* event is removed from task on exec */
sigtrap : 1, /* send synchronous SIGTRAP on event */
__reserved_1 : 26;
union {
- __u32 wakeup_events; /* wakeup every n events */
+ __u32 wakeup_events; /* wake up every n events */
__u32 wakeup_watermark; /* bytes before wakeup */
};
@@ -472,13 +475,13 @@ struct perf_event_attr {
__u64 bp_addr;
__u64 kprobe_func; /* for perf_kprobe */
__u64 uprobe_path; /* for perf_uprobe */
- __u64 config1; /* extension of config */
+ __u64 config1; /* extension of config */
};
union {
__u64 bp_len;
- __u64 kprobe_addr; /* when kprobe_func == NULL */
+ __u64 kprobe_addr; /* when kprobe_func == NULL */
__u64 probe_offset; /* for perf_[k,u]probe */
- __u64 config2; /* extension of config1 */
+ __u64 config2; /* extension of config1 */
};
__u64 branch_sample_type; /* enum perf_branch_sample_type */
@@ -508,10 +511,28 @@ struct perf_event_attr {
* Wakeup watermark for AUX area
*/
__u32 aux_watermark;
+
+ /*
+ * Max number of frame pointers in a callchain, should be
+ * lower than /proc/sys/kernel/perf_event_max_stack.
+ *
+ * Max number of entries of branch stack should be lower
+ * than the hardware limit.
+ */
__u16 sample_max_stack;
+
__u16 __reserved_2;
__u32 aux_sample_size;
- __u32 __reserved_3;
+
+ union {
+ __u32 aux_action;
+ struct {
+ __u32 aux_start_paused : 1, /* start AUX area tracing paused */
+ aux_pause : 1, /* on overflow, pause AUX area tracing */
+ aux_resume : 1, /* on overflow, resume AUX area tracing */
+ __reserved_3 : 29;
+ };
+ };
/*
* User provided data if sigtrap=1, passed back to user via
@@ -526,7 +547,7 @@ struct perf_event_attr {
/*
* Structure used by below PERF_EVENT_IOC_QUERY_BPF command
- * to query bpf programs attached to the same perf tracepoint
+ * to query BPF programs attached to the same perf tracepoint
* as the given perf event.
*/
struct perf_event_query_bpf {
@@ -548,21 +569,21 @@ struct perf_event_query_bpf {
/*
* Ioctls that can be done on a perf event fd:
*/
-#define PERF_EVENT_IOC_ENABLE _IO ('$', 0)
-#define PERF_EVENT_IOC_DISABLE _IO ('$', 1)
-#define PERF_EVENT_IOC_REFRESH _IO ('$', 2)
-#define PERF_EVENT_IOC_RESET _IO ('$', 3)
-#define PERF_EVENT_IOC_PERIOD _IOW('$', 4, __u64)
-#define PERF_EVENT_IOC_SET_OUTPUT _IO ('$', 5)
-#define PERF_EVENT_IOC_SET_FILTER _IOW('$', 6, char *)
-#define PERF_EVENT_IOC_ID _IOR('$', 7, __u64 *)
-#define PERF_EVENT_IOC_SET_BPF _IOW('$', 8, __u32)
-#define PERF_EVENT_IOC_PAUSE_OUTPUT _IOW('$', 9, __u32)
+#define PERF_EVENT_IOC_ENABLE _IO ('$', 0)
+#define PERF_EVENT_IOC_DISABLE _IO ('$', 1)
+#define PERF_EVENT_IOC_REFRESH _IO ('$', 2)
+#define PERF_EVENT_IOC_RESET _IO ('$', 3)
+#define PERF_EVENT_IOC_PERIOD _IOW ('$', 4, __u64)
+#define PERF_EVENT_IOC_SET_OUTPUT _IO ('$', 5)
+#define PERF_EVENT_IOC_SET_FILTER _IOW ('$', 6, char *)
+#define PERF_EVENT_IOC_ID _IOR ('$', 7, __u64 *)
+#define PERF_EVENT_IOC_SET_BPF _IOW ('$', 8, __u32)
+#define PERF_EVENT_IOC_PAUSE_OUTPUT _IOW ('$', 9, __u32)
#define PERF_EVENT_IOC_QUERY_BPF _IOWR('$', 10, struct perf_event_query_bpf *)
-#define PERF_EVENT_IOC_MODIFY_ATTRIBUTES _IOW('$', 11, struct perf_event_attr *)
+#define PERF_EVENT_IOC_MODIFY_ATTRIBUTES _IOW ('$', 11, struct perf_event_attr *)
enum perf_event_ioc_flags {
- PERF_IOC_FLAG_GROUP = 1U << 0,
+ PERF_IOC_FLAG_GROUP = 1U << 0,
};
/*
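For context on how the attributes and ioctls being realigned in this file are consumed, here is a minimal hedged sketch based on the usual perf_event_open(2) pattern; the event choice and the absence of a real workload are illustrative. It counts user-space instructions for the calling thread:

#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
	struct perf_event_attr attr;
	long long count;

	memset(&attr, 0, sizeof(attr));
	attr.size           = sizeof(attr);	/* forward/backward compatibility */
	attr.type           = PERF_TYPE_HARDWARE;
	attr.config         = PERF_COUNT_HW_INSTRUCTIONS;
	attr.disabled       = 1;
	attr.exclude_kernel = 1;

	/* pid=0, cpu=-1: this thread on any CPU; there is no glibc wrapper. */
	int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1,
			 PERF_FLAG_FD_CLOEXEC);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	/* ... workload under measurement ... */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("instructions: %lld\n", count);
	close(fd);
	return 0;
}
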
@@ -573,7 +594,7 @@ struct perf_event_mmap_page {
__u32 compat_version; /* lowest version this is compat with */
/*
- * Bits needed to read the hw events in user-space.
+ * Bits needed to read the HW events in user-space.
*
* u32 seq, time_mult, time_shift, index, width;
* u64 count, enabled, running;
@@ -611,7 +632,7 @@ struct perf_event_mmap_page {
__u32 index; /* hardware event identifier */
__s64 offset; /* add to hardware event value */
__u64 time_enabled; /* time event active */
- __u64 time_running; /* time event on cpu */
+ __u64 time_running; /* time event on CPU */
union {
__u64 capabilities;
struct {
@@ -639,7 +660,7 @@ struct perf_event_mmap_page {
/*
* If cap_usr_time the below fields can be used to compute the time
- * delta since time_enabled (in ns) using rdtsc or similar.
+ * delta since time_enabled (in ns) using RDTSC or similar.
*
* u64 quot, rem;
* u64 delta;
@@ -712,7 +733,7 @@ struct perf_event_mmap_page {
* after reading this value.
*
* When the mapping is PROT_WRITE the @data_tail value should be
- * written by userspace to reflect the last read data, after issueing
+ * written by user-space to reflect the last read data, after issuing
* an smp_mb() to separate the data read from the ->data_tail store.
* In this case the kernel will not over-write unread data.
*
@@ -728,7 +749,7 @@ struct perf_event_mmap_page {
/*
* AUX area is defined by aux_{offset,size} fields that should be set
- * by the userspace, so that
+ * by the user-space, so that
*
* aux_offset >= data_offset + data_size
*
@@ -802,7 +823,7 @@ struct perf_event_mmap_page {
* Indicates that thread was preempted in TASK_RUNNING state.
*
* PERF_RECORD_MISC_MMAP_BUILD_ID:
- * Indicates that mmap2 event carries build id data.
+ * Indicates that mmap2 event carries build ID data.
*/
#define PERF_RECORD_MISC_EXACT_IP (1 << 14)
#define PERF_RECORD_MISC_SWITCH_OUT_PREEMPT (1 << 14)
@@ -813,26 +834,26 @@ struct perf_event_mmap_page {
#define PERF_RECORD_MISC_EXT_RESERVED (1 << 15)
struct perf_event_header {
- __u32 type;
- __u16 misc;
- __u16 size;
+ __u32 type;
+ __u16 misc;
+ __u16 size;
};
struct perf_ns_link_info {
- __u64 dev;
- __u64 ino;
+ __u64 dev;
+ __u64 ino;
};
enum {
- NET_NS_INDEX = 0,
- UTS_NS_INDEX = 1,
- IPC_NS_INDEX = 2,
- PID_NS_INDEX = 3,
- USER_NS_INDEX = 4,
- MNT_NS_INDEX = 5,
- CGROUP_NS_INDEX = 6,
-
- NR_NAMESPACES, /* number of available namespaces */
+ NET_NS_INDEX = 0,
+ UTS_NS_INDEX = 1,
+ IPC_NS_INDEX = 2,
+ PID_NS_INDEX = 3,
+ USER_NS_INDEX = 4,
+ MNT_NS_INDEX = 5,
+ CGROUP_NS_INDEX = 6,
+
+ NR_NAMESPACES, /* number of available namespaces */
};
enum perf_event_type {
@@ -848,11 +869,11 @@ enum perf_event_type {
* optional fields being ignored.
*
* struct sample_id {
- * { u32 pid, tid; } && PERF_SAMPLE_TID
- * { u64 time; } && PERF_SAMPLE_TIME
- * { u64 id; } && PERF_SAMPLE_ID
- * { u64 stream_id;} && PERF_SAMPLE_STREAM_ID
- * { u32 cpu, res; } && PERF_SAMPLE_CPU
+ * { u32 pid, tid; } && PERF_SAMPLE_TID
+ * { u64 time; } && PERF_SAMPLE_TIME
+ * { u64 id; } && PERF_SAMPLE_ID
+ * { u64 stream_id;} && PERF_SAMPLE_STREAM_ID
+ * { u32 cpu, res; } && PERF_SAMPLE_CPU
* { u64 id; } && PERF_SAMPLE_IDENTIFIER
* } && perf_event_attr::sample_id_all
*
@@ -863,7 +884,7 @@ enum perf_event_type {
/*
* The MMAP events record the PROT_EXEC mappings so that we can
- * correlate userspace IPs to code. They have the following structure:
+ * correlate user-space IPs to code. They have the following structure:
*
* struct {
* struct perf_event_header header;
@@ -873,7 +894,7 @@ enum perf_event_type {
* u64 len;
* u64 pgoff;
* char filename[];
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_MMAP = 1,
@@ -883,7 +904,7 @@ enum perf_event_type {
* struct perf_event_header header;
* u64 id;
* u64 lost;
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_LOST = 2,
@@ -894,7 +915,7 @@ enum perf_event_type {
*
* u32 pid, tid;
* char comm[];
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_COMM = 3,
@@ -905,7 +926,7 @@ enum perf_event_type {
* u32 pid, ppid;
* u32 tid, ptid;
* u64 time;
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_EXIT = 4,
@@ -916,7 +937,7 @@ enum perf_event_type {
* u64 time;
* u64 id;
* u64 stream_id;
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_THROTTLE = 5,
@@ -928,7 +949,7 @@ enum perf_event_type {
* u32 pid, ppid;
* u32 tid, ptid;
* u64 time;
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_FORK = 7,
@@ -939,7 +960,7 @@ enum perf_event_type {
* u32 pid, tid;
*
* struct read_format values;
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_READ = 8,
@@ -994,12 +1015,12 @@ enum perf_event_type {
* { u64 counters; } cntr[nr] && PERF_SAMPLE_BRANCH_COUNTERS
* } && PERF_SAMPLE_BRANCH_STACK
*
- * { u64 abi; # enum perf_sample_regs_abi
- * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
+ * { u64 abi; # enum perf_sample_regs_abi
+ * u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_USER
*
- * { u64 size;
- * char data[size];
- * u64 dyn_size; } && PERF_SAMPLE_STACK_USER
+ * { u64 size;
+ * char data[size];
+ * u64 dyn_size; } && PERF_SAMPLE_STACK_USER
*
* { union perf_sample_weight
* {
@@ -1024,10 +1045,11 @@ enum perf_event_type {
* { u64 abi; # enum perf_sample_regs_abi
* u64 regs[weight(mask)]; } && PERF_SAMPLE_REGS_INTR
* { u64 phys_addr;} && PERF_SAMPLE_PHYS_ADDR
- * { u64 size;
- * char data[size]; } && PERF_SAMPLE_AUX
+ * { u64 cgroup;} && PERF_SAMPLE_CGROUP
* { u64 data_page_size;} && PERF_SAMPLE_DATA_PAGE_SIZE
* { u64 code_page_size;} && PERF_SAMPLE_CODE_PAGE_SIZE
+ * { u64 size;
+ * char data[size]; } && PERF_SAMPLE_AUX
* };
*/
PERF_RECORD_SAMPLE = 9,
@@ -1059,7 +1081,7 @@ enum perf_event_type {
* };
* u32 prot, flags;
* char filename[];
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_MMAP2 = 10,
@@ -1068,12 +1090,12 @@ enum perf_event_type {
* Records that new data landed in the AUX buffer part.
*
* struct {
- * struct perf_event_header header;
+ * struct perf_event_header header;
*
- * u64 aux_offset;
- * u64 aux_size;
+ * u64 aux_offset;
+ * u64 aux_size;
* u64 flags;
- * struct sample_id sample_id;
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_AUX = 11,
@@ -1156,7 +1178,7 @@ enum perf_event_type {
PERF_RECORD_KSYMBOL = 17,
/*
- * Record bpf events:
+ * Record BPF events:
* enum perf_bpf_event_type {
* PERF_BPF_EVENT_UNKNOWN = 0,
* PERF_BPF_EVENT_PROG_LOAD = 1,
@@ -1234,179 +1256,181 @@ enum perf_record_ksymbol_type {
#define PERF_RECORD_KSYMBOL_FLAGS_UNREGISTER (1 << 0)
enum perf_bpf_event_type {
- PERF_BPF_EVENT_UNKNOWN = 0,
- PERF_BPF_EVENT_PROG_LOAD = 1,
- PERF_BPF_EVENT_PROG_UNLOAD = 2,
- PERF_BPF_EVENT_MAX, /* non-ABI */
+ PERF_BPF_EVENT_UNKNOWN = 0,
+ PERF_BPF_EVENT_PROG_LOAD = 1,
+ PERF_BPF_EVENT_PROG_UNLOAD = 2,
+ PERF_BPF_EVENT_MAX, /* non-ABI */
};
-#define PERF_MAX_STACK_DEPTH 127
-#define PERF_MAX_CONTEXTS_PER_STACK 8
+#define PERF_MAX_STACK_DEPTH 127
+#define PERF_MAX_CONTEXTS_PER_STACK 8
enum perf_callchain_context {
- PERF_CONTEXT_HV = (__u64)-32,
- PERF_CONTEXT_KERNEL = (__u64)-128,
- PERF_CONTEXT_USER = (__u64)-512,
+ PERF_CONTEXT_HV = (__u64)-32,
+ PERF_CONTEXT_KERNEL = (__u64)-128,
+ PERF_CONTEXT_USER = (__u64)-512,
- PERF_CONTEXT_GUEST = (__u64)-2048,
- PERF_CONTEXT_GUEST_KERNEL = (__u64)-2176,
- PERF_CONTEXT_GUEST_USER = (__u64)-2560,
+ PERF_CONTEXT_GUEST = (__u64)-2048,
+ PERF_CONTEXT_GUEST_KERNEL = (__u64)-2176,
+ PERF_CONTEXT_GUEST_USER = (__u64)-2560,
- PERF_CONTEXT_MAX = (__u64)-4095,
+ PERF_CONTEXT_MAX = (__u64)-4095,
};
/**
* PERF_RECORD_AUX::flags bits
*/
-#define PERF_AUX_FLAG_TRUNCATED 0x01 /* record was truncated to fit */
-#define PERF_AUX_FLAG_OVERWRITE 0x02 /* snapshot from overwrite mode */
-#define PERF_AUX_FLAG_PARTIAL 0x04 /* record contains gaps */
-#define PERF_AUX_FLAG_COLLISION 0x08 /* sample collided with another */
+#define PERF_AUX_FLAG_TRUNCATED 0x0001 /* Record was truncated to fit */
+#define PERF_AUX_FLAG_OVERWRITE 0x0002 /* Snapshot from overwrite mode */
+#define PERF_AUX_FLAG_PARTIAL 0x0004 /* Record contains gaps */
+#define PERF_AUX_FLAG_COLLISION 0x0008 /* Sample collided with another */
#define PERF_AUX_FLAG_PMU_FORMAT_TYPE_MASK 0xff00 /* PMU specific trace format type */
/* CoreSight PMU AUX buffer formats */
-#define PERF_AUX_FLAG_CORESIGHT_FORMAT_CORESIGHT 0x0000 /* Default for backward compatibility */
-#define PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW 0x0100 /* Raw format of the source */
+#define PERF_AUX_FLAG_CORESIGHT_FORMAT_CORESIGHT 0x0000 /* Default for backward compatibility */
+#define PERF_AUX_FLAG_CORESIGHT_FORMAT_RAW 0x0100 /* Raw format of the source */
-#define PERF_FLAG_FD_NO_GROUP (1UL << 0)
-#define PERF_FLAG_FD_OUTPUT (1UL << 1)
-#define PERF_FLAG_PID_CGROUP (1UL << 2) /* pid=cgroup id, per-cpu mode only */
-#define PERF_FLAG_FD_CLOEXEC (1UL << 3) /* O_CLOEXEC */
+#define PERF_FLAG_FD_NO_GROUP (1UL << 0)
+#define PERF_FLAG_FD_OUTPUT (1UL << 1)
+#define PERF_FLAG_PID_CGROUP (1UL << 2) /* pid=cgroup ID, per-CPU mode only */
+#define PERF_FLAG_FD_CLOEXEC (1UL << 3) /* O_CLOEXEC */
#if defined(__LITTLE_ENDIAN_BITFIELD)
union perf_mem_data_src {
__u64 val;
struct {
- __u64 mem_op:5, /* type of opcode */
- mem_lvl:14, /* memory hierarchy level */
- mem_snoop:5, /* snoop mode */
- mem_lock:2, /* lock instr */
- mem_dtlb:7, /* tlb access */
- mem_lvl_num:4, /* memory hierarchy level number */
- mem_remote:1, /* remote */
- mem_snoopx:2, /* snoop mode, ext */
- mem_blk:3, /* access blocked */
- mem_hops:3, /* hop level */
- mem_rsvd:18;
+ __u64 mem_op : 5, /* Type of opcode */
+ mem_lvl : 14, /* Memory hierarchy level */
+ mem_snoop : 5, /* Snoop mode */
+ mem_lock : 2, /* Lock instr */
+ mem_dtlb : 7, /* TLB access */
+ mem_lvl_num : 4, /* Memory hierarchy level number */
+ mem_remote : 1, /* Remote */
+ mem_snoopx : 2, /* Snoop mode, ext */
+ mem_blk : 3, /* Access blocked */
+ mem_hops : 3, /* Hop level */
+ mem_rsvd : 18;
};
};
#elif defined(__BIG_ENDIAN_BITFIELD)
union perf_mem_data_src {
__u64 val;
struct {
- __u64 mem_rsvd:18,
- mem_hops:3, /* hop level */
- mem_blk:3, /* access blocked */
- mem_snoopx:2, /* snoop mode, ext */
- mem_remote:1, /* remote */
- mem_lvl_num:4, /* memory hierarchy level number */
- mem_dtlb:7, /* tlb access */
- mem_lock:2, /* lock instr */
- mem_snoop:5, /* snoop mode */
- mem_lvl:14, /* memory hierarchy level */
- mem_op:5; /* type of opcode */
+ __u64 mem_rsvd : 18,
+ mem_hops : 3, /* Hop level */
+ mem_blk : 3, /* Access blocked */
+ mem_snoopx : 2, /* Snoop mode, ext */
+ mem_remote : 1, /* Remote */
+ mem_lvl_num : 4, /* Memory hierarchy level number */
+ mem_dtlb : 7, /* TLB access */
+ mem_lock : 2, /* Lock instr */
+ mem_snoop : 5, /* Snoop mode */
+ mem_lvl : 14, /* Memory hierarchy level */
+ mem_op : 5; /* Type of opcode */
};
};
#else
-#error "Unknown endianness"
+# error "Unknown endianness"
#endif
-/* type of opcode (load/store/prefetch,code) */
-#define PERF_MEM_OP_NA 0x01 /* not available */
-#define PERF_MEM_OP_LOAD 0x02 /* load instruction */
-#define PERF_MEM_OP_STORE 0x04 /* store instruction */
-#define PERF_MEM_OP_PFETCH 0x08 /* prefetch */
-#define PERF_MEM_OP_EXEC 0x10 /* code (execution) */
-#define PERF_MEM_OP_SHIFT 0
+/* Type of memory opcode: */
+#define PERF_MEM_OP_NA 0x0001 /* Not available */
+#define PERF_MEM_OP_LOAD 0x0002 /* Load instruction */
+#define PERF_MEM_OP_STORE 0x0004 /* Store instruction */
+#define PERF_MEM_OP_PFETCH 0x0008 /* Prefetch */
+#define PERF_MEM_OP_EXEC 0x0010 /* Code (execution) */
+#define PERF_MEM_OP_SHIFT 0
/*
- * PERF_MEM_LVL_* namespace being depricated to some extent in the
+ * The PERF_MEM_LVL_* namespace is being deprecated to some extent in
* favour of newer composite PERF_MEM_{LVLNUM_,REMOTE_,SNOOPX_} fields.
- * Supporting this namespace inorder to not break defined ABIs.
+ * We support this namespace in order to not break defined ABIs.
*
- * memory hierarchy (memory level, hit or miss)
+ * Memory hierarchy (memory level, hit or miss)
*/
-#define PERF_MEM_LVL_NA 0x01 /* not available */
-#define PERF_MEM_LVL_HIT 0x02 /* hit level */
-#define PERF_MEM_LVL_MISS 0x04 /* miss level */
-#define PERF_MEM_LVL_L1 0x08 /* L1 */
-#define PERF_MEM_LVL_LFB 0x10 /* Line Fill Buffer */
-#define PERF_MEM_LVL_L2 0x20 /* L2 */
-#define PERF_MEM_LVL_L3 0x40 /* L3 */
-#define PERF_MEM_LVL_LOC_RAM 0x80 /* Local DRAM */
-#define PERF_MEM_LVL_REM_RAM1 0x100 /* Remote DRAM (1 hop) */
-#define PERF_MEM_LVL_REM_RAM2 0x200 /* Remote DRAM (2 hops) */
-#define PERF_MEM_LVL_REM_CCE1 0x400 /* Remote Cache (1 hop) */
-#define PERF_MEM_LVL_REM_CCE2 0x800 /* Remote Cache (2 hops) */
-#define PERF_MEM_LVL_IO 0x1000 /* I/O memory */
-#define PERF_MEM_LVL_UNC 0x2000 /* Uncached memory */
-#define PERF_MEM_LVL_SHIFT 5
-
-#define PERF_MEM_REMOTE_REMOTE 0x01 /* Remote */
-#define PERF_MEM_REMOTE_SHIFT 37
-
-#define PERF_MEM_LVLNUM_L1 0x01 /* L1 */
-#define PERF_MEM_LVLNUM_L2 0x02 /* L2 */
-#define PERF_MEM_LVLNUM_L3 0x03 /* L3 */
-#define PERF_MEM_LVLNUM_L4 0x04 /* L4 */
-/* 5-0x7 available */
-#define PERF_MEM_LVLNUM_UNC 0x08 /* Uncached */
-#define PERF_MEM_LVLNUM_CXL 0x09 /* CXL */
-#define PERF_MEM_LVLNUM_IO 0x0a /* I/O */
-#define PERF_MEM_LVLNUM_ANY_CACHE 0x0b /* Any cache */
-#define PERF_MEM_LVLNUM_LFB 0x0c /* LFB */
-#define PERF_MEM_LVLNUM_RAM 0x0d /* RAM */
-#define PERF_MEM_LVLNUM_PMEM 0x0e /* PMEM */
-#define PERF_MEM_LVLNUM_NA 0x0f /* N/A */
-
-#define PERF_MEM_LVLNUM_SHIFT 33
-
-/* snoop mode */
-#define PERF_MEM_SNOOP_NA 0x01 /* not available */
-#define PERF_MEM_SNOOP_NONE 0x02 /* no snoop */
-#define PERF_MEM_SNOOP_HIT 0x04 /* snoop hit */
-#define PERF_MEM_SNOOP_MISS 0x08 /* snoop miss */
-#define PERF_MEM_SNOOP_HITM 0x10 /* snoop hit modified */
-#define PERF_MEM_SNOOP_SHIFT 19
-
-#define PERF_MEM_SNOOPX_FWD 0x01 /* forward */
-#define PERF_MEM_SNOOPX_PEER 0x02 /* xfer from peer */
-#define PERF_MEM_SNOOPX_SHIFT 38
-
-/* locked instruction */
-#define PERF_MEM_LOCK_NA 0x01 /* not available */
-#define PERF_MEM_LOCK_LOCKED 0x02 /* locked transaction */
-#define PERF_MEM_LOCK_SHIFT 24
+#define PERF_MEM_LVL_NA 0x0001 /* Not available */
+#define PERF_MEM_LVL_HIT 0x0002 /* Hit level */
+#define PERF_MEM_LVL_MISS 0x0004 /* Miss level */
+#define PERF_MEM_LVL_L1 0x0008 /* L1 */
+#define PERF_MEM_LVL_LFB 0x0010 /* Line Fill Buffer */
+#define PERF_MEM_LVL_L2 0x0020 /* L2 */
+#define PERF_MEM_LVL_L3 0x0040 /* L3 */
+#define PERF_MEM_LVL_LOC_RAM 0x0080 /* Local DRAM */
+#define PERF_MEM_LVL_REM_RAM1 0x0100 /* Remote DRAM (1 hop) */
+#define PERF_MEM_LVL_REM_RAM2 0x0200 /* Remote DRAM (2 hops) */
+#define PERF_MEM_LVL_REM_CCE1 0x0400 /* Remote Cache (1 hop) */
+#define PERF_MEM_LVL_REM_CCE2 0x0800 /* Remote Cache (2 hops) */
+#define PERF_MEM_LVL_IO 0x1000 /* I/O memory */
+#define PERF_MEM_LVL_UNC 0x2000 /* Uncached memory */
+#define PERF_MEM_LVL_SHIFT 5
+
+#define PERF_MEM_REMOTE_REMOTE 0x0001 /* Remote */
+#define PERF_MEM_REMOTE_SHIFT 37
+
+#define PERF_MEM_LVLNUM_L1 0x0001 /* L1 */
+#define PERF_MEM_LVLNUM_L2 0x0002 /* L2 */
+#define PERF_MEM_LVLNUM_L3 0x0003 /* L3 */
+#define PERF_MEM_LVLNUM_L4 0x0004 /* L4 */
+#define PERF_MEM_LVLNUM_L2_MHB 0x0005 /* L2 Miss Handling Buffer */
+#define PERF_MEM_LVLNUM_MSC 0x0006 /* Memory-side Cache */
+/* 0x007 available */
+#define PERF_MEM_LVLNUM_UNC 0x0008 /* Uncached */
+#define PERF_MEM_LVLNUM_CXL 0x0009 /* CXL */
+#define PERF_MEM_LVLNUM_IO 0x000a /* I/O */
+#define PERF_MEM_LVLNUM_ANY_CACHE 0x000b /* Any cache */
+#define PERF_MEM_LVLNUM_LFB 0x000c /* LFB / L1 Miss Handling Buffer */
+#define PERF_MEM_LVLNUM_RAM 0x000d /* RAM */
+#define PERF_MEM_LVLNUM_PMEM 0x000e /* PMEM */
+#define PERF_MEM_LVLNUM_NA 0x000f /* N/A */
+
+#define PERF_MEM_LVLNUM_SHIFT 33
+
+/* Snoop mode */
+#define PERF_MEM_SNOOP_NA 0x0001 /* Not available */
+#define PERF_MEM_SNOOP_NONE 0x0002 /* No snoop */
+#define PERF_MEM_SNOOP_HIT 0x0004 /* Snoop hit */
+#define PERF_MEM_SNOOP_MISS 0x0008 /* Snoop miss */
+#define PERF_MEM_SNOOP_HITM 0x0010 /* Snoop hit modified */
+#define PERF_MEM_SNOOP_SHIFT 19
+
+#define PERF_MEM_SNOOPX_FWD 0x0001 /* Forward */
+#define PERF_MEM_SNOOPX_PEER 0x0002 /* Transfer from peer */
+#define PERF_MEM_SNOOPX_SHIFT 38
+
+/* Locked instruction */
+#define PERF_MEM_LOCK_NA 0x0001 /* Not available */
+#define PERF_MEM_LOCK_LOCKED 0x0002 /* Locked transaction */
+#define PERF_MEM_LOCK_SHIFT 24
/* TLB access */
-#define PERF_MEM_TLB_NA 0x01 /* not available */
-#define PERF_MEM_TLB_HIT 0x02 /* hit level */
-#define PERF_MEM_TLB_MISS 0x04 /* miss level */
-#define PERF_MEM_TLB_L1 0x08 /* L1 */
-#define PERF_MEM_TLB_L2 0x10 /* L2 */
-#define PERF_MEM_TLB_WK 0x20 /* Hardware Walker*/
-#define PERF_MEM_TLB_OS 0x40 /* OS fault handler */
-#define PERF_MEM_TLB_SHIFT 26
+#define PERF_MEM_TLB_NA 0x0001 /* Not available */
+#define PERF_MEM_TLB_HIT 0x0002 /* Hit level */
+#define PERF_MEM_TLB_MISS 0x0004 /* Miss level */
+#define PERF_MEM_TLB_L1 0x0008 /* L1 */
+#define PERF_MEM_TLB_L2 0x0010 /* L2 */
+#define PERF_MEM_TLB_WK			0x0020 /* Hardware Walker */
+#define PERF_MEM_TLB_OS 0x0040 /* OS fault handler */
+#define PERF_MEM_TLB_SHIFT 26
/* Access blocked */
-#define PERF_MEM_BLK_NA 0x01 /* not available */
-#define PERF_MEM_BLK_DATA 0x02 /* data could not be forwarded */
-#define PERF_MEM_BLK_ADDR 0x04 /* address conflict */
-#define PERF_MEM_BLK_SHIFT 40
-
-/* hop level */
-#define PERF_MEM_HOPS_0 0x01 /* remote core, same node */
-#define PERF_MEM_HOPS_1 0x02 /* remote node, same socket */
-#define PERF_MEM_HOPS_2 0x03 /* remote socket, same board */
-#define PERF_MEM_HOPS_3 0x04 /* remote board */
+#define PERF_MEM_BLK_NA 0x0001 /* Not available */
+#define PERF_MEM_BLK_DATA 0x0002 /* Data could not be forwarded */
+#define PERF_MEM_BLK_ADDR 0x0004 /* Address conflict */
+#define PERF_MEM_BLK_SHIFT 40
+
+/* Hop level */
+#define PERF_MEM_HOPS_0 0x0001 /* Remote core, same node */
+#define PERF_MEM_HOPS_1 0x0002 /* Remote node, same socket */
+#define PERF_MEM_HOPS_2 0x0003 /* Remote socket, same board */
+#define PERF_MEM_HOPS_3 0x0004 /* Remote board */
/* 5-7 available */
-#define PERF_MEM_HOPS_SHIFT 43
+#define PERF_MEM_HOPS_SHIFT 43
#define PERF_MEM_S(a, s) \
(((__u64)PERF_MEM_##a##_##s) << PERF_MEM_##a##_SHIFT)
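PERF_MEM_S() just shifts one constant from a PERF_MEM_* namespace into its field, so a complete data_src word is built by OR-ing one term per namespace. A hedged sketch, assuming <linux/perf_event.h> is included; the particular combination chosen here is arbitrary.

#include <linux/perf_event.h>

/* Compose a data_src word for a load that hit a remote L3 with a HITM
 * snoop, then read the opcode back out through the union's bitfield view.
 */
static int example_data_src_is_load(void)
{
	union perf_mem_data_src d = { .val =
		PERF_MEM_S(OP, LOAD)       | PERF_MEM_S(LVLNUM, L3) |
		PERF_MEM_S(REMOTE, REMOTE) | PERF_MEM_S(SNOOP, HITM) };

	return (d.mem_op & PERF_MEM_OP_LOAD) != 0;
}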
/*
- * single taken branch record layout:
+ * Layout of single taken branch records:
*
* from: source instruction (may not always be a branch insn)
* to: branch target
@@ -1425,37 +1449,37 @@ union perf_mem_data_src {
struct perf_branch_entry {
__u64 from;
__u64 to;
- __u64 mispred:1, /* target mispredicted */
- predicted:1,/* target predicted */
- in_tx:1, /* in transaction */
- abort:1, /* transaction abort */
- cycles:16, /* cycle count to last branch */
- type:4, /* branch type */
- spec:2, /* branch speculation info */
- new_type:4, /* additional branch type */
- priv:3, /* privilege level */
- reserved:31;
+ __u64 mispred : 1, /* target mispredicted */
+ predicted : 1, /* target predicted */
+ in_tx : 1, /* in transaction */
+ abort : 1, /* transaction abort */
+ cycles : 16, /* cycle count to last branch */
+ type : 4, /* branch type */
+ spec : 2, /* branch speculation info */
+ new_type : 4, /* additional branch type */
+ priv : 3, /* privilege level */
+ reserved : 31;
};
/* Size of used info bits in struct perf_branch_entry */
#define PERF_BRANCH_ENTRY_INFO_BITS_MAX 33
union perf_sample_weight {
- __u64 full;
+ __u64 full;
#if defined(__LITTLE_ENDIAN_BITFIELD)
struct {
- __u32 var1_dw;
- __u16 var2_w;
- __u16 var3_w;
+ __u32 var1_dw;
+ __u16 var2_w;
+ __u16 var3_w;
};
#elif defined(__BIG_ENDIAN_BITFIELD)
struct {
- __u16 var3_w;
- __u16 var2_w;
- __u32 var1_dw;
+ __u16 var3_w;
+ __u16 var2_w;
+ __u32 var1_dw;
};
#else
-#error "Unknown endianness"
+# error "Unknown endianness"
#endif
};
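Reading a structured sample weight back out is only a matter of picking a view of the union; which PMU quantity lands in var1_dw/var2_w/var3_w is event-dependent and not defined by this header, so the sketch below prints the fields generically.

#include <linux/perf_event.h>
#include <stdio.h>

static void print_weight(__u64 raw)
{
	union perf_sample_weight w = { .full = raw };

	/* Field meanings depend on the PMU and the sampled event. */
	printf("full=%llu var1=%u var2=%u var3=%u\n",
	       (unsigned long long)w.full, w.var1_dw, w.var2_w, w.var3_w);
}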
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index 370ed14b1ae0..475fc8ca4403 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -230,7 +230,7 @@ struct prctl_mm_map {
# define PR_PAC_APDBKEY (1UL << 3)
# define PR_PAC_APGAKEY (1UL << 4)
-/* Tagged user address controls for arm64 */
+/* Tagged user address controls for arm64 and RISC-V */
#define PR_SET_TAGGED_ADDR_CTRL 55
#define PR_GET_TAGGED_ADDR_CTRL 56
# define PR_TAGGED_ADDR_ENABLE (1UL << 0)
@@ -244,6 +244,9 @@ struct prctl_mm_map {
# define PR_MTE_TAG_MASK (0xffffUL << PR_MTE_TAG_SHIFT)
/* Unused; kept only for source compatibility */
# define PR_MTE_TCF_SHIFT 1
+/* RISC-V pointer masking tag length */
+# define PR_PMLEN_SHIFT 24
+# define PR_PMLEN_MASK (0x7fUL << PR_PMLEN_SHIFT)
/* Control reclaim behavior when allocating memory */
#define PR_SET_IO_FLUSHER 57
@@ -252,7 +255,12 @@ struct prctl_mm_map {
/* Dispatch syscalls to a userspace handler */
#define PR_SET_SYSCALL_USER_DISPATCH 59
# define PR_SYS_DISPATCH_OFF 0
-# define PR_SYS_DISPATCH_ON 1
+/* Enable dispatch except for the specified range */
+# define PR_SYS_DISPATCH_EXCLUSIVE_ON 1
+/* Enable dispatch for the specified range */
+# define PR_SYS_DISPATCH_INCLUSIVE_ON 2
+/* Legacy name for backwards compatibility */
+# define PR_SYS_DISPATCH_ON PR_SYS_DISPATCH_EXCLUSIVE_ON
/* The control values for the user space selector when dispatch is enabled */
# define SYSCALL_DISPATCH_FILTER_ALLOW 0
# define SYSCALL_DISPATCH_FILTER_BLOCK 1
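A hedged sketch of selecting the two dispatch modes; the address range and the selector variable are placeholders invented for the example, and it assumes a prctl.h new enough to define the *_INCLUSIVE_ON name (arg2 is the mode, arg3/arg4 the range, arg5 the selector pointer).

#include <sys/prctl.h>

static char sud_selector;	/* SYSCALL_DISPATCH_FILTER_ALLOW/BLOCK */

/* Dispatch every syscall except those issued from [start, start + len). */
static int enable_sud_exclusive(unsigned long start, unsigned long len)
{
	return prctl(PR_SET_SYSCALL_USER_DISPATCH,
		     PR_SYS_DISPATCH_EXCLUSIVE_ON, start, len, &sud_selector);
}

/* Newer inclusive mode: dispatch only syscalls issued from the range. */
static int enable_sud_inclusive(unsigned long start, unsigned long len)
{
	return prctl(PR_SET_SYSCALL_USER_DISPATCH,
		     PR_SYS_DISPATCH_INCLUSIVE_ON, start, len, &sud_selector);
}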
@@ -306,4 +314,64 @@ struct prctl_mm_map {
# define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc
# define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f
+#define PR_RISCV_SET_ICACHE_FLUSH_CTX 71
+# define PR_RISCV_CTX_SW_FENCEI_ON 0
+# define PR_RISCV_CTX_SW_FENCEI_OFF 1
+# define PR_RISCV_SCOPE_PER_PROCESS 0
+# define PR_RISCV_SCOPE_PER_THREAD 1
+
+/* PowerPC Dynamic Execution Control Register (DEXCR) controls */
+#define PR_PPC_GET_DEXCR 72
+#define PR_PPC_SET_DEXCR 73
+/* DEXCR aspect to act on */
+# define PR_PPC_DEXCR_SBHE 0 /* Speculative branch hint enable */
+# define PR_PPC_DEXCR_IBRTPD 1 /* Indirect branch recurrent target prediction disable */
+# define PR_PPC_DEXCR_SRAPD 2 /* Subroutine return address prediction disable */
+# define PR_PPC_DEXCR_NPHIE 3 /* Non-privileged hash instruction enable */
+/* Action to apply / return */
+# define PR_PPC_DEXCR_CTRL_EDITABLE 0x1 /* Aspect can be modified with PR_PPC_SET_DEXCR */
+# define PR_PPC_DEXCR_CTRL_SET 0x2 /* Set the aspect for this process */
+# define PR_PPC_DEXCR_CTRL_CLEAR 0x4 /* Clear the aspect for this process */
+# define PR_PPC_DEXCR_CTRL_SET_ONEXEC 0x8 /* Set the aspect on exec */
+# define PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC 0x10 /* Clear the aspect on exec */
+# define PR_PPC_DEXCR_CTRL_MASK 0x1f
+
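A sketch of how the GET/SET pair is intended to compose, assuming the convention that PR_PPC_GET_DEXCR returns the PR_PPC_DEXCR_CTRL_* mask for the aspect passed in arg2; NPHIE is only an example aspect.

#include <sys/prctl.h>

static int set_nphie_for_process(void)
{
	int ctrl = prctl(PR_PPC_GET_DEXCR, PR_PPC_DEXCR_NPHIE, 0, 0, 0);

	if (ctrl < 0 || !(ctrl & PR_PPC_DEXCR_CTRL_EDITABLE))
		return -1;	/* not supported or not editable */
	return prctl(PR_PPC_SET_DEXCR, PR_PPC_DEXCR_NPHIE,
		     PR_PPC_DEXCR_CTRL_SET, 0, 0);
}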
+/*
+ * Get the current shadow stack configuration for the current thread,
+ * this will be the value configured via PR_SET_SHADOW_STACK_STATUS.
+ */
+#define PR_GET_SHADOW_STACK_STATUS 74
+
+/*
+ * Set the current shadow stack configuration. Enabling the shadow
+ * stack will cause a shadow stack to be allocated for the thread.
+ */
+#define PR_SET_SHADOW_STACK_STATUS 75
+# define PR_SHADOW_STACK_ENABLE (1UL << 0)
+# define PR_SHADOW_STACK_WRITE (1UL << 1)
+# define PR_SHADOW_STACK_PUSH (1UL << 2)
+
+/*
+ * Prevent further changes to the specified shadow stack
+ * configuration. All bits may be locked via this call, including
+ * undefined bits.
+ */
+#define PR_LOCK_SHADOW_STACK_STATUS 76
+
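A hedged sketch of the enable-then-lock flow described above, assuming arg2 of PR_GET_SHADOW_STACK_STATUS is a pointer to receive the status and arg2 of the SET/LOCK calls is a bit value/mask.

#include <sys/prctl.h>

static int enable_and_lock_shadow_stack(void)
{
	unsigned long status = 0;

	if (prctl(PR_GET_SHADOW_STACK_STATUS, &status, 0, 0, 0))
		return -1;
	if (!(status & PR_SHADOW_STACK_ENABLE) &&
	    prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE, 0, 0, 0))
		return -1;
	/* Prevent any further changes, including to undefined bits. */
	return prctl(PR_LOCK_SHADOW_STACK_STATUS, ~0UL, 0, 0, 0);
}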
+/*
+ * Controls the mode of timer_create() for CRIU restore operations.
+ * Enabling this allows CRIU to restore timers with explicit IDs.
+ *
+ * Don't use for normal operations as the result might be undefined.
+ */
+#define PR_TIMER_CREATE_RESTORE_IDS 77
+# define PR_TIMER_CREATE_RESTORE_IDS_OFF 0
+# define PR_TIMER_CREATE_RESTORE_IDS_ON 1
+# define PR_TIMER_CREATE_RESTORE_IDS_GET 2
+
+/* FUTEX hash management */
+#define PR_FUTEX_HASH 78
+# define PR_FUTEX_HASH_SET_SLOTS 1
+# define PR_FUTEX_HASH_GET_SLOTS 2
+
#endif /* _LINUX_PRCTL_H */
diff --git a/tools/include/uapi/linux/rtnetlink.h b/tools/include/uapi/linux/rtnetlink.h
new file mode 100644
index 000000000000..dab9493c791b
--- /dev/null
+++ b/tools/include/uapi/linux/rtnetlink.h
@@ -0,0 +1,848 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI__LINUX_RTNETLINK_H
+#define _UAPI__LINUX_RTNETLINK_H
+
+#include <linux/types.h>
+#include <linux/netlink.h>
+#include <linux/if_link.h>
+#include <linux/if_addr.h>
+#include <linux/neighbour.h>
+
+/* rtnetlink families. Values up to 127 are reserved for real address
+ * families, values above 128 may be used arbitrarily.
+ */
+#define RTNL_FAMILY_IPMR 128
+#define RTNL_FAMILY_IP6MR 129
+#define RTNL_FAMILY_MAX 129
+
+/****
+ * Routing/neighbour discovery messages.
+ ****/
+
+/* Types of messages */
+
+enum {
+ RTM_BASE = 16,
+#define RTM_BASE RTM_BASE
+
+ RTM_NEWLINK = 16,
+#define RTM_NEWLINK RTM_NEWLINK
+ RTM_DELLINK,
+#define RTM_DELLINK RTM_DELLINK
+ RTM_GETLINK,
+#define RTM_GETLINK RTM_GETLINK
+ RTM_SETLINK,
+#define RTM_SETLINK RTM_SETLINK
+
+ RTM_NEWADDR = 20,
+#define RTM_NEWADDR RTM_NEWADDR
+ RTM_DELADDR,
+#define RTM_DELADDR RTM_DELADDR
+ RTM_GETADDR,
+#define RTM_GETADDR RTM_GETADDR
+
+ RTM_NEWROUTE = 24,
+#define RTM_NEWROUTE RTM_NEWROUTE
+ RTM_DELROUTE,
+#define RTM_DELROUTE RTM_DELROUTE
+ RTM_GETROUTE,
+#define RTM_GETROUTE RTM_GETROUTE
+
+ RTM_NEWNEIGH = 28,
+#define RTM_NEWNEIGH RTM_NEWNEIGH
+ RTM_DELNEIGH,
+#define RTM_DELNEIGH RTM_DELNEIGH
+ RTM_GETNEIGH,
+#define RTM_GETNEIGH RTM_GETNEIGH
+
+ RTM_NEWRULE = 32,
+#define RTM_NEWRULE RTM_NEWRULE
+ RTM_DELRULE,
+#define RTM_DELRULE RTM_DELRULE
+ RTM_GETRULE,
+#define RTM_GETRULE RTM_GETRULE
+
+ RTM_NEWQDISC = 36,
+#define RTM_NEWQDISC RTM_NEWQDISC
+ RTM_DELQDISC,
+#define RTM_DELQDISC RTM_DELQDISC
+ RTM_GETQDISC,
+#define RTM_GETQDISC RTM_GETQDISC
+
+ RTM_NEWTCLASS = 40,
+#define RTM_NEWTCLASS RTM_NEWTCLASS
+ RTM_DELTCLASS,
+#define RTM_DELTCLASS RTM_DELTCLASS
+ RTM_GETTCLASS,
+#define RTM_GETTCLASS RTM_GETTCLASS
+
+ RTM_NEWTFILTER = 44,
+#define RTM_NEWTFILTER RTM_NEWTFILTER
+ RTM_DELTFILTER,
+#define RTM_DELTFILTER RTM_DELTFILTER
+ RTM_GETTFILTER,
+#define RTM_GETTFILTER RTM_GETTFILTER
+
+ RTM_NEWACTION = 48,
+#define RTM_NEWACTION RTM_NEWACTION
+ RTM_DELACTION,
+#define RTM_DELACTION RTM_DELACTION
+ RTM_GETACTION,
+#define RTM_GETACTION RTM_GETACTION
+
+ RTM_NEWPREFIX = 52,
+#define RTM_NEWPREFIX RTM_NEWPREFIX
+
+ RTM_NEWMULTICAST = 56,
+#define RTM_NEWMULTICAST RTM_NEWMULTICAST
+ RTM_DELMULTICAST,
+#define RTM_DELMULTICAST RTM_DELMULTICAST
+ RTM_GETMULTICAST,
+#define RTM_GETMULTICAST RTM_GETMULTICAST
+
+ RTM_NEWANYCAST = 60,
+#define RTM_NEWANYCAST RTM_NEWANYCAST
+ RTM_DELANYCAST,
+#define RTM_DELANYCAST RTM_DELANYCAST
+ RTM_GETANYCAST,
+#define RTM_GETANYCAST RTM_GETANYCAST
+
+ RTM_NEWNEIGHTBL = 64,
+#define RTM_NEWNEIGHTBL RTM_NEWNEIGHTBL
+ RTM_GETNEIGHTBL = 66,
+#define RTM_GETNEIGHTBL RTM_GETNEIGHTBL
+ RTM_SETNEIGHTBL,
+#define RTM_SETNEIGHTBL RTM_SETNEIGHTBL
+
+ RTM_NEWNDUSEROPT = 68,
+#define RTM_NEWNDUSEROPT RTM_NEWNDUSEROPT
+
+ RTM_NEWADDRLABEL = 72,
+#define RTM_NEWADDRLABEL RTM_NEWADDRLABEL
+ RTM_DELADDRLABEL,
+#define RTM_DELADDRLABEL RTM_DELADDRLABEL
+ RTM_GETADDRLABEL,
+#define RTM_GETADDRLABEL RTM_GETADDRLABEL
+
+ RTM_GETDCB = 78,
+#define RTM_GETDCB RTM_GETDCB
+ RTM_SETDCB,
+#define RTM_SETDCB RTM_SETDCB
+
+ RTM_NEWNETCONF = 80,
+#define RTM_NEWNETCONF RTM_NEWNETCONF
+ RTM_DELNETCONF,
+#define RTM_DELNETCONF RTM_DELNETCONF
+ RTM_GETNETCONF = 82,
+#define RTM_GETNETCONF RTM_GETNETCONF
+
+ RTM_NEWMDB = 84,
+#define RTM_NEWMDB RTM_NEWMDB
+ RTM_DELMDB = 85,
+#define RTM_DELMDB RTM_DELMDB
+ RTM_GETMDB = 86,
+#define RTM_GETMDB RTM_GETMDB
+
+ RTM_NEWNSID = 88,
+#define RTM_NEWNSID RTM_NEWNSID
+ RTM_DELNSID = 89,
+#define RTM_DELNSID RTM_DELNSID
+ RTM_GETNSID = 90,
+#define RTM_GETNSID RTM_GETNSID
+
+ RTM_NEWSTATS = 92,
+#define RTM_NEWSTATS RTM_NEWSTATS
+ RTM_GETSTATS = 94,
+#define RTM_GETSTATS RTM_GETSTATS
+ RTM_SETSTATS,
+#define RTM_SETSTATS RTM_SETSTATS
+
+ RTM_NEWCACHEREPORT = 96,
+#define RTM_NEWCACHEREPORT RTM_NEWCACHEREPORT
+
+ RTM_NEWCHAIN = 100,
+#define RTM_NEWCHAIN RTM_NEWCHAIN
+ RTM_DELCHAIN,
+#define RTM_DELCHAIN RTM_DELCHAIN
+ RTM_GETCHAIN,
+#define RTM_GETCHAIN RTM_GETCHAIN
+
+ RTM_NEWNEXTHOP = 104,
+#define RTM_NEWNEXTHOP RTM_NEWNEXTHOP
+ RTM_DELNEXTHOP,
+#define RTM_DELNEXTHOP RTM_DELNEXTHOP
+ RTM_GETNEXTHOP,
+#define RTM_GETNEXTHOP RTM_GETNEXTHOP
+
+ RTM_NEWLINKPROP = 108,
+#define RTM_NEWLINKPROP RTM_NEWLINKPROP
+ RTM_DELLINKPROP,
+#define RTM_DELLINKPROP RTM_DELLINKPROP
+ RTM_GETLINKPROP,
+#define RTM_GETLINKPROP RTM_GETLINKPROP
+
+ RTM_NEWVLAN = 112,
+#define RTM_NEWVLAN RTM_NEWVLAN
+ RTM_DELVLAN,
+#define RTM_DELVLAN RTM_DELVLAN
+ RTM_GETVLAN,
+#define RTM_GETVLAN RTM_GETVLAN
+
+ RTM_NEWNEXTHOPBUCKET = 116,
+#define RTM_NEWNEXTHOPBUCKET RTM_NEWNEXTHOPBUCKET
+ RTM_DELNEXTHOPBUCKET,
+#define RTM_DELNEXTHOPBUCKET RTM_DELNEXTHOPBUCKET
+ RTM_GETNEXTHOPBUCKET,
+#define RTM_GETNEXTHOPBUCKET RTM_GETNEXTHOPBUCKET
+
+ RTM_NEWTUNNEL = 120,
+#define RTM_NEWTUNNEL RTM_NEWTUNNEL
+ RTM_DELTUNNEL,
+#define RTM_DELTUNNEL RTM_DELTUNNEL
+ RTM_GETTUNNEL,
+#define RTM_GETTUNNEL RTM_GETTUNNEL
+
+ __RTM_MAX,
+#define RTM_MAX (((__RTM_MAX + 3) & ~3) - 1)
+};
+
+#define RTM_NR_MSGTYPES (RTM_MAX + 1 - RTM_BASE)
+#define RTM_NR_FAMILIES (RTM_NR_MSGTYPES >> 2)
+#define RTM_FAM(cmd) (((cmd) - RTM_BASE) >> 2)
+
+/*
+ Generic structure for encapsulation of optional route information.
+ It is reminiscent of sockaddr, but with sa_family replaced
+ with attribute type.
+ */
+
+struct rtattr {
+ unsigned short rta_len;
+ unsigned short rta_type;
+};
+
+/* Macros to handle rtattributes */
+
+#define RTA_ALIGNTO 4U
+#define RTA_ALIGN(len) ( ((len)+RTA_ALIGNTO-1) & ~(RTA_ALIGNTO-1) )
+#define RTA_OK(rta,len) ((len) >= (int)sizeof(struct rtattr) && \
+ (rta)->rta_len >= sizeof(struct rtattr) && \
+ (rta)->rta_len <= (len))
+#define RTA_NEXT(rta,attrlen) ((attrlen) -= RTA_ALIGN((rta)->rta_len), \
+ (struct rtattr*)(((char*)(rta)) + RTA_ALIGN((rta)->rta_len)))
+#define RTA_LENGTH(len) (RTA_ALIGN(sizeof(struct rtattr)) + (len))
+#define RTA_SPACE(len) RTA_ALIGN(RTA_LENGTH(len))
+#define RTA_DATA(rta) ((void*)(((char*)(rta)) + RTA_LENGTH(0)))
+#define RTA_PAYLOAD(rta) ((int)((rta)->rta_len) - RTA_LENGTH(0))
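These macros are designed to be used together as a cursor over the attribute stream that follows a message header; a minimal hedged walker is shown below (the rta/len pair would normally come from RTM_RTA()/RTM_PAYLOAD() on a received message, defined further down).

#include <stdio.h>
#include <linux/rtnetlink.h>

static void dump_attrs(struct rtattr *rta, int len)
{
	for (; RTA_OK(rta, len); rta = RTA_NEXT(rta, len))
		printf("type=%hu payload=%d bytes\n",
		       rta->rta_type, RTA_PAYLOAD(rta));
}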
+
+
+
+
+/******************************************************************************
+ * Definitions used in routing table administration.
+ ****/
+
+struct rtmsg {
+ unsigned char rtm_family;
+ unsigned char rtm_dst_len;
+ unsigned char rtm_src_len;
+ unsigned char rtm_tos;
+
+ unsigned char rtm_table; /* Routing table id */
+ unsigned char rtm_protocol; /* Routing protocol; see below */
+ unsigned char rtm_scope; /* See below */
+ unsigned char rtm_type; /* See below */
+
+ unsigned rtm_flags;
+};
+
+/* rtm_type */
+
+enum {
+ RTN_UNSPEC,
+ RTN_UNICAST, /* Gateway or direct route */
+ RTN_LOCAL, /* Accept locally */
+ RTN_BROADCAST, /* Accept locally as broadcast,
+ send as broadcast */
+ RTN_ANYCAST, /* Accept locally as broadcast,
+ but send as unicast */
+ RTN_MULTICAST, /* Multicast route */
+ RTN_BLACKHOLE, /* Drop */
+ RTN_UNREACHABLE, /* Destination is unreachable */
+ RTN_PROHIBIT, /* Administratively prohibited */
+ RTN_THROW, /* Not in this table */
+ RTN_NAT, /* Translate this address */
+ RTN_XRESOLVE, /* Use external resolver */
+ __RTN_MAX
+};
+
+#define RTN_MAX (__RTN_MAX - 1)
+
+
+/* rtm_protocol */
+
+#define RTPROT_UNSPEC 0
+#define RTPROT_REDIRECT 1 /* Route installed by ICMP redirects;
+ not used by current IPv4 */
+#define RTPROT_KERNEL 2 /* Route installed by kernel */
+#define RTPROT_BOOT 3 /* Route installed during boot */
+#define RTPROT_STATIC 4 /* Route installed by administrator */
+
+/* Values of protocol >= RTPROT_STATIC are not interpreted by the kernel;
+   they are just passed from userspace and back as is.
+   They are intended for use by multiple (hypothetical) routing daemons.
+   Note that protocol values should be standardized in order to
+   avoid conflicts.
+ */
+
+#define RTPROT_GATED 8 /* Apparently, GateD */
+#define RTPROT_RA 9 /* RDISC/ND router advertisements */
+#define RTPROT_MRT 10 /* Merit MRT */
+#define RTPROT_ZEBRA 11 /* Zebra */
+#define RTPROT_BIRD 12 /* BIRD */
+#define RTPROT_DNROUTED 13 /* DECnet routing daemon */
+#define RTPROT_XORP 14 /* XORP */
+#define RTPROT_NTK 15 /* Netsukuku */
+#define RTPROT_DHCP 16 /* DHCP client */
+#define RTPROT_MROUTED 17 /* Multicast daemon */
+#define RTPROT_KEEPALIVED 18 /* Keepalived daemon */
+#define RTPROT_BABEL 42 /* Babel daemon */
+#define RTPROT_OVN 84 /* OVN daemon */
+#define RTPROT_OPENR 99 /* Open Routing (Open/R) Routes */
+#define RTPROT_BGP 186 /* BGP Routes */
+#define RTPROT_ISIS 187 /* ISIS Routes */
+#define RTPROT_OSPF 188 /* OSPF Routes */
+#define RTPROT_RIP 189 /* RIP Routes */
+#define RTPROT_EIGRP 192 /* EIGRP Routes */
+
+/* rtm_scope
+
+   Really it is not a scope, but a sort of distance to the destination.
+   NOWHERE is reserved for destinations that do not exist, HOST is for our
+   local addresses, LINK is for destinations located on a directly attached
+   link, and UNIVERSE is everywhere in the Universe.
+
+   Intermediate values are also possible, e.g. interior routes
+   could be assigned a value between UNIVERSE and LINK.
+*/
+
+enum rt_scope_t {
+ RT_SCOPE_UNIVERSE=0,
+/* User defined values */
+ RT_SCOPE_SITE=200,
+ RT_SCOPE_LINK=253,
+ RT_SCOPE_HOST=254,
+ RT_SCOPE_NOWHERE=255
+};
+
+/* rtm_flags */
+
+#define RTM_F_NOTIFY 0x100 /* Notify user of route change */
+#define RTM_F_CLONED 0x200 /* This route is cloned */
+#define RTM_F_EQUALIZE 0x400 /* Multipath equalizer: NI */
+#define RTM_F_PREFIX 0x800 /* Prefix addresses */
+#define RTM_F_LOOKUP_TABLE 0x1000 /* set rtm_table to FIB lookup result */
+#define RTM_F_FIB_MATCH 0x2000 /* return full fib lookup match */
+#define RTM_F_OFFLOAD 0x4000 /* route is offloaded */
+#define RTM_F_TRAP 0x8000 /* route is trapping packets */
+#define RTM_F_OFFLOAD_FAILED 0x20000000 /* route offload failed, this value
+ * is chosen to avoid conflicts with
+ * other flags defined in
+ * include/uapi/linux/ipv6_route.h
+ */
+
+/* Reserved table identifiers */
+
+enum rt_class_t {
+ RT_TABLE_UNSPEC=0,
+/* User defined values */
+ RT_TABLE_COMPAT=252,
+ RT_TABLE_DEFAULT=253,
+ RT_TABLE_MAIN=254,
+ RT_TABLE_LOCAL=255,
+ RT_TABLE_MAX=0xFFFFFFFF
+};
+
+
+/* Routing message attributes */
+
+enum rtattr_type_t {
+ RTA_UNSPEC,
+ RTA_DST,
+ RTA_SRC,
+ RTA_IIF,
+ RTA_OIF,
+ RTA_GATEWAY,
+ RTA_PRIORITY,
+ RTA_PREFSRC,
+ RTA_METRICS,
+ RTA_MULTIPATH,
+ RTA_PROTOINFO, /* no longer used */
+ RTA_FLOW,
+ RTA_CACHEINFO,
+ RTA_SESSION, /* no longer used */
+ RTA_MP_ALGO, /* no longer used */
+ RTA_TABLE,
+ RTA_MARK,
+ RTA_MFC_STATS,
+ RTA_VIA,
+ RTA_NEWDST,
+ RTA_PREF,
+ RTA_ENCAP_TYPE,
+ RTA_ENCAP,
+ RTA_EXPIRES,
+ RTA_PAD,
+ RTA_UID,
+ RTA_TTL_PROPAGATE,
+ RTA_IP_PROTO,
+ RTA_SPORT,
+ RTA_DPORT,
+ RTA_NH_ID,
+ RTA_FLOWLABEL,
+ __RTA_MAX
+};
+
+#define RTA_MAX (__RTA_MAX - 1)
+
+#define RTM_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct rtmsg))))
+#define RTM_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct rtmsg))
+
+/* RTA_MULTIPATH --- array of struct rtnexthop.
+ *
+ * "struct rtnexthop" describes all necessary nexthop information,
+ * i.e. parameters of path to a destination via this nexthop.
+ *
+ * At the moment it is impossible to set different prefsrc, mtu, window
+ * and rtt for different paths from multipath.
+ */
+
+struct rtnexthop {
+ unsigned short rtnh_len;
+ unsigned char rtnh_flags;
+ unsigned char rtnh_hops;
+ int rtnh_ifindex;
+};
+
+/* rtnh_flags */
+
+#define RTNH_F_DEAD 1 /* Nexthop is dead (used by multipath) */
+#define RTNH_F_PERVASIVE 2 /* Do recursive gateway lookup */
+#define RTNH_F_ONLINK 4 /* Gateway is forced on link */
+#define RTNH_F_OFFLOAD 8 /* Nexthop is offloaded */
+#define RTNH_F_LINKDOWN 16 /* carrier-down on nexthop */
+#define RTNH_F_UNRESOLVED 32 /* The entry is unresolved (ipmr) */
+#define RTNH_F_TRAP 64 /* Nexthop is trapping packets */
+
+#define RTNH_COMPARE_MASK (RTNH_F_DEAD | RTNH_F_LINKDOWN | \
+ RTNH_F_OFFLOAD | RTNH_F_TRAP)
+
+/* Macros to handle nexthops */
+
+#define RTNH_ALIGNTO 4
+#define RTNH_ALIGN(len) ( ((len)+RTNH_ALIGNTO-1) & ~(RTNH_ALIGNTO-1) )
+#define RTNH_OK(rtnh,len) ((rtnh)->rtnh_len >= sizeof(struct rtnexthop) && \
+ ((int)(rtnh)->rtnh_len) <= (len))
+#define RTNH_NEXT(rtnh) ((struct rtnexthop*)(((char*)(rtnh)) + RTNH_ALIGN((rtnh)->rtnh_len)))
+#define RTNH_LENGTH(len) (RTNH_ALIGN(sizeof(struct rtnexthop)) + (len))
+#define RTNH_SPACE(len) RTNH_ALIGN(RTNH_LENGTH(len))
+#define RTNH_DATA(rtnh) ((struct rtattr*)(((char*)(rtnh)) + RTNH_LENGTH(0)))
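Walking an RTA_MULTIPATH attribute follows the same pattern with the RTNH_* cursor macros; note that RTNH_NEXT(), unlike RTA_NEXT(), does not adjust the remaining length, so the hedged sketch below does that explicitly.

#include <stdio.h>
#include <linux/rtnetlink.h>

static void dump_nexthops(struct rtattr *rta)
{
	struct rtnexthop *nh = RTA_DATA(rta);
	int len = RTA_PAYLOAD(rta);

	while (RTNH_OK(nh, len)) {
		printf("ifindex=%d hops=%u flags=0x%x\n",
		       nh->rtnh_ifindex, (unsigned)nh->rtnh_hops,
		       (unsigned)nh->rtnh_flags);
		len -= RTNH_ALIGN(nh->rtnh_len);
		nh = RTNH_NEXT(nh);
	}
}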
+
+/* RTA_VIA */
+struct rtvia {
+ __kernel_sa_family_t rtvia_family;
+ __u8 rtvia_addr[];
+};
+
+/* RTA_CACHEINFO */
+
+struct rta_cacheinfo {
+ __u32 rta_clntref;
+ __u32 rta_lastuse;
+ __s32 rta_expires;
+ __u32 rta_error;
+ __u32 rta_used;
+
+#define RTNETLINK_HAVE_PEERINFO 1
+ __u32 rta_id;
+ __u32 rta_ts;
+ __u32 rta_tsage;
+};
+
+/* RTA_METRICS --- array of struct rtattr with types of RTAX_* */
+
+enum {
+ RTAX_UNSPEC,
+#define RTAX_UNSPEC RTAX_UNSPEC
+ RTAX_LOCK,
+#define RTAX_LOCK RTAX_LOCK
+ RTAX_MTU,
+#define RTAX_MTU RTAX_MTU
+ RTAX_WINDOW,
+#define RTAX_WINDOW RTAX_WINDOW
+ RTAX_RTT,
+#define RTAX_RTT RTAX_RTT
+ RTAX_RTTVAR,
+#define RTAX_RTTVAR RTAX_RTTVAR
+ RTAX_SSTHRESH,
+#define RTAX_SSTHRESH RTAX_SSTHRESH
+ RTAX_CWND,
+#define RTAX_CWND RTAX_CWND
+ RTAX_ADVMSS,
+#define RTAX_ADVMSS RTAX_ADVMSS
+ RTAX_REORDERING,
+#define RTAX_REORDERING RTAX_REORDERING
+ RTAX_HOPLIMIT,
+#define RTAX_HOPLIMIT RTAX_HOPLIMIT
+ RTAX_INITCWND,
+#define RTAX_INITCWND RTAX_INITCWND
+ RTAX_FEATURES,
+#define RTAX_FEATURES RTAX_FEATURES
+ RTAX_RTO_MIN,
+#define RTAX_RTO_MIN RTAX_RTO_MIN
+ RTAX_INITRWND,
+#define RTAX_INITRWND RTAX_INITRWND
+ RTAX_QUICKACK,
+#define RTAX_QUICKACK RTAX_QUICKACK
+ RTAX_CC_ALGO,
+#define RTAX_CC_ALGO RTAX_CC_ALGO
+ RTAX_FASTOPEN_NO_COOKIE,
+#define RTAX_FASTOPEN_NO_COOKIE RTAX_FASTOPEN_NO_COOKIE
+ __RTAX_MAX
+};
+
+#define RTAX_MAX (__RTAX_MAX - 1)
+
+#define RTAX_FEATURE_ECN (1 << 0)
+#define RTAX_FEATURE_SACK (1 << 1) /* unused */
+#define RTAX_FEATURE_TIMESTAMP (1 << 2) /* unused */
+#define RTAX_FEATURE_ALLFRAG (1 << 3) /* unused */
+#define RTAX_FEATURE_TCP_USEC_TS (1 << 4)
+
+#define RTAX_FEATURE_MASK (RTAX_FEATURE_ECN | \
+ RTAX_FEATURE_SACK | \
+ RTAX_FEATURE_TIMESTAMP | \
+ RTAX_FEATURE_ALLFRAG | \
+ RTAX_FEATURE_TCP_USEC_TS)
+
+struct rta_session {
+ __u8 proto;
+ __u8 pad1;
+ __u16 pad2;
+
+ union {
+ struct {
+ __u16 sport;
+ __u16 dport;
+ } ports;
+
+ struct {
+ __u8 type;
+ __u8 code;
+ __u16 ident;
+ } icmpt;
+
+ __u32 spi;
+ } u;
+};
+
+struct rta_mfc_stats {
+ __u64 mfcs_packets;
+ __u64 mfcs_bytes;
+ __u64 mfcs_wrong_if;
+};
+
+/****
+ * General form of address family dependent message.
+ ****/
+
+struct rtgenmsg {
+ unsigned char rtgen_family;
+};
+
+/*****************************************************************
+ * Link layer specific messages.
+ ****/
+
+/* struct ifinfomsg
+ * passes link level specific information, not dependent
+ * on network protocol.
+ */
+
+struct ifinfomsg {
+ unsigned char ifi_family;
+ unsigned char __ifi_pad;
+ unsigned short ifi_type; /* ARPHRD_* */
+ int ifi_index; /* Link index */
+ unsigned ifi_flags; /* IFF_* flags */
+ unsigned ifi_change; /* IFF_* change mask */
+};
+
+/********************************************************************
+ * prefix information
+ ****/
+
+struct prefixmsg {
+ unsigned char prefix_family;
+ unsigned char prefix_pad1;
+ unsigned short prefix_pad2;
+ int prefix_ifindex;
+ unsigned char prefix_type;
+ unsigned char prefix_len;
+ unsigned char prefix_flags;
+ unsigned char prefix_pad3;
+};
+
+enum
+{
+ PREFIX_UNSPEC,
+ PREFIX_ADDRESS,
+ PREFIX_CACHEINFO,
+ __PREFIX_MAX
+};
+
+#define PREFIX_MAX (__PREFIX_MAX - 1)
+
+struct prefix_cacheinfo {
+ __u32 preferred_time;
+ __u32 valid_time;
+};
+
+
+/*****************************************************************
+ * Traffic control messages.
+ ****/
+
+struct tcmsg {
+ unsigned char tcm_family;
+ unsigned char tcm__pad1;
+ unsigned short tcm__pad2;
+ int tcm_ifindex;
+ __u32 tcm_handle;
+ __u32 tcm_parent;
+/* tcm_block_index is used instead of tcm_parent
+ * in case tcm_ifindex == TCM_IFINDEX_MAGIC_BLOCK
+ */
+#define tcm_block_index tcm_parent
+ __u32 tcm_info;
+};
+
+/* For manipulation of filters in shared block, tcm_ifindex is set to
+ * TCM_IFINDEX_MAGIC_BLOCK, and tcm_parent is aliased to tcm_block_index
+ * which is the block index.
+ */
+#define TCM_IFINDEX_MAGIC_BLOCK (0xFFFFFFFFU)
+
+enum {
+ TCA_UNSPEC,
+ TCA_KIND,
+ TCA_OPTIONS,
+ TCA_STATS,
+ TCA_XSTATS,
+ TCA_RATE,
+ TCA_FCNT,
+ TCA_STATS2,
+ TCA_STAB,
+ TCA_PAD,
+ TCA_DUMP_INVISIBLE,
+ TCA_CHAIN,
+ TCA_HW_OFFLOAD,
+ TCA_INGRESS_BLOCK,
+ TCA_EGRESS_BLOCK,
+ TCA_DUMP_FLAGS,
+ TCA_EXT_WARN_MSG,
+ __TCA_MAX
+};
+
+#define TCA_MAX (__TCA_MAX - 1)
+
+#define TCA_DUMP_FLAGS_TERSE (1 << 0) /* Means that in dump user gets only basic
+ * data necessary to identify the objects
+ * (handle, cookie, etc.) and stats.
+ */
+
+#define TCA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct tcmsg))))
+#define TCA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct tcmsg))
+
+/********************************************************************
+ * Neighbor Discovery userland options
+ ****/
+
+struct nduseroptmsg {
+ unsigned char nduseropt_family;
+ unsigned char nduseropt_pad1;
+ unsigned short nduseropt_opts_len; /* Total length of options */
+ int nduseropt_ifindex;
+ __u8 nduseropt_icmp_type;
+ __u8 nduseropt_icmp_code;
+ unsigned short nduseropt_pad2;
+ unsigned int nduseropt_pad3;
+ /* Followed by one or more ND options */
+};
+
+enum {
+ NDUSEROPT_UNSPEC,
+ NDUSEROPT_SRCADDR,
+ __NDUSEROPT_MAX
+};
+
+#define NDUSEROPT_MAX (__NDUSEROPT_MAX - 1)
+
+#ifndef __KERNEL__
+/* RTnetlink multicast groups - backwards compatibility for userspace */
+#define RTMGRP_LINK 1
+#define RTMGRP_NOTIFY 2
+#define RTMGRP_NEIGH 4
+#define RTMGRP_TC 8
+
+#define RTMGRP_IPV4_IFADDR 0x10
+#define RTMGRP_IPV4_MROUTE 0x20
+#define RTMGRP_IPV4_ROUTE 0x40
+#define RTMGRP_IPV4_RULE 0x80
+
+#define RTMGRP_IPV6_IFADDR 0x100
+#define RTMGRP_IPV6_MROUTE 0x200
+#define RTMGRP_IPV6_ROUTE 0x400
+#define RTMGRP_IPV6_IFINFO 0x800
+
+#define RTMGRP_DECnet_IFADDR 0x1000
+#define RTMGRP_DECnet_ROUTE 0x4000
+
+#define RTMGRP_IPV6_PREFIX 0x20000
+#endif
+
+/* RTnetlink multicast groups */
+enum rtnetlink_groups {
+ RTNLGRP_NONE,
+#define RTNLGRP_NONE RTNLGRP_NONE
+ RTNLGRP_LINK,
+#define RTNLGRP_LINK RTNLGRP_LINK
+ RTNLGRP_NOTIFY,
+#define RTNLGRP_NOTIFY RTNLGRP_NOTIFY
+ RTNLGRP_NEIGH,
+#define RTNLGRP_NEIGH RTNLGRP_NEIGH
+ RTNLGRP_TC,
+#define RTNLGRP_TC RTNLGRP_TC
+ RTNLGRP_IPV4_IFADDR,
+#define RTNLGRP_IPV4_IFADDR RTNLGRP_IPV4_IFADDR
+ RTNLGRP_IPV4_MROUTE,
+#define RTNLGRP_IPV4_MROUTE RTNLGRP_IPV4_MROUTE
+ RTNLGRP_IPV4_ROUTE,
+#define RTNLGRP_IPV4_ROUTE RTNLGRP_IPV4_ROUTE
+ RTNLGRP_IPV4_RULE,
+#define RTNLGRP_IPV4_RULE RTNLGRP_IPV4_RULE
+ RTNLGRP_IPV6_IFADDR,
+#define RTNLGRP_IPV6_IFADDR RTNLGRP_IPV6_IFADDR
+ RTNLGRP_IPV6_MROUTE,
+#define RTNLGRP_IPV6_MROUTE RTNLGRP_IPV6_MROUTE
+ RTNLGRP_IPV6_ROUTE,
+#define RTNLGRP_IPV6_ROUTE RTNLGRP_IPV6_ROUTE
+ RTNLGRP_IPV6_IFINFO,
+#define RTNLGRP_IPV6_IFINFO RTNLGRP_IPV6_IFINFO
+ RTNLGRP_DECnet_IFADDR,
+#define RTNLGRP_DECnet_IFADDR RTNLGRP_DECnet_IFADDR
+ RTNLGRP_NOP2,
+ RTNLGRP_DECnet_ROUTE,
+#define RTNLGRP_DECnet_ROUTE RTNLGRP_DECnet_ROUTE
+ RTNLGRP_DECnet_RULE,
+#define RTNLGRP_DECnet_RULE RTNLGRP_DECnet_RULE
+ RTNLGRP_NOP4,
+ RTNLGRP_IPV6_PREFIX,
+#define RTNLGRP_IPV6_PREFIX RTNLGRP_IPV6_PREFIX
+ RTNLGRP_IPV6_RULE,
+#define RTNLGRP_IPV6_RULE RTNLGRP_IPV6_RULE
+ RTNLGRP_ND_USEROPT,
+#define RTNLGRP_ND_USEROPT RTNLGRP_ND_USEROPT
+ RTNLGRP_PHONET_IFADDR,
+#define RTNLGRP_PHONET_IFADDR RTNLGRP_PHONET_IFADDR
+ RTNLGRP_PHONET_ROUTE,
+#define RTNLGRP_PHONET_ROUTE RTNLGRP_PHONET_ROUTE
+ RTNLGRP_DCB,
+#define RTNLGRP_DCB RTNLGRP_DCB
+ RTNLGRP_IPV4_NETCONF,
+#define RTNLGRP_IPV4_NETCONF RTNLGRP_IPV4_NETCONF
+ RTNLGRP_IPV6_NETCONF,
+#define RTNLGRP_IPV6_NETCONF RTNLGRP_IPV6_NETCONF
+ RTNLGRP_MDB,
+#define RTNLGRP_MDB RTNLGRP_MDB
+ RTNLGRP_MPLS_ROUTE,
+#define RTNLGRP_MPLS_ROUTE RTNLGRP_MPLS_ROUTE
+ RTNLGRP_NSID,
+#define RTNLGRP_NSID RTNLGRP_NSID
+ RTNLGRP_MPLS_NETCONF,
+#define RTNLGRP_MPLS_NETCONF RTNLGRP_MPLS_NETCONF
+ RTNLGRP_IPV4_MROUTE_R,
+#define RTNLGRP_IPV4_MROUTE_R RTNLGRP_IPV4_MROUTE_R
+ RTNLGRP_IPV6_MROUTE_R,
+#define RTNLGRP_IPV6_MROUTE_R RTNLGRP_IPV6_MROUTE_R
+ RTNLGRP_NEXTHOP,
+#define RTNLGRP_NEXTHOP RTNLGRP_NEXTHOP
+ RTNLGRP_BRVLAN,
+#define RTNLGRP_BRVLAN RTNLGRP_BRVLAN
+ RTNLGRP_MCTP_IFADDR,
+#define RTNLGRP_MCTP_IFADDR RTNLGRP_MCTP_IFADDR
+ RTNLGRP_TUNNEL,
+#define RTNLGRP_TUNNEL RTNLGRP_TUNNEL
+ RTNLGRP_STATS,
+#define RTNLGRP_STATS RTNLGRP_STATS
+ RTNLGRP_IPV4_MCADDR,
+#define RTNLGRP_IPV4_MCADDR RTNLGRP_IPV4_MCADDR
+ RTNLGRP_IPV6_MCADDR,
+#define RTNLGRP_IPV6_MCADDR RTNLGRP_IPV6_MCADDR
+ RTNLGRP_IPV6_ACADDR,
+#define RTNLGRP_IPV6_ACADDR RTNLGRP_IPV6_ACADDR
+ __RTNLGRP_MAX
+};
+#define RTNLGRP_MAX (__RTNLGRP_MAX - 1)
+
+/* TC action piece */
+struct tcamsg {
+ unsigned char tca_family;
+ unsigned char tca__pad1;
+ unsigned short tca__pad2;
+};
+
+enum {
+ TCA_ROOT_UNSPEC,
+ TCA_ROOT_TAB,
+#define TCA_ACT_TAB TCA_ROOT_TAB
+#define TCAA_MAX TCA_ROOT_TAB
+ TCA_ROOT_FLAGS,
+ TCA_ROOT_COUNT,
+ TCA_ROOT_TIME_DELTA, /* in msecs */
+ TCA_ROOT_EXT_WARN_MSG,
+ __TCA_ROOT_MAX,
+#define TCA_ROOT_MAX (__TCA_ROOT_MAX - 1)
+};
+
+#define TA_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct tcamsg))))
+#define TA_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct tcamsg))
+/* tcamsg flags stored in attribute TCA_ROOT_FLAGS
+ *
+ * TCA_ACT_FLAG_LARGE_DUMP_ON: user->kernel request to dump more than
+ * TCA_ACT_MAX_PRIO actions in a dump. All dump responses will contain the
+ * number of actions being dumped, stored in TCA_ROOT_COUNT for the user
+ * app's consumption.
+ *
+ * TCA_ACT_FLAG_TERSE_DUMP user->kernel to request terse (brief) dump that only
+ * includes essential action info (kind, index, etc.)
+ *
+ */
+#define TCA_FLAG_LARGE_DUMP_ON (1 << 0)
+#define TCA_ACT_FLAG_LARGE_DUMP_ON TCA_FLAG_LARGE_DUMP_ON
+#define TCA_ACT_FLAG_TERSE_DUMP (1 << 1)
+
+/* New extended info filters for IFLA_EXT_MASK */
+#define RTEXT_FILTER_VF (1 << 0)
+#define RTEXT_FILTER_BRVLAN (1 << 1)
+#define RTEXT_FILTER_BRVLAN_COMPRESSED (1 << 2)
+#define RTEXT_FILTER_SKIP_STATS (1 << 3)
+#define RTEXT_FILTER_MRP (1 << 4)
+#define RTEXT_FILTER_CFM_CONFIG (1 << 5)
+#define RTEXT_FILTER_CFM_STATUS (1 << 6)
+#define RTEXT_FILTER_MST (1 << 7)
+
+/* End of information exported to user level */
+
+
+
+#endif /* _UAPI__LINUX_RTNETLINK_H */
diff --git a/tools/include/uapi/linux/stat.h b/tools/include/uapi/linux/stat.h
index 2f2ee82d5517..1686861aae20 100644
--- a/tools/include/uapi/linux/stat.h
+++ b/tools/include/uapi/linux/stat.h
@@ -98,36 +98,97 @@ struct statx_timestamp {
*/
struct statx {
/* 0x00 */
- __u32 stx_mask; /* What results were written [uncond] */
- __u32 stx_blksize; /* Preferred general I/O size [uncond] */
- __u64 stx_attributes; /* Flags conveying information about the file [uncond] */
+ /* What results were written [uncond] */
+ __u32 stx_mask;
+
+ /* Preferred general I/O size [uncond] */
+ __u32 stx_blksize;
+
+ /* Flags conveying information about the file [uncond] */
+ __u64 stx_attributes;
+
/* 0x10 */
- __u32 stx_nlink; /* Number of hard links */
- __u32 stx_uid; /* User ID of owner */
- __u32 stx_gid; /* Group ID of owner */
- __u16 stx_mode; /* File mode */
+ /* Number of hard links */
+ __u32 stx_nlink;
+
+ /* User ID of owner */
+ __u32 stx_uid;
+
+ /* Group ID of owner */
+ __u32 stx_gid;
+
+ /* File mode */
+ __u16 stx_mode;
__u16 __spare0[1];
+
/* 0x20 */
- __u64 stx_ino; /* Inode number */
- __u64 stx_size; /* File size */
- __u64 stx_blocks; /* Number of 512-byte blocks allocated */
- __u64 stx_attributes_mask; /* Mask to show what's supported in stx_attributes */
+ /* Inode number */
+ __u64 stx_ino;
+
+ /* File size */
+ __u64 stx_size;
+
+ /* Number of 512-byte blocks allocated */
+ __u64 stx_blocks;
+
+ /* Mask to show what's supported in stx_attributes */
+ __u64 stx_attributes_mask;
+
/* 0x40 */
- struct statx_timestamp stx_atime; /* Last access time */
- struct statx_timestamp stx_btime; /* File creation time */
- struct statx_timestamp stx_ctime; /* Last attribute change time */
- struct statx_timestamp stx_mtime; /* Last data modification time */
+ /* Last access time */
+ struct statx_timestamp stx_atime;
+
+ /* File creation time */
+ struct statx_timestamp stx_btime;
+
+ /* Last attribute change time */
+ struct statx_timestamp stx_ctime;
+
+ /* Last data modification time */
+ struct statx_timestamp stx_mtime;
+
/* 0x80 */
- __u32 stx_rdev_major; /* Device ID of special file [if bdev/cdev] */
+ /* Device ID of special file [if bdev/cdev] */
+ __u32 stx_rdev_major;
__u32 stx_rdev_minor;
- __u32 stx_dev_major; /* ID of device containing file [uncond] */
+
+ /* ID of device containing file [uncond] */
+ __u32 stx_dev_major;
__u32 stx_dev_minor;
+
/* 0x90 */
__u64 stx_mnt_id;
- __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */
- __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */
+
+ /* Memory buffer alignment for direct I/O */
+ __u32 stx_dio_mem_align;
+
+ /* File offset alignment for direct I/O */
+ __u32 stx_dio_offset_align;
+
/* 0xa0 */
- __u64 __spare3[12]; /* Spare space for future expansion */
+ /* Subvolume identifier */
+ __u64 stx_subvol;
+
+ /* Min atomic write unit in bytes */
+ __u32 stx_atomic_write_unit_min;
+
+ /* Max atomic write unit in bytes */
+ __u32 stx_atomic_write_unit_max;
+
+ /* 0xb0 */
+ /* Max atomic write segment count */
+ __u32 stx_atomic_write_segments_max;
+
+ /* File offset alignment for direct I/O reads */
+ __u32 stx_dio_read_offset_align;
+
+ /* Optimised max atomic write unit in bytes */
+ __u32 stx_atomic_write_unit_max_opt;
+ __u32 __spare2[1];
+
+ /* 0xc0 */
+ __u64 __spare3[8]; /* Spare space for future expansion */
+
/* 0x100 */
};
@@ -155,6 +216,9 @@ struct statx {
#define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */
#define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */
#define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */
+#define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */
+#define STATX_WRITE_ATOMIC 0x00010000U /* Want/got atomic_write_* fields */
+#define STATX_DIO_READ_ALIGN 0x00020000U /* Want/got dio read alignment info */
#define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */
@@ -190,6 +254,7 @@ struct statx {
#define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */
#define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */
#define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */
+#define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */
#endif /* _UAPI_LINUX_STAT_H */
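To show how the new mask bits pair with the new fields, a hedged statx(2) sketch follows; the path is a placeholder, and the constants require headers at least as new as this copy (glibc >= 2.28 provides the statx() wrapper).

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct statx stx;

	if (statx(AT_FDCWD, "/tmp/data.bin", 0,
		  STATX_BASIC_STATS | STATX_WRITE_ATOMIC | STATX_DIO_READ_ALIGN,
		  &stx) != 0)
		return 1;

	/* Only trust a field whose bit came back set in stx_mask. */
	if (stx.stx_mask & STATX_WRITE_ATOMIC)
		printf("atomic write unit: %u..%u bytes\n",
		       stx.stx_atomic_write_unit_min,
		       stx.stx_atomic_write_unit_max);
	if (stx.stx_mask & STATX_DIO_READ_ALIGN)
		printf("dio read offset align: %u\n",
		       stx.stx_dio_read_offset_align);
	return 0;
}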
diff --git a/tools/include/uapi/linux/stddef.h b/tools/include/uapi/linux/stddef.h
index bb6ea517efb5..c53cde425406 100644
--- a/tools/include/uapi/linux/stddef.h
+++ b/tools/include/uapi/linux/stddef.h
@@ -8,6 +8,13 @@
#define __always_inline __inline__
#endif
+/* Not all C++ standards support type declarations inside an anonymous union */
+#ifndef __cplusplus
+#define __struct_group_tag(TAG) TAG
+#else
+#define __struct_group_tag(TAG)
+#endif
+
/**
 * __struct_group() - Create a mirrored named and anonymous struct
*
@@ -20,14 +27,14 @@
* and size: one anonymous and one named. The former's members can be used
* normally without sub-struct naming, and the latter can be used to
* reason about the start, end, and size of the group of struct members.
- * The named struct can also be explicitly tagged for layer reuse, as well
- * as both having struct attributes appended.
+ * The named struct can also be explicitly tagged for layer reuse (C only),
+ * as well as both having struct attributes appended.
*/
#define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
union { \
struct { MEMBERS } ATTRS; \
- struct TAG { MEMBERS } ATTRS NAME; \
- }
+ struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \
+ } ATTRS
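As a usage illustration (struct and member names invented for the example), the macro lets a prefix of a structure be addressed both member-by-member and as one named unit:

#include <linux/stddef.h>
#include <linux/types.h>

struct packet {
	__struct_group(packet_hdr, hdr, /* no attrs */,
		__u8  type;
		__u8  flags;
		__u16 len;
	);
	__u8 payload[64];
};

/* memcpy(&pkt.hdr, src, sizeof(struct packet_hdr)) copies exactly the
 * header members, while pkt.type still works without the sub-name.
 * (The packet_hdr tag is the part that disappears when built as C++.)
 */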
/**
* __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
diff --git a/tools/include/uapi/linux/types.h b/tools/include/uapi/linux/types.h
index 91fa51a9c31d..85aa327245c6 100644
--- a/tools/include/uapi/linux/types.h
+++ b/tools/include/uapi/linux/types.h
@@ -4,6 +4,8 @@
#include <asm-generic/int-ll64.h>
+#ifndef __ASSEMBLER__
+
/* copied from linux:include/uapi/linux/types.h */
#define __bitwise
typedef __u16 __bitwise __le16;
@@ -20,4 +22,5 @@ typedef __u32 __bitwise __wsum;
#define __aligned_be64 __be64 __attribute__((aligned(8)))
#define __aligned_le64 __le64 __attribute__((aligned(8)))
+#endif /* __ASSEMBLER__ */
#endif /* _UAPI_LINUX_TYPES_H */
diff --git a/tools/include/uapi/linux/userfaultfd.h b/tools/include/uapi/linux/userfaultfd.h
new file mode 100644
index 000000000000..4283de22d5b6
--- /dev/null
+++ b/tools/include/uapi/linux/userfaultfd.h
@@ -0,0 +1,386 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * include/linux/userfaultfd.h
+ *
+ * Copyright (C) 2007 Davide Libenzi <davidel@xmailserver.org>
+ * Copyright (C) 2015 Red Hat, Inc.
+ *
+ */
+
+#ifndef _LINUX_USERFAULTFD_H
+#define _LINUX_USERFAULTFD_H
+
+#include <linux/types.h>
+
+/* ioctls for /dev/userfaultfd */
+#define USERFAULTFD_IOC 0xAA
+#define USERFAULTFD_IOC_NEW _IO(USERFAULTFD_IOC, 0x00)
+
+/*
+ * If the UFFDIO_API is upgraded someday, the UFFDIO_UNREGISTER and
+ * UFFDIO_WAKE ioctls should be defined as _IOW and not as _IOR. In
+ * userfaultfd.h we assumed the kernel was reading (whereas _IOC_READ
+ * actually means that userland is reading).
+ */
+#define UFFD_API ((__u64)0xAA)
+#define UFFD_API_REGISTER_MODES (UFFDIO_REGISTER_MODE_MISSING | \
+ UFFDIO_REGISTER_MODE_WP | \
+ UFFDIO_REGISTER_MODE_MINOR)
+#define UFFD_API_FEATURES (UFFD_FEATURE_PAGEFAULT_FLAG_WP | \
+ UFFD_FEATURE_EVENT_FORK | \
+ UFFD_FEATURE_EVENT_REMAP | \
+ UFFD_FEATURE_EVENT_REMOVE | \
+ UFFD_FEATURE_EVENT_UNMAP | \
+ UFFD_FEATURE_MISSING_HUGETLBFS | \
+ UFFD_FEATURE_MISSING_SHMEM | \
+ UFFD_FEATURE_SIGBUS | \
+ UFFD_FEATURE_THREAD_ID | \
+ UFFD_FEATURE_MINOR_HUGETLBFS | \
+ UFFD_FEATURE_MINOR_SHMEM | \
+ UFFD_FEATURE_EXACT_ADDRESS | \
+ UFFD_FEATURE_WP_HUGETLBFS_SHMEM | \
+ UFFD_FEATURE_WP_UNPOPULATED | \
+ UFFD_FEATURE_POISON | \
+ UFFD_FEATURE_WP_ASYNC | \
+ UFFD_FEATURE_MOVE)
+#define UFFD_API_IOCTLS \
+ ((__u64)1 << _UFFDIO_REGISTER | \
+ (__u64)1 << _UFFDIO_UNREGISTER | \
+ (__u64)1 << _UFFDIO_API)
+#define UFFD_API_RANGE_IOCTLS \
+ ((__u64)1 << _UFFDIO_WAKE | \
+ (__u64)1 << _UFFDIO_COPY | \
+ (__u64)1 << _UFFDIO_ZEROPAGE | \
+ (__u64)1 << _UFFDIO_MOVE | \
+ (__u64)1 << _UFFDIO_WRITEPROTECT | \
+ (__u64)1 << _UFFDIO_CONTINUE | \
+ (__u64)1 << _UFFDIO_POISON)
+#define UFFD_API_RANGE_IOCTLS_BASIC \
+ ((__u64)1 << _UFFDIO_WAKE | \
+ (__u64)1 << _UFFDIO_COPY | \
+ (__u64)1 << _UFFDIO_WRITEPROTECT | \
+ (__u64)1 << _UFFDIO_CONTINUE | \
+ (__u64)1 << _UFFDIO_POISON)
+
+/*
+ * Valid ioctl command number range with this API is from 0x00 to
+ * 0x3F. UFFDIO_API is the fixed number, everything else can be
+ * changed by implementing a different UFFD_API. If sticking to the
+ * same UFFD_API more ioctl can be added and userland will be aware of
+ * which ioctl the running kernel implements through the ioctl command
+ * bitmask written by the UFFDIO_API.
+ */
+#define _UFFDIO_REGISTER (0x00)
+#define _UFFDIO_UNREGISTER (0x01)
+#define _UFFDIO_WAKE (0x02)
+#define _UFFDIO_COPY (0x03)
+#define _UFFDIO_ZEROPAGE (0x04)
+#define _UFFDIO_MOVE (0x05)
+#define _UFFDIO_WRITEPROTECT (0x06)
+#define _UFFDIO_CONTINUE (0x07)
+#define _UFFDIO_POISON (0x08)
+#define _UFFDIO_API (0x3F)
+
+/* userfaultfd ioctl ids */
+#define UFFDIO 0xAA
+#define UFFDIO_API _IOWR(UFFDIO, _UFFDIO_API, \
+ struct uffdio_api)
+#define UFFDIO_REGISTER _IOWR(UFFDIO, _UFFDIO_REGISTER, \
+ struct uffdio_register)
+#define UFFDIO_UNREGISTER _IOR(UFFDIO, _UFFDIO_UNREGISTER, \
+ struct uffdio_range)
+#define UFFDIO_WAKE _IOR(UFFDIO, _UFFDIO_WAKE, \
+ struct uffdio_range)
+#define UFFDIO_COPY _IOWR(UFFDIO, _UFFDIO_COPY, \
+ struct uffdio_copy)
+#define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \
+ struct uffdio_zeropage)
+#define UFFDIO_MOVE _IOWR(UFFDIO, _UFFDIO_MOVE, \
+ struct uffdio_move)
+#define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
+ struct uffdio_writeprotect)
+#define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \
+ struct uffdio_continue)
+#define UFFDIO_POISON _IOWR(UFFDIO, _UFFDIO_POISON, \
+ struct uffdio_poison)
+
+/* read() structure */
+struct uffd_msg {
+ __u8 event;
+
+ __u8 reserved1;
+ __u16 reserved2;
+ __u32 reserved3;
+
+ union {
+ struct {
+ __u64 flags;
+ __u64 address;
+ union {
+ __u32 ptid;
+ } feat;
+ } pagefault;
+
+ struct {
+ __u32 ufd;
+ } fork;
+
+ struct {
+ __u64 from;
+ __u64 to;
+ __u64 len;
+ } remap;
+
+ struct {
+ __u64 start;
+ __u64 end;
+ } remove;
+
+ struct {
+ /* unused reserved fields */
+ __u64 reserved1;
+ __u64 reserved2;
+ __u64 reserved3;
+ } reserved;
+ } arg;
+} __attribute__((packed));
+
+/*
+ * Start at 0x12 and not at 0 to be more strict against bugs.
+ */
+#define UFFD_EVENT_PAGEFAULT 0x12
+#define UFFD_EVENT_FORK 0x13
+#define UFFD_EVENT_REMAP 0x14
+#define UFFD_EVENT_REMOVE 0x15
+#define UFFD_EVENT_UNMAP 0x16
+
+/* flags for UFFD_EVENT_PAGEFAULT */
+#define UFFD_PAGEFAULT_FLAG_WRITE (1<<0) /* If this was a write fault */
+#define UFFD_PAGEFAULT_FLAG_WP (1<<1) /* If reason is VM_UFFD_WP */
+#define UFFD_PAGEFAULT_FLAG_MINOR (1<<2) /* If reason is VM_UFFD_MINOR */
+
+struct uffdio_api {
+ /* userland asks for an API number and the features to enable */
+ __u64 api;
+ /*
+	 * Kernel answers below with all the available features for
+	 * the API; this notifies userland of which events and/or
+ * which flags for each event are enabled in the current
+ * kernel.
+ *
+ * Note: UFFD_EVENT_PAGEFAULT and UFFD_PAGEFAULT_FLAG_WRITE
+ * are to be considered implicitly always enabled in all kernels as
+ * long as the uffdio_api.api requested matches UFFD_API.
+ *
+ * UFFD_FEATURE_MISSING_HUGETLBFS means an UFFDIO_REGISTER
+ * with UFFDIO_REGISTER_MODE_MISSING mode will succeed on
+ * hugetlbfs virtual memory ranges. Adding or not adding
+ * UFFD_FEATURE_MISSING_HUGETLBFS to uffdio_api.features has
+ * no real functional effect after UFFDIO_API returns, but
+ * it's only useful for an initial feature set probe at
+ * UFFDIO_API time. There are two ways to use it:
+ *
+ * 1) by adding UFFD_FEATURE_MISSING_HUGETLBFS to the
+ * uffdio_api.features before calling UFFDIO_API, an error
+ * will be returned by UFFDIO_API on a kernel without
+ * hugetlbfs missing support
+ *
+ * 2) the UFFD_FEATURE_MISSING_HUGETLBFS can not be added in
+ * uffdio_api.features and instead it will be set by the
+ * kernel in the uffdio_api.features if the kernel supports
+ * it, so userland can later check if the feature flag is
+ * present in uffdio_api.features after UFFDIO_API
+ * succeeded.
+ *
+ * UFFD_FEATURE_MISSING_SHMEM works the same as
+ * UFFD_FEATURE_MISSING_HUGETLBFS, but it applies to shmem
+ * (i.e. tmpfs and other shmem based APIs).
+ *
+ * UFFD_FEATURE_SIGBUS feature means no page-fault
+ * (UFFD_EVENT_PAGEFAULT) event will be delivered, instead
+ * a SIGBUS signal will be sent to the faulting process.
+ *
+	 * UFFD_FEATURE_THREAD_ID means the pid of the page-faulting
+	 * task_struct will be returned; if the feature is not requested,
+	 * 0 will be returned.
+ *
+ * UFFD_FEATURE_MINOR_HUGETLBFS indicates that minor faults
+ * can be intercepted (via REGISTER_MODE_MINOR) for
+ * hugetlbfs-backed pages.
+ *
+ * UFFD_FEATURE_MINOR_SHMEM indicates the same support as
+ * UFFD_FEATURE_MINOR_HUGETLBFS, but for shmem-backed pages instead.
+ *
+ * UFFD_FEATURE_EXACT_ADDRESS indicates that the exact address of page
+ * faults would be provided and the offset within the page would not be
+ * masked.
+ *
+ * UFFD_FEATURE_WP_HUGETLBFS_SHMEM indicates that userfaultfd
+ * write-protection mode is supported on both shmem and hugetlbfs.
+ *
+ * UFFD_FEATURE_WP_UNPOPULATED indicates that userfaultfd
+ * write-protection mode will always apply to unpopulated pages
+ * (i.e. empty ptes). This will be the default behavior for shmem
+ * & hugetlbfs, so this flag only affects anonymous memory behavior
+ * when userfault write-protection mode is registered.
+ *
+ * UFFD_FEATURE_WP_ASYNC indicates that userfaultfd write-protection
+ * asynchronous mode is supported in which the write fault is
+ * automatically resolved and write-protection is un-set.
+ * It implies UFFD_FEATURE_WP_UNPOPULATED.
+ *
+ * UFFD_FEATURE_MOVE indicates that the kernel supports moving an
+ * existing page contents from userspace.
+ */
+#define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0)
+#define UFFD_FEATURE_EVENT_FORK (1<<1)
+#define UFFD_FEATURE_EVENT_REMAP (1<<2)
+#define UFFD_FEATURE_EVENT_REMOVE (1<<3)
+#define UFFD_FEATURE_MISSING_HUGETLBFS (1<<4)
+#define UFFD_FEATURE_MISSING_SHMEM (1<<5)
+#define UFFD_FEATURE_EVENT_UNMAP (1<<6)
+#define UFFD_FEATURE_SIGBUS (1<<7)
+#define UFFD_FEATURE_THREAD_ID (1<<8)
+#define UFFD_FEATURE_MINOR_HUGETLBFS (1<<9)
+#define UFFD_FEATURE_MINOR_SHMEM (1<<10)
+#define UFFD_FEATURE_EXACT_ADDRESS (1<<11)
+#define UFFD_FEATURE_WP_HUGETLBFS_SHMEM (1<<12)
+#define UFFD_FEATURE_WP_UNPOPULATED (1<<13)
+#define UFFD_FEATURE_POISON (1<<14)
+#define UFFD_FEATURE_WP_ASYNC (1<<15)
+#define UFFD_FEATURE_MOVE (1<<16)
+ __u64 features;
+
+ __u64 ioctls;
+};
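A hedged sketch of the probe flow described in the comment block above: open a userfaultfd, negotiate UFFD_API, and then check an optional feature bit that the kernel may have set.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static int uffd_open_and_probe(int *has_shmem_missing)
{
	struct uffdio_api api = { .api = UFFD_API, .features = 0 };
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);

	if (uffd < 0)
		return -1;
	if (ioctl(uffd, UFFDIO_API, &api) < 0) {
		close(uffd);
		return -1;
	}
	*has_shmem_missing = !!(api.features & UFFD_FEATURE_MISSING_SHMEM);
	return uffd;
}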
+
+struct uffdio_range {
+ __u64 start;
+ __u64 len;
+};
+
+struct uffdio_register {
+ struct uffdio_range range;
+#define UFFDIO_REGISTER_MODE_MISSING ((__u64)1<<0)
+#define UFFDIO_REGISTER_MODE_WP ((__u64)1<<1)
+#define UFFDIO_REGISTER_MODE_MINOR ((__u64)1<<2)
+ __u64 mode;
+
+ /*
+ * kernel answers which ioctl commands are available for the
+ * range, keep at the end as the last 8 bytes aren't read.
+ */
+ __u64 ioctls;
+};
+
+struct uffdio_copy {
+ __u64 dst;
+ __u64 src;
+ __u64 len;
+#define UFFDIO_COPY_MODE_DONTWAKE ((__u64)1<<0)
+ /*
+ * UFFDIO_COPY_MODE_WP will map the page write protected on
+ * the fly. UFFDIO_COPY_MODE_WP is available only if the
+ * write protected ioctl is implemented for the range
+ * according to the uffdio_register.ioctls.
+ */
+#define UFFDIO_COPY_MODE_WP ((__u64)1<<1)
+ __u64 mode;
+
+ /*
+ * "copy" is written by the ioctl and must be at the end: the
+ * copy_from_user will not read the last 8 bytes.
+ */
+ __s64 copy;
+};
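Resolving a missing-page fault then follows the pattern below; the addresses are placeholders and the page size would come from the surrounding program. This is a sketch, not the only valid sequence.

#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

static int resolve_missing(int uffd, unsigned long fault_addr,
			   void *src_page, unsigned long page_size)
{
	struct uffdio_copy copy = {
		.dst  = fault_addr & ~(page_size - 1),
		.src  = (unsigned long)src_page,
		.len  = page_size,
		.mode = 0,	/* or UFFDIO_COPY_MODE_DONTWAKE */
	};

	if (ioctl(uffd, UFFDIO_COPY, &copy) < 0)
		return -1;
	return copy.copy == (__s64)page_size ? 0 : -1;
}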
+
+struct uffdio_zeropage {
+ struct uffdio_range range;
+#define UFFDIO_ZEROPAGE_MODE_DONTWAKE ((__u64)1<<0)
+ __u64 mode;
+
+ /*
+ * "zeropage" is written by the ioctl and must be at the end:
+ * the copy_from_user will not read the last 8 bytes.
+ */
+ __s64 zeropage;
+};
+
+struct uffdio_writeprotect {
+ struct uffdio_range range;
+/*
+ * UFFDIO_WRITEPROTECT_MODE_WP: set the flag to write protect a range,
+ * unset the flag to undo protection of a range which was previously
+ * write protected.
+ *
+ * UFFDIO_WRITEPROTECT_MODE_DONTWAKE: set the flag to avoid waking up
+ * any wait thread after the operation succeeds.
+ *
+ * NOTE: Write protecting a region (WP=1) is unrelated to page faults,
+ * therefore DONTWAKE flag is meaningless with WP=1. Removing write
+ * protection (WP=0) in response to a page fault wakes the faulting
+ * task unless DONTWAKE is set.
+ */
+#define UFFDIO_WRITEPROTECT_MODE_WP ((__u64)1<<0)
+#define UFFDIO_WRITEPROTECT_MODE_DONTWAKE ((__u64)1<<1)
+ __u64 mode;
+};
+
+struct uffdio_continue {
+ struct uffdio_range range;
+#define UFFDIO_CONTINUE_MODE_DONTWAKE ((__u64)1<<0)
+ /*
+ * UFFDIO_CONTINUE_MODE_WP will map the page write protected on
+ * the fly. UFFDIO_CONTINUE_MODE_WP is available only if the
+ * write protected ioctl is implemented for the range
+ * according to the uffdio_register.ioctls.
+ */
+#define UFFDIO_CONTINUE_MODE_WP ((__u64)1<<1)
+ __u64 mode;
+
+ /*
+ * Fields below here are written by the ioctl and must be at the end:
+ * the copy_from_user will not read past here.
+ */
+ __s64 mapped;
+};
+
+struct uffdio_poison {
+ struct uffdio_range range;
+#define UFFDIO_POISON_MODE_DONTWAKE ((__u64)1<<0)
+ __u64 mode;
+
+ /*
+ * Fields below here are written by the ioctl and must be at the end:
+ * the copy_from_user will not read past here.
+ */
+ __s64 updated;
+};
+
+struct uffdio_move {
+ __u64 dst;
+ __u64 src;
+ __u64 len;
+ /*
+ * Especially if used to atomically remove memory from the
+ * address space the wake on the dst range is not needed.
+ */
+#define UFFDIO_MOVE_MODE_DONTWAKE ((__u64)1<<0)
+#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES ((__u64)1<<1)
+ __u64 mode;
+ /*
+ * "move" is written by the ioctl and must be at the end: the
+ * copy_from_user will not read the last 8 bytes.
+ */
+ __s64 move;
+};
+
+/*
+ * Flags for the userfaultfd(2) system call itself.
+ */
+
+/*
+ * Create a userfaultfd that can handle page faults only in user mode.
+ */
+#define UFFD_USER_MODE_ONLY 1
+
+#endif /* _LINUX_USERFAULTFD_H */
diff --git a/tools/include/vdso/unaligned.h b/tools/include/vdso/unaligned.h
new file mode 100644
index 000000000000..ff0c06b6513e
--- /dev/null
+++ b/tools/include/vdso/unaligned.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __VDSO_UNALIGNED_H
+#define __VDSO_UNALIGNED_H
+
+#define __get_unaligned_t(type, ptr) ({ \
+ const struct { type x; } __packed * __get_pptr = (typeof(__get_pptr))(ptr); \
+ __get_pptr->x; \
+})
+
+#define __put_unaligned_t(type, val, ptr) do { \
+ struct { type x; } __packed * __put_pptr = (typeof(__put_pptr))(ptr); \
+ __put_pptr->x = (val); \
+} while (0)
+
+#endif /* __VDSO_UNALIGNED_H */
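Usage is simply the macro plus an explicit scalar type; a hedged sketch, with __packed defined locally in case the tools' compiler.h definition is not already in scope (the include path depends on the build's -I flags).

#include <linux/types.h>

#ifndef __packed
#define __packed __attribute__((__packed__))
#endif
#include <vdso/unaligned.h>

/* Load/store a 32-bit value at a possibly misaligned buffer position
 * (byte order is whatever the buffer holds).
 */
static __u32 read_u32(const unsigned char *buf)
{
	return __get_unaligned_t(__u32, buf);
}

static void write_u32(unsigned char *buf, __u32 v)
{
	__put_unaligned_t(__u32, v, buf);
}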
diff --git a/tools/lib/api/Makefile b/tools/lib/api/Makefile
index 044860ac1ed1..8665c799e0fa 100644
--- a/tools/lib/api/Makefile
+++ b/tools/lib/api/Makefile
@@ -31,11 +31,7 @@ CFLAGS := $(EXTRA_WARNINGS) $(EXTRA_CFLAGS)
CFLAGS += -ggdb3 -Wall -Wextra -std=gnu99 -U_FORTIFY_SOURCE -fPIC
ifeq ($(DEBUG),0)
-ifeq ($(CC_NO_CLANG), 0)
CFLAGS += -O3
-else
- CFLAGS += -O6
-endif
endif
ifeq ($(DEBUG),0)
@@ -99,7 +95,7 @@ install_lib: $(LIBFILE)
$(call do_install_mkdir,$(libdir_SQ)); \
cp -fpR $(LIBFILE) $(DESTDIR)$(libdir_SQ)
-HDRS := cpu.h debug.h io.h
+HDRS := cpu.h debug.h io.h io_dir.h
FD_HDRS := fd/array.h
FS_HDRS := fs/fs.h fs/tracing_path.h
INSTALL_HDRS_PFX := $(DESTDIR)$(prefix)/include/api
diff --git a/tools/lib/api/fs/fs.c b/tools/lib/api/fs/fs.c
index 337fde770e45..edec23406dbc 100644
--- a/tools/lib/api/fs/fs.c
+++ b/tools/lib/api/fs/fs.c
@@ -296,7 +296,7 @@ int filename__read_int(const char *filename, int *value)
int fd = open(filename, O_RDONLY), err = -1;
if (fd < 0)
- return -1;
+ return -errno;
if (read(fd, line, sizeof(line)) > 0) {
*value = atoi(line);
@@ -314,7 +314,7 @@ static int filename__read_ull_base(const char *filename,
int fd = open(filename, O_RDONLY), err = -1;
if (fd < 0)
- return -1;
+ return -errno;
if (read(fd, line, sizeof(line)) > 0) {
*value = strtoull(line, NULL, base);
@@ -372,7 +372,7 @@ int filename__write_int(const char *filename, int value)
char buf[64];
if (fd < 0)
- return err;
+ return -errno;
sprintf(buf, "%d", value);
if (write(fd, buf, sizeof(buf)) == sizeof(buf))
diff --git a/tools/lib/api/fs/tracing_path.c b/tools/lib/api/fs/tracing_path.c
index 30745f35d0d2..834fd64c7130 100644
--- a/tools/lib/api/fs/tracing_path.c
+++ b/tools/lib/api/fs/tracing_path.c
@@ -69,7 +69,7 @@ char *get_tracing_file(const char *name)
{
char *file;
- if (asprintf(&file, "%s/%s", tracing_path_mount(), name) < 0)
+ if (asprintf(&file, "%s%s", tracing_path_mount(), name) < 0)
return NULL;
return file;
diff --git a/tools/lib/api/io.h b/tools/lib/api/io.h
index 84adf8102018..1731996b2c32 100644
--- a/tools/lib/api/io.h
+++ b/tools/lib/api/io.h
@@ -43,48 +43,55 @@ static inline void io__init(struct io *io, int fd,
io->eof = false;
}
-/* Reads one character from the "io" file with similar semantics to fgetc. */
-static inline int io__get_char(struct io *io)
+/* Read from fd filling the buffer. Called when io->data == io->end. */
+static inline int io__fill_buffer(struct io *io)
{
- char *ptr = io->data;
+ ssize_t n;
if (io->eof)
return -1;
- if (ptr == io->end) {
- ssize_t n;
-
- if (io->timeout_ms != 0) {
- struct pollfd pfds[] = {
- {
- .fd = io->fd,
- .events = POLLIN,
- },
- };
-
- n = poll(pfds, 1, io->timeout_ms);
- if (n == 0)
- errno = ETIMEDOUT;
- if (n > 0 && !(pfds[0].revents & POLLIN)) {
- errno = EIO;
- n = -1;
- }
- if (n <= 0) {
- io->eof = true;
- return -1;
- }
+ if (io->timeout_ms != 0) {
+ struct pollfd pfds[] = {
+ {
+ .fd = io->fd,
+ .events = POLLIN,
+ },
+ };
+
+ n = poll(pfds, 1, io->timeout_ms);
+ if (n == 0)
+ errno = ETIMEDOUT;
+ if (n > 0 && !(pfds[0].revents & POLLIN)) {
+ errno = EIO;
+ n = -1;
}
- n = read(io->fd, io->buf, io->buf_len);
-
if (n <= 0) {
io->eof = true;
return -1;
}
- ptr = &io->buf[0];
- io->end = &io->buf[n];
}
- io->data = ptr + 1;
- return *ptr;
+ n = read(io->fd, io->buf, io->buf_len);
+
+ if (n <= 0) {
+ io->eof = true;
+ return -1;
+ }
+ io->data = &io->buf[0];
+ io->end = &io->buf[n];
+ return 0;
+}
+
+/* Reads one character from the "io" file with similar semantics to fgetc. */
+static inline int io__get_char(struct io *io)
+{
+ if (io->data == io->end) {
+ int ret = io__fill_buffer(io);
+
+ if (ret)
+ return ret;
+ }
+ return *io->data++;
}
/* Read a hexadecimal value with no 0x prefix into the out argument hex. If the
@@ -182,6 +189,7 @@ static inline ssize_t io__getdelim(struct io *io, char **line_out, size_t *line_
err_out:
free(line);
*line_out = NULL;
+ *line_len_out = 0;
return -ENOMEM;
}
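
A usage sketch for the refactored reader, assuming the tools/lib include path so that <api/io.h> resolves and that io__init() takes the caller-supplied buffer and its length:

	#include <fcntl.h>
	#include <unistd.h>
	#include <api/io.h>

	/* Count newlines in a file, one io__get_char() at a time;
	 * io__fill_buffer() runs transparently whenever the buffer drains.
	 */
	static int count_lines(const char *path)
	{
		char buf[4096];
		struct io io;
		int c, lines = 0;
		int fd = open(path, O_RDONLY);

		if (fd < 0)
			return -1;

		io__init(&io, fd, buf, sizeof(buf));
		while ((c = io__get_char(&io)) != -1) {
			if (c == '\n')
				lines++;
		}
		close(fd);
		return lines;
	}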
diff --git a/tools/lib/api/io_dir.h b/tools/lib/api/io_dir.h
new file mode 100644
index 000000000000..ef83e967e48c
--- /dev/null
+++ b/tools/lib/api/io_dir.h
@@ -0,0 +1,105 @@
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
+/*
+ * Lightweight directory reading library.
+ */
+#ifndef __API_IO_DIR__
+#define __API_IO_DIR__
+
+#include <dirent.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/stat.h>
+#include <sys/syscall.h>
+#include <linux/limits.h>
+
+#if !defined(SYS_getdents64)
+#if defined(__x86_64__) || defined(__arm__)
+ #define SYS_getdents64 217
+#elif defined(__i386__) || defined(__s390x__) || defined(__sh__)
+ #define SYS_getdents64 220
+#elif defined(__alpha__)
+ #define SYS_getdents64 377
+#elif defined(__mips__)
+ #define SYS_getdents64 308
+#elif defined(__powerpc64__) || defined(__powerpc__)
+ #define SYS_getdents64 202
+#elif defined(__sparc64__) || defined(__sparc__)
+ #define SYS_getdents64 154
+#elif defined(__xtensa__)
+ #define SYS_getdents64 60
+#else
+ #define SYS_getdents64 61
+#endif
+#endif /* !defined(SYS_getdents64) */
+
+static inline ssize_t perf_getdents64(int fd, void *dirp, size_t count)
+{
+#ifdef MEMORY_SANITIZER
+ memset(dirp, 0, count);
+#endif
+ return syscall(SYS_getdents64, fd, dirp, count);
+}
+
+struct io_dirent64 {
+ ino64_t d_ino; /* 64-bit inode number */
+ off64_t d_off; /* 64-bit offset to next structure */
+ unsigned short d_reclen; /* Size of this dirent */
+ unsigned char d_type; /* File type */
+ char d_name[NAME_MAX + 1]; /* Filename (null-terminated) */
+};
+
+struct io_dir {
+ int dirfd;
+ ssize_t available_bytes;
+ struct io_dirent64 *next;
+ struct io_dirent64 buff[4];
+};
+
+static inline void io_dir__init(struct io_dir *iod, int dirfd)
+{
+ iod->dirfd = dirfd;
+ iod->available_bytes = 0;
+}
+
+static inline void io_dir__rewinddir(struct io_dir *iod)
+{
+ lseek(iod->dirfd, 0, SEEK_SET);
+ iod->available_bytes = 0;
+}
+
+static inline struct io_dirent64 *io_dir__readdir(struct io_dir *iod)
+{
+ struct io_dirent64 *entry;
+
+ if (iod->available_bytes <= 0) {
+ ssize_t rc = perf_getdents64(iod->dirfd, iod->buff, sizeof(iod->buff));
+
+ if (rc <= 0)
+ return NULL;
+ iod->available_bytes = rc;
+ iod->next = iod->buff;
+ }
+ entry = iod->next;
+ iod->next = (struct io_dirent64 *)((char *)entry + entry->d_reclen);
+ iod->available_bytes -= entry->d_reclen;
+ return entry;
+}
+
+static inline bool io_dir__is_dir(const struct io_dir *iod, struct io_dirent64 *dent)
+{
+ if (dent->d_type == DT_UNKNOWN) {
+ struct stat st;
+
+ if (fstatat(iod->dirfd, dent->d_name, &st, /*flags=*/0))
+ return false;
+
+ if (S_ISDIR(st.st_mode)) {
+ dent->d_type = DT_DIR;
+ return true;
+ }
+ }
+ return dent->d_type == DT_DIR;
+}
+
+#endif /* __API_IO_DIR__ */
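
A usage sketch for the new helper; the include path and the _GNU_SOURCE define are assumptions of this example, not requirements stated by the header:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <api/io_dir.h>

	/* Print the subdirectories of 'path' without the heap allocations that
	 * opendir()/readdir() would perform: entries land in io_dir's embedded
	 * buffer of four io_dirent64 records.
	 */
	static int list_subdirs(const char *path)
	{
		struct io_dirent64 *ent;
		struct io_dir dir;
		int dirfd = open(path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);

		if (dirfd < 0)
			return -1;

		io_dir__init(&dir, dirfd);
		while ((ent = io_dir__readdir(&dir)) != NULL) {
			if (!strcmp(ent->d_name, ".") || !strcmp(ent->d_name, ".."))
				continue;
			if (io_dir__is_dir(&dir, ent))
				puts(ent->d_name);
		}
		close(dirfd);
		return 0;
	}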
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index c3e4871967bc..51255c69754d 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -100,3 +100,43 @@ bool __bitmap_intersects(const unsigned long *bitmap1,
return true;
return false;
}
+
+void __bitmap_set(unsigned long *map, unsigned int start, int len)
+{
+ unsigned long *p = map + BIT_WORD(start);
+ const unsigned int size = start + len;
+ int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
+ unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
+
+ while (len - bits_to_set >= 0) {
+ *p |= mask_to_set;
+ len -= bits_to_set;
+ bits_to_set = BITS_PER_LONG;
+ mask_to_set = ~0UL;
+ p++;
+ }
+ if (len) {
+ mask_to_set &= BITMAP_LAST_WORD_MASK(size);
+ *p |= mask_to_set;
+ }
+}
+
+void __bitmap_clear(unsigned long *map, unsigned int start, int len)
+{
+ unsigned long *p = map + BIT_WORD(start);
+ const unsigned int size = start + len;
+ int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
+ unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
+
+ while (len - bits_to_clear >= 0) {
+ *p &= ~mask_to_clear;
+ len -= bits_to_clear;
+ bits_to_clear = BITS_PER_LONG;
+ mask_to_clear = ~0UL;
+ p++;
+ }
+ if (len) {
+ mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
+ *p &= ~mask_to_clear;
+ }
+}
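
A quick illustration of the two new helpers, assuming a tools build where tools/include/linux/bitmap.h declares them and bitmap.c is linked in:

	#include <stdio.h>
	#include <linux/bitmap.h>

	int main(void)
	{
		unsigned long map[1] = { 0 };

		__bitmap_set(map, 8, 16);	/* bits 8..23 -> 1          */
		__bitmap_clear(map, 12, 4);	/* bits 12..15 -> back to 0 */
		printf("%#lx\n", map[0]);	/* 0xff0f00 on 64-bit       */
		return 0;
	}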
diff --git a/tools/lib/bpf/.gitignore b/tools/lib/bpf/.gitignore
index 0da84cb9e66d..f02725b123b3 100644
--- a/tools/lib/bpf/.gitignore
+++ b/tools/lib/bpf/.gitignore
@@ -5,3 +5,4 @@ TAGS
tags
cscope.*
/bpf_helper_defs.h
+fixdep
diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build
index 2d0c282c8588..c80204bb72a2 100644
--- a/tools/lib/bpf/Build
+++ b/tools/lib/bpf/Build
@@ -1,4 +1,4 @@
-libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
+libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_utils.o \
netlink.o bpf_prog_linfo.o libbpf_probes.o hashmap.o \
btf_dump.o ringbuf.o strset.o linker.o gen_loader.o relo_core.o \
- usdt.o zip.o elf.o
+ usdt.o zip.o elf.o features.o btf_iter.o btf_relocate.o
diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
index 4be7144e4803..168140f8e646 100644
--- a/tools/lib/bpf/Makefile
+++ b/tools/lib/bpf/Makefile
@@ -2,7 +2,7 @@
# Most of this file is copied from tools/lib/traceevent/Makefile
RM ?= rm
-srctree = $(abs_srctree)
+srctree := $(realpath $(srctree))
VERSION_SCRIPT := libbpf.map
LIBBPF_VERSION := $(shell \
@@ -53,15 +53,9 @@ include $(srctree)/tools/scripts/Makefile.include
# copy a bit from Linux kbuild
-ifeq ("$(origin V)", "command line")
- VERBOSE = $(V)
-endif
-ifndef VERBOSE
- VERBOSE = 0
-endif
-
INCLUDES = -I$(or $(OUTPUT),.) \
- -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi
+ -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi \
+ -I$(srctree)/tools/arch/$(SRCARCH)/include
export prefix libdir src obj
@@ -95,12 +89,6 @@ override CFLAGS += $(CLANG_CROSS_FLAGS)
# flags specific for shared library
SHLIB_FLAGS := -DSHARED -fPIC
-ifeq ($(VERBOSE),1)
- Q =
-else
- Q = @
-endif
-
# Disable command line variables (CFLAGS) override from top
# level Makefile (perf), otherwise build Makefile will get
# the same command line setup.
@@ -108,6 +96,8 @@ MAKEOVERRIDES=
all:
+OUTPUT ?= ./
+OUTPUT := $(abspath $(OUTPUT))/
export srctree OUTPUT CC LD CFLAGS V
include $(srctree)/tools/build/Makefile.include
@@ -141,7 +131,10 @@ all: fixdep
all_cmd: $(CMD_TARGETS) check
-$(BPF_IN_SHARED): force $(BPF_GENERATED)
+$(SHARED_OBJDIR) $(STATIC_OBJDIR):
+ $(Q)mkdir -p $@
+
+$(BPF_IN_SHARED): force $(BPF_GENERATED) | $(SHARED_OBJDIR)
@(test -f ../../include/uapi/linux/bpf.h -a -f ../../../include/uapi/linux/bpf.h && ( \
(diff -B ../../include/uapi/linux/bpf.h ../../../include/uapi/linux/bpf.h >/dev/null) || \
echo "Warning: Kernel ABI header at 'tools/include/uapi/linux/bpf.h' differs from latest version at 'include/uapi/linux/bpf.h'" >&2 )) || true
@@ -151,9 +144,11 @@ $(BPF_IN_SHARED): force $(BPF_GENERATED)
@(test -f ../../include/uapi/linux/if_xdp.h -a -f ../../../include/uapi/linux/if_xdp.h && ( \
(diff -B ../../include/uapi/linux/if_xdp.h ../../../include/uapi/linux/if_xdp.h >/dev/null) || \
echo "Warning: Kernel ABI header at 'tools/include/uapi/linux/if_xdp.h' differs from latest version at 'include/uapi/linux/if_xdp.h'" >&2 )) || true
+ $(SILENT_MAKE) -C $(srctree)/tools/build CFLAGS= LDFLAGS= OUTPUT=$(SHARED_OBJDIR) $(SHARED_OBJDIR)fixdep
$(Q)$(MAKE) $(build)=libbpf OUTPUT=$(SHARED_OBJDIR) CFLAGS="$(CFLAGS) $(SHLIB_FLAGS)"
-$(BPF_IN_STATIC): force $(BPF_GENERATED)
+$(BPF_IN_STATIC): force $(BPF_GENERATED) | $(STATIC_OBJDIR)
+ $(SILENT_MAKE) -C $(srctree)/tools/build CFLAGS= LDFLAGS= OUTPUT=$(STATIC_OBJDIR) $(STATIC_OBJDIR)fixdep
$(Q)$(MAKE) $(build)=libbpf OUTPUT=$(STATIC_OBJDIR)
$(BPF_HELPER_DEFS): $(srctree)/tools/include/uapi/linux/bpf.h
@@ -263,7 +258,7 @@ install_pkgconfig: $(PC_FILE)
install: install_lib install_pkgconfig install_headers
-clean:
+clean: fixdep-clean
$(call QUIET_CLEAN, libbpf) $(RM) -rf $(CMD_TARGETS) \
*~ .*.d .*.cmd LIBBPF-CFLAGS $(BPF_GENERATED) \
$(SHARED_OBJDIR) $(STATIC_OBJDIR) \
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 9dc9625651dc..339b19797237 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -103,9 +103,9 @@ int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts)
* [0] https://lore.kernel.org/bpf/20201201215900.3569844-1-guro@fb.com/
* [1] d05512618056 ("bpf: Add bpf_ktime_get_coarse_ns helper")
*/
-int probe_memcg_account(void)
+int probe_memcg_account(int token_fd)
{
- const size_t attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
+ const size_t attr_sz = offsetofend(union bpf_attr, prog_token_fd);
struct bpf_insn insns[] = {
BPF_EMIT_CALL(BPF_FUNC_ktime_get_coarse_ns),
BPF_EXIT_INSN(),
@@ -120,6 +120,9 @@ int probe_memcg_account(void)
attr.insns = ptr_to_u64(insns);
attr.insn_cnt = insn_cnt;
attr.license = ptr_to_u64("GPL");
+ attr.prog_token_fd = token_fd;
+ if (token_fd)
+ attr.prog_flags |= BPF_F_TOKEN_FD;
prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, attr_sz);
if (prog_fd >= 0) {
@@ -146,7 +149,7 @@ int bump_rlimit_memlock(void)
struct rlimit rlim;
/* if kernel supports memcg-based accounting, skip bumping RLIMIT_MEMLOCK */
- if (memlock_bumped || kernel_supports(NULL, FEAT_MEMCG_ACCOUNT))
+ if (memlock_bumped || feat_supported(NULL, FEAT_MEMCG_ACCOUNT))
return 0;
memlock_bumped = true;
@@ -169,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
__u32 max_entries,
const struct bpf_map_create_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, map_extra);
+ const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash_size);
union bpf_attr attr;
int fd;
@@ -181,7 +184,7 @@ int bpf_map_create(enum bpf_map_type map_type,
return libbpf_err(-EINVAL);
attr.map_type = map_type;
- if (map_name && kernel_supports(NULL, FEAT_PROG_NAME))
+ if (map_name && feat_supported(NULL, FEAT_PROG_NAME))
libbpf_strlcpy(attr.map_name, map_name, sizeof(attr.map_name));
attr.key_size = key_size;
attr.value_size = value_size;
@@ -191,6 +194,7 @@ int bpf_map_create(enum bpf_map_type map_type,
attr.btf_key_type_id = OPTS_GET(opts, btf_key_type_id, 0);
attr.btf_value_type_id = OPTS_GET(opts, btf_value_type_id, 0);
attr.btf_vmlinux_value_type_id = OPTS_GET(opts, btf_vmlinux_value_type_id, 0);
+ attr.value_type_btf_obj_fd = OPTS_GET(opts, value_type_btf_obj_fd, 0);
attr.inner_map_fd = OPTS_GET(opts, inner_map_fd, 0);
attr.map_flags = OPTS_GET(opts, map_flags, 0);
@@ -198,6 +202,10 @@ int bpf_map_create(enum bpf_map_type map_type,
attr.numa_node = OPTS_GET(opts, numa_node, 0);
attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
+ attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
+ attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
+ attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
+
fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
return libbpf_err_errno(fd);
}
@@ -232,7 +240,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
const struct bpf_insn *insns, size_t insn_cnt,
struct bpf_prog_load_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, log_true_size);
+ const size_t attr_sz = offsetofend(union bpf_attr, keyring_id);
void *finfo = NULL, *linfo = NULL;
const char *func_info, *line_info;
__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
@@ -261,8 +269,9 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
attr.prog_flags = OPTS_GET(opts, prog_flags, 0);
attr.prog_ifindex = OPTS_GET(opts, prog_ifindex, 0);
attr.kern_version = OPTS_GET(opts, kern_version, 0);
+ attr.prog_token_fd = OPTS_GET(opts, token_fd, 0);
- if (prog_name && kernel_supports(NULL, FEAT_PROG_NAME))
+ if (prog_name && feat_supported(NULL, FEAT_PROG_NAME))
libbpf_strlcpy(attr.prog_name, prog_name, sizeof(attr.prog_name));
attr.license = ptr_to_u64(license);
@@ -304,6 +313,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
attr.line_info_cnt = OPTS_GET(opts, line_info_cnt, 0);
attr.fd_array = ptr_to_u64(OPTS_GET(opts, fd_array, NULL));
+ attr.fd_array_cnt = OPTS_GET(opts, fd_array_cnt, 0);
if (log_level) {
attr.log_buf = ptr_to_u64(log_buf);
@@ -759,6 +769,7 @@ int bpf_link_create(int prog_fd, int target_fd,
return libbpf_err(-EINVAL);
break;
case BPF_TRACE_KPROBE_MULTI:
+ case BPF_TRACE_KPROBE_SESSION:
attr.link_create.kprobe_multi.flags = OPTS_GET(opts, kprobe_multi.flags, 0);
attr.link_create.kprobe_multi.cnt = OPTS_GET(opts, kprobe_multi.cnt, 0);
attr.link_create.kprobe_multi.syms = ptr_to_u64(OPTS_GET(opts, kprobe_multi.syms, 0));
@@ -768,6 +779,7 @@ int bpf_link_create(int prog_fd, int target_fd,
return libbpf_err(-EINVAL);
break;
case BPF_TRACE_UPROBE_MULTI:
+ case BPF_TRACE_UPROBE_SESSION:
attr.link_create.uprobe_multi.flags = OPTS_GET(opts, uprobe_multi.flags, 0);
attr.link_create.uprobe_multi.cnt = OPTS_GET(opts, uprobe_multi.cnt, 0);
attr.link_create.uprobe_multi.path = ptr_to_u64(OPTS_GET(opts, uprobe_multi.path, 0));
@@ -778,6 +790,7 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, uprobe_multi))
return libbpf_err(-EINVAL);
break;
+ case BPF_TRACE_RAW_TP:
case BPF_TRACE_FENTRY:
case BPF_TRACE_FEXIT:
case BPF_MODIFY_RETURN:
@@ -826,6 +839,50 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, netkit))
return libbpf_err(-EINVAL);
break;
+ case BPF_CGROUP_INET_INGRESS:
+ case BPF_CGROUP_INET_EGRESS:
+ case BPF_CGROUP_INET_SOCK_CREATE:
+ case BPF_CGROUP_INET_SOCK_RELEASE:
+ case BPF_CGROUP_INET4_BIND:
+ case BPF_CGROUP_INET6_BIND:
+ case BPF_CGROUP_INET4_POST_BIND:
+ case BPF_CGROUP_INET6_POST_BIND:
+ case BPF_CGROUP_INET4_CONNECT:
+ case BPF_CGROUP_INET6_CONNECT:
+ case BPF_CGROUP_UNIX_CONNECT:
+ case BPF_CGROUP_INET4_GETPEERNAME:
+ case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
+ case BPF_CGROUP_INET4_GETSOCKNAME:
+ case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
+ case BPF_CGROUP_UDP4_SENDMSG:
+ case BPF_CGROUP_UDP6_SENDMSG:
+ case BPF_CGROUP_UNIX_SENDMSG:
+ case BPF_CGROUP_UDP4_RECVMSG:
+ case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
+ case BPF_CGROUP_SOCK_OPS:
+ case BPF_CGROUP_DEVICE:
+ case BPF_CGROUP_SYSCTL:
+ case BPF_CGROUP_GETSOCKOPT:
+ case BPF_CGROUP_SETSOCKOPT:
+ case BPF_LSM_CGROUP:
+ relative_fd = OPTS_GET(opts, cgroup.relative_fd, 0);
+ relative_id = OPTS_GET(opts, cgroup.relative_id, 0);
+ if (relative_fd && relative_id)
+ return libbpf_err(-EINVAL);
+ if (relative_id) {
+ attr.link_create.cgroup.relative_id = relative_id;
+ attr.link_create.flags |= BPF_F_ID;
+ } else {
+ attr.link_create.cgroup.relative_fd = relative_fd;
+ }
+ attr.link_create.cgroup.expected_revision =
+ OPTS_GET(opts, cgroup.expected_revision, 0);
+ if (!OPTS_ZEROED(opts, cgroup))
+ return libbpf_err(-EINVAL);
+ break;
default:
if (!OPTS_ZEROED(opts, flags))
return libbpf_err(-EINVAL);
@@ -1086,7 +1143,7 @@ int bpf_map_get_fd_by_id(__u32 id)
int bpf_btf_get_fd_by_id_opts(__u32 id,
const struct bpf_get_fd_by_id_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
+ const size_t attr_sz = offsetofend(union bpf_attr, fd_by_id_token_fd);
union bpf_attr attr;
int fd;
@@ -1096,6 +1153,7 @@ int bpf_btf_get_fd_by_id_opts(__u32 id,
memset(&attr, 0, attr_sz);
attr.btf_id = id;
attr.open_flags = OPTS_GET(opts, open_flags, 0);
+ attr.fd_by_id_token_fd = OPTS_GET(opts, token_fd, 0);
fd = sys_bpf_fd(BPF_BTF_GET_FD_BY_ID, &attr, attr_sz);
return libbpf_err_errno(fd);
@@ -1166,23 +1224,34 @@ int bpf_link_get_info_by_fd(int link_fd, struct bpf_link_info *info, __u32 *info
return bpf_obj_get_info_by_fd(link_fd, info, info_len);
}
-int bpf_raw_tracepoint_open(const char *name, int prog_fd)
+int bpf_raw_tracepoint_open_opts(int prog_fd, struct bpf_raw_tp_opts *opts)
{
const size_t attr_sz = offsetofend(union bpf_attr, raw_tracepoint);
union bpf_attr attr;
int fd;
+ if (!OPTS_VALID(opts, bpf_raw_tp_opts))
+ return libbpf_err(-EINVAL);
+
memset(&attr, 0, attr_sz);
- attr.raw_tracepoint.name = ptr_to_u64(name);
attr.raw_tracepoint.prog_fd = prog_fd;
+ attr.raw_tracepoint.name = ptr_to_u64(OPTS_GET(opts, tp_name, NULL));
+ attr.raw_tracepoint.cookie = OPTS_GET(opts, cookie, 0);
fd = sys_bpf_fd(BPF_RAW_TRACEPOINT_OPEN, &attr, attr_sz);
return libbpf_err_errno(fd);
}
+int bpf_raw_tracepoint_open(const char *name, int prog_fd)
+{
+ LIBBPF_OPTS(bpf_raw_tp_opts, opts, .tp_name = name);
+
+ return bpf_raw_tracepoint_open_opts(prog_fd, &opts);
+}
+
int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, btf_log_true_size);
+ const size_t attr_sz = offsetofend(union bpf_attr, btf_token_fd);
union bpf_attr attr;
char *log_buf;
size_t log_size;
@@ -1207,6 +1276,10 @@ int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts
attr.btf = ptr_to_u64(btf_data);
attr.btf_size = btf_size;
+
+ attr.btf_flags = OPTS_GET(opts, btf_flags, 0);
+ attr.btf_token_fd = OPTS_GET(opts, token_fd, 0);
+
/* log_level == 0 and log_buf != NULL means "try loading without
* log_buf, but retry with log_buf and log_level=1 on error", which is
* consistent across low-level and high-level BTF and program loading
@@ -1287,3 +1360,40 @@ int bpf_prog_bind_map(int prog_fd, int map_fd,
ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, attr_sz);
return libbpf_err_errno(ret);
}
+
+int bpf_token_create(int bpffs_fd, struct bpf_token_create_opts *opts)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, token_create);
+ union bpf_attr attr;
+ int fd;
+
+ if (!OPTS_VALID(opts, bpf_token_create_opts))
+ return libbpf_err(-EINVAL);
+
+ memset(&attr, 0, attr_sz);
+ attr.token_create.bpffs_fd = bpffs_fd;
+ attr.token_create.flags = OPTS_GET(opts, flags, 0);
+
+ fd = sys_bpf_fd(BPF_TOKEN_CREATE, &attr, attr_sz);
+ return libbpf_err_errno(fd);
+}
+
+int bpf_prog_stream_read(int prog_fd, __u32 stream_id, void *buf, __u32 buf_len,
+ struct bpf_prog_stream_read_opts *opts)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, prog_stream_read);
+ union bpf_attr attr;
+ int err;
+
+ if (!OPTS_VALID(opts, bpf_prog_stream_read_opts))
+ return libbpf_err(-EINVAL);
+
+ memset(&attr, 0, attr_sz);
+ attr.prog_stream_read.stream_buf = ptr_to_u64(buf);
+ attr.prog_stream_read.stream_buf_len = buf_len;
+ attr.prog_stream_read.stream_id = stream_id;
+ attr.prog_stream_read.prog_fd = prog_fd;
+
+ err = sys_bpf(BPF_PROG_STREAM_READ_BY_FD, &attr, attr_sz);
+ return libbpf_err_errno(err);
+}
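
A caller-side sketch of the opts-based raw tracepoint attach introduced above; the tracepoint name and cookie value are placeholders:

	#include <bpf/bpf.h>

	/* Attach prog_fd to a raw tracepoint and tag the link with a cookie
	 * that the program can later retrieve with bpf_get_attach_cookie().
	 */
	static int attach_raw_tp(int prog_fd)
	{
		LIBBPF_OPTS(bpf_raw_tp_opts, opts,
			.tp_name = "sched_switch",
			.cookie = 0x1234,
		);

		return bpf_raw_tracepoint_open_opts(prog_fd, &opts);
	}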
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index d0f53772bdc0..e983a3e40d61 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -35,7 +35,7 @@
extern "C" {
#endif
-int libbpf_set_memlock_rlim(size_t memlock_bytes);
+LIBBPF_API int libbpf_set_memlock_rlim(size_t memlock_bytes);
struct bpf_map_create_opts {
size_t sz; /* size of this struct for forward/backward compatibility */
@@ -51,8 +51,15 @@ struct bpf_map_create_opts {
__u32 numa_node;
__u32 map_ifindex;
+ __s32 value_type_btf_obj_fd;
+
+ __u32 token_fd;
+
+ const void *excl_prog_hash;
+ __u32 excl_prog_hash_size;
+ size_t :0;
};
-#define bpf_map_create_opts__last_field map_ifindex
+#define bpf_map_create_opts__last_field excl_prog_hash_size
LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
const char *map_name,
@@ -96,15 +103,19 @@ struct bpf_prog_load_opts {
__u32 log_level;
__u32 log_size;
char *log_buf;
- /* output: actual total log contents size (including termintaing zero).
+ /* output: actual total log contents size (including terminating zero).
* It could be both larger than original log_size (if log was
* truncated), or smaller (if log buffer wasn't filled completely).
* If kernel doesn't support this feature, log_size is left unchanged.
*/
__u32 log_true_size;
+ __u32 token_fd;
+
+ /* if set, provides the length of fd_array */
+ __u32 fd_array_cnt;
size_t :0;
};
-#define bpf_prog_load_opts__last_field log_true_size
+#define bpf_prog_load_opts__last_field fd_array_cnt
LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type,
const char *prog_name, const char *license,
@@ -124,15 +135,18 @@ struct bpf_btf_load_opts {
char *log_buf;
__u32 log_level;
__u32 log_size;
- /* output: actual total log contents size (including termintaing zero).
+ /* output: actual total log contents size (including terminating zero).
* It could be both larger than original log_size (if log was
* truncated), or smaller (if log buffer wasn't filled completely).
* If kernel doesn't support this feature, log_size is left unchanged.
*/
__u32 log_true_size;
+
+ __u32 btf_flags;
+ __u32 token_fd;
size_t :0;
};
-#define bpf_btf_load_opts__last_field log_true_size
+#define bpf_btf_load_opts__last_field token_fd
LIBBPF_API int bpf_btf_load(const void *btf_data, size_t btf_size,
struct bpf_btf_load_opts *opts);
@@ -182,10 +196,14 @@ LIBBPF_API int bpf_map_delete_batch(int fd, const void *keys,
/**
* @brief **bpf_map_lookup_batch()** allows for batch lookup of BPF map elements.
*
- * The parameter *in_batch* is the address of the first element in the batch to read.
- * *out_batch* is an output parameter that should be passed as *in_batch* to subsequent
- * calls to **bpf_map_lookup_batch()**. NULL can be passed for *in_batch* to indicate
- * that the batched lookup starts from the beginning of the map.
+ * The parameter *in_batch* is the address of the first element in the batch to
+ * read. *out_batch* is an output parameter that should be passed as *in_batch*
+ * to subsequent calls to **bpf_map_lookup_batch()**. NULL can be passed for
+ * *in_batch* to indicate that the batched lookup starts from the beginning of
+ * the map. Both *in_batch* and *out_batch* must point to memory large enough to
+ * hold a single key, except for maps of type **BPF_MAP_TYPE_{HASH, PERCPU_HASH,
+ * LRU_HASH, LRU_PERCPU_HASH}**, for which the memory size must be at
+ * least 4 bytes wide regardless of key size.
*
* The *keys* and *values* are output parameters which must point to memory large enough to
* hold *count* items based on the key and value size of the map *map_fd*. The *keys*
@@ -218,7 +236,10 @@ LIBBPF_API int bpf_map_lookup_batch(int fd, void *in_batch, void *out_batch,
*
* @param fd BPF map file descriptor
* @param in_batch address of the first element in batch to read, can pass NULL to
- * get address of the first element in *out_batch*
+ * get address of the first element in *out_batch*. If not NULL, must be large
+ * enough to hold a key. For **BPF_MAP_TYPE_{HASH, PERCPU_HASH, LRU_HASH,
+ * LRU_PERCPU_HASH}**, the memory size must be at least 4 bytes wide regardless
+ * of key size.
* @param out_batch output parameter that should be passed to next call as *in_batch*
* @param keys pointer to an array of *count* keys
* @param values pointer to an array large enough for *count* values
@@ -420,6 +441,11 @@ struct bpf_link_create_opts {
__u32 relative_id;
__u64 expected_revision;
} netkit;
+ struct {
+ __u32 relative_fd;
+ __u32 relative_id;
+ __u64 expected_revision;
+ } cgroup;
};
size_t :0;
};
@@ -469,9 +495,10 @@ LIBBPF_API int bpf_link_get_next_id(__u32 start_id, __u32 *next_id);
struct bpf_get_fd_by_id_opts {
size_t sz; /* size of this struct for forward/backward compatibility */
__u32 open_flags; /* permissions requested for the operation on fd */
+ __u32 token_fd;
size_t :0;
};
-#define bpf_get_fd_by_id_opts__last_field open_flags
+#define bpf_get_fd_by_id_opts__last_field token_fd
LIBBPF_API int bpf_prog_get_fd_by_id(__u32 id);
LIBBPF_API int bpf_prog_get_fd_by_id_opts(__u32 id,
@@ -492,7 +519,10 @@ LIBBPF_API int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len);
* program corresponding to *prog_fd*.
*
* Populates up to *info_len* bytes of *info* and updates *info_len* with the
- * actual number of bytes written to *info*.
+ * actual number of bytes written to *info*. Note that *info* should be
+ * zero-initialized or initialized as expected by the requested *info*
+ * type. Failing to (zero-)initialize *info* under certain circumstances can
+ * result in this helper returning an error.
*
* @param prog_fd BPF program file descriptor
* @param info pointer to **struct bpf_prog_info** that will be populated with
@@ -509,7 +539,10 @@ LIBBPF_API int bpf_prog_get_info_by_fd(int prog_fd, struct bpf_prog_info *info,
* map corresponding to *map_fd*.
*
* Populates up to *info_len* bytes of *info* and updates *info_len* with the
- * actual number of bytes written to *info*.
+ * actual number of bytes written to *info*. Note that *info* should be
+ * zero-initialized or initialized as expected by the requested *info*
+ * type. Failing to (zero-)initialize *info* under certain circumstances can
+ * result in this helper returning an error.
*
* @param map_fd BPF map file descriptor
* @param info pointer to **struct bpf_map_info** that will be populated with
@@ -522,11 +555,14 @@ LIBBPF_API int bpf_prog_get_info_by_fd(int prog_fd, struct bpf_prog_info *info,
LIBBPF_API int bpf_map_get_info_by_fd(int map_fd, struct bpf_map_info *info, __u32 *info_len);
/**
- * @brief **bpf_btf_get_info_by_fd()** obtains information about the
+ * @brief **bpf_btf_get_info_by_fd()** obtains information about the
* BTF object corresponding to *btf_fd*.
*
* Populates up to *info_len* bytes of *info* and updates *info_len* with the
- * actual number of bytes written to *info*.
+ * actual number of bytes written to *info*. Note that *info* should be
+ * zero-initialized or initialized as expected by the requested *info*
+ * type. Failing to (zero-)initialize *info* under certain circumstances can
+ * result in this helper returning an error.
*
* @param btf_fd BTF object file descriptor
* @param info pointer to **struct bpf_btf_info** that will be populated with
@@ -543,7 +579,10 @@ LIBBPF_API int bpf_btf_get_info_by_fd(int btf_fd, struct bpf_btf_info *info, __u
* link corresponding to *link_fd*.
*
* Populates up to *info_len* bytes of *info* and updates *info_len* with the
- * actual number of bytes written to *info*.
+ * actual number of bytes written to *info*. Note that *info* should be
+ * zero-initialized or initialized as expected by the requested *info*
+ * type. Failing to (zero-)initialize *info* under certain circumstances can
+ * result in this helper returning an error.
*
* @param link_fd BPF link file descriptor
* @param info pointer to **struct bpf_link_info** that will be populated with
@@ -590,6 +629,15 @@ LIBBPF_API int bpf_prog_query(int target_fd, enum bpf_attach_type type,
__u32 query_flags, __u32 *attach_flags,
__u32 *prog_ids, __u32 *prog_cnt);
+struct bpf_raw_tp_opts {
+ size_t sz; /* size of this struct for forward/backward compatibility */
+ const char *tp_name;
+ __u64 cookie;
+ size_t :0;
+};
+#define bpf_raw_tp_opts__last_field cookie
+
+LIBBPF_API int bpf_raw_tracepoint_open_opts(int prog_fd, struct bpf_raw_tp_opts *opts);
LIBBPF_API int bpf_raw_tracepoint_open(const char *name, int prog_fd);
LIBBPF_API int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf,
__u32 *buf_len, __u32 *prog_id, __u32 *fd_type,
@@ -640,6 +688,51 @@ struct bpf_test_run_opts {
LIBBPF_API int bpf_prog_test_run_opts(int prog_fd,
struct bpf_test_run_opts *opts);
+struct bpf_token_create_opts {
+ size_t sz; /* size of this struct for forward/backward compatibility */
+ __u32 flags;
+ size_t :0;
+};
+#define bpf_token_create_opts__last_field flags
+
+/**
+ * @brief **bpf_token_create()** creates a new instance of BPF token derived
+ * from specified BPF FS mount point.
+ *
+ * A BPF token created with this API can be passed to the bpf() syscall for
+ * commands such as BPF_PROG_LOAD, BPF_MAP_CREATE, etc.
+ *
+ * @param bpffs_fd FD for BPF FS instance from which to derive a BPF token
+ * instance.
+ * @param opts optional BPF token creation options, can be NULL
+ *
+ * @return BPF token FD > 0, on success; negative error code, otherwise (errno
+ * is also set to the error code)
+ */
+LIBBPF_API int bpf_token_create(int bpffs_fd,
+ struct bpf_token_create_opts *opts);
+
+struct bpf_prog_stream_read_opts {
+ size_t sz;
+ size_t :0;
+};
+#define bpf_prog_stream_read_opts__last_field sz
+/**
+ * @brief **bpf_prog_stream_read()** reads data from the BPF stream of a given BPF
+ * program.
+ *
+ * @param prog_fd FD for the BPF program whose BPF stream is to be read.
+ * @param stream_id ID of the BPF stream to be read.
+ * @param buf Buffer to read data into from the BPF stream.
+ * @param buf_len Maximum number of bytes to read from the BPF stream.
+ * @param opts optional options, can be NULL
+ *
+ * @return The number of bytes read, on success; negative error code, otherwise
+ * (errno is also set to the error code)
+ */
+LIBBPF_API int bpf_prog_stream_read(int prog_fd, __u32 stream_id, void *buf, __u32 buf_len,
+ struct bpf_prog_stream_read_opts *opts);
+
#ifdef __cplusplus
} /* extern "C" */
#endif
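
A sketch of the token flow documented above; the "/sys/fs/bpf" path and the read-only open are assumptions about how a delegated bpffs instance is reachable in a given setup:

	#include <errno.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <bpf/bpf.h>

	/* Derive a BPF token from a bpffs mount and use it for a BTF load
	 * that would otherwise be refused in an unprivileged user namespace.
	 */
	static int load_btf_with_token(const void *btf_data, size_t btf_size)
	{
		LIBBPF_OPTS(bpf_btf_load_opts, opts);
		int bpffs_fd, token_fd, btf_fd;

		bpffs_fd = open("/sys/fs/bpf", O_RDONLY);
		if (bpffs_fd < 0)
			return -errno;

		token_fd = bpf_token_create(bpffs_fd, NULL);
		close(bpffs_fd);
		if (token_fd < 0)
			return token_fd;

		opts.token_fd = token_fd;
		btf_fd = bpf_btf_load(btf_data, btf_size, &opts);
		close(token_fd);
		return btf_fd;
	}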
diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index 7325a12692a3..b997c68bd945 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -2,6 +2,8 @@
#ifndef __BPF_CORE_READ_H__
#define __BPF_CORE_READ_H__
+#include "bpf_helpers.h"
+
/*
* enum bpf_field_info_kind is passed as a second argument into
* __builtin_preserve_field_info() built-in to get a specific aspect of
@@ -44,7 +46,7 @@ enum bpf_enum_value_kind {
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define __CORE_BITFIELD_PROBE_READ(dst, src, fld) \
bpf_probe_read_kernel( \
- (void *)dst, \
+ (void *)dst, \
__CORE_RELO(src, fld, BYTE_SIZE), \
(const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
#else
@@ -102,6 +104,7 @@ enum bpf_enum_value_kind {
case 2: val = *(const unsigned short *)p; break; \
case 4: val = *(const unsigned int *)p; break; \
case 8: val = *(const unsigned long long *)p; break; \
+ default: val = 0; break; \
} \
val <<= __CORE_RELO(s, field, LSHIFT_U64); \
if (__CORE_RELO(s, field, SIGNED)) \
@@ -143,8 +146,29 @@ enum bpf_enum_value_kind {
} \
})
+/* Differentiator between compiler builtin implementations. This is needed
+ * because of parsing differences between the compilers: GCC optimizes these
+ * constructs into pointers to the builtin-specific type early during
+ * parsing, which makes it impossible to collect the required type
+ * information at builtin expansion time.
+ */
+#ifdef __clang__
+#define ___bpf_typeof(type) ((typeof(type) *) 0)
+#else
+#define ___bpf_typeof1(type, NR) ({ \
+ extern typeof(type) *___concat(bpf_type_tmp_, NR); \
+ ___concat(bpf_type_tmp_, NR); \
+})
+#define ___bpf_typeof(type) ___bpf_typeof1(type, __COUNTER__)
+#endif
+
+#ifdef __clang__
#define ___bpf_field_ref1(field) (field)
-#define ___bpf_field_ref2(type, field) (((typeof(type) *)0)->field)
+#define ___bpf_field_ref2(type, field) (___bpf_typeof(type)->field)
+#else
+#define ___bpf_field_ref1(field) (&(field))
+#define ___bpf_field_ref2(type, field) (&(___bpf_typeof(type)->field))
+#endif
#define ___bpf_field_ref(args...) \
___bpf_apply(___bpf_field_ref, ___bpf_narg(args))(args)
@@ -194,7 +218,7 @@ enum bpf_enum_value_kind {
* BTF. Always succeeds.
*/
#define bpf_core_type_id_local(type) \
- __builtin_btf_type_id(*(typeof(type) *)0, BPF_TYPE_ID_LOCAL)
+ __builtin_btf_type_id(*___bpf_typeof(type), BPF_TYPE_ID_LOCAL)
/*
* Convenience macro to get BTF type ID of a target kernel's type that matches
@@ -204,7 +228,7 @@ enum bpf_enum_value_kind {
* - 0, if no matching type was found in a target kernel BTF.
*/
#define bpf_core_type_id_kernel(type) \
- __builtin_btf_type_id(*(typeof(type) *)0, BPF_TYPE_ID_TARGET)
+ __builtin_btf_type_id(*___bpf_typeof(type), BPF_TYPE_ID_TARGET)
/*
* Convenience macro to check that provided named type
@@ -214,7 +238,7 @@ enum bpf_enum_value_kind {
* 0, if no matching type is found.
*/
#define bpf_core_type_exists(type) \
- __builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_EXISTS)
+ __builtin_preserve_type_info(*___bpf_typeof(type), BPF_TYPE_EXISTS)
/*
* Convenience macro to check that provided named type
@@ -224,7 +248,7 @@ enum bpf_enum_value_kind {
* 0, if the type does not match any in the target kernel
*/
#define bpf_core_type_matches(type) \
- __builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_MATCHES)
+ __builtin_preserve_type_info(*___bpf_typeof(type), BPF_TYPE_MATCHES)
/*
* Convenience macro to get the byte size of a provided named type
@@ -234,7 +258,7 @@ enum bpf_enum_value_kind {
* 0, if no matching type is found.
*/
#define bpf_core_type_size(type) \
- __builtin_preserve_type_info(*(typeof(type) *)0, BPF_TYPE_SIZE)
+ __builtin_preserve_type_info(*___bpf_typeof(type), BPF_TYPE_SIZE)
/*
* Convenience macro to check that provided enumerator value is defined in
@@ -244,8 +268,13 @@ enum bpf_enum_value_kind {
* kernel's BTF;
* 0, if no matching enum and/or enum value within that enum is found.
*/
+#ifdef __clang__
#define bpf_core_enum_value_exists(enum_type, enum_value) \
__builtin_preserve_enum_value(*(typeof(enum_type) *)enum_value, BPF_ENUMVAL_EXISTS)
+#else
+#define bpf_core_enum_value_exists(enum_type, enum_value) \
+ __builtin_preserve_enum_value(___bpf_typeof(enum_type), enum_value, BPF_ENUMVAL_EXISTS)
+#endif
/*
* Convenience macro to get the integer value of an enumerator value in
@@ -255,8 +284,13 @@ enum bpf_enum_value_kind {
* present in target kernel's BTF;
* 0, if no matching enum and/or enum value within that enum is found.
*/
+#ifdef __clang__
#define bpf_core_enum_value(enum_type, enum_value) \
__builtin_preserve_enum_value(*(typeof(enum_type) *)enum_value, BPF_ENUMVAL_VALUE)
+#else
+#define bpf_core_enum_value(enum_type, enum_value) \
+ __builtin_preserve_enum_value(___bpf_typeof(enum_type), enum_value, BPF_ENUMVAL_VALUE)
+#endif
/*
* bpf_core_read() abstracts away bpf_probe_read_kernel() call and captures
@@ -268,7 +302,7 @@ enum bpf_enum_value_kind {
* a relocation, which records BTF type ID describing root struct/union and an
* accessor string which describes exact embedded field that was used to take
* an address. See detailed description of this relocation format and
- * semantics in comments to struct bpf_field_reloc in libbpf_internal.h.
+ * semantics in comments to struct bpf_core_relo in include/uapi/linux/bpf.h.
*
* This relocation allows libbpf to adjust BPF instruction to use correct
* actual field offset, based on target kernel BTF type that matches original
@@ -292,6 +326,17 @@ enum bpf_enum_value_kind {
#define bpf_core_read_user_str(dst, sz, src) \
bpf_probe_read_user_str(dst, sz, (const void *)__builtin_preserve_access_index(src))
+extern void *bpf_rdonly_cast(const void *obj, __u32 btf_id) __ksym __weak;
+
+/*
+ * Cast provided pointer *ptr* into a pointer to a specified *type* in such
+ * a way that BPF verifier will become aware of associated kernel-side BTF
+ * type. This allows members of kernel types to be accessed directly without
+ * the need for BPF_CORE_READ() macros.
+ */
+#define bpf_core_cast(ptr, type) \
+ ((typeof(type) *)bpf_rdonly_cast((ptr), bpf_core_type_id_kernel(type)))
+
#define ___concat(a, b) a ## b
#define ___apply(fn, n) ___concat(fn, n)
#define ___nth(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, __11, N, ...) N
@@ -343,7 +388,13 @@ enum bpf_enum_value_kind {
#define ___arrow10(a, b, c, d, e, f, g, h, i, j) a->b->c->d->e->f->g->h->i->j
#define ___arrow(...) ___apply(___arrow, ___narg(__VA_ARGS__))(__VA_ARGS__)
+#if defined(__clang__) && (__clang_major__ >= 19)
+#define ___type(...) __typeof_unqual__(___arrow(__VA_ARGS__))
+#elif defined(__GNUC__) && (__GNUC__ >= 14)
+#define ___type(...) __typeof_unqual__(___arrow(__VA_ARGS__))
+#else
#define ___type(...) typeof(___arrow(__VA_ARGS__))
+#endif
#define ___read(read_fn, dst, src_type, src, accessor) \
read_fn((void *)(dst), sizeof(*(dst)), &((src_type)(src))->accessor)
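
A BPF-side sketch of bpf_core_cast() from the hunk above; the vmlinux.h header and the kprobe attach point are assumptions of the example:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_core_read.h>

	SEC("kprobe/do_nanosleep")
	int log_current_pid(struct pt_regs *ctx)
	{
		/* bpf_get_current_task() returns an untyped value;
		 * bpf_core_cast() re-types it so the verifier tracks it as a
		 * kernel struct task_struct and fields can be read directly.
		 */
		struct task_struct *task =
			bpf_core_cast((void *)bpf_get_current_task(), struct task_struct);

		bpf_printk("pid=%d", task->pid);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";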
diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
index fdf44403ff36..49af4260b8e6 100644
--- a/tools/lib/bpf/bpf_gen_internal.h
+++ b/tools/lib/bpf/bpf_gen_internal.h
@@ -4,6 +4,7 @@
#define __BPF_GEN_INTERNAL_H
#include "bpf.h"
+#include "libbpf_internal.h"
struct ksym_relo_desc {
const char *name;
@@ -34,6 +35,7 @@ struct bpf_gen {
void *data_cur;
void *insn_start;
void *insn_cur;
+ bool swapped_endian;
ssize_t cleanup_label;
__u32 nr_progs;
__u32 nr_maps;
@@ -49,6 +51,7 @@ struct bpf_gen {
__u32 nr_ksyms;
int fd_array;
int nr_fd_array;
+ int hash_insn_offset[SHA256_DWORD_SIZE];
};
void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps);
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index 2324cc42b017..80c028540656 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -13,6 +13,15 @@
#define __uint(name, val) int (*name)[val]
#define __type(name, val) typeof(val) *name
#define __array(name, val) typeof(val) *name[]
+#define __ulong(name, val) enum { ___bpf_concat(__unique_value, __COUNTER__) = val } name
+
+#ifndef likely
+#define likely(x) (__builtin_expect(!!(x), 1))
+#endif
+
+#ifndef unlikely
+#define unlikely(x) (__builtin_expect(!!(x), 0))
+#endif
/*
* Helper macro to place programs, maps, license in
@@ -136,7 +145,8 @@
/*
* Helper function to perform a tail call with a constant/immediate map slot.
*/
-#if __clang_major__ >= 8 && defined(__bpf__)
+#if (defined(__clang__) && __clang_major__ >= 8) || (!defined(__clang__) && __GNUC__ > 12)
+#if defined(__bpf__)
static __always_inline void
bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
{
@@ -164,6 +174,7 @@ bpf_tail_call_static(void *ctx, const void *map, const __u32 slot)
: "r0", "r1", "r2", "r3", "r4", "r5");
}
#endif
+#endif
enum libbpf_pin_type {
LIBBPF_PIN_NONE,
@@ -182,14 +193,30 @@ enum libbpf_tristate {
#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
#define __kptr __attribute__((btf_type_tag("kptr")))
#define __percpu_kptr __attribute__((btf_type_tag("percpu_kptr")))
+#define __uptr __attribute__((btf_type_tag("uptr")))
-#define bpf_ksym_exists(sym) ({ \
- _Static_assert(!__builtin_constant_p(!!sym), #sym " should be marked as __weak"); \
- !!sym; \
+#if defined (__clang__)
+#define bpf_ksym_exists(sym) ({ \
+ _Static_assert(!__builtin_constant_p(!!sym), \
+ #sym " should be marked as __weak"); \
+ !!sym; \
})
+#elif __GNUC__ > 8
+#define bpf_ksym_exists(sym) ({ \
+ _Static_assert(__builtin_has_attribute (*sym, __weak__), \
+ #sym " should be marked as __weak"); \
+ !!sym; \
+})
+#else
+#define bpf_ksym_exists(sym) !!sym
+#endif
#define __arg_ctx __attribute__((btf_decl_tag("arg:ctx")))
#define __arg_nonnull __attribute((btf_decl_tag("arg:nonnull")))
+#define __arg_nullable __attribute((btf_decl_tag("arg:nullable")))
+#define __arg_trusted __attribute((btf_decl_tag("arg:trusted")))
+#define __arg_untrusted __attribute((btf_decl_tag("arg:untrusted")))
+#define __arg_arena __attribute((btf_decl_tag("arg:arena")))
#ifndef ___bpf_concat
#define ___bpf_concat(a, b) a ## b
@@ -288,6 +315,22 @@ enum libbpf_tristate {
___param, sizeof(___param)); \
})
+extern int bpf_stream_vprintk(int stream_id, const char *fmt__str, const void *args,
+ __u32 len__sz, void *aux__prog) __weak __ksym;
+
+#define bpf_stream_printk(stream_id, fmt, args...) \
+({ \
+ static const char ___fmt[] = fmt; \
+ unsigned long long ___param[___bpf_narg(args)]; \
+ \
+ _Pragma("GCC diagnostic push") \
+ _Pragma("GCC diagnostic ignored \"-Wint-conversion\"") \
+ ___bpf_fill(___param, args); \
+ _Pragma("GCC diagnostic pop") \
+ \
+ bpf_stream_vprintk(stream_id, ___fmt, ___param, sizeof(___param), NULL);\
+})
+
/* Use __bpf_printk when bpf_printk call has 3 or fewer fmt args
* Otherwise use __bpf_vprintk
*/
@@ -324,7 +367,7 @@ extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __weak __ksym;
* I.e., it looks almost like high-level for each loop in other languages,
* supports continue/break, and is verifiable by BPF verifier.
*
- * For iterating integers, the difference betwen bpf_for_each(num, i, N, M)
+ * For iterating integers, the difference between bpf_for_each(num, i, N, M)
* and bpf_for(i, N, M) is in that bpf_for() provides additional proof to
* verifier that i is in [N, M) range, and in bpf_for_each() case i is `int
* *`, not just `int`. So for integers bpf_for() is more convenient.
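
A short BPF-side sketch tying together the map-definition macros and the bpf_tail_call_static() helper guarded above; the XDP section names are illustrative:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	struct {
		__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
		__uint(max_entries, 2);
		__uint(key_size, sizeof(__u32));
		__uint(value_size, sizeof(__u32));
	} jmp_table SEC(".maps");

	SEC("xdp")
	int xdp_entry(struct xdp_md *ctx)
	{
		/* The slot must be a compile-time constant, which lets the
		 * compiler emit the immediate-index tail call form.
		 */
		bpf_tail_call_static(ctx, &jmp_table, 0);
		/* Reached only if slot 0 of jmp_table is empty. */
		return XDP_PASS;
	}

	char LICENSE[] SEC("license") = "GPL";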
diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
index 1c13f8e88833..a8f6cd4841b0 100644
--- a/tools/lib/bpf/bpf_tracing.h
+++ b/tools/lib/bpf/bpf_tracing.h
@@ -163,7 +163,7 @@
struct pt_regs___s390 {
unsigned long orig_gpr2;
-};
+} __attribute__((preserve_access_index));
/* s390 provides user_pt_regs instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const user_pt_regs *)(x))
@@ -179,7 +179,7 @@ struct pt_regs___s390 {
#define __PT_PARM4_SYSCALL_REG __PT_PARM4_REG
#define __PT_PARM5_SYSCALL_REG __PT_PARM5_REG
#define __PT_PARM6_SYSCALL_REG gprs[7]
-#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1_CORE_SYSCALL(x)
+#define PT_REGS_PARM1_SYSCALL(x) (((const struct pt_regs___s390 *)(x))->__PT_PARM1_SYSCALL_REG)
#define PT_REGS_PARM1_CORE_SYSCALL(x) \
BPF_CORE_READ((const struct pt_regs___s390 *)(x), __PT_PARM1_SYSCALL_REG)
@@ -222,7 +222,7 @@ struct pt_regs___s390 {
struct pt_regs___arm64 {
unsigned long orig_x0;
-};
+} __attribute__((preserve_access_index));
/* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const struct user_pt_regs *)(x))
@@ -241,7 +241,7 @@ struct pt_regs___arm64 {
#define __PT_PARM4_SYSCALL_REG __PT_PARM4_REG
#define __PT_PARM5_SYSCALL_REG __PT_PARM5_REG
#define __PT_PARM6_SYSCALL_REG __PT_PARM6_REG
-#define PT_REGS_PARM1_SYSCALL(x) PT_REGS_PARM1_CORE_SYSCALL(x)
+#define PT_REGS_PARM1_SYSCALL(x) (((const struct pt_regs___arm64 *)(x))->__PT_PARM1_SYSCALL_REG)
#define PT_REGS_PARM1_CORE_SYSCALL(x) \
BPF_CORE_READ((const struct pt_regs___arm64 *)(x), __PT_PARM1_SYSCALL_REG)
@@ -351,6 +351,10 @@ struct pt_regs___arm64 {
* https://github.com/riscv-non-isa/riscv-elf-psabi-doc/blob/master/riscv-cc.adoc#risc-v-calling-conventions
*/
+struct pt_regs___riscv {
+ unsigned long orig_a0;
+} __attribute__((preserve_access_index));
+
/* riscv provides struct user_regs_struct instead of struct pt_regs to userspace */
#define __PT_REGS_CAST(x) ((const struct user_regs_struct *)(x))
#define __PT_PARM1_REG a0
@@ -362,12 +366,15 @@ struct pt_regs___arm64 {
#define __PT_PARM7_REG a6
#define __PT_PARM8_REG a7
-#define __PT_PARM1_SYSCALL_REG __PT_PARM1_REG
+#define __PT_PARM1_SYSCALL_REG orig_a0
#define __PT_PARM2_SYSCALL_REG __PT_PARM2_REG
#define __PT_PARM3_SYSCALL_REG __PT_PARM3_REG
#define __PT_PARM4_SYSCALL_REG __PT_PARM4_REG
#define __PT_PARM5_SYSCALL_REG __PT_PARM5_REG
#define __PT_PARM6_SYSCALL_REG __PT_PARM6_REG
+#define PT_REGS_PARM1_SYSCALL(x) (((const struct pt_regs___riscv *)(x))->__PT_PARM1_SYSCALL_REG)
+#define PT_REGS_PARM1_CORE_SYSCALL(x) \
+ BPF_CORE_READ((const struct pt_regs___riscv *)(x), __PT_PARM1_SYSCALL_REG)
#define __PT_RET_REG ra
#define __PT_FP_REG s0
@@ -473,7 +480,7 @@ struct pt_regs;
#endif
/*
* Similarly, syscall-specific conventions might differ between function call
- * conventions within each architecutre. All supported architectures pass
+ * conventions within each architecture. All supported architectures pass
* either 6 or 7 syscall arguments in registers.
*
* See syscall(2) manpage for succinct table with information on each arch.
@@ -515,7 +522,7 @@ struct pt_regs;
#define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ (ip) = (ctx)->link; })
#define BPF_KRETPROBE_READ_RET_IP BPF_KPROBE_READ_RET_IP
-#elif defined(bpf_target_sparc)
+#elif defined(bpf_target_sparc) || defined(bpf_target_arm64)
#define BPF_KPROBE_READ_RET_IP(ip, ctx) ({ (ip) = PT_REGS_RET(ctx); })
#define BPF_KRETPROBE_READ_RET_IP BPF_KPROBE_READ_RET_IP
@@ -633,25 +640,25 @@ struct pt_regs;
#endif
#define ___bpf_ctx_cast0() ctx
-#define ___bpf_ctx_cast1(x) ___bpf_ctx_cast0(), (void *)ctx[0]
-#define ___bpf_ctx_cast2(x, args...) ___bpf_ctx_cast1(args), (void *)ctx[1]
-#define ___bpf_ctx_cast3(x, args...) ___bpf_ctx_cast2(args), (void *)ctx[2]
-#define ___bpf_ctx_cast4(x, args...) ___bpf_ctx_cast3(args), (void *)ctx[3]
-#define ___bpf_ctx_cast5(x, args...) ___bpf_ctx_cast4(args), (void *)ctx[4]
-#define ___bpf_ctx_cast6(x, args...) ___bpf_ctx_cast5(args), (void *)ctx[5]
-#define ___bpf_ctx_cast7(x, args...) ___bpf_ctx_cast6(args), (void *)ctx[6]
-#define ___bpf_ctx_cast8(x, args...) ___bpf_ctx_cast7(args), (void *)ctx[7]
-#define ___bpf_ctx_cast9(x, args...) ___bpf_ctx_cast8(args), (void *)ctx[8]
-#define ___bpf_ctx_cast10(x, args...) ___bpf_ctx_cast9(args), (void *)ctx[9]
-#define ___bpf_ctx_cast11(x, args...) ___bpf_ctx_cast10(args), (void *)ctx[10]
-#define ___bpf_ctx_cast12(x, args...) ___bpf_ctx_cast11(args), (void *)ctx[11]
+#define ___bpf_ctx_cast1(x) ___bpf_ctx_cast0(), ctx[0]
+#define ___bpf_ctx_cast2(x, args...) ___bpf_ctx_cast1(args), ctx[1]
+#define ___bpf_ctx_cast3(x, args...) ___bpf_ctx_cast2(args), ctx[2]
+#define ___bpf_ctx_cast4(x, args...) ___bpf_ctx_cast3(args), ctx[3]
+#define ___bpf_ctx_cast5(x, args...) ___bpf_ctx_cast4(args), ctx[4]
+#define ___bpf_ctx_cast6(x, args...) ___bpf_ctx_cast5(args), ctx[5]
+#define ___bpf_ctx_cast7(x, args...) ___bpf_ctx_cast6(args), ctx[6]
+#define ___bpf_ctx_cast8(x, args...) ___bpf_ctx_cast7(args), ctx[7]
+#define ___bpf_ctx_cast9(x, args...) ___bpf_ctx_cast8(args), ctx[8]
+#define ___bpf_ctx_cast10(x, args...) ___bpf_ctx_cast9(args), ctx[9]
+#define ___bpf_ctx_cast11(x, args...) ___bpf_ctx_cast10(args), ctx[10]
+#define ___bpf_ctx_cast12(x, args...) ___bpf_ctx_cast11(args), ctx[11]
#define ___bpf_ctx_cast(args...) ___bpf_apply(___bpf_ctx_cast, ___bpf_narg(args))(args)
/*
* BPF_PROG is a convenience wrapper for generic tp_btf/fentry/fexit and
* similar kinds of BPF programs, that accept input arguments as a single
* pointer to untyped u64 array, where each u64 can actually be a typed
- * pointer or integer of different size. Instead of requring user to write
+ * pointer or integer of different size. Instead of requiring user to write
* manual casts and work with array elements by index, BPF_PROG macro
* allows user to declare a list of named and typed input arguments in the
* same syntax as for normal C function. All the casting is hidden and
@@ -786,14 +793,14 @@ ____##name(unsigned long long *ctx ___bpf_ctx_decl(args))
struct pt_regs;
#define ___bpf_kprobe_args0() ctx
-#define ___bpf_kprobe_args1(x) ___bpf_kprobe_args0(), (void *)PT_REGS_PARM1(ctx)
-#define ___bpf_kprobe_args2(x, args...) ___bpf_kprobe_args1(args), (void *)PT_REGS_PARM2(ctx)
-#define ___bpf_kprobe_args3(x, args...) ___bpf_kprobe_args2(args), (void *)PT_REGS_PARM3(ctx)
-#define ___bpf_kprobe_args4(x, args...) ___bpf_kprobe_args3(args), (void *)PT_REGS_PARM4(ctx)
-#define ___bpf_kprobe_args5(x, args...) ___bpf_kprobe_args4(args), (void *)PT_REGS_PARM5(ctx)
-#define ___bpf_kprobe_args6(x, args...) ___bpf_kprobe_args5(args), (void *)PT_REGS_PARM6(ctx)
-#define ___bpf_kprobe_args7(x, args...) ___bpf_kprobe_args6(args), (void *)PT_REGS_PARM7(ctx)
-#define ___bpf_kprobe_args8(x, args...) ___bpf_kprobe_args7(args), (void *)PT_REGS_PARM8(ctx)
+#define ___bpf_kprobe_args1(x) ___bpf_kprobe_args0(), (unsigned long long)PT_REGS_PARM1(ctx)
+#define ___bpf_kprobe_args2(x, args...) ___bpf_kprobe_args1(args), (unsigned long long)PT_REGS_PARM2(ctx)
+#define ___bpf_kprobe_args3(x, args...) ___bpf_kprobe_args2(args), (unsigned long long)PT_REGS_PARM3(ctx)
+#define ___bpf_kprobe_args4(x, args...) ___bpf_kprobe_args3(args), (unsigned long long)PT_REGS_PARM4(ctx)
+#define ___bpf_kprobe_args5(x, args...) ___bpf_kprobe_args4(args), (unsigned long long)PT_REGS_PARM5(ctx)
+#define ___bpf_kprobe_args6(x, args...) ___bpf_kprobe_args5(args), (unsigned long long)PT_REGS_PARM6(ctx)
+#define ___bpf_kprobe_args7(x, args...) ___bpf_kprobe_args6(args), (unsigned long long)PT_REGS_PARM7(ctx)
+#define ___bpf_kprobe_args8(x, args...) ___bpf_kprobe_args7(args), (unsigned long long)PT_REGS_PARM8(ctx)
#define ___bpf_kprobe_args(args...) ___bpf_apply(___bpf_kprobe_args, ___bpf_narg(args))(args)
/*
@@ -801,7 +808,7 @@ struct pt_regs;
* tp_btf/fentry/fexit BPF programs. It hides the underlying platform-specific
* low-level way of getting kprobe input arguments from struct pt_regs, and
* provides a familiar typed and named function arguments syntax and
- * semantics of accessing kprobe input paremeters.
+ * semantics of accessing kprobe input parameters.
*
* Original struct pt_regs* context is preserved as 'ctx' argument. This might
* be necessary when using BPF helpers like bpf_perf_event_output().
@@ -821,7 +828,7 @@ static __always_inline typeof(name(0)) \
____##name(struct pt_regs *ctx, ##args)
#define ___bpf_kretprobe_args0() ctx
-#define ___bpf_kretprobe_args1(x) ___bpf_kretprobe_args0(), (void *)PT_REGS_RC(ctx)
+#define ___bpf_kretprobe_args1(x) ___bpf_kretprobe_args0(), (unsigned long long)PT_REGS_RC(ctx)
#define ___bpf_kretprobe_args(args...) ___bpf_apply(___bpf_kretprobe_args, ___bpf_narg(args))(args)
/*
@@ -845,24 +852,24 @@ static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
/* If kernel has CONFIG_ARCH_HAS_SYSCALL_WRAPPER, read pt_regs directly */
#define ___bpf_syscall_args0() ctx
-#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (void *)PT_REGS_PARM1_SYSCALL(regs)
-#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (void *)PT_REGS_PARM2_SYSCALL(regs)
-#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (void *)PT_REGS_PARM3_SYSCALL(regs)
-#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (void *)PT_REGS_PARM4_SYSCALL(regs)
-#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (void *)PT_REGS_PARM5_SYSCALL(regs)
-#define ___bpf_syscall_args6(x, args...) ___bpf_syscall_args5(args), (void *)PT_REGS_PARM6_SYSCALL(regs)
-#define ___bpf_syscall_args7(x, args...) ___bpf_syscall_args6(args), (void *)PT_REGS_PARM7_SYSCALL(regs)
+#define ___bpf_syscall_args1(x) ___bpf_syscall_args0(), (unsigned long long)PT_REGS_PARM1_SYSCALL(regs)
+#define ___bpf_syscall_args2(x, args...) ___bpf_syscall_args1(args), (unsigned long long)PT_REGS_PARM2_SYSCALL(regs)
+#define ___bpf_syscall_args3(x, args...) ___bpf_syscall_args2(args), (unsigned long long)PT_REGS_PARM3_SYSCALL(regs)
+#define ___bpf_syscall_args4(x, args...) ___bpf_syscall_args3(args), (unsigned long long)PT_REGS_PARM4_SYSCALL(regs)
+#define ___bpf_syscall_args5(x, args...) ___bpf_syscall_args4(args), (unsigned long long)PT_REGS_PARM5_SYSCALL(regs)
+#define ___bpf_syscall_args6(x, args...) ___bpf_syscall_args5(args), (unsigned long long)PT_REGS_PARM6_SYSCALL(regs)
+#define ___bpf_syscall_args7(x, args...) ___bpf_syscall_args6(args), (unsigned long long)PT_REGS_PARM7_SYSCALL(regs)
#define ___bpf_syscall_args(args...) ___bpf_apply(___bpf_syscall_args, ___bpf_narg(args))(args)
/* If kernel doesn't have CONFIG_ARCH_HAS_SYSCALL_WRAPPER, we have to BPF_CORE_READ from pt_regs */
#define ___bpf_syswrap_args0() ctx
-#define ___bpf_syswrap_args1(x) ___bpf_syswrap_args0(), (void *)PT_REGS_PARM1_CORE_SYSCALL(regs)
-#define ___bpf_syswrap_args2(x, args...) ___bpf_syswrap_args1(args), (void *)PT_REGS_PARM2_CORE_SYSCALL(regs)
-#define ___bpf_syswrap_args3(x, args...) ___bpf_syswrap_args2(args), (void *)PT_REGS_PARM3_CORE_SYSCALL(regs)
-#define ___bpf_syswrap_args4(x, args...) ___bpf_syswrap_args3(args), (void *)PT_REGS_PARM4_CORE_SYSCALL(regs)
-#define ___bpf_syswrap_args5(x, args...) ___bpf_syswrap_args4(args), (void *)PT_REGS_PARM5_CORE_SYSCALL(regs)
-#define ___bpf_syswrap_args6(x, args...) ___bpf_syswrap_args5(args), (void *)PT_REGS_PARM6_CORE_SYSCALL(regs)
-#define ___bpf_syswrap_args7(x, args...) ___bpf_syswrap_args6(args), (void *)PT_REGS_PARM7_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args1(x) ___bpf_syswrap_args0(), (unsigned long long)PT_REGS_PARM1_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args2(x, args...) ___bpf_syswrap_args1(args), (unsigned long long)PT_REGS_PARM2_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args3(x, args...) ___bpf_syswrap_args2(args), (unsigned long long)PT_REGS_PARM3_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args4(x, args...) ___bpf_syswrap_args3(args), (unsigned long long)PT_REGS_PARM4_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args5(x, args...) ___bpf_syswrap_args4(args), (unsigned long long)PT_REGS_PARM5_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args6(x, args...) ___bpf_syswrap_args5(args), (unsigned long long)PT_REGS_PARM6_CORE_SYSCALL(regs)
+#define ___bpf_syswrap_args7(x, args...) ___bpf_syswrap_args6(args), (unsigned long long)PT_REGS_PARM7_CORE_SYSCALL(regs)
#define ___bpf_syswrap_args(args...) ___bpf_apply(___bpf_syswrap_args, ___bpf_narg(args))(args)
/*
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index ee95fd379d4d..18907f0fcf9f 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -12,6 +12,7 @@
#include <sys/utsname.h>
#include <sys/param.h>
#include <sys/stat.h>
+#include <sys/mman.h>
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/btf.h>
@@ -116,6 +117,12 @@ struct btf {
/* whether strings are already deduplicated */
bool strs_deduped;
+ /* whether base_btf should be freed in btf_free for this instance */
+ bool owns_base;
+
+ /* whether raw_data is a (read-only) mmap */
+ bool raw_data_is_mmap;
+
/* BTF object FD, if loaded into kernel */
int fd;
@@ -279,7 +286,7 @@ static int btf_parse_str_sec(struct btf *btf)
return -EINVAL;
}
if (!btf->base_btf && start[0]) {
- pr_debug("Invalid BTF string section\n");
+ pr_debug("Malformed BTF string section, did you forget to provide base BTF?\n");
return -EINVAL;
}
return 0;
@@ -598,7 +605,7 @@ static int btf_sanity_check(const struct btf *btf)
__u32 i, n = btf__type_cnt(btf);
int err;
- for (i = 1; i < n; i++) {
+ for (i = btf->start_id; i < n; i++) {
t = btf_type_by_id(btf, i);
err = btf_validate_type(btf, t, i);
if (err)
@@ -947,6 +954,17 @@ static bool btf_is_modifiable(const struct btf *btf)
return (void *)btf->hdr != btf->raw_data;
}
+static void btf_free_raw_data(struct btf *btf)
+{
+ if (btf->raw_data_is_mmap) {
+ munmap(btf->raw_data, btf->raw_size);
+ btf->raw_data_is_mmap = false;
+ } else {
+ free(btf->raw_data);
+ }
+ btf->raw_data = NULL;
+}
+
void btf__free(struct btf *btf)
{
if (IS_ERR_OR_NULL(btf))
@@ -966,9 +984,11 @@ void btf__free(struct btf *btf)
free(btf->types_data);
strset__free(btf->strs_set);
}
- free(btf->raw_data);
+ btf_free_raw_data(btf);
free(btf->raw_data_swapped);
free(btf->type_offs);
+ if (btf->owns_base)
+ btf__free(btf->base_btf);
free(btf);
}
@@ -990,7 +1010,8 @@ static struct btf *btf_new_empty(struct btf *base_btf)
if (base_btf) {
btf->base_btf = base_btf;
btf->start_id = btf__type_cnt(base_btf);
- btf->start_str_off = base_btf->hdr->str_len;
+ btf->start_str_off = base_btf->hdr->str_len + base_btf->start_str_off;
+ btf->swapped_endian = base_btf->swapped_endian;
}
/* +1 for empty string at offset 0 */
@@ -1023,7 +1044,7 @@ struct btf *btf__new_empty_split(struct btf *base_btf)
return libbpf_ptr(btf_new_empty(base_btf));
}
-static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf)
+static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf, bool is_mmap)
{
struct btf *btf;
int err;
@@ -1043,12 +1064,18 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf)
btf->start_str_off = base_btf->hdr->str_len;
}
- btf->raw_data = malloc(size);
- if (!btf->raw_data) {
- err = -ENOMEM;
- goto done;
+ if (is_mmap) {
+ btf->raw_data = (void *)data;
+ btf->raw_data_is_mmap = true;
+ } else {
+ btf->raw_data = malloc(size);
+ if (!btf->raw_data) {
+ err = -ENOMEM;
+ goto done;
+ }
+ memcpy(btf->raw_data, data, size);
}
- memcpy(btf->raw_data, data, size);
+
btf->raw_size = size;
btf->hdr = btf->raw_data;
@@ -1076,56 +1103,46 @@ done:
struct btf *btf__new(const void *data, __u32 size)
{
- return libbpf_ptr(btf_new(data, size, NULL));
+ return libbpf_ptr(btf_new(data, size, NULL, false));
}
-static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
- struct btf_ext **btf_ext)
+struct btf *btf__new_split(const void *data, __u32 size, struct btf *base_btf)
+{
+ return libbpf_ptr(btf_new(data, size, base_btf, false));
+}
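
For illustration, a minimal usage sketch (not from the tree) of building split BTF in memory on top of the running kernel's base BTF; raw_data/raw_size are assumed to hold the contents of a module's .BTF section, and libbpf 1.0 error conventions (NULL return plus errno) are assumed:

	struct btf *base, *split;

	base = btf__load_vmlinux_btf();
	if (!base)
		return -errno;

	split = btf__new_split(raw_data, raw_size, base);
	if (!split) {
		int err = -errno;

		btf__free(base);
		return err;
	}
	/* ... query or extend split BTF ...; free split before its base */
	btf__free(split);
	btf__free(base);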
+
+struct btf_elf_secs {
+ Elf_Data *btf_data;
+ Elf_Data *btf_ext_data;
+ Elf_Data *btf_base_data;
+};
+
+static int btf_find_elf_sections(Elf *elf, const char *path, struct btf_elf_secs *secs)
{
- Elf_Data *btf_data = NULL, *btf_ext_data = NULL;
- int err = 0, fd = -1, idx = 0;
- struct btf *btf = NULL;
Elf_Scn *scn = NULL;
- Elf *elf = NULL;
+ Elf_Data *data;
GElf_Ehdr ehdr;
size_t shstrndx;
+ int idx = 0;
- if (elf_version(EV_CURRENT) == EV_NONE) {
- pr_warn("failed to init libelf for %s\n", path);
- return ERR_PTR(-LIBBPF_ERRNO__LIBELF);
- }
-
- fd = open(path, O_RDONLY | O_CLOEXEC);
- if (fd < 0) {
- err = -errno;
- pr_warn("failed to open %s: %s\n", path, strerror(errno));
- return ERR_PTR(err);
- }
-
- err = -LIBBPF_ERRNO__FORMAT;
-
- elf = elf_begin(fd, ELF_C_READ, NULL);
- if (!elf) {
- pr_warn("failed to open %s as ELF file\n", path);
- goto done;
- }
if (!gelf_getehdr(elf, &ehdr)) {
pr_warn("failed to get EHDR from %s\n", path);
- goto done;
+ goto err;
}
if (elf_getshdrstrndx(elf, &shstrndx)) {
pr_warn("failed to get section names section index for %s\n",
path);
- goto done;
+ goto err;
}
if (!elf_rawdata(elf_getscn(elf, shstrndx), NULL)) {
pr_warn("failed to get e_shstrndx from %s\n", path);
- goto done;
+ goto err;
}
while ((scn = elf_nextscn(elf, scn)) != NULL) {
+ Elf_Data **field;
GElf_Shdr sh;
char *name;
@@ -1133,42 +1150,109 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
if (gelf_getshdr(scn, &sh) != &sh) {
pr_warn("failed to get section(%d) header from %s\n",
idx, path);
- goto done;
+ goto err;
}
name = elf_strptr(elf, shstrndx, sh.sh_name);
if (!name) {
pr_warn("failed to get section(%d) name from %s\n",
idx, path);
- goto done;
+ goto err;
}
- if (strcmp(name, BTF_ELF_SEC) == 0) {
- btf_data = elf_getdata(scn, 0);
- if (!btf_data) {
- pr_warn("failed to get section(%d, %s) data from %s\n",
- idx, name, path);
- goto done;
- }
- continue;
- } else if (btf_ext && strcmp(name, BTF_EXT_ELF_SEC) == 0) {
- btf_ext_data = elf_getdata(scn, 0);
- if (!btf_ext_data) {
- pr_warn("failed to get section(%d, %s) data from %s\n",
- idx, name, path);
- goto done;
- }
+
+ if (strcmp(name, BTF_ELF_SEC) == 0)
+ field = &secs->btf_data;
+ else if (strcmp(name, BTF_EXT_ELF_SEC) == 0)
+ field = &secs->btf_ext_data;
+ else if (strcmp(name, BTF_BASE_ELF_SEC) == 0)
+ field = &secs->btf_base_data;
+ else
continue;
+
+ if (sh.sh_type != SHT_PROGBITS) {
+ pr_warn("unexpected section type (%d) of section(%d, %s) from %s\n",
+ sh.sh_type, idx, name, path);
+ goto err;
}
+
+ data = elf_getdata(scn, 0);
+ if (!data) {
+ pr_warn("failed to get section(%d, %s) data from %s\n",
+ idx, name, path);
+ goto err;
+ }
+ *field = data;
+ }
+
+ return 0;
+
+err:
+ return -LIBBPF_ERRNO__FORMAT;
+}
+
+static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
+ struct btf_ext **btf_ext)
+{
+ struct btf_elf_secs secs = {};
+ struct btf *dist_base_btf = NULL;
+ struct btf *btf = NULL;
+ int err = 0, fd = -1;
+ Elf *elf = NULL;
+
+ if (elf_version(EV_CURRENT) == EV_NONE) {
+ pr_warn("failed to init libelf for %s\n", path);
+ return ERR_PTR(-LIBBPF_ERRNO__LIBELF);
+ }
+
+ fd = open(path, O_RDONLY | O_CLOEXEC);
+ if (fd < 0) {
+ err = -errno;
+ pr_warn("failed to open %s: %s\n", path, errstr(err));
+ return ERR_PTR(err);
+ }
+
+ elf = elf_begin(fd, ELF_C_READ, NULL);
+ if (!elf) {
+ err = -LIBBPF_ERRNO__FORMAT;
+ pr_warn("failed to open %s as ELF file\n", path);
+ goto done;
}
- if (!btf_data) {
+ err = btf_find_elf_sections(elf, path, &secs);
+ if (err)
+ goto done;
+
+ if (!secs.btf_data) {
pr_warn("failed to find '%s' ELF section in %s\n", BTF_ELF_SEC, path);
err = -ENODATA;
goto done;
}
- btf = btf_new(btf_data->d_buf, btf_data->d_size, base_btf);
- err = libbpf_get_error(btf);
- if (err)
+
+ if (secs.btf_base_data) {
+ dist_base_btf = btf_new(secs.btf_base_data->d_buf, secs.btf_base_data->d_size,
+ NULL, false);
+ if (IS_ERR(dist_base_btf)) {
+ err = PTR_ERR(dist_base_btf);
+ dist_base_btf = NULL;
+ goto done;
+ }
+ }
+
+ btf = btf_new(secs.btf_data->d_buf, secs.btf_data->d_size,
+ dist_base_btf ?: base_btf, false);
+ if (IS_ERR(btf)) {
+ err = PTR_ERR(btf);
goto done;
+ }
+ if (dist_base_btf && base_btf) {
+ err = btf__relocate(btf, base_btf);
+ if (err)
+ goto done;
+ btf__free(dist_base_btf);
+ dist_base_btf = NULL;
+ }
+
+ if (dist_base_btf)
+ btf->owns_base = true;
switch (gelf_getclass(elf)) {
case ELFCLASS32:
@@ -1182,11 +1266,12 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
break;
}
- if (btf_ext && btf_ext_data) {
- *btf_ext = btf_ext__new(btf_ext_data->d_buf, btf_ext_data->d_size);
- err = libbpf_get_error(*btf_ext);
- if (err)
+ if (btf_ext && secs.btf_ext_data) {
+ *btf_ext = btf_ext__new(secs.btf_ext_data->d_buf, secs.btf_ext_data->d_size);
+ if (IS_ERR(*btf_ext)) {
+ err = PTR_ERR(*btf_ext);
goto done;
+ }
} else if (btf_ext) {
*btf_ext = NULL;
}
@@ -1200,6 +1285,7 @@ done:
if (btf_ext)
btf_ext__free(*btf_ext);
+ btf__free(dist_base_btf);
btf__free(btf);
return ERR_PTR(err);
@@ -1269,7 +1355,7 @@ static struct btf *btf_parse_raw(const char *path, struct btf *base_btf)
}
/* finally parse BTF data */
- btf = btf_new(data, sz, base_btf);
+ btf = btf_new(data, sz, base_btf, false);
err_out:
free(data);
@@ -1288,6 +1374,37 @@ struct btf *btf__parse_raw_split(const char *path, struct btf *base_btf)
return libbpf_ptr(btf_parse_raw(path, base_btf));
}
+static struct btf *btf_parse_raw_mmap(const char *path, struct btf *base_btf)
+{
+ struct stat st;
+ void *data;
+ struct btf *btf;
+ int fd, err;
+
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return ERR_PTR(-errno);
+
+ if (fstat(fd, &st) < 0) {
+ err = -errno;
+ close(fd);
+ return ERR_PTR(err);
+ }
+
+ data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
+ err = -errno;
+ close(fd);
+
+ if (data == MAP_FAILED)
+ return ERR_PTR(err);
+
+ btf = btf_new(data, st.st_size, base_btf, true);
+ if (IS_ERR(btf))
+ munmap(data, st.st_size);
+
+ return btf;
+}
+
static struct btf *btf_parse(const char *path, struct btf *base_btf, struct btf_ext **btf_ext)
{
struct btf *btf;
@@ -1317,7 +1434,9 @@ struct btf *btf__parse_split(const char *path, struct btf *base_btf)
static void *btf_get_raw_data(const struct btf *btf, __u32 *size, bool swap_endian);
-int btf_load_into_kernel(struct btf *btf, char *log_buf, size_t log_sz, __u32 log_level)
+int btf_load_into_kernel(struct btf *btf,
+ char *log_buf, size_t log_sz, __u32 log_level,
+ int token_fd)
{
LIBBPF_OPTS(bpf_btf_load_opts, opts);
__u32 buf_sz = 0, raw_size;
@@ -1367,6 +1486,10 @@ retry_load:
opts.log_level = log_level;
}
+ opts.token_fd = token_fd;
+ if (token_fd)
+ opts.btf_flags |= BPF_F_TOKEN_FD;
+
btf->fd = bpf_btf_load(raw_data, raw_size, &opts);
if (btf->fd < 0) {
/* time to turn on verbose mode and try again */
@@ -1381,7 +1504,7 @@ retry_load:
goto retry_load;
err = -errno;
- pr_warn("BTF loading error: %d\n", err);
+ pr_warn("BTF loading error: %s\n", errstr(err));
/* don't print out contents of custom log_buf */
if (!log_buf && buf[0])
pr_warn("-- BEGIN BTF LOAD LOG ---\n%s\n-- END BTF LOAD LOG --\n", buf);
@@ -1394,7 +1517,7 @@ done:
int btf__load_into_kernel(struct btf *btf)
{
- return btf_load_into_kernel(btf, NULL, 0, 0);
+ return btf_load_into_kernel(btf, NULL, 0, 0, 0);
}
int btf__fd(const struct btf *btf)
@@ -1546,19 +1669,25 @@ struct btf *btf_get_from_fd(int btf_fd, struct btf *base_btf)
goto exit_free;
}
- btf = btf_new(ptr, btf_info.btf_size, base_btf);
+ btf = btf_new(ptr, btf_info.btf_size, base_btf, false);
exit_free:
free(ptr);
return btf;
}
-struct btf *btf__load_from_kernel_by_id_split(__u32 id, struct btf *base_btf)
+struct btf *btf_load_from_kernel(__u32 id, struct btf *base_btf, int token_fd)
{
struct btf *btf;
int btf_fd;
+ LIBBPF_OPTS(bpf_get_fd_by_id_opts, opts);
- btf_fd = bpf_btf_get_fd_by_id(id);
+ if (token_fd) {
+ opts.open_flags |= BPF_F_TOKEN_FD;
+ opts.token_fd = token_fd;
+ }
+
+ btf_fd = bpf_btf_get_fd_by_id_opts(id, &opts);
if (btf_fd < 0)
return libbpf_err_ptr(-errno);
@@ -1568,6 +1697,11 @@ struct btf *btf__load_from_kernel_by_id_split(__u32 id, struct btf *base_btf)
return libbpf_ptr(btf);
}
+struct btf *btf__load_from_kernel_by_id_split(__u32 id, struct btf *base_btf)
+{
+ return btf_load_from_kernel(id, base_btf, 0);
+}
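
For illustration, a rough sketch (not from the tree) of the core of what btf_load_into_kernel() above does, expressed with the public low-level API: loading raw BTF under a BPF token; token_fd is assumed to come from bpf_token_create() on a delegated bpffs mount:

	LIBBPF_OPTS(bpf_btf_load_opts, opts,
		.token_fd = token_fd,
		.btf_flags = BPF_F_TOKEN_FD,
	);
	int btf_fd;

	btf_fd = bpf_btf_load(raw_data, raw_size, &opts);
	if (btf_fd < 0)
		return -errno;	/* load failed even with the token */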
+
struct btf *btf__load_from_kernel_by_id(__u32 id)
{
return btf__load_from_kernel_by_id_split(id, NULL);
@@ -1575,10 +1709,8 @@ struct btf *btf__load_from_kernel_by_id(__u32 id)
static void btf_invalidate_raw_data(struct btf *btf)
{
- if (btf->raw_data) {
- free(btf->raw_data);
- btf->raw_data = NULL;
- }
+ if (btf->raw_data)
+ btf_free_raw_data(btf);
if (btf->raw_data_swapped) {
free(btf->raw_data_swapped);
btf->raw_data_swapped = NULL;
@@ -1728,9 +1860,8 @@ struct btf_pipe {
struct hashmap *str_off_map; /* map string offsets from src to dst */
};
-static int btf_rewrite_str(__u32 *str_off, void *ctx)
+static int btf_rewrite_str(struct btf_pipe *p, __u32 *str_off)
{
- struct btf_pipe *p = ctx;
long mapped_off;
int off, err;
@@ -1760,10 +1891,11 @@ static int btf_rewrite_str(__u32 *str_off, void *ctx)
return 0;
}
-int btf__add_type(struct btf *btf, const struct btf *src_btf, const struct btf_type *src_type)
+static int btf_add_type(struct btf_pipe *p, const struct btf_type *src_type)
{
- struct btf_pipe p = { .src = src_btf, .dst = btf };
+ struct btf_field_iter it;
struct btf_type *t;
+ __u32 *str_off;
int sz, err;
sz = btf_type_size(src_type);
@@ -1771,35 +1903,33 @@ int btf__add_type(struct btf *btf, const struct btf *src_btf, const struct btf_t
return libbpf_err(sz);
/* deconstruct BTF, if necessary, and invalidate raw_data */
- if (btf_ensure_modifiable(btf))
+ if (btf_ensure_modifiable(p->dst))
return libbpf_err(-ENOMEM);
- t = btf_add_type_mem(btf, sz);
+ t = btf_add_type_mem(p->dst, sz);
if (!t)
return libbpf_err(-ENOMEM);
memcpy(t, src_type, sz);
- err = btf_type_visit_str_offs(t, btf_rewrite_str, &p);
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_STRS);
if (err)
return libbpf_err(err);
- return btf_commit_type(btf, sz);
+ while ((str_off = btf_field_iter_next(&it))) {
+ err = btf_rewrite_str(p, str_off);
+ if (err)
+ return libbpf_err(err);
+ }
+
+ return btf_commit_type(p->dst, sz);
}
-static int btf_rewrite_type_ids(__u32 *type_id, void *ctx)
+int btf__add_type(struct btf *btf, const struct btf *src_btf, const struct btf_type *src_type)
{
- struct btf *btf = ctx;
-
- if (!*type_id) /* nothing to do for VOID references */
- return 0;
+ struct btf_pipe p = { .src = src_btf, .dst = btf };
- /* we haven't updated btf's type count yet, so
- * btf->start_id + btf->nr_types - 1 is the type ID offset we should
- * add to all newly added BTF types
- */
- *type_id += btf->start_id + btf->nr_types - 1;
- return 0;
+ return btf_add_type(&p, src_type);
}
static size_t btf_dedup_identity_hash_fn(long key, void *ctx);
@@ -1847,6 +1977,9 @@ int btf__add_btf(struct btf *btf, const struct btf *src_btf)
memcpy(t, src_btf->types_data, data_sz);
for (i = 0; i < cnt; i++) {
+ struct btf_field_iter it;
+ __u32 *type_id, *str_off;
+
sz = btf_type_size(t);
if (sz < 0) {
/* unlikely, has to be corrupted src_btf */
@@ -1858,15 +1991,31 @@ int btf__add_btf(struct btf *btf, const struct btf *src_btf)
*off = t - btf->types_data;
/* add, dedup, and remap strings referenced by this BTF type */
- err = btf_type_visit_str_offs(t, btf_rewrite_str, &p);
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_STRS);
if (err)
goto err_out;
+ while ((str_off = btf_field_iter_next(&it))) {
+ err = btf_rewrite_str(&p, str_off);
+ if (err)
+ goto err_out;
+ }
/* remap all type IDs referenced from this BTF type */
- err = btf_type_visit_type_ids(t, btf_rewrite_type_ids, btf);
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
if (err)
goto err_out;
+ while ((type_id = btf_field_iter_next(&it))) {
+ if (!*type_id) /* nothing to do for VOID references */
+ continue;
+
+ /* we haven't updated btf's type count yet, so
+ * btf->start_id + btf->nr_types - 1 is the type ID offset we should
+ * add to all newly added BTF types
+ */
+ *type_id += btf->start_id + btf->nr_types - 1;
+ }
+
/* go to next type data and type offset index entry */
t += sz;
off++;
@@ -2007,7 +2156,7 @@ static int validate_type_id(int id)
}
/* generic append function for PTR, TYPEDEF, CONST/VOLATILE/RESTRICT */
-static int btf_add_ref_kind(struct btf *btf, int kind, const char *name, int ref_type_id)
+static int btf_add_ref_kind(struct btf *btf, int kind, const char *name, int ref_type_id, int kflag)
{
struct btf_type *t;
int sz, name_off = 0;
@@ -2030,7 +2179,7 @@ static int btf_add_ref_kind(struct btf *btf, int kind, const char *name, int ref
}
t->name_off = name_off;
- t->info = btf_type_info(kind, 0, 0);
+ t->info = btf_type_info(kind, 0, kflag);
t->type = ref_type_id;
return btf_commit_type(btf, sz);
@@ -2045,7 +2194,7 @@ static int btf_add_ref_kind(struct btf *btf, int kind, const char *name, int ref
*/
int btf__add_ptr(struct btf *btf, int ref_type_id)
{
- return btf_add_ref_kind(btf, BTF_KIND_PTR, NULL, ref_type_id);
+ return btf_add_ref_kind(btf, BTF_KIND_PTR, NULL, ref_type_id, 0);
}
/*
@@ -2423,7 +2572,7 @@ int btf__add_fwd(struct btf *btf, const char *name, enum btf_fwd_kind fwd_kind)
struct btf_type *t;
int id;
- id = btf_add_ref_kind(btf, BTF_KIND_FWD, name, 0);
+ id = btf_add_ref_kind(btf, BTF_KIND_FWD, name, 0, 0);
if (id <= 0)
return id;
t = btf_type_by_id(btf, id);
@@ -2453,7 +2602,7 @@ int btf__add_typedef(struct btf *btf, const char *name, int ref_type_id)
if (!name || !name[0])
return libbpf_err(-EINVAL);
- return btf_add_ref_kind(btf, BTF_KIND_TYPEDEF, name, ref_type_id);
+ return btf_add_ref_kind(btf, BTF_KIND_TYPEDEF, name, ref_type_id, 0);
}
/*
@@ -2465,7 +2614,7 @@ int btf__add_typedef(struct btf *btf, const char *name, int ref_type_id)
*/
int btf__add_volatile(struct btf *btf, int ref_type_id)
{
- return btf_add_ref_kind(btf, BTF_KIND_VOLATILE, NULL, ref_type_id);
+ return btf_add_ref_kind(btf, BTF_KIND_VOLATILE, NULL, ref_type_id, 0);
}
/*
@@ -2477,7 +2626,7 @@ int btf__add_volatile(struct btf *btf, int ref_type_id)
*/
int btf__add_const(struct btf *btf, int ref_type_id)
{
- return btf_add_ref_kind(btf, BTF_KIND_CONST, NULL, ref_type_id);
+ return btf_add_ref_kind(btf, BTF_KIND_CONST, NULL, ref_type_id, 0);
}
/*
@@ -2489,7 +2638,7 @@ int btf__add_const(struct btf *btf, int ref_type_id)
*/
int btf__add_restrict(struct btf *btf, int ref_type_id)
{
- return btf_add_ref_kind(btf, BTF_KIND_RESTRICT, NULL, ref_type_id);
+ return btf_add_ref_kind(btf, BTF_KIND_RESTRICT, NULL, ref_type_id, 0);
}
/*
@@ -2505,7 +2654,24 @@ int btf__add_type_tag(struct btf *btf, const char *value, int ref_type_id)
if (!value || !value[0])
return libbpf_err(-EINVAL);
- return btf_add_ref_kind(btf, BTF_KIND_TYPE_TAG, value, ref_type_id);
+ return btf_add_ref_kind(btf, BTF_KIND_TYPE_TAG, value, ref_type_id, 0);
+}
+
+/*
+ * Append new BTF_KIND_TYPE_TAG type with:
+ * - *value*, non-empty/non-NULL tag value;
+ * - *ref_type_id* - referenced type ID, it might not exist yet;
+ * Set info->kflag to 1, indicating this tag is an __attribute__
+ * Returns:
+ * - >0, type ID of newly added BTF type;
+ * - <0, on error.
+ */
+int btf__add_type_attr(struct btf *btf, const char *value, int ref_type_id)
+{
+ if (!value || !value[0])
+ return libbpf_err(-EINVAL);
+
+ return btf_add_ref_kind(btf, BTF_KIND_TYPE_TAG, value, ref_type_id, 1);
}
/*
@@ -2527,7 +2693,7 @@ int btf__add_func(struct btf *btf, const char *name,
linkage != BTF_FUNC_EXTERN)
return libbpf_err(-EINVAL);
- id = btf_add_ref_kind(btf, BTF_KIND_FUNC, name, proto_type_id);
+ id = btf_add_ref_kind(btf, BTF_KIND_FUNC, name, proto_type_id, 0);
if (id > 0) {
struct btf_type *t = btf_type_by_id(btf, id);
@@ -2762,18 +2928,8 @@ int btf__add_datasec_var_info(struct btf *btf, int var_type_id, __u32 offset, __
return 0;
}
-/*
- * Append new BTF_KIND_DECL_TAG type with:
- * - *value* - non-empty/non-NULL string;
- * - *ref_type_id* - referenced type ID, it might not exist yet;
- * - *component_idx* - -1 for tagging reference type, otherwise struct/union
- * member or function argument index;
- * Returns:
- * - >0, type ID of newly added BTF type;
- * - <0, on error.
- */
-int btf__add_decl_tag(struct btf *btf, const char *value, int ref_type_id,
- int component_idx)
+static int btf_add_decl_tag(struct btf *btf, const char *value, int ref_type_id,
+ int component_idx, int kflag)
{
struct btf_type *t;
int sz, value_off;
@@ -2797,14 +2953,47 @@ int btf__add_decl_tag(struct btf *btf, const char *value, int ref_type_id,
return value_off;
t->name_off = value_off;
- t->info = btf_type_info(BTF_KIND_DECL_TAG, 0, false);
+ t->info = btf_type_info(BTF_KIND_DECL_TAG, 0, kflag);
t->type = ref_type_id;
btf_decl_tag(t)->component_idx = component_idx;
return btf_commit_type(btf, sz);
}
-struct btf_ext_sec_setup_param {
+/*
+ * Append new BTF_KIND_DECL_TAG type with:
+ * - *value* - non-empty/non-NULL string;
+ * - *ref_type_id* - referenced type ID, it might not exist yet;
+ * - *component_idx* - -1 for tagging reference type, otherwise struct/union
+ * member or function argument index;
+ * Returns:
+ * - >0, type ID of newly added BTF type;
+ * - <0, on error.
+ */
+int btf__add_decl_tag(struct btf *btf, const char *value, int ref_type_id,
+ int component_idx)
+{
+ return btf_add_decl_tag(btf, value, ref_type_id, component_idx, 0);
+}
+
+/*
+ * Append new BTF_KIND_DECL_TAG type with:
+ * - *value* - non-empty/non-NULL string;
+ * - *ref_type_id* - referenced type ID, it might not exist yet;
+ * - *component_idx* - -1 for tagging reference type, otherwise struct/union
+ * member or function argument index;
+ * Set info->kflag to 1, indicating this tag is an __attribute__
+ * Returns:
+ * - >0, type ID of newly added BTF type;
+ * - <0, on error.
+ */
+int btf__add_decl_attr(struct btf *btf, const char *value, int ref_type_id,
+ int component_idx)
+{
+ return btf_add_decl_tag(btf, value, ref_type_id, component_idx, 1);
+}
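
For illustration, a sketch (names made up; int_id is assumed to be a previously added INT type) contrasting a declaration tag with the attribute-style variant on struct members:

	int sid;

	sid = btf__add_struct(btf, "foo", 8);
	btf__add_field(btf, "f1", int_id, 0, 0);
	btf__add_field(btf, "f2", int_id, 32, 0);
	/* member f1 (component_idx 0) gets btf_decl_tag("user_data"), kflag == 0 */
	btf__add_decl_tag(btf, "user_data", sid, 0);
	/* member f2 (component_idx 1) carries a raw attribute, kflag == 1 */
	btf__add_decl_attr(btf, "aligned(8)", sid, 1);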
+
+struct btf_ext_sec_info_param {
__u32 off;
__u32 len;
__u32 min_rec_size;
@@ -2812,14 +3001,20 @@ struct btf_ext_sec_setup_param {
const char *desc;
};
-static int btf_ext_setup_info(struct btf_ext *btf_ext,
- struct btf_ext_sec_setup_param *ext_sec)
+/*
+ * Parse a single info subsection of the BTF.ext info data:
+ * - validate subsection structure and elements
+ * - save info subsection start and sizing details in struct btf_ext
+ * - endian-independent operation, for calling before byte-swapping
+ */
+static int btf_ext_parse_sec_info(struct btf_ext *btf_ext,
+ struct btf_ext_sec_info_param *ext_sec,
+ bool is_native)
{
const struct btf_ext_info_sec *sinfo;
struct btf_ext_info *ext_info;
__u32 info_left, record_size;
size_t sec_cnt = 0;
- /* The start of the info sec (including the __u32 record_size). */
void *info;
if (ext_sec->len == 0)
@@ -2831,6 +3026,7 @@ static int btf_ext_setup_info(struct btf_ext *btf_ext,
return -EINVAL;
}
+ /* The start of the info sec (including the __u32 record_size). */
info = btf_ext->data + btf_ext->hdr->hdr_len + ext_sec->off;
info_left = ext_sec->len;
@@ -2846,9 +3042,13 @@ static int btf_ext_setup_info(struct btf_ext *btf_ext,
return -EINVAL;
}
- /* The record size needs to meet the minimum standard */
- record_size = *(__u32 *)info;
+ /* The record size needs to meet either the minimum standard or, when
+ * handling non-native endianness data, the exact standard so as
+ * to allow safe byte-swapping.
+ */
+ record_size = is_native ? *(__u32 *)info : bswap_32(*(__u32 *)info);
if (record_size < ext_sec->min_rec_size ||
+ (!is_native && record_size != ext_sec->min_rec_size) ||
record_size & 0x03) {
pr_debug("%s section in .BTF.ext has invalid record size %u\n",
ext_sec->desc, record_size);
@@ -2860,7 +3060,7 @@ static int btf_ext_setup_info(struct btf_ext *btf_ext,
/* If no records, return failure now so .BTF.ext won't be used. */
if (!info_left) {
- pr_debug("%s section in .BTF.ext has no records", ext_sec->desc);
+ pr_debug("%s section in .BTF.ext has no records\n", ext_sec->desc);
return -EINVAL;
}
@@ -2875,7 +3075,7 @@ static int btf_ext_setup_info(struct btf_ext *btf_ext,
return -EINVAL;
}
- num_records = sinfo->num_info;
+ num_records = is_native ? sinfo->num_info : bswap_32(sinfo->num_info);
if (num_records == 0) {
pr_debug("%s section has incorrect num_records in .BTF.ext\n",
ext_sec->desc);
@@ -2903,64 +3103,157 @@ static int btf_ext_setup_info(struct btf_ext *btf_ext,
return 0;
}
-static int btf_ext_setup_func_info(struct btf_ext *btf_ext)
+/* Parse all info secs in the BTF.ext info data */
+static int btf_ext_parse_info(struct btf_ext *btf_ext, bool is_native)
{
- struct btf_ext_sec_setup_param param = {
+ struct btf_ext_sec_info_param func_info = {
.off = btf_ext->hdr->func_info_off,
.len = btf_ext->hdr->func_info_len,
.min_rec_size = sizeof(struct bpf_func_info_min),
.ext_info = &btf_ext->func_info,
.desc = "func_info"
};
-
- return btf_ext_setup_info(btf_ext, &param);
-}
-
-static int btf_ext_setup_line_info(struct btf_ext *btf_ext)
-{
- struct btf_ext_sec_setup_param param = {
+ struct btf_ext_sec_info_param line_info = {
.off = btf_ext->hdr->line_info_off,
.len = btf_ext->hdr->line_info_len,
.min_rec_size = sizeof(struct bpf_line_info_min),
.ext_info = &btf_ext->line_info,
.desc = "line_info",
};
-
- return btf_ext_setup_info(btf_ext, &param);
-}
-
-static int btf_ext_setup_core_relos(struct btf_ext *btf_ext)
-{
- struct btf_ext_sec_setup_param param = {
- .off = btf_ext->hdr->core_relo_off,
- .len = btf_ext->hdr->core_relo_len,
+ struct btf_ext_sec_info_param core_relo = {
.min_rec_size = sizeof(struct bpf_core_relo),
.ext_info = &btf_ext->core_relo_info,
.desc = "core_relo",
};
+ int err;
+
+ err = btf_ext_parse_sec_info(btf_ext, &func_info, is_native);
+ if (err)
+ return err;
- return btf_ext_setup_info(btf_ext, &param);
+ err = btf_ext_parse_sec_info(btf_ext, &line_info, is_native);
+ if (err)
+ return err;
+
+ if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, core_relo_len))
+ return 0; /* skip core relos parsing */
+
+ core_relo.off = btf_ext->hdr->core_relo_off;
+ core_relo.len = btf_ext->hdr->core_relo_len;
+ err = btf_ext_parse_sec_info(btf_ext, &core_relo, is_native);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+/* Swap byte-order of BTF.ext header with any endianness */
+static void btf_ext_bswap_hdr(struct btf_ext_header *h)
+{
+ bool is_native = h->magic == BTF_MAGIC;
+ __u32 hdr_len;
+
+ hdr_len = is_native ? h->hdr_len : bswap_32(h->hdr_len);
+
+ h->magic = bswap_16(h->magic);
+ h->hdr_len = bswap_32(h->hdr_len);
+ h->func_info_off = bswap_32(h->func_info_off);
+ h->func_info_len = bswap_32(h->func_info_len);
+ h->line_info_off = bswap_32(h->line_info_off);
+ h->line_info_len = bswap_32(h->line_info_len);
+
+ if (hdr_len < offsetofend(struct btf_ext_header, core_relo_len))
+ return;
+
+ h->core_relo_off = bswap_32(h->core_relo_off);
+ h->core_relo_len = bswap_32(h->core_relo_len);
}
-static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
+/* Swap byte-order of generic info subsection */
+static void btf_ext_bswap_info_sec(void *info, __u32 len, bool is_native,
+ info_rec_bswap_fn bswap_fn)
{
- const struct btf_ext_header *hdr = (struct btf_ext_header *)data;
+ struct btf_ext_info_sec *sec;
+ __u32 info_left, rec_size, *rs;
- if (data_size < offsetofend(struct btf_ext_header, hdr_len) ||
- data_size < hdr->hdr_len) {
- pr_debug("BTF.ext header not found");
+ if (len == 0)
+ return;
+
+ rs = info; /* info record size */
+ rec_size = is_native ? *rs : bswap_32(*rs);
+ *rs = bswap_32(*rs);
+
+ sec = info + sizeof(__u32); /* info sec #1 */
+ info_left = len - sizeof(__u32);
+ while (info_left) {
+ unsigned int sec_hdrlen = sizeof(struct btf_ext_info_sec);
+ __u32 i, num_recs;
+ void *p;
+
+ num_recs = is_native ? sec->num_info : bswap_32(sec->num_info);
+ sec->sec_name_off = bswap_32(sec->sec_name_off);
+ sec->num_info = bswap_32(sec->num_info);
+ p = sec->data; /* info rec #1 */
+ for (i = 0; i < num_recs; i++, p += rec_size)
+ bswap_fn(p);
+ sec = p;
+ info_left -= sec_hdrlen + (__u64)rec_size * num_recs;
+ }
+}
+
+/*
+ * Swap byte-order of all info data in a BTF.ext section
+ * - requires BTF.ext hdr in native endianness
+ */
+static void btf_ext_bswap_info(struct btf_ext *btf_ext, void *data)
+{
+ const bool is_native = btf_ext->swapped_endian;
+ const struct btf_ext_header *h = data;
+ void *info;
+
+ /* Swap func_info subsection byte-order */
+ info = data + h->hdr_len + h->func_info_off;
+ btf_ext_bswap_info_sec(info, h->func_info_len, is_native,
+ (info_rec_bswap_fn)bpf_func_info_bswap);
+
+ /* Swap line_info subsection byte-order */
+ info = data + h->hdr_len + h->line_info_off;
+ btf_ext_bswap_info_sec(info, h->line_info_len, is_native,
+ (info_rec_bswap_fn)bpf_line_info_bswap);
+
+ /* Swap core_relo subsection byte-order (if present) */
+ if (h->hdr_len < offsetofend(struct btf_ext_header, core_relo_len))
+ return;
+
+ info = data + h->hdr_len + h->core_relo_off;
+ btf_ext_bswap_info_sec(info, h->core_relo_len, is_native,
+ (info_rec_bswap_fn)bpf_core_relo_bswap);
+}
+
+/* Parse hdr data and info sections: check and convert to native endianness */
+static int btf_ext_parse(struct btf_ext *btf_ext)
+{
+ __u32 hdr_len, data_size = btf_ext->data_size;
+ struct btf_ext_header *hdr = btf_ext->hdr;
+ bool swapped_endian = false;
+ int err;
+
+ if (data_size < offsetofend(struct btf_ext_header, hdr_len)) {
+ pr_debug("BTF.ext header too short\n");
return -EINVAL;
}
+ hdr_len = hdr->hdr_len;
if (hdr->magic == bswap_16(BTF_MAGIC)) {
- pr_warn("BTF.ext in non-native endianness is not supported\n");
- return -ENOTSUP;
+ swapped_endian = true;
+ hdr_len = bswap_32(hdr_len);
} else if (hdr->magic != BTF_MAGIC) {
pr_debug("Invalid BTF.ext magic:%x\n", hdr->magic);
return -EINVAL;
}
- if (hdr->version != BTF_VERSION) {
+ /* Ensure known version of structs, current BTF_VERSION == 1 */
+ if (hdr->version != 1) {
pr_debug("Unsupported BTF.ext version:%u\n", hdr->version);
return -ENOTSUP;
}
@@ -2970,11 +3263,39 @@ static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
return -ENOTSUP;
}
- if (data_size == hdr->hdr_len) {
+ if (data_size < hdr_len) {
+ pr_debug("BTF.ext header not found\n");
+ return -EINVAL;
+ } else if (data_size == hdr_len) {
pr_debug("BTF.ext has no data\n");
return -EINVAL;
}
+ /* Verify mandatory hdr info details present */
+ if (hdr_len < offsetofend(struct btf_ext_header, line_info_len)) {
+ pr_warn("BTF.ext header missing func_info, line_info\n");
+ return -EINVAL;
+ }
+
+ /* Keep hdr native byte-order in memory for introspection */
+ if (swapped_endian)
+ btf_ext_bswap_hdr(btf_ext->hdr);
+
+ /* Validate info subsections and cache key metadata */
+ err = btf_ext_parse_info(btf_ext, !swapped_endian);
+ if (err)
+ return err;
+
+ /* Keep infos native byte-order in memory for introspection */
+ if (swapped_endian)
+ btf_ext_bswap_info(btf_ext, btf_ext->data);
+
+ /*
+ * Set btf_ext->swapped_endian only after all header and info data has
+ * been swapped, helping bswap functions determine if their data are
+ * in native byte-order when called.
+ */
+ btf_ext->swapped_endian = swapped_endian;
return 0;
}
@@ -2986,6 +3307,7 @@ void btf_ext__free(struct btf_ext *btf_ext)
free(btf_ext->line_info.sec_idxs);
free(btf_ext->core_relo_info.sec_idxs);
free(btf_ext->data);
+ free(btf_ext->data_swapped);
free(btf_ext);
}
@@ -3006,29 +3328,7 @@ struct btf_ext *btf_ext__new(const __u8 *data, __u32 size)
}
memcpy(btf_ext->data, data, size);
- err = btf_ext_parse_hdr(btf_ext->data, size);
- if (err)
- goto done;
-
- if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, line_info_len)) {
- err = -EINVAL;
- goto done;
- }
-
- err = btf_ext_setup_func_info(btf_ext);
- if (err)
- goto done;
-
- err = btf_ext_setup_line_info(btf_ext);
- if (err)
- goto done;
-
- if (btf_ext->hdr->hdr_len < offsetofend(struct btf_ext_header, core_relo_len))
- goto done; /* skip core relos parsing */
-
- err = btf_ext_setup_core_relos(btf_ext);
- if (err)
- goto done;
+ err = btf_ext_parse(btf_ext);
done:
if (err) {
@@ -3039,10 +3339,65 @@ done:
return btf_ext;
}
-const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext, __u32 *size)
+static void *btf_ext_raw_data(const struct btf_ext *btf_ext_ro, bool swap_endian)
{
+ struct btf_ext *btf_ext = (struct btf_ext *)btf_ext_ro;
+ const __u32 data_sz = btf_ext->data_size;
+ void *data;
+
+ /* Return native data (always present) or swapped data if present */
+ if (!swap_endian)
+ return btf_ext->data;
+ else if (btf_ext->data_swapped)
+ return btf_ext->data_swapped;
+
+ /* Recreate missing swapped data, then cache and return */
+ data = calloc(1, data_sz);
+ if (!data)
+ return NULL;
+ memcpy(data, btf_ext->data, data_sz);
+
+ btf_ext_bswap_info(btf_ext, data);
+ btf_ext_bswap_hdr(data);
+ btf_ext->data_swapped = data;
+ return data;
+}
+
+const void *btf_ext__raw_data(const struct btf_ext *btf_ext, __u32 *size)
+{
+ void *data;
+
+ data = btf_ext_raw_data(btf_ext, btf_ext->swapped_endian);
+ if (!data)
+ return errno = ENOMEM, NULL;
+
*size = btf_ext->data_size;
- return btf_ext->data;
+ return data;
+}
+
+__attribute__((alias("btf_ext__raw_data")))
+const void *btf_ext__get_raw_data(const struct btf_ext *btf_ext, __u32 *size);
+
+enum btf_endianness btf_ext__endianness(const struct btf_ext *btf_ext)
+{
+ if (is_host_big_endian())
+ return btf_ext->swapped_endian ? BTF_LITTLE_ENDIAN : BTF_BIG_ENDIAN;
+ else
+ return btf_ext->swapped_endian ? BTF_BIG_ENDIAN : BTF_LITTLE_ENDIAN;
+}
+
+int btf_ext__set_endianness(struct btf_ext *btf_ext, enum btf_endianness endian)
+{
+ if (endian != BTF_LITTLE_ENDIAN && endian != BTF_BIG_ENDIAN)
+ return libbpf_err(-EINVAL);
+
+ btf_ext->swapped_endian = is_host_big_endian() != (endian == BTF_BIG_ENDIAN);
+
+ if (!btf_ext->swapped_endian) {
+ free(btf_ext->data_swapped);
+ btf_ext->data_swapped = NULL;
+ }
+ return 0;
}
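
For illustration, a sketch (sec_data/sec_size are assumed to hold a .BTF.ext section) of cross-endian handling with the new accessors: parse the section, flip its endianness, and fetch raw bytes suitable for a big-endian target:

	struct btf_ext *ext;
	const void *raw;
	__u32 raw_sz;

	ext = btf_ext__new(sec_data, sec_size);
	if (!ext)
		return -errno;

	if (btf_ext__endianness(ext) != BTF_BIG_ENDIAN)
		btf_ext__set_endianness(ext, BTF_BIG_ENDIAN);

	raw = btf_ext__raw_data(ext, &raw_sz);	/* big-endian bytes */
	if (!raw) {
		btf_ext__free(ext);
		return -ENOMEM;
	}
	/* ... write out raw/raw_sz ... */
	btf_ext__free(ext);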
struct btf_dedup;
@@ -3206,7 +3561,7 @@ int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts)
d = btf_dedup_new(btf, opts);
if (IS_ERR(d)) {
- pr_debug("btf_dedup_new failed: %ld", PTR_ERR(d));
+ pr_debug("btf_dedup_new failed: %ld\n", PTR_ERR(d));
return libbpf_err(-EINVAL);
}
@@ -3217,42 +3572,42 @@ int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts)
err = btf_dedup_prep(d);
if (err) {
- pr_debug("btf_dedup_prep failed:%d\n", err);
+ pr_debug("btf_dedup_prep failed: %s\n", errstr(err));
goto done;
}
err = btf_dedup_strings(d);
if (err < 0) {
- pr_debug("btf_dedup_strings failed:%d\n", err);
+ pr_debug("btf_dedup_strings failed: %s\n", errstr(err));
goto done;
}
err = btf_dedup_prim_types(d);
if (err < 0) {
- pr_debug("btf_dedup_prim_types failed:%d\n", err);
+ pr_debug("btf_dedup_prim_types failed: %s\n", errstr(err));
goto done;
}
err = btf_dedup_struct_types(d);
if (err < 0) {
- pr_debug("btf_dedup_struct_types failed:%d\n", err);
+ pr_debug("btf_dedup_struct_types failed: %s\n", errstr(err));
goto done;
}
err = btf_dedup_resolve_fwds(d);
if (err < 0) {
- pr_debug("btf_dedup_resolve_fwds failed:%d\n", err);
+ pr_debug("btf_dedup_resolve_fwds failed: %s\n", errstr(err));
goto done;
}
err = btf_dedup_ref_types(d);
if (err < 0) {
- pr_debug("btf_dedup_ref_types failed:%d\n", err);
+ pr_debug("btf_dedup_ref_types failed: %s\n", errstr(err));
goto done;
}
err = btf_dedup_compact_types(d);
if (err < 0) {
- pr_debug("btf_dedup_compact_types failed:%d\n", err);
+ pr_debug("btf_dedup_compact_types failed: %s\n", errstr(err));
goto done;
}
err = btf_dedup_remap_types(d);
if (err < 0) {
- pr_debug("btf_dedup_remap_types failed:%d\n", err);
+ pr_debug("btf_dedup_remap_types failed: %s\n", errstr(err));
goto done;
}
@@ -3300,7 +3655,7 @@ struct btf_dedup {
struct strset *strs_set;
};
-static long hash_combine(long h, long value)
+static unsigned long hash_combine(unsigned long h, unsigned long value)
{
return h * 31 + value;
}
@@ -3438,11 +3793,19 @@ static int btf_for_each_str_off(struct btf_dedup *d, str_off_visit_fn fn, void *
int i, r;
for (i = 0; i < d->btf->nr_types; i++) {
+ struct btf_field_iter it;
struct btf_type *t = btf_type_by_id(d->btf, d->btf->start_id + i);
+ __u32 *str_off;
- r = btf_type_visit_str_offs(t, fn, ctx);
+ r = btf_field_iter_init(&it, t, BTF_FIELD_ITER_STRS);
if (r)
return r;
+
+ while ((str_off = btf_field_iter_next(&it))) {
+ r = fn(str_off, ctx);
+ if (r)
+ return r;
+ }
}
if (!d->btf_ext)
@@ -4042,46 +4405,109 @@ static inline __u16 btf_fwd_kind(struct btf_type *t)
return btf_kflag(t) ? BTF_KIND_UNION : BTF_KIND_STRUCT;
}
-/* Check if given two types are identical ARRAY definitions */
-static bool btf_dedup_identical_arrays(struct btf_dedup *d, __u32 id1, __u32 id2)
+static bool btf_dedup_identical_types(struct btf_dedup *d, __u32 id1, __u32 id2, int depth)
{
struct btf_type *t1, *t2;
+ int k1, k2;
+recur:
+ if (depth <= 0)
+ return false;
t1 = btf_type_by_id(d->btf, id1);
t2 = btf_type_by_id(d->btf, id2);
- if (!btf_is_array(t1) || !btf_is_array(t2))
+
+ k1 = btf_kind(t1);
+ k2 = btf_kind(t2);
+ if (k1 != k2)
return false;
- return btf_equal_array(t1, t2);
-}
+ switch (k1) {
+ case BTF_KIND_UNKN: /* VOID */
+ return true;
+ case BTF_KIND_INT:
+ return btf_equal_int_tag(t1, t2);
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ return btf_compat_enum(t1, t2);
+ case BTF_KIND_FWD:
+ case BTF_KIND_FLOAT:
+ return btf_equal_common(t1, t2);
+ case BTF_KIND_CONST:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_PTR:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_FUNC:
+ case BTF_KIND_TYPE_TAG:
+ if (t1->info != t2->info || t1->name_off != t2->name_off)
+ return false;
+ id1 = t1->type;
+ id2 = t2->type;
+ goto recur;
+ case BTF_KIND_ARRAY: {
+ struct btf_array *a1, *a2;
-/* Check if given two types are identical STRUCT/UNION definitions */
-static bool btf_dedup_identical_structs(struct btf_dedup *d, __u32 id1, __u32 id2)
-{
- const struct btf_member *m1, *m2;
- struct btf_type *t1, *t2;
- int n, i;
+ if (!btf_compat_array(t1, t2))
+ return false;
- t1 = btf_type_by_id(d->btf, id1);
- t2 = btf_type_by_id(d->btf, id2);
+ a1 = btf_array(t1);
+		a2 = btf_array(t2);
- if (!btf_is_composite(t1) || btf_kind(t1) != btf_kind(t2))
- return false;
+ if (a1->index_type != a2->index_type &&
+ !btf_dedup_identical_types(d, a1->index_type, a2->index_type, depth - 1))
+ return false;
- if (!btf_shallow_equal_struct(t1, t2))
- return false;
+ if (a1->type != a2->type &&
+ !btf_dedup_identical_types(d, a1->type, a2->type, depth - 1))
+ return false;
- m1 = btf_members(t1);
- m2 = btf_members(t2);
- for (i = 0, n = btf_vlen(t1); i < n; i++, m1++, m2++) {
- if (m1->type != m2->type &&
- !btf_dedup_identical_arrays(d, m1->type, m2->type) &&
- !btf_dedup_identical_structs(d, m1->type, m2->type))
+ return true;
+ }
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION: {
+ const struct btf_member *m1, *m2;
+ int i, n;
+
+ if (!btf_shallow_equal_struct(t1, t2))
+ return false;
+
+ m1 = btf_members(t1);
+ m2 = btf_members(t2);
+ for (i = 0, n = btf_vlen(t1); i < n; i++, m1++, m2++) {
+ if (m1->type == m2->type)
+ continue;
+ if (!btf_dedup_identical_types(d, m1->type, m2->type, depth - 1))
+ return false;
+ }
+ return true;
+ }
+ case BTF_KIND_FUNC_PROTO: {
+ const struct btf_param *p1, *p2;
+ int i, n;
+
+ if (!btf_compat_fnproto(t1, t2))
return false;
+
+ if (t1->type != t2->type &&
+ !btf_dedup_identical_types(d, t1->type, t2->type, depth - 1))
+ return false;
+
+ p1 = btf_params(t1);
+ p2 = btf_params(t2);
+ for (i = 0, n = btf_vlen(t1); i < n; i++, p1++, p2++) {
+ if (p1->type == p2->type)
+ continue;
+ if (!btf_dedup_identical_types(d, p1->type, p2->type, depth - 1))
+ return false;
+ }
+ return true;
+ }
+ default:
+ return false;
}
- return true;
}
+
/*
* Check equivalence of BTF type graph formed by candidate struct/union (we'll
* call it "candidate graph" in this description for brevity) to a type graph
@@ -4099,7 +4525,7 @@ static bool btf_dedup_identical_structs(struct btf_dedup *d, __u32 id1, __u32 id
* and canonical graphs are not compatible structurally, whole graphs are
* incompatible. If types are structurally equivalent (i.e., all information
* except referenced type IDs is exactly the same), a mapping from `canon_id` to
 * a `cand_id` is recorded in hypothetical mapping (`btf_dedup->hypot_map`).
+ * a `cand_id` is recoded in hypothetical mapping (`btf_dedup->hypot_map`).
* If a type references other types, then those referenced types are checked
* for equivalence recursively.
*
@@ -4137,7 +4563,7 @@ static bool btf_dedup_identical_structs(struct btf_dedup *d, __u32 id1, __u32 id
* consists of portions of the graph that come from multiple compilation units.
* This is due to the fact that types within single compilation unit are always
* deduplicated and FWDs are already resolved, if referenced struct/union
- * definiton is available. So, if we had unresolved FWD and found corresponding
+ * definition is available. So, if we had unresolved FWD and found corresponding
* STRUCT/UNION, they will be from different compilation units. This
* consequently means that when we "link" FWD to corresponding STRUCT/UNION,
* type graph will likely have at least two different BTF types that describe
@@ -4200,19 +4626,13 @@ static int btf_dedup_is_equiv(struct btf_dedup *d, __u32 cand_id,
* different fields within the *same* struct. This breaks type
* equivalence check, which makes an assumption that candidate
* types sub-graph has a consistent and deduped-by-compiler
- * types within a single CU. So work around that by explicitly
- * allowing identical array types here.
- */
- if (btf_dedup_identical_arrays(d, hypot_type_id, cand_id))
- return 1;
- /* It turns out that similar situation can happen with
- * struct/union sometimes, sigh... Handle the case where
- * structs/unions are exactly the same, down to the referenced
- * type IDs. Anything more complicated (e.g., if referenced
- * types are different, but equivalent) is *way more*
- * complicated and requires a many-to-many equivalence mapping.
+	 * types within a single CU. And a similar situation can happen
+	 * with struct/union sometimes, and even with pointers.
+	 * So accommodate cases like this by doing a structural
+ * comparison recursively, but avoiding being stuck in endless
+ * loops by limiting the depth up to which we check.
*/
- if (btf_dedup_identical_structs(d, hypot_type_id, cand_id))
+ if (btf_dedup_identical_types(d, hypot_type_id, cand_id, 16))
return 1;
return 0;
}
@@ -4904,10 +5324,23 @@ static int btf_dedup_remap_types(struct btf_dedup *d)
for (i = 0; i < d->btf->nr_types; i++) {
struct btf_type *t = btf_type_by_id(d->btf, d->btf->start_id + i);
+ struct btf_field_iter it;
+ __u32 *type_id;
- r = btf_type_visit_type_ids(t, btf_dedup_remap_type_id, d);
+ r = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
if (r)
return r;
+
+ while ((type_id = btf_field_iter_next(&it))) {
+ __u32 resolved_id, new_id;
+
+ resolved_id = resolve_type_id(d, *type_id);
+ new_id = d->hypot_map[resolved_id];
+ if (new_id > BTF_MAX_NR_TYPES)
+ return -EINVAL;
+
+ *type_id = new_id;
+ }
}
if (!d->btf_ext)
@@ -4926,10 +5359,9 @@ static int btf_dedup_remap_types(struct btf_dedup *d)
*/
struct btf *btf__load_vmlinux_btf(void)
{
+ const char *sysfs_btf_path = "/sys/kernel/btf/vmlinux";
+ /* fall back locations, trying to find vmlinux on disk */
const char *locations[] = {
- /* try canonical vmlinux BTF through sysfs first */
- "/sys/kernel/btf/vmlinux",
- /* fall back to trying to find vmlinux on disk otherwise */
"/boot/vmlinux-%1$s",
"/lib/modules/%1$s/vmlinux-%1$s",
"/lib/modules/%1$s/build/vmlinux",
@@ -4943,8 +5375,27 @@ struct btf *btf__load_vmlinux_btf(void)
struct btf *btf;
int i, err;
- uname(&buf);
+ /* is canonical sysfs location accessible? */
+ if (faccessat(AT_FDCWD, sysfs_btf_path, F_OK, AT_EACCESS) < 0) {
+ pr_warn("kernel BTF is missing at '%s', was CONFIG_DEBUG_INFO_BTF enabled?\n",
+ sysfs_btf_path);
+ } else {
+ btf = btf_parse_raw_mmap(sysfs_btf_path, NULL);
+ if (IS_ERR(btf))
+ btf = btf__parse(sysfs_btf_path, NULL);
+
+ if (!btf) {
+ err = -errno;
+ pr_warn("failed to read kernel BTF from '%s': %s\n",
+ sysfs_btf_path, errstr(err));
+ return libbpf_err_ptr(err);
+ }
+ pr_debug("loaded kernel BTF from '%s'\n", sysfs_btf_path);
+ return btf;
+ }
+ /* try fallback locations */
+ uname(&buf);
for (i = 0; i < ARRAY_SIZE(locations); i++) {
snprintf(path, PATH_MAX, locations[i], buf.release);
@@ -4953,7 +5404,7 @@ struct btf *btf__load_vmlinux_btf(void)
btf = btf__parse(path, NULL);
err = libbpf_get_error(btf);
- pr_debug("loading kernel BTF '%s': %d\n", path, err);
+ pr_debug("loading kernel BTF '%s': %s\n", path, errstr(err));
if (err)
continue;
@@ -4974,136 +5425,6 @@ struct btf *btf__load_module_btf(const char *module_name, struct btf *vmlinux_bt
return btf__parse_split(path, vmlinux_btf);
}
-int btf_type_visit_type_ids(struct btf_type *t, type_id_visit_fn visit, void *ctx)
-{
- int i, n, err;
-
- switch (btf_kind(t)) {
- case BTF_KIND_INT:
- case BTF_KIND_FLOAT:
- case BTF_KIND_ENUM:
- case BTF_KIND_ENUM64:
- return 0;
-
- case BTF_KIND_FWD:
- case BTF_KIND_CONST:
- case BTF_KIND_VOLATILE:
- case BTF_KIND_RESTRICT:
- case BTF_KIND_PTR:
- case BTF_KIND_TYPEDEF:
- case BTF_KIND_FUNC:
- case BTF_KIND_VAR:
- case BTF_KIND_DECL_TAG:
- case BTF_KIND_TYPE_TAG:
- return visit(&t->type, ctx);
-
- case BTF_KIND_ARRAY: {
- struct btf_array *a = btf_array(t);
-
- err = visit(&a->type, ctx);
- err = err ?: visit(&a->index_type, ctx);
- return err;
- }
-
- case BTF_KIND_STRUCT:
- case BTF_KIND_UNION: {
- struct btf_member *m = btf_members(t);
-
- for (i = 0, n = btf_vlen(t); i < n; i++, m++) {
- err = visit(&m->type, ctx);
- if (err)
- return err;
- }
- return 0;
- }
-
- case BTF_KIND_FUNC_PROTO: {
- struct btf_param *m = btf_params(t);
-
- err = visit(&t->type, ctx);
- if (err)
- return err;
- for (i = 0, n = btf_vlen(t); i < n; i++, m++) {
- err = visit(&m->type, ctx);
- if (err)
- return err;
- }
- return 0;
- }
-
- case BTF_KIND_DATASEC: {
- struct btf_var_secinfo *m = btf_var_secinfos(t);
-
- for (i = 0, n = btf_vlen(t); i < n; i++, m++) {
- err = visit(&m->type, ctx);
- if (err)
- return err;
- }
- return 0;
- }
-
- default:
- return -EINVAL;
- }
-}
-
-int btf_type_visit_str_offs(struct btf_type *t, str_off_visit_fn visit, void *ctx)
-{
- int i, n, err;
-
- err = visit(&t->name_off, ctx);
- if (err)
- return err;
-
- switch (btf_kind(t)) {
- case BTF_KIND_STRUCT:
- case BTF_KIND_UNION: {
- struct btf_member *m = btf_members(t);
-
- for (i = 0, n = btf_vlen(t); i < n; i++, m++) {
- err = visit(&m->name_off, ctx);
- if (err)
- return err;
- }
- break;
- }
- case BTF_KIND_ENUM: {
- struct btf_enum *m = btf_enum(t);
-
- for (i = 0, n = btf_vlen(t); i < n; i++, m++) {
- err = visit(&m->name_off, ctx);
- if (err)
- return err;
- }
- break;
- }
- case BTF_KIND_ENUM64: {
- struct btf_enum64 *m = btf_enum64(t);
-
- for (i = 0, n = btf_vlen(t); i < n; i++, m++) {
- err = visit(&m->name_off, ctx);
- if (err)
- return err;
- }
- break;
- }
- case BTF_KIND_FUNC_PROTO: {
- struct btf_param *m = btf_params(t);
-
- for (i = 0, n = btf_vlen(t); i < n; i++, m++) {
- err = visit(&m->name_off, ctx);
- if (err)
- return err;
- }
- break;
- }
- default:
- break;
- }
-
- return 0;
-}
-
int btf_ext_visit_type_ids(struct btf_ext *btf_ext, type_id_visit_fn visit, void *ctx)
{
const struct btf_ext_info *seg;
@@ -5183,3 +5504,328 @@ int btf_ext_visit_str_offs(struct btf_ext *btf_ext, str_off_visit_fn visit, void
return 0;
}
+
+struct btf_distill {
+ struct btf_pipe pipe;
+ int *id_map;
+ unsigned int split_start_id;
+ unsigned int split_start_str;
+ int diff_id;
+};
+
+static int btf_add_distilled_type_ids(struct btf_distill *dist, __u32 i)
+{
+ struct btf_type *split_t = btf_type_by_id(dist->pipe.src, i);
+ struct btf_field_iter it;
+ __u32 *id;
+ int err;
+
+ err = btf_field_iter_init(&it, split_t, BTF_FIELD_ITER_IDS);
+ if (err)
+ return err;
+ while ((id = btf_field_iter_next(&it))) {
+ struct btf_type *base_t;
+
+ if (!*id)
+ continue;
+ /* split BTF id, not needed */
+ if (*id >= dist->split_start_id)
+ continue;
+ /* already added ? */
+ if (dist->id_map[*id] > 0)
+ continue;
+
+ /* only a subset of base BTF types should be referenced from
+ * split BTF; ensure nothing unexpected is referenced.
+ */
+ base_t = btf_type_by_id(dist->pipe.src, *id);
+ switch (btf_kind(base_t)) {
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_FWD:
+ case BTF_KIND_ARRAY:
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ case BTF_KIND_PTR:
+ case BTF_KIND_CONST:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_FUNC_PROTO:
+ case BTF_KIND_TYPE_TAG:
+ dist->id_map[*id] = *id;
+ break;
+ default:
+ pr_warn("unexpected reference to base type[%u] of kind [%u] when creating distilled base BTF.\n",
+ *id, btf_kind(base_t));
+ return -EINVAL;
+ }
+ /* If a base type is used, ensure types it refers to are
+ * marked as used also; so for example if we find a PTR to INT
+ * we need both the PTR and INT.
+ *
+ * The only exception is named struct/unions, since distilled
+ * base BTF composite types have no members.
+ */
+ if (btf_is_composite(base_t) && base_t->name_off)
+ continue;
+ err = btf_add_distilled_type_ids(dist, *id);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+static int btf_add_distilled_types(struct btf_distill *dist)
+{
+ bool adding_to_base = dist->pipe.dst->start_id == 1;
+ int id = btf__type_cnt(dist->pipe.dst);
+ struct btf_type *t;
+ int i, err = 0;
+
+
+ /* Add types for each of the required references to either distilled
+ * base or split BTF, depending on type characteristics.
+ */
+ for (i = 1; i < dist->split_start_id; i++) {
+ const char *name;
+ int kind;
+
+ if (!dist->id_map[i])
+ continue;
+ t = btf_type_by_id(dist->pipe.src, i);
+ kind = btf_kind(t);
+ name = btf__name_by_offset(dist->pipe.src, t->name_off);
+
+ switch (kind) {
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_FWD:
+ /* Named int, float, fwd are added to base. */
+ if (!adding_to_base)
+ continue;
+ err = btf_add_type(&dist->pipe, t);
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ /* Named struct/union are added to base as 0-vlen
+ * struct/union of same size. Anonymous struct/unions
+ * are added to split BTF as-is.
+ */
+ if (adding_to_base) {
+ if (!t->name_off)
+ continue;
+ err = btf_add_composite(dist->pipe.dst, kind, name, t->size);
+ } else {
+ if (t->name_off)
+ continue;
+ err = btf_add_type(&dist->pipe, t);
+ }
+ break;
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ /* Named enum[64]s are added to base as a sized
+ * enum; relocation will match with appropriately-named
+ * and sized enum or enum64.
+ *
+ * Anonymous enums are added to split BTF as-is.
+ */
+ if (adding_to_base) {
+ if (!t->name_off)
+ continue;
+ err = btf__add_enum(dist->pipe.dst, name, t->size);
+ } else {
+ if (t->name_off)
+ continue;
+ err = btf_add_type(&dist->pipe, t);
+ }
+ break;
+ case BTF_KIND_ARRAY:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_PTR:
+ case BTF_KIND_CONST:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_FUNC_PROTO:
+ case BTF_KIND_TYPE_TAG:
+ /* All other types are added to split BTF. */
+ if (adding_to_base)
+ continue;
+ err = btf_add_type(&dist->pipe, t);
+ break;
+ default:
+ pr_warn("unexpected kind when adding base type '%s'[%u] of kind [%u] to distilled base BTF.\n",
+ name, i, kind);
+ return -EINVAL;
+
+ }
+ if (err < 0)
+ break;
+ dist->id_map[i] = id++;
+ }
+ return err;
+}
+
+/* Split BTF ids without a mapping will be shifted downwards since distilled
+ * base BTF is smaller than the original base BTF. For those that have a
+ * mapping (either to base or updated split BTF), update the id based on
+ * that mapping.
+ */
+static int btf_update_distilled_type_ids(struct btf_distill *dist, __u32 i)
+{
+ struct btf_type *t = btf_type_by_id(dist->pipe.dst, i);
+ struct btf_field_iter it;
+ __u32 *id;
+ int err;
+
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
+ if (err)
+ return err;
+ while ((id = btf_field_iter_next(&it))) {
+ if (dist->id_map[*id])
+ *id = dist->id_map[*id];
+ else if (*id >= dist->split_start_id)
+ *id -= dist->diff_id;
+ }
+ return 0;
+}
+
+/* Create updated split BTF with distilled base BTF; distilled base BTF
+ * consists of BTF information required to clarify the types that split
+ * BTF refers to, omitting unneeded details. Specifically it will contain
+ * base types and memberless definitions of named structs, unions and enumerated
+ * types. Associated reference types like pointers, arrays and anonymous
+ * structs, unions and enumerated types will be added to split BTF.
+ * Size is recorded for named struct/unions to help guide matching to the
+ * target base BTF during later relocation.
+ *
+ * The only case where structs, unions or enumerated types are fully represented
+ * is when they are anonymous; in such cases, the anonymous type is added to
+ * split BTF in full.
+ *
+ * We return newly-created split BTF where the split BTF refers to a newly-created
+ * distilled base BTF. Both must be freed separately by the caller.
+ */
+int btf__distill_base(const struct btf *src_btf, struct btf **new_base_btf,
+ struct btf **new_split_btf)
+{
+ struct btf *new_base = NULL, *new_split = NULL;
+ const struct btf *old_base;
+ unsigned int n = btf__type_cnt(src_btf);
+ struct btf_distill dist = {};
+ struct btf_type *t;
+ int i, err = 0;
+
+ /* src BTF must be split BTF. */
+ old_base = btf__base_btf(src_btf);
+ if (!new_base_btf || !new_split_btf || !old_base)
+ return libbpf_err(-EINVAL);
+
+ new_base = btf__new_empty();
+ if (!new_base)
+ return libbpf_err(-ENOMEM);
+
+ btf__set_endianness(new_base, btf__endianness(src_btf));
+
+ dist.id_map = calloc(n, sizeof(*dist.id_map));
+ if (!dist.id_map) {
+ err = -ENOMEM;
+ goto done;
+ }
+ dist.pipe.src = src_btf;
+ dist.pipe.dst = new_base;
+ dist.pipe.str_off_map = hashmap__new(btf_dedup_identity_hash_fn, btf_dedup_equal_fn, NULL);
+ if (IS_ERR(dist.pipe.str_off_map)) {
+ err = -ENOMEM;
+ goto done;
+ }
+ dist.split_start_id = btf__type_cnt(old_base);
+ dist.split_start_str = old_base->hdr->str_len;
+
+ /* Pass over src split BTF; generate the list of base BTF type ids it
+ * references; these will constitute our distilled BTF set to be
+ * distributed over base and split BTF as appropriate.
+ */
+ for (i = src_btf->start_id; i < n; i++) {
+ err = btf_add_distilled_type_ids(&dist, i);
+ if (err < 0)
+ goto done;
+ }
+ /* Next add types for each of the required references to base BTF and split BTF
+ * in turn.
+ */
+ err = btf_add_distilled_types(&dist);
+ if (err < 0)
+ goto done;
+
+ /* Create new split BTF with distilled base BTF as its base; the final
+ * state is split BTF with distilled base BTF that represents enough
+ * about its base references to allow it to be relocated with the base
+ * BTF available.
+ */
+ new_split = btf__new_empty_split(new_base);
+ if (!new_split) {
+ err = -errno;
+ goto done;
+ }
+ dist.pipe.dst = new_split;
+ /* First add all split types */
+ for (i = src_btf->start_id; i < n; i++) {
+ t = btf_type_by_id(src_btf, i);
+ err = btf_add_type(&dist.pipe, t);
+ if (err < 0)
+ goto done;
+ }
+ /* Now add distilled types to split BTF that are not added to base. */
+ err = btf_add_distilled_types(&dist);
+ if (err < 0)
+ goto done;
+
+	/* All split BTF ids will be shifted downwards since there are fewer base
+ * BTF ids in distilled base BTF.
+ */
+ dist.diff_id = dist.split_start_id - btf__type_cnt(new_base);
+
+ n = btf__type_cnt(new_split);
+ /* Now update base/split BTF ids. */
+ for (i = 1; i < n; i++) {
+ err = btf_update_distilled_type_ids(&dist, i);
+ if (err < 0)
+ break;
+ }
+done:
+ free(dist.id_map);
+ hashmap__free(dist.pipe.str_off_map);
+ if (err) {
+ btf__free(new_split);
+ btf__free(new_base);
+ return libbpf_err(err);
+ }
+ *new_base_btf = new_base;
+ *new_split_btf = new_split;
+
+ return 0;
+}
+
+const struct btf_header *btf_header(const struct btf *btf)
+{
+ return btf->hdr;
+}
+
+void btf_set_base_btf(struct btf *btf, const struct btf *base_btf)
+{
+ btf->base_btf = (struct btf *)base_btf;
+ btf->start_id = btf__type_cnt(base_btf);
+ btf->start_str_off = base_btf->hdr->str_len;
+}
+
+int btf__relocate(struct btf *btf, const struct btf *base_btf)
+{
+ int err = btf_relocate(btf, base_btf, NULL);
+
+ if (!err)
+ btf->owns_base = false;
+ return libbpf_err(err);
+}
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index 8e6880d91c84..ccfd905f03df 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -18,6 +18,7 @@ extern "C" {
#define BTF_ELF_SEC ".BTF"
#define BTF_EXT_ELF_SEC ".BTF.ext"
+#define BTF_BASE_ELF_SEC ".BTF.base"
#define MAPS_ELF_SEC ".maps"
struct btf;
@@ -107,6 +108,27 @@ LIBBPF_API struct btf *btf__new_empty(void);
*/
LIBBPF_API struct btf *btf__new_empty_split(struct btf *base_btf);
+/**
+ * @brief **btf__distill_base()** creates new versions of the split BTF
+ * *src_btf* and its base BTF. The new base BTF will only contain the types
+ * needed to improve robustness of the split BTF to small changes in base BTF.
+ * When that split BTF is loaded against a (possibly changed) base, this
+ * distilled base BTF will help update references to that (possibly changed)
+ * base BTF.
+ *
+ * Both the new split and its associated new base BTF must be freed by
+ * the caller.
+ *
+ * If successful, 0 is returned and **new_base_btf** and **new_split_btf**
+ * will point at new base/split BTF. Both the new split and its associated
+ * new base BTF must be freed by the caller.
+ *
+ * A negative value is returned on error and the thread-local `errno` variable
+ * is set to the error code as well.
+ */
+LIBBPF_API int btf__distill_base(const struct btf *src_btf, struct btf **new_base_btf,
+ struct btf **new_split_btf);
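
For illustration, a minimal sketch of the producer side, assuming module_split was parsed with btf__parse_split() against full vmlinux BTF; both outputs must be freed, the new split before its new base:

	struct btf *dist_base = NULL, *dist_split = NULL;
	int err;

	err = btf__distill_base(module_split, &dist_base, &dist_split);
	if (err)
		return err;

	/* ... emit dist_split (with dist_base as its .BTF.base) to disk ... */

	btf__free(dist_split);
	btf__free(dist_base);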
+
LIBBPF_API struct btf *btf__parse(const char *path, struct btf_ext **btf_ext);
LIBBPF_API struct btf *btf__parse_split(const char *path, struct btf *base_btf);
LIBBPF_API struct btf *btf__parse_elf(const char *path, struct btf_ext **btf_ext);
@@ -145,6 +167,9 @@ LIBBPF_API const char *btf__str_by_offset(const struct btf *btf, __u32 offset);
LIBBPF_API struct btf_ext *btf_ext__new(const __u8 *data, __u32 size);
LIBBPF_API void btf_ext__free(struct btf_ext *btf_ext);
LIBBPF_API const void *btf_ext__raw_data(const struct btf_ext *btf_ext, __u32 *size);
+LIBBPF_API enum btf_endianness btf_ext__endianness(const struct btf_ext *btf_ext);
+LIBBPF_API int btf_ext__set_endianness(struct btf_ext *btf_ext,
+ enum btf_endianness endian);
LIBBPF_API int btf__find_str(struct btf *btf, const char *s);
LIBBPF_API int btf__add_str(struct btf *btf, const char *s);
@@ -202,6 +227,7 @@ LIBBPF_API int btf__add_volatile(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_const(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_restrict(struct btf *btf, int ref_type_id);
LIBBPF_API int btf__add_type_tag(struct btf *btf, const char *value, int ref_type_id);
+LIBBPF_API int btf__add_type_attr(struct btf *btf, const char *value, int ref_type_id);
/* func and func_proto construction APIs */
LIBBPF_API int btf__add_func(struct btf *btf, const char *name,
@@ -218,6 +244,8 @@ LIBBPF_API int btf__add_datasec_var_info(struct btf *btf, int var_type_id,
/* tag construction API */
LIBBPF_API int btf__add_decl_tag(struct btf *btf, const char *value, int ref_type_id,
int component_idx);
+LIBBPF_API int btf__add_decl_attr(struct btf *btf, const char *value, int ref_type_id,
+ int component_idx);
struct btf_dedup_opts {
size_t sz;
@@ -231,6 +259,20 @@ struct btf_dedup_opts {
LIBBPF_API int btf__dedup(struct btf *btf, const struct btf_dedup_opts *opts);
+/**
+ * @brief **btf__relocate()** will check the split BTF *btf* for references
+ * to base BTF kinds, and verify those references are compatible with
+ * *base_btf*; if they are, *btf* is adjusted such that it is re-parented to
+ * *base_btf* and type ids and strings are adjusted to accommodate this.
+ *
+ * If successful, 0 is returned and **btf** now has **base_btf** as its
+ * base.
+ *
+ * A negative value is returned on error and the thread-local `errno` variable
+ * is set to the error code as well.
+ */
+LIBBPF_API int btf__relocate(struct btf *btf, const struct btf *base_btf);
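
For illustration, a sketch of the consumer side: split BTF parsed against its distilled base is re-parented onto the running kernel's (possibly newer) BTF before use; the "mod.btf" path and distilled_base object are assumptions, and error handling is simplified:

	struct btf *vmlinux_btf, *mod_btf;
	int err;

	vmlinux_btf = btf__load_vmlinux_btf();
	mod_btf = btf__parse_split("mod.btf", distilled_base);
	if (!vmlinux_btf || !mod_btf)
		return -errno;

	err = btf__relocate(mod_btf, vmlinux_btf);
	if (err)
		return err;
	/* mod_btf now has vmlinux_btf as its base */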
+
struct btf_dump;
struct btf_dump_opts {
@@ -250,7 +292,7 @@ LIBBPF_API void btf_dump__free(struct btf_dump *d);
LIBBPF_API int btf_dump__dump_type(struct btf_dump *d, __u32 id);
struct btf_dump_emit_type_decl_opts {
- /* size of this struct, for forward/backward compatiblity */
+ /* size of this struct, for forward/backward compatibility */
size_t sz;
/* optional field name for type declaration, e.g.:
* - struct my_struct <FNAME>
@@ -284,9 +326,10 @@ struct btf_dump_type_data_opts {
bool compact; /* no newlines/indentation */
bool skip_names; /* skip member/type names */
bool emit_zeroes; /* show 0-valued fields */
+ bool emit_strings; /* print char arrays as strings */
size_t :0;
};
-#define btf_dump_type_data_opts__last_field emit_zeroes
+#define btf_dump_type_data_opts__last_field emit_strings
LIBBPF_API int
btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
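The new emit_strings option above makes btf_dump__dump_type_data() print char arrays as quoted strings rather than element by element. A rough sketch of enabling it; the vprintf callback and the way the type id is obtained are placeholders:

#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <bpf/btf.h>

static void dump_printf(void *ctx, const char *fmt, va_list args)
{
	vfprintf(stdout, fmt, args);
}

int dump_char_array(const struct btf *btf, __u32 char_array_type_id,
		    const void *data, size_t data_sz)
{
	LIBBPF_OPTS(btf_dump_type_data_opts, opts,
		.compact = true,
		.emit_strings = true,	/* print char arrays as "..." */
	);
	struct btf_dump *d;
	int err;

	d = btf_dump__new(btf, dump_printf, NULL, NULL);
	if (!d)
		return -errno;
	err = btf_dump__dump_type_data(d, char_array_type_id, data, data_sz, &opts);
	btf_dump__free(d);
	return err;
}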
diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
index 4d9f30bf7f01..6388392f49a0 100644
--- a/tools/lib/bpf/btf_dump.c
+++ b/tools/lib/bpf/btf_dump.c
@@ -67,6 +67,7 @@ struct btf_dump_data {
bool compact;
bool skip_names;
bool emit_zeroes;
+ bool emit_strings;
__u8 indent_lvl; /* base indent level */
char indent_str[BTF_DATA_INDENT_STR_LEN];
/* below are used during iteration */
@@ -225,6 +226,9 @@ static void btf_dump_free_names(struct hashmap *map)
size_t bkt;
struct hashmap_entry *cur;
+ if (!map)
+ return;
+
hashmap__for_each_entry(map, cur, bkt)
free((void *)cur->pkey);
@@ -304,7 +308,7 @@ int btf_dump__dump_type(struct btf_dump *d, __u32 id)
* definition, in which case they have to be declared inline as part of field
* type declaration; or as a top-level anonymous enum, typically used for
* declaring global constants. It's impossible to distinguish between two
- * without knowning whether given enum type was referenced from other type:
+ * without knowing whether given enum type was referenced from other type:
* top-level anonymous enum won't be referenced by anything, while embedded
* one will.
*/
@@ -867,8 +871,8 @@ static void btf_dump_emit_bit_padding(const struct btf_dump *d,
} pads[] = {
{"long", d->ptr_sz * 8}, {"int", 32}, {"short", 16}, {"char", 8}
};
- int new_off, pad_bits, bits, i;
- const char *pad_type;
+ int new_off = 0, pad_bits = 0, bits, i;
+ const char *pad_type = NULL;
if (cur_off >= next_off)
return; /* no gap */
@@ -1304,7 +1308,7 @@ static void btf_dump_emit_type_decl(struct btf_dump *d, __u32 id,
* chain, restore stack, emit warning, and try to
* proceed nevertheless
*/
- pr_warn("not enough memory for decl stack:%d", err);
+ pr_warn("not enough memory for decl stack: %s\n", errstr(err));
d->decl_stack_cnt = stack_start;
return;
}
@@ -1493,7 +1497,10 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
case BTF_KIND_TYPE_TAG:
btf_dump_emit_mods(d, decls);
name = btf_name_of(d, t->name_off);
- btf_dump_printf(d, " __attribute__((btf_type_tag(\"%s\")))", name);
+ if (btf_kflag(t))
+ btf_dump_printf(d, " __attribute__((%s))", name);
+ else
+ btf_dump_printf(d, " __attribute__((btf_type_tag(\"%s\")))", name);
break;
case BTF_KIND_ARRAY: {
const struct btf_array *a = btf_array(t);
@@ -1559,10 +1566,12 @@ static void btf_dump_emit_type_chain(struct btf_dump *d,
* Clang for BPF target generates func_proto with no
* args as a func_proto with a single void arg (e.g.,
* `int (*f)(void)` vs just `int (*f)()`). We are
- * going to pretend there are no args for such case.
+ * going to emit valid empty args (void) syntax for
+ * such case. Similarly and conveniently, valid
+ * no args case can be special-cased here as well.
*/
- if (vlen == 1 && p->type == 0) {
- btf_dump_printf(d, ")");
+ if (vlen == 0 || (vlen == 1 && p->type == 0)) {
+ btf_dump_printf(d, "void)");
return;
}
@@ -1929,6 +1938,7 @@ static int btf_dump_int_data(struct btf_dump *d,
if (d->typed_dump->is_array_terminated)
break;
if (*(char *)data == '\0') {
+ btf_dump_type_values(d, "'\\0'");
d->typed_dump->is_array_terminated = true;
break;
}
@@ -2021,6 +2031,52 @@ static int btf_dump_var_data(struct btf_dump *d,
return btf_dump_dump_type_data(d, NULL, t, type_id, data, 0, 0);
}
+static int btf_dump_string_data(struct btf_dump *d,
+ const struct btf_type *t,
+ __u32 id,
+ const void *data)
+{
+ const struct btf_array *array = btf_array(t);
+ const char *chars = data;
+ __u32 i;
+
+ /* Make sure it is a NUL-terminated string. */
+ for (i = 0; i < array->nelems; i++) {
+ if ((void *)(chars + i) >= d->typed_dump->data_end)
+ return -E2BIG;
+ if (chars[i] == '\0')
+ break;
+ }
+ if (i == array->nelems) {
+ /* The caller will print this as a regular array. */
+ return -EINVAL;
+ }
+
+ btf_dump_data_pfx(d);
+ btf_dump_printf(d, "\"");
+
+ for (i = 0; i < array->nelems; i++) {
+ char c = chars[i];
+
+ if (c == '\0') {
+ /*
+ * When printing character arrays as strings, NUL bytes
+ * are always treated as string terminators; they are
+ * never printed.
+ */
+ break;
+ }
+ if (isprint(c))
+ btf_dump_printf(d, "%c", c);
+ else
+ btf_dump_printf(d, "\\x%02x", (__u8)c);
+ }
+
+ btf_dump_printf(d, "\"");
+
+ return 0;
+}
+
static int btf_dump_array_data(struct btf_dump *d,
const struct btf_type *t,
__u32 id,
@@ -2031,6 +2087,7 @@ static int btf_dump_array_data(struct btf_dump *d,
__u32 i, elem_type_id;
__s64 elem_size;
bool is_array_member;
+ bool is_array_terminated;
elem_type_id = array->type;
elem_type = skip_mods_and_typedefs(d->btf, elem_type_id, NULL);
@@ -2047,8 +2104,13 @@ static int btf_dump_array_data(struct btf_dump *d,
* char arrays, so if size is 1 and element is
* printable as a char, we'll do that.
*/
- if (elem_size == 1)
+ if (elem_size == 1) {
+ if (d->typed_dump->emit_strings &&
+ btf_dump_string_data(d, t, id, data) == 0) {
+ return 0;
+ }
d->typed_dump->is_array_char = true;
+ }
}
/* note that we increment depth before calling btf_dump_print() below;
@@ -2066,12 +2128,15 @@ static int btf_dump_array_data(struct btf_dump *d,
*/
is_array_member = d->typed_dump->is_array_member;
d->typed_dump->is_array_member = true;
+ is_array_terminated = d->typed_dump->is_array_terminated;
+ d->typed_dump->is_array_terminated = false;
for (i = 0; i < array->nelems; i++, data += elem_size) {
if (d->typed_dump->is_array_terminated)
break;
btf_dump_dump_type_data(d, NULL, elem_type, elem_type_id, data, 0, 0);
}
d->typed_dump->is_array_member = is_array_member;
+ d->typed_dump->is_array_terminated = is_array_terminated;
d->typed_dump->depth--;
btf_dump_data_pfx(d);
btf_dump_type_values(d, "]");
@@ -2533,6 +2598,7 @@ int btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
d->typed_dump->compact = OPTS_GET(opts, compact, false);
d->typed_dump->skip_names = OPTS_GET(opts, skip_names, false);
d->typed_dump->emit_zeroes = OPTS_GET(opts, emit_zeroes, false);
+ d->typed_dump->emit_strings = OPTS_GET(opts, emit_strings, false);
ret = btf_dump_dump_type_data(d, NULL, t, id, data, 0, 0);
diff --git a/tools/lib/bpf/btf_iter.c b/tools/lib/bpf/btf_iter.c
new file mode 100644
index 000000000000..9a6c822c2294
--- /dev/null
+++ b/tools/lib/bpf/btf_iter.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+/* Copyright (c) 2021 Facebook */
+/* Copyright (c) 2024, Oracle and/or its affiliates. */
+
+#ifdef __KERNEL__
+#include <linux/bpf.h>
+#include <linux/btf.h>
+
+#define btf_var_secinfos(t) (struct btf_var_secinfo *)btf_type_var_secinfo(t)
+
+#else
+#include "btf.h"
+#include "libbpf_internal.h"
+#endif
+
+int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t,
+ enum btf_field_iter_kind iter_kind)
+{
+ it->p = NULL;
+ it->m_idx = -1;
+ it->off_idx = 0;
+ it->vlen = 0;
+
+ switch (iter_kind) {
+ case BTF_FIELD_ITER_IDS:
+ switch (btf_kind(t)) {
+ case BTF_KIND_UNKN:
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ it->desc = (struct btf_field_desc) {};
+ break;
+ case BTF_KIND_FWD:
+ case BTF_KIND_CONST:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_PTR:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_FUNC:
+ case BTF_KIND_VAR:
+ case BTF_KIND_DECL_TAG:
+ case BTF_KIND_TYPE_TAG:
+ it->desc = (struct btf_field_desc) { 1, {offsetof(struct btf_type, type)} };
+ break;
+ case BTF_KIND_ARRAY:
+ it->desc = (struct btf_field_desc) {
+ 2, {sizeof(struct btf_type) + offsetof(struct btf_array, type),
+ sizeof(struct btf_type) + offsetof(struct btf_array, index_type)}
+ };
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ it->desc = (struct btf_field_desc) {
+ 0, {},
+ sizeof(struct btf_member),
+ 1, {offsetof(struct btf_member, type)}
+ };
+ break;
+ case BTF_KIND_FUNC_PROTO:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, type)},
+ sizeof(struct btf_param),
+ 1, {offsetof(struct btf_param, type)}
+ };
+ break;
+ case BTF_KIND_DATASEC:
+ it->desc = (struct btf_field_desc) {
+ 0, {},
+ sizeof(struct btf_var_secinfo),
+ 1, {offsetof(struct btf_var_secinfo, type)}
+ };
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BTF_FIELD_ITER_STRS:
+ switch (btf_kind(t)) {
+ case BTF_KIND_UNKN:
+ it->desc = (struct btf_field_desc) {};
+ break;
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_FWD:
+ case BTF_KIND_ARRAY:
+ case BTF_KIND_CONST:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_PTR:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_FUNC:
+ case BTF_KIND_VAR:
+ case BTF_KIND_DECL_TAG:
+ case BTF_KIND_TYPE_TAG:
+ case BTF_KIND_DATASEC:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)}
+ };
+ break;
+ case BTF_KIND_ENUM:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_enum),
+ 1, {offsetof(struct btf_enum, name_off)}
+ };
+ break;
+ case BTF_KIND_ENUM64:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_enum64),
+ 1, {offsetof(struct btf_enum64, name_off)}
+ };
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_member),
+ 1, {offsetof(struct btf_member, name_off)}
+ };
+ break;
+ case BTF_KIND_FUNC_PROTO:
+ it->desc = (struct btf_field_desc) {
+ 1, {offsetof(struct btf_type, name_off)},
+ sizeof(struct btf_param),
+ 1, {offsetof(struct btf_param, name_off)}
+ };
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (it->desc.m_sz)
+ it->vlen = btf_vlen(t);
+
+ it->p = t;
+ return 0;
+}
+
+__u32 *btf_field_iter_next(struct btf_field_iter *it)
+{
+ if (!it->p)
+ return NULL;
+
+ if (it->m_idx < 0) {
+ if (it->off_idx < it->desc.t_off_cnt)
+ return it->p + it->desc.t_offs[it->off_idx++];
+ /* move to per-member iteration */
+ it->m_idx = 0;
+ it->p += sizeof(struct btf_type);
+ it->off_idx = 0;
+ }
+
+ /* if type doesn't have members, stop */
+ if (it->desc.m_sz == 0) {
+ it->p = NULL;
+ return NULL;
+ }
+
+ if (it->off_idx >= it->desc.m_off_cnt) {
+ /* exhausted this member's fields, go to the next member */
+ it->m_idx++;
+ it->p += it->desc.m_sz;
+ it->off_idx = 0;
+ }
+
+ if (it->m_idx < it->vlen)
+ return it->p + it->desc.m_offs[it->off_idx++];
+
+ it->p = NULL;
+ return NULL;
+}
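Taken together, the iterator gives a uniform way to visit every type-id (or string-offset) slot of a struct btf_type without kind-specific code. A small internal-style sketch, assuming the libbpf_internal.h declarations used above, that shifts every non-void type reference by a fixed offset (the same pattern the relocation code applies):

static int shift_type_ids(struct btf_type *t, __u32 off)
{
	struct btf_field_iter it;
	__u32 *id;
	int err;

	err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
	if (err)
		return err;

	while ((id = btf_field_iter_next(&it))) {
		if (*id)	/* leave void (id 0) references alone */
			*id += off;
	}
	return 0;
}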
diff --git a/tools/lib/bpf/btf_relocate.c b/tools/lib/bpf/btf_relocate.c
new file mode 100644
index 000000000000..53d1f3541bce
--- /dev/null
+++ b/tools/lib/bpf/btf_relocate.c
@@ -0,0 +1,519 @@
+// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+/* Copyright (c) 2024, Oracle and/or its affiliates. */
+
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+
+#ifdef __KERNEL__
+#include <linux/bpf.h>
+#include <linux/bsearch.h>
+#include <linux/btf.h>
+#include <linux/sort.h>
+#include <linux/string.h>
+#include <linux/bpf_verifier.h>
+
+#define btf_type_by_id (struct btf_type *)btf_type_by_id
+#define btf__type_cnt btf_nr_types
+#define btf__base_btf btf_base_btf
+#define btf__name_by_offset btf_name_by_offset
+#define btf__str_by_offset btf_str_by_offset
+#define btf_kflag btf_type_kflag
+
+#define calloc(nmemb, sz) kvcalloc(nmemb, sz, GFP_KERNEL | __GFP_NOWARN)
+#define free(ptr) kvfree(ptr)
+#define qsort(base, num, sz, cmp) sort(base, num, sz, cmp, NULL)
+
+#else
+
+#include "btf.h"
+#include "bpf.h"
+#include "libbpf.h"
+#include "libbpf_internal.h"
+
+#endif /* __KERNEL__ */
+
+struct btf;
+
+struct btf_relocate {
+ struct btf *btf;
+ const struct btf *base_btf;
+ const struct btf *dist_base_btf;
+ unsigned int nr_base_types;
+ unsigned int nr_split_types;
+ unsigned int nr_dist_base_types;
+ int dist_str_len;
+ int base_str_len;
+ __u32 *id_map;
+ __u32 *str_map;
+};
+
+/* Set temporarily in relocation id_map if distilled base struct/union is
+ * embedded in a split BTF struct/union; in such a case, size information must
+ * match between distilled base BTF and base BTF representation of type.
+ */
+#define BTF_IS_EMBEDDED ((__u32)-1)
+
+/* <name, size, id> triple used in sorting/searching distilled base BTF. */
+struct btf_name_info {
+ const char *name;
+ /* set when search requires a size match */
+ bool needs_size: 1;
+ unsigned int size: 31;
+ __u32 id;
+};
+
+static int btf_relocate_rewrite_type_id(struct btf_relocate *r, __u32 i)
+{
+ struct btf_type *t = btf_type_by_id(r->btf, i);
+ struct btf_field_iter it;
+ __u32 *id;
+ int err;
+
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
+ if (err)
+ return err;
+
+ while ((id = btf_field_iter_next(&it)))
+ *id = r->id_map[*id];
+ return 0;
+}
+
+/* Simple string comparison used for sorting within BTF, since all distilled
+ * types are named. If strings match and a size match is required for both
+ * elements, fall back to using size for ordering.
+ */
+static int cmp_btf_name_size(const void *n1, const void *n2)
+{
+ const struct btf_name_info *ni1 = n1;
+ const struct btf_name_info *ni2 = n2;
+ int name_diff = strcmp(ni1->name, ni2->name);
+
+ if (!name_diff && ni1->needs_size && ni2->needs_size)
+ return ni2->size - ni1->size;
+ return name_diff;
+}
+
+/* Binary search with a small twist; find leftmost element that matches
+ * so that we can then iterate through all exact matches. So for example
+ * searching { "a", "bb", "bb", "c" } we would always match on the
+ * leftmost "bb".
+ */
+static struct btf_name_info *search_btf_name_size(struct btf_name_info *key,
+ struct btf_name_info *vals,
+ int nelems)
+{
+ struct btf_name_info *ret = NULL;
+ int high = nelems - 1;
+ int low = 0;
+
+ while (low <= high) {
+ int mid = (low + high)/2;
+ struct btf_name_info *val = &vals[mid];
+ int diff = cmp_btf_name_size(key, val);
+
+ if (diff == 0)
+ ret = val;
+ /* even if found, keep searching for leftmost match */
+ if (diff <= 0)
+ high = mid - 1;
+ else
+ low = mid + 1;
+ }
+ return ret;
+}
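A standalone illustration of the same "leftmost match" idea over a plain sorted string array; it is not part of the patch, just a distilled version of the search loop above:

#include <string.h>

static int leftmost_match(const char * const *names, int nelems, const char *key)
{
	int low = 0, high = nelems - 1, found = -1;

	while (low <= high) {
		int mid = (low + high) / 2;
		int diff = strcmp(key, names[mid]);

		if (diff == 0)
			found = mid;	/* remember, but keep looking left */
		if (diff <= 0)
			high = mid - 1;
		else
			low = mid + 1;
	}
	return found;	/* index of first exact match, or -1 */
}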
+
+/* If a member of a split BTF struct/union refers to a base BTF
+ * struct/union, mark that struct/union id temporarily in the id_map
+ * with BTF_IS_EMBEDDED. Members can be const/restrict/volatile/typedef
+ * reference types, but if a pointer is encountered, the type is no longer
+ * considered embedded.
+ */
+static int btf_mark_embedded_composite_type_ids(struct btf_relocate *r, __u32 i)
+{
+ struct btf_type *t = btf_type_by_id(r->btf, i);
+ struct btf_field_iter it;
+ __u32 *id;
+ int err;
+
+ if (!btf_is_composite(t))
+ return 0;
+
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
+ if (err)
+ return err;
+
+ while ((id = btf_field_iter_next(&it))) {
+ __u32 next_id = *id;
+
+ while (next_id) {
+ t = btf_type_by_id(r->btf, next_id);
+ switch (btf_kind(t)) {
+ case BTF_KIND_CONST:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_TYPE_TAG:
+ next_id = t->type;
+ break;
+ case BTF_KIND_ARRAY: {
+ struct btf_array *a = btf_array(t);
+
+ next_id = a->type;
+ break;
+ }
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ if (next_id < r->nr_dist_base_types)
+ r->id_map[next_id] = BTF_IS_EMBEDDED;
+ next_id = 0;
+ break;
+ default:
+ next_id = 0;
+ break;
+ }
+ }
+ }
+
+ return 0;
+}
+
+/* Build a map from distilled base BTF ids to base BTF ids. To do so, iterate
+ * through base BTF, looking up distilled type equivalents via binary search.
+ */
+static int btf_relocate_map_distilled_base(struct btf_relocate *r)
+{
+ struct btf_name_info *info, *info_end;
+ struct btf_type *base_t, *dist_t;
+ __u8 *base_name_cnt = NULL;
+ int err = 0;
+ __u32 id;
+
+ /* generate a sort index array of name/type ids sorted by name for
+ * distilled base BTF to speed name-based lookups.
+ */
+ info = calloc(r->nr_dist_base_types, sizeof(*info));
+ if (!info) {
+ err = -ENOMEM;
+ goto done;
+ }
+ info_end = info + r->nr_dist_base_types;
+ for (id = 0; id < r->nr_dist_base_types; id++) {
+ dist_t = btf_type_by_id(r->dist_base_btf, id);
+ info[id].name = btf__name_by_offset(r->dist_base_btf, dist_t->name_off);
+ info[id].id = id;
+ info[id].size = dist_t->size;
+ info[id].needs_size = true;
+ }
+ qsort(info, r->nr_dist_base_types, sizeof(*info), cmp_btf_name_size);
+
+ /* Mark distilled base struct/union members of split BTF structs/unions
+ * in id_map with BTF_IS_EMBEDDED; this signals that these types
+ * need to match both name and size, otherwise embedding the base
+ * struct/union in the split type is invalid.
+ */
+ for (id = r->nr_dist_base_types; id < r->nr_dist_base_types + r->nr_split_types; id++) {
+ err = btf_mark_embedded_composite_type_ids(r, id);
+ if (err)
+ goto done;
+ }
+
+ /* Collect name counts for composite types in base BTF. If multiple
+ * instances of a struct/union of the same name exist, we need to use
+ * size to determine which to map to since name alone is ambiguous.
+ */
+ base_name_cnt = calloc(r->base_str_len, sizeof(*base_name_cnt));
+ if (!base_name_cnt) {
+ err = -ENOMEM;
+ goto done;
+ }
+ for (id = 1; id < r->nr_base_types; id++) {
+ base_t = btf_type_by_id(r->base_btf, id);
+ if (!btf_is_composite(base_t) || !base_t->name_off)
+ continue;
+ if (base_name_cnt[base_t->name_off] < 255)
+ base_name_cnt[base_t->name_off]++;
+ }
+
+ /* Now search base BTF for matching distilled base BTF types. */
+ for (id = 1; id < r->nr_base_types; id++) {
+ struct btf_name_info *dist_info, base_info = {};
+ int dist_kind, base_kind;
+
+ base_t = btf_type_by_id(r->base_btf, id);
+ /* distilled base consists of named types only. */
+ if (!base_t->name_off)
+ continue;
+ base_kind = btf_kind(base_t);
+ base_info.id = id;
+ base_info.name = btf__name_by_offset(r->base_btf, base_t->name_off);
+ switch (base_kind) {
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_ENUM:
+ case BTF_KIND_ENUM64:
+ /* These types should match both name and size */
+ base_info.needs_size = true;
+ base_info.size = base_t->size;
+ break;
+ case BTF_KIND_FWD:
+ /* No size considerations for fwds. */
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ /* Size only needs to be used for struct/union if there
+ * are multiple types in base BTF with the same name.
+ * If there are multiple _distilled_ types with the same
+ * name (a very unlikely scenario), that doesn't matter
+ * unless corresponding _base_ types to match them are
+ * missing.
+ */
+ base_info.needs_size = base_name_cnt[base_t->name_off] > 1;
+ base_info.size = base_t->size;
+ break;
+ default:
+ continue;
+ }
+ /* iterate over all matching distilled base types */
+ for (dist_info = search_btf_name_size(&base_info, info, r->nr_dist_base_types);
+ dist_info != NULL && dist_info < info_end &&
+ cmp_btf_name_size(&base_info, dist_info) == 0;
+ dist_info++) {
+ if (!dist_info->id || dist_info->id >= r->nr_dist_base_types) {
+ pr_warn("base BTF id [%d] maps to invalid distilled base BTF id [%d]\n",
+ id, dist_info->id);
+ err = -EINVAL;
+ goto done;
+ }
+ dist_t = btf_type_by_id(r->dist_base_btf, dist_info->id);
+ dist_kind = btf_kind(dist_t);
+
+ /* Validate that the found distilled type is compatible.
+ * Do not error out on mismatch as another match may
+ * occur for an identically-named type.
+ */
+ switch (dist_kind) {
+ case BTF_KIND_FWD:
+ switch (base_kind) {
+ case BTF_KIND_FWD:
+ if (btf_kflag(dist_t) != btf_kflag(base_t))
+ continue;
+ break;
+ case BTF_KIND_STRUCT:
+ if (btf_kflag(base_t))
+ continue;
+ break;
+ case BTF_KIND_UNION:
+ if (!btf_kflag(base_t))
+ continue;
+ break;
+ default:
+ continue;
+ }
+ break;
+ case BTF_KIND_INT:
+ if (dist_kind != base_kind ||
+ btf_int_encoding(base_t) != btf_int_encoding(dist_t))
+ continue;
+ break;
+ case BTF_KIND_FLOAT:
+ if (dist_kind != base_kind)
+ continue;
+ break;
+ case BTF_KIND_ENUM:
+ /* ENUM and ENUM64 are encoded as sized ENUM in
+ * distilled base BTF.
+ */
+ if (base_kind != dist_kind && base_kind != BTF_KIND_ENUM64)
+ continue;
+ break;
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ /* size verification is required for embedded
+ * struct/unions.
+ */
+ if (r->id_map[dist_info->id] == BTF_IS_EMBEDDED &&
+ base_t->size != dist_t->size)
+ continue;
+ break;
+ default:
+ continue;
+ }
+ if (r->id_map[dist_info->id] &&
+ r->id_map[dist_info->id] != BTF_IS_EMBEDDED) {
+ /* we already have a match; this tells us that
+ * multiple base types of the same name
+ * have the same size, since for cases where
+ * multiple types have the same name we match
+ * on name and size. In this case, we have
+ * no way of determining which to relocate
+ * to in base BTF, so error out.
+ */
+ pr_warn("distilled base BTF type '%s' [%u], size %u has multiple candidates of the same size (ids [%u, %u]) in base BTF\n",
+ base_info.name, dist_info->id,
+ base_t->size, id, r->id_map[dist_info->id]);
+ err = -EINVAL;
+ goto done;
+ }
+ /* map id and name */
+ r->id_map[dist_info->id] = id;
+ r->str_map[dist_t->name_off] = base_t->name_off;
+ }
+ }
+ /* ensure all distilled BTF ids now have a mapping... */
+ for (id = 1; id < r->nr_dist_base_types; id++) {
+ const char *name;
+
+ if (r->id_map[id] && r->id_map[id] != BTF_IS_EMBEDDED)
+ continue;
+ dist_t = btf_type_by_id(r->dist_base_btf, id);
+ name = btf__name_by_offset(r->dist_base_btf, dist_t->name_off);
+ pr_warn("distilled base BTF type '%s' [%d] is not mapped to base BTF id\n",
+ name, id);
+ err = -EINVAL;
+ break;
+ }
+done:
+ free(base_name_cnt);
+ free(info);
+ return err;
+}
+
+/* distilled base should only have named int/float/enum/fwd/struct/union types. */
+static int btf_relocate_validate_distilled_base(struct btf_relocate *r)
+{
+ unsigned int i;
+
+ for (i = 1; i < r->nr_dist_base_types; i++) {
+ struct btf_type *t = btf_type_by_id(r->dist_base_btf, i);
+ int kind = btf_kind(t);
+
+ switch (kind) {
+ case BTF_KIND_INT:
+ case BTF_KIND_FLOAT:
+ case BTF_KIND_ENUM:
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION:
+ case BTF_KIND_FWD:
+ if (t->name_off)
+ break;
+ pr_warn("type [%d], kind [%d] is invalid for distilled base BTF; it is anonymous\n",
+ i, kind);
+ return -EINVAL;
+ default:
+ pr_warn("type [%d] in distilled based BTF has unexpected kind [%d]\n",
+ i, kind);
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+static int btf_relocate_rewrite_strs(struct btf_relocate *r, __u32 i)
+{
+ struct btf_type *t = btf_type_by_id(r->btf, i);
+ struct btf_field_iter it;
+ __u32 *str_off;
+ int off, err;
+
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_STRS);
+ if (err)
+ return err;
+
+ while ((str_off = btf_field_iter_next(&it))) {
+ if (!*str_off)
+ continue;
+ if (*str_off >= r->dist_str_len) {
+ *str_off += r->base_str_len - r->dist_str_len;
+ } else {
+ off = r->str_map[*str_off];
+ if (!off) {
+ pr_warn("string '%s' [offset %u] is not mapped to base BTF\n",
+ btf__str_by_offset(r->btf, off), *str_off);
+ return -ENOENT;
+ }
+ *str_off = off;
+ }
+ }
+ return 0;
+}
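The offset arithmetic above is easiest to see with toy numbers: with a 16-byte distilled string section and a 4096-byte base string section, a split-local string at offset 20 moves to 20 + 4096 - 16 = 4100, while offsets below 16 must go through str_map. A stripped-down sketch of just that mapping, with the surrounding error handling omitted:

static __u32 remap_str_off(__u32 off, __u32 dist_str_len, __u32 base_str_len,
			   const __u32 *str_map)
{
	if (off >= dist_str_len)
		return off + base_str_len - dist_str_len;
	return str_map[off];	/* 0 means "unmapped"; treated as an error above */
}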
+
+/* If successful, output of relocation is updated BTF with base BTF pointing
+ * at base_btf, and type ids, strings adjusted accordingly.
+ */
+int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **id_map)
+{
+ unsigned int nr_types = btf__type_cnt(btf);
+ const struct btf_header *dist_base_hdr;
+ const struct btf_header *base_hdr;
+ struct btf_relocate r = {};
+ int err = 0;
+ __u32 id, i;
+
+ r.dist_base_btf = btf__base_btf(btf);
+ if (!base_btf || r.dist_base_btf == base_btf)
+ return -EINVAL;
+
+ r.nr_dist_base_types = btf__type_cnt(r.dist_base_btf);
+ r.nr_base_types = btf__type_cnt(base_btf);
+ r.nr_split_types = nr_types - r.nr_dist_base_types;
+ r.btf = btf;
+ r.base_btf = base_btf;
+
+ r.id_map = calloc(nr_types, sizeof(*r.id_map));
+ r.str_map = calloc(btf_header(r.dist_base_btf)->str_len, sizeof(*r.str_map));
+ dist_base_hdr = btf_header(r.dist_base_btf);
+ base_hdr = btf_header(r.base_btf);
+ r.dist_str_len = dist_base_hdr->str_len;
+ r.base_str_len = base_hdr->str_len;
+ if (!r.id_map || !r.str_map) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ err = btf_relocate_validate_distilled_base(&r);
+ if (err)
+ goto err_out;
+
+ /* Split BTF ids need to be adjusted as base and distilled base
+ * have different numbers of types, changing the start id of split
+ * BTF.
+ */
+ for (id = r.nr_dist_base_types; id < nr_types; id++)
+ r.id_map[id] = id + r.nr_base_types - r.nr_dist_base_types;
+
+ /* Build a map from distilled base ids to actual base BTF ids; it is used
+ * to update split BTF id references. Also build a str_map mapping from
+ * distilled base BTF names to base BTF names.
+ */
+ err = btf_relocate_map_distilled_base(&r);
+ if (err)
+ goto err_out;
+
+ /* Next, rewrite type ids in split BTF, replacing split ids with updated
+ * ids based on number of types in base BTF, and base ids with
+ * relocated ids from base_btf.
+ */
+ for (i = 0, id = r.nr_dist_base_types; i < r.nr_split_types; i++, id++) {
+ err = btf_relocate_rewrite_type_id(&r, id);
+ if (err)
+ goto err_out;
+ }
+ /* String offsets now need to be updated using the str_map. */
+ for (i = 0; i < r.nr_split_types; i++) {
+ err = btf_relocate_rewrite_strs(&r, i + r.nr_dist_base_types);
+ if (err)
+ goto err_out;
+ }
+ /* Finally reset base BTF to be base_btf */
+ btf_set_base_btf(btf, base_btf);
+
+ if (id_map) {
+ *id_map = r.id_map;
+ r.id_map = NULL;
+ }
+err_out:
+ free(r.id_map);
+ free(r.str_map);
+ return err;
+}
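The split-id adjustment performed before the distilled-base mapping is pure arithmetic: with, say, 10 distilled base types and 1000 real base types, split id 12 becomes 12 + 1000 - 10 = 1002. As a trivial sketch of that step in isolation:

static __u32 remap_split_id(__u32 id, __u32 nr_dist_base_types,
			    __u32 nr_base_types)
{
	return id + nr_base_types - nr_dist_base_types;
}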
diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c
index b02faec748a5..295dbda24580 100644
--- a/tools/lib/bpf/elf.c
+++ b/tools/lib/bpf/elf.c
@@ -9,9 +9,6 @@
#include <linux/kernel.h>
#include "libbpf_internal.h"
-#include "str_error.h"
-
-#define STRERR_BUFSIZE 128
/* A SHT_GNU_versym section holds 16-bit words. This bit is set if
* the symbol is hidden and can only be seen when referenced using an
@@ -26,10 +23,12 @@
int elf_open(const char *binary_path, struct elf_fd *elf_fd)
{
- char errmsg[STRERR_BUFSIZE];
int fd, ret;
Elf *elf;
+ elf_fd->elf = NULL;
+ elf_fd->fd = -1;
+
if (elf_version(EV_CURRENT) == EV_NONE) {
pr_warn("elf: failed to init libelf for %s\n", binary_path);
return -LIBBPF_ERRNO__LIBELF;
@@ -37,8 +36,7 @@ int elf_open(const char *binary_path, struct elf_fd *elf_fd)
fd = open(binary_path, O_RDONLY | O_CLOEXEC);
if (fd < 0) {
ret = -errno;
- pr_warn("elf: failed to open %s: %s\n", binary_path,
- libbpf_strerror_r(ret, errmsg, sizeof(errmsg)));
+ pr_warn("elf: failed to open %s: %s\n", binary_path, errstr(ret));
return ret;
}
elf = elf_begin(fd, ELF_C_READ_MMAP, NULL);
diff --git a/tools/lib/bpf/features.c b/tools/lib/bpf/features.c
new file mode 100644
index 000000000000..b842b83e2480
--- /dev/null
+++ b/tools/lib/bpf/features.c
@@ -0,0 +1,609 @@
+// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+#include <linux/kernel.h>
+#include <linux/filter.h>
+#include "bpf.h"
+#include "libbpf.h"
+#include "libbpf_common.h"
+#include "libbpf_internal.h"
+
+static inline __u64 ptr_to_u64(const void *ptr)
+{
+ return (__u64)(unsigned long)ptr;
+}
+
+int probe_fd(int fd)
+{
+ if (fd >= 0)
+ close(fd);
+ return fd >= 0;
+}
+
+static int probe_kern_prog_name(int token_fd)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, prog_token_fd);
+ struct bpf_insn insns[] = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ union bpf_attr attr;
+ int ret;
+
+ memset(&attr, 0, attr_sz);
+ attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
+ attr.license = ptr_to_u64("GPL");
+ attr.insns = ptr_to_u64(insns);
+ attr.insn_cnt = (__u32)ARRAY_SIZE(insns);
+ attr.prog_token_fd = token_fd;
+ if (token_fd)
+ attr.prog_flags |= BPF_F_TOKEN_FD;
+ libbpf_strlcpy(attr.prog_name, "libbpf_nametest", sizeof(attr.prog_name));
+
+ /* make sure loading with name works */
+ ret = sys_bpf_prog_load(&attr, attr_sz, PROG_LOAD_ATTEMPTS);
+ return probe_fd(ret);
+}
+
+static int probe_kern_global_data(int token_fd)
+{
+ struct bpf_insn insns[] = {
+ BPF_LD_MAP_VALUE(BPF_REG_1, 0, 16),
+ BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 42),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ LIBBPF_OPTS(bpf_map_create_opts, map_opts,
+ .token_fd = token_fd,
+ .map_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ LIBBPF_OPTS(bpf_prog_load_opts, prog_opts,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ int ret, map, insn_cnt = ARRAY_SIZE(insns);
+
+ map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_global", sizeof(int), 32, 1, &map_opts);
+ if (map < 0) {
+ ret = -errno;
+ pr_warn("Error in %s(): %s. Couldn't create simple array map.\n",
+ __func__, errstr(ret));
+ return ret;
+ }
+
+ insns[0].imm = map;
+
+ ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, &prog_opts);
+ close(map);
+ return probe_fd(ret);
+}
+
+static int probe_kern_btf(int token_fd)
+{
+ static const char strs[] = "\0int";
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_btf_func(int token_fd)
+{
+ static const char strs[] = "\0int\0x\0a";
+ /* void x(int a) {} */
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ /* FUNC_PROTO */ /* [2] */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
+ BTF_PARAM_ENC(7, 1),
+ /* FUNC x */ /* [3] */
+ BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0), 2),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_btf_func_global(int token_fd)
+{
+ static const char strs[] = "\0int\0x\0a";
+ /* static void x(int a) {} */
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ /* FUNC_PROTO */ /* [2] */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
+ BTF_PARAM_ENC(7, 1),
+ /* FUNC x BTF_FUNC_GLOBAL */ /* [3] */
+ BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 2),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_btf_datasec(int token_fd)
+{
+ static const char strs[] = "\0x\0.data";
+ /* static int a; */
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ /* VAR x */ /* [2] */
+ BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
+ BTF_VAR_STATIC,
+ /* DATASEC val */ /* [3] */
+ BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
+ BTF_VAR_SECINFO_ENC(2, 0, 4),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_btf_qmark_datasec(int token_fd)
+{
+ static const char strs[] = "\0x\0?.data";
+ /* static int a; */
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ /* VAR x */ /* [2] */
+ BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
+ BTF_VAR_STATIC,
+ /* DATASEC ?.data */ /* [3] */
+ BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
+ BTF_VAR_SECINFO_ENC(2, 0, 4),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_btf_float(int token_fd)
+{
+ static const char strs[] = "\0float";
+ __u32 types[] = {
+ /* float */
+ BTF_TYPE_FLOAT_ENC(1, 4),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_btf_decl_tag(int token_fd)
+{
+ static const char strs[] = "\0tag";
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ /* VAR x */ /* [2] */
+ BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
+ BTF_VAR_STATIC,
+ /* attr */
+ BTF_TYPE_DECL_TAG_ENC(1, 2, -1),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_btf_type_tag(int token_fd)
+{
+ static const char strs[] = "\0tag";
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+ /* attr */
+ BTF_TYPE_TYPE_TAG_ENC(1, 1), /* [2] */
+ /* ptr */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), /* [3] */
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_array_mmap(int token_fd)
+{
+ LIBBPF_OPTS(bpf_map_create_opts, opts,
+ .map_flags = BPF_F_MMAPABLE | (token_fd ? BPF_F_TOKEN_FD : 0),
+ .token_fd = token_fd,
+ );
+ int fd;
+
+ fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_mmap", sizeof(int), sizeof(int), 1, &opts);
+ return probe_fd(fd);
+}
+
+static int probe_kern_exp_attach_type(int token_fd)
+{
+ LIBBPF_OPTS(bpf_prog_load_opts, opts,
+ .expected_attach_type = BPF_CGROUP_INET_SOCK_CREATE,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ struct bpf_insn insns[] = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ int fd, insn_cnt = ARRAY_SIZE(insns);
+
+ /* use any valid combination of program type and (optional)
+ * non-zero expected attach type (i.e., not a BPF_CGROUP_INET_INGRESS)
+ * to see if kernel supports expected_attach_type field for
+ * BPF_PROG_LOAD command
+ */
+ fd = bpf_prog_load(BPF_PROG_TYPE_CGROUP_SOCK, NULL, "GPL", insns, insn_cnt, &opts);
+ return probe_fd(fd);
+}
+
+static int probe_kern_probe_read_kernel(int token_fd)
+{
+ LIBBPF_OPTS(bpf_prog_load_opts, opts,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ struct bpf_insn insns[] = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), /* r1 = r10 (fp) */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), /* r1 += -8 */
+ BPF_MOV64_IMM(BPF_REG_2, 8), /* r2 = 8 */
+ BPF_MOV64_IMM(BPF_REG_3, 0), /* r3 = 0 */
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
+ BPF_EXIT_INSN(),
+ };
+ int fd, insn_cnt = ARRAY_SIZE(insns);
+
+ fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, &opts);
+ return probe_fd(fd);
+}
+
+static int probe_prog_bind_map(int token_fd)
+{
+ struct bpf_insn insns[] = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ LIBBPF_OPTS(bpf_map_create_opts, map_opts,
+ .token_fd = token_fd,
+ .map_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ LIBBPF_OPTS(bpf_prog_load_opts, prog_opts,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ int ret, map, prog, insn_cnt = ARRAY_SIZE(insns);
+
+ map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_det_bind", sizeof(int), 32, 1, &map_opts);
+ if (map < 0) {
+ ret = -errno;
+ pr_warn("Error in %s(): %s. Couldn't create simple array map.\n",
+ __func__, errstr(ret));
+ return ret;
+ }
+
+ prog = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, &prog_opts);
+ if (prog < 0) {
+ close(map);
+ return 0;
+ }
+
+ ret = bpf_prog_bind_map(prog, map, NULL);
+
+ close(map);
+ close(prog);
+
+ return ret >= 0;
+}
+
+static int probe_module_btf(int token_fd)
+{
+ static const char strs[] = "\0int";
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
+ };
+ struct bpf_btf_info info;
+ __u32 len = sizeof(info);
+ char name[16];
+ int fd, err;
+
+ fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs), token_fd);
+ if (fd < 0)
+ return 0; /* BTF not supported at all */
+
+ memset(&info, 0, sizeof(info));
+ info.name = ptr_to_u64(name);
+ info.name_len = sizeof(name);
+
+ /* check that BPF_OBJ_GET_INFO_BY_FD supports specifying name pointer;
+ * kernel's module BTF support coincides with support for
+ * name/name_len fields in struct bpf_btf_info.
+ */
+ err = bpf_btf_get_info_by_fd(fd, &info, &len);
+ close(fd);
+ return !err;
+}
+
+static int probe_perf_link(int token_fd)
+{
+ struct bpf_insn insns[] = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ LIBBPF_OPTS(bpf_prog_load_opts, opts,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ int prog_fd, link_fd, err;
+
+ prog_fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL",
+ insns, ARRAY_SIZE(insns), &opts);
+ if (prog_fd < 0)
+ return -errno;
+
+ /* use invalid perf_event FD to get EBADF, if link is supported;
+ * otherwise EINVAL should be returned
+ */
+ link_fd = bpf_link_create(prog_fd, -1, BPF_PERF_EVENT, NULL);
+ err = -errno; /* close() can clobber errno */
+
+ if (link_fd >= 0)
+ close(link_fd);
+ close(prog_fd);
+
+ return link_fd < 0 && err == -EBADF;
+}
+
+static int probe_uprobe_multi_link(int token_fd)
+{
+ LIBBPF_OPTS(bpf_prog_load_opts, load_opts,
+ .expected_attach_type = BPF_TRACE_UPROBE_MULTI,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ LIBBPF_OPTS(bpf_link_create_opts, link_opts);
+ struct bpf_insn insns[] = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ int prog_fd, link_fd, err;
+ unsigned long offset = 0;
+
+ prog_fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL",
+ insns, ARRAY_SIZE(insns), &load_opts);
+ if (prog_fd < 0)
+ return -errno;
+
+ /* Creating uprobe in '/' binary should fail with -EBADF. */
+ link_opts.uprobe_multi.path = "/";
+ link_opts.uprobe_multi.offsets = &offset;
+ link_opts.uprobe_multi.cnt = 1;
+
+ link_fd = bpf_link_create(prog_fd, -1, BPF_TRACE_UPROBE_MULTI, &link_opts);
+ err = -errno; /* close() can clobber errno */
+
+ if (link_fd >= 0 || err != -EBADF) {
+ if (link_fd >= 0)
+ close(link_fd);
+ close(prog_fd);
+ return 0;
+ }
+
+ /* Initial multi-uprobe support in kernel didn't handle PID filtering
+ * correctly (it was doing thread filtering, not process filtering).
+ * So now we'll detect if PID filtering logic was fixed, and, if not,
+ * we'll pretend multi-uprobes are not supported.
+ * Multi-uprobes are used in USDT attachment logic, and we need to be
+ * conservative here, because multi-uprobe selection happens early at
+ * load time, while the use of PID filtering is known late at
+ * attachment time, at which point it's too late to undo multi-uprobe
+ * selection.
+ *
+ * Creating uprobe with pid == -1 for (invalid) '/' binary will fail
+ * early with -EINVAL on kernels with fixed PID filtering logic;
+ * otherwise -ESRCH would be returned if passed correct binary path
+ * (but we'll just get -EBADF, of course).
+ */
+ link_opts.uprobe_multi.pid = -1; /* invalid PID */
+ link_opts.uprobe_multi.path = "/"; /* invalid path */
+ link_opts.uprobe_multi.offsets = &offset;
+ link_opts.uprobe_multi.cnt = 1;
+
+ link_fd = bpf_link_create(prog_fd, -1, BPF_TRACE_UPROBE_MULTI, &link_opts);
+ err = -errno; /* close() can clobber errno */
+
+ if (link_fd >= 0)
+ close(link_fd);
+ close(prog_fd);
+
+ return link_fd < 0 && err == -EINVAL;
+}
+
+static int probe_kern_bpf_cookie(int token_fd)
+{
+ struct bpf_insn insns[] = {
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_attach_cookie),
+ BPF_EXIT_INSN(),
+ };
+ LIBBPF_OPTS(bpf_prog_load_opts, opts,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ int ret, insn_cnt = ARRAY_SIZE(insns);
+
+ ret = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, &opts);
+ return probe_fd(ret);
+}
+
+static int probe_kern_btf_enum64(int token_fd)
+{
+ static const char strs[] = "\0enum64";
+ __u32 types[] = {
+ BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_ENUM64, 0, 0), 8),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs), token_fd));
+}
+
+static int probe_kern_arg_ctx_tag(int token_fd)
+{
+ static const char strs[] = "\0a\0b\0arg:ctx\0";
+ const __u32 types[] = {
+ /* [1] INT */
+ BTF_TYPE_INT_ENC(1 /* "a" */, BTF_INT_SIGNED, 0, 32, 4),
+ /* [2] PTR -> VOID */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 0),
+ /* [3] FUNC_PROTO `int(void *a)` */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 1),
+ BTF_PARAM_ENC(1 /* "a" */, 2),
+ /* [4] FUNC 'a' -> FUNC_PROTO (main prog) */
+ BTF_TYPE_ENC(1 /* "a" */, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 3),
+ /* [5] FUNC_PROTO `int(void *b __arg_ctx)` */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 1),
+ BTF_PARAM_ENC(3 /* "b" */, 2),
+ /* [6] FUNC 'b' -> FUNC_PROTO (subprog) */
+ BTF_TYPE_ENC(3 /* "b" */, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 5),
+ /* [7] DECL_TAG 'arg:ctx' -> func 'b' arg 'b' */
+ BTF_TYPE_DECL_TAG_ENC(5 /* "arg:ctx" */, 6, 0),
+ };
+ const struct bpf_insn insns[] = {
+ /* main prog */
+ BPF_CALL_REL(+1),
+ BPF_EXIT_INSN(),
+ /* global subprog */
+ BPF_EMIT_CALL(BPF_FUNC_get_func_ip), /* needs PTR_TO_CTX */
+ BPF_EXIT_INSN(),
+ };
+ const struct bpf_func_info_min func_infos[] = {
+ { 0, 4 }, /* main prog -> FUNC 'a' */
+ { 2, 6 }, /* subprog -> FUNC 'b' */
+ };
+ LIBBPF_OPTS(bpf_prog_load_opts, opts,
+ .token_fd = token_fd,
+ .prog_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
+ int prog_fd, btf_fd, insn_cnt = ARRAY_SIZE(insns);
+
+ btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs), token_fd);
+ if (btf_fd < 0)
+ return 0;
+
+ opts.prog_btf_fd = btf_fd;
+ opts.func_info = &func_infos;
+ opts.func_info_cnt = ARRAY_SIZE(func_infos);
+ opts.func_info_rec_size = sizeof(func_infos[0]);
+
+ prog_fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, "det_arg_ctx",
+ "GPL", insns, insn_cnt, &opts);
+ close(btf_fd);
+
+ return probe_fd(prog_fd);
+}
+
+typedef int (*feature_probe_fn)(int /* token_fd */);
+
+static struct kern_feature_cache feature_cache;
+
+static struct kern_feature_desc {
+ const char *desc;
+ feature_probe_fn probe;
+} feature_probes[__FEAT_CNT] = {
+ [FEAT_PROG_NAME] = {
+ "BPF program name", probe_kern_prog_name,
+ },
+ [FEAT_GLOBAL_DATA] = {
+ "global variables", probe_kern_global_data,
+ },
+ [FEAT_BTF] = {
+ "minimal BTF", probe_kern_btf,
+ },
+ [FEAT_BTF_FUNC] = {
+ "BTF functions", probe_kern_btf_func,
+ },
+ [FEAT_BTF_GLOBAL_FUNC] = {
+ "BTF global function", probe_kern_btf_func_global,
+ },
+ [FEAT_BTF_DATASEC] = {
+ "BTF data section and variable", probe_kern_btf_datasec,
+ },
+ [FEAT_ARRAY_MMAP] = {
+ "ARRAY map mmap()", probe_kern_array_mmap,
+ },
+ [FEAT_EXP_ATTACH_TYPE] = {
+ "BPF_PROG_LOAD expected_attach_type attribute",
+ probe_kern_exp_attach_type,
+ },
+ [FEAT_PROBE_READ_KERN] = {
+ "bpf_probe_read_kernel() helper", probe_kern_probe_read_kernel,
+ },
+ [FEAT_PROG_BIND_MAP] = {
+ "BPF_PROG_BIND_MAP support", probe_prog_bind_map,
+ },
+ [FEAT_MODULE_BTF] = {
+ "module BTF support", probe_module_btf,
+ },
+ [FEAT_BTF_FLOAT] = {
+ "BTF_KIND_FLOAT support", probe_kern_btf_float,
+ },
+ [FEAT_PERF_LINK] = {
+ "BPF perf link support", probe_perf_link,
+ },
+ [FEAT_BTF_DECL_TAG] = {
+ "BTF_KIND_DECL_TAG support", probe_kern_btf_decl_tag,
+ },
+ [FEAT_BTF_TYPE_TAG] = {
+ "BTF_KIND_TYPE_TAG support", probe_kern_btf_type_tag,
+ },
+ [FEAT_MEMCG_ACCOUNT] = {
+ "memcg-based memory accounting", probe_memcg_account,
+ },
+ [FEAT_BPF_COOKIE] = {
+ "BPF cookie support", probe_kern_bpf_cookie,
+ },
+ [FEAT_BTF_ENUM64] = {
+ "BTF_KIND_ENUM64 support", probe_kern_btf_enum64,
+ },
+ [FEAT_SYSCALL_WRAPPER] = {
+ "Kernel using syscall wrapper", probe_kern_syscall_wrapper,
+ },
+ [FEAT_UPROBE_MULTI_LINK] = {
+ "BPF multi-uprobe link support", probe_uprobe_multi_link,
+ },
+ [FEAT_ARG_CTX_TAG] = {
+ "kernel-side __arg_ctx tag", probe_kern_arg_ctx_tag,
+ },
+ [FEAT_BTF_QMARK_DATASEC] = {
+ "BTF DATASEC names starting from '?'", probe_kern_btf_qmark_datasec,
+ },
+};
+
+bool feat_supported(struct kern_feature_cache *cache, enum kern_feature_id feat_id)
+{
+ struct kern_feature_desc *feat = &feature_probes[feat_id];
+ int ret;
+
+ /* assume global feature cache, unless custom one is provided */
+ if (!cache)
+ cache = &feature_cache;
+
+ if (READ_ONCE(cache->res[feat_id]) == FEAT_UNKNOWN) {
+ ret = feat->probe(cache->token_fd);
+ if (ret > 0) {
+ WRITE_ONCE(cache->res[feat_id], FEAT_SUPPORTED);
+ } else if (ret == 0) {
+ WRITE_ONCE(cache->res[feat_id], FEAT_MISSING);
+ } else {
+ pr_warn("Detection of kernel %s support failed: %s\n",
+ feat->desc, errstr(ret));
+ WRITE_ONCE(cache->res[feat_id], FEAT_MISSING);
+ }
+ }
+
+ return READ_ONCE(cache->res[feat_id]) == FEAT_SUPPORTED;
+}
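A short internal-style sketch of consuming this cache, assuming the libbpf_internal.h declarations: passing NULL selects the process-wide cache so each probe runs at most once, while callers holding a BPF token would pass their own cache with token_fd set:

static bool can_use_btf_func_info(struct kern_feature_cache *obj_cache)
{
	/* obj_cache may be NULL, in which case the global cache is used
	 * and the probe runs without a BPF token
	 */
	return feat_supported(obj_cache, FEAT_BTF_FUNC);
}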
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index cf3323fd47b8..cd5c2543f54d 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -4,6 +4,7 @@
#include <stdlib.h>
#include <string.h>
#include <errno.h>
+#include <asm/byteorder.h>
#include <linux/filter.h>
#include <sys/param.h>
#include "btf.h"
@@ -13,7 +14,6 @@
#include "hashmap.h"
#include "bpf_gen_internal.h"
#include "skel_internal.h"
-#include <asm/byteorder.h>
#define MAX_USED_MAPS 64
#define MAX_USED_PROGS 32
@@ -109,6 +109,7 @@ static void emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bpf_insn in
static int add_data(struct bpf_gen *gen, const void *data, __u32 size);
static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off);
+static void emit_signature_match(struct bpf_gen *gen);
void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps)
{
@@ -151,6 +152,8 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
/* R7 contains the error code from sys_bpf. Copy it into R0 and exit. */
emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7));
emit(gen, BPF_EXIT_INSN());
+ if (OPTS_GET(gen->opts, gen_hash, false))
+ emit_signature_match(gen);
}
static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
@@ -367,6 +370,8 @@ static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
__emit_sys_close(gen);
}
+static void compute_sha_update_offsets(struct bpf_gen *gen);
+
int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
{
int i;
@@ -393,7 +398,10 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
blob_fd_array_off(gen, i));
emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0));
emit(gen, BPF_EXIT_INSN());
- pr_debug("gen: finish %d\n", gen->error);
+ if (OPTS_GET(gen->opts, gen_hash, false))
+ compute_sha_update_offsets(gen);
+
+ pr_debug("gen: finish %s\n", errstr(gen->error));
if (!gen->error) {
struct gen_loader_opts *opts = gen->opts;
@@ -401,6 +409,15 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
opts->insns_sz = gen->insn_cur - gen->insn_start;
opts->data = gen->data_start;
opts->data_sz = gen->data_cur - gen->data_start;
+
+ /* use target endianness for embedded loader */
+ if (gen->swapped_endian) {
+ struct bpf_insn *insn = (struct bpf_insn *)opts->insns;
+ int insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
+
+ for (i = 0; i < insn_cnt; i++)
+ bpf_insn_bswap(insn++);
+ }
}
return gen->error;
}
@@ -414,6 +431,44 @@ void bpf_gen__free(struct bpf_gen *gen)
free(gen);
}
+/*
+ * Fields of bpf_attr are set to values in native byte-order before being
+ * written to the target-bound data blob, and may need endian conversion.
+ * This macro allows providing the correct value in situ more simply than
+ * writing a separate converter for *all fields* of *all records* included
+ * in union bpf_attr. Note that sizeof(rval) should match the assignment
+ * target to avoid runtime problems.
+ */
+#define tgt_endian(rval) ({ \
+ typeof(rval) _val = (rval); \
+ if (gen->swapped_endian) { \
+ switch (sizeof(_val)) { \
+ case 1: break; \
+ case 2: _val = bswap_16(_val); break; \
+ case 4: _val = bswap_32(_val); break; \
+ case 8: _val = bswap_64(_val); break; \
+ default: pr_warn("unsupported bswap size!\n"); \
+ } \
+ } \
+ _val; \
+})
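A standalone illustration of the same idea with the width fixed at 32 bits and the swap decision passed in explicitly instead of read from gen->swapped_endian; bswap_32() is the glibc <byteswap.h> helper:

#include <byteswap.h>
#include <stdbool.h>
#include <stdint.h>

static uint32_t to_target_u32(uint32_t host_val, bool swapped_endian)
{
	return swapped_endian ? bswap_32(host_val) : host_val;
}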
+
+static void compute_sha_update_offsets(struct bpf_gen *gen)
+{
+ __u64 sha[SHA256_DWORD_SIZE];
+ __u64 sha_dw;
+ int i;
+
+ libbpf_sha256(gen->data_start, gen->data_cur - gen->data_start, (__u8 *)sha);
+ for (i = 0; i < SHA256_DWORD_SIZE; i++) {
+ struct bpf_insn *insn =
+ (struct bpf_insn *)(gen->insn_start + gen->hash_insn_offset[i]);
+ sha_dw = tgt_endian(sha[i]);
+ insn[0].imm = (__u32)sha_dw;
+ insn[1].imm = sha_dw >> 32;
+ }
+}
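The immediate patching relies on the BPF_LD_IMM64 encoding, where the 64-bit constant is split across the imm fields of two consecutive instructions, low half first. A minimal sketch of that step in isolation:

#include <stdint.h>
#include <linux/bpf.h>

static void patch_ld_imm64(struct bpf_insn *insn, uint64_t val)
{
	insn[0].imm = (uint32_t)val;		/* low 32 bits */
	insn[1].imm = (uint32_t)(val >> 32);	/* high 32 bits */
}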
+
void bpf_gen__load_btf(struct bpf_gen *gen, const void *btf_raw_data,
__u32 btf_raw_size)
{
@@ -422,11 +477,12 @@ void bpf_gen__load_btf(struct bpf_gen *gen, const void *btf_raw_data,
union bpf_attr attr;
memset(&attr, 0, attr_size);
- pr_debug("gen: load_btf: size %d\n", btf_raw_size);
btf_data = add_data(gen, btf_raw_data, btf_raw_size);
- attr.btf_size = btf_raw_size;
+ attr.btf_size = tgt_endian(btf_raw_size);
btf_load_attr = add_data(gen, &attr, attr_size);
+ pr_debug("gen: load_btf: off %d size %d, attr: off %d size %d\n",
+ btf_data, btf_raw_size, btf_load_attr, attr_size);
/* populate union bpf_attr with user provided log details */
move_ctx2blob(gen, attr_field(btf_load_attr, btf_log_level), 4,
@@ -457,28 +513,29 @@ void bpf_gen__map_create(struct bpf_gen *gen,
union bpf_attr attr;
memset(&attr, 0, attr_size);
- attr.map_type = map_type;
- attr.key_size = key_size;
- attr.value_size = value_size;
- attr.map_flags = map_attr->map_flags;
- attr.map_extra = map_attr->map_extra;
+ attr.map_type = tgt_endian(map_type);
+ attr.key_size = tgt_endian(key_size);
+ attr.value_size = tgt_endian(value_size);
+ attr.map_flags = tgt_endian(map_attr->map_flags);
+ attr.map_extra = tgt_endian(map_attr->map_extra);
if (map_name)
libbpf_strlcpy(attr.map_name, map_name, sizeof(attr.map_name));
- attr.numa_node = map_attr->numa_node;
- attr.map_ifindex = map_attr->map_ifindex;
- attr.max_entries = max_entries;
- attr.btf_key_type_id = map_attr->btf_key_type_id;
- attr.btf_value_type_id = map_attr->btf_value_type_id;
-
- pr_debug("gen: map_create: %s idx %d type %d value_type_id %d\n",
- attr.map_name, map_idx, map_type, attr.btf_value_type_id);
+ attr.numa_node = tgt_endian(map_attr->numa_node);
+ attr.map_ifindex = tgt_endian(map_attr->map_ifindex);
+ attr.max_entries = tgt_endian(max_entries);
+ attr.btf_key_type_id = tgt_endian(map_attr->btf_key_type_id);
+ attr.btf_value_type_id = tgt_endian(map_attr->btf_value_type_id);
map_create_attr = add_data(gen, &attr, attr_size);
- if (attr.btf_value_type_id)
+ pr_debug("gen: map_create: %s idx %d type %d value_type_id %d, attr: off %d size %d\n",
+ map_name, map_idx, map_type, map_attr->btf_value_type_id,
+ map_create_attr, attr_size);
+
+ if (map_attr->btf_value_type_id)
/* populate union bpf_attr with btf_fd saved in the stack earlier */
move_stack2blob(gen, attr_field(map_create_attr, btf_fd), 4,
stack_off(btf_fd));
- switch (attr.map_type) {
+ switch (map_type) {
case BPF_MAP_TYPE_ARRAY_OF_MAPS:
case BPF_MAP_TYPE_HASH_OF_MAPS:
move_stack2blob(gen, attr_field(map_create_attr, inner_map_fd), 4,
@@ -498,8 +555,8 @@ void bpf_gen__map_create(struct bpf_gen *gen,
/* emit MAP_CREATE command */
emit_sys_bpf(gen, BPF_MAP_CREATE, map_create_attr, attr_size);
debug_ret(gen, "map_create %s idx %d type %d value_size %d value_btf_id %d",
- attr.map_name, map_idx, map_type, value_size,
- attr.btf_value_type_id);
+ map_name, map_idx, map_type, value_size,
+ map_attr->btf_value_type_id);
emit_check_err(gen);
/* remember map_fd in the stack, if successful */
if (map_idx < 0) {
@@ -523,6 +580,29 @@ void bpf_gen__map_create(struct bpf_gen *gen,
emit_sys_close_stack(gen, stack_off(inner_map_fd));
}
+static void emit_signature_match(struct bpf_gen *gen)
+{
+ __s64 off;
+ int i;
+
+ for (i = 0; i < SHA256_DWORD_SIZE; i++) {
+ emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX,
+ 0, 0, 0, 0));
+ emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, i * sizeof(__u64)));
+ gen->hash_insn_offset[i] = gen->insn_cur - gen->insn_start;
+ emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_3, 0, 0, 0, 0, 0));
+
+ off = -(gen->insn_cur - gen->insn_start - gen->cleanup_label) / 8 - 1;
+ if (is_simm16(off)) {
+ emit(gen, BPF_MOV64_IMM(BPF_REG_7, -EINVAL));
+ emit(gen, BPF_JMP_REG(BPF_JNE, BPF_REG_2, BPF_REG_3, off));
+ } else {
+ gen->error = -ERANGE;
+ emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, -1));
+ }
+ }
+}
+
void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
enum bpf_attach_type type)
{
@@ -784,12 +864,12 @@ log:
emit_ksym_relo_log(gen, relo, kdesc->ref);
}
-static __u32 src_reg_mask(void)
+static __u32 src_reg_mask(struct bpf_gen *gen)
{
-#if defined(__LITTLE_ENDIAN_BITFIELD)
- return 0x0f; /* src_reg,dst_reg,... */
-#elif defined(__BIG_ENDIAN_BITFIELD)
- return 0xf0; /* dst_reg,src_reg,... */
+#if defined(__LITTLE_ENDIAN_BITFIELD) /* src_reg,dst_reg,... */
+ return gen->swapped_endian ? 0xf0 : 0x0f;
+#elif defined(__BIG_ENDIAN_BITFIELD) /* dst_reg,src_reg,... */
+ return gen->swapped_endian ? 0x0f : 0xf0;
#else
#error "Unsupported bit endianness, cannot proceed"
#endif
@@ -840,7 +920,7 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 3));
clear_src_reg:
/* clear bpf_object__relocate_data's src_reg assignment, otherwise we get a verifier failure */
- reg_mask = src_reg_mask();
+ reg_mask = src_reg_mask(gen);
emit(gen, BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_8, offsetofend(struct bpf_insn, code)));
emit(gen, BPF_ALU32_IMM(BPF_AND, BPF_REG_9, reg_mask));
emit(gen, BPF_STX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, offsetofend(struct bpf_insn, code)));
@@ -931,48 +1011,94 @@ static void cleanup_relos(struct bpf_gen *gen, int insns)
cleanup_core_relo(gen);
}
+/* Convert func, line, and core relo info blobs to target endianness */
+static void info_blob_bswap(struct bpf_gen *gen, int func_info, int line_info,
+ int core_relos, struct bpf_prog_load_opts *load_attr)
+{
+ struct bpf_func_info *fi = gen->data_start + func_info;
+ struct bpf_line_info *li = gen->data_start + line_info;
+ struct bpf_core_relo *cr = gen->data_start + core_relos;
+ int i;
+
+ for (i = 0; i < load_attr->func_info_cnt; i++)
+ bpf_func_info_bswap(fi++);
+
+ for (i = 0; i < load_attr->line_info_cnt; i++)
+ bpf_line_info_bswap(li++);
+
+ for (i = 0; i < gen->core_relo_cnt; i++)
+ bpf_core_relo_bswap(cr++);
+}
+
void bpf_gen__prog_load(struct bpf_gen *gen,
enum bpf_prog_type prog_type, const char *prog_name,
const char *license, struct bpf_insn *insns, size_t insn_cnt,
struct bpf_prog_load_opts *load_attr, int prog_idx)
{
+ int func_info_tot_sz = load_attr->func_info_cnt *
+ load_attr->func_info_rec_size;
+ int line_info_tot_sz = load_attr->line_info_cnt *
+ load_attr->line_info_rec_size;
+ int core_relo_tot_sz = gen->core_relo_cnt *
+ sizeof(struct bpf_core_relo);
int prog_load_attr, license_off, insns_off, func_info, line_info, core_relos;
int attr_size = offsetofend(union bpf_attr, core_relo_rec_size);
union bpf_attr attr;
memset(&attr, 0, attr_size);
- pr_debug("gen: prog_load: type %d insns_cnt %zd progi_idx %d\n",
- prog_type, insn_cnt, prog_idx);
/* add license string to blob of bytes */
license_off = add_data(gen, license, strlen(license) + 1);
/* add insns to blob of bytes */
insns_off = add_data(gen, insns, insn_cnt * sizeof(struct bpf_insn));
+ pr_debug("gen: prog_load: prog_idx %d type %d insn off %d insns_cnt %zd license off %d\n",
+ prog_idx, prog_type, insns_off, insn_cnt, license_off);
- attr.prog_type = prog_type;
- attr.expected_attach_type = load_attr->expected_attach_type;
- attr.attach_btf_id = load_attr->attach_btf_id;
- attr.prog_ifindex = load_attr->prog_ifindex;
- attr.kern_version = 0;
- attr.insn_cnt = (__u32)insn_cnt;
- attr.prog_flags = load_attr->prog_flags;
-
- attr.func_info_rec_size = load_attr->func_info_rec_size;
- attr.func_info_cnt = load_attr->func_info_cnt;
- func_info = add_data(gen, load_attr->func_info,
- attr.func_info_cnt * attr.func_info_rec_size);
+ /* convert blob insns to target endianness */
+ if (gen->swapped_endian) {
+ struct bpf_insn *insn = gen->data_start + insns_off;
+ int i;
- attr.line_info_rec_size = load_attr->line_info_rec_size;
- attr.line_info_cnt = load_attr->line_info_cnt;
- line_info = add_data(gen, load_attr->line_info,
- attr.line_info_cnt * attr.line_info_rec_size);
+ for (i = 0; i < insn_cnt; i++, insn++)
+ bpf_insn_bswap(insn);
+ }
- attr.core_relo_rec_size = sizeof(struct bpf_core_relo);
- attr.core_relo_cnt = gen->core_relo_cnt;
- core_relos = add_data(gen, gen->core_relos,
- attr.core_relo_cnt * attr.core_relo_rec_size);
+ attr.prog_type = tgt_endian(prog_type);
+ attr.expected_attach_type = tgt_endian(load_attr->expected_attach_type);
+ attr.attach_btf_id = tgt_endian(load_attr->attach_btf_id);
+ attr.prog_ifindex = tgt_endian(load_attr->prog_ifindex);
+ attr.kern_version = 0;
+ attr.insn_cnt = tgt_endian((__u32)insn_cnt);
+ attr.prog_flags = tgt_endian(load_attr->prog_flags);
+
+ attr.func_info_rec_size = tgt_endian(load_attr->func_info_rec_size);
+ attr.func_info_cnt = tgt_endian(load_attr->func_info_cnt);
+ func_info = add_data(gen, load_attr->func_info, func_info_tot_sz);
+ pr_debug("gen: prog_load: func_info: off %d cnt %d rec size %d\n",
+ func_info, load_attr->func_info_cnt,
+ load_attr->func_info_rec_size);
+
+ attr.line_info_rec_size = tgt_endian(load_attr->line_info_rec_size);
+ attr.line_info_cnt = tgt_endian(load_attr->line_info_cnt);
+ line_info = add_data(gen, load_attr->line_info, line_info_tot_sz);
+ pr_debug("gen: prog_load: line_info: off %d cnt %d rec size %d\n",
+ line_info, load_attr->line_info_cnt,
+ load_attr->line_info_rec_size);
+
+ attr.core_relo_rec_size = tgt_endian((__u32)sizeof(struct bpf_core_relo));
+ attr.core_relo_cnt = tgt_endian(gen->core_relo_cnt);
+ core_relos = add_data(gen, gen->core_relos, core_relo_tot_sz);
+ pr_debug("gen: prog_load: core_relos: off %d cnt %d rec size %zd\n",
+ core_relos, gen->core_relo_cnt,
+ sizeof(struct bpf_core_relo));
+
+ /* convert all info blobs to target endianness */
+ if (gen->swapped_endian)
+ info_blob_bswap(gen, func_info, line_info, core_relos, load_attr);
libbpf_strlcpy(attr.prog_name, prog_name, sizeof(attr.prog_name));
prog_load_attr = add_data(gen, &attr, attr_size);
+ pr_debug("gen: prog_load: attr: off %d size %d\n",
+ prog_load_attr, attr_size);
/* populate union bpf_attr with a pointer to license */
emit_rel_store(gen, attr_field(prog_load_attr, license), license_off);
@@ -1040,7 +1166,6 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue,
int zero = 0;
memset(&attr, 0, attr_size);
- pr_debug("gen: map_update_elem: idx %d\n", map_idx);
value = add_data(gen, pvalue, value_size);
key = add_data(gen, &zero, sizeof(zero));
@@ -1068,6 +1193,8 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue,
emit(gen, BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel));
map_update_attr = add_data(gen, &attr, attr_size);
+ pr_debug("gen: map_update_elem: idx %d, value: off %d size %d, attr: off %d size %d\n",
+ map_idx, value, value_size, map_update_attr, attr_size);
move_blob2blob(gen, attr_field(map_update_attr, map_fd), 4,
blob_fd_array_off(gen, map_idx));
emit_rel_store(gen, attr_field(map_update_attr, key), key);
@@ -1084,14 +1211,16 @@ void bpf_gen__populate_outer_map(struct bpf_gen *gen, int outer_map_idx, int slo
int attr_size = offsetofend(union bpf_attr, flags);
int map_update_attr, key;
union bpf_attr attr;
+ int tgt_slot;
memset(&attr, 0, attr_size);
- pr_debug("gen: populate_outer_map: outer %d key %d inner %d\n",
- outer_map_idx, slot, inner_map_idx);
- key = add_data(gen, &slot, sizeof(slot));
+ tgt_slot = tgt_endian(slot);
+ key = add_data(gen, &tgt_slot, sizeof(tgt_slot));
map_update_attr = add_data(gen, &attr, attr_size);
+ pr_debug("gen: populate_outer_map: outer %d key %d inner %d, attr: off %d size %d\n",
+ outer_map_idx, slot, inner_map_idx, map_update_attr, attr_size);
move_blob2blob(gen, attr_field(map_update_attr, map_fd), 4,
blob_fd_array_off(gen, outer_map_idx));
emit_rel_store(gen, attr_field(map_update_attr, key), key);
@@ -1112,8 +1241,9 @@ void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx)
union bpf_attr attr;
memset(&attr, 0, attr_size);
- pr_debug("gen: map_freeze: idx %d\n", map_idx);
map_freeze_attr = add_data(gen, &attr, attr_size);
+ pr_debug("gen: map_freeze: idx %d, attr: off %d size %d\n",
+ map_idx, map_freeze_attr, attr_size);
move_blob2blob(gen, attr_field(map_freeze_attr, map_fd), 4,
blob_fd_array_off(gen, map_idx));
/* emit MAP_FREEZE command */
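The gen_loader changes above convert every multi-byte bpf_attr field and all instructions to the target's byte order before they are baked into the loader blob. A minimal standalone sketch of that host-to-target conversion follows; tgt_endian32() is a local stand-in for libbpf's internal tgt_endian() helper and the swapped_endian flag is hard-coded purely for illustration.

#include <byteswap.h>
#include <stdint.h>
#include <stdio.h>

/* Pretend the target object uses the opposite byte order from the host;
 * in libbpf this flag is derived from the ELF e_ident[EI_DATA] byte.
 */
static int swapped_endian = 1;

/* Convert a 32-bit host value to the target's byte order. */
static uint32_t tgt_endian32(uint32_t x)
{
	return swapped_endian ? bswap_32(x) : x;
}

int main(void)
{
	uint32_t insn_cnt = 42;

	printf("host value: %u, target-order bytes: 0x%08x\n",
	       (unsigned)insn_cnt, (unsigned)tgt_endian32(insn_cnt));
	return 0;
}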
diff --git a/tools/lib/bpf/hashmap.h b/tools/lib/bpf/hashmap.h
index c12f8320e668..0c4f155e8eb7 100644
--- a/tools/lib/bpf/hashmap.h
+++ b/tools/lib/bpf/hashmap.h
@@ -166,8 +166,8 @@ bool hashmap_find(const struct hashmap *map, long key, long *value);
* @bkt: integer used as a bucket loop cursor
*/
#define hashmap__for_each_entry(map, cur, bkt) \
- for (bkt = 0; bkt < map->cap; bkt++) \
- for (cur = map->buckets[bkt]; cur; cur = cur->next)
+ for (bkt = 0; bkt < (map)->cap; bkt++) \
+ for (cur = (map)->buckets[bkt]; cur; cur = cur->next)
/*
* hashmap__for_each_entry_safe - iterate over all entries in hashmap, safe
@@ -178,8 +178,8 @@ bool hashmap_find(const struct hashmap *map, long key, long *value);
* @bkt: integer used as a bucket loop cursor
*/
#define hashmap__for_each_entry_safe(map, cur, tmp, bkt) \
- for (bkt = 0; bkt < map->cap; bkt++) \
- for (cur = map->buckets[bkt]; \
+ for (bkt = 0; bkt < (map)->cap; bkt++) \
+ for (cur = (map)->buckets[bkt]; \
cur && ({tmp = cur->next; true; }); \
cur = tmp)
@@ -190,19 +190,19 @@ bool hashmap_find(const struct hashmap *map, long key, long *value);
* @key: key to iterate entries for
*/
#define hashmap__for_each_key_entry(map, cur, _key) \
- for (cur = map->buckets \
- ? map->buckets[hash_bits(map->hash_fn((_key), map->ctx), map->cap_bits)] \
+ for (cur = (map)->buckets \
+ ? (map)->buckets[hash_bits((map)->hash_fn((_key), (map)->ctx), (map)->cap_bits)] \
: NULL; \
cur; \
cur = cur->next) \
- if (map->equal_fn(cur->key, (_key), map->ctx))
+ if ((map)->equal_fn(cur->key, (_key), (map)->ctx))
#define hashmap__for_each_key_entry_safe(map, cur, tmp, _key) \
- for (cur = map->buckets \
- ? map->buckets[hash_bits(map->hash_fn((_key), map->ctx), map->cap_bits)] \
+ for (cur = (map)->buckets \
+ ? (map)->buckets[hash_bits((map)->hash_fn((_key), (map)->ctx), (map)->cap_bits)] \
: NULL; \
cur && ({ tmp = cur->next; true; }); \
cur = tmp) \
- if (map->equal_fn(cur->key, (_key), map->ctx))
+ if ((map)->equal_fn(cur->key, (_key), (map)->ctx))
#endif /* __LIBBPF_HASHMAP_H */
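The hashmap.h hunk only adds parentheses around the map macro parameter, but that is exactly what makes the iterators safe for argument expressions with lower precedence than ->. A reduced sketch (a toy struct, not the real hashmap) of the failure mode those parentheses avoid:

#include <stdio.h>

struct box { int cap; };

/* BAD_CAP(p + 1) expands to p + 1->cap and does not compile;
 * GOOD_CAP(p + 1) works for any expression argument.
 */
#define BAD_CAP(b)	b->cap
#define GOOD_CAP(b)	(b)->cap

int main(void)
{
	struct box boxes[2] = { { .cap = 4 }, { .cap = 8 } };
	struct box *p = boxes;

	/* printf("%d\n", BAD_CAP(p + 1)); -- would fail to compile */
	printf("%d\n", GOOD_CAP(p + 1)); /* prints 8 */
	return 0;
}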
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index afd09571c482..dd3b2f57082d 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -33,6 +33,7 @@
#include <linux/filter.h>
#include <linux/limits.h>
#include <linux/perf_event.h>
+#include <linux/bpf_perf_event.h>
#include <linux/ring_buffer.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>
@@ -49,7 +50,6 @@
#include "libbpf.h"
#include "bpf.h"
#include "btf.h"
-#include "str_error.h"
#include "libbpf_internal.h"
#include "hashmap.h"
#include "bpf_gen_internal.h"
@@ -59,6 +59,10 @@
#define BPF_FS_MAGIC 0xcafe4a11
#endif
+#define MAX_EVENT_NAME_LEN 64
+
+#define BPF_FS_DEFAULT_PATH "/sys/fs/bpf"
+
#define BPF_INSN_SZ (sizeof(struct bpf_insn))
/* vsprintf() in __base_pr() uses nonliteral format string. It may break
@@ -70,6 +74,7 @@
static struct bpf_map *bpf_object__add_map(struct bpf_object *obj);
static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog);
+static int map_set_def_max_entries(struct bpf_map *map);
static const char * const attach_type_name[] = {
[BPF_CGROUP_INET_INGRESS] = "cgroup_inet_ingress",
@@ -128,6 +133,8 @@ static const char * const attach_type_name[] = {
[BPF_TRACE_UPROBE_MULTI] = "trace_uprobe_multi",
[BPF_NETKIT_PRIMARY] = "netkit_primary",
[BPF_NETKIT_PEER] = "netkit_peer",
+ [BPF_TRACE_KPROBE_SESSION] = "trace_kprobe_session",
+ [BPF_TRACE_UPROBE_SESSION] = "trace_uprobe_session",
};
static const char * const link_type_name[] = {
@@ -145,6 +152,7 @@ static const char * const link_type_name[] = {
[BPF_LINK_TYPE_TCX] = "tcx",
[BPF_LINK_TYPE_UPROBE_MULTI] = "uprobe_multi",
[BPF_LINK_TYPE_NETKIT] = "netkit",
+ [BPF_LINK_TYPE_SOCKMAP] = "sockmap",
};
static const char * const map_type_name[] = {
@@ -181,6 +189,7 @@ static const char * const map_type_name[] = {
[BPF_MAP_TYPE_BLOOM_FILTER] = "bloom_filter",
[BPF_MAP_TYPE_USER_RINGBUF] = "user_ringbuf",
[BPF_MAP_TYPE_CGRP_STORAGE] = "cgrp_storage",
+ [BPF_MAP_TYPE_ARENA] = "arena",
};
static const char * const prog_type_name[] = {
@@ -222,7 +231,30 @@ static const char * const prog_type_name[] = {
static int __base_pr(enum libbpf_print_level level, const char *format,
va_list args)
{
- if (level == LIBBPF_DEBUG)
+ const char *env_var = "LIBBPF_LOG_LEVEL";
+ static enum libbpf_print_level min_level = LIBBPF_INFO;
+ static bool initialized;
+
+ if (!initialized) {
+ char *verbosity;
+
+ initialized = true;
+ verbosity = getenv(env_var);
+ if (verbosity) {
+ if (strcasecmp(verbosity, "warn") == 0)
+ min_level = LIBBPF_WARN;
+ else if (strcasecmp(verbosity, "debug") == 0)
+ min_level = LIBBPF_DEBUG;
+ else if (strcasecmp(verbosity, "info") == 0)
+ min_level = LIBBPF_INFO;
+ else
+ fprintf(stderr, "libbpf: unrecognized '%s' envvar value: '%s', should be one of 'warn', 'debug', or 'info'.\n",
+ env_var, verbosity);
+ }
+ }
+
+ /* if too verbose, skip logging */
+ if (level > min_level)
return 0;
return vfprintf(stderr, format, args);
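The __base_pr() change above makes the default logger honor a LIBBPF_LOG_LEVEL environment variable ('warn', 'info' or 'debug'). Applications that want tighter control can still install their own callback via libbpf_set_print(); a minimal sketch, assuming only the public libbpf headers, that drops debug-level messages:

#include <stdarg.h>
#include <stdio.h>
#include <bpf/libbpf.h>

/* Forward warnings and info to stderr, silence debug output. */
static int quiet_print(enum libbpf_print_level level, const char *fmt, va_list args)
{
	if (level == LIBBPF_DEBUG)
		return 0;
	return vfprintf(stderr, fmt, args);
}

int main(void)
{
	libbpf_set_print(quiet_print);
	/* subsequent bpf_object__open()/load() calls log through quiet_print */
	return 0;
}

Setting LIBBPF_LOG_LEVEL=debug in the environment achieves the opposite effect with the default printer, without any code change.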
@@ -253,7 +285,7 @@ void libbpf_print(enum libbpf_print_level level, const char *format, ...)
old_errno = errno;
va_start(args, format);
- __libbpf_pr(level, format, args);
+ print_fn(level, format, args);
va_end(args);
errno = old_errno;
@@ -285,8 +317,6 @@ static void pr_perm_msg(int err)
buf);
}
-#define STRERR_BUFSIZE 128
-
/* Copied from tools/perf/util/util.h */
#ifndef zfree
# define zfree(ptr) ({ free(*ptr); *ptr = NULL; })
@@ -463,11 +493,10 @@ struct bpf_program {
__u32 line_info_rec_size;
__u32 line_info_cnt;
__u32 prog_flags;
+ __u8 hash[SHA256_DIGEST_LENGTH];
};
struct bpf_struct_ops {
- const char *tname;
- const struct btf_type *type;
struct bpf_program **progs;
__u32 *kern_func_off;
/* e.g. struct tcp_congestion_ops in bpf_prog's btf format */
@@ -493,6 +522,7 @@ struct bpf_struct_ops {
#define KSYMS_SEC ".ksyms"
#define STRUCT_OPS_SEC ".struct_ops"
#define STRUCT_OPS_LINK_SEC ".struct_ops.link"
+#define ARENA_SEC ".addr_space.1"
enum libbpf_map_type {
LIBBPF_MAP_UNSPEC,
@@ -527,6 +557,7 @@ struct bpf_map {
struct bpf_map_def def;
__u32 numa_node;
__u32 btf_var_idx;
+ int mod_btf_fd;
__u32 btf_key_type_id;
__u32 btf_value_type_id;
__u32 btf_vmlinux_value_type_id;
@@ -540,7 +571,9 @@ struct bpf_map {
bool pinned;
bool reused;
bool autocreate;
+ bool autoattach;
__u64 map_extra;
+ struct bpf_program *excl_prog;
};
enum extern_type {
@@ -563,7 +596,7 @@ struct extern_desc {
int sym_idx;
int btf_id;
int sec_btf_id;
- const char *name;
+ char *name;
char *essent_name;
bool is_set;
bool is_weak;
@@ -607,6 +640,7 @@ enum sec_type {
SEC_BSS,
SEC_DATA,
SEC_RODATA,
+ SEC_ST_OPS,
};
struct elf_sec_desc {
@@ -622,8 +656,7 @@ struct elf_state {
Elf *elf;
Elf64_Ehdr *ehdr;
Elf_Data *symbols;
- Elf_Data *st_ops_data;
- Elf_Data *st_ops_link_data;
+ Elf_Data *arena_data;
size_t shstrndx; /* section index for section name strings */
size_t strtabidx;
struct elf_sec_desc *secs;
@@ -632,17 +665,24 @@ struct elf_state {
__u32 btf_maps_sec_btf_id;
int text_shndx;
int symbols_shndx;
- int st_ops_shndx;
- int st_ops_link_shndx;
+ bool has_st_ops;
+ int arena_data_shndx;
};
struct usdt_manager;
+enum bpf_object_state {
+ OBJ_OPEN,
+ OBJ_PREPARED,
+ OBJ_LOADED,
+};
+
struct bpf_object {
char name[BPF_OBJ_NAME_LEN];
char license[64];
__u32 kern_version;
+ enum bpf_object_state state;
struct bpf_program *programs;
size_t nr_programs;
struct bpf_map *maps;
@@ -654,7 +694,6 @@ struct bpf_object {
int nr_extern;
int kconfig_map_idx;
- bool loaded;
bool has_subcalls;
bool has_rodata;
@@ -663,6 +702,8 @@ struct bpf_object {
/* Information when doing ELF related work. Only valid if efile.elf is not NULL */
struct elf_state efile;
+ unsigned char byteorder;
+
struct btf *btf;
struct btf_ext *btf_ext;
@@ -693,6 +734,14 @@ struct bpf_object {
struct usdt_manager *usdt_man;
+ int arena_map_idx;
+ void *arena_data;
+ size_t arena_data_sz;
+
+ struct kern_feature_cache *feat_cache;
+ char *token_path;
+ int token_fd;
+
char path[];
};
@@ -848,7 +897,7 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
return -LIBBPF_ERRNO__FORMAT;
}
- if (sec_off + prog_sz > sec_sz) {
+ if (sec_off + prog_sz > sec_sz || sec_off + prog_sz < sec_off) {
pr_warn("sec '%s': program at offset %zu crosses section boundary\n",
sec_name, sec_off);
return -LIBBPF_ERRNO__FORMAT;
@@ -901,6 +950,20 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
return 0;
}
+static void bpf_object_bswap_progs(struct bpf_object *obj)
+{
+ struct bpf_program *prog = obj->programs;
+ struct bpf_insn *insn;
+ int p, i;
+
+ for (p = 0; p < obj->nr_programs; p++, prog++) {
+ insn = prog->insns;
+ for (i = 0; i < prog->insns_cnt; i++, insn++)
+ bpf_insn_bswap(insn);
+ }
+ pr_debug("converted %zu BPF programs to native byte order\n", obj->nr_programs);
+}
+
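bpf_object_bswap_progs() relies on a per-instruction swap helper. A rough standalone approximation of what swapping one struct bpf_insn involves is shown below; the dst_reg/src_reg exchange is needed because those two 4-bit fields share a byte and their bitfield layout follows the byte order. This is an illustration, not libbpf's exact helper.

#include <byteswap.h>
#include <linux/bpf.h>

/* Swap the multi-byte fields of one instruction and exchange the
 * dst_reg/src_reg nibbles, which occupy a single byte.
 */
static void insn_bswap(struct bpf_insn *insn)
{
	__u8 tmp_reg = insn->dst_reg;

	insn->dst_reg = insn->src_reg;
	insn->src_reg = tmp_reg;
	insn->off = bswap_16(insn->off);
	insn->imm = bswap_32(insn->imm);
}

int main(void)
{
	struct bpf_insn insn = { .code = 0xb7, .imm = 1 };	/* r0 = 1 */

	insn_bswap(&insn);
	return insn.imm == (int)bswap_32(1) ? 0 : 1;
}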
static const struct btf_member *
find_member_by_offset(const struct btf_type *t, __u32 bit_offset)
{
@@ -930,43 +993,52 @@ find_member_by_name(const struct btf *btf, const struct btf_type *t,
return NULL;
}
+static int find_ksym_btf_id(struct bpf_object *obj, const char *ksym_name,
+ __u16 kind, struct btf **res_btf,
+ struct module_btf **res_mod_btf);
+
#define STRUCT_OPS_VALUE_PREFIX "bpf_struct_ops_"
static int find_btf_by_prefix_kind(const struct btf *btf, const char *prefix,
const char *name, __u32 kind);
static int
-find_struct_ops_kern_types(const struct btf *btf, const char *tname,
+find_struct_ops_kern_types(struct bpf_object *obj, const char *tname_raw,
+ struct module_btf **mod_btf,
const struct btf_type **type, __u32 *type_id,
const struct btf_type **vtype, __u32 *vtype_id,
const struct btf_member **data_member)
{
const struct btf_type *kern_type, *kern_vtype;
const struct btf_member *kern_data_member;
+ struct btf *btf = NULL;
__s32 kern_vtype_id, kern_type_id;
+ char tname[192], stname[256];
__u32 i;
- kern_type_id = btf__find_by_name_kind(btf, tname, BTF_KIND_STRUCT);
- if (kern_type_id < 0) {
- pr_warn("struct_ops init_kern: struct %s is not found in kernel BTF\n",
- tname);
- return kern_type_id;
- }
- kern_type = btf__type_by_id(btf, kern_type_id);
+ snprintf(tname, sizeof(tname), "%.*s",
+ (int)bpf_core_essential_name_len(tname_raw), tname_raw);
- /* Find the corresponding "map_value" type that will be used
- * in map_update(BPF_MAP_TYPE_STRUCT_OPS). For example,
- * find "struct bpf_struct_ops_tcp_congestion_ops" from the
- * btf_vmlinux.
+ snprintf(stname, sizeof(stname), "%s%s", STRUCT_OPS_VALUE_PREFIX, tname);
+
+ /* Look for the corresponding "map_value" type that will be used
+ * in map_update(BPF_MAP_TYPE_STRUCT_OPS) first, figure out the btf
+ * and the mod_btf.
+ * For example, find "struct bpf_struct_ops_tcp_congestion_ops".
*/
- kern_vtype_id = find_btf_by_prefix_kind(btf, STRUCT_OPS_VALUE_PREFIX,
- tname, BTF_KIND_STRUCT);
+ kern_vtype_id = find_ksym_btf_id(obj, stname, BTF_KIND_STRUCT, &btf, mod_btf);
if (kern_vtype_id < 0) {
- pr_warn("struct_ops init_kern: struct %s%s is not found in kernel BTF\n",
- STRUCT_OPS_VALUE_PREFIX, tname);
+ pr_warn("struct_ops init_kern: struct %s is not found in kernel BTF\n", stname);
return kern_vtype_id;
}
kern_vtype = btf__type_by_id(btf, kern_vtype_id);
+ kern_type_id = btf__find_by_name_kind(btf, tname, BTF_KIND_STRUCT);
+ if (kern_type_id < 0) {
+ pr_warn("struct_ops init_kern: struct %s is not found in kernel BTF\n", tname);
+ return kern_type_id;
+ }
+ kern_type = btf__type_by_id(btf, kern_type_id);
+
/* Find "struct tcp_congestion_ops" from
* struct bpf_struct_ops_tcp_congestion_ops {
* [ ... ]
@@ -979,8 +1051,8 @@ find_struct_ops_kern_types(const struct btf *btf, const char *tname,
break;
}
if (i == btf_vlen(kern_vtype)) {
- pr_warn("struct_ops init_kern: struct %s data is not found in struct %s%s\n",
- tname, STRUCT_OPS_VALUE_PREFIX, tname);
+ pr_warn("struct_ops init_kern: struct %s data is not found in struct %s\n",
+ tname, stname);
return -EINVAL;
}
@@ -998,32 +1070,95 @@ static bool bpf_map__is_struct_ops(const struct bpf_map *map)
return map->def.type == BPF_MAP_TYPE_STRUCT_OPS;
}
+static bool is_valid_st_ops_program(struct bpf_object *obj,
+ const struct bpf_program *prog)
+{
+ int i;
+
+ for (i = 0; i < obj->nr_programs; i++) {
+ if (&obj->programs[i] == prog)
+ return prog->type == BPF_PROG_TYPE_STRUCT_OPS;
+ }
+
+ return false;
+}
+
+/* For each struct_ops program P, referenced from some struct_ops map M,
+ * enable P.autoload if there are Ms for which M.autocreate is true,
+ * disable P.autoload if for all Ms M.autocreate is false.
+ * Don't change P.autoload for programs that are not referenced from any maps.
+ */
+static int bpf_object_adjust_struct_ops_autoload(struct bpf_object *obj)
+{
+ struct bpf_program *prog, *slot_prog;
+ struct bpf_map *map;
+ int i, j, k, vlen;
+
+ for (i = 0; i < obj->nr_programs; ++i) {
+ int should_load = false;
+ int use_cnt = 0;
+
+ prog = &obj->programs[i];
+ if (prog->type != BPF_PROG_TYPE_STRUCT_OPS)
+ continue;
+
+ for (j = 0; j < obj->nr_maps; ++j) {
+ const struct btf_type *type;
+
+ map = &obj->maps[j];
+ if (!bpf_map__is_struct_ops(map))
+ continue;
+
+ type = btf__type_by_id(obj->btf, map->st_ops->type_id);
+ vlen = btf_vlen(type);
+ for (k = 0; k < vlen; ++k) {
+ slot_prog = map->st_ops->progs[k];
+ if (prog != slot_prog)
+ continue;
+
+ use_cnt++;
+ if (map->autocreate)
+ should_load = true;
+ }
+ }
+ if (use_cnt)
+ prog->autoload = should_load;
+ }
+
+ return 0;
+}
+
/* Init the map's fields that depend on kern_btf */
-static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
- const struct btf *btf,
- const struct btf *kern_btf)
+static int bpf_map__init_kern_struct_ops(struct bpf_map *map)
{
const struct btf_member *member, *kern_member, *kern_data_member;
const struct btf_type *type, *kern_type, *kern_vtype;
__u32 i, kern_type_id, kern_vtype_id, kern_data_off;
+ struct bpf_object *obj = map->obj;
+ const struct btf *btf = obj->btf;
struct bpf_struct_ops *st_ops;
+ const struct btf *kern_btf;
+ struct module_btf *mod_btf = NULL;
void *data, *kern_data;
const char *tname;
int err;
st_ops = map->st_ops;
- type = st_ops->type;
- tname = st_ops->tname;
- err = find_struct_ops_kern_types(kern_btf, tname,
+ type = btf__type_by_id(btf, st_ops->type_id);
+ tname = btf__name_by_offset(btf, type->name_off);
+ err = find_struct_ops_kern_types(obj, tname, &mod_btf,
&kern_type, &kern_type_id,
&kern_vtype, &kern_vtype_id,
&kern_data_member);
if (err)
return err;
+ kern_btf = mod_btf ? mod_btf->btf : obj->btf_vmlinux;
+
pr_debug("struct_ops init_kern %s: type_id:%u kern_type_id:%u kern_vtype_id:%u\n",
map->name, st_ops->type_id, kern_type_id, kern_vtype_id);
+ map->mod_btf_fd = mod_btf ? mod_btf->fd : -1;
map->def.value_size = kern_vtype->size;
map->btf_vmlinux_value_type_id = kern_vtype_id;
@@ -1040,17 +1175,46 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
const struct btf_type *mtype, *kern_mtype;
__u32 mtype_id, kern_mtype_id;
void *mdata, *kern_mdata;
+ struct bpf_program *prog;
__s64 msize, kern_msize;
__u32 moff, kern_moff;
__u32 kern_member_idx;
const char *mname;
mname = btf__name_by_offset(btf, member->name_off);
+ moff = member->offset / 8;
+ mdata = data + moff;
+ msize = btf__resolve_size(btf, member->type);
+ if (msize < 0) {
+ pr_warn("struct_ops init_kern %s: failed to resolve the size of member %s\n",
+ map->name, mname);
+ return msize;
+ }
+
kern_member = find_member_by_name(kern_btf, kern_type, mname);
if (!kern_member) {
- pr_warn("struct_ops init_kern %s: Cannot find member %s in kernel BTF\n",
+ if (!libbpf_is_mem_zeroed(mdata, msize)) {
+ pr_warn("struct_ops init_kern %s: Cannot find member %s in kernel BTF\n",
+ map->name, mname);
+ return -ENOTSUP;
+ }
+
+ if (st_ops->progs[i]) {
+ /* If we had declaratively set struct_ops callback, we need to
+ * force its autoload to false, because it doesn't have
+ * a chance of succeeding from POV of the current struct_ops map.
+ * If this program is still referenced somewhere else, though,
+ * then bpf_object_adjust_struct_ops_autoload() will update its
+ * autoload accordingly.
+ */
+ st_ops->progs[i]->autoload = false;
+ st_ops->progs[i] = NULL;
+ }
+
+ /* Skip all-zero/NULL fields if they are not present in the kernel BTF */
+ pr_info("struct_ops %s: member %s not found in kernel, skipping it as it's set to zero\n",
map->name, mname);
- return -ENOTSUP;
+ continue;
}
kern_member_idx = kern_member - btf_members(kern_type);
@@ -1061,10 +1225,7 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
return -ENOTSUP;
}
- moff = member->offset / 8;
kern_moff = kern_member->offset / 8;
-
- mdata = data + moff;
kern_mdata = kern_data + kern_moff;
mtype = skip_mods_and_typedefs(btf, member->type, &mtype_id);
@@ -1079,12 +1240,25 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
}
if (btf_is_ptr(mtype)) {
- struct bpf_program *prog;
+ prog = *(void **)mdata;
+ /* just like for !kern_member case above, reset declaratively
+ * set (at compile time) program's autoload to false,
+ * if user replaced it with another program or NULL
+ */
+ if (st_ops->progs[i] && st_ops->progs[i] != prog)
+ st_ops->progs[i]->autoload = false;
- prog = st_ops->progs[i];
+ /* Update the value from the shadow type */
+ st_ops->progs[i] = prog;
if (!prog)
continue;
+ if (!is_valid_st_ops_program(obj, prog)) {
+ pr_warn("struct_ops init_kern %s: member %s is not a struct_ops program\n",
+ map->name, mname);
+ return -ENOTSUP;
+ }
+
kern_mtype = skip_mods_and_typedefs(kern_btf,
kern_mtype->type,
&kern_mtype_id);
@@ -1099,8 +1273,34 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
return -ENOTSUP;
}
- prog->attach_btf_id = kern_type_id;
- prog->expected_attach_type = kern_member_idx;
+ if (mod_btf)
+ prog->attach_btf_obj_fd = mod_btf->fd;
+
+ /* if we haven't yet processed this BPF program, record proper
+ * attach_btf_id and member_idx
+ */
+ if (!prog->attach_btf_id) {
+ prog->attach_btf_id = kern_type_id;
+ prog->expected_attach_type = kern_member_idx;
+ }
+
+ /* struct_ops BPF prog can be re-used between multiple
+ * .struct_ops & .struct_ops.link as long as it's the
+ * same struct_ops struct definition and the same
+ * function pointer field
+ */
+ if (prog->attach_btf_id != kern_type_id) {
+ pr_warn("struct_ops init_kern %s func ptr %s: invalid reuse of prog %s in sec %s with type %u: attach_btf_id %u != kern_type_id %u\n",
+ map->name, mname, prog->name, prog->sec_name, prog->type,
+ prog->attach_btf_id, kern_type_id);
+ return -EINVAL;
+ }
+ if (prog->expected_attach_type != kern_member_idx) {
+ pr_warn("struct_ops init_kern %s func ptr %s: invalid reuse of prog %s in sec %s with type %u: expected_attach_type %u != kern_member_idx %u\n",
+ map->name, mname, prog->name, prog->sec_name, prog->type,
+ prog->expected_attach_type, kern_member_idx);
+ return -EINVAL;
+ }
st_ops->kern_func_off[i] = kern_data_off + kern_moff;
@@ -1111,9 +1311,8 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
continue;
}
- msize = btf__resolve_size(btf, mtype_id);
kern_msize = btf__resolve_size(kern_btf, kern_mtype_id);
- if (msize < 0 || kern_msize < 0 || msize != kern_msize) {
+ if (kern_msize < 0 || msize != kern_msize) {
pr_warn("struct_ops init_kern %s: Error in size of member %s: %zd != %zd(kernel)\n",
map->name, mname, (ssize_t)msize,
(ssize_t)kern_msize);
@@ -1141,8 +1340,10 @@ static int bpf_object__init_kern_struct_ops_maps(struct bpf_object *obj)
if (!bpf_map__is_struct_ops(map))
continue;
- err = bpf_map__init_kern_struct_ops(map, obj->btf,
- obj->btf_vmlinux);
+ if (!map->autocreate)
+ continue;
+
+ err = bpf_map__init_kern_struct_ops(map);
if (err)
return err;
}
@@ -1151,7 +1352,7 @@ static int bpf_object__init_kern_struct_ops_maps(struct bpf_object *obj)
}
static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
- int shndx, Elf_Data *data, __u32 map_flags)
+ int shndx, Elf_Data *data)
{
const struct btf_type *type, *datasec;
const struct btf_var_secinfo *vsi;
@@ -1207,12 +1408,23 @@ static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
map->name = strdup(var_name);
if (!map->name)
return -ENOMEM;
+ map->btf_value_type_id = type_id;
+
+ /* Follow same convention as for programs autoload:
+ * SEC("?.struct_ops") means map is not created by default.
+ */
+ if (sec_name[0] == '?') {
+ map->autocreate = false;
+ /* from now on forget there was ? in section name */
+ sec_name++;
+ }
map->def.type = BPF_MAP_TYPE_STRUCT_OPS;
map->def.key_size = sizeof(int);
map->def.value_size = type->size;
map->def.max_entries = 1;
- map->def.map_flags = map_flags;
+ map->def.map_flags = strcmp(sec_name, STRUCT_OPS_LINK_SEC) == 0 ? BPF_F_LINK : 0;
+ map->autoattach = true;
map->st_ops = calloc(1, sizeof(*map->st_ops));
if (!map->st_ops)
@@ -1234,8 +1446,6 @@ static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
memcpy(st_ops->data,
data->d_buf + vsi->offset,
type->size);
- st_ops->tname = tname;
- st_ops->type = type;
st_ops->type_id = type_id;
pr_debug("struct_ops init: struct %s(type_id=%u) %s found at offset %u\n",
@@ -1247,15 +1457,25 @@ static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
static int bpf_object_init_struct_ops(struct bpf_object *obj)
{
- int err;
+ const char *sec_name;
+ int sec_idx, err;
- err = init_struct_ops_maps(obj, STRUCT_OPS_SEC, obj->efile.st_ops_shndx,
- obj->efile.st_ops_data, 0);
- err = err ?: init_struct_ops_maps(obj, STRUCT_OPS_LINK_SEC,
- obj->efile.st_ops_link_shndx,
- obj->efile.st_ops_link_data,
- BPF_F_LINK);
- return err;
+ for (sec_idx = 0; sec_idx < obj->efile.sec_cnt; ++sec_idx) {
+ struct elf_sec_desc *desc = &obj->efile.secs[sec_idx];
+
+ if (desc->sec_type != SEC_ST_OPS)
+ continue;
+
+ sec_name = elf_sec_name(obj, elf_sec_by_idx(obj, sec_idx));
+ if (!sec_name)
+ return -LIBBPF_ERRNO__FORMAT;
+
+ err = init_struct_ops_maps(obj, sec_name, sec_idx, desc->data);
+ if (err)
+ return err;
+ }
+
+ return 0;
}
static struct bpf_object *bpf_object__new(const char *path,
@@ -1293,12 +1513,11 @@ static struct bpf_object *bpf_object__new(const char *path,
obj->efile.obj_buf = obj_buf;
obj->efile.obj_buf_sz = obj_buf_sz;
obj->efile.btf_maps_shndx = -1;
- obj->efile.st_ops_shndx = -1;
- obj->efile.st_ops_link_shndx = -1;
obj->kconfig_map_idx = -1;
+ obj->arena_map_idx = -1;
obj->kern_version = get_kernel_version();
- obj->loaded = false;
+ obj->state = OBJ_OPEN;
return obj;
}
@@ -1310,9 +1529,9 @@ static void bpf_object__elf_finish(struct bpf_object *obj)
elf_end(obj->efile.elf);
obj->efile.elf = NULL;
+ obj->efile.ehdr = NULL;
obj->efile.symbols = NULL;
- obj->efile.st_ops_data = NULL;
- obj->efile.st_ops_link_data = NULL;
+ obj->efile.arena_data = NULL;
zfree(&obj->efile.secs);
obj->efile.sec_cnt = 0;
@@ -1338,11 +1557,8 @@ static int bpf_object__elf_init(struct bpf_object *obj)
} else {
obj->efile.fd = open(obj->path, O_RDONLY | O_CLOEXEC);
if (obj->efile.fd < 0) {
- char errmsg[STRERR_BUFSIZE], *cp;
-
err = -errno;
- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
- pr_warn("elf: failed to open %s: %s\n", obj->path, cp);
+ pr_warn("elf: failed to open %s: %s\n", obj->path, errstr(err));
return err;
}
@@ -1376,6 +1592,16 @@ static int bpf_object__elf_init(struct bpf_object *obj)
goto errout;
}
+ /* Validate ELF object endianness... */
+ if (ehdr->e_ident[EI_DATA] != ELFDATA2LSB &&
+ ehdr->e_ident[EI_DATA] != ELFDATA2MSB) {
+ err = -LIBBPF_ERRNO__ENDIAN;
+ pr_warn("elf: '%s' has unknown byte order\n", obj->path);
+ goto errout;
+ }
+ /* and save after bpf_object_open() frees ELF data */
+ obj->byteorder = ehdr->e_ident[EI_DATA];
+
if (elf_getshdrstrndx(elf, &obj->efile.shstrndx)) {
pr_warn("elf: failed to get section names section index for %s: %s\n",
obj->path, elf_errmsg(-1));
@@ -1404,19 +1630,15 @@ errout:
return err;
}
-static int bpf_object__check_endianness(struct bpf_object *obj)
+static bool is_native_endianness(struct bpf_object *obj)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
- if (obj->efile.ehdr->e_ident[EI_DATA] == ELFDATA2LSB)
- return 0;
+ return obj->byteorder == ELFDATA2LSB;
#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- if (obj->efile.ehdr->e_ident[EI_DATA] == ELFDATA2MSB)
- return 0;
+ return obj->byteorder == ELFDATA2MSB;
#else
# error "Unrecognized __BYTE_ORDER__"
#endif
- pr_warn("elf: endianness mismatch in %s.\n", obj->path);
- return -LIBBPF_ERRNO__ENDIAN;
}
static int
@@ -1503,11 +1725,27 @@ static Elf64_Sym *find_elf_var_sym(const struct bpf_object *obj, const char *nam
return ERR_PTR(-ENOENT);
}
+#ifndef MFD_CLOEXEC
+#define MFD_CLOEXEC 0x0001U
+#endif
+#ifndef MFD_NOEXEC_SEAL
+#define MFD_NOEXEC_SEAL 0x0008U
+#endif
+
static int create_placeholder_fd(void)
{
+ unsigned int flags = MFD_CLOEXEC | MFD_NOEXEC_SEAL;
+ const char *name = "libbpf-placeholder-fd";
int fd;
- fd = ensure_good_fd(memfd_create("libbpf-placeholder-fd", MFD_CLOEXEC));
+ fd = ensure_good_fd(sys_memfd_create(name, flags));
+ if (fd >= 0)
+ return fd;
+ else if (errno != EINVAL)
+ return -errno;
+
+ /* Possibly running on kernel without MFD_NOEXEC_SEAL */
+ fd = ensure_good_fd(sys_memfd_create(name, flags & ~MFD_NOEXEC_SEAL));
if (fd < 0)
return -errno;
return fd;
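The placeholder-FD change tries memfd_create() with MFD_NOEXEC_SEAL first and retries without it when the running kernel predates that flag (EINVAL). A standalone sketch of the same fallback, using the glibc memfd_create() wrapper rather than libbpf's sys_memfd_create():

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MFD_NOEXEC_SEAL
#define MFD_NOEXEC_SEAL 0x0008U
#endif

static int make_placeholder_fd(void)
{
	unsigned int flags = MFD_CLOEXEC | MFD_NOEXEC_SEAL;
	int fd;

	fd = memfd_create("placeholder", flags);
	if (fd >= 0 || errno != EINVAL)
		return fd >= 0 ? fd : -errno;

	/* older kernel: retry without the NOEXEC seal */
	fd = memfd_create("placeholder", flags & ~MFD_NOEXEC_SEAL);
	return fd >= 0 ? fd : -errno;
}

int main(void)
{
	int fd = make_placeholder_fd();

	if (fd < 0) {
		fprintf(stderr, "memfd_create failed: %d\n", fd);
		return 1;
	}
	close(fd);
	return 0;
}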
@@ -1546,7 +1784,7 @@ static struct bpf_map *bpf_object__add_map(struct bpf_object *obj)
return map;
}
-static size_t bpf_map_mmap_sz(unsigned int value_sz, unsigned int max_entries)
+static size_t array_map_mmap_sz(unsigned int value_sz, unsigned int max_entries)
{
const long page_sz = sysconf(_SC_PAGE_SIZE);
size_t map_sz;
@@ -1556,6 +1794,20 @@ static size_t bpf_map_mmap_sz(unsigned int value_sz, unsigned int max_entries)
return map_sz;
}
+static size_t bpf_map_mmap_sz(const struct bpf_map *map)
+{
+ const long page_sz = sysconf(_SC_PAGE_SIZE);
+
+ switch (map->def.type) {
+ case BPF_MAP_TYPE_ARRAY:
+ return array_map_mmap_sz(map->def.value_size, map->def.max_entries);
+ case BPF_MAP_TYPE_ARENA:
+ return page_sz * map->def.max_entries;
+ default:
+ return 0; /* not supported */
+ }
+}
+
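bpf_map_mmap_sz() above uses two sizing rules: array maps round value_size * max_entries up to a whole page, while arena maps are sized as max_entries pages. A small worked sketch of that arithmetic, with the page size taken from sysconf():

#include <stdio.h>
#include <unistd.h>

/* Round n up to the next multiple of align. */
static size_t round_up(size_t n, size_t align)
{
	return (n + align - 1) / align * align;
}

int main(void)
{
	const size_t page_sz = (size_t)sysconf(_SC_PAGE_SIZE);
	size_t array_sz = round_up(8 * 1000, page_sz);	/* 8-byte values, 1000 entries */
	size_t arena_sz = page_sz * 100;		/* arena map with max_entries = 100 pages */

	printf("array mmap size: %zu, arena mmap size: %zu\n", array_sz, arena_sz);
	return 0;
}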
static int bpf_map_mmap_resize(struct bpf_map *map, size_t old_sz, size_t new_sz)
{
void *mmaped;
@@ -1626,7 +1878,7 @@ static char *internal_map_name(struct bpf_object *obj, const char *real_name)
snprintf(map_name, sizeof(map_name), "%.*s%.*s", pfx_len, obj->name,
sfx_len, real_name);
- /* sanitise map name to characters allowed by kernel */
+ /* sanitize map name to characters allowed by kernel */
for (p = map_name; *p && p < map_name + sizeof(map_name); p++)
if (!isalnum(*p) && *p != '_' && *p != '.')
*p = '_';
@@ -1698,7 +1950,7 @@ bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type,
def->value_size = data_sz;
def->max_entries = 1;
def->map_flags = type == LIBBPF_MAP_RODATA || type == LIBBPF_MAP_KCONFIG
- ? BPF_F_RDONLY_PROG : 0;
+ ? BPF_F_RDONLY_PROG : 0;
/* failures are fine because of maps like .rodata.str1.1 */
(void) map_fill_btf_type_info(obj, map);
@@ -1709,14 +1961,13 @@ bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type,
pr_debug("map '%s' (global data): at sec_idx %d, offset %zu, flags %x.\n",
map->name, map->sec_idx, map->sec_offset, def->map_flags);
- mmap_sz = bpf_map_mmap_sz(map->def.value_size, map->def.max_entries);
+ mmap_sz = bpf_map_mmap_sz(map);
map->mmaped = mmap(NULL, mmap_sz, PROT_READ | PROT_WRITE,
MAP_SHARED | MAP_ANONYMOUS, -1, 0);
if (map->mmaped == MAP_FAILED) {
err = -errno;
map->mmaped = NULL;
- pr_warn("failed to alloc map '%s' content buffer: %d\n",
- map->name, err);
+ pr_warn("failed to alloc map '%s' content buffer: %s\n", map->name, errstr(err));
zfree(&map->real_name);
zfree(&map->name);
return err;
@@ -1791,6 +2042,20 @@ static struct extern_desc *find_extern_by_name(const struct bpf_object *obj,
return NULL;
}
+static struct extern_desc *find_extern_by_name_with_len(const struct bpf_object *obj,
+ const void *name, int len)
+{
+ const char *ext_name;
+ int i;
+
+ for (i = 0; i < obj->nr_extern; i++) {
+ ext_name = obj->externs[i].name;
+ if (strlen(ext_name) == len && strncmp(ext_name, name, len) == 0)
+ return &obj->externs[i];
+ }
+ return NULL;
+}
+
static int set_kcfg_value_tri(struct extern_desc *ext, void *ext_val,
char value)
{
@@ -1838,7 +2103,7 @@ static int set_kcfg_value_str(struct extern_desc *ext, char *ext_val,
}
len = strlen(value);
- if (value[len - 1] != '"') {
+ if (len < 2 || value[len - 1] != '"') {
pr_warn("extern (kcfg) '%s': invalid string config '%s'\n",
ext->name, value);
return -EINVAL;
@@ -1866,7 +2131,7 @@ static int parse_u64(const char *value, __u64 *res)
*res = strtoull(value, &value_end, 0);
if (errno) {
err = -errno;
- pr_warn("failed to parse '%s' as integer: %d\n", value, err);
+ pr_warn("failed to parse '%s': %s\n", value, errstr(err));
return err;
}
if (*value_end) {
@@ -2032,8 +2297,8 @@ static int bpf_object__read_kconfig_file(struct bpf_object *obj, void *data)
while (gzgets(file, buf, sizeof(buf))) {
err = bpf_object__process_kconfig_line(obj, buf, data);
if (err) {
- pr_warn("error parsing system Kconfig line '%s': %d\n",
- buf, err);
+ pr_warn("error parsing system Kconfig line '%s': %s\n",
+ buf, errstr(err));
goto out;
}
}
@@ -2053,15 +2318,15 @@ static int bpf_object__read_kconfig_mem(struct bpf_object *obj,
file = fmemopen((void *)config, strlen(config), "r");
if (!file) {
err = -errno;
- pr_warn("failed to open in-memory Kconfig: %d\n", err);
+ pr_warn("failed to open in-memory Kconfig: %s\n", errstr(err));
return err;
}
while (fgets(buf, sizeof(buf), file)) {
err = bpf_object__process_kconfig_line(obj, buf, data);
if (err) {
- pr_warn("error parsing in-memory Kconfig line '%s': %d\n",
- buf, err);
+ pr_warn("error parsing in-memory Kconfig line '%s': %s\n",
+ buf, errstr(err));
break;
}
}
@@ -2197,6 +2462,46 @@ static bool get_map_field_int(const char *map_name, const struct btf *btf,
return true;
}
+static bool get_map_field_long(const char *map_name, const struct btf *btf,
+ const struct btf_member *m, __u64 *res)
+{
+ const struct btf_type *t = skip_mods_and_typedefs(btf, m->type, NULL);
+ const char *name = btf__name_by_offset(btf, m->name_off);
+
+ if (btf_is_ptr(t)) {
+ __u32 res32;
+ bool ret;
+
+ ret = get_map_field_int(map_name, btf, m, &res32);
+ if (ret)
+ *res = (__u64)res32;
+ return ret;
+ }
+
+ if (!btf_is_enum(t) && !btf_is_enum64(t)) {
+ pr_warn("map '%s': attr '%s': expected ENUM or ENUM64, got %s.\n",
+ map_name, name, btf_kind_str(t));
+ return false;
+ }
+
+ if (btf_vlen(t) != 1) {
+ pr_warn("map '%s': attr '%s': invalid __ulong\n",
+ map_name, name);
+ return false;
+ }
+
+ if (btf_is_enum(t)) {
+ const struct btf_enum *e = btf_enum(t);
+
+ *res = e->val;
+ } else {
+ const struct btf_enum64 *e = btf_enum64(t);
+
+ *res = btf_enum64_value(e);
+ }
+ return true;
+}
+
static int pathname_concat(char *buf, size_t buf_sz, const char *path, const char *name)
{
int len;
@@ -2216,7 +2521,7 @@ static int build_map_pin_path(struct bpf_map *map, const char *path)
int err;
if (!path)
- path = "/sys/fs/bpf";
+ path = BPF_FS_DEFAULT_PATH;
err = pathname_concat(buf, sizeof(buf), path, bpf_map__name(map));
if (err)
@@ -2430,9 +2735,9 @@ int parse_btf_map_def(const char *map_name, struct btf *btf,
map_def->pinning = val;
map_def->parts |= MAP_DEF_PINNING;
} else if (strcmp(name, "map_extra") == 0) {
- __u32 map_extra;
+ __u64 map_extra;
- if (!get_map_field_int(map_name, btf, m, &map_extra))
+ if (!get_map_field_long(map_name, btf, m, &map_extra))
return -EINVAL;
map_def->map_extra = map_extra;
map_def->parts |= MAP_DEF_MAP_EXTRA;
@@ -2650,6 +2955,32 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
return 0;
}
+static int init_arena_map_data(struct bpf_object *obj, struct bpf_map *map,
+ const char *sec_name, int sec_idx,
+ void *data, size_t data_sz)
+{
+ const long page_sz = sysconf(_SC_PAGE_SIZE);
+ size_t mmap_sz;
+
+ mmap_sz = bpf_map_mmap_sz(map);
+ if (roundup(data_sz, page_sz) > mmap_sz) {
+ pr_warn("elf: sec '%s': declared ARENA map size (%zu) is too small to hold global __arena variables of size %zu\n",
+ sec_name, mmap_sz, data_sz);
+ return -E2BIG;
+ }
+
+ obj->arena_data = malloc(data_sz);
+ if (!obj->arena_data)
+ return -ENOMEM;
+ memcpy(obj->arena_data, data, data_sz);
+ obj->arena_data_sz = data_sz;
+
+ /* make bpf_map__init_value() work for ARENA maps */
+ map->mmaped = obj->arena_data;
+
+ return 0;
+}
+
static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
const char *pin_root_path)
{
@@ -2699,6 +3030,33 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
return err;
}
+ for (i = 0; i < obj->nr_maps; i++) {
+ struct bpf_map *map = &obj->maps[i];
+
+ if (map->def.type != BPF_MAP_TYPE_ARENA)
+ continue;
+
+ if (obj->arena_map_idx >= 0) {
+ pr_warn("map '%s': only single ARENA map is supported (map '%s' is also ARENA)\n",
+ map->name, obj->maps[obj->arena_map_idx].name);
+ return -EINVAL;
+ }
+ obj->arena_map_idx = i;
+
+ if (obj->efile.arena_data) {
+ err = init_arena_map_data(obj, map, ARENA_SEC, obj->efile.arena_data_shndx,
+ obj->efile.arena_data->d_buf,
+ obj->efile.arena_data->d_size);
+ if (err)
+ return err;
+ }
+ }
+ if (obj->efile.arena_data && obj->arena_map_idx < 0) {
+ pr_warn("elf: sec '%s': to use global __arena variables the ARENA map should be explicitly declared in SEC(\".maps\")\n",
+ ARENA_SEC);
+ return -ENOENT;
+ }
+
return 0;
}
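The checks above require exactly one arena map declared in SEC(".maps") whenever the object contains global __arena variables. For reference, a hedged sketch of how such a map is typically declared on the BPF C side (built with clang -target bpf; the map name and page count are illustrative, __uint/SEC come from bpf_helpers.h):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARENA);
	__uint(map_flags, BPF_F_MMAPABLE);
	__uint(max_entries, 10);	/* number of arena pages */
} arena SEC(".maps");

char LICENSE[] SEC("license") = "GPL";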
@@ -2731,6 +3089,11 @@ static bool section_have_execinstr(struct bpf_object *obj, int idx)
return sh->sh_flags & SHF_EXECINSTR;
}
+static bool starts_with_qmark(const char *s)
+{
+ return s && s[0] == '?';
+}
+
static bool btf_needs_sanitization(struct bpf_object *obj)
{
bool has_func_global = kernel_supports(obj, FEAT_BTF_GLOBAL_FUNC);
@@ -2740,9 +3103,10 @@ static bool btf_needs_sanitization(struct bpf_object *obj)
bool has_decl_tag = kernel_supports(obj, FEAT_BTF_DECL_TAG);
bool has_type_tag = kernel_supports(obj, FEAT_BTF_TYPE_TAG);
bool has_enum64 = kernel_supports(obj, FEAT_BTF_ENUM64);
+ bool has_qmark_datasec = kernel_supports(obj, FEAT_BTF_QMARK_DATASEC);
return !has_func || !has_datasec || !has_func_global || !has_float ||
- !has_decl_tag || !has_type_tag || !has_enum64;
+ !has_decl_tag || !has_type_tag || !has_enum64 || !has_qmark_datasec;
}
static int bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
@@ -2754,6 +3118,7 @@ static int bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
bool has_decl_tag = kernel_supports(obj, FEAT_BTF_DECL_TAG);
bool has_type_tag = kernel_supports(obj, FEAT_BTF_TYPE_TAG);
bool has_enum64 = kernel_supports(obj, FEAT_BTF_ENUM64);
+ bool has_qmark_datasec = kernel_supports(obj, FEAT_BTF_QMARK_DATASEC);
int enum64_placeholder_id = 0;
struct btf_type *t;
int i, j, vlen;
@@ -2780,7 +3145,7 @@ static int bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
name = (char *)btf__name_by_offset(btf, t->name_off);
while (*name) {
- if (*name == '.')
+ if (*name == '.' || *name == '?')
*name = '_';
name++;
}
@@ -2795,6 +3160,14 @@ static int bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
vt = (void *)btf__type_by_id(btf, v->type);
m->name_off = vt->name_off;
}
+ } else if (!has_qmark_datasec && btf_is_datasec(t) &&
+ starts_with_qmark(btf__name_by_offset(btf, t->name_off))) {
+ /* replace '?' prefix with '_' for DATASEC names */
+ char *name;
+
+ name = (char *)btf__name_by_offset(btf, t->name_off);
+ if (name[0] == '?')
+ name[0] = '_';
} else if (!has_func && btf_is_func_proto(t)) {
/* replace FUNC_PROTO with ENUM */
vlen = btf_vlen(t);
@@ -2848,14 +3221,13 @@ static int bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
static bool libbpf_needs_btf(const struct bpf_object *obj)
{
return obj->efile.btf_maps_shndx >= 0 ||
- obj->efile.st_ops_shndx >= 0 ||
- obj->efile.st_ops_link_shndx >= 0 ||
+ obj->efile.has_st_ops ||
obj->nr_extern > 0;
}
static bool kernel_needs_btf(const struct bpf_object *obj)
{
- return obj->efile.st_ops_shndx >= 0 || obj->efile.st_ops_link_shndx >= 0;
+ return obj->efile.has_st_ops;
}
static int bpf_object__init_btf(struct bpf_object *obj,
@@ -2869,7 +3241,7 @@ static int bpf_object__init_btf(struct bpf_object *obj,
err = libbpf_get_error(obj->btf);
if (err) {
obj->btf = NULL;
- pr_warn("Error loading ELF section %s: %d.\n", BTF_ELF_SEC, err);
+ pr_warn("Error loading ELF section %s: %s.\n", BTF_ELF_SEC, errstr(err));
goto out;
}
/* enforce 8-byte pointers for BPF-targeted BTFs */
@@ -2887,8 +3259,8 @@ static int bpf_object__init_btf(struct bpf_object *obj,
obj->btf_ext = btf_ext__new(btf_ext_data->d_buf, btf_ext_data->d_size);
err = libbpf_get_error(obj->btf_ext);
if (err) {
- pr_warn("Error loading ELF section %s: %d. Ignored and continue.\n",
- BTF_EXT_ELF_SEC, err);
+ pr_warn("Error loading ELF section %s: %s. Ignored and continue.\n",
+ BTF_EXT_ELF_SEC, errstr(err));
obj->btf_ext = NULL;
goto out;
}
@@ -2980,8 +3352,8 @@ static int btf_fixup_datasec(struct bpf_object *obj, struct btf *btf,
if (t->size == 0) {
err = find_elf_sec_sz(obj, sec_name, &size);
if (err || !size) {
- pr_debug("sec '%s': failed to determine size from ELF: size %u, err %d\n",
- sec_name, size, err);
+ pr_debug("sec '%s': failed to determine size from ELF: size %u, err %s\n",
+ sec_name, size, errstr(err));
return -ENOENT;
}
@@ -3135,7 +3507,7 @@ static int bpf_object__load_vmlinux_btf(struct bpf_object *obj, bool force)
obj->btf_vmlinux = btf__load_vmlinux_btf();
err = libbpf_get_error(obj->btf_vmlinux);
if (err) {
- pr_warn("Error loading vmlinux BTF: %d\n", err);
+ pr_warn("Error loading vmlinux BTF: %s\n", errstr(err));
obj->btf_vmlinux = NULL;
return err;
}
@@ -3225,7 +3597,7 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
} else {
/* currently BPF_BTF_LOAD only supports log_level 1 */
err = btf_load_into_kernel(kern_btf, obj->log_buf, obj->log_size,
- obj->log_level ? 1 : 0);
+ obj->log_level ? 1 : 0, obj->token_fd);
}
if (sanitize) {
if (!err) {
@@ -3238,11 +3610,14 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
report:
if (err) {
btf_mandatory = kernel_needs_btf(obj);
- pr_warn("Error loading .BTF into kernel: %d. %s\n", err,
- btf_mandatory ? "BTF is mandatory, can't proceed."
- : "BTF is optional, ignoring.");
- if (!btf_mandatory)
+ if (btf_mandatory) {
+ pr_warn("Error loading .BTF into kernel: %s. BTF is mandatory, can't proceed.\n",
+ errstr(err));
+ } else {
+ pr_info("Error loading .BTF into kernel: %s. BTF is optional, ignoring.\n",
+ errstr(err));
err = 0;
+ }
}
return err;
}
@@ -3556,12 +3931,17 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
sec_desc->sec_type = SEC_RODATA;
sec_desc->shdr = sh;
sec_desc->data = data;
- } else if (strcmp(name, STRUCT_OPS_SEC) == 0) {
- obj->efile.st_ops_data = data;
- obj->efile.st_ops_shndx = idx;
- } else if (strcmp(name, STRUCT_OPS_LINK_SEC) == 0) {
- obj->efile.st_ops_link_data = data;
- obj->efile.st_ops_link_shndx = idx;
+ } else if (strcmp(name, STRUCT_OPS_SEC) == 0 ||
+ strcmp(name, STRUCT_OPS_LINK_SEC) == 0 ||
+ strcmp(name, "?" STRUCT_OPS_SEC) == 0 ||
+ strcmp(name, "?" STRUCT_OPS_LINK_SEC) == 0) {
+ sec_desc->sec_type = SEC_ST_OPS;
+ sec_desc->shdr = sh;
+ sec_desc->data = data;
+ obj->efile.has_st_ops = true;
+ } else if (strcmp(name, ARENA_SEC) == 0) {
+ obj->efile.arena_data = data;
+ obj->efile.arena_data_shndx = idx;
} else {
pr_info("elf: skipping unrecognized data section(%d) %s\n",
idx, name);
@@ -3577,6 +3957,8 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
if (!section_have_execinstr(obj, targ_sec_idx) &&
strcmp(name, ".rel" STRUCT_OPS_SEC) &&
strcmp(name, ".rel" STRUCT_OPS_LINK_SEC) &&
+ strcmp(name, ".rel?" STRUCT_OPS_SEC) &&
+ strcmp(name, ".rel?" STRUCT_OPS_LINK_SEC) &&
strcmp(name, ".rel" MAPS_ELF_SEC)) {
pr_info("elf: skipping relo section(%d) %s for section(%d) %s\n",
idx, name, targ_sec_idx,
@@ -3603,6 +3985,10 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
return -LIBBPF_ERRNO__FORMAT;
}
+ /* change BPF program insns to native endianness for introspection */
+ if (!is_native_endianness(obj))
+ bpf_object_bswap_progs(obj);
+
/* sort BPF programs by section name and in-section instruction offset
* for faster search
*/
@@ -3635,7 +4021,7 @@ static bool sym_is_subprog(const Elf64_Sym *sym, int text_shndx)
return true;
/* global function */
- return bind == STB_GLOBAL && type == STT_FUNC;
+ return (bind == STB_GLOBAL || bind == STB_WEAK) && type == STT_FUNC;
}
static int find_extern_btf_id(const struct btf *btf, const char *ext_name)
@@ -3871,7 +4257,9 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
return ext->btf_id;
}
t = btf__type_by_id(obj->btf, ext->btf_id);
- ext->name = btf__name_by_offset(obj->btf, t->name_off);
+ ext->name = strdup(btf__name_by_offset(obj->btf, t->name_off));
+ if (!ext->name)
+ return -ENOMEM;
ext->sym_idx = i;
ext->is_weak = ELF64_ST_BIND(sym->st_info) == STB_WEAK;
@@ -4039,7 +4427,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog)
{
- return prog->sec_idx == obj->efile.text_shndx && obj->nr_programs > 1;
+ return prog->sec_idx == obj->efile.text_shndx;
}
struct bpf_program *
@@ -4094,6 +4482,44 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
}
}
+static int bpf_prog_compute_hash(struct bpf_program *prog)
+{
+ struct bpf_insn *purged;
+ int i, err = 0;
+
+ purged = calloc(prog->insns_cnt, BPF_INSN_SZ);
+ if (!purged)
+ return -ENOMEM;
+
+ /* If relocations have been done, the map_fd needs to be
+ * discarded for the digest calculation.
+ */
+ for (i = 0; i < prog->insns_cnt; i++) {
+ purged[i] = prog->insns[i];
+ if (purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
+ (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
+ purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
+ purged[i].imm = 0;
+ i++;
+ if (i >= prog->insns_cnt ||
+ prog->insns[i].code != 0 ||
+ prog->insns[i].dst_reg != 0 ||
+ prog->insns[i].src_reg != 0 ||
+ prog->insns[i].off != 0) {
+ err = -EINVAL;
+ goto out;
+ }
+ purged[i] = prog->insns[i];
+ purged[i].imm = 0;
+ }
+ }
+ libbpf_sha256(purged, prog->insns_cnt * sizeof(struct bpf_insn),
+ prog->hash);
+out:
+ free(purged);
+ return err;
+}
+
static int bpf_program__record_reloc(struct bpf_program *prog,
struct reloc_desc *reloc_desc,
__u32 insn_idx, const char *sym_name,
@@ -4189,6 +4615,25 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
type = bpf_object__section_to_libbpf_map_type(obj, shdr_idx);
sym_sec_name = elf_sec_name(obj, elf_sec_by_idx(obj, shdr_idx));
+ /* arena data relocation */
+ if (shdr_idx == obj->efile.arena_data_shndx) {
+ if (obj->arena_map_idx < 0) {
+ pr_warn("prog '%s': bad arena data relocation at insn %u, no arena maps defined\n",
+ prog->name, insn_idx);
+ return -LIBBPF_ERRNO__RELOC;
+ }
+ reloc_desc->type = RELO_DATA;
+ reloc_desc->insn_idx = insn_idx;
+ reloc_desc->map_idx = obj->arena_map_idx;
+ reloc_desc->sym_off = sym->st_value;
+
+ map = &obj->maps[obj->arena_map_idx];
+ pr_debug("prog '%s': found arena map %d (%s, sec %d, off %zu) for insn %u\n",
+ prog->name, obj->arena_map_idx, map->name, map->sec_idx,
+ map->sec_offset, insn_idx);
+ return 0;
+ }
+
/* generic map reference relocation */
if (type == LIBBPF_MAP_UNSPEC) {
if (!bpf_object__shndx_is_maps(obj, shdr_idx)) {
@@ -4424,8 +4869,8 @@ static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
fp = fopen(file, "re");
if (!fp) {
err = -errno;
- pr_warn("failed to open %s: %d. No procfs support?\n", file,
- err);
+ pr_warn("failed to open %s: %s. No procfs support?\n", file,
+ errstr(err));
return err;
}
@@ -4447,6 +4892,11 @@ static int bpf_get_map_info_from_fdinfo(int fd, struct bpf_map_info *info)
return 0;
}
+static bool map_is_created(const struct bpf_map *map)
+{
+ return map->obj->state >= OBJ_PREPARED || map->reused;
+}
+
bool bpf_map__autocreate(const struct bpf_map *map)
{
return map->autocreate;
@@ -4454,13 +4904,27 @@ bool bpf_map__autocreate(const struct bpf_map *map)
int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
{
- if (map->obj->loaded)
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
map->autocreate = autocreate;
return 0;
}
+int bpf_map__set_autoattach(struct bpf_map *map, bool autoattach)
+{
+ if (!bpf_map__is_struct_ops(map))
+ return libbpf_err(-EINVAL);
+
+ map->autoattach = autoattach;
+ return 0;
+}
+
+bool bpf_map__autoattach(const struct bpf_map *map)
+{
+ return map->autoattach;
+}
+
int bpf_map__reuse_fd(struct bpf_map *map, int fd)
{
struct bpf_map_info info;
@@ -4534,7 +4998,7 @@ struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
{
- if (map->obj->loaded)
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
map->def.max_entries = max_entries;
@@ -4546,34 +5010,87 @@ int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
return 0;
}
+static int bpf_object_prepare_token(struct bpf_object *obj)
+{
+ const char *bpffs_path;
+ int bpffs_fd = -1, token_fd, err;
+ bool mandatory;
+ enum libbpf_print_level level;
+
+ /* token is explicitly prevented */
+ if (obj->token_path && obj->token_path[0] == '\0') {
+ pr_debug("object '%s': token is prevented, skipping...\n", obj->name);
+ return 0;
+ }
+
+ mandatory = obj->token_path != NULL;
+ level = mandatory ? LIBBPF_WARN : LIBBPF_DEBUG;
+
+ bpffs_path = obj->token_path ?: BPF_FS_DEFAULT_PATH;
+ bpffs_fd = open(bpffs_path, O_DIRECTORY, O_RDWR);
+ if (bpffs_fd < 0) {
+ err = -errno;
+ __pr(level, "object '%s': failed (%s) to open BPF FS mount at '%s'%s\n",
+ obj->name, errstr(err), bpffs_path,
+ mandatory ? "" : ", skipping optional step...");
+ return mandatory ? err : 0;
+ }
+
+ token_fd = bpf_token_create(bpffs_fd, 0);
+ close(bpffs_fd);
+ if (token_fd < 0) {
+ if (!mandatory && token_fd == -ENOENT) {
+ pr_debug("object '%s': BPF FS at '%s' doesn't have BPF token delegation set up, skipping...\n",
+ obj->name, bpffs_path);
+ return 0;
+ }
+ __pr(level, "object '%s': failed (%d) to create BPF token from '%s'%s\n",
+ obj->name, token_fd, bpffs_path,
+ mandatory ? "" : ", skipping optional step...");
+ return mandatory ? token_fd : 0;
+ }
+
+ obj->feat_cache = calloc(1, sizeof(*obj->feat_cache));
+ if (!obj->feat_cache) {
+ close(token_fd);
+ return -ENOMEM;
+ }
+
+ obj->token_fd = token_fd;
+ obj->feat_cache->token_fd = token_fd;
+
+ return 0;
+}
+
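bpf_object_prepare_token() opens the bpffs mount, asks the kernel for a BPF token, and treats failure as fatal only when the caller explicitly set a token path. A hedged sketch of the same flow using libbpf's public bpf_token_create() (the NULL options argument and simplified open() flags are assumptions for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <bpf/bpf.h>

int main(void)
{
	int bpffs_fd, token_fd;

	bpffs_fd = open("/sys/fs/bpf", O_DIRECTORY | O_RDONLY);
	if (bpffs_fd < 0) {
		perror("open /sys/fs/bpf");
		return 1;
	}

	/* NULL opts: request a token covering whatever this mount delegates */
	token_fd = bpf_token_create(bpffs_fd, NULL);
	close(bpffs_fd);
	if (token_fd < 0) {
		fprintf(stderr, "no BPF token available (%d), continuing without one\n", token_fd);
		return 0;
	}

	/* token_fd would be passed via bpf_prog_load_opts.token_fd together
	 * with BPF_F_TOKEN_FD in prog_flags, mirroring the probe-loading opts
	 * just below.
	 */
	close(token_fd);
	return 0;
}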
static int
bpf_object__probe_loading(struct bpf_object *obj)
{
- char *cp, errmsg[STRERR_BUFSIZE];
struct bpf_insn insns[] = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
};
int ret, insn_cnt = ARRAY_SIZE(insns);
+ LIBBPF_OPTS(bpf_prog_load_opts, opts,
+ .token_fd = obj->token_fd,
+ .prog_flags = obj->token_fd ? BPF_F_TOKEN_FD : 0,
+ );
if (obj->gen_loader)
return 0;
ret = bump_rlimit_memlock();
if (ret)
- pr_warn("Failed to bump RLIMIT_MEMLOCK (err = %d), you might need to do it explicitly!\n", ret);
+ pr_warn("Failed to bump RLIMIT_MEMLOCK (err = %s), you might need to do it explicitly!\n",
+ errstr(ret));
/* make sure basic loading works */
- ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, NULL);
+ ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, &opts);
if (ret < 0)
- ret = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, NULL);
+ ret = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, &opts);
if (ret < 0) {
ret = errno;
- cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
- pr_warn("Error in %s():%s(%d). Couldn't load trivial BPF "
- "program. Make sure your kernel supports BPF "
- "(CONFIG_BPF_SYSCALL=y) and/or that RLIMIT_MEMLOCK is "
- "set to big enough value.\n", __func__, cp, ret);
+ pr_warn("Error in %s(): %s. Couldn't load trivial BPF program. Make sure your kernel supports BPF (CONFIG_BPF_SYSCALL=y) and/or that RLIMIT_MEMLOCK is set to big enough value.\n",
+ __func__, errstr(ret));
return -ret;
}
close(ret);
@@ -4581,468 +5098,23 @@ bpf_object__probe_loading(struct bpf_object *obj)
return 0;
}
-static int probe_fd(int fd)
-{
- if (fd >= 0)
- close(fd);
- return fd >= 0;
-}
-
-static int probe_kern_prog_name(void)
-{
- const size_t attr_sz = offsetofend(union bpf_attr, prog_name);
- struct bpf_insn insns[] = {
- BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_EXIT_INSN(),
- };
- union bpf_attr attr;
- int ret;
-
- memset(&attr, 0, attr_sz);
- attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
- attr.license = ptr_to_u64("GPL");
- attr.insns = ptr_to_u64(insns);
- attr.insn_cnt = (__u32)ARRAY_SIZE(insns);
- libbpf_strlcpy(attr.prog_name, "libbpf_nametest", sizeof(attr.prog_name));
-
- /* make sure loading with name works */
- ret = sys_bpf_prog_load(&attr, attr_sz, PROG_LOAD_ATTEMPTS);
- return probe_fd(ret);
-}
-
-static int probe_kern_global_data(void)
-{
- char *cp, errmsg[STRERR_BUFSIZE];
- struct bpf_insn insns[] = {
- BPF_LD_MAP_VALUE(BPF_REG_1, 0, 16),
- BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 42),
- BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_EXIT_INSN(),
- };
- int ret, map, insn_cnt = ARRAY_SIZE(insns);
-
- map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_global", sizeof(int), 32, 1, NULL);
- if (map < 0) {
- ret = -errno;
- cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
- pr_warn("Error in %s():%s(%d). Couldn't create simple array map.\n",
- __func__, cp, -ret);
- return ret;
- }
-
- insns[0].imm = map;
-
- ret = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, NULL);
- close(map);
- return probe_fd(ret);
-}
-
-static int probe_kern_btf(void)
-{
- static const char strs[] = "\0int";
- __u32 types[] = {
- /* int */
- BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_btf_func(void)
-{
- static const char strs[] = "\0int\0x\0a";
- /* void x(int a) {} */
- __u32 types[] = {
- /* int */
- BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
- /* FUNC_PROTO */ /* [2] */
- BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
- BTF_PARAM_ENC(7, 1),
- /* FUNC x */ /* [3] */
- BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0), 2),
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_btf_func_global(void)
-{
- static const char strs[] = "\0int\0x\0a";
- /* static void x(int a) {} */
- __u32 types[] = {
- /* int */
- BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
- /* FUNC_PROTO */ /* [2] */
- BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 0),
- BTF_PARAM_ENC(7, 1),
- /* FUNC x BTF_FUNC_GLOBAL */ /* [3] */
- BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 2),
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_btf_datasec(void)
-{
- static const char strs[] = "\0x\0.data";
- /* static int a; */
- __u32 types[] = {
- /* int */
- BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
- /* VAR x */ /* [2] */
- BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
- BTF_VAR_STATIC,
- /* DATASEC val */ /* [3] */
- BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
- BTF_VAR_SECINFO_ENC(2, 0, 4),
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_btf_float(void)
-{
- static const char strs[] = "\0float";
- __u32 types[] = {
- /* float */
- BTF_TYPE_FLOAT_ENC(1, 4),
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_btf_decl_tag(void)
-{
- static const char strs[] = "\0tag";
- __u32 types[] = {
- /* int */
- BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
- /* VAR x */ /* [2] */
- BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1),
- BTF_VAR_STATIC,
- /* attr */
- BTF_TYPE_DECL_TAG_ENC(1, 2, -1),
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_btf_type_tag(void)
-{
- static const char strs[] = "\0tag";
- __u32 types[] = {
- /* int */
- BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
- /* attr */
- BTF_TYPE_TYPE_TAG_ENC(1, 1), /* [2] */
- /* ptr */
- BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2), /* [3] */
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_array_mmap(void)
-{
- LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_MMAPABLE);
- int fd;
-
- fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_mmap", sizeof(int), sizeof(int), 1, &opts);
- return probe_fd(fd);
-}
-
-static int probe_kern_exp_attach_type(void)
-{
- LIBBPF_OPTS(bpf_prog_load_opts, opts, .expected_attach_type = BPF_CGROUP_INET_SOCK_CREATE);
- struct bpf_insn insns[] = {
- BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_EXIT_INSN(),
- };
- int fd, insn_cnt = ARRAY_SIZE(insns);
-
- /* use any valid combination of program type and (optional)
- * non-zero expected attach type (i.e., not a BPF_CGROUP_INET_INGRESS)
- * to see if kernel supports expected_attach_type field for
- * BPF_PROG_LOAD command
- */
- fd = bpf_prog_load(BPF_PROG_TYPE_CGROUP_SOCK, NULL, "GPL", insns, insn_cnt, &opts);
- return probe_fd(fd);
-}
-
-static int probe_kern_probe_read_kernel(void)
-{
- struct bpf_insn insns[] = {
- BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), /* r1 = r10 (fp) */
- BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), /* r1 += -8 */
- BPF_MOV64_IMM(BPF_REG_2, 8), /* r2 = 8 */
- BPF_MOV64_IMM(BPF_REG_3, 0), /* r3 = 0 */
- BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
- BPF_EXIT_INSN(),
- };
- int fd, insn_cnt = ARRAY_SIZE(insns);
-
- fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, NULL);
- return probe_fd(fd);
-}
-
-static int probe_prog_bind_map(void)
-{
- char *cp, errmsg[STRERR_BUFSIZE];
- struct bpf_insn insns[] = {
- BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_EXIT_INSN(),
- };
- int ret, map, prog, insn_cnt = ARRAY_SIZE(insns);
-
- map = bpf_map_create(BPF_MAP_TYPE_ARRAY, "libbpf_det_bind", sizeof(int), 32, 1, NULL);
- if (map < 0) {
- ret = -errno;
- cp = libbpf_strerror_r(ret, errmsg, sizeof(errmsg));
- pr_warn("Error in %s():%s(%d). Couldn't create simple array map.\n",
- __func__, cp, -ret);
- return ret;
- }
-
- prog = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, NULL, "GPL", insns, insn_cnt, NULL);
- if (prog < 0) {
- close(map);
- return 0;
- }
-
- ret = bpf_prog_bind_map(prog, map, NULL);
-
- close(map);
- close(prog);
-
- return ret >= 0;
-}
-
-static int probe_module_btf(void)
-{
- static const char strs[] = "\0int";
- __u32 types[] = {
- /* int */
- BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
- };
- struct bpf_btf_info info;
- __u32 len = sizeof(info);
- char name[16];
- int fd, err;
-
- fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs));
- if (fd < 0)
- return 0; /* BTF not supported at all */
-
- memset(&info, 0, sizeof(info));
- info.name = ptr_to_u64(name);
- info.name_len = sizeof(name);
-
- /* check that BPF_OBJ_GET_INFO_BY_FD supports specifying name pointer;
- * kernel's module BTF support coincides with support for
- * name/name_len fields in struct bpf_btf_info.
- */
- err = bpf_btf_get_info_by_fd(fd, &info, &len);
- close(fd);
- return !err;
-}
-
-static int probe_perf_link(void)
-{
- struct bpf_insn insns[] = {
- BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_EXIT_INSN(),
- };
- int prog_fd, link_fd, err;
-
- prog_fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL",
- insns, ARRAY_SIZE(insns), NULL);
- if (prog_fd < 0)
- return -errno;
-
- /* use invalid perf_event FD to get EBADF, if link is supported;
- * otherwise EINVAL should be returned
- */
- link_fd = bpf_link_create(prog_fd, -1, BPF_PERF_EVENT, NULL);
- err = -errno; /* close() can clobber errno */
-
- if (link_fd >= 0)
- close(link_fd);
- close(prog_fd);
-
- return link_fd < 0 && err == -EBADF;
-}
-
-static int probe_uprobe_multi_link(void)
-{
- LIBBPF_OPTS(bpf_prog_load_opts, load_opts,
- .expected_attach_type = BPF_TRACE_UPROBE_MULTI,
- );
- LIBBPF_OPTS(bpf_link_create_opts, link_opts);
- struct bpf_insn insns[] = {
- BPF_MOV64_IMM(BPF_REG_0, 0),
- BPF_EXIT_INSN(),
- };
- int prog_fd, link_fd, err;
- unsigned long offset = 0;
-
- prog_fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL",
- insns, ARRAY_SIZE(insns), &load_opts);
- if (prog_fd < 0)
- return -errno;
-
- /* Creating uprobe in '/' binary should fail with -EBADF. */
- link_opts.uprobe_multi.path = "/";
- link_opts.uprobe_multi.offsets = &offset;
- link_opts.uprobe_multi.cnt = 1;
-
- link_fd = bpf_link_create(prog_fd, -1, BPF_TRACE_UPROBE_MULTI, &link_opts);
- err = -errno; /* close() can clobber errno */
-
- if (link_fd >= 0)
- close(link_fd);
- close(prog_fd);
-
- return link_fd < 0 && err == -EBADF;
-}
-
-static int probe_kern_bpf_cookie(void)
-{
- struct bpf_insn insns[] = {
- BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_attach_cookie),
- BPF_EXIT_INSN(),
- };
- int ret, insn_cnt = ARRAY_SIZE(insns);
-
- ret = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL", insns, insn_cnt, NULL);
- return probe_fd(ret);
-}
-
-static int probe_kern_btf_enum64(void)
-{
- static const char strs[] = "\0enum64";
- __u32 types[] = {
- BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_ENUM64, 0, 0), 8),
- };
-
- return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs)));
-}
-
-static int probe_kern_syscall_wrapper(void);
-
-enum kern_feature_result {
- FEAT_UNKNOWN = 0,
- FEAT_SUPPORTED = 1,
- FEAT_MISSING = 2,
-};
-
-typedef int (*feature_probe_fn)(void);
-
-static struct kern_feature_desc {
- const char *desc;
- feature_probe_fn probe;
- enum kern_feature_result res;
-} feature_probes[__FEAT_CNT] = {
- [FEAT_PROG_NAME] = {
- "BPF program name", probe_kern_prog_name,
- },
- [FEAT_GLOBAL_DATA] = {
- "global variables", probe_kern_global_data,
- },
- [FEAT_BTF] = {
- "minimal BTF", probe_kern_btf,
- },
- [FEAT_BTF_FUNC] = {
- "BTF functions", probe_kern_btf_func,
- },
- [FEAT_BTF_GLOBAL_FUNC] = {
- "BTF global function", probe_kern_btf_func_global,
- },
- [FEAT_BTF_DATASEC] = {
- "BTF data section and variable", probe_kern_btf_datasec,
- },
- [FEAT_ARRAY_MMAP] = {
- "ARRAY map mmap()", probe_kern_array_mmap,
- },
- [FEAT_EXP_ATTACH_TYPE] = {
- "BPF_PROG_LOAD expected_attach_type attribute",
- probe_kern_exp_attach_type,
- },
- [FEAT_PROBE_READ_KERN] = {
- "bpf_probe_read_kernel() helper", probe_kern_probe_read_kernel,
- },
- [FEAT_PROG_BIND_MAP] = {
- "BPF_PROG_BIND_MAP support", probe_prog_bind_map,
- },
- [FEAT_MODULE_BTF] = {
- "module BTF support", probe_module_btf,
- },
- [FEAT_BTF_FLOAT] = {
- "BTF_KIND_FLOAT support", probe_kern_btf_float,
- },
- [FEAT_PERF_LINK] = {
- "BPF perf link support", probe_perf_link,
- },
- [FEAT_BTF_DECL_TAG] = {
- "BTF_KIND_DECL_TAG support", probe_kern_btf_decl_tag,
- },
- [FEAT_BTF_TYPE_TAG] = {
- "BTF_KIND_TYPE_TAG support", probe_kern_btf_type_tag,
- },
- [FEAT_MEMCG_ACCOUNT] = {
- "memcg-based memory accounting", probe_memcg_account,
- },
- [FEAT_BPF_COOKIE] = {
- "BPF cookie support", probe_kern_bpf_cookie,
- },
- [FEAT_BTF_ENUM64] = {
- "BTF_KIND_ENUM64 support", probe_kern_btf_enum64,
- },
- [FEAT_SYSCALL_WRAPPER] = {
- "Kernel using syscall wrapper", probe_kern_syscall_wrapper,
- },
- [FEAT_UPROBE_MULTI_LINK] = {
- "BPF multi-uprobe link support", probe_uprobe_multi_link,
- },
-};
-
bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id)
{
- struct kern_feature_desc *feat = &feature_probes[feat_id];
- int ret;
-
- if (obj && obj->gen_loader)
+ if (obj->gen_loader)
/* To generate loader program assume the latest kernel
* to avoid doing extra prog_load, map_create syscalls.
*/
return true;
- if (READ_ONCE(feat->res) == FEAT_UNKNOWN) {
- ret = feat->probe();
- if (ret > 0) {
- WRITE_ONCE(feat->res, FEAT_SUPPORTED);
- } else if (ret == 0) {
- WRITE_ONCE(feat->res, FEAT_MISSING);
- } else {
- pr_warn("Detection of kernel %s support failed: %d\n", feat->desc, ret);
- WRITE_ONCE(feat->res, FEAT_MISSING);
- }
- }
+ if (obj->token_fd)
+ return feat_supported(obj->feat_cache, feat_id);
- return READ_ONCE(feat->res) == FEAT_SUPPORTED;
+ return feat_supported(NULL, feat_id);
}
static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
{
struct bpf_map_info map_info;
- char msg[STRERR_BUFSIZE];
__u32 map_info_len = sizeof(map_info);
int err;
@@ -5052,10 +5124,20 @@ static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
err = bpf_get_map_info_from_fdinfo(map_fd, &map_info);
if (err) {
pr_warn("failed to get map info for map FD %d: %s\n", map_fd,
- libbpf_strerror_r(errno, msg, sizeof(msg)));
+ errstr(err));
return false;
}
+ /*
+ * bpf_get_map_info_by_fd() for a DEVMAP always returns flags with
+ * BPF_F_RDONLY_PROG set, even though it generally is not set at map
+ * creation time. Thus, ignore the BPF_F_RDONLY_PROG flag in the flags
+ * returned from bpf_get_map_info_by_fd() when checking for compatibility
+ * with an existing DEVMAP.
+ */
+ if (map->def.type == BPF_MAP_TYPE_DEVMAP || map->def.type == BPF_MAP_TYPE_DEVMAP_HASH)
+ map_info.map_flags &= ~BPF_F_RDONLY_PROG;
+
return (map_info.type == map->def.type &&
map_info.key_size == map->def.key_size &&
map_info.value_size == map->def.value_size &&
@@ -5067,7 +5149,6 @@ static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
static int
bpf_object__reuse_map(struct bpf_map *map)
{
- char *cp, errmsg[STRERR_BUFSIZE];
int err, pin_fd;
pin_fd = bpf_obj_get(map->pin_path);
@@ -5079,9 +5160,8 @@ bpf_object__reuse_map(struct bpf_map *map)
return 0;
}
- cp = libbpf_strerror_r(-err, errmsg, sizeof(errmsg));
pr_warn("couldn't retrieve pinned map '%s': %s\n",
- map->pin_path, cp);
+ map->pin_path, errstr(err));
return err;
}
@@ -5107,8 +5187,8 @@ static int
bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
{
enum libbpf_map_type map_type = map->libbpf_type;
- char *cp, errmsg[STRERR_BUFSIZE];
int err, zero = 0;
+ size_t mmap_sz;
if (obj->gen_loader) {
bpf_gen__map_update_elem(obj->gen_loader, map - obj->maps,
@@ -5117,12 +5197,12 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
bpf_gen__map_freeze(obj->gen_loader, map - obj->maps);
return 0;
}
+
err = bpf_map_update_elem(map->fd, &zero, map->mmaped, 0);
if (err) {
err = -errno;
- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
- pr_warn("Error setting initial map(%s) contents: %s\n",
- map->name, cp);
+ pr_warn("map '%s': failed to set initial contents: %s\n",
+ bpf_map__name(map), errstr(err));
return err;
}
@@ -5131,22 +5211,48 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
err = bpf_map_freeze(map->fd);
if (err) {
err = -errno;
- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
- pr_warn("Error freezing map(%s) as read-only: %s\n",
- map->name, cp);
+ pr_warn("map '%s': failed to freeze as read-only: %s\n",
+ bpf_map__name(map), errstr(err));
return err;
}
}
+
+ /* Remap the anonymous mmap()-ed "map initialization image" as
+ * BPF map-backed mmap()-ed memory, preserving the same memory
+ * address. This makes the kernel point the process's page
+ * table at a different piece of kernel memory, but from the
+ * userspace point of view the memory address (and its
+ * contents, identical at this point) stays the same. This
+ * mapping is released by bpf_object__close() as part of the
+ * normal clean-up procedure.
+ */
+ mmap_sz = bpf_map_mmap_sz(map);
+ if (map->def.map_flags & BPF_F_MMAPABLE) {
+ void *mmaped;
+ int prot;
+
+ if (map->def.map_flags & BPF_F_RDONLY_PROG)
+ prot = PROT_READ;
+ else
+ prot = PROT_READ | PROT_WRITE;
+ mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map->fd, 0);
+ if (mmaped == MAP_FAILED) {
+ err = -errno;
+ pr_warn("map '%s': failed to re-mmap() contents: %s\n",
+ bpf_map__name(map), errstr(err));
+ return err;
+ }
+ map->mmaped = mmaped;
+ } else if (map->mmaped) {
+ munmap(map->mmaped, mmap_sz);
+ map->mmaped = NULL;
+ }
+
return 0;
}
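For illustration only (not part of this patch), a minimal user-space sketch of the same remap trick, assuming an already created BPF_F_MMAPABLE single-element array map FD map_fd, its mmap size sz, and an anonymous image already filled in at addr:

	#include <sys/mman.h>
	#include <errno.h>
	#include <bpf/bpf.h>

	static int remap_image(int map_fd, void *addr, size_t sz)
	{
		int zero = 0;

		/* seed the single array element from the anonymous image... */
		if (bpf_map_update_elem(map_fd, &zero, addr, 0))
			return -errno;
		/* ...then replace the anonymous pages in place; contents look
		 * unchanged because the map was just initialized from them
		 */
		if (mmap(addr, sz, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_FIXED, map_fd, 0) == MAP_FAILED)
			return -errno;
		return 0;
	}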
static void bpf_map__destroy(struct bpf_map *map);
-static bool map_is_created(const struct bpf_map *map)
-{
- return map->obj->loaded || map->reused;
-}
-
static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
{
LIBBPF_OPTS(bpf_map_create_opts, create_attr);
@@ -5160,9 +5266,25 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
create_attr.map_flags = def->map_flags;
create_attr.numa_node = map->numa_node;
create_attr.map_extra = map->map_extra;
+ create_attr.token_fd = obj->token_fd;
+ if (obj->token_fd)
+ create_attr.map_flags |= BPF_F_TOKEN_FD;
+ if (map->excl_prog) {
+ err = bpf_prog_compute_hash(map->excl_prog);
+ if (err)
+ return err;
+
+ create_attr.excl_prog_hash = map->excl_prog->hash;
+ create_attr.excl_prog_hash_size = SHA256_DIGEST_LENGTH;
+ }
- if (bpf_map__is_struct_ops(map))
+ if (bpf_map__is_struct_ops(map)) {
create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
+ if (map->mod_btf_fd >= 0) {
+ create_attr.value_type_btf_obj_fd = map->mod_btf_fd;
+ create_attr.map_flags |= BPF_F_VTYPE_BTF_OBJ_FD;
+ }
+ }
if (obj->btf && btf__fd(obj->btf) >= 0) {
create_attr.btf_fd = btf__fd(obj->btf);
@@ -5172,10 +5294,13 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
if (bpf_map_type__is_map_in_map(def->type)) {
if (map->inner_map) {
+ err = map_set_def_max_entries(map->inner_map);
+ if (err)
+ return err;
err = bpf_object__create_map(obj, map->inner_map, true);
if (err) {
- pr_warn("map '%s': failed to create inner map: %d\n",
- map->name, err);
+ pr_warn("map '%s': failed to create inner map: %s\n",
+ map->name, errstr(err));
return err;
}
map->inner_map_fd = map->inner_map->fd;
@@ -5198,11 +5323,16 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
case BPF_MAP_TYPE_SOCKHASH:
case BPF_MAP_TYPE_QUEUE:
case BPF_MAP_TYPE_STACK:
+ case BPF_MAP_TYPE_ARENA:
create_attr.btf_fd = 0;
create_attr.btf_key_type_id = 0;
create_attr.btf_value_type_id = 0;
map->btf_key_type_id = 0;
map->btf_value_type_id = 0;
+ break;
+ case BPF_MAP_TYPE_STRUCT_OPS:
+ create_attr.btf_value_type_id = 0;
+ break;
default:
break;
}
@@ -5224,12 +5354,9 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
def->max_entries, &create_attr);
}
if (map_fd < 0 && (create_attr.btf_key_type_id || create_attr.btf_value_type_id)) {
- char *cp, errmsg[STRERR_BUFSIZE];
-
err = -errno;
- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
- pr_warn("Error in bpf_create_map_xattr(%s):%s(%d). Retrying without BTF.\n",
- map->name, cp, err);
+ pr_warn("Error in bpf_create_map_xattr(%s): %s. Retrying without BTF.\n",
+ map->name, errstr(err));
create_attr.btf_fd = 0;
create_attr.btf_key_type_id = 0;
create_attr.btf_value_type_id = 0;
@@ -5284,8 +5411,8 @@ static int init_map_in_map_slots(struct bpf_object *obj, struct bpf_map *map)
}
if (err) {
err = -errno;
- pr_warn("map '%s': failed to initialize slot [%d] to map '%s' fd=%d: %d\n",
- map->name, i, targ_map->name, fd, err);
+ pr_warn("map '%s': failed to initialize slot [%d] to map '%s' fd=%d: %s\n",
+ map->name, i, targ_map->name, fd, errstr(err));
return err;
}
pr_debug("map '%s': slot [%d] set to map '%s' fd=%d\n",
@@ -5317,8 +5444,8 @@ static int init_prog_array_slots(struct bpf_object *obj, struct bpf_map *map)
err = bpf_map_update_elem(map->fd, &i, &fd, 0);
if (err) {
err = -errno;
- pr_warn("map '%s': failed to initialize slot [%d] to prog '%s' fd=%d: %d\n",
- map->name, i, targ_prog->name, fd, err);
+ pr_warn("map '%s': failed to initialize slot [%d] to prog '%s' fd=%d: %s\n",
+ map->name, i, targ_prog->name, fd, errstr(err));
return err;
}
pr_debug("map '%s': slot [%d] set to prog '%s' fd=%d\n",
@@ -5371,7 +5498,6 @@ static int
bpf_object__create_maps(struct bpf_object *obj)
{
struct bpf_map *map;
- char *cp, errmsg[STRERR_BUFSIZE];
unsigned int i, j;
int err;
bool retried;
@@ -5437,8 +5563,23 @@ retry:
err = bpf_object__populate_internal_map(obj, map);
if (err < 0)
goto err_out;
+ } else if (map->def.type == BPF_MAP_TYPE_ARENA) {
+ map->mmaped = mmap((void *)(long)map->map_extra,
+ bpf_map_mmap_sz(map), PROT_READ | PROT_WRITE,
+ map->map_extra ? MAP_SHARED | MAP_FIXED : MAP_SHARED,
+ map->fd, 0);
+ if (map->mmaped == MAP_FAILED) {
+ err = -errno;
+ map->mmaped = NULL;
+ pr_warn("map '%s': failed to mmap arena: %s\n",
+ map->name, errstr(err));
+ return err;
+ }
+ if (obj->arena_data) {
+ memcpy(map->mmaped, obj->arena_data, obj->arena_data_sz);
+ zfree(&obj->arena_data);
+ }
}
-
if (map->init_slots_sz && map->def.type != BPF_MAP_TYPE_PROG_ARRAY) {
err = init_map_in_map_slots(obj, map);
if (err < 0)
@@ -5453,8 +5594,8 @@ retry:
retried = true;
goto retry;
}
- pr_warn("map '%s': failed to auto-pin at '%s': %d\n",
- map->name, map->pin_path, err);
+ pr_warn("map '%s': failed to auto-pin at '%s': %s\n",
+ map->name, map->pin_path, errstr(err));
goto err_out;
}
}
@@ -5463,8 +5604,7 @@ retry:
return 0;
err_out:
- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
- pr_warn("map '%s': failed to create: %s(%d)\n", map->name, cp, err);
+ pr_warn("map '%s': failed to create: %s\n", map->name, errstr(err));
pr_perm_msg(err);
for (j = 0; j < i; j++)
zclose(obj->maps[j].fd);
@@ -5588,7 +5728,7 @@ static int load_module_btfs(struct bpf_object *obj)
}
if (err) {
err = -errno;
- pr_warn("failed to iterate BTF objects: %d\n", err);
+ pr_warn("failed to iterate BTF objects: %s\n", errstr(err));
return err;
}
@@ -5597,7 +5737,7 @@ static int load_module_btfs(struct bpf_object *obj)
if (errno == ENOENT)
continue; /* expected race: BTF was unloaded */
err = -errno;
- pr_warn("failed to get BTF object #%d FD: %d\n", id, err);
+ pr_warn("failed to get BTF object #%d FD: %s\n", id, errstr(err));
return err;
}
@@ -5609,7 +5749,7 @@ static int load_module_btfs(struct bpf_object *obj)
err = bpf_btf_get_info_by_fd(fd, &info, &len);
if (err) {
err = -errno;
- pr_warn("failed to get BTF object #%d info: %d\n", id, err);
+ pr_warn("failed to get BTF object #%d info: %s\n", id, errstr(err));
goto err_out;
}
@@ -5622,8 +5762,8 @@ static int load_module_btfs(struct bpf_object *obj)
btf = btf_get_from_fd(fd, obj->btf_vmlinux);
err = libbpf_get_error(btf);
if (err) {
- pr_warn("failed to load module [%s]'s BTF object #%d: %d\n",
- name, id, err);
+ pr_warn("failed to load module [%s]'s BTF object #%d: %s\n",
+ name, id, errstr(err));
goto err_out;
}
@@ -5852,7 +5992,7 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
obj->btf_vmlinux_override = btf__parse(targ_btf_path, NULL);
err = libbpf_get_error(obj->btf_vmlinux_override);
if (err) {
- pr_warn("failed to parse target BTF: %d\n", err);
+ pr_warn("failed to parse target BTF: %s\n", errstr(err));
return err;
}
}
@@ -5912,8 +6052,8 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
err = record_relo_core(prog, rec, insn_idx);
if (err) {
- pr_warn("prog '%s': relo #%d: failed to record relocation: %d\n",
- prog->name, i, err);
+ pr_warn("prog '%s': relo #%d: failed to record relocation: %s\n",
+ prog->name, i, errstr(err));
goto out;
}
@@ -5922,15 +6062,15 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
err = bpf_core_resolve_relo(prog, rec, i, obj->btf, cand_cache, &targ_res);
if (err) {
- pr_warn("prog '%s': relo #%d: failed to relocate: %d\n",
- prog->name, i, err);
+ pr_warn("prog '%s': relo #%d: failed to relocate: %s\n",
+ prog->name, i, errstr(err));
goto out;
}
err = bpf_core_patch_insn(prog->name, insn, insn_idx, rec, i, &targ_res);
if (err) {
- pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
- prog->name, i, insn_idx, err);
+ pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %s\n",
+ prog->name, i, insn_idx, errstr(err));
goto out;
}
}
@@ -6198,8 +6338,8 @@ reloc_prog_func_and_line_info(const struct bpf_object *obj,
&main_prog->func_info_rec_size);
if (err) {
if (err != -ENOENT) {
- pr_warn("prog '%s': error relocating .BTF.ext function info: %d\n",
- prog->name, err);
+ pr_warn("prog '%s': error relocating .BTF.ext function info: %s\n",
+ prog->name, errstr(err));
return err;
}
if (main_prog->func_info) {
@@ -6226,8 +6366,8 @@ line_info:
&main_prog->line_info_rec_size);
if (err) {
if (err != -ENOENT) {
- pr_warn("prog '%s': error relocating .BTF.ext line info: %d\n",
- prog->name, err);
+ pr_warn("prog '%s': error relocating .BTF.ext line info: %s\n",
+ prog->name, errstr(err));
return err;
}
if (main_prog->line_info) {
@@ -6695,6 +6835,14 @@ static struct {
/* all other program types don't have "named" context structs */
};
+/* forward declarations for the arch-specific underlying types of the
+ * bpf_user_pt_regs_t typedef, used by the __builtin_types_compatible_p()
+ * checks below; with this approach we don't need any extra arch-specific
+ * #ifdef guards
+ */
+struct pt_regs;
+struct user_pt_regs;
+struct user_regs_struct;
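A minimal sketch (illustrative only; foo and foo_t are made-up names) of why forward declarations are enough here: __builtin_types_compatible_p() is evaluated at compile time and does not require complete types.

	struct foo;			/* incomplete, like pt_regs above */
	typedef struct foo foo_t;	/* stand-in for bpf_user_pt_regs_t */

	_Static_assert(__builtin_types_compatible_p(foo_t, struct foo),
		       "resolved at compile time, no arch #ifdefs needed");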
+
static bool need_func_arg_type_fixup(const struct btf *btf, const struct bpf_program *prog,
const char *subprog_name, int arg_idx,
int arg_type_id, const char *ctx_name)
@@ -6735,11 +6883,21 @@ static bool need_func_arg_type_fixup(const struct btf *btf, const struct bpf_pro
/* special cases */
switch (prog->type) {
case BPF_PROG_TYPE_KPROBE:
- case BPF_PROG_TYPE_PERF_EVENT:
/* `struct pt_regs *` is expected, but we need to fix up */
if (btf_is_struct(t) && strcmp(tname, "pt_regs") == 0)
return true;
break;
+ case BPF_PROG_TYPE_PERF_EVENT:
+ if (__builtin_types_compatible_p(bpf_user_pt_regs_t, struct pt_regs) &&
+ btf_is_struct(t) && strcmp(tname, "pt_regs") == 0)
+ return true;
+ if (__builtin_types_compatible_p(bpf_user_pt_regs_t, struct user_pt_regs) &&
+ btf_is_struct(t) && strcmp(tname, "user_pt_regs") == 0)
+ return true;
+ if (__builtin_types_compatible_p(bpf_user_pt_regs_t, struct user_regs_struct) &&
+ btf_is_struct(t) && strcmp(tname, "user_regs_struct") == 0)
+ return true;
+ break;
case BPF_PROG_TYPE_RAW_TRACEPOINT:
case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE:
/* allow u64* as ctx */
@@ -6818,69 +6976,6 @@ static int clone_func_btf_info(struct btf *btf, int orig_fn_id, struct bpf_progr
return fn_id;
}
-static int probe_kern_arg_ctx_tag(void)
-{
- /* To minimize merge conflicts with BPF token series that refactors
- * feature detection code a lot, we don't integrate
- * probe_kern_arg_ctx_tag() into kernel_supports() feature-detection
- * framework yet, doing our own caching internally.
- * This will be cleaned up a bit later when bpf/bpf-next trees settle.
- */
- static int cached_result = -1;
- static const char strs[] = "\0a\0b\0arg:ctx\0";
- const __u32 types[] = {
- /* [1] INT */
- BTF_TYPE_INT_ENC(1 /* "a" */, BTF_INT_SIGNED, 0, 32, 4),
- /* [2] PTR -> VOID */
- BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 0),
- /* [3] FUNC_PROTO `int(void *a)` */
- BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 1),
- BTF_PARAM_ENC(1 /* "a" */, 2),
- /* [4] FUNC 'a' -> FUNC_PROTO (main prog) */
- BTF_TYPE_ENC(1 /* "a" */, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 3),
- /* [5] FUNC_PROTO `int(void *b __arg_ctx)` */
- BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_FUNC_PROTO, 0, 1), 1),
- BTF_PARAM_ENC(3 /* "b" */, 2),
- /* [6] FUNC 'b' -> FUNC_PROTO (subprog) */
- BTF_TYPE_ENC(3 /* "b" */, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 5),
- /* [7] DECL_TAG 'arg:ctx' -> func 'b' arg 'b' */
- BTF_TYPE_DECL_TAG_ENC(5 /* "arg:ctx" */, 6, 0),
- };
- const struct bpf_insn insns[] = {
- /* main prog */
- BPF_CALL_REL(+1),
- BPF_EXIT_INSN(),
- /* global subprog */
- BPF_EMIT_CALL(BPF_FUNC_get_func_ip), /* needs PTR_TO_CTX */
- BPF_EXIT_INSN(),
- };
- const struct bpf_func_info_min func_infos[] = {
- { 0, 4 }, /* main prog -> FUNC 'a' */
- { 2, 6 }, /* subprog -> FUNC 'b' */
- };
- LIBBPF_OPTS(bpf_prog_load_opts, opts);
- int prog_fd, btf_fd, insn_cnt = ARRAY_SIZE(insns);
-
- if (cached_result >= 0)
- return cached_result;
-
- btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types), strs, sizeof(strs));
- if (btf_fd < 0)
- return 0;
-
- opts.prog_btf_fd = btf_fd;
- opts.func_info = &func_infos;
- opts.func_info_cnt = ARRAY_SIZE(func_infos);
- opts.func_info_rec_size = sizeof(func_infos[0]);
-
- prog_fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, "det_arg_ctx",
- "GPL", insns, insn_cnt, &opts);
- close(btf_fd);
-
- cached_result = probe_fd(prog_fd);
- return cached_result;
-}
-
/* Check if main program or global subprog's function prototype has `arg:ctx`
* argument tags, and, if necessary, substitute correct type to match what BPF
* verifier would expect, taking into account specific program type. This
@@ -6905,7 +7000,7 @@ static int bpf_program_fixup_func_info(struct bpf_object *obj, struct bpf_progra
return 0;
/* don't do any fix ups if kernel natively supports __arg_ctx */
- if (probe_kern_arg_ctx_tag() > 0)
+ if (kernel_supports(obj, FEAT_ARG_CTX_TAG))
return 0;
/* some BPF program types just don't have named context structs, so
@@ -7036,8 +7131,8 @@ static int bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_pat
if (obj->btf_ext) {
err = bpf_object__relocate_core(obj, targ_btf_path);
if (err) {
- pr_warn("failed to perform CO-RE relocations: %d\n",
- err);
+ pr_warn("failed to perform CO-RE relocations: %s\n",
+ errstr(err));
return err;
}
bpf_object__sort_relos(obj);
@@ -7081,8 +7176,8 @@ static int bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_pat
err = bpf_object__relocate_calls(obj, prog);
if (err) {
- pr_warn("prog '%s': failed to relocate calls: %d\n",
- prog->name, err);
+ pr_warn("prog '%s': failed to relocate calls: %s\n",
+ prog->name, errstr(err));
return err;
}
@@ -7118,16 +7213,16 @@ static int bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_pat
/* Process data relos for main programs */
err = bpf_object__relocate_data(obj, prog);
if (err) {
- pr_warn("prog '%s': failed to relocate data references: %d\n",
- prog->name, err);
+ pr_warn("prog '%s': failed to relocate data references: %s\n",
+ prog->name, errstr(err));
return err;
}
/* Fix up .BTF.ext information, if necessary */
err = bpf_program_fixup_func_info(obj, prog);
if (err) {
- pr_warn("prog '%s': failed to perform .BTF.ext fix ups: %d\n",
- prog->name, err);
+ pr_warn("prog '%s': failed to perform .BTF.ext fix ups: %s\n",
+ prog->name, errstr(err));
return err;
}
}
@@ -7292,12 +7387,12 @@ static int bpf_object__collect_relos(struct bpf_object *obj)
data = sec_desc->data;
idx = shdr->sh_info;
- if (shdr->sh_type != SHT_REL) {
+ if (shdr->sh_type != SHT_REL || idx < 0 || idx >= obj->efile.sec_cnt) {
pr_warn("internal error at %d\n", __LINE__);
return -LIBBPF_ERRNO__INTERNAL;
}
- if (idx == obj->efile.st_ops_shndx || idx == obj->efile.st_ops_link_shndx)
+ if (obj->efile.secs[idx].sec_type == SEC_ST_OPS)
err = bpf_object__collect_st_ops_relos(obj, shdr, data);
else if (idx == obj->efile.btf_maps_shndx)
err = bpf_object__collect_map_relos(obj, shdr, data);
@@ -7379,8 +7474,14 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
opts->prog_flags |= BPF_F_XDP_HAS_FRAGS;
/* special check for usdt to use uprobe_multi link */
- if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK))
+ if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) {
+ /* for BPF_TRACE_UPROBE_MULTI, the user might query expected_attach_type
+ * from prog, while the value passed to the kernel comes from opts, so
+ * update both.
+ */
prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
+ opts->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
+ }
if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) {
int btf_obj_fd = 0, btf_type_id = 0, err;
@@ -7430,14 +7531,17 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
{
LIBBPF_OPTS(bpf_prog_load_opts, load_attr);
const char *prog_name = NULL;
- char *cp, errmsg[STRERR_BUFSIZE];
size_t log_buf_size = 0;
char *log_buf = NULL, *tmp;
- int btf_fd, ret, err;
bool own_log_buf = true;
__u32 log_level = prog->log_level;
+ int ret, err;
- if (prog->type == BPF_PROG_TYPE_UNSPEC) {
+ /* Be more helpful by rejecting, early on, programs that cannot possibly
+ * be valid, with a more meaningful and actionable error message.
+ */
+ switch (prog->type) {
+ case BPF_PROG_TYPE_UNSPEC:
/*
* The program type must be set. Most likely we couldn't find a proper
* section definition at load time, and thus we didn't infer the type.
@@ -7445,6 +7549,15 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
pr_warn("prog '%s': missing BPF prog type, check ELF section name '%s'\n",
prog->name, prog->sec_name);
return -EINVAL;
+ case BPF_PROG_TYPE_STRUCT_OPS:
+ if (prog->attach_btf_id == 0) {
+ pr_warn("prog '%s': SEC(\"struct_ops\") program isn't referenced anywhere, did you forget to use it?\n",
+ prog->name);
+ return -EINVAL;
+ }
+ break;
+ default:
+ break;
}
if (!insns || !insns_cnt)
@@ -7457,11 +7570,11 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
load_attr.attach_btf_id = prog->attach_btf_id;
load_attr.kern_version = kern_version;
load_attr.prog_ifindex = prog->prog_ifindex;
+ load_attr.expected_attach_type = prog->expected_attach_type;
/* specify func_info/line_info only if kernel supports them */
- btf_fd = btf__fd(obj->btf);
- if (btf_fd >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) {
- load_attr.prog_btf_fd = btf_fd;
+ if (obj->btf && btf__fd(obj->btf) >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) {
+ load_attr.prog_btf_fd = btf__fd(obj->btf);
load_attr.func_info = prog->func_info;
load_attr.func_info_rec_size = prog->func_info_rec_size;
load_attr.func_info_cnt = prog->func_info_cnt;
@@ -7473,21 +7586,22 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
load_attr.prog_flags = prog->prog_flags;
load_attr.fd_array = obj->fd_array;
+ load_attr.token_fd = obj->token_fd;
+ if (obj->token_fd)
+ load_attr.prog_flags |= BPF_F_TOKEN_FD;
+
/* adjust load_attr if sec_def provides custom preload callback */
if (prog->sec_def && prog->sec_def->prog_prepare_load_fn) {
err = prog->sec_def->prog_prepare_load_fn(prog, &load_attr, prog->sec_def->cookie);
if (err < 0) {
- pr_warn("prog '%s': failed to prepare load attributes: %d\n",
- prog->name, err);
+ pr_warn("prog '%s': failed to prepare load attributes: %s\n",
+ prog->name, errstr(err));
return err;
}
insns = prog->insns;
insns_cnt = prog->insns_cnt;
}
- /* allow prog_prepare_load_fn to change expected_attach_type */
- load_attr.expected_attach_type = prog->expected_attach_type;
-
if (obj->gen_loader) {
bpf_gen__prog_load(obj->gen_loader, prog->type, prog->name,
license, insns, insns_cnt, &load_attr,
@@ -7545,9 +7659,8 @@ retry_load:
continue;
if (bpf_prog_bind_map(ret, map->fd, NULL)) {
- cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
pr_warn("prog '%s': failed to bind map '%s': %s\n",
- prog->name, map->real_name, cp);
+ prog->name, map->real_name, errstr(errno));
/* Don't fail hard if can't bind rodata. */
}
}
@@ -7577,8 +7690,7 @@ retry_load:
/* post-process verifier log to improve error descriptions */
fixup_verifier_log(prog, log_buf, log_buf_size);
- cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
- pr_warn("prog '%s': BPF program load failed: %s\n", prog->name, cp);
+ pr_warn("prog '%s': BPF program load failed: %s\n", prog->name, errstr(errno));
pr_perm_msg(ret);
if (own_log_buf && log_buf && log_buf[0] != '\0') {
@@ -7850,13 +7962,6 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
for (i = 0; i < obj->nr_programs; i++) {
prog = &obj->programs[i];
- err = bpf_object__sanitize_prog(obj, prog);
- if (err)
- return err;
- }
-
- for (i = 0; i < obj->nr_programs; i++) {
- prog = &obj->programs[i];
if (prog_is_subprog(obj, prog))
continue;
if (!prog->autoload) {
@@ -7871,7 +7976,7 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
err = bpf_object_load_prog(obj, prog, prog->insns, prog->insns_cnt,
obj->license, obj->kern_version, &prog->fd);
if (err) {
- pr_warn("prog '%s': failed to load: %d\n", prog->name, err);
+ pr_warn("prog '%s': failed to load: %s\n", prog->name, errstr(err));
return err;
}
}
@@ -7880,6 +7985,21 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
return 0;
}
+static int bpf_object_prepare_progs(struct bpf_object *obj)
+{
+ struct bpf_program *prog;
+ size_t i;
+ int err;
+
+ for (i = 0; i < obj->nr_programs; i++) {
+ prog = &obj->programs[i];
+ err = bpf_object__sanitize_prog(obj, prog);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
static const struct bpf_sec_def *find_sec_def(const char *sec_name);
static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object_open_opts *opts)
@@ -7905,8 +8025,8 @@ static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object
if (prog->sec_def->prog_setup_fn) {
err = prog->sec_def->prog_setup_fn(prog, prog->sec_def->cookie);
if (err < 0) {
- pr_warn("prog '%s': failed to initialize: %d\n",
- prog->name, err);
+ pr_warn("prog '%s': failed to initialize: %s\n",
+ prog->name, errstr(err));
return err;
}
}
@@ -7916,16 +8036,19 @@ static int bpf_object_init_progs(struct bpf_object *obj, const struct bpf_object
}
static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf, size_t obj_buf_sz,
+ const char *obj_name,
const struct bpf_object_open_opts *opts)
{
- const char *obj_name, *kconfig, *btf_tmp_path;
+ const char *kconfig, *btf_tmp_path, *token_path;
struct bpf_object *obj;
- char tmp_name[64];
int err;
char *log_buf;
size_t log_size;
__u32 log_level;
+ if (obj_buf && !obj_name)
+ return ERR_PTR(-EINVAL);
+
if (elf_version(EV_CURRENT) == EV_NONE) {
pr_warn("failed to init libelf for %s\n",
path ? : "(mem buf)");
@@ -7935,16 +8058,12 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
if (!OPTS_VALID(opts, bpf_object_open_opts))
return ERR_PTR(-EINVAL);
- obj_name = OPTS_GET(opts, object_name, NULL);
+ obj_name = OPTS_GET(opts, object_name, NULL) ?: obj_name;
if (obj_buf) {
- if (!obj_name) {
- snprintf(tmp_name, sizeof(tmp_name), "%lx-%lx",
- (unsigned long)obj_buf,
- (unsigned long)obj_buf_sz);
- obj_name = tmp_name;
- }
path = obj_name;
pr_debug("loading object '%s' from buffer\n", obj_name);
+ } else {
+ pr_debug("loading object from %s\n", path);
}
log_buf = OPTS_GET(opts, kernel_log_buf, NULL);
@@ -7955,6 +8074,16 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
if (log_size && !log_buf)
return ERR_PTR(-EINVAL);
+ token_path = OPTS_GET(opts, bpf_token_path, NULL);
+ /* if the user didn't specify bpf_token_path explicitly, check whether the
+ * LIBBPF_BPF_TOKEN_PATH envvar is set and treat it as the bpf_token_path
+ * option
+ */
+ if (!token_path)
+ token_path = getenv("LIBBPF_BPF_TOKEN_PATH");
+ if (token_path && strlen(token_path) >= PATH_MAX)
+ return ERR_PTR(-ENAMETOOLONG);
+
obj = bpf_object__new(path, obj_buf, obj_buf_sz, obj_name);
if (IS_ERR(obj))
return obj;
@@ -7963,6 +8092,14 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
obj->log_size = log_size;
obj->log_level = log_level;
+ if (token_path) {
+ obj->token_path = strdup(token_path);
+ if (!obj->token_path) {
+ err = -ENOMEM;
+ goto out;
+ }
+ }
+
btf_tmp_path = OPTS_GET(opts, btf_custom_path, NULL);
if (btf_tmp_path) {
if (strlen(btf_tmp_path) >= PATH_MAX) {
@@ -7986,7 +8123,6 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
}
err = bpf_object__elf_init(obj);
- err = err ? : bpf_object__check_endianness(obj);
err = err ? : bpf_object__elf_collect(obj);
err = err ? : bpf_object__collect_externs(obj);
err = err ? : bpf_object_fixup_btf(obj);
@@ -8010,9 +8146,7 @@ bpf_object__open_file(const char *path, const struct bpf_object_open_opts *opts)
if (!path)
return libbpf_err_ptr(-EINVAL);
- pr_debug("loading %s\n", path);
-
- return libbpf_ptr(bpf_object_open(path, NULL, 0, opts));
+ return libbpf_ptr(bpf_object_open(path, NULL, 0, NULL, opts));
}
struct bpf_object *bpf_object__open(const char *path)
@@ -8024,10 +8158,15 @@ struct bpf_object *
bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
const struct bpf_object_open_opts *opts)
{
+ char tmp_name[64];
+
if (!obj_buf || obj_buf_sz == 0)
return libbpf_err_ptr(-EINVAL);
- return libbpf_ptr(bpf_object_open(NULL, obj_buf, obj_buf_sz, opts));
+ /* create a (quite useless) default "name" for this memory buffer object */
+ snprintf(tmp_name, sizeof(tmp_name), "%lx-%zx", (unsigned long)obj_buf, obj_buf_sz);
+
+ return libbpf_ptr(bpf_object_open(NULL, obj_buf, obj_buf_sz, tmp_name, opts));
}
static int bpf_object_unload(struct bpf_object *obj)
@@ -8063,7 +8202,10 @@ static int bpf_object__sanitize_maps(struct bpf_object *obj)
return 0;
}
-int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *ctx)
+typedef int (*kallsyms_cb_t)(unsigned long long sym_addr, char sym_type,
+ const char *sym_name, void *ctx);
+
+static int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *ctx)
{
char sym_type, sym_name[500];
unsigned long long sym_addr;
@@ -8073,7 +8215,7 @@ int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *ctx)
f = fopen("/proc/kallsyms", "re");
if (!f) {
err = -errno;
- pr_warn("failed to open /proc/kallsyms: %d\n", err);
+ pr_warn("failed to open /proc/kallsyms: %s\n", errstr(err));
return err;
}
@@ -8103,8 +8245,13 @@ static int kallsyms_cb(unsigned long long sym_addr, char sym_type,
struct bpf_object *obj = ctx;
const struct btf_type *t;
struct extern_desc *ext;
+ char *res;
- ext = find_extern_by_name(obj, sym_name);
+ res = strstr(sym_name, ".llvm.");
+ if (sym_type == 'd' && res)
+ ext = find_extern_by_name_with_len(obj, sym_name, res - sym_name);
+ else
+ ext = find_extern_by_name(obj, sym_name);
if (!ext || ext->type != EXT_KSYM)
return 0;
@@ -8429,11 +8576,13 @@ static int bpf_object__resolve_externs(struct bpf_object *obj,
static void bpf_map_prepare_vdata(const struct bpf_map *map)
{
+ const struct btf_type *type;
struct bpf_struct_ops *st_ops;
__u32 i;
st_ops = map->st_ops;
- for (i = 0; i < btf_vlen(st_ops->type); i++) {
+ type = btf__type_by_id(map->obj->btf, st_ops->type_id);
+ for (i = 0; i < btf_vlen(type); i++) {
struct bpf_program *prog = st_ops->progs[i];
void *kern_data;
int prog_fd;
@@ -8449,39 +8598,115 @@ static void bpf_map_prepare_vdata(const struct bpf_map *map)
static int bpf_object_prepare_struct_ops(struct bpf_object *obj)
{
+ struct bpf_map *map;
int i;
- for (i = 0; i < obj->nr_maps; i++)
- if (bpf_map__is_struct_ops(&obj->maps[i]))
- bpf_map_prepare_vdata(&obj->maps[i]);
+ for (i = 0; i < obj->nr_maps; i++) {
+ map = &obj->maps[i];
+
+ if (!bpf_map__is_struct_ops(map))
+ continue;
+
+ if (!map->autocreate)
+ continue;
+
+ bpf_map_prepare_vdata(map);
+ }
return 0;
}
-static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path)
+static void bpf_object_unpin(struct bpf_object *obj)
{
- int err, i;
+ int i;
- if (!obj)
- return libbpf_err(-EINVAL);
+ /* unpin any maps that were auto-pinned during load */
+ for (i = 0; i < obj->nr_maps; i++)
+ if (obj->maps[i].pinned && !obj->maps[i].reused)
+ bpf_map__unpin(&obj->maps[i], NULL);
+}
- if (obj->loaded) {
- pr_warn("object '%s': load can't be attempted twice\n", obj->name);
- return libbpf_err(-EINVAL);
+static void bpf_object_post_load_cleanup(struct bpf_object *obj)
+{
+ int i;
+
+ /* clean up fd_array */
+ zfree(&obj->fd_array);
+
+ /* clean up module BTFs */
+ for (i = 0; i < obj->btf_module_cnt; i++) {
+ close(obj->btf_modules[i].fd);
+ btf__free(obj->btf_modules[i].btf);
+ free(obj->btf_modules[i].name);
}
+ obj->btf_module_cnt = 0;
+ zfree(&obj->btf_modules);
- if (obj->gen_loader)
- bpf_gen__init(obj->gen_loader, extra_log_level, obj->nr_programs, obj->nr_maps);
+ /* clean up vmlinux BTF */
+ btf__free(obj->btf_vmlinux);
+ obj->btf_vmlinux = NULL;
+}
+
+static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_path)
+{
+ int err;
+
+ if (obj->state >= OBJ_PREPARED) {
+ pr_warn("object '%s': prepare loading can't be attempted twice\n", obj->name);
+ return -EINVAL;
+ }
- err = bpf_object__probe_loading(obj);
+ err = bpf_object_prepare_token(obj);
+ err = err ? : bpf_object__probe_loading(obj);
err = err ? : bpf_object__load_vmlinux_btf(obj, false);
err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
err = err ? : bpf_object__sanitize_maps(obj);
err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
+ err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
err = err ? : bpf_object__sanitize_and_load_btf(obj);
err = err ? : bpf_object__create_maps(obj);
- err = err ? : bpf_object__load_progs(obj, extra_log_level);
+ err = err ? : bpf_object_prepare_progs(obj);
+
+ if (err) {
+ bpf_object_unpin(obj);
+ bpf_object_unload(obj);
+ obj->state = OBJ_LOADED;
+ return err;
+ }
+
+ obj->state = OBJ_PREPARED;
+ return 0;
+}
+
+static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path)
+{
+ int err;
+
+ if (!obj)
+ return libbpf_err(-EINVAL);
+
+ if (obj->state >= OBJ_LOADED) {
+ pr_warn("object '%s': load can't be attempted twice\n", obj->name);
+ return libbpf_err(-EINVAL);
+ }
+
+ /* Disallow loading programs of non-native endianness into the kernel,
+ * but permit cross-endian creation of a "light skeleton".
+ */
+ if (obj->gen_loader) {
+ bpf_gen__init(obj->gen_loader, extra_log_level, obj->nr_programs, obj->nr_maps);
+ } else if (!is_native_endianness(obj)) {
+ pr_warn("object '%s': loading non-native endianness is unsupported\n", obj->name);
+ return libbpf_err(-LIBBPF_ERRNO__ENDIAN);
+ }
+
+ if (obj->state < OBJ_PREPARED) {
+ err = bpf_object_prepare(obj, target_btf_path);
+ if (err)
+ return libbpf_err(err);
+ }
+ err = bpf_object__load_progs(obj, extra_log_level);
err = err ? : bpf_object_init_prog_arrays(obj);
err = err ? : bpf_object_prepare_struct_ops(obj);
@@ -8493,36 +8718,22 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
err = bpf_gen__finish(obj->gen_loader, obj->nr_programs, obj->nr_maps);
}
- /* clean up fd_array */
- zfree(&obj->fd_array);
+ bpf_object_post_load_cleanup(obj);
+ obj->state = OBJ_LOADED; /* doesn't matter if successfully or not */
- /* clean up module BTFs */
- for (i = 0; i < obj->btf_module_cnt; i++) {
- close(obj->btf_modules[i].fd);
- btf__free(obj->btf_modules[i].btf);
- free(obj->btf_modules[i].name);
+ if (err) {
+ bpf_object_unpin(obj);
+ bpf_object_unload(obj);
+ pr_warn("failed to load object '%s'\n", obj->path);
+ return libbpf_err(err);
}
- free(obj->btf_modules);
-
- /* clean up vmlinux BTF */
- btf__free(obj->btf_vmlinux);
- obj->btf_vmlinux = NULL;
-
- obj->loaded = true; /* doesn't matter if successfully or not */
-
- if (err)
- goto out;
return 0;
-out:
- /* unpin any maps that were auto-pinned during load */
- for (i = 0; i < obj->nr_maps; i++)
- if (obj->maps[i].pinned && !obj->maps[i].reused)
- bpf_map__unpin(&obj->maps[i], NULL);
+}
- bpf_object_unload(obj);
- pr_warn("failed to load object '%s'\n", obj->path);
- return libbpf_err(err);
+int bpf_object__prepare(struct bpf_object *obj)
+{
+ return libbpf_err(bpf_object_prepare(obj, NULL));
}
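For illustration only (not part of the patch), a minimal sketch of the resulting open/prepare/load flow from an application; prog.bpf.o is a made-up object file name:

	#include <errno.h>
	#include <bpf/libbpf.h>

	static int load_with_prepare(void)
	{
		struct bpf_object *obj;
		int err;

		obj = bpf_object__open_file("prog.bpf.o", NULL);
		if (!obj)
			return -errno;

		err = bpf_object__prepare(obj);		/* maps created, programs relocated */
		if (!err)
			err = bpf_object__load(obj);	/* programs actually loaded */

		/* a real caller would attach programs here before closing */
		bpf_object__close(obj);			/* also runs post-load cleanup */
		return err;
	}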
int bpf_object__load(struct bpf_object *obj)
@@ -8532,7 +8743,6 @@ int bpf_object__load(struct bpf_object *obj)
static int make_parent_dir(const char *path)
{
- char *cp, errmsg[STRERR_BUFSIZE];
char *dname, *dir;
int err = 0;
@@ -8546,15 +8756,13 @@ static int make_parent_dir(const char *path)
free(dname);
if (err) {
- cp = libbpf_strerror_r(-err, errmsg, sizeof(errmsg));
- pr_warn("failed to mkdir %s: %s\n", path, cp);
+ pr_warn("failed to mkdir %s: %s\n", path, errstr(err));
}
return err;
}
static int check_path(const char *path)
{
- char *cp, errmsg[STRERR_BUFSIZE];
struct statfs st_fs;
char *dname, *dir;
int err = 0;
@@ -8568,8 +8776,7 @@ static int check_path(const char *path)
dir = dirname(dname);
if (statfs(dir, &st_fs)) {
- cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
- pr_warn("failed to statfs %s: %s\n", dir, cp);
+ pr_warn("failed to statfs %s: %s\n", dir, errstr(errno));
err = -errno;
}
free(dname);
@@ -8584,7 +8791,6 @@ static int check_path(const char *path)
int bpf_program__pin(struct bpf_program *prog, const char *path)
{
- char *cp, errmsg[STRERR_BUFSIZE];
int err;
if (prog->fd < 0) {
@@ -8602,8 +8808,7 @@ int bpf_program__pin(struct bpf_program *prog, const char *path)
if (bpf_obj_pin(prog->fd, path)) {
err = -errno;
- cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
- pr_warn("prog '%s': failed to pin at '%s': %s\n", prog->name, path, cp);
+ pr_warn("prog '%s': failed to pin at '%s': %s\n", prog->name, path, errstr(err));
return libbpf_err(err);
}
@@ -8634,7 +8839,6 @@ int bpf_program__unpin(struct bpf_program *prog, const char *path)
int bpf_map__pin(struct bpf_map *map, const char *path)
{
- char *cp, errmsg[STRERR_BUFSIZE];
int err;
if (map == NULL) {
@@ -8642,6 +8846,11 @@ int bpf_map__pin(struct bpf_map *map, const char *path)
return libbpf_err(-EINVAL);
}
+ if (map->fd < 0) {
+ pr_warn("map '%s': can't pin BPF map without FD (was it created?)\n", map->name);
+ return libbpf_err(-EINVAL);
+ }
+
if (map->pin_path) {
if (path && strcmp(path, map->pin_path)) {
pr_warn("map '%s' already has pin path '%s' different from '%s'\n",
@@ -8688,8 +8897,7 @@ int bpf_map__pin(struct bpf_map *map, const char *path)
return 0;
out_err:
- cp = libbpf_strerror_r(-err, errmsg, sizeof(errmsg));
- pr_warn("failed to pin map: %s\n", cp);
+ pr_warn("failed to pin map: %s\n", errstr(err));
return libbpf_err(err);
}
@@ -8775,7 +8983,7 @@ int bpf_object__pin_maps(struct bpf_object *obj, const char *path)
if (!obj)
return libbpf_err(-ENOENT);
- if (!obj->loaded) {
+ if (obj->state < OBJ_PREPARED) {
pr_warn("object not yet loaded; load it first\n");
return libbpf_err(-ENOENT);
}
@@ -8854,7 +9062,7 @@ int bpf_object__pin_programs(struct bpf_object *obj, const char *path)
if (!obj)
return libbpf_err(-ENOENT);
- if (!obj->loaded) {
+ if (obj->state < OBJ_LOADED) {
pr_warn("object not yet loaded; load it first\n");
return libbpf_err(-ENOENT);
}
@@ -8947,13 +9155,9 @@ static void bpf_map__destroy(struct bpf_map *map)
zfree(&map->init_slots);
map->init_slots_sz = 0;
- if (map->mmaped) {
- size_t mmap_sz;
-
- mmap_sz = bpf_map_mmap_sz(map->def.value_size, map->def.max_entries);
- munmap(map->mmaped, mmap_sz);
- map->mmaped = NULL;
- }
+ if (map->mmaped && map->mmaped != map->obj->arena_data)
+ munmap(map->mmaped, bpf_map_mmap_sz(map));
+ map->mmaped = NULL;
if (map->st_ops) {
zfree(&map->st_ops->data);
@@ -8977,6 +9181,13 @@ void bpf_object__close(struct bpf_object *obj)
if (IS_ERR_OR_NULL(obj))
return;
+ /*
+ * if the user called bpf_object__prepare() without ever getting to
+ * bpf_object__load(), clean up the state that is normally cleaned up
+ * at the end of the loading step
+ */
+ bpf_object_post_load_cleanup(obj);
+
usdt_manager_free(obj->usdt_man);
obj->usdt_man = NULL;
@@ -8993,8 +9204,10 @@ void bpf_object__close(struct bpf_object *obj)
zfree(&obj->btf_custom_path);
zfree(&obj->kconfig);
- for (i = 0; i < obj->nr_extern; i++)
+ for (i = 0; i < obj->nr_extern; i++) {
+ zfree(&obj->externs[i].name);
zfree(&obj->externs[i].essent_name);
+ }
zfree(&obj->externs);
obj->nr_extern = 0;
@@ -9008,6 +9221,13 @@ void bpf_object__close(struct bpf_object *obj)
}
zfree(&obj->programs);
+ zfree(&obj->feat_cache);
+ zfree(&obj->token_path);
+ if (obj->token_fd > 0)
+ close(obj->token_fd);
+
+ zfree(&obj->arena_data);
+
free(obj);
}
@@ -9021,6 +9241,11 @@ unsigned int bpf_object__kversion(const struct bpf_object *obj)
return obj ? obj->kern_version : 0;
}
+int bpf_object__token_fd(const struct bpf_object *obj)
+{
+ return obj->token_fd ?: -1;
+}
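Illustrative only: how a caller might opt into BPF token usage, assuming a BPF FS instance with delegation options mounted at /sys/fs/bpf (the LIBBPF_BPF_TOKEN_PATH environment variable is the equivalent knob that needs no code changes):

	LIBBPF_OPTS(bpf_object_open_opts, opts,
		.bpf_token_path = "/sys/fs/bpf",	/* assumed delegating bpffs mount */
	);
	struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);

	/* after a successful bpf_object__load(), bpf_object__token_fd() returns
	 * the FD of the token created from this path, or -1 if none is in use
	 */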
+
struct btf *bpf_object__btf(const struct bpf_object *obj)
{
return obj ? obj->btf : NULL;
@@ -9033,7 +9258,7 @@ int bpf_object__btf_fd(const struct bpf_object *obj)
int bpf_object__set_kversion(struct bpf_object *obj, __u32 kern_version)
{
- if (obj->loaded)
+ if (obj->state >= OBJ_LOADED)
return libbpf_err(-EINVAL);
obj->kern_version = kern_version;
@@ -9046,13 +9271,14 @@ int bpf_object__gen_loader(struct bpf_object *obj, struct gen_loader_opts *opts)
struct bpf_gen *gen;
if (!opts)
- return -EFAULT;
+ return libbpf_err(-EFAULT);
if (!OPTS_VALID(opts, gen_loader_opts))
- return -EINVAL;
- gen = calloc(sizeof(*gen), 1);
+ return libbpf_err(-EINVAL);
+ gen = calloc(1, sizeof(*gen));
if (!gen)
- return -ENOMEM;
+ return libbpf_err(-ENOMEM);
gen->opts = opts;
+ gen->swapped_endian = !is_native_endianness(obj);
obj->gen_loader = gen;
return 0;
}
@@ -9129,7 +9355,7 @@ bool bpf_program__autoload(const struct bpf_program *prog)
int bpf_program__set_autoload(struct bpf_program *prog, bool autoload)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EINVAL);
prog->autoload = autoload;
@@ -9161,14 +9387,14 @@ int bpf_program__set_insns(struct bpf_program *prog,
{
struct bpf_insn *insns;
- if (prog->obj->loaded)
- return -EBUSY;
+ if (prog->obj->state >= OBJ_LOADED)
+ return libbpf_err(-EBUSY);
insns = libbpf_reallocarray(prog->insns, new_insn_cnt, sizeof(*insns));
/* NULL is a valid return from reallocarray if the new count is zero */
if (!insns && new_insn_cnt) {
pr_warn("prog '%s': failed to realloc prog code\n", prog->name);
- return -ENOMEM;
+ return libbpf_err(-ENOMEM);
}
memcpy(insns, new_insns, new_insn_cnt * sizeof(*insns));
@@ -9204,7 +9430,7 @@ static int last_custom_sec_def_handler_id;
int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
/* if type is not changed, do nothing */
@@ -9235,7 +9461,7 @@ enum bpf_attach_type bpf_program__expected_attach_type(const struct bpf_program
int bpf_program__set_expected_attach_type(struct bpf_program *prog,
enum bpf_attach_type type)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
prog->expected_attach_type = type;
@@ -9249,7 +9475,7 @@ __u32 bpf_program__flags(const struct bpf_program *prog)
int bpf_program__set_flags(struct bpf_program *prog, __u32 flags)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
prog->prog_flags = flags;
@@ -9263,7 +9489,7 @@ __u32 bpf_program__log_level(const struct bpf_program *prog)
int bpf_program__set_log_level(struct bpf_program *prog, __u32 log_level)
{
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EBUSY);
prog->log_level = log_level;
@@ -9279,17 +9505,41 @@ const char *bpf_program__log_buf(const struct bpf_program *prog, size_t *log_siz
int bpf_program__set_log_buf(struct bpf_program *prog, char *log_buf, size_t log_size)
{
if (log_size && !log_buf)
- return -EINVAL;
+ return libbpf_err(-EINVAL);
if (prog->log_size > UINT_MAX)
- return -EINVAL;
- if (prog->obj->loaded)
- return -EBUSY;
+ return libbpf_err(-EINVAL);
+ if (prog->obj->state >= OBJ_LOADED)
+ return libbpf_err(-EBUSY);
prog->log_buf = log_buf;
prog->log_size = log_size;
return 0;
}
+struct bpf_func_info *bpf_program__func_info(const struct bpf_program *prog)
+{
+ if (prog->func_info_rec_size != sizeof(struct bpf_func_info))
+ return libbpf_err_ptr(-EOPNOTSUPP);
+ return prog->func_info;
+}
+
+__u32 bpf_program__func_info_cnt(const struct bpf_program *prog)
+{
+ return prog->func_info_cnt;
+}
+
+struct bpf_line_info *bpf_program__line_info(const struct bpf_program *prog)
+{
+ if (prog->line_info_rec_size != sizeof(struct bpf_line_info))
+ return libbpf_err_ptr(-EOPNOTSUPP);
+ return prog->line_info;
+}
+
+__u32 bpf_program__line_info_cnt(const struct bpf_program *prog)
+{
+ return prog->line_info_cnt;
+}
+
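Illustrative only, a sketch of reading the records back (prog is assumed to be a struct bpf_program * from an opened object; bpf_func_info is the UAPI type with insn_off/type_id fields):

	const struct bpf_func_info *fi = bpf_program__func_info(prog);
	__u32 i, cnt = bpf_program__func_info_cnt(prog);

	for (i = 0; fi && i < cnt; i++)
		printf("insn #%u -> BTF func type id %u\n",
		       fi[i].insn_off, fi[i].type_id);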
#define SEC_DEF(sec_pfx, ptype, atype, flags, ...) { \
.sec = (char *)sec_pfx, \
.prog_type = BPF_PROG_TYPE_##ptype, \
@@ -9307,6 +9557,7 @@ static int attach_tp(const struct bpf_program *prog, long cookie, struct bpf_lin
static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
+static int attach_kprobe_session(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_lsm(const struct bpf_program *prog, long cookie, struct bpf_link **link);
static int attach_iter(const struct bpf_program *prog, long cookie, struct bpf_link **link);
@@ -9323,10 +9574,13 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("uretprobe.s+", KPROBE, 0, SEC_SLEEPABLE, attach_uprobe),
SEC_DEF("kprobe.multi+", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi),
SEC_DEF("kretprobe.multi+", KPROBE, BPF_TRACE_KPROBE_MULTI, SEC_NONE, attach_kprobe_multi),
+ SEC_DEF("kprobe.session+", KPROBE, BPF_TRACE_KPROBE_SESSION, SEC_NONE, attach_kprobe_session),
SEC_DEF("uprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi),
SEC_DEF("uretprobe.multi+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_NONE, attach_uprobe_multi),
+ SEC_DEF("uprobe.session+", KPROBE, BPF_TRACE_UPROBE_SESSION, SEC_NONE, attach_uprobe_multi),
SEC_DEF("uprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi),
SEC_DEF("uretprobe.multi.s+", KPROBE, BPF_TRACE_UPROBE_MULTI, SEC_SLEEPABLE, attach_uprobe_multi),
+ SEC_DEF("uprobe.session.s+", KPROBE, BPF_TRACE_UPROBE_SESSION, SEC_SLEEPABLE, attach_uprobe_multi),
SEC_DEF("ksyscall+", KPROBE, 0, SEC_NONE, attach_ksyscall),
SEC_DEF("kretsyscall+", KPROBE, 0, SEC_NONE, attach_ksyscall),
SEC_DEF("usdt+", KPROBE, 0, SEC_USDT, attach_usdt),
@@ -9374,6 +9628,7 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("sockops", SOCK_OPS, BPF_CGROUP_SOCK_OPS, SEC_ATTACHABLE_OPT),
SEC_DEF("sk_skb/stream_parser", SK_SKB, BPF_SK_SKB_STREAM_PARSER, SEC_ATTACHABLE_OPT),
SEC_DEF("sk_skb/stream_verdict",SK_SKB, BPF_SK_SKB_STREAM_VERDICT, SEC_ATTACHABLE_OPT),
+ SEC_DEF("sk_skb/verdict", SK_SKB, BPF_SK_SKB_VERDICT, SEC_ATTACHABLE_OPT),
SEC_DEF("sk_skb", SK_SKB, 0, SEC_NONE),
SEC_DEF("sk_msg", SK_MSG, BPF_SK_MSG_VERDICT, SEC_ATTACHABLE_OPT),
SEC_DEF("lirc_mode2", LIRC_MODE2, BPF_LIRC_MODE2, SEC_ATTACHABLE_OPT),
@@ -9668,10 +9923,13 @@ static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
return NULL;
}
-/* Collect the reloc from ELF and populate the st_ops->progs[] */
+/* Collect the relocations from ELF, populate st_ops->progs[], and update
+ * st_ops->data for the shadow type.
+ */
static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
Elf64_Shdr *shdr, Elf_Data *data)
{
+ const struct btf_type *type;
const struct btf_member *member;
struct bpf_struct_ops *st_ops;
struct bpf_program *prog;
@@ -9731,13 +9989,14 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
}
insn_idx = sym->st_value / BPF_INSN_SZ;
- member = find_member_by_offset(st_ops->type, moff * 8);
+ type = btf__type_by_id(btf, st_ops->type_id);
+ member = find_member_by_offset(type, moff * 8);
if (!member) {
pr_warn("struct_ops reloc %s: cannot find member at moff %u\n",
map->name, moff);
return -EINVAL;
}
- member_idx = member - btf_members(st_ops->type);
+ member_idx = member - btf_members(type);
name = btf__name_by_offset(btf, member->name_off);
if (!resolve_func_ptr(btf, member->type, NULL)) {
@@ -9760,28 +10019,15 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
return -EINVAL;
}
- /* if we haven't yet processed this BPF program, record proper
- * attach_btf_id and member_idx
- */
- if (!prog->attach_btf_id) {
- prog->attach_btf_id = st_ops->type_id;
- prog->expected_attach_type = member_idx;
- }
+ st_ops->progs[member_idx] = prog;
- /* struct_ops BPF prog can be re-used between multiple
- * .struct_ops & .struct_ops.link as long as it's the
- * same struct_ops struct definition and the same
- * function pointer field
+ /* st_ops->data is exposed to users: bpf_map__initial_value()
+ * returns it as a pointer to the shadow type. Every function
+ * pointer in the original struct type is represented in the
+ * shadow type as a pointer to struct bpf_program.
*/
- if (prog->attach_btf_id != st_ops->type_id ||
- prog->expected_attach_type != member_idx) {
- pr_warn("struct_ops reloc %s: cannot use prog %s in sec %s with type %u attach_btf_id %u expected_attach_type %u for func ptr %s\n",
- map->name, prog->name, prog->sec_name, prog->type,
- prog->attach_btf_id, prog->expected_attach_type, name);
- return -EINVAL;
- }
-
- st_ops->progs[member_idx] = prog;
+ *((struct bpf_program **)(st_ops->data + moff)) = prog;
}
return 0;
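Illustrative only, and relying on assumptions beyond this patch (a bpftool-generated skeleton named example with a struct_ops map ops exposed through its shadow type): function-pointer members now hold struct bpf_program pointers, so a callback can be swapped before load.

	struct example *skel = example__open();

	/* shadow-type member is a struct bpf_program *, not a kernel address */
	skel->struct_ops.ops->do_work = skel->progs.alt_do_work;
	err = example__load(skel);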
@@ -9863,7 +10109,7 @@ int libbpf_find_vmlinux_btf_id(const char *name,
return libbpf_err(err);
}
-static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
+static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd, int token_fd)
{
struct bpf_prog_info info;
__u32 info_len = sizeof(info);
@@ -9873,8 +10119,8 @@ static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
memset(&info, 0, info_len);
err = bpf_prog_get_info_by_fd(attach_prog_fd, &info, &info_len);
if (err) {
- pr_warn("failed bpf_prog_get_info_by_fd for FD %d: %d\n",
- attach_prog_fd, err);
+ pr_warn("failed bpf_prog_get_info_by_fd for FD %d: %s\n",
+ attach_prog_fd, errstr(err));
return err;
}
@@ -9883,10 +10129,10 @@ static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
pr_warn("The target program doesn't have BTF\n");
goto out;
}
- btf = btf__load_from_kernel_by_id(info.btf_id);
+ btf = btf_load_from_kernel(info.btf_id, NULL, token_fd);
err = libbpf_get_error(btf);
if (err) {
- pr_warn("Failed to get BTF %d of the program: %d\n", info.btf_id, err);
+ pr_warn("Failed to get BTF %d of the program: %s\n", info.btf_id, errstr(err));
goto out;
}
err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
@@ -9903,16 +10149,28 @@ static int find_kernel_btf_id(struct bpf_object *obj, const char *attach_name,
enum bpf_attach_type attach_type,
int *btf_obj_fd, int *btf_type_id)
{
- int ret, i;
+ int ret, i, mod_len = 0;
+ const char *fn_name, *mod_name = NULL;
- ret = find_attach_btf_id(obj->btf_vmlinux, attach_name, attach_type);
- if (ret > 0) {
- *btf_obj_fd = 0; /* vmlinux BTF */
- *btf_type_id = ret;
- return 0;
+ fn_name = strchr(attach_name, ':');
+ if (fn_name) {
+ mod_name = attach_name;
+ mod_len = fn_name - mod_name;
+ fn_name++;
+ }
+
+ if (!mod_name || strncmp(mod_name, "vmlinux", mod_len) == 0) {
+ ret = find_attach_btf_id(obj->btf_vmlinux,
+ mod_name ? fn_name : attach_name,
+ attach_type);
+ if (ret > 0) {
+ *btf_obj_fd = 0; /* vmlinux BTF */
+ *btf_type_id = ret;
+ return 0;
+ }
+ if (ret != -ENOENT)
+ return ret;
}
- if (ret != -ENOENT)
- return ret;
ret = load_module_btfs(obj);
if (ret)
@@ -9921,7 +10179,12 @@ static int find_kernel_btf_id(struct bpf_object *obj, const char *attach_name,
for (i = 0; i < obj->btf_module_cnt; i++) {
const struct module_btf *mod = &obj->btf_modules[i];
- ret = find_attach_btf_id(mod->btf, attach_name, attach_type);
+ if (mod_name && strncmp(mod->name, mod_name, mod_len) != 0)
+ continue;
+
+ ret = find_attach_btf_id(mod->btf,
+ mod_name ? fn_name : attach_name,
+ attach_type);
if (ret > 0) {
*btf_obj_fd = mod->fd;
*btf_type_id = ret;
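
This change teaches find_kernel_btf_id() to accept a "module:function" attach name and restrict the BTF search to the named module (or vmlinux when the prefix is "vmlinux" or absent). A BPF-side sketch of how that would typically be spelled in a SEC() name; the module my_mod and its function my_mod_func are hypothetical:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char LICENSE[] SEC("license") = "GPL";

	/* "module:function" attach name resolved against the module's BTF */
	SEC("fentry/my_mod:my_mod_func")
	int BPF_PROG(on_my_mod_func)
	{
		return 0;
	}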
@@ -9949,10 +10212,10 @@ static int libbpf_find_attach_btf_id(struct bpf_program *prog, const char *attac
pr_warn("prog '%s': attach program FD is not set\n", prog->name);
return -EINVAL;
}
- err = libbpf_find_prog_btf_id(attach_name, attach_prog_fd);
+ err = libbpf_find_prog_btf_id(attach_name, attach_prog_fd, prog->obj->token_fd);
if (err < 0) {
- pr_warn("prog '%s': failed to find BPF program (FD %d) BTF ID for '%s': %d\n",
- prog->name, attach_prog_fd, attach_name, err);
+ pr_warn("prog '%s': failed to find BPF program (FD %d) BTF ID for '%s': %s\n",
+ prog->name, attach_prog_fd, attach_name, errstr(err));
return err;
}
*btf_obj_fd = 0;
@@ -9966,11 +10229,13 @@ static int libbpf_find_attach_btf_id(struct bpf_program *prog, const char *attac
*btf_obj_fd = 0;
*btf_type_id = 1;
} else {
- err = find_kernel_btf_id(prog->obj, attach_name, attach_type, btf_obj_fd, btf_type_id);
+ err = find_kernel_btf_id(prog->obj, attach_name,
+ attach_type, btf_obj_fd,
+ btf_type_id);
}
if (err) {
- pr_warn("prog '%s': failed to find kernel BTF type ID of '%s': %d\n",
- prog->name, attach_name, err);
+ pr_warn("prog '%s': failed to find kernel BTF type ID of '%s': %s\n",
+ prog->name, attach_name, errstr(err));
return err;
}
return 0;
@@ -10184,25 +10449,28 @@ static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
{
- if (map->obj->loaded || map->reused)
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
if (map->mmaped) {
- int err;
size_t mmap_old_sz, mmap_new_sz;
+ int err;
+
+ if (map->def.type != BPF_MAP_TYPE_ARRAY)
+ return libbpf_err(-EOPNOTSUPP);
- mmap_old_sz = bpf_map_mmap_sz(map->def.value_size, map->def.max_entries);
- mmap_new_sz = bpf_map_mmap_sz(size, map->def.max_entries);
+ mmap_old_sz = bpf_map_mmap_sz(map);
+ mmap_new_sz = array_map_mmap_sz(size, map->def.max_entries);
err = bpf_map_mmap_resize(map, mmap_old_sz, mmap_new_sz);
if (err) {
- pr_warn("map '%s': failed to resize memory-mapped region: %d\n",
- bpf_map__name(map), err);
- return err;
+ pr_warn("map '%s': failed to resize memory-mapped region: %s\n",
+ bpf_map__name(map), errstr(err));
+ return libbpf_err(err);
}
err = map_btf_datasec_resize(map, size);
if (err && err != -ENOENT) {
- pr_warn("map '%s': failed to adjust resized BTF, clearing BTF key/value info: %d\n",
- bpf_map__name(map), err);
+ pr_warn("map '%s': failed to adjust resized BTF, clearing BTF key/value info: %s\n",
+ bpf_map__name(map), errstr(err));
map->btf_value_type_id = 0;
map->btf_key_type_id = 0;
}
@@ -10225,22 +10493,41 @@ __u32 bpf_map__btf_value_type_id(const struct bpf_map *map)
int bpf_map__set_initial_value(struct bpf_map *map,
const void *data, size_t size)
{
- if (map->obj->loaded || map->reused)
+ size_t actual_sz;
+
+ if (map_is_created(map))
return libbpf_err(-EBUSY);
- if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG ||
- size != map->def.value_size)
+ if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG)
+ return libbpf_err(-EINVAL);
+
+ if (map->def.type == BPF_MAP_TYPE_ARENA)
+ actual_sz = map->obj->arena_data_sz;
+ else
+ actual_sz = map->def.value_size;
+ if (size != actual_sz)
return libbpf_err(-EINVAL);
memcpy(map->mmaped, data, size);
return 0;
}
-void *bpf_map__initial_value(struct bpf_map *map, size_t *psize)
+void *bpf_map__initial_value(const struct bpf_map *map, size_t *psize)
{
+ if (bpf_map__is_struct_ops(map)) {
+ if (psize)
+ *psize = map->def.value_size;
+ return map->st_ops->data;
+ }
+
if (!map->mmaped)
return NULL;
- *psize = map->def.value_size;
+
+ if (map->def.type == BPF_MAP_TYPE_ARENA)
+ *psize = map->obj->arena_data_sz;
+ else
+ *psize = map->def.value_size;
+
return map->mmaped;
}
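
With this hunk, bpf_map__initial_value() and bpf_map__set_initial_value() use the arena data size (obj->arena_data_sz) rather than value_size for BPF_MAP_TYPE_ARENA maps, in addition to returning the struct_ops shadow data above. A hedged sketch of seeding an arena's initial image before load ("arena" is an illustrative map name, seed/seed_sz a caller-provided blob):

	#include <errno.h>
	#include <bpf/libbpf.h>

	static int seed_arena(struct bpf_object *obj, const void *seed, size_t seed_sz)
	{
		struct bpf_map *map = bpf_object__find_map_by_name(obj, "arena");
		size_t cur_sz;
		void *cur;

		if (!map)
			return -ENOENT;
		cur = bpf_map__initial_value(map, &cur_sz);	/* cur_sz == arena data size */
		if (!cur || seed_sz != cur_sz)
			return -EINVAL;
		return bpf_map__set_initial_value(map, seed, seed_sz);
	}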
@@ -10280,6 +10567,27 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
return 0;
}
+int bpf_map__set_exclusive_program(struct bpf_map *map, struct bpf_program *prog)
+{
+ if (map_is_created(map)) {
+ pr_warn("exclusive programs must be set before map creation\n");
+ return libbpf_err(-EINVAL);
+ }
+
+ if (map->obj != prog->obj) {
+ pr_warn("excl_prog and map must be from the same bpf object\n");
+ return libbpf_err(-EINVAL);
+ }
+
+ map->excl_prog = prog;
+ return 0;
+}
+
+struct bpf_program *bpf_map__exclusive_program(struct bpf_map *map)
+{
+ return map->excl_prog;
+}
+
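The new setter pairs a map with the single program allowed to use it, and must be called after open but before the map is created. A minimal sketch (map and program names are illustrative):

	#include <errno.h>
	#include <bpf/libbpf.h>

	static int make_map_exclusive(struct bpf_object *obj)
	{
		struct bpf_map *map = bpf_object__find_map_by_name(obj, "stats_map");
		struct bpf_program *prog = bpf_object__find_program_by_name(obj, "count_events");

		if (!map || !prog)
			return -ENOENT;
		/* both must come from the same object; call before bpf_object__load() */
		return bpf_map__set_exclusive_program(map, prog);
	}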
static struct bpf_map *
__bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
{
@@ -10307,7 +10615,7 @@ __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
struct bpf_map *
bpf_object__next_map(const struct bpf_object *obj, const struct bpf_map *prev)
{
- if (prev == NULL)
+ if (prev == NULL && obj != NULL)
return obj->maps;
return __bpf_map__iter(prev, obj, 1);
@@ -10316,7 +10624,7 @@ bpf_object__next_map(const struct bpf_object *obj, const struct bpf_map *prev)
struct bpf_map *
bpf_object__prev_map(const struct bpf_object *obj, const struct bpf_map *next)
{
- if (next == NULL) {
+ if (next == NULL && obj != NULL) {
if (!obj->nr_maps)
return NULL;
return obj->maps + obj->nr_maps - 1;
@@ -10370,6 +10678,11 @@ static int validate_map_op(const struct bpf_map *map, size_t key_sz,
return -EINVAL;
}
+ if (map->fd < 0) {
+ pr_warn("map '%s': can't use BPF map without FD (was it created?)\n", map->name);
+ return -EINVAL;
+ }
+
if (!check_value_sz)
return 0;
@@ -10482,8 +10795,15 @@ long libbpf_get_error(const void *ptr)
int bpf_link__update_program(struct bpf_link *link, struct bpf_program *prog)
{
int ret;
+ int prog_fd = bpf_program__fd(prog);
- ret = bpf_link_update(bpf_link__fd(link), bpf_program__fd(prog), NULL);
+ if (prog_fd < 0) {
+ pr_warn("prog '%s': can't use BPF program without FD (was it loaded?)\n",
+ prog->name);
+ return libbpf_err(-EINVAL);
+ }
+
+ ret = bpf_link_update(bpf_link__fd(link), prog_fd, NULL);
return libbpf_err_errno(ret);
}
@@ -10662,7 +10982,6 @@ static void bpf_link_perf_dealloc(struct bpf_link *link)
struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *prog, int pfd,
const struct bpf_perf_event_opts *opts)
{
- char errmsg[STRERR_BUFSIZE];
struct bpf_link_perf *link;
int prog_fd, link_fd = -1, err;
bool force_ioctl_attach;
@@ -10677,7 +10996,7 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
}
prog_fd = bpf_program__fd(prog);
if (prog_fd < 0) {
- pr_warn("prog '%s': can't attach BPF program w/o FD (did you load it?)\n",
+ pr_warn("prog '%s': can't attach BPF program without FD (was it loaded?)\n",
prog->name);
return libbpf_err_ptr(-EINVAL);
}
@@ -10697,9 +11016,8 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
link_fd = bpf_link_create(prog_fd, pfd, BPF_PERF_EVENT, &link_opts);
if (link_fd < 0) {
err = -errno;
- pr_warn("prog '%s': failed to create BPF link for perf_event FD %d: %d (%s)\n",
- prog->name, pfd,
- err, libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ pr_warn("prog '%s': failed to create BPF link for perf_event FD %d: %s\n",
+ prog->name, pfd, errstr(err));
goto err_out;
}
link->link.fd = link_fd;
@@ -10713,7 +11031,7 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0) {
err = -errno;
pr_warn("prog '%s': failed to attach to perf_event FD %d: %s\n",
- prog->name, pfd, libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ prog->name, pfd, errstr(err));
if (err == -EPROTO)
pr_warn("prog '%s': try add PERF_SAMPLE_CALLCHAIN to or remove exclude_callchain_[kernel|user] from pfd %d\n",
prog->name, pfd);
@@ -10721,11 +11039,14 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
}
link->link.fd = pfd;
}
- if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
- err = -errno;
- pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
- prog->name, pfd, libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
- goto err_out;
+
+ if (!OPTS_GET(opts, dont_enable, false)) {
+ if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
+ err = -errno;
+ pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
+ prog->name, pfd, errstr(err));
+ goto err_out;
+ }
}
return &link->link;
@@ -10748,22 +11069,19 @@ struct bpf_link *bpf_program__attach_perf_event(const struct bpf_program *prog,
*/
static int parse_uint_from_file(const char *file, const char *fmt)
{
- char buf[STRERR_BUFSIZE];
int err, ret;
FILE *f;
f = fopen(file, "re");
if (!f) {
err = -errno;
- pr_debug("failed to open '%s': %s\n", file,
- libbpf_strerror_r(err, buf, sizeof(buf)));
+ pr_debug("failed to open '%s': %s\n", file, errstr(err));
return err;
}
err = fscanf(f, fmt, &ret);
if (err != 1) {
err = err == EOF ? -EIO : -errno;
- pr_debug("failed to parse '%s': %s\n", file,
- libbpf_strerror_r(err, buf, sizeof(buf)));
+ pr_debug("failed to parse '%s': %s\n", file, errstr(err));
fclose(f);
return err;
}
@@ -10807,7 +11125,6 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
{
const size_t attr_sz = sizeof(struct perf_event_attr);
struct perf_event_attr attr;
- char errmsg[STRERR_BUFSIZE];
int type, pfd;
if ((__u64)ref_ctr_off >= (1ULL << PERF_UPROBE_REF_CTR_OFFSET_BITS))
@@ -10820,7 +11137,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
if (type < 0) {
pr_warn("failed to determine %s perf type: %s\n",
uprobe ? "uprobe" : "kprobe",
- libbpf_strerror_r(type, errmsg, sizeof(errmsg)));
+ errstr(type));
return type;
}
if (retprobe) {
@@ -10830,7 +11147,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
if (bit < 0) {
pr_warn("failed to determine %s retprobe bit: %s\n",
uprobe ? "uprobe" : "kprobe",
- libbpf_strerror_r(bit, errmsg, sizeof(errmsg)));
+ errstr(bit));
return bit;
}
attr.config |= 1 << bit;
@@ -10913,16 +11230,16 @@ static const char *tracefs_available_filter_functions_addrs(void)
: TRACEFS"/available_filter_functions_addrs";
}
-static void gen_kprobe_legacy_event_name(char *buf, size_t buf_sz,
- const char *kfunc_name, size_t offset)
+static void gen_probe_legacy_event_name(char *buf, size_t buf_sz,
+ const char *name, size_t offset)
{
static int index = 0;
int i;
- snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx_%d", getpid(), kfunc_name, offset,
- __sync_fetch_and_add(&index, 1));
+ snprintf(buf, buf_sz, "libbpf_%u_%d_%s_0x%zx", getpid(),
+ __sync_fetch_and_add(&index, 1), name, offset);
- /* sanitize binary_path in the probe name */
+ /* sanitize name in the probe name */
for (i = 0; buf[i]; i++) {
if (!isalnum(buf[i]))
buf[i] = '_';
@@ -10959,14 +11276,13 @@ static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
{
const size_t attr_sz = sizeof(struct perf_event_attr);
struct perf_event_attr attr;
- char errmsg[STRERR_BUFSIZE];
int type, pfd, err;
err = add_kprobe_event_legacy(probe_name, retprobe, kfunc_name, offset);
if (err < 0) {
pr_warn("failed to add legacy kprobe event for '%s+0x%zx': %s\n",
kfunc_name, offset,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
return err;
}
type = determine_kprobe_perf_type_legacy(probe_name, retprobe);
@@ -10974,7 +11290,7 @@ static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
err = type;
pr_warn("failed to determine legacy kprobe event id for '%s+0x%zx': %s\n",
kfunc_name, offset,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
goto err_clean_legacy;
}
@@ -10990,7 +11306,7 @@ static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
if (pfd < 0) {
err = -errno;
pr_warn("legacy kprobe perf_event_open() failed: %s\n",
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
goto err_clean_legacy;
}
return pfd;
@@ -11028,7 +11344,7 @@ static const char *arch_specific_syscall_pfx(void)
#endif
}
-static int probe_kern_syscall_wrapper(void)
+int probe_kern_syscall_wrapper(int token_fd)
{
char syscall_name[64];
const char *ksys_pfx;
@@ -11048,9 +11364,9 @@ static int probe_kern_syscall_wrapper(void)
return pfd >= 0 ? 1 : 0;
} else { /* legacy mode */
- char probe_name[128];
+ char probe_name[MAX_EVENT_NAME_LEN];
- gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name), syscall_name, 0);
+ gen_probe_legacy_event_name(probe_name, sizeof(probe_name), syscall_name, 0);
if (add_kprobe_event_legacy(probe_name, false, syscall_name, 0) < 0)
return 0;
@@ -11066,7 +11382,6 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
{
DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
enum probe_attach_mode attach_mode;
- char errmsg[STRERR_BUFSIZE];
char *legacy_probe = NULL;
struct bpf_link *link;
size_t offset;
@@ -11107,10 +11422,10 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
func_name, offset,
-1 /* pid */, 0 /* ref_ctr_off */);
} else {
- char probe_name[256];
+ char probe_name[MAX_EVENT_NAME_LEN];
- gen_kprobe_legacy_event_name(probe_name, sizeof(probe_name),
- func_name, offset);
+ gen_probe_legacy_event_name(probe_name, sizeof(probe_name),
+ func_name, offset);
legacy_probe = strdup(probe_name);
if (!legacy_probe)
@@ -11124,7 +11439,7 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
pr_warn("prog '%s': failed to create %s '%s+0x%zx' perf event: %s\n",
prog->name, retprobe ? "kretprobe" : "kprobe",
func_name, offset,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
goto err_out;
}
link = bpf_program__attach_perf_event_opts(prog, pfd, &pe_opts);
@@ -11134,7 +11449,7 @@ bpf_program__attach_kprobe_opts(const struct bpf_program *prog,
pr_warn("prog '%s': failed to attach to %s '%s+0x%zx': %s\n",
prog->name, retprobe ? "kretprobe" : "kprobe",
func_name, offset,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
goto err_clean_legacy;
}
if (legacy) {
@@ -11246,9 +11561,33 @@ static int avail_kallsyms_cb(unsigned long long sym_addr, char sym_type,
struct kprobe_multi_resolve *res = data->res;
int err;
- if (!bsearch(&sym_name, data->syms, data->cnt, sizeof(*data->syms), avail_func_cmp))
+ if (!glob_match(sym_name, res->pattern))
return 0;
+ if (!bsearch(&sym_name, data->syms, data->cnt, sizeof(*data->syms), avail_func_cmp)) {
+ /* Some versions of kernel strip out .llvm.<hash> suffix from
+ * function names reported in available_filter_functions, but
+ * don't do so for kallsyms. While this is clearly a kernel
+ * bug (fixed by [0]) we try to accommodate that in libbpf to
+ * make multi-kprobe usability a bit better: if no match is
+ * found, we will strip .llvm. suffix and try one more time.
+ *
+ * [0] fb6a421fb615 ("kallsyms: Match symbols exactly with CONFIG_LTO_CLANG")
+ */
+ char sym_trim[256], *psym_trim = sym_trim, *sym_sfx;
+
+ if (!(sym_sfx = strstr(sym_name, ".llvm.")))
+ return 0;
+
+ /* psym_trim vs sym_trim dance is done to avoid pointer vs array
+ * coercion differences and get proper `const char **` pointer
+ * which avail_func_cmp() expects
+ */
+ snprintf(sym_trim, sizeof(sym_trim), "%.*s", (int)(sym_sfx - sym_name), sym_name);
+ if (!bsearch(&psym_trim, data->syms, data->cnt, sizeof(*data->syms), avail_func_cmp))
+ return 0;
+ }
+
err = libbpf_ensure_mem((void **)&res->addrs, &res->cap, sizeof(*res->addrs), res->cnt + 1);
if (err)
return err;
@@ -11270,7 +11609,7 @@ static int libbpf_available_kallsyms_parse(struct kprobe_multi_resolve *res)
f = fopen(available_functions_file, "re");
if (!f) {
err = -errno;
- pr_warn("failed to open %s: %d\n", available_functions_file, err);
+ pr_warn("failed to open %s: %s\n", available_functions_file, errstr(err));
return err;
}
@@ -11345,7 +11684,7 @@ static int libbpf_available_kprobes_parse(struct kprobe_multi_resolve *res)
f = fopen(available_path, "re");
if (!f) {
err = -errno;
- pr_warn("failed to open %s: %d\n", available_path, err);
+ pr_warn("failed to open %s: %s\n", available_path, errstr(err));
return err;
}
@@ -11389,22 +11728,30 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
struct kprobe_multi_resolve res = {
.pattern = pattern,
};
+ enum bpf_attach_type attach_type;
struct bpf_link *link = NULL;
- char errmsg[STRERR_BUFSIZE];
const unsigned long *addrs;
int err, link_fd, prog_fd;
+ bool retprobe, session, unique_match;
const __u64 *cookies;
const char **syms;
- bool retprobe;
size_t cnt;
if (!OPTS_VALID(opts, bpf_kprobe_multi_opts))
return libbpf_err_ptr(-EINVAL);
+ prog_fd = bpf_program__fd(prog);
+ if (prog_fd < 0) {
+ pr_warn("prog '%s': can't attach BPF program without FD (was it loaded?)\n",
+ prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
syms = OPTS_GET(opts, syms, false);
addrs = OPTS_GET(opts, addrs, false);
cnt = OPTS_GET(opts, cnt, false);
cookies = OPTS_GET(opts, cookies, false);
+ unique_match = OPTS_GET(opts, unique_match, false);
if (!pattern && !addrs && !syms)
return libbpf_err_ptr(-EINVAL);
@@ -11412,6 +11759,8 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
return libbpf_err_ptr(-EINVAL);
if (!pattern && !cnt)
return libbpf_err_ptr(-EINVAL);
+ if (!pattern && unique_match)
+ return libbpf_err_ptr(-EINVAL);
if (addrs && syms)
return libbpf_err_ptr(-EINVAL);
@@ -11422,11 +11771,25 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
err = libbpf_available_kallsyms_parse(&res);
if (err)
goto error;
+
+ if (unique_match && res.cnt != 1) {
+ pr_warn("prog '%s': failed to find a unique match for '%s' (%zu matches)\n",
+ prog->name, pattern, res.cnt);
+ err = -EINVAL;
+ goto error;
+ }
+
addrs = res.addrs;
cnt = res.cnt;
}
retprobe = OPTS_GET(opts, retprobe, false);
+ session = OPTS_GET(opts, session, false);
+
+ if (retprobe && session)
+ return libbpf_err_ptr(-EINVAL);
+
+ attach_type = session ? BPF_TRACE_KPROBE_SESSION : BPF_TRACE_KPROBE_MULTI;
lopts.kprobe_multi.syms = syms;
lopts.kprobe_multi.addrs = addrs;
@@ -11441,12 +11804,11 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
}
link->detach = &bpf_link__detach_fd;
- prog_fd = bpf_program__fd(prog);
- link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, &lopts);
+ link_fd = bpf_link_create(prog_fd, 0, attach_type, &lopts);
if (link_fd < 0) {
err = -errno;
pr_warn("prog '%s': failed to attach: %s\n",
- prog->name, libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ prog->name, errstr(err));
goto error;
}
link->fd = link_fd;
@@ -11539,7 +11901,7 @@ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, stru
n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
if (n < 1) {
- pr_warn("kprobe multi pattern is invalid: %s\n", pattern);
+ pr_warn("kprobe multi pattern is invalid: %s\n", spec);
return -EINVAL;
}
@@ -11548,6 +11910,32 @@ static int attach_kprobe_multi(const struct bpf_program *prog, long cookie, stru
return libbpf_get_error(*link);
}
+static int attach_kprobe_session(const struct bpf_program *prog, long cookie,
+ struct bpf_link **link)
+{
+ LIBBPF_OPTS(bpf_kprobe_multi_opts, opts, .session = true);
+ const char *spec;
+ char *pattern;
+ int n;
+
+ *link = NULL;
+
+ /* no auto-attach for SEC("kprobe.session") */
+ if (strcmp(prog->sec_name, "kprobe.session") == 0)
+ return 0;
+
+ spec = prog->sec_name + sizeof("kprobe.session/") - 1;
+ n = sscanf(spec, "%m[a-zA-Z0-9_.*?]", &pattern);
+ if (n < 1) {
+ pr_warn("kprobe session pattern is invalid: %s\n", spec);
+ return -EINVAL;
+ }
+
+ *link = bpf_program__attach_kprobe_multi_opts(prog, pattern, &opts);
+ free(pattern);
+ return *link ? 0 : -errno;
+}
+
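attach_kprobe_session() wires up auto-attach for the new SEC("kprobe.session/<pattern>") sections, where one program runs on both entry and return of every matched function. A BPF-side sketch, assuming the bpf_session_is_return() kfunc declaration is available (as used in the selftests):

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char LICENSE[] SEC("license") = "GPL";

	extern bool bpf_session_is_return(void) __ksym;

	SEC("kprobe.session/do_unlinkat")
	int handle_unlinkat(struct pt_regs *ctx)
	{
		if (bpf_session_is_return())
			bpf_printk("do_unlinkat returned");
		else
			bpf_printk("do_unlinkat entered");
		return 0;	/* non-zero on entry suppresses the return invocation */
	}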
static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
char *probe_type = NULL, *binary_path = NULL, *func_name = NULL;
@@ -11564,7 +11952,9 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
ret = 0;
break;
case 3:
- opts.retprobe = strcmp(probe_type, "uretprobe.multi") == 0;
+ opts.session = str_has_pfx(probe_type, "uprobe.session");
+ opts.retprobe = str_has_pfx(probe_type, "uretprobe.multi");
+
*link = bpf_program__attach_uprobe_multi(prog, -1, binary_path, func_name, &opts);
ret = libbpf_get_error(*link);
break;
@@ -11579,20 +11969,6 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
return ret;
}
-static void gen_uprobe_legacy_event_name(char *buf, size_t buf_sz,
- const char *binary_path, uint64_t offset)
-{
- int i;
-
- snprintf(buf, buf_sz, "libbpf_%u_%s_0x%zx", getpid(), binary_path, (size_t)offset);
-
- /* sanitize binary_path in the probe name */
- for (i = 0; buf[i]; i++) {
- if (!isalnum(buf[i]))
- buf[i] = '_';
- }
-}
-
static inline int add_uprobe_event_legacy(const char *probe_name, bool retprobe,
const char *binary_path, size_t offset)
{
@@ -11627,15 +12003,15 @@ static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
err = add_uprobe_event_legacy(probe_name, retprobe, binary_path, offset);
if (err < 0) {
- pr_warn("failed to add legacy uprobe event for %s:0x%zx: %d\n",
- binary_path, (size_t)offset, err);
+ pr_warn("failed to add legacy uprobe event for %s:0x%zx: %s\n",
+ binary_path, (size_t)offset, errstr(err));
return err;
}
type = determine_uprobe_perf_type_legacy(probe_name, retprobe);
if (type < 0) {
err = type;
- pr_warn("failed to determine legacy uprobe event id for %s:0x%zx: %d\n",
- binary_path, offset, err);
+ pr_warn("failed to determine legacy uprobe event id for %s:0x%zx: %s\n",
+ binary_path, offset, errstr(err));
goto err_clean_legacy;
}
@@ -11650,7 +12026,7 @@ static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
-1 /* group_fd */, PERF_FLAG_FD_CLOEXEC);
if (pfd < 0) {
err = -errno;
- pr_warn("legacy uprobe perf_event_open() failed: %d\n", err);
+ pr_warn("legacy uprobe perf_event_open() failed: %s\n", errstr(err));
goto err_clean_legacy;
}
return pfd;
@@ -11813,10 +12189,11 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
const unsigned long *ref_ctr_offsets = NULL, *offsets = NULL;
LIBBPF_OPTS(bpf_link_create_opts, lopts);
unsigned long *resolved_offsets = NULL;
+ enum bpf_attach_type attach_type;
int err = 0, link_fd, prog_fd;
struct bpf_link *link = NULL;
- char errmsg[STRERR_BUFSIZE];
char full_path[PATH_MAX];
+ bool retprobe, session;
const __u64 *cookies;
const char **syms;
size_t cnt;
@@ -11824,11 +12201,20 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
if (!OPTS_VALID(opts, bpf_uprobe_multi_opts))
return libbpf_err_ptr(-EINVAL);
+ prog_fd = bpf_program__fd(prog);
+ if (prog_fd < 0) {
+ pr_warn("prog '%s': can't attach BPF program without FD (was it loaded?)\n",
+ prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
syms = OPTS_GET(opts, syms, NULL);
offsets = OPTS_GET(opts, offsets, NULL);
ref_ctr_offsets = OPTS_GET(opts, ref_ctr_offsets, NULL);
cookies = OPTS_GET(opts, cookies, NULL);
cnt = OPTS_GET(opts, cnt, 0);
+ retprobe = OPTS_GET(opts, retprobe, false);
+ session = OPTS_GET(opts, session, false);
/*
* User can specify 2 mutually exclusive set of inputs:
@@ -11857,12 +12243,15 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
return libbpf_err_ptr(-EINVAL);
}
+ if (retprobe && session)
+ return libbpf_err_ptr(-EINVAL);
+
if (func_pattern) {
if (!strchr(path, '/')) {
err = resolve_full_path(path, full_path, sizeof(full_path));
if (err) {
- pr_warn("prog '%s': failed to resolve full path for '%s': %d\n",
- prog->name, path, err);
+ pr_warn("prog '%s': failed to resolve full path for '%s': %s\n",
+ prog->name, path, errstr(err));
return libbpf_err_ptr(err);
}
path = full_path;
@@ -11880,12 +12269,14 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
offsets = resolved_offsets;
}
+ attach_type = session ? BPF_TRACE_UPROBE_SESSION : BPF_TRACE_UPROBE_MULTI;
+
lopts.uprobe_multi.path = path;
lopts.uprobe_multi.offsets = offsets;
lopts.uprobe_multi.ref_ctr_offsets = ref_ctr_offsets;
lopts.uprobe_multi.cookies = cookies;
lopts.uprobe_multi.cnt = cnt;
- lopts.uprobe_multi.flags = OPTS_GET(opts, retprobe, false) ? BPF_F_UPROBE_MULTI_RETURN : 0;
+ lopts.uprobe_multi.flags = retprobe ? BPF_F_UPROBE_MULTI_RETURN : 0;
if (pid == 0)
pid = getpid();
@@ -11899,12 +12290,11 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
}
link->detach = &bpf_link__detach_fd;
- prog_fd = bpf_program__fd(prog);
- link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &lopts);
+ link_fd = bpf_link_create(prog_fd, 0, attach_type, &lopts);
if (link_fd < 0) {
err = -errno;
pr_warn("prog '%s': failed to attach multi-uprobe: %s\n",
- prog->name, libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ prog->name, errstr(err));
goto error;
}
link->fd = link_fd;
@@ -11923,7 +12313,7 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
const struct bpf_uprobe_opts *opts)
{
const char *archive_path = NULL, *archive_sep = NULL;
- char errmsg[STRERR_BUFSIZE], *legacy_probe = NULL;
+ char *legacy_probe = NULL;
DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
enum probe_attach_mode attach_mode;
char full_path[PATH_MAX];
@@ -11955,8 +12345,8 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
} else if (!strchr(binary_path, '/')) {
err = resolve_full_path(binary_path, full_path, sizeof(full_path));
if (err) {
- pr_warn("prog '%s': failed to resolve full path for '%s': %d\n",
- prog->name, binary_path, err);
+ pr_warn("prog '%s': failed to resolve full path for '%s': %s\n",
+ prog->name, binary_path, errstr(err));
return libbpf_err_ptr(err);
}
binary_path = full_path;
@@ -12002,13 +12392,14 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
pfd = perf_event_open_probe(true /* uprobe */, retprobe, binary_path,
func_offset, pid, ref_ctr_off);
} else {
- char probe_name[PATH_MAX + 64];
+ char probe_name[MAX_EVENT_NAME_LEN];
if (ref_ctr_off)
return libbpf_err_ptr(-EINVAL);
- gen_uprobe_legacy_event_name(probe_name, sizeof(probe_name),
- binary_path, func_offset);
+ gen_probe_legacy_event_name(probe_name, sizeof(probe_name),
+ strrchr(binary_path, '/') ? : binary_path,
+ func_offset);
legacy_probe = strdup(probe_name);
if (!legacy_probe)
@@ -12022,7 +12413,7 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
pr_warn("prog '%s': failed to create %s '%s:0x%zx' perf event: %s\n",
prog->name, retprobe ? "uretprobe" : "uprobe",
binary_path, func_offset,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
goto err_out;
}
@@ -12033,7 +12424,7 @@ bpf_program__attach_uprobe_opts(const struct bpf_program *prog, pid_t pid,
pr_warn("prog '%s': failed to attach to %s '%s:0x%zx': %s\n",
prog->name, retprobe ? "uretprobe" : "uprobe",
binary_path, func_offset,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
goto err_clean_legacy;
}
if (legacy) {
@@ -12143,7 +12534,7 @@ struct bpf_link *bpf_program__attach_usdt(const struct bpf_program *prog,
return libbpf_err_ptr(-EINVAL);
if (bpf_program__fd(prog) < 0) {
- pr_warn("prog '%s': can't attach BPF program w/o FD (did you load it?)\n",
+ pr_warn("prog '%s': can't attach BPF program without FD (was it loaded?)\n",
prog->name);
return libbpf_err_ptr(-EINVAL);
}
@@ -12154,8 +12545,8 @@ struct bpf_link *bpf_program__attach_usdt(const struct bpf_program *prog,
if (!strchr(binary_path, '/')) {
err = resolve_full_path(binary_path, resolved_path, sizeof(resolved_path));
if (err) {
- pr_warn("prog '%s': failed to resolve full path for '%s': %d\n",
- prog->name, binary_path, err);
+ pr_warn("prog '%s': failed to resolve full path for '%s': %s\n",
+ prog->name, binary_path, errstr(err));
return libbpf_err_ptr(err);
}
binary_path = resolved_path;
@@ -12233,14 +12624,13 @@ static int perf_event_open_tracepoint(const char *tp_category,
{
const size_t attr_sz = sizeof(struct perf_event_attr);
struct perf_event_attr attr;
- char errmsg[STRERR_BUFSIZE];
int tp_id, pfd, err;
tp_id = determine_tracepoint_id(tp_category, tp_name);
if (tp_id < 0) {
pr_warn("failed to determine tracepoint '%s/%s' perf event ID: %s\n",
tp_category, tp_name,
- libbpf_strerror_r(tp_id, errmsg, sizeof(errmsg)));
+ errstr(tp_id));
return tp_id;
}
@@ -12255,7 +12645,7 @@ static int perf_event_open_tracepoint(const char *tp_category,
err = -errno;
pr_warn("tracepoint '%s/%s' perf_event_open() failed: %s\n",
tp_category, tp_name,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
return err;
}
return pfd;
@@ -12267,7 +12657,6 @@ struct bpf_link *bpf_program__attach_tracepoint_opts(const struct bpf_program *p
const struct bpf_tracepoint_opts *opts)
{
DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
- char errmsg[STRERR_BUFSIZE];
struct bpf_link *link;
int pfd, err;
@@ -12280,7 +12669,7 @@ struct bpf_link *bpf_program__attach_tracepoint_opts(const struct bpf_program *p
if (pfd < 0) {
pr_warn("prog '%s': failed to create tracepoint '%s/%s' perf event: %s\n",
prog->name, tp_category, tp_name,
- libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+ errstr(pfd));
return libbpf_err_ptr(pfd);
}
link = bpf_program__attach_perf_event_opts(prog, pfd, &pe_opts);
@@ -12289,7 +12678,7 @@ struct bpf_link *bpf_program__attach_tracepoint_opts(const struct bpf_program *p
close(pfd);
pr_warn("prog '%s': failed to attach to tracepoint '%s/%s': %s\n",
prog->name, tp_category, tp_name,
- libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
+ errstr(err));
return libbpf_err_ptr(err);
}
return link;
@@ -12334,13 +12723,18 @@ static int attach_tp(const struct bpf_program *prog, long cookie, struct bpf_lin
return libbpf_get_error(*link);
}
-struct bpf_link *bpf_program__attach_raw_tracepoint(const struct bpf_program *prog,
- const char *tp_name)
+struct bpf_link *
+bpf_program__attach_raw_tracepoint_opts(const struct bpf_program *prog,
+ const char *tp_name,
+ struct bpf_raw_tracepoint_opts *opts)
{
- char errmsg[STRERR_BUFSIZE];
+ LIBBPF_OPTS(bpf_raw_tp_opts, raw_opts);
struct bpf_link *link;
int prog_fd, pfd;
+ if (!OPTS_VALID(opts, bpf_raw_tracepoint_opts))
+ return libbpf_err_ptr(-EINVAL);
+
prog_fd = bpf_program__fd(prog);
if (prog_fd < 0) {
pr_warn("prog '%s': can't attach before loaded\n", prog->name);
@@ -12352,18 +12746,26 @@ struct bpf_link *bpf_program__attach_raw_tracepoint(const struct bpf_program *pr
return libbpf_err_ptr(-ENOMEM);
link->detach = &bpf_link__detach_fd;
- pfd = bpf_raw_tracepoint_open(tp_name, prog_fd);
+ raw_opts.tp_name = tp_name;
+ raw_opts.cookie = OPTS_GET(opts, cookie, 0);
+ pfd = bpf_raw_tracepoint_open_opts(prog_fd, &raw_opts);
if (pfd < 0) {
pfd = -errno;
free(link);
pr_warn("prog '%s': failed to attach to raw tracepoint '%s': %s\n",
- prog->name, tp_name, libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+ prog->name, tp_name, errstr(pfd));
return libbpf_err_ptr(pfd);
}
link->fd = pfd;
return link;
}
+struct bpf_link *bpf_program__attach_raw_tracepoint(const struct bpf_program *prog,
+ const char *tp_name)
+{
+ return bpf_program__attach_raw_tracepoint_opts(prog, tp_name, NULL);
+}
+
static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
static const char *const prefixes[] = {
@@ -12410,7 +12812,6 @@ static struct bpf_link *bpf_program__attach_btf_id(const struct bpf_program *pro
const struct bpf_trace_opts *opts)
{
LIBBPF_OPTS(bpf_link_create_opts, link_opts);
- char errmsg[STRERR_BUFSIZE];
struct bpf_link *link;
int prog_fd, pfd;
@@ -12435,7 +12836,7 @@ static struct bpf_link *bpf_program__attach_btf_id(const struct bpf_program *pro
pfd = -errno;
free(link);
pr_warn("prog '%s': failed to attach: %s\n",
- prog->name, libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+ prog->name, errstr(pfd));
return libbpf_err_ptr(pfd);
}
link->fd = pfd;
@@ -12476,7 +12877,6 @@ bpf_program_attach_fd(const struct bpf_program *prog,
const struct bpf_link_create_opts *opts)
{
enum bpf_attach_type attach_type;
- char errmsg[STRERR_BUFSIZE];
struct bpf_link *link;
int prog_fd, link_fd;
@@ -12498,7 +12898,7 @@ bpf_program_attach_fd(const struct bpf_program *prog,
free(link);
pr_warn("prog '%s': failed to attach to %s: %s\n",
prog->name, target_name,
- libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg)));
+ errstr(link_fd));
return libbpf_err_ptr(link_fd);
}
link->fd = link_fd;
@@ -12517,6 +12917,12 @@ bpf_program__attach_netns(const struct bpf_program *prog, int netns_fd)
return bpf_program_attach_fd(prog, netns_fd, "netns", NULL);
}
+struct bpf_link *
+bpf_program__attach_sockmap(const struct bpf_program *prog, int map_fd)
+{
+ return bpf_program_attach_fd(prog, map_fd, "sockmap", NULL);
+}
+
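bpf_program__attach_sockmap() is a thin wrapper over bpf_program_attach_fd() that creates a BPF link for a program against a sockmap/sockhash FD, as a link-based alternative to the legacy bpf_prog_attach() path. A minimal sketch (map and program names are illustrative):

	#include <bpf/libbpf.h>

	static struct bpf_link *attach_verdict(struct bpf_object *obj)
	{
		struct bpf_map *sock_map = bpf_object__find_map_by_name(obj, "sock_map");
		struct bpf_program *prog = bpf_object__find_program_by_name(obj, "prog_skb_verdict");

		if (!sock_map || !prog)
			return NULL;
		return bpf_program__attach_sockmap(prog, bpf_map__fd(sock_map));
	}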
struct bpf_link *bpf_program__attach_xdp(const struct bpf_program *prog, int ifindex)
{
/* target_fd/target_ifindex use the same field in LINK_CREATE */
@@ -12524,6 +12930,34 @@ struct bpf_link *bpf_program__attach_xdp(const struct bpf_program *prog, int ifi
}
struct bpf_link *
+bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
+ const struct bpf_cgroup_opts *opts)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
+ __u32 relative_id;
+ int relative_fd;
+
+ if (!OPTS_VALID(opts, bpf_cgroup_opts))
+ return libbpf_err_ptr(-EINVAL);
+
+ relative_id = OPTS_GET(opts, relative_id, 0);
+ relative_fd = OPTS_GET(opts, relative_fd, 0);
+
+ if (relative_fd && relative_id) {
+ pr_warn("prog '%s': relative_fd and relative_id cannot be set at the same time\n",
+ prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
+ link_create_opts.cgroup.expected_revision = OPTS_GET(opts, expected_revision, 0);
+ link_create_opts.cgroup.relative_fd = relative_fd;
+ link_create_opts.cgroup.relative_id = relative_id;
+ link_create_opts.flags = OPTS_GET(opts, flags, 0);
+
+ return bpf_program_attach_fd(prog, cgroup_fd, "cgroup", &link_create_opts);
+}
+
+struct bpf_link *
bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
const struct bpf_tcx_opts *opts)
{
@@ -12605,7 +13039,7 @@ struct bpf_link *bpf_program__attach_freplace(const struct bpf_program *prog,
}
if (prog->type != BPF_PROG_TYPE_EXT) {
- pr_warn("prog '%s': only BPF_PROG_TYPE_EXT can attach as freplace",
+ pr_warn("prog '%s': only BPF_PROG_TYPE_EXT can attach as freplace\n",
prog->name);
return libbpf_err_ptr(-EINVAL);
}
@@ -12613,7 +13047,7 @@ struct bpf_link *bpf_program__attach_freplace(const struct bpf_program *prog,
if (target_fd) {
LIBBPF_OPTS(bpf_link_create_opts, target_opts);
- btf_id = libbpf_find_prog_btf_id(attach_func_name, target_fd);
+ btf_id = libbpf_find_prog_btf_id(attach_func_name, target_fd, prog->obj->token_fd);
if (btf_id < 0)
return libbpf_err_ptr(btf_id);
@@ -12634,7 +13068,6 @@ bpf_program__attach_iter(const struct bpf_program *prog,
const struct bpf_iter_attach_opts *opts)
{
DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
- char errmsg[STRERR_BUFSIZE];
struct bpf_link *link;
int prog_fd, link_fd;
__u32 target_fd = 0;
@@ -12662,7 +13095,7 @@ bpf_program__attach_iter(const struct bpf_program *prog,
link_fd = -errno;
free(link);
pr_warn("prog '%s': failed to attach to iterator: %s\n",
- prog->name, libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg)));
+ prog->name, errstr(link_fd));
return libbpf_err_ptr(link_fd);
}
link->fd = link_fd;
@@ -12704,12 +13137,10 @@ struct bpf_link *bpf_program__attach_netfilter(const struct bpf_program *prog,
link_fd = bpf_link_create(prog_fd, 0, BPF_NETFILTER, &lopts);
if (link_fd < 0) {
- char errmsg[STRERR_BUFSIZE];
-
link_fd = -errno;
free(link);
pr_warn("prog '%s': failed to attach to netfilter: %s\n",
- prog->name, libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg)));
+ prog->name, errstr(link_fd));
return libbpf_err_ptr(link_fd);
}
link->fd = link_fd;
@@ -12725,6 +13156,12 @@ struct bpf_link *bpf_program__attach(const struct bpf_program *prog)
if (!prog->sec_def || !prog->sec_def->prog_attach_fn)
return libbpf_err_ptr(-EOPNOTSUPP);
+ if (bpf_program__fd(prog) < 0) {
+ pr_warn("prog '%s': can't attach BPF program without FD (was it loaded?)\n",
+ prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
err = prog->sec_def->prog_attach_fn(prog, prog->sec_def->cookie, &link);
if (err)
return libbpf_err_ptr(err);
@@ -12765,8 +13202,15 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
__u32 zero = 0;
int err, fd;
- if (!bpf_map__is_struct_ops(map) || map->fd == -1)
+ if (!bpf_map__is_struct_ops(map)) {
+ pr_warn("map '%s': can't attach non-struct_ops map\n", map->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
+ if (map->fd < 0) {
+ pr_warn("map '%s': can't attach BPF map without FD (was it created?)\n", map->name);
return libbpf_err_ptr(-EINVAL);
+ }
link = calloc(1, sizeof(*link));
if (!link)
@@ -12814,13 +13258,18 @@ int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map)
__u32 zero = 0;
int err;
- if (!bpf_map__is_struct_ops(map) || !map_is_created(map))
- return -EINVAL;
+ if (!bpf_map__is_struct_ops(map))
+ return libbpf_err(-EINVAL);
+
+ if (map->fd < 0) {
+ pr_warn("map '%s': can't use BPF map without FD (was it created?)\n", map->name);
+ return libbpf_err(-EINVAL);
+ }
st_ops_link = container_of(link, struct bpf_link_struct_ops, link);
/* Ensure the type of a link is correct */
if (st_ops_link->map_fd < 0)
- return -EINVAL;
+ return libbpf_err(-EINVAL);
err = bpf_map_update_elem(map->fd, &zero, map->st_ops->kern_vdata, 0);
/* It can be EBUSY if the map has been used to create or
@@ -12976,7 +13425,6 @@ perf_buffer__open_cpu_buf(struct perf_buffer *pb, struct perf_event_attr *attr,
int cpu, int map_key)
{
struct perf_cpu_buf *cpu_buf;
- char msg[STRERR_BUFSIZE];
int err;
cpu_buf = calloc(1, sizeof(*cpu_buf));
@@ -12992,7 +13440,7 @@ perf_buffer__open_cpu_buf(struct perf_buffer *pb, struct perf_event_attr *attr,
if (cpu_buf->fd < 0) {
err = -errno;
pr_warn("failed to open perf buffer event on cpu #%d: %s\n",
- cpu, libbpf_strerror_r(err, msg, sizeof(msg)));
+ cpu, errstr(err));
goto error;
}
@@ -13003,14 +13451,14 @@ perf_buffer__open_cpu_buf(struct perf_buffer *pb, struct perf_event_attr *attr,
cpu_buf->base = NULL;
err = -errno;
pr_warn("failed to mmap perf buffer on cpu #%d: %s\n",
- cpu, libbpf_strerror_r(err, msg, sizeof(msg)));
+ cpu, errstr(err));
goto error;
}
if (ioctl(cpu_buf->fd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
err = -errno;
pr_warn("failed to enable perf buffer event on cpu #%d: %s\n",
- cpu, libbpf_strerror_r(err, msg, sizeof(msg)));
+ cpu, errstr(err));
goto error;
}
@@ -13047,7 +13495,6 @@ struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt,
attr.config = PERF_COUNT_SW_BPF_OUTPUT;
attr.type = PERF_TYPE_SOFTWARE;
attr.sample_type = PERF_SAMPLE_RAW;
- attr.sample_period = sample_period;
attr.wakeup_events = sample_period;
p.attr = &attr;
@@ -13086,7 +13533,6 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
{
const char *online_cpus_file = "/sys/devices/system/cpu/online";
struct bpf_map_info map;
- char msg[STRERR_BUFSIZE];
struct perf_buffer *pb;
bool *online = NULL;
__u32 map_info_len;
@@ -13109,7 +13555,7 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
*/
if (err != -EINVAL) {
pr_warn("failed to get map info for map FD %d: %s\n",
- map_fd, libbpf_strerror_r(err, msg, sizeof(msg)));
+ map_fd, errstr(err));
return ERR_PTR(err);
}
pr_debug("failed to get map info for FD %d; API not supported? Ignoring...\n",
@@ -13139,7 +13585,7 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
if (pb->epoll_fd < 0) {
err = -errno;
pr_warn("failed to create epoll instance: %s\n",
- libbpf_strerror_r(err, msg, sizeof(msg)));
+ errstr(err));
goto error;
}
@@ -13170,7 +13616,7 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
err = parse_cpu_mask_file(online_cpus_file, &online, &n);
if (err) {
- pr_warn("failed to get online CPU mask: %d\n", err);
+ pr_warn("failed to get online CPU mask: %s\n", errstr(err));
goto error;
}
@@ -13201,7 +13647,7 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
err = -errno;
pr_warn("failed to set cpu #%d, key %d -> perf FD %d: %s\n",
cpu, map_key, cpu_buf->fd,
- libbpf_strerror_r(err, msg, sizeof(msg)));
+ errstr(err));
goto error;
}
@@ -13212,7 +13658,7 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
err = -errno;
pr_warn("failed to epoll_ctl cpu #%d perf FD %d: %s\n",
cpu, cpu_buf->fd,
- libbpf_strerror_r(err, msg, sizeof(msg)));
+ errstr(err));
goto error;
}
j++;
@@ -13307,7 +13753,7 @@ int perf_buffer__poll(struct perf_buffer *pb, int timeout_ms)
err = perf_buffer__process_records(pb, cpu_buf);
if (err) {
- pr_warn("error while processing records: %d\n", err);
+ pr_warn("error while processing records: %s\n", errstr(err));
return libbpf_err(err);
}
}
@@ -13391,7 +13837,8 @@ int perf_buffer__consume(struct perf_buffer *pb)
err = perf_buffer__process_records(pb, cpu_buf);
if (err) {
- pr_warn("perf_buffer: failed to process records in buffer #%d: %d\n", i, err);
+ pr_warn("perf_buffer: failed to process records in buffer #%d: %s\n",
+ i, errstr(err));
return libbpf_err(err);
}
}
@@ -13407,7 +13854,7 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
if (!prog || attach_prog_fd < 0)
return libbpf_err(-EINVAL);
- if (prog->obj->loaded)
+ if (prog->obj->state >= OBJ_LOADED)
return libbpf_err(-EINVAL);
if (attach_prog_fd && !attach_func_name) {
@@ -13420,7 +13867,7 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
if (attach_prog_fd) {
btf_id = libbpf_find_prog_btf_id(attach_func_name,
- attach_prog_fd);
+ attach_prog_fd, prog->obj->token_fd);
if (btf_id < 0)
return libbpf_err(btf_id);
} else {
@@ -13502,14 +13949,14 @@ int parse_cpu_mask_file(const char *fcpu, bool **mask, int *mask_sz)
fd = open(fcpu, O_RDONLY | O_CLOEXEC);
if (fd < 0) {
err = -errno;
- pr_warn("Failed to open cpu mask file %s: %d\n", fcpu, err);
+ pr_warn("Failed to open cpu mask file %s: %s\n", fcpu, errstr(err));
return err;
}
len = read(fd, buf, sizeof(buf));
close(fd);
if (len <= 0) {
err = len ? -errno : -EINVAL;
- pr_warn("Failed to read cpu mask from %s: %d\n", fcpu, err);
+ pr_warn("Failed to read cpu mask from %s: %s\n", fcpu, errstr(err));
return err;
}
if (len >= sizeof(buf)) {
@@ -13549,14 +13996,15 @@ int libbpf_num_possible_cpus(void)
static int populate_skeleton_maps(const struct bpf_object *obj,
struct bpf_map_skeleton *maps,
- size_t map_cnt)
+ size_t map_cnt, size_t map_skel_sz)
{
int i;
for (i = 0; i < map_cnt; i++) {
- struct bpf_map **map = maps[i].map;
- const char *name = maps[i].name;
- void **mmaped = maps[i].mmaped;
+ struct bpf_map_skeleton *map_skel = (void *)maps + i * map_skel_sz;
+ struct bpf_map **map = map_skel->map;
+ const char *name = map_skel->name;
+ void **mmaped = map_skel->mmaped;
*map = bpf_object__find_map_by_name(obj, name);
if (!*map) {
@@ -13573,13 +14021,14 @@ static int populate_skeleton_maps(const struct bpf_object *obj,
static int populate_skeleton_progs(const struct bpf_object *obj,
struct bpf_prog_skeleton *progs,
- size_t prog_cnt)
+ size_t prog_cnt, size_t prog_skel_sz)
{
int i;
for (i = 0; i < prog_cnt; i++) {
- struct bpf_program **prog = progs[i].prog;
- const char *name = progs[i].name;
+ struct bpf_prog_skeleton *prog_skel = (void *)progs + i * prog_skel_sz;
+ struct bpf_program **prog = prog_skel->prog;
+ const char *name = prog_skel->name;
*prog = bpf_object__find_program_by_name(obj, name);
if (!*prog) {
@@ -13593,42 +14042,27 @@ static int populate_skeleton_progs(const struct bpf_object *obj,
int bpf_object__open_skeleton(struct bpf_object_skeleton *s,
const struct bpf_object_open_opts *opts)
{
- DECLARE_LIBBPF_OPTS(bpf_object_open_opts, skel_opts,
- .object_name = s->name,
- );
struct bpf_object *obj;
int err;
- /* Attempt to preserve opts->object_name, unless overriden by user
- * explicitly. Overwriting object name for skeletons is discouraged,
- * as it breaks global data maps, because they contain object name
- * prefix as their own map name prefix. When skeleton is generated,
- * bpftool is making an assumption that this name will stay the same.
- */
- if (opts) {
- memcpy(&skel_opts, opts, sizeof(*opts));
- if (!opts->object_name)
- skel_opts.object_name = s->name;
- }
-
- obj = bpf_object__open_mem(s->data, s->data_sz, &skel_opts);
- err = libbpf_get_error(obj);
- if (err) {
- pr_warn("failed to initialize skeleton BPF object '%s': %d\n",
- s->name, err);
+ obj = bpf_object_open(NULL, s->data, s->data_sz, s->name, opts);
+ if (IS_ERR(obj)) {
+ err = PTR_ERR(obj);
+ pr_warn("failed to initialize skeleton BPF object '%s': %s\n",
+ s->name, errstr(err));
return libbpf_err(err);
}
*s->obj = obj;
- err = populate_skeleton_maps(obj, s->maps, s->map_cnt);
+ err = populate_skeleton_maps(obj, s->maps, s->map_cnt, s->map_skel_sz);
if (err) {
- pr_warn("failed to populate skeleton maps for '%s': %d\n", s->name, err);
+ pr_warn("failed to populate skeleton maps for '%s': %s\n", s->name, errstr(err));
return libbpf_err(err);
}
- err = populate_skeleton_progs(obj, s->progs, s->prog_cnt);
+ err = populate_skeleton_progs(obj, s->progs, s->prog_cnt, s->prog_skel_sz);
if (err) {
- pr_warn("failed to populate skeleton progs for '%s': %d\n", s->name, err);
+ pr_warn("failed to populate skeleton progs for '%s': %s\n", s->name, errstr(err));
return libbpf_err(err);
}
@@ -13656,26 +14090,26 @@ int bpf_object__open_subskeleton(struct bpf_object_subskeleton *s)
return libbpf_err(-errno);
}
- err = populate_skeleton_maps(s->obj, s->maps, s->map_cnt);
+ err = populate_skeleton_maps(s->obj, s->maps, s->map_cnt, s->map_skel_sz);
if (err) {
- pr_warn("failed to populate subskeleton maps: %d\n", err);
+ pr_warn("failed to populate subskeleton maps: %s\n", errstr(err));
return libbpf_err(err);
}
- err = populate_skeleton_progs(s->obj, s->progs, s->prog_cnt);
+ err = populate_skeleton_progs(s->obj, s->progs, s->prog_cnt, s->prog_skel_sz);
if (err) {
- pr_warn("failed to populate subskeleton maps: %d\n", err);
+ pr_warn("failed to populate subskeleton maps: %s\n", errstr(err));
return libbpf_err(err);
}
for (var_idx = 0; var_idx < s->var_cnt; var_idx++) {
- var_skel = &s->vars[var_idx];
+ var_skel = (void *)s->vars + var_idx * s->var_skel_sz;
map = *var_skel->map;
map_type_id = bpf_map__btf_value_type_id(map);
map_type = btf__type_by_id(btf, map_type_id);
if (!btf_is_datasec(map_type)) {
- pr_warn("type for map '%1$s' is not a datasec: %2$s",
+ pr_warn("type for map '%1$s' is not a datasec: %2$s\n",
bpf_map__name(map),
__btf_kind_str(btf_kind(map_type)));
return libbpf_err(-EINVAL);
@@ -13711,47 +14145,18 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s)
err = bpf_object__load(*s->obj);
if (err) {
- pr_warn("failed to load BPF skeleton '%s': %d\n", s->name, err);
+ pr_warn("failed to load BPF skeleton '%s': %s\n", s->name, errstr(err));
return libbpf_err(err);
}
for (i = 0; i < s->map_cnt; i++) {
- struct bpf_map *map = *s->maps[i].map;
- size_t mmap_sz = bpf_map_mmap_sz(map->def.value_size, map->def.max_entries);
- int prot, map_fd = map->fd;
- void **mmaped = s->maps[i].mmaped;
+ struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz;
+ struct bpf_map *map = *map_skel->map;
- if (!mmaped)
+ if (!map_skel->mmaped)
continue;
- if (!(map->def.map_flags & BPF_F_MMAPABLE)) {
- *mmaped = NULL;
- continue;
- }
-
- if (map->def.map_flags & BPF_F_RDONLY_PROG)
- prot = PROT_READ;
- else
- prot = PROT_READ | PROT_WRITE;
-
- /* Remap anonymous mmap()-ed "map initialization image" as
- * a BPF map-backed mmap()-ed memory, but preserving the same
- * memory address. This will cause kernel to change process'
- * page table to point to a different piece of kernel memory,
- * but from userspace point of view memory address (and its
- * contents, being identical at this point) will stay the
- * same. This mapping will be released by bpf_object__close()
- * as per normal clean up procedure, so we don't need to worry
- * about it from skeleton's clean up perspective.
- */
- *mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map_fd, 0);
- if (*mmaped == MAP_FAILED) {
- err = -errno;
- *mmaped = NULL;
- pr_warn("failed to re-mmap() map '%s': %d\n",
- bpf_map__name(map), err);
- return libbpf_err(err);
- }
+ *map_skel->mmaped = map->mmaped;
}
return 0;
@@ -13762,8 +14167,9 @@ int bpf_object__attach_skeleton(struct bpf_object_skeleton *s)
int i, err;
for (i = 0; i < s->prog_cnt; i++) {
- struct bpf_program *prog = *s->progs[i].prog;
- struct bpf_link **link = s->progs[i].link;
+ struct bpf_prog_skeleton *prog_skel = (void *)s->progs + i * s->prog_skel_sz;
+ struct bpf_program *prog = *prog_skel->prog;
+ struct bpf_link **link = prog_skel->link;
if (!prog->autoload || !prog->autoattach)
continue;
@@ -13778,8 +14184,8 @@ int bpf_object__attach_skeleton(struct bpf_object_skeleton *s)
err = prog->sec_def->prog_attach_fn(prog, prog->sec_def->cookie, link);
if (err) {
- pr_warn("prog '%s': failed to auto-attach: %d\n",
- bpf_program__name(prog), err);
+ pr_warn("prog '%s': failed to auto-attach: %s\n",
+ bpf_program__name(prog), errstr(err));
return libbpf_err(err);
}
@@ -13795,6 +14201,45 @@ int bpf_object__attach_skeleton(struct bpf_object_skeleton *s)
*/
}
+
+ for (i = 0; i < s->map_cnt; i++) {
+ struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz;
+ struct bpf_map *map = *map_skel->map;
+ struct bpf_link **link;
+
+ if (!map->autocreate || !map->autoattach)
+ continue;
+
+ /* only struct_ops maps can be attached */
+ if (!bpf_map__is_struct_ops(map))
+ continue;
+
+ /* skeleton is created with earlier version of bpftool, notify user */
+ if (s->map_skel_sz < offsetofend(struct bpf_map_skeleton, link)) {
+ pr_warn("map '%s': BPF skeleton version is old, skipping map auto-attachment...\n",
+ bpf_map__name(map));
+ continue;
+ }
+
+ link = map_skel->link;
+ if (!link) {
+ pr_warn("map '%s': BPF map skeleton link is uninitialized\n",
+ bpf_map__name(map));
+ continue;
+ }
+
+ if (*link)
+ continue;
+
+ *link = bpf_map__attach_struct_ops(map);
+ if (!*link) {
+ err = -errno;
+ pr_warn("map '%s': failed to auto-attach: %s\n",
+ bpf_map__name(map), errstr(err));
+ return libbpf_err(err);
+ }
+ }
+
return 0;
}
@@ -13803,11 +14248,25 @@ void bpf_object__detach_skeleton(struct bpf_object_skeleton *s)
int i;
for (i = 0; i < s->prog_cnt; i++) {
- struct bpf_link **link = s->progs[i].link;
+ struct bpf_prog_skeleton *prog_skel = (void *)s->progs + i * s->prog_skel_sz;
+ struct bpf_link **link = prog_skel->link;
bpf_link__destroy(*link);
*link = NULL;
}
+
+ if (s->map_skel_sz < sizeof(struct bpf_map_skeleton))
+ return;
+
+ for (i = 0; i < s->map_cnt; i++) {
+ struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz;
+ struct bpf_link **link = map_skel->link;
+
+ if (link) {
+ bpf_link__destroy(*link);
+ *link = NULL;
+ }
+ }
}
void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
@@ -13815,8 +14274,7 @@ void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
if (!s)
return;
- if (s->progs)
- bpf_object__detach_skeleton(s);
+ bpf_object__detach_skeleton(s);
if (s->obj)
bpf_object__close(*s->obj);
free(s->maps);
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 6cd9c501624f..5118d0a90e24 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -24,8 +24,25 @@
extern "C" {
#endif
+/**
+ * @brief **libbpf_major_version()** provides the major version of libbpf.
+ * @return An integer, the major version number
+ */
LIBBPF_API __u32 libbpf_major_version(void);
+
+/**
+ * @brief **libbpf_minor_version()** provides the minor version of libbpf.
+ * @return An integer, the minor version number
+ */
LIBBPF_API __u32 libbpf_minor_version(void);
+
+/**
+ * @brief **libbpf_version_string()** provides the version of libbpf in a
+ * human-readable form, e.g., "v1.7".
+ * @return Pointer to a static string containing the version
+ *
+ * The format is *not* a part of a stable API and may change in the future.
+ */
LIBBPF_API const char *libbpf_version_string(void);
enum libbpf_errno {
@@ -49,6 +66,14 @@ enum libbpf_errno {
__LIBBPF_ERRNO__END,
};
+/**
+ * @brief **libbpf_strerror()** converts the provided error code into a
+ * human-readable string.
+ * @param err The error code to convert
+ * @param buf Pointer to a buffer where the error message will be stored
+ * @param size The number of bytes in the buffer
+ * @return 0, on success; negative error code, otherwise
+ */
LIBBPF_API int libbpf_strerror(int err, char *buf, size_t size);
/**
@@ -98,7 +123,10 @@ typedef int (*libbpf_print_fn_t)(enum libbpf_print_level level,
/**
* @brief **libbpf_set_print()** sets user-provided log callback function to
- * be used for libbpf warnings and informational messages.
+ * be used for libbpf warnings and informational messages. If the user callback
+ * is not set, messages are logged to stderr by default. The verbosity of these
+ * messages can be controlled by setting the environment variable
+ * LIBBPF_LOG_LEVEL to either warn, info, or debug.
* @param fn The log print function. If NULL, libbpf won't print anything.
* @return Pointer to old print function.
*
@@ -149,7 +177,7 @@ struct bpf_object_open_opts {
* log_buf and log_level settings.
*
* If specified, this log buffer will be passed for:
- * - each BPF progral load (BPF_PROG_LOAD) attempt, unless overriden
+	 * - each BPF program load (BPF_PROG_LOAD) attempt, unless overridden
* with bpf_program__set_log() on per-program level, to get
* BPF verifier log output.
* - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
@@ -177,10 +205,29 @@ struct bpf_object_open_opts {
* logs through its print callback.
*/
__u32 kernel_log_level;
+ /* Path to BPF FS mount point to derive BPF token from.
+ *
+ * Created BPF token will be used for all bpf() syscall operations
+ * that accept BPF token (e.g., map creation, BTF and program loads,
+ * etc) automatically within instantiated BPF object.
+ *
+ * If bpf_token_path is not specified, libbpf will consult
+ * LIBBPF_BPF_TOKEN_PATH environment variable. If set, it will be
+ * taken as a value of bpf_token_path option and will force libbpf to
+ * either create BPF token from provided custom BPF FS path, or will
+ * disable implicit BPF token creation, if envvar value is an empty
+ * string. bpf_token_path overrides LIBBPF_BPF_TOKEN_PATH, if both are
+ * set at the same time.
+ *
+ * Setting bpf_token_path option to empty string disables libbpf's
+ * automatic attempt to create BPF token from default BPF FS mount
+ * point (/sys/fs/bpf), in case this default behavior is undesirable.
+ */
+ const char *bpf_token_path;
size_t :0;
};
-#define bpf_object_open_opts__last_field kernel_log_level
+#define bpf_object_open_opts__last_field bpf_token_path
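
A sketch of opening an object with an explicit token path; /sys/fs/bpf is the conventional default mount, and an empty string would disable implicit token creation entirely:

	#include <bpf/libbpf.h>

	static struct bpf_object *open_with_token(const char *path)
	{
		LIBBPF_OPTS(bpf_object_open_opts, opts,
			.bpf_token_path = "/sys/fs/bpf",	/* BPF FS set up for token delegation */
		);

		return bpf_object__open_file(path, &opts);
	}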
/**
* @brief **bpf_object__open()** creates a bpf_object by opening
@@ -220,6 +267,19 @@ bpf_object__open_mem(const void *obj_buf, size_t obj_buf_sz,
const struct bpf_object_open_opts *opts);
/**
+ * @brief **bpf_object__prepare()** prepares BPF object for loading:
+ * performs ELF processing, relocations, prepares final state of BPF program
+ * instructions (accessible with bpf_program__insns()), creates and
+ * (potentially) pins maps. Leaves BPF object in the state ready for program
+ * loading.
+ * @param obj Pointer to a valid BPF object instance returned by
+ * **bpf_object__open*()** API
+ * @return 0, on success; negative error code, otherwise, error code is
+ * stored in errno
+ */
+LIBBPF_API int bpf_object__prepare(struct bpf_object *obj);
+
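bpf_object__prepare() splits the open-to-load sequence so callers can inspect the fully relocated instructions (and rely on maps being created) before any BPF_PROG_LOAD happens. A hedged sketch of the resulting flow:

	#include <errno.h>
	#include <stdio.h>
	#include <bpf/libbpf.h>

	static int open_prepare_load(const char *path)
	{
		struct bpf_object *obj = bpf_object__open_file(path, NULL);
		struct bpf_program *prog;
		int err;

		if (!obj)
			return -errno;

		err = bpf_object__prepare(obj);		/* relocations done, maps created */
		if (err)
			goto out;

		bpf_object__for_each_program(prog, obj)
			printf("%s: %zu insns\n", bpf_program__name(prog),
			       bpf_program__insn_cnt(prog));

		err = bpf_object__load(obj);		/* only program loading is left */
	out:
		bpf_object__close(obj);
		return err;
	}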
+/**
* @brief **bpf_object__load()** loads BPF object into kernel.
* @param obj Pointer to a valid BPF object instance returned by
* **bpf_object__open*()** APIs
@@ -272,6 +332,14 @@ LIBBPF_API const char *bpf_object__name(const struct bpf_object *obj);
LIBBPF_API unsigned int bpf_object__kversion(const struct bpf_object *obj);
LIBBPF_API int bpf_object__set_kversion(struct bpf_object *obj, __u32 kern_version);
+/**
+ * @brief **bpf_object__token_fd** is an accessor for BPF token FD associated
+ * with BPF object.
+ * @param obj Pointer to a valid BPF object
+ * @return BPF token FD or -1, if it wasn't set
+ */
+LIBBPF_API int bpf_object__token_fd(const struct bpf_object *obj);
+
struct btf;
LIBBPF_API struct btf *bpf_object__btf(const struct bpf_object *obj);
LIBBPF_API int bpf_object__btf_fd(const struct bpf_object *obj);
@@ -433,7 +501,7 @@ LIBBPF_API int bpf_link__destroy(struct bpf_link *link);
/**
* @brief **bpf_program__attach()** is a generic function for attaching
* a BPF program based on auto-detection of program type, attach type,
- * and extra paremeters, where applicable.
+ * and extra parameters, where applicable.
*
* @param prog BPF program to attach
* @return Reference to the newly created BPF link; or NULL is returned on error,
@@ -456,9 +524,11 @@ struct bpf_perf_event_opts {
__u64 bpf_cookie;
/* don't use BPF link when attach BPF program */
bool force_ioctl_attach;
+ /* don't automatically enable the event */
+ bool dont_enable;
size_t :0;
};
-#define bpf_perf_event_opts__last_field force_ioctl_attach
+#define bpf_perf_event_opts__last_field dont_enable
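A hedged sketch of the dont_enable knob: the program is attached but the perf event stays disabled until the caller enables it itself (pfd is assumed to come from perf_event_open()):

#include <bpf/libbpf.h>

static struct bpf_link *attach_disabled(struct bpf_program *prog, int pfd)
{
	LIBBPF_OPTS(bpf_perf_event_opts, opts,
		.dont_enable = true,	/* caller runs ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) later */
	);

	return bpf_program__attach_perf_event_opts(prog, pfd, &opts);
}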
LIBBPF_API struct bpf_link *
bpf_program__attach_perf_event(const struct bpf_program *prog, int pfd);
@@ -520,10 +590,14 @@ struct bpf_kprobe_multi_opts {
size_t cnt;
/* create return kprobes */
bool retprobe;
+ /* create session kprobes */
+ bool session;
+ /* enforce unique match */
+ bool unique_match;
size_t :0;
};
-#define bpf_kprobe_multi_opts__last_field retprobe
+#define bpf_kprobe_multi_opts__last_field unique_match
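A sketch of the new session/unique_match options; it assumes prog was written as a kprobe session program and uses a placeholder symbol pattern:

#include <bpf/libbpf.h>

static struct bpf_link *attach_kprobe_session(struct bpf_program *prog)
{
	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
		.session = true,	/* one program sees both entry and return */
		.unique_match = true,	/* fail unless the pattern resolves to exactly one symbol */
	);

	return bpf_program__attach_kprobe_multi_opts(prog, "do_sys_openat2", &opts);
}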
LIBBPF_API struct bpf_link *
bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
@@ -545,10 +619,12 @@ struct bpf_uprobe_multi_opts {
size_t cnt;
/* create return uprobes */
bool retprobe;
+ /* create session uprobes */
+ bool session;
size_t :0;
};
-#define bpf_uprobe_multi_opts__last_field retprobe
+#define bpf_uprobe_multi_opts__last_field session
/**
* @brief **bpf_program__attach_uprobe_multi()** attaches a BPF program
@@ -655,7 +731,7 @@ struct bpf_uprobe_opts {
/**
* @brief **bpf_program__attach_uprobe()** attaches a BPF program
* to the userspace function which is found by binary path and
- * offset. You can optionally specify a particular proccess to attach
+ * offset. You can optionally specify a particular process to attach
* to. You can also optionally attach the program to the function
* exit instead of entry.
*
@@ -741,9 +817,20 @@ bpf_program__attach_tracepoint_opts(const struct bpf_program *prog,
const char *tp_name,
const struct bpf_tracepoint_opts *opts);
+struct bpf_raw_tracepoint_opts {
+ size_t sz; /* size of this struct for forward/backward compatibility */
+ __u64 cookie;
+ size_t :0;
+};
+#define bpf_raw_tracepoint_opts__last_field cookie
+
LIBBPF_API struct bpf_link *
bpf_program__attach_raw_tracepoint(const struct bpf_program *prog,
const char *tp_name);
+LIBBPF_API struct bpf_link *
+bpf_program__attach_raw_tracepoint_opts(const struct bpf_program *prog,
+ const char *tp_name,
+ struct bpf_raw_tracepoint_opts *opts);
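A short sketch of the _opts raw tracepoint attach with a cookie; the tracepoint name and cookie value are placeholders, and the program side would typically read the value back with bpf_get_attach_cookie():

#include <bpf/libbpf.h>

static struct bpf_link *attach_raw_tp(struct bpf_program *prog)
{
	LIBBPF_OPTS(bpf_raw_tracepoint_opts, opts,
		.cookie = 0x1234,	/* opaque value propagated to the program */
	);

	return bpf_program__attach_raw_tracepoint_opts(prog, "sched_switch", &opts);
}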
struct bpf_trace_opts {
/* size of this struct, for forward/backward compatibility */
@@ -765,6 +852,8 @@ bpf_program__attach_cgroup(const struct bpf_program *prog, int cgroup_fd);
LIBBPF_API struct bpf_link *
bpf_program__attach_netns(const struct bpf_program *prog, int netns_fd);
LIBBPF_API struct bpf_link *
+bpf_program__attach_sockmap(const struct bpf_program *prog, int map_fd);
+LIBBPF_API struct bpf_link *
bpf_program__attach_xdp(const struct bpf_program *prog, int ifindex);
LIBBPF_API struct bpf_link *
bpf_program__attach_freplace(const struct bpf_program *prog,
@@ -815,6 +904,21 @@ LIBBPF_API struct bpf_link *
bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
const struct bpf_netkit_opts *opts);
+struct bpf_cgroup_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+ __u32 flags;
+ __u32 relative_fd;
+ __u32 relative_id;
+ __u64 expected_revision;
+ size_t :0;
+};
+#define bpf_cgroup_opts__last_field expected_revision
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_cgroup_opts(const struct bpf_program *prog, int cgroup_fd,
+ const struct bpf_cgroup_opts *opts);
+
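A hedged sketch of ordered cgroup attachment through the new opts API; it assumes the mprog-style BPF_F_BEFORE flag and that cgroup_fd/anchor_prog_fd are obtained elsewhere by the caller:

#include <linux/bpf.h>
#include <bpf/libbpf.h>

static struct bpf_link *attach_before(struct bpf_program *prog, int cgroup_fd,
				      int anchor_prog_fd)
{
	LIBBPF_OPTS(bpf_cgroup_opts, opts,
		.flags = BPF_F_BEFORE,		/* insert ahead of the anchor program */
		.relative_fd = anchor_prog_fd,
	);

	return bpf_program__attach_cgroup_opts(prog, cgroup_fd, &opts);
}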
struct bpf_map;
LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
@@ -878,6 +982,12 @@ LIBBPF_API int bpf_program__set_log_level(struct bpf_program *prog, __u32 log_le
LIBBPF_API const char *bpf_program__log_buf(const struct bpf_program *prog, size_t *log_size);
LIBBPF_API int bpf_program__set_log_buf(struct bpf_program *prog, char *log_buf, size_t log_size);
+LIBBPF_API struct bpf_func_info *bpf_program__func_info(const struct bpf_program *prog);
+LIBBPF_API __u32 bpf_program__func_info_cnt(const struct bpf_program *prog);
+
+LIBBPF_API struct bpf_line_info *bpf_program__line_info(const struct bpf_program *prog);
+LIBBPF_API __u32 bpf_program__line_info_cnt(const struct bpf_program *prog);
+
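A minimal sketch of the func_info accessors; struct bpf_func_info comes from the BPF UAPI and the printf is only for illustration:

#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

static void dump_func_info(const struct bpf_program *prog)
{
	struct bpf_func_info *fi = bpf_program__func_info(prog);
	__u32 i, cnt = bpf_program__func_info_cnt(prog);

	for (i = 0; i < cnt; i++)
		printf("insn_off=%u type_id=%u\n", fi[i].insn_off, fi[i].type_id);
}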
/**
* @brief **bpf_program__set_attach_target()** sets BTF-based attach target
* for supported BPF program types:
@@ -942,6 +1052,23 @@ LIBBPF_API int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate);
LIBBPF_API bool bpf_map__autocreate(const struct bpf_map *map);
/**
+ * @brief **bpf_map__set_autoattach()** sets whether libbpf has to auto-attach
+ * map during BPF skeleton attach phase.
+ * @param map the BPF map instance
+ * @param autoattach whether to attach map during BPF skeleton attach phase
+ * @return 0 on success; negative error code, otherwise
+ */
+LIBBPF_API int bpf_map__set_autoattach(struct bpf_map *map, bool autoattach);
+
+/**
+ * @brief **bpf_map__autoattach()** returns whether BPF map is configured to
+ * auto-attach during BPF skeleton attach phase.
+ * @param map the BPF map instance
+ * @return true if map is set to auto-attach during skeleton attach phase; false, otherwise
+ */
+LIBBPF_API bool bpf_map__autoattach(const struct bpf_map *map);
+
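A minimal sketch of opting a map out of skeleton auto-attach, e.g. a struct_ops map the caller prefers to attach manually later:

#include <bpf/libbpf.h>

static void skip_auto_attach(struct bpf_map *map)
{
	/* leave this map alone during <skel>__attach(); it can still be
	 * attached by hand later, e.g. with bpf_map__attach_struct_ops()
	 */
	bpf_map__set_autoattach(map, false);
}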
+/**
* @brief **bpf_map__fd()** gets the file descriptor of the passed
* BPF map
* @param map the BPF map instance
@@ -995,7 +1122,7 @@ LIBBPF_API int bpf_map__set_map_extra(struct bpf_map *map, __u64 map_extra);
LIBBPF_API int bpf_map__set_initial_value(struct bpf_map *map,
const void *data, size_t size);
-LIBBPF_API void *bpf_map__initial_value(struct bpf_map *map, size_t *psize);
+LIBBPF_API void *bpf_map__initial_value(const struct bpf_map *map, size_t *psize);
/**
* @brief **bpf_map__is_internal()** tells the caller whether or not the
@@ -1164,6 +1291,28 @@ LIBBPF_API int bpf_map__lookup_and_delete_elem(const struct bpf_map *map,
*/
LIBBPF_API int bpf_map__get_next_key(const struct bpf_map *map,
const void *cur_key, void *next_key, size_t key_sz);
+/**
+ * @brief **bpf_map__set_exclusive_program()** sets a map to be exclusive to the
+ * specified program. This must be called *before* the map is created.
+ *
+ * @param map BPF map to make exclusive.
+ * @param prog BPF program to be the exclusive user of the map. Must belong
+ * to the same bpf_object as the map.
+ * @return 0 on success; a negative error code otherwise.
+ *
+ * This function must be called after the BPF object is opened but before
+ * it is loaded. Once the object is loaded, only the specified program
+ * will be able to access the map's contents.
+ */
+LIBBPF_API int bpf_map__set_exclusive_program(struct bpf_map *map, struct bpf_program *prog);
+
+/**
+ * @brief **bpf_map__exclusive_program()** returns the exclusive program
+ * that is registered with the map (if any).
+ * @param map BPF map to which the exclusive program is registered.
+ * @return the registered exclusive program.
+ */
+LIBBPF_API struct bpf_program *bpf_map__exclusive_program(struct bpf_map *map);
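A sketch of registering an exclusive program between open and load; the map and program names are placeholders:

#include <errno.h>
#include <bpf/libbpf.h>

static int load_with_exclusive_map(struct bpf_object *obj)
{
	struct bpf_map *map = bpf_object__find_map_by_name(obj, "private_state");
	struct bpf_program *prog = bpf_object__find_program_by_name(obj, "owner_prog");
	int err;

	if (!map || !prog)
		return -ENOENT;

	/* must happen after open, before load */
	err = bpf_map__set_exclusive_program(map, prog);
	return err ?: bpf_object__load(obj);
}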
struct bpf_xdp_set_link_opts {
size_t sz;
@@ -1204,6 +1353,7 @@ enum bpf_tc_attach_point {
BPF_TC_INGRESS = 1 << 0,
BPF_TC_EGRESS = 1 << 1,
BPF_TC_CUSTOM = 1 << 2,
+ BPF_TC_QDISC = 1 << 3,
};
#define BPF_TC_PARENT(a, b) \
@@ -1218,9 +1368,11 @@ struct bpf_tc_hook {
int ifindex;
enum bpf_tc_attach_point attach_point;
__u32 parent;
+ __u32 handle;
+ const char *qdisc;
size_t :0;
};
-#define bpf_tc_hook__last_field parent
+#define bpf_tc_hook__last_field qdisc
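A hedged sketch of creating a qdisc via the extended TC hook; the "fq" qdisc, handle value and ifindex are placeholders, and TC_H_ROOT comes from linux/pkt_sched.h:

#include <linux/pkt_sched.h>
#include <bpf/libbpf.h>

static int create_qdisc(int ifindex)
{
	LIBBPF_OPTS(bpf_tc_hook, hook,
		.ifindex = ifindex,
		.attach_point = BPF_TC_QDISC,
		.parent = TC_H_ROOT,
		.handle = 0x8000000,	/* placeholder handle (major:minor packed in one u32) */
		.qdisc = "fq",
	);

	return bpf_tc_hook_create(&hook);
}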
struct bpf_tc_opts {
size_t sz;
@@ -1263,6 +1415,7 @@ LIBBPF_API int ring_buffer__add(struct ring_buffer *rb, int map_fd,
ring_buffer_sample_fn sample_cb, void *ctx);
LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
+LIBBPF_API int ring_buffer__consume_n(struct ring_buffer *rb, size_t n);
LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);
/**
@@ -1337,6 +1490,17 @@ LIBBPF_API int ring__map_fd(const struct ring *r);
*/
LIBBPF_API int ring__consume(struct ring *r);
+/**
+ * @brief **ring__consume_n()** consumes up to a requested amount of items from
+ * a ringbuffer without event polling.
+ *
+ * @param r A ringbuffer object.
+ * @param n Maximum amount of items to consume.
+ * @return The number of items consumed, or a negative number if any of the
+ * callbacks return an error.
+ */
+LIBBPF_API int ring__consume_n(struct ring *r, size_t n);
+
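A small sketch of bounded consumption with the new _n variants; 64 is an arbitrary batch size:

#include <bpf/libbpf.h>

static int drain_some(struct ring_buffer *rb, struct ring *r)
{
	int n;

	/* consume at most 64 records across all rings registered with rb */
	n = ring_buffer__consume_n(rb, 64);
	if (n < 0)
		return n;

	/* or bound consumption on one specific ring */
	return ring__consume_n(r, 64);
}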
struct user_ring_buffer_opts {
size_t sz; /* size of this struct, for forward/backward compatibility */
};
@@ -1527,11 +1691,11 @@ LIBBPF_API int perf_buffer__buffer_fd(const struct perf_buffer *pb, size_t buf_i
* memory region of the ring buffer.
* This ring buffer can be used to implement a custom events consumer.
* The ring buffer starts with the *struct perf_event_mmap_page*, which
- * holds the ring buffer managment fields, when accessing the header
+ * holds the ring buffer management fields, when accessing the header
* structure it's important to be SMP aware.
* You can refer to *perf_event_read_simple* for a simple example.
* @param pb the perf buffer structure
- * @param buf_idx the buffer index to retreive
+ * @param buf_idx the buffer index to retrieve
* @param buf (out) gets the base pointer of the mmap()'ed memory
* @param buf_size (out) gets the size of the mmap()'ed region
* @return 0 on success, negative error code for failure
@@ -1623,6 +1787,7 @@ struct bpf_map_skeleton {
const char *name;
struct bpf_map **map;
void **mmaped;
+ struct bpf_link **link;
};
struct bpf_prog_skeleton {
@@ -1692,9 +1857,10 @@ struct gen_loader_opts {
const char *insns;
__u32 data_sz;
__u32 insns_sz;
+ bool gen_hash;
};
-#define gen_loader_opts__last_field insns_sz
+#define gen_loader_opts__last_field gen_hash
LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
struct gen_loader_opts *opts);
@@ -1719,9 +1885,14 @@ struct bpf_linker_file_opts {
struct bpf_linker;
LIBBPF_API struct bpf_linker *bpf_linker__new(const char *filename, struct bpf_linker_opts *opts);
+LIBBPF_API struct bpf_linker *bpf_linker__new_fd(int fd, struct bpf_linker_opts *opts);
LIBBPF_API int bpf_linker__add_file(struct bpf_linker *linker,
const char *filename,
const struct bpf_linker_file_opts *opts);
+LIBBPF_API int bpf_linker__add_fd(struct bpf_linker *linker, int fd,
+ const struct bpf_linker_file_opts *opts);
+LIBBPF_API int bpf_linker__add_buf(struct bpf_linker *linker, void *buf, size_t buf_sz,
+ const struct bpf_linker_file_opts *opts);
LIBBPF_API int bpf_linker__finalize(struct bpf_linker *linker);
LIBBPF_API void bpf_linker__free(struct bpf_linker *linker);
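A sketch combining the existing file-based linker API with the new buffer-based entry point; output and input names are placeholders:

#include <errno.h>
#include <bpf/libbpf.h>

static int link_objects(void *buf, size_t buf_sz)
{
	struct bpf_linker *linker;
	int err;

	linker = bpf_linker__new("combined.bpf.o", NULL);
	if (!linker)
		return -errno;

	err = bpf_linker__add_file(linker, "a.bpf.o", NULL);
	err = err ?: bpf_linker__add_buf(linker, buf, buf_sz, NULL);
	err = err ?: bpf_linker__finalize(linker);

	bpf_linker__free(linker);
	return err;
}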
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 91c5aef7dae7..8ed8749907d4 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -245,7 +245,6 @@ LIBBPF_0.3.0 {
btf__parse_raw_split;
btf__parse_split;
btf__new_empty_split;
- btf__new_split;
ring_buffer__epoll_fd;
} LIBBPF_0.2.0;
@@ -326,7 +325,6 @@ LIBBPF_0.7.0 {
bpf_xdp_detach;
bpf_xdp_query;
bpf_xdp_query_id;
- btf_ext__raw_data;
libbpf_probe_bpf_helper;
libbpf_probe_bpf_map_type;
libbpf_probe_bpf_prog_type;
@@ -411,4 +409,46 @@ LIBBPF_1.3.0 {
} LIBBPF_1.2.0;
LIBBPF_1.4.0 {
+ global:
+ bpf_program__attach_raw_tracepoint_opts;
+ bpf_raw_tracepoint_open_opts;
+ bpf_token_create;
+ btf__new_split;
+ btf_ext__raw_data;
} LIBBPF_1.3.0;
+
+LIBBPF_1.5.0 {
+ global:
+ btf__distill_base;
+ btf__relocate;
+ btf_ext__endianness;
+ btf_ext__set_endianness;
+ bpf_map__autoattach;
+ bpf_map__set_autoattach;
+ bpf_object__token_fd;
+ bpf_program__attach_sockmap;
+ ring__consume_n;
+ ring_buffer__consume_n;
+} LIBBPF_1.4.0;
+
+LIBBPF_1.6.0 {
+ global:
+ bpf_linker__add_buf;
+ bpf_linker__add_fd;
+ bpf_linker__new_fd;
+ bpf_object__prepare;
+ bpf_prog_stream_read;
+ bpf_program__attach_cgroup_opts;
+ bpf_program__func_info;
+ bpf_program__func_info_cnt;
+ bpf_program__line_info;
+ bpf_program__line_info_cnt;
+ btf__add_decl_attr;
+ btf__add_type_attr;
+} LIBBPF_1.5.0;
+
+LIBBPF_1.7.0 {
+ global:
+ bpf_map__set_exclusive_program;
+ bpf_map__exclusive_program;
+} LIBBPF_1.6.0;
diff --git a/tools/lib/bpf/libbpf_errno.c b/tools/lib/bpf/libbpf_errno.c
deleted file mode 100644
index 6b180172ec6b..000000000000
--- a/tools/lib/bpf/libbpf_errno.c
+++ /dev/null
@@ -1,75 +0,0 @@
-// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
-
-/*
- * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
- * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
- * Copyright (C) 2015 Huawei Inc.
- * Copyright (C) 2017 Nicira, Inc.
- */
-
-#undef _GNU_SOURCE
-#include <stdio.h>
-#include <string.h>
-
-#include "libbpf.h"
-#include "libbpf_internal.h"
-
-/* make sure libbpf doesn't use kernel-only integer typedefs */
-#pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
-
-#define ERRNO_OFFSET(e) ((e) - __LIBBPF_ERRNO__START)
-#define ERRCODE_OFFSET(c) ERRNO_OFFSET(LIBBPF_ERRNO__##c)
-#define NR_ERRNO (__LIBBPF_ERRNO__END - __LIBBPF_ERRNO__START)
-
-static const char *libbpf_strerror_table[NR_ERRNO] = {
- [ERRCODE_OFFSET(LIBELF)] = "Something wrong in libelf",
- [ERRCODE_OFFSET(FORMAT)] = "BPF object format invalid",
- [ERRCODE_OFFSET(KVERSION)] = "'version' section incorrect or lost",
- [ERRCODE_OFFSET(ENDIAN)] = "Endian mismatch",
- [ERRCODE_OFFSET(INTERNAL)] = "Internal error in libbpf",
- [ERRCODE_OFFSET(RELOC)] = "Relocation failed",
- [ERRCODE_OFFSET(VERIFY)] = "Kernel verifier blocks program loading",
- [ERRCODE_OFFSET(PROG2BIG)] = "Program too big",
- [ERRCODE_OFFSET(KVER)] = "Incorrect kernel version",
- [ERRCODE_OFFSET(PROGTYPE)] = "Kernel doesn't support this program type",
- [ERRCODE_OFFSET(WRNGPID)] = "Wrong pid in netlink message",
- [ERRCODE_OFFSET(INVSEQ)] = "Invalid netlink sequence",
- [ERRCODE_OFFSET(NLPARSE)] = "Incorrect netlink message parsing",
-};
-
-int libbpf_strerror(int err, char *buf, size_t size)
-{
- int ret;
-
- if (!buf || !size)
- return libbpf_err(-EINVAL);
-
- err = err > 0 ? err : -err;
-
- if (err < __LIBBPF_ERRNO__START) {
- ret = strerror_r(err, buf, size);
- buf[size - 1] = '\0';
- return libbpf_err_errno(ret);
- }
-
- if (err < __LIBBPF_ERRNO__END) {
- const char *msg;
-
- msg = libbpf_strerror_table[ERRNO_OFFSET(err)];
- ret = snprintf(buf, size, "%s", msg);
- buf[size - 1] = '\0';
- /* The length of the buf and msg is positive.
- * A negative number may be returned only when the
- * size exceeds INT_MAX. Not likely to appear.
- */
- if (ret >= size)
- return libbpf_err(-ERANGE);
- return 0;
- }
-
- ret = snprintf(buf, size, "Unknown libbpf error %d", err);
- buf[size - 1] = '\0';
- if (ret >= size)
- return libbpf_err(-ERANGE);
- return libbpf_err(-ENOENT);
-}
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index 27e4e320e1a6..35b2527bedec 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -10,14 +10,30 @@
#define __LIBBPF_LIBBPF_INTERNAL_H
#include <stdlib.h>
+#include <byteswap.h>
#include <limits.h>
#include <errno.h>
#include <linux/err.h>
#include <fcntl.h>
#include <unistd.h>
+#include <sys/syscall.h>
#include <libelf.h>
#include "relo_core.h"
+/* Android's libc doesn't support AT_EACCESS in faccessat() implementation
+ * ([0]), and just returns -EINVAL even if file exists and is accessible.
+ * See [1] for issues caused by this.
+ *
+ * So just redefine it to 0 on Android.
+ *
+ * [0] https://android.googlesource.com/platform/bionic/+/refs/heads/android13-release/libc/bionic/faccessat.cpp#50
+ * [1] https://github.com/libbpf/libbpf-bootstrap/issues/250#issuecomment-1911324250
+ */
+#ifdef __ANDROID__
+#undef AT_EACCESS
+#define AT_EACCESS 0
+#endif
+
/* make sure libbpf doesn't use kernel-only integer typedefs */
#pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
@@ -156,6 +172,16 @@ do { \
#define pr_info(fmt, ...) __pr(LIBBPF_INFO, fmt, ##__VA_ARGS__)
#define pr_debug(fmt, ...) __pr(LIBBPF_DEBUG, fmt, ##__VA_ARGS__)
+/**
+ * @brief **libbpf_errstr()** returns string corresponding to numeric errno
+ * @param err negative numeric errno
+ * @return pointer to string representation of the errno, that is invalidated
+ * upon the next call.
+ */
+const char *libbpf_errstr(int err);
+
+#define errstr(err) libbpf_errstr(err)
+
#ifndef __has_builtin
#define __has_builtin(x) 0
#endif
@@ -219,6 +245,9 @@ struct btf_type;
struct btf_type *btf_type_by_id(const struct btf *btf, __u32 type_id);
const char *btf_kind_str(const struct btf_type *t);
const struct btf_type *skip_mods_and_typedefs(const struct btf *btf, __u32 id, __u32 *res_id);
+const struct btf_header *btf_header(const struct btf *btf);
+void btf_set_base_btf(struct btf *btf, const struct btf *base_btf);
+int btf_relocate(struct btf *btf, const struct btf *base_btf, __u32 **id_map);
static inline enum btf_func_linkage btf_func_linkage(const struct btf_type *t)
{
@@ -357,18 +386,40 @@ enum kern_feature_id {
FEAT_SYSCALL_WRAPPER,
/* BPF multi-uprobe link support */
FEAT_UPROBE_MULTI_LINK,
+ /* Kernel supports arg:ctx tag (__arg_ctx) for global subprogs natively */
+ FEAT_ARG_CTX_TAG,
+ /* Kernel supports '?' at the front of datasec names */
+ FEAT_BTF_QMARK_DATASEC,
__FEAT_CNT,
};
-int probe_memcg_account(void);
+enum kern_feature_result {
+ FEAT_UNKNOWN = 0,
+ FEAT_SUPPORTED = 1,
+ FEAT_MISSING = 2,
+};
+
+struct kern_feature_cache {
+ enum kern_feature_result res[__FEAT_CNT];
+ int token_fd;
+};
+
+bool feat_supported(struct kern_feature_cache *cache, enum kern_feature_id feat_id);
bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id);
+
+int probe_kern_syscall_wrapper(int token_fd);
+int probe_memcg_account(int token_fd);
int bump_rlimit_memlock(void);
int parse_cpu_mask_str(const char *s, bool **mask, int *mask_sz);
int parse_cpu_mask_file(const char *fcpu, bool **mask, int *mask_sz);
int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
- const char *str_sec, size_t str_len);
-int btf_load_into_kernel(struct btf *btf, char *log_buf, size_t log_sz, __u32 log_level);
+ const char *str_sec, size_t str_len,
+ int token_fd);
+int btf_load_into_kernel(struct btf *btf,
+ char *log_buf, size_t log_sz, __u32 log_level,
+ int token_fd);
+struct btf *btf_load_from_kernel(__u32 id, struct btf *base_btf, int token_fd);
struct btf *btf_get_from_fd(int btf_fd, struct btf *base_btf);
void btf_get_kernel_prefix_kind(enum bpf_attach_type attach_type,
@@ -409,11 +460,11 @@ struct btf_ext_info {
*
* The func_info subsection layout:
* record size for struct bpf_func_info in the func_info subsection
- * struct btf_sec_func_info for section #1
+ * struct btf_ext_info_sec for section #1
* a list of bpf_func_info records for section #1
* where struct bpf_func_info mimics one in include/uapi/linux/bpf.h
* but may not be identical
- * struct btf_sec_func_info for section #2
+ * struct btf_ext_info_sec for section #2
* a list of bpf_func_info records for section #2
* ......
*
@@ -445,6 +496,8 @@ struct btf_ext {
struct btf_ext_header *hdr;
void *data;
};
+ void *data_swapped;
+ bool swapped_endian;
struct btf_ext_info func_info;
struct btf_ext_info line_info;
struct btf_ext_info core_relo_info;
@@ -472,21 +525,64 @@ struct bpf_line_info_min {
__u32 line_col;
};
+/* Functions to byte-swap info records */
+
+typedef void (*info_rec_bswap_fn)(void *);
+
+static inline void bpf_func_info_bswap(struct bpf_func_info *i)
+{
+ i->insn_off = bswap_32(i->insn_off);
+ i->type_id = bswap_32(i->type_id);
+}
+
+static inline void bpf_line_info_bswap(struct bpf_line_info *i)
+{
+ i->insn_off = bswap_32(i->insn_off);
+ i->file_name_off = bswap_32(i->file_name_off);
+ i->line_off = bswap_32(i->line_off);
+ i->line_col = bswap_32(i->line_col);
+}
+
+static inline void bpf_core_relo_bswap(struct bpf_core_relo *i)
+{
+ i->insn_off = bswap_32(i->insn_off);
+ i->type_id = bswap_32(i->type_id);
+ i->access_str_off = bswap_32(i->access_str_off);
+ i->kind = bswap_32(i->kind);
+}
+
+enum btf_field_iter_kind {
+ BTF_FIELD_ITER_IDS,
+ BTF_FIELD_ITER_STRS,
+};
+
+struct btf_field_desc {
+ /* once-per-type offsets */
+ int t_off_cnt, t_offs[2];
+ /* member struct size, or zero, if no members */
+ int m_sz;
+ /* repeated per-member offsets */
+ int m_off_cnt, m_offs[1];
+};
+
+struct btf_field_iter {
+ struct btf_field_desc desc;
+ void *p;
+ int m_idx;
+ int off_idx;
+ int vlen;
+};
+
+int btf_field_iter_init(struct btf_field_iter *it, struct btf_type *t, enum btf_field_iter_kind iter_kind);
+__u32 *btf_field_iter_next(struct btf_field_iter *it);
typedef int (*type_id_visit_fn)(__u32 *type_id, void *ctx);
typedef int (*str_off_visit_fn)(__u32 *str_off, void *ctx);
-int btf_type_visit_type_ids(struct btf_type *t, type_id_visit_fn visit, void *ctx);
-int btf_type_visit_str_offs(struct btf_type *t, str_off_visit_fn visit, void *ctx);
int btf_ext_visit_type_ids(struct btf_ext *btf_ext, type_id_visit_fn visit, void *ctx);
int btf_ext_visit_str_offs(struct btf_ext *btf_ext, str_off_visit_fn visit, void *ctx);
__s32 btf__find_by_name_kind_own(const struct btf *btf, const char *type_name,
__u32 kind);
-typedef int (*kallsyms_cb_t)(unsigned long long sym_addr, char sym_type,
- const char *sym_name, void *ctx);
-
-int libbpf_kallsyms_parse(kallsyms_cb_t cb, void *arg);
-
/* handle direct returned errors */
static inline int libbpf_err(int ret)
{
@@ -532,6 +628,27 @@ static inline bool is_ldimm64_insn(struct bpf_insn *insn)
return insn->code == (BPF_LD | BPF_IMM | BPF_DW);
}
+static inline void bpf_insn_bswap(struct bpf_insn *insn)
+{
+ __u8 tmp_reg = insn->dst_reg;
+
+ insn->dst_reg = insn->src_reg;
+ insn->src_reg = tmp_reg;
+ insn->off = bswap_16(insn->off);
+ insn->imm = bswap_32(insn->imm);
+}
+
+/* Unconditionally dup FD, ensuring it doesn't use [0, 2] range.
+ * Original FD is not closed or altered in any other way.
+ * Preserves original FD value, if it's invalid (negative).
+ */
+static inline int dup_good_fd(int fd)
+{
+ if (fd < 0)
+ return fd;
+ return fcntl(fd, F_DUPFD_CLOEXEC, 3);
+}
+
/* if fd is stdin, stdout, or stderr, dup to a fd greater than 2
* Takes ownership of the fd passed in, and closes it if calling
* fcntl(fd, F_DUPFD_CLOEXEC, 3).
@@ -543,7 +660,7 @@ static inline int ensure_good_fd(int fd)
if (fd < 0)
return fd;
if (fd < 3) {
- fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
+ fd = dup_good_fd(fd);
saved_errno = errno;
close(old_fd);
errno = saved_errno;
@@ -555,6 +672,20 @@ static inline int ensure_good_fd(int fd)
return fd;
}
+static inline int sys_dup3(int oldfd, int newfd, int flags)
+{
+ return syscall(__NR_dup3, oldfd, newfd, flags);
+}
+
+/* Some versions of Android don't provide memfd_create() in their libc
+ * implementation, so avoid complications and just go straight to Linux
+ * syscall.
+ */
+static inline int sys_memfd_create(const char *name, unsigned flags)
+{
+ return syscall(__NR_memfd_create, name, flags);
+}
+
/* Point *fixed_fd* to the same file that *tmp_fd* points to.
* Regardless of success, *tmp_fd* is closed.
* Whatever *fixed_fd* pointed to is closed silently.
@@ -563,7 +694,7 @@ static inline int reuse_fd(int fixed_fd, int tmp_fd)
{
int err;
- err = dup2(tmp_fd, fixed_fd);
+ err = sys_dup3(tmp_fd, fixed_fd, O_CLOEXEC);
err = err < 0 ? -errno : 0;
close(tmp_fd); /* clean up temporary FD */
return err;
@@ -591,6 +722,11 @@ static inline bool is_pow_of_2(size_t x)
return x && (x & (x - 1)) == 0;
}
+static inline __u32 ror32(__u32 v, int bits)
+{
+ return (v >> bits) | (v << (32 - bits));
+}
+
#define PROG_LOAD_ATTEMPTS 5
int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts);
@@ -613,4 +749,10 @@ int elf_resolve_syms_offsets(const char *binary_path, int cnt,
int elf_resolve_pattern_offsets(const char *binary_path, const char *pattern,
unsigned long **poffsets, size_t *pcnt);
+int probe_fd(int fd);
+
+#define SHA256_DIGEST_LENGTH 32
+#define SHA256_DWORD_SIZE SHA256_DIGEST_LENGTH / sizeof(__u64)
+
+void libbpf_sha256(const void *data, size_t len, __u8 out[SHA256_DIGEST_LENGTH]);
#endif /* __LIBBPF_LIBBPF_INTERNAL_H */
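A one-liner sketch of the internal SHA-256 helper declared above, assuming a caller inside libbpf with an arbitrary data blob:

#include "libbpf_internal.h"

static void hash_blob(const void *data, size_t data_sz)
{
	__u8 digest[SHA256_DIGEST_LENGTH];

	libbpf_sha256(data, data_sz, digest);
	/* digest now holds the 32-byte SHA-256 of the data blob */
}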
diff --git a/tools/lib/bpf/libbpf_legacy.h b/tools/lib/bpf/libbpf_legacy.h
index 1e1be467bede..60b2600be88a 100644
--- a/tools/lib/bpf/libbpf_legacy.h
+++ b/tools/lib/bpf/libbpf_legacy.h
@@ -76,7 +76,7 @@ enum libbpf_strict_mode {
* first BPF program or map creation operation. This is done only if
* kernel is too old to support memcg-based memory accounting for BPF
* subsystem. By default, RLIMIT_MEMLOCK limit is set to RLIM_INFINITY,
- * but it can be overriden with libbpf_set_memlock_rlim() API.
+ * but it can be overridden with libbpf_set_memlock_rlim() API.
* Note that libbpf_set_memlock_rlim() needs to be called before
* the very first bpf_prog_load(), bpf_map_create() or bpf_object__load()
* operation.
@@ -97,7 +97,7 @@ LIBBPF_API int libbpf_set_strict_mode(enum libbpf_strict_mode mode);
* @brief **libbpf_get_error()** extracts the error code from the passed
* pointer
* @param ptr pointer returned from libbpf API function
- * @return error code; or 0 if no error occured
+ * @return error code; or 0 if no error occurred
*
* Note, as of libbpf 1.0 this function is not necessary and not recommended
* to be used. Libbpf doesn't return error code embedded into the pointer
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index 9c4db90b92b6..9dfbe7750f56 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -219,7 +219,8 @@ int libbpf_probe_bpf_prog_type(enum bpf_prog_type prog_type, const void *opts)
}
int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
- const char *str_sec, size_t str_len)
+ const char *str_sec, size_t str_len,
+ int token_fd)
{
struct btf_header hdr = {
.magic = BTF_MAGIC,
@@ -229,6 +230,10 @@ int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
.str_off = types_len,
.str_len = str_len,
};
+ LIBBPF_OPTS(bpf_btf_load_opts, opts,
+ .token_fd = token_fd,
+ .btf_flags = token_fd ? BPF_F_TOKEN_FD : 0,
+ );
int btf_fd, btf_len;
__u8 *raw_btf;
@@ -241,7 +246,7 @@ int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
memcpy(raw_btf + hdr.hdr_len, raw_types, hdr.type_len);
memcpy(raw_btf + hdr.hdr_len + hdr.type_len, str_sec, hdr.str_len);
- btf_fd = bpf_btf_load(raw_btf, btf_len, NULL);
+ btf_fd = bpf_btf_load(raw_btf, btf_len, &opts);
free(raw_btf);
return btf_fd;
@@ -271,7 +276,7 @@ static int load_local_storage_btf(void)
};
return libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs));
+ strs, sizeof(strs), 0);
}
static int probe_map_create(enum bpf_map_type map_type)
@@ -326,12 +331,20 @@ static int probe_map_create(enum bpf_map_type map_type)
case BPF_MAP_TYPE_STRUCT_OPS:
/* we'll get -ENOTSUPP for invalid BTF type ID for struct_ops */
opts.btf_vmlinux_value_type_id = 1;
+ opts.value_type_btf_obj_fd = -1;
exp_err = -524; /* -ENOTSUPP */
break;
case BPF_MAP_TYPE_BLOOM_FILTER:
key_size = 0;
max_entries = 1;
break;
+ case BPF_MAP_TYPE_ARENA:
+ key_size = 0;
+ value_size = 0;
+ max_entries = 1; /* one page */
+ opts.map_extra = 0; /* can mmap() at any address */
+ opts.map_flags = BPF_F_MMAPABLE;
+ break;
case BPF_MAP_TYPE_HASH:
case BPF_MAP_TYPE_ARRAY:
case BPF_MAP_TYPE_PROG_ARRAY:
@@ -435,7 +448,8 @@ int libbpf_probe_bpf_helper(enum bpf_prog_type prog_type, enum bpf_func_id helpe
/* If BPF verifier doesn't recognize BPF helper ID (enum bpf_func_id)
* at all, it will emit something like "invalid func unknown#181".
* If BPF verifier recognizes BPF helper but it's not supported for
- * given BPF program type, it will emit "unknown func bpf_sys_bpf#166".
+ * given BPF program type, it will emit "unknown func bpf_sys_bpf#166"
+ * or "program of this type cannot use helper bpf_sys_bpf#166".
* In both cases, provided combination of BPF program type and BPF
* helper is not supported by the kernel.
* In all other cases, probe_prog_load() above will either succeed (e.g.,
@@ -444,7 +458,8 @@ int libbpf_probe_bpf_helper(enum bpf_prog_type prog_type, enum bpf_func_id helpe
* that), or we'll get some more specific BPF verifier error about
* some unsatisfied conditions.
*/
- if (ret == 0 && (strstr(buf, "invalid func ") || strstr(buf, "unknown func ")))
+ if (ret == 0 && (strstr(buf, "invalid func ") || strstr(buf, "unknown func ") ||
+ strstr(buf, "program of this type cannot use helper ")))
return 0;
return 1; /* assume supported */
}
diff --git a/tools/lib/bpf/libbpf_utils.c b/tools/lib/bpf/libbpf_utils.c
new file mode 100644
index 000000000000..ac3beae54cf6
--- /dev/null
+++ b/tools/lib/bpf/libbpf_utils.c
@@ -0,0 +1,256 @@
+// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+
+/*
+ * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
+ * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
+ * Copyright (C) 2015 Huawei Inc.
+ * Copyright (C) 2017 Nicira, Inc.
+ */
+
+#undef _GNU_SOURCE
+#include <stdio.h>
+#include <string.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <linux/kernel.h>
+
+#include "libbpf.h"
+#include "libbpf_internal.h"
+
+#ifndef ENOTSUPP
+#define ENOTSUPP 524
+#endif
+
+/* make sure libbpf doesn't use kernel-only integer typedefs */
+#pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
+
+#define ERRNO_OFFSET(e) ((e) - __LIBBPF_ERRNO__START)
+#define ERRCODE_OFFSET(c) ERRNO_OFFSET(LIBBPF_ERRNO__##c)
+#define NR_ERRNO (__LIBBPF_ERRNO__END - __LIBBPF_ERRNO__START)
+
+static const char *libbpf_strerror_table[NR_ERRNO] = {
+ [ERRCODE_OFFSET(LIBELF)] = "Something wrong in libelf",
+ [ERRCODE_OFFSET(FORMAT)] = "BPF object format invalid",
+ [ERRCODE_OFFSET(KVERSION)] = "'version' section incorrect or lost",
+ [ERRCODE_OFFSET(ENDIAN)] = "Endian mismatch",
+ [ERRCODE_OFFSET(INTERNAL)] = "Internal error in libbpf",
+ [ERRCODE_OFFSET(RELOC)] = "Relocation failed",
+ [ERRCODE_OFFSET(VERIFY)] = "Kernel verifier blocks program loading",
+ [ERRCODE_OFFSET(PROG2BIG)] = "Program too big",
+ [ERRCODE_OFFSET(KVER)] = "Incorrect kernel version",
+ [ERRCODE_OFFSET(PROGTYPE)] = "Kernel doesn't support this program type",
+ [ERRCODE_OFFSET(WRNGPID)] = "Wrong pid in netlink message",
+ [ERRCODE_OFFSET(INVSEQ)] = "Invalid netlink sequence",
+ [ERRCODE_OFFSET(NLPARSE)] = "Incorrect netlink message parsing",
+};
+
+int libbpf_strerror(int err, char *buf, size_t size)
+{
+ int ret;
+
+ if (!buf || !size)
+ return libbpf_err(-EINVAL);
+
+ err = err > 0 ? err : -err;
+
+ if (err < __LIBBPF_ERRNO__START) {
+ ret = strerror_r(err, buf, size);
+ buf[size - 1] = '\0';
+ return libbpf_err_errno(ret);
+ }
+
+ if (err < __LIBBPF_ERRNO__END) {
+ const char *msg;
+
+ msg = libbpf_strerror_table[ERRNO_OFFSET(err)];
+ ret = snprintf(buf, size, "%s", msg);
+ buf[size - 1] = '\0';
+ /* The length of the buf and msg is positive.
+ * A negative number may be returned only when the
+ * size exceeds INT_MAX. Not likely to appear.
+ */
+ if (ret >= size)
+ return libbpf_err(-ERANGE);
+ return 0;
+ }
+
+ ret = snprintf(buf, size, "Unknown libbpf error %d", err);
+ buf[size - 1] = '\0';
+ if (ret >= size)
+ return libbpf_err(-ERANGE);
+ return libbpf_err(-ENOENT);
+}
+
+const char *libbpf_errstr(int err)
+{
+ static __thread char buf[12];
+
+ if (err > 0)
+ err = -err;
+
+ switch (err) {
+ case -E2BIG: return "-E2BIG";
+ case -EACCES: return "-EACCES";
+ case -EADDRINUSE: return "-EADDRINUSE";
+ case -EADDRNOTAVAIL: return "-EADDRNOTAVAIL";
+ case -EAGAIN: return "-EAGAIN";
+ case -EALREADY: return "-EALREADY";
+ case -EBADF: return "-EBADF";
+ case -EBADFD: return "-EBADFD";
+ case -EBUSY: return "-EBUSY";
+ case -ECANCELED: return "-ECANCELED";
+ case -ECHILD: return "-ECHILD";
+ case -EDEADLK: return "-EDEADLK";
+ case -EDOM: return "-EDOM";
+ case -EEXIST: return "-EEXIST";
+ case -EFAULT: return "-EFAULT";
+ case -EFBIG: return "-EFBIG";
+ case -EILSEQ: return "-EILSEQ";
+ case -EINPROGRESS: return "-EINPROGRESS";
+ case -EINTR: return "-EINTR";
+ case -EINVAL: return "-EINVAL";
+ case -EIO: return "-EIO";
+ case -EISDIR: return "-EISDIR";
+ case -ELOOP: return "-ELOOP";
+ case -EMFILE: return "-EMFILE";
+ case -EMLINK: return "-EMLINK";
+ case -EMSGSIZE: return "-EMSGSIZE";
+ case -ENAMETOOLONG: return "-ENAMETOOLONG";
+ case -ENFILE: return "-ENFILE";
+ case -ENODATA: return "-ENODATA";
+ case -ENODEV: return "-ENODEV";
+ case -ENOENT: return "-ENOENT";
+ case -ENOEXEC: return "-ENOEXEC";
+ case -ENOLINK: return "-ENOLINK";
+ case -ENOMEM: return "-ENOMEM";
+ case -ENOSPC: return "-ENOSPC";
+ case -ENOTBLK: return "-ENOTBLK";
+ case -ENOTDIR: return "-ENOTDIR";
+ case -ENOTSUPP: return "-ENOTSUPP";
+ case -ENOTTY: return "-ENOTTY";
+ case -ENXIO: return "-ENXIO";
+ case -EOPNOTSUPP: return "-EOPNOTSUPP";
+ case -EOVERFLOW: return "-EOVERFLOW";
+ case -EPERM: return "-EPERM";
+ case -EPIPE: return "-EPIPE";
+ case -EPROTO: return "-EPROTO";
+ case -EPROTONOSUPPORT: return "-EPROTONOSUPPORT";
+ case -ERANGE: return "-ERANGE";
+ case -EROFS: return "-EROFS";
+ case -ESPIPE: return "-ESPIPE";
+ case -ESRCH: return "-ESRCH";
+ case -ETXTBSY: return "-ETXTBSY";
+ case -EUCLEAN: return "-EUCLEAN";
+ case -EXDEV: return "-EXDEV";
+ default:
+ snprintf(buf, sizeof(buf), "%d", err);
+ return buf;
+ }
+}
+
+static inline __u32 get_unaligned_be32(const void *p)
+{
+ __be32 val;
+
+ memcpy(&val, p, sizeof(val));
+ return be32_to_cpu(val);
+}
+
+static inline void put_unaligned_be32(__u32 val, void *p)
+{
+ __be32 be_val = cpu_to_be32(val);
+
+ memcpy(p, &be_val, sizeof(be_val));
+}
+
+#define SHA256_BLOCK_LENGTH 64
+#define Ch(x, y, z) (((x) & (y)) ^ (~(x) & (z)))
+#define Maj(x, y, z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))
+#define Sigma_0(x) (ror32((x), 2) ^ ror32((x), 13) ^ ror32((x), 22))
+#define Sigma_1(x) (ror32((x), 6) ^ ror32((x), 11) ^ ror32((x), 25))
+#define sigma_0(x) (ror32((x), 7) ^ ror32((x), 18) ^ ((x) >> 3))
+#define sigma_1(x) (ror32((x), 17) ^ ror32((x), 19) ^ ((x) >> 10))
+
+static const __u32 sha256_K[64] = {
+ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1,
+ 0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
+ 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786,
+ 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
+ 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147,
+ 0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
+ 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b,
+ 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
+ 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a,
+ 0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
+ 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,
+};
+
+#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) \
+ { \
+ __u32 tmp = h + Sigma_1(e) + Ch(e, f, g) + sha256_K[i] + w[i]; \
+ d += tmp; \
+ h = tmp + Sigma_0(a) + Maj(a, b, c); \
+ }
+
+static void sha256_blocks(__u32 state[8], const __u8 *data, size_t nblocks)
+{
+ while (nblocks--) {
+ __u32 a = state[0];
+ __u32 b = state[1];
+ __u32 c = state[2];
+ __u32 d = state[3];
+ __u32 e = state[4];
+ __u32 f = state[5];
+ __u32 g = state[6];
+ __u32 h = state[7];
+ __u32 w[64];
+ int i;
+
+ for (i = 0; i < 16; i++)
+ w[i] = get_unaligned_be32(&data[4 * i]);
+ for (; i < ARRAY_SIZE(w); i++)
+ w[i] = sigma_1(w[i - 2]) + w[i - 7] +
+ sigma_0(w[i - 15]) + w[i - 16];
+ for (i = 0; i < ARRAY_SIZE(w); i += 8) {
+ SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h);
+ SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g);
+ SHA256_ROUND(i + 2, g, h, a, b, c, d, e, f);
+ SHA256_ROUND(i + 3, f, g, h, a, b, c, d, e);
+ SHA256_ROUND(i + 4, e, f, g, h, a, b, c, d);
+ SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c);
+ SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b);
+ SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a);
+ }
+ state[0] += a;
+ state[1] += b;
+ state[2] += c;
+ state[3] += d;
+ state[4] += e;
+ state[5] += f;
+ state[6] += g;
+ state[7] += h;
+ data += SHA256_BLOCK_LENGTH;
+ }
+}
+
+void libbpf_sha256(const void *data, size_t len, __u8 out[SHA256_DIGEST_LENGTH])
+{
+ __u32 state[8] = { 0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
+ 0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19 };
+ const __be64 bitcount = cpu_to_be64((__u64)len * 8);
+ __u8 final_data[2 * SHA256_BLOCK_LENGTH] = { 0 };
+ size_t final_len = len % SHA256_BLOCK_LENGTH;
+ int i;
+
+ sha256_blocks(state, data, len / SHA256_BLOCK_LENGTH);
+
+ memcpy(final_data, data + len - final_len, final_len);
+ final_data[final_len] = 0x80;
+ final_len = roundup(final_len + 9, SHA256_BLOCK_LENGTH);
+ memcpy(&final_data[final_len - 8], &bitcount, 8);
+
+ sha256_blocks(state, final_data, final_len / SHA256_BLOCK_LENGTH);
+
+ for (i = 0; i < ARRAY_SIZE(state); i++)
+ put_unaligned_be32(state[i], &out[4 * i]);
+}
diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
index e783a47da815..99331e317dee 100644
--- a/tools/lib/bpf/libbpf_version.h
+++ b/tools/lib/bpf/libbpf_version.h
@@ -4,6 +4,6 @@
#define __LIBBPF_VERSION_H
#define LIBBPF_MAJOR_VERSION 1
-#define LIBBPF_MINOR_VERSION 4
+#define LIBBPF_MINOR_VERSION 7
#endif /* __LIBBPF_VERSION_H */
diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
index 16bca56002ab..56ae77047bc3 100644
--- a/tools/lib/bpf/linker.c
+++ b/tools/lib/bpf/linker.c
@@ -4,6 +4,10 @@
*
* Copyright (c) 2021 Facebook
*/
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
+
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
@@ -16,6 +20,7 @@
#include <elf.h>
#include <libelf.h>
#include <fcntl.h>
+#include <sys/mman.h>
#include "libbpf.h"
#include "btf.h"
#include "libbpf_internal.h"
@@ -135,6 +140,7 @@ struct bpf_linker {
int fd;
Elf *elf;
Elf64_Ehdr *elf_hdr;
+ bool swapped_endian;
/* Output sections metadata */
struct dst_sec *secs;
@@ -150,15 +156,19 @@ struct bpf_linker {
/* global (including extern) ELF symbols */
int glob_sym_cnt;
struct glob_sym *glob_syms;
+
+ bool fd_is_owned;
};
#define pr_warn_elf(fmt, ...) \
libbpf_print(LIBBPF_WARN, "libbpf: " fmt ": %s\n", ##__VA_ARGS__, elf_errmsg(-1))
-static int init_output_elf(struct bpf_linker *linker, const char *file);
+static int init_output_elf(struct bpf_linker *linker);
-static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
- const struct bpf_linker_file_opts *opts,
+static int bpf_linker_add_file(struct bpf_linker *linker, int fd,
+ const char *filename);
+
+static int linker_load_obj_file(struct bpf_linker *linker,
struct src_obj *obj);
static int linker_sanity_check_elf(struct src_obj *obj);
static int linker_sanity_check_elf_symtab(struct src_obj *obj, struct src_sec *sec);
@@ -189,7 +199,7 @@ void bpf_linker__free(struct bpf_linker *linker)
if (linker->elf)
elf_end(linker->elf);
- if (linker->fd >= 0)
+ if (linker->fd >= 0 && linker->fd_is_owned)
close(linker->fd);
strset__free(linker->strtab_strs);
@@ -231,9 +241,63 @@ struct bpf_linker *bpf_linker__new(const char *filename, struct bpf_linker_opts
if (!linker)
return errno = ENOMEM, NULL;
- linker->fd = -1;
+ linker->filename = strdup(filename);
+ if (!linker->filename) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ linker->fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC | O_CLOEXEC, 0644);
+ if (linker->fd < 0) {
+ err = -errno;
+ pr_warn("failed to create '%s': %d\n", filename, err);
+ goto err_out;
+ }
+ linker->fd_is_owned = true;
+
+ err = init_output_elf(linker);
+ if (err)
+ goto err_out;
+
+ return linker;
+
+err_out:
+ bpf_linker__free(linker);
+ return errno = -err, NULL;
+}
+
+struct bpf_linker *bpf_linker__new_fd(int fd, struct bpf_linker_opts *opts)
+{
+ struct bpf_linker *linker;
+ char filename[32];
+ int err;
+
+ if (fd < 0)
+ return errno = EINVAL, NULL;
+
+ if (!OPTS_VALID(opts, bpf_linker_opts))
+ return errno = EINVAL, NULL;
+
+ if (elf_version(EV_CURRENT) == EV_NONE) {
+ pr_warn_elf("libelf initialization failed");
+ return errno = EINVAL, NULL;
+ }
- err = init_output_elf(linker, filename);
+ linker = calloc(1, sizeof(*linker));
+ if (!linker)
+ return errno = ENOMEM, NULL;
+
+ snprintf(filename, sizeof(filename), "fd:%d", fd);
+ linker->filename = strdup(filename);
+ if (!linker->filename) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+
+ linker->fd = fd;
+ linker->fd_is_owned = false;
+
+ err = init_output_elf(linker);
if (err)
goto err_out;
@@ -292,23 +356,12 @@ static Elf64_Sym *add_new_sym(struct bpf_linker *linker, size_t *sym_idx)
return sym;
}
-static int init_output_elf(struct bpf_linker *linker, const char *file)
+static int init_output_elf(struct bpf_linker *linker)
{
int err, str_off;
Elf64_Sym *init_sym;
struct dst_sec *sec;
- linker->filename = strdup(file);
- if (!linker->filename)
- return -ENOMEM;
-
- linker->fd = open(file, O_WRONLY | O_CREAT | O_TRUNC | O_CLOEXEC, 0644);
- if (linker->fd < 0) {
- err = -errno;
- pr_warn("failed to create '%s': %d\n", file, err);
- return err;
- }
-
linker->elf = elf_begin(linker->fd, ELF_C_WRITE, NULL);
if (!linker->elf) {
pr_warn_elf("failed to create ELF object");
@@ -324,13 +377,8 @@ static int init_output_elf(struct bpf_linker *linker, const char *file)
linker->elf_hdr->e_machine = EM_BPF;
linker->elf_hdr->e_type = ET_REL;
-#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
- linker->elf_hdr->e_ident[EI_DATA] = ELFDATA2LSB;
-#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- linker->elf_hdr->e_ident[EI_DATA] = ELFDATA2MSB;
-#else
-#error "Unknown __BYTE_ORDER__"
-#endif
+ /* Set unknown ELF endianness, assign later from input files */
+ linker->elf_hdr->e_ident[EI_DATA] = ELFDATANONE;
/* STRTAB */
/* initialize strset with an empty string to conform to ELF */
@@ -396,6 +444,8 @@ static int init_output_elf(struct bpf_linker *linker, const char *file)
pr_warn_elf("failed to create SYMTAB data");
return -EINVAL;
}
+ /* Ensure libelf translates byte-order of symbol records */
+ sec->data->d_type = ELF_T_SYM;
str_off = strset__add_str(linker->strtab_strs, sec->sec_name);
if (str_off < 0)
@@ -437,19 +487,16 @@ static int init_output_elf(struct bpf_linker *linker, const char *file)
return 0;
}
-int bpf_linker__add_file(struct bpf_linker *linker, const char *filename,
- const struct bpf_linker_file_opts *opts)
+static int bpf_linker_add_file(struct bpf_linker *linker, int fd,
+ const char *filename)
{
struct src_obj obj = {};
int err = 0;
- if (!OPTS_VALID(opts, bpf_linker_file_opts))
- return libbpf_err(-EINVAL);
-
- if (!linker->elf)
- return libbpf_err(-EINVAL);
+ obj.filename = filename;
+ obj.fd = fd;
- err = err ?: linker_load_obj_file(linker, filename, opts, &obj);
+ err = err ?: linker_load_obj_file(linker, &obj);
err = err ?: linker_append_sec_data(linker, &obj);
err = err ?: linker_append_elf_syms(linker, &obj);
err = err ?: linker_append_elf_relos(linker, &obj);
@@ -464,12 +511,91 @@ int bpf_linker__add_file(struct bpf_linker *linker, const char *filename,
free(obj.sym_map);
if (obj.elf)
elf_end(obj.elf);
- if (obj.fd >= 0)
- close(obj.fd);
+ return err;
+}
+
+int bpf_linker__add_file(struct bpf_linker *linker, const char *filename,
+ const struct bpf_linker_file_opts *opts)
+{
+ int fd, err;
+
+ if (!OPTS_VALID(opts, bpf_linker_file_opts))
+ return libbpf_err(-EINVAL);
+
+ if (!linker->elf)
+ return libbpf_err(-EINVAL);
+
+ fd = open(filename, O_RDONLY | O_CLOEXEC);
+ if (fd < 0) {
+ err = -errno;
+ pr_warn("failed to open file '%s': %s\n", filename, errstr(err));
+ return libbpf_err(err);
+ }
+
+ err = bpf_linker_add_file(linker, fd, filename);
+ close(fd);
+ return libbpf_err(err);
+}
+
+int bpf_linker__add_fd(struct bpf_linker *linker, int fd,
+ const struct bpf_linker_file_opts *opts)
+{
+ char filename[32];
+ int err;
+
+ if (!OPTS_VALID(opts, bpf_linker_file_opts))
+ return libbpf_err(-EINVAL);
+
+ if (!linker->elf)
+ return libbpf_err(-EINVAL);
+
+ if (fd < 0)
+ return libbpf_err(-EINVAL);
+
+ snprintf(filename, sizeof(filename), "fd:%d", fd);
+ err = bpf_linker_add_file(linker, fd, filename);
return libbpf_err(err);
}
+int bpf_linker__add_buf(struct bpf_linker *linker, void *buf, size_t buf_sz,
+ const struct bpf_linker_file_opts *opts)
+{
+ char filename[32];
+ int fd, written, ret;
+
+ if (!OPTS_VALID(opts, bpf_linker_file_opts))
+ return libbpf_err(-EINVAL);
+
+ if (!linker->elf)
+ return libbpf_err(-EINVAL);
+
+ snprintf(filename, sizeof(filename), "mem:%p+%zu", buf, buf_sz);
+
+ fd = sys_memfd_create(filename, 0);
+ if (fd < 0) {
+ ret = -errno;
+ pr_warn("failed to create memfd '%s': %s\n", filename, errstr(ret));
+ return libbpf_err(ret);
+ }
+
+ written = 0;
+ while (written < buf_sz) {
+ ret = write(fd, buf, buf_sz);
+ if (ret < 0) {
+ ret = -errno;
+ pr_warn("failed to write '%s': %s\n", filename, errstr(ret));
+ goto err_out;
+ }
+ written += ret;
+ }
+
+ ret = bpf_linker_add_file(linker, fd, filename);
+err_out:
+ close(fd);
+ return libbpf_err(ret);
+}
+
static bool is_dwarf_sec_name(const char *name)
{
/* approximation, but the actual list is too long */
@@ -535,65 +661,69 @@ static struct src_sec *add_src_sec(struct src_obj *obj, const char *sec_name)
return sec;
}
-static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
- const struct bpf_linker_file_opts *opts,
+static int linker_load_obj_file(struct bpf_linker *linker,
struct src_obj *obj)
{
-#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
- const int host_endianness = ELFDATA2LSB;
-#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- const int host_endianness = ELFDATA2MSB;
-#else
-#error "Unknown __BYTE_ORDER__"
-#endif
int err = 0;
Elf_Scn *scn;
Elf_Data *data;
Elf64_Ehdr *ehdr;
Elf64_Shdr *shdr;
struct src_sec *sec;
+ unsigned char obj_byteorder;
+ unsigned char link_byteorder = linker->elf_hdr->e_ident[EI_DATA];
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ const unsigned char host_byteorder = ELFDATA2LSB;
+#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+ const unsigned char host_byteorder = ELFDATA2MSB;
+#else
+#error "Unknown __BYTE_ORDER__"
+#endif
- pr_debug("linker: adding object file '%s'...\n", filename);
-
- obj->filename = filename;
+ pr_debug("linker: adding object file '%s'...\n", obj->filename);
- obj->fd = open(filename, O_RDONLY | O_CLOEXEC);
- if (obj->fd < 0) {
- err = -errno;
- pr_warn("failed to open file '%s': %d\n", filename, err);
- return err;
- }
obj->elf = elf_begin(obj->fd, ELF_C_READ_MMAP, NULL);
if (!obj->elf) {
- err = -errno;
- pr_warn_elf("failed to parse ELF file '%s'", filename);
- return err;
+ pr_warn_elf("failed to parse ELF file '%s'", obj->filename);
+ return -EINVAL;
}
/* Sanity check ELF file high-level properties */
ehdr = elf64_getehdr(obj->elf);
if (!ehdr) {
- err = -errno;
- pr_warn_elf("failed to get ELF header for %s", filename);
+ pr_warn_elf("failed to get ELF header for %s", obj->filename);
+ return -EINVAL;
+ }
+
+ /* Linker output endianness set by first input object */
+ obj_byteorder = ehdr->e_ident[EI_DATA];
+ if (obj_byteorder != ELFDATA2LSB && obj_byteorder != ELFDATA2MSB) {
+ err = -EOPNOTSUPP;
+ pr_warn("unknown byte order of ELF file %s\n", obj->filename);
return err;
}
- if (ehdr->e_ident[EI_DATA] != host_endianness) {
+ if (link_byteorder == ELFDATANONE) {
+ linker->elf_hdr->e_ident[EI_DATA] = obj_byteorder;
+ linker->swapped_endian = obj_byteorder != host_byteorder;
+ pr_debug("linker: set %s-endian output byte order\n",
+ obj_byteorder == ELFDATA2MSB ? "big" : "little");
+ } else if (link_byteorder != obj_byteorder) {
err = -EOPNOTSUPP;
- pr_warn_elf("unsupported byte order of ELF file %s", filename);
+ pr_warn("byte order mismatch with ELF file %s\n", obj->filename);
return err;
}
+
if (ehdr->e_type != ET_REL
|| ehdr->e_machine != EM_BPF
|| ehdr->e_ident[EI_CLASS] != ELFCLASS64) {
err = -EOPNOTSUPP;
- pr_warn_elf("unsupported kind of ELF file %s", filename);
+ pr_warn_elf("unsupported kind of ELF file %s", obj->filename);
return err;
}
if (elf_getshdrstrndx(obj->elf, &obj->shstrs_sec_idx)) {
- err = -errno;
- pr_warn_elf("failed to get SHSTRTAB section index for %s", filename);
- return err;
+ pr_warn_elf("failed to get SHSTRTAB section index for %s", obj->filename);
+ return -EINVAL;
}
scn = NULL;
@@ -603,26 +733,23 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
shdr = elf64_getshdr(scn);
if (!shdr) {
- err = -errno;
pr_warn_elf("failed to get section #%zu header for %s",
- sec_idx, filename);
- return err;
+ sec_idx, obj->filename);
+ return -EINVAL;
}
sec_name = elf_strptr(obj->elf, obj->shstrs_sec_idx, shdr->sh_name);
if (!sec_name) {
- err = -errno;
pr_warn_elf("failed to get section #%zu name for %s",
- sec_idx, filename);
- return err;
+ sec_idx, obj->filename);
+ return -EINVAL;
}
data = elf_getdata(scn, 0);
if (!data) {
- err = -errno;
pr_warn_elf("failed to get section #%zu (%s) data from %s",
- sec_idx, sec_name, filename);
- return err;
+ sec_idx, sec_name, obj->filename);
+ return -EINVAL;
}
sec = add_src_sec(obj, sec_name);
@@ -656,7 +783,8 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
obj->btf = btf__new(data->d_buf, shdr->sh_size);
err = libbpf_get_error(obj->btf);
if (err) {
- pr_warn("failed to parse .BTF from %s: %d\n", filename, err);
+ pr_warn("failed to parse .BTF from %s: %s\n",
+ obj->filename, errstr(err));
return err;
}
sec->skipped = true;
@@ -666,7 +794,8 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
obj->btf_ext = btf_ext__new(data->d_buf, shdr->sh_size);
err = libbpf_get_error(obj->btf_ext);
if (err) {
- pr_warn("failed to parse .BTF.ext from '%s': %d\n", filename, err);
+ pr_warn("failed to parse .BTF.ext from '%s': %s\n",
+ obj->filename, errstr(err));
return err;
}
sec->skipped = true;
@@ -683,7 +812,7 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
break;
default:
pr_warn("unrecognized section #%zu (%s) in %s\n",
- sec_idx, sec_name, filename);
+ sec_idx, sec_name, obj->filename);
err = -EINVAL;
return err;
}
@@ -957,19 +1086,33 @@ static int check_btf_str_off(__u32 *str_off, void *ctx)
static int linker_sanity_check_btf(struct src_obj *obj)
{
struct btf_type *t;
- int i, n, err = 0;
+ int i, n, err;
if (!obj->btf)
return 0;
n = btf__type_cnt(obj->btf);
for (i = 1; i < n; i++) {
+ struct btf_field_iter it;
+ __u32 *type_id, *str_off;
+
t = btf_type_by_id(obj->btf, i);
- err = err ?: btf_type_visit_type_ids(t, check_btf_type_id, obj->btf);
- err = err ?: btf_type_visit_str_offs(t, check_btf_str_off, obj->btf);
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_IDS);
if (err)
return err;
+ while ((type_id = btf_field_iter_next(&it))) {
+ if (*type_id >= n)
+ return -EINVAL;
+ }
+
+ err = btf_field_iter_init(&it, t, BTF_FIELD_ITER_STRS);
+ if (err)
+ return err;
+ while ((str_off = btf_field_iter_next(&it))) {
+ if (!btf__str_by_offset(obj->btf, *str_off))
+ return -EINVAL;
+ }
}
return 0;
@@ -1095,6 +1238,24 @@ static bool sec_content_is_same(struct dst_sec *dst_sec, struct src_sec *src_sec
return true;
}
+static bool is_exec_sec(struct dst_sec *sec)
+{
+ if (!sec || sec->ephemeral)
+ return false;
+ return (sec->shdr->sh_type == SHT_PROGBITS) &&
+ (sec->shdr->sh_flags & SHF_EXECINSTR);
+}
+
+static void exec_sec_bswap(void *raw_data, int size)
+{
+ const int insn_cnt = size / sizeof(struct bpf_insn);
+ struct bpf_insn *insn = raw_data;
+ int i;
+
+ for (i = 0; i < insn_cnt; i++, insn++)
+ bpf_insn_bswap(insn);
+}
+
static int extend_sec(struct bpf_linker *linker, struct dst_sec *dst, struct src_sec *src)
{
void *tmp;
@@ -1154,6 +1315,10 @@ static int extend_sec(struct bpf_linker *linker, struct dst_sec *dst, struct src
memset(dst->raw_data + dst->sec_sz, 0, dst_align_sz - dst->sec_sz);
/* now copy src data at a properly aligned offset */
memcpy(dst->raw_data + dst_align_sz, src->data->d_buf, src->shdr->sh_size);
+
+ /* convert added bpf insns to native byte-order */
+ if (linker->swapped_endian && is_exec_sec(dst))
+ exec_sec_bswap(dst->raw_data + dst_align_sz, src->shdr->sh_size);
}
dst->sec_sz = dst_final_sz;
@@ -1210,7 +1375,7 @@ static int linker_append_sec_data(struct bpf_linker *linker, struct src_obj *obj
} else {
if (!secs_match(dst_sec, src_sec)) {
pr_warn("ELF sections %s are incompatible\n", src_sec->sec_name);
- return -1;
+ return -EINVAL;
}
/* "license" and "version" sections are deduped */
@@ -1399,7 +1564,7 @@ recur:
return true;
case BTF_KIND_PTR:
/* just validate overall shape of the referenced type, so no
- * contents comparison for struct/union, and allowd fwd vs
+ * contents comparison for struct/union, and allowed fwd vs
* struct/union
*/
exact = false;
@@ -1948,7 +2113,7 @@ static int linker_append_elf_sym(struct bpf_linker *linker, struct src_obj *obj,
/* If existing symbol is a strong resolved symbol, bail out,
* because we lost resolution battle have nothing to
- * contribute. We already checked abover that there is no
+ * contribute. We already checked above that there is no
* strong-strong conflict. We also already tightened binding
* and visibility, so nothing else to contribute at that point.
*/
@@ -1997,7 +2162,7 @@ add_sym:
obj->sym_map[src_sym_idx] = dst_sym_idx;
- if (sym_type == STT_SECTION && dst_sym) {
+ if (sym_type == STT_SECTION && dst_sec) {
dst_sec->sec_sym_idx = dst_sym_idx;
dst_sym->st_value = 0;
}
@@ -2057,7 +2222,7 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob
}
} else if (!secs_match(dst_sec, src_sec)) {
pr_warn("sections %s are not compatible\n", src_sec->sec_name);
- return -1;
+ return -EINVAL;
}
/* shdr->sh_link points to SYMTAB */
@@ -2213,10 +2378,17 @@ static int linker_fixup_btf(struct src_obj *obj)
vi = btf_var_secinfos(t);
for (j = 0, m = btf_vlen(t); j < m; j++, vi++) {
const struct btf_type *vt = btf__type_by_id(obj->btf, vi->type);
- const char *var_name = btf__str_by_offset(obj->btf, vt->name_off);
- int var_linkage = btf_var(vt)->linkage;
+ const char *var_name;
+ int var_linkage;
Elf64_Sym *sym;
+ /* could be a variable or function */
+ if (!btf_is_var(vt))
+ continue;
+
+ var_name = btf__str_by_offset(obj->btf, vt->name_off);
+ var_linkage = btf_var(vt)->linkage;
+
/* no need to patch up static or extern vars */
if (var_linkage != BTF_VAR_GLOBAL_ALLOCATED)
continue;
@@ -2234,26 +2406,10 @@ static int linker_fixup_btf(struct src_obj *obj)
return 0;
}
-static int remap_type_id(__u32 *type_id, void *ctx)
-{
- int *id_map = ctx;
- int new_id = id_map[*type_id];
-
- /* Error out if the type wasn't remapped. Ignore VOID which stays VOID. */
- if (new_id == 0 && *type_id != 0) {
- pr_warn("failed to find new ID mapping for original BTF type ID %u\n", *type_id);
- return -EINVAL;
- }
-
- *type_id = id_map[*type_id];
-
- return 0;
-}
-
static int linker_append_btf(struct bpf_linker *linker, struct src_obj *obj)
{
const struct btf_type *t;
- int i, j, n, start_id, id;
+ int i, j, n, start_id, id, err;
const char *name;
if (!obj->btf)
@@ -2324,9 +2480,25 @@ static int linker_append_btf(struct bpf_linker *linker, struct src_obj *obj)
n = btf__type_cnt(linker->btf);
for (i = start_id; i < n; i++) {
struct btf_type *dst_t = btf_type_by_id(linker->btf, i);
+ struct btf_field_iter it;
+ __u32 *type_id;
- if (btf_type_visit_type_ids(dst_t, remap_type_id, obj->btf_type_map))
- return -EINVAL;
+ err = btf_field_iter_init(&it, dst_t, BTF_FIELD_ITER_IDS);
+ if (err)
+ return err;
+
+ while ((type_id = btf_field_iter_next(&it))) {
+ int new_id = obj->btf_type_map[*type_id];
+
+ /* Error out if the type wasn't remapped. Ignore VOID which stays VOID. */
+ if (new_id == 0 && *type_id != 0) {
+ pr_warn("failed to find new ID mapping for original BTF type ID %u\n",
+ *type_id);
+ return -EINVAL;
+ }
+
+ *type_id = obj->btf_type_map[*type_id];
+ }
}
/* Rewrite VAR/FUNC underlying types (i.e., FUNC's FUNC_PROTO and VAR's
@@ -2394,6 +2566,10 @@ static int linker_append_btf(struct bpf_linker *linker, struct src_obj *obj)
if (glob_sym && glob_sym->var_idx >= 0) {
__s64 sz;
+ /* FUNCs don't have size, nothing to update */
+ if (btf_is_func(t))
+ continue;
+
dst_var = &dst_sec->sec_vars[glob_sym->var_idx];
/* Because underlying BTF type might have
* changed, so might its size have changed, so
@@ -2607,27 +2783,32 @@ int bpf_linker__finalize(struct bpf_linker *linker)
if (!sec->scn)
continue;
+ /* restore sections with bpf insns to target byte-order */
+ if (linker->swapped_endian && is_exec_sec(sec))
+ exec_sec_bswap(sec->raw_data, sec->sec_sz);
+
sec->data->d_buf = sec->raw_data;
}
/* Finalize ELF layout */
if (elf_update(linker->elf, ELF_C_NULL) < 0) {
- err = -errno;
+ err = -EINVAL;
pr_warn_elf("failed to finalize ELF layout");
return libbpf_err(err);
}
/* Write out final ELF contents */
if (elf_update(linker->elf, ELF_C_WRITE) < 0) {
- err = -errno;
+ err = -EINVAL;
pr_warn_elf("failed to write ELF contents");
return libbpf_err(err);
}
elf_end(linker->elf);
- close(linker->fd);
-
linker->elf = NULL;
+
+ if (linker->fd_is_owned)
+ close(linker->fd);
linker->fd = -1;
return 0;
@@ -2675,6 +2856,7 @@ static int emit_elf_data_sec(struct bpf_linker *linker, const char *sec_name,
static int finalize_btf(struct bpf_linker *linker)
{
+ enum btf_endianness link_endianness;
LIBBPF_OPTS(btf_dedup_opts, opts);
struct btf *btf = linker->btf;
const void *raw_data;
@@ -2708,17 +2890,24 @@ static int finalize_btf(struct bpf_linker *linker)
err = finalize_btf_ext(linker);
if (err) {
- pr_warn(".BTF.ext generation failed: %d\n", err);
+ pr_warn(".BTF.ext generation failed: %s\n", errstr(err));
return err;
}
opts.btf_ext = linker->btf_ext;
err = btf__dedup(linker->btf, &opts);
if (err) {
- pr_warn("BTF dedup failed: %d\n", err);
+ pr_warn("BTF dedup failed: %s\n", errstr(err));
return err;
}
+ /* Set .BTF and .BTF.ext output byte order */
+ link_endianness = linker->elf_hdr->e_ident[EI_DATA] == ELFDATA2MSB ?
+ BTF_BIG_ENDIAN : BTF_LITTLE_ENDIAN;
+ btf__set_endianness(linker->btf, link_endianness);
+ if (linker->btf_ext)
+ btf_ext__set_endianness(linker->btf_ext, link_endianness);
+
/* Emit .BTF section */
raw_data = btf__raw_data(linker->btf, &raw_sz);
if (!raw_data)
@@ -2726,19 +2915,19 @@ static int finalize_btf(struct bpf_linker *linker)
err = emit_elf_data_sec(linker, BTF_ELF_SEC, 8, raw_data, raw_sz);
if (err) {
- pr_warn("failed to write out .BTF ELF section: %d\n", err);
+ pr_warn("failed to write out .BTF ELF section: %s\n", errstr(err));
return err;
}
/* Emit .BTF.ext section */
if (linker->btf_ext) {
- raw_data = btf_ext__get_raw_data(linker->btf_ext, &raw_sz);
+ raw_data = btf_ext__raw_data(linker->btf_ext, &raw_sz);
if (!raw_data)
return -ENOMEM;
err = emit_elf_data_sec(linker, BTF_EXT_ELF_SEC, 8, raw_data, raw_sz);
if (err) {
- pr_warn("failed to write out .BTF.ext ELF section: %d\n", err);
+ pr_warn("failed to write out .BTF.ext ELF section: %s\n", errstr(err));
return err;
}
}
@@ -2914,7 +3103,7 @@ static int finalize_btf_ext(struct bpf_linker *linker)
err = libbpf_get_error(linker->btf_ext);
if (err) {
linker->btf_ext = NULL;
- pr_warn("failed to parse final .BTF.ext data: %d\n", err);
+ pr_warn("failed to parse final .BTF.ext data: %s\n", errstr(err));
goto out;
}
diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
index 090bcf6e3b3d..c997e69d507f 100644
--- a/tools/lib/bpf/netlink.c
+++ b/tools/lib/bpf/netlink.c
@@ -496,8 +496,8 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts)
if (err)
return libbpf_err(err);
- opts->feature_flags = md.flags;
- opts->xdp_zc_max_segs = md.xdp_zc_max_segs;
+ OPTS_SET(opts, feature_flags, md.flags);
+ OPTS_SET(opts, xdp_zc_max_segs, md.xdp_zc_max_segs);
skip_feature_flags:
return 0;
@@ -529,9 +529,9 @@ int bpf_xdp_query_id(int ifindex, int flags, __u32 *prog_id)
}
-typedef int (*qdisc_config_t)(struct libbpf_nla_req *req);
+typedef int (*qdisc_config_t)(struct libbpf_nla_req *req, const struct bpf_tc_hook *hook);
-static int clsact_config(struct libbpf_nla_req *req)
+static int clsact_config(struct libbpf_nla_req *req, const struct bpf_tc_hook *hook)
{
req->tc.tcm_parent = TC_H_CLSACT;
req->tc.tcm_handle = TC_H_MAKE(TC_H_CLSACT, 0);
@@ -539,6 +539,16 @@ static int clsact_config(struct libbpf_nla_req *req)
return nlattr_add(req, TCA_KIND, "clsact", sizeof("clsact"));
}
+static int qdisc_config(struct libbpf_nla_req *req, const struct bpf_tc_hook *hook)
+{
+ const char *qdisc = OPTS_GET(hook, qdisc, NULL);
+
+ req->tc.tcm_parent = OPTS_GET(hook, parent, TC_H_ROOT);
+ req->tc.tcm_handle = OPTS_GET(hook, handle, 0);
+
+ return nlattr_add(req, TCA_KIND, qdisc, strlen(qdisc) + 1);
+}
+
static int attach_point_to_config(struct bpf_tc_hook *hook,
qdisc_config_t *config)
{
@@ -552,6 +562,9 @@ static int attach_point_to_config(struct bpf_tc_hook *hook,
return 0;
case BPF_TC_CUSTOM:
return -EOPNOTSUPP;
+ case BPF_TC_QDISC:
+ *config = &qdisc_config;
+ return 0;
default:
return -EINVAL;
}
@@ -596,7 +609,7 @@ static int tc_qdisc_modify(struct bpf_tc_hook *hook, int cmd, int flags)
req.tc.tcm_family = AF_UNSPEC;
req.tc.tcm_ifindex = OPTS_GET(hook, ifindex, 0);
- ret = config(&req);
+ ret = config(&req, hook);
if (ret < 0)
return ret;
@@ -639,6 +652,7 @@ int bpf_tc_hook_destroy(struct bpf_tc_hook *hook)
case BPF_TC_INGRESS:
case BPF_TC_EGRESS:
return libbpf_err(__bpf_tc_detach(hook, NULL, true));
+ case BPF_TC_QDISC:
case BPF_TC_INGRESS | BPF_TC_EGRESS:
return libbpf_err(tc_qdisc_delete(hook));
case BPF_TC_CUSTOM:
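The new BPF_TC_QDISC attach point above lets a caller create and delete an arbitrary qdisc by name instead of only clsact. A minimal usage sketch follows; it assumes the 'qdisc', 'parent' and 'handle' values read via OPTS_GET() in qdisc_config() are exposed as fields of struct bpf_tc_hook, and the device name is made up:

#include <net/if.h>
#include <linux/pkt_sched.h>	/* TC_H_ROOT */
#include <bpf/libbpf.h>

static int create_and_remove_fq(void)
{
	LIBBPF_OPTS(bpf_tc_hook, hook,
		    .ifindex = if_nametoindex("eth0"),	/* hypothetical device */
		    .attach_point = BPF_TC_QDISC,
		    .parent = TC_H_ROOT,
		    .qdisc = "fq");
	int err;

	err = bpf_tc_hook_create(&hook);	/* RTM_NEWQDISC built by qdisc_config() */
	if (err)
		return err;
	return bpf_tc_hook_destroy(&hook);	/* RTM_DELQDISC via tc_qdisc_delete() */
}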
diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c
index 975e265eab3b..06663f9ea581 100644
--- a/tools/lib/bpf/nlattr.c
+++ b/tools/lib/bpf/nlattr.c
@@ -63,16 +63,16 @@ static int validate_nla(struct nlattr *nla, int maxtype,
minlen = nla_attr_minlen[pt->type];
if (libbpf_nla_len(nla) < minlen)
- return -1;
+ return -EINVAL;
if (pt->maxlen && libbpf_nla_len(nla) > pt->maxlen)
- return -1;
+ return -EINVAL;
if (pt->type == LIBBPF_NLA_STRING) {
char *data = libbpf_nla_data(nla);
if (data[libbpf_nla_len(nla) - 1] != '\0')
- return -1;
+ return -EINVAL;
}
return 0;
@@ -118,19 +118,18 @@ int libbpf_nla_parse(struct nlattr *tb[], int maxtype, struct nlattr *head,
if (policy) {
err = validate_nla(nla, maxtype, policy);
if (err < 0)
- goto errout;
+ return err;
}
- if (tb[type])
+ if (tb[type]) {
pr_warn("Attribute of type %#x found multiple times in message, "
"previous attribute is being ignored.\n", type);
+ }
tb[type] = nla;
}
- err = 0;
-errout:
- return err;
+ return 0;
}
/**
diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index 63a4d5ad12d1..6eea5edba58a 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -64,7 +64,6 @@ enum libbpf_print_level {
#include "libbpf.h"
#include "bpf.h"
#include "btf.h"
-#include "str_error.h"
#include "libbpf_internal.h"
#endif
@@ -683,7 +682,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
{
const struct bpf_core_accessor *acc;
const struct btf_type *t;
- __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id;
+ __u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id, elem_id;
const struct btf_member *m;
const struct btf_type *mt;
bool bitfield;
@@ -706,8 +705,14 @@ static int bpf_core_calc_field_relo(const char *prog_name,
if (!acc->name) {
if (relo->kind == BPF_CORE_FIELD_BYTE_OFFSET) {
*val = spec->bit_offset / 8;
- /* remember field size for load/store mem size */
- sz = btf__resolve_size(spec->btf, acc->type_id);
+ /* remember field size for load/store mem size;
+ * note, for arrays we care about individual element
+ * sizes, not the overall array size
+ */
+ t = skip_mods_and_typedefs(spec->btf, acc->type_id, &elem_id);
+ while (btf_is_array(t))
+ t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
+ sz = btf__resolve_size(spec->btf, elem_id);
if (sz < 0)
return -EINVAL;
*field_sz = sz;
@@ -767,7 +772,17 @@ static int bpf_core_calc_field_relo(const char *prog_name,
case BPF_CORE_FIELD_BYTE_OFFSET:
*val = byte_off;
if (!bitfield) {
- *field_sz = byte_sz;
+ /* remember field size for load/store mem size;
+ * note, for arrays we care about individual element
+ * sizes, not the overall array size
+ */
+ t = skip_mods_and_typedefs(spec->btf, field_type_id, &elem_id);
+ while (btf_is_array(t))
+ t = skip_mods_and_typedefs(spec->btf, btf_array(t)->type, &elem_id);
+ sz = btf__resolve_size(spec->btf, elem_id);
+ if (sz < 0)
+ return -EINVAL;
+ *field_sz = sz;
*type_id = field_type_id;
}
break;
@@ -1339,7 +1354,7 @@ int bpf_core_calc_relo_insn(const char *prog_name,
cands->cands[i].id, cand_spec);
if (err < 0) {
bpf_core_format_spec(spec_buf, sizeof(spec_buf), cand_spec);
- pr_warn("prog '%s': relo #%d: error matching candidate #%d %s: %d\n ",
+ pr_warn("prog '%s': relo #%d: error matching candidate #%d %s: %d\n",
prog_name, relo_idx, i, spec_buf, err);
return err;
}
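The two hunks above make BPF_CORE_FIELD_BYTE_OFFSET relocations record the size of a single array element rather than the size of the whole array, so the patched load/store uses the correct memory width. A small sketch of the access pattern this affects, using a made-up CO-RE-relocatable type:

/* Hypothetical type; preserve_access_index turns element accesses into
 * CO-RE field relocations at load time.
 */
struct pkt_stats {
	unsigned long long per_queue[8];
} __attribute__((preserve_access_index));

static inline unsigned long long third_queue(struct pkt_stats *st)
{
	/* The byte-offset relocation for this access must report a field size
	 * of 8 bytes (one element), not 64 bytes (the whole array).
	 */
	return st->per_queue[3];
}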
diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index aacb64278a01..00ec4837a06d 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -88,8 +88,8 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
err = bpf_map_get_info_by_fd(map_fd, &info, &len);
if (err) {
err = -errno;
- pr_warn("ringbuf: failed to get map info for fd=%d: %d\n",
- map_fd, err);
+ pr_warn("ringbuf: failed to get map info for fd=%d: %s\n",
+ map_fd, errstr(err));
return libbpf_err(err);
}
@@ -123,8 +123,8 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
tmp = mmap(NULL, rb->page_size, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);
if (tmp == MAP_FAILED) {
err = -errno;
- pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
- map_fd, err);
+ pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %s\n",
+ map_fd, errstr(err));
goto err_out;
}
r->consumer_pos = tmp;
@@ -142,8 +142,8 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
tmp = mmap(NULL, (size_t)mmap_sz, PROT_READ, MAP_SHARED, map_fd, rb->page_size);
if (tmp == MAP_FAILED) {
err = -errno;
- pr_warn("ringbuf: failed to mmap data pages for map fd=%d: %d\n",
- map_fd, err);
+ pr_warn("ringbuf: failed to mmap data pages for map fd=%d: %s\n",
+ map_fd, errstr(err));
goto err_out;
}
r->producer_pos = tmp;
@@ -156,8 +156,8 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
e->data.fd = rb->ring_cnt;
if (epoll_ctl(rb->epoll_fd, EPOLL_CTL_ADD, map_fd, e) < 0) {
err = -errno;
- pr_warn("ringbuf: failed to epoll add map fd=%d: %d\n",
- map_fd, err);
+ pr_warn("ringbuf: failed to epoll add map fd=%d: %s\n",
+ map_fd, errstr(err));
goto err_out;
}
@@ -205,7 +205,7 @@ ring_buffer__new(int map_fd, ring_buffer_sample_fn sample_cb, void *ctx,
rb->epoll_fd = epoll_create1(EPOLL_CLOEXEC);
if (rb->epoll_fd < 0) {
err = -errno;
- pr_warn("ringbuf: failed to create epoll instance: %d\n", err);
+ pr_warn("ringbuf: failed to create epoll instance: %s\n", errstr(err));
goto err_out;
}
@@ -231,7 +231,7 @@ static inline int roundup_len(__u32 len)
return (len + 7) / 8 * 8;
}
-static int64_t ringbuf_process_ring(struct ring *r)
+static int64_t ringbuf_process_ring(struct ring *r, size_t n)
{
int *len_ptr, len, err;
/* 64-bit to avoid overflow in case of extreme application behavior */
@@ -268,12 +268,42 @@ static int64_t ringbuf_process_ring(struct ring *r)
}
smp_store_release(r->consumer_pos, cons_pos);
+
+ if (cnt >= n)
+ goto done;
}
} while (got_new_data);
done:
return cnt;
}
+/* Consume available ring buffer(s) data without event polling, up to n
+ * records.
+ *
+ * Returns number of records consumed across all registered ring buffers (or
+ * n, whichever is less), or negative number if any of the callbacks return
+ * error.
+ */
+int ring_buffer__consume_n(struct ring_buffer *rb, size_t n)
+{
+ int64_t err, res = 0;
+ int i;
+
+ for (i = 0; i < rb->ring_cnt; i++) {
+ struct ring *ring = rb->rings[i];
+
+ err = ringbuf_process_ring(ring, n);
+ if (err < 0)
+ return libbpf_err(err);
+ res += err;
+ n -= err;
+
+ if (n == 0)
+ break;
+ }
+ return res > INT_MAX ? INT_MAX : res;
+}
+
/* Consume available ring buffer(s) data without event polling.
* Returns number of records consumed across all registered ring buffers (or
* INT_MAX, whichever is less), or negative number if any of the callbacks
@@ -287,13 +317,15 @@ int ring_buffer__consume(struct ring_buffer *rb)
for (i = 0; i < rb->ring_cnt; i++) {
struct ring *ring = rb->rings[i];
- err = ringbuf_process_ring(ring);
+ err = ringbuf_process_ring(ring, INT_MAX);
if (err < 0)
return libbpf_err(err);
res += err;
+ if (res > INT_MAX) {
+ res = INT_MAX;
+ break;
+ }
}
- if (res > INT_MAX)
- return INT_MAX;
return res;
}
@@ -314,13 +346,13 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
__u32 ring_id = rb->events[i].data.fd;
struct ring *ring = rb->rings[ring_id];
- err = ringbuf_process_ring(ring);
+ err = ringbuf_process_ring(ring, INT_MAX);
if (err < 0)
return libbpf_err(err);
res += err;
}
if (res > INT_MAX)
- return INT_MAX;
+ res = INT_MAX;
return res;
}
@@ -371,17 +403,22 @@ int ring__map_fd(const struct ring *r)
return r->map_fd;
}
-int ring__consume(struct ring *r)
+int ring__consume_n(struct ring *r, size_t n)
{
int64_t res;
- res = ringbuf_process_ring(r);
+ res = ringbuf_process_ring(r, n);
if (res < 0)
return libbpf_err(res);
return res > INT_MAX ? INT_MAX : res;
}
+int ring__consume(struct ring *r)
+{
+ return ring__consume_n(r, INT_MAX);
+}
+
static void user_ringbuf_unmap_ring(struct user_ring_buffer *rb)
{
if (rb->consumer_pos) {
@@ -421,7 +458,8 @@ static int user_ringbuf_map(struct user_ring_buffer *rb, int map_fd)
err = bpf_map_get_info_by_fd(map_fd, &info, &len);
if (err) {
err = -errno;
- pr_warn("user ringbuf: failed to get map info for fd=%d: %d\n", map_fd, err);
+ pr_warn("user ringbuf: failed to get map info for fd=%d: %s\n",
+ map_fd, errstr(err));
return err;
}
@@ -437,8 +475,8 @@ static int user_ringbuf_map(struct user_ring_buffer *rb, int map_fd)
tmp = mmap(NULL, rb->page_size, PROT_READ, MAP_SHARED, map_fd, 0);
if (tmp == MAP_FAILED) {
err = -errno;
- pr_warn("user ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
- map_fd, err);
+ pr_warn("user ringbuf: failed to mmap consumer page for map fd=%d: %s\n",
+ map_fd, errstr(err));
return err;
}
rb->consumer_pos = tmp;
@@ -457,8 +495,8 @@ static int user_ringbuf_map(struct user_ring_buffer *rb, int map_fd)
map_fd, rb->page_size);
if (tmp == MAP_FAILED) {
err = -errno;
- pr_warn("user ringbuf: failed to mmap data pages for map fd=%d: %d\n",
- map_fd, err);
+ pr_warn("user ringbuf: failed to mmap data pages for map fd=%d: %s\n",
+ map_fd, errstr(err));
return err;
}
@@ -469,7 +507,7 @@ static int user_ringbuf_map(struct user_ring_buffer *rb, int map_fd)
rb_epoll->events = EPOLLOUT;
if (epoll_ctl(rb->epoll_fd, EPOLL_CTL_ADD, map_fd, rb_epoll) < 0) {
err = -errno;
- pr_warn("user ringbuf: failed to epoll add map fd=%d: %d\n", map_fd, err);
+ pr_warn("user ringbuf: failed to epoll add map fd=%d: %s\n", map_fd, errstr(err));
return err;
}
@@ -494,7 +532,7 @@ user_ring_buffer__new(int map_fd, const struct user_ring_buffer_opts *opts)
rb->epoll_fd = epoll_create1(EPOLL_CLOEXEC);
if (rb->epoll_fd < 0) {
err = -errno;
- pr_warn("user ringbuf: failed to create epoll instance: %d\n", err);
+ pr_warn("user ringbuf: failed to create epoll instance: %s\n", errstr(err));
goto err_out;
}
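ring_buffer__consume_n() above bounds how much work a single call can do, which keeps a busy producer from starving the rest of an event loop. A usage sketch, assuming the ring buffer map fd and exit flag come from the surrounding application:

#include <bpf/libbpf.h>

static int handle_event(void *ctx, void *data, size_t len)
{
	/* process one record; returning a negative value aborts consumption */
	return 0;
}

static void drain_loop(int ringbuf_map_fd, volatile int *exiting)
{
	struct ring_buffer *rb = ring_buffer__new(ringbuf_map_fd, handle_event, NULL, NULL);

	if (!rb)
		return;
	while (!*exiting) {
		/* drain at most 64 records, then let other work run */
		int n = ring_buffer__consume_n(rb, 64);

		if (n < 0)
			break;
	}
	ring_buffer__free(rb);
}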
diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
index 1e82ab06c3eb..6a8f5c7a02eb 100644
--- a/tools/lib/bpf/skel_internal.h
+++ b/tools/lib/bpf/skel_internal.h
@@ -13,10 +13,15 @@
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/mman.h>
+#include <linux/keyctl.h>
#include <stdlib.h>
#include "bpf.h"
#endif
+#ifndef SHA256_DIGEST_LENGTH
+#define SHA256_DIGEST_LENGTH 32
+#endif
+
#ifndef __NR_bpf
# if defined(__mips__) && defined(_ABIO32)
# define __NR_bpf 4355
@@ -64,6 +69,11 @@ struct bpf_load_and_run_opts {
__u32 data_sz;
__u32 insns_sz;
const char *errstr;
+ void *signature;
+ __u32 signature_sz;
+ __s32 keyring_id;
+ void *excl_prog_hash;
+ __u32 excl_prog_hash_sz;
};
long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
@@ -107,7 +117,7 @@ static inline void skel_free(const void *p)
* The loader program will perform probe_read_kernel() from maps.rodata.initial_value.
* skel_finalize_map_data() sets skel->rodata to point to actual value in a bpf map and
* does maps.rodata.initial_value = ~0ULL to signal skel_free_map_data() that kvfree
- * is not nessary.
+ * is not necessary.
*
* For user space:
* skel_prep_map_data() mmaps anon memory into skel->rodata that can be accessed directly.
@@ -220,14 +230,19 @@ static inline int skel_map_create(enum bpf_map_type map_type,
const char *map_name,
__u32 key_size,
__u32 value_size,
- __u32 max_entries)
+ __u32 max_entries,
+ const void *excl_prog_hash,
+ __u32 excl_prog_hash_sz)
{
- const size_t attr_sz = offsetofend(union bpf_attr, map_extra);
+ const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash_size);
union bpf_attr attr;
memset(&attr, 0, attr_sz);
attr.map_type = map_type;
+ attr.excl_prog_hash = (unsigned long) excl_prog_hash;
+ attr.excl_prog_hash_size = excl_prog_hash_sz;
+
strncpy(attr.map_name, map_name, sizeof(attr.map_name));
attr.key_size = key_size;
attr.value_size = value_size;
@@ -300,6 +315,35 @@ static inline int skel_link_create(int prog_fd, int target_fd,
return skel_sys_bpf(BPF_LINK_CREATE, &attr, attr_sz);
}
+static inline int skel_obj_get_info_by_fd(int fd)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, info);
+ __u8 sha[SHA256_DIGEST_LENGTH];
+ struct bpf_map_info info;
+ __u32 info_len = sizeof(info);
+ union bpf_attr attr;
+
+ memset(&info, 0, sizeof(info));
+ info.hash = (long) &sha;
+ info.hash_size = SHA256_DIGEST_LENGTH;
+
+ memset(&attr, 0, attr_sz);
+ attr.info.bpf_fd = fd;
+ attr.info.info = (long) &info;
+ attr.info.info_len = info_len;
+ return skel_sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, attr_sz);
+}
+
+static inline int skel_map_freeze(int fd)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, map_fd);
+ union bpf_attr attr;
+
+ memset(&attr, 0, attr_sz);
+ attr.map_fd = fd;
+
+ return skel_sys_bpf(BPF_MAP_FREEZE, &attr, attr_sz);
+}
#ifdef __KERNEL__
#define set_err
#else
@@ -308,12 +352,13 @@ static inline int skel_link_create(int prog_fd, int target_fd,
static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
{
- const size_t prog_load_attr_sz = offsetofend(union bpf_attr, fd_array);
+ const size_t prog_load_attr_sz = offsetofend(union bpf_attr, keyring_id);
const size_t test_run_attr_sz = offsetofend(union bpf_attr, test);
int map_fd = -1, prog_fd = -1, key = 0, err;
union bpf_attr attr;
- err = map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1);
+ err = map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1,
+ opts->excl_prog_hash, opts->excl_prog_hash_sz);
if (map_fd < 0) {
opts->errstr = "failed to create loader map";
set_err;
@@ -327,11 +372,34 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
goto out;
}
+#ifndef __KERNEL__
+ err = skel_map_freeze(map_fd);
+ if (err < 0) {
+ opts->errstr = "failed to freeze map";
+ set_err;
+ goto out;
+ }
+ err = skel_obj_get_info_by_fd(map_fd);
+ if (err < 0) {
+ opts->errstr = "failed to fetch obj info";
+ set_err;
+ goto out;
+ }
+#endif
+
memset(&attr, 0, prog_load_attr_sz);
attr.prog_type = BPF_PROG_TYPE_SYSCALL;
attr.insns = (long) opts->insns;
attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
attr.license = (long) "Dual BSD/GPL";
+#ifndef __KERNEL__
+ attr.signature = (long) opts->signature;
+ attr.signature_size = opts->signature_sz;
+#else
+ if (opts->signature || opts->signature_sz)
+ pr_warn("signatures are not supported from bpf_preload\n");
+#endif
+ attr.keyring_id = opts->keyring_id;
memcpy(attr.prog_name, "__loader.prog", sizeof("__loader.prog"));
attr.fd_array = (long) &map_fd;
attr.log_level = opts->ctx->log_level;
@@ -351,10 +419,11 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
attr.test.ctx_size_in = opts->ctx->sz;
err = skel_sys_bpf(BPF_PROG_RUN, &attr, test_run_attr_sz);
if (err < 0 || (int)attr.test.retval < 0) {
- opts->errstr = "failed to execute loader prog";
if (err < 0) {
+ opts->errstr = "failed to execute loader prog";
set_err;
} else {
+ opts->errstr = "error returned by loader prog";
err = (int)attr.test.retval;
#ifndef __KERNEL__
errno = -err;
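Together, the skel_internal.h changes above let a light skeleton freeze its loader map, read the map hash back via BPF_OBJ_GET_INFO_BY_FD, and hand a signature plus keyring id to BPF_PROG_LOAD, while the exclusive program hash restricts which program may use the map. A heavily hedged sketch of how a generated loader might populate the new opts fields; every buffer name here is invented, only the field names come from struct bpf_load_and_run_opts:

	struct bpf_load_and_run_opts opts = {
		/* .ctx, .data/.data_sz, .insns/.insns_sz filled as before */
		.signature = sig_blob,			/* detached signature, if any */
		.signature_sz = sig_blob_sz,
		.keyring_id = KEY_SPEC_SESSION_KEYRING,	/* from linux/keyctl.h */
		.excl_prog_hash = prog_sha256,		/* 32-byte SHA-256 of the program */
		.excl_prog_hash_sz = SHA256_DIGEST_LENGTH,
	};
	int err = bpf_load_and_run(&opts);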
diff --git a/tools/lib/bpf/str_error.c b/tools/lib/bpf/str_error.c
deleted file mode 100644
index 146da01979c7..000000000000
--- a/tools/lib/bpf/str_error.c
+++ /dev/null
@@ -1,21 +0,0 @@
-// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
-#undef _GNU_SOURCE
-#include <string.h>
-#include <stdio.h>
-#include "str_error.h"
-
-/* make sure libbpf doesn't use kernel-only integer typedefs */
-#pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64
-
-/*
- * Wrapper to allow for building in non-GNU systems such as Alpine Linux's musl
- * libc, while checking strerror_r() return to avoid having to check this in
- * all places calling it.
- */
-char *libbpf_strerror_r(int err, char *dst, int len)
-{
- int ret = strerror_r(err < 0 ? -err : err, dst, len);
- if (ret)
- snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret);
- return dst;
-}
diff --git a/tools/lib/bpf/str_error.h b/tools/lib/bpf/str_error.h
deleted file mode 100644
index a139334d57b6..000000000000
--- a/tools/lib/bpf/str_error.h
+++ /dev/null
@@ -1,6 +0,0 @@
-/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
-#ifndef __LIBBPF_STR_ERROR_H
-#define __LIBBPF_STR_ERROR_H
-
-char *libbpf_strerror_r(int err, char *dst, int len);
-#endif /* __LIBBPF_STR_ERROR_H */
diff --git a/tools/lib/bpf/usdt.bpf.h b/tools/lib/bpf/usdt.bpf.h
index f6763300b26a..43deb05a5197 100644
--- a/tools/lib/bpf/usdt.bpf.h
+++ b/tools/lib/bpf/usdt.bpf.h
@@ -34,13 +34,32 @@ enum __bpf_usdt_arg_type {
BPF_USDT_ARG_CONST,
BPF_USDT_ARG_REG,
BPF_USDT_ARG_REG_DEREF,
+ BPF_USDT_ARG_SIB,
};
+/*
+ * This struct layout is designed specifically to be backwards/forward
+ * compatible between libbpf versions for ARG_CONST, ARG_REG, and
+ * ARG_REG_DEREF modes. ARG_SIB requires libbpf v1.7+.
+ */
struct __bpf_usdt_arg_spec {
/* u64 scalar interpreted depending on arg_type, see below */
__u64 val_off;
- /* arg location case, see bpf_udst_arg() for details */
- enum __bpf_usdt_arg_type arg_type;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ /* arg location case, see bpf_usdt_arg() for details */
+ enum __bpf_usdt_arg_type arg_type: 8;
+ /* index register offset within struct pt_regs */
+ __u16 idx_reg_off: 12;
+ /* scale factor for index register (1, 2, 4, or 8) */
+ __u16 scale_bitshift: 4;
+ /* reserved for future use, keeps reg_off offset stable */
+ __u8 __reserved: 8;
+#else
+ __u8 __reserved: 8;
+ __u16 idx_reg_off: 12;
+ __u16 scale_bitshift: 4;
+ enum __bpf_usdt_arg_type arg_type: 8;
+#endif
/* offset of referenced register within struct pt_regs */
short reg_off;
/* whether arg should be interpreted as signed value */
@@ -108,6 +127,38 @@ int bpf_usdt_arg_cnt(struct pt_regs *ctx)
return spec->arg_cnt;
}
+/* Returns the size in bytes of the #*arg_num* (zero-indexed) USDT argument.
+ * Returns negative error if argument is not found or arg_num is invalid.
+ */
+static __always_inline
+int bpf_usdt_arg_size(struct pt_regs *ctx, __u64 arg_num)
+{
+ struct __bpf_usdt_arg_spec *arg_spec;
+ struct __bpf_usdt_spec *spec;
+ int spec_id;
+
+ spec_id = __bpf_usdt_spec_id(ctx);
+ if (spec_id < 0)
+ return -ESRCH;
+
+ spec = bpf_map_lookup_elem(&__bpf_usdt_specs, &spec_id);
+ if (!spec)
+ return -ESRCH;
+
+ if (arg_num >= BPF_USDT_MAX_ARG_CNT)
+ return -ENOENT;
+ barrier_var(arg_num);
+ if (arg_num >= spec->arg_cnt)
+ return -ENOENT;
+
+ arg_spec = &spec->args[arg_num];
+
+ /* arg_spec->arg_bitshift = 64 - arg_sz * 8
+ * so: arg_sz = (64 - arg_spec->arg_bitshift) / 8
+ */
+ return (unsigned int)(64 - arg_spec->arg_bitshift) / 8;
+}
+
/* Fetch USDT argument #*arg_num* (zero-indexed) and put its value into *res.
* Returns 0 on success; negative error, otherwise.
* On error *res is guaranteed to be set to zero.
@@ -117,7 +168,7 @@ int bpf_usdt_arg(struct pt_regs *ctx, __u64 arg_num, long *res)
{
struct __bpf_usdt_spec *spec;
struct __bpf_usdt_arg_spec *arg_spec;
- unsigned long val;
+ unsigned long val, idx;
int err, spec_id;
*res = 0;
@@ -172,6 +223,27 @@ int bpf_usdt_arg(struct pt_regs *ctx, __u64 arg_num, long *res)
val >>= arg_spec->arg_bitshift;
#endif
break;
+ case BPF_USDT_ARG_SIB:
+ /* Arg is in memory addressed by SIB (Scale-Index-Base) mode
+ * (e.g., "-1@-96(%rbp,%rax,8)" in USDT arg spec). We first
+ * fetch the base register contents and the index register
+ * contents from pt_regs. Then we calculate the final address
+ * as base + (index * scale) + offset, and do a user-space
+ * probe read to fetch the argument value.
+ */
+ err = bpf_probe_read_kernel(&val, sizeof(val), (void *)ctx + arg_spec->reg_off);
+ if (err)
+ return err;
+ err = bpf_probe_read_kernel(&idx, sizeof(idx), (void *)ctx + arg_spec->idx_reg_off);
+ if (err)
+ return err;
+ err = bpf_probe_read_user(&val, sizeof(val), (void *)(val + (idx << arg_spec->scale_bitshift) + arg_spec->val_off));
+ if (err)
+ return err;
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+ val >>= arg_spec->arg_bitshift;
+#endif
+ break;
default:
return -EINVAL;
}
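Combined with the bpf_usdt_arg_size() helper added earlier in this header, a program can query both the value and the width of a USDT argument. A small sketch; the section name is a placeholder and attachment is assumed to happen from user space via bpf_program__attach_usdt():

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/usdt.bpf.h>

SEC("usdt")
int handle_probe(struct pt_regs *ctx)
{
	long val;
	int sz;

	sz = bpf_usdt_arg_size(ctx, 0);
	if (sz < 0)
		return 0;
	if (bpf_usdt_arg(ctx, 0, &val))
		return 0;
	bpf_printk("arg0=%ld, %d bytes wide", val, sz);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";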
@@ -214,18 +286,18 @@ long bpf_usdt_cookie(struct pt_regs *ctx)
/* we rely on ___bpf_apply() and ___bpf_narg() macros already defined in bpf_tracing.h */
#define ___bpf_usdt_args0() ctx
-#define ___bpf_usdt_args1(x) ___bpf_usdt_args0(), ({ long _x; bpf_usdt_arg(ctx, 0, &_x); (void *)_x; })
-#define ___bpf_usdt_args2(x, args...) ___bpf_usdt_args1(args), ({ long _x; bpf_usdt_arg(ctx, 1, &_x); (void *)_x; })
-#define ___bpf_usdt_args3(x, args...) ___bpf_usdt_args2(args), ({ long _x; bpf_usdt_arg(ctx, 2, &_x); (void *)_x; })
-#define ___bpf_usdt_args4(x, args...) ___bpf_usdt_args3(args), ({ long _x; bpf_usdt_arg(ctx, 3, &_x); (void *)_x; })
-#define ___bpf_usdt_args5(x, args...) ___bpf_usdt_args4(args), ({ long _x; bpf_usdt_arg(ctx, 4, &_x); (void *)_x; })
-#define ___bpf_usdt_args6(x, args...) ___bpf_usdt_args5(args), ({ long _x; bpf_usdt_arg(ctx, 5, &_x); (void *)_x; })
-#define ___bpf_usdt_args7(x, args...) ___bpf_usdt_args6(args), ({ long _x; bpf_usdt_arg(ctx, 6, &_x); (void *)_x; })
-#define ___bpf_usdt_args8(x, args...) ___bpf_usdt_args7(args), ({ long _x; bpf_usdt_arg(ctx, 7, &_x); (void *)_x; })
-#define ___bpf_usdt_args9(x, args...) ___bpf_usdt_args8(args), ({ long _x; bpf_usdt_arg(ctx, 8, &_x); (void *)_x; })
-#define ___bpf_usdt_args10(x, args...) ___bpf_usdt_args9(args), ({ long _x; bpf_usdt_arg(ctx, 9, &_x); (void *)_x; })
-#define ___bpf_usdt_args11(x, args...) ___bpf_usdt_args10(args), ({ long _x; bpf_usdt_arg(ctx, 10, &_x); (void *)_x; })
-#define ___bpf_usdt_args12(x, args...) ___bpf_usdt_args11(args), ({ long _x; bpf_usdt_arg(ctx, 11, &_x); (void *)_x; })
+#define ___bpf_usdt_args1(x) ___bpf_usdt_args0(), ({ long _x; bpf_usdt_arg(ctx, 0, &_x); _x; })
+#define ___bpf_usdt_args2(x, args...) ___bpf_usdt_args1(args), ({ long _x; bpf_usdt_arg(ctx, 1, &_x); _x; })
+#define ___bpf_usdt_args3(x, args...) ___bpf_usdt_args2(args), ({ long _x; bpf_usdt_arg(ctx, 2, &_x); _x; })
+#define ___bpf_usdt_args4(x, args...) ___bpf_usdt_args3(args), ({ long _x; bpf_usdt_arg(ctx, 3, &_x); _x; })
+#define ___bpf_usdt_args5(x, args...) ___bpf_usdt_args4(args), ({ long _x; bpf_usdt_arg(ctx, 4, &_x); _x; })
+#define ___bpf_usdt_args6(x, args...) ___bpf_usdt_args5(args), ({ long _x; bpf_usdt_arg(ctx, 5, &_x); _x; })
+#define ___bpf_usdt_args7(x, args...) ___bpf_usdt_args6(args), ({ long _x; bpf_usdt_arg(ctx, 6, &_x); _x; })
+#define ___bpf_usdt_args8(x, args...) ___bpf_usdt_args7(args), ({ long _x; bpf_usdt_arg(ctx, 7, &_x); _x; })
+#define ___bpf_usdt_args9(x, args...) ___bpf_usdt_args8(args), ({ long _x; bpf_usdt_arg(ctx, 8, &_x); _x; })
+#define ___bpf_usdt_args10(x, args...) ___bpf_usdt_args9(args), ({ long _x; bpf_usdt_arg(ctx, 9, &_x); _x; })
+#define ___bpf_usdt_args11(x, args...) ___bpf_usdt_args10(args), ({ long _x; bpf_usdt_arg(ctx, 10, &_x); _x; })
+#define ___bpf_usdt_args12(x, args...) ___bpf_usdt_args11(args), ({ long _x; bpf_usdt_arg(ctx, 11, &_x); _x; })
#define ___bpf_usdt_args(args...) ___bpf_apply(___bpf_usdt_args, ___bpf_narg(args))(args)
/*
diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
index 93794f01bb67..c174b4086673 100644
--- a/tools/lib/bpf/usdt.c
+++ b/tools/lib/bpf/usdt.c
@@ -58,7 +58,7 @@
*
* STAP_PROBE3(my_usdt_provider, my_usdt_probe_name, 123, x, &y);
*
- * USDT is identified by it's <provider-name>:<probe-name> pair of names. Each
+ * USDT is identified by its <provider-name>:<probe-name> pair of names. Each
* individual USDT has a fixed number of arguments (3 in the above example)
* and specifies values of each argument as if it was a function call.
*
@@ -80,7 +80,7 @@
* NOP instruction that kernel can replace with an interrupt instruction to
* trigger instrumentation code (BPF program for all that we care about).
*
- * Semaphore above is and optional feature. It records an address of a 2-byte
+ * Semaphore above is an optional feature. It records an address of a 2-byte
* refcount variable (normally in '.probes' ELF section) used for signaling if
* there is anything that is attached to USDT. This is useful for user
* applications if, for example, they need to prepare some arguments that are
@@ -120,7 +120,7 @@
* a uprobe BPF program (which for kernel, at least currently, is just a kprobe
* program, so BPF_PROG_TYPE_KPROBE program type). With the only difference
* that uprobe is usually attached at the function entry, while USDT will
- * normally will be somewhere inside the function. But it should always be
+ * normally be somewhere inside the function. But it should always be
* pointing to NOP instruction, which makes such uprobes the fastest uprobe
* kind.
*
@@ -150,7 +150,7 @@
* libbpf sets to spec ID during attach time, or, if kernel is too old to
* support BPF cookie, through IP-to-spec-ID map that libbpf maintains in such
* case. The latter means that some modes of operation can't be supported
- * without BPF cookie. Such mode is attaching to shared library "generically",
+ * without BPF cookie. Such a mode is attaching to shared library "generically",
* without specifying target process. In such case, it's impossible to
* calculate absolute IP addresses for IP-to-spec-ID map, and thus such mode
* is not supported without BPF cookie support.
@@ -184,7 +184,7 @@
* as even if USDT spec string is the same, USDT cookie value can be
* different. It was deemed excessive to try to deduplicate across independent
* USDT attachments by taking into account USDT spec string *and* USDT cookie
- * value, which would complicated spec ID accounting significantly for little
+ * value, which would complicate spec ID accounting significantly for little
* gain.
*/
@@ -199,12 +199,23 @@ enum usdt_arg_type {
USDT_ARG_CONST,
USDT_ARG_REG,
USDT_ARG_REG_DEREF,
+ USDT_ARG_SIB,
};
/* should match exactly struct __bpf_usdt_arg_spec from usdt.bpf.h */
struct usdt_arg_spec {
__u64 val_off;
- enum usdt_arg_type arg_type;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ enum usdt_arg_type arg_type: 8;
+ __u16 idx_reg_off: 12;
+ __u16 scale_bitshift: 4;
+ __u8 __reserved: 8; /* keep reg_off offset stable */
+#else
+ __u8 __reserved: 8; /* keep reg_off offset stable */
+ __u16 idx_reg_off: 12;
+ __u16 scale_bitshift: 4;
+ enum usdt_arg_type arg_type: 8;
+#endif
short reg_off;
bool arg_signed;
char arg_bitshift;
@@ -465,8 +476,8 @@ static int parse_vma_segs(int pid, const char *lib_path, struct elf_seg **segs,
goto proceed;
if (!realpath(lib_path, path)) {
- pr_warn("usdt: failed to get absolute path of '%s' (err %d), using path as is...\n",
- lib_path, -errno);
+ pr_warn("usdt: failed to get absolute path of '%s' (err %s), using path as is...\n",
+ lib_path, errstr(-errno));
libbpf_strlcpy(path, lib_path, sizeof(path));
}
@@ -475,8 +486,8 @@ proceed:
f = fopen(line, "re");
if (!f) {
err = -errno;
- pr_warn("usdt: failed to open '%s' to get base addr of '%s': %d\n",
- line, lib_path, err);
+ pr_warn("usdt: failed to open '%s' to get base addr of '%s': %s\n",
+ line, lib_path, errstr(err));
return err;
}
@@ -569,9 +580,8 @@ static struct elf_seg *find_vma_seg(struct elf_seg *segs, size_t seg_cnt, long o
return NULL;
}
-static int parse_usdt_note(Elf *elf, const char *path, GElf_Nhdr *nhdr,
- const char *data, size_t name_off, size_t desc_off,
- struct usdt_note *usdt_note);
+static int parse_usdt_note(GElf_Nhdr *nhdr, const char *data, size_t name_off,
+ size_t desc_off, struct usdt_note *usdt_note);
static int parse_usdt_spec(struct usdt_spec *spec, const struct usdt_note *note, __u64 usdt_cookie);
@@ -606,7 +616,8 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
err = parse_elf_segs(elf, path, &segs, &seg_cnt);
if (err) {
- pr_warn("usdt: failed to process ELF program segments for '%s': %d\n", path, err);
+ pr_warn("usdt: failed to process ELF program segments for '%s': %s\n",
+ path, errstr(err));
goto err_out;
}
@@ -624,7 +635,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
struct elf_seg *seg = NULL;
void *tmp;
- err = parse_usdt_note(elf, path, &nhdr, data->d_buf, name_off, desc_off, &note);
+ err = parse_usdt_note(&nhdr, data->d_buf, name_off, desc_off, &note);
if (err)
goto err_out;
@@ -659,7 +670,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
* [0] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation
*/
usdt_abs_ip = note.loc_addr;
- if (base_addr)
+ if (base_addr && note.base_addr)
usdt_abs_ip += base_addr - note.base_addr;
/* When attaching uprobes (which is what USDTs basically are)
@@ -708,8 +719,8 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
if (vma_seg_cnt == 0) {
err = parse_vma_segs(pid, path, &vma_segs, &vma_seg_cnt);
if (err) {
- pr_warn("usdt: failed to get memory segments in PID %d for shared library '%s': %d\n",
- pid, path, err);
+ pr_warn("usdt: failed to get memory segments in PID %d for shared library '%s': %s\n",
+ pid, path, errstr(err));
goto err_out;
}
}
@@ -1047,8 +1058,8 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct
if (is_new && bpf_map_update_elem(spec_map_fd, &spec_id, &target->spec, BPF_ANY)) {
err = -errno;
- pr_warn("usdt: failed to set USDT spec #%d for '%s:%s' in '%s': %d\n",
- spec_id, usdt_provider, usdt_name, path, err);
+ pr_warn("usdt: failed to set USDT spec #%d for '%s:%s' in '%s': %s\n",
+ spec_id, usdt_provider, usdt_name, path, errstr(err));
goto err_out;
}
if (!man->has_bpf_cookie &&
@@ -1058,9 +1069,9 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct
pr_warn("usdt: IP collision detected for spec #%d for '%s:%s' in '%s'\n",
spec_id, usdt_provider, usdt_name, path);
} else {
- pr_warn("usdt: failed to map IP 0x%lx to spec #%d for '%s:%s' in '%s': %d\n",
+ pr_warn("usdt: failed to map IP 0x%lx to spec #%d for '%s:%s' in '%s': %s\n",
target->abs_ip, spec_id, usdt_provider, usdt_name,
- path, err);
+ path, errstr(err));
}
goto err_out;
}
@@ -1076,8 +1087,8 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct
target->rel_ip, &opts);
err = libbpf_get_error(uprobe_link);
if (err) {
- pr_warn("usdt: failed to attach uprobe #%d for '%s:%s' in '%s': %d\n",
- i, usdt_provider, usdt_name, path, err);
+ pr_warn("usdt: failed to attach uprobe #%d for '%s:%s' in '%s': %s\n",
+ i, usdt_provider, usdt_name, path, errstr(err));
goto err_out;
}
@@ -1099,8 +1110,8 @@ struct bpf_link *usdt_manager_attach_usdt(struct usdt_manager *man, const struct
NULL, &opts_multi);
if (!link->multi_link) {
err = -errno;
- pr_warn("usdt: failed to attach uprobe multi for '%s:%s' in '%s': %d\n",
- usdt_provider, usdt_name, path, err);
+ pr_warn("usdt: failed to attach uprobe multi for '%s:%s' in '%s': %s\n",
+ usdt_provider, usdt_name, path, errstr(err));
goto err_out;
}
@@ -1130,8 +1141,7 @@ err_out:
/* Parse out USDT ELF note from '.note.stapsdt' section.
* Logic inspired by perf's code.
*/
-static int parse_usdt_note(Elf *elf, const char *path, GElf_Nhdr *nhdr,
- const char *data, size_t name_off, size_t desc_off,
+static int parse_usdt_note(GElf_Nhdr *nhdr, const char *data, size_t name_off, size_t desc_off,
struct usdt_note *note)
{
const char *provider, *name, *args;
@@ -1281,11 +1291,51 @@ static int calc_pt_regs_off(const char *reg_name)
static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
{
- char reg_name[16];
- int len, reg_off;
- long off;
+ char reg_name[16] = {0}, idx_reg_name[16] = {0};
+ int len, reg_off, idx_reg_off, scale = 1;
+ long off = 0;
+
+ if (sscanf(arg_str, " %d @ %ld ( %%%15[^,] , %%%15[^,] , %d ) %n",
+ arg_sz, &off, reg_name, idx_reg_name, &scale, &len) == 5 ||
+ sscanf(arg_str, " %d @ ( %%%15[^,] , %%%15[^,] , %d ) %n",
+ arg_sz, reg_name, idx_reg_name, &scale, &len) == 4 ||
+ sscanf(arg_str, " %d @ %ld ( %%%15[^,] , %%%15[^)] ) %n",
+ arg_sz, &off, reg_name, idx_reg_name, &len) == 4 ||
+ sscanf(arg_str, " %d @ ( %%%15[^,] , %%%15[^)] ) %n",
+ arg_sz, reg_name, idx_reg_name, &len) == 3
+ ) {
+ /*
+ * Scale Index Base case:
+ * 1@-96(%rbp,%rax,8)
+ * 1@(%rbp,%rax,8)
+ * 1@-96(%rbp,%rax)
+ * 1@(%rbp,%rax)
+ */
+ arg->arg_type = USDT_ARG_SIB;
+ arg->val_off = off;
- if (sscanf(arg_str, " %d @ %ld ( %%%15[^)] ) %n", arg_sz, &off, reg_name, &len) == 3) {
+ reg_off = calc_pt_regs_off(reg_name);
+ if (reg_off < 0)
+ return reg_off;
+ arg->reg_off = reg_off;
+
+ idx_reg_off = calc_pt_regs_off(idx_reg_name);
+ if (idx_reg_off < 0)
+ return idx_reg_off;
+ arg->idx_reg_off = idx_reg_off;
+
+ /* validate scale factor and set fields directly */
+ switch (scale) {
+ case 1: arg->scale_bitshift = 0; break;
+ case 2: arg->scale_bitshift = 1; break;
+ case 4: arg->scale_bitshift = 2; break;
+ case 8: arg->scale_bitshift = 3; break;
+ default:
+ pr_warn("usdt: invalid SIB scale %d, expected 1, 2, 4, 8\n", scale);
+ return -EINVAL;
+ }
+ } else if (sscanf(arg_str, " %d @ %ld ( %%%15[^)] ) %n",
+ arg_sz, &off, reg_name, &len) == 3) {
/* Memory dereference case, e.g., -4@-20(%rbp) */
arg->arg_type = USDT_ARG_REG_DEREF;
arg->val_off = off;
@@ -1304,6 +1354,7 @@ static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec
} else if (sscanf(arg_str, " %d @ %%%15s %n", arg_sz, reg_name, &len) == 2) {
/* Register read case, e.g., -4@%eax */
arg->arg_type = USDT_ARG_REG;
+ /* register read has no memory offset */
arg->val_off = 0;
reg_off = calc_pt_regs_off(reg_name);
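For reference, the fields parse_usdt_arg() fills in for a SIB spec feed directly into the address math in bpf_usdt_arg(): for the "-1@-96(%rbp,%rax,8)" example from the comment above it stores val_off = -96, reg_off for %rbp, idx_reg_off for %rax and scale_bitshift = 3, and the BPF side then reads from base + (index << scale_bitshift) + val_off. The same computation, restated standalone:

/* Mirrors the BPF_USDT_ARG_SIB address calculation in bpf_usdt_arg(). */
static unsigned long usdt_sib_addr(unsigned long base_reg, unsigned long idx_reg,
				   unsigned int scale_bitshift, long val_off)
{
	return base_reg + (idx_reg << scale_bitshift) + val_off;
}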
diff --git a/tools/lib/bpf/zip.c b/tools/lib/bpf/zip.c
index 3f26d629b2b4..88c376a8348d 100644
--- a/tools/lib/bpf/zip.c
+++ b/tools/lib/bpf/zip.c
@@ -223,7 +223,7 @@ struct zip_archive *zip_archive_open(const char *path)
if (!archive) {
munmap(data, size);
return ERR_PTR(-ENOMEM);
- };
+ }
archive->data = data;
archive->size = size;
diff --git a/tools/lib/cmdline.c b/tools/lib/cmdline.c
new file mode 100644
index 000000000000..c85f00f43c5e
--- /dev/null
+++ b/tools/lib/cmdline.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * From lib/cmdline.c
+ */
+#include <stdlib.h>
+
+#if __has_attribute(__fallthrough__)
+# define fallthrough __attribute__((__fallthrough__))
+#else
+# define fallthrough do {} while (0) /* fallthrough */
+#endif
+
+unsigned long long memparse(const char *ptr, char **retptr)
+{
+ char *endptr; /* local pointer to end of parsed string */
+
+ unsigned long long ret = strtoll(ptr, &endptr, 0);
+
+ switch (*endptr) {
+ case 'E':
+ case 'e':
+ ret <<= 10;
+ fallthrough;
+ case 'P':
+ case 'p':
+ ret <<= 10;
+ fallthrough;
+ case 'T':
+ case 't':
+ ret <<= 10;
+ fallthrough;
+ case 'G':
+ case 'g':
+ ret <<= 10;
+ fallthrough;
+ case 'M':
+ case 'm':
+ ret <<= 10;
+ fallthrough;
+ case 'K':
+ case 'k':
+ ret <<= 10;
+ endptr++;
+ fallthrough;
+ default:
+ break;
+ }
+
+ if (retptr)
+ *retptr = endptr;
+
+ return ret;
+}
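memparse() above accepts kernel-style size suffixes (k/K, m/M, g/G, t/T, p/P, e/E); the switch cases fall through, so larger suffixes accumulate the shifts of all smaller ones (K = 2^10 up to E = 2^60) and the pointer returned via retptr is advanced past exactly one suffix character. A short usage sketch:

	char *rest;

	memparse("16M", &rest);		/* 16 << 20 = 16777216, rest points past 'M' */
	memparse("2g", &rest);		/* suffixes are case-insensitive: 2 << 30 */
	memparse("512", &rest);		/* no suffix: plain 512 */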
diff --git a/tools/lib/list_sort.c b/tools/lib/list_sort.c
index 10c067e3a8d2..bb99e493dcd1 100644
--- a/tools/lib/list_sort.c
+++ b/tools/lib/list_sort.c
@@ -1,8 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
-#include <linux/kernel.h>
#include <linux/compiler.h>
#include <linux/export.h>
-#include <linux/string.h>
#include <linux/list_sort.h>
#include <linux/list.h>
@@ -52,7 +50,6 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
struct list_head *a, struct list_head *b)
{
struct list_head *tail = head;
- u8 count = 0;
for (;;) {
/* if equal, take 'a' -- important for sort stability */
@@ -78,15 +75,6 @@ static void merge_final(void *priv, list_cmp_func_t cmp, struct list_head *head,
/* Finish linking remainder of list b on to tail */
tail->next = b;
do {
- /*
- * If the merge is highly unbalanced (e.g. the input is
- * already sorted), this loop may run many iterations.
- * Continue callbacks to the client even though no
- * element comparison is needed, so the client's cmp()
- * routine can invoke cond_resched() periodically.
- */
- if (unlikely(!++count))
- cmp(priv, b, b);
b->prev = tail;
tail = b;
b = b->next;
diff --git a/tools/lib/perf/.gitignore b/tools/lib/perf/.gitignore
new file mode 100644
index 000000000000..0f5b4af63f62
--- /dev/null
+++ b/tools/lib/perf/.gitignore
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0-only
+libperf.pc
+libperf.so.*
+tests-shared
+tests-static
diff --git a/tools/lib/perf/Documentation/Makefile b/tools/lib/perf/Documentation/Makefile
index 972754082a85..573ca5b27556 100644
--- a/tools/lib/perf/Documentation/Makefile
+++ b/tools/lib/perf/Documentation/Makefile
@@ -121,7 +121,7 @@ install-man: all
$(INSTALL) -d -m 755 $(DESTDIR)$(man7dir); \
$(INSTALL) -m 644 $(MAN_7) $(DESTDIR)$(man7dir);
-install-html:
+install-html: $(MAN_HTML)
$(call QUIET_INSTALL, html) \
$(INSTALL) -d -m 755 $(DESTDIR)$(htmldir); \
$(INSTALL) -m 644 $(MAN_HTML) $(DESTDIR)$(htmldir); \
diff --git a/tools/lib/perf/Documentation/libperf.txt b/tools/lib/perf/Documentation/libperf.txt
index fcfb9499ef9c..4072bc9b7670 100644
--- a/tools/lib/perf/Documentation/libperf.txt
+++ b/tools/lib/perf/Documentation/libperf.txt
@@ -39,7 +39,6 @@ SYNOPSIS
struct perf_cpu_map *perf_cpu_map__new_any_cpu(void);
struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list);
- struct perf_cpu_map *perf_cpu_map__read(FILE *file);
struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map);
struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
struct perf_cpu_map *other);
@@ -211,6 +210,7 @@ SYNOPSIS
struct perf_record_time_conv;
struct perf_record_header_feature;
struct perf_record_compressed;
+ struct perf_record_compressed2;
--
DESCRIPTION
diff --git a/tools/lib/perf/Makefile b/tools/lib/perf/Makefile
index 3a9b2140aa04..7fbb50b74c00 100644
--- a/tools/lib/perf/Makefile
+++ b/tools/lib/perf/Makefile
@@ -39,29 +39,10 @@ libdir = $(prefix)/$(libdir_relative)
libdir_SQ = $(subst ','\'',$(libdir))
libdir_relative_SQ = $(subst ','\'',$(libdir_relative))
-ifeq ("$(origin V)", "command line")
- VERBOSE = $(V)
-endif
-ifndef VERBOSE
- VERBOSE = 0
-endif
-
-ifeq ($(VERBOSE),1)
- Q =
-else
- Q = @
-endif
-
TEST_ARGS := $(if $(V),-v)
-# Set compile option CFLAGS
-ifdef EXTRA_CFLAGS
- CFLAGS := $(EXTRA_CFLAGS)
-else
- CFLAGS := -g -Wall
-endif
-
INCLUDES = \
+-I$(OUTPUT)arch/$(SRCARCH)/include/generated/uapi \
-I$(srctree)/tools/lib/perf/include \
-I$(srctree)/tools/lib/ \
-I$(srctree)/tools/include \
@@ -70,11 +51,12 @@ INCLUDES = \
-I$(srctree)/tools/include/uapi
# Append required CFLAGS
-override CFLAGS += $(EXTRA_WARNINGS)
-override CFLAGS += -Werror -Wall
+override CFLAGS += -g -Werror -Wall
override CFLAGS += -fPIC
override CFLAGS += $(INCLUDES)
override CFLAGS += -fvisibility=hidden
+override CFLAGS += $(EXTRA_WARNINGS)
+override CFLAGS += $(EXTRA_CFLAGS)
all:
@@ -118,7 +100,16 @@ $(LIBAPI)-clean:
$(call QUIET_CLEAN, libapi)
$(Q)$(MAKE) -C $(LIB_DIR) O=$(OUTPUT) clean >/dev/null
-$(LIBPERF_IN): FORCE
+uapi-asm := $(OUTPUT)arch/$(SRCARCH)/include/generated/uapi/asm
+ifeq ($(SRCARCH),arm64)
+ syscall-y := $(uapi-asm)/unistd_64.h
+endif
+uapi-asm-generic:
+ $(if $(syscall-y),\
+ $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.asm-headers obj=$(uapi-asm) \
+ generic=include/uapi/asm-generic $(syscall-y),)
+
+$(LIBPERF_IN): uapi-asm-generic FORCE
$(Q)$(MAKE) $(build)=libperf
$(LIBPERF_A): $(LIBPERF_IN)
@@ -139,7 +130,7 @@ all: fixdep
clean: $(LIBAPI)-clean
$(call QUIET_CLEAN, libperf) $(RM) $(LIBPERF_A) \
*.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBPERF_VERSION) .*.d .*.cmd tests/*.o LIBPERF-CFLAGS $(LIBPERF_PC) \
- $(TESTS_STATIC) $(TESTS_SHARED)
+ $(TESTS_STATIC) $(TESTS_SHARED) $(syscall-y)
TESTS_IN = tests-in.o
diff --git a/tools/lib/perf/cpumap.c b/tools/lib/perf/cpumap.c
index 4adcd7920d03..b20a5280f2b3 100644
--- a/tools/lib/perf/cpumap.c
+++ b/tools/lib/perf/cpumap.c
@@ -1,4 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
+#include <errno.h>
#include <perf/cpumap.h>
#include <stdlib.h>
#include <linux/refcount.h>
@@ -10,6 +11,9 @@
#include <ctype.h>
#include <limits.h>
#include "internal.h"
+#include <api/fs/fs.h>
+
+#define MAX_NR_CPUS 4096
void perf_cpu_map__set_nr(struct perf_cpu_map *map, int nr_cpus)
{
@@ -18,9 +22,13 @@ void perf_cpu_map__set_nr(struct perf_cpu_map *map, int nr_cpus)
struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus)
{
- RC_STRUCT(perf_cpu_map) *cpus = malloc(sizeof(*cpus) + sizeof(struct perf_cpu) * nr_cpus);
+ RC_STRUCT(perf_cpu_map) *cpus;
struct perf_cpu_map *result;
+ if (nr_cpus == 0)
+ return NULL;
+
+ cpus = malloc(sizeof(*cpus) + sizeof(struct perf_cpu) * nr_cpus);
if (ADD_RC_CHK(result, cpus)) {
cpus->nr = nr_cpus;
refcount_set(&cpus->refcnt, 1);
@@ -96,12 +104,12 @@ static struct perf_cpu_map *cpu_map__new_sysconf(void)
static struct perf_cpu_map *cpu_map__new_sysfs_online(void)
{
struct perf_cpu_map *cpus = NULL;
- FILE *onlnf;
+ char *buf = NULL;
+ size_t buf_len;
- onlnf = fopen("/sys/devices/system/cpu/online", "r");
- if (onlnf) {
- cpus = perf_cpu_map__read(onlnf);
- fclose(onlnf);
+ if (sysfs__read_str("devices/system/cpu/online", &buf, &buf_len) >= 0) {
+ cpus = perf_cpu_map__new(buf);
+ free(buf);
}
return cpus;
}
@@ -154,62 +162,6 @@ static struct perf_cpu_map *cpu_map__trim_new(int nr_cpus, const struct perf_cpu
return cpus;
}
-struct perf_cpu_map *perf_cpu_map__read(FILE *file)
-{
- struct perf_cpu_map *cpus = NULL;
- int nr_cpus = 0;
- struct perf_cpu *tmp_cpus = NULL, *tmp;
- int max_entries = 0;
- int n, cpu, prev;
- char sep;
-
- sep = 0;
- prev = -1;
- for (;;) {
- n = fscanf(file, "%u%c", &cpu, &sep);
- if (n <= 0)
- break;
- if (prev >= 0) {
- int new_max = nr_cpus + cpu - prev - 1;
-
- WARN_ONCE(new_max >= MAX_NR_CPUS, "Perf can support %d CPUs. "
- "Consider raising MAX_NR_CPUS\n", MAX_NR_CPUS);
-
- if (new_max >= max_entries) {
- max_entries = new_max + MAX_NR_CPUS / 2;
- tmp = realloc(tmp_cpus, max_entries * sizeof(struct perf_cpu));
- if (tmp == NULL)
- goto out_free_tmp;
- tmp_cpus = tmp;
- }
-
- while (++prev < cpu)
- tmp_cpus[nr_cpus++].cpu = prev;
- }
- if (nr_cpus == max_entries) {
- max_entries += MAX_NR_CPUS;
- tmp = realloc(tmp_cpus, max_entries * sizeof(struct perf_cpu));
- if (tmp == NULL)
- goto out_free_tmp;
- tmp_cpus = tmp;
- }
-
- tmp_cpus[nr_cpus++].cpu = cpu;
- if (n == 2 && sep == '-')
- prev = cpu;
- else
- prev = -1;
- if (n == 1 || sep == '\n')
- break;
- }
-
- if (nr_cpus > 0)
- cpus = cpu_map__trim_new(nr_cpus, tmp_cpus);
-out_free_tmp:
- free(tmp_cpus);
- return cpus;
-}
-
struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list)
{
struct perf_cpu_map *cpus = NULL;
@@ -233,8 +185,8 @@ struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list)
while (isdigit(*cpu_list)) {
p = NULL;
start_cpu = strtoul(cpu_list, &p, 0);
- if (start_cpu >= INT_MAX
- || (*p != '\0' && *p != ',' && *p != '-'))
+ if (start_cpu >= INT16_MAX
+ || (*p != '\0' && *p != ',' && *p != '-' && *p != '\n'))
goto invalid;
if (*p == '-') {
@@ -242,7 +194,7 @@ struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list)
p = NULL;
end_cpu = strtoul(cpu_list, &p, 0);
- if (end_cpu >= INT_MAX || (*p != '\0' && *p != ','))
+ if (end_cpu >= INT16_MAX || (*p != '\0' && *p != ',' && *p != '\n'))
goto invalid;
if (end_cpu < start_cpu)
@@ -257,17 +209,17 @@ struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list)
for (; start_cpu <= end_cpu; start_cpu++) {
/* check for duplicates */
for (i = 0; i < nr_cpus; i++)
- if (tmp_cpus[i].cpu == (int)start_cpu)
+ if (tmp_cpus[i].cpu == (int16_t)start_cpu)
goto invalid;
if (nr_cpus == max_entries) {
- max_entries += MAX_NR_CPUS;
+ max_entries += max(end_cpu - start_cpu + 1, 16UL);
tmp = realloc(tmp_cpus, max_entries * sizeof(struct perf_cpu));
if (tmp == NULL)
goto invalid;
tmp_cpus = tmp;
}
- tmp_cpus[nr_cpus++].cpu = (int)start_cpu;
+ tmp_cpus[nr_cpus++].cpu = (int16_t)start_cpu;
}
if (*p)
++p;
@@ -275,20 +227,31 @@ struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list)
cpu_list = p;
}
- if (nr_cpus > 0)
+ if (nr_cpus > 0) {
cpus = cpu_map__trim_new(nr_cpus, tmp_cpus);
- else if (*cpu_list != '\0') {
+ } else if (*cpu_list != '\0') {
pr_warning("Unexpected characters at end of cpu list ('%s'), using online CPUs.",
cpu_list);
cpus = perf_cpu_map__new_online_cpus();
- } else
+ } else {
cpus = perf_cpu_map__new_any_cpu();
+ }
invalid:
free(tmp_cpus);
out:
return cpus;
}
+struct perf_cpu_map *perf_cpu_map__new_int(int cpu)
+{
+ struct perf_cpu_map *cpus = perf_cpu_map__alloc(1);
+
+ if (cpus)
+ RC_CHK_ACCESS(cpus)->map[0].cpu = cpu;
+
+ return cpus;
+}
+
static int __perf_cpu_map__nr(const struct perf_cpu_map *cpus)
{
return RC_CHK_ACCESS(cpus)->nr;
@@ -316,6 +279,19 @@ bool perf_cpu_map__has_any_cpu_or_is_empty(const struct perf_cpu_map *map)
return map ? __perf_cpu_map__cpu(map, 0).cpu == -1 : true;
}
+bool perf_cpu_map__is_any_cpu_or_is_empty(const struct perf_cpu_map *map)
+{
+ if (!map)
+ return true;
+
+ return __perf_cpu_map__nr(map) == 1 && __perf_cpu_map__cpu(map, 0).cpu == -1;
+}
+
+bool perf_cpu_map__is_empty(const struct perf_cpu_map *map)
+{
+ return map == NULL;
+}
+
int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu)
{
int low, high;
@@ -372,6 +348,20 @@ bool perf_cpu_map__has_any_cpu(const struct perf_cpu_map *map)
return map && __perf_cpu_map__cpu(map, 0).cpu == -1;
}
+struct perf_cpu perf_cpu_map__min(const struct perf_cpu_map *map)
+{
+ struct perf_cpu cpu, result = {
+ .cpu = -1
+ };
+ int idx;
+
+ perf_cpu_map__for_each_cpu_skip_any(cpu, idx, map) {
+ result = cpu;
+ break;
+ }
+ return result;
+}
+
struct perf_cpu perf_cpu_map__max(const struct perf_cpu_map *map)
{
struct perf_cpu result = {
@@ -405,46 +395,49 @@ bool perf_cpu_map__is_subset(const struct perf_cpu_map *a, const struct perf_cpu
}
/*
- * Merge two cpumaps
+ * Merge two cpumaps.
+ *
+ * If 'other' is a subset of '*orig', '*orig' keeps itself with no reference count
+ * change (similar to "realloc").
+ *
+ * If '*orig' is a subset of 'other', '*orig' reuses 'other' with its reference
+ * count increased.
*
- * orig either gets freed and replaced with a new map, or reused
- * with no reference count change (similar to "realloc")
- * other has its reference count increased.
+ * Otherwise, '*orig' gets freed and replaced with a new map.
*/
-
-struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
- struct perf_cpu_map *other)
+int perf_cpu_map__merge(struct perf_cpu_map **orig, struct perf_cpu_map *other)
{
struct perf_cpu *tmp_cpus;
int tmp_len;
int i, j, k;
struct perf_cpu_map *merged;
- if (perf_cpu_map__is_subset(orig, other))
- return orig;
- if (perf_cpu_map__is_subset(other, orig)) {
- perf_cpu_map__put(orig);
- return perf_cpu_map__get(other);
+ if (perf_cpu_map__is_subset(*orig, other))
+ return 0;
+ if (perf_cpu_map__is_subset(other, *orig)) {
+ perf_cpu_map__put(*orig);
+ *orig = perf_cpu_map__get(other);
+ return 0;
}
- tmp_len = __perf_cpu_map__nr(orig) + __perf_cpu_map__nr(other);
+ tmp_len = __perf_cpu_map__nr(*orig) + __perf_cpu_map__nr(other);
tmp_cpus = malloc(tmp_len * sizeof(struct perf_cpu));
if (!tmp_cpus)
- return NULL;
+ return -ENOMEM;
/* Standard merge algorithm from wikipedia */
i = j = k = 0;
- while (i < __perf_cpu_map__nr(orig) && j < __perf_cpu_map__nr(other)) {
- if (__perf_cpu_map__cpu(orig, i).cpu <= __perf_cpu_map__cpu(other, j).cpu) {
- if (__perf_cpu_map__cpu(orig, i).cpu == __perf_cpu_map__cpu(other, j).cpu)
+ while (i < __perf_cpu_map__nr(*orig) && j < __perf_cpu_map__nr(other)) {
+ if (__perf_cpu_map__cpu(*orig, i).cpu <= __perf_cpu_map__cpu(other, j).cpu) {
+ if (__perf_cpu_map__cpu(*orig, i).cpu == __perf_cpu_map__cpu(other, j).cpu)
j++;
- tmp_cpus[k++] = __perf_cpu_map__cpu(orig, i++);
+ tmp_cpus[k++] = __perf_cpu_map__cpu(*orig, i++);
} else
tmp_cpus[k++] = __perf_cpu_map__cpu(other, j++);
}
- while (i < __perf_cpu_map__nr(orig))
- tmp_cpus[k++] = __perf_cpu_map__cpu(orig, i++);
+ while (i < __perf_cpu_map__nr(*orig))
+ tmp_cpus[k++] = __perf_cpu_map__cpu(*orig, i++);
while (j < __perf_cpu_map__nr(other))
tmp_cpus[k++] = __perf_cpu_map__cpu(other, j++);
@@ -452,8 +445,9 @@ struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
merged = cpu_map__trim_new(k, tmp_cpus);
free(tmp_cpus);
- perf_cpu_map__put(orig);
- return merged;
+ perf_cpu_map__put(*orig);
+ *orig = merged;
+ return 0;
}
struct perf_cpu_map *perf_cpu_map__intersect(struct perf_cpu_map *orig,
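perf_cpu_map__merge() now updates its first argument in place and returns 0 or -ENOMEM rather than handing back a new map. A short sketch of the new calling convention, with arbitrary CPU lists:

	struct perf_cpu_map *acc = perf_cpu_map__new("0-3");
	struct perf_cpu_map *extra = perf_cpu_map__new("2-5");

	if (!perf_cpu_map__merge(&acc, extra)) {
		/* acc now covers CPUs 0-5; extra is untouched and still owned here */
	}
	perf_cpu_map__put(extra);
	perf_cpu_map__put(acc);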
diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
index 058e3ff10f9b..3ed023f4b190 100644
--- a/tools/lib/perf/evlist.c
+++ b/tools/lib/perf/evlist.c
@@ -36,35 +36,88 @@ void perf_evlist__init(struct perf_evlist *evlist)
static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
struct perf_evsel *evsel)
{
- if (evsel->system_wide) {
- /* System wide: set the cpu map of the evsel to all online CPUs. */
+ if (perf_cpu_map__is_empty(evsel->cpus)) {
+ if (perf_cpu_map__is_empty(evsel->pmu_cpus)) {
+ /*
+ * Assume the unset PMU cpus were for a system-wide
+ * event, like a software or tracepoint.
+ */
+ evsel->pmu_cpus = perf_cpu_map__new_online_cpus();
+ }
+ if (evlist->has_user_cpus && !evsel->system_wide) {
+ /*
+ * Use the user CPUs unless the evsel is set to be
+ * system wide, such as the dummy event.
+ */
+ evsel->cpus = perf_cpu_map__get(evlist->user_requested_cpus);
+ } else {
+ /*
+ * System wide and other modes, assume the cpu map
+ * should be set to all PMU CPUs.
+ */
+ evsel->cpus = perf_cpu_map__get(evsel->pmu_cpus);
+ }
+ }
+ /*
+ * Avoid "any CPU"(-1) for uncore and PMUs that require a CPU, even if
+ * requested.
+ */
+ if (evsel->requires_cpu && perf_cpu_map__has_any_cpu(evsel->cpus)) {
perf_cpu_map__put(evsel->cpus);
- evsel->cpus = perf_cpu_map__new_online_cpus();
- } else if (evlist->has_user_cpus && evsel->is_pmu_core) {
- /*
- * User requested CPUs on a core PMU, ensure the requested CPUs
- * are valid by intersecting with those of the PMU.
- */
+ evsel->cpus = perf_cpu_map__get(evsel->pmu_cpus);
+ }
+
+ /*
+ * Globally requested CPUs replace user requested unless the evsel is
+ * set to be system wide.
+ */
+ if (evlist->has_user_cpus && !evsel->system_wide) {
+ assert(!perf_cpu_map__has_any_cpu(evlist->user_requested_cpus));
+ if (!perf_cpu_map__equal(evsel->cpus, evlist->user_requested_cpus)) {
+ perf_cpu_map__put(evsel->cpus);
+ evsel->cpus = perf_cpu_map__get(evlist->user_requested_cpus);
+ }
+ }
+
+ /* Ensure cpus only references valid PMU CPUs. */
+ if (!perf_cpu_map__has_any_cpu(evsel->cpus) &&
+ !perf_cpu_map__is_subset(evsel->pmu_cpus, evsel->cpus)) {
+ struct perf_cpu_map *tmp = perf_cpu_map__intersect(evsel->pmu_cpus, evsel->cpus);
+
perf_cpu_map__put(evsel->cpus);
- evsel->cpus = perf_cpu_map__intersect(evlist->user_requested_cpus, evsel->own_cpus);
- } else if (!evsel->own_cpus || evlist->has_user_cpus ||
- (!evsel->requires_cpu && perf_cpu_map__has_any_cpu(evlist->user_requested_cpus))) {
- /*
- * The PMU didn't specify a default cpu map, this isn't a core
- * event and the user requested CPUs or the evlist user
- * requested CPUs have the "any CPU" (aka dummy) CPU value. In
- * which case use the user requested CPUs rather than the PMU
- * ones.
- */
+ evsel->cpus = tmp;
+ }
+
+ /*
+ * Was event requested on all the PMU's CPUs but the user requested is
+ * any CPU (-1)? If so switch to using any CPU (-1) to reduce the number
+ * of events.
+ */
+ if (!evsel->system_wide &&
+ !evsel->requires_cpu &&
+ perf_cpu_map__equal(evsel->cpus, evsel->pmu_cpus) &&
+ perf_cpu_map__has_any_cpu(evlist->user_requested_cpus)) {
perf_cpu_map__put(evsel->cpus);
evsel->cpus = perf_cpu_map__get(evlist->user_requested_cpus);
- } else if (evsel->cpus != evsel->own_cpus) {
- /*
- * No user requested cpu map but the PMU cpu map doesn't match
- * the evsel's. Reset it back to the PMU cpu map.
- */
- perf_cpu_map__put(evsel->cpus);
- evsel->cpus = perf_cpu_map__get(evsel->own_cpus);
+ }
+
+ /* Sanity check assert before the evsel is potentially removed. */
+ assert(!evsel->requires_cpu || !perf_cpu_map__has_any_cpu(evsel->cpus));
+
+ /*
+ * Empty cpu lists would eventually get opened as "any" so remove
+ * genuinely empty ones before they're opened in the wrong place.
+ */
+ if (perf_cpu_map__is_empty(evsel->cpus)) {
+ struct perf_evsel *next = perf_evlist__next(evlist, evsel);
+
+ perf_evlist__remove(evlist, evsel);
+ /* Keep idx contiguous */
+ if (next)
+ list_for_each_entry_from(next, &evlist->entries, node)
+ next->idx--;
+
+ return;
}
if (evsel->system_wide) {
@@ -75,16 +128,20 @@ static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
evsel->threads = perf_thread_map__get(evlist->threads);
}
- evlist->all_cpus = perf_cpu_map__merge(evlist->all_cpus, evsel->cpus);
+ perf_cpu_map__merge(&evlist->all_cpus, evsel->cpus);
}
static void perf_evlist__propagate_maps(struct perf_evlist *evlist)
{
- struct perf_evsel *evsel;
+ struct perf_evsel *evsel, *n;
evlist->needs_map_propagation = true;
- perf_evlist__for_each_evsel(evlist, evsel)
+ /* Clear the all_cpus set which will be merged into during propagation. */
+ perf_cpu_map__put(evlist->all_cpus);
+ evlist->all_cpus = NULL;
+
+ list_for_each_entry_safe(evsel, n, &evlist->entries, node)
__perf_evlist__propagate_maps(evlist, evsel);
}
@@ -248,10 +305,10 @@ u64 perf_evlist__read_format(struct perf_evlist *evlist)
static void perf_evlist__id_hash(struct perf_evlist *evlist,
struct perf_evsel *evsel,
- int cpu, int thread, u64 id)
+ int cpu_map_idx, int thread, u64 id)
{
int hash;
- struct perf_sample_id *sid = SID(evsel, cpu, thread);
+ struct perf_sample_id *sid = SID(evsel, cpu_map_idx, thread);
sid->id = id;
sid->evsel = evsel;
@@ -269,21 +326,27 @@ void perf_evlist__reset_id_hash(struct perf_evlist *evlist)
void perf_evlist__id_add(struct perf_evlist *evlist,
struct perf_evsel *evsel,
- int cpu, int thread, u64 id)
+ int cpu_map_idx, int thread, u64 id)
{
- perf_evlist__id_hash(evlist, evsel, cpu, thread, id);
+ if (!SID(evsel, cpu_map_idx, thread))
+ return;
+
+ perf_evlist__id_hash(evlist, evsel, cpu_map_idx, thread, id);
evsel->id[evsel->ids++] = id;
}
int perf_evlist__id_add_fd(struct perf_evlist *evlist,
struct perf_evsel *evsel,
- int cpu, int thread, int fd)
+ int cpu_map_idx, int thread, int fd)
{
u64 read_data[4] = { 0, };
int id_idx = 1; /* The first entry is the counter value */
u64 id;
int ret;
+ if (!SID(evsel, cpu_map_idx, thread))
+ return -1;
+
ret = ioctl(fd, PERF_EVENT_IOC_ID, &id);
if (!ret)
goto add;
@@ -312,7 +375,7 @@ int perf_evlist__id_add_fd(struct perf_evlist *evlist,
id = read_data[id_idx];
add:
- perf_evlist__id_add(evlist, evsel, cpu, thread, id);
+ perf_evlist__id_add(evlist, evsel, cpu_map_idx, thread, id);
return 0;
}
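
As a side note on the reworked propagation rules above: a minimal, self-contained sketch (not part of the patch; the CPU lists are made up) of the public cpumap helpers they lean on, perf_cpu_map__intersect() and perf_cpu_map__equal():

#include <stdio.h>
#include <perf/cpumap.h>

int main(void)
{
	struct perf_cpu_map *user = perf_cpu_map__new("0-3");
	struct perf_cpu_map *pmu  = perf_cpu_map__new("2-5");
	/* Mirrors the "ensure cpus only references valid PMU CPUs" step. */
	struct perf_cpu_map *both = perf_cpu_map__intersect(user, pmu);

	for (int i = 0; i < perf_cpu_map__nr(both); i++)
		printf("cpu %d\n", perf_cpu_map__cpu(both, i).cpu);  /* 2, 3 */

	printf("unchanged? %d\n", perf_cpu_map__equal(both, user));  /* 0 */

	perf_cpu_map__put(both);
	perf_cpu_map__put(pmu);
	perf_cpu_map__put(user);
	return 0;
}
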
diff --git a/tools/lib/perf/evsel.c b/tools/lib/perf/evsel.c
index c07160953224..13a307fc75ae 100644
--- a/tools/lib/perf/evsel.c
+++ b/tools/lib/perf/evsel.c
@@ -5,6 +5,7 @@
#include <perf/evsel.h>
#include <perf/cpumap.h>
#include <perf/threadmap.h>
+#include <linux/hash.h>
#include <linux/list.h>
#include <internal/evsel.h>
#include <linux/zalloc.h>
@@ -23,6 +24,7 @@ void perf_evsel__init(struct perf_evsel *evsel, struct perf_event_attr *attr,
int idx)
{
INIT_LIST_HEAD(&evsel->node);
+ INIT_LIST_HEAD(&evsel->per_stream_periods);
evsel->attr = *attr;
evsel->idx = idx;
evsel->leader = evsel;
@@ -38,8 +40,19 @@ struct perf_evsel *perf_evsel__new(struct perf_event_attr *attr)
return evsel;
}
+void perf_evsel__exit(struct perf_evsel *evsel)
+{
+ assert(evsel->fd == NULL); /* If not, fds were not closed. */
+ assert(evsel->mmap == NULL); /* If not, munmap wasn't called. */
+ assert(evsel->sample_id == NULL); /* If not, free_id wasn't called. */
+ perf_cpu_map__put(evsel->cpus);
+ perf_cpu_map__put(evsel->pmu_cpus);
+ perf_thread_map__put(evsel->threads);
+}
+
void perf_evsel__delete(struct perf_evsel *evsel)
{
+ perf_evsel__exit(evsel);
free(evsel);
}
@@ -531,10 +544,56 @@ int perf_evsel__alloc_id(struct perf_evsel *evsel, int ncpus, int nthreads)
void perf_evsel__free_id(struct perf_evsel *evsel)
{
+ struct perf_sample_id_period *pos, *n;
+
xyarray__delete(evsel->sample_id);
evsel->sample_id = NULL;
zfree(&evsel->id);
evsel->ids = 0;
+
+ perf_evsel_for_each_per_thread_period_safe(evsel, n, pos) {
+ list_del_init(&pos->node);
+ free(pos);
+ }
+}
+
+bool perf_evsel__attr_has_per_thread_sample_period(struct perf_evsel *evsel)
+{
+ return (evsel->attr.sample_type & PERF_SAMPLE_READ) &&
+ (evsel->attr.sample_type & PERF_SAMPLE_TID) &&
+ evsel->attr.inherit;
+}
+
+u64 *perf_sample_id__get_period_storage(struct perf_sample_id *sid, u32 tid, bool per_thread)
+{
+ struct hlist_head *head;
+ struct perf_sample_id_period *res;
+ int hash;
+
+ if (!per_thread)
+ return &sid->period;
+
+ hash = hash_32(tid, PERF_SAMPLE_ID__HLIST_BITS);
+ head = &sid->periods[hash];
+
+ hlist_for_each_entry(res, head, hnode)
+ if (res->tid == tid)
+ return &res->period;
+
+ if (sid->evsel == NULL)
+ return NULL;
+
+ res = zalloc(sizeof(struct perf_sample_id_period));
+ if (res == NULL)
+ return NULL;
+
+ INIT_LIST_HEAD(&res->node);
+ res->tid = tid;
+
+ list_add_tail(&res->node, &sid->evsel->per_stream_periods);
+ hlist_add_head(&res->hnode, &sid->periods[hash]);
+
+ return &res->period;
}
void perf_counts_values__scale(struct perf_counts_values *count,
diff --git a/tools/lib/perf/include/internal/cpumap.h b/tools/lib/perf/include/internal/cpumap.h
index 49649eb51ce4..e2be2d17c32b 100644
--- a/tools/lib/perf/include/internal/cpumap.h
+++ b/tools/lib/perf/include/internal/cpumap.h
@@ -21,10 +21,6 @@ DECLARE_RC_STRUCT(perf_cpu_map) {
struct perf_cpu map[];
};
-#ifndef MAX_NR_CPUS
-#define MAX_NR_CPUS 2048
-#endif
-
struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus);
int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu);
bool perf_cpu_map__is_subset(const struct perf_cpu_map *a, const struct perf_cpu_map *b);
diff --git a/tools/lib/perf/include/internal/evlist.h b/tools/lib/perf/include/internal/evlist.h
index d86ffe8ed483..f43bdb9b6227 100644
--- a/tools/lib/perf/include/internal/evlist.h
+++ b/tools/lib/perf/include/internal/evlist.h
@@ -126,11 +126,11 @@ u64 perf_evlist__read_format(struct perf_evlist *evlist);
void perf_evlist__id_add(struct perf_evlist *evlist,
struct perf_evsel *evsel,
- int cpu, int thread, u64 id);
+ int cpu_map_idx, int thread, u64 id);
int perf_evlist__id_add_fd(struct perf_evlist *evlist,
struct perf_evsel *evsel,
- int cpu, int thread, int fd);
+ int cpu_map_idx, int thread, int fd);
void perf_evlist__reset_id_hash(struct perf_evlist *evlist);
diff --git a/tools/lib/perf/include/internal/evsel.h b/tools/lib/perf/include/internal/evsel.h
index 5cd220a61962..fefe64ba5e26 100644
--- a/tools/lib/perf/include/internal/evsel.h
+++ b/tools/lib/perf/include/internal/evsel.h
@@ -11,6 +11,32 @@
struct perf_thread_map;
struct xyarray;
+/**
+ * The per-thread accumulated period storage node.
+ */
+struct perf_sample_id_period {
+ struct list_head node;
+ struct hlist_node hnode;
+ /* Holds total ID period value for PERF_SAMPLE_READ processing. */
+ u64 period;
+ /* The TID that the values belongs to */
+ u32 tid;
+};
+
+/**
+ * perf_evsel_for_each_per_thread_period_safe - safely iterate through all the
+ * per_stream_periods
+ * @evsel: perf_evsel instance to iterate
+ * @item: struct perf_sample_id_period iterator
+ * @tmp: struct perf_sample_id_period temp iterator
+ */
+#define perf_evsel_for_each_per_thread_period_safe(evsel, tmp, item) \
+ list_for_each_entry_safe(item, tmp, &(evsel)->per_stream_periods, node)
+
+
+#define PERF_SAMPLE_ID__HLIST_BITS 4
+#define PERF_SAMPLE_ID__HLIST_SIZE (1 << PERF_SAMPLE_ID__HLIST_BITS)
+
/*
* Per fd, to map back from PERF_SAMPLE_ID to evsel, only used when there are
* more than one entry in the evlist.
@@ -34,8 +60,32 @@ struct perf_sample_id {
pid_t machine_pid;
struct perf_cpu vcpu;
- /* Holds total ID period value for PERF_SAMPLE_READ processing. */
- u64 period;
+ /*
+ * Per-thread and global event counts are mutually exclusive:
+ * Whilst it is possible to combine events into a group with differing
+ * values of PERF_SAMPLE_READ, it is not valid to have inconsistent
+ * values for `inherit`. Therefore it is not possible to have a
+ * situation where a per-thread event is sampled as a global event;
+ * all !inherit groups are global, and all groups where the sampling
+ * event is inherit + PERF_SAMPLE_READ will be per-thread. Any event
+ * that is part of such a group that is inherit but not PERF_SAMPLE_READ
+ * will be read as per-thread. If such an event can also trigger a
+ * sample (such as with sample_period > 0) then it will not cause
+ * `read_format` to be included in its PERF_RECORD_SAMPLE, and
+ * therefore will not expose the per-thread group members as global.
+ */
+ union {
+ /*
+ * Holds total ID period value for PERF_SAMPLE_READ processing
+ * (when period is not per-thread).
+ */
+ u64 period;
+ /*
+ * Holds total ID period value for PERF_SAMPLE_READ processing
+ * (when period is per-thread).
+ */
+ struct hlist_head periods[PERF_SAMPLE_ID__HLIST_SIZE];
+ };
};
struct perf_evsel {
@@ -49,7 +99,7 @@ struct perf_evsel {
* cpu map for opening the event on, for example, the first CPU on a
* socket for an uncore event.
*/
- struct perf_cpu_map *own_cpus;
+ struct perf_cpu_map *pmu_cpus;
struct perf_thread_map *threads;
struct xyarray *fd;
struct xyarray *mmap;
@@ -58,6 +108,10 @@ struct perf_evsel {
u32 ids;
struct perf_evsel *leader;
+ /*
+ * For events where the read_format value is per-thread rather than
+ * global, stores the per-thread cumulative period.
+ */
+ struct list_head per_stream_periods;
+
/* parse modifier helper */
int nr_members;
/*
@@ -79,6 +133,7 @@ struct perf_evsel {
void perf_evsel__init(struct perf_evsel *evsel, struct perf_event_attr *attr,
int idx);
+void perf_evsel__exit(struct perf_evsel *evsel);
int perf_evsel__alloc_fd(struct perf_evsel *evsel, int ncpus, int nthreads);
void perf_evsel__close_fd(struct perf_evsel *evsel);
void perf_evsel__free_fd(struct perf_evsel *evsel);
@@ -88,4 +143,9 @@ int perf_evsel__apply_filter(struct perf_evsel *evsel, const char *filter);
int perf_evsel__alloc_id(struct perf_evsel *evsel, int ncpus, int nthreads);
void perf_evsel__free_id(struct perf_evsel *evsel);
+bool perf_evsel__attr_has_per_thread_sample_period(struct perf_evsel *evsel);
+
+u64 *perf_sample_id__get_period_storage(struct perf_sample_id *sid, u32 tid,
+ bool per_thread);
+
#endif /* __LIBPERF_INTERNAL_EVSEL_H */
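
For orientation, a rough sketch of how a consumer might use the per-thread period storage declared above; the helper name and the sample fields below are illustrative, and the real call sites live in perf's sample processing, not in this patch:

#include <stdbool.h>
#include <internal/evsel.h>

/* sid was found via the evlist ID hash; tid/period come from the sample. */
static void accumulate_sample_period(struct perf_sample_id *sid, u32 tid,
				     u64 period, bool per_thread)
{
	u64 *storage = perf_sample_id__get_period_storage(sid, tid, per_thread);

	/* NULL means no owning evsel or an allocation failure: skip it. */
	if (storage)
		*storage += period;
}
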
diff --git a/tools/lib/perf/include/perf/cpumap.h b/tools/lib/perf/include/perf/cpumap.h
index 228c6c629b0c..58cc5c5fa47c 100644
--- a/tools/lib/perf/include/perf/cpumap.h
+++ b/tools/lib/perf/include/perf/cpumap.h
@@ -3,12 +3,12 @@
#define __LIBPERF_CPUMAP_H
#include <perf/core.h>
-#include <stdio.h>
#include <stdbool.h>
+#include <stdint.h>
/** A wrapper around a CPU to avoid confusion with the perf_cpu_map's map's indices. */
struct perf_cpu {
- int cpu;
+ int16_t cpu;
};
struct perf_cache {
@@ -37,10 +37,11 @@ LIBPERF_API struct perf_cpu_map *perf_cpu_map__new_online_cpus(void);
* perf_cpu_map__new_online_cpus is returned.
*/
LIBPERF_API struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list);
-LIBPERF_API struct perf_cpu_map *perf_cpu_map__read(FILE *file);
+/** perf_cpu_map__new_int - create a map with the one given cpu. */
+LIBPERF_API struct perf_cpu_map *perf_cpu_map__new_int(int cpu);
LIBPERF_API struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map);
-LIBPERF_API struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
- struct perf_cpu_map *other);
+LIBPERF_API int perf_cpu_map__merge(struct perf_cpu_map **orig,
+ struct perf_cpu_map *other);
LIBPERF_API struct perf_cpu_map *perf_cpu_map__intersect(struct perf_cpu_map *orig,
struct perf_cpu_map *other);
LIBPERF_API void perf_cpu_map__put(struct perf_cpu_map *map);
@@ -61,6 +62,22 @@ LIBPERF_API int perf_cpu_map__nr(const struct perf_cpu_map *cpus);
* perf_cpu_map__has_any_cpu_or_is_empty - is map either empty or has the "any CPU"/dummy value.
*/
LIBPERF_API bool perf_cpu_map__has_any_cpu_or_is_empty(const struct perf_cpu_map *map);
+/**
+ * perf_cpu_map__is_any_cpu_or_is_empty - is map either empty or the "any CPU"/dummy value.
+ */
+LIBPERF_API bool perf_cpu_map__is_any_cpu_or_is_empty(const struct perf_cpu_map *map);
+/**
+ * perf_cpu_map__is_empty - does the map contain no values, not even the
+ * special "any CPU"/dummy value.
+ */
+LIBPERF_API bool perf_cpu_map__is_empty(const struct perf_cpu_map *map);
+/**
+ * perf_cpu_map__min - the minimum CPU value or -1 if empty or just the "any CPU"/dummy value.
+ */
+LIBPERF_API struct perf_cpu perf_cpu_map__min(const struct perf_cpu_map *map);
+/**
+ * perf_cpu_map__max - the maximum CPU value or -1 if empty or just the "any CPU"/dummy value.
+ */
LIBPERF_API struct perf_cpu perf_cpu_map__max(const struct perf_cpu_map *map);
LIBPERF_API bool perf_cpu_map__has(const struct perf_cpu_map *map, struct perf_cpu cpu);
LIBPERF_API bool perf_cpu_map__equal(const struct perf_cpu_map *lhs,
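
To make the perf_cpu_map__merge() signature change above concrete, a hedged sketch; treating a zero return as success is an assumption here, and the destination map may legitimately start out NULL (as the evlist all_cpus handling earlier in this series does):

#include <perf/cpumap.h>

int add_online_cpus(struct perf_cpu_map **all)
{
	struct perf_cpu_map *online = perf_cpu_map__new_online_cpus();
	/* merge() now updates *all in place instead of returning a new map. */
	int err = perf_cpu_map__merge(all, online);

	perf_cpu_map__put(online);
	return err;
}
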
diff --git a/tools/lib/perf/include/perf/event.h b/tools/lib/perf/include/perf/event.h
index ae64090184d3..aa1e91c97a22 100644
--- a/tools/lib/perf/include/perf/event.h
+++ b/tools/lib/perf/include/perf/event.h
@@ -77,6 +77,12 @@ struct perf_record_lost_samples {
__u64 lost;
};
+#define MAX_ID_HDR_ENTRIES 6
+struct perf_record_lost_samples_and_ids {
+ struct perf_record_lost_samples lost;
+ __u64 sample_ids[MAX_ID_HDR_ENTRIES];
+};
+
/*
* PERF_FORMAT_ENABLED | PERF_FORMAT_RUNNING | PERF_FORMAT_ID | PERF_FORMAT_LOST
*/
@@ -285,6 +291,7 @@ struct perf_record_header_event_type {
struct perf_record_header_tracing_data {
struct perf_event_header header;
__u32 size;
+ __u32 pad;
};
#define PERF_RECORD_MISC_BUILD_ID_SIZE (1 << 15)
@@ -451,6 +458,32 @@ struct perf_record_compressed {
char data[];
};
+/*
+ * `header.size` includes the padding we are going to add while writing the record.
+ * `data_size` only includes the size of `data[]` itself.
+ */
+struct perf_record_compressed2 {
+ struct perf_event_header header;
+ __u64 data_size;
+ char data[];
+};
+
+#define BPF_METADATA_KEY_LEN 64
+#define BPF_METADATA_VALUE_LEN 256
+#define BPF_PROG_NAME_LEN KSYM_NAME_LEN
+
+struct perf_record_bpf_metadata_entry {
+ char key[BPF_METADATA_KEY_LEN];
+ char value[BPF_METADATA_VALUE_LEN];
+};
+
+struct perf_record_bpf_metadata {
+ struct perf_event_header header;
+ char prog_name[BPF_PROG_NAME_LEN];
+ __u64 nr_entries;
+ struct perf_record_bpf_metadata_entry entries[];
+};
+
enum perf_user_event_type { /* above any possible kernel type */
PERF_RECORD_USER_TYPE_START = 64,
PERF_RECORD_HEADER_ATTR = 64,
@@ -472,6 +505,8 @@ enum perf_user_event_type { /* above any possible kernel type */
PERF_RECORD_HEADER_FEATURE = 80,
PERF_RECORD_COMPRESSED = 81,
PERF_RECORD_FINISHED_INIT = 82,
+ PERF_RECORD_COMPRESSED2 = 83,
+ PERF_RECORD_BPF_METADATA = 84,
PERF_RECORD_HEADER_MAX
};
@@ -512,6 +547,8 @@ union perf_event {
struct perf_record_time_conv time_conv;
struct perf_record_header_feature feat;
struct perf_record_compressed pack;
+ struct perf_record_compressed2 pack2;
+ struct perf_record_bpf_metadata bpf_metadata;
};
#endif /* __LIBPERF_EVENT_H */
diff --git a/tools/lib/perf/include/perf/threadmap.h b/tools/lib/perf/include/perf/threadmap.h
index 8b40e7777cea..44deb815b817 100644
--- a/tools/lib/perf/include/perf/threadmap.h
+++ b/tools/lib/perf/include/perf/threadmap.h
@@ -14,6 +14,7 @@ LIBPERF_API void perf_thread_map__set_pid(struct perf_thread_map *map, int idx,
LIBPERF_API char *perf_thread_map__comm(struct perf_thread_map *map, int idx);
LIBPERF_API int perf_thread_map__nr(struct perf_thread_map *threads);
LIBPERF_API pid_t perf_thread_map__pid(struct perf_thread_map *map, int idx);
+LIBPERF_API int perf_thread_map__idx(struct perf_thread_map *map, pid_t pid);
LIBPERF_API struct perf_thread_map *perf_thread_map__get(struct perf_thread_map *map);
LIBPERF_API void perf_thread_map__put(struct perf_thread_map *map);
diff --git a/tools/lib/perf/libperf.map b/tools/lib/perf/libperf.map
index 10b3f3722642..fdd8304fe9d0 100644
--- a/tools/lib/perf/libperf.map
+++ b/tools/lib/perf/libperf.map
@@ -6,10 +6,13 @@ LIBPERF_0.0.1 {
perf_cpu_map__get;
perf_cpu_map__put;
perf_cpu_map__new;
- perf_cpu_map__read;
perf_cpu_map__nr;
perf_cpu_map__cpu;
perf_cpu_map__has_any_cpu_or_is_empty;
+ perf_cpu_map__is_any_cpu_or_is_empty;
+ perf_cpu_map__is_empty;
+ perf_cpu_map__has_any_cpu;
+ perf_cpu_map__min;
perf_cpu_map__max;
perf_cpu_map__has;
perf_thread_map__new_array;
diff --git a/tools/lib/perf/mmap.c b/tools/lib/perf/mmap.c
index 0c903c2372c9..ec124eb0ec0a 100644
--- a/tools/lib/perf/mmap.c
+++ b/tools/lib/perf/mmap.c
@@ -279,7 +279,7 @@ union perf_event *perf_mmap__read_event(struct perf_mmap *map)
if (!refcount_read(&map->refcnt))
return NULL;
- /* non-overwirte doesn't pause the ringbuffer */
+ /* non-overwrite doesn't pause the ringbuffer */
if (!map->overwrite)
map->end = perf_mmap__read_head(map);
@@ -508,7 +508,7 @@ int perf_mmap__read_self(struct perf_mmap *map, struct perf_counts_values *count
idx = READ_ONCE(pc->index);
cnt = READ_ONCE(pc->offset);
if (pc->cap_user_rdpmc && idx) {
- s64 evcnt = read_perf_counter(idx - 1);
+ u64 evcnt = read_perf_counter(idx - 1);
u16 width = READ_ONCE(pc->pmc_width);
evcnt <<= 64 - width;
diff --git a/tools/lib/perf/threadmap.c b/tools/lib/perf/threadmap.c
index 07968f3ea093..db431b036f57 100644
--- a/tools/lib/perf/threadmap.c
+++ b/tools/lib/perf/threadmap.c
@@ -97,5 +97,22 @@ int perf_thread_map__nr(struct perf_thread_map *threads)
pid_t perf_thread_map__pid(struct perf_thread_map *map, int idx)
{
+ if (!map) {
+ assert(idx == 0);
+ return -1;
+ }
+
return map->map[idx].pid;
}
+
+int perf_thread_map__idx(struct perf_thread_map *threads, pid_t pid)
+{
+ if (!threads)
+ return pid == -1 ? 0 : -1;
+
+ for (int i = 0; i < threads->nr; ++i) {
+ if (threads->map[i].pid == pid)
+ return i;
+ }
+ return -1;
+}
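
A small usage sketch for the reverse lookup added above; the pid values are made up and perf_thread_map__new_array() is the existing libperf constructor:

#include <assert.h>
#include <sys/types.h>
#include <perf/threadmap.h>

int main(void)
{
	pid_t pids[] = { 100, 200, 300 };
	struct perf_thread_map *map = perf_thread_map__new_array(3, pids);

	assert(perf_thread_map__idx(map, 200) == 1);  /* found at index 1 */
	assert(perf_thread_map__idx(map, 999) == -1); /* not in the map */
	assert(perf_thread_map__idx(NULL, -1) == 0);  /* NULL map: only pid -1 matches */

	perf_thread_map__put(map);
	return 0;
}
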
diff --git a/tools/lib/rbtree.c b/tools/lib/rbtree.c
index 727396de6be5..9e7307186b7f 100644
--- a/tools/lib/rbtree.c
+++ b/tools/lib/rbtree.c
@@ -58,7 +58,7 @@
static inline void rb_set_black(struct rb_node *rb)
{
- rb->__rb_parent_color |= RB_BLACK;
+ rb->__rb_parent_color += RB_BLACK;
}
static inline struct rb_node *rb_red_parent(struct rb_node *red)
diff --git a/tools/lib/slab.c b/tools/lib/slab.c
index 959997fb0652..981a21404f32 100644
--- a/tools/lib/slab.c
+++ b/tools/lib/slab.c
@@ -36,3 +36,19 @@ void kfree(void *p)
printf("Freeing %p to malloc\n", p);
free(p);
}
+
+void *kmalloc_array(size_t n, size_t size, gfp_t gfp)
+{
+ void *ret;
+
+ if (!(gfp & __GFP_DIRECT_RECLAIM))
+ return NULL;
+
+ ret = calloc(n, size);
+ uatomic_inc(&kmalloc_nr_allocated);
+ if (kmalloc_verbose)
+ printf("Allocating %p from calloc\n", ret);
+ if (gfp & __GFP_ZERO)
+ memset(ret, 0, n * size);
+ return ret;
+}
diff --git a/tools/lib/string.c b/tools/lib/string.c
index 8b6892f959ab..3126d2cff716 100644
--- a/tools/lib/string.c
+++ b/tools/lib/string.c
@@ -153,6 +153,19 @@ char *strim(char *s)
return skip_spaces(s);
}
+/*
+ * remove_spaces - Removes all spaces from @s
+ */
+void remove_spaces(char *s)
+{
+ char *d = s;
+
+ do {
+ while (*d == ' ')
+ ++d;
+ } while ((*s++ = *d++));
+}
+
/**
* strreplace - Replace all occurrences of character in string.
* @s: The string to operate on.
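
To illustrate remove_spaces() added above (the buffer contents are hypothetical):

	char buf[] = "perf  stat -e cycles";

	remove_spaces(buf);
	/* buf now reads "perfstat-ecycles": every ' ' is squeezed out in place. */
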
diff --git a/tools/lib/subcmd/Makefile b/tools/lib/subcmd/Makefile
index b87213263a5e..8703ab487b68 100644
--- a/tools/lib/subcmd/Makefile
+++ b/tools/lib/subcmd/Makefile
@@ -38,10 +38,8 @@ endif
ifeq ($(DEBUG),1)
CFLAGS += -O0
-else ifeq ($(CC_NO_CLANG), 0)
- CFLAGS += -O3
else
- CFLAGS += -O6
+ CFLAGS += -O3
endif
# Treat warnings as errors unless directed not to
@@ -76,7 +74,7 @@ include $(srctree)/tools/build/Makefile.include
all: fixdep $(LIBFILE)
-$(SUBCMD_IN): FORCE
+$(SUBCMD_IN): fixdep FORCE
@$(MAKE) $(build)=libsubcmd
$(LIBFILE): $(SUBCMD_IN)
diff --git a/tools/lib/subcmd/help.c b/tools/lib/subcmd/help.c
index 8561b0f01a24..ddaeb4eb3e24 100644
--- a/tools/lib/subcmd/help.c
+++ b/tools/lib/subcmd/help.c
@@ -9,6 +9,7 @@
#include <sys/stat.h>
#include <unistd.h>
#include <dirent.h>
+#include <assert.h>
#include "subcmd-util.h"
#include "help.h"
#include "exec-cmd.h"
@@ -74,6 +75,9 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
size_t ci, cj, ei;
int cmp;
+ if (!excludes->cnt)
+ return;
+
ci = cj = ei = 0;
while (ci < cmds->cnt && ei < excludes->cnt) {
cmp = strcmp(cmds->names[ci]->name, excludes->names[ei]->name);
@@ -82,10 +86,11 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
ci++;
cj++;
} else {
- zfree(&cmds->names[cj]);
- cmds->names[cj++] = cmds->names[ci++];
+ cmds->names[cj++] = cmds->names[ci];
+ cmds->names[ci++] = NULL;
}
} else if (cmp == 0) {
+ zfree(&cmds->names[ci]);
ci++;
ei++;
} else if (cmp > 0) {
@@ -94,12 +99,12 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
}
if (ci != cj) {
while (ci < cmds->cnt) {
- zfree(&cmds->names[cj]);
- cmds->names[cj++] = cmds->names[ci++];
+ cmds->names[cj++] = cmds->names[ci];
+ cmds->names[ci++] = NULL;
}
}
for (ci = cj; ci < cmds->cnt; ci++)
- zfree(&cmds->names[ci]);
+ assert(cmds->names[ci] == NULL);
cmds->cnt = cj;
}
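
As a rough sketch of the exclude_cmds() contract (assuming the add_cmdname() helper declared next to it in subcmd/help.h, and remembering that both lists must already be sorted for the merge-style walk above to work):

	struct cmdnames cmds = { 0 }, excludes = { 0 };

	add_cmdname(&cmds, "annotate", 8);
	add_cmdname(&cmds, "record", 6);
	add_cmdname(&cmds, "report", 6);
	add_cmdname(&excludes, "record", 6);

	exclude_cmds(&cmds, &excludes);
	/* cmds now holds "annotate" and "report"; the excluded "record" entry was freed. */
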
diff --git a/tools/lib/subcmd/parse-options.c b/tools/lib/subcmd/parse-options.c
index 9fa75943f2ed..555d617c1f50 100644
--- a/tools/lib/subcmd/parse-options.c
+++ b/tools/lib/subcmd/parse-options.c
@@ -806,18 +806,30 @@ static int option__cmp(const void *va, const void *vb)
static struct option *options__order(const struct option *opts)
{
- int nr_opts = 0, nr_group = 0, len;
- const struct option *o = opts;
- struct option *opt, *ordered, *group;
-
- for (o = opts; o->type != OPTION_END; o++)
- ++nr_opts;
-
- len = sizeof(*o) * (nr_opts + 1);
- ordered = malloc(len);
- if (!ordered)
- goto out;
- memcpy(ordered, opts, len);
+ int nr_opts = 0, nr_group = 0, nr_parent = 0, len;
+ const struct option *o = NULL, *p = opts;
+ struct option *opt, *ordered = NULL, *group;
+
+ /* flatten the options that have parents */
+ for (p = opts; p != NULL; p = o->parent) {
+ for (o = p; o->type != OPTION_END; o++)
+ ++nr_opts;
+
+ /*
+ * the length is given by the number of options plus a null
+ * terminator for the last loop iteration.
+ */
+ len = sizeof(*o) * (nr_opts + !o->parent);
+ group = realloc(ordered, len);
+ if (!group)
+ goto out;
+ ordered = group;
+ memcpy(&ordered[nr_parent], p, sizeof(*o) * (nr_opts - nr_parent));
+
+ nr_parent = nr_opts;
+ }
+ /* copy the last OPTION_END */
+ memcpy(&ordered[nr_opts], o, sizeof(*o));
/* sort each option group individually */
for (opt = group = ordered; opt->type != OPTION_END; opt++) {
diff --git a/tools/lib/subcmd/run-command.c b/tools/lib/subcmd/run-command.c
index 5cdac2162532..b7510f83209a 100644
--- a/tools/lib/subcmd/run-command.c
+++ b/tools/lib/subcmd/run-command.c
@@ -2,8 +2,10 @@
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
+#include <ctype.h>
#include <fcntl.h>
#include <string.h>
+#include <linux/compiler.h>
#include <linux/string.h>
#include <errno.h>
#include <sys/wait.h>
@@ -122,6 +124,8 @@ int start_command(struct child_process *cmd)
}
if (cmd->preexec_cb)
cmd->preexec_cb();
+ if (cmd->no_exec_cmd)
+ exit(cmd->no_exec_cmd(cmd));
if (cmd->exec_cmd) {
execv_cmd(cmd->argv);
} else {
@@ -163,43 +167,107 @@ int start_command(struct child_process *cmd)
return 0;
}
-static int wait_or_whine(pid_t pid)
+static int wait_or_whine(struct child_process *cmd, bool block)
{
- char sbuf[STRERR_BUFSIZE];
+ bool finished = cmd->finished;
+ int result = cmd->finish_result;
- for (;;) {
+ while (!finished) {
int status, code;
- pid_t waiting = waitpid(pid, &status, 0);
+ pid_t waiting = waitpid(cmd->pid, &status, block ? 0 : WNOHANG);
+
+ if (!block && waiting == 0)
+ break;
+
+ if (waiting < 0 && errno == EINTR)
+ continue;
+ finished = true;
if (waiting < 0) {
- if (errno == EINTR)
- continue;
+ char sbuf[STRERR_BUFSIZE];
+
fprintf(stderr, " Error: waitpid failed (%s)",
str_error_r(errno, sbuf, sizeof(sbuf)));
- return -ERR_RUN_COMMAND_WAITPID;
- }
- if (waiting != pid)
- return -ERR_RUN_COMMAND_WAITPID_WRONG_PID;
- if (WIFSIGNALED(status))
- return -ERR_RUN_COMMAND_WAITPID_SIGNAL;
-
- if (!WIFEXITED(status))
- return -ERR_RUN_COMMAND_WAITPID_NOEXIT;
- code = WEXITSTATUS(status);
- switch (code) {
- case 127:
- return -ERR_RUN_COMMAND_EXEC;
- case 0:
- return 0;
- default:
- return -code;
+ result = -ERR_RUN_COMMAND_WAITPID;
+ } else if (waiting != cmd->pid) {
+ result = -ERR_RUN_COMMAND_WAITPID_WRONG_PID;
+ } else if (WIFSIGNALED(status)) {
+ result = -ERR_RUN_COMMAND_WAITPID_SIGNAL;
+ } else if (!WIFEXITED(status)) {
+ result = -ERR_RUN_COMMAND_WAITPID_NOEXIT;
+ } else {
+ code = WEXITSTATUS(status);
+ switch (code) {
+ case 127:
+ result = -ERR_RUN_COMMAND_EXEC;
+ break;
+ case 0:
+ result = 0;
+ break;
+ default:
+ result = -code;
+ break;
+ }
}
}
+ if (finished) {
+ cmd->finished = 1;
+ cmd->finish_result = result;
+ }
+ return result;
+}
+
+/*
+ * Conservative estimate of the number of characters needed to hold a decoded
+ * integer: assume each 3 bits needs a character byte, plus a possible sign
+ * character.
+ */
+#ifndef is_signed_type
+#define is_signed_type(type) (((type)(-1)) < (type)1)
+#endif
+#define MAX_STRLEN_TYPE(type) (sizeof(type) * 8 / 3 + (is_signed_type(type) ? 1 : 0))
+
+int check_if_command_finished(struct child_process *cmd)
+{
+#ifdef __linux__
+ char filename[6 + MAX_STRLEN_TYPE(typeof(cmd->pid)) + 7 + 1];
+ char status_line[256];
+ FILE *status_file;
+
+ /*
+ * Check by reading /proc/<pid>/status as calling waitpid causes
+ * stdout/stderr to be closed and data to be lost.
+ */
+ sprintf(filename, "/proc/%u/status", cmd->pid);
+ status_file = fopen(filename, "r");
+ if (status_file == NULL) {
+ /* Open failed, assume finish_command was called. */
+ return true;
+ }
+ while (fgets(status_line, sizeof(status_line), status_file) != NULL) {
+ char *p;
+
+ if (strncmp(status_line, "State:", 6))
+ continue;
+
+ fclose(status_file);
+ p = status_line + 6;
+ while (isspace(*p))
+ p++;
+ return *p == 'Z' ? 1 : 0;
+ }
+ /* Read failed, assume finish_command was called. */
+ fclose(status_file);
+ return 1;
+#else
+ wait_or_whine(cmd, /*block=*/false);
+ return cmd->finished;
+#endif
}
int finish_command(struct child_process *cmd)
{
- return wait_or_whine(cmd->pid);
+ return wait_or_whine(cmd, /*block=*/true);
}
int run_command(struct child_process *cmd)
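
A hedged sketch of how the non-blocking check added above might be used around a child process; the command, the poll interval, and the include path are illustrative:

#include <unistd.h>
#include <subcmd/run-command.h>

static int run_and_poll(void)
{
	const char *argv[] = { "sleep", "1", NULL };
	struct child_process cmd = { .argv = argv };
	int err = start_command(&cmd);

	if (err)
		return err;

	/* Poll without reaping, so captured stdout/stderr is not lost early. */
	while (!check_if_command_finished(&cmd))
		usleep(10000);

	return finish_command(&cmd);
}
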
diff --git a/tools/lib/subcmd/run-command.h b/tools/lib/subcmd/run-command.h
index 17d969c6add3..b2d39de6e690 100644
--- a/tools/lib/subcmd/run-command.h
+++ b/tools/lib/subcmd/run-command.h
@@ -41,15 +41,20 @@ struct child_process {
int err;
const char *dir;
const char *const *env;
+ int finish_result;
unsigned no_stdin:1;
unsigned no_stdout:1;
unsigned no_stderr:1;
unsigned exec_cmd:1; /* if this is to be external sub-command */
unsigned stdout_to_stderr:1;
+ unsigned finished:1;
void (*preexec_cb)(void);
+ /* If set, call function in child rather than doing an exec. */
+ int (*no_exec_cmd)(struct child_process *process);
};
int start_command(struct child_process *);
+int check_if_command_finished(struct child_process *);
int finish_command(struct child_process *);
int run_command(struct child_process *);
diff --git a/tools/lib/subcmd/subcmd-util.h b/tools/lib/subcmd/subcmd-util.h
index dfac76e35ac7..c742b08815dc 100644
--- a/tools/lib/subcmd/subcmd-util.h
+++ b/tools/lib/subcmd/subcmd-util.h
@@ -20,8 +20,8 @@ static __noreturn inline void die(const char *err, ...)
va_start(params, err);
report(" Fatal: ", err, params);
- exit(128);
va_end(params);
+ exit(128);
}
#define zfree(ptr) ({ free(*ptr); *ptr = NULL; })
diff --git a/tools/lib/symbol/Makefile b/tools/lib/symbol/Makefile
index 13d43c6f92b4..426b845edfac 100644
--- a/tools/lib/symbol/Makefile
+++ b/tools/lib/symbol/Makefile
@@ -31,11 +31,7 @@ CFLAGS := $(EXTRA_WARNINGS) $(EXTRA_CFLAGS)
CFLAGS += -ggdb3 -Wall -Wextra -std=gnu11 -U_FORTIFY_SOURCE -fPIC
ifeq ($(DEBUG),0)
-ifeq ($(CC_NO_CLANG), 0)
CFLAGS += -O3
-else
- CFLAGS += -O6
-endif
endif
ifeq ($(DEBUG),0)
diff --git a/tools/lib/thermal/Makefile b/tools/lib/thermal/Makefile
index 2d0d255fd0e1..41aa7a324ff4 100644
--- a/tools/lib/thermal/Makefile
+++ b/tools/lib/thermal/Makefile
@@ -39,19 +39,6 @@ libdir = $(prefix)/$(libdir_relative)
libdir_SQ = $(subst ','\'',$(libdir))
libdir_relative_SQ = $(subst ','\'',$(libdir_relative))
-ifeq ("$(origin V)", "command line")
- VERBOSE = $(V)
-endif
-ifndef VERBOSE
- VERBOSE = 0
-endif
-
-ifeq ($(VERBOSE),1)
- Q =
-else
- Q = @
-endif
-
# Set compile option CFLAGS
ifdef EXTRA_CFLAGS
CFLAGS := $(EXTRA_CFLAGS)
@@ -59,8 +46,12 @@ else
CFLAGS := -g -Wall
endif
+NL3_CFLAGS = $(shell pkg-config --cflags libnl-3.0 2>/dev/null)
+ifeq ($(NL3_CFLAGS),)
+NL3_CFLAGS = -I/usr/include/libnl3
+endif
+
INCLUDES = \
--I/usr/include/libnl3 \
-I$(srctree)/tools/lib/thermal/include \
-I$(srctree)/tools/lib/ \
-I$(srctree)/tools/include \
@@ -72,6 +63,7 @@ INCLUDES = \
override CFLAGS += $(EXTRA_WARNINGS)
override CFLAGS += -Werror -Wall
override CFLAGS += -fPIC
+override CFLAGS += $(NL3_CFLAGS)
override CFLAGS += $(INCLUDES)
override CFLAGS += -fvisibility=hidden
override CFGLAS += -Wl,-L.
@@ -121,7 +113,9 @@ all: fixdep
clean:
$(call QUIET_CLEAN, libthermal) $(RM) $(LIBTHERMAL_A) \
- *.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBTHERMAL_VERSION) .*.d .*.cmd LIBTHERMAL-CFLAGS $(LIBTHERMAL_PC)
+ *.o *~ *.a *.so *.so.$(VERSION) *.so.$(LIBTHERMAL_VERSION) \
+ .*.d .*.cmd LIBTHERMAL-CFLAGS $(LIBTHERMAL_PC) \
+ $(srctree)/tools/$(THERMAL_UAPI)
$(LIBTHERMAL_PC):
$(QUIET_GEN)sed -e "s|@PREFIX@|$(prefix)|" \
@@ -145,7 +139,7 @@ endef
install_lib: libs
$(call QUIET_INSTALL, $(LIBTHERMAL_ALL)) \
$(call do_install_mkdir,$(libdir_SQ)); \
- cp -fpR $(LIBTHERMAL_ALL) $(DESTDIR)$(libdir_SQ)
+ cp -fR --preserve=mode,timestamp $(LIBTHERMAL_ALL) $(DESTDIR)$(libdir_SQ)
install_headers:
$(call QUIET_INSTALL, headers) \
diff --git a/tools/lib/thermal/commands.c b/tools/lib/thermal/commands.c
index 73d4d4e8d6ec..4998cec793ed 100644
--- a/tools/lib/thermal/commands.c
+++ b/tools/lib/thermal/commands.c
@@ -5,6 +5,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
+#include <limits.h>
#include <thermal.h>
#include "thermal_nl.h"
@@ -33,6 +34,11 @@ static struct nla_policy thermal_genl_policy[THERMAL_GENL_ATTR_MAX + 1] = {
[THERMAL_GENL_ATTR_CDEV_CUR_STATE] = { .type = NLA_U32 },
[THERMAL_GENL_ATTR_CDEV_MAX_STATE] = { .type = NLA_U32 },
[THERMAL_GENL_ATTR_CDEV_NAME] = { .type = NLA_STRING },
+
+ /* Thresholds */
+ [THERMAL_GENL_ATTR_THRESHOLD] = { .type = NLA_NESTED },
+ [THERMAL_GENL_ATTR_THRESHOLD_TEMP] = { .type = NLA_U32 },
+ [THERMAL_GENL_ATTR_THRESHOLD_DIRECTION] = { .type = NLA_U32 },
};
static int parse_tz_get(struct genl_info *info, struct thermal_zone **tz)
@@ -182,6 +188,48 @@ static int parse_tz_get_gov(struct genl_info *info, struct thermal_zone *tz)
return THERMAL_SUCCESS;
}
+static int parse_threshold_get(struct genl_info *info, struct thermal_zone *tz)
+{
+ struct nlattr *attr;
+ struct thermal_threshold *__tt = NULL;
+ size_t size = 0;
+ int rem;
+
+ /*
+ * The size contains the size of the array and we want to
+ * access the last element, size - 1.
+ *
+ * The variable size is initialized to zero but it will then be
+ * incremented by the first if() statement. The message
+ * attributes are ordered, so the first if() statement will
+ * always run before the second one. If that ever turns out not
+ * to be the case, it is a kernel bug.
+ */
+ nla_for_each_nested(attr, info->attrs[THERMAL_GENL_ATTR_THRESHOLD], rem) {
+
+ if (nla_type(attr) == THERMAL_GENL_ATTR_THRESHOLD_TEMP) {
+
+ size++;
+
+ __tt = realloc(__tt, sizeof(*__tt) * (size + 2));
+ if (!__tt)
+ return THERMAL_ERROR;
+
+ __tt[size - 1].temperature = nla_get_u32(attr);
+ }
+
+ if (nla_type(attr) == THERMAL_GENL_ATTR_THRESHOLD_DIRECTION)
+ __tt[size - 1].direction = nla_get_u32(attr);
+ }
+
+ if (__tt)
+ __tt[size].temperature = INT_MAX;
+
+ tz->thresholds = __tt;
+
+ return THERMAL_SUCCESS;
+}
+
static int handle_netlink(struct nl_cache_ops *unused,
struct genl_cmd *cmd,
struct genl_info *info, void *arg)
@@ -210,6 +258,10 @@ static int handle_netlink(struct nl_cache_ops *unused,
ret = parse_tz_get_gov(info, arg);
break;
+ case THERMAL_GENL_CMD_THRESHOLD_GET:
+ ret = parse_threshold_get(info, arg);
+ break;
+
default:
return THERMAL_ERROR;
}
@@ -253,6 +305,34 @@ static struct genl_cmd thermal_cmds[] = {
.c_maxattr = THERMAL_GENL_ATTR_MAX,
.c_attr_policy = thermal_genl_policy,
},
+ {
+ .c_id = THERMAL_GENL_CMD_THRESHOLD_GET,
+ .c_name = (char *)"Get thresholds list",
+ .c_msg_parser = handle_netlink,
+ .c_maxattr = THERMAL_GENL_ATTR_MAX,
+ .c_attr_policy = thermal_genl_policy,
+ },
+ {
+ .c_id = THERMAL_GENL_CMD_THRESHOLD_ADD,
+ .c_name = (char *)"Add a threshold",
+ .c_msg_parser = handle_netlink,
+ .c_maxattr = THERMAL_GENL_ATTR_MAX,
+ .c_attr_policy = thermal_genl_policy,
+ },
+ {
+ .c_id = THERMAL_GENL_CMD_THRESHOLD_DELETE,
+ .c_name = (char *)"Delete a threshold",
+ .c_msg_parser = handle_netlink,
+ .c_maxattr = THERMAL_GENL_ATTR_MAX,
+ .c_attr_policy = thermal_genl_policy,
+ },
+ {
+ .c_id = THERMAL_GENL_CMD_THRESHOLD_FLUSH,
+ .c_name = (char *)"Flush the thresholds",
+ .c_msg_parser = handle_netlink,
+ .c_maxattr = THERMAL_GENL_ATTR_MAX,
+ .c_attr_policy = thermal_genl_policy,
+ },
};
static struct genl_ops thermal_cmd_ops = {
@@ -261,9 +341,41 @@ static struct genl_ops thermal_cmd_ops = {
.o_ncmds = ARRAY_SIZE(thermal_cmds),
};
-static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int cmd,
- int flags, void *arg)
+struct cmd_param {
+ int tz_id;
+ int temp;
+ int direction;
+};
+
+typedef int (*cmd_cb_t)(struct nl_msg *, struct cmd_param *);
+
+static int thermal_genl_tz_id_encode(struct nl_msg *msg, struct cmd_param *p)
{
+ if (nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, p->tz_id))
+ return -1;
+
+ return 0;
+}
+
+static int thermal_genl_threshold_encode(struct nl_msg *msg, struct cmd_param *p)
+{
+ if (thermal_genl_tz_id_encode(msg, p))
+ return -1;
+
+ if (nla_put_u32(msg, THERMAL_GENL_ATTR_THRESHOLD_TEMP, p->temp))
+ return -1;
+
+ if (nla_put_u32(msg, THERMAL_GENL_ATTR_THRESHOLD_DIRECTION, p->direction))
+ return -1;
+
+ return 0;
+}
+
+static thermal_error_t thermal_genl_auto(struct thermal_handler *th, cmd_cb_t cmd_cb,
+ struct cmd_param *param,
+ int cmd, int flags, void *arg)
+{
+ thermal_error_t ret = THERMAL_ERROR;
struct nl_msg *msg;
void *hdr;
@@ -274,45 +386,95 @@ static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int
hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, thermal_cmd_ops.o_id,
0, flags, cmd, THERMAL_GENL_VERSION);
if (!hdr)
- return THERMAL_ERROR;
+ goto out;
- if (id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, id))
- return THERMAL_ERROR;
+ if (cmd_cb && cmd_cb(msg, param))
+ goto out;
if (nl_send_msg(th->sk_cmd, th->cb_cmd, msg, genl_handle_msg, arg))
- return THERMAL_ERROR;
+ goto out;
+ ret = THERMAL_SUCCESS;
+out:
nlmsg_free(msg);
- return THERMAL_SUCCESS;
+ return ret;
}
thermal_error_t thermal_cmd_get_tz(struct thermal_handler *th, struct thermal_zone **tz)
{
- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_TZ_GET_ID,
+ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_TZ_GET_ID,
NLM_F_DUMP | NLM_F_ACK, tz);
}
thermal_error_t thermal_cmd_get_cdev(struct thermal_handler *th, struct thermal_cdev **tc)
{
- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_CDEV_GET,
+ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_CDEV_GET,
NLM_F_DUMP | NLM_F_ACK, tc);
}
thermal_error_t thermal_cmd_get_trip(struct thermal_handler *th, struct thermal_zone *tz)
{
- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TRIP,
- 0, tz);
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_TZ_GET_TRIP, 0, tz);
}
thermal_error_t thermal_cmd_get_governor(struct thermal_handler *th, struct thermal_zone *tz)
{
- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
}
thermal_error_t thermal_cmd_get_temp(struct thermal_handler *th, struct thermal_zone *tz)
{
- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
+}
+
+thermal_error_t thermal_cmd_threshold_get(struct thermal_handler *th,
+ struct thermal_zone *tz)
+{
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_THRESHOLD_GET, 0, tz);
+}
+
+thermal_error_t thermal_cmd_threshold_add(struct thermal_handler *th,
+ struct thermal_zone *tz,
+ int temperature,
+ int direction)
+{
+ struct cmd_param p = { .tz_id = tz->id, .temp = temperature, .direction = direction };
+
+ return thermal_genl_auto(th, thermal_genl_threshold_encode, &p,
+ THERMAL_GENL_CMD_THRESHOLD_ADD, 0, tz);
+}
+
+thermal_error_t thermal_cmd_threshold_delete(struct thermal_handler *th,
+ struct thermal_zone *tz,
+ int temperature,
+ int direction)
+{
+ struct cmd_param p = { .tz_id = tz->id, .temp = temperature, .direction = direction };
+
+ return thermal_genl_auto(th, thermal_genl_threshold_encode, &p,
+ THERMAL_GENL_CMD_THRESHOLD_DELETE, 0, tz);
+}
+
+thermal_error_t thermal_cmd_threshold_flush(struct thermal_handler *th,
+ struct thermal_zone *tz)
+{
+ struct cmd_param p = { .tz_id = tz->id };
+
+ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
+ THERMAL_GENL_CMD_THRESHOLD_FLUSH, 0, tz);
}
thermal_error_t thermal_cmd_exit(struct thermal_handler *th)
diff --git a/tools/lib/thermal/events.c b/tools/lib/thermal/events.c
index a7a55d1a0c4c..bd851c869029 100644
--- a/tools/lib/thermal/events.c
+++ b/tools/lib/thermal/events.c
@@ -94,6 +94,30 @@ static int handle_thermal_event(struct nl_msg *n, void *arg)
case THERMAL_GENL_EVENT_TZ_GOV_CHANGE:
return ops->gov_change(nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_ID]),
nla_get_string(attrs[THERMAL_GENL_ATTR_GOV_NAME]), arg);
+
+ case THERMAL_GENL_EVENT_THRESHOLD_ADD:
+ return ops->threshold_add(nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_ID]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_THRESHOLD_TEMP]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_THRESHOLD_DIRECTION]), arg);
+
+ case THERMAL_GENL_EVENT_THRESHOLD_DELETE:
+ return ops->threshold_delete(nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_ID]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_THRESHOLD_TEMP]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_THRESHOLD_DIRECTION]), arg);
+
+ case THERMAL_GENL_EVENT_THRESHOLD_FLUSH:
+ return ops->threshold_flush(nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_ID]), arg);
+
+ case THERMAL_GENL_EVENT_THRESHOLD_UP:
+ return ops->threshold_up(nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_ID]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_TEMP]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_PREV_TEMP]), arg);
+
+ case THERMAL_GENL_EVENT_THRESHOLD_DOWN:
+ return ops->threshold_down(nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_ID]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_TEMP]),
+ nla_get_u32(attrs[THERMAL_GENL_ATTR_TZ_PREV_TEMP]), arg);
+
default:
return -1;
}
@@ -101,19 +125,24 @@ static int handle_thermal_event(struct nl_msg *n, void *arg)
static void thermal_events_ops_init(struct thermal_events_ops *ops)
{
- enabled_ops[THERMAL_GENL_EVENT_TZ_CREATE] = !!ops->tz_create;
- enabled_ops[THERMAL_GENL_EVENT_TZ_DELETE] = !!ops->tz_delete;
- enabled_ops[THERMAL_GENL_EVENT_TZ_DISABLE] = !!ops->tz_disable;
- enabled_ops[THERMAL_GENL_EVENT_TZ_ENABLE] = !!ops->tz_enable;
- enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_UP] = !!ops->trip_high;
- enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_DOWN] = !!ops->trip_low;
- enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_CHANGE] = !!ops->trip_change;
- enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_ADD] = !!ops->trip_add;
- enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_DELETE] = !!ops->trip_delete;
- enabled_ops[THERMAL_GENL_EVENT_CDEV_ADD] = !!ops->cdev_add;
- enabled_ops[THERMAL_GENL_EVENT_CDEV_DELETE] = !!ops->cdev_delete;
- enabled_ops[THERMAL_GENL_EVENT_CDEV_STATE_UPDATE] = !!ops->cdev_update;
- enabled_ops[THERMAL_GENL_EVENT_TZ_GOV_CHANGE] = !!ops->gov_change;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_CREATE] = !!ops->tz_create;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_DELETE] = !!ops->tz_delete;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_DISABLE] = !!ops->tz_disable;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_ENABLE] = !!ops->tz_enable;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_UP] = !!ops->trip_high;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_DOWN] = !!ops->trip_low;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_CHANGE] = !!ops->trip_change;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_ADD] = !!ops->trip_add;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_TRIP_DELETE] = !!ops->trip_delete;
+ enabled_ops[THERMAL_GENL_EVENT_CDEV_ADD] = !!ops->cdev_add;
+ enabled_ops[THERMAL_GENL_EVENT_CDEV_DELETE] = !!ops->cdev_delete;
+ enabled_ops[THERMAL_GENL_EVENT_CDEV_STATE_UPDATE] = !!ops->cdev_update;
+ enabled_ops[THERMAL_GENL_EVENT_TZ_GOV_CHANGE] = !!ops->gov_change;
+ enabled_ops[THERMAL_GENL_EVENT_THRESHOLD_ADD] = !!ops->threshold_add;
+ enabled_ops[THERMAL_GENL_EVENT_THRESHOLD_DELETE] = !!ops->threshold_delete;
+ enabled_ops[THERMAL_GENL_EVENT_THRESHOLD_FLUSH] = !!ops->threshold_flush;
+ enabled_ops[THERMAL_GENL_EVENT_THRESHOLD_UP] = !!ops->threshold_up;
+ enabled_ops[THERMAL_GENL_EVENT_THRESHOLD_DOWN] = !!ops->threshold_down;
}
thermal_error_t thermal_events_handle(struct thermal_handler *th, void *arg)
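
A sketch of wiring up the new threshold notifications; only the thermal_events_ops fields added above are touched, and hooking ev_ops into thermal_init()/thermal_events_handle() is assumed to follow the existing libthermal flow:

#include <stdio.h>
#include <thermal.h>

static int on_threshold_up(int tz_id, int temp, int prev_temp, void *arg)
{
	printf("tz %d: threshold crossed upward (%d -> %d)\n", tz_id, prev_temp, temp);
	return 0;
}

static int on_threshold_flush(int tz_id, void *arg)
{
	printf("tz %d: all thresholds flushed\n", tz_id);
	return 0;
}

static struct thermal_events_ops ev_ops = {
	.threshold_up	 = on_threshold_up,
	.threshold_flush = on_threshold_flush,
	/* Callbacks left NULL are simply not enabled by thermal_events_ops_init(). */
};
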
diff --git a/tools/lib/thermal/include/thermal.h b/tools/lib/thermal/include/thermal.h
index 1abc560602cf..818ecdfb46e5 100644
--- a/tools/lib/thermal/include/thermal.h
+++ b/tools/lib/thermal/include/thermal.h
@@ -4,11 +4,20 @@
#define __LIBTHERMAL_H
#include <linux/thermal.h>
+#include <sys/types.h>
#ifndef LIBTHERMAL_API
#define LIBTHERMAL_API __attribute__((visibility("default")))
#endif
+#ifndef THERMAL_THRESHOLD_WAY_UP
+#define THERMAL_THRESHOLD_WAY_UP 0x1
+#endif
+
+#ifndef THERMAL_THRESHOLD_WAY_DOWN
+#define THERMAL_THRESHOLD_WAY_DOWN 0x2
+#endif
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -31,6 +40,11 @@ struct thermal_events_ops {
int (*cdev_delete)(int cdev_id, void *arg);
int (*cdev_update)(int cdev_id, int cur_state, void *arg);
int (*gov_change)(int tz_id, const char *gov_name, void *arg);
+ int (*threshold_add)(int tz_id, int temperature, int direction, void *arg);
+ int (*threshold_delete)(int tz_id, int temperature, int direction, void *arg);
+ int (*threshold_flush)(int tz_id, void *arg);
+ int (*threshold_up)(int tz_id, int temp, int prev_temp, void *arg);
+ int (*threshold_down)(int tz_id, int temp, int prev_temp, void *arg);
};
struct thermal_ops {
@@ -45,12 +59,18 @@ struct thermal_trip {
int hyst;
};
+struct thermal_threshold {
+ int temperature;
+ int direction;
+};
+
struct thermal_zone {
int id;
int temp;
char name[THERMAL_NAME_LENGTH];
char governor[THERMAL_NAME_LENGTH];
struct thermal_trip *trip;
+ struct thermal_threshold *thresholds;
};
struct thermal_cdev {
@@ -74,12 +94,16 @@ typedef int (*cb_tt_t)(struct thermal_trip *, void *);
typedef int (*cb_tc_t)(struct thermal_cdev *, void *);
+typedef int (*cb_th_t)(struct thermal_threshold *, void *);
+
LIBTHERMAL_API int for_each_thermal_zone(struct thermal_zone *tz, cb_tz_t cb, void *arg);
LIBTHERMAL_API int for_each_thermal_trip(struct thermal_trip *tt, cb_tt_t cb, void *arg);
LIBTHERMAL_API int for_each_thermal_cdev(struct thermal_cdev *cdev, cb_tc_t cb, void *arg);
+LIBTHERMAL_API int for_each_thermal_threshold(struct thermal_threshold *th, cb_th_t cb, void *arg);
+
LIBTHERMAL_API struct thermal_zone *thermal_zone_find_by_name(struct thermal_zone *tz,
const char *name);
@@ -124,6 +148,22 @@ LIBTHERMAL_API thermal_error_t thermal_cmd_get_governor(struct thermal_handler *
LIBTHERMAL_API thermal_error_t thermal_cmd_get_temp(struct thermal_handler *th,
struct thermal_zone *tz);
+LIBTHERMAL_API thermal_error_t thermal_cmd_threshold_get(struct thermal_handler *th,
+ struct thermal_zone *tz);
+
+LIBTHERMAL_API thermal_error_t thermal_cmd_threshold_add(struct thermal_handler *th,
+ struct thermal_zone *tz,
+ int temperature,
+ int direction);
+
+LIBTHERMAL_API thermal_error_t thermal_cmd_threshold_delete(struct thermal_handler *th,
+ struct thermal_zone *tz,
+ int temperature,
+ int direction);
+
+LIBTHERMAL_API thermal_error_t thermal_cmd_threshold_flush(struct thermal_handler *th,
+ struct thermal_zone *tz);
+
/*
* Netlink thermal samples
*/
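
And a hedged example of driving the new threshold commands from the public API above; the 75000 millidegree value is illustrative, and the comparison against THERMAL_SUCCESS is spelled out rather than assuming it is zero:

#include <thermal.h>

static int arm_threshold(struct thermal_handler *th, struct thermal_zone *tz)
{
	/* Ask to be notified when the zone rises past 75 degrees C. */
	if (thermal_cmd_threshold_add(th, tz, 75000,
				      THERMAL_THRESHOLD_WAY_UP) != THERMAL_SUCCESS)
		return -1;

	/* Refresh tz->thresholds with what the kernel now has configured. */
	return thermal_cmd_threshold_get(th, tz) == THERMAL_SUCCESS ? 0 : -1;
}
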
diff --git a/tools/lib/thermal/libthermal.map b/tools/lib/thermal/libthermal.map
index d5e77738c7a4..1d3d0c04e4b6 100644
--- a/tools/lib/thermal/libthermal.map
+++ b/tools/lib/thermal/libthermal.map
@@ -1,22 +1,30 @@
LIBTHERMAL_0.0.1 {
global:
- thermal_init;
for_each_thermal_zone;
for_each_thermal_trip;
for_each_thermal_cdev;
+ for_each_thermal_threshold;
thermal_zone_find_by_name;
thermal_zone_find_by_id;
thermal_zone_discover;
thermal_init;
+ thermal_exit;
+ thermal_events_exit;
thermal_events_init;
thermal_events_handle;
thermal_events_fd;
+ thermal_cmd_exit;
thermal_cmd_init;
thermal_cmd_get_tz;
thermal_cmd_get_cdev;
thermal_cmd_get_trip;
thermal_cmd_get_governor;
thermal_cmd_get_temp;
+ thermal_cmd_threshold_get;
+ thermal_cmd_threshold_add;
+ thermal_cmd_threshold_delete;
+ thermal_cmd_threshold_flush;
+ thermal_sampling_exit;
thermal_sampling_init;
thermal_sampling_handle;
thermal_sampling_fd;
diff --git a/tools/lib/thermal/sampling.c b/tools/lib/thermal/sampling.c
index 70577423a9f0..f67c1f9ea1d7 100644
--- a/tools/lib/thermal/sampling.c
+++ b/tools/lib/thermal/sampling.c
@@ -16,6 +16,8 @@ static int handle_thermal_sample(struct nl_msg *n, void *arg)
struct thermal_handler_param *thp = arg;
struct thermal_handler *th = thp->th;
+ arg = thp->arg;
+
genlmsg_parse(nlh, 0, attrs, THERMAL_GENL_ATTR_MAX, NULL);
switch (genlhdr->cmd) {
diff --git a/tools/lib/thermal/thermal.c b/tools/lib/thermal/thermal.c
index 72a76dc205bc..6f02e3539159 100644
--- a/tools/lib/thermal/thermal.c
+++ b/tools/lib/thermal/thermal.c
@@ -1,10 +1,24 @@
// SPDX-License-Identifier: LGPL-2.1+
// Copyright (C) 2022, Linaro Ltd - Daniel Lezcano <daniel.lezcano@linaro.org>
#include <stdio.h>
+#include <limits.h>
#include <thermal.h>
#include "thermal_nl.h"
+int for_each_thermal_threshold(struct thermal_threshold *th, cb_th_t cb, void *arg)
+{
+ int i, ret = 0;
+
+ if (!th)
+ return 0;
+
+ for (i = 0; th[i].temperature != INT_MAX; i++)
+ ret |= cb(&th[i], arg);
+
+ return ret;
+}
+
int for_each_thermal_cdev(struct thermal_cdev *cdev, cb_tc_t cb, void *arg)
{
int i, ret = 0;
@@ -80,6 +94,9 @@ static int __thermal_zone_discover(struct thermal_zone *tz, void *th)
if (thermal_cmd_get_trip(th, tz) < 0)
return -1;
+ if (thermal_cmd_threshold_get(th, tz))
+ return -1;
+
if (thermal_cmd_get_governor(th, tz))
return -1;
diff --git a/tools/memory-model/Documentation/README b/tools/memory-model/Documentation/README
index db90a26dbdf4..88870b0bceea 100644
--- a/tools/memory-model/Documentation/README
+++ b/tools/memory-model/Documentation/README
@@ -9,6 +9,8 @@ depending on what you know and what you would like to learn. Please note
that the documents later in this list assume that the reader understands
the material provided by documents earlier in this list.
+If you get lost in LKMM-specific terms, glossary.txt might help.
+
o You are new to Linux-kernel concurrency: simple.txt
o You have some background in Linux-kernel concurrency, and would
@@ -21,6 +23,12 @@ o You are familiar with the Linux-kernel concurrency primitives
that you need, and just want to get started with LKMM litmus
tests: litmus-tests.txt
+o You need to locklessly access shared variables that are otherwise
+ protected by a lock: locking.txt
+
+ This locking.txt file expands on the "Locking" section in
+ recipes.txt, but is self-contained.
+
o You are familiar with Linux-kernel concurrency, and would
like a detailed intuitive understanding of LKMM, including
situations involving more than two threads: recipes.txt
@@ -28,12 +36,18 @@ o You are familiar with Linux-kernel concurrency, and would
o You would like a detailed understanding of what your compiler can
and cannot do to control dependencies: control-dependencies.txt
+o You would like to mark concurrent normal accesses to shared
+ variables so that intentional "racy" accesses can be properly
+ documented, especially when you are responding to complaints
+ from KCSAN: access-marking.txt
+
o You are familiar with Linux-kernel concurrency and the use of
LKMM, and would like a quick reference: cheatsheet.txt
o You are familiar with Linux-kernel concurrency and the use
of LKMM, and would like to learn about LKMM's requirements,
- rationale, and implementation: explanation.txt
+ rationale, and implementation: explanation.txt and
+ herd-representation.txt
o You are interested in the publications related to LKMM, including
hardware manuals, academic literature, standards-committee
@@ -47,6 +61,10 @@ DESCRIPTION OF FILES
README
This file.
+access-marking.txt
+ Guidelines for marking intentionally concurrent accesses to
+ shared memory.
+
cheatsheet.txt
Quick-reference guide to the Linux-kernel memory model.
@@ -57,10 +75,21 @@ control-dependencies.txt
explanation.txt
Detailed description of the memory model.
+glossary.txt
+ Brief definitions of LKMM-related terms.
+
+herd-representation.txt
+ The (abstract) representation of the Linux-kernel concurrency
+ primitives in terms of events.
+
litmus-tests.txt
The format, features, capabilities, and limitations of the litmus
tests that LKMM can evaluate.
+locking.txt
+ Rules for accessing lock-protected shared variables outside of
+ their corresponding critical sections.
+
ordering.txt
Overview of the Linux kernel's low-level memory-ordering
primitives by category.
diff --git a/tools/memory-model/Documentation/access-marking.txt b/tools/memory-model/Documentation/access-marking.txt
index 65778222183e..3fbe77fd564a 100644
--- a/tools/memory-model/Documentation/access-marking.txt
+++ b/tools/memory-model/Documentation/access-marking.txt
@@ -6,7 +6,8 @@ normal accesses to shared memory, that is "normal" as in accesses that do
not use read-modify-write atomic operations. It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN). This discussion
-builds on an earlier LWN article [1].
+builds on an earlier LWN article [1] and Linux Foundation mentorship
+session [2].
ACCESS-MARKING OPTIONS
@@ -24,6 +25,11 @@ The Linux kernel provides the following access-marking options:
4. WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
The various forms of atomic_set() also fit in here.
+5. __data_racy, for example "int __data_racy a;"
+
+6. KCSAN's negative-marking assertions, ASSERT_EXCLUSIVE_ACCESS()
+ and ASSERT_EXCLUSIVE_WRITER(), are described in the
+ "ACCESS-DOCUMENTATION OPTIONS" section below.
These may be used in combination, as shown in this admittedly improbable
example:
@@ -31,7 +37,7 @@ example:
WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));
Neither plain C-language accesses nor data_race() (#1 and #2 above) place
-any sort of constraint on the compiler's choice of optimizations [2].
+any sort of constraint on the compiler's choice of optimizations [3].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
@@ -205,6 +211,23 @@ because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.
+Use of __data_racy
+------------------
+
+Adding the __data_racy type qualifier to the declaration of a variable
+causes KCSAN to treat all accesses to that variable as if they were
+enclosed by data_race(). However, __data_racy does not affect the
+compiler, though one could imagine hardened kernel builds treating the
+__data_racy type qualifier as if it was the volatile keyword.
+
+Note well that __data_racy is subject to the same pointer-declaration
+rules as are other type qualifiers such as const and volatile.
+For example:
+
+ int __data_racy *p; // Pointer to data-racy data.
+ int *__data_racy p; // Data-racy pointer to non-data-racy data.
+
+
ACCESS-DOCUMENTATION OPTIONS
============================
@@ -342,7 +365,7 @@ as follows:
Because foo is read locklessly, all accesses are marked. The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
-concurrent lockless write.
+concurrent write, whether marked or not.
Lock-Protected Writes With Heuristic Lockless Reads
@@ -594,5 +617,8 @@ REFERENCES
[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
https://lwn.net/Articles/816854/
-[2] "Who's afraid of a big bad optimizing compiler?"
+[2] "The Kernel Concurrency Sanitizer"
+ https://www.linuxfoundation.org/webinars/the-kernel-concurrency-sanitizer
+
+[3] "Who's afraid of a big bad optimizing compiler?"
https://lwn.net/Articles/793253/
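
For a compact worked example in the spirit of the section amended above (the lock, the variable, and compute_new_foo() are made-up names):

	/* Writer: foo is modified only under update_lock, but read locklessly. */
	spin_lock(&update_lock);
	ASSERT_EXCLUSIVE_WRITER(foo);	/* KCSAN: any concurrent writer is a bug. */
	WRITE_ONCE(foo, compute_new_foo());
	spin_unlock(&update_lock);

	/* Lockless reader: marked so KCSAN knows this data race is intentional. */
	latest = READ_ONCE(foo);
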
diff --git a/tools/memory-model/Documentation/explanation.txt b/tools/memory-model/Documentation/explanation.txt
index 6dc8b3642458..34aa3172071b 100644
--- a/tools/memory-model/Documentation/explanation.txt
+++ b/tools/memory-model/Documentation/explanation.txt
@@ -1896,7 +1896,7 @@ following respects:
3. The srcu_down_read() and srcu_up_read() primitives work
exactly like srcu_read_lock() and srcu_read_unlock(), except
- that matching calls don't have to execute on the same CPU.
+ that matching calls don't have to execute within the same context.
(The names are meant to be suggestive of operations on
semaphores.) Since the matching is determined by the domain
pointer and index value, these primitives make it possible for
diff --git a/tools/memory-model/Documentation/glossary.txt b/tools/memory-model/Documentation/glossary.txt
index 6f3d16dbf467..7ead94bffa4e 100644
--- a/tools/memory-model/Documentation/glossary.txt
+++ b/tools/memory-model/Documentation/glossary.txt
@@ -15,14 +15,14 @@ Address Dependency: When the address of a later memory access is computed
3 do_something(p->a);
4 rcu_read_unlock();
- In this case, because the address of "p->a" on line 3 is computed
- from the value returned by the rcu_dereference() on line 2, the
- address dependency extends from that rcu_dereference() to that
- "p->a". In rare cases, optimizing compilers can destroy address
- dependencies. Please see Documentation/RCU/rcu_dereference.rst
- for more information.
+ In this case, because the address of "p->a" on line 3 is computed
+ from the value returned by the rcu_dereference() on line 2, the
+ address dependency extends from that rcu_dereference() to that
+ "p->a". In rare cases, optimizing compilers can destroy address
+ dependencies. Please see Documentation/RCU/rcu_dereference.rst
+ for more information.
- See also "Control Dependency" and "Data Dependency".
+ See also "Control Dependency" and "Data Dependency".
Acquire: With respect to a lock, acquiring that lock, for example,
using spin_lock(). With respect to a non-lock shared variable,
@@ -59,12 +59,12 @@ Control Dependency: When a later store's execution depends on a test
1 if (READ_ONCE(x))
2 WRITE_ONCE(y, 1);
- Here, the control dependency extends from the READ_ONCE() on
- line 1 to the WRITE_ONCE() on line 2. Control dependencies are
- fragile, and can be easily destroyed by optimizing compilers.
- Please see control-dependencies.txt for more information.
+ Here, the control dependency extends from the READ_ONCE() on
+ line 1 to the WRITE_ONCE() on line 2. Control dependencies are
+ fragile, and can be easily destroyed by optimizing compilers.
+ Please see control-dependencies.txt for more information.
- See also "Address Dependency" and "Data Dependency".
+ See also "Address Dependency" and "Data Dependency".
Cycle: Memory-barrier pairing is restricted to a pair of CPUs, as the
name suggests. And in a great many cases, a pair of CPUs is all
@@ -72,10 +72,10 @@ Cycle: Memory-barrier pairing is restricted to a pair of CPUs, as the
extended to additional CPUs, and the result is called a "cycle".
In a cycle, each CPU's ordering interacts with that of the next:
- CPU 0 CPU 1 CPU 2
- WRITE_ONCE(x, 1); WRITE_ONCE(y, 1); WRITE_ONCE(z, 1);
- smp_mb(); smp_mb(); smp_mb();
- r0 = READ_ONCE(y); r1 = READ_ONCE(z); r2 = READ_ONCE(x);
+ CPU 0 CPU 1 CPU 2
+ WRITE_ONCE(x, 1); WRITE_ONCE(y, 1); WRITE_ONCE(z, 1);
+ smp_mb(); smp_mb(); smp_mb();
+ r0 = READ_ONCE(y); r1 = READ_ONCE(z); r2 = READ_ONCE(x);
CPU 0's smp_mb() interacts with that of CPU 1, which interacts
with that of CPU 2, which in turn interacts with that of CPU 0
diff --git a/tools/memory-model/Documentation/herd-representation.txt b/tools/memory-model/Documentation/herd-representation.txt
new file mode 100644
index 000000000000..4e19b4f2a476
--- /dev/null
+++ b/tools/memory-model/Documentation/herd-representation.txt
@@ -0,0 +1,113 @@
+#
+# Legend:
+# R, a Load event
+# W, a Store event
+# F, a Fence event
+# LKR, a Lock-Read event
+# LKW, a Lock-Write event
+# UL, an Unlock event
+# LF, a Lock-Fail event
+# RL, a Read-Locked event
+# RU, a Read-Unlocked event
+# R*, a Load event included in RMW
+# W*, a Store event included in RMW
+# SRCU, a Sleepable-Read-Copy-Update event
+#
+# po, a Program-Order link
+# rmw, a Read-Modify-Write link - every rmw link is a po link
+#
+# By convention, a blank line in a cell means "same as the preceding line".
+#
+# Note that the syntactic representation does not always match the sets and
+# relations in linux-kernel.cat, due to redefinitions in linux-kernel.bell and
+# lock.cat. For example, the po link between LKR and LKW is upgraded to an rmw
+# link, and W[ACQUIRE] are not included in the Acquire set.
+#
+# Disclaimer. The table includes representations of "add" and "and" operations;
+# corresponding/identical representations of "sub", "inc", "dec" and "or", "xor",
+# "andnot" operations are omitted.
+#
+ ------------------------------------------------------------------------------
+ | C macro | Events |
+ ------------------------------------------------------------------------------
+ | Non-RMW ops | |
+ ------------------------------------------------------------------------------
+ | READ_ONCE | R[ONCE] |
+ | atomic_read | |
+ | WRITE_ONCE | W[ONCE] |
+ | atomic_set | |
+ | smp_load_acquire | R[ACQUIRE] |
+ | atomic_read_acquire | |
+ | smp_store_release | W[RELEASE] |
+ | atomic_set_release | |
+ | smp_store_mb | W[ONCE] ->po F[MB] |
+ | smp_mb | F[MB] |
+ | smp_rmb | F[rmb] |
+ | smp_wmb | F[wmb] |
+ | smp_mb__before_atomic | F[before-atomic] |
+ | smp_mb__after_atomic | F[after-atomic] |
+ | spin_unlock | UL |
+ | spin_is_locked | On success: RL |
+ | | On failure: RU |
+ | smp_mb__after_spinlock | F[after-spinlock] |
+ | smp_mb__after_unlock_lock | F[after-unlock-lock] |
+ | rcu_read_lock | F[rcu-lock] |
+ | rcu_read_unlock | F[rcu-unlock] |
+ | synchronize_rcu | F[sync-rcu] |
+ | rcu_dereference | R[ONCE] |
+ | rcu_assign_pointer | W[RELEASE] |
+ | srcu_read_lock | R[srcu-lock] |
+ | srcu_down_read | |
+ | srcu_read_unlock | W[srcu-unlock] |
+ | srcu_up_read | |
+ | synchronize_srcu | SRCU[sync-srcu] |
+ | smp_mb__after_srcu_read_unlock | F[after-srcu-read-unlock] |
+ ------------------------------------------------------------------------------
+ | RMW ops w/o return value | |
+ ------------------------------------------------------------------------------
+ | atomic_add | R*[NORETURN] ->rmw W*[NORETURN] |
+ | atomic_and | |
+ | spin_lock | LKR ->po LKW |
+ ------------------------------------------------------------------------------
+ | RMW ops w/ return value | |
+ ------------------------------------------------------------------------------
+ | atomic_add_return | R*[MB] ->rmw W*[MB] |
+ | atomic_fetch_add | |
+ | atomic_fetch_and | |
+ | atomic_xchg | |
+ | xchg | |
+ | atomic_add_negative | |
+ | atomic_add_return_relaxed | R*[ONCE] ->rmw W*[ONCE] |
+ | atomic_fetch_add_relaxed | |
+ | atomic_fetch_and_relaxed | |
+ | atomic_xchg_relaxed | |
+ | xchg_relaxed | |
+ | atomic_add_negative_relaxed | |
+ | atomic_add_return_acquire | R*[ACQUIRE] ->rmw W*[ACQUIRE] |
+ | atomic_fetch_add_acquire | |
+ | atomic_fetch_and_acquire | |
+ | atomic_xchg_acquire | |
+ | xchg_acquire | |
+ | atomic_add_negative_acquire | |
+ | atomic_add_return_release | R*[RELEASE] ->rmw W*[RELEASE] |
+ | atomic_fetch_add_release | |
+ | atomic_fetch_and_release | |
+ | atomic_xchg_release | |
+ | xchg_release | |
+ | atomic_add_negative_release | |
+ ------------------------------------------------------------------------------
+ | Conditional RMW ops | |
+ ------------------------------------------------------------------------------
+ | atomic_cmpxchg | On success: R*[MB] ->rmw W*[MB] |
+ | | On failure: R*[MB] |
+ | cmpxchg | |
+ | atomic_add_unless | |
+ | atomic_cmpxchg_relaxed | On success: R*[ONCE] ->rmw W*[ONCE] |
+ | | On failure: R*[ONCE] |
+ | atomic_cmpxchg_acquire | On success: R*[ACQUIRE] ->rmw W*[ACQUIRE] |
+ | | On failure: R*[ACQUIRE] |
+ | atomic_cmpxchg_release | On success: R*[RELEASE] ->rmw W*[RELEASE] |
+ | | On failure: R*[RELEASE] |
+ | spin_trylock | On success: LKR ->po LKW |
+ | | On failure: LF |
+ ------------------------------------------------------------------------------
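
As a concrete illustration of the table (a sketch, not part of this patch;
the test name is made up): xchg() is listed above as R*[MB] ->rmw W*[MB],
i.e. a full-barrier RMW, so in the message-passing-style test below the
outcome r1=1 && r2=0 is expected to be forbidden.

	C herd-representation-xchg-sketch

	{}

	P0(int *x, int *y)
	{
		int r0;

		WRITE_ONCE(*x, 1);
		r0 = xchg(y, 1);
	}

	P1(int *x, int *y)
	{
		int r1;
		int r2;

		r1 = READ_ONCE(*y);
		smp_mb();
		r2 = READ_ONCE(*x);
	}

	exists (1:r1=1 /\ 1:r2=0)
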
diff --git a/tools/memory-model/Documentation/locking.txt b/tools/memory-model/Documentation/locking.txt
index 65c898c64a93..d6dc3cc34ab6 100644
--- a/tools/memory-model/Documentation/locking.txt
+++ b/tools/memory-model/Documentation/locking.txt
@@ -1,3 +1,8 @@
+[!] Note:
+ This file expands on the "Locking" section of recipes.txt,
+ focusing on locklessly accessing shared variables that are
+ otherwise protected by a lock.
+
Locking
=======
diff --git a/tools/memory-model/Documentation/ordering.txt b/tools/memory-model/Documentation/ordering.txt
index 9b0949d3f5ec..7ab3744929d8 100644
--- a/tools/memory-model/Documentation/ordering.txt
+++ b/tools/memory-model/Documentation/ordering.txt
@@ -223,7 +223,7 @@ The Linux kernel's compiler barrier is barrier(). This primitive
prohibits compiler code-motion optimizations that might move memory
references across the point in the code containing the barrier(), but
does not constrain hardware memory ordering. For example, this can be
-used to prevent to compiler from moving code across an infinite loop:
+used to prevent the compiler from moving code across an infinite loop:
WRITE_ONCE(x, 1);
while (dontstop)
@@ -274,7 +274,7 @@ different pieces of the concurrent algorithm. The variable stored to
by the smp_store_release(), in this case "y", will normally be used in
an acquire operation in other parts of the concurrent algorithm.
-To see the performance advantages, suppose that the above example read
+To see the performance advantages, suppose that the above example reads
from "x" instead of writing to it. Then an smp_wmb() could not guarantee
ordering, and an smp_mb() would be needed instead:
@@ -394,17 +394,17 @@ from the value returned by the rcu_dereference() or srcu_dereference()
to that subsequent memory access.
A call to rcu_dereference() for a given RCU-protected pointer is
-usually paired with a call to a call to rcu_assign_pointer() for that
-same pointer in much the same way that a call to smp_load_acquire() is
-paired with a call to smp_store_release(). Calls to rcu_dereference()
-and rcu_assign_pointer are often buried in other APIs, for example,
+usually paired with a call to rcu_assign_pointer() for that same pointer
+in much the same way that a call to smp_load_acquire() is paired with
+a call to smp_store_release(). Calls to rcu_dereference() and
+rcu_assign_pointer() are often buried in other APIs, for example,
the RCU list API members defined in include/linux/rculist.h. For more
information, please see the docbook headers in that file, the most
-recent LWN article on the RCU API (https://lwn.net/Articles/777036/),
+recent LWN article on the RCU API (https://lwn.net/Articles/988638/),
and of course the material in Documentation/RCU.
If the pointer value is manipulated between the rcu_dereference()
-that returned it and a later dereference(), please read
+that returned it and a later rcu_dereference(), please read
Documentation/RCU/rcu_dereference.rst. It can also be quite helpful to
review uses in the Linux kernel.
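
To make the pairing concrete, here is a sketch (not part of this patch) in
the spirit of the existing MP+onceassign+derefonce test in
tools/memory-model/litmus-tests/: P1 seeing the newly assigned pointer but
the pre-initialization value of x is expected to be forbidden, thanks to the
release semantics of rcu_assign_pointer() and the address dependency out of
rcu_dereference().

	C rcu-pointer-pairing-sketch

	{
		int *p=&y;
		int x;
		int y;
	}

	P0(int *x, int **p)
	{
		WRITE_ONCE(*x, 1);
		rcu_assign_pointer(*p, x);
	}

	P1(int **p, int *x)
	{
		int *r0;
		int r1;

		rcu_read_lock();
		r0 = rcu_dereference(*p);
		r1 = READ_ONCE(*r0);
		rcu_read_unlock();
	}

	exists (1:r0=x /\ 1:r1=0)
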
@@ -457,7 +457,7 @@ described earlier in this document.
These operations come in three categories:
o Marked writes, such as WRITE_ONCE() and atomic_set(). These
- primitives required the compiler to emit the corresponding store
+ primitives require the compiler to emit the corresponding store
instructions in the expected execution order, thus suppressing
a number of destructive optimizations. However, they provide no
hardware ordering guarantees, and in fact many CPUs will happily
@@ -465,7 +465,7 @@ o Marked writes, such as WRITE_ONCE() and atomic_set(). These
operations, unless these operations are to the same variable.
o Marked reads, such as READ_ONCE() and atomic_read(). These
- primitives required the compiler to emit the corresponding load
+ primitives require the compiler to emit the corresponding load
instructions in the expected execution order, thus suppressing
a number of destructive optimizations. However, they provide no
hardware ordering guarantees, and in fact many CPUs will happily
@@ -506,7 +506,7 @@ of the old value and the new value.
Unmarked C-language accesses are unordered, and are also subject to
any number of compiler optimizations, many of which can break your
-concurrent code. It is possible to used unmarked C-language accesses for
+concurrent code. It is possible to use unmarked C-language accesses for
shared variables that are subject to concurrent access, but great care
is required on an ongoing basis. The compiler-constraining barrier()
primitive can be helpful, as can the various ordering primitives discussed
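
The "no hardware ordering guarantees" point above is easy to see with the
store-buffering pattern (a sketch, not part of this patch; compare
SB+poonceonces.litmus and SB+fencembonceonces.litmus in
tools/memory-model/litmus-tests/). With nothing but ONCE accesses the
outcome r0=0 && r1=0 is expected to be allowed; adding smp_mb() between
each CPU's store and load forbids it.

	C sb-onceonly-sketch

	{}

	P0(int *x, int *y)
	{
		int r0;

		WRITE_ONCE(*x, 1);
		r0 = READ_ONCE(*y);
	}

	P1(int *x, int *y)
	{
		int r1;

		WRITE_ONCE(*y, 1);
		r1 = READ_ONCE(*x);
	}

	exists (0:r0=0 /\ 1:r1=0)
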
diff --git a/tools/memory-model/Documentation/recipes.txt b/tools/memory-model/Documentation/recipes.txt
index 03f58b11c252..52115ee5f393 100644
--- a/tools/memory-model/Documentation/recipes.txt
+++ b/tools/memory-model/Documentation/recipes.txt
@@ -61,6 +61,10 @@ usual) some things to be careful of:
Locking
-------
+[!] Note:
+ locking.txt expands on this section, providing more detail on
+ locklessly accessing lock-protected shared variables.
+
Locking is well-known and straightforward, at least if you don't think
about it too hard. And the basic rule is indeed quite simple: Any CPU that
has acquired a given lock sees any changes previously seen or made by any
diff --git a/tools/memory-model/Documentation/references.txt b/tools/memory-model/Documentation/references.txt
index c5fdfd19df24..d691390620b3 100644
--- a/tools/memory-model/Documentation/references.txt
+++ b/tools/memory-model/Documentation/references.txt
@@ -46,8 +46,7 @@ o ARM Ltd. (Ed.). 2014. "ARM Architecture Reference Manual (ARMv8,
o Imagination Technologies, LTD. 2015. "MIPS(R) Architecture
For Programmers, Volume II-A: The MIPS64(R) Instruction,
- Set Reference Manual". Imagination Technologies,
- LTD. https://imgtec.com/?do-download=4302.
+ Set Reference Manual". Imagination Technologies, LTD.
o Shaked Flur, Kathryn E. Gray, Christopher Pulte, Susmit
Sarkar, Ali Sezgin, Luc Maranget, Will Deacon, and Peter
diff --git a/tools/memory-model/Documentation/simple.txt b/tools/memory-model/Documentation/simple.txt
index 4c789ec8334f..2df148630cdc 100644
--- a/tools/memory-model/Documentation/simple.txt
+++ b/tools/memory-model/Documentation/simple.txt
@@ -134,7 +134,7 @@ Packaged primitives: Sequence locking
Lockless programming is considered by many to be more difficult than
lock-based programming, but there are a few lockless design patterns that
have been built out into an API. One of these APIs is sequence locking.
-Although this APIs can be used in extremely complex ways, there are simple
+Although this API can be used in extremely complex ways, there are simple
and effective ways of using it that avoid the need to pay attention to
memory ordering.
@@ -205,7 +205,7 @@ If you want to keep things simple, use the initialization and read-out
operations from the previous section only when there are no racing
accesses. Otherwise, use only fully ordered operations when accessing
or modifying the variable. This approach guarantees that code prior
-to a given access to that variable will be seen by all CPUs has having
+to a given access to that variable will be seen by all CPUs as having
happened before any code following any later access to that same variable.
Please note that per-CPU functions are not atomic operations and
@@ -266,5 +266,5 @@ More complex use cases
======================
If the alternatives above do not do what you need, please look at the
-recipes-pairs.txt file to peel off the next layer of the memory-ordering
+recipes.txt file to peel off the next layer of the memory-ordering
onion.
diff --git a/tools/memory-model/README b/tools/memory-model/README
index dab38904206a..64c860863aa9 100644
--- a/tools/memory-model/README
+++ b/tools/memory-model/README
@@ -20,7 +20,7 @@ that litmus test to be exercised within the Linux kernel.
REQUIREMENTS
============
-Version 7.52 or higher of the "herd7" and "klitmus7" tools must be
+Version 7.58 or higher of the "herd7" and "klitmus7" tools must be
downloaded separately:
https://github.com/herd/herdtools7
@@ -79,7 +79,7 @@ Several thousand more example litmus tests are available here:
https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git/tree/CodeSamples/formal/herd
https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git/tree/CodeSamples/formal/litmus
-Documentation describing litmus tests and now to use them may be found
+Documentation describing litmus tests and how to use them may be found
here:
tools/memory-model/Documentation/litmus-tests.txt
diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index ce068700939c..fe65998002b9 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -13,17 +13,18 @@
"Linux-kernel memory consistency model"
-enum Accesses = 'once (*READ_ONCE,WRITE_ONCE*) ||
- 'release (*smp_store_release*) ||
- 'acquire (*smp_load_acquire*) ||
- 'noreturn (* R of non-return RMW *)
-instructions R[{'once,'acquire,'noreturn}]
-instructions W[{'once,'release}]
-instructions RMW[{'once,'acquire,'release}]
+enum Accesses = 'ONCE (*READ_ONCE,WRITE_ONCE*) ||
+ 'RELEASE (*smp_store_release*) ||
+ 'ACQUIRE (*smp_load_acquire*) ||
+ 'NORETURN (* R of non-return RMW *) ||
+ 'MB (*xchg(),cmpxchg(),...*)
+instructions R[Accesses]
+instructions W[Accesses]
+instructions RMW[Accesses]
enum Barriers = 'wmb (*smp_wmb*) ||
'rmb (*smp_rmb*) ||
- 'mb (*smp_mb*) ||
+ 'MB (*smp_mb*) ||
'barrier (*barrier*) ||
'rcu-lock (*rcu_read_lock*) ||
'rcu-unlock (*rcu_read_unlock*) ||
@@ -35,6 +36,17 @@ enum Barriers = 'wmb (*smp_wmb*) ||
'after-srcu-read-unlock (*smp_mb__after_srcu_read_unlock*)
instructions F[Barriers]
+
+(*
+ * Filter out syntactic annotations that do not provide the corresponding
+ * semantic ordering, such as Acquire on a store or Mb on a failed RMW.
+ *)
+let FailedRMW = RMW \ (domain(rmw) | range(rmw))
+let Acquire = ACQUIRE \ W \ FailedRMW
+let Release = RELEASE \ R \ FailedRMW
+let Mb = MB \ FailedRMW
+let Noreturn = NORETURN \ W
+
(* SRCU *)
enum SRCU = 'srcu-lock || 'srcu-unlock || 'sync-srcu
instructions SRCU[SRCU]
@@ -73,7 +85,7 @@ flag ~empty rcu-rscs & (po ; [Sync-srcu] ; po) as invalid-sleep
flag ~empty different-values(srcu-rscs) as srcu-bad-value-match
(* Compute marked and plain memory accesses *)
-let Marked = (~M) | IW | Once | Release | Acquire | domain(rmw) | range(rmw) |
+let Marked = (~M) | IW | ONCE | RELEASE | ACQUIRE | MB | RMW |
LKR | LKW | UL | LF | RL | RU | Srcu-lock | Srcu-unlock
let Plain = M \ Marked
@@ -82,3 +94,6 @@ let carry-dep = (data ; [~ Srcu-unlock] ; rfi)*
let addr = carry-dep ; addr
let ctrl = carry-dep ; ctrl
let data = carry-dep ; data
+
+flag ~empty (if "lkmmv2" then 0 else _)
+ as this-model-requires-variant-higher-than-lkmmv1
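
The FailedRMW filtering above has visible consequences. As a sketch (not
part of this patch; the test name is made up), the cmpxchg() below fails
because *z is initially 0 rather than 2, so it is stripped of its Mb tag
and provides no ordering between the two stores on P0; the outcome is
therefore expected to remain allowed.

	C cmpxchg-fail-sketch

	{}

	P0(int *x, int *y, int *z)
	{
		int r0;

		WRITE_ONCE(*x, 1);
		r0 = cmpxchg(z, 2, 1);
		WRITE_ONCE(*y, 1);
	}

	P1(int *x, int *y)
	{
		int r1;
		int r2;

		r1 = READ_ONCE(*y);
		smp_mb();
		r2 = READ_ONCE(*x);
	}

	exists (0:r0=0 /\ 1:r1=1 /\ 1:r2=0)
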
diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index adf3c4f41229..d7e7bf13c831 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -34,6 +34,16 @@ let R4rmb = R \ Noreturn (* Reads for which rmb works *)
let rmb = [R4rmb] ; fencerel(Rmb) ; [R4rmb]
let wmb = [W] ; fencerel(Wmb) ; [W]
let mb = ([M] ; fencerel(Mb) ; [M]) |
+ (*
+ * full-barrier RMWs (successful cmpxchg(), xchg(), etc.) act as
+	 * though they were enclosed by smp_mb().
+ * The effect of these virtual smp_mb() is formalized by adding
+ * Mb tags to the read and write of the operation, and providing
+ * the same ordering as though there were additional po edges
+ * between the Mb tag and the read resp. write.
+ *)
+ ([M] ; po ; [Mb & R]) |
+ ([Mb & W] ; po ; [M]) |
([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
diff --git a/tools/memory-model/linux-kernel.cfg b/tools/memory-model/linux-kernel.cfg
index 3c8098e99f41..69b04f3aad73 100644
--- a/tools/memory-model/linux-kernel.cfg
+++ b/tools/memory-model/linux-kernel.cfg
@@ -1,6 +1,7 @@
macros linux-kernel.def
bell linux-kernel.bell
model linux-kernel.cat
+variant lkmmv2
graph columns
squished true
showevents noregs
diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index 88a39601f525..49e402782e49 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -6,18 +6,18 @@
// which appeared in ASPLOS 2018.
// ONCE
-READ_ONCE(X) __load{once}(X)
-WRITE_ONCE(X,V) { __store{once}(X,V); }
+READ_ONCE(X) __load{ONCE}(X)
+WRITE_ONCE(X,V) { __store{ONCE}(X,V); }
// Release Acquire and friends
-smp_store_release(X,V) { __store{release}(*X,V); }
-smp_load_acquire(X) __load{acquire}(*X)
-rcu_assign_pointer(X,V) { __store{release}(X,V); }
-rcu_dereference(X) __load{once}(X)
-smp_store_mb(X,V) { __store{once}(X,V); __fence{mb}; }
+smp_store_release(X,V) { __store{RELEASE}(*X,V); }
+smp_load_acquire(X) __load{ACQUIRE}(*X)
+rcu_assign_pointer(X,V) { __store{RELEASE}(X,V); }
+rcu_dereference(X) __load{ONCE}(X)
+smp_store_mb(X,V) { __store{ONCE}(X,V); __fence{MB}; }
// Fences
-smp_mb() { __fence{mb}; }
+smp_mb() { __fence{MB}; }
smp_rmb() { __fence{rmb}; }
smp_wmb() { __fence{wmb}; }
smp_mb__before_atomic() { __fence{before-atomic}; }
@@ -28,14 +28,14 @@ smp_mb__after_srcu_read_unlock() { __fence{after-srcu-read-unlock}; }
barrier() { __fence{barrier}; }
// Exchange
-xchg(X,V) __xchg{mb}(X,V)
-xchg_relaxed(X,V) __xchg{once}(X,V)
-xchg_release(X,V) __xchg{release}(X,V)
-xchg_acquire(X,V) __xchg{acquire}(X,V)
-cmpxchg(X,V,W) __cmpxchg{mb}(X,V,W)
-cmpxchg_relaxed(X,V,W) __cmpxchg{once}(X,V,W)
-cmpxchg_acquire(X,V,W) __cmpxchg{acquire}(X,V,W)
-cmpxchg_release(X,V,W) __cmpxchg{release}(X,V,W)
+xchg(X,V) __xchg{MB}(X,V)
+xchg_relaxed(X,V) __xchg{ONCE}(X,V)
+xchg_release(X,V) __xchg{RELEASE}(X,V)
+xchg_acquire(X,V) __xchg{ACQUIRE}(X,V)
+cmpxchg(X,V,W) __cmpxchg{MB}(X,V,W)
+cmpxchg_relaxed(X,V,W) __cmpxchg{ONCE}(X,V,W)
+cmpxchg_acquire(X,V,W) __cmpxchg{ACQUIRE}(X,V,W)
+cmpxchg_release(X,V,W) __cmpxchg{RELEASE}(X,V,W)
// Spinlocks
spin_lock(X) { __lock(X); }
@@ -63,57 +63,86 @@ atomic_set(X,V) { WRITE_ONCE(*X,V); }
atomic_read_acquire(X) smp_load_acquire(X)
atomic_set_release(X,V) { smp_store_release(X,V); }
-atomic_add(V,X) { __atomic_op(X,+,V); }
-atomic_sub(V,X) { __atomic_op(X,-,V); }
-atomic_inc(X) { __atomic_op(X,+,1); }
-atomic_dec(X) { __atomic_op(X,-,1); }
-
-atomic_add_return(V,X) __atomic_op_return{mb}(X,+,V)
-atomic_add_return_relaxed(V,X) __atomic_op_return{once}(X,+,V)
-atomic_add_return_acquire(V,X) __atomic_op_return{acquire}(X,+,V)
-atomic_add_return_release(V,X) __atomic_op_return{release}(X,+,V)
-atomic_fetch_add(V,X) __atomic_fetch_op{mb}(X,+,V)
-atomic_fetch_add_relaxed(V,X) __atomic_fetch_op{once}(X,+,V)
-atomic_fetch_add_acquire(V,X) __atomic_fetch_op{acquire}(X,+,V)
-atomic_fetch_add_release(V,X) __atomic_fetch_op{release}(X,+,V)
-
-atomic_inc_return(X) __atomic_op_return{mb}(X,+,1)
-atomic_inc_return_relaxed(X) __atomic_op_return{once}(X,+,1)
-atomic_inc_return_acquire(X) __atomic_op_return{acquire}(X,+,1)
-atomic_inc_return_release(X) __atomic_op_return{release}(X,+,1)
-atomic_fetch_inc(X) __atomic_fetch_op{mb}(X,+,1)
-atomic_fetch_inc_relaxed(X) __atomic_fetch_op{once}(X,+,1)
-atomic_fetch_inc_acquire(X) __atomic_fetch_op{acquire}(X,+,1)
-atomic_fetch_inc_release(X) __atomic_fetch_op{release}(X,+,1)
-
-atomic_sub_return(V,X) __atomic_op_return{mb}(X,-,V)
-atomic_sub_return_relaxed(V,X) __atomic_op_return{once}(X,-,V)
-atomic_sub_return_acquire(V,X) __atomic_op_return{acquire}(X,-,V)
-atomic_sub_return_release(V,X) __atomic_op_return{release}(X,-,V)
-atomic_fetch_sub(V,X) __atomic_fetch_op{mb}(X,-,V)
-atomic_fetch_sub_relaxed(V,X) __atomic_fetch_op{once}(X,-,V)
-atomic_fetch_sub_acquire(V,X) __atomic_fetch_op{acquire}(X,-,V)
-atomic_fetch_sub_release(V,X) __atomic_fetch_op{release}(X,-,V)
-
-atomic_dec_return(X) __atomic_op_return{mb}(X,-,1)
-atomic_dec_return_relaxed(X) __atomic_op_return{once}(X,-,1)
-atomic_dec_return_acquire(X) __atomic_op_return{acquire}(X,-,1)
-atomic_dec_return_release(X) __atomic_op_return{release}(X,-,1)
-atomic_fetch_dec(X) __atomic_fetch_op{mb}(X,-,1)
-atomic_fetch_dec_relaxed(X) __atomic_fetch_op{once}(X,-,1)
-atomic_fetch_dec_acquire(X) __atomic_fetch_op{acquire}(X,-,1)
-atomic_fetch_dec_release(X) __atomic_fetch_op{release}(X,-,1)
-
-atomic_xchg(X,V) __xchg{mb}(X,V)
-atomic_xchg_relaxed(X,V) __xchg{once}(X,V)
-atomic_xchg_release(X,V) __xchg{release}(X,V)
-atomic_xchg_acquire(X,V) __xchg{acquire}(X,V)
-atomic_cmpxchg(X,V,W) __cmpxchg{mb}(X,V,W)
-atomic_cmpxchg_relaxed(X,V,W) __cmpxchg{once}(X,V,W)
-atomic_cmpxchg_acquire(X,V,W) __cmpxchg{acquire}(X,V,W)
-atomic_cmpxchg_release(X,V,W) __cmpxchg{release}(X,V,W)
-
-atomic_sub_and_test(V,X) __atomic_op_return{mb}(X,-,V) == 0
-atomic_dec_and_test(X) __atomic_op_return{mb}(X,-,1) == 0
-atomic_inc_and_test(X) __atomic_op_return{mb}(X,+,1) == 0
-atomic_add_negative(V,X) __atomic_op_return{mb}(X,+,V) < 0
+atomic_add(V,X) { __atomic_op{NORETURN}(X,+,V); }
+atomic_sub(V,X) { __atomic_op{NORETURN}(X,-,V); }
+atomic_and(V,X) { __atomic_op{NORETURN}(X,&,V); }
+atomic_or(V,X) { __atomic_op{NORETURN}(X,|,V); }
+atomic_xor(V,X) { __atomic_op{NORETURN}(X,^,V); }
+atomic_inc(X) { __atomic_op{NORETURN}(X,+,1); }
+atomic_dec(X) { __atomic_op{NORETURN}(X,-,1); }
+atomic_andnot(V,X) { __atomic_op{NORETURN}(X,&~,V); }
+
+atomic_add_return(V,X) __atomic_op_return{MB}(X,+,V)
+atomic_add_return_relaxed(V,X) __atomic_op_return{ONCE}(X,+,V)
+atomic_add_return_acquire(V,X) __atomic_op_return{ACQUIRE}(X,+,V)
+atomic_add_return_release(V,X) __atomic_op_return{RELEASE}(X,+,V)
+atomic_fetch_add(V,X) __atomic_fetch_op{MB}(X,+,V)
+atomic_fetch_add_relaxed(V,X) __atomic_fetch_op{ONCE}(X,+,V)
+atomic_fetch_add_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,+,V)
+atomic_fetch_add_release(V,X) __atomic_fetch_op{RELEASE}(X,+,V)
+
+atomic_fetch_and(V,X) __atomic_fetch_op{MB}(X,&,V)
+atomic_fetch_and_relaxed(V,X) __atomic_fetch_op{ONCE}(X,&,V)
+atomic_fetch_and_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,&,V)
+atomic_fetch_and_release(V,X) __atomic_fetch_op{RELEASE}(X,&,V)
+
+atomic_fetch_or(V,X) __atomic_fetch_op{MB}(X,|,V)
+atomic_fetch_or_relaxed(V,X) __atomic_fetch_op{ONCE}(X,|,V)
+atomic_fetch_or_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,|,V)
+atomic_fetch_or_release(V,X) __atomic_fetch_op{RELEASE}(X,|,V)
+
+atomic_fetch_xor(V,X) __atomic_fetch_op{MB}(X,^,V)
+atomic_fetch_xor_relaxed(V,X) __atomic_fetch_op{ONCE}(X,^,V)
+atomic_fetch_xor_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,^,V)
+atomic_fetch_xor_release(V,X) __atomic_fetch_op{RELEASE}(X,^,V)
+
+atomic_inc_return(X) __atomic_op_return{MB}(X,+,1)
+atomic_inc_return_relaxed(X) __atomic_op_return{ONCE}(X,+,1)
+atomic_inc_return_acquire(X) __atomic_op_return{ACQUIRE}(X,+,1)
+atomic_inc_return_release(X) __atomic_op_return{RELEASE}(X,+,1)
+atomic_fetch_inc(X) __atomic_fetch_op{MB}(X,+,1)
+atomic_fetch_inc_relaxed(X) __atomic_fetch_op{ONCE}(X,+,1)
+atomic_fetch_inc_acquire(X) __atomic_fetch_op{ACQUIRE}(X,+,1)
+atomic_fetch_inc_release(X) __atomic_fetch_op{RELEASE}(X,+,1)
+
+atomic_sub_return(V,X) __atomic_op_return{MB}(X,-,V)
+atomic_sub_return_relaxed(V,X) __atomic_op_return{ONCE}(X,-,V)
+atomic_sub_return_acquire(V,X) __atomic_op_return{ACQUIRE}(X,-,V)
+atomic_sub_return_release(V,X) __atomic_op_return{RELEASE}(X,-,V)
+atomic_fetch_sub(V,X) __atomic_fetch_op{MB}(X,-,V)
+atomic_fetch_sub_relaxed(V,X) __atomic_fetch_op{ONCE}(X,-,V)
+atomic_fetch_sub_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,-,V)
+atomic_fetch_sub_release(V,X) __atomic_fetch_op{RELEASE}(X,-,V)
+
+atomic_dec_return(X) __atomic_op_return{MB}(X,-,1)
+atomic_dec_return_relaxed(X) __atomic_op_return{ONCE}(X,-,1)
+atomic_dec_return_acquire(X) __atomic_op_return{ACQUIRE}(X,-,1)
+atomic_dec_return_release(X) __atomic_op_return{RELEASE}(X,-,1)
+atomic_fetch_dec(X) __atomic_fetch_op{MB}(X,-,1)
+atomic_fetch_dec_relaxed(X) __atomic_fetch_op{ONCE}(X,-,1)
+atomic_fetch_dec_acquire(X) __atomic_fetch_op{ACQUIRE}(X,-,1)
+atomic_fetch_dec_release(X) __atomic_fetch_op{RELEASE}(X,-,1)
+
+atomic_xchg(X,V) __xchg{MB}(X,V)
+atomic_xchg_relaxed(X,V) __xchg{ONCE}(X,V)
+atomic_xchg_release(X,V) __xchg{RELEASE}(X,V)
+atomic_xchg_acquire(X,V) __xchg{ACQUIRE}(X,V)
+atomic_cmpxchg(X,V,W) __cmpxchg{MB}(X,V,W)
+atomic_cmpxchg_relaxed(X,V,W) __cmpxchg{ONCE}(X,V,W)
+atomic_cmpxchg_acquire(X,V,W) __cmpxchg{ACQUIRE}(X,V,W)
+atomic_cmpxchg_release(X,V,W) __cmpxchg{RELEASE}(X,V,W)
+
+atomic_sub_and_test(V,X) __atomic_op_return{MB}(X,-,V) == 0
+atomic_dec_and_test(X) __atomic_op_return{MB}(X,-,1) == 0
+atomic_inc_and_test(X) __atomic_op_return{MB}(X,+,1) == 0
+atomic_add_negative(V,X) __atomic_op_return{MB}(X,+,V) < 0
+atomic_add_negative_relaxed(V,X) __atomic_op_return{ONCE}(X,+,V) < 0
+atomic_add_negative_acquire(V,X) __atomic_op_return{ACQUIRE}(X,+,V) < 0
+atomic_add_negative_release(V,X) __atomic_op_return{RELEASE}(X,+,V) < 0
+
+atomic_fetch_andnot(V,X) __atomic_fetch_op{MB}(X,&~,V)
+atomic_fetch_andnot_acquire(V,X) __atomic_fetch_op{ACQUIRE}(X,&~,V)
+atomic_fetch_andnot_release(V,X) __atomic_fetch_op{RELEASE}(X,&~,V)
+atomic_fetch_andnot_relaxed(V,X) __atomic_fetch_op{ONCE}(X,&~,V)
+
+atomic_add_unless(X,V,W) __atomic_add_unless{MB}(X,V,W)
diff --git a/tools/memory-model/lock.cat b/tools/memory-model/lock.cat
index 53b5a492739d..03c12efed66a 100644
--- a/tools/memory-model/lock.cat
+++ b/tools/memory-model/lock.cat
@@ -54,6 +54,12 @@ flag ~empty LKR \ domain(lk-rmw) as unpaired-LKR
*)
empty ([LKW] ; po-loc ; [LKR]) \ (po-loc ; [UL] ; po-loc) as lock-nest
+(*
+ * In the same way, spin_is_locked() inside a critical section must always
+ * return True (no RU events can be in a critical section for the same lock).
+ *)
+empty ([LKW] ; po-loc ; [RU]) \ (po-loc ; [UL] ; po-loc) as nested-is-locked
+
(* The final value of a spinlock should not be tested *)
flag ~empty [FW] ; loc ; [ALL-LOCKS] as lock-final
@@ -79,42 +85,50 @@ empty ([UNMATCHED-LKW] ; loc ; [UNMATCHED-LKW]) \ id as unmatched-locks
(* rfi for LF events: link each LKW to the LF events in its critical section *)
let rfi-lf = ([LKW] ; po-loc ; [LF]) \ ([LKW] ; po-loc ; [UL] ; po-loc)
-(* rfe for LF events *)
+(* Utility macro to convert a single pair to a single-edge relation *)
+let pair-to-relation p = p ++ 0
+
+(*
+ * If a given LF event e is outside a critical section, it cannot read
+ * internally but it may read from an LKW event in another thread.
+ * Compute the relation containing these possible edges.
+ *)
+let possible-rfe-noncrit-lf e = (LKW * {e}) & loc & ext
+
+(* Compute set of sets of possible rfe edges for LF events *)
let all-possible-rfe-lf =
(*
- * Given an LF event r, compute the possible rfe edges for that event
- * (all those starting from LKW events in other threads),
- * and then convert that relation to a set of single-edge relations.
+ * Convert the possible-rfe-noncrit-lf relation for e
+ * to a set of single edges
*)
- let possible-rfe-lf r =
- let pair-to-relation p = p ++ 0
- in map pair-to-relation ((LKW * {r}) & loc & ext)
- (* Do this for each LF event r that isn't in rfi-lf *)
- in map possible-rfe-lf (LF \ range(rfi-lf))
+ let set-of-singleton-rfe-lf e =
+ map pair-to-relation (possible-rfe-noncrit-lf e)
+ (* Do this for each LF event e that isn't in rfi-lf *)
+ in map set-of-singleton-rfe-lf (LF \ range(rfi-lf))
(* Generate all rf relations for LF events *)
with rfe-lf from cross(all-possible-rfe-lf)
let rf-lf = rfe-lf | rfi-lf
(*
- * RU, i.e., spin_is_locked() returning False, is slightly different.
- * We rely on the memory model to rule out cases where spin_is_locked()
- * within one of the lock's critical sections returns False.
+ * A given RU event e may read internally from the last po-previous UL,
+ * or it may read from a UL event in another thread or the initial write.
+ * Compute the relation containing these possible edges.
*)
-
-(* rfi for RU events: an RU may read from the last po-previous UL *)
-let rfi-ru = ([UL] ; po-loc ; [RU]) \ ([UL] ; po-loc ; [LKW] ; po-loc)
-
-(* rfe for RU events: an RU may read from an external UL or the initial write *)
-let all-possible-rfe-ru =
- let possible-rfe-ru r =
- let pair-to-relation p = p ++ 0
- in map pair-to-relation (((UL | IW) * {r}) & loc & ext)
- in map possible-rfe-ru RU
+let possible-rf-ru e = (((UL * {e}) & po-loc) \
+ ([UL] ; po-loc ; [UL] ; po-loc)) |
+ (((UL | IW) * {e}) & loc & ext)
+
+(* Compute set of sets of possible rf edges for RU events *)
+let all-possible-rf-ru =
+ (* Convert the possible-rf-ru relation for e to a set of single edges *)
+ let set-of-singleton-rf-ru e =
+ map pair-to-relation (possible-rf-ru e)
+ (* Do this for each RU event e *)
+ in map set-of-singleton-rf-ru RU
(* Generate all rf relations for RU events *)
-with rfe-ru from cross(all-possible-rfe-ru)
-let rf-ru = rfe-ru | rfi-ru
+with rf-ru from cross(all-possible-rf-ru)
(* Final rf relation *)
let rf = rf | rf-lf | rf-ru
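
For reference, a sketch (not part of this patch; the test name is made up)
of the situation the new nested-is-locked check rules out: spin_is_locked()
called inside the very critical section that holds the lock can only yield
an RL event, so the outcome below is expected to be forbidden.

	C nested-is-locked-sketch

	{}

	P0(spinlock_t *s, int *x)
	{
		int r0;

		spin_lock(s);
		WRITE_ONCE(*x, 1);
		r0 = spin_is_locked(s);
		spin_unlock(s);
	}

	P1(spinlock_t *s, int *x)
	{
		int r1;

		spin_lock(s);
		r1 = READ_ONCE(*x);
		spin_unlock(s);
	}

	exists (0:r0=0)
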
diff --git a/tools/mm/Makefile b/tools/mm/Makefile
index 1c5606cc3334..f5725b5c23aa 100644
--- a/tools/mm/Makefile
+++ b/tools/mm/Makefile
@@ -3,7 +3,8 @@
#
include ../scripts/Makefile.include
-TARGETS=page-types slabinfo page_owner_sort
+BUILD_TARGETS=page-types slabinfo page_owner_sort thp_swap_allocator_test
+INSTALL_TARGETS = $(BUILD_TARGETS) thpmaps
LIB_DIR = ../lib/api
LIBS = $(LIB_DIR)/libapi.a
@@ -11,9 +12,9 @@ LIBS = $(LIB_DIR)/libapi.a
CFLAGS += -Wall -Wextra -I../lib/ -pthread
LDFLAGS += $(LIBS) -pthread
-all: $(TARGETS)
+all: $(BUILD_TARGETS)
-$(TARGETS): $(LIBS)
+$(BUILD_TARGETS): $(LIBS)
$(LIBS):
make -C $(LIB_DIR)
@@ -22,11 +23,11 @@ $(LIBS):
$(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)
clean:
- $(RM) page-types slabinfo page_owner_sort
+ $(RM) page-types slabinfo page_owner_sort thp_swap_allocator_test
make -C $(LIB_DIR) clean
sbindir ?= /usr/sbin
install: all
install -d $(DESTDIR)$(sbindir)
- install -m 755 -p $(TARGETS) $(DESTDIR)$(sbindir)
+ install -m 755 -p $(INSTALL_TARGETS) $(DESTDIR)$(sbindir)
diff --git a/tools/mm/page-types.c b/tools/mm/page-types.c
index 8d5595b6c59f..d7e5e8902af8 100644
--- a/tools/mm/page-types.c
+++ b/tools/mm/page-types.c
@@ -22,9 +22,10 @@
#include <time.h>
#include <setjmp.h>
#include <signal.h>
+#include <inttypes.h>
#include <sys/types.h>
-#include <sys/errno.h>
-#include <sys/fcntl.h>
+#include <errno.h>
+#include <fcntl.h>
#include <sys/mount.h>
#include <sys/statfs.h>
#include <sys/mman.h>
@@ -71,12 +72,12 @@
/* [32-] kernel hacking assistances */
#define KPF_RESERVED 32
#define KPF_MLOCKED 33
-#define KPF_MAPPEDTODISK 34
+#define KPF_OWNER_2 34
#define KPF_PRIVATE 35
#define KPF_PRIVATE_2 36
#define KPF_OWNER_PRIVATE 37
#define KPF_ARCH 38
-#define KPF_UNCACHED 39
+#define KPF_UNCACHED 39 /* unused */
#define KPF_SOFTDIRTY 40
#define KPF_ARCH_2 41
@@ -129,12 +130,11 @@ static const char * const page_flag_names[] = {
[KPF_RESERVED] = "r:reserved",
[KPF_MLOCKED] = "m:mlocked",
- [KPF_MAPPEDTODISK] = "d:mappedtodisk",
+ [KPF_OWNER_2] = "d:owner_2",
[KPF_PRIVATE] = "P:private",
[KPF_PRIVATE_2] = "p:private_2",
[KPF_OWNER_PRIVATE] = "O:owner_private",
[KPF_ARCH] = "h:arch",
- [KPF_UNCACHED] = "c:uncached",
[KPF_SOFTDIRTY] = "f:softdirty",
[KPF_ARCH_2] = "H:arch_2",
@@ -392,9 +392,9 @@ static void show_page_range(unsigned long voffset, unsigned long offset,
if (opt_file)
printf("%lx\t", voff);
if (opt_list_cgroup)
- printf("@%llu\t", (unsigned long long)cgroup0);
+ printf("@%" PRIu64 "\t", cgroup0);
if (opt_list_mapcnt)
- printf("%lu\t", mapcnt0);
+ printf("%" PRIu64 "\t", mapcnt0);
printf("%lx\t%lx\t%s\n",
index, count, page_flag_name(flags0));
}
@@ -420,9 +420,9 @@ static void show_page(unsigned long voffset, unsigned long offset,
if (opt_file)
printf("%lx\t", voffset);
if (opt_list_cgroup)
- printf("@%llu\t", (unsigned long long)cgroup);
+ printf("@%" PRIu64 "\t", cgroup);
if (opt_list_mapcnt)
- printf("%lu\t", mapcnt);
+ printf("%" PRIu64 "\t", mapcnt);
printf("%lx\t%s\n", offset, page_flag_name(flags));
}
@@ -472,9 +472,9 @@ static int bit_mask_ok(uint64_t flags)
static uint64_t expand_overloaded_flags(uint64_t flags, uint64_t pme)
{
- /* Anonymous pages overload PG_mappedtodisk */
- if ((flags & BIT(ANON)) && (flags & BIT(MAPPEDTODISK)))
- flags ^= BIT(MAPPEDTODISK) | BIT(ANON_EXCLUSIVE);
+ /* Anonymous pages use PG_owner_2 for anon_exclusive */
+ if ((flags & BIT(ANON)) && (flags & BIT(OWNER_2)))
+ flags ^= BIT(OWNER_2) | BIT(ANON_EXCLUSIVE);
/* SLUB overloads several page flags */
if (flags & BIT(SLAB)) {
diff --git a/tools/mm/page_owner_sort.c b/tools/mm/page_owner_sort.c
index e1f264444342..880e36df0c11 100644
--- a/tools/mm/page_owner_sort.c
+++ b/tools/mm/page_owner_sort.c
@@ -377,6 +377,7 @@ static char *get_comm(char *buf)
if (errno != 0) {
if (debug_on)
fprintf(stderr, "wrong comm in follow buf:\n%s\n", buf);
+ free(comm_str);
return NULL;
}
diff --git a/tools/mm/show_page_info.py b/tools/mm/show_page_info.py
new file mode 100644
index 000000000000..c46d8ea283d7
--- /dev/null
+++ b/tools/mm/show_page_info.py
@@ -0,0 +1,169 @@
+#!/usr/bin/env drgn
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2025 Ye Liu <liuye@kylinos.cn>
+
+import argparse
+import sys
+from drgn import Object, FaultError, PlatformFlags, cast
+from drgn.helpers.linux import find_task, follow_page, page_size
+from drgn.helpers.linux.mm import (
+ decode_page_flags, page_to_pfn, page_to_phys, page_to_virt, vma_find,
+ PageSlab, PageCompound, PageHead, PageTail, compound_head, compound_order, compound_nr
+)
+from drgn.helpers.linux.cgroup import cgroup_name, cgroup_path
+
+DESC = """
+This is a drgn script to show the page state.
+For more info on drgn, visit https://github.com/osandov/drgn.
+"""
+
+def format_page_data(page):
+ """
+ Format raw page data into a readable hex dump with "RAW:" prefix.
+
+ :param page: drgn.Object instance representing the page.
+ :return: Formatted string of memory contents.
+ """
+ try:
+ address = page.value_()
+ size = prog.type("struct page").size
+
+ if prog.platform.flags & PlatformFlags.IS_64_BIT:
+ word_size = 8
+ else:
+ word_size = 4
+ num_words = size // word_size
+
+ values = []
+ for i in range(num_words):
+ word_address = address + i * word_size
+ word = prog.read_word(word_address)
+ values.append(f"{word:0{word_size * 2}x}")
+
+ lines = [f"RAW: {' '.join(values[i:i + 4])}" for i in range(0, len(values), 4)]
+
+ return "\n".join(lines)
+
+ except FaultError as e:
+ return f"Error reading memory: {e}"
+ except Exception as e:
+ return f"Unexpected error: {e}"
+
+def get_memcg_info(page):
+ """Retrieve memory cgroup information for a page."""
+ try:
+ MEMCG_DATA_OBJEXTS = prog.constant("MEMCG_DATA_OBJEXTS").value_()
+ MEMCG_DATA_KMEM = prog.constant("MEMCG_DATA_KMEM").value_()
+ mask = prog.constant('__NR_MEMCG_DATA_FLAGS').value_() - 1
+ memcg_data = page.memcg_data.read_()
+ if memcg_data & MEMCG_DATA_OBJEXTS:
+ slabobj_ext = cast("struct slabobj_ext *", memcg_data & ~mask)
+ memcg = slabobj_ext.objcg.memcg.value_()
+ elif memcg_data & MEMCG_DATA_KMEM:
+ objcg = cast("struct obj_cgroup *", memcg_data & ~mask)
+ memcg = objcg.memcg.value_()
+ else:
+ memcg = cast("struct mem_cgroup *", memcg_data & ~mask)
+
+ if memcg.value_() == 0:
+ return "none", "/sys/fs/cgroup/memory/"
+ cgrp = memcg.css.cgroup
+ return cgroup_name(cgrp).decode(), f"/sys/fs/cgroup/memory{cgroup_path(cgrp).decode()}"
+ except FaultError as e:
+ return "unknown", f"Error retrieving memcg info: {e}"
+ except Exception as e:
+ return "unknown", f"Unexpected error: {e}"
+
+def show_page_state(page, addr, mm, pid, task):
+ """Display detailed information about a page."""
+ try:
+ print(f'PID: {pid} Comm: {task.comm.string_().decode()} mm: {hex(mm)}')
+ try:
+ print(format_page_data(page))
+ except FaultError as e:
+ print(f"Error reading page data: {e}")
+ fields = {
+ "Page Address": hex(page.value_()),
+ "Page Flags": decode_page_flags(page),
+ "Page Size": prog["PAGE_SIZE"].value_(),
+ "Page PFN": hex(page_to_pfn(page).value_()),
+ "Page Physical": hex(page_to_phys(page).value_()),
+ "Page Virtual": hex(page_to_virt(page).value_()),
+ "Page Refcount": page._refcount.counter.value_(),
+ "Page Mapcount": page._mapcount.counter.value_(),
+ "Page Index": hex(page.__folio_index.value_()),
+ "Page Memcg Data": hex(page.memcg_data.value_()),
+ }
+
+ memcg_name, memcg_path = get_memcg_info(page)
+ fields["Memcg Name"] = memcg_name
+ fields["Memcg Path"] = memcg_path
+ fields["Page Mapping"] = hex(page.mapping.value_())
+ fields["Page Anon/File"] = "Anon" if page.mapping.value_() & 0x1 else "File"
+
+ try:
+ vma = vma_find(mm, addr)
+ fields["Page VMA"] = hex(vma.value_())
+ fields["VMA Start"] = hex(vma.vm_start.value_())
+ fields["VMA End"] = hex(vma.vm_end.value_())
+ except FaultError as e:
+ fields["Page VMA"] = "Unavailable"
+ fields["VMA Start"] = "Unavailable"
+ fields["VMA End"] = "Unavailable"
+ print(f"Error retrieving VMA information: {e}")
+
+ # Calculate the maximum field name length for alignment
+ max_field_len = max(len(field) for field in fields)
+
+ # Print aligned fields
+ for field, value in fields.items():
+ print(f"{field}:".ljust(max_field_len + 2) + f"{value}")
+
+ # Additional information about the page
+ if PageSlab(page):
+ print("This page belongs to the slab allocator.")
+
+ if PageCompound(page):
+ print("This page is part of a compound page.")
+ if PageHead(page):
+ print("This page is the head page of a compound page.")
+ if PageTail(page):
+ print("This page is the tail page of a compound page.")
+ print(f"{'Head Page:'.ljust(max_field_len + 2)}{hex(compound_head(page).value_())}")
+ print(f"{'Compound Order:'.ljust(max_field_len + 2)}{compound_order(page).value_()}")
+ print(f"{'Number of Pages:'.ljust(max_field_len + 2)}{compound_nr(page).value_()}")
+ else:
+ print("This page is not part of a compound page.")
+ except FaultError as e:
+ print(f"Error accessing page state: {e}")
+ except Exception as e:
+ print(f"Unexpected error: {e}")
+
+def main():
+ """Main function to parse arguments and display page state."""
+ parser = argparse.ArgumentParser(description=DESC, formatter_class=argparse.RawTextHelpFormatter)
+ parser.add_argument('pid', metavar='PID', type=int, help='Target process ID (PID)')
+ parser.add_argument('vaddr', metavar='VADDR', type=str, help='Target virtual address in hexadecimal format (e.g., 0x7fff1234abcd)')
+ args = parser.parse_args()
+
+ try:
+ vaddr = int(args.vaddr, 16)
+ except ValueError:
+ sys.exit(f"Error: Invalid virtual address format: {args.vaddr}")
+
+ try:
+ task = find_task(args.pid)
+ mm = task.mm
+ page = follow_page(mm, vaddr)
+
+ if page:
+ show_page_state(page, vaddr, mm, args.pid, task)
+ else:
+ sys.exit(f"Address {hex(vaddr)} is not mapped.")
+ except FaultError as e:
+ sys.exit(f"Error accessing task or memory: {e}")
+ except Exception as e:
+ sys.exit(f"Unexpected error: {e}")
+
+if __name__ == "__main__":
+ main()
diff --git a/tools/mm/slabinfo.c b/tools/mm/slabinfo.c
index cfaeaea71042..80cdbd3db82d 100644
--- a/tools/mm/slabinfo.c
+++ b/tools/mm/slabinfo.c
@@ -21,7 +21,7 @@
#include <regex.h>
#include <errno.h>
-#define MAX_SLABS 500
+#define MAX_SLABS 2000
#define MAX_ALIASES 500
#define MAX_NODES 1024
@@ -155,6 +155,7 @@ static void usage(void)
static unsigned long read_obj(const char *name)
{
+ size_t len;
FILE *f = fopen(name, "r");
if (!f) {
@@ -165,8 +166,10 @@ static unsigned long read_obj(const char *name)
if (!fgets(buffer, sizeof(buffer), f))
buffer[0] = 0;
fclose(f);
- if (buffer[strlen(buffer)] == '\n')
- buffer[strlen(buffer)] = 0;
+ len = strlen(buffer);
+
+ if (len > 0 && buffer[len - 1] == '\n')
+ buffer[len - 1] = 0;
}
return strlen(buffer);
}
@@ -1228,6 +1231,8 @@ static void read_slab_dir(void)
continue;
switch (de->d_type) {
case DT_LNK:
+ if (alias - aliasinfo == MAX_ALIASES)
+ fatal("Too many aliases\n");
alias->name = strdup(de->d_name);
count = readlink(de->d_name, buffer, sizeof(buffer)-1);
@@ -1242,6 +1247,8 @@ static void read_slab_dir(void)
alias++;
break;
case DT_DIR:
+ if (slab - slabinfo == MAX_SLABS)
+ fatal("Too many slabs\n");
if (chdir(de->d_name))
fatal("Unable to access slab %s\n", slab->name);
slab->name = strdup(de->d_name);
@@ -1297,7 +1304,9 @@ static void read_slab_dir(void)
slab->cpu_partial_free = get_obj("cpu_partial_free");
slab->alloc_node_mismatch = get_obj("alloc_node_mismatch");
slab->deactivate_bypass = get_obj("deactivate_bypass");
- chdir("..");
+ if (chdir(".."))
+ fatal("Unable to chdir from slab ../%s\n",
+ slab->name);
if (slab->name[0] == ':')
alias_targets++;
slab++;
@@ -1310,10 +1319,6 @@ static void read_slab_dir(void)
slabs = slab - slabinfo;
actual_slabs = slabs;
aliases = alias - aliasinfo;
- if (slabs > MAX_SLABS)
- fatal("Too many slabs\n");
- if (aliases > MAX_ALIASES)
- fatal("Too many aliases\n");
}
static void output_slabs(void)
diff --git a/tools/mm/thp_swap_allocator_test.c b/tools/mm/thp_swap_allocator_test.c
new file mode 100644
index 000000000000..83afc52275a5
--- /dev/null
+++ b/tools/mm/thp_swap_allocator_test.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * thp_swap_allocator_test
+ *
+ * The purpose of this test program is to help check whether THP swapout
+ * can correctly get swap slots to swap out as a whole instead of
+ * being split. It randomly releases swap entries through madvise
+ * DONTNEED and performs swap-in/swap-out on two memory areas: one
+ * for 64KB THP and the other for small folios. The second area is
+ * enabled with "-s".
+ * Before running the program, we need to setup a zRAM or similar
+ * swap device by:
+ * echo lzo > /sys/block/zram0/comp_algorithm
+ * echo 64M > /sys/block/zram0/disksize
+ * echo never > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
+ * echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
+ * mkswap /dev/zram0
+ * swapon /dev/zram0
+ * The expected result should be 0% anon swpout fallback ratio w/ or
+ * w/o "-s".
+ *
+ * Author(s): Barry Song <v-songbaohua@oppo.com>
+ */
+
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <linux/mman.h>
+#include <sys/mman.h>
+#include <errno.h>
+#include <time.h>
+
+#define MEMSIZE_MTHP (60 * 1024 * 1024)
+#define MEMSIZE_SMALLFOLIO (4 * 1024 * 1024)
+#define ALIGNMENT_MTHP (64 * 1024)
+#define ALIGNMENT_SMALLFOLIO (4 * 1024)
+#define TOTAL_DONTNEED_MTHP (16 * 1024 * 1024)
+#define TOTAL_DONTNEED_SMALLFOLIO (1 * 1024 * 1024)
+#define MTHP_FOLIO_SIZE (64 * 1024)
+
+#define SWPOUT_PATH \
+ "/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/swpout"
+#define SWPOUT_FALLBACK_PATH \
+ "/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/swpout_fallback"
+
+static void *aligned_alloc_mem(size_t size, size_t alignment)
+{
+ void *mem = NULL;
+
+ if (posix_memalign(&mem, alignment, size) != 0) {
+ perror("posix_memalign");
+ return NULL;
+ }
+ return mem;
+}
+
+/*
+ * This emulates the behavior of native libc and Java heap,
+ * as well as process exit and munmap. It helps generate mTHP
+ * and ensures that iterations can proceed with mTHP, as we
+ * currently don't support large folio swap-in.
+ */
+static void random_madvise_dontneed(void *mem, size_t mem_size,
+ size_t align_size, size_t total_dontneed_size)
+{
+ size_t num_pages = total_dontneed_size / align_size;
+ size_t i;
+ size_t offset;
+ void *addr;
+
+ for (i = 0; i < num_pages; ++i) {
+ offset = (rand() % (mem_size / align_size)) * align_size;
+ addr = (char *)mem + offset;
+ if (madvise(addr, align_size, MADV_DONTNEED) != 0)
+ perror("madvise dontneed");
+
+ memset(addr, 0x11, align_size);
+ }
+}
+
+static void random_swapin(void *mem, size_t mem_size,
+ size_t align_size, size_t total_swapin_size)
+{
+ size_t num_pages = total_swapin_size / align_size;
+ size_t i;
+ size_t offset;
+ void *addr;
+
+ for (i = 0; i < num_pages; ++i) {
+ offset = (rand() % (mem_size / align_size)) * align_size;
+ addr = (char *)mem + offset;
+ memset(addr, 0x11, align_size);
+ }
+}
+
+static unsigned long read_stat(const char *path)
+{
+ FILE *file;
+ unsigned long value;
+
+ file = fopen(path, "r");
+ if (!file) {
+ perror("fopen");
+ return 0;
+ }
+
+ if (fscanf(file, "%lu", &value) != 1) {
+ perror("fscanf");
+ fclose(file);
+ return 0;
+ }
+
+ fclose(file);
+ return value;
+}
+
+int main(int argc, char *argv[])
+{
+ int use_small_folio = 0, aligned_swapin = 0;
+ void *mem1 = NULL, *mem2 = NULL;
+ int i;
+
+ for (i = 1; i < argc; ++i) {
+ if (strcmp(argv[i], "-s") == 0)
+ use_small_folio = 1;
+ else if (strcmp(argv[i], "-a") == 0)
+ aligned_swapin = 1;
+ }
+
+ mem1 = aligned_alloc_mem(MEMSIZE_MTHP, ALIGNMENT_MTHP);
+ if (mem1 == NULL) {
+ fprintf(stderr, "Failed to allocate large folios memory\n");
+ return EXIT_FAILURE;
+ }
+
+ if (madvise(mem1, MEMSIZE_MTHP, MADV_HUGEPAGE) != 0) {
+ perror("madvise hugepage for mem1");
+ free(mem1);
+ return EXIT_FAILURE;
+ }
+
+ if (use_small_folio) {
+ mem2 = aligned_alloc_mem(MEMSIZE_SMALLFOLIO, ALIGNMENT_MTHP);
+ if (mem2 == NULL) {
+ fprintf(stderr, "Failed to allocate small folios memory\n");
+ free(mem1);
+ return EXIT_FAILURE;
+ }
+
+ if (madvise(mem2, MEMSIZE_SMALLFOLIO, MADV_NOHUGEPAGE) != 0) {
+ perror("madvise nohugepage for mem2");
+ free(mem1);
+ free(mem2);
+ return EXIT_FAILURE;
+ }
+ }
+
+ /* warm-up phase to occupy the swapfile */
+ memset(mem1, 0x11, MEMSIZE_MTHP);
+ madvise(mem1, MEMSIZE_MTHP, MADV_PAGEOUT);
+ if (use_small_folio) {
+ memset(mem2, 0x11, MEMSIZE_SMALLFOLIO);
+ madvise(mem2, MEMSIZE_SMALLFOLIO, MADV_PAGEOUT);
+ }
+
+ /* iterations with newly created mTHP, swap-in, and swap-out */
+ for (i = 0; i < 100; ++i) {
+ unsigned long initial_swpout;
+ unsigned long initial_swpout_fallback;
+ unsigned long final_swpout;
+ unsigned long final_swpout_fallback;
+ unsigned long swpout_inc;
+ unsigned long swpout_fallback_inc;
+ double fallback_percentage;
+
+ initial_swpout = read_stat(SWPOUT_PATH);
+ initial_swpout_fallback = read_stat(SWPOUT_FALLBACK_PATH);
+
+ /*
+ * The following setup creates a 1:1 ratio of mTHP to small folios
+ * since large folio swap-in isn't supported yet. Once we support
+ * mTHP swap-in, we'll likely need to reduce MEMSIZE_MTHP and
+ * increase MEMSIZE_SMALLFOLIO to maintain the ratio.
+ */
+ random_swapin(mem1, MEMSIZE_MTHP,
+ aligned_swapin ? ALIGNMENT_MTHP : ALIGNMENT_SMALLFOLIO,
+ TOTAL_DONTNEED_MTHP);
+ random_madvise_dontneed(mem1, MEMSIZE_MTHP, ALIGNMENT_MTHP,
+ TOTAL_DONTNEED_MTHP);
+
+ if (use_small_folio) {
+ random_swapin(mem2, MEMSIZE_SMALLFOLIO,
+ ALIGNMENT_SMALLFOLIO,
+ TOTAL_DONTNEED_SMALLFOLIO);
+ }
+
+ if (madvise(mem1, MEMSIZE_MTHP, MADV_PAGEOUT) != 0) {
+ perror("madvise pageout for mem1");
+ free(mem1);
+ if (mem2 != NULL)
+ free(mem2);
+ return EXIT_FAILURE;
+ }
+
+ if (use_small_folio) {
+ if (madvise(mem2, MEMSIZE_SMALLFOLIO, MADV_PAGEOUT) != 0) {
+ perror("madvise pageout for mem2");
+ free(mem1);
+ free(mem2);
+ return EXIT_FAILURE;
+ }
+ }
+
+ final_swpout = read_stat(SWPOUT_PATH);
+ final_swpout_fallback = read_stat(SWPOUT_FALLBACK_PATH);
+
+ swpout_inc = final_swpout - initial_swpout;
+ swpout_fallback_inc = final_swpout_fallback - initial_swpout_fallback;
+
+ fallback_percentage = (double)swpout_fallback_inc /
+ (swpout_fallback_inc + swpout_inc) * 100;
+
+ printf("Iteration %d: swpout inc: %lu, swpout fallback inc: %lu, Fallback percentage: %.2f%%\n",
+ i + 1, swpout_inc, swpout_fallback_inc, fallback_percentage);
+ }
+
+ free(mem1);
+ if (mem2 != NULL)
+ free(mem2);
+
+ return EXIT_SUCCESS;
+}
diff --git a/tools/mm/thpmaps b/tools/mm/thpmaps
new file mode 100644
index 000000000000..803e0318f2fe
--- /dev/null
+++ b/tools/mm/thpmaps
@@ -0,0 +1,675 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2024 ARM Ltd.
+#
+# Utility providing smaps-like output detailing transparent hugepage usage.
+# For more info, run:
+# ./thpmaps --help
+#
+# Requires numpy:
+# pip3 install numpy
+
+
+import argparse
+import collections
+import math
+import os
+import re
+import resource
+import shutil
+import sys
+import textwrap
+import time
+import numpy as np
+
+
+with open('/sys/kernel/mm/transparent_hugepage/hpage_pmd_size') as f:
+ PAGE_SIZE = resource.getpagesize()
+ PAGE_SHIFT = int(math.log2(PAGE_SIZE))
+ PMD_SIZE = int(f.read())
+ PMD_ORDER = int(math.log2(PMD_SIZE / PAGE_SIZE))
+
+
+def align_forward(v, a):
+ return (v + (a - 1)) & ~(a - 1)
+
+
+def align_offset(v, a):
+ return v & (a - 1)
+
+
+def kbnr(kb):
+ # Convert KB to number of pages.
+ return (kb << 10) >> PAGE_SHIFT
+
+
+def nrkb(nr):
+ # Convert number of pages to KB.
+ return (nr << PAGE_SHIFT) >> 10
+
+
+def odkb(order):
+ # Convert page order to KB.
+ return (PAGE_SIZE << order) >> 10
+
+
+def cont_ranges_all(search, index):
+ # Given a list of arrays, find the ranges for which values are monotonically
+ # incrementing in all arrays. All arrays in search and index must be the
+ # same size.
+ sz = len(search[0])
+ r = np.full(sz, 2)
+ d = np.diff(search[0]) == 1
+ for dd in [np.diff(arr) == 1 for arr in search[1:]]:
+ d &= dd
+ r[1:] -= d
+ r[:-1] -= d
+ return [np.repeat(arr, r).reshape(-1, 2) for arr in index]
+
+
+class ArgException(Exception):
+ pass
+
+
+class FileIOException(Exception):
+ pass
+
+
+class BinArrayFile:
+ # Base class used to read /proc/<pid>/pagemap and /proc/kpageflags into a
+ # numpy array. Use a derived class in a with clause to ensure the file is
+ # closed when it goes out of scope.
+ def __init__(self, filename, element_size):
+ self.element_size = element_size
+ self.filename = filename
+ self.fd = os.open(self.filename, os.O_RDONLY)
+
+ def cleanup(self):
+ os.close(self.fd)
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.cleanup()
+
+ def _readin(self, offset, buffer):
+ length = os.preadv(self.fd, (buffer,), offset)
+ if len(buffer) != length:
+ raise FileIOException('error: {} failed to read {} bytes at {:x}'
+ .format(self.filename, len(buffer), offset))
+
+ def _toarray(self, buf):
+ assert(self.element_size == 8)
+ return np.frombuffer(buf, dtype=np.uint64)
+
+ def getv(self, vec):
+ vec *= self.element_size
+ offsets = vec[:, 0]
+ lengths = (np.diff(vec) + self.element_size).reshape(len(vec))
+ buf = bytearray(int(np.sum(lengths)))
+ view = memoryview(buf)
+ pos = 0
+ for offset, length in zip(offsets, lengths):
+ offset = int(offset)
+ length = int(length)
+ self._readin(offset, view[pos:pos+length])
+ pos += length
+ return self._toarray(buf)
+
+ def get(self, index, nr=1):
+ offset = index * self.element_size
+ length = nr * self.element_size
+ buf = bytearray(length)
+ self._readin(offset, buf)
+ return self._toarray(buf)
+
+
+PM_PAGE_PRESENT = 1 << 63
+PM_PFN_MASK = (1 << 55) - 1
+
+class PageMap(BinArrayFile):
+ # Read ranges of a given pid's pagemap into a numpy array.
+ def __init__(self, pid='self'):
+ super().__init__(f'/proc/{pid}/pagemap', 8)
+
+
+KPF_ANON = 1 << 12
+KPF_COMPOUND_HEAD = 1 << 15
+KPF_COMPOUND_TAIL = 1 << 16
+KPF_THP = 1 << 22
+
+class KPageFlags(BinArrayFile):
+ # Read ranges of /proc/kpageflags into a numpy array.
+ def __init__(self):
+ super().__init__(f'/proc/kpageflags', 8)
+
+
+vma_all_stats = set([
+ "Size",
+ "Rss",
+ "Pss",
+ "Pss_Dirty",
+ "Shared_Clean",
+ "Shared_Dirty",
+ "Private_Clean",
+ "Private_Dirty",
+ "Referenced",
+ "Anonymous",
+ "KSM",
+ "LazyFree",
+ "AnonHugePages",
+ "ShmemPmdMapped",
+ "FilePmdMapped",
+ "Shared_Hugetlb",
+ "Private_Hugetlb",
+ "Swap",
+ "SwapPss",
+ "Locked",
+])
+
+vma_min_stats = set([
+ "Rss",
+ "Anonymous",
+ "AnonHugePages",
+ "ShmemPmdMapped",
+ "FilePmdMapped",
+])
+
+VMA = collections.namedtuple('VMA', [
+ 'name',
+ 'start',
+ 'end',
+ 'read',
+ 'write',
+ 'execute',
+ 'private',
+ 'pgoff',
+ 'major',
+ 'minor',
+ 'inode',
+ 'stats',
+])
+
+class VMAList:
+ # A container for VMAs, parsed from /proc/<pid>/smaps. Iterate over the
+ # instance to receive VMAs.
+ def __init__(self, pid='self', stats=[]):
+ self.vmas = []
+ with open(f'/proc/{pid}/smaps', 'r') as file:
+ for line in file:
+ elements = line.split()
+ if '-' in elements[0]:
+ start, end = map(lambda x: int(x, 16), elements[0].split('-'))
+ major, minor = map(lambda x: int(x, 16), elements[3].split(':'))
+ self.vmas.append(VMA(
+ name=elements[5] if len(elements) == 6 else '',
+ start=start,
+ end=end,
+ read=elements[1][0] == 'r',
+ write=elements[1][1] == 'w',
+ execute=elements[1][2] == 'x',
+ private=elements[1][3] == 'p',
+ pgoff=int(elements[2], 16),
+ major=major,
+ minor=minor,
+ inode=int(elements[4], 16),
+ stats={},
+ ))
+ else:
+ param = elements[0][:-1]
+ if param in stats:
+ value = int(elements[1])
+ self.vmas[-1].stats[param] = {'type': None, 'value': value}
+
+ def __iter__(self):
+ yield from self.vmas
+
+
+def thp_parse(vma, kpageflags, ranges, indexes, vfns, pfns, anons, heads):
+ # Given 4 same-sized arrays representing a range within a page table backed
+ # by THPs (vfns: virtual frame numbers, pfns: physical frame numbers, anons:
+ # True if page is anonymous, heads: True if page is head of a THP), return a
+ # dictionary of statistics describing the mapped THPs.
+ stats = {
+ 'file': {
+ 'partial': 0,
+ 'aligned': [0] * (PMD_ORDER + 1),
+ 'unaligned': [0] * (PMD_ORDER + 1),
+ },
+ 'anon': {
+ 'partial': 0,
+ 'aligned': [0] * (PMD_ORDER + 1),
+ 'unaligned': [0] * (PMD_ORDER + 1),
+ },
+ }
+
+ for rindex, rpfn in zip(ranges[0], ranges[2]):
+ index_next = int(rindex[0])
+ index_end = int(rindex[1]) + 1
+ pfn_end = int(rpfn[1]) + 1
+
+ folios = indexes[index_next:index_end][heads[index_next:index_end]]
+
+ # Account pages for any partially mapped THP at the front. In that case,
+ # the first page of the range is a tail.
+ nr = (int(folios[0]) if len(folios) else index_end) - index_next
+ stats['anon' if anons[index_next] else 'file']['partial'] += nr
+
+ # Account pages for any partially mapped THP at the back. In that case,
+ # the next page after the range is a tail.
+ if len(folios):
+ flags = int(kpageflags.get(pfn_end)[0])
+ if flags & KPF_COMPOUND_TAIL:
+ nr = index_end - int(folios[-1])
+ folios = folios[:-1]
+ index_end -= nr
+ stats['anon' if anons[index_end - 1] else 'file']['partial'] += nr
+
+ # Account fully mapped THPs in the middle of the range.
+ if len(folios):
+ folio_nrs = np.append(np.diff(folios), np.uint64(index_end - folios[-1]))
+ folio_orders = np.log2(folio_nrs).astype(np.uint64)
+ for index, order in zip(folios, folio_orders):
+ index = int(index)
+ order = int(order)
+ nr = 1 << order
+ vfn = int(vfns[index])
+ align = 'aligned' if align_forward(vfn, nr) == vfn else 'unaligned'
+ anon = 'anon' if anons[index] else 'file'
+ stats[anon][align][order] += nr
+
+ # Account PMD-mapped THPs separately, so filter them out of the stats. There
+ # is a race between acquiring the smaps stats and reading pagemap, where memory
+ # could be deallocated. So clamp to zero in case it would have gone negative.
+ anon_pmd_mapped = vma.stats['AnonHugePages']['value']
+ file_pmd_mapped = vma.stats['ShmemPmdMapped']['value'] + \
+ vma.stats['FilePmdMapped']['value']
+ stats['anon']['aligned'][PMD_ORDER] = max(0, stats['anon']['aligned'][PMD_ORDER] - kbnr(anon_pmd_mapped))
+ stats['file']['aligned'][PMD_ORDER] = max(0, stats['file']['aligned'][PMD_ORDER] - kbnr(file_pmd_mapped))
+
+ rstats = {
+ f"anon-thp-pmd-aligned-{odkb(PMD_ORDER)}kB": {'type': 'anon', 'value': anon_pmd_mapped},
+ f"file-thp-pmd-aligned-{odkb(PMD_ORDER)}kB": {'type': 'file', 'value': file_pmd_mapped},
+ }
+
+ def flatten_sub(type, subtype, stats):
+ param = f"{type}-thp-pte-{subtype}-{{}}kB"
+ for od, nr in enumerate(stats[2:], 2):
+ rstats[param.format(odkb(od))] = {'type': type, 'value': nrkb(nr)}
+
+ def flatten_type(type, stats):
+ flatten_sub(type, 'aligned', stats['aligned'])
+ flatten_sub(type, 'unaligned', stats['unaligned'])
+ rstats[f"{type}-thp-pte-partial"] = {'type': type, 'value': nrkb(stats['partial'])}
+
+ flatten_type('anon', stats['anon'])
+ flatten_type('file', stats['file'])
+
+ return rstats
+
+
+def cont_parse(vma, order, ranges, anons, heads):
+ # Given the (index, vfn, pfn) ranges for a region within a page table backed
+ # by THPs, plus per-page arrays anons (True if page is anonymous) and heads
+ # (True if page is head of a THP), return a dictionary of statistics
+ # describing the naturally aligned, physically contiguous 2^order-page blocks.
+ nr_cont = 1 << order
+ nr_anon = 0
+ nr_file = 0
+
+ for rindex, rvfn, rpfn in zip(*ranges):
+ index_next = int(rindex[0])
+ index_end = int(rindex[1]) + 1
+ vfn_start = int(rvfn[0])
+ pfn_start = int(rpfn[0])
+
+ if align_offset(pfn_start, nr_cont) != align_offset(vfn_start, nr_cont):
+ continue
+
+ off = align_forward(vfn_start, nr_cont) - vfn_start
+ index_next += off
+
+ while index_next + nr_cont <= index_end:
+ folio_boundary = heads[index_next+1:index_next+nr_cont].any()
+ if not folio_boundary:
+ if anons[index_next]:
+ nr_anon += nr_cont
+ else:
+ nr_file += nr_cont
+ index_next += nr_cont
+
+ # Account blocks that are PMD-mapped separately, so filter them out of the
+ # stats. There is a race between acquiring the smaps stats and reading pagemap,
+ # where memory could be deallocated. So clamp to zero in case it would have
+ # gone negative.
+ anon_pmd_mapped = vma.stats['AnonHugePages']['value']
+ file_pmd_mapped = vma.stats['ShmemPmdMapped']['value'] + \
+ vma.stats['FilePmdMapped']['value']
+ nr_anon = max(0, nr_anon - kbnr(anon_pmd_mapped))
+ nr_file = max(0, nr_file - kbnr(file_pmd_mapped))
+
+ rstats = {
+ f"anon-cont-pmd-aligned-{nrkb(nr_cont)}kB": {'type': 'anon', 'value': anon_pmd_mapped},
+ f"file-cont-pmd-aligned-{nrkb(nr_cont)}kB": {'type': 'file', 'value': file_pmd_mapped},
+ }
+
+ rstats[f"anon-cont-pte-aligned-{nrkb(nr_cont)}kB"] = {'type': 'anon', 'value': nrkb(nr_anon)}
+ rstats[f"file-cont-pte-aligned-{nrkb(nr_cont)}kB"] = {'type': 'file', 'value': nrkb(nr_file)}
+
+ return rstats
+
+
+def vma_print(vma, pid):
+ # Prints a VMA instance in a format similar to smaps. The main difference is
+ # that the pid is included as the first value.
+ print("{:010d}: {:016x}-{:016x} {}{}{}{} {:08x} {:02x}:{:02x} {:08x} {}"
+ .format(
+ pid, vma.start, vma.end,
+ 'r' if vma.read else '-', 'w' if vma.write else '-',
+ 'x' if vma.execute else '-', 'p' if vma.private else 's',
+ vma.pgoff, vma.major, vma.minor, vma.inode, vma.name
+ ))
+
+
+def stats_print(stats, tot_anon, tot_file, inc_empty):
+ # Print a statistics dictionary.
+ label_field = 32
+ for label, stat in stats.items():
+ type = stat['type']
+ value = stat['value']
+ if value or inc_empty:
+ pad = max(0, label_field - len(label) - 1)
+ if type == 'anon' and tot_anon > 0:
+ percent = f' ({value / tot_anon:3.0%})'
+ elif type == 'file' and tot_file > 0:
+ percent = f' ({value / tot_file:3.0%})'
+ else:
+ percent = ''
+ print(f"{label}:{' ' * pad}{value:8} kB{percent}")
+
+
+def vma_parse(vma, pagemap, kpageflags, contorders):
+ # Generate thp and cont statistics for a single VMA.
+ start = vma.start >> PAGE_SHIFT
+ end = vma.end >> PAGE_SHIFT
+
+ pmes = pagemap.get(start, end - start)
+ present = pmes & PM_PAGE_PRESENT != 0
+ pfns = pmes & PM_PFN_MASK
+ pfns = pfns[present]
+ vfns = np.arange(start, end, dtype=np.uint64)
+ vfns = vfns[present]
+
+ pfn_vec = cont_ranges_all([pfns], [pfns])[0]
+ flags = kpageflags.getv(pfn_vec)
+ anons = flags & KPF_ANON != 0
+ heads = flags & KPF_COMPOUND_HEAD != 0
+ thps = flags & KPF_THP != 0
+
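+ # Restrict all arrays to THP-backed pages for the remaining analysis.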
+ vfns = vfns[thps]
+ pfns = pfns[thps]
+ anons = anons[thps]
+ heads = heads[thps]
+
+ indexes = np.arange(len(vfns), dtype=np.uint64)
+ ranges = cont_ranges_all([vfns, pfns], [indexes, vfns, pfns])
+
+ thpstats = thp_parse(vma, kpageflags, ranges, indexes, vfns, pfns, anons, heads)
+ contstats = [cont_parse(vma, order, ranges, anons, heads) for order in contorders]
+
+ tot_anon = vma.stats['Anonymous']['value']
+ tot_file = vma.stats['Rss']['value'] - tot_anon
+
+ return {
+ **thpstats,
+ **{k: v for s in contstats for k, v in s.items()}
+ }, tot_anon, tot_file
+
+
+def do_main(args):
+ pids = set()
+ rollup = {}
+ rollup_anon = 0
+ rollup_file = 0
+
+ if args.cgroup:
+ strict = False
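+ # Walk the cgroup subtree and collect every pid listed in each cgroup.procs.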
+ for walk_info in os.walk(args.cgroup):
+ cgroup = walk_info[0]
+ with open(f'{cgroup}/cgroup.procs') as pidfile:
+ for line in pidfile.readlines():
+ pids.add(int(line.strip()))
+ elif args.pid:
+ strict = True
+ pids = pids.union(args.pid)
+ else:
+ strict = False
+ for pid in os.listdir('/proc'):
+ if pid.isdigit():
+ pids.add(int(pid))
+
+ if not args.rollup:
+ print(" PID START END PROT OFFSET DEV INODE OBJECT")
+
+ for pid in pids:
+ try:
+ with PageMap(pid) as pagemap:
+ with KPageFlags() as kpageflags:
+ for vma in VMAList(pid, vma_all_stats if args.inc_smaps else vma_min_stats):
+ if (vma.read or vma.write or vma.execute) and vma.stats['Rss']['value'] > 0:
+ stats, vma_anon, vma_file = vma_parse(vma, pagemap, kpageflags, args.cont)
+ else:
+ stats = {}
+ vma_anon = 0
+ vma_file = 0
+ if args.inc_smaps:
+ stats = {**vma.stats, **stats}
+ if args.rollup:
+ for k, v in stats.items():
+ if k in rollup:
+ assert(rollup[k]['type'] == v['type'])
+ rollup[k]['value'] += v['value']
+ else:
+ rollup[k] = v
+ rollup_anon += vma_anon
+ rollup_file += vma_file
+ else:
+ vma_print(vma, pid)
+ stats_print(stats, vma_anon, vma_file, args.inc_empty)
+ except (FileNotFoundError, ProcessLookupError, FileIOException):
+ if strict:
+ raise
+
+ if args.rollup:
+ stats_print(rollup, rollup_anon, rollup_file, args.inc_empty)
+
+
+def main():
+ docs_width = shutil.get_terminal_size().columns
+ docs_width -= 2
+ docs_width = min(80, docs_width)
+
+ def format(string):
+ text = re.sub(r'\s+', ' ', string)
+ text = re.sub(r'\s*\\n\s*', '\n', text)
+ paras = text.split('\n')
+ paras = [textwrap.fill(p, width=docs_width) for p in paras]
+ return '\n'.join(paras)
+
+ def formatter(prog):
+ return argparse.RawDescriptionHelpFormatter(prog, width=docs_width)
+
+ def size2order(human):
+ units = {
+ "K": 2**10, "M": 2**20, "G": 2**30,
+ "k": 2**10, "m": 2**20, "g": 2**30,
+ }
+ unit = 1
+ if human[-1] in units:
+ unit = units[human[-1]]
+ human = human[:-1]
+ try:
+ size = int(human)
+ except ValueError:
+ raise ArgException('error: --cont value must be integer size with optional KMG unit')
+ size *= unit
+ order = int(math.log2(size / PAGE_SIZE))
+ if order < 1:
+ raise ArgException('error: --cont value must be size of at least 2 pages')
+ if (1 << order) * PAGE_SIZE != size:
+ raise ArgException('error: --cont value must be size of power-of-2 pages')
+ if order > PMD_ORDER:
+ raise ArgException('error: --cont value must be less than or equal to PMD order')
+ return order
+
+ parser = argparse.ArgumentParser(formatter_class=formatter,
+ description=format("""Prints information about how transparent huge
+ pages are mapped, either system-wide, or for a specified
+ process or cgroup.\\n
+ \\n
+ When run with --pid, the user explicitly specifies the set
+ of pids to scan. e.g. "--pid 10 [--pid 134 ...]". When run
+ with --cgroup, the user passes either a v1 or v2 cgroup and
+ all pids that belong to the cgroup subtree are scanned. When
+ run with neither --pid nor --cgroup, the full set of pids on
+ the system is gathered from /proc and scanned as if the user
+ had provided "--pid 1 --pid 2 ...".\\n
+ \\n
+ A default set of statistics is always generated for THP
+ mappings. However, it is also possible to generate
+ additional statistics for "contiguous block mappings" where
+ the block size is user-defined.\\n
+ \\n
+ Statistics are maintained independently for anonymous and
+ file-backed (pagecache) memory and are shown both in kB and
+ as a percentage of either total anonymous or total
+ file-backed memory as appropriate.\\n
+ \\n
+ THP Statistics\\n
+ --------------\\n
+ \\n
+ Statistics are always generated for fully- and
+ contiguously-mapped THPs whose mapping address is aligned to
+ their size, for each <size> supported by the system.
+ Separate counters describe THPs mapped by PTE vs those
+ mapped by PMD. (Although note a THP can only be mapped by
+ PMD if it is PMD-sized):\\n
+ \\n
+ - anon-thp-pte-aligned-<size>kB\\n
+ - file-thp-pte-aligned-<size>kB\\n
+ - anon-thp-pmd-aligned-<size>kB\\n
+ - file-thp-pmd-aligned-<size>kB\\n
+ \\n
+ Similarly, statistics are always generated for fully- and
+ contiguously-mapped THPs whose mapping address is *not*
+ aligned to their size, for each <size> supported by the
+ system. Due to the unaligned mapping, it is impossible to
+ map by PMD, so there are only PTE counters for this case:\\n
+ \\n
+ - anon-thp-pte-unaligned-<size>kB\\n
+ - file-thp-pte-unaligned-<size>kB\\n
+ \\n
+ Statistics are also always generated for mapped pages that
+ belong to a THP but where the THP is *not* fully- and
+ contiguously-mapped. These "partial" mappings are all
+ counted in the same counter regardless of the size of the
+ THP that is partially mapped:\\n
+ \\n
+ - anon-thp-pte-partial\\n
+ - file-thp-pte-partial\\n
+ \\n
+ Contiguous Block Statistics\\n
+ ---------------------------\\n
+ \\n
+ An optional, additional set of statistics is generated for
+ every contiguous block size specified with `--cont <size>`.
+ These statistics show how much memory is mapped in
+ contiguous blocks of <size> and also aligned to <size>. A
+ given contiguous block must all belong to the same THP, but
+ there is no requirement for it to be the *whole* THP.
+ Separate counters describe contiguous blocks mapped by PTE
+ vs those mapped by PMD:\\n
+ \\n
+ - anon-cont-pte-aligned-<size>kB\\n
+ - file-cont-pte-aligned-<size>kB\\n
+ - anon-cont-pmd-aligned-<size>kB\\n
+ - file-cont-pmd-aligned-<size>kB\\n
+ \\n
+ As an example, if monitoring 64K contiguous blocks (--cont
+ 64K), there are a number of sources that could provide such
+ blocks: a fully- and contiguously-mapped 64K THP that is
+ aligned to a 64K boundary would provide 1 block. A fully-
+ and contiguously-mapped 128K THP that is aligned to at least
+ a 64K boundary would provide 2 blocks. Or a 128K THP that
+ maps its first 100K, but contiguously and starting at a 64K
+ boundary would provide 1 block. A fully- and
+ contiguously-mapped 2M THP would provide 32 blocks. There
+ are many other possible permutations.\\n"""),
+ epilog=format("""Requires root privilege to access pagemap and
+ kpageflags."""))
+
+ group = parser.add_mutually_exclusive_group(required=False)
+ group.add_argument('--pid',
+ metavar='pid', required=False, type=int, default=[], action='append',
+ help="""Process id of the target process. Maybe issued multiple times to
+ scan multiple processes. --pid and --cgroup are mutually exclusive.
+ If neither are provided, all processes are scanned to provide
+ system-wide information.""")
+
+ group.add_argument('--cgroup',
+ metavar='path', required=False,
+ help="""Path to the target cgroup in sysfs. Iterates over every pid in
+ the cgroup and its children. --pid and --cgroup are mutually
+ exclusive. If neither are provided, all processes are scanned to
+ provide system-wide information.""")
+
+ parser.add_argument('--rollup',
+ required=False, default=False, action='store_true',
+ help="""Sum the per-vma statistics to provide a summary over the whole
+ system, process or cgroup.""")
+
+ parser.add_argument('--cont',
+ metavar='size[KMG]', required=False, default=[], action='append',
+ help="""Adds stats for memory that is mapped in contiguous blocks of
+ <size> and also aligned to <size>. May be issued multiple times to
+ track multiple sized blocks. Useful to infer e.g. arm64 contpte and
+ hpa mappings. Size must be a power-of-2 number of pages.""")
+
+ parser.add_argument('--inc-smaps',
+ required=False, default=False, action='store_true',
+ help="""Include all numerical, additive /proc/<pid>/smaps stats in the
+ output.""")
+
+ parser.add_argument('--inc-empty',
+ required=False, default=False, action='store_true',
+ help="""Show all statistics including those whose value is 0.""")
+
+ parser.add_argument('--periodic',
+ metavar='sleep_ms', required=False, type=int,
+ help="""Run in a loop, polling every sleep_ms milliseconds.""")
+
+ args = parser.parse_args()
+
+ try:
+ args.cont = [size2order(cont) for cont in args.cont]
+ except ArgException as e:
+ parser.print_usage()
+ raise
+
+ if args.periodic:
+ while True:
+ do_main(args)
+ print()
+ time.sleep(args.periodic / 1000)
+ else:
+ do_main(args)
+
+
+if __name__ == "__main__":
+ try:
+ main()
+ except Exception as e:
+ prog = os.path.basename(sys.argv[0])
+ print(f'{prog}: {e}')
+ exit(1)
diff --git a/tools/net/sunrpc/extract.sh b/tools/net/sunrpc/extract.sh
new file mode 100755
index 000000000000..f944066f25bc
--- /dev/null
+++ b/tools/net/sunrpc/extract.sh
@@ -0,0 +1,11 @@
+#! /bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Extract an RPC protocol specification from an RFC document.
+# The version of this script comes from RFC 8166.
+#
+# Usage:
+# $ extract.sh < rfcNNNN.txt > protocol.x
+#
+
+grep '^ *///' | sed 's?^ */// ??' | sed 's?^ *///$??'
diff --git a/tools/net/sunrpc/xdrgen/.gitignore b/tools/net/sunrpc/xdrgen/.gitignore
new file mode 100644
index 000000000000..d7366c2f9be8
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/.gitignore
@@ -0,0 +1,2 @@
+__pycache__
+generators/__pycache__
diff --git a/tools/net/sunrpc/xdrgen/README b/tools/net/sunrpc/xdrgen/README
new file mode 100644
index 000000000000..27218a78ab40
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/README
@@ -0,0 +1,261 @@
+xdrgen - Linux Kernel XDR code generator
+
+Introduction
+------------
+
+SunRPC programs are typically specified using a language defined by
+RFC 4506. In fact, all IETF-published NFS specifications provide a
+description of the specified protocol using this language.
+
+Since the 1990's, user space consumers of SunRPC have had access to
+a tool that could read such XDR specifications and then generate C
+code that implements the RPC portions of that protocol. This tool is
+called rpcgen.
+
+This RPC-level code handles input directly from the network, so a
+high degree of memory safety and sanity checking is needed to help
+ensure proper levels of security. Bugs in this code can have a
+significant impact on security and performance.
+
+However, it is code that is repetitive and tedious to write by hand.
+
+The C code generated by rpcgen makes extensive use of the facilities
+of the user space TI-RPC library and libc. Furthermore, the dialect
+of the generated code is very traditional K&R C.
+
+The Linux kernel's implementations of SunRPC-based protocols hand-roll
+their XDR code. There are two main reasons for this:
+
+1. libtirpc (and its predecessors) operate only in user space. The
+ kernel's RPC implementation and its API are significantly
+ different from those of libtirpc.
+
+2. rpcgen-generated code is believed to be less efficient than code
+ that is hand-written.
+
+These days, gcc and its kin are capable of optimizing code better
+than human authors. There are only a few instances where writing
+XDR code by hand will make a measurable performance difference.
+
+In addition, the current hand-written code in the Linux kernel is
+difficult to audit, making it hard to prove that it implements
+exactly what is in the protocol specification.
+
+In order to accrue the benefits of machine-generated XDR code in the
+kernel, a tool is needed that will output C code that works against
+the kernel's SunRPC implementation rather than libtirpc.
+
+Enter xdrgen.
+
+
+Dependencies
+------------
+
+These dependencies are typically packaged by Linux distributions:
+
+- python3
+- python3-lark
+- python3-jinja2
+
+These dependencies are available via PyPI:
+
+- pip install 'lark[interegular]'
+
+
+XDR Specifications
+------------------
+
+When adding a new protocol implementation to the kernel, the XDR
+specification can be derived by feeding a .txt copy of the RFC to
+the script located in tools/net/sunrpc/extract.sh.
+
+ $ extract.sh < rfc0001.txt > new2.x
+
+
+Operation
+---------
+
+Once a .x file is available, use xdrgen to generate source and
+header files containing an implementation of XDR encoding and
+decoding functions for the specified protocol.
+
+ $ ./xdrgen definitions new2.x > include/linux/sunrpc/xdrgen/new2.h
+ $ ./xdrgen declarations new2.x > new2xdr_gen.h
+
+and
+
+ $ ./xdrgen source new2.x > new2xdr_gen.c
+
+The files are ready to use for a server-side protocol implementation,
+or may be used as a guide for implementing these routines by hand.
+
+By default, the only comments added to this code are kdoc comments
+that appear directly in front of the public per-procedure APIs. For
+deeper introspection, specifying the "--annotate" flag will insert
+additional comments in the generated code to help readers match the
+generated code to specific parts of the XDR specification.
+
+Because the generated code is targeted for the Linux kernel, it
+is tagged with a GPLv2-only license.
+
+The xdrgen tool can also provide lexical and syntax checking of
+an XDR specification:
+
+ $ ./xdrgen lint xdr/new.x
+
+
+How It Works
+------------
+
+xdrgen does not use machine learning to generate source code. The
+translation is entirely deterministic.
+
+RFC 4506 Section 6 contains a BNF grammar of the XDR specification
+language. The grammar has been adapted for use by the Python Lark
+module.
+
+The xdr.ebnf file in this directory contains the grammar used to
+parse XDR specifications. xdrgen configures Lark using the grammar
+in xdr.ebnf. Lark parses the target XDR specification using this
+grammar, creating a parse tree.
+
+xdrgen then transforms the parse tree into an abstract syntax tree.
+This tree is passed to a series of code generators.
+
+The generators are implemented as Python classes residing in the
+generators/ directory. Each generator emits code created from Jinja2
+templates stored in the templates/ directory.
+
+Source code is generated in the same order in which the definitions
+appear in the specification, to ensure the generated code compiles.
+This conforms with the behavior of rpcgen.
+
+xdrgen assumes that the generated source code is further compiled by
+a compiler that can optimize in a number of ways, including:
+
+ - Unused functions are discarded (i.e., not added to the executable)
+
+ - Aggressive function inlining removes unnecessary stack frames
+
+ - Single-arm switch statements are replaced by a single conditional
+ branch
+
+And so on.
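+
+A rough sketch of this flow (this is not xdrgen itself; the start-rule
+name, template path, and file names below are illustrative
+assumptions):
+
+ from lark import Lark
+ from jinja2 import Environment, FileSystemLoader
+
+ # Build a parser from the grammar and parse an XDR specification.
+ grammar = open("xdr.ebnf").read()
+ parser = Lark(grammar, start="specification")    # start rule assumed
+ parse_tree = parser.parse(open("new2.x").read())
+
+ # Render one code fragment from a Jinja2 template.
+ env = Environment(loader=FileSystemLoader("templates/C/enum/"))
+ template = env.get_template("definition/open.j2")
+ print(template.render(name="nfsstat3"))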
+
+
+Pragmas
+-------
+
+Pragma directives specify exceptions to the normal generation of
+encoding and decoding functions. The directives currently implemented
+are "big_endian", "exclude", "header", and "public".
+
+Pragma big_endian
+------ ----------
+
+ pragma big_endian <enum> ;
+
+For variables that might contain only a small number of values, it
+is more efficient to avoid the byte-swap when encoding or decoding
+on little-endian machines. Such is often the case with error status
+codes. For example:
+
+ pragma big_endian nfsstat3;
+
+In this case, when generating an XDR struct or union containing a
+field of type "nfsstat3", xdrgen will make the type of that field
+"__be32" instead of "enum nfsstat3". XDR unions then switch on the
+non-byte-swapped value of that field.
+
+Pragma exclude
+------ -------
+
+ pragma exclude <RPC procedure> ;
+
+In some cases, a procedure encoder or decoder function might need
+special processing that cannot be automatically generated. The
+automatically-generated functions might conflict or interfere with
+the hand-rolled function. To avoid editing the generated source code
+by hand, a pragma can specify that the procedure's encoder and
+decoder functions are not included in the generated header and
+source.
+
+For example:
+
+ pragma exclude NFSPROC3_READDIRPLUS;
+
+Excludes the decoder function for the READDIRPLUS argument and the
+encoder function for the READDIRPLUS result.
+
+Note that because data item encoder and decoder functions are
+defined "static __maybe_unused", subsequent compilation
+automatically excludes data item encoder and decoder functions that
+are used only by excluded procedures.
+
+Pragma header
+------ ------
+
+ pragma header <string> ;
+
+Provide a name to use for the header file. For example:
+
+ pragma header nlm4;
+
+Adds
+
+ #include "nlm4xdr_gen.h"
+
+to the generated source file.
+
+Pragma public
+------ ------
+
+ pragma public <XDR data item> ;
+
+Normally XDR encoder and decoder functions are "static". If an
+implementer wants to call these functions from other source code,
+they can add a public pragma in the input .x file to indicate a set
+of functions that should get a prototype in the generated header,
+and the function definitions will not be declared static.
+
+For example:
+
+ pragma public nfsstat3;
+
+Adds these prototypes in the generated header:
+
+ bool xdrgen_decode_nfsstat3(struct xdr_stream *xdr, enum nfsstat3 *ptr);
+ bool xdrgen_encode_nfsstat3(struct xdr_stream *xdr, enum nfsstat3 value);
+
+And, in the generated source code, both of these functions appear
+without the "static __maybe_unused" modifiers.
+
+
+Future Work
+-----------
+
+Finish implementing XDR pointer and list types.
+
+Generate client-side procedure functions
+
+Expand the README into a user guide similar to rpcgen(1)
+
+Add more pragma directives:
+
+ * @pages -- use xdr_read/write_pages() for the specified opaque
+ field
+ * @skip -- do not decode, but rather skip, the specified argument
+ field
+
+Enable something like a #include to dynamically insert the content
+of other specification files
+
+Properly support line-by-line pass-through via the "%" decorator
+
+Build a unit test suite for verifying translation of XDR language
+into compilable code
+
+Add a command-line option to insert trace_printk call sites in the
+generated source code, for improved (temporary) observability
+
+Generate kernel Rust code as well as C code
diff --git a/tools/net/sunrpc/xdrgen/__init__.py b/tools/net/sunrpc/xdrgen/__init__.py
new file mode 100644
index 000000000000..c940e9275252
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/__init__.py
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+# Just to make sphinx-apidoc document this directory
diff --git a/tools/net/sunrpc/xdrgen/generators/__init__.py b/tools/net/sunrpc/xdrgen/generators/__init__.py
new file mode 100644
index 000000000000..b98574a36a4a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/__init__.py
@@ -0,0 +1,117 @@
+# SPDX-License-Identifier: GPL-2.0
+
+"""Define a base code generator class"""
+
+import sys
+from jinja2 import Environment, FileSystemLoader, Template
+
+from xdr_ast import _XdrAst, Specification, _RpcProgram, _XdrTypeSpecifier
+from xdr_ast import public_apis, pass_by_reference, get_header_name
+from xdr_parse import get_xdr_annotate
+
+
+def create_jinja2_environment(language: str, xdr_type: str) -> Environment:
+ """Open a set of templates based on output language"""
+ match language:
+ case "C":
+ environment = Environment(
+ loader=FileSystemLoader(sys.path[0] + "/templates/C/" + xdr_type + "/"),
+ trim_blocks=True,
+ lstrip_blocks=True,
+ )
+ environment.globals["annotate"] = get_xdr_annotate()
+ environment.globals["public_apis"] = public_apis
+ environment.globals["pass_by_reference"] = pass_by_reference
+ return environment
+ case _:
+ raise NotImplementedError("Language not supported")
+
+
+def get_jinja2_template(
+ environment: Environment, template_type: str, template_name: str
+) -> Template:
+ """Retrieve a Jinja2 template for emitting source code"""
+ return environment.get_template(template_type + "/" + template_name + ".j2")
+
+
+def find_xdr_program_name(root: Specification) -> str:
+ """Retrieve the RPC program name from an abstract syntax tree"""
+ raw_name = get_header_name()
+ if raw_name != "none":
+ return raw_name.lower()
+ for definition in root.definitions:
+ if isinstance(definition.value, _RpcProgram):
+ raw_name = definition.value.name
+ return raw_name.lower().removesuffix("_program").removesuffix("_prog")
+ return "noprog"
+
+
+def header_guard_infix(filename: str) -> str:
+ """Extract the header guard infix from the specification filename"""
+ basename = filename.split("/")[-1]
+ program = basename.replace(".x", "")
+ return program.upper()
+
+
+def kernel_c_type(spec: _XdrTypeSpecifier) -> str:
+ """Return name of C type"""
+ builtin_native_c_type = {
+ "bool": "bool",
+ "int": "s32",
+ "unsigned_int": "u32",
+ "long": "s32",
+ "unsigned_long": "u32",
+ "hyper": "s64",
+ "unsigned_hyper": "u64",
+ }
+ if spec.type_name in builtin_native_c_type:
+ return builtin_native_c_type[spec.type_name]
+ return spec.type_name
+
+
+class Boilerplate:
+ """Base class to generate boilerplate for source files"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ raise NotImplementedError("No language support defined")
+
+ def emit_declaration(self, filename: str, root: Specification) -> None:
+ """Emit declaration header boilerplate"""
+ raise NotImplementedError("Header boilerplate generation not supported")
+
+ def emit_definition(self, filename: str, root: Specification) -> None:
+ """Emit definition header boilerplate"""
+ raise NotImplementedError("Header boilerplate generation not supported")
+
+ def emit_source(self, filename: str, root: Specification) -> None:
+ """Emit generic source code for this XDR type"""
+ raise NotImplementedError("Source boilerplate generation not supported")
+
+
+class SourceGenerator:
+ """Base class to generate header and source code for XDR types"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ raise NotImplementedError("No language support defined")
+
+ def emit_declaration(self, node: _XdrAst) -> None:
+ """Emit one function declaration for this XDR type"""
+ raise NotImplementedError("Declaration generation not supported")
+
+ def emit_decoder(self, node: _XdrAst) -> None:
+ """Emit one decoder function for this XDR type"""
+ raise NotImplementedError("Decoder generation not supported")
+
+ def emit_definition(self, node: _XdrAst) -> None:
+ """Emit one definition for this XDR type"""
+ raise NotImplementedError("Definition generation not supported")
+
+ def emit_encoder(self, node: _XdrAst) -> None:
+ """Emit one encoder function for this XDR type"""
+ raise NotImplementedError("Encoder generation not supported")
+
+ def emit_maxsize(self, node: _XdrAst) -> None:
+ """Emit one maxsize macro for this XDR type"""
+ raise NotImplementedError("Maxsize macro generation not supported")
diff --git a/tools/net/sunrpc/xdrgen/generators/constant.py b/tools/net/sunrpc/xdrgen/generators/constant.py
new file mode 100644
index 000000000000..f2339caf0953
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/constant.py
@@ -0,0 +1,20 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate code to handle XDR constants"""
+
+from generators import SourceGenerator, create_jinja2_environment
+from xdr_ast import _XdrConstant
+
+class XdrConstantGenerator(SourceGenerator):
+ """Generate source code for XDR constants"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "constants")
+ self.peer = peer
+
+ def emit_definition(self, node: _XdrConstant) -> None:
+ """Emit one definition for a constant"""
+ template = self.environment.get_template("definition.j2")
+ print(template.render(name=node.name, value=node.value))
diff --git a/tools/net/sunrpc/xdrgen/generators/enum.py b/tools/net/sunrpc/xdrgen/generators/enum.py
new file mode 100644
index 000000000000..e62f715d3996
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/enum.py
@@ -0,0 +1,64 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate code to handle XDR enum types"""
+
+from generators import SourceGenerator, create_jinja2_environment
+from xdr_ast import _XdrEnum, public_apis, big_endian, get_header_name
+
+
+class XdrEnumGenerator(SourceGenerator):
+ """Generate source code for XDR enum types"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "enum")
+ self.peer = peer
+
+ def emit_declaration(self, node: _XdrEnum) -> None:
+ """Emit one declaration pair for an XDR enum type"""
+ if node.name in public_apis:
+ template = self.environment.get_template("declaration/enum.j2")
+ print(template.render(name=node.name))
+
+ def emit_definition(self, node: _XdrEnum) -> None:
+ """Emit one definition for an XDR enum type"""
+ template = self.environment.get_template("definition/open.j2")
+ print(template.render(name=node.name))
+
+ template = self.environment.get_template("definition/enumerator.j2")
+ for enumerator in node.enumerators:
+ print(template.render(name=enumerator.name, value=enumerator.value))
+
+ if node.name in big_endian:
+ template = self.environment.get_template("definition/close_be.j2")
+ else:
+ template = self.environment.get_template("definition/close.j2")
+ print(template.render(name=node.name))
+
+ def emit_decoder(self, node: _XdrEnum) -> None:
+ """Emit one decoder function for an XDR enum type"""
+ if node.name in big_endian:
+ template = self.environment.get_template("decoder/enum_be.j2")
+ else:
+ template = self.environment.get_template("decoder/enum.j2")
+ print(template.render(name=node.name))
+
+ def emit_encoder(self, node: _XdrEnum) -> None:
+ """Emit one encoder function for an XDR enum type"""
+ if node.name in big_endian:
+ template = self.environment.get_template("encoder/enum_be.j2")
+ else:
+ template = self.environment.get_template("encoder/enum.j2")
+ print(template.render(name=node.name))
+
+ def emit_maxsize(self, node: _XdrEnum) -> None:
+ """Emit one maxsize macro for an XDR enum type"""
+ macro_name = get_header_name().upper() + "_" + node.name + "_sz"
+ template = self.environment.get_template("maxsize/enum.j2")
+ print(
+ template.render(
+ macro=macro_name,
+ width=" + ".join(node.symbolic_width()),
+ )
+ )
diff --git a/tools/net/sunrpc/xdrgen/generators/header_bottom.py b/tools/net/sunrpc/xdrgen/generators/header_bottom.py
new file mode 100644
index 000000000000..4b55b282dfc0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/header_bottom.py
@@ -0,0 +1,33 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate header bottom boilerplate"""
+
+import os.path
+import time
+
+from generators import Boilerplate, header_guard_infix
+from generators import create_jinja2_environment, get_jinja2_template
+from xdr_ast import Specification
+
+
+class XdrHeaderBottomGenerator(Boilerplate):
+ """Generate header boilerplate"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "header_bottom")
+ self.peer = peer
+
+ def emit_declaration(self, filename: str, root: Specification) -> None:
+ """Emit the bottom header guard"""
+ template = get_jinja2_template(self.environment, "declaration", "header")
+ print(template.render(infix=header_guard_infix(filename)))
+
+ def emit_definition(self, filename: str, root: Specification) -> None:
+ """Emit the bottom header guard"""
+ template = get_jinja2_template(self.environment, "definition", "header")
+ print(template.render(infix=header_guard_infix(filename)))
+
+ def emit_source(self, filename: str, root: Specification) -> None:
+ pass
diff --git a/tools/net/sunrpc/xdrgen/generators/header_top.py b/tools/net/sunrpc/xdrgen/generators/header_top.py
new file mode 100644
index 000000000000..c6bc21c71f19
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/header_top.py
@@ -0,0 +1,45 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate header top boilerplate"""
+
+import os.path
+import time
+
+from generators import Boilerplate, header_guard_infix
+from generators import create_jinja2_environment, get_jinja2_template
+from xdr_ast import Specification
+
+
+class XdrHeaderTopGenerator(Boilerplate):
+ """Generate header boilerplate"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "header_top")
+ self.peer = peer
+
+ def emit_declaration(self, filename: str, root: Specification) -> None:
+ """Emit the top header guard"""
+ template = get_jinja2_template(self.environment, "declaration", "header")
+ print(
+ template.render(
+ infix=header_guard_infix(filename),
+ filename=filename,
+ mtime=time.ctime(os.path.getmtime(filename)),
+ )
+ )
+
+ def emit_definition(self, filename: str, root: Specification) -> None:
+ """Emit the top header guard"""
+ template = get_jinja2_template(self.environment, "definition", "header")
+ print(
+ template.render(
+ infix=header_guard_infix(filename),
+ filename=filename,
+ mtime=time.ctime(os.path.getmtime(filename)),
+ )
+ )
+
+ def emit_source(self, filename: str, root: Specification) -> None:
+ pass
diff --git a/tools/net/sunrpc/xdrgen/generators/pointer.py b/tools/net/sunrpc/xdrgen/generators/pointer.py
new file mode 100644
index 000000000000..6dbda60ad2db
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/pointer.py
@@ -0,0 +1,288 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate code to handle XDR pointer types"""
+
+from jinja2 import Environment
+
+from generators import SourceGenerator, kernel_c_type
+from generators import create_jinja2_environment, get_jinja2_template
+
+from xdr_ast import _XdrBasic, _XdrString
+from xdr_ast import _XdrFixedLengthOpaque, _XdrVariableLengthOpaque
+from xdr_ast import _XdrFixedLengthArray, _XdrVariableLengthArray
+from xdr_ast import _XdrOptionalData, _XdrPointer, _XdrDeclaration
+from xdr_ast import public_apis, get_header_name
+
+
+def emit_pointer_declaration(environment: Environment, node: _XdrPointer) -> None:
+ """Emit a declaration pair for an XDR pointer type"""
+ if node.name in public_apis:
+ template = get_jinja2_template(environment, "declaration", "close")
+ print(template.render(name=node.name))
+
+
+def emit_pointer_member_definition(
+ environment: Environment, field: _XdrDeclaration
+) -> None:
+ """Emit a definition for one field in an XDR struct"""
+ if isinstance(field, _XdrBasic):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(template.render(name=field.name))
+ elif isinstance(field, _XdrString):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(template.render(name=field.name))
+ elif isinstance(field, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrOptionalData):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ classifier=field.spec.c_classifier,
+ )
+ )
+
+
+def emit_pointer_definition(environment: Environment, node: _XdrPointer) -> None:
+ """Emit a definition for an XDR pointer type"""
+ template = get_jinja2_template(environment, "definition", "open")
+ print(template.render(name=node.name))
+
+ for field in node.fields[0:-1]:
+ emit_pointer_member_definition(environment, field)
+
+ template = get_jinja2_template(environment, "definition", "close")
+ print(template.render(name=node.name))
+
+
+def emit_pointer_member_decoder(
+ environment: Environment, field: _XdrDeclaration
+) -> None:
+ """Emit a decoder for one field in an XDR pointer"""
+ if isinstance(field, _XdrBasic):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrString):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ size=field.size,
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ maxsize=field.maxsize,
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrOptionalData):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ classifier=field.spec.c_classifier,
+ )
+ )
+
+
+def emit_pointer_decoder(environment: Environment, node: _XdrPointer) -> None:
+ """Emit one decoder function for an XDR pointer type"""
+ template = get_jinja2_template(environment, "decoder", "open")
+ print(template.render(name=node.name))
+
+ for field in node.fields[0:-1]:
+ emit_pointer_member_decoder(environment, field)
+
+ template = get_jinja2_template(environment, "decoder", "close")
+ print(template.render())
+
+
+def emit_pointer_member_encoder(
+ environment: Environment, field: _XdrDeclaration
+) -> None:
+ """Emit an encoder for one field in a XDR pointer"""
+ if isinstance(field, _XdrBasic):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrString):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrOptionalData):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ classifier=field.spec.c_classifier,
+ )
+ )
+
+
+def emit_pointer_encoder(environment: Environment, node: _XdrPointer) -> None:
+ """Emit one encoder function for an XDR pointer type"""
+ template = get_jinja2_template(environment, "encoder", "open")
+ print(template.render(name=node.name))
+
+ for field in node.fields[0:-1]:
+ emit_pointer_member_encoder(environment, field)
+
+ template = get_jinja2_template(environment, "encoder", "close")
+ print(template.render())
+
+
+def emit_pointer_maxsize(environment: Environment, node: _XdrPointer) -> None:
+ """Emit one maxsize macro for an XDR pointer type"""
+ macro_name = get_header_name().upper() + "_" + node.name + "_sz"
+ template = get_jinja2_template(environment, "maxsize", "pointer")
+ print(
+ template.render(
+ macro=macro_name,
+ width=" + ".join(node.symbolic_width()),
+ )
+ )
+
+
+class XdrPointerGenerator(SourceGenerator):
+ """Generate source code for XDR pointer"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "pointer")
+ self.peer = peer
+
+ def emit_declaration(self, node: _XdrPointer) -> None:
+ """Emit one declaration pair for an XDR pointer type"""
+ emit_pointer_declaration(self.environment, node)
+
+ def emit_definition(self, node: _XdrPointer) -> None:
+ """Emit one declaration for an XDR pointer type"""
+ emit_pointer_definition(self.environment, node)
+
+ def emit_decoder(self, node: _XdrPointer) -> None:
+ """Emit one decoder function for an XDR pointer type"""
+ emit_pointer_decoder(self.environment, node)
+
+ def emit_encoder(self, node: _XdrPointer) -> None:
+ """Emit one encoder function for an XDR pointer type"""
+ emit_pointer_encoder(self.environment, node)
+
+ def emit_maxsize(self, node: _XdrPointer) -> None:
+ """Emit one maxsize macro for an XDR pointer type"""
+ emit_pointer_maxsize(self.environment, node)
diff --git a/tools/net/sunrpc/xdrgen/generators/program.py b/tools/net/sunrpc/xdrgen/generators/program.py
new file mode 100644
index 000000000000..ac3cf1694b68
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/program.py
@@ -0,0 +1,168 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate code for an RPC program's procedures"""
+
+from jinja2 import Environment
+
+from generators import SourceGenerator, create_jinja2_environment
+from xdr_ast import _RpcProgram, _RpcVersion, excluded_apis
+
+
+def emit_version_definitions(
+ environment: Environment, program: str, version: _RpcVersion
+) -> None:
+ """Emit procedure numbers for each RPC version's procedures"""
+ template = environment.get_template("definition/open.j2")
+ print(template.render(program=program.upper()))
+
+ template = environment.get_template("definition/procedure.j2")
+ for procedure in version.procedures:
+ if procedure.name not in excluded_apis:
+ print(
+ template.render(
+ name=procedure.name,
+ value=procedure.number,
+ )
+ )
+
+ template = environment.get_template("definition/close.j2")
+ print(template.render())
+
+
+def emit_version_declarations(
+ environment: Environment, program: str, version: _RpcVersion
+) -> None:
+ """Emit declarations for each RPC version's procedures"""
+ arguments = dict.fromkeys([])
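+ # dict used as an insertion-ordered set to de-duplicate argument types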
+ for procedure in version.procedures:
+ if procedure.name not in excluded_apis:
+ arguments[procedure.argument.type_name] = None
+ if len(arguments) > 0:
+ print("")
+ template = environment.get_template("declaration/argument.j2")
+ for argument in arguments:
+ print(template.render(program=program, argument=argument))
+
+ results = dict.fromkeys([])
+ for procedure in version.procedures:
+ if procedure.name not in excluded_apis:
+ results[procedure.result.type_name] = None
+ if len(results) > 0:
+ print("")
+ template = environment.get_template("declaration/result.j2")
+ for result in results:
+ print(template.render(program=program, result=result))
+
+
+def emit_version_argument_decoders(
+ environment: Environment, program: str, version: _RpcVersion
+) -> None:
+ """Emit server argument decoders for each RPC version's procedures"""
+ arguments = dict.fromkeys([])
+ for procedure in version.procedures:
+ if procedure.name not in excluded_apis:
+ arguments[procedure.argument.type_name] = None
+
+ template = environment.get_template("decoder/argument.j2")
+ for argument in arguments:
+ print(template.render(program=program, argument=argument))
+
+
+def emit_version_result_decoders(
+ environment: Environment, program: str, version: _RpcVersion
+) -> None:
+ """Emit client result decoders for each RPC version's procedures"""
+ results = dict.fromkeys([])
+ for procedure in version.procedures:
+ if procedure.name not in excluded_apis:
+ results[procedure.result.type_name] = None
+
+ template = environment.get_template("decoder/result.j2")
+ for result in results:
+ print(template.render(program=program, result=result))
+
+
+def emit_version_argument_encoders(
+ environment: Environment, program: str, version: _RpcVersion
+) -> None:
+ """Emit client argument encoders for each RPC version's procedures"""
+ arguments = dict.fromkeys([])
+ for procedure in version.procedures:
+ if procedure.name not in excluded_apis:
+ arguments[procedure.argument.type_name] = None
+
+ template = environment.get_template("encoder/argument.j2")
+ for argument in arguments:
+ print(template.render(program=program, argument=argument))
+
+
+def emit_version_result_encoders(
+ environment: Environment, program: str, version: _RpcVersion
+) -> None:
+ """Emit server result encoders for each RPC version's procedures"""
+ results = dict.fromkeys([])
+ for procedure in version.procedures:
+ if procedure.name not in excluded_apis:
+ results[procedure.result.type_name] = None
+
+ template = environment.get_template("encoder/result.j2")
+ for result in results:
+ print(template.render(program=program, result=result))
+
+
+class XdrProgramGenerator(SourceGenerator):
+ """Generate source code for an RPC program's procedures"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "program")
+ self.peer = peer
+
+ def emit_definition(self, node: _RpcProgram) -> None:
+ """Emit procedure numbers for each of an RPC programs's procedures"""
+ raw_name = node.name
+ program = raw_name.lower().removesuffix("_program").removesuffix("_prog")
+
+ for version in node.versions:
+ emit_version_definitions(self.environment, program, version)
+
+ def emit_declaration(self, node: _RpcProgram) -> None:
+ """Emit a declaration pair for each of an RPC programs's procedures"""
+ raw_name = node.name
+ program = raw_name.lower().removesuffix("_program").removesuffix("_prog")
+
+ for version in node.versions:
+ emit_version_declarations(self.environment, program, version)
+
+ def emit_decoder(self, node: _RpcProgram) -> None:
+ """Emit all decoder functions for an RPC program's procedures"""
+ raw_name = node.name
+ program = raw_name.lower().removesuffix("_program").removesuffix("_prog")
+ match self.peer:
+ case "server":
+ for version in node.versions:
+ emit_version_argument_decoders(
+ self.environment, program, version,
+ )
+ case "client":
+ for version in node.versions:
+ emit_version_result_decoders(
+ self.environment, program, version,
+ )
+
+ def emit_encoder(self, node: _RpcProgram) -> None:
+ """Emit all encoder functions for an RPC program's procedures"""
+ raw_name = node.name
+ program = raw_name.lower().removesuffix("_program").removesuffix("_prog")
+ match self.peer:
+ case "server":
+ for version in node.versions:
+ emit_version_result_encoders(
+ self.environment, program, version,
+ )
+ case "client":
+ for version in node.versions:
+ emit_version_argument_encoders(
+ self.environment, program, version,
+ )
diff --git a/tools/net/sunrpc/xdrgen/generators/source_top.py b/tools/net/sunrpc/xdrgen/generators/source_top.py
new file mode 100644
index 000000000000..bcf47d93d6f1
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/source_top.py
@@ -0,0 +1,32 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate source code boilerplate"""
+
+import os.path
+import time
+
+from generators import Boilerplate
+from generators import find_xdr_program_name, create_jinja2_environment
+from xdr_ast import _RpcProgram, Specification, get_header_name
+
+
+class XdrSourceTopGenerator(Boilerplate):
+ """Generate source code boilerplate"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "source_top")
+ self.peer = peer
+
+ def emit_source(self, filename: str, root: Specification) -> None:
+ """Emit the top source boilerplate"""
+ name = find_xdr_program_name(root)
+ template = self.environment.get_template(self.peer + ".j2")
+ print(
+ template.render(
+ program=name,
+ filename=filename,
+ mtime=time.ctime(os.path.getmtime(filename)),
+ )
+ )
diff --git a/tools/net/sunrpc/xdrgen/generators/struct.py b/tools/net/sunrpc/xdrgen/generators/struct.py
new file mode 100644
index 000000000000..64911de46f62
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/struct.py
@@ -0,0 +1,288 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate code to handle XDR struct types"""
+
+from jinja2 import Environment
+
+from generators import SourceGenerator, kernel_c_type
+from generators import create_jinja2_environment, get_jinja2_template
+
+from xdr_ast import _XdrBasic, _XdrString
+from xdr_ast import _XdrFixedLengthOpaque, _XdrVariableLengthOpaque
+from xdr_ast import _XdrFixedLengthArray, _XdrVariableLengthArray
+from xdr_ast import _XdrOptionalData, _XdrStruct, _XdrDeclaration
+from xdr_ast import public_apis, get_header_name
+
+
+def emit_struct_declaration(environment: Environment, node: _XdrStruct) -> None:
+ """Emit one declaration pair for an XDR struct type"""
+ if node.name in public_apis:
+ template = get_jinja2_template(environment, "declaration", "close")
+ print(template.render(name=node.name))
+
+
+def emit_struct_member_definition(
+ environment: Environment, field: _XdrDeclaration
+) -> None:
+ """Emit a definition for one field in an XDR struct"""
+ if isinstance(field, _XdrBasic):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(template.render(name=field.name))
+ elif isinstance(field, _XdrString):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(template.render(name=field.name))
+ elif isinstance(field, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrOptionalData):
+ template = get_jinja2_template(environment, "definition", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=kernel_c_type(field.spec),
+ classifier=field.spec.c_classifier,
+ )
+ )
+
+
+def emit_struct_definition(environment: Environment, node: _XdrStruct) -> None:
+ """Emit one definition for an XDR struct type"""
+ template = get_jinja2_template(environment, "definition", "open")
+ print(template.render(name=node.name))
+
+ for field in node.fields:
+ emit_struct_member_definition(environment, field)
+
+ template = get_jinja2_template(environment, "definition", "close")
+ print(template.render(name=node.name))
+
+
+def emit_struct_member_decoder(
+ environment: Environment, field: _XdrDeclaration
+) -> None:
+ """Emit a decoder for one field in an XDR struct"""
+ if isinstance(field, _XdrBasic):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrString):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ size=field.size,
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ maxsize=field.maxsize,
+ classifier=field.spec.c_classifier,
+ )
+ )
+ elif isinstance(field, _XdrOptionalData):
+ template = get_jinja2_template(environment, "decoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ classifier=field.spec.c_classifier,
+ )
+ )
+
+
+def emit_struct_decoder(environment: Environment, node: _XdrStruct) -> None:
+ """Emit one decoder function for an XDR struct type"""
+ template = get_jinja2_template(environment, "decoder", "open")
+ print(template.render(name=node.name))
+
+ for field in node.fields:
+ emit_struct_member_decoder(environment, field)
+
+ template = get_jinja2_template(environment, "decoder", "close")
+ print(template.render())
+
+
+def emit_struct_member_encoder(
+ environment: Environment, field: _XdrDeclaration
+) -> None:
+ """Emit an encoder for one field in an XDR struct"""
+ if isinstance(field, _XdrBasic):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrString):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ size=field.size,
+ )
+ )
+ elif isinstance(field, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ maxsize=field.maxsize,
+ )
+ )
+ elif isinstance(field, _XdrOptionalData):
+ template = get_jinja2_template(environment, "encoder", field.template)
+ print(
+ template.render(
+ name=field.name,
+ type=field.spec.type_name,
+ classifier=field.spec.c_classifier,
+ )
+ )
+
+
+def emit_struct_encoder(environment: Environment, node: _XdrStruct) -> None:
+ """Emit one encoder function for an XDR struct type"""
+ template = get_jinja2_template(environment, "encoder", "open")
+ print(template.render(name=node.name))
+
+ for field in node.fields:
+ emit_struct_member_encoder(environment, field)
+
+ template = get_jinja2_template(environment, "encoder", "close")
+ print(template.render())
+
+
+def emit_struct_maxsize(environment: Environment, node: _XdrStruct) -> None:
+ """Emit one maxsize macro for an XDR struct type"""
+ macro_name = get_header_name().upper() + "_" + node.name + "_sz"
+ template = get_jinja2_template(environment, "maxsize", "struct")
+ print(
+ template.render(
+ macro=macro_name,
+ width=" + ".join(node.symbolic_width()),
+ )
+ )
+
+
+class XdrStructGenerator(SourceGenerator):
+ """Generate source code for XDR structs"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "struct")
+ self.peer = peer
+
+ def emit_declaration(self, node: _XdrStruct) -> None:
+ """Emit one declaration pair for an XDR struct type"""
+ emit_struct_declaration(self.environment, node)
+
+ def emit_definition(self, node: _XdrStruct) -> None:
+ """Emit one definition for an XDR struct type"""
+ emit_struct_definition(self.environment, node)
+
+ def emit_decoder(self, node: _XdrStruct) -> None:
+ """Emit one decoder function for an XDR struct type"""
+ emit_struct_decoder(self.environment, node)
+
+ def emit_encoder(self, node: _XdrStruct) -> None:
+ """Emit one encoder function for an XDR struct type"""
+ emit_struct_encoder(self.environment, node)
+
+ def emit_maxsize(self, node: _XdrStruct) -> None:
+ """Emit one maxsize macro for an XDR struct type"""
+ emit_struct_maxsize(self.environment, node)
diff --git a/tools/net/sunrpc/xdrgen/generators/typedef.py b/tools/net/sunrpc/xdrgen/generators/typedef.py
new file mode 100644
index 000000000000..fab72e9d6915
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/typedef.py
@@ -0,0 +1,271 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate code to handle XDR typedefs"""
+
+from jinja2 import Environment
+
+from generators import SourceGenerator, kernel_c_type
+from generators import create_jinja2_environment, get_jinja2_template
+
+from xdr_ast import _XdrBasic, _XdrTypedef, _XdrString
+from xdr_ast import _XdrFixedLengthOpaque, _XdrVariableLengthOpaque
+from xdr_ast import _XdrFixedLengthArray, _XdrVariableLengthArray
+from xdr_ast import _XdrOptionalData, _XdrVoid, _XdrDeclaration
+from xdr_ast import public_apis, get_header_name
+
+
+def emit_typedef_declaration(environment: Environment, node: _XdrDeclaration) -> None:
+ """Emit a declaration pair for one XDR typedef"""
+ if node.name not in public_apis:
+ return
+ if isinstance(node, _XdrBasic):
+ template = get_jinja2_template(environment, "declaration", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=kernel_c_type(node.spec),
+ classifier=node.spec.c_classifier,
+ )
+ )
+ elif isinstance(node, _XdrString):
+ template = get_jinja2_template(environment, "declaration", node.template)
+ print(template.render(name=node.name))
+ elif isinstance(node, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "declaration", node.template)
+ print(template.render(name=node.name, size=node.size))
+ elif isinstance(node, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "declaration", node.template)
+ print(template.render(name=node.name))
+ elif isinstance(node, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "declaration", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ size=node.size,
+ )
+ )
+ elif isinstance(node, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "declaration", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ classifier=node.spec.c_classifier,
+ )
+ )
+ elif isinstance(node, _XdrOptionalData):
+ raise NotImplementedError("<optional_data> typedef not yet implemented")
+ elif isinstance(node, _XdrVoid):
+ raise NotImplementedError("<void> typedef not yet implemented")
+ else:
+ raise NotImplementedError("typedef: type not recognized")
+
+
+def emit_type_definition(environment: Environment, node: _XdrDeclaration) -> None:
+ """Emit a definition for one XDR typedef"""
+ if isinstance(node, _XdrBasic):
+ template = get_jinja2_template(environment, "definition", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=kernel_c_type(node.spec),
+ classifier=node.spec.c_classifier,
+ )
+ )
+ elif isinstance(node, _XdrString):
+ template = get_jinja2_template(environment, "definition", node.template)
+ print(template.render(name=node.name))
+ elif isinstance(node, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "definition", node.template)
+ print(template.render(name=node.name, size=node.size))
+ elif isinstance(node, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "definition", node.template)
+ print(template.render(name=node.name))
+ elif isinstance(node, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "definition", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ size=node.size,
+ )
+ )
+ elif isinstance(node, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "definition", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ classifier=node.spec.c_classifier,
+ )
+ )
+ elif isinstance(node, _XdrOptionalData):
+ raise NotImplementedError("<optional_data> typedef not yet implemented")
+ elif isinstance(node, _XdrVoid):
+ raise NotImplementedError("<void> typedef not yet implemented")
+ else:
+ raise NotImplementedError("typedef: type not recognized")
+
+
+def emit_typedef_decoder(environment: Environment, node: _XdrDeclaration) -> None:
+ """Emit a decoder function for one XDR typedef"""
+ if isinstance(node, _XdrBasic):
+ template = get_jinja2_template(environment, "decoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ )
+ )
+ elif isinstance(node, _XdrString):
+ template = get_jinja2_template(environment, "decoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ maxsize=node.maxsize,
+ )
+ )
+ elif isinstance(node, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "decoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ size=node.size,
+ )
+ )
+ elif isinstance(node, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "decoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ maxsize=node.maxsize,
+ )
+ )
+ elif isinstance(node, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "decoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ size=node.size,
+ classifier=node.spec.c_classifier,
+ )
+ )
+ elif isinstance(node, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "decoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ maxsize=node.maxsize,
+ )
+ )
+ elif isinstance(node, _XdrOptionalData):
+ raise NotImplementedError("<optional_data> typedef not yet implemented")
+ elif isinstance(node, _XdrVoid):
+ raise NotImplementedError("<void> typedef not yet implemented")
+ else:
+ raise NotImplementedError("typedef: type not recognized")
+
+
+def emit_typedef_encoder(environment: Environment, node: _XdrDeclaration) -> None:
+ """Emit an encoder function for one XDR typedef"""
+ if isinstance(node, _XdrBasic):
+ template = get_jinja2_template(environment, "encoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ )
+ )
+ elif isinstance(node, _XdrString):
+ template = get_jinja2_template(environment, "encoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ maxsize=node.maxsize,
+ )
+ )
+ elif isinstance(node, _XdrFixedLengthOpaque):
+ template = get_jinja2_template(environment, "encoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ size=node.size,
+ )
+ )
+ elif isinstance(node, _XdrVariableLengthOpaque):
+ template = get_jinja2_template(environment, "encoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ maxsize=node.maxsize,
+ )
+ )
+ elif isinstance(node, _XdrFixedLengthArray):
+ template = get_jinja2_template(environment, "encoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ size=node.size,
+ )
+ )
+ elif isinstance(node, _XdrVariableLengthArray):
+ template = get_jinja2_template(environment, "encoder", node.template)
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ maxsize=node.maxsize,
+ )
+ )
+ elif isinstance(node, _XdrOptionalData):
+ raise NotImplementedError("<optional_data> typedef not yet implemented")
+ elif isinstance(node, _XdrVoid):
+ raise NotImplementedError("<void> typedef not yet implemented")
+ else:
+ raise NotImplementedError("typedef: type not recognized")
+
+
+def emit_typedef_maxsize(environment: Environment, node: _XdrDeclaration) -> None:
+ """Emit a maxsize macro for an XDR typedef"""
+ macro_name = get_header_name().upper() + "_" + node.name + "_sz"
+ template = get_jinja2_template(environment, "maxsize", node.template)
+ print(
+ template.render(
+ macro=macro_name,
+ width=" + ".join(node.symbolic_width()),
+ )
+ )
+
+
+class XdrTypedefGenerator(SourceGenerator):
+ """Generate source code for XDR typedefs"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "typedef")
+ self.peer = peer
+
+ def emit_declaration(self, node: _XdrTypedef) -> None:
+        """Emit one declaration pair for an XDR typedef"""
+ emit_typedef_declaration(self.environment, node.declaration)
+
+ def emit_definition(self, node: _XdrTypedef) -> None:
+ """Emit one definition for an XDR typedef"""
+ emit_type_definition(self.environment, node.declaration)
+
+ def emit_decoder(self, node: _XdrTypedef) -> None:
+ """Emit one decoder function for an XDR typedef"""
+ emit_typedef_decoder(self.environment, node.declaration)
+
+ def emit_encoder(self, node: _XdrTypedef) -> None:
+ """Emit one encoder function for an XDR typedef"""
+ emit_typedef_encoder(self.environment, node.declaration)
+
+ def emit_maxsize(self, node: _XdrTypedef) -> None:
+ """Emit one maxsize macro for an XDR typedef"""
+ emit_typedef_maxsize(self.environment, node.declaration)
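
The typedef generator above, like the struct and union generators, follows one pattern: select a Jinja2 template by role ("declaration", "definition", "decoder", "encoder", "maxsize") and template name, then render it with a few fields pulled from the AST node. The sketch below is not part of the patch; it imitates that flow with a toy in-memory DictLoader and an invented template body, whereas the real tool loads templates/C/... through create_jinja2_environment().

# Illustrative only: a toy stand-in for the template-render flow used by
# the generators in this patch.  The template text and name below are
# invented for the example, not taken from templates/C/.
from jinja2 import Environment, DictLoader

toy_templates = {
    "decoder/basic": (
        "bool\n"
        "xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ name }} *ptr)\n"
        "{\n"
        "\treturn xdrgen_decode_{{ type }}(xdr, ptr);\n"
        "}\n"
    ),
}

environment = Environment(loader=DictLoader(toy_templates))
template = environment.get_template("decoder/basic")

# emit_typedef_decoder() renders an _XdrBasic node with exactly these fields.
print(template.render(name="nfs_cookie4", type="uint64_t"))
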
diff --git a/tools/net/sunrpc/xdrgen/generators/union.py b/tools/net/sunrpc/xdrgen/generators/union.py
new file mode 100644
index 000000000000..2cca00e279cd
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/generators/union.py
@@ -0,0 +1,275 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Generate code to handle XDR unions"""
+
+from jinja2 import Environment
+
+from generators import SourceGenerator
+from generators import create_jinja2_environment, get_jinja2_template
+
+from xdr_ast import _XdrBasic, _XdrUnion, _XdrVoid, get_header_name
+from xdr_ast import _XdrDeclaration, _XdrCaseSpec, public_apis, big_endian
+
+
+def emit_union_declaration(environment: Environment, node: _XdrUnion) -> None:
+ """Emit one declaration pair for an XDR union type"""
+ if node.name in public_apis:
+ template = get_jinja2_template(environment, "declaration", "close")
+ print(template.render(name=node.name))
+
+
+def emit_union_switch_spec_definition(
+ environment: Environment, node: _XdrDeclaration
+) -> None:
+ """Emit a definition for an XDR union's discriminant"""
+ assert isinstance(node, _XdrBasic)
+ template = get_jinja2_template(environment, "definition", "switch_spec")
+ print(
+ template.render(
+ name=node.name,
+ type=node.spec.type_name,
+ classifier=node.spec.c_classifier,
+ )
+ )
+
+
+def emit_union_case_spec_definition(
+ environment: Environment, node: _XdrDeclaration
+) -> None:
+ """Emit a definition for an XDR union's case arm"""
+ if isinstance(node.arm, _XdrVoid):
+ return
+ assert isinstance(node.arm, _XdrBasic)
+ template = get_jinja2_template(environment, "definition", "case_spec")
+ print(
+ template.render(
+ name=node.arm.name,
+ type=node.arm.spec.type_name,
+ classifier=node.arm.spec.c_classifier,
+ )
+ )
+
+
+def emit_union_definition(environment: Environment, node: _XdrUnion) -> None:
+ """Emit one XDR union definition"""
+ template = get_jinja2_template(environment, "definition", "open")
+ print(template.render(name=node.name))
+
+ emit_union_switch_spec_definition(environment, node.discriminant)
+
+ for case in node.cases:
+ emit_union_case_spec_definition(environment, case)
+
+ if node.default is not None:
+ emit_union_case_spec_definition(environment, node.default)
+
+ template = get_jinja2_template(environment, "definition", "close")
+ print(template.render(name=node.name))
+
+
+def emit_union_switch_spec_decoder(
+ environment: Environment, node: _XdrDeclaration
+) -> None:
+ """Emit a decoder for an XDR union's discriminant"""
+ assert isinstance(node, _XdrBasic)
+ template = get_jinja2_template(environment, "decoder", "switch_spec")
+ print(template.render(name=node.name, type=node.spec.type_name))
+
+
+def emit_union_case_spec_decoder(
+ environment: Environment, node: _XdrCaseSpec, big_endian_discriminant: bool
+) -> None:
+ """Emit decoder functions for an XDR union's case arm"""
+
+ if isinstance(node.arm, _XdrVoid):
+ return
+
+ if big_endian_discriminant:
+ template = get_jinja2_template(environment, "decoder", "case_spec_be")
+ else:
+ template = get_jinja2_template(environment, "decoder", "case_spec")
+ for case in node.values:
+ print(template.render(case=case))
+
+ assert isinstance(node.arm, _XdrBasic)
+ template = get_jinja2_template(environment, "decoder", node.arm.template)
+ print(
+ template.render(
+ name=node.arm.name,
+ type=node.arm.spec.type_name,
+ classifier=node.arm.spec.c_classifier,
+ )
+ )
+
+ template = get_jinja2_template(environment, "decoder", "break")
+ print(template.render())
+
+
+def emit_union_default_spec_decoder(environment: Environment, node: _XdrUnion) -> None:
+ """Emit a decoder function for an XDR union's default arm"""
+ default_case = node.default
+
+ # Avoid a gcc warning about a default case with boolean discriminant
+ if default_case is None and node.discriminant.spec.type_name == "bool":
+ return
+
+ template = get_jinja2_template(environment, "decoder", "default_spec")
+ print(template.render())
+
+ if default_case is None or isinstance(default_case.arm, _XdrVoid):
+ template = get_jinja2_template(environment, "decoder", "break")
+ print(template.render())
+ return
+
+ assert isinstance(default_case.arm, _XdrBasic)
+ template = get_jinja2_template(environment, "decoder", default_case.arm.template)
+ print(
+ template.render(
+ name=default_case.arm.name,
+ type=default_case.arm.spec.type_name,
+ classifier=default_case.arm.spec.c_classifier,
+ )
+ )
+
+
+def emit_union_decoder(environment: Environment, node: _XdrUnion) -> None:
+ """Emit one XDR union decoder"""
+ template = get_jinja2_template(environment, "decoder", "open")
+ print(template.render(name=node.name))
+
+ emit_union_switch_spec_decoder(environment, node.discriminant)
+
+ for case in node.cases:
+ emit_union_case_spec_decoder(
+ environment,
+ case,
+ node.discriminant.spec.type_name in big_endian,
+ )
+
+ emit_union_default_spec_decoder(environment, node)
+
+ template = get_jinja2_template(environment, "decoder", "close")
+ print(template.render())
+
+
+def emit_union_switch_spec_encoder(
+ environment: Environment, node: _XdrDeclaration
+) -> None:
+ """Emit an encoder for an XDR union's discriminant"""
+ assert isinstance(node, _XdrBasic)
+ template = get_jinja2_template(environment, "encoder", "switch_spec")
+ print(template.render(name=node.name, type=node.spec.type_name))
+
+
+def emit_union_case_spec_encoder(
+ environment: Environment, node: _XdrCaseSpec, big_endian_discriminant: bool
+) -> None:
+ """Emit encoder functions for an XDR union's case arm"""
+
+ if isinstance(node.arm, _XdrVoid):
+ return
+
+ if big_endian_discriminant:
+ template = get_jinja2_template(environment, "encoder", "case_spec_be")
+ else:
+ template = get_jinja2_template(environment, "encoder", "case_spec")
+ for case in node.values:
+ print(template.render(case=case))
+
+ template = get_jinja2_template(environment, "encoder", node.arm.template)
+ print(
+ template.render(
+ name=node.arm.name,
+ type=node.arm.spec.type_name,
+ )
+ )
+
+ template = get_jinja2_template(environment, "encoder", "break")
+ print(template.render())
+
+
+def emit_union_default_spec_encoder(environment: Environment, node: _XdrUnion) -> None:
+ """Emit an encoder function for an XDR union's default arm"""
+ default_case = node.default
+
+ # Avoid a gcc warning about a default case with boolean discriminant
+ if default_case is None and node.discriminant.spec.type_name == "bool":
+ return
+
+ template = get_jinja2_template(environment, "encoder", "default_spec")
+ print(template.render())
+
+ if default_case is None or isinstance(default_case.arm, _XdrVoid):
+ template = get_jinja2_template(environment, "encoder", "break")
+ print(template.render())
+ return
+
+ template = get_jinja2_template(environment, "encoder", default_case.arm.template)
+ print(
+ template.render(
+ name=default_case.arm.name,
+ type=default_case.arm.spec.type_name,
+ )
+ )
+
+
+def emit_union_encoder(environment, node: _XdrUnion) -> None:
+ """Emit one XDR union encoder"""
+ template = get_jinja2_template(environment, "encoder", "open")
+ print(template.render(name=node.name))
+
+ emit_union_switch_spec_encoder(environment, node.discriminant)
+
+ for case in node.cases:
+ emit_union_case_spec_encoder(
+ environment,
+ case,
+ node.discriminant.spec.type_name in big_endian,
+ )
+
+ emit_union_default_spec_encoder(environment, node)
+
+ template = get_jinja2_template(environment, "encoder", "close")
+ print(template.render())
+
+
+def emit_union_maxsize(environment: Environment, node: _XdrUnion) -> None:
+ """Emit one maxsize macro for an XDR union type"""
+ macro_name = get_header_name().upper() + "_" + node.name + "_sz"
+ template = get_jinja2_template(environment, "maxsize", "union")
+ print(
+ template.render(
+ macro=macro_name,
+ width=" + ".join(node.symbolic_width()),
+ )
+ )
+
+
+class XdrUnionGenerator(SourceGenerator):
+ """Generate source code for XDR unions"""
+
+ def __init__(self, language: str, peer: str):
+ """Initialize an instance of this class"""
+ self.environment = create_jinja2_environment(language, "union")
+ self.peer = peer
+
+ def emit_declaration(self, node: _XdrUnion) -> None:
+ """Emit one declaration pair for an XDR union"""
+ emit_union_declaration(self.environment, node)
+
+ def emit_definition(self, node: _XdrUnion) -> None:
+ """Emit one definition for an XDR union"""
+ emit_union_definition(self.environment, node)
+
+ def emit_decoder(self, node: _XdrUnion) -> None:
+ """Emit one decoder function for an XDR union"""
+ emit_union_decoder(self.environment, node)
+
+ def emit_encoder(self, node: _XdrUnion) -> None:
+ """Emit one encoder function for an XDR union"""
+ emit_union_encoder(self.environment, node)
+
+ def emit_maxsize(self, node: _XdrUnion) -> None:
+ """Emit one maxsize macro for an XDR union"""
+ emit_union_maxsize(self.environment, node)
diff --git a/tools/net/sunrpc/xdrgen/grammars/xdr.lark b/tools/net/sunrpc/xdrgen/grammars/xdr.lark
new file mode 100644
index 000000000000..7c2c1b8c86d1
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/grammars/xdr.lark
@@ -0,0 +1,121 @@
+// A Lark grammar for the XDR specification language based on
+// https://tools.ietf.org/html/rfc4506 Section 6.3
+
+declaration : "opaque" identifier "[" value "]" -> fixed_length_opaque
+ | "opaque" identifier "<" [ value ] ">" -> variable_length_opaque
+ | "string" identifier "<" [ value ] ">" -> string
+ | type_specifier identifier "[" value "]" -> fixed_length_array
+ | type_specifier identifier "<" [ value ] ">" -> variable_length_array
+ | type_specifier "*" identifier -> optional_data
+ | type_specifier identifier -> basic
+ | "void" -> void
+
+value : decimal_constant
+ | hexadecimal_constant
+ | octal_constant
+ | identifier
+
+constant : decimal_constant | hexadecimal_constant | octal_constant
+
+type_specifier : unsigned_hyper
+ | unsigned_long
+ | unsigned_int
+ | hyper
+ | long
+ | int
+ | float
+ | double
+ | quadruple
+ | bool
+ | enum_type_spec
+ | struct_type_spec
+ | union_type_spec
+ | identifier
+
+unsigned_hyper : "unsigned" "hyper"
+unsigned_long : "unsigned" "long"
+unsigned_int : "unsigned" "int"
+hyper : "hyper"
+long : "long"
+int : "int"
+float : "float"
+double : "double"
+quadruple : "quadruple"
+bool : "bool"
+
+enum_type_spec : "enum" enum_body
+
+enum_body : "{" ( identifier "=" value ) ( "," identifier "=" value )* "}"
+
+struct_type_spec : "struct" struct_body
+
+struct_body : "{" ( declaration ";" )+ "}"
+
+union_type_spec : "union" union_body
+
+union_body : switch_spec "{" case_spec+ [ default_spec ] "}"
+
+switch_spec : "switch" "(" declaration ")"
+
+case_spec : ( "case" value ":" )+ declaration ";"
+
+default_spec : "default" ":" declaration ";"
+
+constant_def : "const" identifier "=" value ";"
+
+type_def : "typedef" declaration ";" -> typedef
+ | "enum" identifier enum_body ";" -> enum
+ | "struct" identifier struct_body ";" -> struct
+ | "union" identifier union_body ";" -> union
+
+specification : definition*
+
+definition : constant_def
+ | type_def
+ | program_def
+ | pragma_def
+
+//
+// RPC program definitions not specified in RFC 4506
+//
+
+program_def : "program" identifier "{" version_def+ "}" "=" constant ";"
+
+version_def : "version" identifier "{" procedure_def+ "}" "=" constant ";"
+
+procedure_def : type_specifier identifier "(" type_specifier ")" "=" constant ";"
+
+pragma_def : "pragma" directive identifier [ identifier ] ";"
+
+directive : big_endian_directive
+ | exclude_directive
+ | header_directive
+ | pages_directive
+ | public_directive
+ | skip_directive
+
+big_endian_directive : "big_endian"
+exclude_directive : "exclude"
+header_directive : "header"
+pages_directive : "pages"
+public_directive : "public"
+skip_directive : "skip"
+
+//
+// XDR language primitives
+//
+
+identifier : /([a-z]|[A-Z])(_|[a-z]|[A-Z]|[0-9])*/
+
+decimal_constant : /[\+-]?(0|[1-9][0-9]*)/
+hexadecimal_constant : /0x([a-f]|[A-F]|[0-9])+/
+octal_constant : /0[0-7]+/
+
+PASSTHRU : "%" | "%" /.+/
+%ignore PASSTHRU
+
+%import common.C_COMMENT
+%ignore C_COMMENT
+
+%import common.WS
+%ignore WS
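
The grammar file is self-contained, so it can be exercised directly from Python. A minimal sketch, assuming the lark package is installed and that the tool's parser (xdr_parse.py, not shown in this hunk) uses "specification" as the start rule; the XDR fragment and the file path are illustrative only:

# Illustrative only: feed the grammar above to Lark and parse a small
# XDR fragment.  The start rule and path are assumptions, not taken
# from xdr_parse.py.
from lark import Lark

with open("tools/net/sunrpc/xdrgen/grammars/xdr.lark", encoding="utf-8") as f:
    xdr_parser = Lark(f.read(), start="specification")

fragment = """
const NFS4_FHSIZE = 128;
typedef opaque nfs_fh4<NFS4_FHSIZE>;
"""

print(xdr_parser.parse(fragment).pretty())
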
diff --git a/tools/net/sunrpc/xdrgen/subcmds/__init__.py b/tools/net/sunrpc/xdrgen/subcmds/__init__.py
new file mode 100644
index 000000000000..c940e9275252
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/subcmds/__init__.py
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+# Just to make sphinx-apidoc document this directory
diff --git a/tools/net/sunrpc/xdrgen/subcmds/declarations.py b/tools/net/sunrpc/xdrgen/subcmds/declarations.py
new file mode 100644
index 000000000000..c5e8d79986ef
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/subcmds/declarations.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Translate an XDR specification into executable code that
+can be compiled for the Linux kernel."""
+
+import logging
+
+from argparse import Namespace
+from lark import logger
+from lark.exceptions import UnexpectedInput
+
+from generators.constant import XdrConstantGenerator
+from generators.enum import XdrEnumGenerator
+from generators.header_bottom import XdrHeaderBottomGenerator
+from generators.header_top import XdrHeaderTopGenerator
+from generators.pointer import XdrPointerGenerator
+from generators.program import XdrProgramGenerator
+from generators.typedef import XdrTypedefGenerator
+from generators.struct import XdrStructGenerator
+from generators.union import XdrUnionGenerator
+
+from xdr_ast import transform_parse_tree, _RpcProgram, Specification
+from xdr_ast import _XdrConstant, _XdrEnum, _XdrPointer
+from xdr_ast import _XdrTypedef, _XdrStruct, _XdrUnion
+from xdr_parse import xdr_parser, set_xdr_annotate
+
+logger.setLevel(logging.INFO)
+
+
+def emit_header_declarations(
+ root: Specification, language: str, peer: str
+) -> None:
+ """Emit header declarations"""
+ for definition in root.definitions:
+ if isinstance(definition.value, _XdrEnum):
+ gen = XdrEnumGenerator(language, peer)
+ elif isinstance(definition.value, _XdrPointer):
+ gen = XdrPointerGenerator(language, peer)
+ elif isinstance(definition.value, _XdrTypedef):
+ gen = XdrTypedefGenerator(language, peer)
+ elif isinstance(definition.value, _XdrStruct):
+ gen = XdrStructGenerator(language, peer)
+ elif isinstance(definition.value, _XdrUnion):
+ gen = XdrUnionGenerator(language, peer)
+ elif isinstance(definition.value, _RpcProgram):
+ gen = XdrProgramGenerator(language, peer)
+ else:
+ continue
+ gen.emit_declaration(definition.value)
+
+
+def handle_parse_error(e: UnexpectedInput) -> bool:
+ """Simple parse error reporting, no recovery attempted"""
+ print(e)
+ return True
+
+
+def subcmd(args: Namespace) -> int:
+ """Generate definitions and declarations"""
+
+ set_xdr_annotate(args.annotate)
+ parser = xdr_parser()
+ with open(args.filename, encoding="utf-8") as f:
+ parse_tree = parser.parse(f.read(), on_error=handle_parse_error)
+ ast = transform_parse_tree(parse_tree)
+
+ gen = XdrHeaderTopGenerator(args.language, args.peer)
+ gen.emit_declaration(args.filename, ast)
+
+ emit_header_declarations(ast, args.language, args.peer)
+
+ gen = XdrHeaderBottomGenerator(args.language, args.peer)
+ gen.emit_declaration(args.filename, ast)
+
+ return 0
diff --git a/tools/net/sunrpc/xdrgen/subcmds/definitions.py b/tools/net/sunrpc/xdrgen/subcmds/definitions.py
new file mode 100644
index 000000000000..c956e27f37c0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/subcmds/definitions.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Translate an XDR specification into executable code that
+can be compiled for the Linux kernel."""
+
+import logging
+
+from argparse import Namespace
+from lark import logger
+from lark.exceptions import UnexpectedInput
+
+from generators.constant import XdrConstantGenerator
+from generators.enum import XdrEnumGenerator
+from generators.header_bottom import XdrHeaderBottomGenerator
+from generators.header_top import XdrHeaderTopGenerator
+from generators.pointer import XdrPointerGenerator
+from generators.program import XdrProgramGenerator
+from generators.typedef import XdrTypedefGenerator
+from generators.struct import XdrStructGenerator
+from generators.union import XdrUnionGenerator
+
+from xdr_ast import transform_parse_tree, Specification
+from xdr_ast import _RpcProgram, _XdrConstant, _XdrEnum, _XdrPointer
+from xdr_ast import _XdrTypedef, _XdrStruct, _XdrUnion
+from xdr_parse import xdr_parser, set_xdr_annotate
+
+logger.setLevel(logging.INFO)
+
+
+def emit_header_definitions(root: Specification, language: str, peer: str) -> None:
+ """Emit header definitions"""
+ for definition in root.definitions:
+ if isinstance(definition.value, _XdrConstant):
+ gen = XdrConstantGenerator(language, peer)
+ elif isinstance(definition.value, _XdrEnum):
+ gen = XdrEnumGenerator(language, peer)
+ elif isinstance(definition.value, _XdrPointer):
+ gen = XdrPointerGenerator(language, peer)
+ elif isinstance(definition.value, _RpcProgram):
+ gen = XdrProgramGenerator(language, peer)
+ elif isinstance(definition.value, _XdrTypedef):
+ gen = XdrTypedefGenerator(language, peer)
+ elif isinstance(definition.value, _XdrStruct):
+ gen = XdrStructGenerator(language, peer)
+ elif isinstance(definition.value, _XdrUnion):
+ gen = XdrUnionGenerator(language, peer)
+ else:
+ continue
+ gen.emit_definition(definition.value)
+
+
+def emit_header_maxsize(root: Specification, language: str, peer: str) -> None:
+ """Emit header maxsize macros"""
+ print("")
+ for definition in root.definitions:
+ if isinstance(definition.value, _XdrEnum):
+ gen = XdrEnumGenerator(language, peer)
+ elif isinstance(definition.value, _XdrPointer):
+ gen = XdrPointerGenerator(language, peer)
+ elif isinstance(definition.value, _XdrTypedef):
+ gen = XdrTypedefGenerator(language, peer)
+ elif isinstance(definition.value, _XdrStruct):
+ gen = XdrStructGenerator(language, peer)
+ elif isinstance(definition.value, _XdrUnion):
+ gen = XdrUnionGenerator(language, peer)
+ else:
+ continue
+ gen.emit_maxsize(definition.value)
+
+
+def handle_parse_error(e: UnexpectedInput) -> bool:
+ """Simple parse error reporting, no recovery attempted"""
+ print(e)
+ return True
+
+
+def subcmd(args: Namespace) -> int:
+ """Generate definitions"""
+
+ set_xdr_annotate(args.annotate)
+ parser = xdr_parser()
+ with open(args.filename, encoding="utf-8") as f:
+ parse_tree = parser.parse(f.read(), on_error=handle_parse_error)
+ ast = transform_parse_tree(parse_tree)
+
+ gen = XdrHeaderTopGenerator(args.language, args.peer)
+ gen.emit_definition(args.filename, ast)
+
+ emit_header_definitions(ast, args.language, args.peer)
+ emit_header_maxsize(ast, args.language, args.peer)
+
+ gen = XdrHeaderBottomGenerator(args.language, args.peer)
+ gen.emit_definition(args.filename, ast)
+
+ return 0
diff --git a/tools/net/sunrpc/xdrgen/subcmds/lint.py b/tools/net/sunrpc/xdrgen/subcmds/lint.py
new file mode 100644
index 000000000000..36cc43717d30
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/subcmds/lint.py
@@ -0,0 +1,33 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Translate an XDR specification into executable code that
+can be compiled for the Linux kernel."""
+
+import logging
+
+from argparse import Namespace
+from lark import logger
+from lark.exceptions import UnexpectedInput
+
+from xdr_parse import xdr_parser
+from xdr_ast import transform_parse_tree
+
+logger.setLevel(logging.DEBUG)
+
+
+def handle_parse_error(e: UnexpectedInput) -> bool:
+ """Simple parse error reporting, no recovery attempted"""
+ print(e)
+ return True
+
+
+def subcmd(args: Namespace) -> int:
+ """Lexical and syntax check of an XDR specification"""
+
+ parser = xdr_parser()
+ with open(args.filename, encoding="utf-8") as f:
+ parse_tree = parser.parse(f.read(), on_error=handle_parse_error)
+ transform_parse_tree(parse_tree)
+
+ return 0
diff --git a/tools/net/sunrpc/xdrgen/subcmds/source.py b/tools/net/sunrpc/xdrgen/subcmds/source.py
new file mode 100644
index 000000000000..2024954748f0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/subcmds/source.py
@@ -0,0 +1,117 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Translate an XDR specification into executable code that
+can be compiled for the Linux kernel."""
+
+import logging
+
+from argparse import Namespace
+from lark import logger
+from lark.exceptions import UnexpectedInput
+
+from generators.source_top import XdrSourceTopGenerator
+from generators.enum import XdrEnumGenerator
+from generators.pointer import XdrPointerGenerator
+from generators.program import XdrProgramGenerator
+from generators.typedef import XdrTypedefGenerator
+from generators.struct import XdrStructGenerator
+from generators.union import XdrUnionGenerator
+
+from xdr_ast import transform_parse_tree, _RpcProgram, Specification
+from xdr_ast import _XdrAst, _XdrEnum, _XdrPointer
+from xdr_ast import _XdrStruct, _XdrTypedef, _XdrUnion
+
+from xdr_parse import xdr_parser, set_xdr_annotate
+
+logger.setLevel(logging.INFO)
+
+
+def emit_source_decoder(node: _XdrAst, language: str, peer: str) -> None:
+ """Emit one XDR decoder function for a source file"""
+ if isinstance(node, _XdrEnum):
+ gen = XdrEnumGenerator(language, peer)
+ elif isinstance(node, _XdrPointer):
+ gen = XdrPointerGenerator(language, peer)
+ elif isinstance(node, _XdrTypedef):
+ gen = XdrTypedefGenerator(language, peer)
+ elif isinstance(node, _XdrStruct):
+ gen = XdrStructGenerator(language, peer)
+ elif isinstance(node, _XdrUnion):
+ gen = XdrUnionGenerator(language, peer)
+ elif isinstance(node, _RpcProgram):
+ gen = XdrProgramGenerator(language, peer)
+ else:
+ return
+ gen.emit_decoder(node)
+
+
+def emit_source_encoder(node: _XdrAst, language: str, peer: str) -> None:
+ """Emit one XDR encoder function for a source file"""
+ if isinstance(node, _XdrEnum):
+ gen = XdrEnumGenerator(language, peer)
+ elif isinstance(node, _XdrPointer):
+ gen = XdrPointerGenerator(language, peer)
+ elif isinstance(node, _XdrTypedef):
+ gen = XdrTypedefGenerator(language, peer)
+ elif isinstance(node, _XdrStruct):
+ gen = XdrStructGenerator(language, peer)
+ elif isinstance(node, _XdrUnion):
+ gen = XdrUnionGenerator(language, peer)
+ elif isinstance(node, _RpcProgram):
+ gen = XdrProgramGenerator(language, peer)
+ else:
+ return
+ gen.emit_encoder(node)
+
+
+def generate_server_source(filename: str, root: Specification, language: str) -> None:
+ """Generate server-side source code"""
+
+ gen = XdrSourceTopGenerator(language, "server")
+ gen.emit_source(filename, root)
+
+ for definition in root.definitions:
+ emit_source_decoder(definition.value, language, "server")
+ for definition in root.definitions:
+ emit_source_encoder(definition.value, language, "server")
+
+
+def generate_client_source(filename: str, root: Specification, language: str) -> None:
+    """Generate client-side source code"""
+
+ gen = XdrSourceTopGenerator(language, "client")
+ gen.emit_source(filename, root)
+
+ print("")
+ for definition in root.definitions:
+ emit_source_encoder(definition.value, language, "client")
+ for definition in root.definitions:
+ emit_source_decoder(definition.value, language, "client")
+
+ # cel: todo: client needs PROC macros
+
+
+def handle_parse_error(e: UnexpectedInput) -> bool:
+ """Simple parse error reporting, no recovery attempted"""
+ print(e)
+ return True
+
+
+def subcmd(args: Namespace) -> int:
+ """Generate encoder and decoder functions"""
+
+ set_xdr_annotate(args.annotate)
+ parser = xdr_parser()
+ with open(args.filename, encoding="utf-8") as f:
+ parse_tree = parser.parse(f.read(), on_error=handle_parse_error)
+ ast = transform_parse_tree(parse_tree)
+ match args.peer:
+ case "server":
+ generate_server_source(args.filename, ast, args.language)
+ case "client":
+ generate_client_source(args.filename, ast, args.language)
+ case _:
+ print("Code generation for", args.peer, "is not yet supported")
+
+ return 0
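
Each subcommand module exposes a subcmd(args: Namespace) entry point that reads args.filename, args.language, args.peer, and args.annotate. The front-end script that builds that Namespace is not part of this hunk; the sketch below is a hypothetical argparse wiring, with option names inferred from those attributes rather than taken from the real xdrgen command line.

# Illustrative only: a hypothetical front end for the subcommands above.
# Option and subcommand names are inferred from the Namespace attributes
# the subcmd() functions read; the real xdrgen script may differ.
import argparse
import sys

from subcmds import declarations, definitions, lint, source

def main() -> int:
    parser = argparse.ArgumentParser(prog="xdrgen")
    parser.add_argument("--annotate", action="store_true")
    parser.add_argument("--language", default="C")
    parser.add_argument("--peer", default="server", choices=("server", "client"))
    parser.add_argument("subcommand",
                        choices=("declarations", "definitions", "lint", "source"))
    parser.add_argument("filename")
    args = parser.parse_args()

    handlers = {
        "declarations": declarations.subcmd,
        "definitions": definitions.subcmd,
        "lint": lint.subcmd,
        "source": source.subcmd,
    }
    return handlers[args.subcommand](args)

if __name__ == "__main__":
    sys.exit(main())
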
diff --git a/tools/net/sunrpc/xdrgen/templates/C/constants/definition.j2 b/tools/net/sunrpc/xdrgen/templates/C/constants/definition.j2
new file mode 100644
index 000000000000..d648ca4193f8
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/constants/definition.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+enum { {{ name }} = {{ value }} };
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/declaration/enum.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/declaration/enum.j2
new file mode 100644
index 000000000000..d1405c7c5354
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/declaration/enum.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, {{ name }} value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/decoder/enum.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/decoder/enum.j2
new file mode 100644
index 000000000000..6482984f1cb7
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/decoder/enum.j2
@@ -0,0 +1,19 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* enum {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ name }} *ptr)
+{
+ u32 val;
+
+ if (xdr_stream_decode_u32(xdr, &val) < 0)
+ return false;
+ *ptr = val;
+ return true;
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/decoder/enum_be.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/decoder/enum_be.j2
new file mode 100644
index 000000000000..44c391c10b42
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/decoder/enum_be.j2
@@ -0,0 +1,14 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* enum {{ name }} (big-endian) */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ name }} *ptr)
+{
+ return xdr_stream_decode_be32(xdr, ptr) == 0;
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/definition/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/close.j2
new file mode 100644
index 000000000000..a07586cbee17
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/close.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+};
+typedef enum {{ name }} {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/definition/close_be.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/close_be.j2
new file mode 100644
index 000000000000..2c18948bddf7
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/close_be.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+};
+typedef __be32 {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/definition/enumerator.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/enumerator.j2
new file mode 100644
index 000000000000..ff0b893b8b14
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/enumerator.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ {{ name }} = {{ value }},
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/definition/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/open.j2
new file mode 100644
index 000000000000..b25335221d48
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/definition/open.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+enum {{ name }} {
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/encoder/enum.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/encoder/enum.j2
new file mode 100644
index 000000000000..67245b9a914d
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/encoder/enum.j2
@@ -0,0 +1,14 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* enum {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, {{ name }} value)
+{
+ return xdr_stream_encode_u32(xdr, value) == XDR_UNIT;
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/encoder/enum_be.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/encoder/enum_be.j2
new file mode 100644
index 000000000000..fbbcc45948d6
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/encoder/enum_be.j2
@@ -0,0 +1,14 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* enum {{ name }} (big-endian) */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, {{ name }} value)
+{
+ return xdr_stream_encode_be32(xdr, value) == XDR_UNIT;
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/enum/maxsize/enum.j2 b/tools/net/sunrpc/xdrgen/templates/C/enum/maxsize/enum.j2
new file mode 100644
index 000000000000..45c1d4c21b22
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/enum/maxsize/enum.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} ({{ width }})
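
Aside, not part of the patch: the maxsize templates rely on Python's str.format inside the Jinja2 expression to left-align the macro name in a 31-column field, which keeps the generated #define table readable. Equivalent plain Python, with an invented macro name:

# Illustrative only: what '{:<31}'.format(macro) does in the template above.
macro = "EXAMPLE_PROG_example_type_sz"   # invented name for the example
width = "XDR_UNIT + 128"
print("#define {:<31} ({})".format(macro, width))
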
diff --git a/tools/net/sunrpc/xdrgen/templates/C/header_bottom/declaration/header.j2 b/tools/net/sunrpc/xdrgen/templates/C/header_bottom/declaration/header.j2
new file mode 100644
index 000000000000..0bb8c6fc0c20
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/header_bottom/declaration/header.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+#endif /* _LINUX_XDRGEN_{{ infix }}_DECL_H */
diff --git a/tools/net/sunrpc/xdrgen/templates/C/header_bottom/definition/header.j2 b/tools/net/sunrpc/xdrgen/templates/C/header_bottom/definition/header.j2
new file mode 100644
index 000000000000..69069d08dc91
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/header_bottom/definition/header.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+#endif /* _LINUX_XDRGEN_{{ infix }}_DEF_H */
diff --git a/tools/net/sunrpc/xdrgen/templates/C/header_top/declaration/header.j2 b/tools/net/sunrpc/xdrgen/templates/C/header_top/declaration/header.j2
new file mode 100644
index 000000000000..ebb4e1d32f85
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/header_top/declaration/header.j2
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Generated by xdrgen. Manual edits will be lost. */
+/* XDR specification file: {{ filename }} */
+/* XDR specification modification time: {{ mtime }} */
+
+#ifndef _LINUX_XDRGEN_{{ infix }}_DECL_H
+#define _LINUX_XDRGEN_{{ infix }}_DECL_H
+
+#include <linux/types.h>
+
+#include <linux/sunrpc/xdr.h>
+#include <linux/sunrpc/xdrgen/_defs.h>
+#include <linux/sunrpc/xdrgen/_builtins.h>
+#include <linux/sunrpc/xdrgen/{{ infix.lower() }}.h>
diff --git a/tools/net/sunrpc/xdrgen/templates/C/header_top/definition/header.j2 b/tools/net/sunrpc/xdrgen/templates/C/header_top/definition/header.j2
new file mode 100644
index 000000000000..92f1fd4ba024
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/header_top/definition/header.j2
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Generated by xdrgen. Manual edits will be lost. */
+/* XDR specification file: {{ filename }} */
+/* XDR specification modification time: {{ mtime }} */
+
+#ifndef _LINUX_XDRGEN_{{ infix }}_DEF_H
+#define _LINUX_XDRGEN_{{ infix }}_DEF_H
+
+#include <linux/types.h>
+#include <linux/sunrpc/xdrgen/_defs.h>
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/declaration/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/declaration/close.j2
new file mode 100644
index 000000000000..816291184e8c
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/declaration/close.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, struct {{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const struct {{ name }} *value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/basic.j2
new file mode 100644
index 000000000000..cde4ab53f4be
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/basic.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (basic) */
+{% endif %}
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/close.j2
new file mode 100644
index 000000000000..5bf010665f84
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/close.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/fixed_length_array.j2
new file mode 100644
index 000000000000..cfd64217ad82
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/fixed_length_array.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length array) */
+{% endif %}
+ for (u32 i = 0; i < {{ size }}; i++) {
+		if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}.items[i]))
+ return false;
+ }
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/fixed_length_opaque.j2
new file mode 100644
index 000000000000..b4695ece1884
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/fixed_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length opaque) */
+{% endif %}
+ if (xdr_stream_decode_opaque_fixed(xdr, ptr->{{ name }}, {{ size }}) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/open.j2
new file mode 100644
index 000000000000..c093d9e3c9ad
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/open.j2
@@ -0,0 +1,22 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* pointer {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, struct {{ name }} *ptr)
+{
+ bool opted;
+
+{% if annotate %}
+ /* opted */
+{% endif %}
+ if (!xdrgen_decode_bool(xdr, &opted))
+ return false;
+ if (!opted)
+ return true;
+
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/optional_data.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/optional_data.j2
new file mode 100644
index 000000000000..b6834299a04b
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/optional_data.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (optional data) */
+{% endif %}
+ if (!xdrgen_decode_{{ type }}(xdr, ptr->{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/string.j2
new file mode 100644
index 000000000000..12d20b143b43
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/string.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length string) */
+{% endif %}
+ if (!xdrgen_decode_string(xdr, (string *)ptr, {{ maxsize }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/variable_length_array.j2
new file mode 100644
index 000000000000..2f943909cdf7
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/variable_length_array.j2
@@ -0,0 +1,13 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length array) */
+{% endif %}
+ if (xdr_stream_decode_u32(xdr, &ptr->{{ name }}.count) < 0)
+ return false;
+{% if maxsize != "0" %}
+ if (ptr->{{ name }}.count > {{ maxsize }})
+ return false;
+{% endif %}
+ for (u32 i = 0; i < ptr->{{ name }}.count; i++)
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}.element[i]))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/variable_length_opaque.j2
new file mode 100644
index 000000000000..9a814de54ae8
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/decoder/variable_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length opaque) */
+{% endif %}
+ if (!xdrgen_decode_opaque(xdr, (opaque *)ptr, {{ maxsize }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/basic.j2
new file mode 100644
index 000000000000..b3430895f311
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/basic.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (basic) */
+{% endif %}
+ {{ classifier }}{{ type }} {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/close.j2
new file mode 100644
index 000000000000..9e62344a976a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/close.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/fixed_length_array.j2
new file mode 100644
index 000000000000..66be836826a0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/fixed_length_array.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (fixed-length array) */
+{% endif %}
+ {{ type }} {{ name }}[{{ size }}];
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/fixed_length_opaque.j2
new file mode 100644
index 000000000000..0daba19aa0f0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/fixed_length_opaque.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (fixed-length opaque) */
+{% endif %}
+ u8 {{ name }}[{{ size }}];
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/open.j2
new file mode 100644
index 000000000000..bc886b818d85
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/open.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* pointer {{ name }} */
+{% endif %}
+struct {{ name }} {
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/optional_data.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/optional_data.j2
new file mode 100644
index 000000000000..a33341f45e8f
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/optional_data.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (optional data) */
+{% endif %}
+ {{ classifier }}{{ type }} *{{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/string.j2
new file mode 100644
index 000000000000..2de2feec77db
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/string.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (variable-length string) */
+{% endif %}
+ string {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/variable_length_array.j2
new file mode 100644
index 000000000000..5d767f9b3674
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/variable_length_array.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (variable-length array) */
+{% endif %}
+ struct {
+ u32 count;
+ {{ classifier }}{{ type }} *element;
+ } {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/variable_length_opaque.j2
new file mode 100644
index 000000000000..4d0cd84be3db
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/definition/variable_length_opaque.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (variable-length opaque) */
+{% endif %}
+ opaque {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/basic.j2
new file mode 100644
index 000000000000..a7d3695c5a6a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/basic.j2
@@ -0,0 +1,10 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (basic) */
+{% endif %}
+{% if type in pass_by_reference %}
+ if (!xdrgen_encode_{{ type }}(xdr, &value->{{ name }}))
+{% else %}
+ if (!xdrgen_encode_{{ type }}(xdr, value->{{ name }}))
+{% endif %}
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/close.j2
new file mode 100644
index 000000000000..5bf010665f84
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/close.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/fixed_length_array.j2
new file mode 100644
index 000000000000..b01833a2c7a1
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/fixed_length_array.j2
@@ -0,0 +1,12 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length array) */
+{% endif %}
+ for (u32 i = 0; i < {{ size }}; i++) {
+{% if type in pass_by_reference %}
+		if (!xdrgen_encode_{{ type }}(xdr, &value->items[i]))
+{% else %}
+		if (!xdrgen_encode_{{ type }}(xdr, value->items[i]))
+{% endif %}
+ return false;
+ }
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/fixed_length_opaque.j2
new file mode 100644
index 000000000000..07bc91919898
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/fixed_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length opaque) */
+{% endif %}
+ if (xdr_stream_encode_opaque_fixed(xdr, value->{{ name }}, {{ size }}) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/open.j2
new file mode 100644
index 000000000000..d67fae200261
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/open.j2
@@ -0,0 +1,20 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* pointer {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const struct {{ name }} *value)
+{
+{% if annotate %}
+ /* opted */
+{% endif %}
+ if (!xdrgen_encode_bool(xdr, value != NULL))
+ return false;
+ if (!value)
+ return true;
+
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/optional_data.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/optional_data.j2
new file mode 100644
index 000000000000..16fb3e09bba1
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/optional_data.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (optional data) */
+{% endif %}
+ if (!xdrgen_encode_{{ type }}(xdr, value->{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/string.j2
new file mode 100644
index 000000000000..cf65b71eaef3
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/string.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length string) */
+{% endif %}
+ if (value->{{ name }}.len > {{ maxsize }})
+ return false;
+ if (xdr_stream_encode_opaque(xdr, value->{{ name }}.data, value->{{ name }}.len) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/variable_length_array.j2
new file mode 100644
index 000000000000..b21476629679
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/variable_length_array.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length array) */
+{% endif %}
+{% if maxsize != "0" %}
+ if (value->{{ name }}.count > {{ maxsize }})
+ return false;
+{% endif %}
+ if (xdr_stream_encode_u32(xdr, value->{{ name }}.count) != XDR_UNIT)
+ return false;
+ for (u32 i = 0; i < value->{{ name }}.count; i++)
+{% if type in pass_by_reference %}
+ if (!xdrgen_encode_{{ type }}(xdr, &value->{{ name }}.element[i]))
+{% else %}
+ if (!xdrgen_encode_{{ type }}(xdr, value->{{ name }}.element[i]))
+{% endif %}
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/variable_length_opaque.j2
new file mode 100644
index 000000000000..1d477c2d197a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/encoder/variable_length_opaque.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length opaque) */
+{% endif %}
+ if (value->{{ name }}.len > {{ maxsize }})
+ return false;
+ if (xdr_stream_encode_opaque(xdr, value->{{ name }}.data, value->{{ name }}.len) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/pointer/maxsize/pointer.j2 b/tools/net/sunrpc/xdrgen/templates/C/pointer/maxsize/pointer.j2
new file mode 100644
index 000000000000..9f3bfb47d2f4
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/pointer/maxsize/pointer.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} \
+ ({{ width }})
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/declaration/argument.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/declaration/argument.j2
new file mode 100644
index 000000000000..4364fed19162
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/declaration/argument.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+bool {{ program }}_svc_decode_{{ argument }}(struct svc_rqst *rqstp, struct xdr_stream *xdr);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/declaration/result.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/declaration/result.j2
new file mode 100644
index 000000000000..e0ea1e849910
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/declaration/result.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+bool {{ program }}_svc_encode_{{ result }}(struct svc_rqst *rqstp, struct xdr_stream *xdr);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/decoder/argument.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/decoder/argument.j2
new file mode 100644
index 000000000000..0b1709cca0d4
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/decoder/argument.j2
@@ -0,0 +1,21 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+/**
+ * {{ program }}_svc_decode_{{ argument }} - Decode a {{ argument }} argument
+ * @rqstp: RPC transaction context
+ * @xdr: source XDR data stream
+ *
+ * Return values:
+ * %true: procedure arguments decoded successfully
+ * %false: decode failed
+ */
+bool {{ program }}_svc_decode_{{ argument }}(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+{
+{% if argument == 'void' %}
+ return xdrgen_decode_void(xdr);
+{% else %}
+ struct {{ argument }} *argp = rqstp->rq_argp;
+
+ return xdrgen_decode_{{ argument }}(xdr, argp);
+{% endif %}
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/decoder/result.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/decoder/result.j2
new file mode 100644
index 000000000000..aa9940e322db
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/decoder/result.j2
@@ -0,0 +1,18 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* Decode {{ result }} results */
+{% endif %}
+static int {{ program }}_xdr_dec_{{ result }}(struct rpc_rqst *req,
+ struct xdr_stream *xdr, void *data)
+{
+{% if result == 'void' %}
+ xdrgen_decode_void(xdr);
+{% else %}
+ struct {{ result }} *result = data;
+
+ if (!xdrgen_decode_{{ result }}(xdr, result))
+ return -EIO;
+{% endif %}
+ return 0;
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/definition/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/definition/close.j2
new file mode 100644
index 000000000000..9e62344a976a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/definition/close.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/definition/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/definition/open.j2
new file mode 100644
index 000000000000..f9a6d439f156
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/definition/open.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* procedure numbers for {{ program }} */
+{% endif %}
+enum {
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/definition/procedure.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/definition/procedure.j2
new file mode 100644
index 000000000000..ff0b893b8b14
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/definition/procedure.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ {{ name }} = {{ value }},
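
Together with definition/open.j2 and definition/close.j2, this emits the procedure-number enum for the program. A minimal sketch, assuming a hypothetical program with two procedures:

    /* procedure numbers for foo */
    enum {
            FOOPROC_NULL = 0,
            FOOPROC_READ = 1,
    };
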
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/encoder/argument.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/encoder/argument.j2
new file mode 100644
index 000000000000..2fbb5bd13aec
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/encoder/argument.j2
@@ -0,0 +1,16 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* Encode {{ argument }} arguments */
+{% endif %}
+static void {{ program }}_xdr_enc_{{ argument }}(struct rpc_rqst *req,
+ struct xdr_stream *xdr, const void *data)
+{
+{% if argument == 'void' %}
+ xdrgen_encode_void(xdr);
+{% else %}
+ const struct {{ argument }} *args = data;
+
+ xdrgen_encode_{{ argument }}(xdr, args);
+{% endif %}
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/encoder/result.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/encoder/result.j2
new file mode 100644
index 000000000000..6fc61a5d47b7
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/encoder/result.j2
@@ -0,0 +1,21 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+/**
+ * {{ program }}_svc_encode_{{ result }} - Encode a {{ result }} result
+ * @rqstp: RPC transaction context
+ * @xdr: target XDR data stream
+ *
+ * Return values:
+ * %true: procedure results encoded successfully
+ * %false: encode failed
+ */
+bool {{ program }}_svc_encode_{{ result }}(struct svc_rqst *rqstp, struct xdr_stream *xdr)
+{
+{% if result == 'void' %}
+ return xdrgen_encode_void(xdr);
+{% else %}
+ struct {{ result }} *resp = rqstp->rq_resp;
+
+ return xdrgen_encode_{{ result }}(xdr, resp);
+{% endif %}
+}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/source_top/client.j2 b/tools/net/sunrpc/xdrgen/templates/C/source_top/client.j2
new file mode 100644
index 000000000000..c5518c519854
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/source_top/client.j2
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+// Generated by xdrgen. Manual edits will be lost.
+// XDR specification file: {{ filename }}
+// XDR specification modification time: {{ mtime }}
+
+#include <linux/types.h>
+
+#include <linux/sunrpc/xdr.h>
+#include <linux/sunrpc/xdrgen/_defs.h>
+#include <linux/sunrpc/xdrgen/_builtins.h>
+#include <linux/sunrpc/xdrgen/nlm4.h>
+
+#include <linux/sunrpc/clnt.h>
diff --git a/tools/net/sunrpc/xdrgen/templates/C/source_top/server.j2 b/tools/net/sunrpc/xdrgen/templates/C/source_top/server.j2
new file mode 100644
index 000000000000..974e1d971e5d
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/source_top/server.j2
@@ -0,0 +1,8 @@
+// SPDX-License-Identifier: GPL-2.0
+// Generated by xdrgen. Manual edits will be lost.
+// XDR specification file: {{ filename }}
+// XDR specification modification time: {{ mtime }}
+
+#include <linux/sunrpc/svc.h>
+
+#include "{{ program }}xdr_gen.h"
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/declaration/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/declaration/close.j2
new file mode 100644
index 000000000000..816291184e8c
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/declaration/close.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, struct {{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const struct {{ name }} *value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/basic.j2
new file mode 100644
index 000000000000..cde4ab53f4be
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/basic.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (basic) */
+{% endif %}
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/close.j2
new file mode 100644
index 000000000000..5bf010665f84
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/close.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/fixed_length_array.j2
new file mode 100644
index 000000000000..cfd64217ad82
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/fixed_length_array.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length array) */
+{% endif %}
+ for (u32 i = 0; i < {{ size }}; i++) {
+		if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}.items[i]))
+ return false;
+ }
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/fixed_length_opaque.j2
new file mode 100644
index 000000000000..b4695ece1884
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/fixed_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length opaque) */
+{% endif %}
+ if (xdr_stream_decode_opaque_fixed(xdr, ptr->{{ name }}, {{ size }}) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/open.j2
new file mode 100644
index 000000000000..289e67259f55
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/open.j2
@@ -0,0 +1,12 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* struct {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, struct {{ name }} *ptr)
+{
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/optional_data.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/optional_data.j2
new file mode 100644
index 000000000000..b6834299a04b
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/optional_data.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (optional data) */
+{% endif %}
+ if (!xdrgen_decode_{{ type }}(xdr, ptr->{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/string.j2
new file mode 100644
index 000000000000..12d20b143b43
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/string.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length string) */
+{% endif %}
+ if (!xdrgen_decode_string(xdr, (string *)ptr, {{ maxsize }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/variable_length_array.j2
new file mode 100644
index 000000000000..2f943909cdf7
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/variable_length_array.j2
@@ -0,0 +1,13 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length array) */
+{% endif %}
+ if (xdr_stream_decode_u32(xdr, &ptr->{{ name }}.count) < 0)
+ return false;
+{% if maxsize != "0" %}
+ if (ptr->{{ name }}.count > {{ maxsize }})
+ return false;
+{% endif %}
+ for (u32 i = 0; i < ptr->{{ name }}.count; i++)
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}.element[i]))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/variable_length_opaque.j2
new file mode 100644
index 000000000000..9a814de54ae8
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/decoder/variable_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length opaque) */
+{% endif %}
+ if (!xdrgen_decode_opaque(xdr, (opaque *)ptr, {{ maxsize }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/basic.j2
new file mode 100644
index 000000000000..b3430895f311
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/basic.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (basic) */
+{% endif %}
+ {{ classifier }}{{ type }} {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/close.j2
new file mode 100644
index 000000000000..9e62344a976a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/close.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/fixed_length_array.j2
new file mode 100644
index 000000000000..66be836826a0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/fixed_length_array.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (fixed-length array) */
+{% endif %}
+ {{ type }} {{ name }}[{{ size }}];
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/fixed_length_opaque.j2
new file mode 100644
index 000000000000..0daba19aa0f0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/fixed_length_opaque.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (fixed-length opaque) */
+{% endif %}
+ u8 {{ name }}[{{ size }}];
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/open.j2
new file mode 100644
index 000000000000..07cbf5424546
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/open.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* struct {{ name }} */
+{% endif %}
+struct {{ name }} {
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/optional_data.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/optional_data.j2
new file mode 100644
index 000000000000..a33341f45e8f
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/optional_data.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (optional data) */
+{% endif %}
+ {{ classifier }}{{ type }} *{{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/string.j2
new file mode 100644
index 000000000000..2de2feec77db
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/string.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (variable-length string) */
+{% endif %}
+ string {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/variable_length_array.j2
new file mode 100644
index 000000000000..5d767f9b3674
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/variable_length_array.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (variable-length array) */
+{% endif %}
+ struct {
+ u32 count;
+ {{ classifier }}{{ type }} *element;
+ } {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/definition/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/variable_length_opaque.j2
new file mode 100644
index 000000000000..4d0cd84be3db
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/definition/variable_length_opaque.j2
@@ -0,0 +1,5 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* (variable-length opaque) */
+{% endif %}
+ opaque {{ name }};
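
These member templates compose with definition/open.j2 and definition/close.j2 to form the generated struct. A rough sketch for a hypothetical XDR struct with a basic member, a fixed-length opaque, and a variable-length opaque (member names invented for illustration; "opaque" is the counted-buffer helper type from the _defs.h header included by the generated sources):

    /* struct foo_item */
    struct foo_item {
            u32 id;
            u8 verifier[8];
            opaque body;
    };
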
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/basic.j2
new file mode 100644
index 000000000000..a7d3695c5a6a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/basic.j2
@@ -0,0 +1,10 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (basic) */
+{% endif %}
+{% if type in pass_by_reference %}
+ if (!xdrgen_encode_{{ type }}(xdr, &value->{{ name }}))
+{% else %}
+ if (!xdrgen_encode_{{ type }}(xdr, value->{{ name }}))
+{% endif %}
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/close.j2
new file mode 100644
index 000000000000..5bf010665f84
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/close.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/fixed_length_array.j2
new file mode 100644
index 000000000000..b01833a2c7a1
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/fixed_length_array.j2
@@ -0,0 +1,12 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length array) */
+{% endif %}
+ for (u32 i = 0; i < {{ size }}; i++) {
+{% if type in pass_by_reference %}
+		if (!xdrgen_encode_{{ type }}(xdr, &value->items[i]))
+{% else %}
+		if (!xdrgen_encode_{{ type }}(xdr, value->items[i]))
+{% endif %}
+ return false;
+ }
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/fixed_length_opaque.j2
new file mode 100644
index 000000000000..07bc91919898
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/fixed_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (fixed-length opaque) */
+{% endif %}
+ if (xdr_stream_encode_opaque_fixed(xdr, value->{{ name }}, {{ size }}) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/open.j2
new file mode 100644
index 000000000000..2286a3adf82a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/open.j2
@@ -0,0 +1,12 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* struct {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const struct {{ name }} *value)
+{
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/optional_data.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/optional_data.j2
new file mode 100644
index 000000000000..16fb3e09bba1
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/optional_data.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (optional data) */
+{% endif %}
+ if (!xdrgen_encode_{{ type }}(xdr, value->{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/string.j2
new file mode 100644
index 000000000000..cf65b71eaef3
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/string.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length string) */
+{% endif %}
+ if (value->{{ name }}.len > {{ maxsize }})
+ return false;
+ if (xdr_stream_encode_opaque(xdr, value->{{ name }}.data, value->{{ name }}.len) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/variable_length_array.j2
new file mode 100644
index 000000000000..b21476629679
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/variable_length_array.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length array) */
+{% endif %}
+{% if maxsize != "0" %}
+ if (value->{{ name }}.count > {{ maxsize }})
+ return false;
+{% endif %}
+ if (xdr_stream_encode_u32(xdr, value->{{ name }}.count) != XDR_UNIT)
+ return false;
+ for (u32 i = 0; i < value->{{ name }}.count; i++)
+{% if type in pass_by_reference %}
+ if (!xdrgen_encode_{{ type }}(xdr, &value->{{ name }}.element[i]))
+{% else %}
+ if (!xdrgen_encode_{{ type }}(xdr, value->{{ name }}.element[i]))
+{% endif %}
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/variable_length_opaque.j2
new file mode 100644
index 000000000000..1d477c2d197a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/encoder/variable_length_opaque.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length opaque) */
+{% endif %}
+ if (value->{{ name }}.len > {{ maxsize }})
+ return false;
+ if (xdr_stream_encode_opaque(xdr, value->{{ name }}.data, value->{{ name }}.len) < 0)
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/struct/maxsize/struct.j2 b/tools/net/sunrpc/xdrgen/templates/C/struct/maxsize/struct.j2
new file mode 100644
index 000000000000..9f3bfb47d2f4
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/struct/maxsize/struct.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} \
+ ({{ width }})
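
The maxsize templates define a macro giving a type's maximum encoded width in 4-octet XDR units. A hedged example, assuming a hypothetical struct whose members add up to a bool, a length word, and up to 16 octets of opaque data (XDR_bool, XDR_unsigned_int, and XDR_QUADLEN are the symbolic widths used by the generator, as seen in xdr_ast.py below):

    #define FOO_foo_item_sz \
            (XDR_bool + XDR_unsigned_int + XDR_QUADLEN(16))
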
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/basic.j2
new file mode 100644
index 000000000000..455b10bd90ec
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/basic.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ name }} *ptr);
+{% if name in pass_by_reference %}
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ name }} *value);
+{%- else -%}
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ name }} value);
+{% endif %}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/fixed_length_array.j2
new file mode 100644
index 000000000000..3fe3ddd9f359
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/fixed_length_array.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/fixed_length_opaque.j2
new file mode 100644
index 000000000000..3fe3ddd9f359
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/fixed_length_opaque.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/string.j2
new file mode 100644
index 000000000000..3fe3ddd9f359
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/string.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/variable_length_array.j2
new file mode 100644
index 000000000000..3fe3ddd9f359
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/variable_length_array.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/variable_length_opaque.j2
new file mode 100644
index 000000000000..3fe3ddd9f359
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/declaration/variable_length_opaque.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value);
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/basic.j2
new file mode 100644
index 000000000000..da4709403dc9
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/basic.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ name }} *ptr)
+{
+{% if annotate %}
+ /* (basic) */
+{% endif %}
+ return xdrgen_decode_{{ type }}(xdr, ptr);
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/fixed_length_array.j2
new file mode 100644
index 000000000000..d7c80e472fe3
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/fixed_length_array.j2
@@ -0,0 +1,25 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr)
+{
+{% if annotate %}
+ /* (fixed-length array) */
+{% endif %}
+ for (u32 i = 0; i < {{ size }}; i++) {
+{%- if classifier == '' %}
+		if (!xdrgen_decode_{{ type }}(xdr, ptr->items[i]))
+{% else %}
+		if (!xdrgen_decode_{{ type }}(xdr, &ptr->items[i]))
+{% endif %}
+ return false;
+ }
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/fixed_length_opaque.j2
new file mode 100644
index 000000000000..bdc7bd24ffb1
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/fixed_length_opaque.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr)
+{
+{% if annotate %}
+ /* (fixed-length opaque) */
+{% endif %}
+	return xdr_stream_decode_opaque_fixed(xdr, ptr, {{ size }}) >= 0;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/string.j2
new file mode 100644
index 000000000000..56c5a17d6a70
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/string.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr)
+{
+{% if annotate %}
+ /* (variable-length string) */
+{% endif %}
+ return xdrgen_decode_string(xdr, ptr, {{ maxsize }});
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/variable_length_array.j2
new file mode 100644
index 000000000000..e74ffdd98463
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/variable_length_array.j2
@@ -0,0 +1,26 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr)
+{
+{% if annotate %}
+ /* (variable-length array) */
+{% endif %}
+ if (xdr_stream_decode_u32(xdr, &ptr->count) < 0)
+ return false;
+{% if maxsize != "0" %}
+ if (ptr->count > {{ maxsize }})
+ return false;
+{% endif %}
+ for (u32 i = 0; i < ptr->count; i++)
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->element[i]))
+ return false;
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/variable_length_opaque.j2
new file mode 100644
index 000000000000..f28f8b228ad5
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/decoder/variable_length_opaque.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, {{ classifier }}{{ name }} *ptr)
+{
+{% if annotate %}
+ /* (variable-length opaque) */
+{% endif %}
+ return xdrgen_decode_opaque(xdr, ptr, {{ maxsize }});
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/basic.j2
new file mode 100644
index 000000000000..1c5f28135eec
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/basic.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} (basic) */
+{% endif %}
+typedef {{ classifier }}{{ type }} {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/fixed_length_array.j2
new file mode 100644
index 000000000000..c3a67c952e77
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/fixed_length_array.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} (fixed-length array) */
+{% endif %}
+typedef {{ type }} {{ name }}[{{ size }}];
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/fixed_length_opaque.j2
new file mode 100644
index 000000000000..8788b02fe4f5
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/fixed_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} (fixed-length opaque) */
+{% endif %}
+typedef u8 {{ name }}[{{ size }}];
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/string.j2
new file mode 100644
index 000000000000..c03c2df8e625
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/string.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} (variable-length string) */
+{% endif %}
+typedef string {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/variable_length_array.j2
new file mode 100644
index 000000000000..f03393760545
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/variable_length_array.j2
@@ -0,0 +1,9 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} (variable-length array) */
+{% endif %}
+typedef struct {
+ u32 count;
+ {{ classifier }}{{ type }} *element;
+} {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/variable_length_opaque.j2
new file mode 100644
index 000000000000..162f2610af34
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/definition/variable_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} (variable-length opaque) */
+{% endif %}
+typedef opaque {{ name }};
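
A sketch of the C typedefs these definition templates could produce, using invented names and sizes purely for illustration:

    /* typedef foo_cookie (fixed-length opaque) */
    typedef u8 foo_cookie[16];

    /* typedef foo_label (variable-length opaque) */
    typedef opaque foo_label;

    /* typedef foo_bitmap (variable-length array) */
    typedef struct {
            u32 count;
            u32 *element;
    } foo_bitmap;
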
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/basic.j2
new file mode 100644
index 000000000000..35effe67e4ef
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/basic.j2
@@ -0,0 +1,21 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+{% if name in pass_by_reference %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} *value)
+{% else %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value)
+{% endif %}
+{
+{% if annotate %}
+ /* (basic) */
+{% endif %}
+ return xdrgen_encode_{{ type }}(xdr, value);
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/fixed_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/fixed_length_array.j2
new file mode 100644
index 000000000000..95202ad5ad2d
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/fixed_length_array.j2
@@ -0,0 +1,25 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value)
+{
+{% if annotate %}
+ /* (fixed-length array) */
+{% endif %}
+ for (u32 i = 0; i < {{ size }}; i++) {
+{% if type in pass_by_reference %}
+		if (!xdrgen_encode_{{ type }}(xdr, &value->items[i]))
+{% else %}
+		if (!xdrgen_encode_{{ type }}(xdr, value->items[i]))
+{% endif %}
+ return false;
+ }
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/fixed_length_opaque.j2
new file mode 100644
index 000000000000..9c66a11b9912
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/fixed_length_opaque.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value)
+{
+{% if annotate %}
+ /* (fixed-length opaque) */
+{% endif %}
+ return xdr_stream_encode_opaque_fixed(xdr, value, {{ size }}) >= 0;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/string.j2
new file mode 100644
index 000000000000..3d490ff180d0
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/string.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value)
+{
+{% if annotate %}
+ /* (variable-length string) */
+{% endif %}
+ return xdr_stream_encode_opaque(xdr, value.data, value.len) >= 0;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/variable_length_array.j2
new file mode 100644
index 000000000000..2d2384f64918
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/variable_length_array.j2
@@ -0,0 +1,30 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value)
+{
+{% if annotate %}
+ /* (variable-length array) */
+{% endif %}
+{% if maxsize != "0" %}
+ if (unlikely(value.count > {{ maxsize }}))
+ return false;
+{% endif %}
+ if (xdr_stream_encode_u32(xdr, value.count) != XDR_UNIT)
+ return false;
+ for (u32 i = 0; i < value.count; i++)
+{% if type in pass_by_reference %}
+ if (!xdrgen_encode_{{ type }}(xdr, &value.element[i]))
+{% else %}
+ if (!xdrgen_encode_{{ type }}(xdr, value.element[i]))
+{% endif %}
+ return false;
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/variable_length_opaque.j2
new file mode 100644
index 000000000000..8508f13c95b9
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/encoder/variable_length_opaque.j2
@@ -0,0 +1,17 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* typedef {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const {{ classifier }}{{ name }} value)
+{
+{% if annotate %}
+ /* (variable-length opaque) */
+{% endif %}
+ return xdr_stream_encode_opaque(xdr, value.data, value.len) >= 0;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/basic.j2
new file mode 100644
index 000000000000..9f3bfb47d2f4
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/basic.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} \
+ ({{ width }})
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/fixed_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/fixed_length_opaque.j2
new file mode 100644
index 000000000000..45c1d4c21b22
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/fixed_length_opaque.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} ({{ width }})
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/string.j2
new file mode 100644
index 000000000000..45c1d4c21b22
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/string.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} ({{ width }})
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/variable_length_array.j2
new file mode 100644
index 000000000000..45c1d4c21b22
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/variable_length_array.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} ({{ width }})
diff --git a/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/variable_length_opaque.j2
new file mode 100644
index 000000000000..45c1d4c21b22
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/typedef/maxsize/variable_length_opaque.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} ({{ width }})
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/basic.j2
new file mode 100644
index 000000000000..4d97cc5395eb
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/basic.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (basic) */
+{% endif %}
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->u.{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/break.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/break.j2
new file mode 100644
index 000000000000..b286d1407029
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/break.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ break;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/case_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/case_spec.j2
new file mode 100644
index 000000000000..5fa2163f0a74
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/case_spec.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ case {{ case }}:
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/case_spec_be.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/case_spec_be.j2
new file mode 100644
index 000000000000..917f3a1c4588
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/case_spec_be.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ case __constant_cpu_to_be32({{ case }}):
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/close.j2
new file mode 100644
index 000000000000..fdc2dfd1843b
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/close.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ }
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/default_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/default_spec.j2
new file mode 100644
index 000000000000..044a002d0589
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/default_spec.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ default:
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/open.j2
new file mode 100644
index 000000000000..eb9941376e49
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/open.j2
@@ -0,0 +1,12 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* union {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_decode_{{ name }}(struct xdr_stream *xdr, struct {{ name }} *ptr)
+{
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/optional_data.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/optional_data.j2
new file mode 100644
index 000000000000..e4476f5fd8d3
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/optional_data.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (optional data) */
+{% endif %}
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->u.{{ name }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/string.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/string.j2
new file mode 100644
index 000000000000..83b6e5a14e7f
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/string.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length string) */
+{% endif %}
+ if (!xdrgen_decode_string(xdr, (struct string *)ptr->u.{{ name }}, {{ maxsize }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/switch_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/switch_spec.j2
new file mode 100644
index 000000000000..99b3067ef617
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/switch_spec.j2
@@ -0,0 +1,7 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* discriminant {{ name }} */
+{% endif %}
+ if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}))
+ return false;
+ switch (ptr->{{ name }}) {
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/variable_length_array.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/variable_length_array.j2
new file mode 100644
index 000000000000..53dfaf9cec68
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/variable_length_array.j2
@@ -0,0 +1,15 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length array) */
+{% endif %}
+ if (xdr_stream_decode_u32(xdr, &count) < 0)
+ return false;
+{% if maxsize != "0" %}
+ if (count > {{ maxsize }})
+ return false;
+{% endif %}
+ for (u32 i = 0; i < count; i++) {
+		if (!xdrgen_decode_{{ type }}(xdr, &ptr->{{ name }}.items[i]))
+ return false;
+ }
+ ptr->{{ name }}.len = count;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/variable_length_opaque.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/variable_length_opaque.j2
new file mode 100644
index 000000000000..c9d88ed29c78
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/variable_length_opaque.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (variable-length opaque) */
+{% endif %}
+ if (!xdrgen_decode_opaque(xdr, (struct opaque *)ptr->u.{{ name }}, {{ maxsize }}))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/decoder/void.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/void.j2
new file mode 100644
index 000000000000..65205ce37b36
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/decoder/void.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ if (!xdrgen_decode_void(xdr))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/definition/case_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/definition/case_spec.j2
new file mode 100644
index 000000000000..52f8d131b805
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/definition/case_spec.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ {{ classifier }}{{ type }} {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/definition/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/definition/close.j2
new file mode 100644
index 000000000000..01d716d0099e
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/definition/close.j2
@@ -0,0 +1,8 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ } u;
+};
+{%- if name in public_apis %}
+
+bool xdrgen_decode_{{ name }}(struct xdr_stream *xdr, struct {{ name }} *ptr);
+bool xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const struct {{ name }} *ptr);
+{%- endif -%}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/definition/default_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/definition/default_spec.j2
new file mode 100644
index 000000000000..52f8d131b805
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/definition/default_spec.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ {{ classifier }}{{ type }} {{ name }};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/definition/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/definition/open.j2
new file mode 100644
index 000000000000..20fcfd1fc4e5
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/definition/open.j2
@@ -0,0 +1,6 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* union {{ name }} */
+{% endif %}
+struct {{ name }} {
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/definition/switch_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/definition/switch_spec.j2
new file mode 100644
index 000000000000..3e552732502c
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/definition/switch_spec.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ {{ classifier }}{{ type }} {{ name }};
+ union {
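
Rendered in order (open, switch_spec, one case_spec or default_spec per arm, close), the union definition templates yield a discriminated struct with an embedded union member named "u". A hedged sketch for a hypothetical two-arm union keyed on an enum discriminant (exact classifier handling for enum types may differ in the real generator):

    /* union foo_result */
    struct foo_result {
            enum foo_status status;
            union {
                    struct foo_payload ok;
                    u32 error_code;
            } u;
    };
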
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/basic.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/basic.j2
new file mode 100644
index 000000000000..6452d75c6f9a
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/basic.j2
@@ -0,0 +1,10 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* member {{ name }} (basic) */
+{% endif %}
+{% if type in pass_by_reference %}
+ if (!xdrgen_encode_{{ type }}(xdr, &ptr->u.{{ name }}))
+{% else %}
+ if (!xdrgen_encode_{{ type }}(xdr, ptr->u.{{ name }}))
+{% endif %}
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/break.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/break.j2
new file mode 100644
index 000000000000..b286d1407029
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/break.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ break;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/case_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/case_spec.j2
new file mode 100644
index 000000000000..5fa2163f0a74
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/case_spec.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ case {{ case }}:
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/case_spec_be.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/case_spec_be.j2
new file mode 100644
index 000000000000..917f3a1c4588
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/case_spec_be.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ case __constant_cpu_to_be32({{ case }}):
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/close.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/close.j2
new file mode 100644
index 000000000000..fdc2dfd1843b
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/close.j2
@@ -0,0 +1,4 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ }
+ return true;
+};
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/default_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/default_spec.j2
new file mode 100644
index 000000000000..044a002d0589
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/default_spec.j2
@@ -0,0 +1,2 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ default:
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/open.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/open.j2
new file mode 100644
index 000000000000..e5a206df10c6
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/open.j2
@@ -0,0 +1,12 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+
+{% if annotate %}
+/* union {{ name }} */
+{% endif %}
+{% if name in public_apis %}
+bool
+{% else %}
+static bool __maybe_unused
+{% endif %}
+xdrgen_encode_{{ name }}(struct xdr_stream *xdr, const struct {{ name }} *ptr)
+{
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/switch_spec.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/switch_spec.j2
new file mode 100644
index 000000000000..c8c3ecbe038b
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/switch_spec.j2
@@ -0,0 +1,7 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+{% if annotate %}
+ /* discriminant {{ name }} */
+{% endif %}
+ if (!xdrgen_encode_{{ type }}(xdr, ptr->{{ name }}))
+ return false;
+ switch (ptr->{{ name }}) {
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/encoder/void.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/void.j2
new file mode 100644
index 000000000000..84e7c2127d75
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/encoder/void.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+ if (!xdrgen_encode_void(xdr))
+ return false;
diff --git a/tools/net/sunrpc/xdrgen/templates/C/union/maxsize/union.j2 b/tools/net/sunrpc/xdrgen/templates/C/union/maxsize/union.j2
new file mode 100644
index 000000000000..9f3bfb47d2f4
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/templates/C/union/maxsize/union.j2
@@ -0,0 +1,3 @@
+{# SPDX-License-Identifier: GPL-2.0 #}
+#define {{ '{:<31}'.format(macro) }} \
+ ({{ width }})
diff --git a/tools/net/sunrpc/xdrgen/tests/test.x b/tools/net/sunrpc/xdrgen/tests/test.x
new file mode 100644
index 000000000000..90c8587f6fe5
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/tests/test.x
@@ -0,0 +1,36 @@
+/* Sample XDR specification from RFC 1832 Section 5.5 */
+
+const MAXUSERNAME = 32; /* max length of a user name */
+const MAXFILELEN = 65535; /* max length of a file */
+const MAXNAMELEN = 255; /* max length of a file name */
+
+/*
+ * Types of files:
+ */
+enum filekind {
+ TEXT = 0, /* ascii data */
+ DATA = 1, /* raw data */
+ EXEC = 2 /* executable */
+};
+
+/*
+ * File information, per kind of file:
+ */
+union filetype switch (filekind kind) {
+case TEXT:
+ void; /* no extra information */
+case DATA:
+ string creator<MAXNAMELEN>; /* data creator */
+case EXEC:
+ string interpretor<MAXNAMELEN>; /* program interpretor */
+};
+
+/*
+ * A complete file:
+ */
+struct file {
+ string filename<MAXNAMELEN>; /* name of file */
+ filetype type; /* info about file */
+ string owner<MAXUSERNAME>; /* owner of file */
+ opaque data<MAXFILELEN>; /* file data */
+};
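
For this sample spec, the definition templates shown earlier would be expected to produce C structures roughly of the following shape; this is a hand-written approximation for orientation, not captured xdrgen output ("string" and "opaque" are the counted-buffer helper types from _defs.h):

    /* union filetype */
    struct filetype {
            enum filekind kind;
            union {
                    string creator;
                    string interpretor;
            } u;
    };

    /* struct file */
    struct file {
            string filename;
            struct filetype type;
            string owner;
            opaque data;
    };
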
diff --git a/tools/net/sunrpc/xdrgen/xdr_ast.py b/tools/net/sunrpc/xdrgen/xdr_ast.py
new file mode 100644
index 000000000000..5233e73c7046
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/xdr_ast.py
@@ -0,0 +1,753 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Define and implement the Abstract Syntax Tree for the XDR language."""
+
+import sys
+from typing import List
+from dataclasses import dataclass
+
+from lark import ast_utils, Transformer
+from lark.tree import Meta
+
+this_module = sys.modules[__name__]
+
+big_endian = []
+excluded_apis = []
+header_name = "none"
+public_apis = []
+structs = set()
+pass_by_reference = set()
+
+constants = {}
+
+
+def xdr_quadlen(val: str) -> int:
+ """Return integer XDR width of an XDR type"""
+ if val in constants:
+ octets = constants[val]
+ else:
+ octets = int(val)
+ return int((octets + 3) / 4)
+
+
+symbolic_widths = {
+ "void": ["XDR_void"],
+ "bool": ["XDR_bool"],
+ "int": ["XDR_int"],
+ "unsigned_int": ["XDR_unsigned_int"],
+ "long": ["XDR_long"],
+ "unsigned_long": ["XDR_unsigned_long"],
+ "hyper": ["XDR_hyper"],
+ "unsigned_hyper": ["XDR_unsigned_hyper"],
+}
+
+# Numeric XDR widths are tracked in a dictionary that is keyed
+# by type_name because sometimes a caller has nothing more than
+# the type_name to use to figure out the numeric width.
+max_widths = {
+ "void": 0,
+ "bool": 1,
+ "int": 1,
+ "unsigned_int": 1,
+ "long": 1,
+ "unsigned_long": 1,
+ "hyper": 2,
+ "unsigned_hyper": 2,
+}
+
+
+@dataclass
+class _XdrAst(ast_utils.Ast):
+ """Base class for the XDR abstract syntax tree"""
+
+
+@dataclass
+class _XdrIdentifier(_XdrAst):
+ """Corresponds to 'identifier' in the XDR language grammar"""
+
+ symbol: str
+
+
+@dataclass
+class _XdrValue(_XdrAst):
+ """Corresponds to 'value' in the XDR language grammar"""
+
+ value: str
+
+
+@dataclass
+class _XdrConstantValue(_XdrAst):
+ """Corresponds to 'constant' in the XDR language grammar"""
+
+ value: int
+
+
+@dataclass
+class _XdrTypeSpecifier(_XdrAst):
+ """Corresponds to 'type_specifier' in the XDR language grammar"""
+
+ type_name: str
+ c_classifier: str = ""
+
+
+@dataclass
+class _XdrDefinedType(_XdrTypeSpecifier):
+ """Corresponds to a type defined by the input specification"""
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return [get_header_name().upper() + "_" + self.type_name + "_sz"]
+
+ def __post_init__(self):
+ if self.type_name in structs:
+ self.c_classifier = "struct "
+ symbolic_widths[self.type_name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrBuiltInType(_XdrTypeSpecifier):
+ """Corresponds to a built-in XDR type"""
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return symbolic_widths[self.type_name]
+
+
+@dataclass
+class _XdrDeclaration(_XdrAst):
+ """Base class of XDR type declarations"""
+
+
+@dataclass
+class _XdrFixedLengthOpaque(_XdrDeclaration):
+ """A fixed-length opaque declaration"""
+
+ name: str
+ size: str
+ template: str = "fixed_length_opaque"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return xdr_quadlen(self.size)
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return ["XDR_QUADLEN(" + self.size + ")"]
+
+ def __post_init__(self):
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrVariableLengthOpaque(_XdrDeclaration):
+ """A variable-length opaque declaration"""
+
+ name: str
+ maxsize: str
+ template: str = "variable_length_opaque"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return 1 + xdr_quadlen(self.maxsize)
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ widths = ["XDR_unsigned_int"]
+ if self.maxsize != "0":
+ widths.append("XDR_QUADLEN(" + self.maxsize + ")")
+ return widths
+
+ def __post_init__(self):
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrString(_XdrDeclaration):
+ """A (NUL-terminated) variable-length string declaration"""
+
+ name: str
+ maxsize: str
+ template: str = "string"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return 1 + xdr_quadlen(self.maxsize)
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ widths = ["XDR_unsigned_int"]
+ if self.maxsize != "0":
+ widths.append("XDR_QUADLEN(" + self.maxsize + ")")
+ return widths
+
+ def __post_init__(self):
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrFixedLengthArray(_XdrDeclaration):
+ """A fixed-length array declaration"""
+
+ name: str
+ spec: _XdrTypeSpecifier
+ size: str
+ template: str = "fixed_length_array"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return xdr_quadlen(self.size) * max_widths[self.spec.type_name]
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ item_width = " + ".join(symbolic_widths[self.spec.type_name])
+ return ["(" + self.size + " * (" + item_width + "))"]
+
+ def __post_init__(self):
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrVariableLengthArray(_XdrDeclaration):
+ """A variable-length array declaration"""
+
+ name: str
+ spec: _XdrTypeSpecifier
+ maxsize: str
+ template: str = "variable_length_array"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return 1 + (xdr_quadlen(self.maxsize) * max_widths[self.spec.type_name])
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ widths = ["XDR_unsigned_int"]
+ if self.maxsize != "0":
+ item_width = " + ".join(symbolic_widths[self.spec.type_name])
+ widths.append("(" + self.maxsize + " * (" + item_width + "))")
+ return widths
+
+ def __post_init__(self):
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrOptionalData(_XdrDeclaration):
+ """An 'optional_data' declaration"""
+
+ name: str
+ spec: _XdrTypeSpecifier
+ template: str = "optional_data"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return 1
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return ["XDR_bool"]
+
+ def __post_init__(self):
+ structs.add(self.name)
+ pass_by_reference.add(self.name)
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrBasic(_XdrDeclaration):
+ """A 'basic' declaration"""
+
+ name: str
+ spec: _XdrTypeSpecifier
+ template: str = "basic"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return max_widths[self.spec.type_name]
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return symbolic_widths[self.spec.type_name]
+
+ def __post_init__(self):
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrVoid(_XdrDeclaration):
+ """A void declaration"""
+
+ name: str = "void"
+ template: str = "void"
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return 0
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return []
+
+
+@dataclass
+class _XdrConstant(_XdrAst):
+ """Corresponds to 'constant_def' in the grammar"""
+
+ name: str
+ value: str
+
+ def __post_init__(self):
+ if self.value not in constants:
+ constants[self.name] = int(self.value, 0)
+
+
+@dataclass
+class _XdrEnumerator(_XdrAst):
+ """An 'identifier = value' enumerator"""
+
+ name: str
+ value: str
+
+ def __post_init__(self):
+ if self.value not in constants:
+ constants[self.name] = int(self.value, 0)
+
+
+@dataclass
+class _XdrEnum(_XdrAst):
+ """An XDR enum definition"""
+
+ name: str
+ minimum: int
+ maximum: int
+ enumerators: List[_XdrEnumerator]
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return 1
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return ["XDR_int"]
+
+ def __post_init__(self):
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrStruct(_XdrAst):
+ """An XDR struct definition"""
+
+ name: str
+ fields: List[_XdrDeclaration]
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ width = 0
+ for field in self.fields:
+ width += field.max_width()
+ return width
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ widths = []
+ for field in self.fields:
+ widths += field.symbolic_width()
+ return widths
+
+ def __post_init__(self):
+ structs.add(self.name)
+ pass_by_reference.add(self.name)
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrPointer(_XdrAst):
+ """An XDR pointer definition"""
+
+ name: str
+ fields: List[_XdrDeclaration]
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ width = 1
+ for field in self.fields[0:-1]:
+ width += field.max_width()
+ return width
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ widths = []
+ widths += ["XDR_bool"]
+ for field in self.fields[0:-1]:
+ widths += field.symbolic_width()
+ return widths
+
+ def __post_init__(self):
+ structs.add(self.name)
+ pass_by_reference.add(self.name)
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrTypedef(_XdrAst):
+ """An XDR typedef"""
+
+ declaration: _XdrDeclaration
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ return self.declaration.max_width()
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ return self.declaration.symbolic_width()
+
+ def __post_init__(self):
+ if isinstance(self.declaration, _XdrBasic):
+ new_type = self.declaration
+ if isinstance(new_type.spec, _XdrDefinedType):
+ if new_type.spec.type_name in pass_by_reference:
+ pass_by_reference.add(new_type.name)
+ max_widths[new_type.name] = self.max_width()
+ symbolic_widths[new_type.name] = self.symbolic_width()
+
+
+@dataclass
+class _XdrCaseSpec(_XdrAst):
+ """One case in an XDR union"""
+
+ values: List[str]
+ arm: _XdrDeclaration
+ template: str = "case_spec"
+
+
+@dataclass
+class _XdrDefaultSpec(_XdrAst):
+ """Default case in an XDR union"""
+
+ arm: _XdrDeclaration
+ template: str = "default_spec"
+
+
+@dataclass
+class _XdrUnion(_XdrAst):
+ """An XDR union"""
+
+ name: str
+ discriminant: _XdrDeclaration
+ cases: List[_XdrCaseSpec]
+ default: _XdrDeclaration
+
+ def max_width(self) -> int:
+ """Return width of type in XDR_UNITS"""
+ max_width = 0
+ for case in self.cases:
+ if case.arm.max_width() > max_width:
+ max_width = case.arm.max_width()
+ if self.default:
+ if self.default.arm.max_width() > max_width:
+ max_width = self.default.arm.max_width()
+ return 1 + max_width
+
+ def symbolic_width(self) -> List:
+ """Return list containing XDR width of type's components"""
+ max_width = 0
+ for case in self.cases:
+ if case.arm.max_width() > max_width:
+ max_width = case.arm.max_width()
+ width = case.arm.symbolic_width()
+ if self.default:
+ if self.default.arm.max_width() > max_width:
+ max_width = self.default.arm.max_width()
+ width = self.default.arm.symbolic_width()
+ return symbolic_widths[self.discriminant.name] + width
+
+ def __post_init__(self):
+ structs.add(self.name)
+ pass_by_reference.add(self.name)
+ max_widths[self.name] = self.max_width()
+ symbolic_widths[self.name] = self.symbolic_width()
+
+
+@dataclass
+class _RpcProcedure(_XdrAst):
+ """RPC procedure definition"""
+
+ name: str
+ number: str
+ argument: _XdrTypeSpecifier
+ result: _XdrTypeSpecifier
+
+
+@dataclass
+class _RpcVersion(_XdrAst):
+ """RPC version definition"""
+
+ name: str
+ number: str
+ procedures: List[_RpcProcedure]
+
+
+@dataclass
+class _RpcProgram(_XdrAst):
+ """RPC program definition"""
+
+ name: str
+ number: str
+ versions: List[_RpcVersion]
+
+
+@dataclass
+class _Pragma(_XdrAst):
+ """Empty class for pragma directives"""
+
+
+@dataclass
+class Definition(_XdrAst, ast_utils.WithMeta):
+ """Corresponds to 'definition' in the grammar"""
+
+ meta: Meta
+ value: _XdrAst
+
+
+@dataclass
+class Specification(_XdrAst, ast_utils.AsList):
+ """Corresponds to 'specification' in the grammar"""
+
+ definitions: List[Definition]
+
+
+class ParseToAst(Transformer):
+ """Functions that transform productions into AST nodes"""
+
+ def identifier(self, children):
+ """Instantiate one _XdrIdentifier object"""
+ return _XdrIdentifier(children[0].value)
+
+ def value(self, children):
+ """Instantiate one _XdrValue object"""
+ if isinstance(children[0], _XdrIdentifier):
+ return _XdrValue(children[0].symbol)
+ return _XdrValue(children[0].children[0].value)
+
+ def constant(self, children):
+ """Instantiate one _XdrConstantValue object"""
+ match children[0].data:
+ case "decimal_constant":
+ value = int(children[0].children[0].value, base=10)
+ case "hexadecimal_constant":
+ value = int(children[0].children[0].value, base=16)
+ case "octal_constant":
+ value = int(children[0].children[0].value, base=8)
+ return _XdrConstantValue(value)
+
+ def type_specifier(self, children):
+ """Instantiate one _XdrTypeSpecifier object"""
+ if isinstance(children[0], _XdrIdentifier):
+ name = children[0].symbol
+ return _XdrDefinedType(type_name=name)
+
+ name = children[0].data.value
+ return _XdrBuiltInType(type_name=name)
+
+ def constant_def(self, children):
+ """Instantiate one _XdrConstant object"""
+ name = children[0].symbol
+ value = children[1].value
+ return _XdrConstant(name, value)
+
+ # cel: Python can compute a min() and max() for the enumerator values
+ # so that the generated code can perform proper range checking.
+ def enum(self, children):
+ """Instantiate one _XdrEnum object"""
+ enum_name = children[0].symbol
+
+ i = 0
+ enumerators = []
+ body = children[1]
+ while i < len(body.children):
+ name = body.children[i].symbol
+ value = body.children[i + 1].value
+ enumerators.append(_XdrEnumerator(name, value))
+ i = i + 2
+
+ return _XdrEnum(enum_name, 0, 0, enumerators)
+
+ def fixed_length_opaque(self, children):
+ """Instantiate one _XdrFixedLengthOpaque declaration object"""
+ name = children[0].symbol
+ size = children[1].value
+
+ return _XdrFixedLengthOpaque(name, size)
+
+ def variable_length_opaque(self, children):
+ """Instantiate one _XdrVariableLengthOpaque declaration object"""
+ name = children[0].symbol
+ if children[1] is not None:
+ maxsize = children[1].value
+ else:
+ maxsize = "0"
+
+ return _XdrVariableLengthOpaque(name, maxsize)
+
+ def string(self, children):
+ """Instantiate one _XdrString declaration object"""
+ name = children[0].symbol
+ if children[1] is not None:
+ maxsize = children[1].value
+ else:
+ maxsize = "0"
+
+ return _XdrString(name, maxsize)
+
+ def fixed_length_array(self, children):
+ """Instantiate one _XdrFixedLengthArray declaration object"""
+ spec = children[0]
+ name = children[1].symbol
+ size = children[2].value
+
+ return _XdrFixedLengthArray(name, spec, size)
+
+ def variable_length_array(self, children):
+ """Instantiate one _XdrVariableLengthArray declaration object"""
+ spec = children[0]
+ name = children[1].symbol
+ if children[2] is not None:
+ maxsize = children[2].value
+ else:
+ maxsize = "0"
+
+ return _XdrVariableLengthArray(name, spec, maxsize)
+
+ def optional_data(self, children):
+ """Instantiate one _XdrOptionalData declaration object"""
+ spec = children[0]
+ name = children[1].symbol
+
+ return _XdrOptionalData(name, spec)
+
+ def basic(self, children):
+ """Instantiate one _XdrBasic object"""
+ spec = children[0]
+ name = children[1].symbol
+
+ return _XdrBasic(name, spec)
+
+ def void(self, children):
+ """Instantiate one _XdrVoid declaration object"""
+
+ return _XdrVoid()
+
+ def struct(self, children):
+ """Instantiate one _XdrStruct object"""
+ name = children[0].symbol
+ fields = children[1].children
+
+ last_field = fields[-1]
+ if (
+ isinstance(last_field, _XdrOptionalData)
+ and name == last_field.spec.type_name
+ ):
+ return _XdrPointer(name, fields)
+
+ return _XdrStruct(name, fields)
+
+ def typedef(self, children):
+ """Instantiate one _XdrTypedef object"""
+ new_type = children[0]
+
+ return _XdrTypedef(new_type)
+
+ def case_spec(self, children):
+ """Instantiate one _XdrCaseSpec object"""
+ values = []
+ for item in children[0:-1]:
+ values.append(item.value)
+ arm = children[-1]
+
+ return _XdrCaseSpec(values, arm)
+
+ def default_spec(self, children):
+ """Instantiate one _XdrDefaultSpec object"""
+ arm = children[0]
+
+ return _XdrDefaultSpec(arm)
+
+ def union(self, children):
+ """Instantiate one _XdrUnion object"""
+ name = children[0].symbol
+
+ body = children[1]
+ discriminant = body.children[0].children[0]
+ cases = body.children[1:-1]
+ default = body.children[-1]
+
+ return _XdrUnion(name, discriminant, cases, default)
+
+ def procedure_def(self, children):
+ """Instantiate one _RpcProcedure object"""
+ result = children[0]
+ name = children[1].symbol
+ argument = children[2]
+ number = children[3].value
+
+ return _RpcProcedure(name, number, argument, result)
+
+ def version_def(self, children):
+ """Instantiate one _RpcVersion object"""
+ name = children[0].symbol
+ number = children[-1].value
+ procedures = children[1:-1]
+
+ return _RpcVersion(name, number, procedures)
+
+ def program_def(self, children):
+ """Instantiate one _RpcProgram object"""
+ name = children[0].symbol
+ number = children[-1].value
+ versions = children[1:-1]
+
+ return _RpcProgram(name, number, versions)
+
+ def pragma_def(self, children):
+ """Instantiate one _Pragma object"""
+ directive = children[0].children[0].data
+ match directive:
+ case "big_endian_directive":
+ big_endian.append(children[1].symbol)
+ case "exclude_directive":
+ excluded_apis.append(children[1].symbol)
+ case "header_directive":
+ global header_name
+ header_name = children[1].symbol
+ case "public_directive":
+ public_apis.append(children[1].symbol)
+ case _:
+ raise NotImplementedError("Directive not supported")
+ return _Pragma()
+
+
+transformer = ast_utils.create_transformer(this_module, ParseToAst())
+
+
+def transform_parse_tree(parse_tree):
+ """Transform productions into an abstract syntax tree"""
+
+ return transformer.transform(parse_tree)
+
+
+def get_header_name() -> str:
+ """Return header name set by pragma header directive"""
+ return header_name
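
The dataclasses above all follow one pattern: __post_init__ records the declaration's
numeric width (max_width) and symbolic width (symbolic_width) in the shared max_widths
and symbolic_widths tables, so later definitions can look up the widths of the types they
embed. A minimal sketch of that bookkeeping, standalone and for illustration only -- the
constants table and the xdr_quadlen() helper below are stand-ins for the ones the real
module keeps, not imports from it:

    # Sketch: how the width tables above accumulate, outside the real module.
    constants = {"EXAMPLE_VERIFIER_SIZE": 8}          # named byte counts
    max_widths = {"int": 1, "unsigned_int": 1}        # widths in XDR_UNITS

    def xdr_quadlen(val: str) -> int:
        # Assumed behavior of the helper referenced above: a byte count,
        # literal or named constant, rounded up to 4-byte XDR units.
        octets = constants[val] if val in constants else int(val)
        return (octets + 3) // 4

    # _XdrFixedLengthOpaque.max_width(): just the rounded-up payload.
    print(xdr_quadlen("EXAMPLE_VERIFIER_SIZE"))        # 2

    # _XdrVariableLengthOpaque.max_width(): one unit for the on-the-wire
    # length word, plus the rounded-up maximum payload.
    print(1 + xdr_quadlen("512"))                      # 129

    # _XdrStruct.max_width(): the sum of its fields' widths.
    print(max_widths["int"] + 1 + xdr_quadlen("512"))  # 130
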
diff --git a/tools/net/sunrpc/xdrgen/xdr_parse.py b/tools/net/sunrpc/xdrgen/xdr_parse.py
new file mode 100644
index 000000000000..964b44e675df
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/xdr_parse.py
@@ -0,0 +1,36 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Common parsing code for xdrgen"""
+
+from lark import Lark
+
+
+# Set to True to emit annotation comments in generated source
+annotate = False
+
+
+def set_xdr_annotate(set_it: bool) -> None:
+ """Set 'annotate' if --annotate was specified on the command line"""
+ global annotate
+ annotate = set_it
+
+
+def get_xdr_annotate() -> bool:
+ """Return True if --annotate was specified on the command line"""
+ return annotate
+
+
+def xdr_parser() -> Lark:
+ """Return a Lark parser instance configured with the XDR language grammar"""
+
+ return Lark.open(
+ "grammars/xdr.lark",
+ rel_to=__file__,
+ start="specification",
+ debug=True,
+ strict=True,
+ propagate_positions=True,
+ parser="lalr",
+ lexer="contextual",
+ )
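
Taken together, xdr_parse.py supplies the LALR parser and xdr_ast.py turns the resulting
parse tree into the dataclasses defined above. A sketch of driving the two by hand,
assuming it runs from tools/net/sunrpc/xdrgen/ so that grammars/xdr.lark and both modules
are reachable; the XDR snippet itself is a made-up example:

    from xdr_parse import xdr_parser
    from xdr_ast import transform_parse_tree, Specification

    example = """
    const EXAMPLE_SIZE = 16;

    struct example_args {
        opaque cookie[EXAMPLE_SIZE];
        unsigned int count;
    };
    """

    parse_tree = xdr_parser().parse(example)    # Lark parse tree
    ast = transform_parse_tree(parse_tree)      # Specification of Definitions
    assert isinstance(ast, Specification)
    for definition in ast.definitions:
        print(type(definition.value).__name__)  # e.g. _XdrConstant, _XdrStruct
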
diff --git a/tools/net/sunrpc/xdrgen/xdrgen b/tools/net/sunrpc/xdrgen/xdrgen
new file mode 100755
index 000000000000..43762be39252
--- /dev/null
+++ b/tools/net/sunrpc/xdrgen/xdrgen
@@ -0,0 +1,134 @@
+#!/usr/bin/env python3
+# ex: set filetype=python:
+
+"""Translate an XDR specification into executable code that
+can be compiled for the Linux kernel."""
+
+__author__ = "Chuck Lever"
+__copyright__ = "Copyright (c) 2024 Oracle and/or its affiliates."
+__license__ = "GPL-2.0 only"
+__version__ = "0.2"
+
+import sys
+import argparse
+
+from subcmds import definitions
+from subcmds import declarations
+from subcmds import lint
+from subcmds import source
+
+
+sys.path.insert(1, "@pythondir@")
+
+
+def main() -> int:
+ """Parse command-line options"""
+ parser = argparse.ArgumentParser(
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ description="Convert an XDR specification to Linux kernel source code",
+ epilog="""\
+Copyright (c) 2024 Oracle and/or its affiliates.
+
+License GPLv2: <http://www.gnu.org/licenses/old-licenses/gpl-2.0.txt>
+This is free software. You are free to change and redistribute it.
+There is NO WARRANTY, to the extent permitted by law.""",
+ )
+ parser.add_argument(
+ "--version",
+ help="Display the version of this tool",
+ action="version",
+ version=__version__,
+ )
+
+ subcommands = parser.add_subparsers(title="Subcommands", required=True)
+
+ definitions_parser = subcommands.add_parser(
+ "definitions", help="Generate XDR definitions"
+ )
+ definitions_parser.add_argument(
+ "--annotate",
+ action="store_true",
+ default=False,
+ help="Add annotation comments",
+ )
+ definitions_parser.add_argument(
+ "--language",
+ action="store_true",
+ default="C",
+ help="Output language",
+ )
+ definitions_parser.add_argument(
+ "--peer",
+ choices=["server", "client",],
+ default="server",
+ help="Generate header code for client or server side",
+ type=str,
+ )
+ definitions_parser.add_argument("filename", help="File containing an XDR specification")
+ definitions_parser.set_defaults(func=definitions.subcmd)
+
+ declarations_parser = subcommands.add_parser(
+ "declarations", help="Generate function declarations"
+ )
+ declarations_parser.add_argument(
+ "--annotate",
+ action="store_true",
+ default=False,
+ help="Add annotation comments",
+ )
+ declarations_parser.add_argument(
+ "--language",
+ action="store_true",
+ default="C",
+ help="Output language",
+ )
+ declarations_parser.add_argument(
+ "--peer",
+ choices=["server", "client",],
+ default="server",
+ help="Generate code for client or server side",
+ type=str,
+ )
+ declarations_parser.add_argument("filename", help="File containing an XDR specification")
+ declarations_parser.set_defaults(func=declarations.subcmd)
+
+ linter_parser = subcommands.add_parser("lint", help="Check an XDR specification")
+ linter_parser.add_argument("filename", help="File containing an XDR specification")
+ linter_parser.set_defaults(func=lint.subcmd)
+
+ source_parser = subcommands.add_parser(
+ "source", help="Generate XDR encoder and decoder source code"
+ )
+ source_parser.add_argument(
+ "--annotate",
+ action="store_true",
+ default=False,
+ help="Add annotation comments",
+ )
+ source_parser.add_argument(
+ "--language",
+ action="store_true",
+ default="C",
+ help="Output language",
+ )
+ source_parser.add_argument(
+ "--peer",
+ choices=["server", "client",],
+ default="server",
+ help="Generate code for client or server side",
+ type=str,
+ )
+ source_parser.add_argument("filename", help="File containing an XDR specification")
+ source_parser.set_defaults(func=source.subcmd)
+
+ args = parser.parse_args()
+ return args.func(args)
+
+
+try:
+ if __name__ == "__main__":
+ sys.exit(main())
+except SystemExit:
+ sys.exit(0)
+except (KeyboardInterrupt, BrokenPipeError):
+ sys.exit(1)
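
Sub-command dispatch in the script above hinges on argparse's set_defaults(): every
sub-parser stores its handler as the 'func' default, so main() ends with a single
args.func(args) call. A self-contained sketch of just that pattern, with a placeholder
handler standing in for the real subcmds modules:

    import argparse
    import sys

    def lint_subcmd(args: argparse.Namespace) -> int:
        # Placeholder; the real handler would parse args.filename.
        print(f"would lint {args.filename}")
        return 0

    def main() -> int:
        parser = argparse.ArgumentParser(description="dispatch sketch")
        subcommands = parser.add_subparsers(title="Subcommands", required=True)

        lint_parser = subcommands.add_parser("lint", help="Check a file")
        lint_parser.add_argument("filename", help="File to check")
        lint_parser.set_defaults(func=lint_subcmd)

        args = parser.parse_args()
        return args.func(args)    # dispatch to whichever handler was stored

    if __name__ == "__main__":
        sys.exit(main())
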
diff --git a/tools/net/ynl/Makefile b/tools/net/ynl/Makefile
index da1aa10bbcc3..211df5a93ad9 100644
--- a/tools/net/ynl/Makefile
+++ b/tools/net/ynl/Makefile
@@ -1,21 +1,52 @@
# SPDX-License-Identifier: GPL-2.0
+include ../../scripts/Makefile.arch
+
+INSTALL ?= install
+prefix ?= /usr
+ifeq ($(LP64), 1)
+ libdir_relative = lib64
+else
+ libdir_relative = lib
+endif
+libdir ?= $(prefix)/$(libdir_relative)
+includedir ?= $(prefix)/include
+
SUBDIRS = lib generated samples
-all: $(SUBDIRS)
+all: $(SUBDIRS) libynl.a
samples: | lib generated
+libynl.a: | lib generated
+ @echo -e "\tAR $@"
+ @ar rcs $@ lib/ynl.o generated/*-user.o
$(SUBDIRS):
@if [ -f "$@/Makefile" ] ; then \
$(MAKE) -C $@ ; \
fi
-clean hardclean:
+clean distclean:
@for dir in $(SUBDIRS) ; do \
if [ -f "$$dir/Makefile" ] ; then \
$(MAKE) -C $$dir $@; \
fi \
done
+ rm -f libynl.a
+ rm -rf pyynl/__pycache__
+ rm -rf pyynl/lib/__pycache__
+ rm -rf pyynl.egg-info
+ rm -rf build
+
+install: libynl.a lib/*.h
+ @echo -e "\tINSTALL libynl.a"
+ @$(INSTALL) -d $(DESTDIR)$(libdir)
+ @$(INSTALL) -m 0644 libynl.a $(DESTDIR)$(libdir)/libynl.a
+ @echo -e "\tINSTALL libynl headers"
+ @$(INSTALL) -d $(DESTDIR)$(includedir)/ynl
+ @$(INSTALL) -m 0644 lib/*.h $(DESTDIR)$(includedir)/ynl/
+ @echo -e "\tINSTALL pyynl"
+ @pip install --prefix=$(DESTDIR)$(prefix) .
+ @make -C generated install
-.PHONY: clean all $(SUBDIRS)
+.PHONY: all clean distclean install $(SUBDIRS)
diff --git a/tools/net/ynl/Makefile.deps b/tools/net/ynl/Makefile.deps
index 3110f84dd029..865fd2e8519e 100644
--- a/tools/net/ynl/Makefile.deps
+++ b/tools/net/ynl/Makefile.deps
@@ -15,7 +15,36 @@ UAPI_PATH:=../../../../include/uapi/
get_hdr_inc=-D$(1) -include $(UAPI_PATH)/linux/$(2)
CFLAGS_devlink:=$(call get_hdr_inc,_LINUX_DEVLINK_H_,devlink.h)
-CFLAGS_ethtool:=$(call get_hdr_inc,_LINUX_ETHTOOL_NETLINK_H_,ethtool_netlink.h)
+CFLAGS_dpll:=$(call get_hdr_inc,_LINUX_DPLL_H,dpll.h)
+CFLAGS_ethtool:=$(call get_hdr_inc,_LINUX_ETHTOOL_H,ethtool.h) \
+ $(call get_hdr_inc,_LINUX_ETHTOOL_NETLINK_H_,ethtool_netlink.h) \
+ $(call get_hdr_inc,_LINUX_ETHTOOL_NETLINK_GENERATED_H,ethtool_netlink_generated.h)
CFLAGS_handshake:=$(call get_hdr_inc,_LINUX_HANDSHAKE_H,handshake.h)
+CFLAGS_lockd_netlink:=$(call get_hdr_inc,_LINUX_LOCKD_NETLINK_H,lockd_netlink.h)
+CFLAGS_mptcp_pm:=$(call get_hdr_inc,_LINUX_MPTCP_PM_H,mptcp_pm.h)
+CFLAGS_net_shaper:=$(call get_hdr_inc,_LINUX_NET_SHAPER_H,net_shaper.h)
CFLAGS_netdev:=$(call get_hdr_inc,_LINUX_NETDEV_H,netdev.h)
+CFLAGS_nl80211:=$(call get_hdr_inc,__LINUX_NL802121_H,nl80211.h)
+CFLAGS_nlctrl:=$(call get_hdr_inc,__LINUX_GENERIC_NETLINK_H,genetlink.h)
CFLAGS_nfsd:=$(call get_hdr_inc,_LINUX_NFSD_NETLINK_H,nfsd_netlink.h)
+CFLAGS_ovpn:=$(call get_hdr_inc,_LINUX_OVPN_H,ovpn.h)
+CFLAGS_ovs_datapath:=$(call get_hdr_inc,__LINUX_OPENVSWITCH_H,openvswitch.h)
+CFLAGS_ovs_flow:=$(call get_hdr_inc,__LINUX_OPENVSWITCH_H,openvswitch.h)
+CFLAGS_ovs_vport:=$(call get_hdr_inc,__LINUX_OPENVSWITCH_H,openvswitch.h)
+CFLAGS_psp:=$(call get_hdr_inc,_LINUX_PSP_H,psp.h)
+CFLAGS_rt-addr:=$(call get_hdr_inc,__LINUX_RTNETLINK_H,rtnetlink.h) \
+ $(call get_hdr_inc,__LINUX_IF_ADDR_H,if_addr.h)
+CFLAGS_rt-link:=$(call get_hdr_inc,__LINUX_RTNETLINK_H,rtnetlink.h) \
+ $(call get_hdr_inc,_LINUX_IF_LINK_H,if_link.h)
+CFLAGS_rt-neigh:=$(call get_hdr_inc,__LINUX_RTNETLINK_H,rtnetlink.h) \
+ $(call get_hdr_inc,__LINUX_NEIGHBOUR_H,neighbour.h)
+CFLAGS_rt-route:=$(call get_hdr_inc,__LINUX_RTNETLINK_H,rtnetlink.h)
+CFLAGS_rt-rule:=$(call get_hdr_inc,__LINUX_FIB_RULES_H,fib_rules.h)
+CFLAGS_tc:= $(call get_hdr_inc,__LINUX_RTNETLINK_H,rtnetlink.h) \
+ $(call get_hdr_inc,__LINUX_PKT_SCHED_H,pkt_sched.h) \
+ $(call get_hdr_inc,__LINUX_PKT_CLS_H,pkt_cls.h) \
+ $(call get_hdr_inc,_TC_CT_H,tc_act/tc_ct.h) \
+ $(call get_hdr_inc,_TC_MIRRED_H,tc_act/tc_mirred.h) \
+ $(call get_hdr_inc,_TC_SKBEDIT_H,tc_act/tc_skbedit.h) \
+ $(call get_hdr_inc,_TC_TUNNEL_KEY_H,tc_act/tc_tunnel_key.h)
+CFLAGS_tcp_metrics:=$(call get_hdr_inc,_LINUX_TCP_METRICS_H,tcp_metrics.h)
diff --git a/tools/net/ynl/cli.py b/tools/net/ynl/cli.py
deleted file mode 100755
index 2ad9ec0f5545..000000000000
--- a/tools/net/ynl/cli.py
+++ /dev/null
@@ -1,61 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
-
-import argparse
-import json
-import pprint
-import time
-
-from lib import YnlFamily, Netlink
-
-
-def main():
- parser = argparse.ArgumentParser(description='YNL CLI sample')
- parser.add_argument('--spec', dest='spec', type=str, required=True)
- parser.add_argument('--schema', dest='schema', type=str)
- parser.add_argument('--no-schema', action='store_true')
- parser.add_argument('--json', dest='json_text', type=str)
- parser.add_argument('--do', dest='do', type=str)
- parser.add_argument('--dump', dest='dump', type=str)
- parser.add_argument('--sleep', dest='sleep', type=int)
- parser.add_argument('--subscribe', dest='ntf', type=str)
- parser.add_argument('--replace', dest='flags', action='append_const',
- const=Netlink.NLM_F_REPLACE)
- parser.add_argument('--excl', dest='flags', action='append_const',
- const=Netlink.NLM_F_EXCL)
- parser.add_argument('--create', dest='flags', action='append_const',
- const=Netlink.NLM_F_CREATE)
- parser.add_argument('--append', dest='flags', action='append_const',
- const=Netlink.NLM_F_APPEND)
- parser.add_argument('--process-unknown', action=argparse.BooleanOptionalAction)
- args = parser.parse_args()
-
- if args.no_schema:
- args.schema = ''
-
- attrs = {}
- if args.json_text:
- attrs = json.loads(args.json_text)
-
- ynl = YnlFamily(args.spec, args.schema, args.process_unknown)
-
- if args.ntf:
- ynl.ntf_subscribe(args.ntf)
-
- if args.sleep:
- time.sleep(args.sleep)
-
- if args.do:
- reply = ynl.do(args.do, attrs, args.flags)
- pprint.PrettyPrinter().pprint(reply)
- if args.dump:
- reply = ynl.dump(args.dump, attrs)
- pprint.PrettyPrinter().pprint(reply)
-
- if args.ntf:
- ynl.check_ntf()
- pprint.PrettyPrinter().pprint(ynl.async_msg_queue)
-
-
-if __name__ == "__main__":
- main()
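
The deleted script was a thin wrapper around the YnlFamily class it imported from the
(likewise removed) in-tree lib package; a single --do invocation reduced to roughly the
following. This sketch mirrors the deleted code and therefore only runs against the tree
as it was before this change; the spec path and request shown are hypothetical examples:

    from pprint import PrettyPrinter
    from lib import YnlFamily   # package layout being removed in this series

    ynl = YnlFamily("Documentation/netlink/specs/netdev.yaml", None, False)
    reply = ynl.do("dev-get", {"ifindex": 1}, None)   # one request/response
    PrettyPrinter().pprint(reply)
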
diff --git a/tools/net/ynl/generated/.gitignore b/tools/net/ynl/generated/.gitignore
index ade488626d26..859a6fb446e1 100644
--- a/tools/net/ynl/generated/.gitignore
+++ b/tools/net/ynl/generated/.gitignore
@@ -1,2 +1,3 @@
*-user.c
*-user.h
+*.rst
diff --git a/tools/net/ynl/generated/Makefile b/tools/net/ynl/generated/Makefile
index 84cbabdd02a8..86e1e4a959a7 100644
--- a/tools/net/ynl/generated/Makefile
+++ b/tools/net/ynl/generated/Makefile
@@ -1,35 +1,49 @@
# SPDX-License-Identifier: GPL-2.0
CC=gcc
-CFLAGS=-std=gnu11 -O2 -W -Wall -Wextra -Wno-unused-parameter -Wshadow \
+CFLAGS += -std=gnu11 -O2 -W -Wall -Wextra -Wno-unused-parameter -Wshadow \
-I../lib/ -idirafter $(UAPI_PATH)
ifeq ("$(DEBUG)","1")
CFLAGS += -g -fsanitize=address -fsanitize=leak -static-libasan
endif
+INSTALL ?= install
+prefix ?= /usr
+datarootdir ?= $(prefix)/share
+docdir ?= $(datarootdir)/doc
+includedir ?= $(prefix)/include
+
include ../Makefile.deps
YNL_GEN_ARG_ethtool:=--user-header linux/ethtool_netlink.h \
--exclude-op stats-get
-TOOL:=../ynl-gen-c.py
+TOOL:=../pyynl/ynl_gen_c.py
+TOOL_RST:=../pyynl/ynl_gen_rst.py
-GENS:=ethtool devlink handshake fou netdev nfsd
+SPECS_DIR:=../../../../Documentation/netlink/specs
+SPECS_PATHS=$(wildcard $(SPECS_DIR)/*.yaml)
+GENS_UNSUP=conntrack nftables
+GENS=$(filter-out ${GENS_UNSUP},$(patsubst $(SPECS_DIR)/%.yaml,%,${SPECS_PATHS}))
SRCS=$(patsubst %,%-user.c,${GENS})
HDRS=$(patsubst %,%-user.h,${GENS})
OBJS=$(patsubst %,%-user.o,${GENS})
-all: protos.a $(HDRS) $(SRCS) $(KHDRS) $(KSRCS) $(UAPI)
+SPECS_PATHS=$(wildcard $(SPECS_DIR)/*.yaml)
+SPECS=$(patsubst $(SPECS_DIR)/%.yaml,%,${SPECS_PATHS})
+RSTS=$(patsubst %,%.rst,${SPECS})
+
+all: protos.a $(HDRS) $(SRCS) $(KHDRS) $(KSRCS) $(UAPI) $(RSTS)
protos.a: $(OBJS)
@echo -e "\tAR $@"
@ar rcs $@ $(OBJS)
-%-user.h: ../../../../Documentation/netlink/specs/%.yaml $(TOOL)
+%-user.h: $(SPECS_DIR)/%.yaml $(TOOL)
@echo -e "\tGEN $@"
@$(TOOL) --mode user --header --spec $< -o $@ $(YNL_GEN_ARG_$*)
-%-user.c: ../../../../Documentation/netlink/specs/%.yaml $(TOOL)
+%-user.c: $(SPECS_DIR)/%.yaml $(TOOL)
@echo -e "\tGEN $@"
@$(TOOL) --mode user --source --spec $< -o $@ $(YNL_GEN_ARG_$*)
@@ -37,14 +51,37 @@ protos.a: $(OBJS)
@echo -e "\tCC $@"
@$(COMPILE.c) $(CFLAGS_$*) -o $@ $<
+%.rst: $(SPECS_DIR)/%.yaml $(TOOL_RST)
+ @echo -e "\tGEN_RST $@"
+ @$(TOOL_RST) -o $@ -i $<
+
clean:
rm -f *.o
-hardclean: clean
- rm -f *.c *.h *.a
+distclean: clean
+ rm -f *.c *.h *.a *.rst
regen:
@../ynl-regen.sh
-.PHONY: all clean hardclean regen
+install-headers: $(HDRS)
+ @echo -e "\tINSTALL generated headers"
+ @$(INSTALL) -d $(DESTDIR)$(includedir)/ynl
+ @$(INSTALL) -m 0644 *.h $(DESTDIR)$(includedir)/ynl/
+
+install-rsts: $(RSTS)
+ @echo -e "\tINSTALL generated docs"
+ @$(INSTALL) -d $(DESTDIR)$(docdir)/ynl
+ @$(INSTALL) -m 0644 $(RSTS) $(DESTDIR)$(docdir)/ynl/
+
+install-specs:
+ @echo -e "\tINSTALL specs"
+ @$(INSTALL) -d $(DESTDIR)$(datarootdir)/ynl
+ @$(INSTALL) -m 0644 ../../../../Documentation/netlink/*.yaml $(DESTDIR)$(datarootdir)/ynl/
+ @$(INSTALL) -d $(DESTDIR)$(datarootdir)/ynl/specs
+ @$(INSTALL) -m 0644 $(SPECS_DIR)/*.yaml $(DESTDIR)$(datarootdir)/ynl/specs/
+
+install: install-headers install-rsts install-specs
+
+.PHONY: all clean distclean regen install install-headers install-rsts install-specs
.DEFAULT_GOAL: all
diff --git a/tools/net/ynl/lib/.gitignore b/tools/net/ynl/lib/.gitignore
index c18dd8d83cee..a4383358ec72 100644
--- a/tools/net/ynl/lib/.gitignore
+++ b/tools/net/ynl/lib/.gitignore
@@ -1 +1 @@
-__pycache__/
+*.d
diff --git a/tools/net/ynl/lib/Makefile b/tools/net/ynl/lib/Makefile
index d2e50fd0a52d..4b2b98704ff9 100644
--- a/tools/net/ynl/lib/Makefile
+++ b/tools/net/ynl/lib/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
CC=gcc
-CFLAGS=-std=gnu11 -O2 -W -Wall -Wextra -Wno-unused-parameter -Wshadow
+CFLAGS += -std=gnu11 -O2 -W -Wall -Wextra -Wno-unused-parameter -Wshadow
ifeq ("$(DEBUG)","1")
CFLAGS += -g -fsanitize=address -fsanitize=leak -static-libasan
endif
@@ -14,15 +14,17 @@ include $(wildcard *.d)
all: ynl.a
ynl.a: $(OBJS)
- ar rcs $@ $(OBJS)
+ @echo -e "\tAR $@"
+ @ar rcs $@ $(OBJS)
+
clean:
rm -f *.o *.d *~
-hardclean: clean
+distclean: clean
rm -f *.a
%.o: %.c
$(COMPILE.c) -MMD -c -o $@ $<
-.PHONY: all clean
+.PHONY: all clean distclean
.DEFAULT_GOAL=all
diff --git a/tools/net/ynl/lib/__init__.py b/tools/net/ynl/lib/__init__.py
deleted file mode 100644
index f7eaa07783e7..000000000000
--- a/tools/net/ynl/lib/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
-
-from .nlspec import SpecAttr, SpecAttrSet, SpecEnumEntry, SpecEnumSet, \
- SpecFamily, SpecOperation
-from .ynl import YnlFamily, Netlink
-
-__all__ = ["SpecAttr", "SpecAttrSet", "SpecEnumEntry", "SpecEnumSet",
- "SpecFamily", "SpecOperation", "YnlFamily", "Netlink"]
diff --git a/tools/net/ynl/lib/ynl-priv.h b/tools/net/ynl/lib/ynl-priv.h
index 7491da8e7555..29481989ea76 100644
--- a/tools/net/ynl/lib/ynl-priv.h
+++ b/tools/net/ynl/lib/ynl-priv.h
@@ -2,16 +2,16 @@
#ifndef __YNL_C_PRIV_H
#define __YNL_C_PRIV_H 1
+#include <stdbool.h>
#include <stddef.h>
-#include <libmnl/libmnl.h>
#include <linux/types.h>
+struct ynl_parse_arg;
+
/*
* YNL internals / low level stuff
*/
-/* Generic mnl helper code */
-
enum ynl_policy_type {
YNL_PT_REJECT = 1,
YNL_PT_IGNORE,
@@ -25,23 +25,41 @@ enum ynl_policy_type {
YNL_PT_UINT,
YNL_PT_NUL_STR,
YNL_PT_BITFIELD32,
+ YNL_PT_SUBMSG,
+};
+
+enum ynl_parse_result {
+ YNL_PARSE_CB_ERROR = -1,
+ YNL_PARSE_CB_STOP = 0,
+ YNL_PARSE_CB_OK = 1,
};
+#define YNL_SOCKET_BUFFER_SIZE (1 << 17)
+
+#define YNL_ARRAY_SIZE(array) (sizeof(array) ? \
+ sizeof(array) / sizeof(array[0]) : 0)
+
+typedef int (*ynl_parse_cb_t)(const struct nlmsghdr *nlh,
+ struct ynl_parse_arg *yarg);
+
struct ynl_policy_attr {
- enum ynl_policy_type type;
+ enum ynl_policy_type type:8;
+ __u8 is_submsg:1;
+ __u8 is_selector:1;
+ __u16 selector_type;
unsigned int len;
const char *name;
- struct ynl_policy_nest *nest;
+ const struct ynl_policy_nest *nest;
};
struct ynl_policy_nest {
unsigned int max_attr;
- struct ynl_policy_attr *table;
+ const struct ynl_policy_attr *table;
};
struct ynl_parse_arg {
struct ynl_sock *ys;
- struct ynl_policy_nest *rsp_policy;
+ const struct ynl_policy_nest *rsp_policy;
void *data;
};
@@ -65,7 +83,7 @@ static inline void *ynl_dump_obj_next(void *obj)
struct ynl_dump_list_type *list;
uptr -= offsetof(struct ynl_dump_list_type, data);
- list = (void *)uptr;
+ list = (struct ynl_dump_list_type *)uptr;
uptr = (unsigned long)list->next;
uptr += offsetof(struct ynl_dump_list_type, data);
@@ -80,39 +98,37 @@ struct ynl_ntf_base_type {
unsigned char data[] __attribute__((aligned(8)));
};
-extern mnl_cb_t ynl_cb_array[NLMSG_MIN_TYPE];
+struct nlmsghdr *ynl_msg_start_req(struct ynl_sock *ys, __u32 id, __u16 flags);
+struct nlmsghdr *ynl_msg_start_dump(struct ynl_sock *ys, __u32 id);
struct nlmsghdr *
ynl_gemsg_start_req(struct ynl_sock *ys, __u32 id, __u8 cmd, __u8 version);
struct nlmsghdr *
ynl_gemsg_start_dump(struct ynl_sock *ys, __u32 id, __u8 cmd, __u8 version);
-int ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr);
-
-int ynl_recv_ack(struct ynl_sock *ys, int ret);
-int ynl_cb_null(const struct nlmsghdr *nlh, void *data);
+int ynl_submsg_failed(struct ynl_parse_arg *yarg, const char *field_name,
+ const char *sel_name);
/* YNL specific helpers used by the auto-generated code */
struct ynl_req_state {
struct ynl_parse_arg yarg;
- mnl_cb_t cb;
+ ynl_parse_cb_t cb;
__u32 rsp_cmd;
};
struct ynl_dump_state {
- struct ynl_sock *ys;
- struct ynl_policy_nest *rsp_policy;
+ struct ynl_parse_arg yarg;
void *first;
struct ynl_dump_list_type *last;
size_t alloc_sz;
- mnl_cb_t cb;
+ ynl_parse_cb_t cb;
__u32 rsp_cmd;
};
struct ynl_ntf_info {
- struct ynl_policy_nest *policy;
- mnl_cb_t cb;
+ const struct ynl_policy_nest *policy;
+ ynl_parse_cb_t cb;
size_t alloc_sz;
void (*free)(struct ynl_ntf_base_type *ntf);
};
@@ -125,20 +141,338 @@ int ynl_exec_dump(struct ynl_sock *ys, struct nlmsghdr *req_nlh,
void ynl_error_unknown_notification(struct ynl_sock *ys, __u8 cmd);
int ynl_error_parse(struct ynl_parse_arg *yarg, const char *msg);
-#ifndef MNL_HAS_AUTO_SCALARS
-static inline uint64_t mnl_attr_get_uint(const struct nlattr *attr)
+/* Netlink message handling helpers */
+
+#define YNL_MSG_OVERFLOW 1
+
+static inline struct nlmsghdr *ynl_nlmsg_put_header(void *buf)
+{
+ struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
+
+ memset(nlh, 0, sizeof(*nlh));
+ nlh->nlmsg_len = NLMSG_HDRLEN;
+
+ return nlh;
+}
+
+static inline unsigned int ynl_nlmsg_data_len(const struct nlmsghdr *nlh)
+{
+ return nlh->nlmsg_len - NLMSG_HDRLEN;
+}
+
+static inline void *ynl_nlmsg_data(const struct nlmsghdr *nlh)
+{
+ return (unsigned char *)nlh + NLMSG_HDRLEN;
+}
+
+static inline void *
+ynl_nlmsg_data_offset(const struct nlmsghdr *nlh, unsigned int offset)
+{
+ return (unsigned char *)nlh + NLMSG_HDRLEN + offset;
+}
+
+static inline void *ynl_nlmsg_end_addr(const struct nlmsghdr *nlh)
+{
+ return (char *)nlh + nlh->nlmsg_len;
+}
+
+static inline void *
+ynl_nlmsg_put_extra_header(struct nlmsghdr *nlh, unsigned int size)
+{
+ void *tail = ynl_nlmsg_end_addr(nlh);
+
+ nlh->nlmsg_len += NLMSG_ALIGN(size);
+ return tail;
+}
+
+/* Netlink attribute helpers */
+
+static inline unsigned int ynl_attr_type(const struct nlattr *attr)
+{
+ return attr->nla_type & NLA_TYPE_MASK;
+}
+
+static inline unsigned int ynl_attr_data_len(const struct nlattr *attr)
+{
+ return attr->nla_len - NLA_HDRLEN;
+}
+
+static inline void *ynl_attr_data(const struct nlattr *attr)
+{
+ return (unsigned char *)attr + NLA_HDRLEN;
+}
+
+static inline void *ynl_attr_data_end(const struct nlattr *attr)
{
- if (mnl_attr_get_payload_len(attr) == 4)
- return mnl_attr_get_u32(attr);
- return mnl_attr_get_u64(attr);
+ return (char *)ynl_attr_data(attr) + ynl_attr_data_len(attr);
+}
+
+#define ynl_attr_for_each(attr, nlh, fixed_hdr_sz) \
+ for ((attr) = ynl_attr_first(nlh, (nlh)->nlmsg_len, \
+ NLMSG_HDRLEN + fixed_hdr_sz); attr; \
+ (attr) = ynl_attr_next(ynl_nlmsg_end_addr(nlh), attr))
+
+#define ynl_attr_for_each_nested_off(attr, outer, offset) \
+ for ((attr) = ynl_attr_first(outer, outer->nla_len, \
+ sizeof(struct nlattr) + offset); \
+ attr; \
+ (attr) = ynl_attr_next(ynl_attr_data_end(outer), attr))
+
+#define ynl_attr_for_each_nested(attr, outer) \
+ ynl_attr_for_each_nested_off(attr, outer, 0)
+
+#define ynl_attr_for_each_payload(start, len, attr) \
+ for ((attr) = ynl_attr_first(start, len, 0); attr; \
+ (attr) = ynl_attr_next(start + len, attr))
+
+static inline struct nlattr *
+ynl_attr_if_good(const void *end, struct nlattr *attr)
+{
+ if (attr + 1 > (const struct nlattr *)end)
+ return NULL;
+ if (ynl_attr_data_end(attr) > end)
+ return NULL;
+ return attr;
+}
+
+static inline struct nlattr *
+ynl_attr_next(const void *end, const struct nlattr *prev)
+{
+ struct nlattr *attr;
+
+ attr = (struct nlattr *)((char *)prev + NLA_ALIGN(prev->nla_len));
+ return ynl_attr_if_good(end, attr);
+}
+
+static inline struct nlattr *
+ynl_attr_first(const void *start, size_t len, size_t skip)
+{
+ struct nlattr *attr;
+
+ attr = (struct nlattr *)((char *)start + NLMSG_ALIGN(skip));
+ return ynl_attr_if_good((char *)start + len, attr);
+}
+
+static inline bool
+__ynl_attr_put_overflow(struct nlmsghdr *nlh, size_t size)
+{
+ bool o;
+
+ /* ynl_msg_start() stashed buffer length in nlmsg_pid. */
+ o = nlh->nlmsg_len + NLA_HDRLEN + NLMSG_ALIGN(size) > nlh->nlmsg_pid;
+ if (o)
+ /* YNL_MSG_OVERFLOW is < NLMSG_HDRLEN, all subsequent checks
+ * are guaranteed to fail.
+ */
+ nlh->nlmsg_pid = YNL_MSG_OVERFLOW;
+ return o;
+}
+
+static inline struct nlattr *
+ynl_attr_nest_start(struct nlmsghdr *nlh, unsigned int attr_type)
+{
+ struct nlattr *attr;
+
+ if (__ynl_attr_put_overflow(nlh, 0))
+ return (struct nlattr *)ynl_nlmsg_end_addr(nlh) - 1;
+
+ attr = (struct nlattr *)ynl_nlmsg_end_addr(nlh);
+ attr->nla_type = attr_type | NLA_F_NESTED;
+ nlh->nlmsg_len += NLA_HDRLEN;
+
+ return attr;
}
static inline void
-mnl_attr_put_uint(struct nlmsghdr *nlh, uint16_t type, uint64_t data)
+ynl_attr_nest_end(struct nlmsghdr *nlh, struct nlattr *attr)
{
- if ((uint32_t)data == (uint64_t)data)
- return mnl_attr_put_u32(nlh, type, data);
- return mnl_attr_put_u64(nlh, type, data);
+ attr->nla_len = (char *)ynl_nlmsg_end_addr(nlh) - (char *)attr;
+}
+
+static inline void
+ynl_attr_put(struct nlmsghdr *nlh, unsigned int attr_type,
+ const void *value, size_t size)
+{
+ struct nlattr *attr;
+
+ if (__ynl_attr_put_overflow(nlh, size))
+ return;
+
+ attr = (struct nlattr *)ynl_nlmsg_end_addr(nlh);
+ attr->nla_type = attr_type;
+ attr->nla_len = NLA_HDRLEN + size;
+
+ memcpy(ynl_attr_data(attr), value, size);
+
+ nlh->nlmsg_len += NLMSG_ALIGN(attr->nla_len);
+}
+
+static inline void
+ynl_attr_put_str(struct nlmsghdr *nlh, unsigned int attr_type, const char *str)
+{
+ struct nlattr *attr;
+ size_t len;
+
+ len = strlen(str);
+ if (__ynl_attr_put_overflow(nlh, len))
+ return;
+
+ attr = (struct nlattr *)ynl_nlmsg_end_addr(nlh);
+ attr->nla_type = attr_type;
+
+ strcpy((char *)ynl_attr_data(attr), str);
+ attr->nla_len = NLA_HDRLEN + NLA_ALIGN(len);
+
+ nlh->nlmsg_len += NLMSG_ALIGN(attr->nla_len);
+}
+
+static inline const char *ynl_attr_get_str(const struct nlattr *attr)
+{
+ return (const char *)ynl_attr_data(attr);
+}
+
+static inline __s8 ynl_attr_get_s8(const struct nlattr *attr)
+{
+ return *(__s8 *)ynl_attr_data(attr);
+}
+
+static inline __s16 ynl_attr_get_s16(const struct nlattr *attr)
+{
+ return *(__s16 *)ynl_attr_data(attr);
+}
+
+static inline __s32 ynl_attr_get_s32(const struct nlattr *attr)
+{
+ return *(__s32 *)ynl_attr_data(attr);
+}
+
+static inline __s64 ynl_attr_get_s64(const struct nlattr *attr)
+{
+ __s64 tmp;
+
+ memcpy(&tmp, (unsigned char *)(attr + 1), sizeof(tmp));
+ return tmp;
+}
+
+static inline __u8 ynl_attr_get_u8(const struct nlattr *attr)
+{
+ return *(__u8 *)ynl_attr_data(attr);
+}
+
+static inline __u16 ynl_attr_get_u16(const struct nlattr *attr)
+{
+ return *(__u16 *)ynl_attr_data(attr);
+}
+
+static inline __u32 ynl_attr_get_u32(const struct nlattr *attr)
+{
+ return *(__u32 *)ynl_attr_data(attr);
+}
+
+static inline __u64 ynl_attr_get_u64(const struct nlattr *attr)
+{
+ __u64 tmp;
+
+ memcpy(&tmp, (unsigned char *)(attr + 1), sizeof(tmp));
+ return tmp;
+}
+
+static inline void
+ynl_attr_put_s8(struct nlmsghdr *nlh, unsigned int attr_type, __s8 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline void
+ynl_attr_put_s16(struct nlmsghdr *nlh, unsigned int attr_type, __s16 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline void
+ynl_attr_put_s32(struct nlmsghdr *nlh, unsigned int attr_type, __s32 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline void
+ynl_attr_put_s64(struct nlmsghdr *nlh, unsigned int attr_type, __s64 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline void
+ynl_attr_put_u8(struct nlmsghdr *nlh, unsigned int attr_type, __u8 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline void
+ynl_attr_put_u16(struct nlmsghdr *nlh, unsigned int attr_type, __u16 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline void
+ynl_attr_put_u32(struct nlmsghdr *nlh, unsigned int attr_type, __u32 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline void
+ynl_attr_put_u64(struct nlmsghdr *nlh, unsigned int attr_type, __u64 value)
+{
+ ynl_attr_put(nlh, attr_type, &value, sizeof(value));
+}
+
+static inline __u64 ynl_attr_get_uint(const struct nlattr *attr)
+{
+ switch (ynl_attr_data_len(attr)) {
+ case 4:
+ return ynl_attr_get_u32(attr);
+ case 8:
+ return ynl_attr_get_u64(attr);
+ default:
+ return 0;
+ }
+}
+
+static inline __s64 ynl_attr_get_sint(const struct nlattr *attr)
+{
+ switch (ynl_attr_data_len(attr)) {
+ case 4:
+ return ynl_attr_get_s32(attr);
+ case 8:
+ return ynl_attr_get_s64(attr);
+ default:
+ return 0;
+ }
+}
+
+static inline void
+ynl_attr_put_uint(struct nlmsghdr *nlh, __u16 type, __u64 data)
+{
+ if ((__u32)data == (__u64)data)
+ ynl_attr_put_u32(nlh, type, data);
+ else
+ ynl_attr_put_u64(nlh, type, data);
+}
+
+static inline void
+ynl_attr_put_sint(struct nlmsghdr *nlh, __u16 type, __s64 data)
+{
+ if ((__s32)data == (__s64)data)
+ ynl_attr_put_s32(nlh, type, data);
+ else
+ ynl_attr_put_s64(nlh, type, data);
+}
+
+int __ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr,
+ unsigned int type);
+
+static inline int ynl_attr_validate(struct ynl_parse_arg *yarg,
+ const struct nlattr *attr)
+{
+ return __ynl_attr_validate(yarg, attr, ynl_attr_type(attr));
}
-#endif
#endif
diff --git a/tools/net/ynl/lib/ynl.c b/tools/net/ynl/lib/ynl.c
index c82a7f41b31c..2bcd781111d7 100644
--- a/tools/net/ynl/lib/ynl.c
+++ b/tools/net/ynl/lib/ynl.c
@@ -3,10 +3,11 @@
#include <poll.h>
#include <string.h>
#include <stdlib.h>
+#include <stdio.h>
+#include <unistd.h>
#include <linux/types.h>
-
-#include <libmnl/libmnl.h>
#include <linux/genetlink.h>
+#include <sys/socket.h>
#include "ynl.h"
@@ -44,8 +45,39 @@
#define perr(_ys, _msg) __yerr(&(_ys)->err, errno, _msg)
/* -- Netlink boiler plate */
+static bool
+ynl_err_walk_is_sel(const struct ynl_policy_nest *policy,
+ const struct nlattr *attr)
+{
+ unsigned int type = ynl_attr_type(attr);
+
+ return policy && type <= policy->max_attr &&
+ policy->table[type].is_selector;
+}
+
+static const struct ynl_policy_nest *
+ynl_err_walk_sel_policy(const struct ynl_policy_attr *policy_attr,
+ const struct nlattr *selector)
+{
+ const struct ynl_policy_nest *policy = policy_attr->nest;
+ const char *sel;
+ unsigned int i;
+
+ if (!policy_attr->is_submsg)
+ return policy;
+
+ sel = ynl_attr_get_str(selector);
+ for (i = 0; i <= policy->max_attr; i++) {
+ if (!strcmp(sel, policy->table[i].name))
+ return policy->table[i].nest;
+ }
+
+ return NULL;
+}
+
static int
-ynl_err_walk_report_one(struct ynl_policy_nest *policy, unsigned int type,
+ynl_err_walk_report_one(const struct ynl_policy_nest *policy,
+ const struct nlattr *selector, unsigned int type,
char *str, int str_sz, int *n)
{
if (!policy) {
@@ -66,17 +98,44 @@ ynl_err_walk_report_one(struct ynl_policy_nest *policy, unsigned int type,
return 1;
}
- if (*n < str_sz)
- *n += snprintf(str, str_sz - *n,
- ".%s", policy->table[type].name);
+ if (*n < str_sz) {
+ int sz;
+
+ sz = snprintf(str, str_sz - *n,
+ ".%s", policy->table[type].name);
+ *n += sz;
+ str += sz;
+ }
+
+ if (policy->table[type].is_submsg) {
+ if (!selector) {
+ if (*n < str_sz)
+ *n += snprintf(str, str_sz, "(!selector)");
+ return 1;
+ }
+
+ if (ynl_attr_type(selector) !=
+ policy->table[type].selector_type) {
+ if (*n < str_sz)
+ *n += snprintf(str, str_sz, "(!=selector)");
+ return 1;
+ }
+
+ if (*n < str_sz)
+ *n += snprintf(str, str_sz - *n, "(%s)",
+ ynl_attr_get_str(selector));
+ }
+
return 0;
}
static int
ynl_err_walk(struct ynl_sock *ys, void *start, void *end, unsigned int off,
- struct ynl_policy_nest *policy, char *str, int str_sz,
- struct ynl_policy_nest **nest_pol)
+ const struct ynl_policy_nest *policy, char *str, int str_sz,
+ const struct ynl_policy_nest **nest_pol)
{
+ const struct ynl_policy_nest *next_pol;
+ const struct nlattr *selector = NULL;
unsigned int astart_off, aend_off;
const struct nlattr *attr;
unsigned int data_len;
@@ -92,9 +151,13 @@ ynl_err_walk(struct ynl_sock *ys, void *start, void *end, unsigned int off,
data_len = end - start;
- mnl_attr_for_each_payload(start, data_len) {
+ ynl_attr_for_each_payload(start, data_len, attr) {
astart_off = (char *)attr - (char *)start;
- aend_off = astart_off + mnl_attr_get_payload_len(attr);
+ aend_off = (char *)ynl_attr_data_end(attr) - (char *)start;
+
+ if (ynl_err_walk_is_sel(policy, attr))
+ selector = attr;
+
if (aend_off <= off)
continue;
@@ -106,28 +169,32 @@ ynl_err_walk(struct ynl_sock *ys, void *start, void *end, unsigned int off,
off -= astart_off;
- type = mnl_attr_get_type(attr);
+ type = ynl_attr_type(attr);
- if (ynl_err_walk_report_one(policy, type, str, str_sz, &n))
+ if (ynl_err_walk_report_one(policy, selector, type, str, str_sz, &n))
+ return n;
+
+ next_pol = ynl_err_walk_sel_policy(&policy->table[type], selector);
+ if (!next_pol)
return n;
if (!off) {
if (nest_pol)
- *nest_pol = policy->table[type].nest;
+ *nest_pol = next_pol;
return n;
}
- if (!policy->table[type].nest) {
+ if (!next_pol) {
if (n < str_sz)
n += snprintf(str, str_sz, "!nest");
return n;
}
off -= sizeof(struct nlattr);
- start = mnl_attr_get_payload(attr);
- end = start + mnl_attr_get_payload_len(attr);
+ start = ynl_attr_data(attr);
+ end = start + ynl_attr_data_len(attr);
- return n + ynl_err_walk(ys, start, end, off, policy->table[type].nest,
+ return n + ynl_err_walk(ys, start, end, off, next_pol,
&str[n], str_sz - n, nest_pol);
}
@@ -147,14 +214,14 @@ ynl_ext_ack_check(struct ynl_sock *ys, const struct nlmsghdr *nlh,
if (!(nlh->nlmsg_flags & NLM_F_ACK_TLVS)) {
yerr_msg(ys, "%s", strerror(ys->err.code));
- return MNL_CB_OK;
+ return YNL_PARSE_CB_OK;
}
- mnl_attr_for_each(attr, nlh, hlen) {
+ ynl_attr_for_each(attr, nlh, hlen) {
unsigned int len, type;
- len = mnl_attr_get_payload_len(attr);
- type = mnl_attr_get_type(attr);
+ len = ynl_attr_data_len(attr);
+ type = ynl_attr_type(attr);
if (type > NLMSGERR_ATTR_MAX)
continue;
@@ -166,12 +233,12 @@ ynl_ext_ack_check(struct ynl_sock *ys, const struct nlmsghdr *nlh,
case NLMSGERR_ATTR_MISS_TYPE:
case NLMSGERR_ATTR_MISS_NEST:
if (len != sizeof(__u32))
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
break;
case NLMSGERR_ATTR_MSG:
- str = mnl_attr_get_payload(attr);
+ str = ynl_attr_get_str(attr);
if (str[len - 1])
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
break;
default:
break;
@@ -185,18 +252,17 @@ ynl_ext_ack_check(struct ynl_sock *ys, const struct nlmsghdr *nlh,
unsigned int n, off;
void *start, *end;
- ys->err.attr_offs = mnl_attr_get_u32(tb[NLMSGERR_ATTR_OFFS]);
+ ys->err.attr_offs = ynl_attr_get_u32(tb[NLMSGERR_ATTR_OFFS]);
n = snprintf(bad_attr, sizeof(bad_attr), "%sbad attribute: ",
str ? " (" : "");
- start = mnl_nlmsg_get_payload_offset(ys->nlh,
- ys->family->hdr_len);
- end = mnl_nlmsg_get_payload_tail(ys->nlh);
+ start = ynl_nlmsg_data_offset(ys->nlh, ys->req_hdr_len);
+ end = ynl_nlmsg_end_addr(ys->nlh);
off = ys->err.attr_offs;
off -= sizeof(struct nlmsghdr);
- off -= ys->family->hdr_len;
+ off -= ys->req_hdr_len;
n += ynl_err_walk(ys, start, end, off, ys->req_policy,
&bad_attr[n], sizeof(bad_attr) - n, NULL);
@@ -206,25 +272,24 @@ ynl_ext_ack_check(struct ynl_sock *ys, const struct nlmsghdr *nlh,
bad_attr[n] = '\0';
}
if (tb[NLMSGERR_ATTR_MISS_TYPE]) {
- struct ynl_policy_nest *nest_pol = NULL;
+ const struct ynl_policy_nest *nest_pol = NULL;
unsigned int n, off, type;
void *start, *end;
int n2;
- type = mnl_attr_get_u32(tb[NLMSGERR_ATTR_MISS_TYPE]);
+ type = ynl_attr_get_u32(tb[NLMSGERR_ATTR_MISS_TYPE]);
n = snprintf(miss_attr, sizeof(miss_attr), "%smissing attribute: ",
bad_attr[0] ? ", " : (str ? " (" : ""));
- start = mnl_nlmsg_get_payload_offset(ys->nlh,
- ys->family->hdr_len);
- end = mnl_nlmsg_get_payload_tail(ys->nlh);
+ start = ynl_nlmsg_data_offset(ys->nlh, ys->req_hdr_len);
+ end = ynl_nlmsg_end_addr(ys->nlh);
nest_pol = ys->req_policy;
if (tb[NLMSGERR_ATTR_MISS_NEST]) {
- off = mnl_attr_get_u32(tb[NLMSGERR_ATTR_MISS_NEST]);
+ off = ynl_attr_get_u32(tb[NLMSGERR_ATTR_MISS_NEST]);
off -= sizeof(struct nlmsghdr);
- off -= ys->family->hdr_len;
+ off -= ys->req_hdr_len;
n += ynl_err_walk(ys, start, end, off, ys->req_policy,
&miss_attr[n], sizeof(miss_attr) - n,
@@ -232,7 +297,7 @@ ynl_ext_ack_check(struct ynl_sock *ys, const struct nlmsghdr *nlh,
}
n2 = 0;
- ynl_err_walk_report_one(nest_pol, type, &miss_attr[n],
+ ynl_err_walk_report_one(nest_pol, NULL, type, &miss_attr[n],
sizeof(miss_attr) - n, &n2);
n += n2;
@@ -254,13 +319,13 @@ ynl_ext_ack_check(struct ynl_sock *ys, const struct nlmsghdr *nlh,
else
yerr_msg(ys, "%s", strerror(ys->err.code));
- return MNL_CB_OK;
+ return YNL_PARSE_CB_OK;
}
-static int ynl_cb_error(const struct nlmsghdr *nlh, void *data)
+static int
+ynl_cb_error(const struct nlmsghdr *nlh, struct ynl_parse_arg *yarg)
{
- const struct nlmsgerr *err = mnl_nlmsg_get_payload(nlh);
- struct ynl_parse_arg *yarg = data;
+ const struct nlmsgerr *err = ynl_nlmsg_data(nlh);
unsigned int hlen;
int code;
@@ -270,16 +335,15 @@ static int ynl_cb_error(const struct nlmsghdr *nlh, void *data)
hlen = sizeof(*err);
if (!(nlh->nlmsg_flags & NLM_F_CAPPED))
- hlen += mnl_nlmsg_get_payload_len(&err->msg);
+ hlen += ynl_nlmsg_data_len(&err->msg);
ynl_ext_ack_check(yarg->ys, nlh, hlen);
- return code ? MNL_CB_ERROR : MNL_CB_STOP;
+ return code ? YNL_PARSE_CB_ERROR : YNL_PARSE_CB_STOP;
}
-static int ynl_cb_done(const struct nlmsghdr *nlh, void *data)
+static int ynl_cb_done(const struct nlmsghdr *nlh, struct ynl_parse_arg *yarg)
{
- struct ynl_parse_arg *yarg = data;
int err;
err = *(int *)NLMSG_DATA(nlh);
@@ -289,34 +353,22 @@ static int ynl_cb_done(const struct nlmsghdr *nlh, void *data)
ynl_ext_ack_check(yarg->ys, nlh, sizeof(int));
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
}
- return MNL_CB_STOP;
-}
-
-static int ynl_cb_noop(const struct nlmsghdr *nlh, void *data)
-{
- return MNL_CB_OK;
+ return YNL_PARSE_CB_STOP;
}
-mnl_cb_t ynl_cb_array[NLMSG_MIN_TYPE] = {
- [NLMSG_NOOP] = ynl_cb_noop,
- [NLMSG_ERROR] = ynl_cb_error,
- [NLMSG_DONE] = ynl_cb_done,
- [NLMSG_OVERRUN] = ynl_cb_noop,
-};
-
/* Attribute validation */
-int ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr)
+int __ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr,
+ unsigned int type)
{
- struct ynl_policy_attr *policy;
- unsigned int type, len;
+ const struct ynl_policy_attr *policy;
unsigned char *data;
+ unsigned int len;
- data = mnl_attr_get_payload(attr);
- len = mnl_attr_get_payload_len(attr);
- type = mnl_attr_get_type(attr);
+ data = ynl_attr_data(attr);
+ len = ynl_attr_data_len(attr);
if (type > yarg->rsp_policy->max_attr) {
yerr(yarg->ys, YNL_ERROR_INTERNAL,
"Internal error, validating unknown attribute");
@@ -378,7 +430,7 @@ int ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr)
"Invalid attribute (binary %s)", policy->name);
return -1;
case YNL_PT_NUL_STR:
- if ((!policy->len || len <= policy->len) && !data[len - 1])
+ if (len && (!policy->len || len <= policy->len) && !data[len - 1])
break;
yerr(yarg->ys, YNL_ERROR_ATTR_INVALID,
"Invalid attribute (string %s)", policy->name);
@@ -398,6 +450,15 @@ int ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr)
return 0;
}
+int ynl_submsg_failed(struct ynl_parse_arg *yarg, const char *field_name,
+ const char *sel_name)
+{
+ yerr(yarg->ys, YNL_ERROR_SUBMSG_KEY,
+ "Parsing error: Sub-message key not set (msg %s, key %s)",
+ field_name, sel_name);
+ return YNL_PARSE_CB_ERROR;
+}
+
/* Generic code */
static void ynl_err_reset(struct ynl_sock *ys)
@@ -413,14 +474,38 @@ struct nlmsghdr *ynl_msg_start(struct ynl_sock *ys, __u32 id, __u16 flags)
ynl_err_reset(ys);
- nlh = ys->nlh = mnl_nlmsg_put_header(ys->tx_buf);
+ nlh = ys->nlh = ynl_nlmsg_put_header(ys->tx_buf);
nlh->nlmsg_type = id;
nlh->nlmsg_flags = flags;
nlh->nlmsg_seq = ++ys->seq;
+ /* This is a local YNL hack for length checking, we put the buffer
+ * length in nlmsg_pid, since messages sent to the kernel always use
+ * PID 0. Message needs to be terminated with ynl_msg_end().
+ */
+ nlh->nlmsg_pid = YNL_SOCKET_BUFFER_SIZE;
+
return nlh;
}
+static int ynl_msg_end(struct ynl_sock *ys, struct nlmsghdr *nlh)
+{
+ /* We stash buffer length in nlmsg_pid. */
+ if (nlh->nlmsg_pid == 0) {
+ yerr(ys, YNL_ERROR_INPUT_INVALID,
+ "Unknown input buffer length");
+ return -EINVAL;
+ }
+ if (nlh->nlmsg_pid == YNL_MSG_OVERFLOW) {
+ yerr(ys, YNL_ERROR_INPUT_TOO_BIG,
+ "Constructed message longer than internal buffer");
+ return -EMSGSIZE;
+ }
+
+ nlh->nlmsg_pid = 0;
+ return 0;
+}
+
struct nlmsghdr *
ynl_gemsg_start(struct ynl_sock *ys, __u32 id, __u16 flags,
__u8 cmd, __u8 version)
@@ -435,20 +520,20 @@ ynl_gemsg_start(struct ynl_sock *ys, __u32 id, __u16 flags,
gehdr.cmd = cmd;
gehdr.version = version;
- data = mnl_nlmsg_put_extra_header(nlh, sizeof(gehdr));
+ data = ynl_nlmsg_put_extra_header(nlh, sizeof(gehdr));
memcpy(data, &gehdr, sizeof(gehdr));
return nlh;
}
-void ynl_msg_start_req(struct ynl_sock *ys, __u32 id)
+struct nlmsghdr *ynl_msg_start_req(struct ynl_sock *ys, __u32 id, __u16 flags)
{
- ynl_msg_start(ys, id, NLM_F_REQUEST | NLM_F_ACK);
+ return ynl_msg_start(ys, id, NLM_F_REQUEST | NLM_F_ACK | flags);
}
-void ynl_msg_start_dump(struct ynl_sock *ys, __u32 id)
+struct nlmsghdr *ynl_msg_start_dump(struct ynl_sock *ys, __u32 id)
{
- ynl_msg_start(ys, id, NLM_F_REQUEST | NLM_F_ACK | NLM_F_DUMP);
+ return ynl_msg_start(ys, id, NLM_F_REQUEST | NLM_F_ACK | NLM_F_DUMP);
}
struct nlmsghdr *
@@ -464,31 +549,85 @@ ynl_gemsg_start_dump(struct ynl_sock *ys, __u32 id, __u8 cmd, __u8 version)
cmd, version);
}
-int ynl_recv_ack(struct ynl_sock *ys, int ret)
+static int ynl_cb_null(const struct nlmsghdr *nlh, struct ynl_parse_arg *yarg)
{
- if (!ret) {
- yerr(ys, YNL_ERROR_EXPECT_ACK,
- "Expecting an ACK but nothing received");
- return -1;
+ yerr(yarg->ys, YNL_ERROR_UNEXPECT_MSG,
+ "Received a message when none were expected");
+
+ return YNL_PARSE_CB_ERROR;
+}
+
+static int
+__ynl_sock_read_msgs(struct ynl_parse_arg *yarg, ynl_parse_cb_t cb, int flags)
+{
+ struct ynl_sock *ys = yarg->ys;
+ const struct nlmsghdr *nlh;
+ ssize_t len, rem;
+ int ret;
+
+ len = recv(ys->socket, ys->rx_buf, YNL_SOCKET_BUFFER_SIZE, flags);
+ if (len < 0) {
+ if (flags & MSG_DONTWAIT && errno == EAGAIN)
+ return YNL_PARSE_CB_STOP;
+ return len;
}
- ret = mnl_socket_recvfrom(ys->sock, ys->rx_buf, MNL_SOCKET_BUFFER_SIZE);
- if (ret < 0) {
- perr(ys, "Socket receive failed");
- return ret;
+ ret = YNL_PARSE_CB_STOP;
+ for (rem = len; rem > 0; NLMSG_NEXT(nlh, rem)) {
+ nlh = (struct nlmsghdr *)&ys->rx_buf[len - rem];
+ if (!NLMSG_OK(nlh, rem)) {
+ yerr(yarg->ys, YNL_ERROR_INV_RESP,
+ "Invalid message or trailing data in the response.");
+ return YNL_PARSE_CB_ERROR;
+ }
+
+ if (nlh->nlmsg_flags & NLM_F_DUMP_INTR) {
+ /* TODO: handle this better */
+ yerr(yarg->ys, YNL_ERROR_DUMP_INTER,
+ "Dump interrupted / inconsistent, please retry.");
+ return YNL_PARSE_CB_ERROR;
+ }
+
+ switch (nlh->nlmsg_type) {
+ case 0:
+ yerr(yarg->ys, YNL_ERROR_INV_RESP,
+ "Invalid message type in the response.");
+ return YNL_PARSE_CB_ERROR;
+ case NLMSG_NOOP:
+ case NLMSG_OVERRUN ... NLMSG_MIN_TYPE - 1:
+ ret = YNL_PARSE_CB_OK;
+ break;
+ case NLMSG_ERROR:
+ ret = ynl_cb_error(nlh, yarg);
+ break;
+ case NLMSG_DONE:
+ ret = ynl_cb_done(nlh, yarg);
+ break;
+ default:
+ ret = cb(nlh, yarg);
+ break;
+ }
}
- return mnl_cb_run(ys->rx_buf, ret, ys->seq, ys->portid,
- ynl_cb_null, ys);
+
+ return ret;
}
-int ynl_cb_null(const struct nlmsghdr *nlh, void *data)
+static int ynl_sock_read_msgs(struct ynl_parse_arg *yarg, ynl_parse_cb_t cb)
{
- struct ynl_parse_arg *yarg = data;
+ return __ynl_sock_read_msgs(yarg, cb, 0);
+}
- yerr(yarg->ys, YNL_ERROR_UNEXPECT_MSG,
- "Received a message when none were expected");
+static int ynl_recv_ack(struct ynl_sock *ys, int ret)
+{
+ struct ynl_parse_arg yarg = { .ys = ys, };
+
+ if (!ret) {
+ yerr(ys, YNL_ERROR_EXPECT_ACK,
+ "Expecting an ACK but nothing received");
+ return -1;
+ }
- return MNL_CB_ERROR;
+ return ynl_sock_read_msgs(&yarg, ynl_cb_null);
}
/* Init/fini and genetlink boiler plate */
@@ -498,7 +637,7 @@ ynl_get_family_info_mcast(struct ynl_sock *ys, const struct nlattr *mcasts)
const struct nlattr *entry, *attr;
unsigned int i;
- mnl_attr_for_each_nested(attr, mcasts)
+ ynl_attr_for_each_nested(attr, mcasts)
ys->n_mcast_groups++;
if (!ys->n_mcast_groups)
@@ -507,54 +646,55 @@ ynl_get_family_info_mcast(struct ynl_sock *ys, const struct nlattr *mcasts)
ys->mcast_groups = calloc(ys->n_mcast_groups,
sizeof(*ys->mcast_groups));
if (!ys->mcast_groups)
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
i = 0;
- mnl_attr_for_each_nested(entry, mcasts) {
- mnl_attr_for_each_nested(attr, entry) {
- if (mnl_attr_get_type(attr) == CTRL_ATTR_MCAST_GRP_ID)
- ys->mcast_groups[i].id = mnl_attr_get_u32(attr);
- if (mnl_attr_get_type(attr) == CTRL_ATTR_MCAST_GRP_NAME) {
+ ynl_attr_for_each_nested(entry, mcasts) {
+ ynl_attr_for_each_nested(attr, entry) {
+ if (ynl_attr_type(attr) == CTRL_ATTR_MCAST_GRP_ID)
+ ys->mcast_groups[i].id = ynl_attr_get_u32(attr);
+ if (ynl_attr_type(attr) == CTRL_ATTR_MCAST_GRP_NAME) {
strncpy(ys->mcast_groups[i].name,
- mnl_attr_get_str(attr),
+ ynl_attr_get_str(attr),
GENL_NAMSIZ - 1);
ys->mcast_groups[i].name[GENL_NAMSIZ - 1] = 0;
}
}
+ i++;
}
return 0;
}
-static int ynl_get_family_info_cb(const struct nlmsghdr *nlh, void *data)
+static int
+ynl_get_family_info_cb(const struct nlmsghdr *nlh, struct ynl_parse_arg *yarg)
{
- struct ynl_parse_arg *yarg = data;
struct ynl_sock *ys = yarg->ys;
const struct nlattr *attr;
bool found_id = true;
- mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
- if (mnl_attr_get_type(attr) == CTRL_ATTR_MCAST_GROUPS)
+ ynl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ if (ynl_attr_type(attr) == CTRL_ATTR_MCAST_GROUPS)
if (ynl_get_family_info_mcast(ys, attr))
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
- if (mnl_attr_get_type(attr) != CTRL_ATTR_FAMILY_ID)
+ if (ynl_attr_type(attr) != CTRL_ATTR_FAMILY_ID)
continue;
- if (mnl_attr_get_payload_len(attr) != sizeof(__u16)) {
+ if (ynl_attr_data_len(attr) != sizeof(__u16)) {
yerr(ys, YNL_ERROR_ATTR_INVALID, "Invalid family ID");
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
}
- ys->family_id = mnl_attr_get_u16(attr);
+ ys->family_id = ynl_attr_get_u16(attr);
found_id = true;
}
if (!found_id) {
yerr(ys, YNL_ERROR_ATTR_MISSING, "Family ID missing");
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
}
- return MNL_CB_OK;
+ return YNL_PARSE_CB_OK;
}
static int ynl_sock_read_family(struct ynl_sock *ys, const char *family_name)
@@ -564,68 +704,91 @@ static int ynl_sock_read_family(struct ynl_sock *ys, const char *family_name)
int err;
nlh = ynl_gemsg_start_req(ys, GENL_ID_CTRL, CTRL_CMD_GETFAMILY, 1);
- mnl_attr_put_strz(nlh, CTRL_ATTR_FAMILY_NAME, family_name);
+ ynl_attr_put_str(nlh, CTRL_ATTR_FAMILY_NAME, family_name);
- err = mnl_socket_sendto(ys->sock, nlh, nlh->nlmsg_len);
+ err = ynl_msg_end(ys, nlh);
+ if (err < 0)
+ return err;
+
+ err = send(ys->socket, nlh, nlh->nlmsg_len, 0);
if (err < 0) {
perr(ys, "failed to request socket family info");
return err;
}
- err = mnl_socket_recvfrom(ys->sock, ys->rx_buf, MNL_SOCKET_BUFFER_SIZE);
- if (err <= 0) {
- perr(ys, "failed to receive the socket family info");
+ err = ynl_sock_read_msgs(&yarg, ynl_get_family_info_cb);
+ if (err < 0) {
+ free(ys->mcast_groups);
+ perr(ys, "failed to receive the socket family info - no such family?");
return err;
}
- err = mnl_cb_run2(ys->rx_buf, err, ys->seq, ys->portid,
- ynl_get_family_info_cb, &yarg,
- ynl_cb_array, ARRAY_SIZE(ynl_cb_array));
+
+ err = ynl_recv_ack(ys, err);
if (err < 0) {
free(ys->mcast_groups);
- perr(ys, "failed to receive the socket family info - no such family?");
return err;
}
- return ynl_recv_ack(ys, err);
+ return 0;
}
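
For generic netlink families the numeric family id still has to be resolved by name through nlctrl, which is exactly what ynl_sock_read_family() does above; classic netlink families skip this step and reuse their fixed protocol number. A hedged Python sketch of the same CTRL_CMD_GETFAMILY exchange, using only standard genetlink constants and not taken from the patch:

    import socket
    import struct

    NETLINK_GENERIC, GENL_ID_CTRL = 16, 0x10
    CTRL_CMD_GETFAMILY, CTRL_ATTR_FAMILY_ID, CTRL_ATTR_FAMILY_NAME = 3, 1, 2
    NLM_F_REQUEST, NLM_F_ACK = 1, 4

    def resolve_family_id(name):
        with socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_GENERIC) as s:
            # CTRL_ATTR_FAMILY_NAME attribute: nul-terminated, padded to 4 bytes
            payload = name.encode() + b"\0"
            attr = struct.pack("HH", 4 + len(payload), CTRL_ATTR_FAMILY_NAME) + payload
            attr += b"\0" * ((4 - len(attr) % 4) % 4)
            body = struct.pack("BBH", CTRL_CMD_GETFAMILY, 1, 0) + attr   # genlmsghdr
            nlh = struct.pack("IHHII", 16 + len(body), GENL_ID_CTRL,
                              NLM_F_REQUEST | NLM_F_ACK, 1, 0)
            s.send(nlh + body)

            reply = s.recv(64 * 1024)
            nl_len, nl_type = struct.unpack_from("IH", reply)
            if nl_type != GENL_ID_CTRL:
                raise LookupError(f"no such family: {name}")
            off = 16 + 4                       # skip nlmsghdr + genlmsghdr
            while off < nl_len:
                a_len, a_type = struct.unpack_from("HH", reply, off)
                if a_len < 4:
                    break
                if a_type & 0x3fff == CTRL_ATTR_FAMILY_ID:
                    return struct.unpack_from("H", reply, off + 4)[0]
                off += (a_len + 3) & ~3
            raise LookupError(f"family id attribute missing for {name}")

    # e.g. resolve_family_id('nlctrl') returns 0x10 on any kernel with genetlink
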
struct ynl_sock *
ynl_sock_create(const struct ynl_family *yf, struct ynl_error *yse)
{
+ struct sockaddr_nl addr;
struct ynl_sock *ys;
+ socklen_t addrlen;
+ int sock_type;
int one = 1;
- ys = malloc(sizeof(*ys) + 2 * MNL_SOCKET_BUFFER_SIZE);
+ ys = malloc(sizeof(*ys) + 2 * YNL_SOCKET_BUFFER_SIZE);
if (!ys)
return NULL;
memset(ys, 0, sizeof(*ys));
ys->family = yf;
ys->tx_buf = &ys->raw_buf[0];
- ys->rx_buf = &ys->raw_buf[MNL_SOCKET_BUFFER_SIZE];
+ ys->rx_buf = &ys->raw_buf[YNL_SOCKET_BUFFER_SIZE];
ys->ntf_last_next = &ys->ntf_first;
- ys->sock = mnl_socket_open(NETLINK_GENERIC);
- if (!ys->sock) {
+ sock_type = yf->is_classic ? yf->classic_id : NETLINK_GENERIC;
+
+ ys->socket = socket(AF_NETLINK, SOCK_RAW, sock_type);
+ if (ys->socket < 0) {
__perr(yse, "failed to create a netlink socket");
goto err_free_sock;
}
- if (mnl_socket_setsockopt(ys->sock, NETLINK_CAP_ACK,
- &one, sizeof(one))) {
+ if (setsockopt(ys->socket, SOL_NETLINK, NETLINK_CAP_ACK,
+ &one, sizeof(one))) {
__perr(yse, "failed to enable netlink ACK");
goto err_close_sock;
}
- if (mnl_socket_setsockopt(ys->sock, NETLINK_EXT_ACK,
- &one, sizeof(one))) {
+ if (setsockopt(ys->socket, SOL_NETLINK, NETLINK_EXT_ACK,
+ &one, sizeof(one))) {
__perr(yse, "failed to enable netlink ext ACK");
goto err_close_sock;
}
+ memset(&addr, 0, sizeof(addr));
+ addr.nl_family = AF_NETLINK;
+ if (bind(ys->socket, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
+ __perr(yse, "unable to bind to a socket address");
+ goto err_close_sock;
+ }
+
+ memset(&addr, 0, sizeof(addr));
+ addrlen = sizeof(addr);
+ if (getsockname(ys->socket, (struct sockaddr *)&addr, &addrlen) < 0) {
+ __perr(yse, "unable to read socket address");
+ goto err_close_sock;
+ }
+ ys->portid = addr.nl_pid;
ys->seq = random();
- ys->portid = mnl_socket_get_portid(ys->sock);
- if (ynl_sock_read_family(ys, yf->name)) {
+ if (yf->is_classic) {
+ ys->family_id = yf->classic_id;
+ } else if (ynl_sock_read_family(ys, yf->name)) {
if (yse)
memcpy(yse, &ys->err, sizeof(*yse));
goto err_close_sock;
@@ -634,7 +797,7 @@ ynl_sock_create(const struct ynl_family *yf, struct ynl_error *yse)
return ys;
err_close_sock:
- mnl_socket_close(ys->sock);
+ close(ys->socket);
err_free_sock:
free(ys);
return NULL;
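
With libmnl gone, ynl_sock_create() now performs the socket setup by hand: open a raw AF_NETLINK socket, enable NETLINK_CAP_ACK and NETLINK_EXT_ACK, bind with pid 0, and read the kernel-assigned port id back with getsockname(). The same sequence in Python, shown only as an illustration (the constants are the standard SOL_NETLINK values):

    import socket

    SOL_NETLINK, NETLINK_GENERIC = 270, 16
    NETLINK_CAP_ACK, NETLINK_EXT_ACK = 10, 11

    sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_GENERIC)
    sock.setsockopt(SOL_NETLINK, NETLINK_CAP_ACK, 1)   # trim echoed payload in ACKs
    sock.setsockopt(SOL_NETLINK, NETLINK_EXT_ACK, 1)   # request extended ACK attrs
    sock.bind((0, 0))                                  # pid 0: kernel picks a port id
    portid, _groups = sock.getsockname()               # equivalent of addr.nl_pid
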
@@ -644,7 +807,7 @@ void ynl_sock_destroy(struct ynl_sock *ys)
{
struct ynl_ntf_base_type *ntf;
- mnl_socket_close(ys->sock);
+ close(ys->socket);
while ((ntf = ynl_ntf_dequeue(ys)))
ynl_ntf_free(ntf);
free(ys->mcast_groups);
@@ -671,9 +834,9 @@ int ynl_subscribe(struct ynl_sock *ys, const char *grp_name)
return -1;
}
- err = mnl_socket_setsockopt(ys->sock, NETLINK_ADD_MEMBERSHIP,
- &ys->mcast_groups[i].id,
- sizeof(ys->mcast_groups[i].id));
+ err = setsockopt(ys->socket, SOL_NETLINK, NETLINK_ADD_MEMBERSHIP,
+ &ys->mcast_groups[i].id,
+ sizeof(ys->mcast_groups[i].id));
if (err < 0) {
perr(ys, "Subscribing to multicast group failed");
return -1;
@@ -684,7 +847,7 @@ int ynl_subscribe(struct ynl_sock *ys, const char *grp_name)
int ynl_socket_get_fd(struct ynl_sock *ys)
{
- return mnl_socket_get_fd(ys->sock);
+ return ys->socket;
}
struct ynl_ntf_base_type *ynl_ntf_dequeue(struct ynl_sock *ys)
@@ -707,15 +870,23 @@ static int ynl_ntf_parse(struct ynl_sock *ys, const struct nlmsghdr *nlh)
struct ynl_parse_arg yarg = { .ys = ys, };
const struct ynl_ntf_info *info;
struct ynl_ntf_base_type *rsp;
- struct genlmsghdr *gehdr;
+ __u32 cmd;
int ret;
- gehdr = mnl_nlmsg_get_payload(nlh);
- if (gehdr->cmd >= ys->family->ntf_info_size)
- return MNL_CB_ERROR;
- info = &ys->family->ntf_info[gehdr->cmd];
+ if (ys->family->is_classic) {
+ cmd = nlh->nlmsg_type;
+ } else {
+ struct genlmsghdr *gehdr;
+
+ gehdr = ynl_nlmsg_data(nlh);
+ cmd = gehdr->cmd;
+ }
+
+ if (cmd >= ys->family->ntf_info_size)
+ return YNL_PARSE_CB_ERROR;
+ info = &ys->family->ntf_info[cmd];
if (!info->cb)
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
rsp = calloc(1, info->alloc_sz);
rsp->free = info->free;
@@ -723,52 +894,36 @@ static int ynl_ntf_parse(struct ynl_sock *ys, const struct nlmsghdr *nlh)
yarg.rsp_policy = info->policy;
ret = info->cb(nlh, &yarg);
- if (ret <= MNL_CB_STOP)
+ if (ret <= YNL_PARSE_CB_STOP)
goto err_free;
rsp->family = nlh->nlmsg_type;
- rsp->cmd = gehdr->cmd;
+ rsp->cmd = cmd;
*ys->ntf_last_next = rsp;
ys->ntf_last_next = &rsp->next;
- return MNL_CB_OK;
+ return YNL_PARSE_CB_OK;
err_free:
info->free(rsp);
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
}
-static int ynl_ntf_trampoline(const struct nlmsghdr *nlh, void *data)
+static int
+ynl_ntf_trampoline(const struct nlmsghdr *nlh, struct ynl_parse_arg *yarg)
{
- return ynl_ntf_parse((struct ynl_sock *)data, nlh);
+ return ynl_ntf_parse(yarg->ys, nlh);
}
int ynl_ntf_check(struct ynl_sock *ys)
{
- ssize_t len;
+ struct ynl_parse_arg yarg = { .ys = ys, };
int err;
do {
- /* libmnl doesn't let us pass flags to the recv to make
- * it non-blocking so we need to poll() or peek() :|
- */
- struct pollfd pfd = { };
-
- pfd.fd = mnl_socket_get_fd(ys->sock);
- pfd.events = POLLIN;
- err = poll(&pfd, 1, 1);
- if (err < 1)
- return err;
-
- len = mnl_socket_recvfrom(ys->sock, ys->rx_buf,
- MNL_SOCKET_BUFFER_SIZE);
- if (len < 0)
- return len;
-
- err = mnl_cb_run2(ys->rx_buf, len, ys->seq, ys->portid,
- ynl_ntf_trampoline, ys,
- ynl_cb_array, NLMSG_MIN_TYPE);
+ err = __ynl_sock_read_msgs(&yarg, ynl_ntf_trampoline,
+ MSG_DONTWAIT);
if (err < 0)
return err;
} while (err > 0);
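
The removed poll() dance existed only because libmnl's recv wrapper could not be made non-blocking; passing MSG_DONTWAIT to a plain recv() achieves the same result without the extra syscall. An illustrative Python equivalent, not part of the patch:

    import socket

    def drain_notifications(sock):
        """Return whatever datagrams are already queued, without blocking."""
        msgs = []
        while True:
            try:
                msgs.append(sock.recv(64 * 1024, socket.MSG_DONTWAIT))
            except BlockingIOError:
                return msgs        # EAGAIN: nothing more queued right now
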
@@ -789,35 +944,41 @@ void ynl_error_unknown_notification(struct ynl_sock *ys, __u8 cmd)
int ynl_error_parse(struct ynl_parse_arg *yarg, const char *msg)
{
yerr(yarg->ys, YNL_ERROR_INV_RESP, "Error parsing response: %s", msg);
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
}
static int
ynl_check_alien(struct ynl_sock *ys, const struct nlmsghdr *nlh, __u32 rsp_cmd)
{
- struct genlmsghdr *gehdr;
+ if (ys->family->is_classic) {
+ if (nlh->nlmsg_type != rsp_cmd)
+ return ynl_ntf_parse(ys, nlh);
+ } else {
+ struct genlmsghdr *gehdr;
+
+ if (ynl_nlmsg_data_len(nlh) < sizeof(*gehdr)) {
+ yerr(ys, YNL_ERROR_INV_RESP,
+ "Kernel responded with truncated message");
+ return -1;
+ }
- if (mnl_nlmsg_get_payload_len(nlh) < sizeof(*gehdr)) {
- yerr(ys, YNL_ERROR_INV_RESP,
- "Kernel responded with truncated message");
- return -1;
+ gehdr = ynl_nlmsg_data(nlh);
+ if (gehdr->cmd != rsp_cmd)
+ return ynl_ntf_parse(ys, nlh);
}
- gehdr = mnl_nlmsg_get_payload(nlh);
- if (gehdr->cmd != rsp_cmd)
- return ynl_ntf_parse(ys, nlh);
-
return 0;
}
-static int ynl_req_trampoline(const struct nlmsghdr *nlh, void *data)
+static
+int ynl_req_trampoline(const struct nlmsghdr *nlh, struct ynl_parse_arg *yarg)
{
- struct ynl_req_state *yrs = data;
+ struct ynl_req_state *yrs = (void *)yarg;
int ret;
ret = ynl_check_alien(yrs->yarg.ys, nlh, yrs->rsp_cmd);
if (ret)
- return ret < 0 ? MNL_CB_ERROR : MNL_CB_OK;
+ return ret < 0 ? YNL_PARSE_CB_ERROR : YNL_PARSE_CB_OK;
return yrs->cb(nlh, &yrs->yarg);
}
@@ -825,43 +986,38 @@ static int ynl_req_trampoline(const struct nlmsghdr *nlh, void *data)
int ynl_exec(struct ynl_sock *ys, struct nlmsghdr *req_nlh,
struct ynl_req_state *yrs)
{
- ssize_t len;
int err;
- err = mnl_socket_sendto(ys->sock, req_nlh, req_nlh->nlmsg_len);
+ err = ynl_msg_end(ys, req_nlh);
+ if (err < 0)
+ return err;
+
+ err = send(ys->socket, req_nlh, req_nlh->nlmsg_len, 0);
if (err < 0)
return err;
do {
- len = mnl_socket_recvfrom(ys->sock, ys->rx_buf,
- MNL_SOCKET_BUFFER_SIZE);
- if (len < 0)
- return len;
-
- err = mnl_cb_run2(ys->rx_buf, len, ys->seq, ys->portid,
- ynl_req_trampoline, yrs,
- ynl_cb_array, NLMSG_MIN_TYPE);
- if (err < 0)
- return err;
+ err = ynl_sock_read_msgs(&yrs->yarg, ynl_req_trampoline);
} while (err > 0);
- return 0;
+ return err;
}
-static int ynl_dump_trampoline(const struct nlmsghdr *nlh, void *data)
+static int
+ynl_dump_trampoline(const struct nlmsghdr *nlh, struct ynl_parse_arg *data)
{
- struct ynl_dump_state *ds = data;
+ struct ynl_dump_state *ds = (void *)data;
struct ynl_dump_list_type *obj;
struct ynl_parse_arg yarg = {};
int ret;
- ret = ynl_check_alien(ds->ys, nlh, ds->rsp_cmd);
+ ret = ynl_check_alien(ds->yarg.ys, nlh, ds->rsp_cmd);
if (ret)
- return ret < 0 ? MNL_CB_ERROR : MNL_CB_OK;
+ return ret < 0 ? YNL_PARSE_CB_ERROR : YNL_PARSE_CB_OK;
obj = calloc(1, ds->alloc_sz);
if (!obj)
- return MNL_CB_ERROR;
+ return YNL_PARSE_CB_ERROR;
if (!ds->first)
ds->first = obj;
@@ -869,8 +1025,7 @@ static int ynl_dump_trampoline(const struct nlmsghdr *nlh, void *data)
ds->last->next = obj;
ds->last = obj;
- yarg.ys = ds->ys;
- yarg.rsp_policy = ds->rsp_policy;
+ yarg = ds->yarg;
yarg.data = &obj->data;
return ds->cb(nlh, &yarg);
@@ -888,22 +1043,18 @@ static void *ynl_dump_end(struct ynl_dump_state *ds)
int ynl_exec_dump(struct ynl_sock *ys, struct nlmsghdr *req_nlh,
struct ynl_dump_state *yds)
{
- ssize_t len;
int err;
- err = mnl_socket_sendto(ys->sock, req_nlh, req_nlh->nlmsg_len);
+ err = ynl_msg_end(ys, req_nlh);
if (err < 0)
return err;
- do {
- len = mnl_socket_recvfrom(ys->sock, ys->rx_buf,
- MNL_SOCKET_BUFFER_SIZE);
- if (len < 0)
- goto err_close_list;
+ err = send(ys->socket, req_nlh, req_nlh->nlmsg_len, 0);
+ if (err < 0)
+ return err;
- err = mnl_cb_run2(ys->rx_buf, len, ys->seq, ys->portid,
- ynl_dump_trampoline, yds,
- ynl_cb_array, NLMSG_MIN_TYPE);
+ do {
+ err = ynl_sock_read_msgs(&yds->yarg, ynl_dump_trampoline);
if (err < 0)
goto err_close_list;
} while (err > 0);
diff --git a/tools/net/ynl/lib/ynl.h b/tools/net/ynl/lib/ynl.h
index ce77a6d76ce0..db7c0591a63f 100644
--- a/tools/net/ynl/lib/ynl.h
+++ b/tools/net/ynl/lib/ynl.h
@@ -2,6 +2,7 @@
#ifndef __YNL_C_H
#define __YNL_C_H 1
+#include <stdbool.h>
#include <stddef.h>
#include <linux/genetlink.h>
#include <linux/types.h>
@@ -12,6 +13,7 @@ enum ynl_error_code {
YNL_ERROR_NONE = 0,
__YNL_ERRNO_END = 4096,
YNL_ERROR_INTERNAL,
+ YNL_ERROR_DUMP_INTER,
YNL_ERROR_EXPECT_ACK,
YNL_ERROR_EXPECT_MSG,
YNL_ERROR_UNEXPECT_MSG,
@@ -19,6 +21,9 @@ enum ynl_error_code {
YNL_ERROR_ATTR_INVALID,
YNL_ERROR_UNKNOWN_NTF,
YNL_ERROR_INV_RESP,
+ YNL_ERROR_INPUT_INVALID,
+ YNL_ERROR_INPUT_TOO_BIG,
+ YNL_ERROR_SUBMSG_KEY,
};
/**
@@ -45,6 +50,8 @@ struct ynl_family {
/* private: */
const char *name;
size_t hdr_len;
+ bool is_classic;
+ __u16 classic_id;
const struct ynl_ntf_info *ntf_info;
unsigned int ntf_info_size;
};
@@ -58,7 +65,7 @@ struct ynl_sock {
/* private: */
const struct ynl_family *family;
- struct mnl_socket *sock;
+ int socket;
__u32 seq;
__u32 portid;
__u16 family_id;
@@ -73,12 +80,26 @@ struct ynl_sock {
struct ynl_ntf_base_type **ntf_last_next;
struct nlmsghdr *nlh;
- struct ynl_policy_nest *req_policy;
+ const struct ynl_policy_nest *req_policy;
+ size_t req_hdr_len;
unsigned char *tx_buf;
unsigned char *rx_buf;
unsigned char raw_buf[];
};
+/**
+ * struct ynl_string - parsed individual string
+ * @len: length of the string (excluding terminating character)
+ * @str: value of the string
+ *
+ * Parsed and nul-terminated string. This struct is only used for arrays of
+ * strings. Non-array string members are placed directly in respective types.
+ */
+struct ynl_string {
+ unsigned int len;
+ char str[];
+};
+
struct ynl_sock *
ynl_sock_create(const struct ynl_family *yf, struct ynl_error *e);
void ynl_sock_destroy(struct ynl_sock *ys);
@@ -88,6 +109,18 @@ void ynl_sock_destroy(struct ynl_sock *ys);
!ynl_dump_obj_is_last(iter); \
iter = ynl_dump_obj_next(iter))
+/**
+ * ynl_dump_empty() - does the dump have no entries
+ * @dump: pointer to the dump list, as returned by a dump call
+ *
+ * Check if the dump is empty, i.e. contains no objects.
+ * Dump calls return NULL on error, and a terminator element if empty.
+ */
+static inline bool ynl_dump_empty(void *dump)
+{
+ return dump == (void *)YNL_LIST_END;
+}
+
int ynl_subscribe(struct ynl_sock *ys, const char *grp_name);
int ynl_socket_get_fd(struct ynl_sock *ys);
int ynl_ntf_check(struct ynl_sock *ys);
diff --git a/tools/net/ynl/lib/ynl.py b/tools/net/ynl/lib/ynl.py
deleted file mode 100644
index 1e10512b2117..000000000000
--- a/tools/net/ynl/lib/ynl.py
+++ /dev/null
@@ -1,824 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
-
-from collections import namedtuple
-import functools
-import os
-import random
-import socket
-import struct
-from struct import Struct
-import yaml
-import ipaddress
-import uuid
-
-from .nlspec import SpecFamily
-
-#
-# Generic Netlink code which should really be in some library, but I can't quickly find one.
-#
-
-
-class Netlink:
- # Netlink socket
- SOL_NETLINK = 270
-
- NETLINK_ADD_MEMBERSHIP = 1
- NETLINK_CAP_ACK = 10
- NETLINK_EXT_ACK = 11
- NETLINK_GET_STRICT_CHK = 12
-
- # Netlink message
- NLMSG_ERROR = 2
- NLMSG_DONE = 3
-
- NLM_F_REQUEST = 1
- NLM_F_ACK = 4
- NLM_F_ROOT = 0x100
- NLM_F_MATCH = 0x200
-
- NLM_F_REPLACE = 0x100
- NLM_F_EXCL = 0x200
- NLM_F_CREATE = 0x400
- NLM_F_APPEND = 0x800
-
- NLM_F_CAPPED = 0x100
- NLM_F_ACK_TLVS = 0x200
-
- NLM_F_DUMP = NLM_F_ROOT | NLM_F_MATCH
-
- NLA_F_NESTED = 0x8000
- NLA_F_NET_BYTEORDER = 0x4000
-
- NLA_TYPE_MASK = NLA_F_NESTED | NLA_F_NET_BYTEORDER
-
- # Genetlink defines
- NETLINK_GENERIC = 16
-
- GENL_ID_CTRL = 0x10
-
- # nlctrl
- CTRL_CMD_GETFAMILY = 3
-
- CTRL_ATTR_FAMILY_ID = 1
- CTRL_ATTR_FAMILY_NAME = 2
- CTRL_ATTR_MAXATTR = 5
- CTRL_ATTR_MCAST_GROUPS = 7
-
- CTRL_ATTR_MCAST_GRP_NAME = 1
- CTRL_ATTR_MCAST_GRP_ID = 2
-
- # Extack types
- NLMSGERR_ATTR_MSG = 1
- NLMSGERR_ATTR_OFFS = 2
- NLMSGERR_ATTR_COOKIE = 3
- NLMSGERR_ATTR_POLICY = 4
- NLMSGERR_ATTR_MISS_TYPE = 5
- NLMSGERR_ATTR_MISS_NEST = 6
-
-
-class NlError(Exception):
- def __init__(self, nl_msg):
- self.nl_msg = nl_msg
-
- def __str__(self):
- return f"Netlink error: {os.strerror(-self.nl_msg.error)}\n{self.nl_msg}"
-
-
-class NlAttr:
- ScalarFormat = namedtuple('ScalarFormat', ['native', 'big', 'little'])
- type_formats = {
- 'u8' : ScalarFormat(Struct('B'), Struct("B"), Struct("B")),
- 's8' : ScalarFormat(Struct('b'), Struct("b"), Struct("b")),
- 'u16': ScalarFormat(Struct('H'), Struct(">H"), Struct("<H")),
- 's16': ScalarFormat(Struct('h'), Struct(">h"), Struct("<h")),
- 'u32': ScalarFormat(Struct('I'), Struct(">I"), Struct("<I")),
- 's32': ScalarFormat(Struct('i'), Struct(">i"), Struct("<i")),
- 'u64': ScalarFormat(Struct('Q'), Struct(">Q"), Struct("<Q")),
- 's64': ScalarFormat(Struct('q'), Struct(">q"), Struct("<q"))
- }
-
- def __init__(self, raw, offset):
- self._len, self._type = struct.unpack("HH", raw[offset : offset + 4])
- self.type = self._type & ~Netlink.NLA_TYPE_MASK
- self.is_nest = self._type & Netlink.NLA_F_NESTED
- self.payload_len = self._len
- self.full_len = (self.payload_len + 3) & ~3
- self.raw = raw[offset + 4 : offset + self.payload_len]
-
- @classmethod
- def get_format(cls, attr_type, byte_order=None):
- format = cls.type_formats[attr_type]
- if byte_order:
- return format.big if byte_order == "big-endian" \
- else format.little
- return format.native
-
- @classmethod
- def formatted_string(cls, raw, display_hint):
- if display_hint == 'mac':
- formatted = ':'.join('%02x' % b for b in raw)
- elif display_hint == 'hex':
- formatted = bytes.hex(raw, ' ')
- elif display_hint in [ 'ipv4', 'ipv6' ]:
- formatted = format(ipaddress.ip_address(raw))
- elif display_hint == 'uuid':
- formatted = str(uuid.UUID(bytes=raw))
- else:
- formatted = raw
- return formatted
-
- def as_scalar(self, attr_type, byte_order=None):
- format = self.get_format(attr_type, byte_order)
- return format.unpack(self.raw)[0]
-
- def as_auto_scalar(self, attr_type, byte_order=None):
- if len(self.raw) != 4 and len(self.raw) != 8:
- raise Exception(f"Auto-scalar len payload be 4 or 8 bytes, got {len(self.raw)}")
- real_type = attr_type[0] + str(len(self.raw) * 8)
- format = self.get_format(real_type, byte_order)
- return format.unpack(self.raw)[0]
-
- def as_strz(self):
- return self.raw.decode('ascii')[:-1]
-
- def as_bin(self):
- return self.raw
-
- def as_c_array(self, type):
- format = self.get_format(type)
- return [ x[0] for x in format.iter_unpack(self.raw) ]
-
- def as_struct(self, members):
- value = dict()
- offset = 0
- for m in members:
- # TODO: handle non-scalar members
- if m.type == 'binary':
- decoded = self.raw[offset : offset + m['len']]
- offset += m['len']
- elif m.type in NlAttr.type_formats:
- format = self.get_format(m.type, m.byte_order)
- [ decoded ] = format.unpack_from(self.raw, offset)
- offset += format.size
- if m.display_hint:
- decoded = self.formatted_string(decoded, m.display_hint)
- value[m.name] = decoded
- return value
-
- def __repr__(self):
- return f"[type:{self.type} len:{self._len}] {self.raw}"
-
-
-class NlAttrs:
- def __init__(self, msg, offset=0):
- self.attrs = []
-
- while offset < len(msg):
- attr = NlAttr(msg, offset)
- offset += attr.full_len
- self.attrs.append(attr)
-
- def __iter__(self):
- yield from self.attrs
-
- def __repr__(self):
- msg = ''
- for a in self.attrs:
- if msg:
- msg += '\n'
- msg += repr(a)
- return msg
-
-
-class NlMsg:
- def __init__(self, msg, offset, attr_space=None):
- self.hdr = msg[offset : offset + 16]
-
- self.nl_len, self.nl_type, self.nl_flags, self.nl_seq, self.nl_portid = \
- struct.unpack("IHHII", self.hdr)
-
- self.raw = msg[offset + 16 : offset + self.nl_len]
-
- self.error = 0
- self.done = 0
-
- extack_off = None
- if self.nl_type == Netlink.NLMSG_ERROR:
- self.error = struct.unpack("i", self.raw[0:4])[0]
- self.done = 1
- extack_off = 20
- elif self.nl_type == Netlink.NLMSG_DONE:
- self.done = 1
- extack_off = 4
-
- self.extack = None
- if self.nl_flags & Netlink.NLM_F_ACK_TLVS and extack_off:
- self.extack = dict()
- extack_attrs = NlAttrs(self.raw[extack_off:])
- for extack in extack_attrs:
- if extack.type == Netlink.NLMSGERR_ATTR_MSG:
- self.extack['msg'] = extack.as_strz()
- elif extack.type == Netlink.NLMSGERR_ATTR_MISS_TYPE:
- self.extack['miss-type'] = extack.as_scalar('u32')
- elif extack.type == Netlink.NLMSGERR_ATTR_MISS_NEST:
- self.extack['miss-nest'] = extack.as_scalar('u32')
- elif extack.type == Netlink.NLMSGERR_ATTR_OFFS:
- self.extack['bad-attr-offs'] = extack.as_scalar('u32')
- else:
- if 'unknown' not in self.extack:
- self.extack['unknown'] = []
- self.extack['unknown'].append(extack)
-
- if attr_space:
- # We don't have the ability to parse nests yet, so only do global
- if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
- miss_type = self.extack['miss-type']
- if miss_type in attr_space.attrs_by_val:
- spec = attr_space.attrs_by_val[miss_type]
- desc = spec['name']
- if 'doc' in spec:
- desc += f" ({spec['doc']})"
- self.extack['miss-type'] = desc
-
- def cmd(self):
- return self.nl_type
-
- def __repr__(self):
- msg = f"nl_len = {self.nl_len} ({len(self.raw)}) nl_flags = 0x{self.nl_flags:x} nl_type = {self.nl_type}\n"
- if self.error:
- msg += '\terror: ' + str(self.error)
- if self.extack:
- msg += '\textack: ' + repr(self.extack)
- return msg
-
-
-class NlMsgs:
- def __init__(self, data, attr_space=None):
- self.msgs = []
-
- offset = 0
- while offset < len(data):
- msg = NlMsg(data, offset, attr_space=attr_space)
- offset += msg.nl_len
- self.msgs.append(msg)
-
- def __iter__(self):
- yield from self.msgs
-
-
-genl_family_name_to_id = None
-
-
-def _genl_msg(nl_type, nl_flags, genl_cmd, genl_version, seq=None):
- # we prepend length in _genl_msg_finalize()
- if seq is None:
- seq = random.randint(1, 1024)
- nlmsg = struct.pack("HHII", nl_type, nl_flags, seq, 0)
- genlmsg = struct.pack("BBH", genl_cmd, genl_version, 0)
- return nlmsg + genlmsg
-
-
-def _genl_msg_finalize(msg):
- return struct.pack("I", len(msg) + 4) + msg
-
-
-def _genl_load_families():
- with socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, Netlink.NETLINK_GENERIC) as sock:
- sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_CAP_ACK, 1)
-
- msg = _genl_msg(Netlink.GENL_ID_CTRL,
- Netlink.NLM_F_REQUEST | Netlink.NLM_F_ACK | Netlink.NLM_F_DUMP,
- Netlink.CTRL_CMD_GETFAMILY, 1)
- msg = _genl_msg_finalize(msg)
-
- sock.send(msg, 0)
-
- global genl_family_name_to_id
- genl_family_name_to_id = dict()
-
- while True:
- reply = sock.recv(128 * 1024)
- nms = NlMsgs(reply)
- for nl_msg in nms:
- if nl_msg.error:
- print("Netlink error:", nl_msg.error)
- return
- if nl_msg.done:
- return
-
- gm = GenlMsg(nl_msg)
- fam = dict()
- for attr in NlAttrs(gm.raw):
- if attr.type == Netlink.CTRL_ATTR_FAMILY_ID:
- fam['id'] = attr.as_scalar('u16')
- elif attr.type == Netlink.CTRL_ATTR_FAMILY_NAME:
- fam['name'] = attr.as_strz()
- elif attr.type == Netlink.CTRL_ATTR_MAXATTR:
- fam['maxattr'] = attr.as_scalar('u32')
- elif attr.type == Netlink.CTRL_ATTR_MCAST_GROUPS:
- fam['mcast'] = dict()
- for entry in NlAttrs(attr.raw):
- mcast_name = None
- mcast_id = None
- for entry_attr in NlAttrs(entry.raw):
- if entry_attr.type == Netlink.CTRL_ATTR_MCAST_GRP_NAME:
- mcast_name = entry_attr.as_strz()
- elif entry_attr.type == Netlink.CTRL_ATTR_MCAST_GRP_ID:
- mcast_id = entry_attr.as_scalar('u32')
- if mcast_name and mcast_id is not None:
- fam['mcast'][mcast_name] = mcast_id
- if 'name' in fam and 'id' in fam:
- genl_family_name_to_id[fam['name']] = fam
-
-
-class GenlMsg:
- def __init__(self, nl_msg):
- self.nl = nl_msg
- self.genl_cmd, self.genl_version, _ = struct.unpack_from("BBH", nl_msg.raw, 0)
- self.raw = nl_msg.raw[4:]
-
- def cmd(self):
- return self.genl_cmd
-
- def __repr__(self):
- msg = repr(self.nl)
- msg += f"\tgenl_cmd = {self.genl_cmd} genl_ver = {self.genl_version}\n"
- for a in self.raw_attrs:
- msg += '\t\t' + repr(a) + '\n'
- return msg
-
-
-class NetlinkProtocol:
- def __init__(self, family_name, proto_num):
- self.family_name = family_name
- self.proto_num = proto_num
-
- def _message(self, nl_type, nl_flags, seq=None):
- if seq is None:
- seq = random.randint(1, 1024)
- nlmsg = struct.pack("HHII", nl_type, nl_flags, seq, 0)
- return nlmsg
-
- def message(self, flags, command, version, seq=None):
- return self._message(command, flags, seq)
-
- def _decode(self, nl_msg):
- return nl_msg
-
- def decode(self, ynl, nl_msg):
- msg = self._decode(nl_msg)
- fixed_header_size = 0
- if ynl:
- op = ynl.rsp_by_value[msg.cmd()]
- fixed_header_size = ynl._fixed_header_size(op.fixed_header)
- msg.raw_attrs = NlAttrs(msg.raw, fixed_header_size)
- return msg
-
- def get_mcast_id(self, mcast_name, mcast_groups):
- if mcast_name not in mcast_groups:
- raise Exception(f'Multicast group "{mcast_name}" not present in the spec')
- return mcast_groups[mcast_name].value
-
-
-class GenlProtocol(NetlinkProtocol):
- def __init__(self, family_name):
- super().__init__(family_name, Netlink.NETLINK_GENERIC)
-
- global genl_family_name_to_id
- if genl_family_name_to_id is None:
- _genl_load_families()
-
- self.genl_family = genl_family_name_to_id[family_name]
- self.family_id = genl_family_name_to_id[family_name]['id']
-
- def message(self, flags, command, version, seq=None):
- nlmsg = self._message(self.family_id, flags, seq)
- genlmsg = struct.pack("BBH", command, version, 0)
- return nlmsg + genlmsg
-
- def _decode(self, nl_msg):
- return GenlMsg(nl_msg)
-
- def get_mcast_id(self, mcast_name, mcast_groups):
- if mcast_name not in self.genl_family['mcast']:
- raise Exception(f'Multicast group "{mcast_name}" not present in the family')
- return self.genl_family['mcast'][mcast_name]
-
-
-#
-# YNL implementation details.
-#
-
-
-class YnlFamily(SpecFamily):
- def __init__(self, def_path, schema=None, process_unknown=False):
- super().__init__(def_path, schema)
-
- self.include_raw = False
- self.process_unknown = process_unknown
-
- try:
- if self.proto == "netlink-raw":
- self.nlproto = NetlinkProtocol(self.yaml['name'],
- self.yaml['protonum'])
- else:
- self.nlproto = GenlProtocol(self.yaml['name'])
- except KeyError:
- raise Exception(f"Family '{self.yaml['name']}' not supported by the kernel")
-
- self.sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, self.nlproto.proto_num)
- self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_CAP_ACK, 1)
- self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_EXT_ACK, 1)
- self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_GET_STRICT_CHK, 1)
-
- self.async_msg_ids = set()
- self.async_msg_queue = []
-
- for msg in self.msgs.values():
- if msg.is_async:
- self.async_msg_ids.add(msg.rsp_value)
-
- for op_name, op in self.ops.items():
- bound_f = functools.partial(self._op, op_name)
- setattr(self, op.ident_name, bound_f)
-
-
- def ntf_subscribe(self, mcast_name):
- mcast_id = self.nlproto.get_mcast_id(mcast_name, self.mcast_groups)
- self.sock.bind((0, 0))
- self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_ADD_MEMBERSHIP,
- mcast_id)
-
- def _add_attr(self, space, name, value):
- try:
- attr = self.attr_sets[space][name]
- except KeyError:
- raise Exception(f"Space '{space}' has no attribute '{name}'")
- nl_type = attr.value
- if attr["type"] == 'nest':
- nl_type |= Netlink.NLA_F_NESTED
- attr_payload = b''
- for subname, subvalue in value.items():
- attr_payload += self._add_attr(attr['nested-attributes'], subname, subvalue)
- elif attr["type"] == 'flag':
- attr_payload = b''
- elif attr["type"] == 'string':
- attr_payload = str(value).encode('ascii') + b'\x00'
- elif attr["type"] == 'binary':
- if isinstance(value, bytes):
- attr_payload = value
- elif isinstance(value, str):
- attr_payload = bytes.fromhex(value)
- else:
- raise Exception(f'Unknown type for binary attribute, value: {value}')
- elif attr.is_auto_scalar:
- scalar = int(value)
- real_type = attr["type"][0] + ('32' if scalar.bit_length() <= 32 else '64')
- format = NlAttr.get_format(real_type, attr.byte_order)
- attr_payload = format.pack(int(value))
- elif attr['type'] in NlAttr.type_formats:
- format = NlAttr.get_format(attr['type'], attr.byte_order)
- attr_payload = format.pack(int(value))
- elif attr['type'] in "bitfield32":
- attr_payload = struct.pack("II", int(value["value"]), int(value["selector"]))
- else:
- raise Exception(f'Unknown type at {space} {name} {value} {attr["type"]}')
-
- pad = b'\x00' * ((4 - len(attr_payload) % 4) % 4)
- return struct.pack('HH', len(attr_payload) + 4, nl_type) + attr_payload + pad
-
- def _decode_enum(self, raw, attr_spec):
- enum = self.consts[attr_spec['enum']]
- if enum.type == 'flags' or attr_spec.get('enum-as-flags', False):
- i = 0
- value = set()
- while raw:
- if raw & 1:
- value.add(enum.entries_by_val[i].name)
- raw >>= 1
- i += 1
- else:
- value = enum.entries_by_val[raw].name
- return value
-
- def _decode_binary(self, attr, attr_spec):
- if attr_spec.struct_name:
- members = self.consts[attr_spec.struct_name]
- decoded = attr.as_struct(members)
- for m in members:
- if m.enum:
- decoded[m.name] = self._decode_enum(decoded[m.name], m)
- elif attr_spec.sub_type:
- decoded = attr.as_c_array(attr_spec.sub_type)
- else:
- decoded = attr.as_bin()
- if attr_spec.display_hint:
- decoded = NlAttr.formatted_string(decoded, attr_spec.display_hint)
- return decoded
-
- def _decode_array_nest(self, attr, attr_spec):
- decoded = []
- offset = 0
- while offset < len(attr.raw):
- item = NlAttr(attr.raw, offset)
- offset += item.full_len
-
- subattrs = self._decode(NlAttrs(item.raw), attr_spec['nested-attributes'])
- decoded.append({ item.type: subattrs })
- return decoded
-
- def _decode_unknown(self, attr):
- if attr.is_nest:
- return self._decode(NlAttrs(attr.raw), None)
- else:
- return attr.as_bin()
-
- def _rsp_add(self, rsp, name, is_multi, decoded):
- if is_multi == None:
- if name in rsp and type(rsp[name]) is not list:
- rsp[name] = [rsp[name]]
- is_multi = True
- else:
- is_multi = False
-
- if not is_multi:
- rsp[name] = decoded
- elif name in rsp:
- rsp[name].append(decoded)
- else:
- rsp[name] = [decoded]
-
- def _resolve_selector(self, attr_spec, vals):
- sub_msg = attr_spec.sub_message
- if sub_msg not in self.sub_msgs:
- raise Exception(f"No sub-message spec named {sub_msg} for {attr_spec.name}")
- sub_msg_spec = self.sub_msgs[sub_msg]
-
- selector = attr_spec.selector
- if selector not in vals:
- raise Exception(f"There is no value for {selector} to resolve '{attr_spec.name}'")
- value = vals[selector]
- if value not in sub_msg_spec.formats:
- raise Exception(f"No message format for '{value}' in sub-message spec '{sub_msg}'")
-
- spec = sub_msg_spec.formats[value]
- return spec
-
- def _decode_sub_msg(self, attr, attr_spec, rsp):
- msg_format = self._resolve_selector(attr_spec, rsp)
- decoded = {}
- offset = 0
- if msg_format.fixed_header:
- decoded.update(self._decode_fixed_header(attr, msg_format.fixed_header));
- offset = self._fixed_header_size(msg_format.fixed_header)
- if msg_format.attr_set:
- if msg_format.attr_set in self.attr_sets:
- subdict = self._decode(NlAttrs(attr.raw, offset), msg_format.attr_set)
- decoded.update(subdict)
- else:
- raise Exception(f"Unknown attribute-set '{attr_space}' when decoding '{attr_spec.name}'")
- return decoded
-
- def _decode(self, attrs, space):
- if space:
- attr_space = self.attr_sets[space]
- rsp = dict()
- for attr in attrs:
- try:
- attr_spec = attr_space.attrs_by_val[attr.type]
- except (KeyError, UnboundLocalError):
- if not self.process_unknown:
- raise Exception(f"Space '{space}' has no attribute with value '{attr.type}'")
- attr_name = f"UnknownAttr({attr.type})"
- self._rsp_add(rsp, attr_name, None, self._decode_unknown(attr))
- continue
-
- if attr_spec["type"] == 'nest':
- subdict = self._decode(NlAttrs(attr.raw), attr_spec['nested-attributes'])
- decoded = subdict
- elif attr_spec["type"] == 'string':
- decoded = attr.as_strz()
- elif attr_spec["type"] == 'binary':
- decoded = self._decode_binary(attr, attr_spec)
- elif attr_spec["type"] == 'flag':
- decoded = True
- elif attr_spec.is_auto_scalar:
- decoded = attr.as_auto_scalar(attr_spec['type'], attr_spec.byte_order)
- elif attr_spec["type"] in NlAttr.type_formats:
- decoded = attr.as_scalar(attr_spec['type'], attr_spec.byte_order)
- if 'enum' in attr_spec:
- decoded = self._decode_enum(decoded, attr_spec)
- elif attr_spec["type"] == 'array-nest':
- decoded = self._decode_array_nest(attr, attr_spec)
- elif attr_spec["type"] == 'bitfield32':
- value, selector = struct.unpack("II", attr.raw)
- if 'enum' in attr_spec:
- value = self._decode_enum(value, attr_spec)
- selector = self._decode_enum(selector, attr_spec)
- decoded = {"value": value, "selector": selector}
- elif attr_spec["type"] == 'sub-message':
- decoded = self._decode_sub_msg(attr, attr_spec, rsp)
- else:
- if not self.process_unknown:
- raise Exception(f'Unknown {attr_spec["type"]} with name {attr_spec["name"]}')
- decoded = self._decode_unknown(attr)
-
- self._rsp_add(rsp, attr_spec["name"], attr_spec.is_multi, decoded)
-
- return rsp
-
- def _decode_extack_path(self, attrs, attr_set, offset, target):
- for attr in attrs:
- try:
- attr_spec = attr_set.attrs_by_val[attr.type]
- except KeyError:
- raise Exception(f"Space '{attr_set.name}' has no attribute with value '{attr.type}'")
- if offset > target:
- break
- if offset == target:
- return '.' + attr_spec.name
-
- if offset + attr.full_len <= target:
- offset += attr.full_len
- continue
- if attr_spec['type'] != 'nest':
- raise Exception(f"Can't dive into {attr.type} ({attr_spec['name']}) for extack")
- offset += 4
- subpath = self._decode_extack_path(NlAttrs(attr.raw),
- self.attr_sets[attr_spec['nested-attributes']],
- offset, target)
- if subpath is None:
- return None
- return '.' + attr_spec.name + subpath
-
- return None
-
- def _decode_extack(self, request, op, extack):
- if 'bad-attr-offs' not in extack:
- return
-
- msg = self.nlproto.decode(self, NlMsg(request, 0, op.attr_set))
- offset = 20 + self._fixed_header_size(op.fixed_header)
- path = self._decode_extack_path(msg.raw_attrs, op.attr_set, offset,
- extack['bad-attr-offs'])
- if path:
- del extack['bad-attr-offs']
- extack['bad-attr'] = path
-
- def _fixed_header_size(self, name):
- if name:
- fixed_header_members = self.consts[name].members
- size = 0
- for m in fixed_header_members:
- if m.type in ['pad', 'binary']:
- size += m.len
- else:
- format = NlAttr.get_format(m.type, m.byte_order)
- size += format.size
- return size
- else:
- return 0
-
- def _decode_fixed_header(self, msg, name):
- fixed_header_members = self.consts[name].members
- fixed_header_attrs = dict()
- offset = 0
- for m in fixed_header_members:
- value = None
- if m.type == 'pad':
- offset += m.len
- elif m.type == 'binary':
- value = msg.raw[offset : offset + m.len]
- offset += m.len
- else:
- format = NlAttr.get_format(m.type, m.byte_order)
- [ value ] = format.unpack_from(msg.raw, offset)
- offset += format.size
- if value is not None:
- if m.enum:
- value = self._decode_enum(value, m)
- fixed_header_attrs[m.name] = value
- return fixed_header_attrs
-
- def handle_ntf(self, decoded):
- msg = dict()
- if self.include_raw:
- msg['raw'] = decoded
- op = self.rsp_by_value[decoded.cmd()]
- attrs = self._decode(decoded.raw_attrs, op.attr_set.name)
- if op.fixed_header:
- attrs.update(self._decode_fixed_header(decoded, op.fixed_header))
-
- msg['name'] = op['name']
- msg['msg'] = attrs
- self.async_msg_queue.append(msg)
-
- def check_ntf(self):
- while True:
- try:
- reply = self.sock.recv(128 * 1024, socket.MSG_DONTWAIT)
- except BlockingIOError:
- return
-
- nms = NlMsgs(reply)
- for nl_msg in nms:
- if nl_msg.error:
- print("Netlink error in ntf!?", os.strerror(-nl_msg.error))
- print(nl_msg)
- continue
- if nl_msg.done:
- print("Netlink done while checking for ntf!?")
- continue
-
- decoded = self.nlproto.decode(self, nl_msg)
- if decoded.cmd() not in self.async_msg_ids:
- print("Unexpected msg id done while checking for ntf", decoded)
- continue
-
- self.handle_ntf(decoded)
-
- def operation_do_attributes(self, name):
- """
- For a given operation name, find and return a supported
- set of attributes (as a dict).
- """
- op = self.find_operation(name)
- if not op:
- return None
-
- return op['do']['request']['attributes'].copy()
-
- def _op(self, method, vals, flags=None, dump=False):
- op = self.ops[method]
-
- nl_flags = Netlink.NLM_F_REQUEST | Netlink.NLM_F_ACK
- for flag in flags or []:
- nl_flags |= flag
- if dump:
- nl_flags |= Netlink.NLM_F_DUMP
-
- req_seq = random.randint(1024, 65535)
- msg = self.nlproto.message(nl_flags, op.req_value, 1, req_seq)
- fixed_header_members = []
- if op.fixed_header:
- fixed_header_members = self.consts[op.fixed_header].members
- for m in fixed_header_members:
- value = vals.pop(m.name) if m.name in vals else 0
- if m.type == 'pad':
- msg += bytearray(m.len)
- elif m.type == 'binary':
- msg += bytes.fromhex(value)
- else:
- format = NlAttr.get_format(m.type, m.byte_order)
- msg += format.pack(value)
- for name, value in vals.items():
- msg += self._add_attr(op.attr_set.name, name, value)
- msg = _genl_msg_finalize(msg)
-
- self.sock.send(msg, 0)
-
- done = False
- rsp = []
- while not done:
- reply = self.sock.recv(128 * 1024)
- nms = NlMsgs(reply, attr_space=op.attr_set)
- for nl_msg in nms:
- if nl_msg.extack:
- self._decode_extack(msg, op, nl_msg.extack)
-
- if nl_msg.error:
- raise NlError(nl_msg)
- if nl_msg.done:
- if nl_msg.extack:
- print("Netlink warning:")
- print(nl_msg)
- done = True
- break
-
- decoded = self.nlproto.decode(self, nl_msg)
-
- # Check if this is a reply to our request
- if nl_msg.nl_seq != req_seq or decoded.cmd() != op.rsp_value:
- if decoded.cmd() in self.async_msg_ids:
- self.handle_ntf(decoded)
- continue
- else:
- print('Unexpected message: ' + repr(decoded))
- continue
-
- rsp_msg = self._decode(decoded.raw_attrs, op.attr_set.name)
- if op.fixed_header:
- rsp_msg.update(self._decode_fixed_header(decoded, op.fixed_header))
- rsp.append(rsp_msg)
-
- if not rsp:
- return None
- if not dump and len(rsp) == 1:
- return rsp[0]
- return rsp
-
- def do(self, method, vals, flags=None):
- return self._op(method, vals, flags)
-
- def dump(self, method, vals):
- return self._op(method, vals, [], dump=True)
diff --git a/tools/net/ynl/pyproject.toml b/tools/net/ynl/pyproject.toml
new file mode 100644
index 000000000000..a81d8779b0e0
--- /dev/null
+++ b/tools/net/ynl/pyproject.toml
@@ -0,0 +1,24 @@
+[build-system]
+requires = ["setuptools>=61.0"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "pyynl"
+authors = [
+ {name = "Donald Hunter", email = "donald.hunter@gmail.com"},
+ {name = "Jakub Kicinski", email = "kuba@kernel.org"},
+]
+description = "yaml netlink (ynl)"
+version = "0.0.1"
+requires-python = ">=3.9"
+dependencies = [
+ "pyyaml==6.*",
+ "jsonschema==4.*"
+]
+
+[tool.setuptools.packages.find]
+include = ["pyynl", "pyynl.lib"]
+
+[project.scripts]
+ynl = "pyynl.cli:main"
+ynl-ethtool = "pyynl.ethtool:main"
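
With the packaging metadata above, the Python bits become an installable package: running "pip install ." in tools/net/ynl provides the "ynl" and "ynl-ethtool" console scripts and makes the library importable. A hedged example of programmatic use after installation; the spec path below assumes the system-wide install location referenced by sys_schema_dir and should be adjusted when running from the source tree:

    from pyynl.lib import YnlFamily

    # /usr/share/ynl/specs is the system-wide location the CLI falls back to;
    # this path is an assumption, not mandated by the patch.
    spec = "/usr/share/ynl/specs/ethtool.yaml"
    ynl = YnlFamily(spec, schema='')      # '' skips jsonschema validation
    print(sorted(ynl.ops))                # operation names defined in the spec
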
diff --git a/tools/net/ynl/pyynl/.gitignore b/tools/net/ynl/pyynl/.gitignore
new file mode 100644
index 000000000000..b801cd2d016e
--- /dev/null
+++ b/tools/net/ynl/pyynl/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+lib/__pycache__/
diff --git a/tools/net/ynl/pyynl/__init__.py b/tools/net/ynl/pyynl/__init__.py
new file mode 100644
index 000000000000..e69de29bb2d1
--- /dev/null
+++ b/tools/net/ynl/pyynl/__init__.py
diff --git a/tools/net/ynl/pyynl/cli.py b/tools/net/ynl/pyynl/cli.py
new file mode 100755
index 000000000000..8c192e900bd3
--- /dev/null
+++ b/tools/net/ynl/pyynl/cli.py
@@ -0,0 +1,163 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+
+import argparse
+import json
+import os
+import pathlib
+import pprint
+import sys
+
+sys.path.append(pathlib.Path(__file__).resolve().parent.as_posix())
+from lib import YnlFamily, Netlink, NlError
+
+sys_schema_dir='/usr/share/ynl'
+relative_schema_dir='../../../../Documentation/netlink'
+
+def schema_dir():
+ script_dir = os.path.dirname(os.path.abspath(__file__))
+ schema_dir = os.path.abspath(f"{script_dir}/{relative_schema_dir}")
+ if not os.path.isdir(schema_dir):
+ schema_dir = sys_schema_dir
+ if not os.path.isdir(schema_dir):
+ raise Exception(f"Schema directory {schema_dir} does not exist")
+ return schema_dir
+
+def spec_dir():
+ spec_dir = schema_dir() + '/specs'
+ if not os.path.isdir(spec_dir):
+ raise Exception(f"Spec directory {spec_dir} does not exist")
+ return spec_dir
+
+
+class YnlEncoder(json.JSONEncoder):
+ def default(self, obj):
+ if isinstance(obj, bytes):
+ return bytes.hex(obj)
+ if isinstance(obj, set):
+ return list(obj)
+ return json.JSONEncoder.default(self, obj)
+
+
+def main():
+ description = """
+ YNL CLI utility - a general purpose netlink utility that uses YAML
+ specs to drive protocol encoding and decoding.
+ """
+ epilog = """
+ The --multi option can be repeated to include several do operations
+ in the same netlink payload.
+ """
+
+ parser = argparse.ArgumentParser(description=description,
+ epilog=epilog)
+ spec_group = parser.add_mutually_exclusive_group(required=True)
+ spec_group.add_argument('--family', dest='family', type=str,
+ help='name of the netlink FAMILY')
+ spec_group.add_argument('--list-families', action='store_true',
+ help='list all netlink families supported by YNL (has spec)')
+ spec_group.add_argument('--spec', dest='spec', type=str,
+ help='choose the family by SPEC file path')
+
+ parser.add_argument('--schema', dest='schema', type=str)
+ parser.add_argument('--no-schema', action='store_true')
+ parser.add_argument('--json', dest='json_text', type=str)
+
+ group = parser.add_mutually_exclusive_group()
+ group.add_argument('--do', dest='do', metavar='DO-OPERATION', type=str)
+ group.add_argument('--multi', dest='multi', nargs=2, action='append',
+ metavar=('DO-OPERATION', 'JSON_TEXT'), type=str)
+ group.add_argument('--dump', dest='dump', metavar='DUMP-OPERATION', type=str)
+ group.add_argument('--list-ops', action='store_true')
+ group.add_argument('--list-msgs', action='store_true')
+
+ parser.add_argument('--duration', dest='duration', type=int,
+ help='when subscribed, watch for DURATION seconds')
+ parser.add_argument('--sleep', dest='duration', type=int,
+ help='alias for duration')
+ parser.add_argument('--subscribe', dest='ntf', type=str)
+ parser.add_argument('--replace', dest='flags', action='append_const',
+ const=Netlink.NLM_F_REPLACE)
+ parser.add_argument('--excl', dest='flags', action='append_const',
+ const=Netlink.NLM_F_EXCL)
+ parser.add_argument('--create', dest='flags', action='append_const',
+ const=Netlink.NLM_F_CREATE)
+ parser.add_argument('--append', dest='flags', action='append_const',
+ const=Netlink.NLM_F_APPEND)
+ parser.add_argument('--process-unknown', action=argparse.BooleanOptionalAction)
+ parser.add_argument('--output-json', action='store_true')
+ parser.add_argument('--dbg-small-recv', default=0, const=4000,
+ action='store', nargs='?', type=int)
+ args = parser.parse_args()
+
+ def output(msg):
+ if args.output_json:
+ print(json.dumps(msg, cls=YnlEncoder))
+ else:
+ pprint.PrettyPrinter().pprint(msg)
+
+ if args.list_families:
+ for filename in sorted(os.listdir(spec_dir())):
+ if filename.endswith('.yaml'):
+ print(filename.removesuffix('.yaml'))
+ return
+
+ if args.no_schema:
+ args.schema = ''
+
+ attrs = {}
+ if args.json_text:
+ attrs = json.loads(args.json_text)
+
+ if args.family:
+ spec = f"{spec_dir()}/{args.family}.yaml"
+ if args.schema is None and spec.startswith(sys_schema_dir):
+ args.schema = '' # disable schema validation when installed
+ if args.process_unknown is None:
+ args.process_unknown = True
+ else:
+ spec = args.spec
+ if not os.path.isfile(spec):
+ raise Exception(f"Spec file {spec} does not exist")
+
+ ynl = YnlFamily(spec, args.schema, args.process_unknown,
+ recv_size=args.dbg_small_recv)
+ if args.dbg_small_recv:
+ ynl.set_recv_dbg(True)
+
+ if args.ntf:
+ ynl.ntf_subscribe(args.ntf)
+
+ if args.list_ops:
+ for op_name, op in ynl.ops.items():
+ print(op_name, " [", ", ".join(op.modes), "]")
+ if args.list_msgs:
+ for op_name, op in ynl.msgs.items():
+ print(op_name, " [", ", ".join(op.modes), "]")
+
+ try:
+ if args.do:
+ reply = ynl.do(args.do, attrs, args.flags)
+ output(reply)
+ if args.dump:
+ reply = ynl.dump(args.dump, attrs)
+ output(reply)
+ if args.multi:
+ ops = [ (item[0], json.loads(item[1]), args.flags or []) for item in args.multi ]
+ reply = ynl.do_multi(ops)
+ output(reply)
+
+ if args.ntf:
+ for msg in ynl.poll_ntf(duration=args.duration):
+ output(msg)
+ except NlError as e:
+ print(e)
+ exit(1)
+ except KeyboardInterrupt:
+ pass
+ except BrokenPipeError:
+ pass
+
+
+if __name__ == "__main__":
+ main()
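
The CLI above is a thin wrapper: --do, --dump and --multi map directly onto the YnlFamily do(), dump() and do_multi() methods. A sketch of the equivalent programmatic calls; the spec path and operation names are placeholders for illustration, not taken from the patch:

    import json
    from pyynl.lib import YnlFamily

    ynl = YnlFamily("some-family.yaml", schema='')        # placeholder spec path

    one  = ynl.do("example-get", json.loads('{"id": 1}')) # --do with --json
    many = ynl.dump("example-get", {})                    # --dump
    both = ynl.do_multi([                                 # repeated --multi
        ("example-set", {"id": 1, "flag": True}, []),
        ("example-set", {"id": 2, "flag": True}, []),
    ])
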
diff --git a/tools/net/ynl/ethtool.py b/tools/net/ynl/pyynl/ethtool.py
index 6c9f7e31250c..9b523cbb3568 100755
--- a/tools/net/ynl/ethtool.py
+++ b/tools/net/ynl/pyynl/ethtool.py
@@ -2,12 +2,15 @@
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
import argparse
-import json
+import pathlib
import pprint
import sys
import re
+import os
+sys.path.append(pathlib.Path(__file__).resolve().parent.as_posix())
from lib import YnlFamily
+from cli import schema_dir, spec_dir
def args_to_req(ynl, op_name, args, req):
"""
@@ -47,7 +50,7 @@ def print_field(reply, *desc):
for spec in desc:
try:
field, name, tp = spec
- except:
+ except ValueError:
field, name = spec
tp = 'int'
@@ -152,8 +155,8 @@ def main():
global args
args = parser.parse_args()
- spec = '../../../Documentation/netlink/specs/ethtool.yaml'
- schema = '../../../Documentation/netlink/genetlink-legacy.yaml'
+ spec = os.path.join(spec_dir(), 'ethtool.yaml')
+ schema = os.path.join(schema_dir(), 'genetlink-legacy.yaml')
ynl = YnlFamily(spec, schema)
@@ -250,14 +253,14 @@ def main():
reply = dumpit(ynl, args, 'channels-get')
print(f'Channel parameters for {args.device}:')
- print(f'Pre-set maximums:')
+ print('Pre-set maximums:')
print_field(reply,
('rx-max', 'RX'),
('tx-max', 'TX'),
('other-max', 'Other'),
('combined-max', 'Combined'))
- print(f'Current hardware settings:')
+ print('Current hardware settings:')
print_field(reply,
('rx-count', 'RX'),
('tx-count', 'TX'),
@@ -271,14 +274,14 @@ def main():
print(f'Ring parameters for {args.device}:')
- print(f'Pre-set maximums:')
+ print('Pre-set maximums:')
print_field(reply,
('rx-max', 'RX'),
('rx-mini-max', 'RX Mini'),
('rx-jumbo-max', 'RX Jumbo'),
('tx-max', 'TX'))
- print(f'Current hardware settings:')
+ print('Current hardware settings:')
print_field(reply,
('rx', 'RX'),
('rx-mini', 'RX Mini'),
@@ -293,7 +296,7 @@ def main():
return
if args.statistics:
- print(f'NIC statistics:')
+ print('NIC statistics:')
# TODO: pass id?
strset = dumpit(ynl, args, 'strset-get')
@@ -320,20 +323,37 @@ def main():
return
if args.show_time_stamping:
- tsinfo = dumpit(ynl, args, 'tsinfo-get')
+ req = {
+ 'header': {
+ 'flags': 'stats',
+ },
+ }
+
+ tsinfo = dumpit(ynl, args, 'tsinfo-get', req)
print(f'Time stamping parameters for {args.device}:')
print('Capabilities:')
[print(f'\t{v}') for v in bits_to_dict(tsinfo['timestamping'])]
- print(f'PTP Hardware Clock: {tsinfo["phc-index"]}')
+ print(f'PTP Hardware Clock: {tsinfo.get("phc-index", "none")}')
+
+ if 'tx-types' in tsinfo:
+ print('Hardware Transmit Timestamp Modes:')
+ [print(f'\t{v}') for v in bits_to_dict(tsinfo['tx-types'])]
+ else:
+ print('Hardware Transmit Timestamp Modes: none')
+
+ if 'rx-filters' in tsinfo:
+ print('Hardware Receive Filter Modes:')
+ [print(f'\t{v}') for v in bits_to_dict(tsinfo['rx-filters'])]
+ else:
+ print('Hardware Receive Filter Modes: none')
- print('Hardware Transmit Timestamp Modes:')
- [print(f'\t{v}') for v in bits_to_dict(tsinfo['tx-types'])]
+ if 'stats' in tsinfo and tsinfo['stats']:
+ print('Statistics:')
+ [print(f'\t{k}: {v}') for k, v in tsinfo['stats'].items()]
- print('Hardware Receive Filter Modes:')
- [print(f'\t{v}') for v in bits_to_dict(tsinfo['rx-filters'])]
return
print(f'Settings for {args.device}:')
diff --git a/tools/net/ynl/pyynl/lib/__init__.py b/tools/net/ynl/pyynl/lib/__init__.py
new file mode 100644
index 000000000000..ec9ea00071be
--- /dev/null
+++ b/tools/net/ynl/pyynl/lib/__init__.py
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+
+from .nlspec import SpecAttr, SpecAttrSet, SpecEnumEntry, SpecEnumSet, \
+ SpecFamily, SpecOperation, SpecSubMessage, SpecSubMessageFormat
+from .ynl import YnlFamily, Netlink, NlError
+
+from .doc_generator import YnlDocGenerator
+
+__all__ = ["SpecAttr", "SpecAttrSet", "SpecEnumEntry", "SpecEnumSet",
+ "SpecFamily", "SpecOperation", "SpecSubMessage", "SpecSubMessageFormat",
+ "YnlFamily", "Netlink", "NlError", "YnlDocGenerator"]
diff --git a/tools/net/ynl/pyynl/lib/doc_generator.py b/tools/net/ynl/pyynl/lib/doc_generator.py
new file mode 100644
index 000000000000..3a16b8eb01ca
--- /dev/null
+++ b/tools/net/ynl/pyynl/lib/doc_generator.py
@@ -0,0 +1,402 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+# -*- coding: utf-8; mode: python -*-
+
+"""
+ Class to auto generate the documentation for Netlink specifications.
+
+ :copyright: Copyright (C) 2023 Breno Leitao <leitao@debian.org>
+ :license: GPL Version 2, June 1991 see linux/COPYING for details.
+
+    This class performs extensive parsing of the Linux kernel's netlink YAML
+    spec files, in an effort to avoid needing to heavily mark up the original
+    YAML file.
+
+    This code is split into two classes:
+        1) RST formatters: used to convert strings to RST output
+        2) YAML Netlink (YNL) doc generator: generates docs from the YAML data
+"""
+
+from typing import Any, Dict, List
+import yaml
+
+LINE_STR = '__lineno__'
+
+class NumberedSafeLoader(yaml.SafeLoader): # pylint: disable=R0901
+    """Override the SafeLoader class to add line numbers to the parsed data"""
+
+ def construct_mapping(self, node, *args, **kwargs):
+ mapping = super().construct_mapping(node, *args, **kwargs)
+ mapping[LINE_STR] = node.start_mark.line
+
+ return mapping
+
+class RstFormatters:
+ """RST Formatters"""
+
+ SPACE_PER_LEVEL = 4
+
+ @staticmethod
+ def headroom(level: int) -> str:
+        """Return the leading whitespace for the given indentation level"""
+ return " " * (level * RstFormatters.SPACE_PER_LEVEL)
+
+ @staticmethod
+ def bold(text: str) -> str:
+ """Format bold text"""
+ return f"**{text}**"
+
+ @staticmethod
+ def inline(text: str) -> str:
+ """Format inline text"""
+ return f"``{text}``"
+
+ @staticmethod
+ def sanitize(text: str) -> str:
+ """Remove newlines and multiple spaces"""
+ # This is useful for some fields that are spread across multiple lines
+ return str(text).replace("\n", " ").strip()
+
+ def rst_fields(self, key: str, value: str, level: int = 0) -> str:
+ """Return a RST formatted field"""
+ return self.headroom(level) + f":{key}: {value}"
+
+ def rst_definition(self, key: str, value: Any, level: int = 0) -> str:
+ """Format a single rst definition"""
+ return self.headroom(level) + key + "\n" + self.headroom(level + 1) + str(value)
+
+ def rst_paragraph(self, paragraph: str, level: int = 0) -> str:
+ """Return a formatted paragraph"""
+ return self.headroom(level) + paragraph
+
+ def rst_bullet(self, item: str, level: int = 0) -> str:
+        """Return a formatted bullet"""
+ return self.headroom(level) + f"- {item}"
+
+ @staticmethod
+ def rst_subsection(title: str) -> str:
+ """Add a sub-section to the document"""
+ return f"{title}\n" + "-" * len(title)
+
+ @staticmethod
+ def rst_subsubsection(title: str) -> str:
+ """Add a sub-sub-section to the document"""
+ return f"{title}\n" + "~" * len(title)
+
+ @staticmethod
+ def rst_section(namespace: str, prefix: str, title: str) -> str:
+ """Add a section to the document"""
+ return f".. _{namespace}-{prefix}-{title}:\n\n{title}\n" + "=" * len(title)
+
+ @staticmethod
+ def rst_subtitle(title: str) -> str:
+ """Add a subtitle to the document"""
+ return "\n" + "-" * len(title) + f"\n{title}\n" + "-" * len(title) + "\n\n"
+
+ @staticmethod
+ def rst_title(title: str) -> str:
+ """Add a title to the document"""
+ return "=" * len(title) + f"\n{title}\n" + "=" * len(title) + "\n\n"
+
+ def rst_list_inline(self, list_: List[str], level: int = 0) -> str:
+ """Format a list using inlines"""
+ return self.headroom(level) + "[" + ", ".join(self.inline(i) for i in list_) + "]"
+
+ @staticmethod
+ def rst_ref(namespace: str, prefix: str, name: str) -> str:
+ """Add a hyperlink to the document"""
+ mappings = {'enum': 'definition',
+ 'fixed-header': 'definition',
+ 'nested-attributes': 'attribute-set',
+ 'struct': 'definition'}
+ if prefix in mappings:
+ prefix = mappings[prefix]
+ return f":ref:`{namespace}-{prefix}-{name}`"
+
+ def rst_header(self) -> str:
+ """The headers for all the auto generated RST files"""
+ lines = []
+
+ lines.append(self.rst_paragraph(".. SPDX-License-Identifier: GPL-2.0"))
+ lines.append(self.rst_paragraph(".. NOTE: This document was auto-generated.\n\n"))
+
+ return "\n".join(lines)
+
+ @staticmethod
+ def rst_toctree(maxdepth: int = 2) -> str:
+ """Generate a toctree RST primitive"""
+ lines = []
+
+ lines.append(".. toctree::")
+ lines.append(f" :maxdepth: {maxdepth}\n\n")
+
+ return "\n".join(lines)
+
+ @staticmethod
+ def rst_label(title: str) -> str:
+ """Return a formatted label"""
+ return f".. _{title}:\n\n"
+
+ @staticmethod
+ def rst_lineno(lineno: int) -> str:
+ """Return a lineno comment"""
+ return f".. LINENO {lineno}\n"
+
+class YnlDocGenerator:
+ """YAML Netlink specs Parser"""
+
+ fmt = RstFormatters()
+
+ def parse_mcast_group(self, mcast_group: List[Dict[str, Any]]) -> str:
+ """Parse 'multicast' group list and return a formatted string"""
+ lines = []
+ for group in mcast_group:
+ lines.append(self.fmt.rst_bullet(group["name"]))
+
+ return "\n".join(lines)
+
+ def parse_do(self, do_dict: Dict[str, Any], level: int = 0) -> str:
+ """Parse 'do' section and return a formatted string"""
+ lines = []
+ if LINE_STR in do_dict:
+ lines.append(self.fmt.rst_lineno(do_dict[LINE_STR]))
+
+ for key in do_dict.keys():
+ if key == LINE_STR:
+ continue
+ lines.append(self.fmt.rst_paragraph(self.fmt.bold(key), level + 1))
+ if key in ['request', 'reply']:
+ lines.append(self.parse_do_attributes(do_dict[key], level + 1) + "\n")
+ else:
+ lines.append(self.fmt.headroom(level + 2) + do_dict[key] + "\n")
+
+ return "\n".join(lines)
+
+ def parse_do_attributes(self, attrs: Dict[str, Any], level: int = 0) -> str:
+ """Parse 'attributes' section"""
+ if "attributes" not in attrs:
+ return ""
+ lines = [self.fmt.rst_fields("attributes",
+ self.fmt.rst_list_inline(attrs["attributes"]),
+ level + 1)]
+
+ return "\n".join(lines)
+
+ def parse_operations(self, operations: List[Dict[str, Any]], namespace: str) -> str:
+ """Parse operations block"""
+ preprocessed = ["name", "doc", "title", "do", "dump", "flags"]
+ linkable = ["fixed-header", "attribute-set"]
+ lines = []
+
+ for operation in operations:
+ if LINE_STR in operation:
+ lines.append(self.fmt.rst_lineno(operation[LINE_STR]))
+
+ lines.append(self.fmt.rst_section(namespace, 'operation',
+ operation["name"]))
+ lines.append(self.fmt.rst_paragraph(operation["doc"]) + "\n")
+
+ for key in operation.keys():
+ if key == LINE_STR:
+ continue
+
+ if key in preprocessed:
+ # Skip the special fields
+ continue
+ value = operation[key]
+ if key in linkable:
+ value = self.fmt.rst_ref(namespace, key, value)
+ lines.append(self.fmt.rst_fields(key, value, 0))
+ if 'flags' in operation:
+ lines.append(self.fmt.rst_fields('flags',
+ self.fmt.rst_list_inline(operation['flags'])))
+
+ if "do" in operation:
+ lines.append(self.fmt.rst_paragraph(":do:", 0))
+ lines.append(self.parse_do(operation["do"], 0))
+ if "dump" in operation:
+ lines.append(self.fmt.rst_paragraph(":dump:", 0))
+ lines.append(self.parse_do(operation["dump"], 0))
+
+ # New line after fields
+ lines.append("\n")
+
+ return "\n".join(lines)
+
+ def parse_entries(self, entries: List[Dict[str, Any]], level: int) -> str:
+ """Parse a list of entries"""
+ ignored = ["pad"]
+ lines = []
+ for entry in entries:
+ if isinstance(entry, dict):
+ # entries could be a list or a dictionary
+ field_name = entry.get("name", "")
+ if field_name in ignored:
+ continue
+ type_ = entry.get("type")
+ if type_:
+ field_name += f" ({self.fmt.inline(type_)})"
+ lines.append(
+ self.fmt.rst_fields(field_name,
+ self.fmt.sanitize(entry.get("doc", "")),
+ level)
+ )
+ elif isinstance(entry, list):
+ lines.append(self.fmt.rst_list_inline(entry, level))
+ else:
+ lines.append(self.fmt.rst_bullet(self.fmt.inline(self.fmt.sanitize(entry)),
+ level))
+
+ lines.append("\n")
+ return "\n".join(lines)
+
+ def parse_definitions(self, defs: Dict[str, Any], namespace: str) -> str:
+ """Parse definitions section"""
+ preprocessed = ["name", "entries", "members"]
+ ignored = ["render-max"] # This is not printed
+ lines = []
+
+ for definition in defs:
+ if LINE_STR in definition:
+ lines.append(self.fmt.rst_lineno(definition[LINE_STR]))
+
+ lines.append(self.fmt.rst_section(namespace, 'definition', definition["name"]))
+ for k in definition.keys():
+ if k == LINE_STR:
+ continue
+ if k in preprocessed + ignored:
+ continue
+ lines.append(self.fmt.rst_fields(k, self.fmt.sanitize(definition[k]), 0))
+
+ # Field list needs to finish with a new line
+ lines.append("\n")
+ if "entries" in definition:
+ lines.append(self.fmt.rst_paragraph(":entries:", 0))
+ lines.append(self.parse_entries(definition["entries"], 1))
+ if "members" in definition:
+ lines.append(self.fmt.rst_paragraph(":members:", 0))
+ lines.append(self.parse_entries(definition["members"], 1))
+
+ return "\n".join(lines)
+
+ def parse_attr_sets(self, entries: List[Dict[str, Any]], namespace: str) -> str:
+ """Parse attribute from attribute-set"""
+ preprocessed = ["name", "type"]
+ linkable = ["enum", "nested-attributes", "struct", "sub-message"]
+ ignored = ["checks"]
+ lines = []
+
+ for entry in entries:
+ lines.append(self.fmt.rst_section(namespace, 'attribute-set',
+ entry["name"]))
+
+ if "doc" in entry:
+ lines.append(self.fmt.rst_paragraph(entry["doc"], 0) + "\n")
+
+ for attr in entry["attributes"]:
+ if LINE_STR in attr:
+ lines.append(self.fmt.rst_lineno(attr[LINE_STR]))
+
+ type_ = attr.get("type")
+ attr_line = attr["name"]
+ if type_:
+                    # Add the attribute type on the same line
+ attr_line += f" ({self.fmt.inline(type_)})"
+
+ lines.append(self.fmt.rst_subsubsection(attr_line))
+
+ for k in attr.keys():
+ if k == LINE_STR:
+ continue
+ if k in preprocessed + ignored:
+ continue
+ if k in linkable:
+ value = self.fmt.rst_ref(namespace, k, attr[k])
+ else:
+ value = self.fmt.sanitize(attr[k])
+ lines.append(self.fmt.rst_fields(k, value, 0))
+ lines.append("\n")
+
+ return "\n".join(lines)
+
+ def parse_sub_messages(self, entries: List[Dict[str, Any]], namespace: str) -> str:
+ """Parse sub-message definitions"""
+ lines = []
+
+ for entry in entries:
+ lines.append(self.fmt.rst_section(namespace, 'sub-message',
+ entry["name"]))
+ for fmt in entry["formats"]:
+ value = fmt["value"]
+
+ lines.append(self.fmt.rst_bullet(self.fmt.bold(value)))
+ for attr in ['fixed-header', 'attribute-set']:
+ if attr in fmt:
+ lines.append(self.fmt.rst_fields(attr,
+ self.fmt.rst_ref(namespace,
+ attr,
+ fmt[attr]),
+ 1))
+ lines.append("\n")
+
+ return "\n".join(lines)
+
+ def parse_yaml(self, obj: Dict[str, Any]) -> str:
+ """Format the whole YAML into a RST string"""
+ lines = []
+
+ # Main header
+ lineno = obj.get('__lineno__', 0)
+ lines.append(self.fmt.rst_lineno(lineno))
+
+ family = obj['name']
+
+ lines.append(self.fmt.rst_header())
+ lines.append(self.fmt.rst_label("netlink-" + family))
+
+ title = f"Family ``{family}`` netlink specification"
+ lines.append(self.fmt.rst_title(title))
+ lines.append(self.fmt.rst_paragraph(".. contents:: :depth: 3\n"))
+
+ if "doc" in obj:
+ lines.append(self.fmt.rst_subtitle("Summary"))
+ lines.append(self.fmt.rst_paragraph(obj["doc"], 0))
+
+ # Operations
+ if "operations" in obj:
+ lines.append(self.fmt.rst_subtitle("Operations"))
+ lines.append(self.parse_operations(obj["operations"]["list"],
+ family))
+
+ # Multicast groups
+ if "mcast-groups" in obj:
+ lines.append(self.fmt.rst_subtitle("Multicast groups"))
+ lines.append(self.parse_mcast_group(obj["mcast-groups"]["list"]))
+
+ # Definitions
+ if "definitions" in obj:
+ lines.append(self.fmt.rst_subtitle("Definitions"))
+ lines.append(self.parse_definitions(obj["definitions"], family))
+
+ # Attributes set
+ if "attribute-sets" in obj:
+ lines.append(self.fmt.rst_subtitle("Attribute sets"))
+ lines.append(self.parse_attr_sets(obj["attribute-sets"], family))
+
+ # Sub-messages
+ if "sub-messages" in obj:
+ lines.append(self.fmt.rst_subtitle("Sub-messages"))
+ lines.append(self.parse_sub_messages(obj["sub-messages"], family))
+
+ return "\n".join(lines)
+
+ # Main functions
+ # ==============
+
+ def parse_yaml_file(self, filename: str) -> str:
+ """Transform the YAML specified by filename into an RST-formatted string"""
+ with open(filename, "r", encoding="utf-8") as spec_file:
+ numbered_yaml = yaml.load(spec_file, Loader=NumberedSafeLoader)
+ content = self.parse_yaml(numbered_yaml)
+
+ return content
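+
+
+# Usage sketch (illustration only): the name of the class wrapping the
+# parse_*() methods above is not visible in this hunk, and the spec path
+# below is an assumption.
+#
+#   gen = YnlDocGenerator()   # hypothetical name for the enclosing class
+#   rst = gen.parse_yaml_file('Documentation/netlink/specs/netdev.yaml')
+#   with open('netdev.rst', 'w', encoding='utf-8') as out:
+#       out.write(rst)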
diff --git a/tools/net/ynl/lib/nlspec.py b/tools/net/ynl/pyynl/lib/nlspec.py
index 44f13e383e8a..85c17fe01e35 100644
--- a/tools/net/ynl/lib/nlspec.py
+++ b/tools/net/ynl/pyynl/lib/nlspec.py
@@ -131,6 +131,9 @@ class SpecEnumSet(SpecElement):
def has_doc(self):
if 'doc' in self.yaml:
return True
+ return self.has_entry_doc()
+
+ def has_entry_doc(self):
for entry in self.entries.values():
if entry.has_doc():
return True
@@ -144,7 +147,7 @@ class SpecEnumSet(SpecElement):
class SpecAttr(SpecElement):
- """ Single Netlink atttribute type
+ """ Single Netlink attribute type
Represents a single attribute type within an attr space.
@@ -216,7 +219,10 @@ class SpecAttrSet(SpecElement):
else:
real_set = family.attr_sets[self.subset_of]
for elem in self.yaml['attributes']:
- attr = real_set[elem['name']]
+ real_attr = real_set[elem['name']]
+ combined_elem = real_attr.yaml | elem
+ attr = self.new_attr(combined_elem, real_attr.value)
+
self.attrs[attr.name] = attr
self.attrs_by_val[attr.value] = attr
@@ -248,6 +254,7 @@ class SpecStructMember(SpecElement):
len integer, optional byte length of binary types
display_hint string, hint to help choose format specifier
when displaying the value
+ struct string, name of nested struct type
"""
def __init__(self, family, yaml):
super().__init__(family, yaml)
@@ -256,6 +263,7 @@ class SpecStructMember(SpecElement):
self.enum = yaml.get('enum')
self.len = yaml.get('len')
self.display_hint = yaml.get('display-hint')
+ self.struct = yaml.get('struct')
class SpecStruct(SpecElement):
@@ -306,10 +314,9 @@ class SpecSubMessage(SpecElement):
class SpecSubMessageFormat(SpecElement):
- """ Netlink sub-message definition
+ """ Netlink sub-message format definition
- Represents a set of sub-message formats for polymorphic nlattrs
- that contain type-specific sub messages.
+ Represents a single format for a sub-message.
Attributes:
value attribute value to match against type selector
@@ -334,6 +341,7 @@ class SpecOperation(SpecElement):
req_value numerical ID when serialized, user -> kernel
rsp_value numerical ID when serialized, user <- kernel
+ modes supported operation modes (do, dump, event etc.)
is_call bool, whether the operation is a call
is_async bool, whether the operation is a notification
is_resv bool, whether the operation does not exist (it's just a reserved ID)
@@ -349,6 +357,7 @@ class SpecOperation(SpecElement):
self.req_value = req_value
self.rsp_value = rsp_value
+ self.modes = yaml.keys() & {'do', 'dump', 'event', 'notify'}
self.is_call = 'do' in yaml or 'dump' in yaml
self.is_async = 'notify' in yaml or 'event' in yaml
self.is_resv = not self.is_async and not self.is_call
@@ -417,6 +426,7 @@ class SpecFamily(SpecElement):
consts dict of all constants/enums
fixed_header string, optional name of family default fixed header struct
mcast_groups dict of all multicast groups (index by name)
+ kernel_family dict of kernel family attributes
"""
def __init__(self, spec_path, schema_path=None, exclude_ops=None):
with open(spec_path, "r") as stream:
@@ -460,6 +470,7 @@ class SpecFamily(SpecElement):
self.ntfs = collections.OrderedDict()
self.consts = collections.OrderedDict()
self.mcast_groups = collections.OrderedDict()
+ self.kernel_family = collections.OrderedDict(self.yaml.get('kernel-family', {}))
last_exception = None
while len(self._resolution_list) > 0:
@@ -490,7 +501,7 @@ class SpecFamily(SpecElement):
return SpecStruct(self, elem)
def new_sub_message(self, elem):
- return SpecSubMessage(self, elem);
+ return SpecSubMessage(self, elem)
def new_operation(self, elem, req_val, rsp_val):
return SpecOperation(self, elem, req_val, rsp_val)
diff --git a/tools/net/ynl/pyynl/lib/ynl.py b/tools/net/ynl/pyynl/lib/ynl.py
new file mode 100644
index 000000000000..62383c70ebb9
--- /dev/null
+++ b/tools/net/ynl/pyynl/lib/ynl.py
@@ -0,0 +1,1150 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+
+from collections import namedtuple
+from enum import Enum
+import functools
+import os
+import random
+import socket
+import struct
+from struct import Struct
+import sys
+import ipaddress
+import uuid
+import queue
+import selectors
+import time
+
+from .nlspec import SpecFamily
+
+#
+# Generic Netlink code which should really be in some library, but I can't quickly find one.
+#
+
+
+class Netlink:
+ # Netlink socket
+ SOL_NETLINK = 270
+
+ NETLINK_ADD_MEMBERSHIP = 1
+ NETLINK_CAP_ACK = 10
+ NETLINK_EXT_ACK = 11
+ NETLINK_GET_STRICT_CHK = 12
+
+ # Netlink message
+ NLMSG_ERROR = 2
+ NLMSG_DONE = 3
+
+ NLM_F_REQUEST = 1
+ NLM_F_ACK = 4
+ NLM_F_ROOT = 0x100
+ NLM_F_MATCH = 0x200
+
+ NLM_F_REPLACE = 0x100
+ NLM_F_EXCL = 0x200
+ NLM_F_CREATE = 0x400
+ NLM_F_APPEND = 0x800
+
+ NLM_F_CAPPED = 0x100
+ NLM_F_ACK_TLVS = 0x200
+
+ NLM_F_DUMP = NLM_F_ROOT | NLM_F_MATCH
+
+ NLA_F_NESTED = 0x8000
+ NLA_F_NET_BYTEORDER = 0x4000
+
+ NLA_TYPE_MASK = NLA_F_NESTED | NLA_F_NET_BYTEORDER
+
+ # Genetlink defines
+ NETLINK_GENERIC = 16
+
+ GENL_ID_CTRL = 0x10
+
+ # nlctrl
+ CTRL_CMD_GETFAMILY = 3
+
+ CTRL_ATTR_FAMILY_ID = 1
+ CTRL_ATTR_FAMILY_NAME = 2
+ CTRL_ATTR_MAXATTR = 5
+ CTRL_ATTR_MCAST_GROUPS = 7
+
+ CTRL_ATTR_MCAST_GRP_NAME = 1
+ CTRL_ATTR_MCAST_GRP_ID = 2
+
+ # Extack types
+ NLMSGERR_ATTR_MSG = 1
+ NLMSGERR_ATTR_OFFS = 2
+ NLMSGERR_ATTR_COOKIE = 3
+ NLMSGERR_ATTR_POLICY = 4
+ NLMSGERR_ATTR_MISS_TYPE = 5
+ NLMSGERR_ATTR_MISS_NEST = 6
+
+ # Policy types
+ NL_POLICY_TYPE_ATTR_TYPE = 1
+ NL_POLICY_TYPE_ATTR_MIN_VALUE_S = 2
+ NL_POLICY_TYPE_ATTR_MAX_VALUE_S = 3
+ NL_POLICY_TYPE_ATTR_MIN_VALUE_U = 4
+ NL_POLICY_TYPE_ATTR_MAX_VALUE_U = 5
+ NL_POLICY_TYPE_ATTR_MIN_LENGTH = 6
+ NL_POLICY_TYPE_ATTR_MAX_LENGTH = 7
+ NL_POLICY_TYPE_ATTR_POLICY_IDX = 8
+ NL_POLICY_TYPE_ATTR_POLICY_MAXTYPE = 9
+ NL_POLICY_TYPE_ATTR_BITFIELD32_MASK = 10
+ NL_POLICY_TYPE_ATTR_PAD = 11
+ NL_POLICY_TYPE_ATTR_MASK = 12
+
+ AttrType = Enum('AttrType', ['flag', 'u8', 'u16', 'u32', 'u64',
+ 's8', 's16', 's32', 's64',
+ 'binary', 'string', 'nul-string',
+ 'nested', 'nested-array',
+ 'bitfield32', 'sint', 'uint'])
+
+class NlError(Exception):
+ def __init__(self, nl_msg):
+ self.nl_msg = nl_msg
+ self.error = -nl_msg.error
+
+ def __str__(self):
+ return f"Netlink error: {os.strerror(self.error)}\n{self.nl_msg}"
+
+
+class ConfigError(Exception):
+ pass
+
+
+class NlAttr:
+ ScalarFormat = namedtuple('ScalarFormat', ['native', 'big', 'little'])
+ type_formats = {
+ 'u8' : ScalarFormat(Struct('B'), Struct("B"), Struct("B")),
+ 's8' : ScalarFormat(Struct('b'), Struct("b"), Struct("b")),
+ 'u16': ScalarFormat(Struct('H'), Struct(">H"), Struct("<H")),
+ 's16': ScalarFormat(Struct('h'), Struct(">h"), Struct("<h")),
+ 'u32': ScalarFormat(Struct('I'), Struct(">I"), Struct("<I")),
+ 's32': ScalarFormat(Struct('i'), Struct(">i"), Struct("<i")),
+ 'u64': ScalarFormat(Struct('Q'), Struct(">Q"), Struct("<Q")),
+ 's64': ScalarFormat(Struct('q'), Struct(">q"), Struct("<q"))
+ }
+
+ def __init__(self, raw, offset):
+ self._len, self._type = struct.unpack("HH", raw[offset : offset + 4])
+ self.type = self._type & ~Netlink.NLA_TYPE_MASK
+ self.is_nest = self._type & Netlink.NLA_F_NESTED
+ self.payload_len = self._len
+ self.full_len = (self.payload_len + 3) & ~3
+ self.raw = raw[offset + 4 : offset + self.payload_len]
+
+ @classmethod
+ def get_format(cls, attr_type, byte_order=None):
+ format = cls.type_formats[attr_type]
+ if byte_order:
+ return format.big if byte_order == "big-endian" \
+ else format.little
+ return format.native
+
+ def as_scalar(self, attr_type, byte_order=None):
+ format = self.get_format(attr_type, byte_order)
+ return format.unpack(self.raw)[0]
+
+ def as_auto_scalar(self, attr_type, byte_order=None):
+ if len(self.raw) != 4 and len(self.raw) != 8:
+            raise Exception(f"Auto-scalar payload length must be 4 or 8 bytes, got {len(self.raw)}")
+ real_type = attr_type[0] + str(len(self.raw) * 8)
+ format = self.get_format(real_type, byte_order)
+ return format.unpack(self.raw)[0]
+
+ def as_strz(self):
+ return self.raw.decode('ascii')[:-1]
+
+ def as_bin(self):
+ return self.raw
+
+ def as_c_array(self, type):
+ format = self.get_format(type)
+ return [ x[0] for x in format.iter_unpack(self.raw) ]
+
+ def __repr__(self):
+ return f"[type:{self.type} len:{self._len}] {self.raw}"
+
+
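+# Illustration only (hypothetical helper, not part of this file's API):
+# a netlink attribute is a 2-byte length (which includes the 4-byte
+# header), a 2-byte type, then the payload padded to a 4-byte boundary;
+# NlAttr above parses exactly that layout.
+def _example_parse_u32_attr():
+    payload = struct.pack('I', 0x1234)                       # u32 payload
+    raw = struct.pack('HH', 4 + len(payload), 7) + payload   # type 7, len 8
+    attr = NlAttr(raw, 0)
+    return attr.type, attr.as_scalar('u32')                  # -> (7, 0x1234)
+
+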
+class NlAttrs:
+ def __init__(self, msg, offset=0):
+ self.attrs = []
+
+ while offset < len(msg):
+ attr = NlAttr(msg, offset)
+ offset += attr.full_len
+ self.attrs.append(attr)
+
+ def __iter__(self):
+ yield from self.attrs
+
+ def __repr__(self):
+ msg = ''
+ for a in self.attrs:
+ if msg:
+ msg += '\n'
+ msg += repr(a)
+ return msg
+
+
+class NlMsg:
+ def __init__(self, msg, offset, attr_space=None):
+ self.hdr = msg[offset : offset + 16]
+
+ self.nl_len, self.nl_type, self.nl_flags, self.nl_seq, self.nl_portid = \
+ struct.unpack("IHHII", self.hdr)
+
+ self.raw = msg[offset + 16 : offset + self.nl_len]
+
+ self.error = 0
+ self.done = 0
+
+ extack_off = None
+ if self.nl_type == Netlink.NLMSG_ERROR:
+ self.error = struct.unpack("i", self.raw[0:4])[0]
+ self.done = 1
+ extack_off = 20
+ elif self.nl_type == Netlink.NLMSG_DONE:
+ self.error = struct.unpack("i", self.raw[0:4])[0]
+ self.done = 1
+ extack_off = 4
+
+ self.extack = None
+ if self.nl_flags & Netlink.NLM_F_ACK_TLVS and extack_off:
+ self.extack = dict()
+ extack_attrs = NlAttrs(self.raw[extack_off:])
+ for extack in extack_attrs:
+ if extack.type == Netlink.NLMSGERR_ATTR_MSG:
+ self.extack['msg'] = extack.as_strz()
+ elif extack.type == Netlink.NLMSGERR_ATTR_MISS_TYPE:
+ self.extack['miss-type'] = extack.as_scalar('u32')
+ elif extack.type == Netlink.NLMSGERR_ATTR_MISS_NEST:
+ self.extack['miss-nest'] = extack.as_scalar('u32')
+ elif extack.type == Netlink.NLMSGERR_ATTR_OFFS:
+ self.extack['bad-attr-offs'] = extack.as_scalar('u32')
+ elif extack.type == Netlink.NLMSGERR_ATTR_POLICY:
+ self.extack['policy'] = self._decode_policy(extack.raw)
+ else:
+ if 'unknown' not in self.extack:
+ self.extack['unknown'] = []
+ self.extack['unknown'].append(extack)
+
+ if attr_space:
+ self.annotate_extack(attr_space)
+
+ def _decode_policy(self, raw):
+ policy = {}
+ for attr in NlAttrs(raw):
+ if attr.type == Netlink.NL_POLICY_TYPE_ATTR_TYPE:
+ type = attr.as_scalar('u32')
+ policy['type'] = Netlink.AttrType(type).name
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_MIN_VALUE_S:
+ policy['min-value'] = attr.as_scalar('s64')
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_MAX_VALUE_S:
+ policy['max-value'] = attr.as_scalar('s64')
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_MIN_VALUE_U:
+ policy['min-value'] = attr.as_scalar('u64')
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_MAX_VALUE_U:
+ policy['max-value'] = attr.as_scalar('u64')
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_MIN_LENGTH:
+ policy['min-length'] = attr.as_scalar('u32')
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_MAX_LENGTH:
+ policy['max-length'] = attr.as_scalar('u32')
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_BITFIELD32_MASK:
+ policy['bitfield32-mask'] = attr.as_scalar('u32')
+ elif attr.type == Netlink.NL_POLICY_TYPE_ATTR_MASK:
+ policy['mask'] = attr.as_scalar('u64')
+ return policy
+
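+    # Hypothetical helper, illustration only: two hand-built policy TLVs
+    # (attribute type, then a signed max value) decode into a small dict.
+    def _example_decode_policy(self):
+        raw = struct.pack('HHI', 8, Netlink.NL_POLICY_TYPE_ATTR_TYPE,
+                          Netlink.AttrType['u32'].value)
+        raw += struct.pack('HH', 12, Netlink.NL_POLICY_TYPE_ATTR_MAX_VALUE_S)
+        raw += struct.pack('q', 15)
+        return self._decode_policy(raw)  # {'type': 'u32', 'max-value': 15}
+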
+ def annotate_extack(self, attr_space):
+ """ Make extack more human friendly with attribute information """
+
+ # We don't have the ability to parse nests yet, so only do global
+ if 'miss-type' in self.extack and 'miss-nest' not in self.extack:
+ miss_type = self.extack['miss-type']
+ if miss_type in attr_space.attrs_by_val:
+ spec = attr_space.attrs_by_val[miss_type]
+ self.extack['miss-type'] = spec['name']
+ if 'doc' in spec:
+ self.extack['miss-type-doc'] = spec['doc']
+
+ def cmd(self):
+ return self.nl_type
+
+ def __repr__(self):
+ msg = f"nl_len = {self.nl_len} ({len(self.raw)}) nl_flags = 0x{self.nl_flags:x} nl_type = {self.nl_type}"
+ if self.error:
+ msg += '\n\terror: ' + str(self.error)
+ if self.extack:
+ msg += '\n\textack: ' + repr(self.extack)
+ return msg
+
+
+class NlMsgs:
+ def __init__(self, data):
+ self.msgs = []
+
+ offset = 0
+ while offset < len(data):
+ msg = NlMsg(data, offset)
+ offset += msg.nl_len
+ self.msgs.append(msg)
+
+ def __iter__(self):
+ yield from self.msgs
+
+
+genl_family_name_to_id = None
+
+
+def _genl_msg(nl_type, nl_flags, genl_cmd, genl_version, seq=None):
+ # we prepend length in _genl_msg_finalize()
+ if seq is None:
+ seq = random.randint(1, 1024)
+ nlmsg = struct.pack("HHII", nl_type, nl_flags, seq, 0)
+ genlmsg = struct.pack("BBH", genl_cmd, genl_version, 0)
+ return nlmsg + genlmsg
+
+
+def _genl_msg_finalize(msg):
+ return struct.pack("I", len(msg) + 4) + msg
+
+
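+# Illustration only (hypothetical helper): the CTRL_CMD_GETFAMILY dump
+# request that _genl_load_families() sends below is 20 bytes on the wire -
+# the 16-byte struct nlmsghdr (its length field prepended by
+# _genl_msg_finalize()) followed by the 4-byte struct genlmsghdr.
+def _example_getfamily_request():
+    msg = _genl_msg(Netlink.GENL_ID_CTRL,
+                    Netlink.NLM_F_REQUEST | Netlink.NLM_F_ACK | Netlink.NLM_F_DUMP,
+                    Netlink.CTRL_CMD_GETFAMILY, 1, seq=1)
+    return _genl_msg_finalize(msg)   # len(result) == 20
+
+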
+def _genl_load_families():
+ with socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, Netlink.NETLINK_GENERIC) as sock:
+ sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_CAP_ACK, 1)
+
+ msg = _genl_msg(Netlink.GENL_ID_CTRL,
+ Netlink.NLM_F_REQUEST | Netlink.NLM_F_ACK | Netlink.NLM_F_DUMP,
+ Netlink.CTRL_CMD_GETFAMILY, 1)
+ msg = _genl_msg_finalize(msg)
+
+ sock.send(msg, 0)
+
+ global genl_family_name_to_id
+ genl_family_name_to_id = dict()
+
+ while True:
+ reply = sock.recv(128 * 1024)
+ nms = NlMsgs(reply)
+ for nl_msg in nms:
+ if nl_msg.error:
+ print("Netlink error:", nl_msg.error)
+ return
+ if nl_msg.done:
+ return
+
+ gm = GenlMsg(nl_msg)
+ fam = dict()
+ for attr in NlAttrs(gm.raw):
+ if attr.type == Netlink.CTRL_ATTR_FAMILY_ID:
+ fam['id'] = attr.as_scalar('u16')
+ elif attr.type == Netlink.CTRL_ATTR_FAMILY_NAME:
+ fam['name'] = attr.as_strz()
+ elif attr.type == Netlink.CTRL_ATTR_MAXATTR:
+ fam['maxattr'] = attr.as_scalar('u32')
+ elif attr.type == Netlink.CTRL_ATTR_MCAST_GROUPS:
+ fam['mcast'] = dict()
+ for entry in NlAttrs(attr.raw):
+ mcast_name = None
+ mcast_id = None
+ for entry_attr in NlAttrs(entry.raw):
+ if entry_attr.type == Netlink.CTRL_ATTR_MCAST_GRP_NAME:
+ mcast_name = entry_attr.as_strz()
+ elif entry_attr.type == Netlink.CTRL_ATTR_MCAST_GRP_ID:
+ mcast_id = entry_attr.as_scalar('u32')
+ if mcast_name and mcast_id is not None:
+ fam['mcast'][mcast_name] = mcast_id
+ if 'name' in fam and 'id' in fam:
+ genl_family_name_to_id[fam['name']] = fam
+
+
+class GenlMsg:
+ def __init__(self, nl_msg):
+ self.nl = nl_msg
+ self.genl_cmd, self.genl_version, _ = struct.unpack_from("BBH", nl_msg.raw, 0)
+ self.raw = nl_msg.raw[4:]
+
+ def cmd(self):
+ return self.genl_cmd
+
+ def __repr__(self):
+ msg = repr(self.nl)
+ msg += f"\tgenl_cmd = {self.genl_cmd} genl_ver = {self.genl_version}\n"
+ for a in self.raw_attrs:
+ msg += '\t\t' + repr(a) + '\n'
+ return msg
+
+
+class NetlinkProtocol:
+ def __init__(self, family_name, proto_num):
+ self.family_name = family_name
+ self.proto_num = proto_num
+
+ def _message(self, nl_type, nl_flags, seq=None):
+ if seq is None:
+ seq = random.randint(1, 1024)
+ nlmsg = struct.pack("HHII", nl_type, nl_flags, seq, 0)
+ return nlmsg
+
+ def message(self, flags, command, version, seq=None):
+ return self._message(command, flags, seq)
+
+ def _decode(self, nl_msg):
+ return nl_msg
+
+ def decode(self, ynl, nl_msg, op):
+ msg = self._decode(nl_msg)
+ if op is None:
+ op = ynl.rsp_by_value[msg.cmd()]
+ fixed_header_size = ynl._struct_size(op.fixed_header)
+ msg.raw_attrs = NlAttrs(msg.raw, fixed_header_size)
+ return msg
+
+ def get_mcast_id(self, mcast_name, mcast_groups):
+ if mcast_name not in mcast_groups:
+ raise Exception(f'Multicast group "{mcast_name}" not present in the spec')
+ return mcast_groups[mcast_name].value
+
+ def msghdr_size(self):
+ return 16
+
+
+class GenlProtocol(NetlinkProtocol):
+ def __init__(self, family_name):
+ super().__init__(family_name, Netlink.NETLINK_GENERIC)
+
+ global genl_family_name_to_id
+ if genl_family_name_to_id is None:
+ _genl_load_families()
+
+ self.genl_family = genl_family_name_to_id[family_name]
+ self.family_id = genl_family_name_to_id[family_name]['id']
+
+ def message(self, flags, command, version, seq=None):
+ nlmsg = self._message(self.family_id, flags, seq)
+ genlmsg = struct.pack("BBH", command, version, 0)
+ return nlmsg + genlmsg
+
+ def _decode(self, nl_msg):
+ return GenlMsg(nl_msg)
+
+ def get_mcast_id(self, mcast_name, mcast_groups):
+ if mcast_name not in self.genl_family['mcast']:
+ raise Exception(f'Multicast group "{mcast_name}" not present in the family')
+ return self.genl_family['mcast'][mcast_name]
+
+ def msghdr_size(self):
+ return super().msghdr_size() + 4
+
+
+class SpaceAttrs:
+ SpecValuesPair = namedtuple('SpecValuesPair', ['spec', 'values'])
+
+ def __init__(self, attr_space, attrs, outer = None):
+ outer_scopes = outer.scopes if outer else []
+ inner_scope = self.SpecValuesPair(attr_space, attrs)
+ self.scopes = [inner_scope] + outer_scopes
+
+ def lookup(self, name):
+ for scope in self.scopes:
+ if name in scope.spec:
+ if name in scope.values:
+ return scope.values[name]
+ spec_name = scope.spec.yaml['name']
+ raise Exception(
+ f"No value for '{name}' in attribute space '{spec_name}'")
+ raise Exception(f"Attribute '{name}' not defined in any attribute-set")
+
+
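+# Illustration only (hypothetical helper): SpaceAttrs resolves a name by
+# walking from the innermost scope outwards. Plain dicts stand in for the
+# real attribute-set specs here, since only membership tests are needed on
+# the success path.
+def _example_scoped_lookup():
+    outer = SpaceAttrs({'kind': None}, {'kind': 'tcp'})
+    inner = SpaceAttrs({'port': None}, {'port': 80}, outer)
+    return inner.lookup('port'), inner.lookup('kind')   # -> (80, 'tcp')
+
+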
+#
+# YNL implementation details.
+#
+
+
+class YnlFamily(SpecFamily):
+ def __init__(self, def_path, schema=None, process_unknown=False,
+ recv_size=0):
+ super().__init__(def_path, schema)
+
+ self.include_raw = False
+ self.process_unknown = process_unknown
+
+ try:
+ if self.proto == "netlink-raw":
+ self.nlproto = NetlinkProtocol(self.yaml['name'],
+ self.yaml['protonum'])
+ else:
+ self.nlproto = GenlProtocol(self.yaml['name'])
+ except KeyError:
+ raise Exception(f"Family '{self.yaml['name']}' not supported by the kernel")
+
+ self._recv_dbg = False
+ # Note that netlink will use conservative (min) message size for
+        # the first dump recv() on the socket; our setting will only matter
+ # from the second recv() on.
+ self._recv_size = recv_size if recv_size else 131072
+ # Netlink will always allocate at least PAGE_SIZE - sizeof(skb_shinfo)
+ # for a message, so smaller receive sizes will lead to truncation.
+ # Note that the min size for other families may be larger than 4k!
+ if self._recv_size < 4000:
+ raise ConfigError()
+
+ self.sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, self.nlproto.proto_num)
+ self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_CAP_ACK, 1)
+ self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_EXT_ACK, 1)
+ self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_GET_STRICT_CHK, 1)
+
+ self.async_msg_ids = set()
+ self.async_msg_queue = queue.Queue()
+
+ for msg in self.msgs.values():
+ if msg.is_async:
+ self.async_msg_ids.add(msg.rsp_value)
+
+ for op_name, op in self.ops.items():
+ bound_f = functools.partial(self._op, op_name)
+ setattr(self, op.ident_name, bound_f)
+
+
+ def ntf_subscribe(self, mcast_name):
+ mcast_id = self.nlproto.get_mcast_id(mcast_name, self.mcast_groups)
+ self.sock.bind((0, 0))
+ self.sock.setsockopt(Netlink.SOL_NETLINK, Netlink.NETLINK_ADD_MEMBERSHIP,
+ mcast_id)
+
+ def set_recv_dbg(self, enabled):
+ self._recv_dbg = enabled
+
+ def _recv_dbg_print(self, reply, nl_msgs):
+ if not self._recv_dbg:
+ return
+ print("Recv: read", len(reply), "bytes,",
+ len(nl_msgs.msgs), "messages", file=sys.stderr)
+ for nl_msg in nl_msgs:
+ print(" ", nl_msg, file=sys.stderr)
+
+ def _encode_enum(self, attr_spec, value):
+ enum = self.consts[attr_spec['enum']]
+ if enum.type == 'flags' or attr_spec.get('enum-as-flags', False):
+ scalar = 0
+ if isinstance(value, str):
+ value = [value]
+ for single_value in value:
+ scalar += enum.entries[single_value].user_value(as_flags = True)
+ return scalar
+ else:
+ return enum.entries[value].user_value()
+
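+    # For a 'flags' enum (or enum-as-flags) _encode_enum() ORs the bit
+    # values of a name or list of names into a single scalar; for a plain
+    # enum it returns the one numeric value. With a hypothetical flags enum
+    # whose entries are 'tx' (bit 0) and 'rx' (bit 1):
+    #
+    #   self._encode_enum(attr_spec, ['tx', 'rx'])  ->  3
+    #   self._encode_enum(attr_spec, 'tx')          ->  1
+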
+ def _get_scalar(self, attr_spec, value):
+ try:
+ return int(value)
+ except (ValueError, TypeError) as e:
+ if 'enum' in attr_spec:
+ return self._encode_enum(attr_spec, value)
+ if attr_spec.display_hint:
+ return self._from_string(value, attr_spec)
+ raise e
+
+ def _add_attr(self, space, name, value, search_attrs):
+ try:
+ attr = self.attr_sets[space][name]
+ except KeyError:
+ raise Exception(f"Space '{space}' has no attribute '{name}'")
+ nl_type = attr.value
+
+ if attr.is_multi and isinstance(value, list):
+ attr_payload = b''
+ for subvalue in value:
+ attr_payload += self._add_attr(space, name, subvalue, search_attrs)
+ return attr_payload
+
+ if attr["type"] == 'nest':
+ nl_type |= Netlink.NLA_F_NESTED
+ sub_space = attr['nested-attributes']
+ attr_payload = self._add_nest_attrs(value, sub_space, search_attrs)
+ elif attr['type'] == 'indexed-array' and attr['sub-type'] == 'nest':
+ nl_type |= Netlink.NLA_F_NESTED
+ sub_space = attr['nested-attributes']
+ attr_payload = self._encode_indexed_array(value, sub_space,
+ search_attrs)
+ elif attr["type"] == 'flag':
+ if not value:
+ # If value is absent or false then skip attribute creation.
+ return b''
+ attr_payload = b''
+ elif attr["type"] == 'string':
+ attr_payload = str(value).encode('ascii') + b'\x00'
+ elif attr["type"] == 'binary':
+ if value is None:
+ attr_payload = b''
+ elif isinstance(value, bytes):
+ attr_payload = value
+ elif isinstance(value, str):
+ if attr.display_hint:
+ attr_payload = self._from_string(value, attr)
+ else:
+ attr_payload = bytes.fromhex(value)
+ elif isinstance(value, dict) and attr.struct_name:
+ attr_payload = self._encode_struct(attr.struct_name, value)
+ elif isinstance(value, list) and attr.sub_type in NlAttr.type_formats:
+ format = NlAttr.get_format(attr.sub_type)
+ attr_payload = b''.join([format.pack(x) for x in value])
+ else:
+ raise Exception(f'Unknown type for binary attribute, value: {value}')
+ elif attr['type'] in NlAttr.type_formats or attr.is_auto_scalar:
+ scalar = self._get_scalar(attr, value)
+ if attr.is_auto_scalar:
+ attr_type = attr["type"][0] + ('32' if scalar.bit_length() <= 32 else '64')
+ else:
+ attr_type = attr["type"]
+ format = NlAttr.get_format(attr_type, attr.byte_order)
+ attr_payload = format.pack(scalar)
+ elif attr['type'] in "bitfield32":
+ scalar_value = self._get_scalar(attr, value["value"])
+ scalar_selector = self._get_scalar(attr, value["selector"])
+ attr_payload = struct.pack("II", scalar_value, scalar_selector)
+ elif attr['type'] == 'sub-message':
+ msg_format, _ = self._resolve_selector(attr, search_attrs)
+ attr_payload = b''
+ if msg_format.fixed_header:
+ attr_payload += self._encode_struct(msg_format.fixed_header, value)
+ if msg_format.attr_set:
+ if msg_format.attr_set in self.attr_sets:
+ nl_type |= Netlink.NLA_F_NESTED
+ sub_attrs = SpaceAttrs(msg_format.attr_set, value, search_attrs)
+ for subname, subvalue in value.items():
+ attr_payload += self._add_attr(msg_format.attr_set,
+ subname, subvalue, sub_attrs)
+ else:
+ raise Exception(f"Unknown attribute-set '{msg_format.attr_set}'")
+ else:
+ raise Exception(f'Unknown type at {space} {name} {value} {attr["type"]}')
+
+ return self._add_attr_raw(nl_type, attr_payload)
+
+ def _add_attr_raw(self, nl_type, attr_payload):
+ pad = b'\x00' * ((4 - len(attr_payload) % 4) % 4)
+ return struct.pack('HH', len(attr_payload) + 4, nl_type) + attr_payload + pad
+
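+    # Hypothetical helper, illustration only: a 5-byte payload is emitted
+    # as a 4-byte header, the payload, then 3 bytes of zero padding - 12
+    # bytes on the wire while the recorded nla_len stays 9.
+    def _example_attr_padding(self):
+        return len(self._add_attr_raw(1, b'\x00' * 5))   # -> 12
+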
+ def _add_nest_attrs(self, value, sub_space, search_attrs):
+ sub_attrs = SpaceAttrs(self.attr_sets[sub_space], value, search_attrs)
+ attr_payload = b''
+ for subname, subvalue in value.items():
+ attr_payload += self._add_attr(sub_space, subname, subvalue,
+ sub_attrs)
+ return attr_payload
+
+ def _encode_indexed_array(self, vals, sub_space, search_attrs):
+ attr_payload = b''
+ for i, val in enumerate(vals):
+ idx = i | Netlink.NLA_F_NESTED
+ val_payload = self._add_nest_attrs(val, sub_space, search_attrs)
+ attr_payload += self._add_attr_raw(idx, val_payload)
+ return attr_payload
+
+ def _get_enum_or_unknown(self, enum, raw):
+ try:
+ name = enum.entries_by_val[raw].name
+ except KeyError as error:
+ if self.process_unknown:
+ name = f"Unknown({raw})"
+ else:
+ raise error
+ return name
+
+ def _decode_enum(self, raw, attr_spec):
+ enum = self.consts[attr_spec['enum']]
+ if enum.type == 'flags' or attr_spec.get('enum-as-flags', False):
+ i = 0
+ value = set()
+ while raw:
+ if raw & 1:
+ value.add(self._get_enum_or_unknown(enum, i))
+ raw >>= 1
+ i += 1
+ else:
+ value = self._get_enum_or_unknown(enum, raw)
+ return value
+
+ def _decode_binary(self, attr, attr_spec):
+ if attr_spec.struct_name:
+ decoded = self._decode_struct(attr.raw, attr_spec.struct_name)
+ elif attr_spec.sub_type:
+ decoded = attr.as_c_array(attr_spec.sub_type)
+ if 'enum' in attr_spec:
+ decoded = [ self._decode_enum(x, attr_spec) for x in decoded ]
+ elif attr_spec.display_hint:
+ decoded = [ self._formatted_string(x, attr_spec.display_hint)
+ for x in decoded ]
+ else:
+ decoded = attr.as_bin()
+ if attr_spec.display_hint:
+ decoded = self._formatted_string(decoded, attr_spec.display_hint)
+ return decoded
+
+ def _decode_array_attr(self, attr, attr_spec):
+ decoded = []
+ offset = 0
+ while offset < len(attr.raw):
+ item = NlAttr(attr.raw, offset)
+ offset += item.full_len
+
+ if attr_spec["sub-type"] == 'nest':
+ subattrs = self._decode(NlAttrs(item.raw), attr_spec['nested-attributes'])
+ decoded.append({ item.type: subattrs })
+ elif attr_spec["sub-type"] == 'binary':
+ subattr = item.as_bin()
+ if attr_spec.display_hint:
+ subattr = self._formatted_string(subattr, attr_spec.display_hint)
+ decoded.append(subattr)
+ elif attr_spec["sub-type"] in NlAttr.type_formats:
+ subattr = item.as_scalar(attr_spec['sub-type'], attr_spec.byte_order)
+ if 'enum' in attr_spec:
+ subattr = self._decode_enum(subattr, attr_spec)
+ elif attr_spec.display_hint:
+ subattr = self._formatted_string(subattr, attr_spec.display_hint)
+ decoded.append(subattr)
+ else:
+ raise Exception(f'Unknown {attr_spec["sub-type"]} with name {attr_spec["name"]}')
+ return decoded
+
+ def _decode_nest_type_value(self, attr, attr_spec):
+ decoded = {}
+ value = attr
+ for name in attr_spec['type-value']:
+ value = NlAttr(value.raw, 0)
+ decoded[name] = value.type
+ subattrs = self._decode(NlAttrs(value.raw), attr_spec['nested-attributes'])
+ decoded.update(subattrs)
+ return decoded
+
+ def _decode_unknown(self, attr):
+ if attr.is_nest:
+ return self._decode(NlAttrs(attr.raw), None)
+ else:
+ return attr.as_bin()
+
+ def _rsp_add(self, rsp, name, is_multi, decoded):
+ if is_multi is None:
+ if name in rsp and type(rsp[name]) is not list:
+ rsp[name] = [rsp[name]]
+ is_multi = True
+ else:
+ is_multi = False
+
+ if not is_multi:
+ rsp[name] = decoded
+ elif name in rsp:
+ rsp[name].append(decoded)
+ else:
+ rsp[name] = [decoded]
+
+ def _resolve_selector(self, attr_spec, search_attrs):
+ sub_msg = attr_spec.sub_message
+ if sub_msg not in self.sub_msgs:
+ raise Exception(f"No sub-message spec named {sub_msg} for {attr_spec.name}")
+ sub_msg_spec = self.sub_msgs[sub_msg]
+
+ selector = attr_spec.selector
+ value = search_attrs.lookup(selector)
+ if value not in sub_msg_spec.formats:
+ raise Exception(f"No message format for '{value}' in sub-message spec '{sub_msg}'")
+
+ spec = sub_msg_spec.formats[value]
+ return spec, value
+
+ def _decode_sub_msg(self, attr, attr_spec, search_attrs):
+ msg_format, _ = self._resolve_selector(attr_spec, search_attrs)
+ decoded = {}
+ offset = 0
+ if msg_format.fixed_header:
+ decoded.update(self._decode_struct(attr.raw, msg_format.fixed_header))
+ offset = self._struct_size(msg_format.fixed_header)
+ if msg_format.attr_set:
+ if msg_format.attr_set in self.attr_sets:
+ subdict = self._decode(NlAttrs(attr.raw, offset), msg_format.attr_set)
+ decoded.update(subdict)
+ else:
+ raise Exception(f"Unknown attribute-set '{msg_format.attr_set}' when decoding '{attr_spec.name}'")
+ return decoded
+
+ def _decode(self, attrs, space, outer_attrs = None):
+ rsp = dict()
+ if space:
+ attr_space = self.attr_sets[space]
+ search_attrs = SpaceAttrs(attr_space, rsp, outer_attrs)
+
+ for attr in attrs:
+ try:
+ attr_spec = attr_space.attrs_by_val[attr.type]
+ except (KeyError, UnboundLocalError):
+ if not self.process_unknown:
+ raise Exception(f"Space '{space}' has no attribute with value '{attr.type}'")
+ attr_name = f"UnknownAttr({attr.type})"
+ self._rsp_add(rsp, attr_name, None, self._decode_unknown(attr))
+ continue
+
+ try:
+ if attr_spec["type"] == 'nest':
+ subdict = self._decode(NlAttrs(attr.raw), attr_spec['nested-attributes'], search_attrs)
+ decoded = subdict
+ elif attr_spec["type"] == 'string':
+ decoded = attr.as_strz()
+ elif attr_spec["type"] == 'binary':
+ decoded = self._decode_binary(attr, attr_spec)
+ elif attr_spec["type"] == 'flag':
+ decoded = True
+ elif attr_spec.is_auto_scalar:
+ decoded = attr.as_auto_scalar(attr_spec['type'], attr_spec.byte_order)
+ if 'enum' in attr_spec:
+ decoded = self._decode_enum(decoded, attr_spec)
+ elif attr_spec["type"] in NlAttr.type_formats:
+ decoded = attr.as_scalar(attr_spec['type'], attr_spec.byte_order)
+ if 'enum' in attr_spec:
+ decoded = self._decode_enum(decoded, attr_spec)
+ elif attr_spec.display_hint:
+ decoded = self._formatted_string(decoded, attr_spec.display_hint)
+ elif attr_spec["type"] == 'indexed-array':
+ decoded = self._decode_array_attr(attr, attr_spec)
+ elif attr_spec["type"] == 'bitfield32':
+ value, selector = struct.unpack("II", attr.raw)
+ if 'enum' in attr_spec:
+ value = self._decode_enum(value, attr_spec)
+ selector = self._decode_enum(selector, attr_spec)
+ decoded = {"value": value, "selector": selector}
+ elif attr_spec["type"] == 'sub-message':
+ decoded = self._decode_sub_msg(attr, attr_spec, search_attrs)
+ elif attr_spec["type"] == 'nest-type-value':
+ decoded = self._decode_nest_type_value(attr, attr_spec)
+ else:
+ if not self.process_unknown:
+ raise Exception(f'Unknown {attr_spec["type"]} with name {attr_spec["name"]}')
+ decoded = self._decode_unknown(attr)
+
+ self._rsp_add(rsp, attr_spec["name"], attr_spec.is_multi, decoded)
+ except:
+ print(f"Error decoding '{attr_spec.name}' from '{space}'")
+ raise
+
+ return rsp
+
+ def _decode_extack_path(self, attrs, attr_set, offset, target, search_attrs):
+ for attr in attrs:
+ try:
+ attr_spec = attr_set.attrs_by_val[attr.type]
+ except KeyError:
+ raise Exception(f"Space '{attr_set.name}' has no attribute with value '{attr.type}'")
+ if offset > target:
+ break
+ if offset == target:
+ return '.' + attr_spec.name
+
+ if offset + attr.full_len <= target:
+ offset += attr.full_len
+ continue
+
+ pathname = attr_spec.name
+ if attr_spec['type'] == 'nest':
+ sub_attrs = self.attr_sets[attr_spec['nested-attributes']]
+ search_attrs = SpaceAttrs(sub_attrs, search_attrs.lookup(attr_spec['name']))
+ elif attr_spec['type'] == 'sub-message':
+ msg_format, value = self._resolve_selector(attr_spec, search_attrs)
+ if msg_format is None:
+ raise Exception(f"Can't resolve sub-message of {attr_spec['name']} for extack")
+ sub_attrs = self.attr_sets[msg_format.attr_set]
+ pathname += f"({value})"
+ else:
+ raise Exception(f"Can't dive into {attr.type} ({attr_spec['name']}) for extack")
+ offset += 4
+ subpath = self._decode_extack_path(NlAttrs(attr.raw), sub_attrs,
+ offset, target, search_attrs)
+ if subpath is None:
+ return None
+ return '.' + pathname + subpath
+
+ return None
+
+ def _decode_extack(self, request, op, extack, vals):
+ if 'bad-attr-offs' not in extack:
+ return
+
+ msg = self.nlproto.decode(self, NlMsg(request, 0, op.attr_set), op)
+ offset = self.nlproto.msghdr_size() + self._struct_size(op.fixed_header)
+ search_attrs = SpaceAttrs(op.attr_set, vals)
+ path = self._decode_extack_path(msg.raw_attrs, op.attr_set, offset,
+ extack['bad-attr-offs'], search_attrs)
+ if path:
+ del extack['bad-attr-offs']
+ extack['bad-attr'] = path
+
+ def _struct_size(self, name):
+ if name:
+ members = self.consts[name].members
+ size = 0
+ for m in members:
+ if m.type in ['pad', 'binary']:
+ if m.struct:
+ size += self._struct_size(m.struct)
+ else:
+ size += m.len
+ else:
+ format = NlAttr.get_format(m.type, m.byte_order)
+ size += format.size
+ return size
+ else:
+ return 0
+
+ def _decode_struct(self, data, name):
+ members = self.consts[name].members
+ attrs = dict()
+ offset = 0
+ for m in members:
+ value = None
+ if m.type == 'pad':
+ offset += m.len
+ elif m.type == 'binary':
+ if m.struct:
+ len = self._struct_size(m.struct)
+ value = self._decode_struct(data[offset : offset + len],
+ m.struct)
+ offset += len
+ else:
+ value = data[offset : offset + m.len]
+ offset += m.len
+ else:
+ format = NlAttr.get_format(m.type, m.byte_order)
+ [ value ] = format.unpack_from(data, offset)
+ offset += format.size
+ if value is not None:
+ if m.enum:
+ value = self._decode_enum(value, m)
+ elif m.display_hint:
+ value = self._formatted_string(value, m.display_hint)
+ attrs[m.name] = value
+ return attrs
+
+ def _encode_struct(self, name, vals):
+ members = self.consts[name].members
+ attr_payload = b''
+ for m in members:
+ value = vals.pop(m.name) if m.name in vals else None
+ if m.type == 'pad':
+ attr_payload += bytearray(m.len)
+ elif m.type == 'binary':
+ if m.struct:
+ if value is None:
+ value = dict()
+ attr_payload += self._encode_struct(m.struct, value)
+ else:
+ if value is None:
+ attr_payload += bytearray(m.len)
+ else:
+ attr_payload += bytes.fromhex(value)
+ else:
+ if value is None:
+ value = 0
+ format = NlAttr.get_format(m.type, m.byte_order)
+ attr_payload += format.pack(value)
+ return attr_payload
+
+ def _formatted_string(self, raw, display_hint):
+ if display_hint == 'mac':
+ formatted = ':'.join('%02x' % b for b in raw)
+ elif display_hint == 'hex':
+ if isinstance(raw, int):
+ formatted = hex(raw)
+ else:
+ formatted = bytes.hex(raw, ' ')
+ elif display_hint in [ 'ipv4', 'ipv6', 'ipv4-or-v6' ]:
+ formatted = format(ipaddress.ip_address(raw))
+ elif display_hint == 'uuid':
+ formatted = str(uuid.UUID(bytes=raw))
+ else:
+ formatted = raw
+ return formatted
+
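+    # Hypothetical helper, illustration only: how display hints map raw
+    # attribute payloads to human-readable strings.
+    def _example_display_hints(self):
+        mac = self._formatted_string(bytes.fromhex('001122334455'), 'mac')
+        ip = self._formatted_string(b'\x7f\x00\x00\x01', 'ipv4')
+        return mac, ip   # ('00:11:22:33:44:55', '127.0.0.1')
+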
+ def _from_string(self, string, attr_spec):
+ if attr_spec.display_hint in ['ipv4', 'ipv6', 'ipv4-or-v6']:
+ ip = ipaddress.ip_address(string)
+ if attr_spec['type'] == 'binary':
+ raw = ip.packed
+ else:
+ raw = int(ip)
+ elif attr_spec.display_hint == 'hex':
+ if attr_spec['type'] == 'binary':
+ raw = bytes.fromhex(string)
+ else:
+ raw = int(string, 16)
+ else:
+ raise Exception(f"Display hint '{attr_spec.display_hint}' not implemented"
+ f" when parsing '{attr_spec['name']}'")
+ return raw
+
+ def handle_ntf(self, decoded):
+ msg = dict()
+ if self.include_raw:
+ msg['raw'] = decoded
+ op = self.rsp_by_value[decoded.cmd()]
+ attrs = self._decode(decoded.raw_attrs, op.attr_set.name)
+ if op.fixed_header:
+ attrs.update(self._decode_struct(decoded.raw, op.fixed_header))
+
+ msg['name'] = op['name']
+ msg['msg'] = attrs
+ self.async_msg_queue.put(msg)
+
+ def check_ntf(self):
+ while True:
+ try:
+ reply = self.sock.recv(self._recv_size, socket.MSG_DONTWAIT)
+ except BlockingIOError:
+ return
+
+ nms = NlMsgs(reply)
+ self._recv_dbg_print(reply, nms)
+ for nl_msg in nms:
+ if nl_msg.error:
+ print("Netlink error in ntf!?", os.strerror(-nl_msg.error))
+ print(nl_msg)
+ continue
+ if nl_msg.done:
+ print("Netlink done while checking for ntf!?")
+ continue
+
+ decoded = self.nlproto.decode(self, nl_msg, None)
+ if decoded.cmd() not in self.async_msg_ids:
+ print("Unexpected msg id while checking for ntf", decoded)
+ continue
+
+ self.handle_ntf(decoded)
+
+ def poll_ntf(self, duration=None):
+ start_time = time.time()
+ selector = selectors.DefaultSelector()
+ selector.register(self.sock, selectors.EVENT_READ)
+
+ while True:
+ try:
+ yield self.async_msg_queue.get_nowait()
+ except queue.Empty:
+ if duration is not None:
+ timeout = start_time + duration - time.time()
+ if timeout <= 0:
+ return
+ else:
+ timeout = None
+ events = selector.select(timeout)
+ if events:
+ self.check_ntf()
+
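+    # Hypothetical helper, illustration only: subscribe to one of this
+    # family's multicast groups (the default group name is an assumption)
+    # and print notifications for roughly ten seconds.
+    def _example_print_notifications(self, group='mgmt'):
+        self.ntf_subscribe(group)
+        for ntf in self.poll_ntf(duration=10):
+            print(ntf['name'], ntf['msg'])
+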
+ def operation_do_attributes(self, name):
+ """
+ For a given operation name, find and return a supported
+ set of attributes (as a dict).
+ """
+ op = self.find_operation(name)
+ if not op:
+ return None
+
+ return op['do']['request']['attributes'].copy()
+
+ def _encode_message(self, op, vals, flags, req_seq):
+ nl_flags = Netlink.NLM_F_REQUEST | Netlink.NLM_F_ACK
+ for flag in flags or []:
+ nl_flags |= flag
+
+ msg = self.nlproto.message(nl_flags, op.req_value, 1, req_seq)
+ if op.fixed_header:
+ msg += self._encode_struct(op.fixed_header, vals)
+ search_attrs = SpaceAttrs(op.attr_set, vals)
+ for name, value in vals.items():
+ msg += self._add_attr(op.attr_set.name, name, value, search_attrs)
+ msg = _genl_msg_finalize(msg)
+ return msg
+
+ def _ops(self, ops):
+ reqs_by_seq = {}
+ req_seq = random.randint(1024, 65535)
+ payload = b''
+ for (method, vals, flags) in ops:
+ op = self.ops[method]
+ msg = self._encode_message(op, vals, flags, req_seq)
+ reqs_by_seq[req_seq] = (op, vals, msg, flags)
+ payload += msg
+ req_seq += 1
+
+ self.sock.send(payload, 0)
+
+ done = False
+ rsp = []
+ op_rsp = []
+ while not done:
+ reply = self.sock.recv(self._recv_size)
+ nms = NlMsgs(reply)
+ self._recv_dbg_print(reply, nms)
+ for nl_msg in nms:
+ if nl_msg.nl_seq in reqs_by_seq:
+ (op, vals, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq]
+ if nl_msg.extack:
+ nl_msg.annotate_extack(op.attr_set)
+ self._decode_extack(req_msg, op, nl_msg.extack, vals)
+ else:
+ op = None
+ req_flags = []
+
+ if nl_msg.error:
+ raise NlError(nl_msg)
+ if nl_msg.done:
+ if nl_msg.extack:
+ print("Netlink warning:")
+ print(nl_msg)
+
+ if Netlink.NLM_F_DUMP in req_flags:
+ rsp.append(op_rsp)
+ elif not op_rsp:
+ rsp.append(None)
+ elif len(op_rsp) == 1:
+ rsp.append(op_rsp[0])
+ else:
+ rsp.append(op_rsp)
+ op_rsp = []
+
+ del reqs_by_seq[nl_msg.nl_seq]
+ done = len(reqs_by_seq) == 0
+ break
+
+ decoded = self.nlproto.decode(self, nl_msg, op)
+
+ # Check if this is a reply to our request
+ if nl_msg.nl_seq not in reqs_by_seq or decoded.cmd() != op.rsp_value:
+ if decoded.cmd() in self.async_msg_ids:
+ self.handle_ntf(decoded)
+ continue
+ else:
+ print('Unexpected message: ' + repr(decoded))
+ continue
+
+ rsp_msg = self._decode(decoded.raw_attrs, op.attr_set.name)
+ if op.fixed_header:
+ rsp_msg.update(self._decode_struct(decoded.raw, op.fixed_header))
+ op_rsp.append(rsp_msg)
+
+ return rsp
+
+ def _op(self, method, vals, flags=None, dump=False):
+ req_flags = flags or []
+ if dump:
+ req_flags.append(Netlink.NLM_F_DUMP)
+
+ ops = [(method, vals, req_flags)]
+ return self._ops(ops)[0]
+
+ def do(self, method, vals, flags=None):
+ return self._op(method, vals, flags)
+
+ def dump(self, method, vals):
+ return self._op(method, vals, dump=True)
+
+ def do_multi(self, ops):
+ return self._ops(ops)
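+
+
+# Usage sketch (illustration only): the spec path, operation name and
+# attribute names below are assumptions, not defined by this file.
+def _example_do_and_dump():
+    ynl = YnlFamily('Documentation/netlink/specs/ethtool.yaml')
+    # do() sends one request and returns a single decoded reply dict
+    one = ynl.do('pause-get', {'header': {'dev-name': 'lo'}})
+    # dump() sets NLM_F_DUMP and returns a list of decoded reply dicts
+    many = ynl.dump('pause-get', {'header': {}})
+    return one, many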
diff --git a/tools/net/ynl/ynl-gen-c.py b/tools/net/ynl/pyynl/ynl_gen_c.py
index 7fc1aa788f6f..58086b101057 100755
--- a/tools/net/ynl/ynl-gen-c.py
+++ b/tools/net/ynl/pyynl/ynl_gen_c.py
@@ -2,15 +2,18 @@
# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
import argparse
-import collections
import filecmp
+import pathlib
import os
import re
import shutil
+import sys
import tempfile
import yaml
+sys.path.append(pathlib.Path(__file__).resolve().parent.as_posix())
from lib import SpecFamily, SpecAttrSet, SpecAttr, SpecOperation, SpecEnumSet, SpecEnumEntry
+from lib import SpecSubMessage
def c_upper(name):
@@ -40,14 +43,6 @@ class BaseNlLib:
def get_family_id(self):
return 'ys->family_id'
- def parse_cb_run(self, cb, data, is_dump=False, indent=1):
- ind = '\n\t\t' + '\t' * indent + ' '
- if is_dump:
- return f"mnl_cb_run2(ys->rx_buf, len, 0, 0, {cb}, {data},{ind}ynl_cb_array, NLMSG_MIN_TYPE)"
- else:
- return f"mnl_cb_run2(ys->rx_buf, len, ys->seq, ys->portid,{ind}{cb}, {data},{ind}" + \
- "ynl_cb_array, NLMSG_MIN_TYPE)"
-
class Type(SpecAttr):
def __init__(self, family, attr_set, attr, value):
@@ -61,15 +56,24 @@ class Type(SpecAttr):
self.request = False
self.reply = False
+ self.is_selector = False
+
if 'len' in attr:
self.len = attr['len']
if 'nested-attributes' in attr:
- self.nested_attrs = attr['nested-attributes']
+ nested = attr['nested-attributes']
+ elif 'sub-message' in attr:
+ nested = attr['sub-message']
+ else:
+ nested = None
+
+ if nested:
+ self.nested_attrs = nested
if self.nested_attrs == family.name:
- self.nested_render_name = c_lower(f"{family.name}")
+ self.nested_render_name = c_lower(f"{family.ident_name}")
else:
- self.nested_render_name = c_lower(f"{family.name}_{self.nested_attrs}")
+ self.nested_render_name = c_lower(f"{family.ident_name}_{self.nested_attrs}")
if self.nested_attrs in self.family.consts:
self.nested_struct_type = 'struct ' + self.nested_render_name + '_'
@@ -79,26 +83,63 @@ class Type(SpecAttr):
self.c_name = c_lower(self.name)
if self.c_name in _C_KW:
self.c_name += '_'
+ if self.c_name[0].isdigit():
+ self.c_name = '_' + self.c_name
# Added by resolve():
self.enum_name = None
delattr(self, "enum_name")
+ def _get_real_attr(self):
+        # If the attr is for a subset, return the "real" attr (one level down; does not recurse)
+ return self.family.attr_sets[self.attr_set.subset_of][self.name]
+
+ def set_request(self):
+ self.request = True
+ if self.attr_set.subset_of:
+ self._get_real_attr().set_request()
+
+ def set_reply(self):
+ self.reply = True
+ if self.attr_set.subset_of:
+ self._get_real_attr().set_reply()
+
def get_limit(self, limit, default=None):
value = self.checks.get(limit, default)
if value is None:
return value
- if not isinstance(value, int):
- value = limit_to_number(value)
- return value
+ if isinstance(value, int):
+ return value
+ if value in self.family.consts:
+ return self.family.consts[value]["value"]
+ return limit_to_number(value)
+
+ def get_limit_str(self, limit, default=None, suffix=''):
+ value = self.checks.get(limit, default)
+ if value is None:
+ return ''
+ if isinstance(value, int):
+ return str(value) + suffix
+ if value in self.family.consts:
+ const = self.family.consts[value]
+ if const.get('header'):
+ return c_upper(value)
+ return c_upper(f"{self.family['name']}-{value}")
+ return c_upper(value)
def resolve(self):
- if 'name-prefix' in self.attr:
+ if 'parent-sub-message' in self.attr:
+ enum_name = self.attr['parent-sub-message'].enum_name
+ elif 'name-prefix' in self.attr:
enum_name = f"{self.attr['name-prefix']}{self.name}"
else:
enum_name = f"{self.attr_set.name_prefix}{self.name}"
self.enum_name = c_upper(enum_name)
+ if self.attr_set.subset_of:
+ if self.checks != self._get_real_attr().checks:
+ raise Exception("Overriding checks not supported by codegen, yet")
+
def is_multi_val(self):
return None
@@ -112,19 +153,19 @@ class Type(SpecAttr):
return self.is_recursive() and not ri.op
def presence_type(self):
- return 'bit'
+ return 'present'
def presence_member(self, space, type_filter):
if self.presence_type() != type_filter:
return
- if self.presence_type() == 'bit':
+ if self.presence_type() == 'present':
pfx = '__' if space == 'user' else ''
return f"{pfx}u32 {self.c_name}:1;"
- if self.presence_type() == 'len':
+ if self.presence_type() in {'len', 'count'}:
pfx = '__' if space == 'user' else ''
- return f"{pfx}u32 {self.c_name}_len;"
+ return f"{pfx}u32 {self.c_name};"
def _complex_member_type(self, ri):
return None
@@ -132,28 +173,34 @@ class Type(SpecAttr):
def free_needs_iter(self):
return False
+ def _free_lines(self, ri, var, ref):
+ if self.is_multi_val() or self.presence_type() in {'count', 'len'}:
+ return [f'free({var}->{ref}{self.c_name});']
+ return []
+
def free(self, ri, var, ref):
- if self.is_multi_val() or self.presence_type() == 'len':
- ri.cw.p(f'free({var}->{ref}{self.c_name});')
+ lines = self._free_lines(ri, var, ref)
+ for line in lines:
+ ri.cw.p(line)
def arg_member(self, ri):
member = self._complex_member_type(ri)
if member:
- arg = [member + ' *' + self.c_name]
+ spc = ' ' if member[-1] != '*' else ''
+ arg = [member + spc + '*' + self.c_name]
if self.presence_type() == 'count':
arg += ['unsigned int n_' + self.c_name]
return arg
raise Exception(f"Struct member not implemented for class type {self.type}")
def struct_member(self, ri):
- if self.is_multi_val():
- ri.cw.p(f"unsigned int n_{self.c_name};")
member = self._complex_member_type(ri)
if member:
ptr = '*' if self.is_multi_val() else ''
if self.is_recursive_for_op(ri):
ptr = '*'
- ri.cw.p(f"{member} {ptr}{self.c_name};")
+ spc = ' ' if member[-1] != '*' else ''
+ ri.cw.p(f"{member}{spc}{ptr}{self.c_name};")
return
members = self.arg_member(ri)
for one in members:
@@ -163,20 +210,14 @@ class Type(SpecAttr):
return '{ .type = ' + policy + ', }'
def attr_policy(self, cw):
- policy = c_upper('nla-' + self.attr['type'])
+ policy = f'NLA_{c_upper(self.type)}'
+ if self.attr.get('byte-order') == 'big-endian':
+ if self.type in {'u16', 'u32'}:
+ policy = f'NLA_BE{self.type[1:]}'
spec = self._attr_policy(policy)
cw.p(f"\t[{self.enum_name}] = {spec},")
- def _mnl_type(self):
- # mnl does not have helpers for signed integer types
- # turn signed type into unsigned
- # this only makes sense for scalar types
- t = self.type
- if t[0] == 's':
- t = 'u' + t[1:]
- return t
-
def _attr_typol(self):
raise Exception(f"Type policy not implemented for class type {self.type}")
@@ -185,14 +226,13 @@ class Type(SpecAttr):
cw.p(f'[{self.enum_name}] = {"{"} .name = "{self.name}", {typol}{"}"},')
def _attr_put_line(self, ri, var, line):
- if self.presence_type() == 'bit':
- ri.cw.p(f"if ({var}->_present.{self.c_name})")
- elif self.presence_type() == 'len':
- ri.cw.p(f"if ({var}->_present.{self.c_name}_len)")
+ presence = self.presence_type()
+ if presence in {'present', 'len'}:
+ ri.cw.p(f"if ({var}->_{presence}.{self.c_name})")
ri.cw.p(f"{line};")
def _attr_put_simple(self, ri, var, put_type):
- line = f"mnl_attr_put_{put_type}(nlh, {self.enum_name}, {var}->{self.c_name})"
+ line = f"ynl_attr_put_{put_type}(nlh, {self.enum_name}, {var}->{self.c_name})"
self._attr_put_line(ri, var, line)
def attr_put(self, ri, var):
@@ -202,7 +242,7 @@ class Type(SpecAttr):
raise Exception(f"Attr get not implemented for class type {self.type}")
def attr_get(self, ri, var, first):
- lines, init_lines, local_vars = self._attr_get(ri, var)
+ lines, init_lines, _ = self._attr_get(ri, var)
if type(lines) is str:
lines = [lines]
if type(init_lines) is str:
@@ -210,15 +250,11 @@ class Type(SpecAttr):
kw = 'if' if first else 'else if'
ri.cw.block_start(line=f"{kw} (type == {self.enum_name})")
- if local_vars:
- for local in local_vars:
- ri.cw.p(local)
- ri.cw.nl()
if not self.is_multi_val():
ri.cw.p("if (ynl_attr_validate(yarg, attr))")
- ri.cw.p("return MNL_CB_ERROR;")
- if self.presence_type() == 'bit':
+ ri.cw.p("return YNL_PARSE_CB_ERROR;")
+ if self.presence_type() == 'present':
ri.cw.p(f"{var}->_present.{self.c_name} = 1;")
if init_lines:
@@ -234,17 +270,28 @@ class Type(SpecAttr):
def _setter_lines(self, ri, member, presence):
raise Exception(f"Setter not implemented for class type {self.type}")
- def setter(self, ri, space, direction, deref=False, ref=None):
+ def setter(self, ri, space, direction, deref=False, ref=None, var="req"):
ref = (ref if ref else []) + [self.c_name]
- var = "req"
member = f"{var}->{'.'.join(ref)}"
+ local_vars = []
+ if self.free_needs_iter():
+ local_vars += ['unsigned int i;']
+
code = []
presence = ''
for i in range(0, len(ref)):
presence = f"{var}->{'.'.join(ref[:i] + [''])}_present.{ref[i]}"
- if self.presence_type() == 'bit':
- code.append(presence + ' = 1;')
+            # Every layer below the last is a nest, so we know it uses bit
+            # presence; the last layer is "self" and may be a complex type.
+ if i == len(ref) - 1 and self.presence_type() != 'present':
+ presence = f"{var}->{'.'.join(ref[:i] + [''])}_{self.presence_type()}.{ref[i]}"
+ continue
+ code.append(presence + ' = 1;')
+ ref_path = '.'.join(ref[:-1])
+ if ref_path:
+ ref_path += '.'
+ code += self._free_lines(ri, var, ref_path)
code += self._setter_lines(ri, member, presence)
func_name = f"{op_prefix(ri, direction, deref=deref)}_set_{'_'.join(ref)}"
@@ -252,7 +299,8 @@ class Type(SpecAttr):
alloc = bool([x for x in code if 'alloc(' in x])
if free and not alloc:
func_name = '__' + func_name
- ri.cw.write_func('static inline void', func_name, body=code,
+ ri.cw.write_func('static inline void', func_name, local_vars=local_vars,
+ body=code,
args=[f'{type_name(ri, direction, deref=deref)} *{var}'] + self.arg_member(ri))
@@ -264,7 +312,7 @@ class TypeUnused(Type):
return []
def _attr_get(self, ri, var):
- return ['return MNL_CB_ERROR;'], None, None
+ return ['return YNL_PARSE_CB_ERROR;'], None, None
def _attr_typol(self):
return '.type = YNL_PT_REJECT, '
@@ -278,7 +326,7 @@ class TypeUnused(Type):
def attr_get(self, ri, var, first):
pass
- def setter(self, ri, space, direction, deref=False, ref=None):
+ def setter(self, ri, space, direction, deref=False, ref=None, var=None):
pass
@@ -301,7 +349,7 @@ class TypePad(Type):
def attr_policy(self, cw):
pass
- def setter(self, ri, space, direction, deref=False, ref=None):
+ def setter(self, ri, space, direction, deref=False, ref=None, var=None):
pass
@@ -313,26 +361,10 @@ class TypeScalar(Type):
if 'byte-order' in attr:
self.byte_order_comment = f" /* {attr['byte-order']} */"
- if 'enum' in self.attr:
- enum = self.family.consts[self.attr['enum']]
- low, high = enum.value_range()
- if 'min' not in self.checks:
- if low != 0 or self.type[0] == 's':
- self.checks['min'] = low
- if 'max' not in self.checks:
- self.checks['max'] = high
-
- if 'min' in self.checks and 'max' in self.checks:
- if self.get_limit('min') > self.get_limit('max'):
- raise Exception(f'Invalid limit for "{self.name}" min: {self.get_limit("min")} max: {self.get_limit("max")}')
- self.checks['range'] = True
-
- low = min(self.get_limit('min', 0), self.get_limit('max', 0))
- high = max(self.get_limit('min', 0), self.get_limit('max', 0))
- if low < 0 and self.type[0] == 'u':
- raise Exception(f'Invalid limit for "{self.name}" negative limit for unsigned type')
- if low < -32768 or high > 32767:
- self.checks['full-range'] = True
+        # Classic families have some funny enums; don't bother computing
+        # checks, since we only need them for kernel policies
+ if not family.is_classic():
+ self._init_checks()
# Added by resolve():
self.is_bitfield = None
@@ -357,8 +389,30 @@ class TypeScalar(Type):
else:
self.type_name = '__' + self.type
- def mnl_type(self):
- return self._mnl_type()
+ def _init_checks(self):
+ if 'enum' in self.attr:
+ enum = self.family.consts[self.attr['enum']]
+ low, high = enum.value_range()
+ if low is None and high is None:
+ self.checks['sparse'] = True
+ else:
+ if 'min' not in self.checks:
+ if low != 0 or self.type[0] == 's':
+ self.checks['min'] = low
+ if 'max' not in self.checks:
+ self.checks['max'] = high
+
+ if 'min' in self.checks and 'max' in self.checks:
+ if self.get_limit('min') > self.get_limit('max'):
+ raise Exception(f'Invalid limit for "{self.name}" min: {self.get_limit("min")} max: {self.get_limit("max")}')
+ self.checks['range'] = True
+
+ low = min(self.get_limit('min', 0), self.get_limit('max', 0))
+ high = max(self.get_limit('min', 0), self.get_limit('max', 0))
+ if low < 0 and self.type[0] == 'u':
+ raise Exception(f'Invalid limit for "{self.name}" negative limit for unsigned type')
+ if low < -32768 or high > 32767:
+ self.checks['full-range'] = True
def _attr_policy(self, policy):
if 'flags-mask' in self.checks or self.is_bitfield:
@@ -373,11 +427,13 @@ class TypeScalar(Type):
elif 'full-range' in self.checks:
return f"NLA_POLICY_FULL_RANGE({policy}, &{c_lower(self.enum_name)}_range)"
elif 'range' in self.checks:
- return f"NLA_POLICY_RANGE({policy}, {self.get_limit('min')}, {self.get_limit('max')})"
+ return f"NLA_POLICY_RANGE({policy}, {self.get_limit_str('min')}, {self.get_limit_str('max')})"
elif 'min' in self.checks:
- return f"NLA_POLICY_MIN({policy}, {self.get_limit('min')})"
+ return f"NLA_POLICY_MIN({policy}, {self.get_limit_str('min')})"
elif 'max' in self.checks:
- return f"NLA_POLICY_MAX({policy}, {self.get_limit('max')})"
+ return f"NLA_POLICY_MAX({policy}, {self.get_limit_str('max')})"
+ elif 'sparse' in self.checks:
+ return f"NLA_POLICY_VALIDATE_FN({policy}, &{c_lower(self.enum_name)}_validate)"
return super()._attr_policy(policy)
def _attr_typol(self):
@@ -387,10 +443,10 @@ class TypeScalar(Type):
return [f'{self.type_name} {self.c_name}{self.byte_order_comment}']
def attr_put(self, ri, var):
- self._attr_put_simple(ri, var, self.mnl_type())
+ self._attr_put_simple(ri, var, self.type)
def _attr_get(self, ri, var):
- return f"{var}->{self.c_name} = mnl_attr_get_{self.mnl_type()}(attr);", None, None
+ return f"{var}->{self.c_name} = ynl_attr_get_{self.type}(attr);", None, None
def _setter_lines(self, ri, member, presence):
return [f"{member} = {self.c_name};"]
@@ -404,7 +460,7 @@ class TypeFlag(Type):
return '.type = YNL_PT_FLAG, '
def attr_put(self, ri, var):
- self._attr_put_line(ri, var, f"mnl_attr_put(nlh, {self.enum_name}, 0, NULL)")
+ self._attr_put_line(ri, var, f"ynl_attr_put(nlh, {self.enum_name}, NULL, 0)")
def _attr_get(self, ri, var):
return [], None, None
@@ -424,15 +480,18 @@ class TypeString(Type):
ri.cw.p(f"char *{self.c_name};")
def _attr_typol(self):
- return f'.type = YNL_PT_NUL_STR, '
+ typol = '.type = YNL_PT_NUL_STR, '
+ if self.is_selector:
+ typol += '.is_selector = 1, '
+ return typol
def _attr_policy(self, policy):
if 'exact-len' in self.checks:
- mem = 'NLA_POLICY_EXACT_LEN(' + str(self.checks['exact-len']) + ')'
+ mem = 'NLA_POLICY_EXACT_LEN(' + self.get_limit_str('exact-len') + ')'
else:
mem = '{ .type = ' + policy
if 'max-len' in self.checks:
- mem += ', .len = ' + str(self.get_limit('max-len'))
+ mem += ', .len = ' + self.get_limit_str('max-len')
mem += ', }'
return mem
@@ -446,23 +505,22 @@ class TypeString(Type):
cw.p(f"\t[{self.enum_name}] = {spec},")
def attr_put(self, ri, var):
- self._attr_put_simple(ri, var, 'strz')
+ self._attr_put_simple(ri, var, 'str')
def _attr_get(self, ri, var):
- len_mem = var + '->_present.' + self.c_name + '_len'
+ len_mem = var + '->_len.' + self.c_name
return [f"{len_mem} = len;",
f"{var}->{self.c_name} = malloc(len + 1);",
- f"memcpy({var}->{self.c_name}, mnl_attr_get_str(attr), len);",
+ f"memcpy({var}->{self.c_name}, ynl_attr_get_str(attr), len);",
f"{var}->{self.c_name}[len] = 0;"], \
- ['len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));'], \
+ ['len = strnlen(ynl_attr_get_str(attr), ynl_attr_data_len(attr));'], \
['unsigned int len;']
def _setter_lines(self, ri, member, presence):
- return [f"free({member});",
- f"{presence}_len = strlen({self.c_name});",
- f"{member} = malloc({presence}_len + 1);",
- f'memcpy({member}, {self.c_name}, {presence}_len);',
- f'{member}[{presence}_len] = 0;']
+ return [f"{presence} = strlen({self.c_name});",
+ f"{member} = malloc({presence} + 1);",
+ f'memcpy({member}, {self.c_name}, {presence});',
+ f'{member}[{presence}] = 0;']
class TypeBinary(Type):
@@ -476,39 +534,96 @@ class TypeBinary(Type):
ri.cw.p(f"void *{self.c_name};")
def _attr_typol(self):
- return f'.type = YNL_PT_BINARY,'
+ return '.type = YNL_PT_BINARY,'
def _attr_policy(self, policy):
- if 'exact-len' in self.checks:
- mem = 'NLA_POLICY_EXACT_LEN(' + str(self.checks['exact-len']) + ')'
+ if len(self.checks) == 0:
+ pass
+ elif len(self.checks) == 1:
+ check_name = list(self.checks)[0]
+ if check_name not in {'exact-len', 'min-len', 'max-len'}:
+ raise Exception('Unsupported check for binary type: ' + check_name)
else:
- mem = '{ '
- if len(self.checks) == 1 and 'min-len' in self.checks:
- mem += '.len = ' + str(self.get_limit('min-len'))
- elif len(self.checks) == 0:
- mem += '.type = NLA_BINARY'
- else:
- raise Exception('One or more of binary type checks not implemented, yet')
- mem += ', }'
+ raise Exception('More than one check for binary type not implemented, yet')
+
+ if len(self.checks) == 0:
+ mem = '{ .type = NLA_BINARY, }'
+ elif 'exact-len' in self.checks:
+ mem = 'NLA_POLICY_EXACT_LEN(' + self.get_limit_str('exact-len') + ')'
+ elif 'min-len' in self.checks:
+ mem = 'NLA_POLICY_MIN_LEN(' + self.get_limit_str('min-len') + ')'
+ elif 'max-len' in self.checks:
+ mem = 'NLA_POLICY_MAX_LEN(' + self.get_limit_str('max-len') + ')'
+
return mem
def attr_put(self, ri, var):
- self._attr_put_line(ri, var, f"mnl_attr_put(nlh, {self.enum_name}, " +
- f"{var}->_present.{self.c_name}_len, {var}->{self.c_name})")
+ self._attr_put_line(ri, var, f"ynl_attr_put(nlh, {self.enum_name}, " +
+ f"{var}->{self.c_name}, {var}->_len.{self.c_name})")
def _attr_get(self, ri, var):
- len_mem = var + '->_present.' + self.c_name + '_len'
+ len_mem = var + '->_len.' + self.c_name
return [f"{len_mem} = len;",
f"{var}->{self.c_name} = malloc(len);",
- f"memcpy({var}->{self.c_name}, mnl_attr_get_payload(attr), len);"], \
- ['len = mnl_attr_get_payload_len(attr);'], \
+ f"memcpy({var}->{self.c_name}, ynl_attr_data(attr), len);"], \
+ ['len = ynl_attr_data_len(attr);'], \
+ ['unsigned int len;']
+
+ def _setter_lines(self, ri, member, presence):
+ return [f"{presence} = len;",
+ f"{member} = malloc({presence});",
+ f'memcpy({member}, {self.c_name}, {presence});']
+
+
+class TypeBinaryStruct(TypeBinary):
+ def struct_member(self, ri):
+ ri.cw.p(f'struct {c_lower(self.get("struct"))} *{self.c_name};')
+
+ def _attr_get(self, ri, var):
+ struct_sz = 'sizeof(struct ' + c_lower(self.get("struct")) + ')'
+ len_mem = var + '->_' + self.presence_type() + '.' + self.c_name
+ return [f"{len_mem} = len;",
+ f"if (len < {struct_sz})",
+ f"{var}->{self.c_name} = calloc(1, {struct_sz});",
+ "else",
+ f"{var}->{self.c_name} = malloc(len);",
+ f"memcpy({var}->{self.c_name}, ynl_attr_data(attr), len);"], \
+ ['len = ynl_attr_data_len(attr);'], \
+ ['unsigned int len;']
+
+
+class TypeBinaryScalarArray(TypeBinary):
+ def arg_member(self, ri):
+ return [f'__{self.get("sub-type")} *{self.c_name}', 'size_t count']
+
+ def presence_type(self):
+ return 'count'
+
+ def struct_member(self, ri):
+ ri.cw.p(f'__{self.get("sub-type")} *{self.c_name};')
+
+ def attr_put(self, ri, var):
+ presence = self.presence_type()
+ ri.cw.block_start(line=f"if ({var}->_{presence}.{self.c_name})")
+ ri.cw.p(f"i = {var}->_{presence}.{self.c_name} * sizeof(__{self.get('sub-type')});")
+ ri.cw.p(f"ynl_attr_put(nlh, {self.enum_name}, " +
+ f"{var}->{self.c_name}, i);")
+ ri.cw.block_end()
+
+ def _attr_get(self, ri, var):
+ len_mem = var + '->_count.' + self.c_name
+ return [f"{len_mem} = len / sizeof(__{self.get('sub-type')});",
+ f"len = {len_mem} * sizeof(__{self.get('sub-type')});",
+ f"{var}->{self.c_name} = malloc(len);",
+ f"memcpy({var}->{self.c_name}, ynl_attr_data(attr), len);"], \
+ ['len = ynl_attr_data_len(attr);'], \
['unsigned int len;']
def _setter_lines(self, ri, member, presence):
- return [f"free({member});",
- f"{presence}_len = len;",
- f"{member} = malloc({presence}_len);",
- f'memcpy({member}, {self.c_name}, {presence}_len);']
+ return [f"{presence} = count;",
+ f"count *= sizeof(__{self.get('sub-type')});",
+ f"{member} = malloc(count);",
+ f'memcpy({member}, {self.c_name}, count);']
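
TypeBinaryScalarArray maps a binary attribute with a scalar sub-type onto a counted C array. A rough sketch of the parse-side C that _attr_get() above is expected to emit for a hypothetical u32 attribute named "ring" (held in a Python string purely for reference):

    expected_parse_c = """
    len = ynl_attr_data_len(attr);
    dst->_count.ring = len / sizeof(__u32);
    len = dst->_count.ring * sizeof(__u32);
    dst->ring = malloc(len);
    memcpy(dst->ring, ynl_attr_data(attr), len);
    """
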
class TypeBitfield32(Type):
@@ -516,21 +631,21 @@ class TypeBitfield32(Type):
return "struct nla_bitfield32"
def _attr_typol(self):
- return f'.type = YNL_PT_BITFIELD32, '
+ return '.type = YNL_PT_BITFIELD32, '
def _attr_policy(self, policy):
- if not 'enum' in self.attr:
+ if 'enum' not in self.attr:
raise Exception('Enum required for bitfield32 attr')
enum = self.family.consts[self.attr['enum']]
mask = enum.get_mask(as_flags=True)
return f"NLA_POLICY_BITFIELD32({mask})"
def attr_put(self, ri, var):
- line = f"mnl_attr_put(nlh, {self.enum_name}, sizeof(struct nla_bitfield32), &{var}->{self.c_name})"
+ line = f"ynl_attr_put(nlh, {self.enum_name}, &{var}->{self.c_name}, sizeof(struct nla_bitfield32))"
self._attr_put_line(ri, var, line)
def _attr_get(self, ri, var):
- return f"memcpy(&{var}->{self.c_name}, mnl_attr_get_payload(attr), sizeof(struct nla_bitfield32));", None, None
+ return f"memcpy(&{var}->{self.c_name}, ynl_attr_data(attr), sizeof(struct nla_bitfield32));", None, None
def _setter_lines(self, ri, member, presence):
return [f"memcpy(&{member}, {self.c_name}, sizeof(struct nla_bitfield32));"]
@@ -543,12 +658,14 @@ class TypeNest(Type):
def _complex_member_type(self, ri):
return self.nested_struct_type
- def free(self, ri, var, ref):
+ def _free_lines(self, ri, var, ref):
+ lines = []
at = '&'
if self.is_recursive_for_op(ri):
at = ''
- ri.cw.p(f'if ({var}->{ref}{self.c_name})')
- ri.cw.p(f'{self.nested_render_name}_free({at}{var}->{ref}{self.c_name});')
+ lines += [f'if ({var}->{ref}{self.c_name})']
+ lines += [f'{self.nested_render_name}_free({at}{var}->{ref}{self.c_name});']
+ return lines
def _attr_typol(self):
return f'.type = YNL_PT_NEST, .nest = &{self.nested_render_name}_nest, '
@@ -562,19 +679,24 @@ class TypeNest(Type):
f"{self.enum_name}, {at}{var}->{self.c_name})")
def _attr_get(self, ri, var):
- get_lines = [f"if ({self.nested_render_name}_parse(&parg, attr))",
- "return MNL_CB_ERROR;"]
+ pns = self.family.pure_nested_structs[self.nested_attrs]
+ args = ["&parg", "attr"]
+ for sel in pns.external_selectors():
+ args.append(f'{var}->{sel.name}')
+ get_lines = [f"if ({self.nested_render_name}_parse({', '.join(args)}))",
+ "return YNL_PARSE_CB_ERROR;"]
init_lines = [f"parg.rsp_policy = &{self.nested_render_name}_nest;",
f"parg.data = &{var}->{self.c_name};"]
return get_lines, init_lines, None
- def setter(self, ri, space, direction, deref=False, ref=None):
+ def setter(self, ri, space, direction, deref=False, ref=None, var="req"):
ref = (ref if ref else []) + [self.c_name]
for _, attr in ri.family.pure_nested_structs[self.nested_attrs].member_list():
if attr.is_recursive():
continue
- attr.setter(ri, self.nested_attrs, direction, deref=deref, ref=ref)
+ attr.setter(ri, self.nested_attrs, direction, deref=deref, ref=ref,
+ var=var)
class TypeMultiAttr(Type):
@@ -589,30 +711,53 @@ class TypeMultiAttr(Type):
def presence_type(self):
return 'count'
- def mnl_type(self):
- return self._mnl_type()
-
def _complex_member_type(self, ri):
if 'type' not in self.attr or self.attr['type'] == 'nest':
return self.nested_struct_type
+ elif self.attr['type'] == 'binary' and 'struct' in self.attr:
+ return None # use arg_member()
+ elif self.attr['type'] == 'string':
+ return 'struct ynl_string *'
elif self.attr['type'] in scalars:
scalar_pfx = '__' if ri.ku_space == 'user' else ''
- return scalar_pfx + self.attr['type']
+ if self.is_auto_scalar:
+ name = self.type[0] + '64'
+ else:
+ name = self.attr['type']
+ return scalar_pfx + name
else:
raise Exception(f"Sub-type {self.attr['type']} not supported yet")
+ def arg_member(self, ri):
+ if self.type == 'binary' and 'struct' in self.attr:
+ return [f'struct {c_lower(self.attr["struct"])} *{self.c_name}',
+ f'unsigned int n_{self.c_name}']
+ return super().arg_member(ri)
+
def free_needs_iter(self):
- return 'type' not in self.attr or self.attr['type'] == 'nest'
+ return self.attr['type'] in {'nest', 'string'}
- def free(self, ri, var, ref):
+ def _free_lines(self, ri, var, ref):
+ lines = []
if self.attr['type'] in scalars:
- ri.cw.p(f"free({var}->{ref}{self.c_name});")
+ lines += [f"free({var}->{ref}{self.c_name});"]
+ elif self.attr['type'] == 'binary':
+ lines += [f"free({var}->{ref}{self.c_name});"]
+ elif self.attr['type'] == 'string':
+ lines += [
+ f"for (i = 0; i < {var}->{ref}_count.{self.c_name}; i++)",
+ f"free({var}->{ref}{self.c_name}[i]);",
+ f"free({var}->{ref}{self.c_name});",
+ ]
elif 'type' not in self.attr or self.attr['type'] == 'nest':
- ri.cw.p(f"for (i = 0; i < {var}->{ref}n_{self.c_name}; i++)")
- ri.cw.p(f'{self.nested_render_name}_free(&{var}->{ref}{self.c_name}[i]);')
- ri.cw.p(f"free({var}->{ref}{self.c_name});")
+ lines += [
+ f"for (i = 0; i < {var}->{ref}_count.{self.c_name}; i++)",
+ f'{self.nested_render_name}_free(&{var}->{ref}{self.c_name}[i]);',
+ f"free({var}->{ref}{self.c_name});",
+ ]
else:
raise Exception(f"Free of MultiAttr sub-type {self.attr['type']} not supported yet")
+ return lines
def _attr_policy(self, policy):
return self.base_type._attr_policy(policy)
@@ -625,25 +770,28 @@ class TypeMultiAttr(Type):
def attr_put(self, ri, var):
if self.attr['type'] in scalars:
- put_type = self.mnl_type()
- ri.cw.p(f"for (unsigned int i = 0; i < {var}->n_{self.c_name}; i++)")
- ri.cw.p(f"mnl_attr_put_{put_type}(nlh, {self.enum_name}, {var}->{self.c_name}[i]);")
+ put_type = self.type
+ ri.cw.p(f"for (i = 0; i < {var}->_count.{self.c_name}; i++)")
+ ri.cw.p(f"ynl_attr_put_{put_type}(nlh, {self.enum_name}, {var}->{self.c_name}[i]);")
+ elif self.attr['type'] == 'binary' and 'struct' in self.attr:
+ ri.cw.p(f"for (i = 0; i < {var}->_count.{self.c_name}; i++)")
+ ri.cw.p(f"ynl_attr_put(nlh, {self.enum_name}, &{var}->{self.c_name}[i], sizeof(struct {c_lower(self.attr['struct'])}));")
+ elif self.attr['type'] == 'string':
+ ri.cw.p(f"for (i = 0; i < {var}->_count.{self.c_name}; i++)")
+ ri.cw.p(f"ynl_attr_put_str(nlh, {self.enum_name}, {var}->{self.c_name}[i]->str);")
elif 'type' not in self.attr or self.attr['type'] == 'nest':
- ri.cw.p(f"for (unsigned int i = 0; i < {var}->n_{self.c_name}; i++)")
+ ri.cw.p(f"for (i = 0; i < {var}->_count.{self.c_name}; i++)")
self._attr_put_line(ri, var, f"{self.nested_render_name}_put(nlh, " +
f"{self.enum_name}, &{var}->{self.c_name}[i])")
else:
raise Exception(f"Put of MultiAttr sub-type {self.attr['type']} not supported yet")
def _setter_lines(self, ri, member, presence):
- # For multi-attr we have a count, not presence, hack up the presence
- presence = presence[:-(len('_present.') + len(self.c_name))] + "n_" + self.c_name
- return [f"free({member});",
- f"{member} = {self.c_name};",
+ return [f"{member} = {self.c_name};",
f"{presence} = n_{self.c_name};"]
-class TypeArrayNest(Type):
+class TypeIndexedArray(Type):
def is_multi_val(self):
return True
@@ -656,19 +804,63 @@ class TypeArrayNest(Type):
elif self.attr['sub-type'] in scalars:
scalar_pfx = '__' if ri.ku_space == 'user' else ''
return scalar_pfx + self.attr['sub-type']
+ elif self.attr['sub-type'] == 'binary' and 'exact-len' in self.checks:
+ return None # use arg_member()
else:
raise Exception(f"Sub-type {self.attr['sub-type']} not supported yet")
+ def arg_member(self, ri):
+ if self.sub_type == 'binary' and 'exact-len' in self.checks:
+ return [f'unsigned char (*{self.c_name})[{self.checks["exact-len"]}]',
+ f'unsigned int n_{self.c_name}']
+ return super().arg_member(ri)
+
+ def _attr_policy(self, policy):
+ if self.attr['sub-type'] == 'nest':
+ return f'NLA_POLICY_NESTED_ARRAY({self.nested_render_name}_nl_policy)'
+ return super()._attr_policy(policy)
+
def _attr_typol(self):
- return f'.type = YNL_PT_NEST, .nest = &{self.nested_render_name}_nest, '
+ if self.attr['sub-type'] in scalars:
+ return f'.type = YNL_PT_U{c_upper(self.sub_type[1:])}, '
+ elif self.attr['sub-type'] == 'binary' and 'exact-len' in self.checks:
+ return f'.type = YNL_PT_BINARY, .len = {self.checks["exact-len"]}, '
+ elif self.attr['sub-type'] == 'nest':
+ return f'.type = YNL_PT_NEST, .nest = &{self.nested_render_name}_nest, '
+ else:
+ raise Exception(f"Typol for IndexedArray sub-type {self.attr['sub-type']} not supported, yet")
def _attr_get(self, ri, var):
local_vars = ['const struct nlattr *attr2;']
get_lines = [f'attr_{self.c_name} = attr;',
- 'mnl_attr_for_each_nested(attr2, attr)',
- f'\t{var}->n_{self.c_name}++;']
+ 'ynl_attr_for_each_nested(attr2, attr) {',
+ '\tif (__ynl_attr_validate(yarg, attr2, type))',
+ '\t\treturn YNL_PARSE_CB_ERROR;',
+ f'\tn_{self.c_name}++;',
+ '}']
return get_lines, None, local_vars
+ def attr_put(self, ri, var):
+ ri.cw.p(f'array = ynl_attr_nest_start(nlh, {self.enum_name});')
+ if self.sub_type in scalars:
+ put_type = self.sub_type
+ ri.cw.block_start(line=f'for (i = 0; i < {var}->_count.{self.c_name}; i++)')
+ ri.cw.p(f"ynl_attr_put_{put_type}(nlh, i, {var}->{self.c_name}[i]);")
+ ri.cw.block_end()
+ elif self.sub_type == 'binary' and 'exact-len' in self.checks:
+ ri.cw.p(f'for (i = 0; i < {var}->_count.{self.c_name}; i++)')
+ ri.cw.p(f"ynl_attr_put(nlh, i, {var}->{self.c_name}[i], {self.checks['exact-len']});")
+ elif self.sub_type == 'nest':
+ ri.cw.p(f'for (i = 0; i < {var}->_count.{self.c_name}; i++)')
+ ri.cw.p(f"{self.nested_render_name}_put(nlh, i, &{var}->{self.c_name}[i]);")
+ else:
+ raise Exception(f"Put for IndexedArray sub-type {self.attr['sub-type']} not supported, yet")
+ ri.cw.p('ynl_attr_nest_end(nlh, array);')
+
+ def _setter_lines(self, ri, member, presence):
+ return [f"{member} = {self.c_name};",
+ f"{presence} = n_{self.c_name};"]
+
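
For indexed arrays, attr_put() above wraps every element in a nest where the element index doubles as the attribute type. An illustrative sketch of the C expected for a hypothetical u32 indexed-array member named "vals" (the HYPO_A_VALS constant is invented for the example):

    expected_put_c = """
    array = ynl_attr_nest_start(nlh, HYPO_A_VALS);
    for (i = 0; i < req->_count.vals; i++) {
            ynl_attr_put_u32(nlh, i, req->vals[i]);
    }
    ynl_attr_nest_end(nlh, array);
    """
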
class TypeNestTypeValue(Type):
def _complex_member_type(self, ri):
@@ -690,8 +882,8 @@ class TypeNestTypeValue(Type):
local_vars += [f'__u32 {", ".join(tv_names)};']
for level in self.attr["type-value"]:
level = c_lower(level)
- get_lines += [f'attr_{level} = mnl_attr_get_payload({prev});']
- get_lines += [f'{level} = mnl_attr_get_type(attr_{level});']
+ get_lines += [f'attr_{level} = ynl_attr_data({prev});']
+ get_lines += [f'{level} = ynl_attr_type(attr_{level});']
prev = 'attr_' + level
tv_args = f", {', '.join(tv_names)}"
@@ -700,20 +892,77 @@ class TypeNestTypeValue(Type):
return get_lines, init_lines, local_vars
+class TypeSubMessage(TypeNest):
+ def __init__(self, family, attr_set, attr, value):
+ super().__init__(family, attr_set, attr, value)
+
+ self.selector = Selector(attr, attr_set)
+
+ def _attr_typol(self):
+ typol = f'.type = YNL_PT_NEST, .nest = &{self.nested_render_name}_nest, '
+ typol += '.is_submsg = 1, '
+ # Reverse-parsing of the policy (ynl_err_walk() in ynl.c) does not
+ # support external selectors. No family uses sub-messages with an external
+ # selector for requests, so this is fine for now.
+ if not self.selector.is_external():
+ typol += f'.selector_type = {self.attr_set[self["selector"]].value} '
+ return typol
+
+ def _attr_get(self, ri, var):
+ sel = c_lower(self['selector'])
+ if self.selector.is_external():
+ sel_var = f"_sel_{sel}"
+ else:
+ sel_var = f"{var}->{sel}"
+ get_lines = [f'if (!{sel_var})',
+ 'return ynl_submsg_failed(yarg, "%s", "%s");' %
+ (self.name, self['selector']),
+ f"if ({self.nested_render_name}_parse(&parg, {sel_var}, attr))",
+ "return YNL_PARSE_CB_ERROR;"]
+ init_lines = [f"parg.rsp_policy = &{self.nested_render_name}_nest;",
+ f"parg.data = &{var}->{self.c_name};"]
+ return get_lines, init_lines, None
+
+
+class Selector:
+ def __init__(self, msg_attr, attr_set):
+ self.name = msg_attr["selector"]
+
+ if self.name in attr_set:
+ self.attr = attr_set[self.name]
+ self.attr.is_selector = True
+ self._external = False
+ else:
+ # The selector will need to get passed down through the structs
+ self.attr = None
+ self._external = True
+
+ def set_attr(self, attr):
+ self.attr = attr
+
+ def is_external(self):
+ return self._external
+
+
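
Whether a selector is external is decided purely by membership in the enclosing attr set; external selectors are then threaded through the parse helpers as extra arguments. A minimal sketch of that membership test, with hypothetical attribute names:

    # Illustrative only -- not the Selector class itself.
    attr_set = {'kind', 'data'}            # attributes defined in this set
    internal = {'selector': 'kind'}        # resolved within the same struct
    external = {'selector': 'proto'}       # must be passed in by the caller
    assert internal['selector'] in attr_set
    assert external['selector'] not in attr_set
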
class Struct:
- def __init__(self, family, space_name, type_list=None, inherited=None):
+ def __init__(self, family, space_name, type_list=None, fixed_header=None,
+ inherited=None, submsg=None):
self.family = family
self.space_name = space_name
self.attr_set = family.attr_sets[space_name]
# Use list to catch comparisons with empty sets
self._inherited = inherited if inherited is not None else []
self.inherited = []
+ self.fixed_header = None
+ if fixed_header:
+ self.fixed_header = 'struct ' + c_lower(fixed_header)
+ self.submsg = submsg
self.nested = type_list is None
if family.name == c_lower(space_name):
- self.render_name = c_lower(family.name)
+ self.render_name = c_lower(family.ident_name)
else:
- self.render_name = c_lower(family.name + '-' + space_name)
+ self.render_name = c_lower(family.ident_name + '-' + space_name)
self.struct_name = 'struct ' + self.render_name
if self.nested and space_name in family.consts:
self.struct_name += '_'
@@ -724,6 +973,7 @@ class Struct:
self.request = False
self.reply = False
self.recursive = False
+ self.in_multi_val = False # used by a MultiAttr or legacy arrays
self.attr_list = []
self.attrs = dict()
@@ -756,6 +1006,19 @@ class Struct:
raise Exception("Inheriting different members not supported")
self.inherited = [c_lower(x) for x in sorted(self._inherited)]
+ def external_selectors(self):
+ sels = []
+ for name, attr in self.attr_list:
+ if isinstance(attr, TypeSubMessage) and attr.selector.is_external():
+ sels.append(attr.selector)
+ return sels
+
+ def free_needs_iter(self):
+ for _, attr in self.attr_list:
+ if attr.free_needs_iter():
+ return True
+ return False
+
class EnumEntry(SpecEnumEntry):
def __init__(self, enum_set, yaml, prev, value_start):
@@ -779,7 +1042,7 @@ class EnumEntry(SpecEnumEntry):
class EnumSet(SpecEnumSet):
def __init__(self, family, yaml):
- self.render_name = c_lower(family.name + '-' + yaml['name'])
+ self.render_name = c_lower(family.ident_name + '-' + yaml['name'])
if 'enum-name' in yaml:
if yaml['enum-name']:
@@ -795,7 +1058,9 @@ class EnumSet(SpecEnumSet):
else:
self.user_type = 'int'
- self.value_pfx = yaml.get('name-prefix', f"{family.name}-{yaml['name']}-")
+ self.value_pfx = yaml.get('name-prefix', f"{family.ident_name}-{yaml['name']}-")
+ self.header = yaml.get('header', None)
+ self.enum_cnt_name = yaml.get('enum-cnt-name', None)
super().__init__(family, yaml)
@@ -807,7 +1072,7 @@ class EnumSet(SpecEnumSet):
high = max([x.value for x in self.entries.values()])
if high - low + 1 != len(self.entries):
- raise Exception("Can't get value range for a noncontiguous enum")
+ return None, None
return low, high
@@ -820,9 +1085,9 @@ class AttrSet(SpecAttrSet):
if 'name-prefix' in yaml:
pfx = yaml['name-prefix']
elif self.name == family.name:
- pfx = family.name + '-a-'
+ pfx = family.ident_name + '-a-'
else:
- pfx = f"{family.name}-a-{self.name}-"
+ pfx = f"{family.ident_name}-a-{self.name}-"
self.name_prefix = c_upper(pfx)
self.max_name = c_upper(self.yaml.get('attr-max-name', f"{self.name_prefix}max"))
self.cnt_name = c_upper(self.yaml.get('attr-cnt-name', f"__{self.name_prefix}max"))
@@ -854,15 +1119,25 @@ class AttrSet(SpecAttrSet):
elif elem['type'] == 'string':
t = TypeString(self.family, self, elem, value)
elif elem['type'] == 'binary':
- t = TypeBinary(self.family, self, elem, value)
+ if 'struct' in elem:
+ t = TypeBinaryStruct(self.family, self, elem, value)
+ elif elem.get('sub-type') in scalars:
+ t = TypeBinaryScalarArray(self.family, self, elem, value)
+ else:
+ t = TypeBinary(self.family, self, elem, value)
elif elem['type'] == 'bitfield32':
t = TypeBitfield32(self.family, self, elem, value)
elif elem['type'] == 'nest':
t = TypeNest(self.family, self, elem, value)
- elif elem['type'] == 'array-nest':
- t = TypeArrayNest(self.family, self, elem, value)
+ elif elem['type'] == 'indexed-array' and 'sub-type' in elem:
+ if elem["sub-type"] in ['binary', 'nest', 'u32']:
+ t = TypeIndexedArray(self.family, self, elem, value)
+ else:
+ raise Exception(f'new_attr: unsupported sub-type {elem["sub-type"]}')
elif elem['type'] == 'nest-type-value':
t = TypeNestTypeValue(self.family, self, elem, value)
+ elif elem['type'] == 'sub-message':
+ t = TypeSubMessage(self.family, self, elem, value)
else:
raise Exception(f"No typed class for type {elem['type']}")
@@ -874,9 +1149,17 @@ class AttrSet(SpecAttrSet):
class Operation(SpecOperation):
def __init__(self, family, yaml, req_value, rsp_value):
+ # Fill in missing operation properties (for fixed hdr-only msgs)
+ for mode in ['do', 'dump', 'event']:
+ for direction in ['request', 'reply']:
+ try:
+ yaml[mode][direction].setdefault('attributes', [])
+ except KeyError:
+ pass
+
super().__init__(family, yaml, req_value, rsp_value)
- self.render_name = c_lower(family.name + '_' + self.name)
+ self.render_name = c_lower(family.ident_name + '_' + self.name)
self.dual_policy = ('do' in yaml and 'request' in yaml['do']) and \
('dump' in yaml and 'request' in yaml['dump'])
@@ -899,6 +1182,16 @@ class Operation(SpecOperation):
self.has_ntf = True
+class SubMessage(SpecSubMessage):
+ def __init__(self, family, yaml):
+ super().__init__(family, yaml)
+
+ self.render_name = c_lower(family.ident_name + '-' + yaml['name'])
+
+ def resolve(self):
+ self.resolve_up(super())
+
+
class Family(SpecFamily):
def __init__(self, file_name, exclude_ops):
# Added by resolve:
@@ -926,19 +1219,16 @@ class Family(SpecFamily):
if 'uapi-header' in self.yaml:
self.uapi_header = self.yaml['uapi-header']
else:
- self.uapi_header = f"linux/{self.name}.h"
+ self.uapi_header = f"linux/{self.ident_name}.h"
if self.uapi_header.startswith("linux/") and self.uapi_header.endswith('.h'):
self.uapi_header_name = self.uapi_header[6:-2]
else:
- self.uapi_header_name = self.name
+ self.uapi_header_name = self.ident_name
def resolve(self):
self.resolve_up(super())
- if self.yaml.get('protocol', 'genetlink') not in {'genetlink', 'genetlink-c', 'genetlink-legacy'}:
- raise Exception("Codegen only supported for genetlink")
-
- self.c_name = c_lower(self.name)
+ self.c_name = c_lower(self.ident_name)
if 'name-prefix' in self.yaml['operations']:
self.op_prefix = c_upper(self.yaml['operations']['name-prefix'])
else:
@@ -960,7 +1250,7 @@ class Family(SpecFamily):
# dict space-name -> 'request': set(attrs), 'reply': set(attrs)
self.root_sets = dict()
- # dict space-name -> set('request', 'reply')
+ # dict space-name -> Struct
self.pure_nested_structs = dict()
self._mark_notify()
@@ -969,6 +1259,7 @@ class Family(SpecFamily):
self._load_root_sets()
self._load_nested_sets()
self._load_attr_use()
+ self._load_selector_passing()
self._load_hooks()
self.kernel_policy = self.yaml.get('kernel-policy', 'split')
@@ -984,6 +1275,12 @@ class Family(SpecFamily):
def new_operation(self, elem, req_value, rsp_value):
return Operation(self, elem, req_value, rsp_value)
+ def new_sub_message(self, elem):
+ return SubMessage(self, elem)
+
+ def is_classic(self):
+ return self.proto == 'netlink-raw'
+
def _mark_notify(self):
for op in self.msgs.values():
if 'notify' in op:
@@ -1033,20 +1330,85 @@ class Family(SpecFamily):
for _, spec in self.attr_sets[name].items():
if 'nested-attributes' in spec:
nested = spec['nested-attributes']
- # If the unknown nest we hit is recursive it's fine, it'll be a pointer
- if self.pure_nested_structs[nested].recursive:
- continue
- if nested not in pns_key_seen:
- # Dicts are sorted, this will make struct last
- struct = self.pure_nested_structs.pop(name)
- self.pure_nested_structs[name] = struct
- finished = False
- break
+ elif 'sub-message' in spec:
+ nested = spec.sub_message
+ else:
+ continue
+
+ # If the unknown nest we hit is recursive it's fine, it'll be a pointer
+ if self.pure_nested_structs[nested].recursive:
+ continue
+ if nested not in pns_key_seen:
+ # Dicts are sorted, this will make struct last
+ struct = self.pure_nested_structs.pop(name)
+ self.pure_nested_structs[name] = struct
+ finished = False
+ break
if finished:
pns_key_seen.add(name)
else:
pns_key_list.append(name)
+ def _load_nested_set_nest(self, spec):
+ inherit = set()
+ nested = spec['nested-attributes']
+ if nested not in self.root_sets:
+ if nested not in self.pure_nested_structs:
+ self.pure_nested_structs[nested] = \
+ Struct(self, nested, inherited=inherit,
+ fixed_header=spec.get('fixed-header'))
+ else:
+ raise Exception(f'Using attr set as root and nested not supported - {nested}')
+
+ if 'type-value' in spec:
+ if nested in self.root_sets:
+ raise Exception("Inheriting members to a space used as root not supported")
+ inherit.update(set(spec['type-value']))
+ elif spec['type'] == 'indexed-array':
+ inherit.add('idx')
+ self.pure_nested_structs[nested].set_inherited(inherit)
+
+ return nested
+
+ def _load_nested_set_submsg(self, spec):
+ # Fake the struct type for the sub-message itself
+ # it's not an attr_set, but codegen wants attr_sets.
+ submsg = self.sub_msgs[spec["sub-message"]]
+ nested = submsg.name
+
+ attrs = []
+ for name, fmt in submsg.formats.items():
+ attr = {
+ "name": name,
+ "parent-sub-message": spec,
+ }
+ if 'attribute-set' in fmt:
+ attr |= {
+ "type": "nest",
+ "nested-attributes": fmt['attribute-set'],
+ }
+ if 'fixed-header' in fmt:
+ attr |= { "fixed-header": fmt["fixed-header"] }
+ elif 'fixed-header' in fmt:
+ attr |= {
+ "type": "binary",
+ "struct": fmt["fixed-header"],
+ }
+ else:
+ attr["type"] = "flag"
+ attrs.append(attr)
+
+ self.attr_sets[nested] = AttrSet(self, {
+ "name": nested,
+ "name-pfx": self.name + '-' + spec.name + '-',
+ "attributes": attrs
+ })
+
+ if nested not in self.pure_nested_structs:
+ self.pure_nested_structs[nested] = Struct(self, nested, submsg=submsg)
+
+ return nested
+
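
_load_nested_set_submsg() turns each sub-message format into a pseudo-attribute so the rest of the code generator can treat the sub-message like an ordinary nest. A hedged sketch of the synthesized attribute list for an invented sub-message with one attribute-set format and one bare format:

    # Illustrative only; the spec, names and formats are hypothetical.
    spec = {'name': 'attachment', 'type': 'sub-message', 'sub-message': 'tun'}
    attrs = [
        {'name': 'gre', 'parent-sub-message': spec,
         'type': 'nest', 'nested-attributes': 'gre-attrs'},
        {'name': 'unspec', 'parent-sub-message': spec,
         'type': 'flag'},
    ]
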
def _load_nested_sets(self):
attr_set_queue = list(self.root_sets.keys())
attr_set_seen = set(self.root_sets.keys())
@@ -1054,72 +1416,102 @@ class Family(SpecFamily):
while len(attr_set_queue):
a_set = attr_set_queue.pop(0)
for attr, spec in self.attr_sets[a_set].items():
- if 'nested-attributes' not in spec:
+ if 'nested-attributes' in spec:
+ nested = self._load_nested_set_nest(spec)
+ elif 'sub-message' in spec:
+ nested = self._load_nested_set_submsg(spec)
+ else:
continue
- nested = spec['nested-attributes']
if nested not in attr_set_seen:
attr_set_queue.append(nested)
attr_set_seen.add(nested)
- inherit = set()
- if nested not in self.root_sets:
- if nested not in self.pure_nested_structs:
- self.pure_nested_structs[nested] = Struct(self, nested, inherited=inherit)
- else:
- raise Exception(f'Using attr set as root and nested not supported - {nested}')
-
- if 'type-value' in spec:
- if nested in self.root_sets:
- raise Exception("Inheriting members to a space used as root not supported")
- inherit.update(set(spec['type-value']))
- elif spec['type'] == 'array-nest':
- inherit.add('idx')
- self.pure_nested_structs[nested].set_inherited(inherit)
-
for root_set, rs_members in self.root_sets.items():
for attr, spec in self.attr_sets[root_set].items():
if 'nested-attributes' in spec:
nested = spec['nested-attributes']
+ elif 'sub-message' in spec:
+ nested = spec.sub_message
+ else:
+ nested = None
+
+ if nested:
if attr in rs_members['request']:
self.pure_nested_structs[nested].request = True
if attr in rs_members['reply']:
self.pure_nested_structs[nested].reply = True
+ if spec.is_multi_val():
+ child = self.pure_nested_structs.get(nested)
+ child.in_multi_val = True
+
self._sort_pure_types()
# Propagate the request / reply / recursive
for attr_set, struct in reversed(self.pure_nested_structs.items()):
for _, spec in self.attr_sets[attr_set].items():
- if 'nested-attributes' in spec:
- child_name = spec['nested-attributes']
- struct.child_nests.add(child_name)
- child = self.pure_nested_structs.get(child_name)
- if child:
- if not child.recursive:
- struct.child_nests.update(child.child_nests)
- child.request |= struct.request
- child.reply |= struct.reply
if attr_set in struct.child_nests:
struct.recursive = True
+ if 'nested-attributes' in spec:
+ child_name = spec['nested-attributes']
+ elif 'sub-message' in spec:
+ child_name = spec.sub_message
+ else:
+ continue
+
+ struct.child_nests.add(child_name)
+ child = self.pure_nested_structs.get(child_name)
+ if child:
+ if not child.recursive:
+ struct.child_nests.update(child.child_nests)
+ child.request |= struct.request
+ child.reply |= struct.reply
+ if spec.is_multi_val():
+ child.in_multi_val = True
+
self._sort_pure_types()
def _load_attr_use(self):
for _, struct in self.pure_nested_structs.items():
if struct.request:
for _, arg in struct.member_list():
- arg.request = True
+ arg.set_request()
if struct.reply:
for _, arg in struct.member_list():
- arg.reply = True
+ arg.set_reply()
for root_set, rs_members in self.root_sets.items():
for attr, spec in self.attr_sets[root_set].items():
if attr in rs_members['request']:
- spec.request = True
+ spec.set_request()
if attr in rs_members['reply']:
- spec.reply = True
+ spec.set_reply()
+
+ def _load_selector_passing(self):
+ def all_structs():
+ for k, v in reversed(self.pure_nested_structs.items()):
+ yield k, v
+ for k, _ in self.root_sets.items():
+ yield k, None # we don't have a struct, but it must be terminal
+
+ for attr_set, struct in all_structs():
+ for _, spec in self.attr_sets[attr_set].items():
+ if 'nested-attributes' in spec:
+ child_name = spec['nested-attributes']
+ elif 'sub-message' in spec:
+ child_name = spec.sub_message
+ else:
+ continue
+
+ child = self.pure_nested_structs.get(child_name)
+ for selector in child.external_selectors():
+ if selector.name in self.attr_sets[attr_set]:
+ sel_attr = self.attr_sets[attr_set][selector.name]
+ selector.set_attr(sel_attr)
+ else:
+ raise Exception("Passing selector thru more than one layer not supported")
def _load_global_policy(self):
global_set = set()
@@ -1170,12 +1562,19 @@ class RenderInfo:
self.op_mode = op_mode
self.op = op
- self.fixed_hdr = None
+ fixed_hdr = op.fixed_header if op else None
+ self.fixed_hdr_len = 'ys->family->hdr_len'
if op and op.fixed_header:
- self.fixed_hdr = 'struct ' + c_lower(op.fixed_header)
+ if op.fixed_header != family.fixed_header:
+ if family.is_classic():
+ self.fixed_hdr_len = f"sizeof(struct {c_lower(fixed_hdr)})"
+ else:
+ raise Exception("Per-op fixed header not supported, yet")
+
# 'do' and 'dump' response parsing is identical
self.type_consistent = True
+ self.type_oneside = False
if op_mode != 'do' and 'dump' in op:
if 'do' in op:
if ('reply' in op['do']) != ('reply' in op["dump"]):
@@ -1183,7 +1582,8 @@ class RenderInfo:
elif 'reply' in op['do'] and op["do"]["reply"] != op["dump"]["reply"]:
self.type_consistent = False
else:
- self.type_consistent = False
+ self.type_consistent = True
+ self.type_oneside = True
self.attr_set = attr_set
if not self.attr_set:
@@ -1201,15 +1601,26 @@ class RenderInfo:
self.struct = dict()
if op_mode == 'notify':
- op_mode = 'do'
+ op_mode = 'do' if 'do' in op else 'dump'
for op_dir in ['request', 'reply']:
if op:
type_list = []
if op_dir in op[op_mode]:
type_list = op[op_mode][op_dir]['attributes']
- self.struct[op_dir] = Struct(family, self.attr_set, type_list=type_list)
+ self.struct[op_dir] = Struct(family, self.attr_set,
+ fixed_header=fixed_hdr,
+ type_list=type_list)
if op_mode == 'event':
- self.struct['reply'] = Struct(family, self.attr_set, type_list=op['event']['attributes'])
+ self.struct['reply'] = Struct(family, self.attr_set,
+ fixed_header=fixed_hdr,
+ type_list=op['event']['attributes'])
+
+ def type_empty(self, key):
+ return len(self.struct[key].attr_list) == 0 and \
+ self.struct['request'].fixed_header is None
+
+ def needs_nlflags(self, direction):
+ return self.op_mode == 'do' and direction == 'request' and self.family.is_classic()
class CodeWriter:
@@ -1267,6 +1678,7 @@ class CodeWriter:
if self._silent_block:
ind += 1
self._silent_block = line.endswith(')') and CodeWriter._is_cond(line)
+ self._silent_block |= line.strip() == 'else'
if line[0] == '#':
ind = 0
if add_ind:
@@ -1363,9 +1775,9 @@ class CodeWriter:
def write_func(self, qual_ret, name, body, args=None, local_vars=None):
self.write_func_prot(qual_ret=qual_ret, name=name, args=args)
+ self.block_start()
self.write_func_lvar(local_vars=local_vars)
- self.block_start()
for line in body:
self.p(line)
self.block_end()
@@ -1409,7 +1821,7 @@ class CodeWriter:
self._ifdef_block = config_option
-scalars = {'u8', 'u16', 'u32', 'u64', 's32', 's64', 'uint', 'sint'}
+scalars = {'u8', 'u16', 'u32', 'u64', 's8', 's16', 's32', 's64', 'uint', 'sint'}
direction_to_suffix = {
'reply': '_rsp',
@@ -1473,11 +1885,15 @@ def rdir(direction):
def op_prefix(ri, direction, deref=False):
suffix = f"_{ri.type_name}"
- if not ri.op_mode or ri.op_mode == 'do':
+ if not ri.op_mode:
+ pass
+ elif ri.op_mode == 'do':
suffix += f"{direction_to_suffix[direction]}"
else:
if direction == 'request':
- suffix += '_req_dump'
+ suffix += '_req'
+ if not ri.type_oneside:
+ suffix += '_dump'
else:
if ri.type_consistent:
if deref:
@@ -1521,13 +1937,39 @@ def print_dump_prototype(ri):
print_prototype(ri, "request")
+def put_typol_submsg(cw, struct):
+ cw.block_start(line=f'const struct ynl_policy_attr {struct.render_name}_policy[] =')
+
+ i = 0
+ for name, arg in struct.member_list():
+ nest = ""
+ if arg.type == 'nest':
+ nest = f" .nest = &{arg.nested_render_name}_nest,"
+ cw.p('[%d] = { .type = YNL_PT_SUBMSG, .name = "%s",%s },' %
+ (i, name, nest))
+ i += 1
+
+ cw.block_end(line=';')
+ cw.nl()
+
+ cw.block_start(line=f'const struct ynl_policy_nest {struct.render_name}_nest =')
+ cw.p(f'.max_attr = {i - 1},')
+ cw.p(f'.table = {struct.render_name}_policy,')
+ cw.block_end(line=';')
+ cw.nl()
+
+
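
put_typol_submsg() emits a policy table keyed by the sub-message format index rather than by attribute type. A rough sketch of the output expected for a hypothetical two-format sub-message (names invented; held in a Python string for reference):

    expected_typol_c = """
    const struct ynl_policy_attr hypo_tun_policy[] = {
            [0] = { .type = YNL_PT_SUBMSG, .name = "gre", .nest = &hypo_gre_attrs_nest, },
            [1] = { .type = YNL_PT_SUBMSG, .name = "unspec", },
    };

    const struct ynl_policy_nest hypo_tun_nest = {
            .max_attr = 1,
            .table = hypo_tun_policy,
    };
    """
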
def put_typol_fwd(cw, struct):
- cw.p(f'extern struct ynl_policy_nest {struct.render_name}_nest;')
+ cw.p(f'extern const struct ynl_policy_nest {struct.render_name}_nest;')
def put_typol(cw, struct):
+ if struct.submsg:
+ put_typol_submsg(cw, struct)
+ return
+
type_max = struct.attr_set.max_name
- cw.block_start(line=f'struct ynl_policy_attr {struct.render_name}_policy[{type_max} + 1] =')
+ cw.block_start(line=f'const struct ynl_policy_attr {struct.render_name}_policy[{type_max} + 1] =')
for _, arg in struct.member_list():
arg.attr_typol(cw)
@@ -1535,7 +1977,7 @@ def put_typol(cw, struct):
cw.block_end(line=';')
cw.nl()
- cw.block_start(line=f'struct ynl_policy_nest {struct.render_name}_nest =')
+ cw.block_start(line=f'const struct ynl_policy_nest {struct.render_name}_nest =')
cw.p(f'.max_attr = {type_max},')
cw.p(f'.table = {struct.render_name}_policy,')
cw.block_end(line=';')
@@ -1550,7 +1992,7 @@ def _put_enum_to_str_helper(cw, render_name, map_name, arg_name, enum=None):
cw.block_start()
if enum and enum.type == 'flags':
cw.p(f'{arg_name} = ffs({arg_name}) - 1;')
- cw.p(f'if ({arg_name} < 0 || {arg_name} >= (int)MNL_ARRAY_SIZE({map_name}))')
+ cw.p(f'if ({arg_name} < 0 || {arg_name} >= (int)YNL_ARRAY_SIZE({map_name}))')
cw.p('return NULL;')
cw.p(f'return {map_name}[{arg_name}];')
cw.block_end()
@@ -1598,6 +2040,20 @@ def put_enum_to_str(family, cw, enum):
_put_enum_to_str_helper(cw, enum.render_name, map_name, 'value', enum=enum)
+def put_local_vars(struct):
+ local_vars = []
+ has_array = False
+ has_count = False
+ for _, arg in struct.member_list():
+ has_array |= arg.type == 'indexed-array'
+ has_count |= arg.presence_type() == 'count'
+ if has_array:
+ local_vars.append('struct nlattr *array;')
+ if has_count:
+ local_vars.append('unsigned int i;')
+ return local_vars
+
+
def put_req_nested_prototype(ri, struct, suffix=';'):
func_args = ['struct nlmsghdr *nlh',
'unsigned int attr_type',
@@ -1608,16 +2064,32 @@ def put_req_nested_prototype(ri, struct, suffix=';'):
def put_req_nested(ri, struct):
+ local_vars = []
+ init_lines = []
+
+ if struct.submsg is None:
+ local_vars.append('struct nlattr *nest;')
+ init_lines.append("nest = ynl_attr_nest_start(nlh, attr_type);")
+ if struct.fixed_header:
+ local_vars.append('void *hdr;')
+ struct_sz = f'sizeof({struct.fixed_header})'
+ init_lines.append(f"hdr = ynl_nlmsg_put_extra_header(nlh, {struct_sz});")
+ init_lines.append(f"memcpy(hdr, &obj->_hdr, {struct_sz});")
+
+ local_vars += put_local_vars(struct)
+
put_req_nested_prototype(ri, struct, suffix='')
ri.cw.block_start()
- ri.cw.write_func_lvar('struct nlattr *nest;')
+ ri.cw.write_func_lvar(local_vars)
- ri.cw.p("nest = mnl_attr_nest_start(nlh, attr_type);")
+ for line in init_lines:
+ ri.cw.p(line)
for _, arg in struct.member_list():
arg.attr_put(ri, "obj")
- ri.cw.p("mnl_attr_nest_end(nlh, nest);")
+ if struct.submsg is None:
+ ri.cw.p("ynl_attr_nest_end(nlh, nest);")
ri.cw.nl()
ri.cw.p('return 0;')
@@ -1626,33 +2098,56 @@ def put_req_nested(ri, struct):
def _multi_parse(ri, struct, init_lines, local_vars):
+ if struct.fixed_header:
+ local_vars += ['void *hdr;']
if struct.nested:
- iter_line = "mnl_attr_for_each_nested(attr, nested)"
+ if struct.fixed_header:
+ iter_line = f"ynl_attr_for_each_nested_off(attr, nested, sizeof({struct.fixed_header}))"
+ else:
+ iter_line = "ynl_attr_for_each_nested(attr, nested)"
else:
- if ri.fixed_hdr:
- local_vars += ['void *hdr;']
- iter_line = "mnl_attr_for_each(attr, nlh, yarg->ys->family->hdr_len)"
+ iter_line = "ynl_attr_for_each(attr, nlh, yarg->ys->family->hdr_len)"
+ if ri.op.fixed_header != ri.family.fixed_header:
+ if ri.family.is_classic():
+ iter_line = f"ynl_attr_for_each(attr, nlh, sizeof({struct.fixed_header}))"
+ else:
+ raise Exception("Per-op fixed header not supported, yet")
- array_nests = set()
+ indexed_arrays = set()
multi_attrs = set()
needs_parg = False
+ var_set = set()
for arg, aspec in struct.member_list():
- if aspec['type'] == 'array-nest':
- local_vars.append(f'const struct nlattr *attr_{aspec.c_name};')
- array_nests.add(arg)
+ if aspec['type'] == 'indexed-array' and 'sub-type' in aspec:
+ if aspec["sub-type"] in {'binary', 'nest'}:
+ local_vars.append(f'const struct nlattr *attr_{aspec.c_name} = NULL;')
+ indexed_arrays.add(arg)
+ elif aspec['sub-type'] in scalars:
+ local_vars.append(f'const struct nlattr *attr_{aspec.c_name} = NULL;')
+ indexed_arrays.add(arg)
+ else:
+ raise Exception(f'Unsupported sub-type {aspec["sub-type"]}')
if 'multi-attr' in aspec:
multi_attrs.add(arg)
needs_parg |= 'nested-attributes' in aspec
- if array_nests or multi_attrs:
+ needs_parg |= 'sub-message' in aspec
+
+ try:
+ _, _, l_vars = aspec._attr_get(ri, '')
+ var_set |= set(l_vars) if l_vars else set()
+ except Exception:
+ pass # _attr_get() not implemented by simple types, ignore
+ local_vars += list(var_set)
+ if indexed_arrays or multi_attrs:
local_vars.append('int i;')
if needs_parg:
local_vars.append('struct ynl_parse_arg parg;')
init_lines.append('parg.ys = yarg->ys;')
- all_multi = array_nests | multi_attrs
+ all_multi = indexed_arrays | multi_attrs
- for anest in sorted(all_multi):
- local_vars.append(f"unsigned int n_{struct[anest].c_name} = 0;")
+ for arg in sorted(all_multi):
+ local_vars.append(f"unsigned int n_{struct[arg].c_name} = 0;")
ri.cw.block_start()
ri.cw.write_func_lvar(local_vars)
@@ -1664,17 +2159,22 @@ def _multi_parse(ri, struct, init_lines, local_vars):
for arg in struct.inherited:
ri.cw.p(f'dst->{arg} = {arg};')
- if ri.fixed_hdr:
- ri.cw.p('hdr = mnl_nlmsg_get_payload_offset(nlh, sizeof(struct genlmsghdr));')
- ri.cw.p(f"memcpy(&dst->_hdr, hdr, sizeof({ri.fixed_hdr}));")
- for anest in sorted(all_multi):
- aspec = struct[anest]
+ if struct.fixed_header:
+ if struct.nested:
+ ri.cw.p('hdr = ynl_attr_data(nested);')
+ elif ri.family.is_classic():
+ ri.cw.p('hdr = ynl_nlmsg_data(nlh);')
+ else:
+ ri.cw.p('hdr = ynl_nlmsg_data_offset(nlh, sizeof(struct genlmsghdr));')
+ ri.cw.p(f"memcpy(&dst->_hdr, hdr, sizeof({struct.fixed_header}));")
+ for arg in sorted(all_multi):
+ aspec = struct[arg]
ri.cw.p(f"if (dst->{aspec.c_name})")
ri.cw.p(f'return ynl_error_parse(yarg, "attribute already present ({struct.attr_set.name}.{aspec.name})");')
ri.cw.nl()
ri.cw.block_start(line=iter_line)
- ri.cw.p('unsigned int type = mnl_attr_get_type(attr);')
+ ri.cw.p('unsigned int type = ynl_attr_type(attr);')
ri.cw.nl()
first = True
@@ -1686,41 +2186,64 @@ def _multi_parse(ri, struct, init_lines, local_vars):
ri.cw.block_end()
ri.cw.nl()
- for anest in sorted(array_nests):
- aspec = struct[anest]
+ for arg in sorted(indexed_arrays):
+ aspec = struct[arg]
ri.cw.block_start(line=f"if (n_{aspec.c_name})")
- ri.cw.p(f"dst->{aspec.c_name} = calloc({aspec.c_name}, sizeof(*dst->{aspec.c_name}));")
- ri.cw.p(f"dst->n_{aspec.c_name} = n_{aspec.c_name};")
+ ri.cw.p(f"dst->{aspec.c_name} = calloc(n_{aspec.c_name}, sizeof(*dst->{aspec.c_name}));")
+ ri.cw.p(f"dst->_count.{aspec.c_name} = n_{aspec.c_name};")
ri.cw.p('i = 0;')
- ri.cw.p(f"parg.rsp_policy = &{aspec.nested_render_name}_nest;")
- ri.cw.block_start(line=f"mnl_attr_for_each_nested(attr, attr_{aspec.c_name})")
- ri.cw.p(f"parg.data = &dst->{aspec.c_name}[i];")
- ri.cw.p(f"if ({aspec.nested_render_name}_parse(&parg, attr, mnl_attr_get_type(attr)))")
- ri.cw.p('return MNL_CB_ERROR;')
+ if 'nested-attributes' in aspec:
+ ri.cw.p(f"parg.rsp_policy = &{aspec.nested_render_name}_nest;")
+ ri.cw.block_start(line=f"ynl_attr_for_each_nested(attr, attr_{aspec.c_name})")
+ if 'nested-attributes' in aspec:
+ ri.cw.p(f"parg.data = &dst->{aspec.c_name}[i];")
+ ri.cw.p(f"if ({aspec.nested_render_name}_parse(&parg, attr, ynl_attr_type(attr)))")
+ ri.cw.p('return YNL_PARSE_CB_ERROR;')
+ elif aspec.sub_type in scalars:
+ ri.cw.p(f"dst->{aspec.c_name}[i] = ynl_attr_get_{aspec.sub_type}(attr);")
+ elif aspec.sub_type == 'binary' and 'exact-len' in aspec.checks:
+ # Length is validated by typol
+ ri.cw.p(f'memcpy(dst->{aspec.c_name}[i], ynl_attr_data(attr), {aspec.checks["exact-len"]});')
+ else:
+ raise Exception(f"Nest parsing type not supported in {aspec['name']}")
ri.cw.p('i++;')
ri.cw.block_end()
ri.cw.block_end()
ri.cw.nl()
- for anest in sorted(multi_attrs):
- aspec = struct[anest]
+ for arg in sorted(multi_attrs):
+ aspec = struct[arg]
ri.cw.block_start(line=f"if (n_{aspec.c_name})")
ri.cw.p(f"dst->{aspec.c_name} = calloc(n_{aspec.c_name}, sizeof(*dst->{aspec.c_name}));")
- ri.cw.p(f"dst->n_{aspec.c_name} = n_{aspec.c_name};")
+ ri.cw.p(f"dst->_count.{aspec.c_name} = n_{aspec.c_name};")
ri.cw.p('i = 0;')
if 'nested-attributes' in aspec:
ri.cw.p(f"parg.rsp_policy = &{aspec.nested_render_name}_nest;")
ri.cw.block_start(line=iter_line)
- ri.cw.block_start(line=f"if (mnl_attr_get_type(attr) == {aspec.enum_name})")
+ ri.cw.block_start(line=f"if (ynl_attr_type(attr) == {aspec.enum_name})")
if 'nested-attributes' in aspec:
ri.cw.p(f"parg.data = &dst->{aspec.c_name}[i];")
ri.cw.p(f"if ({aspec.nested_render_name}_parse(&parg, attr))")
- ri.cw.p('return MNL_CB_ERROR;')
+ ri.cw.p('return YNL_PARSE_CB_ERROR;')
elif aspec.type in scalars:
- ri.cw.p(f"dst->{aspec.c_name}[i] = mnl_attr_get_{aspec.mnl_type()}(attr);")
+ ri.cw.p(f"dst->{aspec.c_name}[i] = ynl_attr_get_{aspec.type}(attr);")
+ elif aspec.type == 'binary' and 'struct' in aspec:
+ ri.cw.p('size_t len = ynl_attr_data_len(attr);')
+ ri.cw.nl()
+ ri.cw.p(f'if (len > sizeof(dst->{aspec.c_name}[0]))')
+ ri.cw.p(f'len = sizeof(dst->{aspec.c_name}[0]);')
+ ri.cw.p(f"memcpy(&dst->{aspec.c_name}[i], ynl_attr_data(attr), len);")
+ elif aspec.type == 'string':
+ ri.cw.p('unsigned int len;')
+ ri.cw.nl()
+ ri.cw.p('len = strnlen(ynl_attr_get_str(attr), ynl_attr_data_len(attr));')
+ ri.cw.p(f'dst->{aspec.c_name}[i] = malloc(sizeof(struct ynl_string) + len + 1);')
+ ri.cw.p(f"dst->{aspec.c_name}[i]->len = len;")
+ ri.cw.p(f"memcpy(dst->{aspec.c_name}[i]->str, ynl_attr_get_str(attr), len);")
+ ri.cw.p(f"dst->{aspec.c_name}[i]->str[len] = 0;")
else:
- raise Exception('Nest parsing type not supported yet')
+ raise Exception(f'Nest parsing of type {aspec.type} not supported yet')
ri.cw.p('i++;')
ri.cw.block_end()
ri.cw.block_end()
@@ -1730,7 +2253,43 @@ def _multi_parse(ri, struct, init_lines, local_vars):
if struct.nested:
ri.cw.p('return 0;')
else:
- ri.cw.p('return MNL_CB_OK;')
+ ri.cw.p('return YNL_PARSE_CB_OK;')
+ ri.cw.block_end()
+ ri.cw.nl()
+
+
+def parse_rsp_submsg(ri, struct):
+ parse_rsp_nested_prototype(ri, struct, suffix='')
+
+ var = 'dst'
+ local_vars = {'const struct nlattr *attr = nested;',
+ f'{struct.ptr_name}{var} = yarg->data;',
+ 'struct ynl_parse_arg parg;'}
+
+ for _, arg in struct.member_list():
+ _, _, l_vars = arg._attr_get(ri, var)
+ local_vars |= set(l_vars) if l_vars else set()
+
+ ri.cw.block_start()
+ ri.cw.write_func_lvar(list(local_vars))
+ ri.cw.p('parg.ys = yarg->ys;')
+ ri.cw.nl()
+
+ first = True
+ for name, arg in struct.member_list():
+ kw = 'if' if first else 'else if'
+ first = False
+
+ ri.cw.block_start(line=f'{kw} (!strcmp(sel, "{name}"))')
+ get_lines, init_lines, _ = arg._attr_get(ri, var)
+ for line in init_lines or []:
+ ri.cw.p(line)
+ for line in get_lines:
+ ri.cw.p(line)
+ if arg.presence_type() == 'present':
+ ri.cw.p(f"{var}->_present.{arg.c_name} = 1;")
+ ri.cw.block_end()
+ ri.cw.p('return 0;')
ri.cw.block_end()
ri.cw.nl()
@@ -1738,6 +2297,10 @@ def _multi_parse(ri, struct, init_lines, local_vars):
def parse_rsp_nested_prototype(ri, struct, suffix=';'):
func_args = ['struct ynl_parse_arg *yarg',
'const struct nlattr *nested']
+ for sel in struct.external_selectors():
+ func_args.append('const char *_sel_' + sel.name)
+ if struct.submsg:
+ func_args.insert(1, 'const char *sel')
for arg in struct.inherited:
func_args.append('__u32 ' + arg)
@@ -1746,13 +2309,23 @@ def parse_rsp_nested_prototype(ri, struct, suffix=';'):
def parse_rsp_nested(ri, struct):
+ if struct.submsg:
+ return parse_rsp_submsg(ri, struct)
+
parse_rsp_nested_prototype(ri, struct, suffix='')
local_vars = ['const struct nlattr *attr;',
f'{struct.ptr_name}dst = yarg->data;']
init_lines = []
- _multi_parse(ri, struct, init_lines, local_vars)
+ if struct.member_list():
+ _multi_parse(ri, struct, init_lines, local_vars)
+ else:
+ # Empty nest
+ ri.cw.block_start()
+ ri.cw.p('return 0;')
+ ri.cw.block_end()
+ ri.cw.nl()
def parse_rsp_msg(ri, deref=False):
@@ -1760,10 +2333,9 @@ def parse_rsp_msg(ri, deref=False):
return
func_args = ['const struct nlmsghdr *nlh',
- 'void *data']
+ 'struct ynl_parse_arg *yarg']
local_vars = [f'{type_name(ri, "reply", deref=deref)} *dst;',
- 'struct ynl_parse_arg *yarg = data;',
'const struct nlattr *attr;']
init_lines = ['dst = yarg->data;']
@@ -1774,7 +2346,7 @@ def parse_rsp_msg(ri, deref=False):
else:
# Empty reply
ri.cw.block_start()
- ri.cw.p('return MNL_CB_OK;')
+ ri.cw.p('return YNL_PARSE_CB_OK;')
ri.cw.block_end()
ri.cw.nl()
@@ -1792,24 +2364,30 @@ def print_req(ri):
ret_err = 'NULL'
local_vars += [f'{type_name(ri, rdir(direction))} *rsp;']
- if ri.fixed_hdr:
+ if ri.struct["request"].fixed_header:
local_vars += ['size_t hdr_len;',
'void *hdr;']
+ local_vars += put_local_vars(ri.struct['request'])
+
print_prototype(ri, direction, terminate=False)
ri.cw.block_start()
ri.cw.write_func_lvar(local_vars)
- ri.cw.p(f"nlh = ynl_gemsg_start_req(ys, {ri.nl.get_family_id()}, {ri.op.enum_name}, 1);")
+ if ri.family.is_classic():
+ ri.cw.p(f"nlh = ynl_msg_start_req(ys, {ri.op.enum_name}, req->_nlmsg_flags);")
+ else:
+ ri.cw.p(f"nlh = ynl_gemsg_start_req(ys, {ri.nl.get_family_id()}, {ri.op.enum_name}, 1);")
ri.cw.p(f"ys->req_policy = &{ri.struct['request'].render_name}_nest;")
+ ri.cw.p(f"ys->req_hdr_len = {ri.fixed_hdr_len};")
if 'reply' in ri.op[ri.op_mode]:
ri.cw.p(f"yrs.yarg.rsp_policy = &{ri.struct['reply'].render_name}_nest;")
ri.cw.nl()
- if ri.fixed_hdr:
+ if ri.struct['request'].fixed_header:
ri.cw.p("hdr_len = sizeof(req->_hdr);")
- ri.cw.p("hdr = mnl_nlmsg_put_extra_header(nlh, hdr_len);")
+ ri.cw.p("hdr = ynl_nlmsg_put_extra_header(nlh, hdr_len);")
ri.cw.p("memcpy(hdr, &req->_hdr, hdr_len);")
ri.cw.nl()
@@ -1853,31 +2431,39 @@ def print_dump(ri):
'struct nlmsghdr *nlh;',
'int err;']
- if ri.fixed_hdr:
+ if ri.struct['request'].fixed_header:
local_vars += ['size_t hdr_len;',
'void *hdr;']
+ if 'request' in ri.op[ri.op_mode]:
+ local_vars += put_local_vars(ri.struct['request'])
+
ri.cw.write_func_lvar(local_vars)
- ri.cw.p('yds.ys = ys;')
+ ri.cw.p('yds.yarg.ys = ys;')
+ ri.cw.p(f"yds.yarg.rsp_policy = &{ri.struct['reply'].render_name}_nest;")
+ ri.cw.p("yds.yarg.data = NULL;")
ri.cw.p(f"yds.alloc_sz = sizeof({type_name(ri, rdir(direction))});")
ri.cw.p(f"yds.cb = {op_prefix(ri, 'reply', deref=True)}_parse;")
if ri.op.value is not None:
ri.cw.p(f'yds.rsp_cmd = {ri.op.enum_name};')
else:
ri.cw.p(f'yds.rsp_cmd = {ri.op.rsp_value};')
- ri.cw.p(f"yds.rsp_policy = &{ri.struct['reply'].render_name}_nest;")
ri.cw.nl()
- ri.cw.p(f"nlh = ynl_gemsg_start_dump(ys, {ri.nl.get_family_id()}, {ri.op.enum_name}, 1);")
+ if ri.family.is_classic():
+ ri.cw.p(f"nlh = ynl_msg_start_dump(ys, {ri.op.enum_name});")
+ else:
+ ri.cw.p(f"nlh = ynl_gemsg_start_dump(ys, {ri.nl.get_family_id()}, {ri.op.enum_name}, 1);")
- if ri.fixed_hdr:
+ if ri.struct['request'].fixed_header:
ri.cw.p("hdr_len = sizeof(req->_hdr);")
- ri.cw.p("hdr = mnl_nlmsg_put_extra_header(nlh, hdr_len);")
+ ri.cw.p("hdr = ynl_nlmsg_put_extra_header(nlh, hdr_len);")
ri.cw.p("memcpy(hdr, &req->_hdr, hdr_len);")
ri.cw.nl()
if "request" in ri.op[ri.op_mode]:
ri.cw.p(f"ys->req_policy = &{ri.struct['request'].render_name}_nest;")
+ ri.cw.p(f"ys->req_hdr_len = {ri.fixed_hdr_len};")
ri.cw.nl()
for _, attr in ri.struct["request"].member_list():
attr.attr_put(ri, "req")
@@ -1906,11 +2492,22 @@ def free_arg_name(direction):
return 'obj'
-def print_alloc_wrapper(ri, direction):
+def print_alloc_wrapper(ri, direction, struct=None):
name = op_prefix(ri, direction)
- ri.cw.write_func_prot(f'static inline struct {name} *', f"{name}_alloc", [f"void"])
+ struct_name = name
+ if ri.type_name_conflict:
+ struct_name += '_'
+
+ args = ["void"]
+ cnt = "1"
+ if struct and struct.in_multi_val:
+ args = ["unsigned int n"]
+ cnt = "n"
+
+ ri.cw.write_func_prot(f'static inline struct {struct_name} *',
+ f"{name}_alloc", args)
ri.cw.block_start()
- ri.cw.p(f'return calloc(1, sizeof(struct {name}));')
+ ri.cw.p(f'return calloc({cnt}, sizeof(struct {struct_name}));')
ri.cw.block_end()
@@ -1923,32 +2520,45 @@ def print_free_prototype(ri, direction, suffix=';'):
ri.cw.write_func_prot('void', f"{name}_free", [f"struct {struct_name} *{arg}"], suffix=suffix)
+def print_nlflags_set(ri, direction):
+ name = op_prefix(ri, direction)
+ ri.cw.write_func_prot('static inline void', f"{name}_set_nlflags",
+ [f"struct {name} *req", "__u16 nl_flags"])
+ ri.cw.block_start()
+ ri.cw.p('req->_nlmsg_flags = nl_flags;')
+ ri.cw.block_end()
+ ri.cw.nl()
+
+
def _print_type(ri, direction, struct):
suffix = f'_{ri.type_name}{direction_to_suffix[direction]}'
if not direction and ri.type_name_conflict:
suffix += '_'
- if ri.op_mode == 'dump':
+ if ri.op_mode == 'dump' and not ri.type_oneside:
suffix += '_dump'
ri.cw.block_start(line=f"struct {ri.family.c_name}{suffix}")
- if ri.fixed_hdr:
- ri.cw.p(ri.fixed_hdr + ' _hdr;')
+ if ri.needs_nlflags(direction):
+ ri.cw.p('__u16 _nlmsg_flags;')
+ ri.cw.nl()
+ if struct.fixed_header:
+ ri.cw.p(struct.fixed_header + ' _hdr;')
ri.cw.nl()
- meta_started = False
- for _, attr in struct.member_list():
- for type_filter in ['len', 'bit']:
+ for type_filter in ['present', 'len', 'count']:
+ meta_started = False
+ for _, attr in struct.member_list():
line = attr.presence_member(ri.ku_space, type_filter)
if line:
if not meta_started:
- ri.cw.block_start(line=f"struct")
+ ri.cw.block_start(line="struct")
meta_started = True
ri.cw.p(line)
- if meta_started:
- ri.cw.block_end(line='_present;')
- ri.cw.nl()
+ if meta_started:
+ ri.cw.block_end(line=f'_{type_filter};')
+ ri.cw.nl()
for arg in struct.inherited:
ri.cw.p(f"__u32 {arg};")
@@ -1967,11 +2577,27 @@ def print_type(ri, direction):
def print_type_full(ri, struct):
_print_type(ri, "", struct)
+ if struct.request and struct.in_multi_val:
+ print_alloc_wrapper(ri, "", struct)
+ ri.cw.nl()
+ free_rsp_nested_prototype(ri)
+ ri.cw.nl()
+
+ # Name conflicts are too hard to deal with in the current code base;
+ # they are very rare, so don't bother printing setters in that case.
+ if ri.ku_space == 'user' and not ri.type_name_conflict:
+ for _, attr in struct.member_list():
+ attr.setter(ri, ri.attr_set, "", var="obj")
+ ri.cw.nl()
+
def print_type_helpers(ri, direction, deref=False):
print_free_prototype(ri, direction)
ri.cw.nl()
+ if ri.needs_nlflags(direction):
+ print_nlflags_set(ri, direction)
+
if ri.ku_space == 'user' and direction == 'request':
for _, attr in ri.struct[direction].member_list():
attr.setter(ri, ri.attr_set, direction, deref=deref)
@@ -1979,7 +2605,7 @@ def print_type_helpers(ri, direction, deref=False):
def print_req_type_helpers(ri):
- if len(ri.struct["request"].attr_list) == 0:
+ if ri.type_empty("request"):
return
print_alloc_wrapper(ri, "request")
print_type_helpers(ri, "request")
@@ -2002,7 +2628,7 @@ def print_parse_prototype(ri, direction, terminate=True):
def print_req_type(ri):
- if len(ri.struct["request"].attr_list) == 0:
+ if ri.type_empty("request"):
return
print_type(ri, "request")
@@ -2040,11 +2666,9 @@ def print_wrapped_type(ri):
def _free_type_members_iter(ri, struct):
- for _, attr in struct.member_list():
- if attr.free_needs_iter():
- ri.cw.p('unsigned int i;')
- ri.cw.nl()
- break
+ if struct.free_needs_iter():
+ ri.cw.p('unsigned int i;')
+ ri.cw.nl()
def _free_type_members(ri, var, struct, ref=''):
@@ -2093,7 +2717,7 @@ def print_dump_type_free(ri):
ri.cw.nl()
_free_type_members(ri, 'rsp', ri.struct['reply'], ref='obj.')
- ri.cw.p(f'free(rsp);')
+ ri.cw.p('free(rsp);')
ri.cw.block_end()
ri.cw.block_end()
ri.cw.nl()
@@ -2104,7 +2728,7 @@ def print_ntf_type_free(ri):
ri.cw.block_start()
_free_type_members_iter(ri, ri.struct['reply'])
_free_type_members(ri, 'rsp', ri.struct['reply'], ref='obj.')
- ri.cw.p(f'free(rsp);')
+ ri.cw.p('free(rsp);')
ri.cw.block_end()
ri.cw.nl()
@@ -2173,19 +2797,57 @@ def print_kernel_policy_ranges(family, cw):
cw.block_start(line=f'static const struct netlink_range_validation{sign} {c_lower(attr.enum_name)}_range =')
members = []
if 'min' in attr.checks:
- members.append(('min', str(attr.get_limit('min')) + suffix))
+ members.append(('min', attr.get_limit_str('min', suffix=suffix)))
if 'max' in attr.checks:
- members.append(('max', str(attr.get_limit('max')) + suffix))
+ members.append(('max', attr.get_limit_str('max', suffix=suffix)))
cw.write_struct_init(members)
cw.block_end(line=';')
cw.nl()
+def print_kernel_policy_sparse_enum_validates(family, cw):
+ first = True
+ for _, attr_set in family.attr_sets.items():
+ if attr_set.subset_of:
+ continue
+
+ for _, attr in attr_set.items():
+ if not attr.request:
+ continue
+ if not attr.enum_name:
+ continue
+ if 'sparse' not in attr.checks:
+ continue
+
+ if first:
+ cw.p('/* Sparse enums validation callbacks */')
+ first = False
+
+ cw.write_func_prot('static int', f'{c_lower(attr.enum_name)}_validate',
+ ['const struct nlattr *attr', 'struct netlink_ext_ack *extack'])
+ cw.block_start()
+ cw.block_start(line=f'switch (nla_get_{attr["type"]}(attr))')
+ enum = family.consts[attr['enum']]
+ first_entry = True
+ for entry in enum.entries.values():
+ if first_entry:
+ first_entry = False
+ else:
+ cw.p('fallthrough;')
+ cw.p(f'case {entry.c_name}:')
+ cw.p('return 0;')
+ cw.block_end()
+ cw.p('NL_SET_ERR_MSG_ATTR(extack, attr, "invalid enum value");')
+ cw.p('return -EINVAL;')
+ cw.block_end()
+ cw.nl()
+
+
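
For sparse enums, the kernel-side generator above emits a per-attribute validation callback instead of a numeric range. A hedged sketch of the C expected for a hypothetical u32 attribute whose enum holds the entries HYPO_MODE_A, HYPO_MODE_B and HYPO_MODE_D (names invented for the example):

    expected_validate_c = """
    static int hypo_a_mode_validate(const struct nlattr *attr,
                                    struct netlink_ext_ack *extack)
    {
            switch (nla_get_u32(attr)) {
            case HYPO_MODE_A:
                    fallthrough;
            case HYPO_MODE_B:
                    fallthrough;
            case HYPO_MODE_D:
                    return 0;
            }
            NL_SET_ERR_MSG_ATTR(extack, attr, "invalid enum value");
            return -EINVAL;
    }
    """
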
def print_kernel_op_table_fwd(family, cw, terminate):
exported = not kernel_can_gen_family_struct(family)
if not terminate or exported:
- cw.p(f"/* Ops table for {family.name} */")
+ cw.p(f"/* Ops table for {family.ident_name} */")
pol_to_struct = {'global': 'genl_small_ops',
'per-op': 'genl_ops',
@@ -2237,12 +2899,12 @@ def print_kernel_op_table_fwd(family, cw, terminate):
continue
if 'do' in op:
- name = c_lower(f"{family.name}-nl-{op_name}-doit")
+ name = c_lower(f"{family.ident_name}-nl-{op_name}-doit")
cw.write_func_prot('int', name,
['struct sk_buff *skb', 'struct genl_info *info'], suffix=';')
if 'dump' in op:
- name = c_lower(f"{family.name}-nl-{op_name}-dumpit")
+ name = c_lower(f"{family.ident_name}-nl-{op_name}-dumpit")
cw.write_func_prot('int', name,
['struct sk_buff *skb', 'struct netlink_callback *cb'], suffix=';')
cw.nl()
@@ -2268,13 +2930,13 @@ def print_kernel_op_table(family, cw):
for x in op['dont-validate']])), )
for op_mode in ['do', 'dump']:
if op_mode in op:
- name = c_lower(f"{family.name}-nl-{op_name}-{op_mode}it")
+ name = c_lower(f"{family.ident_name}-nl-{op_name}-{op_mode}it")
members.append((op_mode + 'it', name))
if family.kernel_policy == 'per-op':
struct = Struct(family, op['attribute-set'],
type_list=op['do']['request']['attributes'])
- name = c_lower(f"{family.name}-{op_name}-nl-policy")
+ name = c_lower(f"{family.ident_name}-{op_name}-nl-policy")
members.append(('policy', name))
members.append(('maxattr', struct.attr_max_val.enum_name))
if 'flags' in op:
@@ -2306,7 +2968,7 @@ def print_kernel_op_table(family, cw):
members.append(('validate',
' | '.join([c_upper('genl-dont-validate-' + x)
for x in dont_validate])), )
- name = c_lower(f"{family.name}-nl-{op_name}-{op_mode}it")
+ name = c_lower(f"{family.ident_name}-nl-{op_name}-{op_mode}it")
if 'pre' in op[op_mode]:
members.append((cb_names[op_mode]['pre'], c_lower(op[op_mode]['pre'])))
members.append((op_mode + 'it', name))
@@ -2317,9 +2979,9 @@ def print_kernel_op_table(family, cw):
type_list=op[op_mode]['request']['attributes'])
if op.dual_policy:
- name = c_lower(f"{family.name}-{op_name}-{op_mode}-nl-policy")
+ name = c_lower(f"{family.ident_name}-{op_name}-{op_mode}-nl-policy")
else:
- name = c_lower(f"{family.name}-{op_name}-nl-policy")
+ name = c_lower(f"{family.ident_name}-{op_name}-nl-policy")
members.append(('policy', name))
members.append(('maxattr', struct.attr_max_val.enum_name))
flags = (op['flags'] if 'flags' in op else []) + ['cmd-cap-' + op_mode]
@@ -2338,7 +3000,7 @@ def print_kernel_mcgrp_hdr(family, cw):
cw.block_start('enum')
for grp in family.mcgrps['list']:
- grp_id = c_upper(f"{family.name}-nlgrp-{grp['name']},")
+ grp_id = c_upper(f"{family.ident_name}-nlgrp-{grp['name']},")
cw.p(grp_id)
cw.block_end(';')
cw.nl()
@@ -2351,7 +3013,7 @@ def print_kernel_mcgrp_src(family, cw):
cw.block_start('static const struct genl_multicast_group ' + family.c_name + '_nl_mcgrps[] =')
for grp in family.mcgrps['list']:
name = grp['name']
- grp_id = c_upper(f"{family.name}-nlgrp-{name}")
+ grp_id = c_upper(f"{family.ident_name}-nlgrp-{name}")
cw.p('[' + grp_id + '] = { "' + name + '", },')
cw.block_end(';')
cw.nl()
@@ -2363,13 +3025,28 @@ def print_kernel_family_struct_hdr(family, cw):
cw.p(f"extern struct genl_family {family.c_name}_nl_family;")
cw.nl()
+ if 'sock-priv' in family.kernel_family:
+ cw.p(f'void {family.c_name}_nl_sock_priv_init({family.kernel_family["sock-priv"]} *priv);')
+ cw.p(f'void {family.c_name}_nl_sock_priv_destroy({family.kernel_family["sock-priv"]} *priv);')
+ cw.nl()
def print_kernel_family_struct_src(family, cw):
if not kernel_can_gen_family_struct(family):
return
- cw.block_start(f"struct genl_family {family.name}_nl_family __ro_after_init =")
+ if 'sock-priv' in family.kernel_family:
+ # Generate "trampolines" to make CFI happy
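+ # (the .sock_priv_init/.sock_priv_destroy hooks of struct genl_family take
+ #  a void *, while the user-supplied helpers take the concrete sock-priv
+ #  type, so assigning them directly would trip kCFI).  For a hypothetical
+ #  family "foo" the generated trampoline looks roughly like:
+ #
+ #	static void __foo_nl_sock_priv_init(void *priv)
+ #	{
+ #		foo_nl_sock_priv_init(priv);
+ #	}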
+ cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_init",
+ [f"{family.c_name}_nl_sock_priv_init(priv);"],
+ ["void *priv"])
+ cw.nl()
+ cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_destroy",
+ [f"{family.c_name}_nl_sock_priv_destroy(priv);"],
+ ["void *priv"])
+ cw.nl()
+
+ cw.block_start(f"struct genl_family {family.ident_name}_nl_family __ro_after_init =")
cw.p('.name\t\t= ' + family.fam_key + ',')
cw.p('.version\t= ' + family.ver_key + ',')
cw.p('.netnsok\t= true,')
@@ -2384,6 +3061,10 @@ def print_kernel_family_struct_src(family, cw):
if family.mcgrps['list']:
cw.p(f'.mcgrps\t\t= {family.c_name}_nl_mcgrps,')
cw.p(f'.n_mcgrps\t= ARRAY_SIZE({family.c_name}_nl_mcgrps),')
+ if 'sock-priv' in family.kernel_family:
+ cw.p(f'.sock_priv_size\t= sizeof({family.kernel_family["sock-priv"]}),')
+ cw.p(f'.sock_priv_init\t= __{family.c_name}_nl_sock_priv_init,')
+ cw.p(f'.sock_priv_destroy = __{family.c_name}_nl_sock_priv_destroy,')
cw.block_end(';')
@@ -2397,8 +3078,90 @@ def uapi_enum_start(family, cw, obj, ckey='', enum_name='enum-name'):
cw.block_start(line=start_line)
+def render_uapi_unified(family, cw, max_by_define, separate_ntf):
+ max_name = c_upper(family.get('cmd-max-name', f"{family.op_prefix}MAX"))
+ cnt_name = c_upper(family.get('cmd-cnt-name', f"__{family.op_prefix}MAX"))
+ max_value = f"({cnt_name} - 1)"
+
+ uapi_enum_start(family, cw, family['operations'], 'enum-name')
+ val = 0
+ for op in family.msgs.values():
+ if separate_ntf and ('notify' in op or 'event' in op):
+ continue
+
+ suffix = ','
+ if op.value != val:
+ suffix = f" = {op.value},"
+ val = op.value
+ cw.p(op.enum_name + suffix)
+ val += 1
+ cw.nl()
+ cw.p(cnt_name + ('' if max_by_define else ','))
+ if not max_by_define:
+ cw.p(f"{max_name} = {max_value}")
+ cw.block_end(line=';')
+ if max_by_define:
+ cw.p(f"#define {max_name} {max_value}")
+ cw.nl()
+
+
+def render_uapi_directional(family, cw, max_by_define):
+ max_name = f"{family.op_prefix}USER_MAX"
+ cnt_name = f"__{family.op_prefix}USER_CNT"
+ max_value = f"({cnt_name} - 1)"
+
+ cw.block_start(line='enum')
+ cw.p(c_upper(f'{family.name}_MSG_USER_NONE = 0,'))
+ val = 0
+ for op in family.msgs.values():
+ if 'do' in op and 'event' not in op:
+ suffix = ','
+ if op.value and op.value != val:
+ suffix = f" = {op.value},"
+ val = op.value
+ cw.p(op.enum_name + suffix)
+ val += 1
+ cw.nl()
+ cw.p(cnt_name + ('' if max_by_define else ','))
+ if not max_by_define:
+ cw.p(f"{max_name} = {max_value}")
+ cw.block_end(line=';')
+ if max_by_define:
+ cw.p(f"#define {max_name} {max_value}")
+ cw.nl()
+
+ max_name = f"{family.op_prefix}KERNEL_MAX"
+ cnt_name = f"__{family.op_prefix}KERNEL_CNT"
+ max_value = f"({cnt_name} - 1)"
+
+ cw.block_start(line='enum')
+ cw.p(c_upper(f'{family.name}_MSG_KERNEL_NONE = 0,'))
+ val = 0
+ for op in family.msgs.values():
+ if ('do' in op and 'reply' in op['do']) or 'notify' in op or 'event' in op:
+ enum_name = op.enum_name
+ if 'event' not in op and 'notify' not in op:
+ enum_name = f'{enum_name}_REPLY'
+
+ suffix = ','
+ if op.value and op.value != val:
+ suffix = f" = {op.value},"
+ val = op.value
+ cw.p(enum_name + suffix)
+ val += 1
+ cw.nl()
+ cw.p(cnt_name + ('' if max_by_define else ','))
+ if not max_by_define:
+ cw.p(f"{max_name} = {max_value}")
+ cw.block_end(line=';')
+ if max_by_define:
+ cw.p(f"#define {max_name} {max_value}")
+ cw.nl()
+
+
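+# For reference, the "directional" model above splits the commands into two
+# enums, one per direction.  For a hypothetical family whose op-prefix is
+# "FOO_MSG_" and which has a single "get" op with a reply, the output is
+# roughly (names illustrative):
+#
+#	enum {
+#		FOO_MSG_USER_NONE = 0,
+#		FOO_MSG_GET,
+#
+#		__FOO_MSG_USER_CNT,
+#		FOO_MSG_USER_MAX = (__FOO_MSG_USER_CNT - 1)
+#	};
+#
+#	enum {
+#		FOO_MSG_KERNEL_NONE = 0,
+#		FOO_MSG_GET_REPLY,
+#
+#		__FOO_MSG_KERNEL_CNT,
+#		FOO_MSG_KERNEL_MAX = (__FOO_MSG_KERNEL_CNT - 1)
+#	};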
def render_uapi(family, cw):
hdr_prot = f"_UAPI_LINUX_{c_upper(family.uapi_header_name)}_H"
+ hdr_prot = hdr_prot.replace('/', '_')
cw.p('#ifndef ' + hdr_prot)
cw.p('#define ' + hdr_prot)
cw.nl()
@@ -2410,6 +3173,9 @@ def render_uapi(family, cw):
defines = []
for const in family['definitions']:
+ if const.get('header'):
+ continue
+
if const['type'] != 'const':
cw.writes_defines(defines)
defines = []
@@ -2419,12 +3185,19 @@ def render_uapi(family, cw):
if const['type'] == 'enum' or const['type'] == 'flags':
enum = family.consts[const['name']]
+ if enum.header:
+ continue
+
if enum.has_doc():
- cw.p('/**')
- doc = ''
- if 'doc' in enum:
- doc = ' - ' + enum['doc']
- cw.write_doc_line(enum.enum_name + doc)
+ if enum.has_entry_doc():
+ cw.p('/**')
+ doc = ''
+ if 'doc' in enum:
+ doc = ' - ' + enum['doc']
+ cw.write_doc_line(enum.enum_name + doc)
+ else:
+ cw.p('/*')
+ cw.write_doc_line(enum['doc'], indent=False)
for entry in enum.entries.values():
if entry.has_doc():
doc = '@' + entry.c_name + ': ' + entry['doc']
@@ -2432,7 +3205,7 @@ def render_uapi(family, cw):
cw.p(' */')
uapi_enum_start(family, cw, const, 'name')
- name_pfx = const.get('name-prefix', f"{family.name}-{const['name']}-")
+ name_pfx = const.get('name-prefix', f"{family.ident_name}-{const['name']}-")
for entry in enum.entries.values():
suffix = ','
if entry.value_change:
@@ -2447,14 +3220,18 @@ def render_uapi(family, cw):
max_val = f' = {enum.get_mask()},'
cw.p(max_name + max_val)
else:
+ cnt_name = enum.enum_cnt_name
max_name = c_upper(name_pfx + 'max')
- cw.p('__' + max_name + ',')
- cw.p(max_name + ' = (__' + max_name + ' - 1)')
+ if not cnt_name:
+ cnt_name = '__' + name_pfx + 'max'
+ cw.p(c_upper(cnt_name) + ',')
+ cw.p(max_name + ' = (' + c_upper(cnt_name) + ' - 1)')
cw.block_end(line=';')
cw.nl()
elif const['type'] == 'const':
+ name_pfx = const.get('name-prefix', f"{family.ident_name}-")
defines.append([c_upper(family.get('c-define-name',
- f"{family.name}-{const['name']}")),
+ f"{name_pfx}{const['name']}")),
const['value']])
if defines:
@@ -2478,7 +3255,8 @@ def render_uapi(family, cw):
val = attr.value
val += 1
cw.p(attr.enum_name + suffix)
- cw.nl()
+ if attr_set.items():
+ cw.nl()
cw.p(attr_set.cnt_name + ('' if max_by_define else ','))
if not max_by_define:
cw.p(f"{attr_set.max_name} = {max_value}")
@@ -2490,30 +3268,12 @@ def render_uapi(family, cw):
# Commands
separate_ntf = 'async-prefix' in family['operations']
- max_name = c_upper(family.get('cmd-max-name', f"{family.op_prefix}MAX"))
- cnt_name = c_upper(family.get('cmd-cnt-name', f"__{family.op_prefix}MAX"))
- max_value = f"({cnt_name} - 1)"
-
- uapi_enum_start(family, cw, family['operations'], 'enum-name')
- val = 0
- for op in family.msgs.values():
- if separate_ntf and ('notify' in op or 'event' in op):
- continue
-
- suffix = ','
- if op.value != val:
- suffix = f" = {op.value},"
- val = op.value
- cw.p(op.enum_name + suffix)
- val += 1
- cw.nl()
- cw.p(cnt_name + ('' if max_by_define else ','))
- if not max_by_define:
- cw.p(f"{max_name} = {max_value}")
- cw.block_end(line=';')
- if max_by_define:
- cw.p(f"#define {max_name} {max_value}")
- cw.nl()
+ if family.msg_id_model == 'unified':
+ render_uapi_unified(family, cw, max_by_define, separate_ntf)
+ elif family.msg_id_model == 'directional':
+ render_uapi_directional(family, cw, max_by_define)
+ else:
+ raise Exception(f'Unsupported message enum-model {family.msg_id_model}')
if separate_ntf:
uapi_enum_start(family, cw, family['operations'], enum_name='async-enum')
@@ -2532,7 +3292,7 @@ def render_uapi(family, cw):
defines = []
for grp in family.mcgrps['list']:
name = grp['name']
- defines.append([c_upper(grp.get('c-define-name', f"{family.name}-mcgrp-{name}")),
+ defines.append([c_upper(grp.get('c-define-name', f"{family.ident_name}-mcgrp-{name}")),
f'{name}'])
cw.nl()
if defines:
@@ -2543,7 +3303,11 @@ def render_uapi(family, cw):
def _render_user_ntf_entry(ri, op):
- ri.cw.block_start(line=f"[{op.enum_name}] = ")
+ if not ri.family.is_classic():
+ ri.cw.block_start(line=f"[{op.enum_name}] = ")
+ else:
+ crud_op = ri.family.req_by_value[op.rsp_value]
+ ri.cw.block_start(line=f"[{crud_op.enum_name}] = ")
ri.cw.p(f".alloc_sz\t= sizeof({type_name(ri, 'event')}),")
ri.cw.p(f".cb\t\t= {op_prefix(ri, 'reply', deref=True)}_parse,")
ri.cw.p(f".policy\t\t= &{ri.struct['reply'].render_name}_nest,")
@@ -2558,7 +3322,7 @@ def render_user_family(family, cw, prototype):
return
if family.ntfs:
- cw.block_start(line=f"static const struct ynl_ntf_info {family['name']}_ntf_info[] = ")
+ cw.block_start(line=f"static const struct ynl_ntf_info {family.c_name}_ntf_info[] = ")
for ntf_op_name, ntf_op in family.ntfs.items():
if 'notify' in ntf_op:
op = family.ops[ntf_op['notify']]
@@ -2578,13 +3342,19 @@ def render_user_family(family, cw, prototype):
cw.block_start(f'{symbol} = ')
cw.p(f'.name\t\t= "{family.c_name}",')
- if family.fixed_header:
+ if family.is_classic():
+ cw.p('.is_classic\t= true,')
+ cw.p(f'.classic_id\t= {family.get("protonum")},')
+ if family.is_classic():
+ if family.fixed_header:
+ cw.p(f'.hdr_len\t= sizeof(struct {c_lower(family.fixed_header)}),')
+ elif family.fixed_header:
cw.p(f'.hdr_len\t= sizeof(struct genlmsghdr) + sizeof(struct {c_lower(family.fixed_header)}),')
else:
cw.p('.hdr_len\t= sizeof(struct genlmsghdr),')
if family.ntfs:
- cw.p(f".ntf_info\t= {family['name']}_ntf_info,")
- cw.p(f".ntf_info_size\t= MNL_ARRAY_SIZE({family['name']}_ntf_info),")
+ cw.p(f".ntf_info\t= {family.c_name}_ntf_info,")
+ cw.p(f".ntf_info_size\t= YNL_ARRAY_SIZE({family.c_name}_ntf_info),")
cw.block_end(line=';')
@@ -2610,7 +3380,8 @@ def find_kernel_root(full_path):
def main():
parser = argparse.ArgumentParser(description='Netlink simple parsing generator')
- parser.add_argument('--mode', dest='mode', type=str, required=True)
+ parser.add_argument('--mode', dest='mode', type=str, required=True,
+ choices=('user', 'kernel', 'uapi'))
parser.add_argument('--spec', dest='spec', type=str, required=True)
parser.add_argument('--header', dest='header', action='store_true', default=None)
parser.add_argument('--source', dest='header', action='store_false')
@@ -2637,13 +3408,6 @@ def main():
os.sys.exit(1)
return
- supported_models = ['unified']
- if args.mode in ['user', 'kernel']:
- supported_models += ['directional']
- if parsed.msg_id_model not in supported_models:
- print(f'Message enum-model {parsed.msg_id_model} not supported for {args.mode} generation')
- os.sys.exit(1)
-
cw = CodeWriter(BaseNlLib(), args.out_file, overwrite=(not args.cmp_out))
_, spec_kernel = find_kernel_root(args.spec)
@@ -2671,15 +3435,21 @@ def main():
cw.p('#define ' + hdr_prot)
cw.nl()
+ if args.out_file:
+ hdr_file = os.path.basename(args.out_file[:-2]) + ".h"
+ else:
+ hdr_file = "generated_header_file.h"
+
if args.mode == 'kernel':
cw.p('#include <net/netlink.h>')
cw.p('#include <net/genetlink.h>')
cw.nl()
if not args.header:
if args.out_file:
- cw.p(f'#include "{os.path.basename(args.out_file[:-2])}.h"')
+ cw.p(f'#include "{hdr_file}"')
cw.nl()
headers = ['uapi/' + parsed.uapi_header]
+ headers += parsed.kernel_family.get('headers', [])
else:
cw.p('#include <stdlib.h>')
cw.p('#include <string.h>')
@@ -2688,19 +3458,23 @@ def main():
if family_contains_bitfield32(parsed):
cw.p('#include <linux/netlink.h>')
else:
- cw.p(f'#include "{parsed.name}-user.h"')
+ cw.p(f'#include "{hdr_file}"')
cw.p('#include "ynl.h"')
- headers = [parsed.uapi_header]
- for definition in parsed['definitions']:
+ headers = []
+ for definition in parsed['definitions'] + parsed['attribute-sets']:
if 'header' in definition:
headers.append(definition['header'])
+ if args.mode == 'user':
+ headers.append(parsed.uapi_header)
+ seen_header = []
for one in headers:
- cw.p(f"#include <{one}>")
+ if one not in seen_header:
+ cw.p(f"#include <{one}>")
+ seen_header.append(one)
cw.nl()
if args.mode == "user":
if not args.header:
- cw.p("#include <libmnl/libmnl.h>")
cw.p("#include <linux/genetlink.h>")
cw.nl()
for one in args.user_header:
@@ -2741,6 +3515,7 @@ def main():
print_kernel_family_struct_hdr(parsed, cw)
else:
print_kernel_policy_ranges(parsed, cw)
+ print_kernel_policy_sparse_enum_validates(parsed, cw)
for _, struct in sorted(parsed.pure_nested_structs.items()):
if struct.request:
@@ -2806,7 +3581,7 @@ def main():
ri = RenderInfo(cw, parsed, args.mode, op, 'dump')
print_req_type(ri)
print_req_type_helpers(ri)
- if not ri.type_consistent:
+ if not ri.type_consistent or ri.type_oneside:
print_rsp_type(ri)
print_wrapped_type(ri)
print_dump_prototype(ri)
@@ -2844,8 +3619,7 @@ def main():
has_recursive_nests = True
if has_recursive_nests:
cw.nl()
- for name in parsed.pure_nested_structs:
- struct = Struct(parsed, name)
+ for struct in parsed.pure_nested_structs.values():
put_typol(cw, struct)
for name in parsed.root_sets:
struct = Struct(parsed, name)
@@ -2884,7 +3658,7 @@ def main():
if 'dump' in op:
cw.p(f"/* {op.enum_name} - dump */")
ri = RenderInfo(cw, parsed, args.mode, op, "dump")
- if not ri.type_consistent:
+ if not ri.type_consistent or ri.type_oneside:
parse_rsp_msg(ri, deref=True)
print_req_free(ri)
print_dump_type_free(ri)
diff --git a/tools/net/ynl/pyynl/ynl_gen_rst.py b/tools/net/ynl/pyynl/ynl_gen_rst.py
new file mode 100755
index 000000000000..90ae19aac89d
--- /dev/null
+++ b/tools/net/ynl/pyynl/ynl_gen_rst.py
@@ -0,0 +1,83 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+# -*- coding: utf-8; mode: python -*-
+
+"""
+ Script to auto generate the documentation for Netlink specifications.
+
+ :copyright: Copyright (C) 2023 Breno Leitao <leitao@debian.org>
+ :license: GPL Version 2, June 1991 see linux/COPYING for details.
+
+ This script performs extensive parsing of the Linux kernel's netlink YAML
+ spec files, in an effort to avoid needing to heavily mark up the original
+ YAML file. It uses the library code from scripts/lib.
+"""
+
+import os.path
+import pathlib
+import sys
+import argparse
+import logging
+
+sys.path.append(pathlib.Path(__file__).resolve().parent.as_posix())
+from lib import YnlDocGenerator # pylint: disable=C0413
+
+def parse_arguments() -> argparse.Namespace:
+ """Parse arguments from user"""
+ parser = argparse.ArgumentParser(description="Netlink RST generator")
+
+ parser.add_argument("-v", "--verbose", action="store_true")
+ parser.add_argument("-o", "--output", help="Output file name")
+
+ # Input YAML spec file
+ group = parser.add_mutually_exclusive_group()
+ group.add_argument("-i", "--input", help="YAML file name")
+
+ args = parser.parse_args()
+
+ if args.verbose:
+ logging.basicConfig(level=logging.DEBUG)
+
+ if args.input and not os.path.isfile(args.input):
+ logging.warning("%s is not a valid file.", args.input)
+ sys.exit(-1)
+
+ if not args.output:
+ logging.error("No output file specified.")
+ sys.exit(-1)
+
+ if os.path.isfile(args.output):
+ logging.debug("%s already exists. Overwriting it.", args.output)
+
+ return args
+
+
+def write_to_rstfile(content: str, filename: str) -> None:
+ """Write the generated content into an RST file"""
+ logging.debug("Saving RST file to %s", filename)
+
+ with open(filename, "w", encoding="utf-8") as rst_file:
+ rst_file.write(content)
+
+
+def main() -> None:
+ """Main function that reads the YAML files and generates the RST files"""
+
+ args = parse_arguments()
+
+ parser = YnlDocGenerator()
+
+ if args.input:
+ logging.debug("Parsing %s", args.input)
+ try:
+ content = parser.parse_yaml_file(os.path.join(args.input))
+ except Exception as exception:
+ logging.warning("Failed to parse %s.", args.input)
+ logging.warning(exception)
+ sys.exit(-1)
+
+ write_to_rstfile(content, args.output)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/tools/net/ynl/samples/.gitignore b/tools/net/ynl/samples/.gitignore
index 49637b26c482..7f5fca7682d7 100644
--- a/tools/net/ynl/samples/.gitignore
+++ b/tools/net/ynl/samples/.gitignore
@@ -1,4 +1,9 @@
ethtool
devlink
netdev
-page-pool
\ No newline at end of file
+ovs
+page-pool
+rt-addr
+rt-link
+rt-route
+tc
diff --git a/tools/net/ynl/samples/Makefile b/tools/net/ynl/samples/Makefile
index 28bdb1557a54..c9494a564da4 100644
--- a/tools/net/ynl/samples/Makefile
+++ b/tools/net/ynl/samples/Makefile
@@ -3,13 +3,13 @@
include ../Makefile.deps
CC=gcc
-CFLAGS=-std=gnu11 -O2 -W -Wall -Wextra -Wno-unused-parameter -Wshadow \
+CFLAGS += -std=gnu11 -O2 -W -Wall -Wextra -Wno-unused-parameter -Wshadow \
-I../lib/ -I../generated/ -idirafter $(UAPI_PATH)
ifeq ("$(DEBUG)","1")
CFLAGS += -g -fsanitize=address -fsanitize=leak -static-libasan
endif
-LDLIBS=-lmnl ../lib/ynl.a ../generated/protos.a
+LDLIBS=../lib/ynl.a ../generated/protos.a
SRCS=$(wildcard *.c)
BINS=$(patsubst %.c,%,${SRCS})
@@ -28,8 +28,8 @@ $(BINS): ../lib/ynl.a ../generated/protos.a $(SRCS)
clean:
rm -f *.o *.d *~
-hardclean: clean
+distclean: clean
rm -f $(BINS)
-.PHONY: all clean
+.PHONY: all clean distclean
.DEFAULT_GOAL=all
diff --git a/tools/net/ynl/samples/devlink.c b/tools/net/ynl/samples/devlink.c
index d2611d7ebab4..ac9dfb01f280 100644
--- a/tools/net/ynl/samples/devlink.c
+++ b/tools/net/ynl/samples/devlink.c
@@ -22,6 +22,7 @@ int main(int argc, char **argv)
ynl_dump_foreach(devs, d) {
struct devlink_info_get_req *info_req;
struct devlink_info_get_rsp *info_rsp;
+ unsigned i;
printf("%s/%s:\n", d->bus_name, d->dev_name);
@@ -34,11 +35,11 @@ int main(int argc, char **argv)
if (!info_rsp)
goto err_free_devs;
- if (info_rsp->_present.info_driver_name_len)
+ if (info_rsp->_len.info_driver_name)
printf(" driver: %s\n", info_rsp->info_driver_name);
- if (info_rsp->n_info_version_running)
+ if (info_rsp->_count.info_version_running)
printf(" running fw:\n");
- for (unsigned i = 0; i < info_rsp->n_info_version_running; i++)
+ for (i = 0; i < info_rsp->_count.info_version_running; i++)
printf(" %s: %s\n",
info_rsp->info_version_running[i].info_version_name,
info_rsp->info_version_running[i].info_version_value);
diff --git a/tools/net/ynl/samples/netdev.c b/tools/net/ynl/samples/netdev.c
index 591b90e21890..22609d44c89a 100644
--- a/tools/net/ynl/samples/netdev.c
+++ b/tools/net/ynl/samples/netdev.c
@@ -79,7 +79,10 @@ int main(int argc, char **argv)
goto err_close;
printf("Select ifc ($ifindex; or 0 = dump; or -2 ntf check): ");
- scanf("%d", &ifindex);
+ if (scanf("%d", &ifindex) != 1) {
+ fprintf(stderr, "Error: unable to parse input\n");
+ goto err_destroy;
+ }
if (ifindex > 0) {
struct netdev_dev_get_req *req;
@@ -100,6 +103,8 @@ int main(int argc, char **argv)
if (!devs)
goto err_close;
+ if (ynl_dump_empty(devs))
+ fprintf(stderr, "Error: no devices reported\n");
ynl_dump_foreach(devs, d)
netdev_print_device(d, 0);
netdev_dev_get_list_free(devs);
@@ -117,6 +122,7 @@ int main(int argc, char **argv)
err_close:
fprintf(stderr, "YNL: %s\n", ys->err.msg);
+err_destroy:
ynl_sock_destroy(ys);
return 2;
}
diff --git a/tools/net/ynl/samples/ovs.c b/tools/net/ynl/samples/ovs.c
new file mode 100644
index 000000000000..3e975c003d77
--- /dev/null
+++ b/tools/net/ynl/samples/ovs.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdio.h>
+#include <string.h>
+
+#include <ynl.h>
+
+#include "ovs_datapath-user.h"
+
+int main(int argc, char **argv)
+{
+ struct ynl_sock *ys;
+ int err;
+
+ ys = ynl_sock_create(&ynl_ovs_datapath_family, NULL);
+ if (!ys)
+ return 1;
+
+ if (argc > 1) {
+ struct ovs_datapath_new_req *req;
+
+ req = ovs_datapath_new_req_alloc();
+ if (!req)
+ goto err_close;
+
+ ovs_datapath_new_req_set_upcall_pid(req, 1);
+ ovs_datapath_new_req_set_name(req, argv[1]);
+
+ err = ovs_datapath_new(ys, req);
+ ovs_datapath_new_req_free(req);
+ if (err)
+ goto err_close;
+ } else {
+ struct ovs_datapath_get_req_dump *req;
+ struct ovs_datapath_get_list *dps;
+
+ printf("Dump:\n");
+ req = ovs_datapath_get_req_dump_alloc();
+
+ dps = ovs_datapath_get_dump(ys, req);
+ ovs_datapath_get_req_dump_free(req);
+ if (!dps)
+ goto err_close;
+
+ ynl_dump_foreach(dps, dp) {
+ printf(" %s(%d): pid:%u cache:%u\n",
+ dp->name, dp->_hdr.dp_ifindex,
+ dp->upcall_pid, dp->masks_cache_size);
+ }
+ ovs_datapath_get_list_free(dps);
+ }
+
+ ynl_sock_destroy(ys);
+
+ return 0;
+
+err_close:
+ fprintf(stderr, "YNL (%d): %s\n", ys->err.code, ys->err.msg);
+ ynl_sock_destroy(ys);
+ return 2;
+}
diff --git a/tools/net/ynl/samples/page-pool.c b/tools/net/ynl/samples/page-pool.c
index 098b5190d0e5..e5d521320fbf 100644
--- a/tools/net/ynl/samples/page-pool.c
+++ b/tools/net/ynl/samples/page-pool.c
@@ -95,6 +95,8 @@ int main(int argc, char **argv)
if (pp->_present.alloc_fast)
s->alloc_fast += pp->alloc_fast;
+ if (pp->_present.alloc_refill)
+ s->alloc_fast += pp->alloc_refill;
if (pp->_present.alloc_slow)
s->alloc_slow += pp->alloc_slow;
if (pp->_present.recycle_ring)
@@ -116,7 +118,7 @@ int main(int argc, char **argv)
name = if_indextoname(s->ifc, ifname);
if (name)
printf("%8s", name);
- printf("[%d]\t", s->ifc);
+ printf("[%u]\t", s->ifc);
}
printf("page pools: %u (zombies: %u)\n",
diff --git a/tools/net/ynl/samples/rt-addr.c b/tools/net/ynl/samples/rt-addr.c
new file mode 100644
index 000000000000..2edde5c36b18
--- /dev/null
+++ b/tools/net/ynl/samples/rt-addr.c
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdio.h>
+#include <string.h>
+
+#include <ynl.h>
+
+#include <arpa/inet.h>
+#include <net/if.h>
+
+#include "rt-addr-user.h"
+
+static void rt_addr_print(struct rt_addr_getaddr_rsp *a)
+{
+ char ifname[IF_NAMESIZE];
+ char addr_str[64];
+ const char *addr;
+ const char *name;
+
+ name = if_indextoname(a->_hdr.ifa_index, ifname);
+ if (name)
+ printf("%16s: ", name);
+
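+	/* The kernel gives us the raw address bytes; the attribute length
+	 * tells us the family (4 bytes for IPv4, 16 for IPv6).
+	 */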
+ switch (a->_len.address) {
+ case 4:
+ addr = inet_ntop(AF_INET, a->address,
+ addr_str, sizeof(addr_str));
+ break;
+ case 16:
+ addr = inet_ntop(AF_INET6, a->address,
+ addr_str, sizeof(addr_str));
+ break;
+ default:
+ addr = NULL;
+ break;
+ }
+ if (addr)
+ printf("%s", addr);
+ else
+ printf("[%d]", a->_len.address);
+
+ printf("\n");
+}
+
+int main(int argc, char **argv)
+{
+ struct rt_addr_getaddr_list *rsp;
+ struct rt_addr_getaddr_req *req;
+ struct ynl_error yerr;
+ struct ynl_sock *ys;
+
+ ys = ynl_sock_create(&ynl_rt_addr_family, &yerr);
+ if (!ys) {
+ fprintf(stderr, "YNL: %s\n", yerr.msg);
+ return 1;
+ }
+
+ req = rt_addr_getaddr_req_alloc();
+ if (!req)
+ goto err_destroy;
+
+ rsp = rt_addr_getaddr_dump(ys, req);
+ rt_addr_getaddr_req_free(req);
+ if (!rsp)
+ goto err_close;
+
+ if (ynl_dump_empty(rsp))
+ fprintf(stderr, "Error: no addresses reported\n");
+ ynl_dump_foreach(rsp, addr)
+ rt_addr_print(addr);
+ rt_addr_getaddr_list_free(rsp);
+
+ ynl_sock_destroy(ys);
+ return 0;
+
+err_close:
+ fprintf(stderr, "YNL: %s\n", ys->err.msg);
+err_destroy:
+ ynl_sock_destroy(ys);
+ return 2;
+}
diff --git a/tools/net/ynl/samples/rt-link.c b/tools/net/ynl/samples/rt-link.c
new file mode 100644
index 000000000000..acdd4b4a0f74
--- /dev/null
+++ b/tools/net/ynl/samples/rt-link.c
@@ -0,0 +1,184 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdio.h>
+#include <string.h>
+
+#include <ynl.h>
+
+#include <arpa/inet.h>
+#include <net/if.h>
+
+#include "rt-link-user.h"
+
+static void rt_link_print(struct rt_link_getlink_rsp *r)
+{
+ unsigned int i;
+
+ printf("%3d: ", r->_hdr.ifi_index);
+
+ if (r->_len.ifname)
+ printf("%16s: ", r->ifname);
+
+ if (r->_present.mtu)
+ printf("mtu %5d ", r->mtu);
+
+ if (r->linkinfo._len.kind)
+ printf("kind %-8s ", r->linkinfo.kind);
+ else
+ printf(" %8s ", "");
+
+ if (r->prop_list._count.alt_ifname) {
+ printf("altname ");
+ for (i = 0; i < r->prop_list._count.alt_ifname; i++)
+ printf("%s ", r->prop_list.alt_ifname[i]->str);
+ printf(" ");
+ }
+
+ if (r->linkinfo._present.data && r->linkinfo.data._present.netkit) {
+ struct rt_link_linkinfo_netkit_attrs *netkit;
+ const char *name;
+
+ netkit = &r->linkinfo.data.netkit;
+ printf("primary %d ", netkit->primary);
+
+ name = NULL;
+ if (netkit->_present.policy)
+ name = rt_link_netkit_policy_str(netkit->policy);
+ if (name)
+ printf("policy %s ", name);
+ }
+
+ printf("\n");
+}
+
+static int rt_link_create_netkit(struct ynl_sock *ys)
+{
+ struct rt_link_getlink_ntf *ntf_gl;
+ struct rt_link_newlink_req *req;
+ struct ynl_ntf_base_type *ntf;
+ int ret;
+
+ req = rt_link_newlink_req_alloc();
+ if (!req) {
+ fprintf(stderr, "Can't alloc req\n");
+ return -1;
+ }
+
+ /* rtnetlink doesn't provide info about the created object.
+	 * It expects us to set the ECHO flag and then dig the info out
+ * of the notifications...
+ */
+ rt_link_newlink_req_set_nlflags(req, NLM_F_CREATE | NLM_F_ECHO);
+
+ rt_link_newlink_req_set_linkinfo_kind(req, "netkit");
+
+ /* Test error messages */
+ rt_link_newlink_req_set_linkinfo_data_netkit_policy(req, 10);
+ ret = rt_link_newlink(ys, req);
+ if (ret) {
+ printf("Testing error message for policy being bad:\n\t%s\n", ys->err.msg);
+ } else {
+ fprintf(stderr, "Warning: unexpected success creating netkit with bad attrs\n");
+ goto created;
+ }
+
+ rt_link_newlink_req_set_linkinfo_data_netkit_policy(req, NETKIT_DROP);
+
+ ret = rt_link_newlink(ys, req);
+created:
+ rt_link_newlink_req_free(req);
+ if (ret) {
+ fprintf(stderr, "YNL: %s\n", ys->err.msg);
+ return -1;
+ }
+
+ if (!ynl_has_ntf(ys)) {
+ fprintf(stderr,
+ "Warning: interface created but received no notification, won't delete the interface\n");
+ return 0;
+ }
+
+ ntf = ynl_ntf_dequeue(ys);
+ if (ntf->cmd != RTM_NEWLINK) {
+ fprintf(stderr,
+ "Warning: unexpected notification type, won't delete the interface\n");
+ return 0;
+ }
+ ntf_gl = (void *)ntf;
+ ret = ntf_gl->obj._hdr.ifi_index;
+ ynl_ntf_free(ntf);
+
+ return ret;
+}
+
+static void rt_link_del(struct ynl_sock *ys, int ifindex)
+{
+ struct rt_link_dellink_req *req;
+
+ req = rt_link_dellink_req_alloc();
+ if (!req) {
+ fprintf(stderr, "Can't alloc req\n");
+ return;
+ }
+
+ req->_hdr.ifi_index = ifindex;
+ if (rt_link_dellink(ys, req))
+ fprintf(stderr, "YNL: %s\n", ys->err.msg);
+ else
+ fprintf(stderr,
+ "Trying to delete a Netkit interface (ifindex %d)\n",
+ ifindex);
+
+ rt_link_dellink_req_free(req);
+}
+
+int main(int argc, char **argv)
+{
+ struct rt_link_getlink_req_dump *req;
+ struct rt_link_getlink_list *rsp;
+ struct ynl_error yerr;
+ struct ynl_sock *ys;
+ int created = 0;
+
+ ys = ynl_sock_create(&ynl_rt_link_family, &yerr);
+ if (!ys) {
+ fprintf(stderr, "YNL: %s\n", yerr.msg);
+ return 1;
+ }
+
+ if (argc > 1) {
+ fprintf(stderr, "Trying to create a Netkit interface\n");
+ created = rt_link_create_netkit(ys);
+ if (created < 0)
+ goto err_destroy;
+ }
+
+ req = rt_link_getlink_req_dump_alloc();
+ if (!req)
+ goto err_del_ifc;
+
+ rsp = rt_link_getlink_dump(ys, req);
+ rt_link_getlink_req_dump_free(req);
+ if (!rsp)
+ goto err_close;
+
+ if (ynl_dump_empty(rsp))
+ fprintf(stderr, "Error: no links reported\n");
+ ynl_dump_foreach(rsp, link)
+ rt_link_print(link);
+ rt_link_getlink_list_free(rsp);
+
+ if (created)
+ rt_link_del(ys, created);
+
+ ynl_sock_destroy(ys);
+ return 0;
+
+err_close:
+ fprintf(stderr, "YNL: %s\n", ys->err.msg);
+err_del_ifc:
+ if (created)
+ rt_link_del(ys, created);
+err_destroy:
+ ynl_sock_destroy(ys);
+ return 2;
+}
diff --git a/tools/net/ynl/samples/rt-route.c b/tools/net/ynl/samples/rt-route.c
new file mode 100644
index 000000000000..7427104a96df
--- /dev/null
+++ b/tools/net/ynl/samples/rt-route.c
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdio.h>
+#include <string.h>
+
+#include <ynl.h>
+
+#include <arpa/inet.h>
+#include <net/if.h>
+
+#include "rt-route-user.h"
+
+static void rt_route_print(struct rt_route_getroute_rsp *r)
+{
+ char ifname[IF_NAMESIZE];
+ char route_str[64];
+ const char *route;
+ const char *name;
+
+ /* Ignore local */
+ if (r->_hdr.rtm_table == RT_TABLE_LOCAL)
+ return;
+
+ if (r->_present.oif) {
+ name = if_indextoname(r->oif, ifname);
+ if (name)
+ printf("oif: %-16s ", name);
+ }
+
+ if (r->_len.dst) {
+ route = inet_ntop(r->_hdr.rtm_family, r->dst,
+ route_str, sizeof(route_str));
+		printf("dst: %s/%d ", route, r->_hdr.rtm_dst_len);
+ }
+
+ if (r->_len.gateway) {
+ route = inet_ntop(r->_hdr.rtm_family, r->gateway,
+ route_str, sizeof(route_str));
+ printf("gateway: %s ", route);
+ }
+
+ printf("\n");
+}
+
+int main(int argc, char **argv)
+{
+ struct rt_route_getroute_req_dump *req;
+ struct rt_route_getroute_list *rsp;
+ struct ynl_error yerr;
+ struct ynl_sock *ys;
+
+ ys = ynl_sock_create(&ynl_rt_route_family, &yerr);
+ if (!ys) {
+ fprintf(stderr, "YNL: %s\n", yerr.msg);
+ return 1;
+ }
+
+ req = rt_route_getroute_req_dump_alloc();
+ if (!req)
+ goto err_destroy;
+
+ rsp = rt_route_getroute_dump(ys, req);
+ rt_route_getroute_req_dump_free(req);
+ if (!rsp)
+ goto err_close;
+
+ if (ynl_dump_empty(rsp))
+		fprintf(stderr, "Error: no routes reported\n");
+ ynl_dump_foreach(rsp, route)
+ rt_route_print(route);
+ rt_route_getroute_list_free(rsp);
+
+ ynl_sock_destroy(ys);
+ return 0;
+
+err_close:
+ fprintf(stderr, "YNL: %s\n", ys->err.msg);
+err_destroy:
+ ynl_sock_destroy(ys);
+ return 2;
+}
diff --git a/tools/net/ynl/samples/tc.c b/tools/net/ynl/samples/tc.c
new file mode 100644
index 000000000000..0bfff0fdd792
--- /dev/null
+++ b/tools/net/ynl/samples/tc.c
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdio.h>
+#include <string.h>
+
+#include <ynl.h>
+
+#include <net/if.h>
+
+#include "tc-user.h"
+
+static void tc_qdisc_print(struct tc_getqdisc_rsp *q)
+{
+ char ifname[IF_NAMESIZE];
+ const char *name;
+
+ name = if_indextoname(q->_hdr.tcm_ifindex, ifname);
+ if (name)
+ printf("%16s: ", name);
+
+ if (q->_len.kind) {
+ printf("%s ", q->kind);
+
+ if (q->options._present.fq_codel) {
+ struct tc_fq_codel_attrs *fq_codel;
+ struct tc_fq_codel_xstats *stats;
+
+ fq_codel = &q->options.fq_codel;
+ stats = q->stats2.app.fq_codel;
+
+ if (fq_codel->_present.limit)
+ printf("limit: %dp ", fq_codel->limit);
+ if (fq_codel->_present.target)
+ printf("target: %dms ",
+ (fq_codel->target + 500) / 1000);
+ if (q->stats2.app._len.fq_codel)
+ printf("new_flow_cnt: %d ",
+ stats->qdisc_stats.new_flow_count);
+ }
+ }
+
+ printf("\n");
+}
+
+int main(int argc, char **argv)
+{
+ struct tc_getqdisc_req_dump *req;
+ struct tc_getqdisc_list *rsp;
+ struct ynl_error yerr;
+ struct ynl_sock *ys;
+
+ ys = ynl_sock_create(&ynl_tc_family, &yerr);
+ if (!ys) {
+ fprintf(stderr, "YNL: %s\n", yerr.msg);
+ return 1;
+ }
+
+ req = tc_getqdisc_req_dump_alloc();
+ if (!req)
+ goto err_destroy;
+
+ rsp = tc_getqdisc_dump(ys, req);
+ tc_getqdisc_req_dump_free(req);
+ if (!rsp)
+ goto err_close;
+
+ if (ynl_dump_empty(rsp))
+		fprintf(stderr, "Error: no qdiscs reported\n");
+ ynl_dump_foreach(rsp, qdisc)
+ tc_qdisc_print(qdisc);
+ tc_getqdisc_list_free(rsp);
+
+ ynl_sock_destroy(ys);
+ return 0;
+
+err_close:
+ fprintf(stderr, "YNL: %s\n", ys->err.msg);
+err_destroy:
+ ynl_sock_destroy(ys);
+ return 2;
+}
diff --git a/tools/net/ynl/ynl-gen-rst.py b/tools/net/ynl/ynl-gen-rst.py
deleted file mode 100755
index 262d88f88696..000000000000
--- a/tools/net/ynl/ynl-gen-rst.py
+++ /dev/null
@@ -1,417 +0,0 @@
-#!/usr/bin/env python3
-# SPDX-License-Identifier: GPL-2.0
-# -*- coding: utf-8; mode: python -*-
-
-"""
- Script to auto generate the documentation for Netlink specifications.
-
- :copyright: Copyright (C) 2023 Breno Leitao <leitao@debian.org>
- :license: GPL Version 2, June 1991 see linux/COPYING for details.
-
- This script performs extensive parsing to the Linux kernel's netlink YAML
- spec files, in an effort to avoid needing to heavily mark up the original
- YAML file.
-
- This code is split in three big parts:
- 1) RST formatters: Use to convert a string to a RST output
- 2) Parser helpers: Functions to parse the YAML data structure
- 3) Main function and small helpers
-"""
-
-from typing import Any, Dict, List
-import os.path
-import sys
-import argparse
-import logging
-import yaml
-
-
-SPACE_PER_LEVEL = 4
-
-
-# RST Formatters
-# ==============
-def headroom(level: int) -> str:
- """Return space to format"""
- return " " * (level * SPACE_PER_LEVEL)
-
-
-def bold(text: str) -> str:
- """Format bold text"""
- return f"**{text}**"
-
-
-def inline(text: str) -> str:
- """Format inline text"""
- return f"``{text}``"
-
-
-def sanitize(text: str) -> str:
- """Remove newlines and multiple spaces"""
- # This is useful for some fields that are spread across multiple lines
- return str(text).replace("\n", "").strip()
-
-
-def rst_fields(key: str, value: str, level: int = 0) -> str:
- """Return a RST formatted field"""
- return headroom(level) + f":{key}: {value}"
-
-
-def rst_definition(key: str, value: Any, level: int = 0) -> str:
- """Format a single rst definition"""
- return headroom(level) + key + "\n" + headroom(level + 1) + str(value)
-
-
-def rst_paragraph(paragraph: str, level: int = 0) -> str:
- """Return a formatted paragraph"""
- return headroom(level) + paragraph
-
-
-def rst_bullet(item: str, level: int = 0) -> str:
- """Return a formatted a bullet"""
- return headroom(level) + f"- {item}"
-
-
-def rst_subsection(title: str) -> str:
- """Add a sub-section to the document"""
- return f"{title}\n" + "-" * len(title)
-
-
-def rst_subsubsection(title: str) -> str:
- """Add a sub-sub-section to the document"""
- return f"{title}\n" + "~" * len(title)
-
-
-def rst_section(title: str) -> str:
- """Add a section to the document"""
- return f"\n{title}\n" + "=" * len(title)
-
-
-def rst_subtitle(title: str) -> str:
- """Add a subtitle to the document"""
- return "\n" + "-" * len(title) + f"\n{title}\n" + "-" * len(title) + "\n\n"
-
-
-def rst_title(title: str) -> str:
- """Add a title to the document"""
- return "=" * len(title) + f"\n{title}\n" + "=" * len(title) + "\n\n"
-
-
-def rst_list_inline(list_: List[str], level: int = 0) -> str:
- """Format a list using inlines"""
- return headroom(level) + "[" + ", ".join(inline(i) for i in list_) + "]"
-
-
-def rst_header() -> str:
- """The headers for all the auto generated RST files"""
- lines = []
-
- lines.append(rst_paragraph(".. SPDX-License-Identifier: GPL-2.0"))
- lines.append(rst_paragraph(".. NOTE: This document was auto-generated.\n\n"))
-
- return "\n".join(lines)
-
-
-def rst_toctree(maxdepth: int = 2) -> str:
- """Generate a toctree RST primitive"""
- lines = []
-
- lines.append(".. toctree::")
- lines.append(f" :maxdepth: {maxdepth}\n\n")
-
- return "\n".join(lines)
-
-
-def rst_label(title: str) -> str:
- """Return a formatted label"""
- return f".. _{title}:\n\n"
-
-
-# Parsers
-# =======
-
-
-def parse_mcast_group(mcast_group: List[Dict[str, Any]]) -> str:
- """Parse 'multicast' group list and return a formatted string"""
- lines = []
- for group in mcast_group:
- lines.append(rst_bullet(group["name"]))
-
- return "\n".join(lines)
-
-
-def parse_do(do_dict: Dict[str, Any], level: int = 0) -> str:
- """Parse 'do' section and return a formatted string"""
- lines = []
- for key in do_dict.keys():
- lines.append(rst_paragraph(bold(key), level + 1))
- lines.append(parse_do_attributes(do_dict[key], level + 1) + "\n")
-
- return "\n".join(lines)
-
-
-def parse_do_attributes(attrs: Dict[str, Any], level: int = 0) -> str:
- """Parse 'attributes' section"""
- if "attributes" not in attrs:
- return ""
- lines = [rst_fields("attributes", rst_list_inline(attrs["attributes"]), level + 1)]
-
- return "\n".join(lines)
-
-
-def parse_operations(operations: List[Dict[str, Any]]) -> str:
- """Parse operations block"""
- preprocessed = ["name", "doc", "title", "do", "dump"]
- lines = []
-
- for operation in operations:
- lines.append(rst_section(operation["name"]))
- lines.append(rst_paragraph(sanitize(operation["doc"])) + "\n")
-
- for key in operation.keys():
- if key in preprocessed:
- # Skip the special fields
- continue
- lines.append(rst_fields(key, operation[key], 0))
-
- if "do" in operation:
- lines.append(rst_paragraph(":do:", 0))
- lines.append(parse_do(operation["do"], 0))
- if "dump" in operation:
- lines.append(rst_paragraph(":dump:", 0))
- lines.append(parse_do(operation["dump"], 0))
-
- # New line after fields
- lines.append("\n")
-
- return "\n".join(lines)
-
-
-def parse_entries(entries: List[Dict[str, Any]], level: int) -> str:
- """Parse a list of entries"""
- lines = []
- for entry in entries:
- if isinstance(entry, dict):
- # entries could be a list or a dictionary
- lines.append(
- rst_fields(entry.get("name", ""), sanitize(entry.get("doc", "")), level)
- )
- elif isinstance(entry, list):
- lines.append(rst_list_inline(entry, level))
- else:
- lines.append(rst_bullet(inline(sanitize(entry)), level))
-
- lines.append("\n")
- return "\n".join(lines)
-
-
-def parse_definitions(defs: Dict[str, Any]) -> str:
- """Parse definitions section"""
- preprocessed = ["name", "entries", "members"]
- ignored = ["render-max"] # This is not printed
- lines = []
-
- for definition in defs:
- lines.append(rst_section(definition["name"]))
- for k in definition.keys():
- if k in preprocessed + ignored:
- continue
- lines.append(rst_fields(k, sanitize(definition[k]), 0))
-
- # Field list needs to finish with a new line
- lines.append("\n")
- if "entries" in definition:
- lines.append(rst_paragraph(":entries:", 0))
- lines.append(parse_entries(definition["entries"], 1))
- if "members" in definition:
- lines.append(rst_paragraph(":members:", 0))
- lines.append(parse_entries(definition["members"], 1))
-
- return "\n".join(lines)
-
-
-def parse_attr_sets(entries: List[Dict[str, Any]]) -> str:
- """Parse attribute from attribute-set"""
- preprocessed = ["name", "type"]
- ignored = ["checks"]
- lines = []
-
- for entry in entries:
- lines.append(rst_section(entry["name"]))
- for attr in entry["attributes"]:
- type_ = attr.get("type")
- attr_line = attr["name"]
- if type_:
- # Add the attribute type in the same line
- attr_line += f" ({inline(type_)})"
-
- lines.append(rst_subsubsection(attr_line))
-
- for k in attr.keys():
- if k in preprocessed + ignored:
- continue
- lines.append(rst_fields(k, sanitize(attr[k]), 0))
- lines.append("\n")
-
- return "\n".join(lines)
-
-
-def parse_sub_messages(entries: List[Dict[str, Any]]) -> str:
- """Parse sub-message definitions"""
- lines = []
-
- for entry in entries:
- lines.append(rst_section(entry["name"]))
- for fmt in entry["formats"]:
- value = fmt["value"]
-
- lines.append(rst_bullet(bold(value)))
- for attr in ['fixed-header', 'attribute-set']:
- if attr in fmt:
- lines.append(rst_fields(attr, fmt[attr], 1))
- lines.append("\n")
-
- return "\n".join(lines)
-
-
-def parse_yaml(obj: Dict[str, Any]) -> str:
- """Format the whole YAML into a RST string"""
- lines = []
-
- # Main header
-
- lines.append(rst_header())
-
- title = f"Family ``{obj['name']}`` netlink specification"
- lines.append(rst_title(title))
- lines.append(rst_paragraph(".. contents::\n"))
-
- if "doc" in obj:
- lines.append(rst_subtitle("Summary"))
- lines.append(rst_paragraph(obj["doc"], 0))
-
- # Operations
- if "operations" in obj:
- lines.append(rst_subtitle("Operations"))
- lines.append(parse_operations(obj["operations"]["list"]))
-
- # Multicast groups
- if "mcast-groups" in obj:
- lines.append(rst_subtitle("Multicast groups"))
- lines.append(parse_mcast_group(obj["mcast-groups"]["list"]))
-
- # Definitions
- if "definitions" in obj:
- lines.append(rst_subtitle("Definitions"))
- lines.append(parse_definitions(obj["definitions"]))
-
- # Attributes set
- if "attribute-sets" in obj:
- lines.append(rst_subtitle("Attribute sets"))
- lines.append(parse_attr_sets(obj["attribute-sets"]))
-
- # Sub-messages
- if "sub-messages" in obj:
- lines.append(rst_subtitle("Sub-messages"))
- lines.append(parse_sub_messages(obj["sub-messages"]))
-
- return "\n".join(lines)
-
-
-# Main functions
-# ==============
-
-
-def parse_arguments() -> argparse.Namespace:
- """Parse arguments from user"""
- parser = argparse.ArgumentParser(description="Netlink RST generator")
-
- parser.add_argument("-v", "--verbose", action="store_true")
- parser.add_argument("-o", "--output", help="Output file name")
-
- # Index and input are mutually exclusive
- group = parser.add_mutually_exclusive_group()
- group.add_argument(
- "-x", "--index", action="store_true", help="Generate the index page"
- )
- group.add_argument("-i", "--input", help="YAML file name")
-
- args = parser.parse_args()
-
- if args.verbose:
- logging.basicConfig(level=logging.DEBUG)
-
- if args.input and not os.path.isfile(args.input):
- logging.warning("%s is not a valid file.", args.input)
- sys.exit(-1)
-
- if not args.output:
- logging.error("No output file specified.")
- sys.exit(-1)
-
- if os.path.isfile(args.output):
- logging.debug("%s already exists. Overwriting it.", args.output)
-
- return args
-
-
-def parse_yaml_file(filename: str) -> str:
- """Transform the YAML specified by filename into a rst-formmated string"""
- with open(filename, "r", encoding="utf-8") as spec_file:
- yaml_data = yaml.safe_load(spec_file)
- content = parse_yaml(yaml_data)
-
- return content
-
-
-def write_to_rstfile(content: str, filename: str) -> None:
- """Write the generated content into an RST file"""
- logging.debug("Saving RST file to %s", filename)
-
- with open(filename, "w", encoding="utf-8") as rst_file:
- rst_file.write(content)
-
-
-def generate_main_index_rst(output: str) -> None:
- """Generate the `networking_spec/index` content and write to the file"""
- lines = []
-
- lines.append(rst_header())
- lines.append(rst_label("specs"))
- lines.append(rst_title("Netlink Family Specifications"))
- lines.append(rst_toctree(1))
-
- index_dir = os.path.dirname(output)
- logging.debug("Looking for .rst files in %s", index_dir)
- for filename in sorted(os.listdir(index_dir)):
- if not filename.endswith(".rst") or filename == "index.rst":
- continue
- lines.append(f" {filename.replace('.rst', '')}\n")
-
- logging.debug("Writing an index file at %s", output)
- write_to_rstfile("".join(lines), output)
-
-
-def main() -> None:
- """Main function that reads the YAML files and generates the RST files"""
-
- args = parse_arguments()
-
- if args.input:
- logging.debug("Parsing %s", args.input)
- try:
- content = parse_yaml_file(os.path.join(args.input))
- except Exception as exception:
- logging.warning("Failed to parse %s.", args.input)
- logging.warning(exception)
- sys.exit(-1)
-
- write_to_rstfile(content, args.output)
-
- if args.index:
- # Generate the index RST file
- generate_main_index_rst(args.output)
-
-
-if __name__ == "__main__":
- main()
diff --git a/tools/net/ynl/ynl-regen.sh b/tools/net/ynl/ynl-regen.sh
index a37304dcc88e..81b4ecd89100 100755
--- a/tools/net/ynl/ynl-regen.sh
+++ b/tools/net/ynl/ynl-regen.sh
@@ -1,7 +1,7 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
-TOOL=$(dirname $(realpath $0))/ynl-gen-c.py
+TOOL=$(dirname $(realpath $0))/pyynl/ynl_gen_c.py
force=
search=
diff --git a/tools/objtool/Documentation/objtool.txt b/tools/objtool/Documentation/objtool.txt
index fe39c2a8ef0d..9e97fc25b2d8 100644
--- a/tools/objtool/Documentation/objtool.txt
+++ b/tools/objtool/Documentation/objtool.txt
@@ -28,6 +28,15 @@ Objtool has the following features:
sites, enabling the kernel to patch them inline, to prevent "thunk
funneling" for both security and performance reasons
+- Return thunk validation -- validates return thunks are used for
+ certain CPU mitigations including Retbleed and SRSO
+
+- Return thunk annotation -- annotates all return thunk sites so kernel
+ can patch them inline, depending on enabled mitigations
+
+- Return thunk untraining validation -- validates that all entry paths
+ untrain a "safe return" before the first return (or call)
+
- Non-instrumentation validation -- validates non-instrumentable
("noinstr") code rules, preventing instrumentation in low-level C
entry code
@@ -53,6 +62,9 @@ Objtool has the following features:
- Function entry annotation -- annotates function entries, enabling
kernel function tracing
+- Function preamble (prefix) annotation and/or symbol generation -- used
+ for FineIBT and call depth tracking
+
- Other toolchain hacks which will go unmentioned at this time...
Each feature can be enabled individually or in combination using the
@@ -197,19 +209,17 @@ To achieve the validation, objtool enforces the following rules:
1. Each callable function must be annotated as such with the ELF
function type. In asm code, this is typically done using the
- ENTRY/ENDPROC macros. If objtool finds a return instruction
+ SYM_FUNC_{START,END} macros. If objtool finds a return instruction
outside of a function, it flags an error since that usually indicates
callable code which should be annotated accordingly.
This rule is needed so that objtool can properly identify each
callable function in order to analyze its stack metadata.
-2. Conversely, each section of code which is *not* callable should *not*
- be annotated as an ELF function. The ENDPROC macro shouldn't be used
- in this case.
-
- This rule is needed so that objtool can ignore non-callable code.
- Such code doesn't have to follow any of the other rules.
+2. Conversely, each section of code which is *not* callable, or is
+ otherwise doing funny things with the stack or registers, should
+ *not* be annotated as an ELF function. Rather, SYM_CODE_{START,END}
+ should be used along with unwind hints.
3. Each callable function which calls another function must have the
correct frame pointer logic, if required by CONFIG_FRAME_POINTER or
@@ -221,7 +231,7 @@ To achieve the validation, objtool enforces the following rules:
function B, the _caller_ of function A will be skipped on the stack
trace.
-4. Dynamic jumps and jumps to undefined symbols are only allowed if:
+4. Indirect jumps and jumps to undefined symbols are only allowed if:
a) the jump is part of a switch statement; or
@@ -271,8 +281,8 @@ the objtool maintainers.
If the error is for an asm file, and func() is indeed a callable
function, add proper frame pointer logic using the FRAME_BEGIN and
FRAME_END macros. Otherwise, if it's not a callable function, remove
- its ELF function annotation by changing ENDPROC to END, and instead
- use the manual unwind hint macros in asm/unwind_hints.h.
+ its ELF function annotation by using SYM_CODE_{START,END} and use the
+ manual unwind hint macros in asm/unwind_hints.h.
If it's a GCC-compiled .c file, the error may be because the function
uses an inline asm() statement which has a "call" instruction. An
@@ -284,6 +294,26 @@ the objtool maintainers.
Otherwise the stack frame may not get created before the call.
+ objtool can help with pinpointing the exact function where it happens:
+
+ $ OBJTOOL_ARGS="--verbose" make arch/x86/kvm/
+
+ arch/x86/kvm/kvm.o: warning: objtool: .altinstr_replacement+0xc5: call without frame pointer save/setup
+ arch/x86/kvm/kvm.o: warning: objtool: em_loop.part.0+0x29: (alt)
+ arch/x86/kvm/kvm.o: warning: objtool: em_loop.part.0+0x0: <=== (sym)
+ LD [M] arch/x86/kvm/kvm-intel.o
+ 0000 0000000000028220 <em_loop.part.0>:
+ 0000 28220: 0f b6 47 61 movzbl 0x61(%rdi),%eax
+ 0004 28224: 3c e2 cmp $0xe2,%al
+ 0006 28226: 74 2c je 28254 <em_loop.part.0+0x34>
+ 0008 28228: 48 8b 57 10 mov 0x10(%rdi),%rdx
+ 000c 2822c: 83 f0 05 xor $0x5,%eax
+ 000f 2822f: 48 c1 e0 04 shl $0x4,%rax
+ 0013 28233: 25 f0 00 00 00 and $0xf0,%eax
+ 0018 28238: 81 e2 d5 08 00 00 and $0x8d5,%edx
+ 001e 2823e: 80 ce 02 or $0x2,%dh
+ ...
+
2. file.o: warning: objtool: .text+0x53: unreachable instruction
@@ -291,23 +321,22 @@ the objtool maintainers.
If the error is for an asm file, and the instruction is inside (or
reachable from) a callable function, the function should be annotated
- with the ENTRY/ENDPROC macros (ENDPROC is the important one).
- Otherwise, the code should probably be annotated with the unwind hint
- macros in asm/unwind_hints.h so objtool and the unwinder can know the
- stack state associated with the code.
+ with the SYM_FUNC_START and SYM_FUNC_END macros.
+
+ Otherwise, SYM_CODE_START can be used. In that case the code needs
+ to be annotated with unwind hint macros.
+
+ If you're sure the code won't affect the reliability of runtime stack
+ traces and want objtool to ignore it, see "Adding exceptions" below.
- If you're 100% sure the code won't affect stack traces, or if you're
- a just a bad person, you can tell objtool to ignore it. See the
- "Adding exceptions" section below.
- If it's not actually in a callable function (e.g. kernel entry code),
- change ENDPROC to END.
+3. file.o: warning: objtool: foo+0x48c: bar() missing __noreturn in .c/.h or NORETURN() in noreturns.h
-3. file.o: warning: objtool: foo+0x48c: bar() is missing a __noreturn annotation
+ The call from foo() to bar() doesn't return, but bar() is incorrectly
+ annotated. A noreturn function must be marked __noreturn in both its
+ declaration and its definition, and must have a NORETURN() annotation
+ in tools/objtool/noreturns.h.
- The call from foo() to bar() doesn't return, but bar() is missing the
- __noreturn annotation. NOTE: In addition to annotating the function
- with __noreturn, please also add it to tools/objtool/noreturns.h.
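+
+   A minimal sketch of the usual fix, reusing the bar() from the example
+   warning (exact file locations depend on where bar() lives):
+
+      /* in bar()'s declaration and definition */
+      void __noreturn bar(void);
+
+      /* in tools/objtool/noreturns.h */
+      NORETURN(bar)
+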
4. file.o: warning: objtool: func(): can't find starting instruction
or
@@ -322,23 +351,21 @@ the objtool maintainers.
This is a kernel entry/exit instruction like sysenter or iret. Such
instructions aren't allowed in a callable function, and are most
- likely part of the kernel entry code. They should usually not have
- the callable function annotation (ENDPROC) and should always be
- annotated with the unwind hint macros in asm/unwind_hints.h.
+ likely part of the kernel entry code. Such code should probably be
+ placed in a SYM_CODE_{START,END} block with unwind hints.
6. file.o: warning: objtool: func()+0x26: sibling call from callable instruction with modified stack frame
- This is a dynamic jump or a jump to an undefined symbol. Objtool
- assumed it's a sibling call and detected that the frame pointer
- wasn't first restored to its original state.
+ This is a branch to an UNDEF symbol. Objtool assumed it's a
+ sibling call and detected that the stack wasn't first restored to its
+ original state.
- If it's not really a sibling call, you may need to move the
- destination code to the local file.
+ If it's not really a sibling call, you may need to use unwind hints
+ and/or move the destination code to the local file.
If the instruction is not actually in a callable function (e.g.
- kernel entry code), change ENDPROC to END and annotate manually with
- the unwind hint macros in asm/unwind_hints.h.
+ kernel entry code), use SYM_CODE_{START,END} and unwind hints.
7. file: warning: objtool: func()+0x5c: stack state mismatch
@@ -354,8 +381,8 @@ the objtool maintainers.
Another possibility is that the code has some asm or inline asm which
does some unusual things to the stack or the frame pointer. In such
- cases it's probably appropriate to use the unwind hint macros in
- asm/unwind_hints.h.
+ cases it's probably appropriate to use SYM_CODE_{START,END} with unwind
+ hints.
8. file.o: warning: objtool: funcA() falls through to next function funcB()
@@ -365,17 +392,16 @@ the objtool maintainers.
can fall through into the next function. There could be different
reasons for this:
- 1) funcA()'s last instruction is a call to a "noreturn" function like
+ a) funcA()'s last instruction is a call to a "noreturn" function like
panic(). In this case the noreturn function needs to be added to
objtool's hard-coded global_noreturns array. Feel free to bug the
objtool maintainer, or you can submit a patch.
- 2) funcA() uses the unreachable() annotation in a section of code
+ b) funcA() uses the unreachable() annotation in a section of code
that is actually reachable.
- 3) If funcA() calls an inline function, the object code for funcA()
- might be corrupt due to a gcc bug. For more details, see:
- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70646
+ c) Some undefined behavior like divide by zero.
+
9. file.o: warning: objtool: funcA() call to funcB() with UACCESS enabled
@@ -413,24 +439,26 @@ the objtool maintainers.
This limitation can be overcome by massaging the alternatives with
NOPs to shift the stack changes around so they no longer conflict.
+
11. file.o: warning: unannotated intra-function call
- This warning means that a direct call is done to a destination which
- is not at the beginning of a function. If this is a legit call, you
- can remove this warning by putting the ANNOTATE_INTRA_FUNCTION_CALL
- directive right before the call.
+ This warning means that a direct call is done to a destination which
+ is not at the beginning of a function. If this is a legit call, you
+ can remove this warning by putting the ANNOTATE_INTRA_FUNCTION_CALL
+ directive right before the call.
+
12. file.o: warning: func(): not an indirect call target
- This means that objtool is running with --ibt and a function expected
- to be an indirect call target is not. In particular, this happens for
- init_module() or cleanup_module() if a module relies on these special
- names and does not use module_init() / module_exit() macros to create
- them.
+ This means that objtool is running with --ibt and a function
+ expected to be an indirect call target is not. In particular, this
+ happens for init_module() or cleanup_module() if a module relies on
+ these special names and does not use module_init() / module_exit()
+ macros to create them.
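+
+      A sketch of the usual fix (function names are illustrative):
+
+         static int __init foo_init(void)
+         {
+                 return 0;
+         }
+         module_init(foo_init);
+
+         static void __exit foo_exit(void)
+         {
+         }
+         module_exit(foo_exit);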
If the error doesn't seem to make sense, it could be a bug in objtool.
-Feel free to ask the objtool maintainer for help.
+Feel free to ask objtool maintainers for help.
Adding exceptions
diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
index 83b100c1e7f6..8c20361dd100 100644
--- a/tools/objtool/Makefile
+++ b/tools/objtool/Makefile
@@ -24,6 +24,7 @@ LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lel
all: $(OBJTOOL)
INCLUDES := -I$(srctree)/tools/include \
+ -I$(srctree)/tools/include/uapi \
-I$(srctree)/tools/arch/$(HOSTARCH)/include/uapi \
-I$(srctree)/tools/arch/$(SRCARCH)/include \
-I$(srctree)/tools/objtool/include \
@@ -36,7 +37,7 @@ OBJTOOL_CFLAGS := -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBE
OBJTOOL_LDFLAGS := $(LIBELF_LIBS) $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
# Allow old libelf to be used:
-elfshdr := $(shell echo '$(pound)include <libelf.h>' | $(HOSTCC) $(OBJTOOL_CFLAGS) -x c -E - | grep elf_getshdr)
+elfshdr := $(shell echo '$(pound)include <libelf.h>' | $(HOSTCC) $(OBJTOOL_CFLAGS) -x c -E - 2>/dev/null | grep elf_getshdr)
OBJTOOL_CFLAGS += $(if $(elfshdr),,-DLIBELF_USE_DEPRECATED)
# Always want host compilation.
@@ -45,18 +46,16 @@ HOST_OVERRIDES := CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)"
AWK = awk
MKDIR = mkdir
-ifeq ($(V),1)
- Q =
-else
- Q = @
-endif
-
BUILD_ORC := n
ifeq ($(SRCARCH),x86)
BUILD_ORC := y
endif
+ifeq ($(SRCARCH),loongarch)
+ BUILD_ORC := y
+endif
+
export BUILD_ORC
export srctree OUTPUT CFLAGS SRCARCH AWK
include $(srctree)/tools/build/Makefile.include
diff --git a/tools/objtool/arch/loongarch/Build b/tools/objtool/arch/loongarch/Build
new file mode 100644
index 000000000000..1d4b784b6887
--- /dev/null
+++ b/tools/objtool/arch/loongarch/Build
@@ -0,0 +1,3 @@
+objtool-y += decode.o
+objtool-y += special.o
+objtool-y += orc.o
diff --git a/tools/objtool/arch/loongarch/decode.c b/tools/objtool/arch/loongarch/decode.c
new file mode 100644
index 000000000000..2e555c4060c5
--- /dev/null
+++ b/tools/objtool/arch/loongarch/decode.c
@@ -0,0 +1,416 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <string.h>
+#include <objtool/check.h>
+#include <objtool/warn.h>
+#include <asm/inst.h>
+#include <asm/orc_types.h>
+#include <linux/objtool_types.h>
+#include <arch/elf.h>
+
+int arch_ftrace_match(char *name)
+{
+ return !strcmp(name, "_mcount");
+}
+
+unsigned long arch_jump_destination(struct instruction *insn)
+{
+ return insn->offset + (insn->immediate << 2);
+}
+
+unsigned long arch_dest_reloc_offset(int addend)
+{
+ return addend;
+}
+
+bool arch_pc_relative_reloc(struct reloc *reloc)
+{
+ return false;
+}
+
+bool arch_callee_saved_reg(unsigned char reg)
+{
+ switch (reg) {
+ case CFI_RA:
+ case CFI_FP:
+ case CFI_S0 ... CFI_S8:
+ return true;
+ default:
+ return false;
+ }
+}
+
+int arch_decode_hint_reg(u8 sp_reg, int *base)
+{
+ switch (sp_reg) {
+ case ORC_REG_UNDEFINED:
+ *base = CFI_UNDEFINED;
+ break;
+ case ORC_REG_SP:
+ *base = CFI_SP;
+ break;
+ case ORC_REG_FP:
+ *base = CFI_FP;
+ break;
+ default:
+ return -1;
+ }
+
+ return 0;
+}
+
+static bool is_loongarch(const struct elf *elf)
+{
+ if (elf->ehdr.e_machine == EM_LOONGARCH)
+ return true;
+
+ ERROR("unexpected ELF machine type %d", elf->ehdr.e_machine);
+ return false;
+}
+
+#define ADD_OP(op) \
+ if (!(op = calloc(1, sizeof(*op)))) \
+ return -1; \
+ else for (*ops_list = op, ops_list = &op->next; op; op = NULL)
+
+static bool decode_insn_reg0i26_fomat(union loongarch_instruction inst,
+ struct instruction *insn)
+{
+ switch (inst.reg0i26_format.opcode) {
+ case b_op:
+ insn->type = INSN_JUMP_UNCONDITIONAL;
+ insn->immediate = sign_extend64(inst.reg0i26_format.immediate_h << 16 |
+ inst.reg0i26_format.immediate_l, 25);
+ break;
+ case bl_op:
+ insn->type = INSN_CALL;
+ insn->immediate = sign_extend64(inst.reg0i26_format.immediate_h << 16 |
+ inst.reg0i26_format.immediate_l, 25);
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+static bool decode_insn_reg1i21_fomat(union loongarch_instruction inst,
+ struct instruction *insn)
+{
+ switch (inst.reg1i21_format.opcode) {
+ case beqz_op:
+ case bnez_op:
+ case bceqz_op:
+ insn->type = INSN_JUMP_CONDITIONAL;
+ insn->immediate = sign_extend64(inst.reg1i21_format.immediate_h << 16 |
+ inst.reg1i21_format.immediate_l, 20);
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+static bool decode_insn_reg2i12_fomat(union loongarch_instruction inst,
+ struct instruction *insn,
+ struct stack_op **ops_list,
+ struct stack_op *op)
+{
+ switch (inst.reg2i12_format.opcode) {
+ case addid_op:
+ if ((inst.reg2i12_format.rd == CFI_SP) || (inst.reg2i12_format.rj == CFI_SP)) {
+ /* addi.d sp,sp,si12 or addi.d fp,sp,si12 or addi.d sp,fp,si12 */
+ insn->immediate = sign_extend64(inst.reg2i12_format.immediate, 11);
+ ADD_OP(op) {
+ op->src.type = OP_SRC_ADD;
+ op->src.reg = inst.reg2i12_format.rj;
+ op->src.offset = insn->immediate;
+ op->dest.type = OP_DEST_REG;
+ op->dest.reg = inst.reg2i12_format.rd;
+ }
+ }
+ if ((inst.reg2i12_format.rd == CFI_SP) && (inst.reg2i12_format.rj == CFI_FP)) {
+ /* addi.d sp,fp,si12 */
+ struct symbol *func = find_func_containing(insn->sec, insn->offset);
+
+ if (!func)
+ return false;
+
+ func->frame_pointer = true;
+ }
+ break;
+ case ldd_op:
+ if (inst.reg2i12_format.rj == CFI_SP) {
+ /* ld.d rd,sp,si12 */
+ insn->immediate = sign_extend64(inst.reg2i12_format.immediate, 11);
+ ADD_OP(op) {
+ op->src.type = OP_SRC_REG_INDIRECT;
+ op->src.reg = CFI_SP;
+ op->src.offset = insn->immediate;
+ op->dest.type = OP_DEST_REG;
+ op->dest.reg = inst.reg2i12_format.rd;
+ }
+ }
+ break;
+ case std_op:
+ if (inst.reg2i12_format.rj == CFI_SP) {
+ /* st.d rd,sp,si12 */
+ insn->immediate = sign_extend64(inst.reg2i12_format.immediate, 11);
+ ADD_OP(op) {
+ op->src.type = OP_SRC_REG;
+ op->src.reg = inst.reg2i12_format.rd;
+ op->dest.type = OP_DEST_REG_INDIRECT;
+ op->dest.reg = CFI_SP;
+ op->dest.offset = insn->immediate;
+ }
+ }
+ break;
+ case andi_op:
+ if (inst.reg2i12_format.rd == 0 &&
+ inst.reg2i12_format.rj == 0 &&
+ inst.reg2i12_format.immediate == 0)
+ /* andi r0,r0,0 */
+ insn->type = INSN_NOP;
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+static bool decode_insn_reg2i14_fomat(union loongarch_instruction inst,
+ struct instruction *insn,
+ struct stack_op **ops_list,
+ struct stack_op *op)
+{
+ switch (inst.reg2i14_format.opcode) {
+ case ldptrd_op:
+ if (inst.reg2i14_format.rj == CFI_SP) {
+ /* ldptr.d rd,sp,si14 */
+ insn->immediate = sign_extend64(inst.reg2i14_format.immediate, 13);
+ ADD_OP(op) {
+ op->src.type = OP_SRC_REG_INDIRECT;
+ op->src.reg = CFI_SP;
+ op->src.offset = insn->immediate;
+ op->dest.type = OP_DEST_REG;
+ op->dest.reg = inst.reg2i14_format.rd;
+ }
+ }
+ break;
+ case stptrd_op:
+ if (inst.reg2i14_format.rj == CFI_SP) {
+ /* stptr.d ra,sp,0 */
+ if (inst.reg2i14_format.rd == LOONGARCH_GPR_RA &&
+ inst.reg2i14_format.immediate == 0)
+ break;
+
+ /* stptr.d rd,sp,si14 */
+ insn->immediate = sign_extend64(inst.reg2i14_format.immediate, 13);
+ ADD_OP(op) {
+ op->src.type = OP_SRC_REG;
+ op->src.reg = inst.reg2i14_format.rd;
+ op->dest.type = OP_DEST_REG_INDIRECT;
+ op->dest.reg = CFI_SP;
+ op->dest.offset = insn->immediate;
+ }
+ }
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+static bool decode_insn_reg2i16_fomat(union loongarch_instruction inst,
+ struct instruction *insn)
+{
+ switch (inst.reg2i16_format.opcode) {
+ case jirl_op:
+ if (inst.reg2i16_format.rd == 0 &&
+ inst.reg2i16_format.rj == CFI_RA &&
+ inst.reg2i16_format.immediate == 0) {
+ /* jirl r0,ra,0 */
+ insn->type = INSN_RETURN;
+ } else if (inst.reg2i16_format.rd == CFI_RA) {
+ /* jirl ra,rj,offs16 */
+ insn->type = INSN_CALL_DYNAMIC;
+ } else if (inst.reg2i16_format.rd == CFI_A0 &&
+ inst.reg2i16_format.immediate == 0) {
+ /*
+ * jirl a0,t0,0
+ * this is a special case in loongarch_suspend_enter,
+ * just treat it as a call instruction.
+ */
+ insn->type = INSN_CALL_DYNAMIC;
+ } else if (inst.reg2i16_format.rd == 0 &&
+ inst.reg2i16_format.immediate == 0) {
+ /* jirl r0,rj,0 */
+ insn->type = INSN_JUMP_DYNAMIC;
+ } else if (inst.reg2i16_format.rd == 0 &&
+ inst.reg2i16_format.immediate != 0) {
+ /*
+ * jirl r0,t0,12
+ * this is a rare case in JUMP_VIRT_ADDR,
+ * just ignore it since it is harmless for tracing.
+ */
+ break;
+ } else {
+ /* jirl rd,rj,offs16 */
+ insn->type = INSN_JUMP_UNCONDITIONAL;
+ insn->immediate = sign_extend64(inst.reg2i16_format.immediate, 15);
+ }
+ break;
+ case beq_op:
+ case bne_op:
+ case blt_op:
+ case bge_op:
+ case bltu_op:
+ case bgeu_op:
+ insn->type = INSN_JUMP_CONDITIONAL;
+ insn->immediate = sign_extend64(inst.reg2i16_format.immediate, 15);
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+static bool decode_insn_reg3_fomat(union loongarch_instruction inst,
+ struct instruction *insn)
+{
+ switch (inst.reg3_format.opcode) {
+ case amswapw_op:
+ if (inst.reg3_format.rd == LOONGARCH_GPR_ZERO &&
+ inst.reg3_format.rk == LOONGARCH_GPR_RA &&
+ inst.reg3_format.rj == LOONGARCH_GPR_ZERO) {
+ /* amswap.w $zero, $ra, $zero */
+ insn->type = INSN_BUG;
+ }
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+int arch_decode_instruction(struct objtool_file *file, const struct section *sec,
+ unsigned long offset, unsigned int maxlen,
+ struct instruction *insn)
+{
+ struct stack_op **ops_list = &insn->stack_ops;
+ const struct elf *elf = file->elf;
+ struct stack_op *op = NULL;
+ union loongarch_instruction inst;
+
+ if (!is_loongarch(elf))
+ return -1;
+
+ if (maxlen < LOONGARCH_INSN_SIZE)
+ return 0;
+
+ insn->len = LOONGARCH_INSN_SIZE;
+ insn->type = INSN_OTHER;
+ insn->immediate = 0;
+
+ inst = *(union loongarch_instruction *)(sec->data->d_buf + offset);
+
+ if (decode_insn_reg0i26_fomat(inst, insn))
+ return 0;
+ if (decode_insn_reg1i21_fomat(inst, insn))
+ return 0;
+ if (decode_insn_reg2i12_fomat(inst, insn, ops_list, op))
+ return 0;
+ if (decode_insn_reg2i14_fomat(inst, insn, ops_list, op))
+ return 0;
+ if (decode_insn_reg2i16_fomat(inst, insn))
+ return 0;
+ if (decode_insn_reg3_fomat(inst, insn))
+ return 0;
+
+ if (inst.word == 0) {
+ /* andi $zero, $zero, 0x0 */
+ insn->type = INSN_NOP;
+ } else if (inst.reg0i15_format.opcode == break_op &&
+ inst.reg0i15_format.immediate == 0x0) {
+ /* break 0x0 */
+ insn->type = INSN_TRAP;
+ } else if (inst.reg0i15_format.opcode == break_op &&
+ inst.reg0i15_format.immediate == 0x1) {
+ /* break 0x1 */
+ insn->type = INSN_BUG;
+ } else if (inst.reg2_format.opcode == ertn_op) {
+ /* ertn */
+ insn->type = INSN_RETURN;
+ }
+
+ return 0;
+}
+
+const char *arch_nop_insn(int len)
+{
+ static u32 nop;
+
+ if (len != LOONGARCH_INSN_SIZE) {
+ ERROR("invalid NOP size: %d\n", len);
+ return NULL;
+ }
+
+ nop = LOONGARCH_INSN_NOP;
+
+ return (const char *)&nop;
+}
+
+const char *arch_ret_insn(int len)
+{
+ static u32 ret;
+
+ if (len != LOONGARCH_INSN_SIZE) {
+ ERROR("invalid RET size: %d\n", len);
+ return NULL;
+ }
+
+ emit_jirl((union loongarch_instruction *)&ret, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0);
+
+ return (const char *)&ret;
+}
+
+void arch_initial_func_cfi_state(struct cfi_init_state *state)
+{
+ int i;
+
+ for (i = 0; i < CFI_NUM_REGS; i++) {
+ state->regs[i].base = CFI_UNDEFINED;
+ state->regs[i].offset = 0;
+ }
+
+ /* initial CFA (call frame address) */
+ state->cfa.base = CFI_SP;
+ state->cfa.offset = 0;
+}
+
+unsigned int arch_reloc_size(struct reloc *reloc)
+{
+ switch (reloc_type(reloc)) {
+ case R_LARCH_32:
+ case R_LARCH_32_PCREL:
+ return 4;
+ default:
+ return 8;
+ }
+}
+
+unsigned long arch_jump_table_sym_offset(struct reloc *reloc, struct reloc *table)
+{
+ switch (reloc_type(reloc)) {
+ case R_LARCH_32_PCREL:
+ case R_LARCH_64_PCREL:
+ return reloc->sym->offset + reloc_addend(reloc) -
+ (reloc_offset(reloc) - reloc_offset(table));
+ default:
+ return reloc->sym->offset + reloc_addend(reloc);
+ }
+}
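
A standalone sketch (editorial, not part of the patch) of the branch-target arithmetic implemented above: LoongArch branch immediates count 4-byte instructions, so the decoder sign-extends the split 26-bit field and arch_jump_destination() shifts it left by two before adding it to the instruction offset. The encoding values below are hypothetical and chosen only to show the math.

  #include <stdint.h>
  #include <stdio.h>

  /* Same semantics as the kernel's sign_extend64(); index is the sign bit. */
  static int64_t sign_extend64(uint64_t value, int index)
  {
      int shift = 63 - index;

      return ((int64_t)(value << shift)) >> shift;
  }

  int main(void)
  {
      /* hypothetical "b" encoding whose 26-bit immediate is -2 instructions */
      uint32_t imm_h = 0x3ff, imm_l = 0xfffe;
      uint64_t offset = 0x1000;                 /* insn->offset within .text */
      int64_t imm = sign_extend64((uint64_t)imm_h << 16 | imm_l, 25);

      /* target = insn->offset + (immediate << 2) = 0x1000 - 8 = 0xff8 */
      printf("target = 0x%llx\n", (unsigned long long)(offset + (imm << 2)));
      return 0;
  }
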
diff --git a/tools/objtool/arch/loongarch/include/arch/cfi_regs.h b/tools/objtool/arch/loongarch/include/arch/cfi_regs.h
new file mode 100644
index 000000000000..d183cc8f43bf
--- /dev/null
+++ b/tools/objtool/arch/loongarch/include/arch/cfi_regs.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _OBJTOOL_ARCH_CFI_REGS_H
+#define _OBJTOOL_ARCH_CFI_REGS_H
+
+#define CFI_RA 1
+#define CFI_SP 3
+#define CFI_A0 4
+#define CFI_FP 22
+#define CFI_S0 23
+#define CFI_S1 24
+#define CFI_S2 25
+#define CFI_S3 26
+#define CFI_S4 27
+#define CFI_S5 28
+#define CFI_S6 29
+#define CFI_S7 30
+#define CFI_S8 31
+#define CFI_NUM_REGS 32
+
+#define CFI_BP CFI_FP
+
+#endif /* _OBJTOOL_ARCH_CFI_REGS_H */
diff --git a/tools/objtool/arch/loongarch/include/arch/elf.h b/tools/objtool/arch/loongarch/include/arch/elf.h
new file mode 100644
index 000000000000..ec79062c9554
--- /dev/null
+++ b/tools/objtool/arch/loongarch/include/arch/elf.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _OBJTOOL_ARCH_ELF_H
+#define _OBJTOOL_ARCH_ELF_H
+
+/*
+ * See the following link for more info about ELF Relocation types:
+ * https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html#_relocations
+ */
+#ifndef R_LARCH_NONE
+#define R_LARCH_NONE 0
+#endif
+#ifndef R_LARCH_32
+#define R_LARCH_32 1
+#endif
+#ifndef R_LARCH_64
+#define R_LARCH_64 2
+#endif
+#ifndef R_LARCH_32_PCREL
+#define R_LARCH_32_PCREL 99
+#endif
+#ifndef R_LARCH_64_PCREL
+#define R_LARCH_64_PCREL 109
+#endif
+
+#ifndef EM_LOONGARCH
+#define EM_LOONGARCH 258
+#endif
+
+#define R_NONE R_LARCH_NONE
+#define R_ABS32 R_LARCH_32
+#define R_ABS64 R_LARCH_64
+#define R_DATA32 R_LARCH_32_PCREL
+#define R_DATA64 R_LARCH_32_PCREL
+#define R_TEXT32 R_LARCH_32_PCREL
+#define R_TEXT64 R_LARCH_32_PCREL
+
+#endif /* _OBJTOOL_ARCH_ELF_H */
diff --git a/tools/objtool/arch/loongarch/include/arch/special.h b/tools/objtool/arch/loongarch/include/arch/special.h
new file mode 100644
index 000000000000..35fc979b550a
--- /dev/null
+++ b/tools/objtool/arch/loongarch/include/arch/special.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _OBJTOOL_ARCH_SPECIAL_H
+#define _OBJTOOL_ARCH_SPECIAL_H
+
+/*
+ * See more info about struct exception_table_entry
+ * in arch/loongarch/include/asm/extable.h
+ */
+#define EX_ENTRY_SIZE 12
+#define EX_ORIG_OFFSET 0
+#define EX_NEW_OFFSET 4
+
+/*
+ * See more info about struct jump_entry
+ * in include/linux/jump_label.h
+ */
+#define JUMP_ENTRY_SIZE 16
+#define JUMP_ORIG_OFFSET 0
+#define JUMP_NEW_OFFSET 4
+#define JUMP_KEY_OFFSET 8
+
+/*
+ * See more info about struct alt_instr
+ * in arch/loongarch/include/asm/alternative.h
+ */
+#define ALT_ENTRY_SIZE 12
+#define ALT_ORIG_OFFSET 0
+#define ALT_NEW_OFFSET 4
+#define ALT_FEATURE_OFFSET 8
+#define ALT_ORIG_LEN_OFFSET 10
+#define ALT_NEW_LEN_OFFSET 11
+
+#endif /* _OBJTOOL_ARCH_SPECIAL_H */
diff --git a/tools/objtool/arch/loongarch/orc.c b/tools/objtool/arch/loongarch/orc.c
new file mode 100644
index 000000000000..b58c5ff443c9
--- /dev/null
+++ b/tools/objtool/arch/loongarch/orc.c
@@ -0,0 +1,171 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/objtool_types.h>
+#include <asm/orc_types.h>
+
+#include <objtool/check.h>
+#include <objtool/orc.h>
+#include <objtool/warn.h>
+#include <objtool/endianness.h>
+
+int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi, struct instruction *insn)
+{
+ struct cfi_reg *fp = &cfi->regs[CFI_FP];
+ struct cfi_reg *ra = &cfi->regs[CFI_RA];
+
+ memset(orc, 0, sizeof(*orc));
+
+ if (!cfi) {
+ /*
+ * This is usually either unreachable nops/traps (which don't
+ * trigger unreachable instruction warnings), or
+ * STACK_FRAME_NON_STANDARD functions.
+ */
+ orc->type = ORC_TYPE_UNDEFINED;
+ return 0;
+ }
+
+ switch (cfi->type) {
+ case UNWIND_HINT_TYPE_UNDEFINED:
+ orc->type = ORC_TYPE_UNDEFINED;
+ return 0;
+ case UNWIND_HINT_TYPE_END_OF_STACK:
+ orc->type = ORC_TYPE_END_OF_STACK;
+ return 0;
+ case UNWIND_HINT_TYPE_CALL:
+ orc->type = ORC_TYPE_CALL;
+ break;
+ case UNWIND_HINT_TYPE_REGS:
+ orc->type = ORC_TYPE_REGS;
+ break;
+ case UNWIND_HINT_TYPE_REGS_PARTIAL:
+ orc->type = ORC_TYPE_REGS_PARTIAL;
+ break;
+ default:
+ ERROR_INSN(insn, "unknown unwind hint type %d", cfi->type);
+ return -1;
+ }
+
+ orc->signal = cfi->signal;
+
+ switch (cfi->cfa.base) {
+ case CFI_SP:
+ orc->sp_reg = ORC_REG_SP;
+ break;
+ case CFI_FP:
+ orc->sp_reg = ORC_REG_FP;
+ break;
+ default:
+ ERROR_INSN(insn, "unknown CFA base reg %d", cfi->cfa.base);
+ return -1;
+ }
+
+ switch (fp->base) {
+ case CFI_UNDEFINED:
+ orc->fp_reg = ORC_REG_UNDEFINED;
+ orc->fp_offset = 0;
+ break;
+ case CFI_CFA:
+ orc->fp_reg = ORC_REG_PREV_SP;
+ orc->fp_offset = fp->offset;
+ break;
+ case CFI_FP:
+ orc->fp_reg = ORC_REG_FP;
+ break;
+ default:
+ ERROR_INSN(insn, "unknown FP base reg %d", fp->base);
+ return -1;
+ }
+
+ switch (ra->base) {
+ case CFI_UNDEFINED:
+ orc->ra_reg = ORC_REG_UNDEFINED;
+ orc->ra_offset = 0;
+ break;
+ case CFI_CFA:
+ orc->ra_reg = ORC_REG_PREV_SP;
+ orc->ra_offset = ra->offset;
+ break;
+ case CFI_FP:
+ orc->ra_reg = ORC_REG_FP;
+ break;
+ default:
+ ERROR_INSN(insn, "unknown RA base reg %d", ra->base);
+ return -1;
+ }
+
+ orc->sp_offset = cfi->cfa.offset;
+
+ return 0;
+}
+
+int write_orc_entry(struct elf *elf, struct section *orc_sec,
+ struct section *ip_sec, unsigned int idx,
+ struct section *insn_sec, unsigned long insn_off,
+ struct orc_entry *o)
+{
+ struct orc_entry *orc;
+
+ /* populate ORC data */
+ orc = (struct orc_entry *)orc_sec->data->d_buf + idx;
+ memcpy(orc, o, sizeof(*orc));
+
+ /* populate reloc for ip */
+ if (!elf_init_reloc_text_sym(elf, ip_sec, idx * sizeof(int), idx,
+ insn_sec, insn_off))
+ return -1;
+
+ return 0;
+}
+
+static const char *reg_name(unsigned int reg)
+{
+ switch (reg) {
+ case ORC_REG_SP:
+ return "sp";
+ case ORC_REG_FP:
+ return "fp";
+ case ORC_REG_PREV_SP:
+ return "prevsp";
+ default:
+ return "?";
+ }
+}
+
+static const char *orc_type_name(unsigned int type)
+{
+ switch (type) {
+ case UNWIND_HINT_TYPE_CALL:
+ return "call";
+ case UNWIND_HINT_TYPE_REGS:
+ return "regs";
+ case UNWIND_HINT_TYPE_REGS_PARTIAL:
+ return "regs (partial)";
+ default:
+ return "?";
+ }
+}
+
+static void print_reg(unsigned int reg, int offset)
+{
+ if (reg == ORC_REG_UNDEFINED)
+ printf(" (und) ");
+ else
+ printf("%s + %3d", reg_name(reg), offset);
+
+}
+
+void orc_print_dump(struct elf *dummy_elf, struct orc_entry *orc, int i)
+{
+ printf("type:%s", orc_type_name(orc[i].type));
+
+ printf(" sp:");
+ print_reg(orc[i].sp_reg, orc[i].sp_offset);
+
+ printf(" fp:");
+ print_reg(orc[i].fp_reg, orc[i].fp_offset);
+
+ printf(" ra:");
+ print_reg(orc[i].ra_reg, orc[i].ra_offset);
+
+ printf(" signal:%d\n", orc[i].signal);
+}
diff --git a/tools/objtool/arch/loongarch/special.c b/tools/objtool/arch/loongarch/special.c
new file mode 100644
index 000000000000..a80b75f7b061
--- /dev/null
+++ b/tools/objtool/arch/loongarch/special.c
@@ -0,0 +1,196 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <string.h>
+#include <objtool/special.h>
+#include <objtool/warn.h>
+
+bool arch_support_alt_relocation(struct special_alt *special_alt,
+ struct instruction *insn,
+ struct reloc *reloc)
+{
+ return false;
+}
+
+struct table_info {
+ struct list_head jump_info;
+ unsigned long insn_offset;
+ unsigned long rodata_offset;
+};
+
+static void get_rodata_table_size_by_table_annotate(struct objtool_file *file,
+ struct instruction *insn,
+ unsigned long *table_size)
+{
+ struct section *rsec;
+ struct reloc *reloc;
+ struct list_head table_list;
+ struct table_info *orig_table;
+ struct table_info *next_table;
+ unsigned long tmp_insn_offset;
+ unsigned long tmp_rodata_offset;
+ bool is_valid_list = false;
+
+ rsec = find_section_by_name(file->elf, ".rela.discard.tablejump_annotate");
+ if (!rsec)
+ return;
+
+ INIT_LIST_HEAD(&table_list);
+
+ for_each_reloc(rsec, reloc) {
+ if (reloc->sym->sec->rodata)
+ continue;
+
+ if (strcmp(insn->sec->name, reloc->sym->sec->name))
+ continue;
+
+ orig_table = malloc(sizeof(struct table_info));
+ if (!orig_table) {
+ WARN("malloc failed");
+ return;
+ }
+
+ orig_table->insn_offset = reloc->sym->offset + reloc_addend(reloc);
+ reloc++;
+ orig_table->rodata_offset = reloc->sym->offset + reloc_addend(reloc);
+
+ list_add_tail(&orig_table->jump_info, &table_list);
+
+ if (reloc_idx(reloc) + 1 == sec_num_entries(rsec))
+ break;
+
+ if (strcmp(insn->sec->name, (reloc + 1)->sym->sec->name)) {
+ list_for_each_entry(orig_table, &table_list, jump_info) {
+ if (orig_table->insn_offset == insn->offset) {
+ is_valid_list = true;
+ break;
+ }
+ }
+
+ if (!is_valid_list) {
+ list_del_init(&table_list);
+ continue;
+ }
+
+ break;
+ }
+ }
+
+ list_for_each_entry(orig_table, &table_list, jump_info) {
+ next_table = list_next_entry(orig_table, jump_info);
+ list_for_each_entry_from(next_table, &table_list, jump_info) {
+ if (next_table->rodata_offset < orig_table->rodata_offset) {
+ tmp_insn_offset = next_table->insn_offset;
+ tmp_rodata_offset = next_table->rodata_offset;
+ next_table->insn_offset = orig_table->insn_offset;
+ next_table->rodata_offset = orig_table->rodata_offset;
+ orig_table->insn_offset = tmp_insn_offset;
+ orig_table->rodata_offset = tmp_rodata_offset;
+ }
+ }
+ }
+
+ list_for_each_entry(orig_table, &table_list, jump_info) {
+ if (insn->offset == orig_table->insn_offset) {
+ next_table = list_next_entry(orig_table, jump_info);
+ if (&next_table->jump_info == &table_list) {
+ *table_size = 0;
+ return;
+ }
+
+ while (next_table->rodata_offset == orig_table->rodata_offset) {
+ next_table = list_next_entry(next_table, jump_info);
+ if (&next_table->jump_info == &table_list) {
+ *table_size = 0;
+ return;
+ }
+ }
+
+ *table_size = next_table->rodata_offset - orig_table->rodata_offset;
+ }
+ }
+}
+
+static struct reloc *find_reloc_by_table_annotate(struct objtool_file *file,
+ struct instruction *insn,
+ unsigned long *table_size)
+{
+ struct section *rsec;
+ struct reloc *reloc;
+ unsigned long offset;
+
+ rsec = find_section_by_name(file->elf, ".rela.discard.tablejump_annotate");
+ if (!rsec)
+ return NULL;
+
+ for_each_reloc(rsec, reloc) {
+ if (reloc->sym->sec->rodata)
+ continue;
+
+ if (strcmp(insn->sec->name, reloc->sym->sec->name))
+ continue;
+
+ offset = reloc->sym->offset + reloc_addend(reloc);
+ if (insn->offset == offset) {
+ get_rodata_table_size_by_table_annotate(file, insn, table_size);
+ reloc++;
+ return reloc;
+ }
+ }
+
+ return NULL;
+}
+
+static struct reloc *find_reloc_of_rodata_c_jump_table(struct section *sec,
+ unsigned long offset,
+ unsigned long *table_size)
+{
+ struct section *rsec;
+ struct reloc *reloc;
+
+ rsec = sec->rsec;
+ if (!rsec)
+ return NULL;
+
+ for_each_reloc(rsec, reloc) {
+ if (reloc_offset(reloc) > offset)
+ break;
+
+ if (!strcmp(reloc->sym->sec->name, C_JUMP_TABLE_SECTION)) {
+ *table_size = 0;
+ return reloc;
+ }
+ }
+
+ return NULL;
+}
+
+struct reloc *arch_find_switch_table(struct objtool_file *file,
+ struct instruction *insn,
+ unsigned long *table_size)
+{
+ struct reloc *annotate_reloc;
+ struct reloc *rodata_reloc;
+ struct section *table_sec;
+ unsigned long table_offset;
+
+ annotate_reloc = find_reloc_by_table_annotate(file, insn, table_size);
+ if (!annotate_reloc) {
+ annotate_reloc = find_reloc_of_rodata_c_jump_table(
+ insn->sec, insn->offset, table_size);
+ if (!annotate_reloc)
+ return NULL;
+ }
+
+ table_sec = annotate_reloc->sym->sec;
+ table_offset = annotate_reloc->sym->offset + reloc_addend(annotate_reloc);
+
+ /*
+ * Each table entry has a rela associated with it. The rela
+ * should reference text in the same function as the original
+ * instruction.
+ */
+ rodata_reloc = find_reloc_by_dest(file->elf, table_sec, table_offset);
+ if (!rodata_reloc)
+ return NULL;
+
+ return rodata_reloc;
+}
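
The switch-table discovery above reduces to: collect (jump instruction offset, rodata table offset) pairs from .rela.discard.tablejump_annotate, order them by rodata offset, and take the gap to the next distinct rodata offset as the size of the current table (0 if it is the last one). A minimal standalone sketch of that size computation, with made-up offsets and plain arrays instead of the objtool reloc structures:

  #include <stddef.h>
  #include <stdio.h>

  struct pair {
      unsigned long insn_offset;
      unsigned long rodata_offset;
  };

  /* hypothetical pairs, already sorted by rodata_offset as done above */
  static struct pair pairs[] = {
      { 0x40, 0x000 },
      { 0x90, 0x060 },
      { 0xd0, 0x0a0 },
  };

  static unsigned long table_size_for(unsigned long insn_offset)
  {
      size_t n = sizeof(pairs) / sizeof(pairs[0]);

      for (size_t i = 0; i < n; i++) {
          if (pairs[i].insn_offset != insn_offset)
              continue;
          /* gap to the next distinct rodata offset, 0 for the last table */
          for (size_t j = i + 1; j < n; j++)
              if (pairs[j].rodata_offset != pairs[i].rodata_offset)
                  return pairs[j].rodata_offset - pairs[i].rodata_offset;
          return 0;
      }
      return 0;
  }

  int main(void)
  {
      /* the table used by the jump at 0x40 spans 0x000..0x060: 0x60 bytes */
      printf("0x%lx\n", table_size_for(0x40));
      return 0;
  }
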
diff --git a/tools/objtool/arch/powerpc/decode.c b/tools/objtool/arch/powerpc/decode.c
index 53b55690f320..c851c51d4bd3 100644
--- a/tools/objtool/arch/powerpc/decode.c
+++ b/tools/objtool/arch/powerpc/decode.c
@@ -55,12 +55,17 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
switch (opcode) {
case 18: /* b[l][a] */
- if ((ins & 3) == 1) /* bl */
+ if (ins == 0x48000005) /* bl .+4 */
+ typ = INSN_OTHER;
+ else if (ins & 1) /* bl[a] */
typ = INSN_CALL;
+ else /* b[a] */
+ typ = INSN_JUMP_UNCONDITIONAL;
imm = ins & 0x3fffffc;
if (imm & 0x2000000)
imm -= 0x4000000;
+ imm |= ins & 2; /* AA flag */
break;
}
@@ -77,6 +82,9 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
unsigned long arch_jump_destination(struct instruction *insn)
{
+ if (insn->immediate & 2)
+ return insn->immediate & ~2;
+
return insn->offset + insn->immediate;
}
@@ -106,3 +114,17 @@ void arch_initial_func_cfi_state(struct cfi_init_state *state)
state->regs[CFI_RA].base = CFI_CFA;
state->regs[CFI_RA].offset = 0;
}
+
+unsigned int arch_reloc_size(struct reloc *reloc)
+{
+ switch (reloc_type(reloc)) {
+ case R_PPC_REL32:
+ case R_PPC_ADDR32:
+ case R_PPC_UADDR32:
+ case R_PPC_PLT32:
+ case R_PPC_PLTREL32:
+ return 4;
+ default:
+ return 8;
+ }
+}
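
For the powerpc change above, a small standalone sketch (illustrative values, not the objtool code) of how the AA bit travels through insn->immediate: branch targets are 4-byte aligned, so bit 1 is free; the decoder ORs the instruction's AA flag into the immediate and arch_jump_destination() strips it again to return an absolute rather than a relative destination.

  #include <stdio.h>

  /* mirrors the arch_jump_destination() logic shown above */
  static unsigned long jump_dest(unsigned long offset, unsigned long immediate)
  {
      if (immediate & 2)                  /* AA flag set: absolute branch */
          return immediate & ~2UL;
      return offset + immediate;          /* relative branch */
  }

  int main(void)
  {
      /* hypothetical "b +0x100" at 0x2000 and "ba 0x3000" with AA set */
      printf("rel: 0x%lx\n", jump_dest(0x2000, 0x100));        /* 0x2100 */
      printf("abs: 0x%lx\n", jump_dest(0x2000, 0x3000 | 2));   /* 0x3000 */
      return 0;
  }
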
diff --git a/tools/objtool/arch/powerpc/special.c b/tools/objtool/arch/powerpc/special.c
index d33868147196..51610689abf7 100644
--- a/tools/objtool/arch/powerpc/special.c
+++ b/tools/objtool/arch/powerpc/special.c
@@ -13,7 +13,8 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
}
struct reloc *arch_find_switch_table(struct objtool_file *file,
- struct instruction *insn)
+ struct instruction *insn,
+ unsigned long *table_size)
{
exit(-1);
}
diff --git a/tools/objtool/arch/x86/Build b/tools/objtool/arch/x86/Build
index 9f7869b5c5e0..3dedb2fd8f3a 100644
--- a/tools/objtool/arch/x86/Build
+++ b/tools/objtool/arch/x86/Build
@@ -1,5 +1,6 @@
objtool-y += special.o
objtool-y += decode.o
+objtool-y += orc.o
inat_tables_script = ../arch/x86/tools/gen-insn-attr-x86.awk
inat_tables_maps = ../arch/x86/lib/x86-opcode-map.txt
diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c
index e327cd827135..0ad5cc70ecbe 100644
--- a/tools/objtool/arch/x86/decode.c
+++ b/tools/objtool/arch/x86/decode.c
@@ -36,7 +36,7 @@ static int is_x86_64(const struct elf *elf)
case EM_386:
return 0;
default:
- WARN("unexpected ELF machine type %d", elf->ehdr.e_machine);
+ ERROR("unexpected ELF machine type %d", elf->ehdr.e_machine);
return -1;
}
}
@@ -125,8 +125,14 @@ bool arch_pc_relative_reloc(struct reloc *reloc)
#define is_RIP() ((modrm_rm & 7) == CFI_BP && modrm_mod == 0)
#define have_SIB() ((modrm_rm & 7) == CFI_SP && mod_is_mem())
+/*
+ * Check the ModRM register. If there is a SIB byte then check with
+ * the SIB base register. But if the SIB base is 5 (i.e. CFI_BP) and
+ * ModRM mod is 0 then there is no base register.
+ */
#define rm_is(reg) (have_SIB() ? \
- sib_base == (reg) && sib_index == CFI_SP : \
+ sib_base == (reg) && sib_index == CFI_SP && \
+ (sib_base != CFI_BP || modrm_mod != 0) : \
modrm_rm == (reg))
#define rm_is_mem(reg) (mod_is_mem() && !is_RIP() && rm_is(reg))
@@ -167,7 +173,7 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
ret = insn_decode(&ins, sec->data->d_buf + offset, maxlen,
x86_64 ? INSN_MODE_64 : INSN_MODE_32);
if (ret < 0) {
- WARN("can't decode instruction at %s:0x%lx", sec->name, offset);
+ ERROR("can't decode instruction at %s:0x%lx", sec->name, offset);
return -1;
}
@@ -183,6 +189,15 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
op2 = ins.opcode.bytes[1];
op3 = ins.opcode.bytes[2];
+ /*
+ * XXX hack, decoder is buggered and thinks 0xea is 7 bytes long.
+ */
+ if (op1 == 0xea) {
+ insn->len = 1;
+ insn->type = INSN_BUG;
+ return 0;
+ }
+
if (ins.rex_prefix.nbytes) {
rex = ins.rex_prefix.bytes[0];
rex_w = X86_REX_W(rex) >> 3;
@@ -315,7 +330,7 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
break;
default:
- /* WARN ? */
+ /* ERROR ? */
break;
}
@@ -450,10 +465,6 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
if (!rex_w)
break;
- /* skip RIP relative displacement */
- if (is_RIP())
- break;
-
/* skip nontrivial SIB */
if (have_SIB()) {
modrm_rm = sib_base;
@@ -461,6 +472,12 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
break;
}
+ /* lea disp(%rip), %dst */
+ if (is_RIP()) {
+ insn->type = INSN_LEA_RIP;
+ break;
+ }
+
/* lea disp(%src), %dst */
ADD_OP(op) {
op->src.offset = ins.displacement.value;
@@ -509,20 +526,33 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
if (op2 == 0x01) {
- if (modrm == 0xca)
- insn->type = INSN_CLAC;
- else if (modrm == 0xcb)
- insn->type = INSN_STAC;
-
+ switch (insn_last_prefix_id(&ins)) {
+ case INAT_PFX_REPE:
+ case INAT_PFX_REPNE:
+ if (modrm == 0xca)
+ /* eretu/erets */
+ insn->type = INSN_SYSRET;
+ break;
+ default:
+ if (modrm == 0xca)
+ insn->type = INSN_CLAC;
+ else if (modrm == 0xcb)
+ insn->type = INSN_STAC;
+ break;
+ }
} else if (op2 >= 0x80 && op2 <= 0x8f) {
insn->type = INSN_JUMP_CONDITIONAL;
- } else if (op2 == 0x05 || op2 == 0x07 || op2 == 0x34 ||
- op2 == 0x35) {
+ } else if (op2 == 0x05 || op2 == 0x34) {
+
+ /* syscall, sysenter */
+ insn->type = INSN_SYSCALL;
- /* sysenter, sysret */
- insn->type = INSN_CONTEXT_SWITCH;
+ } else if (op2 == 0x07 || op2 == 0x35) {
+
+ /* sysret, sysexit */
+ insn->type = INSN_SYSRET;
} else if (op2 == 0x0b || op2 == 0xb9) {
@@ -544,8 +574,7 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
if (ins.prefixes.nbytes == 1 &&
ins.prefixes.bytes[0] == 0xf2) {
/* ENQCMD cannot be used in the kernel. */
- WARN("ENQCMD instruction at %s:%lx", sec->name,
- offset);
+ WARN("ENQCMD instruction at %s:%lx", sec->name, offset);
}
} else if (op2 == 0xa0 || op2 == 0xa8) {
@@ -629,7 +658,7 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
if (disp->sym->type == STT_SECTION)
func = find_symbol_by_offset(disp->sym->sec, reloc_addend(disp));
if (!func) {
- WARN("no func for pv_ops[]");
+ ERROR("no func for pv_ops[]");
return -1;
}
@@ -660,7 +689,7 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
case 0xca: /* retf */
case 0xcb: /* retf */
- insn->type = INSN_CONTEXT_SWITCH;
+ insn->type = INSN_SYSRET;
break;
case 0xe0: /* loopne */
@@ -705,7 +734,7 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
} else if (modrm_reg == 5) {
/* jmpf */
- insn->type = INSN_CONTEXT_SWITCH;
+ insn->type = INSN_SYSRET;
} else if (modrm_reg == 6) {
@@ -722,7 +751,10 @@ int arch_decode_instruction(struct objtool_file *file, const struct section *sec
break;
}
- insn->immediate = ins.immediate.nbytes ? ins.immediate.value : 0;
+ if (ins.immediate.nbytes)
+ insn->immediate = ins.immediate.value;
+ else if (ins.displacement.nbytes)
+ insn->immediate = ins.displacement.value;
return 0;
}
@@ -756,7 +788,7 @@ const char *arch_nop_insn(int len)
};
if (len < 1 || len > 5) {
- WARN("invalid NOP size: %d\n", len);
+ ERROR("invalid NOP size: %d\n", len);
return NULL;
}
@@ -776,7 +808,7 @@ const char *arch_ret_insn(int len)
};
if (len < 1 || len > 5) {
- WARN("invalid RET size: %d\n", len);
+ ERROR("invalid RET size: %d\n", len);
return NULL;
}
@@ -819,16 +851,44 @@ int arch_decode_hint_reg(u8 sp_reg, int *base)
bool arch_is_retpoline(struct symbol *sym)
{
- return !strncmp(sym->name, "__x86_indirect_", 15);
+ return !strncmp(sym->name, "__x86_indirect_", 15) ||
+ !strncmp(sym->name, "__pi___x86_indirect_", 20);
}
bool arch_is_rethunk(struct symbol *sym)
{
- return !strcmp(sym->name, "__x86_return_thunk");
+ return !strcmp(sym->name, "__x86_return_thunk") ||
+ !strcmp(sym->name, "__pi___x86_return_thunk");
}
bool arch_is_embedded_insn(struct symbol *sym)
{
return !strcmp(sym->name, "retbleed_return_thunk") ||
+ !strcmp(sym->name, "srso_alias_safe_ret") ||
!strcmp(sym->name, "srso_safe_ret");
}
+
+unsigned int arch_reloc_size(struct reloc *reloc)
+{
+ switch (reloc_type(reloc)) {
+ case R_X86_64_32:
+ case R_X86_64_32S:
+ case R_X86_64_PC32:
+ case R_X86_64_PLT32:
+ return 4;
+ default:
+ return 8;
+ }
+}
+
+bool arch_absolute_reloc(struct elf *elf, struct reloc *reloc)
+{
+ switch (reloc_type(reloc)) {
+ case R_X86_64_32:
+ case R_X86_64_32S:
+ case R_X86_64_64:
+ return true;
+ default:
+ return false;
+ }
+}
diff --git a/tools/objtool/arch/x86/orc.c b/tools/objtool/arch/x86/orc.c
new file mode 100644
index 000000000000..7176b9ec5b05
--- /dev/null
+++ b/tools/objtool/arch/x86/orc.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/objtool_types.h>
+#include <asm/orc_types.h>
+
+#include <objtool/check.h>
+#include <objtool/orc.h>
+#include <objtool/warn.h>
+#include <objtool/endianness.h>
+
+int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi, struct instruction *insn)
+{
+ struct cfi_reg *bp = &cfi->regs[CFI_BP];
+
+ memset(orc, 0, sizeof(*orc));
+
+ if (!cfi) {
+ /*
+ * This is usually either unreachable nops/traps (which don't
+ * trigger unreachable instruction warnings), or
+ * STACK_FRAME_NON_STANDARD functions.
+ */
+ orc->type = ORC_TYPE_UNDEFINED;
+ return 0;
+ }
+
+ switch (cfi->type) {
+ case UNWIND_HINT_TYPE_UNDEFINED:
+ orc->type = ORC_TYPE_UNDEFINED;
+ return 0;
+ case UNWIND_HINT_TYPE_END_OF_STACK:
+ orc->type = ORC_TYPE_END_OF_STACK;
+ return 0;
+ case UNWIND_HINT_TYPE_CALL:
+ orc->type = ORC_TYPE_CALL;
+ break;
+ case UNWIND_HINT_TYPE_REGS:
+ orc->type = ORC_TYPE_REGS;
+ break;
+ case UNWIND_HINT_TYPE_REGS_PARTIAL:
+ orc->type = ORC_TYPE_REGS_PARTIAL;
+ break;
+ default:
+ ERROR_INSN(insn, "unknown unwind hint type %d", cfi->type);
+ return -1;
+ }
+
+ orc->signal = cfi->signal;
+
+ switch (cfi->cfa.base) {
+ case CFI_SP:
+ orc->sp_reg = ORC_REG_SP;
+ break;
+ case CFI_SP_INDIRECT:
+ orc->sp_reg = ORC_REG_SP_INDIRECT;
+ break;
+ case CFI_BP:
+ orc->sp_reg = ORC_REG_BP;
+ break;
+ case CFI_BP_INDIRECT:
+ orc->sp_reg = ORC_REG_BP_INDIRECT;
+ break;
+ case CFI_R10:
+ orc->sp_reg = ORC_REG_R10;
+ break;
+ case CFI_R13:
+ orc->sp_reg = ORC_REG_R13;
+ break;
+ case CFI_DI:
+ orc->sp_reg = ORC_REG_DI;
+ break;
+ case CFI_DX:
+ orc->sp_reg = ORC_REG_DX;
+ break;
+ default:
+ ERROR_INSN(insn, "unknown CFA base reg %d", cfi->cfa.base);
+ return -1;
+ }
+
+ switch (bp->base) {
+ case CFI_UNDEFINED:
+ orc->bp_reg = ORC_REG_UNDEFINED;
+ break;
+ case CFI_CFA:
+ orc->bp_reg = ORC_REG_PREV_SP;
+ break;
+ case CFI_BP:
+ orc->bp_reg = ORC_REG_BP;
+ break;
+ default:
+ ERROR_INSN(insn, "unknown BP base reg %d", bp->base);
+ return -1;
+ }
+
+ orc->sp_offset = cfi->cfa.offset;
+ orc->bp_offset = bp->offset;
+
+ return 0;
+}
+
+int write_orc_entry(struct elf *elf, struct section *orc_sec,
+ struct section *ip_sec, unsigned int idx,
+ struct section *insn_sec, unsigned long insn_off,
+ struct orc_entry *o)
+{
+ struct orc_entry *orc;
+
+ /* populate ORC data */
+ orc = (struct orc_entry *)orc_sec->data->d_buf + idx;
+ memcpy(orc, o, sizeof(*orc));
+ orc->sp_offset = bswap_if_needed(elf, orc->sp_offset);
+ orc->bp_offset = bswap_if_needed(elf, orc->bp_offset);
+
+ /* populate reloc for ip */
+ if (!elf_init_reloc_text_sym(elf, ip_sec, idx * sizeof(int), idx,
+ insn_sec, insn_off))
+ return -1;
+
+ return 0;
+}
+
+static const char *reg_name(unsigned int reg)
+{
+ switch (reg) {
+ case ORC_REG_PREV_SP:
+ return "prevsp";
+ case ORC_REG_DX:
+ return "dx";
+ case ORC_REG_DI:
+ return "di";
+ case ORC_REG_BP:
+ return "bp";
+ case ORC_REG_SP:
+ return "sp";
+ case ORC_REG_R10:
+ return "r10";
+ case ORC_REG_R13:
+ return "r13";
+ case ORC_REG_BP_INDIRECT:
+ return "bp(ind)";
+ case ORC_REG_SP_INDIRECT:
+ return "sp(ind)";
+ default:
+ return "?";
+ }
+}
+
+static const char *orc_type_name(unsigned int type)
+{
+ switch (type) {
+ case ORC_TYPE_UNDEFINED:
+ return "(und)";
+ case ORC_TYPE_END_OF_STACK:
+ return "end";
+ case ORC_TYPE_CALL:
+ return "call";
+ case ORC_TYPE_REGS:
+ return "regs";
+ case ORC_TYPE_REGS_PARTIAL:
+ return "regs (partial)";
+ default:
+ return "?";
+ }
+}
+
+static void print_reg(unsigned int reg, int offset)
+{
+ if (reg == ORC_REG_BP_INDIRECT)
+ printf("(bp%+d)", offset);
+ else if (reg == ORC_REG_SP_INDIRECT)
+ printf("(sp)%+d", offset);
+ else if (reg == ORC_REG_UNDEFINED)
+ printf("(und)");
+ else
+ printf("%s%+d", reg_name(reg), offset);
+}
+
+void orc_print_dump(struct elf *dummy_elf, struct orc_entry *orc, int i)
+{
+ printf("type:%s", orc_type_name(orc[i].type));
+
+ printf(" sp:");
+ print_reg(orc[i].sp_reg, bswap_if_needed(dummy_elf, orc[i].sp_offset));
+
+ printf(" bp:");
+ print_reg(orc[i].bp_reg, bswap_if_needed(dummy_elf, orc[i].bp_offset));
+
+ printf(" signal:%d\n", orc[i].signal);
+}
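
To make the CFI-to-ORC mapping above concrete, here is a hedged, standalone sketch (simplified types and register constants, not the kernel's unwinder) of how one entry produced by init_orc_entry() is consumed. For a hypothetical frame whose CFA is sp + 16 with the caller's rbp saved at CFA - 16, the entry carries sp_reg=ORC_REG_SP, sp_offset=16, bp_reg=ORC_REG_PREV_SP, bp_offset=-16, and an unwinder recovers the previous stack pointer and saved frame pointer like this:

  #include <stdint.h>
  #include <stdio.h>

  /* simplified stand-ins for the real orc_entry and ORC_REG_* values */
  enum { REG_UNDEFINED, REG_SP, REG_PREV_SP, REG_BP };

  struct mini_orc {
      int16_t sp_offset, bp_offset;
      uint8_t sp_reg, bp_reg;
  };

  int main(void)
  {
      struct mini_orc orc = {
          .sp_offset = 16, .bp_offset = -16,
          .sp_reg = REG_SP, .bp_reg = REG_PREV_SP,
      };
      uint64_t stack[4] = {
          0xdeadbeef,       /* saved rbp, sits at CFA - 16 */
          0xcafebabe,       /* return address, sits at CFA - 8 */
          0, 0,
      };
      uint64_t sp = (uint64_t)(uintptr_t)stack;   /* current stack pointer */

      uint64_t prev_sp = sp + orc.sp_offset;      /* sp_reg == REG_SP */
      uint64_t saved_bp = *(uint64_t *)(uintptr_t)(prev_sp + orc.bp_offset);

      printf("prev sp = %#llx, saved rbp = %#llx\n",
             (unsigned long long)prev_sp, (unsigned long long)saved_bp);
      return 0;
  }
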
diff --git a/tools/objtool/arch/x86/special.c b/tools/objtool/arch/x86/special.c
index 29e949579ede..06ca4a2659a4 100644
--- a/tools/objtool/arch/x86/special.c
+++ b/tools/objtool/arch/x86/special.c
@@ -3,39 +3,32 @@
#include <objtool/special.h>
#include <objtool/builtin.h>
+#include <objtool/warn.h>
-#define X86_FEATURE_POPCNT (4 * 32 + 23)
-#define X86_FEATURE_SMAP (9 * 32 + 20)
-
-void arch_handle_alternative(unsigned short feature, struct special_alt *alt)
+void arch_handle_alternative(struct special_alt *alt)
{
- switch (feature) {
- case X86_FEATURE_SMAP:
- /*
- * If UACCESS validation is enabled; force that alternative;
- * otherwise force it the other way.
- *
- * What we want to avoid is having both the original and the
- * alternative code flow at the same time, in that case we can
- * find paths that see the STAC but take the NOP instead of
- * CLAC and the other way around.
- */
- if (opts.uaccess)
- alt->skip_orig = true;
- else
- alt->skip_alt = true;
- break;
- case X86_FEATURE_POPCNT:
- /*
- * It has been requested that we don't validate the !POPCNT
- * feature path which is a "very very small percentage of
- * machines".
- */
- alt->skip_orig = true;
- break;
- default:
- break;
- }
+ static struct special_alt *group, *prev;
+
+ /*
+ * Recompute orig_len for nested ALTERNATIVE()s.
+ */
+ if (group && group->orig_sec == alt->orig_sec &&
+ group->orig_off == alt->orig_off) {
+
+ struct special_alt *iter = group;
+ for (;;) {
+ unsigned int len = max(iter->orig_len, alt->orig_len);
+ iter->orig_len = alt->orig_len = len;
+
+ if (iter == prev)
+ break;
+
+ iter = list_next_entry(iter, list);
+ }
+
+ } else group = alt;
+
+ prev = alt;
}
bool arch_support_alt_relocation(struct special_alt *special_alt,
@@ -83,10 +76,11 @@ bool arch_support_alt_relocation(struct special_alt *special_alt,
* TODO: Once we have DWARF CFI and smarter instruction decoding logic,
* ensure the same register is used in the mov and jump instructions.
*
- * NOTE: RETPOLINE made it harder still to decode dynamic jumps.
+ * NOTE: MITIGATION_RETPOLINE made it harder still to decode dynamic jumps.
*/
struct reloc *arch_find_switch_table(struct objtool_file *file,
- struct instruction *insn)
+ struct instruction *insn,
+ unsigned long *table_size)
{
struct reloc *text_reloc, *rodata_reloc;
struct section *table_sec;
@@ -132,8 +126,11 @@ struct reloc *arch_find_switch_table(struct objtool_file *file,
* indicates a rare GCC quirk/bug which can leave dead
* code behind.
*/
- if (reloc_type(text_reloc) == R_X86_64_PC32)
+ if (!file->ignore_unreachables && reloc_type(text_reloc) == R_X86_64_PC32) {
+ WARN_INSN(insn, "ignoring unreachables due to jump table quirk");
file->ignore_unreachables = true;
+ }
+ *table_size = 0;
return rodata_reloc;
}
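
A small standalone model (editorial, not the objtool code) of what the rewritten arch_handle_alternative() above achieves for nested ALTERNATIVE()s: every entry that shares the same original location ends up agreeing on the largest original length seen so far. With hypothetical lengths 3, 6 and 4, all three entries settle on 6:

  #include <stdio.h>

  int main(void)
  {
      /* hypothetical nested alternatives covering the same original code */
      unsigned int orig_len[] = { 3, 6, 4 };
      unsigned int n = sizeof(orig_len) / sizeof(orig_len[0]);
      unsigned int agreed = 0;

      for (unsigned int i = 0; i < n; i++) {
          if (orig_len[i] > agreed)
              agreed = orig_len[i];
          /* every entry seen so far is bumped to the running maximum */
          for (unsigned int j = 0; j <= i; j++)
              orig_len[j] = agreed;
      }

      for (unsigned int i = 0; i < n; i++)
          printf("alt %u: orig_len = %u\n", i, orig_len[i]);   /* all 6 */
      return 0;
  }
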
diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
index 5e21cfb7661d..0f6b197cfcb0 100644
--- a/tools/objtool/builtin-check.c
+++ b/tools/objtool/builtin-check.c
@@ -6,14 +6,20 @@
#include <subcmd/parse-options.h>
#include <string.h>
#include <stdlib.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/stat.h>
+#include <sys/sendfile.h>
#include <objtool/builtin.h>
#include <objtool/objtool.h>
+#include <objtool/warn.h>
-#define ERROR(format, ...) \
- fprintf(stderr, \
- "error: objtool: " format "\n", \
- ##__VA_ARGS__)
+#define ORIG_SUFFIX ".orig"
+int orig_argc;
+static char **orig_argv;
+const char *objname;
struct opts opts;
static const char * const check_usage[] = {
@@ -71,7 +77,7 @@ static const struct option check_options[] = {
OPT_BOOLEAN('i', "ibt", &opts.ibt, "validate and annotate IBT"),
OPT_BOOLEAN('m', "mcount", &opts.mcount, "annotate mcount/fentry calls for ftrace"),
OPT_BOOLEAN('n', "noinstr", &opts.noinstr, "validate noinstr rules"),
- OPT_BOOLEAN('o', "orc", &opts.orc, "generate ORC metadata"),
+ OPT_BOOLEAN(0, "orc", &opts.orc, "generate ORC metadata"),
OPT_BOOLEAN('r', "retpoline", &opts.retpoline, "validate and annotate retpoline usage"),
OPT_BOOLEAN(0, "rethunk", &opts.rethunk, "validate and annotate rethunk usage"),
OPT_BOOLEAN(0, "unret", &opts.unret, "validate entry unret placement"),
@@ -81,19 +87,21 @@ static const struct option check_options[] = {
OPT_BOOLEAN('t', "static-call", &opts.static_call, "annotate static calls"),
OPT_BOOLEAN('u', "uaccess", &opts.uaccess, "validate uaccess rules for SMAP"),
OPT_BOOLEAN(0 , "cfi", &opts.cfi, "annotate kernel control flow integrity (kCFI) function preambles"),
+ OPT_BOOLEAN(0 , "noabs", &opts.noabs, "reject absolute references in allocatable sections"),
OPT_CALLBACK_OPTARG(0, "dump", NULL, NULL, "orc", "dump metadata", parse_dump),
OPT_GROUP("Options:"),
- OPT_BOOLEAN(0, "backtrace", &opts.backtrace, "unwind on error"),
- OPT_BOOLEAN(0, "backup", &opts.backup, "create .orig files before modification"),
- OPT_BOOLEAN(0, "dry-run", &opts.dryrun, "don't write modifications"),
- OPT_BOOLEAN(0, "link", &opts.link, "object is a linked object"),
- OPT_BOOLEAN(0, "module", &opts.module, "object is part of a kernel module"),
- OPT_BOOLEAN(0, "mnop", &opts.mnop, "nop out mcount call sites"),
- OPT_BOOLEAN(0, "no-unreachable", &opts.no_unreachable, "skip 'unreachable instruction' warnings"),
- OPT_BOOLEAN(0, "sec-address", &opts.sec_address, "print section addresses in warnings"),
- OPT_BOOLEAN(0, "stats", &opts.stats, "print statistics"),
+ OPT_BOOLEAN(0, "backtrace", &opts.backtrace, "unwind on error"),
+ OPT_BOOLEAN(0, "dry-run", &opts.dryrun, "don't write modifications"),
+ OPT_BOOLEAN(0, "link", &opts.link, "object is a linked object"),
+ OPT_BOOLEAN(0, "module", &opts.module, "object is part of a kernel module"),
+ OPT_BOOLEAN(0, "mnop", &opts.mnop, "nop out mcount call sites"),
+ OPT_BOOLEAN(0, "no-unreachable", &opts.no_unreachable, "skip 'unreachable instruction' warnings"),
+ OPT_STRING('o', "output", &opts.output, "file", "output file name"),
+ OPT_BOOLEAN(0, "sec-address", &opts.sec_address, "print section addresses in warnings"),
+ OPT_BOOLEAN(0, "stats", &opts.stats, "print statistics"),
OPT_BOOLEAN('v', "verbose", &opts.verbose, "verbose warnings"),
+ OPT_BOOLEAN(0, "Werror", &opts.werror, "return error on warnings"),
OPT_END(),
};
@@ -131,10 +139,31 @@ int cmd_parse_options(int argc, const char **argv, const char * const usage[])
static bool opts_valid(void)
{
+ if (opts.mnop && !opts.mcount) {
+ ERROR("--mnop requires --mcount");
+ return false;
+ }
+
+ if (opts.noinstr && !opts.link) {
+ ERROR("--noinstr requires --link");
+ return false;
+ }
+
+ if (opts.ibt && !opts.link) {
+ ERROR("--ibt requires --link");
+ return false;
+ }
+
+ if (opts.unret && !opts.link) {
+ ERROR("--unret requires --link");
+ return false;
+ }
+
if (opts.hack_jump_label ||
opts.hack_noinstr ||
opts.ibt ||
opts.mcount ||
+ opts.noabs ||
opts.noinstr ||
opts.orc ||
opts.retpoline ||
@@ -144,95 +173,164 @@ static bool opts_valid(void)
opts.static_call ||
opts.uaccess) {
if (opts.dump_orc) {
- ERROR("--dump can't be combined with other options");
+ ERROR("--dump can't be combined with other actions");
return false;
}
return true;
}
- if (opts.unret && !opts.rethunk) {
- ERROR("--unret requires --rethunk");
- return false;
- }
-
if (opts.dump_orc)
return true;
- ERROR("At least one command required");
+ ERROR("At least one action required");
return false;
}
-static bool mnop_opts_valid(void)
+static int copy_file(const char *src, const char *dst)
{
- if (opts.mnop && !opts.mcount) {
- ERROR("--mnop requires --mcount");
- return false;
+ size_t to_copy, copied;
+ int dst_fd, src_fd;
+ struct stat stat;
+ off_t offset = 0;
+
+ src_fd = open(src, O_RDONLY);
+ if (src_fd == -1) {
+ ERROR("can't open %s for reading: %s", src, strerror(errno));
+ return 1;
+ }
+
+ dst_fd = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0400);
+ if (dst_fd == -1) {
+ ERROR("can't open %s for writing: %s", dst, strerror(errno));
+ return 1;
}
- return true;
+ if (fstat(src_fd, &stat) == -1) {
+ ERROR_GLIBC("fstat");
+ return 1;
+ }
+
+ if (fchmod(dst_fd, stat.st_mode) == -1) {
+ ERROR_GLIBC("fchmod");
+ return 1;
+ }
+
+ for (to_copy = stat.st_size; to_copy > 0; to_copy -= copied) {
+ copied = sendfile(dst_fd, src_fd, &offset, to_copy);
+ if (copied == -1) {
+ ERROR_GLIBC("sendfile");
+ return 1;
+ }
+ }
+
+ close(dst_fd);
+ close(src_fd);
+ return 0;
}
-static bool link_opts_valid(struct objtool_file *file)
+static void save_argv(int argc, const char **argv)
{
- if (opts.link)
- return true;
-
- if (has_multiple_files(file->elf)) {
- ERROR("Linked object detected, forcing --link");
- opts.link = true;
- return true;
+ orig_argv = calloc(argc, sizeof(char *));
+ if (!orig_argv) {
+ ERROR_GLIBC("calloc");
+ exit(1);
}
- if (opts.noinstr) {
- ERROR("--noinstr requires --link");
- return false;
+ for (int i = 0; i < argc; i++) {
+ orig_argv[i] = strdup(argv[i]);
+ if (!orig_argv[i]) {
+ ERROR_GLIBC("strdup(%s)", argv[i]);
+ exit(1);
+ }
+ };
+}
+
+void print_args(void)
+{
+ char *backup = NULL;
+
+ if (opts.output || opts.dryrun)
+ goto print;
+
+ /*
+ * Make a backup before kbuild deletes the file so the error
+ * can be recreated without recompiling or relinking.
+ */
+ backup = malloc(strlen(objname) + strlen(ORIG_SUFFIX) + 1);
+ if (!backup) {
+ ERROR_GLIBC("malloc");
+ goto print;
}
- if (opts.ibt) {
- ERROR("--ibt requires --link");
- return false;
+ strcpy(backup, objname);
+ strcat(backup, ORIG_SUFFIX);
+ if (copy_file(objname, backup)) {
+ backup = NULL;
+ goto print;
}
- if (opts.unret) {
- ERROR("--unret requires --link");
- return false;
+print:
+ /*
+ * Print the cmdline args to make it easier to recreate. If '--output'
+ * wasn't used, add it to the printed args with the backup as input.
+ */
+ fprintf(stderr, "%s", orig_argv[0]);
+
+ for (int i = 1; i < orig_argc; i++) {
+ char *arg = orig_argv[i];
+
+ if (backup && !strcmp(arg, objname))
+ fprintf(stderr, " %s -o %s", backup, objname);
+ else
+ fprintf(stderr, " %s", arg);
}
- return true;
+ fprintf(stderr, "\n");
}
int objtool_run(int argc, const char **argv)
{
- const char *objname;
struct objtool_file *file;
- int ret;
+ int ret = 0;
- argc = cmd_parse_options(argc, argv, check_usage);
- objname = argv[0];
+ orig_argc = argc;
+ save_argv(argc, argv);
+
+ cmd_parse_options(argc, argv, check_usage);
if (!opts_valid())
return 1;
+ objname = argv[0];
+
if (opts.dump_orc)
return orc_dump(objname);
+ if (!opts.dryrun && opts.output) {
+ /* copy original .o file to output file */
+ if (copy_file(objname, opts.output))
+ return 1;
+
+ /* from here on, work directly on the output file */
+ objname = opts.output;
+ }
+
file = objtool_open_read(objname);
if (!file)
return 1;
- if (!mnop_opts_valid())
- return 1;
-
- if (!link_opts_valid(file))
+ if (!opts.link && has_multiple_files(file->elf)) {
+ ERROR("Linked object requires --link");
return 1;
+ }
ret = check(file);
if (ret)
return ret;
- if (file->elf->changed)
- return elf_write(file->elf);
+ if (!opts.dryrun && file->elf->changed && elf_write(file->elf))
+ return 1;
return 0;
}
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 548ec3cd7c00..a5770570b106 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -20,11 +20,11 @@
#include <linux/hashtable.h>
#include <linux/kernel.h>
#include <linux/static_call_types.h>
+#include <linux/string.h>
struct alternative {
struct alternative *next;
struct instruction *insn;
- bool skip_orig;
};
static unsigned long nr_cfi, nr_cfi_reused, nr_cfi_cache;
@@ -149,6 +149,15 @@ static inline struct reloc *insn_jump_table(struct instruction *insn)
return NULL;
}
+static inline unsigned long insn_jump_table_size(struct instruction *insn)
+{
+ if (insn->type == INSN_JUMP_DYNAMIC ||
+ insn->type == INSN_CALL_DYNAMIC)
+ return insn->_jump_table_size;
+
+ return 0;
+}
+
static bool is_jump_table_jump(struct instruction *insn)
{
struct alt_group *alt_group = insn->alt_group;
@@ -177,6 +186,57 @@ static bool is_sibling_call(struct instruction *insn)
}
/*
+ * Checks if a string ends with another.
+ */
+static bool str_ends_with(const char *s, const char *sub)
+{
+ const int slen = strlen(s);
+ const int sublen = strlen(sub);
+
+ if (sublen > slen)
+ return 0;
+
+ return !memcmp(s + slen - sublen, sub, sublen);
+}
+
+/*
+ * Checks if a function is a Rust "noreturn" one.
+ */
+static bool is_rust_noreturn(const struct symbol *func)
+{
+ /*
+ * If it does not start with "_R", then it is not a Rust symbol.
+ */
+ if (strncmp(func->name, "_R", 2))
+ return false;
+
+ /*
+ * These are just heuristics -- we do not control the precise symbol
+ * name, due to the crate disambiguators (which depend on the compiler)
+ * as well as changes to the source code itself between versions (since
+ * these come from the Rust standard library).
+ */
+ return str_ends_with(func->name, "_4core5sliceSp15copy_from_slice17len_mismatch_fail") ||
+ str_ends_with(func->name, "_4core6option13unwrap_failed") ||
+ str_ends_with(func->name, "_4core6result13unwrap_failed") ||
+ str_ends_with(func->name, "_4core9panicking5panic") ||
+ str_ends_with(func->name, "_4core9panicking9panic_fmt") ||
+ str_ends_with(func->name, "_4core9panicking14panic_explicit") ||
+ str_ends_with(func->name, "_4core9panicking14panic_nounwind") ||
+ str_ends_with(func->name, "_4core9panicking18panic_bounds_check") ||
+ str_ends_with(func->name, "_4core9panicking18panic_nounwind_fmt") ||
+ str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||
+ str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference") ||
+ str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
+ str_ends_with(func->name, "_7___rustc17rust_begin_unwind") ||
+ strstr(func->name, "_4core9panicking13assert_failed") ||
+ strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||
+ (strstr(func->name, "_4core5slice5index") &&
+ strstr(func->name, "slice_") &&
+ str_ends_with(func->name, "_fail"));
+}
+
+/*
* This checks to see if the given function is a "noreturn" function.
*
* For global functions which are outside the scope of this object file, we
@@ -201,10 +261,14 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
if (!func)
return false;
- if (func->bind == STB_GLOBAL || func->bind == STB_WEAK)
+ if (func->bind == STB_GLOBAL || func->bind == STB_WEAK) {
+ if (is_rust_noreturn(func))
+ return true;
+
for (i = 0; i < ARRAY_SIZE(global_noreturns); i++)
if (!strcmp(func->name, global_noreturns[i]))
return true;
+ }
if (func->bind == STB_WEAK)
return false;
@@ -280,12 +344,7 @@ static void init_insn_state(struct objtool_file *file, struct insn_state *state,
memset(state, 0, sizeof(*state));
init_cfi_state(&state->cfi);
- /*
- * We need the full vmlinux for noinstr validation, otherwise we can
- * not correctly determine insn_call_dest(insn)->sec (external symbols
- * do not have a section).
- */
- if (opts.link && opts.noinstr && sec)
+ if (opts.noinstr && sec)
state->noinstr = sec->noinstr;
}
@@ -293,7 +352,7 @@ static struct cfi_state *cfi_alloc(void)
{
struct cfi_state *cfi = calloc(1, sizeof(struct cfi_state));
if (!cfi) {
- WARN("calloc failed");
+ ERROR_GLIBC("calloc");
exit(1);
}
nr_cfi++;
@@ -349,7 +408,7 @@ static void *cfi_hash_alloc(unsigned long size)
PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANON, -1, 0);
if (cfi_hash == (void *)-1L) {
- WARN("mmap fail cfi_hash");
+ ERROR_GLIBC("mmap fail cfi_hash");
cfi_hash = NULL;
} else if (opts.stats) {
printf("cfi_bits: %d\n", cfi_bits);
@@ -405,7 +464,7 @@ static int decode_instructions(struct objtool_file *file)
if (!insns || idx == INSN_CHUNK_MAX) {
insns = calloc(sizeof(*insn), INSN_CHUNK_SIZE);
if (!insns) {
- WARN("malloc failed");
+ ERROR_GLIBC("calloc");
return -1;
}
idx = 0;
@@ -440,8 +499,6 @@ static int decode_instructions(struct objtool_file *file)
nr_insns++;
}
-// printf("%s: last chunk used: %d\n", sec->name, (int)idx);
-
sec_for_each_sym(sec, func) {
if (func->type != STT_NOTYPE && func->type != STT_FUNC)
continue;
@@ -450,8 +507,7 @@ static int decode_instructions(struct objtool_file *file)
/* Heuristic: likely an "end" symbol */
if (func->type == STT_NOTYPE)
continue;
- WARN("%s(): STT_FUNC at end of section",
- func->name);
+ ERROR("%s(): STT_FUNC at end of section", func->name);
return -1;
}
@@ -459,8 +515,7 @@ static int decode_instructions(struct objtool_file *file)
continue;
if (!find_insn(file, sec, func->offset)) {
- WARN("%s(): can't find starting instruction",
- func->name);
+ ERROR("%s(): can't find starting instruction", func->name);
return -1;
}
@@ -507,14 +562,20 @@ static int add_pv_ops(struct objtool_file *file, const char *symname)
if (!reloc)
break;
+ idx = (reloc_offset(reloc) - sym->offset) / sizeof(unsigned long);
+
func = reloc->sym;
if (func->type == STT_SECTION)
func = find_symbol_by_offset(reloc->sym->sec,
reloc_addend(reloc));
+ if (!func) {
+ ERROR_FUNC(reloc->sym->sec, reloc_addend(reloc),
+ "can't find func at %s[%d]", symname, idx);
+ return -1;
+ }
- idx = (reloc_offset(reloc) - sym->offset) / sizeof(unsigned long);
-
- objtool_pv_add(file, idx, func);
+ if (objtool_pv_add(file, idx, func))
+ return -1;
off = reloc_offset(reloc) + 1;
if (off > end)
@@ -538,7 +599,7 @@ static int init_pv_ops(struct objtool_file *file)
};
const char *pv_ops;
struct symbol *sym;
- int idx, nr;
+ int idx, nr, ret;
if (!opts.noinstr)
return 0;
@@ -551,113 +612,18 @@ static int init_pv_ops(struct objtool_file *file)
nr = sym->len / sizeof(unsigned long);
file->pv_ops = calloc(sizeof(struct pv_state), nr);
- if (!file->pv_ops)
+ if (!file->pv_ops) {
+ ERROR_GLIBC("calloc");
return -1;
+ }
for (idx = 0; idx < nr; idx++)
INIT_LIST_HEAD(&file->pv_ops[idx].targets);
- for (idx = 0; (pv_ops = pv_ops_tables[idx]); idx++)
- add_pv_ops(file, pv_ops);
-
- return 0;
-}
-
-static struct instruction *find_last_insn(struct objtool_file *file,
- struct section *sec)
-{
- struct instruction *insn = NULL;
- unsigned int offset;
- unsigned int end = (sec->sh.sh_size > 10) ? sec->sh.sh_size - 10 : 0;
-
- for (offset = sec->sh.sh_size - 1; offset >= end && !insn; offset--)
- insn = find_insn(file, sec, offset);
-
- return insn;
-}
-
-/*
- * Mark "ud2" instructions and manually annotated dead ends.
- */
-static int add_dead_ends(struct objtool_file *file)
-{
- struct section *rsec;
- struct reloc *reloc;
- struct instruction *insn;
- s64 addend;
-
- /*
- * Check for manually annotated dead ends.
- */
- rsec = find_section_by_name(file->elf, ".rela.discard.unreachable");
- if (!rsec)
- goto reachable;
-
- for_each_reloc(rsec, reloc) {
-
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s", rsec->name);
- return -1;
- }
-
- addend = reloc_addend(reloc);
-
- insn = find_insn(file, reloc->sym->sec, addend);
- if (insn)
- insn = prev_insn_same_sec(file, insn);
- else if (addend == reloc->sym->sec->sh.sh_size) {
- insn = find_last_insn(file, reloc->sym->sec);
- if (!insn) {
- WARN("can't find unreachable insn at %s+0x%" PRIx64,
- reloc->sym->sec->name, addend);
- return -1;
- }
- } else {
- WARN("can't find unreachable insn at %s+0x%" PRIx64,
- reloc->sym->sec->name, addend);
- return -1;
- }
-
- insn->dead_end = true;
- }
-
-reachable:
- /*
- * These manually annotated reachable checks are needed for GCC 4.4,
- * where the Linux unreachable() macro isn't supported. In that case
- * GCC doesn't know the "ud2" is fatal, so it generates code as if it's
- * not a dead end.
- */
- rsec = find_section_by_name(file->elf, ".rela.discard.reachable");
- if (!rsec)
- return 0;
-
- for_each_reloc(rsec, reloc) {
-
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s", rsec->name);
- return -1;
- }
-
- addend = reloc_addend(reloc);
-
- insn = find_insn(file, reloc->sym->sec, addend);
- if (insn)
- insn = prev_insn_same_sec(file, insn);
- else if (addend == reloc->sym->sec->sh.sh_size) {
- insn = find_last_insn(file, reloc->sym->sec);
- if (!insn) {
- WARN("can't find reachable insn at %s+0x%" PRIx64,
- reloc->sym->sec->name, addend);
- return -1;
- }
- } else {
- WARN("can't find reachable insn at %s+0x%" PRIx64,
- reloc->sym->sec->name, addend);
- return -1;
- }
-
- insn->dead_end = false;
+ for (idx = 0; (pv_ops = pv_ops_tables[idx]); idx++) {
+ ret = add_pv_ops(file, pv_ops);
+ if (ret)
+ return ret;
}
return 0;
@@ -706,13 +672,12 @@ static int create_static_call_sections(struct objtool_file *file)
/* find key symbol */
key_name = strdup(insn_call_dest(insn)->name);
if (!key_name) {
- perror("strdup");
+ ERROR_GLIBC("strdup");
return -1;
}
if (strncmp(key_name, STATIC_CALL_TRAMP_PREFIX_STR,
STATIC_CALL_TRAMP_PREFIX_LEN)) {
- WARN("static_call: trampoline name malformed: %s", key_name);
- free(key_name);
+ ERROR("static_call: trampoline name malformed: %s", key_name);
return -1;
}
tmp = key_name + STATIC_CALL_TRAMP_PREFIX_LEN - STATIC_CALL_KEY_PREFIX_LEN;
@@ -721,8 +686,7 @@ static int create_static_call_sections(struct objtool_file *file)
key_sym = find_symbol_by_name(file->elf, tmp);
if (!key_sym) {
if (!opts.module) {
- WARN("static_call: can't find static_call_key symbol: %s", tmp);
- free(key_name);
+ ERROR("static_call: can't find static_call_key symbol: %s", tmp);
return -1;
}
@@ -737,7 +701,6 @@ static int create_static_call_sections(struct objtool_file *file)
*/
key_sym = insn_call_dest(insn);
}
- free(key_name);
/* populate reloc for 'key' */
if (!elf_init_reloc_data_sym(file->elf, sec,
@@ -868,8 +831,11 @@ static int create_ibt_endbr_seal_sections(struct objtool_file *file)
if (opts.module && sym && sym->type == STT_FUNC &&
insn->offset == sym->offset &&
(!strcmp(sym->name, "init_module") ||
- !strcmp(sym->name, "cleanup_module")))
- WARN("%s(): not an indirect call target", sym->name);
+ !strcmp(sym->name, "cleanup_module"))) {
+ ERROR("%s(): Magic init_module() function name is deprecated, use module_init(fn) instead",
+ sym->name);
+ return -1;
+ }
if (!elf_init_reloc_text_sym(file->elf, sec,
idx * sizeof(int), idx,
@@ -1018,16 +984,15 @@ static int create_direct_call_sections(struct objtool_file *file)
/*
* Warnings shouldn't be reported for ignored functions.
*/
-static void add_ignores(struct objtool_file *file)
+static int add_ignores(struct objtool_file *file)
{
- struct instruction *insn;
struct section *rsec;
struct symbol *func;
struct reloc *reloc;
rsec = find_section_by_name(file->elf, ".rela.discard.func_stack_frame_non_standard");
if (!rsec)
- return;
+ return 0;
for_each_reloc(rsec, reloc) {
switch (reloc->sym->type) {
@@ -1042,14 +1007,17 @@ static void add_ignores(struct objtool_file *file)
break;
default:
- WARN("unexpected relocation symbol type in %s: %d",
- rsec->name, reloc->sym->type);
- continue;
+ ERROR("unexpected relocation symbol type in %s: %d",
+ rsec->name, reloc->sym->type);
+ return -1;
}
- func_for_each_insn(file, func, insn)
- insn->ignore = true;
+ func->ignore = true;
+ if (func->cfunc)
+ func->cfunc->ignore = true;
}
+
+ return 0;
}
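
For reference, .discard.func_stack_frame_non_standard is populated on the kernel side by STACK_FRAME_NON_STANDARD() from include/linux/objtool.h; a typical (hypothetical) user looks like:

	#include <linux/objtool.h>

	static void my_asm_thunk(void)
	{
		/* hand-rolled frame that objtool should not validate */
	}
	STACK_FRAME_NON_STANDARD(my_asm_thunk);

With this change the whole symbol (and its .cold clone) is skipped via func->ignore rather than by marking every instruction.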
/*
@@ -1199,6 +1167,8 @@ static const char *uaccess_safe_builtin[] = {
"__sanitizer_cov_trace_switch",
/* KMSAN */
"kmsan_copy_to_user",
+ "kmsan_disable_current",
+ "kmsan_enable_current",
"kmsan_report",
"kmsan_unpoison_entry_regs",
"kmsan_unpoison_memory",
@@ -1223,14 +1193,17 @@ static const char *uaccess_safe_builtin[] = {
"__ubsan_handle_type_mismatch_v1",
"__ubsan_handle_shift_out_of_bounds",
"__ubsan_handle_load_invalid_value",
- /* STACKLEAK */
- "stackleak_track_stack",
+ /* KSTACK_ERASE */
+ "__sanitizer_cov_stack_depth",
+ /* TRACE_BRANCH_PROFILING */
+ "ftrace_likely_update",
+ /* STACKPROTECTOR */
+ "__stack_chk_fail",
/* misc */
"csum_partial_copy_generic",
"copy_mc_fragile",
"copy_mc_fragile_handle_tail",
"copy_mc_enhanced_fast_string",
- "ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
"rep_stos_alternative",
"rep_movs_alternative",
"__copy_user_nocache",
@@ -1255,40 +1228,6 @@ static void add_uaccess_safe(struct objtool_file *file)
}
/*
- * FIXME: For now, just ignore any alternatives which add retpolines. This is
- * a temporary hack, as it doesn't allow ORC to unwind from inside a retpoline.
- * But it at least allows objtool to understand the control flow *around* the
- * retpoline.
- */
-static int add_ignore_alternatives(struct objtool_file *file)
-{
- struct section *rsec;
- struct reloc *reloc;
- struct instruction *insn;
-
- rsec = find_section_by_name(file->elf, ".rela.discard.ignore_alts");
- if (!rsec)
- return 0;
-
- for_each_reloc(rsec, reloc) {
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s", rsec->name);
- return -1;
- }
-
- insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
- if (!insn) {
- WARN("bad .discard.ignore_alts entry");
- return -1;
- }
-
- insn->ignore_alts = true;
- }
-
- return 0;
-}
-
-/*
* Symbols that replace INSN_CALL_DYNAMIC, every (tail) call to such a symbol
* will be added to the .retpoline_sites section.
*/
@@ -1346,7 +1285,7 @@ static void remove_insn_ops(struct instruction *insn)
insn->stack_ops = NULL;
}
-static void annotate_call_site(struct objtool_file *file,
+static int annotate_call_site(struct objtool_file *file,
struct instruction *insn, bool sibling)
{
struct reloc *reloc = insn_reloc(file, insn);
@@ -1355,23 +1294,14 @@ static void annotate_call_site(struct objtool_file *file,
if (!sym)
sym = reloc->sym;
- /*
- * Alternative replacement code is just template code which is
- * sometimes copied to the original instruction. For now, don't
- * annotate it. (In the future we might consider annotating the
- * original instruction if/when it ever makes sense to do so.)
- */
- if (!strcmp(insn->sec->name, ".altinstr_replacement"))
- return;
-
if (sym->static_call_tramp) {
list_add_tail(&insn->call_node, &file->static_call_list);
- return;
+ return 0;
}
if (sym->retpoline_thunk) {
list_add_tail(&insn->call_node, &file->retpoline_call_list);
- return;
+ return 0;
}
/*
@@ -1383,10 +1313,12 @@ static void annotate_call_site(struct objtool_file *file,
if (reloc)
set_reloc_type(file->elf, reloc, R_NONE);
- elf_write_insn(file->elf, insn->sec,
- insn->offset, insn->len,
- sibling ? arch_ret_insn(insn->len)
- : arch_nop_insn(insn->len));
+ if (elf_write_insn(file->elf, insn->sec,
+ insn->offset, insn->len,
+ sibling ? arch_ret_insn(insn->len)
+ : arch_nop_insn(insn->len))) {
+ return -1;
+ }
insn->type = sibling ? INSN_RETURN : INSN_NOP;
@@ -1400,7 +1332,7 @@ static void annotate_call_site(struct objtool_file *file,
insn->retpoline_safe = true;
}
- return;
+ return 0;
}
if (opts.mcount && sym->fentry) {
@@ -1410,30 +1342,35 @@ static void annotate_call_site(struct objtool_file *file,
if (reloc)
set_reloc_type(file->elf, reloc, R_NONE);
- elf_write_insn(file->elf, insn->sec,
- insn->offset, insn->len,
- arch_nop_insn(insn->len));
+ if (elf_write_insn(file->elf, insn->sec,
+ insn->offset, insn->len,
+ arch_nop_insn(insn->len))) {
+ return -1;
+ }
insn->type = INSN_NOP;
}
list_add_tail(&insn->call_node, &file->mcount_loc_list);
- return;
+ return 0;
}
- if (insn->type == INSN_CALL && !insn->sec->init)
+ if (insn->type == INSN_CALL && !insn->sec->init &&
+ !insn->_call_dest->embedded_insn)
list_add_tail(&insn->call_node, &file->call_list);
if (!sibling && dead_end_function(file, sym))
insn->dead_end = true;
+
+ return 0;
}
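
For orientation, on x86-64 the --mcount handling above rewrites the profiling call site in place (other architectures supply their own arch_nop_insn()):

	/*
	 * compiler output:	e8 xx xx xx xx	call __fentry__   (R_X86_64_PLT32)
	 * after objtool:	0f 1f 44 00 00	nopl 0x0(%rax,%rax,1), reloc set to R_NONE
	 */

and the site's location is queued on file->mcount_loc_list so it can later be emitted into __mcount_loc.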
-static void add_call_dest(struct objtool_file *file, struct instruction *insn,
+static int add_call_dest(struct objtool_file *file, struct instruction *insn,
struct symbol *dest, bool sibling)
{
insn->_call_dest = dest;
if (!dest)
- return;
+ return 0;
/*
* Whatever stack impact regular CALLs have, should be undone
@@ -1444,10 +1381,10 @@ static void add_call_dest(struct objtool_file *file, struct instruction *insn,
*/
remove_insn_ops(insn);
- annotate_call_site(file, insn, sibling);
+ return annotate_call_site(file, insn, sibling);
}
-static void add_retpoline_call(struct objtool_file *file, struct instruction *insn)
+static int add_retpoline_call(struct objtool_file *file, struct instruction *insn)
{
/*
* Retpoline calls/jumps are really dynamic calls/jumps in disguise,
@@ -1464,7 +1401,7 @@ static void add_retpoline_call(struct objtool_file *file, struct instruction *in
insn->type = INSN_JUMP_DYNAMIC_CONDITIONAL;
break;
default:
- return;
+ return 0;
}
insn->retpoline_safe = true;
@@ -1478,7 +1415,7 @@ static void add_retpoline_call(struct objtool_file *file, struct instruction *in
*/
remove_insn_ops(insn);
- annotate_call_site(file, insn, false);
+ return annotate_call_site(file, insn, false);
}
static void add_return_call(struct objtool_file *file, struct instruction *insn, bool add)
@@ -1547,8 +1484,11 @@ static int add_jump_destinations(struct objtool_file *file)
struct reloc *reloc;
struct section *dest_sec;
unsigned long dest_off;
+ int ret;
for_each_insn(file, insn) {
+ struct symbol *func = insn_func(insn);
+
if (insn->jump_dest) {
/*
* handle_group_alt() may have previously set
@@ -1567,17 +1507,21 @@ static int add_jump_destinations(struct objtool_file *file)
dest_sec = reloc->sym->sec;
dest_off = arch_dest_reloc_offset(reloc_addend(reloc));
} else if (reloc->sym->retpoline_thunk) {
- add_retpoline_call(file, insn);
+ ret = add_retpoline_call(file, insn);
+ if (ret)
+ return ret;
continue;
} else if (reloc->sym->return_thunk) {
add_return_call(file, insn, true);
continue;
- } else if (insn_func(insn)) {
+ } else if (func) {
/*
* External sibling call or internal sibling call with
* STT_FUNC reloc.
*/
- add_call_dest(file, insn, reloc->sym, true);
+ ret = add_call_dest(file, insn, reloc->sym, true);
+ if (ret)
+ return ret;
continue;
} else if (reloc->sym->sec->idx) {
dest_sec = reloc->sym->sec;
@@ -1605,8 +1549,17 @@ static int add_jump_destinations(struct objtool_file *file)
continue;
}
- WARN_INSN(insn, "can't find jump dest instruction at %s+0x%lx",
- dest_sec->name, dest_off);
+ /*
+ * GCOV/KCOV dead code can jump to the end of the
+ * function/section.
+ */
+ if (file->ignore_unreachables && func &&
+ dest_sec == insn->sec &&
+ dest_off == func->offset + func->len)
+ continue;
+
+ ERROR_INSN(insn, "can't find jump dest instruction at %s+0x%lx",
+ dest_sec->name, dest_off);
return -1;
}
@@ -1617,7 +1570,9 @@ static int add_jump_destinations(struct objtool_file *file)
*/
if (jump_dest->sym && jump_dest->offset == jump_dest->sym->offset) {
if (jump_dest->sym->retpoline_thunk) {
- add_retpoline_call(file, insn);
+ ret = add_retpoline_call(file, insn);
+ if (ret)
+ return ret;
continue;
}
if (jump_dest->sym->return_thunk) {
@@ -1629,8 +1584,7 @@ static int add_jump_destinations(struct objtool_file *file)
/*
* Cross-function jump.
*/
- if (insn_func(insn) && insn_func(jump_dest) &&
- insn_func(insn) != insn_func(jump_dest)) {
+ if (func && insn_func(jump_dest) && func != insn_func(jump_dest)) {
/*
* For GCC 8+, create parent/child links for any cold
@@ -1647,10 +1601,10 @@ static int add_jump_destinations(struct objtool_file *file)
* case where the parent function's only reference to a
* subfunction is through a jump table.
*/
- if (!strstr(insn_func(insn)->name, ".cold") &&
+ if (!strstr(func->name, ".cold") &&
strstr(insn_func(jump_dest)->name, ".cold")) {
- insn_func(insn)->cfunc = insn_func(jump_dest);
- insn_func(jump_dest)->pfunc = insn_func(insn);
+ func->cfunc = insn_func(jump_dest);
+ insn_func(jump_dest)->pfunc = func;
}
}
@@ -1659,7 +1613,9 @@ static int add_jump_destinations(struct objtool_file *file)
* Internal sibling call without reloc or with
* STT_SECTION reloc.
*/
- add_call_dest(file, insn, insn_func(jump_dest), true);
+ ret = add_call_dest(file, insn, insn_func(jump_dest), true);
+ if (ret)
+ return ret;
continue;
}
@@ -1689,8 +1645,10 @@ static int add_call_destinations(struct objtool_file *file)
unsigned long dest_off;
struct symbol *dest;
struct reloc *reloc;
+ int ret;
for_each_insn(file, insn) {
+ struct symbol *func = insn_func(insn);
if (insn->type != INSN_CALL)
continue;
@@ -1699,18 +1657,20 @@ static int add_call_destinations(struct objtool_file *file)
dest_off = arch_jump_destination(insn);
dest = find_call_destination(insn->sec, dest_off);
- add_call_dest(file, insn, dest, false);
+ ret = add_call_dest(file, insn, dest, false);
+ if (ret)
+ return ret;
- if (insn->ignore)
+ if (func && func->ignore)
continue;
if (!insn_call_dest(insn)) {
- WARN_INSN(insn, "unannotated intra-function call");
+ ERROR_INSN(insn, "unannotated intra-function call");
return -1;
}
- if (insn_func(insn) && insn_call_dest(insn)->type != STT_FUNC) {
- WARN_INSN(insn, "unsupported call to non-function");
+ if (func && insn_call_dest(insn)->type != STT_FUNC) {
+ ERROR_INSN(insn, "unsupported call to non-function");
return -1;
}
@@ -1718,18 +1678,25 @@ static int add_call_destinations(struct objtool_file *file)
dest_off = arch_dest_reloc_offset(reloc_addend(reloc));
dest = find_call_destination(reloc->sym->sec, dest_off);
if (!dest) {
- WARN_INSN(insn, "can't find call dest symbol at %s+0x%lx",
- reloc->sym->sec->name, dest_off);
+ ERROR_INSN(insn, "can't find call dest symbol at %s+0x%lx",
+ reloc->sym->sec->name, dest_off);
return -1;
}
- add_call_dest(file, insn, dest, false);
+ ret = add_call_dest(file, insn, dest, false);
+ if (ret)
+ return ret;
} else if (reloc->sym->retpoline_thunk) {
- add_retpoline_call(file, insn);
+ ret = add_retpoline_call(file, insn);
+ if (ret)
+ return ret;
- } else
- add_call_dest(file, insn, reloc->sym, false);
+ } else {
+ ret = add_call_dest(file, insn, reloc->sym, false);
+ if (ret)
+ return ret;
+ }
}
return 0;
@@ -1752,15 +1719,15 @@ static int handle_group_alt(struct objtool_file *file,
if (!orig_alt_group) {
struct instruction *last_orig_insn = NULL;
- orig_alt_group = malloc(sizeof(*orig_alt_group));
+ orig_alt_group = calloc(1, sizeof(*orig_alt_group));
if (!orig_alt_group) {
- WARN("malloc failed");
+ ERROR_GLIBC("calloc");
return -1;
}
orig_alt_group->cfi = calloc(special_alt->orig_len,
sizeof(struct cfi_state *));
if (!orig_alt_group->cfi) {
- WARN("calloc failed");
+ ERROR_GLIBC("calloc");
return -1;
}
@@ -1776,21 +1743,22 @@ static int handle_group_alt(struct objtool_file *file,
orig_alt_group->first_insn = orig_insn;
orig_alt_group->last_insn = last_orig_insn;
orig_alt_group->nop = NULL;
+ orig_alt_group->ignore = orig_insn->ignore_alts;
} else {
if (orig_alt_group->last_insn->offset + orig_alt_group->last_insn->len -
orig_alt_group->first_insn->offset != special_alt->orig_len) {
- WARN_INSN(orig_insn, "weirdly overlapping alternative! %ld != %d",
- orig_alt_group->last_insn->offset +
- orig_alt_group->last_insn->len -
- orig_alt_group->first_insn->offset,
- special_alt->orig_len);
+ ERROR_INSN(orig_insn, "weirdly overlapping alternative! %ld != %d",
+ orig_alt_group->last_insn->offset +
+ orig_alt_group->last_insn->len -
+ orig_alt_group->first_insn->offset,
+ special_alt->orig_len);
return -1;
}
}
- new_alt_group = malloc(sizeof(*new_alt_group));
+ new_alt_group = calloc(1, sizeof(*new_alt_group));
if (!new_alt_group) {
- WARN("malloc failed");
+ ERROR_GLIBC("calloc");
return -1;
}
@@ -1802,9 +1770,9 @@ static int handle_group_alt(struct objtool_file *file,
* instruction affects the stack, the instruction after it (the
* nop) will propagate the new state to the shared CFI array.
*/
- nop = malloc(sizeof(*nop));
+ nop = calloc(1, sizeof(*nop));
if (!nop) {
- WARN("malloc failed");
+ ERROR_GLIBC("calloc");
return -1;
}
memset(nop, 0, sizeof(*nop));
@@ -1815,7 +1783,6 @@ static int handle_group_alt(struct objtool_file *file,
nop->type = INSN_NOP;
nop->sym = orig_insn->sym;
nop->alt_group = new_alt_group;
- nop->ignore = orig_insn->ignore_alts;
}
if (!special_alt->new_len) {
@@ -1832,7 +1799,6 @@ static int handle_group_alt(struct objtool_file *file,
last_new_insn = insn;
- insn->ignore = orig_insn->ignore_alts;
insn->sym = orig_insn->sym;
insn->alt_group = new_alt_group;
@@ -1848,7 +1814,7 @@ static int handle_group_alt(struct objtool_file *file,
if (alt_reloc && arch_pc_relative_reloc(alt_reloc) &&
!arch_support_alt_relocation(special_alt, insn, alt_reloc)) {
- WARN_INSN(insn, "unsupported relocation in alternatives section");
+ ERROR_INSN(insn, "unsupported relocation in alternatives section");
return -1;
}
@@ -1862,15 +1828,15 @@ static int handle_group_alt(struct objtool_file *file,
if (dest_off == special_alt->new_off + special_alt->new_len) {
insn->jump_dest = next_insn_same_sec(file, orig_alt_group->last_insn);
if (!insn->jump_dest) {
- WARN_INSN(insn, "can't find alternative jump destination");
+ ERROR_INSN(insn, "can't find alternative jump destination");
return -1;
}
}
}
if (!last_new_insn) {
- WARN_FUNC("can't find last new alternative instruction",
- special_alt->new_sec, special_alt->new_off);
+ ERROR_FUNC(special_alt->new_sec, special_alt->new_off,
+ "can't find last new alternative instruction");
return -1;
}
@@ -1879,6 +1845,7 @@ end:
new_alt_group->first_insn = *new_insn;
new_alt_group->last_insn = last_new_insn;
new_alt_group->nop = nop;
+ new_alt_group->ignore = (*new_insn)->ignore_alts;
new_alt_group->cfi = orig_alt_group->cfi;
return 0;
}
@@ -1896,7 +1863,7 @@ static int handle_jump_alt(struct objtool_file *file,
if (orig_insn->type != INSN_JUMP_UNCONDITIONAL &&
orig_insn->type != INSN_NOP) {
- WARN_INSN(orig_insn, "unsupported instruction at jump label");
+ ERROR_INSN(orig_insn, "unsupported instruction at jump label");
return -1;
}
@@ -1905,9 +1872,13 @@ static int handle_jump_alt(struct objtool_file *file,
if (reloc)
set_reloc_type(file->elf, reloc, R_NONE);
- elf_write_insn(file->elf, orig_insn->sec,
- orig_insn->offset, orig_insn->len,
- arch_nop_insn(orig_insn->len));
+
+ if (elf_write_insn(file->elf, orig_insn->sec,
+ orig_insn->offset, orig_insn->len,
+ arch_nop_insn(orig_insn->len))) {
+ return -1;
+ }
+
orig_insn->type = INSN_NOP;
}
@@ -1943,19 +1914,17 @@ static int add_special_section_alts(struct objtool_file *file)
struct alternative *alt;
int ret;
- ret = special_get_alts(file->elf, &special_alts);
- if (ret)
- return ret;
+ if (special_get_alts(file->elf, &special_alts))
+ return -1;
list_for_each_entry_safe(special_alt, tmp, &special_alts, list) {
orig_insn = find_insn(file, special_alt->orig_sec,
special_alt->orig_off);
if (!orig_insn) {
- WARN_FUNC("special: can't find orig instruction",
- special_alt->orig_sec, special_alt->orig_off);
- ret = -1;
- goto out;
+ ERROR_FUNC(special_alt->orig_sec, special_alt->orig_off,
+ "special: can't find orig instruction");
+ return -1;
}
new_insn = NULL;
@@ -1963,41 +1932,37 @@ static int add_special_section_alts(struct objtool_file *file)
new_insn = find_insn(file, special_alt->new_sec,
special_alt->new_off);
if (!new_insn) {
- WARN_FUNC("special: can't find new instruction",
- special_alt->new_sec,
- special_alt->new_off);
- ret = -1;
- goto out;
+ ERROR_FUNC(special_alt->new_sec, special_alt->new_off,
+ "special: can't find new instruction");
+ return -1;
}
}
if (special_alt->group) {
if (!special_alt->orig_len) {
- WARN_INSN(orig_insn, "empty alternative entry");
+ ERROR_INSN(orig_insn, "empty alternative entry");
continue;
}
ret = handle_group_alt(file, special_alt, orig_insn,
&new_insn);
if (ret)
- goto out;
+ return ret;
+
} else if (special_alt->jump_or_nop) {
ret = handle_jump_alt(file, special_alt, orig_insn,
&new_insn);
if (ret)
- goto out;
+ return ret;
}
- alt = malloc(sizeof(*alt));
+ alt = calloc(1, sizeof(*alt));
if (!alt) {
- WARN("malloc failed");
- ret = -1;
- goto out;
+ ERROR_GLIBC("calloc");
+ return -1;
}
alt->insn = new_insn;
- alt->skip_orig = special_alt->skip_orig;
- orig_insn->ignore_alts |= special_alt->skip_alt;
alt->next = orig_insn->alts;
orig_insn->alts = alt;
@@ -2011,19 +1976,24 @@ static int add_special_section_alts(struct objtool_file *file)
printf("long:\t%ld\t%ld\n", file->jl_nop_long, file->jl_long);
}
-out:
- return ret;
+ return 0;
}
-static int add_jump_table(struct objtool_file *file, struct instruction *insn,
- struct reloc *next_table)
+__weak unsigned long arch_jump_table_sym_offset(struct reloc *reloc, struct reloc *table)
{
+ return reloc->sym->offset + reloc_addend(reloc);
+}
+
+static int add_jump_table(struct objtool_file *file, struct instruction *insn)
+{
+ unsigned long table_size = insn_jump_table_size(insn);
struct symbol *pfunc = insn_func(insn)->pfunc;
struct reloc *table = insn_jump_table(insn);
struct instruction *dest_insn;
unsigned int prev_offset = 0;
struct reloc *reloc = table;
struct alternative *alt;
+ unsigned long sym_offset;
/*
* Each @reloc is a switch table relocation which points to the target
@@ -2032,19 +2002,30 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
for_each_reloc_from(table->sec, reloc) {
/* Check for the end of the table: */
- if (reloc != table && reloc == next_table)
+ if (table_size && reloc_offset(reloc) - reloc_offset(table) >= table_size)
+ break;
+ if (reloc != table && is_jump_table(reloc))
break;
/* Make sure the table entries are consecutive: */
- if (prev_offset && reloc_offset(reloc) != prev_offset + 8)
+ if (prev_offset && reloc_offset(reloc) != prev_offset + arch_reloc_size(reloc))
break;
+ sym_offset = arch_jump_table_sym_offset(reloc, table);
+
/* Detect function pointers from contiguous objects: */
- if (reloc->sym->sec == pfunc->sec &&
- reloc_addend(reloc) == pfunc->offset)
+ if (reloc->sym->sec == pfunc->sec && sym_offset == pfunc->offset)
break;
- dest_insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
+ /*
+ * Clang sometimes leaves dangling unused jump table entries
+ * which point to the end of the function. Ignore them.
+ */
+ if (reloc->sym->sec == pfunc->sec &&
+ sym_offset == pfunc->offset + pfunc->len)
+ goto next;
+
+ dest_insn = find_insn(file, reloc->sym->sec, sym_offset);
if (!dest_insn)
break;
@@ -2052,20 +2033,21 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
if (!insn_func(dest_insn) || insn_func(dest_insn)->pfunc != pfunc)
break;
- alt = malloc(sizeof(*alt));
+ alt = calloc(1, sizeof(*alt));
if (!alt) {
- WARN("malloc failed");
+ ERROR_GLIBC("calloc");
return -1;
}
alt->insn = dest_insn;
alt->next = insn->alts;
insn->alts = alt;
+next:
prev_offset = reloc_offset(reloc);
}
if (!prev_offset) {
- WARN_INSN(insn, "can't find switch jump table");
+ ERROR_INSN(insn, "can't find switch jump table");
return -1;
}
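
The loop above walks the relocations of a compiler-generated switch table; on x86-64/GCC with absolute entries it is shaped roughly like this (offsets made up):

	/*
	 * .rodata:
	 * .Ltable:	.quad .Lcase0	-> reloc: func + 0x10
	 *		.quad .Lcase1	-> reloc: func + 0x2c
	 *		.quad .Lcase2	-> reloc: func + 0x40
	 */

Each in-range entry becomes one struct alternative on insn->alts, so validate_branch() later follows every case label; arch_reloc_size() and arch_jump_table_sym_offset() let other architectures use 4-byte or relative entries.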
@@ -2076,12 +2058,13 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
* find_jump_table() - Given a dynamic jump, find the switch jump table
* associated with it.
*/
-static struct reloc *find_jump_table(struct objtool_file *file,
- struct symbol *func,
- struct instruction *insn)
+static void find_jump_table(struct objtool_file *file, struct symbol *func,
+ struct instruction *insn)
{
struct reloc *table_reloc;
struct instruction *dest_insn, *orig_insn = insn;
+ unsigned long table_size;
+ unsigned long sym_offset;
/*
* Backward search using the @first_jump_src links, these help avoid
@@ -2100,19 +2083,24 @@ static struct reloc *find_jump_table(struct objtool_file *file,
insn->jump_dest &&
(insn->jump_dest->offset <= insn->offset ||
insn->jump_dest->offset > orig_insn->offset))
- break;
+ break;
- table_reloc = arch_find_switch_table(file, insn);
+ table_reloc = arch_find_switch_table(file, insn, &table_size);
if (!table_reloc)
continue;
- dest_insn = find_insn(file, table_reloc->sym->sec, reloc_addend(table_reloc));
+
+ sym_offset = table_reloc->sym->offset + reloc_addend(table_reloc);
+
+ dest_insn = find_insn(file, table_reloc->sym->sec, sym_offset);
if (!dest_insn || !insn_func(dest_insn) || insn_func(dest_insn)->pfunc != func)
continue;
- return table_reloc;
- }
+ set_jump_table(table_reloc);
+ orig_insn->_jump_table = table_reloc;
+ orig_insn->_jump_table_size = table_size;
- return NULL;
+ break;
+ }
}
/*
@@ -2123,7 +2111,6 @@ static void mark_func_jump_tables(struct objtool_file *file,
struct symbol *func)
{
struct instruction *insn, *last = NULL;
- struct reloc *reloc;
func_for_each_insn(file, func, insn) {
if (!last)
@@ -2146,40 +2133,26 @@ static void mark_func_jump_tables(struct objtool_file *file,
if (insn->type != INSN_JUMP_DYNAMIC)
continue;
- reloc = find_jump_table(file, func, insn);
- if (reloc)
- insn->_jump_table = reloc;
+ find_jump_table(file, func, insn);
}
}
static int add_func_jump_tables(struct objtool_file *file,
struct symbol *func)
{
- struct instruction *insn, *insn_t1 = NULL, *insn_t2;
- int ret = 0;
+ struct instruction *insn;
+ int ret;
func_for_each_insn(file, func, insn) {
if (!insn_jump_table(insn))
continue;
- if (!insn_t1) {
- insn_t1 = insn;
- continue;
- }
-
- insn_t2 = insn;
-
- ret = add_jump_table(file, insn_t1, insn_jump_table(insn_t2));
+ ret = add_jump_table(file, insn);
if (ret)
return ret;
-
- insn_t1 = insn_t2;
}
- if (insn_t1)
- ret = add_jump_table(file, insn_t1, NULL);
-
- return ret;
+ return 0;
}
/*
@@ -2224,6 +2197,7 @@ static int read_unwind_hints(struct objtool_file *file)
struct unwind_hint *hint;
struct instruction *insn;
struct reloc *reloc;
+ unsigned long offset;
int i;
sec = find_section_by_name(file->elf, ".discard.unwind_hints");
@@ -2231,12 +2205,12 @@ static int read_unwind_hints(struct objtool_file *file)
return 0;
if (!sec->rsec) {
- WARN("missing .rela.discard.unwind_hints section");
+ ERROR("missing .rela.discard.unwind_hints section");
return -1;
}
if (sec->sh.sh_size % sizeof(struct unwind_hint)) {
- WARN("struct unwind_hint size mismatch");
+ ERROR("struct unwind_hint size mismatch");
return -1;
}
@@ -2247,13 +2221,22 @@ static int read_unwind_hints(struct objtool_file *file)
reloc = find_reloc_by_dest(file->elf, sec, i * sizeof(*hint));
if (!reloc) {
- WARN("can't find reloc for unwind_hints[%d]", i);
+ ERROR("can't find reloc for unwind_hints[%d]", i);
+ return -1;
+ }
+
+ if (reloc->sym->type == STT_SECTION) {
+ offset = reloc_addend(reloc);
+ } else if (reloc->sym->local_label) {
+ offset = reloc->sym->offset;
+ } else {
+ ERROR("unexpected relocation symbol type in %s", sec->rsec->name);
return -1;
}
- insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
+ insn = find_insn(file, reloc->sym->sec, offset);
if (!insn) {
- WARN("can't find insn for unwind_hints[%d]", i);
+ ERROR("can't find insn for unwind_hints[%d]", i);
return -1;
}
@@ -2280,7 +2263,8 @@ static int read_unwind_hints(struct objtool_file *file)
if (sym && sym->bind == STB_GLOBAL) {
if (opts.ibt && insn->type != INSN_ENDBR && !insn->noendbr) {
- WARN_INSN(insn, "UNWIND_HINT_IRET_REGS without ENDBR");
+ ERROR_INSN(insn, "UNWIND_HINT_IRET_REGS without ENDBR");
+ return -1;
}
}
}
@@ -2294,7 +2278,7 @@ static int read_unwind_hints(struct objtool_file *file)
cfi = *(insn->cfi);
if (arch_decode_hint_reg(hint->sp_reg, &cfi.cfa.base)) {
- WARN_INSN(insn, "unsupported unwind_hint sp base reg %d", hint->sp_reg);
+ ERROR_INSN(insn, "unsupported unwind_hint sp base reg %d", hint->sp_reg);
return -1;
}
@@ -2308,185 +2292,161 @@ static int read_unwind_hints(struct objtool_file *file)
return 0;
}
-static int read_noendbr_hints(struct objtool_file *file)
+static int read_annotate(struct objtool_file *file,
+ int (*func)(struct objtool_file *file, int type, struct instruction *insn))
{
+ struct section *sec;
struct instruction *insn;
- struct section *rsec;
struct reloc *reloc;
+ uint64_t offset;
+ int type, ret;
- rsec = find_section_by_name(file->elf, ".rela.discard.noendbr");
- if (!rsec)
+ sec = find_section_by_name(file->elf, ".discard.annotate_insn");
+ if (!sec)
return 0;
- for_each_reloc(rsec, reloc) {
- insn = find_insn(file, reloc->sym->sec,
- reloc->sym->offset + reloc_addend(reloc));
+ if (!sec->rsec)
+ return 0;
+
+ if (sec->sh.sh_entsize != 8) {
+ static bool warned = false;
+ if (!warned && opts.verbose) {
+ WARN("%s: dodgy linker, sh_entsize != 8", sec->name);
+ warned = true;
+ }
+ sec->sh.sh_entsize = 8;
+ }
+
+ for_each_reloc(sec->rsec, reloc) {
+ type = *(u32 *)(sec->data->d_buf + (reloc_idx(reloc) * sec->sh.sh_entsize) + 4);
+ type = bswap_if_needed(file->elf, type);
+
+ offset = reloc->sym->offset + reloc_addend(reloc);
+ insn = find_insn(file, reloc->sym->sec, offset);
+
if (!insn) {
- WARN("bad .discard.noendbr entry");
+ ERROR("bad .discard.annotate_insn entry: %d of type %d", reloc_idx(reloc), type);
return -1;
}
- insn->noendbr = 1;
+ ret = func(file, type, insn);
+ if (ret < 0)
+ return ret;
}
return 0;
}
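
The parsing above implies one 8-byte record per annotation: a location reference carried by the relocation, followed by a 4-byte type read at offset +4. As a struct it would look roughly like this (the name is ours; the kernel emits these records straight from asm):

	struct annotate_insn_entry {
		s32 insn;	/* PC-relative reference, resolved via the .rela entry */
		u32 type;	/* ANNOTYPE_*, dispatched to the callback */
	};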
-static int read_retpoline_hints(struct objtool_file *file)
+static int __annotate_early(struct objtool_file *file, int type, struct instruction *insn)
{
- struct section *rsec;
- struct instruction *insn;
- struct reloc *reloc;
+ switch (type) {
- rsec = find_section_by_name(file->elf, ".rela.discard.retpoline_safe");
- if (!rsec)
- return 0;
-
- for_each_reloc(rsec, reloc) {
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s", rsec->name);
- return -1;
- }
-
- insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
- if (!insn) {
- WARN("bad .discard.retpoline_safe entry");
- return -1;
- }
+ /* Must be before add_special_section_alts() */
+ case ANNOTYPE_IGNORE_ALTS:
+ insn->ignore_alts = true;
+ break;
- if (insn->type != INSN_JUMP_DYNAMIC &&
- insn->type != INSN_CALL_DYNAMIC &&
- insn->type != INSN_RETURN &&
- insn->type != INSN_NOP) {
- WARN_INSN(insn, "retpoline_safe hint not an indirect jump/call/ret/nop");
- return -1;
- }
+ /*
+ * Must be before read_unwind_hints() since that needs insn->noendbr.
+ */
+ case ANNOTYPE_NOENDBR:
+ insn->noendbr = 1;
+ break;
- insn->retpoline_safe = true;
+ default:
+ break;
}
return 0;
}
-static int read_instr_hints(struct objtool_file *file)
+static int __annotate_ifc(struct objtool_file *file, int type, struct instruction *insn)
{
- struct section *rsec;
- struct instruction *insn;
- struct reloc *reloc;
+ unsigned long dest_off;
- rsec = find_section_by_name(file->elf, ".rela.discard.instr_end");
- if (!rsec)
+ if (type != ANNOTYPE_INTRA_FUNCTION_CALL)
return 0;
- for_each_reloc(rsec, reloc) {
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s", rsec->name);
- return -1;
- }
-
- insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
- if (!insn) {
- WARN("bad .discard.instr_end entry");
- return -1;
- }
-
- insn->instr--;
+ if (insn->type != INSN_CALL) {
+ ERROR_INSN(insn, "intra_function_call not a direct call");
+ return -1;
}
- rsec = find_section_by_name(file->elf, ".rela.discard.instr_begin");
- if (!rsec)
- return 0;
-
- for_each_reloc(rsec, reloc) {
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s", rsec->name);
- return -1;
- }
-
- insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
- if (!insn) {
- WARN("bad .discard.instr_begin entry");
- return -1;
- }
+ /*
+ * Treat intra-function CALLs as JMPs, but with a stack_op.
+ * See add_call_destinations(), which strips stack_ops from
+ * normal CALLs.
+ */
+ insn->type = INSN_JUMP_UNCONDITIONAL;
- insn->instr++;
+ dest_off = arch_jump_destination(insn);
+ insn->jump_dest = find_insn(file, insn->sec, dest_off);
+ if (!insn->jump_dest) {
+ ERROR_INSN(insn, "can't find call dest at %s+0x%lx",
+ insn->sec->name, dest_off);
+ return -1;
}
return 0;
}
-static int read_validate_unret_hints(struct objtool_file *file)
+static int __annotate_late(struct objtool_file *file, int type, struct instruction *insn)
{
- struct section *rsec;
- struct instruction *insn;
- struct reloc *reloc;
-
- rsec = find_section_by_name(file->elf, ".rela.discard.validate_unret");
- if (!rsec)
- return 0;
+ struct symbol *sym;
- for_each_reloc(rsec, reloc) {
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s", rsec->name);
- return -1;
- }
+ switch (type) {
+ case ANNOTYPE_NOENDBR:
+ /* early */
+ break;
- insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
- if (!insn) {
- WARN("bad .discard.instr_end entry");
+ case ANNOTYPE_RETPOLINE_SAFE:
+ if (insn->type != INSN_JUMP_DYNAMIC &&
+ insn->type != INSN_CALL_DYNAMIC &&
+ insn->type != INSN_RETURN &&
+ insn->type != INSN_NOP) {
+ ERROR_INSN(insn, "retpoline_safe hint not an indirect jump/call/ret/nop");
return -1;
}
- insn->unret = 1;
- }
- return 0;
-}
+ insn->retpoline_safe = true;
+ break;
+ case ANNOTYPE_INSTR_BEGIN:
+ insn->instr++;
+ break;
-static int read_intra_function_calls(struct objtool_file *file)
-{
- struct instruction *insn;
- struct section *rsec;
- struct reloc *reloc;
+ case ANNOTYPE_INSTR_END:
+ insn->instr--;
+ break;
- rsec = find_section_by_name(file->elf, ".rela.discard.intra_function_calls");
- if (!rsec)
- return 0;
+ case ANNOTYPE_UNRET_BEGIN:
+ insn->unret = 1;
+ break;
- for_each_reloc(rsec, reloc) {
- unsigned long dest_off;
+ case ANNOTYPE_IGNORE_ALTS:
+ /* early */
+ break;
- if (reloc->sym->type != STT_SECTION) {
- WARN("unexpected relocation symbol type in %s",
- rsec->name);
- return -1;
- }
+ case ANNOTYPE_INTRA_FUNCTION_CALL:
+ /* ifc */
+ break;
- insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
- if (!insn) {
- WARN("bad .discard.intra_function_call entry");
- return -1;
- }
+ case ANNOTYPE_REACHABLE:
+ insn->dead_end = false;
+ break;
- if (insn->type != INSN_CALL) {
- WARN_INSN(insn, "intra_function_call not a direct call");
+ case ANNOTYPE_NOCFI:
+ sym = insn->sym;
+ if (!sym) {
+ ERROR_INSN(insn, "dodgy NOCFI annotation");
return -1;
}
+ insn->sym->nocfi = 1;
+ break;
- /*
- * Treat intra-function CALLs as JMPs, but with a stack_op.
- * See add_call_destinations(), which strips stack_ops from
- * normal CALLs.
- */
- insn->type = INSN_JUMP_UNCONDITIONAL;
-
- dest_off = arch_jump_destination(insn);
- insn->jump_dest = find_insn(file, insn->sec, dest_off);
- if (!insn->jump_dest) {
- WARN_INSN(insn, "can't find call dest at %s+0x%lx",
- insn->sec->name, dest_off);
- return -1;
- }
+ default:
+ ERROR_INSN(insn, "Unknown annotation type: %d", type);
+ return -1;
}
return 0;
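
On the emitting side, all of these annotation types funnel through a single helper; roughly (see ASM_ANNOTATE()/ANNOTATE in include/linux/objtool.h for the real thing, section flags elided here):

	#define ASM_ANNOTATE(type)				\
		"911:\n\t"					\
		".pushsection .discard.annotate_insn\n\t"	\
		".long 911b - .\n\t"				\
		".long " __stringify(type) "\n\t"		\
		".popsection\n\t"

which is why the single read_annotate() walk can feed the early, intra-function-call and late passes above.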
@@ -2504,16 +2464,6 @@ static bool is_profiling_func(const char *name)
if (!strncmp(name, "__sanitizer_cov_", 16))
return true;
- /*
- * Some compilers currently do not remove __tsan_func_entry/exit nor
- * __tsan_atomic_signal_fence (used for barrier instrumentation) with
- * the __no_sanitize_thread attribute, remove them. Once the kernel's
- * minimum Clang version is 14.0, this can be removed.
- */
- if (!strncmp(name, "__tsan_func_", 12) ||
- !strcmp(name, "__tsan_atomic_signal_fence"))
- return true;
-
return false;
}
@@ -2522,6 +2472,9 @@ static int classify_symbols(struct objtool_file *file)
struct symbol *func;
for_each_sym(file, func) {
+ if (func->type == STT_NOTYPE && strstarts(func->name, ".L"))
+ func->local_label = true;
+
if (func->bind != STB_GLOBAL)
continue;
@@ -2559,13 +2512,14 @@ static void mark_rodata(struct objtool_file *file)
*
* - .rodata: can contain GCC switch tables
* - .rodata.<func>: same, if -fdata-sections is being used
- * - .rodata..c_jump_table: contains C annotated jump tables
+ * - .data.rel.ro.c_jump_table: contains C annotated jump tables
*
* .rodata.str1.* sections are ignored; they don't contain jump tables.
*/
for_each_sec(file, sec) {
- if (!strncmp(sec->name, ".rodata", 7) &&
- !strstr(sec->name, ".str1.")) {
+ if ((!strncmp(sec->name, ".rodata", 7) &&
+ !strstr(sec->name, ".str1.")) ||
+ !strncmp(sec->name, ".data.rel.ro", 12)) {
sec->rodata = true;
found = true;
}
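
The .data.rel.ro.c_jump_table name is where the kernel's __annotate_jump_table marker (include/linux/compiler.h) now places C computed-goto dispatch tables; the canonical user is the BPF interpreter, roughly:

	static const void * const jumptable[256] __annotate_jump_table = {
		[0 ... 255] = &&default_label,
		/* ... one label per opcode, inside ___bpf_prog_run() ... */
	};

Matching .data.rel.ro here keeps those tables visible to the jump-table search above.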
@@ -2595,17 +2549,13 @@ static int decode_sections(struct objtool_file *file)
if (ret)
return ret;
- add_ignores(file);
- add_uaccess_safe(file);
-
- ret = add_ignore_alternatives(file);
+ ret = add_ignores(file);
if (ret)
return ret;
- /*
- * Must be before read_unwind_hints() since that needs insn->noendbr.
- */
- ret = read_noendbr_hints(file);
+ add_uaccess_safe(file);
+
+ ret = read_annotate(file, __annotate_early);
if (ret)
return ret;
@@ -2627,7 +2577,7 @@ static int decode_sections(struct objtool_file *file)
* Must be before add_call_destination(); it changes INSN_CALL to
* INSN_JUMP.
*/
- ret = read_intra_function_calls(file);
+ ret = read_annotate(file, __annotate_ifc);
if (ret)
return ret;
@@ -2635,14 +2585,6 @@ static int decode_sections(struct objtool_file *file)
if (ret)
return ret;
- /*
- * Must be after add_call_destinations() such that it can override
- * dead_end_function() marks.
- */
- ret = add_dead_ends(file);
- if (ret)
- return ret;
-
ret = add_jump_table_alts(file);
if (ret)
return ret;
@@ -2651,15 +2593,11 @@ static int decode_sections(struct objtool_file *file)
if (ret)
return ret;
- ret = read_retpoline_hints(file);
- if (ret)
- return ret;
-
- ret = read_instr_hints(file);
- if (ret)
- return ret;
-
- ret = read_validate_unret_hints(file);
+ /*
+ * Must be after add_call_destinations() such that it can override
+ * dead_end_function() marks.
+ */
+ ret = read_annotate(file, __annotate_late);
if (ret)
return ret;
@@ -2834,7 +2772,7 @@ static int update_cfi_state(struct instruction *insn,
if (cfa->base == CFI_UNDEFINED) {
if (insn_func(insn)) {
WARN_INSN(insn, "undefined stack state");
- return -1;
+ return 1;
}
return 0;
}
@@ -2975,10 +2913,27 @@ static int update_cfi_state(struct instruction *insn,
break;
}
- if (op->dest.reg == CFI_SP && op->src.reg == CFI_BP) {
+ if (op->dest.reg == CFI_BP && op->src.reg == CFI_SP &&
+ insn->sym->frame_pointer) {
+ /* addi.d fp,sp,imm on LoongArch */
+ if (cfa->base == CFI_SP && cfa->offset == op->src.offset) {
+ cfa->base = CFI_BP;
+ cfa->offset = 0;
+ }
+ break;
+ }
- /* lea disp(%rbp), %rsp */
- cfi->stack_size = -(op->src.offset + regs[CFI_BP].offset);
+ if (op->dest.reg == CFI_SP && op->src.reg == CFI_BP) {
+ /* addi.d sp,fp,imm on LoongArch */
+ if (cfa->base == CFI_BP && cfa->offset == 0) {
+ if (insn->sym->frame_pointer) {
+ cfa->base = CFI_SP;
+ cfa->offset = -op->src.offset;
+ }
+ } else {
+ /* lea disp(%rbp), %rsp */
+ cfi->stack_size = -(op->src.offset + regs[CFI_BP].offset);
+ }
break;
}
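
The two new branches model a LoongArch frame-pointer sequence such as the following (immediates made up):

	/*
	 * addi.d  $sp, $sp, -32
	 * st.d    $fp, $sp, 24
	 * addi.d  $fp, $sp, 32	   CFA: sp+32 -> fp+0
	 * ...
	 * addi.d  $sp, $fp, -32   CFA: fp+0  -> sp+32
	 */

Both are gated on insn->sym->frame_pointer, so the existing x86 "lea disp(%rbp), %rsp" handling is untouched.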
@@ -3260,9 +3215,8 @@ static int propagate_alt_cfi(struct objtool_file *file, struct instruction *insn
if (cficmp(alt_cfi[group_off], insn->cfi)) {
struct alt_group *orig_group = insn->alt_group->orig_group ?: insn->alt_group;
struct instruction *orig = orig_group->first_insn;
- char *where = offstr(insn->sec, insn->offset);
- WARN_INSN(orig, "stack layout conflict in alternatives: %s", where);
- free(where);
+ WARN_INSN(orig, "stack layout conflict in alternatives: %s",
+ offstr(insn->sec, insn->offset));
return -1;
}
}
@@ -3275,13 +3229,15 @@ static int handle_insn_ops(struct instruction *insn,
struct insn_state *state)
{
struct stack_op *op;
+ int ret;
for (op = insn->stack_ops; op; op = op->next) {
- if (update_cfi_state(insn, next_insn, &state->cfi, op))
- return 1;
+ ret = update_cfi_state(insn, next_insn, &state->cfi, op);
+ if (ret)
+ return ret;
- if (!insn->alt_group)
+ if (!opts.uaccess || !insn->alt_group)
continue;
if (op->dest.type == OP_DEST_PUSHF) {
@@ -3323,36 +3279,41 @@ static bool insn_cfi_match(struct instruction *insn, struct cfi_state *cfi2)
WARN_INSN(insn, "stack state mismatch: cfa1=%d%+d cfa2=%d%+d",
cfi1->cfa.base, cfi1->cfa.offset,
cfi2->cfa.base, cfi2->cfa.offset);
+ return false;
+
+ }
- } else if (memcmp(&cfi1->regs, &cfi2->regs, sizeof(cfi1->regs))) {
+ if (memcmp(&cfi1->regs, &cfi2->regs, sizeof(cfi1->regs))) {
for (i = 0; i < CFI_NUM_REGS; i++) {
- if (!memcmp(&cfi1->regs[i], &cfi2->regs[i],
- sizeof(struct cfi_reg)))
+
+ if (!memcmp(&cfi1->regs[i], &cfi2->regs[i], sizeof(struct cfi_reg)))
continue;
WARN_INSN(insn, "stack state mismatch: reg1[%d]=%d%+d reg2[%d]=%d%+d",
i, cfi1->regs[i].base, cfi1->regs[i].offset,
i, cfi2->regs[i].base, cfi2->regs[i].offset);
- break;
}
+ return false;
+ }
- } else if (cfi1->type != cfi2->type) {
+ if (cfi1->type != cfi2->type) {
WARN_INSN(insn, "stack state mismatch: type1=%d type2=%d",
cfi1->type, cfi2->type);
+ return false;
+ }
- } else if (cfi1->drap != cfi2->drap ||
+ if (cfi1->drap != cfi2->drap ||
(cfi1->drap && cfi1->drap_reg != cfi2->drap_reg) ||
(cfi1->drap && cfi1->drap_offset != cfi2->drap_offset)) {
WARN_INSN(insn, "stack state mismatch: drap1=%d(%d,%d) drap2=%d(%d,%d)",
cfi1->drap, cfi1->drap_reg, cfi1->drap_offset,
cfi2->drap, cfi2->drap_reg, cfi2->drap_offset);
+ return false;
+ }
- } else
- return true;
-
- return false;
+ return true;
}
static inline bool func_uaccess_safe(struct symbol *func)
@@ -3550,6 +3511,34 @@ next_orig:
return next_insn_same_sec(file, alt_group->orig_group->last_insn);
}
+static bool skip_alt_group(struct instruction *insn)
+{
+ struct instruction *alt_insn = insn->alts ? insn->alts->insn : NULL;
+
+ /* ANNOTATE_IGNORE_ALTERNATIVE */
+ if (insn->alt_group && insn->alt_group->ignore)
+ return true;
+
+ /*
+ * For NOP patched with CLAC/STAC, only follow the latter to avoid
+ * impossible code paths combining patched CLAC with unpatched STAC
+ * or vice versa.
+ *
+ * ANNOTATE_IGNORE_ALTERNATIVE could have been used here, but Linus
+ * requested not to do that to avoid hurting .s file readability
+ * around CLAC/STAC alternative sites.
+ */
+
+ if (!alt_insn)
+ return false;
+
+ /* Don't override ASM_{CLAC,STAC}_UNSAFE */
+ if (alt_insn->alt_group && alt_insn->alt_group->ignore)
+ return false;
+
+ return alt_insn->type == INSN_CLAC || alt_insn->type == INSN_STAC;
+}
+
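
The pattern this targets is the x86 SMAP alternative, where the original instruction stream carries a NOP and the patched side carries STAC/CLAC; roughly (ASM_STAC/ASM_CLAC in arch/x86/include/asm/smap.h):

	ALTERNATIVE("", "stac", X86_FEATURE_SMAP)

Following only the patched side keeps every STAC paired with a CLAC on any path objtool explores, without annotating each site.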
/*
* Follow the branch starting at the given instruction, and recursively follow
* any other branches (jumps). Meanwhile, track the frame pointer state at
@@ -3565,6 +3554,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
u8 visited;
int ret;
+ if (func && func->ignore)
+ return 0;
+
sec = insn->sec;
while (1) {
@@ -3573,16 +3565,18 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
if (func && insn_func(insn) && func != insn_func(insn)->pfunc) {
/* Ignore KCFI type preambles, which always fall through */
if (!strncmp(func->name, "__cfi_", 6) ||
- !strncmp(func->name, "__pfx_", 6))
+ !strncmp(func->name, "__pfx_", 6) ||
+ !strncmp(func->name, "__pi___cfi_", 11) ||
+ !strncmp(func->name, "__pi___pfx_", 11))
+ return 0;
+
+ if (file->ignore_unreachables)
return 0;
WARN("%s() falls through to next function %s()",
func->name, insn_func(insn)->name);
- return 1;
- }
+ func->warned = 1;
- if (func && insn->ignore) {
- WARN_INSN(insn, "BUG: why am I validating an ignored function?");
return 1;
}
@@ -3620,6 +3614,18 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
}
if (!save_insn->visited) {
+ /*
+ * If the restore hint insn is at the
+ * beginning of a basic block and was
+ * branched to from elsewhere, and the
+ * save insn hasn't been visited yet,
+ * defer following this branch for now.
+ * It will be seen later via the
+ * straight-line path.
+ */
+ if (!prev_insn)
+ return 0;
+
WARN_INSN(insn, "objtool isn't smart enough to handle this CFI save/restore combo");
return 1;
}
@@ -3645,24 +3651,19 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
if (propagate_alt_cfi(file, insn))
return 1;
- if (!insn->ignore_alts && insn->alts) {
- bool skip_orig = false;
-
+ if (insn->alts) {
for (alt = insn->alts; alt; alt = alt->next) {
- if (alt->skip_orig)
- skip_orig = true;
-
ret = validate_branch(file, func, alt->insn, state);
if (ret) {
BT_INSN(insn, "(alt)");
return ret;
}
}
-
- if (skip_orig)
- return 0;
}
+ if (skip_alt_group(insn))
+ return 0;
+
if (handle_insn_ops(insn, next_insn, &state))
return 1;
@@ -3683,9 +3684,6 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
return 1;
}
- if (insn->dead_end)
- return 0;
-
break;
case INSN_JUMP_CONDITIONAL:
@@ -3722,14 +3720,26 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
break;
- case INSN_CONTEXT_SWITCH:
+ case INSN_SYSCALL:
if (func && (!next_insn || !next_insn->hint)) {
WARN_INSN(insn, "unsupported instruction in callable function");
return 1;
}
+
+ break;
+
+ case INSN_SYSRET:
+ if (func && (!next_insn || !next_insn->hint)) {
+ WARN_INSN(insn, "unsupported instruction in callable function");
+ return 1;
+ }
+
return 0;
case INSN_STAC:
+ if (!opts.uaccess)
+ break;
+
if (state.uaccess) {
WARN_INSN(insn, "recursive UACCESS enable");
return 1;
@@ -3739,6 +3749,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
break;
case INSN_CLAC:
+ if (!opts.uaccess)
+ break;
+
if (!state.uaccess && func) {
WARN_INSN(insn, "redundant UACCESS disable");
return 1;
@@ -3780,7 +3793,12 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
if (!next_insn) {
if (state.cfi.cfa.base == CFI_UNDEFINED)
return 0;
- WARN("%s: unexpected end of section", sec->name);
+ if (file->ignore_unreachables)
+ return 0;
+
+ WARN("%s%sunexpected end of section %s",
+ func ? func->name : "", func ? "(): " : "",
+ sec->name);
return 1;
}
@@ -3795,7 +3813,7 @@ static int validate_unwind_hint(struct objtool_file *file,
struct instruction *insn,
struct insn_state *state)
{
- if (insn->hint && !insn->visited && !insn->ignore) {
+ if (insn->hint && !insn->visited) {
int ret = validate_branch(file, insn_func(insn), insn, *state);
if (ret)
BT_INSN(insn, "<=== (hint)");
@@ -3846,23 +3864,15 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
insn->visited |= VISITED_UNRET;
- if (!insn->ignore_alts && insn->alts) {
+ if (insn->alts) {
struct alternative *alt;
- bool skip_orig = false;
-
for (alt = insn->alts; alt; alt = alt->next) {
- if (alt->skip_orig)
- skip_orig = true;
-
ret = validate_unret(file, alt->insn);
if (ret) {
BT_INSN(insn, "(alt)");
return ret;
}
}
-
- if (skip_orig)
- return 0;
}
switch (insn->type) {
@@ -3878,7 +3888,7 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
if (!is_sibling_call(insn)) {
if (!insn->jump_dest) {
WARN_INSN(insn, "unresolved jump target after linking?!?");
- return -1;
+ return 1;
}
ret = validate_unret(file, insn->jump_dest);
if (ret) {
@@ -3900,7 +3910,7 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
if (!dest) {
WARN("Unresolved function after linking!?: %s",
insn_call_dest(insn)->name);
- return -1;
+ return 1;
}
ret = validate_unret(file, dest);
@@ -3918,6 +3928,12 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
WARN_INSN(insn, "RET before UNTRAIN");
return 1;
+ case INSN_SYSCALL:
+ break;
+
+ case INSN_SYSRET:
+ return 0;
+
case INSN_NOP:
if (insn->retpoline_safe)
return 0;
@@ -3927,9 +3943,12 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
break;
}
+ if (insn->dead_end)
+ return 0;
+
if (!next) {
WARN_INSN(insn, "teh end!");
- return -1;
+ return 1;
}
insn = next;
}
@@ -3944,18 +3963,13 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
static int validate_unrets(struct objtool_file *file)
{
struct instruction *insn;
- int ret, warnings = 0;
+ int warnings = 0;
for_each_insn(file, insn) {
if (!insn->unret)
continue;
- ret = validate_unret(file, insn);
- if (ret < 0) {
- WARN_INSN(insn, "Failed UNRET validation");
- return ret;
- }
- warnings += ret;
+ warnings += validate_unret(file, insn);
}
return warnings;
@@ -3980,17 +3994,48 @@ static int validate_retpoline(struct objtool_file *file)
if (insn->type == INSN_RETURN) {
if (opts.rethunk) {
- WARN_INSN(insn, "'naked' return found in RETHUNK build");
- } else
- continue;
- } else {
- WARN_INSN(insn, "indirect %s found in RETPOLINE build",
- insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call");
+ WARN_INSN(insn, "'naked' return found in MITIGATION_RETHUNK build");
+ warnings++;
+ }
+ continue;
}
+ WARN_INSN(insn, "indirect %s found in MITIGATION_RETPOLINE build",
+ insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call");
warnings++;
}
+ if (!opts.cfi)
+ return warnings;
+
+ /*
+ * kCFI call sites look like:
+ *
+ * movl $(-0x12345678), %r10d
+ * addl -4(%r11), %r10d
+ * jz 1f
+ * ud2
+ * 1: cs call __x86_indirect_thunk_r11
+ *
+ * Verify all indirect calls are kCFI adorned by checking for the
+ * UD2. Notably, doing __nocfi calls to regular (cfi) functions is
+ * broken.
+ */
+ list_for_each_entry(insn, &file->retpoline_call_list, call_node) {
+ struct symbol *sym = insn->sym;
+
+ if (sym && (sym->type == STT_NOTYPE ||
+ sym->type == STT_FUNC) && !sym->nocfi) {
+ struct instruction *prev =
+ prev_insn_same_sym(file, insn);
+
+ if (!prev || prev->type != INSN_BUG) {
+ WARN_INSN(insn, "no-cfi indirect call!");
+ warnings++;
+ }
+ }
+ }
+
return warnings;
}
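
An example of what the new kCFI check catches (the __nocfi attribute is the kernel's; the function is made up):

	static void (*callback)(void);

	static void __nocfi broken(void)
	{
		callback();	/* indirect call without the hash check
				 * -> "no-cfi indirect call!" */
	}

An ordinary, instrumented caller emits the movl/addl/jz/ud2 sequence quoted in the comment above and passes; ANNOTATE_NOCFI (handled in __annotate_late()) provides the explicit opt-out.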
@@ -4009,10 +4054,11 @@ static bool is_ubsan_insn(struct instruction *insn)
static bool ignore_unreachable_insn(struct objtool_file *file, struct instruction *insn)
{
- int i;
+ struct symbol *func = insn_func(insn);
struct instruction *prev_insn;
+ int i;
- if (insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP)
+ if (insn->type == INSN_NOP || insn->type == INSN_TRAP || (func && func->ignore))
return true;
/*
@@ -4031,7 +4077,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
* In this case we'll find a piece of code (whole function) that is not
* covered by a !section symbol. Ignore them.
*/
- if (opts.link && !insn_func(insn)) {
+ if (opts.link && !func) {
int size = find_symbol_hole_containing(insn->sec, insn->offset);
unsigned long end = insn->offset + size;
@@ -4057,19 +4103,17 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
*/
if (insn->jump_dest && insn_func(insn->jump_dest) &&
strstr(insn_func(insn->jump_dest)->name, ".cold")) {
- struct instruction *dest = insn->jump_dest;
- func_for_each_insn(file, insn_func(dest), dest)
- dest->ignore = true;
+ insn_func(insn->jump_dest)->ignore = true;
}
}
return false;
}
- if (!insn_func(insn))
+ if (!func)
return false;
- if (insn_func(insn)->static_call_tramp)
+ if (func->static_call_tramp)
return true;
/*
@@ -4081,7 +4125,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
* It may also insert a UD2 after calling a __noreturn function.
*/
prev_insn = prev_insn_same_sec(file, insn);
- if (prev_insn->dead_end &&
+ if (prev_insn && prev_insn->dead_end &&
(insn->type == INSN_BUG ||
(insn->type == INSN_JUMP_UNCONDITIONAL &&
insn->jump_dest && insn->jump_dest->type == INSN_BUG)))
@@ -4100,7 +4144,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
if (insn->type == INSN_JUMP_UNCONDITIONAL) {
if (insn->jump_dest &&
- insn_func(insn->jump_dest) == insn_func(insn)) {
+ insn_func(insn->jump_dest) == func) {
insn = insn->jump_dest;
continue;
}
@@ -4108,7 +4152,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
break;
}
- if (insn->offset + insn->len >= insn_func(insn)->offset + insn_func(insn)->len)
+ if (insn->offset + insn->len >= func->offset + func->len)
break;
insn = next_insn_same_sec(file, insn);
@@ -4200,10 +4244,11 @@ static int validate_symbol(struct objtool_file *file, struct section *sec,
return 0;
insn = find_insn(file, sec, sym->offset);
- if (!insn || insn->ignore || insn->visited)
+ if (!insn || insn->visited)
return 0;
- state->uaccess = sym->uaccess_safe;
+ if (opts.uaccess)
+ state->uaccess = sym->uaccess_safe;
ret = validate_branch(file, insn_func(insn), insn, *state);
if (ret)
@@ -4295,6 +4340,51 @@ static bool noendbr_range(struct objtool_file *file, struct instruction *insn)
return insn->offset == sym->offset + sym->len;
}
+static int __validate_ibt_insn(struct objtool_file *file, struct instruction *insn,
+ struct instruction *dest)
+{
+ if (dest->type == INSN_ENDBR) {
+ mark_endbr_used(dest);
+ return 0;
+ }
+
+ if (insn_func(dest) && insn_func(insn) &&
+ insn_func(dest)->pfunc == insn_func(insn)->pfunc) {
+ /*
+ * Anything from->to self is either _THIS_IP_ or
+ * IRET-to-self.
+ *
+ * There is no sane way to annotate _THIS_IP_ since the
+ * compiler treats the relocation as a constant and is
+ * happy to fold in offsets, skewing any annotation we
+ * do, leading to vast amounts of false-positives.
+ *
+ * There's also compiler generated _THIS_IP_ through
+ * KCOV and such which we have no hope of annotating.
+ *
+ * As such, blanket accept self-references without
+ * issue.
+ */
+ return 0;
+ }
+
+ /*
+ * Accept anything ANNOTATE_NOENDBR.
+ */
+ if (dest->noendbr)
+ return 0;
+
+ /*
+ * Accept if this is the instruction after a symbol
+ * that is (no)endbr -- typical code-range usage.
+ */
+ if (noendbr_range(file, dest))
+ return 0;
+
+ WARN_INSN(insn, "relocation to !ENDBR: %s", offstr(dest->sec, dest->offset));
+ return 1;
+}
+
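
An illustrative trigger for the warning path above is code that takes the address of an asm symbol written without ENDBR (names made up):

	extern void asm_stub(void);		/* asm entry point, no ENDBR */
	static void (*hook)(void);

	static void install(void)
	{
		hook = asm_stub;		/* lea asm_stub(%rip),%rax
						 * -> "relocation to !ENDBR: asm_stub+0x0" */
	}

The accepted escapes are the ones listed above: an ENDBR target, a self-reference, ANNOTATE_NOENDBR, or the instruction just past a (no)endbr code-range symbol.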
static int validate_ibt_insn(struct objtool_file *file, struct instruction *insn)
{
struct instruction *dest;
@@ -4307,6 +4397,7 @@ static int validate_ibt_insn(struct objtool_file *file, struct instruction *insn
* direct/indirect branches:
*/
switch (insn->type) {
+
case INSN_CALL:
case INSN_CALL_DYNAMIC:
case INSN_JUMP_CONDITIONAL:
@@ -4316,6 +4407,23 @@ static int validate_ibt_insn(struct objtool_file *file, struct instruction *insn
case INSN_RETURN:
case INSN_NOP:
return 0;
+
+ case INSN_LEA_RIP:
+ if (!insn_reloc(file, insn)) {
+ /* local function pointer reference without reloc */
+
+ off = arch_jump_destination(insn);
+
+ dest = find_insn(file, insn->sec, off);
+ if (!dest) {
+ WARN_INSN(insn, "corrupt function pointer reference");
+ return 1;
+ }
+
+ return __validate_ibt_insn(file, insn, dest);
+ }
+ break;
+
default:
break;
}
@@ -4326,13 +4434,6 @@ static int validate_ibt_insn(struct objtool_file *file, struct instruction *insn
reloc_offset(reloc) + 1,
(insn->offset + insn->len) - (reloc_offset(reloc) + 1))) {
- /*
- * static_call_update() references the trampoline, which
- * doesn't have (or need) ENDBR. Skip warning in that case.
- */
- if (reloc->sym->static_call_tramp)
- continue;
-
off = reloc->sym->offset;
if (reloc_type(reloc) == R_X86_64_PC32 ||
reloc_type(reloc) == R_X86_64_PLT32)
@@ -4344,47 +4445,7 @@ static int validate_ibt_insn(struct objtool_file *file, struct instruction *insn
if (!dest)
continue;
- if (dest->type == INSN_ENDBR) {
- mark_endbr_used(dest);
- continue;
- }
-
- if (insn_func(dest) && insn_func(insn) &&
- insn_func(dest)->pfunc == insn_func(insn)->pfunc) {
- /*
- * Anything from->to self is either _THIS_IP_ or
- * IRET-to-self.
- *
- * There is no sane way to annotate _THIS_IP_ since the
- * compiler treats the relocation as a constant and is
- * happy to fold in offsets, skewing any annotation we
- * do, leading to vast amounts of false-positives.
- *
- * There's also compiler generated _THIS_IP_ through
- * KCOV and such which we have no hope of annotating.
- *
- * As such, blanket accept self-references without
- * issue.
- */
- continue;
- }
-
- /*
- * Accept anything ANNOTATE_NOENDBR.
- */
- if (dest->noendbr)
- continue;
-
- /*
- * Accept if this is the instruction after a symbol
- * that is (no)endbr -- typical code-range usage.
- */
- if (noendbr_range(file, dest))
- continue;
-
- WARN_INSN(insn, "relocation to !ENDBR: %s", offstr(dest->sec, dest->offset));
-
- warnings++;
+ warnings += __validate_ibt_insn(file, insn, dest);
}
return warnings;
@@ -4408,9 +4469,8 @@ static int validate_ibt_data_reloc(struct objtool_file *file,
if (dest->noendbr)
return 0;
- WARN_FUNC("data relocation to !ENDBR: %s",
- reloc->sec->base, reloc_offset(reloc),
- offstr(dest->sec, dest->offset));
+ WARN_FUNC(reloc->sec->base, reloc_offset(reloc),
+ "data relocation to !ENDBR: %s", offstr(dest->sec, dest->offset));
return 1;
}
@@ -4460,6 +4520,9 @@ static int validate_ibt(struct objtool_file *file)
!strcmp(sec->name, "__jump_table") ||
!strcmp(sec->name, "__mcount_loc") ||
!strcmp(sec->name, ".kcfi_traps") ||
+ !strcmp(sec->name, ".llvm.call-graph-profile") ||
+ !strcmp(sec->name, ".llvm_bb_addr_map") ||
+ !strcmp(sec->name, "__tracepoints") ||
strstr(sec->name, "__patchable_function_entries"))
continue;
@@ -4503,35 +4566,6 @@ static int validate_sls(struct objtool_file *file)
return warnings;
}
-static bool ignore_noreturn_call(struct instruction *insn)
-{
- struct symbol *call_dest = insn_call_dest(insn);
-
- /*
- * FIXME: hack, we need a real noreturn solution
- *
- * Problem is, exc_double_fault() may or may not return, depending on
- * whether CONFIG_X86_ESPFIX64 is set. But objtool has no visibility
- * to the kernel config.
- *
- * Other potential ways to fix it:
- *
- * - have compiler communicate __noreturn functions somehow
- * - remove CONFIG_X86_ESPFIX64
- * - read the .config file
- * - add a cmdline option
- * - create a generic objtool annotation format (vs a bunch of custom
- * formats) and annotate it
- */
- if (!strcmp(call_dest->name, "exc_double_fault")) {
- /* prevent further unreachable warnings for the caller */
- insn->sym->warned = 1;
- return true;
- }
-
- return false;
-}
-
static int validate_reachable_instructions(struct objtool_file *file)
{
struct instruction *insn, *prev_insn;
@@ -4548,8 +4582,8 @@ static int validate_reachable_instructions(struct objtool_file *file)
prev_insn = prev_insn_same_sec(file, insn);
if (prev_insn && prev_insn->dead_end) {
call_dest = insn_call_dest(prev_insn);
- if (call_dest && !ignore_noreturn_call(prev_insn)) {
- WARN_INSN(insn, "%s() is missing a __noreturn annotation",
+ if (call_dest) {
+ WARN_INSN(insn, "%s() missing __noreturn in .c/.h or NORETURN() in noreturns.h",
call_dest->name);
warnings++;
continue;
@@ -4564,13 +4598,15 @@ static int validate_reachable_instructions(struct objtool_file *file)
}
/* 'funcs' is a space-separated list of function names */
-static int disas_funcs(const char *funcs)
+static void disas_funcs(const char *funcs)
{
const char *objdump_str, *cross_compile;
int size, ret;
char *cmd;
cross_compile = getenv("CROSS_COMPILE");
+ if (!cross_compile)
+ cross_compile = "";
objdump_str = "%sobjdump -wdr %s | gawk -M -v _funcs='%s' '"
"BEGIN { split(_funcs, funcs); }"
@@ -4597,7 +4633,7 @@ static int disas_funcs(const char *funcs)
size = snprintf(NULL, 0, objdump_str, cross_compile, objname, funcs) + 1;
if (size <= 0) {
WARN("objdump string size calculation failed");
- return -1;
+ return;
}
cmd = malloc(size);
@@ -4607,13 +4643,11 @@ static int disas_funcs(const char *funcs)
ret = system(cmd);
if (ret) {
WARN("disassembly failed: %d", ret);
- return -1;
+ return;
}
-
- return 0;
}
-static int disas_warned_funcs(struct objtool_file *file)
+static void disas_warned_funcs(struct objtool_file *file)
{
struct symbol *sym;
char *funcs = NULL, *tmp;
@@ -4622,9 +4656,17 @@ static int disas_warned_funcs(struct objtool_file *file)
if (sym->warned) {
if (!funcs) {
funcs = malloc(strlen(sym->name) + 1);
+ if (!funcs) {
+ ERROR_GLIBC("malloc");
+ return;
+ }
strcpy(funcs, sym->name);
} else {
tmp = malloc(strlen(funcs) + strlen(sym->name) + 2);
+ if (!tmp) {
+ ERROR_GLIBC("malloc");
+ return;
+ }
sprintf(tmp, "%s %s", funcs, sym->name);
free(funcs);
funcs = tmp;
@@ -4634,8 +4676,47 @@ static int disas_warned_funcs(struct objtool_file *file)
if (funcs)
disas_funcs(funcs);
+}
- return 0;
+__weak bool arch_absolute_reloc(struct elf *elf, struct reloc *reloc)
+{
+ unsigned int type = reloc_type(reloc);
+ size_t sz = elf_addr_size(elf);
+
+ return (sz == 8) ? (type == R_ABS64) : (type == R_ABS32);
+}
+
+static int check_abs_references(struct objtool_file *file)
+{
+ struct section *sec;
+ struct reloc *reloc;
+ int ret = 0;
+
+ for_each_sec(file, sec) {
+ /* absolute references in non-loadable sections are fine */
+ if (!(sec->sh.sh_flags & SHF_ALLOC))
+ continue;
+
+ /* section must have an associated .rela section */
+ if (!sec->rsec)
+ continue;
+
+ /*
+ * Special case for compiler generated metadata that is not
+ * consumed until after boot.
+ */
+ if (!strcmp(sec->name, "__patchable_function_entries"))
+ continue;
+
+ for_each_reloc(sec->rsec, reloc) {
+ if (arch_absolute_reloc(file->elf, reloc)) {
+ WARN("section %s has absolute relocation at offset 0x%lx",
+ sec->name, reloc_offset(reloc));
+ ret++;
+ }
+ }
+ }
+ return ret;
}
struct insn_chunk {
@@ -4668,7 +4749,7 @@ static void free_insns(struct objtool_file *file)
int check(struct objtool_file *file)
{
- int ret, warnings = 0;
+ int ret = 0, warnings = 0;
arch_initial_func_cfi_state(&initial_func_cfi);
init_cfi_state(&init_cfi);
@@ -4677,51 +4758,36 @@ int check(struct objtool_file *file)
init_cfi_state(&force_undefined_cfi);
force_undefined_cfi.force_undefined = true;
- if (!cfi_hash_alloc(1UL << (file->elf->symbol_bits - 3)))
+ if (!cfi_hash_alloc(1UL << (file->elf->symbol_bits - 3))) {
+ ret = -1;
goto out;
+ }
cfi_hash_add(&init_cfi);
cfi_hash_add(&func_cfi);
ret = decode_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
-
if (!nr_insns)
goto out;
- if (opts.retpoline) {
- ret = validate_retpoline(file);
- if (ret < 0)
- return ret;
- warnings += ret;
- }
+ if (opts.retpoline)
+ warnings += validate_retpoline(file);
if (opts.stackval || opts.orc || opts.uaccess) {
- ret = validate_functions(file);
- if (ret < 0)
- goto out;
- warnings += ret;
+ int w = 0;
- ret = validate_unwind_hints(file, NULL);
- if (ret < 0)
- goto out;
- warnings += ret;
+ w += validate_functions(file);
+ w += validate_unwind_hints(file, NULL);
+ if (!w)
+ w += validate_reachable_instructions(file);
- if (!warnings) {
- ret = validate_reachable_instructions(file);
- if (ret < 0)
- goto out;
- warnings += ret;
- }
+ warnings += w;
} else if (opts.noinstr) {
- ret = validate_noinstr_sections(file);
- if (ret < 0)
- goto out;
- warnings += ret;
+ warnings += validate_noinstr_sections(file);
}
if (opts.unret) {
@@ -4729,94 +4795,74 @@ int check(struct objtool_file *file)
* Must be after validate_branch() and friends, it plays
* further games with insn->visited.
*/
- ret = validate_unrets(file);
- if (ret < 0)
- return ret;
- warnings += ret;
+ warnings += validate_unrets(file);
}
- if (opts.ibt) {
- ret = validate_ibt(file);
- if (ret < 0)
- goto out;
- warnings += ret;
- }
+ if (opts.ibt)
+ warnings += validate_ibt(file);
- if (opts.sls) {
- ret = validate_sls(file);
- if (ret < 0)
- goto out;
- warnings += ret;
- }
+ if (opts.sls)
+ warnings += validate_sls(file);
if (opts.static_call) {
ret = create_static_call_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
}
if (opts.retpoline) {
ret = create_retpoline_sites_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
}
if (opts.cfi) {
ret = create_cfi_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
}
if (opts.rethunk) {
ret = create_return_sites_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
if (opts.hack_skylake) {
ret = create_direct_call_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
}
}
if (opts.mcount) {
ret = create_mcount_loc_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
}
if (opts.prefix) {
ret = add_prefix_symbols(file);
- if (ret < 0)
- return ret;
- warnings += ret;
+ if (ret)
+ goto out;
}
if (opts.ibt) {
ret = create_ibt_endbr_seal_sections(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
}
+ if (opts.noabs)
+ warnings += check_abs_references(file);
+
if (opts.orc && nr_insns) {
ret = orc_create(file);
- if (ret < 0)
+ if (ret)
goto out;
- warnings += ret;
}
free_insns(file);
- if (opts.verbose)
- disas_warned_funcs(file);
-
if (opts.stats) {
printf("nr_insns_visited: %ld\n", nr_insns_visited);
printf("nr_cfi: %ld\n", nr_cfi);
@@ -4825,10 +4871,18 @@ int check(struct objtool_file *file)
}
out:
- /*
- * For now, don't fail the kernel build on fatal warnings. These
- * errors are still fairly common due to the growing matrix of
- * supported toolchains and their recent pace of change.
- */
- return 0;
+ if (!ret && !warnings)
+ return 0;
+
+ if (opts.werror && warnings)
+ ret = 1;
+
+ if (opts.verbose) {
+ if (opts.werror && warnings)
+ WARN("%d warning(s) upgraded to errors", warnings);
+ print_args();
+ disas_warned_funcs(file);
+ }
+
+ return ret;
}
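
As an aside on the check.c changes above: the new arch_absolute_reloc() default is declared __weak, so an architecture can supply a strong definition that replaces it at link time, and warnings now fail the run only when --Werror upgrades them. A minimal standalone sketch of that weak-symbol pattern, with purely illustrative names (not objtool code):

#include <stdio.h>

/* default, overridable definition -- GCC/Clang weak symbol */
__attribute__((weak)) int arch_absolute_reloc_demo(unsigned int type)
{
	return type == 1;	/* pretend type 1 is the generic "absolute" reloc */
}

/*
 * An architecture-specific file could provide a strong definition with the
 * same signature; at link time it silently replaces the weak one above.
 */

int main(void)
{
	printf("%d %d\n", arch_absolute_reloc_demo(1), arch_absolute_reloc_demo(2));
	return 0;
}

Linking a second object that defines a strong arch_absolute_reloc_demo() makes the linker drop the weak default, which is how a per-arch implementation can replace a generic one.
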
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index 3d27983dc908..ca5d77db692a 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -72,17 +72,17 @@ static inline void __elf_hash_del(struct elf_hash_node *node,
obj; \
obj = elf_list_entry(obj->member.next, typeof(*(obj)), member))
-#define elf_alloc_hash(name, size) \
-({ \
- __elf_bits(name) = max(10, ilog2(size)); \
+#define elf_alloc_hash(name, size) \
+({ \
+ __elf_bits(name) = max(10, ilog2(size)); \
__elf_table(name) = mmap(NULL, sizeof(struct elf_hash_node *) << __elf_bits(name), \
- PROT_READ|PROT_WRITE, \
- MAP_PRIVATE|MAP_ANON, -1, 0); \
- if (__elf_table(name) == (void *)-1L) { \
- WARN("mmap fail " #name); \
- __elf_table(name) = NULL; \
- } \
- __elf_table(name); \
+ PROT_READ|PROT_WRITE, \
+ MAP_PRIVATE|MAP_ANON, -1, 0); \
+ if (__elf_table(name) == (void *)-1L) { \
+ ERROR_GLIBC("mmap fail " #name); \
+ __elf_table(name) = NULL; \
+ } \
+ __elf_table(name); \
})
static inline unsigned long __sym_start(struct symbol *s)
@@ -224,12 +224,17 @@ int find_symbol_hole_containing(const struct section *sec, unsigned long offset)
if (n)
return 0; /* not a hole */
- /* didn't find a symbol for which @offset is after it */
- if (!hole.sym)
- return 0; /* not a hole */
+ /*
+ * @offset >= sym->offset + sym->len, find symbol after it.
+ * When hole.sym is empty, use the first node to compute the hole.
+ * If there is no symbol in the section, the first node will be NULL,
+ * in which case, -1 is returned to skip the whole section.
+ */
+ if (hole.sym)
+ n = rb_next(&hole.sym->node);
+ else
+ n = rb_first_cached(&sec->symbol_tree);
- /* @offset >= sym->offset + sym->len, find symbol after it */
- n = rb_next(&hole.sym->node);
if (!n)
return -1; /* until end of address space */
@@ -311,12 +316,12 @@ static int read_sections(struct elf *elf)
int i;
if (elf_getshdrnum(elf->elf, &sections_nr)) {
- WARN_ELF("elf_getshdrnum");
+ ERROR_ELF("elf_getshdrnum");
return -1;
}
if (elf_getshdrstrndx(elf->elf, &shstrndx)) {
- WARN_ELF("elf_getshdrstrndx");
+ ERROR_ELF("elf_getshdrstrndx");
return -1;
}
@@ -326,7 +331,7 @@ static int read_sections(struct elf *elf)
elf->section_data = calloc(sections_nr, sizeof(*sec));
if (!elf->section_data) {
- perror("calloc");
+ ERROR_GLIBC("calloc");
return -1;
}
for (i = 0; i < sections_nr; i++) {
@@ -336,33 +341,32 @@ static int read_sections(struct elf *elf)
s = elf_getscn(elf->elf, i);
if (!s) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
sec->idx = elf_ndxscn(s);
if (!gelf_getshdr(s, &sec->sh)) {
- WARN_ELF("gelf_getshdr");
+ ERROR_ELF("gelf_getshdr");
return -1;
}
sec->name = elf_strptr(elf->elf, shstrndx, sec->sh.sh_name);
if (!sec->name) {
- WARN_ELF("elf_strptr");
+ ERROR_ELF("elf_strptr");
return -1;
}
if (sec->sh.sh_size != 0 && !is_dwarf_section(sec)) {
sec->data = elf_getdata(s, NULL);
if (!sec->data) {
- WARN_ELF("elf_getdata");
+ ERROR_ELF("elf_getdata");
return -1;
}
if (sec->data->d_off != 0 ||
sec->data->d_size != sec->sh.sh_size) {
- WARN("unexpected data attributes for %s",
- sec->name);
+ ERROR("unexpected data attributes for %s", sec->name);
return -1;
}
}
@@ -382,7 +386,7 @@ static int read_sections(struct elf *elf)
/* sanity check, one more call to elf_nextscn() should return NULL */
if (elf_nextscn(elf->elf, s)) {
- WARN("section entry mismatch");
+ ERROR("section entry mismatch");
return -1;
}
@@ -462,7 +466,7 @@ static int read_symbols(struct elf *elf)
elf->symbol_data = calloc(symbols_nr, sizeof(*sym));
if (!elf->symbol_data) {
- perror("calloc");
+ ERROR_GLIBC("calloc");
return -1;
}
for (i = 0; i < symbols_nr; i++) {
@@ -472,14 +476,14 @@ static int read_symbols(struct elf *elf)
if (!gelf_getsymshndx(symtab->data, shndx_data, i, &sym->sym,
&shndx)) {
- WARN_ELF("gelf_getsymshndx");
+ ERROR_ELF("gelf_getsymshndx");
goto err;
}
sym->name = elf_strptr(elf->elf, symtab->sh.sh_link,
sym->sym.st_name);
if (!sym->name) {
- WARN_ELF("elf_strptr");
+ ERROR_ELF("elf_strptr");
goto err;
}
@@ -491,8 +495,7 @@ static int read_symbols(struct elf *elf)
sym->sec = find_section_by_index(elf, shndx);
if (!sym->sec) {
- WARN("couldn't find section for symbol %s",
- sym->name);
+ ERROR("couldn't find section for symbol %s", sym->name);
goto err;
}
if (GELF_ST_TYPE(sym->sym.st_info) == STT_SECTION) {
@@ -531,8 +534,7 @@ static int read_symbols(struct elf *elf)
pnamelen = coldstr - sym->name;
pname = strndup(sym->name, pnamelen);
if (!pname) {
- WARN("%s(): failed to allocate memory",
- sym->name);
+ ERROR("%s(): failed to allocate memory", sym->name);
return -1;
}
@@ -540,8 +542,7 @@ static int read_symbols(struct elf *elf)
free(pname);
if (!pfunc) {
- WARN("%s(): can't find parent function",
- sym->name);
+ ERROR("%s(): can't find parent function", sym->name);
return -1;
}
@@ -571,6 +572,34 @@ err:
return -1;
}
+static int mark_group_syms(struct elf *elf)
+{
+ struct section *symtab, *sec;
+ struct symbol *sym;
+
+ symtab = find_section_by_name(elf, ".symtab");
+ if (!symtab) {
+ ERROR("no .symtab");
+ return -1;
+ }
+
+ list_for_each_entry(sec, &elf->sections, list) {
+ if (sec->sh.sh_type == SHT_GROUP &&
+ sec->sh.sh_link == symtab->idx) {
+ sym = find_symbol_by_index(elf, sec->sh.sh_info);
+ if (!sym) {
+ ERROR("%s: can't find SHT_GROUP signature symbol",
+ sec->name);
+ return -1;
+ }
+
+ sym->group_sec = sec;
+ }
+ }
+
+ return 0;
+}
+
/*
* @sym's idx has changed. Update the relocs which reference it.
*/
@@ -578,7 +607,7 @@ static int elf_update_sym_relocs(struct elf *elf, struct symbol *sym)
{
struct reloc *reloc;
- for (reloc = sym->relocs; reloc; reloc = reloc->sym_next_reloc)
+ for (reloc = sym->relocs; reloc; reloc = sym_next_reloc(reloc))
set_reloc_sym(elf, reloc, reloc->sym->idx);
return 0;
@@ -608,14 +637,14 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
s = elf_getscn(elf->elf, symtab->idx);
if (!s) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
if (symtab_shndx) {
t = elf_getscn(elf->elf, symtab_shndx->idx);
if (!t) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
}
@@ -638,7 +667,7 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
if (idx) {
/* we don't do holes in symbol tables */
- WARN("index out of range");
+ ERROR("index out of range");
return -1;
}
@@ -649,7 +678,7 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
buf = calloc(num, entsize);
if (!buf) {
- WARN("malloc");
+ ERROR_GLIBC("calloc");
return -1;
}
@@ -664,7 +693,7 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
if (t) {
buf = calloc(num, sizeof(Elf32_Word));
if (!buf) {
- WARN("malloc");
+ ERROR_GLIBC("calloc");
return -1;
}
@@ -682,7 +711,7 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
/* empty blocks should not happen */
if (!symtab_data->d_size) {
- WARN("zero size data");
+ ERROR("zero size data");
return -1;
}
@@ -697,7 +726,7 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
/* something went side-ways */
if (idx < 0) {
- WARN("negative index");
+ ERROR("negative index");
return -1;
}
@@ -709,13 +738,13 @@ static int elf_update_symbol(struct elf *elf, struct section *symtab,
} else {
sym->sym.st_shndx = SHN_XINDEX;
if (!shndx_data) {
- WARN("no .symtab_shndx");
+ ERROR("no .symtab_shndx");
return -1;
}
}
if (!gelf_update_symshndx(symtab_data, shndx_data, idx, &sym->sym, shndx)) {
- WARN_ELF("gelf_update_symshndx");
+ ERROR_ELF("gelf_update_symshndx");
return -1;
}
@@ -733,7 +762,7 @@ __elf_create_symbol(struct elf *elf, struct symbol *sym)
if (symtab) {
symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
} else {
- WARN("no .symtab");
+ ERROR("no .symtab");
return NULL;
}
@@ -744,7 +773,7 @@ __elf_create_symbol(struct elf *elf, struct symbol *sym)
/*
* Move the first global symbol, as per sh_info, into a new, higher
- * symbol index. This fees up a spot for a new local symbol.
+ * symbol index. This frees up a spot for a new local symbol.
*/
first_non_local = symtab->sh.sh_info;
old = find_symbol_by_index(elf, first_non_local);
@@ -755,13 +784,18 @@ __elf_create_symbol(struct elf *elf, struct symbol *sym)
old->idx = new_idx;
if (elf_update_symbol(elf, symtab, symtab_shndx, old)) {
- WARN("elf_update_symbol move");
+ ERROR("elf_update_symbol move");
return NULL;
}
if (elf_update_sym_relocs(elf, old))
return NULL;
+ if (old->group_sec) {
+ old->group_sec->sh.sh_info = new_idx;
+ mark_sec_changed(elf, old->group_sec, true);
+ }
+
new_idx = first_non_local;
}
@@ -773,7 +807,7 @@ __elf_create_symbol(struct elf *elf, struct symbol *sym)
non_local:
sym->idx = new_idx;
if (elf_update_symbol(elf, symtab, symtab_shndx, sym)) {
- WARN("elf_update_symbol");
+ ERROR("elf_update_symbol");
return NULL;
}
@@ -794,7 +828,7 @@ elf_create_section_symbol(struct elf *elf, struct section *sec)
struct symbol *sym = calloc(1, sizeof(*sym));
if (!sym) {
- perror("malloc");
+ ERROR_GLIBC("malloc");
return NULL;
}
@@ -824,7 +858,7 @@ elf_create_prefix_symbol(struct elf *elf, struct symbol *orig, long size)
char *name = malloc(namelen);
if (!sym || !name) {
- perror("malloc");
+ ERROR_GLIBC("malloc");
return NULL;
}
@@ -853,16 +887,16 @@ static struct reloc *elf_init_reloc(struct elf *elf, struct section *rsec,
struct reloc *reloc, empty = { 0 };
if (reloc_idx >= sec_num_entries(rsec)) {
- WARN("%s: bad reloc_idx %u for %s with %d relocs",
- __func__, reloc_idx, rsec->name, sec_num_entries(rsec));
+ ERROR("%s: bad reloc_idx %u for %s with %d relocs",
+ __func__, reloc_idx, rsec->name, sec_num_entries(rsec));
return NULL;
}
reloc = &rsec->relocs[reloc_idx];
if (memcmp(reloc, &empty, sizeof(empty))) {
- WARN("%s: %s: reloc %d already initialized!",
- __func__, rsec->name, reloc_idx);
+ ERROR("%s: %s: reloc %d already initialized!",
+ __func__, rsec->name, reloc_idx);
return NULL;
}
@@ -875,7 +909,7 @@ static struct reloc *elf_init_reloc(struct elf *elf, struct section *rsec,
set_reloc_addend(elf, reloc, addend);
elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc));
- reloc->sym_next_reloc = sym->relocs;
+ set_sym_next_reloc(reloc, sym->relocs);
sym->relocs = reloc;
return reloc;
@@ -891,8 +925,7 @@ struct reloc *elf_init_reloc_text_sym(struct elf *elf, struct section *sec,
int addend = insn_off;
if (!(insn_sec->sh.sh_flags & SHF_EXECINSTR)) {
- WARN("bad call to %s() for data symbol %s",
- __func__, sym->name);
+ ERROR("bad call to %s() for data symbol %s", __func__, sym->name);
return NULL;
}
@@ -921,8 +954,7 @@ struct reloc *elf_init_reloc_data_sym(struct elf *elf, struct section *sec,
s64 addend)
{
if (sym->sec && (sec->sh.sh_flags & SHF_EXECINSTR)) {
- WARN("bad call to %s() for text symbol %s",
- __func__, sym->name);
+ ERROR("bad call to %s() for text symbol %s", __func__, sym->name);
return NULL;
}
@@ -948,8 +980,7 @@ static int read_relocs(struct elf *elf)
rsec->base = find_section_by_index(elf, rsec->sh.sh_info);
if (!rsec->base) {
- WARN("can't find base section for reloc section %s",
- rsec->name);
+ ERROR("can't find base section for reloc section %s", rsec->name);
return -1;
}
@@ -958,7 +989,7 @@ static int read_relocs(struct elf *elf)
nr_reloc = 0;
rsec->relocs = calloc(sec_num_entries(rsec), sizeof(*reloc));
if (!rsec->relocs) {
- perror("calloc");
+ ERROR_GLIBC("calloc");
return -1;
}
for (i = 0; i < sec_num_entries(rsec); i++) {
@@ -968,13 +999,12 @@ static int read_relocs(struct elf *elf)
symndx = reloc_sym(reloc);
reloc->sym = sym = find_symbol_by_index(elf, symndx);
if (!reloc->sym) {
- WARN("can't find reloc entry symbol %d for %s",
- symndx, rsec->name);
+ ERROR("can't find reloc entry symbol %d for %s", symndx, rsec->name);
return -1;
}
elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc));
- reloc->sym_next_reloc = sym->relocs;
+ set_sym_next_reloc(reloc, sym->relocs);
sym->relocs = reloc;
nr_reloc++;
@@ -1000,7 +1030,7 @@ struct elf *elf_open_read(const char *name, int flags)
elf = malloc(sizeof(*elf));
if (!elf) {
- perror("malloc");
+ ERROR_GLIBC("malloc");
return NULL;
}
memset(elf, 0, sizeof(*elf));
@@ -1023,12 +1053,12 @@ struct elf *elf_open_read(const char *name, int flags)
elf->elf = elf_begin(elf->fd, cmd, NULL);
if (!elf->elf) {
- WARN_ELF("elf_begin");
+ ERROR_ELF("elf_begin");
goto err;
}
if (!gelf_getehdr(elf->elf, &elf->ehdr)) {
- WARN_ELF("gelf_getehdr");
+ ERROR_ELF("gelf_getehdr");
goto err;
}
@@ -1038,6 +1068,9 @@ struct elf *elf_open_read(const char *name, int flags)
if (read_symbols(elf))
goto err;
+ if (mark_group_syms(elf))
+ goto err;
+
if (read_relocs(elf))
goto err;
@@ -1057,19 +1090,19 @@ static int elf_add_string(struct elf *elf, struct section *strtab, char *str)
if (!strtab)
strtab = find_section_by_name(elf, ".strtab");
if (!strtab) {
- WARN("can't find .strtab section");
+ ERROR("can't find .strtab section");
return -1;
}
s = elf_getscn(elf->elf, strtab->idx);
if (!s) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
data = elf_newdata(s);
if (!data) {
- WARN_ELF("elf_newdata");
+ ERROR_ELF("elf_newdata");
return -1;
}
@@ -1094,7 +1127,7 @@ struct section *elf_create_section(struct elf *elf, const char *name,
sec = malloc(sizeof(*sec));
if (!sec) {
- perror("malloc");
+ ERROR_GLIBC("malloc");
return NULL;
}
memset(sec, 0, sizeof(*sec));
@@ -1103,13 +1136,13 @@ struct section *elf_create_section(struct elf *elf, const char *name,
s = elf_newscn(elf->elf);
if (!s) {
- WARN_ELF("elf_newscn");
+ ERROR_ELF("elf_newscn");
return NULL;
}
sec->name = strdup(name);
if (!sec->name) {
- perror("strdup");
+ ERROR_GLIBC("strdup");
return NULL;
}
@@ -1117,7 +1150,7 @@ struct section *elf_create_section(struct elf *elf, const char *name,
sec->data = elf_newdata(s);
if (!sec->data) {
- WARN_ELF("elf_newdata");
+ ERROR_ELF("elf_newdata");
return NULL;
}
@@ -1127,14 +1160,14 @@ struct section *elf_create_section(struct elf *elf, const char *name,
if (size) {
sec->data->d_buf = malloc(size);
if (!sec->data->d_buf) {
- perror("malloc");
+ ERROR_GLIBC("malloc");
return NULL;
}
memset(sec->data->d_buf, 0, size);
}
if (!gelf_getshdr(s, &sec->sh)) {
- WARN_ELF("gelf_getshdr");
+ ERROR_ELF("gelf_getshdr");
return NULL;
}
@@ -1149,7 +1182,7 @@ struct section *elf_create_section(struct elf *elf, const char *name,
if (!shstrtab)
shstrtab = find_section_by_name(elf, ".strtab");
if (!shstrtab) {
- WARN("can't find .shstrtab or .strtab section");
+ ERROR("can't find .shstrtab or .strtab section");
return NULL;
}
sec->sh.sh_name = elf_add_string(elf, shstrtab, sec->name);
@@ -1174,7 +1207,7 @@ static struct section *elf_create_rela_section(struct elf *elf,
rsec_name = malloc(strlen(sec->name) + strlen(".rela") + 1);
if (!rsec_name) {
- perror("malloc");
+ ERROR_GLIBC("malloc");
return NULL;
}
strcpy(rsec_name, ".rela");
@@ -1194,7 +1227,7 @@ static struct section *elf_create_rela_section(struct elf *elf,
rsec->relocs = calloc(sec_num_entries(rsec), sizeof(struct reloc));
if (!rsec->relocs) {
- perror("calloc");
+ ERROR_GLIBC("calloc");
return NULL;
}
@@ -1227,7 +1260,7 @@ int elf_write_insn(struct elf *elf, struct section *sec,
Elf_Data *data = sec->data;
if (data->d_type != ELF_T_BYTE || data->d_off) {
- WARN("write to unexpected data for section: %s", sec->name);
+ ERROR("write to unexpected data for section: %s", sec->name);
return -1;
}
@@ -1256,7 +1289,7 @@ static int elf_truncate_section(struct elf *elf, struct section *sec)
s = elf_getscn(elf->elf, sec->idx);
if (!s) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
@@ -1266,7 +1299,7 @@ static int elf_truncate_section(struct elf *elf, struct section *sec)
if (!data) {
if (size) {
- WARN("end of section data but non-zero size left\n");
+ ERROR("end of section data but non-zero size left\n");
return -1;
}
return 0;
@@ -1274,12 +1307,12 @@ static int elf_truncate_section(struct elf *elf, struct section *sec)
if (truncated) {
/* when we remove symbols */
- WARN("truncated; but more data\n");
+ ERROR("truncated; but more data\n");
return -1;
}
if (!data->d_size) {
- WARN("zero size data");
+ ERROR("zero size data");
return -1;
}
@@ -1297,9 +1330,6 @@ int elf_write(struct elf *elf)
struct section *sec;
Elf_Scn *s;
- if (opts.dryrun)
- return 0;
-
/* Update changed relocation sections and section headers: */
list_for_each_entry(sec, &elf->sections, list) {
if (sec->truncate)
@@ -1308,13 +1338,13 @@ int elf_write(struct elf *elf)
if (sec_changed(sec)) {
s = elf_getscn(elf->elf, sec->idx);
if (!s) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
/* Note this also flags the section dirty */
if (!gelf_update_shdr(s, &sec->sh)) {
- WARN_ELF("gelf_update_shdr");
+ ERROR_ELF("gelf_update_shdr");
return -1;
}
@@ -1327,7 +1357,7 @@ int elf_write(struct elf *elf)
/* Write all changes to the file. */
if (elf_update(elf->elf, ELF_C_WRITE) < 0) {
- WARN_ELF("elf_update");
+ ERROR_ELF("elf_update");
return -1;
}
diff --git a/tools/objtool/include/objtool/arch.h b/tools/objtool/include/objtool/arch.h
index 0b303eba660e..be33c7b43180 100644
--- a/tools/objtool/include/objtool/arch.h
+++ b/tools/objtool/include/objtool/arch.h
@@ -19,7 +19,8 @@ enum insn_type {
INSN_CALL,
INSN_CALL_DYNAMIC,
INSN_RETURN,
- INSN_CONTEXT_SWITCH,
+ INSN_SYSCALL,
+ INSN_SYSRET,
INSN_BUG,
INSN_NOP,
INSN_STAC,
@@ -28,6 +29,7 @@ enum insn_type {
INSN_CLD,
INSN_TRAP,
INSN_ENDBR,
+ INSN_LEA_RIP,
INSN_OTHER,
};
@@ -95,5 +97,9 @@ bool arch_is_embedded_insn(struct symbol *sym);
int arch_rewrite_retpolines(struct objtool_file *file);
bool arch_pc_relative_reloc(struct reloc *reloc);
+bool arch_absolute_reloc(struct elf *elf, struct reloc *reloc);
+
+unsigned int arch_reloc_size(struct reloc *reloc);
+unsigned long arch_jump_table_sym_offset(struct reloc *reloc, struct reloc *table);
#endif /* _ARCH_H */
diff --git a/tools/objtool/include/objtool/builtin.h b/tools/objtool/include/objtool/builtin.h
index fcca6662c8b4..ab22673862e1 100644
--- a/tools/objtool/include/objtool/builtin.h
+++ b/tools/objtool/include/objtool/builtin.h
@@ -26,24 +26,28 @@ struct opts {
bool uaccess;
int prefix;
bool cfi;
+ bool noabs;
/* options: */
bool backtrace;
- bool backup;
bool dryrun;
bool link;
bool mnop;
bool module;
bool no_unreachable;
+ const char *output;
bool sec_address;
bool stats;
bool verbose;
+ bool werror;
};
extern struct opts opts;
-extern int cmd_parse_options(int argc, const char **argv, const char * const usage[]);
+int cmd_parse_options(int argc, const char **argv, const char * const usage[]);
-extern int objtool_run(int argc, const char **argv);
+int objtool_run(int argc, const char **argv);
+
+void print_args(void);
#endif /* _BUILTIN_H */
diff --git a/tools/objtool/include/objtool/check.h b/tools/objtool/include/objtool/check.h
index daa46f1f0965..00fb745e7233 100644
--- a/tools/objtool/include/objtool/check.h
+++ b/tools/objtool/include/objtool/check.h
@@ -34,6 +34,8 @@ struct alt_group {
* This is shared with the other alt_groups in the same alternative.
*/
struct cfi_state **cfi;
+
+ bool ignore;
};
#define INSN_CHUNK_BITS 8
@@ -54,7 +56,6 @@ struct instruction {
u32 idx : INSN_CHUNK_BITS,
dead_end : 1,
- ignore : 1,
ignore_alts : 1,
hint : 1,
save : 1,
@@ -71,7 +72,10 @@ struct instruction {
struct instruction *first_jump_src;
union {
struct symbol *_call_dest;
- struct reloc *_jump_table;
+ struct {
+ struct reloc *_jump_table;
+ unsigned long _jump_table_size;
+ };
};
struct alternative *alts;
struct symbol *sym;
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index 9f71e988eca4..df8434d3b744 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -67,15 +67,20 @@ struct symbol {
u8 profiling_func : 1;
u8 warned : 1;
u8 embedded_insn : 1;
+ u8 local_label : 1;
+ u8 frame_pointer : 1;
+ u8 ignore : 1;
+ u8 nocfi : 1;
struct list_head pv_target;
struct reloc *relocs;
+ struct section *group_sec;
};
struct reloc {
struct elf_hash_node hash;
struct section *sec;
struct symbol *sym;
- struct reloc *sym_next_reloc;
+ unsigned long _sym_next_reloc;
};
struct elf {
@@ -295,6 +300,31 @@ static inline void set_reloc_type(struct elf *elf, struct reloc *reloc, unsigned
mark_sec_changed(elf, reloc->sec, true);
}
+#define RELOC_JUMP_TABLE_BIT 1UL
+
+/* Does reloc mark the beginning of a jump table? */
+static inline bool is_jump_table(struct reloc *reloc)
+{
+ return reloc->_sym_next_reloc & RELOC_JUMP_TABLE_BIT;
+}
+
+static inline void set_jump_table(struct reloc *reloc)
+{
+ reloc->_sym_next_reloc |= RELOC_JUMP_TABLE_BIT;
+}
+
+static inline struct reloc *sym_next_reloc(struct reloc *reloc)
+{
+ return (struct reloc *)(reloc->_sym_next_reloc & ~RELOC_JUMP_TABLE_BIT);
+}
+
+static inline void set_sym_next_reloc(struct reloc *reloc, struct reloc *next)
+{
+ unsigned long bit = reloc->_sym_next_reloc & RELOC_JUMP_TABLE_BIT;
+
+ reloc->_sym_next_reloc = (unsigned long)next | bit;
+}
+
#define for_each_sec(file, sec) \
list_for_each_entry(sec, &file->elf->sections, list)
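
The accessors above fold a "start of jump table" flag into bit 0 of the next-reloc pointer; this works because struct reloc is more than byte-aligned, so bit 0 of any valid pointer is zero. A self-contained sketch of the same tagging idea, using a hypothetical node type rather than struct reloc:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct node { int value; };	/* alignment >= 2, so bit 0 of a pointer is free */

#define TAG_BIT 1UL

static uintptr_t pack(struct node *next, int flag)
{
	assert(((uintptr_t)next & TAG_BIT) == 0);	/* pointer must be aligned */
	return (uintptr_t)next | (flag ? TAG_BIT : 0);
}

static struct node *unpack_ptr(uintptr_t word)
{
	return (struct node *)(word & ~TAG_BIT);
}

static int unpack_flag(uintptr_t word)
{
	return word & TAG_BIT;
}

int main(void)
{
	struct node n = { .value = 42 };
	uintptr_t word = pack(&n, 1);

	printf("value=%d flag=%d\n", unpack_ptr(word)->value, unpack_flag(word));
	return 0;
}

set_jump_table()/is_jump_table() and set_sym_next_reloc()/sym_next_reloc() in the hunk above are the same pack/unpack operations, specialized to RELOC_JUMP_TABLE_BIT.
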
diff --git a/tools/objtool/include/objtool/objtool.h b/tools/objtool/include/objtool/objtool.h
index 94a33ee7b363..c0dc86a78ff6 100644
--- a/tools/objtool/include/objtool/objtool.h
+++ b/tools/objtool/include/objtool/objtool.h
@@ -41,7 +41,7 @@ struct objtool_file {
struct objtool_file *objtool_open_read(const char *_objname);
-void objtool_pv_add(struct objtool_file *file, int idx, struct symbol *func);
+int objtool_pv_add(struct objtool_file *file, int idx, struct symbol *func);
int check(struct objtool_file *file);
int orc_dump(const char *objname);
diff --git a/tools/objtool/include/objtool/orc.h b/tools/objtool/include/objtool/orc.h
new file mode 100644
index 000000000000..15a32def1071
--- /dev/null
+++ b/tools/objtool/include/objtool/orc.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _OBJTOOL_ORC_H
+#define _OBJTOOL_ORC_H
+
+#include <objtool/check.h>
+
+int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi, struct instruction *insn);
+void orc_print_dump(struct elf *dummy_elf, struct orc_entry *orc, int i);
+int write_orc_entry(struct elf *elf, struct section *orc_sec,
+ struct section *ip_sec, unsigned int idx,
+ struct section *insn_sec, unsigned long insn_off,
+ struct orc_entry *o);
+
+#endif /* _OBJTOOL_ORC_H */
diff --git a/tools/objtool/include/objtool/special.h b/tools/objtool/include/objtool/special.h
index 86d4af9c5aa9..72d09c0adf1a 100644
--- a/tools/objtool/include/objtool/special.h
+++ b/tools/objtool/include/objtool/special.h
@@ -10,14 +10,12 @@
#include <objtool/check.h>
#include <objtool/elf.h>
-#define C_JUMP_TABLE_SECTION ".rodata..c_jump_table"
+#define C_JUMP_TABLE_SECTION ".data.rel.ro.c_jump_table"
struct special_alt {
struct list_head list;
bool group;
- bool skip_orig;
- bool skip_alt;
bool jump_or_nop;
u8 key_addend;
@@ -32,11 +30,12 @@ struct special_alt {
int special_get_alts(struct elf *elf, struct list_head *alts);
-void arch_handle_alternative(unsigned short feature, struct special_alt *alt);
+void arch_handle_alternative(struct special_alt *alt);
bool arch_support_alt_relocation(struct special_alt *special_alt,
struct instruction *insn,
struct reloc *reloc);
struct reloc *arch_find_switch_table(struct objtool_file *file,
- struct instruction *insn);
+ struct instruction *insn,
+ unsigned long *table_size);
#endif /* _SPECIAL_H */
diff --git a/tools/objtool/include/objtool/warn.h b/tools/objtool/include/objtool/warn.h
index ac04d3fe4dd9..cb8fe846d9dd 100644
--- a/tools/objtool/include/objtool/warn.h
+++ b/tools/objtool/include/objtool/warn.h
@@ -11,6 +11,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
+#include <errno.h>
#include <objtool/builtin.h>
#include <objtool/elf.h>
@@ -41,23 +42,43 @@ static inline char *offstr(struct section *sec, unsigned long offset)
return str;
}
-#define WARN(format, ...) \
- fprintf(stderr, \
- "%s: warning: objtool: " format "\n", \
- objname, ##__VA_ARGS__)
+#define ___WARN(severity, extra, format, ...) \
+ fprintf(stderr, \
+ "%s%s%s: objtool" extra ": " format "\n", \
+ objname ?: "", \
+ objname ? ": " : "", \
+ severity, \
+ ##__VA_ARGS__)
-#define WARN_FUNC(format, sec, offset, ...) \
-({ \
- char *_str = offstr(sec, offset); \
- WARN("%s: " format, _str, ##__VA_ARGS__); \
- free(_str); \
+#define __WARN(severity, format, ...) \
+ ___WARN(severity, "", format, ##__VA_ARGS__)
+
+#define __WARN_LINE(severity, format, ...) \
+ ___WARN(severity, " [%s:%d]", format, __FILE__, __LINE__, ##__VA_ARGS__)
+
+#define __WARN_ELF(severity, format, ...) \
+ __WARN_LINE(severity, "%s: " format " failed: %s", __func__, ##__VA_ARGS__, elf_errmsg(-1))
+
+#define __WARN_GLIBC(severity, format, ...) \
+ __WARN_LINE(severity, "%s: " format " failed: %s", __func__, ##__VA_ARGS__, strerror(errno))
+
+#define __WARN_FUNC(severity, sec, offset, format, ...) \
+({ \
+ char *_str = offstr(sec, offset); \
+ __WARN(severity, "%s: " format, _str, ##__VA_ARGS__); \
+ free(_str); \
})
+#define WARN_STR (opts.werror ? "error" : "warning")
+
+#define WARN(format, ...) __WARN(WARN_STR, format, ##__VA_ARGS__)
+#define WARN_FUNC(sec, offset, format, ...) __WARN_FUNC(WARN_STR, sec, offset, format, ##__VA_ARGS__)
+
#define WARN_INSN(insn, format, ...) \
({ \
struct instruction *_insn = (insn); \
if (!_insn->sym || !_insn->sym->warned) \
- WARN_FUNC(format, _insn->sec, _insn->offset, \
+ WARN_FUNC(_insn->sec, _insn->offset, format, \
##__VA_ARGS__); \
if (_insn->sym) \
_insn->sym->warned = 1; \
@@ -73,7 +94,12 @@ static inline char *offstr(struct section *sec, unsigned long offset)
} \
})
-#define WARN_ELF(format, ...) \
- WARN(format ": %s", ##__VA_ARGS__, elf_errmsg(-1))
+#define ERROR_STR "error"
+
+#define ERROR(format, ...) __WARN(ERROR_STR, format, ##__VA_ARGS__)
+#define ERROR_ELF(format, ...) __WARN_ELF(ERROR_STR, format, ##__VA_ARGS__)
+#define ERROR_GLIBC(format, ...) __WARN_GLIBC(ERROR_STR, format, ##__VA_ARGS__)
+#define ERROR_FUNC(sec, offset, format, ...) __WARN_FUNC(ERROR_STR, sec, offset, format, ##__VA_ARGS__)
+#define ERROR_INSN(insn, format, ...) WARN_FUNC(insn->sec, insn->offset, format, ##__VA_ARGS__)
#endif /* _WARN_H */
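
With the rework above, one call site prints either "warning" or "error" depending on opts.werror, while the ERROR_* variants always use the fatal severity. A stripped-down standalone version of that layering (no objname handling, illustrative macro names only):

#include <stdbool.h>
#include <stdio.h>

static bool werror;	/* stand-in for opts.werror */

#define SEV_STR		(werror ? "error" : "warning")

#define MSG(sev, fmt, ...) \
	fprintf(stderr, "objtool: %s: " fmt "\n", sev, ##__VA_ARGS__)

#define WARN(fmt, ...)	MSG(SEV_STR, fmt, ##__VA_ARGS__)
#define ERROR(fmt, ...)	MSG("error", fmt, ##__VA_ARGS__)

int main(void)
{
	WARN("%s() is missing a __noreturn annotation", "foo");
	werror = true;
	WARN("the same message is now reported as an %s", "error");
	ERROR("always printed with the fatal severity");
	return 0;
}
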
diff --git a/tools/objtool/noreturns.h b/tools/objtool/noreturns.h
index 1685d7ea6a9f..802895fae3ca 100644
--- a/tools/objtool/noreturns.h
+++ b/tools/objtool/noreturns.h
@@ -6,23 +6,27 @@
*
* Yes, this is unfortunate. A better solution is in the works.
*/
+NORETURN(__fortify_panic)
+NORETURN(__ia32_sys_exit)
+NORETURN(__ia32_sys_exit_group)
NORETURN(__kunit_abort)
NORETURN(__module_put_and_kthread_exit)
-NORETURN(__reiserfs_panic)
NORETURN(__stack_chk_fail)
NORETURN(__tdx_hypercall_failed)
NORETURN(__ubsan_handle_builtin_unreachable)
-NORETURN(arch_call_rest_init)
+NORETURN(__x64_sys_exit)
+NORETURN(__x64_sys_exit_group)
+NORETURN(acpi_processor_ffh_play_dead)
NORETURN(arch_cpu_idle_dead)
NORETURN(bch2_trans_in_restart_error)
NORETURN(bch2_trans_restart_error)
+NORETURN(bch2_trans_unlocked_or_in_restart_error)
NORETURN(cpu_bringup_and_idle)
NORETURN(cpu_startup_entry)
NORETURN(do_exit)
NORETURN(do_group_exit)
NORETURN(do_task_dead)
NORETURN(ex_handler_msr_mce)
-NORETURN(fortify_panic)
NORETURN(hlt_play_dead)
NORETURN(hv_ghcb_terminate)
NORETURN(kthread_complete_and_exit)
@@ -31,13 +35,16 @@ NORETURN(kunit_try_catch_throw)
NORETURN(machine_real_restart)
NORETURN(make_task_dead)
NORETURN(mpt_halt_firmware)
+NORETURN(mwait_play_dead)
NORETURN(nmi_panic_self_stop)
NORETURN(panic)
+NORETURN(vpanic)
NORETURN(panic_smp_self_stop)
NORETURN(rest_init)
NORETURN(rewind_stack_and_make_dead)
+NORETURN(rust_begin_unwind)
+NORETURN(rust_helper_BUG)
NORETURN(sev_es_terminate)
-NORETURN(snp_abort)
NORETURN(start_kernel)
NORETURN(stop_this_cpu)
NORETURN(usercopy_abort)
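
noreturns.h is an x-macro list: each consumer defines NORETURN() to the expansion it needs and then includes the header. The sketch below shows the pattern with a hypothetical three-entry list standing in for the real header:

#include <stdio.h>
#include <string.h>

/* hypothetical stand-in for #include "noreturns.h" */
#define NORETURN_LIST \
	NORETURN(do_exit) \
	NORETURN(panic) \
	NORETURN(usercopy_abort)

/* expand the list into an array of function names */
static const char * const noreturns[] = {
#define NORETURN(func) #func,
	NORETURN_LIST
#undef NORETURN
};

static int is_noreturn(const char *name)
{
	for (size_t i = 0; i < sizeof(noreturns) / sizeof(noreturns[0]); i++)
		if (!strcmp(name, noreturns[i]))
			return 1;
	return 0;
}

int main(void)
{
	printf("panic: %d, printf: %d\n", is_noreturn("panic"), is_noreturn("printf"));
	return 0;
}

objtool consumes the real header in essentially this way to build its table of known __noreturn functions.
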
diff --git a/tools/objtool/objtool.c b/tools/objtool/objtool.c
index f40febdd6e36..5c8b974ad0f9 100644
--- a/tools/objtool/objtool.c
+++ b/tools/objtool/objtool.c
@@ -18,87 +18,19 @@
bool help;
-const char *objname;
static struct objtool_file file;
-static bool objtool_create_backup(const char *_objname)
+struct objtool_file *objtool_open_read(const char *filename)
{
- int len = strlen(_objname);
- char *buf, *base, *name = malloc(len+6);
- int s, d, l, t;
-
- if (!name) {
- perror("failed backup name malloc");
- return false;
- }
-
- strcpy(name, _objname);
- strcpy(name + len, ".orig");
-
- d = open(name, O_CREAT|O_WRONLY|O_TRUNC, 0644);
- if (d < 0) {
- perror("failed to create backup file");
- return false;
- }
-
- s = open(_objname, O_RDONLY);
- if (s < 0) {
- perror("failed to open orig file");
- return false;
- }
-
- buf = malloc(4096);
- if (!buf) {
- perror("failed backup data malloc");
- return false;
- }
-
- while ((l = read(s, buf, 4096)) > 0) {
- base = buf;
- do {
- t = write(d, base, l);
- if (t < 0) {
- perror("failed backup write");
- return false;
- }
- base += t;
- l -= t;
- } while (l);
- }
-
- if (l < 0) {
- perror("failed backup read");
- return false;
- }
-
- free(name);
- free(buf);
- close(d);
- close(s);
-
- return true;
-}
-
-struct objtool_file *objtool_open_read(const char *_objname)
-{
- if (objname) {
- if (strcmp(objname, _objname)) {
- WARN("won't handle more than one file at a time");
- return NULL;
- }
- return &file;
+ if (file.elf) {
+ ERROR("won't handle more than one file at a time");
+ return NULL;
}
- objname = _objname;
- file.elf = elf_open_read(objname, O_RDWR);
+ file.elf = elf_open_read(filename, O_RDWR);
if (!file.elf)
return NULL;
- if (opts.backup && !objtool_create_backup(objname)) {
- WARN("can't create backup file");
- return NULL;
- }
-
hash_init(file.insn_hash);
INIT_LIST_HEAD(&file.retpoline_call_list);
INIT_LIST_HEAD(&file.return_thunk_list);
@@ -112,14 +44,14 @@ struct objtool_file *objtool_open_read(const char *_objname)
return &file;
}
-void objtool_pv_add(struct objtool_file *f, int idx, struct symbol *func)
+int objtool_pv_add(struct objtool_file *f, int idx, struct symbol *func)
{
if (!opts.noinstr)
- return;
+ return 0;
if (!f->pv_ops) {
- WARN("paravirt confusion");
- return;
+ ERROR("paravirt confusion");
+ return -1;
}
/*
@@ -128,14 +60,15 @@ void objtool_pv_add(struct objtool_file *f, int idx, struct symbol *func)
*/
if (!strcmp(func->name, "_paravirt_nop") ||
!strcmp(func->name, "_paravirt_ident_64"))
- return;
+ return 0;
/* already added this function */
if (!list_empty(&func->pv_target))
- return;
+ return 0;
list_add(&func->pv_target, &f->pv_ops[idx].targets);
f->pv_ops[idx].clean = false;
+ return 0;
}
int main(int argc, const char **argv)
diff --git a/tools/objtool/orc_dump.c b/tools/objtool/orc_dump.c
index 0e183bb1c720..1dd9fc18fe62 100644
--- a/tools/objtool/orc_dump.c
+++ b/tools/objtool/orc_dump.c
@@ -6,66 +6,11 @@
#include <unistd.h>
#include <asm/orc_types.h>
#include <objtool/objtool.h>
+#include <objtool/orc.h>
#include <objtool/warn.h>
#include <objtool/endianness.h>
-static const char *reg_name(unsigned int reg)
-{
- switch (reg) {
- case ORC_REG_PREV_SP:
- return "prevsp";
- case ORC_REG_DX:
- return "dx";
- case ORC_REG_DI:
- return "di";
- case ORC_REG_BP:
- return "bp";
- case ORC_REG_SP:
- return "sp";
- case ORC_REG_R10:
- return "r10";
- case ORC_REG_R13:
- return "r13";
- case ORC_REG_BP_INDIRECT:
- return "bp(ind)";
- case ORC_REG_SP_INDIRECT:
- return "sp(ind)";
- default:
- return "?";
- }
-}
-
-static const char *orc_type_name(unsigned int type)
-{
- switch (type) {
- case ORC_TYPE_UNDEFINED:
- return "(und)";
- case ORC_TYPE_END_OF_STACK:
- return "end";
- case ORC_TYPE_CALL:
- return "call";
- case ORC_TYPE_REGS:
- return "regs";
- case ORC_TYPE_REGS_PARTIAL:
- return "regs (partial)";
- default:
- return "?";
- }
-}
-
-static void print_reg(unsigned int reg, int offset)
-{
- if (reg == ORC_REG_BP_INDIRECT)
- printf("(bp%+d)", offset);
- else if (reg == ORC_REG_SP_INDIRECT)
- printf("(sp)%+d", offset);
- else if (reg == ORC_REG_UNDEFINED)
- printf("(und)");
- else
- printf("%s%+d", reg_name(reg), offset);
-}
-
-int orc_dump(const char *_objname)
+int orc_dump(const char *filename)
{
int fd, nr_entries, i, *orc_ip = NULL, orc_size = 0;
struct orc_entry *orc = NULL;
@@ -81,12 +26,9 @@ int orc_dump(const char *_objname)
Elf_Data *data, *symtab = NULL, *rela_orc_ip = NULL;
struct elf dummy_elf = {};
-
- objname = _objname;
-
elf_version(EV_CURRENT);
- fd = open(objname, O_RDONLY);
+ fd = open(filename, O_RDONLY);
if (fd == -1) {
perror("open");
return -1;
@@ -94,47 +36,47 @@ int orc_dump(const char *_objname)
elf = elf_begin(fd, ELF_C_READ_MMAP, NULL);
if (!elf) {
- WARN_ELF("elf_begin");
+ ERROR_ELF("elf_begin");
return -1;
}
if (!elf64_getehdr(elf)) {
- WARN_ELF("elf64_getehdr");
+ ERROR_ELF("elf64_getehdr");
return -1;
}
memcpy(&dummy_elf.ehdr, elf64_getehdr(elf), sizeof(dummy_elf.ehdr));
if (elf_getshdrnum(elf, &nr_sections)) {
- WARN_ELF("elf_getshdrnum");
+ ERROR_ELF("elf_getshdrnum");
return -1;
}
if (elf_getshdrstrndx(elf, &shstrtab_idx)) {
- WARN_ELF("elf_getshdrstrndx");
+ ERROR_ELF("elf_getshdrstrndx");
return -1;
}
for (i = 0; i < nr_sections; i++) {
scn = elf_getscn(elf, i);
if (!scn) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
if (!gelf_getshdr(scn, &sh)) {
- WARN_ELF("gelf_getshdr");
+ ERROR_ELF("gelf_getshdr");
return -1;
}
name = elf_strptr(elf, shstrtab_idx, sh.sh_name);
if (!name) {
- WARN_ELF("elf_strptr");
+ ERROR_ELF("elf_strptr");
return -1;
}
data = elf_getdata(scn, NULL);
if (!data) {
- WARN_ELF("elf_getdata");
+ ERROR_ELF("elf_getdata");
return -1;
}
@@ -157,7 +99,7 @@ int orc_dump(const char *_objname)
return 0;
if (orc_size % sizeof(*orc) != 0) {
- WARN("bad .orc_unwind section size");
+ ERROR("bad .orc_unwind section size");
return -1;
}
@@ -165,36 +107,36 @@ int orc_dump(const char *_objname)
for (i = 0; i < nr_entries; i++) {
if (rela_orc_ip) {
if (!gelf_getrela(rela_orc_ip, i, &rela)) {
- WARN_ELF("gelf_getrela");
+ ERROR_ELF("gelf_getrela");
return -1;
}
if (!gelf_getsym(symtab, GELF_R_SYM(rela.r_info), &sym)) {
- WARN_ELF("gelf_getsym");
+ ERROR_ELF("gelf_getsym");
return -1;
}
if (GELF_ST_TYPE(sym.st_info) == STT_SECTION) {
scn = elf_getscn(elf, sym.st_shndx);
if (!scn) {
- WARN_ELF("elf_getscn");
+ ERROR_ELF("elf_getscn");
return -1;
}
if (!gelf_getshdr(scn, &sh)) {
- WARN_ELF("gelf_getshdr");
+ ERROR_ELF("gelf_getshdr");
return -1;
}
name = elf_strptr(elf, shstrtab_idx, sh.sh_name);
if (!name) {
- WARN_ELF("elf_strptr");
+ ERROR_ELF("elf_strptr");
return -1;
}
} else {
name = elf_strptr(elf, strtab_idx, sym.st_name);
if (!name) {
- WARN_ELF("elf_strptr");
+ ERROR_ELF("elf_strptr");
return -1;
}
}
@@ -205,17 +147,7 @@ int orc_dump(const char *_objname)
printf("%llx:", (unsigned long long)(orc_ip_addr + (i * sizeof(int)) + orc_ip[i]));
}
- printf("type:%s", orc_type_name(orc[i].type));
-
- printf(" sp:");
-
- print_reg(orc[i].sp_reg, bswap_if_needed(&dummy_elf, orc[i].sp_offset));
-
- printf(" bp:");
-
- print_reg(orc[i].bp_reg, bswap_if_needed(&dummy_elf, orc[i].bp_offset));
-
- printf(" signal:%d\n", orc[i].signal);
+ orc_print_dump(&dummy_elf, orc, i);
}
elf_end(elf);
diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c
index bae343908867..922e6aac7cea 100644
--- a/tools/objtool/orc_gen.c
+++ b/tools/objtool/orc_gen.c
@@ -10,121 +10,10 @@
#include <asm/orc_types.h>
#include <objtool/check.h>
+#include <objtool/orc.h>
#include <objtool/warn.h>
#include <objtool/endianness.h>
-static int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi,
- struct instruction *insn)
-{
- struct cfi_reg *bp = &cfi->regs[CFI_BP];
-
- memset(orc, 0, sizeof(*orc));
-
- if (!cfi) {
- /*
- * This is usually either unreachable nops/traps (which don't
- * trigger unreachable instruction warnings), or
- * STACK_FRAME_NON_STANDARD functions.
- */
- orc->type = ORC_TYPE_UNDEFINED;
- return 0;
- }
-
- switch (cfi->type) {
- case UNWIND_HINT_TYPE_UNDEFINED:
- orc->type = ORC_TYPE_UNDEFINED;
- return 0;
- case UNWIND_HINT_TYPE_END_OF_STACK:
- orc->type = ORC_TYPE_END_OF_STACK;
- return 0;
- case UNWIND_HINT_TYPE_CALL:
- orc->type = ORC_TYPE_CALL;
- break;
- case UNWIND_HINT_TYPE_REGS:
- orc->type = ORC_TYPE_REGS;
- break;
- case UNWIND_HINT_TYPE_REGS_PARTIAL:
- orc->type = ORC_TYPE_REGS_PARTIAL;
- break;
- default:
- WARN_INSN(insn, "unknown unwind hint type %d", cfi->type);
- return -1;
- }
-
- orc->signal = cfi->signal;
-
- switch (cfi->cfa.base) {
- case CFI_SP:
- orc->sp_reg = ORC_REG_SP;
- break;
- case CFI_SP_INDIRECT:
- orc->sp_reg = ORC_REG_SP_INDIRECT;
- break;
- case CFI_BP:
- orc->sp_reg = ORC_REG_BP;
- break;
- case CFI_BP_INDIRECT:
- orc->sp_reg = ORC_REG_BP_INDIRECT;
- break;
- case CFI_R10:
- orc->sp_reg = ORC_REG_R10;
- break;
- case CFI_R13:
- orc->sp_reg = ORC_REG_R13;
- break;
- case CFI_DI:
- orc->sp_reg = ORC_REG_DI;
- break;
- case CFI_DX:
- orc->sp_reg = ORC_REG_DX;
- break;
- default:
- WARN_INSN(insn, "unknown CFA base reg %d", cfi->cfa.base);
- return -1;
- }
-
- switch (bp->base) {
- case CFI_UNDEFINED:
- orc->bp_reg = ORC_REG_UNDEFINED;
- break;
- case CFI_CFA:
- orc->bp_reg = ORC_REG_PREV_SP;
- break;
- case CFI_BP:
- orc->bp_reg = ORC_REG_BP;
- break;
- default:
- WARN_INSN(insn, "unknown BP base reg %d", bp->base);
- return -1;
- }
-
- orc->sp_offset = cfi->cfa.offset;
- orc->bp_offset = bp->offset;
-
- return 0;
-}
-
-static int write_orc_entry(struct elf *elf, struct section *orc_sec,
- struct section *ip_sec, unsigned int idx,
- struct section *insn_sec, unsigned long insn_off,
- struct orc_entry *o)
-{
- struct orc_entry *orc;
-
- /* populate ORC data */
- orc = (struct orc_entry *)orc_sec->data->d_buf + idx;
- memcpy(orc, o, sizeof(*orc));
- orc->sp_offset = bswap_if_needed(elf, orc->sp_offset);
- orc->bp_offset = bswap_if_needed(elf, orc->bp_offset);
-
- /* populate reloc for ip */
- if (!elf_init_reloc_text_sym(elf, ip_sec, idx * sizeof(int), idx,
- insn_sec, insn_off))
- return -1;
-
- return 0;
-}
-
struct orc_list_entry {
struct list_head list;
struct orc_entry orc;
diff --git a/tools/objtool/special.c b/tools/objtool/special.c
index 91b1950f5bd8..c80fed8a840e 100644
--- a/tools/objtool/special.c
+++ b/tools/objtool/special.c
@@ -54,7 +54,7 @@ static const struct special_entry entries[] = {
{},
};
-void __weak arch_handle_alternative(unsigned short feature, struct special_alt *alt)
+void __weak arch_handle_alternative(struct special_alt *alt)
{
}
@@ -84,29 +84,20 @@ static int get_alt_entry(struct elf *elf, const struct special_entry *entry,
entry->new_len);
}
- if (entry->feature) {
- unsigned short feature;
-
- feature = bswap_if_needed(elf,
- *(unsigned short *)(sec->data->d_buf +
- offset +
- entry->feature));
- arch_handle_alternative(feature, alt);
- }
-
orig_reloc = find_reloc_by_dest(elf, sec, offset + entry->orig);
if (!orig_reloc) {
- WARN_FUNC("can't find orig reloc", sec, offset + entry->orig);
+ ERROR_FUNC(sec, offset + entry->orig, "can't find orig reloc");
return -1;
}
reloc_to_sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off);
+ arch_handle_alternative(alt);
+
if (!entry->group || alt->new_len) {
new_reloc = find_reloc_by_dest(elf, sec, offset + entry->new);
if (!new_reloc) {
- WARN_FUNC("can't find new reloc",
- sec, offset + entry->new);
+ ERROR_FUNC(sec, offset + entry->new, "can't find new reloc");
return -1;
}
@@ -122,8 +113,7 @@ static int get_alt_entry(struct elf *elf, const struct special_entry *entry,
key_reloc = find_reloc_by_dest(elf, sec, offset + entry->key);
if (!key_reloc) {
- WARN_FUNC("can't find key reloc",
- sec, offset + entry->key);
+ ERROR_FUNC(sec, offset + entry->key, "can't find key reloc");
return -1;
}
alt->key_addend = reloc_addend(key_reloc);
@@ -153,8 +143,7 @@ int special_get_alts(struct elf *elf, struct list_head *alts)
continue;
if (sec->sh.sh_size % entry->size != 0) {
- WARN("%s size not a multiple of %d",
- sec->name, entry->size);
+ ERROR("%s size not a multiple of %d", sec->name, entry->size);
return -1;
}
@@ -163,7 +152,7 @@ int special_get_alts(struct elf *elf, struct list_head *alts)
for (idx = 0; idx < nr_entries; idx++) {
alt = malloc(sizeof(*alt));
if (!alt) {
- WARN("malloc failed");
+ ERROR_GLIBC("malloc failed");
return -1;
}
memset(alt, 0, sizeof(*alt));
diff --git a/tools/pci/Build b/tools/pci/Build
deleted file mode 100644
index c375aea21790..000000000000
--- a/tools/pci/Build
+++ /dev/null
@@ -1 +0,0 @@
-pcitest-y += pcitest.o
diff --git a/tools/pci/Makefile b/tools/pci/Makefile
deleted file mode 100644
index 57744778b518..000000000000
--- a/tools/pci/Makefile
+++ /dev/null
@@ -1,58 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-include ../scripts/Makefile.include
-
-bindir ?= /usr/bin
-
-ifeq ($(srctree),)
-srctree := $(patsubst %/,%,$(dir $(CURDIR)))
-srctree := $(patsubst %/,%,$(dir $(srctree)))
-endif
-
-# Do not use make's built-in rules
-# (this improves performance and avoids hard-to-debug behaviour);
-MAKEFLAGS += -r
-
-CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include
-
-ALL_TARGETS := pcitest
-ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
-
-SCRIPTS := pcitest.sh
-
-all: $(ALL_PROGRAMS)
-
-export srctree OUTPUT CC LD CFLAGS
-include $(srctree)/tools/build/Makefile.include
-
-#
-# We need the following to be outside of kernel tree
-#
-$(OUTPUT)include/linux/: ../../include/uapi/linux/
- mkdir -p $(OUTPUT)include/linux/ 2>&1 || true
- ln -sf $(CURDIR)/../../include/uapi/linux/pcitest.h $@
-
-prepare: $(OUTPUT)include/linux/
-
-PCITEST_IN := $(OUTPUT)pcitest-in.o
-$(PCITEST_IN): prepare FORCE
- $(Q)$(MAKE) $(build)=pcitest
-$(OUTPUT)pcitest: $(PCITEST_IN)
- $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@
-
-clean:
- rm -f $(ALL_PROGRAMS)
- rm -rf $(OUTPUT)include/
- find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete
-
-install: $(ALL_PROGRAMS)
- install -d -m 755 $(DESTDIR)$(bindir); \
- for program in $(ALL_PROGRAMS); do \
- install $$program $(DESTDIR)$(bindir); \
- done; \
- for script in $(SCRIPTS); do \
- install $$script $(DESTDIR)$(bindir); \
- done
-
-FORCE:
-
-.PHONY: all install clean FORCE prepare
diff --git a/tools/pci/pcitest.c b/tools/pci/pcitest.c
deleted file mode 100644
index 441b54234635..000000000000
--- a/tools/pci/pcitest.c
+++ /dev/null
@@ -1,252 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/**
- * Userspace PCI Endpoint Test Module
- *
- * Copyright (C) 2017 Texas Instruments
- * Author: Kishon Vijay Abraham I <kishon@ti.com>
- */
-
-#include <errno.h>
-#include <fcntl.h>
-#include <stdbool.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <sys/ioctl.h>
-#include <unistd.h>
-
-#include <linux/pcitest.h>
-
-#define BILLION 1E9
-
-static char *result[] = { "NOT OKAY", "OKAY" };
-static char *irq[] = { "LEGACY", "MSI", "MSI-X" };
-
-struct pci_test {
- char *device;
- char barnum;
- bool legacyirq;
- unsigned int msinum;
- unsigned int msixnum;
- int irqtype;
- bool set_irqtype;
- bool get_irqtype;
- bool clear_irq;
- bool read;
- bool write;
- bool copy;
- unsigned long size;
- bool use_dma;
-};
-
-static int run_test(struct pci_test *test)
-{
- struct pci_endpoint_test_xfer_param param = {};
- int ret = -EINVAL;
- int fd;
-
- fd = open(test->device, O_RDWR);
- if (fd < 0) {
- perror("can't open PCI Endpoint Test device");
- return -ENODEV;
- }
-
- if (test->barnum >= 0 && test->barnum <= 5) {
- ret = ioctl(fd, PCITEST_BAR, test->barnum);
- fprintf(stdout, "BAR%d:\t\t", test->barnum);
- if (ret < 0)
- fprintf(stdout, "TEST FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->set_irqtype) {
- ret = ioctl(fd, PCITEST_SET_IRQTYPE, test->irqtype);
- fprintf(stdout, "SET IRQ TYPE TO %s:\t\t", irq[test->irqtype]);
- if (ret < 0)
- fprintf(stdout, "FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->get_irqtype) {
- ret = ioctl(fd, PCITEST_GET_IRQTYPE);
- fprintf(stdout, "GET IRQ TYPE:\t\t");
- if (ret < 0)
- fprintf(stdout, "FAILED\n");
- else
- fprintf(stdout, "%s\n", irq[ret]);
- }
-
- if (test->clear_irq) {
- ret = ioctl(fd, PCITEST_CLEAR_IRQ);
- fprintf(stdout, "CLEAR IRQ:\t\t");
- if (ret < 0)
- fprintf(stdout, "FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->legacyirq) {
- ret = ioctl(fd, PCITEST_LEGACY_IRQ, 0);
- fprintf(stdout, "LEGACY IRQ:\t");
- if (ret < 0)
- fprintf(stdout, "TEST FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->msinum > 0 && test->msinum <= 32) {
- ret = ioctl(fd, PCITEST_MSI, test->msinum);
- fprintf(stdout, "MSI%d:\t\t", test->msinum);
- if (ret < 0)
- fprintf(stdout, "TEST FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->msixnum > 0 && test->msixnum <= 2048) {
- ret = ioctl(fd, PCITEST_MSIX, test->msixnum);
- fprintf(stdout, "MSI-X%d:\t\t", test->msixnum);
- if (ret < 0)
- fprintf(stdout, "TEST FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->write) {
- param.size = test->size;
- if (test->use_dma)
- param.flags = PCITEST_FLAGS_USE_DMA;
- ret = ioctl(fd, PCITEST_WRITE, &param);
- fprintf(stdout, "WRITE (%7ld bytes):\t\t", test->size);
- if (ret < 0)
- fprintf(stdout, "TEST FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->read) {
- param.size = test->size;
- if (test->use_dma)
- param.flags = PCITEST_FLAGS_USE_DMA;
- ret = ioctl(fd, PCITEST_READ, &param);
- fprintf(stdout, "READ (%7ld bytes):\t\t", test->size);
- if (ret < 0)
- fprintf(stdout, "TEST FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- if (test->copy) {
- param.size = test->size;
- if (test->use_dma)
- param.flags = PCITEST_FLAGS_USE_DMA;
- ret = ioctl(fd, PCITEST_COPY, &param);
- fprintf(stdout, "COPY (%7ld bytes):\t\t", test->size);
- if (ret < 0)
- fprintf(stdout, "TEST FAILED\n");
- else
- fprintf(stdout, "%s\n", result[ret]);
- }
-
- fflush(stdout);
- close(fd);
- return (ret < 0) ? ret : 1 - ret; /* return 0 if test succeeded */
-}
-
-int main(int argc, char **argv)
-{
- int c;
- struct pci_test *test;
-
- test = calloc(1, sizeof(*test));
- if (!test) {
- perror("Fail to allocate memory for pci_test\n");
- return -ENOMEM;
- }
-
- /* since '0' is a valid BAR number, initialize it to -1 */
- test->barnum = -1;
-
- /* set default size as 100KB */
- test->size = 0x19000;
-
- /* set default endpoint device */
- test->device = "/dev/pci-endpoint-test.0";
-
- while ((c = getopt(argc, argv, "D:b:m:x:i:deIlhrwcs:")) != EOF)
- switch (c) {
- case 'D':
- test->device = optarg;
- continue;
- case 'b':
- test->barnum = atoi(optarg);
- if (test->barnum < 0 || test->barnum > 5)
- goto usage;
- continue;
- case 'l':
- test->legacyirq = true;
- continue;
- case 'm':
- test->msinum = atoi(optarg);
- if (test->msinum < 1 || test->msinum > 32)
- goto usage;
- continue;
- case 'x':
- test->msixnum = atoi(optarg);
- if (test->msixnum < 1 || test->msixnum > 2048)
- goto usage;
- continue;
- case 'i':
- test->irqtype = atoi(optarg);
- if (test->irqtype < 0 || test->irqtype > 2)
- goto usage;
- test->set_irqtype = true;
- continue;
- case 'I':
- test->get_irqtype = true;
- continue;
- case 'r':
- test->read = true;
- continue;
- case 'w':
- test->write = true;
- continue;
- case 'c':
- test->copy = true;
- continue;
- case 'e':
- test->clear_irq = true;
- continue;
- case 's':
- test->size = strtoul(optarg, NULL, 0);
- continue;
- case 'd':
- test->use_dma = true;
- continue;
- case 'h':
- default:
-usage:
- fprintf(stderr,
- "usage: %s [options]\n"
- "Options:\n"
- "\t-D <dev> PCI endpoint test device {default: /dev/pci-endpoint-test.0}\n"
- "\t-b <bar num> BAR test (bar number between 0..5)\n"
- "\t-m <msi num> MSI test (msi number between 1..32)\n"
- "\t-x <msix num> \tMSI-X test (msix number between 1..2048)\n"
- "\t-i <irq type> \tSet IRQ type (0 - Legacy, 1 - MSI, 2 - MSI-X)\n"
- "\t-e Clear IRQ\n"
- "\t-I Get current IRQ type configured\n"
- "\t-d Use DMA\n"
- "\t-l Legacy IRQ test\n"
- "\t-r Read buffer test\n"
- "\t-w Write buffer test\n"
- "\t-c Copy buffer test\n"
- "\t-s <size> Size of buffer {default: 100KB}\n"
- "\t-h Print this help message\n",
- argv[0]);
- return -EINVAL;
- }
-
- return run_test(test);
-}
diff --git a/tools/pci/pcitest.sh b/tools/pci/pcitest.sh
deleted file mode 100644
index 75ed48ff2990..000000000000
--- a/tools/pci/pcitest.sh
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-
-echo "BAR tests"
-echo
-
-bar=0
-
-while [ $bar -lt 6 ]
-do
- pcitest -b $bar
- bar=`expr $bar + 1`
-done
-echo
-
-echo "Interrupt tests"
-echo
-
-pcitest -i 0
-pcitest -l
-
-pcitest -i 1
-msi=1
-
-while [ $msi -lt 33 ]
-do
- pcitest -m $msi
- msi=`expr $msi + 1`
-done
-echo
-
-pcitest -i 2
-msix=1
-
-while [ $msix -lt 2049 ]
-do
- pcitest -x $msix
- msix=`expr $msix + 1`
-done
-echo
-
-echo "Read Tests"
-echo
-
-pcitest -i 1
-
-pcitest -r -s 1
-pcitest -r -s 1024
-pcitest -r -s 1025
-pcitest -r -s 1024000
-pcitest -r -s 1024001
-echo
-
-echo "Write Tests"
-echo
-
-pcitest -w -s 1
-pcitest -w -s 1024
-pcitest -w -s 1025
-pcitest -w -s 1024000
-pcitest -w -s 1024001
-echo
-
-echo "Copy Tests"
-echo
-
-pcitest -c -s 1
-pcitest -c -s 1024
-pcitest -c -s 1025
-pcitest -c -s 1024000
-pcitest -c -s 1024001
-echo
diff --git a/tools/perf/.gitignore b/tools/perf/.gitignore
index f5b81d439387..b64302a76144 100644
--- a/tools/perf/.gitignore
+++ b/tools/perf/.gitignore
@@ -39,17 +39,15 @@ trace/beauty/generated/
pmu-events/pmu-events.c
pmu-events/jevents
pmu-events/metric_test.log
-tests/shell/*.shellcheck_log
-tests/shell/coresight/*.shellcheck_log
-tests/shell/lib/*.shellcheck_log
+pmu-events/empty-pmu-events.log
+pmu-events/test-empty-pmu-events.c
+*.shellcheck_log
feature/
libapi/
libbpf/
libperf/
libsubcmd/
libsymbol/
-libtraceevent/
-libtraceevent_plugins/
fixdep
Documentation/doc.dep
python_ext_build/
diff --git a/tools/perf/Build b/tools/perf/Build
index aa7623622834..b03cc59dabf8 100644
--- a/tools/perf/Build
+++ b/tools/perf/Build
@@ -1,5 +1,6 @@
-perf-y += builtin-bench.o
+perf-bench-y += builtin-bench.o
perf-y += builtin-annotate.o
+perf-y += builtin-check.o
perf-y += builtin-config.o
perf-y += builtin-diff.o
perf-y += builtin-evlist.o
@@ -35,8 +36,8 @@ endif
perf-$(CONFIG_LIBELF) += builtin-probe.o
-perf-y += bench/
-perf-y += tests/
+perf-bench-y += bench/
+perf-test-y += tests/
perf-y += perf.o
@@ -53,9 +54,51 @@ CFLAGS_builtin-trace.o += -DSTRACE_GROUPS_DIR="BUILD_STR($(STRACE_GROUPS_DIR_
CFLAGS_builtin-report.o += -DTIPDIR="BUILD_STR($(tipdir_SQ))"
CFLAGS_builtin-report.o += -DDOCDIR="BUILD_STR($(srcdir_SQ)/Documentation)"
-perf-y += util/
+perf-util-y += util/
+perf-util-y += arch/
perf-y += arch/
-perf-y += ui/
-perf-y += scripts/
+perf-test-y += arch/
+perf-ui-y += ui/
+perf-util-y += scripts/
gtk-y += ui/gtk/
+
+ifdef SHELLCHECK
+ SHELL_TESTS := $(wildcard *.sh)
+ SHELL_TEST_LOGS := $(SHELL_TESTS:%=%.shellcheck_log)
+else
+ SHELL_TESTS :=
+ SHELL_TEST_LOGS :=
+endif
+
+$(OUTPUT)%.shellcheck_log: %
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)$(SHELLCHECK) "$<" > $@ || (cat $@ && rm $@ && false)
+
+perf-y += $(SHELL_TEST_LOGS)
+
+ifdef MYPY
+ PY_TESTS := $(shell find python -type f -name '*.py')
+ MYPY_TEST_LOGS := $(PY_TESTS:python/%=python/%.mypy_log)
+else
+ MYPY_TEST_LOGS :=
+endif
+
+$(OUTPUT)%.mypy_log: %
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)mypy "$<" > $@ || (cat $@ && rm $@ && false)
+
+perf-y += $(MYPY_TEST_LOGS)
+
+ifdef PYLINT
+ PY_TESTS := $(shell find python -type f -name '*.py')
+ PYLINT_TEST_LOGS := $(PY_TESTS:python/%=python/%.pylint_log)
+else
+ PYLINT_TEST_LOGS :=
+endif
+
+$(OUTPUT)%.pylint_log: %
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)pylint "$<" > $@ || (cat $@ && rm $@ && false)
+
+perf-y += $(PYLINT_TEST_LOGS)
diff --git a/tools/perf/Documentation/Build.txt b/tools/perf/Documentation/Build.txt
index 3766886c4bca..57b226e7fc2f 100644
--- a/tools/perf/Documentation/Build.txt
+++ b/tools/perf/Documentation/Build.txt
@@ -71,3 +71,46 @@ supported by GCC. UBSan detects undefined behaviors of programs at runtime.
$ UBSAN_OPTIONS=print_stacktrace=1 ./perf record -a
If UBSan detects any problem at runtime, it outputs a “runtime error:” message.
+
+4) Cross compilation
+====================
+As Multiarch is commonly supported in Linux distributions, we can install
+libraries for multiple architectures on the same system and then cross-compile
+Linux perf. For example, Aarch64 libraries and toolchains can be installed on
+an x86_64 machine, allowing us to compile perf for an Aarch64 target.
+
+Below is the command for building perf with dynamic linking.
+
+ $ cd /path/to/Linux
+ $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -C tools/perf
+
+For static linking, the option `LDFLAGS="-static"` is required.
+
+ $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
+ LDFLAGS="-static" -C tools/perf
+
+In the embedded system world, a use case is to explicitly specify the package
+configuration paths for cross building:
+
+ $ PKG_CONFIG_SYSROOT_DIR="/path/to/cross/build/sysroot" \
+ PKG_CONFIG_LIBDIR="/usr/lib/:/usr/local/lib" \
+ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -C tools/perf
+
+In this case, the variable PKG_CONFIG_SYSROOT_DIR can be used alongside the
+variable PKG_CONFIG_LIBDIR or PKG_CONFIG_PATH to prepend the sysroot path to
+the library paths for cross compilation.
+
+5) Build with Clang
+===================
+By default, the makefile uses GCC as the compiler. By specifying the
+environment variables HOSTCC, CC and CXX, perf can be built with Clang.
+
+Using Clang for a native build:
+
+ $ HOSTCC=clang CC=clang CXX=clang++ make -C tools/perf
+
+Specifying ARCH and CROSS_COMPILE for cross compilation:
+
+ $ HOSTCC=clang CC=clang CXX=clang++ \
+ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
+ make -C tools/perf
diff --git a/tools/perf/Documentation/android.txt b/tools/perf/Documentation/android.txt
index 24a59998fc91..3f3cc7ac3d13 100644
--- a/tools/perf/Documentation/android.txt
+++ b/tools/perf/Documentation/android.txt
@@ -1,78 +1,10 @@
How to compile perf for Android
-=========================================
+===============================
-I. Set the Android NDK environment
-------------------------------------------------
+There are two ways to build perf and run it on Android:
-(a). Use the Android NDK
-------------------------------------------------
-1. You need to download and install the Android Native Development Kit (NDK).
-Set the NDK variable to point to the path where you installed the NDK:
- export NDK=/path/to/android-ndk
+- Method 1: Build perf with static linking. See Build.txt, section
+ "4) Cross compilation" for how to build a static perf binary.
-2. Set cross-compiling environment variables for NDK toolchain and sysroot.
-For arm:
- export NDK_TOOLCHAIN=${NDK}/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-
- export NDK_SYSROOT=${NDK}/platforms/android-24/arch-arm
-For x86:
- export NDK_TOOLCHAIN=${NDK}/toolchains/x86-4.9/prebuilt/linux-x86_64/bin/i686-linux-android-
- export NDK_SYSROOT=${NDK}/platforms/android-24/arch-x86
-
-This method is only tested for Android NDK versions Revision 11b and later.
-perf uses some bionic enhancements that are not included in prior NDK versions.
-You can use method (b) described below instead.
-
-(b). Use the Android source tree
------------------------------------------------
-1. Download the master branch of the Android source tree.
-Set the environment for the target you want using:
- source build/envsetup.sh
- lunch
-
-2. Build your own NDK sysroot to contain latest bionic changes and set the
-NDK sysroot environment variable.
- cd ${ANDROID_BUILD_TOP}/ndk
-For arm:
- ./build/tools/build-ndk-sysroot.sh --abi=arm
- export NDK_SYSROOT=${ANDROID_BUILD_TOP}/ndk/build/platforms/android-3/arch-arm
-For x86:
- ./build/tools/build-ndk-sysroot.sh --abi=x86
- export NDK_SYSROOT=${ANDROID_BUILD_TOP}/ndk/build/platforms/android-3/arch-x86
-
-3. Set the NDK toolchain environment variable.
-For arm:
- export NDK_TOOLCHAIN=${ANDROID_TOOLCHAIN}/arm-linux-androideabi-
-For x86:
- export NDK_TOOLCHAIN=${ANDROID_TOOLCHAIN}/i686-linux-android-
-
-II. Compile perf for Android
-------------------------------------------------
-You need to run make with the NDK toolchain and sysroot defined above:
-For arm:
- make WERROR=0 ARCH=arm CROSS_COMPILE=${NDK_TOOLCHAIN} EXTRA_CFLAGS="-pie --sysroot=${NDK_SYSROOT}"
-For x86:
- make WERROR=0 ARCH=x86 CROSS_COMPILE=${NDK_TOOLCHAIN} EXTRA_CFLAGS="-pie --sysroot=${NDK_SYSROOT}"
-
-III. Install perf
------------------------------------------------
-You need to connect to your Android device/emulator using adb.
-Install perf using:
- adb push perf /data/perf
-
-If you also want to use perf-archive you need busybox tools for Android.
-For installing perf-archive, you first need to replace #!/bin/bash with #!/system/bin/sh:
- sed 's/#!\/bin\/bash/#!\/system\/bin\/sh/g' perf-archive >> /tmp/perf-archive
- chmod +x /tmp/perf-archive
- adb push /tmp/perf-archive /data/perf-archive
-
-IV. Environment settings for running perf
-------------------------------------------------
-Some perf features need environment variables to run properly.
-You need to set these before running perf on the target:
- adb shell
- # PERF_PAGER=cat
-
-IV. Run perf
-------------------------------------------------
-Run perf on your device/emulator to which you previously connected using adb:
- # ./data/perf
+- Method 2: Download the Android NDK and use the bundled Clang to
+  build perf. See Build.txt, section "5) Build with Clang" for details.
diff --git a/tools/perf/Documentation/callchain-overhead-calculation.txt b/tools/perf/Documentation/callchain-overhead-calculation.txt
index 1a757927195e..e0202bf5bd1a 100644
--- a/tools/perf/Documentation/callchain-overhead-calculation.txt
+++ b/tools/perf/Documentation/callchain-overhead-calculation.txt
@@ -1,7 +1,8 @@
Overhead calculation
--------------------
-The overhead can be shown in two columns as 'Children' and 'Self' when
-perf collects callchains. The 'self' overhead is simply calculated by
+The CPU overhead can be shown in two columns as 'Children' and 'Self'
+when perf collects callchains (and corresponding 'Wall' columns for
+wall-clock overhead). The 'self' overhead is simply calculated by
adding all period values of the entry - usually a function (symbol).
This is the value that perf shows traditionally and sum of all the
'self' overhead values should be 100%.
diff --git a/tools/perf/Documentation/cpu-and-latency-overheads.txt b/tools/perf/Documentation/cpu-and-latency-overheads.txt
new file mode 100644
index 000000000000..3b6d63705465
--- /dev/null
+++ b/tools/perf/Documentation/cpu-and-latency-overheads.txt
@@ -0,0 +1,85 @@
+CPU and latency overheads
+-------------------------
+There are two notions of time: wall-clock time and CPU time.
+For a single-threaded program, or a program running on a single-core machine,
+these notions are the same. However, for a multi-threaded/multi-process program
+running on a multi-core machine, these notions are significantly different.
+Each second of wall-clock time we have number-of-cores seconds of CPU time.
+Perf can measure overhead for both of these times (shown in 'overhead' and
+'latency' columns for CPU and wall-clock time correspondingly).
+
+Optimizing CPU overhead is useful to improve 'throughput', while optimizing
+latency overhead is useful to improve 'latency'. It's important to understand
+which one is useful in a concrete situation at hand. For example, the former
+may be useful to improve max throughput of a CI build server that runs on 100%
+CPU utilization, while the latter may be useful to improve user-perceived
+latency of a single interactive program build.
+These overheads may be significantly different in some cases. For example,
+consider a program that executes function 'foo' for 9 seconds with 1 thread,
+and then executes function 'bar' for 1 second with 128 threads (consumes
+128 seconds of CPU time). The CPU overhead is: 'foo' - 6.6%, 'bar' - 93.4%.
+While the latency overhead is: 'foo' - 90%, 'bar' - 10%. If we try to optimize
+running time of the program looking at the (wrong in this case) CPU overhead,
+we would concentrate on the function 'bar', but it can yield only 10% running
+time improvement at best.
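+
+(To spell out the arithmetic behind those numbers: total CPU time is
+9s + 128s = 137s, so CPU overhead is 9/137 ~= 6.6% for 'foo' and
+128/137 ~= 93.4% for 'bar'; total wall-clock time is 9s + 1s = 10s, so
+latency overhead is 9/10 = 90% for 'foo' and 1/10 = 10% for 'bar'.)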
+
+By default, perf shows only CPU overhead. To show latency overhead, use
+'perf record --latency' and 'perf report':
+
+-----------------------------------
+Overhead Latency Command
+ 93.88% 25.79% cc1
+ 1.90% 39.87% gzip
+ 0.99% 10.16% dpkg-deb
+ 0.57% 1.00% as
+ 0.40% 0.46% sh
+-----------------------------------
+
+To sort by latency overhead, use 'perf report --latency':
+
+-----------------------------------
+Latency Overhead Command
+ 39.87% 1.90% gzip
+ 25.79% 93.88% cc1
+ 10.16% 0.99% dpkg-deb
+ 4.17% 0.29% git
+ 2.81% 0.11% objtool
+-----------------------------------
+
+To get insight into the difference between the overheads, you may check the
+parallelization histogram with the
+'--sort=latency,parallelism,comm,symbol --hierarchy' flags.
+It shows the fraction of (wall-clock) time the workload utilizes different
+numbers of cores ('Parallelism' column). For example, in the following case
+the workload utilizes only 1 core most of the time, but also has some
+highly-parallel phases, which explains the significant difference between
+CPU and wall-clock overheads:
+
+-----------------------------------
+ Latency Overhead Parallelism / Command / Symbol
++ 56.98% 2.29% 1
++ 16.94% 1.36% 2
++ 4.00% 20.13% 125
++ 3.66% 18.25% 124
++ 3.48% 17.66% 126
++ 3.26% 0.39% 3
++ 2.61% 12.93% 123
+-----------------------------------
+
+By expanding corresponding lines, you may see what commands/functions run
+at the given parallelism level:
+
+-----------------------------------
+ Latency Overhead Parallelism / Command / Symbol
+- 56.98% 2.29% 1
+ 32.80% 1.32% gzip
+ 4.46% 0.18% cc1
+ 2.81% 0.11% objtool
+ 2.43% 0.10% dpkg-source
+ 2.22% 0.09% ld
+ 2.10% 0.08% dpkg-genchanges
+-----------------------------------
+
+To see the normal function-level profile for particular parallelism levels
+(number of threads actively running on CPUs), you may use the '--parallelism'
+filter. For example, to see the profile only for low parallelism phases
+of a workload, use the '--latency --parallelism=1-2' flags.
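+
+A concrete invocation might look like this (an illustrative sketch; the
+workload is arbitrary):
+
+  perf record --latency -- make -j128
+  perf report --latency --parallelism=1-2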
diff --git a/tools/perf/Documentation/intel-acr.txt b/tools/perf/Documentation/intel-acr.txt
new file mode 100644
index 000000000000..72654fdd9a52
--- /dev/null
+++ b/tools/perf/Documentation/intel-acr.txt
@@ -0,0 +1,53 @@
+Intel Auto Counter Reload Support
+---------------------------------
+Support for Intel Auto Counter Reload in perf tools
+
+Auto counter reload provides a means for software to specify to hardware
+that certain counters, if supported, should be automatically reloaded
+upon overflow of chosen counters. By taking a sample only if the rate of
+one event exceeds some threshold relative to the rate of another event,
+this feature enables software to sample based on the relative rate of
+two or more events. To enable this, the user must provide a sample period
+term and a bitmask ("acr_mask") for each relevant event specifying the
+counters in an event group to reload if the event's specified sample
+period is exceeded.
+
+For example, if the user desires to measure a scenario when IPC > 2,
+the event group might look like the one below:
+
+ perf record -e {cpu_atom/instructions,period=200000,acr_mask=0x2/, \
+ cpu_atom/cycles,period=100000,acr_mask=0x3/} -- true
+
+In this case, if the "instructions" counter exceeds the sample period of
+200000, the second counter, "cycles", will be reset and a sample will be
+taken. If "cycles" is exceeded first, both counters in the group will be
+reset. In this way, samples will only be taken for cases where IPC > 2.
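+
+(To make the threshold explicit: with these periods a sample is taken only
+when the "instructions" count reaches 200000 before the "cycles" count
+reaches 100000, i.e. when instructions/cycles > 200000/100000 = 2.)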
+
+The acr_mask term is a hexadecimal value representing a bitmask of the
+events in the group to be reset when the period is exceeded. In the
+example above, "instructions" is assigned an acr_mask of 0x2, meaning
+only the second event in the group is reloaded and a sample is taken
+for the first event. "cycles" is assigned an acr_mask of 0x3, meaning
+that both event counters will be reset if the sample period is exceeded
+first.
+
+ratio-to-prev Event Term
+------------------------
+To simplify this, an event term "ratio-to-prev" is provided which is used
+alongside the sample period term n or the -c/--count option. This would
+allow users to specify the desired relative rate between events as a
+ratio. Note: Both events compared must belong to the same PMU.
+
+The command above would then become
+
+ perf record -e {cpu_atom/instructions/, \
+ cpu_atom/cycles,period=100000,ratio-to-prev=0.5/} -- true
+
+ratio-to-prev is the ratio of the event using the term relative
+to the previous event in the group, which is always taken as 1;
+the example above therefore expresses a 1:0.5 (i.e. 2:1) ratio.
+
+To sample for IPC < 2 for example, the events need to be reordered:
+
+ perf record -e {cpu_atom/cycles/, \
+ cpu_atom/instructions,period=200000,ratio-to-prev=2.0/} -- true
diff --git a/tools/perf/Documentation/intel-hybrid.txt b/tools/perf/Documentation/intel-hybrid.txt
index e7a776ad25d7..0379903673a4 100644
--- a/tools/perf/Documentation/intel-hybrid.txt
+++ b/tools/perf/Documentation/intel-hybrid.txt
@@ -8,15 +8,15 @@ Part of events are available on core cpu, part of events are available
on atom cpu and even part of events are available on both.
Kernel exports two new cpu pmus via sysfs:
-/sys/devices/cpu_core
-/sys/devices/cpu_atom
+/sys/bus/event_source/devices/cpu_core
+/sys/bus/event_source/devices/cpu_atom
The 'cpus' files are created under the directories. For example,
-cat /sys/devices/cpu_core/cpus
+cat /sys/bus/event_source/devices/cpu_core/cpus
0-15
-cat /sys/devices/cpu_atom/cpus
+cat /sys/bus/event_source/devices/cpu_atom/cpus
16-23
It indicates cpu0-cpu15 are core cpus and cpu16-cpu23 are atom cpus.
@@ -60,8 +60,8 @@ can't carry pmu information. So now this type is extended to be PMU aware
type. The PMU type ID is stored at attr.config[63:32].
PMU type ID is retrieved from sysfs.
-/sys/devices/cpu_atom/type
-/sys/devices/cpu_core/type
+/sys/bus/event_source/devices/cpu_atom/type
+/sys/bus/event_source/devices/cpu_core/type
The new attr.config layout for PERF_TYPE_HARDWARE:
diff --git a/tools/perf/Documentation/itrace.txt b/tools/perf/Documentation/itrace.txt
index 19cc179be9a7..40476b227f8d 100644
--- a/tools/perf/Documentation/itrace.txt
+++ b/tools/perf/Documentation/itrace.txt
@@ -1,6 +1,6 @@
i synthesize instructions events
y synthesize cycles events
- b synthesize branches events (branch misses for Arm SPE)
+ b synthesize branches events
c synthesize branches events (calls only)
r synthesize branches events (returns only)
x synthesize transactions events
diff --git a/tools/perf/Documentation/perf-amd-ibs.txt b/tools/perf/Documentation/perf-amd-ibs.txt
new file mode 100644
index 000000000000..548549935760
--- /dev/null
+++ b/tools/perf/Documentation/perf-amd-ibs.txt
@@ -0,0 +1,223 @@
+perf-amd-ibs(1)
+===============
+
+NAME
+----
+perf-amd-ibs - Support for AMD Instruction-Based Sampling (IBS) with perf tool
+
+SYNOPSIS
+--------
+[verse]
+'perf record' -e ibs_op//
+'perf record' -e ibs_fetch//
+
+DESCRIPTION
+-----------
+
+Instruction-Based Sampling (IBS) provides precise Instruction Pointer (IP)
+profiling support on AMD platforms. IBS has two independent components: IBS
+Op and IBS Fetch. IBS Op sampling provides information about instruction
+execution (micro-op execution to be precise) with details like d-cache
+hit/miss, d-TLB hit/miss, cache miss latency, load/store data source, branch
+behavior etc. IBS Fetch sampling provides information about instruction fetch
+with details like i-cache hit/miss, i-TLB hit/miss, fetch latency etc. IBS is
+per-smt-thread i.e. each SMT hardware thread contains standalone IBS units.
+
+Both, IBS Op and IBS Fetch, are exposed as PMUs by Linux and can be exploited
+using the Linux perf utility. The following files will be created at boot time
+if IBS is supported by the hardware and kernel.
+
+ /sys/bus/event_source/devices/ibs_op/
+ /sys/bus/event_source/devices/ibs_fetch/
+
+IBS Op PMU supports two events: cycles and micro ops. IBS Fetch PMU supports
+one event: fetch ops.
+
+IBS PMUs do not have user/kernel filtering capability and thus require
+CAP_SYS_ADMIN or CAP_PERFMON privilege.
+
+IBS VS. REGULAR CORE PMU
+------------------------
+
+IBS gives samples with a precise IP, i.e. the IP recorded with an IBS sample
+has no skid, whereas the IP recorded by the regular core PMU will have some
+skid (the sample was generated at IP X but perf records it at IP X+n). Hence,
+the regular core PMU might not help for profiling with instruction level
+precision. Further, IBS provides additional information about the sample in
+question. On the other hand, the regular core PMU has its own advantages like
+a plethora of events, counting mode (less interference), up to 6 parallel
+counters, event grouping support, filtering capabilities etc.
+
+Three regular core PMU events are internally forwarded to IBS Op PMU when
+precise_ip attribute is set:
+
+ -e cpu-cycles:p becomes -e ibs_op//
+ -e r076:p becomes -e ibs_op//
+ -e r0C1:p becomes -e ibs_op/cnt_ctl=1/
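+
+So, for example, the following command (an illustrative sketch mirroring the
+examples below) transparently ends up using IBS Op for precise cycles samples:
+
+ # perf record -e cpu-cycles:p -c 100000 -a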
+
+EXAMPLES
+--------
+
+IBS Op PMU
+~~~~~~~~~~
+
+System-wide profile, cycles event, sampling period: 100000
+
+ # perf record -e ibs_op// -c 100000 -a
+
+Per-cpu profile (cpu10), cycles event, sampling period: 100000
+
+ # perf record -e ibs_op// -c 100000 -C 10
+
+Per-cpu profile (cpu10), cycles event, sampling freq: 1000
+
+ # perf record -e ibs_op// -F 1000 -C 10
+
+System-wide profile, uOps event, sampling period: 100000
+
+ # perf record -e ibs_op/cnt_ctl=1/ -c 100000 -a
+
+Same command, but also capture IBS register raw dump along with perf sample:
+
+ # perf record -e ibs_op/cnt_ctl=1/ -c 100000 -a --raw-samples
+
+System-wide profile, uOps event, sampling period: 100000, L3MissOnly (Zen4 onward)
+
+ # perf record -e ibs_op/cnt_ctl=1,l3missonly=1/ -c 100000 -a
+
+System-wide profile, cycles event, sampling period: 100000, LdLat filtering (Zen5
+onward)
+
+ # perf record -e ibs_op/ldlat=128/ -c 100000 -a
+
+ Supported load latency threshold values are 128 to 2048 (both inclusive).
+ Latency value which is a multiple of 128 incurs a little less profiling
+ overhead compared to other values.
+
+Per process (upstream v6.2 onward), uOps event, sampling period: 100000
+
+ # perf record -e ibs_op/cnt_ctl=1/ -c 100000 -p 1234
+
+Per process (upstream v6.2 onward), uOps event, sampling period: 100000
+
+ # perf record -e ibs_op/cnt_ctl=1/ -c 100000 -- ls
+
+To analyse recorded profile in aggregate mode
+
+ # perf report
+ /* Select a line and press 'a' to drill down at instruction level. */
+
+To go over each sample
+
+ # perf script
+
+Raw dump of IBS registers when profiled with --raw-samples
+
+ # perf report -D
+ /* Look for PERF_RECORD_SAMPLE */
+
+ Example register raw dump:
+
+ ibs_op_ctl: 000002c30006186a MaxCnt 100000 L3MissOnly 0 En 1
+ Val 1 CntCtl 0=cycles CurCnt 707
+ IbsOpRip: ffffffff8204aea7
+ ibs_op_data: 0000010002550001 CompToRetCtr 1 TagToRetCtr 597
+ BrnRet 0 RipInvalid 0 BrnFuse 0 Microcode 1
+ ibs_op_data2: 0000000000000013 RmtNode 1 DataSrc 3=DRAM
+ ibs_op_data3: 0000000031960092 LdOp 0 StOp 1 DcL1TlbMiss 0
+ DcL2TlbMiss 0 DcL1TlbHit2M 1 DcL1TlbHit1G 0 DcL2TlbHit2M 0
+ DcMiss 1 DcMisAcc 0 DcWcMemAcc 0 DcUcMemAcc 0 DcLockedOp 0
+ DcMissNoMabAlloc 0 DcLinAddrValid 1 DcPhyAddrValid 1
+ DcL2TlbHit1G 0 L2Miss 1 SwPf 0 OpMemWidth 32 bytes
+ OpDcMissOpenMemReqs 12 DcMissLat 0 TlbRefillLat 0
+ IbsDCLinAd: ff110008a5398920
+ IbsDCPhysAd: 00000008a5398920
+
+IBS applied in a real world usecase
+
+ ~90% regression was observed in tbench with specific scheduler hint
+ which was counter intuitive. IBS profile of good and bad run captured
+ using perf helped in identifying exact cause of the problem:
+
+ https://lore.kernel.org/r/20220921063638.2489-1-kprateek.nayak@amd.com
+
+IBS Fetch PMU
+~~~~~~~~~~~~~
+
+Similar commands can be used with Fetch PMU as well.
+
+System-wide profile, fetch ops event, sampling period: 100000
+
+ # perf record -e ibs_fetch// -c 100000 -a
+
+System-wide profile, fetch ops event, sampling period: 100000, Random enable
+
+ # perf record -e ibs_fetch/rand_en=1/ -c 100000 -a
+
+ Random enable adds small degree of variability to sample period. This
+ helps in cases like long running loops where PMU is tagging the same
+ instruction over and over because of fixed sample period.
+
+etc.
+
+PERF MEM AND PERF C2C
+---------------------
+
+perf mem is a memory access profiler tool and perf c2c is a shared data
+cacheline analyser tool. Both of them internally use the IBS Op PMU on AMD.
+Below is a simple example of the perf mem tool.
+
+ # perf mem record -c 100000 -- make
+ # perf mem report
+
+A normal perf mem report output will provide detailed memory access profile.
+New output fields will show related access info together. For example:
+
+ # perf mem report -F overhead,cache,snoop,comm
+ ...
+ # Samples: 92K of event 'ibs_op//'
+ # Total weight : 531104
+ #
+ # ---------- Cache ----------- --- Snoop ----
+ # Overhead L1 L2 L1-buf Other HitM Other Command
+ # ........ ............................ .............. ..........
+ #
+ 76.07% 5.8% 35.7% 0.0% 34.6% 23.3% 52.8% cc1
+ 5.79% 0.2% 0.0% 0.0% 5.6% 0.1% 5.7% make
+ 5.78% 0.1% 4.4% 0.0% 1.2% 0.5% 5.3% gcc
+ 5.33% 0.3% 3.9% 0.0% 1.1% 0.2% 5.2% as
+ 5.00% 0.1% 3.8% 0.0% 1.0% 0.3% 4.7% sh
+ 1.56% 0.1% 0.1% 0.0% 1.4% 0.6% 0.9% ld
+ 0.28% 0.1% 0.0% 0.0% 0.2% 0.1% 0.2% pkg-config
+ 0.09% 0.0% 0.0% 0.0% 0.1% 0.0% 0.1% git
+ 0.03% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% rm
+ ...
+
+Also, it can be aggregated based on various memory access info using the
+sort keys. For example:
+
+ # perf mem report -s mem,snoop
+ ...
+ # Samples: 92K of event 'ibs_op//'
+ # Total weight : 531104
+ # Sort order : mem,snoop
+ #
+ # Overhead Samples Memory access Snoop
+ # ........ ............ ....................................... ............
+ #
+ 47.99% 1509 L2 hit N/A
+ 25.08% 338 core, same node Any cache hit HitM
+ 10.24% 54374 N/A N/A
+ 6.77% 35938 L1 hit N/A
+ 6.39% 101 core, same node Any cache hit N/A
+ 3.50% 69 RAM hit N/A
+ 0.03% 158 LFB/MAB hit N/A
+ 0.00% 2 Uncached hit N/A
+
+Please refer to their man pages for more detail.
+
+SEE ALSO
+--------
+
+linkperf:perf-record[1], linkperf:perf-script[1], linkperf:perf-report[1],
+linkperf:perf-mem[1], linkperf:perf-c2c[1]
diff --git a/tools/perf/Documentation/perf-annotate.txt b/tools/perf/Documentation/perf-annotate.txt
index b95524bea021..547f1a268018 100644
--- a/tools/perf/Documentation/perf-annotate.txt
+++ b/tools/perf/Documentation/perf-annotate.txt
@@ -165,6 +165,12 @@ include::itrace.txt[]
--type-stat::
Show stats for the data type annotation.
+--skip-empty::
+ Do not display empty (or dummy) events.
+
+--code-with-type::
+ Show data type info in code annotation (for memory instructions only).
+
SEE ALSO
--------
diff --git a/tools/perf/Documentation/perf-arm-spe.txt b/tools/perf/Documentation/perf-arm-spe.txt
index bf03222e9a68..cda8dd47fc4d 100644
--- a/tools/perf/Documentation/perf-arm-spe.txt
+++ b/tools/perf/Documentation/perf-arm-spe.txt
@@ -116,6 +116,15 @@ Depending on CPU model, the kernel may need to be booted with page table isolati
(kpti=off). If KPTI needs to be disabled, this will fail with a console message "profiling buffer
inaccessible. Try passing 'kpti=off' on the kernel command line".
+For the full criteria that determine whether KPTI needs to be forced off or not, see function
+unmap_kernel_at_el0() in the kernel sources. Common cases where it's not required
+are on the CPUs in kpti_safe_list, or on Arm v8.5+ where FEAT_E0PD is mandatory.
+
+The SPE interrupt must also be described by the firmware. If the module is loaded and KPTI is
+disabled (or isn't required to be disabled) but the SPE PMU still doesn't show in
+/sys/bus/event_source/devices/, then it's possible that the SPE interrupt isn't described by
+ACPI or DT. In this case no warning will be printed by the driver.
+
Capturing SPE with perf command-line tools
------------------------------------------
@@ -141,6 +150,7 @@ arm_spe/load_filter=1,min_latency=10/'
pct_enable=1 - collect physical timestamp instead of virtual timestamp (PMSCR.PCT) - requires privilege
store_filter=1 - collect stores only (PMSFCR.ST)
ts_enable=1 - enable timestamping with value of generic timer (PMSCR.TS)
+ discard=1 - enable SPE PMU events but don't collect sample data - see 'Discard mode' (PMBLIMITR.FM = DISCARD)
+++*+++ Latency is the total latency from the point at which sampling started on that instruction, rather
than only the execution latency.
@@ -178,17 +188,23 @@ groups:
7 llc-access
2 tlb-miss
1K tlb-access
- 36 branch-miss
+ 36 branch
0 remote-access
900 memory
+ 1800 instructions
The arm_spe// and dummy:u events are implementation details and are expected to be empty.
-To get a full list of unique samples that are not sorted into groups, set the itrace option to
-generate 'instruction' samples. The period option is also taken into account, so set it to 1
-instruction unless you want to further downsample the already sampled SPE data:
+The instructions group contains the full list of unique samples that are not
+sorted into other groups. To generate only this group use --itrace=i1i.
+
+1i (1 instruction interval) signifies no further downsampling. Rather than an
+instruction interval, this generates a sample every n SPE samples. For example
+to generate the default set of events for every 100 SPE samples:
- perf report --itrace=i1i
+ perf report --itrace=bxofmtMai100i
+
+Other period types, for example nanoseconds (ns) are not currently supported.
Memory access details are also stored on the samples and this can be viewed with:
@@ -199,7 +215,8 @@ Common errors
- "Cannot find PMU `arm_spe'. Missing kernel support?"
- Module not built or loaded, KPTI not disabled (see above), or running on a VM
+ Module not built or loaded, KPTI not disabled, interrupt not described by firmware,
+ or running on a VM. See 'Kernel Requirements' above.
- "Arm SPE CONTEXT packets not found in the traces."
@@ -210,6 +227,31 @@ Common errors
Increase sampling interval (see above)
+PMU events
+~~~~~~~~~~
+
+SPE has events that can be counted on core PMUs. These are prefixed with
+SAMPLE_, for example SAMPLE_POP, SAMPLE_FEED, SAMPLE_COLLISION and
+SAMPLE_FEED_BR.
+
+These events will only count when an SPE event is running on the same core that
+the PMU event is opened on, otherwise they read as 0. There are various ways to
+ensure that the PMU event and SPE event are scheduled together depending on the
+way the event is opened. For example opening both events as per-process events
+on the same process, although it's not guaranteed that the PMU event is enabled
+first when context switching. For that reason it may be better to open the PMU
+event as a systemwide event and then open SPE on the process of interest.
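+
+For example (an illustrative sketch; SAMPLE_FEED is one of the events listed
+above and <pid> is the process of interest):
+
+ perf stat -e SAMPLE_FEED -a &
+ perf record -e arm_spe// -p <pid>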
+
+Discard mode
+~~~~~~~~~~~~
+
+SPE related (SAMPLE_* etc) core PMU events can be used without the overhead of
+collecting sample data if discard mode is supported (optional from Armv8.6).
+First run a system wide SPE session (or on the core of interest) using options
+to minimize output. Then run perf stat:
+
+ perf record -e arm_spe/discard/ -a -N -B --no-bpf-event -o - > /dev/null &
+ perf stat -e SAMPLE_FEED_LD
SEE ALSO
--------
diff --git a/tools/perf/Documentation/perf-bench.txt b/tools/perf/Documentation/perf-bench.txt
index 8331bd28b10e..1160224cb718 100644
--- a/tools/perf/Documentation/perf-bench.txt
+++ b/tools/perf/Documentation/perf-bench.txt
@@ -177,11 +177,21 @@ Suite for evaluating performance of simple memory copy in various ways.
Options of *memcpy*
^^^^^^^^^^^^^^^^^^^
--l::
+-s::
--size::
Specify size of memory to copy (default: 1MB).
Available units are B, KB, MB, GB and TB (case insensitive).
+-p::
+--page::
+Specify page-size for mapping memory buffers (default: 4KB).
+Available values are 4KB, 2MB, 1GB (case insensitive).
+
+-k::
+--chunk::
+Specify the chunk-size for each invocation. (default: 0, or full-extent)
+Available units are B, KB, MB, GB and TB (case insensitive).
+
-f::
--function::
Specify function to copy (default: default).
@@ -201,11 +211,21 @@ Suite for evaluating performance of simple memory set in various ways.
Options of *memset*
^^^^^^^^^^^^^^^^^^^
--l::
+-s::
--size::
Specify size of memory to set (default: 1MB).
Available units are B, KB, MB, GB and TB (case insensitive).
+-p::
+--page::
+Specify page-size for mapping memory buffers (default: 4KB).
+Available values are 4KB, 2MB, 1GB (case insensitive).
+
+-k::
+--chunk::
+Specify the chunk-size for each invocation. (default: 0, or full-extent)
+Available units are B, KB, MB, GB and TB (case insensitive).
+
-f::
--function::
Specify function to set (default: default).
@@ -220,6 +240,40 @@ Repeat memset invocation this number of times.
--cycles::
Use perf's cpu-cycles event instead of gettimeofday syscall.
+*mmap*::
+Suite for evaluating memory subsystem performance for mmap()'d memory.
+
+Options of *mmap*
+^^^^^^^^^^^^^^^^^
+-s::
+--size::
+Specify size of memory to map (default: 1MB).
+Available units are B, KB, MB, GB and TB (case insensitive).
+
+-p::
+--page::
+Specify page-size for mapping memory buffers (default: 4KB).
+Available values are 4KB, 2MB, 1GB (case insensitive).
+
+-r::
+--randomize::
+Specify seed to randomize page access offset (default: 0, or not randomized).
+
+-f::
+--function::
+Specify which mapping function to benchmark (default: all).
+Available functions are 'demand' and 'populate', with the first
+demand faulting pages in the region and the second using an eager
+mapping.
+
+-l::
+--nr_loops::
+Repeat mmap() invocation this number of times.
+
+-c::
+--cycles::
+Use perf's cpu-cycles event instead of gettimeofday syscall.
+
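+For example, one possible invocation exercising 2MB pages with the eager
+'populate' mapping (an illustrative sketch, assuming the suite is invoked as
+'perf bench mem mmap'; the sizes are arbitrary):
+
+  perf bench mem mmap -s 256MB -p 2MB -f populate -l 10
+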
SUITES FOR 'numa'
~~~~~~~~~~~~~~~~~
*mem*::
diff --git a/tools/perf/Documentation/perf-c2c.txt b/tools/perf/Documentation/perf-c2c.txt
index 856f0dfb8e5a..f4af2dd6ab31 100644
--- a/tools/perf/Documentation/perf-c2c.txt
+++ b/tools/perf/Documentation/perf-c2c.txt
@@ -54,8 +54,15 @@ RECORD OPTIONS
-l::
--ldlat::
- Configure mem-loads latency. Supported on Intel and Arm64 processors
- only. Ignored on other archs.
+ Configure mem-loads latency. Supported on Intel, Arm64 and some AMD
+ processors. Ignored on other archs.
+
+ On supported AMD processors:
+ - /sys/bus/event_source/devices/ibs_op/caps/ldlat file contains '1'.
+ - Supported latency values are 128 to 2048 (both inclusive).
+ - Latency value which is a multiple of 128 incurs a little less profiling
+ overhead compared to other values.
+ - Load latency filtering is disabled by default.
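+
+	For example (an illustrative sketch; 256 is a supported value and a
+	multiple of 128):
+
+	  perf c2c record --ldlat 256 -a -- sleep 5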
-k::
--all-kernel::
diff --git a/tools/perf/Documentation/perf-check.txt b/tools/perf/Documentation/perf-check.txt
new file mode 100644
index 000000000000..4c9ccda6ce91
--- /dev/null
+++ b/tools/perf/Documentation/perf-check.txt
@@ -0,0 +1,81 @@
+perf-check(1)
+===============
+
+NAME
+----
+perf-check - check if features are present in perf
+
+SYNOPSIS
+--------
+[verse]
+'perf check' [<options>]
+'perf check' {feature <feature_list>} [<options>]
+
+DESCRIPTION
+-----------
+With no subcommands given, the 'perf check' command just prints the command
+usage on the standard output.
+
+If the subcommand 'feature' is used, then the status of the feature is printed
+on the standard output (unless '-q' is also passed), i.e. whether it is
+compiled-in/built-in or not.
+Also, 'perf check feature' returns with exit status 0 if the feature
+is built-in, otherwise it returns with exit status 1.
+
+SUBCOMMANDS
+-----------
+
+feature::
+
+	Print whether the feature(s) are compiled-in or not, and also return an
+	exit status of 0 if the passed feature(s) are compiled-in, else 1.
+
+ It expects a feature list as an argument. There can be a single feature
+ name/macro, or multiple features can also be passed as a comma-separated
+ list, in which case the exit status will be 0 only if all of the passed
+ features are compiled-in.
+
+ The feature names/macros are case-insensitive.
+
+ Example Usage:
+ perf check feature libtraceevent
+ perf check feature HAVE_LIBTRACEEVENT
+ perf check feature libtraceevent,bpf
+
+	Supported feature names/macros:
+ aio / HAVE_AIO_SUPPORT
+ bpf / HAVE_LIBBPF_SUPPORT
+ bpf_skeletons / HAVE_BPF_SKEL
+ debuginfod / HAVE_DEBUGINFOD_SUPPORT
+ dwarf / HAVE_LIBDW_SUPPORT
+ dwarf_getlocations / HAVE_LIBDW_SUPPORT
+ dwarf-unwind / HAVE_DWARF_UNWIND_SUPPORT
+ auxtrace / HAVE_AUXTRACE_SUPPORT
+ libbfd / HAVE_LIBBFD_SUPPORT
+ libbpf-strings / HAVE_LIBBPF_STRINGS_SUPPORT
+ libcapstone / HAVE_LIBCAPSTONE_SUPPORT
+ libdw-dwarf-unwind / HAVE_LIBDW_SUPPORT
+ libelf / HAVE_LIBELF_SUPPORT
+ libLLVM / HAVE_LIBLLVM_SUPPORT
+ libnuma / HAVE_LIBNUMA_SUPPORT
+ libopencsd / HAVE_CSTRACE_SUPPORT
+ libperl / HAVE_LIBPERL_SUPPORT
+ libpfm4 / HAVE_LIBPFM
+ libpython / HAVE_LIBPYTHON_SUPPORT
+ libslang / HAVE_SLANG_SUPPORT
+ libtraceevent / HAVE_LIBTRACEEVENT
+ libunwind / HAVE_LIBUNWIND_SUPPORT
+ lzma / HAVE_LZMA_SUPPORT
+ numa_num_possible_cpus / HAVE_LIBNUMA_SUPPORT
+ zlib / HAVE_ZLIB_SUPPORT
+ zstd / HAVE_ZSTD_SUPPORT
+
+OPTIONS
+-------
+-q::
+--quiet::
+ Do not print any messages or warnings
+
+ This can be used along with subcommands such as 'perf check feature'
+	to hide unnecessary output in test scripts, e.g.
+ 'perf check feature --quiet libtraceevent'
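+
+	A minimal shell sketch of that pattern (the echoed message is arbitrary):
+
+	  if perf check feature --quiet libtraceevent; then
+	          echo "libtraceevent support is built in"
+	  fi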
diff --git a/tools/perf/Documentation/perf-config.txt b/tools/perf/Documentation/perf-config.txt
index 379f9d7a8ab1..c6f335659667 100644
--- a/tools/perf/Documentation/perf-config.txt
+++ b/tools/perf/Documentation/perf-config.txt
@@ -40,7 +40,7 @@ The '$HOME/.perfconfig' file is used to store a per-user configuration.
The file '$(sysconfdir)/perfconfig' can be used to
store a system-wide default configuration.
-One an disable reading config files by setting the PERF_CONFIG environment
+One can disable reading config files by setting the PERF_CONFIG environment
variable to /dev/null, or provide an alternate config file by setting that
variable.
@@ -247,6 +247,19 @@ annotate.*::
These are in control of addresses, jump function, source code
in lines of assembly code from a specific program.
+	annotate.disassemblers::
+		Choose the disassembler to use: "objdump", "llvm", "capstone".
+		If not specified it will first try, if available, the "llvm" one,
+		then, if it fails, "capstone", and finally the original "objdump"
+		based one.
+
+		Choosing a different one is useful when handling some feature that
+		is known to be best supported at some point by one of the options,
+		or to compare the output when in doubt about some bug, etc.
+
+		This can be a list, in order of preference; the first one that works
+		finishes the process.
+
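+		For example, a possible ~/.perfconfig entry expressing that
+		preference order (an illustrative sketch):
+
+			[annotate]
+				disassemblers = llvm,capstone,objdump
+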
annotate.addr2line::
addr2line binary to use for file names and line numbers.
@@ -695,6 +708,10 @@ intel-pt.*::
the maximum is exceeded there will be a "Never-ending loop"
error. The default is 100000.
+ intel-pt.all-switch-events::
+ If the user has permission to do so, always record all context
+ switch events on all CPUs.
+
auxtrace.*::
auxtrace.dumpdir::
diff --git a/tools/perf/Documentation/perf-diff.txt b/tools/perf/Documentation/perf-diff.txt
index f3067a4af294..58efab72d2e5 100644
--- a/tools/perf/Documentation/perf-diff.txt
+++ b/tools/perf/Documentation/perf-diff.txt
@@ -285,7 +285,7 @@ If specified the 'Weighted diff' column is displayed with value 'd' computed as:
- period being the hist entry period value
- - WEIGHT-A/WEIGHT-B being user supplied weights in the the '-c' option
+ - WEIGHT-A/WEIGHT-B being user supplied weights in the '-c' option
behind ':' separator like '-c wdiff:1,2'.
- WEIGHT-A being the weight of the data file
- WEIGHT-B being the weight of the baseline data file
diff --git a/tools/perf/Documentation/perf-ftrace.txt b/tools/perf/Documentation/perf-ftrace.txt
index d780b93fcf87..3f3808e513fe 100644
--- a/tools/perf/Documentation/perf-ftrace.txt
+++ b/tools/perf/Documentation/perf-ftrace.txt
@@ -9,7 +9,7 @@ perf-ftrace - simple wrapper for kernel's ftrace functionality
SYNOPSIS
--------
[verse]
-'perf ftrace' {trace|latency} <command>
+'perf ftrace' {trace|latency|profile} <command>
DESCRIPTION
-----------
@@ -23,6 +23,9 @@ kernel's ftrace infrastructure.
'perf ftrace latency' calculates execution latency of a given function
(optionally with BPF) and display it as a histogram.
+ 'perf ftrace profile' shows an execution profile for each function including
+ total, average, max time and the number of calls.
+
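+ For example (an illustrative sketch; the function filter and workload are
+ arbitrary):
+
+   perf ftrace profile -T 'vfs_*' -s max -- sleep 1
+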
The following options apply to perf ftrace.
COMMON OPTIONS
@@ -120,11 +123,16 @@ OPTIONS for 'perf ftrace trace'
--graph-opts::
List of options allowed to set:
+ - args - Show function arguments.
+ - retval - Show function return value.
+ - retval-hex - Show function return value in hexadecimal format.
+ - retaddr - Show function return address.
- nosleep-time - Measure on-CPU time only for function_graph tracer.
- noirqs - Ignore functions that happen inside interrupt.
- verbose - Show process names, PIDs, timestamps, etc.
- thresh=<n> - Setup trace duration threshold in microseconds.
- depth=<n> - Set max depth for function graph tracer to follow.
+ - tail - Print function name at the end.
OPTIONS for 'perf ftrace latency'
@@ -135,6 +143,12 @@ OPTIONS for 'perf ftrace latency'
Set the function name to get the histogram. Unlike perf ftrace trace,
it only allows single function to calculate the histogram.
+-e::
+--events=::
+ Set the pair of events to get the histogram. The histogram is calculated
+ by the time difference between the two events from the same thread. This
+ requires -b/--use-bpf option.
+
-b::
--use-bpf::
Use BPF to measure function latency instead of using the ftrace (it
@@ -144,6 +158,67 @@ OPTIONS for 'perf ftrace latency'
--use-nsec::
Use nano-second instead of micro-second as a base unit of the histogram.
+--bucket-range=::
+ Bucket range in ms or ns (according to -n/--use-nsec), default is log2() mode.
+
+--min-latency=::
+ Minimum latency for the start of the first bucket, in ms or ns (according to
+ -n/--use-nsec).
+
+--max-latency=::
+ Maximum latency for the start of the last bucket, in ms or ns (according to
+ -n/--use-nsec). The setting is ignored if the value results in more than
+ 22 buckets.
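+
+For example, a linear histogram with 100-unit buckets starting at 200 might be
+requested as follows (an illustrative sketch; the traced function and workload
+are arbitrary):
+
+  perf ftrace latency -T mutex_lock -b --bucket-range=100 \
+      --min-latency=200 -- sleep 1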
+
+OPTIONS for 'perf ftrace profile'
+---------------------------------
+
+-T::
+--trace-funcs=::
+ Set function filter on the given function (or a glob pattern).
+ Multiple functions can be given by using this option more than once.
+ The function argument also can be a glob pattern. It will be passed
+ to 'set_ftrace_filter' in tracefs.
+
+-N::
+--notrace-funcs=::
+ Do not trace functions given by the argument. Like -T option, this
+ can be used more than once to specify multiple functions (or glob
+ patterns). It will be passed to 'set_ftrace_notrace' in tracefs.
+
+-G::
+--graph-funcs=::
+ Set graph filter on the given function (or a glob pattern). This is
+ useful to trace for functions executed from the given function. This
+ can be used more than once to specify multiple functions. It will be
+ passed to 'set_graph_function' in tracefs.
+
+-g::
+--nograph-funcs=::
+ Set graph notrace filter on the given function (or a glob pattern).
+ Like -G option, this is useful for the function_graph tracer only and
+ disables tracing for function executed from the given function. This
+ can be used more than once to specify multiple functions. It will be
+ passed to 'set_graph_notrace' in tracefs.
+
+-m::
+--buffer-size::
+ Set the size of per-cpu tracing buffer, <size> is expected to
+ be a number with appended unit character - B/K/M/G.
+
+-s::
+--sort=::
+ Sort the result by the given field. Available values are:
+ total, avg, max, count, name. Default is 'total'.
+
+--graph-opts::
+ List of options allowed to set:
+
+ - nosleep-time - Measure on-CPU time only for function_graph tracer.
+ - noirqs - Ignore functions that happen inside interrupt.
+ - thresh=<n> - Setup trace duration threshold in microseconds.
+ - depth=<n> - Set max depth for function graph tracer to follow.
+
SEE ALSO
--------
diff --git a/tools/perf/Documentation/perf-intel-pt.txt b/tools/perf/Documentation/perf-intel-pt.txt
index 2109690b0d5f..cc0f37f0fa5a 100644
--- a/tools/perf/Documentation/perf-intel-pt.txt
+++ b/tools/perf/Documentation/perf-intel-pt.txt
@@ -115,9 +115,13 @@ toggle respectively.
perf script also supports higher level ways to dump instruction traces:
+ perf script --insn-trace=disasm
+
+or to use the xed disassembler, which requires installing the xed tool
+(see XED below):
+
perf script --insn-trace --xed
-Dump all instructions. This requires installing the xed tool (see XED below)
Dumping all instructions in a long trace can be fairly slow. It is usually better
to start with higher level decoding, like
@@ -130,12 +134,12 @@ or
and then select a time range of interest. The time range can then be examined
in detail with
- perf script --time starttime,stoptime --insn-trace --xed
+ perf script --time starttime,stoptime --insn-trace=disasm
While examining the trace it's also useful to filter on specific CPUs using
the -C option
- perf script --time starttime,stoptime --insn-trace --xed -C 1
+ perf script --time starttime,stoptime --insn-trace=disasm -C 1
Dump all instructions in time range on CPU 1.
@@ -147,7 +151,7 @@ displayed as follows:
There are two ways that instructions-per-cycle (IPC) can be calculated depending
on the recording.
-If the 'cyc' config term (see config terms section below) was used, then IPC
+If the 'cyc' config term (see <<_config_terms,config terms>> section below) was used, then IPC
and cycle events are calculated using the cycle count from CYC packets, otherwise
MTC packets are used - refer to the 'mtc' config term. When MTC is used, however,
the values are less accurate because the timing is less accurate.
@@ -235,7 +239,7 @@ which is the same as
-e intel_pt/tsc=1,noretcomp=0/
-Note there are now new config terms - see section 'config terms' further below.
+Note there are other config terms - see section <<_config_terms,config terms>> further below.
The config terms are listed in /sys/devices/intel_pt/format. They are bit
fields within the config member of the struct perf_event_attr which is
@@ -307,218 +311,271 @@ perf_event_attr is displayed if the -vv option is used e.g.
config terms
~~~~~~~~~~~~
-The June 2015 version of Intel 64 and IA-32 Architectures Software Developer
-Manuals, Chapter 36 Intel Processor Trace, defined new Intel PT features.
-Some of the features are reflect in new config terms. All the config terms are
-described below.
-
-tsc Always supported. Produces TSC timestamp packets to provide
- timing information. In some cases it is possible to decode
- without timing information, for example a per-thread context
- that does not overlap executable memory maps.
-
- The default config selects tsc (i.e. tsc=1).
-
-noretcomp Always supported. Disables "return compression" so a TIP packet
- is produced when a function returns. Causes more packets to be
- produced but might make decoding more reliable.
-
- The default config does not select noretcomp (i.e. noretcomp=0).
-
-psb_period Allows the frequency of PSB packets to be specified.
-
- The PSB packet is a synchronization packet that provides a
- starting point for decoding or recovery from errors.
-
- Support for psb_period is indicated by:
-
- /sys/bus/event_source/devices/intel_pt/caps/psb_cyc
-
- which contains "1" if the feature is supported and "0"
- otherwise.
-
- Valid values are given by:
-
- /sys/bus/event_source/devices/intel_pt/caps/psb_periods
-
- which contains a hexadecimal value, the bits of which represent
- valid values e.g. bit 2 set means value 2 is valid.
-
- The psb_period value is converted to the approximate number of
- trace bytes between PSB packets as:
-
- 2 ^ (value + 11)
-
- e.g. value 3 means 16KiB bytes between PSBs
-
- If an invalid value is entered, the error message
- will give a list of valid values e.g.
-
- $ perf record -e intel_pt/psb_period=15/u uname
- Invalid psb_period for intel_pt. Valid values are: 0-5
-
- If MTC packets are selected, the default config selects a value
- of 3 (i.e. psb_period=3) or the nearest lower value that is
- supported (0 is always supported). Otherwise the default is 0.
-
- If decoding is expected to be reliable and the buffer is large
- then a large PSB period can be used.
-
- Because a TSC packet is produced with PSB, the PSB period can
- also affect the granularity to timing information in the absence
- of MTC or CYC.
-
-mtc Produces MTC timing packets.
-
- MTC packets provide finer grain timestamp information than TSC
- packets. MTC packets record time using the hardware crystal
- clock (CTC) which is related to TSC packets using a TMA packet.
-
- Support for this feature is indicated by:
-
- /sys/bus/event_source/devices/intel_pt/caps/mtc
-
- which contains "1" if the feature is supported and
- "0" otherwise.
-
- The frequency of MTC packets can also be specified - see
- mtc_period below.
-
-mtc_period Specifies how frequently MTC packets are produced - see mtc
- above for how to determine if MTC packets are supported.
-
- Valid values are given by:
-
- /sys/bus/event_source/devices/intel_pt/caps/mtc_periods
-
- which contains a hexadecimal value, the bits of which represent
- valid values e.g. bit 2 set means value 2 is valid.
-
- The mtc_period value is converted to the MTC frequency as:
-
- CTC-frequency / (2 ^ value)
-
- e.g. value 3 means one eighth of CTC-frequency
-
- Where CTC is the hardware crystal clock, the frequency of which
- can be related to TSC via values provided in cpuid leaf 0x15.
-
- If an invalid value is entered, the error message
- will give a list of valid values e.g.
-
- $ perf record -e intel_pt/mtc_period=15/u uname
- Invalid mtc_period for intel_pt. Valid values are: 0,3,6,9
-
- The default value is 3 or the nearest lower value
- that is supported (0 is always supported).
-
-cyc Produces CYC timing packets.
-
- CYC packets provide even finer grain timestamp information than
- MTC and TSC packets. A CYC packet contains the number of CPU
- cycles since the last CYC packet. Unlike MTC and TSC packets,
- CYC packets are only sent when another packet is also sent.
-
- Support for this feature is indicated by:
-
- /sys/bus/event_source/devices/intel_pt/caps/psb_cyc
-
- which contains "1" if the feature is supported and
- "0" otherwise.
-
- The number of CYC packets produced can be reduced by specifying
- a threshold - see cyc_thresh below.
-
-cyc_thresh Specifies how frequently CYC packets are produced - see cyc
- above for how to determine if CYC packets are supported.
-
- Valid cyc_thresh values are given by:
-
- /sys/bus/event_source/devices/intel_pt/caps/cycle_thresholds
-
- which contains a hexadecimal value, the bits of which represent
- valid values e.g. bit 2 set means value 2 is valid.
-
- The cyc_thresh value represents the minimum number of CPU cycles
- that must have passed before a CYC packet can be sent. The
- number of CPU cycles is:
-
- 2 ^ (value - 1)
-
- e.g. value 4 means 8 CPU cycles must pass before a CYC packet
- can be sent. Note a CYC packet is still only sent when another
- packet is sent, not at, e.g. every 8 CPU cycles.
-
- If an invalid value is entered, the error message
- will give a list of valid values e.g.
-
- $ perf record -e intel_pt/cyc,cyc_thresh=15/u uname
- Invalid cyc_thresh for intel_pt. Valid values are: 0-12
-
- CYC packets are not requested by default.
-
-pt Specifies pass-through which enables the 'branch' config term.
-
- The default config selects 'pt' if it is available, so a user will
- never need to specify this term.
-
-branch Enable branch tracing. Branch tracing is enabled by default so to
- disable branch tracing use 'branch=0'.
-
- The default config selects 'branch' if it is available.
-
-ptw Enable PTWRITE packets which are produced when a ptwrite instruction
- is executed.
-
- Support for this feature is indicated by:
-
- /sys/bus/event_source/devices/intel_pt/caps/ptwrite
-
- which contains "1" if the feature is supported and
- "0" otherwise.
-
- As an alternative, refer to "Emulated PTWRITE" further below.
-
-fup_on_ptw Enable a FUP packet to follow the PTWRITE packet. The FUP packet
- provides the address of the ptwrite instruction. In the absence of
- fup_on_ptw, the decoder will use the address of the previous branch
- if branch tracing is enabled, otherwise the address will be zero.
- Note that fup_on_ptw will work even when branch tracing is disabled.
-
-pwr_evt Enable power events. The power events provide information about
- changes to the CPU C-state.
-
- Support for this feature is indicated by:
-
- /sys/bus/event_source/devices/intel_pt/caps/power_event_trace
-
- which contains "1" if the feature is supported and
- "0" otherwise.
-
-event Enable Event Trace. The events provide information about asynchronous
- events.
-
- Support for this feature is indicated by:
-
- /sys/bus/event_source/devices/intel_pt/caps/event_trace
-
- which contains "1" if the feature is supported and
- "0" otherwise.
-
-notnt Disable TNT packets. Without TNT packets, it is not possible to walk
- executable code to reconstruct control flow, however FUP, TIP, TIP.PGE
- and TIP.PGD packets still indicate asynchronous control flow, and (if
- return compression is disabled - see noretcomp) return statements.
- The advantage of eliminating TNT packets is reducing the size of the
- trace and corresponding tracing overhead.
-
- Support for this feature is indicated by:
-
- /sys/bus/event_source/devices/intel_pt/caps/tnt_disable
-
- which contains "1" if the feature is supported and
- "0" otherwise.
-
+Config terms are parameters specified with the -e intel_pt// event option,
+for example:
+
+ -e intel_pt/cyc/
+
+which selects cycle accurate mode. Each config term can have a value which
+defaults to 1, so the above is the same as:
+
+ -e intel_pt/cyc=1/
+
+Some terms are set by default, so must be set to 0 to turn them off. For
+example, to turn off branch tracing:
+
+ -e intel_pt/branch=0/
+
+Multiple config terms are separated by commas, for example:
+
+ -e intel_pt/cyc,mtc_period=9/
+
+There are also common config terms, see linkperf:perf-record[1] documentation.
+
+Intel PT config terms are described below.
+
+*tsc*::
+Always supported. Produces TSC timestamp packets to provide
+timing information. In some cases it is possible to decode
+without timing information, for example a per-thread context
+that does not overlap executable memory maps.
++
+The default config selects tsc (i.e. tsc=1).
+
+*noretcomp*::
+Always supported. Disables "return compression" so a TIP packet
+is produced when a function returns. Causes more packets to be
+produced but might make decoding more reliable.
++
+The default config does not select noretcomp (i.e. noretcomp=0).
+
+*psb_period*::
+Allows the frequency of PSB packets to be specified.
++
+The PSB packet is a synchronization packet that provides a
+starting point for decoding or recovery from errors.
++
+Support for psb_period is indicated by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/psb_cyc
++
+which contains "1" if the feature is supported and "0"
+otherwise.
++
+Valid values are given by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/psb_periods
++
+which contains a hexadecimal value, the bits of which represent
+valid values e.g. bit 2 set means value 2 is valid.
++
+The psb_period value is converted to the approximate number of
+trace bytes between PSB packets as:
++
+ 2 ^ (value + 11)
++
+e.g. value 3 means 16KiB bytes between PSBs
++
+If an invalid value is entered, the error message
+will give a list of valid values e.g.
++
+ $ perf record -e intel_pt/psb_period=15/u uname
+ Invalid psb_period for intel_pt. Valid values are: 0-5
++
+If MTC packets are selected, the default config selects a value
+of 3 (i.e. psb_period=3) or the nearest lower value that is
+supported (0 is always supported). Otherwise the default is 0.
++
+If decoding is expected to be reliable and the buffer is large
+then a large PSB period can be used.
++
+Because a TSC packet is produced with PSB, the PSB period can
+also affect the granularity to timing information in the absence
+of MTC or CYC.
+
+*mtc*::
+Produces MTC timing packets.
++
+MTC packets provide finer grain timestamp information than TSC
+packets. MTC packets record time using the hardware crystal
+clock (CTC) which is related to TSC packets using a TMA packet.
++
+Support for this feature is indicated by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/mtc
++
+which contains "1" if the feature is supported and
+"0" otherwise.
++
+The frequency of MTC packets can also be specified - see
+mtc_period below.
+
+*mtc_period*::
+Specifies how frequently MTC packets are produced - see mtc
+above for how to determine if MTC packets are supported.
++
+Valid values are given by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/mtc_periods
++
+which contains a hexadecimal value, the bits of which represent
+valid values e.g. bit 2 set means value 2 is valid.
++
+The mtc_period value is converted to the MTC frequency as:
+
+	CTC-frequency / (2 ^ value)
++
+e.g. value 3 means one eighth of CTC-frequency
++
+Where CTC is the hardware crystal clock, the frequency of which
+can be related to TSC via values provided in cpuid leaf 0x15.
++
+If an invalid value is entered, the error message
+will give a list of valid values e.g.
++
+ $ perf record -e intel_pt/mtc_period=15/u uname
+ Invalid mtc_period for intel_pt. Valid values are: 0,3,6,9
++
+The default value is 3 or the nearest lower value
+that is supported (0 is always supported).
+
+*cyc*::
+Produces CYC timing packets.
++
+CYC packets provide even finer grain timestamp information than
+MTC and TSC packets. A CYC packet contains the number of CPU
+cycles since the last CYC packet. Unlike MTC and TSC packets,
+CYC packets are only sent when another packet is also sent.
++
+Support for this feature is indicated by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/psb_cyc
++
+which contains "1" if the feature is supported and
+"0" otherwise.
++
+The number of CYC packets produced can be reduced by specifying
+a threshold - see cyc_thresh below.
+
+*cyc_thresh*::
+Specifies how frequently CYC packets are produced - see cyc
+above for how to determine if CYC packets are supported.
++
+Valid cyc_thresh values are given by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/cycle_thresholds
++
+which contains a hexadecimal value, the bits of which represent
+valid values e.g. bit 2 set means value 2 is valid.
++
+The cyc_thresh value represents the minimum number of CPU cycles
+that must have passed before a CYC packet can be sent. The
+number of CPU cycles is:
++
+ 2 ^ (value - 1)
++
+e.g. value 4 means 8 CPU cycles must pass before a CYC packet
+can be sent. Note a CYC packet is still only sent when another
+packet is sent, not at, e.g. every 8 CPU cycles.
++
+If an invalid value is entered, the error message
+will give a list of valid values e.g.
++
+ $ perf record -e intel_pt/cyc,cyc_thresh=15/u uname
+ Invalid cyc_thresh for intel_pt. Valid values are: 0-12
++
+CYC packets are not requested by default.
+
+*pt*::
+Specifies pass-through which enables the 'branch' config term.
++
+The default config selects 'pt' if it is available, so a user will
+never need to specify this term.
+
+*branch*::
+Enable branch tracing. Branch tracing is enabled by default so to
+disable branch tracing use 'branch=0'.
++
+The default config selects 'branch' if it is available.
+
+*ptw*::
+Enable PTWRITE packets which are produced when a ptwrite instruction
+is executed.
++
+Support for this feature is indicated by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/ptwrite
++
+which contains "1" if the feature is supported and
+"0" otherwise.
++
+As an alternative, refer to "Emulated PTWRITE" further below.
+
+*fup_on_ptw*::
+Enable a FUP packet to follow the PTWRITE packet. The FUP packet
+provides the address of the ptwrite instruction. In the absence of
+fup_on_ptw, the decoder will use the address of the previous branch
+if branch tracing is enabled, otherwise the address will be zero.
+Note that fup_on_ptw will work even when branch tracing is disabled.
+
+*pwr_evt*::
+Enable power events. The power events provide information about
+changes to the CPU C-state.
++
+Support for this feature is indicated by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/power_event_trace
++
+which contains "1" if the feature is supported and
+"0" otherwise.
+
+*event*::
+Enable Event Trace. The events provide information about asynchronous
+events.
++
+Support for this feature is indicated by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/event_trace
++
+which contains "1" if the feature is supported and
+"0" otherwise.
+
+*notnt*::
+Disable TNT packets. Without TNT packets, it is not possible to walk
+executable code to reconstruct control flow, however FUP, TIP, TIP.PGE
+and TIP.PGD packets still indicate asynchronous control flow, and (if
+return compression is disabled - see noretcomp) return statements.
+The advantage of eliminating TNT packets is reducing the size of the
+trace and corresponding tracing overhead.
++
+Support for this feature is indicated by:
++
+ /sys/bus/event_source/devices/intel_pt/caps/tnt_disable
++
+which contains "1" if the feature is supported and
+"0" otherwise.
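++
+For example, a sketch assuming the above file contains "1":
++
+ $ perf record -e intel_pt/notnt/u uname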
+
+*aux-action=start-paused*::
+Start tracing paused, refer to the section <<_pause_or_resume_tracing,Pause or Resume Tracing>>
+
+
+config terms on other events
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some Intel PT features, such as AUX area sampling and PEBS-via-PT, work with
+other events. In those cases, the other events can have the config terms
+below:
+
+*aux-sample-size*::
+ Used to set the AUX area sample size, refer to the section
+ <<_aux_area_sampling_option,AUX area sampling option>>
+
+*aux-output*::
+ Used to select PEBS-via-PT, refer to the
+ section <<_pebs_via_intel_pt,PEBS via Intel PT>>
+
+*aux-action*::
+ Used to pause or resume tracing, refer to the section
+ <<_pause_or_resume_tracing,Pause or Resume Tracing>>
AUX area sampling option
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -592,7 +649,8 @@ The default snapshot size is the auxtrace mmap size. If neither auxtrace mmap s
nor snapshot size is specified, then the default is 4MiB for privileged users
(or if /proc/sys/kernel/perf_event_paranoid < 0), 128KiB for unprivileged users.
If an unprivileged user does not specify mmap pages, the mmap pages will be
-reduced as described in the 'new auxtrace mmap size option' section below.
+reduced as described in the <<_new_auxtrace_mmap_size_option,new auxtrace mmap size option>>
+section below.
The snapshot size is displayed if the option -vv is used e.g.
@@ -948,11 +1006,11 @@ transaction start, commit or abort.
Note that "instructions", "cycles", "branches" and "transactions" events
depend on code flow packets which can be disabled by using the config term
-"branch=0". Refer to the config terms section above.
+"branch=0". Refer to the <<_config_terms,config terms>> section above.
"ptwrite" events record the payload of the ptwrite instruction and whether
"fup_on_ptw" was used. "ptwrite" events depend on PTWRITE packets which are
-recorded only if the "ptw" config term was used. Refer to the config terms
+recorded only if the "ptw" config term was used. Refer to the <<_config_terms,config terms>>
section above. perf script "synth" field displays "ptwrite" information like
this: "ip: 0 payload: 0x123456789abcdef0" where "ip" is 1 if "fup_on_ptw" was
used.
@@ -960,7 +1018,7 @@ used.
"Power" events correspond to power event packets and CBR (core-to-bus ratio)
packets. While CBR packets are always recorded when tracing is enabled, power
event packets are recorded only if the "pwr_evt" config term was used. Refer to
-the config terms section above. The power events record information about
+the <<_config_terms,config terms>> section above. The power events record information about
C-state changes, whereas CBR is indicative of CPU frequency. perf script
"event,synth" fields display information like this:
@@ -1116,7 +1174,7 @@ What *will* be decoded with the (single) q option:
- asynchronous branches such as interrupts
- indirect branches
- function return target address *if* the noretcomp config term (refer
- config terms section) was used
+ <<_config_terms,config terms>> section) was used
- start of (control-flow) tracing
- end of (control-flow) tracing, if it is not out of context
- power events, ptwrite, transaction start and abort
@@ -1129,7 +1187,7 @@ Repeating the q option (double-q i.e. qq) results in even faster decoding and ev
less detail. The decoder decodes only extended PSB (PSB+) packets, getting the
instruction pointer if there is a FUP packet within PSB+ (i.e. between PSB and
PSBEND). Note PSB packets occur regularly in the trace based on the psb_period
-config term (refer config terms section). There will be a FUP packet if the
+config term (refer <<_config_terms,config terms>> section). There will be a FUP packet if the
PSB+ occurs while control flow is being traced.
What will *not* be decoded with the qq option:
@@ -1306,7 +1364,7 @@ Without timestamps, --per-thread must be specified to distinguish threads.
perf script can be used to provide an instruction trace
- $ perf script --guestkallsyms $KALLSYMS --insn-trace --xed -F+ipc | grep -C10 vmresume | head -21
+ $ perf script --guestkallsyms $KALLSYMS --insn-trace=disasm -F+ipc | grep -C10 vmresume | head -21
CPU 0/KVM 1440 ffffffff82133cdd __vmx_vcpu_run+0x3d ([kernel.kallsyms]) movq 0x48(%rax), %r9
CPU 0/KVM 1440 ffffffff82133ce1 __vmx_vcpu_run+0x41 ([kernel.kallsyms]) movq 0x50(%rax), %r10
CPU 0/KVM 1440 ffffffff82133ce5 __vmx_vcpu_run+0x45 ([kernel.kallsyms]) movq 0x58(%rax), %r11
@@ -1407,7 +1465,7 @@ There were none.
'perf script' can be used to provide an instruction trace showing timestamps
- $ perf script -i perf.data.kvm --guestkallsyms $KALLSYMS --insn-trace --xed -F+ipc | grep -C10 vmresume | head -21
+ $ perf script -i perf.data.kvm --guestkallsyms $KALLSYMS --insn-trace=disasm -F+ipc | grep -C10 vmresume | head -21
CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133cdd __vmx_vcpu_run+0x3d ([kernel.kallsyms]) movq 0x48(%rax), %r9
CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133ce1 __vmx_vcpu_run+0x41 ([kernel.kallsyms]) movq 0x50(%rax), %r10
CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133ce5 __vmx_vcpu_run+0x45 ([kernel.kallsyms]) movq 0x58(%rax), %r11
@@ -1863,6 +1921,108 @@ For pipe mode, the order of events and timestamps can presumably
be messed up.
+Pause or Resume Tracing
+-----------------------
+
+With newer kernels, it is possible to use other selected events to pause
+or resume Intel PT tracing. This is configured by using the "aux-action"
+config term:
+
+"aux-action=pause" is used with events that are to pause Intel PT tracing.
+
+"aux-action=resume" is used with events that are to resume Intel PT tracing.
+
+"aux-action=start-paused" is used with the Intel PT event to start in a
+paused state.
+
+For example, to trace only the uname system call (sys_newuname) when running the
+command line utility uname:
+
+ $ perf record --kcore -e intel_pt/aux-action=start-paused/k,syscalls:sys_enter_newuname/aux-action=resume/,syscalls:sys_exit_newuname/aux-action=pause/ uname
+ Linux
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 0.043 MB perf.data ]
+ $ perf script --call-trace
+ uname 30805 [000] 24001.058782799: name: 0x7ffc9c1865b0
+ uname 30805 [000] 24001.058784424: psb offs: 0
+ uname 30805 [000] 24001.058784424: cbr: 39 freq: 3904 MHz (139%)
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) __x64_sys_newuname
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) down_read
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) __cond_resched
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) preempt_count_add
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) in_lock_functions
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) preempt_count_sub
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) up_read
+ uname 30805 [000] 24001.058784629: ([kernel.kallsyms]) preempt_count_add
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) in_lock_functions
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) preempt_count_sub
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) _copy_to_user
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) syscall_exit_to_user_mode
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) syscall_exit_work
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) perf_syscall_exit
+ uname 30805 [000] 24001.058784838: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_trace_buf_alloc
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_swevent_get_recursion_context
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_tp_event
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_trace_buf_update
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) tracing_gen_ctx_irq_test
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_swevent_event
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __perf_event_account_interrupt
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __this_cpu_preempt_check
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_event_output_forward
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) perf_event_aux_pause
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) ring_buffer_get
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __rcu_read_lock
+ uname 30805 [000] 24001.058785046: ([kernel.kallsyms]) __rcu_read_unlock
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) pt_event_stop
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) debug_smp_processor_id
+ uname 30805 [000] 24001.058785254: ([kernel.kallsyms]) native_write_msr
+ uname 30805 [000] 24001.058785463: ([kernel.kallsyms]) native_write_msr
+ uname 30805 [000] 24001.058785639: 0x0
+
+The example above uses tracepoints, but any kind of sampled event can be used.
+
+For example:
+
+ Tracing between arch_cpu_idle_enter() and arch_cpu_idle_exit() using breakpoint events:
+
+ $ sudo cat /proc/kallsyms | sort | grep ' arch_cpu_idle_enter\| arch_cpu_idle_exit'
+ ffffffffb605bf60 T arch_cpu_idle_enter
+ ffffffffb614d8a0 W arch_cpu_idle_exit
+ $ sudo perf record --kcore -a -e intel_pt/aux-action=start-paused/k -e mem:0xffffffffb605bf60:x/aux-action=resume/ -e mem:0xffffffffb614d8a0:x/aux-action=pause/ -- sleep 1
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 1.387 MB perf.data ]
+
+ Tracing __alloc_pages() using kprobes:
+
+ $ sudo perf probe --add '__alloc_pages order'
+ Added new event: probe:__alloc_pages (on __alloc_pages with order)
+ $ sudo perf probe --add __alloc_pages%return
+ Added new event: probe:__alloc_pages__return (on __alloc_pages%return)
+ $ sudo perf record --kcore -aR -e intel_pt/aux-action=start-paused/k -e probe:__alloc_pages/aux-action=resume/ -e probe:__alloc_pages__return/aux-action=pause/ -- sleep 1
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 1.490 MB perf.data ]
+
+ Tracing starting at main() using a uprobe event:
+
+ $ sudo perf probe -x /usr/bin/uname main
+ Added new event: probe_uname:main (on main in /usr/bin/uname)
+ $ sudo perf record -e intel_pt/aux-action=start-paused/u -e probe_uname:main/aux-action=resume/ -- uname
+ Linux
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 0.031 MB perf.data ]
+
+ Tracing occasionally using cycles events with different periods:
+
+ $ perf record --kcore -a -m,64M -e intel_pt/aux-action=start-paused/k -e cycles/aux-action=pause,period=1000000/Pk -e cycles/aux-action=resume,period=10500000/Pk -- firefox
+ [ perf record: Woken up 19 times to write data ]
+ [ perf record: Captured and wrote 16.561 MB perf.data ]
+
+
EXAMPLE
-------
diff --git a/tools/perf/Documentation/perf-kvm.txt b/tools/perf/Documentation/perf-kvm.txt
index b66be66fe836..c26524d38f47 100644
--- a/tools/perf/Documentation/perf-kvm.txt
+++ b/tools/perf/Documentation/perf-kvm.txt
@@ -115,9 +115,9 @@ STAT LIVE OPTIONS
-m::
--mmap-pages=::
- Number of mmap data pages (must be a power of two) or size
- specification with appended unit character - B/K/M/G. The
- size is rounded up to have nearest pages power of two value.
+ Number of mmap data pages (must be a power of two) or size
+ specification in bytes with appended unit character - B/K/M/G.
+ The size is rounded up to the nearest power-of-two page value.
-a::
--all-cpus::
diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index 109ace1d5e90..21e607669d78 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -1,4 +1,4 @@
-perf-kowrk(1)
+perf-kwork(1)
=============
NAME
@@ -35,7 +35,7 @@ There are several variants of 'perf kwork':
perf kwork top
perf kwork top -b
- By default it shows the individual work events such as irq, workqeueu,
+ By default it shows the individual work events such as irq, workqueue,
including the run time and delay (time between raise and actually entry):
Runtime start Runtime end Cpu Kwork name Runtime Delaytime
diff --git a/tools/perf/Documentation/perf-list.txt b/tools/perf/Documentation/perf-list.txt
index 3b12595193c9..a4378a0cd914 100644
--- a/tools/perf/Documentation/perf-list.txt
+++ b/tools/perf/Documentation/perf-list.txt
@@ -8,7 +8,7 @@ perf-list - List all symbolic event types
SYNOPSIS
--------
[verse]
-'perf list' [--no-desc] [--long-desc]
+'perf list' [<options>]
[hw|sw|cache|tracepoint|pmu|sdt|metric|metricgroup|event_glob]
DESCRIPTION
@@ -27,7 +27,7 @@ Don't print descriptions.
-v::
--long-desc::
-Print longer event descriptions.
+Print longer event descriptions and all similar PMUs with alphanumeric suffixes.
--debug::
Enable debugging output.
@@ -71,6 +71,9 @@ counted. The following modifiers exist:
D - pin the event to the PMU
W - group is weak and will fallback to non-group if not schedulable,
e - group or event are exclusive and do not share the PMU
+  b - use BPF aggregation (see perf stat --bpf-counters)
+ R - retire latency value of the event
+ X - don't regroup the event to match PMUs
The 'p' modifier can be used for specifying how precise the instruction
address should be. The 'p' modifier can be specified multiple times:
@@ -186,7 +189,7 @@ in the CPU vendor specific documentation.
The available PMUs and their raw parameters can be listed with
- ls /sys/devices/*/format
+ ls /sys/bus/event_source/devices/*/format
For example the raw event "LSD.UOPS" core pmu event above could
be specified as
@@ -241,6 +244,21 @@ For accessing trace point events perf needs to have read access to
/sys/kernel/tracing, even when perf_event_paranoid is in a relaxed
setting.
+TOOL/HWMON EVENTS
+-----------------
+
+Some events don't have an associated PMU; instead they read values
+available to software without perf_event_open. As these events don't
+support sampling, they can only really be read by tools like perf stat.
+
+Tool events provide times and certain system parameters. Examples
+include duration_time, user_time, system_time and num_cpus_online.
+
+Hwmon events provide easy access to hwmon sysfs data typically in
+/sys/class/hwmon. This information includes temperatures, fan speeds
+and energy usage.
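+
+For example, a minimal sketch counting the tool events named above:
+
+  perf stat -e duration_time,user_time,system_time -- sleep 1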
+
+
TRACING
-------
@@ -261,17 +279,33 @@ also be supplied. For example:
perf stat -C 0 -e 'hv_gpci/dtbp_ptitc,phys_processor_idx=0x2/' ...
-EVENT QUALIFIERS:
+EVENT QUALIFIERS
+----------------
It is also possible to add extra qualifiers to an event:
percore:
-Sums up the event counts for all hardware threads in a core, e.g.:
+ Sums up the event counts for all hardware threads in a core, e.g.:
+ perf stat -e cpu/event=0,umask=0x3,percore=1/
+
+cpu:
+ Specifies a CPU or a range of CPUs to open the event upon. It may
+ also reference a PMU to copy the CPU mask from. The value may be
+ repeated to specify opening the event on multiple CPUs.
- perf stat -e cpu/event=0,umask=0x3,percore=1/
+ Example 1: to open the instructions event on CPUs 0 and 2, the
+ cycles event on CPUs 1 and 2:
+ perf stat -e instructions/cpu=0,cpu=2/,cycles/cpu=1-2/ -a sleep 1
+ Example 2: to open the data_read uncore event on CPU 0 and the
+ data_write uncore event on CPU 1:
+ perf stat -e data_read/cpu=0/,data_write/cpu=1/ -a sleep 1
+
+ Example 3: to open the software msr/tsc/ event only on the CPUs
+ matching those from the cpu_core PMU:
+ perf stat -e msr/tsc,cpu=cpu_core/ -a sleep 1
EVENT GROUPS
------------
@@ -359,6 +393,8 @@ Support raw format:
. '--raw-dump [hw|sw|cache|tracepoint|pmu|event_glob]', shows the raw-dump of
a certain kind of events.
+include::intel-acr.txt[]
+
SEE ALSO
--------
linkperf:perf-stat[1], linkperf:perf-top[1],
diff --git a/tools/perf/Documentation/perf-lock.txt b/tools/perf/Documentation/perf-lock.txt
index f5938d616d75..c17b3e318169 100644
--- a/tools/perf/Documentation/perf-lock.txt
+++ b/tools/perf/Documentation/perf-lock.txt
@@ -111,11 +111,11 @@ INFO OPTIONS
-t::
--threads::
- dump thread list in perf.data
+ dump only the thread list in perf.data
-m::
--map::
- dump map of lock instances (address:name table)
+ dump only the map of lock instances (address:name table)
CONTENTION OPTIONS
@@ -179,16 +179,17 @@ CONTENTION OPTIONS
-o::
--lock-owner::
- Show lock contention stat by owners. Implies --threads and
- requires --use-bpf.
+ Show lock contention stat by owners. This option can be combined with -t,
+ which shows owner's per thread lock stats, or -v, which shows owner's
+ stacktrace. Requires --use-bpf.
-Y::
--type-filter=<value>::
Show lock contention only for given lock types (comma separated list).
Available values are:
semaphore, spinlock, rwlock, rwlock:R, rwlock:W, rwsem, rwsem:R, rwsem:W,
- rtmutex, rwlock-rt, rwlock-rt:R, rwlock-rt:W, pcpu-sem, pcpu-sem:R, pcpu-sem:W,
- mutex
+ rtmutex, rwlock-rt, rwlock-rt:R, rwlock-rt:W, percpu-rwmem, pcpu-sem,
+ pcpu-sem:R, pcpu-sem:W, mutex
Note that RW-variant of locks have :R and :W suffix. Names without the
suffix are shortcuts for the both variants. Ex) rwsem = rwsem:R + rwsem:W.
@@ -215,6 +216,21 @@ CONTENTION OPTIONS
--cgroup-filter=<value>::
Show lock contention only in the given cgroups (comma separated list).
+-J::
+--inject-delay=<time@function>::
+	Add delays to the given lock. The delay is added at the contention-end
+	part so that the (new) owner of the lock will be delayed. But by
+	slowing down the owner, the waiters will also be delayed. This works
+	only with -b/--use-bpf.
+
+ The 'time' is specified in nsec but it can have a unit suffix. Available
+ units are "ms", "us" and "ns". Currently it accepts up to 10ms of delays
+ for safety reasons.
+
+ Note that it will busy-wait after it gets the lock. Delaying locks can
+ have significant consequences including potential kernel crashes. Please
+ use it at your own risk.
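+
+	For example, a sketch only; the lock name after '@' is purely
+	illustrative and must match a lock shown in the contention output:
+
+	  perf lock contention -ab -J 100us@tasklist_lock -- sleep 1   # lock name for illustration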
+
SEE ALSO
--------
diff --git a/tools/perf/Documentation/perf-mem.txt b/tools/perf/Documentation/perf-mem.txt
index 19862572e3f2..4d164836d094 100644
--- a/tools/perf/Documentation/perf-mem.txt
+++ b/tools/perf/Documentation/perf-mem.txt
@@ -21,22 +21,17 @@ and stores are sampled. Use the -t option to limit to loads or stores.
Note that on Intel systems the memory latency reported is the use-latency,
not the pure load (or store latency). Use latency includes any pipeline
-queueing delays in addition to the memory subsystem latency.
+queuing delays in addition to the memory subsystem latency.
On Arm64 this uses SPE to sample load and store operations, therefore hardware
and kernel support is required. See linkperf:perf-arm-spe[1] for a setup guide.
Due to the statistical nature of SPE sampling, not every memory operation will
be sampled.
-OPTIONS
--------
-<command>...::
- Any command you can specify in a shell.
-
--i::
---input=<file>::
- Input file name.
+On AMD this uses the IBS Op PMU to sample load-store operations.
+COMMON OPTIONS
+--------------
-f::
--force::
Don't do ownership validation
@@ -45,24 +40,9 @@ OPTIONS
--type=<type>::
Select the memory operation type: load or store (default: load,store)
--D::
---dump-raw-samples::
- Dump the raw decoded samples on the screen in a format that is easy to parse with
- one sample per line.
-
--x::
---field-separator=<separator>::
- Specify the field separator used when dump raw samples (-D option). By default,
- The separator is the space character.
-
--C::
---cpu=<cpu>::
- Monitor only on the list of CPUs provided. Multiple CPUs can be provided as a
- comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2. Default
- is to monitor all CPUS.
--U::
---hide-unresolved::
- Only display entries resolved to a symbol.
+-v::
+--verbose::
+ Be more verbose (show counter open errors, etc)
-p::
--phys-data::
@@ -73,6 +53,9 @@ OPTIONS
RECORD OPTIONS
--------------
+<command>...::
+ Any command you can specify in a shell.
+
-e::
--event <event>::
Event selector. Use 'perf mem record -e list' to list available events.
@@ -85,17 +68,144 @@ RECORD OPTIONS
--all-user::
Configure all used events to run in user space.
--v::
---verbose::
- Be more verbose (show counter open errors, etc)
-
--ldlat <n>::
- Specify desired latency for loads event. Supported on Intel and Arm64
- processors only. Ignored on other archs.
+ Specify desired latency for loads event. Supported on Intel, Arm64 and
+ some AMD processors. Ignored on other archs.
+
+ On supported AMD processors:
+ - /sys/bus/event_source/devices/ibs_op/caps/ldlat file contains '1'.
+ - Supported latency values are 128 to 2048 (both inclusive).
+ - Latency value which is a multiple of 128 incurs a little less profiling
+ overhead compared to other values.
+ - Load latency filtering is disabled by default.
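+
+	For example, a sketch assuming an AMD processor whose ldlat capability
+	file contains '1', sampling only loads with latency of at least 256
+	cycles:
+
+	  perf mem record --ldlat 256 -- sleep 1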
+
+REPORT OPTIONS
+--------------
+-i::
+--input=<file>::
+ Input file name.
+
+-C::
+--cpu=<cpu>::
+ Monitor only on the list of CPUs provided. Multiple CPUs can be provided as a
+ comma-separated list with no space: 0,1. Ranges of CPUs are specified with -
+ like 0-2. Default is to monitor all CPUS.
+
+-D::
+--dump-raw-samples::
+ Dump the raw decoded samples on the screen in a format that is easy to parse with
+ one sample per line.
+
+-s::
+--sort=<key>::
+ Group result by given key(s) - multiple keys can be specified
+	in CSV format. The keys specific to memory samples are:
+ symbol_daddr, symbol_iaddr, dso_daddr, locked, tlb, mem, snoop,
+ dcacheline, phys_daddr, data_page_size, blocked.
+
+ - symbol_daddr: name of data symbol being executed on at the time of sample
+ - symbol_iaddr: name of code symbol being executed on at the time of sample
+ - dso_daddr: name of library or module containing the data being executed
+ on at the time of the sample
+ - locked: whether the bus was locked at the time of the sample
+ - tlb: type of tlb access for the data at the time of the sample
+ - mem: type of memory access for the data at the time of the sample
+ - snoop: type of snoop (if any) for the data at the time of the sample
+ - dcacheline: the cacheline the data address is on at the time of the sample
+ - phys_daddr: physical address of data being executed on at the time of sample
+ - data_page_size: the data page size of data being executed on at the time of sample
+ - blocked: reason of blocked load access for the data at the time of the sample
+
+ And the default sort keys are changed to local_weight, mem, sym, dso,
+ symbol_daddr, dso_daddr, snoop, tlb, locked, blocked, local_ins_lat.
+
+-F::
+--fields=::
+ Specify output field - multiple keys can be specified in CSV format.
+ Please see linkperf:perf-report[1] for details.
+
+ In addition to the default fields, 'perf mem report' will provide the
+ following fields to break down sample periods.
+
+ - op: operation in the sample instruction (load, store, prefetch, ...)
+ - cache: location in CPU cache (L1, L2, ...) where the sample hit
+ - mem: location in memory or other places the sample hit
+ - dtlb: location in Data TLB (L1, L2) where the sample hit
+ - snoop: snoop result for the sampled data access
+
+ Please take a look at the OUTPUT FIELD SELECTION section for caveats.
+
+-T::
+--type-profile::
+ Show data-type profile result instead of code symbols. This requires
+ the debug information and it will change the default sort keys to:
+ mem, snoop, tlb, type.
+
+-U::
+--hide-unresolved::
+ Only display entries resolved to a symbol.
+
+-x::
+--field-separator=<separator>::
+	Specify the field separator used when dumping raw samples (-D option).
+	By default, the separator is the space character.
In addition, for report all perf report options are valid, and for record
all perf record options.
+OVERHEAD CALCULATION
+--------------------
+Unlike linkperf:perf-report[1], which calculates overhead from the actual
+sample period, perf-mem overhead is calculated using sample weight. E.g.
+there are two samples in perf.data file, both with the same sample period,
+but one sample with weight 180 and the other with weight 20:
+
+ $ perf script -F period,data_src,weight,ip,sym
+ 100000 629080842 |OP LOAD|LVL L3 hit|... 20 7e69b93ca524 strcmp
+ 100000 1a29081042 |OP LOAD|LVL RAM hit|... 180 ffffffff82429168 memcpy
+
+ $ perf report -F overhead,symbol
+ 50% [.] strcmp
+ 50% [k] memcpy
+
+ $ perf mem report -F overhead,symbol
+ 90% [k] memcpy
+ 10% [.] strcmp
+
+OUTPUT FIELD SELECTION
+----------------------
+"perf mem report" adds a number of new output fields specific to data source
+information in the sample. Some of them have the same name as the existing
+sort keys ("mem" and "snoop"), so unlike other fields and sort keys, they
+behave differently when used by -F/--fields or -s/--sort.
+
+Using those two as output fields aggregates all samples together and shows a
+breakdown.
+
+ $ perf mem report -F mem,snoop
+ ...
+ # ------ Memory ------- --- Snoop ----
+ # RAM Uncach Other HitM Other
+ # ..................... ..............
+ #
+ 3.5% 0.0% 96.5% 25.1% 74.9%
+
+But using the same name for sort keys will aggregate samples for each type
+separately.
+
+ $ perf mem report -s mem,snoop
+ # Overhead Samples Memory access Snoop
+ # ........ ............ ....................................... ............
+ #
+ 47.99% 1509 L2 hit N/A
+ 25.08% 338 core, same node Any cache hit HitM
+ 10.24% 54374 N/A N/A
+ 6.77% 35938 L1 hit N/A
+ 6.39% 101 core, same node Any cache hit N/A
+ 3.50% 69 RAM hit N/A
+ 0.03% 158 LFB/MAB hit N/A
+ 0.00% 2 Uncached hit N/A
+
SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-report[1], linkperf:perf-arm-spe[1]
diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index 6015fdd08fb6..067891bd7da6 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -68,6 +68,10 @@ OPTIONS
like this: name=\'CPU_CLK_UNHALTED.THREAD:cmask=0x1\'.
- 'aux-output': Generate AUX records instead of events. This requires
that an AUX area event is also provided.
+ - 'aux-action': "pause" or "resume" to pause or resume an AUX
+ area event (the group leader) when this event occurs.
+		  Using "start-paused" on an AUX area event itself will
+		  make it start in a paused state.
- 'aux-sample-size': Set sample size for AUX area sampling. If the
'--aux-sample' option has been used, set aux-sample-size=0 to disable
AUX area sampling for the event.
@@ -200,7 +204,7 @@ OPTIONS
ip, id, tid, pid, cpu, time, addr, period, txn, weight, phys_addr,
code_pgsz, data_pgsz, weight1, weight2, weight3, ins_lat, retire_lat,
p_stage_cyc, mem_op, mem_lvl, mem_snoop, mem_remote, mem_lock,
- mem_dtlb, mem_blk, mem_hops
+ mem_dtlb, mem_blk, mem_hops, uid, gid
The <operator> can be one of:
==, !=, >, >=, <, <=, &
@@ -223,6 +227,10 @@ OPTIONS
'--filter' exists, the new filter expression will be combined with
them by '&&'.
+--latency::
+ Enable data collection for latency profiling.
+ Use perf report --latency for latency-centric profile.
+
-a::
--all-cpus::
System-wide collection from all CPUs (default if no target is specified).
@@ -273,10 +281,11 @@ OPTIONS
-m::
--mmap-pages=::
Number of mmap data pages (must be a power of two) or size
- specification with appended unit character - B/K/M/G. The
- size is rounded up to have nearest pages power of two value.
- Also, by adding a comma, the number of mmap pages for AUX
- area tracing can be specified.
+ specification in bytes with appended unit character - B/K/M/G.
+ The size is rounded up to the nearest power-of-two page value.
+ By adding a comma, an additional parameter with the same
+ semantics used for the normal mmap areas can be specified for
+	  the AUX tracing area.
-g::
Enables call-graph (stack chain/backtrace) recording for both
@@ -311,7 +320,7 @@ OPTIONS
User can change the size by passing the size after comma like
"--call-graph dwarf,4096".
- When "fp" recording is used, perf tries to save stack enties
+ When "fp" recording is used, perf tries to save stack entries
up to the number specified in sysctl.kernel.perf_event_max_stack
by default. User can change the number by passing it after comma
like "--call-graph fp,32".
@@ -331,7 +340,7 @@ OPTIONS
-d::
--data::
- Record the sample virtual addresses.
+ Record the sample virtual addresses. Implies --sample-mem-info.
--phys-data::
Record the sample physical addresses.
@@ -359,6 +368,11 @@ OPTIONS
the sample_type member of the struct perf_event_attr argument to the
perf_event_open system call.
+--sample-mem-info::
+ Record the sample data source information for memory operations.
+	It requires hardware support and may work on specific events only.
+ Please consider using 'perf mem record' instead if you're not sure.
+
-n::
--no-samples::
Don't sample.
@@ -549,7 +563,9 @@ Specify vmlinux path which has debuginfo.
Record build-id of all DSOs regardless whether it's actually hit or not.
--buildid-mmap::
-Record build ids in mmap2 events, disables build id cache (implies --no-buildid).
+Legacy option to record build-ids in mmap2 events, which is now the
+default. Behaves identically to --no-buildid. Disable with
+--no-buildid-mmap.
--aio[=n]::
Use <n> control blocks in asynchronous (Posix AIO) trace writing mode (default: 1, max: 4).
@@ -828,6 +844,20 @@ filtered through the mask provided by -C option.
only, as of now. So the applications built without the frame
pointer might see bogus addresses.
+	Off-cpu profiling consists of two types of samples: direct samples,
+	which share the same behavior as regular samples, and accumulated
+	samples, stored in a BPF stack trace map and presented after all the
+	regular samples.
+
+--off-cpu-thresh::
+ Once a task's off-cpu time reaches this threshold (in milliseconds), it
+ generates a direct off-cpu sample. The default is 500ms.
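+
+	For example, a sketch that emits a direct sample whenever a task stays
+	off-cpu for 100ms or more (assuming off-cpu profiling is enabled with
+	--off-cpu):
+
+	  perf record -a --off-cpu --off-cpu-thresh 100 -- sleep 3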
+
+--setup-filter=<action>::
+ Prepare BPF filter to be used by regular users. The action should be
+ either "pin" or "unpin". The filter can be used after it's pinned.
+
+
include::intel-hybrid.txt[]
SEE ALSO
diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
index 38f59ac064f7..acef3ff4178e 100644
--- a/tools/perf/Documentation/perf-report.txt
+++ b/tools/perf/Documentation/perf-report.txt
@@ -44,7 +44,7 @@ OPTIONS
--comms=::
Only consider symbols in these comms. CSV that understands
file://filename entries. This option will affect the percentage of
- the overhead column. See --percentage for more info.
+ the overhead and latency columns. See --percentage for more info.
--pid=::
Only show events for given process ID (comma separated list).
@@ -54,12 +54,12 @@ OPTIONS
--dsos=::
Only consider symbols in these dsos. CSV that understands
file://filename entries. This option will affect the percentage of
- the overhead column. See --percentage for more info.
+ the overhead and latency columns. See --percentage for more info.
-S::
--symbols=::
Only consider these symbols. CSV that understands
file://filename entries. This option will affect the percentage of
- the overhead column. See --percentage for more info.
+ the overhead and latency columns. See --percentage for more info.
--symbol-filter=::
Only show symbols that match (partially) with this filter.
@@ -68,6 +68,21 @@ OPTIONS
--hide-unresolved::
Only display entries resolved to a symbol.
+--parallelism::
+ Only consider these parallelism levels. Parallelism level is the number
+ of threads that actively run on CPUs at the time of sample. The flag
+ accepts single number, comma-separated list, and ranges (for example:
+	accepts a single number, a comma-separated list, and ranges (for example:
+ is doing during sequential/low-parallelism phases as compared to
+ high-parallelism phases. This option will affect the percentage of
+ the overhead and latency columns. See --percentage for more info.
+ Also see the `CPU and latency overheads' section for more details.
+
+--latency::
+ Show latency-centric profile rather than the default
+ CPU-consumption-centric profile
+ (requires perf record --latency flag).
+
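+
+	For example, assuming the data was recorded with 'perf record --latency',
+	a latency-centric report restricted to low-parallelism samples might be:
+
+	  perf report --latency --parallelism=1-4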
-s::
--sort=::
Sort histogram entries by given key(s) - multiple keys can be specified
@@ -79,6 +94,7 @@ OPTIONS
- comm: command (name) of the task which can be read via /proc/<pid>/comm
- pid: command and tid of the task
+ - tgid: command and tgid of the task
- dso: name of library or module executed at the time of sample
- dso_size: size of library or module executed at the time of sample
- symbol: name of function executed at the time of sample
@@ -87,6 +103,7 @@ OPTIONS
entries are displayed as "[other]".
- cpu: cpu number the task ran at the time of sample
- socket: processor socket number the task ran at the time of sample
+ - parallelism: number of running threads at the time of sample
- srcline: filename and line number executed at the time of sample. The
DWARF debugging info must be provided.
- srcfile: file name of the source file of the samples. Requires dwarf
@@ -97,12 +114,14 @@ OPTIONS
- cgroup_id: ID derived from cgroup namespace device and inode numbers.
- cgroup: cgroup pathname in the cgroupfs.
- transaction: Transaction abort flags.
- - overhead: Overhead percentage of sample
- - overhead_sys: Overhead percentage of sample running in system mode
- - overhead_us: Overhead percentage of sample running in user mode
- - overhead_guest_sys: Overhead percentage of sample running in system mode
+ - overhead: CPU overhead percentage of sample.
+ - latency: latency (wall-clock) overhead percentage of sample.
+ See the `CPU and latency overheads' section for more details.
+ - overhead_sys: CPU overhead percentage of sample running in system mode
+ - overhead_us: CPU overhead percentage of sample running in user mode
+ - overhead_guest_sys: CPU overhead percentage of sample running in system mode
on guest machine
- - overhead_guest_us: Overhead percentage of sample running in user mode on
+ - overhead_guest_us: CPU overhead percentage of sample running in user mode on
guest machine
- sample: Number of sample
- period: Raw number of event count of sample
@@ -121,9 +140,12 @@ OPTIONS
- type: Data type of sample memory access.
- typeoff: Offset in the data type of sample memory access.
- symoff: Offset in the symbol.
+ - weight1: Average value of event specific weight (1st field of weight_struct).
+ - weight2: Average value of event specific weight (2nd field of weight_struct).
+ - weight3: Average value of event specific weight (3rd field of weight_struct).
- By default, comm, dso and symbol keys are used.
- (i.e. --sort comm,dso,symbol)
+ By default, overhead, comm, dso and symbol keys are used.
+ (i.e. --sort overhead,comm,dso,symbol).
If --branch-stack option is used, following sort keys are also
available:
@@ -198,7 +220,11 @@ OPTIONS
--fields=::
Specify output field - multiple keys can be specified in CSV format.
Following fields are available:
- overhead, overhead_sys, overhead_us, overhead_children, sample and period.
+ overhead, latency, overhead_sys, overhead_us, overhead_children, sample,
+ period, weight1, weight2, weight3, ins_lat, p_stage_cyc and retire_lat.
+	The last 3 names are aliases for the corresponding weights. When the weight
+ fields are used, they will show the average value of the weight.
+
Also it can contain any sort key(s).
By default, every sort keys not specified in -F will be appended
@@ -282,7 +308,7 @@ OPTIONS
Accumulate callchain of children to parent entry so that then can
show up in the output. The output will have a new "Children" column
and will be sorted on the data. It requires callchains are recorded.
- See the `overhead calculation' section for more details. Enabled by
+ See the `Overhead calculation' section for more details. Enabled by
default, disable with --no-children.
--max-stack::
@@ -384,6 +410,14 @@ OPTIONS
This allows to examine the path the program took to each sample.
The data collection must have used -b (or -j) and -g.
+	Also show some branch flags, which can be:
+	 - Predicted: display the average percentage of predicted branches.
+	   (predicted number / total number)
+	 - Abort: display the number of TSX aborted branches.
+	 - Cycles: cycles in the basic block.
+
+	 - iterations: display the average number of iterations in the callchain list.
+
--addr2line=<path>::
Path to addr2line binary.
@@ -427,9 +461,9 @@ OPTIONS
--call-graph option for details.
--percentage::
- Determine how to display the overhead percentage of filtered entries.
- Filters can be applied by --comms, --dsos and/or --symbols options and
- Zoom operations on the TUI (thread, dso, etc).
+ Determine how to display the CPU and latency overhead percentage
+ of filtered entries. Filters can be applied by --comms, --dsos, --symbols
+ and/or --parallelism options and Zoom operations on the TUI (thread, dso, etc).
"relative" means it's relative to filtered entries only so that the
sum of shown entries will be always 100%. "absolute" means it retains
@@ -531,8 +565,35 @@ include::itrace.txt[]
--raw-trace::
When displaying traceevent output, do not use print fmt or plugins.
+-H::
--hierarchy::
- Enable hierarchical output.
+ Enable hierarchical output. In the hierarchy mode, each sort key groups
+	samples based on the criteria and then sub-divides them using the lower
+ level sort key.
+
+ For example:
+ In normal output:
+
+ perf report -s dso,sym
+ # Overhead Shared Object Symbol
+ 50.00% [kernel.kallsyms] [k] kfunc1
+ 20.00% perf [.] foo
+ 15.00% [kernel.kallsyms] [k] kfunc2
+ 10.00% perf [.] bar
+ 5.00% libc.so [.] libcall
+
+ In hierarchy output:
+
+ perf report -s dso,sym --hierarchy
+ # Overhead Shared Object / Symbol
+ 65.00% [kernel.kallsyms]
+ 50.00% [k] kfunc1
+ 15.00% [k] kfunc2
+ 30.00% perf
+ 20.00% [.] foo
+ 10.00% [.] bar
+ 5.00% libc.so
+ 5.00% [.] libcall
--inline::
If a callgraph address belongs to an inlined function, the inline stack
@@ -580,10 +641,13 @@ include::itrace.txt[]
'Avg Cycles%' - block average sampled cycles / sum of total block average
sampled cycles
'Avg Cycles' - block average sampled cycles
+ 'Branch Counter' - block branch counter histogram (with -v showing the number)
--skip-empty::
Do not print 0 results in the --stat output.
+include::cpu-and-latency-overheads.txt[]
+
include::callchain-overhead-calculation.txt[]
SEE ALSO
diff --git a/tools/perf/Documentation/perf-sched.txt b/tools/perf/Documentation/perf-sched.txt
index 5fbe42bd599b..6dbbddb6464d 100644
--- a/tools/perf/Documentation/perf-sched.txt
+++ b/tools/perf/Documentation/perf-sched.txt
@@ -20,6 +20,26 @@ There are several variants of 'perf sched':
'perf sched latency' to report the per task scheduling latencies
and other scheduling properties of the workload.
+ Example usage:
+ perf sched record -- sleep 1
+ perf sched latency
+
+ -------------------------------------------------------------------------------------------------------------------------------------------
+ Task | Runtime ms | Count | Avg delay ms | Max delay ms | Max delay start | Max delay end |
+ -------------------------------------------------------------------------------------------------------------------------------------------
+ perf:(2) | 2.804 ms | 66 | avg: 0.524 ms | max: 1.069 ms | max start: 254752.314960 s | max end: 254752.316029 s
+ NetworkManager:1343 | 0.372 ms | 13 | avg: 0.008 ms | max: 0.013 ms | max start: 254751.551153 s | max end: 254751.551166 s
+ kworker/1:2-xfs:4649 | 0.012 ms | 1 | avg: 0.008 ms | max: 0.008 ms | max start: 254751.519807 s | max end: 254751.519815 s
+ kworker/3:1-xfs:388 | 0.011 ms | 1 | avg: 0.006 ms | max: 0.006 ms | max start: 254751.519809 s | max end: 254751.519815 s
+ sleep:147736 | 0.938 ms | 3 | avg: 0.006 ms | max: 0.007 ms | max start: 254751.313817 s | max end: 254751.313824 s
+
+	It shows Runtime (time that a task spent actually running on the CPU),
+	Count (number of times a delay was calculated) and delay (time that a
+	task was ready to run but was kept waiting).
+
+	Tasks with the same command name are merged and the merge count is
+	given within parentheses. However, if the -p option is used, the pid is
+	shown instead.
+
'perf sched script' to see a detailed trace of the workload that
was recorded (aliased to 'perf script' for now).
@@ -44,8 +64,8 @@ There are several variants of 'perf sched':
By default it shows the individual schedule events, including the wait
time (time between sched-out and next sched-in events for the task), the
- task scheduling delay (time between wakeup and actually running) and run
- time for the task:
+  task scheduling delay (time between becoming runnable and actually running) and
+ run time for the task:
time cpu task name wait time sch delay run time
[tid/pid] (msec) (msec) (msec)
@@ -78,6 +98,22 @@ OPTIONS
--force::
Don't complain, do it.
+OPTIONS for 'perf sched latency'
+-------------------------------
+
+-C::
+--CPU <n>::
+ CPU to profile on.
+
+-p::
+--pids::
+ latency stats per pid instead of per command name.
+
+-s::
+--sort <key[,key2...]>::
+ sort by key(s): runtime, switch, avg, max
+	by default it's sorted by "avg, max, switch, runtime".
+
OPTIONS for 'perf sched map'
----------------------------
@@ -94,6 +130,16 @@ OPTIONS for 'perf sched map'
--color-pids::
Highlight the given pids.
+--task-name <task>::
+ Map output only for the given task name(s). Separate the
+ task names with a comma (without whitespace). The sched-out
+ time is printed and is represented by '*-' for the given
+ task name(s).
+ ('-' indicates other tasks while '.' is idle).
+
+--fuzzy-name::
+ Given task name(s) can be partially matched (fuzzy matching).
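+
+	For example, a sketch showing map output only for tasks whose name
+	partially matches 'kworker':
+
+	  perf sched map --task-name kworker --fuzzy-name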
+
OPTIONS for 'perf sched timehist'
---------------------------------
-k::
@@ -166,6 +212,30 @@ OPTIONS for 'perf sched timehist'
--state::
Show task state when it switched out.
+--show-prio::
+ Show task priority.
+
+--prio::
+ Only show events for given task priority(ies). Multiple priorities can be
+ provided as a comma-separated list with no spaces: 0,120. Ranges of
+ priorities are specified with -: 120-129. A combination of both can also be
+ provided: 0,120-129.
+
+-P::
+--pre-migrations::
+	Show pre-migration wait time. Pre-migration wait time is the time a
+	task spends waiting on a runqueue without getting the chance to run
+	there, before being migrated to a different runqueue where it finally
+	runs. This time, between sched_wakeup and migrate_task, is the
+	pre-migration wait time.
+
+OPTIONS for 'perf sched replay'
+------------------------------
+
+-r::
+--repeat <n>::
+ repeat the workload n times (0: infinite). Default is 10.
+
SEE ALSO
--------
linkperf:perf-record[1]
diff --git a/tools/perf/Documentation/perf-script-python.txt b/tools/perf/Documentation/perf-script-python.txt
index 6a8581012e16..27a1cac6fe76 100644
--- a/tools/perf/Documentation/perf-script-python.txt
+++ b/tools/perf/Documentation/perf-script-python.txt
@@ -624,7 +624,7 @@ as perf_trace_context.perf_script_context .
perf_set_itrace_options(context, itrace_options) - set --itrace options if they have not been set already
perf_sample_srcline(context) - returns source_file_name, line_number
perf_sample_srccode(context) - returns source_file_name, line_number, source_line
-
+ perf_config_get(config_name) - returns the value of the named config item, or None if unset
Util.py Module
~~~~~~~~~~~~~~
@@ -642,8 +642,8 @@ SUPPORTED FIELDS
Currently supported fields:
-ev_name, comm, pid, tid, cpu, ip, time, period, phys_addr, addr,
-symbol, symoff, dso, time_enabled, time_running, values, callchain,
+ev_name, comm, id, stream_id, pid, tid, cpu, ip, time, period, phys_addr,
+addr, symbol, symoff, dso, time_enabled, time_running, values, callchain,
brstack, brstacksym, datasrc, datasrc_decode, iregs, uregs,
weight, transaction, raw_buf, attr, cpumode.
diff --git a/tools/perf/Documentation/perf-script.txt b/tools/perf/Documentation/perf-script.txt
index ff9a52e44688..28bec7e78bc8 100644
--- a/tools/perf/Documentation/perf-script.txt
+++ b/tools/perf/Documentation/perf-script.txt
@@ -132,9 +132,10 @@ OPTIONS
Comma separated list of fields to print. Options are:
comm, tid, pid, time, cpu, event, trace, ip, sym, dso, dsoff, addr, symoff,
srcline, period, iregs, uregs, brstack, brstacksym, flags, bpf-output,
- brstackinsn, brstackinsnlen, brstackoff, callindent, insn, insnlen, synth,
- phys_addr, metric, misc, srccode, ipc, data_page_size, code_page_size, ins_lat,
- machine_pid, vcpu, cgroup, retire_lat.
+ brstackinsn, brstackinsnlen, brstackdisasm, brstackoff, callindent, insn, disasm,
+ insnlen, synth, phys_addr, metric, misc, srccode, ipc, data_page_size,
+        code_page_size, ins_lat, machine_pid, vcpu, cgroup, retire_lat, brcntr.
+
Field list can be prepended with the type, trace, sw or hw,
to indicate to which event type the field list applies.
e.g., -F sw:comm,tid,time,ip,sym and -F trace:time,cpu,trace
@@ -217,9 +218,9 @@ OPTIONS
Instruction Trace decoding. For calls and returns, it will display the
name of the symbol indented with spaces to reflect the stack depth.
- When doing instruction trace decoding insn and insnlen give the
- instruction bytes and the instruction length of the current
- instruction.
+ When doing instruction trace decoding, insn, disasm and insnlen give the
+ instruction bytes, disassembled instructions (requires libcapstone support)
+ and the instruction length of the current instruction respectively.
The synth field is used by synthesized events which may be created when
Instruction Trace decoding.
@@ -238,13 +239,22 @@ OPTIONS
i.e., -F "" is not allowed.
The brstack output includes branch related information with raw addresses using the
- /v/v/v/v/cycles syntax in the following order:
- FROM: branch source instruction
- TO : branch target instruction
- M/P/-: M=branch target mispredicted or branch direction was mispredicted, P=target predicted or direction predicted, -=not supported
- X/- : X=branch inside a transactional region, -=not in transaction region or not supported
- A/- : A=TSX abort entry, -=not aborted region or not supported
- cycles
+ FROM/TO/EVENT/INTX/ABORT/CYCLES/TYPE/SPEC syntax in the following order:
+ FROM : branch source instruction
+ TO : branch target instruction
+ EVENT : M=branch target or direction was mispredicted
+ P=branch target or direction was predicted
+ N=branch not-taken
+ -=no event or not supported
+ INTX : X=branch inside a transactional region
+ -=branch not in transaction region or not supported
+ ABORT : A=TSX abort entry
+ -=not aborted region or not supported
+ CYCLES: the number of cycles that have elapsed since the last branch was recorded
+ TYPE : branch type: COND/UNCOND/IND/CALL/IND_CALL/RET etc.
+ -=not supported
+ SPEC : branch speculation info: SPEC_WRONG_PATH/NON_SPEC_CORRECT_PATH/SPEC_CORRECT_PATH
+ -=not supported
The brstacksym is identical to brstack, except that the FROM and TO addresses are printed in a symbolic form if possible.
@@ -256,6 +266,9 @@ OPTIONS
can’t know the next sequential instruction after an unconditional branch unless
you calculate that based on its length.
+ brstackdisasm acts like brstackinsn, but will print disassembled instructions if
+ perf is built with the capstone library.
+
The brstackoff field will print an offset into a specific dso/binary.
With the metric option perf script can compute metrics for
@@ -365,6 +378,9 @@ OPTIONS
--demangle-kernel::
Demangle kernel symbol names to human readable form (for C++ kernels).
+--addr2line=<path>::
+ Path to addr2line binary.
+
--header
Show perf.data header.
@@ -441,9 +457,10 @@ include::itrace.txt[]
will be printed. Each entry has function name and file/line. Enabled by
default, disable with --no-inline.
---insn-trace::
- Show instruction stream for intel_pt traces. Combine with --xed to
- show disassembly.
+--insn-trace[=<raw|disasm>]::
+ Show instruction stream in bytes (raw) or disassembled (disasm)
+ for intel_pt traces. The default is 'raw'. To use xed, combine
+ 'raw' with --xed to show disassembly done by xed.
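+
+	For example, to show disassembled instructions for an Intel PT trace:
+
+	  perf script --insn-trace=disasm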
--xed::
Run xed disassembler on output. Requires installing the xed disassembler.
diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index 5af2e432b54f..1a766d4a2233 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -308,6 +308,14 @@ use --per-die in addition to -a. (system-wide). The output includes the
die number and the number of online processors on that die. This is
useful to gauge the amount of aggregation.
+--per-cluster::
+Aggregate counts per processor cluster for system-wide mode measurement. This
+is a useful mode to detect imbalance between clusters. To enable this mode,
+use --per-cluster in addition to -a. (system-wide). The output includes the
+cluster number and the number of online processors on that cluster. This is
+useful to gauge the amount of aggregation. The information of cluster ID and
+related CPUs can be obtained from /sys/devices/system/cpu/cpuX/topology/cluster_{id,cpus}.
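+
+For example:
+
+  perf stat -a --per-cluster -e cycles -- sleep 1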
+
--per-cache::
Aggregate counts per cache instance for system-wide mode measurements. By
default, the aggregation happens for the cache level at the highest index
@@ -396,6 +404,9 @@ Aggregate counts per processor socket for system-wide mode measurements.
--per-die::
Aggregate counts per processor die for system-wide mode measurements.
+--per-cluster::
+Aggregate counts per processor cluster for system-wide mode measurements.
+
--per-cache::
Aggregate counts per cache instance for system-wide mode measurements. By
default, the aggregation happens for the cache level at the highest index
@@ -487,6 +498,21 @@ To interpret the results it is usually needed to know on which
CPUs the workload runs on. If needed the CPUs can be forced using
taskset.
+--record-tpebs::
+Enable automatic sampling on Intel TPEBS retire_latency events (event with :R
+modifier). Without this option, perf would not capture dynamic retire_latency
+at runtime. Currently, a zero value is assigned to the retire_latency event when
+this option is not set. The TPEBS hardware feature starts from Intel Granite
+Rapids microarchitecture. This option only exists in X86_64 and is meaningful on
+Intel platforms with TPEBS feature.
+
+--tpebs-mode=[mean|min|max|last]::
+Set how retirement latency events have their sample times
+combined. The default "mean" gives the average of retirement
+latency. "min" or "max" give the smallest or largest retirement latency
+times respectively. "last" uses the last retirement latency sample's
+time.
+
--td-level::
Print the top-down statistics that equal the input level. It allows
users to print the interested top-down metrics level instead of the
@@ -614,18 +640,20 @@ JSON FORMAT
With -j, perf stat is able to print out a JSON format output
that can be used for parsing.
-- timestamp : optional usec time stamp in fractions of second (with -I)
+- interval : optional timestamp in fractions of second (with -I)
- optional aggregate options:
- core : core identifier (with --per-core)
- die : die identifier (with --per-die)
- socket : socket identifier (with --per-socket)
- node : node identifier (with --per-node)
- thread : thread identifier (with --per-thread)
+- counters : number of aggregated PMU counters
- counter-value : counter value
- unit : unit of the counter value or empty
- event : event name
- variance : optional variance if multiple values are collected (with -r)
-- runtime : run time of counter
+- event-runtime : run time of the event
+- pcnt-running : percentage of time the event was running
- metric-value : optional metric value
- metric-unit : optional unit of metric
diff --git a/tools/perf/Documentation/perf-test.txt b/tools/perf/Documentation/perf-test.txt
index 951a2f262872..32da0d1fa86a 100644
--- a/tools/perf/Documentation/perf-test.txt
+++ b/tools/perf/Documentation/perf-test.txt
@@ -28,12 +28,44 @@ OPTIONS
Tests to skip (comma separated numeric list).
-v::
+-vv::
+-vvv::
--verbose::
- Be more verbose.
+	With a single '-v' (verbose level 1), only failing test output
+	is displayed. With '-vv' and higher, all test output is shown.
+
+-S::
+--sequential::
+ Run all tests one after the other. By default "exclusive"
+ tests are run sequentially, but other tests are run in
+ parallel to speed execution.
+
+-r::
+--runs-per-test::
+ Run each test the given number of times, by default once. This
+ option can be useful to determine if a test is flaky.
-F::
--dont-fork::
- Do not fork child for each test, run all tests within single process.
+	Do not fork a child for each test; run all tests within a single
+	process. This sets sequential mode.
--dso::
Specify a DSO for the "Symbols" test.
+
+-w::
+--workload=::
+	Run a built-in workload. To list them, use '--list-workloads'. Current
+	ones include: noploop, thloop, leafloop, sqrtloop, brstack, datasym and
+	landlock.
+
+ Used with the shell script regression tests.
+
+ Some accept an extra parameter:
+
+ seconds: leafloop, noploop, sqrtloop, thloop
+ nrloops: brstack
+
+ The datasym and landlock workloads don't accept any.
+
+--list-workloads::
+ List the available workloads to use with -w/--workload.
diff --git a/tools/perf/Documentation/perf-top.txt b/tools/perf/Documentation/perf-top.txt
index 3c202ec080ba..af3e4230c72f 100644
--- a/tools/perf/Documentation/perf-top.txt
+++ b/tools/perf/Documentation/perf-top.txt
@@ -43,6 +43,10 @@ Default is to monitor all CPUS.
encoding with the layout of the event control registers as described
by entries in /sys/bus/event_source/devices/cpu/format/*.
+--filter=<filter>::
+ Event filter. This option should follow an event selector (-e). For
+ syntax see linkperf:perf-record[1].
+
-E <entries>::
--entries=<entries>::
Display this many functions.
@@ -79,8 +83,8 @@ Default is to monitor all CPUS.
-m <pages>::
--mmap-pages=<pages>::
Number of mmap data pages (must be a power of two) or size
- specification with appended unit character - B/K/M/G. The
- size is rounded up to have nearest pages power of two value.
+ specification in bytes with appended unit character - B/K/M/G.
+ The size is rounded up to the nearest power-of-two page value.
-p <pid>::
--pid=<pid>::
@@ -261,8 +265,38 @@ Default is to monitor all CPUS.
--raw-trace::
When displaying traceevent output, do not use print fmt or plugins.
+-H::
--hierarchy::
- Enable hierarchy output.
+ Enable hierarchical output. In the hierarchy mode, each sort key groups
+	samples based on the criteria and then sub-divides them using the lower
+ level sort key.
+
+ For example, in normal output:
+
+ perf report -s dso,sym
+ #
+ # Overhead Shared Object Symbol
+ # ........ ................. ...........
+ 50.00% [kernel.kallsyms] [k] kfunc1
+ 20.00% perf [.] foo
+ 15.00% [kernel.kallsyms] [k] kfunc2
+ 10.00% perf [.] bar
+ 5.00% libc.so [.] libcall
+
+ In hierarchy output:
+
+ perf report -s dso,sym --hierarchy
+ #
+ # Overhead Shared Object / Symbol
+ # .......... ......................
+ 65.00% [kernel.kallsyms]
+ 50.00% [k] kfunc1
+ 15.00% [k] kfunc2
+ 30.00% perf
+ 20.00% [.] foo
+ 10.00% [.] bar
+ 5.00% libc.so
+ 5.00% [.] libcall
--overwrite::
Enable this to use just the most recent records, which helps in high core count
diff --git a/tools/perf/Documentation/perf-trace.txt b/tools/perf/Documentation/perf-trace.txt
index f0da8cf63e9a..892c82a9bf40 100644
--- a/tools/perf/Documentation/perf-trace.txt
+++ b/tools/perf/Documentation/perf-trace.txt
@@ -106,8 +106,8 @@ filter out the startup phase of the program, which is often very different.
-m::
--mmap-pages=::
Number of mmap data pages (must be a power of two) or size
- specification with appended unit character - B/K/M/G. The
- size is rounded up to have nearest pages power of two value.
+ specification in bytes with appended unit character - B/K/M/G.
+ The size is rounded up to the nearest power-of-two page value.
-C::
--cpu::
@@ -150,6 +150,11 @@ the thread executes on the designated CPUs. Default is to monitor all CPUs.
To be used with -s or -S, to show stats for the errnos experienced by
syscalls, using only this option will trigger --summary.
+--summary-mode=mode::
+	To be used with -s or -S, to select how to show the summary. By default
+	it shows the syscall summary by thread. Possible values are: thread, total,
+ cgroup.
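+
+	For example, to show one combined summary instead of per-thread summaries:
+
+	  perf trace -s --summary-mode=total -- sleep 1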
+
--tool_stats::
Show tool stats such as number of times fd->pathname was discovered thru
hooking the open syscall return + vfs_getname or via reading /proc/pid/fd, etc.
@@ -233,13 +238,20 @@ the thread executes on the designated CPUs. Default is to monitor all CPUs.
the same beautifiers used in the strace-like enter+exit lines to augment the
tracepoint arguments.
---map-dump::
- Dump BPF maps setup by events passed via -e, for instance the augmented_raw_syscalls
- living in tools/perf/examples/bpf/augmented_raw_syscalls.c. For now this
- dumps just boolean map values and integer keys, in time this will print in hex
- by default and use BTF when available, as well as use functions to do pretty
- printing using the existing 'perf trace' syscall arg beautifiers to map integer
- arguments to strings (pid to comm, syscall id to syscall name, etc).
+--force-btf::
+ Use btf_dump to pretty print syscall argument data, instead of using hand-crafted pretty
+ printers. This option is intended for testing BTF integration in perf trace. btf_dump-based
+ pretty-printing serves as a fallback to hand-crafted pretty printers, as the latter can
+ better pretty-print integer flags and struct pointers.
+
+--bpf-summary::
+ Collect system call statistics in BPF. This is only for live mode and
+ works well with -s/--summary option where no argument information is
+ required.
+
+--max-summary=N::
+ Maximum number of lines in the summary mode. Note that this applies to
+ each entry (thread or cgroup).
PAGEFAULTS
diff --git a/tools/perf/Documentation/perf.data-file-format.txt b/tools/perf/Documentation/perf.data-file-format.txt
index 010a4edcd384..c9d4dec65344 100644
--- a/tools/perf/Documentation/perf.data-file-format.txt
+++ b/tools/perf/Documentation/perf.data-file-format.txt
@@ -348,6 +348,16 @@ to special needs.
struct perf_bpil, which contains detailed information about
a BPF program, including type, id, tag, jited/xlated instructions, etc.
+The format of data in HEADER_BPF_PROG_INFO is as follows:
+ u32 count
+
+ struct perf_bpil {
+	u32 info_len; /* size of struct bpf_prog_info when the tool is compiled */
+	u32 data_len; /* total bytes allocated for data, rounded up to 8 bytes */
+ u64 arrays; /* which arrays are included in data */
+ struct bpf_prog_info info;
+ u8 data[];
+ }[count];
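
A minimal C sketch of walking these entries follows. It is not the perf
source; the per-entry size arithmetic (fixed fields + info_len + data_len)
and the helper names are assumptions based on the field comments above.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Leading fixed fields of each entry, mirroring the layout above. */
    struct bpil_entry_hdr {
            uint32_t info_len;  /* size of struct bpf_prog_info */
            uint32_t data_len;  /* bytes of data[], already rounded to 8 */
            uint64_t arrays;    /* which arrays are present in data[] */
    };

    /* Walk the raw HEADER_BPF_PROG_INFO payload; returns entries visited. */
    static uint32_t walk_bpf_prog_info(const unsigned char *buf, size_t len)
    {
            uint32_t count, i;
            size_t off = sizeof(uint32_t);

            if (len < sizeof(count))
                    return 0;
            memcpy(&count, buf, sizeof(count));

            for (i = 0; i < count; i++) {
                    struct bpil_entry_hdr hdr;

                    if (off + sizeof(hdr) > len)
                            break;
                    memcpy(&hdr, buf + off, sizeof(hdr));
                    /* entry = fixed fields + bpf_prog_info + data[] */
                    off += sizeof(hdr) + hdr.info_len + hdr.data_len;
                    if (off > len)
                            break;
            }
            return i;
    }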
HEADER_BPF_BTF = 26,
@@ -370,7 +380,7 @@ struct {
u32 mmap_len;
};
-Indicates that trace contains records of PERF_RECORD_COMPRESSED type
+Indicates that trace contains records of PERF_RECORD_COMPRESSED2 type
that have perf_events records in compressed form.
HEADER_CPU_PMU_CAPS = 28,
@@ -602,7 +612,14 @@ struct auxtrace_error_event {
Describes a header feature. These are records used in pipe-mode that
contain information that otherwise would be in perf.data file's header.
- PERF_RECORD_COMPRESSED = 81,
+ PERF_RECORD_COMPRESSED = 81, /* deprecated */
+
+The header is followed by a compressed data frame that can be decompressed
+into an array of perf trace records. The size of the entire compressed event
+record, including the header, is limited by the max value of header.size.
+
+This record type is deprecated; new files should use PERF_RECORD_COMPRESSED2
+to guarantee 8-byte alignment.
struct compressed_event {
struct perf_event_header header;
@@ -618,10 +635,17 @@ This is used, for instance, to 'perf inject' events after init and before
regular events, those emitted by the kernel, to support combining guest and
host records.
+ PERF_RECORD_COMPRESSED2 = 83,
-The header is followed by compressed data frame that can be decompressed
-into array of perf trace records. The size of the entire compressed event
-record including the header is limited by the max value of header.size.
+8-byte aligned version of `PERF_RECORD_COMPRESSED`. `header.size` indicates the
+total record size, including padding for 8-byte alignment, and `data_size`
+specifies the actual size of the compressed data.
+
+struct perf_record_compressed2 {
+ struct perf_event_header header;
+ __u64 data_size;
+ char data[];
+};
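
A small C sketch of the size relationship described above (illustrative
only; the 8-byte perf_event_header size is assumed and the helpers are not
perf APIs):

    #include <stddef.h>
    #include <stdint.h>

    /* perf_event_header is assumed to be 8 bytes: u32 type, u16 misc, u16 size. */
    #define COMPRESSED2_FIXED   (8 + sizeof(uint64_t))  /* header + data_size */

    /* header.size for a payload of data_size bytes, padded to 8-byte alignment. */
    static inline size_t compressed2_record_size(uint64_t data_size)
    {
            size_t sz = COMPRESSED2_FIXED + (size_t)data_size;

            return (sz + 7) & ~(size_t)7;
    }

    /* Padding bytes that follow the compressed payload inside the record. */
    static inline size_t compressed2_padding(uint64_t data_size)
    {
            return compressed2_record_size(data_size) - COMPRESSED2_FIXED - (size_t)data_size;
    }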
Event types
diff --git a/tools/perf/Documentation/perf.txt b/tools/perf/Documentation/perf.txt
index a7cf7bc2f968..cbcc2e4d557e 100644
--- a/tools/perf/Documentation/perf.txt
+++ b/tools/perf/Documentation/perf.txt
@@ -63,6 +63,8 @@ OPTIONS
in browser mode
perf-event-open - Print perf_event_open() arguments and
return value
+ kmaps - Print kernel and module maps (perf script
+ and perf report without browser)
--debug-file::
Write debug output to a specified file.
@@ -80,7 +82,8 @@ linkperf:perf-stat[1], linkperf:perf-top[1],
linkperf:perf-record[1], linkperf:perf-report[1],
linkperf:perf-list[1]
-linkperf:perf-annotate[1],linkperf:perf-archive[1],linkperf:perf-arm-spe[1],
+linkperf:perf-amd-ibs[1], linkperf:perf-annotate[1],
+linkperf:perf-archive[1], linkperf:perf-arm-spe[1],
linkperf:perf-bench[1], linkperf:perf-buildid-cache[1],
linkperf:perf-buildid-list[1], linkperf:perf-c2c[1],
linkperf:perf-config[1], linkperf:perf-data[1], linkperf:perf-diff[1],
diff --git a/tools/perf/Documentation/tips.txt b/tools/perf/Documentation/tips.txt
index 825745a645c1..3fee9b2a88ea 100644
--- a/tools/perf/Documentation/tips.txt
+++ b/tools/perf/Documentation/tips.txt
@@ -2,6 +2,7 @@ For a higher level overview, try: perf report --sort comm,dso
Sample related events with: perf record -e '{cycles,instructions}:S'
Compare performance results with: perf diff [<old file> <new file>]
Boolean options have negative forms, e.g.: perf report --no-children
+To not accumulate CPU time of child symbols add --no-children
Customize output of perf script with: perf script -F event,ip,sym
Generate a script for your data: perf script -g <lang>
Save output of perf stat using: perf stat record <target workload>
@@ -12,32 +13,56 @@ List events using substring match: perf list <keyword>
To see list of saved events and attributes: perf evlist -v
Use --symfs <dir> if your symbol files are in non-standard locations
To see callchains in a more compact form: perf report -g folded
+To see call chains by final symbol taking CPU time (bottom up) use perf report -G
Show individual samples with: perf script
Limit to show entries above 5% only: perf report --percent-limit 5
Profiling branch (mis)predictions with: perf record -b / perf report
-To show assembler sample contexts use perf record -b / perf script -F +brstackinsn --xed
-Treat branches as callchains: perf report --branch-history
-To count events in every 1000 msec: perf stat -I 1000
-Print event counts in CSV format with: perf stat -x,
+To show assembler sample context control flow use perf record -b / perf report --samples 10 and then browse context
+To adjust path to source files to local file system use perf report --prefix=... --prefix-strip=...
+Treat branches as callchains: perf record -b ... ; perf report --branch-history
+To show estimated cycles per function and IPC in annotate use perf record -b ... ; perf report --total-cycles
+To count events every 1000 msec: perf stat -I 1000
+Print event counts in machine readable CSV format with: perf stat -x\;
If you have debuginfo enabled, try: perf report -s sym,srcline
For memory address profiling, try: perf mem record / perf mem report
For tracepoint events, try: perf report -s trace_fields
To record callchains for each sample: perf record -g
+If call chains don't work try perf record --call-graph dwarf or --call-graph lbr
To record every process run by a user: perf record -u <user>
+To show inline functions in call traces add --inline to perf report
+To not record events from perf itself add --exclude-perf
Skip collecting build-id when recording: perf record -B
To change sampling frequency to 100 Hz: perf record -F 100
+To show information about the system the samples were collected on use perf report --header
+To only collect call graph on one event use perf record -e cpu/cpu-cycles,callgraph=1/,branches ; perf report --show-ref-call-graph
+To set sampling period of individual events use perf record -e cpu/cpu-cycles,period=100001/,cpu/branches,period=10001/ ...
+To group events which need to be collected together for accuracy use {}: perf record -e '{cycles,branches}' ...
+To compute metrics for samples use perf record -e '{cycles,instructions}' ... ; perf script -F +metric
See assembly instructions with percentage: perf annotate <symbol>
If you prefer Intel style assembly, try: perf annotate -M intel
+When collecting LBR backtraces use --stitch-lbr to handle more than 32 deep entries: perf record --call-graph lbr ; perf report --stitch-lbr
For hierarchical output, try: perf report --hierarchy
Order by the overhead of source file name and line number: perf report -s srcline
System-wide collection from all CPUs: perf record -a
Show current config key-value pairs: perf config --list
+To collect Processor Trace with samples use perf record -e '{intel_pt//,cycles}' ; perf script --call-trace or --insn-trace --xed -F +ipc (remove --xed if no xed)
+To trace calls using Processor Trace use perf record -e intel_pt// ... ; perf script --call-trace. Then use perf script --time A-B --insn-trace to look at region of interest.
+To measure approximate function latency with Processor Trace use perf record -e intel_pt// ... ; perf script --call-ret-trace
+To trace only single function with Processor Trace use perf record --filter 'filter func @ program' -e intel_pt//u ./program ; perf script --insn-trace
Show user configuration overrides: perf config --user --list
To add Node.js USDT(User-Level Statically Defined Tracing): perf buildid-cache --add `which node`
-To report cacheline events from previous recording: perf c2c report
+To analyze cache line scalability issues use perf c2c record ... ; perf c2c report
To browse sample contexts use perf report --sample 10 and select in context menu
To separate samples by time use perf report --sort time,overhead,sym
+To filter subset of samples with report or script add --time X-Y or --cpu A,B,C or --socket-filter ...
To set sample time separation other than 100ms with --sort time use --time-quantum
Add -I to perf record to sample register values, which will be visible in perf report sample context.
To show IPC for sampling periods use perf record -e '{cycles,instructions}:S' and then browse context
To show context switches in perf report sample context add --switch-events to perf record.
+To show time in nanoseconds in record/report add --ns
+To compare hot regions in two workloads use perf record -b -o file ... ; perf diff --stream file1 file2
+To compare scalability of two workload samples use perf diff -c ratio file1 file2
+For latency profiling, try: perf record/report --latency
+For parallelism histogram, try: perf report --hierarchy --sort latency,parallelism,comm,symbol
+To analyze particular parallelism levels, try: perf report --latency --parallelism=32-64
+To see how parallelism changes over time, try: perf report -F time,latency,parallelism --time-quantum=1s
diff --git a/tools/perf/Documentation/topdown.txt b/tools/perf/Documentation/topdown.txt
index ae0aee86844f..5c17fff694ee 100644
--- a/tools/perf/Documentation/topdown.txt
+++ b/tools/perf/Documentation/topdown.txt
@@ -325,6 +325,36 @@ other four level 2 metrics by subtracting corresponding metrics as below.
Fetch_Bandwidth = Frontend_Bound - Fetch_Latency
Core_Bound = Backend_Bound - Memory_Bound
+TPEBS in TopDown
+================
+
+TPEBS (Timed PEBS) is one of the new Intel PMU features introduced with the
+Granite Rapids microarchitecture. TPEBS adds a 16-bit retire_latency field
+to the Basic Info group of the PEBS record. It records the core cycles from
+the retirement of the previous instruction to the retirement of the current
+instruction. Please refer to Section 8.4.1 of "Intel® Architecture Instruction
+Set Extensions Programming Reference" for more details. Because this feature
+extends the PEBS record, sampling with the weight option is required to get
+the retire_latency value.
+
+ perf record -e event_name -W ...
+
+In the most recent release of TMA, some metric formulas begin to use measured
+event retire_latency values on processors that support the TPEBS feature.
+For previous generations that do not support TPEBS, the values are static,
+predefined per processor family by the hardware architects. Because workloads
+and execution environments vary widely, retire_latency values measured at run
+time are more accurate, so TMA metrics that use TPEBS provide more accurate
+performance analysis results.
+
+To support TPEBS in TMA metrics, a new per-event modifier, :R, is added. Perf
+captures the retire_latency value of the required events (events with :R in
+the metric formula) with perf record, and uses that value in the metric
+calculation. Currently, this feature is supported through perf stat:
+
+ perf stat -M metric_name --record-tpebs ...
+
+
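
Purely as an illustration of what a consumer of these samples might do
(this is not the perf stat implementation), the measured 16-bit
retire_latency values could be reduced to a mean before being substituted
into a metric formula:

    #include <stddef.h>
    #include <stdint.h>

    /* Mean retire latency, in core cycles, over values sampled with
     * perf record -W / perf stat --record-tpebs. */
    static double mean_retire_latency(const uint16_t *samples, size_t nr)
    {
            double sum = 0.0;
            size_t i;

            if (nr == 0)
                    return 0.0;
            for (i = 0; i < nr; i++)
                    sum += samples[i];
            return sum / (double)nr;
    }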
[1] https://software.intel.com/en-us/top-down-microarchitecture-analysis-method-win
[2] https://sites.google.com/site/analysismethods/yasin-pubs
diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST
index dc42de1785ce..34af57b8ec2a 100644
--- a/tools/perf/MANIFEST
+++ b/tools/perf/MANIFEST
@@ -1,5 +1,10 @@
+COPYING
+LICENSES/preferred/GPL-2.0
arch/arm64/tools/gen-sysreg.awk
+arch/arm64/tools/syscall_64.tbl
arch/arm64/tools/sysreg
+arch/*/include/uapi/asm/bpf_perf_event.h
+include/uapi/asm-generic/Kbuild
tools/perf
tools/arch
tools/scripts
@@ -22,6 +27,10 @@ tools/lib/str_error_r.c
tools/lib/vsprintf.c
tools/lib/zalloc.c
scripts/bpf_doc.py
+scripts/Kbuild.include
+scripts/Makefile.asm-headers
+scripts/syscall.tbl
+scripts/syscallhdr.sh
tools/bpf/bpftool
kernel/bpf/disasm.c
kernel/bpf/disasm.h
diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index 75f3f6e0a231..816d5d84816b 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -51,8 +51,14 @@ else
override DEBUG = 0
endif
+ifeq ($(JOBS),1)
+ BUILD_TYPE := sequential
+else
+ BUILD_TYPE := parallel
+endif
+
define print_msg
- @printf ' BUILD: Doing '\''make \033[33m-j'$(JOBS)'\033[m'\'' parallel build\n'
+ @printf ' BUILD: Doing '\''make \033[33m-j'$(JOBS)'\033[m'\'' $(BUILD_TYPE) build\n'
endef
define make
diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
index aa55850fbc21..5700516aa84a 100644
--- a/tools/perf/Makefile.config
+++ b/tools/perf/Makefile.config
@@ -19,8 +19,43 @@ detected_var = $(shell echo "$(1)=$($(1))" >> $(OUTPUT).config-detected)
CFLAGS := $(EXTRA_CFLAGS) $(filter-out -Wnested-externs,$(EXTRA_WARNINGS))
HOSTCFLAGS := $(filter-out -Wnested-externs,$(EXTRA_WARNINGS))
-# Enabled Wthread-safety analysis for clang builds.
+# This is required because the kernel is built with this and some of the code
+# borrowed from kernel headers depends on it, e.g. put_unaligned_*().
+CFLAGS += -fno-strict-aliasing
+
+# Set target flag and options when using clang as compiler.
ifeq ($(CC_NO_CLANG), 0)
+ CLANG_TARGET_FLAGS_arm := arm-linux-gnueabi
+ CLANG_TARGET_FLAGS_arm64 := aarch64-linux-gnu
+ CLANG_TARGET_FLAGS_m68k := m68k-linux-gnu
+ CLANG_TARGET_FLAGS_mips := mipsel-linux-gnu
+ CLANG_TARGET_FLAGS_powerpc := powerpc64le-linux-gnu
+ CLANG_TARGET_FLAGS_riscv := riscv64-linux-gnu
+ CLANG_TARGET_FLAGS_s390 := s390x-linux-gnu
+ CLANG_TARGET_FLAGS_x86 := x86_64-linux-gnu
+ CLANG_TARGET_FLAGS_x86_64 := x86_64-linux-gnu
+
+ # Default to host architecture if ARCH is not explicitly given.
+ ifeq ($(ARCH), $(HOSTARCH))
+ CLANG_TARGET_FLAGS := $(shell $(CLANG) -print-target-triple)
+ else
+ CLANG_TARGET_FLAGS := $(CLANG_TARGET_FLAGS_$(ARCH))
+ endif
+
+ ifeq ($(CROSS_COMPILE),)
+ ifeq ($(CLANG_TARGET_FLAGS),)
+ $(error Specify CROSS_COMPILE or add CLANG_TARGET_FLAGS for $(ARCH))
+ else
+ CLANG_FLAGS += --target=$(CLANG_TARGET_FLAGS)
+ endif # CLANG_TARGET_FLAGS
+ else
+ CLANG_FLAGS += --target=$(notdir $(CROSS_COMPILE:%-=%))
+ endif # CROSS_COMPILE
+
+ CC := $(CLANG) $(CLANG_FLAGS) -fintegrated-as
+ CXX := $(CXX) $(CLANG_FLAGS) -fintegrated-as
+
+  # Enable Wthread-safety analysis for clang builds.
CFLAGS += -Wthread-safety
endif
@@ -28,85 +63,59 @@ include $(srctree)/tools/scripts/Makefile.arch
$(call detected_var,SRCARCH)
-NO_PERF_REGS := 1
-
-ifneq ($(NO_SYSCALL_TABLE),1)
- NO_SYSCALL_TABLE := 1
-
- ifeq ($(SRCARCH),x86)
- ifeq (${IS_64_BIT}, 1)
- NO_SYSCALL_TABLE := 0
- endif
- else
- ifeq ($(SRCARCH),$(filter $(SRCARCH),powerpc arm64 s390 mips loongarch))
- NO_SYSCALL_TABLE := 0
- endif
- endif
-
- ifneq ($(NO_SYSCALL_TABLE),1)
- CFLAGS += -DHAVE_SYSCALL_TABLE_SUPPORT
- endif
-endif
+CFLAGS += -I$(OUTPUT)arch/$(SRCARCH)/include/generated
+CFLAGS += -I$(OUTPUT)libperf/arch/$(SRCARCH)/include/generated/uapi
# Additional ARCH settings for ppc
ifeq ($(SRCARCH),powerpc)
- NO_PERF_REGS := 0
- CFLAGS += -I$(OUTPUT)arch/powerpc/include/generated
- LIBUNWIND_LIBS := -lunwind -lunwind-ppc64
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS := -lunwind -lunwind-ppc64
+ endif
endif
# Additional ARCH settings for x86
ifeq ($(SRCARCH),x86)
$(call detected,CONFIG_X86)
ifeq (${IS_64_BIT}, 1)
- CFLAGS += -DHAVE_ARCH_X86_64_SUPPORT -I$(OUTPUT)arch/x86/include/generated
+ CFLAGS += -DHAVE_ARCH_X86_64_SUPPORT
ARCH_INCLUDE = ../../arch/x86/lib/memcpy_64.S ../../arch/x86/lib/memset_64.S
- LIBUNWIND_LIBS = -lunwind-x86_64 -lunwind -llzma
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind-x86_64 -lunwind -llzma
+ endif
$(call detected,CONFIG_X86_64)
else
- LIBUNWIND_LIBS = -lunwind-x86 -llzma -lunwind
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind-x86 -llzma -lunwind
+ endif
endif
- NO_PERF_REGS := 0
endif
ifeq ($(SRCARCH),arm)
- NO_PERF_REGS := 0
- LIBUNWIND_LIBS = -lunwind -lunwind-arm
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-arm
+ endif
endif
ifeq ($(SRCARCH),arm64)
- NO_PERF_REGS := 0
- CFLAGS += -I$(OUTPUT)arch/arm64/include/generated
- LIBUNWIND_LIBS = -lunwind -lunwind-aarch64
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-aarch64
+ endif
endif
ifeq ($(SRCARCH),loongarch)
- NO_PERF_REGS := 0
- CFLAGS += -I$(OUTPUT)arch/loongarch/include/generated
- LIBUNWIND_LIBS = -lunwind -lunwind-loongarch64
-endif
-
-ifeq ($(SRCARCH),riscv)
- NO_PERF_REGS := 0
-endif
-
-ifeq ($(SRCARCH),csky)
- NO_PERF_REGS := 0
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-loongarch64
+ endif
endif
ifeq ($(ARCH),s390)
- NO_PERF_REGS := 0
- CFLAGS += -fPIC -I$(OUTPUT)arch/s390/include/generated
+ CFLAGS += -fPIC
endif
ifeq ($(ARCH),mips)
- NO_PERF_REGS := 0
- CFLAGS += -I$(OUTPUT)arch/mips/include/generated
- LIBUNWIND_LIBS = -lunwind -lunwind-mips
-endif
-
-ifeq ($(NO_PERF_REGS),0)
- $(call detected,CONFIG_PERF_REGS)
+ ifndef NO_LIBUNWIND
+ LIBUNWIND_LIBS = -lunwind -lunwind-mips
+ endif
endif
# So far there's only x86 and arm libdw unwind support merged in perf.
@@ -117,6 +126,10 @@ ifneq ($(SRCARCH),$(filter $(SRCARCH),x86 arm arm64 powerpc s390 csky riscv loon
NO_LIBDW_DWARF_UNWIND := 1
endif
+ifneq ($(LIBUNWIND),1)
+ NO_LIBUNWIND := 1
+endif
+
ifeq ($(LIBUNWIND_LIBS),)
NO_LIBUNWIND := 1
endif
@@ -139,18 +152,18 @@ ifdef LIBUNWIND_DIR
$(foreach libunwind_arch,$(LIBUNWIND_ARCHS),$(call libunwind_arch_set_flags,$(libunwind_arch)))
endif
-# Set per-feature check compilation flags
-FEATURE_CHECK_CFLAGS-libunwind = $(LIBUNWIND_CFLAGS)
-FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
-FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS)
-FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
-
-FEATURE_CHECK_LDFLAGS-libunwind-arm += -lunwind -lunwind-arm
-FEATURE_CHECK_LDFLAGS-libunwind-aarch64 += -lunwind -lunwind-aarch64
-FEATURE_CHECK_LDFLAGS-libunwind-x86 += -lunwind -llzma -lunwind-x86
-FEATURE_CHECK_LDFLAGS-libunwind-x86_64 += -lunwind -llzma -lunwind-x86_64
-
-FEATURE_CHECK_LDFLAGS-libcrypto = -lcrypto
+ifndef NO_LIBUNWIND
+ # Set per-feature check compilation flags
+ FEATURE_CHECK_CFLAGS-libunwind = $(LIBUNWIND_CFLAGS)
+ FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+ FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS)
+ FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS)
+
+ FEATURE_CHECK_LDFLAGS-libunwind-arm += -lunwind -lunwind-arm
+ FEATURE_CHECK_LDFLAGS-libunwind-aarch64 += -lunwind -lunwind-aarch64
+ FEATURE_CHECK_LDFLAGS-libunwind-x86 += -lunwind -llzma -lunwind-x86
+ FEATURE_CHECK_LDFLAGS-libunwind-x86_64 += -lunwind -llzma -lunwind-x86_64
+endif
ifdef CSINCLUDES
LIBOPENCSD_CFLAGS := -I$(CSINCLUDES)
@@ -165,10 +178,6 @@ endif
FEATURE_CHECK_CFLAGS-libopencsd := $(LIBOPENCSD_CFLAGS)
FEATURE_CHECK_LDFLAGS-libopencsd := $(LIBOPENCSD_LDFLAGS) $(OPENCSDLIBS)
-ifeq ($(NO_PERF_REGS),0)
- CFLAGS += -DHAVE_PERF_REGS_SUPPORT
-endif
-
# for linking with debug library, run like:
# make DEBUG=1 LIBDW_DIR=/opt/libdw/
ifdef LIBDW_DIR
@@ -177,10 +186,23 @@ ifdef LIBDW_DIR
endif
DWARFLIBS := -ldw
ifeq ($(findstring -static,${LDFLAGS}),-static)
- DWARFLIBS += -lelf -lebl -ldl -lz -llzma -lbz2
+ DWARFLIBS += -lelf -lz -llzma -lbz2
+
+ LIBDW_VERSION := $(shell $(PKG_CONFIG) --modversion libdw).0.0
+ LIBDW_VERSION_1 := $(word 1, $(subst ., ,$(LIBDW_VERSION)))
+ LIBDW_VERSION_2 := $(word 2, $(subst ., ,$(LIBDW_VERSION)))
+
+  # Elfutils merged libebl.a into libdw.a starting from version 0.177.
+  # Link libebl.a only if libdw is older than this version.
+ ifeq ($(shell test $(LIBDW_VERSION_2) -lt 177; echo $$?),0)
+ DWARFLIBS += -lebl
+ endif
+
+ # Must put -ldl after -lebl for dependency
+ DWARFLIBS += -ldl
endif
-FEATURE_CHECK_CFLAGS-libdw-dwarf-unwind := $(LIBDW_CFLAGS)
-FEATURE_CHECK_LDFLAGS-libdw-dwarf-unwind := $(LIBDW_LDFLAGS) $(DWARFLIBS)
+FEATURE_CHECK_CFLAGS-libdw := $(LIBDW_CFLAGS)
+FEATURE_CHECK_LDFLAGS-libdw := $(LIBDW_LDFLAGS) $(DWARFLIBS)
# for linking with debug library, run like:
# make DEBUG=1 LIBBABELTRACE_DIR=/opt/libbabeltrace/
@@ -191,6 +213,15 @@ endif
FEATURE_CHECK_CFLAGS-libbabeltrace := $(LIBBABELTRACE_CFLAGS)
FEATURE_CHECK_LDFLAGS-libbabeltrace := $(LIBBABELTRACE_LDFLAGS) -lbabeltrace-ctf
+# for linking with debug library, run like:
+# make DEBUG=1 LIBCAPSTONE_DIR=/opt/capstone/
+ifdef LIBCAPSTONE_DIR
+ LIBCAPSTONE_CFLAGS := -I$(LIBCAPSTONE_DIR)/include
+ LIBCAPSTONE_LDFLAGS := -L$(LIBCAPSTONE_DIR)/
+endif
+FEATURE_CHECK_CFLAGS-libcapstone := $(LIBCAPSTONE_CFLAGS)
+FEATURE_CHECK_LDFLAGS-libcapstone := $(LIBCAPSTONE_LDFLAGS) -lcapstone
+
ifdef LIBZSTD_DIR
LIBZSTD_CFLAGS := -I$(LIBZSTD_DIR)/lib
LIBZSTD_LDFLAGS := -L$(LIBZSTD_DIR)/lib
@@ -198,22 +229,27 @@ endif
FEATURE_CHECK_CFLAGS-libzstd := $(LIBZSTD_CFLAGS)
FEATURE_CHECK_LDFLAGS-libzstd := $(LIBZSTD_LDFLAGS)
+# for linking with debug library, run like:
+# make DEBUG=1 PKG_CONFIG_PATH=/opt/libtraceevent/(lib|lib64)/pkgconfig
+
+ifneq ($(NO_LIBTRACEEVENT),1)
+ ifeq ($(call get-executable,$(PKG_CONFIG)),)
+ $(error Error: $(PKG_CONFIG) needed by libtraceevent is missing on this system, please install it)
+ endif
+endif
+
FEATURE_CHECK_CFLAGS-bpf = -I. -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(SRCARCH)/include/uapi -I$(srctree)/tools/include/uapi
# include ARCH specific config
-include $(src-perf)/arch/$(SRCARCH)/Makefile
-ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
- CFLAGS += -DHAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
-endif
-
include $(srctree)/tools/scripts/utilities.mak
ifeq ($(call get-executable,$(FLEX)),)
- dummy := $(error Error: $(FLEX) is missing on this system, please install it)
+ $(error Error: $(FLEX) is missing on this system, please install it)
endif
ifeq ($(call get-executable,$(BISON)),)
- dummy := $(error Error: $(BISON) is missing on this system, please install it)
+ $(error Error: $(BISON) is missing on this system, please install it)
endif
ifneq ($(OUTPUT),)
@@ -235,11 +271,7 @@ endif
ifeq ($(DEBUG),0)
CORE_CFLAGS += -DNDEBUG=1
-ifeq ($(CC_NO_CLANG), 0)
- CORE_CFLAGS += -O3
-else
- CORE_CFLAGS += -O6
-endif
+CORE_CFLAGS += -O3
else
CORE_CFLAGS += -g
CXXFLAGS += -g
@@ -303,6 +335,11 @@ endif
ifdef PYTHON_CONFIG
PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) $(PYTHON_CONFIG_LDFLAGS) 2>/dev/null)
+ # Update the python flags for cross compilation
+ ifdef CROSS_COMPILE
+ PYTHON_NATIVE := $(shell echo $(PYTHON_EMBED_LDOPTS) | sed 's/\(-L.*\/\)\(.*-linux-gnu\).*/\2/')
+ PYTHON_EMBED_LDOPTS := $(subst $(PYTHON_NATIVE),$(shell $(CC) -dumpmachine),$(PYTHON_EMBED_LDOPTS))
+ endif
PYTHON_EMBED_LDFLAGS := $(call strip-libs,$(PYTHON_EMBED_LDOPTS))
PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil
PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null)
@@ -411,10 +448,6 @@ ifeq ($(feature-eventfd), 1)
CFLAGS += -DHAVE_EVENTFD_SUPPORT
endif
-ifeq ($(feature-get_current_dir_name), 1)
- CFLAGS += -DHAVE_GET_CURRENT_DIR_NAME
-endif
-
ifeq ($(feature-gettid), 1)
CFLAGS += -DHAVE_GETTID
endif
@@ -424,7 +457,7 @@ ifeq ($(feature-file-handle), 1)
endif
ifdef NO_LIBELF
- NO_DWARF := 1
+ NO_LIBDW := 1
NO_LIBUNWIND := 1
NO_LIBDW_DWARF_UNWIND := 1
NO_LIBBPF := 1
@@ -438,49 +471,32 @@ else
LIBC_SUPPORT := 1
endif
ifeq ($(LIBC_SUPPORT),1)
- msg := $(error ERROR: No libelf found. Disables 'probe' tool, jvmti and BPF support. Please install libelf-dev, libelf-devel, elfutils-libelf-devel or build with NO_LIBELF=1.)
+ $(error ERROR: No libelf found. Disables 'probe' tool, jvmti and BPF support. Please install libelf-dev, libelf-devel, elfutils-libelf-devel or build with NO_LIBELF=1.)
else
ifneq ($(filter s% -fsanitize=address%,$(EXTRA_CFLAGS),),)
ifneq ($(shell ldconfig -p | grep libasan >/dev/null 2>&1; echo $$?), 0)
- msg := $(error No libasan found, please install libasan);
+ $(error No libasan found, please install libasan)
endif
endif
ifneq ($(filter s% -fsanitize=undefined%,$(EXTRA_CFLAGS),),)
ifneq ($(shell ldconfig -p | grep libubsan >/dev/null 2>&1; echo $$?), 0)
- msg := $(error No libubsan found, please install libubsan);
+ $(error No libubsan found, please install libubsan)
endif
endif
ifneq ($(filter s% -static%,$(LDFLAGS),),)
- msg := $(error No static glibc found, please install glibc-static);
+ $(error No static glibc found, please install glibc-static)
else
- msg := $(error No gnu/libc-version.h found, please install glibc-dev[el]);
+ $(error No gnu/libc-version.h found, please install glibc-dev[el])
endif
endif
else
- ifndef NO_LIBDW_DWARF_UNWIND
- ifneq ($(feature-libdw-dwarf-unwind),1)
- NO_LIBDW_DWARF_UNWIND := 1
- msg := $(warning No libdw DWARF unwind found, Please install elfutils-devel/libdw-dev >= 0.158 and/or set LIBDW_DIR);
+ ifneq ($(feature-libdw), 1)
+ ifndef NO_LIBDW
+      $(warning No libdw.h found or libdw.h is too old (elfutils older than 0.157), disables dwarf support. Please install newer elfutils-devel/libdw-dev)
+ NO_LIBDW := 1
endif
- endif
- ifneq ($(feature-dwarf), 1)
- ifndef NO_DWARF
- msg := $(warning No libdw.h found or old libdw.h found or elfutils is older than 0.138, disables dwarf support. Please install new elfutils-devel/libdw-dev);
- NO_DWARF := 1
- endif
- else
- ifneq ($(feature-dwarf_getlocations), 1)
- msg := $(warning Old libdw.h, finding variables at given 'perf probe' point will not work, install elfutils-devel/libdw-dev >= 0.157);
- else
- CFLAGS += -DHAVE_DWARF_GETLOCATIONS_SUPPORT
- endif # dwarf_getlocations
- ifneq ($(feature-dwarf_getcfi), 1)
- msg := $(warning Old libdw.h, finding variables at given 'perf probe' point will not work, install elfutils-devel/libdw-dev >= 0.142);
- else
- CFLAGS += -DHAVE_DWARF_CFI_SUPPORT
- endif # dwarf_getcfi
endif # Dwarf support
endif # libelf support
endif # NO_LIBELF
@@ -491,12 +507,15 @@ ifeq ($(feature-libaio), 1)
endif
endif
-ifdef NO_DWARF
+ifdef NO_LIBDW
NO_LIBDW_DWARF_UNWIND := 1
endif
ifeq ($(feature-scandirat), 1)
- CFLAGS += -DHAVE_SCANDIRAT_SUPPORT
+ # Ignore having scandirat with memory sanitizer that lacks an interceptor.
+ ifeq ($(filter s% -fsanitize=memory%,$(EXTRA_CFLAGS),),)
+ CFLAGS += -DHAVE_SCANDIRAT_SUPPORT
+ endif
endif
ifeq ($(feature-sched_getcpu), 1)
@@ -508,13 +527,14 @@ ifeq ($(feature-setns), 1)
$(call detected,CONFIG_SETNS)
endif
+ifeq ($(feature-reallocarray), 0)
+ CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
+endif
+
ifdef CORESIGHT
$(call feature_check,libopencsd)
ifeq ($(feature-libopencsd), 1)
CFLAGS += -DHAVE_CSTRACE_SUPPORT $(LIBOPENCSD_CFLAGS)
- ifeq ($(feature-reallocarray), 0)
- CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
- endif
LDFLAGS += $(LIBOPENCSD_LDFLAGS)
EXTLIBS += $(OPENCSDLIBS)
$(call detected,CONFIG_LIBOPENCSD)
@@ -525,7 +545,7 @@ ifdef CORESIGHT
endif
endif
else
- dummy := $(error Error: No libopencsd library found or the version is not up-to-date. Please install recent libopencsd to build with CORESIGHT=1)
+ $(error Error: No libopencsd library found or the version is not up-to-date. Please install recent libopencsd to build with CORESIGHT=1)
endif
endif
@@ -551,32 +571,35 @@ ifndef NO_LIBELF
ifeq ($(feature-libelf-gelf_getnote), 1)
CFLAGS += -DHAVE_GELF_GETNOTE_SUPPORT
else
- msg := $(warning gelf_getnote() not found on libelf, SDT support disabled);
+ $(warning gelf_getnote() not found on libelf, SDT support disabled)
endif
ifeq ($(feature-libelf-getshdrstrndx), 1)
CFLAGS += -DHAVE_ELF_GETSHDRSTRNDX_SUPPORT
endif
+ ifeq ($(feature-libelf-zstd), 1)
+ ifdef NO_LIBZSTD
+ $(error Error: libzstd is required by libelf, please do not set NO_LIBZSTD)
+ endif
+ endif
+
ifndef NO_LIBDEBUGINFOD
$(call feature_check,libdebuginfod)
ifeq ($(feature-libdebuginfod), 1)
CFLAGS += -DHAVE_DEBUGINFOD_SUPPORT
EXTLIBS += -ldebuginfod
+ else
+ $(warning No elfutils/debuginfod.h found, no debuginfo server support, please install libdebuginfod-dev/elfutils-debuginfod-client-devel or equivalent)
endif
endif
- ifndef NO_DWARF
- ifeq ($(origin PERF_HAVE_DWARF_REGS), undefined)
- msg := $(warning DWARF register mappings have not been defined for architecture $(SRCARCH), DWARF support disabled);
- NO_DWARF := 1
- else
- CFLAGS += -DHAVE_DWARF_SUPPORT $(LIBDW_CFLAGS)
- LDFLAGS += $(LIBDW_LDFLAGS)
- EXTLIBS += ${DWARFLIBS}
- $(call detected,CONFIG_DWARF)
- endif # PERF_HAVE_DWARF_REGS
- endif # NO_DWARF
+ ifndef NO_LIBDW
+ CFLAGS += -DHAVE_LIBDW_SUPPORT $(LIBDW_CFLAGS)
+ LDFLAGS += $(LIBDW_LDFLAGS)
+ EXTLIBS += ${DWARFLIBS}
+ $(call detected,CONFIG_LIBDW)
+ endif # NO_LIBDW
ifndef NO_LIBBPF
ifeq ($(feature-bpf), 1)
@@ -590,17 +613,18 @@ ifndef NO_LIBELF
$(call detected,CONFIG_LIBBPF)
$(call detected,CONFIG_LIBBPF_DYNAMIC)
else
- dummy := $(error Error: No libbpf devel library found or older than v1.0, please install/update libbpf-devel);
+ $(error Error: No libbpf devel library found or older than v1.0, please install/update libbpf-devel)
endif
else
ifeq ($(NO_ZLIB), 1)
- dummy := $(warning Warning: Statically building libbpf not possible as zlib is missing)
+ $(warning Warning: Statically building libbpf not possible as zlib is missing)
NO_LIBBPF := 1
else
# Libbpf will be built as a static library from tools/lib/bpf.
LIBBPF_STATIC := 1
$(call detected,CONFIG_LIBBPF)
CFLAGS += -DHAVE_LIBBPF_SUPPORT
+ LIBBPF_INCLUDE = $(LIBBPF_DIR)/..
endif
endif
endif
@@ -609,7 +633,7 @@ endif # NO_LIBELF
ifndef NO_SDT
ifneq ($(feature-sdt), 1)
- msg := $(warning No sys/sdt.h found, no SDT events are defined, please install systemtap-sdt-devel or systemtap-sdt-dev);
+ $(warning No sys/sdt.h found, no SDT events are defined, please install systemtap-sdt-devel or systemtap-sdt-dev)
NO_SDT := 1;
else
CFLAGS += -DHAVE_SDT_EVENT
@@ -625,7 +649,7 @@ ifdef PERF_HAVE_JITDUMP
endif
ifeq ($(SRCARCH),powerpc)
- ifndef NO_DWARF
+ ifndef NO_LIBDW
CFLAGS += -DHAVE_SKIP_CALLCHAIN_IDX
endif
endif
@@ -633,6 +657,8 @@ endif
ifndef NO_LIBUNWIND
have_libunwind :=
+ $(call feature_check,libunwind)
+
$(call feature_check,libunwind-x86)
ifeq ($(feature-libunwind-x86), 1)
$(call detected,CONFIG_LIBUNWIND_X86)
@@ -651,13 +677,13 @@ ifndef NO_LIBUNWIND
have_libunwind = 1
$(call feature_check,libunwind-debug-frame-aarch64)
ifneq ($(feature-libunwind-debug-frame-aarch64), 1)
- msg := $(warning No debug_frame support found in libunwind-aarch64);
+ $(warning No debug_frame support found in libunwind-aarch64)
CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME_AARCH64
endif
endif
ifneq ($(feature-libunwind), 1)
- msg := $(warning No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR);
+    $(warning No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR, and set LIBUNWIND=1 on the make command line as libunwind support is now opt-in)
NO_LOCAL_LIBUNWIND := 1
else
have_libunwind := 1
@@ -673,7 +699,7 @@ endif
ifndef NO_LIBBPF
ifneq ($(feature-bpf), 1)
- msg := $(warning BPF API too old. Please install recent kernel headers. BPF support in 'perf record' is disabled.)
+ $(warning BPF API too old. Please install recent kernel headers. BPF support in 'perf record' is disabled.)
NO_LIBBPF := 1
endif
endif
@@ -686,28 +712,28 @@ endif
ifeq ($(BUILD_BPF_SKEL),1)
ifeq ($(filter -DHAVE_LIBELF_SUPPORT, $(CFLAGS)),)
- dummy := $(warning Warning: Disabled BPF skeletons as libelf is required by bpftool)
+ $(warning Warning: Disabled BPF skeletons as libelf is required by bpftool)
BUILD_BPF_SKEL := 0
else ifeq ($(filter -DHAVE_ZLIB_SUPPORT, $(CFLAGS)),)
- dummy := $(warning Warning: Disabled BPF skeletons as zlib is required by bpftool)
+ $(warning Warning: Disabled BPF skeletons as zlib is required by bpftool)
BUILD_BPF_SKEL := 0
else ifeq ($(filter -DHAVE_LIBBPF_SUPPORT, $(CFLAGS)),)
- dummy := $(warning Warning: Disabled BPF skeletons as libbpf is required)
+ $(warning Warning: Disabled BPF skeletons as libbpf is required)
BUILD_BPF_SKEL := 0
else ifeq ($(call get-executable,$(CLANG)),)
- dummy := $(warning Warning: Disabled BPF skeletons as clang ($(CLANG)) is missing)
+ $(warning Warning: Disabled BPF skeletons as clang ($(CLANG)) is missing)
BUILD_BPF_SKEL := 0
else
CLANG_VERSION := $(shell $(CLANG) --version | head -1 | sed 's/.*clang version \([[:digit:]]\+.[[:digit:]]\+.[[:digit:]]\+\).*/\1/g')
ifeq ($(call version-lt3,$(CLANG_VERSION),12.0.1),1)
- dummy := $(warning Warning: Disabled BPF skeletons as reliable BTF generation needs at least $(CLANG) version 12.0.1)
+ $(warning Warning: Disabled BPF skeletons as reliable BTF generation needs at least $(CLANG) version 12.0.1)
BUILD_BPF_SKEL := 0
endif
endif
ifeq ($(BUILD_BPF_SKEL),1)
$(call feature_check,clang-bpf-co-re)
ifeq ($(feature-clang-bpf-co-re), 0)
- dummy := $(warning Warning: Disabled BPF skeletons as clang is too old)
+ $(warning Warning: Disabled BPF skeletons as clang is too old)
BUILD_BPF_SKEL := 0
endif
endif
@@ -727,7 +753,7 @@ dwarf-post-unwind-text := BUG
# setup DWARF post unwinder
ifdef NO_LIBUNWIND
ifdef NO_LIBDW_DWARF_UNWIND
- msg := $(warning Disabling post unwind, no support found.);
+ $(warning Disabling post unwind, no support found.)
dwarf-post-unwind := 0
else
dwarf-post-unwind-text := libdw
@@ -745,30 +771,27 @@ endif
ifeq ($(dwarf-post-unwind),1)
CFLAGS += -DHAVE_DWARF_UNWIND_SUPPORT
$(call detected,CONFIG_DWARF_UNWIND)
-else
- NO_DWARF_UNWIND := 1
endif
-ifndef NO_LOCAL_LIBUNWIND
- ifeq ($(SRCARCH),$(filter $(SRCARCH),arm arm64))
- $(call feature_check,libunwind-debug-frame)
- ifneq ($(feature-libunwind-debug-frame), 1)
- msg := $(warning No debug_frame support found in libunwind);
+ifndef NO_LIBUNWIND
+ ifndef NO_LOCAL_LIBUNWIND
+ ifeq ($(SRCARCH),$(filter $(SRCARCH),arm arm64))
+ $(call feature_check,libunwind-debug-frame)
+ ifneq ($(feature-libunwind-debug-frame), 1)
+ $(warning No debug_frame support found in libunwind)
+ CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME
+ endif
+ else
+ # non-ARM has no dwarf_find_debug_frame() function:
CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME
endif
- else
- # non-ARM has no dwarf_find_debug_frame() function:
- CFLAGS += -DNO_LIBUNWIND_DEBUG_FRAME
+ EXTLIBS += $(LIBUNWIND_LIBS)
+ LDFLAGS += $(LIBUNWIND_LIBS)
+ endif
+ ifeq ($(findstring -static,${LDFLAGS}),-static)
+    # gcc -static links libgcc_eh which contains a piece of libunwind
+ LIBUNWIND_LDFLAGS += -Wl,--allow-multiple-definition
endif
- EXTLIBS += $(LIBUNWIND_LIBS)
- LDFLAGS += $(LIBUNWIND_LIBS)
-endif
-ifeq ($(findstring -static,${LDFLAGS}),-static)
- # gcc -static links libgcc_eh which contans piece of libunwind
- LIBUNWIND_LDFLAGS += -Wl,--allow-multiple-definition
-endif
-
-ifndef NO_LIBUNWIND
CFLAGS += -DHAVE_LIBUNWIND_SUPPORT
CFLAGS += $(LIBUNWIND_CFLAGS)
LDFLAGS += $(LIBUNWIND_LDFLAGS)
@@ -776,45 +799,15 @@ ifndef NO_LIBUNWIND
endif
ifneq ($(NO_LIBTRACEEVENT),1)
- ifeq ($(NO_SYSCALL_TABLE),0)
- $(call detected,CONFIG_TRACE)
- else
- ifndef NO_LIBAUDIT
- $(call feature_check,libaudit)
- ifneq ($(feature-libaudit), 1)
- msg := $(warning No libaudit.h found, disables 'trace' tool, please install audit-libs-devel or libaudit-dev);
- NO_LIBAUDIT := 1
- else
- CFLAGS += -DHAVE_LIBAUDIT_SUPPORT
- EXTLIBS += -laudit
- $(call detected,CONFIG_TRACE)
- endif
- endif
- endif
-endif
-
-ifndef NO_LIBCRYPTO
- ifneq ($(feature-libcrypto), 1)
- msg := $(warning No libcrypto.h found, disables jitted code injection, please install openssl-devel or libssl-dev);
- NO_LIBCRYPTO := 1
- else
- CFLAGS += -DHAVE_LIBCRYPTO_SUPPORT
- EXTLIBS += -lcrypto
- $(call detected,CONFIG_CRYPTO)
- endif
+ $(call detected,CONFIG_TRACE)
endif
ifndef NO_SLANG
ifneq ($(feature-libslang), 1)
- ifneq ($(feature-libslang-include-subdir), 1)
- msg := $(warning slang not found, disables TUI support. Please install slang-devel, libslang-dev or libslang2-dev);
- NO_SLANG := 1
- else
- CFLAGS += -DHAVE_SLANG_INCLUDE_SUBDIR
- endif
+ $(warning slang not found, disables TUI support. Please install slang-devel, libslang-dev or libslang2-dev)
+ NO_SLANG := 1
endif
ifndef NO_SLANG
- # Fedora has /usr/include/slang/slang.h, but ubuntu /usr/include/slang.h
CFLAGS += -DHAVE_SLANG_SUPPORT
EXTLIBS += -lslang
$(call detected,CONFIG_SLANG)
@@ -825,7 +818,7 @@ ifdef GTK2
FLAGS_GTK2=$(CFLAGS) $(LDFLAGS) $(EXTLIBS) $(shell $(PKG_CONFIG) --libs --cflags gtk+-2.0 2>/dev/null)
$(call feature_check,gtk2)
ifneq ($(feature-gtk2), 1)
- msg := $(warning GTK2 not found, disables GTK2 support. Please install gtk2-devel or libgtk2.0-dev);
+ $(warning GTK2 not found, disables GTK2 support. Please install gtk2-devel or libgtk2.0-dev)
NO_GTK2 := 1
else
$(call feature_check,gtk2-infobar)
@@ -839,29 +832,23 @@ ifdef GTK2
endif
endif
-ifdef NO_LIBPERL
- CFLAGS += -DNO_LIBPERL
-else
+ifdef LIBPERL
PERL_EMBED_LDOPTS = $(shell perl -MExtUtils::Embed -e ldopts 2>/dev/null)
PERL_EMBED_LDFLAGS = $(call strip-libs,$(PERL_EMBED_LDOPTS))
PERL_EMBED_LIBADD = $(call grep-libs,$(PERL_EMBED_LDOPTS))
PERL_EMBED_CCOPTS = $(shell perl -MExtUtils::Embed -e ccopts 2>/dev/null)
PERL_EMBED_CCOPTS := $(filter-out -specs=%,$(PERL_EMBED_CCOPTS))
- PERL_EMBED_CCOPTS := $(filter-out -flto=auto -ffat-lto-objects, $(PERL_EMBED_CCOPTS))
+ PERL_EMBED_CCOPTS := $(filter-out -flto% -ffat-lto-objects, $(PERL_EMBED_CCOPTS))
PERL_EMBED_LDOPTS := $(filter-out -specs=%,$(PERL_EMBED_LDOPTS))
FLAGS_PERL_EMBED=$(PERL_EMBED_CCOPTS) $(PERL_EMBED_LDOPTS)
+ $(call feature_check,libperl)
ifneq ($(feature-libperl), 1)
- CFLAGS += -DNO_LIBPERL
- NO_LIBPERL := 1
- msg := $(warning Missing perl devel files. Disabling perl scripting support, please install perl-ExtUtils-Embed/libperl-dev);
+ $(error Missing perl devel files. Please install perl-ExtUtils-Embed/libperl-dev)
else
LDFLAGS += $(PERL_EMBED_LDFLAGS)
EXTLIBS += $(PERL_EMBED_LIBADD)
CFLAGS += -DHAVE_LIBPERL_SUPPORT
- ifeq ($(CC_NO_CLANG), 0)
- CFLAGS += -Wno-compound-token-split-by-macro
- endif
$(call detected,CONFIG_LIBPERL)
endif
endif
@@ -869,7 +856,7 @@ endif
ifeq ($(feature-timerfd), 1)
CFLAGS += -DHAVE_TIMERFD_SUPPORT
else
- msg := $(warning No timerfd support. Disables 'perf kvm stat live');
+ $(warning No timerfd support. Disables 'perf kvm stat live')
endif
disable-python = $(eval $(disable-python_code))
@@ -901,12 +888,20 @@ else
PYTHON_SETUPTOOLS_INSTALLED := $(shell $(PYTHON) -c 'import setuptools;' 2> /dev/null && echo "yes" || echo "no")
ifeq ($(PYTHON_SETUPTOOLS_INSTALLED), yes)
PYTHON_EXTENSION_SUFFIX := $(shell $(PYTHON) -c 'from importlib import machinery; print(machinery.EXTENSION_SUFFIXES[0])')
+ ifdef CROSS_COMPILE
+ PYTHON_EXTENSION_SUFFIX := $(subst $(PYTHON_NATIVE),$(shell $(CC) -dumpmachine),$(PYTHON_EXTENSION_SUFFIX))
+ endif
LANG_BINDINGS += $(obj-perf)python/perf$(PYTHON_EXTENSION_SUFFIX)
else
- msg := $(warning Missing python setuptools, the python binding won't be built, please install python3-setuptools or equivalent);
+ $(warning Missing python setuptools, the python binding won't be built, please install python3-setuptools or equivalent)
endif
CFLAGS += -DHAVE_LIBPYTHON_SUPPORT
$(call detected,CONFIG_LIBPYTHON)
+ ifeq ($(filter -fPIC,$(CFLAGS)),)
+ # Building a shared library requires position independent code.
+ CFLAGS += -fPIC
+ CXXFLAGS += -fPIC
+ endif
endif
endif
endif
@@ -931,6 +926,8 @@ ifneq ($(NO_JEVENTS),1)
endif
ifdef BUILD_NONDISTRO
+ $(call feature_check,libbfd)
+
ifeq ($(feature-libbfd), 1)
EXTLIBS += -lbfd -lopcodes
else
@@ -959,10 +956,39 @@ ifdef BUILD_NONDISTRO
CFLAGS += -DHAVE_LIBBFD_SUPPORT
CXXFLAGS += -DHAVE_LIBBFD_SUPPORT
+ $(call detected,CONFIG_LIBBFD)
+
+ $(call feature_check,libbfd-buildid)
+
ifeq ($(feature-libbfd-buildid), 1)
CFLAGS += -DHAVE_LIBBFD_BUILDID_SUPPORT
else
- msg := $(warning Old version of libbfd/binutils things like PE executable profiling will not be available);
+ $(warning Old version of libbfd/binutils things like PE executable profiling will not be available)
+ endif
+
+ ifeq ($(feature-disassembler-four-args), 1)
+ CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE
+ endif
+
+ ifeq ($(feature-disassembler-init-styled), 1)
+ CFLAGS += -DDISASM_INIT_STYLED
+ endif
+endif
+
+ifndef NO_LIBLLVM
+ $(call feature_check,llvm-perf)
+ ifeq ($(feature-llvm-perf), 1)
+ CFLAGS += -DHAVE_LIBLLVM_SUPPORT
+ CFLAGS += $(shell $(LLVM_CONFIG) --cflags)
+ CXXFLAGS += -DHAVE_LIBLLVM_SUPPORT
+ CXXFLAGS += $(shell $(LLVM_CONFIG) --cxxflags)
+ LIBLLVM = $(shell $(LLVM_CONFIG) --libs all) $(shell $(LLVM_CONFIG) --system-libs)
+ EXTLIBS += -L$(shell $(LLVM_CONFIG) --libdir) $(LIBLLVM)
+ EXTLIBS += -lstdc++
+ $(call detected,CONFIG_LIBLLVM)
+ else
+ $(warning No libllvm 13+ found, slower source file resolution, please install llvm-devel/llvm-dev)
+ NO_LIBLLVM := 1
endif
endif
@@ -994,7 +1020,7 @@ ifndef NO_LZMA
EXTLIBS += -llzma
$(call detected,CONFIG_LZMA)
else
- msg := $(warning No liblzma found, disables xz kernel module decompression, please install xz-devel/liblzma-dev);
+ $(warning No liblzma found, disables xz kernel module decompression, please install xz-devel/liblzma-dev)
NO_LZMA := 1
endif
endif
@@ -1007,22 +1033,11 @@ ifndef NO_LIBZSTD
EXTLIBS += -lzstd
$(call detected,CONFIG_ZSTD)
else
- msg := $(warning No libzstd found, disables trace compression, please install libzstd-dev[el] and/or set LIBZSTD_DIR);
+ $(warning No libzstd found, disables trace compression, please install libzstd-dev[el] and/or set LIBZSTD_DIR)
NO_LIBZSTD := 1
endif
endif
-ifndef NO_LIBCAP
- ifeq ($(feature-libcap), 1)
- CFLAGS += -DHAVE_LIBCAP_SUPPORT
- EXTLIBS += -lcap
- $(call detected,CONFIG_LIBCAP)
- else
- msg := $(warning No libcap found, disables capability support, please install libcap-devel/libcap-dev);
- NO_LIBCAP := 1
- endif
-endif
-
ifndef NO_BACKTRACE
ifeq ($(feature-backtrace), 1)
CFLAGS += -DHAVE_BACKTRACE_SUPPORT
@@ -1031,11 +1046,11 @@ endif
ifndef NO_LIBNUMA
ifeq ($(feature-libnuma), 0)
- msg := $(warning No numa.h found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev);
+ $(warning No numa.h found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev)
NO_LIBNUMA := 1
else
ifeq ($(feature-numa_num_possible_cpus), 0)
- msg := $(warning Old numa library found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev >= 2.0.8);
+ $(warning Old numa library found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev >= 2.0.8)
NO_LIBNUMA := 1
else
CFLAGS += -DHAVE_LIBNUMA_SUPPORT
@@ -1049,14 +1064,6 @@ ifdef HAVE_KVM_STAT_SUPPORT
CFLAGS += -DHAVE_KVM_STAT_SUPPORT
endif
-ifeq ($(feature-disassembler-four-args), 1)
- CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE
-endif
-
-ifeq ($(feature-disassembler-init-styled), 1)
- CFLAGS += -DDISASM_INIT_STYLED
-endif
-
ifeq (${IS_64_BIT}, 1)
ifndef NO_PERF_READ_VDSO32
$(call feature_check,compile-32)
@@ -1090,23 +1097,32 @@ ifndef NO_LIBBABELTRACE
EXTLIBS += -lbabeltrace-ctf
$(call detected,CONFIG_LIBBABELTRACE)
else
- msg := $(warning No libbabeltrace found, disables 'perf data' CTF format support, please install libbabeltrace-dev[el]/libbabeltrace-ctf-dev);
+ $(warning No libbabeltrace found, disables 'perf data' CTF format support, please install libbabeltrace-dev[el]/libbabeltrace-ctf-dev)
+ endif
+endif
+
+ifndef NO_CAPSTONE
+ $(call feature_check,libcapstone)
+ ifeq ($(feature-libcapstone), 1)
+ CFLAGS += -DHAVE_LIBCAPSTONE_SUPPORT $(LIBCAPSTONE_CFLAGS)
+    LDFLAGS += $(LIBCAPSTONE_LDFLAGS)
+ EXTLIBS += -lcapstone
+ $(call detected,CONFIG_LIBCAPSTONE)
+ else
+    $(warning No libcapstone found, disables disasm engine support for 'perf script', please install libcapstone-dev/capstone-devel)
endif
endif
ifndef NO_AUXTRACE
ifeq ($(SRCARCH),x86)
ifeq ($(feature-get_cpuid), 0)
- msg := $(warning Your gcc lacks the __get_cpuid() builtin, disables support for auxtrace/Intel PT, please install a newer gcc);
+ $(warning Your gcc lacks the __get_cpuid() builtin, disables support for auxtrace/Intel PT, please install a newer gcc)
NO_AUXTRACE := 1
endif
endif
ifndef NO_AUXTRACE
$(call detected,CONFIG_AUXTRACE)
CFLAGS += -DHAVE_AUXTRACE_SUPPORT
- ifeq ($(feature-reallocarray), 0)
- CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
- endif
endif
endif
@@ -1142,7 +1158,7 @@ ifndef NO_JVMTI
endif
endif # NO_JVMTI_CMLR
else
- $(warning No openjdk development package found, please install JDK package, e.g. openjdk-8-jdk, java-1.8.0-openjdk-devel)
+ $(warning No openjdk development package found, please install JDK package, e.g. openjdk-8-jdk, java-latest-openjdk-devel)
NO_JVMTI := 1
endif
endif
@@ -1155,17 +1171,17 @@ ifndef NO_LIBPFM4
ASCIIDOC_EXTRA = -aHAVE_LIBPFM=1
$(call detected,CONFIG_LIBPFM4)
else
- msg := $(warning libpfm4 not found, disables libpfm4 support. Please install libpfm4-dev);
+ $(warning libpfm4 not found, disables libpfm4 support. Please install libpfm-devel or libpfm4-dev)
endif
endif
# libtraceevent is a recommended dependency picked up from the system.
ifneq ($(NO_LIBTRACEEVENT),1)
- $(call feature_check,libtraceevent)
ifeq ($(feature-libtraceevent), 1)
- CFLAGS += -DHAVE_LIBTRACEEVENT
- EXTLIBS += -ltraceevent
- LIBTRACEEVENT_VERSION := $(shell $(PKG_CONFIG) --modversion libtraceevent)
+ CFLAGS += -DHAVE_LIBTRACEEVENT $(shell $(PKG_CONFIG) --cflags libtraceevent)
+ LDFLAGS += $(shell $(PKG_CONFIG) --libs-only-L libtraceevent)
+ EXTLIBS += $(shell $(PKG_CONFIG) --libs-only-l libtraceevent)
+ LIBTRACEEVENT_VERSION := $(shell $(PKG_CONFIG) --modversion libtraceevent).0.0
LIBTRACEEVENT_VERSION_1 := $(word 1, $(subst ., ,$(LIBTRACEEVENT_VERSION)))
LIBTRACEEVENT_VERSION_2 := $(word 2, $(subst ., ,$(LIBTRACEEVENT_VERSION)))
LIBTRACEEVENT_VERSION_3 := $(word 3, $(subst ., ,$(LIBTRACEEVENT_VERSION)))
@@ -1173,18 +1189,7 @@ ifneq ($(NO_LIBTRACEEVENT),1)
CFLAGS += -DLIBTRACEEVENT_VERSION=$(LIBTRACEEVENT_VERSION_CPP)
$(call detected,CONFIG_LIBTRACEEVENT)
else
- dummy := $(error ERROR: libtraceevent is missing. Please install libtraceevent-dev/libtraceevent-devel or build with NO_LIBTRACEEVENT=1)
- endif
-
- $(call feature_check,libtracefs)
- ifeq ($(feature-libtracefs), 1)
- EXTLIBS += -ltracefs
- LIBTRACEFS_VERSION := $(shell $(PKG_CONFIG) --modversion libtracefs)
- LIBTRACEFS_VERSION_1 := $(word 1, $(subst ., ,$(LIBTRACEFS_VERSION)))
- LIBTRACEFS_VERSION_2 := $(word 2, $(subst ., ,$(LIBTRACEFS_VERSION)))
- LIBTRACEFS_VERSION_3 := $(word 3, $(subst ., ,$(LIBTRACEFS_VERSION)))
- LIBTRACEFS_VERSION_CPP := $(shell expr $(LIBTRACEFS_VERSION_1) \* 255 \* 255 + $(LIBTRACEFS_VERSION_2) \* 255 + $(LIBTRACEFS_VERSION_3))
- CFLAGS += -DLIBTRACEFS_VERSION=$(LIBTRACEFS_VERSION_CPP)
+ $(error ERROR: libtraceevent is missing. Please install libtraceevent-dev/libtraceevent-devel and/or set LIBTRACEEVENT_DIR or build with NO_LIBTRACEEVENT=1)
endif
endif
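
The base-255 packing used for LIBTRACEEVENT_VERSION_CPP above (and formerly
for libtracefs) maps cleanly to C; a hypothetical helper, assuming each
version component stays below 255:

    /* Pack major.minor.patch the same way the Makefile does:
     * v1 * 255 * 255 + v2 * 255 + v3. */
    static unsigned int version_cpp(unsigned int v1, unsigned int v2, unsigned int v3)
    {
            return v1 * 255 * 255 + v2 * 255 + v3;
    }

    /* Example: guard code that needs libtraceevent >= 1.5.0, using the
     * LIBTRACEEVENT_VERSION macro defined via CFLAGS above:
     * #if LIBTRACEEVENT_VERSION >= (1 * 255 * 255 + 5 * 255 + 0) ... #endif */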
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index f8774a9b1377..47c906b807ef 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -17,7 +17,7 @@ include ../scripts/utilities.mak
#
# Define CROSS_COMPILE as prefix name of compiler if you want cross-builds.
#
-# Define NO_LIBPERL to disable perl script extension.
+# Define LIBPERL to enable perl script extension.
#
# Define NO_LIBPYTHON to disable python script extension.
#
@@ -40,7 +40,7 @@ include ../scripts/utilities.mak
#
# Define EXTRA_PERFLIBS to pass extra libraries to PERFLIBS.
#
-# Define NO_DWARF if you do not want debug-info analysis feature at all.
+# Define NO_LIBDW if you do not want debug-info analysis feature at all.
#
# Define WERROR=0 to disable treating any warnings as errors.
#
@@ -52,20 +52,15 @@ include ../scripts/utilities.mak
#
# Define NO_LIBELF if you do not want libelf dependency (e.g. cross-builds)
#
-# Define NO_LIBUNWIND if you do not want libunwind dependency for dwarf
+# Define LIBUNWIND to enable the libunwind dependency for dwarf
# backtrace post unwind.
#
# Define NO_BACKTRACE if you do not want stack backtrace debug feature
#
# Define NO_LIBNUMA if you do not want numa perf benchmark
#
-# Define NO_LIBAUDIT if you do not want libaudit support
-#
# Define NO_LIBBIONIC if you do not want bionic support
#
-# Define NO_LIBCRYPTO if you do not want libcrypto (openssl) support
-# used for generating build-ids for ELFs generated by jitdump.
-#
# Define NO_LIBDW_DWARF_UNWIND if you do not want libdw support
# for dwarf backtrace post unwind.
#
@@ -84,6 +79,9 @@ include ../scripts/utilities.mak
# Define NO_LIBBABELTRACE if you do not want libbabeltrace support
# for CTF data format.
#
+# Define NO_CAPSTONE if you do not want libcapstone support
+# for disasm engine.
+#
# Define NO_LZMA if you do not want to support compressed (xz) kernel modules
#
# Define NO_AUXTRACE if you do not want AUX area tracing support
@@ -116,10 +114,6 @@ include ../scripts/utilities.mak
#
# Define LIBBPF_DYNAMIC to enable libbpf dynamic linking.
#
-# Define NO_SYSCALL_TABLE=1 to disable the use of syscall id to/from name tables
-# generated from the kernel .tbl or unistd.h files and use, if available, libaudit
-# for doing the conversions to/from strings/id.
-#
# Define NO_LIBPFM4 to disable libpfm4 events extension.
#
# Define NO_LIBDEBUGINFOD if you do not want support debuginfod
@@ -160,12 +154,8 @@ ifneq ($(OUTPUT),)
# for flex/bison parsers.
VPATH += $(OUTPUT)
export VPATH
-endif
-
-ifeq ($(V),1)
- Q =
-else
- Q = @
+# create symlink to the original source
+SOURCE := $(shell ln -sfn $(srctree)/tools/perf $(OUTPUT)/source)
endif
# Do not use make's built-in rules
@@ -190,7 +180,32 @@ HOSTLD ?= ld
HOSTAR ?= ar
CLANG ?= clang
-PKG_CONFIG = $(CROSS_COMPILE)pkg-config
+# Some distros provide the command $(CROSS_COMPILE)pkg-config for
+# searching packages installed with Multiarch. Use it for cross
+# compilation if it exists.
+ifneq (, $(shell which $(CROSS_COMPILE)pkg-config))
+ PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config
+else
+ PKG_CONFIG ?= pkg-config
+
+  # PKG_CONFIG_PATH or PKG_CONFIG_LIBDIR, alongside PKG_CONFIG_SYSROOT_DIR
+  # for a modified system root, is required for cross compilation.
+  # If these PKG_CONFIG environment variables are not set, Multiarch library
+  # paths are used instead.
+ ifdef CROSS_COMPILE
+ ifeq ($(PKG_CONFIG_LIBDIR)$(PKG_CONFIG_PATH)$(PKG_CONFIG_SYSROOT_DIR),)
+ CROSS_ARCH = $(notdir $(CROSS_COMPILE:%-=%))
+ PKG_CONFIG_LIBDIR := /usr/local/$(CROSS_ARCH)/lib/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/local/lib/$(CROSS_ARCH)/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/lib/$(CROSS_ARCH)/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/local/share/pkgconfig/
+ PKG_CONFIG_LIBDIR := $(PKG_CONFIG_LIBDIR):/usr/share/pkgconfig/
+ export PKG_CONFIG_LIBDIR
+ $(warning Missing PKG_CONFIG_LIBDIR, PKG_CONFIG_PATH and PKG_CONFIG_SYSROOT_DIR for cross compilation,)
+ $(warning set PKG_CONFIG_LIBDIR for using Multiarch libs.)
+ endif
+ endif
+endif
RM = rm -f
LN = ln -f
@@ -211,6 +226,7 @@ NON_CONFIG_TARGETS := clean python-clean TAGS tags cscope help
ifdef MAKECMDGOALS
ifeq ($(filter-out $(NON_CONFIG_TARGETS),$(MAKECMDGOALS)),)
+ VMLINUX_H=$(src-perf)/util/bpf_skel/vmlinux/vmlinux.h
config := 0
endif
endif
@@ -229,7 +245,7 @@ else
force_fixdep := $(config)
endif
-# Runs shellcheck on perf test shell scripts
+# Runs shellcheck on perf shell scripts
ifeq ($(NO_SHELLCHECK),1)
SHELLCHECK :=
else
@@ -243,11 +259,23 @@ ifneq ($(SHELLCHECK),)
ifeq ($(shell expr $(shell $(SHELLCHECK) --version | grep version: | \
sed -e 's/.\+ \([0-9]\+\).\([0-9]\+\).\([0-9]\+\)/\1\2\3/g') \< 060), 1)
SHELLCHECK :=
+ else
+ SHELLCHECK := $(SHELLCHECK) -s bash -a -S warning
endif
endif
+# Runs mypy on perf python files
+ifeq ($(MYPY),1)
+ MYPY := $(shell which mypy 2> /dev/null)
+endif
+
+# Runs pylint on perf python files
+ifeq ($(PYLINT),1)
+ PYLINT := $(shell which pylint 2> /dev/null)
+endif
+
export srctree OUTPUT RM CC CXX LD AR CFLAGS CXXFLAGS V BISON FLEX AWK
-export HOSTCC HOSTLD HOSTAR HOSTCFLAGS SHELLCHECK
+export HOSTCC HOSTLD HOSTAR HOSTCFLAGS SHELLCHECK MYPY PYLINT
include $(srctree)/tools/build/Makefile.include
@@ -377,14 +405,6 @@ python-clean := $(call QUIET_CLEAN, python) $(RM) -r $(PYTHON_EXTBUILD) $(OUTPUT
# Use the detected configuration
-include $(OUTPUT).config-detected
-ifeq ($(CONFIG_LIBTRACEEVENT),y)
- PYTHON_EXT_SRCS := $(shell grep -v ^\# util/python-ext-sources)
-else
- PYTHON_EXT_SRCS := $(shell grep -v ^\#\\\|util/trace-event.c util/python-ext-sources)
-endif
-
-PYTHON_EXT_DEPS := util/python-ext-sources util/setup.py $(LIBAPI)
-
SCRIPTS = $(patsubst %.sh,%,$(SCRIPT_SH))
PROGRAMS += $(OUTPUT)perf
@@ -422,10 +442,27 @@ endif
export PERL_PATH
+LIBPERF_BENCH_IN := $(OUTPUT)perf-bench-in.o
+LIBPERF_BENCH := $(OUTPUT)libperf-bench.a
+
+LIBPERF_TEST_IN := $(OUTPUT)perf-test-in.o
+LIBPERF_TEST := $(OUTPUT)libperf-test.a
+
+LIBPERF_UI_IN := $(OUTPUT)perf-ui-in.o
+LIBPERF_UI := $(OUTPUT)libperf-ui.a
+
+LIBPERF_UTIL_IN := $(OUTPUT)perf-util-in.o
+LIBPERF_UTIL := $(OUTPUT)libperf-util.a
+
+LIBPMU_EVENTS_IN := $(OUTPUT)pmu-events/pmu-events-in.o
+LIBPMU_EVENTS := $(OUTPUT)libpmu-events.a
+
PERFLIBS = $(LIBAPI) $(LIBPERF) $(LIBSUBCMD) $(LIBSYMBOL)
ifdef LIBBPF_STATIC
PERFLIBS += $(LIBBPF)
endif
+PERFLIBS += $(LIBPERF_BENCH) $(LIBPERF_TEST) $(LIBPERF_UI) $(LIBPERF_UTIL)
+PERFLIBS += $(LIBPMU_EVENTS)
# We choose to avoid "if .. else if .. else .. endif endif"
# because maintaining the nesting to match is a pain. If
@@ -447,6 +484,9 @@ endif
EXTLIBS := $(call filter-out,$(EXCLUDE_EXTLIBS),$(EXTLIBS))
LIBS = -Wl,--whole-archive $(PERFLIBS) $(EXTRA_PERFLIBS) -Wl,--no-whole-archive -Wl,--start-group $(EXTLIBS) -Wl,--end-group
+PERFLIBS_PY := $(call filter-out,$(LIBPERF_BENCH) $(LIBPERF_TEST),$(PERFLIBS))
+LIBS_PY = -Wl,--whole-archive $(PERFLIBS_PY) $(EXTRA_PERFLIBS) -Wl,--no-whole-archive -Wl,--start-group $(EXTLIBS) -Wl,--end-group
+
export INSTALL SHELL_PATH
### Build rules
@@ -455,35 +495,61 @@ SHELL = $(SHELL_PATH)
arm64_gen_sysreg_dir := $(srctree)/tools/arch/arm64/tools
ifneq ($(OUTPUT),)
- arm64_gen_sysreg_outdir := $(OUTPUT)
+ arm64_gen_sysreg_outdir := $(abspath $(OUTPUT))
else
arm64_gen_sysreg_outdir := $(CURDIR)
endif
arm64-sysreg-defs: FORCE
- $(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) O=$(arm64_gen_sysreg_outdir)
+ $(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) O=$(arm64_gen_sysreg_outdir) \
+ prefix= subdir=
arm64-sysreg-defs-clean:
$(call QUIET_CLEAN,arm64-sysreg-defs)
$(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) O=$(arm64_gen_sysreg_outdir) \
- clean > /dev/null
+ prefix= subdir= clean > /dev/null
beauty_linux_dir := $(srctree)/tools/perf/trace/beauty/include/linux/
+beauty_uapi_linux_dir := $(srctree)/tools/perf/trace/beauty/include/uapi/linux/
+beauty_uapi_sound_dir := $(srctree)/tools/perf/trace/beauty/include/uapi/sound/
+beauty_arch_asm_dir := $(srctree)/tools/perf/trace/beauty/arch/x86/include/asm/
+beauty_x86_arch_asm_uapi_dir := $(srctree)/tools/perf/trace/beauty/arch/x86/include/uapi/asm/
+
linux_uapi_dir := $(srctree)/tools/include/uapi/linux
asm_generic_uapi_dir := $(srctree)/tools/include/uapi/asm-generic
arch_asm_uapi_dir := $(srctree)/tools/arch/$(SRCARCH)/include/uapi/asm/
-x86_arch_asm_uapi_dir := $(srctree)/tools/arch/x86/include/uapi/asm/
x86_arch_asm_dir := $(srctree)/tools/arch/x86/include/asm/
beauty_outdir := $(OUTPUT)trace/beauty/generated
beauty_ioctl_outdir := $(beauty_outdir)/ioctl
+
+# Create output directory if not already present
+$(shell [ -d '$(beauty_ioctl_outdir)' ] || mkdir -p '$(beauty_ioctl_outdir)')
+
+syscall_array := $(beauty_outdir)/syscalltbl.c
+syscall_tbl := $(srctree)/tools/perf/trace/beauty/syscalltbl.sh
+syscall_tbl_data := $(srctree)/tools/scripts/syscall.tbl \
+ $(wildcard $(srctree)/tools/perf/arch/*/entry/syscalls/syscall*.tbl)
+
+$(syscall_array): $(syscall_tbl) $(syscall_tbl_data)
+ $(Q)$(SHELL) '$(syscall_tbl)' $(srctree)/tools $@
+
+fs_at_flags_array := $(beauty_outdir)/fs_at_flags_array.c
+fs_at_flags_tbl := $(srctree)/tools/perf/trace/beauty/fs_at_flags.sh
+
+$(fs_at_flags_array): $(beauty_uapi_linux_dir)/fcntl.h $(fs_at_flags_tbl)
+ $(Q)$(SHELL) '$(fs_at_flags_tbl)' $(beauty_uapi_linux_dir) > $@
+
+clone_flags_array := $(beauty_outdir)/clone_flags_array.c
+clone_flags_tbl := $(srctree)/tools/perf/trace/beauty/clone.sh
+
+$(clone_flags_array): $(beauty_uapi_linux_dir)/sched.h $(clone_flags_tbl)
+ $(Q)$(SHELL) '$(clone_flags_tbl)' $(beauty_uapi_linux_dir) > $@
+
drm_ioctl_array := $(beauty_ioctl_outdir)/drm_ioctl_array.c
drm_hdr_dir := $(srctree)/tools/include/uapi/drm
drm_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/drm_ioctl.sh
-# Create output directory if not already present
-_dummy := $(shell [ -d '$(beauty_ioctl_outdir)' ] || mkdir -p '$(beauty_ioctl_outdir)')
-
$(drm_ioctl_array): $(drm_hdr_dir)/drm.h $(drm_hdr_dir)/i915_drm.h $(drm_ioctl_tbl)
$(Q)$(SHELL) '$(drm_ioctl_tbl)' $(drm_hdr_dir) > $@
@@ -496,20 +562,20 @@ $(fadvise_advice_array): $(linux_uapi_dir)/in.h $(fadvise_advice_tbl)
fsmount_arrays := $(beauty_outdir)/fsmount_arrays.c
fsmount_tbls := $(srctree)/tools/perf/trace/beauty/fsmount.sh
-$(fsmount_arrays): $(linux_uapi_dir)/fs.h $(fsmount_tbls)
- $(Q)$(SHELL) '$(fsmount_tbls)' $(linux_uapi_dir) > $@
+$(fsmount_arrays): $(beauty_uapi_linux_dir)/mount.h $(fsmount_tbls)
+ $(Q)$(SHELL) '$(fsmount_tbls)' $(beauty_uapi_linux_dir) > $@
fspick_arrays := $(beauty_outdir)/fspick_arrays.c
fspick_tbls := $(srctree)/tools/perf/trace/beauty/fspick.sh
-$(fspick_arrays): $(linux_uapi_dir)/fs.h $(fspick_tbls)
- $(Q)$(SHELL) '$(fspick_tbls)' $(linux_uapi_dir) > $@
+$(fspick_arrays): $(beauty_uapi_linux_dir)/mount.h $(fspick_tbls)
+ $(Q)$(SHELL) '$(fspick_tbls)' $(beauty_uapi_linux_dir) > $@
fsconfig_arrays := $(beauty_outdir)/fsconfig_arrays.c
fsconfig_tbls := $(srctree)/tools/perf/trace/beauty/fsconfig.sh
-$(fsconfig_arrays): $(linux_uapi_dir)/fs.h $(fsconfig_tbls)
- $(Q)$(SHELL) '$(fsconfig_tbls)' $(linux_uapi_dir) > $@
+$(fsconfig_arrays): $(beauty_uapi_linux_dir)/mount.h $(fsconfig_tbls)
+ $(Q)$(SHELL) '$(fsconfig_tbls)' $(beauty_uapi_linux_dir) > $@
pkey_alloc_access_rights_array := $(beauty_outdir)/pkey_alloc_access_rights_array.c
asm_generic_hdr_dir := $(srctree)/tools/include/uapi/asm-generic/
@@ -522,15 +588,15 @@ sndrv_ctl_ioctl_array := $(beauty_ioctl_outdir)/sndrv_ctl_ioctl_array.c
sndrv_ctl_hdr_dir := $(srctree)/tools/include/uapi/sound
sndrv_ctl_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/sndrv_ctl_ioctl.sh
-$(sndrv_ctl_ioctl_array): $(sndrv_ctl_hdr_dir)/asound.h $(sndrv_ctl_ioctl_tbl)
- $(Q)$(SHELL) '$(sndrv_ctl_ioctl_tbl)' $(sndrv_ctl_hdr_dir) > $@
+$(sndrv_ctl_ioctl_array): $(beauty_uapi_sound_dir)/asound.h $(sndrv_ctl_ioctl_tbl)
+ $(Q)$(SHELL) '$(sndrv_ctl_ioctl_tbl)' $(beauty_uapi_sound_dir) > $@
sndrv_pcm_ioctl_array := $(beauty_ioctl_outdir)/sndrv_pcm_ioctl_array.c
sndrv_pcm_hdr_dir := $(srctree)/tools/include/uapi/sound
sndrv_pcm_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/sndrv_pcm_ioctl.sh
-$(sndrv_pcm_ioctl_array): $(sndrv_pcm_hdr_dir)/asound.h $(sndrv_pcm_ioctl_tbl)
- $(Q)$(SHELL) '$(sndrv_pcm_ioctl_tbl)' $(sndrv_pcm_hdr_dir) > $@
+$(sndrv_pcm_ioctl_array): $(beauty_uapi_sound_dir)/asound.h $(sndrv_pcm_ioctl_tbl)
+ $(Q)$(SHELL) '$(sndrv_pcm_ioctl_tbl)' $(beauty_uapi_sound_dir) > $@
kcmp_type_array := $(beauty_outdir)/kcmp_type_array.c
kcmp_hdr_dir := $(srctree)/tools/include/uapi/linux/
@@ -559,11 +625,10 @@ $(sockaddr_arrays): $(beauty_linux_dir)/socket.h $(sockaddr_tbl)
$(Q)$(SHELL) '$(sockaddr_tbl)' $(beauty_linux_dir) > $@
vhost_virtio_ioctl_array := $(beauty_ioctl_outdir)/vhost_virtio_ioctl_array.c
-vhost_virtio_hdr_dir := $(srctree)/tools/include/uapi/linux
vhost_virtio_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/vhost_virtio_ioctl.sh
-$(vhost_virtio_ioctl_array): $(vhost_virtio_hdr_dir)/vhost.h $(vhost_virtio_ioctl_tbl)
- $(Q)$(SHELL) '$(vhost_virtio_ioctl_tbl)' $(vhost_virtio_hdr_dir) > $@
+$(vhost_virtio_ioctl_array): $(beauty_uapi_linux_dir)/vhost.h $(vhost_virtio_ioctl_tbl)
+ $(Q)$(SHELL) '$(vhost_virtio_ioctl_tbl)' $(beauty_uapi_linux_dir) > $@
perf_ioctl_array := $(beauty_ioctl_outdir)/perf_ioctl_array.c
perf_hdr_dir := $(srctree)/tools/include/uapi/linux
@@ -594,15 +659,14 @@ $(mremap_flags_array): $(linux_uapi_dir)/mman.h $(mremap_flags_tbl)
mount_flags_array := $(beauty_outdir)/mount_flags_array.c
mount_flags_tbl := $(srctree)/tools/perf/trace/beauty/mount_flags.sh
-$(mount_flags_array): $(linux_uapi_dir)/fs.h $(mount_flags_tbl)
- $(Q)$(SHELL) '$(mount_flags_tbl)' $(linux_uapi_dir) > $@
+$(mount_flags_array): $(beauty_uapi_linux_dir)/mount.h $(mount_flags_tbl)
+ $(Q)$(SHELL) '$(mount_flags_tbl)' $(beauty_uapi_linux_dir) > $@
move_mount_flags_array := $(beauty_outdir)/move_mount_flags_array.c
move_mount_flags_tbl := $(srctree)/tools/perf/trace/beauty/move_mount_flags.sh
-$(move_mount_flags_array): $(linux_uapi_dir)/fs.h $(move_mount_flags_tbl)
- $(Q)$(SHELL) '$(move_mount_flags_tbl)' $(linux_uapi_dir) > $@
-
+$(move_mount_flags_array): $(beauty_uapi_linux_dir)/mount.h $(move_mount_flags_tbl)
+ $(Q)$(SHELL) '$(move_mount_flags_tbl)' $(beauty_uapi_linux_dir) > $@
mmap_prot_array := $(beauty_outdir)/mmap_prot_array.c
mmap_prot_tbl := $(srctree)/tools/perf/trace/beauty/mmap_prot.sh
@@ -611,29 +675,28 @@ $(mmap_prot_array): $(asm_generic_uapi_dir)/mman.h $(asm_generic_uapi_dir)/mman-
$(Q)$(SHELL) '$(mmap_prot_tbl)' $(asm_generic_uapi_dir) $(arch_asm_uapi_dir) > $@
prctl_option_array := $(beauty_outdir)/prctl_option_array.c
-prctl_hdr_dir := $(srctree)/tools/include/uapi/linux/
prctl_option_tbl := $(srctree)/tools/perf/trace/beauty/prctl_option.sh
-$(prctl_option_array): $(prctl_hdr_dir)/prctl.h $(prctl_option_tbl)
- $(Q)$(SHELL) '$(prctl_option_tbl)' $(prctl_hdr_dir) > $@
+$(prctl_option_array): $(beauty_uapi_linux_dir)/prctl.h $(prctl_option_tbl)
+ $(Q)$(SHELL) '$(prctl_option_tbl)' $(beauty_uapi_linux_dir) > $@
usbdevfs_ioctl_array := $(beauty_ioctl_outdir)/usbdevfs_ioctl_array.c
usbdevfs_ioctl_tbl := $(srctree)/tools/perf/trace/beauty/usbdevfs_ioctl.sh
-$(usbdevfs_ioctl_array): $(linux_uapi_dir)/usbdevice_fs.h $(usbdevfs_ioctl_tbl)
- $(Q)$(SHELL) '$(usbdevfs_ioctl_tbl)' $(linux_uapi_dir) > $@
+$(usbdevfs_ioctl_array): $(beauty_uapi_linux_dir)/usbdevice_fs.h $(usbdevfs_ioctl_tbl)
+ $(Q)$(SHELL) '$(usbdevfs_ioctl_tbl)' $(beauty_uapi_linux_dir) > $@
x86_arch_prctl_code_array := $(beauty_outdir)/x86_arch_prctl_code_array.c
x86_arch_prctl_code_tbl := $(srctree)/tools/perf/trace/beauty/x86_arch_prctl.sh
-$(x86_arch_prctl_code_array): $(x86_arch_asm_uapi_dir)/prctl.h $(x86_arch_prctl_code_tbl)
- $(Q)$(SHELL) '$(x86_arch_prctl_code_tbl)' $(x86_arch_asm_uapi_dir) > $@
+$(x86_arch_prctl_code_array): $(beauty_x86_arch_asm_uapi_dir)/prctl.h $(x86_arch_prctl_code_tbl)
+ $(Q)$(SHELL) '$(x86_arch_prctl_code_tbl)' $(beauty_x86_arch_asm_uapi_dir) > $@
x86_arch_irq_vectors_array := $(beauty_outdir)/x86_arch_irq_vectors_array.c
x86_arch_irq_vectors_tbl := $(srctree)/tools/perf/trace/beauty/tracepoints/x86_irq_vectors.sh
-$(x86_arch_irq_vectors_array): $(x86_arch_asm_dir)/irq_vectors.h $(x86_arch_irq_vectors_tbl)
- $(Q)$(SHELL) '$(x86_arch_irq_vectors_tbl)' $(x86_arch_asm_dir) > $@
+$(x86_arch_irq_vectors_array): $(beauty_arch_asm_dir)/irq_vectors.h $(x86_arch_irq_vectors_tbl)
+ $(Q)$(SHELL) '$(x86_arch_irq_vectors_tbl)' $(beauty_arch_asm_dir) > $@
x86_arch_MSRs_array := $(beauty_outdir)/x86_arch_MSRs_array.c
x86_arch_MSRs_tbl := $(srctree)/tools/perf/trace/beauty/tracepoints/x86_msr.sh
@@ -644,8 +707,8 @@ $(x86_arch_MSRs_array): $(x86_arch_asm_dir)/msr-index.h $(x86_arch_MSRs_tbl)
rename_flags_array := $(beauty_outdir)/rename_flags_array.c
rename_flags_tbl := $(srctree)/tools/perf/trace/beauty/rename_flags.sh
-$(rename_flags_array): $(linux_uapi_dir)/fs.h $(rename_flags_tbl)
- $(Q)$(SHELL) '$(rename_flags_tbl)' $(linux_uapi_dir) > $@
+$(rename_flags_array): $(beauty_uapi_linux_dir)/fs.h $(rename_flags_tbl)
+ $(Q)$(SHELL) '$(rename_flags_tbl)' $(beauty_uapi_linux_dir) > $@
arch_errno_name_array := $(beauty_outdir)/arch_errno_name_array.c
arch_errno_hdr_dir := $(srctree)/tools
@@ -654,11 +717,17 @@ arch_errno_tbl := $(srctree)/tools/perf/trace/beauty/arch_errno_names.sh
$(arch_errno_name_array): $(arch_errno_tbl)
$(Q)$(SHELL) '$(arch_errno_tbl)' '$(patsubst -%,,$(CC))' $(arch_errno_hdr_dir) > $@
+statx_mask_array := $(beauty_outdir)/statx_mask_array.c
+statx_mask_tbl := $(srctree)/tools/perf/trace/beauty/statx_mask.sh
+
+$(statx_mask_array): $(beauty_uapi_linux_dir)/stat.h $(statx_mask_tbl)
+ $(Q)$(SHELL) '$(statx_mask_tbl)' $(beauty_uapi_linux_dir) > $@
+
sync_file_range_arrays := $(beauty_outdir)/sync_file_range_arrays.c
sync_file_range_tbls := $(srctree)/tools/perf/trace/beauty/sync_file_range.sh
-$(sync_file_range_arrays): $(linux_uapi_dir)/fs.h $(sync_file_range_tbls)
- $(Q)$(SHELL) '$(sync_file_range_tbls)' $(linux_uapi_dir) > $@
+$(sync_file_range_arrays): $(beauty_uapi_linux_dir)/fs.h $(sync_file_range_tbls)
+ $(Q)$(SHELL) '$(sync_file_range_tbls)' $(beauty_uapi_linux_dir) > $@
TESTS_CORESIGHT_DIR := $(srctree)/tools/perf/tests/shell/coresight
@@ -672,11 +741,11 @@ tests-coresight-targets-clean:
all: shell_compatibility_test $(ALL_PROGRAMS) $(LANG_BINDINGS) $(OTHER_PROGRAMS) tests-coresight-targets
# Create python binding output directory if not already present
-_dummy := $(shell [ -d '$(OUTPUT)python' ] || mkdir -p '$(OUTPUT)python')
+$(shell [ -d '$(OUTPUT)python' ] || mkdir -p '$(OUTPUT)python')
-$(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS) $(LIBPERF) $(LIBSUBCMD)
+$(OUTPUT)python/perf$(PYTHON_EXTENSION_SUFFIX): util/python.c util/setup.py $(PERFLIBS_PY)
$(QUIET_GEN)LDSHARED="$(CC) -pthread -shared" \
- CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS)' \
+ CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS) $(LIBS_PY)' \
$(PYTHON_WORD) util/setup.py \
--quiet build_ext; \
cp $(PYTHON_EXTBUILD_LIB)perf*.so $(OUTPUT)python/
@@ -693,8 +762,6 @@ strip: $(PROGRAMS) $(OUTPUT)perf
$(STRIP) $(STRIP_OPTS) $(PROGRAMS) $(OUTPUT)perf
PERF_IN := $(OUTPUT)perf-in.o
-
-PMU_EVENTS_IN := $(OUTPUT)pmu-events/pmu-events-in.o
export NO_JEVENTS
build := -f $(srctree)/tools/build/Makefile.build dir=. obj
@@ -702,12 +769,39 @@ build := -f $(srctree)/tools/build/Makefile.build dir=. obj
$(PERF_IN): prepare FORCE
$(Q)$(MAKE) $(build)=perf
-$(PMU_EVENTS_IN): FORCE prepare
+$(LIBPMU_EVENTS_IN): FORCE prepare
$(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=pmu-events obj=pmu-events
-$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN) $(PMU_EVENTS_IN)
+$(LIBPMU_EVENTS): $(LIBPMU_EVENTS_IN)
+ $(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $<
+
+$(LIBPERF_BENCH_IN): FORCE prepare
+ $(Q)$(MAKE) $(build)=perf-bench
+
+$(LIBPERF_BENCH): $(LIBPERF_BENCH_IN)
+ $(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $<
+
+$(LIBPERF_TEST_IN): FORCE prepare
+ $(Q)$(MAKE) $(build)=perf-test
+
+$(LIBPERF_TEST): $(LIBPERF_TEST_IN)
+ $(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $<
+
+$(LIBPERF_UI_IN): FORCE prepare
+ $(Q)$(MAKE) $(build)=perf-ui
+
+$(LIBPERF_UI): $(LIBPERF_UI_IN)
+ $(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $<
+
+$(LIBPERF_UTIL_IN): FORCE prepare
+ $(Q)$(MAKE) $(build)=perf-util
+
+$(LIBPERF_UTIL): $(LIBPERF_UTIL_IN)
+ $(QUIET_AR)$(RM) $@ && $(AR) rcs $@ $<
+
+$(OUTPUT)perf: $(PERFLIBS) $(PERF_IN)
$(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) \
- $(PERF_IN) $(PMU_EVENTS_IN) $(LIBS) -o $@
+ $(PERF_IN) $(LIBS) -o $@
$(GTK_IN): FORCE prepare
$(Q)$(MAKE) $(build)=gtk
@@ -759,6 +853,9 @@ build-dir = $(or $(__build-dir),.)
prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders \
arm64-sysreg-defs \
+ $(syscall_array) \
+ $(fs_at_flags_array) \
+ $(clone_flags_array) \
$(drm_ioctl_array) \
$(fadvise_advice_array) \
$(fsconfig_arrays) \
@@ -786,6 +883,7 @@ prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders \
$(x86_arch_prctl_code_array) \
$(rename_flags_array) \
$(arch_errno_name_array) \
+ $(statx_mask_array) \
$(sync_file_range_arrays) \
$(LIBAPI) \
$(LIBPERF) \
@@ -843,7 +941,7 @@ $(OUTPUT)dlfilters/%.so: $(OUTPUT)dlfilters/%.o
ifndef NO_JVMTI
LIBJVMTI_IN := $(OUTPUT)jvmti/jvmti-in.o
-$(LIBJVMTI_IN): FORCE
+$(LIBJVMTI_IN): prepare FORCE
$(Q)$(MAKE) -f $(srctree)/tools/build/Makefile.build dir=jvmti obj=jvmti
$(OUTPUT)$(LIBJVMTI): $(LIBJVMTI_IN)
@@ -864,7 +962,7 @@ $(LIBAPI)-clean:
$(LIBBPF): FORCE | $(LIBBPF_OUTPUT)
$(Q)$(MAKE) -C $(LIBBPF_DIR) FEATURES_DUMP=$(FEATURE_DUMP_EXPORT) \
O= OUTPUT=$(LIBBPF_OUTPUT)/ DESTDIR=$(LIBBPF_DESTDIR) prefix= subdir= \
- $@ install_headers
+ EXTRA_CFLAGS="-fPIC" $@ install_headers
$(LIBBPF)-clean:
$(call QUIET_CLEAN, libbpf)
@@ -1005,12 +1103,7 @@ endif
$(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
$(call QUIET_INSTALL, perf-iostat) \
$(INSTALL) $(OUTPUT)perf-iostat -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
-ifndef NO_LIBAUDIT
- $(call QUIET_INSTALL, strace/groups) \
- $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(STRACE_GROUPS_INSTDIR_SQ)'; \
- $(INSTALL) trace/strace/groups/* -m 644 -t '$(DESTDIR_SQ)$(STRACE_GROUPS_INSTDIR_SQ)'
-endif
-ifndef NO_LIBPERL
+ifdef LIBPERL
$(call QUIET_INSTALL, perl-scripts) \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'; \
$(INSTALL) scripts/perl/Perf-Trace-Util/lib/Perf/Trace/* -m 644 -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'; \
@@ -1039,15 +1132,22 @@ endif
install-tests: all install-gtk
$(call QUIET_INSTALL, tests) \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
- $(INSTALL) tests/attr.py -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
$(INSTALL) tests/pe-file.exe* '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests'; \
- $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
- $(INSTALL) tests/attr/* -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/attr'; \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell'; \
$(INSTALL) tests/shell/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell'; \
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/attr'; \
+ $(INSTALL) tests/shell/attr/* -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/attr'; \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'; \
$(INSTALL) tests/shell/lib/*.sh -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'; \
$(INSTALL) tests/shell/lib/*.py -m 644 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/lib'; \
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/common'; \
+ $(INSTALL) tests/shell/common/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/common'; \
+ $(INSTALL) tests/shell/common/*.pl '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/common'; \
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
+ $(INSTALL) tests/shell/base_probe/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_probe'; \
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+ $(INSTALL) tests/shell/base_report/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
+ $(INSTALL) tests/shell/base_report/*.txt '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/base_report'; \
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight' ; \
$(INSTALL) tests/shell/coresight/*.sh '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/tests/shell/coresight'
$(Q)$(MAKE) -C tests/shell/coresight install-tests
@@ -1061,7 +1161,7 @@ install-python_ext:
# 'make install-doc' should call 'make -C Documentation install'
$(INSTALL_DOC_TARGETS):
- $(Q)$(MAKE) -C $(DOC_DIR) O=$(OUTPUT) $(@:-doc=) ASCIIDOC_EXTRA=$(ASCIIDOC_EXTRA)
+ $(Q)$(MAKE) -C $(DOC_DIR) O=$(OUTPUT) $(@:-doc=) ASCIIDOC_EXTRA=$(ASCIIDOC_EXTRA) subdir=
### Cleaning rules
@@ -1075,7 +1175,7 @@ SKELETONS += $(SKEL_OUT)/bperf_leader.skel.h $(SKEL_OUT)/bperf_follower.skel.h
SKELETONS += $(SKEL_OUT)/bperf_cgroup.skel.h $(SKEL_OUT)/func_latency.skel.h
SKELETONS += $(SKEL_OUT)/off_cpu.skel.h $(SKEL_OUT)/lock_contention.skel.h
SKELETONS += $(SKEL_OUT)/kwork_trace.skel.h $(SKEL_OUT)/sample_filter.skel.h
-SKELETONS += $(SKEL_OUT)/kwork_top.skel.h
+SKELETONS += $(SKEL_OUT)/kwork_top.skel.h $(SKEL_OUT)/syscall_summary.skel.h
SKELETONS += $(SKEL_OUT)/bench_uprobe.skel.h
SKELETONS += $(SKEL_OUT)/augmented_raw_syscalls.skel.h
@@ -1142,15 +1242,18 @@ ifeq ($(VMLINUX_H),)
endif
endif
-$(SKEL_OUT)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
+$(SKEL_OUT)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL) $(VMLINUX_H)
ifeq ($(VMLINUX_H),)
$(QUIET_GEN)$(BPFTOOL) btf dump file $< format c > $@
else
$(Q)cp "$(VMLINUX_H)" $@
endif
-$(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) $(SKEL_OUT)/vmlinux.h | $(SKEL_TMP_OUT)
- $(QUIET_CLANG)$(CLANG) -g -O2 --target=bpf $(CLANG_OPTIONS) $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \
+$(SKEL_TMP_OUT)/%.bpf.o: $(OUTPUT)PERF-VERSION-FILE util/bpf_skel/perf_version.h | $(SKEL_TMP_OUT)
+$(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) $(SKEL_OUT)/vmlinux.h
+ $(QUIET_CLANG)$(CLANG) -g -O2 -fno-stack-protector --target=bpf \
+ $(CLANG_OPTIONS) $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \
+ -include $(OUTPUT)PERF-VERSION-FILE -include util/bpf_skel/perf_version.h \
-c $(filter util/bpf_skel/%.bpf.c,$^) -o $@
$(SKEL_OUT)/%.skel.h: $(SKEL_TMP_OUT)/%.bpf.o | $(BPFTOOL)
@@ -1167,17 +1270,26 @@ bpf-skel:
endif # CONFIG_PERF_BPF_SKEL
bpf-skel-clean:
- $(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS)
-
-clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean
- $(call QUIET_CLEAN, core-objs) $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive $(OUTPUT)perf-iostat $(LANG_BINDINGS)
- $(Q)find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete -o -name '*.shellcheck_log' -delete
+ $(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS) $(SKEL_OUT)/vmlinux.h
+
+clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean \
+ arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean \
+ tests-coresight-targets-clean
+ $(call QUIET_CLEAN, core-objs) $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive \
+ $(OUTPUT)perf-iostat $(LANG_BINDINGS)
+ $(Q)find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '*.a' -delete -o \
+ -name '\.*.cmd' -delete -o -name '\.*.d' -delete -o -name '*.shellcheck_log' -delete
$(Q)$(RM) $(OUTPUT).config-detected
- $(call QUIET_CLEAN, core-progs) $(RM) $(ALL_PROGRAMS) perf perf-read-vdso32 perf-read-vdsox32 $(OUTPUT)$(LIBJVMTI).so
- $(call QUIET_CLEAN, core-gen) $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex* \
+ $(call QUIET_CLEAN, core-progs) $(RM) $(ALL_PROGRAMS) perf perf-read-vdso32 \
+ perf-read-vdsox32 $(OUTPUT)$(LIBJVMTI).so
+ $(call QUIET_CLEAN, core-gen) $(RM) *.spec *.pyc *.pyo */*.pyc */*.pyo \
+ $(OUTPUT)common-cmds.h TAGS tags cscope* $(OUTPUT)PERF-VERSION-FILE \
+ $(OUTPUT)FEATURE-DUMP $(OUTPUT)util/*-bison* $(OUTPUT)util/*-flex* \
$(OUTPUT)util/intel-pt-decoder/inat-tables.c \
$(OUTPUT)tests/llvm-src-{base,kbuild,prologue,relocation}.c \
$(OUTPUT)pmu-events/pmu-events.c \
+ $(OUTPUT)pmu-events/test-empty-pmu-events.c \
+ $(OUTPUT)pmu-events/empty-pmu-events.log \
$(OUTPUT)pmu-events/metric_test.log \
$(OUTPUT)$(fadvise_advice_array) \
$(OUTPUT)$(fsconfig_arrays) \
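
[Editor's note] The Makefile.perf hunk above replaces the old single perf-in.o / pmu-events-in.o link with per-subsystem static archives (libperf-bench.a, libperf-test.a, libperf-ui.a, libperf-util.a, libpmu-events.a): each incremental *-in.o object is archived with "$(AR) rcs" and the archives are added to PERFLIBS, which the existing LIBS line already wraps in --whole-archive so no otherwise-unreferenced members are dropped. The following is only an illustrative sketch of what those recipes expand to in shell; the file names are taken from the hunk, but the command sequence is a simplification (EXTLIBS, the pre-existing libapi/libperf/libsubcmd/libsymbol archives and the --start-group/--end-group wrapping are omitted), not the literal build command.

  # Hypothetical expansion of the new archive + link recipes (illustration only).
  # Assumes the perf-*-in.o incremental objects already exist in the build dir.
  set -e

  for lib in bench test ui util; do
      rm -f "libperf-${lib}.a"
      # 'ar rcs' (re)creates the archive from the incremental object and writes an index
      ar rcs "libperf-${lib}.a" "perf-${lib}-in.o"
  done

  rm -f libpmu-events.a
  ar rcs libpmu-events.a pmu-events/pmu-events-in.o

  # --whole-archive keeps every archive member, mirroring the previous direct *-in.o link
  cc perf-in.o \
     -Wl,--whole-archive libperf-bench.a libperf-test.a libperf-ui.a \
                         libperf-util.a libpmu-events.a -Wl,--no-whole-archive \
     -o perf
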
diff --git a/tools/perf/arch/Build b/tools/perf/arch/Build
index 688818844c11..f0d96a13445c 100644
--- a/tools/perf/arch/Build
+++ b/tools/perf/arch/Build
@@ -1,2 +1,3 @@
-perf-y += common.o
-perf-y += $(SRCARCH)/
+perf-util-y += common.o
+perf-test-y += $(SRCARCH)/
+perf-util-y += $(SRCARCH)/
diff --git a/tools/perf/arch/alpha/entry/syscalls/syscall.tbl b/tools/perf/arch/alpha/entry/syscalls/syscall.tbl
new file mode 100644
index 000000000000..74720667fe09
--- /dev/null
+++ b/tools/perf/arch/alpha/entry/syscalls/syscall.tbl
@@ -0,0 +1,504 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# system call numbers and entry vectors for alpha
+#
+# The format is:
+# <number> <abi> <name> <entry point>
+#
+# The <abi> is always "common" for this file
+#
+0 common osf_syscall alpha_syscall_zero
+1 common exit sys_exit
+2 common fork alpha_fork
+3 common read sys_read
+4 common write sys_write
+5 common osf_old_open sys_ni_syscall
+6 common close sys_close
+7 common osf_wait4 sys_osf_wait4
+8 common osf_old_creat sys_ni_syscall
+9 common link sys_link
+10 common unlink sys_unlink
+11 common osf_execve sys_ni_syscall
+12 common chdir sys_chdir
+13 common fchdir sys_fchdir
+14 common mknod sys_mknod
+15 common chmod sys_chmod
+16 common chown sys_chown
+17 common brk sys_osf_brk
+18 common osf_getfsstat sys_ni_syscall
+19 common lseek sys_lseek
+20 common getxpid sys_getxpid
+21 common osf_mount sys_osf_mount
+22 common umount2 sys_umount
+23 common setuid sys_setuid
+24 common getxuid sys_getxuid
+25 common exec_with_loader sys_ni_syscall
+26 common ptrace sys_ptrace
+27 common osf_nrecvmsg sys_ni_syscall
+28 common osf_nsendmsg sys_ni_syscall
+29 common osf_nrecvfrom sys_ni_syscall
+30 common osf_naccept sys_ni_syscall
+31 common osf_ngetpeername sys_ni_syscall
+32 common osf_ngetsockname sys_ni_syscall
+33 common access sys_access
+34 common osf_chflags sys_ni_syscall
+35 common osf_fchflags sys_ni_syscall
+36 common sync sys_sync
+37 common kill sys_kill
+38 common osf_old_stat sys_ni_syscall
+39 common setpgid sys_setpgid
+40 common osf_old_lstat sys_ni_syscall
+41 common dup sys_dup
+42 common pipe sys_alpha_pipe
+43 common osf_set_program_attributes sys_osf_set_program_attributes
+44 common osf_profil sys_ni_syscall
+45 common open sys_open
+46 common osf_old_sigaction sys_ni_syscall
+47 common getxgid sys_getxgid
+48 common osf_sigprocmask sys_osf_sigprocmask
+49 common osf_getlogin sys_ni_syscall
+50 common osf_setlogin sys_ni_syscall
+51 common acct sys_acct
+52 common sigpending sys_sigpending
+54 common ioctl sys_ioctl
+55 common osf_reboot sys_ni_syscall
+56 common osf_revoke sys_ni_syscall
+57 common symlink sys_symlink
+58 common readlink sys_readlink
+59 common execve sys_execve
+60 common umask sys_umask
+61 common chroot sys_chroot
+62 common osf_old_fstat sys_ni_syscall
+63 common getpgrp sys_getpgrp
+64 common getpagesize sys_getpagesize
+65 common osf_mremap sys_ni_syscall
+66 common vfork alpha_vfork
+67 common stat sys_newstat
+68 common lstat sys_newlstat
+69 common osf_sbrk sys_ni_syscall
+70 common osf_sstk sys_ni_syscall
+71 common mmap sys_osf_mmap
+72 common osf_old_vadvise sys_ni_syscall
+73 common munmap sys_munmap
+74 common mprotect sys_mprotect
+75 common madvise sys_madvise
+76 common vhangup sys_vhangup
+77 common osf_kmodcall sys_ni_syscall
+78 common osf_mincore sys_ni_syscall
+79 common getgroups sys_getgroups
+80 common setgroups sys_setgroups
+81 common osf_old_getpgrp sys_ni_syscall
+82 common setpgrp sys_setpgid
+83 common osf_setitimer compat_sys_setitimer
+84 common osf_old_wait sys_ni_syscall
+85 common osf_table sys_ni_syscall
+86 common osf_getitimer compat_sys_getitimer
+87 common gethostname sys_gethostname
+88 common sethostname sys_sethostname
+89 common getdtablesize sys_getdtablesize
+90 common dup2 sys_dup2
+91 common fstat sys_newfstat
+92 common fcntl sys_fcntl
+93 common osf_select sys_osf_select
+94 common poll sys_poll
+95 common fsync sys_fsync
+96 common setpriority sys_setpriority
+97 common socket sys_socket
+98 common connect sys_connect
+99 common accept sys_accept
+100 common getpriority sys_osf_getpriority
+101 common send sys_send
+102 common recv sys_recv
+103 common sigreturn sys_sigreturn
+104 common bind sys_bind
+105 common setsockopt sys_setsockopt
+106 common listen sys_listen
+107 common osf_plock sys_ni_syscall
+108 common osf_old_sigvec sys_ni_syscall
+109 common osf_old_sigblock sys_ni_syscall
+110 common osf_old_sigsetmask sys_ni_syscall
+111 common sigsuspend sys_sigsuspend
+112 common osf_sigstack sys_osf_sigstack
+113 common recvmsg sys_recvmsg
+114 common sendmsg sys_sendmsg
+115 common osf_old_vtrace sys_ni_syscall
+116 common osf_gettimeofday sys_osf_gettimeofday
+117 common osf_getrusage sys_osf_getrusage
+118 common getsockopt sys_getsockopt
+120 common readv sys_readv
+121 common writev sys_writev
+122 common osf_settimeofday sys_osf_settimeofday
+123 common fchown sys_fchown
+124 common fchmod sys_fchmod
+125 common recvfrom sys_recvfrom
+126 common setreuid sys_setreuid
+127 common setregid sys_setregid
+128 common rename sys_rename
+129 common truncate sys_truncate
+130 common ftruncate sys_ftruncate
+131 common flock sys_flock
+132 common setgid sys_setgid
+133 common sendto sys_sendto
+134 common shutdown sys_shutdown
+135 common socketpair sys_socketpair
+136 common mkdir sys_mkdir
+137 common rmdir sys_rmdir
+138 common osf_utimes sys_osf_utimes
+139 common osf_old_sigreturn sys_ni_syscall
+140 common osf_adjtime sys_ni_syscall
+141 common getpeername sys_getpeername
+142 common osf_gethostid sys_ni_syscall
+143 common osf_sethostid sys_ni_syscall
+144 common getrlimit sys_getrlimit
+145 common setrlimit sys_setrlimit
+146 common osf_old_killpg sys_ni_syscall
+147 common setsid sys_setsid
+148 common quotactl sys_quotactl
+149 common osf_oldquota sys_ni_syscall
+150 common getsockname sys_getsockname
+153 common osf_pid_block sys_ni_syscall
+154 common osf_pid_unblock sys_ni_syscall
+156 common sigaction sys_osf_sigaction
+157 common osf_sigwaitprim sys_ni_syscall
+158 common osf_nfssvc sys_ni_syscall
+159 common osf_getdirentries sys_osf_getdirentries
+160 common osf_statfs sys_osf_statfs
+161 common osf_fstatfs sys_osf_fstatfs
+163 common osf_asynch_daemon sys_ni_syscall
+164 common osf_getfh sys_ni_syscall
+165 common osf_getdomainname sys_osf_getdomainname
+166 common setdomainname sys_setdomainname
+169 common osf_exportfs sys_ni_syscall
+181 common osf_alt_plock sys_ni_syscall
+184 common osf_getmnt sys_ni_syscall
+187 common osf_alt_sigpending sys_ni_syscall
+188 common osf_alt_setsid sys_ni_syscall
+199 common osf_swapon sys_swapon
+200 common msgctl sys_old_msgctl
+201 common msgget sys_msgget
+202 common msgrcv sys_msgrcv
+203 common msgsnd sys_msgsnd
+204 common semctl sys_old_semctl
+205 common semget sys_semget
+206 common semop sys_semop
+207 common osf_utsname sys_osf_utsname
+208 common lchown sys_lchown
+209 common shmat sys_shmat
+210 common shmctl sys_old_shmctl
+211 common shmdt sys_shmdt
+212 common shmget sys_shmget
+213 common osf_mvalid sys_ni_syscall
+214 common osf_getaddressconf sys_ni_syscall
+215 common osf_msleep sys_ni_syscall
+216 common osf_mwakeup sys_ni_syscall
+217 common msync sys_msync
+218 common osf_signal sys_ni_syscall
+219 common osf_utc_gettime sys_ni_syscall
+220 common osf_utc_adjtime sys_ni_syscall
+222 common osf_security sys_ni_syscall
+223 common osf_kloadcall sys_ni_syscall
+224 common osf_stat sys_osf_stat
+225 common osf_lstat sys_osf_lstat
+226 common osf_fstat sys_osf_fstat
+227 common osf_statfs64 sys_osf_statfs64
+228 common osf_fstatfs64 sys_osf_fstatfs64
+233 common getpgid sys_getpgid
+234 common getsid sys_getsid
+235 common sigaltstack sys_sigaltstack
+236 common osf_waitid sys_ni_syscall
+237 common osf_priocntlset sys_ni_syscall
+238 common osf_sigsendset sys_ni_syscall
+239 common osf_set_speculative sys_ni_syscall
+240 common osf_msfs_syscall sys_ni_syscall
+241 common osf_sysinfo sys_osf_sysinfo
+242 common osf_uadmin sys_ni_syscall
+243 common osf_fuser sys_ni_syscall
+244 common osf_proplist_syscall sys_osf_proplist_syscall
+245 common osf_ntp_adjtime sys_ni_syscall
+246 common osf_ntp_gettime sys_ni_syscall
+247 common osf_pathconf sys_ni_syscall
+248 common osf_fpathconf sys_ni_syscall
+250 common osf_uswitch sys_ni_syscall
+251 common osf_usleep_thread sys_osf_usleep_thread
+252 common osf_audcntl sys_ni_syscall
+253 common osf_audgen sys_ni_syscall
+254 common sysfs sys_sysfs
+255 common osf_subsys_info sys_ni_syscall
+256 common osf_getsysinfo sys_osf_getsysinfo
+257 common osf_setsysinfo sys_osf_setsysinfo
+258 common osf_afs_syscall sys_ni_syscall
+259 common osf_swapctl sys_ni_syscall
+260 common osf_memcntl sys_ni_syscall
+261 common osf_fdatasync sys_ni_syscall
+300 common bdflush sys_ni_syscall
+301 common sethae sys_sethae
+302 common mount sys_mount
+303 common old_adjtimex sys_old_adjtimex
+304 common swapoff sys_swapoff
+305 common getdents sys_getdents
+306 common create_module sys_ni_syscall
+307 common init_module sys_init_module
+308 common delete_module sys_delete_module
+309 common get_kernel_syms sys_ni_syscall
+310 common syslog sys_syslog
+311 common reboot sys_reboot
+312 common clone alpha_clone
+313 common uselib sys_uselib
+314 common mlock sys_mlock
+315 common munlock sys_munlock
+316 common mlockall sys_mlockall
+317 common munlockall sys_munlockall
+318 common sysinfo sys_sysinfo
+319 common _sysctl sys_ni_syscall
+# 320 was sys_idle
+321 common oldumount sys_oldumount
+322 common swapon sys_swapon
+323 common times sys_times
+324 common personality sys_personality
+325 common setfsuid sys_setfsuid
+326 common setfsgid sys_setfsgid
+327 common ustat sys_ustat
+328 common statfs sys_statfs
+329 common fstatfs sys_fstatfs
+330 common sched_setparam sys_sched_setparam
+331 common sched_getparam sys_sched_getparam
+332 common sched_setscheduler sys_sched_setscheduler
+333 common sched_getscheduler sys_sched_getscheduler
+334 common sched_yield sys_sched_yield
+335 common sched_get_priority_max sys_sched_get_priority_max
+336 common sched_get_priority_min sys_sched_get_priority_min
+337 common sched_rr_get_interval sys_sched_rr_get_interval
+338 common afs_syscall sys_ni_syscall
+339 common uname sys_newuname
+340 common nanosleep sys_nanosleep
+341 common mremap sys_mremap
+342 common nfsservctl sys_ni_syscall
+343 common setresuid sys_setresuid
+344 common getresuid sys_getresuid
+345 common pciconfig_read sys_pciconfig_read
+346 common pciconfig_write sys_pciconfig_write
+347 common query_module sys_ni_syscall
+348 common prctl sys_prctl
+349 common pread64 sys_pread64
+350 common pwrite64 sys_pwrite64
+351 common rt_sigreturn sys_rt_sigreturn
+352 common rt_sigaction sys_rt_sigaction
+353 common rt_sigprocmask sys_rt_sigprocmask
+354 common rt_sigpending sys_rt_sigpending
+355 common rt_sigtimedwait sys_rt_sigtimedwait
+356 common rt_sigqueueinfo sys_rt_sigqueueinfo
+357 common rt_sigsuspend sys_rt_sigsuspend
+358 common select sys_select
+359 common gettimeofday sys_gettimeofday
+360 common settimeofday sys_settimeofday
+361 common getitimer sys_getitimer
+362 common setitimer sys_setitimer
+363 common utimes sys_utimes
+364 common getrusage sys_getrusage
+365 common wait4 sys_wait4
+366 common adjtimex sys_adjtimex
+367 common getcwd sys_getcwd
+368 common capget sys_capget
+369 common capset sys_capset
+370 common sendfile sys_sendfile64
+371 common setresgid sys_setresgid
+372 common getresgid sys_getresgid
+373 common dipc sys_ni_syscall
+374 common pivot_root sys_pivot_root
+375 common mincore sys_mincore
+376 common pciconfig_iobase sys_pciconfig_iobase
+377 common getdents64 sys_getdents64
+378 common gettid sys_gettid
+379 common readahead sys_readahead
+# 380 is unused
+381 common tkill sys_tkill
+382 common setxattr sys_setxattr
+383 common lsetxattr sys_lsetxattr
+384 common fsetxattr sys_fsetxattr
+385 common getxattr sys_getxattr
+386 common lgetxattr sys_lgetxattr
+387 common fgetxattr sys_fgetxattr
+388 common listxattr sys_listxattr
+389 common llistxattr sys_llistxattr
+390 common flistxattr sys_flistxattr
+391 common removexattr sys_removexattr
+392 common lremovexattr sys_lremovexattr
+393 common fremovexattr sys_fremovexattr
+394 common futex sys_futex
+395 common sched_setaffinity sys_sched_setaffinity
+396 common sched_getaffinity sys_sched_getaffinity
+397 common tuxcall sys_ni_syscall
+398 common io_setup sys_io_setup
+399 common io_destroy sys_io_destroy
+400 common io_getevents sys_io_getevents
+401 common io_submit sys_io_submit
+402 common io_cancel sys_io_cancel
+405 common exit_group sys_exit_group
+406 common lookup_dcookie sys_ni_syscall
+407 common epoll_create sys_epoll_create
+408 common epoll_ctl sys_epoll_ctl
+409 common epoll_wait sys_epoll_wait
+410 common remap_file_pages sys_remap_file_pages
+411 common set_tid_address sys_set_tid_address
+412 common restart_syscall sys_restart_syscall
+413 common fadvise64 sys_fadvise64
+414 common timer_create sys_timer_create
+415 common timer_settime sys_timer_settime
+416 common timer_gettime sys_timer_gettime
+417 common timer_getoverrun sys_timer_getoverrun
+418 common timer_delete sys_timer_delete
+419 common clock_settime sys_clock_settime
+420 common clock_gettime sys_clock_gettime
+421 common clock_getres sys_clock_getres
+422 common clock_nanosleep sys_clock_nanosleep
+423 common semtimedop sys_semtimedop
+424 common tgkill sys_tgkill
+425 common stat64 sys_stat64
+426 common lstat64 sys_lstat64
+427 common fstat64 sys_fstat64
+428 common vserver sys_ni_syscall
+429 common mbind sys_ni_syscall
+430 common get_mempolicy sys_ni_syscall
+431 common set_mempolicy sys_ni_syscall
+432 common mq_open sys_mq_open
+433 common mq_unlink sys_mq_unlink
+434 common mq_timedsend sys_mq_timedsend
+435 common mq_timedreceive sys_mq_timedreceive
+436 common mq_notify sys_mq_notify
+437 common mq_getsetattr sys_mq_getsetattr
+438 common waitid sys_waitid
+439 common add_key sys_add_key
+440 common request_key sys_request_key
+441 common keyctl sys_keyctl
+442 common ioprio_set sys_ioprio_set
+443 common ioprio_get sys_ioprio_get
+444 common inotify_init sys_inotify_init
+445 common inotify_add_watch sys_inotify_add_watch
+446 common inotify_rm_watch sys_inotify_rm_watch
+447 common fdatasync sys_fdatasync
+448 common kexec_load sys_kexec_load
+449 common migrate_pages sys_migrate_pages
+450 common openat sys_openat
+451 common mkdirat sys_mkdirat
+452 common mknodat sys_mknodat
+453 common fchownat sys_fchownat
+454 common futimesat sys_futimesat
+455 common fstatat64 sys_fstatat64
+456 common unlinkat sys_unlinkat
+457 common renameat sys_renameat
+458 common linkat sys_linkat
+459 common symlinkat sys_symlinkat
+460 common readlinkat sys_readlinkat
+461 common fchmodat sys_fchmodat
+462 common faccessat sys_faccessat
+463 common pselect6 sys_pselect6
+464 common ppoll sys_ppoll
+465 common unshare sys_unshare
+466 common set_robust_list sys_set_robust_list
+467 common get_robust_list sys_get_robust_list
+468 common splice sys_splice
+469 common sync_file_range sys_sync_file_range
+470 common tee sys_tee
+471 common vmsplice sys_vmsplice
+472 common move_pages sys_move_pages
+473 common getcpu sys_getcpu
+474 common epoll_pwait sys_epoll_pwait
+475 common utimensat sys_utimensat
+476 common signalfd sys_signalfd
+477 common timerfd sys_ni_syscall
+478 common eventfd sys_eventfd
+479 common recvmmsg sys_recvmmsg
+480 common fallocate sys_fallocate
+481 common timerfd_create sys_timerfd_create
+482 common timerfd_settime sys_timerfd_settime
+483 common timerfd_gettime sys_timerfd_gettime
+484 common signalfd4 sys_signalfd4
+485 common eventfd2 sys_eventfd2
+486 common epoll_create1 sys_epoll_create1
+487 common dup3 sys_dup3
+488 common pipe2 sys_pipe2
+489 common inotify_init1 sys_inotify_init1
+490 common preadv sys_preadv
+491 common pwritev sys_pwritev
+492 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo
+493 common perf_event_open sys_perf_event_open
+494 common fanotify_init sys_fanotify_init
+495 common fanotify_mark sys_fanotify_mark
+496 common prlimit64 sys_prlimit64
+497 common name_to_handle_at sys_name_to_handle_at
+498 common open_by_handle_at sys_open_by_handle_at
+499 common clock_adjtime sys_clock_adjtime
+500 common syncfs sys_syncfs
+501 common setns sys_setns
+502 common accept4 sys_accept4
+503 common sendmmsg sys_sendmmsg
+504 common process_vm_readv sys_process_vm_readv
+505 common process_vm_writev sys_process_vm_writev
+506 common kcmp sys_kcmp
+507 common finit_module sys_finit_module
+508 common sched_setattr sys_sched_setattr
+509 common sched_getattr sys_sched_getattr
+510 common renameat2 sys_renameat2
+511 common getrandom sys_getrandom
+512 common memfd_create sys_memfd_create
+513 common execveat sys_execveat
+514 common seccomp sys_seccomp
+515 common bpf sys_bpf
+516 common userfaultfd sys_userfaultfd
+517 common membarrier sys_membarrier
+518 common mlock2 sys_mlock2
+519 common copy_file_range sys_copy_file_range
+520 common preadv2 sys_preadv2
+521 common pwritev2 sys_pwritev2
+522 common statx sys_statx
+523 common io_pgetevents sys_io_pgetevents
+524 common pkey_mprotect sys_pkey_mprotect
+525 common pkey_alloc sys_pkey_alloc
+526 common pkey_free sys_pkey_free
+527 common rseq sys_rseq
+528 common statfs64 sys_statfs64
+529 common fstatfs64 sys_fstatfs64
+530 common getegid sys_getegid
+531 common geteuid sys_geteuid
+532 common getppid sys_getppid
+# all other architectures have common numbers for new syscall, alpha
+# is the exception.
+534 common pidfd_send_signal sys_pidfd_send_signal
+535 common io_uring_setup sys_io_uring_setup
+536 common io_uring_enter sys_io_uring_enter
+537 common io_uring_register sys_io_uring_register
+538 common open_tree sys_open_tree
+539 common move_mount sys_move_mount
+540 common fsopen sys_fsopen
+541 common fsconfig sys_fsconfig
+542 common fsmount sys_fsmount
+543 common fspick sys_fspick
+544 common pidfd_open sys_pidfd_open
+545 common clone3 alpha_clone3
+546 common close_range sys_close_range
+547 common openat2 sys_openat2
+548 common pidfd_getfd sys_pidfd_getfd
+549 common faccessat2 sys_faccessat2
+550 common process_madvise sys_process_madvise
+551 common epoll_pwait2 sys_epoll_pwait2
+552 common mount_setattr sys_mount_setattr
+553 common quotactl_fd sys_quotactl_fd
+554 common landlock_create_ruleset sys_landlock_create_ruleset
+555 common landlock_add_rule sys_landlock_add_rule
+556 common landlock_restrict_self sys_landlock_restrict_self
+# 557 reserved for memfd_secret
+558 common process_mrelease sys_process_mrelease
+559 common futex_waitv sys_futex_waitv
+560 common set_mempolicy_home_node sys_ni_syscall
+561 common cachestat sys_cachestat
+562 common fchmodat2 sys_fchmodat2
+563 common map_shadow_stack sys_map_shadow_stack
+564 common futex_wake sys_futex_wake
+565 common futex_wait sys_futex_wait
+566 common futex_requeue sys_futex_requeue
+567 common statmount sys_statmount
+568 common listmount sys_listmount
+569 common lsm_get_self_attr sys_lsm_get_self_attr
+570 common lsm_set_self_attr sys_lsm_set_self_attr
+571 common lsm_list_modules sys_lsm_list_modules
+572 common mseal sys_mseal
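
[Editor's note] The per-architecture syscall.tbl files added by this series (alpha above, arm below) are the inputs to the new $(syscall_array) rule earlier in the patch, which runs tools/perf/trace/beauty/syscalltbl.sh over them during prepare. As a rough, hypothetical sketch of how a "<number> <abi> <name> <entry point>" table can be turned into a C lookup array — the real script's name handling and output format may differ — something like the following works:

  #!/bin/sh
  # Illustration only: emit a C array mapping syscall numbers to names
  # from the syscall.tbl file passed as $1. Not the actual syscalltbl.sh.
  tbl="$1"
  echo "static const char *const syscall_names[] = {"
  grep -v '^#' "$tbl" | awk 'NF >= 3 { printf "\t[%s] = \"%s\",\n", $1, $3 }'
  echo "};"
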
diff --git a/tools/perf/arch/arc/annotate/instructions.c b/tools/perf/arch/arc/annotate/instructions.c
index 2f00e995c7e3..e5619770a1af 100644
--- a/tools/perf/arch/arc/annotate/instructions.c
+++ b/tools/perf/arch/arc/annotate/instructions.c
@@ -5,5 +5,7 @@ static int arc__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
{
arch->initialized = true;
arch->objdump.comment_char = ';';
+ arch->e_machine = EM_ARC;
+ arch->e_flags = 0;
return 0;
}
diff --git a/tools/perf/arch/arm/Build b/tools/perf/arch/arm/Build
index 36222e64bbf7..317425aa3712 100644
--- a/tools/perf/arch/arm/Build
+++ b/tools/perf/arch/arm/Build
@@ -1,2 +1,2 @@
-perf-y += util/
-perf-$(CONFIG_DWARF_UNWIND) += tests/
+perf-util-y += util/
+perf-test-$(CONFIG_DWARF_UNWIND) += tests/
diff --git a/tools/perf/arch/arm/Makefile b/tools/perf/arch/arm/Makefile
index 1d88fdab13bf..8b59ce8efb89 100644
--- a/tools/perf/arch/arm/Makefile
+++ b/tools/perf/arch/arm/Makefile
@@ -1,5 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
PERF_HAVE_JITDUMP := 1
diff --git a/tools/perf/arch/arm/annotate/instructions.c b/tools/perf/arch/arm/annotate/instructions.c
index 2ff6cedeb9c5..cf91a43362b0 100644
--- a/tools/perf/arch/arm/annotate/instructions.c
+++ b/tools/perf/arch/arm/annotate/instructions.c
@@ -53,6 +53,8 @@ static int arm__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
arch->associate_instruction_ops = arm__associate_instruction_ops;
arch->objdump.comment_char = ';';
arch->objdump.skip_functions_char = '+';
+ arch->e_machine = EM_ARM;
+ arch->e_flags = 0;
return 0;
out_free_call:
diff --git a/tools/perf/arch/arm/entry/syscalls/syscall.tbl b/tools/perf/arch/arm/entry/syscalls/syscall.tbl
new file mode 100644
index 000000000000..b07e699aaa3c
--- /dev/null
+++ b/tools/perf/arch/arm/entry/syscalls/syscall.tbl
@@ -0,0 +1,486 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# Linux system call numbers and entry vectors
+#
+# The format is:
+# <num> <abi> <name> [<entry point> [<oabi compat entry point>]]
+#
+# Where abi is:
+# common - for system calls shared between oabi and eabi (may have compat)
+# oabi - for oabi-only system calls (may have compat)
+# eabi - for eabi-only system calls
+#
+# For each syscall number, "common" is mutually exclusive with oabi and eabi
+#
+0 common restart_syscall sys_restart_syscall
+1 common exit sys_exit
+2 common fork sys_fork
+3 common read sys_read
+4 common write sys_write
+5 common open sys_open
+6 common close sys_close
+# 7 was sys_waitpid
+8 common creat sys_creat
+9 common link sys_link
+10 common unlink sys_unlink
+11 common execve sys_execve
+12 common chdir sys_chdir
+13 oabi time sys_time32
+14 common mknod sys_mknod
+15 common chmod sys_chmod
+16 common lchown sys_lchown16
+# 17 was sys_break
+# 18 was sys_stat
+19 common lseek sys_lseek
+20 common getpid sys_getpid
+21 common mount sys_mount
+22 oabi umount sys_oldumount
+23 common setuid sys_setuid16
+24 common getuid sys_getuid16
+25 oabi stime sys_stime32
+26 common ptrace sys_ptrace
+27 oabi alarm sys_alarm
+# 28 was sys_fstat
+29 common pause sys_pause
+30 oabi utime sys_utime32
+# 31 was sys_stty
+# 32 was sys_gtty
+33 common access sys_access
+34 common nice sys_nice
+# 35 was sys_ftime
+36 common sync sys_sync
+37 common kill sys_kill
+38 common rename sys_rename
+39 common mkdir sys_mkdir
+40 common rmdir sys_rmdir
+41 common dup sys_dup
+42 common pipe sys_pipe
+43 common times sys_times
+# 44 was sys_prof
+45 common brk sys_brk
+46 common setgid sys_setgid16
+47 common getgid sys_getgid16
+# 48 was sys_signal
+49 common geteuid sys_geteuid16
+50 common getegid sys_getegid16
+51 common acct sys_acct
+52 common umount2 sys_umount
+# 53 was sys_lock
+54 common ioctl sys_ioctl
+55 common fcntl sys_fcntl
+# 56 was sys_mpx
+57 common setpgid sys_setpgid
+# 58 was sys_ulimit
+# 59 was sys_olduname
+60 common umask sys_umask
+61 common chroot sys_chroot
+62 common ustat sys_ustat
+63 common dup2 sys_dup2
+64 common getppid sys_getppid
+65 common getpgrp sys_getpgrp
+66 common setsid sys_setsid
+67 common sigaction sys_sigaction
+# 68 was sys_sgetmask
+# 69 was sys_ssetmask
+70 common setreuid sys_setreuid16
+71 common setregid sys_setregid16
+72 common sigsuspend sys_sigsuspend
+73 common sigpending sys_sigpending
+74 common sethostname sys_sethostname
+75 common setrlimit sys_setrlimit
+# Back compat 2GB limited rlimit
+76 oabi getrlimit sys_old_getrlimit
+77 common getrusage sys_getrusage
+78 common gettimeofday sys_gettimeofday
+79 common settimeofday sys_settimeofday
+80 common getgroups sys_getgroups16
+81 common setgroups sys_setgroups16
+82 oabi select sys_old_select
+83 common symlink sys_symlink
+# 84 was sys_lstat
+85 common readlink sys_readlink
+86 common uselib sys_uselib
+87 common swapon sys_swapon
+88 common reboot sys_reboot
+89 oabi readdir sys_old_readdir
+90 oabi mmap sys_old_mmap
+91 common munmap sys_munmap
+92 common truncate sys_truncate
+93 common ftruncate sys_ftruncate
+94 common fchmod sys_fchmod
+95 common fchown sys_fchown16
+96 common getpriority sys_getpriority
+97 common setpriority sys_setpriority
+# 98 was sys_profil
+99 common statfs sys_statfs
+100 common fstatfs sys_fstatfs
+# 101 was sys_ioperm
+102 oabi socketcall sys_socketcall sys_oabi_socketcall
+103 common syslog sys_syslog
+104 common setitimer sys_setitimer
+105 common getitimer sys_getitimer
+106 common stat sys_newstat
+107 common lstat sys_newlstat
+108 common fstat sys_newfstat
+# 109 was sys_uname
+# 110 was sys_iopl
+111 common vhangup sys_vhangup
+# 112 was sys_idle
+# syscall to call a syscall!
+113 oabi syscall sys_syscall
+114 common wait4 sys_wait4
+115 common swapoff sys_swapoff
+116 common sysinfo sys_sysinfo
+117 oabi ipc sys_ipc sys_oabi_ipc
+118 common fsync sys_fsync
+119 common sigreturn sys_sigreturn_wrapper
+120 common clone sys_clone
+121 common setdomainname sys_setdomainname
+122 common uname sys_newuname
+# 123 was sys_modify_ldt
+124 common adjtimex sys_adjtimex_time32
+125 common mprotect sys_mprotect
+126 common sigprocmask sys_sigprocmask
+# 127 was sys_create_module
+128 common init_module sys_init_module
+129 common delete_module sys_delete_module
+# 130 was sys_get_kernel_syms
+131 common quotactl sys_quotactl
+132 common getpgid sys_getpgid
+133 common fchdir sys_fchdir
+134 common bdflush sys_ni_syscall
+135 common sysfs sys_sysfs
+136 common personality sys_personality
+# 137 was sys_afs_syscall
+138 common setfsuid sys_setfsuid16
+139 common setfsgid sys_setfsgid16
+140 common _llseek sys_llseek
+141 common getdents sys_getdents
+142 common _newselect sys_select
+143 common flock sys_flock
+144 common msync sys_msync
+145 common readv sys_readv
+146 common writev sys_writev
+147 common getsid sys_getsid
+148 common fdatasync sys_fdatasync
+149 common _sysctl sys_ni_syscall
+150 common mlock sys_mlock
+151 common munlock sys_munlock
+152 common mlockall sys_mlockall
+153 common munlockall sys_munlockall
+154 common sched_setparam sys_sched_setparam
+155 common sched_getparam sys_sched_getparam
+156 common sched_setscheduler sys_sched_setscheduler
+157 common sched_getscheduler sys_sched_getscheduler
+158 common sched_yield sys_sched_yield
+159 common sched_get_priority_max sys_sched_get_priority_max
+160 common sched_get_priority_min sys_sched_get_priority_min
+161 common sched_rr_get_interval sys_sched_rr_get_interval_time32
+162 common nanosleep sys_nanosleep_time32
+163 common mremap sys_mremap
+164 common setresuid sys_setresuid16
+165 common getresuid sys_getresuid16
+# 166 was sys_vm86
+# 167 was sys_query_module
+168 common poll sys_poll
+169 common nfsservctl
+170 common setresgid sys_setresgid16
+171 common getresgid sys_getresgid16
+172 common prctl sys_prctl
+173 common rt_sigreturn sys_rt_sigreturn_wrapper
+174 common rt_sigaction sys_rt_sigaction
+175 common rt_sigprocmask sys_rt_sigprocmask
+176 common rt_sigpending sys_rt_sigpending
+177 common rt_sigtimedwait sys_rt_sigtimedwait_time32
+178 common rt_sigqueueinfo sys_rt_sigqueueinfo
+179 common rt_sigsuspend sys_rt_sigsuspend
+180 common pread64 sys_pread64 sys_oabi_pread64
+181 common pwrite64 sys_pwrite64 sys_oabi_pwrite64
+182 common chown sys_chown16
+183 common getcwd sys_getcwd
+184 common capget sys_capget
+185 common capset sys_capset
+186 common sigaltstack sys_sigaltstack
+187 common sendfile sys_sendfile
+# 188 reserved
+# 189 reserved
+190 common vfork sys_vfork
+# SuS compliant getrlimit
+191 common ugetrlimit sys_getrlimit
+192 common mmap2 sys_mmap2
+193 common truncate64 sys_truncate64 sys_oabi_truncate64
+194 common ftruncate64 sys_ftruncate64 sys_oabi_ftruncate64
+195 common stat64 sys_stat64 sys_oabi_stat64
+196 common lstat64 sys_lstat64 sys_oabi_lstat64
+197 common fstat64 sys_fstat64 sys_oabi_fstat64
+198 common lchown32 sys_lchown
+199 common getuid32 sys_getuid
+200 common getgid32 sys_getgid
+201 common geteuid32 sys_geteuid
+202 common getegid32 sys_getegid
+203 common setreuid32 sys_setreuid
+204 common setregid32 sys_setregid
+205 common getgroups32 sys_getgroups
+206 common setgroups32 sys_setgroups
+207 common fchown32 sys_fchown
+208 common setresuid32 sys_setresuid
+209 common getresuid32 sys_getresuid
+210 common setresgid32 sys_setresgid
+211 common getresgid32 sys_getresgid
+212 common chown32 sys_chown
+213 common setuid32 sys_setuid
+214 common setgid32 sys_setgid
+215 common setfsuid32 sys_setfsuid
+216 common setfsgid32 sys_setfsgid
+217 common getdents64 sys_getdents64
+218 common pivot_root sys_pivot_root
+219 common mincore sys_mincore
+220 common madvise sys_madvise
+221 common fcntl64 sys_fcntl64 sys_oabi_fcntl64
+# 222 for tux
+# 223 is unused
+224 common gettid sys_gettid
+225 common readahead sys_readahead sys_oabi_readahead
+226 common setxattr sys_setxattr
+227 common lsetxattr sys_lsetxattr
+228 common fsetxattr sys_fsetxattr
+229 common getxattr sys_getxattr
+230 common lgetxattr sys_lgetxattr
+231 common fgetxattr sys_fgetxattr
+232 common listxattr sys_listxattr
+233 common llistxattr sys_llistxattr
+234 common flistxattr sys_flistxattr
+235 common removexattr sys_removexattr
+236 common lremovexattr sys_lremovexattr
+237 common fremovexattr sys_fremovexattr
+238 common tkill sys_tkill
+239 common sendfile64 sys_sendfile64
+240 common futex sys_futex_time32
+241 common sched_setaffinity sys_sched_setaffinity
+242 common sched_getaffinity sys_sched_getaffinity
+243 common io_setup sys_io_setup
+244 common io_destroy sys_io_destroy
+245 common io_getevents sys_io_getevents_time32
+246 common io_submit sys_io_submit
+247 common io_cancel sys_io_cancel
+248 common exit_group sys_exit_group
+249 common lookup_dcookie sys_ni_syscall
+250 common epoll_create sys_epoll_create
+251 common epoll_ctl sys_epoll_ctl sys_oabi_epoll_ctl
+252 common epoll_wait sys_epoll_wait
+253 common remap_file_pages sys_remap_file_pages
+# 254 for set_thread_area
+# 255 for get_thread_area
+256 common set_tid_address sys_set_tid_address
+257 common timer_create sys_timer_create
+258 common timer_settime sys_timer_settime32
+259 common timer_gettime sys_timer_gettime32
+260 common timer_getoverrun sys_timer_getoverrun
+261 common timer_delete sys_timer_delete
+262 common clock_settime sys_clock_settime32
+263 common clock_gettime sys_clock_gettime32
+264 common clock_getres sys_clock_getres_time32
+265 common clock_nanosleep sys_clock_nanosleep_time32
+266 common statfs64 sys_statfs64_wrapper
+267 common fstatfs64 sys_fstatfs64_wrapper
+268 common tgkill sys_tgkill
+269 common utimes sys_utimes_time32
+270 common arm_fadvise64_64 sys_arm_fadvise64_64
+271 common pciconfig_iobase sys_pciconfig_iobase
+272 common pciconfig_read sys_pciconfig_read
+273 common pciconfig_write sys_pciconfig_write
+274 common mq_open sys_mq_open
+275 common mq_unlink sys_mq_unlink
+276 common mq_timedsend sys_mq_timedsend_time32
+277 common mq_timedreceive sys_mq_timedreceive_time32
+278 common mq_notify sys_mq_notify
+279 common mq_getsetattr sys_mq_getsetattr
+280 common waitid sys_waitid
+281 common socket sys_socket
+282 common bind sys_bind sys_oabi_bind
+283 common connect sys_connect sys_oabi_connect
+284 common listen sys_listen
+285 common accept sys_accept
+286 common getsockname sys_getsockname
+287 common getpeername sys_getpeername
+288 common socketpair sys_socketpair
+289 common send sys_send
+290 common sendto sys_sendto sys_oabi_sendto
+291 common recv sys_recv
+292 common recvfrom sys_recvfrom
+293 common shutdown sys_shutdown
+294 common setsockopt sys_setsockopt
+295 common getsockopt sys_getsockopt
+296 common sendmsg sys_sendmsg sys_oabi_sendmsg
+297 common recvmsg sys_recvmsg
+298 common semop sys_semop sys_oabi_semop
+299 common semget sys_semget
+300 common semctl sys_old_semctl
+301 common msgsnd sys_msgsnd
+302 common msgrcv sys_msgrcv
+303 common msgget sys_msgget
+304 common msgctl sys_old_msgctl
+305 common shmat sys_shmat
+306 common shmdt sys_shmdt
+307 common shmget sys_shmget
+308 common shmctl sys_old_shmctl
+309 common add_key sys_add_key
+310 common request_key sys_request_key
+311 common keyctl sys_keyctl
+312 common semtimedop sys_semtimedop_time32 sys_oabi_semtimedop
+313 common vserver
+314 common ioprio_set sys_ioprio_set
+315 common ioprio_get sys_ioprio_get
+316 common inotify_init sys_inotify_init
+317 common inotify_add_watch sys_inotify_add_watch
+318 common inotify_rm_watch sys_inotify_rm_watch
+319 common mbind sys_mbind
+320 common get_mempolicy sys_get_mempolicy
+321 common set_mempolicy sys_set_mempolicy
+322 common openat sys_openat
+323 common mkdirat sys_mkdirat
+324 common mknodat sys_mknodat
+325 common fchownat sys_fchownat
+326 common futimesat sys_futimesat_time32
+327 common fstatat64 sys_fstatat64 sys_oabi_fstatat64
+328 common unlinkat sys_unlinkat
+329 common renameat sys_renameat
+330 common linkat sys_linkat
+331 common symlinkat sys_symlinkat
+332 common readlinkat sys_readlinkat
+333 common fchmodat sys_fchmodat
+334 common faccessat sys_faccessat
+335 common pselect6 sys_pselect6_time32
+336 common ppoll sys_ppoll_time32
+337 common unshare sys_unshare
+338 common set_robust_list sys_set_robust_list
+339 common get_robust_list sys_get_robust_list
+340 common splice sys_splice
+341 common arm_sync_file_range sys_sync_file_range2
+342 common tee sys_tee
+343 common vmsplice sys_vmsplice
+344 common move_pages sys_move_pages
+345 common getcpu sys_getcpu
+346 common epoll_pwait sys_epoll_pwait
+347 common kexec_load sys_kexec_load
+348 common utimensat sys_utimensat_time32
+349 common signalfd sys_signalfd
+350 common timerfd_create sys_timerfd_create
+351 common eventfd sys_eventfd
+352 common fallocate sys_fallocate
+353 common timerfd_settime sys_timerfd_settime32
+354 common timerfd_gettime sys_timerfd_gettime32
+355 common signalfd4 sys_signalfd4
+356 common eventfd2 sys_eventfd2
+357 common epoll_create1 sys_epoll_create1
+358 common dup3 sys_dup3
+359 common pipe2 sys_pipe2
+360 common inotify_init1 sys_inotify_init1
+361 common preadv sys_preadv
+362 common pwritev sys_pwritev
+363 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo
+364 common perf_event_open sys_perf_event_open
+365 common recvmmsg sys_recvmmsg_time32
+366 common accept4 sys_accept4
+367 common fanotify_init sys_fanotify_init
+368 common fanotify_mark sys_fanotify_mark
+369 common prlimit64 sys_prlimit64
+370 common name_to_handle_at sys_name_to_handle_at
+371 common open_by_handle_at sys_open_by_handle_at
+372 common clock_adjtime sys_clock_adjtime32
+373 common syncfs sys_syncfs
+374 common sendmmsg sys_sendmmsg
+375 common setns sys_setns
+376 common process_vm_readv sys_process_vm_readv
+377 common process_vm_writev sys_process_vm_writev
+378 common kcmp sys_kcmp
+379 common finit_module sys_finit_module
+380 common sched_setattr sys_sched_setattr
+381 common sched_getattr sys_sched_getattr
+382 common renameat2 sys_renameat2
+383 common seccomp sys_seccomp
+384 common getrandom sys_getrandom
+385 common memfd_create sys_memfd_create
+386 common bpf sys_bpf
+387 common execveat sys_execveat
+388 common userfaultfd sys_userfaultfd
+389 common membarrier sys_membarrier
+390 common mlock2 sys_mlock2
+391 common copy_file_range sys_copy_file_range
+392 common preadv2 sys_preadv2
+393 common pwritev2 sys_pwritev2
+394 common pkey_mprotect sys_pkey_mprotect
+395 common pkey_alloc sys_pkey_alloc
+396 common pkey_free sys_pkey_free
+397 common statx sys_statx
+398 common rseq sys_rseq
+399 common io_pgetevents sys_io_pgetevents_time32
+400 common migrate_pages sys_migrate_pages
+401 common kexec_file_load sys_kexec_file_load
+# 402 is unused
+403 common clock_gettime64 sys_clock_gettime
+404 common clock_settime64 sys_clock_settime
+405 common clock_adjtime64 sys_clock_adjtime
+406 common clock_getres_time64 sys_clock_getres
+407 common clock_nanosleep_time64 sys_clock_nanosleep
+408 common timer_gettime64 sys_timer_gettime
+409 common timer_settime64 sys_timer_settime
+410 common timerfd_gettime64 sys_timerfd_gettime
+411 common timerfd_settime64 sys_timerfd_settime
+412 common utimensat_time64 sys_utimensat
+413 common pselect6_time64 sys_pselect6
+414 common ppoll_time64 sys_ppoll
+416 common io_pgetevents_time64 sys_io_pgetevents
+417 common recvmmsg_time64 sys_recvmmsg
+418 common mq_timedsend_time64 sys_mq_timedsend
+419 common mq_timedreceive_time64 sys_mq_timedreceive
+420 common semtimedop_time64 sys_semtimedop
+421 common rt_sigtimedwait_time64 sys_rt_sigtimedwait
+422 common futex_time64 sys_futex
+423 common sched_rr_get_interval_time64 sys_sched_rr_get_interval
+424 common pidfd_send_signal sys_pidfd_send_signal
+425 common io_uring_setup sys_io_uring_setup
+426 common io_uring_enter sys_io_uring_enter
+427 common io_uring_register sys_io_uring_register
+428 common open_tree sys_open_tree
+429 common move_mount sys_move_mount
+430 common fsopen sys_fsopen
+431 common fsconfig sys_fsconfig
+432 common fsmount sys_fsmount
+433 common fspick sys_fspick
+434 common pidfd_open sys_pidfd_open
+435 common clone3 sys_clone3
+436 common close_range sys_close_range
+437 common openat2 sys_openat2
+438 common pidfd_getfd sys_pidfd_getfd
+439 common faccessat2 sys_faccessat2
+440 common process_madvise sys_process_madvise
+441 common epoll_pwait2 sys_epoll_pwait2
+442 common mount_setattr sys_mount_setattr
+443 common quotactl_fd sys_quotactl_fd
+444 common landlock_create_ruleset sys_landlock_create_ruleset
+445 common landlock_add_rule sys_landlock_add_rule
+446 common landlock_restrict_self sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448 common process_mrelease sys_process_mrelease
+449 common futex_waitv sys_futex_waitv
+450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
+452 common fchmodat2 sys_fchmodat2
+453 common map_shadow_stack sys_map_shadow_stack
+454 common futex_wake sys_futex_wake
+455 common futex_wait sys_futex_wait
+456 common futex_requeue sys_futex_requeue
+457 common statmount sys_statmount
+458 common listmount sys_listmount
+459 common lsm_get_self_attr sys_lsm_get_self_attr
+460 common lsm_set_self_attr sys_lsm_set_self_attr
+461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
+463 common setxattrat sys_setxattrat
+464 common getxattrat sys_getxattrat
+465 common listxattrat sys_listxattrat
+466 common removexattrat sys_removexattrat
+467 common open_tree_attr sys_open_tree_attr
+468 common file_getattr sys_file_getattr
+469 common file_setattr sys_file_setattr
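Note on the table format: each non-comment row above is "<number> <abi> <name> <native entry point>", and the arm tables add an optional fifth column naming a compat entry point where it differs. The standalone C sketch below only illustrates that column layout; the struct and function names are invented for the example, and perf's build actually consumes these files with shell/awk helpers rather than anything like this.

/*
 * Illustrative only: parse one syscall .tbl row of the form
 * "<number> <abi> <name> <entry point> [<compat entry point>]".
 */
#include <stdio.h>
#include <string.h>

struct tbl_row {
	int  nr;        /* syscall number, first column */
	char abi[16];   /* e.g. "common" */
	char name[64];  /* user-visible syscall name */
	char entry[64]; /* native entry point, e.g. sys_openat2 */
	char compat[64];/* optional compat entry point, may stay empty */
};

static int parse_tbl_row(const char *line, struct tbl_row *row)
{
	memset(row, 0, sizeof(*row));
	/* Comment rows such as "# 402 is unused" carry no entry point. */
	if (line[0] == '#')
		return 0;
	return sscanf(line, "%d %15s %63s %63s %63s",
		      &row->nr, row->abi, row->name, row->entry, row->compat) >= 4;
}

int main(void)
{
	struct tbl_row row;

	if (parse_tbl_row("437 common openat2 sys_openat2", &row))
		printf("[%d] = \"%s\" -> %s\n", row.nr, row.name, row.entry);
	return 0;
}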
diff --git a/tools/perf/arch/arm/tests/Build b/tools/perf/arch/arm/tests/Build
index bc8e97380c82..599efa545727 100644
--- a/tools/perf/arch/arm/tests/Build
+++ b/tools/perf/arch/arm/tests/Build
@@ -1,5 +1,5 @@
-perf-y += regs_load.o
-perf-y += dwarf-unwind.o
-perf-y += vectors-page.o
+perf-test-y += regs_load.o
+perf-test-y += dwarf-unwind.o
+perf-test-y += vectors-page.o
-perf-y += arch-tests.o
+perf-test-y += arch-tests.o
diff --git a/tools/perf/arch/arm/tests/dwarf-unwind.c b/tools/perf/arch/arm/tests/dwarf-unwind.c
index 9bc304cb7762..f421910e0709 100644
--- a/tools/perf/arch/arm/tests/dwarf-unwind.c
+++ b/tools/perf/arch/arm/tests/dwarf-unwind.c
@@ -45,7 +45,7 @@ static int sample_ustack(struct perf_sample *sample,
int test__arch_unwind_sample(struct perf_sample *sample,
struct thread *thread)
{
- struct regs_dump *regs = &sample->user_regs;
+ struct regs_dump *regs = perf_sample__user_regs(sample);
u64 *buf;
buf = calloc(1, sizeof(u64) * PERF_REGS_MAX);
diff --git a/tools/perf/arch/arm/util/Build b/tools/perf/arch/arm/util/Build
index 37fc63708966..f7a8b37d1c68 100644
--- a/tools/perf/arch/arm/util/Build
+++ b/tools/perf/arch/arm/util/Build
@@ -1,8 +1,6 @@
-perf-y += perf_regs.o
+perf-util-y += perf_regs.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
+perf-util-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
-perf-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
-
-perf-$(CONFIG_AUXTRACE) += pmu.o auxtrace.o cs-etm.o
+perf-util-$(CONFIG_AUXTRACE) += pmu.o auxtrace.o cs-etm.o
diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c
index 77e6663c1703..ea891d12f8f4 100644
--- a/tools/perf/arch/arm/util/cs-etm.c
+++ b/tools/perf/arch/arm/util/cs-etm.c
@@ -66,18 +66,30 @@ static const char * const metadata_ete_ro[] = {
[CS_ETE_TS_SOURCE] = "ts_source",
};
-static bool cs_etm_is_etmv4(struct auxtrace_record *itr, int cpu);
-static bool cs_etm_is_ete(struct auxtrace_record *itr, int cpu);
+enum cs_etm_version { CS_NOT_PRESENT, CS_ETMV3, CS_ETMV4, CS_ETE };
-static int cs_etm_validate_context_id(struct auxtrace_record *itr,
- struct evsel *evsel, int cpu)
+static bool cs_etm_is_ete(struct perf_pmu *cs_etm_pmu, struct perf_cpu cpu);
+static int cs_etm_get_ro(struct perf_pmu *pmu, struct perf_cpu cpu, const char *path, __u64 *val);
+static bool cs_etm_pmu_path_exists(struct perf_pmu *pmu, struct perf_cpu cpu, const char *path);
+
+static enum cs_etm_version cs_etm_get_version(struct perf_pmu *cs_etm_pmu,
+ struct perf_cpu cpu)
+{
+ if (cs_etm_is_ete(cs_etm_pmu, cpu))
+ return CS_ETE;
+ else if (cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR0]))
+ return CS_ETMV4;
+ else if (cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_etmv3_ro[CS_ETM_ETMCCER]))
+ return CS_ETMV3;
+
+ return CS_NOT_PRESENT;
+}
+
+static int cs_etm_validate_context_id(struct perf_pmu *cs_etm_pmu, struct evsel *evsel,
+ struct perf_cpu cpu)
{
- struct cs_etm_recording *ptr =
- container_of(itr, struct cs_etm_recording, itr);
- struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
- char path[PATH_MAX];
int err;
- u32 val;
+ __u64 val;
u64 contextid = evsel->core.attr.config &
(perf_pmu__format_bits(cs_etm_pmu, "contextid") |
perf_pmu__format_bits(cs_etm_pmu, "contextid1") |
@@ -87,23 +99,16 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
return 0;
/* Not supported in etmv3 */
- if (!cs_etm_is_etmv4(itr, cpu)) {
+ if (cs_etm_get_version(cs_etm_pmu, cpu) == CS_ETMV3) {
pr_err("%s: contextid not supported in ETMv3, disable with %s/contextid=0/\n",
CORESIGHT_ETM_PMU_NAME, CORESIGHT_ETM_PMU_NAME);
return -EINVAL;
}
/* Get a handle on TRCIDR2 */
- snprintf(path, PATH_MAX, "cpu%d/%s",
- cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR2]);
- err = perf_pmu__scan_file(cs_etm_pmu, path, "%x", &val);
-
- /* There was a problem reading the file, bailing out */
- if (err != 1) {
- pr_err("%s: can't read file %s\n", CORESIGHT_ETM_PMU_NAME,
- path);
+ err = cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR2], &val);
+ if (err)
return err;
- }
if (contextid &
perf_pmu__format_bits(cs_etm_pmu, "contextid1")) {
@@ -140,37 +145,26 @@ static int cs_etm_validate_context_id(struct auxtrace_record *itr,
return 0;
}
-static int cs_etm_validate_timestamp(struct auxtrace_record *itr,
- struct evsel *evsel, int cpu)
+static int cs_etm_validate_timestamp(struct perf_pmu *cs_etm_pmu, struct evsel *evsel,
+ struct perf_cpu cpu)
{
- struct cs_etm_recording *ptr =
- container_of(itr, struct cs_etm_recording, itr);
- struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
- char path[PATH_MAX];
int err;
- u32 val;
+ __u64 val;
if (!(evsel->core.attr.config &
perf_pmu__format_bits(cs_etm_pmu, "timestamp")))
return 0;
- if (!cs_etm_is_etmv4(itr, cpu)) {
+ if (cs_etm_get_version(cs_etm_pmu, cpu) == CS_ETMV3) {
pr_err("%s: timestamp not supported in ETMv3, disable with %s/timestamp=0/\n",
CORESIGHT_ETM_PMU_NAME, CORESIGHT_ETM_PMU_NAME);
return -EINVAL;
}
/* Get a handle on TRCIRD0 */
- snprintf(path, PATH_MAX, "cpu%d/%s",
- cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR0]);
- err = perf_pmu__scan_file(cs_etm_pmu, path, "%x", &val);
-
- /* There was a problem reading the file, bailing out */
- if (err != 1) {
- pr_err("%s: can't read file %s\n",
- CORESIGHT_ETM_PMU_NAME, path);
+ err = cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR0], &val);
+ if (err)
return err;
- }
/*
* TRCIDR0.TSSIZE, bit [28-24], indicates whether global timestamping
@@ -187,6 +181,13 @@ static int cs_etm_validate_timestamp(struct auxtrace_record *itr,
return 0;
}
+static struct perf_pmu *cs_etm_get_pmu(struct auxtrace_record *itr)
+{
+ struct cs_etm_recording *ptr = container_of(itr, struct cs_etm_recording, itr);
+
+ return ptr->cs_etm_pmu;
+}
+
/*
* Check whether the requested timestamp and contextid options should be
* available on all requested CPUs and if not, tell the user how to override.
@@ -194,41 +195,45 @@ static int cs_etm_validate_timestamp(struct auxtrace_record *itr,
* first is better. In theory the kernel could still disable the option for
* some other reason so this is best effort only.
*/
-static int cs_etm_validate_config(struct auxtrace_record *itr,
+static int cs_etm_validate_config(struct perf_pmu *cs_etm_pmu,
struct evsel *evsel)
{
- int i, err = -EINVAL;
+ int idx, err = 0;
struct perf_cpu_map *event_cpus = evsel->evlist->core.user_requested_cpus;
- struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
+ struct perf_cpu_map *intersect_cpus;
+ struct perf_cpu cpu;
- /* Set option of each CPU we have */
- for (i = 0; i < cpu__max_cpu().cpu; i++) {
- struct perf_cpu cpu = { .cpu = i, };
-
- /*
- * In per-cpu case, do the validation for CPUs to work with.
- * In per-thread case, the CPU map is empty. Since the traced
- * program can run on any CPUs in this case, thus don't skip
- * validation.
- */
- if (!perf_cpu_map__has_any_cpu_or_is_empty(event_cpus) &&
- !perf_cpu_map__has(event_cpus, cpu))
- continue;
+ /*
+ * Set option of each CPU we have. In per-cpu case, do the validation
+ * for CPUs to work with. In per-thread case, the CPU map has the "any"
+ * CPU value. Since the traced program can run on any CPU in this case,
+ * don't skip validation.
+ */
+ if (!perf_cpu_map__has_any_cpu(event_cpus)) {
+ struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
- if (!perf_cpu_map__has(online_cpus, cpu))
- continue;
+ intersect_cpus = perf_cpu_map__intersect(event_cpus, online_cpus);
+ perf_cpu_map__put(online_cpus);
+ } else {
+ intersect_cpus = perf_cpu_map__new_online_cpus();
+ }
- err = cs_etm_validate_context_id(itr, evsel, i);
+ perf_cpu_map__for_each_cpu_skip_any(cpu, idx, intersect_cpus) {
+ if (cs_etm_get_version(cs_etm_pmu, cpu) == CS_NOT_PRESENT) {
+ pr_err("%s: Not found on CPU %d. Check hardware and firmware support and that all Coresight drivers are loaded\n",
+ CORESIGHT_ETM_PMU_NAME, cpu.cpu);
+ return -EINVAL;
+ }
+ err = cs_etm_validate_context_id(cs_etm_pmu, evsel, cpu);
if (err)
- goto out;
- err = cs_etm_validate_timestamp(itr, evsel, i);
+ break;
+
+ err = cs_etm_validate_timestamp(cs_etm_pmu, evsel, cpu);
if (err)
- goto out;
+ break;
}
- err = 0;
-out:
- perf_cpu_map__put(online_cpus);
+ perf_cpu_map__put(intersect_cpus);
return err;
}
@@ -435,7 +440,7 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
* Also the case of per-cpu mmaps, need the contextID in order to be notified
* when a context switch happened.
*/
- if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
evsel__set_config_if_unset(cs_etm_pmu, cs_etm_evsel,
"timestamp", 1);
evsel__set_config_if_unset(cs_etm_pmu, cs_etm_evsel,
@@ -461,10 +466,10 @@ static int cs_etm_recording_options(struct auxtrace_record *itr,
evsel->core.attr.sample_period = 1;
/* In per-cpu case, always need the time of mmap events etc */
- if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus))
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus))
evsel__set_sample_bit(evsel, TIME);
- err = cs_etm_validate_config(itr, cs_etm_evsel);
+ err = cs_etm_validate_config(cs_etm_pmu, cs_etm_evsel);
out:
return err;
}
@@ -530,48 +535,35 @@ static u64 cs_etmv4_get_config(struct auxtrace_record *itr)
}
static size_t
-cs_etm_info_priv_size(struct auxtrace_record *itr __maybe_unused,
- struct evlist *evlist __maybe_unused)
+cs_etm_info_priv_size(struct auxtrace_record *itr,
+ struct evlist *evlist)
{
- int i;
+ int idx;
int etmv3 = 0, etmv4 = 0, ete = 0;
struct perf_cpu_map *event_cpus = evlist->core.user_requested_cpus;
- struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
-
- /* cpu map is not empty, we have specific CPUs to work with */
- if (!perf_cpu_map__has_any_cpu_or_is_empty(event_cpus)) {
- for (i = 0; i < cpu__max_cpu().cpu; i++) {
- struct perf_cpu cpu = { .cpu = i, };
+ struct perf_cpu_map *intersect_cpus;
+ struct perf_cpu cpu;
+ struct perf_pmu *cs_etm_pmu = cs_etm_get_pmu(itr);
- if (!perf_cpu_map__has(event_cpus, cpu) ||
- !perf_cpu_map__has(online_cpus, cpu))
- continue;
+ if (!perf_cpu_map__has_any_cpu(event_cpus)) {
+ /* cpu map is not "any" CPU, we have specific CPUs to work with */
+ struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
- if (cs_etm_is_ete(itr, i))
- ete++;
- else if (cs_etm_is_etmv4(itr, i))
- etmv4++;
- else
- etmv3++;
- }
+ intersect_cpus = perf_cpu_map__intersect(event_cpus, online_cpus);
+ perf_cpu_map__put(online_cpus);
} else {
- /* get configuration for all CPUs in the system */
- for (i = 0; i < cpu__max_cpu().cpu; i++) {
- struct perf_cpu cpu = { .cpu = i, };
-
- if (!perf_cpu_map__has(online_cpus, cpu))
- continue;
-
- if (cs_etm_is_ete(itr, i))
- ete++;
- else if (cs_etm_is_etmv4(itr, i))
- etmv4++;
- else
- etmv3++;
- }
+ /* Event can be "any" CPU so count all online CPUs. */
+ intersect_cpus = perf_cpu_map__new_online_cpus();
}
+ /* Count number of each type of ETM. Don't count if that CPU has CS_NOT_PRESENT. */
+ perf_cpu_map__for_each_cpu_skip_any(cpu, idx, intersect_cpus) {
+ enum cs_etm_version v = cs_etm_get_version(cs_etm_pmu, cpu);
- perf_cpu_map__put(online_cpus);
+ ete += v == CS_ETE;
+ etmv4 += v == CS_ETMV4;
+ etmv3 += v == CS_ETMV3;
+ }
+ perf_cpu_map__put(intersect_cpus);
return (CS_ETM_HEADER_SIZE +
(ete * CS_ETE_PRIV_SIZE) +
@@ -579,66 +571,49 @@ cs_etm_info_priv_size(struct auxtrace_record *itr __maybe_unused,
(etmv3 * CS_ETMV3_PRIV_SIZE));
}
-static bool cs_etm_is_etmv4(struct auxtrace_record *itr, int cpu)
-{
- bool ret = false;
- char path[PATH_MAX];
- int scan;
- unsigned int val;
- struct cs_etm_recording *ptr =
- container_of(itr, struct cs_etm_recording, itr);
- struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
-
- /* Take any of the RO files for ETMv4 and see if it present */
- snprintf(path, PATH_MAX, "cpu%d/%s",
- cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR0]);
- scan = perf_pmu__scan_file(cs_etm_pmu, path, "%x", &val);
-
- /* The file was read successfully, we have a winner */
- if (scan == 1)
- ret = true;
-
- return ret;
-}
-
-static int cs_etm_get_ro(struct perf_pmu *pmu, int cpu, const char *path)
+static int cs_etm_get_ro(struct perf_pmu *pmu, struct perf_cpu cpu, const char *path, __u64 *val)
{
char pmu_path[PATH_MAX];
int scan;
- unsigned int val = 0;
/* Get RO metadata from sysfs */
- snprintf(pmu_path, PATH_MAX, "cpu%d/%s", cpu, path);
+ snprintf(pmu_path, PATH_MAX, "cpu%d/%s", cpu.cpu, path);
- scan = perf_pmu__scan_file(pmu, pmu_path, "%x", &val);
- if (scan != 1)
+ scan = perf_pmu__scan_file(pmu, pmu_path, "%llx", val);
+ if (scan != 1) {
pr_err("%s: error reading: %s\n", __func__, pmu_path);
+ return -EINVAL;
+ }
- return val;
+ return 0;
}
-static int cs_etm_get_ro_signed(struct perf_pmu *pmu, int cpu, const char *path)
+static int cs_etm_get_ro_signed(struct perf_pmu *pmu, struct perf_cpu cpu, const char *path,
+ __u64 *out_val)
{
char pmu_path[PATH_MAX];
int scan;
int val = 0;
/* Get RO metadata from sysfs */
- snprintf(pmu_path, PATH_MAX, "cpu%d/%s", cpu, path);
+ snprintf(pmu_path, PATH_MAX, "cpu%d/%s", cpu.cpu, path);
scan = perf_pmu__scan_file(pmu, pmu_path, "%d", &val);
- if (scan != 1)
+ if (scan != 1) {
pr_err("%s: error reading: %s\n", __func__, pmu_path);
+ return -EINVAL;
+ }
- return val;
+ *out_val = (__u64) val;
+ return 0;
}
-static bool cs_etm_pmu_path_exists(struct perf_pmu *pmu, int cpu, const char *path)
+static bool cs_etm_pmu_path_exists(struct perf_pmu *pmu, struct perf_cpu cpu, const char *path)
{
char pmu_path[PATH_MAX];
/* Get RO metadata from sysfs */
- snprintf(pmu_path, PATH_MAX, "cpu%d/%s", cpu, path);
+ snprintf(pmu_path, PATH_MAX, "cpu%d/%s", cpu.cpu, path);
return perf_pmu__file_exists(pmu, pmu_path);
}
@@ -651,16 +626,14 @@ static bool cs_etm_pmu_path_exists(struct perf_pmu *pmu, int cpu, const char *pa
#define TRCDEVARCH_ARCHVER_MASK GENMASK(15, 12)
#define TRCDEVARCH_ARCHVER(x) (((x) & TRCDEVARCH_ARCHVER_MASK) >> TRCDEVARCH_ARCHVER_SHIFT)
-static bool cs_etm_is_ete(struct auxtrace_record *itr, int cpu)
+static bool cs_etm_is_ete(struct perf_pmu *cs_etm_pmu, struct perf_cpu cpu)
{
- struct cs_etm_recording *ptr = container_of(itr, struct cs_etm_recording, itr);
- struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
- int trcdevarch;
+ __u64 trcdevarch;
if (!cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCDEVARCH]))
return false;
- trcdevarch = cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCDEVARCH]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCDEVARCH], &trcdevarch);
/*
* ETE if ARCHVER is 5 (ARCHVER is 4 for ETM) and ARCHPART is 0xA13.
* See ETM_DEVARCH_ETE_ARCH in coresight-etm4x.h
@@ -668,7 +641,13 @@ static bool cs_etm_is_ete(struct auxtrace_record *itr, int cpu)
return TRCDEVARCH_ARCHVER(trcdevarch) == 5 && TRCDEVARCH_ARCHPART(trcdevarch) == 0xA13;
}
-static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr, int cpu)
+static __u64 cs_etm_get_legacy_trace_id(struct perf_cpu cpu)
+{
+ /* Wrap at 48 so that invalid trace IDs aren't saved into files. */
+ return CORESIGHT_LEGACY_CPU_TRACE_ID(cpu.cpu % 48);
+}
+
+static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr, struct perf_cpu cpu)
{
struct cs_etm_recording *ptr = container_of(itr, struct cs_etm_recording, itr);
struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
@@ -676,33 +655,31 @@ static void cs_etm_save_etmv4_header(__u64 data[], struct auxtrace_record *itr,
/* Get trace configuration register */
data[CS_ETMV4_TRCCONFIGR] = cs_etmv4_get_config(itr);
/* traceID set to legacy version, in case new perf running on older system */
- data[CS_ETMV4_TRCTRACEIDR] =
- CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
+ data[CS_ETMV4_TRCTRACEIDR] = cs_etm_get_legacy_trace_id(cpu);
/* Get read-only information from sysFS */
- data[CS_ETMV4_TRCIDR0] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_etmv4_ro[CS_ETMV4_TRCIDR0]);
- data[CS_ETMV4_TRCIDR1] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_etmv4_ro[CS_ETMV4_TRCIDR1]);
- data[CS_ETMV4_TRCIDR2] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_etmv4_ro[CS_ETMV4_TRCIDR2]);
- data[CS_ETMV4_TRCIDR8] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_etmv4_ro[CS_ETMV4_TRCIDR8]);
- data[CS_ETMV4_TRCAUTHSTATUS] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_etmv4_ro[CS_ETMV4_TRCAUTHSTATUS]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR0],
+ &data[CS_ETMV4_TRCIDR0]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR1],
+ &data[CS_ETMV4_TRCIDR1]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR2],
+ &data[CS_ETMV4_TRCIDR2]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCIDR8],
+ &data[CS_ETMV4_TRCIDR8]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TRCAUTHSTATUS],
+ &data[CS_ETMV4_TRCAUTHSTATUS]);
/* Kernels older than 5.19 may not expose ts_source */
- if (cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TS_SOURCE]))
- data[CS_ETMV4_TS_SOURCE] = (__u64) cs_etm_get_ro_signed(cs_etm_pmu, cpu,
- metadata_etmv4_ro[CS_ETMV4_TS_SOURCE]);
- else {
+ if (!cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TS_SOURCE]) ||
+ cs_etm_get_ro_signed(cs_etm_pmu, cpu, metadata_etmv4_ro[CS_ETMV4_TS_SOURCE],
+ &data[CS_ETMV4_TS_SOURCE])) {
pr_debug3("[%03d] pmu file 'ts_source' not found. Fallback to safe value (-1)\n",
- cpu);
+ cpu.cpu);
data[CS_ETMV4_TS_SOURCE] = (__u64) -1;
}
}
-static void cs_etm_save_ete_header(__u64 data[], struct auxtrace_record *itr, int cpu)
+static void cs_etm_save_ete_header(__u64 data[], struct auxtrace_record *itr, struct perf_cpu cpu)
{
struct cs_etm_recording *ptr = container_of(itr, struct cs_etm_recording, itr);
struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
@@ -710,83 +687,84 @@ static void cs_etm_save_ete_header(__u64 data[], struct auxtrace_record *itr, in
/* Get trace configuration register */
data[CS_ETE_TRCCONFIGR] = cs_etmv4_get_config(itr);
/* traceID set to legacy version, in case new perf running on older system */
- data[CS_ETE_TRCTRACEIDR] =
- CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
+ data[CS_ETE_TRCTRACEIDR] = cs_etm_get_legacy_trace_id(cpu);
/* Get read-only information from sysFS */
- data[CS_ETE_TRCIDR0] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_ete_ro[CS_ETE_TRCIDR0]);
- data[CS_ETE_TRCIDR1] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_ete_ro[CS_ETE_TRCIDR1]);
- data[CS_ETE_TRCIDR2] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_ete_ro[CS_ETE_TRCIDR2]);
- data[CS_ETE_TRCIDR8] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_ete_ro[CS_ETE_TRCIDR8]);
- data[CS_ETE_TRCAUTHSTATUS] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_ete_ro[CS_ETE_TRCAUTHSTATUS]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCIDR0], &data[CS_ETE_TRCIDR0]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCIDR1], &data[CS_ETE_TRCIDR1]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCIDR2], &data[CS_ETE_TRCIDR2]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCIDR8], &data[CS_ETE_TRCIDR8]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCAUTHSTATUS],
+ &data[CS_ETE_TRCAUTHSTATUS]);
/* ETE uses the same registers as ETMv4 plus TRCDEVARCH */
- data[CS_ETE_TRCDEVARCH] = cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_ete_ro[CS_ETE_TRCDEVARCH]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TRCDEVARCH],
+ &data[CS_ETE_TRCDEVARCH]);
/* Kernels older than 5.19 may not expose ts_source */
- if (cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TS_SOURCE]))
- data[CS_ETE_TS_SOURCE] = (__u64) cs_etm_get_ro_signed(cs_etm_pmu, cpu,
- metadata_ete_ro[CS_ETE_TS_SOURCE]);
- else {
+ if (!cs_etm_pmu_path_exists(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TS_SOURCE]) ||
+ cs_etm_get_ro_signed(cs_etm_pmu, cpu, metadata_ete_ro[CS_ETE_TS_SOURCE],
+ &data[CS_ETE_TS_SOURCE])) {
pr_debug3("[%03d] pmu file 'ts_source' not found. Fallback to safe value (-1)\n",
- cpu);
+ cpu.cpu);
data[CS_ETE_TS_SOURCE] = (__u64) -1;
}
}
-static void cs_etm_get_metadata(int cpu, u32 *offset,
+static void cs_etm_get_metadata(struct perf_cpu cpu, u32 *offset,
struct auxtrace_record *itr,
struct perf_record_auxtrace_info *info)
{
u32 increment, nr_trc_params;
u64 magic;
- struct cs_etm_recording *ptr =
- container_of(itr, struct cs_etm_recording, itr);
- struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
+ struct perf_pmu *cs_etm_pmu = cs_etm_get_pmu(itr);
/* first see what kind of tracer this cpu is affined to */
- if (cs_etm_is_ete(itr, cpu)) {
+ switch (cs_etm_get_version(cs_etm_pmu, cpu)) {
+ case CS_ETE:
magic = __perf_cs_ete_magic;
cs_etm_save_ete_header(&info->priv[*offset], itr, cpu);
/* How much space was used */
increment = CS_ETE_PRIV_MAX;
nr_trc_params = CS_ETE_PRIV_MAX - CS_ETM_COMMON_BLK_MAX_V1;
- } else if (cs_etm_is_etmv4(itr, cpu)) {
+ break;
+
+ case CS_ETMV4:
magic = __perf_cs_etmv4_magic;
cs_etm_save_etmv4_header(&info->priv[*offset], itr, cpu);
/* How much space was used */
increment = CS_ETMV4_PRIV_MAX;
nr_trc_params = CS_ETMV4_PRIV_MAX - CS_ETMV4_TRCCONFIGR;
- } else {
+ break;
+
+ case CS_ETMV3:
magic = __perf_cs_etmv3_magic;
/* Get configuration register */
info->priv[*offset + CS_ETM_ETMCR] = cs_etm_get_config(itr);
/* traceID set to legacy value in case new perf running on old system */
- info->priv[*offset + CS_ETM_ETMTRACEIDR] =
- CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) | CORESIGHT_TRACE_ID_UNUSED_FLAG;
+ info->priv[*offset + CS_ETM_ETMTRACEIDR] = cs_etm_get_legacy_trace_id(cpu);
/* Get read-only information from sysFS */
- info->priv[*offset + CS_ETM_ETMCCER] =
- cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_etmv3_ro[CS_ETM_ETMCCER]);
- info->priv[*offset + CS_ETM_ETMIDR] =
- cs_etm_get_ro(cs_etm_pmu, cpu,
- metadata_etmv3_ro[CS_ETM_ETMIDR]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv3_ro[CS_ETM_ETMCCER],
+ &info->priv[*offset + CS_ETM_ETMCCER]);
+ cs_etm_get_ro(cs_etm_pmu, cpu, metadata_etmv3_ro[CS_ETM_ETMIDR],
+ &info->priv[*offset + CS_ETM_ETMIDR]);
/* How much space was used */
increment = CS_ETM_PRIV_MAX;
nr_trc_params = CS_ETM_PRIV_MAX - CS_ETM_ETMCR;
+ break;
+
+ default:
+ case CS_NOT_PRESENT:
+ /* Unreachable, CPUs already validated in cs_etm_validate_config() */
+ assert(true);
+ return;
}
/* Build generic header portion */
info->priv[*offset + CS_ETM_MAGIC] = magic;
- info->priv[*offset + CS_ETM_CPU] = cpu;
+ info->priv[*offset + CS_ETM_CPU] = cpu.cpu;
info->priv[*offset + CS_ETM_NR_TRC_PARAMS] = nr_trc_params;
/* Where the next CPU entry should start from */
*offset += increment;
@@ -806,6 +784,7 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
struct cs_etm_recording *ptr =
container_of(itr, struct cs_etm_recording, itr);
struct perf_pmu *cs_etm_pmu = ptr->cs_etm_pmu;
+ struct perf_cpu cpu;
if (priv_size != cs_etm_info_priv_size(itr, session->evlist))
return -EINVAL;
@@ -813,16 +792,13 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
if (!session->evlist->core.nr_mmaps)
return -EINVAL;
- /* If the cpu_map is empty all online CPUs are involved */
- if (perf_cpu_map__has_any_cpu_or_is_empty(event_cpus)) {
+ /* If the cpu_map has the "any" CPU all online CPUs are involved */
+ if (perf_cpu_map__has_any_cpu(event_cpus)) {
cpu_map = online_cpus;
} else {
/* Make sure all specified CPUs are online */
- for (i = 0; i < perf_cpu_map__nr(event_cpus); i++) {
- struct perf_cpu cpu = { .cpu = i, };
-
- if (perf_cpu_map__has(event_cpus, cpu) &&
- !perf_cpu_map__has(online_cpus, cpu))
+ perf_cpu_map__for_each_cpu(cpu, i, event_cpus) {
+ if (!perf_cpu_map__has(online_cpus, cpu))
return -EINVAL;
}
@@ -842,11 +818,9 @@ static int cs_etm_info_fill(struct auxtrace_record *itr,
offset = CS_ETM_SNAPSHOT + 1;
- for (i = 0; i < cpu__max_cpu().cpu && offset < priv_size; i++) {
- struct perf_cpu cpu = { .cpu = i, };
-
- if (perf_cpu_map__has(cpu_map, cpu))
- cs_etm_get_metadata(i, &offset, itr, info);
+ perf_cpu_map__for_each_cpu(cpu, i, cpu_map) {
+ assert(offset < priv_size);
+ cs_etm_get_metadata(cpu, &offset, itr, info);
}
perf_cpu_map__put(online_cpus);
@@ -913,7 +887,6 @@ struct auxtrace_record *cs_etm_record_init(int *err)
}
ptr->cs_etm_pmu = cs_etm_pmu;
- ptr->itr.pmu = cs_etm_pmu;
ptr->itr.parse_snapshot_options = cs_etm_parse_snapshot_options;
ptr->itr.recording_options = cs_etm_recording_options;
ptr->itr.info_priv_size = cs_etm_info_priv_size;
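Two details of the cs-etm.c changes above are easy to show in isolation: the ETE check on TRCDEVARCH and the legacy trace-ID wrap at 48. The self-contained sketch below assumes ARCHPART occupies bits [11:0] (only the ARCHVER macros appear in the diff) and uses an invented TRCDEVARCH value; the real helper also maps the wrapped CPU number through CORESIGHT_LEGACY_CPU_TRACE_ID().

/*
 * Sketch of the version check and trace-ID wrap, standalone.
 * ARCHVER bit positions follow the macros in the diff; ARCHPART as
 * bits [11:0] is an assumption made for this example.
 */
#include <stdint.h>
#include <stdio.h>

#define ARCHVER(x)  (((x) >> 12) & 0xf)
#define ARCHPART(x) ((x) & 0xfff)

static int is_ete(uint64_t trcdevarch)
{
	/* ETE if ARCHVER is 5 (4 would be ETMv4) and ARCHPART is 0xA13. */
	return ARCHVER(trcdevarch) == 5 && ARCHPART(trcdevarch) == 0xA13;
}

/* Wrap at 48 so an invalid trace ID is never written into the file. */
static unsigned int legacy_trace_id(int cpu)
{
	return cpu % 48; /* the diff's helper then applies CORESIGHT_LEGACY_CPU_TRACE_ID() */
}

int main(void)
{
	printf("0x5a13 -> %s\n", is_ete(0x5a13) ? "ETE" : "not ETE");
	printf("cpu 50 -> legacy id %u\n", legacy_trace_id(50));
	return 0;
}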
diff --git a/tools/perf/arch/arm/util/dwarf-regs.c b/tools/perf/arch/arm/util/dwarf-regs.c
deleted file mode 100644
index fc5f71c91802..000000000000
--- a/tools/perf/arch/arm/util/dwarf-regs.c
+++ /dev/null
@@ -1,61 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (C) 2010 Will Deacon, ARM Ltd.
- */
-
-#include <stddef.h>
-#include <linux/stringify.h>
-#include <dwarf-regs.h>
-
-struct pt_regs_dwarfnum {
- const char *name;
- unsigned int dwarfnum;
-};
-
-#define REG_DWARFNUM_NAME(r, num) {.name = r, .dwarfnum = num}
-#define GPR_DWARFNUM_NAME(num) \
- {.name = __stringify(%r##num), .dwarfnum = num}
-#define REG_DWARFNUM_END {.name = NULL, .dwarfnum = 0}
-
-/*
- * Reference:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ihi0040a/IHI0040A_aadwarf.pdf
- */
-static const struct pt_regs_dwarfnum regdwarfnum_table[] = {
- GPR_DWARFNUM_NAME(0),
- GPR_DWARFNUM_NAME(1),
- GPR_DWARFNUM_NAME(2),
- GPR_DWARFNUM_NAME(3),
- GPR_DWARFNUM_NAME(4),
- GPR_DWARFNUM_NAME(5),
- GPR_DWARFNUM_NAME(6),
- GPR_DWARFNUM_NAME(7),
- GPR_DWARFNUM_NAME(8),
- GPR_DWARFNUM_NAME(9),
- GPR_DWARFNUM_NAME(10),
- REG_DWARFNUM_NAME("%fp", 11),
- REG_DWARFNUM_NAME("%ip", 12),
- REG_DWARFNUM_NAME("%sp", 13),
- REG_DWARFNUM_NAME("%lr", 14),
- REG_DWARFNUM_NAME("%pc", 15),
- REG_DWARFNUM_END,
-};
-
-/**
- * get_arch_regstr() - lookup register name from it's DWARF register number
- * @n: the DWARF register number
- *
- * get_arch_regstr() returns the name of the register in struct
- * regdwarfnum_table from it's DWARF register number. If the register is not
- * found in the table, this returns NULL;
- */
-const char *get_arch_regstr(unsigned int n)
-{
- const struct pt_regs_dwarfnum *roff;
- for (roff = regdwarfnum_table; roff->name != NULL; roff++)
- if (roff->dwarfnum == n)
- return roff->name;
- return NULL;
-}
diff --git a/tools/perf/arch/arm/util/perf_regs.c b/tools/perf/arch/arm/util/perf_regs.c
index 2c56e8b56ddf..f94a0210c7b7 100644
--- a/tools/perf/arch/arm/util/perf_regs.c
+++ b/tools/perf/arch/arm/util/perf_regs.c
@@ -2,7 +2,7 @@
#include "perf_regs.h"
#include "../../../util/perf_regs.h"
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
@@ -15,3 +15,8 @@ uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
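The new arch__sample_reg_masks() accessor keeps the register table file-local while still letting generic code walk it. Below is a hypothetical, self-contained caller; the sample_reg fields and the NULL-name terminator behind SMPL_REG_END are assumptions mirroring perf's usual definitions, and the example entry is invented (the arm table above holds only the terminator).

#include <stdio.h>

/* Stand-ins for perf's real definitions (names assumed for illustration). */
struct sample_reg {
	const char *name;
	unsigned long long mask;
};
#define SMPL_REG_END { .name = NULL }

static const struct sample_reg sample_reg_masks[] = {
	{ "r0", 1ULL << 0 }, /* invented entry; the arm table above is empty */
	SMPL_REG_END,
};

static const struct sample_reg *arch__sample_reg_masks(void)
{
	return sample_reg_masks;
}

int main(void)
{
	/* Walk the table until the NULL-name terminator. */
	for (const struct sample_reg *r = arch__sample_reg_masks(); r->name; r++)
		printf("%s: %#llx\n", r->name, r->mask);
	return 0;
}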
diff --git a/tools/perf/arch/arm/util/pmu.c b/tools/perf/arch/arm/util/pmu.c
index 7f3af3b97f3b..f70075c89aa0 100644
--- a/tools/perf/arch/arm/util/pmu.c
+++ b/tools/perf/arch/arm/util/pmu.c
@@ -11,25 +11,38 @@
#include "arm-spe.h"
#include "hisi-ptt.h"
+#include "../../../util/cpumap.h"
#include "../../../util/pmu.h"
#include "../../../util/cs-etm.h"
+#include "../../arm64/util/mem-events.h"
-void perf_pmu__arch_init(struct perf_pmu *pmu __maybe_unused)
+void perf_pmu__arch_init(struct perf_pmu *pmu)
{
+ struct perf_cpu_map *intersect, *online = cpu_map__online();
+
#ifdef HAVE_AUXTRACE_SUPPORT
if (!strcmp(pmu->name, CORESIGHT_ETM_PMU_NAME)) {
/* add ETM default config here */
+ pmu->auxtrace = true;
pmu->selectable = true;
pmu->perf_event_attr_init_default = cs_etm_get_default_config;
#if defined(__aarch64__)
} else if (strstarts(pmu->name, ARM_SPE_PMU_NAME)) {
+ pmu->auxtrace = true;
pmu->selectable = true;
pmu->is_uncore = false;
pmu->perf_event_attr_init_default = arm_spe_pmu_default_config;
+ if (strstarts(pmu->name, "arm_spe_"))
+ pmu->mem_events = perf_mem_events_arm;
} else if (strstarts(pmu->name, HISI_PTT_PMU_NAME)) {
+ pmu->auxtrace = true;
pmu->selectable = true;
#endif
}
-
#endif
+ /* Workaround some ARM PMUs failing to correctly set CPU maps for online processors. */
+ intersect = perf_cpu_map__intersect(online, pmu->cpus);
+ perf_cpu_map__put(online);
+ perf_cpu_map__put(pmu->cpus);
+ pmu->cpus = intersect;
}
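The pmu.c hunk above narrows pmu->cpus to the online set to work around PMUs that advertise offline CPUs. Conceptually that intersection is a merge of two sorted ID lists; the plain-C sketch below shows the idea only and is not perf's perf_cpu_map__intersect() implementation.

#include <stdio.h>

/* Keep only the CPU ids present in both sorted lists. */
static int intersect(const int *a, int na, const int *b, int nb, int *out)
{
	int i = 0, j = 0, n = 0;

	while (i < na && j < nb) {
		if (a[i] < b[j])
			i++;
		else if (a[i] > b[j])
			j++;
		else {
			out[n++] = a[i];
			i++, j++;
		}
	}
	return n;
}

int main(void)
{
	int online[]   = { 0, 1, 2, 3 };
	int pmu_cpus[] = { 2, 3, 4, 5 }; /* PMU also advertising offline CPUs */
	int out[4];
	int n = intersect(online, 4, pmu_cpus, 4, out);

	for (int i = 0; i < n; i++)
		printf("%d ", out[i]); /* prints: 2 3 */
	printf("\n");
	return 0;
}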
diff --git a/tools/perf/arch/arm/util/unwind-libdw.c b/tools/perf/arch/arm/util/unwind-libdw.c
index 4e02cef461e3..fbb643f224ec 100644
--- a/tools/perf/arch/arm/util/unwind-libdw.c
+++ b/tools/perf/arch/arm/util/unwind-libdw.c
@@ -8,7 +8,7 @@
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[PERF_REG_ARM_MAX];
#define REG(r) ({ \
diff --git a/tools/perf/arch/arm64/Build b/tools/perf/arch/arm64/Build
index a7dd46a5b678..12ebc65ea7a3 100644
--- a/tools/perf/arch/arm64/Build
+++ b/tools/perf/arch/arm64/Build
@@ -1,2 +1,2 @@
-perf-y += util/
-perf-y += tests/
+perf-util-y += util/
+perf-test-y += tests/
diff --git a/tools/perf/arch/arm64/Makefile b/tools/perf/arch/arm64/Makefile
index fab3095fb5d0..087e099fb453 100644
--- a/tools/perf/arch/arm64/Makefile
+++ b/tools/perf/arch/arm64/Makefile
@@ -1,29 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
PERF_HAVE_JITDUMP := 1
-PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
HAVE_KVM_STAT_SUPPORT := 1
-
-#
-# Syscall table generation for perf
-#
-
-out := $(OUTPUT)arch/arm64/include/generated/asm
-header := $(out)/syscalls.c
-incpath := $(srctree)/tools
-sysdef := $(srctree)/tools/arch/arm64/include/uapi/asm/unistd.h
-sysprf := $(srctree)/tools/perf/arch/arm64/entry/syscalls/
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
- $(Q)$(SHELL) '$(systbl)' '$(CC)' '$(HOSTCC)' $(incpath) $(sysdef) > $@
-
-clean::
- $(call QUIET_CLEAN, arm64) $(RM) $(header)
-
-archheaders: $(header)
diff --git a/tools/perf/arch/arm64/annotate/instructions.c b/tools/perf/arch/arm64/annotate/instructions.c
index 4af0c3a0f86e..d465d093e7eb 100644
--- a/tools/perf/arch/arm64/annotate/instructions.c
+++ b/tools/perf/arch/arm64/annotate/instructions.c
@@ -11,7 +11,8 @@ struct arm64_annotate {
static int arm64_mov__parse(struct arch *arch __maybe_unused,
struct ins_operands *ops,
- struct map_symbol *ms __maybe_unused)
+ struct map_symbol *ms __maybe_unused,
+ struct disasm_line *dl __maybe_unused)
{
char *s = strchr(ops->raw, ','), *target, *endptr;
@@ -112,6 +113,8 @@ static int arm64__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
arch->associate_instruction_ops = arm64__associate_instruction_ops;
arch->objdump.comment_char = '/';
arch->objdump.skip_functions_char = '+';
+ arch->e_machine = EM_AARCH64;
+ arch->e_flags = 0;
return 0;
out_free_call:
diff --git a/tools/perf/arch/arm64/entry/syscalls/mksyscalltbl b/tools/perf/arch/arm64/entry/syscalls/mksyscalltbl
deleted file mode 100755
index 27d747c92d44..000000000000
--- a/tools/perf/arch/arm64/entry/syscalls/mksyscalltbl
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# powerpc script.
-#
-# Copyright IBM Corp. 2017
-# Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-# Changed by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
-# Changed by: Kim Phillips <kim.phillips@arm.com>
-
-gcc=$1
-hostcc=$2
-incpath=$3
-input=$4
-
-if ! test -r $input; then
- echo "Could not read input file" >&2
- exit 1
-fi
-
-create_sc_table()
-{
- local sc nr max_nr
-
- while read sc nr; do
- printf "%s\n" " [$nr] = \"$sc\","
- max_nr=$nr
- done
-
- echo "#define SYSCALLTBL_ARM64_MAX_ID $max_nr"
-}
-
-create_table()
-{
- echo "#include \"$input\""
- echo "static const char *const syscalltbl_arm64[] = {"
- create_sc_table
- echo "};"
-}
-
-$gcc -E -dM -x c -I $incpath/include/uapi $input \
- |awk '$2 ~ "__NR" && $3 !~ "__NR3264_" {
- sub("^#define __NR(3264)?_", "");
- print | "sort -k2 -n"}' \
- |create_table
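For reference, the deleted mksyscalltbl script generated a C array indexed by syscall number followed by a max-ID define, per its printf/echo lines above. An approximate, hand-written excerpt of that output (the entries shown are illustrative):

/* Approximate shape of the file the deleted generator produced. */
static const char *const syscalltbl_arm64[] = {
	[63]  = "read",
	[64]  = "write",
	[93]  = "exit",
	[221] = "execve",
};
#define SYSCALLTBL_ARM64_MAX_ID 221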
diff --git a/tools/perf/arch/arm64/entry/syscalls/syscall_32.tbl b/tools/perf/arch/arm64/entry/syscalls/syscall_32.tbl
new file mode 100644
index 000000000000..9a37930d4e26
--- /dev/null
+++ b/tools/perf/arch/arm64/entry/syscalls/syscall_32.tbl
@@ -0,0 +1,476 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# AArch32 (compat) system call definitions.
+#
+# Copyright (C) 2001-2005 Russell King
+# Copyright (C) 2012 ARM Ltd.
+#
+# This file corresponds to arch/arm/tools/syscall.tbl
+# for the native EABI syscalls and should be kept in sync
+# Instead of the OABI syscalls, it contains pointers to
+# the compat entry points where they differ from the native
+# syscalls.
+#
+0 common restart_syscall sys_restart_syscall
+1 common exit sys_exit
+2 common fork sys_fork
+3 common read sys_read
+4 common write sys_write
+5 common open sys_open compat_sys_open
+6 common close sys_close
+# 7 was sys_waitpid
+8 common creat sys_creat
+9 common link sys_link
+10 common unlink sys_unlink
+11 common execve sys_execve compat_sys_execve
+12 common chdir sys_chdir
+# 13 was sys_time
+14 common mknod sys_mknod
+15 common chmod sys_chmod
+16 common lchown sys_lchown16
+# 17 was sys_break
+# 18 was sys_stat
+19 common lseek sys_lseek compat_sys_lseek
+20 common getpid sys_getpid
+21 common mount sys_mount
+# 22 was sys_umount
+23 common setuid sys_setuid16
+24 common getuid sys_getuid16
+# 25 was sys_stime
+26 common ptrace sys_ptrace compat_sys_ptrace
+# 27 was sys_alarm
+# 28 was sys_fstat
+29 common pause sys_pause
+# 30 was sys_utime
+# 31 was sys_stty
+# 32 was sys_gtty
+33 common access sys_access
+34 common nice sys_nice
+# 35 was sys_ftime
+36 common sync sys_sync
+37 common kill sys_kill
+38 common rename sys_rename
+39 common mkdir sys_mkdir
+40 common rmdir sys_rmdir
+41 common dup sys_dup
+42 common pipe sys_pipe
+43 common times sys_times compat_sys_times
+# 44 was sys_prof
+45 common brk sys_brk
+46 common setgid sys_setgid16
+47 common getgid sys_getgid16
+# 48 was sys_signal
+49 common geteuid sys_geteuid16
+50 common getegid sys_getegid16
+51 common acct sys_acct
+52 common umount2 sys_umount
+# 53 was sys_lock
+54 common ioctl sys_ioctl compat_sys_ioctl
+55 common fcntl sys_fcntl compat_sys_fcntl
+# 56 was sys_mpx
+57 common setpgid sys_setpgid
+# 58 was sys_ulimit
+# 59 was sys_olduname
+60 common umask sys_umask
+61 common chroot sys_chroot
+62 common ustat sys_ustat compat_sys_ustat
+63 common dup2 sys_dup2
+64 common getppid sys_getppid
+65 common getpgrp sys_getpgrp
+66 common setsid sys_setsid
+67 common sigaction sys_sigaction compat_sys_sigaction
+# 68 was sys_sgetmask
+# 69 was sys_ssetmask
+70 common setreuid sys_setreuid16
+71 common setregid sys_setregid16
+72 common sigsuspend sys_sigsuspend
+73 common sigpending sys_sigpending compat_sys_sigpending
+74 common sethostname sys_sethostname
+75 common setrlimit sys_setrlimit compat_sys_setrlimit
+# 76 was compat_sys_getrlimit
+77 common getrusage sys_getrusage compat_sys_getrusage
+78 common gettimeofday sys_gettimeofday compat_sys_gettimeofday
+79 common settimeofday sys_settimeofday compat_sys_settimeofday
+80 common getgroups sys_getgroups16
+81 common setgroups sys_setgroups16
+# 82 was compat_sys_select
+83 common symlink sys_symlink
+# 84 was sys_lstat
+85 common readlink sys_readlink
+86 common uselib sys_uselib
+87 common swapon sys_swapon
+88 common reboot sys_reboot
+# 89 was sys_readdir
+# 90 was sys_mmap
+91 common munmap sys_munmap
+92 common truncate sys_truncate compat_sys_truncate
+93 common ftruncate sys_ftruncate compat_sys_ftruncate
+94 common fchmod sys_fchmod
+95 common fchown sys_fchown16
+96 common getpriority sys_getpriority
+97 common setpriority sys_setpriority
+# 98 was sys_profil
+99 common statfs sys_statfs compat_sys_statfs
+100 common fstatfs sys_fstatfs compat_sys_fstatfs
+# 101 was sys_ioperm
+# 102 was sys_socketcall
+103 common syslog sys_syslog
+104 common setitimer sys_setitimer compat_sys_setitimer
+105 common getitimer sys_getitimer compat_sys_getitimer
+106 common stat sys_newstat compat_sys_newstat
+107 common lstat sys_newlstat compat_sys_newlstat
+108 common fstat sys_newfstat compat_sys_newfstat
+# 109 was sys_uname
+# 110 was sys_iopl
+111 common vhangup sys_vhangup
+# 112 was sys_idle
+# 113 was sys_syscall
+114 common wait4 sys_wait4 compat_sys_wait4
+115 common swapoff sys_swapoff
+116 common sysinfo sys_sysinfo compat_sys_sysinfo
+# 117 was sys_ipc
+118 common fsync sys_fsync
+119 common sigreturn sys_sigreturn_wrapper compat_sys_sigreturn
+120 common clone sys_clone
+121 common setdomainname sys_setdomainname
+122 common uname sys_newuname
+# 123 was sys_modify_ldt
+124 common adjtimex sys_adjtimex_time32
+125 common mprotect sys_mprotect
+126 common sigprocmask sys_sigprocmask compat_sys_sigprocmask
+# 127 was sys_create_module
+128 common init_module sys_init_module
+129 common delete_module sys_delete_module
+# 130 was sys_get_kernel_syms
+131 common quotactl sys_quotactl
+132 common getpgid sys_getpgid
+133 common fchdir sys_fchdir
+134 common bdflush sys_ni_syscall
+135 common sysfs sys_sysfs
+136 common personality sys_personality
+# 137 was sys_afs_syscall
+138 common setfsuid sys_setfsuid16
+139 common setfsgid sys_setfsgid16
+140 common _llseek sys_llseek
+141 common getdents sys_getdents compat_sys_getdents
+142 common _newselect sys_select compat_sys_select
+143 common flock sys_flock
+144 common msync sys_msync
+145 common readv sys_readv
+146 common writev sys_writev
+147 common getsid sys_getsid
+148 common fdatasync sys_fdatasync
+149 common _sysctl sys_ni_syscall
+150 common mlock sys_mlock
+151 common munlock sys_munlock
+152 common mlockall sys_mlockall
+153 common munlockall sys_munlockall
+154 common sched_setparam sys_sched_setparam
+155 common sched_getparam sys_sched_getparam
+156 common sched_setscheduler sys_sched_setscheduler
+157 common sched_getscheduler sys_sched_getscheduler
+158 common sched_yield sys_sched_yield
+159 common sched_get_priority_max sys_sched_get_priority_max
+160 common sched_get_priority_min sys_sched_get_priority_min
+161 common sched_rr_get_interval sys_sched_rr_get_interval_time32
+162 common nanosleep sys_nanosleep_time32
+163 common mremap sys_mremap
+164 common setresuid sys_setresuid16
+165 common getresuid sys_getresuid16
+# 166 was sys_vm86
+# 167 was sys_query_module
+168 common poll sys_poll
+169 common nfsservctl sys_ni_syscall
+170 common setresgid sys_setresgid16
+171 common getresgid sys_getresgid16
+172 common prctl sys_prctl
+173 common rt_sigreturn sys_rt_sigreturn_wrapper compat_sys_rt_sigreturn
+174 common rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
+175 common rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
+176 common rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
+177 common rt_sigtimedwait sys_rt_sigtimedwait_time32 compat_sys_rt_sigtimedwait_time32
+178 common rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
+179 common rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
+180 common pread64 sys_pread64 compat_sys_aarch32_pread64
+181 common pwrite64 sys_pwrite64 compat_sys_aarch32_pwrite64
+182 common chown sys_chown16
+183 common getcwd sys_getcwd
+184 common capget sys_capget
+185 common capset sys_capset
+186 common sigaltstack sys_sigaltstack compat_sys_sigaltstack
+187 common sendfile sys_sendfile compat_sys_sendfile
+# 188 reserved
+# 189 reserved
+190 common vfork sys_vfork
+# SuS compliant getrlimit
+191 common ugetrlimit sys_getrlimit compat_sys_getrlimit
+192 common mmap2 sys_mmap2 compat_sys_aarch32_mmap2
+193 common truncate64 sys_truncate64 compat_sys_aarch32_truncate64
+194 common ftruncate64 sys_ftruncate64 compat_sys_aarch32_ftruncate64
+195 common stat64 sys_stat64
+196 common lstat64 sys_lstat64
+197 common fstat64 sys_fstat64
+198 common lchown32 sys_lchown
+199 common getuid32 sys_getuid
+200 common getgid32 sys_getgid
+201 common geteuid32 sys_geteuid
+202 common getegid32 sys_getegid
+203 common setreuid32 sys_setreuid
+204 common setregid32 sys_setregid
+205 common getgroups32 sys_getgroups
+206 common setgroups32 sys_setgroups
+207 common fchown32 sys_fchown
+208 common setresuid32 sys_setresuid
+209 common getresuid32 sys_getresuid
+210 common setresgid32 sys_setresgid
+211 common getresgid32 sys_getresgid
+212 common chown32 sys_chown
+213 common setuid32 sys_setuid
+214 common setgid32 sys_setgid
+215 common setfsuid32 sys_setfsuid
+216 common setfsgid32 sys_setfsgid
+217 common getdents64 sys_getdents64
+218 common pivot_root sys_pivot_root
+219 common mincore sys_mincore
+220 common madvise sys_madvise
+221 common fcntl64 sys_fcntl64 compat_sys_fcntl64
+# 222 for tux
+# 223 is unused
+224 common gettid sys_gettid
+225 common readahead sys_readahead compat_sys_aarch32_readahead
+226 common setxattr sys_setxattr
+227 common lsetxattr sys_lsetxattr
+228 common fsetxattr sys_fsetxattr
+229 common getxattr sys_getxattr
+230 common lgetxattr sys_lgetxattr
+231 common fgetxattr sys_fgetxattr
+232 common listxattr sys_listxattr
+233 common llistxattr sys_llistxattr
+234 common flistxattr sys_flistxattr
+235 common removexattr sys_removexattr
+236 common lremovexattr sys_lremovexattr
+237 common fremovexattr sys_fremovexattr
+238 common tkill sys_tkill
+239 common sendfile64 sys_sendfile64
+240 common futex sys_futex_time32
+241 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
+242 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
+243 common io_setup sys_io_setup compat_sys_io_setup
+244 common io_destroy sys_io_destroy
+245 common io_getevents sys_io_getevents_time32
+246 common io_submit sys_io_submit compat_sys_io_submit
+247 common io_cancel sys_io_cancel
+248 common exit_group sys_exit_group
+249 common lookup_dcookie sys_ni_syscall
+250 common epoll_create sys_epoll_create
+251 common epoll_ctl sys_epoll_ctl
+252 common epoll_wait sys_epoll_wait
+253 common remap_file_pages sys_remap_file_pages
+# 254 for set_thread_area
+# 255 for get_thread_area
+256 common set_tid_address sys_set_tid_address
+257 common timer_create sys_timer_create compat_sys_timer_create
+258 common timer_settime sys_timer_settime32
+259 common timer_gettime sys_timer_gettime32
+260 common timer_getoverrun sys_timer_getoverrun
+261 common timer_delete sys_timer_delete
+262 common clock_settime sys_clock_settime32
+263 common clock_gettime sys_clock_gettime32
+264 common clock_getres sys_clock_getres_time32
+265 common clock_nanosleep sys_clock_nanosleep_time32
+266 common statfs64 sys_statfs64_wrapper compat_sys_aarch32_statfs64
+267 common fstatfs64 sys_fstatfs64_wrapper compat_sys_aarch32_fstatfs64
+268 common tgkill sys_tgkill
+269 common utimes sys_utimes_time32
+270 common arm_fadvise64_64 sys_arm_fadvise64_64 compat_sys_aarch32_fadvise64_64
+271 common pciconfig_iobase sys_pciconfig_iobase
+272 common pciconfig_read sys_pciconfig_read
+273 common pciconfig_write sys_pciconfig_write
+274 common mq_open sys_mq_open compat_sys_mq_open
+275 common mq_unlink sys_mq_unlink
+276 common mq_timedsend sys_mq_timedsend_time32
+277 common mq_timedreceive sys_mq_timedreceive_time32
+278 common mq_notify sys_mq_notify compat_sys_mq_notify
+279 common mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
+280 common waitid sys_waitid compat_sys_waitid
+281 common socket sys_socket
+282 common bind sys_bind
+283 common connect sys_connect
+284 common listen sys_listen
+285 common accept sys_accept
+286 common getsockname sys_getsockname
+287 common getpeername sys_getpeername
+288 common socketpair sys_socketpair
+289 common send sys_send
+290 common sendto sys_sendto
+291 common recv sys_recv compat_sys_recv
+292 common recvfrom sys_recvfrom compat_sys_recvfrom
+293 common shutdown sys_shutdown
+294 common setsockopt sys_setsockopt
+295 common getsockopt sys_getsockopt
+296 common sendmsg sys_sendmsg compat_sys_sendmsg
+297 common recvmsg sys_recvmsg compat_sys_recvmsg
+298 common semop sys_semop
+299 common semget sys_semget
+300 common semctl sys_old_semctl compat_sys_old_semctl
+301 common msgsnd sys_msgsnd compat_sys_msgsnd
+302 common msgrcv sys_msgrcv compat_sys_msgrcv
+303 common msgget sys_msgget
+304 common msgctl sys_old_msgctl compat_sys_old_msgctl
+305 common shmat sys_shmat compat_sys_shmat
+306 common shmdt sys_shmdt
+307 common shmget sys_shmget
+308 common shmctl sys_old_shmctl compat_sys_old_shmctl
+309 common add_key sys_add_key
+310 common request_key sys_request_key
+311 common keyctl sys_keyctl compat_sys_keyctl
+312 common semtimedop sys_semtimedop_time32
+313 common vserver sys_ni_syscall
+314 common ioprio_set sys_ioprio_set
+315 common ioprio_get sys_ioprio_get
+316 common inotify_init sys_inotify_init
+317 common inotify_add_watch sys_inotify_add_watch
+318 common inotify_rm_watch sys_inotify_rm_watch
+319 common mbind sys_mbind
+320 common get_mempolicy sys_get_mempolicy
+321 common set_mempolicy sys_set_mempolicy
+322 common openat sys_openat compat_sys_openat
+323 common mkdirat sys_mkdirat
+324 common mknodat sys_mknodat
+325 common fchownat sys_fchownat
+326 common futimesat sys_futimesat_time32
+327 common fstatat64 sys_fstatat64
+328 common unlinkat sys_unlinkat
+329 common renameat sys_renameat
+330 common linkat sys_linkat
+331 common symlinkat sys_symlinkat
+332 common readlinkat sys_readlinkat
+333 common fchmodat sys_fchmodat
+334 common faccessat sys_faccessat
+335 common pselect6 sys_pselect6_time32 compat_sys_pselect6_time32
+336 common ppoll sys_ppoll_time32 compat_sys_ppoll_time32
+337 common unshare sys_unshare
+338 common set_robust_list sys_set_robust_list compat_sys_set_robust_list
+339 common get_robust_list sys_get_robust_list compat_sys_get_robust_list
+340 common splice sys_splice
+341 common arm_sync_file_range sys_sync_file_range2 compat_sys_aarch32_sync_file_range2
+342 common tee sys_tee
+343 common vmsplice sys_vmsplice
+344 common move_pages sys_move_pages
+345 common getcpu sys_getcpu
+346 common epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
+347 common kexec_load sys_kexec_load compat_sys_kexec_load
+348 common utimensat sys_utimensat_time32
+349 common signalfd sys_signalfd compat_sys_signalfd
+350 common timerfd_create sys_timerfd_create
+351 common eventfd sys_eventfd
+352 common fallocate sys_fallocate compat_sys_aarch32_fallocate
+353 common timerfd_settime sys_timerfd_settime32
+354 common timerfd_gettime sys_timerfd_gettime32
+355 common signalfd4 sys_signalfd4 compat_sys_signalfd4
+356 common eventfd2 sys_eventfd2
+357 common epoll_create1 sys_epoll_create1
+358 common dup3 sys_dup3
+359 common pipe2 sys_pipe2
+360 common inotify_init1 sys_inotify_init1
+361 common preadv sys_preadv compat_sys_preadv
+362 common pwritev sys_pwritev compat_sys_pwritev
+363 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
+364 common perf_event_open sys_perf_event_open
+365 common recvmmsg sys_recvmmsg_time32 compat_sys_recvmmsg_time32
+366 common accept4 sys_accept4
+367 common fanotify_init sys_fanotify_init
+368 common fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
+369 common prlimit64 sys_prlimit64
+370 common name_to_handle_at sys_name_to_handle_at
+371 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
+372 common clock_adjtime sys_clock_adjtime32
+373 common syncfs sys_syncfs
+374 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
+375 common setns sys_setns
+376 common process_vm_readv sys_process_vm_readv
+377 common process_vm_writev sys_process_vm_writev
+378 common kcmp sys_kcmp
+379 common finit_module sys_finit_module
+380 common sched_setattr sys_sched_setattr
+381 common sched_getattr sys_sched_getattr
+382 common renameat2 sys_renameat2
+383 common seccomp sys_seccomp
+384 common getrandom sys_getrandom
+385 common memfd_create sys_memfd_create
+386 common bpf sys_bpf
+387 common execveat sys_execveat compat_sys_execveat
+388 common userfaultfd sys_userfaultfd
+389 common membarrier sys_membarrier
+390 common mlock2 sys_mlock2
+391 common copy_file_range sys_copy_file_range
+392 common preadv2 sys_preadv2 compat_sys_preadv2
+393 common pwritev2 sys_pwritev2 compat_sys_pwritev2
+394 common pkey_mprotect sys_pkey_mprotect
+395 common pkey_alloc sys_pkey_alloc
+396 common pkey_free sys_pkey_free
+397 common statx sys_statx
+398 common rseq sys_rseq
+399 common io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents
+400 common migrate_pages sys_migrate_pages
+401 common kexec_file_load sys_kexec_file_load
+# 402 is unused
+403 common clock_gettime64 sys_clock_gettime
+404 common clock_settime64 sys_clock_settime
+405 common clock_adjtime64 sys_clock_adjtime
+406 common clock_getres_time64 sys_clock_getres
+407 common clock_nanosleep_time64 sys_clock_nanosleep
+408 common timer_gettime64 sys_timer_gettime
+409 common timer_settime64 sys_timer_settime
+410 common timerfd_gettime64 sys_timerfd_gettime
+411 common timerfd_settime64 sys_timerfd_settime
+412 common utimensat_time64 sys_utimensat
+413 common pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
+414 common ppoll_time64 sys_ppoll compat_sys_ppoll_time64
+416 common io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64
+417 common recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
+418 common mq_timedsend_time64 sys_mq_timedsend
+419 common mq_timedreceive_time64 sys_mq_timedreceive
+420 common semtimedop_time64 sys_semtimedop
+421 common rt_sigtimedwait_time64 sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time64
+422 common futex_time64 sys_futex
+423 common sched_rr_get_interval_time64 sys_sched_rr_get_interval
+424 common pidfd_send_signal sys_pidfd_send_signal
+425 common io_uring_setup sys_io_uring_setup
+426 common io_uring_enter sys_io_uring_enter
+427 common io_uring_register sys_io_uring_register
+428 common open_tree sys_open_tree
+429 common move_mount sys_move_mount
+430 common fsopen sys_fsopen
+431 common fsconfig sys_fsconfig
+432 common fsmount sys_fsmount
+433 common fspick sys_fspick
+434 common pidfd_open sys_pidfd_open
+435 common clone3 sys_clone3
+436 common close_range sys_close_range
+437 common openat2 sys_openat2
+438 common pidfd_getfd sys_pidfd_getfd
+439 common faccessat2 sys_faccessat2
+440 common process_madvise sys_process_madvise
+441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2
+442 common mount_setattr sys_mount_setattr
+443 common quotactl_fd sys_quotactl_fd
+444 common landlock_create_ruleset sys_landlock_create_ruleset
+445 common landlock_add_rule sys_landlock_add_rule
+446 common landlock_restrict_self sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448 common process_mrelease sys_process_mrelease
+449 common futex_waitv sys_futex_waitv
+450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
+452 common fchmodat2 sys_fchmodat2
+453 common map_shadow_stack sys_map_shadow_stack
+454 common futex_wake sys_futex_wake
+455 common futex_wait sys_futex_wait
+456 common futex_requeue sys_futex_requeue
+457 common statmount sys_statmount
+458 common listmount sys_listmount
+459 common lsm_get_self_attr sys_lsm_get_self_attr
+460 common lsm_set_self_attr sys_lsm_set_self_attr
+461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
diff --git a/tools/perf/arch/arm64/entry/syscalls/syscall_64.tbl b/tools/perf/arch/arm64/entry/syscalls/syscall_64.tbl
new file mode 120000
index 000000000000..4fdd58f10c15
--- /dev/null
+++ b/tools/perf/arch/arm64/entry/syscalls/syscall_64.tbl
@@ -0,0 +1 @@
+../../../../../scripts/syscall.tbl
\ No newline at end of file
diff --git a/tools/perf/arch/arm64/tests/Build b/tools/perf/arch/arm64/tests/Build
index e337c09e7f56..d44c9de92d42 100644
--- a/tools/perf/arch/arm64/tests/Build
+++ b/tools/perf/arch/arm64/tests/Build
@@ -1,5 +1,5 @@
-perf-y += regs_load.o
-perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
+perf-test-y += regs_load.o
+perf-test-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
-perf-y += arch-tests.o
-perf-y += cpuid-match.o
+perf-test-y += arch-tests.o
+perf-test-y += cpuid-match.o
diff --git a/tools/perf/arch/arm64/tests/dwarf-unwind.c b/tools/perf/arch/arm64/tests/dwarf-unwind.c
index b2603d0d3737..440d00f0de14 100644
--- a/tools/perf/arch/arm64/tests/dwarf-unwind.c
+++ b/tools/perf/arch/arm64/tests/dwarf-unwind.c
@@ -45,7 +45,7 @@ static int sample_ustack(struct perf_sample *sample,
int test__arch_unwind_sample(struct perf_sample *sample,
struct thread *thread)
{
- struct regs_dump *regs = &sample->user_regs;
+ struct regs_dump *regs = perf_sample__user_regs(sample);
u64 *buf;
buf = calloc(1, sizeof(u64) * PERF_REGS_MAX);
diff --git a/tools/perf/arch/arm64/util/Build b/tools/perf/arch/arm64/util/Build
index 78ef7115be3d..a74521b79eaa 100644
--- a/tools/perf/arch/arm64/util/Build
+++ b/tools/perf/arch/arm64/util/Build
@@ -1,14 +1,13 @@
-perf-y += header.o
-perf-y += machine.o
-perf-y += perf_regs.o
-perf-y += tsc.o
-perf-y += pmu.o
-perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-y += header.o
+perf-util-y += machine.o
+perf-util-y += perf_regs.o
+perf-util-y += tsc.o
+perf-util-y += pmu.o
+perf-util-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
+perf-util-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
-perf-$(CONFIG_AUXTRACE) += ../../arm/util/pmu.o \
+perf-util-$(CONFIG_AUXTRACE) += ../../arm/util/pmu.o \
../../arm/util/auxtrace.o \
../../arm/util/cs-etm.o \
arm-spe.o mem-events.o hisi-ptt.o
diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
index 51ccbfd3d246..cac43cde7dbe 100644
--- a/tools/perf/arch/arm64/util/arm-spe.c
+++ b/tools/perf/arch/arm64/util/arm-spe.c
@@ -8,6 +8,7 @@
#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/log2.h>
+#include <linux/string.h>
#include <linux/zalloc.h>
#include <time.h>
@@ -22,9 +23,12 @@
#include "../../../util/debug.h"
#include "../../../util/auxtrace.h"
#include "../../../util/record.h"
+#include "../../../util/header.h"
#include "../../../util/arm-spe.h"
#include <tools/libc_compat.h> // reallocarray
+#define ARM_SPE_CPU_MAGIC 0x1010101010101010ULL
+
#define KiB(x) ((x) * 1024)
#define MiB(x) ((x) * 1024 * 1024)
@@ -36,11 +40,102 @@ struct arm_spe_recording {
bool *wrapped;
};
+/* Iterate config list to detect if the "freq" parameter is set */
+static bool arm_spe_is_set_freq(struct evsel *evsel)
+{
+ struct evsel_config_term *term;
+
+ list_for_each_entry(term, &evsel->config_terms, list) {
+ if (term->type == EVSEL__CONFIG_TERM_FREQ)
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * arm_spe_find_cpus() returns a new cpu map, and the caller should invoke
+ * perf_cpu_map__put() to release the map after use.
+ */
+static struct perf_cpu_map *arm_spe_find_cpus(struct evlist *evlist)
+{
+ struct perf_cpu_map *event_cpus = evlist->core.user_requested_cpus;
+ struct perf_cpu_map *online_cpus = perf_cpu_map__new_online_cpus();
+ struct perf_cpu_map *intersect_cpus;
+
+ /* cpu map is not "any" CPU, we have specific CPUs to work with */
+ if (!perf_cpu_map__has_any_cpu(event_cpus)) {
+ intersect_cpus = perf_cpu_map__intersect(event_cpus, online_cpus);
+ perf_cpu_map__put(online_cpus);
+ /* Event can be "any" CPU so count all CPUs. */
+ } else {
+ intersect_cpus = online_cpus;
+ }
+
+ return intersect_cpus;
+}
+
static size_t
arm_spe_info_priv_size(struct auxtrace_record *itr __maybe_unused,
- struct evlist *evlist __maybe_unused)
+ struct evlist *evlist)
{
- return ARM_SPE_AUXTRACE_PRIV_SIZE;
+ struct perf_cpu_map *cpu_map = arm_spe_find_cpus(evlist);
+ size_t size;
+
+ if (!cpu_map)
+ return 0;
+
+ size = ARM_SPE_AUXTRACE_PRIV_MAX +
+ ARM_SPE_CPU_PRIV_MAX * perf_cpu_map__nr(cpu_map);
+ size *= sizeof(u64);
+
+ perf_cpu_map__put(cpu_map);
+ return size;
+}
+
+static int arm_spe_save_cpu_header(struct auxtrace_record *itr,
+ struct perf_cpu cpu, __u64 data[])
+{
+ struct arm_spe_recording *sper =
+ container_of(itr, struct arm_spe_recording, itr);
+ struct perf_pmu *pmu = NULL;
+ char *cpuid = NULL;
+ u64 val;
+
+ /* Read CPU MIDR */
+ cpuid = get_cpuid_allow_env_override(cpu);
+ if (!cpuid)
+ return -ENOMEM;
+ val = strtol(cpuid, NULL, 16);
+
+ data[ARM_SPE_MAGIC] = ARM_SPE_CPU_MAGIC;
+ data[ARM_SPE_CPU] = cpu.cpu;
+ data[ARM_SPE_CPU_NR_PARAMS] = ARM_SPE_CPU_PRIV_MAX - ARM_SPE_CPU_MIDR;
+ data[ARM_SPE_CPU_MIDR] = val;
+
+ /* Find the associated Arm SPE PMU for the CPU */
+ if (perf_cpu_map__has(sper->arm_spe_pmu->cpus, cpu))
+ pmu = sper->arm_spe_pmu;
+
+ if (!pmu) {
+ /* No Arm SPE PMU is found */
+ data[ARM_SPE_CPU_PMU_TYPE] = ULLONG_MAX;
+ data[ARM_SPE_CAP_MIN_IVAL] = 0;
+ data[ARM_SPE_CAP_EVENT_FILTER] = 0;
+ } else {
+ data[ARM_SPE_CPU_PMU_TYPE] = pmu->type;
+
+ if (perf_pmu__scan_file(pmu, "caps/min_interval", "%lu", &val) != 1)
+ val = 0;
+ data[ARM_SPE_CAP_MIN_IVAL] = val;
+
+ if (perf_pmu__scan_file(pmu, "caps/event_filter", "%lx", &val) != 1)
+ val = 0;
+ data[ARM_SPE_CAP_EVENT_FILTER] = val;
+ }
+
+ free(cpuid);
+ return ARM_SPE_CPU_PRIV_MAX;
}
static int arm_spe_info_fill(struct auxtrace_record *itr,
@@ -48,20 +143,46 @@ static int arm_spe_info_fill(struct auxtrace_record *itr,
struct perf_record_auxtrace_info *auxtrace_info,
size_t priv_size)
{
+ int i, ret;
+ size_t offset;
struct arm_spe_recording *sper =
container_of(itr, struct arm_spe_recording, itr);
struct perf_pmu *arm_spe_pmu = sper->arm_spe_pmu;
+ struct perf_cpu_map *cpu_map;
+ struct perf_cpu cpu;
+ __u64 *data;
- if (priv_size != ARM_SPE_AUXTRACE_PRIV_SIZE)
+ if (priv_size != arm_spe_info_priv_size(itr, session->evlist))
return -EINVAL;
if (!session->evlist->core.nr_mmaps)
return -EINVAL;
+ cpu_map = arm_spe_find_cpus(session->evlist);
+ if (!cpu_map)
+ return -EINVAL;
+
auxtrace_info->type = PERF_AUXTRACE_ARM_SPE;
- auxtrace_info->priv[ARM_SPE_PMU_TYPE] = arm_spe_pmu->type;
+ auxtrace_info->priv[ARM_SPE_HEADER_VERSION] = ARM_SPE_HEADER_CURRENT_VERSION;
+ auxtrace_info->priv[ARM_SPE_HEADER_SIZE] =
+ ARM_SPE_AUXTRACE_PRIV_MAX - ARM_SPE_HEADER_VERSION;
+ auxtrace_info->priv[ARM_SPE_PMU_TYPE_V2] = arm_spe_pmu->type;
+ auxtrace_info->priv[ARM_SPE_CPUS_NUM] = perf_cpu_map__nr(cpu_map);
+
+ offset = ARM_SPE_AUXTRACE_PRIV_MAX;
+ perf_cpu_map__for_each_cpu(cpu, i, cpu_map) {
+ assert(offset < priv_size);
+ data = &auxtrace_info->priv[offset];
+ ret = arm_spe_save_cpu_header(itr, cpu, data);
+ if (ret < 0)
+ goto out;
+ offset += ret;
+ }
- return 0;
+ ret = 0;
+out:
+ perf_cpu_map__put(cpu_map);
+ return ret;
}
static void
@@ -132,38 +253,48 @@ static __u64 arm_spe_pmu__sample_period(const struct perf_pmu *arm_spe_pmu)
return sample_period;
}
-static int arm_spe_recording_options(struct auxtrace_record *itr,
- struct evlist *evlist,
- struct record_opts *opts)
+static void arm_spe_setup_evsel(struct evsel *evsel, struct perf_cpu_map *cpus)
{
- struct arm_spe_recording *sper =
- container_of(itr, struct arm_spe_recording, itr);
- struct perf_pmu *arm_spe_pmu = sper->arm_spe_pmu;
- struct evsel *evsel, *arm_spe_evsel = NULL;
- struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
- bool privileged = perf_event_paranoid_check(-1);
- struct evsel *tracking_evsel;
- int err;
u64 bit;
- sper->evlist = evlist;
+ evsel->core.attr.freq = 0;
+ evsel->core.attr.sample_period = arm_spe_pmu__sample_period(evsel->pmu);
+ evsel->needs_auxtrace_mmap = true;
- evlist__for_each_entry(evlist, evsel) {
- if (evsel->core.attr.type == arm_spe_pmu->type) {
- if (arm_spe_evsel) {
- pr_err("There may be only one " ARM_SPE_PMU_NAME "x event\n");
- return -EINVAL;
- }
- evsel->core.attr.freq = 0;
- evsel->core.attr.sample_period = arm_spe_pmu__sample_period(arm_spe_pmu);
- evsel->needs_auxtrace_mmap = true;
- arm_spe_evsel = evsel;
- opts->full_auxtrace = true;
- }
+ /*
+ * To obtain the auxtrace buffer file descriptor, the auxtrace event
+ * must come first.
+ */
+ evlist__to_front(evsel->evlist, evsel);
+
+ /*
+ * In the case of per-cpu mmaps, sample CPU for AUX event;
+ * also enable the timestamp tracing for samples correlation.
+ */
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
+ evsel__set_sample_bit(evsel, CPU);
+ evsel__set_config_if_unset(evsel->pmu, evsel, "ts_enable", 1);
}
- if (!opts->full_auxtrace)
- return 0;
+ /*
+ * Set this only so that perf report knows that SPE generates memory info. It has no effect
+ * on the opening of the event or the SPE data produced.
+ */
+ evsel__set_sample_bit(evsel, DATA_SRC);
+
+ /*
+ * The PHYS_ADDR flag does not affect the driver behaviour, it is used to
+ * inform that the resulting output's SPE samples contain physical addresses
+ * where applicable.
+ */
+ bit = perf_pmu__format_bits(evsel->pmu, "pa_enable");
+ if (evsel->core.attr.config & bit)
+ evsel__set_sample_bit(evsel, PHYS_ADDR);
+}
+
+static int arm_spe_setup_aux_buffer(struct record_opts *opts)
+{
+ bool privileged = perf_event_paranoid_check(-1);
/*
* we are in snapshot mode.
@@ -193,6 +324,9 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
pr_err("Failed to calculate default snapshot size and/or AUX area tracing mmap pages\n");
return -EINVAL;
}
+
+ pr_debug2("%sx snapshot size: %zu\n", ARM_SPE_PMU_NAME,
+ opts->auxtrace_snapshot_size);
}
/* We are in full trace mode but '-m,xyz' wasn't specified */
@@ -218,40 +352,15 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
}
}
- if (opts->auxtrace_snapshot_mode)
- pr_debug2("%sx snapshot size: %zu\n", ARM_SPE_PMU_NAME,
- opts->auxtrace_snapshot_size);
-
- /*
- * To obtain the auxtrace buffer file descriptor, the auxtrace event
- * must come first.
- */
- evlist__to_front(evlist, arm_spe_evsel);
-
- /*
- * In the case of per-cpu mmaps, sample CPU for AUX event;
- * also enable the timestamp tracing for samples correlation.
- */
- if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
- evsel__set_sample_bit(arm_spe_evsel, CPU);
- evsel__set_config_if_unset(arm_spe_pmu, arm_spe_evsel,
- "ts_enable", 1);
- }
-
- /*
- * Set this only so that perf report knows that SPE generates memory info. It has no effect
- * on the opening of the event or the SPE data produced.
- */
- evsel__set_sample_bit(arm_spe_evsel, DATA_SRC);
+ return 0;
+}
- /*
- * The PHYS_ADDR flag does not affect the driver behaviour, it is used to
- * inform that the resulting output's SPE samples contain physical addresses
- * where applicable.
- */
- bit = perf_pmu__format_bits(arm_spe_pmu, "pa_enable");
- if (arm_spe_evsel->core.attr.config & bit)
- evsel__set_sample_bit(arm_spe_evsel, PHYS_ADDR);
+static int arm_spe_setup_tracking_event(struct evlist *evlist,
+ struct record_opts *opts)
+{
+ int err;
+ struct evsel *tracking_evsel;
+ struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
/* Add dummy event to keep tracking */
err = parse_event(evlist, "dummy:u");
@@ -265,7 +374,7 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
tracking_evsel->core.attr.sample_period = 1;
/* In per-cpu case, always need the time of mmap events etc */
- if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
evsel__set_sample_bit(tracking_evsel, TIME);
evsel__set_sample_bit(tracking_evsel, CPU);
@@ -277,6 +386,60 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
return 0;
}
+static int arm_spe_recording_options(struct auxtrace_record *itr,
+ struct evlist *evlist,
+ struct record_opts *opts)
+{
+ struct arm_spe_recording *sper =
+ container_of(itr, struct arm_spe_recording, itr);
+ struct evsel *evsel, *tmp;
+ struct perf_cpu_map *cpus = evlist->core.user_requested_cpus;
+ bool discard = false;
+ int err;
+
+ sper->evlist = evlist;
+
+ evlist__for_each_entry(evlist, evsel) {
+ if (evsel__is_aux_event(evsel)) {
+ if (!strstarts(evsel->pmu->name, ARM_SPE_PMU_NAME)) {
+ pr_err("Found unexpected auxtrace event: %s\n",
+ evsel->pmu->name);
+ return -EINVAL;
+ }
+ opts->full_auxtrace = true;
+
+ if (opts->user_freq != UINT_MAX ||
+ arm_spe_is_set_freq(evsel)) {
+ pr_err("Arm SPE: Frequency is not supported. "
+ "Set period with -c option or PMU parameter (-e %s/period=NUM/).\n",
+ evsel->pmu->name);
+ return -EINVAL;
+ }
+ }
+ }
+
+ if (!opts->full_auxtrace)
+ return 0;
+
+ evlist__for_each_entry_safe(evlist, tmp, evsel) {
+ if (evsel__is_aux_event(evsel)) {
+ arm_spe_setup_evsel(evsel, cpus);
+ if (evsel->core.attr.config &
+ perf_pmu__format_bits(evsel->pmu, "discard"))
+ discard = true;
+ }
+ }
+
+ if (discard)
+ return 0;
+
+ err = arm_spe_setup_aux_buffer(opts);
+ if (err)
+ return err;
+
+ return arm_spe_setup_tracking_event(evlist, opts);
+}
+
static int arm_spe_parse_snapshot_options(struct auxtrace_record *itr __maybe_unused,
struct record_opts *opts,
const char *str)
@@ -301,12 +464,16 @@ static int arm_spe_snapshot_start(struct auxtrace_record *itr)
struct arm_spe_recording *ptr =
container_of(itr, struct arm_spe_recording, itr);
struct evsel *evsel;
+ int ret = -EINVAL;
evlist__for_each_entry(ptr->evlist, evsel) {
- if (evsel->core.attr.type == ptr->arm_spe_pmu->type)
- return evsel__disable(evsel);
+ if (evsel__is_aux_event(evsel)) {
+ ret = evsel__disable(evsel);
+ if (ret < 0)
+ return ret;
+ }
}
- return -EINVAL;
+ return ret;
}
static int arm_spe_snapshot_finish(struct auxtrace_record *itr)
@@ -314,12 +481,16 @@ static int arm_spe_snapshot_finish(struct auxtrace_record *itr)
struct arm_spe_recording *ptr =
container_of(itr, struct arm_spe_recording, itr);
struct evsel *evsel;
+ int ret = -EINVAL;
evlist__for_each_entry(ptr->evlist, evsel) {
- if (evsel->core.attr.type == ptr->arm_spe_pmu->type)
- return evsel__enable(evsel);
+ if (evsel__is_aux_event(evsel)) {
+ ret = evsel__enable(evsel);
+ if (ret < 0)
+ return ret;
+ }
}
- return -EINVAL;
+ return ret;
}
static int arm_spe_alloc_wrapped_array(struct arm_spe_recording *ptr, int idx)
@@ -497,7 +668,6 @@ struct auxtrace_record *arm_spe_recording_init(int *err,
}
sper->arm_spe_pmu = arm_spe_pmu;
- sper->itr.pmu = arm_spe_pmu;
sper->itr.snapshot_start = arm_spe_snapshot_start;
sper->itr.snapshot_finish = arm_spe_snapshot_finish;
sper->itr.find_snapshot = arm_spe_find_snapshot;
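The arm_spe_info_fill() change in this file packs a fixed header (version, header size in u64 words, PMU type, number of CPUs) into auxtrace_info->priv[], followed by one variable-length record per CPU written by arm_spe_save_cpu_header() (magic, CPU number, parameter count, then MIDR, PMU type and the two capability values). A minimal standalone sketch of a consumer walking that layout follows; the numeric offsets and sample values are assumptions for illustration, not the perf enum definitions.

/*
 * Hypothetical decoder for the per-CPU metadata layout described above.
 * Offsets mirror the enum names used in the diff; values are illustrative.
 */
#include <stdio.h>
#include <stdint.h>

static void dump_arm_spe_priv(const uint64_t *priv, size_t nr_words)
{
	size_t off = 4;		/* assumed value of ARM_SPE_AUXTRACE_PRIV_MAX */
	uint64_t nr_cpus;

	if (nr_words < off)
		return;

	printf("version=%llu pmu_type=%llu nr_cpus=%llu\n",
	       (unsigned long long)priv[0],	/* ARM_SPE_HEADER_VERSION */
	       (unsigned long long)priv[2],	/* ARM_SPE_PMU_TYPE_V2 */
	       (unsigned long long)priv[3]);	/* ARM_SPE_CPUS_NUM */
	nr_cpus = priv[3];

	/* Per-CPU record: magic, cpu, nr_params, then nr_params payload words */
	while (nr_cpus-- && off + 4 <= nr_words) {
		uint64_t nr_params = priv[off + 2];

		printf("cpu=%llu midr=0x%llx\n",
		       (unsigned long long)priv[off + 1],
		       (unsigned long long)priv[off + 3]);
		off += 3 + nr_params;
	}
}

int main(void)
{
	/* Illustrative data for one CPU: cpu 0, MIDR 0x410fd490 */
	uint64_t priv[] = { 2, 4, 12, 1,
			    0x1010101010101010ULL, 0, 4, 0x410fd490, 12, 1024, 0x3f };

	dump_arm_spe_priv(priv, sizeof(priv) / sizeof(priv[0]));
	return 0;
}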
diff --git a/tools/perf/arch/arm64/util/arm64_exception_types.h b/tools/perf/arch/arm64/util/arm64_exception_types.h
index 27c981ebe401..bf827f19ace0 100644
--- a/tools/perf/arch/arm64/util/arm64_exception_types.h
+++ b/tools/perf/arch/arm64/util/arm64_exception_types.h
@@ -31,9 +31,10 @@
#define ESR_ELx_EC_FP_ASIMD (0x07)
#define ESR_ELx_EC_CP10_ID (0x08) /* EL2 only */
#define ESR_ELx_EC_PAC (0x09) /* EL2 and above */
-/* Unallocated EC: 0x0A - 0x0B */
+#define ESR_ELx_EC_OTHER (0x0A)
+/* Unallocated EC: 0x0B */
#define ESR_ELx_EC_CP14_64 (0x0C)
-/* Unallocated EC: 0x0d */
+#define ESR_ELx_EC_BTI (0x0D)
#define ESR_ELx_EC_ILL (0x0E)
/* Unallocated EC: 0x0F - 0x10 */
#define ESR_ELx_EC_SVC32 (0x11)
@@ -46,7 +47,10 @@
#define ESR_ELx_EC_SYS64 (0x18)
#define ESR_ELx_EC_SVE (0x19)
#define ESR_ELx_EC_ERET (0x1a) /* EL2 only */
-/* Unallocated EC: 0x1b - 0x1E */
+/* Unallocated EC: 0x1B */
+#define ESR_ELx_EC_FPAC (0x1C) /* EL1 and above */
+#define ESR_ELx_EC_SME (0x1D)
+/* Unallocated EC: 0x1E */
#define ESR_ELx_EC_IMP_DEF (0x1f) /* EL3 only */
#define ESR_ELx_EC_IABT_LOW (0x20)
#define ESR_ELx_EC_IABT_CUR (0x21)
@@ -55,11 +59,12 @@
#define ESR_ELx_EC_DABT_LOW (0x24)
#define ESR_ELx_EC_DABT_CUR (0x25)
#define ESR_ELx_EC_SP_ALIGN (0x26)
-/* Unallocated EC: 0x27 */
+#define ESR_ELx_EC_MOPS (0x27)
#define ESR_ELx_EC_FP_EXC32 (0x28)
/* Unallocated EC: 0x29 - 0x2B */
#define ESR_ELx_EC_FP_EXC64 (0x2C)
-/* Unallocated EC: 0x2D - 0x2E */
+#define ESR_ELx_EC_GCS (0x2D)
+/* Unallocated EC: 0x2E */
#define ESR_ELx_EC_SERROR (0x2F)
#define ESR_ELx_EC_BREAKPT_LOW (0x30)
#define ESR_ELx_EC_BREAKPT_CUR (0x31)
diff --git a/tools/perf/arch/arm64/util/dwarf-regs.c b/tools/perf/arch/arm64/util/dwarf-regs.c
deleted file mode 100644
index 917b97d7c5d3..000000000000
--- a/tools/perf/arch/arm64/util/dwarf-regs.c
+++ /dev/null
@@ -1,92 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (C) 2010 Will Deacon, ARM Ltd.
- */
-
-#include <errno.h>
-#include <stddef.h>
-#include <string.h>
-#include <dwarf-regs.h>
-#include <linux/ptrace.h> /* for struct user_pt_regs */
-#include <linux/stringify.h>
-
-struct pt_regs_dwarfnum {
- const char *name;
- unsigned int dwarfnum;
-};
-
-#define REG_DWARFNUM_NAME(r, num) {.name = r, .dwarfnum = num}
-#define GPR_DWARFNUM_NAME(num) \
- {.name = __stringify(%x##num), .dwarfnum = num}
-#define REG_DWARFNUM_END {.name = NULL, .dwarfnum = 0}
-#define DWARFNUM2OFFSET(index) \
- (index * sizeof((struct user_pt_regs *)0)->regs[0])
-
-/*
- * Reference:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ihi0057b/IHI0057B_aadwarf64.pdf
- */
-static const struct pt_regs_dwarfnum regdwarfnum_table[] = {
- GPR_DWARFNUM_NAME(0),
- GPR_DWARFNUM_NAME(1),
- GPR_DWARFNUM_NAME(2),
- GPR_DWARFNUM_NAME(3),
- GPR_DWARFNUM_NAME(4),
- GPR_DWARFNUM_NAME(5),
- GPR_DWARFNUM_NAME(6),
- GPR_DWARFNUM_NAME(7),
- GPR_DWARFNUM_NAME(8),
- GPR_DWARFNUM_NAME(9),
- GPR_DWARFNUM_NAME(10),
- GPR_DWARFNUM_NAME(11),
- GPR_DWARFNUM_NAME(12),
- GPR_DWARFNUM_NAME(13),
- GPR_DWARFNUM_NAME(14),
- GPR_DWARFNUM_NAME(15),
- GPR_DWARFNUM_NAME(16),
- GPR_DWARFNUM_NAME(17),
- GPR_DWARFNUM_NAME(18),
- GPR_DWARFNUM_NAME(19),
- GPR_DWARFNUM_NAME(20),
- GPR_DWARFNUM_NAME(21),
- GPR_DWARFNUM_NAME(22),
- GPR_DWARFNUM_NAME(23),
- GPR_DWARFNUM_NAME(24),
- GPR_DWARFNUM_NAME(25),
- GPR_DWARFNUM_NAME(26),
- GPR_DWARFNUM_NAME(27),
- GPR_DWARFNUM_NAME(28),
- GPR_DWARFNUM_NAME(29),
- REG_DWARFNUM_NAME("%lr", 30),
- REG_DWARFNUM_NAME("%sp", 31),
- REG_DWARFNUM_END,
-};
-
-/**
- * get_arch_regstr() - lookup register name from it's DWARF register number
- * @n: the DWARF register number
- *
- * get_arch_regstr() returns the name of the register in struct
- * regdwarfnum_table from it's DWARF register number. If the register is not
- * found in the table, this returns NULL;
- */
-const char *get_arch_regstr(unsigned int n)
-{
- const struct pt_regs_dwarfnum *roff;
- for (roff = regdwarfnum_table; roff->name != NULL; roff++)
- if (roff->dwarfnum == n)
- return roff->name;
- return NULL;
-}
-
-int regs_query_register_offset(const char *name)
-{
- const struct pt_regs_dwarfnum *roff;
-
- for (roff = regdwarfnum_table; roff->name != NULL; roff++)
- if (!strcmp(roff->name, name))
- return DWARFNUM2OFFSET(roff->dwarfnum);
- return -EINVAL;
-}
diff --git a/tools/perf/arch/arm64/util/header.c b/tools/perf/arch/arm64/util/header.c
index 97037499152e..f445a2dd6293 100644
--- a/tools/perf/arch/arm64/util/header.c
+++ b/tools/perf/arch/arm64/util/header.c
@@ -4,8 +4,6 @@
#include <stdio.h>
#include <stdlib.h>
#include <perf/cpumap.h>
-#include <util/cpumap.h>
-#include <internal/cpumap.h>
#include <api/fs/fs.h>
#include <errno.h>
#include "debug.h"
@@ -16,76 +14,66 @@
#define MIDR_REVISION_MASK GENMASK(3, 0)
#define MIDR_VARIANT_MASK GENMASK(23, 20)
-static int _get_cpuid(char *buf, size_t sz, struct perf_cpu_map *cpus)
+static int _get_cpuid(char *buf, size_t sz, struct perf_cpu cpu)
{
+ char path[PATH_MAX];
+ FILE *file;
const char *sysfs = sysfs__mountpoint();
- int cpu;
- int ret = EINVAL;
+ assert(cpu.cpu != -1);
if (!sysfs || sz < MIDR_SIZE)
return EINVAL;
- cpus = perf_cpu_map__get(cpus);
+ scnprintf(path, PATH_MAX, "%s/devices/system/cpu/cpu%d" MIDR, sysfs, cpu.cpu);
- for (cpu = 0; cpu < perf_cpu_map__nr(cpus); cpu++) {
- char path[PATH_MAX];
- FILE *file;
-
- scnprintf(path, PATH_MAX, "%s/devices/system/cpu/cpu%d" MIDR,
- sysfs, RC_CHK_ACCESS(cpus)->map[cpu].cpu);
-
- file = fopen(path, "r");
- if (!file) {
- pr_debug("fopen failed for file %s\n", path);
- continue;
- }
+ file = fopen(path, "r");
+ if (!file) {
+ pr_debug("fopen failed for file %s\n", path);
+ return EINVAL;
+ }
- if (!fgets(buf, MIDR_SIZE, file)) {
- fclose(file);
- continue;
- }
+ if (!fgets(buf, MIDR_SIZE, file)) {
+ pr_debug("Failed to read file %s\n", path);
fclose(file);
-
- /* got midr break loop */
- ret = 0;
- break;
+ return EINVAL;
}
-
- perf_cpu_map__put(cpus);
- return ret;
+ fclose(file);
+ return 0;
}
-int get_cpuid(char *buf, size_t sz)
+int get_cpuid(char *buf, size_t sz, struct perf_cpu cpu)
{
- struct perf_cpu_map *cpus = perf_cpu_map__new_online_cpus();
- int ret;
+ struct perf_cpu_map *cpus;
+ int idx;
+
+ if (cpu.cpu != -1)
+ return _get_cpuid(buf, sz, cpu);
+ cpus = perf_cpu_map__new_online_cpus();
if (!cpus)
return EINVAL;
- ret = _get_cpuid(buf, sz, cpus);
-
- perf_cpu_map__put(cpus);
+ perf_cpu_map__for_each_cpu(cpu, idx, cpus) {
+ int ret = _get_cpuid(buf, sz, cpu);
- return ret;
+ if (ret == 0)
+ return 0;
+ }
+ return EINVAL;
}
-char *get_cpuid_str(struct perf_pmu *pmu)
+char *get_cpuid_str(struct perf_cpu cpu)
{
- char *buf = NULL;
+ char *buf = malloc(MIDR_SIZE);
int res;
- if (!pmu || !pmu->cpus)
- return NULL;
-
- buf = malloc(MIDR_SIZE);
if (!buf)
return NULL;
/* read midr from list of cpus mapped to this pmu */
- res = _get_cpuid(buf, MIDR_SIZE, pmu->cpus);
+ res = get_cpuid(buf, MIDR_SIZE, cpu);
if (res) {
- pr_err("failed to get cpuid string for PMU %s\n", pmu->name);
+ pr_err("failed to get cpuid string for CPU %d\n", cpu.cpu);
free(buf);
buf = NULL;
}
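The reworked _get_cpuid() above reads the MIDR for a single CPU from one sysfs file instead of iterating a cpu map inside the helper, and arm_spe_save_cpu_header() parses the returned string as hex. A standalone sketch of that read-and-parse pattern, assuming the usual midr_el1 sysfs leaf (the MIDR path macro itself is not shown in this hunk):

/* Hypothetical example, not perf code: read and parse one CPU's MIDR. */
#include <stdio.h>
#include <stdlib.h>

#define MIDR_FMT "/sys/devices/system/cpu/cpu%d/regs/identification/midr_el1"

static int read_midr(int cpu, unsigned long long *midr)
{
	char path[256];
	char buf[32];
	FILE *file;

	snprintf(path, sizeof(path), MIDR_FMT, cpu);
	file = fopen(path, "r");
	if (!file)
		return -1;			/* CPU offline or no sysfs */
	if (!fgets(buf, sizeof(buf), file)) {
		fclose(file);
		return -1;
	}
	fclose(file);
	*midr = strtoull(buf, NULL, 16);	/* MIDR is exposed as hex text */
	return 0;
}

int main(void)
{
	unsigned long long midr;

	if (!read_midr(0, &midr))
		printf("cpu0 MIDR: 0x%llx\n", midr);
	return 0;
}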
diff --git a/tools/perf/arch/arm64/util/hisi-ptt.c b/tools/perf/arch/arm64/util/hisi-ptt.c
index ba97c8a562a0..eac9739c87e6 100644
--- a/tools/perf/arch/arm64/util/hisi-ptt.c
+++ b/tools/perf/arch/arm64/util/hisi-ptt.c
@@ -174,7 +174,6 @@ struct auxtrace_record *hisi_ptt_recording_init(int *err,
}
pttr->hisi_ptt_pmu = hisi_ptt_pmu;
- pttr->itr.pmu = hisi_ptt_pmu;
pttr->itr.recording_options = hisi_ptt_recording_options;
pttr->itr.info_priv_size = hisi_ptt_info_priv_size;
pttr->itr.info_fill = hisi_ptt_info_fill;
diff --git a/tools/perf/arch/arm64/util/machine.c b/tools/perf/arch/arm64/util/machine.c
index ba1144366e85..aab1cc2bc283 100644
--- a/tools/perf/arch/arm64/util/machine.c
+++ b/tools/perf/arch/arm64/util/machine.c
@@ -12,5 +12,7 @@
void arch__add_leaf_frame_record_opts(struct record_opts *opts)
{
+ const struct sample_reg *sample_reg_masks = arch__sample_reg_masks();
+
opts->sample_user_regs |= sample_reg_masks[PERF_REG_ARM64_LR].mask;
}
diff --git a/tools/perf/arch/arm64/util/mem-events.c b/tools/perf/arch/arm64/util/mem-events.c
index 3bcc5c7035c2..9f8da7937255 100644
--- a/tools/perf/arch/arm64/util/mem-events.c
+++ b/tools/perf/arch/arm64/util/mem-events.c
@@ -1,37 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
-#include "map_symbol.h"
+#include "util/map_symbol.h"
+#include "util/mem-events.h"
#include "mem-events.h"
-#define E(t, n, s) { .tag = t, .name = n, .sysfs_name = s }
+#define E(t, n, s, l, a) { .tag = t, .name = n, .event_name = s, .ldlat = l, .aux_event = a }
-static struct perf_mem_event perf_mem_events[PERF_MEM_EVENTS__MAX] = {
- E("spe-load", "arm_spe_0/ts_enable=1,pa_enable=1,load_filter=1,store_filter=0,min_latency=%u/", "arm_spe_0"),
- E("spe-store", "arm_spe_0/ts_enable=1,pa_enable=1,load_filter=0,store_filter=1/", "arm_spe_0"),
- E("spe-ldst", "arm_spe_0/ts_enable=1,pa_enable=1,load_filter=1,store_filter=1,min_latency=%u/", "arm_spe_0"),
+struct perf_mem_event perf_mem_events_arm[PERF_MEM_EVENTS__MAX] = {
+ E("spe-load", "%s/ts_enable=1,pa_enable=1,load_filter=1,store_filter=0,min_latency=%u/", NULL, true, 0),
+ E("spe-store", "%s/ts_enable=1,pa_enable=1,load_filter=0,store_filter=1/", NULL, false, 0),
+ E("spe-ldst", "%s/ts_enable=1,pa_enable=1,load_filter=1,store_filter=1,min_latency=%u/", NULL, true, 0),
};
-
-static char mem_ev_name[100];
-
-struct perf_mem_event *perf_mem_events__ptr(int i)
-{
- if (i >= PERF_MEM_EVENTS__MAX)
- return NULL;
-
- return &perf_mem_events[i];
-}
-
-const char *perf_mem_events__name(int i, const char *pmu_name __maybe_unused)
-{
- struct perf_mem_event *e = perf_mem_events__ptr(i);
-
- if (i >= PERF_MEM_EVENTS__MAX)
- return NULL;
-
- if (i == PERF_MEM_EVENTS__LOAD || i == PERF_MEM_EVENTS__LOAD_STORE)
- scnprintf(mem_ev_name, sizeof(mem_ev_name),
- e->name, perf_mem_events__loads_ldlat);
- else /* PERF_MEM_EVENTS__STORE */
- scnprintf(mem_ev_name, sizeof(mem_ev_name), e->name);
-
- return mem_ev_name;
-}
diff --git a/tools/perf/arch/arm64/util/mem-events.h b/tools/perf/arch/arm64/util/mem-events.h
new file mode 100644
index 000000000000..5fc50be4be38
--- /dev/null
+++ b/tools/perf/arch/arm64/util/mem-events.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ARM64_MEM_EVENTS_H
+#define _ARM64_MEM_EVENTS_H
+
+extern struct perf_mem_event perf_mem_events_arm[PERF_MEM_EVENTS__MAX];
+
+#endif /* _ARM64_MEM_EVENTS_H */
diff --git a/tools/perf/arch/arm64/util/perf_regs.c b/tools/perf/arch/arm64/util/perf_regs.c
index 1b79d8eab22f..09308665e28a 100644
--- a/tools/perf/arch/arm64/util/perf_regs.c
+++ b/tools/perf/arch/arm64/util/perf_regs.c
@@ -16,7 +16,7 @@
#define HWCAP_SVE (1 << 22)
#endif
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG(x0, PERF_REG_ARM64_X0),
SMPL_REG(x1, PERF_REG_ARM64_X1),
SMPL_REG(x2, PERF_REG_ARM64_X2),
@@ -175,3 +175,8 @@ uint64_t arch__user_reg_mask(void)
}
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
diff --git a/tools/perf/arch/arm64/util/pmu.c b/tools/perf/arch/arm64/util/pmu.c
index 2a4eab2d160e..895fb0d0610c 100644
--- a/tools/perf/arch/arm64/util/pmu.c
+++ b/tools/perf/arch/arm64/util/pmu.c
@@ -1,30 +1,11 @@
// SPDX-License-Identifier: GPL-2.0
-#include <internal/cpumap.h>
-#include "../../../util/cpumap.h"
-#include "../../../util/header.h"
#include "../../../util/pmu.h"
#include "../../../util/pmus.h"
+#include "../../../util/tool_pmu.h"
#include <api/fs/fs.h>
-#include <math.h>
-const struct pmu_metrics_table *pmu_metrics_table__find(void)
-{
- struct perf_pmu *pmu;
-
- /* Metrics aren't currently supported on heterogeneous Arm systems */
- if (perf_pmus__num_core_pmus() > 1)
- return NULL;
-
- /* Doesn't matter which one here because they'll all be the same */
- pmu = perf_pmus__find_core_pmu();
- if (pmu)
- return perf_pmu__find_metrics_table(pmu);
-
- return NULL;
-}
-
-double perf_pmu__cpu_slots_per_cycle(void)
+u64 tool_pmu__cpu_slots_per_cycle(void)
{
char path[PATH_MAX];
unsigned long long slots = 0;
@@ -41,5 +22,5 @@ double perf_pmu__cpu_slots_per_cycle(void)
filename__read_ull(path, &slots);
}
- return slots ? (double)slots : NAN;
+ return slots;
}
diff --git a/tools/perf/arch/arm64/util/unwind-libdw.c b/tools/perf/arch/arm64/util/unwind-libdw.c
index e056d50ab42e..b89b0a7e5ad9 100644
--- a/tools/perf/arch/arm64/util/unwind-libdw.c
+++ b/tools/perf/arch/arm64/util/unwind-libdw.c
@@ -8,7 +8,7 @@
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[PERF_REG_ARM64_MAX], dwarf_pc;
#define REG(r) ({ \
diff --git a/tools/perf/arch/csky/Build b/tools/perf/arch/csky/Build
index e4e5f33c84d8..e63eabc2c8f4 100644
--- a/tools/perf/arch/csky/Build
+++ b/tools/perf/arch/csky/Build
@@ -1 +1 @@
-perf-y += util/
+perf-util-y += util/
diff --git a/tools/perf/arch/csky/Makefile b/tools/perf/arch/csky/Makefile
deleted file mode 100644
index 88c08eed9c7b..000000000000
--- a/tools/perf/arch/csky/Makefile
+++ /dev/null
@@ -1,4 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
diff --git a/tools/perf/arch/csky/annotate/instructions.c b/tools/perf/arch/csky/annotate/instructions.c
index 5337bfb7d5fc..14270311d215 100644
--- a/tools/perf/arch/csky/annotate/instructions.c
+++ b/tools/perf/arch/csky/annotate/instructions.c
@@ -43,6 +43,11 @@ static int csky__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
arch->initialized = true;
arch->objdump.comment_char = '/';
arch->associate_instruction_ops = csky__associate_ins_ops;
-
+ arch->e_machine = EM_CSKY;
+#if defined(__CSKYABIV2__)
+ arch->e_flags = EF_CSKY_ABIV2;
+#else
+ arch->e_flags = EF_CSKY_ABIV1;
+#endif
return 0;
}
diff --git a/tools/perf/arch/csky/util/Build b/tools/perf/arch/csky/util/Build
index 7d3050134ae0..5e6ea82c4202 100644
--- a/tools/perf/arch/csky/util/Build
+++ b/tools/perf/arch/csky/util/Build
@@ -1,4 +1,3 @@
-perf-y += perf_regs.o
+perf-util-y += perf_regs.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
diff --git a/tools/perf/arch/csky/util/perf_regs.c b/tools/perf/arch/csky/util/perf_regs.c
index c0877c264d49..6b1665f41180 100644
--- a/tools/perf/arch/csky/util/perf_regs.c
+++ b/tools/perf/arch/csky/util/perf_regs.c
@@ -2,7 +2,7 @@
#include "perf_regs.h"
#include "../../util/perf_regs.h"
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
@@ -15,3 +15,8 @@ uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
diff --git a/tools/perf/arch/csky/util/unwind-libdw.c b/tools/perf/arch/csky/util/unwind-libdw.c
index 79df4374ab18..b20b1569783d 100644
--- a/tools/perf/arch/csky/util/unwind-libdw.c
+++ b/tools/perf/arch/csky/util/unwind-libdw.c
@@ -10,7 +10,7 @@
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[PERF_REG_CSKY_MAX];
#define REG(r) ({ \
diff --git a/tools/perf/arch/loongarch/Build b/tools/perf/arch/loongarch/Build
index e4e5f33c84d8..e63eabc2c8f4 100644
--- a/tools/perf/arch/loongarch/Build
+++ b/tools/perf/arch/loongarch/Build
@@ -1 +1 @@
-perf-y += util/
+perf-util-y += util/
diff --git a/tools/perf/arch/loongarch/Makefile b/tools/perf/arch/loongarch/Makefile
index c392e7af4743..087e099fb453 100644
--- a/tools/perf/arch/loongarch/Makefile
+++ b/tools/perf/arch/loongarch/Makefile
@@ -1,28 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
-PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
PERF_HAVE_JITDUMP := 1
-
-#
-# Syscall table generation for perf
-#
-
-out := $(OUTPUT)arch/loongarch/include/generated/asm
-header := $(out)/syscalls.c
-incpath := $(srctree)/tools
-sysdef := $(srctree)/tools/arch/loongarch/include/uapi/asm/unistd.h
-sysprf := $(srctree)/tools/perf/arch/loongarch/entry/syscalls/
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
- $(Q)$(SHELL) '$(systbl)' '$(CC)' '$(HOSTCC)' $(incpath) $(sysdef) > $@
-
-clean::
- $(call QUIET_CLEAN, loongarch) $(RM) $(header)
-
-archheaders: $(header)
+HAVE_KVM_STAT_SUPPORT := 1
diff --git a/tools/perf/arch/loongarch/annotate/instructions.c b/tools/perf/arch/loongarch/annotate/instructions.c
index 21cc7e4149f7..70262d5f1444 100644
--- a/tools/perf/arch/loongarch/annotate/instructions.c
+++ b/tools/perf/arch/loongarch/annotate/instructions.c
@@ -5,7 +5,8 @@
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
-static int loongarch_call__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms)
+static int loongarch_call__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms,
+ struct disasm_line *dl __maybe_unused)
{
char *c, *endptr, *tok, *name;
struct map *map = ms->map;
@@ -51,7 +52,8 @@ static struct ins_ops loongarch_call_ops = {
.scnprintf = call__scnprintf,
};
-static int loongarch_jump__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms)
+static int loongarch_jump__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms,
+ struct disasm_line *dl __maybe_unused)
{
struct map *map = ms->map;
struct symbol *sym = ms->sym;
@@ -129,6 +131,8 @@ int loongarch__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
arch->associate_instruction_ops = loongarch__associate_ins_ops;
arch->initialized = true;
arch->objdump.comment_char = '#';
+ arch->e_machine = EM_LOONGARCH;
+ arch->e_flags = 0;
}
return 0;
diff --git a/tools/perf/arch/loongarch/entry/syscalls/mksyscalltbl b/tools/perf/arch/loongarch/entry/syscalls/mksyscalltbl
deleted file mode 100755
index c10ad3580aef..000000000000
--- a/tools/perf/arch/loongarch/entry/syscalls/mksyscalltbl
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# powerpc script.
-#
-# Author(s): Ming Wang <wangming01@loongson.cn>
-# Author(s): Huacai Chen <chenhuacai@loongson.cn>
-# Copyright (C) 2020-2023 Loongson Technology Corporation Limited
-
-gcc=$1
-hostcc=$2
-incpath=$3
-input=$4
-
-if ! test -r $input; then
- echo "Could not read input file" >&2
- exit 1
-fi
-
-create_sc_table()
-{
- local sc nr max_nr
-
- while read sc nr; do
- printf "%s\n" " [$nr] = \"$sc\","
- max_nr=$nr
- done
-
- echo "#define SYSCALLTBL_LOONGARCH_MAX_ID $max_nr"
-}
-
-create_table()
-{
- echo "#include \"$input\""
- echo "static const char *const syscalltbl_loongarch[] = {"
- create_sc_table
- echo "};"
-}
-
-$gcc -E -dM -x c -I $incpath/include/uapi $input \
- |awk '$2 ~ "__NR" && $3 !~ "__NR3264_" {
- sub("^#define __NR(3264)?_", "");
- print | "sort -k2 -n"}' \
- |create_table
diff --git a/tools/perf/arch/loongarch/util/Build b/tools/perf/arch/loongarch/util/Build
index d776125a2d06..0aa31986ecb5 100644
--- a/tools/perf/arch/loongarch/util/Build
+++ b/tools/perf/arch/loongarch/util/Build
@@ -1,5 +1,6 @@
-perf-y += perf_regs.o
+perf-util-y += header.o
+perf-util-y += perf_regs.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
diff --git a/tools/perf/arch/loongarch/util/dwarf-regs.c b/tools/perf/arch/loongarch/util/dwarf-regs.c
deleted file mode 100644
index 0f6ebc387463..000000000000
--- a/tools/perf/arch/loongarch/util/dwarf-regs.c
+++ /dev/null
@@ -1,44 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * dwarf-regs.c : Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (C) 2020-2023 Loongson Technology Corporation Limited
- */
-
-#include <stdio.h>
-#include <errno.h> /* for EINVAL */
-#include <string.h> /* for strcmp */
-#include <dwarf-regs.h>
-
-struct pt_regs_dwarfnum {
- const char *name;
- unsigned int dwarfnum;
-};
-
-static struct pt_regs_dwarfnum loongarch_gpr_table[] = {
- {"%r0", 0}, {"%r1", 1}, {"%r2", 2}, {"%r3", 3},
- {"%r4", 4}, {"%r5", 5}, {"%r6", 6}, {"%r7", 7},
- {"%r8", 8}, {"%r9", 9}, {"%r10", 10}, {"%r11", 11},
- {"%r12", 12}, {"%r13", 13}, {"%r14", 14}, {"%r15", 15},
- {"%r16", 16}, {"%r17", 17}, {"%r18", 18}, {"%r19", 19},
- {"%r20", 20}, {"%r21", 21}, {"%r22", 22}, {"%r23", 23},
- {"%r24", 24}, {"%r25", 25}, {"%r26", 26}, {"%r27", 27},
- {"%r28", 28}, {"%r29", 29}, {"%r30", 30}, {"%r31", 31},
- {NULL, 0}
-};
-
-const char *get_arch_regstr(unsigned int n)
-{
- n %= 32;
- return loongarch_gpr_table[n].name;
-}
-
-int regs_query_register_offset(const char *name)
-{
- const struct pt_regs_dwarfnum *roff;
-
- for (roff = loongarch_gpr_table; roff->name != NULL; roff++)
- if (!strcmp(roff->name, name))
- return roff->dwarfnum;
- return -EINVAL;
-}
diff --git a/tools/perf/arch/loongarch/util/header.c b/tools/perf/arch/loongarch/util/header.c
new file mode 100644
index 000000000000..0c6d823334a2
--- /dev/null
+++ b/tools/perf/arch/loongarch/util/header.c
@@ -0,0 +1,96 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Implementation of get_cpuid().
+ *
+ * Author: Nikita Shubin <n.shubin@yadro.com>
+ * Bibo Mao <maobibo@loongson.cn>
+ * Huacai Chen <chenhuacai@loongson.cn>
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <api/fs/fs.h>
+#include <errno.h>
+#include "util/debug.h"
+#include "util/header.h"
+
+/*
+ * Output example from /proc/cpuinfo
+ * CPU Family : Loongson-64bit
+ * Model Name : Loongson-3C5000
+ * CPU Revision : 0x10
+ * FPU Revision : 0x01
+ */
+#define CPUINFO_MODEL "Model Name"
+#define CPUINFO "/proc/cpuinfo"
+
+static char *_get_field(const char *line)
+{
+ char *line2, *nl;
+
+ line2 = strrchr(line, ' ');
+ if (!line2)
+ return NULL;
+
+ line2++;
+ nl = strrchr(line, '\n');
+ if (!nl)
+ return NULL;
+
+ return strndup(line2, nl - line2);
+}
+
+static char *_get_cpuid(void)
+{
+ unsigned long line_sz;
+ char *line, *model, *cpuid;
+ FILE *file;
+
+ file = fopen(CPUINFO, "r");
+ if (file == NULL)
+ return NULL;
+
+ line = model = cpuid = NULL;
+ while (getline(&line, &line_sz, file) != -1) {
+ if (strncmp(line, CPUINFO_MODEL, strlen(CPUINFO_MODEL)))
+ continue;
+
+ model = _get_field(line);
+ if (!model)
+ goto out_free;
+ break;
+ }
+
+ if (model && (asprintf(&cpuid, "%s", model) < 0))
+ cpuid = NULL;
+
+out_free:
+ fclose(file);
+ free(model);
+ return cpuid;
+}
+
+int get_cpuid(char *buffer, size_t sz, struct perf_cpu cpu __maybe_unused)
+{
+ int ret = 0;
+ char *cpuid = _get_cpuid();
+
+ if (!cpuid)
+ return EINVAL;
+
+ if (sz < strlen(cpuid)) {
+ ret = ENOBUFS;
+ goto out_free;
+ }
+
+ scnprintf(buffer, sz, "%s", cpuid);
+
+out_free:
+ free(cpuid);
+ return ret;
+}
+
+char *get_cpuid_str(struct perf_cpu cpu __maybe_unused)
+{
+ return _get_cpuid();
+}
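The new LoongArch get_cpuid() above takes the text after the last space on the "Model Name" line of /proc/cpuinfo as the CPU identifier. A minimal sketch of that extraction with an illustrative input line; field_after_last_space() is a hypothetical stand-in for _get_field():

/* Hypothetical example mirroring the field extraction used above. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *field_after_last_space(const char *line)
{
	const char *start = strrchr(line, ' ');
	const char *nl = strrchr(line, '\n');

	if (!start || !nl)
		return NULL;
	start++;
	if (nl <= start)
		return NULL;
	return strndup(start, nl - start);	/* copy up to the newline */
}

int main(void)
{
	char *model = field_after_last_space("Model Name\t\t: Loongson-3C5000\n");

	if (model) {
		printf("cpuid: %s\n", model);	/* prints "Loongson-3C5000" */
		free(model);
	}
	return 0;
}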
diff --git a/tools/perf/arch/loongarch/util/kvm-stat.c b/tools/perf/arch/loongarch/util/kvm-stat.c
new file mode 100644
index 000000000000..a7859a3a9a51
--- /dev/null
+++ b/tools/perf/arch/loongarch/util/kvm-stat.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <errno.h>
+#include <memory.h>
+#include "util/kvm-stat.h"
+#include "util/parse-events.h"
+#include "util/debug.h"
+#include "util/evsel.h"
+#include "util/evlist.h"
+#include "util/pmus.h"
+
+#define LOONGARCH_EXCEPTION_INT 0
+#define LOONGARCH_EXCEPTION_PIL 1
+#define LOONGARCH_EXCEPTION_PIS 2
+#define LOONGARCH_EXCEPTION_PIF 3
+#define LOONGARCH_EXCEPTION_PME 4
+#define LOONGARCH_EXCEPTION_FPD 15
+#define LOONGARCH_EXCEPTION_SXD 16
+#define LOONGARCH_EXCEPTION_ASXD 17
+#define LOONGARCH_EXCEPTION_GSPR 22
+#define LOONGARCH_EXCEPTION_CPUCFG 100
+#define LOONGARCH_EXCEPTION_CSR 101
+#define LOONGARCH_EXCEPTION_IOCSR 102
+#define LOONGARCH_EXCEPTION_IDLE 103
+#define LOONGARCH_EXCEPTION_OTHERS 104
+#define LOONGARCH_EXCEPTION_HVC 23
+
+#define loongarch_exception_type \
+ {LOONGARCH_EXCEPTION_INT, "Interrupt" }, \
+ {LOONGARCH_EXCEPTION_PIL, "Mem Read" }, \
+ {LOONGARCH_EXCEPTION_PIS, "Mem Store" }, \
+ {LOONGARCH_EXCEPTION_PIF, "Inst Fetch" }, \
+ {LOONGARCH_EXCEPTION_PME, "Mem Modify" }, \
+ {LOONGARCH_EXCEPTION_FPD, "FPU" }, \
+ {LOONGARCH_EXCEPTION_SXD, "LSX" }, \
+ {LOONGARCH_EXCEPTION_ASXD, "LASX" }, \
+ {LOONGARCH_EXCEPTION_GSPR, "Privilege Error" }, \
+ {LOONGARCH_EXCEPTION_HVC, "Hypercall" }, \
+ {LOONGARCH_EXCEPTION_CPUCFG, "CPUCFG" }, \
+ {LOONGARCH_EXCEPTION_CSR, "CSR" }, \
+ {LOONGARCH_EXCEPTION_IOCSR, "IOCSR" }, \
+ {LOONGARCH_EXCEPTION_IDLE, "Idle" }, \
+ {LOONGARCH_EXCEPTION_OTHERS, "Others" }
+
+define_exit_reasons_table(loongarch_exit_reasons, loongarch_exception_type);
+
+const char *vcpu_id_str = "vcpu_id";
+const char *kvm_exit_reason = "reason";
+const char *kvm_entry_trace = "kvm:kvm_enter";
+const char *kvm_reenter_trace = "kvm:kvm_reenter";
+const char *kvm_exit_trace = "kvm:kvm_exit";
+const char *kvm_events_tp[] = {
+ "kvm:kvm_enter",
+ "kvm:kvm_reenter",
+ "kvm:kvm_exit",
+ "kvm:kvm_exit_gspr",
+ NULL,
+};
+
+static bool event_begin(struct evsel *evsel,
+ struct perf_sample *sample, struct event_key *key)
+{
+ return exit_event_begin(evsel, sample, key);
+}
+
+static bool event_end(struct evsel *evsel,
+ struct perf_sample *sample __maybe_unused,
+ struct event_key *key __maybe_unused)
+{
+ /*
+ * LoongArch KVM is different from other architectures.
+ *
+ * A kvm:kvm_reenter or kvm:kvm_enter event is adjacent to each
+ * kvm:kvm_exit event.
+ * kvm:kvm_enter means returning to the VMM and then to the guest;
+ * kvm:kvm_reenter means returning to the guest immediately.
+ */
+ return evsel__name_is(evsel, kvm_entry_trace) || evsel__name_is(evsel, kvm_reenter_trace);
+}
+
+static void event_gspr_get_key(struct evsel *evsel,
+ struct perf_sample *sample, struct event_key *key)
+{
+ unsigned int insn;
+
+ key->key = LOONGARCH_EXCEPTION_OTHERS;
+ insn = evsel__intval(evsel, sample, "inst_word");
+
+ switch (insn >> 24) {
+ case 0:
+ /* CPUCFG inst trap */
+ if ((insn >> 10) == 0x1b)
+ key->key = LOONGARCH_EXCEPTION_CPUCFG;
+ break;
+ case 4:
+ /* CSR inst trap */
+ key->key = LOONGARCH_EXCEPTION_CSR;
+ break;
+ case 6:
+ /* IOCSR inst trap */
+ if ((insn >> 15) == 0xc90)
+ key->key = LOONGARCH_EXCEPTION_IOCSR;
+ else if ((insn >> 15) == 0xc91)
+ /* Idle inst trap */
+ key->key = LOONGARCH_EXCEPTION_IDLE;
+ break;
+ default:
+ key->key = LOONGARCH_EXCEPTION_OTHERS;
+ break;
+ }
+}
+
+static struct child_event_ops child_events[] = {
+ { .name = "kvm:kvm_exit_gspr", .get_key = event_gspr_get_key },
+ { NULL, NULL },
+};
+
+static struct kvm_events_ops exit_events = {
+ .is_begin_event = event_begin,
+ .is_end_event = event_end,
+ .child_ops = child_events,
+ .decode_key = exit_event_decode_key,
+ .name = "VM-EXIT"
+};
+
+struct kvm_reg_events_ops kvm_reg_events_ops[] = {
+ { .name = "vmexit", .ops = &exit_events, },
+ { NULL, NULL },
+};
+
+const char * const kvm_skip_events[] = {
+ NULL,
+};
+
+int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid __maybe_unused)
+{
+ kvm->exit_reasons_isa = "loongarch64";
+ kvm->exit_reasons = loongarch_exit_reasons;
+ return 0;
+}
diff --git a/tools/perf/arch/loongarch/util/perf_regs.c b/tools/perf/arch/loongarch/util/perf_regs.c
index 2c56e8b56ddf..f94a0210c7b7 100644
--- a/tools/perf/arch/loongarch/util/perf_regs.c
+++ b/tools/perf/arch/loongarch/util/perf_regs.c
@@ -2,7 +2,7 @@
#include "perf_regs.h"
#include "../../../util/perf_regs.h"
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
@@ -15,3 +15,8 @@ uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
diff --git a/tools/perf/arch/loongarch/util/unwind-libdw.c b/tools/perf/arch/loongarch/util/unwind-libdw.c
index 7b3b9a4b21f8..60b1144bedd5 100644
--- a/tools/perf/arch/loongarch/util/unwind-libdw.c
+++ b/tools/perf/arch/loongarch/util/unwind-libdw.c
@@ -10,7 +10,7 @@
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[PERF_REG_LOONGARCH_MAX];
#define REG(r) ({ \
diff --git a/tools/perf/arch/mips/Build b/tools/perf/arch/mips/Build
index e4e5f33c84d8..e63eabc2c8f4 100644
--- a/tools/perf/arch/mips/Build
+++ b/tools/perf/arch/mips/Build
@@ -1 +1 @@
-perf-y += util/
+perf-util-y += util/
diff --git a/tools/perf/arch/mips/Makefile b/tools/perf/arch/mips/Makefile
deleted file mode 100644
index 8bc09072e3d6..000000000000
--- a/tools/perf/arch/mips/Makefile
+++ /dev/null
@@ -1,22 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
-
-# Syscall table generation for perf
-out := $(OUTPUT)arch/mips/include/generated/asm
-header := $(out)/syscalls_n64.c
-sysprf := $(srctree)/tools/perf/arch/mips/entry/syscalls
-sysdef := $(sysprf)/syscall_n64.tbl
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
- $(Q)$(SHELL) '$(systbl)' $(sysdef) > $@
-
-clean::
- $(call QUIET_CLEAN, mips) $(RM) $(header)
-
-archheaders: $(header)
diff --git a/tools/perf/arch/mips/annotate/instructions.c b/tools/perf/arch/mips/annotate/instructions.c
index 340993f2a897..b50b46c613d6 100644
--- a/tools/perf/arch/mips/annotate/instructions.c
+++ b/tools/perf/arch/mips/annotate/instructions.c
@@ -40,6 +40,8 @@ int mips__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
arch->associate_instruction_ops = mips__associate_ins_ops;
arch->initialized = true;
arch->objdump.comment_char = '#';
+ arch->e_machine = EM_MIPS;
+ arch->e_flags = 0;
}
return 0;
diff --git a/tools/perf/arch/mips/entry/syscalls/mksyscalltbl b/tools/perf/arch/mips/entry/syscalls/mksyscalltbl
deleted file mode 100644
index c0d93f959c4e..000000000000
--- a/tools/perf/arch/mips/entry/syscalls/mksyscalltbl
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# s390 script.
-#
-# Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-# Changed by: Tiezhu Yang <yangtiezhu@loongson.cn>
-
-SYSCALL_TBL=$1
-
-if ! test -r $SYSCALL_TBL; then
- echo "Could not read input file" >&2
- exit 1
-fi
-
-create_table()
-{
- local max_nr nr abi sc discard
-
- echo 'static const char *const syscalltbl_mips_n64[] = {'
- while read nr abi sc discard; do
- printf '\t[%d] = "%s",\n' $nr $sc
- max_nr=$nr
- done
- echo '};'
- echo "#define SYSCALLTBL_MIPS_N64_MAX_ID $max_nr"
-}
-
-grep -E "^[[:digit:]]+[[:space:]]+(n64)" $SYSCALL_TBL \
- |sort -k1 -n \
- |create_table
diff --git a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
index 532b855df589..7a7049c2c307 100644
--- a/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
+++ b/tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
@@ -376,3 +376,11 @@
459 n64 lsm_get_self_attr sys_lsm_get_self_attr
460 n64 lsm_set_self_attr sys_lsm_set_self_attr
461 n64 lsm_list_modules sys_lsm_list_modules
+462 n64 mseal sys_mseal
+463 n64 setxattrat sys_setxattrat
+464 n64 getxattrat sys_getxattrat
+465 n64 listxattrat sys_listxattrat
+466 n64 removexattrat sys_removexattrat
+467 n64 open_tree_attr sys_open_tree_attr
+468 n64 file_getattr sys_file_getattr
+469 n64 file_setattr sys_file_setattr
diff --git a/tools/perf/arch/mips/util/Build b/tools/perf/arch/mips/util/Build
index 51c8900a9a10..691fa2051958 100644
--- a/tools/perf/arch/mips/util/Build
+++ b/tools/perf/arch/mips/util/Build
@@ -1,3 +1,2 @@
-perf-y += perf_regs.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
+perf-util-y += perf_regs.o
+perf-util-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
diff --git a/tools/perf/arch/mips/util/dwarf-regs.c b/tools/perf/arch/mips/util/dwarf-regs.c
deleted file mode 100644
index 25c13a91c2a7..000000000000
--- a/tools/perf/arch/mips/util/dwarf-regs.c
+++ /dev/null
@@ -1,38 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * dwarf-regs.c : Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (C) 2013 Cavium, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- */
-
-#include <stdio.h>
-#include <dwarf-regs.h>
-
-static const char *mips_gpr_names[32] = {
- "$0", "$1", "$2", "$3", "$4", "$5", "$6", "$7", "$8", "$9",
- "$10", "$11", "$12", "$13", "$14", "$15", "$16", "$17", "$18", "$19",
- "$20", "$21", "$22", "$23", "$24", "$25", "$26", "$27", "$28", "$29",
- "$30", "$31"
-};
-
-const char *get_arch_regstr(unsigned int n)
-{
- if (n < 32)
- return mips_gpr_names[n];
- if (n == 64)
- return "hi";
- if (n == 65)
- return "lo";
- return NULL;
-}
diff --git a/tools/perf/arch/mips/util/perf_regs.c b/tools/perf/arch/mips/util/perf_regs.c
index c0877c264d49..6b1665f41180 100644
--- a/tools/perf/arch/mips/util/perf_regs.c
+++ b/tools/perf/arch/mips/util/perf_regs.c
@@ -2,7 +2,7 @@
#include "perf_regs.h"
#include "../../util/perf_regs.h"
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
@@ -15,3 +15,8 @@ uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
diff --git a/tools/perf/arch/parisc/entry/syscalls/syscall.tbl b/tools/perf/arch/parisc/entry/syscalls/syscall.tbl
new file mode 100644
index 000000000000..66dc406b12e4
--- /dev/null
+++ b/tools/perf/arch/parisc/entry/syscalls/syscall.tbl
@@ -0,0 +1,463 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# system call numbers and entry vectors for parisc
+#
+# The format is:
+# <number> <abi> <name> <entry point> <compat entry point>
+#
+# The <abi> can be common, 64, or 32 for this file.
+#
+0 common restart_syscall sys_restart_syscall
+1 common exit sys_exit
+2 common fork sys_fork_wrapper
+3 common read sys_read
+4 common write sys_write
+5 common open sys_open compat_sys_open
+6 common close sys_close
+7 common waitpid sys_waitpid
+8 common creat sys_creat
+9 common link sys_link
+10 common unlink sys_unlink
+11 common execve sys_execve compat_sys_execve
+12 common chdir sys_chdir
+13 32 time sys_time32
+13 64 time sys_time
+14 common mknod sys_mknod
+15 common chmod sys_chmod
+16 common lchown sys_lchown
+17 common socket sys_socket
+18 common stat sys_newstat compat_sys_newstat
+19 common lseek sys_lseek compat_sys_lseek
+20 common getpid sys_getpid
+21 common mount sys_mount
+22 common bind sys_bind
+23 common setuid sys_setuid
+24 common getuid sys_getuid
+25 32 stime sys_stime32
+25 64 stime sys_stime
+26 common ptrace sys_ptrace compat_sys_ptrace
+27 common alarm sys_alarm
+28 common fstat sys_newfstat compat_sys_newfstat
+29 common pause sys_pause
+30 32 utime sys_utime32
+30 64 utime sys_utime
+31 common connect sys_connect
+32 common listen sys_listen
+33 common access sys_access
+34 common nice sys_nice
+35 common accept sys_accept
+36 common sync sys_sync
+37 common kill sys_kill
+38 common rename sys_rename
+39 common mkdir sys_mkdir
+40 common rmdir sys_rmdir
+41 common dup sys_dup
+42 common pipe sys_pipe
+43 common times sys_times compat_sys_times
+44 common getsockname sys_getsockname
+45 common brk sys_brk
+46 common setgid sys_setgid
+47 common getgid sys_getgid
+48 common signal sys_signal
+49 common geteuid sys_geteuid
+50 common getegid sys_getegid
+51 common acct sys_acct
+52 common umount2 sys_umount
+53 common getpeername sys_getpeername
+54 common ioctl sys_ioctl compat_sys_ioctl
+55 common fcntl sys_fcntl compat_sys_fcntl
+56 common socketpair sys_socketpair
+57 common setpgid sys_setpgid
+58 common send sys_send
+59 common uname sys_newuname
+60 common umask sys_umask
+61 common chroot sys_chroot
+62 common ustat sys_ustat compat_sys_ustat
+63 common dup2 sys_dup2
+64 common getppid sys_getppid
+65 common getpgrp sys_getpgrp
+66 common setsid sys_setsid
+67 common pivot_root sys_pivot_root
+68 common sgetmask sys_sgetmask sys32_unimplemented
+69 common ssetmask sys_ssetmask sys32_unimplemented
+70 common setreuid sys_setreuid
+71 common setregid sys_setregid
+72 common mincore sys_mincore
+73 common sigpending sys_sigpending compat_sys_sigpending
+74 common sethostname sys_sethostname
+75 common setrlimit sys_setrlimit compat_sys_setrlimit
+76 common getrlimit sys_getrlimit compat_sys_getrlimit
+77 common getrusage sys_getrusage compat_sys_getrusage
+78 common gettimeofday sys_gettimeofday compat_sys_gettimeofday
+79 common settimeofday sys_settimeofday compat_sys_settimeofday
+80 common getgroups sys_getgroups
+81 common setgroups sys_setgroups
+82 common sendto sys_sendto
+83 common symlink sys_symlink
+84 common lstat sys_newlstat compat_sys_newlstat
+85 common readlink sys_readlink
+86 common uselib sys_ni_syscall
+87 common swapon sys_swapon
+88 common reboot sys_reboot
+89 common mmap2 sys_mmap2
+90 common mmap sys_mmap
+91 common munmap sys_munmap
+92 common truncate sys_truncate compat_sys_truncate
+93 common ftruncate sys_ftruncate compat_sys_ftruncate
+94 common fchmod sys_fchmod
+95 common fchown sys_fchown
+96 common getpriority sys_getpriority
+97 common setpriority sys_setpriority
+98 common recv sys_recv compat_sys_recv
+99 common statfs sys_statfs compat_sys_statfs
+100 common fstatfs sys_fstatfs compat_sys_fstatfs
+101 common stat64 sys_stat64
+# 102 was socketcall
+103 common syslog sys_syslog
+104 common setitimer sys_setitimer compat_sys_setitimer
+105 common getitimer sys_getitimer compat_sys_getitimer
+106 common capget sys_capget
+107 common capset sys_capset
+108 32 pread64 parisc_pread64
+108 64 pread64 sys_pread64
+109 32 pwrite64 parisc_pwrite64
+109 64 pwrite64 sys_pwrite64
+110 common getcwd sys_getcwd
+111 common vhangup sys_vhangup
+112 common fstat64 sys_fstat64
+113 common vfork sys_vfork_wrapper
+114 common wait4 sys_wait4 compat_sys_wait4
+115 common swapoff sys_swapoff
+116 common sysinfo sys_sysinfo compat_sys_sysinfo
+117 common shutdown sys_shutdown
+118 common fsync sys_fsync
+119 common madvise parisc_madvise
+120 common clone sys_clone_wrapper
+121 common setdomainname sys_setdomainname
+122 common sendfile sys_sendfile compat_sys_sendfile
+123 common recvfrom sys_recvfrom compat_sys_recvfrom
+124 32 adjtimex sys_adjtimex_time32
+124 64 adjtimex sys_adjtimex
+125 common mprotect sys_mprotect
+126 common sigprocmask sys_sigprocmask compat_sys_sigprocmask
+# 127 was create_module
+128 common init_module sys_init_module
+129 common delete_module sys_delete_module
+# 130 was get_kernel_syms
+131 common quotactl sys_quotactl
+132 common getpgid sys_getpgid
+133 common fchdir sys_fchdir
+134 common bdflush sys_ni_syscall
+135 common sysfs sys_sysfs
+136 32 personality parisc_personality
+136 64 personality sys_personality
+# 137 was afs_syscall
+138 common setfsuid sys_setfsuid
+139 common setfsgid sys_setfsgid
+140 common _llseek sys_llseek
+141 common getdents sys_getdents compat_sys_getdents
+142 common _newselect sys_select compat_sys_select
+143 common flock sys_flock
+144 common msync sys_msync
+145 common readv sys_readv
+146 common writev sys_writev
+147 common getsid sys_getsid
+148 common fdatasync sys_fdatasync
+149 common _sysctl sys_ni_syscall
+150 common mlock sys_mlock
+151 common munlock sys_munlock
+152 common mlockall sys_mlockall
+153 common munlockall sys_munlockall
+154 common sched_setparam sys_sched_setparam
+155 common sched_getparam sys_sched_getparam
+156 common sched_setscheduler sys_sched_setscheduler
+157 common sched_getscheduler sys_sched_getscheduler
+158 common sched_yield sys_sched_yield
+159 common sched_get_priority_max sys_sched_get_priority_max
+160 common sched_get_priority_min sys_sched_get_priority_min
+161 32 sched_rr_get_interval sys_sched_rr_get_interval_time32
+161 64 sched_rr_get_interval sys_sched_rr_get_interval
+162 32 nanosleep sys_nanosleep_time32
+162 64 nanosleep sys_nanosleep
+163 common mremap sys_mremap
+164 common setresuid sys_setresuid
+165 common getresuid sys_getresuid
+166 common sigaltstack sys_sigaltstack compat_sys_sigaltstack
+# 167 was query_module
+168 common poll sys_poll
+# 169 was nfsservctl
+170 common setresgid sys_setresgid
+171 common getresgid sys_getresgid
+172 common prctl sys_prctl
+173 common rt_sigreturn sys_rt_sigreturn_wrapper
+174 common rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
+175 common rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
+176 common rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
+177 32 rt_sigtimedwait sys_rt_sigtimedwait_time32 compat_sys_rt_sigtimedwait_time32
+177 64 rt_sigtimedwait sys_rt_sigtimedwait
+178 common rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
+179 common rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
+180 common chown sys_chown
+181 common setsockopt sys_setsockopt sys_setsockopt
+182 common getsockopt sys_getsockopt sys_getsockopt
+183 common sendmsg sys_sendmsg compat_sys_sendmsg
+184 common recvmsg sys_recvmsg compat_sys_recvmsg
+185 common semop sys_semop
+186 common semget sys_semget
+187 common semctl sys_semctl compat_sys_semctl
+188 common msgsnd sys_msgsnd compat_sys_msgsnd
+189 common msgrcv sys_msgrcv compat_sys_msgrcv
+190 common msgget sys_msgget
+191 common msgctl sys_msgctl compat_sys_msgctl
+192 common shmat sys_shmat compat_sys_shmat
+193 common shmdt sys_shmdt
+194 common shmget sys_shmget
+195 common shmctl sys_shmctl compat_sys_shmctl
+# 196 was getpmsg
+# 197 was putpmsg
+198 common lstat64 sys_lstat64
+199 32 truncate64 parisc_truncate64
+199 64 truncate64 sys_truncate64
+200 32 ftruncate64 parisc_ftruncate64
+200 64 ftruncate64 sys_ftruncate64
+201 common getdents64 sys_getdents64
+202 common fcntl64 sys_fcntl64 compat_sys_fcntl64
+# 203 was attrctl
+# 204 was acl_get
+# 205 was acl_set
+206 common gettid sys_gettid
+207 32 readahead parisc_readahead
+207 64 readahead sys_readahead
+208 common tkill sys_tkill
+209 common sendfile64 sys_sendfile64 compat_sys_sendfile64
+210 32 futex sys_futex_time32
+210 64 futex sys_futex
+211 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
+212 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
+# 213 was set_thread_area
+# 214 was get_thread_area
+215 common io_setup sys_io_setup compat_sys_io_setup
+216 common io_destroy sys_io_destroy
+217 32 io_getevents sys_io_getevents_time32
+217 64 io_getevents sys_io_getevents
+218 common io_submit sys_io_submit compat_sys_io_submit
+219 common io_cancel sys_io_cancel
+# 220 was alloc_hugepages
+# 221 was free_hugepages
+222 common exit_group sys_exit_group
+223 common lookup_dcookie sys_ni_syscall
+224 common epoll_create sys_epoll_create
+225 common epoll_ctl sys_epoll_ctl
+226 common epoll_wait sys_epoll_wait
+227 common remap_file_pages sys_remap_file_pages
+228 32 semtimedop sys_semtimedop_time32
+228 64 semtimedop sys_semtimedop
+229 common mq_open sys_mq_open compat_sys_mq_open
+230 common mq_unlink sys_mq_unlink
+231 32 mq_timedsend sys_mq_timedsend_time32
+231 64 mq_timedsend sys_mq_timedsend
+232 32 mq_timedreceive sys_mq_timedreceive_time32
+232 64 mq_timedreceive sys_mq_timedreceive
+233 common mq_notify sys_mq_notify compat_sys_mq_notify
+234 common mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
+235 common waitid sys_waitid compat_sys_waitid
+236 32 fadvise64_64 parisc_fadvise64_64
+236 64 fadvise64_64 sys_fadvise64_64
+237 common set_tid_address sys_set_tid_address
+238 common setxattr sys_setxattr
+239 common lsetxattr sys_lsetxattr
+240 common fsetxattr sys_fsetxattr
+241 common getxattr sys_getxattr
+242 common lgetxattr sys_lgetxattr
+243 common fgetxattr sys_fgetxattr
+244 common listxattr sys_listxattr
+245 common llistxattr sys_llistxattr
+246 common flistxattr sys_flistxattr
+247 common removexattr sys_removexattr
+248 common lremovexattr sys_lremovexattr
+249 common fremovexattr sys_fremovexattr
+250 common timer_create sys_timer_create compat_sys_timer_create
+251 32 timer_settime sys_timer_settime32
+251 64 timer_settime sys_timer_settime
+252 32 timer_gettime sys_timer_gettime32
+252 64 timer_gettime sys_timer_gettime
+253 common timer_getoverrun sys_timer_getoverrun
+254 common timer_delete sys_timer_delete
+255 32 clock_settime sys_clock_settime32
+255 64 clock_settime sys_clock_settime
+256 32 clock_gettime sys_clock_gettime32
+256 64 clock_gettime sys_clock_gettime
+257 32 clock_getres sys_clock_getres_time32
+257 64 clock_getres sys_clock_getres
+258 32 clock_nanosleep sys_clock_nanosleep_time32
+258 64 clock_nanosleep sys_clock_nanosleep
+259 common tgkill sys_tgkill
+260 common mbind sys_mbind
+261 common get_mempolicy sys_get_mempolicy
+262 common set_mempolicy sys_set_mempolicy
+# 263 was vserver
+264 common add_key sys_add_key
+265 common request_key sys_request_key
+266 common keyctl sys_keyctl compat_sys_keyctl
+267 common ioprio_set sys_ioprio_set
+268 common ioprio_get sys_ioprio_get
+269 common inotify_init sys_inotify_init
+270 common inotify_add_watch sys_inotify_add_watch
+271 common inotify_rm_watch sys_inotify_rm_watch
+272 common migrate_pages sys_migrate_pages
+273 32 pselect6 sys_pselect6_time32 compat_sys_pselect6_time32
+273 64 pselect6 sys_pselect6
+274 32 ppoll sys_ppoll_time32 compat_sys_ppoll_time32
+274 64 ppoll sys_ppoll
+275 common openat sys_openat compat_sys_openat
+276 common mkdirat sys_mkdirat
+277 common mknodat sys_mknodat
+278 common fchownat sys_fchownat
+279 32 futimesat sys_futimesat_time32
+279 64 futimesat sys_futimesat
+280 common fstatat64 sys_fstatat64
+281 common unlinkat sys_unlinkat
+282 common renameat sys_renameat
+283 common linkat sys_linkat
+284 common symlinkat sys_symlinkat
+285 common readlinkat sys_readlinkat
+286 common fchmodat sys_fchmodat
+287 common faccessat sys_faccessat
+288 common unshare sys_unshare
+289 common set_robust_list sys_set_robust_list compat_sys_set_robust_list
+290 common get_robust_list sys_get_robust_list compat_sys_get_robust_list
+291 common splice sys_splice
+292 32 sync_file_range parisc_sync_file_range
+292 64 sync_file_range sys_sync_file_range
+293 common tee sys_tee
+294 common vmsplice sys_vmsplice
+295 common move_pages sys_move_pages
+296 common getcpu sys_getcpu
+297 common epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
+298 common statfs64 sys_statfs64 compat_sys_statfs64
+299 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64
+300 common kexec_load sys_kexec_load compat_sys_kexec_load
+301 32 utimensat sys_utimensat_time32
+301 64 utimensat sys_utimensat
+302 common signalfd sys_signalfd compat_sys_signalfd
+# 303 was timerfd
+304 common eventfd sys_eventfd
+305 32 fallocate parisc_fallocate
+305 64 fallocate sys_fallocate
+306 common timerfd_create parisc_timerfd_create
+307 32 timerfd_settime sys_timerfd_settime32
+307 64 timerfd_settime sys_timerfd_settime
+308 32 timerfd_gettime sys_timerfd_gettime32
+308 64 timerfd_gettime sys_timerfd_gettime
+309 common signalfd4 parisc_signalfd4 parisc_compat_signalfd4
+310 common eventfd2 parisc_eventfd2
+311 common epoll_create1 sys_epoll_create1
+312 common dup3 sys_dup3
+313 common pipe2 parisc_pipe2
+314 common inotify_init1 parisc_inotify_init1
+315 common preadv sys_preadv compat_sys_preadv
+316 common pwritev sys_pwritev compat_sys_pwritev
+317 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
+318 common perf_event_open sys_perf_event_open
+319 32 recvmmsg sys_recvmmsg_time32 compat_sys_recvmmsg_time32
+319 64 recvmmsg sys_recvmmsg
+320 common accept4 sys_accept4
+321 common prlimit64 sys_prlimit64
+322 common fanotify_init sys_fanotify_init
+323 common fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
+324 32 clock_adjtime sys_clock_adjtime32
+324 64 clock_adjtime sys_clock_adjtime
+325 common name_to_handle_at sys_name_to_handle_at
+326 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
+327 common syncfs sys_syncfs
+328 common setns sys_setns
+329 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
+330 common process_vm_readv sys_process_vm_readv
+331 common process_vm_writev sys_process_vm_writev
+332 common kcmp sys_kcmp
+333 common finit_module sys_finit_module
+334 common sched_setattr sys_sched_setattr
+335 common sched_getattr sys_sched_getattr
+336 32 utimes sys_utimes_time32
+336 64 utimes sys_utimes
+337 common renameat2 sys_renameat2
+338 common seccomp sys_seccomp
+339 common getrandom sys_getrandom
+340 common memfd_create sys_memfd_create
+341 common bpf sys_bpf
+342 common execveat sys_execveat compat_sys_execveat
+343 common membarrier sys_membarrier
+344 common userfaultfd parisc_userfaultfd
+345 common mlock2 sys_mlock2
+346 common copy_file_range sys_copy_file_range
+347 common preadv2 sys_preadv2 compat_sys_preadv2
+348 common pwritev2 sys_pwritev2 compat_sys_pwritev2
+349 common statx sys_statx
+350 32 io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents
+350 64 io_pgetevents sys_io_pgetevents
+351 common pkey_mprotect sys_pkey_mprotect
+352 common pkey_alloc sys_pkey_alloc
+353 common pkey_free sys_pkey_free
+354 common rseq sys_rseq
+355 common kexec_file_load sys_kexec_file_load sys_kexec_file_load
+356 common cacheflush sys_cacheflush
+# up to 402 is unassigned and reserved for arch specific syscalls
+403 32 clock_gettime64 sys_clock_gettime sys_clock_gettime
+404 32 clock_settime64 sys_clock_settime sys_clock_settime
+405 32 clock_adjtime64 sys_clock_adjtime sys_clock_adjtime
+406 32 clock_getres_time64 sys_clock_getres sys_clock_getres
+407 32 clock_nanosleep_time64 sys_clock_nanosleep sys_clock_nanosleep
+408 32 timer_gettime64 sys_timer_gettime sys_timer_gettime
+409 32 timer_settime64 sys_timer_settime sys_timer_settime
+410 32 timerfd_gettime64 sys_timerfd_gettime sys_timerfd_gettime
+411 32 timerfd_settime64 sys_timerfd_settime sys_timerfd_settime
+412 32 utimensat_time64 sys_utimensat sys_utimensat
+413 32 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
+414 32 ppoll_time64 sys_ppoll compat_sys_ppoll_time64
+416 32 io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64
+417 32 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
+418 32 mq_timedsend_time64 sys_mq_timedsend sys_mq_timedsend
+419 32 mq_timedreceive_time64 sys_mq_timedreceive sys_mq_timedreceive
+420 32 semtimedop_time64 sys_semtimedop sys_semtimedop
+421 32 rt_sigtimedwait_time64 sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time64
+422 32 futex_time64 sys_futex sys_futex
+423 32 sched_rr_get_interval_time64 sys_sched_rr_get_interval sys_sched_rr_get_interval
+424 common pidfd_send_signal sys_pidfd_send_signal
+425 common io_uring_setup sys_io_uring_setup
+426 common io_uring_enter sys_io_uring_enter
+427 common io_uring_register sys_io_uring_register
+428 common open_tree sys_open_tree
+429 common move_mount sys_move_mount
+430 common fsopen sys_fsopen
+431 common fsconfig sys_fsconfig
+432 common fsmount sys_fsmount
+433 common fspick sys_fspick
+434 common pidfd_open sys_pidfd_open
+435 common clone3 sys_clone3_wrapper
+436 common close_range sys_close_range
+437 common openat2 sys_openat2
+438 common pidfd_getfd sys_pidfd_getfd
+439 common faccessat2 sys_faccessat2
+440 common process_madvise sys_process_madvise
+441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2
+442 common mount_setattr sys_mount_setattr
+443 common quotactl_fd sys_quotactl_fd
+444 common landlock_create_ruleset sys_landlock_create_ruleset
+445 common landlock_add_rule sys_landlock_add_rule
+446 common landlock_restrict_self sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448 common process_mrelease sys_process_mrelease
+449 common futex_waitv sys_futex_waitv
+450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
+452 common fchmodat2 sys_fchmodat2
+453 common map_shadow_stack sys_map_shadow_stack
+454 common futex_wake sys_futex_wake
+455 common futex_wait sys_futex_wait
+456 common futex_requeue sys_futex_requeue
+457 common statmount sys_statmount
+458 common listmount sys_listmount
+459 common lsm_get_self_attr sys_lsm_get_self_attr
+460 common lsm_set_self_attr sys_lsm_set_self_attr
+461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
diff --git a/tools/perf/arch/powerpc/Build b/tools/perf/arch/powerpc/Build
index a7dd46a5b678..12ebc65ea7a3 100644
--- a/tools/perf/arch/powerpc/Build
+++ b/tools/perf/arch/powerpc/Build
@@ -1,2 +1,2 @@
-perf-y += util/
-perf-y += tests/
+perf-util-y += util/
+perf-test-y += tests/
diff --git a/tools/perf/arch/powerpc/Makefile b/tools/perf/arch/powerpc/Makefile
index 840ea0e59287..a295a80ea078 100644
--- a/tools/perf/arch/powerpc/Makefile
+++ b/tools/perf/arch/powerpc/Makefile
@@ -1,33 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
-
HAVE_KVM_STAT_SUPPORT := 1
-PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
PERF_HAVE_JITDUMP := 1
-
-#
-# Syscall table generation for perf
-#
-
-out := $(OUTPUT)arch/powerpc/include/generated/asm
-header32 := $(out)/syscalls_32.c
-header64 := $(out)/syscalls_64.c
-sysprf := $(srctree)/tools/perf/arch/powerpc/entry/syscalls
-sysdef := $(sysprf)/syscall.tbl
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header64): $(sysdef) $(systbl)
- $(Q)$(SHELL) '$(systbl)' '64' $(sysdef) > $@
-
-$(header32): $(sysdef) $(systbl)
- $(Q)$(SHELL) '$(systbl)' '32' $(sysdef) > $@
-
-clean::
- $(call QUIET_CLEAN, powerpc) $(RM) $(header32) $(header64)
-
-archheaders: $(header32) $(header64)
diff --git a/tools/perf/arch/powerpc/annotate/instructions.c b/tools/perf/arch/powerpc/annotate/instructions.c
index a3f423c27cae..ca567cfdcbdb 100644
--- a/tools/perf/arch/powerpc/annotate/instructions.c
+++ b/tools/perf/arch/powerpc/annotate/instructions.c
@@ -49,12 +49,268 @@ static struct ins_ops *powerpc__associate_instruction_ops(struct arch *arch, con
return ops;
}
+#define PPC_OP(op) (((op) >> 26) & 0x3F)
+#define PPC_21_30(R) (((R) >> 1) & 0x3ff)
+#define PPC_22_30(R) (((R) >> 1) & 0x1ff)
+
+struct insn_offset {
+ const char *name;
+ int value;
+};
+
+/*
+ * There are memory instructions with opcode 31 which are
+ * of X Form, for example:
+ * ldx RT,RA,RB
+ * ______________________________________
+ * | 31 |  RT  |  RA  |  RB  |   21   |/|
+ * --------------------------------------
+ * 0    6      11     16     21      30 31
+ *
+ * But not all instructions with opcode 31 are memory
+ * instructions, for example: add RT,RA,RB
+ *
+ * Use bits 21 to 30 to identify memory insns that have 31 as opcode.
+ * In ins_array below, for the ldx instruction:
+ * name => OP_31_XOP_LDX
+ * value => 21
+ */
+
+static struct insn_offset ins_array[] = {
+ { .name = "OP_31_XOP_LXSIWZX", .value = 12, },
+ { .name = "OP_31_XOP_LWARX", .value = 20, },
+ { .name = "OP_31_XOP_LDX", .value = 21, },
+ { .name = "OP_31_XOP_LWZX", .value = 23, },
+ { .name = "OP_31_XOP_LDUX", .value = 53, },
+ { .name = "OP_31_XOP_LWZUX", .value = 55, },
+ { .name = "OP_31_XOP_LXSIWAX", .value = 76, },
+ { .name = "OP_31_XOP_LDARX", .value = 84, },
+ { .name = "OP_31_XOP_LBZX", .value = 87, },
+ { .name = "OP_31_XOP_LVX", .value = 103, },
+ { .name = "OP_31_XOP_LBZUX", .value = 119, },
+ { .name = "OP_31_XOP_STXSIWX", .value = 140, },
+ { .name = "OP_31_XOP_STDX", .value = 149, },
+ { .name = "OP_31_XOP_STWX", .value = 151, },
+ { .name = "OP_31_XOP_STDUX", .value = 181, },
+ { .name = "OP_31_XOP_STWUX", .value = 183, },
+ { .name = "OP_31_XOP_STBX", .value = 215, },
+ { .name = "OP_31_XOP_STVX", .value = 231, },
+ { .name = "OP_31_XOP_STBUX", .value = 247, },
+ { .name = "OP_31_XOP_LHZX", .value = 279, },
+ { .name = "OP_31_XOP_LHZUX", .value = 311, },
+ { .name = "OP_31_XOP_LXVDSX", .value = 332, },
+ { .name = "OP_31_XOP_LWAX", .value = 341, },
+ { .name = "OP_31_XOP_LHAX", .value = 343, },
+ { .name = "OP_31_XOP_LWAUX", .value = 373, },
+ { .name = "OP_31_XOP_LHAUX", .value = 375, },
+ { .name = "OP_31_XOP_STHX", .value = 407, },
+ { .name = "OP_31_XOP_STHUX", .value = 439, },
+ { .name = "OP_31_XOP_LXSSPX", .value = 524, },
+ { .name = "OP_31_XOP_LDBRX", .value = 532, },
+ { .name = "OP_31_XOP_LSWX", .value = 533, },
+ { .name = "OP_31_XOP_LWBRX", .value = 534, },
+ { .name = "OP_31_XOP_LFSUX", .value = 567, },
+ { .name = "OP_31_XOP_LXSDX", .value = 588, },
+ { .name = "OP_31_XOP_LSWI", .value = 597, },
+ { .name = "OP_31_XOP_LFDX", .value = 599, },
+ { .name = "OP_31_XOP_LFDUX", .value = 631, },
+ { .name = "OP_31_XOP_STXSSPX", .value = 652, },
+ { .name = "OP_31_XOP_STDBRX", .value = 660, },
+ { .name = "OP_31_XOP_STXWX", .value = 661, },
+ { .name = "OP_31_XOP_STWBRX", .value = 662, },
+ { .name = "OP_31_XOP_STFSX", .value = 663, },
+ { .name = "OP_31_XOP_STFSUX", .value = 695, },
+ { .name = "OP_31_XOP_STXSDX", .value = 716, },
+ { .name = "OP_31_XOP_STSWI", .value = 725, },
+ { .name = "OP_31_XOP_STFDX", .value = 727, },
+ { .name = "OP_31_XOP_STFDUX", .value = 759, },
+ { .name = "OP_31_XOP_LXVW4X", .value = 780, },
+ { .name = "OP_31_XOP_LHBRX", .value = 790, },
+ { .name = "OP_31_XOP_LXVD2X", .value = 844, },
+ { .name = "OP_31_XOP_LFIWAX", .value = 855, },
+ { .name = "OP_31_XOP_LFIWZX", .value = 887, },
+ { .name = "OP_31_XOP_STXVW4X", .value = 908, },
+ { .name = "OP_31_XOP_STHBRX", .value = 918, },
+ { .name = "OP_31_XOP_STXVD2X", .value = 972, },
+ { .name = "OP_31_XOP_STFIWX", .value = 983, },
+};
+
+/*
+ * Arithmetic instructions that have opcode 31.
+ * These instructions are tracked so that register state
+ * changes can be saved. Example:
+ *
+ * lwz r10,264(r3)
+ * add r31, r3, r3
+ * lwz r9, 0(r31)
+ *
+ * Here instruction tracking needs to identify the "add"
+ * instruction and propagate the data type of r3 to r31. If a
+ * sample then hits "lwz r9, 0(r31)", this tracking allows the
+ * data type of r31 to be resolved.
+ */
+static struct insn_offset arithmetic_ins_op_31[] = {
+ { .name = "SUB_CARRY_XO_FORM", .value = 8, },
+ { .name = "MUL_HDW_XO_FORM1", .value = 9, },
+ { .name = "ADD_CARRY_XO_FORM", .value = 10, },
+ { .name = "MUL_HW_XO_FORM1", .value = 11, },
+ { .name = "SUB_XO_FORM", .value = 40, },
+ { .name = "MUL_HDW_XO_FORM", .value = 73, },
+ { .name = "MUL_HW_XO_FORM", .value = 75, },
+ { .name = "SUB_EXT_XO_FORM", .value = 136, },
+ { .name = "ADD_EXT_XO_FORM", .value = 138, },
+ { .name = "SUB_ZERO_EXT_XO_FORM", .value = 200, },
+ { .name = "ADD_ZERO_EXT_XO_FORM", .value = 202, },
+ { .name = "SUB_EXT_XO_FORM2", .value = 232, },
+ { .name = "MUL_DW_XO_FORM", .value = 233, },
+ { .name = "ADD_EXT_XO_FORM2", .value = 234, },
+ { .name = "MUL_W_XO_FORM", .value = 235, },
+ { .name = "ADD_XO_FORM", .value = 266, },
+ { .name = "DIV_DW_XO_FORM1", .value = 457, },
+ { .name = "DIV_W_XO_FORM1", .value = 459, },
+ { .name = "DIV_DW_XO_FORM", .value = 489, },
+ { .name = "DIV_W_XO_FORM", .value = 491, },
+};
+
+static struct insn_offset arithmetic_two_ops[] = {
+ { .name = "mulli", .value = 7, },
+ { .name = "subfic", .value = 8, },
+ { .name = "addic", .value = 12, },
+ { .name = "addic.", .value = 13, },
+ { .name = "addi", .value = 14, },
+ { .name = "addis", .value = 15, },
+};
+
+static int cmp_offset(const void *a, const void *b)
+{
+ const struct insn_offset *val1 = a;
+ const struct insn_offset *val2 = b;
+
+ return (val1->value - val2->value);
+}
+
+static struct ins_ops *check_ppc_insn(struct disasm_line *dl)
+{
+ int raw_insn = dl->raw.raw_insn;
+ int opcode = PPC_OP(raw_insn);
+ int mem_insn_31 = PPC_21_30(raw_insn);
+ struct insn_offset *ret;
+ struct insn_offset mem_insns_31_opcode = {
+ "OP_31_INSN",
+ mem_insn_31
+ };
+ char name_insn[32];
+
+ /*
+ * Instructions with opcode 32 to 63 are memory
+ * instructions in powerpc
+ */
+ if ((opcode & 0x20)) {
+		/*
+		 * For a raw instruction, set the name to the
+		 * opcode so that it can be used in insn-stat
+		 */
+ if (!strlen(dl->ins.name)) {
+ sprintf(name_insn, "%d", opcode);
+ dl->ins.name = strdup(name_insn);
+ }
+ return &load_store_ops;
+ } else if (opcode == 31) {
+ /* Check for memory instructions with opcode 31 */
+ ret = bsearch(&mem_insns_31_opcode, ins_array, ARRAY_SIZE(ins_array), sizeof(ins_array[0]), cmp_offset);
+ if (ret) {
+ if (!strlen(dl->ins.name))
+ dl->ins.name = strdup(ret->name);
+ return &load_store_ops;
+ } else {
+ mem_insns_31_opcode.value = PPC_22_30(raw_insn);
+ ret = bsearch(&mem_insns_31_opcode, arithmetic_ins_op_31, ARRAY_SIZE(arithmetic_ins_op_31),
+ sizeof(arithmetic_ins_op_31[0]), cmp_offset);
+ if (ret != NULL)
+ return &arithmetic_ops;
+			/* Bits 21 to 30 have value 444 for the "mr" insn, i.e. the OR X form */
+ if (PPC_21_30(raw_insn) == 444)
+ return &arithmetic_ops;
+ }
+ } else {
+ mem_insns_31_opcode.value = opcode;
+ ret = bsearch(&mem_insns_31_opcode, arithmetic_two_ops, ARRAY_SIZE(arithmetic_two_ops),
+ sizeof(arithmetic_two_ops[0]), cmp_offset);
+ if (ret != NULL)
+ return &arithmetic_ops;
+ }
+
+ return NULL;
+}
+
+/*
+ * Instruction tracking function to track register state moves.
+ * Example sequence:
+ *    ld r10,264(r3)
+ *    mr r31,r3
+ *    <<after some sequence>>
+ *    ld r9,312(r31)
+ *
+ * The sequence above shows that the register state of r3 is
+ * moved to r31. update_insn_state_powerpc tracks these state
+ * changes.
+ */
+#ifdef HAVE_LIBDW_SUPPORT
+static void update_insn_state_powerpc(struct type_state *state,
+ struct data_loc_info *dloc, Dwarf_Die * cu_die __maybe_unused,
+ struct disasm_line *dl)
+{
+ struct annotated_insn_loc loc;
+ struct annotated_op_loc *src = &loc.ops[INSN_OP_SOURCE];
+ struct annotated_op_loc *dst = &loc.ops[INSN_OP_TARGET];
+ struct type_state_reg *tsr;
+ u32 insn_offset = dl->al.offset;
+
+ if (annotate_get_insn_location(dloc->arch, dl, &loc) < 0)
+ return;
+
+	/*
+	 * Value 444 for bits 21:30 is for the "mr"
+	 * instruction. "mr" is an extended OR, so set the
+	 * source and destination registers accordingly.
+	 */
+ if (PPC_21_30(dl->raw.raw_insn) == 444) {
+ int src_reg = src->reg1;
+
+ src->reg1 = dst->reg1;
+ dst->reg1 = src_reg;
+ }
+
+ if (!has_reg_type(state, dst->reg1))
+ return;
+
+ tsr = &state->regs[dst->reg1];
+
+ if (!has_reg_type(state, src->reg1) ||
+ !state->regs[src->reg1].ok) {
+ tsr->ok = false;
+ return;
+ }
+
+ tsr->type = state->regs[src->reg1].type;
+ tsr->kind = state->regs[src->reg1].kind;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] reg%d -> reg%d",
+ insn_offset, src->reg1, dst->reg1);
+ pr_debug_type_name(&tsr->type, tsr->kind);
+}
+#endif /* HAVE_LIBDW_SUPPORT */
+
static int powerpc__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
{
if (!arch->initialized) {
arch->initialized = true;
arch->associate_instruction_ops = powerpc__associate_instruction_ops;
arch->objdump.comment_char = '#';
+ annotate_opts.show_asm_raw = true;
+ arch->e_machine = EM_PPC;
+ arch->e_flags = 0;
}
return 0;
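The comments in the hunk above describe how check_ppc_insn() classifies a raw PowerPC instruction word: primary opcodes 32-63 are D-form loads/stores, while opcode 31 is resolved by a bsearch over tables keyed on bits 21:30. Below is a minimal standalone sketch of that lookup scheme, not part of the patch; the table carries only a handful of entries and the program exists purely for illustration.

/* Sketch only: mirrors the opcode masks and the bsearch-over-sorted-table
 * classification used by check_ppc_insn() in the hunk above. */
#include <stdio.h>
#include <stdlib.h>

#define PPC_OP(op)	(((op) >> 26) & 0x3F)	/* primary opcode, bits 0-5 */
#define PPC_21_30(r)	(((r) >> 1) & 0x3ff)	/* extended opcode, bits 21-30 */

struct insn_offset {
	const char *name;
	int value;
};

/* Truncated table; must stay sorted by .value for bsearch(). */
static const struct insn_offset mem_ops_31[] = {
	{ "OP_31_XOP_LWARX",  20 },
	{ "OP_31_XOP_LDX",    21 },
	{ "OP_31_XOP_LWZX",   23 },
	{ "OP_31_XOP_STDX",  149 },
};

static int cmp_offset(const void *a, const void *b)
{
	return ((const struct insn_offset *)a)->value -
	       ((const struct insn_offset *)b)->value;
}

static const char *classify(unsigned int raw)
{
	unsigned int opcode = PPC_OP(raw);
	struct insn_offset key = { "", (int)PPC_21_30(raw) };
	const struct insn_offset *hit;

	if (opcode & 0x20)		/* primary opcodes 32-63: loads/stores */
		return "load/store (D-form)";
	if (opcode != 31)
		return "other";
	hit = bsearch(&key, mem_ops_31, sizeof(mem_ops_31) / sizeof(mem_ops_31[0]),
		      sizeof(mem_ops_31[0]), cmp_offset);
	return hit ? hit->name : "opcode 31, not a tracked memory op";
}

int main(void)
{
	/* 0x7d4a502a encodes "ldx r10,r10,r10": opcode 31, bits 21:30 == 21 */
	printf("%s\n", classify(0x7d4a502a));
	return 0;
}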
diff --git a/tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl b/tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl
deleted file mode 100755
index 0eb316fe6dd1..000000000000
--- a/tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf. Derived from
-# s390 script.
-#
-# Copyright IBM Corp. 2017
-# Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-# Changed by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
-
-wordsize=$1
-SYSCALL_TBL=$2
-
-if ! test -r $SYSCALL_TBL; then
- echo "Could not read input file" >&2
- exit 1
-fi
-
-create_table()
-{
- local wordsize=$1
- local max_nr nr abi sc discard
- max_nr=-1
- nr=0
-
- echo "static const char *const syscalltbl_powerpc_${wordsize}[] = {"
- while read nr abi sc discard; do
- if [ "$max_nr" -lt "$nr" ]; then
- printf '\t[%d] = "%s",\n' $nr $sc
- max_nr=$nr
- fi
- done
- echo '};'
- echo "#define SYSCALLTBL_POWERPC_${wordsize}_MAX_ID $max_nr"
-}
-
-grep -E "^[[:digit:]]+[[:space:]]+(common|spu|nospu|${wordsize})" $SYSCALL_TBL \
- |sort -k1 -n \
- |create_table ${wordsize}
diff --git a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
index 17173b82ca21..b453e80dfc00 100644
--- a/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
+++ b/tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
@@ -230,8 +230,10 @@
178 nospu rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
179 32 pread64 sys_ppc_pread64 compat_sys_ppc_pread64
179 64 pread64 sys_pread64
+179 spu pread64 sys_pread64
180 32 pwrite64 sys_ppc_pwrite64 compat_sys_ppc_pwrite64
180 64 pwrite64 sys_pwrite64
+180 spu pwrite64 sys_pwrite64
181 common chown sys_chown
182 common getcwd sys_getcwd
183 common capget sys_capget
@@ -246,6 +248,7 @@
190 common ugetrlimit sys_getrlimit compat_sys_getrlimit
191 32 readahead sys_ppc_readahead compat_sys_ppc_readahead
191 64 readahead sys_readahead
+191 spu readahead sys_readahead
192 32 mmap2 sys_mmap2 compat_sys_mmap2
193 32 truncate64 sys_ppc_truncate64 compat_sys_ppc_truncate64
194 32 ftruncate64 sys_ppc_ftruncate64 compat_sys_ppc_ftruncate64
@@ -293,6 +296,7 @@
232 nospu set_tid_address sys_set_tid_address
233 32 fadvise64 sys_ppc32_fadvise64 compat_sys_ppc32_fadvise64
233 64 fadvise64 sys_fadvise64
+233 spu fadvise64 sys_fadvise64
234 nospu exit_group sys_exit_group
235 nospu lookup_dcookie sys_ni_syscall
236 common epoll_create sys_epoll_create
@@ -502,7 +506,7 @@
412 32 utimensat_time64 sys_utimensat sys_utimensat
413 32 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
414 32 ppoll_time64 sys_ppoll compat_sys_ppoll_time64
-416 32 io_pgetevents_time64 sys_io_pgetevents sys_io_pgetevents
+416 32 io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64
417 32 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
418 32 mq_timedsend_time64 sys_mq_timedsend sys_mq_timedsend
419 32 mq_timedreceive_time64 sys_mq_timedreceive sys_mq_timedreceive
@@ -548,3 +552,11 @@
459 common lsm_get_self_attr sys_lsm_get_self_attr
460 common lsm_set_self_attr sys_lsm_set_self_attr
461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
+463 common setxattrat sys_setxattrat
+464 common getxattrat sys_getxattrat
+465 common listxattrat sys_listxattrat
+466 common removexattrat sys_removexattrat
+467 common open_tree_attr sys_open_tree_attr
+468 common file_getattr sys_file_getattr
+469 common file_setattr sys_file_setattr
diff --git a/tools/perf/arch/powerpc/tests/Build b/tools/perf/arch/powerpc/tests/Build
index 3526ab0af9f9..275026950645 100644
--- a/tools/perf/arch/powerpc/tests/Build
+++ b/tools/perf/arch/powerpc/tests/Build
@@ -1,4 +1,4 @@
-perf-$(CONFIG_DWARF_UNWIND) += regs_load.o
-perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
+perf-test-$(CONFIG_DWARF_UNWIND) += regs_load.o
+perf-test-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
-perf-y += arch-tests.o
+perf-test-y += arch-tests.o
diff --git a/tools/perf/arch/powerpc/tests/dwarf-unwind.c b/tools/perf/arch/powerpc/tests/dwarf-unwind.c
index 5ecf82893b84..66af884baa66 100644
--- a/tools/perf/arch/powerpc/tests/dwarf-unwind.c
+++ b/tools/perf/arch/powerpc/tests/dwarf-unwind.c
@@ -45,7 +45,7 @@ static int sample_ustack(struct perf_sample *sample,
int test__arch_unwind_sample(struct perf_sample *sample,
struct thread *thread)
{
- struct regs_dump *regs = &sample->user_regs;
+ struct regs_dump *regs = perf_sample__user_regs(sample);
u64 *buf;
buf = calloc(1, sizeof(u64) * PERF_REGS_MAX);
diff --git a/tools/perf/arch/powerpc/util/Build b/tools/perf/arch/powerpc/util/Build
index 9889245c555c..a5b0babd307e 100644
--- a/tools/perf/arch/powerpc/util/Build
+++ b/tools/perf/arch/powerpc/util/Build
@@ -1,13 +1,13 @@
-perf-y += header.o
-perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
-perf-y += perf_regs.o
-perf-y += mem-events.o
-perf-y += sym-handling.o
-perf-y += evsel.o
-perf-y += event.o
+perf-util-y += header.o
+perf-util-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
+perf-util-y += perf_regs.o
+perf-util-y += mem-events.o
+perf-util-y += pmu.o
+perf-util-y += sym-handling.o
+perf-util-y += evsel.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_DWARF) += skip-callchain-idx.o
+perf-util-$(CONFIG_LIBDW) += skip-callchain-idx.o
-perf-$(CONFIG_LIBUNWIND) += unwind-libunwind.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-$(CONFIG_LIBUNWIND) += unwind-libunwind.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-$(CONFIG_AUXTRACE) += auxtrace.o
diff --git a/tools/perf/arch/powerpc/util/auxtrace.c b/tools/perf/arch/powerpc/util/auxtrace.c
new file mode 100644
index 000000000000..62c6f67f1bbe
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/auxtrace.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * VPA support
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/string.h>
+
+#include "../../util/evlist.h"
+#include "../../util/debug.h"
+#include "../../util/auxtrace.h"
+#include "../../util/powerpc-vpadtl.h"
+#include "../../util/record.h"
+#include <internal/lib.h> // page_size
+
+#define KiB(x) ((x) * 1024)
+
+static int
+powerpc_vpadtl_recording_options(struct auxtrace_record *ar __maybe_unused,
+ struct evlist *evlist __maybe_unused,
+ struct record_opts *opts)
+{
+ opts->full_auxtrace = true;
+
+	/*
+	 * Set auxtrace_mmap_pages to a minimum of
+	 * two pages
+	 */
+ if (!opts->auxtrace_mmap_pages) {
+ opts->auxtrace_mmap_pages = KiB(128) / page_size;
+ if (opts->mmap_pages == UINT_MAX)
+ opts->mmap_pages = KiB(256) / page_size;
+ }
+
+ return 0;
+}
+
+static size_t powerpc_vpadtl_info_priv_size(struct auxtrace_record *itr __maybe_unused,
+ struct evlist *evlist __maybe_unused)
+{
+ return VPADTL_AUXTRACE_PRIV_SIZE;
+}
+
+static int
+powerpc_vpadtl_info_fill(struct auxtrace_record *itr __maybe_unused,
+ struct perf_session *session __maybe_unused,
+ struct perf_record_auxtrace_info *auxtrace_info,
+ size_t priv_size __maybe_unused)
+{
+ auxtrace_info->type = PERF_AUXTRACE_VPA_DTL;
+
+ return 0;
+}
+
+static void powerpc_vpadtl_free(struct auxtrace_record *itr)
+{
+ free(itr);
+}
+
+static u64 powerpc_vpadtl_reference(struct auxtrace_record *itr __maybe_unused)
+{
+ return 0;
+}
+
+struct auxtrace_record *auxtrace_record__init(struct evlist *evlist,
+ int *err)
+{
+ struct auxtrace_record *aux;
+ struct evsel *pos;
+ int found = 0;
+
+ evlist__for_each_entry(evlist, pos) {
+ if (strstarts(pos->name, "vpa_dtl")) {
+ found = 1;
+ pos->needs_auxtrace_mmap = true;
+ break;
+ }
+ }
+
+ if (!found)
+ return NULL;
+
+ /*
+ * To obtain the auxtrace buffer file descriptor, the auxtrace event
+ * must come first.
+ */
+ evlist__to_front(pos->evlist, pos);
+
+ aux = zalloc(sizeof(*aux));
+ if (aux == NULL) {
+ pr_debug("aux record is NULL\n");
+ *err = -ENOMEM;
+ return NULL;
+ }
+
+ aux->recording_options = powerpc_vpadtl_recording_options;
+ aux->info_priv_size = powerpc_vpadtl_info_priv_size;
+ aux->info_fill = powerpc_vpadtl_info_fill;
+ aux->free = powerpc_vpadtl_free;
+ aux->reference = powerpc_vpadtl_reference;
+ return aux;
+}
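auxtrace_record__init() above follows a common perf pattern: scan the event list for a name prefix ("vpa_dtl"), move that event to the front so its auxtrace mmap is set up first, and hand back a struct of callbacks. The sketch below shows only the selection step with made-up stand-in types; it is not perf's evlist/evsel API.

/* Sketch only: prefix-match an event name and return a callback table,
 * loosely modelled on auxtrace_record__init() above. All types here are
 * invented stand-ins, not perf internals. */
#include <stdio.h>
#include <string.h>

struct aux_ops {
	const char *name;
	int (*setup)(void);
};

static int vpa_dtl_setup(void)
{
	return 0;	/* placeholder for recording-option setup */
}

static const struct aux_ops vpa_dtl_ops = { "vpa_dtl", vpa_dtl_setup };

/* Return ops for the first event whose name starts with "vpa_dtl", else NULL. */
static const struct aux_ops *pick_aux_ops(const char **events, int nr)
{
	for (int i = 0; i < nr; i++)
		if (!strncmp(events[i], "vpa_dtl", strlen("vpa_dtl")))
			return &vpa_dtl_ops;
	return NULL;
}

int main(void)
{
	const char *events[] = { "cycles", "vpa_dtl/dtl_all/" };
	const struct aux_ops *ops = pick_aux_ops(events, 2);

	printf("%s\n", ops ? ops->name : "no auxtrace event");
	return ops ? ops->setup() : 0;
}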
diff --git a/tools/perf/arch/powerpc/util/dwarf-regs.c b/tools/perf/arch/powerpc/util/dwarf-regs.c
deleted file mode 100644
index 0c4f4caf53ac..000000000000
--- a/tools/perf/arch/powerpc/util/dwarf-regs.c
+++ /dev/null
@@ -1,100 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (C) 2010 Ian Munsie, IBM Corporation.
- */
-
-#include <stddef.h>
-#include <errno.h>
-#include <string.h>
-#include <dwarf-regs.h>
-#include <linux/ptrace.h>
-#include <linux/kernel.h>
-#include <linux/stringify.h>
-
-struct pt_regs_dwarfnum {
- const char *name;
- unsigned int dwarfnum;
- unsigned int ptregs_offset;
-};
-
-#define REG_DWARFNUM_NAME(r, num) \
- {.name = __stringify(%)__stringify(r), .dwarfnum = num, \
- .ptregs_offset = offsetof(struct pt_regs, r)}
-#define GPR_DWARFNUM_NAME(num) \
- {.name = __stringify(%gpr##num), .dwarfnum = num, \
- .ptregs_offset = offsetof(struct pt_regs, gpr[num])}
-#define REG_DWARFNUM_END {.name = NULL, .dwarfnum = 0, .ptregs_offset = 0}
-
-/*
- * Reference:
- * http://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi-1.9.html
- */
-static const struct pt_regs_dwarfnum regdwarfnum_table[] = {
- GPR_DWARFNUM_NAME(0),
- GPR_DWARFNUM_NAME(1),
- GPR_DWARFNUM_NAME(2),
- GPR_DWARFNUM_NAME(3),
- GPR_DWARFNUM_NAME(4),
- GPR_DWARFNUM_NAME(5),
- GPR_DWARFNUM_NAME(6),
- GPR_DWARFNUM_NAME(7),
- GPR_DWARFNUM_NAME(8),
- GPR_DWARFNUM_NAME(9),
- GPR_DWARFNUM_NAME(10),
- GPR_DWARFNUM_NAME(11),
- GPR_DWARFNUM_NAME(12),
- GPR_DWARFNUM_NAME(13),
- GPR_DWARFNUM_NAME(14),
- GPR_DWARFNUM_NAME(15),
- GPR_DWARFNUM_NAME(16),
- GPR_DWARFNUM_NAME(17),
- GPR_DWARFNUM_NAME(18),
- GPR_DWARFNUM_NAME(19),
- GPR_DWARFNUM_NAME(20),
- GPR_DWARFNUM_NAME(21),
- GPR_DWARFNUM_NAME(22),
- GPR_DWARFNUM_NAME(23),
- GPR_DWARFNUM_NAME(24),
- GPR_DWARFNUM_NAME(25),
- GPR_DWARFNUM_NAME(26),
- GPR_DWARFNUM_NAME(27),
- GPR_DWARFNUM_NAME(28),
- GPR_DWARFNUM_NAME(29),
- GPR_DWARFNUM_NAME(30),
- GPR_DWARFNUM_NAME(31),
- REG_DWARFNUM_NAME(msr, 66),
- REG_DWARFNUM_NAME(ctr, 109),
- REG_DWARFNUM_NAME(link, 108),
- REG_DWARFNUM_NAME(xer, 101),
- REG_DWARFNUM_NAME(dar, 119),
- REG_DWARFNUM_NAME(dsisr, 118),
- REG_DWARFNUM_END,
-};
-
-/**
- * get_arch_regstr() - lookup register name from it's DWARF register number
- * @n: the DWARF register number
- *
- * get_arch_regstr() returns the name of the register in struct
- * regdwarfnum_table from it's DWARF register number. If the register is not
- * found in the table, this returns NULL;
- */
-const char *get_arch_regstr(unsigned int n)
-{
- const struct pt_regs_dwarfnum *roff;
- for (roff = regdwarfnum_table; roff->name != NULL; roff++)
- if (roff->dwarfnum == n)
- return roff->name;
- return NULL;
-}
-
-int regs_query_register_offset(const char *name)
-{
- const struct pt_regs_dwarfnum *roff;
- for (roff = regdwarfnum_table; roff->name != NULL; roff++)
- if (!strcmp(roff->name, name))
- return roff->ptregs_offset;
- return -EINVAL;
-}
diff --git a/tools/perf/arch/powerpc/util/event.c b/tools/perf/arch/powerpc/util/event.c
deleted file mode 100644
index 77d8cc2b5691..000000000000
--- a/tools/perf/arch/powerpc/util/event.c
+++ /dev/null
@@ -1,60 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/types.h>
-#include <linux/string.h>
-#include <linux/zalloc.h>
-
-#include "../../../util/event.h"
-#include "../../../util/synthetic-events.h"
-#include "../../../util/machine.h"
-#include "../../../util/tool.h"
-#include "../../../util/map.h"
-#include "../../../util/debug.h"
-#include "../../../util/sample.h"
-
-void arch_perf_parse_sample_weight(struct perf_sample *data,
- const __u64 *array, u64 type)
-{
- union perf_sample_weight weight;
-
- weight.full = *array;
- if (type & PERF_SAMPLE_WEIGHT)
- data->weight = weight.full;
- else {
- data->weight = weight.var1_dw;
- data->ins_lat = weight.var2_w;
- data->p_stage_cyc = weight.var3_w;
- }
-}
-
-void arch_perf_synthesize_sample_weight(const struct perf_sample *data,
- __u64 *array, u64 type)
-{
- *array = data->weight;
-
- if (type & PERF_SAMPLE_WEIGHT_STRUCT) {
- *array &= 0xffffffff;
- *array |= ((u64)data->ins_lat << 32);
- }
-}
-
-const char *arch_perf_header_entry(const char *se_header)
-{
- if (!strcmp(se_header, "Local INSTR Latency"))
- return "Finish Cyc";
- else if (!strcmp(se_header, "INSTR Latency"))
- return "Global Finish_cyc";
- else if (!strcmp(se_header, "Local Pipeline Stage Cycle"))
- return "Dispatch Cyc";
- else if (!strcmp(se_header, "Pipeline Stage Cycle"))
- return "Global Dispatch_cyc";
- return se_header;
-}
-
-int arch_support_sort_key(const char *sort_key)
-{
- if (!strcmp(sort_key, "p_stage_cyc"))
- return 1;
- if (!strcmp(sort_key, "local_p_stage_cyc"))
- return 1;
- return 0;
-}
diff --git a/tools/perf/arch/powerpc/util/header.c b/tools/perf/arch/powerpc/util/header.c
index 6b00efd53638..0be74f048f96 100644
--- a/tools/perf/arch/powerpc/util/header.c
+++ b/tools/perf/arch/powerpc/util/header.c
@@ -10,9 +10,21 @@
#include "utils_header.h"
#include "metricgroup.h"
#include <api/fs/fs.h>
+#include <sys/auxv.h>
+
+static bool is_compat_mode(void)
+{
+ unsigned long base_platform = getauxval(AT_BASE_PLATFORM);
+ unsigned long platform = getauxval(AT_PLATFORM);
+
+ if (!strcmp((char *)platform, (char *)base_platform))
+ return false;
+
+ return true;
+}
int
-get_cpuid(char *buffer, size_t sz)
+get_cpuid(char *buffer, size_t sz, struct perf_cpu cpu __maybe_unused)
{
unsigned long pvr;
int nb;
@@ -30,11 +42,29 @@ get_cpuid(char *buffer, size_t sz)
}
char *
-get_cpuid_str(struct perf_pmu *pmu __maybe_unused)
+get_cpuid_str(struct perf_cpu cpu __maybe_unused)
{
char *bufp;
+ unsigned long pvr;
+
+	/*
+	 * IBM Power systems support a compatible mode: an Nth
+	 * generation platform can run a previous generation OS in
+	 * what is called compat mode. For example, an LPAR can be
+	 * booted in Power9 mode when the system is a Power10.
+	 *
+	 * In compat mode, care must be taken when generating the
+	 * PVR value: when read, the PVR is that of AT_BASE_PLATFORM.
+	 * To support generic events, return 0x00ffffff as the pvr
+	 * when booted in compat mode. Based on this pvr value, json
+	 * will pick events from pmu-events/arch/powerpc/compat
+	 */
+ if (!is_compat_mode())
+ pvr = mfspr(SPRN_PVR);
+ else
+ pvr = 0x00ffffff;
- if (asprintf(&bufp, "0x%.8lx", mfspr(SPRN_PVR)) < 0)
+ if (asprintf(&bufp, "0x%.8lx", pvr) < 0)
bufp = NULL;
return bufp;
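is_compat_mode() above decides whether the LPAR runs in compat mode by comparing the AT_PLATFORM and AT_BASE_PLATFORM auxiliary-vector strings. A minimal standalone sketch of that check follows; it is not part of the patch and the printed values are only examples.

/* Sketch only: compare AT_PLATFORM with AT_BASE_PLATFORM, as
 * is_compat_mode() does in the hunk above. */
#include <stdio.h>
#include <string.h>
#include <sys/auxv.h>

int main(void)
{
	const char *platform = (const char *)getauxval(AT_PLATFORM);
	const char *base = (const char *)getauxval(AT_BASE_PLATFORM);

	if (!platform || !base) {
		puts("auxv entries not available");
		return 1;
	}

	/* e.g. platform "power9" with base_platform "power10" => compat mode */
	printf("platform=%s base_platform=%s compat=%s\n",
	       platform, base, strcmp(platform, base) ? "yes" : "no");
	return 0;
}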
diff --git a/tools/perf/arch/powerpc/util/kvm-stat.c b/tools/perf/arch/powerpc/util/kvm-stat.c
index d9a0ac1cdf30..c8357b571ccf 100644
--- a/tools/perf/arch/powerpc/util/kvm-stat.c
+++ b/tools/perf/arch/powerpc/util/kvm-stat.c
@@ -114,7 +114,7 @@ static int is_tracepoint_available(const char *str, struct evlist *evlist)
parse_events_error__init(&err);
ret = parse_events(evlist, str, &err);
- if (err.str)
+ if (ret)
parse_events_error__print(&err, "tracepoint");
parse_events_error__exit(&err);
return ret;
diff --git a/tools/perf/arch/powerpc/util/mem-events.c b/tools/perf/arch/powerpc/util/mem-events.c
index 78b986e5268d..765d4a054b0a 100644
--- a/tools/perf/arch/powerpc/util/mem-events.c
+++ b/tools/perf/arch/powerpc/util/mem-events.c
@@ -1,12 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
-#include "map_symbol.h"
+#include "util/map_symbol.h"
+#include "util/mem-events.h"
#include "mem-events.h"
-/* PowerPC does not support 'ldlat' parameter. */
-const char *perf_mem_events__name(int i, const char *pmu_name __maybe_unused)
-{
- if (i == PERF_MEM_EVENTS__LOAD)
- return "cpu/mem-loads/";
+#define E(t, n, s, l, a) { .tag = t, .name = n, .event_name = s, .ldlat = l, .aux_event = a }
- return "cpu/mem-stores/";
-}
+struct perf_mem_event perf_mem_events_power[PERF_MEM_EVENTS__MAX] = {
+ E("ldlat-loads", "%s/mem-loads/", "mem-loads", false, 0),
+ E("ldlat-stores", "%s/mem-stores/", "mem-stores", false, 0),
+ E(NULL, NULL, NULL, false, 0),
+};
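The E() macro above packs each entry of perf_mem_events_power into a designated initializer, with "%s" in the event string later replaced by the PMU name. The sketch below shows the same idiom with a stand-in struct rather than perf's real struct perf_mem_event.

/* Sketch only: the designated-initializer table idiom used by the E()
 * macro above; the struct is a stand-in, not struct perf_mem_event. */
#include <stdio.h>
#include <stdbool.h>

struct mem_event {
	const char *tag;
	const char *name;
	const char *event_name;
	bool ldlat;
	int aux_event;
};

#define E(t, n, s, l, a) { .tag = t, .name = n, .event_name = s, .ldlat = l, .aux_event = a }

static const struct mem_event events[] = {
	E("ldlat-loads",  "%s/mem-loads/",  "mem-loads",  false, 0),
	E("ldlat-stores", "%s/mem-stores/", "mem-stores", false, 0),
	E(NULL, NULL, NULL, false, 0),	/* sentinel */
};

int main(void)
{
	for (const struct mem_event *e = events; e->tag; e++) {
		char buf[64];

		/* "%s" in .name stands for the PMU name, e.g. "cpu" */
		snprintf(buf, sizeof(buf), e->name, "cpu");
		printf("%s -> %s\n", e->tag, buf);
	}
	return 0;
}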
diff --git a/tools/perf/arch/powerpc/util/mem-events.h b/tools/perf/arch/powerpc/util/mem-events.h
new file mode 100644
index 000000000000..6acc3d1b6873
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/mem-events.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _POWER_MEM_EVENTS_H
+#define _POWER_MEM_EVENTS_H
+
+extern struct perf_mem_event perf_mem_events_power[PERF_MEM_EVENTS__MAX];
+
+#endif /* _POWER_MEM_EVENTS_H */
diff --git a/tools/perf/arch/powerpc/util/perf_regs.c b/tools/perf/arch/powerpc/util/perf_regs.c
index b38aa056eea0..bd36cfd420a2 100644
--- a/tools/perf/arch/powerpc/util/perf_regs.c
+++ b/tools/perf/arch/powerpc/util/perf_regs.c
@@ -16,8 +16,9 @@
#define PVR_POWER9 0x004E
#define PVR_POWER10 0x0080
+#define PVR_POWER11 0x0082
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG(r0, PERF_REG_POWERPC_R0),
SMPL_REG(r1, PERF_REG_POWERPC_R1),
SMPL_REG(r2, PERF_REG_POWERPC_R2),
@@ -207,7 +208,7 @@ uint64_t arch__intr_reg_mask(void)
version = (((mfspr(SPRN_PVR)) >> 16) & 0xFFFF);
if (version == PVR_POWER9)
extended_mask = PERF_REG_PMU_MASK_300;
- else if (version == PVR_POWER10)
+ else if ((version == PVR_POWER10) || (version == PVR_POWER11))
extended_mask = PERF_REG_PMU_MASK_31;
else
return mask;
@@ -232,3 +233,8 @@ uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
diff --git a/tools/perf/arch/powerpc/util/pmu.c b/tools/perf/arch/powerpc/util/pmu.c
new file mode 100644
index 000000000000..554675deef7b
--- /dev/null
+++ b/tools/perf/arch/powerpc/util/pmu.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <string.h>
+
+#include "../../../util/pmu.h"
+#include "mem-events.h"
+
+void perf_pmu__arch_init(struct perf_pmu *pmu)
+{
+ if (pmu->is_core)
+ pmu->mem_events = perf_mem_events_power;
+}
diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
index 5f3edb3004d8..356786432fd3 100644
--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
@@ -159,9 +159,9 @@ static int check_return_addr(struct dso *dso, u64 map_start, Dwarf_Addr pc)
Dwarf_Addr start = pc;
Dwarf_Addr end = pc;
bool signalp;
- const char *exec_file = dso->long_name;
+ const char *exec_file = dso__long_name(dso);
- dwfl = dso->dwfl;
+ dwfl = RC_CHK_ACCESS(dso)->dwfl;
if (!dwfl) {
dwfl = dwfl_begin(&offline_callbacks);
@@ -183,7 +183,7 @@ static int check_return_addr(struct dso *dso, u64 map_start, Dwarf_Addr pc)
dwfl_end(dwfl);
goto out;
}
- dso->dwfl = dwfl;
+ RC_CHK_ACCESS(dso)->dwfl = dwfl;
}
mod = dwfl_addrmodule(dwfl, pc);
@@ -267,7 +267,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
rc = check_return_addr(dso, map__start(al.map), ip);
pr_debug("[DSO %s, sym %s, ip 0x%" PRIx64 "] rc %d\n",
- dso->long_name, al.sym->name, ip, rc);
+ dso__long_name(dso), al.sym->name, ip, rc);
if (rc == 0) {
/*
diff --git a/tools/perf/arch/powerpc/util/unwind-libdw.c b/tools/perf/arch/powerpc/util/unwind-libdw.c
index e9a5a8bb67d9..82d0c28ae345 100644
--- a/tools/perf/arch/powerpc/util/unwind-libdw.c
+++ b/tools/perf/arch/powerpc/util/unwind-libdw.c
@@ -16,7 +16,7 @@ static const int special_regs[3][2] = {
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[32], dwarf_nip;
size_t i;
diff --git a/tools/perf/arch/riscv/Build b/tools/perf/arch/riscv/Build
index e4e5f33c84d8..e63eabc2c8f4 100644
--- a/tools/perf/arch/riscv/Build
+++ b/tools/perf/arch/riscv/Build
@@ -1 +1 @@
-perf-y += util/
+perf-util-y += util/
diff --git a/tools/perf/arch/riscv/Makefile b/tools/perf/arch/riscv/Makefile
index a8d25d005207..087e099fb453 100644
--- a/tools/perf/arch/riscv/Makefile
+++ b/tools/perf/arch/riscv/Makefile
@@ -1,5 +1,3 @@
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
-PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
+# SPDX-License-Identifier: GPL-2.0
PERF_HAVE_JITDUMP := 1
+HAVE_KVM_STAT_SUPPORT := 1
diff --git a/tools/perf/arch/riscv/include/dwarf-regs-table.h b/tools/perf/arch/riscv/include/dwarf-regs-table.h
new file mode 100644
index 000000000000..a45b63a6d5a8
--- /dev/null
+++ b/tools/perf/arch/riscv/include/dwarf-regs-table.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifdef DEFINE_DWARF_REGSTR_TABLE
+/* This is included in perf/util/dwarf-regs.c */
+
+#define REG_DWARFNUM_NAME(reg, idx) [idx] = "%" #reg
+
+static const char * const riscv_regstr_tbl[] = {
+	REG_DWARFNUM_NAME(zero, 0),
+	REG_DWARFNUM_NAME(ra, 1),
+	REG_DWARFNUM_NAME(sp, 2),
+	REG_DWARFNUM_NAME(gp, 3),
+	REG_DWARFNUM_NAME(tp, 4),
+	REG_DWARFNUM_NAME(t0, 5),
+	REG_DWARFNUM_NAME(t1, 6),
+	REG_DWARFNUM_NAME(t2, 7),
+	REG_DWARFNUM_NAME(s0, 8),
+	REG_DWARFNUM_NAME(s1, 9),
+	REG_DWARFNUM_NAME(a0, 10),
+	REG_DWARFNUM_NAME(a1, 11),
+	REG_DWARFNUM_NAME(a2, 12),
+	REG_DWARFNUM_NAME(a3, 13),
+	REG_DWARFNUM_NAME(a4, 14),
+	REG_DWARFNUM_NAME(a5, 15),
+	REG_DWARFNUM_NAME(a6, 16),
+	REG_DWARFNUM_NAME(a7, 17),
+	REG_DWARFNUM_NAME(s2, 18),
+	REG_DWARFNUM_NAME(s3, 19),
+	REG_DWARFNUM_NAME(s4, 20),
+	REG_DWARFNUM_NAME(s5, 21),
+	REG_DWARFNUM_NAME(s6, 22),
+	REG_DWARFNUM_NAME(s7, 23),
+	REG_DWARFNUM_NAME(s8, 24),
+	REG_DWARFNUM_NAME(s9, 25),
+	REG_DWARFNUM_NAME(s10, 26),
+	REG_DWARFNUM_NAME(s11, 27),
+	REG_DWARFNUM_NAME(t3, 28),
+	REG_DWARFNUM_NAME(t4, 29),
+	REG_DWARFNUM_NAME(t5, 30),
+	REG_DWARFNUM_NAME(t6, 31),
+};
+
+#endif
diff --git a/tools/perf/arch/riscv/util/Build b/tools/perf/arch/riscv/util/Build
index 603dbb5ae4dc..58a672246024 100644
--- a/tools/perf/arch/riscv/util/Build
+++ b/tools/perf/arch/riscv/util/Build
@@ -1,5 +1,5 @@
-perf-y += perf_regs.o
-perf-y += header.o
+perf-util-y += perf_regs.o
+perf-util-y += header.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
diff --git a/tools/perf/arch/riscv/util/dwarf-regs.c b/tools/perf/arch/riscv/util/dwarf-regs.c
deleted file mode 100644
index cd0504c02e2e..000000000000
--- a/tools/perf/arch/riscv/util/dwarf-regs.c
+++ /dev/null
@@ -1,72 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (C) 2019 Hangzhou C-SKY Microsystems co.,ltd.
- * Mapping of DWARF debug register numbers into register names.
- */
-
-#include <stddef.h>
-#include <errno.h> /* for EINVAL */
-#include <string.h> /* for strcmp */
-#include <dwarf-regs.h>
-
-struct pt_regs_dwarfnum {
- const char *name;
- unsigned int dwarfnum;
-};
-
-#define REG_DWARFNUM_NAME(r, num) {.name = r, .dwarfnum = num}
-#define REG_DWARFNUM_END {.name = NULL, .dwarfnum = 0}
-
-struct pt_regs_dwarfnum riscv_dwarf_regs_table[] = {
- REG_DWARFNUM_NAME("%zero", 0),
- REG_DWARFNUM_NAME("%ra", 1),
- REG_DWARFNUM_NAME("%sp", 2),
- REG_DWARFNUM_NAME("%gp", 3),
- REG_DWARFNUM_NAME("%tp", 4),
- REG_DWARFNUM_NAME("%t0", 5),
- REG_DWARFNUM_NAME("%t1", 6),
- REG_DWARFNUM_NAME("%t2", 7),
- REG_DWARFNUM_NAME("%s0", 8),
- REG_DWARFNUM_NAME("%s1", 9),
- REG_DWARFNUM_NAME("%a0", 10),
- REG_DWARFNUM_NAME("%a1", 11),
- REG_DWARFNUM_NAME("%a2", 12),
- REG_DWARFNUM_NAME("%a3", 13),
- REG_DWARFNUM_NAME("%a4", 14),
- REG_DWARFNUM_NAME("%a5", 15),
- REG_DWARFNUM_NAME("%a6", 16),
- REG_DWARFNUM_NAME("%a7", 17),
- REG_DWARFNUM_NAME("%s2", 18),
- REG_DWARFNUM_NAME("%s3", 19),
- REG_DWARFNUM_NAME("%s4", 20),
- REG_DWARFNUM_NAME("%s5", 21),
- REG_DWARFNUM_NAME("%s6", 22),
- REG_DWARFNUM_NAME("%s7", 23),
- REG_DWARFNUM_NAME("%s8", 24),
- REG_DWARFNUM_NAME("%s9", 25),
- REG_DWARFNUM_NAME("%s10", 26),
- REG_DWARFNUM_NAME("%s11", 27),
- REG_DWARFNUM_NAME("%t3", 28),
- REG_DWARFNUM_NAME("%t4", 29),
- REG_DWARFNUM_NAME("%t5", 30),
- REG_DWARFNUM_NAME("%t6", 31),
- REG_DWARFNUM_END,
-};
-
-#define RISCV_MAX_REGS ((sizeof(riscv_dwarf_regs_table) / \
- sizeof(riscv_dwarf_regs_table[0])) - 1)
-
-const char *get_arch_regstr(unsigned int n)
-{
- return (n < RISCV_MAX_REGS) ? riscv_dwarf_regs_table[n].name : NULL;
-}
-
-int regs_query_register_offset(const char *name)
-{
- const struct pt_regs_dwarfnum *roff;
-
- for (roff = riscv_dwarf_regs_table; roff->name; roff++)
- if (!strcmp(roff->name, name))
- return roff->dwarfnum;
- return -EINVAL;
-}
diff --git a/tools/perf/arch/riscv/util/header.c b/tools/perf/arch/riscv/util/header.c
index 4a41856938a8..4b839203d4a5 100644
--- a/tools/perf/arch/riscv/util/header.c
+++ b/tools/perf/arch/riscv/util/header.c
@@ -41,7 +41,7 @@ static char *_get_cpuid(void)
char *mimpid = NULL;
char *cpuid = NULL;
int read;
- unsigned long line_sz;
+ size_t line_sz;
FILE *cpuinfo;
cpuinfo = fopen(CPUINFO, "r");
@@ -81,7 +81,7 @@ free:
return cpuid;
}
-int get_cpuid(char *buffer, size_t sz)
+int get_cpuid(char *buffer, size_t sz, struct perf_cpu cpu __maybe_unused)
{
char *cpuid = _get_cpuid();
int ret = 0;
@@ -98,7 +98,7 @@ free:
}
char *
-get_cpuid_str(struct perf_pmu *pmu __maybe_unused)
+get_cpuid_str(struct perf_cpu cpu __maybe_unused)
{
return _get_cpuid();
}
diff --git a/tools/perf/arch/riscv/util/kvm-stat.c b/tools/perf/arch/riscv/util/kvm-stat.c
new file mode 100644
index 000000000000..3ea7acb5e159
--- /dev/null
+++ b/tools/perf/arch/riscv/util/kvm-stat.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Arch specific functions for perf kvm stat.
+ *
+ * Copyright 2024 Beijing ESWIN Computing Technology Co., Ltd.
+ *
+ */
+#include <errno.h>
+#include <memory.h>
+#include "../../../util/evsel.h"
+#include "../../../util/kvm-stat.h"
+#include "riscv_trap_types.h"
+#include "debug.h"
+
+define_exit_reasons_table(riscv_exit_reasons, kvm_riscv_trap_class);
+
+const char *vcpu_id_str = "id";
+const char *kvm_exit_reason = "scause";
+const char *kvm_entry_trace = "kvm:kvm_entry";
+const char *kvm_exit_trace = "kvm:kvm_exit";
+
+const char *kvm_events_tp[] = {
+ "kvm:kvm_entry",
+ "kvm:kvm_exit",
+ NULL,
+};
+
+static void event_get_key(struct evsel *evsel,
+ struct perf_sample *sample,
+ struct event_key *key)
+{
+ key->info = 0;
+ key->key = evsel__intval(evsel, sample, kvm_exit_reason) & ~CAUSE_IRQ_FLAG;
+ key->exit_reasons = riscv_exit_reasons;
+}
+
+static bool event_begin(struct evsel *evsel,
+ struct perf_sample *sample __maybe_unused,
+ struct event_key *key __maybe_unused)
+{
+ return evsel__name_is(evsel, kvm_entry_trace);
+}
+
+static bool event_end(struct evsel *evsel,
+ struct perf_sample *sample,
+ struct event_key *key)
+{
+ if (evsel__name_is(evsel, kvm_exit_trace)) {
+ event_get_key(evsel, sample, key);
+ return true;
+ }
+ return false;
+}
+
+static struct kvm_events_ops exit_events = {
+ .is_begin_event = event_begin,
+ .is_end_event = event_end,
+ .decode_key = exit_event_decode_key,
+ .name = "VM-EXIT"
+};
+
+struct kvm_reg_events_ops kvm_reg_events_ops[] = {
+ {
+ .name = "vmexit",
+ .ops = &exit_events,
+ },
+ { NULL, NULL },
+};
+
+const char * const kvm_skip_events[] = {
+ NULL,
+};
+
+int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid __maybe_unused)
+{
+ kvm->exit_reasons_isa = "riscv64";
+ return 0;
+}
diff --git a/tools/perf/arch/riscv/util/perf_regs.c b/tools/perf/arch/riscv/util/perf_regs.c
index c0877c264d49..6b1665f41180 100644
--- a/tools/perf/arch/riscv/util/perf_regs.c
+++ b/tools/perf/arch/riscv/util/perf_regs.c
@@ -2,7 +2,7 @@
#include "perf_regs.h"
#include "../../util/perf_regs.h"
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
@@ -15,3 +15,8 @@ uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
diff --git a/tools/perf/arch/riscv/util/riscv_trap_types.h b/tools/perf/arch/riscv/util/riscv_trap_types.h
new file mode 100644
index 000000000000..6cc71eb01fca
--- /dev/null
+++ b/tools/perf/arch/riscv/util/riscv_trap_types.h
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef ARCH_PERF_RISCV_TRAP_TYPES_H
+#define ARCH_PERF_RISCV_TRAP_TYPES_H
+
+/* Exception cause high bit - is an interrupt if set */
+#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
+
+/* Interrupt causes (minus the high bit) */
+#define IRQ_S_SOFT 1
+#define IRQ_VS_SOFT 2
+#define IRQ_M_SOFT 3
+#define IRQ_S_TIMER 5
+#define IRQ_VS_TIMER 6
+#define IRQ_M_TIMER 7
+#define IRQ_S_EXT 9
+#define IRQ_VS_EXT 10
+#define IRQ_M_EXT 11
+#define IRQ_S_GEXT 12
+#define IRQ_PMU_OVF 13
+
+/* Exception causes */
+#define EXC_INST_MISALIGNED 0
+#define EXC_INST_ACCESS 1
+#define EXC_INST_ILLEGAL 2
+#define EXC_BREAKPOINT 3
+#define EXC_LOAD_MISALIGNED 4
+#define EXC_LOAD_ACCESS 5
+#define EXC_STORE_MISALIGNED 6
+#define EXC_STORE_ACCESS 7
+#define EXC_SYSCALL 8
+#define EXC_HYPERVISOR_SYSCALL 9
+#define EXC_SUPERVISOR_SYSCALL 10
+#define EXC_INST_PAGE_FAULT 12
+#define EXC_LOAD_PAGE_FAULT 13
+#define EXC_STORE_PAGE_FAULT 15
+#define EXC_INST_GUEST_PAGE_FAULT 20
+#define EXC_LOAD_GUEST_PAGE_FAULT 21
+#define EXC_VIRTUAL_INST_FAULT 22
+#define EXC_STORE_GUEST_PAGE_FAULT 23
+
+#define TRAP(x) { x, #x }
+
+#define kvm_riscv_trap_class \
+ TRAP(IRQ_S_SOFT), TRAP(IRQ_VS_SOFT), TRAP(IRQ_M_SOFT), \
+ TRAP(IRQ_S_TIMER), TRAP(IRQ_VS_TIMER), TRAP(IRQ_M_TIMER), \
+ TRAP(IRQ_S_EXT), TRAP(IRQ_VS_EXT), TRAP(IRQ_M_EXT), \
+ TRAP(IRQ_S_GEXT), TRAP(IRQ_PMU_OVF), \
+ TRAP(EXC_INST_MISALIGNED), TRAP(EXC_INST_ACCESS), TRAP(EXC_INST_ILLEGAL), \
+ TRAP(EXC_BREAKPOINT), TRAP(EXC_LOAD_MISALIGNED), TRAP(EXC_LOAD_ACCESS), \
+ TRAP(EXC_STORE_MISALIGNED), TRAP(EXC_STORE_ACCESS), TRAP(EXC_SYSCALL), \
+ TRAP(EXC_HYPERVISOR_SYSCALL), TRAP(EXC_SUPERVISOR_SYSCALL), \
+ TRAP(EXC_INST_PAGE_FAULT), TRAP(EXC_LOAD_PAGE_FAULT), \
+ TRAP(EXC_STORE_PAGE_FAULT), TRAP(EXC_INST_GUEST_PAGE_FAULT), \
+ TRAP(EXC_LOAD_GUEST_PAGE_FAULT), TRAP(EXC_VIRTUAL_INST_FAULT), \
+ TRAP(EXC_STORE_GUEST_PAGE_FAULT)
+
+#endif /* ARCH_PERF_RISCV_TRAP_TYPES_H */
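The TRAP(x) macro above expands to a { value, "name" } pair, and the riscv kvm-stat code looks an exit reason up only after masking CAUSE_IRQ_FLAG out of scause. The sketch below replays that decoding with just a few causes and the flag hard-coded for 64-bit; it is illustrative only.

/* Sketch only: the { x, #x } pairing and the scause masking used by the
 * riscv kvm-stat code above. CAUSE_IRQ_FLAG assumes __riscv_xlen == 64. */
#include <stdio.h>
#include <stdint.h>

#define CAUSE_IRQ_FLAG		(1ULL << 63)

#define EXC_INST_MISALIGNED	0
#define EXC_BREAKPOINT		3
#define EXC_SYSCALL		8

#define TRAP(x) { x, #x }	/* value plus its stringified name */

static const struct { uint64_t val; const char *name; } traps[] = {
	TRAP(EXC_INST_MISALIGNED),
	TRAP(EXC_BREAKPOINT),
	TRAP(EXC_SYSCALL),
};

static const char *trap_name(uint64_t scause)
{
	uint64_t cause = scause & ~CAUSE_IRQ_FLAG;	/* drop the interrupt bit */

	for (size_t i = 0; i < sizeof(traps) / sizeof(traps[0]); i++)
		if (traps[i].val == cause)
			return traps[i].name;
	return "UNKNOWN";
}

int main(void)
{
	printf("%s\n", trap_name(EXC_SYSCALL));
	printf("%s\n", trap_name(CAUSE_IRQ_FLAG | EXC_SYSCALL));	/* flag stripped first */
	return 0;
}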
diff --git a/tools/perf/arch/riscv/util/unwind-libdw.c b/tools/perf/arch/riscv/util/unwind-libdw.c
index 5c98010d8b59..dc1476e16321 100644
--- a/tools/perf/arch/riscv/util/unwind-libdw.c
+++ b/tools/perf/arch/riscv/util/unwind-libdw.c
@@ -10,7 +10,7 @@
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[32];
#define REG(r) ({ \
diff --git a/tools/perf/arch/riscv64/annotate/instructions.c b/tools/perf/arch/riscv64/annotate/instructions.c
index 869a0eb28953..55cf911633f8 100644
--- a/tools/perf/arch/riscv64/annotate/instructions.c
+++ b/tools/perf/arch/riscv64/annotate/instructions.c
@@ -28,6 +28,8 @@ int riscv64__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
arch->associate_instruction_ops = riscv64__associate_ins_ops;
arch->initialized = true;
arch->objdump.comment_char = '#';
+ arch->e_machine = EM_RISCV;
+ arch->e_flags = 0;
}
return 0;
diff --git a/tools/perf/arch/s390/Build b/tools/perf/arch/s390/Build
index e4e5f33c84d8..e63eabc2c8f4 100644
--- a/tools/perf/arch/s390/Build
+++ b/tools/perf/arch/s390/Build
@@ -1 +1 @@
-perf-y += util/
+perf-util-y += util/
diff --git a/tools/perf/arch/s390/Makefile b/tools/perf/arch/s390/Makefile
index 74bffbea03e2..0033698a65ce 100644
--- a/tools/perf/arch/s390/Makefile
+++ b/tools/perf/arch/s390/Makefile
@@ -1,28 +1,3 @@
# SPDX-License-Identifier: GPL-2.0-only
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
HAVE_KVM_STAT_SUPPORT := 1
-PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
PERF_HAVE_JITDUMP := 1
-
-#
-# Syscall table generation for perf
-#
-
-out := $(OUTPUT)arch/s390/include/generated/asm
-header := $(out)/syscalls_64.c
-sysprf := $(srctree)/tools/perf/arch/s390/entry/syscalls
-sysdef := $(sysprf)/syscall.tbl
-systbl := $(sysprf)/mksyscalltbl
-
-# Create output directory if not already present
-_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sysdef) $(systbl)
- $(Q)$(SHELL) '$(systbl)' $(sysdef) > $@
-
-clean::
- $(call QUIET_CLEAN, s390) $(RM) $(header)
-
-archheaders: $(header)
diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
index da5aa3e1f04c..c61193f1e096 100644
--- a/tools/perf/arch/s390/annotate/instructions.c
+++ b/tools/perf/arch/s390/annotate/instructions.c
@@ -2,7 +2,7 @@
#include <linux/compiler.h>
static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
- struct map_symbol *ms)
+ struct map_symbol *ms, struct disasm_line *dl __maybe_unused)
{
char *endptr, *tok, *name;
struct map *map = ms->map;
@@ -52,7 +52,8 @@ static struct ins_ops s390_call_ops = {
static int s390_mov__parse(struct arch *arch __maybe_unused,
struct ins_operands *ops,
- struct map_symbol *ms __maybe_unused)
+ struct map_symbol *ms __maybe_unused,
+ struct disasm_line *dl __maybe_unused)
{
char *s = strchr(ops->raw, ','), *target, *endptr;
@@ -165,6 +166,8 @@ static int s390__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
if (s390__cpuid_parse(arch, cpuid))
err = SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_CPUID_PARSING;
}
+ arch->e_machine = EM_S390;
+ arch->e_flags = 0;
}
return err;
diff --git a/tools/perf/arch/s390/entry/syscalls/mksyscalltbl b/tools/perf/arch/s390/entry/syscalls/mksyscalltbl
deleted file mode 100755
index 52eb88a77c94..000000000000
--- a/tools/perf/arch/s390/entry/syscalls/mksyscalltbl
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-#
-# Generate system call table for perf
-#
-# Copyright IBM Corp. 2017, 2018
-# Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
-#
-
-SYSCALL_TBL=$1
-
-if ! test -r $SYSCALL_TBL; then
- echo "Could not read input file" >&2
- exit 1
-fi
-
-create_table()
-{
- local max_nr nr abi sc discard
-
- echo 'static const char *const syscalltbl_s390_64[] = {'
- while read nr abi sc discard; do
- printf '\t[%d] = "%s",\n' $nr $sc
- max_nr=$nr
- done
- echo '};'
- echo "#define SYSCALLTBL_S390_64_MAX_ID $max_nr"
-}
-
-grep -E "^[[:digit:]]+[[:space:]]+(common|64)" $SYSCALL_TBL \
- |sort -k1 -n \
- |create_table
diff --git a/tools/perf/arch/s390/entry/syscalls/syscall.tbl b/tools/perf/arch/s390/entry/syscalls/syscall.tbl
index 095bb86339a7..8a6744d658db 100644
--- a/tools/perf/arch/s390/entry/syscalls/syscall.tbl
+++ b/tools/perf/arch/s390/entry/syscalls/syscall.tbl
@@ -418,7 +418,7 @@
412 32 utimensat_time64 - sys_utimensat
413 32 pselect6_time64 - compat_sys_pselect6_time64
414 32 ppoll_time64 - compat_sys_ppoll_time64
-416 32 io_pgetevents_time64 - sys_io_pgetevents
+416 32 io_pgetevents_time64 - compat_sys_io_pgetevents_time64
417 32 recvmmsg_time64 - compat_sys_recvmmsg_time64
418 32 mq_timedsend_time64 - sys_mq_timedsend
419 32 mq_timedreceive_time64 - sys_mq_timedreceive
@@ -464,3 +464,11 @@
459 common lsm_get_self_attr sys_lsm_get_self_attr sys_lsm_get_self_attr
460 common lsm_set_self_attr sys_lsm_set_self_attr sys_lsm_set_self_attr
461 common lsm_list_modules sys_lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal sys_mseal
+463 common setxattrat sys_setxattrat sys_setxattrat
+464 common getxattrat sys_getxattrat sys_getxattrat
+465 common listxattrat sys_listxattrat sys_listxattrat
+466 common removexattrat sys_removexattrat sys_removexattrat
+467 common open_tree_attr sys_open_tree_attr sys_open_tree_attr
+468 common file_getattr sys_file_getattr sys_file_getattr
+469 common file_setattr sys_file_setattr sys_file_setattr
diff --git a/tools/perf/arch/s390/util/Build b/tools/perf/arch/s390/util/Build
index fa66f15a14ec..736c0ad09194 100644
--- a/tools/perf/arch/s390/util/Build
+++ b/tools/perf/arch/s390/util/Build
@@ -1,11 +1,10 @@
-perf-y += header.o
-perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
-perf-y += perf_regs.o
+perf-util-y += header.o
+perf-util-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
+perf-util-y += perf_regs.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
-perf-y += machine.o
-perf-y += pmu.o
+perf-util-y += machine.o
+perf-util-y += pmu.o
-perf-$(CONFIG_AUXTRACE) += auxtrace.o
+perf-util-$(CONFIG_AUXTRACE) += auxtrace.o
diff --git a/tools/perf/arch/s390/util/dwarf-regs.c b/tools/perf/arch/s390/util/dwarf-regs.c
deleted file mode 100644
index dfddb3099bfa..000000000000
--- a/tools/perf/arch/s390/util/dwarf-regs.c
+++ /dev/null
@@ -1,43 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Mapping of DWARF debug register numbers into register names.
- *
- * Copyright IBM Corp. 2010, 2017
- * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
- *
- */
-
-#include <errno.h>
-#include <stddef.h>
-#include <stdlib.h>
-#include <linux/kernel.h>
-#include <asm/ptrace.h>
-#include <string.h>
-#include <dwarf-regs.h>
-#include "dwarf-regs-table.h"
-
-const char *get_arch_regstr(unsigned int n)
-{
- return (n >= ARRAY_SIZE(s390_dwarf_regs)) ? NULL : s390_dwarf_regs[n];
-}
-
-/*
- * Convert the register name into an offset to struct pt_regs (kernel).
- * This is required by the BPF prologue generator. The BPF
- * program is called in the BPF overflow handler in the perf
- * core.
- */
-int regs_query_register_offset(const char *name)
-{
- unsigned long gpr;
-
- if (!name || strncmp(name, "%r", 2))
- return -EINVAL;
-
- errno = 0;
- gpr = strtoul(name + 2, NULL, 10);
- if (errno || gpr >= 16)
- return -EINVAL;
-
- return offsetof(user_pt_regs, gprs) + 8 * gpr;
-}
diff --git a/tools/perf/arch/s390/util/header.c b/tools/perf/arch/s390/util/header.c
index 7933f6871c81..db54677a17d2 100644
--- a/tools/perf/arch/s390/util/header.c
+++ b/tools/perf/arch/s390/util/header.c
@@ -27,7 +27,7 @@
#define SYSINFO "/proc/sysinfo"
#define SRVLVL "/proc/service_levels"
-int get_cpuid(char *buffer, size_t sz)
+int get_cpuid(char *buffer, size_t sz, struct perf_cpu cpu __maybe_unused)
{
char *cp, *line = NULL, *line2;
char type[8], model[33], version[8], manufacturer[32], authorization[8];
@@ -137,11 +137,11 @@ skip_sysinfo:
return (nbytes >= sz) ? ENOBUFS : 0;
}
-char *get_cpuid_str(struct perf_pmu *pmu __maybe_unused)
+char *get_cpuid_str(struct perf_cpu cpu)
{
char *buf = malloc(128);
- if (buf && get_cpuid(buf, 128))
+ if (buf && get_cpuid(buf, 128, cpu))
zfree(&buf);
return buf;
}
diff --git a/tools/perf/arch/s390/util/perf_regs.c b/tools/perf/arch/s390/util/perf_regs.c
index c0877c264d49..6b1665f41180 100644
--- a/tools/perf/arch/s390/util/perf_regs.c
+++ b/tools/perf/arch/s390/util/perf_regs.c
@@ -2,7 +2,7 @@
#include "perf_regs.h"
#include "../../util/perf_regs.h"
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG_END
};
@@ -15,3 +15,8 @@ uint64_t arch__user_reg_mask(void)
{
return PERF_REGS_MASK;
}
+
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
diff --git a/tools/perf/arch/s390/util/unwind-libdw.c b/tools/perf/arch/s390/util/unwind-libdw.c
index f50fb6dbb35c..c27c7a0d1076 100644
--- a/tools/perf/arch/s390/util/unwind-libdw.c
+++ b/tools/perf/arch/s390/util/unwind-libdw.c
@@ -11,7 +11,7 @@
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[ARRAY_SIZE(s390_dwarf_regs)];
#define REG(r) ({ \
diff --git a/tools/perf/arch/sh/Build b/tools/perf/arch/sh/Build
deleted file mode 100644
index e4e5f33c84d8..000000000000
--- a/tools/perf/arch/sh/Build
+++ /dev/null
@@ -1 +0,0 @@
-perf-y += util/
diff --git a/tools/perf/arch/sh/Makefile b/tools/perf/arch/sh/Makefile
deleted file mode 100644
index 88c08eed9c7b..000000000000
--- a/tools/perf/arch/sh/Makefile
+++ /dev/null
@@ -1,4 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
diff --git a/tools/perf/arch/sh/entry/syscalls/syscall.tbl b/tools/perf/arch/sh/entry/syscalls/syscall.tbl
new file mode 100644
index 000000000000..5e9c9eff5539
--- /dev/null
+++ b/tools/perf/arch/sh/entry/syscalls/syscall.tbl
@@ -0,0 +1,475 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# system call numbers and entry vectors for sh
+#
+# The format is:
+# <number> <abi> <name> <entry point>
+#
+# The <abi> is always "common" for this file
+#
+0 common restart_syscall sys_restart_syscall
+1 common exit sys_exit
+2 common fork sys_fork
+3 common read sys_read
+4 common write sys_write
+5 common open sys_open
+6 common close sys_close
+7 common waitpid sys_waitpid
+8 common creat sys_creat
+9 common link sys_link
+10 common unlink sys_unlink
+11 common execve sys_execve
+12 common chdir sys_chdir
+13 common time sys_time32
+14 common mknod sys_mknod
+15 common chmod sys_chmod
+16 common lchown sys_lchown16
+# 17 was break
+18 common oldstat sys_stat
+19 common lseek sys_lseek
+20 common getpid sys_getpid
+21 common mount sys_mount
+22 common umount sys_oldumount
+23 common setuid sys_setuid16
+24 common getuid sys_getuid16
+25 common stime sys_stime32
+26 common ptrace sys_ptrace
+27 common alarm sys_alarm
+28 common oldfstat sys_fstat
+29 common pause sys_pause
+30 common utime sys_utime32
+# 31 was stty
+# 32 was gtty
+33 common access sys_access
+34 common nice sys_nice
+# 35 was ftime
+36 common sync sys_sync
+37 common kill sys_kill
+38 common rename sys_rename
+39 common mkdir sys_mkdir
+40 common rmdir sys_rmdir
+41 common dup sys_dup
+42 common pipe sys_sh_pipe
+43 common times sys_times
+# 44 was prof
+45 common brk sys_brk
+46 common setgid sys_setgid16
+47 common getgid sys_getgid16
+48 common signal sys_signal
+49 common geteuid sys_geteuid16
+50 common getegid sys_getegid16
+51 common acct sys_acct
+52 common umount2 sys_umount
+# 53 was lock
+54 common ioctl sys_ioctl
+55 common fcntl sys_fcntl
+# 56 was mpx
+57 common setpgid sys_setpgid
+# 58 was ulimit
+# 59 was olduname
+60 common umask sys_umask
+61 common chroot sys_chroot
+62 common ustat sys_ustat
+63 common dup2 sys_dup2
+64 common getppid sys_getppid
+65 common getpgrp sys_getpgrp
+66 common setsid sys_setsid
+67 common sigaction sys_sigaction
+68 common sgetmask sys_sgetmask
+69 common ssetmask sys_ssetmask
+70 common setreuid sys_setreuid16
+71 common setregid sys_setregid16
+72 common sigsuspend sys_sigsuspend
+73 common sigpending sys_sigpending
+74 common sethostname sys_sethostname
+75 common setrlimit sys_setrlimit
+76 common getrlimit sys_old_getrlimit
+77 common getrusage sys_getrusage
+78 common gettimeofday sys_gettimeofday
+79 common settimeofday sys_settimeofday
+80 common getgroups sys_getgroups16
+81 common setgroups sys_setgroups16
+# 82 was select
+83 common symlink sys_symlink
+84 common oldlstat sys_lstat
+85 common readlink sys_readlink
+86 common uselib sys_uselib
+87 common swapon sys_swapon
+88 common reboot sys_reboot
+89 common readdir sys_old_readdir
+90 common mmap old_mmap
+91 common munmap sys_munmap
+92 common truncate sys_truncate
+93 common ftruncate sys_ftruncate
+94 common fchmod sys_fchmod
+95 common fchown sys_fchown16
+96 common getpriority sys_getpriority
+97 common setpriority sys_setpriority
+# 98 was profil
+99 common statfs sys_statfs
+100 common fstatfs sys_fstatfs
+# 101 was ioperm
+102 common socketcall sys_socketcall
+103 common syslog sys_syslog
+104 common setitimer sys_setitimer
+105 common getitimer sys_getitimer
+106 common stat sys_newstat
+107 common lstat sys_newlstat
+108 common fstat sys_newfstat
+109 common olduname sys_uname
+# 110 was iopl
+111 common vhangup sys_vhangup
+# 112 was idle
+# 113 was vm86old
+114 common wait4 sys_wait4
+115 common swapoff sys_swapoff
+116 common sysinfo sys_sysinfo
+117 common ipc sys_ipc
+118 common fsync sys_fsync
+119 common sigreturn sys_sigreturn
+120 common clone sys_clone
+121 common setdomainname sys_setdomainname
+122 common uname sys_newuname
+123 common cacheflush sys_cacheflush
+124 common adjtimex sys_adjtimex_time32
+125 common mprotect sys_mprotect
+126 common sigprocmask sys_sigprocmask
+# 127 was create_module
+128 common init_module sys_init_module
+129 common delete_module sys_delete_module
+# 130 was get_kernel_syms
+131 common quotactl sys_quotactl
+132 common getpgid sys_getpgid
+133 common fchdir sys_fchdir
+134 common bdflush sys_ni_syscall
+135 common sysfs sys_sysfs
+136 common personality sys_personality
+# 137 was afs_syscall
+138 common setfsuid sys_setfsuid16
+139 common setfsgid sys_setfsgid16
+140 common _llseek sys_llseek
+141 common getdents sys_getdents
+142 common _newselect sys_select
+143 common flock sys_flock
+144 common msync sys_msync
+145 common readv sys_readv
+146 common writev sys_writev
+147 common getsid sys_getsid
+148 common fdatasync sys_fdatasync
+149 common _sysctl sys_ni_syscall
+150 common mlock sys_mlock
+151 common munlock sys_munlock
+152 common mlockall sys_mlockall
+153 common munlockall sys_munlockall
+154 common sched_setparam sys_sched_setparam
+155 common sched_getparam sys_sched_getparam
+156 common sched_setscheduler sys_sched_setscheduler
+157 common sched_getscheduler sys_sched_getscheduler
+158 common sched_yield sys_sched_yield
+159 common sched_get_priority_max sys_sched_get_priority_max
+160 common sched_get_priority_min sys_sched_get_priority_min
+161 common sched_rr_get_interval sys_sched_rr_get_interval_time32
+162 common nanosleep sys_nanosleep_time32
+163 common mremap sys_mremap
+164 common setresuid sys_setresuid16
+165 common getresuid sys_getresuid16
+# 166 was vm86
+# 167 was query_module
+168 common poll sys_poll
+169 common nfsservctl sys_ni_syscall
+170 common setresgid sys_setresgid16
+171 common getresgid sys_getresgid16
+172 common prctl sys_prctl
+173 common rt_sigreturn sys_rt_sigreturn
+174 common rt_sigaction sys_rt_sigaction
+175 common rt_sigprocmask sys_rt_sigprocmask
+176 common rt_sigpending sys_rt_sigpending
+177 common rt_sigtimedwait sys_rt_sigtimedwait_time32
+178 common rt_sigqueueinfo sys_rt_sigqueueinfo
+179 common rt_sigsuspend sys_rt_sigsuspend
+180 common pread64 sys_pread_wrapper
+181 common pwrite64 sys_pwrite_wrapper
+182 common chown sys_chown16
+183 common getcwd sys_getcwd
+184 common capget sys_capget
+185 common capset sys_capset
+186 common sigaltstack sys_sigaltstack
+187 common sendfile sys_sendfile
+# 188 is reserved for getpmsg
+# 189 is reserved for putpmsg
+190 common vfork sys_vfork
+191 common ugetrlimit sys_getrlimit
+192 common mmap2 sys_mmap2
+193 common truncate64 sys_truncate64
+194 common ftruncate64 sys_ftruncate64
+195 common stat64 sys_stat64
+196 common lstat64 sys_lstat64
+197 common fstat64 sys_fstat64
+198 common lchown32 sys_lchown
+199 common getuid32 sys_getuid
+200 common getgid32 sys_getgid
+201 common geteuid32 sys_geteuid
+202 common getegid32 sys_getegid
+203 common setreuid32 sys_setreuid
+204 common setregid32 sys_setregid
+205 common getgroups32 sys_getgroups
+206 common setgroups32 sys_setgroups
+207 common fchown32 sys_fchown
+208 common setresuid32 sys_setresuid
+209 common getresuid32 sys_getresuid
+210 common setresgid32 sys_setresgid
+211 common getresgid32 sys_getresgid
+212 common chown32 sys_chown
+213 common setuid32 sys_setuid
+214 common setgid32 sys_setgid
+215 common setfsuid32 sys_setfsuid
+216 common setfsgid32 sys_setfsgid
+217 common pivot_root sys_pivot_root
+218 common mincore sys_mincore
+219 common madvise sys_madvise
+220 common getdents64 sys_getdents64
+221 common fcntl64 sys_fcntl64
+# 222 is reserved for tux
+# 223 is unused
+224 common gettid sys_gettid
+225 common readahead sys_readahead
+226 common setxattr sys_setxattr
+227 common lsetxattr sys_lsetxattr
+228 common fsetxattr sys_fsetxattr
+229 common getxattr sys_getxattr
+230 common lgetxattr sys_lgetxattr
+231 common fgetxattr sys_fgetxattr
+232 common listxattr sys_listxattr
+233 common llistxattr sys_llistxattr
+234 common flistxattr sys_flistxattr
+235 common removexattr sys_removexattr
+236 common lremovexattr sys_lremovexattr
+237 common fremovexattr sys_fremovexattr
+238 common tkill sys_tkill
+239 common sendfile64 sys_sendfile64
+240 common futex sys_futex_time32
+241 common sched_setaffinity sys_sched_setaffinity
+242 common sched_getaffinity sys_sched_getaffinity
+# 243 is reserved for set_thread_area
+# 244 is reserved for get_thread_area
+245 common io_setup sys_io_setup
+246 common io_destroy sys_io_destroy
+247 common io_getevents sys_io_getevents_time32
+248 common io_submit sys_io_submit
+249 common io_cancel sys_io_cancel
+250 common fadvise64 sys_fadvise64
+# 251 is unused
+252 common exit_group sys_exit_group
+253 common lookup_dcookie sys_ni_syscall
+254 common epoll_create sys_epoll_create
+255 common epoll_ctl sys_epoll_ctl
+256 common epoll_wait sys_epoll_wait
+257 common remap_file_pages sys_remap_file_pages
+258 common set_tid_address sys_set_tid_address
+259 common timer_create sys_timer_create
+260 common timer_settime sys_timer_settime32
+261 common timer_gettime sys_timer_gettime32
+262 common timer_getoverrun sys_timer_getoverrun
+263 common timer_delete sys_timer_delete
+264 common clock_settime sys_clock_settime32
+265 common clock_gettime sys_clock_gettime32
+266 common clock_getres sys_clock_getres_time32
+267 common clock_nanosleep sys_clock_nanosleep_time32
+268 common statfs64 sys_statfs64
+269 common fstatfs64 sys_fstatfs64
+270 common tgkill sys_tgkill
+271 common utimes sys_utimes_time32
+272 common fadvise64_64 sys_fadvise64_64_wrapper
+# 273 is reserved for vserver
+274 common mbind sys_mbind
+275 common get_mempolicy sys_get_mempolicy
+276 common set_mempolicy sys_set_mempolicy
+277 common mq_open sys_mq_open
+278 common mq_unlink sys_mq_unlink
+279 common mq_timedsend sys_mq_timedsend_time32
+280 common mq_timedreceive sys_mq_timedreceive_time32
+281 common mq_notify sys_mq_notify
+282 common mq_getsetattr sys_mq_getsetattr
+283 common kexec_load sys_kexec_load
+284 common waitid sys_waitid
+285 common add_key sys_add_key
+286 common request_key sys_request_key
+287 common keyctl sys_keyctl
+288 common ioprio_set sys_ioprio_set
+289 common ioprio_get sys_ioprio_get
+290 common inotify_init sys_inotify_init
+291 common inotify_add_watch sys_inotify_add_watch
+292 common inotify_rm_watch sys_inotify_rm_watch
+# 293 is unused
+294 common migrate_pages sys_migrate_pages
+295 common openat sys_openat
+296 common mkdirat sys_mkdirat
+297 common mknodat sys_mknodat
+298 common fchownat sys_fchownat
+299 common futimesat sys_futimesat_time32
+300 common fstatat64 sys_fstatat64
+301 common unlinkat sys_unlinkat
+302 common renameat sys_renameat
+303 common linkat sys_linkat
+304 common symlinkat sys_symlinkat
+305 common readlinkat sys_readlinkat
+306 common fchmodat sys_fchmodat
+307 common faccessat sys_faccessat
+308 common pselect6 sys_pselect6_time32
+309 common ppoll sys_ppoll_time32
+310 common unshare sys_unshare
+311 common set_robust_list sys_set_robust_list
+312 common get_robust_list sys_get_robust_list
+313 common splice sys_splice
+314 common sync_file_range sys_sh_sync_file_range6
+315 common tee sys_tee
+316 common vmsplice sys_vmsplice
+317 common move_pages sys_move_pages
+318 common getcpu sys_getcpu
+319 common epoll_pwait sys_epoll_pwait
+320 common utimensat sys_utimensat_time32
+321 common signalfd sys_signalfd
+322 common timerfd_create sys_timerfd_create
+323 common eventfd sys_eventfd
+324 common fallocate sys_fallocate
+325 common timerfd_settime sys_timerfd_settime32
+326 common timerfd_gettime sys_timerfd_gettime32
+327 common signalfd4 sys_signalfd4
+328 common eventfd2 sys_eventfd2
+329 common epoll_create1 sys_epoll_create1
+330 common dup3 sys_dup3
+331 common pipe2 sys_pipe2
+332 common inotify_init1 sys_inotify_init1
+333 common preadv sys_preadv
+334 common pwritev sys_pwritev
+335 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo
+336 common perf_event_open sys_perf_event_open
+337 common fanotify_init sys_fanotify_init
+338 common fanotify_mark sys_fanotify_mark
+339 common prlimit64 sys_prlimit64
+340 common socket sys_socket
+341 common bind sys_bind
+342 common connect sys_connect
+343 common listen sys_listen
+344 common accept sys_accept
+345 common getsockname sys_getsockname
+346 common getpeername sys_getpeername
+347 common socketpair sys_socketpair
+348 common send sys_send
+349 common sendto sys_sendto
+350 common recv sys_recv
+351 common recvfrom sys_recvfrom
+352 common shutdown sys_shutdown
+353 common setsockopt sys_setsockopt
+354 common getsockopt sys_getsockopt
+355 common sendmsg sys_sendmsg
+356 common recvmsg sys_recvmsg
+357 common recvmmsg sys_recvmmsg_time32
+358 common accept4 sys_accept4
+359 common name_to_handle_at sys_name_to_handle_at
+360 common open_by_handle_at sys_open_by_handle_at
+361 common clock_adjtime sys_clock_adjtime32
+362 common syncfs sys_syncfs
+363 common sendmmsg sys_sendmmsg
+364 common setns sys_setns
+365 common process_vm_readv sys_process_vm_readv
+366 common process_vm_writev sys_process_vm_writev
+367 common kcmp sys_kcmp
+368 common finit_module sys_finit_module
+369 common sched_getattr sys_sched_getattr
+370 common sched_setattr sys_sched_setattr
+371 common renameat2 sys_renameat2
+372 common seccomp sys_seccomp
+373 common getrandom sys_getrandom
+374 common memfd_create sys_memfd_create
+375 common bpf sys_bpf
+376 common execveat sys_execveat
+377 common userfaultfd sys_userfaultfd
+378 common membarrier sys_membarrier
+379 common mlock2 sys_mlock2
+380 common copy_file_range sys_copy_file_range
+381 common preadv2 sys_preadv2
+382 common pwritev2 sys_pwritev2
+383 common statx sys_statx
+384 common pkey_mprotect sys_pkey_mprotect
+385 common pkey_alloc sys_pkey_alloc
+386 common pkey_free sys_pkey_free
+387 common rseq sys_rseq
+388 common sync_file_range2 sys_sync_file_range2
+# room for arch specific syscalls
+393 common semget sys_semget
+394 common semctl sys_semctl
+395 common shmget sys_shmget
+396 common shmctl sys_shmctl
+397 common shmat sys_shmat
+398 common shmdt sys_shmdt
+399 common msgget sys_msgget
+400 common msgsnd sys_msgsnd
+401 common msgrcv sys_msgrcv
+402 common msgctl sys_msgctl
+403 common clock_gettime64 sys_clock_gettime
+404 common clock_settime64 sys_clock_settime
+405 common clock_adjtime64 sys_clock_adjtime
+406 common clock_getres_time64 sys_clock_getres
+407 common clock_nanosleep_time64 sys_clock_nanosleep
+408 common timer_gettime64 sys_timer_gettime
+409 common timer_settime64 sys_timer_settime
+410 common timerfd_gettime64 sys_timerfd_gettime
+411 common timerfd_settime64 sys_timerfd_settime
+412 common utimensat_time64 sys_utimensat
+413 common pselect6_time64 sys_pselect6
+414 common ppoll_time64 sys_ppoll
+416 common io_pgetevents_time64 sys_io_pgetevents
+417 common recvmmsg_time64 sys_recvmmsg
+418 common mq_timedsend_time64 sys_mq_timedsend
+419 common mq_timedreceive_time64 sys_mq_timedreceive
+420 common semtimedop_time64 sys_semtimedop
+421 common rt_sigtimedwait_time64 sys_rt_sigtimedwait
+422 common futex_time64 sys_futex
+423 common sched_rr_get_interval_time64 sys_sched_rr_get_interval
+424 common pidfd_send_signal sys_pidfd_send_signal
+425 common io_uring_setup sys_io_uring_setup
+426 common io_uring_enter sys_io_uring_enter
+427 common io_uring_register sys_io_uring_register
+428 common open_tree sys_open_tree
+429 common move_mount sys_move_mount
+430 common fsopen sys_fsopen
+431 common fsconfig sys_fsconfig
+432 common fsmount sys_fsmount
+433 common fspick sys_fspick
+434 common pidfd_open sys_pidfd_open
+# 435 reserved for clone3
+436 common close_range sys_close_range
+437 common openat2 sys_openat2
+438 common pidfd_getfd sys_pidfd_getfd
+439 common faccessat2 sys_faccessat2
+440 common process_madvise sys_process_madvise
+441 common epoll_pwait2 sys_epoll_pwait2
+442 common mount_setattr sys_mount_setattr
+443 common quotactl_fd sys_quotactl_fd
+444 common landlock_create_ruleset sys_landlock_create_ruleset
+445 common landlock_add_rule sys_landlock_add_rule
+446 common landlock_restrict_self sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448 common process_mrelease sys_process_mrelease
+449 common futex_waitv sys_futex_waitv
+450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
+452 common fchmodat2 sys_fchmodat2
+453 common map_shadow_stack sys_map_shadow_stack
+454 common futex_wake sys_futex_wake
+455 common futex_wait sys_futex_wait
+456 common futex_requeue sys_futex_requeue
+457 common statmount sys_statmount
+458 common listmount sys_listmount
+459 common lsm_get_self_attr sys_lsm_get_self_attr
+460 common lsm_set_self_attr sys_lsm_set_self_attr
+461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
+463 common setxattrat sys_setxattrat
+464 common getxattrat sys_getxattrat
+465 common listxattrat sys_listxattrat
+466 common removexattrat sys_removexattrat
+467 common open_tree_attr sys_open_tree_attr
+468 common file_getattr sys_file_getattr
+469 common file_setattr sys_file_setattr
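
The header of the new table documents the row format: <number> <abi> <name> <entry point>, with '#' introducing comments. As a rough illustration only, and not the tooling perf itself uses to consume these files, the C fragment below parses a single row:

#include <stdio.h>

/*
 * Parse one syscall.tbl row of the form:
 *   <number> <abi> <name> <entry point>
 * Comment and blank lines are rejected. Returns 0 on success, -1 otherwise.
 */
static int parse_tbl_line(const char *line, int *nr, char *abi,
			  char *name, char *entry)
{
	if (line[0] == '#' || line[0] == '\n' || line[0] == '\0')
		return -1;

	/* Width limits match the 32/64-byte buffers the caller passes in. */
	if (sscanf(line, "%d %31s %63s %63s", nr, abi, name, entry) != 4)
		return -1;

	return 0;
}

int main(void)
{
	const char *sample = "336 common perf_event_open sys_perf_event_open";
	char abi[32], name[64], entry[64];
	int nr;

	if (!parse_tbl_line(sample, &nr, abi, name, entry))
		printf("%d: %s (%s) -> %s\n", nr, name, abi, entry);
	return 0;
}
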
diff --git a/tools/perf/arch/sh/util/Build b/tools/perf/arch/sh/util/Build
deleted file mode 100644
index e813e618954b..000000000000
--- a/tools/perf/arch/sh/util/Build
+++ /dev/null
@@ -1 +0,0 @@
-perf-$(CONFIG_DWARF) += dwarf-regs.o
diff --git a/tools/perf/arch/sh/util/dwarf-regs.c b/tools/perf/arch/sh/util/dwarf-regs.c
deleted file mode 100644
index 4b17fc86c73b..000000000000
--- a/tools/perf/arch/sh/util/dwarf-regs.c
+++ /dev/null
@@ -1,41 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (C) 2010 Matt Fleming <matt@console-pimps.org>
- */
-
-#include <stddef.h>
-#include <dwarf-regs.h>
-
-/*
- * Generic dwarf analysis helpers
- */
-
-#define SH_MAX_REGS 18
-const char *sh_regs_table[SH_MAX_REGS] = {
- "r0",
- "r1",
- "r2",
- "r3",
- "r4",
- "r5",
- "r6",
- "r7",
- "r8",
- "r9",
- "r10",
- "r11",
- "r12",
- "r13",
- "r14",
- "r15",
- "pc",
- "pr",
-};
-
-/* Return architecture dependent register string (for kprobe-tracer) */
-const char *get_arch_regstr(unsigned int n)
-{
- return (n < SH_MAX_REGS) ? sh_regs_table[n] : NULL;
-}
diff --git a/tools/perf/arch/sparc/Build b/tools/perf/arch/sparc/Build
deleted file mode 100644
index e4e5f33c84d8..000000000000
--- a/tools/perf/arch/sparc/Build
+++ /dev/null
@@ -1 +0,0 @@
-perf-y += util/
diff --git a/tools/perf/arch/sparc/Makefile b/tools/perf/arch/sparc/Makefile
index 4031db72ba71..8b59ce8efb89 100644
--- a/tools/perf/arch/sparc/Makefile
+++ b/tools/perf/arch/sparc/Makefile
@@ -1,6 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
-
PERF_HAVE_JITDUMP := 1
diff --git a/tools/perf/arch/sparc/annotate/instructions.c b/tools/perf/arch/sparc/annotate/instructions.c
index 2614c010c235..68c31580ccfc 100644
--- a/tools/perf/arch/sparc/annotate/instructions.c
+++ b/tools/perf/arch/sparc/annotate/instructions.c
@@ -163,6 +163,8 @@ static int sparc__annotate_init(struct arch *arch, char *cpuid __maybe_unused)
arch->initialized = true;
arch->associate_instruction_ops = sparc__associate_instruction_ops;
arch->objdump.comment_char = '#';
+ arch->e_machine = EM_SPARC;
+ arch->e_flags = 0;
}
return 0;
diff --git a/tools/perf/arch/sparc/entry/syscalls/syscall.tbl b/tools/perf/arch/sparc/entry/syscalls/syscall.tbl
new file mode 100644
index 000000000000..ebb7d06d1044
--- /dev/null
+++ b/tools/perf/arch/sparc/entry/syscalls/syscall.tbl
@@ -0,0 +1,517 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# system call numbers and entry vectors for sparc
+#
+# The format is:
+# <number> <abi> <name> <entry point> <compat entry point>
+#
+# The <abi> can be common, 64, or 32 for this file.
+#
+0 common restart_syscall sys_restart_syscall
+1 32 exit sys_exit sparc_exit
+1 64 exit sparc_exit
+2 common fork sys_fork
+3 common read sys_read
+4 common write sys_write
+5 common open sys_open compat_sys_open
+6 common close sys_close
+7 common wait4 sys_wait4 compat_sys_wait4
+8 common creat sys_creat
+9 common link sys_link
+10 common unlink sys_unlink
+11 32 execv sunos_execv
+11 64 execv sys_nis_syscall
+12 common chdir sys_chdir
+13 32 chown sys_chown16
+13 64 chown sys_chown
+14 common mknod sys_mknod
+15 common chmod sys_chmod
+16 32 lchown sys_lchown16
+16 64 lchown sys_lchown
+17 common brk sys_brk
+18 common perfctr sys_nis_syscall
+19 common lseek sys_lseek compat_sys_lseek
+20 common getpid sys_getpid
+21 common capget sys_capget
+22 common capset sys_capset
+23 32 setuid sys_setuid16
+23 64 setuid sys_setuid
+24 32 getuid sys_getuid16
+24 64 getuid sys_getuid
+25 common vmsplice sys_vmsplice
+26 common ptrace sys_ptrace compat_sys_ptrace
+27 common alarm sys_alarm
+28 common sigaltstack sys_sigaltstack compat_sys_sigaltstack
+29 32 pause sys_pause
+29 64 pause sys_nis_syscall
+30 32 utime sys_utime32
+30 64 utime sys_utime
+31 32 lchown32 sys_lchown
+32 32 fchown32 sys_fchown
+33 common access sys_access
+34 common nice sys_nice
+35 32 chown32 sys_chown
+36 common sync sys_sync
+37 common kill sys_kill
+38 common stat sys_newstat compat_sys_newstat
+39 32 sendfile sys_sendfile compat_sys_sendfile
+39 64 sendfile sys_sendfile64
+40 common lstat sys_newlstat compat_sys_newlstat
+41 common dup sys_dup
+42 common pipe sys_sparc_pipe
+43 common times sys_times compat_sys_times
+44 32 getuid32 sys_getuid
+45 common umount2 sys_umount
+46 32 setgid sys_setgid16
+46 64 setgid sys_setgid
+47 32 getgid sys_getgid16
+47 64 getgid sys_getgid
+48 common signal sys_signal
+49 32 geteuid sys_geteuid16
+49 64 geteuid sys_geteuid
+50 32 getegid sys_getegid16
+50 64 getegid sys_getegid
+51 common acct sys_acct
+52 64 memory_ordering sys_memory_ordering
+53 32 getgid32 sys_getgid
+54 common ioctl sys_ioctl compat_sys_ioctl
+55 common reboot sys_reboot
+56 32 mmap2 sys_mmap2 sys32_mmap2
+57 common symlink sys_symlink
+58 common readlink sys_readlink
+59 32 execve sys_execve sys32_execve
+59 64 execve sys64_execve
+60 common umask sys_umask
+61 common chroot sys_chroot
+62 common fstat sys_newfstat compat_sys_newfstat
+63 common fstat64 sys_fstat64 compat_sys_fstat64
+64 common getpagesize sys_getpagesize
+65 common msync sys_msync
+66 common vfork sys_vfork
+67 common pread64 sys_pread64 compat_sys_pread64
+68 common pwrite64 sys_pwrite64 compat_sys_pwrite64
+69 32 geteuid32 sys_geteuid
+70 32 getegid32 sys_getegid
+71 common mmap sys_mmap
+72 32 setreuid32 sys_setreuid
+73 32 munmap sys_munmap
+73 64 munmap sys_64_munmap
+74 common mprotect sys_mprotect
+75 common madvise sys_madvise
+76 common vhangup sys_vhangup
+77 32 truncate64 sys_truncate64 compat_sys_truncate64
+78 common mincore sys_mincore
+79 32 getgroups sys_getgroups16
+79 64 getgroups sys_getgroups
+80 32 setgroups sys_setgroups16
+80 64 setgroups sys_setgroups
+81 common getpgrp sys_getpgrp
+82 32 setgroups32 sys_setgroups
+83 common setitimer sys_setitimer compat_sys_setitimer
+84 32 ftruncate64 sys_ftruncate64 compat_sys_ftruncate64
+85 common swapon sys_swapon
+86 common getitimer sys_getitimer compat_sys_getitimer
+87 32 setuid32 sys_setuid
+88 common sethostname sys_sethostname
+89 32 setgid32 sys_setgid
+90 common dup2 sys_dup2
+91 32 setfsuid32 sys_setfsuid
+92 common fcntl sys_fcntl compat_sys_fcntl
+93 common select sys_select compat_sys_select
+94 32 setfsgid32 sys_setfsgid
+95 common fsync sys_fsync
+96 common setpriority sys_setpriority
+97 common socket sys_socket
+98 common connect sys_connect
+99 common accept sys_accept
+100 common getpriority sys_getpriority
+101 common rt_sigreturn sys_rt_sigreturn sys32_rt_sigreturn
+102 common rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
+103 common rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
+104 common rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
+105 32 rt_sigtimedwait sys_rt_sigtimedwait_time32 compat_sys_rt_sigtimedwait_time32
+105 64 rt_sigtimedwait sys_rt_sigtimedwait
+106 common rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
+107 common rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
+108 32 setresuid32 sys_setresuid
+108 64 setresuid sys_setresuid
+109 32 getresuid32 sys_getresuid
+109 64 getresuid sys_getresuid
+110 32 setresgid32 sys_setresgid
+110 64 setresgid sys_setresgid
+111 32 getresgid32 sys_getresgid
+111 64 getresgid sys_getresgid
+112 32 setregid32 sys_setregid
+113 common recvmsg sys_recvmsg compat_sys_recvmsg
+114 common sendmsg sys_sendmsg compat_sys_sendmsg
+115 32 getgroups32 sys_getgroups
+116 common gettimeofday sys_gettimeofday compat_sys_gettimeofday
+117 common getrusage sys_getrusage compat_sys_getrusage
+118 common getsockopt sys_getsockopt sys_getsockopt
+119 common getcwd sys_getcwd
+120 common readv sys_readv
+121 common writev sys_writev
+122 common settimeofday sys_settimeofday compat_sys_settimeofday
+123 32 fchown sys_fchown16
+123 64 fchown sys_fchown
+124 common fchmod sys_fchmod
+125 common recvfrom sys_recvfrom compat_sys_recvfrom
+126 32 setreuid sys_setreuid16
+126 64 setreuid sys_setreuid
+127 32 setregid sys_setregid16
+127 64 setregid sys_setregid
+128 common rename sys_rename
+129 common truncate sys_truncate compat_sys_truncate
+130 common ftruncate sys_ftruncate compat_sys_ftruncate
+131 common flock sys_flock
+132 common lstat64 sys_lstat64 compat_sys_lstat64
+133 common sendto sys_sendto
+134 common shutdown sys_shutdown
+135 common socketpair sys_socketpair
+136 common mkdir sys_mkdir
+137 common rmdir sys_rmdir
+138 32 utimes sys_utimes_time32
+138 64 utimes sys_utimes
+139 common stat64 sys_stat64 compat_sys_stat64
+140 common sendfile64 sys_sendfile64
+141 common getpeername sys_getpeername
+142 32 futex sys_futex_time32
+142 64 futex sys_futex
+143 common gettid sys_gettid
+144 common getrlimit sys_getrlimit compat_sys_getrlimit
+145 common setrlimit sys_setrlimit compat_sys_setrlimit
+146 common pivot_root sys_pivot_root
+147 common prctl sys_prctl
+148 common pciconfig_read sys_pciconfig_read
+149 common pciconfig_write sys_pciconfig_write
+150 common getsockname sys_getsockname
+151 common inotify_init sys_inotify_init
+152 common inotify_add_watch sys_inotify_add_watch
+153 common poll sys_poll
+154 common getdents64 sys_getdents64
+155 32 fcntl64 sys_fcntl64 compat_sys_fcntl64
+156 common inotify_rm_watch sys_inotify_rm_watch
+157 common statfs sys_statfs compat_sys_statfs
+158 common fstatfs sys_fstatfs compat_sys_fstatfs
+159 common umount sys_oldumount
+160 common sched_set_affinity sys_sched_setaffinity compat_sys_sched_setaffinity
+161 common sched_get_affinity sys_sched_getaffinity compat_sys_sched_getaffinity
+162 common getdomainname sys_getdomainname
+163 common setdomainname sys_setdomainname
+164 64 utrap_install sys_utrap_install
+165 common quotactl sys_quotactl
+166 common set_tid_address sys_set_tid_address
+167 common mount sys_mount
+168 common ustat sys_ustat compat_sys_ustat
+169 common setxattr sys_setxattr
+170 common lsetxattr sys_lsetxattr
+171 common fsetxattr sys_fsetxattr
+172 common getxattr sys_getxattr
+173 common lgetxattr sys_lgetxattr
+174 common getdents sys_getdents compat_sys_getdents
+175 common setsid sys_setsid
+176 common fchdir sys_fchdir
+177 common fgetxattr sys_fgetxattr
+178 common listxattr sys_listxattr
+179 common llistxattr sys_llistxattr
+180 common flistxattr sys_flistxattr
+181 common removexattr sys_removexattr
+182 common lremovexattr sys_lremovexattr
+183 32 sigpending sys_sigpending compat_sys_sigpending
+183 64 sigpending sys_nis_syscall
+184 common query_module sys_ni_syscall
+185 common setpgid sys_setpgid
+186 common fremovexattr sys_fremovexattr
+187 common tkill sys_tkill
+188 32 exit_group sys_exit_group sparc_exit_group
+188 64 exit_group sparc_exit_group
+189 common uname sys_newuname
+190 common init_module sys_init_module
+191 32 personality sys_personality sys_sparc64_personality
+191 64 personality sys_sparc64_personality
+192 32 remap_file_pages sys_sparc_remap_file_pages sys_remap_file_pages
+192 64 remap_file_pages sys_remap_file_pages
+193 common epoll_create sys_epoll_create
+194 common epoll_ctl sys_epoll_ctl
+195 common epoll_wait sys_epoll_wait
+196 common ioprio_set sys_ioprio_set
+197 common getppid sys_getppid
+198 32 sigaction sys_sparc_sigaction compat_sys_sparc_sigaction
+198 64 sigaction sys_nis_syscall
+199 common sgetmask sys_sgetmask
+200 common ssetmask sys_ssetmask
+201 32 sigsuspend sys_sigsuspend
+201 64 sigsuspend sys_nis_syscall
+202 common oldlstat sys_newlstat compat_sys_newlstat
+203 common uselib sys_uselib
+204 32 readdir sys_old_readdir compat_sys_old_readdir
+204 64 readdir sys_nis_syscall
+205 common readahead sys_readahead compat_sys_readahead
+206 common socketcall sys_socketcall compat_sys_socketcall
+207 common syslog sys_syslog
+208 common lookup_dcookie sys_ni_syscall
+209 common fadvise64 sys_fadvise64 compat_sys_fadvise64
+210 common fadvise64_64 sys_fadvise64_64 compat_sys_fadvise64_64
+211 common tgkill sys_tgkill
+212 common waitpid sys_waitpid
+213 common swapoff sys_swapoff
+214 common sysinfo sys_sysinfo compat_sys_sysinfo
+215 32 ipc sys_ipc compat_sys_ipc
+215 64 ipc sys_sparc_ipc
+216 32 sigreturn sys_sigreturn sys32_sigreturn
+216 64 sigreturn sys_nis_syscall
+217 common clone sys_clone
+218 common ioprio_get sys_ioprio_get
+219 32 adjtimex sys_adjtimex_time32
+219 64 adjtimex sys_sparc_adjtimex
+220 32 sigprocmask sys_sigprocmask compat_sys_sigprocmask
+220 64 sigprocmask sys_nis_syscall
+221 common create_module sys_ni_syscall
+222 common delete_module sys_delete_module
+223 common get_kernel_syms sys_ni_syscall
+224 common getpgid sys_getpgid
+225 common bdflush sys_ni_syscall
+226 common sysfs sys_sysfs
+227 common afs_syscall sys_nis_syscall
+228 common setfsuid sys_setfsuid16
+229 common setfsgid sys_setfsgid16
+230 common _newselect sys_select compat_sys_select
+231 32 time sys_time32
+232 common splice sys_splice
+233 32 stime sys_stime32
+233 64 stime sys_stime
+234 common statfs64 sys_statfs64 compat_sys_statfs64
+235 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64
+236 common _llseek sys_llseek
+237 common mlock sys_mlock
+238 common munlock sys_munlock
+239 common mlockall sys_mlockall
+240 common munlockall sys_munlockall
+241 common sched_setparam sys_sched_setparam
+242 common sched_getparam sys_sched_getparam
+243 common sched_setscheduler sys_sched_setscheduler
+244 common sched_getscheduler sys_sched_getscheduler
+245 common sched_yield sys_sched_yield
+246 common sched_get_priority_max sys_sched_get_priority_max
+247 common sched_get_priority_min sys_sched_get_priority_min
+248 32 sched_rr_get_interval sys_sched_rr_get_interval_time32
+248 64 sched_rr_get_interval sys_sched_rr_get_interval
+249 32 nanosleep sys_nanosleep_time32
+249 64 nanosleep sys_nanosleep
+250 32 mremap sys_mremap
+250 64 mremap sys_64_mremap
+251 common _sysctl sys_ni_syscall
+252 common getsid sys_getsid
+253 common fdatasync sys_fdatasync
+254 32 nfsservctl sys_ni_syscall sys_nis_syscall
+254 64 nfsservctl sys_nis_syscall
+255 common sync_file_range sys_sync_file_range compat_sys_sync_file_range
+256 32 clock_settime sys_clock_settime32
+256 64 clock_settime sys_clock_settime
+257 32 clock_gettime sys_clock_gettime32
+257 64 clock_gettime sys_clock_gettime
+258 32 clock_getres sys_clock_getres_time32
+258 64 clock_getres sys_clock_getres
+259 32 clock_nanosleep sys_clock_nanosleep_time32
+259 64 clock_nanosleep sys_clock_nanosleep
+260 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
+261 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
+262 32 timer_settime sys_timer_settime32
+262 64 timer_settime sys_timer_settime
+263 32 timer_gettime sys_timer_gettime32
+263 64 timer_gettime sys_timer_gettime
+264 common timer_getoverrun sys_timer_getoverrun
+265 common timer_delete sys_timer_delete
+266 common timer_create sys_timer_create compat_sys_timer_create
+# 267 was vserver
+267 common vserver sys_nis_syscall
+268 common io_setup sys_io_setup compat_sys_io_setup
+269 common io_destroy sys_io_destroy
+270 common io_submit sys_io_submit compat_sys_io_submit
+271 common io_cancel sys_io_cancel
+272 32 io_getevents sys_io_getevents_time32
+272 64 io_getevents sys_io_getevents
+273 common mq_open sys_mq_open compat_sys_mq_open
+274 common mq_unlink sys_mq_unlink
+275 32 mq_timedsend sys_mq_timedsend_time32
+275 64 mq_timedsend sys_mq_timedsend
+276 32 mq_timedreceive sys_mq_timedreceive_time32
+276 64 mq_timedreceive sys_mq_timedreceive
+277 common mq_notify sys_mq_notify compat_sys_mq_notify
+278 common mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
+279 common waitid sys_waitid compat_sys_waitid
+280 common tee sys_tee
+281 common add_key sys_add_key
+282 common request_key sys_request_key
+283 common keyctl sys_keyctl compat_sys_keyctl
+284 common openat sys_openat compat_sys_openat
+285 common mkdirat sys_mkdirat
+286 common mknodat sys_mknodat
+287 common fchownat sys_fchownat
+288 32 futimesat sys_futimesat_time32
+288 64 futimesat sys_futimesat
+289 common fstatat64 sys_fstatat64 compat_sys_fstatat64
+290 common unlinkat sys_unlinkat
+291 common renameat sys_renameat
+292 common linkat sys_linkat
+293 common symlinkat sys_symlinkat
+294 common readlinkat sys_readlinkat
+295 common fchmodat sys_fchmodat
+296 common faccessat sys_faccessat
+297 32 pselect6 sys_pselect6_time32 compat_sys_pselect6_time32
+297 64 pselect6 sys_pselect6
+298 32 ppoll sys_ppoll_time32 compat_sys_ppoll_time32
+298 64 ppoll sys_ppoll
+299 common unshare sys_unshare
+300 common set_robust_list sys_set_robust_list compat_sys_set_robust_list
+301 common get_robust_list sys_get_robust_list compat_sys_get_robust_list
+302 common migrate_pages sys_migrate_pages
+303 common mbind sys_mbind
+304 common get_mempolicy sys_get_mempolicy
+305 common set_mempolicy sys_set_mempolicy
+306 common kexec_load sys_kexec_load compat_sys_kexec_load
+307 common move_pages sys_move_pages
+308 common getcpu sys_getcpu
+309 common epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
+310 32 utimensat sys_utimensat_time32
+310 64 utimensat sys_utimensat
+311 common signalfd sys_signalfd compat_sys_signalfd
+312 common timerfd_create sys_timerfd_create
+313 common eventfd sys_eventfd
+314 common fallocate sys_fallocate compat_sys_fallocate
+315 32 timerfd_settime sys_timerfd_settime32
+315 64 timerfd_settime sys_timerfd_settime
+316 32 timerfd_gettime sys_timerfd_gettime32
+316 64 timerfd_gettime sys_timerfd_gettime
+317 common signalfd4 sys_signalfd4 compat_sys_signalfd4
+318 common eventfd2 sys_eventfd2
+319 common epoll_create1 sys_epoll_create1
+320 common dup3 sys_dup3
+321 common pipe2 sys_pipe2
+322 common inotify_init1 sys_inotify_init1
+323 common accept4 sys_accept4
+324 common preadv sys_preadv compat_sys_preadv
+325 common pwritev sys_pwritev compat_sys_pwritev
+326 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
+327 common perf_event_open sys_perf_event_open
+328 32 recvmmsg sys_recvmmsg_time32 compat_sys_recvmmsg_time32
+328 64 recvmmsg sys_recvmmsg
+329 common fanotify_init sys_fanotify_init
+330 common fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
+331 common prlimit64 sys_prlimit64
+332 common name_to_handle_at sys_name_to_handle_at
+333 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
+334 32 clock_adjtime sys_clock_adjtime32
+334 64 clock_adjtime sys_sparc_clock_adjtime
+335 common syncfs sys_syncfs
+336 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
+337 common setns sys_setns
+338 common process_vm_readv sys_process_vm_readv
+339 common process_vm_writev sys_process_vm_writev
+340 32 kern_features sys_ni_syscall sys_kern_features
+340 64 kern_features sys_kern_features
+341 common kcmp sys_kcmp
+342 common finit_module sys_finit_module
+343 common sched_setattr sys_sched_setattr
+344 common sched_getattr sys_sched_getattr
+345 common renameat2 sys_renameat2
+346 common seccomp sys_seccomp
+347 common getrandom sys_getrandom
+348 common memfd_create sys_memfd_create
+349 common bpf sys_bpf
+350 32 execveat sys_execveat sys32_execveat
+350 64 execveat sys64_execveat
+351 common membarrier sys_membarrier
+352 common userfaultfd sys_userfaultfd
+353 common bind sys_bind
+354 common listen sys_listen
+355 common setsockopt sys_setsockopt sys_setsockopt
+356 common mlock2 sys_mlock2
+357 common copy_file_range sys_copy_file_range
+358 common preadv2 sys_preadv2 compat_sys_preadv2
+359 common pwritev2 sys_pwritev2 compat_sys_pwritev2
+360 common statx sys_statx
+361 32 io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents
+361 64 io_pgetevents sys_io_pgetevents
+362 common pkey_mprotect sys_pkey_mprotect
+363 common pkey_alloc sys_pkey_alloc
+364 common pkey_free sys_pkey_free
+365 common rseq sys_rseq
+# room for arch specific syscalls
+392 64 semtimedop sys_semtimedop
+393 common semget sys_semget
+394 common semctl sys_semctl compat_sys_semctl
+395 common shmget sys_shmget
+396 common shmctl sys_shmctl compat_sys_shmctl
+397 common shmat sys_shmat compat_sys_shmat
+398 common shmdt sys_shmdt
+399 common msgget sys_msgget
+400 common msgsnd sys_msgsnd compat_sys_msgsnd
+401 common msgrcv sys_msgrcv compat_sys_msgrcv
+402 common msgctl sys_msgctl compat_sys_msgctl
+403 32 clock_gettime64 sys_clock_gettime sys_clock_gettime
+404 32 clock_settime64 sys_clock_settime sys_clock_settime
+405 32 clock_adjtime64 sys_clock_adjtime sys_clock_adjtime
+406 32 clock_getres_time64 sys_clock_getres sys_clock_getres
+407 32 clock_nanosleep_time64 sys_clock_nanosleep sys_clock_nanosleep
+408 32 timer_gettime64 sys_timer_gettime sys_timer_gettime
+409 32 timer_settime64 sys_timer_settime sys_timer_settime
+410 32 timerfd_gettime64 sys_timerfd_gettime sys_timerfd_gettime
+411 32 timerfd_settime64 sys_timerfd_settime sys_timerfd_settime
+412 32 utimensat_time64 sys_utimensat sys_utimensat
+413 32 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
+414 32 ppoll_time64 sys_ppoll compat_sys_ppoll_time64
+416 32 io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64
+417 32 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
+418 32 mq_timedsend_time64 sys_mq_timedsend sys_mq_timedsend
+419 32 mq_timedreceive_time64 sys_mq_timedreceive sys_mq_timedreceive
+420 32 semtimedop_time64 sys_semtimedop sys_semtimedop
+421 32 rt_sigtimedwait_time64 sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time64
+422 32 futex_time64 sys_futex sys_futex
+423 32 sched_rr_get_interval_time64 sys_sched_rr_get_interval sys_sched_rr_get_interval
+424 common pidfd_send_signal sys_pidfd_send_signal
+425 common io_uring_setup sys_io_uring_setup
+426 common io_uring_enter sys_io_uring_enter
+427 common io_uring_register sys_io_uring_register
+428 common open_tree sys_open_tree
+429 common move_mount sys_move_mount
+430 common fsopen sys_fsopen
+431 common fsconfig sys_fsconfig
+432 common fsmount sys_fsmount
+433 common fspick sys_fspick
+434 common pidfd_open sys_pidfd_open
+# 435 reserved for clone3
+436 common close_range sys_close_range
+437 common openat2 sys_openat2
+438 common pidfd_getfd sys_pidfd_getfd
+439 common faccessat2 sys_faccessat2
+440 common process_madvise sys_process_madvise
+441 common epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2
+442 common mount_setattr sys_mount_setattr
+443 common quotactl_fd sys_quotactl_fd
+444 common landlock_create_ruleset sys_landlock_create_ruleset
+445 common landlock_add_rule sys_landlock_add_rule
+446 common landlock_restrict_self sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448 common process_mrelease sys_process_mrelease
+449 common futex_waitv sys_futex_waitv
+450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
+452 common fchmodat2 sys_fchmodat2
+453 common map_shadow_stack sys_map_shadow_stack
+454 common futex_wake sys_futex_wake
+455 common futex_wait sys_futex_wait
+456 common futex_requeue sys_futex_requeue
+457 common statmount sys_statmount
+458 common listmount sys_listmount
+459 common lsm_get_self_attr sys_lsm_get_self_attr
+460 common lsm_set_self_attr sys_lsm_set_self_attr
+461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
+463 common setxattrat sys_setxattrat
+464 common getxattrat sys_getxattrat
+465 common listxattrat sys_listxattrat
+466 common removexattrat sys_removexattrat
+467 common open_tree_attr sys_open_tree_attr
+468 common file_getattr sys_file_getattr
+469 common file_setattr sys_file_setattr
diff --git a/tools/perf/arch/sparc/util/Build b/tools/perf/arch/sparc/util/Build
deleted file mode 100644
index e813e618954b..000000000000
--- a/tools/perf/arch/sparc/util/Build
+++ /dev/null
@@ -1 +0,0 @@
-perf-$(CONFIG_DWARF) += dwarf-regs.o
diff --git a/tools/perf/arch/sparc/util/dwarf-regs.c b/tools/perf/arch/sparc/util/dwarf-regs.c
deleted file mode 100644
index 1282cb2dc7bd..000000000000
--- a/tools/perf/arch/sparc/util/dwarf-regs.c
+++ /dev/null
@@ -1,39 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (C) 2010 David S. Miller <davem@davemloft.net>
- */
-
-#include <stddef.h>
-#include <dwarf-regs.h>
-
-#define SPARC_MAX_REGS 96
-
-const char *sparc_regs_table[SPARC_MAX_REGS] = {
- "%g0", "%g1", "%g2", "%g3", "%g4", "%g5", "%g6", "%g7",
- "%o0", "%o1", "%o2", "%o3", "%o4", "%o5", "%sp", "%o7",
- "%l0", "%l1", "%l2", "%l3", "%l4", "%l5", "%l6", "%l7",
- "%i0", "%i1", "%i2", "%i3", "%i4", "%i5", "%fp", "%i7",
- "%f0", "%f1", "%f2", "%f3", "%f4", "%f5", "%f6", "%f7",
- "%f8", "%f9", "%f10", "%f11", "%f12", "%f13", "%f14", "%f15",
- "%f16", "%f17", "%f18", "%f19", "%f20", "%f21", "%f22", "%f23",
- "%f24", "%f25", "%f26", "%f27", "%f28", "%f29", "%f30", "%f31",
- "%f32", "%f33", "%f34", "%f35", "%f36", "%f37", "%f38", "%f39",
- "%f40", "%f41", "%f42", "%f43", "%f44", "%f45", "%f46", "%f47",
- "%f48", "%f49", "%f50", "%f51", "%f52", "%f53", "%f54", "%f55",
- "%f56", "%f57", "%f58", "%f59", "%f60", "%f61", "%f62", "%f63",
-};
-
-/**
- * get_arch_regstr() - lookup register name from it's DWARF register number
- * @n: the DWARF register number
- *
- * get_arch_regstr() returns the name of the register in struct
- * regdwarfnum_table from it's DWARF register number. If the register is not
- * found in the table, this returns NULL;
- */
-const char *get_arch_regstr(unsigned int n)
-{
- return (n < SPARC_MAX_REGS) ? sparc_regs_table[n] : NULL;
-}
diff --git a/tools/perf/arch/x86/Build b/tools/perf/arch/x86/Build
index a7dd46a5b678..d31a1168757c 100644
--- a/tools/perf/arch/x86/Build
+++ b/tools/perf/arch/x86/Build
@@ -1,2 +1,15 @@
-perf-y += util/
-perf-y += tests/
+perf-util-y += util/
+perf-test-y += tests/
+
+ifdef SHELLCHECK
+ SHELL_TEST_LOGS := $(SHELL_TESTS:%=%.shellcheck_log)
+else
+ SHELL_TESTS :=
+ SHELL_TEST_LOGS :=
+endif
+
+$(OUTPUT)%.shellcheck_log: %
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)$(SHELLCHECK) "$<" > $@ || (cat $@ && rm $@ && false)
+
+perf-test-y += $(SHELL_TEST_LOGS)
diff --git a/tools/perf/arch/x86/Makefile b/tools/perf/arch/x86/Makefile
index 5a9f9a7bf07d..a295a80ea078 100644
--- a/tools/perf/arch/x86/Makefile
+++ b/tools/perf/arch/x86/Makefile
@@ -1,28 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
HAVE_KVM_STAT_SUPPORT := 1
-PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
PERF_HAVE_JITDUMP := 1
-
-###
-# Syscall table generation
-#
-
-generated := $(OUTPUT)arch/x86/include/generated
-out := $(generated)/asm
-header := $(out)/syscalls_64.c
-sys := $(srctree)/tools/perf/arch/x86/entry/syscalls
-systbl := $(sys)/syscalltbl.sh
-
-# Create output directory if not already present
-_dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
-
-$(header): $(sys)/syscall_64.tbl $(systbl)
- $(Q)$(SHELL) '$(systbl)' $(sys)/syscall_64.tbl 'x86_64' > $@
-
-clean::
- $(call QUIET_CLEAN, x86) $(RM) -r $(header) $(generated)
-
-archheaders: $(header)
diff --git a/tools/perf/arch/x86/annotate/instructions.c b/tools/perf/arch/x86/annotate/instructions.c
index 5cdf457f5cbe..da98a4e3c52c 100644
--- a/tools/perf/arch/x86/annotate/instructions.c
+++ b/tools/perf/arch/x86/annotate/instructions.c
@@ -202,7 +202,407 @@ static int x86__annotate_init(struct arch *arch, char *cpuid)
if (x86__cpuid_parse(arch, cpuid))
err = SYMBOL_ANNOTATE_ERRNO__ARCH_INIT_CPUID_PARSING;
}
-
+ arch->e_machine = EM_X86_64;
+ arch->e_flags = 0;
arch->initialized = true;
return err;
}
+
+#ifdef HAVE_LIBDW_SUPPORT
+static void update_insn_state_x86(struct type_state *state,
+ struct data_loc_info *dloc, Dwarf_Die *cu_die,
+ struct disasm_line *dl)
+{
+ struct annotated_insn_loc loc;
+ struct annotated_op_loc *src = &loc.ops[INSN_OP_SOURCE];
+ struct annotated_op_loc *dst = &loc.ops[INSN_OP_TARGET];
+ struct type_state_reg *tsr;
+ Dwarf_Die type_die;
+ u32 insn_offset = dl->al.offset;
+ int fbreg = dloc->fbreg;
+ int fboff = 0;
+
+ if (annotate_get_insn_location(dloc->arch, dl, &loc) < 0)
+ return;
+
+ if (ins__is_call(&dl->ins)) {
+ struct symbol *func = dl->ops.target.sym;
+
+ if (func == NULL)
+ return;
+
+ /* __fentry__ will preserve all registers */
+ if (!strcmp(func->name, "__fentry__"))
+ return;
+
+ pr_debug_dtp("call [%x] %s\n", insn_offset, func->name);
+
+ /* Otherwise invalidate caller-saved registers after call */
+ for (unsigned i = 0; i < ARRAY_SIZE(state->regs); i++) {
+ if (state->regs[i].caller_saved)
+ state->regs[i].ok = false;
+ }
+
+ /* Update register with the return type (if any) */
+ if (die_find_func_rettype(cu_die, func->name, &type_die)) {
+ tsr = &state->regs[state->ret_reg];
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_TYPE;
+ tsr->ok = true;
+
+ pr_debug_dtp("call [%x] return -> reg%d",
+ insn_offset, state->ret_reg);
+ pr_debug_type_name(&type_die, tsr->kind);
+ }
+ return;
+ }
+
+ if (!strncmp(dl->ins.name, "add", 3)) {
+ u64 imm_value = -1ULL;
+ int offset;
+ const char *var_name = NULL;
+ struct map_symbol *ms = dloc->ms;
+ u64 ip = ms->sym->start + dl->al.offset;
+
+ if (!has_reg_type(state, dst->reg1))
+ return;
+
+ tsr = &state->regs[dst->reg1];
+ tsr->copied_from = -1;
+
+ if (src->imm)
+ imm_value = src->offset;
+ else if (has_reg_type(state, src->reg1) &&
+ state->regs[src->reg1].kind == TSR_KIND_CONST)
+ imm_value = state->regs[src->reg1].imm_value;
+ else if (src->reg1 == DWARF_REG_PC) {
+ u64 var_addr = annotate_calc_pcrel(dloc->ms, ip,
+ src->offset, dl);
+
+ if (get_global_var_info(dloc, var_addr,
+ &var_name, &offset) &&
+ !strcmp(var_name, "this_cpu_off") &&
+ tsr->kind == TSR_KIND_CONST) {
+ tsr->kind = TSR_KIND_PERCPU_BASE;
+ tsr->ok = true;
+ imm_value = tsr->imm_value;
+ }
+ }
+ else
+ return;
+
+ if (tsr->kind != TSR_KIND_PERCPU_BASE)
+ return;
+
+ if (get_global_var_type(cu_die, dloc, ip, imm_value, &offset,
+ &type_die) && offset == 0) {
+ /*
+ * This is not a pointer type, but it should be treated
+ * as a pointer.
+ */
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_PERCPU_POINTER;
+ tsr->ok = true;
+
+ pr_debug_dtp("add [%x] percpu %#"PRIx64" -> reg%d",
+ insn_offset, imm_value, dst->reg1);
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ }
+ return;
+ }
+
+ if (strncmp(dl->ins.name, "mov", 3))
+ return;
+
+ if (dloc->fb_cfa) {
+ u64 ip = dloc->ms->sym->start + dl->al.offset;
+ u64 pc = map__rip_2objdump(dloc->ms->map, ip);
+
+ if (die_get_cfa(dloc->di->dbg, pc, &fbreg, &fboff) < 0)
+ fbreg = -1;
+ }
+
+ /* Case 1. register to register or segment:offset to register transfers */
+ if (!src->mem_ref && !dst->mem_ref) {
+ if (!has_reg_type(state, dst->reg1))
+ return;
+
+ tsr = &state->regs[dst->reg1];
+ tsr->copied_from = -1;
+
+ if (dso__kernel(map__dso(dloc->ms->map)) &&
+ src->segment == INSN_SEG_X86_GS && src->imm) {
+ u64 ip = dloc->ms->sym->start + dl->al.offset;
+ u64 var_addr;
+ int offset;
+
+ /*
+ * In kernel, %gs points to a per-cpu region for the
+ * current CPU. Access with a constant offset should
+ * be treated as a global variable access.
+ */
+ var_addr = src->offset;
+
+ if (var_addr == 40) {
+ tsr->kind = TSR_KIND_CANARY;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] stack canary -> reg%d\n",
+ insn_offset, dst->reg1);
+ return;
+ }
+
+ if (!get_global_var_type(cu_die, dloc, ip, var_addr,
+ &offset, &type_die) ||
+ !die_get_member_type(&type_die, offset, &type_die)) {
+ tsr->ok = false;
+ return;
+ }
+
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_TYPE;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] this-cpu addr=%#"PRIx64" -> reg%d",
+ insn_offset, var_addr, dst->reg1);
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ return;
+ }
+
+ if (src->imm) {
+ tsr->kind = TSR_KIND_CONST;
+ tsr->imm_value = src->offset;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] imm=%#x -> reg%d\n",
+ insn_offset, tsr->imm_value, dst->reg1);
+ return;
+ }
+
+ if (!has_reg_type(state, src->reg1) ||
+ !state->regs[src->reg1].ok) {
+ tsr->ok = false;
+ return;
+ }
+
+ tsr->type = state->regs[src->reg1].type;
+ tsr->kind = state->regs[src->reg1].kind;
+ tsr->imm_value = state->regs[src->reg1].imm_value;
+ tsr->ok = true;
+
+ /* To copy back the variable type later (hopefully) */
+ if (tsr->kind == TSR_KIND_TYPE)
+ tsr->copied_from = src->reg1;
+
+ pr_debug_dtp("mov [%x] reg%d -> reg%d",
+ insn_offset, src->reg1, dst->reg1);
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ }
+ /* Case 2. memory to register transfers */
+ if (src->mem_ref && !dst->mem_ref) {
+ int sreg = src->reg1;
+
+ if (!has_reg_type(state, dst->reg1))
+ return;
+
+ tsr = &state->regs[dst->reg1];
+ tsr->copied_from = -1;
+
+retry:
+ /* Check stack variables with offset */
+ if (sreg == fbreg || sreg == state->stack_reg) {
+ struct type_state_stack *stack;
+ int offset = src->offset - fboff;
+
+ stack = find_stack_state(state, offset);
+ if (stack == NULL) {
+ tsr->ok = false;
+ return;
+ } else if (!stack->compound) {
+ tsr->type = stack->type;
+ tsr->kind = stack->kind;
+ tsr->ok = true;
+ } else if (die_get_member_type(&stack->type,
+ offset - stack->offset,
+ &type_die)) {
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_TYPE;
+ tsr->ok = true;
+ } else {
+ tsr->ok = false;
+ return;
+ }
+
+ if (sreg == fbreg) {
+ pr_debug_dtp("mov [%x] -%#x(stack) -> reg%d",
+ insn_offset, -offset, dst->reg1);
+ } else {
+ pr_debug_dtp("mov [%x] %#x(reg%d) -> reg%d",
+ insn_offset, offset, sreg, dst->reg1);
+ }
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ }
+ /* And then dereference the pointer if it has one */
+ else if (has_reg_type(state, sreg) && state->regs[sreg].ok &&
+ state->regs[sreg].kind == TSR_KIND_TYPE &&
+ die_deref_ptr_type(&state->regs[sreg].type,
+ src->offset, &type_die)) {
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_TYPE;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] %#x(reg%d) -> reg%d",
+ insn_offset, src->offset, sreg, dst->reg1);
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ }
+ /* Or check if it's a global variable */
+ else if (sreg == DWARF_REG_PC) {
+ struct map_symbol *ms = dloc->ms;
+ u64 ip = ms->sym->start + dl->al.offset;
+ u64 addr;
+ int offset;
+
+ addr = annotate_calc_pcrel(ms, ip, src->offset, dl);
+
+ if (!get_global_var_type(cu_die, dloc, ip, addr, &offset,
+ &type_die) ||
+ !die_get_member_type(&type_die, offset, &type_die)) {
+ tsr->ok = false;
+ return;
+ }
+
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_TYPE;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] global addr=%"PRIx64" -> reg%d",
+ insn_offset, addr, dst->reg1);
+ pr_debug_type_name(&type_die, tsr->kind);
+ }
+ /* And check percpu access with base register */
+ else if (has_reg_type(state, sreg) &&
+ state->regs[sreg].kind == TSR_KIND_PERCPU_BASE) {
+ u64 ip = dloc->ms->sym->start + dl->al.offset;
+ u64 var_addr = src->offset;
+ int offset;
+
+ if (src->multi_regs) {
+ int reg2 = (sreg == src->reg1) ? src->reg2 : src->reg1;
+
+ if (has_reg_type(state, reg2) && state->regs[reg2].ok &&
+ state->regs[reg2].kind == TSR_KIND_CONST)
+ var_addr += state->regs[reg2].imm_value;
+ }
+
+ /*
+ * In kernel, %gs points to a per-cpu region for the
+ * current CPU. Access with a constant offset should
+ * be treated as a global variable access.
+ */
+ if (get_global_var_type(cu_die, dloc, ip, var_addr,
+ &offset, &type_die) &&
+ die_get_member_type(&type_die, offset, &type_die)) {
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_TYPE;
+ tsr->ok = true;
+
+ if (src->multi_regs) {
+ pr_debug_dtp("mov [%x] percpu %#x(reg%d,reg%d) -> reg%d",
+ insn_offset, src->offset, src->reg1,
+ src->reg2, dst->reg1);
+ } else {
+ pr_debug_dtp("mov [%x] percpu %#x(reg%d) -> reg%d",
+ insn_offset, src->offset, sreg, dst->reg1);
+ }
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ } else {
+ tsr->ok = false;
+ }
+ }
+ /* And then dereference the calculated pointer if it has one */
+ else if (has_reg_type(state, sreg) && state->regs[sreg].ok &&
+ state->regs[sreg].kind == TSR_KIND_PERCPU_POINTER &&
+ die_get_member_type(&state->regs[sreg].type,
+ src->offset, &type_die)) {
+ tsr->type = type_die;
+ tsr->kind = TSR_KIND_TYPE;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] pointer %#x(reg%d) -> reg%d",
+ insn_offset, src->offset, sreg, dst->reg1);
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ }
+ /* Or try another register if any */
+ else if (src->multi_regs && sreg == src->reg1 &&
+ src->reg1 != src->reg2) {
+ sreg = src->reg2;
+ goto retry;
+ }
+ else {
+ int offset;
+ const char *var_name = NULL;
+
+ /* it might be per-cpu variable (in kernel) access */
+ if (src->offset < 0) {
+ if (get_global_var_info(dloc, (s64)src->offset,
+ &var_name, &offset) &&
+ !strcmp(var_name, "__per_cpu_offset")) {
+ tsr->kind = TSR_KIND_PERCPU_BASE;
+ tsr->ok = true;
+
+ pr_debug_dtp("mov [%x] percpu base reg%d\n",
+ insn_offset, dst->reg1);
+ return;
+ }
+ }
+
+ tsr->ok = false;
+ }
+ }
+ /* Case 3. register to memory transfers */
+ if (!src->mem_ref && dst->mem_ref) {
+ if (!has_reg_type(state, src->reg1) ||
+ !state->regs[src->reg1].ok)
+ return;
+
+ /* Check stack variables with offset */
+ if (dst->reg1 == fbreg || dst->reg1 == state->stack_reg) {
+ struct type_state_stack *stack;
+ int offset = dst->offset - fboff;
+
+ tsr = &state->regs[src->reg1];
+
+ stack = find_stack_state(state, offset);
+ if (stack) {
+ /*
+ * The source register is likely to hold a type
+ * of member if it's a compound type. Do not
+ * update the stack variable type since we can
+ * get the member type later by using the
+ * die_get_member_type().
+ */
+ if (!stack->compound)
+ set_stack_state(stack, offset, tsr->kind,
+ &tsr->type);
+ } else {
+ findnew_stack_state(state, offset, tsr->kind,
+ &tsr->type);
+ }
+
+ if (dst->reg1 == fbreg) {
+ pr_debug_dtp("mov [%x] reg%d -> -%#x(stack)",
+ insn_offset, src->reg1, -offset);
+ } else {
+ pr_debug_dtp("mov [%x] reg%d -> %#x(reg%d)",
+ insn_offset, src->reg1, offset, dst->reg1);
+ }
+ pr_debug_type_name(&tsr->type, tsr->kind);
+ }
+ /*
+ * Ignore other transfers since it'd set a value in a struct
+ * and won't change the type.
+ */
+ }
+ /* Case 4. memory to memory transfers (not handled for now) */
+}
+#endif
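
The update_insn_state_x86() body added above walks each disassembled instruction and propagates DWARF type information through a per-register state table, stack slots, and global/per-cpu variables. The toy model below is only an illustration of the core register-to-register idea (all names are invented, none of this is perf code): a call invalidates caller-saved registers, and a register mov copies the tracked type to the destination:

#include <stdbool.h>
#include <stdio.h>

#define NR_REGS 16

/* Minimal per-register tracking state, loosely modelled on type_state_reg. */
struct reg_state {
	const char *type_name;	/* stands in for the DWARF type DIE */
	bool ok;		/* is this entry currently trusted? */
	bool caller_saved;	/* clobbered across calls */
};

static struct reg_state regs[NR_REGS];

/* A call clobbers every caller-saved register, as the real code does. */
static void on_call(void)
{
	for (int i = 0; i < NR_REGS; i++)
		if (regs[i].caller_saved)
			regs[i].ok = false;
}

/* Register-to-register mov: the destination inherits the source's type. */
static void on_mov_reg_reg(int src, int dst)
{
	if (!regs[src].ok) {
		regs[dst].ok = false;
		return;
	}
	regs[dst].type_name = regs[src].type_name;
	regs[dst].ok = true;
}

int main(void)
{
	regs[0] = (struct reg_state){ "struct task_struct *", true, true };

	on_mov_reg_reg(0, 5);	/* reg5 now carries the type */
	on_call();		/* reg0 is caller-saved, so forget it */

	printf("reg5: %s (%s)\n", regs[5].type_name, regs[5].ok ? "ok" : "stale");
	printf("reg0: %s\n", regs[0].ok ? regs[0].type_name : "invalidated");
	return 0;
}
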
diff --git a/tools/perf/arch/x86/entry/syscalls/syscall_32.tbl b/tools/perf/arch/x86/entry/syscalls/syscall_32.tbl
new file mode 100644
index 000000000000..4877e16da69a
--- /dev/null
+++ b/tools/perf/arch/x86/entry/syscalls/syscall_32.tbl
@@ -0,0 +1,477 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# 32-bit system call numbers and entry vectors
+#
+# The format is:
+# <number> <abi> <name> <entry point> [<compat entry point> [noreturn]]
+#
+# The __ia32_sys and __ia32_compat_sys stubs are created on-the-fly for
+# sys_*() system calls and compat_sys_*() compat system calls if
+# IA32_EMULATION is defined, and expect struct pt_regs *regs as their only
+# parameter.
+#
+# The abi is always "i386" for this file.
+#
+0 i386 restart_syscall sys_restart_syscall
+1 i386 exit sys_exit - noreturn
+2 i386 fork sys_fork
+3 i386 read sys_read
+4 i386 write sys_write
+5 i386 open sys_open compat_sys_open
+6 i386 close sys_close
+7 i386 waitpid sys_waitpid
+8 i386 creat sys_creat
+9 i386 link sys_link
+10 i386 unlink sys_unlink
+11 i386 execve sys_execve compat_sys_execve
+12 i386 chdir sys_chdir
+13 i386 time sys_time32
+14 i386 mknod sys_mknod
+15 i386 chmod sys_chmod
+16 i386 lchown sys_lchown16
+17 i386 break
+18 i386 oldstat sys_stat
+19 i386 lseek sys_lseek compat_sys_lseek
+20 i386 getpid sys_getpid
+21 i386 mount sys_mount
+22 i386 umount sys_oldumount
+23 i386 setuid sys_setuid16
+24 i386 getuid sys_getuid16
+25 i386 stime sys_stime32
+26 i386 ptrace sys_ptrace compat_sys_ptrace
+27 i386 alarm sys_alarm
+28 i386 oldfstat sys_fstat
+29 i386 pause sys_pause
+30 i386 utime sys_utime32
+31 i386 stty
+32 i386 gtty
+33 i386 access sys_access
+34 i386 nice sys_nice
+35 i386 ftime
+36 i386 sync sys_sync
+37 i386 kill sys_kill
+38 i386 rename sys_rename
+39 i386 mkdir sys_mkdir
+40 i386 rmdir sys_rmdir
+41 i386 dup sys_dup
+42 i386 pipe sys_pipe
+43 i386 times sys_times compat_sys_times
+44 i386 prof
+45 i386 brk sys_brk
+46 i386 setgid sys_setgid16
+47 i386 getgid sys_getgid16
+48 i386 signal sys_signal
+49 i386 geteuid sys_geteuid16
+50 i386 getegid sys_getegid16
+51 i386 acct sys_acct
+52 i386 umount2 sys_umount
+53 i386 lock
+54 i386 ioctl sys_ioctl compat_sys_ioctl
+55 i386 fcntl sys_fcntl compat_sys_fcntl64
+56 i386 mpx
+57 i386 setpgid sys_setpgid
+58 i386 ulimit
+59 i386 oldolduname sys_olduname
+60 i386 umask sys_umask
+61 i386 chroot sys_chroot
+62 i386 ustat sys_ustat compat_sys_ustat
+63 i386 dup2 sys_dup2
+64 i386 getppid sys_getppid
+65 i386 getpgrp sys_getpgrp
+66 i386 setsid sys_setsid
+67 i386 sigaction sys_sigaction compat_sys_sigaction
+68 i386 sgetmask sys_sgetmask
+69 i386 ssetmask sys_ssetmask
+70 i386 setreuid sys_setreuid16
+71 i386 setregid sys_setregid16
+72 i386 sigsuspend sys_sigsuspend
+73 i386 sigpending sys_sigpending compat_sys_sigpending
+74 i386 sethostname sys_sethostname
+75 i386 setrlimit sys_setrlimit compat_sys_setrlimit
+76 i386 getrlimit sys_old_getrlimit compat_sys_old_getrlimit
+77 i386 getrusage sys_getrusage compat_sys_getrusage
+78 i386 gettimeofday sys_gettimeofday compat_sys_gettimeofday
+79 i386 settimeofday sys_settimeofday compat_sys_settimeofday
+80 i386 getgroups sys_getgroups16
+81 i386 setgroups sys_setgroups16
+82 i386 select sys_old_select compat_sys_old_select
+83 i386 symlink sys_symlink
+84 i386 oldlstat sys_lstat
+85 i386 readlink sys_readlink
+86 i386 uselib sys_uselib
+87 i386 swapon sys_swapon
+88 i386 reboot sys_reboot
+89 i386 readdir sys_old_readdir compat_sys_old_readdir
+90 i386 mmap sys_old_mmap compat_sys_ia32_mmap
+91 i386 munmap sys_munmap
+92 i386 truncate sys_truncate compat_sys_truncate
+93 i386 ftruncate sys_ftruncate compat_sys_ftruncate
+94 i386 fchmod sys_fchmod
+95 i386 fchown sys_fchown16
+96 i386 getpriority sys_getpriority
+97 i386 setpriority sys_setpriority
+98 i386 profil
+99 i386 statfs sys_statfs compat_sys_statfs
+100 i386 fstatfs sys_fstatfs compat_sys_fstatfs
+101 i386 ioperm sys_ioperm
+102 i386 socketcall sys_socketcall compat_sys_socketcall
+103 i386 syslog sys_syslog
+104 i386 setitimer sys_setitimer compat_sys_setitimer
+105 i386 getitimer sys_getitimer compat_sys_getitimer
+106 i386 stat sys_newstat compat_sys_newstat
+107 i386 lstat sys_newlstat compat_sys_newlstat
+108 i386 fstat sys_newfstat compat_sys_newfstat
+109 i386 olduname sys_uname
+110 i386 iopl sys_iopl
+111 i386 vhangup sys_vhangup
+112 i386 idle
+113 i386 vm86old sys_vm86old sys_ni_syscall
+114 i386 wait4 sys_wait4 compat_sys_wait4
+115 i386 swapoff sys_swapoff
+116 i386 sysinfo sys_sysinfo compat_sys_sysinfo
+117 i386 ipc sys_ipc compat_sys_ipc
+118 i386 fsync sys_fsync
+119 i386 sigreturn sys_sigreturn compat_sys_sigreturn
+120 i386 clone sys_clone compat_sys_ia32_clone
+121 i386 setdomainname sys_setdomainname
+122 i386 uname sys_newuname
+123 i386 modify_ldt sys_modify_ldt
+124 i386 adjtimex sys_adjtimex_time32
+125 i386 mprotect sys_mprotect
+126 i386 sigprocmask sys_sigprocmask compat_sys_sigprocmask
+127 i386 create_module
+128 i386 init_module sys_init_module
+129 i386 delete_module sys_delete_module
+130 i386 get_kernel_syms
+131 i386 quotactl sys_quotactl
+132 i386 getpgid sys_getpgid
+133 i386 fchdir sys_fchdir
+134 i386 bdflush sys_ni_syscall
+135 i386 sysfs sys_sysfs
+136 i386 personality sys_personality
+137 i386 afs_syscall
+138 i386 setfsuid sys_setfsuid16
+139 i386 setfsgid sys_setfsgid16
+140 i386 _llseek sys_llseek
+141 i386 getdents sys_getdents compat_sys_getdents
+142 i386 _newselect sys_select compat_sys_select
+143 i386 flock sys_flock
+144 i386 msync sys_msync
+145 i386 readv sys_readv
+146 i386 writev sys_writev
+147 i386 getsid sys_getsid
+148 i386 fdatasync sys_fdatasync
+149 i386 _sysctl sys_ni_syscall
+150 i386 mlock sys_mlock
+151 i386 munlock sys_munlock
+152 i386 mlockall sys_mlockall
+153 i386 munlockall sys_munlockall
+154 i386 sched_setparam sys_sched_setparam
+155 i386 sched_getparam sys_sched_getparam
+156 i386 sched_setscheduler sys_sched_setscheduler
+157 i386 sched_getscheduler sys_sched_getscheduler
+158 i386 sched_yield sys_sched_yield
+159 i386 sched_get_priority_max sys_sched_get_priority_max
+160 i386 sched_get_priority_min sys_sched_get_priority_min
+161 i386 sched_rr_get_interval sys_sched_rr_get_interval_time32
+162 i386 nanosleep sys_nanosleep_time32
+163 i386 mremap sys_mremap
+164 i386 setresuid sys_setresuid16
+165 i386 getresuid sys_getresuid16
+166 i386 vm86 sys_vm86 sys_ni_syscall
+167 i386 query_module
+168 i386 poll sys_poll
+169 i386 nfsservctl
+170 i386 setresgid sys_setresgid16
+171 i386 getresgid sys_getresgid16
+172 i386 prctl sys_prctl
+173 i386 rt_sigreturn sys_rt_sigreturn compat_sys_rt_sigreturn
+174 i386 rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction
+175 i386 rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask
+176 i386 rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending
+177 i386 rt_sigtimedwait sys_rt_sigtimedwait_time32 compat_sys_rt_sigtimedwait_time32
+178 i386 rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo
+179 i386 rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend
+180 i386 pread64 sys_ia32_pread64
+181 i386 pwrite64 sys_ia32_pwrite64
+182 i386 chown sys_chown16
+183 i386 getcwd sys_getcwd
+184 i386 capget sys_capget
+185 i386 capset sys_capset
+186 i386 sigaltstack sys_sigaltstack compat_sys_sigaltstack
+187 i386 sendfile sys_sendfile compat_sys_sendfile
+188 i386 getpmsg
+189 i386 putpmsg
+190 i386 vfork sys_vfork
+191 i386 ugetrlimit sys_getrlimit compat_sys_getrlimit
+192 i386 mmap2 sys_mmap_pgoff
+193 i386 truncate64 sys_ia32_truncate64
+194 i386 ftruncate64 sys_ia32_ftruncate64
+195 i386 stat64 sys_stat64 compat_sys_ia32_stat64
+196 i386 lstat64 sys_lstat64 compat_sys_ia32_lstat64
+197 i386 fstat64 sys_fstat64 compat_sys_ia32_fstat64
+198 i386 lchown32 sys_lchown
+199 i386 getuid32 sys_getuid
+200 i386 getgid32 sys_getgid
+201 i386 geteuid32 sys_geteuid
+202 i386 getegid32 sys_getegid
+203 i386 setreuid32 sys_setreuid
+204 i386 setregid32 sys_setregid
+205 i386 getgroups32 sys_getgroups
+206 i386 setgroups32 sys_setgroups
+207 i386 fchown32 sys_fchown
+208 i386 setresuid32 sys_setresuid
+209 i386 getresuid32 sys_getresuid
+210 i386 setresgid32 sys_setresgid
+211 i386 getresgid32 sys_getresgid
+212 i386 chown32 sys_chown
+213 i386 setuid32 sys_setuid
+214 i386 setgid32 sys_setgid
+215 i386 setfsuid32 sys_setfsuid
+216 i386 setfsgid32 sys_setfsgid
+217 i386 pivot_root sys_pivot_root
+218 i386 mincore sys_mincore
+219 i386 madvise sys_madvise
+220 i386 getdents64 sys_getdents64
+221 i386 fcntl64 sys_fcntl64 compat_sys_fcntl64
+# 222 is unused
+# 223 is unused
+224 i386 gettid sys_gettid
+225 i386 readahead sys_ia32_readahead
+226 i386 setxattr sys_setxattr
+227 i386 lsetxattr sys_lsetxattr
+228 i386 fsetxattr sys_fsetxattr
+229 i386 getxattr sys_getxattr
+230 i386 lgetxattr sys_lgetxattr
+231 i386 fgetxattr sys_fgetxattr
+232 i386 listxattr sys_listxattr
+233 i386 llistxattr sys_llistxattr
+234 i386 flistxattr sys_flistxattr
+235 i386 removexattr sys_removexattr
+236 i386 lremovexattr sys_lremovexattr
+237 i386 fremovexattr sys_fremovexattr
+238 i386 tkill sys_tkill
+239 i386 sendfile64 sys_sendfile64
+240 i386 futex sys_futex_time32
+241 i386 sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity
+242 i386 sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity
+243 i386 set_thread_area sys_set_thread_area
+244 i386 get_thread_area sys_get_thread_area
+245 i386 io_setup sys_io_setup compat_sys_io_setup
+246 i386 io_destroy sys_io_destroy
+247 i386 io_getevents sys_io_getevents_time32
+248 i386 io_submit sys_io_submit compat_sys_io_submit
+249 i386 io_cancel sys_io_cancel
+250 i386 fadvise64 sys_ia32_fadvise64
+# 251 is available for reuse (was briefly sys_set_zone_reclaim)
+252 i386 exit_group sys_exit_group - noreturn
+253 i386 lookup_dcookie
+254 i386 epoll_create sys_epoll_create
+255 i386 epoll_ctl sys_epoll_ctl
+256 i386 epoll_wait sys_epoll_wait
+257 i386 remap_file_pages sys_remap_file_pages
+258 i386 set_tid_address sys_set_tid_address
+259 i386 timer_create sys_timer_create compat_sys_timer_create
+260 i386 timer_settime sys_timer_settime32
+261 i386 timer_gettime sys_timer_gettime32
+262 i386 timer_getoverrun sys_timer_getoverrun
+263 i386 timer_delete sys_timer_delete
+264 i386 clock_settime sys_clock_settime32
+265 i386 clock_gettime sys_clock_gettime32
+266 i386 clock_getres sys_clock_getres_time32
+267 i386 clock_nanosleep sys_clock_nanosleep_time32
+268 i386 statfs64 sys_statfs64 compat_sys_statfs64
+269 i386 fstatfs64 sys_fstatfs64 compat_sys_fstatfs64
+270 i386 tgkill sys_tgkill
+271 i386 utimes sys_utimes_time32
+272 i386 fadvise64_64 sys_ia32_fadvise64_64
+273 i386 vserver
+274 i386 mbind sys_mbind
+275 i386 get_mempolicy sys_get_mempolicy
+276 i386 set_mempolicy sys_set_mempolicy
+277 i386 mq_open sys_mq_open compat_sys_mq_open
+278 i386 mq_unlink sys_mq_unlink
+279 i386 mq_timedsend sys_mq_timedsend_time32
+280 i386 mq_timedreceive sys_mq_timedreceive_time32
+281 i386 mq_notify sys_mq_notify compat_sys_mq_notify
+282 i386 mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
+283 i386 kexec_load sys_kexec_load compat_sys_kexec_load
+284 i386 waitid sys_waitid compat_sys_waitid
+# 285 sys_setaltroot
+286 i386 add_key sys_add_key
+287 i386 request_key sys_request_key
+288 i386 keyctl sys_keyctl compat_sys_keyctl
+289 i386 ioprio_set sys_ioprio_set
+290 i386 ioprio_get sys_ioprio_get
+291 i386 inotify_init sys_inotify_init
+292 i386 inotify_add_watch sys_inotify_add_watch
+293 i386 inotify_rm_watch sys_inotify_rm_watch
+294 i386 migrate_pages sys_migrate_pages
+295 i386 openat sys_openat compat_sys_openat
+296 i386 mkdirat sys_mkdirat
+297 i386 mknodat sys_mknodat
+298 i386 fchownat sys_fchownat
+299 i386 futimesat sys_futimesat_time32
+300 i386 fstatat64 sys_fstatat64 compat_sys_ia32_fstatat64
+301 i386 unlinkat sys_unlinkat
+302 i386 renameat sys_renameat
+303 i386 linkat sys_linkat
+304 i386 symlinkat sys_symlinkat
+305 i386 readlinkat sys_readlinkat
+306 i386 fchmodat sys_fchmodat
+307 i386 faccessat sys_faccessat
+308 i386 pselect6 sys_pselect6_time32 compat_sys_pselect6_time32
+309 i386 ppoll sys_ppoll_time32 compat_sys_ppoll_time32
+310 i386 unshare sys_unshare
+311 i386 set_robust_list sys_set_robust_list compat_sys_set_robust_list
+312 i386 get_robust_list sys_get_robust_list compat_sys_get_robust_list
+313 i386 splice sys_splice
+314 i386 sync_file_range sys_ia32_sync_file_range
+315 i386 tee sys_tee
+316 i386 vmsplice sys_vmsplice
+317 i386 move_pages sys_move_pages
+318 i386 getcpu sys_getcpu
+319 i386 epoll_pwait sys_epoll_pwait
+320 i386 utimensat sys_utimensat_time32
+321 i386 signalfd sys_signalfd compat_sys_signalfd
+322 i386 timerfd_create sys_timerfd_create
+323 i386 eventfd sys_eventfd
+324 i386 fallocate sys_ia32_fallocate
+325 i386 timerfd_settime sys_timerfd_settime32
+326 i386 timerfd_gettime sys_timerfd_gettime32
+327 i386 signalfd4 sys_signalfd4 compat_sys_signalfd4
+328 i386 eventfd2 sys_eventfd2
+329 i386 epoll_create1 sys_epoll_create1
+330 i386 dup3 sys_dup3
+331 i386 pipe2 sys_pipe2
+332 i386 inotify_init1 sys_inotify_init1
+333 i386 preadv sys_preadv compat_sys_preadv
+334 i386 pwritev sys_pwritev compat_sys_pwritev
+335 i386 rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
+336 i386 perf_event_open sys_perf_event_open
+337 i386 recvmmsg sys_recvmmsg_time32 compat_sys_recvmmsg_time32
+338 i386 fanotify_init sys_fanotify_init
+339 i386 fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
+340 i386 prlimit64 sys_prlimit64
+341 i386 name_to_handle_at sys_name_to_handle_at
+342 i386 open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
+343 i386 clock_adjtime sys_clock_adjtime32
+344 i386 syncfs sys_syncfs
+345 i386 sendmmsg sys_sendmmsg compat_sys_sendmmsg
+346 i386 setns sys_setns
+347 i386 process_vm_readv sys_process_vm_readv
+348 i386 process_vm_writev sys_process_vm_writev
+349 i386 kcmp sys_kcmp
+350 i386 finit_module sys_finit_module
+351 i386 sched_setattr sys_sched_setattr
+352 i386 sched_getattr sys_sched_getattr
+353 i386 renameat2 sys_renameat2
+354 i386 seccomp sys_seccomp
+355 i386 getrandom sys_getrandom
+356 i386 memfd_create sys_memfd_create
+357 i386 bpf sys_bpf
+358 i386 execveat sys_execveat compat_sys_execveat
+359 i386 socket sys_socket
+360 i386 socketpair sys_socketpair
+361 i386 bind sys_bind
+362 i386 connect sys_connect
+363 i386 listen sys_listen
+364 i386 accept4 sys_accept4
+365 i386 getsockopt sys_getsockopt sys_getsockopt
+366 i386 setsockopt sys_setsockopt sys_setsockopt
+367 i386 getsockname sys_getsockname
+368 i386 getpeername sys_getpeername
+369 i386 sendto sys_sendto
+370 i386 sendmsg sys_sendmsg compat_sys_sendmsg
+371 i386 recvfrom sys_recvfrom compat_sys_recvfrom
+372 i386 recvmsg sys_recvmsg compat_sys_recvmsg
+373 i386 shutdown sys_shutdown
+374 i386 userfaultfd sys_userfaultfd
+375 i386 membarrier sys_membarrier
+376 i386 mlock2 sys_mlock2
+377 i386 copy_file_range sys_copy_file_range
+378 i386 preadv2 sys_preadv2 compat_sys_preadv2
+379 i386 pwritev2 sys_pwritev2 compat_sys_pwritev2
+380 i386 pkey_mprotect sys_pkey_mprotect
+381 i386 pkey_alloc sys_pkey_alloc
+382 i386 pkey_free sys_pkey_free
+383 i386 statx sys_statx
+384 i386 arch_prctl sys_arch_prctl
+385 i386 io_pgetevents sys_io_pgetevents_time32 compat_sys_io_pgetevents
+386 i386 rseq sys_rseq
+393 i386 semget sys_semget
+394 i386 semctl sys_semctl compat_sys_semctl
+395 i386 shmget sys_shmget
+396 i386 shmctl sys_shmctl compat_sys_shmctl
+397 i386 shmat sys_shmat compat_sys_shmat
+398 i386 shmdt sys_shmdt
+399 i386 msgget sys_msgget
+400 i386 msgsnd sys_msgsnd compat_sys_msgsnd
+401 i386 msgrcv sys_msgrcv compat_sys_msgrcv
+402 i386 msgctl sys_msgctl compat_sys_msgctl
+403 i386 clock_gettime64 sys_clock_gettime
+404 i386 clock_settime64 sys_clock_settime
+405 i386 clock_adjtime64 sys_clock_adjtime
+406 i386 clock_getres_time64 sys_clock_getres
+407 i386 clock_nanosleep_time64 sys_clock_nanosleep
+408 i386 timer_gettime64 sys_timer_gettime
+409 i386 timer_settime64 sys_timer_settime
+410 i386 timerfd_gettime64 sys_timerfd_gettime
+411 i386 timerfd_settime64 sys_timerfd_settime
+412 i386 utimensat_time64 sys_utimensat
+413 i386 pselect6_time64 sys_pselect6 compat_sys_pselect6_time64
+414 i386 ppoll_time64 sys_ppoll compat_sys_ppoll_time64
+416 i386 io_pgetevents_time64 sys_io_pgetevents compat_sys_io_pgetevents_time64
+417 i386 recvmmsg_time64 sys_recvmmsg compat_sys_recvmmsg_time64
+418 i386 mq_timedsend_time64 sys_mq_timedsend
+419 i386 mq_timedreceive_time64 sys_mq_timedreceive
+420 i386 semtimedop_time64 sys_semtimedop
+421 i386 rt_sigtimedwait_time64 sys_rt_sigtimedwait compat_sys_rt_sigtimedwait_time64
+422 i386 futex_time64 sys_futex
+423 i386 sched_rr_get_interval_time64 sys_sched_rr_get_interval
+424 i386 pidfd_send_signal sys_pidfd_send_signal
+425 i386 io_uring_setup sys_io_uring_setup
+426 i386 io_uring_enter sys_io_uring_enter
+427 i386 io_uring_register sys_io_uring_register
+428 i386 open_tree sys_open_tree
+429 i386 move_mount sys_move_mount
+430 i386 fsopen sys_fsopen
+431 i386 fsconfig sys_fsconfig
+432 i386 fsmount sys_fsmount
+433 i386 fspick sys_fspick
+434 i386 pidfd_open sys_pidfd_open
+435 i386 clone3 sys_clone3
+436 i386 close_range sys_close_range
+437 i386 openat2 sys_openat2
+438 i386 pidfd_getfd sys_pidfd_getfd
+439 i386 faccessat2 sys_faccessat2
+440 i386 process_madvise sys_process_madvise
+441 i386 epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2
+442 i386 mount_setattr sys_mount_setattr
+443 i386 quotactl_fd sys_quotactl_fd
+444 i386 landlock_create_ruleset sys_landlock_create_ruleset
+445 i386 landlock_add_rule sys_landlock_add_rule
+446 i386 landlock_restrict_self sys_landlock_restrict_self
+447 i386 memfd_secret sys_memfd_secret
+448 i386 process_mrelease sys_process_mrelease
+449 i386 futex_waitv sys_futex_waitv
+450 i386 set_mempolicy_home_node sys_set_mempolicy_home_node
+451 i386 cachestat sys_cachestat
+452 i386 fchmodat2 sys_fchmodat2
+453 i386 map_shadow_stack sys_map_shadow_stack
+454 i386 futex_wake sys_futex_wake
+455 i386 futex_wait sys_futex_wait
+456 i386 futex_requeue sys_futex_requeue
+457 i386 statmount sys_statmount
+458 i386 listmount sys_listmount
+459 i386 lsm_get_self_attr sys_lsm_get_self_attr
+460 i386 lsm_set_self_attr sys_lsm_set_self_attr
+461 i386 lsm_list_modules sys_lsm_list_modules
+462 i386 mseal sys_mseal
+463 i386 setxattrat sys_setxattrat
+464 i386 getxattrat sys_getxattrat
+465 i386 listxattrat sys_listxattrat
+466 i386 removexattrat sys_removexattrat
+467 i386 open_tree_attr sys_open_tree_attr
+468 i386 file_getattr sys_file_getattr
+469 i386 file_setattr sys_file_setattr
diff --git a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
index 7e8d46f4147f..92cf0fe2291e 100644
--- a/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
@@ -1,8 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
#
# 64-bit system call numbers and entry vectors
#
# The format is:
-# <number> <abi> <name> <entry point>
+# <number> <abi> <name> <entry point> [<compat entry point> [noreturn]]
#
# The __x64_sys_*() stubs are created on-the-fly for sys_*() system calls
#
@@ -68,7 +69,7 @@
57 common fork sys_fork
58 common vfork sys_vfork
59 64 execve sys_execve
-60 common exit sys_exit
+60 common exit sys_exit - noreturn
61 common wait4 sys_wait4
62 common kill sys_kill
63 common uname sys_newuname
@@ -239,7 +240,7 @@
228 common clock_gettime sys_clock_gettime
229 common clock_getres sys_clock_getres
230 common clock_nanosleep sys_clock_nanosleep
-231 common exit_group sys_exit_group
+231 common exit_group sys_exit_group - noreturn
232 common epoll_wait sys_epoll_wait
233 common epoll_ctl sys_epoll_ctl
234 common tgkill sys_tgkill
@@ -343,6 +344,7 @@
332 common statx sys_statx
333 common io_pgetevents sys_io_pgetevents
334 common rseq sys_rseq
+335 common uretprobe sys_uretprobe
# don't use numbers 387 through 423, add new calls after the last
# 'common' entry
424 common pidfd_send_signal sys_pidfd_send_signal
@@ -374,7 +376,7 @@
450 common set_mempolicy_home_node sys_set_mempolicy_home_node
451 common cachestat sys_cachestat
452 common fchmodat2 sys_fchmodat2
-453 64 map_shadow_stack sys_map_shadow_stack
+453 common map_shadow_stack sys_map_shadow_stack
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
@@ -383,6 +385,14 @@
459 common lsm_get_self_attr sys_lsm_get_self_attr
460 common lsm_set_self_attr sys_lsm_set_self_attr
461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
+463 common setxattrat sys_setxattrat
+464 common getxattrat sys_getxattrat
+465 common listxattrat sys_listxattrat
+466 common removexattrat sys_removexattrat
+467 common open_tree_attr sys_open_tree_attr
+468 common file_getattr sys_file_getattr
+469 common file_setattr sys_file_setattr
#
# Due to a historical design error, certain syscalls are numbered differently
diff --git a/tools/perf/arch/x86/entry/syscalls/syscalltbl.sh b/tools/perf/arch/x86/entry/syscalls/syscalltbl.sh
deleted file mode 100755
index 59d7914ed6bb..000000000000
--- a/tools/perf/arch/x86/entry/syscalls/syscalltbl.sh
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-
-in="$1"
-arch="$2"
-
-syscall_macro() {
- nr="$1"
- name="$2"
-
- echo " [$nr] = \"$name\","
-}
-
-emit() {
- nr="$1"
- entry="$2"
-
- syscall_macro "$nr" "$entry"
-}
-
-echo "static const char *const syscalltbl_${arch}[] = {"
-
-sorted_table=$(mktemp /tmp/syscalltbl.XXXXXX)
-grep '^[0-9]' "$in" | sort -n > $sorted_table
-
-max_nr=0
-while read nr _abi name entry _compat; do
- if [ $nr -ge 512 ] ; then # discard compat sycalls
- break
- fi
-
- emit "$nr" "$name"
- max_nr=$nr
-done < $sorted_table
-
-rm -f $sorted_table
-
-echo "};"
-
-echo "#define SYSCALLTBL_${arch}_MAX_ID ${max_nr}"
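
For context, the per-arch generator removed above read a syscall .tbl file, skipped the compat range at numbers 512 and above, and emitted a C string table indexed by syscall number. A rough sketch of its output for arch=x86_64 (illustrative, reconstructed from the script and the 64-bit table in this patch, not the literal generated file):

static const char *const syscalltbl_x86_64[] = {
	[0] = "read",
	[1] = "write",
	[2] = "open",
	/* ... one entry per syscall number below 512 ... */
	[469] = "file_setattr",
};
#define SYSCALLTBL_x86_64_MAX_ID 469

Dropping the script while adding the .tbl files under tools/perf suggests the table is now produced by common tooling shared across architectures rather than by this per-arch shell script.
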
diff --git a/tools/perf/arch/x86/include/arch-tests.h b/tools/perf/arch/x86/include/arch-tests.h
index c0421a26b875..7d65b9e51840 100644
--- a/tools/perf/arch/x86/include/arch-tests.h
+++ b/tools/perf/arch/x86/include/arch-tests.h
@@ -2,6 +2,8 @@
#ifndef ARCH_TESTS_H
#define ARCH_TESTS_H
+#include "tests/tests.h"
+
struct test_suite;
/* Tests */
@@ -12,10 +14,12 @@ int test__insn_x86(struct test_suite *test, int subtest);
int test__intel_pt_pkt_decoder(struct test_suite *test, int subtest);
int test__intel_pt_hybrid_compat(struct test_suite *test, int subtest);
int test__bp_modify(struct test_suite *test, int subtest);
-int test__x86_sample_parsing(struct test_suite *test, int subtest);
int test__amd_ibs_via_core_pmu(struct test_suite *test, int subtest);
+int test__amd_ibs_period(struct test_suite *test, int subtest);
int test__hybrid(struct test_suite *test, int subtest);
+DECLARE_SUITE(x86_topdown);
+
extern struct test_suite *arch_tests[];
#endif
diff --git a/tools/perf/arch/x86/tests/Build b/tools/perf/arch/x86/tests/Build
index b87f46e5feea..7790b3e20f4e 100644
--- a/tools/perf/arch/x86/tests/Build
+++ b/tools/perf/arch/x86/tests/Build
@@ -1,12 +1,27 @@
-perf-$(CONFIG_DWARF_UNWIND) += regs_load.o
-perf-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
+perf-test-$(CONFIG_DWARF_UNWIND) += regs_load.o
+perf-test-$(CONFIG_DWARF_UNWIND) += dwarf-unwind.o
-perf-y += arch-tests.o
-perf-y += sample-parsing.o
-perf-y += hybrid.o
-perf-$(CONFIG_AUXTRACE) += intel-pt-test.o
+perf-test-y += arch-tests.o
+perf-test-y += hybrid.o
+perf-test-$(CONFIG_AUXTRACE) += intel-pt-test.o
ifeq ($(CONFIG_EXTRA_TESTS),y)
-perf-$(CONFIG_AUXTRACE) += insn-x86.o
+perf-test-$(CONFIG_AUXTRACE) += insn-x86.o
endif
-perf-$(CONFIG_X86_64) += bp-modify.o
-perf-y += amd-ibs-via-core-pmu.o
+perf-test-$(CONFIG_X86_64) += bp-modify.o
+perf-test-y += amd-ibs-via-core-pmu.o
+perf-test-y += amd-ibs-period.o
+perf-test-y += topdown.o
+
+ifdef SHELLCHECK
+ SHELL_TESTS := gen-insn-x86-dat.sh
+ SHELL_TEST_LOGS := $(SHELL_TESTS:%=%.shellcheck_log)
+else
+ SHELL_TESTS :=
+ SHELL_TEST_LOGS :=
+endif
+
+$(OUTPUT)%.shellcheck_log: %
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)$(SHELLCHECK) "$<" > $@ || (cat $@ && rm $@ && false)
+
+perf-test-y += $(SHELL_TEST_LOGS)
diff --git a/tools/perf/arch/x86/tests/amd-ibs-period.c b/tools/perf/arch/x86/tests/amd-ibs-period.c
new file mode 100644
index 000000000000..223e059e04de
--- /dev/null
+++ b/tools/perf/arch/x86/tests/amd-ibs-period.c
@@ -0,0 +1,1032 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <sched.h>
+#include <sys/syscall.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+#include <sys/utsname.h>
+#include <string.h>
+
+#include "arch-tests.h"
+#include "linux/perf_event.h"
+#include "linux/zalloc.h"
+#include "tests/tests.h"
+#include "../perf-sys.h"
+#include "pmu.h"
+#include "pmus.h"
+#include "debug.h"
+#include "util.h"
+#include "strbuf.h"
+#include "../util/env.h"
+
+static int page_size;
+
+#define PERF_MMAP_DATA_PAGES 32L
+#define PERF_MMAP_DATA_SIZE (PERF_MMAP_DATA_PAGES * page_size)
+#define PERF_MMAP_DATA_MASK (PERF_MMAP_DATA_SIZE - 1)
+#define PERF_MMAP_TOTAL_PAGES (PERF_MMAP_DATA_PAGES + 1)
+#define PERF_MMAP_TOTAL_SIZE (PERF_MMAP_TOTAL_PAGES * page_size)
+
+#define rmb() asm volatile("lfence":::"memory")
+
+enum {
+ FD_ERROR,
+ FD_SUCCESS,
+};
+
+enum {
+ IBS_FETCH,
+ IBS_OP,
+};
+
+struct perf_pmu *fetch_pmu;
+struct perf_pmu *op_pmu;
+unsigned int perf_event_max_sample_rate;
+
+/* Dummy workload to generate IBS samples. */
+static int dummy_workload_1(unsigned long count)
+{
+ int (*func)(void);
+ int ret = 0;
+ char *p;
+ char insn1[] = {
+ 0xb8, 0x01, 0x00, 0x00, 0x00, /* mov 1,%eax */
+ 0xc3, /* ret */
+ 0xcc, /* int 3 */
+ };
+
+ char insn2[] = {
+ 0xb8, 0x02, 0x00, 0x00, 0x00, /* mov 2,%eax */
+ 0xc3, /* ret */
+ 0xcc, /* int 3 */
+ };
+
+ p = zalloc(2 * page_size);
+ if (!p) {
+ printf("malloc() failed. %m");
+ return 1;
+ }
+
+ func = (void *)((unsigned long)(p + page_size - 1) & ~(page_size - 1));
+
+ ret = mprotect(func, page_size, PROT_READ | PROT_WRITE | PROT_EXEC);
+ if (ret) {
+ printf("mprotect() failed. %m");
+ goto out;
+ }
+
+ if (count < 100000)
+ count = 100000;
+ else if (count > 10000000)
+ count = 10000000;
+ while (count--) {
+ memcpy((void *)func, insn1, sizeof(insn1));
+ if (func() != 1) {
+ pr_debug("ERROR insn1\n");
+ ret = -1;
+ goto out;
+ }
+ memcpy((void *)func, insn2, sizeof(insn2));
+ if (func() != 2) {
+ pr_debug("ERROR insn2\n");
+ ret = -1;
+ goto out;
+ }
+ }
+
+out:
+ free(p);
+ return ret;
+}
+
+/* Another dummy workload to generate IBS samples. */
+static void dummy_workload_2(char *perf)
+{
+ char bench[] = " bench sched messaging -g 10 -l 5000 > /dev/null 2>&1";
+ char taskset[] = "taskset -c 0 ";
+ int ret __maybe_unused;
+ struct strbuf sb;
+ char *cmd;
+
+ strbuf_init(&sb, 0);
+ strbuf_add(&sb, taskset, strlen(taskset));
+ strbuf_add(&sb, perf, strlen(perf));
+ strbuf_add(&sb, bench, strlen(bench));
+ cmd = strbuf_detach(&sb, NULL);
+ ret = system(cmd);
+ free(cmd);
+}
+
+static int sched_affine(int cpu)
+{
+ cpu_set_t set;
+
+ CPU_ZERO(&set);
+ CPU_SET(cpu, &set);
+ if (sched_setaffinity(getpid(), sizeof(set), &set) == -1) {
+ pr_debug("sched_setaffinity() failed. [%m]");
+ return -1;
+ }
+ return 0;
+}
+
+static void
+copy_sample_data(void *src, unsigned long offset, void *dest, size_t size)
+{
+ size_t chunk1_size, chunk2_size;
+
+ if ((offset + size) < (size_t)PERF_MMAP_DATA_SIZE) {
+ memcpy(dest, src + offset, size);
+ } else {
+ chunk1_size = PERF_MMAP_DATA_SIZE - offset;
+ chunk2_size = size - chunk1_size;
+
+ memcpy(dest, src + offset, chunk1_size);
+ memcpy(dest + chunk1_size, src, chunk2_size);
+ }
+}
+
+static int rb_read(struct perf_event_mmap_page *rb, void *dest, size_t size)
+{
+ void *base;
+ unsigned long data_tail, data_head;
+
+ /* Casting to (void *) is needed. */
+ base = (void *)rb + page_size;
+
+ data_head = rb->data_head;
+ rmb();
+ data_tail = rb->data_tail;
+
+ if ((data_head - data_tail) < size)
+ return -1;
+
+ data_tail &= PERF_MMAP_DATA_MASK;
+ copy_sample_data(base, data_tail, dest, size);
+ rb->data_tail += size;
+ return 0;
+}
+
+static void rb_skip(struct perf_event_mmap_page *rb, size_t size)
+{
+ size_t data_head = rb->data_head;
+
+ rmb();
+
+ if ((rb->data_tail + size) > data_head)
+ rb->data_tail = data_head;
+ else
+ rb->data_tail += size;
+}
+
+/* Sample period value taken from the perf sample must match the expected value. */
+static int period_equal(unsigned long exp_period, unsigned long act_period)
+{
+ return exp_period == act_period ? 0 : -1;
+}
+
+/*
+ * Sample period value taken from perf sample must be >= minimum sample period
+ * supported by IBS HW.
+ */
+static int period_higher(unsigned long min_period, unsigned long act_period)
+{
+ return min_period <= act_period ? 0 : -1;
+}
+
+static int rb_drain_samples(struct perf_event_mmap_page *rb,
+ unsigned long exp_period,
+ int *nr_samples,
+ int (*callback)(unsigned long, unsigned long))
+{
+ struct perf_event_header hdr;
+ unsigned long period;
+ int ret = 0;
+
+ /*
+ * PERF_RECORD_SAMPLE:
+ * struct {
+ * struct perf_event_header hdr;
+ * { u64 period; } && PERF_SAMPLE_PERIOD
+ * };
+ */
+ while (1) {
+ if (rb_read(rb, &hdr, sizeof(hdr)))
+ return ret;
+
+ if (hdr.type == PERF_RECORD_SAMPLE) {
+ (*nr_samples)++;
+ period = 0;
+ if (rb_read(rb, &period, sizeof(period)))
+ pr_debug("rb_read(period) error. [%m]");
+ ret |= callback(exp_period, period);
+ } else {
+ rb_skip(rb, hdr.size - sizeof(hdr));
+ }
+ }
+ return ret;
+}
+
+static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
+ int cpu, int group_fd, unsigned long flags)
+{
+ return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
+}
+
+static void fetch_prepare_attr(struct perf_event_attr *attr,
+ unsigned long long config, int freq,
+ unsigned long sample_period)
+{
+ memset(attr, 0, sizeof(struct perf_event_attr));
+
+ attr->type = fetch_pmu->type;
+ attr->size = sizeof(struct perf_event_attr);
+ attr->config = config;
+ attr->disabled = 1;
+ attr->sample_type = PERF_SAMPLE_PERIOD;
+ attr->freq = freq;
+ attr->sample_period = sample_period; /* = ->sample_freq */
+}
+
+static void op_prepare_attr(struct perf_event_attr *attr,
+ unsigned long config, int freq,
+ unsigned long sample_period)
+{
+ memset(attr, 0, sizeof(struct perf_event_attr));
+
+ attr->type = op_pmu->type;
+ attr->size = sizeof(struct perf_event_attr);
+ attr->config = config;
+ attr->disabled = 1;
+ attr->sample_type = PERF_SAMPLE_PERIOD;
+ attr->freq = freq;
+ attr->sample_period = sample_period; /* = ->sample_freq */
+}
+
+struct ibs_configs {
+ /* Input */
+ unsigned long config;
+
+ /* Expected output */
+ unsigned long period;
+ int fd;
+};
+
+/*
+ * Somehow the first Fetch event with sample period = 0x10 produces no
+ * samples, so start with a large period and decrease it gradually.
+ */
+struct ibs_configs fetch_configs[] = {
+ { .config = 0xffff, .period = 0xffff0, .fd = FD_SUCCESS },
+ { .config = 0x1000, .period = 0x10000, .fd = FD_SUCCESS },
+ { .config = 0xff, .period = 0xff0, .fd = FD_SUCCESS },
+ { .config = 0x1, .period = 0x10, .fd = FD_SUCCESS },
+ { .config = 0x0, .period = -1, .fd = FD_ERROR },
+ { .config = 0x10000, .period = -1, .fd = FD_ERROR },
+};
+
+struct ibs_configs op_configs[] = {
+ { .config = 0x0, .period = -1, .fd = FD_ERROR },
+ { .config = 0x1, .period = -1, .fd = FD_ERROR },
+ { .config = 0x8, .period = -1, .fd = FD_ERROR },
+ { .config = 0x9, .period = 0x90, .fd = FD_SUCCESS },
+ { .config = 0xf, .period = 0xf0, .fd = FD_SUCCESS },
+ { .config = 0x1000, .period = 0x10000, .fd = FD_SUCCESS },
+ { .config = 0xffff, .period = 0xffff0, .fd = FD_SUCCESS },
+ { .config = 0x10000, .period = -1, .fd = FD_ERROR },
+ { .config = 0x100000, .period = 0x100000, .fd = FD_SUCCESS },
+ { .config = 0xf00000, .period = 0xf00000, .fd = FD_SUCCESS },
+ { .config = 0xf0ffff, .period = 0xfffff0, .fd = FD_SUCCESS },
+ { .config = 0x1f0ffff, .period = 0x1fffff0, .fd = FD_SUCCESS },
+ { .config = 0x7f0ffff, .period = 0x7fffff0, .fd = FD_SUCCESS },
+ { .config = 0x8f0ffff, .period = -1, .fd = FD_ERROR },
+ { .config = 0x17f0ffff, .period = -1, .fd = FD_ERROR },
+};
+
+static int __ibs_config_test(int ibs_type, struct ibs_configs *config, int *nr_samples)
+{
+ struct perf_event_attr attr;
+ int fd, i;
+ void *rb;
+ int ret = 0;
+
+ if (ibs_type == IBS_FETCH)
+ fetch_prepare_attr(&attr, config->config, 0, 0);
+ else
+ op_prepare_attr(&attr, config->config, 0, 0);
+
+ /* CPU0, All processes */
+ fd = perf_event_open(&attr, -1, 0, -1, 0);
+ if (config->fd == FD_ERROR) {
+ if (fd != -1) {
+ close(fd);
+ return -1;
+ }
+ return 0;
+ }
+ if (fd <= -1)
+ return -1;
+
+ rb = mmap(NULL, PERF_MMAP_TOTAL_SIZE, PROT_READ | PROT_WRITE,
+ MAP_SHARED, fd, 0);
+	if (rb == MAP_FAILED) {
+		pr_debug("mmap() failed. [%m]\n");
+		close(fd);	/* do not leak the event fd */
+		return -1;
+	}
+
+ ioctl(fd, PERF_EVENT_IOC_RESET, 0);
+ ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
+
+ i = 5;
+ while (i--) {
+ dummy_workload_1(1000000);
+
+ ret = rb_drain_samples(rb, config->period, nr_samples,
+ period_equal);
+ if (ret)
+ break;
+ }
+
+ ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
+ munmap(rb, PERF_MMAP_TOTAL_SIZE);
+ close(fd);
+ return ret;
+}
+
+static int ibs_config_test(void)
+{
+ int nr_samples = 0;
+ unsigned long i;
+ int ret = 0;
+ int r;
+
+ pr_debug("\nIBS config tests:\n");
+ pr_debug("-----------------\n");
+
+ pr_debug("Fetch PMU tests:\n");
+ for (i = 0; i < ARRAY_SIZE(fetch_configs); i++) {
+ nr_samples = 0;
+ r = __ibs_config_test(IBS_FETCH, &(fetch_configs[i]), &nr_samples);
+
+ if (fetch_configs[i].fd == FD_ERROR) {
+ pr_debug("0x%-16lx: %-4s\n", fetch_configs[i].config,
+ !r ? "Ok" : "Fail");
+ } else {
+ /*
+ * Although nr_samples == 0 is reported as Fail here,
+			 * the failure status is not cascaded up because we
+			 * cannot decide whether the test really failed
+ * without actual samples.
+ */
+ pr_debug("0x%-16lx: %-4s (nr samples: %d)\n", fetch_configs[i].config,
+ (!r && nr_samples != 0) ? "Ok" : "Fail", nr_samples);
+ }
+
+ ret |= r;
+ }
+
+ pr_debug("Op PMU tests:\n");
+ for (i = 0; i < ARRAY_SIZE(op_configs); i++) {
+ nr_samples = 0;
+ r = __ibs_config_test(IBS_OP, &(op_configs[i]), &nr_samples);
+
+ if (op_configs[i].fd == FD_ERROR) {
+ pr_debug("0x%-16lx: %-4s\n", op_configs[i].config,
+ !r ? "Ok" : "Fail");
+ } else {
+ /*
+ * Although nr_samples == 0 is reported as Fail here,
+			 * the failure status is not cascaded up because we
+			 * cannot decide whether the test really failed
+ * without actual samples.
+ */
+ pr_debug("0x%-16lx: %-4s (nr samples: %d)\n", op_configs[i].config,
+ (!r && nr_samples != 0) ? "Ok" : "Fail", nr_samples);
+ }
+
+ ret |= r;
+ }
+
+ return ret;
+}
+
+struct ibs_period {
+ /* Input */
+ int freq;
+ unsigned long sample_freq;
+
+ /* Output */
+ int ret;
+ unsigned long period;
+};
+
+struct ibs_period fetch_period[] = {
+ { .freq = 0, .sample_freq = 0, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 1, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 0xf, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 0x10, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 0, .sample_freq = 0x11, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 0, .sample_freq = 0x8f, .ret = FD_SUCCESS, .period = 0x80 },
+ { .freq = 0, .sample_freq = 0x90, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 0, .sample_freq = 0x91, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 0, .sample_freq = 0x4d2, .ret = FD_SUCCESS, .period = 0x4d0 },
+ { .freq = 0, .sample_freq = 0x1007, .ret = FD_SUCCESS, .period = 0x1000 },
+ { .freq = 0, .sample_freq = 0xfff0, .ret = FD_SUCCESS, .period = 0xfff0 },
+ { .freq = 0, .sample_freq = 0xffff, .ret = FD_SUCCESS, .period = 0xfff0 },
+ { .freq = 0, .sample_freq = 0x10010, .ret = FD_SUCCESS, .period = 0x10010 },
+ { .freq = 0, .sample_freq = 0x7fffff, .ret = FD_SUCCESS, .period = 0x7ffff0 },
+ { .freq = 0, .sample_freq = 0xfffffff, .ret = FD_SUCCESS, .period = 0xffffff0 },
+ { .freq = 1, .sample_freq = 0, .ret = FD_ERROR, .period = -1 },
+ { .freq = 1, .sample_freq = 1, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0xf, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x10, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x11, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x8f, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x90, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x91, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x4d2, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x1007, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0xfff0, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0xffff, .ret = FD_SUCCESS, .period = 0x10 },
+ { .freq = 1, .sample_freq = 0x10010, .ret = FD_SUCCESS, .period = 0x10 },
+ /* ret=FD_ERROR because freq > default perf_event_max_sample_rate (100000) */
+ { .freq = 1, .sample_freq = 0x7fffff, .ret = FD_ERROR, .period = -1 },
+};
+
+struct ibs_period op_period[] = {
+ { .freq = 0, .sample_freq = 0, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 1, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 0xf, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 0x10, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 0x11, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 0x8f, .ret = FD_ERROR, .period = -1 },
+ { .freq = 0, .sample_freq = 0x90, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 0, .sample_freq = 0x91, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 0, .sample_freq = 0x4d2, .ret = FD_SUCCESS, .period = 0x4d0 },
+ { .freq = 0, .sample_freq = 0x1007, .ret = FD_SUCCESS, .period = 0x1000 },
+ { .freq = 0, .sample_freq = 0xfff0, .ret = FD_SUCCESS, .period = 0xfff0 },
+ { .freq = 0, .sample_freq = 0xffff, .ret = FD_SUCCESS, .period = 0xfff0 },
+ { .freq = 0, .sample_freq = 0x10010, .ret = FD_SUCCESS, .period = 0x10010 },
+ { .freq = 0, .sample_freq = 0x7fffff, .ret = FD_SUCCESS, .period = 0x7ffff0 },
+ { .freq = 0, .sample_freq = 0xfffffff, .ret = FD_SUCCESS, .period = 0xffffff0 },
+ { .freq = 1, .sample_freq = 0, .ret = FD_ERROR, .period = -1 },
+ { .freq = 1, .sample_freq = 1, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0xf, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x10, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x11, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x8f, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x90, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x91, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x4d2, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x1007, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0xfff0, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0xffff, .ret = FD_SUCCESS, .period = 0x90 },
+ { .freq = 1, .sample_freq = 0x10010, .ret = FD_SUCCESS, .period = 0x90 },
+ /* ret=FD_ERROR because freq > default perf_event_max_sample_rate (100000) */
+ { .freq = 1, .sample_freq = 0x7fffff, .ret = FD_ERROR, .period = -1 },
+};
+
+static int __ibs_period_constraint_test(int ibs_type, struct ibs_period *period,
+ int *nr_samples)
+{
+ struct perf_event_attr attr;
+ int ret = 0;
+ void *rb;
+ int fd;
+
+ if (period->freq && period->sample_freq > perf_event_max_sample_rate)
+ period->ret = FD_ERROR;
+
+ if (ibs_type == IBS_FETCH)
+ fetch_prepare_attr(&attr, 0, period->freq, period->sample_freq);
+ else
+ op_prepare_attr(&attr, 0, period->freq, period->sample_freq);
+
+ /* CPU0, All processes */
+ fd = perf_event_open(&attr, -1, 0, -1, 0);
+ if (period->ret == FD_ERROR) {
+ if (fd != -1) {
+ close(fd);
+ return -1;
+ }
+ return 0;
+ }
+ if (fd <= -1)
+ return -1;
+
+ rb = mmap(NULL, PERF_MMAP_TOTAL_SIZE, PROT_READ | PROT_WRITE,
+ MAP_SHARED, fd, 0);
+ if (rb == MAP_FAILED) {
+ pr_debug("mmap() failed. [%m]\n");
+ close(fd);
+ return -1;
+ }
+
+ ioctl(fd, PERF_EVENT_IOC_RESET, 0);
+ ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
+
+ if (period->freq) {
+ dummy_workload_1(100000);
+ ret = rb_drain_samples(rb, period->period, nr_samples,
+ period_higher);
+ } else {
+ dummy_workload_1(period->sample_freq * 10);
+ ret = rb_drain_samples(rb, period->period, nr_samples,
+ period_equal);
+ }
+
+ ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
+ munmap(rb, PERF_MMAP_TOTAL_SIZE);
+ close(fd);
+ return ret;
+}
+
+static int ibs_period_constraint_test(void)
+{
+ unsigned long i;
+ int nr_samples;
+ int ret = 0;
+ int r;
+
+ pr_debug("\nIBS sample period constraint tests:\n");
+ pr_debug("-----------------------------------\n");
+
+ pr_debug("Fetch PMU test:\n");
+ for (i = 0; i < ARRAY_SIZE(fetch_period); i++) {
+ nr_samples = 0;
+ r = __ibs_period_constraint_test(IBS_FETCH, &fetch_period[i],
+ &nr_samples);
+
+ if (fetch_period[i].ret == FD_ERROR) {
+ pr_debug("freq %d, sample_freq %9ld: %-4s\n",
+ fetch_period[i].freq, fetch_period[i].sample_freq,
+ !r ? "Ok" : "Fail");
+ } else {
+ /*
+ * Although nr_samples == 0 is reported as Fail here,
+			 * the failure status is not cascaded up because we
+			 * cannot decide whether the test really failed
+ * without actual samples.
+ */
+ pr_debug("freq %d, sample_freq %9ld: %-4s (nr samples: %d)\n",
+ fetch_period[i].freq, fetch_period[i].sample_freq,
+ (!r && nr_samples != 0) ? "Ok" : "Fail", nr_samples);
+ }
+ ret |= r;
+ }
+
+ pr_debug("Op PMU test:\n");
+ for (i = 0; i < ARRAY_SIZE(op_period); i++) {
+ nr_samples = 0;
+ r = __ibs_period_constraint_test(IBS_OP, &op_period[i],
+ &nr_samples);
+
+ if (op_period[i].ret == FD_ERROR) {
+ pr_debug("freq %d, sample_freq %9ld: %-4s\n",
+ op_period[i].freq, op_period[i].sample_freq,
+ !r ? "Ok" : "Fail");
+ } else {
+ /*
+ * Although nr_samples == 0 is reported as Fail here,
+			 * the failure status is not cascaded up because we
+			 * cannot decide whether the test really failed
+ * without actual samples.
+ */
+ pr_debug("freq %d, sample_freq %9ld: %-4s (nr samples: %d)\n",
+ op_period[i].freq, op_period[i].sample_freq,
+ (!r && nr_samples != 0) ? "Ok" : "Fail", nr_samples);
+ }
+ ret |= r;
+ }
+
+ return ret;
+}
+
+struct ibs_ioctl {
+ /* Input */
+ int freq;
+ unsigned long period;
+
+ /* Expected output */
+ int ret;
+};
+
+struct ibs_ioctl fetch_ioctl[] = {
+ { .freq = 0, .period = 0x0, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x1, .ret = FD_ERROR },
+ { .freq = 0, .period = 0xf, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x10, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x11, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x1f, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x20, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x80, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x8f, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x90, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x91, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x100, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0xfff0, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0xffff, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x10000, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x1fff0, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x1fff5, .ret = FD_ERROR },
+ { .freq = 1, .period = 0x0, .ret = FD_ERROR },
+ { .freq = 1, .period = 0x1, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0xf, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x10, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x11, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x1f, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x20, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x80, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x8f, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x90, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x91, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x100, .ret = FD_SUCCESS },
+};
+
+struct ibs_ioctl op_ioctl[] = {
+ { .freq = 0, .period = 0x0, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x1, .ret = FD_ERROR },
+ { .freq = 0, .period = 0xf, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x10, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x11, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x1f, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x20, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x80, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x8f, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x90, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x91, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x100, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0xfff0, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0xffff, .ret = FD_ERROR },
+ { .freq = 0, .period = 0x10000, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x1fff0, .ret = FD_SUCCESS },
+ { .freq = 0, .period = 0x1fff5, .ret = FD_ERROR },
+ { .freq = 1, .period = 0x0, .ret = FD_ERROR },
+ { .freq = 1, .period = 0x1, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0xf, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x10, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x11, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x1f, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x20, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x80, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x8f, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x90, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x91, .ret = FD_SUCCESS },
+ { .freq = 1, .period = 0x100, .ret = FD_SUCCESS },
+};
+
+static int __ibs_ioctl_test(int ibs_type, struct ibs_ioctl *ibs_ioctl)
+{
+ struct perf_event_attr attr;
+ int ret = 0;
+ int fd;
+ int r;
+
+ if (ibs_type == IBS_FETCH)
+ fetch_prepare_attr(&attr, 0, ibs_ioctl->freq, 1000);
+ else
+ op_prepare_attr(&attr, 0, ibs_ioctl->freq, 1000);
+
+ /* CPU0, All processes */
+ fd = perf_event_open(&attr, -1, 0, -1, 0);
+ if (fd <= -1) {
+ pr_debug("event_open() Failed\n");
+ return -1;
+ }
+
+ r = ioctl(fd, PERF_EVENT_IOC_PERIOD, &ibs_ioctl->period);
+ if ((ibs_ioctl->ret == FD_SUCCESS && r <= -1) ||
+ (ibs_ioctl->ret == FD_ERROR && r >= 0)) {
+ ret = -1;
+ }
+
+ close(fd);
+ return ret;
+}
+
+static int ibs_ioctl_test(void)
+{
+ unsigned long i;
+ int ret = 0;
+ int r;
+
+ pr_debug("\nIBS ioctl() tests:\n");
+ pr_debug("------------------\n");
+
+ pr_debug("Fetch PMU tests\n");
+ for (i = 0; i < ARRAY_SIZE(fetch_ioctl); i++) {
+ r = __ibs_ioctl_test(IBS_FETCH, &fetch_ioctl[i]);
+
+ pr_debug("ioctl(%s = 0x%-7lx): %s\n",
+ fetch_ioctl[i].freq ? "freq " : "period",
+ fetch_ioctl[i].period, r ? "Fail" : "Ok");
+ ret |= r;
+ }
+
+ pr_debug("Op PMU tests\n");
+ for (i = 0; i < ARRAY_SIZE(op_ioctl); i++) {
+ r = __ibs_ioctl_test(IBS_OP, &op_ioctl[i]);
+
+ pr_debug("ioctl(%s = 0x%-7lx): %s\n",
+ op_ioctl[i].freq ? "freq " : "period",
+ op_ioctl[i].period, r ? "Fail" : "Ok");
+ ret |= r;
+ }
+
+ return ret;
+}
+
+static int ibs_freq_neg_test(void)
+{
+ struct perf_event_attr attr;
+ int fd;
+
+ pr_debug("\nIBS freq (negative) tests:\n");
+ pr_debug("--------------------------\n");
+
+ /*
+ * Assuming perf_event_max_sample_rate <= 100000,
+ * config: 0x300D40 ==> MaxCnt: 200000
+ */
+ op_prepare_attr(&attr, 0x300D40, 1, 0);
+
+ /* CPU0, All processes */
+ fd = perf_event_open(&attr, -1, 0, -1, 0);
+ if (fd != -1) {
+ pr_debug("freq 1, sample_freq 200000: Fail\n");
+ close(fd);
+ return -1;
+ }
+
+ pr_debug("freq 1, sample_freq 200000: Ok\n");
+
+ return 0;
+}
+
+struct ibs_l3missonly {
+ /* Input */
+ int freq;
+ unsigned long sample_freq;
+
+ /* Expected output */
+ int ret;
+ unsigned long min_period;
+};
+
+struct ibs_l3missonly fetch_l3missonly = {
+ .freq = 1,
+ .sample_freq = 10000,
+ .ret = FD_SUCCESS,
+ .min_period = 0x10,
+};
+
+struct ibs_l3missonly op_l3missonly = {
+ .freq = 1,
+ .sample_freq = 10000,
+ .ret = FD_SUCCESS,
+ .min_period = 0x90,
+};
+
+static int __ibs_l3missonly_test(char *perf, int ibs_type, int *nr_samples,
+ struct ibs_l3missonly *l3missonly)
+{
+ struct perf_event_attr attr;
+ int ret = 0;
+ void *rb;
+ int fd;
+
+ if (l3missonly->sample_freq > perf_event_max_sample_rate)
+ l3missonly->ret = FD_ERROR;
+
+ if (ibs_type == IBS_FETCH) {
+ fetch_prepare_attr(&attr, 0x800000000000000UL, l3missonly->freq,
+ l3missonly->sample_freq);
+ } else {
+ op_prepare_attr(&attr, 0x10000, l3missonly->freq,
+ l3missonly->sample_freq);
+ }
+
+ /* CPU0, All processes */
+ fd = perf_event_open(&attr, -1, 0, -1, 0);
+ if (l3missonly->ret == FD_ERROR) {
+ if (fd != -1) {
+ close(fd);
+ return -1;
+ }
+ return 0;
+ }
+ if (fd == -1) {
+ pr_debug("perf_event_open() failed. [%m]\n");
+ return -1;
+ }
+
+ rb = mmap(NULL, PERF_MMAP_TOTAL_SIZE, PROT_READ | PROT_WRITE,
+ MAP_SHARED, fd, 0);
+ if (rb == MAP_FAILED) {
+ pr_debug("mmap() failed. [%m]\n");
+ close(fd);
+ return -1;
+ }
+
+ ioctl(fd, PERF_EVENT_IOC_RESET, 0);
+ ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
+
+ dummy_workload_2(perf);
+
+ ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
+
+ ret = rb_drain_samples(rb, l3missonly->min_period, nr_samples, period_higher);
+
+ munmap(rb, PERF_MMAP_TOTAL_SIZE);
+ close(fd);
+ return ret;
+}
+
+static int ibs_l3missonly_test(char *perf)
+{
+ int nr_samples = 0;
+ int ret = 0;
+ int r = 0;
+
+ pr_debug("\nIBS L3MissOnly test: (takes a while)\n");
+ pr_debug("--------------------\n");
+
+ if (perf_pmu__has_format(fetch_pmu, "l3missonly")) {
+ nr_samples = 0;
+ r = __ibs_l3missonly_test(perf, IBS_FETCH, &nr_samples, &fetch_l3missonly);
+ if (fetch_l3missonly.ret == FD_ERROR) {
+ pr_debug("Fetch L3MissOnly: %-4s\n", !r ? "Ok" : "Fail");
+ } else {
+ /*
+ * Although nr_samples == 0 is reported as Fail here,
+			 * the failure status is not cascaded up because we
+			 * cannot decide whether the test really failed
+ * without actual samples.
+ */
+ pr_debug("Fetch L3MissOnly: %-4s (nr_samples: %d)\n",
+ (!r && nr_samples != 0) ? "Ok" : "Fail", nr_samples);
+ }
+ ret |= r;
+ }
+
+ if (perf_pmu__has_format(op_pmu, "l3missonly")) {
+ nr_samples = 0;
+ r = __ibs_l3missonly_test(perf, IBS_OP, &nr_samples, &op_l3missonly);
+ if (op_l3missonly.ret == FD_ERROR) {
+ pr_debug("Op L3MissOnly: %-4s\n", !r ? "Ok" : "Fail");
+ } else {
+ /*
+ * Although nr_samples == 0 is reported as Fail here,
+			 * the failure status is not cascaded up because we
+			 * cannot decide whether the test really failed
+ * without actual samples.
+ */
+ pr_debug("Op L3MissOnly: %-4s (nr_samples: %d)\n",
+ (!r && nr_samples != 0) ? "Ok" : "Fail", nr_samples);
+ }
+ ret |= r;
+ }
+
+ return ret;
+}
+
+static unsigned int get_perf_event_max_sample_rate(void)
+{
+ unsigned int max_sample_rate = 100000;
+ FILE *fp;
+ int ret;
+
+ fp = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");
+ if (!fp) {
+ pr_debug("Can't open perf_event_max_sample_rate. Assuming %d\n",
+ max_sample_rate);
+ goto out;
+ }
+
+ ret = fscanf(fp, "%d", &max_sample_rate);
+ if (ret == EOF) {
+ pr_debug("Can't read perf_event_max_sample_rate. Assuming 100000\n");
+ max_sample_rate = 100000;
+ }
+ fclose(fp);
+
+out:
+ return max_sample_rate;
+}
+
+/*
+ * A bunch of IBS sample period fixes that this test exercises went into
+ * v6.15.  Skip the test on older kernels to distinguish a test failure
+ * due to a new bug from a known failure due to an older kernel.
+ */
+static bool kernel_v6_15_or_newer(void)
+{
+ struct utsname utsname;
+ char *endptr = NULL;
+ long major, minor;
+
+ if (uname(&utsname) < 0) {
+ pr_debug("uname() failed. [%m]");
+ return false;
+ }
+
+ major = strtol(utsname.release, &endptr, 10);
+ endptr++;
+ minor = strtol(endptr, NULL, 10);
+
+	return major > 6 || (major == 6 && minor >= 15);
+}
+
+int test__amd_ibs_period(struct test_suite *test __maybe_unused,
+ int subtest __maybe_unused)
+{
+ char perf[PATH_MAX] = {'\0'};
+ int ret = TEST_OK;
+
+ page_size = sysconf(_SC_PAGESIZE);
+
+ /*
+	 * Reading perf_event_max_sample_rate only once _might_ cause some of
+	 * the tests to fail if the kernel changes it after it is read here.
+ */
+ perf_event_max_sample_rate = get_perf_event_max_sample_rate();
+ fetch_pmu = perf_pmus__find("ibs_fetch");
+ op_pmu = perf_pmus__find("ibs_op");
+
+ if (!x86__is_amd_cpu() || !fetch_pmu || !op_pmu)
+ return TEST_SKIP;
+
+ if (!kernel_v6_15_or_newer()) {
+ pr_debug("Need v6.15 or newer kernel. Skipping.\n");
+ return TEST_SKIP;
+ }
+
+ perf_exe(perf, sizeof(perf));
+
+ if (sched_affine(0))
+ return TEST_FAIL;
+
+ /*
+	 * A perf event can be opened in two modes:
+	 * 1 Freq mode
+	 *	perf_event_attr->freq = 1, ->sample_freq = <frequency>
+	 * 2 Sample period mode
+	 *	perf_event_attr->freq = 0, ->sample_period = <period>
+	 *
+	 * Instead of using the above interface, an IBS event in 'sample period
+	 * mode' can also be opened by passing the <period> value directly in
+	 * the MaxCnt bits of perf_event_attr->config. Test this IBS-specific
+	 * interface.
+ */
+ if (ibs_config_test())
+ ret = TEST_FAIL;
+
+ /*
+ * IBS Fetch and Op PMUs have HW constraints on minimum sample period.
+	 * Also, the sample period value must be a multiple of 0x10. Test that
+	 * the IBS driver honors the HW constraints for various values with
+	 * both freq mode and sample period mode IBS events.
+ */
+ if (ibs_period_constraint_test())
+ ret = TEST_FAIL;
+
+ /*
+	 * Test ioctl() with various sample period values for IBS events.
+ */
+ if (ibs_ioctl_test())
+ ret = TEST_FAIL;
+
+ /*
+	 * Test that opening a freq mode IBS event fails when the freq value
+	 * is passed through ->config rather than explicitly in ->sample_freq.
+	 * Also use a high freq value (beyond perf_event_max_sample_rate) to
+	 * verify that the IBS driver does not bypass the
+	 * perf_event_max_sample_rate check.
+ */
+ if (ibs_freq_neg_test())
+ ret = TEST_FAIL;
+
+ /*
+ * L3MissOnly is a post-processing filter, i.e. IBS HW checks for L3
+ * Miss at the completion of the tagged uOp. The sample is discarded
+ * if the tagged uOp did not cause L3Miss. Also, IBS HW internally
+ * resets CurCnt to a small pseudo-random value and resumes counting.
+	 * A new uOp is tagged once CurCnt reaches MaxCnt, and the process
+	 * repeats until the tagged uOp causes an L3 Miss.
+	 *
+	 * With a freq mode event, the next sample period is recalculated by
+	 * the generic kernel code on every sample to achieve the desired
+	 * sample frequency.
+	 *
+	 * Since the number of times the HW internally resets CurCnt, and the
+	 * pseudo-random CurCnt value used on each of those occasions, are not
+	 * known to SW, the kernel's sample period adjustment goes for a toss
+	 * for freq mode IBS events: the kernel will set a very small period
+	 * for the next sample if the window between the current sample and
+	 * the previous one is too large due to multiple samples being
+	 * discarded internally by the IBS HW.
+ *
+ * Test that IBS sample period constraints are honored when L3MissOnly
+ * is ON.
+ */
+ if (ibs_l3missonly_test(perf))
+ ret = TEST_FAIL;
+
+ return ret;
+}
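
The comment in test__amd_ibs_period() above distinguishes the two generic ways of programming a sample period (freq mode and sample period mode) from the IBS-specific path of encoding the period in the MaxCnt bits of ->config. As a minimal sketch, assuming ibs_op_type stands in for op_pmu->type (resolved at runtime via perf_pmus__find("ibs_op")) and borrowing constants from the op_configs[] and op_period[] tables above, the three variants could be opened like this:

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative only; ibs_op_type is a stand-in for the dynamically
 * assigned PMU type of ibs_op (op_pmu->type in the test above). */
static int open_ibs_op(unsigned int ibs_op_type, int mode)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = ibs_op_type;
	attr.size = sizeof(attr);
	attr.sample_type = PERF_SAMPLE_PERIOD;
	attr.disabled = 1;

	if (mode == 0) {
		/* 1. Freq mode: the kernel adjusts the period to hit this rate. */
		attr.freq = 1;
		attr.sample_freq = 1000;	/* must not exceed perf_event_max_sample_rate */
	} else if (mode == 1) {
		/* 2. Sample period mode; ibs_op needs an effective period >= 0x90,
		 * in multiples of 0x10. */
		attr.sample_period = 0x1000;
	} else {
		/* 3. IBS-specific: period in the MaxCnt bits of config.
		 * op_configs[] above expects period 0x10000 for config 0x1000. */
		attr.config = 0x1000;
	}

	/* CPU0, all processes, no group, no flags -- same as the test's helper. */
	return syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
}

A negative return value means the kernel rejected the attribute combination, which is exactly what the FD_ERROR rows in the tables above assert.
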
diff --git a/tools/perf/arch/x86/tests/arch-tests.c b/tools/perf/arch/x86/tests/arch-tests.c
index a216a5d172ed..8f9cfeaa170f 100644
--- a/tools/perf/arch/x86/tests/arch-tests.c
+++ b/tools/perf/arch/x86/tests/arch-tests.c
@@ -23,8 +23,8 @@ struct test_suite suite__intel_pt = {
#if defined(__x86_64__)
DEFINE_SUITE("x86 bp modify", bp_modify);
#endif
-DEFINE_SUITE("x86 Sample parsing", x86_sample_parsing);
DEFINE_SUITE("AMD IBS via core pmu", amd_ibs_via_core_pmu);
+DEFINE_SUITE_EXCLUSIVE("AMD IBS sample period", amd_ibs_period);
static struct test_case hybrid_tests[] = {
TEST_CASE_REASON("x86 hybrid event parsing", hybrid, "not hybrid"),
{ .name = NULL, }
@@ -48,8 +48,9 @@ struct test_suite *arch_tests[] = {
#if defined(__x86_64__)
&suite__bp_modify,
#endif
- &suite__x86_sample_parsing,
&suite__amd_ibs_via_core_pmu,
+ &suite__amd_ibs_period,
&suite__hybrid,
+ &suite__x86_topdown,
NULL,
};
diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c
index 5bfec3345d59..e91a73d09cec 100644
--- a/tools/perf/arch/x86/tests/dwarf-unwind.c
+++ b/tools/perf/arch/x86/tests/dwarf-unwind.c
@@ -34,6 +34,7 @@ static int sample_ustack(struct perf_sample *sample,
}
stack_size = map__end(map) - sp;
+ map__put(map);
stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
memcpy(buf, (void *) sp, stack_size);
@@ -52,7 +53,7 @@ static int sample_ustack(struct perf_sample *sample,
int test__arch_unwind_sample(struct perf_sample *sample,
struct thread *thread)
{
- struct regs_dump *regs = &sample->user_regs;
+ struct regs_dump *regs = perf_sample__user_regs(sample);
u64 *buf;
buf = malloc(sizeof(u64) * PERF_REGS_MAX);
diff --git a/tools/perf/arch/x86/tests/gen-insn-x86-dat.sh b/tools/perf/arch/x86/tests/gen-insn-x86-dat.sh
index 0d0a003a9c5e..89c46532cd5c 100755
--- a/tools/perf/arch/x86/tests/gen-insn-x86-dat.sh
+++ b/tools/perf/arch/x86/tests/gen-insn-x86-dat.sh
@@ -11,7 +11,7 @@ if [ "$(uname -m)" != "x86_64" ]; then
exit 1
fi
-cd $(dirname $0)
+cd "$(dirname $0)"
trap 'echo "Might need a more recent version of binutils"' EXIT
diff --git a/tools/perf/arch/x86/tests/hybrid.c b/tools/perf/arch/x86/tests/hybrid.c
index 40f5d17fedab..e221ea104174 100644
--- a/tools/perf/arch/x86/tests/hybrid.c
+++ b/tools/perf/arch/x86/tests/hybrid.c
@@ -259,11 +259,10 @@ static int test_event(const struct evlist_test *e)
parse_events_error__init(&err);
ret = parse_events(evlist, e->name, &err);
if (ret) {
- pr_debug("failed to parse event '%s', err %d, str '%s'\n",
- e->name, ret, err.str);
+ pr_debug("failed to parse event '%s', err %d\n", e->name, ret);
parse_events_error__print(&err, e->name);
ret = TEST_FAIL;
- if (strstr(err.str, "can't access trace events"))
+ if (parse_events_error__contains(&err, "can't access trace events"))
ret = TEST_SKIP;
} else {
ret = e->check(evlist);
diff --git a/tools/perf/arch/x86/tests/insn-x86-dat-32.c b/tools/perf/arch/x86/tests/insn-x86-dat-32.c
index ba429cadb18f..ce9645edaf68 100644
--- a/tools/perf/arch/x86/tests/insn-x86-dat-32.c
+++ b/tools/perf/arch/x86/tests/insn-x86-dat-32.c
@@ -3107,6 +3107,122 @@
"62 f5 7c 08 2e ca \tvucomish %xmm2,%xmm1",},
{{0x62, 0xf5, 0x7c, 0x08, 0x2e, 0x8c, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
"62 f5 7c 08 2e 8c c8 78 56 34 12 \tvucomish 0x12345678(%eax,%ecx,8),%xmm1",},
+{{0xf3, 0x0f, 0x38, 0xdc, 0xd1, }, 5, 0, "", "",
+"f3 0f 38 dc d1 \tloadiwkey %xmm1,%xmm2",},
+{{0xf3, 0x0f, 0x38, 0xfa, 0xd0, }, 5, 0, "", "",
+"f3 0f 38 fa d0 \tencodekey128 %eax,%edx",},
+{{0xf3, 0x0f, 0x38, 0xfb, 0xd0, }, 5, 0, "", "",
+"f3 0f 38 fb d0 \tencodekey256 %eax,%edx",},
+{{0xf3, 0x0f, 0x38, 0xdc, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 dc 5a 77 \taesenc128kl 0x77(%edx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xde, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 de 5a 77 \taesenc256kl 0x77(%edx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xdd, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 dd 5a 77 \taesdec128kl 0x77(%edx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xdf, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 df 5a 77 \taesdec256kl 0x77(%edx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x42, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 42 77 \taesencwide128kl 0x77(%edx)",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x52, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 52 77 \taesencwide256kl 0x77(%edx)",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x4a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 4a 77 \taesdecwide128kl 0x77(%edx)",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 5a 77 \taesdecwide256kl 0x77(%edx)",},
+{{0x0f, 0x38, 0xfc, 0x08, }, 4, 0, "", "",
+"0f 38 fc 08 \taadd %ecx,(%eax)",},
+{{0x0f, 0x38, 0xfc, 0x15, 0x78, 0x56, 0x34, 0x12, }, 8, 0, "", "",
+"0f 38 fc 15 78 56 34 12 \taadd %edx,0x12345678",},
+{{0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 9, 0, "", "",
+"0f 38 fc 94 c8 78 56 34 12 \taadd %edx,0x12345678(%eax,%ecx,8)",},
+{{0x66, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"66 0f 38 fc 08 \taand %ecx,(%eax)",},
+{{0x66, 0x0f, 0x38, 0xfc, 0x15, 0x78, 0x56, 0x34, 0x12, }, 9, 0, "", "",
+"66 0f 38 fc 15 78 56 34 12 \taand %edx,0x12345678",},
+{{0x66, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"66 0f 38 fc 94 c8 78 56 34 12 \taand %edx,0x12345678(%eax,%ecx,8)",},
+{{0xf2, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"f2 0f 38 fc 08 \taor %ecx,(%eax)",},
+{{0xf2, 0x0f, 0x38, 0xfc, 0x15, 0x78, 0x56, 0x34, 0x12, }, 9, 0, "", "",
+"f2 0f 38 fc 15 78 56 34 12 \taor %edx,0x12345678",},
+{{0xf2, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"f2 0f 38 fc 94 c8 78 56 34 12 \taor %edx,0x12345678(%eax,%ecx,8)",},
+{{0xf3, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"f3 0f 38 fc 08 \taxor %ecx,(%eax)",},
+{{0xf3, 0x0f, 0x38, 0xfc, 0x15, 0x78, 0x56, 0x34, 0x12, }, 9, 0, "", "",
+"f3 0f 38 fc 15 78 56 34 12 \taxor %edx,0x12345678",},
+{{0xf3, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"f3 0f 38 fc 94 c8 78 56 34 12 \taxor %edx,0x12345678(%eax,%ecx,8)",},
+{{0xc4, 0xe2, 0x7a, 0xb1, 0x31, }, 5, 0, "", "",
+"c4 e2 7a b1 31 \tvbcstnebf162ps (%ecx),%xmm6",},
+{{0xc4, 0xe2, 0x79, 0xb1, 0x31, }, 5, 0, "", "",
+"c4 e2 79 b1 31 \tvbcstnesh2ps (%ecx),%xmm6",},
+{{0xc4, 0xe2, 0x7a, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 7a b0 31 \tvcvtneebf162ps (%ecx),%xmm6",},
+{{0xc4, 0xe2, 0x79, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 79 b0 31 \tvcvtneeph2ps (%ecx),%xmm6",},
+{{0xc4, 0xe2, 0x7b, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 7b b0 31 \tvcvtneobf162ps (%ecx),%xmm6",},
+{{0xc4, 0xe2, 0x78, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 78 b0 31 \tvcvtneoph2ps (%ecx),%xmm6",},
+{{0x62, 0xf2, 0x7e, 0x08, 0x72, 0xf1, }, 6, 0, "", "",
+"62 f2 7e 08 72 f1 \tvcvtneps2bf16 %xmm1,%xmm6",},
+{{0xc4, 0xe2, 0x6b, 0x50, 0xd9, }, 5, 0, "", "",
+"c4 e2 6b 50 d9 \tvpdpbssd %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6b, 0x51, 0xd9, }, 5, 0, "", "",
+"c4 e2 6b 51 d9 \tvpdpbssds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0x50, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a 50 d9 \tvpdpbsud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0x51, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a 51 d9 \tvpdpbsuds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0x50, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 50 d9 \tvpdpbuud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0x51, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 51 d9 \tvpdpbuuds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0xd2, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a d2 d9 \tvpdpwsud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0xd3, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a d3 d9 \tvpdpwsuds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x69, 0xd2, 0xd9, }, 5, 0, "", "",
+"c4 e2 69 d2 d9 \tvpdpwusd %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x69, 0xd3, 0xd9, }, 5, 0, "", "",
+"c4 e2 69 d3 d9 \tvpdpwusds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0xd2, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 d2 d9 \tvpdpwuud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0xd3, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 d3 d9 \tvpdpwuuds %xmm1,%xmm2,%xmm3",},
+{{0x62, 0xf2, 0xed, 0x08, 0xb5, 0xd9, }, 6, 0, "", "",
+"62 f2 ed 08 b5 d9 \tvpmadd52huq %xmm1,%xmm2,%xmm3",},
+{{0x62, 0xf2, 0xed, 0x08, 0xb4, 0xd9, }, 6, 0, "", "",
+"62 f2 ed 08 b4 d9 \tvpmadd52luq %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x7f, 0xcc, 0xd1, }, 5, 0, "", "",
+"c4 e2 7f cc d1 \tvsha512msg1 %xmm1,%ymm2",},
+{{0xc4, 0xe2, 0x7f, 0xcd, 0xd1, }, 5, 0, "", "",
+"c4 e2 7f cd d1 \tvsha512msg2 %ymm1,%ymm2",},
+{{0xc4, 0xe2, 0x6f, 0xcb, 0xd9, }, 5, 0, "", "",
+"c4 e2 6f cb d9 \tvsha512rnds2 %xmm1,%ymm2,%ymm3",},
+{{0xc4, 0xe2, 0x68, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 da d9 \tvsm3msg1 %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x69, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 69 da d9 \tvsm3msg2 %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe3, 0x69, 0xde, 0xd9, 0xa1, }, 6, 0, "", "",
+"c4 e3 69 de d9 a1 \tvsm3rnds2 $0xa1,%xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a da d9 \tvsm4key4 %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6b, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 6b da d9 \tvsm4rnds4 %xmm1,%xmm2,%xmm3",},
+{{0x0f, 0x0d, 0x00, }, 3, 0, "", "",
+"0f 0d 00 \tprefetch (%eax)",},
+{{0x0f, 0x18, 0x08, }, 3, 0, "", "",
+"0f 18 08 \tprefetcht0 (%eax)",},
+{{0x0f, 0x18, 0x10, }, 3, 0, "", "",
+"0f 18 10 \tprefetcht1 (%eax)",},
+{{0x0f, 0x18, 0x18, }, 3, 0, "", "",
+"0f 18 18 \tprefetcht2 (%eax)",},
+{{0x0f, 0x18, 0x00, }, 3, 0, "", "",
+"0f 18 00 \tprefetchnta (%eax)",},
+{{0x0f, 0x01, 0xc6, }, 3, 0, "", "",
+"0f 01 c6 \twrmsrns",},
{{0xf3, 0x0f, 0x3a, 0xf0, 0xc0, 0x00, }, 6, 0, "", "",
"f3 0f 3a f0 c0 00 \threset $0x0",},
{{0x0f, 0x01, 0xe8, }, 3, 0, "", "",
diff --git a/tools/perf/arch/x86/tests/insn-x86-dat-64.c b/tools/perf/arch/x86/tests/insn-x86-dat-64.c
index 3a47e98fec33..3881fe89df8b 100644
--- a/tools/perf/arch/x86/tests/insn-x86-dat-64.c
+++ b/tools/perf/arch/x86/tests/insn-x86-dat-64.c
@@ -3877,6 +3877,1032 @@
"62 f5 7c 08 2e 8c c8 78 56 34 12 \tvucomish 0x12345678(%rax,%rcx,8),%xmm1",},
{{0x67, 0x62, 0xf5, 0x7c, 0x08, 0x2e, 0x8c, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 12, 0, "", "",
"67 62 f5 7c 08 2e 8c c8 78 56 34 12 \tvucomish 0x12345678(%eax,%ecx,8),%xmm1",},
+{{0xf3, 0x0f, 0x38, 0xdc, 0xd1, }, 5, 0, "", "",
+"f3 0f 38 dc d1 \tloadiwkey %xmm1,%xmm2",},
+{{0xf3, 0x0f, 0x38, 0xfa, 0xd0, }, 5, 0, "", "",
+"f3 0f 38 fa d0 \tencodekey128 %eax,%edx",},
+{{0xf3, 0x0f, 0x38, 0xfb, 0xd0, }, 5, 0, "", "",
+"f3 0f 38 fb d0 \tencodekey256 %eax,%edx",},
+{{0xf3, 0x0f, 0x38, 0xdc, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 dc 5a 77 \taesenc128kl 0x77(%rdx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xde, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 de 5a 77 \taesenc256kl 0x77(%rdx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xdd, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 dd 5a 77 \taesdec128kl 0x77(%rdx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xdf, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 df 5a 77 \taesdec256kl 0x77(%rdx),%xmm3",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x42, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 42 77 \taesencwide128kl 0x77(%rdx)",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x52, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 52 77 \taesencwide256kl 0x77(%rdx)",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x4a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 4a 77 \taesdecwide128kl 0x77(%rdx)",},
+{{0xf3, 0x0f, 0x38, 0xd8, 0x5a, 0x77, }, 6, 0, "", "",
+"f3 0f 38 d8 5a 77 \taesdecwide256kl 0x77(%rdx)",},
+{{0x0f, 0x38, 0xfc, 0x08, }, 4, 0, "", "",
+"0f 38 fc 08 \taadd %ecx,(%rax)",},
+{{0x41, 0x0f, 0x38, 0xfc, 0x10, }, 5, 0, "", "",
+"41 0f 38 fc 10 \taadd %edx,(%r8)",},
+{{0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 9, 0, "", "",
+"0f 38 fc 94 c8 78 56 34 12 \taadd %edx,0x12345678(%rax,%rcx,8)",},
+{{0x41, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"41 0f 38 fc 94 c8 78 56 34 12 \taadd %edx,0x12345678(%r8,%rcx,8)",},
+{{0x48, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"48 0f 38 fc 08 \taadd %rcx,(%rax)",},
+{{0x49, 0x0f, 0x38, 0xfc, 0x10, }, 5, 0, "", "",
+"49 0f 38 fc 10 \taadd %rdx,(%r8)",},
+{{0x48, 0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"48 0f 38 fc 14 25 78 56 34 12 \taadd %rdx,0x12345678",},
+{{0x48, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"48 0f 38 fc 94 c8 78 56 34 12 \taadd %rdx,0x12345678(%rax,%rcx,8)",},
+{{0x49, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"49 0f 38 fc 94 c8 78 56 34 12 \taadd %rdx,0x12345678(%r8,%rcx,8)",},
+{{0x66, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"66 0f 38 fc 08 \taand %ecx,(%rax)",},
+{{0x66, 0x41, 0x0f, 0x38, 0xfc, 0x10, }, 6, 0, "", "",
+"66 41 0f 38 fc 10 \taand %edx,(%r8)",},
+{{0x66, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"66 0f 38 fc 94 c8 78 56 34 12 \taand %edx,0x12345678(%rax,%rcx,8)",},
+{{0x66, 0x41, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"66 41 0f 38 fc 94 c8 78 56 34 12 \taand %edx,0x12345678(%r8,%rcx,8)",},
+{{0x66, 0x48, 0x0f, 0x38, 0xfc, 0x08, }, 6, 0, "", "",
+"66 48 0f 38 fc 08 \taand %rcx,(%rax)",},
+{{0x66, 0x49, 0x0f, 0x38, 0xfc, 0x10, }, 6, 0, "", "",
+"66 49 0f 38 fc 10 \taand %rdx,(%r8)",},
+{{0x66, 0x48, 0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"66 48 0f 38 fc 14 25 78 56 34 12 \taand %rdx,0x12345678",},
+{{0x66, 0x48, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"66 48 0f 38 fc 94 c8 78 56 34 12 \taand %rdx,0x12345678(%rax,%rcx,8)",},
+{{0x66, 0x49, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"66 49 0f 38 fc 94 c8 78 56 34 12 \taand %rdx,0x12345678(%r8,%rcx,8)",},
+{{0xf2, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"f2 0f 38 fc 08 \taor %ecx,(%rax)",},
+{{0xf2, 0x41, 0x0f, 0x38, 0xfc, 0x10, }, 6, 0, "", "",
+"f2 41 0f 38 fc 10 \taor %edx,(%r8)",},
+{{0xf2, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"f2 0f 38 fc 94 c8 78 56 34 12 \taor %edx,0x12345678(%rax,%rcx,8)",},
+{{0xf2, 0x41, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f2 41 0f 38 fc 94 c8 78 56 34 12 \taor %edx,0x12345678(%r8,%rcx,8)",},
+{{0xf2, 0x48, 0x0f, 0x38, 0xfc, 0x08, }, 6, 0, "", "",
+"f2 48 0f 38 fc 08 \taor %rcx,(%rax)",},
+{{0xf2, 0x49, 0x0f, 0x38, 0xfc, 0x10, }, 6, 0, "", "",
+"f2 49 0f 38 fc 10 \taor %rdx,(%r8)",},
+{{0xf2, 0x48, 0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f2 48 0f 38 fc 14 25 78 56 34 12 \taor %rdx,0x12345678",},
+{{0xf2, 0x48, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f2 48 0f 38 fc 94 c8 78 56 34 12 \taor %rdx,0x12345678(%rax,%rcx,8)",},
+{{0xf2, 0x49, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f2 49 0f 38 fc 94 c8 78 56 34 12 \taor %rdx,0x12345678(%r8,%rcx,8)",},
+{{0xf3, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"f3 0f 38 fc 08 \taxor %ecx,(%rax)",},
+{{0xf3, 0x41, 0x0f, 0x38, 0xfc, 0x10, }, 6, 0, "", "",
+"f3 41 0f 38 fc 10 \taxor %edx,(%r8)",},
+{{0xf3, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"f3 0f 38 fc 94 c8 78 56 34 12 \taxor %edx,0x12345678(%rax,%rcx,8)",},
+{{0xf3, 0x41, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f3 41 0f 38 fc 94 c8 78 56 34 12 \taxor %edx,0x12345678(%r8,%rcx,8)",},
+{{0xf3, 0x48, 0x0f, 0x38, 0xfc, 0x08, }, 6, 0, "", "",
+"f3 48 0f 38 fc 08 \taxor %rcx,(%rax)",},
+{{0xf3, 0x49, 0x0f, 0x38, 0xfc, 0x10, }, 6, 0, "", "",
+"f3 49 0f 38 fc 10 \taxor %rdx,(%r8)",},
+{{0xf3, 0x48, 0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f3 48 0f 38 fc 14 25 78 56 34 12 \taxor %rdx,0x12345678",},
+{{0xf3, 0x48, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f3 48 0f 38 fc 94 c8 78 56 34 12 \taxor %rdx,0x12345678(%rax,%rcx,8)",},
+{{0xf3, 0x49, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"f3 49 0f 38 fc 94 c8 78 56 34 12 \taxor %rdx,0x12345678(%r8,%rcx,8)",},
+{{0xc4, 0xc2, 0x61, 0xe6, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e6 09 \tcmpbexadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe2, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e2 09 \tcmpbxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xee, 0x09, }, 5, 0, "", "",
+"c4 c2 61 ee 09 \tcmplexadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xec, 0x09, }, 5, 0, "", "",
+"c4 c2 61 ec 09 \tcmplxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe7, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e7 09 \tcmpnbexadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe3, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e3 09 \tcmpnbxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xef, 0x09, }, 5, 0, "", "",
+"c4 c2 61 ef 09 \tcmpnlexadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xed, 0x09, }, 5, 0, "", "",
+"c4 c2 61 ed 09 \tcmpnlxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe1, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e1 09 \tcmpnoxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xeb, 0x09, }, 5, 0, "", "",
+"c4 c2 61 eb 09 \tcmpnpxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe9, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e9 09 \tcmpnsxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe5, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e5 09 \tcmpnzxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe0, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e0 09 \tcmpoxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xea, 0x09, }, 5, 0, "", "",
+"c4 c2 61 ea 09 \tcmppxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe8, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e8 09 \tcmpsxadd %ebx,%ecx,(%r9)",},
+{{0xc4, 0xc2, 0x61, 0xe4, 0x09, }, 5, 0, "", "",
+"c4 c2 61 e4 09 \tcmpzxadd %ebx,%ecx,(%r9)",},
+{{0x0f, 0x0d, 0x00, }, 3, 0, "", "",
+"0f 0d 00 \tprefetch (%rax)",},
+{{0x0f, 0x18, 0x08, }, 3, 0, "", "",
+"0f 18 08 \tprefetcht0 (%rax)",},
+{{0x0f, 0x18, 0x10, }, 3, 0, "", "",
+"0f 18 10 \tprefetcht1 (%rax)",},
+{{0x0f, 0x18, 0x18, }, 3, 0, "", "",
+"0f 18 18 \tprefetcht2 (%rax)",},
+{{0x0f, 0x18, 0x00, }, 3, 0, "", "",
+"0f 18 00 \tprefetchnta (%rax)",},
+{{0x0f, 0x18, 0x3d, 0x78, 0x56, 0x34, 0x12, }, 7, 0, "", "",
+"0f 18 3d 78 56 34 12 \tprefetchit0 0x12345678(%rip) # 1234924e <main+0x1234924e>",},
+{{0x0f, 0x18, 0x35, 0x78, 0x56, 0x34, 0x12, }, 7, 0, "", "",
+"0f 18 35 78 56 34 12 \tprefetchit1 0x12345678(%rip) # 12349255 <main+0x12349255>",},
+{{0xf2, 0x0f, 0x01, 0xc6, }, 4, 0, "", "",
+"f2 0f 01 c6 \trdmsrlist",},
+{{0xf3, 0x0f, 0x01, 0xc6, }, 4, 0, "", "",
+"f3 0f 01 c6 \twrmsrlist",},
+{{0xf2, 0x0f, 0x38, 0xf8, 0xd0, }, 5, 0, "", "",
+"f2 0f 38 f8 d0 \turdmsr %rdx,%rax",},
+{{0x62, 0xfc, 0x7f, 0x08, 0xf8, 0xd6, }, 6, 0, "", "",
+"62 fc 7f 08 f8 d6 \turdmsr %rdx,%r22",},
+{{0xc4, 0xc7, 0x7b, 0xf8, 0xc4, 0x7f, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"c4 c7 7b f8 c4 7f 00 00 00 \turdmsr $0x7f,%r12",},
+{{0xf3, 0x0f, 0x38, 0xf8, 0xd0, }, 5, 0, "", "",
+"f3 0f 38 f8 d0 \tuwrmsr %rax,%rdx",},
+{{0x62, 0xfc, 0x7e, 0x08, 0xf8, 0xd6, }, 6, 0, "", "",
+"62 fc 7e 08 f8 d6 \tuwrmsr %r22,%rdx",},
+{{0xc4, 0xc7, 0x7a, 0xf8, 0xc4, 0x7f, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"c4 c7 7a f8 c4 7f 00 00 00 \tuwrmsr %r12,$0x7f",},
+{{0xc4, 0xe2, 0x7a, 0xb1, 0x31, }, 5, 0, "", "",
+"c4 e2 7a b1 31 \tvbcstnebf162ps (%rcx),%xmm6",},
+{{0xc4, 0xe2, 0x79, 0xb1, 0x31, }, 5, 0, "", "",
+"c4 e2 79 b1 31 \tvbcstnesh2ps (%rcx),%xmm6",},
+{{0xc4, 0xe2, 0x7a, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 7a b0 31 \tvcvtneebf162ps (%rcx),%xmm6",},
+{{0xc4, 0xe2, 0x79, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 79 b0 31 \tvcvtneeph2ps (%rcx),%xmm6",},
+{{0xc4, 0xe2, 0x7b, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 7b b0 31 \tvcvtneobf162ps (%rcx),%xmm6",},
+{{0xc4, 0xe2, 0x78, 0xb0, 0x31, }, 5, 0, "", "",
+"c4 e2 78 b0 31 \tvcvtneoph2ps (%rcx),%xmm6",},
+{{0x62, 0xf2, 0x7e, 0x08, 0x72, 0xf1, }, 6, 0, "", "",
+"62 f2 7e 08 72 f1 \tvcvtneps2bf16 %xmm1,%xmm6",},
+{{0xf2, 0x0f, 0x01, 0xca, }, 4, 0, "erets", "indirect",
+"f2 0f 01 ca \terets",},
+{{0xf3, 0x0f, 0x01, 0xca, }, 4, 0, "eretu", "indirect",
+"f3 0f 01 ca \teretu",},
+{{0xc4, 0xe2, 0x71, 0x6c, 0xda, }, 5, 0, "", "",
+"c4 e2 71 6c da \ttcmmimfp16ps %tmm1,%tmm2,%tmm3",},
+{{0xc4, 0xe2, 0x70, 0x6c, 0xda, }, 5, 0, "", "",
+"c4 e2 70 6c da \ttcmmrlfp16ps %tmm1,%tmm2,%tmm3",},
+{{0xc4, 0xe2, 0x73, 0x5c, 0xda, }, 5, 0, "", "",
+"c4 e2 73 5c da \ttdpfp16ps %tmm1,%tmm2,%tmm3",},
+{{0xd5, 0x10, 0xf6, 0xc2, 0x05, }, 5, 0, "", "",
+"d5 10 f6 c2 05 \ttest $0x5,%r18b",},
+{{0xd5, 0x10, 0xf7, 0xc2, 0x05, 0x00, 0x00, 0x00, }, 8, 0, "", "",
+"d5 10 f7 c2 05 00 00 00 \ttest $0x5,%r18d",},
+{{0xd5, 0x18, 0xf7, 0xc2, 0x05, 0x00, 0x00, 0x00, }, 8, 0, "", "",
+"d5 18 f7 c2 05 00 00 00 \ttest $0x5,%r18",},
+{{0x66, 0xd5, 0x10, 0xf7, 0xc2, 0x05, 0x00, }, 7, 0, "", "",
+"66 d5 10 f7 c2 05 00 \ttest $0x5,%r18w",},
+{{0x44, 0x0f, 0xaf, 0xf0, }, 4, 0, "", "",
+"44 0f af f0 \timul %eax,%r14d",},
+{{0xd5, 0xc0, 0xaf, 0xc8, }, 4, 0, "", "",
+"d5 c0 af c8 \timul %eax,%r17d",},
+{{0xd5, 0x90, 0x62, 0x12, }, 4, 0, "", "",
+"d5 90 62 12 \tpunpckldq %mm2,(%r18)",},
+{{0xd5, 0x40, 0x8d, 0x00, }, 4, 0, "", "",
+"d5 40 8d 00 \tlea (%rax),%r16d",},
+{{0xd5, 0x44, 0x8d, 0x38, }, 4, 0, "", "",
+"d5 44 8d 38 \tlea (%rax),%r31d",},
+{{0xd5, 0x20, 0x8d, 0x04, 0x05, 0x00, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"d5 20 8d 04 05 00 00 00 00 \tlea 0x0(,%r16,1),%eax",},
+{{0xd5, 0x22, 0x8d, 0x04, 0x3d, 0x00, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"d5 22 8d 04 3d 00 00 00 00 \tlea 0x0(,%r31,1),%eax",},
+{{0xd5, 0x10, 0x8d, 0x00, }, 4, 0, "", "",
+"d5 10 8d 00 \tlea (%r16),%eax",},
+{{0xd5, 0x11, 0x8d, 0x07, }, 4, 0, "", "",
+"d5 11 8d 07 \tlea (%r31),%eax",},
+{{0x4c, 0x8d, 0x38, }, 3, 0, "", "",
+"4c 8d 38 \tlea (%rax),%r15",},
+{{0xd5, 0x48, 0x8d, 0x00, }, 4, 0, "", "",
+"d5 48 8d 00 \tlea (%rax),%r16",},
+{{0x49, 0x8d, 0x07, }, 3, 0, "", "",
+"49 8d 07 \tlea (%r15),%rax",},
+{{0xd5, 0x18, 0x8d, 0x00, }, 4, 0, "", "",
+"d5 18 8d 00 \tlea (%r16),%rax",},
+{{0x4a, 0x8d, 0x04, 0x3d, 0x00, 0x00, 0x00, 0x00, }, 8, 0, "", "",
+"4a 8d 04 3d 00 00 00 00 \tlea 0x0(,%r15,1),%rax",},
+{{0xd5, 0x28, 0x8d, 0x04, 0x05, 0x00, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"d5 28 8d 04 05 00 00 00 00 \tlea 0x0(,%r16,1),%rax",},
+{{0xd5, 0x1c, 0x03, 0x00, }, 4, 0, "", "",
+"d5 1c 03 00 \tadd (%r16),%r8",},
+{{0xd5, 0x1c, 0x03, 0x38, }, 4, 0, "", "",
+"d5 1c 03 38 \tadd (%r16),%r15",},
+{{0xd5, 0x4a, 0x8b, 0x04, 0x0d, 0x00, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"d5 4a 8b 04 0d 00 00 00 00 \tmov 0x0(,%r9,1),%r16",},
+{{0xd5, 0x4a, 0x8b, 0x04, 0x35, 0x00, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"d5 4a 8b 04 35 00 00 00 00 \tmov 0x0(,%r14,1),%r16",},
+{{0xd5, 0x4d, 0x2b, 0x3a, }, 4, 0, "", "",
+"d5 4d 2b 3a \tsub (%r10),%r31",},
+{{0xd5, 0x4d, 0x2b, 0x7d, 0x00, }, 5, 0, "", "",
+"d5 4d 2b 7d 00 \tsub 0x0(%r13),%r31",},
+{{0xd5, 0x30, 0x8d, 0x44, 0x28, 0x01, }, 6, 0, "", "",
+"d5 30 8d 44 28 01 \tlea 0x1(%r16,%r21,1),%eax",},
+{{0xd5, 0x76, 0x8d, 0x7c, 0x10, 0x01, }, 6, 0, "", "",
+"d5 76 8d 7c 10 01 \tlea 0x1(%r16,%r26,1),%r31d",},
+{{0xd5, 0x12, 0x8d, 0x84, 0x0d, 0x81, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"d5 12 8d 84 0d 81 00 00 00 \tlea 0x81(%r21,%r9,1),%eax",},
+{{0xd5, 0x57, 0x8d, 0xbc, 0x0a, 0x81, 0x00, 0x00, 0x00, }, 9, 0, "", "",
+"d5 57 8d bc 0a 81 00 00 00 \tlea 0x81(%r26,%r9,1),%r31d",},
+{{0xd5, 0x00, 0xa1, 0xef, 0xcd, 0xab, 0x90, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "jmp", "indirect",
+"d5 00 a1 ef cd ab 90 78 56 34 12 \tjmpabs $0x1234567890abcdef",},
+{{0xd5, 0x08, 0x53, }, 3, 0, "", "",
+"d5 08 53 \tpushp %rbx",},
+{{0xd5, 0x18, 0x50, }, 3, 0, "", "",
+"d5 18 50 \tpushp %r16",},
+{{0xd5, 0x19, 0x57, }, 3, 0, "", "",
+"d5 19 57 \tpushp %r31",},
+{{0xd5, 0x19, 0x5f, }, 3, 0, "", "",
+"d5 19 5f \tpopp %r31",},
+{{0xd5, 0x18, 0x58, }, 3, 0, "", "",
+"d5 18 58 \tpopp %r16",},
+{{0xd5, 0x08, 0x5b, }, 3, 0, "", "",
+"d5 08 5b \tpopp %rbx",},
+{{0x62, 0x72, 0x34, 0x00, 0xf7, 0xd2, }, 6, 0, "", "",
+"62 72 34 00 f7 d2 \tbextr %r25d,%edx,%r10d",},
+{{0x62, 0xda, 0x34, 0x00, 0xf7, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 34 00 f7 94 87 23 01 00 00 \tbextr %r25d,0x123(%r31,%rax,4),%edx",},
+{{0x62, 0x52, 0x84, 0x00, 0xf7, 0xdf, }, 6, 0, "", "",
+"62 52 84 00 f7 df \tbextr %r31,%r15,%r11",},
+{{0x62, 0x5a, 0x84, 0x00, 0xf7, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 84 00 f7 bc 87 23 01 00 00 \tbextr %r31,0x123(%r31,%rax,4),%r15",},
+{{0x62, 0xda, 0x6c, 0x08, 0xf3, 0xd9, }, 6, 0, "", "",
+"62 da 6c 08 f3 d9 \tblsi %r25d,%edx",},
+{{0x62, 0xda, 0x84, 0x08, 0xf3, 0xdf, }, 6, 0, "", "",
+"62 da 84 08 f3 df \tblsi %r31,%r15",},
+{{0x62, 0xda, 0x34, 0x00, 0xf3, 0x9c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 34 00 f3 9c 87 23 01 00 00 \tblsi 0x123(%r31,%rax,4),%r25d",},
+{{0x62, 0xda, 0x84, 0x00, 0xf3, 0x9c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 84 00 f3 9c 87 23 01 00 00 \tblsi 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0xda, 0x6c, 0x08, 0xf3, 0xd1, }, 6, 0, "", "",
+"62 da 6c 08 f3 d1 \tblsmsk %r25d,%edx",},
+{{0x62, 0xda, 0x84, 0x08, 0xf3, 0xd7, }, 6, 0, "", "",
+"62 da 84 08 f3 d7 \tblsmsk %r31,%r15",},
+{{0x62, 0xda, 0x34, 0x00, 0xf3, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 34 00 f3 94 87 23 01 00 00 \tblsmsk 0x123(%r31,%rax,4),%r25d",},
+{{0x62, 0xda, 0x84, 0x00, 0xf3, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 84 00 f3 94 87 23 01 00 00 \tblsmsk 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0xda, 0x6c, 0x08, 0xf3, 0xc9, }, 6, 0, "", "",
+"62 da 6c 08 f3 c9 \tblsr %r25d,%edx",},
+{{0x62, 0xda, 0x84, 0x08, 0xf3, 0xcf, }, 6, 0, "", "",
+"62 da 84 08 f3 cf \tblsr %r31,%r15",},
+{{0x62, 0xda, 0x34, 0x00, 0xf3, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 34 00 f3 8c 87 23 01 00 00 \tblsr 0x123(%r31,%rax,4),%r25d",},
+{{0x62, 0xda, 0x84, 0x00, 0xf3, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 84 00 f3 8c 87 23 01 00 00 \tblsr 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0x72, 0x34, 0x00, 0xf5, 0xd2, }, 6, 0, "", "",
+"62 72 34 00 f5 d2 \tbzhi %r25d,%edx,%r10d",},
+{{0x62, 0xda, 0x34, 0x00, 0xf5, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 34 00 f5 94 87 23 01 00 00 \tbzhi %r25d,0x123(%r31,%rax,4),%edx",},
+{{0x62, 0x52, 0x84, 0x00, 0xf5, 0xdf, }, 6, 0, "", "",
+"62 52 84 00 f5 df \tbzhi %r31,%r15,%r11",},
+{{0x62, 0x5a, 0x84, 0x00, 0xf5, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 84 00 f5 bc 87 23 01 00 00 \tbzhi %r31,0x123(%r31,%rax,4),%r15",},
+{{0x62, 0xda, 0x35, 0x00, 0xe6, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e6 94 87 23 01 00 00 \tcmpbexadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe6, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e6 bc 87 23 01 00 00 \tcmpbexadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe2, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e2 94 87 23 01 00 00 \tcmpbxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe2, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e2 bc 87 23 01 00 00 \tcmpbxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xec, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 ec 94 87 23 01 00 00 \tcmplxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xec, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 ec bc 87 23 01 00 00 \tcmplxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe7, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e7 94 87 23 01 00 00 \tcmpnbexadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe7, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e7 bc 87 23 01 00 00 \tcmpnbexadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe3, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e3 94 87 23 01 00 00 \tcmpnbxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe3, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e3 bc 87 23 01 00 00 \tcmpnbxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xef, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 ef 94 87 23 01 00 00 \tcmpnlexadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xef, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 ef bc 87 23 01 00 00 \tcmpnlexadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xed, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 ed 94 87 23 01 00 00 \tcmpnlxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xed, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 ed bc 87 23 01 00 00 \tcmpnlxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe1, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e1 94 87 23 01 00 00 \tcmpnoxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe1, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e1 bc 87 23 01 00 00 \tcmpnoxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xeb, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 eb 94 87 23 01 00 00 \tcmpnpxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xeb, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 eb bc 87 23 01 00 00 \tcmpnpxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe9, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e9 94 87 23 01 00 00 \tcmpnsxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe9, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e9 bc 87 23 01 00 00 \tcmpnsxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe5, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e5 94 87 23 01 00 00 \tcmpnzxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe5, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e5 bc 87 23 01 00 00 \tcmpnzxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe0, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e0 94 87 23 01 00 00 \tcmpoxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe0, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e0 bc 87 23 01 00 00 \tcmpoxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xea, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 ea 94 87 23 01 00 00 \tcmppxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xea, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 ea bc 87 23 01 00 00 \tcmppxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe8, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e8 94 87 23 01 00 00 \tcmpsxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe8, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e8 bc 87 23 01 00 00 \tcmpsxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x35, 0x00, 0xe4, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 e4 94 87 23 01 00 00 \tcmpzxadd %r25d,%edx,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x85, 0x00, 0xe4, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 e4 bc 87 23 01 00 00 \tcmpzxadd %r31,%r15,0x123(%r31,%rax,4)",},
+{{0x62, 0xcc, 0xfc, 0x08, 0xf1, 0xf7, }, 6, 0, "", "",
+"62 cc fc 08 f1 f7 \tcrc32 %r31,%r22",},
+{{0x62, 0xcc, 0xfc, 0x08, 0xf1, 0x37, }, 6, 0, "", "",
+"62 cc fc 08 f1 37 \tcrc32q (%r31),%r22",},
+{{0x62, 0xec, 0xfc, 0x08, 0xf0, 0xcb, }, 6, 0, "", "",
+"62 ec fc 08 f0 cb \tcrc32 %r19b,%r17",},
+{{0x62, 0xec, 0x7c, 0x08, 0xf0, 0xeb, }, 6, 0, "", "",
+"62 ec 7c 08 f0 eb \tcrc32 %r19b,%r21d",},
+{{0x62, 0xfc, 0x7c, 0x08, 0xf0, 0x1b, }, 6, 0, "", "",
+"62 fc 7c 08 f0 1b \tcrc32b (%r19),%ebx",},
+{{0x62, 0xcc, 0x7c, 0x08, 0xf1, 0xff, }, 6, 0, "", "",
+"62 cc 7c 08 f1 ff \tcrc32 %r31d,%r23d",},
+{{0x62, 0xcc, 0x7c, 0x08, 0xf1, 0x3f, }, 6, 0, "", "",
+"62 cc 7c 08 f1 3f \tcrc32l (%r31),%r23d",},
+{{0x62, 0xcc, 0x7d, 0x08, 0xf1, 0xef, }, 6, 0, "", "",
+"62 cc 7d 08 f1 ef \tcrc32 %r31w,%r21d",},
+{{0x62, 0xcc, 0x7d, 0x08, 0xf1, 0x2f, }, 6, 0, "", "",
+"62 cc 7d 08 f1 2f \tcrc32w (%r31),%r21d",},
+{{0x62, 0xe4, 0xfc, 0x08, 0xf1, 0xd0, }, 6, 0, "", "",
+"62 e4 fc 08 f1 d0 \tcrc32 %rax,%r18",},
+{{0x67, 0x62, 0x4c, 0x7f, 0x08, 0xf8, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 12, 0, "", "",
+"67 62 4c 7f 08 f8 8c 87 23 01 00 00 \tenqcmd 0x123(%r31d,%eax,4),%r25d",},
+{{0x62, 0x4c, 0x7f, 0x08, 0xf8, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7f 08 f8 bc 87 23 01 00 00 \tenqcmd 0x123(%r31,%rax,4),%r31",},
+{{0x67, 0x62, 0x4c, 0x7e, 0x08, 0xf8, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 12, 0, "", "",
+"67 62 4c 7e 08 f8 8c 87 23 01 00 00 \tenqcmds 0x123(%r31d,%eax,4),%r25d",},
+{{0x62, 0x4c, 0x7e, 0x08, 0xf8, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7e 08 f8 bc 87 23 01 00 00 \tenqcmds 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0x4c, 0x7e, 0x08, 0xf0, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7e 08 f0 bc 87 23 01 00 00 \tinvept 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0x4c, 0x7e, 0x08, 0xf2, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7e 08 f2 bc 87 23 01 00 00 \tinvpcid 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0x4c, 0x7e, 0x08, 0xf1, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7e 08 f1 bc 87 23 01 00 00 \tinvvpid 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0x61, 0x7d, 0x08, 0x93, 0xcd, }, 6, 0, "", "",
+"62 61 7d 08 93 cd \tkmovb %k5,%r25d",},
+{{0x62, 0xd9, 0x7d, 0x08, 0x91, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 7d 08 91 ac 87 23 01 00 00 \tkmovb %k5,0x123(%r31,%rax,4)",},
+{{0x62, 0xd9, 0x7d, 0x08, 0x92, 0xe9, }, 6, 0, "", "",
+"62 d9 7d 08 92 e9 \tkmovb %r25d,%k5",},
+{{0x62, 0xd9, 0x7d, 0x08, 0x90, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 7d 08 90 ac 87 23 01 00 00 \tkmovb 0x123(%r31,%rax,4),%k5",},
+{{0x62, 0x61, 0x7f, 0x08, 0x93, 0xcd, }, 6, 0, "", "",
+"62 61 7f 08 93 cd \tkmovd %k5,%r25d",},
+{{0x62, 0xd9, 0xfd, 0x08, 0x91, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 fd 08 91 ac 87 23 01 00 00 \tkmovd %k5,0x123(%r31,%rax,4)",},
+{{0x62, 0xd9, 0x7f, 0x08, 0x92, 0xe9, }, 6, 0, "", "",
+"62 d9 7f 08 92 e9 \tkmovd %r25d,%k5",},
+{{0x62, 0xd9, 0xfd, 0x08, 0x90, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 fd 08 90 ac 87 23 01 00 00 \tkmovd 0x123(%r31,%rax,4),%k5",},
+{{0x62, 0x61, 0xff, 0x08, 0x93, 0xfd, }, 6, 0, "", "",
+"62 61 ff 08 93 fd \tkmovq %k5,%r31",},
+{{0x62, 0xd9, 0xfc, 0x08, 0x91, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 fc 08 91 ac 87 23 01 00 00 \tkmovq %k5,0x123(%r31,%rax,4)",},
+{{0x62, 0xd9, 0xff, 0x08, 0x92, 0xef, }, 6, 0, "", "",
+"62 d9 ff 08 92 ef \tkmovq %r31,%k5",},
+{{0x62, 0xd9, 0xfc, 0x08, 0x90, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 fc 08 90 ac 87 23 01 00 00 \tkmovq 0x123(%r31,%rax,4),%k5",},
+{{0x62, 0x61, 0x7c, 0x08, 0x93, 0xcd, }, 6, 0, "", "",
+"62 61 7c 08 93 cd \tkmovw %k5,%r25d",},
+{{0x62, 0xd9, 0x7c, 0x08, 0x91, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 7c 08 91 ac 87 23 01 00 00 \tkmovw %k5,0x123(%r31,%rax,4)",},
+{{0x62, 0xd9, 0x7c, 0x08, 0x92, 0xe9, }, 6, 0, "", "",
+"62 d9 7c 08 92 e9 \tkmovw %r25d,%k5",},
+{{0x62, 0xd9, 0x7c, 0x08, 0x90, 0xac, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d9 7c 08 90 ac 87 23 01 00 00 \tkmovw 0x123(%r31,%rax,4),%k5",},
+{{0x62, 0xda, 0x7c, 0x08, 0x49, 0x84, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 7c 08 49 84 87 23 01 00 00 \tldtilecfg 0x123(%r31,%rax,4)",},
+{{0x62, 0xfc, 0x7d, 0x08, 0x60, 0xc2, }, 6, 0, "", "",
+"62 fc 7d 08 60 c2 \tmovbe %r18w,%ax",},
+{{0x62, 0xd4, 0x7d, 0x08, 0x60, 0xc7, }, 6, 0, "", "",
+"62 d4 7d 08 60 c7 \tmovbe %r15w,%ax",},
+{{0x62, 0xec, 0x7d, 0x08, 0x61, 0x94, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 ec 7d 08 61 94 80 23 01 00 00 \tmovbe %r18w,0x123(%r16,%rax,4)",},
+{{0x62, 0xcc, 0x7d, 0x08, 0x61, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 cc 7d 08 61 94 87 23 01 00 00 \tmovbe %r18w,0x123(%r31,%rax,4)",},
+{{0x62, 0xdc, 0x7c, 0x08, 0x60, 0xd1, }, 6, 0, "", "",
+"62 dc 7c 08 60 d1 \tmovbe %r25d,%edx",},
+{{0x62, 0xd4, 0x7c, 0x08, 0x60, 0xd7, }, 6, 0, "", "",
+"62 d4 7c 08 60 d7 \tmovbe %r15d,%edx",},
+{{0x62, 0x6c, 0x7c, 0x08, 0x61, 0x8c, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 6c 7c 08 61 8c 80 23 01 00 00 \tmovbe %r25d,0x123(%r16,%rax,4)",},
+{{0x62, 0x5c, 0xfc, 0x08, 0x60, 0xff, }, 6, 0, "", "",
+"62 5c fc 08 60 ff \tmovbe %r31,%r15",},
+{{0x62, 0x54, 0xfc, 0x08, 0x60, 0xf8, }, 6, 0, "", "",
+"62 54 fc 08 60 f8 \tmovbe %r8,%r15",},
+{{0x62, 0x6c, 0xfc, 0x08, 0x61, 0xbc, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 6c fc 08 61 bc 80 23 01 00 00 \tmovbe %r31,0x123(%r16,%rax,4)",},
+{{0x62, 0x4c, 0xfc, 0x08, 0x61, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c fc 08 61 bc 87 23 01 00 00 \tmovbe %r31,0x123(%r31,%rax,4)",},
+{{0x62, 0x6c, 0xfc, 0x08, 0x60, 0xbc, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 6c fc 08 60 bc 80 23 01 00 00 \tmovbe 0x123(%r16,%rax,4),%r31",},
+{{0x62, 0xcc, 0x7d, 0x08, 0x60, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 cc 7d 08 60 94 87 23 01 00 00 \tmovbe 0x123(%r31,%rax,4),%r18w",},
+{{0x62, 0x4c, 0x7c, 0x08, 0x60, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7c 08 60 8c 87 23 01 00 00 \tmovbe 0x123(%r31,%rax,4),%r25d",},
+{{0x67, 0x62, 0x4c, 0x7d, 0x08, 0xf8, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 12, 0, "", "",
+"67 62 4c 7d 08 f8 8c 87 23 01 00 00 \tmovdir64b 0x123(%r31d,%eax,4),%r25d",},
+{{0x62, 0x4c, 0x7d, 0x08, 0xf8, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7d 08 f8 bc 87 23 01 00 00 \tmovdir64b 0x123(%r31,%rax,4),%r31",},
+{{0x62, 0x4c, 0x7c, 0x08, 0xf9, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7c 08 f9 8c 87 23 01 00 00 \tmovdiri %r25d,0x123(%r31,%rax,4)",},
+{{0x62, 0x4c, 0xfc, 0x08, 0xf9, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c fc 08 f9 bc 87 23 01 00 00 \tmovdiri %r31,0x123(%r31,%rax,4)",},
+{{0x62, 0x5a, 0x6f, 0x08, 0xf5, 0xd1, }, 6, 0, "", "",
+"62 5a 6f 08 f5 d1 \tpdep %r25d,%edx,%r10d",},
+{{0x62, 0x5a, 0x87, 0x08, 0xf5, 0xdf, }, 6, 0, "", "",
+"62 5a 87 08 f5 df \tpdep %r31,%r15,%r11",},
+{{0x62, 0xda, 0x37, 0x00, 0xf5, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 37 00 f5 94 87 23 01 00 00 \tpdep 0x123(%r31,%rax,4),%r25d,%edx",},
+{{0x62, 0x5a, 0x87, 0x00, 0xf5, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 87 00 f5 bc 87 23 01 00 00 \tpdep 0x123(%r31,%rax,4),%r31,%r15",},
+{{0x62, 0x5a, 0x6e, 0x08, 0xf5, 0xd1, }, 6, 0, "", "",
+"62 5a 6e 08 f5 d1 \tpext %r25d,%edx,%r10d",},
+{{0x62, 0x5a, 0x86, 0x08, 0xf5, 0xdf, }, 6, 0, "", "",
+"62 5a 86 08 f5 df \tpext %r31,%r15,%r11",},
+{{0x62, 0xda, 0x36, 0x00, 0xf5, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 36 00 f5 94 87 23 01 00 00 \tpext 0x123(%r31,%rax,4),%r25d,%edx",},
+{{0x62, 0x5a, 0x86, 0x00, 0xf5, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 86 00 f5 bc 87 23 01 00 00 \tpext 0x123(%r31,%rax,4),%r31,%r15",},
+{{0x62, 0x72, 0x35, 0x00, 0xf7, 0xd2, }, 6, 0, "", "",
+"62 72 35 00 f7 d2 \tshlx %r25d,%edx,%r10d",},
+{{0x62, 0xda, 0x35, 0x00, 0xf7, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 35 00 f7 94 87 23 01 00 00 \tshlx %r25d,0x123(%r31,%rax,4),%edx",},
+{{0x62, 0x52, 0x85, 0x00, 0xf7, 0xdf, }, 6, 0, "", "",
+"62 52 85 00 f7 df \tshlx %r31,%r15,%r11",},
+{{0x62, 0x5a, 0x85, 0x00, 0xf7, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 85 00 f7 bc 87 23 01 00 00 \tshlx %r31,0x123(%r31,%rax,4),%r15",},
+{{0x62, 0x72, 0x37, 0x00, 0xf7, 0xd2, }, 6, 0, "", "",
+"62 72 37 00 f7 d2 \tshrx %r25d,%edx,%r10d",},
+{{0x62, 0xda, 0x37, 0x00, 0xf7, 0x94, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 37 00 f7 94 87 23 01 00 00 \tshrx %r25d,0x123(%r31,%rax,4),%edx",},
+{{0x62, 0x52, 0x87, 0x00, 0xf7, 0xdf, }, 6, 0, "", "",
+"62 52 87 00 f7 df \tshrx %r31,%r15,%r11",},
+{{0x62, 0x5a, 0x87, 0x00, 0xf7, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 5a 87 00 f7 bc 87 23 01 00 00 \tshrx %r31,0x123(%r31,%rax,4),%r15",},
+{{0x62, 0xda, 0x7d, 0x08, 0x49, 0x84, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 7d 08 49 84 87 23 01 00 00 \tsttilecfg 0x123(%r31,%rax,4)",},
+{{0x62, 0xda, 0x7f, 0x08, 0x4b, 0xb4, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 7f 08 4b b4 87 23 01 00 00 \ttileloadd 0x123(%r31,%rax,4),%tmm6",},
+{{0x62, 0xda, 0x7d, 0x08, 0x4b, 0xb4, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 7d 08 4b b4 87 23 01 00 00 \ttileloaddt1 0x123(%r31,%rax,4),%tmm6",},
+{{0x62, 0xda, 0x7e, 0x08, 0x4b, 0xb4, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 da 7e 08 4b b4 87 23 01 00 00 \ttilestored %tmm6,0x123(%r31,%rax,4)",},
+{{0x62, 0xfa, 0x7d, 0x28, 0x1a, 0x18, }, 6, 0, "", "",
+"62 fa 7d 28 1a 18 \tvbroadcastf32x4 (%r16),%ymm3",},
+{{0x62, 0xfa, 0x7d, 0x28, 0x5a, 0x18, }, 6, 0, "", "",
+"62 fa 7d 28 5a 18 \tvbroadcasti32x4 (%r16),%ymm3",},
+{{0x62, 0xfb, 0x7d, 0x28, 0x19, 0x18, 0x01, }, 7, 0, "", "",
+"62 fb 7d 28 19 18 01 \tvextractf32x4 $0x1,%ymm3,(%r16)",},
+{{0x62, 0xfb, 0x7d, 0x28, 0x39, 0x18, 0x01, }, 7, 0, "", "",
+"62 fb 7d 28 39 18 01 \tvextracti32x4 $0x1,%ymm3,(%r16)",},
+{{0x62, 0x7b, 0x65, 0x28, 0x18, 0x00, 0x01, }, 7, 0, "", "",
+"62 7b 65 28 18 00 01 \tvinsertf32x4 $0x1,(%r16),%ymm3,%ymm8",},
+{{0x62, 0x7b, 0x65, 0x28, 0x38, 0x00, 0x01, }, 7, 0, "", "",
+"62 7b 65 28 38 00 01 \tvinserti32x4 $0x1,(%r16),%ymm3,%ymm8",},
+{{0x62, 0xdb, 0xfd, 0x08, 0x09, 0x30, 0x01, }, 7, 0, "", "",
+"62 db fd 08 09 30 01 \tvrndscalepd $0x1,(%r24),%xmm6",},
+{{0x62, 0xdb, 0x7d, 0x08, 0x08, 0x30, 0x02, }, 7, 0, "", "",
+"62 db 7d 08 08 30 02 \tvrndscaleps $0x2,(%r24),%xmm6",},
+{{0x62, 0xdb, 0xcd, 0x08, 0x0b, 0x18, 0x03, }, 7, 0, "", "",
+"62 db cd 08 0b 18 03 \tvrndscalesd $0x3,(%r24),%xmm6,%xmm3",},
+{{0x62, 0xdb, 0x4d, 0x08, 0x0a, 0x18, 0x04, }, 7, 0, "", "",
+"62 db 4d 08 0a 18 04 \tvrndscaless $0x4,(%r24),%xmm6,%xmm3",},
+{{0x62, 0x4c, 0x7c, 0x08, 0x66, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7c 08 66 8c 87 23 01 00 00 \twrssd %r25d,0x123(%r31,%rax,4)",},
+{{0x62, 0x4c, 0xfc, 0x08, 0x66, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c fc 08 66 bc 87 23 01 00 00 \twrssq %r31,0x123(%r31,%rax,4)",},
+{{0x62, 0x4c, 0x7d, 0x08, 0x65, 0x8c, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c 7d 08 65 8c 87 23 01 00 00 \twrussd %r25d,0x123(%r31,%rax,4)",},
+{{0x62, 0x4c, 0xfd, 0x08, 0x65, 0xbc, 0x87, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 4c fd 08 65 bc 87 23 01 00 00 \twrussq %r31,0x123(%r31,%rax,4)",},
+{{0x62, 0xf4, 0x0d, 0x10, 0x81, 0xd0, 0x34, 0x12, }, 8, 0, "", "",
+"62 f4 0d 10 81 d0 34 12 \tadc $0x1234,%ax,%r30w",},
+{{0x62, 0x7c, 0x6c, 0x10, 0x10, 0xf9, }, 6, 0, "", "",
+"62 7c 6c 10 10 f9 \tadc %r15b,%r17b,%r18b",},
+{{0x62, 0x54, 0x6c, 0x10, 0x11, 0x38, }, 6, 0, "", "",
+"62 54 6c 10 11 38 \tadc %r15d,(%r8),%r18d",},
+{{0x62, 0xc4, 0x3c, 0x18, 0x12, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3c 18 12 04 07 \tadc (%r15,%rax,1),%r16b,%r8b",},
+{{0x62, 0xc4, 0x3d, 0x18, 0x13, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3d 18 13 04 07 \tadc (%r15,%rax,1),%r16w,%r8w",},
+{{0x62, 0xfc, 0x5c, 0x10, 0x83, 0x14, 0x83, 0x11, }, 8, 0, "", "",
+"62 fc 5c 10 83 14 83 11 \tadc $0x11,(%r19,%rax,4),%r20d",},
+{{0x62, 0x54, 0x6d, 0x10, 0x66, 0xc7, }, 6, 0, "", "",
+"62 54 6d 10 66 c7 \tadcx %r15d,%r8d,%r18d",},
+{{0x62, 0x14, 0xf9, 0x08, 0x66, 0x04, 0x3f, }, 7, 0, "", "",
+"62 14 f9 08 66 04 3f \tadcx (%r15,%r31,1),%r8",},
+{{0x62, 0x14, 0x69, 0x10, 0x66, 0x04, 0x3f, }, 7, 0, "", "",
+"62 14 69 10 66 04 3f \tadcx (%r15,%r31,1),%r8d,%r18d",},
+{{0x62, 0xf4, 0x0d, 0x10, 0x81, 0xc0, 0x34, 0x12, }, 8, 0, "", "",
+"62 f4 0d 10 81 c0 34 12 \tadd $0x1234,%ax,%r30w",},
+{{0x62, 0xd4, 0xfc, 0x10, 0x81, 0xc7, 0x33, 0x44, 0x34, 0x12, }, 10, 0, "", "",
+"62 d4 fc 10 81 c7 33 44 34 12 \tadd $0x12344433,%r15,%r16",},
+{{0x62, 0xd4, 0x74, 0x10, 0x80, 0xc5, 0x34, }, 7, 0, "", "",
+"62 d4 74 10 80 c5 34 \tadd $0x34,%r13b,%r17b",},
+{{0x62, 0xf4, 0xbc, 0x18, 0x81, 0xc0, 0x11, 0x22, 0x33, 0xf4, }, 10, 0, "", "",
+"62 f4 bc 18 81 c0 11 22 33 f4 \tadd $0xfffffffff4332211,%rax,%r8",},
+{{0x62, 0x44, 0xfc, 0x10, 0x01, 0xf8, }, 6, 0, "", "",
+"62 44 fc 10 01 f8 \tadd %r31,%r8,%r16",},
+{{0x62, 0x44, 0xfc, 0x10, 0x01, 0x38, }, 6, 0, "", "",
+"62 44 fc 10 01 38 \tadd %r31,(%r8),%r16",},
+{{0x62, 0x44, 0xf8, 0x10, 0x01, 0x3c, 0xc0, }, 7, 0, "", "",
+"62 44 f8 10 01 3c c0 \tadd %r31,(%r8,%r16,8),%r16",},
+{{0x62, 0x44, 0x7c, 0x10, 0x00, 0xf8, }, 6, 0, "", "",
+"62 44 7c 10 00 f8 \tadd %r31b,%r8b,%r16b",},
+{{0x62, 0x44, 0x7c, 0x10, 0x01, 0xf8, }, 6, 0, "", "",
+"62 44 7c 10 01 f8 \tadd %r31d,%r8d,%r16d",},
+{{0x62, 0x44, 0x7d, 0x10, 0x01, 0xf8, }, 6, 0, "", "",
+"62 44 7d 10 01 f8 \tadd %r31w,%r8w,%r16w",},
+{{0x62, 0x5c, 0xfc, 0x10, 0x03, 0x07, }, 6, 0, "", "",
+"62 5c fc 10 03 07 \tadd (%r31),%r8,%r16",},
+{{0x62, 0x5c, 0xf8, 0x10, 0x03, 0x84, 0x07, 0x90, 0x90, 0x00, 0x00, }, 11, 0, "", "",
+"62 5c f8 10 03 84 07 90 90 00 00 \tadd 0x9090(%r31,%r16,1),%r8,%r16",},
+{{0x62, 0x44, 0x7c, 0x10, 0x00, 0xf8, }, 6, 0, "", "",
+"62 44 7c 10 00 f8 \tadd %r31b,%r8b,%r16b",},
+{{0x62, 0x44, 0x7c, 0x10, 0x01, 0xf8, }, 6, 0, "", "",
+"62 44 7c 10 01 f8 \tadd %r31d,%r8d,%r16d",},
+{{0x62, 0xfc, 0x5c, 0x10, 0x83, 0x04, 0x83, 0x11, }, 8, 0, "", "",
+"62 fc 5c 10 83 04 83 11 \tadd $0x11,(%r19,%rax,4),%r20d",},
+{{0x62, 0x44, 0xfc, 0x10, 0x01, 0xf8, }, 6, 0, "", "",
+"62 44 fc 10 01 f8 \tadd %r31,%r8,%r16",},
+{{0x62, 0xd4, 0xfc, 0x10, 0x81, 0x04, 0x8f, 0x33, 0x44, 0x34, 0x12, }, 11, 0, "", "",
+"62 d4 fc 10 81 04 8f 33 44 34 12 \tadd $0x12344433,(%r15,%rcx,4),%r16",},
+{{0x62, 0x44, 0x7d, 0x10, 0x01, 0xf8, }, 6, 0, "", "",
+"62 44 7d 10 01 f8 \tadd %r31w,%r8w,%r16w",},
+{{0x62, 0x54, 0x6e, 0x10, 0x66, 0xc7, }, 6, 0, "", "",
+"62 54 6e 10 66 c7 \tadox %r15d,%r8d,%r18d",},
+{{0x62, 0x5c, 0xfc, 0x10, 0x03, 0xc7, }, 6, 0, "", "",
+"62 5c fc 10 03 c7 \tadd %r31,%r8,%r16",},
+{{0x62, 0x44, 0xfc, 0x10, 0x01, 0xf8, }, 6, 0, "", "",
+"62 44 fc 10 01 f8 \tadd %r31,%r8,%r16",},
+{{0x62, 0x14, 0xfa, 0x08, 0x66, 0x04, 0x3f, }, 7, 0, "", "",
+"62 14 fa 08 66 04 3f \tadox (%r15,%r31,1),%r8",},
+{{0x62, 0x14, 0x6a, 0x10, 0x66, 0x04, 0x3f, }, 7, 0, "", "",
+"62 14 6a 10 66 04 3f \tadox (%r15,%r31,1),%r8d,%r18d",},
+{{0x62, 0xf4, 0x0d, 0x10, 0x81, 0xe0, 0x34, 0x12, }, 8, 0, "", "",
+"62 f4 0d 10 81 e0 34 12 \tand $0x1234,%ax,%r30w",},
+{{0x62, 0x7c, 0x6c, 0x10, 0x20, 0xf9, }, 6, 0, "", "",
+"62 7c 6c 10 20 f9 \tand %r15b,%r17b,%r18b",},
+{{0x62, 0x54, 0x6c, 0x10, 0x21, 0x38, }, 6, 0, "", "",
+"62 54 6c 10 21 38 \tand %r15d,(%r8),%r18d",},
+{{0x62, 0xc4, 0x3c, 0x18, 0x22, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3c 18 22 04 07 \tand (%r15,%rax,1),%r16b,%r8b",},
+{{0x62, 0xc4, 0x3d, 0x18, 0x23, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3d 18 23 04 07 \tand (%r15,%rax,1),%r16w,%r8w",},
+{{0x62, 0xfc, 0x5c, 0x10, 0x83, 0x24, 0x83, 0x11, }, 8, 0, "", "",
+"62 fc 5c 10 83 24 83 11 \tand $0x11,(%r19,%rax,4),%r20d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x47, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 47 90 90 90 90 90 \tcmova -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x43, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 43 90 90 90 90 90 \tcmovae -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x42, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 42 90 90 90 90 90 \tcmovb -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x46, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 46 90 90 90 90 90 \tcmovbe -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x44, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 44 90 90 90 90 90 \tcmove -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x4f, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 4f 90 90 90 90 90 \tcmovg -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x4d, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 4d 90 90 90 90 90 \tcmovge -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x4c, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 4c 90 90 90 90 90 \tcmovl -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x4e, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 4e 90 90 90 90 90 \tcmovle -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x45, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 45 90 90 90 90 90 \tcmovne -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x41, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 41 90 90 90 90 90 \tcmovno -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x4b, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 4b 90 90 90 90 90 \tcmovnp -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x49, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 49 90 90 90 90 90 \tcmovns -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x40, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 40 90 90 90 90 90 \tcmovo -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x4a, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 4a 90 90 90 90 90 \tcmovp -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0x48, 0x90, 0x90, 0x90, 0x90, 0x90, }, 11, 0, "", "",
+"67 62 f4 3c 18 48 90 90 90 90 90 \tcmovs -0x6f6f6f70(%eax),%edx,%r8d",},
+{{0x62, 0xf4, 0xf4, 0x10, 0xff, 0xc8, }, 6, 0, "", "",
+"62 f4 f4 10 ff c8 \tdec %rax,%r17",},
+{{0x62, 0x9c, 0x3c, 0x18, 0xfe, 0x0c, 0x27, }, 7, 0, "", "",
+"62 9c 3c 18 fe 0c 27 \tdec (%r31,%r12,1),%r8b",},
+{{0x62, 0xb4, 0xb0, 0x10, 0xaf, 0x94, 0xf8, 0x09, 0x09, 0x00, 0x00, }, 11, 0, "", "",
+"62 b4 b0 10 af 94 f8 09 09 00 00 \timul 0x909(%rax,%r31,8),%rdx,%r25",},
+{{0x67, 0x62, 0xf4, 0x3c, 0x18, 0xaf, 0x90, 0x09, 0x09, 0x09, 0x00, }, 11, 0, "", "",
+"67 62 f4 3c 18 af 90 09 09 09 00 \timul 0x90909(%eax),%edx,%r8d",},
+{{0x62, 0xdc, 0xfc, 0x10, 0xff, 0xc7, }, 6, 0, "", "",
+"62 dc fc 10 ff c7 \tinc %r31,%r16",},
+{{0x62, 0xdc, 0xbc, 0x18, 0xff, 0xc7, }, 6, 0, "", "",
+"62 dc bc 18 ff c7 \tinc %r31,%r8",},
+{{0x62, 0xf4, 0xe4, 0x18, 0xff, 0xc0, }, 6, 0, "", "",
+"62 f4 e4 18 ff c0 \tinc %rax,%rbx",},
+{{0x62, 0xf4, 0xf4, 0x10, 0xf7, 0xd8, }, 6, 0, "", "",
+"62 f4 f4 10 f7 d8 \tneg %rax,%r17",},
+{{0x62, 0x9c, 0x3c, 0x18, 0xf6, 0x1c, 0x27, }, 7, 0, "", "",
+"62 9c 3c 18 f6 1c 27 \tneg (%r31,%r12,1),%r8b",},
+{{0x62, 0xf4, 0xf4, 0x10, 0xf7, 0xd0, }, 6, 0, "", "",
+"62 f4 f4 10 f7 d0 \tnot %rax,%r17",},
+{{0x62, 0x9c, 0x3c, 0x18, 0xf6, 0x14, 0x27, }, 7, 0, "", "",
+"62 9c 3c 18 f6 14 27 \tnot (%r31,%r12,1),%r8b",},
+{{0x62, 0xf4, 0x0d, 0x10, 0x81, 0xc8, 0x34, 0x12, }, 8, 0, "", "",
+"62 f4 0d 10 81 c8 34 12 \tor $0x1234,%ax,%r30w",},
+{{0x62, 0x7c, 0x6c, 0x10, 0x08, 0xf9, }, 6, 0, "", "",
+"62 7c 6c 10 08 f9 \tor %r15b,%r17b,%r18b",},
+{{0x62, 0x54, 0x6c, 0x10, 0x09, 0x38, }, 6, 0, "", "",
+"62 54 6c 10 09 38 \tor %r15d,(%r8),%r18d",},
+{{0x62, 0xc4, 0x3c, 0x18, 0x0a, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3c 18 0a 04 07 \tor (%r15,%rax,1),%r16b,%r8b",},
+{{0x62, 0xc4, 0x3d, 0x18, 0x0b, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3d 18 0b 04 07 \tor (%r15,%rax,1),%r16w,%r8w",},
+{{0x62, 0xfc, 0x5c, 0x10, 0x83, 0x0c, 0x83, 0x11, }, 8, 0, "", "",
+"62 fc 5c 10 83 0c 83 11 \tor $0x11,(%r19,%rax,4),%r20d",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xd4, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 d4 02 \trcl $0x2,%r12b,%r31b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xd0, }, 6, 0, "", "",
+"62 fc 3c 18 d2 d0 \trcl %cl,%r16b,%r8b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x10, }, 6, 0, "", "",
+"62 f4 04 10 d0 10 \trcl $1,(%rax),%r31b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x10, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 10 02 \trcl $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x10, }, 6, 0, "", "",
+"62 f4 05 10 d1 10 \trcl $1,(%rax),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x14, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 14 83 \trcl %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xdc, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 dc 02 \trcr $0x2,%r12b,%r31b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xd8, }, 6, 0, "", "",
+"62 fc 3c 18 d2 d8 \trcr %cl,%r16b,%r8b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x18, }, 6, 0, "", "",
+"62 f4 04 10 d0 18 \trcr $1,(%rax),%r31b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x18, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 18 02 \trcr $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x18, }, 6, 0, "", "",
+"62 f4 05 10 d1 18 \trcr $1,(%rax),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x1c, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 1c 83 \trcr %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xc4, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 c4 02 \trol $0x2,%r12b,%r31b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xc0, }, 6, 0, "", "",
+"62 fc 3c 18 d2 c0 \trol %cl,%r16b,%r8b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x00, }, 6, 0, "", "",
+"62 f4 04 10 d0 00 \trol $1,(%rax),%r31b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x00, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 00 02 \trol $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x00, }, 6, 0, "", "",
+"62 f4 05 10 d1 00 \trol $1,(%rax),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x04, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 04 83 \trol %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xcc, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 cc 02 \tror $0x2,%r12b,%r31b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xc8, }, 6, 0, "", "",
+"62 fc 3c 18 d2 c8 \tror %cl,%r16b,%r8b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x08, }, 6, 0, "", "",
+"62 f4 04 10 d0 08 \tror $1,(%rax),%r31b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x08, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 08 02 \tror $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x08, }, 6, 0, "", "",
+"62 f4 05 10 d1 08 \tror $1,(%rax),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x0c, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 0c 83 \tror %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xfc, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 fc 02 \tsar $0x2,%r12b,%r31b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xf8, }, 6, 0, "", "",
+"62 fc 3c 18 d2 f8 \tsar %cl,%r16b,%r8b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x38, }, 6, 0, "", "",
+"62 f4 04 10 d0 38 \tsar $1,(%rax),%r31b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x38, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 38 02 \tsar $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x38, }, 6, 0, "", "",
+"62 f4 05 10 d1 38 \tsar $1,(%rax),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x3c, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 3c 83 \tsar %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xf4, 0x0d, 0x10, 0x81, 0xd8, 0x34, 0x12, }, 8, 0, "", "",
+"62 f4 0d 10 81 d8 34 12 \tsbb $0x1234,%ax,%r30w",},
+{{0x62, 0x7c, 0x6c, 0x10, 0x18, 0xf9, }, 6, 0, "", "",
+"62 7c 6c 10 18 f9 \tsbb %r15b,%r17b,%r18b",},
+{{0x62, 0x54, 0x6c, 0x10, 0x19, 0x38, }, 6, 0, "", "",
+"62 54 6c 10 19 38 \tsbb %r15d,(%r8),%r18d",},
+{{0x62, 0xc4, 0x3c, 0x18, 0x1a, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3c 18 1a 04 07 \tsbb (%r15,%rax,1),%r16b,%r8b",},
+{{0x62, 0xc4, 0x3d, 0x18, 0x1b, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3d 18 1b 04 07 \tsbb (%r15,%rax,1),%r16w,%r8w",},
+{{0x62, 0xfc, 0x5c, 0x10, 0x83, 0x1c, 0x83, 0x11, }, 8, 0, "", "",
+"62 fc 5c 10 83 1c 83 11 \tsbb $0x11,(%r19,%rax,4),%r20d",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xe4, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 e4 02 \tshl $0x2,%r12b,%r31b",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xe4, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 e4 02 \tshl $0x2,%r12b,%r31b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xe0, }, 6, 0, "", "",
+"62 fc 3c 18 d2 e0 \tshl %cl,%r16b,%r8b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xe0, }, 6, 0, "", "",
+"62 fc 3c 18 d2 e0 \tshl %cl,%r16b,%r8b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x20, }, 6, 0, "", "",
+"62 f4 04 10 d0 20 \tshl $1,(%rax),%r31b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x20, }, 6, 0, "", "",
+"62 f4 04 10 d0 20 \tshl $1,(%rax),%r31b",},
+{{0x62, 0x74, 0x84, 0x10, 0x24, 0x20, 0x01, }, 7, 0, "", "",
+"62 74 84 10 24 20 01 \tshld $0x1,%r12,(%rax),%r31",},
+{{0x62, 0x74, 0x04, 0x10, 0x24, 0x38, 0x02, }, 7, 0, "", "",
+"62 74 04 10 24 38 02 \tshld $0x2,%r15d,(%rax),%r31d",},
+{{0x62, 0x54, 0x05, 0x10, 0x24, 0xc4, 0x02, }, 7, 0, "", "",
+"62 54 05 10 24 c4 02 \tshld $0x2,%r8w,%r12w,%r31w",},
+{{0x62, 0x7c, 0xbc, 0x18, 0xa5, 0xe0, }, 6, 0, "", "",
+"62 7c bc 18 a5 e0 \tshld %cl,%r12,%r16,%r8",},
+{{0x62, 0x7c, 0x05, 0x10, 0xa5, 0x2c, 0x83, }, 7, 0, "", "",
+"62 7c 05 10 a5 2c 83 \tshld %cl,%r13w,(%r19,%rax,4),%r31w",},
+{{0x62, 0x74, 0x05, 0x10, 0xa5, 0x08, }, 6, 0, "", "",
+"62 74 05 10 a5 08 \tshld %cl,%r9w,(%rax),%r31w",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x20, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 20 02 \tshl $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x20, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 20 02 \tshl $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x20, }, 6, 0, "", "",
+"62 f4 05 10 d1 20 \tshl $1,(%rax),%r31w",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x20, }, 6, 0, "", "",
+"62 f4 05 10 d1 20 \tshl $1,(%rax),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x24, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 24 83 \tshl %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x24, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 24 83 \tshl %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xd4, 0x04, 0x10, 0xc0, 0xec, 0x02, }, 7, 0, "", "",
+"62 d4 04 10 c0 ec 02 \tshr $0x2,%r12b,%r31b",},
+{{0x62, 0xfc, 0x3c, 0x18, 0xd2, 0xe8, }, 6, 0, "", "",
+"62 fc 3c 18 d2 e8 \tshr %cl,%r16b,%r8b",},
+{{0x62, 0xf4, 0x04, 0x10, 0xd0, 0x28, }, 6, 0, "", "",
+"62 f4 04 10 d0 28 \tshr $1,(%rax),%r31b",},
+{{0x62, 0x74, 0x84, 0x10, 0x2c, 0x20, 0x01, }, 7, 0, "", "",
+"62 74 84 10 2c 20 01 \tshrd $0x1,%r12,(%rax),%r31",},
+{{0x62, 0x74, 0x04, 0x10, 0x2c, 0x38, 0x02, }, 7, 0, "", "",
+"62 74 04 10 2c 38 02 \tshrd $0x2,%r15d,(%rax),%r31d",},
+{{0x62, 0x54, 0x05, 0x10, 0x2c, 0xc4, 0x02, }, 7, 0, "", "",
+"62 54 05 10 2c c4 02 \tshrd $0x2,%r8w,%r12w,%r31w",},
+{{0x62, 0x7c, 0xbc, 0x18, 0xad, 0xe0, }, 6, 0, "", "",
+"62 7c bc 18 ad e0 \tshrd %cl,%r12,%r16,%r8",},
+{{0x62, 0x7c, 0x05, 0x10, 0xad, 0x2c, 0x83, }, 7, 0, "", "",
+"62 7c 05 10 ad 2c 83 \tshrd %cl,%r13w,(%r19,%rax,4),%r31w",},
+{{0x62, 0x74, 0x05, 0x10, 0xad, 0x08, }, 6, 0, "", "",
+"62 74 05 10 ad 08 \tshrd %cl,%r9w,(%rax),%r31w",},
+{{0x62, 0xf4, 0x04, 0x10, 0xc1, 0x28, 0x02, }, 7, 0, "", "",
+"62 f4 04 10 c1 28 02 \tshr $0x2,(%rax),%r31d",},
+{{0x62, 0xf4, 0x05, 0x10, 0xd1, 0x28, }, 6, 0, "", "",
+"62 f4 05 10 d1 28 \tshr $1,(%rax),%r31w",},
+{{0x62, 0xfc, 0x05, 0x10, 0xd3, 0x2c, 0x83, }, 7, 0, "", "",
+"62 fc 05 10 d3 2c 83 \tshr %cl,(%r19,%rax,4),%r31w",},
+{{0x62, 0xf4, 0x0d, 0x10, 0x81, 0xe8, 0x34, 0x12, }, 8, 0, "", "",
+"62 f4 0d 10 81 e8 34 12 \tsub $0x1234,%ax,%r30w",},
+{{0x62, 0x7c, 0x6c, 0x10, 0x28, 0xf9, }, 6, 0, "", "",
+"62 7c 6c 10 28 f9 \tsub %r15b,%r17b,%r18b",},
+{{0x62, 0x54, 0x6c, 0x10, 0x29, 0x38, }, 6, 0, "", "",
+"62 54 6c 10 29 38 \tsub %r15d,(%r8),%r18d",},
+{{0x62, 0xc4, 0x3c, 0x18, 0x2a, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3c 18 2a 04 07 \tsub (%r15,%rax,1),%r16b,%r8b",},
+{{0x62, 0xc4, 0x3d, 0x18, 0x2b, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3d 18 2b 04 07 \tsub (%r15,%rax,1),%r16w,%r8w",},
+{{0x62, 0xfc, 0x5c, 0x10, 0x83, 0x2c, 0x83, 0x11, }, 8, 0, "", "",
+"62 fc 5c 10 83 2c 83 11 \tsub $0x11,(%r19,%rax,4),%r20d",},
+{{0x62, 0xf4, 0x0d, 0x10, 0x81, 0xf0, 0x34, 0x12, }, 8, 0, "", "",
+"62 f4 0d 10 81 f0 34 12 \txor $0x1234,%ax,%r30w",},
+{{0x62, 0x7c, 0x6c, 0x10, 0x30, 0xf9, }, 6, 0, "", "",
+"62 7c 6c 10 30 f9 \txor %r15b,%r17b,%r18b",},
+{{0x62, 0x54, 0x6c, 0x10, 0x31, 0x38, }, 6, 0, "", "",
+"62 54 6c 10 31 38 \txor %r15d,(%r8),%r18d",},
+{{0x62, 0xc4, 0x3c, 0x18, 0x32, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3c 18 32 04 07 \txor (%r15,%rax,1),%r16b,%r8b",},
+{{0x62, 0xc4, 0x3d, 0x18, 0x33, 0x04, 0x07, }, 7, 0, "", "",
+"62 c4 3d 18 33 04 07 \txor (%r15,%rax,1),%r16w,%r8w",},
+{{0x62, 0xfc, 0x5c, 0x10, 0x83, 0x34, 0x83, 0x11, }, 8, 0, "", "",
+"62 fc 5c 10 83 34 83 11 \txor $0x11,(%r19,%rax,4),%r20d",},
+{{0x62, 0xf4, 0x3c, 0x1c, 0x00, 0xda, }, 6, 0, "", "",
+"62 f4 3c 1c 00 da \t{nf} add %bl,%dl,%r8b",},
+{{0x62, 0xf4, 0x35, 0x1c, 0x01, 0xd0, }, 6, 0, "", "",
+"62 f4 35 1c 01 d0 \t{nf} add %dx,%ax,%r9w",},
+{{0x62, 0xd4, 0x6c, 0x1c, 0x02, 0x9c, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 6c 1c 02 9c 80 23 01 00 00 \t{nf} add 0x123(%r8,%rax,4),%bl,%dl",},
+{{0x62, 0xd4, 0x7d, 0x1c, 0x03, 0x94, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 7d 1c 03 94 80 23 01 00 00 \t{nf} add 0x123(%r8,%rax,4),%dx,%ax",},
+{{0x62, 0xf4, 0x3c, 0x1c, 0x08, 0xda, }, 6, 0, "", "",
+"62 f4 3c 1c 08 da \t{nf} or %bl,%dl,%r8b",},
+{{0x62, 0xf4, 0x35, 0x1c, 0x09, 0xd0, }, 6, 0, "", "",
+"62 f4 35 1c 09 d0 \t{nf} or %dx,%ax,%r9w",},
+{{0x62, 0xd4, 0x6c, 0x1c, 0x0a, 0x9c, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 6c 1c 0a 9c 80 23 01 00 00 \t{nf} or 0x123(%r8,%rax,4),%bl,%dl",},
+{{0x62, 0xd4, 0x7d, 0x1c, 0x0b, 0x94, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 7d 1c 0b 94 80 23 01 00 00 \t{nf} or 0x123(%r8,%rax,4),%dx,%ax",},
+{{0x62, 0xf4, 0x3c, 0x1c, 0x20, 0xda, }, 6, 0, "", "",
+"62 f4 3c 1c 20 da \t{nf} and %bl,%dl,%r8b",},
+{{0x62, 0xf4, 0x35, 0x1c, 0x21, 0xd0, }, 6, 0, "", "",
+"62 f4 35 1c 21 d0 \t{nf} and %dx,%ax,%r9w",},
+{{0x62, 0xd4, 0x6c, 0x1c, 0x22, 0x9c, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 6c 1c 22 9c 80 23 01 00 00 \t{nf} and 0x123(%r8,%rax,4),%bl,%dl",},
+{{0x62, 0xd4, 0x7d, 0x1c, 0x23, 0x94, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 7d 1c 23 94 80 23 01 00 00 \t{nf} and 0x123(%r8,%rax,4),%dx,%ax",},
+{{0x62, 0xf4, 0x35, 0x1c, 0x24, 0xd0, 0x7b, }, 7, 0, "", "",
+"62 f4 35 1c 24 d0 7b \t{nf} shld $0x7b,%dx,%ax,%r9w",},
+{{0x62, 0xf4, 0x3c, 0x1c, 0x28, 0xda, }, 6, 0, "", "",
+"62 f4 3c 1c 28 da \t{nf} sub %bl,%dl,%r8b",},
+{{0x62, 0xf4, 0x35, 0x1c, 0x29, 0xd0, }, 6, 0, "", "",
+"62 f4 35 1c 29 d0 \t{nf} sub %dx,%ax,%r9w",},
+{{0x62, 0xd4, 0x6c, 0x1c, 0x2a, 0x9c, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 6c 1c 2a 9c 80 23 01 00 00 \t{nf} sub 0x123(%r8,%rax,4),%bl,%dl",},
+{{0x62, 0xd4, 0x7d, 0x1c, 0x2b, 0x94, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 7d 1c 2b 94 80 23 01 00 00 \t{nf} sub 0x123(%r8,%rax,4),%dx,%ax",},
+{{0x62, 0xf4, 0x35, 0x1c, 0x2c, 0xd0, 0x7b, }, 7, 0, "", "",
+"62 f4 35 1c 2c d0 7b \t{nf} shrd $0x7b,%dx,%ax,%r9w",},
+{{0x62, 0xf4, 0x3c, 0x1c, 0x30, 0xda, }, 6, 0, "", "",
+"62 f4 3c 1c 30 da \t{nf} xor %bl,%dl,%r8b",},
+{{0x62, 0x4c, 0xfc, 0x0c, 0x31, 0xff, }, 6, 0, "", "",
+"62 4c fc 0c 31 ff \t{nf} xor %r31,%r31",},
+{{0x62, 0xd4, 0x6c, 0x1c, 0x32, 0x9c, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 6c 1c 32 9c 80 23 01 00 00 \t{nf} xor 0x123(%r8,%rax,4),%bl,%dl",},
+{{0x62, 0xd4, 0x7d, 0x1c, 0x33, 0x94, 0x80, 0x23, 0x01, 0x00, 0x00, }, 11, 0, "", "",
+"62 d4 7d 1c 33 94 80 23 01 00 00 \t{nf} xor 0x123(%r8,%rax,4),%dx,%ax",},
+{{0x62, 0x54, 0xfc, 0x0c, 0x69, 0xf9, 0x90, 0xff, 0x00, 0x00, }, 10, 0, "", "",
+"62 54 fc 0c 69 f9 90 ff 00 00 \t{nf} imul $0xff90,%r9,%r15",},
+{{0x62, 0x54, 0xfc, 0x0c, 0x6b, 0xf9, 0x7b, }, 7, 0, "", "",
+"62 54 fc 0c 6b f9 7b \t{nf} imul $0x7b,%r9,%r15",},
+{{0x62, 0xf4, 0x6c, 0x1c, 0x80, 0xf3, 0x7b, }, 7, 0, "", "",
+"62 f4 6c 1c 80 f3 7b \t{nf} xor $0x7b,%bl,%dl",},
+{{0x62, 0xf4, 0x7d, 0x1c, 0x83, 0xf2, 0x7b, }, 7, 0, "", "",
+"62 f4 7d 1c 83 f2 7b \t{nf} xor $0x7b,%dx,%ax",},
+{{0x62, 0x44, 0xfc, 0x0c, 0x88, 0xf9, }, 6, 0, "", "",
+"62 44 fc 0c 88 f9 \t{nf} popcnt %r9,%r31",},
+{{0x62, 0xf4, 0x35, 0x1c, 0xa5, 0xd0, }, 6, 0, "", "",
+"62 f4 35 1c a5 d0 \t{nf} shld %cl,%dx,%ax,%r9w",},
+{{0x62, 0xf4, 0x35, 0x1c, 0xad, 0xd0, }, 6, 0, "", "",
+"62 f4 35 1c ad d0 \t{nf} shrd %cl,%dx,%ax,%r9w",},
+{{0x62, 0x44, 0xa4, 0x1c, 0xaf, 0xf9, }, 6, 0, "", "",
+"62 44 a4 1c af f9 \t{nf} imul %r9,%r31,%r11",},
+{{0x62, 0xf4, 0x6c, 0x1c, 0xc0, 0xfb, 0x7b, }, 7, 0, "", "",
+"62 f4 6c 1c c0 fb 7b \t{nf} sar $0x7b,%bl,%dl",},
+{{0x62, 0xf4, 0x7d, 0x1c, 0xc1, 0xfa, 0x7b, }, 7, 0, "", "",
+"62 f4 7d 1c c1 fa 7b \t{nf} sar $0x7b,%dx,%ax",},
+{{0x62, 0xf4, 0x6c, 0x1c, 0xd0, 0xfb, }, 6, 0, "", "",
+"62 f4 6c 1c d0 fb \t{nf} sar $1,%bl,%dl",},
+{{0x62, 0xf4, 0x7d, 0x1c, 0xd1, 0xfa, }, 6, 0, "", "",
+"62 f4 7d 1c d1 fa \t{nf} sar $1,%dx,%ax",},
+{{0x62, 0xf4, 0x6c, 0x1c, 0xd2, 0xfb, }, 6, 0, "", "",
+"62 f4 6c 1c d2 fb \t{nf} sar %cl,%bl,%dl",},
+{{0x62, 0xf4, 0x7d, 0x1c, 0xd3, 0xfa, }, 6, 0, "", "",
+"62 f4 7d 1c d3 fa \t{nf} sar %cl,%dx,%ax",},
+{{0x62, 0x52, 0x84, 0x04, 0xf2, 0xd9, }, 6, 0, "", "",
+"62 52 84 04 f2 d9 \t{nf} andn %r9,%r31,%r11",},
+{{0x62, 0xd2, 0x84, 0x04, 0xf3, 0xd9, }, 6, 0, "", "",
+"62 d2 84 04 f3 d9 \t{nf} blsi %r9,%r31",},
+{{0x62, 0x44, 0xfc, 0x0c, 0xf4, 0xf9, }, 6, 0, "", "",
+"62 44 fc 0c f4 f9 \t{nf} tzcnt %r9,%r31",},
+{{0x62, 0x44, 0xfc, 0x0c, 0xf5, 0xf9, }, 6, 0, "", "",
+"62 44 fc 0c f5 f9 \t{nf} lzcnt %r9,%r31",},
+{{0x62, 0xf4, 0x7c, 0x0c, 0xf6, 0xfb, }, 6, 0, "", "",
+"62 f4 7c 0c f6 fb \t{nf} idiv %bl",},
+{{0x62, 0xf4, 0x7d, 0x0c, 0xf7, 0xfa, }, 6, 0, "", "",
+"62 f4 7d 0c f7 fa \t{nf} idiv %dx",},
+{{0x62, 0xf4, 0x6c, 0x1c, 0xfe, 0xcb, }, 6, 0, "", "",
+"62 f4 6c 1c fe cb \t{nf} dec %bl,%dl",},
+{{0x62, 0xf4, 0x7d, 0x1c, 0xff, 0xca, }, 6, 0, "", "",
+"62 f4 7d 1c ff ca \t{nf} dec %dx,%ax",},
+{{0xf3, 0x0f, 0x38, 0xdc, 0xd1, }, 5, 0, "", "",
+"f3 0f 38 dc d1 \tloadiwkey %xmm1,%xmm2",},
+{{0xf3, 0x0f, 0x38, 0xfa, 0xd0, }, 5, 0, "", "",
+"f3 0f 38 fa d0 \tencodekey128 %eax,%edx",},
+{{0xf3, 0x0f, 0x38, 0xfb, 0xd0, }, 5, 0, "", "",
+"f3 0f 38 fb d0 \tencodekey256 %eax,%edx",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xdc, 0x5a, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 dc 5a 77 \taesenc128kl 0x77(%edx),%xmm3",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xde, 0x5a, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 de 5a 77 \taesenc256kl 0x77(%edx),%xmm3",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xdd, 0x5a, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 dd 5a 77 \taesdec128kl 0x77(%edx),%xmm3",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xdf, 0x5a, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 df 5a 77 \taesdec256kl 0x77(%edx),%xmm3",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xd8, 0x42, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 d8 42 77 \taesencwide128kl 0x77(%edx)",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xd8, 0x52, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 d8 52 77 \taesencwide256kl 0x77(%edx)",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xd8, 0x4a, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 d8 4a 77 \taesdecwide128kl 0x77(%edx)",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xd8, 0x5a, 0x77, }, 7, 0, "", "",
+"67 f3 0f 38 d8 5a 77 \taesdecwide256kl 0x77(%edx)",},
+{{0x67, 0x0f, 0x38, 0xfc, 0x08, }, 5, 0, "", "",
+"67 0f 38 fc 08 \taadd %ecx,(%eax)",},
+{{0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 9, 0, "", "",
+"0f 38 fc 14 25 78 56 34 12 \taadd %edx,0x12345678",},
+{{0x67, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"67 0f 38 fc 94 c8 78 56 34 12 \taadd %edx,0x12345678(%eax,%ecx,8)",},
+{{0x67, 0x66, 0x0f, 0x38, 0xfc, 0x08, }, 6, 0, "", "",
+"67 66 0f 38 fc 08 \taand %ecx,(%eax)",},
+{{0x66, 0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"66 0f 38 fc 14 25 78 56 34 12 \taand %edx,0x12345678",},
+{{0x67, 0x66, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"67 66 0f 38 fc 94 c8 78 56 34 12 \taand %edx,0x12345678(%eax,%ecx,8)",},
+{{0x67, 0xf2, 0x0f, 0x38, 0xfc, 0x08, }, 6, 0, "", "",
+"67 f2 0f 38 fc 08 \taor %ecx,(%eax)",},
+{{0xf2, 0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"f2 0f 38 fc 14 25 78 56 34 12 \taor %edx,0x12345678",},
+{{0x67, 0xf2, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"67 f2 0f 38 fc 94 c8 78 56 34 12 \taor %edx,0x12345678(%eax,%ecx,8)",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xfc, 0x08, }, 6, 0, "", "",
+"67 f3 0f 38 fc 08 \taxor %ecx,(%eax)",},
+{{0xf3, 0x0f, 0x38, 0xfc, 0x14, 0x25, 0x78, 0x56, 0x34, 0x12, }, 10, 0, "", "",
+"f3 0f 38 fc 14 25 78 56 34 12 \taxor %edx,0x12345678",},
+{{0x67, 0xf3, 0x0f, 0x38, 0xfc, 0x94, 0xc8, 0x78, 0x56, 0x34, 0x12, }, 11, 0, "", "",
+"67 f3 0f 38 fc 94 c8 78 56 34 12 \taxor %edx,0x12345678(%eax,%ecx,8)",},
+{{0x67, 0xc4, 0xe2, 0x7a, 0xb1, 0x31, }, 6, 0, "", "",
+"67 c4 e2 7a b1 31 \tvbcstnebf162ps (%ecx),%xmm6",},
+{{0x67, 0xc4, 0xe2, 0x79, 0xb1, 0x31, }, 6, 0, "", "",
+"67 c4 e2 79 b1 31 \tvbcstnesh2ps (%ecx),%xmm6",},
+{{0x67, 0xc4, 0xe2, 0x7a, 0xb0, 0x31, }, 6, 0, "", "",
+"67 c4 e2 7a b0 31 \tvcvtneebf162ps (%ecx),%xmm6",},
+{{0x67, 0xc4, 0xe2, 0x79, 0xb0, 0x31, }, 6, 0, "", "",
+"67 c4 e2 79 b0 31 \tvcvtneeph2ps (%ecx),%xmm6",},
+{{0x67, 0xc4, 0xe2, 0x7b, 0xb0, 0x31, }, 6, 0, "", "",
+"67 c4 e2 7b b0 31 \tvcvtneobf162ps (%ecx),%xmm6",},
+{{0x67, 0xc4, 0xe2, 0x78, 0xb0, 0x31, }, 6, 0, "", "",
+"67 c4 e2 78 b0 31 \tvcvtneoph2ps (%ecx),%xmm6",},
+{{0x62, 0xf2, 0x7e, 0x08, 0x72, 0xf1, }, 6, 0, "", "",
+"62 f2 7e 08 72 f1 \tvcvtneps2bf16 %xmm1,%xmm6",},
+{{0xc4, 0xe2, 0x6b, 0x50, 0xd9, }, 5, 0, "", "",
+"c4 e2 6b 50 d9 \tvpdpbssd %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6b, 0x51, 0xd9, }, 5, 0, "", "",
+"c4 e2 6b 51 d9 \tvpdpbssds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0x50, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a 50 d9 \tvpdpbsud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0x51, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a 51 d9 \tvpdpbsuds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0x50, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 50 d9 \tvpdpbuud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0x51, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 51 d9 \tvpdpbuuds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0xd2, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a d2 d9 \tvpdpwsud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0xd3, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a d3 d9 \tvpdpwsuds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x69, 0xd2, 0xd9, }, 5, 0, "", "",
+"c4 e2 69 d2 d9 \tvpdpwusd %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x69, 0xd3, 0xd9, }, 5, 0, "", "",
+"c4 e2 69 d3 d9 \tvpdpwusds %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0xd2, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 d2 d9 \tvpdpwuud %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x68, 0xd3, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 d3 d9 \tvpdpwuuds %xmm1,%xmm2,%xmm3",},
+{{0x62, 0xf2, 0xed, 0x08, 0xb5, 0xd9, }, 6, 0, "", "",
+"62 f2 ed 08 b5 d9 \tvpmadd52huq %xmm1,%xmm2,%xmm3",},
+{{0x62, 0xf2, 0xed, 0x08, 0xb4, 0xd9, }, 6, 0, "", "",
+"62 f2 ed 08 b4 d9 \tvpmadd52luq %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x7f, 0xcc, 0xd1, }, 5, 0, "", "",
+"c4 e2 7f cc d1 \tvsha512msg1 %xmm1,%ymm2",},
+{{0xc4, 0xe2, 0x7f, 0xcd, 0xd1, }, 5, 0, "", "",
+"c4 e2 7f cd d1 \tvsha512msg2 %ymm1,%ymm2",},
+{{0xc4, 0xe2, 0x6f, 0xcb, 0xd9, }, 5, 0, "", "",
+"c4 e2 6f cb d9 \tvsha512rnds2 %xmm1,%ymm2,%ymm3",},
+{{0xc4, 0xe2, 0x68, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 68 da d9 \tvsm3msg1 %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x69, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 69 da d9 \tvsm3msg2 %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe3, 0x69, 0xde, 0xd9, 0xa1, }, 6, 0, "", "",
+"c4 e3 69 de d9 a1 \tvsm3rnds2 $0xa1,%xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6a, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 6a da d9 \tvsm4key4 %xmm1,%xmm2,%xmm3",},
+{{0xc4, 0xe2, 0x6b, 0xda, 0xd9, }, 5, 0, "", "",
+"c4 e2 6b da d9 \tvsm4rnds4 %xmm1,%xmm2,%xmm3",},
+{{0x67, 0x0f, 0x0d, 0x00, }, 4, 0, "", "",
+"67 0f 0d 00 \tprefetch (%eax)",},
+{{0x67, 0x0f, 0x18, 0x08, }, 4, 0, "", "",
+"67 0f 18 08 \tprefetcht0 (%eax)",},
+{{0x67, 0x0f, 0x18, 0x10, }, 4, 0, "", "",
+"67 0f 18 10 \tprefetcht1 (%eax)",},
+{{0x67, 0x0f, 0x18, 0x18, }, 4, 0, "", "",
+"67 0f 18 18 \tprefetcht2 (%eax)",},
+{{0x67, 0x0f, 0x18, 0x00, }, 4, 0, "", "",
+"67 0f 18 00 \tprefetchnta (%eax)",},
+{{0x0f, 0x01, 0xc6, }, 3, 0, "", "",
+"0f 01 c6 \twrmsrns",},
{{0xf3, 0x0f, 0x3a, 0xf0, 0xc0, 0x00, }, 6, 0, "", "",
"f3 0f 3a f0 c0 00 \threset $0x0",},
{{0x0f, 0x01, 0xe8, }, 3, 0, "", "",
diff --git a/tools/perf/arch/x86/tests/insn-x86-dat-src.c b/tools/perf/arch/x86/tests/insn-x86-dat-src.c
index a391464c8dee..f55505c75d51 100644
--- a/tools/perf/arch/x86/tests/insn-x86-dat-src.c
+++ b/tools/perf/arch/x86/tests/insn-x86-dat-src.c
@@ -2628,6 +2628,512 @@ int main(void)
asm volatile("vucomish 0x12345678(%rax,%rcx,8), %xmm1");
asm volatile("vucomish 0x12345678(%eax,%ecx,8), %xmm1");
+ /* Key Locker */
+
+ asm volatile("loadiwkey %xmm1, %xmm2");
+ asm volatile("encodekey128 %eax, %edx");
+ asm volatile("encodekey256 %eax, %edx");
+ asm volatile("aesenc128kl 0x77(%rdx), %xmm3");
+ asm volatile("aesenc256kl 0x77(%rdx), %xmm3");
+ asm volatile("aesdec128kl 0x77(%rdx), %xmm3");
+ asm volatile("aesdec256kl 0x77(%rdx), %xmm3");
+ asm volatile("aesencwide128kl 0x77(%rdx)");
+ asm volatile("aesencwide256kl 0x77(%rdx)");
+ asm volatile("aesdecwide128kl 0x77(%rdx)");
+ asm volatile("aesdecwide256kl 0x77(%rdx)");
+
+ /* Remote Atomic Operations */
+
+ asm volatile("aadd %ecx,(%rax)");
+ asm volatile("aadd %edx,(%r8)");
+ asm volatile("aadd %edx,0x12345678(%rax,%rcx,8)");
+ asm volatile("aadd %edx,0x12345678(%r8,%rcx,8)");
+ asm volatile("aadd %rcx,(%rax)");
+ asm volatile("aadd %rdx,(%r8)");
+ asm volatile("aadd %rdx,(0x12345678)");
+ asm volatile("aadd %rdx,0x12345678(%rax,%rcx,8)");
+ asm volatile("aadd %rdx,0x12345678(%r8,%rcx,8)");
+
+ asm volatile("aand %ecx,(%rax)");
+ asm volatile("aand %edx,(%r8)");
+ asm volatile("aand %edx,0x12345678(%rax,%rcx,8)");
+ asm volatile("aand %edx,0x12345678(%r8,%rcx,8)");
+ asm volatile("aand %rcx,(%rax)");
+ asm volatile("aand %rdx,(%r8)");
+ asm volatile("aand %rdx,(0x12345678)");
+ asm volatile("aand %rdx,0x12345678(%rax,%rcx,8)");
+ asm volatile("aand %rdx,0x12345678(%r8,%rcx,8)");
+
+ asm volatile("aor %ecx,(%rax)");
+ asm volatile("aor %edx,(%r8)");
+ asm volatile("aor %edx,0x12345678(%rax,%rcx,8)");
+ asm volatile("aor %edx,0x12345678(%r8,%rcx,8)");
+ asm volatile("aor %rcx,(%rax)");
+ asm volatile("aor %rdx,(%r8)");
+ asm volatile("aor %rdx,(0x12345678)");
+ asm volatile("aor %rdx,0x12345678(%rax,%rcx,8)");
+ asm volatile("aor %rdx,0x12345678(%r8,%rcx,8)");
+
+ asm volatile("axor %ecx,(%rax)");
+ asm volatile("axor %edx,(%r8)");
+ asm volatile("axor %edx,0x12345678(%rax,%rcx,8)");
+ asm volatile("axor %edx,0x12345678(%r8,%rcx,8)");
+ asm volatile("axor %rcx,(%rax)");
+ asm volatile("axor %rdx,(%r8)");
+ asm volatile("axor %rdx,(0x12345678)");
+ asm volatile("axor %rdx,0x12345678(%rax,%rcx,8)");
+ asm volatile("axor %rdx,0x12345678(%r8,%rcx,8)");
+
+ /* VEX CMPxxXADD */
+
+ asm volatile("cmpbexadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpbxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmplexadd %ebx,%ecx,(%r9)");
+ asm volatile("cmplxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnbexadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnbxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnlexadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnlxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnoxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnpxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnsxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpnzxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpoxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmppxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpsxadd %ebx,%ecx,(%r9)");
+ asm volatile("cmpzxadd %ebx,%ecx,(%r9)");
+
+ /* Pre-fetch */
+
+ asm volatile("prefetch (%rax)");
+ asm volatile("prefetcht0 (%rax)");
+ asm volatile("prefetcht1 (%rax)");
+ asm volatile("prefetcht2 (%rax)");
+ asm volatile("prefetchnta (%rax)");
+ asm volatile("prefetchit0 0x12345678(%rip)");
+ asm volatile("prefetchit1 0x12345678(%rip)");
+
+ /* MSR List */
+
+ asm volatile("rdmsrlist");
+ asm volatile("wrmsrlist");
+
+ /* User Read/Write MSR */
+
+ asm volatile("urdmsr %rdx,%rax");
+ asm volatile("urdmsr %rdx,%r22");
+ asm volatile("urdmsr $0x7f,%r12");
+ asm volatile("uwrmsr %rax,%rdx");
+ asm volatile("uwrmsr %r22,%rdx");
+ asm volatile("uwrmsr %r12,$0x7f");
+
+ /* AVX NE Convert */
+
+ asm volatile("vbcstnebf162ps (%rcx),%xmm6");
+ asm volatile("vbcstnesh2ps (%rcx),%xmm6");
+ asm volatile("vcvtneebf162ps (%rcx),%xmm6");
+ asm volatile("vcvtneeph2ps (%rcx),%xmm6");
+ asm volatile("vcvtneobf162ps (%rcx),%xmm6");
+ asm volatile("vcvtneoph2ps (%rcx),%xmm6");
+ asm volatile("vcvtneps2bf16 %xmm1,%xmm6");
+
+ /* FRED */
+
+ asm volatile("erets"); /* Expecting: erets indirect 0 */
+ asm volatile("eretu"); /* Expecting: eretu indirect 0 */
+
+ /* AMX Complex */
+
+ asm volatile("tcmmimfp16ps %tmm1,%tmm2,%tmm3");
+ asm volatile("tcmmrlfp16ps %tmm1,%tmm2,%tmm3");
+
+ /* AMX FP16 */
+
+ asm volatile("tdpfp16ps %tmm1,%tmm2,%tmm3");
+
+ /* REX2 */
+
+ asm volatile("test $0x5, %r18b");
+ asm volatile("test $0x5, %r18d");
+ asm volatile("test $0x5, %r18");
+ asm volatile("test $0x5, %r18w");
+ asm volatile("imull %eax, %r14d");
+ asm volatile("imull %eax, %r17d");
+ asm volatile("punpckldq (%r18), %mm2");
+ asm volatile("leal (%rax), %r16d");
+ asm volatile("leal (%rax), %r31d");
+ asm volatile("leal (,%r16), %eax");
+ asm volatile("leal (,%r31), %eax");
+ asm volatile("leal (%r16), %eax");
+ asm volatile("leal (%r31), %eax");
+ asm volatile("leaq (%rax), %r15");
+ asm volatile("leaq (%rax), %r16");
+ asm volatile("leaq (%r15), %rax");
+ asm volatile("leaq (%r16), %rax");
+ asm volatile("leaq (,%r15), %rax");
+ asm volatile("leaq (,%r16), %rax");
+ asm volatile("add (%r16), %r8");
+ asm volatile("add (%r16), %r15");
+ asm volatile("mov (,%r9), %r16");
+ asm volatile("mov (,%r14), %r16");
+ asm volatile("sub (%r10), %r31");
+ asm volatile("sub (%r13), %r31");
+ asm volatile("leal 1(%r16, %r21), %eax");
+ asm volatile("leal 1(%r16, %r26), %r31d");
+ asm volatile("leal 129(%r21, %r9), %eax");
+ asm volatile("leal 129(%r26, %r9), %r31d");
+ /*
+ * Have to use .byte for jmpabs because gas does not support the
+ * mnemonic for some reason, but then it also gets the source line wrong
+ * with .byte, so the following is a workaround.
+ */
+ asm volatile(""); /* Expecting: jmp indirect 0 */
+ asm volatile(".byte 0xd5, 0x00, 0xa1, 0xef, 0xcd, 0xab, 0x90, 0x78, 0x56, 0x34, 0x12");
+ asm volatile("pushp %rbx");
+ asm volatile("pushp %r16");
+ asm volatile("pushp %r31");
+ asm volatile("popp %r31");
+ asm volatile("popp %r16");
+ asm volatile("popp %rbx");
+
+ /* APX */
+
+ asm volatile("bextr %r25d,%edx,%r10d");
+ asm volatile("bextr %r25d,0x123(%r31,%rax,4),%edx");
+ asm volatile("bextr %r31,%r15,%r11");
+ asm volatile("bextr %r31,0x123(%r31,%rax,4),%r15");
+ asm volatile("blsi %r25d,%edx");
+ asm volatile("blsi %r31,%r15");
+ asm volatile("blsi 0x123(%r31,%rax,4),%r25d");
+ asm volatile("blsi 0x123(%r31,%rax,4),%r31");
+ asm volatile("blsmsk %r25d,%edx");
+ asm volatile("blsmsk %r31,%r15");
+ asm volatile("blsmsk 0x123(%r31,%rax,4),%r25d");
+ asm volatile("blsmsk 0x123(%r31,%rax,4),%r31");
+ asm volatile("blsr %r25d,%edx");
+ asm volatile("blsr %r31,%r15");
+ asm volatile("blsr 0x123(%r31,%rax,4),%r25d");
+ asm volatile("blsr 0x123(%r31,%rax,4),%r31");
+ asm volatile("bzhi %r25d,%edx,%r10d");
+ asm volatile("bzhi %r25d,0x123(%r31,%rax,4),%edx");
+ asm volatile("bzhi %r31,%r15,%r11");
+ asm volatile("bzhi %r31,0x123(%r31,%rax,4),%r15");
+ asm volatile("cmpbexadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpbexadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpbxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpbxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmplxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmplxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnbexadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnbexadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnbxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnbxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnlexadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnlexadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnlxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnlxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnoxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnoxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnpxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnpxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnsxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnsxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpnzxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpnzxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpoxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpoxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmppxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmppxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpsxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpsxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("cmpzxadd %r25d,%edx,0x123(%r31,%rax,4)");
+ asm volatile("cmpzxadd %r31,%r15,0x123(%r31,%rax,4)");
+ asm volatile("crc32q %r31, %r22");
+ asm volatile("crc32q (%r31), %r22");
+ asm volatile("crc32b %r19b, %r17");
+ asm volatile("crc32b %r19b, %r21d");
+ asm volatile("crc32b (%r19),%ebx");
+ asm volatile("crc32l %r31d, %r23d");
+ asm volatile("crc32l (%r31), %r23d");
+ asm volatile("crc32w %r31w, %r21d");
+ asm volatile("crc32w (%r31),%r21d");
+ asm volatile("crc32 %rax, %r18");
+ asm volatile("enqcmd 0x123(%r31d,%eax,4),%r25d");
+ asm volatile("enqcmd 0x123(%r31,%rax,4),%r31");
+ asm volatile("enqcmds 0x123(%r31d,%eax,4),%r25d");
+ asm volatile("enqcmds 0x123(%r31,%rax,4),%r31");
+ asm volatile("invept 0x123(%r31,%rax,4),%r31");
+ asm volatile("invpcid 0x123(%r31,%rax,4),%r31");
+ asm volatile("invvpid 0x123(%r31,%rax,4),%r31");
+ asm volatile("kmovb %k5,%r25d");
+ asm volatile("kmovb %k5,0x123(%r31,%rax,4)");
+ asm volatile("kmovb %r25d,%k5");
+ asm volatile("kmovb 0x123(%r31,%rax,4),%k5");
+ asm volatile("kmovd %k5,%r25d");
+ asm volatile("kmovd %k5,0x123(%r31,%rax,4)");
+ asm volatile("kmovd %r25d,%k5");
+ asm volatile("kmovd 0x123(%r31,%rax,4),%k5");
+ asm volatile("kmovq %k5,%r31");
+ asm volatile("kmovq %k5,0x123(%r31,%rax,4)");
+ asm volatile("kmovq %r31,%k5");
+ asm volatile("kmovq 0x123(%r31,%rax,4),%k5");
+ asm volatile("kmovw %k5,%r25d");
+ asm volatile("kmovw %k5,0x123(%r31,%rax,4)");
+ asm volatile("kmovw %r25d,%k5");
+ asm volatile("kmovw 0x123(%r31,%rax,4),%k5");
+ asm volatile("ldtilecfg 0x123(%r31,%rax,4)");
+ asm volatile("movbe %r18w,%ax");
+ asm volatile("movbe %r15w,%ax");
+ asm volatile("movbe %r18w,0x123(%r16,%rax,4)");
+ asm volatile("movbe %r18w,0x123(%r31,%rax,4)");
+ asm volatile("movbe %r25d,%edx");
+ asm volatile("movbe %r15d,%edx");
+ asm volatile("movbe %r25d,0x123(%r16,%rax,4)");
+ asm volatile("movbe %r31,%r15");
+ asm volatile("movbe %r8,%r15");
+ asm volatile("movbe %r31,0x123(%r16,%rax,4)");
+ asm volatile("movbe %r31,0x123(%r31,%rax,4)");
+ asm volatile("movbe 0x123(%r16,%rax,4),%r31");
+ asm volatile("movbe 0x123(%r31,%rax,4),%r18w");
+ asm volatile("movbe 0x123(%r31,%rax,4),%r25d");
+ asm volatile("movdir64b 0x123(%r31d,%eax,4),%r25d");
+ asm volatile("movdir64b 0x123(%r31,%rax,4),%r31");
+ asm volatile("movdiri %r25d,0x123(%r31,%rax,4)");
+ asm volatile("movdiri %r31,0x123(%r31,%rax,4)");
+ asm volatile("pdep %r25d,%edx,%r10d");
+ asm volatile("pdep %r31,%r15,%r11");
+ asm volatile("pdep 0x123(%r31,%rax,4),%r25d,%edx");
+ asm volatile("pdep 0x123(%r31,%rax,4),%r31,%r15");
+ asm volatile("pext %r25d,%edx,%r10d");
+ asm volatile("pext %r31,%r15,%r11");
+ asm volatile("pext 0x123(%r31,%rax,4),%r25d,%edx");
+ asm volatile("pext 0x123(%r31,%rax,4),%r31,%r15");
+ asm volatile("shlx %r25d,%edx,%r10d");
+ asm volatile("shlx %r25d,0x123(%r31,%rax,4),%edx");
+ asm volatile("shlx %r31,%r15,%r11");
+ asm volatile("shlx %r31,0x123(%r31,%rax,4),%r15");
+ asm volatile("shrx %r25d,%edx,%r10d");
+ asm volatile("shrx %r25d,0x123(%r31,%rax,4),%edx");
+ asm volatile("shrx %r31,%r15,%r11");
+ asm volatile("shrx %r31,0x123(%r31,%rax,4),%r15");
+ asm volatile("sttilecfg 0x123(%r31,%rax,4)");
+ asm volatile("tileloadd 0x123(%r31,%rax,4),%tmm6");
+ asm volatile("tileloaddt1 0x123(%r31,%rax,4),%tmm6");
+ asm volatile("tilestored %tmm6,0x123(%r31,%rax,4)");
+ asm volatile("vbroadcastf128 (%r16),%ymm3");
+ asm volatile("vbroadcasti128 (%r16),%ymm3");
+ asm volatile("vextractf128 $1,%ymm3,(%r16)");
+ asm volatile("vextracti128 $1,%ymm3,(%r16)");
+ asm volatile("vinsertf128 $1,(%r16),%ymm3,%ymm8");
+ asm volatile("vinserti128 $1,(%r16),%ymm3,%ymm8");
+ asm volatile("vroundpd $1,(%r24),%xmm6");
+ asm volatile("vroundps $2,(%r24),%xmm6");
+ asm volatile("vroundsd $3,(%r24),%xmm6,%xmm3");
+ asm volatile("vroundss $4,(%r24),%xmm6,%xmm3");
+ asm volatile("wrssd %r25d,0x123(%r31,%rax,4)");
+ asm volatile("wrssq %r31,0x123(%r31,%rax,4)");
+ asm volatile("wrussd %r25d,0x123(%r31,%rax,4)");
+ asm volatile("wrussq %r31,0x123(%r31,%rax,4)");
+
+ /* APX new data destination */
+
+ asm volatile("adc $0x1234,%ax,%r30w");
+ asm volatile("adc %r15b,%r17b,%r18b");
+ asm volatile("adc %r15d,(%r8),%r18d");
+ asm volatile("adc (%r15,%rax,1),%r16b,%r8b");
+ asm volatile("adc (%r15,%rax,1),%r16w,%r8w");
+ asm volatile("adcl $0x11,(%r19,%rax,4),%r20d");
+ asm volatile("adcx %r15d,%r8d,%r18d");
+ asm volatile("adcx (%r15,%r31,1),%r8");
+ asm volatile("adcx (%r15,%r31,1),%r8d,%r18d");
+ asm volatile("add $0x1234,%ax,%r30w");
+ asm volatile("add $0x12344433,%r15,%r16");
+ asm volatile("add $0x34,%r13b,%r17b");
+ asm volatile("add $0xfffffffff4332211,%rax,%r8");
+ asm volatile("add %r31,%r8,%r16");
+ asm volatile("add %r31,(%r8),%r16");
+ asm volatile("add %r31,(%r8,%r16,8),%r16");
+ asm volatile("add %r31b,%r8b,%r16b");
+ asm volatile("add %r31d,%r8d,%r16d");
+ asm volatile("add %r31w,%r8w,%r16w");
+ asm volatile("add (%r31),%r8,%r16");
+ asm volatile("add 0x9090(%r31,%r16,1),%r8,%r16");
+ asm volatile("addb %r31b,%r8b,%r16b");
+ asm volatile("addl %r31d,%r8d,%r16d");
+ asm volatile("addl $0x11,(%r19,%rax,4),%r20d");
+ asm volatile("addq %r31,%r8,%r16");
+ asm volatile("addq $0x12344433,(%r15,%rcx,4),%r16");
+ asm volatile("addw %r31w,%r8w,%r16w");
+ asm volatile("adox %r15d,%r8d,%r18d");
+ asm volatile("{load} add %r31,%r8,%r16");
+ asm volatile("{store} add %r31,%r8,%r16");
+ asm volatile("adox (%r15,%r31,1),%r8");
+ asm volatile("adox (%r15,%r31,1),%r8d,%r18d");
+ asm volatile("and $0x1234,%ax,%r30w");
+ asm volatile("and %r15b,%r17b,%r18b");
+ asm volatile("and %r15d,(%r8),%r18d");
+ asm volatile("and (%r15,%rax,1),%r16b,%r8b");
+ asm volatile("and (%r15,%rax,1),%r16w,%r8w");
+ asm volatile("andl $0x11,(%r19,%rax,4),%r20d");
+ asm volatile("cmova 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovae 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovb 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovbe 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmove 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovg 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovge 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovl 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovle 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovne 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovno 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovnp 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovns 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovo 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovp 0x90909090(%eax),%edx,%r8d");
+ asm volatile("cmovs 0x90909090(%eax),%edx,%r8d");
+ asm volatile("dec %rax,%r17");
+ asm volatile("decb (%r31,%r12,1),%r8b");
+ asm volatile("imul 0x909(%rax,%r31,8),%rdx,%r25");
+ asm volatile("imul 0x90909(%eax),%edx,%r8d");
+ asm volatile("inc %r31,%r16");
+ asm volatile("inc %r31,%r8");
+ asm volatile("inc %rax,%rbx");
+ asm volatile("neg %rax,%r17");
+ asm volatile("negb (%r31,%r12,1),%r8b");
+ asm volatile("not %rax,%r17");
+ asm volatile("notb (%r31,%r12,1),%r8b");
+ asm volatile("or $0x1234,%ax,%r30w");
+ asm volatile("or %r15b,%r17b,%r18b");
+ asm volatile("or %r15d,(%r8),%r18d");
+ asm volatile("or (%r15,%rax,1),%r16b,%r8b");
+ asm volatile("or (%r15,%rax,1),%r16w,%r8w");
+ asm volatile("orl $0x11,(%r19,%rax,4),%r20d");
+ asm volatile("rcl $0x2,%r12b,%r31b");
+ asm volatile("rcl %cl,%r16b,%r8b");
+ asm volatile("rclb $0x1,(%rax),%r31b");
+ asm volatile("rcll $0x2,(%rax),%r31d");
+ asm volatile("rclw $0x1,(%rax),%r31w");
+ asm volatile("rclw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("rcr $0x2,%r12b,%r31b");
+ asm volatile("rcr %cl,%r16b,%r8b");
+ asm volatile("rcrb $0x1,(%rax),%r31b");
+ asm volatile("rcrl $0x2,(%rax),%r31d");
+ asm volatile("rcrw $0x1,(%rax),%r31w");
+ asm volatile("rcrw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("rol $0x2,%r12b,%r31b");
+ asm volatile("rol %cl,%r16b,%r8b");
+ asm volatile("rolb $0x1,(%rax),%r31b");
+ asm volatile("roll $0x2,(%rax),%r31d");
+ asm volatile("rolw $0x1,(%rax),%r31w");
+ asm volatile("rolw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("ror $0x2,%r12b,%r31b");
+ asm volatile("ror %cl,%r16b,%r8b");
+ asm volatile("rorb $0x1,(%rax),%r31b");
+ asm volatile("rorl $0x2,(%rax),%r31d");
+ asm volatile("rorw $0x1,(%rax),%r31w");
+ asm volatile("rorw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("sar $0x2,%r12b,%r31b");
+ asm volatile("sar %cl,%r16b,%r8b");
+ asm volatile("sarb $0x1,(%rax),%r31b");
+ asm volatile("sarl $0x2,(%rax),%r31d");
+ asm volatile("sarw $0x1,(%rax),%r31w");
+ asm volatile("sarw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("sbb $0x1234,%ax,%r30w");
+ asm volatile("sbb %r15b,%r17b,%r18b");
+ asm volatile("sbb %r15d,(%r8),%r18d");
+ asm volatile("sbb (%r15,%rax,1),%r16b,%r8b");
+ asm volatile("sbb (%r15,%rax,1),%r16w,%r8w");
+ asm volatile("sbbl $0x11,(%r19,%rax,4),%r20d");
+ asm volatile("shl $0x2,%r12b,%r31b");
+ asm volatile("shl $0x2,%r12b,%r31b");
+ asm volatile("shl %cl,%r16b,%r8b");
+ asm volatile("shl %cl,%r16b,%r8b");
+ asm volatile("shlb $0x1,(%rax),%r31b");
+ asm volatile("shlb $0x1,(%rax),%r31b");
+ asm volatile("shld $0x1,%r12,(%rax),%r31");
+ asm volatile("shld $0x2,%r15d,(%rax),%r31d");
+ asm volatile("shld $0x2,%r8w,%r12w,%r31w");
+ asm volatile("shld %cl,%r12,%r16,%r8");
+ asm volatile("shld %cl,%r13w,(%r19,%rax,4),%r31w");
+ asm volatile("shld %cl,%r9w,(%rax),%r31w");
+ asm volatile("shll $0x2,(%rax),%r31d");
+ asm volatile("shll $0x2,(%rax),%r31d");
+ asm volatile("shlw $0x1,(%rax),%r31w");
+ asm volatile("shlw $0x1,(%rax),%r31w");
+ asm volatile("shlw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("shlw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("shr $0x2,%r12b,%r31b");
+ asm volatile("shr %cl,%r16b,%r8b");
+ asm volatile("shrb $0x1,(%rax),%r31b");
+ asm volatile("shrd $0x1,%r12,(%rax),%r31");
+ asm volatile("shrd $0x2,%r15d,(%rax),%r31d");
+ asm volatile("shrd $0x2,%r8w,%r12w,%r31w");
+ asm volatile("shrd %cl,%r12,%r16,%r8");
+ asm volatile("shrd %cl,%r13w,(%r19,%rax,4),%r31w");
+ asm volatile("shrd %cl,%r9w,(%rax),%r31w");
+ asm volatile("shrl $0x2,(%rax),%r31d");
+ asm volatile("shrw $0x1,(%rax),%r31w");
+ asm volatile("shrw %cl,(%r19,%rax,4),%r31w");
+ asm volatile("sub $0x1234,%ax,%r30w");
+ asm volatile("sub %r15b,%r17b,%r18b");
+ asm volatile("sub %r15d,(%r8),%r18d");
+ asm volatile("sub (%r15,%rax,1),%r16b,%r8b");
+ asm volatile("sub (%r15,%rax,1),%r16w,%r8w");
+ asm volatile("subl $0x11,(%r19,%rax,4),%r20d");
+ asm volatile("xor $0x1234,%ax,%r30w");
+ asm volatile("xor %r15b,%r17b,%r18b");
+ asm volatile("xor %r15d,(%r8),%r18d");
+ asm volatile("xor (%r15,%rax,1),%r16b,%r8b");
+ asm volatile("xor (%r15,%rax,1),%r16w,%r8w");
+ asm volatile("xorl $0x11,(%r19,%rax,4),%r20d");
+
+ /* APX suppress status flags */
+
+ asm volatile("{nf} add %bl,%dl,%r8b");
+ asm volatile("{nf} add %dx,%ax,%r9w");
+ asm volatile("{nf} add 0x123(%r8,%rax,4),%bl,%dl");
+ asm volatile("{nf} add 0x123(%r8,%rax,4),%dx,%ax");
+ asm volatile("{nf} or %bl,%dl,%r8b");
+ asm volatile("{nf} or %dx,%ax,%r9w");
+ asm volatile("{nf} or 0x123(%r8,%rax,4),%bl,%dl");
+ asm volatile("{nf} or 0x123(%r8,%rax,4),%dx,%ax");
+ asm volatile("{nf} and %bl,%dl,%r8b");
+ asm volatile("{nf} and %dx,%ax,%r9w");
+ asm volatile("{nf} and 0x123(%r8,%rax,4),%bl,%dl");
+ asm volatile("{nf} and 0x123(%r8,%rax,4),%dx,%ax");
+ asm volatile("{nf} shld $0x7b,%dx,%ax,%r9w");
+ asm volatile("{nf} sub %bl,%dl,%r8b");
+ asm volatile("{nf} sub %dx,%ax,%r9w");
+ asm volatile("{nf} sub 0x123(%r8,%rax,4),%bl,%dl");
+ asm volatile("{nf} sub 0x123(%r8,%rax,4),%dx,%ax");
+ asm volatile("{nf} shrd $0x7b,%dx,%ax,%r9w");
+ asm volatile("{nf} xor %bl,%dl,%r8b");
+ asm volatile("{nf} xor %r31,%r31");
+ asm volatile("{nf} xor 0x123(%r8,%rax,4),%bl,%dl");
+ asm volatile("{nf} xor 0x123(%r8,%rax,4),%dx,%ax");
+ asm volatile("{nf} imul $0xff90,%r9,%r15");
+ asm volatile("{nf} imul $0x7b,%r9,%r15");
+ asm volatile("{nf} xor $0x7b,%bl,%dl");
+ asm volatile("{nf} xor $0x7b,%dx,%ax");
+ asm volatile("{nf} popcnt %r9,%r31");
+ asm volatile("{nf} shld %cl,%dx,%ax,%r9w");
+ asm volatile("{nf} shrd %cl,%dx,%ax,%r9w");
+ asm volatile("{nf} imul %r9,%r31,%r11");
+ asm volatile("{nf} sar $0x7b,%bl,%dl");
+ asm volatile("{nf} sar $0x7b,%dx,%ax");
+ asm volatile("{nf} sar $1,%bl,%dl");
+ asm volatile("{nf} sar $1,%dx,%ax");
+ asm volatile("{nf} sar %cl,%bl,%dl");
+ asm volatile("{nf} sar %cl,%dx,%ax");
+ asm volatile("{nf} andn %r9,%r31,%r11");
+ asm volatile("{nf} blsi %r9,%r31");
+ asm volatile("{nf} tzcnt %r9,%r31");
+ asm volatile("{nf} lzcnt %r9,%r31");
+ asm volatile("{nf} idiv %bl");
+ asm volatile("{nf} idiv %dx");
+ asm volatile("{nf} dec %bl,%dl");
+ asm volatile("{nf} dec %dx,%ax");
+
#else /* #ifdef __x86_64__ */
/* bound r32, mem (same op code as EVEX prefix) */
@@ -4848,6 +5354,97 @@ int main(void)
#endif /* #ifndef __x86_64__ */
+ /* Key Locker */
+
+ asm volatile(" loadiwkey %xmm1, %xmm2");
+ asm volatile(" encodekey128 %eax, %edx");
+ asm volatile(" encodekey256 %eax, %edx");
+ asm volatile(" aesenc128kl 0x77(%edx), %xmm3");
+ asm volatile(" aesenc256kl 0x77(%edx), %xmm3");
+ asm volatile(" aesdec128kl 0x77(%edx), %xmm3");
+ asm volatile(" aesdec256kl 0x77(%edx), %xmm3");
+ asm volatile(" aesencwide128kl 0x77(%edx)");
+ asm volatile(" aesencwide256kl 0x77(%edx)");
+ asm volatile(" aesdecwide128kl 0x77(%edx)");
+ asm volatile(" aesdecwide256kl 0x77(%edx)");
+
+ /* Remote Atomic Operations */
+
+ asm volatile("aadd %ecx,(%eax)");
+ asm volatile("aadd %edx,(0x12345678)");
+ asm volatile("aadd %edx,0x12345678(%eax,%ecx,8)");
+
+ asm volatile("aand %ecx,(%eax)");
+ asm volatile("aand %edx,(0x12345678)");
+ asm volatile("aand %edx,0x12345678(%eax,%ecx,8)");
+
+ asm volatile("aor %ecx,(%eax)");
+ asm volatile("aor %edx,(0x12345678)");
+ asm volatile("aor %edx,0x12345678(%eax,%ecx,8)");
+
+ asm volatile("axor %ecx,(%eax)");
+ asm volatile("axor %edx,(0x12345678)");
+ asm volatile("axor %edx,0x12345678(%eax,%ecx,8)");
+
+ /* AVX NE Convert */
+
+ asm volatile("vbcstnebf162ps (%ecx),%xmm6");
+ asm volatile("vbcstnesh2ps (%ecx),%xmm6");
+ asm volatile("vcvtneebf162ps (%ecx),%xmm6");
+ asm volatile("vcvtneeph2ps (%ecx),%xmm6");
+ asm volatile("vcvtneobf162ps (%ecx),%xmm6");
+ asm volatile("vcvtneoph2ps (%ecx),%xmm6");
+ asm volatile("vcvtneps2bf16 %xmm1,%xmm6");
+
+ /* AVX VNNI INT16 */
+
+ asm volatile("vpdpbssd %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpbssds %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpbsud %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpbsuds %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpbuud %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpbuuds %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpwsud %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpwsuds %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpwusd %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpwusds %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpwuud %xmm1,%xmm2,%xmm3");
+ asm volatile("vpdpwuuds %xmm1,%xmm2,%xmm3");
+
+ /* AVX IFMA */
+
+ asm volatile("vpmadd52huq %xmm1,%xmm2,%xmm3");
+ asm volatile("vpmadd52luq %xmm1,%xmm2,%xmm3");
+
+ /* AVX SHA512 */
+
+ asm volatile("vsha512msg1 %xmm1,%ymm2");
+ asm volatile("vsha512msg2 %ymm1,%ymm2");
+ asm volatile("vsha512rnds2 %xmm1,%ymm2,%ymm3");
+
+ /* AVX SM3 */
+
+ asm volatile("vsm3msg1 %xmm1,%xmm2,%xmm3");
+ asm volatile("vsm3msg2 %xmm1,%xmm2,%xmm3");
+ asm volatile("vsm3rnds2 $0xa1,%xmm1,%xmm2,%xmm3");
+
+ /* AVX SM4 */
+
+ asm volatile("vsm4key4 %xmm1,%xmm2,%xmm3");
+ asm volatile("vsm4rnds4 %xmm1,%xmm2,%xmm3");
+
+ /* Pre-fetch */
+
+ asm volatile("prefetch (%eax)");
+ asm volatile("prefetcht0 (%eax)");
+ asm volatile("prefetcht1 (%eax)");
+ asm volatile("prefetcht2 (%eax)");
+ asm volatile("prefetchnta (%eax)");
+
+ /* Non-serializing write MSR */
+
+ asm volatile("wrmsrns");
+
/* Prediction history reset */
asm volatile("hreset $0");
diff --git a/tools/perf/arch/x86/tests/intel-cqm.c b/tools/perf/arch/x86/tests/intel-cqm.c
deleted file mode 100644
index 360a082fc928..000000000000
--- a/tools/perf/arch/x86/tests/intel-cqm.c
+++ /dev/null
@@ -1,128 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include "tests/tests.h"
-#include "cloexec.h"
-#include "debug.h"
-#include "evlist.h"
-#include "evsel.h"
-#include "arch-tests.h"
-#include <internal/lib.h> // page_size
-
-#include <signal.h>
-#include <sys/mman.h>
-#include <sys/wait.h>
-#include <errno.h>
-#include <string.h>
-
-static pid_t spawn(void)
-{
- pid_t pid;
-
- pid = fork();
- if (pid)
- return pid;
-
- while(1)
- sleep(5);
- return 0;
-}
-
-/*
- * Create an event group that contains both a sampled hardware
- * (cpu-cycles) and software (intel_cqm/llc_occupancy/) event. We then
- * wait for the hardware perf counter to overflow and generate a PMI,
- * which triggers an event read for both of the events in the group.
- *
- * Since reading Intel CQM event counters requires sending SMP IPIs, the
- * CQM pmu needs to handle the above situation gracefully, and return
- * the last read counter value to avoid triggering a WARN_ON_ONCE() in
- * smp_call_function_many() caused by sending IPIs from NMI context.
- */
-int test__intel_cqm_count_nmi_context(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
-{
- struct evlist *evlist = NULL;
- struct evsel *evsel = NULL;
- struct perf_event_attr pe;
- int i, fd[2], flag, ret;
- size_t mmap_len;
- void *event;
- pid_t pid;
- int err = TEST_FAIL;
-
- flag = perf_event_open_cloexec_flag();
-
- evlist = evlist__new();
- if (!evlist) {
- pr_debug("evlist__new failed\n");
- return TEST_FAIL;
- }
-
- ret = parse_event(evlist, "intel_cqm/llc_occupancy/");
- if (ret) {
- pr_debug("parse_events failed, is \"intel_cqm/llc_occupancy/\" available?\n");
- err = TEST_SKIP;
- goto out;
- }
-
- evsel = evlist__first(evlist);
- if (!evsel) {
- pr_debug("evlist__first failed\n");
- goto out;
- }
-
- memset(&pe, 0, sizeof(pe));
- pe.size = sizeof(pe);
-
- pe.type = PERF_TYPE_HARDWARE;
- pe.config = PERF_COUNT_HW_CPU_CYCLES;
- pe.read_format = PERF_FORMAT_GROUP;
-
- pe.sample_period = 128;
- pe.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_READ;
-
- pid = spawn();
-
- fd[0] = sys_perf_event_open(&pe, pid, -1, -1, flag);
- if (fd[0] < 0) {
- pr_debug("failed to open event\n");
- goto out;
- }
-
- memset(&pe, 0, sizeof(pe));
- pe.size = sizeof(pe);
-
- pe.type = evsel->attr.type;
- pe.config = evsel->attr.config;
-
- fd[1] = sys_perf_event_open(&pe, pid, -1, fd[0], flag);
- if (fd[1] < 0) {
- pr_debug("failed to open event\n");
- goto out;
- }
-
- /*
- * Pick a power-of-two number of pages + 1 for the meta-data
- * page (struct perf_event_mmap_page). See tools/perf/design.txt.
- */
- mmap_len = page_size * 65;
-
- event = mmap(NULL, mmap_len, PROT_READ, MAP_SHARED, fd[0], 0);
- if (event == (void *)(-1)) {
- pr_debug("failed to mmap %d\n", errno);
- goto out;
- }
-
- sleep(1);
-
- err = TEST_OK;
-
- munmap(event, mmap_len);
-
- for (i = 0; i < 2; i++)
- close(fd[i]);
-
- kill(pid, SIGKILL);
- wait(NULL);
-out:
- evlist__delete(evlist);
- return err;
-}
diff --git a/tools/perf/arch/x86/tests/intel-pt-test.c b/tools/perf/arch/x86/tests/intel-pt-test.c
index 09d61fa736e3..b217ed67cd4e 100644
--- a/tools/perf/arch/x86/tests/intel-pt-test.c
+++ b/tools/perf/arch/x86/tests/intel-pt-test.c
@@ -375,7 +375,7 @@ static int get_pt_caps(int cpu, struct pt_caps *caps)
return 0;
}
-static bool is_hydrid(void)
+static bool is_hybrid(void)
{
unsigned int eax, ebx, ecx, edx = 0;
bool result;
@@ -441,7 +441,7 @@ int test__intel_pt_hybrid_compat(struct test_suite *test, int subtest)
int ret = TEST_OK;
int cpu;
- if (!is_hydrid()) {
+ if (!is_hybrid()) {
test->test_cases[subtest].skip_reason = "not hybrid";
return TEST_SKIP;
}
diff --git a/tools/perf/arch/x86/tests/sample-parsing.c b/tools/perf/arch/x86/tests/sample-parsing.c
deleted file mode 100644
index a061e8619267..000000000000
--- a/tools/perf/arch/x86/tests/sample-parsing.c
+++ /dev/null
@@ -1,125 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-#include <stdbool.h>
-#include <inttypes.h>
-#include <stdlib.h>
-#include <string.h>
-#include <linux/bitops.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-
-#include "event.h"
-#include "evsel.h"
-#include "debug.h"
-#include "util/sample.h"
-#include "util/synthetic-events.h"
-
-#include "tests/tests.h"
-#include "arch-tests.h"
-
-#define COMP(m) do { \
- if (s1->m != s2->m) { \
- pr_debug("Samples differ at '"#m"'\n"); \
- return false; \
- } \
-} while (0)
-
-static bool samples_same(const struct perf_sample *s1,
- const struct perf_sample *s2,
- u64 type)
-{
- if (type & PERF_SAMPLE_WEIGHT_STRUCT) {
- COMP(ins_lat);
- COMP(retire_lat);
- }
-
- return true;
-}
-
-static int do_test(u64 sample_type)
-{
- struct evsel evsel = {
- .needs_swap = false,
- .core = {
- . attr = {
- .sample_type = sample_type,
- .read_format = 0,
- },
- },
- };
- union perf_event *event;
- struct perf_sample sample = {
- .weight = 101,
- .ins_lat = 102,
- .retire_lat = 103,
- };
- struct perf_sample sample_out;
- size_t i, sz, bufsz;
- int err, ret = -1;
-
- sz = perf_event__sample_event_size(&sample, sample_type, 0);
- bufsz = sz + 4096; /* Add a bit for overrun checking */
- event = malloc(bufsz);
- if (!event) {
- pr_debug("malloc failed\n");
- return -1;
- }
-
- memset(event, 0xff, bufsz);
- event->header.type = PERF_RECORD_SAMPLE;
- event->header.misc = 0;
- event->header.size = sz;
-
- err = perf_event__synthesize_sample(event, sample_type, 0, &sample);
- if (err) {
- pr_debug("%s failed for sample_type %#"PRIx64", error %d\n",
- "perf_event__synthesize_sample", sample_type, err);
- goto out_free;
- }
-
- /* The data does not contain 0xff so we use that to check the size */
- for (i = bufsz; i > 0; i--) {
- if (*(i - 1 + (u8 *)event) != 0xff)
- break;
- }
- if (i != sz) {
- pr_debug("Event size mismatch: actual %zu vs expected %zu\n",
- i, sz);
- goto out_free;
- }
-
- evsel.sample_size = __evsel__sample_size(sample_type);
-
- err = evsel__parse_sample(&evsel, event, &sample_out);
- if (err) {
- pr_debug("%s failed for sample_type %#"PRIx64", error %d\n",
- "evsel__parse_sample", sample_type, err);
- goto out_free;
- }
-
- if (!samples_same(&sample, &sample_out, sample_type)) {
- pr_debug("parsing failed for sample_type %#"PRIx64"\n",
- sample_type);
- goto out_free;
- }
-
- ret = 0;
-out_free:
- free(event);
-
- return ret;
-}
-
-/**
- * test__x86_sample_parsing - test X86 specific sample parsing
- *
- * This function implements a test that synthesizes a sample event, parses it
- * and then checks that the parsed sample matches the original sample. If the
- * test passes %0 is returned, otherwise %-1 is returned.
- *
- * For now, the PERF_SAMPLE_WEIGHT_STRUCT is the only X86 specific sample type.
- * The test only checks the PERF_SAMPLE_WEIGHT_STRUCT type.
- */
-int test__x86_sample_parsing(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
-{
- return do_test(PERF_SAMPLE_WEIGHT_STRUCT);
-}
diff --git a/tools/perf/arch/x86/tests/topdown.c b/tools/perf/arch/x86/tests/topdown.c
new file mode 100644
index 000000000000..1eba3b4594ef
--- /dev/null
+++ b/tools/perf/arch/x86/tests/topdown.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "arch-tests.h"
+#include "../util/topdown.h"
+#include "debug.h"
+#include "evlist.h"
+#include "parse-events.h"
+#include "pmu.h"
+#include "pmus.h"
+
+static int event_cb(void *state, struct pmu_event_info *info)
+{
+ char buf[256];
+ struct parse_events_error parse_err;
+ int *ret = state, err;
+ struct evlist *evlist = evlist__new();
+ struct evsel *evsel;
+
+ if (!evlist)
+ return -ENOMEM;
+
+ parse_events_error__init(&parse_err);
+ snprintf(buf, sizeof(buf), "%s/%s/", info->pmu->name, info->name);
+ err = parse_events(evlist, buf, &parse_err);
+ if (err) {
+ parse_events_error__print(&parse_err, buf);
+ *ret = TEST_FAIL;
+ }
+ parse_events_error__exit(&parse_err);
+ evlist__for_each_entry(evlist, evsel) {
+ bool fail = false;
+ bool p_core_pmu = evsel->pmu->type == PERF_TYPE_RAW;
+ const char *name = evsel__name(evsel);
+
+ if (strcasestr(name, "uops_retired.slots") ||
+ strcasestr(name, "topdown.backend_bound_slots") ||
+ strcasestr(name, "topdown.br_mispredict_slots") ||
+ strcasestr(name, "topdown.memory_bound_slots") ||
+ strcasestr(name, "topdown.bad_spec_slots") ||
+ strcasestr(name, "topdown.slots_p")) {
+ if (arch_is_topdown_slots(evsel) || arch_is_topdown_metrics(evsel))
+ fail = true;
+ } else if (strcasestr(name, "slots")) {
+ if (arch_is_topdown_slots(evsel) != p_core_pmu ||
+ arch_is_topdown_metrics(evsel))
+ fail = true;
+ } else if (strcasestr(name, "topdown")) {
+ if (arch_is_topdown_slots(evsel) ||
+ arch_is_topdown_metrics(evsel) != p_core_pmu)
+ fail = true;
+ } else if (arch_is_topdown_slots(evsel) || arch_is_topdown_metrics(evsel)) {
+ fail = true;
+ }
+ if (fail) {
+ pr_debug("Broken topdown information for '%s'\n", evsel__name(evsel));
+ *ret = TEST_FAIL;
+ }
+ }
+ evlist__delete(evlist);
+ return 0;
+}
+
+static int test__x86_topdown(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
+{
+ int ret = TEST_OK;
+ struct perf_pmu *pmu = NULL;
+
+ if (!topdown_sys_has_perf_metrics())
+ return TEST_OK;
+
+ while ((pmu = perf_pmus__scan_core(pmu)) != NULL) {
+ if (perf_pmu__for_each_event(pmu, /*skip_duplicate_pmus=*/false, &ret, event_cb))
+ break;
+ }
+ return ret;
+}
+
+DEFINE_SUITE("x86 topdown", x86_topdown);
diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
index 005907cb97d8..06d7c0205b3d 100644
--- a/tools/perf/arch/x86/util/Build
+++ b/tools/perf/arch/x86/util/Build
@@ -1,24 +1,20 @@
-perf-y += header.o
-perf-y += tsc.o
-perf-y += pmu.o
-perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
-perf-y += perf_regs.o
-perf-y += topdown.o
-perf-y += machine.o
-perf-y += event.o
-perf-y += evlist.o
-perf-y += mem-events.o
-perf-y += evsel.o
-perf-y += iostat.o
-perf-y += env.o
+perf-util-y += header.o
+perf-util-y += tsc.o
+perf-util-y += pmu.o
+perf-util-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
+perf-util-y += perf_regs.o
+perf-util-y += topdown.o
+perf-util-y += machine.o
+perf-util-y += event.o
+perf-util-y += evlist.o
+perf-util-y += mem-events.o
+perf-util-y += evsel.o
+perf-util-y += iostat.o
-perf-$(CONFIG_DWARF) += dwarf-regs.o
-perf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
+perf-util-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
+perf-util-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
-perf-$(CONFIG_LOCAL_LIBUNWIND) += unwind-libunwind.o
-perf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
-
-perf-$(CONFIG_AUXTRACE) += auxtrace.o
-perf-$(CONFIG_AUXTRACE) += archinsn.o
-perf-$(CONFIG_AUXTRACE) += intel-pt.o
-perf-$(CONFIG_AUXTRACE) += intel-bts.o
+perf-util-$(CONFIG_AUXTRACE) += auxtrace.o
+perf-util-y += archinsn.o
+perf-util-$(CONFIG_AUXTRACE) += intel-pt.o
+perf-util-$(CONFIG_AUXTRACE) += intel-bts.o
diff --git a/tools/perf/arch/x86/util/auxtrace.c b/tools/perf/arch/x86/util/auxtrace.c
index 354780ff1605..ecbf61a7eb3a 100644
--- a/tools/perf/arch/x86/util/auxtrace.c
+++ b/tools/perf/arch/x86/util/auxtrace.c
@@ -55,11 +55,12 @@ struct auxtrace_record *auxtrace_record__init(struct evlist *evlist,
int *err)
{
char buffer[64];
+ struct perf_cpu cpu = perf_cpu_map__min(evlist->core.all_cpus);
int ret;
*err = 0;
- ret = get_cpuid(buffer, sizeof(buffer));
+ ret = get_cpuid(buffer, sizeof(buffer), cpu);
if (ret) {
*err = ret;
return NULL;
diff --git a/tools/perf/arch/x86/util/dwarf-regs.c b/tools/perf/arch/x86/util/dwarf-regs.c
deleted file mode 100644
index 399c4a0a29d8..000000000000
--- a/tools/perf/arch/x86/util/dwarf-regs.c
+++ /dev/null
@@ -1,153 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * dwarf-regs.c : Mapping of DWARF debug register numbers into register names.
- * Extracted from probe-finder.c
- *
- * Written by Masami Hiramatsu <mhiramat@redhat.com>
- */
-
-#include <stddef.h>
-#include <errno.h> /* for EINVAL */
-#include <string.h> /* for strcmp */
-#include <linux/ptrace.h> /* for struct pt_regs */
-#include <linux/kernel.h> /* for offsetof */
-#include <dwarf-regs.h>
-
-/*
- * See arch/x86/kernel/ptrace.c.
- * Different from it:
- *
- * - Since struct pt_regs is defined differently for user and kernel,
- * but we want to use 'ax, bx' instead of 'rax, rbx' (which is struct
- * field name of user's pt_regs), we make REG_OFFSET_NAME to accept
- * both string name and reg field name.
- *
- * - Since accessing x86_32's pt_regs from x86_64 building is difficult
- * and vise versa, we simply fill offset with -1, so
- * get_arch_regstr() still works but regs_query_register_offset()
- * returns error.
- * The only inconvenience caused by it now is that we are not allowed
- * to generate BPF prologue for a x86_64 kernel if perf is built for
- * x86_32. This is really a rare usecase.
- *
- * - Order is different from kernel's ptrace.c for get_arch_regstr(). Use
- * the order defined by dwarf.
- */
-
-struct pt_regs_offset {
- const char *name;
- int offset;
-};
-
-#define REG_OFFSET_END {.name = NULL, .offset = 0}
-
-#ifdef __x86_64__
-# define REG_OFFSET_NAME_64(n, r) {.name = n, .offset = offsetof(struct pt_regs, r)}
-# define REG_OFFSET_NAME_32(n, r) {.name = n, .offset = -1}
-#else
-# define REG_OFFSET_NAME_64(n, r) {.name = n, .offset = -1}
-# define REG_OFFSET_NAME_32(n, r) {.name = n, .offset = offsetof(struct pt_regs, r)}
-#endif
-
-/* TODO: switching by dwarf address size */
-#ifndef __x86_64__
-static const struct pt_regs_offset x86_32_regoffset_table[] = {
- REG_OFFSET_NAME_32("%ax", eax),
- REG_OFFSET_NAME_32("%cx", ecx),
- REG_OFFSET_NAME_32("%dx", edx),
- REG_OFFSET_NAME_32("%bx", ebx),
- REG_OFFSET_NAME_32("$stack", esp), /* Stack address instead of %sp */
- REG_OFFSET_NAME_32("%bp", ebp),
- REG_OFFSET_NAME_32("%si", esi),
- REG_OFFSET_NAME_32("%di", edi),
- REG_OFFSET_END,
-};
-
-#define regoffset_table x86_32_regoffset_table
-#else
-static const struct pt_regs_offset x86_64_regoffset_table[] = {
- REG_OFFSET_NAME_64("%ax", rax),
- REG_OFFSET_NAME_64("%dx", rdx),
- REG_OFFSET_NAME_64("%cx", rcx),
- REG_OFFSET_NAME_64("%bx", rbx),
- REG_OFFSET_NAME_64("%si", rsi),
- REG_OFFSET_NAME_64("%di", rdi),
- REG_OFFSET_NAME_64("%bp", rbp),
- REG_OFFSET_NAME_64("%sp", rsp),
- REG_OFFSET_NAME_64("%r8", r8),
- REG_OFFSET_NAME_64("%r9", r9),
- REG_OFFSET_NAME_64("%r10", r10),
- REG_OFFSET_NAME_64("%r11", r11),
- REG_OFFSET_NAME_64("%r12", r12),
- REG_OFFSET_NAME_64("%r13", r13),
- REG_OFFSET_NAME_64("%r14", r14),
- REG_OFFSET_NAME_64("%r15", r15),
- REG_OFFSET_END,
-};
-
-#define regoffset_table x86_64_regoffset_table
-#endif
-
-/* Minus 1 for the ending REG_OFFSET_END */
-#define ARCH_MAX_REGS ((sizeof(regoffset_table) / sizeof(regoffset_table[0])) - 1)
-
-/* Return architecture dependent register string (for kprobe-tracer) */
-const char *get_arch_regstr(unsigned int n)
-{
- return (n < ARCH_MAX_REGS) ? regoffset_table[n].name : NULL;
-}
-
-/* Reuse code from arch/x86/kernel/ptrace.c */
-/**
- * regs_query_register_offset() - query register offset from its name
- * @name: the name of a register
- *
- * regs_query_register_offset() returns the offset of a register in struct
- * pt_regs from its name. If the name is invalid, this returns -EINVAL;
- */
-int regs_query_register_offset(const char *name)
-{
- const struct pt_regs_offset *roff;
- for (roff = regoffset_table; roff->name != NULL; roff++)
- if (!strcmp(roff->name, name))
- return roff->offset;
- return -EINVAL;
-}
-
-struct dwarf_regs_idx {
- const char *name;
- int idx;
-};
-
-static const struct dwarf_regs_idx x86_regidx_table[] = {
- { "rax", 0 }, { "eax", 0 }, { "ax", 0 }, { "al", 0 },
- { "rdx", 1 }, { "edx", 1 }, { "dx", 1 }, { "dl", 1 },
- { "rcx", 2 }, { "ecx", 2 }, { "cx", 2 }, { "cl", 2 },
- { "rbx", 3 }, { "edx", 3 }, { "bx", 3 }, { "bl", 3 },
- { "rsi", 4 }, { "esi", 4 }, { "si", 4 }, { "sil", 4 },
- { "rdi", 5 }, { "edi", 5 }, { "di", 5 }, { "dil", 5 },
- { "rbp", 6 }, { "ebp", 6 }, { "bp", 6 }, { "bpl", 6 },
- { "rsp", 7 }, { "esp", 7 }, { "sp", 7 }, { "spl", 7 },
- { "r8", 8 }, { "r8d", 8 }, { "r8w", 8 }, { "r8b", 8 },
- { "r9", 9 }, { "r9d", 9 }, { "r9w", 9 }, { "r9b", 9 },
- { "r10", 10 }, { "r10d", 10 }, { "r10w", 10 }, { "r10b", 10 },
- { "r11", 11 }, { "r11d", 11 }, { "r11w", 11 }, { "r11b", 11 },
- { "r12", 12 }, { "r12d", 12 }, { "r12w", 12 }, { "r12b", 12 },
- { "r13", 13 }, { "r13d", 13 }, { "r13w", 13 }, { "r13b", 13 },
- { "r14", 14 }, { "r14d", 14 }, { "r14w", 14 }, { "r14b", 14 },
- { "r15", 15 }, { "r15d", 15 }, { "r15w", 15 }, { "r15b", 15 },
- { "rip", DWARF_REG_PC },
-};
-
-int get_arch_regnum(const char *name)
-{
- unsigned int i;
-
- if (*name != '%')
- return -EINVAL;
-
- for (i = 0; i < ARRAY_SIZE(x86_regidx_table); i++)
- if (!strcmp(x86_regidx_table[i].name, name + 1))
- return x86_regidx_table[i].idx;
- return -ENOENT;
-}
diff --git a/tools/perf/arch/x86/util/env.c b/tools/perf/arch/x86/util/env.c
deleted file mode 100644
index 3e537ffb1353..000000000000
--- a/tools/perf/arch/x86/util/env.c
+++ /dev/null
@@ -1,19 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-#include "linux/string.h"
-#include "util/env.h"
-#include "env.h"
-
-bool x86__is_amd_cpu(void)
-{
- struct perf_env env = { .total_mem = 0, };
- static int is_amd; /* 0: Uninitialized, 1: Yes, -1: No */
-
- if (is_amd)
- goto ret;
-
- perf_env__cpuid(&env);
- is_amd = env.cpuid && strstarts(env.cpuid, "AuthenticAMD") ? 1 : -1;
- perf_env__exit(&env);
-ret:
- return is_amd >= 1 ? true : false;
-}
diff --git a/tools/perf/arch/x86/util/env.h b/tools/perf/arch/x86/util/env.h
deleted file mode 100644
index d78f080b6b3f..000000000000
--- a/tools/perf/arch/x86/util/env.h
+++ /dev/null
@@ -1,7 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _X86_ENV_H
-#define _X86_ENV_H
-
-bool x86__is_amd_cpu(void);
-
-#endif /* _X86_ENV_H */
diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
index e65b7dbe27fb..3cd384317739 100644
--- a/tools/perf/arch/x86/util/event.c
+++ b/tools/perf/arch/x86/util/event.c
@@ -15,7 +15,7 @@
#if defined(__x86_64__)
struct perf_event__synthesize_extra_kmaps_cb_args {
- struct perf_tool *tool;
+ const struct perf_tool *tool;
perf_event__handler_t process;
struct machine *machine;
union perf_event *event;
@@ -65,7 +65,7 @@ static int perf_event__synthesize_extra_kmaps_cb(struct map *map, void *data)
return 0;
}
-int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
+int perf_event__synthesize_extra_kmaps(const struct perf_tool *tool,
perf_event__handler_t process,
struct machine *machine)
{
@@ -91,49 +91,3 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
}
#endif
-
-void arch_perf_parse_sample_weight(struct perf_sample *data,
- const __u64 *array, u64 type)
-{
- union perf_sample_weight weight;
-
- weight.full = *array;
- if (type & PERF_SAMPLE_WEIGHT)
- data->weight = weight.full;
- else {
- data->weight = weight.var1_dw;
- data->ins_lat = weight.var2_w;
- data->retire_lat = weight.var3_w;
- }
-}
-
-void arch_perf_synthesize_sample_weight(const struct perf_sample *data,
- __u64 *array, u64 type)
-{
- *array = data->weight;
-
- if (type & PERF_SAMPLE_WEIGHT_STRUCT) {
- *array &= 0xffffffff;
- *array |= ((u64)data->ins_lat << 32);
- *array |= ((u64)data->retire_lat << 48);
- }
-}
-
-const char *arch_perf_header_entry(const char *se_header)
-{
- if (!strcmp(se_header, "Local Pipeline Stage Cycle"))
- return "Local Retire Latency";
- else if (!strcmp(se_header, "Pipeline Stage Cycle"))
- return "Retire Latency";
-
- return se_header;
-}
-
-int arch_support_sort_key(const char *sort_key)
-{
- if (!strcmp(sort_key, "p_stage_cyc"))
- return 1;
- if (!strcmp(sort_key, "local_p_stage_cyc"))
- return 1;
- return 0;
-}
diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
index b1ce0c52d88d..75e9d00a1494 100644
--- a/tools/perf/arch/x86/util/evlist.c
+++ b/tools/perf/arch/x86/util/evlist.c
@@ -1,94 +1,107 @@
// SPDX-License-Identifier: GPL-2.0
-#include <stdio.h>
-#include "util/pmu.h"
-#include "util/pmus.h"
-#include "util/evlist.h"
-#include "util/parse-events.h"
-#include "util/event.h"
+#include <string.h>
+#include "../../../util/evlist.h"
+#include "../../../util/evsel.h"
#include "topdown.h"
#include "evsel.h"
-static int ___evlist__add_default_attrs(struct evlist *evlist,
- struct perf_event_attr *attrs,
- size_t nr_attrs)
+int arch_evlist__cmp(const struct evsel *lhs, const struct evsel *rhs)
{
- LIST_HEAD(head);
- size_t i = 0;
-
- for (i = 0; i < nr_attrs; i++)
- event_attr_init(attrs + i);
-
- if (perf_pmus__num_core_pmus() == 1)
- return evlist__add_attrs(evlist, attrs, nr_attrs);
-
- for (i = 0; i < nr_attrs; i++) {
- struct perf_pmu *pmu = NULL;
-
- if (attrs[i].type == PERF_TYPE_SOFTWARE) {
- struct evsel *evsel = evsel__new(attrs + i);
-
- if (evsel == NULL)
- goto out_delete_partial_list;
- list_add_tail(&evsel->core.node, &head);
- continue;
- }
-
- while ((pmu = perf_pmus__scan_core(pmu)) != NULL) {
- struct perf_cpu_map *cpus;
- struct evsel *evsel;
+ /*
+ * Currently the following topdown event sequences are supported to be
+ * moved and regrouped correctly.
+ *
+ * a. all events in a group
+ * perf stat -e "{instructions,topdown-retiring,slots}" -C0 sleep 1
+ * WARNING: events were regrouped to match PMUs
+ * Performance counter stats for 'CPU(s) 0':
+ * 15,066,240 slots
+ * 1,899,760 instructions
+ * 2,126,998 topdown-retiring
+ * b. all events not in a group
+ * perf stat -e "instructions,topdown-retiring,slots" -C0 sleep 1
+ * WARNING: events were regrouped to match PMUs
+ * Performance counter stats for 'CPU(s) 0':
+ * 2,045,561 instructions
+ * 17,108,370 slots
+ * 2,281,116 topdown-retiring
+ * c. slots event in a group but topdown metrics events outside the group
+ * perf stat -e "{instructions,slots},topdown-retiring" -C0 sleep 1
+ * WARNING: events were regrouped to match PMUs
+ * Performance counter stats for 'CPU(s) 0':
+ * 20,323,878 slots
+ * 2,634,884 instructions
+ * 3,028,656 topdown-retiring
+ * d. slots event and topdown metrics events in two groups
+ * perf stat -e "{instructions,slots},{topdown-retiring}" -C0 sleep 1
+ * WARNING: events were regrouped to match PMUs
+ * Performance counter stats for 'CPU(s) 0':
+ * 26,319,024 slots
+ * 2,427,791 instructions
+ * 2,683,508 topdown-retiring
+ * e. slots event and metrics event are not in a group and not adjacent
+ * perf stat -e "{instructions,slots},cycles,topdown-retiring" -C0 sleep 1
+ * WARNING: events were regrouped to match PMUs
+ * 68,433,522 slots
+ * 8,856,102 topdown-retiring
+ * 7,791,494 instructions
+ * 11,469,513 cycles
+ */
+ if (topdown_sys_has_perf_metrics() &&
+ (arch_evsel__must_be_in_group(lhs) || arch_evsel__must_be_in_group(rhs))) {
+ /* Ensure the topdown slots comes first. */
+ if (arch_is_topdown_slots(lhs))
+ return -1;
+ if (arch_is_topdown_slots(rhs))
+ return 1;
- evsel = evsel__new(attrs + i);
- if (evsel == NULL)
- goto out_delete_partial_list;
- evsel->core.attr.config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
- cpus = perf_cpu_map__get(pmu->cpus);
- evsel->core.cpus = cpus;
- evsel->core.own_cpus = perf_cpu_map__get(cpus);
- evsel->pmu_name = strdup(pmu->name);
- list_add_tail(&evsel->core.node, &head);
+ /*
+ * Move topdown metrics events forward only when topdown metrics
+ * events are not in same group with previous slots event. If
+ * topdown metrics events are already in same group with slots
+ * event, do nothing.
+ */
+ if (lhs->core.leader != rhs->core.leader) {
+ bool lhs_topdown = arch_is_topdown_metrics(lhs);
+ bool rhs_topdown = arch_is_topdown_metrics(rhs);
+
+ if (lhs_topdown && !rhs_topdown)
+ return -1;
+ if (!lhs_topdown && rhs_topdown)
+ return 1;
}
}
- evlist__splice_list_tail(evlist, &head);
-
- return 0;
+ /* Retire latency event should not be a group leader */
+ if (lhs->retire_lat && !rhs->retire_lat)
+ return 1;
+ if (!lhs->retire_lat && rhs->retire_lat)
+ return -1;
-out_delete_partial_list:
- {
- struct evsel *evsel, *n;
-
- __evlist__for_each_entry_safe(&head, n, evsel)
- evsel__delete(evsel);
- }
- return -1;
+ /* Default ordering by insertion index. */
+ return lhs->core.idx - rhs->core.idx;
}
-int arch_evlist__add_default_attrs(struct evlist *evlist,
- struct perf_event_attr *attrs,
- size_t nr_attrs)
+int arch_evlist__add_required_events(struct list_head *list)
{
- if (!nr_attrs)
- return 0;
+ struct evsel *pos, *metric_event = NULL;
+ int idx = 0;
- return ___evlist__add_default_attrs(evlist, attrs, nr_attrs);
-}
+ if (!topdown_sys_has_perf_metrics())
+ return 0;
-int arch_evlist__cmp(const struct evsel *lhs, const struct evsel *rhs)
-{
- if (topdown_sys_has_perf_metrics() &&
- (arch_evsel__must_be_in_group(lhs) || arch_evsel__must_be_in_group(rhs))) {
- /* Ensure the topdown slots comes first. */
- if (strcasestr(lhs->name, "slots") && !strcasestr(lhs->name, "uops_retired.slots"))
- return -1;
- if (strcasestr(rhs->name, "slots") && !strcasestr(rhs->name, "uops_retired.slots"))
- return 1;
- /* Followed by topdown events. */
- if (strcasestr(lhs->name, "topdown") && !strcasestr(rhs->name, "topdown"))
- return -1;
- if (!strcasestr(lhs->name, "topdown") && strcasestr(rhs->name, "topdown"))
- return 1;
+ list_for_each_entry(pos, list, core.node) {
+ if (arch_is_topdown_slots(pos)) {
+ /* Slots event already present, nothing to do. */
+ return 0;
+ }
+ if (metric_event == NULL && arch_is_topdown_metrics(pos))
+ metric_event = pos;
+ idx++;
}
-
- /* Default ordering by insertion index. */
- return lhs->core.idx - rhs->core.idx;
+ if (metric_event == NULL) {
+ /* No topdown metric events, nothing to do. */
+ return 0;
+ }
+ return topdown_insert_slots_event(list, idx + 1, metric_event);
}
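
The regrouping rules documented in the comment above boil down to a comparator: the topdown slots event sorts first, topdown metric events come next, and everything else keeps its insertion order. Below is a minimal standalone C sketch of that policy; the toy struct, its field names and the helper are invented for illustration and are not perf APIs.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_evsel {
	const char *name;
	bool is_slots;      /* stands in for arch_is_topdown_slots() */
	bool is_td_metric;  /* stands in for arch_is_topdown_metrics() */
	int idx;            /* insertion index */
};

static int toy_cmp(const void *a, const void *b)
{
	const struct toy_evsel *lhs = *(const struct toy_evsel * const *)a;
	const struct toy_evsel *rhs = *(const struct toy_evsel * const *)b;

	if (lhs->is_slots)
		return -1;                          /* slots always sorts first */
	if (rhs->is_slots)
		return 1;
	if (lhs->is_td_metric != rhs->is_td_metric)
		return lhs->is_td_metric ? -1 : 1;  /* then topdown metric events */
	return lhs->idx - rhs->idx;                 /* otherwise keep insertion order */
}

int main(void)
{
	struct toy_evsel instructions = { "instructions", false, false, 0 };
	struct toy_evsel retiring = { "topdown-retiring", false, true, 1 };
	struct toy_evsel slots = { "slots", true, false, 2 };
	struct toy_evsel *evs[] = { &instructions, &retiring, &slots };

	qsort(evs, 3, sizeof(evs[0]), toy_cmp);
	for (int i = 0; i < 3; i++)
		printf("%s\n", evs[i]->name);  /* slots, topdown-retiring, instructions */
	return 0;
}
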
diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
index 090d0f371891..23a8e662a912 100644
--- a/tools/perf/arch/x86/util/evsel.c
+++ b/tools/perf/arch/x86/util/evsel.c
@@ -1,11 +1,17 @@
// SPDX-License-Identifier: GPL-2.0
+#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
+#include "util/evlist.h"
#include "util/evsel.h"
+#include "util/evsel_config.h"
#include "util/env.h"
#include "util/pmu.h"
#include "util/pmus.h"
+#include "util/stat.h"
+#include "util/strbuf.h"
#include "linux/string.h"
+#include "topdown.h"
#include "evsel.h"
#include "util/debug.h"
#include "env.h"
@@ -21,30 +27,29 @@ void arch_evsel__set_sample_weight(struct evsel *evsel)
/* Check whether the evsel's PMU supports the perf metrics */
bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
{
- const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
+ struct perf_pmu *pmu;
+
+ if (!topdown_sys_has_perf_metrics())
+ return false;
/*
- * The PERF_TYPE_RAW type is the core PMU type, e.g., "cpu" PMU
- * on a non-hybrid machine, "cpu_core" PMU on a hybrid machine.
- * The slots event is only available for the core PMU, which
- * supports the perf metrics feature.
- * Checking both the PERF_TYPE_RAW type and the slots event
- * should be good enough to detect the perf metrics feature.
+ * The PERF_TYPE_RAW type is the core PMU type, e.g., "cpu" PMU on a
+ * non-hybrid machine, "cpu_core" PMU on a hybrid machine. The
+ * topdown_sys_has_perf_metrics checks the slots event is only available
+ * for the core PMU, which supports the perf metrics feature. Checking
+ * both the PERF_TYPE_RAW type and the slots event should be good enough
+ * to detect the perf metrics feature.
*/
- if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
- perf_pmus__have_event(pmu_name, "slots"))
- return true;
-
- return false;
+ pmu = evsel__find_pmu(evsel);
+ return pmu && pmu->type == PERF_TYPE_RAW;
}
bool arch_evsel__must_be_in_group(const struct evsel *evsel)
{
- if (!evsel__sys_has_perf_metrics(evsel) || !evsel->name ||
- strcasestr(evsel->name, "uops_retired.slots"))
+ if (!evsel__sys_has_perf_metrics(evsel))
return false;
- return strcasestr(evsel->name, "topdown") || strcasestr(evsel->name, "slots");
+ return arch_is_topdown_metrics(evsel) || arch_is_topdown_slots(evsel);
}
int arch_evsel__hw_name(struct evsel *evsel, char *bf, size_t size)
@@ -63,10 +68,61 @@ int arch_evsel__hw_name(struct evsel *evsel, char *bf, size_t size)
return scnprintf(bf, size, "%s", event_name);
return scnprintf(bf, size, "%s/%s/",
- evsel->pmu_name ? evsel->pmu_name : "cpu",
+ evsel->pmu ? evsel->pmu->name : "cpu",
event_name);
}
+void arch_evsel__apply_ratio_to_prev(struct evsel *evsel,
+ struct perf_event_attr *attr)
+{
+ struct perf_event_attr *prev_attr = NULL;
+ struct evsel *evsel_prev = NULL;
+ const char *name = "acr_mask";
+ int evsel_idx = 0;
+ __u64 ev_mask, pr_ev_mask;
+
+ if (!perf_pmu__has_format(evsel->pmu, name)) {
+ pr_err("'%s' does not have acr_mask format support\n", evsel->pmu->name);
+ return;
+ }
+ if (perf_pmu__format_type(evsel->pmu, name) !=
+ PERF_PMU_FORMAT_VALUE_CONFIG2) {
+ pr_err("'%s' does not have config2 format support\n", evsel->pmu->name);
+ return;
+ }
+
+ evsel_prev = evsel__prev(evsel);
+ if (!evsel_prev) {
+ pr_err("Previous event does not exist.\n");
+ return;
+ }
+
+ prev_attr = &evsel_prev->core.attr;
+
+ if (prev_attr->config2) {
+ pr_err("'%s' has set config2 (acr_mask?) already, configuration not supported\n", evsel_prev->name);
+ return;
+ }
+
+ /*
+ * acr_mask (config2) is calculated using the event's index in
+ * the event group. The first event will use the index of the
+ * second event as its mask (e.g., 0x2), indicating that the
+ * second event counter will be reset and a sample taken for
+ * the first event if its counter overflows. The second event
+ * will use the mask consisting of the first and second bits
+ * (e.g., 0x3), meaning both counters will be reset if the
+ * second event counter overflows.
+ */
+
+ evsel_idx = evsel__group_idx(evsel);
+ ev_mask = 1ull << evsel_idx;
+ pr_ev_mask = 1ull << (evsel_idx - 1);
+
+ prev_attr->config2 = ev_mask;
+ attr->config2 = ev_mask | pr_ev_mask;
+}
+
static void ibs_l3miss_warn(void)
{
pr_warning(
@@ -102,13 +158,15 @@ void arch__post_evsel_config(struct evsel *evsel, struct perf_event_attr *attr)
}
}
-int arch_evsel__open_strerror(struct evsel *evsel, char *msg, size_t size)
+static int amd_evsel__open_strerror(struct evsel *evsel, char *msg, size_t size)
{
- if (!x86__is_amd_cpu())
+ struct perf_pmu *pmu;
+
+ if (evsel->core.attr.precise_ip == 0)
return 0;
- if (!evsel->core.attr.precise_ip &&
- !(evsel->pmu_name && !strncmp(evsel->pmu_name, "ibs", 3)))
+ pmu = evsel__find_pmu(evsel);
+ if (!pmu || strncmp(pmu->name, "ibs", 3))
return 0;
/* More verbose IBS errors. */
@@ -118,6 +176,54 @@ int arch_evsel__open_strerror(struct evsel *evsel, char *msg, size_t size)
return scnprintf(msg, size, "AMD IBS doesn't support privilege filtering. Try "
"again without the privilege modifiers (like 'k') at the end.");
}
+ return 0;
+}
+
+static int intel_evsel__open_strerror(struct evsel *evsel, int err, char *msg, size_t size)
+{
+ struct strbuf sb = STRBUF_INIT;
+ int ret;
+
+ if (err != EINVAL)
+ return 0;
+
+ if (!topdown_sys_has_perf_metrics())
+ return 0;
+ if (arch_is_topdown_slots(evsel)) {
+ if (!evsel__is_group_leader(evsel)) {
+ evlist__uniquify_evsel_names(evsel->evlist, &stat_config);
+ evlist__format_evsels(evsel->evlist, &sb, 2048);
+ ret = scnprintf(msg, size, "Topdown slots event can only be group leader "
+ "in '%s'.", sb.buf);
+ strbuf_release(&sb);
+ return ret;
+ }
+ } else if (arch_is_topdown_metrics(evsel)) {
+ struct evsel *pos;
+
+ evlist__for_each_entry(evsel->evlist, pos) {
+ if (pos == evsel || !arch_is_topdown_metrics(pos))
+ continue;
+
+ if (pos->core.attr.config != evsel->core.attr.config)
+ continue;
+
+ evlist__uniquify_evsel_names(evsel->evlist, &stat_config);
+ evlist__format_evsels(evsel->evlist, &sb, 2048);
+ ret = scnprintf(msg, size, "Perf metric event '%s' is duplicated "
+ "in the same group (only one event is allowed) in '%s'.",
+ evsel__name(evsel), sb.buf);
+ strbuf_release(&sb);
+ return ret;
+ }
+ }
return 0;
}
+
+int arch_evsel__open_strerror(struct evsel *evsel, int err, char *msg, size_t size)
+{
+ return x86__is_amd_cpu()
+ ? amd_evsel__open_strerror(evsel, msg, size)
+ : intel_evsel__open_strerror(evsel, err, msg, size);
+}
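
For a concrete feel of the acr_mask (config2) arithmetic in arch_evsel__apply_ratio_to_prev() above, here is a tiny standalone sketch for the second event of a two-event group; the index value is chosen purely for illustration.

#include <stdio.h>

int main(void)
{
	int evsel_idx = 1;                                        /* second event in the group */
	unsigned long long ev_mask = 1ull << evsel_idx;           /* 0x2 */
	unsigned long long pr_ev_mask = 1ull << (evsel_idx - 1);  /* 0x1 */

	/* First event: reset/sample when the second counter overflows. */
	printf("prev config2 = %#llx\n", ev_mask);                /* 0x2 */
	/* Second event: reset both counters when its own counter overflows. */
	printf("curr config2 = %#llx\n", ev_mask | pr_ev_mask);   /* 0x3 */
	return 0;
}
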
diff --git a/tools/perf/arch/x86/util/header.c b/tools/perf/arch/x86/util/header.c
index a51444a77a5f..412977f8aa83 100644
--- a/tools/perf/arch/x86/util/header.c
+++ b/tools/perf/arch/x86/util/header.c
@@ -58,13 +58,12 @@ __get_cpuid(char *buffer, size_t sz, const char *fmt)
}
int
-get_cpuid(char *buffer, size_t sz)
+get_cpuid(char *buffer, size_t sz, struct perf_cpu cpu __maybe_unused)
{
return __get_cpuid(buffer, sz, "%s,%u,%u,%u$");
}
-char *
-get_cpuid_str(struct perf_pmu *pmu __maybe_unused)
+char *get_cpuid_str(struct perf_cpu cpu __maybe_unused)
{
char *buf = malloc(128);
diff --git a/tools/perf/arch/x86/util/intel-bts.c b/tools/perf/arch/x86/util/intel-bts.c
index af8ae4647585..85c8186300c8 100644
--- a/tools/perf/arch/x86/util/intel-bts.c
+++ b/tools/perf/arch/x86/util/intel-bts.c
@@ -143,7 +143,7 @@ static int intel_bts_recording_options(struct auxtrace_record *itr,
if (!opts->full_auxtrace)
return 0;
- if (opts->full_auxtrace && !perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
+ if (opts->full_auxtrace && !perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
pr_err(INTEL_BTS_PMU_NAME " does not support per-cpu recording\n");
return -EINVAL;
}
@@ -224,7 +224,7 @@ static int intel_bts_recording_options(struct auxtrace_record *itr,
* In the case of per-cpu mmaps, we need the CPU on the
* AUX event.
*/
- if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus))
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus))
evsel__set_sample_bit(intel_bts_evsel, CPU);
}
@@ -434,7 +434,6 @@ struct auxtrace_record *intel_bts_recording_init(int *err)
}
btsr->intel_bts_pmu = intel_bts_pmu;
- btsr->itr.pmu = intel_bts_pmu;
btsr->itr.recording_options = intel_bts_recording_options;
btsr->itr.info_priv_size = intel_bts_info_priv_size;
btsr->itr.info_fill = intel_bts_info_fill;
diff --git a/tools/perf/arch/x86/util/intel-pt.c b/tools/perf/arch/x86/util/intel-pt.c
index d199619df3ab..add33cb5d1da 100644
--- a/tools/perf/arch/x86/util/intel-pt.c
+++ b/tools/perf/arch/x86/util/intel-pt.c
@@ -19,6 +19,7 @@
#include "../../../util/evlist.h"
#include "../../../util/evsel.h"
#include "../../../util/evsel_config.h"
+#include "../../../util/config.h"
#include "../../../util/cpumap.h"
#include "../../../util/mmap.h"
#include <subcmd/parse-options.h>
@@ -32,6 +33,7 @@
#include "../../../util/tsc.h"
#include <internal/lib.h> // page_size
#include "../../../util/intel-pt.h"
+#include <api/fs/fs.h>
#define KiB(x) ((x) * 1024)
#define MiB(x) ((x) * 1024 * 1024)
@@ -51,6 +53,7 @@ struct intel_pt_recording {
struct perf_pmu *intel_pt_pmu;
int have_sched_switch;
struct evlist *evlist;
+ bool all_switch_events;
bool snapshot_mode;
bool snapshot_init_done;
size_t snapshot_size;
@@ -74,7 +77,8 @@ static int intel_pt_parse_terms_with_default(const struct perf_pmu *pmu,
goto out_free;
attr.config = *config;
- err = perf_pmu__config_terms(pmu, &attr, &terms, /*zero=*/true, /*err=*/NULL);
+ err = perf_pmu__config_terms(pmu, &attr, &terms, /*zero=*/true, /*apply_hardcoded=*/false,
+ /*err=*/NULL);
if (err)
goto out_free;
@@ -369,7 +373,7 @@ static int intel_pt_info_fill(struct auxtrace_record *itr,
ui__warning("Intel Processor Trace: TSC not available\n");
}
- per_cpu_mmaps = !perf_cpu_map__has_any_cpu_or_is_empty(session->evlist->core.user_requested_cpus);
+ per_cpu_mmaps = !perf_cpu_map__is_any_cpu_or_is_empty(session->evlist->core.user_requested_cpus);
auxtrace_info->type = PERF_AUXTRACE_INTEL_PT;
auxtrace_info->priv[INTEL_PT_PMU_TYPE] = intel_pt_pmu->type;
@@ -428,6 +432,16 @@ static int intel_pt_track_switches(struct evlist *evlist)
}
#endif
+static bool intel_pt_exclude_guest(void)
+{
+ int pt_mode;
+
+ if (sysfs__read_int("module/kvm_intel/parameters/pt_mode", &pt_mode))
+ pt_mode = 0;
+
+ return pt_mode == 1;
+}
+
static void intel_pt_valid_str(char *str, size_t len, u64 valid)
{
unsigned int val, last = 0, state = 1;
@@ -620,6 +634,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
}
evsel->core.attr.freq = 0;
evsel->core.attr.sample_period = 1;
+ evsel->core.attr.exclude_guest = intel_pt_exclude_guest();
evsel->no_aux_samples = true;
evsel->needs_auxtrace_mmap = true;
intel_pt_evsel = evsel;
@@ -758,7 +773,8 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
}
if (!opts->auxtrace_snapshot_mode && !opts->auxtrace_sample_mode) {
- u32 aux_watermark = opts->auxtrace_mmap_pages * page_size / 4;
+ size_t aw = opts->auxtrace_mmap_pages * (size_t)page_size / 4;
+ u32 aux_watermark = aw > UINT_MAX ? UINT_MAX : aw;
intel_pt_evsel->core.attr.aux_watermark = aux_watermark;
}
@@ -774,13 +790,13 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
* Per-cpu recording needs sched_switch events to distinguish different
* threads.
*/
- if (have_timing_info && !perf_cpu_map__has_any_cpu_or_is_empty(cpus) &&
+ if (have_timing_info && !perf_cpu_map__is_any_cpu_or_is_empty(cpus) &&
!record_opts__no_switch_events(opts)) {
if (perf_can_record_switch_events()) {
bool cpu_wide = !target__none(&opts->target) &&
!target__has_task(&opts->target);
- if (!cpu_wide && perf_can_record_cpu_wide()) {
+ if (ptr->all_switch_events && !cpu_wide && perf_can_record_cpu_wide()) {
struct evsel *switch_evsel;
switch_evsel = evlist__add_dummy_on_all_cpus(evlist);
@@ -832,7 +848,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
* In the case of per-cpu mmaps, we need the CPU on the
* AUX event.
*/
- if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus))
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus))
evsel__set_sample_bit(intel_pt_evsel, CPU);
}
@@ -858,7 +874,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
tracking_evsel->immediate = true;
/* In per-cpu case, always need the time of mmap events etc */
- if (!perf_cpu_map__has_any_cpu_or_is_empty(cpus)) {
+ if (!perf_cpu_map__is_any_cpu_or_is_empty(cpus)) {
evsel__set_sample_bit(tracking_evsel, TIME);
/* And the CPU for switch events */
evsel__set_sample_bit(tracking_evsel, CPU);
@@ -870,7 +886,7 @@ static int intel_pt_recording_options(struct auxtrace_record *itr,
* Warn the user when we do not have enough information to decode i.e.
* per-cpu with no sched_switch (except workload-only).
*/
- if (!ptr->have_sched_switch && !perf_cpu_map__has_any_cpu_or_is_empty(cpus) &&
+ if (!ptr->have_sched_switch && !perf_cpu_map__is_any_cpu_or_is_empty(cpus) &&
!target__none(&opts->target) &&
!intel_pt_evsel->core.attr.exclude_user)
ui__warning("Intel Processor Trace decoding will not be possible except for kernel tracing!\n");
@@ -1164,6 +1180,16 @@ static u64 intel_pt_reference(struct auxtrace_record *itr __maybe_unused)
return rdtsc();
}
+static int intel_pt_perf_config(const char *var, const char *value, void *data)
+{
+ struct intel_pt_recording *ptr = data;
+
+ if (!strcmp(var, "intel-pt.all-switch-events"))
+ ptr->all_switch_events = perf_config_bool(var, value);
+
+ return 0;
+}
+
struct auxtrace_record *intel_pt_recording_init(int *err)
{
struct perf_pmu *intel_pt_pmu = perf_pmus__find(INTEL_PT_PMU_NAME);
@@ -1183,8 +1209,9 @@ struct auxtrace_record *intel_pt_recording_init(int *err)
return NULL;
}
+ perf_config(intel_pt_perf_config, ptr);
+
ptr->intel_pt_pmu = intel_pt_pmu;
- ptr->itr.pmu = intel_pt_pmu;
ptr->itr.recording_options = intel_pt_recording_options;
ptr->itr.info_priv_size = intel_pt_info_priv_size;
ptr->itr.info_fill = intel_pt_info_fill;
diff --git a/tools/perf/arch/x86/util/iostat.c b/tools/perf/arch/x86/util/iostat.c
index df7b5dfcc26a..7442a2cd87ed 100644
--- a/tools/perf/arch/x86/util/iostat.c
+++ b/tools/perf/arch/x86/util/iostat.c
@@ -32,7 +32,7 @@
#define MAX_PATH 1024
#endif
-#define UNCORE_IIO_PMU_PATH "devices/uncore_iio_%d"
+#define UNCORE_IIO_PMU_PATH "bus/event_source/devices/uncore_iio_%d"
#define SYSFS_UNCORE_PMU_PATH "%s/"UNCORE_IIO_PMU_PATH
#define PLATFORM_MAPPING_PATH UNCORE_IIO_PMU_PATH"/die%d"
@@ -403,6 +403,10 @@ void iostat_prefix(struct evlist *evlist,
struct iio_root_port *rp = evlist->selected->priv;
if (rp) {
+ /*
+ * TODO: This is the incorrect format in JSON mode.
+ * See prepare_timestamp()
+ */
if (ts)
sprintf(prefix, "%6lu.%09lu%s%04x:%02x%s",
ts->tv_sec, ts->tv_nsec,
@@ -444,7 +448,7 @@ void iostat_print_metric(struct perf_stat_config *config, struct evsel *evsel,
iostat_value = (count->val - prev_count_val) /
((double) count->run / count->ena);
}
- out->print_metric(config, out->ctx, NULL, "%8.0f", iostat_metric,
+ out->print_metric(config, out->ctx, METRIC_THRESHOLD_UNKNOWN, "%8.0f", iostat_metric,
iostat_value / (256 * 1024));
}
diff --git a/tools/perf/arch/x86/util/kvm-stat.c b/tools/perf/arch/x86/util/kvm-stat.c
index 424716518b75..bff36f9345ea 100644
--- a/tools/perf/arch/x86/util/kvm-stat.c
+++ b/tools/perf/arch/x86/util/kvm-stat.c
@@ -3,9 +3,11 @@
#include <string.h>
#include "../../../util/kvm-stat.h"
#include "../../../util/evsel.h"
+#include "../../../util/env.h"
#include <asm/svm.h>
#include <asm/vmx.h>
#include <asm/kvm.h>
+#include <subcmd/parse-options.h>
define_exit_reasons_table(vmx_exit_reasons, VMX_EXIT_REASONS);
define_exit_reasons_table(svm_exit_reasons, SVM_EXIT_REASONS);
@@ -211,3 +213,52 @@ int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid)
return 0;
}
+
+/*
+ * Since KVM gained support for guest PEBS on Intel platforms
+ * (https://lore.kernel.org/all/20220411101946.20262-1-likexu@tencent.com/),
+ * the host loses the capability to sample the guest with PEBS, because all
+ * PEBS-related MSRs are switched to guest values after vm-entry; for example,
+ * the IA32_DS_AREA MSR is switched to a guest GVA at vm-entry. This causes
+ * "perf kvm record" to fail to sample the guest on Intel platforms, since the
+ * "cycles:P" event is used to sample the guest by default.
+ *
+ * So, to avoid this issue, explicitly use the "cycles" event instead of
+ * "cycles:P" by default to sample the guest on Intel platforms.
+ */
+int kvm_add_default_arch_event(int *argc, const char **argv)
+{
+ const char **tmp;
+ bool event = false;
+ int ret = 0, i, j = *argc;
+
+ const struct option event_options[] = {
+ OPT_BOOLEAN('e', "event", &event, NULL),
+ OPT_BOOLEAN(0, "pfm-events", &event, NULL),
+ OPT_END()
+ };
+
+ if (!x86__is_intel_cpu())
+ return 0;
+
+ tmp = calloc(j + 1, sizeof(char *));
+ if (!tmp)
+ return -ENOMEM;
+
+ for (i = 0; i < j; i++)
+ tmp[i] = argv[i];
+
+ parse_options(j, tmp, event_options, NULL, PARSE_OPT_KEEP_UNKNOWN);
+ if (!event) {
+ argv[j++] = STRDUP_FAIL_EXIT("-e");
+ argv[j++] = STRDUP_FAIL_EXIT("cycles");
+ *argc += 2;
+ }
+
+ free(tmp);
+ return 0;
+
+EXIT:
+ free(tmp);
+ return ret;
+}
diff --git a/tools/perf/arch/x86/util/mem-events.c b/tools/perf/arch/x86/util/mem-events.c
index 191b372f9a2d..b38f519020ff 100644
--- a/tools/perf/arch/x86/util/mem-events.c
+++ b/tools/perf/arch/x86/util/mem-events.c
@@ -1,93 +1,34 @@
// SPDX-License-Identifier: GPL-2.0
-#include "util/pmu.h"
-#include "util/pmus.h"
-#include "util/env.h"
-#include "map_symbol.h"
-#include "mem-events.h"
#include "linux/string.h"
-#include "env.h"
+#include "util/map_symbol.h"
+#include "util/mem-events.h"
+#include "mem-events.h"
-static char mem_loads_name[100];
-static bool mem_loads_name__init;
-static char mem_stores_name[100];
#define MEM_LOADS_AUX 0x8203
-#define MEM_LOADS_AUX_NAME "{%s/mem-loads-aux/,%s/mem-loads,ldlat=%u/}:P"
-#define E(t, n, s) { .tag = t, .name = n, .sysfs_name = s }
+#define E(t, n, s, l, a) { .tag = t, .name = n, .event_name = s, .ldlat = l, .aux_event = a }
-static struct perf_mem_event perf_mem_events_intel[PERF_MEM_EVENTS__MAX] = {
- E("ldlat-loads", "%s/mem-loads,ldlat=%u/P", "%s/events/mem-loads"),
- E("ldlat-stores", "%s/mem-stores/P", "%s/events/mem-stores"),
- E(NULL, NULL, NULL),
+struct perf_mem_event perf_mem_events_intel[PERF_MEM_EVENTS__MAX] = {
+ E("ldlat-loads", "%s/mem-loads,ldlat=%u/P", "mem-loads", true, 0),
+ E("ldlat-stores", "%s/mem-stores/P", "mem-stores", false, 0),
+ E(NULL, NULL, NULL, false, 0),
};
-static struct perf_mem_event perf_mem_events_amd[PERF_MEM_EVENTS__MAX] = {
- E(NULL, NULL, NULL),
- E(NULL, NULL, NULL),
- E("mem-ldst", "ibs_op//", "ibs_op"),
+struct perf_mem_event perf_mem_events_intel_aux[PERF_MEM_EVENTS__MAX] = {
+ E("ldlat-loads", "{%s/mem-loads-aux/,%s/mem-loads,ldlat=%u/}:P", "mem-loads", true, MEM_LOADS_AUX),
+ E("ldlat-stores", "%s/mem-stores/P", "mem-stores", false, 0),
+ E(NULL, NULL, NULL, false, 0),
};
-struct perf_mem_event *perf_mem_events__ptr(int i)
-{
- if (i >= PERF_MEM_EVENTS__MAX)
- return NULL;
-
- if (x86__is_amd_cpu())
- return &perf_mem_events_amd[i];
-
- return &perf_mem_events_intel[i];
-}
-
-bool is_mem_loads_aux_event(struct evsel *leader)
-{
- struct perf_pmu *pmu = perf_pmus__find("cpu");
-
- if (!pmu)
- pmu = perf_pmus__find("cpu_core");
-
- if (pmu && !perf_pmu__have_event(pmu, "mem-loads-aux"))
- return false;
-
- return leader->core.attr.config == MEM_LOADS_AUX;
-}
-
-const char *perf_mem_events__name(int i, const char *pmu_name)
-{
- struct perf_mem_event *e = perf_mem_events__ptr(i);
-
- if (!e)
- return NULL;
-
- if (i == PERF_MEM_EVENTS__LOAD) {
- if (mem_loads_name__init && !pmu_name)
- return mem_loads_name;
-
- if (!pmu_name) {
- mem_loads_name__init = true;
- pmu_name = "cpu";
- }
-
- if (perf_pmus__have_event(pmu_name, "mem-loads-aux")) {
- scnprintf(mem_loads_name, sizeof(mem_loads_name),
- MEM_LOADS_AUX_NAME, pmu_name, pmu_name,
- perf_mem_events__loads_ldlat);
- } else {
- scnprintf(mem_loads_name, sizeof(mem_loads_name),
- e->name, pmu_name,
- perf_mem_events__loads_ldlat);
- }
- return mem_loads_name;
- }
-
- if (i == PERF_MEM_EVENTS__STORE) {
- if (!pmu_name)
- pmu_name = "cpu";
-
- scnprintf(mem_stores_name, sizeof(mem_stores_name),
- e->name, pmu_name);
- return mem_stores_name;
- }
+struct perf_mem_event perf_mem_events_amd[PERF_MEM_EVENTS__MAX] = {
+ E(NULL, NULL, NULL, false, 0),
+ E(NULL, NULL, NULL, false, 0),
+ E("mem-ldst", "%s//", NULL, false, 0),
+};
- return e->name;
-}
+struct perf_mem_event perf_mem_events_amd_ldlat[PERF_MEM_EVENTS__MAX] = {
+ E(NULL, NULL, NULL, false, 0),
+ E(NULL, NULL, NULL, false, 0),
+ E("mem-ldst", "%s/ldlat=%u/", NULL, true, 0),
+};
diff --git a/tools/perf/arch/x86/util/mem-events.h b/tools/perf/arch/x86/util/mem-events.h
new file mode 100644
index 000000000000..11e09a256f5b
--- /dev/null
+++ b/tools/perf/arch/x86/util/mem-events.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _X86_MEM_EVENTS_H
+#define _X86_MEM_EVENTS_H
+
+extern struct perf_mem_event perf_mem_events_intel[PERF_MEM_EVENTS__MAX];
+extern struct perf_mem_event perf_mem_events_intel_aux[PERF_MEM_EVENTS__MAX];
+
+extern struct perf_mem_event perf_mem_events_amd[PERF_MEM_EVENTS__MAX];
+extern struct perf_mem_event perf_mem_events_amd_ldlat[PERF_MEM_EVENTS__MAX];
+
+#endif /* _X86_MEM_EVENTS_H */
diff --git a/tools/perf/arch/x86/util/perf_regs.c b/tools/perf/arch/x86/util/perf_regs.c
index b813502a2727..12fd93f04802 100644
--- a/tools/perf/arch/x86/util/perf_regs.c
+++ b/tools/perf/arch/x86/util/perf_regs.c
@@ -13,7 +13,7 @@
#include "../../../util/pmu.h"
#include "../../../util/pmus.h"
-const struct sample_reg sample_reg_masks[] = {
+static const struct sample_reg sample_reg_masks[] = {
SMPL_REG(AX, PERF_REG_X86_AX),
SMPL_REG(BX, PERF_REG_X86_BX),
SMPL_REG(CX, PERF_REG_X86_CX),
@@ -276,6 +276,11 @@ int arch_sdt_arg_parse_op(char *old_op, char **new_op)
return SDT_ARG_VALID;
}
+const struct sample_reg *arch__sample_reg_masks(void)
+{
+ return sample_reg_masks;
+}
+
uint64_t arch__intr_reg_mask(void)
{
struct perf_event_attr attr = {
diff --git a/tools/perf/arch/x86/util/pmu.c b/tools/perf/arch/x86/util/pmu.c
index 469555ae9b3c..58113482654b 100644
--- a/tools/perf/arch/x86/util/pmu.c
+++ b/tools/perf/arch/x86/util/pmu.c
@@ -8,6 +8,8 @@
#include <linux/perf_event.h>
#include <linux/zalloc.h>
#include <api/fs/fs.h>
+#include <api/io_dir.h>
+#include <internal/cpumap.h>
#include <errno.h>
#include "../../../util/intel-pt.h"
@@ -15,10 +17,262 @@
#include "../../../util/pmu.h"
#include "../../../util/fncache.h"
#include "../../../util/pmus.h"
-#include "env.h"
+#include "mem-events.h"
+#include "util/debug.h"
+#include "util/env.h"
+#include "util/header.h"
-void perf_pmu__arch_init(struct perf_pmu *pmu __maybe_unused)
+static bool x86__is_intel_graniterapids(void)
{
+ static bool checked_if_graniterapids;
+ static bool is_graniterapids;
+
+ if (!checked_if_graniterapids) {
+ const char *graniterapids_cpuid = "GenuineIntel-6-A[DE]";
+ char *cpuid = get_cpuid_str((struct perf_cpu){0});
+
+ is_graniterapids = cpuid && strcmp_cpuid_str(graniterapids_cpuid, cpuid) == 0;
+ free(cpuid);
+ checked_if_graniterapids = true;
+ }
+ return is_graniterapids;
+}
+
+static struct perf_cpu_map *read_sysfs_cpu_map(const char *sysfs_path)
+{
+ struct perf_cpu_map *cpus;
+ char *buf = NULL;
+ size_t buf_len;
+
+ if (sysfs__read_str(sysfs_path, &buf, &buf_len) < 0)
+ return NULL;
+
+ cpus = perf_cpu_map__new(buf);
+ free(buf);
+ return cpus;
+}
+
+static int snc_nodes_per_l3_cache(void)
+{
+ static bool checked_snc;
+ static int snc_nodes;
+
+ if (!checked_snc) {
+ struct perf_cpu_map *node_cpus =
+ read_sysfs_cpu_map("devices/system/node/node0/cpulist");
+ struct perf_cpu_map *cache_cpus =
+ read_sysfs_cpu_map("devices/system/cpu/cpu0/cache/index3/shared_cpu_list");
+
+ snc_nodes = perf_cpu_map__nr(cache_cpus) / perf_cpu_map__nr(node_cpus);
+ perf_cpu_map__put(cache_cpus);
+ perf_cpu_map__put(node_cpus);
+ checked_snc = true;
+ }
+ return snc_nodes;
+}
+
+static bool starts_with(const char *str, const char *prefix)
+{
+ return !strncmp(prefix, str, strlen(prefix));
+}
+
+static int num_chas(void)
+{
+ static bool checked_chas;
+ static int num_chas;
+
+ if (!checked_chas) {
+ int fd = perf_pmu__event_source_devices_fd();
+ struct io_dir dir;
+ struct io_dirent64 *dent;
+
+ if (fd < 0)
+ return -1;
+
+ io_dir__init(&dir, fd);
+
+ while ((dent = io_dir__readdir(&dir)) != NULL) {
+ /* Note, dent->d_type will be DT_LNK and so isn't a useful filter. */
+ if (starts_with(dent->d_name, "uncore_cha_"))
+ num_chas++;
+ }
+ close(fd);
+ checked_chas = true;
+ }
+ return num_chas;
+}
+
+#define MAX_SNCS 6
+
+static int uncore_cha_snc(struct perf_pmu *pmu)
+{
+ // CHA SNC numbers are ordered to correspond to the CHA numbers.
+ unsigned int cha_num;
+ int num_cha, chas_per_node, cha_snc;
+ int snc_nodes = snc_nodes_per_l3_cache();
+
+ if (snc_nodes <= 1)
+ return 0;
+
+ num_cha = num_chas();
+ if (num_cha <= 0) {
+ pr_warning("Unexpected: no CHAs found\n");
+ return 0;
+ }
+
+ /* Compute SNC for PMU. */
+ if (sscanf(pmu->name, "uncore_cha_%u", &cha_num) != 1) {
+ pr_warning("Unexpected: unable to compute CHA number '%s'\n", pmu->name);
+ return 0;
+ }
+ chas_per_node = num_cha / snc_nodes;
+ cha_snc = cha_num / chas_per_node;
+
+ /* Range check cha_snc for unexpected out-of-bounds values. */
+ return cha_snc >= MAX_SNCS ? 0 : cha_snc;
+}
+
+static int uncore_imc_snc(struct perf_pmu *pmu)
+{
+ // Compute the IMC SNC using lookup tables.
+ unsigned int imc_num;
+ int snc_nodes = snc_nodes_per_l3_cache();
+ const u8 snc2_map[] = {1, 1, 0, 0, 1, 1, 0, 0};
+ const u8 snc3_map[] = {1, 1, 0, 0, 2, 2, 1, 1, 0, 0, 2, 2};
+ const u8 *snc_map;
+ size_t snc_map_len;
+
+ switch (snc_nodes) {
+ case 2:
+ snc_map = snc2_map;
+ snc_map_len = ARRAY_SIZE(snc2_map);
+ break;
+ case 3:
+ snc_map = snc3_map;
+ snc_map_len = ARRAY_SIZE(snc3_map);
+ break;
+ default:
+ /* Error or no lookup support for SNC with >3 nodes. */
+ return 0;
+ }
+
+ /* Compute SNC for PMU. */
+ if (sscanf(pmu->name, "uncore_imc_%u", &imc_num) != 1) {
+ pr_warning("Unexpected: unable to compute IMC number '%s'\n", pmu->name);
+ return 0;
+ }
+ if (imc_num >= snc_map_len) {
+ pr_warning("Unexpected IMC %d for SNC%d mapping\n", imc_num, snc_nodes);
+ return 0;
+ }
+ return snc_map[imc_num];
+}
+
+static int uncore_cha_imc_compute_cpu_adjust(int pmu_snc)
+{
+ static bool checked_cpu_adjust[MAX_SNCS];
+ static int cpu_adjust[MAX_SNCS];
+ struct perf_cpu_map *node_cpus;
+ char node_path[] = "devices/system/node/node0/cpulist";
+
+ /* Was adjust already computed? */
+ if (checked_cpu_adjust[pmu_snc])
+ return cpu_adjust[pmu_snc];
+
+ /* SNC0 doesn't need an adjust. */
+ if (pmu_snc == 0) {
+ cpu_adjust[0] = 0;
+ checked_cpu_adjust[0] = true;
+ return 0;
+ }
+
+ /*
+ * Use NUMA topology to compute first CPU of the NUMA node, we want to
+ * adjust CPU 0 to be this and similarly for other CPUs if there is >1
+ * socket.
+ */
+ assert(pmu_snc >= 0 && pmu_snc <= 9);
+ node_path[24] += pmu_snc; // Shift node0 to be node<pmu_snc>.
+ node_cpus = read_sysfs_cpu_map(node_path);
+ cpu_adjust[pmu_snc] = perf_cpu_map__cpu(node_cpus, 0).cpu;
+ if (cpu_adjust[pmu_snc] < 0) {
+ pr_debug("Failed to read valid CPU list from <sysfs>/%s\n", node_path);
+ cpu_adjust[pmu_snc] = 0;
+ } else {
+ checked_cpu_adjust[pmu_snc] = true;
+ }
+ perf_cpu_map__put(node_cpus);
+ return cpu_adjust[pmu_snc];
+}
+
+static void gnr_uncore_cha_imc_adjust_cpumask_for_snc(struct perf_pmu *pmu, bool cha)
+{
+ // With sub-NUMA clustering (SNC) there is a NUMA node per SNC in the
+ // topology. For example, a two socket graniterapids machine may be set
+ // up with 3-way SNC meaning there are 6 NUMA nodes that should be
+ // displayed with --per-node. The cpumask of the CHA and IMC PMUs
+ // reflects per-socket information meaning, for example, uncore_cha_60
+ // on a two socket graniterapids machine with 120 cores per socket will
+ // have a cpumask of "0,120". This cpumask needs adjusting to "40,160"
+ // to reflect that uncore_cha_60 is used for the 2nd SNC of each
+ // socket. Without the adjustment events on uncore_cha_60 will appear in
+ // node 0 and node 3 (in our example 2 socket 3-way set up), but with
+ // the adjustment they will appear in node 1 and node 4. The number of
+ // CHAs is typically larger than the number of cores. The CHA numbers
+ // are assumed to split evenly and in order with respect to core numbers.
+ // There are fewer memory IMC PMUs than cores, and the mapping is handled
+ // using lookup tables.
+ static struct perf_cpu_map *cha_adjusted[MAX_SNCS];
+ static struct perf_cpu_map *imc_adjusted[MAX_SNCS];
+ struct perf_cpu_map **adjusted = cha ? cha_adjusted : imc_adjusted;
+ int idx, pmu_snc, cpu_adjust;
+ struct perf_cpu cpu;
+ bool alloc;
+
+ // The cpumask from the kernel holds the first CPU of each socket, e.g. 0,120.
+ if (perf_cpu_map__cpu(pmu->cpus, 0).cpu != 0) {
+ pr_debug("Ignoring cpumask adjust for %s as unexpected first CPU\n", pmu->name);
+ return;
+ }
+
+ pmu_snc = cha ? uncore_cha_snc(pmu) : uncore_imc_snc(pmu);
+ if (pmu_snc == 0) {
+ // No adjustment necessary for the first SNC.
+ return;
+ }
+
+ alloc = adjusted[pmu_snc] == NULL;
+ if (alloc) {
+ // Hold onto the perf_cpu_map globally to avoid recomputation.
+ cpu_adjust = uncore_cha_imc_compute_cpu_adjust(pmu_snc);
+ adjusted[pmu_snc] = perf_cpu_map__empty_new(perf_cpu_map__nr(pmu->cpus));
+ if (!adjusted[pmu_snc])
+ return;
+ }
+
+ perf_cpu_map__for_each_cpu(cpu, idx, pmu->cpus) {
+ // Compute the new cpu map values or, if not allocating, assert
+ // that they match expectations. The asserts are compiled out in
+ // NDEBUG builds to avoid overhead.
+ if (alloc) {
+ RC_CHK_ACCESS(adjusted[pmu_snc])->map[idx].cpu = cpu.cpu + cpu_adjust;
+ } else if (idx == 0) {
+ cpu_adjust = perf_cpu_map__cpu(adjusted[pmu_snc], idx).cpu - cpu.cpu;
+ assert(uncore_cha_imc_compute_cpu_adjust(pmu_snc) == cpu_adjust);
+ } else {
+ assert(perf_cpu_map__cpu(adjusted[pmu_snc], idx).cpu ==
+ cpu.cpu + cpu_adjust);
+ }
+ }
+
+ perf_cpu_map__put(pmu->cpus);
+ pmu->cpus = perf_cpu_map__get(adjusted[pmu_snc]);
+}
+
+void perf_pmu__arch_init(struct perf_pmu *pmu)
+{
+ struct perf_pmu_caps *ldlat_cap;
+
#ifdef HAVE_AUXTRACE_SUPPORT
if (!strcmp(pmu->name, INTEL_PT_PMU_NAME)) {
pmu->auxtrace = true;
@@ -30,14 +284,33 @@ void perf_pmu__arch_init(struct perf_pmu *pmu __maybe_unused)
pmu->selectable = true;
}
#endif
-}
-int perf_pmus__num_mem_pmus(void)
-{
- /* AMD uses IBS OP pmu and not a core PMU for perf mem/c2c */
- if (x86__is_amd_cpu())
- return 1;
+ if (x86__is_amd_cpu()) {
+ if (strcmp(pmu->name, "ibs_op"))
+ return;
- /* Intel uses core pmus for perf mem/c2c */
- return perf_pmus__num_core_pmus();
+ pmu->mem_events = perf_mem_events_amd;
+
+ if (!perf_pmu__caps_parse(pmu))
+ return;
+
+ ldlat_cap = perf_pmu__get_cap(pmu, "ldlat");
+ if (!ldlat_cap || strcmp(ldlat_cap->value, "1"))
+ return;
+
+ perf_mem_events__loads_ldlat = 0;
+ pmu->mem_events = perf_mem_events_amd_ldlat;
+ } else {
+ if (pmu->is_core) {
+ if (perf_pmu__have_event(pmu, "mem-loads-aux"))
+ pmu->mem_events = perf_mem_events_intel_aux;
+ else
+ pmu->mem_events = perf_mem_events_intel;
+ } else if (x86__is_intel_graniterapids()) {
+ if (starts_with(pmu->name, "uncore_cha_"))
+ gnr_uncore_cha_imc_adjust_cpumask_for_snc(pmu, /*cha=*/true);
+ else if (starts_with(pmu->name, "uncore_imc_"))
+ gnr_uncore_cha_imc_adjust_cpumask_for_snc(pmu, /*cha=*/false);
+ }
+ }
}
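
The CHA-to-SNC mapping in uncore_cha_snc() above is plain integer division. A standalone sketch using the 3-way SNC example from the comment follows; the counts are illustrative and not read from a real machine.

#include <stdio.h>

int main(void)
{
	int num_cha = 120;                          /* CHAs per socket in this example */
	int snc_nodes = 3;                          /* 3-way sub-NUMA clustering */
	int cha_num = 60;                           /* uncore_cha_60 */
	int chas_per_node = num_cha / snc_nodes;    /* 40 */
	int cha_snc = cha_num / chas_per_node;      /* 1 -> second SNC of each socket */

	printf("uncore_cha_%d -> SNC %d\n", cha_num, cha_snc);
	return 0;
}
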
diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
index 3f9a267d4501..0d01b662627a 100644
--- a/tools/perf/arch/x86/util/topdown.c
+++ b/tools/perf/arch/x86/util/topdown.c
@@ -1,12 +1,14 @@
// SPDX-License-Identifier: GPL-2.0
-#include "api/fs/fs.h"
-#include "util/evsel.h"
+#include "util/evlist.h"
#include "util/pmu.h"
#include "util/pmus.h"
#include "util/topdown.h"
#include "topdown.h"
#include "evsel.h"
+// cmask=0, inv=0, pc=0, edge=0, umask=4, event=0
+#define TOPDOWN_SLOTS 0x0400
+
/* Check whether there is a PMU which supports the perf metrics. */
bool topdown_sys_has_perf_metrics(void)
{
@@ -31,7 +33,20 @@ bool topdown_sys_has_perf_metrics(void)
return has_perf_metrics;
}
-#define TOPDOWN_SLOTS 0x0400
+bool arch_is_topdown_slots(const struct evsel *evsel)
+{
+ return evsel->core.attr.type == PERF_TYPE_RAW &&
+ evsel->core.attr.config == TOPDOWN_SLOTS &&
+ evsel->core.attr.config1 == 0;
+}
+
+bool arch_is_topdown_metrics(const struct evsel *evsel)
+{
+ // cmask=0, inv=0, pc=0, edge=0, umask=0x80-0x87, event=0
+ return evsel->core.attr.type == PERF_TYPE_RAW &&
+ (evsel->core.attr.config & 0xFFFFF8FF) == 0x8000 &&
+ evsel->core.attr.config1 == 0;
+}
/*
* Check whether a topdown group supports sample-read.
@@ -41,11 +56,52 @@ bool topdown_sys_has_perf_metrics(void)
*/
bool arch_topdown_sample_read(struct evsel *leader)
{
+ struct evsel *evsel;
+
if (!evsel__sys_has_perf_metrics(leader))
return false;
- if (leader->core.attr.config == TOPDOWN_SLOTS)
- return true;
+ if (!arch_is_topdown_slots(leader))
+ return false;
+
+ /*
+ * If the slots event is the leader but there are no topdown metric
+ * events in the group, the slots event should still sample as the leader.
+ */
+ evlist__for_each_entry(leader->evlist, evsel) {
+ if (evsel->core.leader != leader->core.leader)
+ continue;
+ if (evsel != leader && arch_is_topdown_metrics(evsel))
+ return true;
+ }
return false;
}
+
+/*
+ * Make a copy of the topdown metric event metric_event with the given index but
+ * change its configuration to be a topdown slots event. Copying from
+ * metric_event ensures modifiers are the same.
+ */
+int topdown_insert_slots_event(struct list_head *list, int idx, struct evsel *metric_event)
+{
+ struct evsel *evsel = evsel__new_idx(&metric_event->core.attr, idx);
+
+ if (!evsel)
+ return -ENOMEM;
+
+ evsel->core.attr.config = TOPDOWN_SLOTS;
+ evsel->core.cpus = perf_cpu_map__get(metric_event->core.cpus);
+ evsel->core.pmu_cpus = perf_cpu_map__get(metric_event->core.pmu_cpus);
+ evsel->core.is_pmu_core = true;
+ evsel->pmu = metric_event->pmu;
+ evsel->name = strdup("slots");
+ evsel->precise_max = metric_event->precise_max;
+ evsel->sample_read = metric_event->sample_read;
+ evsel->weak_group = metric_event->weak_group;
+ evsel->bpf_counter = metric_event->bpf_counter;
+ evsel->retire_lat = metric_event->retire_lat;
+ evsel__set_leader(evsel, evsel__leader(metric_event));
+ list_add_tail(&evsel->core.node, list);
+ return 0;
+}
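
The raw encodings tested by arch_is_topdown_slots() and arch_is_topdown_metrics() above can be decoded by hand on the assumption that the event select is bits 0-7 of config and the umask is bits 8-15. A standalone sketch under that assumption:

#include <stdio.h>

static const char *classify(unsigned long long config)
{
	unsigned int event = config & 0xff;
	unsigned int umask = (config >> 8) & 0xff;

	if (event == 0 && umask == 0x04)
		return "topdown slots";
	if (event == 0 && umask >= 0x80 && umask <= 0x87)
		return "topdown metric";
	return "other";
}

int main(void)
{
	printf("0x0400 -> %s\n", classify(0x0400));  /* topdown slots */
	printf("0x8000 -> %s\n", classify(0x8000));  /* topdown metric (retiring) */
	printf("0x00c0 -> %s\n", classify(0x00c0));  /* other */
	return 0;
}
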
diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
index 46bf9273e572..69035565e649 100644
--- a/tools/perf/arch/x86/util/topdown.h
+++ b/tools/perf/arch/x86/util/topdown.h
@@ -2,6 +2,14 @@
#ifndef _TOPDOWN_H
#define _TOPDOWN_H 1
+#include <stdbool.h>
+
+struct evsel;
+struct list_head;
+
bool topdown_sys_has_perf_metrics(void);
+bool arch_is_topdown_slots(const struct evsel *evsel);
+bool arch_is_topdown_metrics(const struct evsel *evsel);
+int topdown_insert_slots_event(struct list_head *list, int idx, struct evsel *metric_event);
#endif
diff --git a/tools/perf/arch/x86/util/tsc.c b/tools/perf/arch/x86/util/tsc.c
index 9b99f48b923c..3a439e4b12d2 100644
--- a/tools/perf/arch/x86/util/tsc.c
+++ b/tools/perf/arch/x86/util/tsc.c
@@ -24,38 +24,40 @@ u64 rdtsc(void)
* ...
* will return 3000000000.
*/
-static double cpuinfo_tsc_freq(void)
+static u64 cpuinfo_tsc_freq(void)
{
- double result = 0;
+ u64 result = 0;
FILE *cpuinfo;
char *line = NULL;
size_t len = 0;
cpuinfo = fopen("/proc/cpuinfo", "r");
if (!cpuinfo) {
- pr_err("Failed to read /proc/cpuinfo for TSC frequency");
- return NAN;
+ pr_err("Failed to read /proc/cpuinfo for TSC frequency\n");
+ return 0;
}
while (getline(&line, &len, cpuinfo) > 0) {
if (!strncmp(line, "model name", 10)) {
char *pos = strstr(line + 11, " @ ");
+ double float_result;
- if (pos && sscanf(pos, " @ %lfGHz", &result) == 1) {
- result *= 1000000000;
+ if (pos && sscanf(pos, " @ %lfGHz", &float_result) == 1) {
+ float_result *= 1000000000;
+ result = (u64)float_result;
goto out;
}
}
}
out:
- if (fpclassify(result) == FP_ZERO)
- pr_err("Failed to find TSC frequency in /proc/cpuinfo");
+ if (result == 0)
+ pr_err("Failed to find TSC frequency in /proc/cpuinfo\n");
free(line);
fclose(cpuinfo);
return result;
}
-double arch_get_tsc_freq(void)
+u64 arch_get_tsc_freq(void)
{
unsigned int a, b, c, d, lvl;
static bool cached;
@@ -86,6 +88,6 @@ double arch_get_tsc_freq(void)
return tsc;
}
- tsc = (double)c * (double)b / (double)a;
+ tsc = (u64)c * (u64)b / (u64)a;
return tsc;
}
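
The /proc/cpuinfo fallback in cpuinfo_tsc_freq() above keys off the "@ N.NNGHz" suffix of the model name line. A standalone sketch of that parse on a hard-coded, made-up example line:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *line = "model name\t: Example CPU @ 3.00GHz";
	const char *pos = strstr(line, " @ ");
	double ghz;

	if (pos && sscanf(pos, " @ %lfGHz", &ghz) == 1)
		printf("%llu Hz\n", (unsigned long long)(ghz * 1000000000));  /* 3000000000 */
	return 0;
}
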
diff --git a/tools/perf/arch/x86/util/unwind-libdw.c b/tools/perf/arch/x86/util/unwind-libdw.c
index edb77e20e083..798493e887d7 100644
--- a/tools/perf/arch/x86/util/unwind-libdw.c
+++ b/tools/perf/arch/x86/util/unwind-libdw.c
@@ -8,7 +8,7 @@
bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
{
struct unwind_info *ui = arg;
- struct regs_dump *user_regs = &ui->sample->user_regs;
+ struct regs_dump *user_regs = perf_sample__user_regs(ui->sample);
Dwarf_Word dwarf_regs[17];
unsigned nregs;
diff --git a/tools/perf/arch/xtensa/Build b/tools/perf/arch/xtensa/Build
deleted file mode 100644
index e4e5f33c84d8..000000000000
--- a/tools/perf/arch/xtensa/Build
+++ /dev/null
@@ -1 +0,0 @@
-perf-y += util/
diff --git a/tools/perf/arch/xtensa/Makefile b/tools/perf/arch/xtensa/Makefile
deleted file mode 100644
index 88c08eed9c7b..000000000000
--- a/tools/perf/arch/xtensa/Makefile
+++ /dev/null
@@ -1,4 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-ifndef NO_DWARF
-PERF_HAVE_DWARF_REGS := 1
-endif
diff --git a/tools/perf/arch/xtensa/entry/syscalls/syscall.tbl b/tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
new file mode 100644
index 000000000000..374e4cb788d8
--- /dev/null
+++ b/tools/perf/arch/xtensa/entry/syscalls/syscall.tbl
@@ -0,0 +1,442 @@
+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
+#
+# system call numbers and entry vectors for xtensa
+#
+# The format is:
+# <number> <abi> <name> <entry point>
+#
+# The <abi> is always "common" for this file
+#
+0 common spill sys_ni_syscall
+1 common xtensa sys_ni_syscall
+2 common available4 sys_ni_syscall
+3 common available5 sys_ni_syscall
+4 common available6 sys_ni_syscall
+5 common available7 sys_ni_syscall
+6 common available8 sys_ni_syscall
+7 common available9 sys_ni_syscall
+# File Operations
+8 common open sys_open
+9 common close sys_close
+10 common dup sys_dup
+11 common dup2 sys_dup2
+12 common read sys_read
+13 common write sys_write
+14 common select sys_select
+15 common lseek sys_lseek
+16 common poll sys_poll
+17 common _llseek sys_llseek
+18 common epoll_wait sys_epoll_wait
+19 common epoll_ctl sys_epoll_ctl
+20 common epoll_create sys_epoll_create
+21 common creat sys_creat
+22 common truncate sys_truncate
+23 common ftruncate sys_ftruncate
+24 common readv sys_readv
+25 common writev sys_writev
+26 common fsync sys_fsync
+27 common fdatasync sys_fdatasync
+28 common truncate64 sys_truncate64
+29 common ftruncate64 sys_ftruncate64
+30 common pread64 sys_pread64
+31 common pwrite64 sys_pwrite64
+32 common link sys_link
+33 common rename sys_rename
+34 common symlink sys_symlink
+35 common readlink sys_readlink
+36 common mknod sys_mknod
+37 common pipe sys_pipe
+38 common unlink sys_unlink
+39 common rmdir sys_rmdir
+40 common mkdir sys_mkdir
+41 common chdir sys_chdir
+42 common fchdir sys_fchdir
+43 common getcwd sys_getcwd
+44 common chmod sys_chmod
+45 common chown sys_chown
+46 common stat sys_newstat
+47 common stat64 sys_stat64
+48 common lchown sys_lchown
+49 common lstat sys_newlstat
+50 common lstat64 sys_lstat64
+51 common available51 sys_ni_syscall
+52 common fchmod sys_fchmod
+53 common fchown sys_fchown
+54 common fstat sys_newfstat
+55 common fstat64 sys_fstat64
+56 common flock sys_flock
+57 common access sys_access
+58 common umask sys_umask
+59 common getdents sys_getdents
+60 common getdents64 sys_getdents64
+61 common fcntl64 sys_fcntl64
+62 common fallocate sys_fallocate
+63 common fadvise64_64 xtensa_fadvise64_64
+64 common utime sys_utime32
+65 common utimes sys_utimes_time32
+66 common ioctl sys_ioctl
+67 common fcntl sys_fcntl
+68 common setxattr sys_setxattr
+69 common getxattr sys_getxattr
+70 common listxattr sys_listxattr
+71 common removexattr sys_removexattr
+72 common lsetxattr sys_lsetxattr
+73 common lgetxattr sys_lgetxattr
+74 common llistxattr sys_llistxattr
+75 common lremovexattr sys_lremovexattr
+76 common fsetxattr sys_fsetxattr
+77 common fgetxattr sys_fgetxattr
+78 common flistxattr sys_flistxattr
+79 common fremovexattr sys_fremovexattr
+# File Map / Shared Memory Operations
+80 common mmap2 sys_mmap_pgoff
+81 common munmap sys_munmap
+82 common mprotect sys_mprotect
+83 common brk sys_brk
+84 common mlock sys_mlock
+85 common munlock sys_munlock
+86 common mlockall sys_mlockall
+87 common munlockall sys_munlockall
+88 common mremap sys_mremap
+89 common msync sys_msync
+90 common mincore sys_mincore
+91 common madvise sys_madvise
+92 common shmget sys_shmget
+93 common shmat xtensa_shmat
+94 common shmctl sys_old_shmctl
+95 common shmdt sys_shmdt
+# Socket Operations
+96 common socket sys_socket
+97 common setsockopt sys_setsockopt
+98 common getsockopt sys_getsockopt
+99 common shutdown sys_shutdown
+100 common bind sys_bind
+101 common connect sys_connect
+102 common listen sys_listen
+103 common accept sys_accept
+104 common getsockname sys_getsockname
+105 common getpeername sys_getpeername
+106 common sendmsg sys_sendmsg
+107 common recvmsg sys_recvmsg
+108 common send sys_send
+109 common recv sys_recv
+110 common sendto sys_sendto
+111 common recvfrom sys_recvfrom
+112 common socketpair sys_socketpair
+113 common sendfile sys_sendfile
+114 common sendfile64 sys_sendfile64
+115 common sendmmsg sys_sendmmsg
+# Process Operations
+116 common clone sys_clone
+117 common execve sys_execve
+118 common exit sys_exit
+119 common exit_group sys_exit_group
+120 common getpid sys_getpid
+121 common wait4 sys_wait4
+122 common waitid sys_waitid
+123 common kill sys_kill
+124 common tkill sys_tkill
+125 common tgkill sys_tgkill
+126 common set_tid_address sys_set_tid_address
+127 common gettid sys_gettid
+128 common setsid sys_setsid
+129 common getsid sys_getsid
+130 common prctl sys_prctl
+131 common personality sys_personality
+132 common getpriority sys_getpriority
+133 common setpriority sys_setpriority
+134 common setitimer sys_setitimer
+135 common getitimer sys_getitimer
+136 common setuid sys_setuid
+137 common getuid sys_getuid
+138 common setgid sys_setgid
+139 common getgid sys_getgid
+140 common geteuid sys_geteuid
+141 common getegid sys_getegid
+142 common setreuid sys_setreuid
+143 common setregid sys_setregid
+144 common setresuid sys_setresuid
+145 common getresuid sys_getresuid
+146 common setresgid sys_setresgid
+147 common getresgid sys_getresgid
+148 common setpgid sys_setpgid
+149 common getpgid sys_getpgid
+150 common getppid sys_getppid
+151 common getpgrp sys_getpgrp
+# 152 was set_thread_area
+152 common reserved152 sys_ni_syscall
+# 153 was get_thread_area
+153 common reserved153 sys_ni_syscall
+154 common times sys_times
+155 common acct sys_acct
+156 common sched_setaffinity sys_sched_setaffinity
+157 common sched_getaffinity sys_sched_getaffinity
+158 common capget sys_capget
+159 common capset sys_capset
+160 common ptrace sys_ptrace
+161 common semtimedop sys_semtimedop_time32
+162 common semget sys_semget
+163 common semop sys_semop
+164 common semctl sys_old_semctl
+165 common available165 sys_ni_syscall
+166 common msgget sys_msgget
+167 common msgsnd sys_msgsnd
+168 common msgrcv sys_msgrcv
+169 common msgctl sys_old_msgctl
+170 common available170 sys_ni_syscall
+# File System
+171 common umount2 sys_umount
+172 common mount sys_mount
+173 common swapon sys_swapon
+174 common chroot sys_chroot
+175 common pivot_root sys_pivot_root
+176 common umount sys_oldumount
+177 common swapoff sys_swapoff
+178 common sync sys_sync
+179 common syncfs sys_syncfs
+180 common setfsuid sys_setfsuid
+181 common setfsgid sys_setfsgid
+182 common sysfs sys_sysfs
+183 common ustat sys_ustat
+184 common statfs sys_statfs
+185 common fstatfs sys_fstatfs
+186 common statfs64 sys_statfs64
+187 common fstatfs64 sys_fstatfs64
+# System
+188 common setrlimit sys_setrlimit
+189 common getrlimit sys_getrlimit
+190 common getrusage sys_getrusage
+191 common futex sys_futex_time32
+192 common gettimeofday sys_gettimeofday
+193 common settimeofday sys_settimeofday
+194 common adjtimex sys_adjtimex_time32
+195 common nanosleep sys_nanosleep_time32
+196 common getgroups sys_getgroups
+197 common setgroups sys_setgroups
+198 common sethostname sys_sethostname
+199 common setdomainname sys_setdomainname
+200 common syslog sys_syslog
+201 common vhangup sys_vhangup
+202 common uselib sys_uselib
+203 common reboot sys_reboot
+204 common quotactl sys_quotactl
+# 205 was old nfsservctl
+205 common nfsservctl sys_ni_syscall
+206 common _sysctl sys_ni_syscall
+207 common bdflush sys_ni_syscall
+208 common uname sys_newuname
+209 common sysinfo sys_sysinfo
+210 common init_module sys_init_module
+211 common delete_module sys_delete_module
+212 common sched_setparam sys_sched_setparam
+213 common sched_getparam sys_sched_getparam
+214 common sched_setscheduler sys_sched_setscheduler
+215 common sched_getscheduler sys_sched_getscheduler
+216 common sched_get_priority_max sys_sched_get_priority_max
+217 common sched_get_priority_min sys_sched_get_priority_min
+218 common sched_rr_get_interval sys_sched_rr_get_interval_time32
+219 common sched_yield sys_sched_yield
+222 common available222 sys_ni_syscall
+# Signal Handling
+223 common restart_syscall sys_restart_syscall
+224 common sigaltstack sys_sigaltstack
+225 common rt_sigreturn xtensa_rt_sigreturn
+226 common rt_sigaction sys_rt_sigaction
+227 common rt_sigprocmask sys_rt_sigprocmask
+228 common rt_sigpending sys_rt_sigpending
+229 common rt_sigtimedwait sys_rt_sigtimedwait_time32
+230 common rt_sigqueueinfo sys_rt_sigqueueinfo
+231 common rt_sigsuspend sys_rt_sigsuspend
+# Message
+232 common mq_open sys_mq_open
+233 common mq_unlink sys_mq_unlink
+234 common mq_timedsend sys_mq_timedsend_time32
+235 common mq_timedreceive sys_mq_timedreceive_time32
+236 common mq_notify sys_mq_notify
+237 common mq_getsetattr sys_mq_getsetattr
+238 common available238 sys_ni_syscall
+239 common io_setup sys_io_setup
+# IO
+240 common io_destroy sys_io_destroy
+241 common io_submit sys_io_submit
+242 common io_getevents sys_io_getevents_time32
+243 common io_cancel sys_io_cancel
+244 common clock_settime sys_clock_settime32
+245 common clock_gettime sys_clock_gettime32
+246 common clock_getres sys_clock_getres_time32
+247 common clock_nanosleep sys_clock_nanosleep_time32
+# Timer
+248 common timer_create sys_timer_create
+249 common timer_delete sys_timer_delete
+250 common timer_settime sys_timer_settime32
+251 common timer_gettime sys_timer_gettime32
+252 common timer_getoverrun sys_timer_getoverrun
+# System
+253 common reserved253 sys_ni_syscall
+254 common lookup_dcookie sys_ni_syscall
+255 common available255 sys_ni_syscall
+256 common add_key sys_add_key
+257 common request_key sys_request_key
+258 common keyctl sys_keyctl
+259 common available259 sys_ni_syscall
+260 common readahead sys_readahead
+261 common remap_file_pages sys_remap_file_pages
+262 common migrate_pages sys_migrate_pages
+263 common mbind sys_mbind
+264 common get_mempolicy sys_get_mempolicy
+265 common set_mempolicy sys_set_mempolicy
+266 common unshare sys_unshare
+267 common move_pages sys_move_pages
+268 common splice sys_splice
+269 common tee sys_tee
+270 common vmsplice sys_vmsplice
+271 common available271 sys_ni_syscall
+272 common pselect6 sys_pselect6_time32
+273 common ppoll sys_ppoll_time32
+274 common epoll_pwait sys_epoll_pwait
+275 common epoll_create1 sys_epoll_create1
+276 common inotify_init sys_inotify_init
+277 common inotify_add_watch sys_inotify_add_watch
+278 common inotify_rm_watch sys_inotify_rm_watch
+279 common inotify_init1 sys_inotify_init1
+280 common getcpu sys_getcpu
+281 common kexec_load sys_ni_syscall
+282 common ioprio_set sys_ioprio_set
+283 common ioprio_get sys_ioprio_get
+284 common set_robust_list sys_set_robust_list
+285 common get_robust_list sys_get_robust_list
+286 common available286 sys_ni_syscall
+287 common available287 sys_ni_syscall
+# Relative File Operations
+288 common openat sys_openat
+289 common mkdirat sys_mkdirat
+290 common mknodat sys_mknodat
+291 common unlinkat sys_unlinkat
+292 common renameat sys_renameat
+293 common linkat sys_linkat
+294 common symlinkat sys_symlinkat
+295 common readlinkat sys_readlinkat
+296 common utimensat sys_utimensat_time32
+297 common fchownat sys_fchownat
+298 common futimesat sys_futimesat_time32
+299 common fstatat64 sys_fstatat64
+300 common fchmodat sys_fchmodat
+301 common faccessat sys_faccessat
+302 common available302 sys_ni_syscall
+303 common available303 sys_ni_syscall
+304 common signalfd sys_signalfd
+# 305 was timerfd
+306 common eventfd sys_eventfd
+307 common recvmmsg sys_recvmmsg_time32
+308 common setns sys_setns
+309 common signalfd4 sys_signalfd4
+310 common dup3 sys_dup3
+311 common pipe2 sys_pipe2
+312 common timerfd_create sys_timerfd_create
+313 common timerfd_settime sys_timerfd_settime32
+314 common timerfd_gettime sys_timerfd_gettime32
+315 common available315 sys_ni_syscall
+316 common eventfd2 sys_eventfd2
+317 common preadv sys_preadv
+318 common pwritev sys_pwritev
+319 common available319 sys_ni_syscall
+320 common fanotify_init sys_fanotify_init
+321 common fanotify_mark sys_fanotify_mark
+322 common process_vm_readv sys_process_vm_readv
+323 common process_vm_writev sys_process_vm_writev
+324 common name_to_handle_at sys_name_to_handle_at
+325 common open_by_handle_at sys_open_by_handle_at
+326 common sync_file_range2 sys_sync_file_range2
+327 common perf_event_open sys_perf_event_open
+328 common rt_tgsigqueueinfo sys_rt_tgsigqueueinfo
+329 common clock_adjtime sys_clock_adjtime32
+330 common prlimit64 sys_prlimit64
+331 common kcmp sys_kcmp
+332 common finit_module sys_finit_module
+333 common accept4 sys_accept4
+334 common sched_setattr sys_sched_setattr
+335 common sched_getattr sys_sched_getattr
+336 common renameat2 sys_renameat2
+337 common seccomp sys_seccomp
+338 common getrandom sys_getrandom
+339 common memfd_create sys_memfd_create
+340 common bpf sys_bpf
+341 common execveat sys_execveat
+342 common userfaultfd sys_userfaultfd
+343 common membarrier sys_membarrier
+344 common mlock2 sys_mlock2
+345 common copy_file_range sys_copy_file_range
+346 common preadv2 sys_preadv2
+347 common pwritev2 sys_pwritev2
+348 common pkey_mprotect sys_pkey_mprotect
+349 common pkey_alloc sys_pkey_alloc
+350 common pkey_free sys_pkey_free
+351 common statx sys_statx
+352 common rseq sys_rseq
+# 353 through 402 are unassigned to sync up with generic numbers
+403 common clock_gettime64 sys_clock_gettime
+404 common clock_settime64 sys_clock_settime
+405 common clock_adjtime64 sys_clock_adjtime
+406 common clock_getres_time64 sys_clock_getres
+407 common clock_nanosleep_time64 sys_clock_nanosleep
+408 common timer_gettime64 sys_timer_gettime
+409 common timer_settime64 sys_timer_settime
+410 common timerfd_gettime64 sys_timerfd_gettime
+411 common timerfd_settime64 sys_timerfd_settime
+412 common utimensat_time64 sys_utimensat
+413 common pselect6_time64 sys_pselect6
+414 common ppoll_time64 sys_ppoll
+416 common io_pgetevents_time64 sys_io_pgetevents
+417 common recvmmsg_time64 sys_recvmmsg
+418 common mq_timedsend_time64 sys_mq_timedsend
+419 common mq_timedreceive_time64 sys_mq_timedreceive
+420 common semtimedop_time64 sys_semtimedop
+421 common rt_sigtimedwait_time64 sys_rt_sigtimedwait
+422 common futex_time64 sys_futex
+423 common sched_rr_get_interval_time64 sys_sched_rr_get_interval
+424 common pidfd_send_signal sys_pidfd_send_signal
+425 common io_uring_setup sys_io_uring_setup
+426 common io_uring_enter sys_io_uring_enter
+427 common io_uring_register sys_io_uring_register
+428 common open_tree sys_open_tree
+429 common move_mount sys_move_mount
+430 common fsopen sys_fsopen
+431 common fsconfig sys_fsconfig
+432 common fsmount sys_fsmount
+433 common fspick sys_fspick
+434 common pidfd_open sys_pidfd_open
+435 common clone3 sys_clone3
+436 common close_range sys_close_range
+437 common openat2 sys_openat2
+438 common pidfd_getfd sys_pidfd_getfd
+439 common faccessat2 sys_faccessat2
+440 common process_madvise sys_process_madvise
+441 common epoll_pwait2 sys_epoll_pwait2
+442 common mount_setattr sys_mount_setattr
+443 common quotactl_fd sys_quotactl_fd
+444 common landlock_create_ruleset sys_landlock_create_ruleset
+445 common landlock_add_rule sys_landlock_add_rule
+446 common landlock_restrict_self sys_landlock_restrict_self
+# 447 reserved for memfd_secret
+448 common process_mrelease sys_process_mrelease
+449 common futex_waitv sys_futex_waitv
+450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
+452 common fchmodat2 sys_fchmodat2
+453 common map_shadow_stack sys_map_shadow_stack
+454 common futex_wake sys_futex_wake
+455 common futex_wait sys_futex_wait
+456 common futex_requeue sys_futex_requeue
+457 common statmount sys_statmount
+458 common listmount sys_listmount
+459 common lsm_get_self_attr sys_lsm_get_self_attr
+460 common lsm_set_self_attr sys_lsm_set_self_attr
+461 common lsm_list_modules sys_lsm_list_modules
+462 common mseal sys_mseal
+463 common setxattrat sys_setxattrat
+464 common getxattrat sys_getxattrat
+465 common listxattrat sys_listxattrat
+466 common removexattrat sys_removexattrat
+467 common open_tree_attr sys_open_tree_attr
+468 common file_getattr sys_file_getattr
+469 common file_setattr sys_file_setattr
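
The added table uses the usual syscall *.tbl layout of <number> <abi> <name> <entry point>, and entries 403 and up mirror the generic 64-bit-time numbering so 32-bit xtensa userspace gets y2038-safe variants of the time-related calls. As a purely illustrative sketch of what that split means in practice (assuming a 32-bit target that uses this generic numbering; a libc built with 64-bit time_t normally hides this behind clock_gettime()):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Layout of the kernel's __kernel_timespec, which the *_time64 entry points expect. */
struct kernel_timespec64 {
	long long tv_sec;
	long long tv_nsec;
};

int main(void)
{
	struct kernel_timespec64 ts;

	/* 403 is clock_gettime64 in the table above; on other ABIs the
	 * number may be unassigned and the call fails with ENOSYS. */
	if (syscall(403, CLOCK_MONOTONIC, &ts) != 0) {
		perror("clock_gettime64");
		return 1;
	}
	printf("%lld.%09lld\n", ts.tv_sec, ts.tv_nsec);
	return 0;
}
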
diff --git a/tools/perf/arch/xtensa/util/Build b/tools/perf/arch/xtensa/util/Build
deleted file mode 100644
index e813e618954b..000000000000
--- a/tools/perf/arch/xtensa/util/Build
+++ /dev/null
@@ -1 +0,0 @@
-perf-$(CONFIG_DWARF) += dwarf-regs.o
diff --git a/tools/perf/arch/xtensa/util/dwarf-regs.c b/tools/perf/arch/xtensa/util/dwarf-regs.c
deleted file mode 100644
index 12f5457300f5..000000000000
--- a/tools/perf/arch/xtensa/util/dwarf-regs.c
+++ /dev/null
@@ -1,21 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Mapping of DWARF debug register numbers into register names.
- *
- * Copyright (c) 2015 Cadence Design Systems Inc.
- */
-
-#include <stddef.h>
-#include <dwarf-regs.h>
-
-#define XTENSA_MAX_REGS 16
-
-const char *xtensa_regs_table[XTENSA_MAX_REGS] = {
- "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",
- "a8", "a9", "a10", "a11", "a12", "a13", "a14", "a15",
-};
-
-const char *get_arch_regstr(unsigned int n)
-{
- return n < XTENSA_MAX_REGS ? xtensa_regs_table[n] : NULL;
-}
diff --git a/tools/perf/bench/Build b/tools/perf/bench/Build
index c2ab30907ae7..b558ab98719f 100644
--- a/tools/perf/bench/Build
+++ b/tools/perf/bench/Build
@@ -1,25 +1,26 @@
-perf-y += sched-messaging.o
-perf-y += sched-pipe.o
-perf-y += sched-seccomp-notify.o
-perf-y += syscall.o
-perf-y += mem-functions.o
-perf-y += futex-hash.o
-perf-y += futex-wake.o
-perf-y += futex-wake-parallel.o
-perf-y += futex-requeue.o
-perf-y += futex-lock-pi.o
-perf-y += epoll-wait.o
-perf-y += epoll-ctl.o
-perf-y += synthesize.o
-perf-y += kallsyms-parse.o
-perf-y += find-bit-bench.o
-perf-y += inject-buildid.o
-perf-y += evlist-open-close.o
-perf-y += breakpoint.o
-perf-y += pmu-scan.o
-perf-y += uprobe.o
+perf-bench-y += sched-messaging.o
+perf-bench-y += sched-pipe.o
+perf-bench-y += sched-seccomp-notify.o
+perf-bench-y += syscall.o
+perf-bench-y += mem-functions.o
+perf-bench-y += futex.o
+perf-bench-y += futex-hash.o
+perf-bench-y += futex-wake.o
+perf-bench-y += futex-wake-parallel.o
+perf-bench-y += futex-requeue.o
+perf-bench-y += futex-lock-pi.o
+perf-bench-y += epoll-wait.o
+perf-bench-y += epoll-ctl.o
+perf-bench-y += synthesize.o
+perf-bench-y += kallsyms-parse.o
+perf-bench-y += find-bit-bench.o
+perf-bench-y += inject-buildid.o
+perf-bench-y += evlist-open-close.o
+perf-bench-y += breakpoint.o
+perf-bench-y += pmu-scan.o
+perf-bench-y += uprobe.o
-perf-$(CONFIG_X86_64) += mem-memcpy-x86-64-asm.o
-perf-$(CONFIG_X86_64) += mem-memset-x86-64-asm.o
+perf-bench-$(CONFIG_X86_64) += mem-memcpy-x86-64-asm.o
+perf-bench-$(CONFIG_X86_64) += mem-memset-x86-64-asm.o
-perf-$(CONFIG_NUMA) += numa.o
+perf-bench-$(CONFIG_NUMA) += numa.o
diff --git a/tools/perf/bench/bench.h b/tools/perf/bench/bench.h
index faa18e6d2467..8519eb5a42fa 100644
--- a/tools/perf/bench/bench.h
+++ b/tools/perf/bench/bench.h
@@ -28,6 +28,7 @@ int bench_syscall_fork(int argc, const char **argv);
int bench_syscall_execve(int argc, const char **argv);
int bench_mem_memcpy(int argc, const char **argv);
int bench_mem_memset(int argc, const char **argv);
+int bench_mem_mmap(int argc, const char **argv);
int bench_mem_find_bit(int argc, const char **argv);
int bench_futex_hash(int argc, const char **argv);
int bench_futex_wake(int argc, const char **argv);
@@ -46,6 +47,8 @@ int bench_breakpoint_enable(int argc, const char **argv);
int bench_uprobe_baseline(int argc, const char **argv);
int bench_uprobe_empty(int argc, const char **argv);
int bench_uprobe_trace_printk(int argc, const char **argv);
+int bench_uprobe_empty_ret(int argc, const char **argv);
+int bench_uprobe_trace_printk_ret(int argc, const char **argv);
int bench_pmu_scan(int argc, const char **argv);
#define BENCH_FORMAT_DEFAULT_STR "default"
diff --git a/tools/perf/bench/epoll-ctl.c b/tools/perf/bench/epoll-ctl.c
index d3db73dac66a..d66d852b90e4 100644
--- a/tools/perf/bench/epoll-ctl.c
+++ b/tools/perf/bench/epoll-ctl.c
@@ -232,7 +232,7 @@ static int do_threads(struct worker *worker, struct perf_cpu_map *cpu)
if (!noaffinity)
pthread_attr_init(&thread_attr);
- nrcpus = perf_cpu_map__nr(cpu);
+ nrcpus = cpu__max_cpu().cpu;
cpuset = CPU_ALLOC(nrcpus);
BUG_ON(!cpuset);
size = CPU_ALLOC_SIZE(nrcpus);
diff --git a/tools/perf/bench/epoll-wait.c b/tools/perf/bench/epoll-wait.c
index 06bb3187660a..20fe4f72b4af 100644
--- a/tools/perf/bench/epoll-wait.c
+++ b/tools/perf/bench/epoll-wait.c
@@ -309,7 +309,7 @@ static int do_threads(struct worker *worker, struct perf_cpu_map *cpu)
if (!noaffinity)
pthread_attr_init(&thread_attr);
- nrcpus = perf_cpu_map__nr(cpu);
+ nrcpus = cpu__max_cpu().cpu;
cpuset = CPU_ALLOC(nrcpus);
BUG_ON(!cpuset);
size = CPU_ALLOC_SIZE(nrcpus);
@@ -420,7 +420,12 @@ static int cmpworker(const void *p1, const void *p2)
struct worker *w1 = (struct worker *) p1;
struct worker *w2 = (struct worker *) p2;
- return w1->tid > w2->tid;
+
+ if (w1->tid > w2->tid)
+ return 1;
+ if (w1->tid < w2->tid)
+ return -1;
+ return 0;
}
int bench_epoll_wait(int argc, const char **argv)
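
The cmpworker() hunk above is a correctness fix, not a cleanup: qsort() comparators must return a negative, zero, or positive value, while the old "w1->tid > w2->tid" collapses both "less than" and "equal" into 0 and leaves the final ordering unspecified. A branch-free way to express the same fix, shown here as a reduced sketch with a stand-in struct:

#include <stdlib.h>

struct worker { int tid; };	/* reduced stand-in for the benchmark's worker */

/* (a > b) - (a < b) yields -1, 0 or 1 without the overflow risk of "a - b". */
static int cmpworker(const void *p1, const void *p2)
{
	const struct worker *w1 = p1, *w2 = p2;

	return (w1->tid > w2->tid) - (w1->tid < w2->tid);
}

/* usage: qsort(workers, nthreads, sizeof(*workers), cmpworker); */
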
diff --git a/tools/perf/bench/evlist-open-close.c b/tools/perf/bench/evlist-open-close.c
index 5a27691469ed..bfaf50e4e519 100644
--- a/tools/perf/bench/evlist-open-close.c
+++ b/tools/perf/bench/evlist-open-close.c
@@ -46,25 +46,6 @@ static struct record_opts opts = {
.ctl_fd_ack = -1,
};
-static const struct option options[] = {
- OPT_STRING('e', "event", &event_string, "event", "event selector. use 'perf list' to list available events"),
- OPT_INTEGER('n', "nr-events", &nr_events,
- "number of dummy events to create (default 1). If used with -e, it clones those events n times (1 = no change)"),
- OPT_INTEGER('i', "iterations", &iterations, "Number of iterations used to compute average (default=100)"),
- OPT_BOOLEAN('a', "all-cpus", &opts.target.system_wide, "system-wide collection from all CPUs"),
- OPT_STRING('C', "cpu", &opts.target.cpu_list, "cpu", "list of cpus where to open events"),
- OPT_STRING('p', "pid", &opts.target.pid, "pid", "record events on existing process id"),
- OPT_STRING('t', "tid", &opts.target.tid, "tid", "record events on existing thread id"),
- OPT_STRING('u', "uid", &opts.target.uid_str, "user", "user to profile"),
- OPT_BOOLEAN(0, "per-thread", &opts.target.per_thread, "use per-thread mmaps"),
- OPT_END()
-};
-
-static const char *const bench_usage[] = {
- "perf bench internals evlist-open-close <options>",
- NULL
-};
-
static int evlist__count_evsel_fds(struct evlist *evlist)
{
struct evsel *evsel;
@@ -76,7 +57,7 @@ static int evlist__count_evsel_fds(struct evlist *evlist)
return cnt;
}
-static struct evlist *bench__create_evlist(char *evstr)
+static struct evlist *bench__create_evlist(char *evstr, const char *uid_str)
{
struct parse_events_error err;
struct evlist *evlist = evlist__new();
@@ -97,6 +78,18 @@ static struct evlist *bench__create_evlist(char *evstr)
goto out_delete_evlist;
}
parse_events_error__exit(&err);
+ if (uid_str) {
+ uid_t uid = parse_uid(uid_str);
+
+ if (uid == UINT_MAX) {
+ pr_err("Invalid User: %s", uid_str);
+ ret = -EINVAL;
+ goto out_delete_evlist;
+ }
+ ret = parse_uid_filter(evlist, uid);
+ if (ret)
+ goto out_delete_evlist;
+ }
ret = evlist__create_maps(evlist, &opts.target);
if (ret < 0) {
pr_err("Not enough memory to create thread/cpu maps\n");
@@ -136,10 +129,10 @@ static int bench__do_evlist_open_close(struct evlist *evlist)
return 0;
}
-static int bench_evlist_open_close__run(char *evstr)
+static int bench_evlist_open_close__run(char *evstr, const char *uid_str)
{
// used to print statistics only
- struct evlist *evlist = bench__create_evlist(evstr);
+ struct evlist *evlist = bench__create_evlist(evstr, uid_str);
double time_average, time_stddev;
struct timeval start, end, diff;
struct stats time_stats;
@@ -161,7 +154,7 @@ static int bench_evlist_open_close__run(char *evstr)
for (i = 0; i < iterations; i++) {
pr_debug("Started iteration %d\n", i);
- evlist = bench__create_evlist(evstr);
+ evlist = bench__create_evlist(evstr, uid_str);
if (!evlist)
return -ENOMEM;
@@ -225,6 +218,30 @@ out_error:
int bench_evlist_open_close(int argc, const char **argv)
{
+ const char *uid_str = NULL;
+ const struct option options[] = {
+ OPT_STRING('e', "event", &event_string, "event",
+ "event selector. use 'perf list' to list available events"),
+ OPT_INTEGER('n', "nr-events", &nr_events,
+ "number of dummy events to create (default 1). If used with -e, it clones those events n times (1 = no change)"),
+ OPT_INTEGER('i', "iterations", &iterations,
+ "Number of iterations used to compute average (default=100)"),
+ OPT_BOOLEAN('a', "all-cpus", &opts.target.system_wide,
+ "system-wide collection from all CPUs"),
+ OPT_STRING('C', "cpu", &opts.target.cpu_list, "cpu",
+ "list of cpus where to open events"),
+ OPT_STRING('p', "pid", &opts.target.pid, "pid",
+ "record events on existing process id"),
+ OPT_STRING('t', "tid", &opts.target.tid, "tid",
+ "record events on existing thread id"),
+ OPT_STRING('u', "uid", &uid_str, "user", "user to profile"),
+ OPT_BOOLEAN(0, "per-thread", &opts.target.per_thread, "use per-thread mmaps"),
+ OPT_END()
+ };
+ const char *const bench_usage[] = {
+ "perf bench internals evlist-open-close <options>",
+ NULL
+ };
char *evstr, errbuf[BUFSIZ];
int err;
@@ -241,15 +258,8 @@ int bench_evlist_open_close(int argc, const char **argv)
goto out;
}
- err = target__parse_uid(&opts.target);
- if (err) {
- target__strerror(&opts.target, err, errbuf, sizeof(errbuf));
- pr_err("%s", errbuf);
- goto out;
- }
-
- /* Enable ignoring missing threads when -u/-p option is defined. */
- opts.ignore_missing_thread = opts.target.uid != UINT_MAX || opts.target.pid;
+ /* Enable ignoring missing threads when -p option is defined. */
+ opts.ignore_missing_thread = opts.target.pid;
evstr = bench__repeat_event_string(event_string, nr_events);
if (!evstr) {
@@ -257,7 +267,7 @@ int bench_evlist_open_close(int argc, const char **argv)
goto out;
}
- err = bench_evlist_open_close__run(evstr);
+ err = bench_evlist_open_close__run(evstr, uid_str);
free(evstr);
out:
diff --git a/tools/perf/bench/find-bit-bench.c b/tools/perf/bench/find-bit-bench.c
index 7e25b0e413f6..e697c20951bc 100644
--- a/tools/perf/bench/find-bit-bench.c
+++ b/tools/perf/bench/find-bit-bench.c
@@ -37,7 +37,7 @@ static noinline void workload(int val)
accumulator++;
}
-#if (defined(__i386__) || defined(__x86_64__)) && defined(__GCC_ASM_FLAG_OUTPUTS__)
+#if defined(__i386__) || defined(__x86_64__)
static bool asm_test_bit(long nr, const unsigned long *addr)
{
bool oldbit;
diff --git a/tools/perf/bench/futex-hash.c b/tools/perf/bench/futex-hash.c
index 0c69d20efa32..7e29f04da744 100644
--- a/tools/perf/bench/futex-hash.c
+++ b/tools/perf/bench/futex-hash.c
@@ -21,6 +21,7 @@
#include <linux/zalloc.h>
#include <sys/time.h>
#include <sys/mman.h>
+#include <sys/prctl.h>
#include <perf/cpumap.h>
#include "../util/mutex.h"
@@ -50,9 +51,11 @@ struct worker {
static struct bench_futex_parameters params = {
.nfutexes = 1024,
.runtime = 10,
+ .nbuckets = -1,
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", &params.runtime, "Specify runtime (in seconds)"),
OPT_UINTEGER('f', "futexes", &params.nfutexes, "Specify amount of futexes per threads"),
@@ -118,6 +121,7 @@ static void print_summary(void)
printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n",
!params.silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg),
(int)bench__runtime.tv_sec);
+ futex_print_nbuckets(&params);
}
int bench_futex_hash(int argc, const char **argv)
@@ -161,6 +165,7 @@ int bench_futex_hash(int argc, const char **argv)
if (!params.fshared)
futex_flag = FUTEX_PRIVATE_FLAG;
+ futex_set_nbuckets_param(&params);
printf("Run summary [PID %d]: %d threads, each operating on %d [%s] futexes for %d secs.\n\n",
getpid(), params.nthreads, params.nfutexes, params.fshared ? "shared":"private", params.runtime);
@@ -174,7 +179,7 @@ int bench_futex_hash(int argc, const char **argv)
pthread_attr_init(&thread_attr);
gettimeofday(&bench__start, NULL);
- nrcpus = perf_cpu_map__nr(cpu);
+ nrcpus = cpu__max_cpu().cpu;
cpuset = CPU_ALLOC(nrcpus);
BUG_ON(!cpuset);
size = CPU_ALLOC_SIZE(nrcpus);
diff --git a/tools/perf/bench/futex-lock-pi.c b/tools/perf/bench/futex-lock-pi.c
index 7a4973346180..40640b674427 100644
--- a/tools/perf/bench/futex-lock-pi.c
+++ b/tools/perf/bench/futex-lock-pi.c
@@ -41,10 +41,12 @@ static struct stats throughput_stats;
static struct cond thread_parent, thread_worker;
static struct bench_futex_parameters params = {
+ .nbuckets = -1,
.runtime = 10,
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", &params.runtime, "Specify runtime (in seconds)"),
OPT_BOOLEAN( 'M', "multi", &params.multi, "Use multiple futexes"),
@@ -67,6 +69,7 @@ static void print_summary(void)
printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n",
!params.silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg),
(int)bench__runtime.tv_sec);
+ futex_print_nbuckets(&params);
}
static void toggle_done(int sig __maybe_unused,
@@ -122,7 +125,7 @@ static void create_threads(struct worker *w, struct perf_cpu_map *cpu)
{
cpu_set_t *cpuset;
unsigned int i;
- int nrcpus = perf_cpu_map__nr(cpu);
+ int nrcpus = cpu__max_cpu().cpu;
size_t size;
threads_starting = params.nthreads;
@@ -203,6 +206,7 @@ int bench_futex_lock_pi(int argc, const char **argv)
mutex_init(&thread_lock);
cond_init(&thread_parent);
cond_init(&thread_worker);
+ futex_set_nbuckets_param(&params);
threads_starting = params.nthreads;
gettimeofday(&bench__start, NULL);
diff --git a/tools/perf/bench/futex-requeue.c b/tools/perf/bench/futex-requeue.c
index d9ad736c1a3e..0748b0fd689e 100644
--- a/tools/perf/bench/futex-requeue.c
+++ b/tools/perf/bench/futex-requeue.c
@@ -42,6 +42,7 @@ static unsigned int threads_starting;
static int futex_flag = 0;
static struct bench_futex_parameters params = {
+ .nbuckets = -1,
/*
* How many tasks to requeue at a time.
* Default to 1 in order to make the kernel work more.
@@ -50,6 +51,7 @@ static struct bench_futex_parameters params = {
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('q', "nrequeue", &params.nrequeue, "Specify amount of threads to requeue at once"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
@@ -77,6 +79,7 @@ static void print_summary(void)
params.nthreads,
requeuetime_avg / USEC_PER_MSEC,
rel_stddev_stats(requeuetime_stddev, requeuetime_avg));
+ futex_print_nbuckets(&params);
}
static void *workerfn(void *arg __maybe_unused)
@@ -125,7 +128,7 @@ static void block_threads(pthread_t *w, struct perf_cpu_map *cpu)
{
cpu_set_t *cpuset;
unsigned int i;
- int nrcpus = perf_cpu_map__nr(cpu);
+ int nrcpus = cpu__max_cpu().cpu;
size_t size;
threads_starting = params.nthreads;
@@ -204,6 +207,8 @@ int bench_futex_requeue(int argc, const char **argv)
if (params.broadcast)
params.nrequeue = params.nthreads;
+ futex_set_nbuckets_param(&params);
+
printf("Run summary [PID %d]: Requeuing %d threads (from [%s] %p to %s%p), "
"%d at a time.\n\n", getpid(), params.nthreads,
params.fshared ? "shared":"private", &futex1,
diff --git a/tools/perf/bench/futex-wake-parallel.c b/tools/perf/bench/futex-wake-parallel.c
index b66df553e561..6aede7c46b33 100644
--- a/tools/perf/bench/futex-wake-parallel.c
+++ b/tools/perf/bench/futex-wake-parallel.c
@@ -57,9 +57,12 @@ static struct stats waketime_stats, wakeup_stats;
static unsigned int threads_starting;
static int futex_flag = 0;
-static struct bench_futex_parameters params;
+static struct bench_futex_parameters params = {
+ .nbuckets = -1,
+};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakers", &params.nwakes, "Specify amount of waking threads"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
@@ -149,7 +152,7 @@ static void block_threads(pthread_t *w, struct perf_cpu_map *cpu)
{
cpu_set_t *cpuset;
unsigned int i;
- int nrcpus = perf_cpu_map__nr(cpu);
+ int nrcpus = cpu__max_cpu().cpu;
size_t size;
threads_starting = params.nthreads;
@@ -218,6 +221,7 @@ static void print_summary(void)
params.nthreads,
waketime_avg / USEC_PER_MSEC,
rel_stddev_stats(waketime_stddev, waketime_avg));
+ futex_print_nbuckets(&params);
}
@@ -291,6 +295,8 @@ int bench_futex_wake_parallel(int argc, const char **argv)
if (!params.fshared)
futex_flag = FUTEX_PRIVATE_FLAG;
+ futex_set_nbuckets_param(&params);
+
printf("Run summary [PID %d]: blocking on %d threads (at [%s] "
"futex %p), %d threads waking up %d at a time.\n\n",
getpid(), params.nthreads, params.fshared ? "shared":"private",
@@ -318,7 +324,7 @@ int bench_futex_wake_parallel(int argc, const char **argv)
cond_broadcast(&thread_worker);
mutex_unlock(&thread_lock);
- usleep(100000);
+ usleep(200000);
/* Ok, all threads are patiently blocked, start waking folks up */
wakeup_threads(waking_worker);
diff --git a/tools/perf/bench/futex-wake.c b/tools/perf/bench/futex-wake.c
index 690fd6d3da13..a31fc1563862 100644
--- a/tools/perf/bench/futex-wake.c
+++ b/tools/perf/bench/futex-wake.c
@@ -42,6 +42,7 @@ static unsigned int threads_starting;
static int futex_flag = 0;
static struct bench_futex_parameters params = {
+ .nbuckets = -1,
/*
* How many wakeups to do at a time.
* Default to 1 in order to make the kernel work more.
@@ -50,6 +51,7 @@ static struct bench_futex_parameters params = {
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakes", &params.nwakes, "Specify amount of threads to wake at once"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
@@ -93,6 +95,7 @@ static void print_summary(void)
params.nthreads,
waketime_avg / USEC_PER_MSEC,
rel_stddev_stats(waketime_stddev, waketime_avg));
+ futex_print_nbuckets(&params);
}
static void block_threads(pthread_t *w, struct perf_cpu_map *cpu)
@@ -100,7 +103,7 @@ static void block_threads(pthread_t *w, struct perf_cpu_map *cpu)
cpu_set_t *cpuset;
unsigned int i;
size_t size;
- int nrcpus = perf_cpu_map__nr(cpu);
+ int nrcpus = cpu__max_cpu().cpu;
threads_starting = params.nthreads;
cpuset = CPU_ALLOC(nrcpus);
diff --git a/tools/perf/bench/futex.c b/tools/perf/bench/futex.c
new file mode 100644
index 000000000000..1481196a22f0
--- /dev/null
+++ b/tools/perf/bench/futex.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <err.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/prctl.h>
+
+#include "futex.h"
+
+#ifndef PR_FUTEX_HASH
+#define PR_FUTEX_HASH 78
+# define PR_FUTEX_HASH_SET_SLOTS 1
+# define PR_FUTEX_HASH_GET_SLOTS 2
+#endif // PR_FUTEX_HASH
+
+void futex_set_nbuckets_param(struct bench_futex_parameters *params)
+{
+ int ret;
+
+ if (params->nbuckets < 0)
+ return;
+
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, params->nbuckets, 0);
+ if (ret) {
+ printf("Requesting %d hash buckets failed: %d/%m\n",
+ params->nbuckets, ret);
+ err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
+ }
+}
+
+void futex_print_nbuckets(struct bench_futex_parameters *params)
+{
+ char *futex_hash_mode;
+ int ret;
+
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS);
+ if (params->nbuckets >= 0) {
+ if (ret != params->nbuckets) {
+ if (ret < 0) {
+ printf("Can't query number of buckets: %m\n");
+ err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
+ }
+ printf("Requested number of hash buckets is not currently used.\n");
+ printf("Requested: %d, in use: %d\n", params->nbuckets, ret);
+ err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
+ }
+ if (params->nbuckets == 0)
+ ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
+ else
+ ret = asprintf(&futex_hash_mode, "Futex hashing: %d hash buckets",
+ params->nbuckets);
+ } else {
+ if (ret <= 0) {
+ ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
+ } else {
+ ret = asprintf(&futex_hash_mode, "Futex hashing: auto resized to %d buckets",
+ ret);
+ }
+ }
+ if (ret < 0)
+ err(EXIT_FAILURE, "ENOMEM, futex_hash_mode");
+ printf("%s\n", futex_hash_mode);
+ free(futex_hash_mode);
+}
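
The new bench/futex.c helpers are thin wrappers around the kernel's futex-hash prctl. A minimal standalone sketch of that interface, assuming a kernel that implements PR_FUTEX_HASH (operation 78) with the SET/GET_SLOTS sub-commands defined above:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_FUTEX_HASH
#define PR_FUTEX_HASH		78
# define PR_FUTEX_HASH_SET_SLOTS 1
# define PR_FUTEX_HASH_GET_SLOTS 2
#endif

int main(void)
{
	/* Request 16 private futex hash buckets for this process. */
	if (prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, 16, 0))
		perror("PR_FUTEX_HASH_SET_SLOTS");

	/* Read back what the kernel actually uses (0 means the global hash). */
	int slots = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS);

	printf("futex hash buckets in use: %d\n", slots);
	return 0;
}
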
diff --git a/tools/perf/bench/futex.h b/tools/perf/bench/futex.h
index ebdc2b032afc..fcb72d682cf8 100644
--- a/tools/perf/bench/futex.h
+++ b/tools/perf/bench/futex.h
@@ -8,6 +8,7 @@
#ifndef _FUTEX_H
#define _FUTEX_H
+#include <stdbool.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
@@ -25,6 +26,7 @@ struct bench_futex_parameters {
unsigned int nfutexes;
unsigned int nwakes;
unsigned int nrequeue;
+ int nbuckets;
};
/**
@@ -143,4 +145,7 @@ futex_cmp_requeue_pi(u_int32_t *uaddr, u_int32_t val, u_int32_t *uaddr2,
val, opflags);
}
+void futex_set_nbuckets_param(struct bench_futex_parameters *params);
+void futex_print_nbuckets(struct bench_futex_parameters *params);
+
#endif /* _FUTEX_H */
diff --git a/tools/perf/bench/inject-buildid.c b/tools/perf/bench/inject-buildid.c
index 49331743c743..12387ea88b9a 100644
--- a/tools/perf/bench/inject-buildid.c
+++ b/tools/perf/bench/inject-buildid.c
@@ -52,7 +52,7 @@ struct bench_dso {
static int nr_dsos;
static struct bench_dso *dsos;
-extern int cmd_inject(int argc, const char *argv[]);
+extern int main(int argc, const char **argv);
static const struct option options[] = {
OPT_UINTEGER('i', "iterations", &iterations,
@@ -80,12 +80,12 @@ static int add_dso(const char *fpath, const struct stat *sb __maybe_unused,
int typeflag, struct FTW *ftwbuf __maybe_unused)
{
struct bench_dso *dso = &dsos[nr_dsos];
- struct build_id bid;
+ struct build_id bid = { .size = 0, };
if (typeflag == FTW_D || typeflag == FTW_SL)
return 0;
- if (filename__read_build_id(fpath, &bid) < 0)
+ if (filename__read_build_id(fpath, &bid, /*block=*/true) < 0)
return 0;
dso->name = realpath(fpath, NULL);
@@ -294,7 +294,7 @@ static int setup_injection(struct bench_data *data, bool build_id_all)
if (data->pid == 0) {
const char **inject_argv;
- int inject_argc = 2;
+ int inject_argc = 3;
close(data->input_pipe[1]);
close(data->output_pipe[0]);
@@ -318,15 +318,16 @@ static int setup_injection(struct bench_data *data, bool build_id_all)
if (inject_argv == NULL)
exit(1);
- inject_argv[0] = strdup("inject");
- inject_argv[1] = strdup("-b");
+ inject_argv[0] = strdup("perf");
+ inject_argv[1] = strdup("inject");
+ inject_argv[2] = strdup("-b");
if (build_id_all)
- inject_argv[2] = strdup("--buildid-all");
+ inject_argv[3] = strdup("--buildid-all");
/* signal that we're ready to go */
close(ready_pipe[1]);
- cmd_inject(inject_argc, inject_argv);
+ main(inject_argc, inject_argv);
exit(0);
}
@@ -362,7 +363,7 @@ static int inject_build_id(struct bench_data *data, u64 *max_rss)
return -1;
for (i = 0; i < nr_mmaps; i++) {
- int idx = rand() % (nr_dsos - 1);
+ int idx = rand() % nr_dsos;
struct bench_dso *dso = &dsos[idx];
u64 timestamp = rand() % 1000000;
diff --git a/tools/perf/bench/mem-functions.c b/tools/perf/bench/mem-functions.c
index 19d45c377ac1..2908a3a796c9 100644
--- a/tools/perf/bench/mem-functions.c
+++ b/tools/perf/bench/mem-functions.c
@@ -22,27 +22,39 @@
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
+#include <sys/mman.h>
#include <errno.h>
#include <linux/time64.h>
-#include <linux/zalloc.h>
+#include <linux/log2.h>
#define K 1024
+#define PAGE_SHIFT_4KB 12
+#define PAGE_SHIFT_2MB 21
+#define PAGE_SHIFT_1GB 30
+
static const char *size_str = "1MB";
static const char *function_str = "all";
-static int nr_loops = 1;
+static const char *page_size_str = "4KB";
+static const char *chunk_size_str = "0";
+static unsigned int nr_loops = 1;
static bool use_cycles;
static int cycles_fd;
+static unsigned int seed;
-static const struct option options[] = {
+static const struct option bench_common_options[] = {
OPT_STRING('s', "size", &size_str, "1MB",
"Specify the size of the memory buffers. "
"Available units: B, KB, MB, GB and TB (case insensitive)"),
+ OPT_STRING('p', "page", &page_size_str, "4KB",
+ "Specify page-size for mapping memory buffers. "
+ "Available sizes: 4KB, 2MB, 1GB (case insensitive)"),
+
OPT_STRING('f', "function", &function_str, "all",
"Specify the function to run, \"all\" runs all available functions, \"help\" lists them"),
- OPT_INTEGER('l', "nr_loops", &nr_loops,
+ OPT_UINTEGER('l', "nr_loops", &nr_loops,
"Specify the number of loops to run. (default: 1)"),
OPT_BOOLEAN('c', "cycles", &use_cycles,
@@ -51,15 +63,56 @@ static const struct option options[] = {
OPT_END()
};
+static const struct option bench_mem_options[] = {
+ OPT_STRING('k', "chunk", &chunk_size_str, "0",
+ "Specify the chunk-size for each invocation. "
+ "Available units: B, KB, MB, GB and TB (case insensitive)"),
+ OPT_PARENT(bench_common_options),
+ OPT_END()
+};
+
+union bench_clock {
+ u64 cycles;
+ struct timeval tv;
+};
+
+struct bench_params {
+ size_t size;
+ size_t size_total;
+ size_t chunk_size;
+ unsigned int nr_loops;
+ unsigned int page_shift;
+ unsigned int seed;
+};
+
+struct bench_mem_info {
+ const struct function *functions;
+ int (*do_op)(const struct function *r, struct bench_params *p,
+ void *src, void *dst, union bench_clock *rt);
+ const char *const *usage;
+ const struct option *options;
+ bool alloc_src;
+};
+
+typedef bool (*mem_init_t)(struct bench_mem_info *, struct bench_params *,
+ void **, void **);
+typedef void (*mem_fini_t)(struct bench_mem_info *, struct bench_params *,
+ void **, void **);
typedef void *(*memcpy_t)(void *, const void *, size_t);
typedef void *(*memset_t)(void *, int, size_t);
+typedef void (*mmap_op_t)(void *, size_t, unsigned int, bool);
struct function {
const char *name;
const char *desc;
- union {
- memcpy_t memcpy;
- memset_t memset;
+ struct {
+ mem_init_t init;
+ mem_fini_t fini;
+ union {
+ memcpy_t memcpy;
+ memset_t memset;
+ mmap_op_t mmap_op;
+ };
} fn;
};
@@ -91,6 +144,34 @@ static u64 get_cycles(void)
return clk;
}
+static void clock_get(union bench_clock *t)
+{
+ if (use_cycles)
+ t->cycles = get_cycles();
+ else
+ BUG_ON(gettimeofday(&t->tv, NULL));
+}
+
+static union bench_clock clock_diff(union bench_clock *s, union bench_clock *e)
+{
+ union bench_clock t;
+
+ if (use_cycles)
+ t.cycles = e->cycles - s->cycles;
+ else
+ timersub(&e->tv, &s->tv, &t.tv);
+
+ return t;
+}
+
+static void clock_accum(union bench_clock *a, union bench_clock *b)
+{
+ if (use_cycles)
+ a->cycles += b->cycles;
+ else
+ timeradd(&a->tv, &b->tv, &a->tv);
+}
+
static double timeval2double(struct timeval *ts)
{
return (double)ts->tv_sec + (double)ts->tv_usec / (double)USEC_PER_SEC;
@@ -107,54 +188,40 @@ static double timeval2double(struct timeval *ts)
printf(" %14lf GB/sec\n", x / K / K / K); \
} while (0)
-struct bench_mem_info {
- const struct function *functions;
- u64 (*do_cycles)(const struct function *r, size_t size, void *src, void *dst);
- double (*do_gettimeofday)(const struct function *r, size_t size, void *src, void *dst);
- const char *const *usage;
- bool alloc_src;
-};
-
-static void __bench_mem_function(struct bench_mem_info *info, int r_idx, size_t size, double size_total)
+static void __bench_mem_function(struct bench_mem_info *info, struct bench_params *p,
+ int r_idx)
{
const struct function *r = &info->functions[r_idx];
double result_bps = 0.0;
- u64 result_cycles = 0;
- void *src = NULL, *dst = zalloc(size);
+ union bench_clock rt = { 0 };
+ void *src = NULL, *dst = NULL;
printf("# function '%s' (%s)\n", r->name, r->desc);
- if (dst == NULL)
- goto out_alloc_failed;
-
- if (info->alloc_src) {
- src = zalloc(size);
- if (src == NULL)
- goto out_alloc_failed;
- }
+ if (r->fn.init && r->fn.init(info, p, &src, &dst))
+ goto out_init_failed;
if (bench_format == BENCH_FORMAT_DEFAULT)
printf("# Copying %s bytes ...\n\n", size_str);
- if (use_cycles) {
- result_cycles = info->do_cycles(r, size, src, dst);
- } else {
- result_bps = info->do_gettimeofday(r, size, src, dst);
- }
+ if (info->do_op(r, p, src, dst, &rt))
+ goto out_test_failed;
switch (bench_format) {
case BENCH_FORMAT_DEFAULT:
if (use_cycles) {
- printf(" %14lf cycles/byte\n", (double)result_cycles/size_total);
+ printf(" %14lf cycles/byte\n", (double)rt.cycles/(double)p->size_total);
} else {
+ result_bps = (double)p->size_total/timeval2double(&rt.tv);
print_bps(result_bps);
}
break;
case BENCH_FORMAT_SIMPLE:
if (use_cycles) {
- printf("%lf\n", (double)result_cycles/size_total);
+ printf("%lf\n", (double)rt.cycles/(double)p->size_total);
} else {
+ result_bps = (double)p->size_total/timeval2double(&rt.tv);
printf("%lf\n", result_bps);
}
break;
@@ -164,22 +231,23 @@ static void __bench_mem_function(struct bench_mem_info *info, int r_idx, size_t
break;
}
+out_test_failed:
out_free:
- free(src);
- free(dst);
+ if (r->fn.fini) r->fn.fini(info, p, &src, &dst);
return;
-out_alloc_failed:
- printf("# Memory allocation failed - maybe size (%s) is too large?\n", size_str);
+out_init_failed:
+ printf("# Memory allocation failed - maybe size (%s) %s?\n", size_str,
+ p->page_shift != PAGE_SHIFT_4KB ? "has insufficient hugepages" : "is too large");
goto out_free;
}
static int bench_mem_common(int argc, const char **argv, struct bench_mem_info *info)
{
int i;
- size_t size;
- double size_total;
+ struct bench_params p = { 0 };
+ unsigned int page_size;
- argc = parse_options(argc, argv, options, info->usage, 0);
+ argc = parse_options(argc, argv, info->options, info->usage, 0);
if (use_cycles) {
i = init_cycles();
@@ -189,17 +257,37 @@ static int bench_mem_common(int argc, const char **argv, struct bench_mem_info *
}
}
- size = (size_t)perf_atoll((char *)size_str);
- size_total = (double)size * nr_loops;
+ p.nr_loops = nr_loops;
+ p.size = (size_t)perf_atoll((char *)size_str);
- if ((s64)size <= 0) {
+ if ((s64)p.size <= 0) {
fprintf(stderr, "Invalid size:%s\n", size_str);
return 1;
}
+ p.size_total = p.size * p.nr_loops;
+
+ p.chunk_size = (size_t)perf_atoll((char *)chunk_size_str);
+ if ((s64)p.chunk_size < 0 || (s64)p.chunk_size > (s64)p.size) {
+ fprintf(stderr, "Invalid chunk_size:%s\n", chunk_size_str);
+ return 1;
+ }
+ if (!p.chunk_size)
+ p.chunk_size = p.size;
+
+ page_size = (unsigned int)perf_atoll((char *)page_size_str);
+ if (page_size != (1 << PAGE_SHIFT_4KB) &&
+ page_size != (1 << PAGE_SHIFT_2MB) &&
+ page_size != (1 << PAGE_SHIFT_1GB)) {
+ fprintf(stderr, "Invalid page-size:%s\n", page_size_str);
+ return 1;
+ }
+ p.page_shift = ilog2(page_size);
+
+ p.seed = seed;
if (!strncmp(function_str, "all", 3)) {
for (i = 0; info->functions[i].name; i++)
- __bench_mem_function(info, i, size, size_total);
+ __bench_mem_function(info, &p, i);
return 0;
}
@@ -218,7 +306,7 @@ static int bench_mem_common(int argc, const char **argv, struct bench_mem_info *
return 1;
}
- __bench_mem_function(info, i, size, size_total);
+ __bench_mem_function(info, &p, i);
return 0;
}
@@ -235,47 +323,81 @@ static void memcpy_prefault(memcpy_t fn, size_t size, void *src, void *dst)
fn(dst, src, size);
}
-static u64 do_memcpy_cycles(const struct function *r, size_t size, void *src, void *dst)
+static int do_memcpy(const struct function *r, struct bench_params *p,
+ void *src, void *dst, union bench_clock *rt)
{
- u64 cycle_start = 0ULL, cycle_end = 0ULL;
+ union bench_clock start, end;
memcpy_t fn = r->fn.memcpy;
- int i;
- memcpy_prefault(fn, size, src, dst);
+ memcpy_prefault(fn, p->size, src, dst);
+
+ clock_get(&start);
+ for (unsigned int i = 0; i < p->nr_loops; ++i)
+ for (size_t off = 0; off < p->size; off += p->chunk_size)
+ fn(dst + off, src + off, min(p->chunk_size, p->size - off));
+ clock_get(&end);
- cycle_start = get_cycles();
- for (i = 0; i < nr_loops; ++i)
- fn(dst, src, size);
- cycle_end = get_cycles();
+ *rt = clock_diff(&start, &end);
- return cycle_end - cycle_start;
+ return 0;
}
-static double do_memcpy_gettimeofday(const struct function *r, size_t size, void *src, void *dst)
+static void *bench_mmap(size_t size, bool populate, unsigned int page_shift)
{
- struct timeval tv_start, tv_end, tv_diff;
- memcpy_t fn = r->fn.memcpy;
- int i;
+ void *p;
+ int extra = populate ? MAP_POPULATE : 0;
+
+ if (page_shift != PAGE_SHIFT_4KB)
+ extra |= MAP_HUGETLB | (page_shift << MAP_HUGE_SHIFT);
+
+ p = mmap(NULL, size, PROT_READ|PROT_WRITE,
+ extra | MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
+
+ return p == MAP_FAILED ? NULL : p;
+}
+
+static void bench_munmap(void *p, size_t size)
+{
+ if (p)
+ munmap(p, size);
+}
+
+static bool mem_alloc(struct bench_mem_info *info, struct bench_params *p,
+ void **src, void **dst)
+{
+ bool failed;
- memcpy_prefault(fn, size, src, dst);
+ *dst = bench_mmap(p->size, true, p->page_shift);
+ failed = *dst == NULL;
- BUG_ON(gettimeofday(&tv_start, NULL));
- for (i = 0; i < nr_loops; ++i)
- fn(dst, src, size);
- BUG_ON(gettimeofday(&tv_end, NULL));
+ if (info->alloc_src) {
+ *src = bench_mmap(p->size, true, p->page_shift);
+ failed = failed || *src == NULL;
+ }
+
+ return failed;
+}
- timersub(&tv_end, &tv_start, &tv_diff);
+static void mem_free(struct bench_mem_info *info __maybe_unused,
+ struct bench_params *p __maybe_unused,
+ void **src, void **dst)
+{
+ bench_munmap(*dst, p->size);
+ bench_munmap(*src, p->size);
- return (double)(((double)size * nr_loops) / timeval2double(&tv_diff));
+ *dst = *src = NULL;
}
struct function memcpy_functions[] = {
{ .name = "default",
.desc = "Default memcpy() provided by glibc",
+ .fn.init = mem_alloc,
+ .fn.fini = mem_free,
.fn.memcpy = memcpy },
#ifdef HAVE_ARCH_X86_64_SUPPORT
-# define MEMCPY_FN(_fn, _name, _desc) {.name = _name, .desc = _desc, .fn.memcpy = _fn},
+# define MEMCPY_FN(_fn, _init, _fini, _name, _desc) \
+ {.name = _name, .desc = _desc, .fn.memcpy = _fn, .fn.init = _init, .fn.fini = _fini },
# include "mem-memcpy-x86-64-asm-def.h"
# undef MEMCPY_FN
#endif
@@ -292,55 +414,36 @@ int bench_mem_memcpy(int argc, const char **argv)
{
struct bench_mem_info info = {
.functions = memcpy_functions,
- .do_cycles = do_memcpy_cycles,
- .do_gettimeofday = do_memcpy_gettimeofday,
+ .do_op = do_memcpy,
.usage = bench_mem_memcpy_usage,
+ .options = bench_mem_options,
.alloc_src = true,
};
return bench_mem_common(argc, argv, &info);
}
-static u64 do_memset_cycles(const struct function *r, size_t size, void *src __maybe_unused, void *dst)
-{
- u64 cycle_start = 0ULL, cycle_end = 0ULL;
- memset_t fn = r->fn.memset;
- int i;
-
- /*
- * We prefault the freshly allocated memory range here,
- * to not measure page fault overhead:
- */
- fn(dst, -1, size);
-
- cycle_start = get_cycles();
- for (i = 0; i < nr_loops; ++i)
- fn(dst, i, size);
- cycle_end = get_cycles();
-
- return cycle_end - cycle_start;
-}
-
-static double do_memset_gettimeofday(const struct function *r, size_t size, void *src __maybe_unused, void *dst)
+static int do_memset(const struct function *r, struct bench_params *p,
+ void *src __maybe_unused, void *dst, union bench_clock *rt)
{
- struct timeval tv_start, tv_end, tv_diff;
+ union bench_clock start, end;
memset_t fn = r->fn.memset;
- int i;
/*
* We prefault the freshly allocated memory range here,
* to not measure page fault overhead:
*/
- fn(dst, -1, size);
+ fn(dst, -1, p->size);
- BUG_ON(gettimeofday(&tv_start, NULL));
- for (i = 0; i < nr_loops; ++i)
- fn(dst, i, size);
- BUG_ON(gettimeofday(&tv_end, NULL));
+ clock_get(&start);
+ for (unsigned int i = 0; i < p->nr_loops; ++i)
+ for (size_t off = 0; off < p->size; off += p->chunk_size)
+ fn(dst + off, i, min(p->chunk_size, p->size - off));
+ clock_get(&end);
- timersub(&tv_end, &tv_start, &tv_diff);
+ *rt = clock_diff(&start, &end);
- return (double)(((double)size * nr_loops) / timeval2double(&tv_diff));
+ return 0;
}
static const char * const bench_mem_memset_usage[] = {
@@ -351,10 +454,13 @@ static const char * const bench_mem_memset_usage[] = {
static const struct function memset_functions[] = {
{ .name = "default",
.desc = "Default memset() provided by glibc",
+ .fn.init = mem_alloc,
+ .fn.fini = mem_free,
.fn.memset = memset },
#ifdef HAVE_ARCH_X86_64_SUPPORT
-# define MEMSET_FN(_fn, _name, _desc) { .name = _name, .desc = _desc, .fn.memset = _fn },
+# define MEMSET_FN(_fn, _init, _fini, _name, _desc) \
+ {.name = _name, .desc = _desc, .fn.memset = _fn, .fn.init = _init, .fn.fini = _fini },
# include "mem-memset-x86-64-asm-def.h"
# undef MEMSET_FN
#endif
@@ -366,9 +472,91 @@ int bench_mem_memset(int argc, const char **argv)
{
struct bench_mem_info info = {
.functions = memset_functions,
- .do_cycles = do_memset_cycles,
- .do_gettimeofday = do_memset_gettimeofday,
+ .do_op = do_memset,
.usage = bench_mem_memset_usage,
+ .options = bench_mem_options,
+ };
+
+ return bench_mem_common(argc, argv, &info);
+}
+
+static void mmap_page_touch(void *dst, size_t size, unsigned int page_shift, bool random)
+{
+ unsigned long npages = size / (1 << page_shift);
+ unsigned long offset = 0, r = 0;
+
+ for (unsigned long i = 0; i < npages; i++) {
+ if (random)
+ r = rand() % (1 << page_shift);
+
+ *((char *)dst + offset + r) = *(char *)(dst + offset + r) + i;
+ offset += 1 << page_shift;
+ }
+}
+
+static int do_mmap(const struct function *r, struct bench_params *p,
+ void *src __maybe_unused, void *dst __maybe_unused,
+ union bench_clock *accum)
+{
+ union bench_clock start, end, diff;
+ mmap_op_t fn = r->fn.mmap_op;
+ bool populate = strcmp(r->name, "populate") == 0;
+
+ if (p->seed)
+ srand(p->seed);
+
+ for (unsigned int i = 0; i < p->nr_loops; i++) {
+ clock_get(&start);
+ dst = bench_mmap(p->size, populate, p->page_shift);
+ if (!dst)
+ goto out;
+
+ fn(dst, p->size, p->page_shift, p->seed);
+ clock_get(&end);
+ diff = clock_diff(&start, &end);
+ clock_accum(accum, &diff);
+
+ bench_munmap(dst, p->size);
+ }
+
+ return 0;
+out:
+ printf("# Memory allocation failed - maybe size (%s) %s?\n", size_str,
+ p->page_shift != PAGE_SHIFT_4KB ? "has insufficient hugepages" : "is too large");
+ return -1;
+}
+
+static const char * const bench_mem_mmap_usage[] = {
+ "perf bench mem mmap <options>",
+ NULL
+};
+
+static const struct function mmap_functions[] = {
+ { .name = "demand",
+ .desc = "Demand loaded mmap()",
+ .fn.mmap_op = mmap_page_touch },
+
+ { .name = "populate",
+ .desc = "Eagerly populated mmap()",
+ .fn.mmap_op = mmap_page_touch },
+
+ { .name = NULL, }
+};
+
+int bench_mem_mmap(int argc, const char **argv)
+{
+ static const struct option bench_mmap_options[] = {
+ OPT_UINTEGER('r', "randomize", &seed,
+ "Seed to randomize page access offset."),
+ OPT_PARENT(bench_common_options),
+ OPT_END()
+ };
+
+ struct bench_mem_info info = {
+ .functions = mmap_functions,
+ .do_op = do_mmap,
+ .usage = bench_mem_mmap_usage,
+ .options = bench_mmap_options,
};
return bench_mem_common(argc, argv, &info);
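
All of the memcpy/memset/mmap variants above get their buffers from the new bench_mmap() helper, which selects the page size purely through mmap flags: MAP_HUGETLB plus the page shift encoded via MAP_HUGE_SHIFT, and MAP_POPULATE when eager faulting is wanted. A reduced sketch of that flag construction (Linux-specific; the fallback constants below are the common asm-generic values and are given only for illustration):

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#ifndef MAP_HUGETLB
#define MAP_HUGETLB	0x40000	/* asm-generic value; a few architectures differ */
#endif
#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT	26
#endif

/* page_shift: 12 for 4KB, 21 for 2MB, 30 for 1GB (hugepages must be preallocated). */
static void *map_buf(size_t size, unsigned int page_shift, int populate)
{
	int flags = MAP_PRIVATE | MAP_ANONYMOUS;
	void *p;

	if (populate)
		flags |= MAP_POPULATE;
	if (page_shift != 12)
		flags |= MAP_HUGETLB | (page_shift << MAP_HUGE_SHIFT);

	p = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
	return p == MAP_FAILED ? NULL : p;
}
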
diff --git a/tools/perf/bench/mem-memcpy-arch.h b/tools/perf/bench/mem-memcpy-arch.h
index 5bcaec5601a8..852e48cfd8fe 100644
--- a/tools/perf/bench/mem-memcpy-arch.h
+++ b/tools/perf/bench/mem-memcpy-arch.h
@@ -2,7 +2,7 @@
#ifdef HAVE_ARCH_X86_64_SUPPORT
-#define MEMCPY_FN(fn, name, desc) \
+#define MEMCPY_FN(fn, init, fini, name, desc) \
void *fn(void *, const void *, size_t);
#include "mem-memcpy-x86-64-asm-def.h"
diff --git a/tools/perf/bench/mem-memcpy-x86-64-asm-def.h b/tools/perf/bench/mem-memcpy-x86-64-asm-def.h
index 6188e19d3129..f43038f4448b 100644
--- a/tools/perf/bench/mem-memcpy-x86-64-asm-def.h
+++ b/tools/perf/bench/mem-memcpy-x86-64-asm-def.h
@@ -1,9 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 */
MEMCPY_FN(memcpy_orig,
+ mem_alloc,
+ mem_free,
"x86-64-unrolled",
"unrolled memcpy() in arch/x86/lib/memcpy_64.S")
MEMCPY_FN(__memcpy,
+ mem_alloc,
+ mem_free,
"x86-64-movsq",
"movsq-based memcpy() in arch/x86/lib/memcpy_64.S")
diff --git a/tools/perf/bench/mem-memset-arch.h b/tools/perf/bench/mem-memset-arch.h
index 53f45482663f..278c5da12d63 100644
--- a/tools/perf/bench/mem-memset-arch.h
+++ b/tools/perf/bench/mem-memset-arch.h
@@ -2,7 +2,7 @@
#ifdef HAVE_ARCH_X86_64_SUPPORT
-#define MEMSET_FN(fn, name, desc) \
+#define MEMSET_FN(fn, init, fini, name, desc) \
void *fn(void *, int, size_t);
#include "mem-memset-x86-64-asm-def.h"
diff --git a/tools/perf/bench/mem-memset-x86-64-asm-def.h b/tools/perf/bench/mem-memset-x86-64-asm-def.h
index 247c72fdfb9d..80ad1b7ea770 100644
--- a/tools/perf/bench/mem-memset-x86-64-asm-def.h
+++ b/tools/perf/bench/mem-memset-x86-64-asm-def.h
@@ -1,9 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 */
MEMSET_FN(memset_orig,
+ mem_alloc,
+ mem_free,
"x86-64-unrolled",
"unrolled memset() in arch/x86/lib/memset_64.S")
MEMSET_FN(__memset,
+ mem_alloc,
+ mem_free,
"x86-64-stosq",
"movsq-based memset() in arch/x86/lib/memset_64.S")
diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
index 1fbd7c947abc..19be2aaf4dc0 100644
--- a/tools/perf/bench/numa.c
+++ b/tools/perf/bench/numa.c
@@ -27,6 +27,7 @@
#include <sys/resource.h>
#include <sys/wait.h>
#include <sys/prctl.h>
+#include <sys/stat.h>
#include <sys/types.h>
#include <linux/kernel.h>
#include <linux/time64.h>
@@ -35,6 +36,7 @@
#include "../util/header.h"
#include "../util/mutex.h"
+#include <api/fs/fs.h>
#include <numa.h>
#include <numaif.h>
@@ -533,6 +535,57 @@ static int parse_cpu_list(const char *arg)
return 0;
}
+/*
+ * Check whether a CPU is online
+ *
+ * Returns:
+ * 1 -> if CPU is online
+ * 0 -> if CPU is offline
+ * -1 -> error case
+ */
+static int is_cpu_online(unsigned int cpu)
+{
+ char *str;
+ size_t strlen;
+ char buf[256];
+ int status = -1;
+ struct stat statbuf;
+
+ snprintf(buf, sizeof(buf),
+ "/sys/devices/system/cpu/cpu%d", cpu);
+ if (stat(buf, &statbuf) != 0)
+ return 0;
+
+ /*
+ * Check if /sys/devices/system/cpu/cpux/online file
+ * exists. In some cases cpu0 won't have an online file,
+ * since it is generally not expected to be taken offline.
+ * In kernels without CONFIG_HOTPLUG_CPU, this
+ * file won't exist either.
+ */
+ snprintf(buf, sizeof(buf),
+ "/sys/devices/system/cpu/cpu%d/online", cpu);
+ if (stat(buf, &statbuf) != 0)
+ return 1;
+
+ /*
+ * Read online file using sysfs__read_str.
+ * If read or open fails, return -1.
+ * If read succeeds, return value from file
+ * which gets stored in "str"
+ */
+ snprintf(buf, sizeof(buf),
+ "devices/system/cpu/cpu%d/online", cpu);
+
+ if (sysfs__read_str(buf, &str, &strlen) < 0)
+ return status;
+
+ status = atoi(str);
+
+ free(str);
+ return status;
+}
+
static int parse_setup_cpu_list(void)
{
struct thread_data *td;
diff --git a/tools/perf/bench/sched-pipe.c b/tools/perf/bench/sched-pipe.c
index 3af6d3c55aba..70139036d68f 100644
--- a/tools/perf/bench/sched-pipe.c
+++ b/tools/perf/bench/sched-pipe.c
@@ -23,6 +23,7 @@
#include <errno.h>
#include <fcntl.h>
#include <assert.h>
+#include <sys/epoll.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/syscall.h>
@@ -34,6 +35,8 @@ struct thread_data {
int nr;
int pipe_read;
int pipe_write;
+ struct epoll_event epoll_ev;
+ int epoll_fd;
bool cgroup_failed;
pthread_t pthread;
};
@@ -44,6 +47,7 @@ static int loops = LOOPS_DEFAULT;
/* Use processes by default: */
static bool threaded;
+static bool nonblocking;
static char *cgrp_names[2];
static struct cgroup *cgrps[2];
@@ -81,6 +85,7 @@ out:
}
static const struct option options[] = {
+ OPT_BOOLEAN('n', "nonblocking", &nonblocking, "Use non-blocking operations"),
OPT_INTEGER('l', "loop", &loops, "Specify number of loops"),
OPT_BOOLEAN('T', "threaded", &threaded, "Specify threads/process based task setup"),
OPT_CALLBACK('G', "cgroups", NULL, "SEND,RECV",
@@ -165,11 +170,25 @@ static void exit_cgroup(int nr)
free(cgrp_names[nr]);
}
+static inline int read_pipe(struct thread_data *td)
+{
+ int ret, m;
+retry:
+ if (nonblocking) {
+ ret = epoll_wait(td->epoll_fd, &td->epoll_ev, 1, -1);
+ if (ret < 0)
+ return ret;
+ }
+ ret = read(td->pipe_read, &m, sizeof(int));
+ if (nonblocking && ret < 0 && errno == EWOULDBLOCK)
+ goto retry;
+ return ret;
+}
+
static void *worker_thread(void *__tdata)
{
struct thread_data *td = __tdata;
- int m = 0, i;
- int ret;
+ int i, ret, m = 0;
ret = enter_cgroup(td->nr);
if (ret < 0) {
@@ -177,18 +196,18 @@ static void *worker_thread(void *__tdata)
return NULL;
}
+ if (nonblocking) {
+ td->epoll_ev.events = EPOLLIN;
+ td->epoll_fd = epoll_create(1);
+ BUG_ON(td->epoll_fd < 0);
+ BUG_ON(epoll_ctl(td->epoll_fd, EPOLL_CTL_ADD, td->pipe_read, &td->epoll_ev) < 0);
+ }
+
for (i = 0; i < loops; i++) {
- if (!td->nr) {
- ret = read(td->pipe_read, &m, sizeof(int));
- BUG_ON(ret != sizeof(int));
- ret = write(td->pipe_write, &m, sizeof(int));
- BUG_ON(ret != sizeof(int));
- } else {
- ret = write(td->pipe_write, &m, sizeof(int));
- BUG_ON(ret != sizeof(int));
- ret = read(td->pipe_read, &m, sizeof(int));
- BUG_ON(ret != sizeof(int));
- }
+ ret = write(td->pipe_write, &m, sizeof(int));
+ BUG_ON(ret != sizeof(int));
+ ret = read_pipe(td);
+ BUG_ON(ret != sizeof(int));
}
return NULL;
@@ -209,13 +228,16 @@ int bench_sched_pipe(int argc, const char **argv)
* discarding returned value of read(), write()
* causes error in building environment for perf
*/
- int __maybe_unused ret, wait_stat;
+ int __maybe_unused ret, wait_stat, flags = 0;
pid_t pid, retpid __maybe_unused;
argc = parse_options(argc, argv, options, bench_sched_pipe_usage, 0);
- BUG_ON(pipe(pipe_1));
- BUG_ON(pipe(pipe_2));
+ if (nonblocking)
+ flags |= O_NONBLOCK;
+
+ BUG_ON(pipe2(pipe_1, flags));
+ BUG_ON(pipe2(pipe_2, flags));
gettimeofday(&start, NULL);
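
With the new -n/--nonblocking mode the pipes are created O_NONBLOCK and each side parks in epoll_wait() until its read end is ready, retrying the read on EAGAIN/EWOULDBLOCK exactly as read_pipe() above does. A condensed standalone sketch of that receive path (Linux epoll assumed, error handling trimmed):

#include <errno.h>
#include <unistd.h>
#include <sys/epoll.h>

/* One-time setup per worker:
 *   epfd = epoll_create1(0);
 *   ev.events = EPOLLIN;
 *   epoll_ctl(epfd, EPOLL_CTL_ADD, pipe_read_fd, &ev);
 */
static int read_one_int(int epfd, int fd, int *val)
{
	struct epoll_event ev;
	ssize_t n;

	for (;;) {
		if (epoll_wait(epfd, &ev, 1, -1) < 0)
			return -1;
		n = read(fd, val, sizeof(*val));
		if (n == (ssize_t)sizeof(*val))
			return 0;
		if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
			return -1;
		/* spurious wakeup or writer not done yet: wait again */
	}
}
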
diff --git a/tools/perf/bench/synthesize.c b/tools/perf/bench/synthesize.c
index 7401ebbac100..b3d493697675 100644
--- a/tools/perf/bench/synthesize.c
+++ b/tools/perf/bench/synthesize.c
@@ -49,7 +49,7 @@ static const char *const bench_usage[] = {
static atomic_t event_count;
-static int process_synthesized_event(struct perf_tool *tool __maybe_unused,
+static int process_synthesized_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -114,12 +114,16 @@ static int run_single_threaded(void)
.pid = "self",
};
struct perf_thread_map *threads;
+ struct perf_env host_env;
int err;
perf_set_singlethreaded();
- session = perf_session__new(NULL, NULL);
+ perf_env__init(&host_env);
+ session = __perf_session__new(/*data=*/NULL, /*tool=*/NULL,
+ /*trace_event_repipe=*/false, &host_env);
if (IS_ERR(session)) {
pr_err("Session creation failed.\n");
+ perf_env__exit(&host_env);
return PTR_ERR(session);
}
threads = thread_map__new_by_pid(getpid());
@@ -144,6 +148,7 @@ err_out:
perf_thread_map__put(threads);
perf_session__delete(session);
+ perf_env__exit(&host_env);
return err;
}
@@ -154,17 +159,21 @@ static int do_run_multi_threaded(struct target *target,
u64 runtime_us;
unsigned int i;
double time_average, time_stddev, event_average, event_stddev;
- int err;
+ int err = 0;
struct stats time_stats, event_stats;
struct perf_session *session;
+ struct perf_env host_env;
+ perf_env__init(&host_env);
init_stats(&time_stats);
init_stats(&event_stats);
for (i = 0; i < multi_iterations; i++) {
- session = perf_session__new(NULL, NULL);
- if (IS_ERR(session))
- return PTR_ERR(session);
-
+ session = __perf_session__new(/*data=*/NULL, /*tool=*/NULL,
+ /*trace_event_repipe=*/false, &host_env);
+ if (IS_ERR(session)) {
+ err = PTR_ERR(session);
+ goto err_out;
+ }
atomic_set(&event_count, 0);
gettimeofday(&start, NULL);
err = __machine__synthesize_threads(&session->machines.host,
@@ -175,7 +184,7 @@ static int do_run_multi_threaded(struct target *target,
nr_threads_synthesize);
if (err) {
perf_session__delete(session);
- return err;
+ goto err_out;
}
gettimeofday(&end, NULL);
@@ -198,7 +207,9 @@ static int do_run_multi_threaded(struct target *target,
printf(" Average time per event %.3f usec\n",
time_average / event_average);
- return 0;
+err_out:
+ perf_env__exit(&host_env);
+ return err;
}
static int run_multi_threaded(void)
diff --git a/tools/perf/bench/syscall.c b/tools/perf/bench/syscall.c
index ea4dfc07cbd6..e7dc216f717f 100644
--- a/tools/perf/bench/syscall.c
+++ b/tools/perf/bench/syscall.c
@@ -22,8 +22,7 @@
#define __NR_fork -1
#endif
-#define LOOPS_DEFAULT 10000000
-static int loops = LOOPS_DEFAULT;
+static int loops;
static const struct option options[] = {
OPT_INTEGER('l', "loop", &loops, "Specify number of loops"),
@@ -80,6 +79,18 @@ static int bench_syscall_common(int argc, const char **argv, int syscall)
const char *name = NULL;
int i;
+ switch (syscall) {
+ case __NR_fork:
+ case __NR_execve:
+ /* Limit default loop to 10000 times to save time */
+ loops = 10000;
+ break;
+ default:
+ loops = 10000000;
+ break;
+ }
+
+ /* The -l/--loop option overrides the default set above */
argc = parse_options(argc, argv, options, bench_syscall_usage, 0);
gettimeofday(&start, NULL);
@@ -94,16 +105,9 @@ static int bench_syscall_common(int argc, const char **argv, int syscall)
break;
case __NR_fork:
test_fork();
- /* Only loop 10000 times to save time */
- if (i == 10000)
- loops = 10000;
break;
case __NR_execve:
test_execve();
- /* Only loop 10000 times to save time */
- if (i == 10000)
- loops = 10000;
- break;
default:
break;
}
diff --git a/tools/perf/bench/uprobe.c b/tools/perf/bench/uprobe.c
index 5c71fdc419dd..0b90275862e1 100644
--- a/tools/perf/bench/uprobe.c
+++ b/tools/perf/bench/uprobe.c
@@ -26,9 +26,11 @@
static int loops = LOOPS_DEFAULT;
enum bench_uprobe {
- BENCH_UPROBE__BASELINE,
- BENCH_UPROBE__EMPTY,
- BENCH_UPROBE__TRACE_PRINTK,
+ BENCH_UPROBE__BASELINE,
+ BENCH_UPROBE__EMPTY,
+ BENCH_UPROBE__TRACE_PRINTK,
+ BENCH_UPROBE__EMPTY_RET,
+ BENCH_UPROBE__TRACE_PRINTK_RET,
};
static const struct option options[] = {
@@ -47,7 +49,7 @@ static const char * const bench_uprobe_usage[] = {
#define bench_uprobe__attach_uprobe(prog) \
skel->links.prog = bpf_program__attach_uprobe_opts(/*prog=*/skel->progs.prog, \
/*pid=*/-1, \
- /*binary_path=*/"/lib64/libc.so.6", \
+ /*binary_path=*/"libc.so.6", \
/*func_offset=*/0, \
/*opts=*/&uprobe_opts); \
if (!skel->links.prog) { \
@@ -81,6 +83,8 @@ static int bench_uprobe__setup_bpf_skel(enum bench_uprobe bench)
case BENCH_UPROBE__BASELINE: break;
case BENCH_UPROBE__EMPTY: bench_uprobe__attach_uprobe(empty); break;
case BENCH_UPROBE__TRACE_PRINTK: bench_uprobe__attach_uprobe(trace_printk); break;
+ case BENCH_UPROBE__EMPTY_RET: bench_uprobe__attach_uprobe(empty_ret); break;
+ case BENCH_UPROBE__TRACE_PRINTK_RET: bench_uprobe__attach_uprobe(trace_printk_ret); break;
default:
fprintf(stderr, "Invalid bench: %d\n", bench);
goto cleanup;
@@ -197,3 +201,13 @@ int bench_uprobe_trace_printk(int argc, const char **argv)
{
return bench_uprobe(argc, argv, BENCH_UPROBE__TRACE_PRINTK);
}
+
+int bench_uprobe_empty_ret(int argc, const char **argv)
+{
+ return bench_uprobe(argc, argv, BENCH_UPROBE__EMPTY_RET);
+}
+
+int bench_uprobe_trace_printk_ret(int argc, const char **argv)
+{
+ return bench_uprobe(argc, argv, BENCH_UPROBE__TRACE_PRINTK_RET);
+}
diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
index 6c1cc797692d..646f43b0f7c4 100644
--- a/tools/perf/builtin-annotate.c
+++ b/tools/perf/builtin-annotate.c
@@ -7,6 +7,7 @@
* a histogram of results, along various sorting keys.
*/
#include "builtin.h"
+#include "perf.h"
#include "util/color.h"
#include <linux/list.h>
@@ -37,11 +38,13 @@
#include "util/map_symbol.h"
#include "util/branch.h"
#include "util/util.h"
+#include "ui/progress.h"
#include <dlfcn.h>
#include <errno.h>
#include <linux/bitmap.h>
#include <linux/err.h>
+#include <inttypes.h>
struct perf_annotate {
struct perf_tool tool;
@@ -217,9 +220,10 @@ static int process_branch_callback(struct evsel *evsel,
}
if (a.map != NULL)
- map__dso(a.map)->hit = 1;
+ dso__set_hit(map__dso(a.map));
- hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
+ hist__account_cycles(sample->branch_stack, al, sample, false,
+ NULL, evsel);
ret = hist_entry_iter__add(&iter, &a, PERF_MAX_STACK_DEPTH, ann);
out:
@@ -252,7 +256,7 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
if (al->sym != NULL) {
struct dso *dso = map__dso(al->map);
- rb_erase_cached(&al->sym->rb_node, &dso->symbols);
+ rb_erase_cached(&al->sym->rb_node, dso__symbols(dso));
symbol__delete(al->sym);
dso__reset_find_symbol_cache(dso);
}
@@ -277,7 +281,7 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
return ret;
}
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -317,85 +321,14 @@ static int process_feature_event(struct perf_session *session,
return 0;
}
-static int hist_entry__tty_annotate(struct hist_entry *he,
+static int hist_entry__stdio_annotate(struct hist_entry *he,
struct evsel *evsel,
struct perf_annotate *ann)
{
- if (!ann->use_stdio2)
- return symbol__tty_annotate(&he->ms, evsel);
+ if (ann->use_stdio2)
+ return hist_entry__tty_annotate2(he, evsel);
- return symbol__tty_annotate2(&he->ms, evsel);
-}
-
-static void print_annotated_data_header(struct hist_entry *he, struct evsel *evsel)
-{
- struct dso *dso = map__dso(he->ms.map);
- int nr_members = 1;
- int nr_samples = he->stat.nr_events;
-
- if (evsel__is_group_event(evsel)) {
- struct hist_entry *pair;
-
- list_for_each_entry(pair, &he->pairs.head, pairs.node)
- nr_samples += pair->stat.nr_events;
- }
-
- printf("Annotate type: '%s' in %s (%d samples):\n",
- he->mem_type->self.type_name, dso->name, nr_samples);
-
- if (evsel__is_group_event(evsel)) {
- struct evsel *pos;
- int i = 0;
-
- for_each_group_evsel(pos, evsel)
- printf(" event[%d] = %s\n", i++, pos->name);
-
- nr_members = evsel->core.nr_members;
- }
-
- printf("============================================================================\n");
- printf("%*s %10s %10s %s\n", 11 * nr_members, "samples", "offset", "size", "field");
-}
-
-static void print_annotated_data_type(struct annotated_data_type *mem_type,
- struct annotated_member *member,
- struct evsel *evsel, int indent)
-{
- struct annotated_member *child;
- struct type_hist *h = mem_type->histograms[evsel->core.idx];
- int i, nr_events = 1, samples = 0;
-
- for (i = 0; i < member->size; i++)
- samples += h->addr[member->offset + i].nr_samples;
- printf(" %10d", samples);
-
- if (evsel__is_group_event(evsel)) {
- struct evsel *pos;
-
- for_each_group_member(pos, evsel) {
- h = mem_type->histograms[pos->core.idx];
-
- samples = 0;
- for (i = 0; i < member->size; i++)
- samples += h->addr[member->offset + i].nr_samples;
- printf(" %10d", samples);
- }
- nr_events = evsel->core.nr_members;
- }
-
- printf(" %10d %10d %*s%s\t%s",
- member->offset, member->size, indent, "", member->type_name,
- member->var_name ?: "");
-
- if (!list_empty(&member->children))
- printf(" {\n");
-
- list_for_each_entry(child, &member->children, node)
- print_annotated_data_type(mem_type, child, evsel, indent + 4);
-
- if (!list_empty(&member->children))
- printf("%*s}", 11 * nr_events + 24 + indent, "");
- printf(";\n");
+ return hist_entry__tty_annotate(he, evsel);
}
static void print_annotate_data_stat(struct annotated_data_stat *s)
@@ -430,6 +363,7 @@ static void print_annotate_data_stat(struct annotated_data_stat *s)
PRINT_STAT(no_typeinfo);
PRINT_STAT(invalid_size);
PRINT_STAT(bad_offset);
+ PRINT_STAT(insn_track);
printf("\n");
#undef PRINT_STAT
@@ -464,10 +398,10 @@ static void print_annotate_item_stat(struct list_head *head, const char *title)
printf("total %d, ok %d (%.1f%%), bad %d (%.1f%%)\n\n", total,
total_good, 100.0 * total_good / (total ?: 1),
total_bad, 100.0 * total_bad / (total ?: 1));
- printf(" %-10s: %5s %5s\n", "Name", "Good", "Bad");
+ printf(" %-20s: %5s %5s\n", "Name/opcode", "Good", "Bad");
printf("-----------------------------------------------------------\n");
list_for_each_entry(istat, head, list)
- printf(" %-10s: %5d %5d\n", istat->name, istat->good, istat->bad);
+ printf(" %-20s: %5d %5d\n", istat->name, istat->good, istat->bad);
printf("\n");
}
@@ -487,7 +421,7 @@ static void hists__find_annotations(struct hists *hists,
struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node);
struct annotation *notes;
- if (he->ms.sym == NULL || map__dso(he->ms.map)->annotate_warned)
+ if (he->ms.sym == NULL || dso__annotate_warned(map__dso(he->ms.map)))
goto find_next;
if (ann->sym_hist_filter &&
@@ -537,10 +471,32 @@ find_next:
goto find_next;
}
- print_annotated_data_header(he, evsel);
- print_annotated_data_type(he->mem_type, &he->mem_type->self, evsel, 0);
- printf("\n");
- goto find_next;
+ if (use_browser == 1)
+ key = hist_entry__annotate_data_tui(he, evsel, NULL);
+ else
+ key = hist_entry__annotate_data_tty(he, evsel);
+
+ switch (key) {
+ case -1:
+ if (!ann->skip_missing)
+ return;
+ /* fall through */
+ case K_RIGHT:
+ case '>':
+ next = rb_next(nd);
+ break;
+ case K_LEFT:
+ case '<':
+ next = rb_prev(nd);
+ break;
+ default:
+ return;
+ }
+
+ if (use_browser == 0 || next != NULL)
+ nd = next;
+
+ continue;
}
if (use_browser == 2) {
@@ -585,7 +541,7 @@ find_next:
if (next != NULL)
nd = next;
} else {
- hist_entry__tty_annotate(he, evsel, ann);
+ hist_entry__stdio_annotate(he, evsel, ann);
nd = rb_next(nd);
}
}
@@ -606,7 +562,7 @@ static int __cmd_annotate(struct perf_annotate *ann)
}
if (!annotate_opts.objdump_path) {
- ret = perf_env__lookup_objdump(&session->header.env,
+ ret = perf_env__lookup_objdump(perf_session__env(session),
&annotate_opts.objdump_path);
if (ret)
goto out;
@@ -617,8 +573,8 @@ static int __cmd_annotate(struct perf_annotate *ann)
goto out;
if (dump_trace) {
- perf_session__fprintf_nr_events(session, stdout, false);
- evlist__fprintf_nr_events(session->evlist, stdout, false);
+ perf_session__fprintf_nr_events(session, stdout);
+ evlist__fprintf_nr_events(session->evlist, stdout);
goto out;
}
@@ -632,13 +588,23 @@ static int __cmd_annotate(struct perf_annotate *ann)
evlist__for_each_entry(session->evlist, pos) {
struct hists *hists = evsel__hists(pos);
u32 nr_samples = hists->stats.nr_samples;
+ struct ui_progress prog;
if (nr_samples > 0) {
total_nr_samples += nr_samples;
- hists__collapse_resort(hists, NULL);
+
+ ui_progress__init(&prog, nr_samples,
+ "Merging related events...");
+ hists__collapse_resort(hists, &prog);
+ ui_progress__finish();
+
/* Don't sort callchain */
evsel__reset_sample_bit(pos, CALLCHAIN);
- evsel__output_resort(pos, NULL);
+
+ ui_progress__init(&prog, nr_samples,
+ "Sorting events for output...");
+ evsel__output_resort(pos, &prog);
+ ui_progress__finish();
/*
* An event group needs to display other events too.
@@ -668,13 +634,23 @@ static int __cmd_annotate(struct perf_annotate *ann)
evlist__for_each_entry(session->evlist, pos) {
struct hists *hists = evsel__hists(pos);
u32 nr_samples = hists->stats.nr_samples;
+ struct ui_progress prog;
+ struct evsel *evsel;
- if (nr_samples == 0)
+ if (!symbol_conf.event_group || !evsel__is_group_leader(pos))
continue;
- if (!symbol_conf.event_group || !evsel__is_group_leader(pos))
+ for_each_group_member(evsel, pos)
+ nr_samples += evsel__hists(evsel)->stats.nr_samples;
+
+ if (nr_samples == 0)
continue;
+ ui_progress__init(&prog, nr_samples,
+ "Sorting group events for output...");
+ evsel__output_resort(pos, &prog);
+ ui_progress__finish();
+
hists__find_annotations(hists, pos, ann);
}
@@ -722,28 +698,7 @@ static const char * const annotate_usage[] = {
int cmd_annotate(int argc, const char **argv)
{
- struct perf_annotate annotate = {
- .tool = {
- .sample = process_sample_event,
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .comm = perf_event__process_comm,
- .exit = perf_event__process_exit,
- .fork = perf_event__process_fork,
- .namespaces = perf_event__process_namespaces,
- .attr = perf_event__process_attr,
- .build_id = perf_event__process_build_id,
-#ifdef HAVE_LIBTRACEEVENT
- .tracing_data = perf_event__process_tracing_data,
-#endif
- .id_index = perf_event__process_id_index,
- .auxtrace_info = perf_event__process_auxtrace_info,
- .auxtrace = perf_event__process_auxtrace,
- .feature = process_feature_event,
- .ordered_events = true,
- .ordering_requires_timestamps = true,
- },
- };
+ struct perf_annotate annotate = {};
struct perf_data data = {
.mode = PERF_DATA_MODE_READ,
};
@@ -809,8 +764,6 @@ int cmd_annotate(int argc, const char **argv)
"Enable symbol demangling"),
OPT_BOOLEAN(0, "demangle-kernel", &symbol_conf.demangle_kernel,
"Enable kernel symbol demangling"),
- OPT_BOOLEAN(0, "group", &symbol_conf.event_group,
- "Show event group information together"),
OPT_BOOLEAN(0, "show-total-period", &symbol_conf.show_total_period,
"Show a column with the sum of periods"),
OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
@@ -833,6 +786,10 @@ int cmd_annotate(int argc, const char **argv)
"Show stats for the data type annotation"),
OPT_BOOLEAN(0, "insn-stat", &annotate.insn_stat,
"Show instruction stats for the data type annotation"),
+ OPT_BOOLEAN(0, "skip-empty", &symbol_conf.skip_empty,
+ "Do not display empty (or dummy) events in the output"),
+ OPT_BOOLEAN(0, "code-with-type", &annotate_opts.code_with_type,
+ "Show data type info in code annotation (memory instructions only)"),
OPT_END()
};
int ret;
@@ -886,7 +843,7 @@ int cmd_annotate(int argc, const char **argv)
}
#endif
-#ifndef HAVE_DWARF_GETLOCATIONS_SUPPORT
+#ifndef HAVE_LIBDW_SUPPORT
if (annotate.data_type) {
pr_err("Error: Data type profiling is disabled due to missing DWARF support\n");
return -ENOTSUP;
@@ -902,6 +859,25 @@ int cmd_annotate(int argc, const char **argv)
data.path = input_name;
+ perf_tool__init(&annotate.tool, /*ordered_events=*/true);
+ annotate.tool.sample = process_sample_event;
+ annotate.tool.mmap = perf_event__process_mmap;
+ annotate.tool.mmap2 = perf_event__process_mmap2;
+ annotate.tool.comm = perf_event__process_comm;
+ annotate.tool.exit = perf_event__process_exit;
+ annotate.tool.fork = perf_event__process_fork;
+ annotate.tool.namespaces = perf_event__process_namespaces;
+ annotate.tool.attr = perf_event__process_attr;
+ annotate.tool.build_id = perf_event__process_build_id;
+#ifdef HAVE_LIBTRACEEVENT
+ annotate.tool.tracing_data = perf_event__process_tracing_data;
+#endif
+ annotate.tool.id_index = perf_event__process_id_index;
+ annotate.tool.auxtrace_info = perf_event__process_auxtrace_info;
+ annotate.tool.auxtrace = perf_event__process_auxtrace;
+ annotate.tool.feature = process_feature_event;
+ annotate.tool.ordering_requires_timestamps = true;
+
annotate.session = perf_session__new(&data, &annotate.tool);
if (IS_ERR(annotate.session))
return PTR_ERR(annotate.session);
@@ -920,7 +896,7 @@ int cmd_annotate(int argc, const char **argv)
symbol_conf.try_vmlinux_path = true;
- ret = symbol__init(&annotate.session->header.env);
+ ret = symbol__init(perf_session__env(annotate.session));
if (ret < 0)
goto out_delete;
@@ -935,12 +911,12 @@ int cmd_annotate(int argc, const char **argv)
use_browser = 2;
#endif
- /* FIXME: only support stdio for now */
if (annotate.data_type) {
- use_browser = 0;
annotate_opts.annotate_src = false;
symbol_conf.annotate_data_member = true;
symbol_conf.annotate_data_sample = true;
+ } else if (annotate_opts.code_with_type) {
+ symbol_conf.annotate_data_member = true;
}
setup_browser(true);
@@ -956,13 +932,17 @@ int cmd_annotate(int argc, const char **argv)
sort_order = "dso,symbol";
/*
- * Set SORT_MODE__BRANCH so that annotate display IPC/Cycle
- * if branch info is in perf data in TUI mode.
+ * Set SORT_MODE__BRANCH so that annotate displays IPC/Cycle and
+ * branch counters, if the corresponding branch info is available
+ * in the perf data in the TUI mode.
*/
- if ((use_browser == 1 || annotate.use_stdio2) && annotate.has_br_stack)
+ if ((use_browser == 1 || annotate.use_stdio2) && annotate.has_br_stack) {
sort__mode = SORT_MODE__BRANCH;
+ if (annotate.session->evlist->nr_br_cntr > 0)
+ annotate_opts.show_br_cntr = true;
+ }
- if (setup_sorting(NULL) < 0)
+ if (setup_sorting(/*evlist=*/NULL, perf_session__env(annotate.session)) < 0)
usage_with_options(annotate_usage, options);
ret = __cmd_annotate(&annotate);
diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c
index 1a8898d5b560..02dea1b88228 100644
--- a/tools/perf/builtin-bench.c
+++ b/tools/perf/builtin-bench.c
@@ -65,6 +65,7 @@ static struct bench mem_benchmarks[] = {
{ "memcpy", "Benchmark for memcpy() functions", bench_mem_memcpy },
{ "memset", "Benchmark for memset() functions", bench_mem_memset },
{ "find_bit", "Benchmark for find_bit() functions", bench_mem_find_bit },
+ { "mmap", "Benchmark for mmap() mappings", bench_mem_mmap },
{ "all", "Run all memory access benchmarks", NULL },
{ NULL, NULL, NULL }
};
@@ -109,6 +110,8 @@ static struct bench uprobe_benchmarks[] = {
{ "baseline", "Baseline libc usleep(1000) call", bench_uprobe_baseline, },
{ "empty", "Attach empty BPF prog to uprobe on usleep, system wide", bench_uprobe_empty, },
{ "trace_printk", "Attach trace_printk BPF prog to uprobe on usleep syswide", bench_uprobe_trace_printk, },
+ { "empty_ret", "Attach empty BPF prog to uretprobe on usleep, system wide", bench_uprobe_empty_ret, },
+ { "trace_printk_ret", "Attach trace_printk BPF prog to uretprobe on usleep syswide", bench_uprobe_trace_printk_ret,},
{ NULL, NULL, NULL },
};
diff --git a/tools/perf/builtin-buildid-cache.c b/tools/perf/builtin-buildid-cache.c
index e2a40f1d9225..2e0f2004696a 100644
--- a/tools/perf/builtin-buildid-cache.c
+++ b/tools/perf/builtin-buildid-cache.c
@@ -31,7 +31,7 @@
#include <linux/string.h>
#include <linux/err.h>
-static int build_id_cache__kcore_buildid(const char *proc_dir, char *sbuildid)
+static int build_id_cache__kcore_buildid(const char *proc_dir, char *sbuildid, size_t sbuildid_size)
{
char root_dir[PATH_MAX];
char *p;
@@ -42,7 +42,7 @@ static int build_id_cache__kcore_buildid(const char *proc_dir, char *sbuildid)
if (!p)
return -1;
*p = '\0';
- return sysfs__sprintf_build_id(root_dir, sbuildid);
+ return sysfs__snprintf_build_id(root_dir, sbuildid, sbuildid_size);
}
static int build_id_cache__kcore_dir(char *dir, size_t sz)
@@ -128,7 +128,7 @@ static int build_id_cache__add_kcore(const char *filename, bool force)
return -1;
*p = '\0';
- if (build_id_cache__kcore_buildid(from_dir, sbuildid) < 0)
+ if (build_id_cache__kcore_buildid(from_dir, sbuildid, sizeof(sbuildid)) < 0)
return -1;
scnprintf(to_dir, sizeof(to_dir), "%s/%s/%s",
@@ -175,19 +175,19 @@ static int build_id_cache__add_kcore(const char *filename, bool force)
static int build_id_cache__add_file(const char *filename, struct nsinfo *nsi)
{
char sbuild_id[SBUILD_ID_SIZE];
- struct build_id bid;
+ struct build_id bid = { .size = 0, };
int err;
struct nscookie nsc;
nsinfo__mountns_enter(nsi, &nsc);
- err = filename__read_build_id(filename, &bid);
+ err = filename__read_build_id(filename, &bid, /*block=*/true);
nsinfo__mountns_exit(&nsc);
if (err < 0) {
pr_debug("Couldn't read a build-id in %s\n", filename);
return -1;
}
- build_id__sprintf(&bid, sbuild_id);
+ build_id__snprintf(&bid, sbuild_id, sizeof(sbuild_id));
err = build_id_cache__add_s(sbuild_id, filename, nsi,
false, false);
pr_debug("Adding %s %s: %s\n", sbuild_id, filename,
@@ -198,20 +198,20 @@ static int build_id_cache__add_file(const char *filename, struct nsinfo *nsi)
static int build_id_cache__remove_file(const char *filename, struct nsinfo *nsi)
{
char sbuild_id[SBUILD_ID_SIZE];
- struct build_id bid;
+ struct build_id bid = { .size = 0, };
struct nscookie nsc;
int err;
nsinfo__mountns_enter(nsi, &nsc);
- err = filename__read_build_id(filename, &bid);
+ err = filename__read_build_id(filename, &bid, /*block=*/true);
nsinfo__mountns_exit(&nsc);
if (err < 0) {
pr_debug("Couldn't read a build-id in %s\n", filename);
return -1;
}
- build_id__sprintf(&bid, sbuild_id);
+ build_id__snprintf(&bid, sbuild_id, sizeof(sbuild_id));
err = build_id_cache__remove_s(sbuild_id);
pr_debug("Removing %s %s: %s\n", sbuild_id, filename,
err ? "FAIL" : "Ok");
@@ -275,18 +275,18 @@ static int build_id_cache__purge_all(void)
static bool dso__missing_buildid_cache(struct dso *dso, int parm __maybe_unused)
{
char filename[PATH_MAX];
- struct build_id bid;
+ struct build_id bid = { .size = 0, };
if (!dso__build_id_filename(dso, filename, sizeof(filename), false))
return true;
- if (filename__read_build_id(filename, &bid) == -1) {
+ if (filename__read_build_id(filename, &bid, /*block=*/true) == -1) {
if (errno == ENOENT)
return false;
pr_warning("Problems with %s file, consider removing it from the cache\n",
filename);
- } else if (memcmp(dso->bid.data, bid.data, bid.size)) {
+ } else if (memcmp(dso__bid(dso)->data, bid.data, bid.size)) {
pr_warning("Problems with %s file, consider removing it from the cache\n",
filename);
}
@@ -303,13 +303,13 @@ static int build_id_cache__fprintf_missing(struct perf_session *session, FILE *f
static int build_id_cache__update_file(const char *filename, struct nsinfo *nsi)
{
char sbuild_id[SBUILD_ID_SIZE];
- struct build_id bid;
+ struct build_id bid = { .size = 0, };
struct nscookie nsc;
int err;
nsinfo__mountns_enter(nsi, &nsc);
- err = filename__read_build_id(filename, &bid);
+ err = filename__read_build_id(filename, &bid, /*block=*/true);
nsinfo__mountns_exit(&nsc);
if (err < 0) {
pr_debug("Couldn't read a build-id in %s\n", filename);
@@ -317,7 +317,7 @@ static int build_id_cache__update_file(const char *filename, struct nsinfo *nsi)
}
err = 0;
- build_id__sprintf(&bid, sbuild_id);
+ build_id__snprintf(&bid, sbuild_id, sizeof(sbuild_id));
if (build_id_cache__cached(sbuild_id))
err = build_id_cache__remove_s(sbuild_id);
@@ -453,7 +453,7 @@ int cmd_buildid_cache(int argc, const char **argv)
return PTR_ERR(session);
}
- if (symbol__init(session ? &session->header.env : NULL) < 0)
+ if (symbol__init(session ? perf_session__env(session) : NULL) < 0)
goto out;
setup_pager();
diff --git a/tools/perf/builtin-buildid-list.c b/tools/perf/builtin-buildid-list.c
index c9037477865a..a91bbb34ac94 100644
--- a/tools/perf/builtin-buildid-list.c
+++ b/tools/perf/builtin-buildid-list.c
@@ -26,16 +26,18 @@ static int buildid__map_cb(struct map *map, void *arg __maybe_unused)
{
const struct dso *dso = map__dso(map);
char bid_buf[SBUILD_ID_SIZE];
+ const char *dso_long_name = dso__long_name(dso);
+ const char *dso_short_name = dso__short_name(dso);
memset(bid_buf, 0, sizeof(bid_buf));
- if (dso->has_build_id)
- build_id__sprintf(&dso->bid, bid_buf);
+ if (dso__has_build_id(dso))
+ build_id__snprintf(dso__bid(dso), bid_buf, sizeof(bid_buf));
printf("%s %16" PRIx64 " %16" PRIx64, bid_buf, map__start(map), map__end(map));
- if (dso->long_name != NULL) {
- printf(" %s", dso->long_name);
- } else if (dso->short_name != NULL) {
- printf(" %s", dso->short_name);
- }
+ if (dso_long_name != NULL)
+ printf(" %s", dso_long_name);
+ else if (dso_short_name != NULL)
+ printf(" %s", dso_short_name);
+
printf("\n");
return 0;
@@ -43,11 +45,14 @@ static int buildid__map_cb(struct map *map, void *arg __maybe_unused)
static void buildid__show_kernel_maps(void)
{
+ struct perf_env host_env;
struct machine *machine;
- machine = machine__new_host();
+ perf_env__init(&host_env);
+ machine = machine__new_host(&host_env);
machine__for_each_kernel_map(machine, buildid__map_cb, NULL);
machine__delete(machine);
+ perf_env__exit(&host_env);
}
static int sysfs__fprintf_build_id(FILE *fp)
@@ -55,7 +60,7 @@ static int sysfs__fprintf_build_id(FILE *fp)
char sbuild_id[SBUILD_ID_SIZE];
int ret;
- ret = sysfs__sprintf_build_id("/", sbuild_id);
+ ret = sysfs__snprintf_build_id("/", sbuild_id, sizeof(sbuild_id));
if (ret != sizeof(sbuild_id))
return ret < 0 ? ret : -EINVAL;
@@ -67,7 +72,7 @@ static int filename__fprintf_build_id(const char *name, FILE *fp)
char sbuild_id[SBUILD_ID_SIZE];
int ret;
- ret = filename__sprintf_build_id(name, sbuild_id);
+ ret = filename__snprintf_build_id(name, sbuild_id, sizeof(sbuild_id));
if (ret != sizeof(sbuild_id))
return ret < 0 ? ret : -EINVAL;
@@ -76,7 +81,7 @@ static int filename__fprintf_build_id(const char *name, FILE *fp)
static bool dso__skip_buildid(struct dso *dso, int with_hits)
{
- return with_hits && !dso->hit;
+ return with_hits && !dso__hit(dso);
}
static int perf_session__list_build_ids(bool force, bool with_hits)
@@ -87,6 +92,7 @@ static int perf_session__list_build_ids(bool force, bool with_hits)
.mode = PERF_DATA_MODE_READ,
.force = force,
};
+ struct perf_tool build_id__mark_dso_hit_ops;
symbol__elf_init();
/*
@@ -95,6 +101,15 @@ static int perf_session__list_build_ids(bool force, bool with_hits)
if (filename__fprintf_build_id(input_name, stdout) > 0)
goto out;
+ perf_tool__init(&build_id__mark_dso_hit_ops, /*ordered_events=*/true);
+ build_id__mark_dso_hit_ops.sample = build_id__mark_dso_hit;
+ build_id__mark_dso_hit_ops.mmap = perf_event__process_mmap;
+ build_id__mark_dso_hit_ops.mmap2 = perf_event__process_mmap2;
+ build_id__mark_dso_hit_ops.fork = perf_event__process_fork;
+ build_id__mark_dso_hit_ops.exit = perf_event__exit_del_thread;
+ build_id__mark_dso_hit_ops.attr = perf_event__process_attr;
+ build_id__mark_dso_hit_ops.build_id = perf_event__process_build_id;
+
session = perf_session__new(&data, &build_id__mark_dso_hit_ops);
if (IS_ERR(session))
return PTR_ERR(session);
diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
index f78eea9e2153..9e9ff471ddd1 100644
--- a/tools/perf/builtin-c2c.c
+++ b/tools/perf/builtin-c2c.c
@@ -38,6 +38,7 @@
#include "ui/browsers/hists.h"
#include "thread.h"
#include "mem2node.h"
+#include "mem-info.h"
#include "symbol.h"
#include "ui/ui.h"
#include "ui/progress.h"
@@ -194,12 +195,14 @@ static struct hist_entry_ops c2c_entry_ops = {
static int c2c_hists__init(struct c2c_hists *hists,
const char *sort,
- int nr_header_lines);
+ int nr_header_lines,
+ struct perf_env *env);
static struct c2c_hists*
he__get_c2c_hists(struct hist_entry *he,
const char *sort,
- int nr_header_lines)
+ int nr_header_lines,
+ struct perf_env *env)
{
struct c2c_hist_entry *c2c_he;
struct c2c_hists *hists;
@@ -213,7 +216,7 @@ he__get_c2c_hists(struct hist_entry *he,
if (!hists)
return NULL;
- ret = c2c_hists__init(hists, sort, nr_header_lines);
+ ret = c2c_hists__init(hists, sort, nr_header_lines, env);
if (ret) {
free(hists);
return NULL;
@@ -272,7 +275,7 @@ static void compute_stats(struct c2c_hist_entry *c2c_he,
update_stats(&cstats->load, weight);
}
-static int process_sample_event(struct perf_tool *tool __maybe_unused,
+static int process_sample_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -349,7 +352,7 @@ static int process_sample_event(struct perf_tool *tool __maybe_unused,
mi = mi_dup;
- c2c_hists = he__get_c2c_hists(he, c2c.cl_sort, 2);
+ c2c_hists = he__get_c2c_hists(he, c2c.cl_sort, 2, machine->env);
if (!c2c_hists)
goto free_mi;
@@ -384,24 +387,6 @@ free_mi:
goto out;
}
-static struct perf_c2c c2c = {
- .tool = {
- .sample = process_sample_event,
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .comm = perf_event__process_comm,
- .exit = perf_event__process_exit,
- .fork = perf_event__process_fork,
- .lost = perf_event__process_lost,
- .attr = perf_event__process_attr,
- .auxtrace_info = perf_event__process_auxtrace_info,
- .auxtrace = perf_event__process_auxtrace,
- .auxtrace_error = perf_event__process_auxtrace_error,
- .ordered_events = true,
- .ordering_requires_timestamps = true,
- },
-};
-
static const char * const c2c_usage[] = {
"perf c2c {record|report}",
NULL
@@ -529,7 +514,7 @@ static int dcacheline_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
char buf[20];
if (he->mem_info)
- addr = cl_address(he->mem_info->daddr.addr, chk_double_cl);
+ addr = cl_address(mem_info__daddr(he->mem_info)->addr, chk_double_cl);
return scnprintf(hpp->buf, hpp->size, "%*s", width, HEX_STR(buf, addr));
}
@@ -567,7 +552,7 @@ static int offset_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
char buf[20];
if (he->mem_info)
- addr = cl_offset(he->mem_info->daddr.al_addr, chk_double_cl);
+ addr = cl_offset(mem_info__daddr(he->mem_info)->al_addr, chk_double_cl);
return scnprintf(hpp->buf, hpp->size, "%*s", width, HEX_STR(buf, addr));
}
@@ -579,10 +564,10 @@ offset_cmp(struct perf_hpp_fmt *fmt __maybe_unused,
uint64_t l = 0, r = 0;
if (left->mem_info)
- l = cl_offset(left->mem_info->daddr.addr, chk_double_cl);
+ l = cl_offset(mem_info__daddr(left->mem_info)->addr, chk_double_cl);
if (right->mem_info)
- r = cl_offset(right->mem_info->daddr.addr, chk_double_cl);
+ r = cl_offset(mem_info__daddr(right->mem_info)->addr, chk_double_cl);
return (int64_t)(r - l);
}
@@ -596,7 +581,7 @@ iaddr_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
char buf[20];
if (he->mem_info)
- addr = he->mem_info->iaddr.addr;
+ addr = mem_info__iaddr(he->mem_info)->addr;
return scnprintf(hpp->buf, hpp->size, "%*s", width, HEX_STR(buf, addr));
}
@@ -1983,27 +1968,29 @@ static struct c2c_fmt *get_format(const char *name)
return c2c_fmt;
}
-static int c2c_hists__init_output(struct perf_hpp_list *hpp_list, char *name)
+static int c2c_hists__init_output(struct perf_hpp_list *hpp_list, char *name,
+ struct perf_env *env __maybe_unused)
{
struct c2c_fmt *c2c_fmt = get_format(name);
+ int level = 0;
if (!c2c_fmt) {
reset_dimensions();
- return output_field_add(hpp_list, name);
+ return output_field_add(hpp_list, name, &level);
}
perf_hpp_list__column_register(hpp_list, &c2c_fmt->fmt);
return 0;
}
-static int c2c_hists__init_sort(struct perf_hpp_list *hpp_list, char *name)
+static int c2c_hists__init_sort(struct perf_hpp_list *hpp_list, char *name, struct perf_env *env)
{
struct c2c_fmt *c2c_fmt = get_format(name);
struct c2c_dimension *dim;
if (!c2c_fmt) {
reset_dimensions();
- return sort_dimension__add(hpp_list, name, NULL, 0);
+ return sort_dimension__add(hpp_list, name, /*evlist=*/NULL, env, /*level=*/0);
}
dim = c2c_fmt->dim;
@@ -2024,7 +2011,7 @@ static int c2c_hists__init_sort(struct perf_hpp_list *hpp_list, char *name)
\
for (tok = strtok_r((char *)_list, ", ", &tmp); \
tok; tok = strtok_r(NULL, ", ", &tmp)) { \
- ret = _fn(hpp_list, tok); \
+ ret = _fn(hpp_list, tok, env); \
if (ret == -EINVAL) { \
pr_err("Invalid --fields key: `%s'", tok); \
break; \
@@ -2037,7 +2024,8 @@ static int c2c_hists__init_sort(struct perf_hpp_list *hpp_list, char *name)
static int hpp_list__parse(struct perf_hpp_list *hpp_list,
const char *output_,
- const char *sort_)
+ const char *sort_,
+ struct perf_env *env)
{
char *output = output_ ? strdup(output_) : NULL;
char *sort = sort_ ? strdup(sort_) : NULL;
@@ -2050,7 +2038,7 @@ static int hpp_list__parse(struct perf_hpp_list *hpp_list,
perf_hpp__setup_output_field(hpp_list);
/*
- * We dont need other sorting keys other than those
+ * We don't need other sorting keys other than those
* we already specified. It also really slows down
* the processing a lot with big number of output
* fields, so switching this off for c2c.
@@ -2068,7 +2056,8 @@ static int hpp_list__parse(struct perf_hpp_list *hpp_list,
static int c2c_hists__init(struct c2c_hists *hists,
const char *sort,
- int nr_header_lines)
+ int nr_header_lines,
+ struct perf_env *env)
{
__hists__init(&hists->hists, &hists->list);
@@ -2082,15 +2071,16 @@ static int c2c_hists__init(struct c2c_hists *hists,
/* Overload number of header lines.*/
hists->list.nr_header_lines = nr_header_lines;
- return hpp_list__parse(&hists->list, NULL, sort);
+ return hpp_list__parse(&hists->list, /*output=*/NULL, sort, env);
}
static int c2c_hists__reinit(struct c2c_hists *c2c_hists,
const char *output,
- const char *sort)
+ const char *sort,
+ struct perf_env *env)
{
perf_hpp__reset_output_field(&c2c_hists->list);
- return hpp_list__parse(&c2c_hists->list, output, sort);
+ return hpp_list__parse(&c2c_hists->list, output, sort, env);
}
#define DISPLAY_LINE_LIMIT 0.001
@@ -2223,8 +2213,9 @@ static int filter_cb(struct hist_entry *he, void *arg __maybe_unused)
return 0;
}
-static int resort_cl_cb(struct hist_entry *he, void *arg __maybe_unused)
+static int resort_cl_cb(struct hist_entry *he, void *arg)
{
+ struct perf_env *env = arg;
struct c2c_hist_entry *c2c_he;
struct c2c_hists *c2c_hists;
bool display = he__display(he, &c2c.shared_clines_stats);
@@ -2238,7 +2229,7 @@ static int resort_cl_cb(struct hist_entry *he, void *arg __maybe_unused)
c2c_he->cacheline_idx = idx++;
calc_width(c2c_he);
- c2c_hists__reinit(c2c_hists, c2c.cl_output, c2c.cl_resort);
+ c2c_hists__reinit(c2c_hists, c2c.cl_output, c2c.cl_resort, env);
hists__collapse_resort(&c2c_hists->hists, NULL);
hists__output_resort_cb(&c2c_hists->hists, NULL, filter_cb);
@@ -2283,14 +2274,15 @@ static int setup_nodes(struct perf_session *session)
int node, idx;
struct perf_cpu cpu;
int *cpu2node;
+ struct perf_env *env = perf_session__env(session);
if (c2c.node_info > 2)
c2c.node_info = 2;
- c2c.nodes_cnt = session->header.env.nr_numa_nodes;
- c2c.cpus_cnt = session->header.env.nr_cpus_avail;
+ c2c.nodes_cnt = env->nr_numa_nodes;
+ c2c.cpus_cnt = env->nr_cpus_avail;
- n = session->header.env.numa_nodes;
+ n = env->numa_nodes;
if (!n)
return -EINVAL;
@@ -2319,11 +2311,7 @@ static int setup_nodes(struct perf_session *session)
nodes[node] = set;
- /* empty node, skip */
- if (perf_cpu_map__has_any_cpu_or_is_empty(map))
- continue;
-
- perf_cpu_map__for_each_cpu(cpu, idx, map) {
+ perf_cpu_map__for_each_cpu_skip_any(cpu, idx, map) {
__set_bit(cpu.cpu, set);
if (WARN_ONCE(cpu2node[cpu.cpu] != -1, "node/cpu topology bug"))
@@ -2353,7 +2341,7 @@ static int resort_shared_cl_cb(struct hist_entry *he, void *arg __maybe_unused)
return 0;
}
-static int hists__iterate_cb(struct hists *hists, hists__resort_cb_t cb)
+static int hists__iterate_cb(struct hists *hists, hists__resort_cb_t cb, void *arg)
{
struct rb_node *next = rb_first_cached(&hists->entries);
int ret = 0;
@@ -2362,7 +2350,7 @@ static int hists__iterate_cb(struct hists *hists, hists__resort_cb_t cb)
struct hist_entry *he;
he = rb_entry(next, struct hist_entry, rb_node);
- ret = cb(he, NULL);
+ ret = cb(he, arg);
if (ret)
break;
next = rb_next(&he->rb_node);
@@ -2468,7 +2456,7 @@ static void print_cacheline(struct c2c_hists *c2c_hists,
hists__fprintf(&c2c_hists->hists, false, 0, 0, 0, out, false);
}
-static void print_pareto(FILE *out)
+static void print_pareto(FILE *out, struct perf_env *env)
{
struct perf_hpp_list hpp_list;
struct rb_node *nd;
@@ -2493,7 +2481,7 @@ static void print_pareto(FILE *out)
"dcacheline";
perf_hpp_list__init(&hpp_list);
- ret = hpp_list__parse(&hpp_list, cl_output, NULL);
+	ret = hpp_list__parse(&hpp_list, cl_output, /*sort=*/NULL, env);
if (WARN_ONCE(ret, "failed to setup sort entries\n"))
return;
@@ -2558,7 +2546,7 @@ static void perf_c2c__hists_fprintf(FILE *out, struct perf_session *session)
fprintf(out, "=================================================\n");
fprintf(out, "#\n");
- print_pareto(out);
+ print_pareto(out, perf_session__env(session));
}
#ifdef HAVE_SLANG_SUPPORT
@@ -2596,7 +2584,7 @@ perf_c2c_cacheline_browser__title(struct hist_browser *browser,
he = cl_browser->he;
if (he->mem_info)
- addr = cl_address(he->mem_info->daddr.addr, chk_double_cl);
+ addr = cl_address(mem_info__daddr(he->mem_info)->addr, chk_double_cl);
scnprintf(bf, size, "Cacheline 0x%lx", addr);
return 0;
@@ -3050,6 +3038,7 @@ static int perf_c2c__report(int argc, const char **argv)
};
int err = 0;
const char *output_str, *sort_str = NULL;
+ struct perf_env *env;
argc = parse_options(argc, argv, options, report_c2c_usage,
PARSE_OPT_STOP_AT_NON_OPTION);
@@ -3073,20 +3062,33 @@ static int perf_c2c__report(int argc, const char **argv)
data.path = input_name;
data.force = symbol_conf.force;
+ perf_tool__init(&c2c.tool, /*ordered_events=*/true);
+ c2c.tool.sample = process_sample_event;
+ c2c.tool.mmap = perf_event__process_mmap;
+ c2c.tool.mmap2 = perf_event__process_mmap2;
+ c2c.tool.comm = perf_event__process_comm;
+ c2c.tool.exit = perf_event__process_exit;
+ c2c.tool.fork = perf_event__process_fork;
+ c2c.tool.lost = perf_event__process_lost;
+ c2c.tool.attr = perf_event__process_attr;
+ c2c.tool.auxtrace_info = perf_event__process_auxtrace_info;
+ c2c.tool.auxtrace = perf_event__process_auxtrace;
+ c2c.tool.auxtrace_error = perf_event__process_auxtrace_error;
+ c2c.tool.ordering_requires_timestamps = true;
session = perf_session__new(&data, &c2c.tool);
if (IS_ERR(session)) {
err = PTR_ERR(session);
pr_debug("Error creating perf session\n");
goto out;
}
-
+ env = perf_session__env(session);
/*
* Use the 'tot' as default display type if user doesn't specify it;
* since Arm64 platform doesn't support HITMs flag, use 'peer' as the
* default display type.
*/
if (!display) {
- if (!strcmp(perf_env__arch(&session->header.env), "arm64"))
+ if (!strcmp(perf_env__arch(env), "arm64"))
display = "peer";
else
display = "tot";
@@ -3102,7 +3104,7 @@ static int perf_c2c__report(int argc, const char **argv)
goto out_session;
}
- err = c2c_hists__init(&c2c.hists, "dcacheline", 2);
+ err = c2c_hists__init(&c2c.hists, "dcacheline", 2, perf_session__env(session));
if (err) {
pr_debug("Failed to initialize hists\n");
goto out_session;
@@ -3116,7 +3118,7 @@ static int perf_c2c__report(int argc, const char **argv)
goto out_session;
}
- err = mem2node__init(&c2c.mem2node, &session->header.env);
+ err = mem2node__init(&c2c.mem2node, env);
if (err)
goto out_session;
@@ -3124,7 +3126,7 @@ static int perf_c2c__report(int argc, const char **argv)
if (err)
goto out_mem2node;
- if (symbol__init(&session->header.env) < 0)
+ if (symbol__init(env) < 0)
goto out_mem2node;
/* No pipe support at the moment. */
@@ -3186,13 +3188,13 @@ static int perf_c2c__report(int argc, const char **argv)
else if (c2c.display == DISPLAY_SNP_PEER)
sort_str = "tot_peer";
- c2c_hists__reinit(&c2c.hists, output_str, sort_str);
+ c2c_hists__reinit(&c2c.hists, output_str, sort_str, perf_session__env(session));
ui_progress__init(&prog, c2c.hists.hists.nr_entries, "Sorting...");
hists__collapse_resort(&c2c.hists.hists, NULL);
hists__output_resort_cb(&c2c.hists.hists, &prog, resort_shared_cl_cb);
- hists__iterate_cb(&c2c.hists.hists, resort_cl_cb);
+ hists__iterate_cb(&c2c.hists.hists, resort_cl_cb, perf_session__env(session));
ui_progress__finish();
@@ -3215,12 +3217,19 @@ static int parse_record_events(const struct option *opt,
const char *str, int unset __maybe_unused)
{
bool *event_set = (bool *) opt->value;
+ struct perf_pmu *pmu;
+
+ pmu = perf_mem_events_find_pmu();
+ if (!pmu) {
+ pr_err("failed: there is no PMU that supports perf c2c\n");
+ exit(-1);
+ }
if (!strcmp(str, "list")) {
- perf_mem_events__list();
+ perf_pmu__mem_events_list(pmu);
exit(0);
}
- if (perf_mem_events__parse(str))
+ if (perf_pmu__mem_events_parse(pmu, str))
exit(-1);
*event_set = true;
@@ -3238,13 +3247,14 @@ static const char * const *record_mem_usage = __usage_record;
static int perf_c2c__record(int argc, const char **argv)
{
- int rec_argc, i = 0, j, rec_tmp_nr = 0;
+ int rec_argc, i = 0, j;
const char **rec_argv;
- char **rec_tmp;
+ char *event_name_storage = NULL;
int ret;
bool all_user = false, all_kernel = false;
bool event_set = false;
struct perf_mem_event *e;
+ struct perf_pmu *pmu;
struct option options[] = {
OPT_CALLBACK('e', "event", &event_set, "event",
"event selector. Use 'perf c2c record -e list' to list available events",
@@ -3256,7 +3266,13 @@ static int perf_c2c__record(int argc, const char **argv)
OPT_END()
};
- if (perf_mem_events__init()) {
+ pmu = perf_mem_events_find_pmu();
+ if (!pmu) {
+ pr_err("failed: no PMU supports the memory events\n");
+ return -1;
+ }
+
+ if (perf_pmu__mem_events_init()) {
pr_err("failed: memory events not supported\n");
return -1;
}
@@ -3265,47 +3281,37 @@ static int perf_c2c__record(int argc, const char **argv)
PARSE_OPT_KEEP_UNKNOWN);
/* Max number of arguments multiplied by number of PMUs that can support them. */
- rec_argc = argc + 11 * perf_pmus__num_mem_pmus();
+ rec_argc = argc + 11 * (perf_pmu__mem_events_num_mem_pmus(pmu) + 1);
rec_argv = calloc(rec_argc + 1, sizeof(char *));
if (!rec_argv)
return -1;
- rec_tmp = calloc(rec_argc + 1, sizeof(char *));
- if (!rec_tmp) {
- free(rec_argv);
- return -1;
- }
-
rec_argv[i++] = "record";
if (!event_set) {
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__LOAD_STORE);
+ e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD_STORE);
/*
* The load and store operations are required, use the event
* PERF_MEM_EVENTS__LOAD_STORE if it is supported.
*/
if (e->tag) {
- e->record = true;
+ perf_mem_record[PERF_MEM_EVENTS__LOAD_STORE] = true;
rec_argv[i++] = "-W";
} else {
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__LOAD);
- e->record = true;
-
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__STORE);
- e->record = true;
+ perf_mem_record[PERF_MEM_EVENTS__LOAD] = true;
+ perf_mem_record[PERF_MEM_EVENTS__STORE] = true;
}
}
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__LOAD);
- if (e->record)
+ if (perf_mem_record[PERF_MEM_EVENTS__LOAD])
rec_argv[i++] = "-W";
rec_argv[i++] = "-d";
rec_argv[i++] = "--phys-data";
rec_argv[i++] = "--sample-cpu";
- ret = perf_mem_events__record_args(rec_argv, &i, rec_tmp, &rec_tmp_nr);
+ ret = perf_mem_events__record_args(rec_argv, &i, &event_name_storage);
if (ret)
goto out;
@@ -3332,10 +3338,7 @@ static int perf_c2c__record(int argc, const char **argv)
ret = cmd_record(i, rec_argv);
out:
- for (i = 0; i < rec_tmp_nr; i++)
- free(rec_tmp[i]);
-
- free(rec_tmp);
+ free(event_name_storage);
free(rec_argv);
return ret;
}
diff --git a/tools/perf/builtin-check.c b/tools/perf/builtin-check.c
new file mode 100644
index 000000000000..9ce2e71999df
--- /dev/null
+++ b/tools/perf/builtin-check.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "builtin.h"
+#include "color.h"
+#include "util/bpf-utils.h"
+#include "util/debug.h"
+#include "util/header.h"
+#include <tools/config.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <string.h>
+#include <subcmd/parse-options.h>
+
+static const char * const check_subcommands[] = { "feature", NULL };
+static struct option check_options[] = {
+ OPT_BOOLEAN('q', "quiet", &quiet, "do not show any warnings or messages"),
+ OPT_END()
+};
+static struct option check_feature_options[] = { OPT_PARENT(check_options) };
+
+static const char *check_usage[] = { NULL, NULL };
+static const char *check_feature_usage[] = {
+ "perf check feature <feature_list>",
+ NULL
+};
+
+#define FEATURE_STATUS(name_, macro_) { \
+ .name = name_, \
+ .macro = #macro_, \
+ .is_builtin = IS_BUILTIN(macro_) }
+
+#define FEATURE_STATUS_TIP(name_, macro_, tip_) { \
+ .name = name_, \
+ .macro = #macro_, \
+ .tip = tip_, \
+ .is_builtin = IS_BUILTIN(macro_) }
+
+struct feature_status supported_features[] = {
+ FEATURE_STATUS("aio", HAVE_AIO_SUPPORT),
+ FEATURE_STATUS("bpf", HAVE_LIBBPF_SUPPORT),
+ FEATURE_STATUS("bpf_skeletons", HAVE_BPF_SKEL),
+ FEATURE_STATUS("debuginfod", HAVE_DEBUGINFOD_SUPPORT),
+ FEATURE_STATUS("dwarf", HAVE_LIBDW_SUPPORT),
+ FEATURE_STATUS("dwarf_getlocations", HAVE_LIBDW_SUPPORT),
+ FEATURE_STATUS("dwarf-unwind", HAVE_DWARF_UNWIND_SUPPORT),
+ FEATURE_STATUS("auxtrace", HAVE_AUXTRACE_SUPPORT),
+ FEATURE_STATUS_TIP("libbfd", HAVE_LIBBFD_SUPPORT, "Deprecated, license incompatibility, use BUILD_NONDISTRO=1 and install binutils-dev[el]"),
+ FEATURE_STATUS("libbpf-strings", HAVE_LIBBPF_STRINGS_SUPPORT),
+ FEATURE_STATUS("libcapstone", HAVE_LIBCAPSTONE_SUPPORT),
+ FEATURE_STATUS("libdw-dwarf-unwind", HAVE_LIBDW_SUPPORT),
+ FEATURE_STATUS("libelf", HAVE_LIBELF_SUPPORT),
+ FEATURE_STATUS("libLLVM", HAVE_LIBLLVM_SUPPORT),
+ FEATURE_STATUS("libnuma", HAVE_LIBNUMA_SUPPORT),
+ FEATURE_STATUS("libopencsd", HAVE_CSTRACE_SUPPORT),
+ FEATURE_STATUS_TIP("libperl", HAVE_LIBPERL_SUPPORT, "Deprecated, use LIBPERL=1 and install perl-ExtUtils-Embed/libperl-dev to build with it"),
+ FEATURE_STATUS("libpfm4", HAVE_LIBPFM),
+ FEATURE_STATUS("libpython", HAVE_LIBPYTHON_SUPPORT),
+ FEATURE_STATUS("libslang", HAVE_SLANG_SUPPORT),
+ FEATURE_STATUS("libtraceevent", HAVE_LIBTRACEEVENT),
+ FEATURE_STATUS_TIP("libunwind", HAVE_LIBUNWIND_SUPPORT, "Deprecated, use LIBUNWIND=1 and install libunwind-dev[el] to build with it"),
+ FEATURE_STATUS("lzma", HAVE_LZMA_SUPPORT),
+ FEATURE_STATUS("numa_num_possible_cpus", HAVE_LIBNUMA_SUPPORT),
+ FEATURE_STATUS("zlib", HAVE_ZLIB_SUPPORT),
+ FEATURE_STATUS("zstd", HAVE_ZSTD_SUPPORT),
+
+ /* this should remain at end, to know the array end */
+ FEATURE_STATUS(NULL, _)
+};
+
+static void on_off_print(const char *status)
+{
+ printf("[ ");
+
+ if (!strcmp(status, "OFF"))
+ color_fprintf(stdout, PERF_COLOR_RED, "%-3s", status);
+ else
+ color_fprintf(stdout, PERF_COLOR_GREEN, "%-3s", status);
+
+ printf(" ]");
+}
+
+/* Helper function to print status of a feature along with name/macro */
+void feature_status__printf(const struct feature_status *feature)
+{
+ const char *name = feature->name, *macro = feature->macro,
+ *status = feature->is_builtin ? "on" : "OFF";
+
+ printf("%22s: ", name);
+ on_off_print(status);
+ printf(" # %s", macro);
+
+ if (!feature->is_builtin && feature->tip)
+ printf(" ( tip: %s )", feature->tip);
+
+ putchar('\n');
+}
+
+/**
+ * check whether "feature" is built-in with perf
+ *
+ * returns:
+ * 0: NOT built-in or Feature not known
+ * 1: Built-in
+ */
+static int has_support(const char *feature)
+{
+ for (int i = 0; supported_features[i].name; ++i) {
+ if ((strcasecmp(feature, supported_features[i].name) == 0) ||
+ (strcasecmp(feature, supported_features[i].macro) == 0)) {
+ if (!quiet)
+ feature_status__printf(&supported_features[i]);
+ return supported_features[i].is_builtin;
+ }
+ }
+
+ if (!quiet)
+ pr_err("Unknown feature '%s', please use 'perf version --build-options' to see which ones are available.\n", feature);
+
+ return 0;
+}
+
+
+/**
+ * Usage: 'perf check feature <feature_list>'
+ *
+ * <feature_list> can be a single feature name/macro, or a comma-separated list
+ * of feature names/macros
+ * e.g. the argument can be "libtraceevent" or "libtraceevent,bpf", etc.
+ *
+ * In case of a comma-separated list, feature_enabled will be 1 only if
+ * all features passed in the string are supported
+ *
+ * Note that argv will get modified
+ */
+static int subcommand_feature(int argc, const char **argv)
+{
+ char *feature_list;
+ char *feature_name;
+ int feature_enabled;
+
+ argc = parse_options(argc, argv, check_feature_options,
+ check_feature_usage, 0);
+
+ if (!argc)
+ usage_with_options(check_feature_usage, check_feature_options);
+
+ if (argc > 1) {
+ pr_err("Too many arguments passed to 'perf check feature'\n");
+ return -1;
+ }
+
+ feature_enabled = 1;
+ /* feature_list is a non-const copy of 'argv[0]' */
+ feature_list = strdup(argv[0]);
+ if (!feature_list) {
+ pr_err("ERROR: failed to allocate memory for feature list\n");
+ return -1;
+ }
+
+ feature_name = strtok(feature_list, ",");
+
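+	/* A single unsupported feature in the list clears feature_enabled */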
+ while (feature_name) {
+ feature_enabled &= has_support(feature_name);
+ feature_name = strtok(NULL, ",");
+ }
+
+ free(feature_list);
+
+ return !feature_enabled;
+}
+
+int cmd_check(int argc, const char **argv)
+{
+ argc = parse_options_subcommand(argc, argv, check_options,
+ check_subcommands, check_usage, 0);
+
+ if (!argc)
+ usage_with_options(check_usage, check_options);
+
+ if (strcmp(argv[0], "feature") == 0)
+ return subcommand_feature(argc, argv);
+
+ /* If no subcommand matched above, print usage help */
+ pr_err("Unknown subcommand: %s\n", argv[0]);
+ usage_with_options(check_usage, check_options);
+
+ /* free usage string allocated by parse_options_subcommand */
+ free((void *)check_usage[0]);
+
+ return 0;
+}
diff --git a/tools/perf/builtin-config.c b/tools/perf/builtin-config.c
index 2e8363778935..45b5312fbe83 100644
--- a/tools/perf/builtin-config.c
+++ b/tools/perf/builtin-config.c
@@ -154,6 +154,44 @@ static int parse_config_arg(char *arg, char **var, char **value)
return 0;
}
+int perf_config__set_variable(const char *var, const char *value)
+{
+ char path[PATH_MAX];
+ char *user_config = mkpath(path, sizeof(path), "%s/.perfconfig", getenv("HOME"));
+ const char *config_filename;
+ struct perf_config_set *set;
+ int ret = -1;
+
+ if (use_system_config)
+ config_exclusive_filename = perf_etc_perfconfig();
+ else if (use_user_config)
+ config_exclusive_filename = user_config;
+
+ if (!config_exclusive_filename)
+ config_filename = user_config;
+ else
+ config_filename = config_exclusive_filename;
+
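+	/* Collect var=value into the config set and write it back to config_filename */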
+ set = perf_config_set__new();
+ if (!set)
+ goto out_err;
+
+ if (perf_config_set__collect(set, config_filename, var, value) < 0) {
+ pr_err("Failed to add '%s=%s'\n", var, value);
+ goto out_err;
+ }
+
+ if (set_config(set, config_filename) < 0) {
+ pr_err("Failed to set the configs on %s\n", config_filename);
+ goto out_err;
+ }
+
+ ret = 0;
+out_err:
+ perf_config_set__delete(set);
+ return ret;
+}
+
int cmd_config(int argc, const char **argv)
{
int i, ret = -1;
diff --git a/tools/perf/builtin-daemon.c b/tools/perf/builtin-daemon.c
index 83954af36753..f0568431fbd5 100644
--- a/tools/perf/builtin-daemon.c
+++ b/tools/perf/builtin-daemon.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <internal/lib.h>
+#include <inttypes.h>
#include <subcmd/parse-options.h>
#include <api/fd/array.h>
#include <api/fs/fs.h>
@@ -523,7 +524,7 @@ static int daemon_session__control(struct daemon_session *session,
session->base, SESSION_CONTROL);
control = open(control_path, O_WRONLY|O_NONBLOCK);
- if (!control)
+ if (control < 0)
return -1;
if (do_ack) {
@@ -532,7 +533,7 @@ static int daemon_session__control(struct daemon_session *session,
session->base, SESSION_ACK);
ack = open(ack_path, O_RDONLY, O_NONBLOCK);
- if (!ack) {
+ if (ack < 0) {
close(control);
return -1;
}
@@ -688,9 +689,9 @@ static int cmd_session_list(struct daemon *daemon, union cmd *cmd, FILE *out)
/* lock */
csv_sep, daemon->base, "lock");
- fprintf(out, "%c%lu",
+ fprintf(out, "%c%" PRIu64,
/* session up time */
- csv_sep, (curr - daemon->start) / 60);
+ csv_sep, (uint64_t)((curr - daemon->start) / 60));
fprintf(out, "\n");
} else {
@@ -700,8 +701,8 @@ static int cmd_session_list(struct daemon *daemon, union cmd *cmd, FILE *out)
daemon->base, SESSION_OUTPUT);
fprintf(out, " lock: %s/lock\n",
daemon->base);
- fprintf(out, " up: %lu minutes\n",
- (curr - daemon->start) / 60);
+ fprintf(out, " up: %" PRIu64 " minutes\n",
+ (uint64_t)((curr - daemon->start) / 60));
}
}
@@ -727,9 +728,9 @@ static int cmd_session_list(struct daemon *daemon, union cmd *cmd, FILE *out)
/* session ack */
csv_sep, session->base, SESSION_ACK);
- fprintf(out, "%c%lu",
+ fprintf(out, "%c%" PRIu64,
/* session up time */
- csv_sep, (curr - session->start) / 60);
+ csv_sep, (uint64_t)((curr - session->start) / 60));
fprintf(out, "\n");
} else {
@@ -745,8 +746,8 @@ static int cmd_session_list(struct daemon *daemon, union cmd *cmd, FILE *out)
session->base, SESSION_CONTROL);
fprintf(out, " ack: %s/%s\n",
session->base, SESSION_ACK);
- fprintf(out, " up: %lu minutes\n",
- (curr - session->start) / 60);
+ fprintf(out, " up: %" PRIu64 " minutes\n",
+ (uint64_t)((curr - session->start) / 60));
}
}
@@ -1433,7 +1434,7 @@ static int __cmd_signal(struct daemon *daemon, struct option parent_options[],
}
memset(&cmd, 0, sizeof(cmd));
- cmd.signal.cmd = CMD_SIGNAL,
+ cmd.signal.cmd = CMD_SIGNAL;
cmd.signal.sig = SIGUSR2;
strncpy(cmd.signal.name, name, sizeof(cmd.signal.name) - 1);
diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index 57d300d8e570..53d5ea4a6a4f 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -6,6 +6,7 @@
* DSOs and symbol information, sort them and produce a diff.
*/
#include "builtin.h"
+#include "perf.h"
#include "util/debug.h"
#include "util/event.h"
@@ -388,7 +389,7 @@ struct hist_entry_ops block_hist_ops = {
.free = block_hist_free,
};
-static int diff__process_sample_event(struct perf_tool *tool,
+static int diff__process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -431,8 +432,8 @@ static int diff__process_sample_event(struct perf_tool *tool,
goto out;
}
- hist__account_cycles(sample->branch_stack, &al, sample, false,
- NULL);
+ hist__account_cycles(sample->branch_stack, &al, sample,
+ false, NULL, evsel);
break;
case COMPUTE_STREAM:
@@ -467,29 +468,15 @@ out:
return ret;
}
-static struct perf_diff pdiff = {
- .tool = {
- .sample = diff__process_sample_event,
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .comm = perf_event__process_comm,
- .exit = perf_event__process_exit,
- .fork = perf_event__process_fork,
- .lost = perf_event__process_lost,
- .namespaces = perf_event__process_namespaces,
- .cgroup = perf_event__process_cgroup,
- .ordered_events = true,
- .ordering_requires_timestamps = true,
- },
-};
+static struct perf_diff pdiff;
-static struct evsel *evsel_match(struct evsel *evsel,
- struct evlist *evlist)
+static struct evsel *evsel_match(struct evsel *evsel, struct evlist *evlist)
{
struct evsel *e;
evlist__for_each_entry(evlist, e) {
- if (evsel__match2(evsel, e))
+ if ((evsel->core.attr.type == e->core.attr.type) &&
+ (evsel->core.attr.config == e->core.attr.config))
return e;
}
@@ -705,7 +692,7 @@ static void hists__precompute(struct hists *hists)
if (compute == COMPUTE_CYCLES) {
bh = container_of(he, struct block_hist, he);
init_block_hist(bh);
- block_info__process_sym(he, bh, NULL, 0);
+ block_info__process_sym(he, bh, NULL, 0, 0);
}
data__for_each_file_new(i, d) {
@@ -728,7 +715,7 @@ static void hists__precompute(struct hists *hists)
pair_bh = container_of(pair, struct block_hist,
he);
init_block_hist(pair_bh);
- block_info__process_sym(pair, pair_bh, NULL, 0);
+ block_info__process_sym(pair, pair_bh, NULL, 0, 0);
bh = container_of(he, struct block_hist, he);
@@ -1033,12 +1020,12 @@ static int process_base_stream(struct data__file *data_base,
continue;
es_base = evsel_streams__entry(data_base->evlist_streams,
- evsel_base->core.idx);
+ evsel_base);
if (!es_base)
return -1;
es_pair = evsel_streams__entry(data_pair->evlist_streams,
- evsel_pair->core.idx);
+ evsel_pair);
if (!es_pair)
return -1;
@@ -1959,6 +1946,18 @@ int cmd_diff(int argc, const char **argv)
if (ret < 0)
return ret;
+ perf_tool__init(&pdiff.tool, /*ordered_events=*/true);
+ pdiff.tool.sample = diff__process_sample_event;
+ pdiff.tool.mmap = perf_event__process_mmap;
+ pdiff.tool.mmap2 = perf_event__process_mmap2;
+ pdiff.tool.comm = perf_event__process_comm;
+ pdiff.tool.exit = perf_event__process_exit;
+ pdiff.tool.fork = perf_event__process_fork;
+ pdiff.tool.lost = perf_event__process_lost;
+ pdiff.tool.namespaces = perf_event__process_namespaces;
+ pdiff.tool.cgroup = perf_event__process_cgroup;
+ pdiff.tool.ordering_requires_timestamps = true;
+
perf_config(diff__config, NULL);
argc = parse_options(argc, argv, options, diff_usage, 0);
@@ -2004,7 +2003,7 @@ int cmd_diff(int argc, const char **argv)
sort__mode = SORT_MODE__DIFF;
}
- if (setup_sorting(NULL) < 0)
+ if (setup_sorting(/*evlist=*/NULL, perf_session__env(data__files[0].session)) < 0)
usage_with_options(diff_usage, options);
setup_pager();
diff --git a/tools/perf/builtin-evlist.c b/tools/perf/builtin-evlist.c
index 7117656939e7..a9bd7bbef5a9 100644
--- a/tools/perf/builtin-evlist.c
+++ b/tools/perf/builtin-evlist.c
@@ -35,13 +35,13 @@ static int __cmd_evlist(const char *file_name, struct perf_attr_details *details
.mode = PERF_DATA_MODE_READ,
.force = details->force,
};
- struct perf_tool tool = {
- /* only needed for pipe mode */
- .attr = perf_event__process_attr,
- .feature = process_header_feature,
- };
- bool has_tracepoint = false;
+ struct perf_tool tool;
+ bool has_tracepoint = false, has_group = false;
+ perf_tool__init(&tool, /*ordered_events=*/false);
+ /* only needed for pipe mode */
+ tool.attr = perf_event__process_attr;
+ tool.feature = process_header_feature;
session = perf_session__new(&data, &tool);
if (IS_ERR(session))
return PTR_ERR(session);
@@ -54,11 +54,17 @@ static int __cmd_evlist(const char *file_name, struct perf_attr_details *details
if (pos->core.attr.type == PERF_TYPE_TRACEPOINT)
has_tracepoint = true;
+
+ if (!evsel__is_group_leader(pos))
+ has_group = true;
}
if (has_tracepoint && !details->trace_fields)
printf("# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events\n");
+ if (has_group && !details->event_group)
+ printf("# Tip: use 'perf evlist -g' to show group information\n");
+
perf_session__delete(session);
return 0;
}
diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
index eb30c8eca488..6b6eec65f93f 100644
--- a/tools/perf/builtin-ftrace.c
+++ b/tools/perf/builtin-ftrace.c
@@ -13,24 +13,29 @@
#include <signal.h>
#include <stdlib.h>
#include <fcntl.h>
+#include <inttypes.h>
#include <math.h>
#include <poll.h>
#include <ctype.h>
#include <linux/capability.h>
#include <linux/string.h>
+#include <sys/stat.h>
#include "debug.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
+#include <api/io.h>
#include <api/fs/tracing_path.h>
#include "evlist.h"
#include "target.h"
#include "cpumap.h"
+#include "hashmap.h"
#include "thread_map.h"
#include "strfilter.h"
#include "util/cap.h"
#include "util/config.h"
#include "util/ftrace.h"
+#include "util/stat.h"
#include "util/units.h"
#include "util/parse-sublevel-options.h"
@@ -39,6 +44,10 @@
static volatile sig_atomic_t workload_exec_errno;
static volatile sig_atomic_t done;
+static struct stats latency_stats; /* for tracepoints */
+
+static char tracing_instance[PATH_MAX]; /* Trace instance directory */
+
static void sig_handler(int sig __maybe_unused)
{
done = true;
@@ -59,6 +68,69 @@ static void ftrace__workload_exec_failed_signal(int signo __maybe_unused,
done = true;
}
+static bool check_ftrace_capable(void)
+{
+ bool used_root;
+
+ if (perf_cap__capable(CAP_PERFMON, &used_root))
+ return true;
+
+ if (!used_root && perf_cap__capable(CAP_SYS_ADMIN, &used_root))
+ return true;
+
+ pr_err("ftrace only works for %s!\n",
+ used_root ? "root"
+ : "users with the CAP_PERFMON or CAP_SYS_ADMIN capability"
+ );
+ return false;
+}
+
+static bool is_ftrace_supported(void)
+{
+ char *file;
+ bool supported = false;
+
+ file = get_tracing_file("set_ftrace_pid");
+ if (!file) {
+ pr_debug("cannot get tracing file set_ftrace_pid\n");
+ return false;
+ }
+
+ if (!access(file, F_OK))
+ supported = true;
+
+ put_tracing_file(file);
+ return supported;
+}
+
+/*
+ * Wrapper to test if a file in directory .../tracing/instances/XXX
+ * exists. If so return the .../tracing/instances/XXX file for use.
+ * Otherwise the file exists only in directory .../tracing and
+ * is applicable to all instances, for example file available_filter_functions.
+ * Return that file name in this case.
+ *
+ * This function works similarly to get_tracing_file() and expects its caller
+ * to free the returned file name.
+ *
+ * The global variable tracing_instance is set by init_tracing_instance(),
+ * called at startup, to a process-specific tracing subdirectory.
+ */
+static char *get_tracing_instance_file(const char *name)
+{
+ char *file;
+
+ if (asprintf(&file, "%s/%s", tracing_instance, name) < 0)
+ return NULL;
+
+ if (!access(file, F_OK))
+ return file;
+
+ free(file);
+ file = get_tracing_file(name);
+ return file;
+}
+
static int __write_tracing_file(const char *name, const char *val, bool append)
{
char *file;
@@ -68,7 +140,7 @@ static int __write_tracing_file(const char *name, const char *val, bool append)
char errbuf[512];
char *val_copy;
- file = get_tracing_file(name);
+ file = get_tracing_instance_file(name);
if (!file) {
pr_debug("cannot get tracing file: %s\n", name);
return -1;
@@ -126,7 +198,7 @@ static int read_tracing_file_to_stdout(const char *name)
int fd;
int ret = -1;
- file = get_tracing_file(name);
+ file = get_tracing_instance_file(name);
if (!file) {
pr_debug("cannot get tracing file: %s\n", name);
return -1;
@@ -168,7 +240,7 @@ static int read_tracing_file_by_line(const char *name,
char *file;
FILE *fp;
- file = get_tracing_file(name);
+ file = get_tracing_instance_file(name);
if (!file) {
pr_debug("cannot get tracing file: %s\n", name);
return -1;
@@ -228,6 +300,11 @@ static void reset_tracing_options(struct perf_ftrace *ftrace __maybe_unused)
write_tracing_option_file("funcgraph-irqs", "1");
write_tracing_option_file("funcgraph-proc", "0");
write_tracing_option_file("funcgraph-abstime", "0");
+ write_tracing_option_file("funcgraph-tail", "0");
+ write_tracing_option_file("funcgraph-args", "0");
+ write_tracing_option_file("funcgraph-retval", "0");
+ write_tracing_option_file("funcgraph-retval-hex", "0");
+ write_tracing_option_file("funcgraph-retaddr", "0");
write_tracing_option_file("latency-format", "0");
write_tracing_option_file("irq-info", "0");
}
@@ -257,6 +334,39 @@ static int reset_tracing_files(struct perf_ftrace *ftrace __maybe_unused)
return 0;
}
+/* Remove .../tracing/instances/XXX subdirectory created with
+ * init_tracing_instance().
+ */
+static void exit_tracing_instance(void)
+{
+ if (rmdir(tracing_instance))
+ pr_err("failed to delete tracing/instances directory\n");
+}
+
+/* Create subdirectory within .../tracing/instances/XXX to have session
+ * or process specific setup. To delete this setup, simply remove the
+ * subdirectory.
+ */
+static int init_tracing_instance(void)
+{
+ char dirname[] = "instances/perf-ftrace-XXXXXX";
+ char *path;
+
+ path = get_tracing_file(dirname);
+ if (!path)
+ goto error;
+ strncpy(tracing_instance, path, sizeof(tracing_instance) - 1);
+ put_tracing_file(path);
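+	/* mkdtemp() fills in the trailing XXXXXX and creates the unique instance directory */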
+ path = mkdtemp(tracing_instance);
+ if (!path)
+ goto error;
+ return 0;
+
+error:
+ pr_err("failed to create tracing/instances directory\n");
+ return -1;
+}
+
static int set_tracing_pid(struct perf_ftrace *ftrace)
{
int i;
@@ -436,6 +546,41 @@ static int set_tracing_sleep_time(struct perf_ftrace *ftrace)
return 0;
}
+static int set_tracing_funcgraph_args(struct perf_ftrace *ftrace)
+{
+ if (ftrace->graph_args) {
+ if (write_tracing_option_file("funcgraph-args", "1") < 0)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int set_tracing_funcgraph_retval(struct perf_ftrace *ftrace)
+{
+ if (ftrace->graph_retval || ftrace->graph_retval_hex) {
+ if (write_tracing_option_file("funcgraph-retval", "1") < 0)
+ return -1;
+ }
+
+ if (ftrace->graph_retval_hex) {
+ if (write_tracing_option_file("funcgraph-retval-hex", "1") < 0)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int set_tracing_funcgraph_retaddr(struct perf_ftrace *ftrace)
+{
+ if (ftrace->graph_retaddr) {
+ if (write_tracing_option_file("funcgraph-retaddr", "1") < 0)
+ return -1;
+ }
+
+ return 0;
+}
+
static int set_tracing_funcgraph_irqs(struct perf_ftrace *ftrace)
{
if (!ftrace->graph_noirqs)
@@ -464,6 +609,17 @@ static int set_tracing_funcgraph_verbose(struct perf_ftrace *ftrace)
return 0;
}
+static int set_tracing_funcgraph_tail(struct perf_ftrace *ftrace)
+{
+ if (!ftrace->graph_tail)
+ return 0;
+
+ if (write_tracing_option_file("funcgraph-tail", "1") < 0)
+ return -1;
+
+ return 0;
+}
+
static int set_tracing_thresh(struct perf_ftrace *ftrace)
{
int ret;
@@ -525,6 +681,21 @@ static int set_tracing_options(struct perf_ftrace *ftrace)
return -1;
}
+ if (set_tracing_funcgraph_args(ftrace) < 0) {
+ pr_err("failed to set tracing option funcgraph-args\n");
+ return -1;
+ }
+
+ if (set_tracing_funcgraph_retval(ftrace) < 0) {
+ pr_err("failed to set tracing option funcgraph-retval\n");
+ return -1;
+ }
+
+ if (set_tracing_funcgraph_retaddr(ftrace) < 0) {
+ pr_err("failed to set tracing option funcgraph-retaddr\n");
+ return -1;
+ }
+
if (set_tracing_funcgraph_irqs(ftrace) < 0) {
pr_err("failed to set tracing option funcgraph-irqs\n");
return -1;
@@ -540,6 +711,11 @@ static int set_tracing_options(struct perf_ftrace *ftrace)
return -1;
}
+ if (set_tracing_funcgraph_tail(ftrace) < 0) {
+ pr_err("failed to set tracing option funcgraph-tail\n");
+ return -1;
+ }
+
return 0;
}
@@ -569,28 +745,19 @@ static int __cmd_ftrace(struct perf_ftrace *ftrace)
.events = POLLIN,
};
- if (!(perf_cap__capable(CAP_PERFMON) ||
- perf_cap__capable(CAP_SYS_ADMIN))) {
- pr_err("ftrace only works for %s!\n",
-#ifdef HAVE_LIBCAP_SUPPORT
- "users with the CAP_PERFMON or CAP_SYS_ADMIN capability"
-#else
- "root"
-#endif
- );
- return -1;
- }
-
select_tracer(ftrace);
+ if (init_tracing_instance() < 0)
+ goto out;
+
if (reset_tracing_files(ftrace) < 0) {
pr_err("failed to reset ftrace\n");
- goto out;
+ goto out_reset;
}
/* reset ftrace buffer */
if (write_tracing_file("trace", "0") < 0)
- goto out;
+ goto out_reset;
if (set_tracing_options(ftrace) < 0)
goto out_reset;
@@ -602,7 +769,7 @@ static int __cmd_ftrace(struct perf_ftrace *ftrace)
setup_pager();
- trace_file = get_tracing_file("trace_pipe");
+ trace_file = get_tracing_instance_file("trace_pipe");
if (!trace_file) {
pr_err("failed to open trace_pipe\n");
goto out_reset;
@@ -677,14 +844,17 @@ static int __cmd_ftrace(struct perf_ftrace *ftrace)
out_close_fd:
close(trace_fd);
out_reset:
- reset_tracing_files(ftrace);
+ exit_tracing_instance();
out:
return (done && !workload_exec_errno) ? 0 : -1;
}
-static void make_histogram(int buckets[], char *buf, size_t len, char *linebuf,
- bool use_nsec)
+static void make_histogram(struct perf_ftrace *ftrace, int buckets[],
+ char *buf, size_t len, char *linebuf)
{
+ int min_latency = ftrace->min_latency;
+ int max_latency = ftrace->max_latency;
+ unsigned int bucket_num = ftrace->bucket_num;
char *p, *q;
char *unit;
double num;
@@ -730,16 +900,34 @@ static void make_histogram(int buckets[], char *buf, size_t len, char *linebuf,
if (!unit || strncmp(unit, " us", 3))
goto next;
- if (use_nsec)
+ if (ftrace->use_nsec)
num *= 1000;
- i = log2(num);
- if (i < 0)
- i = 0;
- if (i >= NUM_BUCKET)
- i = NUM_BUCKET - 1;
+ i = 0;
+ if (num < min_latency)
+ goto do_inc;
+
+ num -= min_latency;
+
+ if (!ftrace->bucket_range) {
+ i = log2(num);
+ if (i < 0)
+ i = 0;
+ } else {
+ // The first bucket (i == 0) keeps durations at or below
+ // min_latency, which defaults to one bucket_range.
+ if (num > 0) // entries 1 .. bucket_num-2 are bucket_range-wide slots above min_latency
+ i = num / ftrace->bucket_range + 1;
+ if (num >= max_latency - min_latency)
+ i = bucket_num - 1;
+ }
+ if ((unsigned)i >= bucket_num)
+ i = bucket_num - 1;
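+ /* Undo the min_latency offset so update_stats() below sees the absolute latency. */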
+ num += min_latency;
+do_inc:
buckets[i]++;
+ update_stats(&latency_stats, num);
next:
/* empty the line buffer for the next output */
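To make the fixed-range bucketing in make_histogram() concrete, here is a minimal standalone sketch (not part of the patch; the 100/600/50 values are made up) that reproduces the same index mapping: bucket 0 catches everything at or below --min-latency, the last bucket everything at or above --max-latency, and the buckets in between are --bucket-range wide.

#include <stdio.h>

/* Mirrors the bucket selection in make_histogram() for fixed-range mode. */
static unsigned int bucket_idx(double num, unsigned int min_latency,
			       unsigned int max_latency,
			       unsigned int bucket_range,
			       unsigned int bucket_num)
{
	unsigned int i = 0;

	if (num < min_latency)
		return 0;
	num -= min_latency;
	if (num > 0)
		i = num / bucket_range + 1;
	if (num >= max_latency - min_latency)
		i = bucket_num - 1;
	if (i >= bucket_num)
		i = bucket_num - 1;
	return i;
}

int main(void)
{
	/* --min-latency=100 --max-latency=600 --bucket-range=50 (usec, -n unset) */
	unsigned int bucket_num = (600 - 100) / 50 + 2;	/* 12 buckets */
	double samples[] = { 80, 100, 149, 150, 599, 600, 2000 };

	/* Prints 0 0 1 2 10 11 11: at or below min -> 0, at or above max -> 11. */
	for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("%u ", bucket_idx(samples[i], 100, 600, 50, bucket_num));
	printf("\n");
	return 0;
}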
@@ -750,15 +938,18 @@ next:
strcat(linebuf, p);
}
-static void display_histogram(int buckets[], bool use_nsec)
+static void display_histogram(struct perf_ftrace *ftrace, int buckets[])
{
- int i;
+ int min_latency = ftrace->min_latency;
+ bool use_nsec = ftrace->use_nsec;
+ unsigned int bucket_num = ftrace->bucket_num;
+ unsigned int i;
int total = 0;
int bar_total = 46; /* to fit in 80 column */
char bar[] = "###############################################";
int bar_len;
- for (i = 0; i < NUM_BUCKET; i++)
+ for (i = 0; i < bucket_num; i++)
total += buckets[i];
if (total == 0) {
@@ -770,30 +961,80 @@ static void display_histogram(int buckets[], bool use_nsec)
" DURATION ", "COUNT", bar_total, "GRAPH");
bar_len = buckets[0] * bar_total / total;
- printf(" %4d - %-4d %s | %10d | %.*s%*s |\n",
- 0, 1, "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
- for (i = 1; i < NUM_BUCKET - 1; i++) {
- int start = (1 << (i - 1));
- int stop = 1 << i;
+ if (!ftrace->hide_empty || buckets[0])
+ printf(" %4d - %4d %s | %10d | %.*s%*s |\n",
+ 0, min_latency ?: 1, use_nsec ? "ns" : "us",
+ buckets[0], bar_len, bar, bar_total - bar_len, "");
+
+ for (i = 1; i < bucket_num - 1; i++) {
+ unsigned int start, stop;
const char *unit = use_nsec ? "ns" : "us";
- if (start >= 1024) {
- start >>= 10;
- stop >>= 10;
- unit = use_nsec ? "us" : "ms";
+ if (ftrace->hide_empty && !buckets[i])
+ continue;
+ if (!ftrace->bucket_range) {
+ start = (1 << (i - 1));
+ stop = 1 << i;
+
+ if (start >= 1024) {
+ start >>= 10;
+ stop >>= 10;
+ unit = use_nsec ? "us" : "ms";
+ }
+ } else {
+ start = (i - 1) * ftrace->bucket_range + min_latency;
+ stop = i * ftrace->bucket_range + min_latency;
+
+ if (start >= ftrace->max_latency)
+ break;
+ if (stop > ftrace->max_latency)
+ stop = ftrace->max_latency;
+
+ if (start >= 1000) {
+ double dstart = start / 1000.0,
+ dstop = stop / 1000.0;
+ printf(" %4.2f - %-4.2f", dstart, dstop);
+ unit = use_nsec ? "us" : "ms";
+ goto print_bucket_info;
+ }
}
+
+ printf(" %4d - %4d", start, stop);
+print_bucket_info:
bar_len = buckets[i] * bar_total / total;
- printf(" %4d - %-4d %s | %10d | %.*s%*s |\n",
- start, stop, unit, buckets[i], bar_len, bar,
+ printf(" %s | %10d | %.*s%*s |\n", unit, buckets[i], bar_len, bar,
bar_total - bar_len, "");
}
- bar_len = buckets[NUM_BUCKET - 1] * bar_total / total;
- printf(" %4d - %-4s %s | %10d | %.*s%*s |\n",
- 1, "...", use_nsec ? "ms" : " s", buckets[NUM_BUCKET - 1],
+ bar_len = buckets[bucket_num - 1] * bar_total / total;
+ if (ftrace->hide_empty && !buckets[bucket_num - 1])
+ goto print_stats;
+ if (!ftrace->bucket_range) {
+ printf(" %4d - %-4s %s", 1, "...", use_nsec ? "ms" : "s ");
+ } else {
+ unsigned int upper_outlier = (bucket_num - 2) * ftrace->bucket_range + min_latency;
+ if (upper_outlier > ftrace->max_latency)
+ upper_outlier = ftrace->max_latency;
+
+ if (upper_outlier >= 1000) {
+ double dstart = upper_outlier / 1000.0;
+
+ printf(" %4.2f - %-4s %s", dstart, "...", use_nsec ? "us" : "ms");
+ } else {
+ printf(" %4d - %4s %s", upper_outlier, "...", use_nsec ? "ns" : "us");
+ }
+ }
+ printf(" | %10d | %.*s%*s |\n", buckets[bucket_num - 1],
bar_len, bar, bar_total - bar_len, "");
+print_stats:
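+ /* struct stats keeps a running mean and count rather than a raw sum,
+ * so total time is reconstructed as mean * n.
+ */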
+ printf("\n# statistics (in %s)\n", ftrace->use_nsec ? "nsec" : "usec");
+ printf(" total time: %20.0f\n", latency_stats.mean * latency_stats.n);
+ printf(" avg time: %20.0f\n", latency_stats.mean);
+ printf(" max time: %20"PRIu64"\n", latency_stats.max);
+ printf(" min time: %20"PRIu64"\n", latency_stats.min);
+ printf(" count: %20.0f\n", latency_stats.n);
}
static int prepare_func_latency(struct perf_ftrace *ftrace)
@@ -804,6 +1045,9 @@ static int prepare_func_latency(struct perf_ftrace *ftrace)
if (ftrace->target.use_bpf)
return perf_ftrace__latency_prepare_bpf(ftrace);
+ if (init_tracing_instance() < 0)
+ return -1;
+
if (reset_tracing_files(ftrace) < 0) {
pr_err("failed to reset ftrace\n");
return -1;
@@ -822,7 +1066,7 @@ static int prepare_func_latency(struct perf_ftrace *ftrace)
return -1;
}
- trace_file = get_tracing_file("trace_pipe");
+ trace_file = get_tracing_instance_file("trace_pipe");
if (!trace_file) {
pr_err("failed to open trace_pipe\n");
return -1;
@@ -832,6 +1076,8 @@ static int prepare_func_latency(struct perf_ftrace *ftrace)
if (fd < 0)
pr_err("failed to open trace_pipe\n");
+ init_stats(&latency_stats);
+
put_tracing_file(trace_file);
return fd;
}
@@ -861,7 +1107,7 @@ static int stop_func_latency(struct perf_ftrace *ftrace)
static int read_func_latency(struct perf_ftrace *ftrace, int buckets[])
{
if (ftrace->target.use_bpf)
- return perf_ftrace__latency_read_bpf(ftrace, buckets);
+ return perf_ftrace__latency_read_bpf(ftrace, buckets, &latency_stats);
return 0;
}
@@ -871,7 +1117,7 @@ static int cleanup_func_latency(struct perf_ftrace *ftrace)
if (ftrace->target.use_bpf)
return perf_ftrace__latency_cleanup_bpf(ftrace);
- reset_tracing_files(ftrace);
+ exit_tracing_instance();
return 0;
}
@@ -883,19 +1129,7 @@ static int __cmd_latency(struct perf_ftrace *ftrace)
struct pollfd pollfd = {
.events = POLLIN,
};
- int buckets[NUM_BUCKET] = { };
-
- if (!(perf_cap__capable(CAP_PERFMON) ||
- perf_cap__capable(CAP_SYS_ADMIN))) {
- pr_err("ftrace only works for %s!\n",
-#ifdef HAVE_LIBCAP_SUPPORT
- "users with the CAP_PERFMON or CAP_SYS_ADMIN capability"
-#else
- "root"
-#endif
- );
- return -1;
- }
+ int *buckets;
trace_fd = prepare_func_latency(ftrace);
if (trace_fd < 0)
@@ -909,6 +1143,12 @@ static int __cmd_latency(struct perf_ftrace *ftrace)
evlist__start_workload(ftrace->evlist);
+ buckets = calloc(ftrace->bucket_num, sizeof(*buckets));
+ if (buckets == NULL) {
+ pr_err("failed to allocate memory for the buckets\n");
+ goto out;
+ }
+
line[0] = '\0';
while (!done) {
if (poll(&pollfd, 1, -1) < 0)
@@ -919,7 +1159,7 @@ static int __cmd_latency(struct perf_ftrace *ftrace)
if (n < 0)
break;
- make_histogram(buckets, buf, n, line, ftrace->use_nsec);
+ make_histogram(ftrace, buckets, buf, n, line);
}
}
@@ -928,7 +1168,7 @@ static int __cmd_latency(struct perf_ftrace *ftrace)
if (workload_exec_errno) {
const char *emsg = str_error_r(workload_exec_errno, buf, sizeof(buf));
pr_err("workload failed: %s\n", emsg);
- goto out;
+ goto out_free_buckets;
}
/* read remaining buffer contents */
@@ -936,13 +1176,15 @@ static int __cmd_latency(struct perf_ftrace *ftrace)
int n = read(trace_fd, buf, sizeof(buf) - 1);
if (n <= 0)
break;
- make_histogram(buckets, buf, n, line, ftrace->use_nsec);
+ make_histogram(ftrace, buckets, buf, n, line);
}
read_func_latency(ftrace, buckets);
- display_histogram(buckets, ftrace->use_nsec);
+ display_histogram(ftrace, buckets);
+out_free_buckets:
+ free(buckets);
out:
close(trace_fd);
cleanup_func_latency(ftrace);
@@ -950,6 +1192,331 @@ out:
return (done && !workload_exec_errno) ? 0 : -1;
}
+static size_t profile_hash(long func, void *ctx __maybe_unused)
+{
+ return str_hash((char *)func);
+}
+
+static bool profile_equal(long func1, long func2, void *ctx __maybe_unused)
+{
+ return !strcmp((char *)func1, (char *)func2);
+}
+
+static int prepare_func_profile(struct perf_ftrace *ftrace)
+{
+ ftrace->tracer = "function_graph";
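+ /* funcgraph-tail makes each closing "}" line carry the function name,
+ * which parse_func_duration() relies on to attribute nested durations.
+ */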
+ ftrace->graph_tail = 1;
+ ftrace->graph_verbose = 0;
+
+ ftrace->profile_hash = hashmap__new(profile_hash, profile_equal, NULL);
+ if (ftrace->profile_hash == NULL)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/* This is saved in a hashmap keyed by the function name */
+struct ftrace_profile_data {
+ struct stats st;
+};
+
+static int add_func_duration(struct perf_ftrace *ftrace, char *func, double time_ns)
+{
+ struct ftrace_profile_data *prof = NULL;
+
+ if (!hashmap__find(ftrace->profile_hash, func, &prof)) {
+ char *key = strdup(func);
+
+ if (key == NULL)
+ return -ENOMEM;
+
+ prof = zalloc(sizeof(*prof));
+ if (prof == NULL) {
+ free(key);
+ return -ENOMEM;
+ }
+
+ init_stats(&prof->st);
+ hashmap__add(ftrace->profile_hash, key, prof);
+ }
+
+ update_stats(&prof->st, time_ns);
+ return 0;
+}
+
+/*
+ * The ftrace function_graph text output normally looks like this:
+ *
+ * CPU DURATION FUNCTION
+ *
+ * 0) | syscall_trace_enter.isra.0() {
+ * 0) | __audit_syscall_entry() {
+ * 0) | auditd_test_task() {
+ * 0) 0.271 us | __rcu_read_lock();
+ * 0) 0.275 us | __rcu_read_unlock();
+ * 0) 1.254 us | } /\* auditd_test_task *\/
+ * 0) 0.279 us | ktime_get_coarse_real_ts64();
+ * 0) 2.227 us | } /\* __audit_syscall_entry *\/
+ * 0) 2.713 us | } /\* syscall_trace_enter.isra.0 *\/
+ *
+ * Parse the line and get the duration and function name.
+ */
+static int parse_func_duration(struct perf_ftrace *ftrace, char *line, size_t len)
+{
+ char *p;
+ char *func;
+ double duration;
+
+ /* skip CPU */
+ p = strchr(line, ')');
+ if (p == NULL)
+ return 0;
+
+ /* get duration */
+ p = skip_spaces(p + 1);
+
+ /* no duration? */
+ if (p == NULL || *p == '|')
+ return 0;
+
+ /* skip duration markers like '*' or '!' that ftrace prints for slow functions */
+ if (!isdigit(*p))
+ p++;
+
+ duration = strtod(p, &p);
+
+ if (strncmp(p, " us", 3)) {
+ pr_debug("non-usec time found.. ignoring\n");
+ return 0;
+ }
+
+ /*
+ * profile stat keeps the max and min values as integer,
+ * convert to nsec time so that we can have accurate max.
+ */
+ duration *= 1000;
+
+ /* skip to the pipe */
+ while (p < line + len && *p != '|')
+ p++;
+
+ if (*p++ != '|')
+ return -EINVAL;
+
+ /* get function name */
+ func = skip_spaces(p);
+
+ /* skip the closing bracket and the start of comment */
+ if (*func == '}')
+ func += 5;
+
+ /* remove semi-colon or end of comment at the end */
+ p = line + len - 1;
+ while (!isalnum(*p) && *p != ']') {
+ *p = '\0';
+ --p;
+ }
+
+ return add_func_duration(ftrace, func, duration);
+}
+
+enum perf_ftrace_profile_sort_key {
+ PFP_SORT_TOTAL = 0,
+ PFP_SORT_AVG,
+ PFP_SORT_MAX,
+ PFP_SORT_COUNT,
+ PFP_SORT_NAME,
+};
+
+static enum perf_ftrace_profile_sort_key profile_sort = PFP_SORT_TOTAL;
+
+static int cmp_profile_data(const void *a, const void *b)
+{
+ const struct hashmap_entry *e1 = *(const struct hashmap_entry **)a;
+ const struct hashmap_entry *e2 = *(const struct hashmap_entry **)b;
+ struct ftrace_profile_data *p1 = e1->pvalue;
+ struct ftrace_profile_data *p2 = e2->pvalue;
+ double v1, v2;
+
+ switch (profile_sort) {
+ case PFP_SORT_NAME:
+ return strcmp(e1->pkey, e2->pkey);
+ case PFP_SORT_AVG:
+ v1 = p1->st.mean;
+ v2 = p2->st.mean;
+ break;
+ case PFP_SORT_MAX:
+ v1 = p1->st.max;
+ v2 = p2->st.max;
+ break;
+ case PFP_SORT_COUNT:
+ v1 = p1->st.n;
+ v2 = p2->st.n;
+ break;
+ case PFP_SORT_TOTAL:
+ default:
+ v1 = p1->st.n * p1->st.mean;
+ v2 = p2->st.n * p2->st.mean;
+ break;
+ }
+
+ if (v1 > v2)
+ return -1;
+ if (v1 < v2)
+ return 1;
+ return 0;
+}
+
+static void print_profile_result(struct perf_ftrace *ftrace)
+{
+ struct hashmap_entry *entry, **profile;
+ size_t i, nr, bkt;
+
+ nr = hashmap__size(ftrace->profile_hash);
+ if (nr == 0)
+ return;
+
+ profile = calloc(nr, sizeof(*profile));
+ if (profile == NULL) {
+ pr_err("failed to allocate memory for the result\n");
+ return;
+ }
+
+ i = 0;
+ hashmap__for_each_entry(ftrace->profile_hash, entry, bkt)
+ profile[i++] = entry;
+
+ assert(i == nr);
+
+ //cmp_profile_data(profile[0], profile[1]);
+ qsort(profile, nr, sizeof(*profile), cmp_profile_data);
+
+ printf("# %10s %10s %10s %10s %s\n",
+ "Total (us)", "Avg (us)", "Max (us)", "Count", "Function");
+
+ for (i = 0; i < nr; i++) {
+ const char *name = profile[i]->pkey;
+ struct ftrace_profile_data *p = profile[i]->pvalue;
+
+ printf("%12.3f %10.3f %6"PRIu64".%03"PRIu64" %10.0f %s\n",
+ p->st.n * p->st.mean / 1000, p->st.mean / 1000,
+ p->st.max / 1000, p->st.max % 1000, p->st.n, name);
+ }
+
+ free(profile);
+
+ hashmap__for_each_entry(ftrace->profile_hash, entry, bkt) {
+ free((char *)entry->pkey);
+ free(entry->pvalue);
+ }
+
+ hashmap__free(ftrace->profile_hash);
+ ftrace->profile_hash = NULL;
+}
+
+static int __cmd_profile(struct perf_ftrace *ftrace)
+{
+ char *trace_file;
+ int trace_fd;
+ char buf[4096];
+ struct io io;
+ char *line = NULL;
+ size_t line_len = 0;
+
+ if (prepare_func_profile(ftrace) < 0) {
+ pr_err("failed to prepare func profiler\n");
+ goto out;
+ }
+
+ if (init_tracing_instance() < 0)
+ goto out;
+
+ if (reset_tracing_files(ftrace) < 0) {
+ pr_err("failed to reset ftrace\n");
+ goto out_reset;
+ }
+
+ /* reset ftrace buffer */
+ if (write_tracing_file("trace", "0") < 0)
+ goto out_reset;
+
+ if (set_tracing_options(ftrace) < 0)
+ goto out_reset;
+
+ if (write_tracing_file("current_tracer", ftrace->tracer) < 0) {
+ pr_err("failed to set current_tracer to %s\n", ftrace->tracer);
+ goto out_reset;
+ }
+
+ setup_pager();
+
+ trace_file = get_tracing_instance_file("trace_pipe");
+ if (!trace_file) {
+ pr_err("failed to open trace_pipe\n");
+ goto out_reset;
+ }
+
+ trace_fd = open(trace_file, O_RDONLY);
+
+ put_tracing_file(trace_file);
+
+ if (trace_fd < 0) {
+ pr_err("failed to open trace_pipe\n");
+ goto out_reset;
+ }
+
+ fcntl(trace_fd, F_SETFL, O_NONBLOCK);
+
+ if (write_tracing_file("tracing_on", "1") < 0) {
+ pr_err("can't enable tracing\n");
+ goto out_close_fd;
+ }
+
+ evlist__start_workload(ftrace->evlist);
+
+ io__init(&io, trace_fd, buf, sizeof(buf));
+ io.timeout_ms = -1;
+
+ while (!done && !io.eof) {
+ if (io__getline(&io, &line, &line_len) < 0)
+ break;
+
+ if (parse_func_duration(ftrace, line, line_len) < 0)
+ break;
+ }
+
+ write_tracing_file("tracing_on", "0");
+
+ if (workload_exec_errno) {
+ const char *emsg = str_error_r(workload_exec_errno, buf, sizeof(buf));
+ /* flush stdout first so the error message below appears at the end. */
+ fflush(stdout);
+ pr_err("workload failed: %s\n", emsg);
+ goto out_free_line;
+ }
+
+ /* read remaining buffer contents */
+ io.timeout_ms = 0;
+ while (!io.eof) {
+ if (io__getline(&io, &line, &line_len) < 0)
+ break;
+
+ if (parse_func_duration(ftrace, line, line_len) < 0)
+ break;
+ }
+
+ print_profile_result(ftrace);
+
+out_free_line:
+ free(line);
+out_close_fd:
+ close(trace_fd);
+out_reset:
+ exit_tracing_instance();
+out:
+ return (done && !workload_exec_errno) ? 0 : -1;
+}
+
static int perf_ftrace_config(const char *var, const char *value, void *cb)
{
struct perf_ftrace *ftrace = cb;
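As a usage illustration for the profile subcommand implemented in __cmd_profile() above (a hypothetical invocation, not output captured from this patch): perf ftrace profile -s max -- sleep 0.1 runs the workload under the function_graph tracer and prints one row per function with the Total (us), Avg (us), Max (us) and Count columns emitted by print_profile_result(), sorted by the per-function maximum; -s name would sort alphabetically instead.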
@@ -1036,6 +1603,33 @@ static void delete_filter_func(struct list_head *head)
}
}
+static int parse_filter_event(const struct option *opt, const char *str,
+ int unset __maybe_unused)
+{
+ struct list_head *head = opt->value;
+ struct filter_entry *entry;
+ char *s, *p;
+ int ret = -ENOMEM;
+
+ s = strdup(str);
+ if (s == NULL)
+ return -ENOMEM;
+
+ while ((p = strsep(&s, ",")) != NULL) {
+ entry = malloc(sizeof(*entry) + strlen(p) + 1);
+ if (entry == NULL)
+ goto out;
+
+ strcpy(entry->name, p);
+ list_add_tail(&entry->list, head);
+ }
+ ret = 0;
+
+out:
+ free(s);
+ return ret;
+}
+
static int parse_buffer_size(const struct option *opt,
const char *str, int unset)
{
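The parse_filter_event() helper above simply splits its argument on commas into filter_entry nodes on ftrace.event_pair. For example (a hypothetical pairing), --events irq:irq_handler_entry,irq:irq_handler_exit queues two entries; the checks added in cmd_ftrace() below then require that this option is used together with --use-bpf and not combined with -T/--trace-funcs.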
@@ -1094,11 +1688,16 @@ static int parse_graph_tracer_opts(const struct option *opt,
int ret;
struct perf_ftrace *ftrace = (struct perf_ftrace *) opt->value;
struct sublevel_option graph_tracer_opts[] = {
+ { .name = "args", .value_ptr = &ftrace->graph_args },
+ { .name = "retval", .value_ptr = &ftrace->graph_retval },
+ { .name = "retval-hex", .value_ptr = &ftrace->graph_retval_hex },
+ { .name = "retaddr", .value_ptr = &ftrace->graph_retaddr },
{ .name = "nosleep-time", .value_ptr = &ftrace->graph_nosleep_time },
{ .name = "noirqs", .value_ptr = &ftrace->graph_noirqs },
{ .name = "verbose", .value_ptr = &ftrace->graph_verbose },
{ .name = "thresh", .value_ptr = &ftrace->graph_thresh },
{ .name = "depth", .value_ptr = &ftrace->graph_depth },
+ { .name = "tail", .value_ptr = &ftrace->graph_tail },
{ .name = NULL, }
};
@@ -1112,10 +1711,35 @@ static int parse_graph_tracer_opts(const struct option *opt,
return 0;
}
+static int parse_sort_key(const struct option *opt, const char *str, int unset)
+{
+ enum perf_ftrace_profile_sort_key *key = (void *)opt->value;
+
+ if (unset)
+ return 0;
+
+ if (!strcmp(str, "total"))
+ *key = PFP_SORT_TOTAL;
+ else if (!strcmp(str, "avg"))
+ *key = PFP_SORT_AVG;
+ else if (!strcmp(str, "max"))
+ *key = PFP_SORT_MAX;
+ else if (!strcmp(str, "count"))
+ *key = PFP_SORT_COUNT;
+ else if (!strcmp(str, "name"))
+ *key = PFP_SORT_NAME;
+ else {
+ pr_err("Unknown sort key: %s\n", str);
+ return -1;
+ }
+ return 0;
+}
+
enum perf_ftrace_subcommand {
PERF_FTRACE_NONE,
PERF_FTRACE_TRACE,
PERF_FTRACE_LATENCY,
+ PERF_FTRACE_PROFILE,
};
int cmd_ftrace(int argc, const char **argv)
@@ -1124,7 +1748,6 @@ int cmd_ftrace(int argc, const char **argv)
int (*cmd_func)(struct perf_ftrace *) = NULL;
struct perf_ftrace ftrace = {
.tracer = DEFAULT_TRACER,
- .target = { .uid = UINT_MAX, },
};
const struct option common_options[] = {
OPT_STRING('p', "pid", &ftrace.target.pid, "pid",
@@ -1160,7 +1783,7 @@ int cmd_ftrace(int argc, const char **argv)
OPT_CALLBACK('g', "nograph-funcs", &ftrace.nograph_funcs, "func",
"Set nograph filter on given functions", parse_filter_func),
OPT_CALLBACK(0, "graph-opts", &ftrace, "options",
- "Graph tracer options, available options: nosleep-time,noirqs,verbose,thresh=<n>,depth=<n>",
+ "Graph tracer options, available options: args,retval,retval-hex,retaddr,nosleep-time,noirqs,verbose,thresh=<n>,depth=<n>",
parse_graph_tracer_opts),
OPT_CALLBACK('m', "buffer-size", &ftrace.percpu_buffer_size, "size",
"Size of per cpu buffer, needs to use a B, K, M or G suffix.", parse_buffer_size),
@@ -1173,12 +1796,43 @@ int cmd_ftrace(int argc, const char **argv)
const struct option latency_options[] = {
OPT_CALLBACK('T', "trace-funcs", &ftrace.filters, "func",
"Show latency of given function", parse_filter_func),
+ OPT_CALLBACK('e', "events", &ftrace.event_pair, "event1,event2",
+ "Show latency between the two events", parse_filter_event),
#ifdef HAVE_BPF_SKEL
OPT_BOOLEAN('b', "use-bpf", &ftrace.target.use_bpf,
"Use BPF to measure function latency"),
#endif
OPT_BOOLEAN('n', "use-nsec", &ftrace.use_nsec,
"Use nano-second histogram"),
+ OPT_UINTEGER(0, "bucket-range", &ftrace.bucket_range,
+ "Bucket range in us or ns (-n/--use-nsec), default is log2() mode"),
+ OPT_UINTEGER(0, "min-latency", &ftrace.min_latency,
+ "Minimum latency (1st bucket). Works only with --bucket-range."),
+ OPT_UINTEGER(0, "max-latency", &ftrace.max_latency,
+ "Maximum latency (last bucket). Works only with --bucket-range."),
+ OPT_BOOLEAN(0, "hide-empty", &ftrace.hide_empty,
+ "Hide empty buckets in the histogram"),
+ OPT_PARENT(common_options),
+ };
+ const struct option profile_options[] = {
+ OPT_CALLBACK('T', "trace-funcs", &ftrace.filters, "func",
+ "Trace given functions using function tracer",
+ parse_filter_func),
+ OPT_CALLBACK('N', "notrace-funcs", &ftrace.notrace, "func",
+ "Do not trace given functions", parse_filter_func),
+ OPT_CALLBACK('G', "graph-funcs", &ftrace.graph_funcs, "func",
+ "Trace given functions using function_graph tracer",
+ parse_filter_func),
+ OPT_CALLBACK('g', "nograph-funcs", &ftrace.nograph_funcs, "func",
+ "Set nograph filter on given functions", parse_filter_func),
+ OPT_CALLBACK('m', "buffer-size", &ftrace.percpu_buffer_size, "size",
+ "Size of per cpu buffer, needs to use a B, K, M or G suffix.", parse_buffer_size),
+ OPT_CALLBACK('s', "sort", &profile_sort, "key",
+ "Sort result by key: total (default), avg, max, count, name.",
+ parse_sort_key),
+ OPT_CALLBACK(0, "graph-opts", &ftrace, "options",
+ "Graph tracer options, available options: nosleep-time,noirqs,thresh=<n>,depth=<n>",
+ parse_graph_tracer_opts),
OPT_PARENT(common_options),
};
const struct option *options = ftrace_options;
@@ -1186,8 +1840,8 @@ int cmd_ftrace(int argc, const char **argv)
const char * const ftrace_usage[] = {
"perf ftrace [<options>] [<command>]",
"perf ftrace [<options>] -- [<command>] [<options>]",
- "perf ftrace {trace|latency} [<options>] [<command>]",
- "perf ftrace {trace|latency} [<options>] -- [<command>] [<options>]",
+ "perf ftrace {trace|latency|profile} [<options>] [<command>]",
+ "perf ftrace {trace|latency|profile} [<options>] -- [<command>] [<options>]",
NULL
};
enum perf_ftrace_subcommand subcmd = PERF_FTRACE_NONE;
@@ -1196,12 +1850,21 @@ int cmd_ftrace(int argc, const char **argv)
INIT_LIST_HEAD(&ftrace.notrace);
INIT_LIST_HEAD(&ftrace.graph_funcs);
INIT_LIST_HEAD(&ftrace.nograph_funcs);
+ INIT_LIST_HEAD(&ftrace.event_pair);
signal(SIGINT, sig_handler);
signal(SIGUSR1, sig_handler);
signal(SIGCHLD, sig_handler);
signal(SIGPIPE, sig_handler);
+ if (!check_ftrace_capable())
+ return -1;
+
+ if (!is_ftrace_supported()) {
+ pr_err("ftrace is not supported on this system\n");
+ return -ENOTSUP;
+ }
+
ret = perf_config(perf_ftrace_config, &ftrace);
if (ret < 0)
return -1;
@@ -1212,6 +1875,9 @@ int cmd_ftrace(int argc, const char **argv)
} else if (!strcmp(argv[1], "latency")) {
subcmd = PERF_FTRACE_LATENCY;
options = latency_options;
+ } else if (!strcmp(argv[1], "profile")) {
+ subcmd = PERF_FTRACE_PROFILE;
+ options = profile_options;
}
if (subcmd != PERF_FTRACE_NONE) {
@@ -1239,14 +1905,70 @@ int cmd_ftrace(int argc, const char **argv)
cmd_func = __cmd_ftrace;
break;
case PERF_FTRACE_LATENCY:
- if (list_empty(&ftrace.filters)) {
- pr_err("Should provide a function to measure\n");
+ if (list_empty(&ftrace.filters) && list_empty(&ftrace.event_pair)) {
+ pr_err("Should provide a function or events to measure\n");
parse_options_usage(ftrace_usage, options, "T", 1);
+ parse_options_usage(NULL, options, "e", 1);
+ ret = -EINVAL;
+ goto out_delete_filters;
+ }
+ if (!list_empty(&ftrace.filters) && !list_empty(&ftrace.event_pair)) {
+ pr_err("Please specify either a function or events, not both\n");
+ parse_options_usage(ftrace_usage, options, "T", 1);
+ parse_options_usage(NULL, options, "e", 1);
+ ret = -EINVAL;
+ goto out_delete_filters;
+ }
+ if (!list_empty(&ftrace.event_pair) && !ftrace.target.use_bpf) {
+ pr_err("Event processing needs BPF\n");
+ parse_options_usage(ftrace_usage, options, "b", 1);
+ parse_options_usage(NULL, options, "e", 1);
+ ret = -EINVAL;
+ goto out_delete_filters;
+ }
+ if (!ftrace.bucket_range && ftrace.min_latency) {
+ pr_err("--min-latency works only with --bucket-range\n");
+ parse_options_usage(ftrace_usage, options,
+ "min-latency", /*short_opt=*/false);
+ ret = -EINVAL;
+ goto out_delete_filters;
+ }
+ if (ftrace.bucket_range && !ftrace.min_latency) {
+ /* default min latency should be the bucket range */
+ ftrace.min_latency = ftrace.bucket_range;
+ }
+ if (!ftrace.bucket_range && ftrace.max_latency) {
+ pr_err("--max-latency works only with --bucket-range\n");
+ parse_options_usage(ftrace_usage, options,
+ "max-latency", /*short_opt=*/false);
ret = -EINVAL;
goto out_delete_filters;
}
+ if (ftrace.bucket_range && ftrace.max_latency &&
+ ftrace.max_latency < ftrace.min_latency + ftrace.bucket_range) {
+ /* we need at least 1 bucket excluding min and max buckets */
+ pr_err("--max-latency must be larger than min-latency + bucket-range\n");
+ parse_options_usage(ftrace_usage, options,
+ "max-latency", /*short_opt=*/false);
+ ret = -EINVAL;
+ goto out_delete_filters;
+ }
+ /* set default unless max_latency is set and valid */
+ ftrace.bucket_num = NUM_BUCKET;
+ if (ftrace.bucket_range) {
+ if (ftrace.max_latency)
+ ftrace.bucket_num = (ftrace.max_latency - ftrace.min_latency) /
+ ftrace.bucket_range + 2;
+ else
+ /* default max latency should depend on bucket range and num_buckets */
+ ftrace.max_latency = (NUM_BUCKET - 2) * ftrace.bucket_range +
+ ftrace.min_latency;
+ }
cmd_func = __cmd_latency;
break;
+ case PERF_FTRACE_PROFILE:
+ cmd_func = __cmd_profile;
+ break;
case PERF_FTRACE_NONE:
default:
pr_err("Invalid subcommand\n");
@@ -1291,6 +2013,7 @@ out_delete_filters:
delete_filter_func(&ftrace.notrace);
delete_filter_func(&ftrace.graph_funcs);
delete_filter_func(&ftrace.nograph_funcs);
+ delete_filter_func(&ftrace.event_pair);
return ret;
}
diff --git a/tools/perf/builtin-help.c b/tools/perf/builtin-help.c
index b2a368ae295a..7be6fb6df595 100644
--- a/tools/perf/builtin-help.c
+++ b/tools/perf/builtin-help.c
@@ -417,7 +417,7 @@ static void open_html(const char *path)
static int show_html_page(const char *perf_cmd)
{
const char *page = cmd_to_page(perf_cmd);
- char *page_path; /* it leaks but we exec bellow */
+ char *page_path; /* it leaks but we exec below */
if (get_html_page_path(&page_path, page) < 0)
return -1;
@@ -447,9 +447,7 @@ int cmd_help(int argc, const char **argv)
#ifdef HAVE_LIBELF_SUPPORT
"probe",
#endif
-#if defined(HAVE_LIBAUDIT_SUPPORT) || defined(HAVE_SYSCALL_TABLE_SUPPORT)
"trace",
-#endif
NULL };
const char *builtin_help_usage[] = {
"perf help [--all] [--man|--web|--info] [command]",
diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index eb3ef5c24b66..a114b3fa1bea 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -103,18 +103,24 @@ struct guest_session {
struct guest_event ev;
};
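+/*
+ * How build IDs end up in the injected output: either in the build ID
+ * header feature section (lazily for DSOs hit by samples, or for all
+ * DSOs), or carried in mmap2 events (rewritten for all maps, or
+ * synthesized lazily for maps hit by samples).
+ */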
+enum build_id_rewrite_style {
+ BID_RWS__NONE = 0,
+ BID_RWS__INJECT_HEADER_LAZY,
+ BID_RWS__INJECT_HEADER_ALL,
+ BID_RWS__MMAP2_BUILDID_ALL,
+ BID_RWS__MMAP2_BUILDID_LAZY,
+};
+
struct perf_inject {
struct perf_tool tool;
struct perf_session *session;
- bool build_ids;
- bool build_id_all;
+ enum build_id_rewrite_style build_id_style;
bool sched_stat;
bool have_auxtrace;
bool strip;
bool jit_mode;
bool in_place_update;
bool in_place_update_dry_run;
- bool is_pipe;
bool copy_kcore_dir;
const char *input_name;
struct perf_data output;
@@ -126,6 +132,7 @@ struct perf_inject {
struct perf_file_section secs[HEADER_FEAT_BITS];
struct guest_session guest_session;
struct strlist *known_build_ids;
+ const struct evsel *mmap_evsel;
};
struct event_entry {
@@ -134,8 +141,23 @@ struct event_entry {
union perf_event event[];
};
-static int dso__inject_build_id(struct dso *dso, struct perf_tool *tool,
- struct machine *machine, u8 cpumode, u32 flags);
+static int tool__inject_build_id(const struct perf_tool *tool,
+ struct perf_sample *sample,
+ struct machine *machine,
+ const struct evsel *evsel,
+ __u16 misc,
+ const char *filename,
+ struct dso *dso, u32 flags);
+static int tool__inject_mmap2_build_id(const struct perf_tool *tool,
+ struct perf_sample *sample,
+ struct machine *machine,
+ const struct evsel *evsel,
+ __u16 misc,
+ __u32 pid, __u32 tid,
+ __u64 start, __u64 len, __u64 pgoff,
+ struct dso *dso,
+ __u32 prot, __u32 flags,
+ const char *filename);
static int output_bytes(struct perf_inject *inject, void *buf, size_t sz)
{
@@ -149,8 +171,9 @@ static int output_bytes(struct perf_inject *inject, void *buf, size_t sz)
return 0;
}
-static int perf_event__repipe_synth(struct perf_tool *tool,
+static int perf_event__repipe_synth(const struct perf_tool *tool,
union perf_event *event)
{
struct perf_inject *inject = container_of(tool, struct perf_inject,
tool);
@@ -158,7 +181,7 @@ static int perf_event__repipe_synth(struct perf_tool *tool,
return output_bytes(inject, event, event->header.size);
}
-static int perf_event__repipe_oe_synth(struct perf_tool *tool,
+static int perf_event__repipe_oe_synth(const struct perf_tool *tool,
union perf_event *event,
struct ordered_events *oe __maybe_unused)
{
@@ -166,7 +189,7 @@ static int perf_event__repipe_oe_synth(struct perf_tool *tool,
}
#ifdef HAVE_JITDUMP
-static int perf_event__drop_oe(struct perf_tool *tool __maybe_unused,
+static int perf_event__drop_oe(const struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct ordered_events *oe __maybe_unused)
{
@@ -188,7 +211,7 @@ static int perf_event__repipe_op4_synth(struct perf_session *session,
return perf_event__repipe_synth(session->tool, event);
}
-static int perf_event__repipe_attr(struct perf_tool *tool,
+static int perf_event__repipe_attr(const struct perf_tool *tool,
union perf_event *event,
struct evlist **pevlist)
{
@@ -200,13 +223,14 @@ static int perf_event__repipe_attr(struct perf_tool *tool,
if (ret)
return ret;
- if (!inject->is_pipe)
+ /* If the output isn't a pipe then the attributes will be written as part of the header. */
+ if (!inject->output.is_pipe)
return 0;
return perf_event__repipe_synth(tool, event);
}
-static int perf_event__repipe_event_update(struct perf_tool *tool,
+static int perf_event__repipe_event_update(const struct perf_tool *tool,
union perf_event *event,
struct evlist **pevlist __maybe_unused)
{
@@ -237,7 +261,7 @@ static int copy_bytes(struct perf_inject *inject, struct perf_data *data, off_t
static s64 perf_event__repipe_auxtrace(struct perf_session *session,
union perf_event *event)
{
- struct perf_tool *tool = session->tool;
+ const struct perf_tool *tool = session->tool;
struct perf_inject *inject = container_of(tool, struct perf_inject,
tool);
int ret;
@@ -284,7 +308,7 @@ perf_event__repipe_auxtrace(struct perf_session *session __maybe_unused,
#endif
-static int perf_event__repipe(struct perf_tool *tool,
+static int perf_event__repipe(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -292,7 +316,7 @@ static int perf_event__repipe(struct perf_tool *tool,
return perf_event__repipe_synth(tool, event);
}
-static int perf_event__drop(struct perf_tool *tool __maybe_unused,
+static int perf_event__drop(const struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -300,7 +324,7 @@ static int perf_event__drop(struct perf_tool *tool __maybe_unused,
return 0;
}
-static int perf_event__drop_aux(struct perf_tool *tool,
+static int perf_event__drop_aux(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct machine *machine __maybe_unused)
@@ -341,13 +365,13 @@ perf_inject__cut_auxtrace_sample(struct perf_inject *inject,
return ev;
}
-typedef int (*inject_handler)(struct perf_tool *tool,
+typedef int (*inject_handler)(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
struct machine *machine);
-static int perf_event__repipe_sample(struct perf_tool *tool,
+static int perf_event__repipe_sample(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -372,46 +396,8 @@ static int perf_event__repipe_sample(struct perf_tool *tool,
return perf_event__repipe_synth(tool, event);
}
-static int perf_event__repipe_mmap(struct perf_tool *tool,
- union perf_event *event,
- struct perf_sample *sample,
- struct machine *machine)
-{
- int err;
-
- err = perf_event__process_mmap(tool, event, sample, machine);
- perf_event__repipe(tool, event, sample, machine);
-
- return err;
-}
-
-#ifdef HAVE_JITDUMP
-static int perf_event__jit_repipe_mmap(struct perf_tool *tool,
- union perf_event *event,
- struct perf_sample *sample,
- struct machine *machine)
-{
- struct perf_inject *inject = container_of(tool, struct perf_inject, tool);
- u64 n = 0;
- int ret;
-
- /*
- * if jit marker, then inject jit mmaps and generate ELF images
- */
- ret = jit_process(inject->session, &inject->output, machine,
- event->mmap.filename, event->mmap.pid, event->mmap.tid, &n);
- if (ret < 0)
- return ret;
- if (ret) {
- inject->bytes_written += n;
- return 0;
- }
- return perf_event__repipe_mmap(tool, event, sample, machine);
-}
-#endif
-
static struct dso *findnew_dso(int pid, int tid, const char *filename,
- struct dso_id *id, struct machine *machine)
+ const struct dso_id *id, struct machine *machine)
{
struct thread *thread;
struct nsinfo *nsi = NULL;
@@ -445,10 +431,9 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
}
if (dso) {
- mutex_lock(&dso->lock);
- nsinfo__put(dso->nsinfo);
- dso->nsinfo = nsi;
- mutex_unlock(&dso->lock);
+ mutex_lock(dso__lock(dso));
+ dso__set_nsinfo(dso, nsi);
+ mutex_unlock(dso__lock(dso));
} else
nsinfo__put(nsi);
@@ -456,117 +441,175 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
return dso;
}
-static int perf_event__repipe_buildid_mmap(struct perf_tool *tool,
- union perf_event *event,
- struct perf_sample *sample,
- struct machine *machine)
+/*
+ * The evsel used for the sample ID for mmap events. Typically stashed when
+ * processing mmap events. If not stashed, search the evlist for the first mmap
+ * gathering event.
+ */
+static const struct evsel *inject__mmap_evsel(struct perf_inject *inject)
{
- struct dso *dso;
+ struct evsel *pos;
- dso = findnew_dso(event->mmap.pid, event->mmap.tid,
- event->mmap.filename, NULL, machine);
+ if (inject->mmap_evsel)
+ return inject->mmap_evsel;
- if (dso && !dso->hit) {
- dso->hit = 1;
- dso__inject_build_id(dso, tool, machine, sample->cpumode, 0);
+ evlist__for_each_entry(inject->session->evlist, pos) {
+ if (pos->core.attr.mmap) {
+ inject->mmap_evsel = pos;
+ return pos;
+ }
}
- dso__put(dso);
-
- return perf_event__repipe(tool, event, sample, machine);
+ pr_err("No mmap events found\n");
+ return NULL;
}
-static int perf_event__repipe_mmap2(struct perf_tool *tool,
- union perf_event *event,
- struct perf_sample *sample,
- struct machine *machine)
+static int perf_event__repipe_common_mmap(const struct perf_tool *tool,
+ union perf_event *event,
+ struct perf_sample *sample,
+ struct machine *machine,
+ __u32 pid, __u32 tid,
+ __u64 start, __u64 len, __u64 pgoff,
+ __u32 flags, __u32 prot,
+ const char *filename,
+ const struct dso_id *dso_id,
+ int (*perf_event_process)(const struct perf_tool *tool,
+ union perf_event *event,
+ struct perf_sample *sample,
+ struct machine *machine))
{
- int err;
+ struct perf_inject *inject = container_of(tool, struct perf_inject, tool);
+ struct dso *dso = NULL;
+ bool dso_sought = false;
- err = perf_event__process_mmap2(tool, event, sample, machine);
- perf_event__repipe(tool, event, sample, machine);
+#ifdef HAVE_JITDUMP
+ if (inject->jit_mode) {
+ u64 n = 0;
+ int ret;
+ /* If jit marker, then inject jit mmaps and generate ELF images. */
+ ret = jit_process(inject->session, &inject->output, machine,
+ filename, pid, tid, &n);
+ if (ret < 0)
+ return ret;
+ if (ret) {
+ inject->bytes_written += n;
+ return 0;
+ }
+ }
+#endif
if (event->header.misc & PERF_RECORD_MISC_MMAP_BUILD_ID) {
- struct dso *dso;
-
- dso = findnew_dso(event->mmap2.pid, event->mmap2.tid,
- event->mmap2.filename, NULL, machine);
+ dso = findnew_dso(pid, tid, filename, dso_id, machine);
+ dso_sought = true;
if (dso) {
/* mark it not to inject build-id */
- dso->hit = 1;
+ dso__set_hit(dso);
}
- dso__put(dso);
}
+ if (inject->build_id_style == BID_RWS__INJECT_HEADER_ALL) {
+ if (!dso_sought) {
+ dso = findnew_dso(pid, tid, filename, dso_id, machine);
+ dso_sought = true;
+ }
- return err;
-}
+ if (dso && !dso__hit(dso)) {
+ struct evsel *evsel = evlist__event2evsel(inject->session->evlist, event);
-#ifdef HAVE_JITDUMP
-static int perf_event__jit_repipe_mmap2(struct perf_tool *tool,
- union perf_event *event,
- struct perf_sample *sample,
- struct machine *machine)
-{
- struct perf_inject *inject = container_of(tool, struct perf_inject, tool);
- u64 n = 0;
- int ret;
+ if (evsel) {
+ dso__set_hit(dso);
+ tool__inject_build_id(tool, sample, machine, evsel,
+ /*misc=*/sample->cpumode,
+ filename, dso, flags);
+ }
+ }
+ } else {
+ int err;
- /*
- * if jit marker, then inject jit mmaps and generate ELF images
- */
- ret = jit_process(inject->session, &inject->output, machine,
- event->mmap2.filename, event->mmap2.pid, event->mmap2.tid, &n);
- if (ret < 0)
- return ret;
- if (ret) {
- inject->bytes_written += n;
- return 0;
- }
- return perf_event__repipe_mmap2(tool, event, sample, machine);
-}
-#endif
+ /*
+ * Remember the evsel for lazy build id generation. It is used
+ * for the sample id header type.
+ */
+ if ((inject->build_id_style == BID_RWS__INJECT_HEADER_LAZY ||
+ inject->build_id_style == BID_RWS__MMAP2_BUILDID_LAZY) &&
+ !inject->mmap_evsel)
+ inject->mmap_evsel = evlist__event2evsel(inject->session->evlist, event);
-static int perf_event__repipe_buildid_mmap2(struct perf_tool *tool,
- union perf_event *event,
- struct perf_sample *sample,
- struct machine *machine)
-{
- struct dso_id dso_id = {
- .maj = event->mmap2.maj,
- .min = event->mmap2.min,
- .ino = event->mmap2.ino,
- .ino_generation = event->mmap2.ino_generation,
- };
- struct dso *dso;
+ /* Create the thread, map, etc. Not done for the unordered inject all case. */
+ err = perf_event_process(tool, event, sample, machine);
- if (event->header.misc & PERF_RECORD_MISC_MMAP_BUILD_ID) {
- /* cannot use dso_id since it'd have invalid info */
- dso = findnew_dso(event->mmap2.pid, event->mmap2.tid,
- event->mmap2.filename, NULL, machine);
- if (dso) {
- /* mark it not to inject build-id */
- dso->hit = 1;
+ if (err) {
+ dso__put(dso);
+ return err;
}
- dso__put(dso);
- perf_event__repipe(tool, event, sample, machine);
- return 0;
}
+ if ((inject->build_id_style == BID_RWS__MMAP2_BUILDID_ALL) &&
+ !(event->header.misc & PERF_RECORD_MISC_MMAP_BUILD_ID)) {
+ struct evsel *evsel = evlist__event2evsel(inject->session->evlist, event);
- dso = findnew_dso(event->mmap2.pid, event->mmap2.tid,
- event->mmap2.filename, &dso_id, machine);
-
- if (dso && !dso->hit) {
- dso->hit = 1;
- dso__inject_build_id(dso, tool, machine, sample->cpumode,
- event->mmap2.flags);
+ if (evsel && !dso_sought) {
+ dso = findnew_dso(pid, tid, filename, dso_id, machine);
+ dso_sought = true;
+ }
+ if (evsel && dso &&
+ !tool__inject_mmap2_build_id(tool, sample, machine, evsel,
+ sample->cpumode | PERF_RECORD_MISC_MMAP_BUILD_ID,
+ pid, tid, start, len, pgoff,
+ dso,
+ prot, flags,
+ filename)) {
+ /* Injected mmap2 so no need to repipe. */
+ dso__put(dso);
+ return 0;
+ }
}
dso__put(dso);
+ if (inject->build_id_style == BID_RWS__MMAP2_BUILDID_LAZY)
+ return 0;
- perf_event__repipe(tool, event, sample, machine);
+ return perf_event__repipe(tool, event, sample, machine);
+}
- return 0;
+static int perf_event__repipe_mmap(const struct perf_tool *tool,
+ union perf_event *event,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ return perf_event__repipe_common_mmap(
+ tool, event, sample, machine,
+ event->mmap.pid, event->mmap.tid,
+ event->mmap.start, event->mmap.len, event->mmap.pgoff,
+ /*flags=*/0, PROT_EXEC,
+ event->mmap.filename, /*dso_id=*/NULL,
+ perf_event__process_mmap);
+}
+
+static int perf_event__repipe_mmap2(const struct perf_tool *tool,
+ union perf_event *event,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct dso_id id = dso_id_empty;
+
+ if (event->header.misc & PERF_RECORD_MISC_MMAP_BUILD_ID) {
+ build_id__init(&id.build_id, event->mmap2.build_id, event->mmap2.build_id_size);
+ } else {
+ id.maj = event->mmap2.maj;
+ id.min = event->mmap2.min;
+ id.ino = event->mmap2.ino;
+ id.ino_generation = event->mmap2.ino_generation;
+ id.mmap2_valid = true;
+ id.mmap2_ino_generation_valid = true;
+ }
+
+ return perf_event__repipe_common_mmap(
+ tool, event, sample, machine,
+ event->mmap2.pid, event->mmap2.tid,
+ event->mmap2.start, event->mmap2.len, event->mmap2.pgoff,
+ event->mmap2.flags, event->mmap2.prot,
+ event->mmap2.filename, &id,
+ perf_event__process_mmap2);
}
-static int perf_event__repipe_fork(struct perf_tool *tool,
+static int perf_event__repipe_fork(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -579,7 +622,7 @@ static int perf_event__repipe_fork(struct perf_tool *tool,
return err;
}
-static int perf_event__repipe_comm(struct perf_tool *tool,
+static int perf_event__repipe_comm(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -592,7 +635,7 @@ static int perf_event__repipe_comm(struct perf_tool *tool,
return err;
}
-static int perf_event__repipe_namespaces(struct perf_tool *tool,
+static int perf_event__repipe_namespaces(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -604,7 +647,7 @@ static int perf_event__repipe_namespaces(struct perf_tool *tool,
return err;
}
-static int perf_event__repipe_exit(struct perf_tool *tool,
+static int perf_event__repipe_exit(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -630,25 +673,26 @@ static int perf_event__repipe_tracing_data(struct perf_session *session,
static int dso__read_build_id(struct dso *dso)
{
struct nscookie nsc;
+ struct build_id bid = { .size = 0, };
- if (dso->has_build_id)
+ if (dso__has_build_id(dso))
return 0;
- mutex_lock(&dso->lock);
- nsinfo__mountns_enter(dso->nsinfo, &nsc);
- if (filename__read_build_id(dso->long_name, &dso->bid) > 0)
- dso->has_build_id = true;
- else if (dso->nsinfo) {
- char *new_name = dso__filename_with_chroot(dso, dso->long_name);
+ mutex_lock(dso__lock(dso));
+ nsinfo__mountns_enter(dso__nsinfo(dso), &nsc);
+ if (filename__read_build_id(dso__long_name(dso), &bid, /*block=*/true) > 0)
+ dso__set_build_id(dso, &bid);
+ else if (dso__nsinfo(dso)) {
+ char *new_name = dso__filename_with_chroot(dso, dso__long_name(dso));
- if (new_name && filename__read_build_id(new_name, &dso->bid) > 0)
- dso->has_build_id = true;
+ if (new_name && filename__read_build_id(new_name, &bid, /*block=*/true) > 0)
+ dso__set_build_id(dso, &bid);
free(new_name);
}
nsinfo__mountns_exit(&nsc);
- mutex_unlock(&dso->lock);
+ mutex_unlock(dso__lock(dso));
- return dso->has_build_id ? 0 : -1;
+ return dso__has_build_id(dso) ? 0 : -1;
}
static struct strlist *perf_inject__parse_known_build_ids(
@@ -691,38 +735,45 @@ static bool perf_inject__lookup_known_build_id(struct perf_inject *inject,
struct dso *dso)
{
struct str_node *pos;
- int bid_len;
strlist__for_each_entry(pos, inject->known_build_ids) {
+ struct build_id bid;
const char *build_id, *dso_name;
+ size_t bid_len;
build_id = skip_spaces(pos->s);
dso_name = strchr(build_id, ' ');
bid_len = dso_name - pos->s;
+ if (bid_len > sizeof(bid.data))
+ bid_len = sizeof(bid.data);
dso_name = skip_spaces(dso_name);
- if (strcmp(dso->long_name, dso_name))
+ if (strcmp(dso__long_name(dso), dso_name))
continue;
- for (int ix = 0; 2 * ix + 1 < bid_len; ++ix) {
- dso->bid.data[ix] = (hex(build_id[2 * ix]) << 4 |
- hex(build_id[2 * ix + 1]));
+ for (size_t ix = 0; 2 * ix + 1 < bid_len; ++ix) {
+ bid.data[ix] = (hex(build_id[2 * ix]) << 4 |
+ hex(build_id[2 * ix + 1]));
}
- dso->bid.size = bid_len / 2;
- dso->has_build_id = 1;
+ bid.size = bid_len / 2;
+ dso__set_build_id(dso, &bid);
return true;
}
return false;
}
-static int dso__inject_build_id(struct dso *dso, struct perf_tool *tool,
- struct machine *machine, u8 cpumode, u32 flags)
+static int tool__inject_build_id(const struct perf_tool *tool,
+ struct perf_sample *sample,
+ struct machine *machine,
+ const struct evsel *evsel,
+ __u16 misc,
+ const char *filename,
+ struct dso *dso, u32 flags)
{
- struct perf_inject *inject = container_of(tool, struct perf_inject,
- tool);
+ struct perf_inject *inject = container_of(tool, struct perf_inject, tool);
int err;
- if (is_anon_memory(dso->long_name) || flags & MAP_HUGETLB)
+ if (is_anon_memory(filename) || flags & MAP_HUGETLB)
return 0;
- if (is_no_dso_memory(dso->long_name))
+ if (is_no_dso_memory(filename))
return 0;
if (inject->known_build_ids != NULL &&
@@ -730,27 +781,158 @@ static int dso__inject_build_id(struct dso *dso, struct perf_tool *tool,
return 1;
if (dso__read_build_id(dso) < 0) {
- pr_debug("no build_id found for %s\n", dso->long_name);
+ pr_debug("no build_id found for %s\n", filename);
+ return -1;
+ }
+
+ err = perf_event__synthesize_build_id(tool, sample, machine,
+ perf_event__repipe,
+ evsel, misc, dso__bid(dso),
+ filename);
+ if (err) {
+ pr_err("Can't synthesize build_id event for %s\n", filename);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int tool__inject_mmap2_build_id(const struct perf_tool *tool,
+ struct perf_sample *sample,
+ struct machine *machine,
+ const struct evsel *evsel,
+ __u16 misc,
+ __u32 pid, __u32 tid,
+ __u64 start, __u64 len, __u64 pgoff,
+ struct dso *dso,
+ __u32 prot, __u32 flags,
+ const char *filename)
+{
+ int err;
+
+ /* Return to repipe anonymous maps. */
+ if (is_anon_memory(filename) || flags & MAP_HUGETLB)
+ return 1;
+ if (is_no_dso_memory(filename))
+ return 1;
+
+ if (dso__read_build_id(dso)) {
+ pr_debug("no build_id found for %s\n", filename);
return -1;
}
- err = perf_event__synthesize_build_id(tool, dso, cpumode,
- perf_event__repipe, machine);
+ err = perf_event__synthesize_mmap2_build_id(tool, sample, machine,
+ perf_event__repipe,
+ evsel,
+ misc, pid, tid,
+ start, len, pgoff,
+ dso__bid(dso),
+ prot, flags,
+ filename);
if (err) {
- pr_err("Can't synthesize build_id event for %s\n", dso->long_name);
+ pr_err("Can't synthesize build_id event for %s\n", filename);
return -1;
}
+ return 0;
+}
+
+static int mark_dso_hit(const struct perf_inject *inject,
+ const struct perf_tool *tool,
+ struct perf_sample *sample,
+ struct machine *machine,
+ const struct evsel *mmap_evsel,
+ struct map *map, bool sample_in_dso)
+{
+ struct dso *dso;
+ u16 misc = sample->cpumode;
+ if (!map)
+ return 0;
+
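+ /*
+ * Callchain entries may run at a different privilege level than the
+ * sampled IP, so rederive the kernel/user (and guest) cpumode from the
+ * map itself instead of trusting sample->cpumode.
+ */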
+ if (!sample_in_dso) {
+ u16 guest_mask = PERF_RECORD_MISC_GUEST_KERNEL |
+ PERF_RECORD_MISC_GUEST_USER;
+
+ if ((misc & guest_mask) != 0) {
+ misc &= PERF_RECORD_MISC_HYPERVISOR;
+ misc |= __map__is_kernel(map)
+ ? PERF_RECORD_MISC_GUEST_KERNEL
+ : PERF_RECORD_MISC_GUEST_USER;
+ } else {
+ misc &= PERF_RECORD_MISC_HYPERVISOR;
+ misc |= __map__is_kernel(map)
+ ? PERF_RECORD_MISC_KERNEL
+ : PERF_RECORD_MISC_USER;
+ }
+ }
+ dso = map__dso(map);
+ if (inject->build_id_style == BID_RWS__INJECT_HEADER_LAZY) {
+ if (dso && !dso__hit(dso)) {
+ dso__set_hit(dso);
+ tool__inject_build_id(tool, sample, machine,
+ mmap_evsel, misc, dso__long_name(dso), dso,
+ map__flags(map));
+ }
+ } else if (inject->build_id_style == BID_RWS__MMAP2_BUILDID_LAZY) {
+ if (!map__hit(map)) {
+ const struct build_id null_bid = { .size = 0 };
+ const struct build_id *bid = dso ? dso__bid(dso) : &null_bid;
+ const char *filename = dso ? dso__long_name(dso) : "";
+
+ map__set_hit(map);
+ perf_event__synthesize_mmap2_build_id(tool, sample, machine,
+ perf_event__repipe,
+ mmap_evsel,
+ misc,
+ sample->pid, sample->tid,
+ map__start(map),
+ map__end(map) - map__start(map),
+ map__pgoff(map),
+ bid,
+ map__prot(map),
+ map__flags(map),
+ filename);
+ }
+ }
return 0;
}
-int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
+struct mark_dso_hit_args {
+ const struct perf_inject *inject;
+ const struct perf_tool *tool;
+ struct perf_sample *sample;
+ struct machine *machine;
+ const struct evsel *mmap_evsel;
+};
+
+static int mark_dso_hit_callback(struct callchain_cursor_node *node, void *data)
+{
+ struct mark_dso_hit_args *args = data;
+ struct map *map = node->ms.map;
+
+ return mark_dso_hit(args->inject, args->tool, args->sample, args->machine,
+ args->mmap_evsel, map, /*sample_in_dso=*/false);
+}
+
+int perf_event__inject_buildid(const struct perf_tool *tool, union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel __maybe_unused,
struct machine *machine)
{
struct addr_location al;
struct thread *thread;
+ struct perf_inject *inject = container_of(tool, struct perf_inject, tool);
+ struct mark_dso_hit_args args = {
+ .inject = inject,
+ .tool = tool,
+ /*
+ * Use the parsed sample data of the sample event, which will
+ * have a later timestamp than the mmap event.
+ */
+ .sample = sample,
+ .machine = machine,
+ .mmap_evsel = inject__mmap_evsel(inject),
+ };
addr_location__init(&al);
thread = machine__findnew_thread(machine, sample->pid, sample->tid);
@@ -761,15 +943,13 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
}
if (thread__find_map(thread, sample->cpumode, sample->ip, &al)) {
- struct dso *dso = map__dso(al.map);
-
- if (!dso->hit) {
- dso->hit = 1;
- dso__inject_build_id(dso, tool, machine,
- sample->cpumode, map__flags(al.map));
- }
+ mark_dso_hit(inject, tool, sample, machine, args.mmap_evsel, al.map,
+ /*sample_in_dso=*/true);
}
+ sample__for_each_callchain_node(thread, evsel, sample, PERF_MAX_STACK_DEPTH,
+ /*symbols=*/false, mark_dso_hit_callback, &args);
+
thread__put(thread);
repipe:
perf_event__repipe(tool, event, sample, machine);
@@ -777,7 +957,7 @@ repipe:
return 0;
}
-static int perf_inject__sched_process_exit(struct perf_tool *tool,
+static int perf_inject__sched_process_exit(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct evsel *evsel __maybe_unused,
@@ -797,7 +977,7 @@ static int perf_inject__sched_process_exit(struct perf_tool *tool,
return 0;
}
-static int perf_inject__sched_switch(struct perf_tool *tool,
+static int perf_inject__sched_switch(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -822,7 +1002,7 @@ static int perf_inject__sched_switch(struct perf_tool *tool,
}
#ifdef HAVE_LIBTRACEEVENT
-static int perf_inject__sched_stat(struct perf_tool *tool,
+static int perf_inject__sched_stat(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct evsel *evsel,
@@ -867,7 +1047,7 @@ static int guest_session__output_bytes(struct guest_session *gs, void *buf, size
return ret < 0 ? ret : 0;
}
-static int guest_session__repipe(struct perf_tool *tool,
+static int guest_session__repipe(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -1033,7 +1213,7 @@ static struct guest_id *guest_session__lookup_id(struct guest_session *gs, u64 i
return NULL;
}
-static int process_attr(struct perf_tool *tool, union perf_event *event,
+static int process_attr(const struct perf_tool *tool, union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
{
@@ -1146,8 +1326,8 @@ static bool dso__is_in_kernel_space(struct dso *dso)
return false;
return dso__is_kcore(dso) ||
- dso->kernel ||
- is_kernel_module(dso->long_name, PERF_RECORD_MISC_CPUMODE_UNKNOWN);
+ dso__kernel(dso) ||
+ is_kernel_module(dso__long_name(dso), PERF_RECORD_MISC_CPUMODE_UNKNOWN);
}
static u64 evlist__first_id(struct evlist *evlist)
@@ -1161,7 +1341,7 @@ static u64 evlist__first_id(struct evlist *evlist)
return 0;
}
-static int process_build_id(struct perf_tool *tool,
+static int process_build_id(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -1174,39 +1354,54 @@ static int process_build_id(struct perf_tool *tool,
static int synthesize_build_id(struct perf_inject *inject, struct dso *dso, pid_t machine_pid)
{
struct machine *machine = perf_session__findnew_machine(inject->session, machine_pid);
- u8 cpumode = dso__is_in_kernel_space(dso) ?
- PERF_RECORD_MISC_GUEST_KERNEL :
- PERF_RECORD_MISC_GUEST_USER;
+ struct perf_sample synth_sample = {
+ .pid = -1,
+ .tid = -1,
+ .time = -1,
+ .stream_id = -1,
+ .cpu = -1,
+ .period = 1,
+ .cpumode = dso__is_in_kernel_space(dso)
+ ? PERF_RECORD_MISC_GUEST_KERNEL
+ : PERF_RECORD_MISC_GUEST_USER,
+ };
if (!machine)
return -ENOMEM;
- dso->hit = 1;
+ dso__set_hit(dso);
+
+ return perf_event__synthesize_build_id(&inject->tool, &synth_sample, machine,
+ process_build_id, inject__mmap_evsel(inject),
+ /*misc=*/synth_sample.cpumode,
+ dso__bid(dso), dso__long_name(dso));
+}
+
+static int guest_session__add_build_ids_cb(struct dso *dso, void *data)
+{
+ struct guest_session *gs = data;
+ struct perf_inject *inject = container_of(gs, struct perf_inject, guest_session);
+
+ if (!dso__has_build_id(dso))
+ return 0;
+
+ return synthesize_build_id(inject, dso, gs->machine_pid);
- return perf_event__synthesize_build_id(&inject->tool, dso, cpumode,
- process_build_id, machine);
}
static int guest_session__add_build_ids(struct guest_session *gs)
{
struct perf_inject *inject = container_of(gs, struct perf_inject, guest_session);
- struct machine *machine = &gs->session->machines.host;
- struct dso *dso;
- int ret;
/* Build IDs will be put in the Build ID feature section */
perf_header__set_feat(&inject->session->header, HEADER_BUILD_ID);
- dsos__for_each_with_build_id(dso, &machine->dsos.head) {
- ret = synthesize_build_id(inject, dso, gs->machine_pid);
- if (ret)
- return ret;
- }
-
- return 0;
+ return dsos__for_each_dso(&gs->session->machines.host.dsos,
+ guest_session__add_build_ids_cb,
+ gs);
}
-static int guest_session__ksymbol_event(struct perf_tool *tool,
+static int guest_session__ksymbol_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -1570,7 +1765,7 @@ static int guest_session__flush_events(struct guest_session *gs)
return guest_session__inject_events(gs, -1);
}
-static int host__repipe(struct perf_tool *tool,
+static int host__repipe(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -1643,7 +1838,7 @@ static int host__finished_init(struct perf_session *session, union perf_event *e
* guest events up to the same time. Finally write out the FINISHED_ROUND event
* itself.
*/
-static int host__finished_round(struct perf_tool *tool,
+static int host__finished_round(const struct perf_tool *tool,
union perf_event *event,
struct ordered_events *oe)
{
@@ -1661,7 +1856,7 @@ static int host__finished_round(struct perf_tool *tool,
return perf_event__repipe_oe_synth(tool, event, oe);
}
-static int host__context_switch(struct perf_tool *tool,
+static int host__context_switch(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -1715,7 +1910,7 @@ static int evsel__check_stype(struct evsel *evsel, u64 sample_type, const char *
return 0;
}
-static int drop_sample(struct perf_tool *tool __maybe_unused,
+static int drop_sample(const struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct perf_sample *sample __maybe_unused,
struct evsel *evsel __maybe_unused,
@@ -1976,12 +2171,18 @@ static int __cmd_inject(struct perf_inject *inject)
struct guest_session *gs = &inject->guest_session;
struct perf_session *session = inject->session;
int fd = output_fd(inject);
- u64 output_data_offset;
+ u64 output_data_offset = perf_session__data_offset(session->evlist);
+ /*
+ * Pipe input hasn't loaded the attributes and will handle them as
+ * events. So that the attributes don't overlap the data, write the
+ * attributes after the data.
+ */
+ bool write_attrs_after_data = !inject->output.is_pipe && inject->session->data->is_pipe;
signal(SIGINT, sig_handler);
- if (inject->build_ids || inject->sched_stat ||
- inject->itrace_synth_opts.set || inject->build_id_all) {
+ if (inject->build_id_style != BID_RWS__NONE || inject->sched_stat ||
+ inject->itrace_synth_opts.set) {
inject->tool.mmap = perf_event__repipe_mmap;
inject->tool.mmap2 = perf_event__repipe_mmap2;
inject->tool.fork = perf_event__repipe_fork;
@@ -1990,12 +2191,8 @@ static int __cmd_inject(struct perf_inject *inject)
#endif
}
- output_data_offset = perf_session__data_offset(session->evlist);
-
- if (inject->build_id_all) {
- inject->tool.mmap = perf_event__repipe_buildid_mmap;
- inject->tool.mmap2 = perf_event__repipe_buildid_mmap2;
- } else if (inject->build_ids) {
+ if (inject->build_id_style == BID_RWS__INJECT_HEADER_LAZY ||
+ inject->build_id_style == BID_RWS__MMAP2_BUILDID_LAZY) {
inject->tool.sample = perf_event__inject_buildid;
} else if (inject->sched_stat) {
struct evsel *evsel;
@@ -2065,7 +2262,7 @@ static int __cmd_inject(struct perf_inject *inject)
*/
inject->tool.finished_init = host__finished_init;
/* Obey finished round ordering */
- inject->tool.finished_round = host__finished_round,
+ inject->tool.finished_round = host__finished_round;
 /* Keep track of which CPU a VCPU is running on */
inject->tool.context_switch = host__context_switch;
/*
@@ -2088,7 +2285,7 @@ static int __cmd_inject(struct perf_inject *inject)
if (!inject->itrace_synth_opts.set)
auxtrace_index__free(&session->auxtrace_index);
- if (!inject->is_pipe && !inject->in_place_update)
+ if (!inject->output.is_pipe && !inject->in_place_update)
lseek(fd, output_data_offset, SEEK_SET);
ret = perf_session__process_events(session);
@@ -2107,22 +2304,22 @@ static int __cmd_inject(struct perf_inject *inject)
}
}
- if (!inject->is_pipe && !inject->in_place_update) {
+ if (!inject->output.is_pipe && !inject->in_place_update) {
struct inject_fc inj_fc = {
.fc.copy = feat_copy_cb,
.inject = inject,
};
- if (inject->build_ids)
- perf_header__set_feat(&session->header,
- HEADER_BUILD_ID);
+ if (inject->build_id_style == BID_RWS__INJECT_HEADER_LAZY ||
+ inject->build_id_style == BID_RWS__INJECT_HEADER_ALL)
+ perf_header__set_feat(&session->header, HEADER_BUILD_ID);
/*
* Keep all buildids when there is unprocessed AUX data because
* it is not known which ones the AUX trace hits.
*/
if (perf_header__has_feat(&session->header, HEADER_BUILD_ID) &&
inject->have_auxtrace && !inject->itrace_synth_opts.set)
- dsos__hit_all(session);
+ perf_session__dsos_hit_all(session);
/*
* The AUX areas have been removed and replaced with
* synthesized hardware events, so clear the feature flag.
@@ -2137,7 +2334,8 @@ static int __cmd_inject(struct perf_inject *inject)
}
session->header.data_offset = output_data_offset;
session->header.data_size = inject->bytes_written;
- perf_session__inject_header(session, session->evlist, fd, &inj_fc.fc);
+ perf_session__inject_header(session, session->evlist, fd, &inj_fc.fc,
+ write_attrs_after_data);
if (inject->copy_kcore_dir) {
ret = copy_kcore_dir(inject);
@@ -2161,46 +2359,6 @@ static int __cmd_inject(struct perf_inject *inject)
int cmd_inject(int argc, const char **argv)
{
struct perf_inject inject = {
- .tool = {
- .sample = perf_event__repipe_sample,
- .read = perf_event__repipe_sample,
- .mmap = perf_event__repipe,
- .mmap2 = perf_event__repipe,
- .comm = perf_event__repipe,
- .namespaces = perf_event__repipe,
- .cgroup = perf_event__repipe,
- .fork = perf_event__repipe,
- .exit = perf_event__repipe,
- .lost = perf_event__repipe,
- .lost_samples = perf_event__repipe,
- .aux = perf_event__repipe,
- .itrace_start = perf_event__repipe,
- .aux_output_hw_id = perf_event__repipe,
- .context_switch = perf_event__repipe,
- .throttle = perf_event__repipe,
- .unthrottle = perf_event__repipe,
- .ksymbol = perf_event__repipe,
- .bpf = perf_event__repipe,
- .text_poke = perf_event__repipe,
- .attr = perf_event__repipe_attr,
- .event_update = perf_event__repipe_event_update,
- .tracing_data = perf_event__repipe_op2_synth,
- .finished_round = perf_event__repipe_oe_synth,
- .build_id = perf_event__repipe_op2_synth,
- .id_index = perf_event__repipe_op2_synth,
- .auxtrace_info = perf_event__repipe_op2_synth,
- .auxtrace_error = perf_event__repipe_op2_synth,
- .time_conv = perf_event__repipe_op2_synth,
- .thread_map = perf_event__repipe_op2_synth,
- .cpu_map = perf_event__repipe_op2_synth,
- .stat_config = perf_event__repipe_op2_synth,
- .stat = perf_event__repipe_op2_synth,
- .stat_round = perf_event__repipe_op2_synth,
- .feature = perf_event__repipe_op2_synth,
- .finished_init = perf_event__repipe_op2_synth,
- .compressed = perf_event__repipe_op4_synth,
- .auxtrace = perf_event__repipe_auxtrace,
- },
.input_name = "-",
.samples = LIST_HEAD_INIT(inject.samples),
.output = {
@@ -2214,14 +2372,21 @@ int cmd_inject(int argc, const char **argv)
.use_stdio = true,
};
int ret;
- bool repipe = true;
const char *known_build_ids = NULL;
+ bool build_ids = false;
+ bool build_id_all = false;
+ bool mmap2_build_ids = false;
+ bool mmap2_build_id_all = false;
struct option options[] = {
- OPT_BOOLEAN('b', "build-ids", &inject.build_ids,
+ OPT_BOOLEAN('b', "build-ids", &build_ids,
"Inject build-ids into the output stream"),
- OPT_BOOLEAN(0, "buildid-all", &inject.build_id_all,
+ OPT_BOOLEAN(0, "buildid-all", &build_id_all,
"Inject build-ids of all DSOs into the output stream"),
+ OPT_BOOLEAN('B', "mmap2-buildids", &mmap2_build_ids,
+ "Drop unused mmap events, make others mmap2 with build IDs"),
+ OPT_BOOLEAN(0, "mmap2-buildid-all", &mmap2_build_id_all,
+ "Rewrite all mmap events as mmap2 events with build IDs"),
OPT_STRING(0, "known-build-ids", &known_build_ids,
"buildid path [,buildid path...]",
"build-ids to use for given paths"),
@@ -2265,6 +2430,7 @@ int cmd_inject(int argc, const char **argv)
"perf inject [<options>]",
NULL
};
+ bool ordered_events;
if (!inject.itrace_synth_opts.set) {
/* Disable eager loading of kernel symbols that adds overhead to perf inject. */
@@ -2317,22 +2483,65 @@ int cmd_inject(int argc, const char **argv)
return -1;
}
}
+ if (mmap2_build_ids)
+ inject.build_id_style = BID_RWS__MMAP2_BUILDID_LAZY;
+ if (mmap2_build_id_all)
+ inject.build_id_style = BID_RWS__MMAP2_BUILDID_ALL;
+ if (build_ids)
+ inject.build_id_style = BID_RWS__INJECT_HEADER_LAZY;
+ if (build_id_all)
+ inject.build_id_style = BID_RWS__INJECT_HEADER_ALL;
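
Because these four checks are plain assignments evaluated in order, the later options take precedence when several are given: as written above, --buildid-all overrides -b, which overrides --mmap2-buildid-all and -B. A small sketch of that precedence, with a local enum mirroring the names used in the hunk (the ordering of the values is illustrative):

#include <stdio.h>

/* Local mirror of the build_id_style names used above. */
enum build_id_rewrite_style {
	BID_RWS__NONE,
	BID_RWS__MMAP2_BUILDID_LAZY,
	BID_RWS__MMAP2_BUILDID_ALL,
	BID_RWS__INJECT_HEADER_LAZY,
	BID_RWS__INJECT_HEADER_ALL,
};

/* Same shape as the option handling above: later assignments win. */
static enum build_id_rewrite_style pick_style(int mmap2, int mmap2_all,
					      int header, int header_all)
{
	enum build_id_rewrite_style style = BID_RWS__NONE;

	if (mmap2)
		style = BID_RWS__MMAP2_BUILDID_LAZY;
	if (mmap2_all)
		style = BID_RWS__MMAP2_BUILDID_ALL;
	if (header)
		style = BID_RWS__INJECT_HEADER_LAZY;
	if (header_all)
		style = BID_RWS__INJECT_HEADER_ALL;
	return style;
}

int main(void)
{
	/* -B together with --buildid-all resolves to the header-all style. */
	printf("%d\n", pick_style(1, 0, 0, 1) == BID_RWS__INJECT_HEADER_ALL);
	return 0;
}
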
data.path = inject.input_name;
- if (!strcmp(inject.input_name, "-") || inject.output.is_pipe) {
- inject.is_pipe = true;
- /*
- * Do not repipe header when input is a regular file
- * since either it can rewrite the header at the end
- * or write a new pipe header.
- */
- if (strcmp(inject.input_name, "-"))
- repipe = false;
- }
- inject.session = __perf_session__new(&data, repipe,
- output_fd(&inject),
- &inject.tool);
+ ordered_events = inject.jit_mode || inject.sched_stat ||
+ inject.build_id_style == BID_RWS__INJECT_HEADER_LAZY ||
+ inject.build_id_style == BID_RWS__MMAP2_BUILDID_LAZY;
+ perf_tool__init(&inject.tool, ordered_events);
+ inject.tool.sample = perf_event__repipe_sample;
+ inject.tool.read = perf_event__repipe_sample;
+ inject.tool.mmap = perf_event__repipe;
+ inject.tool.mmap2 = perf_event__repipe;
+ inject.tool.comm = perf_event__repipe;
+ inject.tool.namespaces = perf_event__repipe;
+ inject.tool.cgroup = perf_event__repipe;
+ inject.tool.fork = perf_event__repipe;
+ inject.tool.exit = perf_event__repipe;
+ inject.tool.lost = perf_event__repipe;
+ inject.tool.lost_samples = perf_event__repipe;
+ inject.tool.aux = perf_event__repipe;
+ inject.tool.itrace_start = perf_event__repipe;
+ inject.tool.aux_output_hw_id = perf_event__repipe;
+ inject.tool.context_switch = perf_event__repipe;
+ inject.tool.throttle = perf_event__repipe;
+ inject.tool.unthrottle = perf_event__repipe;
+ inject.tool.ksymbol = perf_event__repipe;
+ inject.tool.bpf = perf_event__repipe;
+ inject.tool.text_poke = perf_event__repipe;
+ inject.tool.attr = perf_event__repipe_attr;
+ inject.tool.event_update = perf_event__repipe_event_update;
+ inject.tool.tracing_data = perf_event__repipe_op2_synth;
+ inject.tool.finished_round = perf_event__repipe_oe_synth;
+ inject.tool.build_id = perf_event__repipe_op2_synth;
+ inject.tool.id_index = perf_event__repipe_op2_synth;
+ inject.tool.auxtrace_info = perf_event__repipe_op2_synth;
+ inject.tool.auxtrace_error = perf_event__repipe_op2_synth;
+ inject.tool.time_conv = perf_event__repipe_op2_synth;
+ inject.tool.thread_map = perf_event__repipe_op2_synth;
+ inject.tool.cpu_map = perf_event__repipe_op2_synth;
+ inject.tool.stat_config = perf_event__repipe_op2_synth;
+ inject.tool.stat = perf_event__repipe_op2_synth;
+ inject.tool.stat_round = perf_event__repipe_op2_synth;
+ inject.tool.feature = perf_event__repipe_op2_synth;
+ inject.tool.finished_init = perf_event__repipe_op2_synth;
+ inject.tool.compressed = perf_event__repipe_op4_synth;
+ inject.tool.auxtrace = perf_event__repipe_auxtrace;
+ inject.tool.bpf_metadata = perf_event__repipe_op2_synth;
+ inject.tool.dont_split_sample_group = true;
+ inject.session = __perf_session__new(&data, &inject.tool,
+ /*trace_event_repipe=*/inject.output.is_pipe,
+ /*host_env=*/NULL);
+
if (IS_ERR(inject.session)) {
ret = PTR_ERR(inject.session);
goto out_close_output;
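
The designated-initializer tool struct from the removed hunk is replaced by perf_tool__init() followed by per-handler assignments, so only the callbacks this command actually overrides need to be spelled out. A minimal standalone sketch of that init-then-override shape (struct tool, the default handler and the field names below are illustrative, not the real struct perf_tool):

#include <stdio.h>

struct event;	/* opaque stand-in for union perf_event */

struct tool {
	int (*sample)(const struct tool *tool, struct event *ev);
	int (*mmap)(const struct tool *tool, struct event *ev);
	int ordered_events;
};

static int default_handler(const struct tool *tool, struct event *ev)
{
	(void)tool; (void)ev;
	return 0;	/* ignore the event by default */
}

/* Every callback starts out pointing at the default handler. */
static void tool_init(struct tool *t, int ordered_events)
{
	t->sample = default_handler;
	t->mmap = default_handler;
	t->ordered_events = ordered_events;
}

static int repipe_sample(const struct tool *tool, struct event *ev)
{
	(void)tool; (void)ev;
	return 1;	/* pretend we forwarded the event */
}

int main(void)
{
	struct tool t;

	tool_init(&t, /*ordered_events=*/1);
	t.sample = repipe_sample;	/* override only what this command needs */
	printf("sample=%d mmap=%d\n", t.sample(&t, NULL), t.mmap(&t, NULL));
	return 0;
}
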
@@ -2346,50 +2555,52 @@ int cmd_inject(int argc, const char **argv)
if (ret)
goto out_delete;
- if (!data.is_pipe && inject.output.is_pipe) {
+ if (inject.output.is_pipe) {
ret = perf_header__write_pipe(perf_data__fd(&inject.output));
if (ret < 0) {
pr_err("Couldn't write a new pipe header.\n");
goto out_delete;
}
- ret = perf_event__synthesize_for_pipe(&inject.tool,
- inject.session,
- &inject.output,
- perf_event__repipe);
- if (ret < 0)
- goto out_delete;
+ /*
+ * If the input is already a pipe then the features and
+ * attributes don't need synthesizing, they will be present in
+ * the input.
+ */
+ if (!data.is_pipe) {
+ ret = perf_event__synthesize_for_pipe(&inject.tool,
+ inject.session,
+ &inject.output,
+ perf_event__repipe);
+ if (ret < 0)
+ goto out_delete;
+ }
}
- if (inject.build_ids && !inject.build_id_all) {
+ if (inject.build_id_style == BID_RWS__INJECT_HEADER_LAZY ||
+ inject.build_id_style == BID_RWS__MMAP2_BUILDID_LAZY) {
/*
 * to make sure the mmap records are ordered correctly,
 * especially for jitted code mmaps. We cannot generate
 * the buildid hit list and
* inject the jit mmaps at the same time for now.
*/
- inject.tool.ordered_events = true;
inject.tool.ordering_requires_timestamps = true;
- if (known_build_ids != NULL) {
- inject.known_build_ids =
- perf_inject__parse_known_build_ids(known_build_ids);
-
- if (inject.known_build_ids == NULL) {
- pr_err("Couldn't parse known build ids.\n");
- goto out_delete;
- }
- }
}
+ if (inject.build_id_style != BID_RWS__NONE && known_build_ids != NULL) {
+ inject.known_build_ids =
+ perf_inject__parse_known_build_ids(known_build_ids);
- if (inject.sched_stat) {
- inject.tool.ordered_events = true;
+ if (inject.known_build_ids == NULL) {
+ pr_err("Couldn't parse known build ids.\n");
+ goto out_delete;
+ }
}
#ifdef HAVE_JITDUMP
if (inject.jit_mode) {
- inject.tool.mmap2 = perf_event__jit_repipe_mmap2;
- inject.tool.mmap = perf_event__jit_repipe_mmap;
- inject.tool.ordered_events = true;
+ inject.tool.mmap2 = perf_event__repipe_mmap2;
+ inject.tool.mmap = perf_event__repipe_mmap;
inject.tool.ordering_requires_timestamps = true;
/*
* JIT MMAP injection injects all MMAP events in one go, so it
@@ -2398,7 +2609,7 @@ int cmd_inject(int argc, const char **argv)
inject.tool.finished_round = perf_event__drop_oe;
}
#endif
- ret = symbol__init(&inject.session->header.env);
+ ret = symbol__init(perf_session__env(inject.session));
if (ret < 0)
goto out_delete;
diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
index 7f75c5b73f26..3c4339982b16 100644
--- a/tools/perf/builtin-kallsyms.c
+++ b/tools/perf/builtin-kallsyms.c
@@ -12,18 +12,28 @@
#include <subcmd/parse-options.h>
#include "debug.h"
#include "dso.h"
+#include "env.h"
#include "machine.h"
#include "map.h"
#include "symbol.h"
static int __cmd_kallsyms(int argc, const char **argv)
{
- int i;
- struct machine *machine = machine__new_kallsyms();
+ int i, err;
+ struct perf_env host_env;
+ struct machine *machine = NULL;
+
+ perf_env__init(&host_env);
+ err = perf_env__set_cmdline(&host_env, argc, argv);
+ if (err)
+ goto out;
+
+ machine = machine__new_kallsyms(&host_env);
if (machine == NULL) {
pr_err("Couldn't read /proc/kallsyms\n");
- return -1;
+ err = -1;
+ goto out;
}
for (i = 0; i < argc; ++i) {
@@ -38,13 +48,14 @@ static int __cmd_kallsyms(int argc, const char **argv)
dso = map__dso(map);
printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
- symbol->name, dso->short_name, dso->long_name,
+ symbol->name, dso__short_name(dso), dso__long_name(dso),
map__unmap_ip(map, symbol->start), map__unmap_ip(map, symbol->end),
symbol->start, symbol->end);
}
-
+out:
machine__delete(machine);
- return 0;
+ perf_env__exit(&host_env);
+ return err;
}
int cmd_kallsyms(int argc, const char **argv)
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 9714327fd0ea..7929a5fa5f46 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -36,7 +36,7 @@
#include <regex.h>
#include <linux/ctype.h>
-#include <traceevent/event-parse.h>
+#include <event-parse.h>
static int kmem_slab;
static int kmem_page;
@@ -761,6 +761,7 @@ static int parse_gfp_flags(struct evsel *evsel, struct perf_sample *sample,
};
struct trace_seq seq;
char *str, *pos = NULL;
+ const struct tep_event *tp_format;
if (nr_gfps) {
struct gfp_flag key = {
@@ -772,8 +773,9 @@ static int parse_gfp_flags(struct evsel *evsel, struct perf_sample *sample,
}
trace_seq_init(&seq);
- tep_print_event(evsel->tp_format->tep,
- &seq, &record, "%s", TEP_PRINT_INFO);
+ tp_format = evsel__tp_format(evsel);
+ if (tp_format)
+ tep_print_event(tp_format->tep, &seq, &record, "%s", TEP_PRINT_INFO);
str = strtok_r(seq.buffer, " ", &pos);
while (str) {
@@ -955,7 +957,7 @@ static bool perf_kmem__skip_sample(struct perf_sample *sample)
typedef int (*tracepoint_handler)(struct evsel *evsel,
struct perf_sample *sample);
-static int process_sample_event(struct perf_tool *tool __maybe_unused,
+static int process_sample_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -986,15 +988,6 @@ static int process_sample_event(struct perf_tool *tool __maybe_unused,
return err;
}
-static struct perf_tool perf_kmem = {
- .sample = process_sample_event,
- .comm = perf_event__process_comm,
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .namespaces = perf_event__process_namespaces,
- .ordered_events = true,
-};
-
static double fragmentation(unsigned long n_req, unsigned long n_alloc)
{
if (n_alloc == 0)
@@ -1408,7 +1401,7 @@ static int __cmd_kmem(struct perf_session *session)
}
evlist__for_each_entry(session->evlist, evsel) {
- if (!strcmp(evsel__name(evsel), "kmem:mm_page_alloc") &&
+ if (evsel__name_is(evsel, "kmem:mm_page_alloc") &&
evsel__field(evsel, "pfn")) {
use_pfn = true;
break;
@@ -1971,6 +1964,7 @@ int cmd_kmem(int argc, const char **argv)
NULL
};
struct perf_session *session;
+ struct perf_tool perf_kmem;
static const char errmsg[] = "No %s allocation events found. Have you run 'perf kmem record --%s'?\n";
int ret = perf_config(kmem_config, NULL);
@@ -1998,6 +1992,13 @@ int cmd_kmem(int argc, const char **argv)
data.path = input_name;
+ perf_tool__init(&perf_kmem, /*ordered_events=*/true);
+ perf_kmem.sample = process_sample_event;
+ perf_kmem.comm = perf_event__process_comm;
+ perf_kmem.mmap = perf_event__process_mmap;
+ perf_kmem.mmap2 = perf_event__process_mmap2;
+ perf_kmem.namespaces = perf_event__process_namespaces;
+
kmem_session = session = perf_session__new(&data, &perf_kmem);
if (IS_ERR(session))
return PTR_ERR(session);
@@ -2013,17 +2014,17 @@ int cmd_kmem(int argc, const char **argv)
if (kmem_page) {
struct evsel *evsel = evlist__find_tracepoint_by_name(session->evlist, "kmem:mm_page_alloc");
+ const struct tep_event *tp_format = evsel ? evsel__tp_format(evsel) : NULL;
- if (evsel == NULL) {
+ if (tp_format == NULL) {
pr_err(errmsg, "page", "page");
goto out_delete;
}
-
- kmem_page_size = tep_get_page_size(evsel->tp_format->tep);
+ kmem_page_size = tep_get_page_size(tp_format->tep);
symbol_conf.use_callchain = true;
}
- symbol__init(&session->header.env);
+ symbol__init(perf_session__env(session));
if (perf_time__parse_str(&ptime, time_str) != 0) {
pr_err("Invalid time string\n");
@@ -2058,7 +2059,8 @@ int cmd_kmem(int argc, const char **argv)
out_delete:
perf_session__delete(session);
+ /* free usage string allocated by parse_options_subcommand */
+ free((void *)kmem_usage[0]);
return ret;
}
-
diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c
index 71165036e4ca..f0f285763f19 100644
--- a/tools/perf/builtin-kvm.c
+++ b/tools/perf/builtin-kvm.c
@@ -615,67 +615,6 @@ static const char *get_filename_for_perf_kvm(void)
#if defined(HAVE_KVM_STAT_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
-void exit_event_get_key(struct evsel *evsel,
- struct perf_sample *sample,
- struct event_key *key)
-{
- key->info = 0;
- key->key = evsel__intval(evsel, sample, kvm_exit_reason);
-}
-
-bool kvm_exit_event(struct evsel *evsel)
-{
- return evsel__name_is(evsel, kvm_exit_trace);
-}
-
-bool exit_event_begin(struct evsel *evsel,
- struct perf_sample *sample, struct event_key *key)
-{
- if (kvm_exit_event(evsel)) {
- exit_event_get_key(evsel, sample, key);
- return true;
- }
-
- return false;
-}
-
-bool kvm_entry_event(struct evsel *evsel)
-{
- return evsel__name_is(evsel, kvm_entry_trace);
-}
-
-bool exit_event_end(struct evsel *evsel,
- struct perf_sample *sample __maybe_unused,
- struct event_key *key __maybe_unused)
-{
- return kvm_entry_event(evsel);
-}
-
-static const char *get_exit_reason(struct perf_kvm_stat *kvm,
- struct exit_reasons_table *tbl,
- u64 exit_code)
-{
- while (tbl->reason != NULL) {
- if (tbl->exit_code == exit_code)
- return tbl->reason;
- tbl++;
- }
-
- pr_err("unknown kvm exit code:%lld on %s\n",
- (unsigned long long)exit_code, kvm->exit_reasons_isa);
- return "UNKNOWN";
-}
-
-void exit_event_decode_key(struct perf_kvm_stat *kvm,
- struct event_key *key,
- char *decode)
-{
- const char *exit_reason = get_exit_reason(kvm, key->exit_reasons,
- key->key);
-
- scnprintf(decode, KVM_EVENT_NAME_LEN, "%s", exit_reason);
-}
-
static bool register_kvm_events_ops(struct perf_kvm_stat *kvm)
{
struct kvm_reg_events_ops *events_ops = kvm_reg_events_ops;
@@ -1166,7 +1105,7 @@ static void print_result(struct perf_kvm_stat *kvm)
}
#if defined(HAVE_TIMERFD_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
-static int process_lost_event(struct perf_tool *tool,
+static int process_lost_event(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -1187,7 +1126,7 @@ static bool skip_sample(struct perf_kvm_stat *kvm,
return false;
}
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -1226,7 +1165,9 @@ static int cpu_isa_config(struct perf_kvm_stat *kvm)
int err;
if (kvm->live) {
- err = get_cpuid(buf, sizeof(buf));
+ struct perf_cpu cpu = {-1};
+
+ err = get_cpuid(buf, sizeof(buf), cpu);
if (err != 0) {
pr_err("Failed to look up CPU type: %s\n",
str_error_r(err, buf, sizeof(buf)));
@@ -1234,7 +1175,7 @@ static int cpu_isa_config(struct perf_kvm_stat *kvm)
}
cpuid = buf;
} else
- cpuid = kvm->session->header.env.cpuid;
+ cpuid = perf_session__env(kvm->session)->cpuid;
if (!cpuid) {
pr_err("Failed to look up CPU type\n");
@@ -1603,26 +1544,24 @@ static int read_events(struct perf_kvm_stat *kvm)
{
int ret;
- struct perf_tool eops = {
- .sample = process_sample_event,
- .comm = perf_event__process_comm,
- .namespaces = perf_event__process_namespaces,
- .ordered_events = true,
- };
struct perf_data file = {
.path = kvm->file_name,
.mode = PERF_DATA_MODE_READ,
.force = kvm->force,
};
- kvm->tool = eops;
+ perf_tool__init(&kvm->tool, /*ordered_events=*/true);
+ kvm->tool.sample = process_sample_event;
+ kvm->tool.comm = perf_event__process_comm;
+ kvm->tool.namespaces = perf_event__process_namespaces;
+
kvm->session = perf_session__new(&file, &kvm->tool);
if (IS_ERR(kvm->session)) {
pr_err("Initializing perf session failed\n");
return PTR_ERR(kvm->session);
}
- symbol__init(&kvm->session->header.env);
+ symbol__init(perf_session__env(kvm->session));
if (!perf_session__has_traces(kvm->session, "kvm record")) {
ret = -EINVAL;
@@ -1697,14 +1636,6 @@ exit:
return ret;
}
-#define STRDUP_FAIL_EXIT(s) \
- ({ char *_p; \
- _p = strdup(s); \
- if (!_p) \
- return -ENOMEM; \
- _p; \
- })
-
int __weak setup_kvm_events_tp(struct perf_kvm_stat *kvm __maybe_unused)
{
return 0;
@@ -1749,7 +1680,7 @@ kvm_events_record(struct perf_kvm_stat *kvm, int argc, const char **argv)
rec_argv[i] = STRDUP_FAIL_EXIT(record_args[i]);
for (j = 0; j < events_tp_size; j++) {
- rec_argv[i++] = "-e";
+ rec_argv[i++] = STRDUP_FAIL_EXIT("-e");
rec_argv[i++] = STRDUP_FAIL_EXIT(kvm_events_tp[j]);
}
@@ -1757,7 +1688,7 @@ kvm_events_record(struct perf_kvm_stat *kvm, int argc, const char **argv)
rec_argv[i++] = STRDUP_FAIL_EXIT(kvm->file_name);
for (j = 1; j < (unsigned int)argc; j++, i++)
- rec_argv[i] = argv[j];
+ rec_argv[i] = STRDUP_FAIL_EXIT(argv[j]);
set_option_flag(record_options, 'e', "event", PARSE_OPT_HIDDEN);
set_option_flag(record_options, 0, "filter", PARSE_OPT_HIDDEN);
@@ -1780,7 +1711,13 @@ kvm_events_record(struct perf_kvm_stat *kvm, int argc, const char **argv)
set_option_flag(record_options, 0, "transaction", PARSE_OPT_DISABLED);
record_usage = kvm_stat_record_usage;
- return cmd_record(i, rec_argv);
+ ret = cmd_record(i, rec_argv);
+
+EXIT:
+ for (i = 0; i < rec_argc; i++)
+ free((void *)rec_argv[i]);
+ free(rec_argv);
+ return ret;
}
static int
@@ -1919,22 +1856,19 @@ static int kvm_events_live(struct perf_kvm_stat *kvm,
/* event handling */
+ perf_tool__init(&kvm->tool, /*ordered_events=*/true);
kvm->tool.sample = process_sample_event;
kvm->tool.comm = perf_event__process_comm;
kvm->tool.exit = perf_event__process_exit;
kvm->tool.fork = perf_event__process_fork;
kvm->tool.lost = process_lost_event;
kvm->tool.namespaces = perf_event__process_namespaces;
- kvm->tool.ordered_events = true;
- perf_tool__fill_defaults(&kvm->tool);
/* set defaults */
kvm->display_time = 1;
kvm->opts.user_interval = 1;
kvm->opts.mmap_pages = 512;
kvm->opts.target.uses_mmap = false;
- kvm->opts.target.uid_str = NULL;
- kvm->opts.target.uid = UINT_MAX;
symbol__init(NULL);
disable_buildid_cache();
@@ -2064,58 +1998,122 @@ static int __cmd_record(const char *file_name, int argc, const char **argv)
int rec_argc, i = 0, j, ret;
const char **rec_argv;
- ret = kvm_add_default_arch_event(&argc, argv);
- if (ret)
- return -EINVAL;
-
- rec_argc = argc + 2;
+ /*
+ * Besides the two extra options "-o" and the file name,
+ * kvm_add_default_arch_event() may add 2 extra options,
+ * so allocate 4 more items.
+ */
+ rec_argc = argc + 2 + 2;
rec_argv = calloc(rec_argc + 1, sizeof(char *));
- rec_argv[i++] = strdup("record");
- rec_argv[i++] = strdup("-o");
- rec_argv[i++] = strdup(file_name);
+ if (!rec_argv)
+ return -ENOMEM;
+
+ rec_argv[i++] = STRDUP_FAIL_EXIT("record");
+ rec_argv[i++] = STRDUP_FAIL_EXIT("-o");
+ rec_argv[i++] = STRDUP_FAIL_EXIT(file_name);
for (j = 1; j < argc; j++, i++)
- rec_argv[i] = argv[j];
+ rec_argv[i] = STRDUP_FAIL_EXIT(argv[j]);
BUG_ON(i != rec_argc);
- return cmd_record(i, rec_argv);
+ ret = kvm_add_default_arch_event(&i, rec_argv);
+ if (ret)
+ goto EXIT;
+
+ ret = cmd_record(i, rec_argv);
+
+EXIT:
+ for (i = 0; i < rec_argc; i++)
+ free((void *)rec_argv[i]);
+ free(rec_argv);
+ return ret;
}
static int __cmd_report(const char *file_name, int argc, const char **argv)
{
- int rec_argc, i = 0, j;
+ int rec_argc, i = 0, j, ret;
const char **rec_argv;
rec_argc = argc + 2;
rec_argv = calloc(rec_argc + 1, sizeof(char *));
- rec_argv[i++] = strdup("report");
- rec_argv[i++] = strdup("-i");
- rec_argv[i++] = strdup(file_name);
+ if (!rec_argv)
+ return -ENOMEM;
+
+ rec_argv[i++] = STRDUP_FAIL_EXIT("report");
+ rec_argv[i++] = STRDUP_FAIL_EXIT("-i");
+ rec_argv[i++] = STRDUP_FAIL_EXIT(file_name);
for (j = 1; j < argc; j++, i++)
- rec_argv[i] = argv[j];
+ rec_argv[i] = STRDUP_FAIL_EXIT(argv[j]);
BUG_ON(i != rec_argc);
- return cmd_report(i, rec_argv);
+ ret = cmd_report(i, rec_argv);
+
+EXIT:
+ for (i = 0; i < rec_argc; i++)
+ free((void *)rec_argv[i]);
+ free(rec_argv);
+ return ret;
}
static int
__cmd_buildid_list(const char *file_name, int argc, const char **argv)
{
- int rec_argc, i = 0, j;
+ int rec_argc, i = 0, j, ret;
const char **rec_argv;
rec_argc = argc + 2;
rec_argv = calloc(rec_argc + 1, sizeof(char *));
- rec_argv[i++] = strdup("buildid-list");
- rec_argv[i++] = strdup("-i");
- rec_argv[i++] = strdup(file_name);
+ if (!rec_argv)
+ return -ENOMEM;
+
+ rec_argv[i++] = STRDUP_FAIL_EXIT("buildid-list");
+ rec_argv[i++] = STRDUP_FAIL_EXIT("-i");
+ rec_argv[i++] = STRDUP_FAIL_EXIT(file_name);
for (j = 1; j < argc; j++, i++)
- rec_argv[i] = argv[j];
+ rec_argv[i] = STRDUP_FAIL_EXIT(argv[j]);
BUG_ON(i != rec_argc);
- return cmd_buildid_list(i, rec_argv);
+ ret = cmd_buildid_list(i, rec_argv);
+
+EXIT:
+ for (i = 0; i < rec_argc; i++)
+ free((void *)rec_argv[i]);
+ free(rec_argv);
+ return ret;
+}
+
+static int __cmd_top(int argc, const char **argv)
+{
+ int rec_argc, i = 0, ret;
+ const char **rec_argv;
+
+ /*
+ * kvm_add_default_arch_event() may add 2 extra options, so
+ * allocate 2 more pointers in advance.
+ */
+ rec_argc = argc + 2;
+ rec_argv = calloc(rec_argc + 1, sizeof(char *));
+ if (!rec_argv)
+ return -ENOMEM;
+
+ for (i = 0; i < argc; i++)
+ rec_argv[i] = STRDUP_FAIL_EXIT(argv[i]);
+
+ BUG_ON(i != argc);
+
+ ret = kvm_add_default_arch_event(&i, rec_argv);
+ if (ret)
+ goto EXIT;
+
+ ret = cmd_top(i, rec_argv);
+
+EXIT:
+ for (i = 0; i < rec_argc; i++)
+ free((void *)rec_argv[i]);
+ free(rec_argv);
+ return ret;
}
int cmd_kvm(int argc, const char **argv)
@@ -2150,6 +2148,7 @@ int cmd_kvm(int argc, const char **argv)
"buildid-list", "stat", NULL };
const char *kvm_usage[] = { NULL, NULL };
+ exclude_GH_default = true;
perf_host = 0;
perf_guest = 1;
@@ -2177,7 +2176,7 @@ int cmd_kvm(int argc, const char **argv)
else if (strlen(argv[0]) > 2 && strstarts("diff", argv[0]))
return cmd_diff(argc, argv);
else if (!strcmp(argv[0], "top"))
- return cmd_top(argc, argv);
+ return __cmd_top(argc, argv);
else if (strlen(argv[0]) > 2 && strstarts("buildid-list", argv[0]))
return __cmd_buildid_list(file_name, argc, argv);
#if defined(HAVE_KVM_STAT_SUPPORT) && defined(HAVE_LIBTRACEEVENT)
@@ -2187,5 +2186,8 @@ int cmd_kvm(int argc, const char **argv)
else
usage_with_options(kvm_usage, kvm_options);
+ /* free usage string allocated by parse_options_subcommand */
+ free((void *)kvm_usage[0]);
+
return 0;
}
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 0092b9b39611..7f3068264568 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -6,6 +6,7 @@
*/
#include "builtin.h"
+#include "perf.h"
#include "util/data.h"
#include "util/evlist.h"
@@ -23,7 +24,7 @@
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
-#include <traceevent/event-parse.h>
+#include <event-parse.h>
#include <errno.h>
#include <inttypes.h>
@@ -958,7 +959,7 @@ static int top_sched_switch_event(struct perf_kwork *kwork,
}
static struct kwork_class kwork_irq;
-static int process_irq_handler_entry_event(struct perf_tool *tool,
+static int process_irq_handler_entry_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -971,7 +972,7 @@ static int process_irq_handler_entry_event(struct perf_tool *tool,
return 0;
}
-static int process_irq_handler_exit_event(struct perf_tool *tool,
+static int process_irq_handler_exit_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1037,7 +1038,7 @@ static struct kwork_class kwork_irq = {
};
static struct kwork_class kwork_softirq;
-static int process_softirq_raise_event(struct perf_tool *tool,
+static int process_softirq_raise_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1051,7 +1052,7 @@ static int process_softirq_raise_event(struct perf_tool *tool,
return 0;
}
-static int process_softirq_entry_event(struct perf_tool *tool,
+static int process_softirq_entry_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1065,7 +1066,7 @@ static int process_softirq_entry_event(struct perf_tool *tool,
return 0;
}
-static int process_softirq_exit_event(struct perf_tool *tool,
+static int process_softirq_exit_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1103,7 +1104,8 @@ static char *evsel__softirq_name(struct evsel *evsel, u64 num)
char *name = NULL;
bool found = false;
struct tep_print_flag_sym *sym = NULL;
- struct tep_print_arg *args = evsel->tp_format->print_fmt.args;
+ const struct tep_event *tp_format = evsel__tp_format(evsel);
+ struct tep_print_arg *args = tp_format ? tp_format->print_fmt.args : NULL;
if ((args == NULL) || (args->next == NULL))
return NULL;
@@ -1167,7 +1169,7 @@ static struct kwork_class kwork_softirq = {
};
static struct kwork_class kwork_workqueue;
-static int process_workqueue_activate_work_event(struct perf_tool *tool,
+static int process_workqueue_activate_work_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1181,7 +1183,7 @@ static int process_workqueue_activate_work_event(struct perf_tool *tool,
return 0;
}
-static int process_workqueue_execute_start_event(struct perf_tool *tool,
+static int process_workqueue_execute_start_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1195,7 +1197,7 @@ static int process_workqueue_execute_start_event(struct perf_tool *tool,
return 0;
}
-static int process_workqueue_execute_end_event(struct perf_tool *tool,
+static int process_workqueue_execute_end_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1266,7 +1268,7 @@ static struct kwork_class kwork_workqueue = {
};
static struct kwork_class kwork_sched;
-static int process_sched_switch_event(struct perf_tool *tool,
+static int process_sched_switch_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1802,7 +1804,7 @@ static int perf_kwork__read_events(struct perf_kwork *kwork)
return PTR_ERR(session);
}
- symbol__init(&session->header.env);
+ symbol__init(perf_session__env(session));
if (perf_kwork__check_config(kwork, session) != 0)
goto out_delete;
@@ -1846,7 +1848,7 @@ static void process_skipped_events(struct perf_kwork *kwork,
}
}
-struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork,
+static struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork,
struct kwork_class *class,
struct kwork_work *key)
{
@@ -1945,12 +1947,12 @@ static int perf_kwork__report(struct perf_kwork *kwork)
return 0;
}
-typedef int (*tracepoint_handler)(struct perf_tool *tool,
+typedef int (*tracepoint_handler)(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine);
-static int perf_kwork__process_tracepoint_sample(struct perf_tool *tool,
+static int perf_kwork__process_tracepoint_sample(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct evsel *evsel,
@@ -2230,7 +2232,7 @@ static int perf_kwork__top(struct perf_kwork *kwork)
perf_kwork__top_report(kwork);
out:
- free(kwork->top_stat.cpus_runtime);
+ zfree(&kwork->top_stat.cpus_runtime);
return ret;
}
@@ -2271,12 +2273,23 @@ static void setup_event_list(struct perf_kwork *kwork,
pr_debug("\n");
}
+#define STRDUP_FAIL_EXIT(s) \
+ ({ char *_p; \
+ _p = strdup(s); \
+ if (!_p) { \
+ ret = -ENOMEM; \
+ goto EXIT; \
+ } \
+ _p; \
+ })
+
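
This variant of STRDUP_FAIL_EXIT() uses the GNU statement-expression extension and assumes the caller provides both a local ret and an EXIT: cleanup label, which perf_kwork__record() below now does. A small self-contained sketch of how such a macro is used (the names, label and error value here are illustrative):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* strdup() or bail out through the caller's cleanup label. */
#define STRDUP_OR_EXIT(s)			\
	({ char *_p = strdup(s);		\
	   if (!_p) {				\
		ret = -ENOMEM;			\
		goto out;			\
	   }					\
	   _p; })

static int build_argv(const char *name, char **argv_out)
{
	int ret = 0;

	argv_out[0] = STRDUP_OR_EXIT("record");
	argv_out[1] = STRDUP_OR_EXIT(name);
	return 0;
out:
	/* free whatever was duplicated before the failure */
	free(argv_out[0]);
	return ret;
}

int main(void)
{
	char *argv[2] = { NULL, NULL };

	if (build_argv("perf.data", argv))
		return 1;
	printf("%s %s\n", argv[0], argv[1]);
	free(argv[0]);
	free(argv[1]);
	return 0;
}
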
static int perf_kwork__record(struct perf_kwork *kwork,
int argc, const char **argv)
{
const char **rec_argv;
unsigned int rec_argc, i, j;
struct kwork_class *class;
+ int ret;
const char *const record_args[] = {
"record",
@@ -2296,17 +2309,17 @@ static int perf_kwork__record(struct perf_kwork *kwork,
return -ENOMEM;
for (i = 0; i < ARRAY_SIZE(record_args); i++)
- rec_argv[i] = strdup(record_args[i]);
+ rec_argv[i] = STRDUP_FAIL_EXIT(record_args[i]);
list_for_each_entry(class, &kwork->class_list, list) {
for (j = 0; j < class->nr_tracepoints; j++) {
- rec_argv[i++] = strdup("-e");
- rec_argv[i++] = strdup(class->tp_handlers[j].name);
+ rec_argv[i++] = STRDUP_FAIL_EXIT("-e");
+ rec_argv[i++] = STRDUP_FAIL_EXIT(class->tp_handlers[j].name);
}
}
for (j = 1; j < (unsigned int)argc; j++, i++)
- rec_argv[i] = argv[j];
+ rec_argv[i] = STRDUP_FAIL_EXIT(argv[j]);
BUG_ON(i != rec_argc);
@@ -2315,19 +2328,19 @@ static int perf_kwork__record(struct perf_kwork *kwork,
pr_debug("%s ", rec_argv[j]);
pr_debug("\n");
- return cmd_record(i, rec_argv);
+ ret = cmd_record(i, rec_argv);
+
+EXIT:
+ for (i = 0; i < rec_argc; i++)
+ free((void *)rec_argv[i]);
+ free(rec_argv);
+ return ret;
}
int cmd_kwork(int argc, const char **argv)
{
static struct perf_kwork kwork = {
.class_list = LIST_HEAD_INIT(kwork.class_list),
- .tool = {
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .sample = perf_kwork__process_tracepoint_sample,
- .ordered_events = true,
- },
.atom_page_list = LIST_HEAD_INIT(kwork.atom_page_list),
.sort_list = LIST_HEAD_INIT(kwork.sort_list),
.cmp_id = LIST_HEAD_INIT(kwork.cmp_id),
@@ -2350,6 +2363,7 @@ int cmd_kwork(int argc, const char **argv)
.all_runtime = 0,
.all_count = 0,
.nr_skipped_events = { 0 },
+ .add_work = perf_kwork_add_work,
};
static const char default_report_sort_order[] = "runtime, max, count";
static const char default_latency_sort_order[] = "avg, max, count";
@@ -2462,6 +2476,11 @@ int cmd_kwork(int argc, const char **argv)
"record", "report", "latency", "timehist", "top", NULL
};
+ perf_tool__init(&kwork.tool, /*ordered_events=*/true);
+ kwork.tool.mmap = perf_event__process_mmap;
+ kwork.tool.mmap2 = perf_event__process_mmap2;
+ kwork.tool.sample = perf_kwork__process_tracepoint_sample;
+
argc = parse_options_subcommand(argc, argv, kwork_options,
kwork_subcommands, kwork_usage,
PARSE_OPT_STOP_AT_NON_OPTION);
@@ -2520,5 +2539,8 @@ int cmd_kwork(int argc, const char **argv)
} else
usage_with_options(kwork_usage, kwork_options);
+ /* free usage string allocated by parse_options_subcommand */
+ free((void *)kwork_usage[0]);
+
return 0;
}
diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
index e27a1b1288c2..caf42276bd0f 100644
--- a/tools/perf/builtin-list.c
+++ b/tools/perf/builtin-list.c
@@ -19,9 +19,11 @@
#include "util/string2.h"
#include "util/strlist.h"
#include "util/strbuf.h"
+#include "util/tool_pmu.h"
#include <subcmd/pager.h>
#include <subcmd/parse-options.h>
#include <linux/zalloc.h>
+#include <ctype.h>
#include <stdarg.h>
#include <stdio.h>
@@ -56,6 +58,8 @@ struct print_state {
bool metrics;
/** @metricgroups: Controls printing of metric and metric groups. */
bool metricgroups;
+ /** @exclude_abi: Exclude PMUs with types less than PERF_TYPE_MAX except PERF_TYPE_RAW. */
+ bool exclude_abi;
/** @last_topic: The last printed event topic. */
char *last_topic;
/** @last_metricgroups: The last printed metric group. */
@@ -76,30 +80,43 @@ static void default_print_start(void *ps)
static void default_print_end(void *print_state __maybe_unused) {}
+static const char *skip_spaces_or_commas(const char *str)
+{
+ while (isspace(*str) || *str == ',')
+ ++str;
+ return str;
+}
+
static void wordwrap(FILE *fp, const char *s, int start, int max, int corr)
{
int column = start;
int n;
bool saw_newline = false;
+ bool comma = false;
while (*s) {
- int wlen = strcspn(s, " \t\n");
+ int wlen = strcspn(s, " ,\t\n");
+ const char *sep = comma ? "," : " ";
if ((column + wlen >= max && column > start) || saw_newline) {
- fprintf(fp, "\n%*s", start, "");
+ fprintf(fp, comma ? ",\n%*s" : "\n%*s", start, "");
column = start + corr;
}
- n = fprintf(fp, "%s%.*s", column > start ? " " : "", wlen, s);
+ if (column <= start)
+ sep = "";
+ n = fprintf(fp, "%s%.*s", sep, wlen, s);
if (n <= 0)
break;
saw_newline = s[wlen] == '\n';
s += wlen;
+ comma = s[0] == ',';
column += n;
- s = skip_spaces(s);
+ s = skip_spaces_or_commas(s);
}
}
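
wordwrap() now also breaks at commas: strcspn() stops a token at ',', the separator emitted before the next token becomes a comma when the previous token ended with one, and skip_spaces_or_commas() swallows the delimiter run. A cut-down standalone sketch of just that tokenizing/separator logic, without the column bookkeeping (spacing after the comma is normalized here for readability):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

static const char *skip_spaces_or_commas(const char *s)
{
	while (isspace((unsigned char)*s) || *s == ',')
		s++;
	return s;
}

/* Re-emit a comma/space separated list in one pass, no wrapping. */
static void dump_tokens(const char *s)
{
	int comma = 0, first = 1;

	while (*s) {
		int wlen = strcspn(s, " ,\t\n");
		const char *sep = first ? "" : (comma ? ", " : " ");

		printf("%s%.*s", sep, wlen, s);
		first = 0;
		comma = s[wlen] == ',';
		s += wlen;
		s = skip_spaces_or_commas(s);
	}
	putchar('\n');
}

int main(void)
{
	dump_tokens("TopdownL1, TopdownL2,Default");	/* TopdownL1, TopdownL2, Default */
	return 0;
}
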
-static void default_print_event(void *ps, const char *pmu_name, const char *topic,
+static void default_print_event(void *ps, const char *topic,
+ const char *pmu_name, u32 pmu_type,
const char *event_name, const char *event_alias,
const char *scale_unit __maybe_unused,
bool deprecated, const char *event_type_desc,
@@ -116,6 +133,9 @@ static void default_print_event(void *ps, const char *pmu_name, const char *topi
if (print_state->pmu_glob && pmu_name && !strglobmatch(pmu_name, print_state->pmu_glob))
return;
+ if (print_state->exclude_abi && pmu_type < PERF_TYPE_MAX && pmu_type != PERF_TYPE_RAW)
+ return;
+
if (print_state->event_glob &&
(!event_name || !strglobmatch(event_name, print_state->event_glob)) &&
(!event_alias || !strglobmatch(event_alias, print_state->event_glob)) &&
@@ -149,14 +169,17 @@ static void default_print_event(void *ps, const char *pmu_name, const char *topi
} else
fputc('\n', fp);
- if (desc && print_state->desc) {
+ if (long_desc && print_state->long_desc)
+ desc = long_desc;
+
+ if (desc && (print_state->desc || print_state->long_desc)) {
char *desc_with_unit = NULL;
int desc_len = -1;
if (pmu_name && strcmp(pmu_name, "default_core")) {
desc_len = strlen(desc);
desc_len = asprintf(&desc_with_unit,
- desc[desc_len - 1] != '.'
+ desc_len > 0 && desc[desc_len - 1] != '.'
? "%s. Unit: %s" : "%s Unit: %s",
desc, pmu_name);
}
@@ -165,12 +188,6 @@ static void default_print_event(void *ps, const char *pmu_name, const char *topi
fprintf(fp, "]\n");
free(desc_with_unit);
}
- long_desc = long_desc ?: desc;
- if (long_desc && print_state->long_desc) {
- fprintf(fp, "%*s", 8, "[");
- wordwrap(fp, long_desc, 8, pager_get_columns(), 0);
- fprintf(fp, "]\n");
- }
if (print_state->detailed && encoding_desc) {
fprintf(fp, "%*s", 8, "");
@@ -186,7 +203,8 @@ static void default_print_metric(void *ps,
const char *long_desc,
const char *expr,
const char *threshold,
- const char *unit __maybe_unused)
+ const char *unit __maybe_unused,
+ const char *pmu_name __maybe_unused)
{
struct print_state *print_state = ps;
FILE *fp = print_state->fp;
@@ -208,17 +226,24 @@ static void default_print_metric(void *ps,
if (!print_state->last_metricgroups ||
strcmp(print_state->last_metricgroups, group ?: "")) {
if (group && print_state->metricgroups) {
- if (print_state->name_only)
+ if (print_state->name_only) {
fprintf(fp, "%s ", group);
- else if (print_state->metrics) {
- const char *gdesc = describe_metricgroup(group);
+ } else {
+ const char *gdesc = print_state->desc
+ ? describe_metricgroup(group)
+ : NULL;
+ const char *print_colon = "";
+
+ if (print_state->metrics) {
+ print_colon = ":";
+ fputc('\n', fp);
+ }
if (gdesc)
- fprintf(fp, "\n%s: [%s]\n", group, gdesc);
+ fprintf(fp, "%s%s [%s]\n", group, print_colon, gdesc);
else
- fprintf(fp, "\n%s:\n", group);
- } else
- fprintf(fp, "%s\n", group);
+ fprintf(fp, "%s%s\n", group, print_colon);
+ }
}
zfree(&print_state->last_metricgroups);
print_state->last_metricgroups = strdup(group ?: "");
@@ -236,15 +261,14 @@ static void default_print_metric(void *ps,
}
fprintf(fp, " %s\n", name);
- if (desc && print_state->desc) {
- fprintf(fp, "%*s", 8, "[");
- wordwrap(fp, desc, 8, pager_get_columns(), 0);
- fprintf(fp, "]\n");
- }
if (long_desc && print_state->long_desc) {
fprintf(fp, "%*s", 8, "[");
wordwrap(fp, long_desc, 8, pager_get_columns(), 0);
fprintf(fp, "]\n");
+ } else if (desc && print_state->desc) {
+ fprintf(fp, "%*s", 8, "[");
+ wordwrap(fp, desc, 8, pager_get_columns(), 0);
+ fprintf(fp, "]\n");
}
if (expr && print_state->detailed) {
fprintf(fp, "%*s", 8, "[");
@@ -306,6 +330,9 @@ static void fix_escape_fprintf(FILE *fp, struct strbuf *buf, const char *fmt, ..
case '\n':
strbuf_addstr(buf, "\\n");
break;
+ case '\r':
+ strbuf_addstr(buf, "\\r");
+ break;
case '\\':
fallthrough;
case '\"':
@@ -333,7 +360,8 @@ static void fix_escape_fprintf(FILE *fp, struct strbuf *buf, const char *fmt, ..
fputs(buf->buf, fp);
}
-static void json_print_event(void *ps, const char *pmu_name, const char *topic,
+static void json_print_event(void *ps, const char *topic,
+ const char *pmu_name, u32 pmu_type __maybe_unused,
const char *event_name, const char *event_alias,
const char *scale_unit,
bool deprecated, const char *event_type_desc,
@@ -413,7 +441,8 @@ static void json_print_event(void *ps, const char *pmu_name, const char *topic,
static void json_print_metric(void *ps __maybe_unused, const char *group,
const char *name, const char *desc,
const char *long_desc, const char *expr,
- const char *threshold, const char *unit)
+ const char *threshold, const char *unit,
+ const char *pmu_name)
{
struct json_print_state *print_state = ps;
bool need_sep = false;
@@ -463,6 +492,12 @@ static void json_print_metric(void *ps __maybe_unused, const char *group,
long_desc);
need_sep = true;
}
+ if (pmu_name) {
+ fix_escape_fprintf(fp, &buf, "%s\t\"Unit\": \"%S\"",
+ need_sep ? ",\n" : "",
+ pmu_name);
+ need_sep = true;
+ }
fprintf(fp, "%s}", need_sep ? "\n" : "");
strbuf_release(&buf);
}
@@ -484,6 +519,7 @@ int cmd_list(int argc, const char **argv)
int i, ret = 0;
struct print_state default_ps = {
.fp = stdout,
+ .desc = true,
};
struct print_state json_ps = {
.fp = stdout,
@@ -506,7 +542,7 @@ int cmd_list(int argc, const char **argv)
OPT_BOOLEAN('d', "desc", &default_ps.desc,
"Print extra event descriptions. --no-desc to not print."),
OPT_BOOLEAN('v', "long-desc", &default_ps.long_desc,
- "Print longer event descriptions."),
+ "Print longer event descriptions and all similar PMUs with alphanumeric suffixes."),
OPT_BOOLEAN(0, "details", &default_ps.detailed,
"Print information on the perf event names and expressions used internally by events."),
OPT_STRING('o', "output", &output_path, "file", "output file name"),
@@ -556,7 +592,6 @@ int cmd_list(int argc, const char **argv)
};
ps = &json_ps;
} else {
- default_ps.desc = !default_ps.long_desc;
default_ps.last_topic = strdup("");
assert(default_ps.last_topic);
default_ps.visited_metrics = strlist__new(NULL, NULL);
@@ -586,23 +621,44 @@ int cmd_list(int argc, const char **argv)
for (i = 0; i < argc; ++i) {
char *sep, *s;
- if (strcmp(argv[i], "tracepoint") == 0)
- print_tracepoint_events(&print_cb, ps);
- else if (strcmp(argv[i], "hw") == 0 ||
+ if (strcmp(argv[i], "tracepoint") == 0) {
+ char *old_pmu_glob = default_ps.pmu_glob;
+
+ default_ps.pmu_glob = strdup("tracepoint");
+ if (!default_ps.pmu_glob) {
+ ret = -1;
+ goto out;
+ }
+ perf_pmus__print_pmu_events(&print_cb, ps);
+ zfree(&default_ps.pmu_glob);
+ default_ps.pmu_glob = old_pmu_glob;
+ } else if (strcmp(argv[i], "hw") == 0 ||
strcmp(argv[i], "hardware") == 0)
print_symbol_events(&print_cb, ps, PERF_TYPE_HARDWARE,
event_symbols_hw, PERF_COUNT_HW_MAX);
else if (strcmp(argv[i], "sw") == 0 ||
strcmp(argv[i], "software") == 0) {
- print_symbol_events(&print_cb, ps, PERF_TYPE_SOFTWARE,
- event_symbols_sw, PERF_COUNT_SW_MAX);
- print_tool_events(&print_cb, ps);
+ char *old_pmu_glob = default_ps.pmu_glob;
+ static const char * const sw_globs[] = { "software", "tool" };
+
+ for (size_t j = 0; j < ARRAY_SIZE(sw_globs); j++) {
+ default_ps.pmu_glob = strdup(sw_globs[j]);
+ if (!default_ps.pmu_glob) {
+ ret = -1;
+ goto out;
+ }
+ perf_pmus__print_pmu_events(&print_cb, ps);
+ zfree(&default_ps.pmu_glob);
+ }
+ default_ps.pmu_glob = old_pmu_glob;
} else if (strcmp(argv[i], "cache") == 0 ||
strcmp(argv[i], "hwcache") == 0)
print_hwcache_events(&print_cb, ps);
- else if (strcmp(argv[i], "pmu") == 0)
+ else if (strcmp(argv[i], "pmu") == 0) {
+ default_ps.exclude_abi = true;
perf_pmus__print_pmu_events(&print_cb, ps);
- else if (strcmp(argv[i], "sdt") == 0)
+ default_ps.exclude_abi = false;
+ } else if (strcmp(argv[i], "sdt") == 0)
print_sdt_events(&print_cb, ps);
else if (strcmp(argv[i], "metric") == 0 || strcmp(argv[i], "metrics") == 0) {
default_ps.metricgroups = false;
@@ -620,6 +676,7 @@ int cmd_list(int argc, const char **argv)
#endif
else if ((sep = strchr(argv[i], ':')) != NULL) {
char *old_pmu_glob = default_ps.pmu_glob;
+ char *old_event_glob = default_ps.event_glob;
default_ps.event_glob = strdup(argv[i]);
if (!default_ps.event_glob) {
@@ -627,13 +684,21 @@ int cmd_list(int argc, const char **argv)
goto out;
}
- print_tracepoint_events(&print_cb, ps);
+ default_ps.pmu_glob = strdup("tracepoint");
+ if (!default_ps.pmu_glob) {
+ zfree(&default_ps.event_glob);
+ ret = -1;
+ goto out;
+ }
+ perf_pmus__print_pmu_events(&print_cb, ps);
+ zfree(&default_ps.pmu_glob);
+ default_ps.pmu_glob = old_pmu_glob;
print_sdt_events(&print_cb, ps);
default_ps.metrics = true;
default_ps.metricgroups = true;
metricgroup__print(&print_cb, ps);
zfree(&default_ps.event_glob);
- default_ps.pmu_glob = old_pmu_glob;
+ default_ps.event_glob = old_event_glob;
} else {
if (asprintf(&s, "*%s*", argv[i]) < 0) {
printf("Critical: Not enough memory! Trying to continue...\n");
@@ -642,12 +707,8 @@ int cmd_list(int argc, const char **argv)
default_ps.event_glob = s;
print_symbol_events(&print_cb, ps, PERF_TYPE_HARDWARE,
event_symbols_hw, PERF_COUNT_HW_MAX);
- print_symbol_events(&print_cb, ps, PERF_TYPE_SOFTWARE,
- event_symbols_sw, PERF_COUNT_SW_MAX);
- print_tool_events(&print_cb, ps);
print_hwcache_events(&print_cb, ps);
perf_pmus__print_pmu_events(&print_cb, ps);
- print_tracepoint_events(&print_cb, ps);
print_sdt_events(&print_cb, ps);
default_ps.metrics = true;
default_ps.metricgroups = true;
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index 230461280e45..078634461df2 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -46,15 +46,6 @@
static struct perf_session *session;
static struct target target;
-/* based on kernel/lockdep.c */
-#define LOCKHASH_BITS 12
-#define LOCKHASH_SIZE (1UL << LOCKHASH_BITS)
-
-static struct hlist_head *lockhash_table;
-
-#define __lockhashfn(key) hash_long((unsigned long)key, LOCKHASH_BITS)
-#define lockhashentry(key) (lockhash_table + __lockhashfn((key)))
-
static struct rb_root thread_stats;
static bool combine_locks;
@@ -67,24 +58,15 @@ static unsigned long bpf_map_entries = MAX_ENTRIES;
static int max_stack_depth = CONTENTION_STACK_DEPTH;
static int stack_skip = CONTENTION_STACK_SKIP;
static int print_nr_entries = INT_MAX / 2;
-static LIST_HEAD(callstack_filters);
static const char *output_name = NULL;
static FILE *lock_output;
-struct callstack_filter {
- struct list_head list;
- char name[];
-};
-
static struct lock_filter filters;
+static struct lock_delay *delays;
+static int nr_delays;
static enum lock_aggr_mode aggr_mode = LOCK_AGGR_ADDR;
-static bool needs_callstack(void)
-{
- return !list_empty(&callstack_filters);
-}
-
static struct thread_stat *thread_stat_find(u32 tid)
{
struct rb_node *node;
@@ -438,16 +420,13 @@ static void combine_lock_stats(struct lock_stat *st)
rb_insert_color(&st->rb, &sorted);
}
-static void insert_to_result(struct lock_stat *st,
- int (*bigger)(struct lock_stat *, struct lock_stat *))
+static void insert_to(struct rb_root *rr, struct lock_stat *st,
+ int (*bigger)(struct lock_stat *, struct lock_stat *))
{
- struct rb_node **rb = &result.rb_node;
+ struct rb_node **rb = &rr->rb_node;
struct rb_node *parent = NULL;
struct lock_stat *p;
- if (combine_locks && st->combined)
- return;
-
while (*rb) {
p = container_of(*rb, struct lock_stat, rb);
parent = *rb;
@@ -459,13 +438,21 @@ static void insert_to_result(struct lock_stat *st,
}
rb_link_node(&st->rb, parent, rb);
- rb_insert_color(&st->rb, &result);
+ rb_insert_color(&st->rb, rr);
}
-/* returns left most element of result, and erase it */
-static struct lock_stat *pop_from_result(void)
+static inline void insert_to_result(struct lock_stat *st,
+ int (*bigger)(struct lock_stat *,
+ struct lock_stat *))
{
- struct rb_node *node = result.rb_node;
+ if (combine_locks && st->combined)
+ return;
+ insert_to(&result, st, bigger);
+}
+
+static inline struct lock_stat *pop_from(struct rb_root *rr)
+{
+ struct rb_node *node = rr->rb_node;
if (!node)
return NULL;
@@ -473,95 +460,15 @@ static struct lock_stat *pop_from_result(void)
while (node->rb_left)
node = node->rb_left;
- rb_erase(node, &result);
+ rb_erase(node, rr);
return container_of(node, struct lock_stat, rb);
-}
-
-struct lock_stat *lock_stat_find(u64 addr)
-{
- struct hlist_head *entry = lockhashentry(addr);
- struct lock_stat *ret;
-
- hlist_for_each_entry(ret, entry, hash_entry) {
- if (ret->addr == addr)
- return ret;
- }
- return NULL;
-}
-struct lock_stat *lock_stat_findnew(u64 addr, const char *name, int flags)
-{
- struct hlist_head *entry = lockhashentry(addr);
- struct lock_stat *ret, *new;
-
- hlist_for_each_entry(ret, entry, hash_entry) {
- if (ret->addr == addr)
- return ret;
- }
-
- new = zalloc(sizeof(struct lock_stat));
- if (!new)
- goto alloc_failed;
-
- new->addr = addr;
- new->name = strdup(name);
- if (!new->name) {
- free(new);
- goto alloc_failed;
- }
-
- new->flags = flags;
- new->wait_time_min = ULLONG_MAX;
-
- hlist_add_head(&new->hash_entry, entry);
- return new;
-
-alloc_failed:
- pr_err("memory allocation failed\n");
- return NULL;
}
-bool match_callstack_filter(struct machine *machine, u64 *callstack)
+/* returns left most element of result, and erase it */
+static struct lock_stat *pop_from_result(void)
{
- struct map *kmap;
- struct symbol *sym;
- u64 ip;
- const char *arch = perf_env__arch(machine->env);
-
- if (list_empty(&callstack_filters))
- return true;
-
- for (int i = 0; i < max_stack_depth; i++) {
- struct callstack_filter *filter;
-
- /*
- * In powerpc, the callchain saved by kernel always includes
- * first three entries as the NIP (next instruction pointer),
- * LR (link register), and the contents of LR save area in the
- * second stack frame. In certain scenarios its possible to have
- * invalid kernel instruction addresses in either LR or the second
- * stack frame's LR. In that case, kernel will store that address as
- * zero.
- *
- * The below check will continue to look into callstack,
- * incase first or second callstack index entry has 0
- * address for powerpc.
- */
- if (!callstack || (!callstack[i] && (strcmp(arch, "powerpc") ||
- (i != 1 && i != 2))))
- break;
-
- ip = callstack[i];
- sym = machine__find_kernel_symbol(machine, ip, &kmap);
- if (sym == NULL)
- continue;
-
- list_for_each_entry(filter, &callstack_filters, list) {
- if (strstr(sym->name, filter->name))
- return true;
- }
- }
- return false;
+ return pop_from(&result);
}
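
Parameterising the root lets the same helpers back both the global result tree and the temporary tree used for the owner stack traces further down (insert everything, then pop in sorted order). A tiny standalone sketch of that insert/pop-max usage pattern, with a sorted singly linked list standing in for the rb_root:

#include <stdio.h>

struct stat_node {
	unsigned long wait_time;
	struct stat_node *next;
};

/* insert_to(): keep the list ordered by the comparison function. */
static void insert_to(struct stat_node **root, struct stat_node *st,
		      int (*bigger)(struct stat_node *, struct stat_node *))
{
	while (*root && bigger(*root, st))
		root = &(*root)->next;
	st->next = *root;
	*root = st;
}

/* pop_from(): detach and return the current biggest element. */
static struct stat_node *pop_from(struct stat_node **root)
{
	struct stat_node *st = *root;

	if (st)
		*root = st->next;
	return st;
}

static int bigger_wait(struct stat_node *a, struct stat_node *b)
{
	return a->wait_time > b->wait_time;
}

int main(void)
{
	struct stat_node a = { 30 }, b = { 10 }, c = { 20 };
	struct stat_node *root = NULL, *st;

	insert_to(&root, &a, bigger_wait);
	insert_to(&root, &b, bigger_wait);
	insert_to(&root, &c, bigger_wait);
	while ((st = pop_from(&root)))
		printf("%lu\n", st->wait_time);	/* 30 20 10 */
	return 0;
}
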
struct trace_lock_handler {
@@ -1165,7 +1072,7 @@ static int report_lock_contention_begin_event(struct evsel *evsel,
if (callstack == NULL)
return -ENOMEM;
- if (!match_callstack_filter(machine, callstack)) {
+ if (!match_callstack_filter(machine, callstack, max_stack_depth)) {
free(callstack);
return 0;
}
@@ -1477,20 +1384,16 @@ static void dump_map(void)
fprintf(lock_output, " %#llx: %s\n", (unsigned long long)st->addr, st->name);
}
-static int dump_info(void)
+static void dump_info(void)
{
- int rc = 0;
-
if (info_threads)
dump_threads();
- else if (info_map)
+
+ if (info_map) {
+ if (info_threads)
+ fputc('\n', lock_output);
dump_map();
- else {
- rc = -1;
- pr_err("Unknown type of information\n");
}
-
- return rc;
}
static const struct evsel_str_handler lock_tracepoints[] = {
@@ -1505,7 +1408,7 @@ static const struct evsel_str_handler contention_tracepoints[] = {
{ "lock:contention_end", evsel__process_contention_end, },
};
-static int process_event_update(struct perf_tool *tool,
+static int process_event_update(const struct perf_tool *tool,
union perf_event *event,
struct evlist **pevlist)
{
@@ -1524,7 +1427,7 @@ static int process_event_update(struct perf_tool *tool,
typedef int (*tracepoint_handler)(struct evsel *evsel,
struct perf_sample *sample);
-static int process_sample_event(struct perf_tool *tool __maybe_unused,
+static int process_sample_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -1579,8 +1482,13 @@ static void sort_result(void)
static const struct {
unsigned int flags;
- const char *str;
- const char *name;
+ /*
+ * Name of the lock flags (access), with delimiter ':'.
+ * For example, rwsem:R or rwsem:W.
+ */
+ const char *flags_name;
+ /* Name of the lock (type), for example, rwlock or rwsem. */
+ const char *lock_name;
} lock_type_table[] = {
{ 0, "semaphore", "semaphore" },
{ LCB_F_SPIN, "spinlock", "spinlock" },
@@ -1595,45 +1503,32 @@ static const struct {
{ LCB_F_PERCPU | LCB_F_WRITE, "pcpu-sem:W", "percpu-rwsem" },
{ LCB_F_MUTEX, "mutex", "mutex" },
{ LCB_F_MUTEX | LCB_F_SPIN, "mutex", "mutex" },
- /* alias for get_type_flag() */
- { LCB_F_MUTEX | LCB_F_SPIN, "mutex-spin", "mutex" },
+ /* alias for optimistic spinning only */
+ { LCB_F_MUTEX | LCB_F_SPIN, "mutex:spin", "mutex-spin" },
};
-static const char *get_type_str(unsigned int flags)
+static const char *get_type_flags_name(unsigned int flags)
{
- flags &= LCB_F_MAX_FLAGS - 1;
+ flags &= LCB_F_TYPE_MASK;
for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
if (lock_type_table[i].flags == flags)
- return lock_type_table[i].str;
+ return lock_type_table[i].flags_name;
}
return "unknown";
}
-static const char *get_type_name(unsigned int flags)
+static const char *get_type_lock_name(unsigned int flags)
{
- flags &= LCB_F_MAX_FLAGS - 1;
+ flags &= LCB_F_TYPE_MASK;
for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
if (lock_type_table[i].flags == flags)
- return lock_type_table[i].name;
+ return lock_type_table[i].lock_name;
}
return "unknown";
}
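
Both lookups mask the flags down to the type bits and linearly scan lock_type_table, returning either the access-qualified flags_name or the bare lock_name. A condensed standalone sketch of that masked table lookup (the mask value and table entries are illustrative, not the real LCB_F_* encoding):

#include <stdio.h>

#define TYPE_MASK	0x0f	/* stand-in for LCB_F_TYPE_MASK */

static const struct {
	unsigned int flags;
	const char *flags_name;	/* access-qualified, e.g. "rwsem:R" */
	const char *lock_name;	/* bare type, e.g. "rwsem" */
} table[] = {
	{ 0x1, "spinlock", "spinlock" },
	{ 0x2, "rwsem:R",  "rwsem" },
};

static const char *lookup_flags_name(unsigned int flags)
{
	flags &= TYPE_MASK;	/* ignore bits outside the type field */
	for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		if (table[i].flags == flags)
			return table[i].flags_name;
	}
	return "unknown";
}

int main(void)
{
	printf("%s\n", lookup_flags_name(0x2 | 0x40));	/* rwsem:R */
	return 0;
}
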
-static unsigned int get_type_flag(const char *str)
-{
- for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
- if (!strcmp(lock_type_table[i].name, str))
- return lock_type_table[i].flags;
- }
- for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
- if (!strcmp(lock_type_table[i].str, str))
- return lock_type_table[i].flags;
- }
- return UINT_MAX;
-}
-
static void lock_filter_finish(void)
{
zfree(&filters.types);
@@ -1650,6 +1545,12 @@ static void lock_filter_finish(void)
zfree(&filters.cgrps);
filters.nr_cgrps = 0;
+
+ for (int i = 0; i < filters.nr_slabs; i++)
+ free(filters.slabs[i]);
+
+ zfree(&filters.slabs);
+ filters.nr_slabs = 0;
}
static void sort_contention_result(void)
@@ -1736,7 +1637,7 @@ static void print_lock_stat_stdio(struct lock_contention *con, struct lock_stat
switch (aggr_mode) {
case LOCK_AGGR_CALLER:
- fprintf(lock_output, " %10s %s\n", get_type_str(st->flags), st->name);
+ fprintf(lock_output, " %10s %s\n", get_type_flags_name(st->flags), st->name);
break;
case LOCK_AGGR_TASK:
pid = st->addr;
@@ -1746,7 +1647,7 @@ static void print_lock_stat_stdio(struct lock_contention *con, struct lock_stat
break;
case LOCK_AGGR_ADDR:
fprintf(lock_output, " %016llx %s (%s)\n", (unsigned long long)st->addr,
- st->name, get_type_name(st->flags));
+ st->name, get_type_lock_name(st->flags));
break;
case LOCK_AGGR_CGROUP:
fprintf(lock_output, " %s\n", st->name);
@@ -1787,7 +1688,7 @@ static void print_lock_stat_csv(struct lock_contention *con, struct lock_stat *s
switch (aggr_mode) {
case LOCK_AGGR_CALLER:
- fprintf(lock_output, "%s%s %s", get_type_str(st->flags), sep, st->name);
+ fprintf(lock_output, "%s%s %s", get_type_flags_name(st->flags), sep, st->name);
if (verbose <= 0)
fprintf(lock_output, "\n");
break;
@@ -1799,7 +1700,7 @@ static void print_lock_stat_csv(struct lock_contention *con, struct lock_stat *s
break;
case LOCK_AGGR_ADDR:
fprintf(lock_output, "%llx%s %s%s %s\n", (unsigned long long)st->addr, sep,
- st->name, sep, get_type_name(st->flags));
+ st->name, sep, get_type_lock_name(st->flags));
break;
case LOCK_AGGR_CGROUP:
fprintf(lock_output, "%s\n",st->name);
@@ -1918,6 +1819,22 @@ static void print_contention_result(struct lock_contention *con)
break;
}
+ if (con->owner && con->save_callstack && verbose > 0) {
+ struct rb_root root = RB_ROOT;
+
+ if (symbol_conf.field_sep)
+ fprintf(lock_output, "# owner stack trace:\n");
+ else
+ fprintf(lock_output, "\n=== owner stack trace ===\n\n");
+ while ((st = pop_owner_stack_trace(con)))
+ insert_to(&root, st, compare);
+
+ while ((st = pop_from(&root))) {
+ print_lock_stat(con, st);
+ free(st);
+ }
+ }
+
if (print_nr_entries) {
/* update the total/bad stats */
while ((st = pop_from_result())) {
@@ -1937,22 +1854,21 @@ static bool force;
static int __cmd_report(bool display_info)
{
int err = -EINVAL;
- struct perf_tool eops = {
- .attr = perf_event__process_attr,
- .event_update = process_event_update,
- .sample = process_sample_event,
- .comm = perf_event__process_comm,
- .mmap = perf_event__process_mmap,
- .namespaces = perf_event__process_namespaces,
- .tracing_data = perf_event__process_tracing_data,
- .ordered_events = true,
- };
+ struct perf_tool eops;
struct perf_data data = {
.path = input_name,
.mode = PERF_DATA_MODE_READ,
.force = force,
};
+ perf_tool__init(&eops, /*ordered_events=*/true);
+ eops.attr = perf_event__process_attr;
+ eops.event_update = process_event_update;
+ eops.sample = process_sample_event;
+ eops.comm = perf_event__process_comm;
+ eops.mmap = perf_event__process_mmap;
+ eops.namespaces = perf_event__process_namespaces;
+ eops.tracing_data = perf_event__process_tracing_data;
session = perf_session__new(&data, &eops);
if (IS_ERR(session)) {
pr_err("Initializing perf session failed\n");
@@ -1960,7 +1876,7 @@ static int __cmd_report(bool display_info)
}
symbol_conf.allow_aliases = true;
- symbol__init(&session->header.env);
+ symbol__init(perf_session__env(session));
if (!data.is_pipe) {
if (!perf_session__has_traces(session, "lock record"))
@@ -1992,7 +1908,7 @@ static int __cmd_report(bool display_info)
setup_pager();
if (display_info) /* used for info subcommand */
- err = dump_info();
+ dump_info();
else {
combine_result();
sort_result();
@@ -2064,8 +1980,10 @@ static int check_lock_contention_options(const struct option *options,
}
}
- if (show_lock_owner)
- show_thread_stats = true;
+ if (show_lock_owner && !show_thread_stats) {
+ pr_warning("Now -o try to show owner's callstack instead of pid and comm.\n");
+ pr_warning("Please use -t option too to keep the old behavior.\n");
+ }
return 0;
}
@@ -2073,15 +1991,7 @@ static int check_lock_contention_options(const struct option *options,
static int __cmd_contention(int argc, const char **argv)
{
int err = -EINVAL;
- struct perf_tool eops = {
- .attr = perf_event__process_attr,
- .event_update = process_event_update,
- .sample = process_sample_event,
- .comm = perf_event__process_comm,
- .mmap = perf_event__process_mmap,
- .tracing_data = perf_event__process_tracing_data,
- .ordered_events = true,
- };
+ struct perf_tool eops;
struct perf_data data = {
.path = input_name,
.mode = PERF_DATA_MODE_READ,
@@ -2093,10 +2003,13 @@ static int __cmd_contention(int argc, const char **argv)
.max_stack = max_stack_depth,
.stack_skip = stack_skip,
.filters = &filters,
+ .delays = delays,
+ .nr_delays = nr_delays,
.save_callstack = needs_callstack(),
.owner = show_lock_owner,
.cgroups = RB_ROOT,
};
+ struct perf_env host_env;
lockhash_table = calloc(LOCKHASH_SIZE, sizeof(*lockhash_table));
if (!lockhash_table)
@@ -2104,7 +2017,18 @@ static int __cmd_contention(int argc, const char **argv)
con.result = &lockhash_table[0];
- session = perf_session__new(use_bpf ? NULL : &data, &eops);
+ perf_tool__init(&eops, /*ordered_events=*/true);
+ eops.attr = perf_event__process_attr;
+ eops.event_update = process_event_update;
+ eops.sample = process_sample_event;
+ eops.comm = perf_event__process_comm;
+ eops.mmap = perf_event__process_mmap;
+ eops.tracing_data = perf_event__process_tracing_data;
+
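+ /*
+ * In BPF mode there is no data file, so the session is built around a
+ * locally initialized host perf_env (released with perf_env__exit() in
+ * the cleanup path).
+ */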
+ perf_env__init(&host_env);
+ session = __perf_session__new(use_bpf ? NULL : &data, &eops,
+ /*trace_event_repipe=*/false, &host_env);
+
if (IS_ERR(session)) {
pr_err("Initializing perf session failed\n");
err = PTR_ERR(session);
@@ -2122,7 +2046,7 @@ static int __cmd_contention(int argc, const char **argv)
con.save_callstack = true;
symbol_conf.allow_aliases = true;
- symbol__init(&session->header.env);
+ symbol__init(perf_session__env(session));
if (use_bpf) {
err = target__validate(&target);
@@ -2155,7 +2079,8 @@ static int __cmd_contention(int argc, const char **argv)
goto out_delete;
}
- if (lock_contention_prepare(&con) < 0) {
+ err = lock_contention_prepare(&con);
+ if (err < 0) {
pr_err("lock contention BPF setup failed\n");
goto out_delete;
}
@@ -2176,10 +2101,14 @@ static int __cmd_contention(int argc, const char **argv)
}
}
- if (setup_output_field(true, output_fields))
+ err = setup_output_field(true, output_fields);
+ if (err) {
+ pr_err("Failed to setup output field\n");
goto out_delete;
+ }
- if (select_key(true))
+ err = select_key(true);
+ if (err)
goto out_delete;
if (symbol_conf.field_sep) {
@@ -2217,6 +2146,7 @@ out_delete:
evlist__delete(con.evlist);
lock_contention_finish(&con);
perf_session__delete(session);
+ perf_env__exit(&host_env);
zfree(&lockhash_table);
return err;
}
@@ -2275,23 +2205,13 @@ setup_args:
return -ENOMEM;
for (i = 0; i < ARRAY_SIZE(record_args); i++)
- rec_argv[i] = strdup(record_args[i]);
+ rec_argv[i] = record_args[i];
for (j = 0; j < nr_tracepoints; j++) {
- const char *ev_name;
-
- if (has_lock_stat)
- ev_name = strdup(lock_tracepoints[j].name);
- else
- ev_name = strdup(contention_tracepoints[j].name);
-
- if (!ev_name) {
- free(rec_argv);
- return -ENOMEM;
- }
-
rec_argv[i++] = "-e";
- rec_argv[i++] = ev_name;
+ rec_argv[i++] = has_lock_stat
+ ? lock_tracepoints[j].name
+ : contention_tracepoints[j].name;
}
for (j = 0; j < nr_callgraph_args; j++, i++)
@@ -2365,29 +2285,61 @@ static int parse_lock_type(const struct option *opt __maybe_unused, const char *
int unset __maybe_unused)
{
char *s, *tmp, *tok;
- int ret = 0;
s = strdup(str);
if (s == NULL)
return -1;
for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) {
- unsigned int flags = get_type_flag(tok);
+ bool found = false;
- if (flags == -1U) {
- pr_err("Unknown lock flags: %s\n", tok);
- ret = -1;
- break;
+ /* `tok` is a flags name if it contains ':'. */
+ if (strchr(tok, ':')) {
+ for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
+ if (!strcmp(lock_type_table[i].flags_name, tok) &&
+ add_lock_type(lock_type_table[i].flags)) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found) {
+ pr_err("Unknown lock flags name: %s\n", tok);
+ free(s);
+ return -1;
+ }
+
+ continue;
}
- if (!add_lock_type(flags)) {
- ret = -1;
- break;
+ /*
+ * Otherwise `tok` is a lock name.
+ * A single lock name can map to multiple flags.
+ * Replace the alias `pcpu-sem` with the actual name `percpu-rwsem`.
+ */
+ if (!strcmp(tok, "pcpu-sem"))
+ tok = (char *)"percpu-rwsem";
+ for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
+ if (!strcmp(lock_type_table[i].lock_name, tok)) {
+ if (add_lock_type(lock_type_table[i].flags)) {
+ found = true;
+ } else {
+ free(s);
+ return -1;
+ }
+ }
}
+
+ if (!found) {
+ pr_err("Unknown lock name: %s\n", tok);
+ free(s);
+ return -1;
+ }
}
free(s);
- return ret;
+ return 0;
}
static bool add_lock_addr(unsigned long addr)
@@ -2427,6 +2379,27 @@ static bool add_lock_sym(char *name)
return true;
}
+static bool add_lock_slab(char *name)
+{
+ char **tmp;
+ char *sym = strdup(name);
+
+ if (sym == NULL) {
+ pr_err("Memory allocation failure\n");
+ return false;
+ }
+
+ tmp = realloc(filters.slabs, (filters.nr_slabs + 1) * sizeof(*filters.slabs));
+ if (tmp == NULL) {
+ pr_err("Memory allocation failure\n");
+ free(sym);
+ return false;
+ }
+
+ tmp[filters.nr_slabs++] = sym;
+ filters.slabs = tmp;
+ return true;
+}
+
static int parse_lock_addr(const struct option *opt __maybe_unused, const char *str,
int unset __maybe_unused)
{
@@ -2450,6 +2423,14 @@ static int parse_lock_addr(const struct option *opt __maybe_unused, const char *
continue;
}
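+ /* A '&' prefix selects a slab cache name filter; the name follows the '&'. */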
+ if (*tok == '&') {
+ if (!add_lock_slab(tok + 1)) {
+ ret = -1;
+ break;
+ }
+ continue;
+ }
+
/*
* At this moment, we don't have kernel symbols. Save the symbols
* in a separate list and resolve them to addresses later.
@@ -2464,34 +2445,6 @@ static int parse_lock_addr(const struct option *opt __maybe_unused, const char *
return ret;
}
-static int parse_call_stack(const struct option *opt __maybe_unused, const char *str,
- int unset __maybe_unused)
-{
- char *s, *tmp, *tok;
- int ret = 0;
-
- s = strdup(str);
- if (s == NULL)
- return -1;
-
- for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) {
- struct callstack_filter *entry;
-
- entry = malloc(sizeof(*entry) + strlen(tok) + 1);
- if (entry == NULL) {
- pr_err("Memory allocation failure\n");
- free(s);
- return -1;
- }
-
- strcpy(entry->name, tok);
- list_add_tail(&entry->list, &callstack_filters);
- }
-
- free(s);
- return ret;
-}
-
static int parse_output(const struct option *opt __maybe_unused, const char *str,
int unset __maybe_unused)
{
@@ -2560,6 +2513,79 @@ static int parse_cgroup_filter(const struct option *opt __maybe_unused, const ch
return ret;
}
+static bool add_lock_delay(char *spec)
+{
+ char *at, *pos;
+ struct lock_delay *tmp;
+ unsigned long duration;
+
+ at = strchr(spec, '@');
+ if (at == NULL) {
+ pr_err("lock delay should have '@' sign: %s\n", spec);
+ return false;
+ }
+ if (at == spec) {
+ pr_err("lock delay should have time before '@': %s\n", spec);
+ return false;
+ }
+
+ *at = '\0';
+ duration = strtoul(spec, &pos, 0);
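+ /* a bare number (or an "ns" suffix) is taken as nanoseconds */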
+ if (!strcmp(pos, "ns"))
+ duration *= 1;
+ else if (!strcmp(pos, "us"))
+ duration *= 1000;
+ else if (!strcmp(pos, "ms"))
+ duration *= 1000 * 1000;
+ else if (*pos) {
+ pr_err("invalid delay time: %s@%s\n", spec, at + 1);
+ return false;
+ }
+
+ if (duration > 10 * 1000 * 1000) {
+ pr_err("lock delay is too long: %s (> 10ms)\n", spec);
+ return false;
+ }
+
+ tmp = realloc(delays, (nr_delays + 1) * sizeof(*delays));
+ if (tmp == NULL) {
+ pr_err("Memory allocation failure\n");
+ return false;
+ }
+ delays = tmp;
+
+ delays[nr_delays].sym = strdup(at + 1);
+ if (delays[nr_delays].sym == NULL) {
+ pr_err("Memory allocation failure\n");
+ return false;
+ }
+ delays[nr_delays].time = duration;
+
+ nr_delays++;
+ return true;
+}
+
+static int parse_lock_delay(const struct option *opt __maybe_unused, const char *str,
+ int unset __maybe_unused)
+{
+ char *s, *tmp, *tok;
+ int ret = 0;
+
+ s = strdup(str);
+ if (s == NULL)
+ return -1;
+
+ for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) {
+ if (!add_lock_delay(tok)) {
+ ret = -1;
+ break;
+ }
+ }
+
+ free(s);
+ return ret;
+}
+
int cmd_lock(int argc, const char **argv)
{
const struct option lock_options[] = {
@@ -2578,9 +2604,9 @@ int cmd_lock(int argc, const char **argv)
const struct option info_options[] = {
OPT_BOOLEAN('t', "threads", &info_threads,
- "dump thread list in perf.data"),
+ "dump the thread list in perf.data"),
OPT_BOOLEAN('m', "map", &info_map,
- "map of lock instances (address:name table)"),
+ "dump the map of lock instances (address:name table)"),
OPT_PARENT(lock_options)
};
@@ -2636,6 +2662,8 @@ int cmd_lock(int argc, const char **argv)
OPT_BOOLEAN(0, "lock-cgroup", &show_lock_cgroups, "show lock stats by cgroup"),
OPT_CALLBACK('G', "cgroup-filter", NULL, "CGROUPS",
"Filter specific cgroups", parse_cgroup_filter),
+ OPT_CALLBACK('J', "inject-delay", NULL, "TIME@FUNC",
+ "Inject delays to specific locks", parse_lock_delay),
OPT_PARENT(lock_options)
};
@@ -2694,6 +2722,13 @@ int cmd_lock(int argc, const char **argv)
if (argc)
usage_with_options(info_usage, info_options);
}
+
+ /* If neither threads nor map requested, display both */
+ if (!info_threads && !info_map) {
+ info_threads = true;
+ info_map = true;
+ }
+
/* recycling report_lock_ops */
trace_handler = &report_lock_ops;
rc = __cmd_report(true);
@@ -2720,6 +2755,9 @@ int cmd_lock(int argc, const char **argv)
usage_with_options(lock_usage, lock_options);
}
+ /* free usage string allocated by parse_options_subcommand */
+ free((void *)lock_usage[0]);
+
zfree(&lockhash_table);
return rc;
}
diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
index 51499c20da01..c6496adff3fe 100644
--- a/tools/perf/builtin-mem.c
+++ b/tools/perf/builtin-mem.c
@@ -4,6 +4,7 @@
#include <sys/stat.h>
#include <unistd.h>
#include "builtin.h"
+#include "perf.h"
#include <subcmd/parse-options.h>
#include "util/auxtrace.h"
@@ -19,6 +20,7 @@
#include "util/symbol.h"
#include "util/pmus.h"
#include "util/sample.h"
+#include "util/sort.h"
#include "util/string2.h"
#include "util/util.h"
#include <linux/err.h>
@@ -28,12 +30,16 @@
struct perf_mem {
struct perf_tool tool;
- char const *input_name;
+ const char *input_name;
+ const char *sort_key;
bool hide_unresolved;
bool dump_raw;
bool force;
bool phys_addr;
bool data_page_size;
+ bool all_kernel;
+ bool all_user;
+ bool data_type;
int operation;
const char *cpu_list;
DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
@@ -42,58 +48,58 @@ struct perf_mem {
static int parse_record_events(const struct option *opt,
const char *str, int unset __maybe_unused)
{
- struct perf_mem *mem = *(struct perf_mem **)opt->value;
+ struct perf_mem *mem = (struct perf_mem *)opt->value;
+ struct perf_pmu *pmu;
+
+ pmu = perf_mem_events_find_pmu();
+ if (!pmu) {
+ pr_err("failed: there is no PMU that supports perf mem\n");
+ exit(-1);
+ }
if (!strcmp(str, "list")) {
- perf_mem_events__list();
+ perf_pmu__mem_events_list(pmu);
exit(0);
}
- if (perf_mem_events__parse(str))
+ if (perf_pmu__mem_events_parse(pmu, str))
exit(-1);
mem->operation = 0;
return 0;
}
-static const char * const __usage[] = {
- "perf mem record [<options>] [<command>]",
- "perf mem record [<options>] -- <command> [<options>]",
- NULL
-};
-
-static const char * const *record_mem_usage = __usage;
-
-static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
+static int __cmd_record(int argc, const char **argv, struct perf_mem *mem,
+ const struct option *options)
{
- int rec_argc, i = 0, j, tmp_nr = 0;
+ int rec_argc, i = 0, j;
int start, end;
const char **rec_argv;
- char **rec_tmp;
+ char *event_name_storage = NULL;
int ret;
- bool all_user = false, all_kernel = false;
struct perf_mem_event *e;
- struct option options[] = {
- OPT_CALLBACK('e', "event", &mem, "event",
- "event selector. use 'perf mem record -e list' to list available events",
- parse_record_events),
- OPT_UINTEGER(0, "ldlat", &perf_mem_events__loads_ldlat, "mem-loads latency"),
- OPT_INCR('v', "verbose", &verbose,
- "be more verbose (show counter open errors, etc)"),
- OPT_BOOLEAN('U', "all-user", &all_user, "collect only user level data"),
- OPT_BOOLEAN('K', "all-kernel", &all_kernel, "collect only kernel level data"),
- OPT_END()
+ struct perf_pmu *pmu;
+ const char * const record_usage[] = {
+ "perf mem record [<options>] [<command>]",
+ "perf mem record [<options>] -- <command> [<options>]",
+ NULL
};
- if (perf_mem_events__init()) {
+ pmu = perf_mem_events_find_pmu();
+ if (!pmu) {
+ pr_err("failed: no PMU supports the memory events\n");
+ return -1;
+ }
+
+ if (perf_pmu__mem_events_init()) {
pr_err("failed: memory events not supported\n");
return -1;
}
- argc = parse_options(argc, argv, options, record_mem_usage,
+ argc = parse_options(argc, argv, options, record_usage,
PARSE_OPT_KEEP_UNKNOWN);
/* Max number of arguments multiplied by number of PMUs that can support them. */
- rec_argc = argc + 9 * perf_pmus__num_mem_pmus();
+ rec_argc = argc + 9 * (perf_pmu__mem_events_num_mem_pmus(pmu) + 1);
if (mem->cpu_list)
rec_argc += 2;
@@ -102,18 +108,9 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
if (!rec_argv)
return -1;
- /*
- * Save the allocated event name strings.
- */
- rec_tmp = calloc(rec_argc + 1, sizeof(char *));
- if (!rec_tmp) {
- free(rec_argv);
- return -1;
- }
-
rec_argv[i++] = "record";
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__LOAD_STORE);
+ e = perf_pmu__mem_events_ptr(pmu, PERF_MEM_EVENTS__LOAD_STORE);
/*
* The load and store operations are required, use the event
@@ -122,22 +119,17 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
if (e->tag &&
(mem->operation & MEM_OPERATION_LOAD) &&
(mem->operation & MEM_OPERATION_STORE)) {
- e->record = true;
+ perf_mem_record[PERF_MEM_EVENTS__LOAD_STORE] = true;
rec_argv[i++] = "-W";
} else {
- if (mem->operation & MEM_OPERATION_LOAD) {
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__LOAD);
- e->record = true;
- }
+ if (mem->operation & MEM_OPERATION_LOAD)
+ perf_mem_record[PERF_MEM_EVENTS__LOAD] = true;
- if (mem->operation & MEM_OPERATION_STORE) {
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__STORE);
- e->record = true;
- }
+ if (mem->operation & MEM_OPERATION_STORE)
+ perf_mem_record[PERF_MEM_EVENTS__STORE] = true;
}
- e = perf_mem_events__ptr(PERF_MEM_EVENTS__LOAD);
- if (e->record)
+ if (perf_mem_record[PERF_MEM_EVENTS__LOAD])
rec_argv[i++] = "-W";
rec_argv[i++] = "-d";
@@ -149,15 +141,15 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
rec_argv[i++] = "--data-page-size";
start = i;
- ret = perf_mem_events__record_args(rec_argv, &i, rec_tmp, &tmp_nr);
+ ret = perf_mem_events__record_args(rec_argv, &i, &event_name_storage);
if (ret)
goto out;
end = i;
- if (all_user)
+ if (mem->all_user)
rec_argv[i++] = "--all-user";
- if (all_kernel)
+ if (mem->all_kernel)
rec_argv[i++] = "--all-kernel";
if (mem->cpu_list) {
@@ -179,16 +171,13 @@ static int __cmd_record(int argc, const char **argv, struct perf_mem *mem)
ret = cmd_record(i, rec_argv);
out:
- for (i = 0; i < tmp_nr; i++)
- free(rec_tmp[i]);
-
- free(rec_tmp);
+ free(event_name_storage);
free(rec_argv);
return ret;
}
static int
-dump_raw_samples(struct perf_tool *tool,
+dump_raw_samples(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -213,7 +202,7 @@ dump_raw_samples(struct perf_tool *tool,
if (al.map != NULL) {
dso = map__dso(al.map);
if (dso)
- dso->hit = 1;
+ dso__set_hit(dso);
}
field_sep = symbol_conf.field_sep;
@@ -255,14 +244,14 @@ dump_raw_samples(struct perf_tool *tool,
symbol_conf.field_sep,
sample->data_src,
symbol_conf.field_sep,
- dso ? dso->long_name : "???",
+ dso ? dso__long_name(dso) : "???",
al.sym ? al.sym->name : "???");
out_put:
addr_location__exit(&al);
return 0;
}
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel __maybe_unused,
@@ -285,7 +274,23 @@ static int report_raw_events(struct perf_mem *mem)
.force = mem->force,
};
int ret;
- struct perf_session *session = perf_session__new(&data, &mem->tool);
+ struct perf_session *session;
+
+ perf_tool__init(&mem->tool, /*ordered_events=*/true);
+ mem->tool.sample = process_sample_event;
+ mem->tool.mmap = perf_event__process_mmap;
+ mem->tool.mmap2 = perf_event__process_mmap2;
+ mem->tool.comm = perf_event__process_comm;
+ mem->tool.lost = perf_event__process_lost;
+ mem->tool.fork = perf_event__process_fork;
+ mem->tool.attr = perf_event__process_attr;
+ mem->tool.build_id = perf_event__process_build_id;
+ mem->tool.namespaces = perf_event__process_namespaces;
+ mem->tool.auxtrace_info = perf_event__process_auxtrace_info;
+ mem->tool.auxtrace = perf_event__process_auxtrace;
+ mem->tool.auxtrace_error = perf_event__process_auxtrace_error;
+
+ session = perf_session__new(&data, &mem->tool);
if (IS_ERR(session))
return PTR_ERR(session);
@@ -299,7 +304,7 @@ static int report_raw_events(struct perf_mem *mem)
goto out_delete;
}
- ret = symbol__init(&session->header.env);
+ ret = symbol__init(perf_session__env(session));
if (ret < 0)
goto out_delete;
@@ -319,16 +324,21 @@ out_delete:
perf_session__delete(session);
return ret;
}
+
static char *get_sort_order(struct perf_mem *mem)
{
bool has_extra_options = (mem->phys_addr | mem->data_page_size) ? true : false;
char sort[128];
+ if (mem->sort_key)
+ scnprintf(sort, sizeof(sort), "--sort=%s", mem->sort_key);
+ else if (mem->data_type)
+ strcpy(sort, "--sort=mem,snoop,tlb,type");
/*
* there is no weight (cost) associated with stores, so don't print
* the column
*/
- if (!(mem->operation & MEM_OPERATION_LOAD)) {
+ else if (!(mem->operation & MEM_OPERATION_LOAD)) {
strcpy(sort, "--sort=mem,sym,dso,symbol_daddr,"
"dso_daddr,tlb,locked");
} else if (has_extra_options) {
@@ -343,14 +353,26 @@ static char *get_sort_order(struct perf_mem *mem)
if (mem->data_page_size)
strcat(sort, ",data_page_size");
+ /* make sure it has the 'type' sort key even if the -s option is used */
+ if (mem->data_type && !strstr(sort, "type"))
+ strcat(sort, ",type");
+
return strdup(sort);
}
-static int report_events(int argc, const char **argv, struct perf_mem *mem)
+static int __cmd_report(int argc, const char **argv, struct perf_mem *mem,
+ const struct option *options)
{
const char **rep_argv;
int ret, i = 0, j, rep_argc;
char *new_sort_order;
+ const char * const report_usage[] = {
+ "perf mem report [<options>]",
+ NULL
+ };
+
+ argc = parse_options(argc, argv, options, report_usage,
+ PARSE_OPT_KEEP_UNKNOWN);
if (mem->dump_raw)
return report_raw_events(mem);
@@ -368,10 +390,11 @@ static int report_events(int argc, const char **argv, struct perf_mem *mem)
if (new_sort_order)
rep_argv[i++] = new_sort_order;
- for (j = 1; j < argc; j++, i++)
+ for (j = 0; j < argc; j++, i++)
rep_argv[i] = argv[j];
ret = cmd_report(i, rep_argv);
+ free(new_sort_order);
free(rep_argv);
return ret;
}
@@ -449,56 +472,61 @@ int cmd_mem(int argc, const char **argv)
{
struct stat st;
struct perf_mem mem = {
- .tool = {
- .sample = process_sample_event,
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .comm = perf_event__process_comm,
- .lost = perf_event__process_lost,
- .fork = perf_event__process_fork,
- .attr = perf_event__process_attr,
- .build_id = perf_event__process_build_id,
- .namespaces = perf_event__process_namespaces,
- .auxtrace_info = perf_event__process_auxtrace_info,
- .auxtrace = perf_event__process_auxtrace,
- .auxtrace_error = perf_event__process_auxtrace_error,
- .ordered_events = true,
- },
.input_name = "perf.data",
/*
* default to both load and store sampling
*/
.operation = MEM_OPERATION_LOAD | MEM_OPERATION_STORE,
};
+ char *sort_order_help = sort_help("sort by key(s):", SORT_MODE__MEMORY);
const struct option mem_options[] = {
OPT_CALLBACK('t', "type", &mem.operation,
"type", "memory operations(load,store) Default load,store",
parse_mem_ops),
+ OPT_STRING('C', "cpu", &mem.cpu_list, "cpu",
+ "list of cpus to profile"),
+ OPT_BOOLEAN('f', "force", &mem.force, "don't complain, do it"),
+ OPT_INCR('v', "verbose", &verbose,
+ "be more verbose (show counter open errors, etc)"),
+ OPT_BOOLEAN('p', "phys-data", &mem.phys_addr, "Record/Report sample physical addresses"),
+ OPT_BOOLEAN(0, "data-page-size", &mem.data_page_size, "Record/Report sample data address page size"),
+ OPT_END()
+ };
+ const struct option record_options[] = {
+ OPT_CALLBACK('e', "event", &mem, "event",
+ "event selector. use 'perf mem record -e list' to list available events",
+ parse_record_events),
+ OPT_UINTEGER(0, "ldlat", &perf_mem_events__loads_ldlat, "mem-loads latency"),
+ OPT_BOOLEAN('U', "all-user", &mem.all_user, "collect only user level data"),
+ OPT_BOOLEAN('K', "all-kernel", &mem.all_kernel, "collect only kernel level data"),
+ OPT_PARENT(mem_options)
+ };
+ const struct option report_options[] = {
OPT_BOOLEAN('D', "dump-raw-samples", &mem.dump_raw,
"dump raw samples in ASCII"),
OPT_BOOLEAN('U', "hide-unresolved", &mem.hide_unresolved,
"Only display entries resolved to a symbol"),
OPT_STRING('i', "input", &input_name, "file",
"input file name"),
- OPT_STRING('C', "cpu", &mem.cpu_list, "cpu",
- "list of cpus to profile"),
OPT_STRING_NOEMPTY('x', "field-separator", &symbol_conf.field_sep,
"separator",
"separator for columns, no spaces will be added"
" between columns '.' is reserved."),
- OPT_BOOLEAN('f', "force", &mem.force, "don't complain, do it"),
- OPT_BOOLEAN('p', "phys-data", &mem.phys_addr, "Record/Report sample physical addresses"),
- OPT_BOOLEAN(0, "data-page-size", &mem.data_page_size, "Record/Report sample data address page size"),
- OPT_END()
+ OPT_STRING('s', "sort", &mem.sort_key, "key[,key2...]",
+ sort_order_help),
+ OPT_BOOLEAN('T', "type-profile", &mem.data_type,
+ "Show data-type profile result"),
+ OPT_PARENT(mem_options)
};
const char *const mem_subcommands[] = { "record", "report", NULL };
const char *mem_usage[] = {
NULL,
NULL
};
+ int ret;
argc = parse_options_subcommand(argc, argv, mem_options, mem_subcommands,
- mem_usage, PARSE_OPT_KEEP_UNKNOWN);
+ mem_usage, PARSE_OPT_STOP_AT_NON_OPTION);
if (!argc || !(strncmp(argv[0], "rec", 3) || mem.operation))
usage_with_options(mem_usage, mem_options);
@@ -511,11 +539,15 @@ int cmd_mem(int argc, const char **argv)
}
if (strlen(argv[0]) > 2 && strstarts("record", argv[0]))
- return __cmd_record(argc, argv, &mem);
+ ret = __cmd_record(argc, argv, &mem, record_options);
else if (strlen(argv[0]) > 2 && strstarts("report", argv[0]))
- return report_events(argc, argv, &mem);
+ ret = __cmd_report(argc, argv, &mem, report_options);
else
usage_with_options(mem_usage, mem_options);
- return 0;
+ /* free usage string allocated by parse_options_subcommand */
+ free((void *)mem_usage[0]);
+ free(sort_order_help);
+
+ return ret;
}
diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c
index 019fef8da6a8..69800e4d9530 100644
--- a/tools/perf/builtin-probe.c
+++ b/tools/perf/builtin-probe.c
@@ -229,7 +229,7 @@ static int opt_set_target_ns(const struct option *opt __maybe_unused,
/* Command option callbacks */
-#ifdef HAVE_DWARF_SUPPORT
+#ifdef HAVE_LIBDW_SUPPORT
static int opt_show_lines(const struct option *opt,
const char *str, int unset __maybe_unused)
{
@@ -325,7 +325,7 @@ static void cleanup_params(void)
for (i = 0; i < params->nevents; i++)
clear_perf_probe_event(params->events + i);
line_range__clear(&params->line_range);
- free(params->target);
+ zfree(&params->target);
strfilter__delete(params->filter);
nsinfo__put(params->nsi);
zfree(&params);
@@ -505,7 +505,7 @@ out:
return ret;
}
-#ifdef HAVE_DWARF_SUPPORT
+#ifdef HAVE_LIBDW_SUPPORT
#define PROBEDEF_STR \
"[EVENT=]FUNC[@SRC][+OFF|%return|:RL|;PT]|SRC:AL|SRC;PT [[NAME=]ARG ...]"
#else
@@ -521,7 +521,7 @@ __cmd_probe(int argc, const char **argv)
"perf probe [<options>] --add 'PROBEDEF' [--add 'PROBEDEF' ...]",
"perf probe [<options>] --del '[GROUP:]EVENT' ...",
"perf probe --list [GROUP:]EVENT ...",
-#ifdef HAVE_DWARF_SUPPORT
+#ifdef HAVE_LIBDW_SUPPORT
"perf probe [<options>] --line 'LINEDESC'",
"perf probe [<options>] --vars 'PROBEPOINT'",
#endif
@@ -545,7 +545,7 @@ __cmd_probe(int argc, const char **argv)
"\t\tFUNC:\tFunction name\n"
"\t\tOFF:\tOffset from function entry (in byte)\n"
"\t\t%return:\tPut the probe at function return\n"
-#ifdef HAVE_DWARF_SUPPORT
+#ifdef HAVE_LIBDW_SUPPORT
"\t\tSRC:\tSource code path\n"
"\t\tRL:\tRelative line number from function entry.\n"
"\t\tAL:\tAbsolute line number in file.\n"
@@ -612,11 +612,11 @@ __cmd_probe(int argc, const char **argv)
set_option_flag(options, 'd', "del", PARSE_OPT_EXCLUSIVE);
set_option_flag(options, 'D', "definition", PARSE_OPT_EXCLUSIVE);
set_option_flag(options, 'l', "list", PARSE_OPT_EXCLUSIVE);
-#ifdef HAVE_DWARF_SUPPORT
+#ifdef HAVE_LIBDW_SUPPORT
set_option_flag(options, 'L', "line", PARSE_OPT_EXCLUSIVE);
set_option_flag(options, 'V', "vars", PARSE_OPT_EXCLUSIVE);
#else
-# define set_nobuild(s, l, c) set_option_nobuild(options, s, l, "NO_DWARF=1", c)
+# define set_nobuild(s, l, c) set_option_nobuild(options, s, l, "NO_LIBDW=1", c)
set_nobuild('L', "line", false);
set_nobuild('V', "vars", false);
set_nobuild('\0', "externs", false);
@@ -694,7 +694,7 @@ __cmd_probe(int argc, const char **argv)
if (ret < 0)
pr_err_with_code(" Error: Failed to show functions.", ret);
return ret;
-#ifdef HAVE_DWARF_SUPPORT
+#ifdef HAVE_LIBDW_SUPPORT
case 'L':
ret = show_line_range(&params->line_range, params->target,
params->nsi, params->uprobes);
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 86c910125172..d76f01956e33 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -26,6 +26,7 @@
#include "util/target.h"
#include "util/session.h"
#include "util/tool.h"
+#include "util/stat.h"
#include "util/symbol.h"
#include "util/record.h"
#include "util/cpumap.h"
@@ -51,6 +52,7 @@
#include "util/clockid.h"
#include "util/off_cpu.h"
#include "util/bpf-filter.h"
+#include "util/strbuf.h"
#include "asm/bug.h"
#include "perf.h"
#include "cputopo.h"
@@ -161,6 +163,7 @@ struct record {
struct evlist *sb_evlist;
pthread_t thread_id;
int realtime_prio;
+ bool latency;
bool switch_output_event_set;
bool no_buildid;
bool no_buildid_set;
@@ -168,9 +171,12 @@ struct record {
bool no_buildid_cache_set;
bool buildid_all;
bool buildid_mmap;
+ bool buildid_mmap_set;
bool timestamp_filename;
bool timestamp_boundary;
bool off_cpu;
+ const char *filter_action;
+ const char *uid_str;
struct switch_output switch_output;
unsigned long long samples;
unsigned long output_max_size; /* = 0: unlimited */
@@ -193,6 +199,15 @@ static const char *affinity_tags[PERF_AFFINITY_MAX] = {
"SYS", "NODE", "CPU"
};
+static int build_id__process_mmap(const struct perf_tool *tool, union perf_event *event,
+ struct perf_sample *sample, struct machine *machine);
+static int build_id__process_mmap2(const struct perf_tool *tool, union perf_event *event,
+ struct perf_sample *sample, struct machine *machine);
+static int process_timestamp_boundary(const struct perf_tool *tool,
+ union perf_event *event,
+ struct perf_sample *sample,
+ struct machine *machine);
+
#ifndef HAVE_GETTID
static inline pid_t gettid(void)
{
@@ -332,7 +347,7 @@ static int record__aio_complete(struct mmap *md, struct aiocb *cblock)
} else {
/*
* aio write request may require restart with the
- * reminder if the kernel didn't write whole
+ * remainder if the kernel didn't write whole
* chunk at once.
*/
rem_off = cblock->aio_offset + written;
@@ -400,7 +415,7 @@ static int record__aio_pushfn(struct mmap *map, void *to, void *buf, size_t size
*
* Coping can be done in two steps in case the chunk of profiling data
* crosses the upper bound of the kernel buffer. In this case we first move
- * part of data from map->start till the upper bound and then the reminder
+ * part of data from map->start till the upper bound and then the remainder
* from the beginning of the kernel buffer till the end of the data chunk.
*/
@@ -608,7 +623,7 @@ static int record__comp_enabled(struct record *rec)
return rec->opts.comp_level > 0;
}
-static int process_synthesized_event(struct perf_tool *tool,
+static int process_synthesized_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -619,7 +634,7 @@ static int process_synthesized_event(struct perf_tool *tool,
static struct mutex synth_lock;
-static int process_locked_synthesized_event(struct perf_tool *tool,
+static int process_locked_synthesized_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -637,14 +652,27 @@ static int record__pushfn(struct mmap *map, void *to, void *bf, size_t size)
struct record *rec = to;
if (record__comp_enabled(rec)) {
+ struct perf_record_compressed2 *event = map->data;
+ size_t padding = 0;
+ u8 pad[8] = {0};
ssize_t compressed = zstd_compress(rec->session, map, map->data,
mmap__mmap_len(map), bf, size);
if (compressed < 0)
return (int)compressed;
- size = compressed;
- bf = map->data;
+ bf = event;
+ thread->samples++;
+
+ /*
+ * The record from `zstd_compress` is not 8 bytes aligned, which would cause asan
+ * error. We make it aligned here.
+ */
+ event->data_size = compressed - sizeof(struct perf_record_compressed2);
+ event->header.size = PERF_ALIGN(compressed, sizeof(u64));
+ padding = event->header.size - compressed;
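+ /* write the compressed payload followed by zero padding up to the aligned header size */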
+ return record__write(rec, map, bf, compressed) ||
+ record__write(rec, map, &pad, padding);
}
thread->samples++;
@@ -704,7 +732,7 @@ static void record__sig_exit(void)
#ifdef HAVE_AUXTRACE_SUPPORT
-static int record__process_auxtrace(struct perf_tool *tool,
+static int record__process_auxtrace(const struct perf_tool *tool,
struct mmap *map,
union perf_event *event, void *data1,
size_t len1, void *data2, size_t len2)
@@ -747,7 +775,9 @@ static int record__auxtrace_mmap_read(struct record *rec,
{
int ret;
- ret = auxtrace_mmap__read(map, rec->itr, &rec->tool,
+ ret = auxtrace_mmap__read(map, rec->itr,
+ perf_session__env(rec->session),
+ &rec->tool,
record__process_auxtrace);
if (ret < 0)
return ret;
@@ -763,7 +793,9 @@ static int record__auxtrace_mmap_read_snapshot(struct record *rec,
{
int ret;
- ret = auxtrace_mmap__read_snapshot(map, rec->itr, &rec->tool,
+ ret = auxtrace_mmap__read_snapshot(map, rec->itr,
+ perf_session__env(rec->session),
+ &rec->tool,
record__process_auxtrace,
rec->opts.auxtrace_snapshot_size);
if (ret < 0)
@@ -850,7 +882,9 @@ static int record__auxtrace_init(struct record *rec)
if (err)
return err;
- auxtrace_regroup_aux_output(rec->evlist);
+ err = auxtrace_parse_aux_action(rec->evlist);
+ if (err)
+ return err;
return auxtrace_parse_filters(rec->evlist);
}
@@ -1355,8 +1389,6 @@ static int record__open(struct record *rec)
struct record_opts *opts = &rec->opts;
int rc = 0;
- evlist__config(evlist, opts, &callchain_param);
-
evlist__for_each_entry(evlist, pos) {
try_again:
if (evsel__open(pos, pos->core.cpus, pos->core.threads) < 0) {
@@ -1376,8 +1408,6 @@ try_again:
ui__error("%s\n", msg);
goto out;
}
-
- pos->supported = true;
}
if (symbol_conf.kptr_restrict && !evlist__exclude_kernel(evlist)) {
@@ -1391,7 +1421,7 @@ try_again:
"even with a suitable vmlinux or kallsyms file.\n\n");
}
- if (evlist__apply_filters(evlist, &pos)) {
+ if (evlist__apply_filters(evlist, &pos, &opts->target)) {
pr_err("failed to set filter \"%s\" on event %s with %d (%s)\n",
pos->filter ?: "BPF", evsel__name(pos), errno,
str_error_r(errno, msg, sizeof(msg)));
@@ -1418,7 +1448,7 @@ static void set_timestamp_boundary(struct record *rec, u64 sample_time)
rec->evlist->last_sample_time = sample_time;
}
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -1460,7 +1490,7 @@ static int process_buildids(struct record *rec)
* first/last samples.
*/
if (rec->buildid_all && !rec->timestamp_boundary)
- rec->tool.sample = NULL;
+ rec->tool.sample = process_event_sample_stub;
return perf_session__process_events(session);
}
@@ -1523,7 +1553,7 @@ static void record__adjust_affinity(struct record *rec, struct mmap *map)
static size_t process_comp_header(void *record, size_t increment)
{
- struct perf_record_compressed *event = record;
+ struct perf_record_compressed2 *event = record;
size_t size = sizeof(*event);
if (increment) {
@@ -1531,7 +1561,7 @@ static size_t process_comp_header(void *record, size_t increment)
return increment;
}
- event->header.type = PERF_RECORD_COMPRESSED;
+ event->header.type = PERF_RECORD_COMPRESSED2;
event->header.size = size;
return size;
@@ -1541,7 +1571,7 @@ static ssize_t zstd_compress(struct perf_session *session, struct mmap *map,
void *dst, size_t dst_size, void *src, size_t src_size)
{
ssize_t compressed;
- size_t max_record_size = PERF_SAMPLE_MAX_SIZE - sizeof(struct perf_record_compressed) - 1;
+ size_t max_record_size = PERF_SAMPLE_MAX_SIZE - sizeof(struct perf_record_compressed2) - 1;
struct zstd_data *zstd_data = &session->zstd_data;
if (map && map->file)
@@ -1740,10 +1770,8 @@ static void record__init_features(struct record *rec)
if (rec->no_buildid)
perf_header__clear_feat(&session->header, HEADER_BUILD_ID);
-#ifdef HAVE_LIBTRACEEVENT
if (!have_tracepoints(&rec->evlist->core.entries))
perf_header__clear_feat(&session->header, HEADER_TRACING_DATA);
-#endif
if (!rec->opts.branch_stack)
perf_header__clear_feat(&session->header, HEADER_BRANCH_STACK);
@@ -1773,8 +1801,11 @@ record__finish_output(struct record *rec)
struct perf_data *data = &rec->data;
int fd = perf_data__fd(data);
- if (data->is_pipe)
+ if (data->is_pipe) {
+ /* Just to display approx. size */
+ data->file.size = rec->bytes_written;
return;
+ }
rec->session->header.data_size += rec->bytes_written;
data->file.size = lseek(perf_data__fd(data), 0, SEEK_CUR);
@@ -1783,11 +1814,12 @@ record__finish_output(struct record *rec)
data->dir.files[i].size = lseek(data->dir.files[i].fd, 0, SEEK_CUR);
}
+ /* Skipped if build-id scanning is disabled or build IDs come from kernel and synthesized mmap events. */
if (!rec->no_buildid) {
process_buildids(rec);
if (rec->buildid_all)
- dsos__hit_all(rec->session);
+ perf_session__dsos_hit_all(rec->session);
}
perf_session__write_header(rec->session, rec->evlist, fd, true);
@@ -1830,8 +1862,8 @@ static int
record__switch_output(struct record *rec, bool at_exit)
{
struct perf_data *data = &rec->data;
+ char *new_filename = NULL;
int fd, err;
- char *new_filename;
/* Same Size: "2015122520103046"*/
char timestamp[] = "InvalidTimestamp";
@@ -1853,16 +1885,17 @@ record__switch_output(struct record *rec, bool at_exit)
}
fd = perf_data__switch(data, timestamp,
- rec->session->header.data_offset,
- at_exit, &new_filename);
+ rec->session->header.data_offset,
+ at_exit, &new_filename);
if (fd >= 0 && !at_exit) {
rec->bytes_written = 0;
rec->session->header.data_size = 0;
}
- if (!quiet)
+ if (!quiet) {
fprintf(stderr, "[ perf record: Dump %s.%s ]\n",
data->path, timestamp);
+ }
if (rec->switch_output.num_files) {
int n = rec->switch_output.cur_file + 1;
@@ -1905,9 +1938,10 @@ static void __record__save_lost_samples(struct record *rec, struct evsel *evsel,
u16 misc_flag)
{
struct perf_sample_id *sid;
- struct perf_sample sample = {};
+ struct perf_sample sample;
int id_hdr_size;
+ perf_sample__init(&sample, /*all=*/true);
lost->lost = lost_count;
if (evsel->core.ids) {
sid = xyarray__entry(evsel->core.sample_id, cpu_idx, thread_idx);
@@ -1919,12 +1953,13 @@ static void __record__save_lost_samples(struct record *rec, struct evsel *evsel,
lost->header.size = sizeof(*lost) + id_hdr_size;
lost->header.misc = misc_flag;
record__write(rec, NULL, lost, lost->header.size);
+ perf_sample__exit(&sample);
}
static void record__read_lost_samples(struct record *rec)
{
struct perf_session *session = rec->session;
- struct perf_record_lost_samples *lost = NULL;
+ struct perf_record_lost_samples_and_ids lost;
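+ /* on-stack event buffer; the _and_ids layout leaves room for the trailing sample ID */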
struct evsel *evsel;
/* there was an error during record__open */
@@ -1949,20 +1984,13 @@ static void record__read_lost_samples(struct record *rec)
if (perf_evsel__read(&evsel->core, x, y, &count) < 0) {
pr_debug("read LOST count failed\n");
- goto out;
+ return;
}
if (count.lost) {
- if (!lost) {
- lost = zalloc(sizeof(*lost) +
- session->machines.host.id_hdr_size);
- if (!lost) {
- pr_debug("Memory allocation failed\n");
- return;
- }
- lost->header.type = PERF_RECORD_LOST_SAMPLES;
- }
- __record__save_lost_samples(rec, evsel, lost,
+ memset(&lost, 0, sizeof(lost));
+ lost.lost.header.type = PERF_RECORD_LOST_SAMPLES;
+ __record__save_lost_samples(rec, evsel, &lost.lost,
x, y, count.lost, 0);
}
}
@@ -1970,21 +1998,12 @@ static void record__read_lost_samples(struct record *rec)
lost_count = perf_bpf_filter__lost_count(evsel);
if (lost_count) {
- if (!lost) {
- lost = zalloc(sizeof(*lost) +
- session->machines.host.id_hdr_size);
- if (!lost) {
- pr_debug("Memory allocation failed\n");
- return;
- }
- lost->header.type = PERF_RECORD_LOST_SAMPLES;
- }
- __record__save_lost_samples(rec, evsel, lost, 0, 0, lost_count,
+ memset(&lost, 0, sizeof(lost));
+ lost.lost.header.type = PERF_RECORD_LOST_SAMPLES;
+ __record__save_lost_samples(rec, evsel, &lost.lost, 0, 0, lost_count,
PERF_RECORD_MISC_LOST_SAMPLES_BPF);
}
}
-out:
- free(lost);
}
static volatile sig_atomic_t workload_exec_errno;
@@ -2147,6 +2166,14 @@ out:
return err;
}
+static void record__synthesize_final_bpf_metadata(struct record *rec __maybe_unused)
+{
+#ifdef HAVE_LIBBPF_SUPPORT
+ perf_event__synthesize_final_bpf_metadata(rec->session,
+ process_synthesized_event);
+#endif
+}
+
static int record__process_signal_event(union perf_event *event __maybe_unused, void *data)
{
struct record *rec = data;
@@ -2178,7 +2205,7 @@ static int record__setup_sb_evlist(struct record *rec)
}
}
- if (evlist__add_bpf_sb_event(rec->sb_evlist, &rec->session->header.env)) {
+ if (evlist__add_bpf_sb_event(rec->sb_evlist, perf_session__env(rec->session))) {
pr_err("Couldn't ask for PERF_RECORD_BPF_EVENT side band events.\n.");
return -1;
}
@@ -2197,15 +2224,16 @@ static int record__init_clock(struct record *rec)
struct perf_session *session = rec->session;
struct timespec ref_clockid;
struct timeval ref_tod;
+ struct perf_env *env = perf_session__env(session);
u64 ref;
if (!rec->opts.use_clockid)
return 0;
if (rec->opts.use_clockid && rec->opts.clockid_res_ns)
- session->header.env.clock.clockid_res_ns = rec->opts.clockid_res_ns;
+ env->clock.clockid_res_ns = rec->opts.clockid_res_ns;
- session->header.env.clock.clockid = rec->opts.clockid;
+ env->clock.clockid = rec->opts.clockid;
if (gettimeofday(&ref_tod, NULL) != 0) {
pr_err("gettimeofday failed, cannot set reference time.\n");
@@ -2220,12 +2248,12 @@ static int record__init_clock(struct record *rec)
ref = (u64) ref_tod.tv_sec * NSEC_PER_SEC +
(u64) ref_tod.tv_usec * NSEC_PER_USEC;
- session->header.env.clock.tod_ns = ref;
+ env->clock.tod_ns = ref;
ref = (u64) ref_clockid.tv_sec * NSEC_PER_SEC +
(u64) ref_clockid.tv_nsec;
- session->header.env.clock.clockid_ns = ref;
+ env->clock.clockid_ns = ref;
return 0;
}
@@ -2371,6 +2399,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
int fd;
float ratio = 0;
enum evlist_ctl_cmd cmd = EVLIST_CTL_CMD_UNSUPPORTED;
+ struct perf_env *env;
atexit(record__sig_exit);
signal(SIGCHLD, sig_handler);
@@ -2378,13 +2407,8 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
signal(SIGTERM, sig_handler);
signal(SIGSEGV, sigsegv_handler);
- if (rec->opts.record_namespaces)
- tool->namespace_events = true;
-
if (rec->opts.record_cgroup) {
-#ifdef HAVE_FILE_HANDLE
- tool->cgroup_events = true;
-#else
+#ifndef HAVE_FILE_HANDLE
pr_err("cgroup tracking is not supported\n");
return -1;
#endif
@@ -2400,12 +2424,24 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
signal(SIGUSR2, SIG_IGN);
}
+ perf_tool__init(tool, /*ordered_events=*/true);
+ tool->sample = process_sample_event;
+ tool->fork = perf_event__process_fork;
+ tool->exit = perf_event__process_exit;
+ tool->comm = perf_event__process_comm;
+ tool->namespaces = perf_event__process_namespaces;
+ tool->mmap = build_id__process_mmap;
+ tool->mmap2 = build_id__process_mmap2;
+ tool->itrace_start = process_timestamp_boundary;
+ tool->aux = process_timestamp_boundary;
+ tool->namespace_events = rec->opts.record_namespaces;
+ tool->cgroup_events = rec->opts.record_cgroup;
session = perf_session__new(data, tool);
if (IS_ERR(session)) {
pr_err("Perf session creation failed.\n");
return PTR_ERR(session);
}
-
+ env = perf_session__env(session);
if (record__threads_enabled(rec)) {
if (perf_data__is_pipe(&rec->data)) {
pr_err("Parallel trace streaming is not available in pipe mode.\n");
@@ -2439,8 +2475,8 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
#endif // HAVE_EVENTFD_SUPPORT
- session->header.env.comp_type = PERF_COMP_ZSTD;
- session->header.env.comp_level = rec->opts.comp_level;
+ env->comp_type = PERF_COMP_ZSTD;
+ env->comp_level = rec->opts.comp_level;
if (rec->opts.kcore &&
!record__kcore_readable(&session->machines.host)) {
@@ -2472,7 +2508,18 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
if (data->is_pipe && rec->evlist->core.nr_entries == 1)
rec->opts.sample_id = true;
- evlist__uniquify_name(rec->evlist);
+ if (rec->timestamp_filename && perf_data__is_pipe(data)) {
+ rec->timestamp_filename = false;
+ pr_warning("WARNING: --timestamp-filename option is not available in pipe mode.\n");
+ }
+
+ /*
+ * Use the global stat_config, which is zero-initialized, so aggr_mode is AGGR_NONE
+ * and hybrid_merge is false.
+ */
+ evlist__uniquify_evsel_names(rec->evlist, &stat_config);
+
+ evlist__config(rec->evlist, opts, &callchain_param);
/* Debug message used by test scripts */
pr_debug3("perf record opening and mmapping events\n");
@@ -2482,7 +2529,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
}
/* Debug message used by test scripts */
pr_debug3("perf record done opening and mmapping events\n");
- session->header.env.comp_mmap_len = session->evlist->core.mmap_len;
+ env->comp_mmap_len = session->evlist->core.mmap_len;
if (rec->opts.kcore) {
err = record__kcore_copy(&session->machines.host, data);
@@ -2522,6 +2569,9 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
goto out_free_threads;
}
+ if (!evlist__needs_bpf_sb_event(rec->evlist))
+ opts->no_bpf_event = true;
+
err = record__setup_sb_evlist(rec);
if (err)
goto out_free_threads;
@@ -2553,6 +2603,13 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
evlist__enable(rec->evlist);
/*
+ * offcpu-time does not call execve, so enable_on_exec wouldn't work
+ * when recording a workload; enable it manually
+ */
+ if (rec->off_cpu)
+ evlist__enable_evsel(rec->evlist, (char *)OFFCPU_EVENT);
+
+ /*
* Let the child rip
*/
if (forks) {
@@ -2764,17 +2821,21 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
trigger_off(&auxtrace_snapshot_trigger);
trigger_off(&switch_output_trigger);
+ record__synthesize_final_bpf_metadata(rec);
+
if (opts->auxtrace_snapshot_on_exit)
record__auxtrace_snapshot_exit(rec);
if (forks && workload_exec_errno) {
- char msg[STRERR_BUFSIZE], strevsels[2048];
+ char msg[STRERR_BUFSIZE];
const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
+ struct strbuf sb = STRBUF_INIT;
- evlist__scnprintf_evsels(rec->evlist, sizeof(strevsels), strevsels);
+ evlist__format_evsels(rec->evlist, &sb, 2048);
pr_err("Failed to collect '%s' for the '%s' workload: %s\n",
- strevsels, argv[0], emsg);
+ sb.buf, argv[0], emsg);
+ strbuf_release(&sb);
err = -1;
goto out_child;
}
@@ -2798,7 +2859,7 @@ out_free_threads:
if (rec->session->bytes_transferred && rec->session->bytes_compressed) {
ratio = (float)rec->session->bytes_transferred/(float)rec->session->bytes_compressed;
- session->header.env.comp_ratio = ratio + 0.5;
+ env->comp_ratio = ratio + 0.5;
}
if (forks) {
@@ -2872,10 +2933,10 @@ out_delete_session:
}
#endif
zstd_fini(&session->zstd_data);
- perf_session__delete(session);
-
if (!opts->no_bpf_event)
evlist__stop_sb_thread(rec->sb_evlist);
+
+ perf_session__delete(session);
return status;
}
@@ -2950,6 +3011,8 @@ static int perf_record_config(const char *var, const char *value, void *cb)
rec->no_buildid = true;
else if (!strcmp(value, "mmap"))
rec->buildid_mmap = true;
+ else if (!strcmp(value, "no-mmap"))
+ rec->buildid_mmap = false;
else
return -1;
return 0;
@@ -3139,6 +3202,28 @@ out_free:
return ret;
}
+static int record__parse_off_cpu_thresh(const struct option *opt,
+ const char *str,
+ int unset __maybe_unused)
+{
+ struct record_opts *opts = opt->value;
+ char *endptr;
+ u64 off_cpu_thresh_ms;
+
+ if (!str)
+ return -EINVAL;
+
+ off_cpu_thresh_ms = strtoull(str, &endptr, 10);
+
+ /* parsing failed if strtoull() returned 0 but the string isn't "0" */
+ if (*endptr || (off_cpu_thresh_ms == 0 && strcmp(str, "0")))
+ return -EINVAL;
+ else
+ opts->off_cpu_thresh_ns = off_cpu_thresh_ms * NSEC_PER_MSEC;
+
+ return 0;
+}
+
void __weak arch__add_leaf_frame_record_opts(struct record_opts *opts __maybe_unused)
{
}
@@ -3189,7 +3274,7 @@ static int switch_output_setup(struct record *rec)
unsigned long val;
/*
- * If we're using --switch-output-events, then we imply its
+ * If we're using --switch-output-events, then we imply its
* --switch-output=signal, as we'll send a SIGUSR2 from the side band
* thread to its parent.
*/
@@ -3250,7 +3335,7 @@ static const char * const __record_usage[] = {
};
const char * const *record_usage = __record_usage;
-static int build_id__process_mmap(struct perf_tool *tool, union perf_event *event,
+static int build_id__process_mmap(const struct perf_tool *tool, union perf_event *event,
struct perf_sample *sample, struct machine *machine)
{
/*
@@ -3262,7 +3347,7 @@ static int build_id__process_mmap(struct perf_tool *tool, union perf_event *even
return perf_event__process_mmap(tool, event, sample, machine);
}
-static int build_id__process_mmap2(struct perf_tool *tool, union perf_event *event,
+static int build_id__process_mmap2(const struct perf_tool *tool, union perf_event *event,
struct perf_sample *sample, struct machine *machine)
{
/*
@@ -3275,7 +3360,7 @@ static int build_id__process_mmap2(struct perf_tool *tool, union perf_event *eve
return perf_event__process_mmap2(tool, event, sample, machine);
}
-static int process_timestamp_boundary(struct perf_tool *tool,
+static int process_timestamp_boundary(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct machine *machine __maybe_unused)
@@ -3332,19 +3417,9 @@ static struct record record = {
.ctl_fd = -1,
.ctl_fd_ack = -1,
.synth = PERF_SYNTH_ALL,
+ .off_cpu_thresh_ns = OFFCPU_THRESH,
},
- .tool = {
- .sample = process_sample_event,
- .fork = perf_event__process_fork,
- .exit = perf_event__process_exit,
- .comm = perf_event__process_comm,
- .namespaces = perf_event__process_namespaces,
- .mmap = build_id__process_mmap,
- .mmap2 = build_id__process_mmap2,
- .itrace_start = process_timestamp_boundary,
- .aux = process_timestamp_boundary,
- .ordered_events = true,
- },
+ .buildid_mmap = true,
};
const char record_callchain_help[] = CALLCHAIN_RECORD_HELP
@@ -3373,6 +3448,9 @@ static struct option __record_options[] = {
parse_events_option),
OPT_CALLBACK(0, "filter", &record.evlist, "filter",
"event filter", parse_filter),
+ OPT_BOOLEAN(0, "latency", &record.latency,
+ "Enable data collection for latency profiling.\n"
+ "\t\t\t Use perf report --latency for latency-centric profile."),
OPT_CALLBACK_NOOPT(0, "exclude-perf", &record.evlist,
NULL, "don't record events from perf itself",
exclude_perf),
@@ -3429,6 +3507,8 @@ static struct option __record_options[] = {
"Record the sampled data address data page size"),
OPT_BOOLEAN(0, "code-page-size", &record.opts.sample_code_page_size,
"Record the sampled code address (ip) page size"),
+ OPT_BOOLEAN(0, "sample-mem-info", &record.opts.sample_data_src,
+ "Record the data source for memory operations"),
OPT_BOOLEAN(0, "sample-cpu", &record.opts.sample_cpu, "Record the sample cpu"),
OPT_BOOLEAN(0, "sample-identifier", &record.opts.sample_identifier,
"Record the sample identifier"),
@@ -3453,8 +3533,7 @@ static struct option __record_options[] = {
"or ranges of time to enable events e.g. '-D 10-20,30-40'",
record__parse_event_enable_time),
OPT_BOOLEAN(0, "kcore", &record.opts.kcore, "copy /proc/kcore"),
- OPT_STRING('u', "uid", &record.opts.target.uid_str, "user",
- "user to profile"),
+ OPT_STRING('u', "uid", &record.uid_str, "user", "user to profile"),
OPT_CALLBACK_NOOPT('b', "branch-any", &record.opts.branch_stack,
"branch any", "sample any taken branches",
@@ -3473,7 +3552,7 @@ static struct option __record_options[] = {
"sample selected machine registers on interrupt,"
" use '-I?' to list register names", parse_intr_regs),
OPT_CALLBACK_OPTARG(0, "user-regs", &record.opts.sample_user_regs, NULL, "any register",
- "sample selected machine registers on interrupt,"
+ "sample selected machine registers in user space,"
" use '--user-regs=?' to list register names", parse_user_regs),
OPT_BOOLEAN(0, "running-time", &record.opts.running_time,
"Record running/enabled time of read (:S) events"),
@@ -3507,8 +3586,8 @@ static struct option __record_options[] = {
"file", "vmlinux pathname"),
OPT_BOOLEAN(0, "buildid-all", &record.buildid_all,
"Record build-id of all DSOs regardless of hits"),
- OPT_BOOLEAN(0, "buildid-mmap", &record.buildid_mmap,
- "Record build-id in map events"),
+ OPT_BOOLEAN_SET(0, "buildid-mmap", &record.buildid_mmap, &record.buildid_mmap_set,
+ "Record build-id in mmap events and skip build-id processing."),
OPT_BOOLEAN(0, "timestamp-filename", &record.timestamp_filename,
"append timestamp to output filename"),
OPT_BOOLEAN(0, "timestamp-boundary", &record.timestamp_boundary,
@@ -3564,6 +3643,11 @@ static struct option __record_options[] = {
"write collected trace data into several data files using parallel threads",
record__parse_threads),
OPT_BOOLEAN(0, "off-cpu", &record.off_cpu, "Enable off-cpu analysis"),
+ OPT_STRING(0, "setup-filter", &record.filter_action, "pin|unpin",
+ "BPF filter action"),
+ OPT_CALLBACK(0, "off-cpu-thresh", &record.opts, "ms",
+ "Dump off-cpu samples if off-cpu time exceeds this threshold (in milliseconds). (Default: 500ms)",
+ record__parse_off_cpu_thresh),
OPT_END()
};
@@ -4017,19 +4101,40 @@ int cmd_record(int argc, const char **argv)
}
- if (rec->buildid_mmap) {
- if (!perf_can_record_build_id()) {
- pr_err("Failed: no support to record build id in mmap events, update your kernel.\n");
+ if (record.latency) {
+ /*
+ * There is no fundamental reason why latency profiling
+ * can't work for system-wide mode, but exact semantics
+ * and details are to be defined.
+ * See the following thread for details:
+ * https://lore.kernel.org/all/Z4XDJyvjiie3howF@google.com/
+ */
+ if (record.opts.target.system_wide) {
+ pr_err("Failed: latency profiling is not supported with system-wide collection.\n");
err = -EINVAL;
goto out_opts;
}
- pr_debug("Enabling build id in mmap2 events.\n");
- /* Enable mmap build id synthesizing. */
- symbol_conf.buildid_mmap2 = true;
+ record.opts.record_switch_events = true;
+ }
+
+ if (!rec->buildid_mmap) {
+ pr_debug("Disabling build id in synthesized mmap2 events.\n");
+ symbol_conf.no_buildid_mmap2 = true;
+ } else if (rec->buildid_mmap_set) {
+ /*
+ * Explicitly passing --buildid-mmap disables buildid processing
+ * and cache generation.
+ */
+ rec->no_buildid = true;
+ }
+ if (rec->buildid_mmap && !perf_can_record_build_id()) {
+ pr_warning("Missing support for build id in kernel mmap events.\n"
+ "Disable this warning with --no-buildid-mmap\n");
+ rec->buildid_mmap = false;
+ }
+ if (rec->buildid_mmap) {
/* Enable perf_event_attr::build_id bit. */
rec->opts.build_id = true;
- /* Disable build id cache. */
- rec->no_buildid = true;
}
if (rec->opts.record_cgroup && !perf_can_record_cgroup()) {
@@ -4093,6 +4198,22 @@ int cmd_record(int argc, const char **argv)
pr_warning("WARNING: --timestamp-filename option is not available in parallel streaming mode.\n");
}
+ if (rec->filter_action) {
+ if (!strcmp(rec->filter_action, "pin"))
+ err = perf_bpf_filter__pin();
+ else if (!strcmp(rec->filter_action, "unpin"))
+ err = perf_bpf_filter__unpin();
+ else {
+ pr_warning("Unknown BPF filter action: %s\n", rec->filter_action);
+ err = -EINVAL;
+ }
+ goto out_opts;
+ }
+
+ /* For backward compatibility, -d implies --mem-info */
+ if (rec->opts.sample_address)
+ rec->opts.sample_data_src = true;
+
/*
* Allow aliases to facilitate the lookup of symbols for address
* filters. Refer to auxtrace_parse_filters().
@@ -4145,9 +4266,7 @@ int cmd_record(int argc, const char **argv)
record.opts.tail_synthesize = true;
if (rec->evlist->core.nr_entries == 0) {
- bool can_profile_kernel = perf_event_paranoid_check(1);
-
- err = parse_event(rec->evlist, can_profile_kernel ? "cycles:P" : "cycles:Pu");
+ err = parse_event(rec->evlist, "cycles:P");
if (err)
goto out;
}
@@ -4161,19 +4280,24 @@ int cmd_record(int argc, const char **argv)
ui__warning("%s\n", errbuf);
}
- err = target__parse_uid(&rec->opts.target);
- if (err) {
- int saved_errno = errno;
+ if (rec->uid_str) {
+ uid_t uid = parse_uid(rec->uid_str);
- target__strerror(&rec->opts.target, err, errbuf, BUFSIZ);
- ui__error("%s", errbuf);
+ if (uid == UINT_MAX) {
+ ui__error("Invalid User: %s", rec->uid_str);
+ err = -EINVAL;
+ goto out;
+ }
+ err = parse_uid_filter(rec->evlist, uid);
+ if (err)
+ goto out;
- err = -saved_errno;
- goto out;
+ /* User ID filtering implies system wide. */
+ rec->opts.target.system_wide = true;
}
- /* Enable ignoring missing threads when -u/-p option is defined. */
- rec->opts.ignore_missing_thread = rec->opts.target.uid != UINT_MAX || rec->opts.target.pid;
+ /* Enable ignoring missing threads when -p option is defined. */
+ rec->opts.ignore_missing_thread = rec->opts.target.pid;
evlist__warn_user_requested_cpus(rec->evlist, rec->opts.target.cpu_list);
@@ -4249,13 +4373,13 @@ int cmd_record(int argc, const char **argv)
err = __cmd_record(&record, argc, argv);
out:
- evlist__delete(rec->evlist);
+ record__free_thread_masks(rec, rec->nr_threads);
+ rec->nr_threads = 0;
symbol__exit();
auxtrace_record__free(rec->itr);
out_opts:
- record__free_thread_masks(rec, rec->nr_threads);
- rec->nr_threads = 0;
evlist__close_control(rec->opts.ctl_fd, rec->opts.ctl_fd_ack, &rec->opts.ctl_fd_close);
+ evlist__delete(rec->evlist);
return err;
}
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index f2ed2b7e80a3..35df04dad2fd 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -31,6 +31,7 @@
#include "util/evsel.h"
#include "util/evswitch.h"
#include "util/header.h"
+#include "util/mem-info.h"
#include "util/session.h"
#include "util/srcline.h"
#include "util/tool.h"
@@ -59,6 +60,7 @@
#include <linux/ctype.h>
#include <signal.h>
#include <linux/bitmap.h>
+#include <linux/list_sort.h>
#include <linux/string.h>
#include <linux/stringify.h>
#include <linux/time64.h>
@@ -68,7 +70,7 @@
#include <linux/mman.h>
#ifdef HAVE_LIBTRACEEVENT
-#include <traceevent/event-parse.h>
+#include <event-parse.h>
#endif
struct report {
@@ -110,6 +112,8 @@ struct report {
u64 nr_entries;
u64 queue_size;
u64 total_cycles;
+ u64 total_samples;
+ u64 singlethreaded_samples;
int socket_filter;
DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
struct branch_type_stat brtype_stat;
@@ -171,7 +175,7 @@ static int hist_iter__report_callback(struct hist_entry_iter *iter,
struct mem_info *mi;
struct branch_info *bi;
- if (!ui__has_annotation() && !rep->symbol_ipc && !rep->data_type)
+ if (!ui__has_annotation() && !rep->symbol_ipc)
return 0;
if (sort__mode == SORT_MODE__BRANCH) {
@@ -184,7 +188,7 @@ static int hist_iter__report_callback(struct hist_entry_iter *iter,
} else if (rep->mem_mode) {
mi = he->mem_info;
- err = addr_map_symbol__inc_samples(&mi->daddr, sample, evsel);
+ err = addr_map_symbol__inc_samples(mem_info__daddr(mi), sample, evsel);
if (err)
goto out;
@@ -261,7 +265,7 @@ static int process_feature_event(struct perf_session *session,
return 0;
}
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -321,14 +325,18 @@ static int process_sample_event(struct perf_tool *tool,
}
if (al.map != NULL)
- map__dso(al.map)->hit = 1;
+ dso__set_hit(map__dso(al.map));
if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
hist__account_cycles(sample->branch_stack, &al, sample,
rep->nonany_branch_mode,
- &rep->total_cycles);
+ &rep->total_cycles, evsel);
}
+ rep->total_samples++;
+ if (al.parallelism == 1)
+ rep->singlethreaded_samples++;
+
ret = hist_entry_iter__add(&iter, &al, rep->max_stack, rep);
if (ret < 0)
pr_debug("problem adding hist entry, skipping event\n");
@@ -337,7 +345,7 @@ out_put:
return ret;
}
-static int process_read_event(struct perf_tool *tool,
+static int process_read_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct evsel *evsel,
@@ -346,11 +354,9 @@ static int process_read_event(struct perf_tool *tool,
struct report *rep = container_of(tool, struct report, tool);
if (rep->show_threads) {
- const char *name = evsel__name(evsel);
int err = perf_read_values_add_value(&rep->show_threads_values,
event->read.pid, event->read.tid,
- evsel->core.idx,
- name,
+ evsel,
event->read.value);
if (err)
@@ -407,7 +413,7 @@ static int report__setup_sample_type(struct report *rep)
/* Silently ignore if callchain is missing */
if (!(sample_type & PERF_SAMPLE_CALLCHAIN)) {
symbol_conf.cumulate_callchain = false;
- perf_hpp__cancel_cumulate();
+ perf_hpp__cancel_cumulate(session->evlist);
}
}
@@ -427,7 +433,7 @@ static int report__setup_sample_type(struct report *rep)
* compatibility, set the bit if it's an old perf data file.
*/
evlist__for_each_entry(session->evlist, evsel) {
- if (strstr(evsel->name, "arm_spe") &&
+ if (strstr(evsel__name(evsel), "arm_spe") &&
!(sample_type & PERF_SAMPLE_DATA_SRC)) {
evsel->core.attr.sample_type |= PERF_SAMPLE_DATA_SRC;
sample_type |= PERF_SAMPLE_DATA_SRC;
@@ -441,7 +447,7 @@ static int report__setup_sample_type(struct report *rep)
}
}
- callchain_param_setup(sample_type, perf_env__arch(&rep->session->header.env));
+ callchain_param_setup(sample_type, perf_env__arch(perf_session__env(rep->session)));
if (rep->stitch_lbr && (callchain_param.record_mode != CALLCHAIN_LBR)) {
ui__warning("Can't find LBR callchain. Switch off --stitch-lbr.\n"
@@ -453,7 +459,7 @@ static int report__setup_sample_type(struct report *rep)
if (!(evlist__combined_branch_type(session->evlist) & PERF_SAMPLE_BRANCH_ANY))
rep->nonany_branch_mode = true;
-#if !defined(HAVE_LIBUNWIND_SUPPORT) && !defined(HAVE_DWARF_SUPPORT)
+#if !defined(HAVE_LIBUNWIND_SUPPORT) && !defined(HAVE_LIBDW_SUPPORT)
if (dwarf_callchain_users) {
ui__warning("Please install libunwind or libdw "
"development packages during the perf build.\n");
@@ -523,7 +529,10 @@ static size_t hists__fprintf_nr_sample_events(struct hists *hists, struct report
if (rep->mem_mode) {
ret += fprintf(fp, "\n# Total weight : %" PRIu64, nr_events);
- ret += fprintf(fp, "\n# Sort order : %s", sort_order ? : default_mem_sort_order);
+ if (sort_order || !field_order) {
+ ret += fprintf(fp, "\n# Sort order : %s",
+ sort_order ? : default_mem_sort_order);
+ }
} else
ret += fprintf(fp, "\n# Event count (approx.): %" PRIu64, nr_events);
@@ -541,7 +550,7 @@ static int evlist__tui_block_hists_browse(struct evlist *evlist, struct report *
evlist__for_each_entry(evlist, pos) {
ret = report__browse_block_hists(&rep->block_reports[i++].hist,
rep->min_percent, pos,
- &rep->session->header.env);
+ perf_session__env(rep->session));
if (ret != 0)
return ret;
}
@@ -563,6 +572,7 @@ static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, c
struct hists *hists = evsel__hists(pos);
const char *evname = evsel__name(pos);
+ i++;
if (symbol_conf.event_group && !evsel__is_group_leader(pos))
continue;
@@ -572,7 +582,14 @@ static int evlist__tty_browse_hists(struct evlist *evlist, struct report *rep, c
hists__fprintf_nr_sample_events(hists, rep, evname, stdout);
if (rep->total_cycles_mode) {
- report__browse_block_hists(&rep->block_reports[i++].hist,
+ char *buf;
+
+ if (!annotation_br_cntr_abbr_list(&buf, pos, true)) {
+ fprintf(stdout, "%s", buf);
+ fprintf(stdout, "#\n");
+ free(buf);
+ }
+ report__browse_block_hists(&rep->block_reports[i - 1].hist,
rep->min_percent, pos, NULL);
continue;
}
@@ -608,7 +625,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
return;
if (kernel_map == NULL ||
- (map__dso(kernel_map)->hit &&
+ (dso__hit(map__dso(kernel_map)) &&
(kernel_kmap->ref_reloc_sym == NULL ||
kernel_kmap->ref_reloc_sym->addr == 0))) {
const char *desc =
@@ -668,7 +685,7 @@ static int report__browse_hists(struct report *rep)
}
ret = evlist__tui_browse_hists(evlist, help, NULL, rep->min_percent,
- &session->header.env, true);
+ perf_session__env(session), true);
/*
* Usually "ret" is the last pressed key, and we only
* care if the key notifies us to switch data file.
@@ -763,7 +780,7 @@ static void report__output_resort(struct report *rep)
ui_progress__finish();
}
-static int count_sample_event(struct perf_tool *tool __maybe_unused,
+static int count_sample_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct perf_sample *sample __maybe_unused,
struct evsel *evsel,
@@ -775,7 +792,7 @@ static int count_sample_event(struct perf_tool *tool __maybe_unused,
return 0;
}
-static int count_lost_samples_event(struct perf_tool *tool,
+static int count_lost_samples_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine __maybe_unused)
@@ -785,22 +802,28 @@ static int count_lost_samples_event(struct perf_tool *tool,
evsel = evlist__id2evsel(rep->session->evlist, sample->id);
if (evsel) {
- hists__inc_nr_lost_samples(evsel__hists(evsel),
- event->lost_samples.lost);
+ struct hists *hists = evsel__hists(evsel);
+ u32 count = event->lost_samples.lost;
+
+ if (event->header.misc & PERF_RECORD_MISC_LOST_SAMPLES_BPF)
+ hists__inc_nr_dropped_samples(hists, count);
+ else
+ hists__inc_nr_lost_samples(hists, count);
}
return 0;
}
-static int process_attr(struct perf_tool *tool __maybe_unused,
+static int process_attr(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct evlist **pevlist);
static void stats_setup(struct report *rep)
{
- memset(&rep->tool, 0, sizeof(rep->tool));
+ perf_tool__init(&rep->tool, /*ordered_events=*/false);
rep->tool.attr = process_attr;
rep->tool.sample = count_sample_event;
rep->tool.lost_samples = count_lost_samples_event;
+ rep->tool.event_update = perf_event__process_event_update;
rep->tool.no_warn = true;
}
@@ -808,15 +831,14 @@ static int stats_print(struct report *rep)
{
struct perf_session *session = rep->session;
- perf_session__fprintf_nr_events(session, stdout, rep->skip_empty);
- evlist__fprintf_nr_events(session->evlist, stdout, rep->skip_empty);
+ perf_session__fprintf_nr_events(session, stdout);
+ evlist__fprintf_nr_events(session->evlist, stdout);
return 0;
}
static void tasks_setup(struct report *rep)
{
- memset(&rep->tool, 0, sizeof(rep->tool));
- rep->tool.ordered_events = true;
+ perf_tool__init(&rep->tool, /*ordered_events=*/true);
if (rep->mmaps_mode) {
rep->tool.mmap = perf_event__process_mmap;
rep->tool.mmap2 = perf_event__process_mmap2;
@@ -828,35 +850,6 @@ static void tasks_setup(struct report *rep)
rep->tool.no_warn = true;
}
-struct task {
- struct thread *thread;
- struct list_head list;
- struct list_head children;
-};
-
-static struct task *tasks_list(struct task *task, struct machine *machine)
-{
- struct thread *parent_thread, *thread = task->thread;
- struct task *parent_task;
-
- /* Already listed. */
- if (!list_empty(&task->list))
- return NULL;
-
- /* Last one in the chain. */
- if (thread__ppid(thread) == -1)
- return task;
-
- parent_thread = machine__find_thread(machine, -1, thread__ppid(thread));
- if (!parent_thread)
- return ERR_PTR(-ENOENT);
-
- parent_task = thread__priv(parent_thread);
- thread__put(parent_thread);
- list_add_tail(&task->list, &parent_task->children);
- return tasks_list(parent_task, machine);
-}
-
struct maps__fprintf_task_args {
int indent;
FILE *fp;
@@ -868,17 +861,24 @@ static int maps__fprintf_task_cb(struct map *map, void *data)
struct maps__fprintf_task_args *args = data;
const struct dso *dso = map__dso(map);
u32 prot = map__prot(map);
+ const struct dso_id *dso_id = dso__id_const(dso);
int ret;
+ char buf[SBUILD_ID_SIZE];
+
+ if (dso_id->mmap2_valid)
+ snprintf(buf, sizeof(buf), "%" PRIu64, dso_id->ino);
+ else
+ build_id__snprintf(&dso_id->build_id, buf, sizeof(buf));
ret = fprintf(args->fp,
- "%*s %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
+ "%*s %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %s %s\n",
args->indent, "", map__start(map), map__end(map),
prot & PROT_READ ? 'r' : '-',
prot & PROT_WRITE ? 'w' : '-',
prot & PROT_EXEC ? 'x' : '-',
map__flags(map) ? 's' : 'p',
map__pgoff(map),
- dso->id.ino, dso->name);
+ buf, dso__name(dso));
if (ret < 0)
return ret;
@@ -900,89 +900,156 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
return args.printed;
}
-static void task__print_level(struct task *task, FILE *fp, int level)
+static int thread_level(struct machine *machine, const struct thread *thread)
{
- struct thread *thread = task->thread;
- struct task *child;
- int comm_indent = fprintf(fp, " %8d %8d %8d |%*s",
- thread__pid(thread), thread__tid(thread),
- thread__ppid(thread), level, "");
+ struct thread *parent_thread;
+ int res;
- fprintf(fp, "%s\n", thread__comm_str(thread));
+ if (thread__tid(thread) <= 0)
+ return 0;
- maps__fprintf_task(thread__maps(thread), comm_indent, fp);
+ if (thread__ppid(thread) <= 0)
+ return 1;
- if (!list_empty(&task->children)) {
- list_for_each_entry(child, &task->children, list)
- task__print_level(child, fp, level + 1);
+ parent_thread = machine__find_thread(machine, -1, thread__ppid(thread));
+ if (!parent_thread) {
+ pr_err("Missing parent thread of %d\n", thread__tid(thread));
+ return 0;
}
+ res = 1 + thread_level(machine, parent_thread);
+ thread__put(parent_thread);
+ return res;
}
-static int tasks_print(struct report *rep, FILE *fp)
+static void task__print_level(struct machine *machine, struct thread *thread, FILE *fp)
{
- struct perf_session *session = rep->session;
- struct machine *machine = &session->machines.host;
- struct task *tasks, *task;
- unsigned int nr = 0, itask = 0, i;
- struct rb_node *nd;
- LIST_HEAD(list);
+ int level = thread_level(machine, thread);
+ int comm_indent = fprintf(fp, " %8d %8d %8d |%*s",
+ thread__pid(thread), thread__tid(thread),
+ thread__ppid(thread), level, "");
- /*
- * No locking needed while accessing machine->threads,
- * because --tasks is single threaded command.
- */
+ fprintf(fp, "%s\n", thread__comm_str(thread));
- /* Count all the threads. */
- for (i = 0; i < THREADS__TABLE_SIZE; i++)
- nr += machine->threads[i].nr;
+ maps__fprintf_task(thread__maps(thread), comm_indent, fp);
+}
- tasks = malloc(sizeof(*tasks) * nr);
- if (!tasks)
- return -ENOMEM;
+/*
+ * Compare two thread list nodes so that the sorted list forms a tree: the first
+ * node is the root; its children are ordered numerically after it. If a child
+ * has children itself then they appear immediately after their parent. For
+ * example, the 4 threads in the order they'd appear in the list:
+ * - init with a TID 1 and a parent of 0
+ * - systemd with a TID 3000 and a parent of init/1
+ * - systemd child thread with TID 4000, the parent is 3000
+ * - NetworkManager is a child of init with a TID of 3500.
+ */
+static int task_list_cmp(void *priv, const struct list_head *la, const struct list_head *lb)
+{
+ struct machine *machine = priv;
+ struct thread_list *task_a = list_entry(la, struct thread_list, list);
+ struct thread_list *task_b = list_entry(lb, struct thread_list, list);
+ struct thread *a = task_a->thread;
+ struct thread *b = task_b->thread;
+ int level_a, level_b, res;
+
+ /* Same thread? */
+ if (thread__tid(a) == thread__tid(b))
+ return 0;
- for (i = 0; i < THREADS__TABLE_SIZE; i++) {
- struct threads *threads = &machine->threads[i];
+ /* Compare a and b to root. */
+ if (thread__tid(a) == 0)
+ return -1;
- for (nd = rb_first_cached(&threads->entries); nd;
- nd = rb_next(nd)) {
- task = tasks + itask++;
+ if (thread__tid(b) == 0)
+ return 1;
- task->thread = rb_entry(nd, struct thread_rb_node, rb_node)->thread;
- INIT_LIST_HEAD(&task->children);
- INIT_LIST_HEAD(&task->list);
- thread__set_priv(task->thread, task);
- }
- }
+ /* If parents match sort by tid. */
+ if (thread__ppid(a) == thread__ppid(b))
+ return thread__tid(a) < thread__tid(b) ? -1 : 1;
/*
- * Iterate every task down to the unprocessed parent
- * and link all in task children list. Task with no
- * parent is added into 'list'.
+ * Lift a and b up the tree so that, if one is a descendant of the other,
+ * the lifted nodes have matching tids; otherwise they share a common parent
+ * and have distinct tids to sort by. First make the depths of the threads match.
*/
- for (itask = 0; itask < nr; itask++) {
- task = tasks + itask;
-
- if (!list_empty(&task->list))
- continue;
-
- task = tasks_list(task, machine);
- if (IS_ERR(task)) {
- pr_err("Error: failed to process tasks\n");
- free(tasks);
- return PTR_ERR(task);
+ level_a = thread_level(machine, a);
+ level_b = thread_level(machine, b);
+ a = thread__get(a);
+ b = thread__get(b);
+ for (int i = level_a; i > level_b; i--) {
+ struct thread *parent = machine__find_thread(machine, -1, thread__ppid(a));
+
+ thread__put(a);
+ if (!parent) {
+ pr_err("Missing parent thread of %d\n", thread__tid(a));
+ thread__put(b);
+ return -1;
}
+ a = parent;
+ }
+ for (int i = level_b; i > level_a; i--) {
+ struct thread *parent = machine__find_thread(machine, -1, thread__ppid(b));
- if (task)
- list_add_tail(&task->list, &list);
+ thread__put(b);
+ if (!parent) {
+ pr_err("Missing parent thread of %d\n", thread__tid(b));
+ thread__put(a);
+ return 1;
+ }
+ b = parent;
+ }
+ /* Search up to a common parent. */
+ while (thread__ppid(a) != thread__ppid(b)) {
+ struct thread *parent;
+
+ parent = machine__find_thread(machine, -1, thread__ppid(a));
+ thread__put(a);
+ if (!parent)
+ pr_err("Missing parent thread of %d\n", thread__tid(a));
+ a = parent;
+ parent = machine__find_thread(machine, -1, thread__ppid(b));
+ thread__put(b);
+ if (!parent)
+ pr_err("Missing parent thread of %d\n", thread__tid(b));
+ b = parent;
+ if (!a || !b) {
+ /* Handle missing parent (unexpected) with some sanity. */
+ thread__put(a);
+ thread__put(b);
+ return !a && !b ? 0 : (!a ? -1 : 1);
+ }
+ }
+ if (thread__tid(a) == thread__tid(b)) {
+ /* a is a child of b or vice-versa, deeper levels appear later. */
+ res = level_a < level_b ? -1 : (level_a > level_b ? 1 : 0);
+ } else {
+ /* Sort by tid now the parent is the same. */
+ res = thread__tid(a) < thread__tid(b) ? -1 : 1;
}
+ thread__put(a);
+ thread__put(b);
+ return res;
+}
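The comparator above orders a flat thread list as a pre-order walk of the process tree by lifting both nodes to equal depth and then to a common parent. A minimal standalone sketch of the same idea on a toy parent-pointer structure (a hypothetical struct node, not the perf thread/machine API):

#include <stdio.h>

struct node {
	int tid;
	struct node *parent;	/* NULL above the root */
};

static int depth(const struct node *n)
{
	int d = 0;

	for (; n->parent; n = n->parent)
		d++;
	return d;
}

/* Negative means a sorts before b. Parents sort before children, siblings
 * sort by tid, and a node's subtree appears immediately after it. */
static int tree_order_cmp(const struct node *a, const struct node *b)
{
	int da = depth(a), db = depth(b), ra = da, rb = db;

	if (a == b)
		return 0;
	/* Lift the deeper node until both sit at the same depth. */
	while (ra > rb) { a = a->parent; ra--; }
	while (rb > ra) { b = b->parent; rb--; }
	if (a == b)			/* one is an ancestor of the other */
		return da < db ? -1 : 1;
	while (a->parent != b->parent) { /* climb to a common parent */
		a = a->parent;
		b = b->parent;
	}
	return a->tid < b->tid ? -1 : 1; /* compare the diverging siblings */
}

int main(void)
{
	/* The example from the comment: init(1), systemd(3000), its child(4000),
	 * NetworkManager(3500) -- expected order 1, 3000, 4000, 3500. */
	struct node init = { 1, NULL };
	struct node systemd = { 3000, &init };
	struct node sysd_child = { 4000, &systemd };
	struct node nm = { 3500, &init };

	printf("%d %d %d\n",
	       tree_order_cmp(&init, &systemd),		/* -1: parent first */
	       tree_order_cmp(&sysd_child, &nm),	/* -1: 3000's subtree first */
	       tree_order_cmp(&nm, &systemd));		/*  1: 3500 after 3000 */
	return 0;
}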
- fprintf(fp, "# %8s %8s %8s %s\n", "pid", "tid", "ppid", "comm");
+static int tasks_print(struct report *rep, FILE *fp)
+{
+ struct machine *machine = &rep->session->machines.host;
+ LIST_HEAD(tasks);
+ int ret;
- list_for_each_entry(task, &list, list)
- task__print_level(task, fp, 0);
+ ret = machine__thread_list(machine, &tasks);
+ if (!ret) {
+ struct thread_list *task;
- free(tasks);
- return 0;
+ list_sort(machine, &tasks, task_list_cmp);
+
+ fprintf(fp, "# %8s %8s %8s %s\n", "pid", "tid", "ppid", "comm");
+
+ list_for_each_entry(task, &tasks, list)
+ task__print_level(machine, task->thread, fp);
+ }
+ thread_list__delete(&tasks);
+ return ret;
}
static int __cmd_report(struct report *rep)
@@ -1028,6 +1095,11 @@ static int __cmd_report(struct report *rep)
return ret;
}
+ /* Don't show Latency column for non-parallel profiles by default. */
+ if (!symbol_conf.prefer_latency && rep->total_samples &&
+ rep->singlethreaded_samples * 100 / rep->total_samples >= 99)
+ perf_hpp__cancel_latency(session->evlist);
+
evlist__check_mem_load_aux(session->evlist);
if (rep->stats_mode)
@@ -1049,10 +1121,7 @@ static int __cmd_report(struct report *rep)
perf_session__fprintf_dsos(session, stdout);
if (dump_trace) {
- perf_session__fprintf_nr_events(session, stdout,
- rep->skip_empty);
- evlist__fprintf_nr_events(session->evlist, stdout,
- rep->skip_empty);
+ stats_print(rep);
return 0;
}
}
@@ -1082,18 +1151,23 @@ static int __cmd_report(struct report *rep)
report__output_resort(rep);
if (rep->total_cycles_mode) {
- int block_hpps[6] = {
+ int nr_hpps = 4;
+ int block_hpps[PERF_HPP_REPORT__BLOCK_MAX_INDEX] = {
PERF_HPP_REPORT__BLOCK_TOTAL_CYCLES_PCT,
PERF_HPP_REPORT__BLOCK_LBR_CYCLES,
PERF_HPP_REPORT__BLOCK_CYCLES_PCT,
PERF_HPP_REPORT__BLOCK_AVG_CYCLES,
- PERF_HPP_REPORT__BLOCK_RANGE,
- PERF_HPP_REPORT__BLOCK_DSO,
};
+ if (session->evlist->nr_br_cntr > 0)
+ block_hpps[nr_hpps++] = PERF_HPP_REPORT__BLOCK_BRANCH_COUNTER;
+
+ block_hpps[nr_hpps++] = PERF_HPP_REPORT__BLOCK_RANGE;
+ block_hpps[nr_hpps++] = PERF_HPP_REPORT__BLOCK_DSO;
+
rep->block_reports = block_info__create_report(session->evlist,
rep->total_cycles,
- block_hpps, 6,
+ block_hpps, nr_hpps,
&rep->nr_block_reports);
if (!rep->block_reports)
return -1;
@@ -1196,10 +1270,12 @@ parse_percent_limit(const struct option *opt, const char *str,
return 0;
}
-static int process_attr(struct perf_tool *tool __maybe_unused,
+static int process_attr(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct evlist **pevlist)
{
+ struct perf_session *session;
+ struct perf_env *env;
u64 sample_type;
int err;
@@ -1212,10 +1288,16 @@ static int process_attr(struct perf_tool *tool __maybe_unused,
* on events sample_type.
*/
sample_type = evlist__combined_sample_type(*pevlist);
- callchain_param_setup(sample_type, perf_env__arch((*pevlist)->env));
+ session = (*pevlist)->session;
+ env = perf_session__env(session);
+ callchain_param_setup(sample_type, perf_env__arch(env));
return 0;
}
+#define CALLCHAIN_BRANCH_SORT_ORDER \
+ "srcline,symbol,dso,callchain_branch_predicted," \
+ "callchain_branch_abort,callchain_branch_cycles"
+
int cmd_report(int argc, const char **argv)
{
struct perf_session *session;
@@ -1235,37 +1317,13 @@ int cmd_report(int argc, const char **argv)
NULL
};
struct report report = {
- .tool = {
- .sample = process_sample_event,
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .comm = perf_event__process_comm,
- .namespaces = perf_event__process_namespaces,
- .cgroup = perf_event__process_cgroup,
- .exit = perf_event__process_exit,
- .fork = perf_event__process_fork,
- .lost = perf_event__process_lost,
- .read = process_read_event,
- .attr = process_attr,
-#ifdef HAVE_LIBTRACEEVENT
- .tracing_data = perf_event__process_tracing_data,
-#endif
- .build_id = perf_event__process_build_id,
- .id_index = perf_event__process_id_index,
- .auxtrace_info = perf_event__process_auxtrace_info,
- .auxtrace = perf_event__process_auxtrace,
- .event_update = perf_event__process_event_update,
- .feature = process_feature_event,
- .ordered_events = true,
- .ordering_requires_timestamps = true,
- },
.max_stack = PERF_MAX_STACK_DEPTH,
.pretty_printing_style = "normal",
.socket_filter = -1,
.skip_empty = true,
};
- char *sort_order_help = sort_help("sort by key(s):");
- char *field_order_help = sort_help("output field(s): overhead period sample ");
+ char *sort_order_help = sort_help("sort by key(s):", SORT_MODE__NORMAL);
+ char *field_order_help = sort_help("output field(s):", SORT_MODE__NORMAL);
const char *disassembler_style = NULL, *objdump_path = NULL, *addr2line_path = NULL;
const struct option options[] = {
OPT_STRING('i', "input", &input_name, "file",
@@ -1357,6 +1415,8 @@ int cmd_report(int argc, const char **argv)
symbol__config_symfs),
OPT_STRING('C', "cpu", &report.cpu_list, "cpu",
"list of cpus to profile"),
+ OPT_STRING(0, "parallelism", &symbol_conf.parallelism_list_str, "parallelism",
+ "only consider these parallelism levels (cpu set format)"),
OPT_BOOLEAN('I', "show-info", &report.show_full_info,
"Display extended information about perf.data file"),
OPT_BOOLEAN(0, "source", &annotate_opts.annotate_src,
@@ -1387,7 +1447,7 @@ int cmd_report(int argc, const char **argv)
OPT_STRING(0, "addr2line", &addr2line_path, "path",
"addr2line binary to use for line numbers"),
OPT_BOOLEAN(0, "demangle", &symbol_conf.demangle,
- "Disable symbol demangling"),
+ "Symbol demangling. Enabled by default, use --no-demangle to disable."),
OPT_BOOLEAN(0, "demangle-kernel", &symbol_conf.demangle_kernel,
"Enable kernel symbol demangling"),
OPT_BOOLEAN(0, "mem-mode", &report.mem_mode, "mem access profile"),
@@ -1410,7 +1470,7 @@ int cmd_report(int argc, const char **argv)
"only show processor socket that match with this filter"),
OPT_BOOLEAN(0, "raw-trace", &symbol_conf.raw_trace,
"Show raw trace event output (do not use print fmt or plugins)"),
- OPT_BOOLEAN(0, "hierarchy", &symbol_conf.report_hierarchy,
+ OPT_BOOLEAN('H', "hierarchy", &symbol_conf.report_hierarchy,
"Show entries in a hierarchy"),
OPT_CALLBACK_DEFAULT(0, "stdio-color", NULL, "mode",
"'always' (default), 'never' or 'auto' only applicable to --stdio mode",
@@ -1433,6 +1493,10 @@ int cmd_report(int argc, const char **argv)
"Disable raw trace ordering"),
OPT_BOOLEAN(0, "skip-empty", &report.skip_empty,
"Do not display empty (or dummy) events in the output"),
+ OPT_BOOLEAN(0, "latency", &symbol_conf.prefer_latency,
+ "Show latency-centric profile rather than the default\n"
+ "\t\t\t CPU-consumption-centric profile\n"
+ "\t\t\t (requires perf record --latency flag)."),
OPT_END()
};
struct perf_data data = {
@@ -1440,6 +1504,7 @@ int cmd_report(int argc, const char **argv)
};
int ret = hists__init();
char sort_tmp[128];
+ bool ordered_events = true;
if (ret < 0)
goto exit;
@@ -1494,7 +1559,7 @@ int cmd_report(int argc, const char **argv)
report.tasks_mode = true;
if (dump_trace && report.disable_order)
- report.tool.ordered_events = false;
+ ordered_events = false;
if (quiet)
perf_quiet_option();
@@ -1519,10 +1584,36 @@ int cmd_report(int argc, const char **argv)
input_name = "perf.data";
}
+repeat:
data.path = input_name;
data.force = symbol_conf.force;
-repeat:
+ symbol_conf.skip_empty = report.skip_empty;
+
+ perf_tool__init(&report.tool, ordered_events);
+ report.tool.sample = process_sample_event;
+ report.tool.mmap = perf_event__process_mmap;
+ report.tool.mmap2 = perf_event__process_mmap2;
+ report.tool.comm = perf_event__process_comm;
+ report.tool.namespaces = perf_event__process_namespaces;
+ report.tool.cgroup = perf_event__process_cgroup;
+ report.tool.exit = perf_event__process_exit;
+ report.tool.fork = perf_event__process_fork;
+ report.tool.context_switch = perf_event__process_switch;
+ report.tool.lost = perf_event__process_lost;
+ report.tool.read = process_read_event;
+ report.tool.attr = process_attr;
+#ifdef HAVE_LIBTRACEEVENT
+ report.tool.tracing_data = perf_event__process_tracing_data;
+#endif
+ report.tool.build_id = perf_event__process_build_id;
+ report.tool.id_index = perf_event__process_id_index;
+ report.tool.auxtrace_info = perf_event__process_auxtrace_info;
+ report.tool.auxtrace = perf_event__process_auxtrace;
+ report.tool.event_update = perf_event__process_event_update;
+ report.tool.feature = process_feature_event;
+ report.tool.ordering_requires_timestamps = true;
+
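builtin-report now follows the same pattern as stats_setup()/tasks_setup() above: initialize every callback to a default via perf_tool__init() and override only the handlers the command actually needs. A standalone sketch of that init-then-override style on a toy callback struct (not the real struct perf_tool):

#include <stdio.h>

struct tool {
	int (*sample)(const struct tool *tool, const char *ev);
	int (*comm)(const struct tool *tool, const char *ev);
	int ordered_events;
};

static int default_cb(const struct tool *tool, const char *ev)
{
	(void)tool;
	printf("default handler: %s\n", ev);
	return 0;
}

static void tool_init(struct tool *t, int ordered_events)
{
	/* Every callback gets a sane default so nothing is left NULL. */
	t->sample = default_cb;
	t->comm = default_cb;
	t->ordered_events = ordered_events;
}

static int my_sample(const struct tool *tool, const char *ev)
{
	(void)tool;
	printf("report sample handler: %s\n", ev);
	return 0;
}

int main(void)
{
	struct tool t;

	tool_init(&t, /*ordered_events=*/1);
	t.sample = my_sample;			/* override only what this command needs */
	t.sample(&t, "PERF_RECORD_SAMPLE");
	t.comm(&t, "PERF_RECORD_COMM");
	return 0;
}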
session = perf_session__new(&data, &report.tool);
if (IS_ERR(session)) {
ret = PTR_ERR(session);
@@ -1582,7 +1673,7 @@ repeat:
symbol_conf.use_callchain = true;
callchain_register_param(&callchain_param);
if (sort_order == NULL)
- sort_order = "srcline,symbol,dso";
+ sort_order = CALLCHAIN_BRANCH_SORT_ORDER;
}
if (report.mem_mode) {
@@ -1595,16 +1686,10 @@ repeat:
}
if (symbol_conf.report_hierarchy) {
- /* disable incompatible options */
- symbol_conf.cumulate_callchain = false;
-
- if (field_order) {
- pr_err("Error: --hierarchy and --fields options cannot be used together\n");
- parse_options_usage(report_usage, options, "F", 1);
- parse_options_usage(NULL, options, "hierarchy", 0);
- goto error;
- }
-
+ /*
+ * The hist entries in hierarchy are added during the collapse
+ * phase. Let's enable it even if no sort keys require it.
+ */
perf_hpp_list.need_collapse = true;
}
@@ -1644,7 +1729,10 @@ repeat:
report.data_type = true;
annotate_opts.annotate_src = false;
-#ifndef HAVE_DWARF_GETLOCATIONS_SUPPORT
+ /* disable incompatible options */
+ symbol_conf.cumulate_callchain = false;
+
+#ifndef HAVE_LIBDW_SUPPORT
pr_err("Error: Data type profiling is disabled due to missing DWARF support\n");
goto error;
#endif
@@ -1655,26 +1743,54 @@ repeat:
else
use_browser = 0;
- if (sort_order && strstr(sort_order, "ipc")) {
- parse_options_usage(report_usage, options, "s", 1);
- goto error;
+ if (report.data_type && use_browser == 1) {
+ symbol_conf.annotate_data_member = true;
+ symbol_conf.annotate_data_sample = true;
+ }
+
+ symbol_conf.enable_latency = true;
+ if (report.disable_order || !perf_session__has_switch_events(session)) {
+ if (symbol_conf.parallelism_list_str ||
+ symbol_conf.prefer_latency ||
+ (sort_order && (strstr(sort_order, "latency") ||
+ strstr(sort_order, "parallelism"))) ||
+ (field_order && (strstr(field_order, "latency") ||
+ strstr(field_order, "parallelism")))) {
+ if (report.disable_order)
+ ui__error("Use of latency profile or parallelism is incompatible with --disable-order.\n");
+ else
+ ui__error("Use of latency profile or parallelism requires --latency flag during record.\n");
+ return -1;
+ }
+ /*
+ * If user did not ask for anything related to
+ * latency/parallelism explicitly, just don't show it.
+ */
+ symbol_conf.enable_latency = false;
}
- if (sort_order && strstr(sort_order, "symbol")) {
- if (sort__mode == SORT_MODE__BRANCH) {
- snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
- sort_order, "ipc_lbr");
- report.symbol_ipc = true;
- } else {
- snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
- sort_order, "ipc_null");
+ if (last_key != K_SWITCH_INPUT_DATA) {
+ if (sort_order && strstr(sort_order, "ipc")) {
+ parse_options_usage(report_usage, options, "s", 1);
+ goto error;
}
- sort_order = sort_tmp;
+ if (sort_order && strstr(sort_order, "symbol")) {
+ if (sort__mode == SORT_MODE__BRANCH) {
+ snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
+ sort_order, "ipc_lbr");
+ report.symbol_ipc = true;
+ } else {
+ snprintf(sort_tmp, sizeof(sort_tmp), "%s,%s",
+ sort_order, "ipc_null");
+ }
+
+ sort_order = sort_tmp;
+ }
}
if ((last_key != K_SWITCH_INPUT_DATA && last_key != K_RELOAD) &&
- (setup_sorting(session->evlist) < 0)) {
+ (setup_sorting(session->evlist, perf_session__env(session)) < 0)) {
if (sort_order)
parse_options_usage(report_usage, options, "s", 1);
if (field_order)
@@ -1730,7 +1846,7 @@ repeat:
annotation_config__init();
}
- if (symbol__init(&session->header.env) < 0)
+ if (symbol__init(perf_session__env(session)) < 0)
goto error;
if (report.time_str) {
@@ -1762,10 +1878,17 @@ repeat:
if (ret == K_SWITCH_INPUT_DATA || ret == K_RELOAD) {
perf_session__delete(session);
last_key = K_SWITCH_INPUT_DATA;
+ /*
+ * Reset this to support switching between data with and without callchains;
+ * report__setup_sample_type() will update it properly.
+ */
+ symbol_conf.use_callchain = false;
goto repeat;
} else
ret = 0;
+ if (!use_browser && (verbose > 2 || debug_kmaps))
+ perf_session__dump_kmaps(session);
error:
if (report.ptime_range) {
itrace_synth_opts__clear_time_range(&itrace_synth_opts);
diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
index dd6065afbbaf..eca3b1c58c4b 100644
--- a/tools/perf/builtin-sched.c
+++ b/tools/perf/builtin-sched.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include "builtin.h"
+#include "perf.h"
#include "perf-sys.h"
#include "util/cpumap.h"
@@ -51,6 +52,7 @@
#define COMM_LEN 20
#define SYM_LEN 129
#define MAX_PID 1024000
+#define MAX_PRIO 140
static const char *cpu_list;
static DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
@@ -67,7 +69,6 @@ struct task_desc {
struct sched_atom **atoms;
pthread_t thread;
- sem_t sleep_sem;
sem_t ready_for_work;
sem_t work_done_sem;
@@ -79,12 +80,10 @@ enum sched_event_type {
SCHED_EVENT_RUN,
SCHED_EVENT_SLEEP,
SCHED_EVENT_WAKEUP,
- SCHED_EVENT_MIGRATION,
};
struct sched_atom {
enum sched_event_type type;
- int specific_wait;
u64 timestamp;
u64 duration;
unsigned long nr;
@@ -92,24 +91,6 @@ struct sched_atom {
struct task_desc *wakee;
};
-#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKWP"
-
-/* task state bitmask, copied from include/linux/sched.h */
-#define TASK_RUNNING 0
-#define TASK_INTERRUPTIBLE 1
-#define TASK_UNINTERRUPTIBLE 2
-#define __TASK_STOPPED 4
-#define __TASK_TRACED 8
-/* in tsk->exit_state */
-#define EXIT_DEAD 16
-#define EXIT_ZOMBIE 32
-#define EXIT_TRACE (EXIT_ZOMBIE | EXIT_DEAD)
-/* in tsk->state again */
-#define TASK_DEAD 64
-#define TASK_WAKEKILL 128
-#define TASK_WAKING 256
-#define TASK_PARKED 512
-
enum thread_state {
THREAD_SLEEPING = 0,
THREAD_WAIT_CPU,
@@ -174,6 +155,9 @@ struct perf_sched_map {
const char *color_pids_str;
struct perf_cpu_map *color_cpus;
const char *color_cpus_str;
+ const char *task_name;
+ struct strlist *task_names;
+ bool fuzzy;
struct perf_cpu_map *cpus;
const char *cpus_str;
};
@@ -195,6 +179,7 @@ struct perf_sched {
struct perf_cpu max_cpu;
u32 *curr_pid;
struct thread **curr_thread;
+ struct thread **curr_out_thread;
char next_shortname1;
char next_shortname2;
unsigned int replay_repeat;
@@ -241,12 +226,16 @@ struct perf_sched {
bool show_wakeups;
bool show_next;
bool show_migrations;
+ bool pre_migrations;
bool show_state;
+ bool show_prio;
u64 skipped_samples;
const char *time_str;
struct perf_time_interval ptime;
struct perf_time_interval hist_time;
volatile bool thread_funcs_exit;
+ const char *prio_str;
+ DECLARE_BITMAP(prio_bitmap, MAX_PRIO);
};
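prio_bitmap above holds the set of priorities selected by the new priority filter; timehist later skips samples whose priority bit is clear. A minimal plain-C stand-in for that bitmap (hand-rolled set/test helpers instead of DECLARE_BITMAP()/test_bit() from tools/include):

#include <stdbool.h>
#include <stdio.h>

#define MAX_PRIO	140
#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define BITMAP_LONGS	((MAX_PRIO + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long prio_bitmap[BITMAP_LONGS];

static void prio_set(int prio)
{
	prio_bitmap[prio / BITS_PER_LONG] |= 1UL << (prio % BITS_PER_LONG);
}

static bool prio_test(int prio)
{
	return prio_bitmap[prio / BITS_PER_LONG] & (1UL << (prio % BITS_PER_LONG));
}

int main(void)
{
	/* e.g. a "--prio 120" style filter: keep only priority-120 samples */
	prio_set(120);
	printf("prio 120: %s\n", prio_test(120) ? "keep" : "skip");
	printf("prio 100: %s\n", prio_test(100) ? "keep" : "skip");
	return 0;
}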
/* per thread run time data */
@@ -257,7 +246,9 @@ struct thread_runtime {
u64 dt_iowait; /* time between CPU access by iowait (off cpu) */
u64 dt_preempt; /* time between CPU access by preempt (off cpu) */
u64 dt_delay; /* time between wakeup and sched-in */
+ u64 dt_pre_mig; /* time between migration and wakeup */
u64 ready_to_run; /* time of wakeup */
+ u64 migrated; /* time when a thread is migrated */
struct stats run_stats;
u64 total_run_time;
@@ -265,13 +256,16 @@ struct thread_runtime {
u64 total_iowait_time;
u64 total_preempt_time;
u64 total_delay_time;
+ u64 total_pre_mig_time;
- int last_state;
+ char last_state;
char shortname[3];
bool comm_changed;
u64 migrations;
+
+ int prio;
};
/* per event run time data */
@@ -429,14 +423,13 @@ static void add_sched_event_wakeup(struct perf_sched *sched, struct task_desc *t
wakee_event->wait_sem = zalloc(sizeof(*wakee_event->wait_sem));
sem_init(wakee_event->wait_sem, 0, 0);
- wakee_event->specific_wait = 1;
event->wait_sem = wakee_event->wait_sem;
sched->nr_wakeup_events++;
}
static void add_sched_event_sleep(struct perf_sched *sched, struct task_desc *task,
- u64 timestamp, u64 task_state __maybe_unused)
+ u64 timestamp)
{
struct sched_atom *event = get_new_event(task, timestamp);
@@ -476,7 +469,7 @@ static struct task_desc *register_pid(struct perf_sched *sched,
* every task starts in sleeping state - this gets ignored
* if there's no wakeup pointing to this sleep state:
*/
- add_sched_event_sleep(sched, task, 0, 0);
+ add_sched_event_sleep(sched, task, 0);
sched->pid_to_task[pid] = task;
sched->nr_tasks++;
@@ -537,8 +530,6 @@ static void perf_sched__process_event(struct perf_sched *sched,
ret = sem_post(atom->wait_sem);
BUG_ON(ret);
break;
- case SCHED_EVENT_MIGRATION:
- break;
default:
BUG_ON(1);
}
@@ -681,7 +672,6 @@ static void create_tasks(struct perf_sched *sched)
parms->task = task = sched->tasks[i];
parms->sched = sched;
parms->fd = self_open_counters(sched, i);
- sem_init(&task->sleep_sem, 0, 0);
sem_init(&task->ready_for_work, 0, 0);
sem_init(&task->work_done_sem, 0, 0);
task->curr_event = 0;
@@ -705,7 +695,6 @@ static void destroy_tasks(struct perf_sched *sched)
task = sched->tasks[i];
err = pthread_join(task->thread, NULL);
BUG_ON(err);
- sem_destroy(&task->sleep_sem);
sem_destroy(&task->ready_for_work);
sem_destroy(&task->work_done_sem);
}
@@ -759,7 +748,6 @@ static void wait_for_tasks(struct perf_sched *sched)
for (i = 0; i < sched->nr_tasks; i++) {
task = sched->tasks[i];
- sem_init(&task->sleep_sem, 0, 0);
task->curr_event = 0;
}
}
@@ -860,7 +848,6 @@ static int replay_switch_event(struct perf_sched *sched,
*next_comm = evsel__strval(evsel, sample, "next_comm");
const u32 prev_pid = evsel__intval(evsel, sample, "prev_pid"),
next_pid = evsel__intval(evsel, sample, "next_pid");
- const u64 prev_state = evsel__intval(evsel, sample, "prev_state");
struct task_desc *prev, __maybe_unused *next;
u64 timestamp0, timestamp = sample->time;
int cpu = sample->cpu;
@@ -892,7 +879,7 @@ static int replay_switch_event(struct perf_sched *sched,
sched->cpu_last_switched[cpu] = timestamp;
add_sched_event_run(sched, prev, timestamp, delta);
- add_sched_event_sleep(sched, prev, timestamp, prev_state);
+ add_sched_event_sleep(sched, prev, timestamp);
return 0;
}
@@ -934,6 +921,11 @@ struct sort_dimension {
struct list_head list;
};
+static inline void init_prio(struct thread_runtime *r)
+{
+ r->prio = -1;
+}
+
/*
* handle runtime stats saved per thread
*/
@@ -946,6 +938,7 @@ static struct thread_runtime *thread__init_runtime(struct thread *thread)
return NULL;
init_stats(&r->run_stats);
+ init_prio(r);
thread__set_priv(thread, r);
return r;
@@ -1001,7 +994,7 @@ thread_atoms_search(struct rb_root_cached *root, struct thread *thread,
else if (cmp < 0)
node = node->rb_right;
else {
- BUG_ON(thread != atoms->thread);
+ BUG_ON(!RC_CHK_EQUAL(thread, atoms->thread));
return atoms;
}
}
@@ -1050,13 +1043,6 @@ static int thread_atoms_insert(struct perf_sched *sched, struct thread *thread)
return 0;
}
-static char sched_out_state(u64 prev_state)
-{
- const char *str = TASK_STATE_TO_CHAR_STR;
-
- return str[prev_state];
-}
-
static int
add_sched_out_event(struct work_atoms *atoms,
char run_state,
@@ -1125,6 +1111,21 @@ add_sched_in_event(struct work_atoms *atoms, u64 timestamp)
atoms->nb_atoms++;
}
+static void free_work_atoms(struct work_atoms *atoms)
+{
+ struct work_atom *atom, *tmp;
+
+ if (atoms == NULL)
+ return;
+
+ list_for_each_entry_safe(atom, tmp, &atoms->work_list, list) {
+ list_del(&atom->list);
+ free(atom);
+ }
+ thread__zput(atoms->thread);
+ free(atoms);
+}
+
static int latency_switch_event(struct perf_sched *sched,
struct evsel *evsel,
struct perf_sample *sample,
@@ -1132,7 +1133,7 @@ static int latency_switch_event(struct perf_sched *sched,
{
const u32 prev_pid = evsel__intval(evsel, sample, "prev_pid"),
next_pid = evsel__intval(evsel, sample, "next_pid");
- const u64 prev_state = evsel__intval(evsel, sample, "prev_state");
+ const char prev_state = evsel__taskstate(evsel, sample, "prev_state");
struct work_atoms *out_events, *in_events;
struct thread *sched_out, *sched_in;
u64 timestamp0, timestamp = sample->time;
@@ -1168,7 +1169,7 @@ static int latency_switch_event(struct perf_sched *sched,
goto out_put;
}
}
- if (add_sched_out_event(out_events, sched_out_state(prev_state), timestamp))
+ if (add_sched_out_event(out_events, prev_state, timestamp))
return -1;
in_events = thread_atoms_search(&sched->atom_root, sched_in, &sched->cmp_pid);
@@ -1510,7 +1511,7 @@ again:
}
}
-static int process_sched_wakeup_event(struct perf_tool *tool,
+static int process_sched_wakeup_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1523,7 +1524,7 @@ static int process_sched_wakeup_event(struct perf_tool *tool,
return 0;
}
-static int process_sched_wakeup_ignore(struct perf_tool *tool __maybe_unused,
+static int process_sched_wakeup_ignore(const struct perf_tool *tool __maybe_unused,
struct evsel *evsel __maybe_unused,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -1531,55 +1532,113 @@ static int process_sched_wakeup_ignore(struct perf_tool *tool __maybe_unused,
return 0;
}
-union map_priv {
- void *ptr;
- bool color;
-};
-
static bool thread__has_color(struct thread *thread)
{
- union map_priv priv = {
- .ptr = thread__priv(thread),
- };
-
- return priv.color;
+ return thread__priv(thread) != NULL;
}
static struct thread*
map__findnew_thread(struct perf_sched *sched, struct machine *machine, pid_t pid, pid_t tid)
{
struct thread *thread = machine__findnew_thread(machine, pid, tid);
- union map_priv priv = {
- .color = false,
- };
+ bool color = false;
if (!sched->map.color_pids || !thread || thread__priv(thread))
return thread;
if (thread_map__has(sched->map.color_pids, tid))
- priv.color = true;
+ color = true;
- thread__set_priv(thread, priv.ptr);
+ thread__set_priv(thread, color ? ((void*)1) : NULL);
return thread;
}
+static bool sched_match_task(struct perf_sched *sched, const char *comm_str)
+{
+ bool fuzzy_match = sched->map.fuzzy;
+ struct strlist *task_names = sched->map.task_names;
+ struct str_node *node;
+
+ strlist__for_each_entry(node, task_names) {
+ bool match_found = fuzzy_match ? !!strstr(comm_str, node->s) :
+ !strcmp(comm_str, node->s);
+ if (match_found)
+ return true;
+ }
+
+ return false;
+}
+
+static void print_sched_map(struct perf_sched *sched, struct perf_cpu this_cpu, int cpus_nr,
+ const char *color, bool sched_out)
+{
+ for (int i = 0; i < cpus_nr; i++) {
+ struct perf_cpu cpu = {
+ .cpu = sched->map.comp ? sched->map.comp_cpus[i].cpu : i,
+ };
+ struct thread *curr_thread = sched->curr_thread[cpu.cpu];
+ struct thread *curr_out_thread = sched->curr_out_thread[cpu.cpu];
+ struct thread_runtime *curr_tr;
+ const char *pid_color = color;
+ const char *cpu_color = color;
+ char symbol = ' ';
+ struct thread *thread_to_check = sched_out ? curr_out_thread : curr_thread;
+
+ if (thread_to_check && thread__has_color(thread_to_check))
+ pid_color = COLOR_PIDS;
+
+ if (sched->map.color_cpus && perf_cpu_map__has(sched->map.color_cpus, cpu))
+ cpu_color = COLOR_CPUS;
+
+ if (cpu.cpu == this_cpu.cpu)
+ symbol = '*';
+
+ color_fprintf(stdout, cpu.cpu != this_cpu.cpu ? color : cpu_color, "%c", symbol);
+
+ thread_to_check = sched_out ? sched->curr_out_thread[cpu.cpu] :
+ sched->curr_thread[cpu.cpu];
+
+ if (thread_to_check) {
+ curr_tr = thread__get_runtime(thread_to_check);
+ if (curr_tr == NULL)
+ return;
+
+ if (sched_out) {
+ if (cpu.cpu == this_cpu.cpu)
+ color_fprintf(stdout, color, "- ");
+ else {
+ curr_tr = thread__get_runtime(sched->curr_thread[cpu.cpu]);
+ if (curr_tr != NULL)
+ color_fprintf(stdout, pid_color, "%2s ",
+ curr_tr->shortname);
+ }
+ } else
+ color_fprintf(stdout, pid_color, "%2s ", curr_tr->shortname);
+ } else
+ color_fprintf(stdout, color, " ");
+ }
+}
+
static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
struct perf_sample *sample, struct machine *machine)
{
const u32 next_pid = evsel__intval(evsel, sample, "next_pid");
- struct thread *sched_in;
+ const u32 prev_pid = evsel__intval(evsel, sample, "prev_pid");
+ struct thread *sched_in, *sched_out;
struct thread_runtime *tr;
int new_shortname;
u64 timestamp0, timestamp = sample->time;
s64 delta;
- int i;
struct perf_cpu this_cpu = {
.cpu = sample->cpu,
};
int cpus_nr;
+ int proceed;
bool new_cpu = false;
const char *color = PERF_COLOR_NORMAL;
char stimestamp[32];
+ const char *str;
+ int ret = -1;
BUG_ON(this_cpu.cpu >= MAX_CPUS || this_cpu.cpu < 0);
@@ -1608,19 +1667,23 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
}
sched_in = map__findnew_thread(sched, machine, -1, next_pid);
- if (sched_in == NULL)
- return -1;
+ sched_out = map__findnew_thread(sched, machine, -1, prev_pid);
+ if (sched_in == NULL || sched_out == NULL)
+ goto out;
tr = thread__get_runtime(sched_in);
- if (tr == NULL) {
- thread__put(sched_in);
- return -1;
- }
+ if (tr == NULL)
+ goto out;
+
+ thread__put(sched->curr_thread[this_cpu.cpu]);
+ thread__put(sched->curr_out_thread[this_cpu.cpu]);
sched->curr_thread[this_cpu.cpu] = thread__get(sched_in);
+ sched->curr_out_thread[this_cpu.cpu] = thread__get(sched_out);
- printf(" ");
+ ret = 0;
+ str = thread__comm_str(sched_in);
new_shortname = 0;
if (!tr->shortname[0]) {
if (!strcmp(thread__comm_str(sched_in), "swapper")) {
@@ -1630,7 +1693,7 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
*/
tr->shortname[0] = '.';
tr->shortname[1] = ' ';
- } else {
+ } else if (!sched->map.task_name || sched_match_task(sched, str)) {
tr->shortname[0] = sched->next_shortname1;
tr->shortname[1] = sched->next_shortname2;
@@ -1643,46 +1706,37 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
else
sched->next_shortname2 = '0';
}
+ } else {
+ tr->shortname[0] = '-';
+ tr->shortname[1] = ' ';
}
new_shortname = 1;
}
- for (i = 0; i < cpus_nr; i++) {
- struct perf_cpu cpu = {
- .cpu = sched->map.comp ? sched->map.comp_cpus[i].cpu : i,
- };
- struct thread *curr_thread = sched->curr_thread[cpu.cpu];
- struct thread_runtime *curr_tr;
- const char *pid_color = color;
- const char *cpu_color = color;
-
- if (curr_thread && thread__has_color(curr_thread))
- pid_color = COLOR_PIDS;
-
- if (sched->map.cpus && !perf_cpu_map__has(sched->map.cpus, cpu))
- continue;
-
- if (sched->map.color_cpus && perf_cpu_map__has(sched->map.color_cpus, cpu))
- cpu_color = COLOR_CPUS;
+ if (sched->map.cpus && !perf_cpu_map__has(sched->map.cpus, this_cpu))
+ goto out;
- if (cpu.cpu != this_cpu.cpu)
- color_fprintf(stdout, color, " ");
+ proceed = 0;
+ str = thread__comm_str(sched_in);
+ /*
+ * Check which of sched_in and sched_out matches the passed --task-name
+ * arguments and call the corresponding print_sched_map.
+ */
+ if (sched->map.task_name && !sched_match_task(sched, str)) {
+ if (!sched_match_task(sched, thread__comm_str(sched_out)))
+ goto out;
else
- color_fprintf(stdout, cpu_color, "*");
+ goto sched_out;
- if (sched->curr_thread[cpu.cpu]) {
- curr_tr = thread__get_runtime(sched->curr_thread[cpu.cpu]);
- if (curr_tr == NULL) {
- thread__put(sched_in);
- return -1;
- }
- color_fprintf(stdout, pid_color, "%2s ", curr_tr->shortname);
- } else
- color_fprintf(stdout, color, " ");
+ } else {
+ str = thread__comm_str(sched_out);
+ if (!(sched->map.task_name && !sched_match_task(sched, str)))
+ proceed = 1;
}
- if (sched->map.cpus && !perf_cpu_map__has(sched->map.cpus, this_cpu))
- goto out;
+ printf(" ");
+
+ print_sched_map(sched, this_cpu, cpus_nr, color, false);
timestamp__scnprintf_usec(timestamp, stimestamp, sizeof(stimestamp));
color_fprintf(stdout, color, " %12s secs ", stimestamp);
@@ -1698,17 +1752,38 @@ static int map_switch_event(struct perf_sched *sched, struct evsel *evsel,
}
if (sched->map.comp && new_cpu)
- color_fprintf(stdout, color, " (CPU %d)", this_cpu);
+ color_fprintf(stdout, color, " (CPU %d)", this_cpu.cpu);
+
+ if (proceed != 1) {
+ color_fprintf(stdout, color, "\n");
+ goto out;
+ }
+
+sched_out:
+ if (sched->map.task_name) {
+ tr = thread__get_runtime(sched->curr_out_thread[this_cpu.cpu]);
+ if (strcmp(tr->shortname, "") == 0)
+ goto out;
+
+ if (proceed == 1)
+ color_fprintf(stdout, color, "\n");
+
+ printf(" ");
+ print_sched_map(sched, this_cpu, cpus_nr, color, true);
+ timestamp__scnprintf_usec(timestamp, stimestamp, sizeof(stimestamp));
+ color_fprintf(stdout, color, " %12s secs ", stimestamp);
+ }
-out:
color_fprintf(stdout, color, "\n");
+out:
+ thread__put(sched_out);
thread__put(sched_in);
- return 0;
+ return ret;
}
-static int process_sched_switch_event(struct perf_tool *tool,
+static int process_sched_switch_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1734,7 +1809,7 @@ static int process_sched_switch_event(struct perf_tool *tool,
return err;
}
-static int process_sched_runtime_event(struct perf_tool *tool,
+static int process_sched_runtime_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1747,7 +1822,7 @@ static int process_sched_runtime_event(struct perf_tool *tool,
return 0;
}
-static int perf_sched__process_fork_event(struct perf_tool *tool,
+static int perf_sched__process_fork_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -1764,7 +1839,7 @@ static int perf_sched__process_fork_event(struct perf_tool *tool,
return 0;
}
-static int process_sched_migrate_task_event(struct perf_tool *tool,
+static int process_sched_migrate_task_event(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine)
@@ -1777,12 +1852,12 @@ static int process_sched_migrate_task_event(struct perf_tool *tool,
return 0;
}
-typedef int (*tracepoint_handler)(struct perf_tool *tool,
+typedef int (*tracepoint_handler)(const struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine);
-static int perf_sched__process_tracepoint_sample(struct perf_tool *tool __maybe_unused,
+static int perf_sched__process_tracepoint_sample(const struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct perf_sample *sample,
struct evsel *evsel,
@@ -1798,7 +1873,7 @@ static int perf_sched__process_tracepoint_sample(struct perf_tool *tool __maybe_
return err;
}
-static int perf_sched__process_comm(struct perf_tool *tool __maybe_unused,
+static int perf_sched__process_comm(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -1853,7 +1928,7 @@ static int perf_sched__read_events(struct perf_sched *sched)
return PTR_ERR(session);
}
- symbol__init(&session->header.env);
+ symbol__init(perf_session__env(session));
/* prefer sched_waking if it is captured */
if (evlist__find_tracepoint_by_name(session->evlist, "sched:sched_waking"))
@@ -1949,6 +2024,16 @@ static u64 evsel__get_time(struct evsel *evsel, u32 cpu)
return r->last_time[cpu];
}
+static void timehist__evsel_priv_destructor(void *priv)
+{
+ struct evsel_runtime *r = priv;
+
+ if (r) {
+ free(r->last_time);
+ free(r);
+ }
+}
+
static int comm_width = 30;
static char *timehist_get_commstr(struct thread *thread)
@@ -1974,6 +2059,24 @@ static char *timehist_get_commstr(struct thread *thread)
return str;
}
+/* prio field format: xxx or xxx->yyy */
+#define MAX_PRIO_STR_LEN 8
+static char *timehist_get_priostr(struct evsel *evsel,
+ struct thread *thread,
+ struct perf_sample *sample)
+{
+ static char prio_str[16];
+ int prev_prio = (int)evsel__intval(evsel, sample, "prev_prio");
+ struct thread_runtime *tr = thread__priv(thread);
+
+ if (tr->prio != prev_prio && tr->prio != -1)
+ scnprintf(prio_str, sizeof(prio_str), "%d->%d", tr->prio, prev_prio);
+ else
+ scnprintf(prio_str, sizeof(prio_str), "%d", prev_prio);
+
+ return prio_str;
+}
+
static void timehist_header(struct perf_sched *sched)
{
u32 ncpus = sched->max_cpu.cpu + 1;
@@ -1991,8 +2094,15 @@ static void timehist_header(struct perf_sched *sched)
printf(" ");
}
- printf(" %-*s %9s %9s %9s", comm_width,
- "task name", "wait time", "sch delay", "run time");
+ printf(" %-*s", comm_width, "task name");
+
+ if (sched->show_prio)
+ printf(" %-*s", MAX_PRIO_STR_LEN, "prio");
+
+ printf(" %9s %9s %9s", "wait time", "sch delay", "run time");
+
+ if (sched->pre_migrations)
+ printf(" %9s", "pre-mig time");
if (sched->show_state)
printf(" %s", "state");
@@ -2007,11 +2117,15 @@ static void timehist_header(struct perf_sched *sched)
if (sched->show_cpu_visual)
printf(" %*s ", ncpus, "");
- printf(" %-*s %9s %9s %9s", comm_width,
- "[tid/pid]", "(msec)", "(msec)", "(msec)");
+ printf(" %-*s", comm_width, "[tid/pid]");
- if (sched->show_state)
- printf(" %5s", "");
+ if (sched->show_prio)
+ printf(" %-*s", MAX_PRIO_STR_LEN, "");
+
+ printf(" %9s %9s %9s", "(msec)", "(msec)", "(msec)");
+
+ if (sched->pre_migrations)
+ printf(" %9s", "(msec)");
printf("\n");
@@ -2023,26 +2137,20 @@ static void timehist_header(struct perf_sched *sched)
if (sched->show_cpu_visual)
printf(" %.*s ", ncpus, graph_dotted_line);
- printf(" %.*s %.9s %.9s %.9s", comm_width,
- graph_dotted_line, graph_dotted_line, graph_dotted_line,
- graph_dotted_line);
+ printf(" %.*s", comm_width, graph_dotted_line);
- if (sched->show_state)
- printf(" %.5s", graph_dotted_line);
+ if (sched->show_prio)
+ printf(" %.*s", MAX_PRIO_STR_LEN, graph_dotted_line);
- printf("\n");
-}
+ printf(" %.9s %.9s %.9s", graph_dotted_line, graph_dotted_line, graph_dotted_line);
-static char task_state_char(struct thread *thread, int state)
-{
- static const char state_to_char[] = TASK_STATE_TO_CHAR_STR;
- unsigned bit = state ? ffs(state) : 0;
+ if (sched->pre_migrations)
+ printf(" %.9s", graph_dotted_line);
- /* 'I' for idle */
- if (thread__tid(thread) == 0)
- return 'I';
+ if (sched->show_state)
+ printf(" %.5s", graph_dotted_line);
- return bit < sizeof(state_to_char) - 1 ? state_to_char[bit] : '?';
+ printf("\n");
}
static void timehist_print_sample(struct perf_sched *sched,
@@ -2050,7 +2158,7 @@ static void timehist_print_sample(struct perf_sched *sched,
struct perf_sample *sample,
struct addr_location *al,
struct thread *thread,
- u64 t, int state)
+ u64 t, const char state)
{
struct thread_runtime *tr = thread__priv(thread);
const char *next_comm = evsel__strval(evsel, sample, "next_comm");
@@ -2082,16 +2190,26 @@ static void timehist_print_sample(struct perf_sched *sched,
printf(" ");
}
+ if (!thread__comm_set(thread)) {
+ const char *prev_comm = evsel__strval(evsel, sample, "prev_comm");
+ thread__set_comm(thread, prev_comm, sample->time);
+ }
+
printf(" %-*s ", comm_width, timehist_get_commstr(thread));
+ if (sched->show_prio)
+ printf(" %-*s ", MAX_PRIO_STR_LEN, timehist_get_priostr(evsel, thread, sample));
+
wait_time = tr->dt_sleep + tr->dt_iowait + tr->dt_preempt;
print_sched_time(wait_time, 6);
print_sched_time(tr->dt_delay, 6);
print_sched_time(tr->dt_run, 6);
+ if (sched->pre_migrations)
+ print_sched_time(tr->dt_pre_mig, 6);
if (sched->show_state)
- printf(" %5c ", task_state_char(thread, state));
+ printf(" %5c ", thread__tid(thread) == 0 ? 'I' : state);
if (sched->show_next) {
snprintf(nstr, sizeof(nstr), "next: %s[%d]", next_comm, next_pid);
@@ -2126,18 +2244,21 @@ out:
* last_time = time of last sched change event for current task
* (i.e, time process was last scheduled out)
* ready_to_run = time of wakeup for current task
+ * migrated = time of task migration to another CPU
*
- * -----|------------|------------|------------|------
- * last ready tprev t
+ * -----|-------------|-------------|-------------|-------------|-----
+ * last ready migrated tprev t
* time to run
*
- * |-------- dt_wait --------|
- * |- dt_delay -|-- dt_run --|
+ * |---------------- dt_wait ----------------|
+ * |--------- dt_delay ---------|-- dt_run --|
+ * |- dt_pre_mig -|
*
- * dt_run = run time of current task
- * dt_wait = time between last schedule out event for task and tprev
- * represents time spent off the cpu
- * dt_delay = time between wakeup and schedule-in of task
+ * dt_run = run time of current task
+ * dt_wait = time between last schedule out event for task and tprev
+ * represents time spent off the cpu
+ * dt_delay = time between wakeup and schedule-in of task
+ * dt_pre_mig = time between wakeup and migration to another CPU
*/
static void timehist_update_runtime_stats(struct thread_runtime *r,
@@ -2148,6 +2269,7 @@ static void timehist_update_runtime_stats(struct thread_runtime *r,
r->dt_iowait = 0;
r->dt_preempt = 0;
r->dt_run = 0;
+ r->dt_pre_mig = 0;
if (tprev) {
r->dt_run = t - tprev;
@@ -2156,6 +2278,9 @@ static void timehist_update_runtime_stats(struct thread_runtime *r,
pr_debug("time travel: wakeup time for task > previous sched_switch event\n");
else
r->dt_delay = tprev - r->ready_to_run;
+
+ if ((r->migrated > r->ready_to_run) && (r->migrated < tprev))
+ r->dt_pre_mig = r->migrated - r->ready_to_run;
}
if (r->last_time > tprev)
@@ -2163,9 +2288,9 @@ static void timehist_update_runtime_stats(struct thread_runtime *r,
else if (r->last_time) {
u64 dt_wait = tprev - r->last_time;
- if (r->last_state == TASK_RUNNING)
+ if (r->last_state == 'R')
r->dt_preempt = dt_wait;
- else if (r->last_state == TASK_UNINTERRUPTIBLE)
+ else if (r->last_state == 'D')
r->dt_iowait = dt_wait;
else
r->dt_sleep = dt_wait;
@@ -2179,13 +2304,14 @@ static void timehist_update_runtime_stats(struct thread_runtime *r,
r->total_sleep_time += r->dt_sleep;
r->total_iowait_time += r->dt_iowait;
r->total_preempt_time += r->dt_preempt;
+ r->total_pre_mig_time += r->dt_pre_mig;
}
static bool is_idle_sample(struct perf_sample *sample,
struct evsel *evsel)
{
/* pid 0 == swapper == idle task */
- if (strcmp(evsel__name(evsel), "sched:sched_switch") == 0)
+ if (evsel__name_is(evsel, "sched:sched_switch"))
return evsel__intval(evsel, sample, "prev_pid") == 0;
return sample->pid == 0;
@@ -2206,8 +2332,10 @@ static void save_task_callchain(struct perf_sched *sched,
return;
}
- if (!sched->show_callchain || sample->callchain == NULL)
+ if (!sched->show_callchain || sample->callchain == NULL) {
+ thread__put(thread);
return;
+ }
cursor = get_tls_callchain_cursor();
@@ -2216,10 +2344,12 @@ static void save_task_callchain(struct perf_sched *sched,
if (verbose > 0)
pr_err("Failed to resolve callchain. Skipping\n");
+ thread__put(thread);
return;
}
callchain_cursor_commit(cursor);
+ thread__put(thread);
while (true) {
struct callchain_cursor_node *node;
@@ -2251,6 +2381,7 @@ static int init_idle_thread(struct thread *thread)
if (itr == NULL)
return -ENOMEM;
+ init_prio(&itr->tr);
init_stats(&itr->tr.run_stats);
callchain_init(&itr->callchain);
callchain_cursor_reset(&itr->cursor);
@@ -2295,8 +2426,17 @@ static void free_idle_threads(void)
return;
for (i = 0; i < idle_max_cpu; ++i) {
- if ((idle_threads[i]))
- thread__delete(idle_threads[i]);
+ struct thread *idle = idle_threads[i];
+
+ if (idle) {
+ struct idle_thread_runtime *itr;
+
+ itr = thread__priv(idle);
+ if (itr)
+ thread__put(itr->last_thread);
+
+ thread__delete(idle);
+ }
}
free(idle_threads);
@@ -2333,7 +2473,7 @@ static struct thread *get_idle_thread(int cpu)
}
}
- return idle_threads[cpu];
+ return thread__get(idle_threads[cpu]);
}
static void save_idle_callchain(struct perf_sched *sched,
@@ -2388,7 +2528,8 @@ static struct thread *timehist_get_thread(struct perf_sched *sched,
if (itr == NULL)
return NULL;
- itr->last_thread = thread;
+ thread__put(itr->last_thread);
+ itr->last_thread = thread__get(thread);
/* copy task callchain when entering to idle */
if (evsel__intval(evsel, sample, "next_pid") == 0)
@@ -2405,14 +2546,35 @@ static bool timehist_skip_sample(struct perf_sched *sched,
struct perf_sample *sample)
{
bool rc = false;
+ int prio = -1;
+ struct thread_runtime *tr = NULL;
if (thread__is_filtered(thread)) {
rc = true;
sched->skipped_samples++;
}
+ if (sched->prio_str) {
+ /*
+ * Because a task's priority may change while it runs, first read the
+ * priority saved at its previous sched_in event. If no previous sched_in
+ * was saved, fall back to the priority carried by the current task's
+ * sched_out event.
+ */
+ tr = thread__get_runtime(thread);
+ if (tr && tr->prio != -1)
+ prio = tr->prio;
+ else if (evsel__name_is(evsel, "sched:sched_switch"))
+ prio = evsel__intval(evsel, sample, "prev_prio");
+
+ if (prio != -1 && !test_bit(prio, sched->prio_bitmap)) {
+ rc = true;
+ sched->skipped_samples++;
+ }
+ }
+
if (sched->idle_hist) {
- if (strcmp(evsel__name(evsel), "sched:sched_switch"))
+ if (!evsel__name_is(evsel, "sched:sched_switch"))
rc = true;
else if (evsel__intval(evsel, sample, "prev_pid") != 0 &&
evsel__intval(evsel, sample, "next_pid") != 0)
@@ -2438,6 +2600,7 @@ static void timehist_print_wakeup_event(struct perf_sched *sched,
/* show wakeup unless both awakee and awaker are filtered */
if (timehist_skip_sample(sched, thread, evsel, sample) &&
timehist_skip_sample(sched, awakened, evsel, sample)) {
+ thread__put(thread);
return;
}
@@ -2454,9 +2617,11 @@ static void timehist_print_wakeup_event(struct perf_sched *sched,
printf("awakened: %s", timehist_get_commstr(awakened));
printf("\n");
+
+ thread__put(thread);
}
-static int timehist_sched_wakeup_ignore(struct perf_tool *tool __maybe_unused,
+static int timehist_sched_wakeup_ignore(const struct perf_tool *tool __maybe_unused,
union perf_event *event __maybe_unused,
struct evsel *evsel __maybe_unused,
struct perf_sample *sample __maybe_unused,
@@ -2465,7 +2630,7 @@ static int timehist_sched_wakeup_ignore(struct perf_tool *tool __maybe_unused,
return 0;
}
-static int timehist_sched_wakeup_event(struct perf_tool *tool,
+static int timehist_sched_wakeup_event(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct evsel *evsel,
struct perf_sample *sample,
@@ -2482,8 +2647,10 @@ static int timehist_sched_wakeup_event(struct perf_tool *tool,
return -1;
tr = thread__get_runtime(thread);
- if (tr == NULL)
+ if (tr == NULL) {
+ thread__put(thread);
return -1;
+ }
if (tr->ready_to_run == 0)
tr->ready_to_run = sample->time;
@@ -2493,6 +2660,7 @@ static int timehist_sched_wakeup_event(struct perf_tool *tool,
!perf_time__skip_sample(&sched->ptime, sample->time))
timehist_print_wakeup_event(sched, evsel, sample, machine, thread);
+ thread__put(thread);
return 0;
}
@@ -2520,6 +2688,7 @@ static void timehist_print_migration_event(struct perf_sched *sched,
if (timehist_skip_sample(sched, thread, evsel, sample) &&
timehist_skip_sample(sched, migrated, evsel, sample)) {
+ thread__put(thread);
return;
}
@@ -2547,9 +2716,10 @@ static void timehist_print_migration_event(struct perf_sched *sched,
printf(" cpu %d => %d", ocpu, dcpu);
printf("\n");
+ thread__put(thread);
}
-static int timehist_migrate_task_event(struct perf_tool *tool,
+static int timehist_migrate_task_event(const struct perf_tool *tool,
union perf_event *event __maybe_unused,
struct evsel *evsel,
struct perf_sample *sample,
@@ -2566,18 +2736,49 @@ static int timehist_migrate_task_event(struct perf_tool *tool,
return -1;
tr = thread__get_runtime(thread);
- if (tr == NULL)
+ if (tr == NULL) {
+ thread__put(thread);
return -1;
+ }
tr->migrations++;
+ tr->migrated = sample->time;
/* show migrations if requested */
- timehist_print_migration_event(sched, evsel, sample, machine, thread);
+ if (sched->show_migrations) {
+ timehist_print_migration_event(sched, evsel, sample,
+ machine, thread);
+ }
+ thread__put(thread);
return 0;
}
-static int timehist_sched_change_event(struct perf_tool *tool,
+static void timehist_update_task_prio(struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct thread *thread;
+ struct thread_runtime *tr = NULL;
+ const u32 next_pid = evsel__intval(evsel, sample, "next_pid");
+ const u32 next_prio = evsel__intval(evsel, sample, "next_prio");
+
+ if (next_pid == 0)
+ thread = get_idle_thread(sample->cpu);
+ else
+ thread = machine__findnew_thread(machine, -1, next_pid);
+
+ if (thread == NULL)
+ return;
+
+ tr = thread__get_runtime(thread);
+ if (tr != NULL)
+ tr->prio = next_prio;
+
+ thread__put(thread);
+}
+
+static int timehist_sched_change_event(const struct perf_tool *tool,
union perf_event *event,
struct evsel *evsel,
struct perf_sample *sample,
@@ -2586,11 +2787,11 @@ static int timehist_sched_change_event(struct perf_tool *tool,
struct perf_sched *sched = container_of(tool, struct perf_sched, tool);
struct perf_time_interval *ptime = &sched->ptime;
struct addr_location al;
- struct thread *thread;
+ struct thread *thread = NULL;
struct thread_runtime *tr = NULL;
u64 tprev, t = sample->time;
int rc = 0;
- int state = evsel__intval(evsel, sample, "prev_state");
+ const char state = evsel__taskstate(evsel, sample, "prev_state");
addr_location__init(&al);
if (machine__resolve(machine, &al, sample) < 0) {
@@ -2600,6 +2801,9 @@ static int timehist_sched_change_event(struct perf_tool *tool,
goto out;
}
+ if (sched->show_prio || sched->prio_str)
+ timehist_update_task_prio(evsel, sample, machine);
+
thread = timehist_get_thread(sched, sample, machine, evsel);
if (thread == NULL) {
rc = -1;
@@ -2633,9 +2837,12 @@ static int timehist_sched_change_event(struct perf_tool *tool,
* - previous sched event is out of window - we are done
* - sample time is beyond window user cares about - reset it
* to close out stats for time window interest
+	 * - if tprev is 0, i.e. no sched_in event was recorded for the
+	 *   current task, we cannot tell whether that sched_in falls
+	 *   within the time window of interest - ignore it
*/
if (ptime->end) {
- if (tprev > ptime->end)
+ if (!tprev || tprev > ptime->end)
goto out;
if (t > ptime->end)
@@ -2650,8 +2857,6 @@ static int timehist_sched_change_event(struct perf_tool *tool,
struct idle_thread_runtime *itr = (void *)tr;
struct thread_runtime *last_tr;
- BUG_ON(thread__tid(thread) != 0);
-
if (itr->last_thread == NULL)
goto out;
@@ -2677,10 +2882,10 @@ static int timehist_sched_change_event(struct perf_tool *tool,
itr->last_thread = NULL;
}
- }
- if (!sched->summary_only)
- timehist_print_sample(sched, evsel, sample, &al, thread, t, state);
+ if (!sched->summary_only)
+ timehist_print_sample(sched, evsel, sample, &al, thread, t, state);
+ }
out:
if (sched->hist_time.start == 0 && t >= ptime->start)
@@ -2695,17 +2900,23 @@ out:
/* last state is used to determine where to account wait time */
tr->last_state = state;
- /* sched out event for task so reset ready to run time */
- tr->ready_to_run = 0;
+ /* sched out event for task so reset ready to run time and migrated time */
+ if (state == 'R')
+ tr->ready_to_run = t;
+ else
+ tr->ready_to_run = 0;
+
+ tr->migrated = 0;
}
evsel__save_time(evsel, sample->time, sample->cpu);
+ thread__put(thread);
addr_location__exit(&al);
return rc;
}
-static int timehist_sched_switch_event(struct perf_tool *tool,
+static int timehist_sched_switch_event(const struct perf_tool *tool,
union perf_event *event,
struct evsel *evsel,
struct perf_sample *sample,
@@ -2714,7 +2925,7 @@ static int timehist_sched_switch_event(struct perf_tool *tool,
return timehist_sched_change_event(tool, event, evsel, sample, machine);
}
-static int process_lost(struct perf_tool *tool __maybe_unused,
+static int process_lost(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine __maybe_unused)
@@ -2957,13 +3168,13 @@ static void timehist_print_summary(struct perf_sched *sched,
printf(" (x %d)\n", sched->max_cpu.cpu);
}
-typedef int (*sched_handler)(struct perf_tool *tool,
+typedef int (*sched_handler)(const struct perf_tool *tool,
union perf_event *event,
struct evsel *evsel,
struct perf_sample *sample,
struct machine *machine);
-static int perf_timehist__process_sample(struct perf_tool *tool,
+static int perf_timehist__process_sample(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -3000,8 +3211,11 @@ static int timehist_check_attr(struct perf_sched *sched,
return -1;
}
- if (sched->show_callchain && !evsel__has_callchain(evsel)) {
- pr_info("Samples do not have callchains.\n");
+ /* only need to save callchain related to sched_switch event */
+ if (sched->show_callchain &&
+ evsel__name_is(evsel, "sched:sched_switch") &&
+ !evsel__has_callchain(evsel)) {
+ pr_info("Samples of sched_switch event do not have callchains.\n");
sched->show_callchain = 0;
symbol_conf.use_callchain = 0;
}
@@ -3010,6 +3224,47 @@ static int timehist_check_attr(struct perf_sched *sched,
return 0;
}
+static int timehist_parse_prio_str(struct perf_sched *sched)
+{
+ char *p;
+ unsigned long start_prio, end_prio;
+ const char *str = sched->prio_str;
+
+ if (!str)
+ return 0;
+
+ while (isdigit(*str)) {
+ p = NULL;
+ start_prio = strtoul(str, &p, 0);
+ if (start_prio >= MAX_PRIO || (*p != '\0' && *p != ',' && *p != '-'))
+ return -1;
+
+ if (*p == '-') {
+ str = ++p;
+ p = NULL;
+ end_prio = strtoul(str, &p, 0);
+
+ if (end_prio >= MAX_PRIO || (*p != '\0' && *p != ','))
+ return -1;
+
+ if (end_prio < start_prio)
+ return -1;
+ } else {
+ end_prio = start_prio;
+ }
+
+ for (; start_prio <= end_prio; start_prio++)
+ __set_bit(start_prio, sched->prio_bitmap);
+
+ if (*p)
+ ++p;
+
+ str = p;
+ }
+
+ return 0;
+}
+
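
timehist_parse_prio_str() above expands a --prio argument such as "0,10-20,120" into the bitmap used by the filter. A minimal standalone re-implementation of the same comma/range grammar, printing the expanded priorities instead of setting bits (hypothetical names, not the perf code):

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_PRIO 140	/* assumed bound for this sketch */

/* Expand a list such as "0,10-12,120"; returns -1 on malformed input. */
static int parse_prio_list(const char *str)
{
	while (isdigit((unsigned char)*str)) {
		char *p = NULL;
		unsigned long start = strtoul(str, &p, 0);
		unsigned long end = start;

		if (start >= MAX_PRIO || (*p != '\0' && *p != ',' && *p != '-'))
			return -1;

		if (*p == '-') {
			str = ++p;
			end = strtoul(str, &p, 0);
			if (end >= MAX_PRIO || end < start ||
			    (*p != '\0' && *p != ','))
				return -1;
		}

		for (unsigned long prio = start; prio <= end; prio++)
			printf("%lu ", prio);

		if (*p)
			++p;	/* skip the ',' separator */
		str = p;
	}
	printf("\n");
	return 0;
}

int main(void)
{
	return parse_prio_list("0,10-12,120") ? 1 : 0;
}
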
static int perf_sched__timehist(struct perf_sched *sched)
{
struct evsel_str_handler handlers[] = {
@@ -3028,6 +3283,7 @@ static int perf_sched__timehist(struct perf_sched *sched)
};
struct perf_session *session;
+ struct perf_env *env;
struct evlist *evlist;
int err = -1;
@@ -3044,7 +3300,6 @@ static int perf_sched__timehist(struct perf_sched *sched)
sched->tool.tracing_data = perf_event__process_tracing_data;
sched->tool.build_id = perf_event__process_build_id;
- sched->tool.ordered_events = true;
sched->tool.ordering_requires_timestamps = true;
symbol_conf.use_callchain = sched->show_callchain;
@@ -3053,6 +3308,7 @@ static int perf_sched__timehist(struct perf_sched *sched)
if (IS_ERR(session))
return PTR_ERR(session);
+ env = perf_session__env(session);
if (cpu_list) {
err = perf_session__cpu_bitmap(session, cpu_list, cpu_bitmap);
if (err < 0)
@@ -3061,18 +3317,26 @@ static int perf_sched__timehist(struct perf_sched *sched)
evlist = session->evlist;
- symbol__init(&session->header.env);
+ symbol__init(env);
if (perf_time__parse_str(&sched->ptime, sched->time_str) != 0) {
pr_err("Invalid time string\n");
- return -EINVAL;
+ err = -EINVAL;
+ goto out;
}
if (timehist_check_attr(sched, evlist) != 0)
goto out;
+ if (timehist_parse_prio_str(sched) != 0) {
+ pr_err("Invalid prio string\n");
+ goto out;
+ }
+
setup_pager();
+ evsel__set_priv_destructor(timehist__evsel_priv_destructor);
+
/* prefer sched_waking if it is captured */
if (evlist__find_tracepoint_by_name(session->evlist, "sched:sched_waking"))
handlers[1].handler = timehist_sched_wakeup_ignore;
@@ -3087,12 +3351,12 @@ static int perf_sched__timehist(struct perf_sched *sched)
goto out;
}
- if (sched->show_migrations &&
- perf_session__set_tracepoints_handlers(session, migrate_handlers))
+ if ((sched->show_migrations || sched->pre_migrations) &&
+ perf_session__set_tracepoints_handlers(session, migrate_handlers))
goto out;
/* pre-allocate struct for per-CPU idle stats */
- sched->max_cpu.cpu = session->header.env.nr_cpus_online;
+ sched->max_cpu.cpu = env->nr_cpus_online;
if (sched->max_cpu.cpu == 0)
sched->max_cpu.cpu = 4;
if (init_idle_threads(sched->max_cpu.cpu))
@@ -3173,13 +3437,13 @@ static void __merge_work_atoms(struct rb_root_cached *root, struct work_atoms *d
this->total_runtime += data->total_runtime;
this->nb_atoms += data->nb_atoms;
this->total_lat += data->total_lat;
- list_splice(&data->work_list, &this->work_list);
+ list_splice_init(&data->work_list, &this->work_list);
if (this->max_lat < data->max_lat) {
this->max_lat = data->max_lat;
this->max_lat_start = data->max_lat_start;
this->max_lat_end = data->max_lat_end;
}
- zfree(&data);
+ free_work_atoms(data);
return;
}
}
@@ -3204,20 +3468,50 @@ static void perf_sched__merge_lat(struct perf_sched *sched)
}
}
+static int setup_cpus_switch_event(struct perf_sched *sched)
+{
+ unsigned int i;
+
+ sched->cpu_last_switched = calloc(MAX_CPUS, sizeof(*(sched->cpu_last_switched)));
+ if (!sched->cpu_last_switched)
+ return -1;
+
+ sched->curr_pid = malloc(MAX_CPUS * sizeof(*(sched->curr_pid)));
+ if (!sched->curr_pid) {
+ zfree(&sched->cpu_last_switched);
+ return -1;
+ }
+
+ for (i = 0; i < MAX_CPUS; i++)
+ sched->curr_pid[i] = -1;
+
+ return 0;
+}
+
+static void free_cpus_switch_event(struct perf_sched *sched)
+{
+ zfree(&sched->curr_pid);
+ zfree(&sched->cpu_last_switched);
+}
+
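
setup_cpus_switch_event()/free_cpus_switch_event() above factor the per-CPU bookkeeping arrays out of cmd_sched() so each sub-command can allocate and release them on its own error path. A standalone sketch of the same pattern (zero-initialized timestamps, pids preset to -1, rollback on partial allocation failure); the names here are placeholders, not the perf ones:

#include <stdlib.h>

#define MAX_CPUS 4096	/* assumed bound for this sketch */

struct sched_state {
	unsigned long long *cpu_last_switched;	/* per-CPU last switch timestamp */
	int *curr_pid;				/* per-CPU current pid, -1 = unknown */
};

static int setup_state(struct sched_state *s)
{
	s->cpu_last_switched = calloc(MAX_CPUS, sizeof(*s->cpu_last_switched));
	if (!s->cpu_last_switched)
		return -1;

	s->curr_pid = malloc(MAX_CPUS * sizeof(*s->curr_pid));
	if (!s->curr_pid) {
		free(s->cpu_last_switched);
		s->cpu_last_switched = NULL;
		return -1;
	}

	for (int i = 0; i < MAX_CPUS; i++)
		s->curr_pid[i] = -1;
	return 0;
}

static void free_state(struct sched_state *s)
{
	free(s->curr_pid);
	s->curr_pid = NULL;
	free(s->cpu_last_switched);
	s->cpu_last_switched = NULL;
}

int main(void)
{
	struct sched_state s = { 0 };

	if (setup_state(&s))
		return 1;
	/* ... use s.curr_pid[cpu] / s.cpu_last_switched[cpu] here ... */
	free_state(&s);
	return 0;
}
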
static int perf_sched__lat(struct perf_sched *sched)
{
+ int rc = -1;
struct rb_node *next;
setup_pager();
+ if (setup_cpus_switch_event(sched))
+ return rc;
+
if (perf_sched__read_events(sched))
- return -1;
+ goto out_free_cpus_switch_event;
perf_sched__merge_lat(sched);
perf_sched__sort_lat(sched);
printf("\n -------------------------------------------------------------------------------------------------------------------------------------------\n");
- printf(" Task | Runtime ms | Switches | Avg delay ms | Max delay ms | Max delay start | Max delay end |\n");
+ printf(" Task | Runtime ms | Count | Avg delay ms | Max delay ms | Max delay start | Max delay end |\n");
printf(" -------------------------------------------------------------------------------------------------------------------------------------------\n");
next = rb_first_cached(&sched->sorted_atom_root);
@@ -3228,7 +3522,6 @@ static int perf_sched__lat(struct perf_sched *sched)
work_list = rb_entry(next, struct work_atoms, node);
output_lat_thread(sched, work_list);
next = rb_next(next);
- thread__zput(work_list->thread);
}
printf(" -----------------------------------------------------------------------------------------------------------------\n");
@@ -3240,13 +3533,22 @@ static int perf_sched__lat(struct perf_sched *sched)
print_bad_events(sched);
printf("\n");
- return 0;
+ rc = 0;
+
+ while ((next = rb_first_cached(&sched->sorted_atom_root))) {
+ struct work_atoms *data;
+
+ data = rb_entry(next, struct work_atoms, node);
+ rb_erase_cached(next, &sched->sorted_atom_root);
+ free_work_atoms(data);
+ }
+out_free_cpus_switch_event:
+ free_cpus_switch_event(sched);
+ return rc;
}
static int setup_map_cpus(struct perf_sched *sched)
{
- struct perf_cpu_map *map;
-
sched->max_cpu.cpu = sysconf(_SC_NPROCESSORS_CONF);
if (sched->map.comp) {
@@ -3255,16 +3557,15 @@ static int setup_map_cpus(struct perf_sched *sched)
return -1;
}
- if (!sched->map.cpus_str)
- return 0;
-
- map = perf_cpu_map__new(sched->map.cpus_str);
- if (!map) {
- pr_err("failed to get cpus map from %s\n", sched->map.cpus_str);
- return -1;
+ if (sched->map.cpus_str) {
+ sched->map.cpus = perf_cpu_map__new(sched->map.cpus_str);
+ if (!sched->map.cpus) {
+ pr_err("failed to get cpus map from %s\n", sched->map.cpus_str);
+ zfree(&sched->map.comp_cpus);
+ return -1;
+ }
}
- sched->map.cpus = map;
return 0;
}
@@ -3304,33 +3605,80 @@ static int setup_color_cpus(struct perf_sched *sched)
static int perf_sched__map(struct perf_sched *sched)
{
+ int rc = -1;
+
+ sched->curr_thread = calloc(MAX_CPUS, sizeof(*(sched->curr_thread)));
+ if (!sched->curr_thread)
+ return rc;
+
+ sched->curr_out_thread = calloc(MAX_CPUS, sizeof(*(sched->curr_out_thread)));
+ if (!sched->curr_out_thread)
+ goto out_free_curr_thread;
+
+ if (setup_cpus_switch_event(sched))
+ goto out_free_curr_out_thread;
+
if (setup_map_cpus(sched))
- return -1;
+ goto out_free_cpus_switch_event;
if (setup_color_pids(sched))
- return -1;
+ goto out_put_map_cpus;
if (setup_color_cpus(sched))
- return -1;
+ goto out_put_color_pids;
setup_pager();
if (perf_sched__read_events(sched))
- return -1;
+ goto out_put_color_cpus;
+
+ rc = 0;
print_bad_events(sched);
- return 0;
+
+out_put_color_cpus:
+ perf_cpu_map__put(sched->map.color_cpus);
+
+out_put_color_pids:
+ perf_thread_map__put(sched->map.color_pids);
+
+out_put_map_cpus:
+ zfree(&sched->map.comp_cpus);
+ perf_cpu_map__put(sched->map.cpus);
+
+out_free_cpus_switch_event:
+ free_cpus_switch_event(sched);
+
+out_free_curr_out_thread:
+ for (int i = 0; i < MAX_CPUS; i++)
+ thread__put(sched->curr_out_thread[i]);
+ zfree(&sched->curr_out_thread);
+
+out_free_curr_thread:
+ for (int i = 0; i < MAX_CPUS; i++)
+ thread__put(sched->curr_thread[i]);
+ zfree(&sched->curr_thread);
+ return rc;
}
static int perf_sched__replay(struct perf_sched *sched)
{
+ int ret;
unsigned long i;
+ mutex_init(&sched->start_work_mutex);
+ mutex_init(&sched->work_done_wait_mutex);
+
+ ret = setup_cpus_switch_event(sched);
+ if (ret)
+ goto out_mutex_destroy;
+
calibrate_run_measurement_overhead(sched);
calibrate_sleep_measurement_overhead(sched);
test_calibrations(sched);
- if (perf_sched__read_events(sched))
- return -1;
+ ret = perf_sched__read_events(sched);
+ if (ret)
+ goto out_free_cpus_switch_event;
printf("nr_run_events: %ld\n", sched->nr_run_events);
printf("nr_sleep_events: %ld\n", sched->nr_sleep_events);
@@ -3350,12 +3698,22 @@ static int perf_sched__replay(struct perf_sched *sched)
sched->thread_funcs_exit = false;
create_tasks(sched);
printf("------------------------------------------------------------\n");
+ if (sched->replay_repeat == 0)
+ sched->replay_repeat = UINT_MAX;
+
for (i = 0; i < sched->replay_repeat; i++)
run_one_test(sched);
sched->thread_funcs_exit = true;
destroy_tasks(sched);
- return 0;
+
+out_free_cpus_switch_event:
+ free_cpus_switch_event(sched);
+
+out_mutex_destroy:
+ mutex_destroy(&sched->start_work_mutex);
+ mutex_destroy(&sched->work_done_wait_mutex);
+ return ret;
}
static void setup_sorting(struct perf_sched *sched, const struct option *options,
@@ -3468,14 +3826,6 @@ int cmd_sched(int argc, const char **argv)
{
static const char default_sort_order[] = "avg, max, switch, runtime";
struct perf_sched sched = {
- .tool = {
- .sample = perf_sched__process_tracepoint_sample,
- .comm = perf_sched__process_comm,
- .namespaces = perf_event__process_namespaces,
- .lost = perf_event__process_lost,
- .fork = perf_sched__process_fork_event,
- .ordered_events = true,
- },
.cmp_pid = LIST_HEAD_INIT(sched.cmp_pid),
.sort_list = LIST_HEAD_INIT(sched.sort_list),
.sort_order = default_sort_order,
@@ -3508,7 +3858,7 @@ int cmd_sched(int argc, const char **argv)
};
const struct option replay_options[] = {
OPT_UINTEGER('r', "repeat", &sched.replay_repeat,
- "repeat the workload replay N times (-1: infinite)"),
+ "repeat the workload replay N times (0: infinite)"),
OPT_PARENT(sched_options)
};
const struct option map_options[] = {
@@ -3520,6 +3870,10 @@ int cmd_sched(int argc, const char **argv)
"highlight given CPUs in map"),
OPT_STRING(0, "cpus", &sched.map.cpus_str, "cpus",
"display given CPUs in map"),
+ OPT_STRING(0, "task-name", &sched.map.task_name, "task",
+ "map output only for the given task name(s)."),
+ OPT_BOOLEAN(0, "fuzzy-name", &sched.map.fuzzy,
+ "given command name can be partially matched (fuzzy matching)"),
OPT_PARENT(sched_options)
};
const struct option timehist_options[] = {
@@ -3550,6 +3904,10 @@ int cmd_sched(int argc, const char **argv)
OPT_STRING('t', "tid", &symbol_conf.tid_list_str, "tid[,tid...]",
"analyze events only for given thread id(s)"),
OPT_STRING('C', "cpu", &cpu_list, "cpu", "list of cpus to profile"),
+ OPT_BOOLEAN(0, "show-prio", &sched.show_prio, "Show task priority"),
+ OPT_STRING(0, "prio", &sched.prio_str, "prio",
+ "analyze events only for given task priority(ies)"),
+ OPT_BOOLEAN('P', "pre-migrations", &sched.pre_migrations, "Show pre-migration wait time"),
OPT_PARENT(sched_options)
};
@@ -3590,34 +3948,22 @@ int cmd_sched(int argc, const char **argv)
.switch_event = replay_switch_event,
.fork_event = replay_fork_event,
};
- unsigned int i;
- int ret = 0;
+ int ret;
- mutex_init(&sched.start_work_mutex);
- mutex_init(&sched.work_done_wait_mutex);
- sched.curr_thread = calloc(MAX_CPUS, sizeof(*sched.curr_thread));
- if (!sched.curr_thread) {
- ret = -ENOMEM;
- goto out;
- }
- sched.cpu_last_switched = calloc(MAX_CPUS, sizeof(*sched.cpu_last_switched));
- if (!sched.cpu_last_switched) {
- ret = -ENOMEM;
- goto out;
- }
- sched.curr_pid = malloc(MAX_CPUS * sizeof(*sched.curr_pid));
- if (!sched.curr_pid) {
- ret = -ENOMEM;
- goto out;
- }
- for (i = 0; i < MAX_CPUS; i++)
- sched.curr_pid[i] = -1;
+ perf_tool__init(&sched.tool, /*ordered_events=*/true);
+ sched.tool.sample = perf_sched__process_tracepoint_sample;
+ sched.tool.comm = perf_sched__process_comm;
+ sched.tool.namespaces = perf_event__process_namespaces;
+ sched.tool.lost = perf_event__process_lost;
+ sched.tool.fork = perf_sched__process_fork_event;
argc = parse_options_subcommand(argc, argv, sched_options, sched_subcommands,
sched_usage, PARSE_OPT_STOP_AT_NON_OPTION);
if (!argc)
usage_with_options(sched_usage, sched_options);
+ thread__set_priv_destructor(free);
+
/*
* Aliased to 'perf script' for now:
*/
@@ -3639,6 +3985,15 @@ int cmd_sched(int argc, const char **argv)
argc = parse_options(argc, argv, map_options, map_usage, 0);
if (argc)
usage_with_options(map_usage, map_options);
+
+ if (sched.map.task_name) {
+ sched.map.task_names = strlist__new(sched.map.task_name, NULL);
+ if (sched.map.task_names == NULL) {
+ fprintf(stderr, "Failed to parse task names\n");
+ ret = -1;
+ goto out;
+ }
+ }
}
sched.tp_handler = &map_ops;
setup_sorting(&sched, latency_options, latency_usage);
@@ -3670,20 +4025,15 @@ int cmd_sched(int argc, const char **argv)
goto out;
}
ret = symbol__validate_sym_arguments();
- if (ret)
- goto out;
-
- ret = perf_sched__timehist(&sched);
+ if (!ret)
+ ret = perf_sched__timehist(&sched);
} else {
usage_with_options(sched_usage, sched_options);
}
out:
- free(sched.curr_pid);
- free(sched.cpu_last_switched);
- free(sched.curr_thread);
- mutex_destroy(&sched.start_work_mutex);
- mutex_destroy(&sched.work_done_wait_mutex);
+ /* free usage string allocated by parse_options_subcommand */
+ free((void *)sched_usage[0]);
return ret;
}
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index b1f57401ff23..8124fcb51da9 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -32,14 +32,18 @@
#include "util/time-utils.h"
#include "util/path.h"
#include "util/event.h"
+#include "util/mem-info.h"
#include "ui/ui.h"
#include "print_binary.h"
+#include "print_insn.h"
#include "archinsn.h"
#include <linux/bitmap.h>
+#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/stringify.h>
#include <linux/time64.h>
#include <linux/zalloc.h>
+#include <linux/unaligned.h>
#include <sys/utsname.h>
#include "asm/bug.h"
#include "util/mem-events.h"
@@ -48,6 +52,7 @@
#include <errno.h>
#include <inttypes.h>
#include <signal.h>
+#include <stdio.h>
#include <sys/param.h>
#include <sys/types.h>
#include <sys/stat.h>
@@ -60,11 +65,12 @@
#include "util/record.h"
#include "util/util.h"
#include "util/cgroup.h"
+#include "util/annotate.h"
#include "perf.h"
#include <linux/ctype.h>
#ifdef HAVE_LIBTRACEEVENT
-#include <traceevent/event-parse.h>
+#include <event-parse.h>
#endif
static char const *script_name;
@@ -82,15 +88,12 @@ static bool system_wide;
static bool print_flags;
static const char *cpu_list;
static DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
-static struct perf_stat_config stat_config;
static int max_blocks;
static bool native_arch;
static struct dlfilter *dlfilter;
static int dlargc;
static char **dlargv;
-unsigned int scripting_max_stack = PERF_MAX_STACK_DEPTH;
-
enum perf_output_field {
PERF_OUTPUT_COMM = 1ULL << 0,
PERF_OUTPUT_TID = 1ULL << 1,
@@ -134,6 +137,9 @@ enum perf_output_field {
PERF_OUTPUT_CGROUP = 1ULL << 39,
PERF_OUTPUT_RETIRE_LAT = 1ULL << 40,
PERF_OUTPUT_DSOFF = 1ULL << 41,
+ PERF_OUTPUT_DISASM = 1ULL << 42,
+ PERF_OUTPUT_BRSTACKDISASM = 1ULL << 43,
+ PERF_OUTPUT_BRCNTR = 1ULL << 44,
};
struct perf_script {
@@ -189,6 +195,7 @@ struct output_option {
{.str = "bpf-output", .field = PERF_OUTPUT_BPF_OUTPUT},
{.str = "callindent", .field = PERF_OUTPUT_CALLINDENT},
{.str = "insn", .field = PERF_OUTPUT_INSN},
+ {.str = "disasm", .field = PERF_OUTPUT_DISASM},
{.str = "insnlen", .field = PERF_OUTPUT_INSNLEN},
{.str = "brstackinsn", .field = PERF_OUTPUT_BRSTACKINSN},
{.str = "brstackoff", .field = PERF_OUTPUT_BRSTACKOFF},
@@ -207,6 +214,8 @@ struct output_option {
{.str = "vcpu", .field = PERF_OUTPUT_VCPU},
{.str = "cgroup", .field = PERF_OUTPUT_CGROUP},
{.str = "retire_lat", .field = PERF_OUTPUT_RETIRE_LAT},
+ {.str = "brstackdisasm", .field = PERF_OUTPUT_BRSTACKDISASM},
+ {.str = "brcntr", .field = PERF_OUTPUT_BRCNTR},
};
enum {
@@ -215,6 +224,10 @@ enum {
OUTPUT_TYPE_MAX
};
+// We need to refactor the evsel->priv use in 'perf script' to allow for
+// using that area, that is being used only in some cases.
+#define OUTPUT_TYPE_UNSET -1
+
/* default set to maintain compatibility with current format */
static struct {
bool user_set;
@@ -388,6 +401,24 @@ static inline int output_type(unsigned int type)
return OUTPUT_TYPE_OTHER;
}
+static inline int evsel__output_type(struct evsel *evsel)
+{
+ int type = evsel->script_output_type;
+
+ if (type == OUTPUT_TYPE_UNSET) {
+ type = output_type(evsel->core.attr.type);
+ if (type == OUTPUT_TYPE_OTHER) {
+ struct perf_pmu *pmu = evsel__find_pmu(evsel);
+
+ if (pmu && pmu->is_core)
+ type = PERF_TYPE_RAW;
+ }
+ evsel->script_output_type = type;
+ }
+
+ return type;
+}
+
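
evsel__output_type() above computes the output bucket once per evsel and caches it in evsel->script_output_type, using OUTPUT_TYPE_UNSET as the "not yet computed" sentinel. A small standalone sketch of that memoization pattern (classify() and the struct are stand-ins, not perf types):

#include <stdio.h>

#define TYPE_UNSET (-1)

struct evsel_like {
	int attr_type;
	int cached_output_type;	/* starts as TYPE_UNSET */
};

/* Stand-in for output_type(): map an attribute type to an output bucket. */
static int classify(int attr_type)
{
	return attr_type % 4;
}

static int output_type_cached(struct evsel_like *e)
{
	if (e->cached_output_type == TYPE_UNSET)
		e->cached_output_type = classify(e->attr_type);
	return e->cached_output_type;
}

int main(void)
{
	struct evsel_like e = { .attr_type = 10, .cached_output_type = TYPE_UNSET };

	printf("%d\n", output_type_cached(&e));	/* computed once */
	printf("%d\n", output_type_cached(&e));	/* served from the cache */
	return 0;
}
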
static bool output_set_by_user(void)
{
int j;
@@ -412,13 +443,13 @@ static const char *output_field2str(enum perf_output_field field)
return str;
}
-#define PRINT_FIELD(x) (output[output_type(attr->type)].fields & PERF_OUTPUT_##x)
+#define PRINT_FIELD(x) (output[evsel__output_type(evsel)].fields & PERF_OUTPUT_##x)
static int evsel__do_check_stype(struct evsel *evsel, u64 sample_type, const char *sample_msg,
enum perf_output_field field, bool allow_user_set)
{
struct perf_event_attr *attr = &evsel->core.attr;
- int type = output_type(attr->type);
+ int type = evsel__output_type(evsel);
const char *evname;
if (attr->sample_type & sample_type)
@@ -452,7 +483,6 @@ static int evsel__check_stype(struct evsel *evsel, u64 sample_type, const char *
static int evsel__check_attr(struct evsel *evsel, struct perf_session *session)
{
- struct perf_event_attr *attr = &evsel->core.attr;
bool allow_user_set;
if (evsel__is_dummy_event(evsel))
@@ -507,12 +537,19 @@ static int evsel__check_attr(struct evsel *evsel, struct perf_session *session)
"selected. Hence, no address to lookup the source line number.\n");
return -EINVAL;
}
- if ((PRINT_FIELD(BRSTACKINSN) || PRINT_FIELD(BRSTACKINSNLEN)) && !allow_user_set &&
+ if ((PRINT_FIELD(BRSTACKINSN) || PRINT_FIELD(BRSTACKINSNLEN) || PRINT_FIELD(BRSTACKDISASM))
+ && !allow_user_set &&
!(evlist__combined_branch_type(session->evlist) & PERF_SAMPLE_BRANCH_ANY)) {
pr_err("Display of branch stack assembler requested, but non all-branch filter set\n"
"Hint: run 'perf record -b ...'\n");
return -EINVAL;
}
+ if (PRINT_FIELD(BRCNTR) &&
+ !(evlist__combined_branch_type(session->evlist) & PERF_SAMPLE_BRANCH_COUNTERS)) {
+ pr_err("Display of branch counter requested but it's not enabled\n"
+ "Hint: run 'perf record -j any,counter ...'\n");
+ return -EINVAL;
+ }
if ((PRINT_FIELD(PID) || PRINT_FIELD(TID)) &&
evsel__check_stype(evsel, PERF_SAMPLE_TID, "TID", PERF_OUTPUT_TID|PERF_OUTPUT_PID))
return -EINVAL;
@@ -562,9 +599,9 @@ static int evsel__check_attr(struct evsel *evsel, struct perf_session *session)
return 0;
}
-static void set_print_ip_opts(struct perf_event_attr *attr)
+static void evsel__set_print_ip_opts(struct evsel *evsel)
{
- unsigned int type = output_type(attr->type);
+ unsigned int type = evsel__output_type(evsel);
output[type].print_ip_opts = 0;
if (PRINT_FIELD(IP))
@@ -594,7 +631,7 @@ static struct evsel *find_first_output_type(struct evlist *evlist,
evlist__for_each_entry(evlist, evsel) {
if (evsel__is_dummy_event(evsel))
continue;
- if (output_type(evsel->core.attr.type) == (int)type)
+ if (evsel__output_type(evsel) == (int)type)
return evsel;
}
return NULL;
@@ -636,7 +673,7 @@ static int perf_session__check_output_opt(struct perf_session *session)
if (output[j].fields & PERF_OUTPUT_DSOFF)
output[j].fields |= PERF_OUTPUT_DSO;
- set_print_ip_opts(&evsel->core.attr);
+ evsel__set_print_ip_opts(evsel);
tod |= output[j].fields & PERF_OUTPUT_TOD;
}
@@ -646,7 +683,7 @@ static int perf_session__check_output_opt(struct perf_session *session)
evlist__for_each_entry(session->evlist, evsel) {
not_pipe = true;
- if (evsel__has_callchain(evsel)) {
+ if (evsel__has_callchain(evsel) || evsel__is_offcpu_event(evsel)) {
use_callchain = true;
break;
}
@@ -672,13 +709,13 @@ static int perf_session__check_output_opt(struct perf_session *session)
output[j].fields |= PERF_OUTPUT_SYM;
output[j].fields |= PERF_OUTPUT_SYMOFFSET;
output[j].fields |= PERF_OUTPUT_DSO;
- set_print_ip_opts(&evsel->core.attr);
+ evsel__set_print_ip_opts(evsel);
goto out;
}
}
}
- if (tod && !session->header.env.clock.enabled) {
+ if (tod && !perf_session__env(session)->clock.enabled) {
pr_err("Can't provide 'tod' time, missing clock data. "
"Please record with -k/--clockid option.\n");
return -1;
@@ -723,7 +760,7 @@ tod_scnprintf(struct perf_script *script, char *buf, int buflen,
if (buflen < 64 || !script)
return buf;
- env = &script->session->header.env;
+ env = perf_session__env(script->session);
if (!env->clock.enabled) {
scnprintf(buf, buflen, "disabled");
return buf;
@@ -759,14 +796,20 @@ tod_scnprintf(struct perf_script *script, char *buf, int buflen,
static int perf_sample__fprintf_iregs(struct perf_sample *sample,
struct perf_event_attr *attr, const char *arch, FILE *fp)
{
- return perf_sample__fprintf_regs(&sample->intr_regs,
+ if (!sample->intr_regs)
+ return 0;
+
+ return perf_sample__fprintf_regs(perf_sample__intr_regs(sample),
attr->sample_regs_intr, arch, fp);
}
static int perf_sample__fprintf_uregs(struct perf_sample *sample,
struct perf_event_attr *attr, const char *arch, FILE *fp)
{
- return perf_sample__fprintf_regs(&sample->user_regs,
+ if (!sample->user_regs)
+ return 0;
+
+ return perf_sample__fprintf_regs(perf_sample__user_regs(sample),
attr->sample_regs_user, arch, fp);
}
@@ -776,12 +819,24 @@ static int perf_sample__fprintf_start(struct perf_script *script,
struct evsel *evsel,
u32 type, FILE *fp)
{
- struct perf_event_attr *attr = &evsel->core.attr;
unsigned long secs;
unsigned long long nsecs;
int printed = 0;
char tstr[128];
+ /*
+ * Print the branch counter's abbreviation list,
+ * if the branch counter is available.
+ */
+ if (PRINT_FIELD(BRCNTR) && !verbose) {
+ char *buf;
+
+ if (!annotation_br_cntr_abbr_list(&buf, evsel, true)) {
+ printed += fprintf(stdout, "%s", buf);
+ free(buf);
+ }
+ }
+
if (PRINT_FIELD(MACHINE_PID) && sample->machine_pid)
printed += fprintf(fp, "VM:%5d ", sample->machine_pid);
@@ -893,19 +948,25 @@ static int perf_sample__fprintf_start(struct perf_script *script,
return printed;
}
-static inline char
-mispred_str(struct branch_entry *br)
+static inline size_t
+bstack_event_str(struct branch_entry *br, char *buf, size_t sz)
{
- if (!(br->flags.mispred || br->flags.predicted))
- return '-';
+ if (!(br->flags.mispred || br->flags.predicted || br->flags.not_taken))
+ return snprintf(buf, sz, "-");
- return br->flags.predicted ? 'P' : 'M';
+ return snprintf(buf, sz, "%s%s",
+ br->flags.predicted ? "P" : "M",
+ br->flags.not_taken ? "N" : "");
}
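
bstack_event_str() above extends the old single-character mispredict field: 'P' or 'M' as before, an extra 'N' when the branch was sampled as not taken, and '-' when no prediction/taken information is present. A standalone sketch showing the strings it produces (the struct here is a stand-in for the perf branch flags):

#include <stdio.h>

struct br_flags {
	unsigned mispred:1, predicted:1, not_taken:1;
};

static int bstack_event_str(const struct br_flags *f, char *buf, size_t sz)
{
	if (!(f->mispred || f->predicted || f->not_taken))
		return snprintf(buf, sz, "-");

	return snprintf(buf, sz, "%s%s",
			f->predicted ? "P" : "M",
			f->not_taken ? "N" : "");
}

int main(void)
{
	const struct br_flags cases[] = {
		{ .predicted = 1 },			/* "P"  */
		{ .predicted = 1, .not_taken = 1 },	/* "PN" */
		{ .mispred = 1 },			/* "M"  */
		{ 0 },					/* "-"  */
	};
	char buf[8];

	for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		bstack_event_str(&cases[i], buf, sizeof(buf));
		printf("/%s/\n", buf);
	}
	return 0;
}
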
static int print_bstack_flags(FILE *fp, struct branch_entry *br)
{
- return fprintf(fp, "/%c/%c/%c/%d/%s/%s ",
- mispred_str(br),
+ char events[16] = { 0 };
+ size_t pos;
+
+ pos = bstack_event_str(br, events, sizeof(events));
+ return fprintf(fp, "/%s/%c/%c/%d/%s/%s ",
+ pos < 0 ? "-" : events,
br->flags.in_tx ? 'X' : '-',
br->flags.abort ? 'A' : '-',
br->flags.cycles,
@@ -915,7 +976,7 @@ static int print_bstack_flags(FILE *fp, struct branch_entry *br)
static int perf_sample__fprintf_brstack(struct perf_sample *sample,
struct thread *thread,
- struct perf_event_attr *attr, FILE *fp)
+ struct evsel *evsel, FILE *fp)
{
struct branch_stack *br = sample->branch_stack;
struct branch_entry *entries = perf_sample__branch_entries(sample);
@@ -954,7 +1015,7 @@ static int perf_sample__fprintf_brstack(struct perf_sample *sample,
static int perf_sample__fprintf_brstacksym(struct perf_sample *sample,
struct thread *thread,
- struct perf_event_attr *attr, FILE *fp)
+ struct evsel *evsel, FILE *fp)
{
struct branch_stack *br = sample->branch_stack;
struct branch_entry *entries = perf_sample__branch_entries(sample);
@@ -992,7 +1053,7 @@ static int perf_sample__fprintf_brstacksym(struct perf_sample *sample,
static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
struct thread *thread,
- struct perf_event_attr *attr, FILE *fp)
+ struct evsel *evsel, FILE *fp)
{
struct branch_stack *br = sample->branch_stack;
struct branch_entry *entries = perf_sample__branch_entries(sample);
@@ -1011,11 +1072,11 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
to = entries[i].to;
if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
- !map__dso(alf.map)->adjust_symbols)
+ !dso__adjust_symbols(map__dso(alf.map)))
from = map__dso_map_ip(alf.map, from);
if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
- !map__dso(alt.map)->adjust_symbols)
+ !dso__adjust_symbols(map__dso(alt.map)))
to = map__dso_map_ip(alt.map, to);
printed += fprintf(fp, " 0x%"PRIx64, from);
@@ -1076,7 +1137,7 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
goto out;
}
- if (dso->data.status == DSO_DATA_STATUS_ERROR) {
+ if (dso__data(dso)->status == DSO_DATA_STATUS_ERROR) {
pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
goto out;
}
@@ -1088,7 +1149,7 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
len = dso__data_read_offset(dso, machine, offset, (u8 *)buffer,
end - start + MAXINSN);
- *is64bit = dso->is_64_bit;
+ *is64bit = dso__is_64_bit(dso);
if (len <= 0)
pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n",
start, end);
@@ -1159,18 +1220,78 @@ out:
return ret;
}
+static int any_dump_insn(struct evsel *evsel __maybe_unused,
+ struct perf_insn *x, uint64_t ip,
+ u8 *inbuf, int inlen, int *lenp,
+ FILE *fp)
+{
+ if (PRINT_FIELD(BRSTACKDISASM)) {
+ int printed = fprintf_insn_asm(x->machine, x->thread, x->cpumode, x->is64bit,
+ (uint8_t *)inbuf, inlen, ip, lenp,
+ PRINT_INSN_IMM_HEX, fp);
+
+ if (printed > 0)
+ return printed;
+ }
+ return fprintf(fp, "%s", dump_insn(x, ip, inbuf, inlen, lenp));
+}
+
+static int add_padding(FILE *fp, int printed, int padding)
+{
+ if (printed >= 0 && printed < padding)
+ printed += fprintf(fp, "%*s", padding - printed, "");
+ return printed;
+}
+
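
add_padding() above keeps the brstackinsn output columns aligned now that the disassembled text can be any width. The helper is small enough to reuse verbatim; a quick standalone usage example (the printed instruction text is made up):

#include <stdio.h>

/* Pad whatever was just printed out to a fixed column width. */
static int add_padding(FILE *fp, int printed, int padding)
{
	if (printed >= 0 && printed < padding)
		printed += fprintf(fp, "%*s", padding - printed, "");
	return printed;
}

int main(void)
{
	int printed = printf("mov %%rax,%%rbx");	/* illustrative text */

	add_padding(stdout, printed, 30);
	printf("| ilen: 3\n");
	return 0;
}
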
static int ip__fprintf_jump(uint64_t ip, struct branch_entry *en,
struct perf_insn *x, u8 *inbuf, int len,
int insn, FILE *fp, int *total_cycles,
- struct perf_event_attr *attr)
+ struct evsel *evsel,
+ struct thread *thread,
+ u64 br_cntr)
{
int ilen = 0;
- int printed = fprintf(fp, "\t%016" PRIx64 "\t%-30s\t", ip,
- dump_insn(x, ip, inbuf, len, &ilen));
+ int printed = fprintf(fp, "\t%016" PRIx64 "\t", ip);
+
+ printed += add_padding(fp, any_dump_insn(evsel, x, ip, inbuf, len, &ilen, fp), 30);
+ printed += fprintf(fp, "\t");
if (PRINT_FIELD(BRSTACKINSNLEN))
printed += fprintf(fp, "ilen: %d\t", ilen);
+ if (PRINT_FIELD(SRCLINE)) {
+ struct addr_location al;
+
+ addr_location__init(&al);
+ thread__find_map(thread, x->cpumode, ip, &al);
+ printed += map__fprintf_srcline(al.map, al.addr, " srcline: ", fp);
+ printed += fprintf(fp, "\t");
+ addr_location__exit(&al);
+ }
+
+ if (PRINT_FIELD(BRCNTR)) {
+ struct evsel *pos = evsel__leader(evsel);
+ unsigned int i = 0, j, num, mask, width;
+
+ perf_env__find_br_cntr_info(evsel__env(evsel), NULL, &width);
+ mask = (1L << width) - 1;
+ printed += fprintf(fp, "br_cntr: ");
+ evlist__for_each_entry_from(evsel->evlist, pos) {
+ if (!(pos->core.attr.branch_sample_type & PERF_SAMPLE_BRANCH_COUNTERS))
+ continue;
+ if (evsel__leader(pos) != evsel__leader(evsel))
+ break;
+
+ num = (br_cntr >> (i++ * width)) & mask;
+ if (!verbose) {
+ for (j = 0; j < num; j++)
+ printed += fprintf(fp, "%s", pos->abbr_name);
+ } else
+ printed += fprintf(fp, "%s %d ", pos->name, num);
+ }
+ printed += fprintf(fp, "\t");
+ }
+
printed += fprintf(fp, "#%s%s%s%s",
en->flags.predicted ? " PRED" : "",
en->flags.mispred ? " MISPRED" : "",
@@ -1182,12 +1303,13 @@ static int ip__fprintf_jump(uint64_t ip, struct branch_entry *en,
if (insn)
printed += fprintf(fp, " %.2f IPC", (float)insn / en->flags.cycles);
}
+
return printed + fprintf(fp, "\n");
}
static int ip__fprintf_sym(uint64_t addr, struct thread *thread,
u8 cpumode, int cpu, struct symbol **lastsym,
- struct perf_event_attr *attr, FILE *fp)
+ struct evsel *evsel, FILE *fp)
{
struct addr_location al;
int off, printed = 0, ret = 0;
@@ -1226,6 +1348,7 @@ out:
}
static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
+ struct evsel *evsel,
struct thread *thread,
struct perf_event_attr *attr,
struct machine *machine, FILE *fp)
@@ -1239,6 +1362,7 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
unsigned off;
struct symbol *lastsym = NULL;
int total_cycles = 0;
+ u64 br_cntr = 0;
if (!(br && br->nr))
return 0;
@@ -1247,8 +1371,12 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
nr = max_blocks + 1;
x.thread = thread;
+ x.machine = machine;
x.cpu = sample->cpu;
+ if (PRINT_FIELD(BRCNTR) && sample->branch_stack_cntr)
+ br_cntr = sample->branch_stack_cntr[nr - 1];
+
printed += fprintf(fp, "%c", '\n');
/* Handle first from jump, of which we don't know the entry. */
@@ -1257,10 +1385,10 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
machine, thread, &x.is64bit, &x.cpumode, false);
if (len > 0) {
printed += ip__fprintf_sym(entries[nr - 1].from, thread,
- x.cpumode, x.cpu, &lastsym, attr, fp);
+ x.cpumode, x.cpu, &lastsym, evsel, fp);
printed += ip__fprintf_jump(entries[nr - 1].from, &entries[nr - 1],
&x, buffer, len, 0, fp, &total_cycles,
- attr);
+ evsel, thread, br_cntr);
if (PRINT_FIELD(SRCCODE))
printed += print_srccode(thread, x.cpumode, entries[nr - 1].from);
}
@@ -1288,17 +1416,19 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
for (off = 0; off < (unsigned)len; off += ilen) {
uint64_t ip = start + off;
- printed += ip__fprintf_sym(ip, thread, x.cpumode, x.cpu, &lastsym, attr, fp);
+ printed += ip__fprintf_sym(ip, thread, x.cpumode, x.cpu, &lastsym, evsel, fp);
if (ip == end) {
+ if (PRINT_FIELD(BRCNTR) && sample->branch_stack_cntr)
+ br_cntr = sample->branch_stack_cntr[i];
printed += ip__fprintf_jump(ip, &entries[i], &x, buffer + off, len - off, ++insn, fp,
- &total_cycles, attr);
+ &total_cycles, evsel, thread, br_cntr);
if (PRINT_FIELD(SRCCODE))
printed += print_srccode(thread, x.cpumode, ip);
break;
} else {
ilen = 0;
- printed += fprintf(fp, "\t%016" PRIx64 "\t%s", ip,
- dump_insn(&x, ip, buffer + off, len - off, &ilen));
+ printed += fprintf(fp, "\t%016" PRIx64 "\t", ip);
+ printed += any_dump_insn(evsel, &x, ip, buffer + off, len - off, &ilen, fp);
if (PRINT_FIELD(BRSTACKINSNLEN))
printed += fprintf(fp, "\tilen: %d", ilen);
printed += fprintf(fp, "\n");
@@ -1328,7 +1458,7 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
* Due to pipeline delays the LBRs might be missing a branch
* or two, which can result in very large or negative blocks
* between final branch and sample. When this happens just
- * continue walking after the last TO until we hit a branch.
+ * continue walking after the last TO.
*/
start = entries[0].to;
end = sample->ip;
@@ -1337,7 +1467,7 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
end = start + 128;
}
len = grab_bb(buffer, start, end, machine, thread, &x.is64bit, &x.cpumode, true);
- printed += ip__fprintf_sym(start, thread, x.cpumode, x.cpu, &lastsym, attr, fp);
+ printed += ip__fprintf_sym(start, thread, x.cpumode, x.cpu, &lastsym, evsel, fp);
if (len <= 0) {
/* Print at least last IP if basic block did not work */
len = grab_bb(buffer, sample->ip, sample->ip,
@@ -1345,8 +1475,8 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
if (len <= 0)
goto out;
ilen = 0;
- printed += fprintf(fp, "\t%016" PRIx64 "\t%s", sample->ip,
- dump_insn(&x, sample->ip, buffer, len, &ilen));
+ printed += fprintf(fp, "\t%016" PRIx64 "\t", sample->ip);
+ printed += any_dump_insn(evsel, &x, sample->ip, buffer, len, &ilen, fp);
if (PRINT_FIELD(BRSTACKINSNLEN))
printed += fprintf(fp, "\tilen: %d", ilen);
printed += fprintf(fp, "\n");
@@ -1356,14 +1486,16 @@ static int perf_sample__fprintf_brstackinsn(struct perf_sample *sample,
}
for (off = 0; off <= end - start; off += ilen) {
ilen = 0;
- printed += fprintf(fp, "\t%016" PRIx64 "\t%s", start + off,
- dump_insn(&x, start + off, buffer + off, len - off, &ilen));
+ printed += fprintf(fp, "\t%016" PRIx64 "\t", start + off);
+ printed += any_dump_insn(evsel, &x, start + off, buffer + off, len - off, &ilen, fp);
if (PRINT_FIELD(BRSTACKINSNLEN))
printed += fprintf(fp, "\tilen: %d", ilen);
printed += fprintf(fp, "\n");
if (ilen == 0)
break;
- if (arch_is_branch(buffer + off, len - off, x.is64bit) && start + off != sample->ip) {
+ if ((attr->branch_sample_type == 0 || attr->branch_sample_type & PERF_SAMPLE_BRANCH_ANY)
+ && arch_is_uncond_branch(buffer + off, len - off, x.is64bit)
+ && start + off != sample->ip) {
/*
* Hit a missing branch. Just stop.
*/
@@ -1379,13 +1511,13 @@ out:
static int perf_sample__fprintf_addr(struct perf_sample *sample,
struct thread *thread,
- struct perf_event_attr *attr, FILE *fp)
+ struct evsel *evsel, FILE *fp)
{
struct addr_location al;
int printed = fprintf(fp, "%16" PRIx64, sample->addr);
addr_location__init(&al);
- if (!sample_addr_correlates_sym(attr))
+ if (!sample_addr_correlates_sym(&evsel->core.attr))
goto out;
thread__resolve(thread, &al, sample);
@@ -1412,11 +1544,10 @@ static const char *resolve_branch_sym(struct perf_sample *sample,
struct addr_location *addr_al,
u64 *ip)
{
- struct perf_event_attr *attr = &evsel->core.attr;
const char *name = NULL;
if (sample->flags & (PERF_IP_FLAG_CALL | PERF_IP_FLAG_TRACE_BEGIN)) {
- if (sample_addr_correlates_sym(attr)) {
+ if (sample_addr_correlates_sym(&evsel->core.attr)) {
if (!addr_al->thread)
thread__resolve(thread, addr_al, sample);
if (addr_al->sym)
@@ -1442,7 +1573,6 @@ static int perf_sample__fprintf_callindent(struct perf_sample *sample,
struct addr_location *addr_al,
FILE *fp)
{
- struct perf_event_attr *attr = &evsel->core.attr;
size_t depth = thread_stack__depth(thread, sample->cpu);
const char *name = NULL;
static int spacing;
@@ -1486,45 +1616,35 @@ static int perf_sample__fprintf_callindent(struct perf_sample *sample,
return len + dlen;
}
-__weak void arch_fetch_insn(struct perf_sample *sample __maybe_unused,
- struct thread *thread __maybe_unused,
- struct machine *machine __maybe_unused)
-{
-}
-
-void script_fetch_insn(struct perf_sample *sample, struct thread *thread,
- struct machine *machine)
-{
- if (sample->insn_len == 0 && native_arch)
- arch_fetch_insn(sample, thread, machine);
-}
-
static int perf_sample__fprintf_insn(struct perf_sample *sample,
+ struct evsel *evsel,
struct perf_event_attr *attr,
struct thread *thread,
- struct machine *machine, FILE *fp)
+ struct machine *machine, FILE *fp,
+ struct addr_location *al)
{
int printed = 0;
- script_fetch_insn(sample, thread, machine);
+ script_fetch_insn(sample, thread, machine, native_arch);
if (PRINT_FIELD(INSNLEN))
printed += fprintf(fp, " ilen: %d", sample->insn_len);
if (PRINT_FIELD(INSN) && sample->insn_len) {
- int i;
-
- printed += fprintf(fp, " insn:");
- for (i = 0; i < sample->insn_len; i++)
- printed += fprintf(fp, " %02x", (unsigned char)sample->insn[i]);
+ printed += fprintf(fp, " insn: ");
+ printed += sample__fprintf_insn_raw(sample, fp);
}
- if (PRINT_FIELD(BRSTACKINSN) || PRINT_FIELD(BRSTACKINSNLEN))
- printed += perf_sample__fprintf_brstackinsn(sample, thread, attr, machine, fp);
+ if (PRINT_FIELD(DISASM) && sample->insn_len) {
+ printed += fprintf(fp, "\t\t");
+ printed += sample__fprintf_insn_asm(sample, thread, machine, fp, al);
+ }
+ if (PRINT_FIELD(BRSTACKINSN) || PRINT_FIELD(BRSTACKINSNLEN) || PRINT_FIELD(BRSTACKDISASM))
+ printed += perf_sample__fprintf_brstackinsn(sample, evsel, thread, attr, machine, fp);
return printed;
}
static int perf_sample__fprintf_ipc(struct perf_sample *sample,
- struct perf_event_attr *attr, FILE *fp)
+ struct evsel *evsel, FILE *fp)
{
unsigned int ipc;
@@ -1545,7 +1665,7 @@ static int perf_sample__fprintf_bts(struct perf_sample *sample,
struct machine *machine, FILE *fp)
{
struct perf_event_attr *attr = &evsel->core.attr;
- unsigned int type = output_type(attr->type);
+ unsigned int type = evsel__output_type(evsel);
bool print_srcline_last = false;
int printed = 0;
@@ -1582,15 +1702,15 @@ static int perf_sample__fprintf_bts(struct perf_sample *sample,
((evsel->core.attr.sample_type & PERF_SAMPLE_ADDR) &&
!output[type].user_set)) {
printed += fprintf(fp, " => ");
- printed += perf_sample__fprintf_addr(sample, thread, attr, fp);
+ printed += perf_sample__fprintf_addr(sample, thread, evsel, fp);
}
- printed += perf_sample__fprintf_ipc(sample, attr, fp);
+ printed += perf_sample__fprintf_ipc(sample, evsel, fp);
if (print_srcline_last)
printed += map__fprintf_srcline(al->map, al->addr, "\n ", fp);
- printed += perf_sample__fprintf_insn(sample, attr, thread, machine, fp);
+ printed += perf_sample__fprintf_insn(sample, evsel, attr, thread, machine, fp, al);
printed += fprintf(fp, "\n");
if (PRINT_FIELD(SRCCODE)) {
int ret = map__fprintf_srccode(al->map, al->addr, stdout,
@@ -1603,92 +1723,17 @@ static int perf_sample__fprintf_bts(struct perf_sample *sample,
return printed;
}
-static struct {
- u32 flags;
- const char *name;
-} sample_flags[] = {
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL, "call"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN, "return"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CONDITIONAL, "jcc"},
- {PERF_IP_FLAG_BRANCH, "jmp"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_INTERRUPT, "int"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN | PERF_IP_FLAG_INTERRUPT, "iret"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_SYSCALLRET, "syscall"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_RETURN | PERF_IP_FLAG_SYSCALLRET, "sysret"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_ASYNC, "async"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_ASYNC | PERF_IP_FLAG_INTERRUPT, "hw int"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TX_ABORT, "tx abrt"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TRACE_BEGIN, "tr strt"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_TRACE_END, "tr end"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_VMENTRY, "vmentry"},
- {PERF_IP_FLAG_BRANCH | PERF_IP_FLAG_CALL | PERF_IP_FLAG_VMEXIT, "vmexit"},
- {0, NULL}
-};
-
-static const char *sample_flags_to_name(u32 flags)
-{
- int i;
-
- for (i = 0; sample_flags[i].name ; i++) {
- if (sample_flags[i].flags == flags)
- return sample_flags[i].name;
- }
-
- return NULL;
-}
-
-int perf_sample__sprintf_flags(u32 flags, char *str, size_t sz)
-{
- u32 xf = PERF_IP_FLAG_IN_TX | PERF_IP_FLAG_INTR_DISABLE |
- PERF_IP_FLAG_INTR_TOGGLE;
- const char *chars = PERF_IP_FLAG_CHARS;
- const size_t n = strlen(PERF_IP_FLAG_CHARS);
- const char *name = NULL;
- size_t i, pos = 0;
- char xs[16] = {0};
-
- if (flags & xf)
- snprintf(xs, sizeof(xs), "(%s%s%s)",
- flags & PERF_IP_FLAG_IN_TX ? "x" : "",
- flags & PERF_IP_FLAG_INTR_DISABLE ? "D" : "",
- flags & PERF_IP_FLAG_INTR_TOGGLE ? "t" : "");
-
- name = sample_flags_to_name(flags & ~xf);
- if (name)
- return snprintf(str, sz, "%-15s%6s", name, xs);
-
- if (flags & PERF_IP_FLAG_TRACE_BEGIN) {
- name = sample_flags_to_name(flags & ~(xf | PERF_IP_FLAG_TRACE_BEGIN));
- if (name)
- return snprintf(str, sz, "tr strt %-7s%6s", name, xs);
- }
-
- if (flags & PERF_IP_FLAG_TRACE_END) {
- name = sample_flags_to_name(flags & ~(xf | PERF_IP_FLAG_TRACE_END));
- if (name)
- return snprintf(str, sz, "tr end %-7s%6s", name, xs);
- }
-
- for (i = 0; i < n; i++, flags >>= 1) {
- if ((flags & 1) && pos < sz)
- str[pos++] = chars[i];
- }
- for (; i < 32; i++, flags >>= 1) {
- if ((flags & 1) && pos < sz)
- str[pos++] = '?';
- }
- if (pos < sz)
- str[pos] = 0;
-
- return pos;
-}
-
static int perf_sample__fprintf_flags(u32 flags, FILE *fp)
{
char str[SAMPLE_FLAGS_BUF_SIZE];
+ int ret;
- perf_sample__sprintf_flags(flags, str, sizeof(str));
- return fprintf(fp, " %-21s ", str);
+ ret = perf_sample__sprintf_flags(flags, str, sizeof(str));
+ if (ret < 0)
+ return fprintf(fp, " raw flags:0x%-*x ",
+ SAMPLE_FLAGS_STR_ALIGNED_SIZE - 12, flags);
+
+ return fprintf(fp, " %-*s ", SAMPLE_FLAGS_STR_ALIGNED_SIZE, str);
}
struct printer_data {
@@ -1957,6 +2002,33 @@ static int perf_sample__fprintf_synth_iflag_chg(struct perf_sample *sample, FILE
return len + perf_sample__fprintf_pt_spacing(len, fp);
}
+#ifdef HAVE_AUXTRACE_SUPPORT
+static int perf_sample__fprintf_synth_vpadtl(struct perf_sample *data, FILE *fp)
+{
+ struct powerpc_vpadtl_entry *dtl = (struct powerpc_vpadtl_entry *)data->raw_data;
+ int len;
+
+ len = fprintf(fp, "timebase: %" PRIu64 " dispatch_reason:%s, preempt_reason:%s,\n"
+ "enqueue_to_dispatch_time:%d, ready_to_enqueue_time:%d,"
+ "waiting_to_ready_time:%d, processor_id: %d",
+ get_unaligned_be64(&dtl->timebase),
+ dispatch_reasons[dtl->dispatch_reason],
+ preempt_reasons[dtl->preempt_reason],
+ be32_to_cpu(dtl->enqueue_to_dispatch_time),
+ be32_to_cpu(dtl->ready_to_enqueue_time),
+ be32_to_cpu(dtl->waiting_to_ready_time),
+ be16_to_cpu(dtl->processor_id));
+
+ return len;
+}
+#else
+static int perf_sample__fprintf_synth_vpadtl(struct perf_sample *data __maybe_unused,
+ FILE *fp __maybe_unused)
+{
+ return 0;
+}
+#endif
+
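
perf_sample__fprintf_synth_vpadtl() above prints a PowerPC dispatch trace log entry whose fields arrive big-endian and possibly unaligned, hence get_unaligned_be64() and the be32_to_cpu()/be16_to_cpu() conversions. A standalone sketch of byte-order-safe reads from a raw buffer (the helpers below are simplified stand-ins, not the kernel ones):

#include <stdint.h>
#include <stdio.h>

static uint64_t read_be64(const unsigned char *p)
{
	uint64_t v = 0;

	for (int i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

static uint32_t read_be32(const unsigned char *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | p[3];
}

int main(void)
{
	/* 8-byte timebase followed by one 32-bit counter, both big-endian */
	const unsigned char raw[12] = { 0, 0, 0, 0, 0, 0, 0, 42, 0, 0, 1, 0 };

	printf("timebase=%llu counter=%u\n",
	       (unsigned long long)read_be64(raw), read_be32(raw + 8));
	return 0;
}
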
static int perf_sample__fprintf_synth(struct perf_sample *sample,
struct evsel *evsel, FILE *fp)
{
@@ -1979,6 +2051,8 @@ static int perf_sample__fprintf_synth(struct perf_sample *sample,
return perf_sample__fprintf_synth_evt(sample, fp);
case PERF_SYNTH_INTEL_IFLAG_CHG:
return perf_sample__fprintf_synth_iflag_chg(sample, fp);
+ case PERF_SYNTH_POWERPC_VPA_DTL:
+ return perf_sample__fprintf_synth_vpadtl(sample, fp);
default:
break;
}
@@ -2002,13 +2076,18 @@ static int evlist__max_name_len(struct evlist *evlist)
static int data_src__fprintf(u64 data_src, FILE *fp)
{
- struct mem_info mi = { .data_src.val = data_src };
+ struct mem_info *mi = mem_info__new();
char decode[100];
char out[100];
static int maxlen;
int len;
- perf_script__meminfo_scnprintf(decode, 100, &mi);
+ if (!mi)
+ return -ENOMEM;
+
+ mem_info__data_src(mi)->val = data_src;
+ perf_script__meminfo_scnprintf(decode, 100, mi);
+ mem_info__put(mi);
len = scnprintf(out, 100, "%16" PRIx64 " %s", data_src, decode);
if (maxlen < len)
@@ -2025,11 +2104,11 @@ struct metric_ctx {
};
static void script_print_metric(struct perf_stat_config *config __maybe_unused,
- void *ctx, const char *color,
- const char *fmt,
- const char *unit, double val)
+ void *ctx, enum metric_threshold_classify thresh,
+ const char *fmt, const char *unit, double val)
{
struct metric_ctx *mctx = ctx;
+ const char *color = metric_threshold_classify__color(thresh);
if (!fmt)
return;
@@ -2085,8 +2164,7 @@ static void perf_sample__fprint_metric(struct perf_script *script,
perf_stat__print_shadow_stats(&stat_config, ev2,
evsel_script(ev2)->val,
sample->cpu,
- &ctx,
- NULL);
+ &ctx);
}
evsel_script(leader)->gnum = 0;
}
@@ -2142,7 +2220,7 @@ static void process_event(struct perf_script *script,
{
struct thread *thread = al->thread;
struct perf_event_attr *attr = &evsel->core.attr;
- unsigned int type = output_type(attr->type);
+ unsigned int type = evsel__output_type(evsel);
struct evsel_script *es = evsel->priv;
FILE *fp = es->fp;
char str[PAGE_SIZE_NAME_LEN];
@@ -2177,15 +2255,20 @@ static void process_event(struct perf_script *script,
}
#ifdef HAVE_LIBTRACEEVENT
if (PRINT_FIELD(TRACE) && sample->raw_data) {
- event_format__fprintf(evsel->tp_format, sample->cpu,
- sample->raw_data, sample->raw_size, fp);
+ const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+ if (tp_format) {
+ event_format__fprintf(tp_format, sample->cpu,
+ sample->raw_data, sample->raw_size,
+ fp);
+ }
}
#endif
if (attr->type == PERF_TYPE_SYNTH && PRINT_FIELD(SYNTH))
perf_sample__fprintf_synth(sample, evsel, fp);
if (PRINT_FIELD(ADDR))
- perf_sample__fprintf_addr(sample, thread, attr, fp);
+ perf_sample__fprintf_addr(sample, thread, evsel, fp);
if (PRINT_FIELD(DATA_SRC))
data_src__fprintf(sample->data_src, fp);
@@ -2197,7 +2280,7 @@ static void process_event(struct perf_script *script,
fprintf(fp, "%16" PRIu16, sample->ins_lat);
if (PRINT_FIELD(RETIRE_LAT))
- fprintf(fp, "%16" PRIu16, sample->retire_lat);
+ fprintf(fp, "%16" PRIu16, sample->weight3);
if (PRINT_FIELD(CGROUP)) {
const char *cgrp_name;
@@ -2235,15 +2318,15 @@ static void process_event(struct perf_script *script,
perf_sample__fprintf_uregs(sample, attr, arch, fp);
if (PRINT_FIELD(BRSTACK))
- perf_sample__fprintf_brstack(sample, thread, attr, fp);
+ perf_sample__fprintf_brstack(sample, thread, evsel, fp);
else if (PRINT_FIELD(BRSTACKSYM))
- perf_sample__fprintf_brstacksym(sample, thread, attr, fp);
+ perf_sample__fprintf_brstacksym(sample, thread, evsel, fp);
else if (PRINT_FIELD(BRSTACKOFF))
- perf_sample__fprintf_brstackoff(sample, thread, attr, fp);
+ perf_sample__fprintf_brstackoff(sample, thread, evsel, fp);
- if (evsel__is_bpf_output(evsel) && PRINT_FIELD(BPF_OUTPUT))
+ if (evsel__is_bpf_output(evsel) && !evsel__is_offcpu_event(evsel) && PRINT_FIELD(BPF_OUTPUT))
perf_sample__fprintf_bpf_output(sample, fp);
- perf_sample__fprintf_insn(sample, attr, thread, machine, fp);
+ perf_sample__fprintf_insn(sample, evsel, attr, thread, machine, fp, al);
if (PRINT_FIELD(PHYS_ADDR))
fprintf(fp, "%16" PRIx64, sample->phys_addr);
@@ -2254,7 +2337,7 @@ static void process_event(struct perf_script *script,
if (PRINT_FIELD(CODE_PAGE_SIZE))
fprintf(fp, " %s", get_page_size_name(sample->code_page_size, str));
- perf_sample__fprintf_ipc(sample, attr, fp);
+ perf_sample__fprintf_ipc(sample, evsel, fp);
fprintf(fp, "\n");
@@ -2345,7 +2428,7 @@ static bool filter_cpu(struct perf_sample *sample)
return false;
}
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -2432,7 +2515,7 @@ out_put:
// Used when scr->per_event_dump is not set
static struct evsel_script es_stdout;
-static int process_attr(struct perf_tool *tool, union perf_event *event,
+static int process_attr(const struct perf_tool *tool, union perf_event *event,
struct evlist **pevlist)
{
struct perf_script *scr = container_of(tool, struct perf_script, tool);
@@ -2449,7 +2532,7 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
evsel = evlist__last(*pevlist);
if (!evsel->priv) {
- if (scr->per_event_dump) {
+ if (scr->per_event_dump) {
evsel->priv = evsel_script__new(evsel, scr->session->data);
if (!evsel->priv)
return -ENOMEM;
@@ -2479,7 +2562,7 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
* on events sample_type.
*/
sample_type = evlist__combined_sample_type(evlist);
- callchain_param_setup(sample_type, perf_env__arch((*pevlist)->env));
+ callchain_param_setup(sample_type, perf_env__arch(perf_session__env(scr->session)));
/* Enable fields for callchain entries */
if (symbol_conf.use_callchain &&
@@ -2487,18 +2570,18 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
sample_type & PERF_SAMPLE_BRANCH_STACK ||
(sample_type & PERF_SAMPLE_REGS_USER &&
sample_type & PERF_SAMPLE_STACK_USER))) {
- int type = output_type(evsel->core.attr.type);
+ int type = evsel__output_type(evsel);
if (!(output[type].user_unset_fields & PERF_OUTPUT_IP))
output[type].fields |= PERF_OUTPUT_IP;
if (!(output[type].user_unset_fields & PERF_OUTPUT_SYM))
output[type].fields |= PERF_OUTPUT_SYM;
}
- set_print_ip_opts(&evsel->core.attr);
+ evsel__set_print_ip_opts(evsel);
return 0;
}
-static int print_event_with_time(struct perf_tool *tool,
+static int print_event_with_time(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine,
@@ -2534,14 +2617,14 @@ static int print_event_with_time(struct perf_tool *tool,
return 0;
}
-static int print_event(struct perf_tool *tool, union perf_event *event,
+static int print_event(const struct perf_tool *tool, union perf_event *event,
struct perf_sample *sample, struct machine *machine,
pid_t pid, pid_t tid)
{
return print_event_with_time(tool, event, sample, machine, pid, tid, 0);
}
-static int process_comm_event(struct perf_tool *tool,
+static int process_comm_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2553,7 +2636,7 @@ static int process_comm_event(struct perf_tool *tool,
event->comm.tid);
}
-static int process_namespaces_event(struct perf_tool *tool,
+static int process_namespaces_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2565,7 +2648,7 @@ static int process_namespaces_event(struct perf_tool *tool,
event->namespaces.tid);
}
-static int process_cgroup_event(struct perf_tool *tool,
+static int process_cgroup_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2577,7 +2660,7 @@ static int process_cgroup_event(struct perf_tool *tool,
sample->tid);
}
-static int process_fork_event(struct perf_tool *tool,
+static int process_fork_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2589,7 +2672,7 @@ static int process_fork_event(struct perf_tool *tool,
event->fork.pid, event->fork.tid,
event->fork.time);
}
-static int process_exit_event(struct perf_tool *tool,
+static int process_exit_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2602,7 +2685,7 @@ static int process_exit_event(struct perf_tool *tool,
return perf_event__process_exit(tool, event, sample, machine);
}
-static int process_mmap_event(struct perf_tool *tool,
+static int process_mmap_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2614,7 +2697,7 @@ static int process_mmap_event(struct perf_tool *tool,
event->mmap.tid);
}
-static int process_mmap2_event(struct perf_tool *tool,
+static int process_mmap2_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2626,7 +2709,7 @@ static int process_mmap2_event(struct perf_tool *tool,
event->mmap2.tid);
}
-static int process_switch_event(struct perf_tool *tool,
+static int process_switch_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2658,7 +2741,7 @@ static int process_auxtrace_error(struct perf_session *session,
}
static int
-process_lost_event(struct perf_tool *tool,
+process_lost_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2668,7 +2751,7 @@ process_lost_event(struct perf_tool *tool,
}
static int
-process_throttle_event(struct perf_tool *tool __maybe_unused,
+process_throttle_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2679,7 +2762,7 @@ process_throttle_event(struct perf_tool *tool __maybe_unused,
}
static int
-process_finished_round_event(struct perf_tool *tool __maybe_unused,
+process_finished_round_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct ordered_events *oe __maybe_unused)
@@ -2689,7 +2772,7 @@ process_finished_round_event(struct perf_tool *tool __maybe_unused,
}
static int
-process_bpf_events(struct perf_tool *tool __maybe_unused,
+process_bpf_events(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2701,7 +2784,15 @@ process_bpf_events(struct perf_tool *tool __maybe_unused,
sample->tid);
}
-static int process_text_poke_events(struct perf_tool *tool,
+static int
+process_bpf_metadata_event(struct perf_session *session __maybe_unused,
+ union perf_event *event)
+{
+ perf_event__fprintf(event, NULL, stdout);
+ return 0;
+}
+
+static int process_text_poke_events(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -2823,8 +2914,9 @@ static int __cmd_script(struct perf_script *script)
script->tool.finished_round = process_finished_round_event;
}
if (script->show_bpf_events) {
- script->tool.ksymbol = process_bpf_events;
- script->tool.bpf = process_bpf_events;
+ script->tool.ksymbol = process_bpf_events;
+ script->tool.bpf = process_bpf_events;
+ script->tool.bpf_metadata = process_bpf_metadata_event;
}
if (script->show_text_poke_events) {
script->tool.ksymbol = process_bpf_events;
@@ -2847,79 +2939,18 @@ static int __cmd_script(struct perf_script *script)
return ret;
}
-struct script_spec {
- struct list_head node;
- struct scripting_ops *ops;
- char spec[];
-};
-
-static LIST_HEAD(script_specs);
-
-static struct script_spec *script_spec__new(const char *spec,
- struct scripting_ops *ops)
-{
- struct script_spec *s = malloc(sizeof(*s) + strlen(spec) + 1);
-
- if (s != NULL) {
- strcpy(s->spec, spec);
- s->ops = ops;
- }
-
- return s;
-}
-
-static void script_spec__add(struct script_spec *s)
-{
- list_add_tail(&s->node, &script_specs);
-}
-
-static struct script_spec *script_spec__find(const char *spec)
+static int list_available_languages_cb(struct scripting_ops *ops, const char *spec)
{
- struct script_spec *s;
-
- list_for_each_entry(s, &script_specs, node)
- if (strcasecmp(s->spec, spec) == 0)
- return s;
- return NULL;
-}
-
-int script_spec_register(const char *spec, struct scripting_ops *ops)
-{
- struct script_spec *s;
-
- s = script_spec__find(spec);
- if (s)
- return -1;
-
- s = script_spec__new(spec, ops);
- if (!s)
- return -1;
- else
- script_spec__add(s);
-
+ fprintf(stderr, " %-42s [%s]\n", spec, ops->name);
return 0;
}
-static struct scripting_ops *script_spec__lookup(const char *spec)
-{
- struct script_spec *s = script_spec__find(spec);
- if (!s)
- return NULL;
-
- return s->ops;
-}
-
static void list_available_languages(void)
{
- struct script_spec *s;
-
fprintf(stderr, "\n");
fprintf(stderr, "Scripting language extensions (used in "
"perf script -s [spec:]script.[spec]):\n\n");
-
- list_for_each_entry(s, &script_specs, node)
- fprintf(stderr, " %-42s [%s]\n", s->spec, s->ops->name);
-
+ script_spec__for_each(&list_available_languages_cb);
fprintf(stderr, "\n");
}
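
Note: the hunk above drops the open-coded walk of the private script_spec list and has the builtin hand a callback to script_spec__for_each() instead. A minimal standalone sketch of that callback-iteration pattern follows; the struct, array and function names here are illustrative stand-ins, not the perf API.

/*
 * Callback-style iteration: the table stays private to one translation
 * unit and callers pass a visitor instead of walking the list themselves.
 */
#include <stdio.h>
#include <string.h>

struct lang_spec {
	const char *spec;   /* e.g. "py" */
	const char *name;   /* e.g. "Python" */
};

static const struct lang_spec specs[] = {
	{ "pl", "Perl" },
	{ "py", "Python" },
};

/* A non-zero return from the callback stops the walk early. */
static int for_each_spec(int (*cb)(const char *spec, const char *name))
{
	for (size_t i = 0; i < sizeof(specs) / sizeof(specs[0]); i++) {
		int ret = cb(specs[i].spec, specs[i].name);

		if (ret)
			return ret;
	}
	return 0;
}

static int print_cb(const char *spec, const char *name)
{
	fprintf(stderr, "  %-42s [%s]\n", spec, name);
	return 0;
}

int main(void)
{
	return for_each_spec(print_cb);
}
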
@@ -3108,6 +3139,13 @@ parse:
rc = -EINVAL;
goto out;
}
+#ifndef HAVE_LIBCAPSTONE_SUPPORT
+ if (change != REMOVE && strcmp(tok, "disasm") == 0) {
+ fprintf(stderr, "Field \"disasm\" requires perf to be built with libcapstone support.\n");
+ rc = -EINVAL;
+ goto out;
+ }
+#endif
if (type == -1) {
/* add user option to all events types for
@@ -3404,144 +3442,6 @@ static void free_dlarg(void)
free(dlargv);
}
-/*
- * Some scripts specify the required events in their "xxx-record" file,
- * this function will check if the events in perf.data match those
- * mentioned in the "xxx-record".
- *
- * Fixme: All existing "xxx-record" are all in good formats "-e event ",
- * which is covered well now. And new parsing code should be added to
- * cover the future complex formats like event groups etc.
- */
-static int check_ev_match(char *dir_name, char *scriptname,
- struct perf_session *session)
-{
- char filename[MAXPATHLEN], evname[128];
- char line[BUFSIZ], *p;
- struct evsel *pos;
- int match, len;
- FILE *fp;
-
- scnprintf(filename, MAXPATHLEN, "%s/bin/%s-record", dir_name, scriptname);
-
- fp = fopen(filename, "r");
- if (!fp)
- return -1;
-
- while (fgets(line, sizeof(line), fp)) {
- p = skip_spaces(line);
- if (*p == '#')
- continue;
-
- while (strlen(p)) {
- p = strstr(p, "-e");
- if (!p)
- break;
-
- p += 2;
- p = skip_spaces(p);
- len = strcspn(p, " \t");
- if (!len)
- break;
-
- snprintf(evname, len + 1, "%s", p);
-
- match = 0;
- evlist__for_each_entry(session->evlist, pos) {
- if (!strcmp(evsel__name(pos), evname)) {
- match = 1;
- break;
- }
- }
-
- if (!match) {
- fclose(fp);
- return -1;
- }
- }
- }
-
- fclose(fp);
- return 0;
-}
-
-/*
- * Return -1 if none is found, otherwise the actual scripts number.
- *
- * Currently the only user of this function is the script browser, which
- * will list all statically runnable scripts, select one, execute it and
- * show the output in a perf browser.
- */
-int find_scripts(char **scripts_array, char **scripts_path_array, int num,
- int pathlen)
-{
- struct dirent *script_dirent, *lang_dirent;
- char scripts_path[MAXPATHLEN], lang_path[MAXPATHLEN];
- DIR *scripts_dir, *lang_dir;
- struct perf_session *session;
- struct perf_data data = {
- .path = input_name,
- .mode = PERF_DATA_MODE_READ,
- };
- char *temp;
- int i = 0;
-
- session = perf_session__new(&data, NULL);
- if (IS_ERR(session))
- return PTR_ERR(session);
-
- snprintf(scripts_path, MAXPATHLEN, "%s/scripts", get_argv_exec_path());
-
- scripts_dir = opendir(scripts_path);
- if (!scripts_dir) {
- perf_session__delete(session);
- return -1;
- }
-
- for_each_lang(scripts_path, scripts_dir, lang_dirent) {
- scnprintf(lang_path, MAXPATHLEN, "%s/%s", scripts_path,
- lang_dirent->d_name);
-#ifndef HAVE_LIBPERL_SUPPORT
- if (strstr(lang_path, "perl"))
- continue;
-#endif
-#ifndef HAVE_LIBPYTHON_SUPPORT
- if (strstr(lang_path, "python"))
- continue;
-#endif
-
- lang_dir = opendir(lang_path);
- if (!lang_dir)
- continue;
-
- for_each_script(lang_path, lang_dir, script_dirent) {
- /* Skip those real time scripts: xxxtop.p[yl] */
- if (strstr(script_dirent->d_name, "top."))
- continue;
- if (i >= num)
- break;
- snprintf(scripts_path_array[i], pathlen, "%s/%s",
- lang_path,
- script_dirent->d_name);
- temp = strchr(script_dirent->d_name, '.');
- snprintf(scripts_array[i],
- (temp - script_dirent->d_name) + 1,
- "%s", script_dirent->d_name);
-
- if (check_ev_match(lang_path,
- scripts_array[i], session))
- continue;
-
- i++;
- }
- closedir(lang_dir);
- }
-
- closedir(scripts_dir);
- perf_session__delete(session);
- return i;
-}
-
static char *get_script_path(const char *script_root, const char *suffix)
{
struct dirent *script_dirent, *lang_dirent;
@@ -3696,7 +3596,7 @@ static
int process_thread_map_event(struct perf_session *session,
union perf_event *event)
{
- struct perf_tool *tool = session->tool;
+ const struct perf_tool *tool = session->tool;
struct perf_script *script = container_of(tool, struct perf_script, tool);
if (dump_trace)
@@ -3718,7 +3618,7 @@ static
int process_cpu_map_event(struct perf_session *session,
union perf_event *event)
{
- struct perf_tool *tool = session->tool;
+ const struct perf_tool *tool = session->tool;
struct perf_script *script = container_of(tool, struct perf_script, tool);
if (dump_trace)
@@ -3748,11 +3648,10 @@ static int process_feature_event(struct perf_session *session,
static int perf_script__process_auxtrace_info(struct perf_session *session,
union perf_event *event)
{
- struct perf_tool *tool = session->tool;
-
int ret = perf_event__process_auxtrace_info(session, event);
if (ret == 0) {
+ const struct perf_tool *tool = session->tool;
struct perf_script *script = container_of(tool, struct perf_script, tool);
ret = perf_script__setup_per_event_dump(script);
@@ -3765,11 +3664,25 @@ static int perf_script__process_auxtrace_info(struct perf_session *session,
#endif
static int parse_insn_trace(const struct option *opt __maybe_unused,
- const char *str __maybe_unused,
- int unset __maybe_unused)
+ const char *str, int unset __maybe_unused)
{
- parse_output_fields(NULL, "+insn,-event,-period", 0);
- itrace_parse_synth_opts(opt, "i0ns", 0);
+ const char *fields = "+insn,-event,-period";
+ int ret;
+
+ if (str) {
+ if (strcmp(str, "disasm") == 0)
+ fields = "+disasm,-event,-period";
+ else if (strlen(str) != 0 && strcmp(str, "raw") != 0) {
+ fprintf(stderr, "Only accept raw|disasm\n");
+ return -EINVAL;
+ }
+ }
+
+ ret = parse_output_fields(NULL, fields, 0);
+ if (ret < 0)
+ return ret;
+
+ itrace_parse_synth_opts(opt, "i0nse", 0);
symbol_conf.nanosecs = true;
return 0;
}
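
Note: parse_insn_trace() now takes an optional argument, keeping the "+insn" fields for no argument or "raw" and switching to "+disasm" for "disasm". A standalone sketch of that argument handling, with names local to the sketch rather than the perf option machinery:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* No argument or "raw" keeps the raw-bytes fields; "disasm" swaps them. */
static int pick_insn_fields(const char *arg, const char **fields)
{
	*fields = "+insn,-event,-period";

	if (!arg || *arg == '\0' || strcmp(arg, "raw") == 0)
		return 0;
	if (strcmp(arg, "disasm") == 0) {
		*fields = "+disasm,-event,-period";
		return 0;
	}
	fprintf(stderr, "Only 'raw' or 'disasm' are accepted\n");
	return -EINVAL;
}

int main(int argc, char **argv)
{
	const char *fields;

	if (pick_insn_fields(argc > 1 ? argv[1] : NULL, &fields))
		return 1;
	printf("fields: %s\n", fields);
	return 0;
}
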
@@ -3825,38 +3738,7 @@ int cmd_script(int argc, const char **argv)
const char *dlfilter_file = NULL;
const char **__argv;
int i, j, err = 0;
- struct perf_script script = {
- .tool = {
- .sample = process_sample_event,
- .mmap = perf_event__process_mmap,
- .mmap2 = perf_event__process_mmap2,
- .comm = perf_event__process_comm,
- .namespaces = perf_event__process_namespaces,
- .cgroup = perf_event__process_cgroup,
- .exit = perf_event__process_exit,
- .fork = perf_event__process_fork,
- .attr = process_attr,
- .event_update = perf_event__process_event_update,
-#ifdef HAVE_LIBTRACEEVENT
- .tracing_data = perf_event__process_tracing_data,
-#endif
- .feature = process_feature_event,
- .build_id = perf_event__process_build_id,
- .id_index = perf_event__process_id_index,
- .auxtrace_info = perf_script__process_auxtrace_info,
- .auxtrace = perf_event__process_auxtrace,
- .auxtrace_error = perf_event__process_auxtrace_error,
- .stat = perf_event__process_stat_event,
- .stat_round = process_stat_round_event,
- .stat_config = process_stat_config_event,
- .thread_map = process_thread_map_event,
- .cpu_map = process_cpu_map_event,
- .throttle = process_throttle_event,
- .unthrottle = process_throttle_event,
- .ordered_events = true,
- .ordering_requires_timestamps = true,
- },
- };
+ struct perf_script script = {};
struct perf_data data = {
.mode = PERF_DATA_MODE_READ,
};
@@ -3902,9 +3784,10 @@ int cmd_script(int argc, const char **argv)
"Fields: comm,tid,pid,time,cpu,event,trace,ip,sym,dso,dsoff,"
"addr,symoff,srcline,period,iregs,uregs,brstack,"
"brstacksym,flags,data_src,weight,bpf-output,brstackinsn,"
- "brstackinsnlen,brstackoff,callindent,insn,insnlen,synth,"
+ "brstackinsnlen,brstackdisasm,brstackoff,callindent,insn,disasm,insnlen,synth,"
"phys_addr,metric,misc,srccode,ipc,tod,data_page_size,"
- "code_page_size,ins_lat,machine_pid,vcpu,cgroup,retire_lat",
+ "code_page_size,ins_lat,machine_pid,vcpu,cgroup,retire_lat,"
+ "brcntr",
parse_output_fields),
OPT_BOOLEAN('a', "all-cpus", &system_wide,
"system-wide collection from all CPUs"),
@@ -3914,7 +3797,7 @@ int cmd_script(int argc, const char **argv)
"only consider these symbols"),
OPT_INTEGER(0, "addr-range", &symbol_conf.addr_range,
"Use with -S to list traced records within address range"),
- OPT_CALLBACK_OPTARG(0, "insn-trace", &itrace_synth_opts, NULL, NULL,
+ OPT_CALLBACK_OPTARG(0, "insn-trace", &itrace_synth_opts, NULL, "raw|disasm",
"Decode instructions from itrace", parse_insn_trace),
OPT_CALLBACK_OPTARG(0, "xed", NULL, NULL, NULL,
"Run xed disassembler on output", parse_xed),
@@ -3977,6 +3860,8 @@ int cmd_script(int argc, const char **argv)
"Enable symbol demangling"),
OPT_BOOLEAN(0, "demangle-kernel", &symbol_conf.demangle_kernel,
"Enable kernel symbol demangling"),
+ OPT_STRING(0, "addr2line", &symbol_conf.addr2line_path, "path",
+ "addr2line binary to use for line numbers"),
OPT_STRING(0, "time", &script.time_str, "str",
"Time span of interest (start,stop)"),
OPT_BOOLEAN(0, "inline", &symbol_conf.inline_name,
@@ -4006,6 +3891,7 @@ int cmd_script(int argc, const char **argv)
"perf script [<options>] <top-script> [script-args]",
NULL
};
+ struct perf_env *env;
perf_set_singlethreaded();
@@ -4028,10 +3914,8 @@ int cmd_script(int argc, const char **argv)
data.path = input_name;
data.force = symbol_conf.force;
- if (unsorted_dump) {
+ if (unsorted_dump)
dump_trace = true;
- script.tool.ordered_events = false;
- }
if (symbol__validate_sym_arguments())
return -1;
@@ -4222,10 +4106,39 @@ script_found:
use_browser = 0;
}
+ perf_tool__init(&script.tool, !unsorted_dump);
+ script.tool.sample = process_sample_event;
+ script.tool.mmap = perf_event__process_mmap;
+ script.tool.mmap2 = perf_event__process_mmap2;
+ script.tool.comm = perf_event__process_comm;
+ script.tool.namespaces = perf_event__process_namespaces;
+ script.tool.cgroup = perf_event__process_cgroup;
+ script.tool.exit = perf_event__process_exit;
+ script.tool.fork = perf_event__process_fork;
+ script.tool.attr = process_attr;
+ script.tool.event_update = perf_event__process_event_update;
+#ifdef HAVE_LIBTRACEEVENT
+ script.tool.tracing_data = perf_event__process_tracing_data;
+#endif
+ script.tool.feature = process_feature_event;
+ script.tool.build_id = perf_event__process_build_id;
+ script.tool.id_index = perf_event__process_id_index;
+ script.tool.auxtrace_info = perf_script__process_auxtrace_info;
+ script.tool.auxtrace = perf_event__process_auxtrace;
+ script.tool.auxtrace_error = perf_event__process_auxtrace_error;
+ script.tool.stat = perf_event__process_stat_event;
+ script.tool.stat_round = process_stat_round_event;
+ script.tool.stat_config = process_stat_config_event;
+ script.tool.thread_map = process_thread_map_event;
+ script.tool.cpu_map = process_cpu_map_event;
+ script.tool.throttle = process_throttle_event;
+ script.tool.unthrottle = process_throttle_event;
+ script.tool.ordering_requires_timestamps = true;
session = perf_session__new(&data, &script.tool);
if (IS_ERR(session))
return PTR_ERR(session);
+ env = perf_session__env(session);
if (header || header_only) {
script.tool.show_feat_hdr = SHOW_FEAT_HEADER;
perf_session__fprintf_info(session, stdout, show_full_info);
@@ -4235,17 +4148,17 @@ script_found:
if (show_full_info)
script.tool.show_feat_hdr = SHOW_FEAT_HEADER_FULL_INFO;
- if (symbol__init(&session->header.env) < 0)
+ if (symbol__init(env) < 0)
goto out_delete;
uname(&uts);
if (data.is_pipe) { /* Assume pipe_mode indicates native_arch */
native_arch = true;
- } else if (session->header.env.arch) {
- if (!strcmp(uts.machine, session->header.env.arch))
+ } else if (env->arch) {
+ if (!strcmp(uts.machine, env->arch))
native_arch = true;
else if (!strcmp(uts.machine, "x86_64") &&
- !strcmp(session->header.env.arch, "i386"))
+ !strcmp(env->arch, "i386"))
native_arch = true;
}
@@ -4366,6 +4279,9 @@ script_found:
flush_scripting();
+ if (verbose > 2 || debug_kmaps)
+ perf_session__dump_kmaps(session);
+
out_delete:
if (script.ptime_range) {
itrace_synth_opts__clear_time_range(&itrace_synth_opts);
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 5fe9abc6a524..7006f848f87a 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -46,6 +46,7 @@
#include "util/parse-events.h"
#include "util/pmus.h"
#include "util/pmu.h"
+#include "util/tool_pmu.h"
#include "util/event.h"
#include "util/evlist.h"
#include "util/evsel.h"
@@ -70,6 +71,7 @@
#include "util/bpf_counter.h"
#include "util/iostat.h"
#include "util/util.h"
+#include "util/intel-tpebs.h"
#include "asm/bug.h"
#include <linux/time64.h>
@@ -95,7 +97,7 @@
#include <internal/threadmap.h>
#define DEFAULT_SEPARATOR " "
-#define FREEZE_ON_SMI_PATH "devices/cpu/freeze_on_smi"
+#define FREEZE_ON_SMI_PATH "bus/event_source/devices/cpu/freeze_on_smi"
static void print_counters(struct timespec *ts, int argc, const char **argv);
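
Note: the FREEZE_ON_SMI_PATH change above only moves the sysfs location to the event_source bus path. A minimal sketch of reading the knob the way a sysfs helper would, assuming an Intel system that actually exposes the attribute:

#include <stdio.h>

int main(void)
{
	const char *path = "/sys/bus/event_source/devices/cpu/freeze_on_smi";
	FILE *f = fopen(path, "r");
	int val;

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%d", &val) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("freeze_on_smi = %d\n", val);
	return 0;
}
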
@@ -106,11 +108,7 @@ static struct parse_events_option_args parse_events_option_args = {
static bool all_counters_use_bpf = true;
-static struct target target = {
- .uid = UINT_MAX,
-};
-
-#define METRIC_ONLY_LEN 20
+static struct target target;
static volatile sig_atomic_t child_pid = -1;
static int detailed_run = 0;
@@ -149,39 +147,33 @@ static struct perf_stat perf_stat;
static volatile sig_atomic_t done = 0;
-static struct perf_stat_config stat_config = {
- .aggr_mode = AGGR_GLOBAL,
- .aggr_level = MAX_CACHE_LVL + 1,
- .scale = true,
- .unit_width = 4, /* strlen("unit") */
- .run_count = 1,
- .metric_only_len = METRIC_ONLY_LEN,
- .walltime_nsecs_stats = &walltime_nsecs_stats,
- .ru_stats = &ru_stats,
- .big_num = true,
- .ctl_fd = -1,
- .ctl_fd_ack = -1,
- .iostat_run = false,
+/* Options set from the command line. */
+struct opt_aggr_mode {
+ bool node, socket, die, cluster, cache, core, thread, no_aggr;
};
-static bool cpus_map_matched(struct evsel *a, struct evsel *b)
-{
- if (!a->core.cpus && !b->core.cpus)
- return true;
-
- if (!a->core.cpus || !b->core.cpus)
- return false;
-
- if (perf_cpu_map__nr(a->core.cpus) != perf_cpu_map__nr(b->core.cpus))
- return false;
-
- for (int i = 0; i < perf_cpu_map__nr(a->core.cpus); i++) {
- if (perf_cpu_map__cpu(a->core.cpus, i).cpu !=
- perf_cpu_map__cpu(b->core.cpus, i).cpu)
- return false;
- }
-
- return true;
+/* Turn command line option into most generic aggregation mode setting. */
+static enum aggr_mode opt_aggr_mode_to_aggr_mode(struct opt_aggr_mode *opt_mode)
+{
+ enum aggr_mode mode = AGGR_GLOBAL;
+
+ if (opt_mode->node)
+ mode = AGGR_NODE;
+ if (opt_mode->socket)
+ mode = AGGR_SOCKET;
+ if (opt_mode->die)
+ mode = AGGR_DIE;
+ if (opt_mode->cluster)
+ mode = AGGR_CLUSTER;
+ if (opt_mode->cache)
+ mode = AGGR_CACHE;
+ if (opt_mode->core)
+ mode = AGGR_CORE;
+ if (opt_mode->thread)
+ mode = AGGR_THREAD;
+ if (opt_mode->no_aggr)
+ mode = AGGR_NONE;
+ return mode;
}
static void evlist__check_cpu_maps(struct evlist *evlist)
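
Note: the new opt_aggr_mode struct collects the aggregation command-line switches as booleans and maps them to a single enum afterwards; in the if-chain above, a later (finer-grained) option overrides an earlier one. A small standalone usage sketch of that mapping; the enum values below are stand-ins so the example compiles on its own, not the perf aggregation headers.

#include <stdbool.h>
#include <stdio.h>

enum aggr_mode { AGGR_GLOBAL, AGGR_NODE, AGGR_SOCKET, AGGR_DIE,
		 AGGR_CLUSTER, AGGR_CACHE, AGGR_CORE, AGGR_THREAD, AGGR_NONE };

struct opt_aggr_mode {
	bool node, socket, die, cluster, cache, core, thread, no_aggr;
};

/* Mirrors the patch: the last set option in the chain wins. */
static enum aggr_mode to_aggr_mode(const struct opt_aggr_mode *o)
{
	enum aggr_mode mode = AGGR_GLOBAL;

	if (o->node)
		mode = AGGR_NODE;
	if (o->socket)
		mode = AGGR_SOCKET;
	if (o->die)
		mode = AGGR_DIE;
	if (o->cluster)
		mode = AGGR_CLUSTER;
	if (o->cache)
		mode = AGGR_CACHE;
	if (o->core)
		mode = AGGR_CORE;
	if (o->thread)
		mode = AGGR_THREAD;
	if (o->no_aggr)
		mode = AGGR_NONE;
	return mode;
}

int main(void)
{
	struct opt_aggr_mode opts = { .socket = true, .core = true };

	/* --per-socket and --per-core both given: core wins. */
	printf("mode = %d\n", to_aggr_mode(&opts));
	return 0;
}
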
@@ -194,7 +186,7 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
/* Check that leader matches cpus with each member. */
if (leader == evsel)
continue;
- if (cpus_map_matched(leader, evsel))
+ if (perf_cpu_map__equal(leader->core.cpus, evsel->core.cpus))
continue;
/* If there's mismatch disable the group and warn user. */
@@ -239,7 +231,7 @@ static void perf_stat__reset_stats(void)
perf_stat__reset_shadow_stats();
}
-static int process_synthesized_event(struct perf_tool *tool __maybe_unused,
+static int process_synthesized_event(const struct perf_tool *tool __maybe_unused,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -275,45 +267,38 @@ static int evsel__write_stat_event(struct evsel *counter, int cpu_map_idx, u32 t
process_synthesized_event, NULL);
}
-static int read_single_counter(struct evsel *counter, int cpu_map_idx,
- int thread, struct timespec *rs)
+static int read_single_counter(struct evsel *counter, int cpu_map_idx, int thread)
{
- switch(counter->tool_event) {
- case PERF_TOOL_DURATION_TIME: {
- u64 val = rs->tv_nsec + rs->tv_sec*1000000000ULL;
- struct perf_counts_values *count =
- perf_counts(counter->counts, cpu_map_idx, thread);
- count->ena = count->run = val;
- count->val = val;
- return 0;
- }
- case PERF_TOOL_USER_TIME:
- case PERF_TOOL_SYSTEM_TIME: {
- u64 val;
- struct perf_counts_values *count =
- perf_counts(counter->counts, cpu_map_idx, thread);
- if (counter->tool_event == PERF_TOOL_USER_TIME)
- val = ru_stats.ru_utime_usec_stat.mean;
- else
- val = ru_stats.ru_stime_usec_stat.mean;
- count->ena = count->run = val;
- count->val = val;
- return 0;
- }
- default:
- case PERF_TOOL_NONE:
- return evsel__read_counter(counter, cpu_map_idx, thread);
- case PERF_TOOL_MAX:
- /* This should never be reached */
- return 0;
+ int err = evsel__read_counter(counter, cpu_map_idx, thread);
+
+ /*
+ * Reading user and system time will fail when the process
+ * terminates. Use the wait4 values in that case.
+ */
+ if (err && cpu_map_idx == 0 &&
+ (evsel__tool_event(counter) == TOOL_PMU__EVENT_USER_TIME ||
+ evsel__tool_event(counter) == TOOL_PMU__EVENT_SYSTEM_TIME)) {
+ u64 val, *start_time;
+ struct perf_counts_values *count =
+ perf_counts(counter->counts, cpu_map_idx, thread);
+
+ start_time = xyarray__entry(counter->start_times, cpu_map_idx, thread);
+ if (evsel__tool_event(counter) == TOOL_PMU__EVENT_USER_TIME)
+ val = ru_stats.ru_utime_usec_stat.mean;
+ else
+ val = ru_stats.ru_stime_usec_stat.mean;
+ count->ena = count->run = *start_time + val;
+ count->val = val;
+ return 0;
}
+ return err;
}
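
Note: the reworked read_single_counter() falls back to the rusage numbers gathered when the workload was reaped, because per-thread user/system time can no longer be read once the process has exited. A standalone sketch of where those fallback numbers come from; it uses waitpid() plus getrusage(RUSAGE_CHILDREN), which reports the same child totals that wait4() hands perf stat for ru_stats.

#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	struct rusage ru;
	int status;
	pid_t pid = fork();

	if (pid < 0) {
		perror("fork");
		return 1;
	}
	if (pid == 0) {
		/* Child: burn a little CPU time, then exit. */
		for (volatile long i = 0; i < 50000000; i++)
			;
		_exit(0);
	}
	if (waitpid(pid, &status, 0) < 0) {
		perror("waitpid");
		return 1;
	}
	/* Accumulated user/system time of reaped children. */
	getrusage(RUSAGE_CHILDREN, &ru);
	printf("user   %ld.%06ld s\n", (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
	printf("system %ld.%06ld s\n", (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
	return 0;
}
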
/*
* Read out the results of a single counter:
* do not aggregate counts across CPUs in system-wide mode
*/
-static int read_counter_cpu(struct evsel *counter, struct timespec *rs, int cpu_map_idx)
+static int read_counter_cpu(struct evsel *counter, int cpu_map_idx)
{
int nthreads = perf_thread_map__nr(evsel_list->core.threads);
int thread;
@@ -331,7 +316,7 @@ static int read_counter_cpu(struct evsel *counter, struct timespec *rs, int cpu_
* (via evsel__read_counter()) and sets their count->loaded.
*/
if (!perf_counts__is_loaded(counter->counts, cpu_map_idx, thread) &&
- read_single_counter(counter, cpu_map_idx, thread, rs)) {
+ read_single_counter(counter, cpu_map_idx, thread)) {
counter->counts->scaled = -1;
perf_counts(counter->counts, cpu_map_idx, thread)->ena = 0;
perf_counts(counter->counts, cpu_map_idx, thread)->run = 0;
@@ -360,7 +345,7 @@ static int read_counter_cpu(struct evsel *counter, struct timespec *rs, int cpu_
return 0;
}
-static int read_affinity_counters(struct timespec *rs)
+static int read_affinity_counters(void)
{
struct evlist_cpu_iterator evlist_cpu_itr;
struct affinity saved_affinity, *affinity;
@@ -381,10 +366,8 @@ static int read_affinity_counters(struct timespec *rs)
if (evsel__is_bpf(counter))
continue;
- if (!counter->err) {
- counter->err = read_counter_cpu(counter, rs,
- evlist_cpu_itr.cpu_map_idx);
- }
+ if (!counter->err)
+ counter->err = read_counter_cpu(counter, evlist_cpu_itr.cpu_map_idx);
}
if (affinity)
affinity__cleanup(&saved_affinity);
@@ -408,11 +391,11 @@ static int read_bpf_map_counters(void)
return 0;
}
-static int read_counters(struct timespec *rs)
+static int read_counters(void)
{
if (!stat_config.stop_read_counter) {
if (read_bpf_map_counters() ||
- read_affinity_counters(rs))
+ read_affinity_counters())
return -1;
}
return 0;
@@ -443,7 +426,7 @@ static void process_interval(void)
evlist__reset_aggr_stats(evsel_list);
- if (read_counters(&rs) == 0)
+ if (read_counters() == 0)
process_counters();
if (STAT_RECORD) {
@@ -627,39 +610,33 @@ static int dispatch_events(bool forks, int timeout, int interval, int *times)
enum counter_recovery {
COUNTER_SKIP,
COUNTER_RETRY,
- COUNTER_FATAL,
};
-static enum counter_recovery stat_handle_error(struct evsel *counter)
+static enum counter_recovery stat_handle_error(struct evsel *counter, int err)
{
char msg[BUFSIZ];
+
+ assert(!counter->supported);
+
/*
* PPC returns ENXIO for HW counters until 2.6.37
* (behavior changed with commit b0a873e).
*/
- if (errno == EINVAL || errno == ENOSYS ||
- errno == ENOENT || errno == EOPNOTSUPP ||
- errno == ENXIO) {
- if (verbose > 0)
+ if (err == EINVAL || err == ENOSYS || err == ENOENT || err == ENXIO) {
+ if (verbose > 0) {
ui__warning("%s event is not supported by the kernel.\n",
evsel__name(counter));
- counter->supported = false;
- /*
- * errored is a sticky flag that means one of the counter's
- * cpu event had a problem and needs to be reexamined.
- */
- counter->errored = true;
-
- if ((evsel__leader(counter) != counter) ||
- !(counter->core.leader->nr_members > 1))
- return COUNTER_SKIP;
- } else if (evsel__fallback(counter, &target, errno, msg, sizeof(msg))) {
+ }
+ return COUNTER_SKIP;
+ }
+ if (evsel__fallback(counter, &target, err, msg, sizeof(msg))) {
if (verbose > 0)
ui__warning("%s\n", msg);
+ counter->supported = true;
return COUNTER_RETRY;
- } else if (target__has_per_thread(&target) &&
- evsel_list->core.threads &&
- evsel_list->core.threads->err_thread != -1) {
+ }
+ if (target__has_per_thread(&target) && err != EOPNOTSUPP &&
+ evsel_list->core.threads && evsel_list->core.threads->err_thread != -1) {
/*
* For global --per-thread case, skip current
* error thread.
@@ -667,23 +644,73 @@ static enum counter_recovery stat_handle_error(struct evsel *counter)
if (!thread_map__remove(evsel_list->core.threads,
evsel_list->core.threads->err_thread)) {
evsel_list->core.threads->err_thread = -1;
+ counter->supported = true;
return COUNTER_RETRY;
}
- } else if (counter->skippable) {
- if (verbose > 0)
- ui__warning("skipping event %s that kernel failed to open .\n",
- evsel__name(counter));
- counter->supported = false;
- counter->errored = true;
- return COUNTER_SKIP;
}
+ if (verbose > 0) {
+ ui__warning(err == EOPNOTSUPP
+ ? "%s event is not supported by the kernel.\n"
+ : "skipping event %s that kernel failed to open.\n",
+ evsel__name(counter));
+ }
+ return COUNTER_SKIP;
+}
- evsel__open_strerror(counter, &target, errno, msg, sizeof(msg));
- ui__error("%s\n", msg);
+static int create_perf_stat_counter(struct evsel *evsel,
+ struct perf_stat_config *config,
+ int cpu_map_idx)
+{
+ struct perf_event_attr *attr = &evsel->core.attr;
+ struct evsel *leader = evsel__leader(evsel);
- if (child_pid != -1)
- kill(child_pid, SIGTERM);
- return COUNTER_FATAL;
+ /* Reset supported flag as creating a stat counter is retried. */
+ attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
+ PERF_FORMAT_TOTAL_TIME_RUNNING;
+
+ /*
+ * The event is part of non trivial group, let's enable
+ * the group read (for leader) and ID retrieval for all
+ * members.
+ */
+ if (leader->core.nr_members > 1)
+ attr->read_format |= PERF_FORMAT_ID|PERF_FORMAT_GROUP;
+
+ attr->inherit = !config->no_inherit && list_empty(&evsel->bpf_counter_list);
+
+ /*
+ * Some events get initialized with sample_(period/type) set,
+ * like tracepoints. Clear it up for counting.
+ */
+ attr->sample_period = 0;
+
+ if (config->identifier)
+ attr->sample_type = PERF_SAMPLE_IDENTIFIER;
+
+ if (config->all_user) {
+ attr->exclude_kernel = 1;
+ attr->exclude_user = 0;
+ }
+
+ if (config->all_kernel) {
+ attr->exclude_kernel = 0;
+ attr->exclude_user = 1;
+ }
+
+ /*
+ * Disabling all counters initially, they will be enabled
+ * either manually by us or by kernel via enable_on_exec
+ * set later.
+ */
+ if (evsel__is_group_leader(evsel)) {
+ attr->disabled = 1;
+
+ if (target__enable_on_exec(&target))
+ attr->enable_on_exec = 1;
+ }
+
+ return evsel__open_per_cpu_and_thread(evsel, evsel__cpus(evsel), cpu_map_idx,
+ evsel->core.threads);
}
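
Note: create_perf_stat_counter() above configures a counting (non-sampling) event: TOTAL_TIME_ENABLED/RUNNING in read_format, sample_period cleared, and the event opened disabled. A minimal self-contained sketch of that attr setup using the raw perf_event_open() syscall on the current process, with error handling trimmed to the essentials; it is an illustration, not the perf-internal helper.

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	struct { uint64_t value, time_enabled, time_running; } count;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	/* Same read_format perf stat asks for, so multiplexing is visible. */
	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
			   PERF_FORMAT_TOTAL_TIME_RUNNING;
	attr.disabled = 1;	/* enabled explicitly below */
	attr.sample_period = 0;	/* counting, not sampling */

	int fd = perf_event_open(&attr, 0 /* self */, -1 /* any cpu */, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	for (volatile long i = 0; i < 1000000; i++)
		;
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &count, sizeof(count)) != sizeof(count)) {
		perror("read");
		return 1;
	}
	printf("instructions: %llu (enabled %llu ns, running %llu ns)\n",
	       (unsigned long long)count.value,
	       (unsigned long long)count.time_enabled,
	       (unsigned long long)count.time_running);
	close(fd);
	return 0;
}
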
static int __run_perf_stat(int argc, const char **argv, int run_idx)
@@ -700,8 +727,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
bool is_pipe = STAT_RECORD ? perf_stat.data.is_pipe : false;
struct evlist_cpu_iterator evlist_cpu_itr;
struct affinity saved_affinity, *affinity = NULL;
- int err;
- bool second_pass = false;
+ int err, open_err = 0;
+ bool second_pass = false, has_supported_counters;
if (forks) {
if (evlist__prepare_workload(evsel_list, &target, argv, is_pipe, workload_exec_failed_signal) < 0) {
@@ -712,15 +739,19 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
}
if (!cpu_map__is_dummy(evsel_list->core.user_requested_cpus)) {
- if (affinity__setup(&saved_affinity) < 0)
- return -1;
+ if (affinity__setup(&saved_affinity) < 0) {
+ err = -1;
+ goto err_out;
+ }
affinity = &saved_affinity;
}
evlist__for_each_entry(evsel_list, counter) {
counter->reset_group = false;
- if (bpf_counter__load(counter, &target))
- return -1;
+ if (bpf_counter__load(counter, &target)) {
+ err = -1;
+ goto err_out;
+ }
if (!(evsel__is_bperf(counter)))
all_counters_use_bpf = false;
}
@@ -737,14 +768,17 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (target.use_bpf)
break;
- if (counter->reset_group || counter->errored)
+ if (counter->reset_group || !counter->supported)
continue;
if (evsel__is_bperf(counter))
continue;
-try_again:
- if (create_perf_stat_counter(counter, &stat_config, &target,
- evlist_cpu_itr.cpu_map_idx) < 0) {
+ while (true) {
+ if (create_perf_stat_counter(counter, &stat_config,
+ evlist_cpu_itr.cpu_map_idx) == 0)
+ break;
+
+ open_err = errno;
/*
* Weak group failed. We cannot just undo this here
* because earlier CPUs might be in group mode, and the kernel
@@ -752,28 +786,19 @@ try_again:
* it to later.
* Don't close here because we're in the wrong affinity.
*/
- if ((errno == EINVAL || errno == EBADF) &&
+ if ((open_err == EINVAL || open_err == EBADF) &&
evsel__leader(counter) != counter &&
counter->weak_group) {
evlist__reset_weak_group(evsel_list, counter, false);
assert(counter->reset_group);
+ counter->supported = true;
second_pass = true;
- continue;
- }
-
- switch (stat_handle_error(counter)) {
- case COUNTER_FATAL:
- return -1;
- case COUNTER_RETRY:
- goto try_again;
- case COUNTER_SKIP:
- continue;
- default:
break;
}
+ if (stat_handle_error(counter, open_err) != COUNTER_RETRY)
+ break;
}
- counter->supported = true;
}
if (second_pass) {
@@ -786,7 +811,7 @@ try_again:
evlist__for_each_cpu(evlist_cpu_itr, evsel_list, affinity) {
counter = evlist_cpu_itr.evsel;
- if (!counter->reset_group && !counter->errored)
+ if (!counter->reset_group && counter->supported)
continue;
perf_evsel__close_cpu(&counter->core, evlist_cpu_itr.cpu_map_idx);
@@ -797,43 +822,52 @@ try_again:
if (!counter->reset_group)
continue;
-try_again_reset:
- pr_debug2("reopening weak %s\n", evsel__name(counter));
- if (create_perf_stat_counter(counter, &stat_config, &target,
- evlist_cpu_itr.cpu_map_idx) < 0) {
-
- switch (stat_handle_error(counter)) {
- case COUNTER_FATAL:
- return -1;
- case COUNTER_RETRY:
- goto try_again_reset;
- case COUNTER_SKIP:
- continue;
- default:
+
+ while (true) {
+ pr_debug2("reopening weak %s\n", evsel__name(counter));
+ if (create_perf_stat_counter(counter, &stat_config,
+ evlist_cpu_itr.cpu_map_idx) == 0)
+ break;
+
+ open_err = errno;
+ if (stat_handle_error(counter, open_err) != COUNTER_RETRY)
break;
- }
}
- counter->supported = true;
}
}
affinity__cleanup(affinity);
+ affinity = NULL;
+ has_supported_counters = false;
evlist__for_each_entry(evsel_list, counter) {
if (!counter->supported) {
perf_evsel__free_fd(&counter->core);
continue;
}
+ has_supported_counters = true;
l = strlen(counter->unit);
if (l > stat_config.unit_width)
stat_config.unit_width = l;
if (evsel__should_store_id(counter) &&
- evsel__store_ids(counter, evsel_list))
- return -1;
+ evsel__store_ids(counter, evsel_list)) {
+ err = -1;
+ goto err_out;
+ }
}
+ if (!has_supported_counters) {
+ evsel__open_strerror(evlist__first(evsel_list), &target, open_err,
+ msg, sizeof(msg));
+ ui__error("No supported events found.\n%s\n", msg);
- if (evlist__apply_filters(evsel_list, &counter)) {
+ if (child_pid != -1)
+ kill(child_pid, SIGTERM);
+ err = -1;
+ goto err_out;
+ }
+
+ if (evlist__apply_filters(evsel_list, &counter, &target)) {
pr_err("failed to set filter \"%s\" on event %s with %d (%s)\n",
counter->filter, evsel__name(counter), errno,
str_error_r(errno, msg, sizeof(msg)));
@@ -851,20 +885,23 @@ try_again_reset:
}
if (err < 0)
- return err;
+ goto err_out;
err = perf_event__synthesize_stat_events(&stat_config, NULL, evsel_list,
process_synthesized_event, is_pipe);
if (err < 0)
- return err;
+ goto err_out;
+
}
if (target.initial_delay) {
pr_info(EVLIST_DISABLED_MSG);
} else {
err = enable_counters();
- if (err)
- return -1;
+ if (err) {
+ err = -1;
+ goto err_out;
+ }
}
/* Exec the command, if any */
@@ -874,8 +911,10 @@ try_again_reset:
if (target.initial_delay > 0) {
usleep(target.initial_delay * USEC_PER_MSEC);
err = enable_counters();
- if (err)
- return -1;
+ if (err) {
+ err = -1;
+ goto err_out;
+ }
pr_info(EVLIST_ENABLED_MSG);
}
@@ -895,7 +934,8 @@ try_again_reset:
if (workload_exec_errno) {
const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
pr_err("Workload failed: %s\n", emsg);
- return -1;
+ err = -1;
+ goto err_out;
}
if (WIFSIGNALED(status))
@@ -931,7 +971,7 @@ try_again_reset:
* avoid arbitrary skew, we must read all counters before closing any
* group leaders.
*/
- if (read_counters(&(struct timespec) { .tv_nsec = t1-t0 }) == 0)
+ if (read_counters() == 0)
process_counters();
/*
@@ -942,8 +982,23 @@ try_again_reset:
evlist__close(evsel_list);
return WEXITSTATUS(status);
+
+err_out:
+ if (forks)
+ evlist__cancel_workload(evsel_list);
+
+ affinity__cleanup(affinity);
+ return err;
}
+/*
+ * Returns -1 for fatal errors which signifies to not continue
+ * when in repeat mode.
+ *
+ * Returns < -1 error codes when stat record is used. These
+ * result in the stat information being displayed, but writing
+ * to the file fails and is non fatal.
+ */
static int run_perf_stat(int argc, const char **argv, int run_idx)
{
int ret;
@@ -1024,16 +1079,6 @@ static void sig_atexit(void)
kill(getpid(), signr);
}
-void perf_stat__set_big_num(int set)
-{
- stat_config.big_num = (set != 0);
-}
-
-void perf_stat__set_no_csv_summary(int set)
-{
- stat_config.no_csv_summary = (set != 0);
-}
-
static int stat__set_big_num(const struct option *opt __maybe_unused,
const char *s __maybe_unused, int unset)
{
@@ -1116,7 +1161,7 @@ static int parse_cache_level(const struct option *opt,
int unset __maybe_unused)
{
int level;
- u32 *aggr_mode = (u32 *)opt->value;
+ struct opt_aggr_mode *opt_aggr_mode = (struct opt_aggr_mode *)opt->value;
u32 *aggr_level = (u32 *)opt->data;
/*
@@ -1155,153 +1200,11 @@ static int parse_cache_level(const struct option *opt,
return -EINVAL;
}
out:
- *aggr_mode = AGGR_CACHE;
+ opt_aggr_mode->cache = true;
*aggr_level = level;
return 0;
}
-static struct option stat_options[] = {
- OPT_BOOLEAN('T', "transaction", &transaction_run,
- "hardware transaction statistics"),
- OPT_CALLBACK('e', "event", &parse_events_option_args, "event",
- "event selector. use 'perf list' to list available events",
- parse_events_option),
- OPT_CALLBACK(0, "filter", &evsel_list, "filter",
- "event filter", parse_filter),
- OPT_BOOLEAN('i', "no-inherit", &stat_config.no_inherit,
- "child tasks do not inherit counters"),
- OPT_STRING('p', "pid", &target.pid, "pid",
- "stat events on existing process id"),
- OPT_STRING('t', "tid", &target.tid, "tid",
- "stat events on existing thread id"),
-#ifdef HAVE_BPF_SKEL
- OPT_STRING('b', "bpf-prog", &target.bpf_str, "bpf-prog-id",
- "stat events on existing bpf program id"),
- OPT_BOOLEAN(0, "bpf-counters", &target.use_bpf,
- "use bpf program to count events"),
- OPT_STRING(0, "bpf-attr-map", &target.attr_map, "attr-map-path",
- "path to perf_event_attr map"),
-#endif
- OPT_BOOLEAN('a', "all-cpus", &target.system_wide,
- "system-wide collection from all CPUs"),
- OPT_BOOLEAN(0, "scale", &stat_config.scale,
- "Use --no-scale to disable counter scaling for multiplexing"),
- OPT_INCR('v', "verbose", &verbose,
- "be more verbose (show counter open errors, etc)"),
- OPT_INTEGER('r', "repeat", &stat_config.run_count,
- "repeat command and print average + stddev (max: 100, forever: 0)"),
- OPT_BOOLEAN(0, "table", &stat_config.walltime_run_table,
- "display details about each run (only with -r option)"),
- OPT_BOOLEAN('n', "null", &stat_config.null_run,
- "null run - dont start any counters"),
- OPT_INCR('d', "detailed", &detailed_run,
- "detailed run - start a lot of events"),
- OPT_BOOLEAN('S', "sync", &sync_run,
- "call sync() before starting a run"),
- OPT_CALLBACK_NOOPT('B', "big-num", NULL, NULL,
- "print large numbers with thousands\' separators",
- stat__set_big_num),
- OPT_STRING('C', "cpu", &target.cpu_list, "cpu",
- "list of cpus to monitor in system-wide"),
- OPT_SET_UINT('A', "no-aggr", &stat_config.aggr_mode,
- "disable aggregation across CPUs or PMUs", AGGR_NONE),
- OPT_SET_UINT(0, "no-merge", &stat_config.aggr_mode,
- "disable aggregation the same as -A or -no-aggr", AGGR_NONE),
- OPT_BOOLEAN(0, "hybrid-merge", &stat_config.hybrid_merge,
- "Merge identical named hybrid events"),
- OPT_STRING('x', "field-separator", &stat_config.csv_sep, "separator",
- "print counts with custom separator"),
- OPT_BOOLEAN('j', "json-output", &stat_config.json_output,
- "print counts in JSON format"),
- OPT_CALLBACK('G', "cgroup", &evsel_list, "name",
- "monitor event in cgroup name only", parse_stat_cgroups),
- OPT_STRING(0, "for-each-cgroup", &stat_config.cgroup_list, "name",
- "expand events for each cgroup"),
- OPT_STRING('o', "output", &output_name, "file", "output file name"),
- OPT_BOOLEAN(0, "append", &append_file, "append to the output file"),
- OPT_INTEGER(0, "log-fd", &output_fd,
- "log output to fd, instead of stderr"),
- OPT_STRING(0, "pre", &pre_cmd, "command",
- "command to run prior to the measured command"),
- OPT_STRING(0, "post", &post_cmd, "command",
- "command to run after to the measured command"),
- OPT_UINTEGER('I', "interval-print", &stat_config.interval,
- "print counts at regular interval in ms "
- "(overhead is possible for values <= 100ms)"),
- OPT_INTEGER(0, "interval-count", &stat_config.times,
- "print counts for fixed number of times"),
- OPT_BOOLEAN(0, "interval-clear", &stat_config.interval_clear,
- "clear screen in between new interval"),
- OPT_UINTEGER(0, "timeout", &stat_config.timeout,
- "stop workload and print counts after a timeout period in ms (>= 10ms)"),
- OPT_SET_UINT(0, "per-socket", &stat_config.aggr_mode,
- "aggregate counts per processor socket", AGGR_SOCKET),
- OPT_SET_UINT(0, "per-die", &stat_config.aggr_mode,
- "aggregate counts per processor die", AGGR_DIE),
- OPT_CALLBACK_OPTARG(0, "per-cache", &stat_config.aggr_mode, &stat_config.aggr_level,
- "cache level", "aggregate count at this cache level (Default: LLC)",
- parse_cache_level),
- OPT_SET_UINT(0, "per-core", &stat_config.aggr_mode,
- "aggregate counts per physical processor core", AGGR_CORE),
- OPT_SET_UINT(0, "per-thread", &stat_config.aggr_mode,
- "aggregate counts per thread", AGGR_THREAD),
- OPT_SET_UINT(0, "per-node", &stat_config.aggr_mode,
- "aggregate counts per numa node", AGGR_NODE),
- OPT_INTEGER('D', "delay", &target.initial_delay,
- "ms to wait before starting measurement after program start (-1: start with events disabled)"),
- OPT_CALLBACK_NOOPT(0, "metric-only", &stat_config.metric_only, NULL,
- "Only print computed metrics. No raw values", enable_metric_only),
- OPT_BOOLEAN(0, "metric-no-group", &stat_config.metric_no_group,
- "don't group metric events, impacts multiplexing"),
- OPT_BOOLEAN(0, "metric-no-merge", &stat_config.metric_no_merge,
- "don't try to share events between metrics in a group"),
- OPT_BOOLEAN(0, "metric-no-threshold", &stat_config.metric_no_threshold,
- "disable adding events for the metric threshold calculation"),
- OPT_BOOLEAN(0, "topdown", &topdown_run,
- "measure top-down statistics"),
- OPT_UINTEGER(0, "td-level", &stat_config.topdown_level,
- "Set the metrics level for the top-down statistics (0: max level)"),
- OPT_BOOLEAN(0, "smi-cost", &smi_cost,
- "measure SMI cost"),
- OPT_CALLBACK('M', "metrics", &evsel_list, "metric/metric group list",
- "monitor specified metrics or metric groups (separated by ,)",
- append_metric_groups),
- OPT_BOOLEAN_FLAG(0, "all-kernel", &stat_config.all_kernel,
- "Configure all used events to run in kernel space.",
- PARSE_OPT_EXCLUSIVE),
- OPT_BOOLEAN_FLAG(0, "all-user", &stat_config.all_user,
- "Configure all used events to run in user space.",
- PARSE_OPT_EXCLUSIVE),
- OPT_BOOLEAN(0, "percore-show-thread", &stat_config.percore_show_thread,
- "Use with 'percore' event qualifier to show the event "
- "counts of one hardware thread by sum up total hardware "
- "threads of same physical core"),
- OPT_BOOLEAN(0, "summary", &stat_config.summary,
- "print summary for interval mode"),
- OPT_BOOLEAN(0, "no-csv-summary", &stat_config.no_csv_summary,
- "don't print 'summary' for CSV summary output"),
- OPT_BOOLEAN(0, "quiet", &quiet,
- "don't print any output, messages or warnings (useful with record)"),
- OPT_CALLBACK(0, "cputype", &evsel_list, "hybrid cpu type",
- "Only enable events on applying cpu with this type "
- "for hybrid platform (e.g. core or atom)",
- parse_cputype),
-#ifdef HAVE_LIBPFM
- OPT_CALLBACK(0, "pfm-events", &evsel_list, "event",
- "libpfm4 event selector. use 'perf list' to list available events",
- parse_libpfm_events_option),
-#endif
- OPT_CALLBACK(0, "control", &stat_config, "fd:ctl-fd[,ack-fd] or fifo:ctl-fifo[,ack-fifo]",
- "Listen on ctl-fd descriptor for command to control measurement ('enable': enable events, 'disable': disable events).\n"
- "\t\t\t Optionally send control command completion ('ack\\n') to ack-fd descriptor.\n"
- "\t\t\t Alternatively, ctl-fifo / ack-fifo will be opened and used as ctl-fd / ack-fd.",
- parse_control_option),
- OPT_CALLBACK_OPTARG(0, "iostat", &evsel_list, &stat_config, "default",
- "measure I/O performance metrics provided by arch/platform",
- iostat_parse),
- OPT_END()
-};
-
/**
* Calculate the cache instance ID from the map in
* /sys/devices/system/cpu/cpuX/cache/indexY/shared_cpu_list
@@ -1317,10 +1220,9 @@ static int cpu__get_cache_id_from_map(struct perf_cpu cpu, char *map)
* be the first online CPU in the cache domain else use the
* first online CPU of the cache domain as the ID.
*/
- if (perf_cpu_map__has_any_cpu_or_is_empty(cpu_map))
+ id = perf_cpu_map__min(cpu_map).cpu;
+ if (id == -1)
id = cpu.cpu;
- else
- id = perf_cpu_map__cpu(cpu_map, 0).cpu;
/* Free the perf_cpu_map used to find the cache ID */
perf_cpu_map__put(cpu_map);
@@ -1428,6 +1330,7 @@ static struct aggr_cpu_id aggr_cpu_id__cache(struct perf_cpu cpu, void *data)
static const char *const aggr_mode__string[] = {
[AGGR_CORE] = "core",
[AGGR_CACHE] = "cache",
+ [AGGR_CLUSTER] = "cluster",
[AGGR_DIE] = "die",
[AGGR_GLOBAL] = "global",
[AGGR_NODE] = "node",
@@ -1455,6 +1358,12 @@ static struct aggr_cpu_id perf_stat__get_cache_id(struct perf_stat_config *confi
return aggr_cpu_id__cache(cpu, /*data=*/NULL);
}
+static struct aggr_cpu_id perf_stat__get_cluster(struct perf_stat_config *config __maybe_unused,
+ struct perf_cpu cpu)
+{
+ return aggr_cpu_id__cluster(cpu, /*data=*/NULL);
+}
+
static struct aggr_cpu_id perf_stat__get_core(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
@@ -1485,7 +1394,7 @@ static struct aggr_cpu_id perf_stat__get_aggr(struct perf_stat_config *config,
struct aggr_cpu_id id;
/* per-process mode - should use global aggr mode */
- if (cpu.cpu == -1)
+ if (cpu.cpu == -1 || cpu.cpu >= config->cpus_aggr_map->nr)
return get_id(config, cpu);
if (aggr_cpu_id__is_empty(&config->cpus_aggr_map->map[cpu.cpu]))
@@ -1507,6 +1416,12 @@ static struct aggr_cpu_id perf_stat__get_die_cached(struct perf_stat_config *con
return perf_stat__get_aggr(config, perf_stat__get_die, cpu);
}
+static struct aggr_cpu_id perf_stat__get_cluster_cached(struct perf_stat_config *config,
+ struct perf_cpu cpu)
+{
+ return perf_stat__get_aggr(config, perf_stat__get_cluster, cpu);
+}
+
static struct aggr_cpu_id perf_stat__get_cache_id_cached(struct perf_stat_config *config,
struct perf_cpu cpu)
{
@@ -1544,6 +1459,8 @@ static aggr_cpu_id_get_t aggr_mode__get_aggr(enum aggr_mode aggr_mode)
return aggr_cpu_id__socket;
case AGGR_DIE:
return aggr_cpu_id__die;
+ case AGGR_CLUSTER:
+ return aggr_cpu_id__cluster;
case AGGR_CACHE:
return aggr_cpu_id__cache;
case AGGR_CORE:
@@ -1569,6 +1486,8 @@ static aggr_get_id_t aggr_mode__get_id(enum aggr_mode aggr_mode)
return perf_stat__get_socket_cached;
case AGGR_DIE:
return perf_stat__get_die_cached;
+ case AGGR_CLUSTER:
+ return perf_stat__get_cluster_cached;
case AGGR_CACHE:
return perf_stat__get_cache_id_cached;
case AGGR_CORE:
@@ -1623,33 +1542,20 @@ static int perf_stat_init_aggr_mode(void)
* taking the highest cpu number to be the size of
* the aggregation translate cpumap.
*/
- if (!perf_cpu_map__has_any_cpu_or_is_empty(evsel_list->core.user_requested_cpus))
- nr = perf_cpu_map__max(evsel_list->core.user_requested_cpus).cpu;
- else
- nr = 0;
- stat_config.cpus_aggr_map = cpu_aggr_map__empty_new(nr + 1);
+ nr = perf_cpu_map__max(evsel_list->core.all_cpus).cpu + 1;
+ stat_config.cpus_aggr_map = cpu_aggr_map__empty_new(nr);
return stat_config.cpus_aggr_map ? 0 : -ENOMEM;
}
static void cpu_aggr_map__delete(struct cpu_aggr_map *map)
{
- if (map) {
- WARN_ONCE(refcount_read(&map->refcnt) != 0,
- "cpu_aggr_map refcnt unbalanced\n");
- free(map);
- }
-}
-
-static void cpu_aggr_map__put(struct cpu_aggr_map *map)
-{
- if (map && refcount_dec_and_test(&map->refcnt))
- cpu_aggr_map__delete(map);
+ free(map);
}
static void perf_stat__exit_aggr_mode(void)
{
- cpu_aggr_map__put(stat_config.aggr_map);
- cpu_aggr_map__put(stat_config.cpus_aggr_map);
+ cpu_aggr_map__delete(stat_config.aggr_map);
+ cpu_aggr_map__delete(stat_config.cpus_aggr_map);
stat_config.aggr_map = NULL;
stat_config.cpus_aggr_map = NULL;
}
@@ -1737,6 +1643,21 @@ static struct aggr_cpu_id perf_env__get_cache_aggr_by_cpu(struct perf_cpu cpu,
return id;
}
+static struct aggr_cpu_id perf_env__get_cluster_aggr_by_cpu(struct perf_cpu cpu,
+ void *data)
+{
+ struct perf_env *env = data;
+ struct aggr_cpu_id id = aggr_cpu_id__empty();
+
+ if (cpu.cpu != -1) {
+ id.socket = env->cpu[cpu.cpu].socket_id;
+ id.die = env->cpu[cpu.cpu].die_id;
+ id.cluster = env->cpu[cpu.cpu].cluster_id;
+ }
+
+ return id;
+}
+
static struct aggr_cpu_id perf_env__get_core_aggr_by_cpu(struct perf_cpu cpu, void *data)
{
struct perf_env *env = data;
@@ -1744,12 +1665,12 @@ static struct aggr_cpu_id perf_env__get_core_aggr_by_cpu(struct perf_cpu cpu, vo
if (cpu.cpu != -1) {
/*
- * core_id is relative to socket and die,
- * we need a global id. So we set
- * socket, die id and core id
+ * core_id is relative to socket, die and cluster, we need a
+ * global id. So we set socket, die id, cluster id and core id.
*/
id.socket = env->cpu[cpu.cpu].socket_id;
id.die = env->cpu[cpu.cpu].die_id;
+ id.cluster = env->cpu[cpu.cpu].cluster_id;
id.core = env->cpu[cpu.cpu].core_id;
}
@@ -1797,42 +1718,48 @@ static struct aggr_cpu_id perf_env__get_global_aggr_by_cpu(struct perf_cpu cpu _
static struct aggr_cpu_id perf_stat__get_socket_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
- return perf_env__get_socket_aggr_by_cpu(cpu, &perf_stat.session->header.env);
+ return perf_env__get_socket_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
}
static struct aggr_cpu_id perf_stat__get_die_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
- return perf_env__get_die_aggr_by_cpu(cpu, &perf_stat.session->header.env);
+ return perf_env__get_die_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
+}
+
+static struct aggr_cpu_id perf_stat__get_cluster_file(struct perf_stat_config *config __maybe_unused,
+ struct perf_cpu cpu)
+{
+ return perf_env__get_cluster_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
}
static struct aggr_cpu_id perf_stat__get_cache_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
- return perf_env__get_cache_aggr_by_cpu(cpu, &perf_stat.session->header.env);
+ return perf_env__get_cache_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
}
static struct aggr_cpu_id perf_stat__get_core_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
- return perf_env__get_core_aggr_by_cpu(cpu, &perf_stat.session->header.env);
+ return perf_env__get_core_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
}
static struct aggr_cpu_id perf_stat__get_cpu_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
- return perf_env__get_cpu_aggr_by_cpu(cpu, &perf_stat.session->header.env);
+ return perf_env__get_cpu_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
}
static struct aggr_cpu_id perf_stat__get_node_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
- return perf_env__get_node_aggr_by_cpu(cpu, &perf_stat.session->header.env);
+ return perf_env__get_node_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
}
static struct aggr_cpu_id perf_stat__get_global_file(struct perf_stat_config *config __maybe_unused,
struct perf_cpu cpu)
{
- return perf_env__get_global_aggr_by_cpu(cpu, &perf_stat.session->header.env);
+ return perf_env__get_global_aggr_by_cpu(cpu, perf_session__env(perf_stat.session));
}
static aggr_cpu_id_get_t aggr_mode__get_aggr_file(enum aggr_mode aggr_mode)
@@ -1842,6 +1769,8 @@ static aggr_cpu_id_get_t aggr_mode__get_aggr_file(enum aggr_mode aggr_mode)
return perf_env__get_socket_aggr_by_cpu;
case AGGR_DIE:
return perf_env__get_die_aggr_by_cpu;
+ case AGGR_CLUSTER:
+ return perf_env__get_cluster_aggr_by_cpu;
case AGGR_CACHE:
return perf_env__get_cache_aggr_by_cpu;
case AGGR_CORE:
@@ -1867,6 +1796,8 @@ static aggr_get_id_t aggr_mode__get_id_file(enum aggr_mode aggr_mode)
return perf_stat__get_socket_file;
case AGGR_DIE:
return perf_stat__get_die_file;
+ case AGGR_CLUSTER:
+ return perf_stat__get_cluster_file;
case AGGR_CACHE:
return perf_stat__get_cache_file;
case AGGR_CORE:
@@ -1887,7 +1818,7 @@ static aggr_get_id_t aggr_mode__get_id_file(enum aggr_mode aggr_mode)
static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
{
- struct perf_env *env = &st->session->header.env;
+ struct perf_env *env = perf_session__env(st->session);
aggr_cpu_id_get_t get_id = aggr_mode__get_aggr_file(stat_config.aggr_mode);
bool needs_sort = stat_config.aggr_mode != AGGR_NONE;
@@ -1921,130 +1852,25 @@ static int perf_stat_init_aggr_mode_file(struct perf_stat *st)
}
/*
- * Add default attributes, if there were no attributes specified or
+ * Add default events, if there were no attributes specified or
* if -d/--detailed, -d -d or -d -d -d is used:
*/
-static int add_default_attributes(void)
+static int add_default_events(void)
{
- struct perf_event_attr default_attrs0[] = {
-
- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK },
- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CONTEXT_SWITCHES },
- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_CPU_MIGRATIONS },
- { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_PAGE_FAULTS },
-
- { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_CPU_CYCLES },
-};
- struct perf_event_attr frontend_attrs[] = {
- { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
-};
- struct perf_event_attr backend_attrs[] = {
- { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
-};
- struct perf_event_attr default_attrs1[] = {
- { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_INSTRUCTIONS },
- { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
- { .type = PERF_TYPE_HARDWARE, .config = PERF_COUNT_HW_BRANCH_MISSES },
-
-};
-
-/*
- * Detailed stats (-d), covering the L1 and last level data caches:
- */
- struct perf_event_attr detailed_attrs[] = {
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_L1D << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_L1D << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_LL << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_LL << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
-};
-
-/*
- * Very detailed stats (-d -d), covering the instruction cache and the TLB caches:
- */
- struct perf_event_attr very_detailed_attrs[] = {
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_L1I << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_L1I << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_DTLB << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_DTLB << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_ITLB << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_ITLB << 0 |
- (PERF_COUNT_HW_CACHE_OP_READ << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
-
-};
+ const char *pmu = parse_events_option_args.pmu_filter ?: "all";
+ struct parse_events_error err;
+ struct evlist *evlist = evlist__new();
+ struct evsel *evsel;
+ int ret = 0;
-/*
- * Very, very detailed stats (-d -d -d), adding prefetch events:
- */
- struct perf_event_attr very_very_detailed_attrs[] = {
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_L1D << 0 |
- (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16) },
-
- { .type = PERF_TYPE_HW_CACHE,
- .config =
- PERF_COUNT_HW_CACHE_L1D << 0 |
- (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
- (PERF_COUNT_HW_CACHE_RESULT_MISS << 16) },
-};
+ if (!evlist)
+ return -ENOMEM;
- struct perf_event_attr default_null_attrs[] = {};
- const char *pmu = parse_events_option_args.pmu_filter ?: "all";
+ parse_events_error__init(&err);
/* Set attrs if no event is selected and !null_run: */
if (stat_config.null_run)
- return 0;
+ goto out;
if (transaction_run) {
/* Handle -T as -M transaction. Once platform specific metrics
@@ -2052,17 +1878,19 @@ static int add_default_attributes(void)
* will use this approach. To determine transaction support
* on an architecture test for such a metric name.
*/
- if (!metricgroup__has_metric(pmu, "transaction")) {
+ if (!metricgroup__has_metric_or_groups(pmu, "transaction")) {
pr_err("Missing transaction metrics\n");
- return -1;
+ ret = -1;
+ goto out;
}
- return metricgroup__parse_groups(evsel_list, pmu, "transaction",
+ ret = metricgroup__parse_groups(evlist, pmu, "transaction",
stat_config.metric_no_group,
stat_config.metric_no_merge,
stat_config.metric_no_threshold,
stat_config.user_requested_cpu_list,
stat_config.system_wide,
- &stat_config.metric_events);
+ stat_config.hardware_aware_grouping);
+ goto out;
}
if (smi_cost) {
@@ -2070,32 +1898,36 @@ static int add_default_attributes(void)
if (sysfs__read_int(FREEZE_ON_SMI_PATH, &smi) < 0) {
pr_err("freeze_on_smi is not supported.\n");
- return -1;
+ ret = -1;
+ goto out;
}
if (!smi) {
if (sysfs__write_int(FREEZE_ON_SMI_PATH, 1) < 0) {
- fprintf(stderr, "Failed to set freeze_on_smi.\n");
- return -1;
+ pr_err("Failed to set freeze_on_smi.\n");
+ ret = -1;
+ goto out;
}
smi_reset = true;
}
- if (!metricgroup__has_metric(pmu, "smi")) {
+ if (!metricgroup__has_metric_or_groups(pmu, "smi")) {
pr_err("Missing smi metrics\n");
- return -1;
+ ret = -1;
+ goto out;
}
if (!force_metric_only)
stat_config.metric_only = true;
- return metricgroup__parse_groups(evsel_list, pmu, "smi",
+ ret = metricgroup__parse_groups(evlist, pmu, "smi",
stat_config.metric_no_group,
stat_config.metric_no_merge,
stat_config.metric_no_threshold,
stat_config.user_requested_cpu_list,
stat_config.system_wide,
- &stat_config.metric_events);
+ stat_config.hardware_aware_grouping);
+ goto out;
}
if (topdown_run) {
@@ -2108,105 +1940,149 @@ static int add_default_attributes(void)
if (!max_level) {
pr_err("Topdown requested but the topdown metric groups aren't present.\n"
"(See perf list the metric groups have names like TopdownL1)\n");
- return -1;
+ ret = -1;
+ goto out;
}
if (stat_config.topdown_level > max_level) {
pr_err("Invalid top-down metrics level. The max level is %u.\n", max_level);
- return -1;
- } else if (!stat_config.topdown_level)
+ ret = -1;
+ goto out;
+ } else if (!stat_config.topdown_level) {
stat_config.topdown_level = 1;
-
+ }
if (!stat_config.interval && !stat_config.metric_only) {
fprintf(stat_config.output,
"Topdown accuracy may decrease when measuring long periods.\n"
"Please print the result regularly, e.g. -I1000\n");
}
str[8] = stat_config.topdown_level + '0';
- if (metricgroup__parse_groups(evsel_list,
+ if (metricgroup__parse_groups(evlist,
pmu, str,
/*metric_no_group=*/false,
/*metric_no_merge=*/false,
/*metric_no_threshold=*/true,
stat_config.user_requested_cpu_list,
stat_config.system_wide,
- &stat_config.metric_events) < 0)
- return -1;
+ stat_config.hardware_aware_grouping) < 0) {
+ ret = -1;
+ goto out;
+ }
}
if (!stat_config.topdown_level)
stat_config.topdown_level = 1;
- if (!evsel_list->core.nr_entries) {
+ if (!evlist->core.nr_entries && !evsel_list->core.nr_entries) {
/* No events so add defaults. */
if (target__has_cpu(&target))
- default_attrs0[0].config = PERF_COUNT_SW_CPU_CLOCK;
+ ret = parse_events(evlist, "cpu-clock", &err);
+ else
+ ret = parse_events(evlist, "task-clock", &err);
+ if (ret)
+ goto out;
+
+ ret = parse_events(evlist,
+ "context-switches,"
+ "cpu-migrations,"
+ "page-faults,"
+ "instructions,"
+ "cycles,"
+ "stalled-cycles-frontend,"
+ "stalled-cycles-backend,"
+ "branches,"
+ "branch-misses",
+ &err);
+ if (ret)
+ goto out;
- if (evlist__add_default_attrs(evsel_list, default_attrs0) < 0)
- return -1;
- if (perf_pmus__have_event("cpu", "stalled-cycles-frontend")) {
- if (evlist__add_default_attrs(evsel_list, frontend_attrs) < 0)
- return -1;
- }
- if (perf_pmus__have_event("cpu", "stalled-cycles-backend")) {
- if (evlist__add_default_attrs(evsel_list, backend_attrs) < 0)
- return -1;
- }
- if (evlist__add_default_attrs(evsel_list, default_attrs1) < 0)
- return -1;
/*
* Add TopdownL1 metrics if they exist. To minimize
* multiplexing, don't request threshold computation.
*/
- if (metricgroup__has_metric(pmu, "Default")) {
+ if (metricgroup__has_metric_or_groups(pmu, "Default")) {
struct evlist *metric_evlist = evlist__new();
- struct evsel *metric_evsel;
-
- if (!metric_evlist)
- return -1;
+ if (!metric_evlist) {
+ ret = -ENOMEM;
+ goto out;
+ }
if (metricgroup__parse_groups(metric_evlist, pmu, "Default",
/*metric_no_group=*/false,
/*metric_no_merge=*/false,
/*metric_no_threshold=*/true,
stat_config.user_requested_cpu_list,
stat_config.system_wide,
- &stat_config.metric_events) < 0)
- return -1;
-
- evlist__for_each_entry(metric_evlist, metric_evsel) {
- metric_evsel->skippable = true;
- metric_evsel->default_metricgroup = true;
+ stat_config.hardware_aware_grouping) < 0) {
+ ret = -1;
+ goto out;
}
- evlist__splice_list_tail(evsel_list, &metric_evlist->core.entries);
+
+ evlist__for_each_entry(metric_evlist, evsel)
+ evsel->default_metricgroup = true;
+
+ evlist__splice_list_tail(evlist, &metric_evlist->core.entries);
+ metricgroup__copy_metric_events(evlist, /*cgrp=*/NULL,
+ &evlist->metric_events,
+ &metric_evlist->metric_events);
evlist__delete(metric_evlist);
}
-
- /* Platform specific attrs */
- if (evlist__add_default_attrs(evsel_list, default_null_attrs) < 0)
- return -1;
}
/* Detailed events get appended to the event list: */
- if (detailed_run < 1)
- return 0;
-
- /* Append detailed run extra attributes: */
- if (evlist__add_default_attrs(evsel_list, detailed_attrs) < 0)
- return -1;
-
- if (detailed_run < 2)
- return 0;
-
- /* Append very detailed run extra attributes: */
- if (evlist__add_default_attrs(evsel_list, very_detailed_attrs) < 0)
- return -1;
-
- if (detailed_run < 3)
- return 0;
-
- /* Append very, very detailed run extra attributes: */
- return evlist__add_default_attrs(evsel_list, very_very_detailed_attrs);
+ if (!ret && detailed_run >= 1) {
+ /*
+ * Detailed stats (-d), covering the L1 and last level data
+ * caches:
+ */
+ ret = parse_events(evlist,
+ "L1-dcache-loads,"
+ "L1-dcache-load-misses,"
+ "LLC-loads,"
+ "LLC-load-misses",
+ &err);
+ }
+ if (!ret && detailed_run >= 2) {
+ /*
+ * Very detailed stats (-d -d), covering the instruction cache
+ * and the TLB caches:
+ */
+ ret = parse_events(evlist,
+ "L1-icache-loads,"
+ "L1-icache-load-misses,"
+ "dTLB-loads,"
+ "dTLB-load-misses,"
+ "iTLB-loads,"
+ "iTLB-load-misses",
+ &err);
+ }
+ if (!ret && detailed_run >= 3) {
+ /*
+ * Very, very detailed stats (-d -d -d), adding prefetch events:
+ */
+ ret = parse_events(evlist,
+ "L1-dcache-prefetches,"
+ "L1-dcache-prefetch-misses",
+ &err);
+ }
+out:
+ if (!ret) {
+ evlist__for_each_entry(evlist, evsel) {
+ /*
+ * Make at least one event non-skippable so fatal errors are visible.
+ * 'cycles' always used to be default and non-skippable, so use that.
+ */
+ if (strcmp("cycles", evsel__name(evsel)))
+ evsel->skippable = true;
+ }
+ }
+ parse_events_error__exit(&err);
+ evlist__splice_list_tail(evsel_list, &evlist->core.entries);
+ metricgroup__copy_metric_events(evsel_list, /*cgrp=*/NULL,
+ &evsel_list->metric_events,
+ &evlist->metric_events);
+ evlist__delete(evlist);
+ return ret;
}
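
A minimal illustrative sketch (not from the patch itself) of the pattern the rewritten function above relies on: event strings are parsed onto a throwaway evlist and, on success, spliced into the destination list. The helper name and the event string below are hypothetical; the calls (evlist__new, parse_events, parse_events_error__init/__exit, evlist__splice_list_tail, evlist__delete) are the ones used above and assume the perf-tools headers util/evlist.h and util/parse-events.h.

/* Hypothetical helper mirroring the parse-then-splice flow shown above. */
static int demo_add_parsed_events(struct evlist *dest, const char *events)
{
	struct parse_events_error err;
	struct evlist *tmp = evlist__new();
	int ret;

	if (!tmp)
		return -ENOMEM;

	parse_events_error__init(&err);
	ret = parse_events(tmp, events, &err);	/* e.g. "cycles,instructions" */
	if (!ret)
		evlist__splice_list_tail(dest, &tmp->core.entries);
	parse_events_error__exit(&err);
	evlist__delete(tmp);
	return ret;
}
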
static const char * const stat_record_usage[] = {
@@ -2228,13 +2104,15 @@ static void init_features(struct perf_session *session)
perf_header__clear_feat(&session->header, HEADER_AUXTRACE);
}
-static int __cmd_record(int argc, const char **argv)
+static int __cmd_record(const struct option stat_options[], struct opt_aggr_mode *opt_mode,
+ int argc, const char **argv)
{
struct perf_session *session;
struct perf_data *data = &perf_stat.data;
argc = parse_options(argc, argv, stat_options, stat_record_usage,
PARSE_OPT_STOP_AT_NON_OPTION);
+ stat_config.aggr_mode = opt_aggr_mode_to_aggr_mode(opt_mode);
if (output_name)
data->path = output_name;
@@ -2263,8 +2141,9 @@ static int process_stat_round_event(struct perf_session *session,
{
struct perf_record_stat_round *stat_round = &event->stat_round;
struct timespec tsh, *ts = NULL;
- const char **argv = session->header.env.cmdline_argv;
- int argc = session->header.env.nr_cmdline;
+ struct perf_env *env = perf_session__env(session);
+ const char **argv = env->cmdline_argv;
+ int argc = env->nr_cmdline;
process_counters();
@@ -2285,12 +2164,12 @@ static
int process_stat_config_event(struct perf_session *session,
union perf_event *event)
{
- struct perf_tool *tool = session->tool;
+ const struct perf_tool *tool = session->tool;
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
perf_event__read_stat_config(&stat_config, &event->stat_config);
- if (perf_cpu_map__has_any_cpu_or_is_empty(st->cpus)) {
+ if (perf_cpu_map__is_empty(st->cpus)) {
if (st->aggr_mode != AGGR_UNSET)
pr_warning("warning: processing task data, aggregation mode not set\n");
} else if (st->aggr_mode != AGGR_UNSET) {
@@ -2334,7 +2213,7 @@ static
int process_thread_map_event(struct perf_session *session,
union perf_event *event)
{
- struct perf_tool *tool = session->tool;
+ const struct perf_tool *tool = session->tool;
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
if (st->threads) {
@@ -2353,7 +2232,7 @@ static
int process_cpu_map_event(struct perf_session *session,
union perf_event *event)
{
- struct perf_tool *tool = session->tool;
+ const struct perf_tool *tool = session->tool;
struct perf_stat *st = container_of(tool, struct perf_stat, tool);
struct perf_cpu_map *cpus;
@@ -2376,15 +2255,6 @@ static const char * const stat_report_usage[] = {
};
static struct perf_stat perf_stat = {
- .tool = {
- .attr = perf_event__process_attr,
- .event_update = perf_event__process_event_update,
- .thread_map = process_thread_map_event,
- .cpu_map = process_cpu_map_event,
- .stat_config = process_stat_config_event,
- .stat = perf_event__process_stat_event,
- .stat_round = process_stat_round_event,
- },
.aggr_mode = AGGR_UNSET,
.aggr_level = 0,
};
@@ -2398,6 +2268,8 @@ static int __cmd_report(int argc, const char **argv)
"aggregate counts per processor socket", AGGR_SOCKET),
OPT_SET_UINT(0, "per-die", &perf_stat.aggr_mode,
"aggregate counts per processor die", AGGR_DIE),
+ OPT_SET_UINT(0, "per-cluster", &perf_stat.aggr_mode,
+		     "aggregate counts per processor cluster", AGGR_CLUSTER),
OPT_CALLBACK_OPTARG(0, "per-cache", &perf_stat.aggr_mode, &perf_stat.aggr_level,
"cache level",
"aggregate count at this cache level (Default: LLC)",
@@ -2425,6 +2297,15 @@ static int __cmd_report(int argc, const char **argv)
perf_stat.data.path = input_name;
perf_stat.data.mode = PERF_DATA_MODE_READ;
+ perf_tool__init(&perf_stat.tool, /*ordered_events=*/false);
+ perf_stat.tool.attr = perf_event__process_attr;
+ perf_stat.tool.event_update = perf_event__process_event_update;
+ perf_stat.tool.thread_map = process_thread_map_event;
+ perf_stat.tool.cpu_map = process_cpu_map_event;
+ perf_stat.tool.stat_config = process_stat_config_event;
+ perf_stat.tool.stat = perf_event__process_stat_event;
+ perf_stat.tool.stat_round = process_stat_round_event;
+
session = perf_session__new(&perf_stat.data, &perf_stat.tool);
if (IS_ERR(session))
return PTR_ERR(session);
@@ -2473,8 +2354,182 @@ static void setup_system_wide(int forks)
}
}
+#ifdef HAVE_ARCH_X86_64_SUPPORT
+static int parse_tpebs_mode(const struct option *opt, const char *str,
+ int unset __maybe_unused)
+{
+ enum tpebs_mode *mode = opt->value;
+
+ if (!strcasecmp("mean", str)) {
+ *mode = TPEBS_MODE__MEAN;
+ return 0;
+ }
+ if (!strcasecmp("min", str)) {
+ *mode = TPEBS_MODE__MIN;
+ return 0;
+ }
+ if (!strcasecmp("max", str)) {
+ *mode = TPEBS_MODE__MAX;
+ return 0;
+ }
+ if (!strcasecmp("last", str)) {
+ *mode = TPEBS_MODE__LAST;
+ return 0;
+ }
+ return -1;
+}
+#endif // HAVE_ARCH_X86_64_SUPPORT
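
An illustrative usage sketch, assuming the same translation unit, HAVE_ARCH_X86_64_SUPPORT and <assert.h>: this is what --tpebs-mode=min resolves to once the OPT_CALLBACK wiring further below hands the string to parse_tpebs_mode(); the demo function is hypothetical.

/* Hypothetical check, not part of the patch. */
static void demo_parse_tpebs_mode(void)
{
	enum tpebs_mode mode = TPEBS_MODE__MEAN;
	struct option opt = { .value = &mode };

	/* strcasecmp() in parse_tpebs_mode() also accepts "MIN", "Min", ... */
	if (parse_tpebs_mode(&opt, "min", /*unset=*/0) == 0)
		assert(mode == TPEBS_MODE__MIN);
}
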
+
int cmd_stat(int argc, const char **argv)
{
+ struct opt_aggr_mode opt_mode = {};
+ struct option stat_options[] = {
+ OPT_BOOLEAN('T', "transaction", &transaction_run,
+ "hardware transaction statistics"),
+ OPT_CALLBACK('e', "event", &parse_events_option_args, "event",
+ "event selector. use 'perf list' to list available events",
+ parse_events_option),
+ OPT_CALLBACK(0, "filter", &evsel_list, "filter",
+ "event filter", parse_filter),
+ OPT_BOOLEAN('i', "no-inherit", &stat_config.no_inherit,
+ "child tasks do not inherit counters"),
+ OPT_STRING('p', "pid", &target.pid, "pid",
+ "stat events on existing process id"),
+ OPT_STRING('t', "tid", &target.tid, "tid",
+ "stat events on existing thread id"),
+#ifdef HAVE_BPF_SKEL
+ OPT_STRING('b', "bpf-prog", &target.bpf_str, "bpf-prog-id",
+ "stat events on existing bpf program id"),
+ OPT_BOOLEAN(0, "bpf-counters", &target.use_bpf,
+ "use bpf program to count events"),
+ OPT_STRING(0, "bpf-attr-map", &target.attr_map, "attr-map-path",
+ "path to perf_event_attr map"),
+#endif
+ OPT_BOOLEAN('a', "all-cpus", &target.system_wide,
+ "system-wide collection from all CPUs"),
+ OPT_BOOLEAN(0, "scale", &stat_config.scale,
+ "Use --no-scale to disable counter scaling for multiplexing"),
+ OPT_INCR('v', "verbose", &verbose,
+ "be more verbose (show counter open errors, etc)"),
+ OPT_INTEGER('r', "repeat", &stat_config.run_count,
+ "repeat command and print average + stddev (max: 100, forever: 0)"),
+ OPT_BOOLEAN(0, "table", &stat_config.walltime_run_table,
+ "display details about each run (only with -r option)"),
+ OPT_BOOLEAN('n', "null", &stat_config.null_run,
+ "null run - dont start any counters"),
+ OPT_INCR('d', "detailed", &detailed_run,
+ "detailed run - start a lot of events"),
+ OPT_BOOLEAN('S', "sync", &sync_run,
+ "call sync() before starting a run"),
+ OPT_CALLBACK_NOOPT('B', "big-num", NULL, NULL,
+ "print large numbers with thousands\' separators",
+ stat__set_big_num),
+ OPT_STRING('C', "cpu", &target.cpu_list, "cpu",
+ "list of cpus to monitor in system-wide"),
+ OPT_BOOLEAN('A', "no-aggr", &opt_mode.no_aggr,
+ "disable aggregation across CPUs or PMUs"),
+ OPT_BOOLEAN(0, "no-merge", &opt_mode.no_aggr,
+		    "disable aggregation, the same as -A or --no-aggr"),
+ OPT_BOOLEAN(0, "hybrid-merge", &stat_config.hybrid_merge,
+ "Merge identical named hybrid events"),
+ OPT_STRING('x', "field-separator", &stat_config.csv_sep, "separator",
+ "print counts with custom separator"),
+ OPT_BOOLEAN('j', "json-output", &stat_config.json_output,
+ "print counts in JSON format"),
+ OPT_CALLBACK('G', "cgroup", &evsel_list, "name",
+ "monitor event in cgroup name only", parse_stat_cgroups),
+ OPT_STRING(0, "for-each-cgroup", &stat_config.cgroup_list, "name",
+ "expand events for each cgroup"),
+ OPT_STRING('o', "output", &output_name, "file", "output file name"),
+ OPT_BOOLEAN(0, "append", &append_file, "append to the output file"),
+ OPT_INTEGER(0, "log-fd", &output_fd,
+ "log output to fd, instead of stderr"),
+ OPT_STRING(0, "pre", &pre_cmd, "command",
+ "command to run prior to the measured command"),
+ OPT_STRING(0, "post", &post_cmd, "command",
+		   "command to run after the measured command"),
+ OPT_UINTEGER('I', "interval-print", &stat_config.interval,
+ "print counts at regular interval in ms "
+ "(overhead is possible for values <= 100ms)"),
+ OPT_INTEGER(0, "interval-count", &stat_config.times,
+ "print counts for fixed number of times"),
+ OPT_BOOLEAN(0, "interval-clear", &stat_config.interval_clear,
+ "clear screen in between new interval"),
+ OPT_UINTEGER(0, "timeout", &stat_config.timeout,
+ "stop workload and print counts after a timeout period in ms (>= 10ms)"),
+ OPT_BOOLEAN(0, "per-socket", &opt_mode.socket,
+ "aggregate counts per processor socket"),
+ OPT_BOOLEAN(0, "per-die", &opt_mode.die, "aggregate counts per processor die"),
+ OPT_BOOLEAN(0, "per-cluster", &opt_mode.cluster,
+ "aggregate counts per processor cluster"),
+ OPT_CALLBACK_OPTARG(0, "per-cache", &opt_mode, &stat_config.aggr_level,
+ "cache level", "aggregate count at this cache level (Default: LLC)",
+ parse_cache_level),
+ OPT_BOOLEAN(0, "per-core", &opt_mode.core,
+ "aggregate counts per physical processor core"),
+ OPT_BOOLEAN(0, "per-thread", &opt_mode.thread, "aggregate counts per thread"),
+ OPT_BOOLEAN(0, "per-node", &opt_mode.node, "aggregate counts per numa node"),
+ OPT_INTEGER('D', "delay", &target.initial_delay,
+ "ms to wait before starting measurement after program start (-1: start with events disabled)"),
+ OPT_CALLBACK_NOOPT(0, "metric-only", &stat_config.metric_only, NULL,
+ "Only print computed metrics. No raw values", enable_metric_only),
+ OPT_BOOLEAN(0, "metric-no-group", &stat_config.metric_no_group,
+ "don't group metric events, impacts multiplexing"),
+ OPT_BOOLEAN(0, "metric-no-merge", &stat_config.metric_no_merge,
+ "don't try to share events between metrics in a group"),
+ OPT_BOOLEAN(0, "metric-no-threshold", &stat_config.metric_no_threshold,
+ "disable adding events for the metric threshold calculation"),
+ OPT_BOOLEAN(0, "topdown", &topdown_run,
+ "measure top-down statistics"),
+#ifdef HAVE_ARCH_X86_64_SUPPORT
+ OPT_BOOLEAN(0, "record-tpebs", &tpebs_recording,
+ "enable recording for tpebs when retire_latency required"),
+ OPT_CALLBACK(0, "tpebs-mode", &tpebs_mode, "tpebs-mode",
+		     "Mode of TPEBS recording: mean, min, max or last",
+ parse_tpebs_mode),
+#endif
+ OPT_UINTEGER(0, "td-level", &stat_config.topdown_level,
+ "Set the metrics level for the top-down statistics (0: max level)"),
+ OPT_BOOLEAN(0, "smi-cost", &smi_cost,
+ "measure SMI cost"),
+ OPT_CALLBACK('M', "metrics", &evsel_list, "metric/metric group list",
+ "monitor specified metrics or metric groups (separated by ,)",
+ append_metric_groups),
+ OPT_BOOLEAN_FLAG(0, "all-kernel", &stat_config.all_kernel,
+ "Configure all used events to run in kernel space.",
+ PARSE_OPT_EXCLUSIVE),
+ OPT_BOOLEAN_FLAG(0, "all-user", &stat_config.all_user,
+ "Configure all used events to run in user space.",
+ PARSE_OPT_EXCLUSIVE),
+ OPT_BOOLEAN(0, "percore-show-thread", &stat_config.percore_show_thread,
+ "Use with 'percore' event qualifier to show the event "
+		    "counts of one hardware thread by summing up the counts "
+		    "of all hardware threads of the same physical core"),
+ OPT_BOOLEAN(0, "summary", &stat_config.summary,
+ "print summary for interval mode"),
+ OPT_BOOLEAN(0, "no-csv-summary", &stat_config.no_csv_summary,
+ "don't print 'summary' for CSV summary output"),
+ OPT_BOOLEAN(0, "quiet", &quiet,
+ "don't print any output, messages or warnings (useful with record)"),
+ OPT_CALLBACK(0, "cputype", &evsel_list, "hybrid cpu type",
+		     "Only enable events on CPUs of this type "
+		     "(for hybrid platforms, e.g. core or atom)",
+ parse_cputype),
+#ifdef HAVE_LIBPFM
+ OPT_CALLBACK(0, "pfm-events", &evsel_list, "event",
+ "libpfm4 event selector. use 'perf list' to list available events",
+ parse_libpfm_events_option),
+#endif
+ OPT_CALLBACK(0, "control", &stat_config, "fd:ctl-fd[,ack-fd] or fifo:ctl-fifo[,ack-fifo]",
+ "Listen on ctl-fd descriptor for command to control measurement ('enable': enable events, 'disable': disable events).\n"
+ "\t\t\t Optionally send control command completion ('ack\\n') to ack-fd descriptor.\n"
+ "\t\t\t Alternatively, ctl-fifo / ack-fifo will be opened and used as ctl-fd / ack-fd.",
+ parse_control_option),
+ OPT_CALLBACK_OPTARG(0, "iostat", &evsel_list, &stat_config, "default",
+ "measure I/O performance metrics provided by arch/platform",
+ iostat_parse),
+ OPT_END()
+ };
const char * const stat_usage[] = {
"perf stat [<options>] [<command>]",
NULL
@@ -2503,6 +2558,8 @@ int cmd_stat(int argc, const char **argv)
(const char **) stat_usage,
PARSE_OPT_STOP_AT_NON_OPTION);
+ stat_config.aggr_mode = opt_aggr_mode_to_aggr_mode(&opt_mode);
+
if (stat_config.csv_sep) {
stat_config.csv_output = true;
if (!strcmp(stat_config.csv_sep, "\\t"))
@@ -2511,7 +2568,7 @@ int cmd_stat(int argc, const char **argv)
stat_config.csv_sep = DEFAULT_SEPARATOR;
if (argc && strlen(argv[0]) > 2 && strstarts("record", argv[0])) {
- argc = __cmd_record(argc, argv);
+ argc = __cmd_record(stat_options, &opt_mode, argc, argv);
if (argc < 0)
return -1;
} else if (argc && strlen(argv[0]) > 2 && strstarts("report", argv[0]))
@@ -2543,6 +2600,14 @@ int cmd_stat(int argc, const char **argv)
goto out;
}
+ if (stat_config.csv_output || (stat_config.metric_only && stat_config.json_output)) {
+ /*
+ * Current CSV and metric-only JSON output doesn't display the
+ * metric threshold so don't compute it.
+ */
+ stat_config.metric_no_threshold = true;
+ }
+
if (stat_config.walltime_run_table && stat_config.run_count <= 1) {
fprintf(stderr, "--table is only supported with -r\n");
parse_options_usage(stat_usage, stat_options, "r", 1);
@@ -2603,6 +2668,7 @@ int cmd_stat(int argc, const char **argv)
} else if (big_num_opt == 0) /* User passed --no-big-num */
stat_config.big_num = false;
+ target.inherit = !stat_config.no_inherit;
err = target__validate(&target);
if (err) {
target__strerror(&target, err, errbuf, BUFSIZ);
@@ -2702,7 +2768,7 @@ int cmd_stat(int argc, const char **argv)
stat_config.metric_no_threshold,
stat_config.user_requested_cpu_list,
stat_config.system_wide,
- &stat_config.metric_events);
+ stat_config.hardware_aware_grouping);
zfree(&metrics);
if (ret) {
@@ -2711,7 +2777,7 @@ int cmd_stat(int argc, const char **argv)
}
}
- if (add_default_attributes())
+ if (add_default_events())
goto out;
if (stat_config.cgroup_list) {
@@ -2722,8 +2788,7 @@ int cmd_stat(int argc, const char **argv)
goto out;
}
- if (evlist__expand_cgroup(evsel_list, stat_config.cgroup_list,
- &stat_config.metric_events, true) < 0) {
+ if (evlist__expand_cgroup(evsel_list, stat_config.cgroup_list, true) < 0) {
parse_options_usage(stat_usage, stat_options,
"for-each-cgroup", 0);
goto out;
@@ -2830,7 +2895,10 @@ int cmd_stat(int argc, const char **argv)
evlist__reset_prev_raw_counts(evsel_list);
status = run_perf_stat(argc, argv, run_idx);
- if (forever && status != -1 && !interval) {
+ if (status == -1)
+ break;
+
+ if (forever && !interval) {
print_counters(NULL, argc, argv);
perf_stat__reset_stats();
}
@@ -2895,7 +2963,6 @@ out:
evlist__delete(evsel_list);
- metricgroup__rblist_exit(&stat_config.metric_events);
evlist__close_control(stat_config.ctl_fd, stat_config.ctl_fd_ack, &stat_config.ctl_fd_close);
return status;
diff --git a/tools/perf/builtin-timechart.c b/tools/perf/builtin-timechart.c
index 19d4542ea18a..22050c640dfa 100644
--- a/tools/perf/builtin-timechart.c
+++ b/tools/perf/builtin-timechart.c
@@ -38,7 +38,7 @@
#include "util/tracepoint.h"
#include "util/util.h"
#include <linux/err.h>
-#include <traceevent/event-parse.h>
+#include <event-parse.h>
#ifdef LACKS_OPEN_MEMSTREAM_PROTOTYPE
FILE *open_memstream(char **ptr, size_t *sizeloc);
@@ -320,7 +320,7 @@ static int *cpus_cstate_state;
static u64 *cpus_pstate_start_times;
static u64 *cpus_pstate_state;
-static int process_comm_event(struct perf_tool *tool,
+static int process_comm_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -330,7 +330,7 @@ static int process_comm_event(struct perf_tool *tool,
return 0;
}
-static int process_fork_event(struct perf_tool *tool,
+static int process_fork_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -340,7 +340,7 @@ static int process_fork_event(struct perf_tool *tool,
return 0;
}
-static int process_exit_event(struct perf_tool *tool,
+static int process_exit_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
struct machine *machine __maybe_unused)
@@ -571,7 +571,7 @@ typedef int (*tracepoint_handler)(struct timechart *tchart,
struct perf_sample *sample,
const char *backtrace);
-static int process_sample_event(struct perf_tool *tool,
+static int process_sample_event(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -1158,7 +1158,6 @@ static void draw_io_bars(struct timechart *tchart)
}
svg_box(Y, c->start_time, c->end_time, "process3");
- sample = c->io_samples;
for (sample = c->io_samples; sample; sample = sample->next) {
double h = (double)sample->bytes / c->max_bytes;
@@ -1606,14 +1605,20 @@ static int __cmd_timechart(struct timechart *tchart, const char *output_name)
.mode = PERF_DATA_MODE_READ,
.force = tchart->force,
};
-
- struct perf_session *session = perf_session__new(&data, &tchart->tool);
+ struct perf_session *session;
int ret = -EINVAL;
+ perf_tool__init(&tchart->tool, /*ordered_events=*/true);
+ tchart->tool.comm = process_comm_event;
+ tchart->tool.fork = process_fork_event;
+ tchart->tool.exit = process_exit_event;
+ tchart->tool.sample = process_sample_event;
+
+ session = perf_session__new(&data, &tchart->tool);
if (IS_ERR(session))
return PTR_ERR(session);
- symbol__init(&session->header.env);
+ symbol__init(perf_session__env(session));
(void)perf_header__process_sections(&session->header,
perf_data__fd(session->data),
@@ -1924,13 +1929,6 @@ parse_time(const struct option *opt, const char *arg, int __maybe_unused unset)
int cmd_timechart(int argc, const char **argv)
{
struct timechart tchart = {
- .tool = {
- .comm = process_comm_event,
- .fork = process_fork_event,
- .exit = process_exit_event,
- .sample = process_sample_event,
- .ordered_events = true,
- },
.proc_num = 15,
.min_time = NSEC_PER_MSEC,
.merge_dist = 1000,
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 5301d1badd43..a11f629c7d76 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -35,6 +35,7 @@
#include "util/mmap.h"
#include "util/session.h"
#include "util/thread.h"
+#include "util/stat.h"
#include "util/symbol.h"
#include "util/synthetic-events.h"
#include "util/top.h"
@@ -129,7 +130,7 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
/*
* We can't annotate with just /proc/kallsyms
*/
- if (dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS && !dso__is_kcore(dso)) {
+ if (dso__symtab_type(dso) == DSO_BINARY_TYPE__KALLSYMS && !dso__is_kcore(dso)) {
pr_err("Can't annotate %s: No vmlinux file was found in the "
"path\n", sym->name);
sleep(1);
@@ -182,7 +183,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
"Tools: %s\n\n"
"Not all samples will be on the annotation output.\n\n"
"Please report to linux-kernel@vger.kernel.org\n",
- ip, dso->long_name, dso__symtab_origin(dso),
+ ip, dso__long_name(dso), dso__symtab_origin(dso),
map__start(map), map__end(map), sym->start, sym->end,
sym->binding == STB_GLOBAL ? 'g' :
sym->binding == STB_LOCAL ? 'l' : 'w', sym->name,
@@ -191,7 +192,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
if (use_browser <= 0)
sleep(5);
- map__set_erange_warned(map, true);
+ map__set_erange_warned(map);
}
static void perf_top__record_precise_ip(struct perf_top *top,
@@ -263,13 +264,13 @@ static void perf_top__show_details(struct perf_top *top)
printf("Showing %s for %s\n", evsel__name(top->sym_evsel), symbol->name);
printf(" Events Pcnt (>=%d%%)\n", annotate_opts.min_pcnt);
- more = symbol__annotate_printf(&he->ms, top->sym_evsel);
+ more = hist_entry__annotate_printf(he, top->sym_evsel);
if (top->evlist->enabled) {
if (top->zero)
- symbol__annotate_zero_histogram(symbol, top->sym_evsel->core.idx);
+ symbol__annotate_zero_histogram(symbol, top->sym_evsel);
else
- symbol__annotate_decay_histogram(symbol, top->sym_evsel->core.idx);
+ symbol__annotate_decay_histogram(symbol, top->sym_evsel);
}
if (more != 0)
printf("%d lines not displayed, maybe increase display entries [e]\n", more);
@@ -642,11 +643,12 @@ repeat:
*/
evlist__for_each_entry(top->evlist, pos) {
struct hists *hists = evsel__hists(pos);
- hists->uid_filter_str = top->record_opts.target.uid_str;
+ hists->uid_filter_str = top->uid_str;
}
ret = evlist__tui_browse_hists(top->evlist, help, &hbt, top->min_percent,
- &top->session->header.env, !top->record_opts.overwrite);
+ perf_session__env(top->session),
+ !top->record_opts.overwrite);
if (ret == K_RELOAD) {
top->zero = true;
goto repeat;
@@ -735,12 +737,12 @@ static int hist_iter__top_callback(struct hist_entry_iter *iter,
perf_top__record_precise_ip(top, iter->he, iter->sample, evsel, al->addr);
hist__account_cycles(iter->sample->branch_stack, al, iter->sample,
- !(top->record_opts.branch_stack & PERF_SAMPLE_BRANCH_ANY),
- NULL);
+ !(top->record_opts.branch_stack & PERF_SAMPLE_BRANCH_ANY),
+ NULL, evsel);
return 0;
}
-static void perf_event__process_sample(struct perf_tool *tool,
+static void perf_event__process_sample(const struct perf_tool *tool,
const union perf_event *event,
struct evsel *evsel,
struct perf_sample *sample,
@@ -809,7 +811,7 @@ static void perf_event__process_sample(struct perf_tool *tool,
* invalid --vmlinux ;-)
*/
if (!machine->kptr_restrict_warned && !top->vmlinux_warned &&
- __map__is_kernel(al.map) && map__has_symbols(al.map)) {
+ __map__is_kernel(al.map) && !map__has_symbols(al.map)) {
if (symbol_conf.vmlinux_name) {
char serr[256];
@@ -1055,6 +1057,13 @@ try_again:
}
}
+ if (evlist__apply_filters(evlist, &counter, &opts->target)) {
+ pr_err("failed to set filter \"%s\" on event %s with %d (%s)\n",
+ counter->filter ?: "BPF", evsel__name(counter), errno,
+ str_error_r(errno, msg, sizeof(msg)));
+ goto out_err;
+ }
+
if (evlist__mmap(evlist, opts->mmap_pages) < 0) {
ui__error("Failed to mmap with %d (%s)\n",
errno, str_error_r(errno, msg, sizeof(msg)));
@@ -1150,6 +1159,7 @@ static int deliver_event(struct ordered_events *qe,
return 0;
}
+ perf_sample__init(&sample, /*all=*/false);
ret = evlist__parse_sample(evlist, event, &sample);
if (ret) {
pr_err("Can't parse sample, err = %d\n", ret);
@@ -1160,8 +1170,10 @@ static int deliver_event(struct ordered_events *qe,
assert(evsel != NULL);
if (event->header.type == PERF_RECORD_SAMPLE) {
- if (evswitch__discard(&top->evswitch, evsel))
- return 0;
+ if (evswitch__discard(&top->evswitch, evsel)) {
+ ret = 0;
+ goto next_event;
+ }
++top->samples;
}
@@ -1212,6 +1224,7 @@ static int deliver_event(struct ordered_events *qe,
ret = 0;
next_event:
+ perf_sample__exit(&sample);
return ret;
}
@@ -1241,7 +1254,7 @@ static int __cmd_top(struct perf_top *top)
int ret;
if (!annotate_opts.objdump_path) {
- ret = perf_env__lookup_objdump(&top->session->header.env,
+ ret = perf_env__lookup_objdump(perf_session__env(top->session),
&annotate_opts.objdump_path);
if (ret)
return ret;
@@ -1288,7 +1301,7 @@ static int __cmd_top(struct perf_top *top)
perf_set_multithreaded();
if (perf_hpp_list.socket) {
- ret = perf_env__read_cpu_topology_map(&perf_env);
+ ret = perf_env__read_cpu_topology_map(perf_session__env(top->session));
if (ret < 0) {
char errbuf[BUFSIZ];
const char *err = str_error_r(-ret, errbuf, sizeof(errbuf));
@@ -1298,7 +1311,11 @@ static int __cmd_top(struct perf_top *top)
}
}
- evlist__uniquify_name(top->evlist);
+ /*
+	 * Use the global stat_config, which is zeroed: aggr_mode is AGGR_NONE
+	 * and hybrid_merge is false.
+ */
+ evlist__uniquify_evsel_names(top->evlist, &stat_config);
ret = perf_top__start_counters(top);
if (ret)
return ret;
@@ -1462,6 +1479,8 @@ int cmd_top(int argc, const char **argv)
OPT_CALLBACK('e', "event", &parse_events_option_args, "event",
"event selector. use 'perf list' to list available events",
parse_events_option),
+ OPT_CALLBACK(0, "filter", &top.evlist, "filter",
+ "event filter", parse_filter),
OPT_U64('c', "count", &opts->user_interval, "event period to sample"),
OPT_STRING('p', "pid", &target->pid, "pid",
"profile events on existing process id"),
@@ -1553,7 +1572,7 @@ int cmd_top(int argc, const char **argv)
"Add prefix to source file path names in programs (with --prefix-strip)"),
OPT_STRING(0, "prefix-strip", &annotate_opts.prefix_strip, "N",
"Strip first N entries of source file path name in programs (with --prefix)"),
- OPT_STRING('u', "uid", &target->uid_str, "user", "user to profile"),
+ OPT_STRING('u', "uid", &top.uid_str, "user", "user to profile"),
OPT_CALLBACK(0, "percent-limit", &top, "percent",
"Don't show entries under that percent", parse_percent_limit),
OPT_CALLBACK(0, "percentage", NULL, "relative|absolute",
@@ -1573,7 +1592,7 @@ int cmd_top(int argc, const char **argv)
"add last branch records to call history"),
OPT_BOOLEAN(0, "raw-trace", &symbol_conf.raw_trace,
"Show raw trace event output (do not use print fmt or plugins)"),
- OPT_BOOLEAN(0, "hierarchy", &symbol_conf.report_hierarchy,
+ OPT_BOOLEAN('H', "hierarchy", &symbol_conf.report_hierarchy,
"Show entries in a hierarchy"),
OPT_BOOLEAN(0, "overwrite", &top.record_opts.overwrite,
"Use a backward ring buffer, default: no"),
@@ -1605,6 +1624,7 @@ int cmd_top(int argc, const char **argv)
NULL
};
int status = hists__init();
+ struct perf_env host_env;
if (status < 0)
return status;
@@ -1618,14 +1638,19 @@ int cmd_top(int argc, const char **argv)
if (top.evlist == NULL)
return -ENOMEM;
+ perf_env__init(&host_env);
status = perf_config(perf_top_config, &top);
if (status)
- return status;
+ goto out_delete_evlist;
/*
* Since the per arch annotation init routine may need the cpuid, read
* it here, since we are not getting this from the perf.data header.
*/
- status = perf_env__read_cpuid(&perf_env);
+ status = perf_env__set_cmdline(&host_env, argc, argv);
+ if (status)
+ goto out_delete_evlist;
+
+ status = perf_env__read_cpuid(&host_env);
if (status) {
/*
* Some arches do not provide a get_cpuid(), so just use pr_debug, otherwise
@@ -1635,7 +1660,6 @@ int cmd_top(int argc, const char **argv)
"Couldn't read the cpuid for this machine: %s\n",
str_error_r(errno, errbuf, sizeof(errbuf)));
}
- top.evlist->env = &perf_env;
argc = parse_options(argc, argv, options, top_usage, 0);
if (argc)
@@ -1643,18 +1667,24 @@ int cmd_top(int argc, const char **argv)
if (disassembler_style) {
annotate_opts.disassembler_style = strdup(disassembler_style);
- if (!annotate_opts.disassembler_style)
- return -ENOMEM;
+ if (!annotate_opts.disassembler_style) {
+ status = -ENOMEM;
+ goto out_delete_evlist;
+ }
}
if (objdump_path) {
annotate_opts.objdump_path = strdup(objdump_path);
- if (!annotate_opts.objdump_path)
- return -ENOMEM;
+ if (!annotate_opts.objdump_path) {
+ status = -ENOMEM;
+ goto out_delete_evlist;
+ }
}
if (addr2line_path) {
symbol_conf.addr2line_path = strdup(addr2line_path);
- if (!symbol_conf.addr2line_path)
- return -ENOMEM;
+ if (!symbol_conf.addr2line_path) {
+ status = -ENOMEM;
+ goto out_delete_evlist;
+ }
}
status = symbol__validate_sym_arguments();
@@ -1716,6 +1746,14 @@ int cmd_top(int argc, const char **argv)
if (opts->branch_stack && callchain_param.enabled)
symbol_conf.show_branchflag_count = true;
+ if (opts->branch_stack) {
+ status = perf_env__read_core_pmu_caps(&host_env);
+ if (status) {
+ pr_err("PMU capability data is not available\n");
+ goto out_delete_evlist;
+ }
+ }
+
sort__mode = SORT_MODE__TOP;
/* display thread wants entries to be collapsed in a different tree */
perf_hpp_list.need_collapse = 1;
@@ -1729,7 +1767,17 @@ int cmd_top(int argc, const char **argv)
setup_browser(false);
- if (setup_sorting(top.evlist) < 0) {
+ top.session = __perf_session__new(/*data=*/NULL, /*tool=*/NULL,
+ /*trace_event_repipe=*/false,
+ &host_env);
+ if (IS_ERR(top.session)) {
+ status = PTR_ERR(top.session);
+ top.session = NULL;
+ goto out_delete_evlist;
+ }
+ top.evlist->session = top.session;
+
+ if (setup_sorting(top.evlist, perf_session__env(top.session)) < 0) {
if (sort_order)
parse_options_usage(top_usage, options, "s", 1);
if (field_order)
@@ -1744,15 +1792,17 @@ int cmd_top(int argc, const char **argv)
ui__warning("%s\n", errbuf);
}
- status = target__parse_uid(target);
- if (status) {
- int saved_errno = errno;
-
- target__strerror(target, status, errbuf, BUFSIZ);
- ui__error("%s\n", errbuf);
+ if (top.uid_str) {
+ uid_t uid = parse_uid(top.uid_str);
- status = -saved_errno;
- goto out_delete_evlist;
+ if (uid == UINT_MAX) {
+ ui__error("Invalid User: %s", top.uid_str);
+ status = -EINVAL;
+ goto out_delete_evlist;
+ }
+ status = parse_uid_filter(top.evlist, uid);
+ if (status)
+ goto out_delete_evlist;
}
if (target__none(target))
@@ -1777,7 +1827,7 @@ int cmd_top(int argc, const char **argv)
if (!callchain_param.enabled) {
symbol_conf.cumulate_callchain = false;
- perf_hpp__cancel_cumulate();
+ perf_hpp__cancel_cumulate(top.evlist);
}
if (symbol_conf.cumulate_callchain && !callchain_param.order_set)
@@ -1802,12 +1852,8 @@ int cmd_top(int argc, const char **argv)
signal(SIGWINCH, winch_sig);
}
- top.session = perf_session__new(NULL, NULL);
- if (IS_ERR(top.session)) {
- status = PTR_ERR(top.session);
- top.session = NULL;
- goto out_delete_evlist;
- }
+ if (!evlist__needs_bpf_sb_event(top.evlist))
+ top.record_opts.no_bpf_event = true;
#ifdef HAVE_LIBBPF_SUPPORT
if (!top.record_opts.no_bpf_event) {
@@ -1819,7 +1865,7 @@ int cmd_top(int argc, const char **argv)
goto out_delete_evlist;
}
- if (evlist__add_bpf_sb_event(top.sb_evlist, &perf_env)) {
+ if (evlist__add_bpf_sb_event(top.sb_evlist, &host_env)) {
pr_err("Couldn't ask for PERF_RECORD_BPF_EVENT side band events.\n.");
status = -EINVAL;
goto out_delete_evlist;
@@ -1841,6 +1887,7 @@ out_delete_evlist:
evlist__delete(top.evlist);
perf_session__delete(top.session);
annotation_options__exit();
+ perf_env__exit(&host_env);
return status;
}
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 109b8e64fe69..c607f39b8c8b 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -19,9 +19,7 @@
#ifdef HAVE_LIBBPF_SUPPORT
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
-#ifdef HAVE_BPF_SKEL
-#include "bpf_skel/augmented_raw_syscalls.skel.h"
-#endif
+#include <bpf/btf.h>
#endif
#include "util/bpf_map.h"
#include "util/rlimit.h"
@@ -38,6 +36,7 @@
#include "util/synthetic-events.h"
#include "util/evlist.h"
#include "util/evswitch.h"
+#include "util/hashmap.h"
#include "util/mmap.h"
#include <subcmd/pager.h>
#include <subcmd/exec-cmd.h>
@@ -53,6 +52,7 @@
#include "util/thread_map.h"
#include "util/stat.h"
#include "util/tool.h"
+#include "util/trace.h"
#include "util/util.h"
#include "trace/beauty/beauty.h"
#include "trace-event.h"
@@ -62,8 +62,9 @@
#include "print_binary.h"
#include "string2.h"
#include "syscalltbl.h"
-#include "rb_resort.h"
#include "../perf.h"
+#include "trace_augment.h"
+#include "dwarf-regs.h"
#include <errno.h>
#include <inttypes.h>
@@ -74,6 +75,7 @@
#include <linux/err.h>
#include <linux/filter.h>
#include <linux/kernel.h>
+#include <linux/list_sort.h>
#include <linux/random.h>
#include <linux/stringify.h>
#include <linux/time64.h>
@@ -83,9 +85,10 @@
#include <linux/ctype.h>
#include <perf/mmap.h>
+#include <tools/libc_compat.h>
#ifdef HAVE_LIBTRACEEVENT
-#include <traceevent/event-parse.h>
+#include <event-parse.h>
#endif
#ifndef O_CLOEXEC
@@ -100,6 +103,12 @@
/*
* strtoul: Go from a string to a value, i.e. for msr: MSR_FS_BASE to 0xc0000100
+ *
+ * We have to explicitly mark the direction of the flow of data, i.e. whether
+ * it goes from the kernel to user space or the other way around. Since the
+ * BPF collector we have so far copies only from user to kernel space, mark
+ * the arguments that go in that direction, so that we don't end up collecting
+ * the previous contents of syscall args that go from kernel to user space.
*/
struct syscall_arg_fmt {
size_t (*scnprintf)(char *bf, size_t size, struct syscall_arg *arg);
@@ -108,7 +117,12 @@ struct syscall_arg_fmt {
void *parm;
const char *name;
u16 nr_entries; // for arrays
+ bool from_user;
bool show_zero;
+#ifdef HAVE_LIBBPF_SUPPORT
+ const struct btf_type *type;
+ int type_id; /* used in btf_dump */
+#endif
};
struct syscall_fmt {
@@ -126,18 +140,21 @@ struct syscall_fmt {
};
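
A minimal sketch of how the new from_user flag is applied, lifted from the syscall_fmts[] changes later in this patch (the array name here is hypothetical): pairing a pretty-printer with .from_user = true tells the BPF collector to copy that argument from user space on syscall entry.

/* Illustrative table slice only; "write"/SCA_BUF appear in syscall_fmts[] below. */
static const struct syscall_fmt demo_fmts[] = {
	{ .name = "write",
	  .arg = { [1] = { .scnprintf = SCA_BUF, .from_user = true, /* buf */ }, }, },
};
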
struct trace {
+ struct perf_env host_env;
struct perf_tool tool;
- struct syscalltbl *sctbl;
struct {
- struct syscall *table;
+		/** Sorted syscall numbers used by the trace. */
+ struct syscall **table;
+ /** Size of table. */
+ size_t table_size;
struct {
struct evsel *sys_enter,
*sys_exit,
*bpf_output;
} events;
} syscalls;
-#ifdef HAVE_BPF_SKEL
- struct augmented_raw_syscalls_bpf *skel;
+#ifdef HAVE_LIBBPF_SUPPORT
+ struct btf *btf;
#endif
struct record_opts opts;
struct evlist *evlist;
@@ -160,14 +177,26 @@ struct trace {
pid_t *entries;
struct bpf_map *map;
} filter_pids;
+ /*
+ * TODO: The map is from an ID (aka system call number) to struct
+ * syscall_stats. If there is >1 e_machine, such as i386 and x86-64
+ * processes, then the stats here will gather wrong the statistics for
+ * the non EM_HOST system calls. A fix would be to add the e_machine
+ * into the key, but this would make the code inconsistent with the
+ * per-thread version.
+ */
+ struct hashmap *syscall_stats;
double duration_filter;
double runtime_ms;
+ unsigned long pfmaj, pfmin;
struct {
u64 vfs_getname,
proc_getname;
} stats;
unsigned int max_stack;
unsigned int min_stack;
+ enum trace_summary_mode summary_mode;
+ int max_summary;
int raw_augmented_syscalls_args_size;
bool raw_augmented_syscalls;
bool fd_path_disabled;
@@ -195,14 +224,31 @@ struct trace {
bool show_string_prefix;
bool force;
bool vfs_getname;
+ bool force_btf;
+ bool summary_bpf;
int trace_pgfaults;
char *perfconfig_events;
struct {
struct ordered_events data;
u64 last;
} oe;
+ const char *uid_str;
};
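
The TODO in struct trace above already names the fix; a hedged sketch of what it could look like (purely illustrative, not part of the patch), assuming the 64-bit 'long' keys used by the util/hashmap.h callbacks in this file:

/* Hypothetical: fold the e_machine into the per-syscall stats key. */
static inline long demo_syscall_stats_key(int e_machine, int id)
{
	return ((long)e_machine << 32) | (unsigned int)id;
}
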
+static void trace__load_vmlinux_btf(struct trace *trace __maybe_unused)
+{
+#ifdef HAVE_LIBBPF_SUPPORT
+ if (trace->btf != NULL)
+ return;
+
+ trace->btf = btf__load_vmlinux_btf();
+ if (verbose > 0) {
+ fprintf(trace->output, trace->btf ? "vmlinux BTF loaded\n" :
+ "Failed to load vmlinux BTF\n");
+ }
+#endif
+}
+
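
A hedged sketch (not from the patch) of how the BTF handle cached above is used further below: btf__find_by_name() resolves an enumeration by name, then the btf_enum() members are scanned for a matching value, which is how syscall_arg_fmt__cache_btf_enum() and btf_enum_scnprintf() cooperate. The helper name is hypothetical; the libbpf calls are the ones the patch itself uses.

#ifdef HAVE_LIBBPF_SUPPORT
/* Hypothetical helper: copy the name of 'val' within 'enum <enum_name>' into bf. */
static size_t demo_btf_enum_name(struct btf *btf, const char *enum_name,
				 int val, char *bf, size_t size)
{
	const struct btf_type *bt;
	struct btf_enum *be;
	int id = btf__find_by_name(btf, enum_name);

	if (id < 0)
		return 0;

	bt = btf__type_by_id(btf, id);
	if (!btf_is_enum(bt))
		return 0;

	be = btf_enum(bt);
	for (int i = 0; i < btf_vlen(bt); ++i, ++be) {
		if (be->val == val)
			return scnprintf(bf, size, "%s",
					 btf__name_by_offset(btf, be->name_off));
	}
	return 0;
}
#endif // HAVE_LIBBPF_SUPPORT
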
struct tp_field {
int offset;
union {
@@ -357,7 +403,12 @@ static struct syscall_arg_fmt *evsel__syscall_arg_fmt(struct evsel *evsel)
}
if (et->fmt == NULL) {
- et->fmt = calloc(evsel->tp_format->format.nr_fields, sizeof(struct syscall_arg_fmt));
+ const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+ if (tp_format == NULL)
+ goto out_delete;
+
+ et->fmt = calloc(tp_format->format.nr_fields, sizeof(struct syscall_arg_fmt));
if (et->fmt == NULL)
goto out_delete;
}
@@ -764,7 +815,7 @@ static const char *fcntl_cmds[] = {
static DEFINE_STRARRAY(fcntl_cmds, "F_");
static const char *fcntl_linux_specific_cmds[] = {
- "SETLEASE", "GETLEASE", "NOTIFY", [5] = "CANCELLK", "DUPFD_CLOEXEC",
+ "SETLEASE", "GETLEASE", "NOTIFY", "DUPFD_QUERY", [5] = "CANCELLK", "DUPFD_CLOEXEC",
"SETPIPE_SZ", "GETPIPE_SZ", "ADD_SEALS", "GET_SEALS",
"GET_RW_HINT", "SET_RW_HINT", "GET_FILE_RW_HINT", "SET_FILE_RW_HINT",
};
@@ -829,6 +880,15 @@ static size_t syscall_arg__scnprintf_filename(char *bf, size_t size,
#define SCA_FILENAME syscall_arg__scnprintf_filename
+// 'argname' is documentation only at this point; it replaces the previous /* argname */ comment that carried that info
+#define SCA_FILENAME_FROM_USER(argname) \
+ { .scnprintf = SCA_FILENAME, \
+ .from_user = true, }
+
+static size_t syscall_arg__scnprintf_buf(char *bf, size_t size, struct syscall_arg *arg);
+
+#define SCA_BUF syscall_arg__scnprintf_buf
+
static size_t syscall_arg__scnprintf_pipe_flags(char *bf, size_t size,
struct syscall_arg *arg)
{
@@ -886,17 +946,189 @@ static size_t syscall_arg__scnprintf_getrandom_flags(char *bf, size_t size,
#define SCA_GETRANDOM_FLAGS syscall_arg__scnprintf_getrandom_flags
+#ifdef HAVE_LIBBPF_SUPPORT
+static void syscall_arg_fmt__cache_btf_enum(struct syscall_arg_fmt *arg_fmt, struct btf *btf, char *type)
+{
+ int id;
+
+ type = strstr(type, "enum ");
+ if (type == NULL)
+ return;
+
+ type += 5; // skip "enum " to get the enumeration name
+
+ id = btf__find_by_name(btf, type);
+ if (id < 0)
+ return;
+
+ arg_fmt->type = btf__type_by_id(btf, id);
+}
+
+static bool syscall_arg__strtoul_btf_enum(char *bf, size_t size, struct syscall_arg *arg, u64 *val)
+{
+ const struct btf_type *bt = arg->fmt->type;
+ struct btf *btf = arg->trace->btf;
+ struct btf_enum *be = btf_enum(bt);
+
+ for (int i = 0; i < btf_vlen(bt); ++i, ++be) {
+ const char *name = btf__name_by_offset(btf, be->name_off);
+ int max_len = max(size, strlen(name));
+
+ if (strncmp(name, bf, max_len) == 0) {
+ *val = be->val;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+static bool syscall_arg__strtoul_btf_type(char *bf, size_t size, struct syscall_arg *arg, u64 *val)
+{
+ const struct btf_type *bt;
+ char *type = arg->type_name;
+ struct btf *btf;
+
+ trace__load_vmlinux_btf(arg->trace);
+
+ btf = arg->trace->btf;
+ if (btf == NULL)
+ return false;
+
+ if (arg->fmt->type == NULL) {
+ // See if this is an enum
+ syscall_arg_fmt__cache_btf_enum(arg->fmt, btf, type);
+ }
+
+ // Now let's see if we have a BTF type resolved
+ bt = arg->fmt->type;
+ if (bt == NULL)
+ return false;
+
+ // If it is an enum:
+ if (btf_is_enum(arg->fmt->type))
+ return syscall_arg__strtoul_btf_enum(bf, size, arg, val);
+
+ return false;
+}
+
+static size_t btf_enum_scnprintf(const struct btf_type *type, struct btf *btf, char *bf, size_t size, int val)
+{
+ struct btf_enum *be = btf_enum(type);
+ const int nr_entries = btf_vlen(type);
+
+ for (int i = 0; i < nr_entries; ++i, ++be) {
+ if (be->val == val) {
+ return scnprintf(bf, size, "%s",
+ btf__name_by_offset(btf, be->name_off));
+ }
+ }
+
+ return 0;
+}
+
+struct trace_btf_dump_snprintf_ctx {
+ char *bf;
+ size_t printed, size;
+};
+
+static void trace__btf_dump_snprintf(void *vctx, const char *fmt, va_list args)
+{
+ struct trace_btf_dump_snprintf_ctx *ctx = vctx;
+
+ ctx->printed += vscnprintf(ctx->bf + ctx->printed, ctx->size - ctx->printed, fmt, args);
+}
+
+static size_t btf_struct_scnprintf(const struct btf_type *type, struct btf *btf, char *bf, size_t size, struct syscall_arg *arg)
+{
+ struct trace_btf_dump_snprintf_ctx ctx = {
+ .bf = bf,
+ .size = size,
+ };
+ struct augmented_arg *augmented_arg = arg->augmented.args;
+ int type_id = arg->fmt->type_id, consumed;
+ struct btf_dump *btf_dump;
+
+ LIBBPF_OPTS(btf_dump_opts, dump_opts);
+ LIBBPF_OPTS(btf_dump_type_data_opts, dump_data_opts);
+
+ if (arg == NULL || arg->augmented.args == NULL)
+ return 0;
+
+ dump_data_opts.compact = true;
+ dump_data_opts.skip_names = !arg->trace->show_arg_names;
+
+ btf_dump = btf_dump__new(btf, trace__btf_dump_snprintf, &ctx, &dump_opts);
+ if (btf_dump == NULL)
+ return 0;
+
+ /* pretty print the struct data here */
+ if (btf_dump__dump_type_data(btf_dump, type_id, arg->augmented.args->value, type->size, &dump_data_opts) == 0)
+ return 0;
+
+ consumed = sizeof(*augmented_arg) + augmented_arg->size;
+ arg->augmented.args = ((void *)arg->augmented.args) + consumed;
+ arg->augmented.size -= consumed;
+
+ btf_dump__free(btf_dump);
+
+ return ctx.printed;
+}
+
+static size_t trace__btf_scnprintf(struct trace *trace, struct syscall_arg *arg, char *bf,
+ size_t size, int val, char *type)
+{
+ struct syscall_arg_fmt *arg_fmt = arg->fmt;
+
+ if (trace->btf == NULL)
+ return 0;
+
+ if (arg_fmt->type == NULL) {
+ // Check if this is an enum and if we have the BTF type for it.
+ syscall_arg_fmt__cache_btf_enum(arg_fmt, trace->btf, type);
+ }
+
+ // Did we manage to find a BTF type for the syscall/tracepoint argument?
+ if (arg_fmt->type == NULL)
+ return 0;
+
+ if (btf_is_enum(arg_fmt->type))
+ return btf_enum_scnprintf(arg_fmt->type, trace->btf, bf, size, val);
+ else if (btf_is_struct(arg_fmt->type) || btf_is_union(arg_fmt->type))
+ return btf_struct_scnprintf(arg_fmt->type, trace->btf, bf, size, arg);
+
+ return 0;
+}
+
+#else // HAVE_LIBBPF_SUPPORT
+static size_t trace__btf_scnprintf(struct trace *trace __maybe_unused, struct syscall_arg *arg __maybe_unused,
+ char *bf __maybe_unused, size_t size __maybe_unused, int val __maybe_unused,
+ char *type __maybe_unused)
+{
+ return 0;
+}
+
+static bool syscall_arg__strtoul_btf_type(char *bf __maybe_unused, size_t size __maybe_unused,
+ struct syscall_arg *arg __maybe_unused, u64 *val __maybe_unused)
+{
+ return false;
+}
+#endif // HAVE_LIBBPF_SUPPORT
+
+#define STUL_BTF_TYPE syscall_arg__strtoul_btf_type
+
#define STRARRAY(name, array) \
{ .scnprintf = SCA_STRARRAY, \
.strtoul = STUL_STRARRAY, \
- .parm = &strarray__##array, }
+ .parm = &strarray__##array, \
+ .show_zero = true, }
#define STRARRAY_FLAGS(name, array) \
{ .scnprintf = SCA_STRARRAY_FLAGS, \
.strtoul = STUL_STRARRAY_FLAGS, \
- .parm = &strarray__##array, }
+ .parm = &strarray__##array, \
+ .show_zero = true, }
-#include "trace/beauty/arch_errno_names.c"
#include "trace/beauty/eventfd.c"
#include "trace/beauty/futex_op.c"
#include "trace/beauty/futex_val3.c"
@@ -920,16 +1152,17 @@ static const struct syscall_fmt syscall_fmts[] = {
[1] = { .scnprintf = SCA_PTR, /* arg2 */ }, }, },
{ .name = "bind",
.arg = { [0] = { .scnprintf = SCA_INT, /* fd */ },
- [1] = { .scnprintf = SCA_SOCKADDR, /* umyaddr */ },
+ [1] = SCA_SOCKADDR_FROM_USER(umyaddr),
[2] = { .scnprintf = SCA_INT, /* addrlen */ }, }, },
{ .name = "bpf",
- .arg = { [0] = STRARRAY(cmd, bpf_cmd), }, },
+ .arg = { [0] = STRARRAY(cmd, bpf_cmd),
+ [1] = { .from_user = true /* attr */, }, } },
{ .name = "brk", .hexret = true,
.arg = { [0] = { .scnprintf = SCA_PTR, /* brk */ }, }, },
{ .name = "clock_gettime",
.arg = { [0] = STRARRAY(clk_id, clockid), }, },
{ .name = "clock_nanosleep",
- .arg = { [2] = { .scnprintf = SCA_TIMESPEC, /* rqtp */ }, }, },
+ .arg = { [2] = SCA_TIMESPEC_FROM_USER(req), }, },
{ .name = "clone", .errpid = true, .nr_args = 5,
.arg = { [0] = { .name = "flags", .scnprintf = SCA_CLONE_FLAGS, },
[1] = { .name = "child_stack", .scnprintf = SCA_HEX, },
@@ -940,12 +1173,21 @@ static const struct syscall_fmt syscall_fmts[] = {
.arg = { [0] = { .scnprintf = SCA_CLOSE_FD, /* fd */ }, }, },
{ .name = "connect",
.arg = { [0] = { .scnprintf = SCA_INT, /* fd */ },
- [1] = { .scnprintf = SCA_SOCKADDR, /* servaddr */ },
+ [1] = SCA_SOCKADDR_FROM_USER(servaddr),
[2] = { .scnprintf = SCA_INT, /* addrlen */ }, }, },
{ .name = "epoll_ctl",
.arg = { [1] = STRARRAY(op, epoll_ctl_ops), }, },
{ .name = "eventfd2",
.arg = { [1] = { .scnprintf = SCA_EFD_FLAGS, /* flags */ }, }, },
+ { .name = "faccessat",
+ .arg = { [0] = { .scnprintf = SCA_FDAT, /* dirfd */ },
+ [1] = SCA_FILENAME_FROM_USER(pathname),
+ [2] = { .scnprintf = SCA_ACCMODE, /* mode */ }, }, },
+ { .name = "faccessat2",
+ .arg = { [0] = { .scnprintf = SCA_FDAT, /* dirfd */ },
+ [1] = SCA_FILENAME_FROM_USER(pathname),
+ [2] = { .scnprintf = SCA_ACCMODE, /* mode */ },
+ [3] = { .scnprintf = SCA_FACCESSAT2_FLAGS, /* flags */ }, }, },
{ .name = "fchmodat",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* fd */ }, }, },
{ .name = "fchownat",
@@ -965,10 +1207,9 @@ static const struct syscall_fmt syscall_fmts[] = {
[2] = { .scnprintf = SCA_FSMOUNT_ATTR_FLAGS, /* attr_flags */ }, }, },
{ .name = "fspick",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ },
- [1] = { .scnprintf = SCA_FILENAME, /* path */ },
+ [1] = SCA_FILENAME_FROM_USER(path),
[2] = { .scnprintf = SCA_FSPICK_FLAGS, /* flags */ }, }, },
{ .name = "fstat", .alias = "newfstat", },
- { .name = "fstatat", .alias = "newfstatat", },
{ .name = "futex",
.arg = { [1] = { .scnprintf = SCA_FUTEX_OP, /* op */ },
[5] = { .scnprintf = SCA_FUTEX_VAL3, /* val3 */ }, }, },
@@ -1024,32 +1265,36 @@ static const struct syscall_fmt syscall_fmts[] = {
#if defined(__s390x__)
.alias = "old_mmap",
#endif
- .arg = { [2] = { .scnprintf = SCA_MMAP_PROT, /* prot */ },
+ .arg = { [2] = { .scnprintf = SCA_MMAP_PROT, .show_zero = true, /* prot */ },
[3] = { .scnprintf = SCA_MMAP_FLAGS, /* flags */
.strtoul = STUL_STRARRAY_FLAGS,
.parm = &strarray__mmap_flags, },
[5] = { .scnprintf = SCA_HEX, /* offset */ }, }, },
{ .name = "mount",
- .arg = { [0] = { .scnprintf = SCA_FILENAME, /* dev_name */ },
+ .arg = { [0] = SCA_FILENAME_FROM_USER(devname),
[3] = { .scnprintf = SCA_MOUNT_FLAGS, /* flags */
.mask_val = SCAMV_MOUNT_FLAGS, /* flags */ }, }, },
{ .name = "move_mount",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* from_dfd */ },
- [1] = { .scnprintf = SCA_FILENAME, /* from_pathname */ },
+ [1] = SCA_FILENAME_FROM_USER(pathname),
[2] = { .scnprintf = SCA_FDAT, /* to_dfd */ },
- [3] = { .scnprintf = SCA_FILENAME, /* to_pathname */ },
+ [3] = SCA_FILENAME_FROM_USER(pathname),
[4] = { .scnprintf = SCA_MOVE_MOUNT_FLAGS, /* flags */ }, }, },
{ .name = "mprotect",
.arg = { [0] = { .scnprintf = SCA_HEX, /* start */ },
- [2] = { .scnprintf = SCA_MMAP_PROT, /* prot */ }, }, },
+ [2] = { .scnprintf = SCA_MMAP_PROT, .show_zero = true, /* prot */ }, }, },
{ .name = "mq_unlink",
- .arg = { [0] = { .scnprintf = SCA_FILENAME, /* u_name */ }, }, },
+ .arg = { [0] = SCA_FILENAME_FROM_USER(u_name), }, },
{ .name = "mremap", .hexret = true,
.arg = { [3] = { .scnprintf = SCA_MREMAP_FLAGS, /* flags */ }, }, },
{ .name = "name_to_handle_at",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ }, }, },
- { .name = "newfstatat",
- .arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ }, }, },
+ { .name = "nanosleep",
+ .arg = { [0] = SCA_TIMESPEC_FROM_USER(req), }, },
+ { .name = "newfstatat", .alias = "fstatat",
+ .arg = { [0] = { .scnprintf = SCA_FDAT, /* dirfd */ },
+ [1] = SCA_FILENAME_FROM_USER(pathname),
+ [3] = { .scnprintf = SCA_FS_AT_FLAGS, /* flags */ }, }, },
{ .name = "open",
.arg = { [1] = { .scnprintf = SCA_OPEN_FLAGS, /* flags */ }, }, },
{ .name = "open_by_handle_at",
@@ -1059,7 +1304,7 @@ static const struct syscall_fmt syscall_fmts[] = {
.arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ },
[2] = { .scnprintf = SCA_OPEN_FLAGS, /* flags */ }, }, },
{ .name = "perf_event_open",
- .arg = { [0] = { .scnprintf = SCA_PERF_ATTR, /* attr */ },
+ .arg = { [0] = SCA_PERF_ATTR_FROM_USER(attr),
[2] = { .scnprintf = SCA_INT, /* cpu */ },
[3] = { .scnprintf = SCA_FD, /* group_fd */ },
[4] = { .scnprintf = SCA_PERF_FLAGS, /* flags */ }, }, },
@@ -1071,7 +1316,7 @@ static const struct syscall_fmt syscall_fmts[] = {
.arg = { [0] = { .scnprintf = SCA_INT, /* key */ }, }, },
{ .name = "pkey_mprotect",
.arg = { [0] = { .scnprintf = SCA_HEX, /* start */ },
- [2] = { .scnprintf = SCA_MMAP_PROT, /* prot */ },
+ [2] = { .scnprintf = SCA_MMAP_PROT, .show_zero = true, /* prot */ },
[3] = { .scnprintf = SCA_INT, /* pkey */ }, }, },
{ .name = "poll", .timeout = true, },
{ .name = "ppoll", .timeout = true, },
@@ -1084,7 +1329,8 @@ static const struct syscall_fmt syscall_fmts[] = {
{ .name = "pread", .alias = "pread64", },
{ .name = "preadv", .alias = "pread", },
{ .name = "prlimit64",
- .arg = { [1] = STRARRAY(resource, rlimit_resources), }, },
+ .arg = { [1] = STRARRAY(resource, rlimit_resources),
+ [2] = { .from_user = true /* new_rlim */, }, }, },
{ .name = "pwrite", .alias = "pwrite64", },
{ .name = "readlinkat",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ }, }, },
@@ -1101,6 +1347,8 @@ static const struct syscall_fmt syscall_fmts[] = {
.arg = { [0] = { .scnprintf = SCA_FDAT, /* olddirfd */ },
[2] = { .scnprintf = SCA_FDAT, /* newdirfd */ },
[4] = { .scnprintf = SCA_RENAMEAT2_FLAGS, /* flags */ }, }, },
+ { .name = "rseq",
+ .arg = { [0] = { .from_user = true /* rseq */, }, }, },
{ .name = "rt_sigaction",
.arg = { [0] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, },
{ .name = "rt_sigprocmask",
@@ -1122,12 +1370,15 @@ static const struct syscall_fmt syscall_fmts[] = {
.arg = { [2] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ }, }, },
{ .name = "sendto",
.arg = { [3] = { .scnprintf = SCA_MSG_FLAGS, /* flags */ },
- [4] = { .scnprintf = SCA_SOCKADDR, /* addr */ }, }, },
+ [4] = SCA_SOCKADDR_FROM_USER(addr), }, },
+ { .name = "set_robust_list",
+ .arg = { [0] = { .from_user = true /* head */, }, }, },
{ .name = "set_tid_address", .errpid = true, },
{ .name = "setitimer",
.arg = { [0] = STRARRAY(which, itimers), }, },
{ .name = "setrlimit",
- .arg = { [0] = STRARRAY(resource, rlimit_resources), }, },
+ .arg = { [0] = STRARRAY(resource, rlimit_resources),
+ [1] = { .from_user = true /* rlim */, }, }, },
{ .name = "setsockopt",
.arg = { [1] = STRARRAY(level, socket_level), }, },
{ .name = "socket",
@@ -1141,12 +1392,12 @@ static const struct syscall_fmt syscall_fmts[] = {
{ .name = "stat", .alias = "newstat", },
{ .name = "statx",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* fdat */ },
- [2] = { .scnprintf = SCA_STATX_FLAGS, /* flags */ } ,
+ [2] = { .scnprintf = SCA_FS_AT_FLAGS, /* flags */ } ,
[3] = { .scnprintf = SCA_STATX_MASK, /* mask */ }, }, },
{ .name = "swapoff",
- .arg = { [0] = { .scnprintf = SCA_FILENAME, /* specialfile */ }, }, },
+ .arg = { [0] = SCA_FILENAME_FROM_USER(specialfile), }, },
{ .name = "swapon",
- .arg = { [0] = { .scnprintf = SCA_FILENAME, /* specialfile */ }, }, },
+ .arg = { [0] = SCA_FILENAME_FROM_USER(specialfile), }, },
{ .name = "symlinkat",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ }, }, },
{ .name = "sync_file_range",
@@ -1156,16 +1407,20 @@ static const struct syscall_fmt syscall_fmts[] = {
{ .name = "tkill",
.arg = { [1] = { .scnprintf = SCA_SIGNUM, /* sig */ }, }, },
{ .name = "umount2", .alias = "umount",
- .arg = { [0] = { .scnprintf = SCA_FILENAME, /* name */ }, }, },
+ .arg = { [0] = SCA_FILENAME_FROM_USER(name), }, },
{ .name = "uname", .alias = "newuname", },
{ .name = "unlinkat",
- .arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ }, }, },
+ .arg = { [0] = { .scnprintf = SCA_FDAT, /* dfd */ },
+ [1] = SCA_FILENAME_FROM_USER(pathname),
+ [2] = { .scnprintf = SCA_FS_AT_FLAGS, /* flags */ }, }, },
{ .name = "utimensat",
.arg = { [0] = { .scnprintf = SCA_FDAT, /* dirfd */ }, }, },
{ .name = "wait4", .errpid = true,
.arg = { [2] = { .scnprintf = SCA_WAITID_OPTIONS, /* options */ }, }, },
{ .name = "waitid", .errpid = true,
.arg = { [3] = { .scnprintf = SCA_WAITID_OPTIONS, /* options */ }, }, },
+ { .name = "write",
+ .arg = { [1] = { .scnprintf = SCA_BUF /* buf */, .from_user = true, }, }, },
};
static int syscall_fmt__cmp(const void *name, const void *fmtp)
@@ -1206,23 +1461,39 @@ static const struct syscall_fmt *syscall_fmt__find_by_alias(const char *alias)
return __syscall_fmt__find_by_alias(syscall_fmts, nmemb, alias);
}
-/*
- * is_exit: is this "exit" or "exit_group"?
- * is_open: is this "open" or "openat"? To associate the fd returned in sys_exit with the pathname in sys_enter.
- * args_size: sum of the sizes of the syscall arguments, anything after that is augmented stuff: pathname for openat, etc.
- * nonexistent: Just a hole in the syscall table, syscall id not allocated
+/**
+ * struct syscall - system call table entry, one per syscall number and e_machine
*/
struct syscall {
+ /** @e_machine: The ELF machine associated with the entry. */
+ int e_machine;
+ /** @id: id value from the tracepoint, the system call number. */
+ int id;
struct tep_event *tp_format;
int nr_args;
+ /**
+ * @args_size: sum of the sizes of the syscall arguments, anything
+ * after that is augmented stuff: pathname for openat, etc.
+ */
+
int args_size;
struct {
struct bpf_program *sys_enter,
*sys_exit;
} bpf_prog;
+ /** @is_exit: is this "exit" or "exit_group"? */
bool is_exit;
+ /**
+ * @is_open: is this "open" or "openat"? To associate the fd returned in
+ * sys_exit with the pathname in sys_enter.
+ */
bool is_open;
+ /**
+ * @nonexistent: Name lookup failed. Just a hole in the syscall table,
+ * syscall id not allocated.
+ */
bool nonexistent;
+ bool use_btf;
struct tep_format_field *args;
const char *name;
const struct syscall_fmt *fmt;
@@ -1279,16 +1550,48 @@ struct thread_trace {
struct file *table;
} files;
- struct intlist *syscall_stats;
+ struct hashmap *syscall_stats;
};
-static struct thread_trace *thread_trace__new(void)
+static size_t syscall_id_hash(long key, void *ctx __maybe_unused)
+{
+ return key;
+}
+
+static bool syscall_id_equal(long key1, long key2, void *ctx __maybe_unused)
+{
+ return key1 == key2;
+}
+
+static struct hashmap *alloc_syscall_stats(void)
+{
+ return hashmap__new(syscall_id_hash, syscall_id_equal, NULL);
+}
+
+static void delete_syscall_stats(struct hashmap *syscall_stats)
+{
+ struct hashmap_entry *pos;
+ size_t bkt;
+
+ if (syscall_stats == NULL)
+ return;
+
+ hashmap__for_each_entry(syscall_stats, pos, bkt)
+ zfree(&pos->pvalue);
+ hashmap__free(syscall_stats);
+}
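
/*
 * Illustration only, not part of the patch: the lookup-or-allocate pattern
 * that thread__update_stats() below applies to the identity-hashed map
 * returned by alloc_syscall_stats(). "find_or_create_stats" is a
 * hypothetical helper name; hashmap__find()/hashmap__add(), zalloc() and
 * init_stats() are the same calls the patch itself uses.
 */
static struct syscall_stats *find_or_create_stats(struct hashmap *map, int id)
{
	struct syscall_stats *stats = NULL;

	if (hashmap__find(map, id, &stats))
		return stats;		/* this syscall id was seen before */

	stats = zalloc(sizeof(*stats));
	if (stats == NULL)
		return NULL;

	init_stats(&stats->stats);
	if (hashmap__add(map, id, stats) < 0) {	/* keep the map and the allocation consistent */
		free(stats);
		return NULL;
	}
	return stats;
}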
+
+static struct thread_trace *thread_trace__new(struct trace *trace)
{
struct thread_trace *ttrace = zalloc(sizeof(struct thread_trace));
if (ttrace) {
ttrace->files.max = -1;
- ttrace->syscall_stats = intlist__new(NULL);
+ if (trace->summary) {
+ ttrace->syscall_stats = alloc_syscall_stats();
+ if (IS_ERR(ttrace->syscall_stats))
+ zfree(&ttrace);
+ }
}
return ttrace;
@@ -1303,14 +1606,14 @@ static void thread_trace__delete(void *pttrace)
if (!ttrace)
return;
- intlist__delete(ttrace->syscall_stats);
+ delete_syscall_stats(ttrace->syscall_stats);
ttrace->syscall_stats = NULL;
thread_trace__free_files(ttrace);
zfree(&ttrace->entry_str);
free(ttrace);
}
-static struct thread_trace *thread__trace(struct thread *thread, FILE *fp)
+static struct thread_trace *thread__trace(struct thread *thread, struct trace *trace)
{
struct thread_trace *ttrace;
@@ -1318,7 +1621,7 @@ static struct thread_trace *thread__trace(struct thread *thread, FILE *fp)
goto fail;
if (thread__priv(thread) == NULL)
- thread__set_priv(thread, thread_trace__new());
+ thread__set_priv(thread, thread_trace__new(trace));
if (thread__priv(thread) == NULL)
goto fail;
@@ -1328,7 +1631,7 @@ static struct thread_trace *thread__trace(struct thread *thread, FILE *fp)
return ttrace;
fail:
- color_fprintf(fp, PERF_COLOR_RED,
+ color_fprintf(trace->output, PERF_COLOR_RED,
"WARNING: not enough memory, dropping samples!\n");
return NULL;
}
@@ -1349,7 +1652,7 @@ static const size_t trace__entry_str_size = 2048;
static void thread_trace__free_files(struct thread_trace *ttrace)
{
- for (int i = 0; i < ttrace->files.max; ++i) {
+ for (int i = 0; i <= ttrace->files.max; ++i) {
struct file *file = ttrace->files.table + i;
zfree(&file->pathname);
}
@@ -1395,6 +1698,7 @@ static int trace__set_fd_pathname(struct thread *thread, int fd, const char *pat
if (file != NULL) {
struct stat st;
+
if (stat(pathname, &st) == 0)
file->dev_maj = major(st.st_rdev);
file->pathname = strdup(pathname);
@@ -1536,6 +1840,32 @@ static size_t syscall_arg__scnprintf_filename(char *bf, size_t size,
return 0;
}
+#define MAX_CONTROL_CHAR 31
+#define MAX_ASCII 127
+
+static size_t syscall_arg__scnprintf_buf(char *bf, size_t size, struct syscall_arg *arg)
+{
+ struct augmented_arg *augmented_arg = arg->augmented.args;
+ unsigned char *orig = (unsigned char *)augmented_arg->value;
+ size_t printed = 0;
+ int consumed;
+
+ if (augmented_arg == NULL)
+ return 0;
+
+ for (int j = 0; j < augmented_arg->size; ++j) {
+ bool control_char = orig[j] <= MAX_CONTROL_CHAR || orig[j] >= MAX_ASCII;
+ /* print control characters (0~31 and 127) and non-ASCII characters as "\<decimal>" */
+ printed += scnprintf(bf + printed, size - printed, control_char ? "\\%d" : "%c", (int)orig[j]);
+ }
+
+ consumed = sizeof(*augmented_arg) + augmented_arg->size;
+ arg->augmented.args = ((void *)arg->augmented.args) + consumed;
+ arg->augmented.size -= consumed;
+
+ return printed;
+}
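
/*
 * Illustration only, not part of the patch: a stand-alone demo of the
 * escaping rule implemented above - bytes 0..31 and 127..255 are printed
 * as "\<decimal>", everything else verbatim.
 */
#include <stdio.h>

int main(void)
{
	const unsigned char buf[] = { 'h', 'i', '\n', 0x80 };

	for (size_t i = 0; i < sizeof(buf); i++) {
		if (buf[i] <= 31 || buf[i] >= 127)
			printf("\\%d", buf[i]);
		else
			printf("%c", buf[i]);
	}
	putchar('\n');	/* prints: hi\10\128 */
	return 0;
}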
+
static bool trace__filter_duration(struct trace *trace, double t)
{
return t < (trace->duration_filter * NSEC_PER_MSEC);
@@ -1611,7 +1941,7 @@ static int trace__process_event(struct trace *trace, struct machine *machine,
switch (event->header.type) {
case PERF_RECORD_LOST:
color_fprintf(trace->output, PERF_COLOR_RED,
- "LOST %" PRIu64 " events!\n", event->lost.lost);
+ "LOST %" PRIu64 " events!\n", (u64)event->lost.lost);
ret = machine__process_lost_event(machine, event, sample);
break;
default:
@@ -1622,7 +1952,7 @@ static int trace__process_event(struct trace *trace, struct machine *machine,
return ret;
}
-static int trace__tool_process(struct perf_tool *tool,
+static int trace__tool_process(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine)
@@ -1649,17 +1979,24 @@ static char *trace__machine__resolve_kernel_addr(void *vmachine, unsigned long l
return machine__resolve_kernel_addr(vmachine, addrp, modp);
}
-static int trace__symbols_init(struct trace *trace, struct evlist *evlist)
+static int trace__symbols_init(struct trace *trace, int argc, const char **argv,
+ struct evlist *evlist)
{
int err = symbol__init(NULL);
if (err)
return err;
- trace->host = machine__new_host();
- if (trace->host == NULL)
- return -ENOMEM;
+ perf_env__init(&trace->host_env);
+ err = perf_env__set_cmdline(&trace->host_env, argc, argv);
+ if (err)
+ goto out;
+ trace->host = machine__new_host(&trace->host_env);
+ if (trace->host == NULL) {
+ err = -ENOMEM;
+ goto out;
+ }
thread__set_priv_destructor(thread_trace__delete);
err = trace_event__register_resolver(trace->host, trace__machine__resolve_kernel_addr);
@@ -1670,9 +2007,10 @@ static int trace__symbols_init(struct trace *trace, struct evlist *evlist)
evlist->core.threads, trace__tool_process,
true, false, 1);
out:
- if (err)
+ if (err) {
+ perf_env__exit(&trace->host_env);
symbol__exit();
-
+ }
return err;
}
@@ -1681,6 +2019,7 @@ static void trace__symbols__exit(struct trace *trace)
machine__exit(trace->host);
trace->host = NULL;
+ perf_env__exit(&trace->host_env);
symbol__exit();
}
@@ -1729,7 +2068,8 @@ static const struct syscall_arg_fmt *syscall_arg_fmt__find_by_name(const char *n
}
static struct tep_format_field *
-syscall_arg_fmt__init_array(struct syscall_arg_fmt *arg, struct tep_format_field *field)
+syscall_arg_fmt__init_array(struct syscall_arg_fmt *arg, struct tep_format_field *field,
+ bool *use_btf)
{
struct tep_format_field *last_field = NULL;
int len;
@@ -1742,11 +2082,15 @@ syscall_arg_fmt__init_array(struct syscall_arg_fmt *arg, struct tep_format_field
len = strlen(field->name);
+ // As far as heuristics (or intention) go, this seems to hold true and makes sense!
+ if ((field->flags & TEP_FIELD_IS_POINTER) && strstarts(field->type, "const "))
+ arg->from_user = true;
+
if (strcmp(field->type, "const char *") == 0 &&
((len >= 4 && strcmp(field->name + len - 4, "name") == 0) ||
- strstr(field->name, "path") != NULL))
+ strstr(field->name, "path") != NULL)) {
arg->scnprintf = SCA_FILENAME;
- else if ((field->flags & TEP_FIELD_IS_POINTER) || strstr(field->name, "addr"))
+ } else if ((field->flags & TEP_FIELD_IS_POINTER) || strstr(field->name, "addr"))
arg->scnprintf = SCA_PTR;
else if (strcmp(field->type, "pid_t") == 0)
arg->scnprintf = SCA_PID;
@@ -1767,6 +2111,9 @@ syscall_arg_fmt__init_array(struct syscall_arg_fmt *arg, struct tep_format_field
* 7 unsigned long
*/
arg->scnprintf = SCA_FD;
+ } else if (strstr(field->type, "enum") && use_btf != NULL) {
+ *use_btf = true;
+ arg->strtoul = STUL_BTF_TYPE;
} else {
const struct syscall_arg_fmt *fmt =
syscall_arg_fmt__find_by_name(field->name);
@@ -1783,7 +2130,8 @@ syscall_arg_fmt__init_array(struct syscall_arg_fmt *arg, struct tep_format_field
static int syscall__set_arg_fmts(struct syscall *sc)
{
- struct tep_format_field *last_field = syscall_arg_fmt__init_array(sc->arg_fmt, sc->args);
+ struct tep_format_field *last_field = syscall_arg_fmt__init_array(sc->arg_fmt, sc->args,
+ &sc->use_btf);
if (last_field)
sc->args_size = last_field->offset + last_field->size;
@@ -1791,40 +2139,21 @@ static int syscall__set_arg_fmts(struct syscall *sc)
return 0;
}
-static int trace__read_syscall_info(struct trace *trace, int id)
+static int syscall__read_info(struct syscall *sc, struct trace *trace)
{
char tp_name[128];
- struct syscall *sc;
- const char *name = syscalltbl__name(trace->sctbl, id);
-
-#ifdef HAVE_SYSCALL_TABLE_SUPPORT
- if (trace->syscalls.table == NULL) {
- trace->syscalls.table = calloc(trace->sctbl->syscalls.max_id + 1, sizeof(*sc));
- if (trace->syscalls.table == NULL)
- return -ENOMEM;
- }
-#else
- if (id > trace->sctbl->syscalls.max_id || (id == 0 && trace->syscalls.table == NULL)) {
- // When using libaudit we don't know beforehand what is the max syscall id
- struct syscall *table = realloc(trace->syscalls.table, (id + 1) * sizeof(*sc));
-
- if (table == NULL)
- return -ENOMEM;
-
- // Need to memset from offset 0 and +1 members if brand new
- if (trace->syscalls.table == NULL)
- memset(table, 0, (id + 1) * sizeof(*sc));
- else
- memset(table + trace->sctbl->syscalls.max_id + 1, 0, (id - trace->sctbl->syscalls.max_id) * sizeof(*sc));
+ const char *name;
+ int err;
- trace->syscalls.table = table;
- trace->sctbl->syscalls.max_id = id;
- }
-#endif
- sc = trace->syscalls.table + id;
if (sc->nonexistent)
return -EEXIST;
+ if (sc->name) {
+ /* Info already read. */
+ return 0;
+ }
+
+ name = syscalltbl__name(sc->e_machine, sc->id);
if (name == NULL) {
sc->nonexistent = true;
return -EEXIST;
@@ -1847,11 +2176,16 @@ static int trace__read_syscall_info(struct trace *trace, int id)
*/
if (IS_ERR(sc->tp_format)) {
sc->nonexistent = true;
- return PTR_ERR(sc->tp_format);
+ err = PTR_ERR(sc->tp_format);
+ sc->tp_format = NULL;
+ return err;
}
- if (syscall__alloc_arg_fmts(sc, IS_ERR(sc->tp_format) ?
- RAW_SYSCALL_ARGS_NUM : sc->tp_format->format.nr_fields))
+ /*
+ * The tracepoint format contains a __syscall_nr field, so its field
+ * count is one more than the actual number of syscall arguments.
+ */
+ if (syscall__alloc_arg_fmts(sc, sc->tp_format->format.nr_fields - 1))
return -ENOMEM;
sc->args = sc->tp_format->format.fields;
@@ -1868,16 +2202,26 @@ static int trace__read_syscall_info(struct trace *trace, int id)
sc->is_exit = !strcmp(name, "exit_group") || !strcmp(name, "exit");
sc->is_open = !strcmp(name, "open") || !strcmp(name, "openat");
- return syscall__set_arg_fmts(sc);
+ err = syscall__set_arg_fmts(sc);
+
+ /* after calling syscall__set_arg_fmts() we'll know whether use_btf is true */
+ if (sc->use_btf)
+ trace__load_vmlinux_btf(trace);
+
+ return err;
}
-static int evsel__init_tp_arg_scnprintf(struct evsel *evsel)
+static int evsel__init_tp_arg_scnprintf(struct evsel *evsel, bool *use_btf)
{
struct syscall_arg_fmt *fmt = evsel__syscall_arg_fmt(evsel);
if (fmt != NULL) {
- syscall_arg_fmt__init_array(fmt, evsel->tp_format->format.fields);
- return 0;
+ const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+ if (tp_format) {
+ syscall_arg_fmt__init_array(fmt, tp_format->format.fields, use_btf);
+ return 0;
+ }
}
return -ENOMEM;
@@ -1909,10 +2253,14 @@ static int trace__validate_ev_qualifier(struct trace *trace)
strlist__for_each_entry(pos, trace->ev_qualifier) {
const char *sc = pos->s;
- int id = syscalltbl__id(trace->sctbl, sc), match_next = -1;
+ /*
+ * TODO: don't assume that the syscalls being validated and warned
+ * about are all for the same binary machine type as perf.
+ */
+ int id = syscalltbl__id(EM_HOST, sc), match_next = -1;
if (id < 0) {
- id = syscalltbl__strglobmatch_first(trace->sctbl, sc, &match_next);
+ id = syscalltbl__strglobmatch_first(EM_HOST, sc, &match_next);
if (id >= 0)
goto matches;
@@ -1932,7 +2280,7 @@ matches:
continue;
while (1) {
- id = syscalltbl__strglobmatch_next(trace->sctbl, sc, &match_next);
+ id = syscalltbl__strglobmatch_next(EM_HOST, sc, &match_next);
if (id < 0)
break;
if (nr_allocated == nr_used) {
@@ -2035,7 +2383,7 @@ static size_t syscall__scnprintf_args(struct syscall *sc, char *bf, size_t size,
unsigned char *args, void *augmented_args, int augmented_args_size,
struct trace *trace, struct thread *thread)
{
- size_t printed = 0;
+ size_t printed = 0, btf_printed;
unsigned long val;
u8 bit = 1;
struct syscall_arg arg = {
@@ -2051,6 +2399,7 @@ static size_t syscall__scnprintf_args(struct syscall *sc, char *bf, size_t size,
.show_string_prefix = trace->show_string_prefix,
};
struct thread_trace *ttrace = thread__priv(thread);
+ void *default_scnprintf;
/*
* Things like fcntl will set this in its 'cmd' formatter to pick the
@@ -2076,17 +2425,15 @@ static size_t syscall__scnprintf_args(struct syscall *sc, char *bf, size_t size,
val = syscall_arg_fmt__mask_val(&sc->arg_fmt[arg.idx], &arg, val);
/*
- * Suppress this argument if its value is zero and
- * and we don't have a string associated in an
- * strarray for it.
- */
- if (val == 0 &&
- !trace->show_zeros &&
- !(sc->arg_fmt &&
- (sc->arg_fmt[arg.idx].show_zero ||
- sc->arg_fmt[arg.idx].scnprintf == SCA_STRARRAY ||
- sc->arg_fmt[arg.idx].scnprintf == SCA_STRARRAYS) &&
- sc->arg_fmt[arg.idx].parm))
+ * Suppress this argument if its value is zero and show_zero
+ * property isn't set.
+ *
+ * If it has a BTF type, then override the zero suppression knob
+ * as the common case is for zero in an enum to have an associated entry.
+ */
+ if (val == 0 && !trace->show_zeros &&
+ !(sc->arg_fmt && sc->arg_fmt[arg.idx].show_zero) &&
+ !(sc->arg_fmt && sc->arg_fmt[arg.idx].strtoul == STUL_BTF_TYPE))
continue;
printed += scnprintf(bf + printed, size - printed, "%s", printed ? ", " : "");
@@ -2094,6 +2441,17 @@ static size_t syscall__scnprintf_args(struct syscall *sc, char *bf, size_t size,
if (trace->show_arg_names)
printed += scnprintf(bf + printed, size - printed, "%s: ", field->name);
+ default_scnprintf = sc->arg_fmt[arg.idx].scnprintf;
+
+ if (trace->force_btf || default_scnprintf == NULL || default_scnprintf == SCA_PTR) {
+ btf_printed = trace__btf_scnprintf(trace, &arg, bf + printed,
+ size - printed, val, field->type);
+ if (btf_printed) {
+ printed += btf_printed;
+ continue;
+ }
+ }
+
printed += syscall_arg_fmt__scnprintf_val(&sc->arg_fmt[arg.idx],
bf + printed, size - printed, &arg, val);
}
@@ -2120,13 +2478,92 @@ next_arg:
return printed;
}
+static struct syscall *syscall__new(int e_machine, int id)
+{
+ struct syscall *sc = zalloc(sizeof(*sc));
+
+ if (!sc)
+ return NULL;
+
+ sc->e_machine = e_machine;
+ sc->id = id;
+ return sc;
+}
+
+static void syscall__delete(struct syscall *sc)
+{
+ if (!sc)
+ return;
+
+ free(sc->arg_fmt);
+ free(sc);
+}
+
+static int syscall__bsearch_cmp(const void *key, const void *entry)
+{
+ const struct syscall *a = key, *b = *((const struct syscall **)entry);
+
+ if (a->e_machine != b->e_machine)
+ return a->e_machine - b->e_machine;
+
+ return a->id - b->id;
+}
+
+static int syscall__cmp(const void *va, const void *vb)
+{
+ const struct syscall *a = *((const struct syscall **)va);
+ const struct syscall *b = *((const struct syscall **)vb);
+
+ if (a->e_machine != b->e_machine)
+ return a->e_machine - b->e_machine;
+
+ return a->id - b->id;
+}
+
+static struct syscall *trace__find_syscall(struct trace *trace, int e_machine, int id)
+{
+ struct syscall key = {
+ .e_machine = e_machine,
+ .id = id,
+ };
+ struct syscall *sc, **tmp;
+
+ if (trace->syscalls.table) {
+ struct syscall **sc_entry = bsearch(&key, trace->syscalls.table,
+ trace->syscalls.table_size,
+ sizeof(trace->syscalls.table[0]),
+ syscall__bsearch_cmp);
+
+ if (sc_entry)
+ return *sc_entry;
+ }
+
+ sc = syscall__new(e_machine, id);
+ if (!sc)
+ return NULL;
+
+ tmp = reallocarray(trace->syscalls.table, trace->syscalls.table_size + 1,
+ sizeof(trace->syscalls.table[0]));
+ if (!tmp) {
+ syscall__delete(sc);
+ return NULL;
+ }
+
+ trace->syscalls.table = tmp;
+ trace->syscalls.table[trace->syscalls.table_size++] = sc;
+ qsort(trace->syscalls.table, trace->syscalls.table_size, sizeof(trace->syscalls.table[0]),
+ syscall__cmp);
+ return sc;
+}
+
typedef int (*tracepoint_handler)(struct trace *trace, struct evsel *evsel,
union perf_event *event,
struct perf_sample *sample);
-static struct syscall *trace__syscall_info(struct trace *trace,
- struct evsel *evsel, int id)
+static struct syscall *trace__syscall_info(struct trace *trace, struct evsel *evsel,
+ int e_machine, int id)
{
+ struct syscall *sc;
int err = 0;
if (id < 0) {
@@ -2151,39 +2588,20 @@ static struct syscall *trace__syscall_info(struct trace *trace,
err = -EINVAL;
-#ifdef HAVE_SYSCALL_TABLE_SUPPORT
- if (id > trace->sctbl->syscalls.max_id) {
-#else
- if (id >= trace->sctbl->syscalls.max_id) {
- /*
- * With libaudit we don't know beforehand what is the max_id,
- * so we let trace__read_syscall_info() figure that out as we
- * go on reading syscalls.
- */
- err = trace__read_syscall_info(trace, id);
- if (err)
-#endif
- goto out_cant_read;
- }
-
- if ((trace->syscalls.table == NULL || trace->syscalls.table[id].name == NULL) &&
- (err = trace__read_syscall_info(trace, id)) != 0)
- goto out_cant_read;
+ sc = trace__find_syscall(trace, e_machine, id);
+ if (sc)
+ err = syscall__read_info(sc, trace);
- if (trace->syscalls.table && trace->syscalls.table[id].nonexistent)
- goto out_cant_read;
-
- return &trace->syscalls.table[id];
-
-out_cant_read:
- if (verbose > 0) {
+ if (err && verbose > 0) {
char sbuf[STRERR_BUFSIZE];
- fprintf(trace->output, "Problems reading syscall %d: %d (%s)", id, -err, str_error_r(-err, sbuf, sizeof(sbuf)));
- if (id <= trace->sctbl->syscalls.max_id && trace->syscalls.table[id].name != NULL)
- fprintf(trace->output, "(%s)", trace->syscalls.table[id].name);
+
+ fprintf(trace->output, "Problems reading syscall %d: %d (%s)", id, -err,
+ str_error_r(-err, sbuf, sizeof(sbuf)));
+ if (sc && sc->name)
+ fprintf(trace->output, "(%s)", sc->name);
fputs(" information\n", trace->output);
}
- return NULL;
+ return err ? NULL : sc;
}
struct syscall_stats {
@@ -2194,24 +2612,29 @@ struct syscall_stats {
};
static void thread__update_stats(struct thread *thread, struct thread_trace *ttrace,
- int id, struct perf_sample *sample, long err, bool errno_summary)
+ int id, struct perf_sample *sample, long err,
+ struct trace *trace)
{
- struct int_node *inode;
- struct syscall_stats *stats;
+ struct hashmap *syscall_stats = ttrace->syscall_stats;
+ struct syscall_stats *stats = NULL;
u64 duration = 0;
- inode = intlist__findnew(ttrace->syscall_stats, id);
- if (inode == NULL)
+ if (trace->summary_bpf)
return;
- stats = inode->priv;
- if (stats == NULL) {
+ if (trace->summary_mode == SUMMARY__BY_TOTAL)
+ syscall_stats = trace->syscall_stats;
+
+ if (!hashmap__find(syscall_stats, id, &stats)) {
stats = zalloc(sizeof(*stats));
if (stats == NULL)
return;
init_stats(&stats->stats);
- inode->priv = stats;
+ if (hashmap__add(syscall_stats, id, stats) < 0) {
+ free(stats);
+ return;
+ }
}
if (ttrace->entry_time && sample->time > ttrace->entry_time)
@@ -2222,7 +2645,7 @@ static void thread__update_stats(struct thread *thread, struct thread_trace *ttr
if (err < 0) {
++stats->nr_failures;
- if (!errno_summary)
+ if (!trace->errno_summary)
return;
err = -err;
@@ -2293,7 +2716,6 @@ static int trace__fprintf_sample(struct trace *trace, struct evsel *evsel,
static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sample, int *augmented_args_size, int raw_augmented_args_size)
{
- void *augmented_args = NULL;
/*
* For now with BPF raw_augmented we hook into raw_syscalls:sys_enter
* and there we get all 6 syscall args plus the tracepoint common fields
@@ -2311,18 +2733,24 @@ static void *syscall__augmented_args(struct syscall *sc, struct perf_sample *sam
int args_size = raw_augmented_args_size ?: sc->args_size;
*augmented_args_size = sample->raw_size - args_size;
- if (*augmented_args_size > 0)
- augmented_args = sample->raw_data + args_size;
+ if (*augmented_args_size > 0) {
+ static uintptr_t argbuf[1024]; /* assuming single-threaded */
- return augmented_args;
-}
+ if ((size_t)(*augmented_args_size) > sizeof(argbuf))
+ return NULL;
-static void syscall__exit(struct syscall *sc)
-{
- if (!sc)
- return;
+ /*
+ * The perf ring-buffer is 8-byte aligned but sample->raw_data
+ * is not because it's preceded by a u32 size. Later, beautifiers
+ * will access the augmented args through types with stricter
+ * alignment requirements (e.g. structs). To make sure it's aligned,
+ * copy the args into a static buffer, as this is single-threaded for now.
+ */
+ memcpy(argbuf, sample->raw_data + args_size, *augmented_args_size);
- zfree(&sc->arg_fmt);
+ return argbuf;
+ }
+ return NULL;
}
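
/*
 * Illustration only, not part of the patch: why the copy above is needed.
 * The raw payload sits right after a u32 size inside an 8-byte aligned
 * record, so its address is only guaranteed to be 4-byte aligned; the
 * static uintptr_t array restores natural alignment. The record layout
 * below is a simplified stand-in for a PERF_SAMPLE_RAW record.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	_Alignas(8) unsigned char record[64];			/* 8-byte aligned, like the ring buffer */
	unsigned char *raw_data = record + sizeof(uint32_t);	/* payload starts after the u32 size */
	static uintptr_t argbuf[8];				/* naturally aligned copy target */

	printf("raw_data %% 8 = %zu\n", (size_t)((uintptr_t)raw_data % 8));	/* 4 */
	printf("argbuf   %% 8 = %zu\n", (size_t)((uintptr_t)argbuf % 8));	/* 0 */
	return 0;
}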
static int trace__sys_enter(struct trace *trace, struct evsel *evsel,
@@ -2334,16 +2762,17 @@ static int trace__sys_enter(struct trace *trace, struct evsel *evsel,
int printed = 0;
struct thread *thread;
int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1;
- int augmented_args_size = 0;
+ int augmented_args_size = 0, e_machine;
void *augmented_args = NULL;
- struct syscall *sc = trace__syscall_info(trace, evsel, id);
+ struct syscall *sc;
struct thread_trace *ttrace;
- if (sc == NULL)
- return -1;
-
thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
- ttrace = thread__trace(thread, trace->output);
+ e_machine = thread__e_machine(thread, trace->host);
+ sc = trace__syscall_info(trace, evsel, e_machine, id);
+ if (sc == NULL)
+ goto out_put;
+ ttrace = thread__trace(thread, trace);
if (ttrace == NULL)
goto out_put;
@@ -2410,16 +2839,19 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
struct thread_trace *ttrace;
struct thread *thread;
int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1;
- struct syscall *sc = trace__syscall_info(trace, evsel, id);
+ struct syscall *sc;
char msg[1024];
void *args, *augmented_args = NULL;
- int augmented_args_size;
+ int augmented_args_size, e_machine;
+ size_t printed = 0;
- if (sc == NULL)
- return -1;
thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
- ttrace = thread__trace(thread, trace->output);
+ e_machine = thread__e_machine(thread, trace->host);
+ sc = trace__syscall_info(trace, evsel, e_machine, id);
+ if (sc == NULL)
+ goto out_put;
+ ttrace = thread__trace(thread, trace);
/*
* We need to get ttrace just to make sure it is there when syscall__scnprintf_args()
* and the rest of the beautifiers accessing it via struct syscall_arg touches it.
@@ -2429,8 +2861,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
args = perf_evsel__sc_tp_ptr(evsel, args, sample);
augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls_args_size);
- syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
- fprintf(trace->output, "%s", msg);
+ printed += syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
+ fprintf(trace->output, "%.*s", (int)printed, msg);
err = 0;
out_put:
thread__put(thread);
@@ -2467,13 +2899,6 @@ static int trace__fprintf_callchain(struct trace *trace, struct perf_sample *sam
return sample__fprintf_callchain(sample, 38, print_opts, get_tls_callchain_cursor(), symbol_conf.bt_stop_list, trace->output);
}
-static const char *errno_to_name(struct evsel *evsel, int err)
-{
- struct perf_env *env = evsel__env(evsel);
-
- return perf_env__arch_strerrno(env, err);
-}
-
static int trace__sys_exit(struct trace *trace, struct evsel *evsel,
union perf_event *event __maybe_unused,
struct perf_sample *sample)
@@ -2483,15 +2908,16 @@ static int trace__sys_exit(struct trace *trace, struct evsel *evsel,
bool duration_calculated = false;
struct thread *thread;
int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1, callchain_ret = 0, printed = 0;
- int alignment = trace->args_alignment;
- struct syscall *sc = trace__syscall_info(trace, evsel, id);
+ int alignment = trace->args_alignment, e_machine;
+ struct syscall *sc;
struct thread_trace *ttrace;
- if (sc == NULL)
- return -1;
-
thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
- ttrace = thread__trace(thread, trace->output);
+ e_machine = thread__e_machine(thread, trace->host);
+ sc = trace__syscall_info(trace, evsel, e_machine, id);
+ if (sc == NULL)
+ goto out_put;
+ ttrace = thread__trace(thread, trace);
if (ttrace == NULL)
goto out_put;
@@ -2500,7 +2926,7 @@ static int trace__sys_exit(struct trace *trace, struct evsel *evsel,
ret = perf_evsel__sc_tp_uint(evsel, ret, sample);
if (trace->summary)
- thread__update_stats(thread, ttrace, id, sample, ret, trace->errno_summary);
+ thread__update_stats(thread, ttrace, id, sample, ret, trace);
if (!trace->fd_path_disabled && sc->is_open && ret >= 0 && ttrace->filename.pending_open) {
trace__set_fd_pathname(thread, ret, ttrace->filename.name);
@@ -2558,8 +2984,9 @@ signed_print:
} else if (ret < 0) {
errno_print: {
char bf[STRERR_BUFSIZE];
- const char *emsg = str_error_r(-ret, bf, sizeof(bf)),
- *e = errno_to_name(evsel, -ret);
+ struct perf_env *env = evsel__env(evsel) ?: &trace->host_env;
+ const char *emsg = str_error_r(-ret, bf, sizeof(bf));
+ const char *e = perf_env__arch_strerrno(env, err);
fprintf(trace->output, "-1 %s (%s)", e, emsg);
}
@@ -2580,8 +3007,8 @@ errno_print: {
else if (sc->fmt->errpid) {
struct thread *child = machine__find_thread(trace->host, ret, ret);
+ fprintf(trace->output, "%ld", ret);
if (child != NULL) {
- fprintf(trace->output, "%ld", ret);
if (thread__comm_set(child))
fprintf(trace->output, " (%s)", thread__comm_str(child));
thread__put(child);
@@ -2680,7 +3107,7 @@ static int trace__sched_stat_runtime(struct trace *trace, struct evsel *evsel,
struct thread *thread = machine__findnew_thread(trace->host,
sample->pid,
sample->tid);
- struct thread_trace *ttrace = thread__trace(thread, trace->output);
+ struct thread_trace *ttrace = thread__trace(thread, trace);
if (ttrace == NULL)
goto out_dump;
@@ -2738,9 +3165,10 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel,
{
char bf[2048];
size_t size = sizeof(bf);
- struct tep_format_field *field = evsel->tp_format->format.fields;
+ const struct tep_event *tp_format = evsel__tp_format(evsel);
+ struct tep_format_field *field = tp_format ? tp_format->format.fields : NULL;
struct syscall_arg_fmt *arg = __evsel__syscall_arg_fmt(evsel);
- size_t printed = 0;
+ size_t printed = 0, btf_printed;
unsigned long val;
u8 bit = 1;
struct syscall_arg syscall_arg = {
@@ -2781,17 +3209,8 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel,
*/
val = syscall_arg_fmt__mask_val(arg, &syscall_arg, val);
- /*
- * Suppress this argument if its value is zero and
- * we don't have a string associated in an
- * strarray for it.
- */
- if (val == 0 &&
- !trace->show_zeros &&
- !((arg->show_zero ||
- arg->scnprintf == SCA_STRARRAY ||
- arg->scnprintf == SCA_STRARRAYS) &&
- arg->parm))
+ /* Suppress this argument if its value is zero and show_zero property isn't set. */
+ if (val == 0 && !trace->show_zeros && !arg->show_zero && arg->strtoul != STUL_BTF_TYPE)
continue;
printed += scnprintf(bf + printed, size - printed, "%s", printed ? ", " : "");
@@ -2799,10 +3218,16 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel,
if (trace->show_arg_names)
printed += scnprintf(bf + printed, size - printed, "%s: ", field->name);
+ btf_printed = trace__btf_scnprintf(trace, &syscall_arg, bf + printed, size - printed, val, field->type);
+ if (btf_printed) {
+ printed += btf_printed;
+ continue;
+ }
+
printed += syscall_arg_fmt__scnprintf_val(arg, bf + printed, size - printed, &syscall_arg, val);
}
- return printed + fprintf(trace->output, "%s", bf);
+ return fprintf(trace->output, "%.*s", (int)printed, bf);
}
static int trace__event_handler(struct trace *trace, struct evsel *evsel,
@@ -2811,13 +3236,8 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
{
struct thread *thread;
int callchain_ret = 0;
- /*
- * Check if we called perf_evsel__disable(evsel) due to, for instance,
- * this event's max_events having been hit and this is an entry coming
- * from the ring buffer that we should discard, since the max events
- * have already been considered/printed.
- */
- if (evsel->disabled)
+
+ if (evsel->nr_events_printed >= evsel->max_events)
return 0;
thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
@@ -2844,7 +3264,8 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
if (evsel == trace->syscalls.events.bpf_output) {
int id = perf_evsel__sc_tp_uint(evsel, id, sample);
- struct syscall *sc = trace__syscall_info(trace, evsel, id);
+ int e_machine = thread ? thread__e_machine(thread, trace->host) : EM_HOST;
+ struct syscall *sc = trace__syscall_info(trace, evsel, e_machine, id);
if (sc) {
fprintf(trace->output, "%s(", sc->name);
@@ -2864,11 +3285,13 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
if (evsel__is_bpf_output(evsel)) {
bpf_output__fprintf(trace, sample);
- } else if (evsel->tp_format) {
- if (strncmp(evsel->tp_format->name, "sys_enter_", 10) ||
- trace__fprintf_sys_enter(trace, evsel, sample)) {
+ } else {
+ const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+ if (tp_format && (strncmp(tp_format->name, "sys_enter_", 10) ||
+ trace__fprintf_sys_enter(trace, evsel, sample))) {
if (trace->libtraceevent_print) {
- event_format__fprintf(evsel->tp_format, sample->cpu,
+ event_format__fprintf(tp_format, sample->cpu,
sample->raw_data, sample->raw_size,
trace->output);
} else {
@@ -2902,7 +3325,7 @@ static void print_location(FILE *f, struct perf_sample *sample,
{
if ((verbose > 0 || print_dso) && al->map)
- fprintf(f, "%s@", map__dso(al->map)->long_name);
+ fprintf(f, "%s@", dso__long_name(map__dso(al->map)));
if ((verbose > 0 || print_sym) && al->sym)
fprintf(f, "%s+0x%" PRIx64, al->sym->name,
@@ -2939,14 +3362,17 @@ static int trace__pgfault(struct trace *trace,
}
}
- ttrace = thread__trace(thread, trace->output);
+ ttrace = thread__trace(thread, trace);
if (ttrace == NULL)
goto out_put;
- if (evsel->core.attr.config == PERF_COUNT_SW_PAGE_FAULTS_MAJ)
+ if (evsel->core.attr.config == PERF_COUNT_SW_PAGE_FAULTS_MAJ) {
ttrace->pfmaj++;
- else
+ trace->pfmaj++;
+ } else {
ttrace->pfmin++;
+ trace->pfmin++;
+ }
if (trace->summary_only)
goto out;
@@ -3009,7 +3435,7 @@ static void trace__set_base_time(struct trace *trace,
trace->base_time = sample->time;
}
-static int trace__process_sample(struct perf_tool *tool,
+static int trace__process_sample(const struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct evsel *evsel,
@@ -3105,6 +3531,7 @@ out_free:
}
static size_t trace__fprintf_thread_summary(struct trace *trace, FILE *fp);
+static size_t trace__fprintf_total_summary(struct trace *trace, FILE *fp);
static bool evlist__add_vfs_getname(struct evlist *evlist)
{
@@ -3275,27 +3702,29 @@ out_enomem:
goto out;
}
-#ifdef HAVE_BPF_SKEL
-static struct bpf_program *trace__find_bpf_program_by_title(struct trace *trace, const char *name)
+#ifdef HAVE_LIBBPF_SUPPORT
+
+static struct bpf_program *unaugmented_prog;
+
+static int syscall_arg_fmt__cache_btf_struct(struct syscall_arg_fmt *arg_fmt, struct btf *btf, char *type)
{
- struct bpf_program *pos, *prog = NULL;
- const char *sec_name;
+ int id;
- if (trace->skel->obj == NULL)
- return NULL;
+ if (arg_fmt->type != NULL)
+ return -1;
- bpf_object__for_each_program(pos, trace->skel->obj) {
- sec_name = bpf_program__section_name(pos);
- if (sec_name && !strcmp(sec_name, name)) {
- prog = pos;
- break;
- }
- }
+ id = btf__find_by_name(btf, type);
+ if (id < 0)
+ return -1;
- return prog;
+ arg_fmt->type = btf__type_by_id(btf, id);
+ arg_fmt->type_id = id;
+
+ return 0;
}
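
/*
 * Illustration only, not part of the patch: the two libbpf BTF calls cached
 * above, used directly to resolve a kernel type size by name.
 * "btf_struct_size" is a hypothetical helper; it assumes a loaded vmlinux
 * BTF handle such as trace->btf.
 */
static int btf_struct_size(struct btf *btf, const char *name)
{
	int id = btf__find_by_name(btf, name);

	if (id < 0)
		return -1;

	return btf__type_by_id(btf, id)->size;	/* e.g. the size of struct clone_args for "clone_args" */
}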
-static struct bpf_program *trace__find_syscall_bpf_prog(struct trace *trace, struct syscall *sc,
+static struct bpf_program *trace__find_syscall_bpf_prog(struct trace *trace __maybe_unused,
+ struct syscall *sc,
const char *prog_name, const char *type)
{
struct bpf_program *prog;
@@ -3303,19 +3732,19 @@ static struct bpf_program *trace__find_syscall_bpf_prog(struct trace *trace, str
if (prog_name == NULL) {
char default_prog_name[256];
scnprintf(default_prog_name, sizeof(default_prog_name), "tp/syscalls/sys_%s_%s", type, sc->name);
- prog = trace__find_bpf_program_by_title(trace, default_prog_name);
+ prog = augmented_syscalls__find_by_title(default_prog_name);
if (prog != NULL)
goto out_found;
if (sc->fmt && sc->fmt->alias) {
scnprintf(default_prog_name, sizeof(default_prog_name), "tp/syscalls/sys_%s_%s", type, sc->fmt->alias);
- prog = trace__find_bpf_program_by_title(trace, default_prog_name);
+ prog = augmented_syscalls__find_by_title(default_prog_name);
if (prog != NULL)
goto out_found;
}
goto out_unaugmented;
}
- prog = trace__find_bpf_program_by_title(trace, prog_name);
+ prog = augmented_syscalls__find_by_title(prog_name);
if (prog != NULL) {
out_found:
@@ -3325,12 +3754,12 @@ out_found:
pr_debug("Couldn't find BPF prog \"%s\" to associate with syscalls:sys_%s_%s, not augmenting it\n",
prog_name, type, sc->name);
out_unaugmented:
- return trace->skel->progs.syscall_unaugmented;
+ return unaugmented_prog;
}
-static void trace__init_syscall_bpf_progs(struct trace *trace, int id)
+static void trace__init_syscall_bpf_progs(struct trace *trace, int e_machine, int id)
{
- struct syscall *sc = trace__syscall_info(trace, NULL, id);
+ struct syscall *sc = trace__syscall_info(trace, NULL, e_machine, id);
if (sc == NULL)
return;
@@ -3339,23 +3768,107 @@ static void trace__init_syscall_bpf_progs(struct trace *trace, int id)
sc->bpf_prog.sys_exit = trace__find_syscall_bpf_prog(trace, sc, sc->fmt ? sc->fmt->bpf_prog_name.sys_exit : NULL, "exit");
}
-static int trace__bpf_prog_sys_enter_fd(struct trace *trace, int id)
+static int trace__bpf_prog_sys_enter_fd(struct trace *trace, int e_machine, int id)
{
- struct syscall *sc = trace__syscall_info(trace, NULL, id);
- return sc ? bpf_program__fd(sc->bpf_prog.sys_enter) : bpf_program__fd(trace->skel->progs.syscall_unaugmented);
+ struct syscall *sc = trace__syscall_info(trace, NULL, e_machine, id);
+ return sc ? bpf_program__fd(sc->bpf_prog.sys_enter) : bpf_program__fd(unaugmented_prog);
}
-static int trace__bpf_prog_sys_exit_fd(struct trace *trace, int id)
+static int trace__bpf_prog_sys_exit_fd(struct trace *trace, int e_machine, int id)
{
- struct syscall *sc = trace__syscall_info(trace, NULL, id);
- return sc ? bpf_program__fd(sc->bpf_prog.sys_exit) : bpf_program__fd(trace->skel->progs.syscall_unaugmented);
+ struct syscall *sc = trace__syscall_info(trace, NULL, e_machine, id);
+ return sc ? bpf_program__fd(sc->bpf_prog.sys_exit) : bpf_program__fd(unaugmented_prog);
}
-static struct bpf_program *trace__find_usable_bpf_prog_entry(struct trace *trace, struct syscall *sc)
+static int trace__bpf_sys_enter_beauty_map(struct trace *trace, int e_machine, int key, unsigned int *beauty_array)
{
- struct tep_format_field *field, *candidate_field;
- int id;
+ struct tep_format_field *field;
+ struct syscall *sc = trace__syscall_info(trace, NULL, e_machine, key);
+ const struct btf_type *bt;
+ char *struct_offset, *tmp, name[32];
+ bool can_augment = false;
+ int i, cnt;
+
+ if (sc == NULL)
+ return -1;
+
+ trace__load_vmlinux_btf(trace);
+ if (trace->btf == NULL)
+ return -1;
+ for (i = 0, field = sc->args; field; ++i, field = field->next) {
+ // XXX We're only collecting pointer payloads _from_ user space
+ if (!sc->arg_fmt[i].from_user)
+ continue;
+
+ struct_offset = strstr(field->type, "struct ");
+ if (struct_offset == NULL)
+ struct_offset = strstr(field->type, "union ");
+ else
+ struct_offset++; // "struct " is one char longer than "union "
+
+ if (field->flags & TEP_FIELD_IS_POINTER && struct_offset) { /* struct or union (think BPF's attr arg) */
+ struct_offset += 6;
+
+ /* for 'struct foo *', we only want 'foo' */
+ for (tmp = struct_offset, cnt = 0; *tmp != ' ' && *tmp != '\0'; ++tmp, ++cnt) {
+ }
+
+ strncpy(name, struct_offset, cnt);
+ name[cnt] = '\0';
+
+ /* cache struct's btf_type and type_id */
+ if (syscall_arg_fmt__cache_btf_struct(&sc->arg_fmt[i], trace->btf, name))
+ continue;
+
+ bt = sc->arg_fmt[i].type;
+ beauty_array[i] = bt->size;
+ can_augment = true;
+ } else if (field->flags & TEP_FIELD_IS_POINTER && /* string */
+ strcmp(field->type, "const char *") == 0 &&
+ (strstr(field->name, "name") ||
+ strstr(field->name, "path") ||
+ strstr(field->name, "file") ||
+ strstr(field->name, "root") ||
+ strstr(field->name, "key") ||
+ strstr(field->name, "special") ||
+ strstr(field->name, "type") ||
+ strstr(field->name, "description"))) {
+ beauty_array[i] = 1;
+ can_augment = true;
+ } else if (field->flags & TEP_FIELD_IS_POINTER && /* buffer */
+ strstr(field->type, "char *") &&
+ (strstr(field->name, "buf") ||
+ strstr(field->name, "val") ||
+ strstr(field->name, "msg"))) {
+ int j;
+ struct tep_format_field *field_tmp;
+
+ /* find the size argument that is paired with the buffer */
+ for (j = 0, field_tmp = sc->args; field_tmp; ++j, field_tmp = field_tmp->next) {
+ if (!(field_tmp->flags & TEP_FIELD_IS_POINTER) && /* only integers */
+ (strstr(field_tmp->name, "count") ||
+ strstr(field_tmp->name, "siz") || /* size, bufsiz */
+ (strstr(field_tmp->name, "len") && strcmp(field_tmp->name, "filename")))) {
+ /* filename's got 'len' in it, we don't want that */
+ beauty_array[i] = -(j + 1);
+ can_augment = true;
+ break;
+ }
+ }
+ }
+ }
+
+ if (can_augment)
+ return 0;
+
+ return -1;
+}
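
/*
 * Illustration only, not part of the patch: the value stored per syscall in
 * the beauty map is six unsigned ints, one per argument slot, filled by the
 * rules above - a struct/union size taken from BTF, 1 for a string pointer,
 * or -(index + 1) naming the argument that carries a buffer's length. The
 * initializers below are inferred from that logic (and assume the argument
 * is flagged .from_user, as write's "buf" and openat's "filename" are), not
 * dumped from a live map. BPF's attr argument - the case the comment above
 * calls out - would get the BTF size of union bpf_attr instead.
 */
static const unsigned int write_beauty[6]  = { 0, (unsigned int)-3, 0, 0, 0, 0 };	/* "buf" is sized by arg 2, "count" */
static const unsigned int openat_beauty[6] = { 0, 1, 0, 0, 0, 0 };			/* "filename" is a string */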
+
+static struct bpf_program *trace__find_usable_bpf_prog_entry(struct trace *trace,
+ struct syscall *sc)
+{
+ struct tep_format_field *field, *candidate_field;
/*
* We're only interested in syscalls that have a pointer:
*/
@@ -3367,13 +3880,14 @@ static struct bpf_program *trace__find_usable_bpf_prog_entry(struct trace *trace
return NULL;
try_to_find_pair:
- for (id = 0; id < trace->sctbl->syscalls.nr_entries; ++id) {
- struct syscall *pair = trace__syscall_info(trace, NULL, id);
+ for (int i = 0, num_idx = syscalltbl__num_idx(sc->e_machine); i < num_idx; ++i) {
+ int id = syscalltbl__id_at_idx(sc->e_machine, i);
+ struct syscall *pair = trace__syscall_info(trace, NULL, sc->e_machine, id);
struct bpf_program *pair_prog;
bool is_candidate = false;
- if (pair == NULL || pair == sc ||
- pair->bpf_prog.sys_enter == trace->skel->progs.syscall_unaugmented)
+ if (pair == NULL || pair->id == sc->id ||
+ pair->bpf_prog.sys_enter == unaugmented_prog)
continue;
for (field = sc->args, candidate_field = pair->args;
@@ -3439,11 +3953,12 @@ try_to_find_pair:
*/
if (pair_prog == NULL) {
pair_prog = trace__find_syscall_bpf_prog(trace, pair, pair->fmt ? pair->fmt->bpf_prog_name.sys_enter : NULL, "enter");
- if (pair_prog == trace->skel->progs.syscall_unaugmented)
+ if (pair_prog == unaugmented_prog)
goto next_candidate;
}
- pr_debug("Reusing \"%s\" BPF sys_enter augmenter for \"%s\"\n", pair->name, sc->name);
+ pr_debug("Reusing \"%s\" BPF sys_enter augmenter for \"%s\"\n", pair->name,
+ sc->name);
return pair_prog;
next_candidate:
continue;
@@ -3452,29 +3967,45 @@ try_to_find_pair:
return NULL;
}
-static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
+static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace, int e_machine)
{
- int map_enter_fd = bpf_map__fd(trace->skel->maps.syscalls_sys_enter);
- int map_exit_fd = bpf_map__fd(trace->skel->maps.syscalls_sys_exit);
- int err = 0, key;
+ int map_enter_fd;
+ int map_exit_fd;
+ int beauty_map_fd;
+ int err = 0;
+ unsigned int beauty_array[6];
- for (key = 0; key < trace->sctbl->syscalls.nr_entries; ++key) {
- int prog_fd;
+ if (augmented_syscalls__get_map_fds(&map_enter_fd, &map_exit_fd, &beauty_map_fd) < 0)
+ return -1;
+
+ unaugmented_prog = augmented_syscalls__unaugmented();
+
+ for (int i = 0, num_idx = syscalltbl__num_idx(e_machine); i < num_idx; ++i) {
+ int prog_fd, key = syscalltbl__id_at_idx(e_machine, i);
if (!trace__syscall_enabled(trace, key))
continue;
- trace__init_syscall_bpf_progs(trace, key);
+ trace__init_syscall_bpf_progs(trace, e_machine, key);
// It'll get at least the "!raw_syscalls:unaugmented"
- prog_fd = trace__bpf_prog_sys_enter_fd(trace, key);
+ prog_fd = trace__bpf_prog_sys_enter_fd(trace, e_machine, key);
err = bpf_map_update_elem(map_enter_fd, &key, &prog_fd, BPF_ANY);
if (err)
break;
- prog_fd = trace__bpf_prog_sys_exit_fd(trace, key);
+ prog_fd = trace__bpf_prog_sys_exit_fd(trace, e_machine, key);
err = bpf_map_update_elem(map_exit_fd, &key, &prog_fd, BPF_ANY);
if (err)
break;
+
+ /* use the beauty map to tell BPF how many bytes to collect; set its per-syscall value here */
+ memset(beauty_array, 0, sizeof(beauty_array));
+ err = trace__bpf_sys_enter_beauty_map(trace, e_machine, key, (unsigned int *)beauty_array);
+ if (err)
+ continue;
+ err = bpf_map_update_elem(beauty_map_fd, &key, beauty_array, BPF_ANY);
+ if (err)
+ break;
}
/*
@@ -3505,8 +4036,9 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
* first and second arg (this one on the raw_syscalls:sys_exit prog
* array tail call, then that one will be used.
*/
- for (key = 0; key < trace->sctbl->syscalls.nr_entries; ++key) {
- struct syscall *sc = trace__syscall_info(trace, NULL, key);
+ for (int i = 0, num_idx = syscalltbl__num_idx(e_machine); i < num_idx; ++i) {
+ int key = syscalltbl__id_at_idx(e_machine, i);
+ struct syscall *sc = trace__syscall_info(trace, NULL, e_machine, key);
struct bpf_program *pair_prog;
int prog_fd;
@@ -3517,7 +4049,7 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
* For now we're just reusing the sys_enter prog, and if it
* already has an augmenter, we don't need to find one.
*/
- if (sc->bpf_prog.sys_enter != trace->skel->progs.syscall_unaugmented)
+ if (sc->bpf_prog.sys_enter != unaugmented_prog)
continue;
/*
@@ -3542,7 +4074,13 @@ static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
return err;
}
-#endif // HAVE_BPF_SKEL
+#else // !HAVE_LIBBPF_SUPPORT
+static int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace __maybe_unused,
+ int e_machine __maybe_unused)
+{
+ return -1;
+}
+#endif // HAVE_LIBBPF_SUPPORT
static int trace__set_ev_qualifier_filter(struct trace *trace)
{
@@ -3551,24 +4089,6 @@ static int trace__set_ev_qualifier_filter(struct trace *trace)
return 0;
}
-static int bpf_map__set_filter_pids(struct bpf_map *map __maybe_unused,
- size_t npids __maybe_unused, pid_t *pids __maybe_unused)
-{
- int err = 0;
-#ifdef HAVE_LIBBPF_SUPPORT
- bool value = true;
- int map_fd = bpf_map__fd(map);
- size_t i;
-
- for (i = 0; i < npids; ++i) {
- err = bpf_map_update_elem(map_fd, &pids[i], &value, BPF_ANY);
- if (err)
- break;
- }
-#endif
- return err;
-}
-
static int trace__set_filter_loop_pids(struct trace *trace)
{
unsigned int nr = 1, err;
@@ -3588,14 +4108,17 @@ static int trace__set_filter_loop_pids(struct trace *trace)
if (!strcmp(thread__comm_str(parent), "sshd") ||
strstarts(thread__comm_str(parent), "gnome-terminal")) {
pids[nr++] = thread__tid(parent);
+ thread__put(parent);
break;
}
+ thread__put(thread);
thread = parent;
}
+ thread__put(thread);
err = evlist__append_tp_filter_pids(trace->evlist, nr, pids);
- if (!err && trace->filter_pids.map)
- err = bpf_map__set_filter_pids(trace->filter_pids.map, nr, pids);
+ if (!err)
+ err = augmented_syscalls__set_filter_pids(nr, pids);
return err;
}
@@ -3612,8 +4135,8 @@ static int trace__set_filter_pids(struct trace *trace)
if (trace->filter_pids.nr > 0) {
err = evlist__append_tp_filter_pids(trace->evlist, trace->filter_pids.nr,
trace->filter_pids.entries);
- if (!err && trace->filter_pids.map) {
- err = bpf_map__set_filter_pids(trace->filter_pids.map, trace->filter_pids.nr,
+ if (!err) {
+ err = augmented_syscalls__set_filter_pids(trace->filter_pids.nr,
trace->filter_pids.entries);
}
} else if (perf_thread_map__pid(trace->evlist->core.threads, 0) == -1) {
@@ -3627,13 +4150,16 @@ static int __trace__deliver_event(struct trace *trace, union perf_event *event)
{
struct evlist *evlist = trace->evlist;
struct perf_sample sample;
- int err = evlist__parse_sample(evlist, event, &sample);
+ int err;
+ perf_sample__init(&sample, /*all=*/false);
+ err = evlist__parse_sample(evlist, event, &sample);
if (err)
fprintf(trace->output, "Can't parse sample, err = %d, skipping...\n", err);
else
trace__handle_event(trace, event, &sample);
+ perf_sample__exit(&sample);
return 0;
}
@@ -3680,22 +4206,31 @@ static int ordered_events__deliver_event(struct ordered_events *oe,
return __trace__deliver_event(trace, event->event);
}
-static struct syscall_arg_fmt *evsel__find_syscall_arg_fmt_by_name(struct evsel *evsel, char *arg)
+static struct syscall_arg_fmt *evsel__find_syscall_arg_fmt_by_name(struct evsel *evsel, char *arg,
+ char **type)
{
- struct tep_format_field *field;
struct syscall_arg_fmt *fmt = __evsel__syscall_arg_fmt(evsel);
+ const struct tep_event *tp_format;
+
+ if (!fmt)
+ return NULL;
- if (evsel->tp_format == NULL || fmt == NULL)
+ tp_format = evsel__tp_format(evsel);
+ if (!tp_format)
return NULL;
- for (field = evsel->tp_format->format.fields; field; field = field->next, ++fmt)
- if (strcmp(field->name, arg) == 0)
+ for (const struct tep_format_field *field = tp_format->format.fields; field;
+ field = field->next, ++fmt) {
+ if (strcmp(field->name, arg) == 0) {
+ *type = field->type;
return fmt;
+ }
+ }
return NULL;
}
-static int trace__expand_filter(struct trace *trace __maybe_unused, struct evsel *evsel)
+static int trace__expand_filter(struct trace *trace, struct evsel *evsel)
{
char *tok, *left = evsel->filter, *new_filter = evsel->filter;
@@ -3728,14 +4263,14 @@ static int trace__expand_filter(struct trace *trace __maybe_unused, struct evsel
struct syscall_arg_fmt *fmt;
int left_size = tok - left,
right_size = right_end - right;
- char arg[128];
+ char arg[128], *type;
while (isspace(left[left_size - 1]))
--left_size;
scnprintf(arg, sizeof(arg), "%.*s", left_size, left);
- fmt = evsel__find_syscall_arg_fmt_by_name(evsel, arg);
+ fmt = evsel__find_syscall_arg_fmt_by_name(evsel, arg, &type);
if (fmt == NULL) {
pr_err("\"%s\" not found in \"%s\", can't set filter \"%s\"\n",
arg, evsel->name, evsel->filter);
@@ -3748,6 +4283,9 @@ static int trace__expand_filter(struct trace *trace __maybe_unused, struct evsel
if (fmt->strtoul) {
u64 val;
struct syscall_arg syscall_arg = {
+ .trace = trace,
+ .fmt = fmt,
+ .type_name = type,
.parm = fmt->parm,
};
@@ -3822,6 +4360,14 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
trace->live = true;
+ if (trace->summary_bpf) {
+ if (trace_prepare_bpf_summary(trace->summary_mode) < 0)
+ goto out_delete_evlist;
+
+ if (trace->summary_only)
+ goto create_maps;
+ }
+
if (!trace->raw_augmented_syscalls) {
if (trace->trace_syscalls && trace__add_syscall_newtp(trace))
goto out_error_raw_syscalls;
@@ -3846,8 +4392,8 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
evlist__add(evlist, pgfault_min);
}
- /* Enable ignoring missing threads when -u/-p option is defined. */
- trace->opts.ignore_missing_thread = trace->opts.target.uid != UINT_MAX || trace->opts.target.pid;
+ /* Enable ignoring missing threads when -p option is defined. */
+ trace->opts.ignore_missing_thread = trace->opts.target.pid;
if (trace->sched &&
evlist__add_newtp(evlist, "sched", "sched_stat_runtime", trace__sched_stat_runtime))
@@ -3880,18 +4426,25 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
if (trace->cgroup)
evlist__set_default_cgroup(trace->evlist, trace->cgroup);
+create_maps:
err = evlist__create_maps(evlist, &trace->opts.target);
if (err < 0) {
fprintf(trace->output, "Problems parsing the target to trace, check your options!\n");
goto out_delete_evlist;
}
- err = trace__symbols_init(trace, evlist);
+ err = trace__symbols_init(trace, argc, argv, evlist);
if (err < 0) {
fprintf(trace->output, "Problems initializing symbol libraries!\n");
goto out_delete_evlist;
}
+ if (trace->summary_mode == SUMMARY__BY_TOTAL && !trace->summary_bpf) {
+ trace->syscall_stats = alloc_syscall_stats();
+ if (IS_ERR(trace->syscall_stats))
+ goto out_delete_evlist;
+ }
+
evlist__config(evlist, &trace->opts, &callchain_param);
if (forks) {
@@ -3906,31 +4459,18 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
err = evlist__open(evlist);
if (err < 0)
goto out_error_open;
-#ifdef HAVE_BPF_SKEL
- if (trace->syscalls.events.bpf_output) {
- struct perf_cpu cpu;
- /*
- * Set up the __augmented_syscalls__ BPF map to hold for each
- * CPU the bpf-output event's file descriptor.
- */
- perf_cpu_map__for_each_cpu(cpu, i, trace->syscalls.events.bpf_output->core.cpus) {
- bpf_map__update_elem(trace->skel->maps.__augmented_syscalls__,
- &cpu.cpu, sizeof(int),
- xyarray__entry(trace->syscalls.events.bpf_output->core.fd,
- cpu.cpu, 0),
- sizeof(__u32), BPF_ANY);
- }
- }
-#endif
+ augmented_syscalls__setup_bpf_output();
+
err = trace__set_filter_pids(trace);
if (err < 0)
goto out_error_mem;
-#ifdef HAVE_BPF_SKEL
- if (trace->skel && trace->skel->progs.sys_enter)
- trace__init_syscalls_bpf_prog_array_maps(trace);
-#endif
+ /*
+ * TODO: Initialize for all host binary machine types, not just
+ * those matching the perf binary.
+ */
+ trace__init_syscalls_bpf_prog_array_maps(trace, EM_HOST);
if (trace->ev_qualifier_ids.nr > 0) {
err = trace__set_ev_qualifier_filter(trace);
@@ -3954,18 +4494,21 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
* So just disable this beautifier (SCA_FD, SCA_FDAT) when 'close' is
* not in use.
*/
- trace->fd_path_disabled = !trace__syscall_enabled(trace, syscalltbl__id(trace->sctbl, "close"));
+ /* TODO: support 'close' for more than just the perf binary's machine type. */
+ trace->fd_path_disabled = !trace__syscall_enabled(trace, syscalltbl__id(EM_HOST, "close"));
err = trace__expand_filters(trace, &evsel);
if (err)
goto out_delete_evlist;
- err = evlist__apply_filters(evlist, &evsel);
+ err = evlist__apply_filters(evlist, &evsel, &trace->opts.target);
if (err < 0)
goto out_error_apply_filters;
- err = evlist__mmap(evlist, trace->opts.mmap_pages);
- if (err < 0)
- goto out_error_mmap;
+ if (!trace->summary_only || !trace->summary_bpf) {
+ err = evlist__mmap(evlist, trace->opts.mmap_pages);
+ if (err < 0)
+ goto out_error_mmap;
+ }
if (!target__none(&trace->opts.target) && !trace->opts.target.initial_delay)
evlist__enable(evlist);
@@ -3978,6 +4521,9 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
evlist__enable(evlist);
}
+ if (trace->summary_bpf)
+ trace_start_bpf_summary();
+
trace->multiple_threads = perf_thread_map__pid(evlist->core.threads, 0) == -1 ||
perf_thread_map__nr(evlist->core.threads) > 1 ||
evlist__first(evlist)->core.attr.inherit;
@@ -4045,12 +4591,21 @@ out_disable:
evlist__disable(evlist);
+ if (trace->summary_bpf)
+ trace_end_bpf_summary();
+
if (trace->sort_events)
ordered_events__flush(&trace->oe.data, OE_FLUSH__FINAL);
if (!err) {
- if (trace->summary)
- trace__fprintf_thread_summary(trace, trace->output);
+ if (trace->summary) {
+ if (trace->summary_bpf)
+ trace_print_bpf_summary(trace->output, trace->max_summary);
+ else if (trace->summary_mode == SUMMARY__BY_TOTAL)
+ trace__fprintf_total_summary(trace, trace->output);
+ else
+ trace__fprintf_thread_summary(trace, trace->output);
+ }
if (trace->show_tool_stats) {
fprintf(trace->output, "Stats:\n "
@@ -4062,6 +4617,8 @@ out_disable:
}
out_delete_evlist:
+ trace_cleanup_bpf_summary();
+ delete_syscall_stats(trace->syscall_stats);
trace__symbols__exit(trace);
evlist__free_syscall_tp_fields(evlist);
evlist__delete(evlist);
@@ -4121,6 +4678,7 @@ static int trace__replay(struct trace *trace)
struct evsel *evsel;
int err = -1;
+ perf_tool__init(&trace->tool, /*ordered_events=*/true);
trace->tool.sample = trace__process_sample;
trace->tool.mmap = perf_event__process_mmap;
trace->tool.mmap2 = perf_event__process_mmap2;
@@ -4148,7 +4706,7 @@ static int trace__replay(struct trace *trace)
if (trace->opts.target.tid)
symbol_conf.tid_list_str = strdup(trace->opts.target.tid);
- if (symbol__init(&session->header.env) < 0)
+ if (symbol__init(perf_session__env(session)) < 0)
goto out;
trace->host = &session->machines.host;
@@ -4189,6 +4747,12 @@ static int trace__replay(struct trace *trace)
evsel->handler = trace__pgfault;
}
+ if (trace->summary_mode == SUMMARY__BY_TOTAL) {
+ trace->syscall_stats = alloc_syscall_stats();
+ if (IS_ERR(trace->syscall_stats))
+ goto out;
+ }
+
setup_pager();
err = perf_session__process_events(session);
@@ -4199,12 +4763,13 @@ static int trace__replay(struct trace *trace)
trace__fprintf_thread_summary(trace, trace->output);
out:
+ delete_syscall_stats(trace->syscall_stats);
perf_session__delete(session);
return err;
}
-static size_t trace__fprintf_threads_header(FILE *fp)
+static size_t trace__fprintf_summary_header(FILE *fp)
{
size_t printed;
@@ -4213,29 +4778,57 @@ static size_t trace__fprintf_threads_header(FILE *fp)
return printed;
}
-DEFINE_RESORT_RB(syscall_stats, a->msecs > b->msecs,
+struct syscall_entry {
struct syscall_stats *stats;
double msecs;
int syscall;
-)
+};
+
+static int entry_cmp(const void *e1, const void *e2)
{
- struct int_node *source = rb_entry(nd, struct int_node, rb_node);
- struct syscall_stats *stats = source->priv;
+ const struct syscall_entry *entry1 = e1;
+ const struct syscall_entry *entry2 = e2;
- entry->syscall = source->i;
- entry->stats = stats;
- entry->msecs = stats ? (u64)stats->stats.n * (avg_stats(&stats->stats) / NSEC_PER_MSEC) : 0;
+ return entry1->msecs > entry2->msecs ? -1 : 1;
}
-static size_t thread__dump_stats(struct thread_trace *ttrace,
- struct trace *trace, FILE *fp)
+static struct syscall_entry *syscall__sort_stats(struct hashmap *syscall_stats)
+{
+ struct syscall_entry *entry;
+ struct hashmap_entry *pos;
+ unsigned bkt, i, nr;
+
+ nr = syscall_stats->sz;
+ entry = malloc(nr * sizeof(*entry));
+ if (entry == NULL)
+ return NULL;
+
+ i = 0;
+ hashmap__for_each_entry(syscall_stats, pos, bkt) {
+ struct syscall_stats *ss = pos->pvalue;
+ struct stats *st = &ss->stats;
+
+ entry[i].stats = ss;
+ entry[i].msecs = (u64)st->n * (avg_stats(st) / NSEC_PER_MSEC);
+ entry[i].syscall = pos->key;
+ i++;
+ }
+ assert(i == nr);
+
+ qsort(entry, nr, sizeof(*entry), entry_cmp);
+ return entry;
+}
+
+static size_t syscall__dump_stats(struct trace *trace, int e_machine, FILE *fp,
+ struct hashmap *syscall_stats)
{
size_t printed = 0;
+ int lines = 0;
struct syscall *sc;
- struct rb_node *nd;
- DECLARE_RESORT_RB_INTLIST(syscall_stats, ttrace->syscall_stats);
+ struct syscall_entry *entries;
- if (syscall_stats == NULL)
+ entries = syscall__sort_stats(syscall_stats);
+ if (entries == NULL)
return 0;
printed += fprintf(fp, "\n");
@@ -4244,8 +4837,10 @@ static size_t thread__dump_stats(struct thread_trace *ttrace,
printed += fprintf(fp, " (msec) (msec) (msec) (msec) (%%)\n");
printed += fprintf(fp, " --------------- -------- ------ -------- --------- --------- --------- ------\n");
- resort_rb__for_each_entry(nd, syscall_stats) {
- struct syscall_stats *stats = syscall_stats_entry->stats;
+ for (size_t i = 0; i < syscall_stats->sz; i++) {
+ struct syscall_entry *entry = &entries[i];
+ struct syscall_stats *stats = entry->stats;
+
if (stats) {
double min = (double)(stats->stats.min) / NSEC_PER_MSEC;
double max = (double)(stats->stats.max) / NSEC_PER_MSEC;
@@ -4256,10 +4851,13 @@ static size_t thread__dump_stats(struct thread_trace *ttrace,
pct = avg ? 100.0 * stddev_stats(&stats->stats) / avg : 0.0;
avg /= NSEC_PER_MSEC;
- sc = &trace->syscalls.table[syscall_stats_entry->syscall];
+ sc = trace__syscall_info(trace, /*evsel=*/NULL, e_machine, entry->syscall);
+ if (!sc)
+ continue;
+
printed += fprintf(fp, " %-15s", sc->name);
printed += fprintf(fp, " %8" PRIu64 " %6" PRIu64 " %9.3f %9.3f %9.3f",
- n, stats->nr_failures, syscall_stats_entry->msecs, min, avg);
+ n, stats->nr_failures, entry->msecs, min, avg);
printed += fprintf(fp, " %9.3f %9.2f%%\n", max, pct);
if (trace->errno_summary && stats->nr_failures) {
@@ -4270,19 +4868,35 @@ static size_t thread__dump_stats(struct thread_trace *ttrace,
fprintf(fp, "\t\t\t\t%s: %d\n", perf_env__arch_strerrno(trace->host->env, e + 1), stats->errnos[e]);
}
}
+ lines++;
}
+
+ if (trace->max_summary && trace->max_summary <= lines)
+ break;
}
- resort_rb__delete(syscall_stats);
+ free(entries);
printed += fprintf(fp, "\n\n");
return printed;
}
+static size_t thread__dump_stats(struct thread_trace *ttrace,
+ struct trace *trace, int e_machine, FILE *fp)
+{
+ return syscall__dump_stats(trace, e_machine, fp, ttrace->syscall_stats);
+}
+
+static size_t system__dump_stats(struct trace *trace, int e_machine, FILE *fp)
+{
+ return syscall__dump_stats(trace, e_machine, fp, trace->syscall_stats);
+}
+
static size_t trace__fprintf_thread(FILE *fp, struct thread *thread, struct trace *trace)
{
size_t printed = 0;
struct thread_trace *ttrace = thread__priv(thread);
+ int e_machine = thread__e_machine(thread, trace->host);
double ratio;
if (ttrace == NULL)
@@ -4302,7 +4916,7 @@ static size_t trace__fprintf_thread(FILE *fp, struct thread *thread, struct trac
else if (fputc('\n', fp) != EOF)
++printed;
- printed += thread__dump_stats(ttrace, trace, fp);
+ printed += thread__dump_stats(ttrace, trace, e_machine, fp);
return printed;
}
@@ -4312,34 +4926,60 @@ static unsigned long thread__nr_events(struct thread_trace *ttrace)
return ttrace ? ttrace->nr_events : 0;
}
-DEFINE_RESORT_RB(threads,
- (thread__nr_events(thread__priv(a->thread)) <
- thread__nr_events(thread__priv(b->thread))),
- struct thread *thread;
-)
+static int trace_nr_events_cmp(void *priv __maybe_unused,
+ const struct list_head *la,
+ const struct list_head *lb)
{
- entry->thread = rb_entry(nd, struct thread_rb_node, rb_node)->thread;
+ struct thread_list *a = list_entry(la, struct thread_list, list);
+ struct thread_list *b = list_entry(lb, struct thread_list, list);
+ unsigned long a_nr_events = thread__nr_events(thread__priv(a->thread));
+ unsigned long b_nr_events = thread__nr_events(thread__priv(b->thread));
+
+ if (a_nr_events != b_nr_events)
+ return a_nr_events < b_nr_events ? -1 : 1;
+
+	/* Identical number of events, place smaller tids first. */
+ return thread__tid(a->thread) < thread__tid(b->thread)
+ ? -1
+ : (thread__tid(a->thread) > thread__tid(b->thread) ? 1 : 0);
}
static size_t trace__fprintf_thread_summary(struct trace *trace, FILE *fp)
{
- size_t printed = trace__fprintf_threads_header(fp);
- struct rb_node *nd;
- int i;
-
- for (i = 0; i < THREADS__TABLE_SIZE; i++) {
- DECLARE_RESORT_RB_MACHINE_THREADS(threads, trace->host, i);
+ size_t printed = trace__fprintf_summary_header(fp);
+ LIST_HEAD(threads);
- if (threads == NULL) {
- fprintf(fp, "%s", "Error sorting output by nr_events!\n");
- return 0;
- }
+ if (machine__thread_list(trace->host, &threads) == 0) {
+ struct thread_list *pos;
- resort_rb__for_each_entry(nd, threads)
- printed += trace__fprintf_thread(fp, threads_entry->thread, trace);
+ list_sort(NULL, &threads, trace_nr_events_cmp);
- resort_rb__delete(threads);
+ list_for_each_entry(pos, &threads, list)
+ printed += trace__fprintf_thread(fp, pos->thread, trace);
}
+ thread_list__delete(&threads);
+ return printed;
+}
+
+static size_t trace__fprintf_total_summary(struct trace *trace, FILE *fp)
+{
+ size_t printed = trace__fprintf_summary_header(fp);
+
+ printed += fprintf(fp, " total, ");
+ printed += fprintf(fp, "%lu events", trace->nr_events);
+
+ if (trace->pfmaj)
+ printed += fprintf(fp, ", %lu majfaults", trace->pfmaj);
+ if (trace->pfmin)
+ printed += fprintf(fp, ", %lu minfaults", trace->pfmin);
+ if (trace->sched)
+ printed += fprintf(fp, ", %.3f msec\n", trace->runtime_ms);
+ else if (fputc('\n', fp) != EOF)
+ ++printed;
+
+ /* TODO: get all system e_machines. */
+ printed += system__dump_stats(trace, EM_HOST, fp);
+
return printed;
}
@@ -4436,47 +5076,62 @@ static void evsel__set_syscall_arg_fmt(struct evsel *evsel, const char *name)
const struct syscall_fmt *scfmt = syscall_fmt__find(name);
if (scfmt) {
- int skip = 0;
+ const struct tep_event *tp_format = evsel__tp_format(evsel);
+
+ if (tp_format) {
+ int skip = 0;
- if (strcmp(evsel->tp_format->format.fields->name, "__syscall_nr") == 0 ||
- strcmp(evsel->tp_format->format.fields->name, "nr") == 0)
- ++skip;
+ if (strcmp(tp_format->format.fields->name, "__syscall_nr") == 0 ||
+ strcmp(tp_format->format.fields->name, "nr") == 0)
+ ++skip;
- memcpy(fmt + skip, scfmt->arg, (evsel->tp_format->format.nr_fields - skip) * sizeof(*fmt));
+ memcpy(fmt + skip, scfmt->arg,
+ (tp_format->format.nr_fields - skip) * sizeof(*fmt));
+ }
}
}
}
-static int evlist__set_syscall_tp_fields(struct evlist *evlist)
+static int evlist__set_syscall_tp_fields(struct evlist *evlist, bool *use_btf)
{
struct evsel *evsel;
evlist__for_each_entry(evlist, evsel) {
- if (evsel->priv || !evsel->tp_format)
+ const struct tep_event *tp_format;
+
+ if (evsel->priv)
+ continue;
+
+ tp_format = evsel__tp_format(evsel);
+ if (!tp_format)
continue;
- if (strcmp(evsel->tp_format->system, "syscalls")) {
- evsel__init_tp_arg_scnprintf(evsel);
+ if (strcmp(tp_format->system, "syscalls")) {
+ evsel__init_tp_arg_scnprintf(evsel, use_btf);
continue;
}
if (evsel__init_syscall_tp(evsel))
return -1;
- if (!strncmp(evsel->tp_format->name, "sys_enter_", 10)) {
+ if (!strncmp(tp_format->name, "sys_enter_", 10)) {
struct syscall_tp *sc = __evsel__syscall_tp(evsel);
if (__tp_field__init_ptr(&sc->args, sc->id.offset + sizeof(u64)))
return -1;
- evsel__set_syscall_arg_fmt(evsel, evsel->tp_format->name + sizeof("sys_enter_") - 1);
- } else if (!strncmp(evsel->tp_format->name, "sys_exit_", 9)) {
+ evsel__set_syscall_arg_fmt(evsel,
+ tp_format->name + sizeof("sys_enter_") - 1);
+ } else if (!strncmp(tp_format->name, "sys_exit_", 9)) {
struct syscall_tp *sc = __evsel__syscall_tp(evsel);
- if (__tp_field__init_uint(&sc->ret, sizeof(u64), sc->id.offset + sizeof(u64), evsel->needs_swap))
+ if (__tp_field__init_uint(&sc->ret, sizeof(u64),
+ sc->id.offset + sizeof(u64),
+ evsel->needs_swap))
return -1;
- evsel__set_syscall_arg_fmt(evsel, evsel->tp_format->name + sizeof("sys_exit_") - 1);
+ evsel__set_syscall_arg_fmt(evsel,
+ tp_format->name + sizeof("sys_exit_") - 1);
}
}
@@ -4515,8 +5170,9 @@ static int trace__parse_events_option(const struct option *opt, const char *str,
*sep = '\0';
list = 0;
- if (syscalltbl__id(trace->sctbl, s) >= 0 ||
- syscalltbl__strglobmatch_first(trace->sctbl, s, &idx) >= 0) {
+ /* TODO: support for more than just perf binary machine type syscalls. */
+ if (syscalltbl__id(EM_HOST, s) >= 0 ||
+ syscalltbl__strglobmatch_first(EM_HOST, s, &idx) >= 0) {
list = 1;
goto do_concat;
}
@@ -4599,6 +5255,25 @@ static int trace__parse_cgroups(const struct option *opt, const char *str, int u
return 0;
}
+static int trace__parse_summary_mode(const struct option *opt, const char *str,
+ int unset __maybe_unused)
+{
+ struct trace *trace = opt->value;
+
+ if (!strcmp(str, "thread")) {
+ trace->summary_mode = SUMMARY__BY_THREAD;
+ } else if (!strcmp(str, "total")) {
+ trace->summary_mode = SUMMARY__BY_TOTAL;
+ } else if (!strcmp(str, "cgroup")) {
+ trace->summary_mode = SUMMARY__BY_CGROUP;
+ } else {
+ pr_err("Unknown summary mode: %s\n", str);
+ return -1;
+ }
+
+ return 0;
+}
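
trace__parse_summary_mode() maps the --summary-mode string onto the SUMMARY__BY_* enum with a strcmp chain. An equivalent table-driven lookup, sketched outside perf's option machinery (the enum names are taken from this patch, everything else is illustrative):

	#include <stdio.h>
	#include <string.h>

	enum summary_mode { SUMMARY__NONE, SUMMARY__BY_THREAD, SUMMARY__BY_TOTAL, SUMMARY__BY_CGROUP };

	static const struct {
		const char *name;
		enum summary_mode mode;
	} summary_modes[] = {
		{ "thread", SUMMARY__BY_THREAD },
		{ "total",  SUMMARY__BY_TOTAL  },
		{ "cgroup", SUMMARY__BY_CGROUP },
	};

	static int parse_summary_mode(const char *str, enum summary_mode *out)
	{
		for (size_t i = 0; i < sizeof(summary_modes) / sizeof(summary_modes[0]); i++) {
			if (!strcmp(str, summary_modes[i].name)) {
				*out = summary_modes[i].mode;
				return 0;
			}
		}
		fprintf(stderr, "Unknown summary mode: %s\n", str);
		return -1;
	}

	int main(void)
	{
		enum summary_mode mode = SUMMARY__NONE;

		return parse_summary_mode("total", &mode) == 0 && mode == SUMMARY__BY_TOTAL ? 0 : 1;
	}

Either shape works; the table form only pays off once the list of modes grows.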
+
static int trace__config(const char *var, const char *value, void *arg)
{
struct trace *trace = arg;
@@ -4645,30 +5320,23 @@ out:
static void trace__exit(struct trace *trace)
{
- int i;
-
+ thread__zput(trace->current);
strlist__delete(trace->ev_qualifier);
zfree(&trace->ev_qualifier_ids.entries);
if (trace->syscalls.table) {
- for (i = 0; i <= trace->sctbl->syscalls.max_id; i++)
- syscall__exit(&trace->syscalls.table[i]);
+ for (size_t i = 0; i < trace->syscalls.table_size; i++)
+ syscall__delete(trace->syscalls.table[i]);
zfree(&trace->syscalls.table);
}
- syscalltbl__delete(trace->sctbl);
zfree(&trace->perfconfig_events);
-}
-
-#ifdef HAVE_BPF_SKEL
-static int bpf__setup_bpf_output(struct evlist *evlist)
-{
- int err = parse_event(evlist, "bpf-output/no-inherit=1,name=__augmented_syscalls__/");
-
- if (err)
- pr_debug("ERROR: failed to create the \"__augmented_syscalls__\" bpf-output event\n");
-
- return err;
-}
+ evlist__delete(trace->evlist);
+ trace->evlist = NULL;
+ ordered_events__free(&trace->oe.data);
+#ifdef HAVE_LIBBPF_SUPPORT
+ btf__free(trace->btf);
+ trace->btf = NULL;
#endif
+}
int cmd_trace(int argc, const char **argv)
{
@@ -4682,7 +5350,6 @@ int cmd_trace(int argc, const char **argv)
struct trace trace = {
.opts = {
.target = {
- .uid = UINT_MAX,
.uses_mmap = true,
},
.user_freq = UINT_MAX,
@@ -4729,8 +5396,7 @@ int cmd_trace(int argc, const char **argv)
"child tasks do not inherit counters"),
OPT_CALLBACK('m', "mmap-pages", &trace.opts.mmap_pages, "pages",
"number of mmap data pages", evlist__parse_mmap_pages),
- OPT_STRING('u', "uid", &trace.opts.target.uid_str, "user",
- "user to profile"),
+ OPT_STRING('u', "uid", &trace.uid_str, "user", "user to profile"),
OPT_CALLBACK(0, "duration", &trace, "float",
"show only events with duration > N.M ms",
trace__set_duration),
@@ -4746,6 +5412,9 @@ int cmd_trace(int argc, const char **argv)
"Show all syscalls and summary with statistics"),
OPT_BOOLEAN(0, "errno-summary", &trace.errno_summary,
"Show errno stats per syscall, use with -s or -S"),
+ OPT_CALLBACK(0, "summary-mode", &trace, "mode",
+ "How to show summary: select thread (default), total or cgroup",
+ trace__parse_summary_mode),
OPT_CALLBACK_DEFAULT('F', "pf", &trace.trace_pgfaults, "all|maj|min",
"Trace pagefaults", parse_pagefaults, "maj"),
OPT_BOOLEAN(0, "syscalls", &trace.trace_syscalls, "Trace syscalls"),
@@ -4777,6 +5446,11 @@ int cmd_trace(int argc, const char **argv)
OPT_INTEGER('D', "delay", &trace.opts.target.initial_delay,
"ms to wait before starting measurement after program "
"start"),
+	OPT_BOOLEAN(0, "force-btf", &trace.force_btf, "Prefer btf_dump general pretty printer "
+ "to customized ones"),
+ OPT_BOOLEAN(0, "bpf-summary", &trace.summary_bpf, "Summary syscall stats in BPF"),
+ OPT_INTEGER(0, "max-summary", &trace.max_summary,
+ "Max number of entries in the summary."),
OPTS_EVSWITCH(&trace.evswitch),
OPT_END()
};
@@ -4797,10 +5471,12 @@ int cmd_trace(int argc, const char **argv)
sigchld_act.sa_sigaction = sighandler_chld;
sigaction(SIGCHLD, &sigchld_act, NULL);
+ ordered_events__init(&trace.oe.data, ordered_events__deliver_event, &trace);
+ ordered_events__set_copy_on_queue(&trace.oe.data, true);
+
trace.evlist = evlist__new();
- trace.sctbl = syscalltbl__new();
- if (trace.evlist == NULL || trace.sctbl == NULL) {
+ if (trace.evlist == NULL) {
pr_err("Not enough memory to run!\n");
err = -ENOMEM;
goto out;
@@ -4860,46 +5536,35 @@ int cmd_trace(int argc, const char **argv)
"cgroup monitoring only available in system-wide mode");
}
-#ifdef HAVE_BPF_SKEL
if (!trace.trace_syscalls)
goto skip_augmentation;
- trace.skel = augmented_raw_syscalls_bpf__open();
- if (!trace.skel) {
- pr_debug("Failed to open augmented syscalls BPF skeleton");
- } else {
- /*
- * Disable attaching the BPF programs except for sys_enter and
- * sys_exit that tail call into this as necessary.
- */
- struct bpf_program *prog;
+ if ((argc >= 1) && (strcmp(argv[0], "record") == 0)) {
+ pr_debug("Syscall augmentation fails with record, disabling augmentation");
+ goto skip_augmentation;
+ }
- bpf_object__for_each_program(prog, trace.skel->obj) {
- if (prog != trace.skel->progs.sys_enter && prog != trace.skel->progs.sys_exit)
- bpf_program__set_autoattach(prog, /*autoattach=*/false);
+ if (trace.summary_bpf) {
+ if (!trace.opts.target.system_wide) {
+ /* TODO: Add filters in the BPF to support other targets. */
+ pr_err("Error: --bpf-summary only works for system-wide mode.\n");
+ goto out;
}
+ if (trace.summary_only)
+ goto skip_augmentation;
+ }
- err = augmented_raw_syscalls_bpf__load(trace.skel);
+ err = augmented_syscalls__prepare();
+ if (err < 0)
+ goto skip_augmentation;
- if (err < 0) {
- libbpf_strerror(err, bf, sizeof(bf));
- pr_debug("Failed to load augmented syscalls BPF skeleton: %s\n", bf);
- } else {
- augmented_raw_syscalls_bpf__attach(trace.skel);
- trace__add_syscall_newtp(&trace);
- }
- }
+ trace__add_syscall_newtp(&trace);
+
+ err = augmented_syscalls__create_bpf_output(trace.evlist);
+ if (err == 0)
+ trace.syscalls.events.bpf_output = evlist__last(trace.evlist);
- err = bpf__setup_bpf_output(trace.evlist);
- if (err) {
- libbpf_strerror(err, bf, sizeof(bf));
- pr_err("ERROR: Setup BPF output event failed: %s\n", bf);
- goto out;
- }
- trace.syscalls.events.bpf_output = evlist__last(trace.evlist);
- assert(!strcmp(evsel__name(trace.syscalls.events.bpf_output), "__augmented_syscalls__"));
skip_augmentation:
-#endif
err = -1;
if (trace.trace_pgfaults) {
@@ -4929,16 +5594,16 @@ skip_augmentation:
}
if (trace.evlist->core.nr_entries > 0) {
+ bool use_btf = false;
+
evlist__set_default_evsel_handler(trace.evlist, trace__event_handler);
- if (evlist__set_syscall_tp_fields(trace.evlist)) {
+ if (evlist__set_syscall_tp_fields(trace.evlist, &use_btf)) {
perror("failed to set syscalls:* tracepoint fields");
goto out;
}
- }
- if (trace.sort_events) {
- ordered_events__init(&trace.oe.data, ordered_events__deliver_event, &trace);
- ordered_events__set_copy_on_queue(&trace.oe.data, true);
+ if (use_btf)
+ trace__load_vmlinux_btf(&trace);
}
/*
@@ -4954,7 +5619,7 @@ skip_augmentation:
*/
if (trace.syscalls.events.bpf_output) {
evlist__for_each_entry(trace.evlist, evsel) {
- bool raw_syscalls_sys_exit = strcmp(evsel__name(evsel), "raw_syscalls:sys_exit") == 0;
+ bool raw_syscalls_sys_exit = evsel__name_is(evsel, "raw_syscalls:sys_exit");
if (raw_syscalls_sys_exit) {
trace.raw_augmented_syscalls = true;
@@ -5018,8 +5683,10 @@ init_augmented_syscall_tp:
}
}
- if ((argc >= 1) && (strcmp(argv[0], "record") == 0))
- return trace__record(&trace, argc-1, &argv[1]);
+ if ((argc >= 1) && (strcmp(argv[0], "record") == 0)) {
+ err = trace__record(&trace, argc-1, &argv[1]);
+ goto out;
+ }
/* Using just --errno-summary will trigger --summary */
if (trace.errno_summary && !trace.summary && !trace.summary_only)
@@ -5029,6 +5696,19 @@ init_augmented_syscall_tp:
if (trace.summary_only)
trace.summary = trace.summary_only;
+ /* Keep exited threads, otherwise information might be lost for summary */
+ if (trace.summary) {
+ symbol_conf.keep_exited_threads = true;
+ if (trace.summary_mode == SUMMARY__NONE)
+ trace.summary_mode = SUMMARY__BY_THREAD;
+
+ if (!trace.summary_bpf && trace.summary_mode == SUMMARY__BY_CGROUP) {
+ pr_err("Error: --summary-mode=cgroup only works with --bpf-summary\n");
+ err = -EINVAL;
+ goto out;
+ }
+ }
+
if (output_name != NULL) {
err = trace__open_output(&trace, output_name);
if (err < 0) {
@@ -5048,11 +5728,19 @@ init_augmented_syscall_tp:
goto out_close;
}
- err = target__parse_uid(&trace.opts.target);
- if (err) {
- target__strerror(&trace.opts.target, err, bf, sizeof(bf));
- fprintf(trace.output, "%s", bf);
- goto out_close;
+ if (trace.uid_str) {
+ uid_t uid = parse_uid(trace.uid_str);
+
+ if (uid == UINT_MAX) {
+ ui__error("Invalid User: %s", trace.uid_str);
+ err = -EINVAL;
+ goto out_close;
+ }
+ err = parse_uid_filter(trace.evlist, uid);
+ if (err)
+ goto out_close;
+
+ trace.opts.target.system_wide = true;
}
if (!argc && target__none(&trace.opts.target))
@@ -5068,8 +5756,6 @@ out_close:
fclose(trace.output);
out:
trace__exit(&trace);
-#ifdef HAVE_BPF_SKEL
- augmented_raw_syscalls_bpf__destroy(trace.skel);
-#endif
+ augmented_syscalls__cleanup();
return err;
}
diff --git a/tools/perf/builtin-version.c b/tools/perf/builtin-version.c
index ac20c2b9bbc2..10f25c6705b1 100644
--- a/tools/perf/builtin-version.c
+++ b/tools/perf/builtin-version.c
@@ -26,62 +26,10 @@ static const char * const version_usage[] = {
NULL
};
-static void on_off_print(const char *status)
-{
- printf("[ ");
-
- if (!strcmp(status, "OFF"))
- color_fprintf(stdout, PERF_COLOR_RED, "%-3s", status);
- else
- color_fprintf(stdout, PERF_COLOR_GREEN, "%-3s", status);
-
- printf(" ]");
-}
-
-static void status_print(const char *name, const char *macro,
- const char *status)
-{
- printf("%22s: ", name);
- on_off_print(status);
- printf(" # %s\n", macro);
-}
-
-#define STATUS(__d, __m) \
-do { \
- if (IS_BUILTIN(__d)) \
- status_print(#__m, #__d, "on"); \
- else \
- status_print(#__m, #__d, "OFF"); \
-} while (0)
-
static void library_status(void)
{
- STATUS(HAVE_DWARF_SUPPORT, dwarf);
- STATUS(HAVE_DWARF_GETLOCATIONS_SUPPORT, dwarf_getlocations);
-#ifndef HAVE_SYSCALL_TABLE_SUPPORT
- STATUS(HAVE_LIBAUDIT_SUPPORT, libaudit);
-#endif
- STATUS(HAVE_SYSCALL_TABLE_SUPPORT, syscall_table);
- STATUS(HAVE_LIBBFD_SUPPORT, libbfd);
- STATUS(HAVE_DEBUGINFOD_SUPPORT, debuginfod);
- STATUS(HAVE_LIBELF_SUPPORT, libelf);
- STATUS(HAVE_LIBNUMA_SUPPORT, libnuma);
- STATUS(HAVE_LIBNUMA_SUPPORT, numa_num_possible_cpus);
- STATUS(HAVE_LIBPERL_SUPPORT, libperl);
- STATUS(HAVE_LIBPYTHON_SUPPORT, libpython);
- STATUS(HAVE_SLANG_SUPPORT, libslang);
- STATUS(HAVE_LIBCRYPTO_SUPPORT, libcrypto);
- STATUS(HAVE_LIBUNWIND_SUPPORT, libunwind);
- STATUS(HAVE_DWARF_SUPPORT, libdw-dwarf-unwind);
- STATUS(HAVE_ZLIB_SUPPORT, zlib);
- STATUS(HAVE_LZMA_SUPPORT, lzma);
- STATUS(HAVE_AUXTRACE_SUPPORT, get_cpuid);
- STATUS(HAVE_LIBBPF_SUPPORT, bpf);
- STATUS(HAVE_AIO_SUPPORT, aio);
- STATUS(HAVE_ZSTD_SUPPORT, zstd);
- STATUS(HAVE_LIBPFM, libpfm4);
- STATUS(HAVE_LIBTRACEEVENT, libtraceevent);
- STATUS(HAVE_BPF_SKEL, bpf_skeletons);
+ for (int i = 0; supported_features[i].name; ++i)
+ feature_status__printf(&supported_features[i]);
}
int cmd_version(int argc, const char **argv)
diff --git a/tools/perf/builtin.h b/tools/perf/builtin.h
index f2ab5bae2150..40c4078c295f 100644
--- a/tools/perf/builtin.h
+++ b/tools/perf/builtin.h
@@ -2,13 +2,27 @@
#ifndef BUILTIN_H
#define BUILTIN_H
+struct feature_status {
+ const char *name;
+ const char *macro;
+ const char *tip;
+ int is_builtin;
+};
+
+extern struct feature_status supported_features[];
+
+void feature_status__printf(const struct feature_status *feature);
+
+struct cmdnames;
+
void list_common_cmds_help(void);
-const char *help_unknown_cmd(const char *cmd);
+const char *help_unknown_cmd(const char *cmd, struct cmdnames *main_cmds);
int cmd_annotate(int argc, const char **argv);
int cmd_bench(int argc, const char **argv);
int cmd_buildid_cache(int argc, const char **argv);
int cmd_buildid_list(int argc, const char **argv);
+int cmd_check(int argc, const char **argv);
int cmd_config(int argc, const char **argv);
int cmd_c2c(int argc, const char **argv);
int cmd_diff(int argc, const char **argv);
@@ -37,6 +51,4 @@ int cmd_ftrace(int argc, const char **argv);
int cmd_daemon(int argc, const char **argv);
int cmd_kwork(int argc, const char **argv);
-int find_scripts(char **scripts_array, char **scripts_path_array, int num,
- int pathlen);
#endif
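
builtin.h now exposes a feature_status table plus feature_status__printf() so that both perf version and the new perf check can report build-time features from one place. A rough sketch of how such a table can be consumed; the two entries and the output format are assumptions for illustration, not the real supported_features[] contents:

	#include <stdio.h>

	struct feature_status {
		const char *name;
		const char *macro;
		const char *tip;
		int is_builtin;
	};

	/* Hypothetical entries; the real table is derived from the perf build configuration. */
	static const struct feature_status features[] = {
		{ "libtraceevent", "HAVE_LIBTRACEEVENT",  "install libtraceevent devel files", 1 },
		{ "libbpf",        "HAVE_LIBBPF_SUPPORT", "install libbpf devel files",        0 },
		{ NULL, NULL, NULL, 0 },
	};

	static void feature_print(const struct feature_status *f)
	{
		/* Roughly mirrors the STATUS()/status_print() layout removed above. */
		printf("%22s: [ %-3s ]  # %s\n", f->name, f->is_builtin ? "on" : "OFF", f->macro);
	}

	int main(void)
	{
		for (int i = 0; features[i].name; i++)
			feature_print(&features[i]);
		return 0;
	}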
diff --git a/tools/perf/check-header_ignore_hunks/lib/list_sort.c b/tools/perf/check-header_ignore_hunks/lib/list_sort.c
new file mode 100644
index 000000000000..b7316d29857d
--- /dev/null
+++ b/tools/perf/check-header_ignore_hunks/lib/list_sort.c
@@ -0,0 +1,24 @@
+@@ -50,6 +50,7 @@
+ struct list_head *a, struct list_head *b)
+ {
+ struct list_head *tail = head;
++ u8 count = 0;
+
+ for (;;) {
+ /* if equal, take 'a' -- important for sort stability */
+@@ -75,6 +76,15 @@
+ /* Finish linking remainder of list b on to tail */
+ tail->next = b;
+ do {
++ /*
++ * If the merge is highly unbalanced (e.g. the input is
++ * already sorted), this loop may run many iterations.
++ * Continue callbacks to the client even though no
++ * element comparison is needed, so the client's cmp()
++ * routine can invoke cond_resched() periodically.
++ */
++ if (unlikely(!++count))
++ cmp(priv, b, b);
+ b->prev = tail;
+ tail = b;
+ b = b->next;
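
The hunk kept here documents an unsigned 8-bit counter that wraps every 256 merge steps, so the caller's cmp() is still invoked periodically (giving it a chance to call cond_resched()) even when no real comparison is needed. The wrap trick in isolation, as a user-space sketch with a stand-in hook:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for the cmp()-driven cond_resched() in the kernel code above. */
	static void periodic_hook(void)
	{
		puts("hook fired");
	}

	int main(void)
	{
		uint8_t count = 0;

		for (unsigned long i = 0; i < 1000; i++) {
			/* ++count wraps to 0 every 256 iterations, firing the hook cheaply. */
			if (!++count)
				periodic_hook();
		}
		return 0;
	}

Over 1000 iterations the hook fires three times, roughly once per 256 steps, which is the granularity the kernel comment is after.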
diff --git a/tools/perf/check-headers.sh b/tools/perf/check-headers.sh
index 66ba33dbcef2..e0537f275da2 100755
--- a/tools/perf/check-headers.sh
+++ b/tools/perf/check-headers.sh
@@ -4,43 +4,39 @@
YELLOW='\033[0;33m'
NC='\033[0m' # No Color
-declare -a FILES
-FILES=(
+declare -a FILES=(
"include/uapi/linux/const.h"
"include/uapi/drm/drm.h"
"include/uapi/drm/i915_drm.h"
+ "include/uapi/linux/bits.h"
"include/uapi/linux/fadvise.h"
- "include/uapi/linux/fcntl.h"
- "include/uapi/linux/fs.h"
"include/uapi/linux/fscrypt.h"
+ "include/uapi/linux/genetlink.h"
+ "include/uapi/linux/if_addr.h"
+ "include/uapi/linux/in.h"
"include/uapi/linux/kcmp.h"
"include/uapi/linux/kvm.h"
- "include/uapi/linux/in.h"
- "include/uapi/linux/mount.h"
- "include/uapi/linux/openat2.h"
+ "include/uapi/linux/neighbour.h"
+ "include/uapi/linux/netfilter.h"
+ "include/uapi/linux/netfilter_arp.h"
"include/uapi/linux/perf_event.h"
- "include/uapi/linux/prctl.h"
- "include/uapi/linux/sched.h"
+ "include/uapi/linux/rtnetlink.h"
"include/uapi/linux/seccomp.h"
"include/uapi/linux/stat.h"
- "include/uapi/linux/usbdevice_fs.h"
- "include/uapi/linux/vhost.h"
- "include/uapi/sound/asound.h"
"include/linux/bits.h"
"include/vdso/bits.h"
+ "include/linux/cfi_types.h"
"include/linux/const.h"
"include/vdso/const.h"
+ "include/vdso/unaligned.h"
+ "include/linux/gfp_types.h"
"include/linux/hash.h"
"include/linux/list-sort.h"
"include/uapi/linux/hw_breakpoint.h"
- "arch/x86/include/asm/disabled-features.h"
- "arch/x86/include/asm/required-features.h"
"arch/x86/include/asm/cpufeatures.h"
"arch/x86/include/asm/inat_types.h"
"arch/x86/include/asm/emulate_prefix.h"
- "arch/x86/include/asm/irq_vectors.h"
"arch/x86/include/asm/msr-index.h"
- "arch/x86/include/uapi/asm/prctl.h"
"arch/x86/lib/x86-opcode-map.txt"
"arch/x86/tools/gen-insn-attr-x86.awk"
"arch/arm/include/uapi/asm/perf_regs.h"
@@ -51,15 +47,12 @@ FILES=(
"arch/s390/include/uapi/asm/perf_regs.h"
"arch/x86/include/uapi/asm/perf_regs.h"
"arch/x86/include/uapi/asm/kvm.h"
- "arch/x86/include/uapi/asm/kvm_perf.h"
"arch/x86/include/uapi/asm/svm.h"
"arch/x86/include/uapi/asm/unistd.h"
"arch/x86/include/uapi/asm/vmx.h"
"arch/powerpc/include/uapi/asm/kvm.h"
"arch/s390/include/uapi/asm/kvm.h"
- "arch/s390/include/uapi/asm/kvm_perf.h"
"arch/s390/include/uapi/asm/sie.h"
- "arch/arm/include/uapi/asm/kvm.h"
"arch/arm64/include/uapi/asm/kvm.h"
"arch/arm64/include/uapi/asm/unistd.h"
"arch/alpha/include/uapi/asm/errno.h"
@@ -80,10 +73,10 @@ FILES=(
"include/uapi/asm-generic/ioctls.h"
"include/uapi/asm-generic/mman-common.h"
"include/uapi/asm-generic/unistd.h"
+ "scripts/syscall.tbl"
)
-declare -a SYNC_CHECK_FILES
-SYNC_CHECK_FILES=(
+declare -a SYNC_CHECK_FILES=(
"arch/x86/include/asm/inat.h"
"arch/x86/include/asm/insn.h"
"arch/x86/lib/inat.c"
@@ -95,9 +88,19 @@ SYNC_CHECK_FILES=(
# tables that then gets included in .c files for things like id->string syscall
# tables (and the reverse lookup as well: string -> id)
-declare -a BEAUTY_FILES
-BEAUTY_FILES=(
+declare -a BEAUTY_FILES=(
+ "arch/x86/include/asm/irq_vectors.h"
+ "arch/x86/include/uapi/asm/prctl.h"
"include/linux/socket.h"
+ "include/uapi/linux/fcntl.h"
+ "include/uapi/linux/fs.h"
+ "include/uapi/linux/mount.h"
+ "include/uapi/linux/prctl.h"
+ "include/uapi/linux/sched.h"
+ "include/uapi/linux/stat.h"
+ "include/uapi/linux/usbdevice_fs.h"
+ "include/uapi/linux/vhost.h"
+ "include/uapi/sound/asound.h"
)
declare -a FAILURES
@@ -135,6 +138,30 @@ beauty_check () {
check_2 "tools/perf/trace/beauty/$file" "$file" "$@"
}
+check_ignore_some_hunks () {
+ orig_file="$1"
+ tools_file="tools/$orig_file"
+ hunks_to_ignore="tools/perf/check-header_ignore_hunks/$orig_file"
+
+ if [ ! -f "$hunks_to_ignore" ]; then
+ echo "$hunks_to_ignore not found. Skipping $orig_file check."
+ FAILURES+=(
+ "$tools_file $orig_file"
+ )
+ return
+ fi
+
+ cmd="diff -u \"$tools_file\" \"$orig_file\" | grep -vf \"$hunks_to_ignore\" | wc -l | grep -qw 0"
+
+ if [ -f "$orig_file" ] && ! eval "$cmd"
+ then
+ FAILURES+=(
+ "$tools_file $orig_file"
+ )
+ fi
+}
+
+
# Check if we have the kernel headers (tools/perf/../../include), else
# we're probably on a detached tarball, so no point in trying to check
# differences.
@@ -160,21 +187,29 @@ done
# diff with extra ignore lines
check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))" -I"^#include <linux/cfi_types.h>"'
check arch/x86/lib/memset_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memset_\(erms\|orig\))"'
-check arch/x86/include/asm/amd-ibs.h '-I "^#include [<\"]\(asm/\)*msr-index.h"'
+check arch/x86/include/asm/amd/ibs.h '-I "^#include .*/msr-index.h"'
check arch/arm64/include/asm/cputype.h '-I "^#include [<\"]\(asm/\)*sysreg.h"'
-check include/asm-generic/unaligned.h '-I "^#include <linux/unaligned/packed_struct.h>" -I "^#include <asm/byteorder.h>" -I "^#pragma GCC diagnostic"'
+check include/linux/unaligned.h '-I "^#include <linux/unaligned/packed_struct.h>" -I "^#include <asm/byteorder.h>" -I "^#pragma GCC diagnostic"'
check include/uapi/asm-generic/mman.h '-I "^#include <\(uapi/\)*asm-generic/mman-common\(-tools\)*.h>"'
check include/uapi/linux/mman.h '-I "^#include <\(uapi/\)*asm/mman.h>"'
check include/linux/build_bug.h '-I "^#\(ifndef\|endif\)\( \/\/\)* static_assert$"'
check include/linux/ctype.h '-I "isdigit("'
check lib/ctype.c '-I "^EXPORT_SYMBOL" -I "^#include <linux/export.h>" -B'
-check lib/list_sort.c '-I "^#include <linux/bug.h>"'
# diff non-symmetric files
+check_2 tools/perf/arch/x86/entry/syscalls/syscall_32.tbl arch/x86/entry/syscalls/syscall_32.tbl
check_2 tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl
check_2 tools/perf/arch/powerpc/entry/syscalls/syscall.tbl arch/powerpc/kernel/syscalls/syscall.tbl
check_2 tools/perf/arch/s390/entry/syscalls/syscall.tbl arch/s390/kernel/syscalls/syscall.tbl
check_2 tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl arch/mips/kernel/syscalls/syscall_n64.tbl
+check_2 tools/perf/arch/arm/entry/syscalls/syscall.tbl arch/arm/tools/syscall.tbl
+check_2 tools/perf/arch/sh/entry/syscalls/syscall.tbl arch/sh/kernel/syscalls/syscall.tbl
+check_2 tools/perf/arch/sparc/entry/syscalls/syscall.tbl arch/sparc/kernel/syscalls/syscall.tbl
+check_2 tools/perf/arch/xtensa/entry/syscalls/syscall.tbl arch/xtensa/kernel/syscalls/syscall.tbl
+check_2 tools/perf/arch/alpha/entry/syscalls/syscall.tbl arch/alpha/entry/syscalls/syscall.tbl
+check_2 tools/perf/arch/parisc/entry/syscalls/syscall.tbl arch/parisc/entry/syscalls/syscall.tbl
+check_2 tools/perf/arch/arm64/entry/syscalls/syscall_32.tbl arch/arm64/entry/syscalls/syscall_32.tbl
+check_2 tools/perf/arch/arm64/entry/syscalls/syscall_64.tbl arch/arm64/entry/syscalls/syscall_64.tbl
for i in "${BEAUTY_FILES[@]}"
do
@@ -185,6 +220,10 @@ done
check_2 tools/perf/util/hashmap.h tools/lib/bpf/hashmap.h
check_2 tools/perf/util/hashmap.c tools/lib/bpf/hashmap.c
+# Files with larger differences
+
+check_ignore_some_hunks lib/list_sort.c
+
cd tools/perf || exit
if [ ${#FAILURES[@]} -gt 0 ]
diff --git a/tools/perf/dlfilters/dlfilter-test-api-v0.c b/tools/perf/dlfilters/dlfilter-test-api-v0.c
index 4083b1abeaab..4ca2d7b2ea6c 100644
--- a/tools/perf/dlfilters/dlfilter-test-api-v0.c
+++ b/tools/perf/dlfilters/dlfilter-test-api-v0.c
@@ -220,7 +220,7 @@ static int check_sample(struct filter_data *d, const struct perf_dlfilter_sample
CHECK_SAMPLE(raw_callchain_nr);
CHECK(!sample->raw_callchain);
-#define EVENT_NAME "branches:"
+#define EVENT_NAME "branches"
CHECK(!strncmp(sample->event, EVENT_NAME, strlen(EVENT_NAME)));
return 0;
diff --git a/tools/perf/dlfilters/dlfilter-test-api-v2.c b/tools/perf/dlfilters/dlfilter-test-api-v2.c
index 32ff619e881c..00d73a16c4fd 100644
--- a/tools/perf/dlfilters/dlfilter-test-api-v2.c
+++ b/tools/perf/dlfilters/dlfilter-test-api-v2.c
@@ -235,7 +235,7 @@ static int check_sample(struct filter_data *d, const struct perf_dlfilter_sample
CHECK_SAMPLE(raw_callchain_nr);
CHECK(!sample->raw_callchain);
-#define EVENT_NAME "branches:"
+#define EVENT_NAME "branches"
CHECK(!strncmp(sample->event, EVENT_NAME, strlen(EVENT_NAME)));
return 0;
diff --git a/tools/perf/include/perf/perf_dlfilter.h b/tools/perf/include/perf/perf_dlfilter.h
index 16fc4568ac53..2d3540ed3c58 100644
--- a/tools/perf/include/perf/perf_dlfilter.h
+++ b/tools/perf/include/perf/perf_dlfilter.h
@@ -87,7 +87,7 @@ struct perf_dlfilter_al {
__u8 is_64_bit; /* Only valid if dso is not NULL */
__u8 is_kernel_ip; /* True if in kernel space */
__u32 buildid_size;
- __u8 *buildid;
+ const __u8 *buildid;
/* Below members are only populated by resolve_ip() */
__u8 filtered; /* True if this sample event will be filtered out */
const char *comm;
diff --git a/tools/perf/jvmti/libjvmti.c b/tools/perf/jvmti/libjvmti.c
index fcca275e5bf9..82514e6532b8 100644
--- a/tools/perf/jvmti/libjvmti.c
+++ b/tools/perf/jvmti/libjvmti.c
@@ -141,11 +141,11 @@ copy_class_filename(const char * class_sign, const char * file_name, char * resu
* Assume path name is class hierarchy, this is a common practice with Java programs
*/
if (*class_sign == 'L') {
- int j, i = 0;
+ size_t j, i = 0;
char *p = strrchr(class_sign, '/');
if (p) {
/* drop the 'L' prefix and copy up to the final '/' */
- for (i = 0; i < (p - class_sign); i++)
+ for (i = 0; i < (size_t)(p - class_sign); i++)
result[i] = class_sign[i+1];
}
/*
diff --git a/tools/perf/perf-archive.sh b/tools/perf/perf-archive.sh
index f94795794b36..7977e9b0a5ea 100755
--- a/tools/perf/perf-archive.sh
+++ b/tools/perf/perf-archive.sh
@@ -16,6 +16,13 @@ while [ $# -gt 0 ] ; do
elif [ $1 == "--unpack" ]; then
UNPACK=1
shift
+ elif [ $1 == "--exclude-buildids" ]; then
+ EXCLUDE_BUILDIDS="$2"
+ if [ ! -e "$EXCLUDE_BUILDIDS" ]; then
+ echo "Provided exclude-buildids file $EXCLUDE_BUILDIDS does not exist"
+ exit 1
+ fi
+ shift 2
else
PERF_DATA=$1
UNPACK_TAR=$1
@@ -34,7 +41,7 @@ if [ $UNPACK -eq 1 ]; then
TARGET=`find . -regex "\./perf.*\.tar\.bz2"`
TARGET_NUM=`echo -n "$TARGET" | grep -c '^'`
- if [ -z "$TARGET" -o $TARGET_NUM -gt 1 ]; then
+ if [ -z "$TARGET" ] || [ $TARGET_NUM -gt 1 ]; then
echo -e "Error: $TARGET_NUM files found for unpacking:\n$TARGET"
echo "Provide the requested file as an argument"
exit 1
@@ -86,11 +93,29 @@ fi
BUILDIDS=$(mktemp /tmp/perf-archive-buildids.XXXXXX)
-perf buildid-list -i $PERF_DATA --with-hits | grep -v "^ " > $BUILDIDS
-if [ ! -s $BUILDIDS ] ; then
- echo "perf archive: no build-ids found"
- rm $BUILDIDS || true
- exit 1
+#
+# EXCLUDE_BUILDIDS is an optional file that contains build-ids to be excluded from the
+# archive. It is a list of build-ids, one per line, without any leading or trailing spaces.
+# If the file is empty, all build-ids will be included in the archive. To create an exclude-
+# buildids file, you can use the following command:
+# perf buildid-list -i perf.data --with-hits | grep -v "^ " > exclude_buildids.txt
+# You can edit the file to remove the lines that you want to keep in the archive, then:
+# perf archive --exclude-buildids exclude_buildids.txt
+#
+if [ -s "$EXCLUDE_BUILDIDS" ]; then
+ perf buildid-list -i $PERF_DATA --with-hits | grep -v "^ " | grep -Fv -f $EXCLUDE_BUILDIDS > $BUILDIDS
+ if [ ! -s "$BUILDIDS" ] ; then
+ echo "perf archive: no build-ids found after applying exclude-buildids file"
+ rm $BUILDIDS || true
+ exit 1
+ fi
+else
+ perf buildid-list -i $PERF_DATA --with-hits | grep -v "^ " > $BUILDIDS
+ if [ ! -s "$BUILDIDS" ] ; then
+ echo "perf archive: no build-ids found"
+ rm $BUILDIDS || true
+ exit 1
+ fi
fi
MANIFEST=$(mktemp /tmp/perf-archive-manifest.XXXXXX)
diff --git a/tools/perf/perf-completion.sh b/tools/perf/perf-completion.sh
index f224d79b89e6..69cba3c170d5 100644
--- a/tools/perf/perf-completion.sh
+++ b/tools/perf/perf-completion.sh
@@ -108,6 +108,8 @@ __perf__ltrim_colon_completions()
__perfcomp ()
{
+ # Expansion of spaces to array is deliberate.
+ # shellcheck disable=SC2207
COMPREPLY=( $( compgen -W "$1" -- "$2" ) )
}
@@ -127,13 +129,13 @@ __perf_prev_skip_opts ()
let i=cword-1
cmds_=$($cmd $1 --list-cmds)
- prev_skip_opts=()
+ prev_skip_opts=""
while [ $i -ge 0 ]; do
- if [[ ${words[i]} == $1 ]]; then
+ if [[ ${words[i]} == "$1" ]]; then
return
fi
for cmd_ in $cmds_; do
- if [[ ${words[i]} == $cmd_ ]]; then
+ if [[ ${words[i]} == "$cmd_" ]]; then
prev_skip_opts=${words[i]}
return
fi
@@ -164,9 +166,10 @@ __perf_main ()
$prev_skip_opts == @(record|stat|top) ]]; then
local cur1=${COMP_WORDS[COMP_CWORD]}
- local raw_evts=$($cmd list --raw-dump hw sw cache tracepoint pmu sdt)
+ local raw_evts
local arr s tmp result cpu_evts
+ raw_evts=$($cmd list --raw-dump hw sw cache tracepoint pmu sdt)
# aarch64 doesn't have /sys/bus/event_source/devices/cpu/events
if [[ `uname -m` != aarch64 ]]; then
cpu_evts=$(ls /sys/bus/event_source/devices/cpu/events)
@@ -175,10 +178,12 @@ __perf_main ()
if [[ "$cur1" == */* && ${cur1#*/} =~ ^[A-Z] ]]; then
OLD_IFS="$IFS"
IFS=" "
+ # Expansion of spaces to array is deliberate.
+ # shellcheck disable=SC2206
arr=($raw_evts)
IFS="$OLD_IFS"
- for s in ${arr[@]}
+ for s in "${arr[@]}"
do
if [[ "$s" == *cpu/* ]]; then
tmp=${s#*cpu/}
@@ -200,11 +205,13 @@ __perf_main ()
fi
elif [[ $prev == @("--pfm-events") &&
$prev_skip_opts == @(record|stat|top) ]]; then
- local evts=$($cmd list --raw-dump pfm)
+ local evts
+ evts=$($cmd list --raw-dump pfm)
__perfcomp "$evts" "$cur"
elif [[ $prev == @("-M"|"--metrics") &&
$prev_skip_opts == @(stat) ]]; then
- local metrics=$($cmd list --raw-dump metric metricgroup)
+ local metrics
+ metrics=$($cmd list --raw-dump metric metricgroup)
__perfcomp "$metrics" "$cur"
else
# List subcommands for perf commands
@@ -278,6 +285,8 @@ if [[ -n ${ZSH_VERSION-} ]]; then
let cword=CURRENT-1
emulate ksh -c __perf_main
let _ret && _default && _ret=0
+ # _ret is only assigned 0 or 1, disable inaccurate analysis.
+ # shellcheck disable=SC2152
return _ret
}
diff --git a/tools/perf/perf.c b/tools/perf/perf.c
index 921bee0a6437..88c60ecf3395 100644
--- a/tools/perf/perf.c
+++ b/tools/perf/perf.c
@@ -18,6 +18,7 @@
#include <subcmd/run-command.h>
#include "util/parse-events.h"
#include <subcmd/parse-options.h>
+#include <subcmd/help.h>
#include "util/debug.h"
#include "util/event.h"
#include "util/util.h" // usage()
@@ -51,6 +52,7 @@ static struct cmd_struct commands[] = {
{ "archive", NULL, 0 },
{ "buildid-cache", cmd_buildid_cache, 0 },
{ "buildid-list", cmd_buildid_list, 0 },
+ { "check", cmd_check, 0 },
{ "config", cmd_config, 0 },
{ "c2c", cmd_c2c, 0 },
{ "diff", cmd_diff, 0 },
@@ -82,7 +84,7 @@ static struct cmd_struct commands[] = {
#endif
{ "kvm", cmd_kvm, 0 },
{ "test", cmd_test, 0 },
-#if defined(HAVE_LIBTRACEEVENT) && (defined(HAVE_LIBAUDIT_SUPPORT) || defined(HAVE_SYSCALL_TABLE_SUPPORT))
+#if defined(HAVE_LIBTRACEEVENT)
{ "trace", cmd_trace, 0 },
#endif
{ "inject", cmd_inject, 0 },
@@ -344,12 +346,9 @@ static int run_builtin(struct cmd_struct *p, int argc, const char **argv)
use_pager = 1;
commit_pager_choice();
- perf_env__init(&perf_env);
- perf_env__set_cmdline(&perf_env, argc, argv);
status = p->fn(argc, argv);
perf_config__exit();
exit_browser(status);
- perf_env__exit(&perf_env);
if (status)
return status & 0xff;
@@ -458,7 +457,7 @@ static int libperf_print(enum libperf_print_level level,
int main(int argc, const char **argv)
{
- int err;
+ int err, done_help = 0;
const char *cmd;
char sbuf[STRERR_BUFSIZE];
@@ -512,10 +511,6 @@ int main(int argc, const char **argv)
fprintf(stderr,
"trace command not available: missing libtraceevent devel package at build time.\n");
goto out;
-#elif !defined(HAVE_LIBAUDIT_SUPPORT) && !defined(HAVE_SYSCALL_TABLE_SUPPORT)
- fprintf(stderr,
- "trace command not available: missing audit-libs devel package at build time.\n");
- goto out;
#else
setup_path();
argv[0] = "trace";
@@ -540,8 +535,6 @@ int main(int argc, const char **argv)
}
cmd = argv[0];
- test_attr__init();
-
/*
* We use PATH to find perf commands, but we prepend some higher
* precedence paths: the "--exec-path" option, the PERF_EXEC_PATH
@@ -557,22 +550,32 @@ int main(int argc, const char **argv)
pthread__block_sigwinch();
while (1) {
- static int done_help;
-
run_argv(&argc, &argv);
if (errno != ENOENT)
break;
if (!done_help) {
- cmd = argv[0] = help_unknown_cmd(cmd);
+ struct cmdnames main_cmds = {};
+
+ for (unsigned int i = 0; i < ARRAY_SIZE(commands); i++) {
+ add_cmdname(&main_cmds,
+ commands[i].cmd,
+ strlen(commands[i].cmd));
+ }
+ cmd = argv[0] = help_unknown_cmd(cmd, &main_cmds);
+ clean_cmdnames(&main_cmds);
done_help = 1;
+ if (!cmd)
+ break;
} else
break;
}
- fprintf(stderr, "Failed to run command '%s': %s\n",
- cmd, str_error_r(errno, sbuf, sizeof(sbuf)));
+ if (cmd) {
+ fprintf(stderr, "Failed to run command '%s': %s\n",
+ cmd, str_error_r(errno, sbuf, sizeof(sbuf)));
+ }
out:
if (debug_fp)
fclose(debug_fp);
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index c004dd4e65a3..e004178472d9 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -2,9 +2,7 @@
#ifndef _PERF_PERF_H
#define _PERF_PERF_H
-#ifndef MAX_NR_CPUS
-#define MAX_NR_CPUS 2048
-#endif
+#define MAX_NR_CPUS 4096
enum perf_affinity {
PERF_AFFINITY_SYS = 0,
diff --git a/tools/perf/pmu-events/Build b/tools/perf/pmu-events/Build
index 1d18bb89402e..32f387d48908 100644
--- a/tools/perf/pmu-events/Build
+++ b/tools/perf/pmu-events/Build
@@ -11,6 +11,8 @@ METRIC_TEST_PY = pmu-events/metric_test.py
EMPTY_PMU_EVENTS_C = pmu-events/empty-pmu-events.c
PMU_EVENTS_C = $(OUTPUT)pmu-events/pmu-events.c
METRIC_TEST_LOG = $(OUTPUT)pmu-events/metric_test.log
+TEST_EMPTY_PMU_EVENTS_C = $(OUTPUT)pmu-events/test-empty-pmu-events.c
+EMPTY_PMU_EVENTS_TEST_LOG = $(OUTPUT)pmu-events/empty-pmu-events.log
ifeq ($(JEVENTS_ARCH),)
JEVENTS_ARCH=$(SRCARCH)
@@ -31,7 +33,38 @@ $(METRIC_TEST_LOG): $(METRIC_TEST_PY) $(METRIC_PY)
$(call rule_mkdir)
$(Q)$(call echo-cmd,test)$(PYTHON) $< 2> $@ || (cat $@ && false)
-$(PMU_EVENTS_C): $(JSON) $(JSON_TEST) $(JEVENTS_PY) $(METRIC_PY) $(METRIC_TEST_LOG)
+$(TEST_EMPTY_PMU_EVENTS_C): $(JSON) $(JSON_TEST) $(JEVENTS_PY) $(METRIC_PY) $(METRIC_TEST_LOG)
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,gen)$(PYTHON) $(JEVENTS_PY) none none pmu-events/arch $@
+
+$(EMPTY_PMU_EVENTS_TEST_LOG): $(EMPTY_PMU_EVENTS_C) $(TEST_EMPTY_PMU_EVENTS_C)
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)diff -u $^ 2> $@ || (cat $@ && false)
+
+ifdef MYPY
+ PMU_EVENTS_PY_TESTS := $(wildcard *.py)
+  PMU_EVENTS_MYPY_TEST_LOGS := $(PMU_EVENTS_PY_TESTS:%=%.mypy_log)
+else
+ PMU_EVENTS_MYPY_TEST_LOGS :=
+endif
+
+$(OUTPUT)%.mypy_log: %
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)mypy "$<" > $@ || (cat $@ && rm $@ && false)
+
+ifdef PYLINT
+ PMU_EVENTS_PY_TESTS := $(wildcard *.py)
+  PMU_EVENTS_PYLINT_TEST_LOGS := $(PMU_EVENTS_PY_TESTS:%=%.pylint_log)
+else
+ PMU_EVENTS_PYLINT_TEST_LOGS :=
+endif
+
+$(OUTPUT)%.pylint_log: %
+ $(call rule_mkdir)
+ $(Q)$(call echo-cmd,test)pylint "$<" > $@ || (cat $@ && rm $@ && false)
+
+$(PMU_EVENTS_C): $(JSON) $(JSON_TEST) $(JEVENTS_PY) $(METRIC_PY) $(METRIC_TEST_LOG) \
+ $(EMPTY_PMU_EVENTS_TEST_LOG) $(PMU_EVENTS_MYPY_TEST_LOGS) $(PMU_EVENTS_PYLINT_TEST_LOGS)
$(call rule_mkdir)
$(Q)$(call echo-cmd,gen)$(PYTHON) $(JEVENTS_PY) $(JEVENTS_ARCH) $(JEVENTS_MODEL) pmu-events/arch $@
endif
diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json
index 7a2b7b200f14..ac75f12e27bf 100644
--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json
+++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/cache.json
@@ -9,7 +9,9 @@
"ArchStdEvent": "L1D_CACHE_REFILL_RD"
},
{
- "ArchStdEvent": "L1D_CACHE_INVAL"
+ "ArchStdEvent": "L1D_CACHE_INVAL",
+ "Errata": "Errata AC03_CPU_41",
+ "BriefDescription": "L1D cache invalidate. Impacted by errata -"
},
{
"ArchStdEvent": "L1D_TLB_REFILL_RD"
diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/instruction.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/instruction.json
index 18d1f2f76a23..9fe697d12fe0 100644
--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/instruction.json
+++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/instruction.json
@@ -78,9 +78,6 @@
"ArchStdEvent": "OP_RETIRED"
},
{
- "ArchStdEvent": "OP_SPEC"
- },
- {
"PublicDescription": "Operation speculatively executed, NOP",
"EventCode": "0x100",
"EventName": "NOP_SPEC",
diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/memory.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/memory.json
index 0711782bfa6b..13382d29b25f 100644
--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/memory.json
+++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/memory.json
@@ -1,6 +1,8 @@
[
{
- "ArchStdEvent": "LD_RETIRED"
+ "ArchStdEvent": "LD_RETIRED",
+ "Errata": "Errata AC03_CPU_52",
+ "BriefDescription": "Instruction architecturally executed, condition code check pass, load. Impacted by errata -"
},
{
"ArchStdEvent": "MEM_ACCESS_RD"
diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/metrics.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/metrics.json
index afcdad58ef89..324104438e78 100644
--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereone/metrics.json
+++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereone/metrics.json
@@ -113,7 +113,7 @@
{
"MetricName": "load_store_spec_rate",
"MetricExpr": "((LDST_SPEC / INST_SPEC) * 100)",
- "BriefDescription": "The rate of load or store instructions speculatively executed to overall instructions speclatively executed",
+ "BriefDescription": "The rate of load or store instructions speculatively executed to overall instructions speculatively executed",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
},
@@ -132,7 +132,7 @@
{
"MetricName": "pc_write_spec_rate",
"MetricExpr": "((PC_WRITE_SPEC / INST_SPEC) * 100)",
- "BriefDescription": "The rate of software change of the PC speculatively executed to overall instructions speclatively executed",
+ "BriefDescription": "The rate of software change of the PC speculatively executed to overall instructions speculatively executed",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
},
@@ -195,14 +195,14 @@
{
"MetricName": "stall_frontend_cache_rate",
"MetricExpr": "((STALL_FRONTEND_CACHE / CPU_CYCLES) * 100)",
- "BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and cache miss",
+ "BriefDescription": "Proportion of cycles stalled and no operations delivered from frontend and cache miss",
"MetricGroup": "Stall",
"ScaleUnit": "1percent of cycles"
},
{
"MetricName": "stall_frontend_tlb_rate",
"MetricExpr": "((STALL_FRONTEND_TLB / CPU_CYCLES) * 100)",
- "BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and TLB miss",
+ "BriefDescription": "Proportion of cycles stalled and no operations delivered from frontend and TLB miss",
"MetricGroup": "Stall",
"ScaleUnit": "1percent of cycles"
},
diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/cache.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/cache.json
index c50d8e930b05..f4bfe7083a6b 100644
--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/cache.json
+++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/cache.json
@@ -9,7 +9,9 @@
"ArchStdEvent": "L1D_CACHE_REFILL_RD"
},
{
- "ArchStdEvent": "L1D_CACHE_INVAL"
+ "ArchStdEvent": "L1D_CACHE_INVAL",
+ "Errata": "Errata AC04_CPU_1",
+ "BriefDescription": "L1D cache invalidate. Impacted by errata -"
},
{
"ArchStdEvent": "L1D_TLB_REFILL_RD"
diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/memory.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/memory.json
index a211d94aacde..6c06bc928415 100644
--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/memory.json
+++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/memory.json
@@ -1,6 +1,8 @@
[
{
- "ArchStdEvent": "LD_RETIRED"
+ "ArchStdEvent": "LD_RETIRED",
+ "Errata": "Errata AC04_CPU_21",
+ "BriefDescription": "Instruction architecturally executed, condition code check pass, load. Impacted by errata -"
},
{
"ArchStdEvent": "MEM_ACCESS_RD"
diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
index c5d1d22bd034..6817cac149e0 100644
--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
+++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
@@ -113,7 +113,7 @@
{
"MetricName": "load_store_spec_rate",
"MetricExpr": "LDST_SPEC / INST_SPEC",
- "BriefDescription": "The rate of load or store instructions speculatively executed to overall instructions speclatively executed",
+ "BriefDescription": "The rate of load or store instructions speculatively executed to overall instructions speculatively executed",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "100percent of operations"
},
@@ -132,7 +132,7 @@
{
"MetricName": "pc_write_spec_rate",
"MetricExpr": "PC_WRITE_SPEC / INST_SPEC",
- "BriefDescription": "The rate of software change of the PC speculatively executed to overall instructions speclatively executed",
+ "BriefDescription": "The rate of software change of the PC speculatively executed to overall instructions speculatively executed",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "100percent of operations"
},
@@ -195,14 +195,14 @@
{
"MetricName": "stall_frontend_cache_rate",
"MetricExpr": "STALL_FRONTEND_CACHE / CPU_CYCLES",
- "BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and cache miss",
+ "BriefDescription": "Proportion of cycles stalled and no operations delivered from frontend and cache miss",
"MetricGroup": "Stall",
"ScaleUnit": "100percent of cycles"
},
{
"MetricName": "stall_frontend_tlb_rate",
"MetricExpr": "STALL_FRONTEND_TLB / CPU_CYCLES",
- "BriefDescription": "Proportion of cycles stalled and no ops delivered from frontend and TLB miss",
+ "BriefDescription": "Proportion of cycles stalled and no operations delivered from frontend and TLB miss",
"MetricGroup": "Stall",
"ScaleUnit": "100percent of cycles"
},
@@ -229,19 +229,19 @@
},
{
"MetricName": "slots_lost_misspeculation_fraction",
- "MetricExpr": "(OP_SPEC - OP_RETIRED) / (CPU_CYCLES * #slots)",
+ "MetricExpr": "100 * (OP_SPEC - OP_RETIRED) / (CPU_CYCLES * #slots)",
"BriefDescription": "Fraction of slots lost due to misspeculation",
"DefaultMetricgroupName": "TopdownL1",
"MetricGroup": "Default;TopdownL1",
- "ScaleUnit": "100percent of slots"
+ "ScaleUnit": "1percent of slots"
},
{
"MetricName": "retired_fraction",
- "MetricExpr": "OP_RETIRED / (CPU_CYCLES * #slots)",
+ "MetricExpr": "100 * OP_RETIRED / (CPU_CYCLES * #slots)",
"BriefDescription": "Fraction of slots retiring, useful work",
"DefaultMetricgroupName": "TopdownL1",
"MetricGroup": "Default;TopdownL1",
- "ScaleUnit": "100percent of slots"
+ "ScaleUnit": "1percent of slots"
},
{
"MetricName": "backend_core",
@@ -266,7 +266,7 @@
},
{
"MetricName": "frontend_bandwidth",
- "MetricExpr": "frontend_bound - frontend_latency",
+ "MetricExpr": "frontend_bound - 100 * frontend_latency",
"BriefDescription": "Fraction of slots the CPU did not dispatch at full bandwidth - able to dispatch partial slots only (1, 2, or 3 uops)",
"MetricGroup": "TopdownL2",
"ScaleUnit": "1percent of slots"
@@ -391,7 +391,7 @@
"ScaleUnit": "100percent of cache acceses"
},
{
- "MetricName": "l1d_cache_access_prefetces",
+ "MetricName": "l1d_cache_access_prefetches",
"MetricExpr": "L1D_CACHE_PRFM / L1D_CACHE",
"BriefDescription": "L1D cache access - prefetch",
"MetricGroup": "Cache",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json
index 4404b8e91690..7126fbf292e0 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/exception.json
@@ -5,7 +5,7 @@
},
{
"ArchStdEvent": "EXC_RETURN",
- "PublicDescription": "Counts any architecturally executed exception return instructions. Eg: AArch64: ERET"
+ "PublicDescription": "Counts any architecturally executed exception return instructions. For example: AArch64: ERET"
},
{
"ArchStdEvent": "EXC_UNDEF",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json
index 428810f855b8..c5dcdcf43c58 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/general.json
@@ -5,6 +5,6 @@
},
{
"ArchStdEvent": "CNT_CYCLES",
- "PublicDescription": "Counts constant frequency cycles"
+ "PublicDescription": "Increments at a constant frequency equal to the rate of increment of the System Counter, CNTPCT_EL0."
}
]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json
index da7c129f2569..799d106d5173 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l1d_cache.json
@@ -1,11 +1,11 @@
[
{
"ArchStdEvent": "L1D_CACHE_REFILL",
- "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line. This event does not count cache line allocations from preload instructions or from hardware cache prefetching."
+ "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line."
},
{
"ArchStdEvent": "L1D_CACHE",
- "PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) count as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted."
+ "PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) counts as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted."
},
{
"ArchStdEvent": "L1D_CACHE_WB",
@@ -17,7 +17,7 @@
},
{
"ArchStdEvent": "L1D_CACHE_RD",
- "PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches count as both a write access and read access."
+ "PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches counts as both a write access and read access."
},
{
"ArchStdEvent": "L1D_CACHE_WR",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json
index 0e31d0daf88b..ed8291ab9737 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l2_cache.json
@@ -1,11 +1,11 @@
[
{
"ArchStdEvent": "L2D_CACHE",
- "PublicDescription": "Counts level 2 cache accesses. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level caches or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache."
+ "PublicDescription": "Counts accesses to the level 2 cache due to data accesses. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level data cache or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache."
},
{
"ArchStdEvent": "L2D_CACHE_REFILL",
- "PublicDescription": "Counts cache line refills into the level 2 cache. level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+ "PublicDescription": "Counts cache line refills into the level 2 cache. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_WB",
@@ -13,23 +13,23 @@
},
{
"ArchStdEvent": "L2D_CACHE_ALLOCATE",
- "PublicDescription": "TBD"
+ "PublicDescription": "Counts level 2 cache line allocates that do not fetch data from outside the level 2 data or unified cache."
},
{
"ArchStdEvent": "L2D_CACHE_RD",
- "PublicDescription": "Counts level 2 cache accesses due to memory read operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+ "PublicDescription": "Counts level 2 data cache accesses due to memory read operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_WR",
- "PublicDescription": "Counts level 2 cache accesses due to memory write operations. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+ "PublicDescription": "Counts level 2 cache accesses due to memory write operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_RD",
- "PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+ "PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_REFILL_WR",
- "PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 caches or translation resolutions due to accesses."
+ "PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L2D_CACHE_WB_VICTIM",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json
index 45bfba532df7..4a2e72fc5ada 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/l3_cache.json
@@ -9,11 +9,11 @@
},
{
"ArchStdEvent": "L3D_CACHE",
- "PublicDescription": "Counts level 3 cache accesses. level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
+ "PublicDescription": "Counts level 3 cache accesses. Level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L3D_CACHE_RD",
- "PublicDescription": "TBD"
+ "PublicDescription": "Counts level 3 cache accesses caused by any memory read operation. Level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
},
{
"ArchStdEvent": "L3D_CACHE_LMISS_RD",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json
index bb712d57d58a..fd5a2e0099b8 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/ll_cache.json
@@ -1,10 +1,10 @@
[
{
"ArchStdEvent": "LL_CACHE_RD",
- "PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources."
+ "PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for the L3 cache. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources."
},
{
"ArchStdEvent": "LL_CACHE_MISS_RD",
- "PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts when the system register CPUECTLR.EXTLLC bit is set. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations."
+ "PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for L3 cache. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations."
}
]
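
For reference, the two read events described above feed the last level cache read ratios defined later in this series' metrics.json. A quick worked example with made-up counter values (a sketch, not output from a real run) shows how the hit and miss ratios relate:

# Hypothetical counter values for one measurement interval (made up).
ll_cache_rd = 2_000_000        # LL_CACHE_RD: reads returned from outside the cluster
ll_cache_miss_rd = 150_000     # LL_CACHE_MISS_RD: the subset that also missed the SLC/L3

# ll_cache_read_hit_ratio and ll_cache_read_miss_ratio from metrics.json
hit_ratio = (ll_cache_rd - ll_cache_miss_rd) / ll_cache_rd
miss_ratio = ll_cache_miss_rd / ll_cache_rd

assert abs(hit_ratio + miss_ratio - 1.0) < 1e-9  # the two ratios are complementary
print(f"hit: {hit_ratio:.1%}, miss: {miss_ratio:.1%}")  # hit: 92.5%, miss: 7.5%
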
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json
index 106a97f8b2e7..bb3491012a8f 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/memory.json
@@ -33,7 +33,7 @@
},
{
"ArchStdEvent": "MEM_ACCESS_CHECKED",
- "PublicDescription": "Counts the number of memory read and write accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)."
+ "PublicDescription": "Counts the number of memory read and write accesses counted by MEM_ACCESS that are tag checked by the Memory Tagging Extension (MTE). This event is implemented as the sum of MEM_ACCESS_CHECKED_RD and MEM_ACCESS_CHECKED_WR"
},
{
"ArchStdEvent": "MEM_ACCESS_CHECKED_RD",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json
index 5f449270b448..97d352f94323 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/metrics.json
@@ -5,7 +5,7 @@
},
{
"MetricName": "backend_stalled_cycles",
- "MetricExpr": "((STALL_BACKEND / CPU_CYCLES) * 100)",
+ "MetricExpr": "STALL_BACKEND / CPU_CYCLES * 100",
"BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the backend unit of the processor.",
"MetricGroup": "Cycle_Accounting",
"ScaleUnit": "1percent of cycles"
@@ -16,45 +16,45 @@
},
{
"MetricName": "branch_misprediction_ratio",
- "MetricExpr": "(BR_MIS_PRED_RETIRED / BR_RETIRED)",
+ "MetricExpr": "BR_MIS_PRED_RETIRED / BR_RETIRED",
"BriefDescription": "This metric measures the ratio of branches mispredicted to the total number of branches architecturally executed. This gives an indication of the effectiveness of the branch prediction unit.",
"MetricGroup": "Miss_Ratio;Branch_Effectiveness",
- "ScaleUnit": "1per branch"
+ "ScaleUnit": "100percent of branches"
},
{
"MetricName": "branch_mpki",
- "MetricExpr": "((BR_MIS_PRED_RETIRED / INST_RETIRED) * 1000)",
+ "MetricExpr": "BR_MIS_PRED_RETIRED / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of branch mispredictions per thousand instructions executed.",
"MetricGroup": "MPKI;Branch_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "branch_percentage",
- "MetricExpr": "(((BR_IMMED_SPEC + BR_INDIRECT_SPEC) / INST_SPEC) * 100)",
+ "MetricExpr": "(BR_IMMED_SPEC + BR_INDIRECT_SPEC) / INST_SPEC * 100",
"BriefDescription": "This metric measures branch operations as a percentage of operations speculatively executed.",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
},
{
"MetricName": "crypto_percentage",
- "MetricExpr": "((CRYPTO_SPEC / INST_SPEC) * 100)",
+ "MetricExpr": "CRYPTO_SPEC / INST_SPEC * 100",
"BriefDescription": "This metric measures crypto operations as a percentage of operations speculatively executed.",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
},
{
"MetricName": "dtlb_mpki",
- "MetricExpr": "((DTLB_WALK / INST_RETIRED) * 1000)",
+ "MetricExpr": "DTLB_WALK / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of data TLB Walks per thousand instructions executed.",
"MetricGroup": "MPKI;DTLB_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "dtlb_walk_ratio",
- "MetricExpr": "(DTLB_WALK / L1D_TLB)",
+ "MetricExpr": "DTLB_WALK / L1D_TLB",
"BriefDescription": "This metric measures the ratio of data TLB Walks to the total number of data TLB accesses. This gives an indication of the effectiveness of the data TLB accesses.",
"MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
- "ScaleUnit": "1per TLB access"
+ "ScaleUnit": "100percent of TLB accesses"
},
{
"ArchStdEvent": "frontend_bound",
@@ -62,147 +62,147 @@
},
{
"MetricName": "frontend_stalled_cycles",
- "MetricExpr": "((STALL_FRONTEND / CPU_CYCLES) * 100)",
+ "MetricExpr": "STALL_FRONTEND / CPU_CYCLES * 100",
"BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the frontend unit of the processor.",
"MetricGroup": "Cycle_Accounting",
"ScaleUnit": "1percent of cycles"
},
{
"MetricName": "integer_dp_percentage",
- "MetricExpr": "((DP_SPEC / INST_SPEC) * 100)",
+ "MetricExpr": "DP_SPEC / INST_SPEC * 100",
"BriefDescription": "This metric measures scalar integer operations as a percentage of operations speculatively executed.",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
},
{
"MetricName": "ipc",
- "MetricExpr": "(INST_RETIRED / CPU_CYCLES)",
+ "MetricExpr": "INST_RETIRED / CPU_CYCLES",
"BriefDescription": "This metric measures the number of instructions retired per cycle.",
"MetricGroup": "General",
"ScaleUnit": "1per cycle"
},
{
"MetricName": "itlb_mpki",
- "MetricExpr": "((ITLB_WALK / INST_RETIRED) * 1000)",
+ "MetricExpr": "ITLB_WALK / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of instruction TLB Walks per thousand instructions executed.",
"MetricGroup": "MPKI;ITLB_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "itlb_walk_ratio",
- "MetricExpr": "(ITLB_WALK / L1I_TLB)",
+ "MetricExpr": "ITLB_WALK / L1I_TLB",
"BriefDescription": "This metric measures the ratio of instruction TLB Walks to the total number of instruction TLB accesses. This gives an indication of the effectiveness of the instruction TLB accesses.",
"MetricGroup": "Miss_Ratio;ITLB_Effectiveness",
- "ScaleUnit": "1per TLB access"
+ "ScaleUnit": "100percent of TLB accesses"
},
{
"MetricName": "l1d_cache_miss_ratio",
- "MetricExpr": "(L1D_CACHE_REFILL / L1D_CACHE)",
+ "MetricExpr": "L1D_CACHE_REFILL / L1D_CACHE",
"BriefDescription": "This metric measures the ratio of level 1 data cache accesses missed to the total number of level 1 data cache accesses. This gives an indication of the effectiveness of the level 1 data cache.",
"MetricGroup": "Miss_Ratio;L1D_Cache_Effectiveness",
- "ScaleUnit": "1per cache access"
+ "ScaleUnit": "100percent of cache accesses"
},
{
"MetricName": "l1d_cache_mpki",
- "MetricExpr": "((L1D_CACHE_REFILL / INST_RETIRED) * 1000)",
+ "MetricExpr": "L1D_CACHE_REFILL / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of level 1 data cache accesses missed per thousand instructions executed.",
"MetricGroup": "MPKI;L1D_Cache_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "l1d_tlb_miss_ratio",
- "MetricExpr": "(L1D_TLB_REFILL / L1D_TLB)",
+ "MetricExpr": "L1D_TLB_REFILL / L1D_TLB",
"BriefDescription": "This metric measures the ratio of level 1 data TLB accesses missed to the total number of level 1 data TLB accesses. This gives an indication of the effectiveness of the level 1 data TLB.",
"MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
- "ScaleUnit": "1per TLB access"
+ "ScaleUnit": "100percent of TLB accesses"
},
{
"MetricName": "l1d_tlb_mpki",
- "MetricExpr": "((L1D_TLB_REFILL / INST_RETIRED) * 1000)",
- "BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.",
+ "MetricExpr": "L1D_TLB_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 data TLB accesses missed per thousand instructions executed.",
"MetricGroup": "MPKI;DTLB_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "l1i_cache_miss_ratio",
- "MetricExpr": "(L1I_CACHE_REFILL / L1I_CACHE)",
+ "MetricExpr": "L1I_CACHE_REFILL / L1I_CACHE",
"BriefDescription": "This metric measures the ratio of level 1 instruction cache accesses missed to the total number of level 1 instruction cache accesses. This gives an indication of the effectiveness of the level 1 instruction cache.",
"MetricGroup": "Miss_Ratio;L1I_Cache_Effectiveness",
- "ScaleUnit": "1per cache access"
+ "ScaleUnit": "100percent of cache accesses"
},
{
"MetricName": "l1i_cache_mpki",
- "MetricExpr": "((L1I_CACHE_REFILL / INST_RETIRED) * 1000)",
+ "MetricExpr": "L1I_CACHE_REFILL / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of level 1 instruction cache accesses missed per thousand instructions executed.",
"MetricGroup": "MPKI;L1I_Cache_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "l1i_tlb_miss_ratio",
- "MetricExpr": "(L1I_TLB_REFILL / L1I_TLB)",
+ "MetricExpr": "L1I_TLB_REFILL / L1I_TLB",
"BriefDescription": "This metric measures the ratio of level 1 instruction TLB accesses missed to the total number of level 1 instruction TLB accesses. This gives an indication of the effectiveness of the level 1 instruction TLB.",
"MetricGroup": "Miss_Ratio;ITLB_Effectiveness",
- "ScaleUnit": "1per TLB access"
+ "ScaleUnit": "100percent of TLB accesses"
},
{
"MetricName": "l1i_tlb_mpki",
- "MetricExpr": "((L1I_TLB_REFILL / INST_RETIRED) * 1000)",
+ "MetricExpr": "L1I_TLB_REFILL / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.",
"MetricGroup": "MPKI;ITLB_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "l2_cache_miss_ratio",
- "MetricExpr": "(L2D_CACHE_REFILL / L2D_CACHE)",
+ "MetricExpr": "L2D_CACHE_REFILL / L2D_CACHE",
"BriefDescription": "This metric measures the ratio of level 2 cache accesses missed to the total number of level 2 cache accesses. This gives an indication of the effectiveness of the level 2 cache, which is a unified cache that stores both data and instruction. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.",
"MetricGroup": "Miss_Ratio;L2_Cache_Effectiveness",
- "ScaleUnit": "1per cache access"
+ "ScaleUnit": "100percent of cache accesses"
},
{
"MetricName": "l2_cache_mpki",
- "MetricExpr": "((L2D_CACHE_REFILL / INST_RETIRED) * 1000)",
+ "MetricExpr": "L2D_CACHE_REFILL / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of level 2 unified cache accesses missed per thousand instructions executed. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.",
"MetricGroup": "MPKI;L2_Cache_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "l2_tlb_miss_ratio",
- "MetricExpr": "(L2D_TLB_REFILL / L2D_TLB)",
+ "MetricExpr": "L2D_TLB_REFILL / L2D_TLB",
"BriefDescription": "This metric measures the ratio of level 2 unified TLB accesses missed to the total number of level 2 unified TLB accesses. This gives an indication of the effectiveness of the level 2 TLB.",
"MetricGroup": "Miss_Ratio;ITLB_Effectiveness;DTLB_Effectiveness",
- "ScaleUnit": "1per TLB access"
+ "ScaleUnit": "100percent of TLB accesses"
},
{
"MetricName": "l2_tlb_mpki",
- "MetricExpr": "((L2D_TLB_REFILL / INST_RETIRED) * 1000)",
+ "MetricExpr": "L2D_TLB_REFILL / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of level 2 unified TLB accesses missed per thousand instructions executed.",
"MetricGroup": "MPKI;ITLB_Effectiveness;DTLB_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "ll_cache_read_hit_ratio",
- "MetricExpr": "((LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD)",
+ "MetricExpr": "(LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD",
"BriefDescription": "This metric measures the ratio of last level cache read accesses hit in the cache to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.",
"MetricGroup": "LL_Cache_Effectiveness",
- "ScaleUnit": "1per cache access"
+ "ScaleUnit": "100percent of cache accesses"
},
{
"MetricName": "ll_cache_read_miss_ratio",
- "MetricExpr": "(LL_CACHE_MISS_RD / LL_CACHE_RD)",
+ "MetricExpr": "LL_CACHE_MISS_RD / LL_CACHE_RD",
"BriefDescription": "This metric measures the ratio of last level cache read accesses missed to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.",
"MetricGroup": "Miss_Ratio;LL_Cache_Effectiveness",
- "ScaleUnit": "1per cache access"
+ "ScaleUnit": "100percent of cache accesses"
},
{
"MetricName": "ll_cache_read_mpki",
- "MetricExpr": "((LL_CACHE_MISS_RD / INST_RETIRED) * 1000)",
+ "MetricExpr": "LL_CACHE_MISS_RD / INST_RETIRED * 1000",
"BriefDescription": "This metric measures the number of last level cache read accesses missed per thousand instructions executed.",
"MetricGroup": "MPKI;LL_Cache_Effectiveness",
"ScaleUnit": "1MPKI"
},
{
"MetricName": "load_percentage",
- "MetricExpr": "((LD_SPEC / INST_SPEC) * 100)",
+ "MetricExpr": "LD_SPEC / INST_SPEC * 100",
"BriefDescription": "This metric measures load operations as a percentage of operations speculatively executed.",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
@@ -213,21 +213,21 @@
},
{
"MetricName": "scalar_fp_percentage",
- "MetricExpr": "((VFP_SPEC / INST_SPEC) * 100)",
+ "MetricExpr": "VFP_SPEC / INST_SPEC * 100",
"BriefDescription": "This metric measures scalar floating point operations as a percentage of operations speculatively executed.",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
},
{
"MetricName": "simd_percentage",
- "MetricExpr": "((ASE_SPEC / INST_SPEC) * 100)",
+ "MetricExpr": "ASE_SPEC / INST_SPEC * 100",
"BriefDescription": "This metric measures advanced SIMD operations as a percentage of total operations speculatively executed.",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
},
{
"MetricName": "store_percentage",
- "MetricExpr": "((ST_SPEC / INST_SPEC) * 100)",
+ "MetricExpr": "ST_SPEC / INST_SPEC * 100",
"BriefDescription": "This metric measures store operations as a percentage of operations speculatively executed.",
"MetricGroup": "Operation_Mix",
"ScaleUnit": "1percent of operations"
@@ -300,5 +300,12 @@
"MetricGroup": "Operation_Mix",
"MetricName": "branch_indirect_spec_rate",
"ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "sve_all_percentage",
+ "MetricExpr": "SVE_INST_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations, including loads and stores, as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
}
]
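
The metric updates above drop redundant parentheses from each MetricExpr and move the ratio metrics from a "1per ..." ScaleUnit to "100percent of ...", so perf reports them as percentages. As a rough illustration of how the two fields combine, here is a simplified sketch (not perf's actual parser; the counter values are made up) that evaluates an expression and applies the numeric prefix of the ScaleUnit:

import re

# Hypothetical raw counter values for one measurement window (made up).
counters = {
    "BR_MIS_PRED_RETIRED": 120_000,
    "BR_RETIRED": 9_600_000,
    "L1D_CACHE_REFILL": 450_000,
    "L1D_CACHE": 30_000_000,
    "INST_RETIRED": 52_000_000,
}

# Metric definitions as they appear in metrics.json after this change.
metrics = [
    ("branch_misprediction_ratio", "BR_MIS_PRED_RETIRED / BR_RETIRED", "100percent of branches"),
    ("l1d_cache_miss_ratio", "L1D_CACHE_REFILL / L1D_CACHE", "100percent of cache accesses"),
    ("branch_mpki", "BR_MIS_PRED_RETIRED / INST_RETIRED * 1000", "1MPKI"),
]

for name, expr, scale_unit in metrics:
    value = eval(expr, {"__builtins__": {}}, counters)  # illustrative only
    scale, unit = re.match(r"(\d+\.?\d*)(.*)", scale_unit).groups()
    print(f"{name}: {value * float(scale):.2f} {unit}")
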
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json
index f297b049b62f..337e6a916f2b 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/retired.json
@@ -9,7 +9,7 @@
},
{
"ArchStdEvent": "CID_WRITE_RETIRED",
- "PublicDescription": "Counts architecturally executed writes to the CONTEXTIDR register, which usually contain the kernel PID and can be output with hardware trace."
+ "PublicDescription": "Counts architecturally executed writes to the CONTEXTIDR_EL1 register, which usually contain the kernel PID and can be output with hardware trace."
},
{
"ArchStdEvent": "TTBR_WRITE_RETIRED",
@@ -17,7 +17,7 @@
},
{
"ArchStdEvent": "BR_RETIRED",
- "PublicDescription": "Counts architecturally executed branches, whether the branch is taken or not. Instructions that explicitly write to the PC are also counted."
+ "PublicDescription": "Counts architecturally executed branches, whether the branch is taken or not. Instructions that explicitly write to the PC are also counted. Note that exception generating instructions, exception return instructions and context synchronization instructions are not counted."
},
{
"ArchStdEvent": "BR_MIS_PRED_RETIRED",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json
index 1af961f8a6c8..a7ea0d4c4ea4 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/spec_operation.json
@@ -5,7 +5,7 @@
},
{
"ArchStdEvent": "BR_PRED",
- "PublicDescription": "Counts branches speculatively executed and were predicted right."
+ "PublicDescription": "Counts all speculatively executed branches."
},
{
"ArchStdEvent": "INST_SPEC",
@@ -29,7 +29,7 @@
},
{
"ArchStdEvent": "LDREX_SPEC",
- "PublicDescription": "Counts Load-Exclusive operations that have been speculatively executed. Eg: LDREX, LDX"
+ "PublicDescription": "Counts Load-Exclusive operations that have been speculatively executed. For example: LDREX, LDX"
},
{
"ArchStdEvent": "STREX_PASS_SPEC",
@@ -73,15 +73,15 @@
},
{
"ArchStdEvent": "BR_IMMED_SPEC",
- "PublicDescription": "Counts immediate branch operations which are speculatively executed."
+ "PublicDescription": "Counts direct branch operations which are speculatively executed."
},
{
"ArchStdEvent": "BR_RETURN_SPEC",
- "PublicDescription": "Counts procedure return operations (RET) which are speculatively executed."
+ "PublicDescription": "Counts procedure return operations (RET, RETAA and RETAB) which are speculatively executed."
},
{
"ArchStdEvent": "BR_INDIRECT_SPEC",
- "PublicDescription": "Counts indirect branch operations including procedure returns, which are speculatively executed. This includes operations that force a software change of the PC, other than exception-generating operations. Eg: BR Xn, RET"
+ "PublicDescription": "Counts indirect branch operations including procedure returns, which are speculatively executed. This includes operations that force a software change of the PC, other than exception-generating operations and direct branch instructions. Some examples of the instructions counted by this event include BR Xn, RET, etc..."
},
{
"ArchStdEvent": "ISB_SPEC",
@@ -97,11 +97,11 @@
},
{
"ArchStdEvent": "RC_LD_SPEC",
- "PublicDescription": "Counts any load acquire operations that are speculatively executed. Eg: LDAR, LDARH, LDARB"
+ "PublicDescription": "Counts any load acquire operations that are speculatively executed. For example: LDAR, LDARH, LDARB"
},
{
"ArchStdEvent": "RC_ST_SPEC",
- "PublicDescription": "Counts any store release operations that are speculatively executed. Eg: STLR, STLRH, STLRB'"
+ "PublicDescription": "Counts any store release operations that are speculatively executed. For example: STLR, STLRH, STLRB"
},
{
"ArchStdEvent": "ASE_INST_SPEC",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json
index bbbebc805034..1fcba19dfb7d 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/stall.json
@@ -1,7 +1,7 @@
[
{
"ArchStdEvent": "STALL_FRONTEND",
- "PublicDescription": "Counts cycles when frontend could not send any micro-operations to the rename stage because of frontend resource stalls caused by fetch memory latency or branch prediction flow stalls. All the frontend slots were empty during the cycle when this event counts."
+ "PublicDescription": "Counts cycles when frontend could not send any micro-operations to the rename stage because of frontend resource stalls caused by fetch memory latency or branch prediction flow stalls. STALL_FRONTEND_SLOTS counts SLOTS during the cycle when this event counts."
},
{
"ArchStdEvent": "STALL_BACKEND",
@@ -9,11 +9,11 @@
},
{
"ArchStdEvent": "STALL",
- "PublicDescription": "Counts cycles when no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall)."
+ "PublicDescription": "Counts cycles when no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). This event is the sum of STALL_FRONTEND and STALL_BACKEND"
},
{
"ArchStdEvent": "STALL_SLOT_BACKEND",
- "PublicDescription": "Counts slots per cycle in which no operations are sent from the rename unit to the backend due to backend resource constraints."
+ "PublicDescription": "Counts slots per cycle in which no operations are sent from the rename unit to the backend due to backend resource constraints. STALL_BACKEND counts during the cycle when STALL_SLOT_BACKEND counts at least 1."
},
{
"ArchStdEvent": "STALL_SLOT_FRONTEND",
@@ -21,7 +21,7 @@
},
{
"ArchStdEvent": "STALL_SLOT",
- "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall)."
+ "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). STALL_SLOT is the sum of STALL_SLOT_FRONTEND and STALL_SLOT_BACKEND."
},
{
"ArchStdEvent": "STALL_BACKEND_MEM",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json
index b550af1831f5..5704f1e83af9 100644
--- a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n2-v2/tlb.json
@@ -25,11 +25,11 @@
},
{
"ArchStdEvent": "DTLB_WALK",
- "PublicDescription": "Counts data memory translation table walks caused by a miss in the L2 TLB driven by a memory access. Note that partial translations that also cause a table walk are counted. This event does not count table walks caused by TLB maintenance operations."
+ "PublicDescription": "Counts number of demand data translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations."
},
{
"ArchStdEvent": "ITLB_WALK",
- "PublicDescription": "Counts instruction memory translation table walks caused by a miss in the L2 TLB driven by a memory access. Partial translations that also cause a table walk are counted. This event does not count table walks caused by TLB maintenance operations."
+ "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
},
{
"ArchStdEvent": "L1D_TLB_REFILL_RD",
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/bus.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/bus.json
new file mode 100644
index 000000000000..2e11a8c4a484
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/bus.json
@@ -0,0 +1,18 @@
+[
+ {
+ "ArchStdEvent": "BUS_ACCESS",
+ "PublicDescription": "Counts memory transactions issued by the CPU to the external bus, including snoop requests and snoop responses. Each beat of data is counted individually."
+ },
+ {
+ "ArchStdEvent": "BUS_CYCLES",
+ "PublicDescription": "Counts bus cycles in the CPU. Bus cycles represent a clock cycle in which a transaction could be sent or received on the interface from the CPU to the external bus. Since that interface is driven at the same clock speed as the CPU, this event is a duplicate of CPU_CYCLES."
+ },
+ {
+ "ArchStdEvent": "BUS_ACCESS_RD",
+ "PublicDescription": "Counts memory read transactions seen on the external bus. Each beat of data is counted individually."
+ },
+ {
+ "ArchStdEvent": "BUS_ACCESS_WR",
+ "PublicDescription": "Counts memory write transactions seen on the external bus. Each beat of data is counted individually."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/exception.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/exception.json
new file mode 100644
index 000000000000..7126fbf292e0
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/exception.json
@@ -0,0 +1,62 @@
+[
+ {
+ "ArchStdEvent": "EXC_TAKEN",
+ "PublicDescription": "Counts any taken architecturally visible exceptions such as IRQ, FIQ, SError, and other synchronous exceptions. Exceptions are counted whether or not they are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_RETURN",
+ "PublicDescription": "Counts any architecturally executed exception return instructions. For example: AArch64: ERET"
+ },
+ {
+ "ArchStdEvent": "EXC_UNDEF",
+ "PublicDescription": "Counts the number of synchronous exceptions which are taken locally that are due to attempting to execute an instruction that is UNDEFINED. Attempting to execute instruction bit patterns that have not been allocated. Attempting to execute instructions when they are disabled. Attempting to execute instructions at an inappropriate Exception level. Attempting to execute an instruction when the value of PSTATE.IL is 1."
+ },
+ {
+ "ArchStdEvent": "EXC_SVC",
+ "PublicDescription": "Counts SVC exceptions taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_PABORT",
+ "PublicDescription": "Counts synchronous exceptions that are taken locally and caused by Instruction Aborts."
+ },
+ {
+ "ArchStdEvent": "EXC_DABORT",
+ "PublicDescription": "Counts exceptions that are taken locally and are caused by data aborts or SErrors. Conditions that could cause those exceptions are attempting to read or write memory where the MMU generates a fault, attempting to read or write memory with a misaligned address, interrupts from the nSEI inputs and internally generated SErrors."
+ },
+ {
+ "ArchStdEvent": "EXC_IRQ",
+ "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_FIQ",
+ "PublicDescription": "Counts FIQ exceptions including the virtual FIQs that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_SMC",
+ "PublicDescription": "Counts SMC exceptions take to EL3."
+ },
+ {
+ "ArchStdEvent": "EXC_HVC",
+ "PublicDescription": "Counts HVC exceptions taken to EL2."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_PABORT",
+ "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Instruction Aborts. For example, attempting to execute an instruction with a misaligned PC."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_DABORT",
+ "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Data Aborts or SError interrupts. Conditions that could cause those exceptions are:\n\n1. Attempting to read or write memory where the MMU generates a fault,\n2. Attempting to read or write memory with a misaligned address,\n3. Interrupts from the SEI input.\n4. internally generated SErrors."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_OTHER",
+ "PublicDescription": "Counts the number of synchronous trap exceptions which are not taken locally and are not SVC, SMC, HVC, data aborts, Instruction Aborts, or interrupts."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_IRQ",
+ "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are not taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_FIQ",
+ "PublicDescription": "Counts FIQs which are not taken locally but taken from EL0, EL1,\n or EL2 to EL3 (which would be the normal behavior for FIQs when not executing\n in EL3)."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/fp_operation.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/fp_operation.json
new file mode 100644
index 000000000000..cec3435ac766
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/fp_operation.json
@@ -0,0 +1,22 @@
+[
+ {
+ "ArchStdEvent": "FP_HP_SPEC",
+ "PublicDescription": "Counts speculatively executed half precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_SP_SPEC",
+ "PublicDescription": "Counts speculatively executed single precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_DP_SPEC",
+ "PublicDescription": "Counts speculatively executed double precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_SCALE_OPS_SPEC",
+ "PublicDescription": "Counts speculatively executed scalable single precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_FIXED_OPS_SPEC",
+ "PublicDescription": "Counts speculatively executed non-scalable single precision floating point operations."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/general.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/general.json
new file mode 100644
index 000000000000..c5dcdcf43c58
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/general.json
@@ -0,0 +1,10 @@
+[
+ {
+ "ArchStdEvent": "CPU_CYCLES",
+ "PublicDescription": "Counts CPU clock cycles (not timer cycles). The clock measured by this event is defined as the physical clock driving the CPU logic."
+ },
+ {
+ "ArchStdEvent": "CNT_CYCLES",
+ "PublicDescription": "Increments at a constant frequency equal to the rate of increment of the System Counter, CNTPCT_EL0."
+ }
+]
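
A common use of the two events above is estimating the average core frequency over a measurement window: CPU_CYCLES counts core clock cycles while CNT_CYCLES ticks at the fixed system counter rate, so dividing the CNT_CYCLES delta by the system counter frequency gives elapsed time. A minimal sketch with made-up counts; the 100 MHz CNTFRQ_EL0 value is only an assumed example:

# Hypothetical counter deltas over one measurement window (made up).
cpu_cycles = 5_200_000_000   # CPU_CYCLES: core clock cycles
cnt_cycles = 200_000_000     # CNT_CYCLES: ticks at the system counter (CNTPCT_EL0) rate
cntfrq_hz = 100_000_000      # assumed system counter frequency (CNTFRQ_EL0)

elapsed_s = cnt_cycles / cntfrq_hz              # wall-clock time covered by the window
avg_freq_ghz = cpu_cycles / elapsed_s / 1e9     # average core frequency
print(f"window: {elapsed_s:.2f} s, average frequency: {avg_freq_ghz:.2f} GHz")
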
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l1d_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l1d_cache.json
new file mode 100644
index 000000000000..ee04d9fe1a70
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l1d_cache.json
@@ -0,0 +1,50 @@
+[
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL",
+ "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE",
+ "PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) counts as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WB",
+ "PublicDescription": "Counts write-backs of dirty data from the L1 data cache to the L2 cache. This occurs when either a dirty cache line is evicted from L1 data cache and allocated in the L2 cache or dirty data is written to the L2 and possibly to the next level of cache. This event counts both victim cache line evictions and cache write-backs from snoops or cache maintenance operations. The following cache operations are not counted:\n\n1. Invalidations which do not result in data being transferred out of the L1 (such as evictions of clean data),\n2. Full line writes which write to L2 without writing L1, such as write streaming mode."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_LMISS_RD",
+ "PublicDescription": "Counts cache line refills into the level 1 data cache from any memory read operations, that incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_RD",
+ "PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches counts as both a write access and read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WR",
+ "PublicDescription": "Counts level 1 data cache accesses generated by store operations. This event also counts accesses caused by a DC ZVA (data cache zero, specified by virtual address) instruction. Near atomic operations that resolve in the CPUs caches count as a write access and read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_INNER",
+ "PublicDescription": "Counts level 1 data cache refills where the cache line data came from caches inside the immediate cluster of the core."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_OUTER",
+ "PublicDescription": "Counts level 1 data cache refills for which the cache line data came from outside the immediate cluster of the core, like an SLC in the system interconnect or DRAM."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_INVAL",
+ "PublicDescription": "Counts each explicit invalidation of a cache line in the level 1 data cache caused by:\n\n- Cache Maintenance Operations (CMO) that operate by a virtual address.\n- Broadcast cache coherency operations from another CPU in the system.\n\nThis event does not count for the following conditions:\n\n1. A cache refill invalidates a cache line.\n2. A CMO which is executed on that CPU and invalidates a cache line specified by set/way.\n\nNote that CMOs that operate by set/way cannot be broadcast from one CPU to another."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_RW",
+ "PublicDescription": "Counts level 1 data demand cache accesses from any load or store operation. Near atomic operations that resolve in the CPUs caches counts as both a write access and read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_HWPRF",
+ "PublicDescription": "Counts level 1 data cache accesses from any load/store operations generated by the hardware prefetcher."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_HWPRF",
+ "PublicDescription": "Counts level 1 data cache refills where the cache line is requested by a hardware prefetcher."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l1i_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l1i_cache.json
new file mode 100644
index 000000000000..633f1030359d
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l1i_cache.json
@@ -0,0 +1,14 @@
+[
+ {
+ "ArchStdEvent": "L1I_CACHE_REFILL",
+ "PublicDescription": "Counts cache line refills in the level 1 instruction cache caused by a missed instruction fetch. Instruction fetches may include accessing multiple instructions, but the single cache line allocation is counted once."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE",
+ "PublicDescription": "Counts instruction fetches which access the level 1 instruction cache. Instruction cache accesses caused by cache maintenance operations are not counted."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_LMISS",
+ "PublicDescription": "Counts cache line refills into the level 1 instruction cache, that incurred additional latency."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l2_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l2_cache.json
new file mode 100644
index 000000000000..e6cce710c560
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l2_cache.json
@@ -0,0 +1,78 @@
+[
+ {
+ "ArchStdEvent": "L2D_CACHE",
+ "PublicDescription": "Counts accesses to the level 2 cache due to data accesses. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level data cache or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL",
+ "PublicDescription": "Counts cache line refills into the level 2 cache. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB",
+ "PublicDescription": "Counts write-backs of data from the L2 cache to outside the CPU. This includes snoops to the L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line invalidations which do not write data outside the CPU and snoops which return data from an L1 cache are not counted. Data would not be written outside the cache when invalidating a clean cache line."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_ALLOCATE",
+ "PublicDescription": "Counts level 2 cache line allocates that do not fetch data from outside the level 2 data or unified cache."
+ },
+ {
+ "ArchStdEvent": "L2I_CACHE",
+ "PublicDescription": "Counts accesses to the level 2 cache due to instruction accesses. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level instruction cache."
+ },
+ {
+ "ArchStdEvent": "L2I_CACHE_REFILL",
+ "PublicDescription": "Counts cache line refills into the level 2 cache. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_RD",
+ "PublicDescription": "Counts level 2 data cache accesses due to memory read operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WR",
+ "PublicDescription": "Counts level 2 cache accesses due to memory write operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_RD",
+ "PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_WR",
+ "PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB_VICTIM",
+ "PublicDescription": "Counts evictions from the level 2 cache because of a line being allocated into the L2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB_CLEAN",
+ "PublicDescription": "Counts write-backs from the level 2 cache that are a result of either:\n\n1. Cache maintenance operations,\n\n2. Snoop responses or,\n\n3. Direct cache transfers to another CPU due to a forwarding snoop request."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_INVAL",
+ "PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by cache maintenance operations that operate by a virtual address, or by external coherency operations. This event does not count if either:\n\n1. A cache refill invalidates a cache line or,\n2. A Cache Maintenance Operation (CMO), which invalidates a cache line specified by set/way, is executed on that CPU.\n\nCMOs that operate by set/way cannot be broadcast from one CPU to another."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_LMISS_RD",
+ "PublicDescription": "Counts cache line refills into the level 2 unified cache from any memory read operations that incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "L2I_CACHE_LMISS",
+ "PublicDescription": "Counts cache line refills into the level 2 unified cache from any instruction read operations that incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_RW",
+ "PublicDescription": "Counts level 2 cache demand accesses from any load/store operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2I_CACHE_RD",
+ "PublicDescription": "Counts level 2 cache accesses that are due to a demand instruction cache access."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_PRF",
+ "PublicDescription": "Counts level 2 data cache accesses from software preload or prefetch instructions or hardware prefetcher."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_PRF",
+ "PublicDescription": "Counts refills due to accesses generated as a result of prefetches."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l3_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l3_cache.json
new file mode 100644
index 000000000000..8fe51a628419
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/l3_cache.json
@@ -0,0 +1,26 @@
+[
+ {
+ "ArchStdEvent": "L3D_CACHE_ALLOCATE",
+ "PublicDescription": "Counts level 3 cache line allocates that do not fetch data from outside the level 3 data or unified cache. For example, allocates due to streaming stores."
+ },
+ {
+ "ArchStdEvent": "L3D_CACHE_REFILL",
+ "PublicDescription": "Counts level 3 accesses that receive data from outside the L3 cache."
+ },
+ {
+ "ArchStdEvent": "L3D_CACHE",
+ "PublicDescription": "Counts level 3 cache accesses. Level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L3D_CACHE_RD",
+ "PublicDescription": "Counts level 3 cache accesses caused by any memory read operation. Level 3 cache is a unified cache for data and instruction accesses. Accesses are for misses in the lower level caches or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L3D_CACHE_LMISS_RD",
+ "PublicDescription": "Counts any cache line refill into the level 3 cache from memory read operations that incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "L3D_CACHE_MISS",
+ "PublicDescription": "Counts level 3 cache accesses that missed in the level 3 cache."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/ll_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/ll_cache.json
new file mode 100644
index 000000000000..c9259682d39e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/ll_cache.json
@@ -0,0 +1,22 @@
+[
+ {
+ "ArchStdEvent": "LL_CACHE",
+ "PublicDescription": "Counts transactions that were returned from outside the core cluster. This event counts transactions for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts transactions for L3 cache."
+ },
+ {
+ "ArchStdEvent": "LL_CACHE_MISS",
+ "PublicDescription": "Counts transactions that were returned from outside the core cluster and missed in the last level cache"
+ },
+ {
+ "ArchStdEvent": "LL_CACHE_RD",
+ "PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for the L3 cache. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources."
+ },
+ {
+ "ArchStdEvent": "LL_CACHE_MISS_RD",
+ "PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for L3 cache. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations."
+ },
+ {
+ "ArchStdEvent": "LL_CACHE_REFILL",
+ "PublicDescription": "Counts last level accesses that receive data from outside the last level cache."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/memory.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/memory.json
new file mode 100644
index 000000000000..f19204a5faae
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/memory.json
@@ -0,0 +1,54 @@
+[
+ {
+ "ArchStdEvent": "MEM_ACCESS",
+ "PublicDescription": "Counts memory accesses issued by the CPU load store unit, where those accesses are issued due to load or store operations. This event counts memory accesses no matter whether the data is received from any level of cache hierarchy or external memory. If memory accesses are broken up into smaller transactions than what were specified in the load or store instructions, then the event counts those smaller memory transactions."
+ },
+ {
+ "ArchStdEvent": "REMOTE_ACCESS",
+ "PublicDescription": "Counts accesses to another chip, which is implemented as a different CMN mesh in the system. If the CHI bus response back to the core indicates that the data source is from another chip (mesh), then the counter is updated. If no data is returned, even if the system snoops another chip/mesh, then the counter is not updated."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_RD",
+ "PublicDescription": "Counts memory accesses issued by the CPU due to load operations. The event counts any memory load access, no matter whether the data is received from any level of cache hierarchy or external memory. The event also counts atomic load operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_WR",
+ "PublicDescription": "Counts memory accesses issued by the CPU due to store operations. The event counts any memory store access, no matter whether the data is located in any level of cache or external memory. The event also counts atomic load and store operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
+ },
+ {
+ "ArchStdEvent": "LDST_ALIGN_LAT",
+ "PublicDescription": "Counts the number of memory read and write accesses in a cycle that incurred additional latency, due to the alignment of the address and the size of data being accessed, which results in store crossing a single cache line."
+ },
+ {
+ "ArchStdEvent": "LD_ALIGN_LAT",
+ "PublicDescription": "Counts the number of memory read accesses in a cycle that incurred additional latency, due to the alignment of the address and size of data being accessed, which results in load crossing a single cache line."
+ },
+ {
+ "ArchStdEvent": "ST_ALIGN_LAT",
+ "PublicDescription": "Counts the number of memory write access in a cycle that incurred additional latency, due to the alignment of the address and size of data being accessed incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_CHECKED",
+ "PublicDescription": "Counts the number of memory read and write accesses counted by MEM_ACCESS that are tag checked by the Memory Tagging Extension (MTE). This event is implemented as the sum of MEM_ACCESS_CHECKED_RD and MEM_ACCESS_CHECKED_WR"
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_CHECKED_RD",
+ "PublicDescription": "Counts the number of memory read accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_CHECKED_WR",
+ "PublicDescription": "Counts the number of memory write accesses in a cycle that is tag checked by the Memory Tagging Extension (MTE)."
+ },
+ {
+ "ArchStdEvent": "INST_FETCH_PERCYC",
+ "PublicDescription": "Counts number of instruction fetches outstanding per cycle, which will provide an average latency of instruction fetch."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_RD_PERCYC",
+ "PublicDescription": "Counts the number of outstanding loads or memory read accesses per cycle."
+ },
+ {
+ "ArchStdEvent": "INST_FETCH",
+ "PublicDescription": "Counts Instruction memory accesses that the PE makes."
+ }
+]
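
The MEM_ACCESS_CHECKED description above states that the event is implemented as the sum of MEM_ACCESS_CHECKED_RD and MEM_ACCESS_CHECKED_WR, which also gives a simple way to gauge what fraction of the MEM_ACCESS traffic is MTE tag checked. A small sketch with made-up counter values, illustrative only:

# Hypothetical counter values (made up).
mem_access = 80_000_000
mem_access_checked_rd = 6_000_000
mem_access_checked_wr = 2_000_000

# Per the description, MEM_ACCESS_CHECKED is the sum of the RD and WR variants.
mem_access_checked = mem_access_checked_rd + mem_access_checked_wr

checked_fraction = mem_access_checked / mem_access
print(f"tag-checked accesses: {checked_fraction:.1%} of MEM_ACCESS")  # 10.0%
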
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/metrics.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/metrics.json
new file mode 100644
index 000000000000..eb3a35f244e7
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/metrics.json
@@ -0,0 +1,457 @@
+[
+ {
+ "ArchStdEvent": "backend_bound"
+ },
+ {
+ "MetricName": "backend_busy_bound",
+ "MetricExpr": "STALL_BACKEND_BUSY / STALL_BACKEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to issue queues being full to accept operations for execution.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_cache_l1d_bound",
+ "MetricExpr": "STALL_BACKEND_L1D / (STALL_BACKEND_L1D + STALL_BACKEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory access latency issues caused by level 1 data cache misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_cache_l2d_bound",
+ "MetricExpr": "STALL_BACKEND_MEM / (STALL_BACKEND_L1D + STALL_BACKEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory access latency issues caused by level 2 data cache misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_core_bound",
+ "MetricExpr": "STALL_BACKEND_CPUBOUND / STALL_BACKEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to backend core resource constraints not related to instruction fetch latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_core_rename_bound",
+ "MetricExpr": "STALL_BACKEND_RENAME / STALL_BACKEND_CPUBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend as the rename unit registers are unavailable.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_bound",
+ "MetricExpr": "STALL_BACKEND_MEMBOUND / STALL_BACKEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to backend core resource constraints related to memory access latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_cache_bound",
+ "MetricExpr": "(STALL_BACKEND_L1D + STALL_BACKEND_MEM) / STALL_BACKEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory latency issues caused by data cache misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_store_bound",
+ "MetricExpr": "STALL_BACKEND_ST / STALL_BACKEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to memory write pending caused by stores stalled in the pre-commit stage.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_tlb_bound",
+ "MetricExpr": "STALL_BACKEND_TLB / STALL_BACKEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory access latency issues caused by data TLB misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_stalled_cycles",
+ "MetricExpr": "STALL_BACKEND / CPU_CYCLES * 100",
+ "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the backend unit of the processor.",
+ "MetricGroup": "Cycle_Accounting",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "ArchStdEvent": "bad_speculation",
+ "MetricExpr": "(1 - STALL_SLOT / (5 * CPU_CYCLES)) * (1 - OP_RETIRED / OP_SPEC) * 100 + STALL_FRONTEND_FLUSH / CPU_CYCLES * 100"
+ },
+ {
+ "MetricName": "barrier_percentage",
+ "MetricExpr": "(ISB_SPEC + DSB_SPEC + DMB_SPEC) / INST_SPEC * 100",
+ "BriefDescription": "This metric measures instruction and data barrier operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "branch_direct_ratio",
+ "MetricExpr": "BR_IMMED_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of direct branches retired to the total number of branches architecturally executed.",
+ "MetricGroup": "Branch_Effectiveness",
+ "ScaleUnit": "1per branch"
+ },
+ {
+ "MetricName": "branch_indirect_ratio",
+ "MetricExpr": "BR_IND_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of indirect branches retired, including function returns, to the total number of branches architecturally executed.",
+ "MetricGroup": "Branch_Effectiveness",
+ "ScaleUnit": "1per branch"
+ },
+ {
+ "MetricName": "branch_misprediction_ratio",
+ "MetricExpr": "BR_MIS_PRED_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of branches mispredicted to the total number of branches architecturally executed. This gives an indication of the effectiveness of the branch prediction unit.",
+ "MetricGroup": "Miss_Ratio;Branch_Effectiveness",
+ "ScaleUnit": "100percent of branches"
+ },
+ {
+ "MetricName": "branch_mpki",
+ "MetricExpr": "BR_MIS_PRED_RETIRED / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of branch mispredictions per thousand instructions executed.",
+ "MetricGroup": "MPKI;Branch_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "branch_percentage",
+ "MetricExpr": "PC_WRITE_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures branch operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "branch_return_ratio",
+ "MetricExpr": "BR_RETURN_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of branches retired that are function returns to the total number of branches architecturally executed.",
+ "MetricGroup": "Branch_Effectiveness",
+ "ScaleUnit": "1per branch"
+ },
+ {
+ "MetricName": "crypto_percentage",
+ "MetricExpr": "CRYPTO_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures crypto operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "dtlb_mpki",
+ "MetricExpr": "DTLB_WALK / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of data TLB Walks per thousand instructions executed.",
+ "MetricGroup": "MPKI;DTLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "dtlb_walk_ratio",
+ "MetricExpr": "DTLB_WALK / L1D_TLB",
+ "BriefDescription": "This metric measures the ratio of data TLB Walks to the total number of data TLB accesses. This gives an indication of the effectiveness of the data TLB accesses.",
+ "MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "fp16_percentage",
+ "MetricExpr": "FP_HP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures half-precision floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "FP_Precision_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "fp32_percentage",
+ "MetricExpr": "FP_SP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures single-precision floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "FP_Precision_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "fp64_percentage",
+ "MetricExpr": "FP_DP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures double-precision floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "FP_Precision_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "fp_ops_per_cycle",
+ "MetricExpr": "(FP_SCALE_OPS_SPEC + FP_FIXED_OPS_SPEC) / CPU_CYCLES",
+ "BriefDescription": "This metric measures floating point operations per cycle in any precision performed by any instruction. Operations are counted by computation and by vector lanes, fused computations such as multiply-add count as twice per vector lane for example.",
+ "MetricGroup": "FP_Arithmetic_Intensity",
+ "ScaleUnit": "1operations per cycle"
+ },
+ {
+ "ArchStdEvent": "frontend_bound",
+ "MetricExpr": "(STALL_SLOT_FRONTEND / (5 * CPU_CYCLES) - STALL_FRONTEND_FLUSH / CPU_CYCLES) * 100"
+ },
+ {
+ "MetricName": "frontend_cache_l1i_bound",
+ "MetricExpr": "STALL_FRONTEND_L1I / (STALL_FRONTEND_L1I + STALL_FRONTEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to memory access latency issues caused by level 1 instruction cache misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_cache_l2i_bound",
+ "MetricExpr": "STALL_FRONTEND_MEM / (STALL_FRONTEND_L1I + STALL_FRONTEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to memory access latency issues caused by level 2 instruction cache misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_core_bound",
+ "MetricExpr": "STALL_FRONTEND_CPUBOUND / STALL_FRONTEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to frontend core resource constraints not related to instruction fetch latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_core_flow_bound",
+ "MetricExpr": "STALL_FRONTEND_FLOW / STALL_FRONTEND_CPUBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend as the decode unit is awaiting input from the branch prediction unit.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_core_flush_bound",
+ "MetricExpr": "STALL_FRONTEND_FLUSH / STALL_FRONTEND_CPUBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend as the processor is recovering from a pipeline flush caused by bad speculation or other machine resteers.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_mem_bound",
+ "MetricExpr": "STALL_FRONTEND_MEMBOUND / STALL_FRONTEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to frontend core resource constraints related to the instruction fetch latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_mem_cache_bound",
+ "MetricExpr": "(STALL_FRONTEND_L1I + STALL_FRONTEND_MEM) / STALL_FRONTEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to instruction fetch latency issues caused by instruction cache misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_mem_tlb_bound",
+ "MetricExpr": "STALL_FRONTEND_TLB / STALL_FRONTEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to instruction fetch latency issues caused by instruction TLB misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_stalled_cycles",
+ "MetricExpr": "STALL_FRONTEND / CPU_CYCLES * 100",
+ "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the frontend unit of the processor.",
+ "MetricGroup": "Cycle_Accounting",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "integer_dp_percentage",
+ "MetricExpr": "(DP_SPEC - DSB_SPEC) / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalar integer operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "ipc",
+ "MetricExpr": "INST_RETIRED / CPU_CYCLES",
+ "BriefDescription": "This metric measures the number of instructions retired per cycle.",
+ "MetricGroup": "General",
+ "ScaleUnit": "1per cycle"
+ },
+ {
+ "MetricName": "itlb_mpki",
+ "MetricExpr": "ITLB_WALK / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of instruction TLB Walks per thousand instructions executed.",
+ "MetricGroup": "MPKI;ITLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "itlb_walk_ratio",
+ "MetricExpr": "ITLB_WALK / L1I_TLB",
+ "BriefDescription": "This metric measures the ratio of instruction TLB Walks to the total number of instruction TLB accesses. This gives an indication of the effectiveness of the instruction TLB accesses.",
+ "MetricGroup": "Miss_Ratio;ITLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l1d_cache_miss_ratio",
+ "MetricExpr": "L1D_CACHE_REFILL / L1D_CACHE",
+ "BriefDescription": "This metric measures the ratio of level 1 data cache accesses missed to the total number of level 1 data cache accesses. This gives an indication of the effectiveness of the level 1 data cache.",
+ "MetricGroup": "Miss_Ratio;L1D_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "l1d_cache_mpki",
+ "MetricExpr": "L1D_CACHE_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 data cache accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;L1D_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l1d_tlb_miss_ratio",
+ "MetricExpr": "L1D_TLB_REFILL / L1D_TLB",
+ "BriefDescription": "This metric measures the ratio of level 1 data TLB accesses missed to the total number of level 1 data TLB accesses. This gives an indication of the effectiveness of the level 1 data TLB.",
+ "MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l1d_tlb_mpki",
+ "MetricExpr": "L1D_TLB_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 data TLB accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;DTLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l1i_cache_miss_ratio",
+ "MetricExpr": "L1I_CACHE_REFILL / L1I_CACHE",
+ "BriefDescription": "This metric measures the ratio of level 1 instruction cache accesses missed to the total number of level 1 instruction cache accesses. This gives an indication of the effectiveness of the level 1 instruction cache.",
+ "MetricGroup": "Miss_Ratio;L1I_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "l1i_cache_mpki",
+ "MetricExpr": "L1I_CACHE_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 instruction cache accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;L1I_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l1i_tlb_miss_ratio",
+ "MetricExpr": "L1I_TLB_REFILL / L1I_TLB",
+ "BriefDescription": "This metric measures the ratio of level 1 instruction TLB accesses missed to the total number of level 1 instruction TLB accesses. This gives an indication of the effectiveness of the level 1 instruction TLB.",
+ "MetricGroup": "Miss_Ratio;ITLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l1i_tlb_mpki",
+ "MetricExpr": "L1I_TLB_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;ITLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l2_cache_miss_ratio",
+ "MetricExpr": "L2D_CACHE_REFILL / L2D_CACHE",
+ "BriefDescription": "This metric measures the ratio of level 2 cache accesses missed to the total number of level 2 cache accesses. This gives an indication of the effectiveness of the level 2 cache, which is a unified cache that stores both data and instruction. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.",
+ "MetricGroup": "Miss_Ratio;L2_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "l2_cache_mpki",
+ "MetricExpr": "L2D_CACHE_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 2 unified cache accesses missed per thousand instructions executed. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.",
+ "MetricGroup": "MPKI;L2_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l2_tlb_miss_ratio",
+ "MetricExpr": "L2D_TLB_REFILL / L2D_TLB",
+ "BriefDescription": "This metric measures the ratio of level 2 unified TLB accesses missed to the total number of level 2 unified TLB accesses. This gives an indication of the effectiveness of the level 2 TLB.",
+ "MetricGroup": "Miss_Ratio;ITLB_Effectiveness;DTLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l2_tlb_mpki",
+ "MetricExpr": "L2D_TLB_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 2 unified TLB accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;ITLB_Effectiveness;DTLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "ll_cache_read_hit_ratio",
+ "MetricExpr": "(LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD",
+ "BriefDescription": "This metric measures the ratio of last level cache read accesses hit in the cache to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.",
+ "MetricGroup": "LL_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "ll_cache_read_miss_ratio",
+ "MetricExpr": "LL_CACHE_MISS_RD / LL_CACHE_RD",
+ "BriefDescription": "This metric measures the ratio of last level cache read accesses missed to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.",
+ "MetricGroup": "Miss_Ratio;LL_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "ll_cache_read_mpki",
+ "MetricExpr": "LL_CACHE_MISS_RD / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of last level cache read accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;LL_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "load_percentage",
+ "MetricExpr": "LD_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures load operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "nonsve_fp_ops_per_cycle",
+ "MetricExpr": "FP_FIXED_OPS_SPEC / CPU_CYCLES",
+ "BriefDescription": "This metric measures floating point operations per cycle in any precision performed by an instruction that is not an SVE instruction. Operations are counted by computation and by vector lanes, fused computations such as multiply-add count as twice per vector lane for example.",
+ "MetricGroup": "FP_Arithmetic_Intensity",
+ "ScaleUnit": "1operations per cycle"
+ },
+ {
+ "ArchStdEvent": "retiring"
+ },
+ {
+ "MetricName": "scalar_fp_percentage",
+ "MetricExpr": "VFP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalar floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "simd_percentage",
+ "MetricExpr": "ASE_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures advanced SIMD operations as a percentage of total operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "store_percentage",
+ "MetricExpr": "ST_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures store operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_all_percentage",
+ "MetricExpr": "SVE_INST_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations, including loads and stores, as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_fp_ops_per_cycle",
+ "MetricExpr": "FP_SCALE_OPS_SPEC / CPU_CYCLES",
+ "BriefDescription": "This metric measures floating point operations per cycle in any precision performed by SVE instructions. Operations are counted by computation and by vector lanes, fused computations such as multiply-add count as twice per vector lane for example.",
+ "MetricGroup": "FP_Arithmetic_Intensity",
+ "ScaleUnit": "1operations per cycle"
+ },
+ {
+ "MetricName": "sve_predicate_empty_percentage",
+ "MetricExpr": "SVE_PRED_EMPTY_SPEC / SVE_PRED_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with no active predicates as a percentage of sve predicated operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_predicate_full_percentage",
+ "MetricExpr": "SVE_PRED_FULL_SPEC / SVE_PRED_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with all active predicates as a percentage of sve predicated operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_predicate_partial_percentage",
+ "MetricExpr": "SVE_PRED_PARTIAL_SPEC / SVE_PRED_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with at least one active predicates as a percentage of sve predicated operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_predicate_percentage",
+ "MetricExpr": "SVE_PRED_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with predicates as a percentage of operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ }
+]
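The MetricExpr strings above are plain arithmetic over the PMU event names defined elsewhere in this patch, and perf's metric engine evaluates them against the measured counter totals. As a rough, purely illustrative sketch of that relationship (the counter values below are invented, and the eval-based approach is not how perf itself does it), a few of the expressions could be evaluated like this in Python:

# Illustrative sketch only: evaluate a few of the MetricExpr formulas above
# against made-up counter totals. Real totals would come from the PMU via perf.
metrics = {
    "branch_misprediction_ratio": "BR_MIS_PRED_RETIRED / BR_RETIRED",
    "branch_mpki": "BR_MIS_PRED_RETIRED / INST_RETIRED * 1000",
    "ipc": "INST_RETIRED / CPU_CYCLES",
    "dtlb_walk_ratio": "DTLB_WALK / L1D_TLB",
}

counters = {  # hypothetical totals for one measurement interval
    "BR_MIS_PRED_RETIRED": 1_200_000,
    "BR_RETIRED": 95_000_000,
    "INST_RETIRED": 600_000_000,
    "CPU_CYCLES": 400_000_000,
    "DTLB_WALK": 250_000,
    "L1D_TLB": 180_000_000,
}

for name, expr in metrics.items():
    # The expressions are simple arithmetic over event names, so evaluating
    # them with the counter dictionary as the only namespace is enough here.
    value = eval(expr, {"__builtins__": {}}, counters)
    print(f"{name}: {value:.4f}")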
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/retired.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/retired.json
new file mode 100644
index 000000000000..135e5dbd8c78
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/retired.json
@@ -0,0 +1,90 @@
+[
+ {
+ "ArchStdEvent": "SW_INCR",
+ "PublicDescription": "Counts software writes to the PMSWINC_EL0 (software PMU increment) register. The PMSWINC_EL0 register is a manually updated counter for use by application software.\n\nThis event could be used to measure any user program event, such as accesses to a particular data structure (by writing to the PMSWINC_EL0 register each time the data structure is accessed).\n\nTo use the PMSWINC_EL0 register and event, developers must insert instructions that write to the PMSWINC_EL0 register into the source code.\n\nSince the SW_INCR event records writes to the PMSWINC_EL0 register, there is no need to do a read/increment/write sequence to the PMSWINC_EL0 register."
+ },
+ {
+ "ArchStdEvent": "INST_RETIRED",
+ "PublicDescription": "Counts instructions that have been architecturally executed."
+ },
+ {
+ "ArchStdEvent": "CID_WRITE_RETIRED",
+ "PublicDescription": "Counts architecturally executed writes to the CONTEXTIDR_EL1 register, which usually contain the kernel PID and can be output with hardware trace."
+ },
+ {
+ "ArchStdEvent": "PC_WRITE_RETIRED",
+ "PublicDescription": "Counts branch instructions that caused a change of Program Counter, which effectively causes a change in the control flow of the program."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_RETIRED",
+ "PublicDescription": "Counts architecturally executed direct branches."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_RETIRED",
+ "PublicDescription": "Counts architecturally executed procedure returns."
+ },
+ {
+ "ArchStdEvent": "TTBR_WRITE_RETIRED",
+ "PublicDescription": "Counts architectural writes to TTBR0/1_EL1. If virtualization host extensions are enabled (by setting the HCR_EL2.E2H bit to 1), then accesses to TTBR0/1_EL1 that are redirected to TTBR0/1_EL2, or accesses to TTBR0/1_EL12, are counted. TTBRn registers are typically updated when the kernel is swapping user-space threads or applications."
+ },
+ {
+ "ArchStdEvent": "BR_RETIRED",
+ "PublicDescription": "Counts architecturally executed branches, whether the branch is taken or not. Instructions that explicitly write to the PC are also counted. Note that exception generating instructions, exception return instructions and context synchronization instructions are not counted."
+ },
+ {
+ "ArchStdEvent": "BR_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts branches counted by BR_RETIRED which were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "OP_RETIRED",
+ "PublicDescription": "Counts micro-operations that are architecturally executed. This is a count of number of micro-operations retired from the commit queue in a single cycle."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_TAKEN_RETIRED",
+ "PublicDescription": "Counts architecturally executed direct branches that were taken."
+ },
+ {
+ "ArchStdEvent": "BR_INDNR_TAKEN_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches excluding procedure returns that were taken."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed direct branches that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed direct branches that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_IND_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches including procedure returns that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_IND_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches including procedure returns that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed procedure returns that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed procedure returns that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_INDNR_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches excluding procedure returns that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_INDNR_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches excluding procedure returns that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_PRED_RETIRED",
+ "PublicDescription": "Counts branch instructions counted by BR_RETIRED which were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_IND_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches including procedure returns."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/spe.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/spe.json
new file mode 100644
index 000000000000..ca0217fa4681
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/spe.json
@@ -0,0 +1,42 @@
+[
+ {
+ "ArchStdEvent": "SAMPLE_POP",
+ "PublicDescription": "Counts statistical profiling sample population, the count of all operations that could be sampled but may or may not be chosen for sampling."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED",
+ "PublicDescription": "Counts statistical profiling samples taken for sampling."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FILTRATE",
+ "PublicDescription": "Counts statistical profiling samples taken which are not removed by filtering."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_COLLISION",
+ "PublicDescription": "Counts statistical profiling samples that have collided with a previous sample and so therefore not taken."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_BR",
+ "PublicDescription": "Counts statistical profiling samples taken which are branches."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_LD",
+ "PublicDescription": "Counts statistical profiling samples taken which are loads or load atomic operations."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_ST",
+ "PublicDescription": "Counts statistical profiling samples taken which are stores or store atomic operations."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_OP",
+ "PublicDescription": "Counts statistical profiling samples taken which are matching any operation type filters supported."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_EVENT",
+ "PublicDescription": "Counts statistical profiling samples taken which are matching event packet filter constraints."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_LAT",
+ "PublicDescription": "Counts statistical profiling samples taken which are exceeding minimum latency set by operation latency filter constraints."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/spec_operation.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/spec_operation.json
new file mode 100644
index 000000000000..f91eb18d683c
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/spec_operation.json
@@ -0,0 +1,90 @@
+[
+ {
+ "ArchStdEvent": "BR_MIS_PRED",
+ "PublicDescription": "Counts branches which are speculatively executed and mispredicted."
+ },
+ {
+ "ArchStdEvent": "BR_PRED",
+ "PublicDescription": "Counts all speculatively executed branches."
+ },
+ {
+ "ArchStdEvent": "INST_SPEC",
+ "PublicDescription": "Counts operations that have been speculatively executed."
+ },
+ {
+ "ArchStdEvent": "OP_SPEC",
+ "PublicDescription": "Counts micro-operations speculatively executed. This is the count of the number of micro-operations dispatched in a cycle."
+ },
+ {
+ "ArchStdEvent": "STREX_FAIL_SPEC",
+ "PublicDescription": "Counts store-exclusive operations that have been speculatively executed and have not successfully completed the store operation."
+ },
+ {
+ "ArchStdEvent": "STREX_SPEC",
+ "PublicDescription": "Counts store-exclusive operations that have been speculatively executed."
+ },
+ {
+ "ArchStdEvent": "LD_SPEC",
+ "PublicDescription": "Counts speculatively executed load operations including Single Instruction Multiple Data (SIMD) load operations."
+ },
+ {
+ "ArchStdEvent": "ST_SPEC",
+ "PublicDescription": "Counts speculatively executed store operations including Single Instruction Multiple Data (SIMD) store operations."
+ },
+ {
+ "ArchStdEvent": "DP_SPEC",
+ "PublicDescription": "Counts speculatively executed logical or arithmetic instructions such as MOV/MVN operations."
+ },
+ {
+ "ArchStdEvent": "ASE_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD operations excluding load, store and move micro-operations that move data to or from SIMD (vector) registers."
+ },
+ {
+ "ArchStdEvent": "VFP_SPEC",
+ "PublicDescription": "Counts speculatively executed floating point operations. This event does not count operations that move data to or from floating point (vector) registers."
+ },
+ {
+ "ArchStdEvent": "PC_WRITE_SPEC",
+ "PublicDescription": "Counts speculatively executed operations which cause software changes of the PC. Those operations include all taken branch operations."
+ },
+ {
+ "ArchStdEvent": "CRYPTO_SPEC",
+ "PublicDescription": "Counts speculatively executed cryptographic operations except for PMULL and VMULL operations."
+ },
+ {
+ "ArchStdEvent": "ISB_SPEC",
+ "PublicDescription": "Counts ISB operations that are executed."
+ },
+ {
+ "ArchStdEvent": "DSB_SPEC",
+ "PublicDescription": "Counts DSB operations that are speculatively issued to Load/Store unit in the CPU."
+ },
+ {
+ "ArchStdEvent": "DMB_SPEC",
+ "PublicDescription": "Counts DMB operations that are speculatively issued to the Load/Store unit in the CPU. This event does not count implied barriers from load acquire/store release operations."
+ },
+ {
+ "ArchStdEvent": "RC_LD_SPEC",
+ "PublicDescription": "Counts any load acquire operations that are speculatively executed. For example: LDAR, LDARH, LDARB"
+ },
+ {
+ "ArchStdEvent": "RC_ST_SPEC",
+ "PublicDescription": "Counts any store release operations that are speculatively executed. For example: STLR, STLRH, STLRB"
+ },
+ {
+ "ArchStdEvent": "ASE_INST_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD operations."
+ },
+ {
+ "ArchStdEvent": "CAS_NEAR_PASS",
+ "PublicDescription": "Counts compare and swap instructions that executed locally to the PE and updated the location accessed."
+ },
+ {
+ "ArchStdEvent": "CAS_NEAR_SPEC",
+ "PublicDescription": "Counts compare and swap instructions that executed locally to the PE."
+ },
+ {
+ "ArchStdEvent": "CAS_FAR_SPEC",
+ "PublicDescription": "Counts compare and swap instructions that did not execute locally to the PE."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/stall.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/stall.json
new file mode 100644
index 000000000000..51cda27880b9
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/stall.json
@@ -0,0 +1,86 @@
+[
+ {
+ "ArchStdEvent": "STALL_FRONTEND",
+ "PublicDescription": "Counts cycles when frontend could not send any micro-operations to the rename stage because of frontend resource stalls caused by fetch memory latency or branch prediction flow stalls. STALL_FRONTEND_SLOTS counts SLOTS during the cycle when this event counts."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND",
+ "PublicDescription": "Counts cycles whenever the rename unit is unable to send any micro-operations to the backend of the pipeline because of backend resource constraints. Backend resource constraints can include issue stage fullness, execution stage fullness, or other internal pipeline resource fullness. All the backend slots were empty during the cycle when this event counts."
+ },
+ {
+ "ArchStdEvent": "STALL",
+ "PublicDescription": "Counts cycles when no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). This event is the sum of STALL_FRONTEND and STALL_BACKEND"
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT_BACKEND",
+ "PublicDescription": "Counts slots per cycle in which no operations are sent from the rename unit to the backend due to backend resource constraints. STALL_BACKEND counts during the cycle when STALL_SLOT_BACKEND counts at least 1."
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT_FRONTEND",
+ "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend due to frontend resource constraints."
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT",
+ "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). STALL_SLOT is the sum of STALL_SLOT_FRONTEND and STALL_SLOT_BACKEND."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_MEM",
+ "PublicDescription": "Counts cycles when the backend is stalled because there is a pending demand load request in progress in the last level core cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_MEMBOUND",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage due to resource constraints in the memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_L1I",
+ "PublicDescription": "Counts cycles when the frontend is stalled because there is an instruction fetch request pending in the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_MEM",
+ "PublicDescription": "Counts cycles when the frontend is stalled because there is an instruction fetch request pending in the last level core cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_TLB",
+ "PublicDescription": "Counts when the frontend is stalled on any TLB misses being handled. This event also counts the TLB accesses made by hardware prefetches."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_CPUBOUND",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage due to resource constraints in the CPU resources excluding memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_FLOW",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage due to resource constraints in the branch prediction unit."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_FLUSH",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage as the frontend is recovering from a machine flush or resteer. Example scenarios that cause a flush include branch mispredictions, taken exceptions, micro-architectural flush etc."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_MEMBOUND",
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations due to resource constraints in the memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_L1D",
+ "PublicDescription": "Counts cycles when the backend is stalled because there is a pending demand load request in progress in the level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_TLB",
+ "PublicDescription": "Counts cycles when the backend is stalled on any demand TLB misses being handled."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_ST",
+ "PublicDescription": "Counts cycles when the backend is stalled and there is a store that has not reached the pre-commit stage."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_CPUBOUND",
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations due to any resource constraints in the CPU excluding memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_BUSY",
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations because the issue queues are full to take any operations for execution."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_RENAME",
+ "PublicDescription": "Counts cycles when backend is stalled even when operations are available from the frontend but at least one is not ready to be sent to the backend because no rename register is available."
+ }
+]
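The frontend topdown metrics defined earlier in this patch (frontend_stalled_cycles, frontend_mem_bound, frontend_core_bound and related) are simple ratios over the STALL_FRONTEND* events listed above. A minimal worked example, using invented counter values, might look like the following Python sketch:

# Illustrative sketch only: apply the frontend metric formulas from the
# metrics file above to made-up STALL_FRONTEND* counter totals.
cpu_cycles = 1_000_000
stall_frontend = 300_000            # STALL_FRONTEND
stall_frontend_membound = 180_000   # STALL_FRONTEND_MEMBOUND
stall_frontend_cpubound = 120_000   # STALL_FRONTEND_CPUBOUND

# frontend_stalled_cycles = STALL_FRONTEND / CPU_CYCLES * 100
frontend_stalled_cycles = stall_frontend / cpu_cycles * 100
# frontend_mem_bound = STALL_FRONTEND_MEMBOUND / STALL_FRONTEND * 100
frontend_mem_bound = stall_frontend_membound / stall_frontend * 100
# frontend_core_bound = STALL_FRONTEND_CPUBOUND / STALL_FRONTEND * 100
frontend_core_bound = stall_frontend_cpubound / stall_frontend * 100

print(f"frontend_stalled_cycles: {frontend_stalled_cycles:.1f}% of cycles")
print(f"frontend_mem_bound: {frontend_mem_bound:.1f}% of frontend stall cycles")
print(f"frontend_core_bound: {frontend_core_bound:.1f}% of frontend stall cycles")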
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/sve.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/sve.json
new file mode 100644
index 000000000000..51dab48cb2ba
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/sve.json
@@ -0,0 +1,50 @@
+[
+ {
+ "ArchStdEvent": "SVE_INST_SPEC",
+ "PublicDescription": "Counts speculatively executed operations that are SVE operations."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_EMPTY_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with no active predicate elements."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_FULL_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with all predicate elements active."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_PARTIAL_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with at least one but not all active predicate elements."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_NOT_FULL_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with at least one non active predicate elements."
+ },
+ {
+ "ArchStdEvent": "SVE_LDFF_SPEC",
+ "PublicDescription": "Counts speculatively executed SVE first fault or non-fault load operations."
+ },
+ {
+ "ArchStdEvent": "SVE_LDFF_FAULT_SPEC",
+ "PublicDescription": "Counts speculatively executed SVE first fault or non-fault load operations that clear at least one bit in the FFR."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT8_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type an 8-bit integer."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT16_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 16-bit integer."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT32_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 32-bit integer."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT64_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 64-bit integer."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/tlb.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/tlb.json
new file mode 100644
index 000000000000..c7aa89c2f19f
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/tlb.json
@@ -0,0 +1,74 @@
+[
+ {
+ "ArchStdEvent": "L1I_TLB_REFILL",
+ "PublicDescription": "Counts level 1 instruction TLB refills from any Instruction fetch. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_REFILL",
+ "PublicDescription": "Counts level 1 data TLB accesses that resulted in TLB refills. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event counts for refills caused by preload instructions or hardware prefetch accesses. This event counts regardless of whether the miss hits in L2 or results in a translation table walk. This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB. This event will not count on an access from an AT(address translation) instruction."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB",
+ "PublicDescription": "Counts level 1 data TLB accesses caused by any memory load or store operation. Note that load or store instructions can be broken up into multiple memory operations. This event does not count TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1I_TLB",
+ "PublicDescription": "Counts level 1 instruction TLB accesses, whether the access hits or misses in the TLB. This event counts both demand accesses and prefetch or preload generated accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB_REFILL",
+ "PublicDescription": "Counts level 2 TLB refills caused by memory operations from both data and instruction fetch, except for those caused by TLB maintenance operations and hardware prefetches."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB",
+ "PublicDescription": "Counts level 2 TLB accesses except those caused by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK",
+ "PublicDescription": "Counts number of demand data translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK",
+ "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_PERCYC",
+ "PublicDescription": "Counts the number of data translation table walks in progress per cycle."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_PERCYC",
+ "PublicDescription": "Counts the number of instruction translation table walks in progress per cycle."
+ },
+ {
+ "ArchStdEvent": "DTLB_HWUPD",
+ "PublicDescription": "Counts number of memory accesses triggered by a data translation table walk and performing an update of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that this event counts accesses triggered by software preloads, but not accesses triggered by hardware prefetchers."
+ },
+ {
+ "ArchStdEvent": "ITLB_HWUPD",
+ "PublicDescription": "Counts number of memory accesses triggered by an instruction translation table walk and performing an update of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD."
+ },
+ {
+ "ArchStdEvent": "DTLB_STEP",
+ "PublicDescription": "Counts number of memory accesses triggered by a demand data translation table walk and performing a read of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that this event counts accesses triggered by software preloads, but not accesses triggered by hardware prefetchers."
+ },
+ {
+ "ArchStdEvent": "ITLB_STEP",
+ "PublicDescription": "Counts number of memory accesses triggered by an instruction translation table walk and performing a read of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_LARGE",
+ "PublicDescription": "Counts number of demand data translation table walks caused by a miss in the L2 TLB and yielding a large page. The set of large pages is defined as all pages with a final size higher than or equal to 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. If DTLB_WALK_BLOCK is implemented, then it is an alias for this event in this family. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_LARGE",
+ "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and yielding a large page. The set of large pages is defined as all pages with a final size higher than or equal to 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. In this family, this is equal to ITLB_WALK_BLOCK event. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_SMALL",
+ "PublicDescription": "Counts number of data translation table walks caused by a miss in the L2 TLB and yielding a small page. The set of small pages is defined as all pages with a final size lower than 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. If DTLB_WALK_PAGE event is implemented, then it is an alias for this event in this family. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_SMALL",
+ "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and yielding a small page. The set of small pages is defined as all pages with a final size lower than 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. In this family, this is equal to ITLB_WALK_PAGE event. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/trace.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/trace.json
new file mode 100644
index 000000000000..a09043486cd9
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-n3/trace.json
@@ -0,0 +1,42 @@
+[
+ {
+ "ArchStdEvent": "TRB_WRAP",
+ "PublicDescription": "This event is generated each time the current write pointer is wrapped to the base pointer."
+ },
+ {
+ "ArchStdEvent": "TRB_TRIG",
+ "PublicDescription": "This event is generated when a Trace Buffer Extension Trigger Event occurs."
+ },
+ {
+ "ArchStdEvent": "TRCEXTOUT0",
+ "PublicDescription": "This event is generated each time an event is signaled by ETE external event 0."
+ },
+ {
+ "ArchStdEvent": "TRCEXTOUT1",
+ "PublicDescription": "This event is generated each time an event is signaled by ETE external event 1."
+ },
+ {
+ "ArchStdEvent": "TRCEXTOUT2",
+ "PublicDescription": "This event is generated each time an event is signaled by ETE external event 2."
+ },
+ {
+ "ArchStdEvent": "TRCEXTOUT3",
+ "PublicDescription": "This event is generated each time an event is signaled by ETE external event 3."
+ },
+ {
+ "ArchStdEvent": "CTI_TRIGOUT4",
+ "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 4."
+ },
+ {
+ "ArchStdEvent": "CTI_TRIGOUT5",
+ "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 5."
+ },
+ {
+ "ArchStdEvent": "CTI_TRIGOUT6",
+ "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 6."
+ },
+ {
+ "ArchStdEvent": "CTI_TRIGOUT7",
+ "PublicDescription": "This event is generated each time an event is signaled on CTI output trigger 7."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/brbe.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/brbe.json
new file mode 100644
index 000000000000..9fdf5b0453a0
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/brbe.json
@@ -0,0 +1,6 @@
+[
+ {
+ "ArchStdEvent": "BRB_FILTRATE",
+ "PublicDescription": "Counts branch records captured which are not removed by filtering."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/bus.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/bus.json
new file mode 100644
index 000000000000..2e11a8c4a484
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/bus.json
@@ -0,0 +1,18 @@
+[
+ {
+ "ArchStdEvent": "BUS_ACCESS",
+ "PublicDescription": "Counts memory transactions issued by the CPU to the external bus, including snoop requests and snoop responses. Each beat of data is counted individually."
+ },
+ {
+ "ArchStdEvent": "BUS_CYCLES",
+ "PublicDescription": "Counts bus cycles in the CPU. Bus cycles represent a clock cycle in which a transaction could be sent or received on the interface from the CPU to the external bus. Since that interface is driven at the same clock speed as the CPU, this event is a duplicate of CPU_CYCLES."
+ },
+ {
+ "ArchStdEvent": "BUS_ACCESS_RD",
+ "PublicDescription": "Counts memory read transactions seen on the external bus. Each beat of data is counted individually."
+ },
+ {
+ "ArchStdEvent": "BUS_ACCESS_WR",
+ "PublicDescription": "Counts memory write transactions seen on the external bus. Each beat of data is counted individually."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/exception.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/exception.json
new file mode 100644
index 000000000000..7126fbf292e0
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/exception.json
@@ -0,0 +1,62 @@
+[
+ {
+ "ArchStdEvent": "EXC_TAKEN",
+ "PublicDescription": "Counts any taken architecturally visible exceptions such as IRQ, FIQ, SError, and other synchronous exceptions. Exceptions are counted whether or not they are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_RETURN",
+ "PublicDescription": "Counts any architecturally executed exception return instructions. For example: AArch64: ERET"
+ },
+ {
+ "ArchStdEvent": "EXC_UNDEF",
+ "PublicDescription": "Counts the number of synchronous exceptions which are taken locally that are due to attempting to execute an instruction that is UNDEFINED. Attempting to execute instruction bit patterns that have not been allocated. Attempting to execute instructions when they are disabled. Attempting to execute instructions at an inappropriate Exception level. Attempting to execute an instruction when the value of PSTATE.IL is 1."
+ },
+ {
+ "ArchStdEvent": "EXC_SVC",
+ "PublicDescription": "Counts SVC exceptions taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_PABORT",
+ "PublicDescription": "Counts synchronous exceptions that are taken locally and caused by Instruction Aborts."
+ },
+ {
+ "ArchStdEvent": "EXC_DABORT",
+ "PublicDescription": "Counts exceptions that are taken locally and are caused by data aborts or SErrors. Conditions that could cause those exceptions are attempting to read or write memory where the MMU generates a fault, attempting to read or write memory with a misaligned address, interrupts from the nSEI inputs and internally generated SErrors."
+ },
+ {
+ "ArchStdEvent": "EXC_IRQ",
+ "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_FIQ",
+ "PublicDescription": "Counts FIQ exceptions including the virtual FIQs that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_SMC",
+ "PublicDescription": "Counts SMC exceptions take to EL3."
+ },
+ {
+ "ArchStdEvent": "EXC_HVC",
+ "PublicDescription": "Counts HVC exceptions taken to EL2."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_PABORT",
+ "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Instruction Aborts. For example, attempting to execute an instruction with a misaligned PC."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_DABORT",
+ "PublicDescription": "Counts exceptions which are traps not taken locally and are caused by Data Aborts or SError interrupts. Conditions that could cause those exceptions are:\n\n1. Attempting to read or write memory where the MMU generates a fault,\n2. Attempting to read or write memory with a misaligned address,\n3. Interrupts from the SEI input.\n4. internally generated SErrors."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_OTHER",
+ "PublicDescription": "Counts the number of synchronous trap exceptions which are not taken locally and are not SVC, SMC, HVC, data aborts, Instruction Aborts, or interrupts."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_IRQ",
+ "PublicDescription": "Counts IRQ exceptions including the virtual IRQs that are not taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_TRAP_FIQ",
+ "PublicDescription": "Counts FIQs which are not taken locally but taken from EL0, EL1,\n or EL2 to EL3 (which would be the normal behavior for FIQs when not executing\n in EL3)."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/fp_operation.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/fp_operation.json
new file mode 100644
index 000000000000..cec3435ac766
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/fp_operation.json
@@ -0,0 +1,22 @@
+[
+ {
+ "ArchStdEvent": "FP_HP_SPEC",
+ "PublicDescription": "Counts speculatively executed half precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_SP_SPEC",
+ "PublicDescription": "Counts speculatively executed single precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_DP_SPEC",
+ "PublicDescription": "Counts speculatively executed double precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_SCALE_OPS_SPEC",
+ "PublicDescription": "Counts speculatively executed scalable single precision floating point operations."
+ },
+ {
+ "ArchStdEvent": "FP_FIXED_OPS_SPEC",
+ "PublicDescription": "Counts speculatively executed non-scalable single precision floating point operations."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/general.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/general.json
new file mode 100644
index 000000000000..4d816015b8c2
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/general.json
@@ -0,0 +1,40 @@
+[
+ {
+ "ArchStdEvent": "CPU_CYCLES",
+ "PublicDescription": "Counts CPU clock cycles (not timer cycles). The clock measured by this event is defined as the physical clock driving the CPU logic."
+ },
+ {
+ "PublicDescription": "Count of RXDAT or RXRSP responses received with indication completer fullness indicator set to 0",
+ "EventCode": "0x198",
+ "EventName": "L2_CHI_CBUSY0",
+ "BriefDescription": "Number of RXDAT or RXRSP response received with CBusy of 0"
+ },
+ {
+ "PublicDescription": "Count of RXDAT or RXRSP responses received with indication completer fullness indicator set to 1",
+ "EventCode": "0x199",
+ "EventName": "L2_CHI_CBUSY1",
+ "BriefDescription": "Number of RXDAT or RXRSP response received with CBusy of 1"
+ },
+ {
+ "PublicDescription": "Count of RXDAT or RXRSP responses received with indication completer fullness indicator set to 2",
+ "EventCode": "0x19A",
+ "EventName": "L2_CHI_CBUSY2",
+ "BriefDescription": "Number of RXDAT or RXRSP response received with CBusy of 2"
+ },
+ {
+ "PublicDescription": "Count of RXDAT or RXRSP responses received with indication completer fullness indicator set to 3",
+ "EventCode": "0x19B",
+ "EventName": "L2_CHI_CBUSY3",
+ "BriefDescription": "Number of RXDAT or RXRSP response received with CBusy of 3"
+ },
+ {
+ "PublicDescription": "Count of RXDAT or RXRSP responses received with indication completer indicating multiple cores actively making requests",
+ "EventCode": "0x19C",
+ "EventName": "L2_CHI_CBUSY_MT",
+ "BriefDescription": "Number of RXDAT or RXRSP response received with CBusy Multi-threaded set"
+ },
+ {
+ "ArchStdEvent": "CNT_CYCLES",
+ "PublicDescription": "Increments at a constant frequency equal to the rate of increment of the System Counter, CNTPCT_EL0."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l1d_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l1d_cache.json
new file mode 100644
index 000000000000..891e07631c6e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l1d_cache.json
@@ -0,0 +1,74 @@
+[
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL",
+ "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load or store operations that missed in the level 1 data cache. This event only counts one event per cache line."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE",
+ "PublicDescription": "Counts level 1 data cache accesses from any load/store operations. Atomic operations that resolve in the CPUs caches (near atomic operations) counts as both a write access and read access. Each access to a cache line is counted including the multiple accesses caused by single instructions such as LDM or STM. Each access to other level 1 data or unified memory structures, for example refill buffers, write buffers, and write-back buffers, are also counted."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WB",
+ "PublicDescription": "Counts write-backs of dirty data from the L1 data cache to the L2 cache. This occurs when either a dirty cache line is evicted from L1 data cache and allocated in the L2 cache or dirty data is written to the L2 and possibly to the next level of cache. This event counts both victim cache line evictions and cache write-backs from snoops or cache maintenance operations. The following cache operations are not counted:\n\n1. Invalidations which do not result in data being transferred out of the L1 (such as evictions of clean data),\n2. Full line writes which write to L2 without writing L1, such as write streaming mode."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_LMISS_RD",
+ "PublicDescription": "Counts cache line refills into the level 1 data cache from any memory read operations, that incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_RD",
+ "PublicDescription": "Counts level 1 data cache accesses from any load operation. Atomic load operations that resolve in the CPUs caches counts as both a write access and read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WR",
+ "PublicDescription": "Counts level 1 data cache accesses generated by store operations. This event also counts accesses caused by a DC ZVA (data cache zero, specified by virtual address) instruction. Near atomic operations that resolve in the CPUs caches count as a write access and read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_RD",
+ "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed load instructions where the memory read operation misses in the level 1 data cache. This event only counts one event per cache line."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_WR",
+ "PublicDescription": "Counts level 1 data cache refills caused by speculatively executed store instructions where the memory write operation misses in the level 1 data cache. This event only counts one event per cache line."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_INNER",
+ "PublicDescription": "Counts level 1 data cache refills where the cache line data came from caches inside the immediate cluster of the core."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_OUTER",
+ "PublicDescription": "Counts level 1 data cache refills for which the cache line data came from outside the immediate cluster of the core, like an SLC in the system interconnect or DRAM."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WB_VICTIM",
+ "PublicDescription": "Counts dirty cache line evictions from the level 1 data cache caused by a new cache line allocation. This event does not count evictions caused by cache maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WB_CLEAN",
+ "PublicDescription": "Counts write-backs from the level 1 data cache that are a result of a coherency operation made by another CPU. Event count includes cache maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_INVAL",
+ "PublicDescription": "Counts each explicit invalidation of a cache line in the level 1 data cache caused by:\n\n- Cache Maintenance Operations (CMO) that operate by a virtual address.\n- Broadcast cache coherency operations from another CPU in the system.\n\nThis event does not count for the following conditions:\n\n1. A cache refill invalidates a cache line.\n2. A CMO which is executed on that CPU and invalidates a cache line specified by set/way.\n\nNote that CMOs that operate by set/way cannot be broadcast from one CPU to another."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_RW",
+ "PublicDescription": "Counts level 1 data demand cache accesses from any load or store operation. Near atomic operations that resolve in the CPUs caches counts as both a write access and read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_PRFM",
+ "PublicDescription": "Counts level 1 data cache accesses from software preload or prefetch instructions."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_MISS",
+ "PublicDescription": "Counts cache line misses in the level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_PRFM",
+ "PublicDescription": "Counts level 1 data cache refills where the cache line access was generated by software preload or prefetch instructions."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_HWPRF",
+ "PublicDescription": "Counts level 1 data cache accesses from any load/store operations generated by the hardware prefetcher."
+ }
+]
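As a side note for readers of this patch (not part of the change itself), the L1D events above are normally consumed as derived ratios. A minimal Python sketch, using made-up counter values, showing the miss ratio and MPKI calculations that the metrics.json file later in this patch also encodes:

    # Minimal sketch, not part of the patch: deriving the usual L1D ratios
    # from the events defined above. Counter values are hypothetical.
    counters = {
        "L1D_CACHE": 1_000_000,      # total level 1 data cache accesses
        "L1D_CACHE_REFILL": 25_000,  # refills caused by misses
        "INST_RETIRED": 4_000_000,   # architecturally executed instructions
    }

    l1d_cache_miss_ratio = counters["L1D_CACHE_REFILL"] / counters["L1D_CACHE"]
    l1d_cache_mpki = counters["L1D_CACHE_REFILL"] / counters["INST_RETIRED"] * 1000

    print(f"l1d_cache_miss_ratio = {l1d_cache_miss_ratio:.2%}")
    print(f"l1d_cache_mpki       = {l1d_cache_mpki:.2f}")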
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l1i_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l1i_cache.json
new file mode 100644
index 000000000000..fc511c5d2021
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l1i_cache.json
@@ -0,0 +1,62 @@
+[
+ {
+ "ArchStdEvent": "L1I_CACHE_REFILL",
+ "PublicDescription": "Counts cache line refills in the level 1 instruction cache caused by a missed instruction fetch. Instruction fetches may include accessing multiple instructions, but the single cache line allocation is counted once."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE",
+ "PublicDescription": "Counts instruction fetches which access the level 1 instruction cache. Instruction cache accesses caused by cache maintenance operations are not counted."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_LMISS",
+ "PublicDescription": "Counts cache line refills into the level 1 instruction cache, that incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_RD",
+ "PublicDescription": "Counts demand instruction fetches which access the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_PRFM",
+ "PublicDescription": "Counts instruction fetches generated by software preload or prefetch instructions which access the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HWPRF",
+ "PublicDescription": "Counts instruction fetches which access the level 1 instruction cache generated by the hardware prefetcher."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_REFILL_PRFM",
+ "PublicDescription": "Counts cache line refills in the level 1 instruction cache caused by a missed instruction fetch generated by software preload or prefetch instructions. Instruction fetches may include accessing multiple instructions, but the single cache line allocation is counted once."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HIT_RD",
+ "PublicDescription": "Counts demand instruction fetches that access the level 1 instruction cache and hit in the L1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HIT_RD_FPRFM",
+ "PublicDescription": "Counts demand instruction fetches that access the level 1 instruction cache that hit in the L1 instruction cache and the line was requested by a software prefetch."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HIT_RD_FHWPRF",
+ "PublicDescription": "Counts demand instruction fetches generated by hardware prefetch that access the level 1 instruction cache and hit in the L1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HIT",
+ "PublicDescription": "Counts instruction fetches that access the level 1 instruction cache and hit in the level 1 instruction cache. Instruction cache accesses caused by cache maintenance operations are not counted."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HIT_PRFM",
+ "PublicDescription": "Counts instruction fetches generated by software preload or prefetch instructions that access the level 1 instruction cache and hit in the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_LFB_HIT_RD",
+ "PublicDescription": "Counts demand instruction fetches that access the level 1 instruction cache and hit in a line that is in the process of being loaded into the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_LFB_HIT_RD_FPRFM",
+ "PublicDescription": "Counts demand instruction fetches generated by software prefetch instructions that access the level 1 instruction cache and hit in a line that is in the process of being loaded into the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_LFB_HIT_RD_FHWPRF",
+ "PublicDescription": "Counts demand instruction fetches generated by hardware prefetch that access the level 1 instruction cache and hit in a line that is in the process of being loaded into the level 1 instruction cache."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l2_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l2_cache.json
new file mode 100644
index 000000000000..b38d71fd1136
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/l2_cache.json
@@ -0,0 +1,78 @@
+[
+ {
+ "ArchStdEvent": "L2D_CACHE",
+ "PublicDescription": "Counts accesses to the level 2 cache due to data accesses. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the first level data cache or translation resolutions due to accesses. This event also counts write back of dirty data from level 1 data cache to the L2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL",
+ "PublicDescription": "Counts cache line refills into the level 2 cache. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB",
+ "PublicDescription": "Counts write-backs of data from the L2 cache to outside the CPU. This includes snoops to the L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line invalidations which do not write data outside the CPU and snoops which return data from an L1 cache are not counted. Data would not be written outside the cache when invalidating a clean cache line."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_RD",
+ "PublicDescription": "Counts level 2 data cache accesses due to memory read operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WR",
+ "PublicDescription": "Counts level 2 cache accesses due to memory write operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_RD",
+ "PublicDescription": "Counts refills for memory accesses due to memory read operation counted by L2D_CACHE_RD. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_WR",
+ "PublicDescription": "Counts refills for memory accesses due to memory write operation counted by L2D_CACHE_WR. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB_VICTIM",
+ "PublicDescription": "Counts evictions from the level 2 cache because of a line being allocated into the L2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB_CLEAN",
+ "PublicDescription": "Counts write-backs from the level 2 cache that are a result of either:\n\n1. Cache maintenance operations,\n\n2. Snoop responses or,\n\n3. Direct cache transfers to another CPU due to a forwarding snoop request."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_INVAL",
+ "PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by cache maintenance operations that operate by a virtual address, or by external coherency operations. This event does not count if either:\n\n1. A cache refill invalidates a cache line or,\n2. A Cache Maintenance Operation (CMO), which invalidates a cache line specified by set/way, is executed on that CPU.\n\nCMOs that operate by set/way cannot be broadcast from one CPU to another."
+ },
+ {
+ "PublicDescription": "Counts level 2 cache accesses due to level 1 data cache hardware prefetcher.",
+ "EventCode": "0x1B8",
+ "EventName": "L2D_CACHE_L1HWPRF",
+ "BriefDescription": "L2D cache access due to L1 hardware prefetch"
+ },
+ {
+ "PublicDescription": "Counts level 2 cache refills where the cache line is requested by a level 1 data cache hardware prefetcher.",
+ "EventCode": "0x1B9",
+ "EventName": "L2D_CACHE_REFILL_L1HWPRF",
+ "BriefDescription": "L2D cache refill due to L1 hardware prefetch"
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_LMISS_RD",
+ "PublicDescription": "Counts cache line refills into the level 2 unified cache from any memory read operations that incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_RW",
+ "PublicDescription": "Counts level 2 cache demand accesses from any load/store operations. Level 2 cache is a unified cache for data and instruction accesses, accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_PRFM",
+ "PublicDescription": "Counts level 2 data cache accesses generated by software preload or prefetch instructions."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_MISS",
+ "PublicDescription": "Counts cache line misses in the level 2 cache. Level 2 cache is a unified cache for data and instruction accesses. Accesses are for misses in the level 1 data cache or translation resolutions due to accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_PRFM",
+ "PublicDescription": "Counts refills due to accesses generated as a result of software preload or prefetch instructions as counted by L2D_CACHE_PRFM."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_HWPRF",
+ "PublicDescription": "Counts level 2 data cache accesses generated by L2D hardware prefetchers."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/ll_cache.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/ll_cache.json
new file mode 100644
index 000000000000..fd5a2e0099b8
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/ll_cache.json
@@ -0,0 +1,10 @@
+[
+ {
+ "ArchStdEvent": "LL_CACHE_RD",
+ "PublicDescription": "Counts read transactions that were returned from outside the core cluster. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for the L3 cache. This event counts read transactions returned from outside the core if those transactions are either hit in the system level cache or missed in the SLC and are returned from any other external sources."
+ },
+ {
+ "ArchStdEvent": "LL_CACHE_MISS_RD",
+ "PublicDescription": "Counts read transactions that were returned from outside the core cluster but missed in the system level cache. This event counts for external last level cache when the system register CPUECTLR.EXTLLC bit is set, otherwise it counts for L3 cache. This event counts read transactions returned from outside the core if those transactions are missed in the System level Cache. The data source of the transaction is indicated by a field in the CHI transaction returning to the CPU. This event does not count reads caused by cache maintenance operations."
+ }
+]
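For context (not part of the patch), the two last level cache events above are combined into the read hit and miss ratios defined in metrics.json below; a small Python sketch with hypothetical counter values:

    # Sketch only: combining LL_CACHE_RD and LL_CACHE_MISS_RD as the
    # ll_cache_read_* metrics below do. Counter values are hypothetical.
    ll_cache_rd = 500_000
    ll_cache_miss_rd = 75_000

    read_hit_ratio = (ll_cache_rd - ll_cache_miss_rd) / ll_cache_rd
    read_miss_ratio = ll_cache_miss_rd / ll_cache_rd
    print(f"ll_cache_read_hit_ratio  = {read_hit_ratio:.2%}")
    print(f"ll_cache_read_miss_ratio = {read_miss_ratio:.2%}")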
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/memory.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/memory.json
new file mode 100644
index 000000000000..0454ffc1d364
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/memory.json
@@ -0,0 +1,58 @@
+[
+ {
+ "ArchStdEvent": "MEM_ACCESS",
+ "PublicDescription": "Counts memory accesses issued by the CPU load store unit, where those accesses are issued due to load or store operations. This event counts memory accesses no matter whether the data is received from any level of cache hierarchy or external memory. If memory accesses are broken up into smaller transactions than what were specified in the load or store instructions, then the event counts those smaller memory transactions."
+ },
+ {
+ "ArchStdEvent": "MEMORY_ERROR",
+ "PublicDescription": "Counts any detected correctable or uncorrectable physical memory errors (ECC or parity) in protected CPUs RAMs. On the core, this event counts errors in the caches (including data and tag rams). Any detected memory error (from either a speculative and abandoned access, or an architecturally executed access) is counted. Note that errors are only detected when the actual protected memory is accessed by an operation."
+ },
+ {
+ "ArchStdEvent": "REMOTE_ACCESS",
+ "PublicDescription": "Counts accesses to another chip, which is implemented as a different CMN mesh in the system. If the CHI bus response back to the core indicates that the data source is from another chip (mesh), then the counter is updated. If no data is returned, even if the system snoops another chip/mesh, then the counter is not updated."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_RD",
+ "PublicDescription": "Counts memory accesses issued by the CPU due to load operations. The event counts any memory load access, no matter whether the data is received from any level of cache hierarchy or external memory. The event also counts atomic load operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_WR",
+ "PublicDescription": "Counts memory accesses issued by the CPU due to store operations. The event counts any memory store access, no matter whether the data is located in any level of cache or external memory. The event also counts atomic load and store operations. If memory accesses are broken up by the load/store unit into smaller transactions that are issued by the bus interface, then the event counts those smaller transactions."
+ },
+ {
+ "ArchStdEvent": "LDST_ALIGN_LAT",
+ "PublicDescription": "Counts the number of memory read and write accesses in a cycle that incurred additional latency, due to the alignment of the address and the size of data being accessed, which results in store crossing a single cache line."
+ },
+ {
+ "ArchStdEvent": "LD_ALIGN_LAT",
+ "PublicDescription": "Counts the number of memory read accesses in a cycle that incurred additional latency, due to the alignment of the address and size of data being accessed, which results in load crossing a single cache line."
+ },
+ {
+ "ArchStdEvent": "ST_ALIGN_LAT",
+ "PublicDescription": "Counts the number of memory write access in a cycle that incurred additional latency, due to the alignment of the address and size of data being accessed incurred additional latency."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_CHECKED",
+ "PublicDescription": "Counts the number of memory read and write accesses counted by MEM_ACCESS that are tag checked by the Memory Tagging Extension (MTE). This event is implemented as the sum of MEM_ACCESS_CHECKED_RD and MEM_ACCESS_CHECKED_WR"
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_CHECKED_RD",
+ "PublicDescription": "Counts the number of memory read accesses in a cycle that are tag checked by the Memory Tagging Extension (MTE)."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_CHECKED_WR",
+ "PublicDescription": "Counts the number of memory write accesses in a cycle that is tag checked by the Memory Tagging Extension (MTE)."
+ },
+ {
+ "ArchStdEvent": "INST_FETCH_PERCYC",
+ "PublicDescription": "Counts number of instruction fetches outstanding per cycle, which will provide an average latency of instruction fetch."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_RD_PERCYC",
+ "PublicDescription": "Counts the number of outstanding loads or memory read accesses per cycle."
+ },
+ {
+ "ArchStdEvent": "INST_FETCH",
+ "PublicDescription": "Counts Instruction memory accesses that the PE makes."
+ }
+]
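One possible reading of the *_PERCYC events above (an interpretation, not something the patch states) is that they accumulate the number of outstanding accesses each cycle, so an average latency can be approximated via Little's law. A hedged Python sketch with hypothetical values:

    # Hedged sketch: if MEM_ACCESS_RD_PERCYC accumulates the number of
    # outstanding read accesses every cycle, then by Little's law the
    # average read latency in cycles is roughly that total divided by the
    # number of reads. Both the interpretation and the values are assumptions.
    mem_access_rd_percyc = 8_000_000  # outstanding reads summed over all cycles
    mem_access_rd = 400_000           # number of memory read accesses

    avg_read_latency_cycles = mem_access_rd_percyc / mem_access_rd
    print(f"approximate average read latency: {avg_read_latency_cycles:.1f} cycles")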
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/metrics.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/metrics.json
new file mode 100644
index 000000000000..4a671f55eaf3
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/metrics.json
@@ -0,0 +1,457 @@
+[
+ {
+ "ArchStdEvent": "backend_bound"
+ },
+ {
+ "MetricName": "backend_busy_bound",
+ "MetricExpr": "STALL_BACKEND_BUSY / STALL_BACKEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to issue queues being full to accept operations for execution.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_cache_l1d_bound",
+ "MetricExpr": "STALL_BACKEND_L1D / (STALL_BACKEND_L1D + STALL_BACKEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory access latency issues caused by level 1 data cache misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_cache_l2d_bound",
+ "MetricExpr": "STALL_BACKEND_MEM / (STALL_BACKEND_L1D + STALL_BACKEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory access latency issues caused by level 2 data cache misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_core_bound",
+ "MetricExpr": "STALL_BACKEND_CPUBOUND / STALL_BACKEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to backend core resource constraints not related to instruction fetch latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_core_rename_bound",
+ "MetricExpr": "STALL_BACKEND_RENAME / STALL_BACKEND_CPUBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend as the rename unit registers are unavailable.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_bound",
+ "MetricExpr": "STALL_BACKEND_MEMBOUND / STALL_BACKEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to backend core resource constraints related to memory access latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_cache_bound",
+ "MetricExpr": "(STALL_BACKEND_L1D + STALL_BACKEND_MEM) / STALL_BACKEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory latency issues caused by data cache misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_store_bound",
+ "MetricExpr": "STALL_BACKEND_ST / STALL_BACKEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to memory write pending caused by stores stalled in the pre-commit stage.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_mem_tlb_bound",
+ "MetricExpr": "STALL_BACKEND_TLB / STALL_BACKEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the backend due to memory access latency issues caused by data TLB misses.",
+ "MetricGroup": "Topdown_Backend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "backend_stalled_cycles",
+ "MetricExpr": "STALL_BACKEND / CPU_CYCLES * 100",
+ "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the backend unit of the processor.",
+ "MetricGroup": "Cycle_Accounting",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "ArchStdEvent": "bad_speculation",
+ "MetricExpr": "(1 - STALL_SLOT / (10 * CPU_CYCLES)) * (1 - OP_RETIRED / OP_SPEC) * 100 + STALL_FRONTEND_FLUSH / CPU_CYCLES * 100"
+ },
+ {
+ "MetricName": "barrier_percentage",
+ "MetricExpr": "(ISB_SPEC + DSB_SPEC + DMB_SPEC) / INST_SPEC * 100",
+ "BriefDescription": "This metric measures instruction and data barrier operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "branch_direct_ratio",
+ "MetricExpr": "BR_IMMED_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of direct branches retired to the total number of branches architecturally executed.",
+ "MetricGroup": "Branch_Effectiveness",
+ "ScaleUnit": "1per branch"
+ },
+ {
+ "MetricName": "branch_indirect_ratio",
+ "MetricExpr": "BR_IND_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of indirect branches retired, including function returns, to the total number of branches architecturally executed.",
+ "MetricGroup": "Branch_Effectiveness",
+ "ScaleUnit": "1per branch"
+ },
+ {
+ "MetricName": "branch_misprediction_ratio",
+ "MetricExpr": "BR_MIS_PRED_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of branches mispredicted to the total number of branches architecturally executed. This gives an indication of the effectiveness of the branch prediction unit.",
+ "MetricGroup": "Miss_Ratio;Branch_Effectiveness",
+ "ScaleUnit": "100percent of branches"
+ },
+ {
+ "MetricName": "branch_mpki",
+ "MetricExpr": "BR_MIS_PRED_RETIRED / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of branch mispredictions per thousand instructions executed.",
+ "MetricGroup": "MPKI;Branch_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "branch_percentage",
+ "MetricExpr": "(BR_IMMED_SPEC + BR_INDIRECT_SPEC) / INST_SPEC * 100",
+ "BriefDescription": "This metric measures branch operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "branch_return_ratio",
+ "MetricExpr": "BR_RETURN_RETIRED / BR_RETIRED",
+ "BriefDescription": "This metric measures the ratio of branches retired that are function returns to the total number of branches architecturally executed.",
+ "MetricGroup": "Branch_Effectiveness",
+ "ScaleUnit": "1per branch"
+ },
+ {
+ "MetricName": "crypto_percentage",
+ "MetricExpr": "CRYPTO_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures crypto operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "dtlb_mpki",
+ "MetricExpr": "DTLB_WALK / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of data TLB Walks per thousand instructions executed.",
+ "MetricGroup": "MPKI;DTLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "dtlb_walk_ratio",
+ "MetricExpr": "DTLB_WALK / L1D_TLB",
+ "BriefDescription": "This metric measures the ratio of data TLB Walks to the total number of data TLB accesses. This gives an indication of the effectiveness of the data TLB accesses.",
+ "MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "fp16_percentage",
+ "MetricExpr": "FP_HP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures half-precision floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "FP_Precision_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "fp32_percentage",
+ "MetricExpr": "FP_SP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures single-precision floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "FP_Precision_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "fp64_percentage",
+ "MetricExpr": "FP_DP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures double-precision floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "FP_Precision_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "fp_ops_per_cycle",
+ "MetricExpr": "(FP_SCALE_OPS_SPEC + FP_FIXED_OPS_SPEC) / CPU_CYCLES",
+ "BriefDescription": "This metric measures floating point operations per cycle in any precision performed by any instruction. Operations are counted by computation and by vector lanes, fused computations such as multiply-add count as twice per vector lane for example.",
+ "MetricGroup": "FP_Arithmetic_Intensity",
+ "ScaleUnit": "1operations per cycle"
+ },
+ {
+ "ArchStdEvent": "frontend_bound",
+ "MetricExpr": "(STALL_SLOT_FRONTEND / (10 * CPU_CYCLES) - STALL_FRONTEND_FLUSH / CPU_CYCLES) * 100"
+ },
+ {
+ "MetricName": "frontend_cache_l1i_bound",
+ "MetricExpr": "STALL_FRONTEND_L1I / (STALL_FRONTEND_L1I + STALL_FRONTEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to memory access latency issues caused by level 1 instruction cache misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_cache_l2i_bound",
+ "MetricExpr": "STALL_FRONTEND_MEM / (STALL_FRONTEND_L1I + STALL_FRONTEND_MEM) * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to memory access latency issues caused by level 2 instruction cache misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_core_bound",
+ "MetricExpr": "STALL_FRONTEND_CPUBOUND / STALL_FRONTEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to frontend core resource constraints not related to instruction fetch latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_core_flow_bound",
+ "MetricExpr": "STALL_FRONTEND_FLOW / STALL_FRONTEND_CPUBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend as the decode unit is awaiting input from the branch prediction unit.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_core_flush_bound",
+ "MetricExpr": "STALL_FRONTEND_FLUSH / STALL_FRONTEND_CPUBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend as the processor is recovering from a pipeline flush caused by bad speculation or other machine resteers.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_mem_bound",
+ "MetricExpr": "STALL_FRONTEND_MEMBOUND / STALL_FRONTEND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to frontend core resource constraints related to the instruction fetch latency issues caused by memory access components.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_mem_cache_bound",
+ "MetricExpr": "(STALL_FRONTEND_L1I + STALL_FRONTEND_MEM) / STALL_FRONTEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to instruction fetch latency issues caused by instruction cache misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_mem_tlb_bound",
+ "MetricExpr": "STALL_FRONTEND_TLB / STALL_FRONTEND_MEMBOUND * 100",
+ "BriefDescription": "This metric is the percentage of total cycles stalled in the frontend due to instruction fetch latency issues caused by instruction TLB misses.",
+ "MetricGroup": "Topdown_Frontend",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "frontend_stalled_cycles",
+ "MetricExpr": "STALL_FRONTEND / CPU_CYCLES * 100",
+ "BriefDescription": "This metric is the percentage of cycles that were stalled due to resource constraints in the frontend unit of the processor.",
+ "MetricGroup": "Cycle_Accounting",
+ "ScaleUnit": "1percent of cycles"
+ },
+ {
+ "MetricName": "integer_dp_percentage",
+ "MetricExpr": "DP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalar integer operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "ipc",
+ "MetricExpr": "INST_RETIRED / CPU_CYCLES",
+ "BriefDescription": "This metric measures the number of instructions retired per cycle.",
+ "MetricGroup": "General",
+ "ScaleUnit": "1per cycle"
+ },
+ {
+ "MetricName": "itlb_mpki",
+ "MetricExpr": "ITLB_WALK / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of instruction TLB Walks per thousand instructions executed.",
+ "MetricGroup": "MPKI;ITLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "itlb_walk_ratio",
+ "MetricExpr": "ITLB_WALK / L1I_TLB",
+ "BriefDescription": "This metric measures the ratio of instruction TLB Walks to the total number of instruction TLB accesses. This gives an indication of the effectiveness of the instruction TLB accesses.",
+ "MetricGroup": "Miss_Ratio;ITLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l1d_cache_miss_ratio",
+ "MetricExpr": "L1D_CACHE_REFILL / L1D_CACHE",
+ "BriefDescription": "This metric measures the ratio of level 1 data cache accesses missed to the total number of level 1 data cache accesses. This gives an indication of the effectiveness of the level 1 data cache.",
+ "MetricGroup": "Miss_Ratio;L1D_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "l1d_cache_mpki",
+ "MetricExpr": "L1D_CACHE_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 data cache accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;L1D_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l1d_tlb_miss_ratio",
+ "MetricExpr": "L1D_TLB_REFILL / L1D_TLB",
+ "BriefDescription": "This metric measures the ratio of level 1 data TLB accesses missed to the total number of level 1 data TLB accesses. This gives an indication of the effectiveness of the level 1 data TLB.",
+ "MetricGroup": "Miss_Ratio;DTLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l1d_tlb_mpki",
+ "MetricExpr": "L1D_TLB_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 data TLB accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;DTLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l1i_cache_miss_ratio",
+ "MetricExpr": "L1I_CACHE_REFILL / L1I_CACHE",
+ "BriefDescription": "This metric measures the ratio of level 1 instruction cache accesses missed to the total number of level 1 instruction cache accesses. This gives an indication of the effectiveness of the level 1 instruction cache.",
+ "MetricGroup": "Miss_Ratio;L1I_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "l1i_cache_mpki",
+ "MetricExpr": "L1I_CACHE_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 instruction cache accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;L1I_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l1i_tlb_miss_ratio",
+ "MetricExpr": "L1I_TLB_REFILL / L1I_TLB",
+ "BriefDescription": "This metric measures the ratio of level 1 instruction TLB accesses missed to the total number of level 1 instruction TLB accesses. This gives an indication of the effectiveness of the level 1 instruction TLB.",
+ "MetricGroup": "Miss_Ratio;ITLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l1i_tlb_mpki",
+ "MetricExpr": "L1I_TLB_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;ITLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l2_cache_miss_ratio",
+ "MetricExpr": "L2D_CACHE_REFILL / L2D_CACHE",
+ "BriefDescription": "This metric measures the ratio of level 2 cache accesses missed to the total number of level 2 cache accesses. This gives an indication of the effectiveness of the level 2 cache, which is a unified cache that stores both data and instruction. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.",
+ "MetricGroup": "Miss_Ratio;L2_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "l2_cache_mpki",
+ "MetricExpr": "L2D_CACHE_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 2 unified cache accesses missed per thousand instructions executed. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a unified cache.",
+ "MetricGroup": "MPKI;L2_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "l2_tlb_miss_ratio",
+ "MetricExpr": "L2D_TLB_REFILL / L2D_TLB",
+ "BriefDescription": "This metric measures the ratio of level 2 unified TLB accesses missed to the total number of level 2 unified TLB accesses. This gives an indication of the effectiveness of the level 2 TLB.",
+ "MetricGroup": "Miss_Ratio;ITLB_Effectiveness;DTLB_Effectiveness",
+ "ScaleUnit": "100percent of TLB accesses"
+ },
+ {
+ "MetricName": "l2_tlb_mpki",
+ "MetricExpr": "L2D_TLB_REFILL / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of level 2 unified TLB accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;ITLB_Effectiveness;DTLB_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "ll_cache_read_hit_ratio",
+ "MetricExpr": "(LL_CACHE_RD - LL_CACHE_MISS_RD) / LL_CACHE_RD",
+ "BriefDescription": "This metric measures the ratio of last level cache read accesses hit in the cache to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.",
+ "MetricGroup": "LL_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "ll_cache_read_miss_ratio",
+ "MetricExpr": "LL_CACHE_MISS_RD / LL_CACHE_RD",
+ "BriefDescription": "This metric measures the ratio of last level cache read accesses missed to the total number of last level cache accesses. This gives an indication of the effectiveness of the last level cache for read traffic. Note that cache accesses in this cache are either data memory access or instruction fetch as this is a system level cache.",
+ "MetricGroup": "Miss_Ratio;LL_Cache_Effectiveness",
+ "ScaleUnit": "100percent of cache accesses"
+ },
+ {
+ "MetricName": "ll_cache_read_mpki",
+ "MetricExpr": "LL_CACHE_MISS_RD / INST_RETIRED * 1000",
+ "BriefDescription": "This metric measures the number of last level cache read accesses missed per thousand instructions executed.",
+ "MetricGroup": "MPKI;LL_Cache_Effectiveness",
+ "ScaleUnit": "1MPKI"
+ },
+ {
+ "MetricName": "load_percentage",
+ "MetricExpr": "LD_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures load operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "nonsve_fp_ops_per_cycle",
+ "MetricExpr": "FP_FIXED_OPS_SPEC / CPU_CYCLES",
+ "BriefDescription": "This metric measures floating point operations per cycle in any precision performed by an instruction that is not an SVE instruction. Operations are counted by computation and by vector lanes, fused computations such as multiply-add count as twice per vector lane for example.",
+ "MetricGroup": "FP_Arithmetic_Intensity",
+ "ScaleUnit": "1operations per cycle"
+ },
+ {
+ "ArchStdEvent": "retiring"
+ },
+ {
+ "MetricName": "scalar_fp_percentage",
+ "MetricExpr": "VFP_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalar floating point operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "simd_percentage",
+ "MetricExpr": "ASE_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures advanced SIMD operations as a percentage of total operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "store_percentage",
+ "MetricExpr": "ST_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures store operations as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_all_percentage",
+ "MetricExpr": "SVE_INST_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations, including loads and stores, as a percentage of operations speculatively executed.",
+ "MetricGroup": "Operation_Mix",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_fp_ops_per_cycle",
+ "MetricExpr": "FP_SCALE_OPS_SPEC / CPU_CYCLES",
+ "BriefDescription": "This metric measures floating point operations per cycle in any precision performed by SVE instructions. Operations are counted by computation and by vector lanes, fused computations such as multiply-add count as twice per vector lane for example.",
+ "MetricGroup": "FP_Arithmetic_Intensity",
+ "ScaleUnit": "1operations per cycle"
+ },
+ {
+ "MetricName": "sve_predicate_empty_percentage",
+ "MetricExpr": "SVE_PRED_EMPTY_SPEC / SVE_PRED_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with no active predicates as a percentage of sve predicated operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_predicate_full_percentage",
+ "MetricExpr": "SVE_PRED_FULL_SPEC / SVE_PRED_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with all active predicates as a percentage of sve predicated operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_predicate_partial_percentage",
+ "MetricExpr": "SVE_PRED_PARTIAL_SPEC / SVE_PRED_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with at least one active predicates as a percentage of sve predicated operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ },
+ {
+ "MetricName": "sve_predicate_percentage",
+ "MetricExpr": "SVE_PRED_SPEC / INST_SPEC * 100",
+ "BriefDescription": "This metric measures scalable vector operations with predicates as a percentage of operations speculatively executed.",
+ "MetricGroup": "SVE_Effectiveness",
+ "ScaleUnit": "1percent of operations"
+ }
+]
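To illustrate how the MetricExpr strings above are consumed (a rough sketch of what `perf stat -M <metric>` does with them, not the actual perf implementation), a small Python evaluator over hypothetical counter values:

    # Sketch only: evaluating a few MetricExpr formulas from this file
    # against raw counter values. Values are made up and the expression
    # handling is deliberately simplified.
    counts = {
        "STALL_BACKEND": 3_000_000, "CPU_CYCLES": 10_000_000,
        "BR_MIS_PRED_RETIRED": 12_000, "BR_RETIRED": 1_500_000,
        "STALL_SLOT_FRONTEND": 20_000_000, "STALL_FRONTEND_FLUSH": 150_000,
    }

    exprs = {
        "backend_stalled_cycles": "STALL_BACKEND / CPU_CYCLES * 100",
        "branch_misprediction_ratio": "BR_MIS_PRED_RETIRED / BR_RETIRED",
        "frontend_bound": "(STALL_SLOT_FRONTEND / (10 * CPU_CYCLES)"
                          " - STALL_FRONTEND_FLUSH / CPU_CYCLES) * 100",
    }

    for name, expr in exprs.items():
        value = eval(expr, {"__builtins__": {}}, counts)  # toy evaluator
        print(f"{name:26s} = {value:.3f}")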
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/retired.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/retired.json
new file mode 100644
index 000000000000..04617c399dda
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/retired.json
@@ -0,0 +1,98 @@
+[
+ {
+ "ArchStdEvent": "SW_INCR",
+ "PublicDescription": "Counts software writes to the PMSWINC_EL0 (software PMU increment) register. The PMSWINC_EL0 register is a manually updated counter for use by application software.\n\nThis event could be used to measure any user program event, such as accesses to a particular data structure (by writing to the PMSWINC_EL0 register each time the data structure is accessed).\n\nTo use the PMSWINC_EL0 register and event, developers must insert instructions that write to the PMSWINC_EL0 register into the source code.\n\nSince the SW_INCR event records writes to the PMSWINC_EL0 register, there is no need to do a read/increment/write sequence to the PMSWINC_EL0 register."
+ },
+ {
+ "ArchStdEvent": "INST_RETIRED",
+ "PublicDescription": "Counts instructions that have been architecturally executed."
+ },
+ {
+ "ArchStdEvent": "CID_WRITE_RETIRED",
+ "PublicDescription": "Counts architecturally executed writes to the CONTEXTIDR_EL1 register, which usually contain the kernel PID and can be output with hardware trace."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_RETIRED",
+ "PublicDescription": "Counts architecturally executed direct branches."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_RETIRED",
+ "PublicDescription": "Counts architecturally executed procedure returns."
+ },
+ {
+ "ArchStdEvent": "TTBR_WRITE_RETIRED",
+ "PublicDescription": "Counts architectural writes to TTBR0/1_EL1. If virtualization host extensions are enabled (by setting the HCR_EL2.E2H bit to 1), then accesses to TTBR0/1_EL1 that are redirected to TTBR0/1_EL2, or accesses to TTBR0/1_EL12, are counted. TTBRn registers are typically updated when the kernel is swapping user-space threads or applications."
+ },
+ {
+ "ArchStdEvent": "BR_RETIRED",
+ "PublicDescription": "Counts architecturally executed branches, whether the branch is taken or not. Instructions that explicitly write to the PC are also counted. Note that exception generating instructions, exception return instructions and context synchronization instructions are not counted."
+ },
+ {
+ "ArchStdEvent": "BR_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts branches counted by BR_RETIRED which were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "OP_RETIRED",
+ "PublicDescription": "Counts micro-operations that are architecturally executed. This is a count of number of micro-operations retired from the commit queue in a single cycle."
+ },
+ {
+ "ArchStdEvent": "BR_INDNR_TAKEN_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches excluding procedure returns that were taken."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed direct branches that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed direct branches that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_IND_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches including procedure returns that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_IND_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches including procedure returns that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed procedure returns that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed procedure returns that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_INDNR_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches excluding procedure returns that were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_INDNR_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches excluding procedure returns that were mispredicted and caused a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_TAKEN_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed branches that were taken and were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_TAKEN_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed branches that were taken and were mispredicted causing a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_SKIP_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed branches that were not taken and were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_SKIP_MIS_PRED_RETIRED",
+ "PublicDescription": "Counts architecturally executed branches that were not taken and were mispredicted causing a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "BR_PRED_RETIRED",
+ "PublicDescription": "Counts branch instructions counted by BR_RETIRED which were correctly predicted."
+ },
+ {
+ "ArchStdEvent": "BR_IND_RETIRED",
+ "PublicDescription": "Counts architecturally executed indirect branches including procedure returns."
+ }
+]
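As an aside (not part of the patch), the retired branch events above feed the branch_*_ratio metrics defined in metrics.json earlier in this patch; a short Python sketch with hypothetical counts:

    # Sketch only: the per-branch ratios built from the retired events above,
    # mirroring branch_direct_ratio, branch_indirect_ratio, branch_return_ratio
    # and branch_misprediction_ratio. Counter values are hypothetical.
    br = {
        "BR_RETIRED": 1_500_000,
        "BR_MIS_PRED_RETIRED": 12_000,
        "BR_IMMED_RETIRED": 1_300_000,
        "BR_IND_RETIRED": 200_000,
        "BR_RETURN_RETIRED": 90_000,
    }

    for numerator in ("BR_IMMED_RETIRED", "BR_IND_RETIRED",
                      "BR_RETURN_RETIRED", "BR_MIS_PRED_RETIRED"):
        ratio = br[numerator] / br["BR_RETIRED"]
        print(f"{numerator:22s} / BR_RETIRED = {ratio:.4f}")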
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/spe.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/spe.json
new file mode 100644
index 000000000000..ca0217fa4681
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/spe.json
@@ -0,0 +1,42 @@
+[
+ {
+ "ArchStdEvent": "SAMPLE_POP",
+ "PublicDescription": "Counts statistical profiling sample population, the count of all operations that could be sampled but may or may not be chosen for sampling."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED",
+ "PublicDescription": "Counts statistical profiling samples taken for sampling."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FILTRATE",
+ "PublicDescription": "Counts statistical profiling samples taken which are not removed by filtering."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_COLLISION",
+ "PublicDescription": "Counts statistical profiling samples that have collided with a previous sample and so therefore not taken."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_BR",
+ "PublicDescription": "Counts statistical profiling samples taken which are branches."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_LD",
+ "PublicDescription": "Counts statistical profiling samples taken which are loads or load atomic operations."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_ST",
+ "PublicDescription": "Counts statistical profiling samples taken which are stores or store atomic operations."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_OP",
+ "PublicDescription": "Counts statistical profiling samples taken which are matching any operation type filters supported."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_EVENT",
+ "PublicDescription": "Counts statistical profiling samples taken which are matching event packet filter constraints."
+ },
+ {
+ "ArchStdEvent": "SAMPLE_FEED_LAT",
+ "PublicDescription": "Counts statistical profiling samples taken which are exceeding minimum latency set by operation latency filter constraints."
+ }
+]
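For orientation (not part of the patch), the SPE events above are usually compared against each other to judge sampling coverage; a minimal Python sketch with hypothetical counter values:

    # Sketch only: relating the SPE sample events above. Values are
    # hypothetical counter readings.
    sample_pop = 2_000_000     # operations eligible for sampling
    sample_feed = 2_000        # samples actually taken
    sample_filtrate = 1_600    # samples that survived filtering
    sample_collision = 40      # samples lost to collisions

    print(f"sampling rate    = {sample_feed / sample_pop:.4%}")
    print(f"filter pass rate = {sample_filtrate / sample_feed:.1%}")
    print(f"collision rate   = {sample_collision / sample_feed:.1%}")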
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/spec_operation.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/spec_operation.json
new file mode 100644
index 000000000000..7d7359402e9e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/spec_operation.json
@@ -0,0 +1,126 @@
+[
+ {
+ "ArchStdEvent": "BR_MIS_PRED",
+ "PublicDescription": "Counts branches which are speculatively executed and mispredicted."
+ },
+ {
+ "ArchStdEvent": "BR_PRED",
+ "PublicDescription": "Counts all speculatively executed branches."
+ },
+ {
+ "ArchStdEvent": "INST_SPEC",
+ "PublicDescription": "Counts operations that have been speculatively executed."
+ },
+ {
+ "ArchStdEvent": "OP_SPEC",
+ "PublicDescription": "Counts micro-operations speculatively executed. This is the count of the number of micro-operations dispatched in a cycle."
+ },
+ {
+ "ArchStdEvent": "UNALIGNED_LD_SPEC",
+ "PublicDescription": "Counts unaligned memory read operations issued by the CPU. This event counts unaligned accesses (as defined by the actual instruction), even if they are subsequently issued as multiple aligned accesses. The event does not count preload operations (PLD, PLI)."
+ },
+ {
+ "ArchStdEvent": "UNALIGNED_ST_SPEC",
+ "PublicDescription": "Counts unaligned memory write operations issued by the CPU. This event counts unaligned accesses (as defined by the actual instruction), even if they are subsequently issued as multiple aligned accesses."
+ },
+ {
+ "ArchStdEvent": "UNALIGNED_LDST_SPEC",
+ "PublicDescription": "Counts unaligned memory operations issued by the CPU. This event counts unaligned accesses (as defined by the actual instruction), even if they are subsequently issued as multiple aligned accesses."
+ },
+ {
+ "ArchStdEvent": "LDREX_SPEC",
+ "PublicDescription": "Counts Load-Exclusive operations that have been speculatively executed. For example: LDREX, LDX"
+ },
+ {
+ "ArchStdEvent": "STREX_PASS_SPEC",
+ "PublicDescription": "Counts store-exclusive operations that have been speculatively executed and have successfully completed the store operation."
+ },
+ {
+ "ArchStdEvent": "STREX_FAIL_SPEC",
+ "PublicDescription": "Counts store-exclusive operations that have been speculatively executed and have not successfully completed the store operation."
+ },
+ {
+ "ArchStdEvent": "STREX_SPEC",
+ "PublicDescription": "Counts store-exclusive operations that have been speculatively executed."
+ },
+ {
+ "ArchStdEvent": "LD_SPEC",
+ "PublicDescription": "Counts speculatively executed load operations including Single Instruction Multiple Data (SIMD) load operations."
+ },
+ {
+ "ArchStdEvent": "ST_SPEC",
+ "PublicDescription": "Counts speculatively executed store operations including Single Instruction Multiple Data (SIMD) store operations."
+ },
+ {
+ "ArchStdEvent": "LDST_SPEC",
+ "PublicDescription": "Counts load and store operations that have been speculatively executed."
+ },
+ {
+ "ArchStdEvent": "DP_SPEC",
+ "PublicDescription": "Counts speculatively executed logical or arithmetic instructions such as MOV/MVN operations."
+ },
+ {
+ "ArchStdEvent": "ASE_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD operations excluding load, store and move micro-operations that move data to or from SIMD (vector) registers."
+ },
+ {
+ "ArchStdEvent": "VFP_SPEC",
+ "PublicDescription": "Counts speculatively executed floating point operations. This event does not count operations that move data to or from floating point (vector) registers."
+ },
+ {
+ "ArchStdEvent": "PC_WRITE_SPEC",
+ "PublicDescription": "Counts speculatively executed operations which cause software changes of the PC. Those operations include all taken branch operations."
+ },
+ {
+ "ArchStdEvent": "CRYPTO_SPEC",
+ "PublicDescription": "Counts speculatively executed cryptographic operations except for PMULL and VMULL operations."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_SPEC",
+ "PublicDescription": "Counts direct branch operations which are speculatively executed."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_SPEC",
+ "PublicDescription": "Counts procedure return operations (RET, RETAA and RETAB) which are speculatively executed."
+ },
+ {
+ "ArchStdEvent": "BR_INDIRECT_SPEC",
+ "PublicDescription": "Counts indirect branch operations including procedure returns, which are speculatively executed. This includes operations that force a software change of the PC, other than exception-generating operations and direct branch instructions. Some examples of the instructions counted by this event include BR Xn, RET, etc..."
+ },
+ {
+ "ArchStdEvent": "ISB_SPEC",
+ "PublicDescription": "Counts ISB operations that are executed."
+ },
+ {
+ "ArchStdEvent": "DSB_SPEC",
+ "PublicDescription": "Counts DSB operations that are speculatively issued to Load/Store unit in the CPU."
+ },
+ {
+ "ArchStdEvent": "DMB_SPEC",
+ "PublicDescription": "Counts DMB operations that are speculatively issued to the Load/Store unit in the CPU. This event does not count implied barriers from load acquire/store release operations."
+ },
+ {
+ "ArchStdEvent": "RC_LD_SPEC",
+ "PublicDescription": "Counts any load acquire operations that are speculatively executed. For example: LDAR, LDARH, LDARB"
+ },
+ {
+ "ArchStdEvent": "RC_ST_SPEC",
+ "PublicDescription": "Counts any store release operations that are speculatively executed. For example: STLR, STLRH, STLRB"
+ },
+ {
+ "ArchStdEvent": "SIMD_INST_SPEC",
+ "PublicDescription": "Counts speculatively executed operations that are SIMD or SVE vector operations or Advanced SIMD non-scalar operations."
+ },
+ {
+ "ArchStdEvent": "ASE_INST_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD operations."
+ },
+ {
+ "ArchStdEvent": "INT_SPEC",
+ "PublicDescription": "Counts speculatively executed integer arithmetic operations."
+ },
+ {
+ "ArchStdEvent": "PRF_SPEC",
+ "PublicDescription": "Counts speculatively executed operations that prefetch memory. For example: Scalar: PRFM, SVE: PRFB, PRFD, PRFH, or PRFW."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/stall.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/stall.json
new file mode 100644
index 000000000000..cafa73508db6
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/stall.json
@@ -0,0 +1,124 @@
+[
+ {
+ "ArchStdEvent": "STALL_FRONTEND",
+ "PublicDescription": "Counts cycles when frontend could not send any micro-operations to the rename stage because of frontend resource stalls caused by fetch memory latency or branch prediction flow stalls. STALL_FRONTEND_SLOTS counts SLOTS during the cycle when this event counts."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND",
+ "PublicDescription": "Counts cycles whenever the rename unit is unable to send any micro-operations to the backend of the pipeline because of backend resource constraints. Backend resource constraints can include issue stage fullness, execution stage fullness, or other internal pipeline resource fullness. All the backend slots were empty during the cycle when this event counts."
+ },
+ {
+ "ArchStdEvent": "STALL",
+ "PublicDescription": "Counts cycles when no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). This event is the sum of STALL_FRONTEND and STALL_BACKEND"
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT_BACKEND",
+ "PublicDescription": "Counts slots per cycle in which no operations are sent from the rename unit to the backend due to backend resource constraints. STALL_BACKEND counts during the cycle when STALL_SLOT_BACKEND counts at least 1."
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT_FRONTEND",
+ "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend due to frontend resource constraints."
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT",
+ "PublicDescription": "Counts slots per cycle in which no operations are sent to the rename unit from the frontend or from the rename unit to the backend for any reason (either frontend or backend stall). STALL_SLOT is the sum of STALL_SLOT_FRONTEND and STALL_SLOT_BACKEND."
+ },
+ {
+ "PublicDescription": "Counts cycles counted by STALL_BACKEND_BUSY when the backend could not accept any micro-operations\nbecause the simple integer issue queues are full to take any operations for execution.",
+ "EventCode": "0x15C",
+ "EventName": "DISPATCH_STALL_IQ_SX",
+ "BriefDescription": "Dispatch stalled due to IQ full,SX"
+ },
+ {
+ "PublicDescription": "Counts cycles counted by STALL_BACKEND_BUSY when the backend could not accept any micro-operations\nbecause the complex integer issue queues are full and can not take any operations for execution.",
+ "EventCode": "0x15D",
+ "EventName": "DISPATCH_STALL_IQ_MX",
+ "BriefDescription": "Dispatch stalled due to IQ full,MX"
+ },
+ {
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations\nbecause the load/store issue queues are full and can not take any operations for execution.",
+ "EventCode": "0x15E",
+ "EventName": "DISPATCH_STALL_IQ_LS",
+ "BriefDescription": "Dispatch stalled due to IQ full,LS"
+ },
+ {
+ "PublicDescription": "Counts cycles counted by STALL_BACKEND_BUSY when the backend could not accept any micro-operations\nbecause the vector issue queues are full and can not take any operations for execution.",
+ "EventCode": "0x15F",
+ "EventName": "DISPATCH_STALL_IQ_VX",
+ "BriefDescription": "Dispatch stalled due to IQ full,VX"
+ },
+ {
+ "PublicDescription": "Counts cycles counted by STALL_BACKEND_BUSY when the backend could not accept any micro-operations\nbecause the commit queue is full and can not take any operations for execution.",
+ "EventCode": "0x160",
+ "EventName": "DISPATCH_STALL_MCQ",
+ "BriefDescription": "Dispatch stalled due to MCQ full"
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_MEM",
+ "PublicDescription": "Counts cycles when the backend is stalled because there is a pending demand load request in progress in the last level core cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_MEMBOUND",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage due to resource constraints in the memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_L1I",
+ "PublicDescription": "Counts cycles when the frontend is stalled because there is an instruction fetch request pending in the level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_MEM",
+ "PublicDescription": "Counts cycles when the frontend is stalled because there is an instruction fetch request pending in the last level core cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_TLB",
+ "PublicDescription": "Counts when the frontend is stalled on any TLB misses being handled. This event also counts the TLB accesses made by hardware prefetches."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_CPUBOUND",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage due to resource constraints in the CPU resources excluding memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_FLOW",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage due to resource constraints in the branch prediction unit."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_FLUSH",
+ "PublicDescription": "Counts cycles when the frontend could not send any micro-operations to the rename stage as the frontend is recovering from a machine flush or resteer. Example scenarios that cause a flush include branch mispredictions, taken exceptions, micro-architectural flush etc."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_MEMBOUND",
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations due to resource constraints in the memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_L1D",
+ "PublicDescription": "Counts cycles when the backend is stalled because there is a pending demand load request in progress in the level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_L2D",
+ "PublicDescription": "Counts cycles when the backend is stalled because there is a pending demand load request in progress in the level 2 data cache."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_TLB",
+ "PublicDescription": "Counts cycles when the backend is stalled on any demand TLB misses being handled."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_ST",
+ "PublicDescription": "Counts cycles when the backend is stalled and there is a store that has not reached the pre-commit stage."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_CPUBOUND",
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations due to any resource constraints in the CPU excluding memory resources."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_BUSY",
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations because the issue queues are full to take any operations for execution."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_ILOCK",
+ "PublicDescription": "Counts cycles when the backend could not accept any micro-operations due to resource constraints imposed by input dependency."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_RENAME",
+ "PublicDescription": "Counts cycles when backend is stalled even when operations are available from the frontend but at least one is not ready to be sent to the backend because no rename register is available."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/sve.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/sve.json
new file mode 100644
index 000000000000..51dab48cb2ba
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/sve.json
@@ -0,0 +1,50 @@
+[
+ {
+ "ArchStdEvent": "SVE_INST_SPEC",
+ "PublicDescription": "Counts speculatively executed operations that are SVE operations."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_EMPTY_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with no active predicate elements."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_FULL_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with all predicate elements active."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_PARTIAL_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with at least one but not all active predicate elements."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_NOT_FULL_SPEC",
+ "PublicDescription": "Counts speculatively executed predicated SVE operations with at least one non active predicate elements."
+ },
+ {
+ "ArchStdEvent": "SVE_LDFF_SPEC",
+ "PublicDescription": "Counts speculatively executed SVE first fault or non-fault load operations."
+ },
+ {
+ "ArchStdEvent": "SVE_LDFF_FAULT_SPEC",
+ "PublicDescription": "Counts speculatively executed SVE first fault or non-fault load operations that clear at least one bit in the FFR."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT8_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type an 8-bit integer."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT16_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 16-bit integer."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT32_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 32-bit integer."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT64_SPEC",
+ "PublicDescription": "Counts speculatively executed Advanced SIMD or SVE integer operations with the largest data type a 64-bit integer."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/tlb.json b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/tlb.json
new file mode 100644
index 000000000000..41c5472c1def
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/arm/neoverse-v3/tlb.json
@@ -0,0 +1,138 @@
+[
+ {
+ "ArchStdEvent": "L1I_TLB_REFILL",
+ "PublicDescription": "Counts level 1 instruction TLB refills from any Instruction fetch. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_REFILL",
+ "PublicDescription": "Counts level 1 data TLB accesses that resulted in TLB refills. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event counts for refills caused by preload instructions or hardware prefetch accesses. This event counts regardless of whether the miss hits in L2 or results in a translation table walk. This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB. This event will not count on an access from an AT(address translation) instruction."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB",
+ "PublicDescription": "Counts level 1 data TLB accesses caused by any memory load or store operation. Note that load or store instructions can be broken up into multiple memory operations. This event does not count TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1I_TLB",
+ "PublicDescription": "Counts level 1 instruction TLB accesses, whether the access hits or misses in the TLB. This event counts both demand accesses and prefetch or preload generated accesses."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB_REFILL",
+ "PublicDescription": "Counts level 2 TLB refills caused by memory operations from both data and instruction fetch, except for those caused by TLB maintenance operations and hardware prefetches."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB",
+ "PublicDescription": "Counts level 2 TLB accesses except those caused by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK",
+ "PublicDescription": "Counts number of demand data translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK",
+ "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_REFILL_RD",
+ "PublicDescription": "Counts level 1 data TLB refills caused by memory read operations. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event counts for refills caused by preload instructions or hardware prefetch accesses. This event counts regardless of whether the miss hits in L2 or results in a translation table walk. This event will not count if the translation table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB. This event will not count on an access from an Address Translation (AT) instruction."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_REFILL_WR",
+ "PublicDescription": "Counts level 1 data TLB refills caused by data side memory write operations. If there are multiple misses in the TLB that are resolved by the refill, then this event only counts once. This event counts for refills caused by preload instructions or hardware prefetch accesses. This event counts regardless of whether the miss hits in L2 or results in a translation table walk. This event will not count if the table walk results in a fault (such as a translation or access fault), since there is no new translation created for the TLB. This event will not count with an access from an Address Translation (AT) instruction."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_RD",
+ "PublicDescription": "Counts level 1 data TLB accesses caused by memory read operations. This event counts whether the access hits or misses in the TLB. This event does not count TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_WR",
+ "PublicDescription": "Counts any L1 data side TLB accesses caused by memory write operations. This event counts whether the access hits or misses in the TLB. This event does not count TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB_REFILL_RD",
+ "PublicDescription": "Counts level 2 TLB refills caused by memory read operations from both data and instruction fetch except for those caused by TLB maintenance operations or hardware prefetches."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB_REFILL_WR",
+ "PublicDescription": "Counts level 2 TLB refills caused by memory write operations from both data and instruction fetch except for those caused by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB_RD",
+ "PublicDescription": "Counts level 2 TLB accesses caused by memory read operations from both data and instruction fetch except for those caused by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB_WR",
+ "PublicDescription": "Counts level 2 TLB accesses caused by memory write operations from both data and instruction fetch except for those caused by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_PERCYC",
+ "PublicDescription": "Counts the number of data translation table walks in progress per cycle."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_PERCYC",
+ "PublicDescription": "Counts the number of instruction translation table walks in progress per cycle."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_RW",
+ "PublicDescription": "Counts level 1 data TLB demand accesses caused by memory read or write operations. This event counts whether the access hits or misses in the TLB. This event does not count TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1I_TLB_RD",
+ "PublicDescription": "Counts level 1 instruction TLB demand accesses whether the access hits or misses in the TLB."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_PRFM",
+ "PublicDescription": "Counts level 1 data TLB accesses generated by software prefetch or preload memory accesses. Load or store instructions can be broken into multiple memory operations. This event does not count TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "L1I_TLB_PRFM",
+ "PublicDescription": "Counts level 1 instruction TLB accesses generated by software preload or prefetch instructions. This event counts whether the access hits or misses in the TLB. This event does not count TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_HWUPD",
+ "PublicDescription": "Counts number of memory accesses triggered by a data translation table walk and performing an update of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that this event counts accesses triggered by software preloads, but not accesses triggered by hardware prefetchers."
+ },
+ {
+ "ArchStdEvent": "ITLB_HWUPD",
+ "PublicDescription": "Counts number of memory accesses triggered by an instruction translation table walk and performing an update of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD."
+ },
+ {
+ "ArchStdEvent": "DTLB_STEP",
+ "PublicDescription": "Counts number of memory accesses triggered by a demand data translation table walk and performing a read of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that this event counts accesses triggered by software preloads, but not accesses triggered by hardware prefetchers."
+ },
+ {
+ "ArchStdEvent": "ITLB_STEP",
+ "PublicDescription": "Counts number of memory accesses triggered by an instruction translation table walk and performing a read of a translation table entry. Memory accesses are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_LARGE",
+ "PublicDescription": "Counts number of demand data translation table walks caused by a miss in the L2 TLB and yielding a large page. The set of large pages is defined as all pages with a final size higher than or equal to 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. If DTLB_WALK_BLOCK is implemented, then it is an alias for this event in this family. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_LARGE",
+ "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and yielding a large page. The set of large pages is defined as all pages with a final size higher than or equal to 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. In this family, this is equal to ITLB_WALK_BLOCK event. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_SMALL",
+ "PublicDescription": "Counts number of data translation table walks caused by a miss in the L2 TLB and yielding a small page. The set of small pages is defined as all pages with a final size lower than 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. If DTLB_WALK_PAGE event is implemented, then it is an alias for this event in this family. Note that partial translations that cause a translation table walk are also counted. Also note that this event counts walks triggered by software preloads, but not walks triggered by hardware prefetchers, and that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_SMALL",
+ "PublicDescription": "Counts number of instruction translation table walks caused by a miss in the L2 TLB and yielding a small page. The set of small pages is defined as all pages with a final size lower than 2MB. Translation table walks that end up taking a translation fault are not counted, as the page size would be undefined in that case. In this family, this is equal to ITLB_WALK_PAGE event. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_RW",
+ "PublicDescription": "Counts number of demand data translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_RD",
+ "PublicDescription": "Counts number of demand instruction translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_PRFM",
+ "PublicDescription": "Counts number of software prefetches or preloads generated data translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_PRFM",
+ "PublicDescription": "Counts number of software prefetches or preloads generated instruction translation table walks caused by a miss in the L2 TLB and performing at least one memory access. Translation table walks are counted even if the translation ended up taking a translation fault for reasons different than EPD, E0PD and NFD. Note that partial translations that cause a translation table walk are also counted. Also note that this event does not count walks triggered by TLB maintenance operations."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/common-and-microarch.json b/tools/perf/pmu-events/arch/arm64/common-and-microarch.json
index 492083b99256..2416d9f8a83d 100644
--- a/tools/perf/pmu-events/arch/arm64/common-and-microarch.json
+++ b/tools/perf/pmu-events/arch/arm64/common-and-microarch.json
@@ -228,6 +228,16 @@
"BriefDescription": "Attributable Level 1 instruction TLB access"
},
{
+ "EventCode": "0x27",
+ "EventName": "L2I_CACHE",
+ "BriefDescription": "Level 2 instruction cache access"
+ },
+ {
+ "EventCode": "0x28",
+ "EventName": "L2I_CACHE_REFILL",
+ "BriefDescription": "Level 2 instruction cache refill"
+ },
+ {
"PublicDescription": "Attributable Level 3 data cache allocation without refill",
"EventCode": "0x29",
"EventName": "L3D_CACHE_ALLOCATE",
@@ -276,6 +286,16 @@
"BriefDescription": "Access to another socket in a multi-socket system"
},
{
+ "EventCode": "0x32",
+ "EventName": "LL_CACHE",
+ "BriefDescription": "Last level cache access"
+ },
+ {
+ "EventCode": "0x33",
+ "EventName": "LL_CACHE_MISS",
+ "BriefDescription": "Last level cache miss"
+ },
+ {
"PublicDescription": "Access to data TLB causes a translation table walk",
"EventCode": "0x34",
"EventName": "DTLB_WALK",
@@ -396,6 +416,11 @@
"BriefDescription": "Level 2 data cache long-latency read miss"
},
{
+ "EventCode": "0x400A",
+ "EventName": "L2I_CACHE_LMISS",
+ "BriefDescription": "Level 2 instruction cache long-latency miss"
+ },
+ {
"PublicDescription": "Level 3 data cache long-latency read miss. The counter counts each memory read access counted by L3D_CACHE that incurs additional latency because it returns data from outside the Level 3 data or unified cache of this processing element. The event indicates to software that the access missed in the Level 3 data or unified cache and might have a significant performance impact compared to the latency of an access that hits in the Level 3 data or unified cache.",
"EventCode": "0x400B",
"EventName": "L3D_CACHE_LMISS_RD",
@@ -522,6 +547,11 @@
"BriefDescription": "Instruction architecturally executed, SVE."
},
{
+ "EventCode": "0x8004",
+ "EventName": "SIMD_INST_SPEC",
+ "BriefDescription": "Operation speculatively executed, SIMD"
+ },
+ {
"PublicDescription": "ASE operations speculatively executed",
"EventCode": "0x8005",
"EventName": "ASE_INST_SPEC",
@@ -534,6 +564,11 @@
"BriefDescription": "SVE operations speculatively executed"
},
{
+ "EventCode": "0x8007",
+ "EventName": "ASE_SVE_INST_SPEC",
+ "BriefDescription": "Operation speculatively executed, Advanced SIMD or SVE."
+ },
+ {
"PublicDescription": "Microarchitectural operation, Operations speculatively executed.",
"EventCode": "0x8008",
"EventName": "UOP_SPEC",
@@ -552,48 +587,393 @@
"BriefDescription": "Floating-point Operations speculatively executed."
},
{
+ "EventCode": "0x8011",
+ "EventName": "ASE_FP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD."
+ },
+ {
+ "EventCode": "0x8012",
+ "EventName": "SVE_FP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE."
+ },
+ {
+ "EventCode": "0x8013",
+ "EventName": "ASE_SVE_FP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE."
+ },
+ {
"PublicDescription": "Floating-point half-precision operations speculatively executed",
"EventCode": "0x8014",
"EventName": "FP_HP_SPEC",
"BriefDescription": "Floating-point half-precision operations speculatively executed"
},
{
+ "EventCode": "0x8015",
+ "EventName": "ASE_FP_HP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD half precision."
+ },
+ {
+ "EventCode": "0x8016",
+ "EventName": "SVE_FP_HP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE half precision."
+ },
+ {
+ "EventCode": "0x8017",
+ "EventName": "ASE_SVE_FP_HP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE half precision."
+ },
+ {
"PublicDescription": "Floating-point single-precision operations speculatively executed",
"EventCode": "0x8018",
"EventName": "FP_SP_SPEC",
"BriefDescription": "Floating-point single-precision operations speculatively executed"
},
{
+ "EventCode": "0x8019",
+ "EventName": "ASE_FP_SP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD single precision."
+ },
+ {
+ "EventCode": "0x801A",
+ "EventName": "SVE_FP_SP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE single precision."
+ },
+ {
+ "EventCode": "0x801B",
+ "EventName": "ASE_SVE_FP_SP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE single precision."
+ },
+ {
"PublicDescription": "Floating-point double-precision operations speculatively executed",
"EventCode": "0x801C",
"EventName": "FP_DP_SPEC",
"BriefDescription": "Floating-point double-precision operations speculatively executed"
},
{
+ "EventCode": "0x801D",
+ "EventName": "ASE_FP_DP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD double precision."
+ },
+ {
+ "EventCode": "0x801E",
+ "EventName": "SVE_FP_DP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE double precision."
+ },
+ {
+ "EventCode": "0x801F",
+ "EventName": "ASE_SVE_FP_DP_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE double precision."
+ },
+ {
+ "EventCode": "0x8020",
+ "EventName": "FP_DIV_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, divide."
+ },
+ {
+ "EventCode": "0x8021",
+ "EventName": "ASE_FP_DIV_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD divide."
+ },
+ {
+ "EventCode": "0x8022",
+ "EventName": "SVE_FP_DIV_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE divide."
+ },
+ {
+ "EventCode": "0x8023",
+ "EventName": "ASE_SVE_FP_DIV_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE divide."
+ },
+ {
+ "EventCode": "0x8024",
+ "EventName": "FP_SQRT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, square root."
+ },
+ {
+ "EventCode": "0x8025",
+ "EventName": "ASE_FP_SQRT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD square root."
+ },
+ {
+ "EventCode": "0x8026",
+ "EventName": "SVE_FP_SQRT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE square root."
+ },
+ {
+ "EventCode": "0x8027",
+ "EventName": "ASE_SVE_FP_SQRT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE square-root."
+ },
+ {
"PublicDescription": "Floating-point FMA Operations speculatively executed.",
"EventCode": "0x8028",
"EventName": "FP_FMA_SPEC",
"BriefDescription": "Floating-point FMA Operations speculatively executed."
},
{
+ "EventCode": "0x8029",
+ "EventName": "ASE_FP_FMA_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD FMA."
+ },
+ {
+ "EventCode": "0x802A",
+ "EventName": "SVE_FP_FMA_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE FMA."
+ },
+ {
+ "EventCode": "0x802B",
+ "EventName": "ASE_SVE_FP_FMA_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE FMA."
+ },
+ {
+ "EventCode": "0x802C",
+ "EventName": "FP_MUL_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, multiply."
+ },
+ {
+ "EventCode": "0x802D",
+ "EventName": "ASE_FP_MUL_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD multiply."
+ },
+ {
+ "EventCode": "0x802E",
+ "EventName": "SVE_FP_MUL_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE multiply."
+ },
+ {
+ "EventCode": "0x802F",
+ "EventName": "ASE_SVE_FP_MUL_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE multiply."
+ },
+ {
+ "EventCode": "0x8030",
+ "EventName": "FP_ADDSUB_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, add or subtract."
+ },
+ {
+ "EventCode": "0x8031",
+ "EventName": "ASE_FP_ADDSUB_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD add or subtract."
+ },
+ {
+ "EventCode": "0x8032",
+ "EventName": "SVE_FP_ADDSUB_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE add or subtract."
+ },
+ {
+ "EventCode": "0x8033",
+ "EventName": "ASE_SVE_FP_ADDSUB_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE add or subtract."
+ },
+ {
"PublicDescription": "Floating-point reciprocal estimate Operations speculatively executed.",
"EventCode": "0x8034",
"EventName": "FP_RECPE_SPEC",
"BriefDescription": "Floating-point reciprocal estimate Operations speculatively executed."
},
{
+ "EventCode": "0x8035",
+ "EventName": "ASE_FP_RECPE_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD reciprocal estimate."
+ },
+ {
+ "EventCode": "0x8036",
+ "EventName": "SVE_FP_RECPE_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE reciprocal estimate."
+ },
+ {
+ "EventCode": "0x8037",
+ "EventName": "ASE_SVE_FP_RECPE_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE reciprocal estimate."
+ },
+ {
"PublicDescription": "floating-point convert Operations speculatively executed.",
"EventCode": "0x8038",
"EventName": "FP_CVT_SPEC",
"BriefDescription": "floating-point convert Operations speculatively executed."
},
{
+ "EventCode": "0x8039",
+ "EventName": "ASE_FP_CVT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD convert."
+ },
+ {
+ "EventCode": "0x803A",
+ "EventName": "SVE_FP_CVT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE convert."
+ },
+ {
+ "EventCode": "0x803B",
+ "EventName": "ASE_SVE_FP_CVT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE convert."
+ },
+ {
+ "EventCode": "0x803C",
+ "EventName": "SVE_FP_AREDUCE_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE accumulating reduction."
+ },
+ {
+ "EventCode": "0x803D",
+ "EventName": "ASE_FP_PREDUCE_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD pairwise add step."
+ },
+ {
+ "EventCode": "0x803E",
+ "EventName": "SVE_FP_VREDUCE_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, SVE vector reduction."
+ },
+ {
+ "EventCode": "0x803F",
+ "EventName": "ASE_SVE_FP_VREDUCE_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE vector reduction."
+ },
+ {
+ "EventCode": "0x8040",
+ "EventName": "INT_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed."
+ },
+ {
+ "EventCode": "0x8041",
+ "EventName": "ASE_INT_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD."
+ },
+ {
+ "EventCode": "0x8042",
+ "EventName": "SVE_INT_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, SVE."
+ },
+ {
"PublicDescription": "Advanced SIMD and SVE integer Operations speculatively executed.",
"EventCode": "0x8043",
"EventName": "ASE_SVE_INT_SPEC",
"BriefDescription": "Advanced SIMD and SVE integer Operations speculatively executed."
},
{
+ "EventCode": "0x8044",
+ "EventName": "INT_DIV_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, divide."
+ },
+ {
+ "EventCode": "0x8045",
+ "EventName": "INT_DIV64_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, 64-bit divide."
+ },
+ {
+ "EventCode": "0x8046",
+ "EventName": "SVE_INT_DIV_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, SVE divide."
+ },
+ {
+ "EventCode": "0x8047",
+ "EventName": "SVE_INT_DIV64_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, SVE 64-bit divide."
+ },
+ {
+ "EventCode": "0x8048",
+ "EventName": "INT_MUL_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, multiply."
+ },
+ {
+ "EventCode": "0x8049",
+ "EventName": "ASE_INT_MUL_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD multiply."
+ },
+ {
+ "EventCode": "0x804A",
+ "EventName": "SVE_INT_MUL_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, SVE multiply."
+ },
+ {
+ "EventCode": "0x804B",
+ "EventName": "ASE_SVE_INT_MUL_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE multiply."
+ },
+ {
+ "EventCode": "0x804C",
+ "EventName": "INT_MUL64_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, 64\u00d764 multiply."
+ },
+ {
+ "EventCode": "0x804D",
+ "EventName": "SVE_INT_MUL64_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, SVE 64\u00d764 multiply."
+ },
+ {
+ "EventCode": "0x804E",
+ "EventName": "INT_MULH64_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, 64\u00d764 multiply returning high part."
+ },
+ {
+ "EventCode": "0x804F",
+ "EventName": "SVE_INT_MULH64_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, SVE 64\u00d764 multiply high part."
+ },
+ {
+ "EventCode": "0x8058",
+ "EventName": "NONFP_SPEC",
+ "BriefDescription": "Non-floating-point Operation speculatively executed."
+ },
+ {
+ "EventCode": "0x8059",
+ "EventName": "ASE_NONFP_SPEC",
+ "BriefDescription": "Non-floating-point Operation speculatively executed, Advanced SIMD."
+ },
+ {
+ "EventCode": "0x805A",
+ "EventName": "SVE_NONFP_SPEC",
+ "BriefDescription": "Non-floating-point Operation speculatively executed, SVE."
+ },
+ {
+ "EventCode": "0x805B",
+ "EventName": "ASE_SVE_NONFP_SPEC",
+ "BriefDescription": "Non-floating-point Operation speculatively executed, Advanced SIMD or SVE."
+ },
+ {
+ "EventCode": "0x805D",
+ "EventName": "ASE_INT_VREDUCE_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD reduction."
+ },
+ {
+ "EventCode": "0x805E",
+ "EventName": "SVE_INT_VREDUCE_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, SVE reduction."
+ },
+ {
+ "EventCode": "0x805F",
+ "EventName": "ASE_SVE_INT_VREDUCE_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE reduction."
+ },
+ {
+ "EventCode": "0x8060",
+ "EventName": "SVE_PERM_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE permute."
+ },
+ {
+ "EventCode": "0x8065",
+ "EventName": "SVE_XPIPE_Z2R_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE vector to scalar cross-pipe."
+ },
+ {
+ "EventCode": "0x8066",
+ "EventName": "SVE_XPIPE_R2Z_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE scalar to vector cross-pipe."
+ },
+ {
+ "EventCode": "0x8068",
+ "EventName": "SVE_PGEN_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE predicate generating."
+ },
+ {
+ "EventCode": "0x8069",
+ "EventName": "SVE_PGEN_FLG_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE predicate flag setting."
+ },
+ {
+ "EventCode": "0x806D",
+ "EventName": "SVE_PPERM_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE predicate permute."
+ },
+ {
"PublicDescription": "SVE predicated Operations speculatively executed.",
"EventCode": "0x8074",
"EventName": "SVE_PRED_SPEC",
@@ -630,6 +1010,16 @@
"BriefDescription": "SVE MOVPRFX Operations speculatively executed."
},
{
+ "EventCode": "0x807D",
+ "EventName": "SVE_MOVPRFX_Z_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE MOVPRFX zeroing predication."
+ },
+ {
+ "EventCode": "0x807E",
+ "EventName": "SVE_MOVPRFX_M_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE MOVPRFX merging predication."
+ },
+ {
"PublicDescription": "SVE MOVPRFX unfused Operations speculatively executed.",
"EventCode": "0x807F",
"EventName": "SVE_MOVPRFX_U_SPEC",
@@ -696,6 +1086,16 @@
"BriefDescription": "SVE contiguous prefetch element Operations speculatively executed."
},
{
+ "EventCode": "0x80A1",
+ "EventName": "SVE_LDNT_CONTIG_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE non-temporal contiguous load element."
+ },
+ {
+ "EventCode": "0x80A2",
+ "EventName": "SVE_STNT_CONTIG_SPEC",
+ "BriefDescription": "Operation speculatively executed, SVE non-temporal contiguous store element."
+ },
+ {
"PublicDescription": "Advanced SIMD and SVE contiguous load multiple vector Operations speculatively executed.",
"EventCode": "0x80A5",
"EventName": "ASE_SVE_LD_MULTI_SPEC",
@@ -786,6 +1186,16 @@
"BriefDescription": "Non-scalable double-precision floating-point element Operations speculatively executed."
},
{
+ "EventCode": "0x80C8",
+ "EventName": "INT_SCALE_OPS_SPEC",
+ "BriefDescription": "Scalable integer element arithmetic operations Speculatively executed."
+ },
+ {
+ "EventCode": "0x80C9",
+ "EventName": "INT_FIXED_OPS_SPEC",
+ "BriefDescription": "Non-scalable integer element arithmetic operations Speculatively executed."
+ },
+ {
"PublicDescription": "Advanced SIMD and SVE 8-bit integer operations speculatively executed",
"EventCode": "0x80E3",
"EventName": "ASE_SVE_INT8_SPEC",
@@ -808,5 +1218,690 @@
"EventCode": "0x80EF",
"EventName": "ASE_SVE_INT64_SPEC",
"BriefDescription": "Advanced SIMD and SVE 64-bit integer operations speculatively executed"
+ },
+ {
+ "EventCode": "0x80F3",
+ "EventName": "ASE_SVE_FP_DOT_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE dot-product."
+ },
+ {
+ "EventCode": "0x80F7",
+ "EventName": "ASE_SVE_FP_MMLA_SPEC",
+ "BriefDescription": "Floating-point Operation speculatively executed, Advanced SIMD or SVE matrix multiply."
+ },
+ {
+ "EventCode": "0x80FB",
+ "EventName": "ASE_SVE_INT_DOT_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE dot-product."
+ },
+ {
+ "EventCode": "0x80FF",
+ "EventName": "ASE_SVE_INT_MMLA_SPEC",
+ "BriefDescription": "Integer Operation speculatively executed, Advanced SIMD or SVE matrix multiply."
+ },
+ {
+ "EventCode": "0x8108",
+ "EventName": "BR_IMMED_TAKEN_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, immediate, taken"
+ },
+ {
+ "EventCode": "0x810C",
+ "EventName": "BR_INDNR_TAKEN_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, indirect excluding procedure return, taken"
+ },
+ {
+ "EventCode": "0x8110",
+ "EventName": "BR_IMMED_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, predicted immediate"
+ },
+ {
+ "EventCode": "0x8111",
+ "EventName": "BR_IMMED_MIS_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, mispredicted immediate"
+ },
+ {
+ "EventCode": "0x8112",
+ "EventName": "BR_IND_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, predicted indirect"
+ },
+ {
+ "EventCode": "0x8113",
+ "EventName": "BR_IND_MIS_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, mispredicted indirect"
+ },
+ {
+ "EventCode": "0x8114",
+ "EventName": "BR_RETURN_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, predicted procedure return"
+ },
+ {
+ "EventCode": "0x8115",
+ "EventName": "BR_RETURN_MIS_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, mispredicted procedure return"
+ },
+ {
+ "EventCode": "0x8116",
+ "EventName": "BR_INDNR_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, predicted indirect excluding procedure return"
+ },
+ {
+ "EventCode": "0x8117",
+ "EventName": "BR_INDNR_MIS_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, mispredicted indirect excluding procedure return"
+ },
+ {
+ "EventCode": "0x8118",
+ "EventName": "BR_TAKEN_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, predicted branch, taken"
+ },
+ {
+ "EventCode": "0x8119",
+ "EventName": "BR_TAKEN_MIS_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, mispredicted branch, taken"
+ },
+ {
+ "EventCode": "0x811A",
+ "EventName": "BR_SKIP_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, predicted branch, not taken"
+ },
+ {
+ "EventCode": "0x811B",
+ "EventName": "BR_SKIP_MIS_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, mispredicted branch, not taken"
+ },
+ {
+ "EventCode": "0x811C",
+ "EventName": "BR_PRED_RETIRED",
+ "BriefDescription": "Branch instruction architecturally executed, predicted branch"
+ },
+ {
+ "EventCode": "0x811D",
+ "EventName": "BR_IND_RETIRED",
+ "BriefDescription": "Instruction architecturally executed, indirect branch"
+ },
+ {
+ "EventCode": "0x811F",
+ "EventName": "BRB_FILTRATE",
+ "BriefDescription": "Branch Record captured"
+ },
+ {
+ "EventCode": "0x8120",
+ "EventName": "INST_FETCH_PERCYC",
+ "BriefDescription": "Event in progress, INST FETCH"
+ },
+ {
+ "EventCode": "0x8121",
+ "EventName": "MEM_ACCESS_RD_PERCYC",
+ "BriefDescription": "Event in progress, MEM ACCESS RD"
+ },
+ {
+ "EventCode": "0x8124",
+ "EventName": "INST_FETCH",
+ "BriefDescription": "Instruction memory access"
+ },
+ {
+ "EventCode": "0x8128",
+ "EventName": "DTLB_WALK_PERCYC",
+ "BriefDescription": "Data translation table walks in progress."
+ },
+ {
+ "EventCode": "0x8129",
+ "EventName": "ITLB_WALK_PERCYC",
+ "BriefDescription": "Instruction translation table walks in progress."
+ },
+ {
+ "EventCode": "0x812A",
+ "EventName": "SAMPLE_FEED_BR",
+ "BriefDescription": "Statisical Profiling sample taken, branch"
+ },
+ {
+ "EventCode": "0x812B",
+ "EventName": "SAMPLE_FEED_LD",
+ "BriefDescription": "Statisical Profiling sample taken, load"
+ },
+ {
+ "EventCode": "0x812C",
+ "EventName": "SAMPLE_FEED_ST",
+ "BriefDescription": "Statisical Profiling sample taken, store"
+ },
+ {
+ "EventCode": "0x812D",
+ "EventName": "SAMPLE_FEED_OP",
+ "BriefDescription": "Statisical Profiling sample taken, matching operation type"
+ },
+ {
+ "EventCode": "0x812E",
+ "EventName": "SAMPLE_FEED_EVENT",
+ "BriefDescription": "Statisical Profiling sample taken, matching events"
+ },
+ {
+ "EventCode": "0x812F",
+ "EventName": "SAMPLE_FEED_LAT",
+ "BriefDescription": "Statisical Profiling sample taken, exceeding minimum latency"
+ },
+ {
+ "EventCode": "0x8130",
+ "EventName": "L1D_TLB_RW",
+ "BriefDescription": "Level 1 data TLB demand access"
+ },
+ {
+ "EventCode": "0x8131",
+ "EventName": "L1I_TLB_RD",
+ "BriefDescription": "Level 1 instruction TLB demand access"
+ },
+ {
+ "EventCode": "0x8132",
+ "EventName": "L1D_TLB_PRFM",
+ "BriefDescription": "Level 1 data TLB software preload"
+ },
+ {
+ "EventCode": "0x8133",
+ "EventName": "L1I_TLB_PRFM",
+ "BriefDescription": "Level 1 instruction TLB software preload"
+ },
+ {
+ "EventCode": "0x8134",
+ "EventName": "DTLB_HWUPD",
+ "BriefDescription": "Data TLB hardware update of translation table"
+ },
+ {
+ "EventCode": "0x8135",
+ "EventName": "ITLB_HWUPD",
+ "BriefDescription": "Instruction TLB hardware update of translation table"
+ },
+ {
+ "EventCode": "0x8136",
+ "EventName": "DTLB_STEP",
+ "BriefDescription": "Data TLB translation table walk, step."
+ },
+ {
+ "EventCode": "0x8137",
+ "EventName": "ITLB_STEP",
+ "BriefDescription": "Instruction TLB translation table walk, step."
+ },
+ {
+ "EventCode": "0x8138",
+ "EventName": "DTLB_WALK_LARGE",
+ "BriefDescription": "Data TLB large page translation table walk."
+ },
+ {
+ "EventCode": "0x8139",
+ "EventName": "ITLB_WALK_LARGE",
+ "BriefDescription": "Instruction TLB large page translation table walk."
+ },
+ {
+ "EventCode": "0x813A",
+ "EventName": "DTLB_WALK_SMALL",
+ "BriefDescription": "Data TLB small page translation table walk."
+ },
+ {
+ "EventCode": "0x813B",
+ "EventName": "ITLB_WALK_SMALL",
+ "BriefDescription": "Instruction TLB small page translation table walk."
+ },
+ {
+ "EventCode": "0x813C",
+ "EventName": "DTLB_WALK_RW",
+ "BriefDescription": "Data TLB demand access with at least one translation table walk"
+ },
+ {
+ "EventCode": "0x813D",
+ "EventName": "ITLB_WALK_RD",
+ "BriefDescription": "Instruction TLB demand access with at least one translation table walk"
+ },
+ {
+ "EventCode": "0x813E",
+ "EventName": "DTLB_WALK_PRFM",
+ "BriefDescription": "Data TLB software preload access with at least one translation table walk"
+ },
+ {
+ "EventCode": "0x813F",
+ "EventName": "ITLB_WALK_PRFM",
+ "BriefDescription": "Instruction TLB software preload access with at least one translation table walk"
+ },
+ {
+ "EventCode": "0x8140",
+ "EventName": "L1D_CACHE_RW",
+ "BriefDescription": "Level 1 data cache demand access"
+ },
+ {
+ "EventCode": "0x8141",
+ "EventName": "L1I_CACHE_RD",
+ "BriefDescription": "Level 1 instruction cache demand fetch"
+ },
+ {
+ "EventCode": "0x8142",
+ "EventName": "L1D_CACHE_PRFM",
+ "BriefDescription": "Level 1 data cache software preload"
+ },
+ {
+ "EventCode": "0x8143",
+ "EventName": "L1I_CACHE_PRFM",
+ "BriefDescription": "Level 1 instruction cache software preload"
+ },
+ {
+ "EventCode": "0x8144",
+ "EventName": "L1D_CACHE_MISS",
+ "BriefDescription": "Level 1 data cache demand access miss."
+ },
+ {
+ "EventCode": "0x8145",
+ "EventName": "L1I_CACHE_HWPRF",
+ "BriefDescription": "Level 1 instruction cache hardware prefetch."
+ },
+ {
+ "EventCode": "0x8146",
+ "EventName": "L1D_CACHE_REFILL_PRFM",
+ "BriefDescription": "Level 1 data cache refill, software preload"
+ },
+ {
+ "EventCode": "0x8147",
+ "EventName": "L1I_CACHE_REFILL_PRFM",
+ "BriefDescription": "Level 1 instruction cache refill, software preload"
+ },
+ {
+ "EventCode": "0x8148",
+ "EventName": "L2D_CACHE_RW",
+ "BriefDescription": "Level 2 data cache demand access"
+ },
+ {
+ "EventCode": "0x8149",
+ "EventName": "L2I_CACHE_RD",
+ "BriefDescription": "Level 2 instruction cache demand fetch"
+ },
+ {
+ "EventCode": "0x814A",
+ "EventName": "L2D_CACHE_PRFM",
+ "BriefDescription": "Level 2 data cache software preload"
+ },
+ {
+ "EventCode": "0x814C",
+ "EventName": "L2D_CACHE_MISS",
+ "BriefDescription": "Level 2 data cache demand access miss."
+ },
+ {
+ "EventCode": "0x814E",
+ "EventName": "L2D_CACHE_REFILL_PRFM",
+ "BriefDescription": "Level 2 data cache refill, software preload"
+ },
+ {
+ "EventCode": "0x8152",
+ "EventName": "L3D_CACHE_MISS",
+ "BriefDescription": "Level 3 data cache demand access miss"
+ },
+ {
+ "EventCode": "0x8154",
+ "EventName": "L1D_CACHE_HWPRF",
+ "BriefDescription": "Level 1 data cache hardware prefetch."
+ },
+ {
+ "EventCode": "0x8155",
+ "EventName": "L2D_CACHE_HWPRF",
+ "BriefDescription": "Level 2 data cache hardware prefetch."
+ },
+ {
+ "EventCode": "0x8158",
+ "EventName": "STALL_FRONTEND_MEMBOUND",
+ "BriefDescription": "Frontend stall cycles, memory bound."
+ },
+ {
+ "EventCode": "0x8159",
+ "EventName": "STALL_FRONTEND_L1I",
+ "BriefDescription": "Frontend stall cycles, level 1 instruction cache."
+ },
+ {
+ "EventCode": "0x815A",
+ "EventName": "STALL_FRONTEND_L2I",
+ "BriefDescription": "Frontend stall cycles, level 2 instruction cache."
+ },
+ {
+ "EventCode": "0x815B",
+ "EventName": "STALL_FRONTEND_MEM",
+ "BriefDescription": "Frontend stall cycles, last level PE cache or memory."
+ },
+ {
+ "EventCode": "0x815C",
+ "EventName": "STALL_FRONTEND_TLB",
+ "BriefDescription": "Frontend stall cycles, TLB."
+ },
+ {
+ "EventCode": "0x8160",
+ "EventName": "STALL_FRONTEND_CPUBOUND",
+ "BriefDescription": "Frontend stall cycles, processor bound."
+ },
+ {
+ "EventCode": "0x8161",
+ "EventName": "STALL_FRONTEND_FLOW",
+ "BriefDescription": "Frontend stall cycles, flow control."
+ },
+ {
+ "EventCode": "0x8162",
+ "EventName": "STALL_FRONTEND_FLUSH",
+ "BriefDescription": "Frontend stall cycles, flush recovery."
+ },
+ {
+ "EventCode": "0x8163",
+ "EventName": "STALL_FRONTEND_RENAME",
+ "BriefDescription": "Frontend stall cycles, rename full."
+ },
+ {
+ "EventCode": "0x8164",
+ "EventName": "STALL_BACKEND_MEMBOUND",
+ "BriefDescription": "Backend stall cycles, memory bound."
+ },
+ {
+ "EventCode": "0x8165",
+ "EventName": "STALL_BACKEND_L1D",
+ "BriefDescription": "Backend stall cycles, level 1 data cache."
+ },
+ {
+ "EventCode": "0x8166",
+ "EventName": "STALL_BACKEND_L2D",
+ "BriefDescription": "Backend stall cycles, level 2 data cache."
+ },
+ {
+ "EventCode": "0x8167",
+ "EventName": "STALL_BACKEND_TLB",
+ "BriefDescription": "Backend stall cycles, TLB."
+ },
+ {
+ "EventCode": "0x8168",
+ "EventName": "STALL_BACKEND_ST",
+ "BriefDescription": "Backend stall cycles, store."
+ },
+ {
+ "EventCode": "0x816A",
+ "EventName": "STALL_BACKEND_CPUBOUND",
+ "BriefDescription": "Backend stall cycles, processor bound."
+ },
+ {
+ "EventCode": "0x816B",
+ "EventName": "STALL_BACKEND_BUSY",
+ "BriefDescription": "Backend stall cycles, backend busy."
+ },
+ {
+ "EventCode": "0x816C",
+ "EventName": "STALL_BACKEND_ILOCK",
+ "BriefDescription": "Backend stall cycles, input dependency."
+ },
+ {
+ "EventCode": "0x816D",
+ "EventName": "STALL_BACKEND_RENAME",
+ "BriefDescription": "Backend stall cycles, rename full."
+ },
+ {
+ "EventCode": "0x816E",
+ "EventName": "STALL_BACKEND_ATOMIC",
+ "BriefDescription": "Backend stall cycles, atomic operation."
+ },
+ {
+ "EventCode": "0x816F",
+ "EventName": "STALL_BACKEND_MEMCPYSET",
+ "BriefDescription": "Backend stall cycles, Memory Copy or Set operation."
+ },
+ {
+ "EventCode": "0x8171",
+ "EventName": "CAS_NEAR_PASS",
+ "BriefDescription": "Atomic memory Operation speculatively executed, Compare and Swap pass"
+ },
+ {
+ "EventCode": "0x8172",
+ "EventName": "CAS_NEAR_SPEC",
+ "BriefDescription": "Atomic memory Operation speculatively executed, Compare and Swap near"
+ },
+ {
+ "EventCode": "0x8173",
+ "EventName": "CAS_FAR_SPEC",
+ "BriefDescription": "Atomic memory Operation speculatively executed, Compare and Swap far"
+ },
+ {
+ "EventCode": "0x8186",
+ "EventName": "UOP_RETIRED",
+ "BriefDescription": "Micro-operation architecturally executed."
+ },
+ {
+ "EventCode": "0x8188",
+ "EventName": "DTLB_WALK_BLOCK",
+ "BriefDescription": "Data TLB block translation table walk."
+ },
+ {
+ "EventCode": "0x8189",
+ "EventName": "ITLB_WALK_BLOCK",
+ "BriefDescription": "Instruction TLB block translation table walk."
+ },
+ {
+ "EventCode": "0x818A",
+ "EventName": "DTLB_WALK_PAGE",
+ "BriefDescription": "Data TLB page translation table walk."
+ },
+ {
+ "EventCode": "0x818B",
+ "EventName": "ITLB_WALK_PAGE",
+ "BriefDescription": "Instruction TLB page translation table walk."
+ },
+ {
+ "EventCode": "0x81B8",
+ "EventName": "L1I_CACHE_REFILL_HWPRF",
+ "BriefDescription": "Level 1 instruction cache refill, hardware prefetch."
+ },
+ {
+ "EventCode": "0x81BC",
+ "EventName": "L1D_CACHE_REFILL_HWPRF",
+ "BriefDescription": "Level 1 data cache refill, hardware prefetch."
+ },
+ {
+ "EventCode": "0x81BD",
+ "EventName": "L2D_CACHE_REFILL_HWPRF",
+ "BriefDescription": "Level 2 data cache refill, hardware prefetch."
+ },
+ {
+ "EventCode": "0x81C0",
+ "EventName": "L1I_CACHE_HIT_RD",
+ "BriefDescription": "Level 1 instruction cache demand fetch hit."
+ },
+ {
+ "EventCode": "0x81C4",
+ "EventName": "L1D_CACHE_HIT_RD",
+ "BriefDescription": "Level 1 data cache demand access hit, read."
+ },
+ {
+ "EventCode": "0x81C5",
+ "EventName": "L2D_CACHE_HIT_RD",
+ "BriefDescription": "Level 2 data cache demand access hit, read."
+ },
+ {
+ "EventCode": "0x81C8",
+ "EventName": "L1D_CACHE_HIT_WR",
+ "BriefDescription": "Level 1 data cache demand access hit, write."
+ },
+ {
+ "EventCode": "0x81C9",
+ "EventName": "L2D_CACHE_HIT_WR",
+ "BriefDescription": "Level 2 data cache demand access hit, write."
+ },
+ {
+ "EventCode": "0x81D0",
+ "EventName": "L1I_CACHE_HIT_RD_FPRFM",
+ "BriefDescription": "Level 1 instruction cache demand fetch first hit, fetched by software preload"
+ },
+ {
+ "EventCode": "0x81E0",
+ "EventName": "L1I_CACHE_HIT_RD_FHWPRF",
+ "BriefDescription": "Level 1 instruction cache demand fetch first hit, fetched by hardware prefetcher"
+ },
+ {
+ "EventCode": "0x8200",
+ "EventName": "L1I_CACHE_HIT",
+ "BriefDescription": "Level 1 instruction cache hit."
+ },
+ {
+ "EventCode": "0x8204",
+ "EventName": "L1D_CACHE_HIT",
+ "BriefDescription": "Level 1 data cache hit."
+ },
+ {
+ "EventCode": "0x8205",
+ "EventName": "L2D_CACHE_HIT",
+ "BriefDescription": "Level 2 data cache hit."
+ },
+ {
+ "EventCode": "0x8208",
+ "EventName": "L1I_CACHE_HIT_PRFM",
+ "BriefDescription": "Level 1 instruction cache software preload hit"
+ },
+ {
+ "EventCode": "0x8240",
+ "EventName": "L1I_LFB_HIT_RD",
+ "BriefDescription": "Level 1 instruction cache demand fetch line-fill buffer hit."
+ },
+ {
+ "EventCode": "0x8244",
+ "EventName": "L1D_LFB_HIT_RD",
+ "BriefDescription": "Level 1 data cache demand access line-fill buffer hit, read."
+ },
+ {
+ "EventCode": "0x8245",
+ "EventName": "L2D_LFB_HIT_RD",
+ "BriefDescription": "Level 2 data cache demand access line-fill buffer hit, read."
+ },
+ {
+ "EventCode": "0x8248",
+ "EventName": "L1D_LFB_HIT_WR",
+ "BriefDescription": "Level 1 data cache demand access line-fill buffer hit, write."
+ },
+ {
+ "EventCode": "0x8249",
+ "EventName": "L2D_LFB_HIT_WR",
+ "BriefDescription": "Level 2 data cache demand access line-fill buffer hit, write."
+ },
+ {
+ "EventCode": "0x8250",
+ "EventName": "L1I_LFB_HIT_RD_FPRFM",
+ "BriefDescription": "Level 1 instruction cache demand fetch line-fill buffer first hit, recently fetched by software preload"
+ },
+ {
+ "EventCode": "0x8260",
+ "EventName": "L1I_LFB_HIT_RD_FHWPRF",
+ "BriefDescription": "Level 1 instruction cache demand fetch line-fill buffer first hit, recently fetched by hardware prefetcher"
+ },
+ {
+ "EventCode": "0x8280",
+ "EventName": "L1I_CACHE_PRF",
+ "BriefDescription": "Level 1 instruction cache, preload or prefetch hit."
+ },
+ {
+ "EventCode": "0x8284",
+ "EventName": "L1D_CACHE_PRF",
+ "BriefDescription": "Level 1 data cache, preload or prefetch hit."
+ },
+ {
+ "EventCode": "0x8285",
+ "EventName": "L2D_CACHE_PRF",
+ "BriefDescription": "Level 2 data cache, preload or prefetch hit."
+ },
+ {
+ "EventCode": "0x8288",
+ "EventName": "L1I_CACHE_REFILL_PRF",
+ "BriefDescription": "Level 1 instruction cache refill, preload or prefetch hit."
+ },
+ {
+ "EventCode": "0x828C",
+ "EventName": "L1D_CACHE_REFILL_PRF",
+ "BriefDescription": "Level 1 data cache refill, preload or prefetch hit."
+ },
+ {
+ "EventCode": "0x828D",
+ "EventName": "L2D_CACHE_REFILL_PRF",
+ "BriefDescription": "Level 2 data cache refill, preload or prefetch hit."
+ },
+ {
+ "EventCode": "0x829A",
+ "EventName": "LL_CACHE_REFILL",
+ "BriefDescription": "Last level cache refill"
+ },
+ {
+ "EventCode": "0x8320",
+ "EventName": "L1D_CACHE_REFILL_PERCYC",
+ "BriefDescription": "Level 1 data or unified cache refills in progress."
+ },
+ {
+ "EventCode": "0x8321",
+ "EventName": "L2D_CACHE_REFILL_PERCYC",
+ "BriefDescription": "Level 2 data or unified cache refills in progress."
+ },
+ {
+ "EventCode": "0x8324",
+ "EventName": "L1I_CACHE_REFILL_PERCYC",
+ "BriefDescription": "Level 1 instruction or unified cache refills in progress."
+ },
+ {
+ "EventCode": "0x8431",
+ "EventName": "ASE_FP_VREDUCE_SPEC",
+ "BriefDescription": "Floating-point operation_speculatively_executed, Advanced SIMD pairwise or reduction."
+ },
+ {
+ "EventCode": "0x8432",
+ "EventName": "SVE_FP_PREDUCE_SPEC",
+ "BriefDescription": "Floating-point operation_speculatively_executed, Advanced SIMD pairwise add step or pairwise reduce step."
+ },
+ {
+ "EventCode": "0x8443",
+ "EventName": "ASE_FP_BF16_MIN_SPEC",
+ "BriefDescription": "Advanced SIMD data processing operation speculatively_executed, smallest type is BFloat16 floating-point."
+ },
+ {
+ "EventCode": "0x8444",
+ "EventName": "ASE_FP_FP8_MIN_SPEC",
+ "BriefDescription": "Advanced SIMD data processing operation speculatively_executed, smallest type is 8-bit floating-point."
+ },
+ {
+ "EventCode": "0x844B",
+ "EventName": "ASE_SVE_FP_BF16_MIN_SPEC",
+ "BriefDescription": "Advanced SIMD data processing or SVE data processing operation speculatively_executed, smallest type is BFloat16 floating-point."
+ },
+ {
+ "EventCode": "0x844C",
+ "EventName": "ASE_SVE_FP_FP8_MIN_SPEC",
+ "BriefDescription": "Advanced SIMD data processing or SVE data processing operation speculatively_executed, smallest type is 8-bit floating-point."
+ },
+ {
+ "EventCode": "0x8463",
+ "EventName": "SVE_FP_BF16_MIN_SPEC",
+ "BriefDescription": "SVE data processing operation speculatively_executed, smallest type is BFloat16 floating-point."
+ },
+ {
+ "EventCode": "0x8464",
+ "EventName": "SVE_FP_FP8_MIN_SPEC",
+ "BriefDescription": "SVE data processing operation speculatively_executed, smallest type is 8-bit floating-point."
+ },
+ {
+ "EventCode": "0x8473",
+ "EventName": "FP_BF16_MIN_SPEC",
+ "BriefDescription": "Floating-point operation speculatively_executed, smallest type is BFloat16 floating-point."
+ },
+ {
+ "EventCode": "0x8474",
+ "EventName": "FP_FP8_MIN_SPEC",
+ "BriefDescription": "Floating-point operation speculatively_executed, smallest type is 8-bit floating-point."
+ },
+ {
+ "EventCode": "0x8483",
+ "EventName": "FP_BF16_FIXED_MIN_OPS_SPEC",
+ "BriefDescription": "Non-scalable element arithmetic operations speculatively executed, smallest type is BFloat16 floating-point."
+ },
+ {
+ "EventCode": "0x8484",
+ "EventName": "FP_FP8_FIXED_MIN_OPS_SPEC",
+ "BriefDescription": "Non-scalable element arithmetic operations speculatively executed, smallest type is 8-bit floating-point."
+ },
+ {
+ "EventCode": "0x848B",
+ "EventName": "FP_BF16_SCALE_MIN_OPS_SPEC",
+ "BriefDescription": "Scalable element arithmetic operations speculatively executed, smallest type is BFloat16 floating-point."
+ },
+ {
+ "EventCode": "0x848C",
+ "EventName": "FP_FP8_SCALE_MIN_OPS_SPEC",
+ "BriefDescription": "Scalable element arithmetic operations speculatively executed, smallest type is 8-bit floating-point."
}
]
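A note on the *_PERCYC events above: they accumulate, on every cycle, the number of refills currently in flight, so two derived figures fall out directly over a measurement window: the average number of outstanding refills (divide by cycles) and, via Little's law, an approximate average refill latency in cycles (divide by the matching refill count). A minimal Python sketch; the counter values are invented and CPU_CYCLES is assumed to be collected alongside the refill events.

# Rough sketch only: *_PERCYC adds the in-flight refill count each cycle.
#   average outstanding refills      = PERCYC / cycles
#   average refill latency (cycles) ~= PERCYC / refill count   (Little's law)
def refill_stats(percyc, refills, cycles):
    return {
        "avg_outstanding": percyc / cycles,
        "avg_latency_cycles": percyc / refills if refills else 0.0,
    }

print(refill_stats(percyc=5.0e8, refills=2.5e6, cycles=1.0e9))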
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/imx91/sys/ddrc.json b/tools/perf/pmu-events/arch/arm64/freescale/imx91/sys/ddrc.json
new file mode 100644
index 000000000000..74ac12660a29
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/freescale/imx91/sys/ddrc.json
@@ -0,0 +1,9 @@
+[
+ {
+ "BriefDescription": "ddr cycles event",
+ "EventCode": "0x00",
+ "EventName": "imx91_ddr.cycles",
+ "Unit": "imx9_ddr",
+ "Compat": "imx91"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/imx91/sys/metrics.json b/tools/perf/pmu-events/arch/arm64/freescale/imx91/sys/metrics.json
new file mode 100644
index 000000000000..f0c5911eb2d0
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/freescale/imx91/sys/metrics.json
@@ -0,0 +1,26 @@
+[
+ {
+ "BriefDescription": "bandwidth usage for lpddr4 evk board",
+ "MetricName": "imx91_bandwidth_usage.lpddr4",
+ "MetricExpr": "(((( imx9_ddr0@ddrc_pm_0@ ) * 2 * 8 ) + (( imx9_ddr0@ddrc_pm_3@ + imx9_ddr0@ddrc_pm_5@ + imx9_ddr0@ddrc_pm_7@ + imx9_ddr0@ddrc_pm_9@ - imx9_ddr0@ddrc_pm_2@ - imx9_ddr0@ddrc_pm_4@ - imx9_ddr0@ddrc_pm_6@ - imx9_ddr0@ddrc_pm_8@ ) * 32 )) / duration_time) / (2400 * 1000000 * 2)",
+ "ScaleUnit": "1e2%",
+ "Unit": "imx9_ddr",
+ "Compat": "imx91"
+ },
+ {
+    "BriefDescription": "bytes of all masters read from ddr",
+ "MetricName": "imx91_ddr_read.all",
+ "MetricExpr": "( imx9_ddr0@ddrc_pm_0@ ) * 2 * 8",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx91"
+ },
+ {
+    "BriefDescription": "bytes of all masters write to ddr",
+ "MetricName": "imx91_ddr_write.all",
+ "MetricExpr": "( imx9_ddr0@ddrc_pm_3@ + imx9_ddr0@ddrc_pm_5@ + imx9_ddr0@ddrc_pm_7@ + imx9_ddr0@ddrc_pm_9@ - imx9_ddr0@ddrc_pm_2@ - imx9_ddr0@ddrc_pm_4@ - imx9_ddr0@ddrc_pm_6@ - imx9_ddr0@ddrc_pm_8@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx91"
+ }
+]
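The bandwidth metric above is ordinary counter arithmetic: the read term (ddrc_pm_0 * 2 * 8) plus the write term ((ddrc_pm_3 + ddrc_pm_5 + ddrc_pm_7 + ddrc_pm_9 - ddrc_pm_2 - ddrc_pm_4 - ddrc_pm_6 - ddrc_pm_8) * 32), divided by duration_time and by the peak term (2400 * 1000000 * 2) from the expression. A minimal Python sketch of the same arithmetic; the function name and the counter deltas are invented for illustration, not the perf/jevents implementation.

# Sketch only: evaluate the imx91 lpddr4 bandwidth-usage expression from
# already-collected ddrc_pm_* counter deltas.
def imx91_bandwidth_usage_lpddr4(pm, duration_s):
    """pm: dict mapping N -> delta of imx9_ddr0/ddrc_pm_N/ over the window."""
    read_bytes = pm[0] * 2 * 8
    write_bytes = (pm[3] + pm[5] + pm[7] + pm[9]
                   - pm[2] - pm[4] - pm[6] - pm[8]) * 32
    ratio = ((read_bytes + write_bytes) / duration_s) / (2400 * 1000000 * 2)
    return ratio * 100.0          # ScaleUnit "1e2%" -> report as a percentage

pm = {0: 1.2e7, 2: 0, 3: 4.0e5, 4: 0, 5: 3.0e5, 6: 0, 7: 2.0e5, 8: 0, 9: 1.0e5}
print(f"{imx91_bandwidth_usage_lpddr4(pm, duration_s=1.0):.2f}%")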
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/imx93/sys/ddrc.json b/tools/perf/pmu-events/arch/arm64/freescale/imx93/sys/ddrc.json
new file mode 100644
index 000000000000..eeeae4d49fce
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/freescale/imx93/sys/ddrc.json
@@ -0,0 +1,9 @@
+[
+ {
+ "BriefDescription": "ddr cycles event",
+ "EventCode": "0x00",
+ "EventName": "imx93_ddr.cycles",
+ "Unit": "imx9_ddr",
+ "Compat": "imx93"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/imx93/sys/metrics.json b/tools/perf/pmu-events/arch/arm64/freescale/imx93/sys/metrics.json
new file mode 100644
index 000000000000..4d2454ca1259
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/freescale/imx93/sys/metrics.json
@@ -0,0 +1,26 @@
+[
+ {
+ "BriefDescription": "bandwidth usage for lpddr4x evk board",
+ "MetricName": "imx93_bandwidth_usage.lpddr4x",
+ "MetricExpr": "(((( imx9_ddr0@ddrc_pm_0@ ) * 2 * 8 ) + (( imx9_ddr0@ddrc_pm_3@ + imx9_ddr0@ddrc_pm_5@ + imx9_ddr0@ddrc_pm_7@ + imx9_ddr0@ddrc_pm_9@ - imx9_ddr0@ddrc_pm_2@ - imx9_ddr0@ddrc_pm_4@ - imx9_ddr0@ddrc_pm_6@ - imx9_ddr0@ddrc_pm_8@ ) * 32 )) / duration_time) / (3733 * 1000000 * 2)",
+ "ScaleUnit": "1e2%",
+ "Unit": "imx9_ddr",
+ "Compat": "imx93"
+ },
+ {
+    "BriefDescription": "bytes of all masters read from ddr",
+ "MetricName": "imx93_ddr_read.all",
+ "MetricExpr": "( imx9_ddr0@ddrc_pm_0@ ) * 2 * 8",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx93"
+ },
+ {
+    "BriefDescription": "bytes of all masters write to ddr",
+ "MetricName": "imx93_ddr_write.all",
+ "MetricExpr": "( imx9_ddr0@ddrc_pm_3@ + imx9_ddr0@ddrc_pm_5@ + imx9_ddr0@ddrc_pm_7@ + imx9_ddr0@ddrc_pm_9@ - imx9_ddr0@ddrc_pm_2@ - imx9_ddr0@ddrc_pm_4@ - imx9_ddr0@ddrc_pm_6@ - imx9_ddr0@ddrc_pm_8@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx93"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/imx95/sys/ddrc.json b/tools/perf/pmu-events/arch/arm64/freescale/imx95/sys/ddrc.json
new file mode 100644
index 000000000000..4dc9d2968bdc
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/freescale/imx95/sys/ddrc.json
@@ -0,0 +1,9 @@
+[
+ {
+ "BriefDescription": "ddr cycles event",
+ "EventCode": "0x00",
+ "EventName": "imx95_ddr.cycles",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/imx95/sys/metrics.json b/tools/perf/pmu-events/arch/arm64/freescale/imx95/sys/metrics.json
new file mode 100644
index 000000000000..45a0d51dfb63
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/freescale/imx95/sys/metrics.json
@@ -0,0 +1,882 @@
+[
+ {
+ "BriefDescription": "bandwidth usage for lpddr5 evk board",
+ "MetricName": "imx95_bandwidth_usage.lpddr5",
+ "MetricExpr": "(( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x000\\,axi_id\\=0x000@ + imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x000\\,axi_id\\=0x000@ ) * 32 / duration_time) / (6400 * 1000000 * 4)",
+ "ScaleUnit": "1e2%",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bandwidth usage for lpddr4x evk board",
+ "MetricName": "imx95_bandwidth_usage.lpddr4x",
+ "MetricExpr": "(( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x000\\,axi_id\\=0x000@ + imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x000\\,axi_id\\=0x000@ ) * 32 / duration_time) / (4000 * 1000000 * 4)",
+ "ScaleUnit": "1e2%",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all masters read from ddr",
+ "MetricName": "imx95_ddr_read.all",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x000\\,axi_id\\=0x000@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all masters write to ddr",
+ "MetricName": "imx95_ddr_write.all",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x000\\,axi_id\\=0x000@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all a55 read from ddr",
+ "MetricName": "imx95_ddr_read.a55_all",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3fc\\,axi_id\\=0x000@ + imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3fe\\,axi_id\\=0x004@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all a55 write to ddr (part1)",
+ "MetricName": "imx95_ddr_write.a55_all_1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3fc\\,axi_id\\=0x000@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all a55 write to ddr (part2)",
+ "MetricName": "imx95_ddr_write.a55_all_2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3fe\\,axi_id\\=0x004@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 0 read from ddr",
+ "MetricName": "imx95_ddr_read.a55_0",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3ff\\,axi_id\\=0x000@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 0 write to ddr",
+ "MetricName": "imx95_ddr_write.a55_0",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3ff\\,axi_id\\=0x000@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 1 read from ddr",
+ "MetricName": "imx95_ddr_read.a55_1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x00f\\,axi_id\\=0x001@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 1 write to ddr",
+ "MetricName": "imx95_ddr_write.a55_1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x001@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 2 read from ddr",
+ "MetricName": "imx95_ddr_read.a55_2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x00f\\,axi_id\\=0x002@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 2 write to ddr",
+ "MetricName": "imx95_ddr_write.a55_2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x002@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 3 read from ddr",
+ "MetricName": "imx95_ddr_read.a55_3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x00f\\,axi_id\\=0x003@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 3 write to ddr",
+ "MetricName": "imx95_ddr_write.a55_3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x003@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 4 read from ddr",
+ "MetricName": "imx95_ddr_read.a55_4",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x00f\\,axi_id\\=0x004@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 4 write to ddr",
+ "MetricName": "imx95_ddr_write.a55_4",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x004@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 5 read from ddr",
+ "MetricName": "imx95_ddr_read.a55_5",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x00f\\,axi_id\\=0x005@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of a55 core 5 write to ddr",
+ "MetricName": "imx95_ddr_write.a55_5",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x005@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of Cortex-A DSU L3 evicted/ACP transactions read from ddr",
+ "MetricName": "imx95_ddr_read.cortexa_dsu_l3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x00f\\,axi_id\\=0x007@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of Cortex-A DSU L3 evicted/ACP transactions write to ddr",
+ "MetricName": "imx95_ddr_write.cortexa_dsu_l3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x007@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of m33 read from ddr",
+ "MetricName": "imx95_ddr_read.m33",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x00f\\,axi_id\\=0x008@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of m33 write to ddr",
+ "MetricName": "imx95_ddr_write.m33",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x008@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of m7 read from ddr",
+ "MetricName": "imx95_ddr_read.m7",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x00f\\,axi_id\\=0x009@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of m7 write to ddr",
+ "MetricName": "imx95_ddr_write.m7",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x009@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of sentinel read from ddr",
+ "MetricName": "imx95_ddr_read.sentinel",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x00f\\,axi_id\\=0x00a@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of sentinel write to ddr",
+ "MetricName": "imx95_ddr_write.sentinel",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x00a@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of edma1 read from ddr",
+ "MetricName": "imx95_ddr_read.edma1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x00f\\,axi_id\\=0x00b@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of edma1 write to ddr",
+ "MetricName": "imx95_ddr_write.edma1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x00b@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of edma2 read from ddr",
+ "MetricName": "imx95_ddr_read.edma2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x00f\\,axi_id\\=0x00c@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of edma2 write to ddr",
+ "MetricName": "imx95_ddr_write.edma2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x00c@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of netc read from ddr",
+ "MetricName": "imx95_ddr_read.netc",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x00f\\,axi_id\\=0x00d@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of netc write to ddr",
+ "MetricName": "imx95_ddr_write.netc",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x00d@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of npu read from ddr",
+ "MetricName": "imx95_ddr_read.npu",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x010@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of npu write to ddr",
+ "MetricName": "imx95_ddr_write.npu",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x010@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of gpu read from ddr",
+ "MetricName": "imx95_ddr_read.gpu",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x020@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of gpu write to ddr",
+ "MetricName": "imx95_ddr_write.gpu",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x020@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usdhc1 read from ddr",
+ "MetricName": "imx95_ddr_read.usdhc1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x0b0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usdhc1 write to ddr",
+ "MetricName": "imx95_ddr_write.usdhc1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x0b0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usdhc2 read from ddr",
+ "MetricName": "imx95_ddr_read.usdhc2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x0c0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usdhc2 write to ddr",
+ "MetricName": "imx95_ddr_write.usdhc2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x0c0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usdhc3 read from ddr",
+ "MetricName": "imx95_ddr_read.usdhc3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x0d0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usdhc3 write to ddr",
+ "MetricName": "imx95_ddr_write.usdhc3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x0d0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of xspi read from ddr",
+ "MetricName": "imx95_ddr_read.xspi",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x0f0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of xspi write to ddr",
+ "MetricName": "imx95_ddr_write.xspi",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x0f0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie1 read from ddr",
+ "MetricName": "imx95_ddr_read.pcie1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x100@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie1 write to ddr",
+ "MetricName": "imx95_ddr_write.pcie1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x100@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie2 read from ddr",
+ "MetricName": "imx95_ddr_read.pcie2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x00f\\,axi_id\\=0x006@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie2 write to ddr",
+ "MetricName": "imx95_ddr_write.pcie2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x00f\\,axi_id\\=0x006@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie3 read from ddr",
+ "MetricName": "imx95_ddr_read.pcie3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x120@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie3 write to ddr",
+ "MetricName": "imx95_ddr_write.pcie3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x120@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie4 read from ddr",
+ "MetricName": "imx95_ddr_read.pcie4",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x130@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of pcie4 write to ddr",
+ "MetricName": "imx95_ddr_write.pcie4",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x130@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usb1 read from ddr",
+ "MetricName": "imx95_ddr_read.usb1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x140@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usb1 write to ddr",
+ "MetricName": "imx95_ddr_write.usb1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x140@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usb2 read from ddr",
+ "MetricName": "imx95_ddr_read.usb2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x150@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of usb2 write to ddr",
+ "MetricName": "imx95_ddr_write.usb2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x150@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of vpu codec primary bus read from ddr",
+ "MetricName": "imx95_ddr_read.vpu_primy",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x180@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of vpu codec primary bus write to ddr",
+ "MetricName": "imx95_ddr_write.vpu_primy",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x180@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of vpu codec secondary bus read from ddr",
+ "MetricName": "imx95_ddr_read.vpu_secndy",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x190@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of vpu codec secondary bus write to ddr",
+ "MetricName": "imx95_ddr_write.vpu_secndy",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x190@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of jpeg decoder read from ddr",
+ "MetricName": "imx95_ddr_read.jpeg_dec",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x1a0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of jpeg decoder write to ddr",
+ "MetricName": "imx95_ddr_write.jpeg_dec",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x1a0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of jpeg encoder read from ddr",
+    "MetricName": "imx95_ddr_read.jpeg_enc",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x1b0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of jpeg encoder write to ddr",
+ "MetricName": "imx95_ddr_write.jpeg_enc",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x1b0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all vpu submodules read from ddr",
+ "MetricName": "imx95_ddr_read.vpu_all",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x380\\,axi_id\\=0x180@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all vpu submodules write to ddr",
+ "MetricName": "imx95_ddr_write.vpu_all",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x380\\,axi_id\\=0x180@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of cortex m0+ read from ddr",
+ "MetricName": "imx95_ddr_read.m0",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x200@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of cortex m0+ write to ddr",
+ "MetricName": "imx95_ddr_write.m0",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x200@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of camera edma read from ddr",
+ "MetricName": "imx95_ddr_read.camera_edma",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x210@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of camera edma write to ddr",
+ "MetricName": "imx95_ddr_write.camera_edma",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x210@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi rd read from ddr",
+ "MetricName": "imx95_ddr_read.isi_rd",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x220@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi rd write to ddr",
+ "MetricName": "imx95_ddr_write.isi_rd",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x220@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi wr y read from ddr",
+ "MetricName": "imx95_ddr_read.isi_wr_y",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x230@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi wr y write to ddr",
+ "MetricName": "imx95_ddr_write.isi_wr_y",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x230@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi wr u read from ddr",
+ "MetricName": "imx95_ddr_read.isi_wr_u",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x240@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi wr u write to ddr",
+ "MetricName": "imx95_ddr_write.isi_wr_u",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x240@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi wr v read from ddr",
+ "MetricName": "imx95_ddr_read.isi_wr_v",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x250@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isi wr v write to ddr",
+ "MetricName": "imx95_ddr_write.isi_wr_v",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x250@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp input dma1 read from ddr",
+ "MetricName": "imx95_ddr_read.isp_in_dma1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x260@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp input dma1 write to ddr",
+ "MetricName": "imx95_ddr_write.isp_in_dma1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x260@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp input dma2 read from ddr",
+ "MetricName": "imx95_ddr_read.isp_in_dma2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x270@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp input dma2 write to ddr",
+ "MetricName": "imx95_ddr_write.isp_in_dma2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x270@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp output dma1 read from ddr",
+ "MetricName": "imx95_ddr_read.isp_out_dma1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x280@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp output dma1 write to ddr",
+ "MetricName": "imx95_ddr_write.isp_out_dma1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x280@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp output dma2 read from ddr",
+ "MetricName": "imx95_ddr_read.isp_out_dma2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x290@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of isp output dma2 write to ddr",
+ "MetricName": "imx95_ddr_write.isp_out_dma2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x290@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all camera submodules read from ddr",
+ "MetricName": "imx95_ddr_read.camera_all",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x380\\,axi_id\\=0x200@ + imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x280@ + imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x290@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all camera submodules write to ddr (part1)",
+ "MetricName": "imx95_ddr_write.camera_all_1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x380\\,axi_id\\=0x200@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all camera submodules write to ddr (part2)",
+ "MetricName": "imx95_ddr_write.camera_all_2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x280@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all camera submodules write to ddr (part3)",
+ "MetricName": "imx95_ddr_write.camera_all_3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x290@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer0 read from ddr",
+ "MetricName": "imx95_ddr_read.disp_layer0",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x2f0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer0 write to ddr",
+ "MetricName": "imx95_ddr_write.disp_layer0",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x2f0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer1 read from ddr",
+ "MetricName": "imx95_ddr_read.disp_layer1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x300@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer1 write to ddr",
+ "MetricName": "imx95_ddr_write.disp_layer1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x300@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer2 read from ddr",
+ "MetricName": "imx95_ddr_read.disp_layer2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x310@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer2 write to ddr",
+ "MetricName": "imx95_ddr_write.disp_layer2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x310@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer3 read from ddr",
+ "MetricName": "imx95_ddr_read.disp_layer3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x320@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer3 write to ddr",
+ "MetricName": "imx95_ddr_write.disp_layer3",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x320@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer4 read from ddr",
+ "MetricName": "imx95_ddr_read.disp_layer4",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x330@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer4 write to ddr",
+ "MetricName": "imx95_ddr_write.disp_layer4",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x330@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer5 read from ddr",
+ "MetricName": "imx95_ddr_read.disp_layer5",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt2\\,axi_mask\\=0x3f0\\,axi_id\\=0x340@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display layer5 write to ddr",
+ "MetricName": "imx95_ddr_write.disp_layer5",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x340@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display blitter read from ddr",
+ "MetricName": "imx95_ddr_read.disp_blit",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x3f0\\,axi_id\\=0x350@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display blitter write to ddr",
+ "MetricName": "imx95_ddr_write.disp_blit",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x350@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display command sequencer read from ddr",
+ "MetricName": "imx95_ddr_read.disp_cmd",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3f0\\,axi_id\\=0x360@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of display command sequencer write to ddr",
+ "MetricName": "imx95_ddr_write.disp_cmd",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3f0\\,axi_id\\=0x360@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all display submodules read from ddr",
+ "MetricName": "imx95_ddr_read.disp_all",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_rd_beat_filt0\\,axi_mask\\=0x300\\,axi_id\\=0x300@ + imx9_ddr0@eddrtq_pm_rd_beat_filt1\\,axi_mask\\=0x3a0\\,axi_id\\=0x2a0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all display submodules write to ddr (part1)",
+ "MetricName": "imx95_ddr_write.disp_all_1",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x300\\,axi_id\\=0x300@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ },
+ {
+ "BriefDescription": "bytes of all display submodules write to ddr (part2)",
+ "MetricName": "imx95_ddr_write.disp_all_2",
+ "MetricExpr": "( imx9_ddr0@eddrtq_pm_wr_beat_filt\\,axi_mask\\=0x3a0\\,axi_id\\=0x2a0@ ) * 32",
+ "ScaleUnit": "9.765625e-4KB",
+ "Unit": "imx9_ddr",
+ "Compat": "imx95"
+ }
+]
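Two details worth noting in the imx95 metrics above: each eddrtq_pm_*_beat_filt* count is multiplied by 32 in the expressions (one count per 32-byte beat), and the ScaleUnit "9.765625e-4KB" is simply the bytes-to-KiB factor 1/1024. A tiny sketch; the helper name and the beat count are illustrative only.

# Sketch only: ScaleUnit "9.765625e-4KB" is exactly 1/1024 (bytes -> KiB).
assert 9.765625e-4 == 1 / 1024

def beats_to_kib(beats, beat_bytes=32):
    return beats * beat_bytes * 9.765625e-4   # == beats * 32 / 1024

print(beats_to_kib(1_000_000))   # 1e6 beats -> 31250.0 KiB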
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/core-imp-def.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/core-imp-def.json
new file mode 100644
index 000000000000..57a854ff5033
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/core-imp-def.json
@@ -0,0 +1,6 @@
+[
+ {
+ "ArchStdEvent": "L1I_CACHE_PRF",
+    "BriefDescription": "This event counts L1I_CACHE accesses caused by hardware prefetch or software prefetch."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/cycle_accounting.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/cycle_accounting.json
new file mode 100644
index 000000000000..84374adbb0f8
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/cycle_accounting.json
@@ -0,0 +1,122 @@
+[
+ {
+ "EventCode": "0x0182",
+ "EventName": "LD_COMP_WAIT_L1_MISS",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted load/store/prefetch operation waits for L2 cache access."
+ },
+ {
+ "EventCode": "0x0183",
+ "EventName": "LD_COMP_WAIT_L1_MISS_EX",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted integer load operation waits for L2 cache access."
+ },
+ {
+ "EventCode": "0x0184",
+ "EventName": "LD_COMP_WAIT",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted load/store/prefetch operation waits for L1D cache, L2 cache, L3 cache and memory access."
+ },
+ {
+ "EventCode": "0x0185",
+ "EventName": "LD_COMP_WAIT_EX",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted integer load operation waits for L1D cache, L2 cache, L3 cache and memory access."
+ },
+ {
+ "EventCode": "0x0186",
+ "EventName": "LD_COMP_WAIT_PFP_BUSY",
+ "BriefDescription": "This event counts every cycle that no instruction was committed due to the lack of an available prefetch port."
+ },
+ {
+ "EventCode": "0x0187",
+ "EventName": "LD_COMP_WAIT_PFP_BUSY_EX",
+ "BriefDescription": "This event counts the LD_COMP_WAIT_PFP_BUSY caused by an integer load operation."
+ },
+ {
+ "EventCode": "0x0188",
+ "EventName": "LD_COMP_WAIT_PFP_BUSY_SWPF",
+ "BriefDescription": "This event counts the LD_COMP_WAIT_PFP_BUSY caused by a software prefetch instruction."
+ },
+ {
+ "EventCode": "0x0189",
+ "EventName": "EU_COMP_WAIT",
+ "BriefDescription": "This event counts every cycle that no instruction was committed and the oldest and uncommitted instruction is an integer or floating-point/SIMD instruction."
+ },
+ {
+ "EventCode": "0x018A",
+ "EventName": "FL_COMP_WAIT",
+ "BriefDescription": "This event counts every cycle that no instruction was committed and the oldest and uncommitted instruction is a floating-point/SIMD instruction."
+ },
+ {
+ "EventCode": "0x018B",
+ "EventName": "BR_COMP_WAIT",
+ "BriefDescription": "This event counts every cycle that no instruction was committed and the oldest and uncommitted instruction is a branch instruction."
+ },
+ {
+ "EventCode": "0x018C",
+ "EventName": "ROB_EMPTY",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the CSE is empty."
+ },
+ {
+ "EventCode": "0x018D",
+ "EventName": "ROB_EMPTY_STQ_BUSY",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the CSE is empty and the store port (SP) is full."
+ },
+ {
+ "EventCode": "0x018E",
+ "EventName": "WFE_WFI_CYCLE",
+ "BriefDescription": "This event counts every cycle that the instruction unit is halted by the WFE/WFI instruction."
+ },
+ {
+ "EventCode": "0x018F",
+ "EventName": "RETENTION_CYCLE",
+ "BriefDescription": "This event counts every cycle that the instruction unit is halted by the RETENTION state."
+ },
+ {
+ "EventCode": "0x0190",
+ "EventName": "_0INST_COMMIT",
+    "BriefDescription": "This event counts every cycle in which no instruction was committed; cycles in which only MOVPRFX instructions are committed are also counted."
+ },
+ {
+ "EventCode": "0x0191",
+ "EventName": "_1INST_COMMIT",
+ "BriefDescription": "This event counts every cycle that one instruction is committed."
+ },
+ {
+ "EventCode": "0x0192",
+ "EventName": "_2INST_COMMIT",
+ "BriefDescription": "This event counts every cycle that two instructions are committed."
+ },
+ {
+ "EventCode": "0x0193",
+ "EventName": "_3INST_COMMIT",
+ "BriefDescription": "This event counts every cycle that three instructions are committed."
+ },
+ {
+ "EventCode": "0x0194",
+ "EventName": "_4INST_COMMIT",
+ "BriefDescription": "This event counts every cycle that four instructions are committed."
+ },
+ {
+ "EventCode": "0x0195",
+ "EventName": "_5INST_COMMIT",
+ "BriefDescription": "This event counts every cycle that five instructions are committed."
+ },
+ {
+ "EventCode": "0x0198",
+ "EventName": "UOP_ONLY_COMMIT",
+    "BriefDescription": "This event counts every cycle that only micro-operations are committed."
+ },
+ {
+ "EventCode": "0x0199",
+ "EventName": "SINGLE_MOVPRFX_COMMIT",
+ "BriefDescription": "This event counts every cycle that only the MOVPRFX instruction is committed."
+ },
+ {
+ "EventCode": "0x019C",
+ "EventName": "LD_COMP_WAIT_L2_MISS",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted load/store/prefetch operation waits for L2 cache miss."
+ },
+ {
+ "EventCode": "0x019D",
+ "EventName": "LD_COMP_WAIT_L2_MISS_EX",
+ "BriefDescription": "This event counts every cycle that no instruction was committed because the oldest and uncommitted integer load operation waits for L2 cache miss."
+ }
+]
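The _0INST_COMMIT through _5INST_COMMIT events above form a per-cycle histogram of commit width, so total committed instructions and IPC can be reconstructed from them (ignoring the MOVPRFX subtlety noted for _0INST_COMMIT). A short sketch with invented counter values:

# Illustration only: rebuild instruction count and IPC from the
# commit-width histogram. Values are made up.
hist = {0: 4.0e8, 1: 2.0e8, 2: 1.5e8, 3: 1.0e8, 4: 0.4e8, 5: 0.1e8}

cycles = sum(hist.values())               # every cycle lands in exactly one bin
insts = sum(width * count for width, count in hist.items())
print(f"IPC ~= {insts / cycles:.2f}")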
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/energy.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/energy.json
new file mode 100644
index 000000000000..b55173f71e42
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/energy.json
@@ -0,0 +1,17 @@
+[
+ {
+ "EventCode": "0x01F0",
+ "EventName": "EA_CORE",
+ "BriefDescription": "This event counts energy consumption of core."
+ },
+ {
+ "EventCode": "0x03F0",
+ "EventName": "EA_L3",
+ "BriefDescription": "This event counts energy consumption of L3 cache."
+ },
+ {
+ "EventCode": "0x03F1",
+ "EventName": "EA_LDO_LOSS",
+ "BriefDescription": "This event counts energy consumption of LDO loss."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/exception.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/exception.json
new file mode 100644
index 000000000000..fba66bbcfeb5
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/exception.json
@@ -0,0 +1,42 @@
+[
+ {
+ "ArchStdEvent": "EXC_TAKEN",
+ "BriefDescription": "This event counts each exception taken."
+ },
+ {
+ "ArchStdEvent": "EXC_RETURN",
+ "BriefDescription": "This event counts each executed exception return instruction."
+ },
+ {
+ "ArchStdEvent": "EXC_UNDEF",
+ "BriefDescription": "This event counts only other synchronous exceptions that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_SVC",
+ "BriefDescription": "This event counts only Supervisor Call exceptions that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_PABORT",
+ "BriefDescription": "This event counts only Instruction Abort exceptions that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_DABORT",
+ "BriefDescription": "This event counts only Data Abort or SError interrupt exceptions that are taken locally."
+ },
+ {
+ "ArchStdEvent": "EXC_IRQ",
+ "BriefDescription": "This event counts only IRQ exceptions that are taken locally, including Virtual IRQ exceptions."
+ },
+ {
+ "ArchStdEvent": "EXC_FIQ",
+ "BriefDescription": "This event counts only FIQ exceptions that are taken locally, including Virtual FIQ exceptions."
+ },
+ {
+ "ArchStdEvent": "EXC_SMC",
+ "BriefDescription": "This event counts only Secure Monitor Call exceptions. This event does not increment on SMC instructions trapped as a Hyp Trap exception."
+ },
+ {
+ "ArchStdEvent": "EXC_HVC",
+ "BriefDescription": "This event counts for both Hypervisor Call exceptions taken locally in the hypervisor and those taken as an exception from Non-secure EL1."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/fp_operation.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/fp_operation.json
new file mode 100644
index 000000000000..2ffdc16530dd
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/fp_operation.json
@@ -0,0 +1,265 @@
+[
+ {
+ "EventCode": "0x0105",
+ "EventName": "FP_MV_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point move operation."
+ },
+ {
+ "EventCode": "0x0112",
+ "EventName": "FP_LD_SPEC",
+    "BriefDescription": "This event counts architecturally executed NOSIMD load operations that use SIMD&FP registers."
+ },
+ {
+ "EventCode": "0x0113",
+ "EventName": "FP_ST_SPEC",
+    "BriefDescription": "This event counts architecturally executed NOSIMD store operations that use SIMD&FP registers."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point operation."
+ },
+ {
+ "ArchStdEvent": "FP_HP_SPEC",
+ "BriefDescription": "This event counts architecturally executed half-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_HP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD half-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_HP_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE half-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_HP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE half-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "FP_SP_SPEC",
+ "BriefDescription": "This event counts architecturally executed single-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_SP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD single-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_SP_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE single-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_SP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE single-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "FP_DP_SPEC",
+ "BriefDescription": "This event counts architecturally executed double-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_DP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD double-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_DP_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE double-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_DP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE double-precision floating-point operation."
+ },
+ {
+ "ArchStdEvent": "FP_DIV_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point divide operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_DIV_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point divide operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_DIV_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point divide operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_DIV_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point divide operation."
+ },
+ {
+ "ArchStdEvent": "FP_SQRT_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point square root operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_SQRT_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point square root operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_SQRT_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point square root operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_SQRT_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point square root operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_FMA_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point FMA operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_FMA_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point FMA operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_FMA_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point FMA operation."
+ },
+ {
+ "ArchStdEvent": "FP_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point multiply operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point multiply operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point multiply operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point multiply operation."
+ },
+ {
+ "ArchStdEvent": "FP_ADDSUB_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point add or subtract operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_ADDSUB_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point add or subtract operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_ADDSUB_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point add or subtract operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_ADDSUB_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point add or subtract operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_RECPE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point reciprocal estimate operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_RECPE_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point reciprocal estimate operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_RECPE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point reciprocal estimate operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_CVT_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point convert operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_CVT_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point convert operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_CVT_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point convert operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_AREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point accumulating reduction operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_PREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point pairwise add step operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_VREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point vector reduction operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_VREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE floating-point vector reduction operation."
+ },
+ {
+ "ArchStdEvent": "FP_SCALE_OPS_SPEC",
+    "BriefDescription": "This event counts architecturally executed SVE arithmetic operation. See FP_SCALE_OPS_SPEC of the ARMv9 Reference Manual for more information. This event counter is incremented by (128 / CSIZE) and by twice that amount for operations that would also be counted by SVE_FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "FP_FIXED_OPS_SPEC",
+    "BriefDescription": "This event counts architecturally executed v8SIMD&FP arithmetic operation. See FP_FIXED_OPS_SPEC of the ARMv9 Reference Manual for more information. This event counter is incremented by the specified number of elements for Advanced SIMD operations or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_DOT_SPEC",
+ "BriefDescription": "This event counts architecturally executed microarchitectural Advanced SIMD or SVE floating-point dot-product operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_MMLA_SPEC",
+ "BriefDescription": "This event counts architecturally executed microarchitectural Advanced SIMD or SVE floating-point matrix multiply operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_VREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD floating-point vector reduction operation."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_PREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE floating-point pairwise add step operation."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_BF16_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD data processing operations, smallest type is BFloat16 floating-point."
+ },
+ {
+ "ArchStdEvent": "ASE_FP_FP8_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD data processing operations, smallest type is 8-bit floating-point."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_BF16_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD data processing or SVE data processing operations, smallest type is BFloat16 floating-point."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_FP_FP8_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD data processing or SVE data processing operations, smallest type is 8-bit floating-point."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_BF16_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE data processing operations, smallest type is BFloat16 floating-point."
+ },
+ {
+ "ArchStdEvent": "SVE_FP_FP8_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE data processing operations, smallest type is 8-bit floating-point."
+ },
+ {
+ "ArchStdEvent": "FP_BF16_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed data processing operations, smallest type is BFloat16 floating-point."
+ },
+ {
+ "ArchStdEvent": "FP_FP8_MIN_SPEC",
+ "BriefDescription": "This event counts architecturally executed data processing operations, smallest type is 8-bit floating-point."
+ },
+ {
+ "ArchStdEvent": "FP_BF16_FIXED_MIN_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed non-scalable element arithmetic operations, smallest type is BFloat16 floating-point."
+ },
+ {
+ "ArchStdEvent": "FP_FP8_FIXED_MIN_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed non-scalable element arithmetic operations, smallest type is 8-bit floating-point."
+ },
+ {
+ "ArchStdEvent": "FP_BF16_SCALE_MIN_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed scalable element arithmetic operations, smallest type is BFloat16 floating-point."
+ },
+ {
+ "ArchStdEvent": "FP_FP8_SCALE_MIN_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed scalable element arithmetic operations, smallest type is 8-bit floating-point."
+ }
+]
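
Note: the increment rules quoted in the FP_SCALE_OPS_SPEC and FP_FIXED_OPS_SPEC descriptions above imply a simple way to estimate total floating-point element operations. A minimal sketch, not part of the patch; the function name, the vector-length parameter and the FP_SCALE_OPS_SPEC * VL/128 + FP_FIXED_OPS_SPEC formula are only inferred from those descriptions:

    def estimate_fp_element_ops(fp_scale_ops_spec: int,
                                fp_fixed_ops_spec: int,
                                sve_vl_bits: int) -> int:
        # FP_SCALE_OPS_SPEC is incremented by (128 / CSIZE) per SVE operation,
        # i.e. it is normalized to a 128-bit vector, so rescale by VL / 128.
        scalable_ops = fp_scale_ops_spec * sve_vl_bits // 128
        # FP_FIXED_OPS_SPEC already counts elements (or 1 for scalar operations).
        return scalable_ops + fp_fixed_ops_spec

    # Hypothetical counter values on a 256-bit SVE implementation.
    print(estimate_fp_element_ops(1_000_000, 250_000, 256))  # 2250000
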
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/gcycle.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/gcycle.json
new file mode 100644
index 000000000000..b4ceddc0d25e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/gcycle.json
@@ -0,0 +1,97 @@
+[
+ {
+ "EventCode": "0x0880",
+ "EventName": "GCYCLES",
+ "BriefDescription": "This event counts the number of cycles at 100MHz."
+ },
+ {
+ "EventCode": "0x0890",
+ "EventName": "FL0_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 0."
+ },
+ {
+ "EventCode": "0x0891",
+ "EventName": "FL1_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 1."
+ },
+ {
+ "EventCode": "0x0892",
+ "EventName": "FL2_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 2."
+ },
+ {
+ "EventCode": "0x0893",
+ "EventName": "FL3_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 3."
+ },
+ {
+ "EventCode": "0x0894",
+ "EventName": "FL4_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 4."
+ },
+ {
+ "EventCode": "0x0895",
+ "EventName": "FL5_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 5."
+ },
+ {
+ "EventCode": "0x0896",
+ "EventName": "FL6_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 6."
+ },
+ {
+ "EventCode": "0x0897",
+ "EventName": "FL7_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 7."
+ },
+ {
+ "EventCode": "0x0898",
+ "EventName": "FL8_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 8."
+ },
+ {
+ "EventCode": "0x0899",
+ "EventName": "FL9_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 9."
+ },
+ {
+ "EventCode": "0x089A",
+ "EventName": "FL10_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 10."
+ },
+ {
+ "EventCode": "0x089B",
+ "EventName": "FL11_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 11."
+ },
+ {
+ "EventCode": "0x089C",
+ "EventName": "FL12_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 12."
+ },
+ {
+ "EventCode": "0x089D",
+ "EventName": "FL13_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 13."
+ },
+ {
+ "EventCode": "0x089E",
+ "EventName": "FL14_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 14."
+ },
+ {
+ "EventCode": "0x089F",
+ "EventName": "FL15_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the Frequency Level 15."
+ },
+ {
+ "EventCode": "0x08A0",
+ "EventName": "RETENTION_GCYCLES",
+ "BriefDescription": "This event counts the number of cycles where the measured core is staying in the RETENTION state."
+ },
+ {
+ "EventCode": "0x08A1",
+ "EventName": "RETENTION_COUNT",
+ "BriefDescription": "This event counts the number of changes from the normal state to the RETENTION state."
+ }
+]
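
Note: each of these files is a plain JSON array of event objects. A minimal sketch, not part of the patch (the file path is only an example), that lists the events defined above:

    import json

    path = "tools/perf/pmu-events/arch/arm64/fujitsu/monaka/gcycle.json"
    with open(path) as f:
        events = json.load(f)

    for ev in events:
        # Implementation-specific events carry EventCode/EventName; common Arm
        # events are referenced through ArchStdEvent instead.
        name = ev.get("EventName") or ev.get("ArchStdEvent")
        code = ev.get("EventCode", "arch-std")
        print(code, name, "-", ev["BriefDescription"])
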
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/general.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/general.json
new file mode 100644
index 000000000000..32f0fbfc4de4
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/general.json
@@ -0,0 +1,10 @@
+[
+ {
+ "ArchStdEvent": "CPU_CYCLES",
+ "BriefDescription": "This event counts every cycle."
+ },
+ {
+ "ArchStdEvent": "CNT_CYCLES",
+ "BriefDescription": "This event counts the constant frequency cycles counter increments at a constant frequency equal to the rate of increment of the System counter."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/hwpf.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/hwpf.json
new file mode 100644
index 000000000000..a784a032f353
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/hwpf.json
@@ -0,0 +1,52 @@
+[
+ {
+ "EventCode": "0x0230",
+ "EventName": "L1HWPF_STREAM_PF",
+ "BriefDescription": "This event counts streaming prefetch requests to L1D cache generated by hardware prefetcher."
+ },
+ {
+ "EventCode": "0x0231",
+ "EventName": "L1HWPF_STRIDE_PF",
+ "BriefDescription": "This event counts stride prefetch requests to L1D cache generated by hardware prefetcher."
+ },
+ {
+ "EventCode": "0x0232",
+ "EventName": "L1HWPF_PFTGT_PF",
+ "BriefDescription": "This event counts LDS prefetch requests to L1D cache generated by hardware prefetcher."
+ },
+ {
+ "EventCode": "0x0234",
+ "EventName": "L2HWPF_STREAM_PF",
+ "BriefDescription": "This event counts streaming prefetch requests to L2 cache generated by hardware prefetcher."
+ },
+ {
+ "EventCode": "0x0235",
+ "EventName": "L2HWPF_STRIDE_PF",
+ "BriefDescription": "This event counts stride prefetch requests to L2 cache generated by hardware prefetcher."
+ },
+ {
+ "EventCode": "0x0237",
+ "EventName": "L2HWPF_OTHER",
+ "BriefDescription": "This event counts prefetch requests to L2 cache generated by the other causes."
+ },
+ {
+ "EventCode": "0x0238",
+ "EventName": "L3HWPF_STREAM_PF",
+ "BriefDescription": "This event counts streaming prefetch requests to L3 cache generated by hardware prefetcher."
+ },
+ {
+ "EventCode": "0x0239",
+ "EventName": "L3HWPF_STRIDE_PF",
+ "BriefDescription": "This event counts stride prefetch requests to L3 cache generated by hardware prefetcher."
+ },
+ {
+ "EventCode": "0x023B",
+ "EventName": "L3HWPF_OTHER",
+ "BriefDescription": "This event counts prefetch requests to L3 cache generated by the other causes."
+ },
+ {
+ "EventCode": "0x023C",
+ "EventName": "L1IHWPF_NEXTLINE_PF",
+ "BriefDescription": "This event counts next line's prefetch requests to L1I cache generated by hardware prefetcher."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1d_cache.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1d_cache.json
new file mode 100644
index 000000000000..a2ff3b49ac0d
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1d_cache.json
@@ -0,0 +1,113 @@
+[
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL",
+ "BriefDescription": "This event counts operations that cause a refill of the L1D cache. See L1D_CACHE_REFILL of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE",
+ "BriefDescription": "This event counts operations that cause a cache access to the L1D cache. See L1D_CACHE of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WB",
+ "BriefDescription": "This event counts every write-back of data from the L1D cache. See L1D_CACHE_WB of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_LMISS_RD",
+ "BriefDescription": "This event counts operations that cause a refill of the L1D cache that incurs additional latency."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_RD",
+ "BriefDescription": "This event counts L1D CACHE caused by read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WR",
+ "BriefDescription": "This event counts L1D CACHE caused by write access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_RD",
+ "BriefDescription": "This event counts L1D_CACHE_REFILL caused by read access."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_WR",
+ "BriefDescription": "This event counts L1D_CACHE_REFILL caused by write access."
+ },
+ {
+ "EventCode": "0x0200",
+ "EventName": "L1D_CACHE_DM",
+ "BriefDescription": "This event counts L1D_CACHE caused by demand access."
+ },
+ {
+ "EventCode": "0x0201",
+ "EventName": "L1D_CACHE_DM_RD",
+ "BriefDescription": "This event counts L1D_CACHE caused by demand read access."
+ },
+ {
+ "EventCode": "0x0202",
+ "EventName": "L1D_CACHE_DM_WR",
+ "BriefDescription": "This event counts L1D_CACHE caused by demand write access."
+ },
+ {
+ "EventCode": "0x0208",
+ "EventName": "L1D_CACHE_REFILL_DM",
+ "BriefDescription": "This event counts L1D_CACHE_REFILL caused by demand access."
+ },
+ {
+ "EventCode": "0x0209",
+ "EventName": "L1D_CACHE_REFILL_DM_RD",
+ "BriefDescription": "This event counts L1D_CACHE_REFILL caused by demand read access."
+ },
+ {
+ "EventCode": "0x020A",
+ "EventName": "L1D_CACHE_REFILL_DM_WR",
+ "BriefDescription": "This event counts L1D_CACHE_REFILL caused by demand write access."
+ },
+ {
+ "EventCode": "0x020D",
+ "EventName": "L1D_CACHE_BTC",
+ "BriefDescription": "This event counts demand access that hits cache line with shared status and requests exclusive access in the Level 1 data cache, causing a coherence access to outside of the Level 1 caches of this PE."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_MISS",
+ "BriefDescription": "This event counts demand access that misses in the Level 1 data cache, causing an access to outside of the Level 1 caches of this PE."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_HWPRF",
+ "BriefDescription": "This event counts L1D_CACHE caused by hardware prefetch."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_HWPRF",
+ "BriefDescription": "This event counts L1D_CACHE_REFILL caused by hardware prefetch."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_HIT_RD",
+ "BriefDescription": "This event counts demand read counted by L1D_CACHE_RD that hits in the Level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_HIT_WR",
+ "BriefDescription": "This event counts demand write counted by L1D_CACHE_WR that hits in the Level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_HIT",
+ "BriefDescription": "This event counts access counted by L1D_CACHE that hits in the Level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "L1D_LFB_HIT_RD",
+ "BriefDescription": "This event counts demand access counted by L1D_CACHE_HIT_RD that hits a cache line that is in the process of being loaded into the Level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "L1D_LFB_HIT_WR",
+ "BriefDescription": "This event counts demand access counted by L1D_CACHE_HIT_WR that hits a cache line that is in the process of being loaded into the Level 1 data cache."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_PRF",
+ "BriefDescription": "This event counts L1D_CACHE caused by hardware prefetch or software prefetch."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_PRF",
+ "BriefDescription": "This event counts L1D_CACHE_REFILL caused by hardware prefetch or software prefetch."
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_PERCYC",
+ "BriefDescription": "This counter counts by the number of cache refills counted by L1D_CACHE_REFILL in progress on each Processor cycle."
+ }
+]
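
Note: the access and refill events above combine into the usual derived cache metrics. A minimal sketch, not part of the patch, assuming the two counter values were collected separately (for example with perf stat) and that the helper name is only illustrative:

    def l1d_miss_ratio(l1d_cache_refill: int, l1d_cache: int) -> float:
        # L1D_CACHE_REFILL over L1D_CACHE, guarding against a zero access count.
        return l1d_cache_refill / l1d_cache if l1d_cache else 0.0

    print(l1d_miss_ratio(12_345, 1_000_000))  # 0.012345
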
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1i_cache.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1i_cache.json
new file mode 100644
index 000000000000..5250af8631c0
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l1i_cache.json
@@ -0,0 +1,52 @@
+[
+ {
+ "ArchStdEvent": "L1I_CACHE_REFILL",
+ "BriefDescription": "This event counts operations that cause a refill of the L1I cache. See L1I_CACHE_REFILL of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE",
+ "BriefDescription": "This event counts operations that cause a cache access to the L1I cache. See L1I_CACHE of ARMv9 Reference Manual for more information."
+ },
+ {
+ "EventCode": "0x0207",
+ "EventName": "L1I_CACHE_DM_RD",
+ "BriefDescription": "This event counts L1I_CACHE caused by demand read access."
+ },
+ {
+ "EventCode": "0x020F",
+ "EventName": "L1I_CACHE_REFILL_DM_RD",
+ "BriefDescription": "This event counts L1I_CACHE_REFILL caused by demand read access."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_LMISS",
+ "BriefDescription": "This event counts operations that cause a refill of the L1I cache that incurs additional latency."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HWPRF",
+ "BriefDescription": "This event counts L1I_CACHE caused by hardware prefetch."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_REFILL_HWPRF",
+ "BriefDescription": "This event counts L1I_CACHE_REFILL caused by hardware prefetch."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HIT_RD",
+ "BriefDescription": "This event counts demand fetch counted by L1I_CACHE_DM_RD that hits in the Level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_HIT",
+ "BriefDescription": "This event counts access counted by L1I_CACHE that hits in the Level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_LFB_HIT_RD",
+ "BriefDescription": "This event counts demand access counted by L1I_CACHE_HIT_RD that hits a cache line that is in the process of being loaded into the Level 1 instruction cache."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_REFILL_PRF",
+ "BriefDescription": "This event counts L1I_CACHE_REFILL caused by hardware prefetch or software prefetch."
+ },
+ {
+ "ArchStdEvent": "L1I_CACHE_REFILL_PERCYC",
+ "BriefDescription": "This counter counts by the number of cache refills counted by L1I_CACHE_REFILL in progress on each Processor cycle."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l2_cache.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l2_cache.json
new file mode 100644
index 000000000000..67f9151d7685
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l2_cache.json
@@ -0,0 +1,160 @@
+[
+ {
+ "ArchStdEvent": "L2D_CACHE",
+ "BriefDescription": "This event counts operations that cause a cache access to the L2 cache. See L2D_CACHE of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL",
+ "BriefDescription": "This event counts operations that cause a refill of the L2 cache. See L2D_CACHE_REFILL of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB",
+ "BriefDescription": "This event counts every write-back of data from the L2 cache caused by L2 replace, non-temporal-store and DC ZVA."
+ },
+ {
+ "ArchStdEvent": "L2I_TLB_REFILL",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I TLB. See L2I_TLB_REFILL of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L2I_TLB",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I TLB. See L2I_TLB of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_RD",
+ "BriefDescription": "This event counts L2D_CACHE caused by read access."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WR",
+ "BriefDescription": "This event counts L2D_CACHE caused by write access."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_RD",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by read access."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_WR",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by write access."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_WB_VICTIM",
+ "BriefDescription": "This event counts every write-back of data from the L2 cache caused by L2 replace."
+ },
+ {
+ "EventCode": "0x0300",
+ "EventName": "L2D_CACHE_DM",
+ "BriefDescription": "This event counts L2D_CACHE caused by demand access."
+ },
+ {
+ "EventCode": "0x0301",
+ "EventName": "L2D_CACHE_DM_RD",
+ "BriefDescription": "This event counts L2D_CACHE caused by demand read access."
+ },
+ {
+ "EventCode": "0x0302",
+ "EventName": "L2D_CACHE_DM_WR",
+ "BriefDescription": "This event counts L2D_CACHE caused by demand write access."
+ },
+ {
+ "EventCode": "0x0305",
+ "EventName": "L2D_CACHE_HWPRF_ADJACENT",
+ "BriefDescription": "This event counts L2D_CACHE caused by hardware adjacent prefetch."
+ },
+ {
+ "EventCode": "0x0308",
+ "EventName": "L2D_CACHE_REFILL_DM",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand access."
+ },
+ {
+ "EventCode": "0x0309",
+ "EventName": "L2D_CACHE_REFILL_DM_RD",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand read access."
+ },
+ {
+ "EventCode": "0x030A",
+ "EventName": "L2D_CACHE_REFILL_DM_WR",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand write access."
+ },
+ {
+ "EventCode": "0x030B",
+ "EventName": "L2D_CACHE_REFILL_DM_WR_EXCL",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand write exclusive access."
+ },
+ {
+ "EventCode": "0x030C",
+ "EventName": "L2D_CACHE_REFILL_DM_WR_ATOM",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by demand write atomic access."
+ },
+ {
+ "EventCode": "0x030D",
+ "EventName": "L2D_CACHE_BTC",
+ "BriefDescription": "This event counts demand access that hits cache line with shared status and requests exclusive access in the Level 1 data and Level 2 caches, causing a coherence access to outside of the Level 1 and Level 2 caches of this PE."
+ },
+ {
+ "EventCode": "0x03B0",
+ "EventName": "L2D_CACHE_WB_VICTIM_CLEAN",
+ "BriefDescription": "This event counts every write-back of data from the L2 cache caused by L2 replace where the data is clean. In this case, the data will usually be written to L3 cache."
+ },
+ {
+ "EventCode": "0x03B1",
+ "EventName": "L2D_CACHE_WB_NT",
+ "BriefDescription": "This event counts every write-back of data from the L2 cache caused by non-temporal-store."
+ },
+ {
+ "EventCode": "0x03B2",
+ "EventName": "L2D_CACHE_WB_DCZVA",
+ "BriefDescription": "This event counts every write-back of data from the L2 cache caused by DC ZVA."
+ },
+ {
+ "EventCode": "0x03B3",
+ "EventName": "L2D_CACHE_FB",
+ "BriefDescription": "This event counts every flush-back (drop) of data from the L2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_LMISS_RD",
+ "BriefDescription": "This event counts operations that cause a refill of the L2 cache that incurs additional latency."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_MISS",
+ "BriefDescription": "This event counts demand access that misses in the Level 1 data and Level 2 caches, causing an access to outside of the Level 1 and Level 2 caches of this PE."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_HWPRF",
+ "BriefDescription": "This event counts L2D_CACHE caused by hardware prefetch."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_HWPRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by hardware prefetch."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_HIT_RD",
+ "BriefDescription": "This event counts demand read counted by L2D_CACHE_RD that hits in the Level 2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_HIT_WR",
+ "BriefDescription": "This event counts demand write counted by L2D_CACHE_WR that hits in the Level 2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_HIT",
+ "BriefDescription": "This event counts access counted by L2D_CACHE that hits in the Level 2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_LFB_HIT_RD",
+ "BriefDescription": "This event counts demand access counted by L2D_CACHE_HIT_RD that hits a recently fetched line in the Level 2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_LFB_HIT_WR",
+ "BriefDescription": "This event counts demand access counted by L2D_CACHE_HIT_WR that hits a recently fetched line in the Level 2 cache."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_PRF",
+ "BriefDescription": "This event counts L2D_CACHE caused by hardware prefetch or software prefetch."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_PRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL caused by hardware prefetch or software prefetch."
+ },
+ {
+ "ArchStdEvent": "L2D_CACHE_REFILL_PERCYC",
+ "BriefDescription": "This counter counts by the number of cache refills counted by L2D_CACHE_REFILL in progress on each Processor cycle."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l3_cache.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l3_cache.json
new file mode 100644
index 000000000000..cf49c4d452b7
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/l3_cache.json
@@ -0,0 +1,154 @@
+[
+ {
+ "ArchStdEvent": "L3D_CACHE",
+ "BriefDescription": "This event counts operations that cause a cache access to the L3 cache, as defined by the sum of L2D_CACHE_REFILL_L3D_CACHE and L2D_CACHE_WB_VICTIM_CLEAN events."
+ },
+ {
+ "ArchStdEvent": "L3D_CACHE_RD",
+ "BriefDescription": "This event counts access counted by L3D_CACHE that is a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_CACHE events."
+ },
+ {
+ "EventCode": "0x0390",
+ "EventName": "L2D_CACHE_REFILL_L3D_CACHE",
+ "BriefDescription": "This event counts operations that cause a cache access to the L3 cache."
+ },
+ {
+ "EventCode": "0x0391",
+ "EventName": "L2D_CACHE_REFILL_L3D_CACHE_DM",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by demand access."
+ },
+ {
+ "EventCode": "0x0392",
+ "EventName": "L2D_CACHE_REFILL_L3D_CACHE_DM_RD",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by demand read access."
+ },
+ {
+ "EventCode": "0x0393",
+ "EventName": "L2D_CACHE_REFILL_L3D_CACHE_DM_WR",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by demand write access."
+ },
+ {
+ "EventCode": "0x0394",
+ "EventName": "L2D_CACHE_REFILL_L3D_CACHE_PRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by hardware prefetch or software prefetch."
+ },
+ {
+ "EventCode": "0x0395",
+ "EventName": "L2D_CACHE_REFILL_L3D_CACHE_HWPRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_CACHE caused by hardware prefetch."
+ },
+ {
+ "EventCode": "0x0396",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS",
+ "BriefDescription": "This event counts operations that cause a miss of the L3 cache. Note: This event may count inaccurately."
+ },
+ {
+ "EventCode": "0x0397",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by demand access."
+ },
+ {
+ "EventCode": "0x0398",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_RD",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by demand read access."
+ },
+ {
+ "EventCode": "0x0399",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_WR",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by demand write access."
+ },
+ {
+ "EventCode": "0x039A",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_PRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by hardware prefetch or software prefetch. Note: This event may count inaccurately."
+ },
+ {
+ "EventCode": "0x039B",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_HWPRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS caused by hardware prefetch. Note: This event may count inaccurately."
+ },
+ {
+ "EventCode": "0x039C",
+ "EventName": "L2D_CACHE_REFILL_L3D_HIT",
+ "BriefDescription": "This event counts operations that cause a hit of the L3 cache. Note: This event may count inaccurately."
+ },
+ {
+ "EventCode": "0x039D",
+ "EventName": "L2D_CACHE_REFILL_L3D_HIT_DM",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by demand access."
+ },
+ {
+ "EventCode": "0x039E",
+ "EventName": "L2D_CACHE_REFILL_L3D_HIT_DM_RD",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by demand read access."
+ },
+ {
+ "EventCode": "0x039F",
+ "EventName": "L2D_CACHE_REFILL_L3D_HIT_DM_WR",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by demand write access."
+ },
+ {
+ "EventCode": "0x03A0",
+ "EventName": "L2D_CACHE_REFILL_L3D_HIT_PRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by hardware prefetch or software prefetch. Note: This event may count inaccurately."
+ },
+ {
+ "EventCode": "0x03A1",
+ "EventName": "L2D_CACHE_REFILL_L3D_HIT_HWPRF",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_HIT caused by hardware prefetch. Note: This event may count inaccurately."
+ },
+ {
+ "EventCode": "0x03A3",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_PFTGT_HIT",
+ "BriefDescription": "This event counts the number of L3 cache misses caused by demand access where the requests hit the PFTGT buffer."
+ },
+ {
+ "EventCode": "0x03A4",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_RD_PFTGT_HIT",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM_PFTGT_HIT caused by read access."
+ },
+ {
+ "EventCode": "0x03A5",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_WR_PFTGT_HIT",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM_PFTGT_HIT caused by write access."
+ },
+ {
+ "EventCode": "0x03A6",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_L_MEM",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM where the requests access the memory in the same socket as the requests."
+ },
+ {
+ "EventCode": "0x03A7",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_FR_MEM",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM where the requests access the memory in the different socket from the requests."
+ },
+ {
+ "EventCode": "0x03A8",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_L_L2",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM where the requests access the different L2 cache from the requests in the same Numa nodes as the requests."
+ },
+ {
+ "EventCode": "0x03A9",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_NR_L2",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM where the requests access L2 cache in the different Numa nodes from the requests in the same socket as the requests."
+ },
+ {
+ "EventCode": "0x03AA",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_NR_L3",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM where the requests access L3 cache in the different Numa nodes from the requests in the same socket as the requests."
+ },
+ {
+ "EventCode": "0x03AB",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_FR_L2",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM where the requests access L2 cache in the different socket from the requests."
+ },
+ {
+ "EventCode": "0x03AC",
+ "EventName": "L2D_CACHE_REFILL_L3D_MISS_DM_FR_L3",
+ "BriefDescription": "This event counts L2D_CACHE_REFILL_L3D_MISS_DM where the requests access L3 cache in the different socket from the requests."
+ },
+ {
+ "ArchStdEvent": "L3D_CACHE_LMISS_RD",
+ "BriefDescription": "This event counts access counted by L3D_CACHE that is not completed by the L3 cache, and a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_MISS events. Note: This event may count inaccurately."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/ll_cache.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/ll_cache.json
new file mode 100644
index 000000000000..d49d9f6df72c
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/ll_cache.json
@@ -0,0 +1,10 @@
+[
+ {
+ "ArchStdEvent": "LL_CACHE_RD",
+ "BriefDescription": "This event counts access counted by L3D_CACHE that is a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_CACHE events."
+ },
+ {
+ "ArchStdEvent": "LL_CACHE_MISS_RD",
+ "BriefDescription": "This event counts access counted by L3D_CACHE that is not completed by the L3 cache, and a Memory-read operation, as defined by the L2D_CACHE_REFILL_L3D_MISS events. Note: This event may count inaccurately."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/memory.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/memory.json
new file mode 100644
index 000000000000..4ef125e3a253
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/memory.json
@@ -0,0 +1,10 @@
+[
+ {
+ "ArchStdEvent": "MEM_ACCESS",
+ "BriefDescription": "This event counts architecturally executed memory-reading instructions and memory-writing instructions, as defined by the LDST_SPEC events."
+ },
+ {
+ "ArchStdEvent": "MEM_ACCESS_RD",
+ "BriefDescription": "This event counts architecturally executed memory-reading instructions, as defined by the LD_SPEC events."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pipeline.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pipeline.json
new file mode 100644
index 000000000000..15cf54730b85
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pipeline.json
@@ -0,0 +1,208 @@
+[
+ {
+ "EventCode": "0x01A0",
+ "EventName": "EAGA_VAL",
+ "BriefDescription": "This event counts valid cycles of EAGA pipeline."
+ },
+ {
+ "EventCode": "0x01A1",
+ "EventName": "EAGB_VAL",
+ "BriefDescription": "This event counts valid cycles of EAGB pipeline."
+ },
+ {
+ "EventCode": "0x01A3",
+ "EventName": "PRX_VAL",
+ "BriefDescription": "This event counts valid cycles of PRX pipeline."
+ },
+ {
+ "EventCode": "0x01A4",
+ "EventName": "EXA_VAL",
+ "BriefDescription": "This event counts valid cycles of EXA pipeline."
+ },
+ {
+ "EventCode": "0x01A5",
+ "EventName": "EXB_VAL",
+ "BriefDescription": "This event counts valid cycles of EXB pipeline."
+ },
+ {
+ "EventCode": "0x01A6",
+ "EventName": "EXC_VAL",
+ "BriefDescription": "This event counts valid cycles of EXC pipeline."
+ },
+ {
+ "EventCode": "0x01A7",
+ "EventName": "EXD_VAL",
+ "BriefDescription": "This event counts valid cycles of EXD pipeline."
+ },
+ {
+ "EventCode": "0x01A8",
+ "EventName": "FLA_VAL",
+ "BriefDescription": "This event counts valid cycles of FLA pipeline."
+ },
+ {
+ "EventCode": "0x01A9",
+ "EventName": "FLB_VAL",
+ "BriefDescription": "This event counts valid cycles of FLB pipeline."
+ },
+ {
+ "EventCode": "0x01AA",
+ "EventName": "STEA_VAL",
+ "BriefDescription": "This event counts valid cycles of STEA pipeline."
+ },
+ {
+ "EventCode": "0x01AB",
+ "EventName": "STEB_VAL",
+ "BriefDescription": "This event counts valid cycles of STEB pipeline."
+ },
+ {
+ "EventCode": "0x01AC",
+ "EventName": "STFL_VAL",
+ "BriefDescription": "This event counts valid cycles of STFL pipeline."
+ },
+ {
+ "EventCode": "0x01AD",
+ "EventName": "STPX_VAL",
+ "BriefDescription": "This event counts valid cycles of STPX pipeline."
+ },
+ {
+ "EventCode": "0x01B0",
+ "EventName": "FLA_VAL_PRD_CNT",
+ "BriefDescription": "This event counts the number of 1's in the predicate bits of request in FLA pipeline, where it is corrected so that it becomes 32 when all bits are 1."
+ },
+ {
+ "EventCode": "0x01B1",
+ "EventName": "FLB_VAL_PRD_CNT",
+ "BriefDescription": "This event counts the number of 1's in the predicate bits of request in FLB pipeline, where it is corrected so that it becomes 32 when all bits are 1."
+ },
+ {
+ "EventCode": "0x01B2",
+ "EventName": "FLA_VAL_FOR_PRD",
+ "BriefDescription": "This event counts valid cycles of FLA pipeline."
+ },
+ {
+ "EventCode": "0x01B3",
+ "EventName": "FLB_VAL_FOR_PRD",
+ "BriefDescription": "This event counts valid cycles of FLB pipeline."
+ },
+ {
+ "EventCode": "0x0240",
+ "EventName": "L1_PIPE0_VAL",
+ "BriefDescription": "This event counts valid cycles of L1D cache pipeline#0."
+ },
+ {
+ "EventCode": "0x0241",
+ "EventName": "L1_PIPE1_VAL",
+ "BriefDescription": "This event counts valid cycles of L1D cache pipeline#1."
+ },
+ {
+ "EventCode": "0x0242",
+ "EventName": "L1_PIPE2_VAL",
+ "BriefDescription": "This event counts valid cycles of L1D cache pipeline#2."
+ },
+ {
+ "EventCode": "0x0250",
+ "EventName": "L1_PIPE0_COMP",
+ "BriefDescription": "This event counts completed requests in L1D cache pipeline#0."
+ },
+ {
+ "EventCode": "0x0251",
+ "EventName": "L1_PIPE1_COMP",
+ "BriefDescription": "This event counts completed requests in L1D cache pipeline#1."
+ },
+ {
+ "EventCode": "0x025A",
+ "EventName": "L1_PIPE_ABORT_STLD_INTLK",
+ "BriefDescription": "This event counts aborted requests in L1D pipelines that due to store-load interlock."
+ },
+ {
+ "EventCode": "0x026C",
+ "EventName": "L1I_PIPE_COMP",
+ "BriefDescription": "This event counts completed requests in L1I cache pipeline."
+ },
+ {
+ "EventCode": "0x026D",
+ "EventName": "L1I_PIPE_VAL",
+ "BriefDescription": "This event counts valid cycles of L1I cache pipeline."
+ },
+ {
+ "EventCode": "0x0278",
+ "EventName": "L1_PIPE0_VAL_IU_TAG_ADRS_SCE",
+ "BriefDescription": "This event counts requests in L1D cache pipeline#0 that its sce bit of tagged address is 1."
+ },
+ {
+ "EventCode": "0x0279",
+ "EventName": "L1_PIPE1_VAL_IU_TAG_ADRS_SCE",
+ "BriefDescription": "This event counts requests in L1D cache pipeline#1 that its sce bit of tagged address is 1."
+ },
+ {
+ "EventCode": "0x02A0",
+ "EventName": "L1_PIPE0_VAL_IU_NOT_SEC0",
+ "BriefDescription": "This event counts requests in L1D cache pipeline#0 that its sector cache ID is not 0."
+ },
+ {
+ "EventCode": "0x02A1",
+ "EventName": "L1_PIPE1_VAL_IU_NOT_SEC0",
+ "BriefDescription": "This event counts requests in L1D cache pipeline#1 that its sector cache ID is not 0."
+ },
+ {
+ "EventCode": "0x02B0",
+ "EventName": "L1_PIPE_COMP_GATHER_2FLOW",
+ "BriefDescription": "This event counts the number of times where 2 elements of the gather instructions became 2-flows because 2 elements could not be combined."
+ },
+ {
+ "EventCode": "0x02B1",
+ "EventName": "L1_PIPE_COMP_GATHER_1FLOW",
+ "BriefDescription": "This event counts the number of times where 2 elements of the gather instructions became 1-flow because 2 elements could be combined."
+ },
+ {
+ "EventCode": "0x02B2",
+ "EventName": "L1_PIPE_COMP_GATHER_0FLOW",
+ "BriefDescription": "This event counts the number of times where 2 elements of the gather instructions became 0-flow because both predicate values are 0."
+ },
+ {
+ "EventCode": "0x02B3",
+ "EventName": "L1_PIPE_COMP_SCATTER_1FLOW",
+ "BriefDescription": "This event counts the number of flows of the scatter instructions."
+ },
+ {
+ "EventCode": "0x02B8",
+ "EventName": "L1_PIPE0_COMP_PRD_CNT",
+ "BriefDescription": "This event counts the number of 1's in the predicate bits of request in L1D cache pipeline#0, where it is corrected so that it becomes 64 when all bits are 1."
+ },
+ {
+ "EventCode": "0x02B9",
+ "EventName": "L1_PIPE1_COMP_PRD_CNT",
+ "BriefDescription": "This event counts the number of 1's in the predicate bits of request in L1D cache pipeline#1, where it is corrected so that it becomes 64 when all bits are 1."
+ },
+ {
+ "EventCode": "0x0330",
+ "EventName": "L2_PIPE_VAL",
+ "BriefDescription": "This event counts valid cycles of L2 cache pipeline."
+ },
+ {
+ "EventCode": "0x0350",
+ "EventName": "L2_PIPE_COMP_ALL",
+ "BriefDescription": "This event counts completed requests in L2 cache pipeline."
+ },
+ {
+ "EventCode": "0x0370",
+ "EventName": "L2_PIPE_COMP_PF_L2MIB_MCH",
+ "BriefDescription": "This event counts operations where software or hardware prefetch hits an L2 cache refill buffer allocated by demand access."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_TLB",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the instruction TLB."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_TLB",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when there is a demand data miss in the data TLB."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_ST",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when the backend is stalled waiting for a store."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_ILOCK",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when operations are available from the frontend but at least one is not ready to be sent to the backend because of an input dependency."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pmu.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pmu.json
new file mode 100644
index 000000000000..65bd6cdd0dd5
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/pmu.json
@@ -0,0 +1,10 @@
+[
+ {
+ "ArchStdEvent": "PMU_OVFS",
+ "BriefDescription": "This event counts the event generated each time one of the condition occurs described in Arm Architecture Reference Manual for A-profile architecture. This event is only for output to the trace unit."
+ },
+ {
+ "ArchStdEvent": "PMU_HOVFS",
+ "BriefDescription": "This event counts the event generated each time an event is counted by an event counter <n> and all of the condition occur described in Arm Architecture Reference Manual for A-profile architecture. This event is only for output to the trace unit."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/retired.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/retired.json
new file mode 100644
index 000000000000..de56aafec2dc
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/retired.json
@@ -0,0 +1,30 @@
+[
+ {
+ "ArchStdEvent": "SW_INCR",
+ "BriefDescription": "This event counts on writes to the PMSWINC register."
+ },
+ {
+ "ArchStdEvent": "INST_RETIRED",
+ "BriefDescription": "This event counts every architecturally executed instruction."
+ },
+ {
+ "ArchStdEvent": "CID_WRITE_RETIRED",
+ "BriefDescription": "This event counts every write to CONTEXTIDR."
+ },
+ {
+ "ArchStdEvent": "BR_RETIRED",
+ "BriefDescription": "This event counts architecturally executed branch instruction."
+ },
+ {
+ "ArchStdEvent": "BR_MIS_PRED_RETIRED",
+ "BriefDescription": "This event counts architecturally executed branch instruction which was mispredicted."
+ },
+ {
+ "ArchStdEvent": "OP_RETIRED",
+ "BriefDescription": "This event counts every architecturally executed micro-operation."
+ },
+ {
+ "ArchStdEvent": "UOP_RETIRED",
+ "BriefDescription": "This event counts micro-operation that would be executed in a Simple sequential execution of the program."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/spec_operation.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/spec_operation.json
new file mode 100644
index 000000000000..1caf3baeae4e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/spec_operation.json
@@ -0,0 +1,171 @@
+[
+ {
+ "ArchStdEvent": "BR_MIS_PRED",
+ "BriefDescription": "This event counts each correction to the predicted program flow that occurs because of a misprediction from, or no prediction from, the branch prediction resources and that relates to instructions that the branch prediction resources are capable of predicting."
+ },
+ {
+ "ArchStdEvent": "BR_PRED",
+ "BriefDescription": "This event counts every branch or other change in the program flow that the branch prediction resources are capable of predicting."
+ },
+ {
+ "ArchStdEvent": "INST_SPEC",
+ "BriefDescription": "This event counts every architecturally executed instruction."
+ },
+ {
+ "ArchStdEvent": "OP_SPEC",
+ "BriefDescription": "This event counts every speculatively executed micro-operation."
+ },
+ {
+ "ArchStdEvent": "LDREX_SPEC",
+ "BriefDescription": "This event counts architecturally executed load-exclusive instructions."
+ },
+ {
+ "ArchStdEvent": "STREX_SPEC",
+ "BriefDescription": "This event counts architecturally executed store-exclusive instructions."
+ },
+ {
+ "ArchStdEvent": "LD_SPEC",
+ "BriefDescription": "This event counts architecturally executed memory-reading instructions, as defined by the LD_RETIRED event."
+ },
+ {
+ "ArchStdEvent": "ST_SPEC",
+ "BriefDescription": "This event counts architecturally executed memory-writing instructions, as defined by the ST_RETIRED event. This event counts DCZVA as a store operation."
+ },
+ {
+ "ArchStdEvent": "LDST_SPEC",
+ "BriefDescription": "This event counts architecturally executed memory-reading instructions and memory-writing instructions, as defined by the LD_RETIRED and ST_RETIRED events."
+ },
+ {
+ "ArchStdEvent": "DP_SPEC",
+ "BriefDescription": "This event counts architecturally executed integer data-processing instructions. See DP_SPEC of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "ASE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD data-processing instructions."
+ },
+ {
+ "ArchStdEvent": "VFP_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point data-processing instructions."
+ },
+ {
+ "ArchStdEvent": "PC_WRITE_SPEC",
+ "BriefDescription": "This event counts only software changes of the PC that defined by the instruction architecturally executed, condition code check pass, software change of the PC event."
+ },
+ {
+ "ArchStdEvent": "CRYPTO_SPEC",
+ "BriefDescription": "This event counts architecturally executed cryptographic instructions, except PMULL and VMULL."
+ },
+ {
+ "ArchStdEvent": "BR_IMMED_SPEC",
+ "BriefDescription": "This event counts architecturally executed immediate branch instructions."
+ },
+ {
+ "ArchStdEvent": "BR_RETURN_SPEC",
+ "BriefDescription": "This event counts architecturally executed procedure return operations that defined by the BR_RETURN_RETIRED event."
+ },
+ {
+ "ArchStdEvent": "BR_INDIRECT_SPEC",
+ "BriefDescription": "This event counts architecturally executed indirect branch instructions that includes software change of the PC other than exception-generating instructions and immediate branch instructions."
+ },
+ {
+ "ArchStdEvent": "ISB_SPEC",
+ "BriefDescription": "This event counts architecturally executed Instruction Synchronization Barrier instructions."
+ },
+ {
+ "ArchStdEvent": "DSB_SPEC",
+ "BriefDescription": "This event counts architecturally executed Data Synchronization Barrier instructions."
+ },
+ {
+ "ArchStdEvent": "DMB_SPEC",
+ "BriefDescription": "This event counts architecturally executed Data Memory Barrier instructions, excluding the implied barrier operations of load/store operations with release consistency semantics."
+ },
+ {
+ "ArchStdEvent": "CSDB_SPEC",
+ "BriefDescription": "This event counts architecturally executed control speculation barrier instructions."
+ },
+ {
+ "EventCode": "0x0108",
+ "EventName": "PRD_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that using predicate register."
+ },
+ {
+ "EventCode": "0x0109",
+ "EventName": "IEL_SPEC",
+ "BriefDescription": "This event counts architecturally executed inter-element manipulation operation."
+ },
+ {
+ "EventCode": "0x010A",
+ "EventName": "IREG_SPEC",
+ "BriefDescription": "This event counts architecturally executed inter-register manipulation operation."
+ },
+ {
+ "EventCode": "0x011A",
+ "EventName": "BC_LD_SPEC",
+ "BriefDescription": "This event counts architecturally executed SIMD broadcast floating-point load operation."
+ },
+ {
+ "EventCode": "0x011B",
+ "EventName": "DCZVA_SPEC",
+ "BriefDescription": "This event counts architecturally executed zero blocking operations due to the DC ZVA instruction."
+ },
+ {
+ "EventCode": "0x0121",
+ "EventName": "EFFECTIVE_INST_SPEC",
+ "BriefDescription": "This event counts architecturally executed instructions, excluding the MOVPRFX instruction."
+ },
+ {
+ "EventCode": "0x0123",
+ "EventName": "PRE_INDEX_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that uses pre-index as its addressing mode."
+ },
+ {
+ "EventCode": "0x0124",
+ "EventName": "POST_INDEX_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that uses post-index as its addressing mode."
+ },
+ {
+ "EventCode": "0x0139",
+ "EventName": "UOP_SPLIT",
+ "BriefDescription": "This event counts the occurrence count of the micro-operation split."
+ },
+ {
+ "ArchStdEvent": "ASE_INST_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD operation."
+ },
+ {
+ "ArchStdEvent": "INT_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations due to scalar, Advanced SIMD, and SVE instructions listed in Integer instructions section of ARMv9 Reference Manual."
+ },
+ {
+ "ArchStdEvent": "INT_DIV_SPEC",
+ "BriefDescription": "This event counts architecturally executed integer divide operation."
+ },
+ {
+ "ArchStdEvent": "INT_DIV64_SPEC",
+ "BriefDescription": "This event counts architecturally executed 64-bit integer divide operation."
+ },
+ {
+ "ArchStdEvent": "INT_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed integer multiply operation."
+ },
+ {
+ "ArchStdEvent": "INT_MUL64_SPEC",
+ "BriefDescription": "This event counts architecturally executed integer 64-bit x 64-bit multiply operation."
+ },
+ {
+ "ArchStdEvent": "INT_MULH64_SPEC",
+ "BriefDescription": "This event counts architecturally executed integer 64-bit x 64-bit multiply returning high part operation."
+ },
+ {
+ "ArchStdEvent": "NONFP_SPEC",
+ "BriefDescription": "This event counts architecturally executed non-floating-point operation."
+ },
+ {
+ "ArchStdEvent": "INT_SCALE_OPS_SPEC",
+ "BriefDescription": "This event counts each integer ALU operation counted by SVE_INT_SPEC. See ALU operation counts section of ARMv9 Reference Manual for information on the counter increment for different types of instruction."
+ },
+ {
+ "ArchStdEvent": "INT_FIXED_OPS_SPEC",
+ "BriefDescription": "This event counts each integer ALU operation counted by INT_SPEC that is not counted by SVE_INT_SPEC. See ALU operation counts section of ARMv9 Reference Manual for information on the counter increment for different types of instruction."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/stall.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/stall.json
new file mode 100644
index 000000000000..e1e16d513828
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/stall.json
@@ -0,0 +1,94 @@
+[
+ {
+ "ArchStdEvent": "STALL_FRONTEND",
+ "BriefDescription": "This event counts every cycle counted by the CPU_CYCLES event on that no operation was issued because there are no operations available to issue for this PE from the frontend."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND",
+ "BriefDescription": "This event counts every cycle counted by the CPU_CYCLES event on that no operation was issued because the backend is unable to accept any operation."
+ },
+ {
+ "ArchStdEvent": "STALL",
+ "BriefDescription": "This event counts every cycle that no instruction was dispatched from decode unit."
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT_BACKEND",
+ "BriefDescription": "This event counts every cycle that no instruction was dispatched from decode unit due to the backend."
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT_FRONTEND",
+ "BriefDescription": "This event counts every cycle that no instruction was dispatched from decode unit due to the frontend."
+ },
+ {
+ "ArchStdEvent": "STALL_SLOT",
+ "BriefDescription": "This event counts every cycle that no instruction or operation Slot was dispatched from decode unit."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_MEM",
+ "BriefDescription": "This event counts every cycle that no instruction was dispatched from decode unit due to memory stall."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_MEMBOUND",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND when no instructions are delivered from the memory system."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_L1I",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the first level of instruction cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_L2I",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the second level of instruction cache."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_MEM",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_MEMBOUND when there is a demand instruction miss in the last level of instruction cache within the PE clock domain or a non-cacheable instruction fetch in progress."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_CPUBOUND",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND when the frontend is stalled on a frontend processor resource, not including memory."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_FLOW",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_CPUBOUND when the frontend is stalled on unavailability of prediction flow resources."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_FLUSH",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_CPUBOUND when the frontend is recovering from a pipeline flush."
+ },
+ {
+ "ArchStdEvent": "STALL_FRONTEND_RENAME",
+ "BriefDescription": "This event counts every cycle counted by STALL_FRONTEND_CPUBOUND when operations are available from the frontend but at least one is not ready to be sent to the backend because no rename register is available."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_MEMBOUND",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when the backend is waiting for a memory access to complete."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_L1D",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when there is a demand data miss in L1D cache."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_L2D",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when there is a demand data miss in L2 cache."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_CPUBOUND",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when the backend is stalled on a processor resource, not including memory."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_BUSY",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND when operations are available from the frontend but the backend is not able to accept an operation because an execution unit is busy."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_RENAME",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_CPUBOUND when operations are available from the frontend but at least one is not ready to be sent to the backend because no rename register is available."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_ATOMIC",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when the backend is processing an Atomic operation."
+ },
+ {
+ "ArchStdEvent": "STALL_BACKEND_MEMCPYSET",
+ "BriefDescription": "This event counts every cycle counted by STALL_BACKEND_MEMBOUND when the backend is processing a Memory Copy or Set instruction."
+ }
+]
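
Note: STALL_FRONTEND and STALL_BACKEND above are both defined as cycles already counted by CPU_CYCLES, so they can be turned into stall fractions directly. A minimal sketch, not part of the patch, with illustrative names and values:

    def stall_fractions(stall_frontend: int, stall_backend: int,
                        cpu_cycles: int) -> dict:
        # Both stall events count cycles counted by CPU_CYCLES, so the ratios
        # are directly comparable fractions of total cycles.
        return {
            "frontend": stall_frontend / cpu_cycles,
            "backend": stall_backend / cpu_cycles,
        }

    print(stall_fractions(200_000, 300_000, 1_000_000))  # {'frontend': 0.2, 'backend': 0.3}
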
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/sve.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/sve.json
new file mode 100644
index 000000000000..88cab0caf49e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/sve.json
@@ -0,0 +1,254 @@
+[
+ {
+ "ArchStdEvent": "SIMD_INST_RETIRED",
+ "BriefDescription": "This event counts architecturally executed SIMD instructions, excluding the Advanced SIMD scalar instructions and the instructions listed in Non-SIMD SVE instructions section of ARMv9 Reference Manual."
+ },
+ {
+ "ArchStdEvent": "SVE_INST_RETIRED",
+ "BriefDescription": "This event counts architecturally executed SVE instructions, including the instructions listed in Non-SIMD SVE instructions section of ARMv9 Reference Manual."
+ },
+ {
+ "ArchStdEvent": "SVE_INST_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE instructions, including the instructions listed in Non-SIMD SVE instructions section of ARMv9 Reference Manual."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INST_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE operation."
+ },
+ {
+ "ArchStdEvent": "UOP_SPEC",
+ "BriefDescription": "This event counts all architecturally executed micro-operation."
+ },
+ {
+ "ArchStdEvent": "SVE_MATH_SPEC",
+ "BriefDescription": "This event counts architecturally executed math function operations due to the SVE FTSMUL, FTMAD, FTSSEL, and FEXPA instructions."
+ },
+ {
+ "ArchStdEvent": "FP_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations due to scalar, Advanced SIMD, and SVE instructions listed in Floating-point instructions section of ARMv9 Reference Manual."
+ },
+ {
+ "ArchStdEvent": "FP_FMA_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point fused multiply-add and multiply-subtract operation."
+ },
+ {
+ "ArchStdEvent": "FP_RECPE_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point reciprocal estimate operations due to the Advanced SIMD scalar, Advanced SIMD vector, and SVE FRECPE and FRSQRTE instructions."
+ },
+ {
+ "ArchStdEvent": "FP_CVT_SPEC",
+ "BriefDescription": "This event counts architecturally executed floating-point convert operations due to the scalar, Advanced SIMD, and SVE floating-point conversion instructions listed in Floating-point conversions section of ARMv9 Reference Manual."
+ },
+ {
+ "ArchStdEvent": "ASE_INT_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD integer operation."
+ },
+ {
+ "ArchStdEvent": "SVE_INT_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE integer operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE integer operation."
+ },
+ {
+ "ArchStdEvent": "SVE_INT_DIV_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE integer divide operation."
+ },
+ {
+ "ArchStdEvent": "SVE_INT_DIV64_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE 64-bit integer divide operation."
+ },
+ {
+ "ArchStdEvent": "ASE_INT_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD integer multiply operation."
+ },
+ {
+ "ArchStdEvent": "SVE_INT_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE integer multiply operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT_MUL_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE integer multiply operation."
+ },
+ {
+ "ArchStdEvent": "SVE_INT_MUL64_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE integer 64-bit x 64-bit multiply operation."
+ },
+ {
+ "ArchStdEvent": "SVE_INT_MULH64_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE integer 64-bit x 64-bit multiply returning high part operation."
+ },
+ {
+ "ArchStdEvent": "ASE_NONFP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD non-floating-point operation."
+ },
+ {
+ "ArchStdEvent": "SVE_NONFP_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE non-floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_NONFP_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE non-floating-point operation."
+ },
+ {
+ "ArchStdEvent": "ASE_INT_VREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD integer reduction operation."
+ },
+ {
+ "ArchStdEvent": "SVE_INT_VREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE integer reduction operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT_VREDUCE_SPEC",
+ "BriefDescription": "This event counts architecturally executed Advanced SIMD or SVE integer reduction operation."
+ },
+ {
+ "ArchStdEvent": "SVE_PERM_SPEC",
+ "BriefDescription": "This event counts architecturally executed vector or predicate permute operation."
+ },
+ {
+ "ArchStdEvent": "SVE_XPIPE_Z2R_SPEC",
+ "BriefDescription": "This event counts architecturally executed vector to general-purpose scalar cross-pipeline transfer operation."
+ },
+ {
+ "ArchStdEvent": "SVE_XPIPE_R2Z_SPEC",
+ "BriefDescription": "This event counts architecturally executed general-purpose scalar to vector cross-pipeline transfer operation."
+ },
+ {
+ "ArchStdEvent": "SVE_PGEN_SPEC",
+ "BriefDescription": "This event counts architecturally executed predicate-generating operation."
+ },
+ {
+ "ArchStdEvent": "SVE_PGEN_FLG_SPEC",
+ "BriefDescription": "This event counts architecturally executed predicate-generating operation that sets condition flags."
+ },
+ {
+ "ArchStdEvent": "SVE_PPERM_SPEC",
+ "BriefDescription": "This event counts architecturally executed predicate permute operation."
+ },
+ {
+ "ArchStdEvent": "SVE_PRED_SPEC",
+ "BriefDescription": "This event counts architecturally executed SIMD data-processing and load/store operations due to SVE instructions with a Governing predicate operand that determines the Active elements."
+ },
+ {
+ "ArchStdEvent": "SVE_MOVPRFX_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations due to MOVPRFX instructions, whether or not they were fused with the prefixed instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_MOVPRFX_Z_SPEC",
+ "BriefDescription": "This event counts architecturally executed operation counted by SVE_MOVPRFX_SPEC where the operation uses zeroing predication."
+ },
+ {
+ "ArchStdEvent": "SVE_MOVPRFX_M_SPEC",
+ "BriefDescription": "This event counts architecturally executed operation counted by SVE_MOVPRFX_SPEC where the operation uses merging predication."
+ },
+ {
+ "ArchStdEvent": "SVE_MOVPRFX_U_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations due to MOVPRFX instructions that were not fused with the prefixed instruction."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_LD_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that read from memory due to Advanced SIMD or SVE load instructions."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_ST_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that write to memory due to Advanced SIMD or SVE store instructions."
+ },
+ {
+ "ArchStdEvent": "PRF_SPEC",
+ "BriefDescription": "This event counts architecturally executed prefetch operations due to scalar PRFM, PRFUM and SVE PRF instructions."
+ },
+ {
+ "ArchStdEvent": "BASE_LD_REG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that read from memory due to an instruction that loads a general-purpose register."
+ },
+ {
+ "ArchStdEvent": "BASE_ST_REG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that write to memory due to an instruction that stores a general-purpose register, excluding the DC ZVA instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_LDR_REG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that read from memory due to an SVE LDR instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_STR_REG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that write to memory due to an SVE STR instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_LDR_PREG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that read from memory due to an SVE LDR (predicate) instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_STR_PREG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that write to memory due to an SVE STR (predicate) instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_PRF_CONTIG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that prefetch memory due to an SVE predicated single contiguous element prefetch instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_LDNT_CONTIG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operation that reads from memory with a non-temporal hint due to an SVE non-temporal contiguous element load instruction."
+ },
+ {
+ "ArchStdEvent": "SVE_STNT_CONTIG_SPEC",
+ "BriefDescription": "This event counts architecturally executed operation that writes to memory with a non-temporal hint due to an SVE non-temporal contiguous element store instruction."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_LD_MULTI_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that read from memory due to Advanced SIMD or SVE multiple vector contiguous structure load instructions."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_ST_MULTI_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that write to memory due to Advanced SIMD or SVE multiple vector contiguous structure store instructions."
+ },
+ {
+ "ArchStdEvent": "SVE_LD_GATHER_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that read from memory due to SVE non-contiguous gather-load instructions."
+ },
+ {
+ "ArchStdEvent": "SVE_ST_SCATTER_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that write to memory due to SVE non-contiguous scatter-store instructions."
+ },
+ {
+ "ArchStdEvent": "SVE_PRF_GATHER_SPEC",
+ "BriefDescription": "This event counts architecturally executed operations that prefetch memory due to SVE non-contiguous gather-prefetch instructions."
+ },
+ {
+ "ArchStdEvent": "SVE_LDFF_SPEC",
+ "BriefDescription": "This event counts architecturally executed memory read operations due to SVE First-fault and Non-fault load instructions."
+ },
+ {
+ "ArchStdEvent": "FP_HP_SCALE_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE half-precision arithmetic operation. See FP_HP_SCALE_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by 8, or by 16 for operations that would also be counted by SVE_FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "FP_HP_FIXED_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed v8SIMD&FP half-precision arithmetic operation. See FP_HP_FIXED_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by the number of 16-bit elements for Advanced SIMD operations, or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "FP_SP_SCALE_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE single-precision arithmetic operation. See FP_SP_SCALE_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by 4, or by 8 for operations that would also be counted by SVE_FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "FP_SP_FIXED_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed v8SIMD&FP single-precision arithmetic operation. See FP_SP_FIXED_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by the number of 32-bit elements for Advanced SIMD operations, or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "FP_DP_SCALE_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed SVE double-precision arithmetic operation. See FP_DP_SCALE_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by 2, or by 4 for operations that would also be counted by SVE_FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "FP_DP_FIXED_OPS_SPEC",
+ "BriefDescription": "This event counts architecturally executed v8SIMD&FP double-precision arithmetic operation. See FP_DP_FIXED_OPS_SPEC of ARMv9 Reference Manual for more information. This event counter is incremented by 2 for Advanced SIMD operations, or by 1 for scalar operations, and by twice those amounts for operations that would also be counted by FP_FMA_SPEC."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT_DOT_SPEC",
+ "BriefDescription": "This event counts architecturally executed microarchitectural Advanced SIMD or SVE integer dot-product operation."
+ },
+ {
+ "ArchStdEvent": "ASE_SVE_INT_MMLA_SPEC",
+ "BriefDescription": "This event counts architecturally executed microarchitectural Advanced SIMD or SVE integer matrix multiply operation."
+ }
+]
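
The FP_*_SCALE_OPS_SPEC and FP_*_FIXED_OPS_SPEC descriptions above spell out the increment rules: the SCALE variants advance by 8/4/2 per half-/single-/double-precision operation (i.e. per 128 bits of vector length), the FIXED variants by the element count or by 1 for scalar operations, with both doubled for fused multiply-adds. A minimal sketch of how such counters could be turned into a FLOPS estimate follows; it is not part of the patch, the counter values, vector length and time window are hypothetical, and the VL/128 scaling is an assumption derived from those increment rules.

# Illustrative sketch (not part of the patch): estimating double-precision FLOPs
# from FP_DP_SCALE_OPS_SPEC / FP_DP_FIXED_OPS_SPEC as described above.
def estimate_dp_flops(scale_ops, fixed_ops, sve_vl_bits=128):
    """Approximate double-precision FLOP count from the SCALE/FIXED counters."""
    granules = sve_vl_bits / 128  # SCALE_OPS increments per 128 bits of vector length
    return scale_ops * granules + fixed_ops

counters = {"FP_DP_SCALE_OPS_SPEC": 4_000_000, "FP_DP_FIXED_OPS_SPEC": 1_500_000}
flops = estimate_dp_flops(counters["FP_DP_SCALE_OPS_SPEC"],
                          counters["FP_DP_FIXED_OPS_SPEC"],
                          sve_vl_bits=256)          # hypothetical 256-bit SVE implementation
elapsed_s = 0.01                                    # hypothetical sample window in seconds
print(f"~{flops / elapsed_s / 1e9:.2f} GFLOPS double precision")
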
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/tlb.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/tlb.json
new file mode 100644
index 000000000000..f54029ba369a
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/tlb.json
@@ -0,0 +1,362 @@
+[
+ {
+ "ArchStdEvent": "L1I_TLB_REFILL",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I TLB. See L1I_TLB_REFILL of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_REFILL",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D TLB. See L1D_TLB_REFILL of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L1D_TLB",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D TLB. See L1D_TLB of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L1I_TLB",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I TLB. See L1I_TLB of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB_REFILL",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D TLB. See L2D_TLB_REFILL of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "L2D_TLB",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D TLB. See L2D_TLB of ARMv9 Reference Manual for more information."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK",
+ "BriefDescription": "This event counts data TLB access with at least one translation table walk."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK",
+ "BriefDescription": "This event counts instruction TLB access with at least one translation table walk."
+ },
+ {
+ "EventCode": "0x0C00",
+ "EventName": "L1I_TLB_4K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 4KB page."
+ },
+ {
+ "EventCode": "0x0C01",
+ "EventName": "L1I_TLB_64K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 64KB page."
+ },
+ {
+ "EventCode": "0x0C02",
+ "EventName": "L1I_TLB_2M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 2MB page."
+ },
+ {
+ "EventCode": "0x0C03",
+ "EventName": "L1I_TLB_32M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 32MB page."
+ },
+ {
+ "EventCode": "0x0C04",
+ "EventName": "L1I_TLB_512M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 512MB page."
+ },
+ {
+ "EventCode": "0x0C05",
+ "EventName": "L1I_TLB_1G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 1GB page."
+ },
+ {
+ "EventCode": "0x0C06",
+ "EventName": "L1I_TLB_16G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1I in 16GB page."
+ },
+ {
+ "EventCode": "0x0C08",
+ "EventName": "L1D_TLB_4K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 4KB page."
+ },
+ {
+ "EventCode": "0x0C09",
+ "EventName": "L1D_TLB_64K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 64KB page."
+ },
+ {
+ "EventCode": "0x0C0A",
+ "EventName": "L1D_TLB_2M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 2MB page."
+ },
+ {
+ "EventCode": "0x0C0B",
+ "EventName": "L1D_TLB_32M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 32MB page."
+ },
+ {
+ "EventCode": "0x0C0C",
+ "EventName": "L1D_TLB_512M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 512MB page."
+ },
+ {
+ "EventCode": "0x0C0D",
+ "EventName": "L1D_TLB_1G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 1GB page."
+ },
+ {
+ "EventCode": "0x0C0E",
+ "EventName": "L1D_TLB_16G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L1D in 16GB page."
+ },
+ {
+ "EventCode": "0x0C10",
+ "EventName": "L1I_TLB_REFILL_4K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I in 4KB page."
+ },
+ {
+ "EventCode": "0x0C11",
+ "EventName": "L1I_TLB_REFILL_64K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I in 64KB page."
+ },
+ {
+ "EventCode": "0x0C12",
+ "EventName": "L1I_TLB_REFILL_2M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I in 2MB page."
+ },
+ {
+ "EventCode": "0x0C13",
+ "EventName": "L1I_TLB_REFILL_32M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I in 32MB page."
+ },
+ {
+ "EventCode": "0x0C14",
+ "EventName": "L1I_TLB_REFILL_512M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I in 512MB page."
+ },
+ {
+ "EventCode": "0x0C15",
+ "EventName": "L1I_TLB_REFILL_1G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I in 1GB page."
+ },
+ {
+ "EventCode": "0x0C16",
+ "EventName": "L1I_TLB_REFILL_16G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1I in 16GB page."
+ },
+ {
+ "EventCode": "0x0C18",
+ "EventName": "L1D_TLB_REFILL_4K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D in 4KB page."
+ },
+ {
+ "EventCode": "0x0C19",
+ "EventName": "L1D_TLB_REFILL_64K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D in 64KB page."
+ },
+ {
+ "EventCode": "0x0C1A",
+ "EventName": "L1D_TLB_REFILL_2M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D in 2MB page."
+ },
+ {
+ "EventCode": "0x0C1B",
+ "EventName": "L1D_TLB_REFILL_32M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D in 32MB page."
+ },
+ {
+ "EventCode": "0x0C1C",
+ "EventName": "L1D_TLB_REFILL_512M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D in 512MB page."
+ },
+ {
+ "EventCode": "0x0C1D",
+ "EventName": "L1D_TLB_REFILL_1G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D in 1GB page."
+ },
+ {
+ "EventCode": "0x0C1E",
+ "EventName": "L1D_TLB_REFILL_16G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L1D in 16GB page."
+ },
+ {
+ "EventCode": "0x0C20",
+ "EventName": "L2I_TLB_4K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 4KB page."
+ },
+ {
+ "EventCode": "0x0C21",
+ "EventName": "L2I_TLB_64K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 64KB page."
+ },
+ {
+ "EventCode": "0x0C22",
+ "EventName": "L2I_TLB_2M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 2MB page."
+ },
+ {
+ "EventCode": "0x0C23",
+ "EventName": "L2I_TLB_32M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 32MB page."
+ },
+ {
+ "EventCode": "0x0C24",
+ "EventName": "L2I_TLB_512M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 512MB page."
+ },
+ {
+ "EventCode": "0x0C25",
+ "EventName": "L2I_TLB_1G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 1GB page."
+ },
+ {
+ "EventCode": "0x0C26",
+ "EventName": "L2I_TLB_16G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2I in 16GB page."
+ },
+ {
+ "EventCode": "0x0C28",
+ "EventName": "L2D_TLB_4K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 4KB page."
+ },
+ {
+ "EventCode": "0x0C29",
+ "EventName": "L2D_TLB_64K",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 64KB page."
+ },
+ {
+ "EventCode": "0x0C2A",
+ "EventName": "L2D_TLB_2M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 2MB page."
+ },
+ {
+ "EventCode": "0x0C2B",
+ "EventName": "L2D_TLB_32M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 32MB page."
+ },
+ {
+ "EventCode": "0x0C2C",
+ "EventName": "L2D_TLB_512M",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 512MB page."
+ },
+ {
+ "EventCode": "0x0C2D",
+ "EventName": "L2D_TLB_1G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 1GB page."
+ },
+ {
+ "EventCode": "0x0C2E",
+ "EventName": "L2D_TLB_16G",
+ "BriefDescription": "This event counts operations that cause a TLB access to the L2D in 16GB page."
+ },
+ {
+ "EventCode": "0x0C30",
+ "EventName": "L2I_TLB_REFILL_4K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I in 4KB page."
+ },
+ {
+ "EventCode": "0x0C31",
+ "EventName": "L2I_TLB_REFILL_64K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I in 64KB page."
+ },
+ {
+ "EventCode": "0x0C32",
+ "EventName": "L2I_TLB_REFILL_2M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I in 2MB page."
+ },
+ {
+ "EventCode": "0x0C33",
+ "EventName": "L2I_TLB_REFILL_32M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I in 32MB page."
+ },
+ {
+ "EventCode": "0x0C34",
+ "EventName": "L2I_TLB_REFILL_512M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I in 512MB page."
+ },
+ {
+ "EventCode": "0x0C35",
+ "EventName": "L2I_TLB_REFILL_1G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I in 1GB page."
+ },
+ {
+ "EventCode": "0x0C36",
+ "EventName": "L2I_TLB_REFILL_16G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2I in 16GB page."
+ },
+ {
+ "EventCode": "0x0C38",
+ "EventName": "L2D_TLB_REFILL_4K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D in 4KB page."
+ },
+ {
+ "EventCode": "0x0C39",
+ "EventName": "L2D_TLB_REFILL_64K",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D in 64KB page."
+ },
+ {
+ "EventCode": "0x0C3A",
+ "EventName": "L2D_TLB_REFILL_2M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D in 2MB page."
+ },
+ {
+ "EventCode": "0x0C3B",
+ "EventName": "L2D_TLB_REFILL_32M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D in 32MB page."
+ },
+ {
+ "EventCode": "0x0C3C",
+ "EventName": "L2D_TLB_REFILL_512M",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D in 512MB page."
+ },
+ {
+ "EventCode": "0x0C3D",
+ "EventName": "L2D_TLB_REFILL_1G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D in 1GB page."
+ },
+ {
+ "EventCode": "0x0C3E",
+ "EventName": "L2D_TLB_REFILL_16G",
+ "BriefDescription": "This event counts operations that cause a TLB refill of the L2D in 16GB page."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_PERCYC",
+ "BriefDescription": "This event counts the number of DTLB_WALK events in progress on each Processor cycle."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_PERCYC",
+ "BriefDescription": "This event counts the number of ITLB_WALK events in progress on each Processor cycle."
+ },
+ {
+ "ArchStdEvent": "DTLB_STEP",
+ "BriefDescription": "This event counts translation table walk access made by a refill of the data TLB."
+ },
+ {
+ "ArchStdEvent": "ITLB_STEP",
+ "BriefDescription": "This event counts translation table walk access made by a refill of the instruction TLB."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_LARGE",
+ "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a large page size."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_LARGE",
+ "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a large page size."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_SMALL",
+ "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a small page size."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_SMALL",
+ "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a small page size."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_BLOCK",
+ "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a Block."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_BLOCK",
+ "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a Block."
+ },
+ {
+ "ArchStdEvent": "DTLB_WALK_PAGE",
+ "BriefDescription": "This event counts translation table walk counted by DTLB_WALK where the result of the walk yields a Page."
+ },
+ {
+ "ArchStdEvent": "ITLB_WALK_PAGE",
+ "BriefDescription": "This event counts translation table walk counted by ITLB_WALK where the result of the walk yields a Page."
+ }
+]
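
The 0x0C00-0x0C3E events above break the aggregate TLB events down by page size. A minimal sketch of how the per-size counts could be summarized follows; it is not part of the patch, it assumes the *_4K .. *_16G events partition the parent L1D_TLB / L1D_TLB_REFILL counts, and all counter values are hypothetical.

# Illustrative sketch (not part of the patch): page-size mix and refill rate
# from the per-size L1D TLB events defined above. Values are hypothetical.
l1d_tlb = {"4K": 900_000, "64K": 50_000, "2M": 30_000, "1G": 2_000}
l1d_tlb_refill = {"4K": 12_000, "64K": 300, "2M": 150, "1G": 10}

total = sum(l1d_tlb.values())
for size, accesses in l1d_tlb.items():
    share = accesses / total
    refill_rate = l1d_tlb_refill[size] / accesses
    print(f"{size:>3}: {share:6.1%} of L1D TLB accesses, {refill_rate:6.2%} refill rate")
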
diff --git a/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/trace.json b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/trace.json
new file mode 100644
index 000000000000..0c6e5054c9b5
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/fujitsu/monaka/trace.json
@@ -0,0 +1,18 @@
+[
+ {
+ "ArchStdEvent": "TRB_WRAP",
+ "BriefDescription": "This event counts the event generated each time the current write pointer is wrapped to the base pointer."
+ },
+ {
+ "ArchStdEvent": "TRB_TRIG",
+ "BriefDescription": "This event counts the event generated when a Trace Buffer Extension Trigger Event occurs."
+ },
+ {
+ "ArchStdEvent": "TRCEXTOUT0",
+ "BriefDescription": "This event counts the event generated each time an event is signaled by the trace unit external event 0."
+ },
+ {
+ "ArchStdEvent": "CTI_TRIGOUT4",
+ "BriefDescription": "This event counts the event generated each time an event is signaled on CTI output trigger 4."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
index 6463531b9941..b6a0d2de8534 100644
--- a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
+++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/metrics.json
@@ -3,235 +3,235 @@
"MetricExpr": "FETCH_BUBBLE / (4 * CPU_CYCLES)",
"PublicDescription": "Frontend bound L1 topdown metric",
"BriefDescription": "Frontend bound L1 topdown metric",
-	"DefaultMetricgroupName": "TopDownL1",
-	"MetricGroup": "Default;TopDownL1",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
"MetricName": "frontend_bound"
},
{
"MetricExpr": "(INST_SPEC - INST_RETIRED) / (4 * CPU_CYCLES)",
"PublicDescription": "Bad Speculation L1 topdown metric",
"BriefDescription": "Bad Speculation L1 topdown metric",
- "DefaultMetricgroupName": "TopDownL1",
- "MetricGroup": "Default;TopDownL1",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
"MetricName": "bad_speculation"
},
{
"MetricExpr": "INST_RETIRED / (CPU_CYCLES * 4)",
"PublicDescription": "Retiring L1 topdown metric",
"BriefDescription": "Retiring L1 topdown metric",
- "DefaultMetricgroupName": "TopDownL1",
- "MetricGroup": "Default;TopDownL1",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
"MetricName": "retiring"
},
{
"MetricExpr": "1 - (frontend_bound + bad_speculation + retiring)",
"PublicDescription": "Backend Bound L1 topdown metric",
"BriefDescription": "Backend Bound L1 topdown metric",
- "DefaultMetricgroupName": "TopDownL1",
- "MetricGroup": "Default;TopDownL1",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
"MetricName": "backend_bound"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x201d@ / CPU_CYCLES",
"PublicDescription": "Fetch latency bound L2 topdown metric",
"BriefDescription": "Fetch latency bound L2 topdown metric",
- "MetricGroup": "TopDownL2",
+ "MetricGroup": "TopdownL2",
"MetricName": "fetch_latency_bound"
},
{
"MetricExpr": "frontend_bound - fetch_latency_bound",
"PublicDescription": "Fetch bandwidth bound L2 topdown metric",
"BriefDescription": "Fetch bandwidth bound L2 topdown metric",
- "MetricGroup": "TopDownL2",
+ "MetricGroup": "TopdownL2",
"MetricName": "fetch_bandwidth_bound"
},
{
"MetricExpr": "(bad_speculation * BR_MIS_PRED) / (BR_MIS_PRED + armv8_pmuv3_0@event\\=0x2013@)",
"PublicDescription": "Branch mispredicts L2 topdown metric",
"BriefDescription": "Branch mispredicts L2 topdown metric",
- "MetricGroup": "TopDownL2",
+ "MetricGroup": "TopdownL2",
"MetricName": "branch_mispredicts"
},
{
"MetricExpr": "bad_speculation - branch_mispredicts",
"PublicDescription": "Machine clears L2 topdown metric",
"BriefDescription": "Machine clears L2 topdown metric",
- "MetricGroup": "TopDownL2",
+ "MetricGroup": "TopdownL2",
"MetricName": "machine_clears"
},
{
"MetricExpr": "(EXE_STALL_CYCLE - (MEM_STALL_ANYLOAD + armv8_pmuv3_0@event\\=0x7005@)) / CPU_CYCLES",
"PublicDescription": "Core bound L2 topdown metric",
"BriefDescription": "Core bound L2 topdown metric",
- "MetricGroup": "TopDownL2",
+ "MetricGroup": "TopdownL2",
"MetricName": "core_bound"
},
{
"MetricExpr": "(MEM_STALL_ANYLOAD + armv8_pmuv3_0@event\\=0x7005@) / CPU_CYCLES",
"PublicDescription": "Memory bound L2 topdown metric",
"BriefDescription": "Memory bound L2 topdown metric",
- "MetricGroup": "TopDownL2",
+ "MetricGroup": "TopdownL2",
"MetricName": "memory_bound"
},
{
"MetricExpr": "(((L2I_TLB - L2I_TLB_REFILL) * 15) + (L2I_TLB_REFILL * 100)) / CPU_CYCLES",
"PublicDescription": "Idle by itlb miss L3 topdown metric",
"BriefDescription": "Idle by itlb miss L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "idle_by_itlb_miss"
},
{
"MetricExpr": "(((L2I_CACHE - L2I_CACHE_REFILL) * 15) + (L2I_CACHE_REFILL * 100)) / CPU_CYCLES",
"PublicDescription": "Idle by icache miss L3 topdown metric",
"BriefDescription": "Idle by icache miss L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "idle_by_icache_miss"
},
{
"MetricExpr": "(BR_MIS_PRED * 5) / CPU_CYCLES",
"PublicDescription": "BP misp flush L3 topdown metric",
"BriefDescription": "BP misp flush L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "bp_misp_flush"
},
{
"MetricExpr": "(armv8_pmuv3_0@event\\=0x2013@ * 5) / CPU_CYCLES",
"PublicDescription": "OOO flush L3 topdown metric",
"BriefDescription": "OOO flush L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "ooo_flush"
},
{
"MetricExpr": "(armv8_pmuv3_0@event\\=0x1001@ * 5) / CPU_CYCLES",
"PublicDescription": "Static predictor flush L3 topdown metric",
"BriefDescription": "Static predictor flush L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "sp_flush"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x1010@ / BR_MIS_PRED",
"PublicDescription": "Indirect branch L3 topdown metric",
"BriefDescription": "Indirect branch L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "indirect_branch"
},
{
"MetricExpr": "(armv8_pmuv3_0@event\\=0x1013@ + armv8_pmuv3_0@event\\=0x1016@) / BR_MIS_PRED",
"PublicDescription": "Push branch L3 topdown metric",
"BriefDescription": "Push branch L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "push_branch"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x100d@ / BR_MIS_PRED",
"PublicDescription": "Pop branch L3 topdown metric",
"BriefDescription": "Pop branch L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "pop_branch"
},
{
"MetricExpr": "(BR_MIS_PRED - armv8_pmuv3_0@event\\=0x1010@ - armv8_pmuv3_0@event\\=0x1013@ - armv8_pmuv3_0@event\\=0x1016@ - armv8_pmuv3_0@event\\=0x100d@) / BR_MIS_PRED",
"PublicDescription": "Other branch L3 topdown metric",
"BriefDescription": "Other branch L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "other_branch"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x2012@ / armv8_pmuv3_0@event\\=0x2013@",
"PublicDescription": "Nuke flush L3 topdown metric",
"BriefDescription": "Nuke flush L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "nuke_flush"
},
{
"MetricExpr": "1 - nuke_flush",
"PublicDescription": "Other flush L3 topdown metric",
"BriefDescription": "Other flush L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "other_flush"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x2010@ / CPU_CYCLES",
"PublicDescription": "Sync stall L3 topdown metric",
"BriefDescription": "Sync stall L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "sync_stall"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x2004@ / CPU_CYCLES",
"PublicDescription": "Rob stall L3 topdown metric",
"BriefDescription": "Rob stall L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "rob_stall"
},
{
"MetricExpr": "(armv8_pmuv3_0@event\\=0x2006@ + armv8_pmuv3_0@event\\=0x2007@ + armv8_pmuv3_0@event\\=0x2008@) / CPU_CYCLES",
"PublicDescription": "Ptag stall L3 topdown metric",
"BriefDescription": "Ptag stall L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "ptag_stall"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x201e@ / CPU_CYCLES",
"PublicDescription": "SaveOpQ stall L3 topdown metric",
"BriefDescription": "SaveOpQ stall L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "saveopq_stall"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x2005@ / CPU_CYCLES",
"PublicDescription": "PC buffer stall L3 topdown metric",
"BriefDescription": "PC buffer stall L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "pc_buffer_stall"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x7002@ / CPU_CYCLES",
"PublicDescription": "Divider L3 topdown metric",
"BriefDescription": "Divider L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "divider"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x7003@ / CPU_CYCLES",
"PublicDescription": "FSU stall L3 topdown metric",
"BriefDescription": "FSU stall L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "fsu_stall"
},
{
"MetricExpr": "core_bound - divider - fsu_stall",
"PublicDescription": "EXE ports util L3 topdown metric",
"BriefDescription": "EXE ports util L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "exe_ports_util"
},
{
"MetricExpr": "(MEM_STALL_ANYLOAD - MEM_STALL_L1MISS) / CPU_CYCLES",
"PublicDescription": "L1 bound L3 topdown metric",
"BriefDescription": "L1 bound L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "l1_bound"
},
{
"MetricExpr": "(MEM_STALL_L1MISS - MEM_STALL_L2MISS) / CPU_CYCLES",
"PublicDescription": "L2 bound L3 topdown metric",
"BriefDescription": "L2 bound L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "l2_bound"
},
{
"MetricExpr": "MEM_STALL_L2MISS / CPU_CYCLES",
"PublicDescription": "Mem bound L3 topdown metric",
"BriefDescription": "Mem bound L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "mem_bound"
},
{
"MetricExpr": "armv8_pmuv3_0@event\\=0x7005@ / CPU_CYCLES",
"PublicDescription": "Store bound L3 topdown metric",
"BriefDescription": "Store bound L3 topdown metric",
- "MetricGroup": "TopDownL3",
+ "MetricGroup": "TopdownL3",
"MetricName": "store_bound"
}
]
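
The MetricExpr fields above define the level-1 topdown breakdown directly in terms of raw counters. A minimal sketch of the same arithmetic follows, using hypothetical counter values; perf evaluates these expressions itself, so this is purely illustrative and not part of the patch.

# Illustrative sketch (not part of the patch): the four L1 topdown metrics,
# computed exactly as written in the MetricExpr fields above.
counters = {
    "FETCH_BUBBLE": 1.2e9,   # hypothetical raw counts
    "INST_SPEC":    9.0e9,
    "INST_RETIRED": 8.0e9,
    "CPU_CYCLES":   4.0e9,
}

frontend_bound  = counters["FETCH_BUBBLE"] / (4 * counters["CPU_CYCLES"])
bad_speculation = (counters["INST_SPEC"] - counters["INST_RETIRED"]) / (4 * counters["CPU_CYCLES"])
retiring        = counters["INST_RETIRED"] / (counters["CPU_CYCLES"] * 4)
backend_bound   = 1 - (frontend_bound + bad_speculation + retiring)

for name, value in [("frontend_bound", frontend_bound),
                    ("bad_speculation", bad_speculation),
                    ("retiring", retiring),
                    ("backend_bound", backend_bound)]:
    print(f"{name:16s} {value:6.1%}")
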
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
index 2b3cb55df288..014454d78293 100644
--- a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
+++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-ddrc.json
@@ -3,56 +3,48 @@
"ConfigCode": "0x00",
"EventName": "flux_wr",
"BriefDescription": "DDRC total write operations",
- "PublicDescription": "DDRC total write operations",
"Unit": "hisi_sccl,ddrc"
},
{
"ConfigCode": "0x01",
"EventName": "flux_rd",
"BriefDescription": "DDRC total read operations",
- "PublicDescription": "DDRC total read operations",
"Unit": "hisi_sccl,ddrc"
},
{
"ConfigCode": "0x02",
"EventName": "flux_wcmd",
"BriefDescription": "DDRC write commands",
- "PublicDescription": "DDRC write commands",
"Unit": "hisi_sccl,ddrc"
},
{
"ConfigCode": "0x03",
"EventName": "flux_rcmd",
"BriefDescription": "DDRC read commands",
- "PublicDescription": "DDRC read commands",
"Unit": "hisi_sccl,ddrc"
},
{
"ConfigCode": "0x04",
"EventName": "pre_cmd",
"BriefDescription": "DDRC precharge commands",
- "PublicDescription": "DDRC precharge commands",
"Unit": "hisi_sccl,ddrc"
},
{
"ConfigCode": "0x05",
"EventName": "act_cmd",
"BriefDescription": "DDRC active commands",
- "PublicDescription": "DDRC active commands",
"Unit": "hisi_sccl,ddrc"
},
{
"ConfigCode": "0x06",
"EventName": "rnk_chg",
"BriefDescription": "DDRC rank commands",
- "PublicDescription": "DDRC rank commands",
"Unit": "hisi_sccl,ddrc"
},
{
"ConfigCode": "0x07",
"EventName": "rw_chg",
"BriefDescription": "DDRC read and write changes",
- "PublicDescription": "DDRC read and write changes",
"Unit": "hisi_sccl,ddrc"
}
]
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json
index 9a7ec7af2060..b2b895fa670e 100644
--- a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json
+++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-hha.json
@@ -3,42 +3,41 @@
"ConfigCode": "0x00",
"EventName": "rx_ops_num",
"BriefDescription": "The number of all operations received by the HHA",
- "PublicDescription": "The number of all operations received by the HHA",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x01",
"EventName": "rx_outer",
"BriefDescription": "The number of all operations received by the HHA from another socket",
- "PublicDescription": "The number of all operations received by the HHA from another socket",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x02",
"EventName": "rx_sccl",
"BriefDescription": "The number of all operations received by the HHA from another SCCL in this socket",
- "PublicDescription": "The number of all operations received by the HHA from another SCCL in this socket",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x03",
"EventName": "rx_ccix",
"BriefDescription": "Count of the number of operations that HHA has received from CCIX",
- "PublicDescription": "Count of the number of operations that HHA has received from CCIX",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x4",
"EventName": "rx_wbi",
+ "BriefDescription": "Count of the number of WriteBackI operations that HHA has received",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x5",
"EventName": "rx_wbip",
+ "BriefDescription": "Count of the number of WriteBackIPtl operations that HHA has received",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x11",
+ "BriefDescription": "Count of the number of WriteThruIStash operations that HHA has received",
"EventName": "rx_wtistash",
"Unit": "hisi_sccl,hha"
},
@@ -46,107 +45,114 @@
"ConfigCode": "0x1c",
"EventName": "rd_ddr_64b",
"BriefDescription": "The number of read operations sent by HHA to DDRC which size is 64 bytes",
- "PublicDescription": "The number of read operations sent by HHA to DDRC which size is 64bytes",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x1d",
"EventName": "wr_ddr_64b",
"BriefDescription": "The number of write operations sent by HHA to DDRC which size is 64 bytes",
- "PublicDescription": "The number of write operations sent by HHA to DDRC which size is 64 bytes",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x1e",
"EventName": "rd_ddr_128b",
"BriefDescription": "The number of read operations sent by HHA to DDRC which size is 128 bytes",
- "PublicDescription": "The number of read operations sent by HHA to DDRC which size is 128 bytes",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x1f",
"EventName": "wr_ddr_128b",
"BriefDescription": "The number of write operations sent by HHA to DDRC which size is 128 bytes",
- "PublicDescription": "The number of write operations sent by HHA to DDRC which size is 128 bytes",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x20",
"EventName": "spill_num",
"BriefDescription": "Count of the number of spill operations that the HHA has sent",
- "PublicDescription": "Count of the number of spill operations that the HHA has sent",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x21",
"EventName": "spill_success",
"BriefDescription": "Count of the number of successful spill operations that the HHA has sent",
- "PublicDescription": "Count of the number of successful spill operations that the HHA has sent",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x23",
"EventName": "bi_num",
+ "BriefDescription": "Count of the number of HHA BackInvalid operations",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x32",
"EventName": "mediated_num",
+ "BriefDescription": "Count of the number of Mediated operations that the HHA has forwarded",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x33",
"EventName": "tx_snp_num",
+ "BriefDescription": "Count of the number of Snoop operations that the HHA has sent",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x34",
"EventName": "tx_snp_outer",
+ "BriefDescription": "Count of the number of Snoop operations that the HHA has sent to another socket",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x35",
"EventName": "tx_snp_ccix",
+ "BriefDescription": "Count of the number of Snoop operations that the HHA has sent to CCIX",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x38",
"EventName": "rx_snprspdata",
+ "BriefDescription": "Count of the number of SnprspData flit operations that HHA has received",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x3c",
"EventName": "rx_snprsp_outer",
+ "BriefDescription": "Count of the number of SnprspData operations that HHA has received from another socket",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x40",
"EventName": "sdir-lookup",
+ "BriefDescription": "Count of the number of HHA S-Dir lookup operations",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x41",
"EventName": "edir-lookup",
+ "BriefDescription": "Count of the number of HHA E-Dir lookup operations",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x42",
"EventName": "sdir-hit",
+ "BriefDescription": "Count of the number of HHA S-Dir hit operations",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x43",
"EventName": "edir-hit",
+ "BriefDescription": "Count of the number of HHA E-Dir hit operations",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x4c",
"EventName": "sdir-home-migrate",
+ "BriefDescription": "Count of the number of HHA S-Dir read home migrate operations",
"Unit": "hisi_sccl,hha"
},
{
"ConfigCode": "0x4d",
"EventName": "edir-home-migrate",
+ "BriefDescription": "Count of the number of HHA E-Dir read home migrate operations",
"Unit": "hisi_sccl,hha"
}
]
diff --git a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json
index e3479b65be9a..d83c22eb1d15 100644
--- a/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json
+++ b/tools/perf/pmu-events/arch/arm64/hisilicon/hip08/uncore-l3c.json
@@ -3,91 +3,78 @@
"ConfigCode": "0x00",
"EventName": "rd_cpipe",
"BriefDescription": "Total read accesses",
- "PublicDescription": "Total read accesses",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x01",
"EventName": "wr_cpipe",
"BriefDescription": "Total write accesses",
- "PublicDescription": "Total write accesses",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x02",
"EventName": "rd_hit_cpipe",
"BriefDescription": "Total read hits",
- "PublicDescription": "Total read hits",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x03",
"EventName": "wr_hit_cpipe",
"BriefDescription": "Total write hits",
- "PublicDescription": "Total write hits",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x04",
"EventName": "victim_num",
"BriefDescription": "l3c precharge commands",
- "PublicDescription": "l3c precharge commands",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x20",
"EventName": "rd_spipe",
"BriefDescription": "Count of the number of read lines that come from this cluster of CPU core in spipe",
- "PublicDescription": "Count of the number of read lines that come from this cluster of CPU core in spipe",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x21",
"EventName": "wr_spipe",
"BriefDescription": "Count of the number of write lines that come from this cluster of CPU core in spipe",
- "PublicDescription": "Count of the number of write lines that come from this cluster of CPU core in spipe",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x22",
"EventName": "rd_hit_spipe",
"BriefDescription": "Count of the number of read lines that hits in spipe of this L3C",
- "PublicDescription": "Count of the number of read lines that hits in spipe of this L3C",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x23",
"EventName": "wr_hit_spipe",
"BriefDescription": "Count of the number of write lines that hits in spipe of this L3C",
- "PublicDescription": "Count of the number of write lines that hits in spipe of this L3C",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x29",
"EventName": "back_invalid",
"BriefDescription": "Count of the number of L3C back invalid operations",
- "PublicDescription": "Count of the number of L3C back invalid operations",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x40",
"EventName": "retry_cpu",
"BriefDescription": "Count of the number of retry that L3C suppresses the CPU operations",
- "PublicDescription": "Count of the number of retry that L3C suppresses the CPU operations",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x41",
"EventName": "retry_ring",
"BriefDescription": "Count of the number of retry that L3C suppresses the ring operations",
- "PublicDescription": "Count of the number of retry that L3C suppresses the ring operations",
"Unit": "hisi_sccl,l3c"
},
{
"ConfigCode": "0x42",
"EventName": "prefetch_drop",
"BriefDescription": "Count of the number of prefetch drops from this L3C",
- "PublicDescription": "Count of the number of prefetch drops from this L3C",
"Unit": "hisi_sccl,l3c"
}
]
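
Since rd_cpipe/wr_cpipe count total accesses and rd_hit_cpipe/wr_hit_cpipe count hits, hit ratios fall out directly. A minimal sketch follows; it is not part of the patch and the counter values are hypothetical.

# Illustrative sketch (not part of the patch): L3C hit ratios from the
# access/hit counters defined above. Values are hypothetical.
rd_cpipe, rd_hit_cpipe = 2_000_000, 1_700_000
wr_cpipe, wr_hit_cpipe = 500_000, 460_000

read_hit_ratio  = rd_hit_cpipe / rd_cpipe
write_hit_ratio = wr_hit_cpipe / wr_cpipe
overall         = (rd_hit_cpipe + wr_hit_cpipe) / (rd_cpipe + wr_cpipe)
print(f"L3C read hit  {read_hit_ratio:.1%}")
print(f"L3C write hit {write_hit_ratio:.1%}")
print(f"L3C overall   {overall:.1%}")
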
diff --git a/tools/perf/pmu-events/arch/arm64/mapfile.csv b/tools/perf/pmu-events/arch/arm64/mapfile.csv
index f4d1ca4d1493..bb3fa8a33496 100644
--- a/tools/perf/pmu-events/arch/arm64/mapfile.csv
+++ b/tools/perf/pmu-events/arch/arm64/mapfile.csv
@@ -36,9 +36,12 @@
0x00000000410fd480,v1,arm/cortex-x2,core
0x00000000410fd490,v1,arm/neoverse-n2-v2,core
0x00000000410fd4f0,v1,arm/neoverse-n2-v2,core
+0x00000000410fd830,v1,arm/neoverse-v3,core
+0x00000000410fd8e0,v1,arm/neoverse-n3,core
0x00000000420f5160,v1,cavium/thunderx2,core
0x00000000430f0af0,v1,cavium/thunderx2,core
0x00000000460f0010,v1,fujitsu/a64fx,core
+0x00000000460f0030,v1,fujitsu/monaka,core
0x00000000480fd010,v1,hisilicon/hip08,core
0x00000000500f0000,v1,ampere/emag,core
0x00000000c00fac30,v1,ampere/ampereone,core
diff --git a/tools/perf/pmu-events/arch/arm64/recommended.json b/tools/perf/pmu-events/arch/arm64/recommended.json
index 210afa856091..a3b4941ae90c 100644
--- a/tools/perf/pmu-events/arch/arm64/recommended.json
+++ b/tools/perf/pmu-events/arch/arm64/recommended.json
@@ -318,6 +318,11 @@
"BriefDescription": "Barrier speculatively executed, DMB"
},
{
+ "EventCode": "0x7F",
+ "EventName": "CSDB_SPEC",
+ "BriefDescription": "Barrier speculatively executed, CSDB"
+ },
+ {
"PublicDescription": "Exception taken, Other synchronous",
"EventCode": "0x81",
"EventName": "EXC_UNDEF",
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/ali_drw.json b/tools/perf/pmu-events/arch/arm64/thead/yitian710/sys/ali_drw.json
index e21c469a8ef0..e21c469a8ef0 100644
--- a/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/ali_drw.json
+++ b/tools/perf/pmu-events/arch/arm64/thead/yitian710/sys/ali_drw.json
diff --git a/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/metrics.json b/tools/perf/pmu-events/arch/arm64/thead/yitian710/sys/metrics.json
index bc865b374b6a..bc865b374b6a 100644
--- a/tools/perf/pmu-events/arch/arm64/freescale/yitian710/sys/metrics.json
+++ b/tools/perf/pmu-events/arch/arm64/thead/yitian710/sys/metrics.json
diff --git a/tools/perf/pmu-events/arch/common/common/software.json b/tools/perf/pmu-events/arch/common/common/software.json
new file mode 100644
index 000000000000..f2551f1107fd
--- /dev/null
+++ b/tools/perf/pmu-events/arch/common/common/software.json
@@ -0,0 +1,92 @@
+[
+ {
+ "Unit": "software",
+ "EventName": "cpu-clock",
+ "BriefDescription": "Per-CPU high-resolution timer based event",
+ "ConfigCode": "0"
+ },
+ {
+ "Unit": "software",
+ "EventName": "task-clock",
+ "BriefDescription": "Per-task high-resolution timer based event",
+ "ConfigCode": "1"
+ },
+ {
+ "Unit": "software",
+ "EventName": "faults",
+ "BriefDescription": "Number of page faults [This event is an alias of page-faults]",
+ "ConfigCode": "2"
+ },
+ {
+ "Unit": "software",
+ "EventName": "page-faults",
+ "BriefDescription": "Number of page faults [This event is an alias of faults]",
+ "ConfigCode": "2"
+ },
+ {
+ "Unit": "software",
+ "EventName": "context-switches",
+ "BriefDescription": "Number of context switches [This event is an alias of cs]",
+ "ConfigCode": "3"
+ },
+ {
+ "Unit": "software",
+ "EventName": "cs",
+ "BriefDescription": "Number of context switches [This event is an alias of context-switches]",
+ "ConfigCode": "3"
+ },
+ {
+ "Unit": "software",
+ "EventName": "cpu-migrations",
+ "BriefDescription": "Number of times a process has migrated to a new CPU [This event is an alias of migrations]",
+ "ConfigCode": "4"
+ },
+ {
+ "Unit": "software",
+ "EventName": "migrations",
+ "BriefDescription": "Number of times a process has migrated to a new CPU [This event is an alias of cpu-migrations]",
+ "ConfigCode": "4"
+ },
+ {
+ "Unit": "software",
+ "EventName": "minor-faults",
+ "BriefDescription": "Number of minor page faults. Minor faults don't require I/O to handle",
+ "ConfigCode": "5"
+ },
+ {
+ "Unit": "software",
+ "EventName": "major-faults",
+ "BriefDescription": "Number of major page faults. Major faults require I/O to handle",
+ "ConfigCode": "6"
+ },
+ {
+ "Unit": "software",
+ "EventName": "alignment-faults",
+ "BriefDescription": "Number of kernel handled memory alignment faults",
+ "ConfigCode": "7"
+ },
+ {
+ "Unit": "software",
+ "EventName": "emulation-faults",
+ "BriefDescription": "Number of unimplemented instruction faults handled by the kernel through emulation",
+ "ConfigCode": "8"
+ },
+ {
+ "Unit": "software",
+ "EventName": "dummy",
+ "BriefDescription": "A placeholder event that doesn't count anything",
+ "ConfigCode": "9"
+ },
+ {
+ "Unit": "software",
+ "EventName": "bpf-output",
+ "BriefDescription": "An event used by BPF programs to write to the perf ring buffer",
+ "ConfigCode": "10"
+ },
+ {
+ "Unit": "software",
+ "EventName": "cgroup-switches",
+ "BriefDescription": "Number of context switches to a task in a different cgroup",
+ "ConfigCode": "11"
+ }
+]
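
Several of the software events above are aliases that share a ConfigCode (faults/page-faults, cs/context-switches, migrations/cpu-migrations). A minimal sketch that reads the new file and groups event names by config code follows; it is not part of the patch and assumes it is run from the top of a kernel tree containing this path.

# Illustrative sketch (not part of the patch): group software event names by
# ConfigCode to make the alias pairs defined above visible.
import json
from collections import defaultdict

with open("tools/perf/pmu-events/arch/common/common/software.json") as f:
    events = json.load(f)

by_code = defaultdict(list)
for event in events:
    by_code[int(event["ConfigCode"])].append(event["EventName"])

for code in sorted(by_code):
    print(f"config {code:2d}: {', '.join(by_code[code])}")
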
diff --git a/tools/perf/pmu-events/arch/common/common/tool.json b/tools/perf/pmu-events/arch/common/common/tool.json
new file mode 100644
index 000000000000..12f2ef1813a6
--- /dev/null
+++ b/tools/perf/pmu-events/arch/common/common/tool.json
@@ -0,0 +1,74 @@
+[
+ {
+ "Unit": "tool",
+ "EventName": "duration_time",
+ "BriefDescription": "Wall clock interval time in nanoseconds",
+ "ConfigCode": "1"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "user_time",
+ "BriefDescription": "User (non-kernel) time in nanoseconds",
+ "ConfigCode": "2"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "system_time",
+ "BriefDescription": "System/kernel time in nanoseconds",
+ "ConfigCode": "3"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "has_pmem",
+ "BriefDescription": "1 if persistent memory is installed, otherwise 0",
+ "ConfigCode": "4"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "num_cores",
+ "BriefDescription": "Number of cores. A core consists of 1 or more threads, with each thread being associated with a logical Linux CPU",
+ "ConfigCode": "5"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "num_cpus",
+ "BriefDescription": "Number of logical Linux CPUs. There may be multiple such CPUs on a core",
+ "ConfigCode": "6"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "num_cpus_online",
+ "BriefDescription": "Number of online logical Linux CPUs. There may be multiple such CPUs on a core",
+ "ConfigCode": "7"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "num_dies",
+ "BriefDescription": "Number of dies. Each die has 1 or more cores",
+ "ConfigCode": "8"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "num_packages",
+ "BriefDescription": "Number of packages. Each package has 1 or more dies",
+ "ConfigCode": "9"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "slots",
+ "BriefDescription": "Number of functional units that can execute parts of an instruction in parallel",
+ "ConfigCode": "10"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "smt_on",
+ "BriefDescription": "1 if simultaneous multithreading (aka hyperthreading) is enabled, otherwise 0",
+ "ConfigCode": "11"
+ },
+ {
+ "Unit": "tool",
+ "EventName": "system_tsc_freq",
+ "BriefDescription": "The amount a Time Stamp Counter (TSC) increases per second",
+ "ConfigCode": "12"
+ }
+]
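
duration_time, user_time and system_time are all reported in nanoseconds, so a CPU-utilization figure is a single division away. A minimal sketch follows; it is not part of the patch and the values are hypothetical.

# Illustrative sketch (not part of the patch): CPU utilization from the
# nanosecond-valued tool events defined above. Values are hypothetical.
duration_time = 2_000_000_000   # ns of wall clock
user_time     = 1_500_000_000   # ns spent in user mode
system_time   =   300_000_000   # ns spent in kernel mode

utilization = (user_time + system_time) / duration_time
print(f"CPU utilization: {utilization:.1%}")
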
diff --git a/tools/perf/pmu-events/arch/powerpc/compat/generic-events.json b/tools/perf/pmu-events/arch/powerpc/compat/generic-events.json
new file mode 100644
index 000000000000..6f5e8efcb098
--- /dev/null
+++ b/tools/perf/pmu-events/arch/powerpc/compat/generic-events.json
@@ -0,0 +1,117 @@
+[
+ {
+ "EventCode": "0x600F4",
+ "EventName": "PM_CYC",
+ "BriefDescription": "Processor cycles."
+ },
+ {
+ "EventCode": "0x100F2",
+ "EventName": "PM_CYC_INST_CMPL",
+ "BriefDescription": "1 or more ppc insts finished"
+ },
+ {
+ "EventCode": "0x100f4",
+ "EventName": "PM_FLOP_CMPL",
+ "BriefDescription": "Floating Point Operations Finished."
+ },
+ {
+ "EventCode": "0x100F6",
+ "EventName": "PM_L1_ITLB_MISS",
+ "BriefDescription": "Number of I-ERAT reloads."
+ },
+ {
+ "EventCode": "0x100F8",
+ "EventName": "PM_NO_INST_AVAIL",
+ "BriefDescription": "Number of cycles the ICT has no itags assigned to this thread."
+ },
+ {
+ "EventCode": "0x100fc",
+ "EventName": "PM_LD_CMPL",
+ "BriefDescription": "Load instruction completed."
+ },
+ {
+ "EventCode": "0x200F0",
+ "EventName": "PM_ST_CMPL",
+ "BriefDescription": "Stores completed from S2Q (2nd-level store queue)."
+ },
+ {
+ "EventCode": "0x200F2",
+ "EventName": "PM_INST_DISP",
+ "BriefDescription": "PowerPC instruction dispatched."
+ },
+ {
+ "EventCode": "0x200F4",
+ "EventName": "PM_RUN_CYC",
+ "BriefDescription": "Processor cycles gated by the run latch."
+ },
+ {
+ "EventCode": "0x200F6",
+ "EventName": "PM_L1_DTLB_RELOAD",
+ "BriefDescription": "DERAT Reloaded due to a DERAT miss."
+ },
+ {
+ "EventCode": "0x200FA",
+ "EventName": "PM_BR_TAKEN_CMPL",
+ "BriefDescription": "Branch Taken instruction completed."
+ },
+ {
+ "EventCode": "0x200FC",
+ "EventName": "PM_L1_ICACHE_MISS",
+ "BriefDescription": "Demand instruction cache miss."
+ },
+ {
+ "EventCode": "0x200FE",
+ "EventName": "PM_L1_RELOAD_FROM_MEM",
+ "BriefDescription": "L1 Dcache reload from memory"
+ },
+ {
+ "EventCode": "0x300F0",
+ "EventName": "PM_ST_MISS_L1",
+ "BriefDescription": "Store Missed L1"
+ },
+ {
+ "EventCode": "0x300FC",
+ "EventName": "PM_DTLB_MISS",
+ "BriefDescription": "Data PTEG reload"
+ },
+ {
+ "EventCode": "0x300FE",
+ "EventName": "PM_DATA_FROM_L3MISS",
+ "BriefDescription": "Demand LD - L3 Miss (not L2 hit and not L3 hit)"
+ },
+ {
+ "EventCode": "0x400F0",
+ "EventName": "PM_LD_MISS_L1",
+ "BriefDescription": "L1 Dcache load miss"
+ },
+ {
+ "EventCode": "0x400F2",
+ "EventName": "PM_CYC_INST_DISP",
+ "BriefDescription": "Cycle when instruction(s) dispatched."
+ },
+ {
+ "EventCode": "0x400F6",
+ "EventName": "PM_BR_MPRED_CMPL",
+ "BriefDescription": "A mispredicted branch completed. Includes direction and target."
+ },
+ {
+ "EventCode": "0x400FA",
+ "EventName": "PM_RUN_INST_CMPL",
+ "BriefDescription": "PowerPC instruction completed while the run latch is set."
+ },
+ {
+ "EventCode": "0x400FC",
+ "EventName": "PM_ITLB_MISS",
+ "BriefDescription": "Instruction TLB reload (after a miss), all page sizes. Includes only demand misses."
+ },
+ {
+ "EventCode": "0x400fe",
+ "EventName": "PM_LD_NOT_CACHED",
+ "BriefDescription": "Load data not cached."
+ },
+ {
+ "EventCode": "0x500fa",
+ "EventName": "PM_INST_CMPL",
+ "BriefDescription": "Instructions."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/powerpc/mapfile.csv b/tools/perf/pmu-events/arch/powerpc/mapfile.csv
index 599a588dbeb4..cbd3cb443784 100644
--- a/tools/perf/pmu-events/arch/powerpc/mapfile.csv
+++ b/tools/perf/pmu-events/arch/powerpc/mapfile.csv
@@ -15,3 +15,5 @@
0x0066[[:xdigit:]]{4},1,power8,core
0x004e[[:xdigit:]]{4},1,power9,core
0x0080[[:xdigit:]]{4},1,power10,core
+0x0082[[:xdigit:]]{4},1,power10,core
+0x00ffffff,1,compat,core
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/cache.json b/tools/perf/pmu-events/arch/powerpc/power10/cache.json
index 839ae26945fb..b7e0be09ff57 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/cache.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/cache.json
@@ -1,12 +1,22 @@
[
{
+ "EventCode": "0x1002C",
+ "EventName": "PM_LD_PREFETCH_CACHE_LINE_MISS",
+ "BriefDescription": "The L1 cache was reloaded with a line that fulfills a prefetch request."
+ },
+ {
+ "EventCode": "0x200FD",
+ "EventName": "PM_L1_ICACHE_MISS",
+ "BriefDescription": "Demand instruction cache miss."
+ },
+ {
+ "EventCode": "0x30068",
+ "EventName": "PM_L1_ICACHE_RELOADED_PREF",
+ "BriefDescription": "Counts all instruction cache prefetch reloads (includes demand turned into prefetch)."
+ },
+ {
"EventCode": "0x300F4",
"EventName": "PM_RUN_INST_CMPL_CONC",
"BriefDescription": "PowerPC instruction completed by this thread when all threads in the core had the run-latch set."
- },
- {
- "EventCode": "0x400F6",
- "EventName": "PM_BR_MPRED_CMPL",
- "BriefDescription": "A mispredicted branch completed. Includes direction and target."
}
]
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/datasource.json b/tools/perf/pmu-events/arch/powerpc/power10/datasource.json
index 0eeaaf1a95b8..a5d5be35b5e6 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/datasource.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/datasource.json
@@ -1,5 +1,15 @@
[
{
+ "EventCode": "0x1505E",
+ "EventName": "PM_LD_HIT_L1",
+ "BriefDescription": "Load finished without experiencing an L1 miss."
+ },
+ {
+ "EventCode": "0x100FC",
+ "EventName": "PM_LD_REF_L1",
+ "BriefDescription": "All L1 D cache load references counted at finish, gated by reject. In P9 and earlier this event counted only cacheable loads but in P10 both cacheable and non-cacheable loads are included."
+ },
+ {
"EventCode": "0x200FE",
"EventName": "PM_DATA_FROM_L2MISS",
"BriefDescription": "The processor's L1 data cache was reloaded from a source beyond the local core's L2 due to a demand miss."
@@ -10,11 +20,41 @@
"BriefDescription": "The processor's L1 data cache was reloaded from beyond the local core's L3 due to a demand miss."
},
{
+ "EventCode": "0x400F0",
+ "EventName": "PM_LD_DEMAND_MISS_L1_FIN",
+ "BriefDescription": "Load missed L1, counted at finish time."
+ },
+ {
"EventCode": "0x400FE",
"EventName": "PM_DATA_FROM_MEMORY",
"BriefDescription": "The processor's data cache was reloaded from local, remote, or distant memory due to a demand miss."
},
{
+ "EventCode": "0x0000004080",
+ "EventName": "PM_INST_FROM_L1",
+ "BriefDescription": "An instruction fetch hit in the L1. Each fetch group contains 8 instructions. The same line can hit 4 times if 32 sequential instructions are fetched."
+ },
+ {
+ "EventCode": "0x000000026080",
+ "EventName": "PM_L2_LD_MISS",
+ "BriefDescription": "All successful D-Side Load dispatches for this thread that missed in the L2. Since the event happens in a 2:1 clock domain and is time-sliced across all 4 threads, the event count should be multiplied by 2."
+ },
+ {
+ "EventCode": "0x000000026880",
+ "EventName": "PM_L2_ST_MISS",
+ "BriefDescription": "All successful D-Side Store dispatches for this thread that missed in the L2. Since the event happens in a 2:1 clock domain and is time-sliced across all 4 threads, the event count should be multiplied by 2."
+ },
+ {
+ "EventCode": "0x010000046880",
+ "EventName": "PM_L2_ST_HIT",
+ "BriefDescription": "All successful D-side store dispatches for this thread that were L2 hits. Since the event happens in a 2:1 clock domain and is time-sliced across all 4 threads, the event count should be multiplied by 2."
+ },
+ {
+ "EventCode": "0x000000036880",
+ "EventName": "PM_L2_INST_MISS",
+ "BriefDescription": "All successful instruction (demand and prefetch) dispatches for this thread that missed in the L2. Since the event happens in a 2:1 clock domain and is time-sliced across all 4 threads, the event count should be multiplied by 2."
+ },
+ {
"EventCode": "0x000300000000C040",
"EventName": "PM_INST_FROM_L2",
"BriefDescription": "The processor's instruction cache was reloaded from the local core's L2 due to a demand miss."
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/frontend.json b/tools/perf/pmu-events/arch/powerpc/power10/frontend.json
index 5977f5e64212..b6998987ab75 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/frontend.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/frontend.json
@@ -75,18 +75,48 @@
"BriefDescription": "Cycles in which an instruction or group of instructions were cancelled after being issued. This event increments once per occurrence, regardless of how many instructions are included in the issue group."
},
{
+ "EventCode": "0x44054",
+ "EventName": "PM_VECTOR_LD_CMPL",
+ "BriefDescription": "Vector load instruction completed."
+ },
+ {
"EventCode": "0x44056",
"EventName": "PM_VECTOR_ST_CMPL",
"BriefDescription": "Vector store instruction completed."
},
{
+ "EventCode": "0x4D05E",
+ "EventName": "PM_BR_CMPL",
+ "BriefDescription": "A branch completed. All branches are included."
+ },
+ {
"EventCode": "0x4E054",
"EventName": "PM_DTLB_HIT_1G",
"BriefDescription": "Data TLB hit (DERAT reload) page size 1G. Implies radix translation. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."
},
{
+ "EventCode": "0x400F6",
+ "EventName": "PM_BR_MPRED_CMPL",
+ "BriefDescription": "A mispredicted branch completed. Includes direction and target."
+ },
+ {
"EventCode": "0x400FC",
"EventName": "PM_ITLB_MISS",
"BriefDescription": "Instruction TLB reload (after a miss), all page sizes. Includes only demand misses."
+ },
+ {
+ "EventCode": "0x00000048B4",
+ "EventName": "PM_BR_TKN_UNCOND_FIN",
+ "BriefDescription": "An unconditional branch finished. All unconditional branches are taken."
+ },
+ {
+ "EventCode": "0x00000040B8",
+ "EventName": "PM_PRED_BR_TKN_COND_DIR",
+ "BriefDescription": "A conditional branch finished with correctly predicted direction. Resolved taken."
+ },
+ {
+ "EventCode": "0x00000048B8",
+ "EventName": "PM_PRED_BR_NTKN_COND_DIR",
+ "BriefDescription": "A conditional branch finished with correctly predicted direction. Resolved not taken."
}
]
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/locks.json b/tools/perf/pmu-events/arch/powerpc/power10/locks.json
index b5a0d6521963..a8ea4d0def1a 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/locks.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/locks.json
@@ -5,8 +5,18 @@
"BriefDescription": "Conditional store instruction (STCX) failed. LARX and STCX are instructions used to acquire a lock."
},
{
+ "EventCode": "0x2E014",
+ "EventName": "PM_STCX_FIN",
+ "BriefDescription": "Conditional store instruction (STCX) finished. LARX and STCX are instructions used to acquire a lock."
+ },
+ {
"EventCode": "0x4E050",
"EventName": "PM_STCX_PASS_FIN",
"BriefDescription": "Conditional store instruction (STCX) passed. LARX and STCX are instructions used to acquire a lock."
+ },
+ {
+ "EventCode": "0x000000C8B8",
+ "EventName": "PM_STCX_SUCCESS_CMPL",
+ "BriefDescription": "STCX instructions that completed successfully. Specifically, counts only when a pass status is returned from the nest."
}
]
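
With both PM_STCX_FIN and PM_STCX_PASS_FIN now in this file, a rough lock-contention indicator can be derived as the fraction of finished stcx. instructions that did not pass. A small sketch with hypothetical counter values; the ratio itself is a convenience, not an event defined by the PMU:

# Hypothetical perf readings for the two STCX events listed above.
stcx_fin = 10_000       # PM_STCX_FIN: stcx. instructions finished
stcx_pass_fin = 9_300   # PM_STCX_PASS_FIN: stcx. instructions that passed

fail_ratio = 1.0 - stcx_pass_fin / stcx_fin
print(f"stcx failure ratio: {fail_ratio:.1%}")
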
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/memory.json b/tools/perf/pmu-events/arch/powerpc/power10/memory.json
index 885262957beb..0d7191b3f2c6 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/memory.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/memory.json
@@ -70,6 +70,11 @@
"BriefDescription": "The processor's L1 data cache was reloaded from the source specified in MMCR3[30:42]. If MMCR1[16|17] is 0 (default), this count includes only lines that were reloaded to satisfy a demand miss. If MMCR1[16|17] is 1, this count includes both demand misses and prefetch reloads."
},
{
+ "EventCode": "0x3F04A",
+ "EventName": "PM_LSU_ST5_FIN",
+ "BriefDescription": "LSU finished an internal operation in the ST2 port."
+ },
+ {
"EventCode": "0x3C054",
"EventName": "PM_DERAT_MISS_16M",
"BriefDescription": "Data ERAT Miss (Data TLB Access) page size 16M. When MMCR1[16]=0 this event counts only DERAT reloads for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."
@@ -108,5 +113,30 @@
"EventCode": "0x4C05A",
"EventName": "PM_DTLB_MISS_1G",
"BriefDescription": "Data TLB reload (after a miss) page size 1G. Implies radix translation was used. When MMCR1[16]=0 this event counts only for demand misses. When MMCR1[16]=1 this event includes demand misses and prefetches."
+ },
+ {
+ "EventCode": "0x000000F880",
+ "EventName": "PM_SNOOP_TLBIE_CYC",
+ "BriefDescription": "Cycles in which TLBIE snoops are executed in the LSU."
+ },
+ {
+ "EventCode": "0x000000F084",
+ "EventName": "PM_SNOOP_TLBIE_CACHE_WALK_CYC",
+ "BriefDescription": "TLBIE snoop cycles in which the data cache is being walked."
+ },
+ {
+ "EventCode": "0x000000F884",
+ "EventName": "PM_SNOOP_TLBIE_WAIT_ST_CYC",
+ "BriefDescription": "TLBIE snoop cycles in which older stores are still draining."
+ },
+ {
+ "EventCode": "0x000000F088",
+ "EventName": "PM_SNOOP_TLBIE_WAIT_LD_CYC",
+ "BriefDescription": "TLBIE snoop cycles in which older loads are still draining."
+ },
+ {
+ "EventCode": "0x000000F08C",
+ "EventName": "PM_SNOOP_TLBIE_WAIT_MMU_CYC",
+ "BriefDescription": "TLBIE snoop cycles in which the Load-Store unit is waiting for the MMU to finish invalidation."
}
]
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/others.json b/tools/perf/pmu-events/arch/powerpc/power10/others.json
index fcf8a8ebe7bd..1bf802076ee0 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/others.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/others.json
@@ -1,112 +1,72 @@
[
{
- "EventCode": "0x1002C",
- "EventName": "PM_LD_PREFETCH_CACHE_LINE_MISS",
- "BriefDescription": "The L1 cache was reloaded with a line that fulfills a prefetch request."
- },
- {
- "EventCode": "0x1505E",
- "EventName": "PM_LD_HIT_L1",
- "BriefDescription": "Load finished without experiencing an L1 miss."
- },
- {
- "EventCode": "0x1F056",
- "EventName": "PM_DISP_SS0_2_INSTR_CYC",
- "BriefDescription": "Cycles in which Superslice 0 dispatches either 1 or 2 instructions."
- },
- {
- "EventCode": "0x1F05A",
- "EventName": "PM_DISP_HELD_SYNC_CYC",
- "BriefDescription": "Cycles dispatch is held because of a synchronizing instruction that requires the ICT to be empty before dispatch."
- },
- {
"EventCode": "0x10066",
"EventName": "PM_ADJUNCT_CYC",
"BriefDescription": "Cycles in which the thread is in Adjunct state. MSR[S HV PR] bits = 011."
},
{
- "EventCode": "0x100FC",
- "EventName": "PM_LD_REF_L1",
- "BriefDescription": "All L1 D cache load references counted at finish, gated by reject. In P9 and earlier this event counted only cacheable loads but in P10 both cacheable and non-cacheable loads are included."
- },
- {
"EventCode": "0x2E010",
"EventName": "PM_ADJUNCT_INST_CMPL",
"BriefDescription": "PowerPC instruction completed while the thread was in Adjunct state."
},
{
- "EventCode": "0x2E014",
- "EventName": "PM_STCX_FIN",
- "BriefDescription": "Conditional store instruction (STCX) finished. LARX and STCX are instructions used to acquire a lock."
- },
- {
- "EventCode": "0x2F054",
- "EventName": "PM_DISP_SS1_2_INSTR_CYC",
- "BriefDescription": "Cycles in which Superslice 1 dispatches either 1 or 2 instructions."
- },
- {
- "EventCode": "0x2F056",
- "EventName": "PM_DISP_SS1_4_INSTR_CYC",
- "BriefDescription": "Cycles in which Superslice 1 dispatches either 3 or 4 instructions."
- },
- {
"EventCode": "0x200F2",
"EventName": "PM_INST_DISP",
"BriefDescription": "PowerPC instruction dispatched."
},
{
- "EventCode": "0x200FD",
- "EventName": "PM_L1_ICACHE_MISS",
- "BriefDescription": "Demand instruction cache miss."
+ "EventCode": "0x300F6",
+ "EventName": "PM_LD_DEMAND_MISS_L1",
+ "BriefDescription": "The L1 cache was reloaded with a line that fulfills a demand miss request. Counted at reload time, before finish."
},
{
- "EventCode": "0x3F04A",
- "EventName": "PM_LSU_ST5_FIN",
- "BriefDescription": "LSU Finished an internal operation in ST2 port."
+ "EventCode": "0x40012",
+ "EventName": "PM_L1_ICACHE_RELOADED_ALL",
+ "BriefDescription": "Counts all instruction cache reloads, including demand, prefetch, prefetch turned into demand and demand turned into prefetch."
},
{
- "EventCode": "0x3405A",
- "EventName": "PM_PRIVILEGED_INST_CMPL",
- "BriefDescription": "PowerPC instruction completed while the thread was in Privileged state."
+ "EventCode": "0x00000038BC",
+ "EventName": "PM_ISYNC_CMPL",
+ "BriefDescription": "Isync completion count per thread."
},
{
- "EventCode": "0x3F054",
- "EventName": "PM_DISP_SS0_4_INSTR_CYC",
- "BriefDescription": "Cycles in which Superslice 0 dispatches either 3 or 4 instructions."
+ "EventCode": "0x000000C088",
+ "EventName": "PM_LD0_32B_FIN",
+ "BriefDescription": "256-bit load finished in the LD0 load execution unit."
},
{
- "EventCode": "0x3F056",
- "EventName": "PM_DISP_SS0_8_INSTR_CYC",
- "BriefDescription": "Cycles in which Superslice 0 dispatches either 5, 6, 7 or 8 instructions."
+ "EventCode": "0x000000C888",
+ "EventName": "PM_LD1_32B_FIN",
+ "BriefDescription": "256-bit load finished in the LD1 load execution unit."
},
{
- "EventCode": "0x30068",
- "EventName": "PM_L1_ICACHE_RELOADED_PREF",
- "BriefDescription": "Counts all instruction cache prefetch reloads (includes demand turned into prefetch)."
+ "EventCode": "0x000000C090",
+ "EventName": "PM_LD0_UNALIGNED_FIN",
+ "BriefDescription": "Load instructions in LD0 port that are either unaligned, or treated as unaligned and require an additional recycle through the pipeline using the load gather buffer. This typically adds about 10 cycles to the latency of the instruction. This includes loads that cross the 128 byte boundary, octword loads that are not aligned, and a special forward progress case of a load that does not hit in the L1 and crosses the 32 byte boundary and is launched NTC. Counted at finish time."
},
{
- "EventCode": "0x300F6",
- "EventName": "PM_LD_DEMAND_MISS_L1",
- "BriefDescription": "The L1 cache was reloaded with a line that fulfills a demand miss request. Counted at reload time, before finish."
+ "EventCode": "0x000000C890",
+ "EventName": "PM_LD1_UNALIGNED_FIN",
+ "BriefDescription": "Load instructions in LD1 port that are either unaligned, or treated as unaligned and require an additional recycle through the pipeline using the load gather buffer. This typically adds about 10 cycles to the latency of the instruction. This includes loads that cross the 128 byte boundary, octword loads that are not aligned, and a special forward progress case of a load that does not hit in the L1 and crosses the 32 byte boundary and is launched NTC. Counted at finish time."
},
{
- "EventCode": "0x40012",
- "EventName": "PM_L1_ICACHE_RELOADED_ALL",
- "BriefDescription": "Counts all instruction cache reloads includes demand, prefetch, prefetch turned into demand and demand turned into prefetch."
+ "EventCode": "0x000000C0A4",
+ "EventName": "PM_ST0_UNALIGNED_FIN",
+ "BriefDescription": "Store instructions in ST0 port that are either unaligned, or treated as unaligned and require an additional recycle through the pipeline. This typically adds about 10 cycles to the latency of the instruction. This only includes stores that cross the 128 byte boundary. Counted at finish time."
},
{
- "EventCode": "0x44054",
- "EventName": "PM_VECTOR_LD_CMPL",
- "BriefDescription": "Vector load instruction completed."
+ "EventCode": "0x000000C8A4",
+ "EventName": "PM_ST1_UNALIGNED_FIN",
+ "BriefDescription": "Store instructions in ST1 port that are either unaligned, or treated as unaligned and require an additional recycle through the pipeline. This typically adds about 10 cycles to the latency of the instruction. This only includes stores that cross the 128 byte boundary. Counted at finish time."
},
{
- "EventCode": "0x4D05E",
- "EventName": "PM_BR_CMPL",
- "BriefDescription": "A branch completed. All branches are included."
+ "EventCode": "0x000000D0B4",
+ "EventName": "PM_DC_PREF_STRIDED_CONF",
+ "BriefDescription": "A demand load referenced a line in an active strided prefetch stream. The stream could have been allocated through the hardware prefetch mechanism or through software."
},
{
- "EventCode": "0x400F0",
- "EventName": "PM_LD_DEMAND_MISS_L1_FIN",
- "BriefDescription": "Load missed L1, counted at finish time."
+ "EventCode": "0x0000004884",
+ "EventName": "PM_NO_FETCH_IBUF_FULL_CYC",
+ "BriefDescription": "Cycles in which no instructions are fetched because there is no room in the instruction buffers."
}
]
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json b/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json
index 21b23bb55d0d..940375d251cb 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/pipeline.json
@@ -95,11 +95,21 @@
"BriefDescription": "Cycles in which the oldest instruction in the pipeline was a lwsync waiting to complete."
},
{
+ "EventCode": "0x1F056",
+ "EventName": "PM_DISP_SS0_2_INSTR_CYC",
+ "BriefDescription": "Cycles in which Superslice 0 dispatches either 1 or 2 instructions."
+ },
+ {
"EventCode": "0x1F058",
"EventName": "PM_DISP_HELD_CYC",
"BriefDescription": "Cycles dispatch is held."
},
{
+ "EventCode": "0x1F05A",
+ "EventName": "PM_DISP_HELD_SYNC_CYC",
+ "BriefDescription": "Cycles dispatch is held because of a synchronizing instruction that requires the ICT to be empty before dispatch."
+ },
+ {
"EventCode": "0x10064",
"EventName": "PM_DISP_STALL_IC_L2",
"BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L2."
@@ -230,6 +240,16 @@
"BriefDescription": "Cycles in which the oldest instruction in the pipeline (NTC) finishes. Note that instructions can finish out of order, therefore not all the instructions that finish have a Next-to-complete status."
},
{
+ "EventCode": "0x2F054",
+ "EventName": "PM_DISP_SS1_2_INSTR_CYC",
+ "BriefDescription": "Cycles in which Superslice 1 dispatches either 1 or 2 instructions."
+ },
+ {
+ "EventCode": "0x2F056",
+ "EventName": "PM_DISP_SS1_4_INSTR_CYC",
+ "BriefDescription": "Cycles in which Superslice 1 dispatches either 3 or 4 instructions."
+ },
+ {
"EventCode": "0x20066",
"EventName": "PM_DISP_HELD_OTHER_CYC",
"BriefDescription": "Cycles dispatch is held for any other reason."
@@ -330,6 +350,16 @@
"BriefDescription": "Cycles when dispatch was stalled while the instruction was fetched from the local L3."
},
{
+ "EventCode": "0x3F054",
+ "EventName": "PM_DISP_SS0_4_INSTR_CYC",
+ "BriefDescription": "Cycles in which Superslice 0 dispatches either 3 or 4 instructions."
+ },
+ {
+ "EventCode": "0x3F056",
+ "EventName": "PM_DISP_SS0_8_INSTR_CYC",
+ "BriefDescription": "Cycles in which Superslice 0 dispatches either 5, 6, 7 or 8 instructions."
+ },
+ {
"EventCode": "0x30060",
"EventName": "PM_DISP_HELD_XVFC_MAPPER_CYC",
"BriefDescription": "Cycles dispatch is held because the XVFC mapper/SRB was full."
@@ -458,5 +488,20 @@
"EventCode": "0x400F8",
"EventName": "PM_FLUSH",
"BriefDescription": "Flush (any type)."
+ },
+ {
+ "EventCode": "0x0B0000016080",
+ "EventName": "PM_L2_TLBIE_SLBIE_START",
+ "BriefDescription": "NCU Master received a TLBIE/SLBIEG/SLBIAG operation from the core. Event count should be multiplied by 2 since the data is coming from a 2:1 clock domain and the data is time sliced across all 4 threads."
+ },
+ {
+ "EventCode": "0x0B0000016880",
+ "EventName": "PM_L2_TLBIE_SLBIE_DELAY",
+ "BriefDescription": "Cycles when a TLBIE/SLBIEG/SLBIAG command was held in a hottemp condition by the NCU Master. Multiply this count by 1000 to obtain the total number of cycles. This can be divided by PM_L2_TLBIE_SLBIE_SENT to obtain the average time a TLBIE/SLBIEG/SLBIAG command was held. Event count should be multiplied by 2 since the data is coming from a 2:1 clock domain and the data is time sliced across all 4 threads."
+ },
+ {
+ "EventCode": "0x0B0000026880",
+ "EventName": "PM_L2_SNP_TLBIE_SLBIE_DELAY",
+ "BriefDescription": "Cycles when a TLBIE/SLBIEG/SLBIAG that targets this thread's LPAR was in flight while in a hottemp condition. Multiply this count by 1000 to obtain the total number of cycles. This can be divided by PM_L2_SNP_TLBIE_SLBIE_START to obtain the overall efficiency. Note: 'in flight' means the SnpTLB has been sent to the core (i.e. it does not include time when the SnpTLB is in the NCU waiting to be launched serially behind a different SnpTLB). The NCU Snooper enters a 'hottemp' delay window when it detects it is above its TLBIE/SLBIE threshold for processing SnpTLBIE/SLBIE with this core. Event count should be multiplied by 2 since the data is coming from a 2:1 clock domain and the data is time-sliced across all 4 threads."
}
]
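
The PM_L2_TLBIE_SLBIE_DELAY description packs several scaling rules into one sentence: the counter ticks once per 1000 cycles, the 2:1 clock domain asks for a further x2, and dividing by PM_L2_TLBIE_SLBIE_SENT (an event referenced there but not part of this hunk) yields the average hold time. A small worked example with made-up counts; composing the x1000 and x2 factors this way is an assumption from a literal reading of the text:

# Hypothetical counter readings.
delay_count = 37    # PM_L2_TLBIE_SLBIE_DELAY (each tick represents 1000 cycles)
sent_count = 115    # PM_L2_TLBIE_SLBIE_SENT (referenced in the description above)

total_delay_cycles = delay_count * 1000 * 2   # x1000 per tick, x2 for the 2:1 clock domain
avg_hold_cycles = total_delay_cycles / sent_count
print(f"average TLBIE/SLBIE hold time: {avg_hold_cycles:.0f} cycles")
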
diff --git a/tools/perf/pmu-events/arch/powerpc/power10/pmc.json b/tools/perf/pmu-events/arch/powerpc/power10/pmc.json
index 0e0253d0e757..6f5b0e8fde12 100644
--- a/tools/perf/pmu-events/arch/powerpc/power10/pmc.json
+++ b/tools/perf/pmu-events/arch/powerpc/power10/pmc.json
@@ -105,6 +105,11 @@
"BriefDescription": "Processor cycles gated by the run latch."
},
{
+ "EventCode": "0x200F8",
+ "EventName": "PM_EXT_INT",
+ "BriefDescription": "Cycles an external interrupt was active."
+ },
+ {
"EventCode": "0x30010",
"EventName": "PM_PMC2_OVERFLOW",
"BriefDescription": "The event selected for PMC2 caused the event counter to overflow."
@@ -125,6 +130,11 @@
"BriefDescription": "The event selected for PMC6 caused the event counter to overflow."
},
{
+ "EventCode": "0x3405A",
+ "EventName": "PM_PRIVILEGED_INST_CMPL",
+ "BriefDescription": "PowerPC instruction completed while the thread was in Privileged state."
+ },
+ {
"EventCode": "0x3006C",
"EventName": "PM_RUN_CYC_SMT2_MODE",
"BriefDescription": "Cycles when this thread's run latch is set and the core is in SMT2 mode."
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/u74/firmware.json b/tools/perf/pmu-events/arch/riscv/andes/ax45/firmware.json
index 9b4a032186a7..7149caec4f80 100644
--- a/tools/perf/pmu-events/arch/riscv/sifive/u74/firmware.json
+++ b/tools/perf/pmu-events/arch/riscv/andes/ax45/firmware.json
@@ -36,7 +36,7 @@
"ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
},
{
- "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+ "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
},
{
"ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
diff --git a/tools/perf/pmu-events/arch/riscv/andes/ax45/instructions.json b/tools/perf/pmu-events/arch/riscv/andes/ax45/instructions.json
new file mode 100644
index 000000000000..713a08c1a40f
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/andes/ax45/instructions.json
@@ -0,0 +1,127 @@
+[
+ {
+ "EventCode": "0x10",
+ "EventName": "cycle_count",
+ "BriefDescription": "Cycle count"
+ },
+ {
+ "EventCode": "0x20",
+ "EventName": "inst_count",
+ "BriefDescription": "Retired instruction count"
+ },
+ {
+ "EventCode": "0x30",
+ "EventName": "int_load_inst",
+ "BriefDescription": "Integer load instruction count"
+ },
+ {
+ "EventCode": "0x40",
+ "EventName": "int_store_inst",
+ "BriefDescription": "Integer store instruction count"
+ },
+ {
+ "EventCode": "0x50",
+ "EventName": "atomic_inst",
+ "BriefDescription": "Atomic instruction count"
+ },
+ {
+ "EventCode": "0x60",
+ "EventName": "sys_inst",
+ "BriefDescription": "System instruction count"
+ },
+ {
+ "EventCode": "0x70",
+ "EventName": "int_compute_inst",
+ "BriefDescription": "Integer computational instruction count"
+ },
+ {
+ "EventCode": "0x80",
+ "EventName": "condition_br",
+ "BriefDescription": "Conditional branch instruction count"
+ },
+ {
+ "EventCode": "0x90",
+ "EventName": "taken_condition_br",
+ "BriefDescription": "Taken conditional branch instruction count"
+ },
+ {
+ "EventCode": "0xA0",
+ "EventName": "jal_inst",
+ "BriefDescription": "JAL instruction count"
+ },
+ {
+ "EventCode": "0xB0",
+ "EventName": "jalr_inst",
+ "BriefDescription": "JALR instruction count"
+ },
+ {
+ "EventCode": "0xC0",
+ "EventName": "ret_inst",
+ "BriefDescription": "Return instruction count"
+ },
+ {
+ "EventCode": "0xD0",
+ "EventName": "control_trans_inst",
+ "BriefDescription": "Control transfer instruction count"
+ },
+ {
+ "EventCode": "0xE0",
+ "EventName": "ex9_inst",
+ "BriefDescription": "EXEC.IT instruction count"
+ },
+ {
+ "EventCode": "0xF0",
+ "EventName": "int_mul_inst",
+ "BriefDescription": "Integer multiplication instruction count"
+ },
+ {
+ "EventCode": "0x100",
+ "EventName": "int_div_rem_inst",
+ "BriefDescription": "Integer division/remainder instruction count"
+ },
+ {
+ "EventCode": "0x110",
+ "EventName": "float_load_inst",
+ "BriefDescription": "Floating-point load instruction count"
+ },
+ {
+ "EventCode": "0x120",
+ "EventName": "float_store_inst",
+ "BriefDescription": "Floating-point store instruction count"
+ },
+ {
+ "EventCode": "0x130",
+ "EventName": "float_add_sub_inst",
+ "BriefDescription": "Floating-point addition/subtraction instruction count"
+ },
+ {
+ "EventCode": "0x140",
+ "EventName": "float_mul_inst",
+ "BriefDescription": "Floating-point multiplication instruction count"
+ },
+ {
+ "EventCode": "0x150",
+ "EventName": "float_fused_muladd_inst",
+ "BriefDescription": "Floating-point fused multiply-add instruction count"
+ },
+ {
+ "EventCode": "0x160",
+ "EventName": "float_div_sqrt_inst",
+ "BriefDescription": "Floating-point division or square-root instruction count"
+ },
+ {
+ "EventCode": "0x170",
+ "EventName": "other_float_inst",
+ "BriefDescription": "Other floating-point instruction count"
+ },
+ {
+ "EventCode": "0x180",
+ "EventName": "int_mul_add_sub_inst",
+ "BriefDescription": "Integer multiplication and add/sub instruction count"
+ },
+ {
+ "EventCode": "0x190",
+ "EventName": "retired_ops",
+ "BriefDescription": "Retired operation count"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/andes/ax45/memory.json b/tools/perf/pmu-events/arch/riscv/andes/ax45/memory.json
new file mode 100644
index 000000000000..c7401b526c77
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/andes/ax45/memory.json
@@ -0,0 +1,57 @@
+[
+ {
+ "EventCode": "0x01",
+ "EventName": "ilm_access",
+ "BriefDescription": "ILM access"
+ },
+ {
+ "EventCode": "0x11",
+ "EventName": "dlm_access",
+ "BriefDescription": "DLM access"
+ },
+ {
+ "EventCode": "0x21",
+ "EventName": "icache_access",
+ "BriefDescription": "ICACHE access"
+ },
+ {
+ "EventCode": "0x31",
+ "EventName": "icache_miss",
+ "BriefDescription": "ICACHE miss"
+ },
+ {
+ "EventCode": "0x41",
+ "EventName": "dcache_access",
+ "BriefDescription": "DCACHE access"
+ },
+ {
+ "EventCode": "0x51",
+ "EventName": "dcache_miss",
+ "BriefDescription": "DCACHE miss"
+ },
+ {
+ "EventCode": "0x61",
+ "EventName": "dcache_load_access",
+ "BriefDescription": "DCACHE load access"
+ },
+ {
+ "EventCode": "0x71",
+ "EventName": "dcache_load_miss",
+ "BriefDescription": "DCACHE load miss"
+ },
+ {
+ "EventCode": "0x81",
+ "EventName": "dcache_store_access",
+ "BriefDescription": "DCACHE store access"
+ },
+ {
+ "EventCode": "0x91",
+ "EventName": "dcache_store_miss",
+ "BriefDescription": "DCACHE store miss"
+ },
+ {
+ "EventCode": "0xA1",
+ "EventName": "dcache_wb",
+ "BriefDescription": "DCACHE writeback"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/andes/ax45/microarch.json b/tools/perf/pmu-events/arch/riscv/andes/ax45/microarch.json
new file mode 100644
index 000000000000..a6d378cbaa74
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/andes/ax45/microarch.json
@@ -0,0 +1,77 @@
+[
+ {
+ "EventCode": "0xB1",
+ "EventName": "cycle_wait_icache_fill",
+ "BriefDescription": "Cycles waiting for ICACHE fill data"
+ },
+ {
+ "EventCode": "0xC1",
+ "EventName": "cycle_wait_dcache_fill",
+ "BriefDescription": "Cycles waiting for DCACHE fill data"
+ },
+ {
+ "EventCode": "0xD1",
+ "EventName": "uncached_ifetch_from_bus",
+ "BriefDescription": "Uncached ifetch data access from bus"
+ },
+ {
+ "EventCode": "0xE1",
+ "EventName": "uncached_load_from_bus",
+ "BriefDescription": "Uncached load data access from bus"
+ },
+ {
+ "EventCode": "0xF1",
+ "EventName": "cycle_wait_uncached_ifetch",
+ "BriefDescription": "Cycles waiting for uncached ifetch data from bus"
+ },
+ {
+ "EventCode": "0x101",
+ "EventName": "cycle_wait_uncached_load",
+ "BriefDescription": "Cycles waiting for uncached load data from bus"
+ },
+ {
+ "EventCode": "0x111",
+ "EventName": "main_itlb_access",
+ "BriefDescription": "Main ITLB access"
+ },
+ {
+ "EventCode": "0x121",
+ "EventName": "main_itlb_miss",
+ "BriefDescription": "Main ITLB miss"
+ },
+ {
+ "EventCode": "0x131",
+ "EventName": "main_dtlb_access",
+ "BriefDescription": "Main DTLB access"
+ },
+ {
+ "EventCode": "0x141",
+ "EventName": "main_dtlb_miss",
+ "BriefDescription": "Main DTLB miss"
+ },
+ {
+ "EventCode": "0x151",
+ "EventName": "cycle_wait_itlb_fill",
+ "BriefDescription": "Cycles waiting for Main ITLB fill data"
+ },
+ {
+ "EventCode": "0x161",
+ "EventName": "pipe_stall_cycle_dtlb_miss",
+ "BriefDescription": "Pipeline stall cycles caused by Main DTLB miss"
+ },
+ {
+ "EventCode": "0x02",
+ "EventName": "mispredict_condition_br",
+ "BriefDescription": "Misprediction of conditional branches"
+ },
+ {
+ "EventCode": "0x12",
+ "EventName": "mispredict_take_condition_br",
+ "BriefDescription": "Misprediction of taken conditional branches"
+ },
+ {
+ "EventCode": "0x22",
+ "EventName": "mispredict_target_ret_inst",
+ "BriefDescription": "Misprediction of targets of Return instructions"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/mapfile.csv b/tools/perf/pmu-events/arch/riscv/mapfile.csv
index cfc449b19810..0a7e7dcc81be 100644
--- a/tools/perf/pmu-events/arch/riscv/mapfile.csv
+++ b/tools/perf/pmu-events/arch/riscv/mapfile.csv
@@ -14,6 +14,11 @@
#
#
#MVENDORID-MARCHID-MIMPID,Version,Filename,EventType
-0x489-0x8000000000000007-0x[[:xdigit:]]+,v1,sifive/u74,core
+0x489-0x8000000000000007-0x[[:xdigit:]]+,v1,sifive/bullet,core
+0x489-0x8000000000000[1-9a-e]07-0x[78ac][[:xdigit:]]+,v1,sifive/bullet-07,core
+0x489-0x8000000000000[1-9a-e]07-0xd[[:xdigit:]]+,v1,sifive/bullet-0d,core
+0x489-0x8000000000000008-0x[[:xdigit:]]+,v1,sifive/p550,core
+0x489-0x8000000000000[1-6]08-0x[9b][[:xdigit:]]+,v1,sifive/p650,core
0x5b7-0x0-0x0,v1,thead/c900-legacy,core
0x67e-0x80000000db0000[89]0-0x[[:xdigit:]]+,v1,starfive/dubhe-80,core
+0x31e-0x8000000000008a45-0x[[:xdigit:]]+,v1,andes/ax45,core
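
In the RISC-V mapfile rows added above, the first column is a POSIX regex matched against the platform's MVENDORID-MARCHID-MIMPID string to select an event directory. A sketch of that lookup in Python; the [[:xdigit:]] translation and the example cpuid are illustrative, and perf's real matcher lives in the pmu-events/jevents tooling rather than in code like this:

import re

MAPFILE_ROWS = [
    # (regex over "MVENDORID-MARCHID-MIMPID", event directory)
    (r"0x489-0x8000000000000007-0x[[:xdigit:]]+", "sifive/bullet"),
    (r"0x31e-0x8000000000008a45-0x[[:xdigit:]]+", "andes/ax45"),
]

def lookup(cpuid):
    """Return the event directory whose pattern matches the cpuid string."""
    for pattern, directory in MAPFILE_ROWS:
        # Python's re module has no POSIX classes, so translate [[:xdigit:]].
        py_pattern = pattern.replace("[[:xdigit:]]", "[0-9a-fA-F]")
        if re.fullmatch(py_pattern, cpuid):
            return directory
    return None

print(lookup("0x31e-0x8000000000008a45-0x1"))  # -> andes/ax45
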
diff --git a/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json b/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json
index a9939823b14b..0c9b9a2d2958 100644
--- a/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json
+++ b/tools/perf/pmu-events/arch/riscv/riscv-sbi-firmware.json
@@ -74,7 +74,7 @@
{
"PublicDescription": "Sent SFENCE.VMA with ASID request to other HART event",
"ConfigCode": "0x800000000000000c",
- "EventName": "FW_SFENCE_VMA_RECEIVED",
+ "EventName": "FW_SFENCE_VMA_ASID_SENT",
"BriefDescription": "Sent SFENCE.VMA with ASID request to other HART event"
},
{
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/cycle-and-instruction-count.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/cycle-and-instruction-count.json
new file mode 100644
index 000000000000..5c8124cfe926
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/cycle-and-instruction-count.json
@@ -0,0 +1,12 @@
+[
+ {
+ "EventName": "CORE_CLOCK_CYCLES",
+ "EventCode": "0x165",
+ "BriefDescription": "Counts core clock cycles"
+ },
+ {
+ "EventName": "INSTRUCTIONS_RETIRED",
+ "EventCode": "0x265",
+ "BriefDescription": "Counts instructions retired"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/firmware.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/firmware.json
new file mode 120000
index 000000000000..34e5c2870eee
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/firmware.json
@@ -0,0 +1 @@
+../bullet/firmware.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/instruction.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/instruction.json
new file mode 120000
index 000000000000..62eacc2d7497
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/instruction.json
@@ -0,0 +1 @@
+../bullet/instruction.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/memory.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/memory.json
new file mode 120000
index 000000000000..df50fc47a5fe
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/memory.json
@@ -0,0 +1 @@
+../bullet/memory.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/microarch.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/microarch.json
new file mode 100644
index 000000000000..de8efd7b8b34
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/microarch.json
@@ -0,0 +1,62 @@
+[
+ {
+ "EventName": "ADDRESSGEN_INTERLOCK",
+ "EventCode": "0x101",
+ "BriefDescription": "Counts cycles with an address-generation interlock"
+ },
+ {
+ "EventName": "LONGLATENCY_INTERLOCK",
+ "EventCode": "0x201",
+ "BriefDescription": "Counts cycles with a long-latency interlock"
+ },
+ {
+ "EventName": "CSR_INTERLOCK",
+ "EventCode": "0x401",
+ "BriefDescription": "Counts cycles with a CSR interlock"
+ },
+ {
+ "EventName": "ICACHE_BLOCKED",
+ "EventCode": "0x801",
+ "BriefDescription": "Counts cycles in which the instruction cache was not able to provide an instruction"
+ },
+ {
+ "EventName": "DCACHE_BLOCKED",
+ "EventCode": "0x1001",
+ "BriefDescription": "Counts cycles in which the data cache blocked an instruction"
+ },
+ {
+ "EventName": "BRANCH_DIRECTION_MISPREDICTION",
+ "EventCode": "0x2001",
+ "BriefDescription": "Counts mispredictions of conditional branch direction (taken/not taken)"
+ },
+ {
+ "EventName": "BRANCH_TARGET_MISPREDICTION",
+ "EventCode": "0x4001",
+ "BriefDescription": "Counts mispredictions of the target PC of control-flow instructions"
+ },
+ {
+ "EventName": "PIPELINE_FLUSH",
+ "EventCode": "0x8001",
+ "BriefDescription": "Counts flushes of the core pipeline. Common causes include fence.i and CSR accesses"
+ },
+ {
+ "EventName": "REPLAY",
+ "EventCode": "0x10001",
+ "BriefDescription": "Counts instruction replays"
+ },
+ {
+ "EventName": "INTEGER_MUL_DIV_INTERLOCK",
+ "EventCode": "0x20001",
+ "BriefDescription": "Counts cycles with a multiply or divide interlock"
+ },
+ {
+ "EventName": "FP_INTERLOCK",
+ "EventCode": "0x40001",
+ "BriefDescription": "Counts cycles with a floating-point interlock"
+ },
+ {
+ "EventName": "TRACE_STALL",
+ "EventCode": "0x80001",
+ "BriefDescription": "Counts cycles in which the core pipeline is stalled due to backpressure from the Trace Encoder"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/watchpoint.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/watchpoint.json
new file mode 100644
index 000000000000..aa7a12818521
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-07/watchpoint.json
@@ -0,0 +1,42 @@
+[
+ {
+ "EventName": "WATCHPOINT_0",
+ "EventCode": "0x164",
+ "BriefDescription": "Counts occurrences of watchpoint 0 with action=8"
+ },
+ {
+ "EventName": "WATCHPOINT_1",
+ "EventCode": "0x264",
+ "BriefDescription": "Counts occurrences of watchpoint 1 with action=8"
+ },
+ {
+ "EventName": "WATCHPOINT_2",
+ "EventCode": "0x464",
+ "BriefDescription": "Counts occurrences of watchpoint 2 with action=8"
+ },
+ {
+ "EventName": "WATCHPOINT_3",
+ "EventCode": "0x864",
+ "BriefDescription": "Counts occurrences of watchpoint 3 with action=8"
+ },
+ {
+ "EventName": "WATCHPOINT_4",
+ "EventCode": "0x1064",
+ "BriefDescription": "Counts occurrences of watchpoint 4 with action=8"
+ },
+ {
+ "EventName": "WATCHPOINT_5",
+ "EventCode": "0x2064",
+ "BriefDescription": "Counts occurrences of watchpoint 5 with action=8"
+ },
+ {
+ "EventName": "WATCHPOINT_6",
+ "EventCode": "0x4064",
+ "BriefDescription": "Counts occurrences of watchpoint 6 with action=8"
+ },
+ {
+ "EventName": "WATCHPOINT_7",
+ "EventCode": "0x8064",
+ "BriefDescription": "Counts occurrences of watchpoint 7 with action=8"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/cycle-and-instruction-count.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/cycle-and-instruction-count.json
new file mode 120000
index 000000000000..ccd29278f61b
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/cycle-and-instruction-count.json
@@ -0,0 +1 @@
+../bullet-07/cycle-and-instruction-count.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/firmware.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/firmware.json
new file mode 120000
index 000000000000..34e5c2870eee
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/firmware.json
@@ -0,0 +1 @@
+../bullet/firmware.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/instruction.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/instruction.json
new file mode 120000
index 000000000000..62eacc2d7497
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/instruction.json
@@ -0,0 +1 @@
+../bullet/instruction.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/memory.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/memory.json
new file mode 120000
index 000000000000..df50fc47a5fe
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/memory.json
@@ -0,0 +1 @@
+../bullet/memory.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/microarch.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/microarch.json
new file mode 100644
index 000000000000..6573b24788eb
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/microarch.json
@@ -0,0 +1,72 @@
+[
+ {
+ "EventName": "ADDRESSGEN_INTERLOCK",
+ "EventCode": "0x101",
+ "BriefDescription": "Counts cycles with an address-generation interlock"
+ },
+ {
+ "EventName": "LONGLATENCY_INTERLOCK",
+ "EventCode": "0x201",
+ "BriefDescription": "Counts cycles with a long-latency interlock"
+ },
+ {
+ "EventName": "CSR_INTERLOCK",
+ "EventCode": "0x401",
+ "BriefDescription": "Counts cycles with a CSR interlock"
+ },
+ {
+ "EventName": "ICACHE_BLOCKED",
+ "EventCode": "0x801",
+ "BriefDescription": "Counts cycles in which the instruction cache was not able to provide an instruction"
+ },
+ {
+ "EventName": "DCACHE_BLOCKED",
+ "EventCode": "0x1001",
+ "BriefDescription": "Counts cycles in which the data cache blocked an instruction"
+ },
+ {
+ "EventName": "BRANCH_DIRECTION_MISPREDICTION",
+ "EventCode": "0x2001",
+ "BriefDescription": "Counts mispredictions of conditional branch direction (taken/not taken)"
+ },
+ {
+ "EventName": "BRANCH_TARGET_MISPREDICTION",
+ "EventCode": "0x4001",
+ "BriefDescription": "Counts mispredictions of the target PC of control-flow instructions"
+ },
+ {
+ "EventName": "PIPELINE_FLUSH",
+ "EventCode": "0x8001",
+ "BriefDescription": "Counts flushes of the core pipeline. Common causes include fence.i and CSR accesses"
+ },
+ {
+ "EventName": "REPLAY",
+ "EventCode": "0x10001",
+ "BriefDescription": "Counts instruction replays"
+ },
+ {
+ "EventName": "INTEGER_MUL_DIV_INTERLOCK",
+ "EventCode": "0x20001",
+ "BriefDescription": "Counts cycles with a multiply or divide interlock"
+ },
+ {
+ "EventName": "FP_INTERLOCK",
+ "EventCode": "0x40001",
+ "BriefDescription": "Counts cycles with a floating-point interlock"
+ },
+ {
+ "EventName": "TRACE_STALL",
+ "EventCode": "0x80001",
+ "BriefDescription": "Counts cycles in which the core pipeline is stalled due to backpressure from the Trace Encoder"
+ },
+ {
+ "EventName": "ITLB_MISS_STALL",
+ "EventCode": "0x100001",
+ "BriefDescription": "Counts cycles in which the core pipeline is stalled due to ITLB Miss"
+ },
+ {
+ "EventName": "DTLB_MISS_STALL",
+ "EventCode": "0x200001",
+ "BriefDescription": "Counts cycles in which the core pipeline is stalled due to DTLB Miss"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/watchpoint.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/watchpoint.json
new file mode 120000
index 000000000000..e88b98bfc5c8
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet-0d/watchpoint.json
@@ -0,0 +1 @@
+../bullet-07/watchpoint.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet/firmware.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet/firmware.json
new file mode 100644
index 000000000000..7149caec4f80
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet/firmware.json
@@ -0,0 +1,68 @@
+[
+ {
+ "ArchStdEvent": "FW_MISALIGNED_LOAD"
+ },
+ {
+ "ArchStdEvent": "FW_MISALIGNED_STORE"
+ },
+ {
+ "ArchStdEvent": "FW_ACCESS_LOAD"
+ },
+ {
+ "ArchStdEvent": "FW_ACCESS_STORE"
+ },
+ {
+ "ArchStdEvent": "FW_ILLEGAL_INSN"
+ },
+ {
+ "ArchStdEvent": "FW_SET_TIMER"
+ },
+ {
+ "ArchStdEvent": "FW_IPI_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_IPI_RECEIVED"
+ },
+ {
+ "ArchStdEvent": "FW_FENCE_I_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_FENCE_I_RECEIVED"
+ },
+ {
+ "ArchStdEvent": "FW_SFENCE_VMA_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+ },
+ {
+ "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_GVMA_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_GVMA_RECEIVED"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_GVMA_VMID_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_GVMA_VMID_RECEIVED"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_VVMA_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_VVMA_RECEIVED"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_VVMA_ASID_SENT"
+ },
+ {
+ "ArchStdEvent": "FW_HFENCE_VVMA_ASID_RECEIVED"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet/instruction.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet/instruction.json
new file mode 100644
index 000000000000..284e4c1566e0
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet/instruction.json
@@ -0,0 +1,92 @@
+[
+ {
+ "EventName": "EXCEPTION_TAKEN",
+ "EventCode": "0x100",
+ "BriefDescription": "Counts exceptions taken"
+ },
+ {
+ "EventName": "INTEGER_LOAD_RETIRED",
+ "EventCode": "0x200",
+ "BriefDescription": "Counts integer load instructions retired"
+ },
+ {
+ "EventName": "INTEGER_STORE_RETIRED",
+ "EventCode": "0x400",
+ "BriefDescription": "Counts integer store instructions retired"
+ },
+ {
+ "EventName": "ATOMIC_MEMORY_RETIRED",
+ "EventCode": "0x800",
+ "BriefDescription": "Counts atomic memory instructions retired"
+ },
+ {
+ "EventName": "SYSTEM_INSTRUCTION_RETIRED",
+ "EventCode": "0x1000",
+ "BriefDescription": "Counts system instructions retired (CSR, WFI, MRET, etc.)"
+ },
+ {
+ "EventName": "INTEGER_ARITHMETIC_RETIRED",
+ "EventCode": "0x2000",
+ "BriefDescription": "Counts integer arithmetic instructions retired"
+ },
+ {
+ "EventName": "CONDITIONAL_BRANCH_RETIRED",
+ "EventCode": "0x4000",
+ "BriefDescription": "Counts conditional branch instructions retired"
+ },
+ {
+ "EventName": "JAL_INSTRUCTION_RETIRED",
+ "EventCode": "0x8000",
+ "BriefDescription": "Counts jump-and-link instructions retired"
+ },
+ {
+ "EventName": "JALR_INSTRUCTION_RETIRED",
+ "EventCode": "0x10000",
+ "BriefDescription": "Counts indirect jump instructions (JALR) retired"
+ },
+ {
+ "EventName": "INTEGER_MULTIPLICATION_RETIRED",
+ "EventCode": "0x20000",
+ "BriefDescription": "Counts integer multiplication instructions retired"
+ },
+ {
+ "EventName": "INTEGER_DIVISION_RETIRED",
+ "EventCode": "0x40000",
+ "BriefDescription": "Counts integer division instructions retired"
+ },
+ {
+ "EventName": "FP_LOAD_RETIRED",
+ "EventCode": "0x80000",
+ "BriefDescription": "Counts floating-point load instructions retired"
+ },
+ {
+ "EventName": "FP_STORE_RETIRED",
+ "EventCode": "0x100000",
+ "BriefDescription": "Counts floating-point store instructions retired"
+ },
+ {
+ "EventName": "FP_ADD_RETIRED",
+ "EventCode": "0x200000",
+ "BriefDescription": "Counts floating-point add instructions retired"
+ },
+ {
+ "EventName": "FP_MUL_RETIRED",
+ "EventCode": "0x400000",
+ "BriefDescription": "Counts floating-point multiply instructions retired"
+ },
+ {
+ "EventName": "FP_MULADD_RETIRED",
+ "EventCode": "0x800000",
+ "BriefDescription": "Counts floating-point fused multiply-add instructions retired"
+ },
+ {
+ "EventName": "FP_DIV_SQRT_RETIRED",
+ "EventCode": "0x1000000",
+ "BriefDescription": "Counts floating point divide or square root instructions retired"
+ },
+ {
+ "EventName": "OTHER_FP_RETIRED",
+ "EventCode": "0x2000000",
+ "BriefDescription": "Counts other floating-point instructions retired"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet/memory.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet/memory.json
new file mode 100644
index 000000000000..70441a55dd66
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet/memory.json
@@ -0,0 +1,32 @@
+[
+ {
+ "EventName": "ICACHE_MISS",
+ "EventCode": "0x102",
+ "BriefDescription": "Counts instruction cache misses"
+ },
+ {
+ "EventName": "DCACHE_MISS",
+ "EventCode": "0x202",
+ "BriefDescription": "Counts data cache misses"
+ },
+ {
+ "EventName": "DCACHE_RELEASE",
+ "EventCode": "0x402",
+ "BriefDescription": "Counts writeback requests from the data cache"
+ },
+ {
+ "EventName": "ITLB_MISS",
+ "EventCode": "0x802",
+ "BriefDescription": "Counts Instruction TLB misses caused by instruction address translation requests"
+ },
+ {
+ "EventName": "DTLB_MISS",
+ "EventCode": "0x1002",
+ "BriefDescription": "Counts Data TLB misses caused by data address translation requests"
+ },
+ {
+ "EventName": "UTLB_MISS",
+ "EventCode": "0x2002",
+ "BriefDescription": "Counts Unified TLB misses caused by address translation requests"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/bullet/microarch.json b/tools/perf/pmu-events/arch/riscv/sifive/bullet/microarch.json
new file mode 100644
index 000000000000..d9cdb7d747ee
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/bullet/microarch.json
@@ -0,0 +1,57 @@
+[
+ {
+ "EventName": "ADDRESSGEN_INTERLOCK",
+ "EventCode": "0x101",
+ "BriefDescription": "Counts cycles with an address-generation interlock"
+ },
+ {
+ "EventName": "LONGLATENCY_INTERLOCK",
+ "EventCode": "0x201",
+ "BriefDescription": "Counts cycles with a long-latency interlock"
+ },
+ {
+ "EventName": "CSR_INTERLOCK",
+ "EventCode": "0x401",
+ "BriefDescription": "Counts cycles with a CSR interlock"
+ },
+ {
+ "EventName": "ICACHE_BLOCKED",
+ "EventCode": "0x801",
+ "BriefDescription": "Counts cycles in which the instruction cache was not able to provide an instruction"
+ },
+ {
+ "EventName": "DCACHE_BLOCKED",
+ "EventCode": "0x1001",
+ "BriefDescription": "Counts cycles in which the data cache blocked an instruction"
+ },
+ {
+ "EventName": "BRANCH_DIRECTION_MISPREDICTION",
+ "EventCode": "0x2001",
+ "BriefDescription": "Counts mispredictions of conditional branch direction (taken/not taken)"
+ },
+ {
+ "EventName": "BRANCH_TARGET_MISPREDICTION",
+ "EventCode": "0x4001",
+ "BriefDescription": "Counts mispredictions of the target PC of control-flow instructions"
+ },
+ {
+ "EventName": "PIPELINE_FLUSH",
+ "EventCode": "0x8001",
+ "BriefDescription": "Counts flushes of the core pipeline. Common causes include fence.i and CSR accesses"
+ },
+ {
+ "EventName": "REPLAY",
+ "EventCode": "0x10001",
+ "BriefDescription": "Counts instruction replays"
+ },
+ {
+ "EventName": "INTEGER_MUL_DIV_INTERLOCK",
+ "EventCode": "0x20001",
+ "BriefDescription": "Counts cycles with a multiply or divide interlock"
+ },
+ {
+ "EventName": "FP_INTERLOCK",
+ "EventCode": "0x40001",
+ "BriefDescription": "Counts cycles with a floating-point interlock"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p550/firmware.json b/tools/perf/pmu-events/arch/riscv/sifive/p550/firmware.json
new file mode 120000
index 000000000000..34e5c2870eee
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p550/firmware.json
@@ -0,0 +1 @@
+../bullet/firmware.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p550/instruction.json b/tools/perf/pmu-events/arch/riscv/sifive/p550/instruction.json
new file mode 120000
index 000000000000..62eacc2d7497
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p550/instruction.json
@@ -0,0 +1 @@
+../bullet/instruction.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p550/memory.json b/tools/perf/pmu-events/arch/riscv/sifive/p550/memory.json
new file mode 100644
index 000000000000..8393f81b2cf0
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p550/memory.json
@@ -0,0 +1,47 @@
+[
+ {
+ "EventName": "ICACHE_MISS",
+ "EventCode": "0x102",
+ "BriefDescription": "Counts instruction cache misses"
+ },
+ {
+ "EventName": "DCACHE_MISS",
+ "EventCode": "0x202",
+ "BriefDescription": "Counts data cache misses"
+ },
+ {
+ "EventName": "DCACHE_RELEASE",
+ "EventCode": "0x402",
+ "BriefDescription": "Counts writeback requests from the data cache"
+ },
+ {
+ "EventName": "ITLB_MISS",
+ "EventCode": "0x802",
+ "BriefDescription": "Counts Instruction TLB misses caused by instruction address translation requests"
+ },
+ {
+ "EventName": "DTLB_MISS",
+ "EventCode": "0x1002",
+ "BriefDescription": "Counts Data TLB misses caused by data address translation requests"
+ },
+ {
+ "EventName": "UTLB_MISS",
+ "EventCode": "0x2002",
+ "BriefDescription": "Counts Unified TLB misses caused by address translation requests"
+ },
+ {
+ "EventName": "UTLB_HIT",
+ "EventCode": "0x4002",
+ "BriefDescription": "Counts Unified TLB hits for address translation requests"
+ },
+ {
+ "EventName": "PTE_CACHE_MISS",
+ "EventCode": "0x8002",
+ "BriefDescription": "Counts Page Table Entry cache misses"
+ },
+ {
+ "EventName": "PTE_CACHE_HIT",
+ "EventCode": "0x10002",
+ "BriefDescription": "Counts Page Table Entry cache hits"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p550/microarch.json b/tools/perf/pmu-events/arch/riscv/sifive/p550/microarch.json
new file mode 120000
index 000000000000..ba5dd2960e9f
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p550/microarch.json
@@ -0,0 +1 @@
+../bullet/microarch.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p650/cycle-and-instruction-count.json b/tools/perf/pmu-events/arch/riscv/sifive/p650/cycle-and-instruction-count.json
new file mode 120000
index 000000000000..ccd29278f61b
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p650/cycle-and-instruction-count.json
@@ -0,0 +1 @@
+../bullet-07/cycle-and-instruction-count.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p650/firmware.json b/tools/perf/pmu-events/arch/riscv/sifive/p650/firmware.json
new file mode 120000
index 000000000000..34e5c2870eee
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p650/firmware.json
@@ -0,0 +1 @@
+../bullet/firmware.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p650/instruction.json b/tools/perf/pmu-events/arch/riscv/sifive/p650/instruction.json
new file mode 120000
index 000000000000..62eacc2d7497
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p650/instruction.json
@@ -0,0 +1 @@
+../bullet/instruction.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p650/memory.json b/tools/perf/pmu-events/arch/riscv/sifive/p650/memory.json
new file mode 100644
index 000000000000..f1431b339c7f
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p650/memory.json
@@ -0,0 +1,57 @@
+[
+ {
+ "EventName": "ICACHE_MISS",
+ "EventCode": "0x102",
+ "BriefDescription": "Counts instruction cache misses"
+ },
+ {
+ "EventName": "DCACHE_MISS",
+ "EventCode": "0x202",
+ "BriefDescription": "Counts data cache misses"
+ },
+ {
+ "EventName": "DCACHE_RELEASE",
+ "EventCode": "0x402",
+ "BriefDescription": "Counts writeback requests from the data cache"
+ },
+ {
+ "EventName": "ITLB_MISS",
+ "EventCode": "0x802",
+ "BriefDescription": "Counts Instruction TLB misses caused by instruction address translation requests"
+ },
+ {
+ "EventName": "DTLB_MISS",
+ "EventCode": "0x1002",
+ "BriefDescription": "Counts Data TLB misses caused by data address translation requests"
+ },
+ {
+ "EventName": "UTLB_MISS",
+ "EventCode": "0x2002",
+ "BriefDescription": "Counts Unified TLB misses caused by address translation requests"
+ },
+ {
+ "EventName": "UTLB_HIT",
+ "EventCode": "0x4002",
+ "BriefDescription": "Counts Unified TLB hits for address translation requests"
+ },
+ {
+ "EventName": "PTE_CACHE_MISS",
+ "EventCode": "0x8002",
+ "BriefDescription": "Counts Page Table Entry cache misses"
+ },
+ {
+ "EventName": "PTE_CACHE_HIT",
+ "EventCode": "0x10002",
+ "BriefDescription": "Counts Page Table Entry cache hits"
+ },
+ {
+ "EventName": "ITLB_MULTI_HIT",
+ "EventCode": "0x20002",
+ "BriefDescription": "Counts Instruction TLB multi-hits"
+ },
+ {
+ "EventName": "DTLB_MULTI_HIT",
+ "EventCode": "0x40002",
+ "BriefDescription": "Counts Data TLB multi-hits"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p650/microarch.json b/tools/perf/pmu-events/arch/riscv/sifive/p650/microarch.json
new file mode 100644
index 000000000000..de8efd7b8b34
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p650/microarch.json
@@ -0,0 +1,62 @@
+[
+ {
+ "EventName": "ADDRESSGEN_INTERLOCK",
+ "EventCode": "0x101",
+ "BriefDescription": "Counts cycles with an address-generation interlock"
+ },
+ {
+ "EventName": "LONGLATENCY_INTERLOCK",
+ "EventCode": "0x201",
+ "BriefDescription": "Counts cycles with a long-latency interlock"
+ },
+ {
+ "EventName": "CSR_INTERLOCK",
+ "EventCode": "0x401",
+ "BriefDescription": "Counts cycles with a CSR interlock"
+ },
+ {
+ "EventName": "ICACHE_BLOCKED",
+ "EventCode": "0x801",
+ "BriefDescription": "Counts cycles in which the instruction cache was not able to provide an instruction"
+ },
+ {
+ "EventName": "DCACHE_BLOCKED",
+ "EventCode": "0x1001",
+ "BriefDescription": "Counts cycles in which the data cache blocked an instruction"
+ },
+ {
+ "EventName": "BRANCH_DIRECTION_MISPREDICTION",
+ "EventCode": "0x2001",
+ "BriefDescription": "Counts mispredictions of conditional branch direction (taken/not taken)"
+ },
+ {
+ "EventName": "BRANCH_TARGET_MISPREDICTION",
+ "EventCode": "0x4001",
+ "BriefDescription": "Counts mispredictions of the target PC of control-flow instructions"
+ },
+ {
+ "EventName": "PIPELINE_FLUSH",
+ "EventCode": "0x8001",
+ "BriefDescription": "Counts flushes of the core pipeline. Common causes include fence.i and CSR accesses"
+ },
+ {
+ "EventName": "REPLAY",
+ "EventCode": "0x10001",
+ "BriefDescription": "Counts instruction replays"
+ },
+ {
+ "EventName": "INTEGER_MUL_DIV_INTERLOCK",
+ "EventCode": "0x20001",
+ "BriefDescription": "Counts cycles with a multiply or divide interlock"
+ },
+ {
+ "EventName": "FP_INTERLOCK",
+ "EventCode": "0x40001",
+ "BriefDescription": "Counts cycles with a floating-point interlock"
+ },
+ {
+ "EventName": "TRACE_STALL",
+ "EventCode": "0x80001",
+ "BriefDescription": "Counts cycles in which the core pipeline is stalled due to backpressure from the Trace Encoder"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/p650/watchpoint.json b/tools/perf/pmu-events/arch/riscv/sifive/p650/watchpoint.json
new file mode 120000
index 000000000000..e88b98bfc5c8
--- /dev/null
+++ b/tools/perf/pmu-events/arch/riscv/sifive/p650/watchpoint.json
@@ -0,0 +1 @@
+../bullet-07/watchpoint.json \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/u74/instructions.json b/tools/perf/pmu-events/arch/riscv/sifive/u74/instructions.json
deleted file mode 100644
index 5eab718c9256..000000000000
--- a/tools/perf/pmu-events/arch/riscv/sifive/u74/instructions.json
+++ /dev/null
@@ -1,92 +0,0 @@
-[
- {
- "EventName": "EXCEPTION_TAKEN",
- "EventCode": "0x0000100",
- "BriefDescription": "Exception taken"
- },
- {
- "EventName": "INTEGER_LOAD_RETIRED",
- "EventCode": "0x0000200",
- "BriefDescription": "Integer load instruction retired"
- },
- {
- "EventName": "INTEGER_STORE_RETIRED",
- "EventCode": "0x0000400",
- "BriefDescription": "Integer store instruction retired"
- },
- {
- "EventName": "ATOMIC_MEMORY_RETIRED",
- "EventCode": "0x0000800",
- "BriefDescription": "Atomic memory operation retired"
- },
- {
- "EventName": "SYSTEM_INSTRUCTION_RETIRED",
- "EventCode": "0x0001000",
- "BriefDescription": "System instruction retired"
- },
- {
- "EventName": "INTEGER_ARITHMETIC_RETIRED",
- "EventCode": "0x0002000",
- "BriefDescription": "Integer arithmetic instruction retired"
- },
- {
- "EventName": "CONDITIONAL_BRANCH_RETIRED",
- "EventCode": "0x0004000",
- "BriefDescription": "Conditional branch retired"
- },
- {
- "EventName": "JAL_INSTRUCTION_RETIRED",
- "EventCode": "0x0008000",
- "BriefDescription": "JAL instruction retired"
- },
- {
- "EventName": "JALR_INSTRUCTION_RETIRED",
- "EventCode": "0x0010000",
- "BriefDescription": "JALR instruction retired"
- },
- {
- "EventName": "INTEGER_MULTIPLICATION_RETIRED",
- "EventCode": "0x0020000",
- "BriefDescription": "Integer multiplication instruction retired"
- },
- {
- "EventName": "INTEGER_DIVISION_RETIRED",
- "EventCode": "0x0040000",
- "BriefDescription": "Integer division instruction retired"
- },
- {
- "EventName": "FP_LOAD_RETIRED",
- "EventCode": "0x0080000",
- "BriefDescription": "Floating-point load instruction retired"
- },
- {
- "EventName": "FP_STORE_RETIRED",
- "EventCode": "0x0100000",
- "BriefDescription": "Floating-point store instruction retired"
- },
- {
- "EventName": "FP_ADDITION_RETIRED",
- "EventCode": "0x0200000",
- "BriefDescription": "Floating-point addition retired"
- },
- {
- "EventName": "FP_MULTIPLICATION_RETIRED",
- "EventCode": "0x0400000",
- "BriefDescription": "Floating-point multiplication retired"
- },
- {
- "EventName": "FP_FUSEDMADD_RETIRED",
- "EventCode": "0x0800000",
- "BriefDescription": "Floating-point fused multiply-add retired"
- },
- {
- "EventName": "FP_DIV_SQRT_RETIRED",
- "EventCode": "0x1000000",
- "BriefDescription": "Floating-point division or square-root retired"
- },
- {
- "EventName": "OTHER_FP_RETIRED",
- "EventCode": "0x2000000",
- "BriefDescription": "Other floating-point instruction retired"
- }
-] \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/u74/memory.json b/tools/perf/pmu-events/arch/riscv/sifive/u74/memory.json
deleted file mode 100644
index be1a46312ac3..000000000000
--- a/tools/perf/pmu-events/arch/riscv/sifive/u74/memory.json
+++ /dev/null
@@ -1,32 +0,0 @@
-[
- {
- "EventName": "ICACHE_RETIRED",
- "EventCode": "0x0000102",
- "BriefDescription": "Instruction cache miss"
- },
- {
- "EventName": "DCACHE_MISS_MMIO_ACCESSES",
- "EventCode": "0x0000202",
- "BriefDescription": "Data cache miss or memory-mapped I/O access"
- },
- {
- "EventName": "DCACHE_WRITEBACK",
- "EventCode": "0x0000402",
- "BriefDescription": "Data cache write-back"
- },
- {
- "EventName": "INST_TLB_MISS",
- "EventCode": "0x0000802",
- "BriefDescription": "Instruction TLB miss"
- },
- {
- "EventName": "DATA_TLB_MISS",
- "EventCode": "0x0001002",
- "BriefDescription": "Data TLB miss"
- },
- {
- "EventName": "UTLB_MISS",
- "EventCode": "0x0002002",
- "BriefDescription": "UTLB miss"
- }
-] \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/sifive/u74/microarch.json b/tools/perf/pmu-events/arch/riscv/sifive/u74/microarch.json
deleted file mode 100644
index 50ffa55418cb..000000000000
--- a/tools/perf/pmu-events/arch/riscv/sifive/u74/microarch.json
+++ /dev/null
@@ -1,57 +0,0 @@
-[
- {
- "EventName": "ADDRESSGEN_INTERLOCK",
- "EventCode": "0x0000101",
- "BriefDescription": "Address-generation interlock"
- },
- {
- "EventName": "LONGLAT_INTERLOCK",
- "EventCode": "0x0000201",
- "BriefDescription": "Long-latency interlock"
- },
- {
- "EventName": "CSR_READ_INTERLOCK",
- "EventCode": "0x0000401",
- "BriefDescription": "CSR read interlock"
- },
- {
- "EventName": "ICACHE_ITIM_BUSY",
- "EventCode": "0x0000801",
- "BriefDescription": "Instruction cache/ITIM busy"
- },
- {
- "EventName": "DCACHE_DTIM_BUSY",
- "EventCode": "0x0001001",
- "BriefDescription": "Data cache/DTIM busy"
- },
- {
- "EventName": "BRANCH_DIRECTION_MISPREDICTION",
- "EventCode": "0x0002001",
- "BriefDescription": "Branch direction misprediction"
- },
- {
- "EventName": "BRANCH_TARGET_MISPREDICTION",
- "EventCode": "0x0004001",
- "BriefDescription": "Branch/jump target misprediction"
- },
- {
- "EventName": "PIPE_FLUSH_CSR_WRITE",
- "EventCode": "0x0008001",
- "BriefDescription": "Pipeline flush from CSR write"
- },
- {
- "EventName": "PIPE_FLUSH_OTHER_EVENT",
- "EventCode": "0x0010001",
- "BriefDescription": "Pipeline flush from other event"
- },
- {
- "EventName": "INTEGER_MULTIPLICATION_INTERLOCK",
- "EventCode": "0x0020001",
- "BriefDescription": "Integer multiplication interlock"
- },
- {
- "EventName": "FP_INTERLOCK",
- "EventCode": "0x0040001",
- "BriefDescription": "Floating-point interlock"
- }
-] \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json b/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json
index 9b4a032186a7..7149caec4f80 100644
--- a/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json
+++ b/tools/perf/pmu-events/arch/riscv/starfive/dubhe-80/firmware.json
@@ -36,7 +36,7 @@
"ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
},
{
- "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+ "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
},
{
"ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
diff --git a/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json b/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json
index 9b4a032186a7..7149caec4f80 100644
--- a/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json
+++ b/tools/perf/pmu-events/arch/riscv/thead/c900-legacy/firmware.json
@@ -36,7 +36,7 @@
"ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
},
{
- "ArchStdEvent": "FW_SFENCE_VMA_RECEIVED"
+ "ArchStdEvent": "FW_SFENCE_VMA_ASID_SENT"
},
{
"ArchStdEvent": "FW_SFENCE_VMA_ASID_RECEIVED"
diff --git a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json
index c2b10ec1c6e0..02cce3a629cb 100644
--- a/tools/perf/pmu-events/arch/s390/cf_z16/extended.json
+++ b/tools/perf/pmu-events/arch/s390/cf_z16/extended.json
@@ -94,77 +94,77 @@
"Unit": "CPU-M-CF",
"EventCode": "145",
"EventName": "DCW_REQ",
- "BriefDescription": "Directory Write Level 1 Data Cache from Cache",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "146",
"EventName": "DCW_REQ_IV",
- "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Intervention",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache with Intervention",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache with intervention."
},
{
"Unit": "CPU-M-CF",
"EventCode": "147",
"EventName": "DCW_REQ_CHIP_HIT",
- "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Chip HP Hit",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache with Chip HP Hit",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "148",
"EventName": "DCW_REQ_DRAWER_HIT",
- "BriefDescription": "Directory Write Level 1 Data Cache from Cache with Drawer HP Hit",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache with Drawer HP Hit",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "149",
"EventName": "DCW_ON_CHIP",
- "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip Cache",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "150",
"EventName": "DCW_ON_CHIP_IV",
- "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip Cache with Intervention",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache with Intervention",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache with intervention."
},
{
"Unit": "CPU-M-CF",
"EventCode": "151",
"EventName": "DCW_ON_CHIP_CHIP_HIT",
- "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip Cache with Chip HP Hit",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache with Chip HP Hit",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "152",
"EventName": "DCW_ON_CHIP_DRAWER_HIT",
- "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip Cache with Drawer HP Hit",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache with Drawer HP Hit",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache using drawer level horizontal persistence, Drawer-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "153",
"EventName": "DCW_ON_MODULE",
- "BriefDescription": "Directory Write Level 1 Data Cache from On-Module Cache",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Module L2-Cache",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Module Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "154",
"EventName": "DCW_ON_DRAWER",
- "BriefDescription": "Directory Write Level 1 Data Cache from On-Drawer Cache",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Drawer L2-Cache",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "155",
"EventName": "DCW_OFF_DRAWER",
- "BriefDescription": "Directory Write Level 1 Data Cache from Off-Drawer Cache",
+ "BriefDescription": "Directory Write Level 1 Data Cache from Off-Drawer L2-Cache",
"PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache."
},
{
@@ -199,140 +199,140 @@
"Unit": "CPU-M-CF",
"EventCode": "160",
"EventName": "IDCW_ON_MODULE_IV",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory Cache with Intervention",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory L2-Cache with Intervention",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache with intervention."
},
{
"Unit": "CPU-M-CF",
"EventCode": "161",
"EventName": "IDCW_ON_MODULE_CHIP_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory Cache with Chip Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory L2-Cache with Chip Hit",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache using chip horizontal persistence, Chip-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "162",
"EventName": "IDCW_ON_MODULE_DRAWER_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory Cache with Drawer Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory L2-Cache with Drawer Hit",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache using drawer level horizontal persistence, Drawer-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "163",
"EventName": "IDCW_ON_DRAWER_IV",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer Cache with Intervention",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer L2-Cache with Intervention",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache with intervention."
},
{
"Unit": "CPU-M-CF",
"EventCode": "164",
"EventName": "IDCW_ON_DRAWER_CHIP_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer Cache with Chip Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer L2-Cache with Chip Hit",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache using chip level horizontal persistence, Chip-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "165",
"EventName": "IDCW_ON_DRAWER_DRAWER_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer Cache with Drawer Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer L2-Cache with Drawer Hit",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache using drawer level horizontal persistence, Drawer-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "166",
"EventName": "IDCW_OFF_DRAWER_IV",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer Cache with Intervention",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer L2-Cache with Intervention",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache with intervention."
},
{
"Unit": "CPU-M-CF",
"EventCode": "167",
"EventName": "IDCW_OFF_DRAWER_CHIP_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer Cache with Chip Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer L2-Cache with Chip Hit",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache using chip level horizontal persistence, Chip-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "168",
"EventName": "IDCW_OFF_DRAWER_DRAWER_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer Cache with Drawer Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer L2-Cache with Drawer Hit",
"PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache using drawer level horizontal persistence, Drawer-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "169",
"EventName": "ICW_REQ",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from Cache",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced the requestors Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "170",
"EventName": "ICW_REQ_IV",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from Cache with Intervention",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache with Intervention",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache with intervention."
},
{
"Unit": "CPU-M-CF",
"EventCode": "171",
"EventName": "ICW_REQ_CHIP_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from Cache with Chip HP Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache with Chip HP Hit",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache using chip level horizontal persistence, Chip-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "172",
"EventName": "ICW_REQ_DRAWER_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from Cache with Drawer HP Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache with Drawer HP Hit",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache using drawer level horizontal persistence, Drawer-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "173",
"EventName": "ICW_ON_CHIP",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip Cache",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Chip Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "174",
"EventName": "ICW_ON_CHIP_IV",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip Cache with Intervention",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache with Intervention",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced an On-Chip Level-2 cache with intervention."
},
{
"Unit": "CPU-M-CF",
"EventCode": "175",
"EventName": "ICW_ON_CHIP_CHIP_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip Cache with Chip HP Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache with Chip HP Hit",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Chip Level-2 cache using chip level horizontal persistence, Chip-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "176",
"EventName": "ICW_ON_CHIP_DRAWER_HIT",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip Cache with Drawer HP Hit",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache with Drawer HP Hit",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Chip level 2 cache using drawer level horizontal persistence, Drawer-HP hit."
},
{
"Unit": "CPU-M-CF",
"EventCode": "177",
"EventName": "ICW_ON_MODULE",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Module Cache",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Module L2-Cache",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "178",
"EventName": "ICW_ON_DRAWER",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Drawer Cache",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Drawer L2-Cache",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced an On-Drawer Level-2 cache."
},
{
"Unit": "CPU-M-CF",
"EventCode": "179",
"EventName": "ICW_OFF_DRAWER",
- "BriefDescription": "Directory Write Level 1 Instruction Cache from Off-Drawer Cache",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from Off-Drawer L2-Cache",
"PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced an Off-Drawer Level-2 cache."
},
{
diff --git a/tools/perf/pmu-events/arch/s390/cf_z16/pai_crypto.json b/tools/perf/pmu-events/arch/s390/cf_z16/pai_crypto.json
index cf8563d059b9..a82674f62409 100644
--- a/tools/perf/pmu-events/arch/s390/cf_z16/pai_crypto.json
+++ b/tools/perf/pmu-events/arch/s390/cf_z16/pai_crypto.json
@@ -753,14 +753,14 @@
"EventCode": "4203",
"EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_TDEA_128",
"BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED TDEA 128",
- "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-TDEA- 128 function ending with CC=0"
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-TDEA-128 function ending with CC=0"
},
{
"Unit": "PAI-CRYPTO",
"EventCode": "4204",
"EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_TDEA_192",
"BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED TDEA 192",
- "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-TDEA- 192 function ending with CC=0"
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-TDEA-192 function ending with CC=0"
},
{
"Unit": "PAI-CRYPTO",
@@ -788,21 +788,21 @@
"EventCode": "4208",
"EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_128",
"BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED AES 128",
- "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES- 128 function ending with CC=0"
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES-128 function ending with CC=0"
},
{
"Unit": "PAI-CRYPTO",
"EventCode": "4209",
"EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_192",
"BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED AES 192",
- "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES- 192 function ending with CC=0"
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES-192 function ending with CC=0"
},
{
"Unit": "PAI-CRYPTO",
"EventCode": "4210",
- "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_256A",
- "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED AES 256A",
- "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES- 256A function ending with CC=0"
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_256",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED AES 256",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES-256 function ending with CC=0"
},
{
"Unit": "PAI-CRYPTO",
diff --git a/tools/perf/pmu-events/arch/s390/cf_z16/transaction.json b/tools/perf/pmu-events/arch/s390/cf_z16/transaction.json
index ec2ff78e2b5f..3ab1d3a6638c 100644
--- a/tools/perf/pmu-events/arch/s390/cf_z16/transaction.json
+++ b/tools/perf/pmu-events/arch/s390/cf_z16/transaction.json
@@ -2,71 +2,71 @@
{
"BriefDescription": "Transaction count",
"MetricName": "transaction",
- "MetricExpr": "TX_C_TEND + TX_NC_TEND + TX_NC_TABORT + TX_C_TABORT_SPECIAL + TX_C_TABORT_NO_SPECIAL"
+ "MetricExpr": "TX_C_TEND + TX_NC_TEND + TX_NC_TABORT + TX_C_TABORT_SPECIAL + TX_C_TABORT_NO_SPECIAL if has_event(TX_C_TEND) else 0"
},
{
"BriefDescription": "Cycles per Instruction",
"MetricName": "cpi",
- "MetricExpr": "CPU_CYCLES / INSTRUCTIONS"
+ "MetricExpr": "CPU_CYCLES / INSTRUCTIONS if has_event(INSTRUCTIONS) else 0"
},
{
"BriefDescription": "Problem State Instruction Ratio",
"MetricName": "prbstate",
- "MetricExpr": "(PROBLEM_STATE_INSTRUCTIONS / INSTRUCTIONS) * 100"
+ "MetricExpr": "(PROBLEM_STATE_INSTRUCTIONS / INSTRUCTIONS) * 100 if has_event(INSTRUCTIONS) else 0"
},
{
"BriefDescription": "Level One Miss per 100 Instructions",
"MetricName": "l1mp",
- "MetricExpr": "((L1I_DIR_WRITES + L1D_DIR_WRITES) / INSTRUCTIONS) * 100"
+ "MetricExpr": "((L1I_DIR_WRITES + L1D_DIR_WRITES) / INSTRUCTIONS) * 100 if has_event(INSTRUCTIONS) else 0"
},
{
"BriefDescription": "Percentage sourced from Level 2 cache",
"MetricName": "l2p",
- "MetricExpr": "((DCW_REQ + DCW_REQ_IV + ICW_REQ + ICW_REQ_IV) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100"
+ "MetricExpr": "((DCW_REQ + DCW_REQ_IV + ICW_REQ + ICW_REQ_IV) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_REQ) else 0"
},
{
"BriefDescription": "Percentage sourced from Level 3 on same chip cache",
"MetricName": "l3p",
- "MetricExpr": "((DCW_REQ_CHIP_HIT + DCW_ON_CHIP + DCW_ON_CHIP_IV + DCW_ON_CHIP_CHIP_HIT + ICW_REQ_CHIP_HIT + ICW_ON_CHIP + ICW_ON_CHIP_IV + ICW_ON_CHIP_CHIP_HIT) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100"
+ "MetricExpr": "((DCW_REQ_CHIP_HIT + DCW_ON_CHIP + DCW_ON_CHIP_IV + DCW_ON_CHIP_CHIP_HIT + ICW_REQ_CHIP_HIT + ICW_ON_CHIP + ICW_ON_CHIP_IV + ICW_ON_CHIP_CHIP_HIT) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_REQ_CHIP_HIT) else 0"
},
{
"BriefDescription": "Percentage sourced from Level 4 Local cache on same book",
"MetricName": "l4lp",
- "MetricExpr": "((DCW_REQ_DRAWER_HIT + DCW_ON_CHIP_DRAWER_HIT + DCW_ON_MODULE + DCW_ON_DRAWER + IDCW_ON_MODULE_IV + IDCW_ON_MODULE_CHIP_HIT + IDCW_ON_MODULE_DRAWER_HIT + IDCW_ON_DRAWER_IV + IDCW_ON_DRAWER_CHIP_HIT + IDCW_ON_DRAWER_DRAWER_HIT + ICW_REQ_DRAWER_HIT + ICW_ON_CHIP_DRAWER_HIT + ICW_ON_MODULE + ICW_ON_DRAWER) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100"
+ "MetricExpr": "((DCW_REQ_DRAWER_HIT + DCW_ON_CHIP_DRAWER_HIT + DCW_ON_MODULE + DCW_ON_DRAWER + IDCW_ON_MODULE_IV + IDCW_ON_MODULE_CHIP_HIT + IDCW_ON_MODULE_DRAWER_HIT + IDCW_ON_DRAWER_IV + IDCW_ON_DRAWER_CHIP_HIT + IDCW_ON_DRAWER_DRAWER_HIT + ICW_REQ_DRAWER_HIT + ICW_ON_CHIP_DRAWER_HIT + ICW_ON_MODULE + ICW_ON_DRAWER) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_REQ_DRAWER_HIT) else 0"
},
{
"BriefDescription": "Percentage sourced from Level 4 Remote cache on different book",
"MetricName": "l4rp",
- "MetricExpr": "((DCW_OFF_DRAWER + IDCW_OFF_DRAWER_IV + IDCW_OFF_DRAWER_CHIP_HIT + IDCW_OFF_DRAWER_DRAWER_HIT + ICW_OFF_DRAWER) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100"
+ "MetricExpr": "((DCW_OFF_DRAWER + IDCW_OFF_DRAWER_IV + IDCW_OFF_DRAWER_CHIP_HIT + IDCW_OFF_DRAWER_DRAWER_HIT + ICW_OFF_DRAWER) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_OFF_DRAWER) else 0"
},
{
"BriefDescription": "Percentage sourced from memory",
"MetricName": "memp",
- "MetricExpr": "((DCW_ON_CHIP_MEMORY + DCW_ON_MODULE_MEMORY + DCW_ON_DRAWER_MEMORY + DCW_OFF_DRAWER_MEMORY + ICW_ON_CHIP_MEMORY + ICW_ON_MODULE_MEMORY + ICW_ON_DRAWER_MEMORY + ICW_OFF_DRAWER_MEMORY) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100"
+ "MetricExpr": "((DCW_ON_CHIP_MEMORY + DCW_ON_MODULE_MEMORY + DCW_ON_DRAWER_MEMORY + DCW_OFF_DRAWER_MEMORY + ICW_ON_CHIP_MEMORY + ICW_ON_MODULE_MEMORY + ICW_ON_DRAWER_MEMORY + ICW_OFF_DRAWER_MEMORY) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_ON_CHIP_MEMORY) else 0"
},
{
"BriefDescription": "Cycles per Instructions from Finite cache/memory",
"MetricName": "finite_cpi",
- "MetricExpr": "L1C_TLB2_MISSES / INSTRUCTIONS"
+ "MetricExpr": "L1C_TLB2_MISSES / INSTRUCTIONS if has_event(L1C_TLB2_MISSES) else 0"
},
{
"BriefDescription": "Estimated Instruction Complexity CPI infinite Level 1",
"MetricName": "est_cpi",
- "MetricExpr": "(CPU_CYCLES / INSTRUCTIONS) - (L1C_TLB2_MISSES / INSTRUCTIONS)"
+ "MetricExpr": "(CPU_CYCLES / INSTRUCTIONS) - (L1C_TLB2_MISSES / INSTRUCTIONS) if has_event(INSTRUCTIONS) else 0"
},
{
"BriefDescription": "Estimated Sourcing Cycles per Level 1 Miss",
"MetricName": "scpl1m",
- "MetricExpr": "L1C_TLB2_MISSES / (L1I_DIR_WRITES + L1D_DIR_WRITES)"
+ "MetricExpr": "L1C_TLB2_MISSES / (L1I_DIR_WRITES + L1D_DIR_WRITES) if has_event(L1C_TLB2_MISSES) else 0"
},
{
"BriefDescription": "Estimated TLB CPU percentage of Total CPU",
"MetricName": "tlb_percent",
- "MetricExpr": "((DTLB2_MISSES + ITLB2_MISSES) / CPU_CYCLES) * (L1C_TLB2_MISSES / (L1I_PENALTY_CYCLES + L1D_PENALTY_CYCLES)) * 100"
+ "MetricExpr": "((DTLB2_MISSES + ITLB2_MISSES) / CPU_CYCLES) * (L1C_TLB2_MISSES / (L1I_PENALTY_CYCLES + L1D_PENALTY_CYCLES)) * 100 if has_event(CPU_CYCLES) else 0"
},
{
"BriefDescription": "Estimated Cycles per TLB Miss",
"MetricName": "tlb_miss",
- "MetricExpr": "((DTLB2_MISSES + ITLB2_MISSES) / (DTLB2_WRITES + ITLB2_WRITES)) * (L1C_TLB2_MISSES / (L1I_PENALTY_CYCLES + L1D_PENALTY_CYCLES))"
+ "MetricExpr": "((DTLB2_MISSES + ITLB2_MISSES) / (DTLB2_WRITES + ITLB2_WRITES)) * (L1C_TLB2_MISSES / (L1I_PENALTY_CYCLES + L1D_PENALTY_CYCLES)) if has_event(DTLB2_MISSES) else 0"
}
]
diff --git a/tools/perf/pmu-events/arch/s390/cf_z17/basic.json b/tools/perf/pmu-events/arch/s390/cf_z17/basic.json
new file mode 100644
index 000000000000..1023d47028ce
--- /dev/null
+++ b/tools/perf/pmu-events/arch/s390/cf_z17/basic.json
@@ -0,0 +1,58 @@
+[
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "0",
+ "EventName": "CPU_CYCLES",
+ "BriefDescription": "Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles, excluding the number of cycles while the CPU is in the wait state."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "1",
+ "EventName": "INSTRUCTIONS",
+ "BriefDescription": "Instruction Count",
+ "PublicDescription": "This counter counts the total number of instructions executed by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "2",
+ "EventName": "L1I_DIR_WRITES",
+ "BriefDescription": "Level-1 I-Cache Directory Write Count",
+ "PublicDescription": "This counter counts the total number of level-1 instruction-cache or unified-cache directory writes."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "3",
+ "EventName": "L1I_PENALTY_CYCLES",
+ "BriefDescription": "Level-1 I-Cache Penalty Cycle Count",
+ "PublicDescription": "This counter counts the total number of cache penalty cycles for level-1 instruction cache or unified cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "4",
+ "EventName": "L1D_DIR_WRITES",
+ "BriefDescription": "Level-1 D-Cache Directory Write Count",
+ "PublicDescription": "This counter counts the total number of level-1 data-cache directory writes."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "5",
+ "EventName": "L1D_PENALTY_CYCLES",
+ "BriefDescription": "Level-1 D-Cache Penalty Cycle Count",
+ "PublicDescription": "This counter counts the total number of cache penalty cycles for level-1 data cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "32",
+ "EventName": "PROBLEM_STATE_CPU_CYCLES",
+ "BriefDescription": "Problem-State Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles when the CPU is in the problem state, excluding the number of cycles while the CPU is in the wait state."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "33",
+ "EventName": "PROBLEM_STATE_INSTRUCTIONS",
+ "BriefDescription": "Problem-State Instruction Count",
+ "PublicDescription": "This counter counts the total number of instructions executed by the CPU while in the problem state."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/s390/cf_z17/crypto6.json b/tools/perf/pmu-events/arch/s390/cf_z17/crypto6.json
new file mode 100644
index 000000000000..8b4380b8e489
--- /dev/null
+++ b/tools/perf/pmu-events/arch/s390/cf_z17/crypto6.json
@@ -0,0 +1,142 @@
+[
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "64",
+ "EventName": "PRNG_FUNCTIONS",
+ "BriefDescription": "PRNG Function Count",
+ "PublicDescription": "This counter counts the total number of the pseudorandom-number-generation functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "65",
+ "EventName": "PRNG_CYCLES",
+ "BriefDescription": "PRNG Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles when the DEA/AES/SHA coprocessor is busy performing the pseudorandom- number-generation functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "66",
+ "EventName": "PRNG_BLOCKED_FUNCTIONS",
+ "BriefDescription": "PRNG Blocked Function Count",
+ "PublicDescription": "This counter counts the total number of the pseudorandom-number-generation functions that are issued by the CPU and are blocked because the DEA/AES/SHA coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "67",
+ "EventName": "PRNG_BLOCKED_CYCLES",
+ "BriefDescription": "PRNG Blocked Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles blocked for the pseudorandom-number-generation functions issued by the CPU because the DEA/AES/SHA coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "68",
+ "EventName": "SHA_FUNCTIONS",
+ "BriefDescription": "SHA Function Count",
+ "PublicDescription": "This counter counts the total number of the SHA functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "69",
+ "EventName": "SHA_CYCLES",
+ "BriefDescription": "SHA Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles when the SHA coprocessor is busy performing the SHA functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "70",
+ "EventName": "SHA_BLOCKED_FUNCTIONS",
+ "BriefDescription": "SHA Blocked Function Count",
+ "PublicDescription": "This counter counts the total number of the SHA functions that are issued by the CPU and are blocked because the SHA coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "71",
+ "EventName": "SHA_BLOCKED_CYCLES",
+ "BriefDescription": "SHA Blocked Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles blocked for the SHA functions issued by the CPU because the SHA coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "72",
+ "EventName": "DEA_FUNCTIONS",
+ "BriefDescription": "DEA Function Count",
+ "PublicDescription": "This counter counts the total number of the DEA functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "73",
+ "EventName": "DEA_CYCLES",
+ "BriefDescription": "DEA Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles when the DEA/AES coprocessor is busy performing the DEA functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "74",
+ "EventName": "DEA_BLOCKED_FUNCTIONS",
+ "BriefDescription": "DEA Blocked Function Count",
+ "PublicDescription": "This counter counts the total number of the DEA functions that are issued by the CPU and are blocked because the DEA/AES coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "75",
+ "EventName": "DEA_BLOCKED_CYCLES",
+ "BriefDescription": "DEA Blocked Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles blocked for the DEA functions issued by the CPU because the DEA/AES coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "76",
+ "EventName": "AES_FUNCTIONS",
+ "BriefDescription": "AES Function Count",
+ "PublicDescription": "This counter counts the total number of the AES functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "77",
+ "EventName": "AES_CYCLES",
+ "BriefDescription": "AES Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles when the DEA/AES coprocessor is busy performing the AES functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "78",
+ "EventName": "AES_BLOCKED_FUNCTIONS",
+ "BriefDescription": "AES Blocked Function Count",
+ "PublicDescription": "This counter counts the total number of the AES functions that are issued by the CPU and are blocked because the DEA/AES coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "79",
+ "EventName": "AES_BLOCKED_CYCLES",
+ "BriefDescription": "AES Blocked Cycle Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles blocked for the AES functions issued by the CPU because the DEA/AES coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "80",
+ "EventName": "ECC_FUNCTION_COUNT",
+ "BriefDescription": "ECC Function Count",
+ "PublicDescription": "This counter counts the total number of the elliptic-curve cryptography (ECC) functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "81",
+ "EventName": "ECC_CYCLES_COUNT",
+ "BriefDescription": "ECC Cycles Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles when the ECC coprocessor is busy performing the elliptic-curve cryptography (ECC) functions issued by the CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "82",
+ "EventName": "ECC_BLOCKED_FUNCTION_COUNT",
+ "BriefDescription": "Ecc Blocked Function Count",
+ "PublicDescription": "This counter counts the total number of the elliptic-curve cryptography (ECC) functions that are issued by the CPU and are blocked because the ECC coprocessor is busy performing a function issued by another CPU."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "83",
+ "EventName": "ECC_BLOCKED_CYCLES_COUNT",
+ "BriefDescription": "ECC Blocked Cycles Count",
+ "PublicDescription": "This counter counts the total number of CPU cycles blocked for the elliptic-curve cryptography (ECC) functions issued by the CPU because the ECC coprocessor is busy performing a function issued by another CPU."
+ }
+]
diff --git a/tools/perf/pmu-events/arch/s390/cf_z17/extended.json b/tools/perf/pmu-events/arch/s390/cf_z17/extended.json
new file mode 100644
index 000000000000..e139482e217f
--- /dev/null
+++ b/tools/perf/pmu-events/arch/s390/cf_z17/extended.json
@@ -0,0 +1,541 @@
+[
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "128",
+ "EventName": "L1D_RO_EXCL_WRITES",
+ "BriefDescription": "L1D Read-only Exclusive Writes",
+ "PublicDescription": "A directory write to the Level-1 Data cache where the line was originally in a Read-Only state in the cache but has been updated to be in the Exclusive state that allows stores to the cache line."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "129",
+ "EventName": "DTLB2_WRITES",
+ "BriefDescription": "DTLB2 Writes",
+ "PublicDescription": "A translation has been written into The Translation Lookaside Buffer 2 (TLB2) and the request was made by the Level-1 Data cache. This is a replacement for what was provided for the DTLB on z13 and prior machines."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "130",
+ "EventName": "DTLB2_MISSES",
+ "BriefDescription": "DTLB2 Misses",
+ "PublicDescription": "A TLB2 miss is in progress for a request made by the Level-1 Data cache. Incremented by one for every TLB2 miss in progress for the Level-1 Data cache on this cycle. This is a replacement for what was provided for the DTLB on z13 and prior machines."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "131",
+ "EventName": "CRSTE_1MB_WRITES",
+ "BriefDescription": "One Megabyte CRSTE writes",
+ "PublicDescription": "A translation entry was written into the Combined Region and Segment Table Entry array in the Level-2 TLB for a one-megabyte page."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "132",
+ "EventName": "DTLB2_GPAGE_WRITES",
+ "BriefDescription": "DTLB2 Two-Gigabyte Page Writes",
+ "PublicDescription": "A translation entry for a two-gigabyte page was written into the Level-2 TLB."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "134",
+ "EventName": "ITLB2_WRITES",
+ "BriefDescription": "ITLB2 Writes",
+ "PublicDescription": "A translation entry has been written into the Translation Lookaside Buffer 2 (TLB2) and the request was made by the Level-1 Instruction cache. This is a replacement for what was provided for the ITLB on z13 and prior machines."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "135",
+ "EventName": "ITLB2_MISSES",
+ "BriefDescription": "ITLB2 Misses",
+ "PublicDescription": "A TLB2 miss is in progress for a request made by the Level-1 Instruction cache. Incremented by one for every TLB2 miss in progress for the Level-1 Instruction cache in a cycle. This is a replacement for what was provided for the ITLB on z13 and prior machines."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "137",
+ "EventName": "TLB2_PTE_WRITES",
+ "BriefDescription": "TLB2 Page Table Entry Writes",
+ "PublicDescription": "A translation entry was written into the Page Table Entry array in the Level-2 TLB."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "138",
+ "EventName": "TLB2_CRSTE_WRITES",
+ "BriefDescription": "TLB2 Combined Region and Segment Entry Writes",
+ "PublicDescription": "Translation entries were written into the Combined Region and Segment Table Entry array and the Page Table Entry array in the Level-2 TLB."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "139",
+ "EventName": "TLB2_ENGINES_BUSY",
+ "BriefDescription": "TLB2 Engines Busy",
+ "PublicDescription": "The number of Level-2 TLB translation engines busy in a cycle."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "140",
+ "EventName": "TX_C_TEND",
+ "BriefDescription": "Completed TEND instructions in constrained TX mode",
+ "PublicDescription": "A TEND instruction has completed in a constrained transactional-execution mode."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "141",
+ "EventName": "TX_NC_TEND",
+ "BriefDescription": "Completed TEND instructions in non-constrained TX mode",
+ "PublicDescription": "A TEND instruction has completed in a non-constrained transactional-execution mode."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "143",
+ "EventName": "L1C_TLB2_MISSES",
+ "BriefDescription": "L1C TLB2 Misses",
+ "PublicDescription": "Increments by one for any cycle where a Level-1 cache or Level-2 TLB miss is in progress."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "145",
+ "EventName": "DCW_REQ",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "146",
+ "EventName": "DCW_REQ_IV",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache with Intervention",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache with intervention."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "147",
+ "EventName": "DCW_REQ_CHIP_HIT",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache with Chip HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "148",
+ "EventName": "DCW_REQ_DRAWER_HIT",
+ "BriefDescription": "Directory Write Level 1 Data Cache from L2-Cache with Drawer HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from the requestors Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "149",
+ "EventName": "DCW_ON_CHIP",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "150",
+ "EventName": "DCW_ON_CHIP_IV",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache with Intervention",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache with intervention."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "151",
+ "EventName": "DCW_ON_CHIP_CHIP_HIT",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache with Chip HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "152",
+ "EventName": "DCW_ON_CHIP_DRAWER_HIT",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Chip L2-Cache with Drawer HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Chip Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "153",
+ "EventName": "DCW_ON_MODULE",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Module L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Module Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "154",
+ "EventName": "DCW_ON_DRAWER",
+ "BriefDescription": "Directory Write Level 1 Data Cache from On-Drawer L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "155",
+ "EventName": "DCW_OFF_DRAWER",
+ "BriefDescription": "Directory Write Level 1 Data Cache from Off-Drawer L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Data cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "156",
+ "EventName": "DCW_ON_CHIP_MEMORY",
+ "BriefDescription": "Directory Write Level 1 Cache from On-Chip Memory",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from On-Chip memory."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "157",
+ "EventName": "DCW_ON_MODULE_MEMORY",
+ "BriefDescription": "Directory Write Level 1 Cache from On-Module Memory",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from On-Module memory."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "158",
+ "EventName": "DCW_ON_DRAWER_MEMORY",
+ "BriefDescription": "Directory Write Level 1 Cache from On-Drawer Memory",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from On-Drawer memory."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "159",
+ "EventName": "DCW_OFF_DRAWER_MEMORY",
+ "BriefDescription": "Directory Write Level 1 Cache from Off-Drawer Memory",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from Off-Drawer memory."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "160",
+ "EventName": "IDCW_ON_MODULE_IV",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory L2-Cache with Intervention",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache with intervention."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "161",
+ "EventName": "IDCW_ON_MODULE_CHIP_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory L2-Cache with Chip Hit",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "162",
+ "EventName": "IDCW_ON_MODULE_DRAWER_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Module Memory L2-Cache with Drawer Hit",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "163",
+ "EventName": "IDCW_ON_DRAWER_IV",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer L2-Cache with Intervention",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache with intervention."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "164",
+ "EventName": "IDCW_ON_DRAWER_CHIP_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer L2-Cache with Chip Hit",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "165",
+ "EventName": "IDCW_ON_DRAWER_DRAWER_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from On-Drawer L2-Cache with Drawer Hit",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "166",
+ "EventName": "IDCW_OFF_DRAWER_IV",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer L2-Cache with Intervention",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache with intervention."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "167",
+ "EventName": "IDCW_OFF_DRAWER_CHIP_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer L2-Cache with Chip Hit",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 instruction cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "168",
+ "EventName": "IDCW_OFF_DRAWER_DRAWER_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction and Data Cache from Off-Drawer L2-Cache with Drawer Hit",
+ "PublicDescription": "A directory write to the Level-1 Data or Level-1 Instruction cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "169",
+ "EventName": "ICW_REQ",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced the requestors Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "170",
+ "EventName": "ICW_REQ_IV",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache with Intervention",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache with intervention."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "171",
+ "EventName": "ICW_REQ_CHIP_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache with Chip HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "172",
+ "EventName": "ICW_REQ_DRAWER_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from L2-Cache with Drawer HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from the requestors Level-2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "173",
+ "EventName": "ICW_ON_CHIP",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Chip Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "174",
+ "EventName": "ICW_ON_CHIP_IV",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache with Intervention",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Chip Level-2 cache with intervention."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "175",
+ "EventName": "ICW_ON_CHIP_CHIP_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache with Chip HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Chip Level-2 cache after using chip level horizontal persistence, Chip-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "176",
+ "EventName": "ICW_ON_CHIP_DRAWER_HIT",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Chip L2-Cache with Drawer HP Hit",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Chip level 2 cache after using drawer level horizontal persistence, Drawer-HP hit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "177",
+ "EventName": "ICW_ON_MODULE",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Module L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Module Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "178",
+ "EventName": "ICW_ON_DRAWER",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from On-Drawer L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an On-Drawer Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "179",
+ "EventName": "ICW_OFF_DRAWER",
+ "BriefDescription": "Directory Write Level 1 Instruction Cache from Off-Drawer L2-Cache",
+ "PublicDescription": "A directory write to the Level-1 Instruction cache directory where the returned cache line was sourced from an Off-Drawer Level-2 cache."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "202",
+ "EventName": "CYCLES_SAMETHRD",
+ "BriefDescription": "CPU is not in wait state and CPU is running by itself",
+ "PublicDescription": "The number of cycles the CPU is not in wait state and the CPU is running by itself on the Core."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "203",
+ "EventName": "CYCLES_DIFFTHRD",
+ "BriefDescription": "CPU is not in wait state and CPU is running by another thread",
+ "PublicDescription": "The number of cycles the CPU is not in wait state and the CPU is running with another thread on the Core."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "204",
+ "EventName": "INST_SAMETHRD",
+ "BriefDescription": "Instructions executed on CPU by itself",
+ "PublicDescription": "The number of instructions executed on the CPU and the CPU is running by itself on the Core."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "205",
+ "EventName": "INST_DIFFTHRD",
+ "BriefDescription": "Instructions executed on CPU by another thread",
+ "PublicDescription": "The number of instructions executed on the CPU and the CPU is running with another thread on the Core."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "206",
+ "EventName": "WRONG_BRANCH_PREDICTION",
+ "BriefDescription": "Incorrect branch prediction on core",
+ "PublicDescription": "A count of the number of branches that were predicted incorrectly by the branch prediction logic in the Core. This includes incorrectly predicted branches that are executed in Firmware. Examples of instructions implemented in Firmware are complicated instructions like MVCL (Move Character Long) and PC (Program Call)."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "225",
+ "EventName": "VX_BCD_EXECUTION_SLOTS",
+ "BriefDescription": "Count finished vector arithmetic Binary Coded Decimal instructions",
+ "PublicDescription": "Count of floating point execution slots used for finished vector arithmetic Binary Coded Decimal instructions. Instructions: VAP, VSP, VMP, VMSP, VDP, VSDP, VRP, VLIP, VSRP, VPSOP, VCP, VTP, VPKZ, VUPKZ, VCVB, VCVBG, VCVD, VCVDG, VSCHP, VSCSHP, VCSPH, VCLZDP, VPKZR, VSRPR, VUPKZH, VUPKZL, VTZ, VUPH, VUPL, VCVBX, VCVDX."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "226",
+ "EventName": "DECIMAL_INSTRUCTIONS",
+ "BriefDescription": "Decimal instruction dispatched",
+ "PublicDescription": "Decimal instruction dispatched. Instructions: CVB, CVD, AP, CP, DP, ED, EDMK, MP, SRP, SP, ZAP, TP."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "232",
+ "EventName": "LAST_HOST_TRANSLATIONS",
+ "BriefDescription": "Last host translation done",
+ "PublicDescription": "Last Host Translation done."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "244",
+ "EventName": "TX_NC_TABORT",
+ "BriefDescription": "Aborted transactions in unconstrained TX mode",
+ "PublicDescription": "A transaction abort has occurred in a non-constrained transactional-execution mode."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "245",
+ "EventName": "TX_C_TABORT_NO_SPECIAL",
+ "BriefDescription": "Aborted transactions in constrained TX mode",
+ "PublicDescription": "A transaction abort has occurred in a constrained transactional-execution mode and the CPU is not using any special logic to allow the transaction to complete."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "246",
+ "EventName": "TX_C_TABORT_SPECIAL",
+ "BriefDescription": "Aborted transactions in constrained TX mode using special completion logic",
+ "PublicDescription": "A transaction abort has occurred in a constrained transactional-execution mode and the CPU is using special logic to allow the transaction to complete."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "248",
+ "EventName": "DFLT_ACCESS",
+ "BriefDescription": "Cycles CPU spent obtaining access to Deflate unit",
+ "PublicDescription": "Cycles CPU spent obtaining access to Deflate unit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "253",
+ "EventName": "DFLT_CYCLES",
+ "BriefDescription": "Cycles CPU is using Deflate unit",
+ "PublicDescription": "Cycles CPU is using Deflate unit."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "256",
+ "EventName": "SORTL",
+ "BriefDescription": "Count SORTL instructions",
+ "PublicDescription": "Increments by one for every SORT LISTS (SORTL) instruction executed."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "265",
+ "EventName": "DFLT_CC",
+ "BriefDescription": "Increments DEFLATE CONVERSION CALL",
+ "PublicDescription": "Increments by one for every DEFLATE CONVERSION CALL (DFLTCC) instruction executed."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "266",
+ "EventName": "DFLT_CCFINISH",
+ "BriefDescription": "Increments completed DEFLATE CONVERSION CALL",
+ "PublicDescription": "Increments by one for every DEFLATE CONVERSION CALL (DFLTCC) instruction executed that ended in Condition Codes 0, 1 or 2."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "267",
+ "EventName": "NNPA_INVOCATIONS",
+ "BriefDescription": "NNPA Total invocations",
+ "PublicDescription": "Increments by one for every NEURAL NETWORK PROCESSING ASSIST (NNPA) instruction executed."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "268",
+ "EventName": "NNPA_COMPLETIONS",
+ "BriefDescription": "NNPA Total completions",
+ "PublicDescription": "Increments by one for every NEURAL NETWORK PROCESSING ASSIST (NNPA) instruction executed that ended in Condition Code 0."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "269",
+ "EventName": "NNPA_WAIT_LOCK",
+ "BriefDescription": "Cycles spent obtaining NNPA lock",
+ "PublicDescription": "Cycles CPU spent obtaining access to IBM Z Integrated Accelerator for AI."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "270",
+ "EventName": "NNPA_HOLD_LOCK",
+ "BriefDescription": "Cycles spent holding NNPA lock",
+ "PublicDescription": "Cycles CPU is using IBM Z Integrated Accelerator for AI."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "272",
+ "EventName": "NNPA_INST_ONCHIP",
+ "BriefDescription": "NNPA instructions used on-chip Integrated Accelerator",
+ "PublicDescription": "A NEURAL NETWORK PROCESSING ASSIST (NNPA) instruction has used the Local On-Chip IBM Z Integrated Accelerator for AI during its execution"
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "273",
+ "EventName": "NNPA_INST_OFFCHIP",
+ "BriefDescription": "NNPA instructions used off-chip Integrated Accelerator",
+ "PublicDescription": "A NEURAL NETWORK PROCESSING ASSIST (NNPA) instruction has used an Off-Chip IBM Z Integrated Accelerator for AI during its execution."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "274",
+ "EventName": "NNPA_INST_DIFF",
+ "BriefDescription": "NNPA instructions used different Integrated Accelerator",
+ "PublicDescription": "A NEURAL NETWORK PROCESSING ASSIST (NNPA) instruction has used a different IBM Z Integrated Accelerator for AI since it was last executed."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "276",
+ "EventName": "NNPA_4K_PREFETCH",
+ "BriefDescription": "Number of 4K prefetches for Integated Accelerator",
+ "PublicDescription": "Number of 4K prefetches done for a remote IBM Z Integated Accelerator for AI."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "277",
+ "EventName": "NNPA_COMPL_LOCK",
+ "BriefDescription": "A Perform Locked Operation has completed",
+ "PublicDescription": "A PERFORM LOCKED OPERATION (PLO) has completed."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "278",
+ "EventName": "NNPA_RETRY_LOCK",
+ "BriefDescription": "A Perform Locked Operation has been retried",
+ "PublicDescription": "A PERFORM LOCKED OPERATION (PLO) has been retried and the CPU did not use any special logic to allow the PLO to complete."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "279",
+ "EventName": "NNPA_RETRY_LOCK_WITH_PLO",
+ "BriefDescription": "A Perform Locked Operation has been retried using special logic",
+ "PublicDescription": "A PERFORM LOCKED OPERATION (PLO) has been retried and the CPU is using special logic to allow PLO to complete."
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "448",
+ "EventName": "MT_DIAG_CYCLES_ONE_THR_ACTIVE",
+ "BriefDescription": "Cycle count with one thread active",
+ "PublicDescription": "Cycle count with one thread active"
+ },
+ {
+ "Unit": "CPU-M-CF",
+ "EventCode": "449",
+ "EventName": "MT_DIAG_CYCLES_TWO_THR_ACTIVE",
+ "BriefDescription": "Cycle count with two threads active",
+ "PublicDescription": "Cycle count with two threads active"
+ }
+]
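
The counter definitions above lend themselves to a few simple derived ratios once the raw values have been collected (on s390 these counters are typically exposed through the cpum_cf PMU). A minimal Python sketch, using only event names defined above and invented sample readings, might look like this:

# Hypothetical raw readings for a few of the counters defined above; the
# event names match the JSON, the values are made up for illustration.
counters = {
    "NNPA_INVOCATIONS": 120_000,
    "NNPA_COMPLETIONS": 118_500,
    "DFLT_CC": 40_000,
    "DFLT_CCFINISH": 39_950,
}

def ratio(done, total):
    """Return done/total, guarding against a zero denominator."""
    return done / total if total else 0.0

# Share of NNPA instructions that ended in condition code 0.
nnpa_completion_rate = ratio(counters["NNPA_COMPLETIONS"],
                             counters["NNPA_INVOCATIONS"])
# Share of DEFLATE CONVERSION CALL instructions that ended in CC 0, 1 or 2.
dfltcc_finish_rate = ratio(counters["DFLT_CCFINISH"], counters["DFLT_CC"])

print(f"NNPA completion rate:   {nnpa_completion_rate:.2%}")
print(f"DFLTCC completion rate: {dfltcc_finish_rate:.2%}")
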
diff --git a/tools/perf/pmu-events/arch/s390/cf_z17/pai_crypto.json b/tools/perf/pmu-events/arch/s390/cf_z17/pai_crypto.json
new file mode 100644
index 000000000000..fd2eb536ecc7
--- /dev/null
+++ b/tools/perf/pmu-events/arch/s390/cf_z17/pai_crypto.json
@@ -0,0 +1,1213 @@
+[
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4096",
+ "EventName": "CRYPTO_ALL",
+ "BriefDescription": "CRYPTO ALL",
+ "PublicDescription": "Sums of all non zero cryptography counters"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4097",
+ "EventName": "KM_DEA",
+ "BriefDescription": "KM DEA",
+ "PublicDescription": "KM-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4098",
+ "EventName": "KM_TDEA_128",
+ "BriefDescription": "KM TDEA 128",
+ "PublicDescription": "KM-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4099",
+ "EventName": "KM_TDEA_192",
+ "BriefDescription": "KM TDEA 192",
+ "PublicDescription": "KM-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4100",
+ "EventName": "KM_ENCRYPTED_DEA",
+ "BriefDescription": "KM ENCRYPTED DEA",
+ "PublicDescription": "KM-Encrypted-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4101",
+ "EventName": "KM_ENCRYPTED_TDEA_128",
+ "BriefDescription": "KM ENCRYPTED TDEA 128",
+ "PublicDescription": "KM-Encrypted-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4102",
+ "EventName": "KM_ENCRYPTED_TDEA_192",
+ "BriefDescription": "KM ENCRYPTED TDEA 192",
+ "PublicDescription": "KM-Encrypted-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4103",
+ "EventName": "KM_AES_128",
+ "BriefDescription": "KM AES 128",
+ "PublicDescription": "KM-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4104",
+ "EventName": "KM_AES_192",
+ "BriefDescription": "KM AES 192",
+ "PublicDescription": "KM-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4105",
+ "EventName": "KM_AES_256",
+ "BriefDescription": "KM AES 256",
+ "PublicDescription": "KM-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4106",
+ "EventName": "KM_ENCRYPTED_AES_128",
+ "BriefDescription": "KM ENCRYPTED AES 128",
+ "PublicDescription": "KM-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4107",
+ "EventName": "KM_ENCRYPTED_AES_192",
+ "BriefDescription": "KM ENCRYPTED AES 192",
+ "PublicDescription": "KM-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4108",
+ "EventName": "KM_ENCRYPTED_AES_256",
+ "BriefDescription": "KM ENCRYPTED AES 256",
+ "PublicDescription": "KM-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4109",
+ "EventName": "KM_XTS_AES_128",
+ "BriefDescription": "KM XTS AES 128",
+ "PublicDescription": "KM-XTS-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4110",
+ "EventName": "KM_XTS_AES_256",
+ "BriefDescription": "KM XTS AES 256",
+ "PublicDescription": "KM-XTS-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4111",
+ "EventName": "KM_XTS_ENCRYPTED_AES_128",
+ "BriefDescription": "KM XTS ENCRYPTED AES 128",
+ "PublicDescription": "KM-XTS-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4112",
+ "EventName": "KM_XTS_ENCRYPTED_AES_256",
+ "BriefDescription": "KM XTS ENCRYPTED AES 256",
+ "PublicDescription": "KM-XTS-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4113",
+ "EventName": "KMC_DEA",
+ "BriefDescription": "KMC DEA",
+ "PublicDescription": "KMC-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4114",
+ "EventName": "KMC_TDEA_128",
+ "BriefDescription": "KMC TDEA 128",
+ "PublicDescription": "KMC-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4115",
+ "EventName": "KMC_TDEA_192",
+ "BriefDescription": "KMC TDEA 192",
+ "PublicDescription": "KMC-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4116",
+ "EventName": "KMC_ENCRYPTED_DEA",
+ "BriefDescription": "KMC ENCRYPTED DEA",
+ "PublicDescription": "KMC-Encrypted-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4117",
+ "EventName": "KMC_ENCRYPTED_TDEA_128",
+ "BriefDescription": "KMC ENCRYPTED TDEA 128",
+ "PublicDescription": "KMC-Encrypted-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4118",
+ "EventName": "KMC_ENCRYPTED_TDEA_192",
+ "BriefDescription": "KMC ENCRYPTED TDEA 192",
+ "PublicDescription": "KMC-Encrypted-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4119",
+ "EventName": "KMC_AES_128",
+ "BriefDescription": "KMC AES 128",
+ "PublicDescription": "KMC-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4120",
+ "EventName": "KMC_AES_192",
+ "BriefDescription": "KMC AES 192",
+ "PublicDescription": "KMC-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4121",
+ "EventName": "KMC_AES_256",
+ "BriefDescription": "KMC AES 256",
+ "PublicDescription": "KMC-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4122",
+ "EventName": "KMC_ENCRYPTED_AES_128",
+ "BriefDescription": "KMC ENCRYPTED AES 128",
+ "PublicDescription": "KMC-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4123",
+ "EventName": "KMC_ENCRYPTED_AES_192",
+ "BriefDescription": "KMC ENCRYPTED AES 192",
+ "PublicDescription": "KMC-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4124",
+ "EventName": "KMC_ENCRYPTED_AES_256",
+ "BriefDescription": "KMC ENCRYPTED AES 256",
+ "PublicDescription": "KMC-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4125",
+ "EventName": "KMC_PRNG",
+ "BriefDescription": "KMC PRNG",
+ "PublicDescription": "KMC-PRNG function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4126",
+ "EventName": "KMA_GCM_AES_128",
+ "BriefDescription": "KMA GCM AES 128",
+ "PublicDescription": "KMA-GCM-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4127",
+ "EventName": "KMA_GCM_AES_192",
+ "BriefDescription": "KMA GCM AES 192",
+ "PublicDescription": "KMA-GCM-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4128",
+ "EventName": "KMA_GCM_AES_256",
+ "BriefDescription": "KMA GCM AES 256",
+ "PublicDescription": "KMA-GCM-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4129",
+ "EventName": "KMA_GCM_ENCRYPTED_AES_128",
+ "BriefDescription": "KMA GCM ENCRYPTED AES 128",
+ "PublicDescription": "KMA-GCM-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4130",
+ "EventName": "KMA_GCM_ENCRYPTED_AES_192",
+ "BriefDescription": "KMA GCM ENCRYPTED AES 192",
+ "PublicDescription": "KMA-GCM-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4131",
+ "EventName": "KMA_GCM_ENCRYPTED_AES_256",
+ "BriefDescription": "KMA GCM ENCRYPTED AES 256",
+ "PublicDescription": "KMA-GCM-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4132",
+ "EventName": "KMF_DEA",
+ "BriefDescription": "KMF DEA",
+ "PublicDescription": "KMF-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4133",
+ "EventName": "KMF_TDEA_128",
+ "BriefDescription": "KMF TDEA 128",
+ "PublicDescription": "KMF-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4134",
+ "EventName": "KMF_TDEA_192",
+ "BriefDescription": "KMF TDEA 192",
+ "PublicDescription": "KMF-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4135",
+ "EventName": "KMF_ENCRYPTED_DEA",
+ "BriefDescription": "KMF ENCRYPTED DEA",
+ "PublicDescription": "KMF-Encrypted-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4136",
+ "EventName": "KMF_ENCRYPTED_TDEA_128",
+ "BriefDescription": "KMF ENCRYPTED TDEA 128",
+ "PublicDescription": "KMF-Encrypted-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4137",
+ "EventName": "KMF_ENCRYPTED_TDEA_192",
+ "BriefDescription": "KMF ENCRYPTED TDEA 192",
+ "PublicDescription": "KMF-Encrypted-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4138",
+ "EventName": "KMF_AES_128",
+ "BriefDescription": "KMF AES 128",
+ "PublicDescription": "KMF-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4139",
+ "EventName": "KMF_AES_192",
+ "BriefDescription": "KMF AES 192",
+ "PublicDescription": "KMF-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4140",
+ "EventName": "KMF_AES_256",
+ "BriefDescription": "KMF AES 256",
+ "PublicDescription": "KMF-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4141",
+ "EventName": "KMF_ENCRYPTED_AES_128",
+ "BriefDescription": "KMF ENCRYPTED AES 128",
+ "PublicDescription": "KMF-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4142",
+ "EventName": "KMF_ENCRYPTED_AES_192",
+ "BriefDescription": "KMF ENCRYPTED AES 192",
+ "PublicDescription": "KMF-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4143",
+ "EventName": "KMF_ENCRYPTED_AES_256",
+ "BriefDescription": "KMF ENCRYPTED AES 256",
+ "PublicDescription": "KMF-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4144",
+ "EventName": "KMCTR_DEA",
+ "BriefDescription": "KMCTR DEA",
+ "PublicDescription": "KMCTR-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4145",
+ "EventName": "KMCTR_TDEA_128",
+ "BriefDescription": "KMCTR TDEA 128",
+ "PublicDescription": "KMCTR-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4146",
+ "EventName": "KMCTR_TDEA_192",
+ "BriefDescription": "KMCTR TDEA 192",
+ "PublicDescription": "KMCTR-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4147",
+ "EventName": "KMCTR_ENCRYPTED_DEA",
+ "BriefDescription": "KMCTR ENCRYPTED DEA",
+ "PublicDescription": "KMCTR-Encrypted-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4148",
+ "EventName": "KMCTR_ENCRYPTED_TDEA_128",
+ "BriefDescription": "KMCTR ENCRYPTED TDEA 128",
+ "PublicDescription": "KMCTR-Encrypted-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4149",
+ "EventName": "KMCTR_ENCRYPTED_TDEA_192",
+ "BriefDescription": "KMCTR ENCRYPTED TDEA 192",
+ "PublicDescription": "KMCTR-Encrypted-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4150",
+ "EventName": "KMCTR_AES_128",
+ "BriefDescription": "KMCTR AES 128",
+ "PublicDescription": "KMCTR-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4151",
+ "EventName": "KMCTR_AES_192",
+ "BriefDescription": "KMCTR AES 192",
+ "PublicDescription": "KMCTR-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4152",
+ "EventName": "KMCTR_AES_256",
+ "BriefDescription": "KMCTR AES 256",
+ "PublicDescription": "KMCTR-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4153",
+ "EventName": "KMCTR_ENCRYPTED_AES_128",
+ "BriefDescription": "KMCTR ENCRYPTED AES 128",
+ "PublicDescription": "KMCTR-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4154",
+ "EventName": "KMCTR_ENCRYPTED_AES_192",
+ "BriefDescription": "KMCTR ENCRYPTED AES 192",
+ "PublicDescription": "KMCTR-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4155",
+ "EventName": "KMCTR_ENCRYPTED_AES_256",
+ "BriefDescription": "KMCTR ENCRYPTED AES 256",
+ "PublicDescription": "KMCTR-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4156",
+ "EventName": "KMO_DEA",
+ "BriefDescription": "KMO DEA",
+ "PublicDescription": "KMO-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4157",
+ "EventName": "KMO_TDEA_128",
+ "BriefDescription": "KMO TDEA 128",
+ "PublicDescription": "KMO-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4158",
+ "EventName": "KMO_TDEA_192",
+ "BriefDescription": "KMO TDEA 192",
+ "PublicDescription": "KMO-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4159",
+ "EventName": "KMO_ENCRYPTED_DEA",
+ "BriefDescription": "KMO ENCRYPTED DEA",
+ "PublicDescription": "KMO-Encrypted-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4160",
+ "EventName": "KMO_ENCRYPTED_TDEA_128",
+ "BriefDescription": "KMO ENCRYPTED TDEA 128",
+ "PublicDescription": "KMO-Encrypted-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4161",
+ "EventName": "KMO_ENCRYPTED_TDEA_192",
+ "BriefDescription": "KMO ENCRYPTED TDEA 192",
+ "PublicDescription": "KMO-Encrypted-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4162",
+ "EventName": "KMO_AES_128",
+ "BriefDescription": "KMO AES 128",
+ "PublicDescription": "KMO-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4163",
+ "EventName": "KMO_AES_192",
+ "BriefDescription": "KMO AES 192",
+ "PublicDescription": "KMO-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4164",
+ "EventName": "KMO_AES_256",
+ "BriefDescription": "KMO AES 256",
+ "PublicDescription": "KMO-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4165",
+ "EventName": "KMO_ENCRYPTED_AES_128",
+ "BriefDescription": "KMO ENCRYPTED AES 128",
+ "PublicDescription": "KMO-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4166",
+ "EventName": "KMO_ENCRYPTED_AES_192",
+ "BriefDescription": "KMO ENCRYPTED AES 192",
+ "PublicDescription": "KMO-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4167",
+ "EventName": "KMO_ENCRYPTED_AES_256",
+ "BriefDescription": "KMO ENCRYPTED AES 256",
+ "PublicDescription": "KMO-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4168",
+ "EventName": "KIMD_SHA_1",
+ "BriefDescription": "KIMD SHA 1",
+ "PublicDescription": "KIMD-SHA-1 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4169",
+ "EventName": "KIMD_SHA_256",
+ "BriefDescription": "KIMD SHA 256",
+ "PublicDescription": "KIMD-SHA-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4170",
+ "EventName": "KIMD_SHA_512",
+ "BriefDescription": "KIMD SHA 512",
+ "PublicDescription": "KIMD-SHA-512 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4171",
+ "EventName": "KIMD_SHA3_224",
+ "BriefDescription": "KIMD SHA3 224",
+ "PublicDescription": "KIMD-SHA3-224 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4172",
+ "EventName": "KIMD_SHA3_256",
+ "BriefDescription": "KIMD SHA3 256",
+ "PublicDescription": "KIMD-SHA3-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4173",
+ "EventName": "KIMD_SHA3_384",
+ "BriefDescription": "KIMD SHA3 384",
+ "PublicDescription": "KIMD-SHA3-384 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4174",
+ "EventName": "KIMD_SHA3_512",
+ "BriefDescription": "KIMD SHA3 512",
+ "PublicDescription": "KIMD-SHA3-512 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4175",
+ "EventName": "KIMD_SHAKE_128",
+ "BriefDescription": "KIMD SHAKE 128",
+ "PublicDescription": "KIMD-SHAKE-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4176",
+ "EventName": "KIMD_SHAKE_256",
+ "BriefDescription": "KIMD SHAKE 256",
+ "PublicDescription": "KIMD-SHAKE-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4177",
+ "EventName": "KIMD_GHASH",
+ "BriefDescription": "KIMD GHASH",
+ "PublicDescription": "KIMD-GHASH function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4178",
+ "EventName": "KLMD_SHA_1",
+ "BriefDescription": "KLMD SHA 1",
+ "PublicDescription": "KLMD-SHA-1 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4179",
+ "EventName": "KLMD_SHA_256",
+ "BriefDescription": "KLMD SHA 256",
+ "PublicDescription": "KLMD-SHA-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4180",
+ "EventName": "KLMD_SHA_512",
+ "BriefDescription": "KLMD SHA 512",
+ "PublicDescription": "KLMD-SHA-512 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4181",
+ "EventName": "KLMD_SHA3_224",
+ "BriefDescription": "KLMD SHA3 224",
+ "PublicDescription": "KLMD-SHA3-224 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4182",
+ "EventName": "KLMD_SHA3_256",
+ "BriefDescription": "KLMD SHA3 256",
+ "PublicDescription": "KLMD-SHA3-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4183",
+ "EventName": "KLMD_SHA3_384",
+ "BriefDescription": "KLMD SHA3 384",
+ "PublicDescription": "KLMD-SHA3-384 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4184",
+ "EventName": "KLMD_SHA3_512",
+ "BriefDescription": "KLMD SHA3 512",
+ "PublicDescription": "KLMD-SHA3-512 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4185",
+ "EventName": "KLMD_SHAKE_128",
+ "BriefDescription": "KLMD SHAKE 128",
+ "PublicDescription": "KLMD-SHAKE-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4186",
+ "EventName": "KLMD_SHAKE_256",
+ "BriefDescription": "KLMD SHAKE 256",
+ "PublicDescription": "KLMD-SHAKE-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4187",
+ "EventName": "KMAC_DEA",
+ "BriefDescription": "KMAC DEA",
+ "PublicDescription": "KMAC-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4188",
+ "EventName": "KMAC_TDEA_128",
+ "BriefDescription": "KMAC TDEA 128",
+ "PublicDescription": "KMAC-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4189",
+ "EventName": "KMAC_TDEA_192",
+ "BriefDescription": "KMAC TDEA 192",
+ "PublicDescription": "KMAC-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4190",
+ "EventName": "KMAC_ENCRYPTED_DEA",
+ "BriefDescription": "KMAC ENCRYPTED DEA",
+ "PublicDescription": "KMAC-Encrypted-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4191",
+ "EventName": "KMAC_ENCRYPTED_TDEA_128",
+ "BriefDescription": "KMAC ENCRYPTED TDEA 128",
+ "PublicDescription": "KMAC-Encrypted-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4192",
+ "EventName": "KMAC_ENCRYPTED_TDEA_192",
+ "BriefDescription": "KMAC ENCRYPTED TDEA 192",
+ "PublicDescription": "KMAC-Encrypted-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4193",
+ "EventName": "KMAC_AES_128",
+ "BriefDescription": "KMAC AES 128",
+ "PublicDescription": "KMAC-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4194",
+ "EventName": "KMAC_AES_192",
+ "BriefDescription": "KMAC AES 192",
+ "PublicDescription": "KMAC-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4195",
+ "EventName": "KMAC_AES_256",
+ "BriefDescription": "KMAC AES 256",
+ "PublicDescription": "KMAC-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4196",
+ "EventName": "KMAC_ENCRYPTED_AES_128",
+ "BriefDescription": "KMAC ENCRYPTED AES 128",
+ "PublicDescription": "KMAC-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4197",
+ "EventName": "KMAC_ENCRYPTED_AES_192",
+ "BriefDescription": "KMAC ENCRYPTED AES 192",
+ "PublicDescription": "KMAC-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4198",
+ "EventName": "KMAC_ENCRYPTED_AES_256",
+ "BriefDescription": "KMAC ENCRYPTED AES 256",
+ "PublicDescription": "KMAC-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4199",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_DEA",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING DEA",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4200",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_TDEA_128",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING TDEA 128",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4201",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_TDEA_192",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING TDEA 192",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4202",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_DEA",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED DEA",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-DEA function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4203",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_TDEA_128",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED TDEA 128",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-TDEA-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4204",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_TDEA_192",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED TDEA 192",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-TDEA-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4205",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_AES_128",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING AES 128",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4206",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_AES_192",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING AES 192",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4207",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_AES_256",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING AES 256",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4208",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_128",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED AES 128",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4209",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_192",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED AES 192",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES-192 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4210",
+ "EventName": "PCC_COMPUTE_LAST_BLOCK_CMAC_USING_ENCRYPTED_AES_256",
+ "BriefDescription": "PCC COMPUTE LAST BLOCK CMAC USING ENCRYPTED AES 256",
+ "PublicDescription": "PCC-Compute-Last-Block-CMAC-Using-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4211",
+ "EventName": "PCC_COMPUTE_XTS_PARAMETER_USING_AES_128",
+ "BriefDescription": "PCC COMPUTE XTS PARAMETER USING AES 128",
+ "PublicDescription": "PCC-Compute-XTS-Parameter-Using-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4212",
+ "EventName": "PCC_COMPUTE_XTS_PARAMETER_USING_AES_256",
+ "BriefDescription": "PCC COMPUTE XTS PARAMETER USING AES 256",
+ "PublicDescription": "PCC-Compute-XTS-Parameter-Using-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4213",
+ "EventName": "PCC_COMPUTE_XTS_PARAMETER_USING_ENCRYPTED_AES_128",
+ "BriefDescription": "PCC COMPUTE XTS PARAMETER USING ENCRYPTED AES 128",
+ "PublicDescription": "PCC-Compute-XTS-Parameter-Using-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4214",
+ "EventName": "PCC_COMPUTE_XTS_PARAMETER_USING_ENCRYPTED_AES_256",
+ "BriefDescription": "PCC COMPUTE XTS PARAMETER USING ENCRYPTED AES 256",
+ "PublicDescription": "PCC-Compute-XTS-Parameter-Using-Encrypted-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4215",
+ "EventName": "PCC_SCALAR_MULTIPLY_P256",
+ "BriefDescription": "PCC SCALAR MULTIPLY P256",
+ "PublicDescription": "PCC-Scalar-Multiply-P256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4216",
+ "EventName": "PCC_SCALAR_MULTIPLY_P384",
+ "BriefDescription": "PCC SCALAR MULTIPLY P384",
+ "PublicDescription": "PCC-Scalar-Multiply-P384 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4217",
+ "EventName": "PCC_SCALAR_MULTIPLY_P521",
+ "BriefDescription": "PCC SCALAR MULTIPLY P521",
+ "PublicDescription": "PCC-Scalar-Multiply-P521 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4218",
+ "EventName": "PCC_SCALAR_MULTIPLY_ED25519",
+ "BriefDescription": "PCC SCALAR MULTIPLY ED25519",
+ "PublicDescription": "PCC-Scalar-Multiply-Ed25519 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4219",
+ "EventName": "PCC_SCALAR_MULTIPLY_ED448",
+ "BriefDescription": "PCC SCALAR MULTIPLY ED448",
+ "PublicDescription": "PCC-Scalar-Multiply-Ed448 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4220",
+ "EventName": "PCC_SCALAR_MULTIPLY_X25519",
+ "BriefDescription": "PCC SCALAR MULTIPLY X25519",
+ "PublicDescription": "PCC-Scalar-Multiply-X25519 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4221",
+ "EventName": "PCC_SCALAR_MULTIPLY_X448",
+ "BriefDescription": "PCC SCALAR MULTIPLY X448",
+ "PublicDescription": "PCC-Scalar-Multiply-X448 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4222",
+ "EventName": "PRNO_SHA_512_DRNG",
+ "BriefDescription": "PRNO SHA 512 DRNG",
+ "PublicDescription": "PRNO-SHA-512-DRNG function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4223",
+ "EventName": "PRNO_TRNG_QUERY_RAW_TO_CONDITIONED_RATIO",
+ "BriefDescription": "PRNO TRNG QUERY RAW TO CONDITIONED RATIO",
+ "PublicDescription": "PRNO-TRNG-Query-Raw-to-Conditioned-Ratio function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4224",
+ "EventName": "PRNO_TRNG",
+ "BriefDescription": "PRNO TRNG",
+ "PublicDescription": "PRNO-TRNG function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4225",
+ "EventName": "KDSA_ECDSA_VERIFY_P256",
+ "BriefDescription": "KDSA ECDSA VERIFY P256",
+ "PublicDescription": "KDSA-ECDSA-Verify-P256 function ending with CC=0 or CC=2"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4226",
+ "EventName": "KDSA_ECDSA_VERIFY_P384",
+ "BriefDescription": "KDSA ECDSA VERIFY P384",
+ "PublicDescription": "KDSA-ECDSA-Verify-P384 function ending with CC=0 or CC=2"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4227",
+ "EventName": "KDSA_ECDSA_VERIFY_P521",
+ "BriefDescription": "KDSA ECDSA VERIFY P521",
+ "PublicDescription": "KDSA-ECDSA-Verify-P521 function ending with CC=0 or CC=2"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4228",
+ "EventName": "KDSA_ECDSA_SIGN_P256",
+ "BriefDescription": "KDSA ECDSA SIGN P256",
+ "PublicDescription": "KDSA-ECDSA-Sign-P256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4229",
+ "EventName": "KDSA_ECDSA_SIGN_P384",
+ "BriefDescription": "KDSA ECDSA SIGN P384",
+ "PublicDescription": "KDSA-ECDSA-Sign-P384 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4230",
+ "EventName": "KDSA_ECDSA_SIGN_P521",
+ "BriefDescription": "KDSA ECDSA SIGN P521",
+ "PublicDescription": "KDSA-ECDSA-Sign-P521 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4231",
+ "EventName": "KDSA_ENCRYPTED_ECDSA_SIGN_P256",
+ "BriefDescription": "KDSA ENCRYPTED ECDSA SIGN P256",
+ "PublicDescription": "KDSA-Encrypted-ECDSA-Sign-P256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4232",
+ "EventName": "KDSA_ENCRYPTED_ECDSA_SIGN_P384",
+ "BriefDescription": "KDSA ENCRYPTED ECDSA SIGN P384",
+ "PublicDescription": "KDSA-Encrypted-ECDSA-Sign-P384 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4233",
+ "EventName": "KDSA_ENCRYPTED_ECDSA_SIGN_P521",
+ "BriefDescription": "KDSA ENCRYPTED ECDSA SIGN P521",
+ "PublicDescription": "KDSA-Encrypted-ECDSA-Sign-P521 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4234",
+ "EventName": "KDSA_EDDSA_VERIFY_ED25519",
+ "BriefDescription": "KDSA EDDSA VERIFY ED25519",
+ "PublicDescription": "KDSA-EdDSA-Verify-Ed25519 function ending with CC=0 or CC=2"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4235",
+ "EventName": "KDSA_EDDSA_VERIFY_ED448",
+ "BriefDescription": "KDSA EDDSA VERIFY ED448",
+ "PublicDescription": "KDSA-EdDSA-Verify-Ed448 function ending with CC=0 or CC=2"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4236",
+ "EventName": "KDSA_EDDSA_SIGN_ED25519",
+ "BriefDescription": "KDSA EDDSA SIGN ED25519",
+ "PublicDescription": "KDSA-EdDSA-Sign-Ed25519 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4237",
+ "EventName": "KDSA_EDDSA_SIGN_ED448",
+ "BriefDescription": "KDSA EDDSA SIGN ED448",
+ "PublicDescription": "KDSA-EdDSA-Sign-Ed448 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4238",
+ "EventName": "KDSA_ENCRYPTED_EDDSA_SIGN_ED25519",
+ "BriefDescription": "KDSA ENCRYPTED EDDSA SIGN ED25519",
+ "PublicDescription": "KDSA-Encrypted-EdDSA-Sign-Ed25519 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4239",
+ "EventName": "KDSA_ENCRYPTED_EDDSA_SIGN_ED448",
+ "BriefDescription": "KDSA ENCRYPTED EDDSA SIGN ED448",
+ "PublicDescription": "KDSA-Encrypted-EdDSA-Sign-Ed448 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4240",
+ "EventName": "PCKMO_ENCRYPT_DEA_KEY",
+ "BriefDescription": "PCKMO ENCRYPT DEA KEY",
+ "PublicDescription": "PCKMO-Encrypt-DEA-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4241",
+ "EventName": "PCKMO_ENCRYPT_TDEA_128_KEY",
+ "BriefDescription": "PCKMO ENCRYPT TDEA 128 KEY",
+ "PublicDescription": "PCKMO-Encrypt-TDEA-128-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4242",
+ "EventName": "PCKMO_ENCRYPT_TDEA_192_KEY",
+ "BriefDescription": "PCKMO ENCRYPT TDEA 192 KEY",
+ "PublicDescription": "PCKMO-Encrypt-TDEA-192-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4243",
+ "EventName": "PCKMO_ENCRYPT_AES_128_KEY",
+ "BriefDescription": "PCKMO ENCRYPT AES 128 KEY",
+ "PublicDescription": "PCKMO-Encrypt-AES-128-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4244",
+ "EventName": "PCKMO_ENCRYPT_AES_192_KEY",
+ "BriefDescription": "PCKMO ENCRYPT AES 192 KEY",
+ "PublicDescription": "PCKMO-Encrypt-AES-192-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4245",
+ "EventName": "PCKMO_ENCRYPT_AES_256_KEY",
+ "BriefDescription": "PCKMO ENCRYPT AES 256 KEY",
+ "PublicDescription": "PCKMO-Encrypt-AES-256-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4246",
+ "EventName": "PCKMO_ENCRYPT_ECC_P256_KEY",
+ "BriefDescription": "PCKMO ENCRYPT ECC P256 KEY",
+ "PublicDescription": "PCKMO-Encrypt-ECC-P256-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4247",
+ "EventName": "PCKMO_ENCRYPT_ECC_P384_KEY",
+ "BriefDescription": "PCKMO ENCRYPT ECC P384 KEY",
+ "PublicDescription": "PCKMO-Encrypt-ECC-P384-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4248",
+ "EventName": "PCKMO_ENCRYPT_ECC_P521_KEY",
+ "BriefDescription": "PCKMO ENCRYPT ECC P521 KEY",
+ "PublicDescription": "PCKMO-Encrypt-ECC-P521-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4249",
+ "EventName": "PCKMO_ENCRYPT_ECC_ED25519_KEY",
+ "BriefDescription": "PCKMO ENCRYPT ECC ED25519 KEY",
+ "PublicDescription": "PCKMO-Encrypt-ECC-Ed25519-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4250",
+ "EventName": "PCKMO_ENCRYPT_ECC_ED448_KEY",
+ "BriefDescription": "PCKMO ENCRYPT ECC ED448 KEY",
+ "PublicDescription": "PCKMO-Encrypt-ECC-Ed448-key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4251",
+ "EventName": "IBM_RESERVED_155",
+ "BriefDescription": "IBM RESERVED_155",
+ "PublicDescription": "Reserved for IBM use"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4252",
+ "EventName": "IBM_RESERVED_156",
+ "BriefDescription": "IBM RESERVED_156",
+ "PublicDescription": "Reserved for IBM use"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4253",
+ "EventName": "KM_FULL_XTS_AES_128",
+ "BriefDescription": "KM FULL XTS AES 128",
+ "PublicDescription": "KM-Full-XTS-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4254",
+ "EventName": "KM_FULL_XTS_AES_256",
+ "BriefDescription": "KM FULL XTS AES 256",
+ "PublicDescription": "KM-Full-XTS-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4255",
+ "EventName": "KM_FULL_XTS_ENCRYPTED_AES_128",
+ "BriefDescription": "KM FULL XTS ENCRYPTED AES 128",
+ "PublicDescription": "KM-Full-XTS-Encrypted-AES-128 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4256",
+ "EventName": "KM_FULL_XTS_ENCRYPTED_AES_256",
+ "BriefDescription": "KM FULL XTS ENCRYPTED AES 256",
+ "PublicDescription": "KM-FULL-XTS-ENCRYPTED-AES-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4257",
+ "EventName": "KMAC_HMAC_SHA_224",
+ "BriefDescription": "KMAC HMAC SHA 224",
+ "PublicDescription": "KMAC-HMAC-SHA-224 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4258",
+ "EventName": "KMAC_HMAC_SHA_256",
+ "BriefDescription": "KMAC HMAC SHA 256",
+ "PublicDescription": "KMAC-HMAC-SHA-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4259",
+ "EventName": "KMAC_HMAC_SHA_384",
+ "BriefDescription": "KMAC HMAC SHA 384",
+ "PublicDescription": "KMAC-HMAC-SHA-384 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4260",
+ "EventName": "KMAC_HMAC_SHA_512",
+ "BriefDescription": "KMAC HMAC SHA 512",
+ "PublicDescription": "KMAC-HMAC-SHA-512 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4261",
+ "EventName": "KMAC_HMAC_ENCRYPTED_SHA_224",
+ "BriefDescription": "KMAC HMAC ENCRYPTED SHA 224",
+ "PublicDescription": "KMAC-HMAC-Encrypted-SHA-224 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4262",
+ "EventName": "KMAC_HMAC_ENCRYPTED_SHA_256",
+ "BriefDescription": "KMAC HMAC ENCRYPTED SHA 256",
+ "PublicDescription": "KMAC-HMAC-Encrypted-SHA-256 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4263",
+ "EventName": "KMAC_HMAC_ENCRYPTED_SHA_384",
+ "BriefDescription": "KMAC HMAC ENCRYPTED SHA 384",
+ "PublicDescription": "KMAC-HMAC-Encrypted-SHA-384 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4264",
+ "EventName": "KMAC_HMAC_ENCRYPTED_SHA_512",
+ "BriefDescription": "KMAC HMAC ENCRYPTED SHA 512",
+ "PublicDescription": "KMAC-HMAC-Encrypted-SHA-512 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4265",
+ "EventName": "PCKMO_ENCRYPT_HMAC_512_KEY",
+ "BriefDescription": "PCKMO ENCRYPT HMAC 512 KEY",
+ "PublicDescription": "PCKMO-Encrypt-HMAC-512-Key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4266",
+ "EventName": "PCKMO_ENCRYPT_HMAC_1024_KEY",
+ "BriefDescription": "PCKMO ENCRYPT HMAC 1024 KEY",
+ "PublicDescription": "PCKMO-Encrypt-HMAC-1024-Key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4267",
+ "EventName": "PCKMO_ENCRYPT_AES_XTS_128",
+ "BriefDescription": "PCKMO ENCRYPT AES XTS Double Key 128",
+ "PublicDescription": "PCKMO-ENCRYPT-AES-XTS-128 Double Key function"
+ },
+ {
+ "Unit": "PAI-CRYPTO",
+ "EventCode": "4268",
+ "EventName": "PCKMO_ENCRYPT_AES_XTS_256",
+ "BriefDescription": "PCKMO ENCRYPT AES XTS Double Key 256",
+ "PublicDescription": "PCKMO-ENCRYPT-AES-XTS-256 Double Key function"
+ }
+]
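
Every PAI-CRYPTO event above follows the same <CPACF instruction>-<function> naming scheme, so the list can be summarised per instruction family (KM, KMC, KMA, KMF, KMCTR, KMO, KIMD, KLMD, KMAC, PCC, PRNO, KDSA, PCKMO) straight from the EventName field. A small sketch in Python, assuming the file is read from the path added by this patch:

import json
from collections import Counter

# Path as added by this patch; adjust to the local kernel tree if needed.
PATH = "tools/perf/pmu-events/arch/s390/cf_z17/pai_crypto.json"

with open(PATH) as f:
    events = json.load(f)

# Group event definitions by the leading component of EventName
# (KM, KMC, KMA, ... plus CRYPTO_ALL and the IBM_RESERVED_* entries).
families = Counter(ev["EventName"].split("_")[0] for ev in events)

for family, count in sorted(families.items()):
    print(f"{family:12s} {count:3d} events")
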
diff --git a/tools/perf/pmu-events/arch/s390/cf_z17/pai_ext.json b/tools/perf/pmu-events/arch/s390/cf_z17/pai_ext.json
new file mode 100644
index 000000000000..935e9f5763b4
--- /dev/null
+++ b/tools/perf/pmu-events/arch/s390/cf_z17/pai_ext.json
@@ -0,0 +1,261 @@
+[
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6144",
+ "EventName": "NNPA_ALL",
+ "BriefDescription": "NNPA ALL",
+ "PublicDescription": "Sums of all non zero NNPA counters"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6145",
+ "EventName": "NNPA_ADD",
+ "BriefDescription": "NNPA ADD function",
+ "PublicDescription": "NNPA-ADD function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6146",
+ "EventName": "NNPA_SUB",
+ "BriefDescription": "NNPA SUB function",
+ "PublicDescription": "NNPA-SUB function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6147",
+ "EventName": "NNPA_MUL",
+ "BriefDescription": "NNPA MUL function",
+ "PublicDescription": "NNPA-MUL function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6148",
+ "EventName": "NNPA_DIV",
+ "BriefDescription": "NNPA_DIV function",
+ "PublicDescription": "NNPA-DIV function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6149",
+ "EventName": "NNPA_MIN",
+ "BriefDescription": "NNPA MIN function",
+ "PublicDescription": "NNPA-MIN function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6150",
+ "EventName": "NNPA_MAX",
+ "BriefDescription": "NNPA MAX function",
+ "PublicDescription": "NNPA-MAX function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6151",
+ "EventName": "NNPA_LOG",
+ "BriefDescription": "NNPA LOG function",
+ "PublicDescription": "NNPA Log function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6152",
+ "EventName": "NNPA_EXP",
+ "BriefDescription": "NNPA EXP function",
+ "PublicDescription": "NNPA-EXP function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6153",
+ "EventName": "NNPA_IBM_RESERVED_9",
+ "BriefDescription": "Reserved for IBM use",
+ "PublicDescription": "Reserved for IBM use"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6154",
+ "EventName": "NNPA_RELU",
+ "BriefDescription": "NNPA RELU function",
+ "PublicDescription": "NNPA-RELU function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6155",
+ "EventName": "NNPA_TANH",
+ "BriefDescription": "NNPA TANH function",
+ "PublicDescription": "NNPA-TANH function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6156",
+ "EventName": "NNPA_SIGMOID",
+ "BriefDescription": "NNPA SIGMOID function",
+ "PublicDescription": "NNPA-SIGMOID function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6157",
+ "EventName": "NNPA_SOFTMAX",
+ "BriefDescription": "NNPA SOFTMAX function",
+ "PublicDescription": "NNPA-SOFTMAX function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6158",
+ "EventName": "NNPA_BATCHNORM",
+ "BriefDescription": "NNPA BATCHNORM function",
+ "PublicDescription": "NNPA-BATCHNORM function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6159",
+ "EventName": "NNPA_MAXPOOL2D",
+ "BriefDescription": "NNPA MAXPOOL2D function",
+ "PublicDescription": "NNPA-MAXPOOL2D function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6160",
+ "EventName": "NNPA_AVGPOOL2D",
+ "BriefDescription": "NNPA_AVGPOOL2D function",
+ "PublicDescription": "NNPA-AVGPOOL2D function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6161",
+ "EventName": "NNPA_LSTMACT",
+ "BriefDescription": "NNPA LSTMACT function",
+ "PublicDescription": "NNPA-LSTMACT function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6162",
+ "EventName": "NNPA_GRUACT",
+ "BriefDescription": "NNPA GRUACT function",
+ "PublicDescription": "NNPA-GRUACT function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6163",
+ "EventName": "NNPA_CONVOLUTION",
+ "BriefDescription": "NNPA CONVOLUTION function",
+ "PublicDescription": "NNPA-CONVOLUTION function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6164",
+ "EventName": "NNPA_MATMUL_OP",
+ "BriefDescription": "NNPA MATMUL OP function",
+ "PublicDescription": "NNPA-MATMUL-OP function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6165",
+ "EventName": "NNPA_MATMUL_OP_BCAST23",
+ "BriefDescription": "NNPA MATMUL OP BCAST23 function",
+ "PublicDescription": "NNPA-MATMUL-OP-BCAST23 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6166",
+ "EventName": "NNPA_SMALLBATCH",
+ "BriefDescription": "NNPA Counter 22",
+ "PublicDescription": "NNPA function with conditions as described in Principles of Operation"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6167",
+ "EventName": "NNPA_LARGEDIM",
+ "BriefDescription": "NNPA Counter 23",
+ "PublicDescription": "NNPA function with conditions as described in Principles of Operation"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6168",
+ "EventName": "NNPA_SMALLTENSOR",
+ "BriefDescription": "NNPA Counter 24",
+ "PublicDescription": "NNPA function with conditions as described in Principles of Operation"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6169",
+ "EventName": "NNPA_1MFRAME",
+ "BriefDescription": "NNPA Counter 25",
+ "PublicDescription": "NNPA function with conditions as described in Principles of Operation"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6170",
+ "EventName": "NNPA_2GFRAME",
+ "BriefDescription": "NNPA Counter 26",
+ "PublicDescription": "NNPA function with conditions as described in Principles of Operation"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6171",
+ "EventName": "NNPA_ACCESSEXCEPT",
+ "BriefDescription": "NNPA Counter 27",
+ "PublicDescription": "NNPA function with conditions as described in Principles of Operation"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6172",
+ "EventName": "NNPA_TRANSFORM",
+ "BriefDescription": "NNPA-TRANSFORM function",
+ "PublicDescription": "NNPA-TRANSFORM function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6173",
+ "EventName": "NNPA_GELU",
+ "BriefDescription": "NNPA-GELU function",
+ "PublicDescription": "NNPA-GELU function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6174",
+ "EventName": "NNPA_MOMENTS",
+ "BriefDescription": "NNPA-MOMENTS function",
+ "PublicDescription": "NNPA-MOMENTS function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6175",
+ "EventName": "NNPA_LAYERNORM",
+ "BriefDescription": "NNPA-LAYERNORM function",
+ "PublicDescription": "NNPA-LAYERNORM function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6176",
+ "EventName": "NNPA_MATMUL_OP_BCAST1",
+ "BriefDescription": "NNPA-MATMUL_OP_BCAST1 function",
+ "PublicDescription": "NNPA-MATMUL-OP-BCAST1 function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6177",
+ "EventName": "NNPA_SQRT",
+ "BriefDescription": "NNPA-SQRT function",
+ "PublicDescription": "NNPA-SQRT function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6178",
+ "EventName": "NNPA_INVSQRT",
+ "BriefDescription": "NNPA-INVSQRT function",
+ "PublicDescription": "NNPA-INVSQRT function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6179",
+ "EventName": "NNPA_NORM",
+ "BriefDescription": "NNPA-NORM function",
+ "PublicDescription": "NNPA-NORM function ending with CC=0"
+ },
+ {
+ "Unit": "PAI-EXT",
+ "EventCode": "6180",
+ "EventName": "NNPA_REDUCE",
+ "BriefDescription": "NNPA-REDUCE function",
+ "PublicDescription": "NNPA-REDUCE function ending with CC=0"
+ }
+]
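
The PAI event files above all use the same five-field record (Unit, EventCode, EventName, BriefDescription, PublicDescription), with the event code stored as a decimal string. A quick consistency check, sketched in Python against the two paths added by this patch, is one way to catch a missing field or a duplicate event name before the files are consumed by the perf pmu-events build:

import json

REQUIRED = {"Unit", "EventCode", "EventName", "BriefDescription", "PublicDescription"}
FILES = [
    "tools/perf/pmu-events/arch/s390/cf_z17/pai_crypto.json",
    "tools/perf/pmu-events/arch/s390/cf_z17/pai_ext.json",
]

for path in FILES:
    with open(path) as f:
        events = json.load(f)
    names = set()
    for ev in events:
        missing = REQUIRED - ev.keys()
        assert not missing, f"{path}: {ev.get('EventName')} missing {missing}"
        int(ev["EventCode"])  # event codes are decimal strings in these files
        assert ev["EventName"] not in names, f"{path}: duplicate {ev['EventName']}"
        names.add(ev["EventName"])
    print(f"{path}: {len(events)} events OK")
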
diff --git a/tools/perf/pmu-events/arch/s390/cf_z17/transaction.json b/tools/perf/pmu-events/arch/s390/cf_z17/transaction.json
new file mode 100644
index 000000000000..74df533c8b6f
--- /dev/null
+++ b/tools/perf/pmu-events/arch/s390/cf_z17/transaction.json
@@ -0,0 +1,72 @@
+[
+ {
+ "BriefDescription": "Transaction count",
+ "MetricName": "transaction",
+ "MetricExpr": "TX_C_TEND + TX_NC_TEND + TX_NC_TABORT + TX_C_TABORT_SPECIAL + TX_C_TABORT_NO_SPECIAL if has_event(TX_C_TEND) else 0"
+ },
+ {
+ "BriefDescription": "Cycles per Instruction",
+ "MetricName": "cpi",
+ "MetricExpr": "CPU_CYCLES / INSTRUCTIONS if has_event(INSTRUCTIONS) else 0"
+ },
+ {
+ "BriefDescription": "Problem State Instruction Ratio",
+ "MetricName": "prbstate",
+ "MetricExpr": "(PROBLEM_STATE_INSTRUCTIONS / INSTRUCTIONS) * 100 if has_event(INSTRUCTIONS) else 0"
+ },
+ {
+ "BriefDescription": "Level One Miss per 100 Instructions",
+ "MetricName": "l1mp",
+ "MetricExpr": "((L1I_DIR_WRITES + L1D_DIR_WRITES) / INSTRUCTIONS) * 100 if has_event(INSTRUCTIONS) else 0"
+ },
+ {
+ "BriefDescription": "Percentage sourced from Level 2 cache",
+ "MetricName": "l2p",
+ "MetricExpr": "((DCW_REQ + DCW_REQ_IV + ICW_REQ + ICW_REQ_IV) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_REQ) else 0"
+ },
+ {
+ "BriefDescription": "Percentage sourced from Level 3 on same chip cache",
+ "MetricName": "l3p",
+ "MetricExpr": "((DCW_REQ_CHIP_HIT + DCW_ON_CHIP + DCW_ON_CHIP_IV + DCW_ON_CHIP_CHIP_HIT + ICW_REQ_CHIP_HIT + ICW_ON_CHIP + ICW_ON_CHIP_IV + ICW_ON_CHIP_CHIP_HIT) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_REQ_CHIP_HIT) else 0"
+ },
+ {
+ "BriefDescription": "Percentage sourced from Level 4 Local cache on same drawer",
+ "MetricName": "l4lp",
+ "MetricExpr": "((DCW_REQ_DRAWER_HIT + DCW_ON_CHIP_DRAWER_HIT + DCW_ON_MODULE + DCW_ON_DRAWER + IDCW_ON_MODULE_IV + IDCW_ON_MODULE_CHIP_HIT + IDCW_ON_MODULE_DRAWER_HIT + IDCW_ON_DRAWER_IV + IDCW_ON_DRAWER_CHIP_HIT + IDCW_ON_DRAWER_DRAWER_HIT + ICW_REQ_DRAWER_HIT + ICW_ON_CHIP_DRAWER_HIT + ICW_ON_MODULE + ICW_ON_DRAWER) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_REQ_DRAWER_HIT) else 0"
+ },
+ {
+ "BriefDescription": "Percentage sourced from Level 4 Remote cache on different book",
+ "MetricName": "l4rp",
+ "MetricExpr": "((DCW_OFF_DRAWER + IDCW_OFF_DRAWER_IV + IDCW_OFF_DRAWER_CHIP_HIT + IDCW_OFF_DRAWER_DRAWER_HIT + ICW_OFF_DRAWER) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_OFF_DRAWER) else 0"
+ },
+ {
+ "BriefDescription": "Percentage sourced from memory",
+ "MetricName": "memp",
+ "MetricExpr": "((DCW_ON_CHIP_MEMORY + DCW_ON_MODULE_MEMORY + DCW_ON_DRAWER_MEMORY + DCW_OFF_DRAWER_MEMORY) / (L1I_DIR_WRITES + L1D_DIR_WRITES)) * 100 if has_event(DCW_ON_CHIP_MEMORY) else 0"
+ },
+ {
+ "BriefDescription": "Cycles per Instructions from Finite cache/memory",
+ "MetricName": "finite_cpi",
+ "MetricExpr": "L1C_TLB2_MISSES / INSTRUCTIONS if has_event(L1C_TLB2_MISSES) else 0"
+ },
+ {
+ "BriefDescription": "Estimated Instruction Complexity CPI infinite Level 1",
+ "MetricName": "est_cpi",
+ "MetricExpr": "(CPU_CYCLES / INSTRUCTIONS) - (L1C_TLB2_MISSES / INSTRUCTIONS) if has_event(INSTRUCTIONS) else 0"
+ },
+ {
+ "BriefDescription": "Estimated Sourcing Cycles per Level 1 Miss",
+ "MetricName": "scpl1m",
+ "MetricExpr": "L1C_TLB2_MISSES / (L1I_DIR_WRITES + L1D_DIR_WRITES) if has_event(L1C_TLB2_MISSES) else 0"
+ },
+ {
+ "BriefDescription": "Estimated TLB CPU percentage of Total CPU",
+ "MetricName": "tlb_percent",
+ "MetricExpr": "((DTLB2_MISSES + ITLB2_MISSES) / CPU_CYCLES) * (L1C_TLB2_MISSES / (L1I_PENALTY_CYCLES + L1D_PENALTY_CYCLES)) * 100 if has_event(CPU_CYCLES) else 0"
+ },
+ {
+ "BriefDescription": "Estimated Cycles per TLB Miss",
+ "MetricName": "tlb_miss",
+ "MetricExpr": "((DTLB2_MISSES + ITLB2_MISSES) / (DTLB2_WRITES + ITLB2_WRITES)) * (L1C_TLB2_MISSES / (L1I_PENALTY_CYCLES + L1D_PENALTY_CYCLES)) if has_event(DTLB2_MISSES) else 0"
+ }
+]
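
The MetricExpr strings above use perf's metric expression syntax: has_event(X) guards against counters that are unavailable on the running machine, and the trailing "else 0" makes the whole metric collapse to zero in that case. As a rough illustration of the arithmetic, here is a hedged Python sketch that mirrors the cpi and l1mp formulas on invented counter readings (perf itself evaluates the real expressions):

# Hypothetical counter readings; in practice perf collects these and
# evaluates the MetricExpr strings directly.
c = {
    "CPU_CYCLES": 2_400_000_000,
    "INSTRUCTIONS": 1_600_000_000,
    "L1I_DIR_WRITES": 9_000_000,
    "L1D_DIR_WRITES": 21_000_000,
}

def has_event(name):
    # Stand-in for perf's has_event(): true if the counter was collected.
    return name in c

# "cpi": CPU_CYCLES / INSTRUCTIONS if has_event(INSTRUCTIONS) else 0
cpi = c["CPU_CYCLES"] / c["INSTRUCTIONS"] if has_event("INSTRUCTIONS") else 0

# "l1mp": ((L1I_DIR_WRITES + L1D_DIR_WRITES) / INSTRUCTIONS) * 100
l1mp = (((c["L1I_DIR_WRITES"] + c["L1D_DIR_WRITES"]) / c["INSTRUCTIONS"]) * 100
        if has_event("INSTRUCTIONS") else 0)

print(f"cpi  = {cpi:.2f} cycles per instruction")
print(f"l1mp = {l1mp:.2f} L1 misses per 100 instructions")
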
diff --git a/tools/perf/pmu-events/arch/s390/mapfile.csv b/tools/perf/pmu-events/arch/s390/mapfile.csv
index a918e1af77a5..6fdede50e7b2 100644
--- a/tools/perf/pmu-events/arch/s390/mapfile.csv
+++ b/tools/perf/pmu-events/arch/s390/mapfile.csv
@@ -5,4 +5,5 @@ Family-model,Version,Filename,EventType
^IBM.296[45].*[13]\.[1-5].[[:xdigit:]]+$,1,cf_z13,core
^IBM.390[67].*[13]\.[1-5].[[:xdigit:]]+$,3,cf_z14,core
^IBM.856[12].*3\.6.[[:xdigit:]]+$,3,cf_z15,core
-^IBM.393[12].*3\.7.[[:xdigit:]]+$,3,cf_z16,core
+^IBM.393[12].*$,3,cf_z16,core
+^IBM.917[56].*$,3,cf_z17,core
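
Each mapfile.csv row maps a regular expression over the CPU identification string onto an event directory, so the two rows here route machine types 3931/3932 to cf_z16 and 9175/9176 to cf_z17, with the new z16 row no longer pinning the version field the way the older rows do. A pattern can be sanity-checked with plain Python re; the sample cpuid strings below are hypothetical stand-ins for whatever perf's s390 get_cpuid() actually reports:

import re

# Rows touched by this patch (pattern, event directory).
ROWS = [
    (r"^IBM.393[12].*$", "cf_z16"),
    (r"^IBM.917[56].*$", "cf_z17"),
]

# Hypothetical cpuid strings; the exact format comes from the s390
# perf code and may differ on a real system.
samples = ["IBM,3931,704,A01,3.7,002f", "IBM,9175,702,B02,3.8,0031"]

for cpuid in samples:
    matches = [d for pat, d in ROWS if re.match(pat, cpuid)]
    print(f"{cpuid} -> {matches or ['no match']}")
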
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
index bbfa3883e533..cae7c0cf02f2 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
@@ -1,75 +1,61 @@
[
{
"BriefDescription": "C10 residency percent per package",
- "MetricExpr": "cstate_pkg@c10\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c10\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C10_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C1 residency percent per core",
- "MetricExpr": "cstate_core@c1\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c1\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C1_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
- "MetricGroup": "Power",
- "MetricName": "C7_Pkg_Residency",
- "ScaleUnit": "100%"
- },
- {
"BriefDescription": "C8 residency percent per package",
- "MetricExpr": "cstate_pkg@c8\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c8\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C8_Pkg_Residency",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "C9 residency percent per package",
- "MetricExpr": "cstate_pkg@c9\\-residency@ / TSC",
- "MetricGroup": "Power",
- "MetricName": "C9_Pkg_Residency",
- "ScaleUnit": "100%"
- },
- {
"BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
"MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
"MetricGroup": "smi",
@@ -113,42 +99,30 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to certain allocation restrictions.",
- "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
- "MetricName": "tma_alloc_restriction",
- "MetricThreshold": "tma_alloc_restriction > 0.1",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to certain allocation restrictions",
+ "MetricExpr": "tma_core_bound",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_allocation_restriction",
+ "MetricThreshold": "tma_allocation_restriction > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.ALL@ / tma_info_core_slots",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.ALL@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
"MetricGroup": "Default;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.1",
"MetricgroupNoGroup": "TopdownL1;Default",
- "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count. The rest of these subevents count backend stalls, in cycles, due to an outstanding request which is memory bound vs core bound. The subevents are not slot based events and therefore can not be precisely added or subtracted from the Backend_Bound_Aux subevents which are slot based.",
- "ScaleUnit": "100%",
- "Unit": "cpu_atom"
- },
- {
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
- "DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "tma_backend_bound",
- "MetricGroup": "Default;TopdownL1;tma_L1_group",
- "MetricName": "tma_backend_bound_aux",
- "MetricThreshold": "tma_backend_bound_aux > 0.2",
- "MetricgroupNoGroup": "TopdownL1;Default",
- "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that UOPS must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count. All of these subevents count backend stalls, in slots, due to a resource limitation. These are not cycle based events and therefore can not be precisely added or subtracted from the Backend_Bound subevents which are cycle based. These subevents are supplementary to Backend_Bound and can be used to analyze results from a resource perspective at allocation.",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "(tma_info_core_slots - (cpu_atom@TOPDOWN_FE_BOUND.ALL@ + cpu_atom@TOPDOWN_BE_BOUND.ALL@ + cpu_atom@TOPDOWN_RETIRING.ALL@)) / tma_info_core_slots",
+ "MetricExpr": "(5 * cpu_atom@CPU_CLK_UNHALTED.CORE@ - (cpu_atom@TOPDOWN_FE_BOUND.ALL@ + cpu_atom@TOPDOWN_BE_BOUND.ALL@ + cpu_atom@TOPDOWN_RETIRING.ALL@)) / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
"MetricGroup": "Default;TopdownL1;tma_L1_group",
"MetricName": "tma_bad_speculation",
"MetricThreshold": "tma_bad_speculation > 0.15",
@@ -158,644 +132,584 @@
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of uops that are not from the microsequencer.",
- "MetricExpr": "(cpu_atom@TOPDOWN_RETIRING.ALL@ - cpu_atom@UOPS_RETIRED.MS@) / tma_info_core_slots",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_retiring_group",
- "MetricName": "tma_base",
- "MetricThreshold": "tma_base > 0.6",
- "MetricgroupNoGroup": "TopdownL2",
- "ScaleUnit": "100%",
- "Unit": "cpu_atom"
- },
- {
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.BRANCH_DETECT@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.BRANCH_DETECT@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
"MetricName": "tma_branch_detect",
- "MetricThreshold": "tma_branch_detect > 0.05",
- "PublicDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "MetricThreshold": "tma_branch_detect > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "PublicDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to branch mispredicts.",
- "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.MISPREDICT@ / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to branch mispredicts",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.MISPREDICT@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
"MetricName": "tma_branch_mispredicts",
- "MetricThreshold": "tma_branch_mispredicts > 0.05",
+ "MetricThreshold": "tma_branch_mispredicts > 0.05 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.BRANCH_RESTEER@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.BRANCH_RESTEER@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
"MetricName": "tma_branch_resteer",
- "MetricThreshold": "tma_branch_resteer > 0.05",
+ "MetricThreshold": "tma_branch_resteer > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.CISC@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.CISC@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
"MetricName": "tma_cisc",
- "MetricThreshold": "tma_cisc > 0.05",
+ "MetricThreshold": "tma_cisc > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles due to backend bound stalls that are core execution bound and not attributed to outstanding demand load or store stalls.",
- "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+ "BriefDescription": "Counts the number of cycles due to backend bound stalls that are bounded by core restrictions and not attributed to an outstanding load or stores, or resource limitation",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
"MetricName": "tma_core_bound",
- "MetricThreshold": "tma_core_bound > 0.1",
+ "MetricThreshold": "tma_core_bound > 0.1 & tma_backend_bound > 0.1",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to decode stalls.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.DECODE@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to decode stalls.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.DECODE@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
"MetricName": "tma_decode",
- "MetricThreshold": "tma_decode > 0.05",
+ "MetricThreshold": "tma_decode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to memory disambiguation.",
- "MetricExpr": "tma_nuke * (cpu_atom@MACHINE_CLEARS.DISAMBIGUATION@ / cpu_atom@MACHINE_CLEARS.SLOW@)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_disambiguation",
- "MetricThreshold": "tma_disambiguation > 0.02",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode, classified as a fast nuke, due to memory ordering, memory disambiguation and memory renaming",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.FASTNUKE@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_fast_nuke",
+ "MetricThreshold": "tma_fast_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in DRAM or MMIO (Non-DRAM).",
- "MetricExpr": "cpu_atom@MEM_BOUND_STALLS.LOAD_DRAM_HIT@ / tma_info_core_clks - max((cpu_atom@MEM_BOUND_STALLS.LOAD@ - cpu_atom@LD_HEAD.L1_MISS_AT_RET@) / tma_info_core_clks, 0) * cpu_atom@MEM_BOUND_STALLS.LOAD_DRAM_HIT@ / cpu_atom@MEM_BOUND_STALLS.LOAD@",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_dram_bound",
- "MetricThreshold": "tma_dram_bound > 0.1",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to frontend stalls.",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ALL@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "Default;TopdownL1;tma_L1_group",
+ "MetricName": "tma_frontend_bound",
+ "MetricThreshold": "tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL1;Default",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear classified as a fast nuke due to memory ordering, memory disambiguation and memory renaming.",
- "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.FASTNUKE@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
- "MetricName": "tma_fast_nuke",
- "MetricThreshold": "tma_fast_nuke > 0.05",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ICACHE@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_icache_misses",
+ "MetricThreshold": "tma_icache_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH@ / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
- "MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1",
+ "MetricName": "tma_ifetch_bandwidth",
+ "MetricThreshold": "tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_LATENCY@ / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions due to icache misses, itlb misses, branch detection, and resteer limitations.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_LATENCY@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
- "MetricName": "tma_fetch_latency",
- "MetricThreshold": "tma_fetch_latency > 0.15",
+ "MetricName": "tma_ifetch_latency",
+ "MetricThreshold": "tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to FP assists.",
- "MetricExpr": "tma_nuke * (cpu_atom@MACHINE_CLEARS.FP_ASSIST@ / cpu_atom@MACHINE_CLEARS.SLOW@)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_fp_assist",
- "MetricThreshold": "tma_fp_assist > 0.02",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of time that retirement is stalled due to a first level data TLB miss",
+ "MetricExpr": "100 * (cpu_atom@LD_HEAD.DTLB_MISS_AT_RET@ + cpu_atom@LD_HEAD.PGWALK_AT_RET@) / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_bottleneck_%_dtlb_miss_bound_cycles",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of floating point divide operations per uop.",
- "MetricExpr": "cpu_atom@UOPS_RETIRED.FPDIV@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_base_group",
- "MetricName": "tma_fpdiv_uops",
- "MetricThreshold": "tma_fpdiv_uops > 0.2",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.IFETCH@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Ifetch",
+ "MetricName": "tma_info_bottleneck_%_ifetch_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss. See Info.Ifetch_Bound",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to frontend stalls.",
- "DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ALL@ / tma_info_core_slots",
- "MetricGroup": "Default;TopdownL1;tma_L1_group",
- "MetricName": "tma_frontend_bound",
- "MetricThreshold": "tma_frontend_bound > 0.2",
- "MetricgroupNoGroup": "TopdownL1;Default",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of time that retirement is stalled due to an L1 miss",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.LOAD@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Load_Store_Miss",
+ "MetricName": "tma_info_bottleneck_%_load_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ICACHE@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
- "MetricName": "tma_icache_misses",
- "MetricThreshold": "tma_icache_misses > 0.05",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.ANY_AT_RET@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Mem_Exec",
+ "MetricName": "tma_info_bottleneck_%_mem_exec_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "",
- "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE@",
- "MetricName": "tma_info_core_clks",
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricName": "tma_info_br_inst_mix_ipbranch",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "",
- "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE_P@",
- "MetricName": "tma_info_core_clks_p",
+ "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.CALL@",
+ "MetricName": "tma_info_br_inst_mix_ipcall",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Cycles Per Instruction",
- "MetricExpr": "tma_info_core_clks / INST_RETIRED.ANY",
- "MetricName": "tma_info_core_cpi",
+ "BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.FAR_BRANCH@u",
+ "MetricName": "tma_info_br_inst_mix_ipfarbranch",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions Per Cycle",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / tma_info_core_clks",
- "MetricName": "tma_info_core_ipc",
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was not taken",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@BR_MISP_RETIRED.COND@ - cpu_atom@BR_MISP_RETIRED.COND_TAKEN@)",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_ntaken",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "",
- "MetricExpr": "5 * tma_info_core_clks",
- "MetricName": "tma_info_core_slots",
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was taken",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.COND_TAKEN@",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_taken",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Uops Per Instruction",
- "MetricExpr": "cpu_atom@UOPS_RETIRED.ALL@ / INST_RETIRED.ANY",
- "MetricName": "tma_info_core_upi",
+ "BriefDescription": "Instructions per retired indirect call or jump Branch Misprediction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.INDIRECT@",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_indirect",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percent of instruction miss cost that hit in DRAM",
- "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.IFETCH_DRAM_HIT@ / cpu_atom@MEM_BOUND_STALLS.IFETCH@",
- "MetricName": "tma_info_frontend_inst_miss_cost_dramhit_percent",
+ "BriefDescription": "Instructions per retired return Branch Misprediction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.RETURN@",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_ret",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percent of instruction miss cost that hit in the L2",
- "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.IFETCH_L2_HIT@ / cpu_atom@MEM_BOUND_STALLS.IFETCH@",
- "MetricName": "tma_info_frontend_inst_miss_cost_l2hit_percent",
+ "BriefDescription": "Instructions per retired Branch Misprediction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@",
+ "MetricName": "tma_info_br_inst_mix_ipmispredict",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percent of instruction miss cost that hit in the L3",
- "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.IFETCH_LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS.IFETCH@",
- "MetricName": "tma_info_frontend_inst_miss_cost_l3hit_percent",
+ "BriefDescription": "Ratio of all branches which mispredict",
+ "MetricExpr": "cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / cpu_atom@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Ratio of all branches which mispredict",
- "MetricExpr": "cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / BR_INST_RETIRED.ALL_BRANCHES",
- "MetricName": "tma_info_inst_mix_branch_mispredict_ratio",
+ "BriefDescription": "Ratio between Mispredicted branches and unknown branches",
+ "MetricExpr": "cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / cpu_atom@BACLEARS.ANY@",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_to_unknown_branch_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Ratio between Mispredicted branches and unknown branches",
- "MetricExpr": "cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / BACLEARS.ANY",
- "MetricName": "tma_info_inst_mix_branch_mispredict_to_unknown_branch_ratio",
+ "BriefDescription": "Percentage of time that allocation is stalled due to load buffer full",
+ "MetricExpr": "100 * cpu_atom@MEM_SCHEDULER_BLOCK.LD_BUF@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_buffer_stalls_%_load_buffer_stall_cycles",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percentage of all uops which are FPDiv uops",
- "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.FPDIV@ / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_fpdiv_uop_ratio",
+ "BriefDescription": "Percentage of time that allocation is stalled due to memory reservation stations full",
+ "MetricExpr": "100 * cpu_atom@MEM_SCHEDULER_BLOCK.RSV@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_buffer_stalls_%_mem_rsv_stall_cycles",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percentage of all uops which are IDiv uops",
- "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.IDIV@ / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_idiv_uop_ratio",
+ "BriefDescription": "Percentage of time that allocation is stalled due to store buffer full",
+ "MetricExpr": "100 * cpu_atom@MEM_SCHEDULER_BLOCK.ST_BUF@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_buffer_stalls_%_store_buffer_stall_cycles",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / BR_INST_RETIRED.ALL_BRANCHES",
- "MetricName": "tma_info_inst_mix_ipbranch",
+ "BriefDescription": "Cycles Per Instruction",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_core_cpi",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / BR_INST_RETIRED.CALL",
- "MetricName": "tma_info_inst_mix_ipcall",
+ "BriefDescription": "Instructions Per Cycle",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_core_ipc",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per Far Branch",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@BR_INST_RETIRED.FAR_BRANCH@ / 2)",
- "MetricName": "tma_info_inst_mix_ipfarbranch",
+ "BriefDescription": "Uops Per Instruction",
+ "MetricExpr": "cpu_atom@UOPS_RETIRED.ALL@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_core_upi",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per Load",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_inst_mix_ipload",
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L2",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.IFETCH_L2_HIT@ / cpu_atom@MEM_BOUND_STALLS.IFETCH@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was not taken",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@BR_MISP_RETIRED.COND@ - cpu_atom@BR_MISP_RETIRED.COND_TAKEN@)",
- "MetricName": "tma_info_inst_mix_ipmisp_cond_ntaken",
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss doesn't hit in the L2",
+ "MetricExpr": "100 * (cpu_atom@MEM_BOUND_STALLS.IFETCH_LLC_HIT@ + cpu_atom@MEM_BOUND_STALLS.IFETCH_DRAM_HIT@) / cpu_atom@MEM_BOUND_STALLS.IFETCH@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2miss",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was taken",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / BR_MISP_RETIRED.COND_TAKEN",
- "MetricName": "tma_info_inst_mix_ipmisp_cond_taken",
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L3",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.IFETCH_LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS.IFETCH@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3hit",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per retired indirect call or jump Branch Misprediction",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / BR_MISP_RETIRED.INDIRECT",
- "MetricName": "tma_info_inst_mix_ipmisp_indirect",
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss subsequently misses in the L3",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.IFETCH_DRAM_HIT@ / cpu_atom@MEM_BOUND_STALLS.IFETCH@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3miss",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per retired return Branch Misprediction",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / BR_MISP_RETIRED.RETURN",
- "MetricName": "tma_info_inst_mix_ipmisp_ret",
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L2",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.LOAD_L2_HIT@ / cpu_atom@MEM_BOUND_STALLS.LOAD@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2hit",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per retired Branch Misprediction",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / BR_MISP_RETIRED.ALL_BRANCHES",
- "MetricName": "tma_info_inst_mix_ipmispredict",
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses in the L2",
+ "MetricExpr": "100 * (cpu_atom@MEM_BOUND_STALLS.LOAD_LLC_HIT@ + cpu_atom@MEM_BOUND_STALLS.LOAD_DRAM_HIT@) / cpu_atom@MEM_BOUND_STALLS.LOAD@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2miss",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per Store",
- "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / MEM_UOPS_RETIRED.ALL_STORES",
- "MetricName": "tma_info_inst_mix_ipstore",
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L3",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.LOAD_LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS.LOAD@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3hit",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percentage of all uops which are ucode ops",
- "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.MS@ / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_microcode_uop_ratio",
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses the L3",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS.LOAD_DRAM_HIT@ / cpu_atom@MEM_BOUND_STALLS.LOAD@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3miss",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percentage of all uops which are x87 uops",
- "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.X87@ / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_x87_uop_ratio",
+ "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a pipeline block",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_l1_bound",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percentage of total non-speculative loads with a address aliasing block",
- "MetricExpr": "100 * cpu_atom@LD_BLOCKS.4K_ALIAS@ / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_l1_bound_address_alias_blocks",
+ "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement",
+ "MetricExpr": "100 * (cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ + cpu_atom@MEM_BOUND_STALLS.LOAD@) / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_load_bound",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percentage of total non-speculative loads that are splits",
- "MetricExpr": "100 * cpu_atom@MEM_UOPS_RETIRED.SPLIT_LOADS@ / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_l1_bound_load_splits",
+ "BriefDescription": "Counts the number of cycles the core is stalled due to store buffer full",
+ "MetricExpr": "100 * (cpu_atom@MEM_SCHEDULER_BLOCK.ST_BUF@ / cpu_atom@MEM_SCHEDULER_BLOCK.ALL@) * tma_mem_scheduler",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_store_bound",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Percentage of total non-speculative loads with a store forward or unknown store address block",
- "MetricExpr": "100 * cpu_atom@LD_BLOCKS.DATA_UNKNOWN@ / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_l1_bound_store_fwd_blocks",
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory disambiguation",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.DISAMBIGUATION@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_disamb_pki",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Cycle cost per DRAM hit",
- "MetricExpr": "cpu_atom@MEM_BOUND_STALLS.LOAD_DRAM_HIT@ / MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
- "MetricName": "tma_info_memory_cycles_per_demand_load_dram_hit",
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to floating point assists",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.FP_ASSIST@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_fp_assist_pki",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Cycle cost per L2 hit",
- "MetricExpr": "cpu_atom@MEM_BOUND_STALLS.LOAD_L2_HIT@ / MEM_LOAD_UOPS_RETIRED.L2_HIT",
- "MetricName": "tma_info_memory_cycles_per_demand_load_l2_hit",
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory ordering",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.MEMORY_ORDERING@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_monuke_pki",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Cycle cost per LLC hit",
- "MetricExpr": "cpu_atom@MEM_BOUND_STALLS.LOAD_LLC_HIT@ / MEM_LOAD_UOPS_RETIRED.L3_HIT",
- "MetricName": "tma_info_memory_cycles_per_demand_load_l3_hit",
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory renaming",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.MRN_NUKE@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_mrn_pki",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "load ops retired per 1000 instruction",
- "MetricExpr": "1e3 * cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@ / INST_RETIRED.ANY",
- "MetricName": "tma_info_memory_memloadpki",
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to page faults",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.PAGE_FAULT@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_page_fault_pki",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.REF_TSC@ / TSC",
- "MetricName": "tma_info_system_cpu_utilization",
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to self-modifying code",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.SMC@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_smc_pki",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Fraction of cycles spent in Kernel mode",
- "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE@k / CPU_CLK_UNHALTED.CORE",
- "MetricGroup": "Summary",
- "MetricName": "tma_info_system_kernel_utilization",
+ "BriefDescription": "Percentage of total non-speculative loads with an address aliasing block",
+ "MetricExpr": "100 * cpu_atom@LD_BLOCKS.4K_ALIAS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_adressaliasing",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Average Frequency Utilization relative nominal frequency",
- "MetricExpr": "tma_info_core_clks / CPU_CLK_UNHALTED.REF_TSC",
- "MetricGroup": "Power",
- "MetricName": "tma_info_system_turbo_utilization",
+ "BriefDescription": "Percentage of total non-speculative loads with a store forward or unknown store address block",
+ "MetricExpr": "100 * cpu_atom@LD_BLOCKS.DATA_UNKNOWN@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_storefwdblk",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ITLB@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
- "MetricName": "tma_itlb_misses",
- "MetricThreshold": "tma_itlb_misses > 0.05",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of Memory Execution Bound due to a first level data cache miss",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.L1_MISS_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_l1miss",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a load block.",
- "MetricExpr": "cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ / tma_info_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_l1_bound",
- "MetricThreshold": "tma_l1_bound > 0.1",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of Memory Execution Bound due to other block cases, such as pipeline conflicts, fences, etc",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.OTHER_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_otherpipelineblks",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the L2 Cache.",
- "MetricExpr": "cpu_atom@MEM_BOUND_STALLS.LOAD_L2_HIT@ / tma_info_core_clks - max((cpu_atom@MEM_BOUND_STALLS.LOAD@ - cpu_atom@LD_HEAD.L1_MISS_AT_RET@) / tma_info_core_clks, 0) * cpu_atom@MEM_BOUND_STALLS.LOAD_L2_HIT@ / cpu_atom@MEM_BOUND_STALLS.LOAD@",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_l2_bound",
- "MetricThreshold": "tma_l2_bound > 0.1",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of Memory Execution Bound due to a pagewalk",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.PGWALK_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_pagewalk",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
- "MetricExpr": "cpu_atom@MEM_BOUND_STALLS.LOAD_LLC_HIT@ / tma_info_core_clks - max((cpu_atom@MEM_BOUND_STALLS.LOAD@ - cpu_atom@LD_HEAD.L1_MISS_AT_RET@) / tma_info_core_clks, 0) * cpu_atom@MEM_BOUND_STALLS.LOAD_LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS.LOAD@",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_l3_bound",
- "MetricThreshold": "tma_l3_bound > 0.1",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of Memory Execution Bound due to a second level TLB miss",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.DTLB_MISS_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_stlbhit",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles, relative to the number of mem_scheduler slots, in which uops are blocked due to load buffer full",
- "MetricExpr": "tma_mem_scheduler * cpu_atom@MEM_SCHEDULER_BLOCK.LD_BUF@ / MEM_SCHEDULER_BLOCK.ALL",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_mem_scheduler_group",
- "MetricName": "tma_ld_buffer",
- "MetricThreshold": "tma_ld_buffer > 0.05",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of Memory Execution Bound due to a store forward address match",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.ST_ADDR_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_storefwding",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
- "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS@ / tma_info_core_slots",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
- "MetricName": "tma_machine_clears",
- "MetricThreshold": "tma_machine_clears > 0.05",
- "MetricgroupNoGroup": "TopdownL2",
- "ScaleUnit": "100%",
+ "BriefDescription": "Instructions per Load",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_mix_ipload",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops.",
- "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.MEM_SCHEDULER@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
- "MetricName": "tma_mem_scheduler",
- "MetricThreshold": "tma_mem_scheduler > 0.1",
- "ScaleUnit": "100%",
+ "BriefDescription": "Instructions per Store",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@MEM_UOPS_RETIRED.ALL_STORES@",
+ "MetricName": "tma_info_mem_mix_ipstore",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles the core is stalled due to stores or loads.",
- "MetricExpr": "min(tma_backend_bound, cpu_atom@LD_HEAD.ANY_AT_RET@ / tma_info_core_clks + tma_store_bound)",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
- "MetricName": "tma_memory_bound",
- "MetricThreshold": "tma_memory_bound > 0.2",
- "MetricgroupNoGroup": "TopdownL2",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of total non-speculative loads that perform one or more locks",
+ "MetricExpr": "100 * cpu_atom@MEM_UOPS_RETIRED.LOCK_LOADS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_mix_load_locks_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to memory ordering.",
- "MetricExpr": "tma_nuke * (cpu_atom@MACHINE_CLEARS.MEMORY_ORDERING@ / cpu_atom@MACHINE_CLEARS.SLOW@)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_memory_ordering",
- "MetricThreshold": "tma_memory_ordering > 0.02",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of total non-speculative loads that are splits",
+ "MetricExpr": "100 * cpu_atom@MEM_UOPS_RETIRED.SPLIT_LOADS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_mix_load_splits_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS)",
- "MetricExpr": "cpu_atom@UOPS_RETIRED.MS@ / tma_info_core_slots",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_retiring_group",
- "MetricName": "tma_ms_uops",
- "MetricThreshold": "tma_ms_uops > 0.05",
- "MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
- "ScaleUnit": "100%",
+ "BriefDescription": "Ratio of mem load uops to all uops",
+ "MetricExpr": "1e3 * cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@ / cpu_atom@UOPS_RETIRED.ALL@",
+ "MetricName": "tma_info_mem_mix_memload_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops.",
- "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
- "MetricName": "tma_non_mem_scheduler",
- "MetricThreshold": "tma_non_mem_scheduler > 0.1",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction",
+ "MetricExpr": "100 * cpu_atom@SERIALIZATION.C01_MS_SCB@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricName": "tma_info_serialization_%_tpause_cycles",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear (slow nuke).",
- "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.NUKE@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
- "MetricName": "tma_nuke",
- "MetricThreshold": "tma_nuke > 0.05",
- "ScaleUnit": "100%",
+ "BriefDescription": "Average CPU Utilization",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.REF_TSC@ / msr@tsc\\,cpu=cpu_atom@",
+ "MetricName": "tma_info_system_cpu_utilization",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not categorized.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.OTHER@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
- "MetricName": "tma_other_fb",
- "MetricThreshold": "tma_other_fb > 0.05",
- "ScaleUnit": "100%",
+ "BriefDescription": "Fraction of cycles spent in Kernel mode",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE_P@k / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_kernel_utilization",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a number of other load blocks.",
- "MetricExpr": "cpu_atom@LD_HEAD.OTHER_AT_RET@ / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_other_l1",
- "MetricThreshold": "tma_other_l1 > 0.05",
- "ScaleUnit": "100%",
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE_P@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hits in the L2, LLC, DRAM or MMIO (Non-DRAM) but could not be correctly attributed or cycles in which the load miss is waiting on a request buffer.",
- "MetricExpr": "max(0, tma_memory_bound - (tma_store_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound))",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_other_load_store",
- "MetricThreshold": "tma_other_load_store > 0.1",
- "ScaleUnit": "100%",
+ "BriefDescription": "Average Frequency Utilization relative nominal frequency",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE@ / cpu_atom@CPU_CLK_UNHALTED.REF_TSC@",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_system_turbo_utilization",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of uops retired excluding ms and fp div uops.",
- "MetricExpr": "(cpu_atom@TOPDOWN_RETIRING.ALL@ - cpu_atom@UOPS_RETIRED.MS@ - cpu_atom@UOPS_RETIRED.FPDIV@) / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_base_group",
- "MetricName": "tma_other_ret",
- "MetricThreshold": "tma_other_ret > 0.3",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of all uops which are FPDiv uops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.FPDIV@ / cpu_atom@UOPS_RETIRED.ALL@",
+ "MetricName": "tma_info_uop_mix_fpdiv_uop_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to page faults.",
- "MetricExpr": "tma_nuke * (cpu_atom@MACHINE_CLEARS.PAGE_FAULT@ / cpu_atom@MACHINE_CLEARS.SLOW@)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_page_fault",
- "MetricThreshold": "tma_page_fault > 0.02",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of all uops which are IDiv uops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.IDIV@ / cpu_atom@UOPS_RETIRED.ALL@",
+ "MetricName": "tma_info_uop_mix_idiv_uop_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.",
- "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.PREDECODE@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
- "MetricName": "tma_predecode",
- "MetricThreshold": "tma_predecode > 0.05",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of all uops which are microcode ops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.MS@ / cpu_atom@UOPS_RETIRED.ALL@",
+ "MetricName": "tma_info_uop_mix_microcode_uop_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls).",
- "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.REGISTER@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
- "MetricName": "tma_register",
- "MetricThreshold": "tma_register > 0.1",
- "ScaleUnit": "100%",
+ "BriefDescription": "Percentage of all uops which are x87 uops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.X87@ / cpu_atom@UOPS_RETIRED.ALL@",
+ "MetricName": "tma_info_uop_mix_x87_uop_ratio",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls).",
- "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.REORDER_BUFFER@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
- "MetricName": "tma_reorder_buffer",
- "MetricThreshold": "tma_reorder_buffer > 0.1",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ITLB@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_itlb_misses",
+ "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
- "MetricExpr": "tma_backend_bound",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_aux_group",
- "MetricName": "tma_resource_bound",
- "MetricThreshold": "tma_resource_bound > 0.2",
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
+ "MetricName": "tma_machine_clears",
+ "MetricThreshold": "tma_machine_clears > 0.05 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count.",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that result in retirement slots.",
- "DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "cpu_atom@TOPDOWN_RETIRING.ALL@ / tma_info_core_slots",
- "MetricGroup": "Default;TopdownL1;tma_L1_group",
- "MetricName": "tma_retiring",
- "MetricThreshold": "tma_retiring > 0.75",
- "MetricgroupNoGroup": "TopdownL1;Default",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.MEM_SCHEDULER@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_mem_scheduler",
+ "MetricThreshold": "tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles, relative to the number of mem_scheduler slots, in which uops are blocked due to RSV full relative",
- "MetricExpr": "tma_mem_scheduler * cpu_atom@MEM_SCHEDULER_BLOCK.RSV@ / MEM_SCHEDULER_BLOCK.ALL",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_mem_scheduler_group",
- "MetricName": "tma_rsv",
- "MetricThreshold": "tma_rsv > 0.05",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_non_mem_scheduler",
+ "MetricThreshold": "tma_non_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).",
- "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.SERIALIZATION@ / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
- "MetricName": "tma_serialization",
- "MetricThreshold": "tma_serialization > 0.1",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that requires the use of microcode (slow nuke)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.NUKE@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_nuke",
+ "MetricThreshold": "tma_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not categorized.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.OTHER@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_other_fb",
+ "MetricThreshold": "tma_other_fb > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to SMC.",
- "MetricExpr": "tma_nuke * (cpu_atom@MACHINE_CLEARS.SMC@ / cpu_atom@MACHINE_CLEARS.SLOW@)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_smc",
- "MetricThreshold": "tma_smc > 0.02",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.PREDECODE@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_predecode",
+ "MetricThreshold": "tma_predecode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles, relative to the number of mem_scheduler slots, in which uops are blocked due to store buffer full",
- "MetricExpr": "tma_store_bound",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_mem_scheduler_group",
- "MetricName": "tma_st_buffer",
- "MetricThreshold": "tma_st_buffer > 0.05",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.REGISTER@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_register",
+ "MetricThreshold": "tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a first level TLB miss.",
- "MetricExpr": "cpu_atom@LD_HEAD.DTLB_MISS_AT_RET@ / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_stlb_hit",
- "MetricThreshold": "tma_stlb_hit > 0.05",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.REORDER_BUFFER@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_reorder_buffer",
+ "MetricThreshold": "tma_reorder_buffer > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a second level TLB miss requiring a page walk.",
- "MetricExpr": "cpu_atom@LD_HEAD.PGWALK_AT_RET@ / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_stlb_miss",
- "MetricThreshold": "tma_stlb_miss > 0.05",
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a resource limitation",
+ "MetricExpr": "tma_backend_bound - tma_core_bound",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_resource_bound",
+ "MetricThreshold": "tma_resource_bound > 0.2 & tma_backend_bound > 0.1",
+ "MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles the core is stalled due to store buffer full.",
- "MetricExpr": "tma_mem_scheduler * (cpu_atom@MEM_SCHEDULER_BLOCK.ST_BUF@ / cpu_atom@MEM_SCHEDULER_BLOCK.ALL@)",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_store_bound",
- "MetricThreshold": "tma_store_bound > 0.1",
+ "BriefDescription": "Counts the number of issue slots that result in retirement slots",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_atom@TOPDOWN_RETIRING.ALL@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "Default;TopdownL1;tma_L1_group",
+ "MetricName": "tma_retiring",
+ "MetricThreshold": "tma_retiring > 0.75",
+ "MetricgroupNoGroup": "TopdownL1;Default",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a store forward block.",
- "MetricExpr": "cpu_atom@LD_HEAD.ST_ADDR_AT_RET@ / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_store_fwd_blk",
- "MetricThreshold": "tma_store_fwd_blk > 0.05",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.SERIALIZATION@ / (5 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_serialization",
+ "MetricThreshold": "tma_serialization > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%",
"Unit": "cpu_atom"
},
@@ -811,14 +725,14 @@
"MetricExpr": "(cpu_core@UOPS_DISPATCHED.PORT_0@ + cpu_core@UOPS_DISPATCHED.PORT_1@ + cpu_core@UOPS_DISPATCHED.PORT_5_11@ + cpu_core@UOPS_DISPATCHED.PORT_6@) / (5 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * cpu_core@ASSISTS.ANY\\,umask\\=0x1B@ / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "78 * cpu_core@ASSISTS.ANY@ / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
@@ -837,8 +751,8 @@
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "cpu_core@topdown\\-be\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@) + 0 * tma_info_thread_slots",
- "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricExpr": "cpu_core@topdown\\-be\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -859,13 +773,118 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
+ "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
+ "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
+ "MetricName": "tma_bottleneck_big_code",
+ "MetricThreshold": "tma_bottleneck_big_code > 20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+ "MetricExpr": "100 * ((cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ + 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@INST_RETIRED.NOP@) / tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Ret",
+ "MetricName": "tma_bottleneck_branching_overhead",
+ "MetricThreshold": "tma_bottleneck_branching_overhead > 5",
+ "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
+ "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
+ "MetricGroup": "BvCB;Cor;tma_issueComp",
+ "MetricName": "tma_bottleneck_compute_bound_est",
+ "MetricThreshold": "tma_bottleneck_compute_bound_est > 20",
+ "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: ",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+ "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
+ "MetricName": "tma_bottleneck_data_cache_memory_bandwidth",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_bandwidth > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_dependency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_lock_latency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_loads / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_stores / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
+ "MetricName": "tma_bottleneck_data_cache_memory_latency",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_latency > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
+ "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - (1 - cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms)) - tma_bottleneck_big_code",
+ "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
+ "MetricName": "tma_bottleneck_instruction_fetch_bw",
+ "MetricThreshold": "tma_bottleneck_instruction_fetch_bw > 20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of irregular execution (e.g",
+ "MetricExpr": "100 * ((1 - cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + cpu_core@RS.EMPTY_RESOURCE@ / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
+ "MetricName": "tma_bottleneck_irregular_overhead",
+ "MetricThreshold": "tma_bottleneck_irregular_overhead > 10",
+ "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
+ "MetricName": "tma_bottleneck_memory_data_tlbs",
+ "MetricThreshold": "tma_bottleneck_memory_data_tlbs > 20",
+ "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
+ "MetricGroup": "BvMS;LockCont;Mem;Offcore;tma_issueSyncxn",
+ "MetricName": "tma_bottleneck_memory_synchronization",
+ "MetricThreshold": "tma_bottleneck_memory_synchronization > 10",
+ "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+ "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
+ "MetricName": "tma_bottleneck_mispredictions",
+ "MetricThreshold": "tma_bottleneck_mispredictions > 20",
+ "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+ "MetricExpr": "100 - (tma_bottleneck_big_code + tma_bottleneck_instruction_fetch_bw + tma_bottleneck_mispredictions + tma_bottleneck_data_cache_memory_bandwidth + tma_bottleneck_data_cache_memory_latency + tma_bottleneck_memory_data_tlbs + tma_bottleneck_memory_synchronization + tma_bottleneck_compute_bound_est + tma_bottleneck_irregular_overhead + tma_bottleneck_branching_overhead + tma_bottleneck_useful_work)",
+ "MetricGroup": "BvOB;Cor;Offcore",
+ "MetricName": "tma_bottleneck_other_bottlenecks",
+ "MetricThreshold": "tma_bottleneck_other_bottlenecks > 20",
+ "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+ "MetricExpr": "100 * (tma_retiring - (cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ + 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@INST_RETIRED.NOP@) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "BvUW;Ret",
+ "MetricName": "tma_bottleneck_useful_work",
+ "MetricThreshold": "tma_bottleneck_useful_work > 20",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
- "MetricExpr": "cpu_core@topdown\\-br\\-mispredict@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@) + 0 * tma_info_thread_slots",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricExpr": "cpu_core@topdown\\-br\\-mispredict@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_bottleneck_mispredictions, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -880,6 +899,24 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.1 power-performance optimized state (Faster wakeup time; Smaller power savings).",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.C01@ / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c01_wait",
+ "MetricThreshold": "tma_c01_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.2 power-performance optimized state (Slower wakeup time; Larger power savings).",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.C02@ / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c02_wait",
+ "MetricThreshold": "tma_c02_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
@@ -900,12 +937,66 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that hit in the L2 cache.",
+ "MetricExpr": "max(0, tma_icache_misses - tma_code_l2_miss)",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_hit",
+ "MetricThreshold": "tma_code_l2_hit > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that miss in the L2 cache.",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD@ / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_miss",
+ "MetricThreshold": "tma_code_l2_miss > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) ITLB was missed by instructions fetches, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_itlb_misses - tma_code_stlb_miss)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_hit",
+ "MetricThreshold": "tma_code_stlb_hit > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by instruction fetches, performing a hardware page walk",
+ "MetricExpr": "cpu_core@ITLB_MISSES.WALK_ACTIVE@ / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_miss",
+ "MetricThreshold": "tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * cpu_core@ITLB_MISSES.WALK_COMPLETED_2M_4M@ / (cpu_core@ITLB_MISSES.WALK_COMPLETED_4K@ + cpu_core@ITLB_MISSES.WALK_COMPLETED_2M_4M@)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_2m",
+ "MetricThreshold": "tma_code_stlb_miss_2m > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * cpu_core@ITLB_MISSES.WALK_COMPLETED_4K@ / (cpu_core@ITLB_MISSES.WALK_COMPLETED_4K@ + cpu_core@ITLB_MISSES.WALK_COMPLETED_2M_4M@)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_4k",
+ "MetricThreshold": "tma_code_stlb_miss_4k > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
- "MetricExpr": "(25 * tma_info_system_average_frequency * (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) + 24 * tma_info_system_average_frequency * cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS@) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricExpr": "(25 * tma_info_system_core_frequency * (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) + 24 * tma_info_system_core_frequency * cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS@) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_bottleneck_memory_synchronization, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -922,11 +1013,12 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
- "MetricExpr": "24 * tma_info_system_average_frequency * (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD@ + cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * (1 - cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "24 * tma_info_system_core_frequency * (cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD@ + cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * (1 - cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ / (cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM@ + cpu_core@OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD@))) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -935,7 +1027,7 @@
"MetricExpr": "(cpu_core@INST_DECODED.DECODERS\\,cmask\\=1@ - cpu_core@INST_DECODED.DECODERS\\,cmask\\=2@) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group",
"MetricName": "tma_decoder0_alone",
- "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 6 > 0.35))",
+ "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions",
"ScaleUnit": "100%",
"Unit": "cpu_core"
@@ -943,7 +1035,7 @@
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "cpu_core@ARITH.DIV_ACTIVE@ / tma_info_thread_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
@@ -956,7 +1048,7 @@
"MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_dram_bound",
"MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -965,7 +1057,7 @@
"MetricExpr": "(cpu_core@IDQ.DSB_CYCLES_ANY@ - cpu_core@IDQ.DSB_CYCLES_OK@) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 6 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%",
"Unit": "cpu_core"
@@ -976,47 +1068,47 @@
"MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_dsb_switches",
"MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "min(7 * cpu_core@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + cpu_core@DTLB_LOAD_MISSES.WALK_ACTIVE@, max(cpu_core@CYCLE_ACTIVITY.CYCLES_MEM_ANY@ - cpu_core@MEMORY_ACTIVITY.CYCLES_L1D_MISS@, 0)) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_store",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(7 * cpu_core@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + cpu_core@DTLB_STORE_MISSES.WALK_ACTIVE@) / tma_info_core_core_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_load",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
- "MetricExpr": "28 * tma_info_system_average_frequency * cpu_core@OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM@ / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricExpr": "28 * tma_info_system_core_frequency * cpu_core@OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM@ / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricExpr": "cpu_core@L1D_PEND_MISS.FB_FULL@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
- "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
+ "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -1025,9 +1117,9 @@
"MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 6 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -1043,12 +1135,12 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or up to ([SNB+] four; [ADL+] five) uops",
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops",
"MetricExpr": "max(0, tma_heavy_operations - tma_microcode_sequencer)",
"MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0",
"MetricName": "tma_few_uops_instructions",
"MetricThreshold": "tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -1073,8 +1165,17 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This metric represents fraction of cycles where the Floating-Point Divider unit was active.",
+ "MetricExpr": "cpu_core@ARITH.FPDIV_ACTIVE@ / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_fp_divider",
+ "MetricThreshold": "tma_fp_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
- "MetricExpr": "cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricExpr": "cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -1084,7 +1185,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
- "MetricExpr": "cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricExpr": "cpu_core@FP_ARITH_INST_RETIRED.VECTOR@ / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -1098,7 +1199,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_128b",
"MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -1108,7 +1209,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_256b",
"MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -1116,7 +1217,7 @@
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "cpu_core@topdown\\-fe\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@) - cpu_core@INT_MISC.UOP_DROPPING@ / tma_info_thread_slots",
- "MetricGroup": "Default;PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -1127,28 +1228,28 @@
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions",
"MetricExpr": "tma_light_operations * cpu_core@INST_RETIRED.MACRO_FUSED@ / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_fused_instructions",
"MetricThreshold": "tma_fused_instructions > 0.1 & tma_light_operations > 0.6",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JCC are commonly used examples.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences",
- "MetricExpr": "cpu_core@topdown\\-heavy\\-ops@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@) + 0 * tma_info_thread_slots",
+ "MetricExpr": "cpu_core@topdown\\-heavy\\-ops@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
"MetricGroup": "Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group",
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences. Sample with: UOPS_RETIRED.HEAVY",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+]). Sample with: UOPS_RETIRED.HEAVY",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
"MetricExpr": "cpu_core@ICACHE_DATA.STALLS@ / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
@@ -1156,40 +1257,40 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
- "MetricExpr": "(tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES",
+ "BriefDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+ "MetricExpr": "tma_bottleneck_mispredictions * tma_info_thread_slots / 6 / cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ / 100",
"MetricGroup": "Bad;BrMispredicts;tma_issueBM",
"MetricName": "tma_info_bad_spec_branch_misprediction_cost",
- "PublicDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers",
+ "PublicDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_mispredicts_resteers",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instructions per retired mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / BR_MISP_RETIRED.COND_NTAKEN",
+ "BriefDescription": "Instructions per retired Mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.COND_NTAKEN@",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_cond_ntaken",
"MetricThreshold": "tma_info_bad_spec_ipmisp_cond_ntaken < 200",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instructions per retired mispredicts for conditional taken branches (lower number means higher occurrence rate).",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / BR_MISP_RETIRED.COND_TAKEN",
+ "BriefDescription": "Instructions per retired Mispredicts for conditional taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.COND_TAKEN@",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_cond_taken",
"MetricThreshold": "tma_info_bad_spec_ipmisp_cond_taken < 200",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "cpu_core@BR_MISP_RETIRED.INDIRECT_CALL\\,umask\\=0x80@ / BR_MISP_RETIRED.INDIRECT",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.INDIRECT@",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instructions per retired mispredicts for return branches (lower number means higher occurrence rate).",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / BR_MISP_RETIRED.RET",
+ "BriefDescription": "Instructions per retired Mispredicts for return branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.RET@",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_ret",
"MetricThreshold": "tma_info_bad_spec_ipmisp_ret < 500",
@@ -1197,13 +1298,20 @@
},
{
"BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear) (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / BR_MISP_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@",
"MetricGroup": "Bad;BadSpec;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmispredict",
"MetricThreshold": "tma_info_bad_spec_ipmispredict < 200",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Speculative to Retired ratio of all clears (covering Mispredicts and nukes)",
+ "MetricExpr": "cpu_core@INT_MISC.CLEARS_COUNT@ / (cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ + cpu_core@MACHINE_CLEARS.COUNT@)",
+ "MetricGroup": "BrMispredicts",
+ "MetricName": "tma_info_bad_spec_spec_clears_ratio",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
"MetricExpr": "(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
"MetricGroup": "Cor;SMT",
@@ -1212,12 +1320,21 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_lsd + tma_mite + tma_ms)))",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
- "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite))",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite + tma_ms))",
"MetricGroup": "DSBmiss;Fed;tma_issueFB",
"MetricName": "tma_info_botlnk_l2_dsb_misses",
"MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
- "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"Unit": "cpu_core"
},
{
@@ -1230,91 +1347,29 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
- "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
- "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB;tma_issueBC",
- "MetricName": "tma_info_bottleneck_big_code",
- "MetricThreshold": "tma_info_bottleneck_big_code > 20",
- "PublicDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses). Related metrics: tma_info_bottleneck_branching_overhead",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
- "MetricExpr": "100 * ((cpu_core@BR_INST_RETIRED.COND@ + 3 * cpu_core@BR_INST_RETIRED.NEAR_CALL@ + (cpu_core@BR_INST_RETIRED.NEAR_TAKEN@ - cpu_core@BR_INST_RETIRED.COND_TAKEN@ - 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@)) / tma_info_thread_slots)",
- "MetricGroup": "Ret;tma_issueBC",
- "MetricName": "tma_info_bottleneck_branching_overhead",
- "MetricThreshold": "tma_info_bottleneck_branching_overhead > 10",
- "PublicDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls). Related metrics: tma_info_bottleneck_big_code",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
- "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code",
- "MetricGroup": "Fed;FetchBW;Frontend",
- "MetricName": "tma_info_bottleneck_instruction_fetch_bw",
- "MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))",
- "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW",
- "MetricName": "tma_info_bottleneck_memory_bandwidth",
- "MetricThreshold": "tma_info_bottleneck_memory_bandwidth > 20",
- "PublicDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
- "MetricExpr": "100 * tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
- "MetricGroup": "Mem;MemoryTLB;Offcore;tma_issueTLB",
- "MetricName": "tma_info_bottleneck_memory_data_tlbs",
- "MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20",
- "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound))",
- "MetricGroup": "Mem;MemoryLat;Offcore;tma_issueLat",
- "MetricName": "tma_info_bottleneck_memory_latency",
- "MetricThreshold": "tma_info_bottleneck_memory_latency > 20",
- "PublicDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches). Related metrics: tma_l3_hit_latency, tma_mem_latency",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
- "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
- "MetricGroup": "Bad;BadSpec;BrMispredicts;tma_issueBM",
- "MetricName": "tma_info_bottleneck_mispredictions",
- "MetricThreshold": "tma_info_bottleneck_mispredictions > 20",
- "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
- "Unit": "cpu_core"
- },
- {
"BriefDescription": "Fraction of branches that are CALL or RET",
- "MetricExpr": "(cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@BR_INST_RETIRED.NEAR_RETURN@) / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "(cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@BR_INST_RETIRED.NEAR_RETURN@) / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
"MetricGroup": "Bad;Branches",
"MetricName": "tma_info_branches_callret",
"Unit": "cpu_core"
},
{
"BriefDescription": "Fraction of branches that are non-taken conditionals",
- "MetricExpr": "cpu_core@BR_INST_RETIRED.COND_NTAKEN@ / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.COND_NTAKEN@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
"MetricGroup": "Bad;Branches;CodeGen;PGO",
"MetricName": "tma_info_branches_cond_nt",
"Unit": "cpu_core"
},
{
"BriefDescription": "Fraction of branches that are taken conditionals",
- "MetricExpr": "cpu_core@BR_INST_RETIRED.COND_TAKEN@ / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.COND_TAKEN@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
"MetricGroup": "Bad;Branches;CodeGen;PGO",
"MetricName": "tma_info_branches_cond_tk",
"Unit": "cpu_core"
},
{
"BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
- "MetricExpr": "(cpu_core@BR_INST_RETIRED.NEAR_TAKEN@ - cpu_core@BR_INST_RETIRED.COND_TAKEN@ - 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@) / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "(cpu_core@BR_INST_RETIRED.NEAR_TAKEN@ - cpu_core@BR_INST_RETIRED.COND_TAKEN@ - 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@) / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
"MetricGroup": "Bad;Branches",
"MetricName": "tma_info_branches_jump",
"Unit": "cpu_core"
@@ -1328,7 +1383,7 @@
},
{
"BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
- "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.DISTRIBUTED@",
+ "MetricExpr": "(cpu_core@CPU_CLK_UNHALTED.DISTRIBUTED@ if #SMT_on else tma_info_thread_clks)",
"MetricGroup": "SMT",
"MetricName": "tma_info_core_core_clks",
"Unit": "cpu_core"
@@ -1341,8 +1396,15 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "uops Executed per Cycle",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / tma_info_thread_clks",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_core_epc",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Floating Point Operations Per Cycle",
- "MetricExpr": "(cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.SCALAR_DOUBLE@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * (cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE@) + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@) / tma_info_core_core_clks",
+ "MetricExpr": "(cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_INST_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@) / tma_info_core_core_clks",
"MetricGroup": "Flops;Ret",
"MetricName": "tma_info_core_flopc",
"Unit": "cpu_core"
@@ -1356,8 +1418,8 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / (cpu_core@UOPS_EXECUTED.CORE_CYCLES_GE_1@ / 2 if #SMT_on else cpu_core@UOPS_EXECUTED.CORE_CYCLES_GE_1@)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / cpu_core@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp",
"Unit": "cpu_core"
@@ -1368,7 +1430,7 @@
"MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
"MetricName": "tma_info_frontend_dsb_coverage",
"MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 6 > 0.35",
- "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp",
"Unit": "cpu_core"
},
{
@@ -1394,7 +1456,7 @@
},
{
"BriefDescription": "Instructions per non-speculative DSB miss (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / FRONTEND_RETIRED.ANY_DSB_MISS",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@FRONTEND_RETIRED.ANY_DSB_MISS@",
"MetricGroup": "DSBmiss;Fed",
"MetricName": "tma_info_frontend_ipdsb_miss_ret",
"MetricThreshold": "tma_info_frontend_ipdsb_miss_ret < 50",
@@ -1402,21 +1464,21 @@
},
{
"BriefDescription": "Instructions per speculative Unknown Branch Misprediction (BAClear) (lower number means higher occurrence rate)",
- "MetricExpr": "tma_info_inst_mix_instructions / BACLEARS.ANY",
+ "MetricExpr": "tma_info_inst_mix_instructions / cpu_core@BACLEARS.ANY@",
"MetricGroup": "Fed",
"MetricName": "tma_info_frontend_ipunknown_branch",
"Unit": "cpu_core"
},
{
"BriefDescription": "L2 cache true code cacheline misses per kilo instruction",
- "MetricExpr": "1e3 * cpu_core@FRONTEND_RETIRED.L2_MISS@ / INST_RETIRED.ANY",
+ "MetricExpr": "1e3 * cpu_core@FRONTEND_RETIRED.L2_MISS@ / cpu_core@INST_RETIRED.ANY@",
"MetricGroup": "IcMiss",
"MetricName": "tma_info_frontend_l2mpki_code",
"Unit": "cpu_core"
},
{
"BriefDescription": "L2 cache speculative code cacheline misses per kilo instruction",
- "MetricExpr": "1e3 * cpu_core@L2_RQSTS.CODE_RD_MISS@ / INST_RETIRED.ANY",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.CODE_RD_MISS@ / cpu_core@INST_RETIRED.ANY@",
"MetricGroup": "IcMiss",
"MetricName": "tma_info_frontend_l2mpki_code_all",
"Unit": "cpu_core"
@@ -1429,8 +1491,23 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.NEAR_TAKEN@ / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection",
+ "MetricExpr": "cpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES@ / cpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES\\,cmask\\=1\\,edge@",
+ "MetricGroup": "Fed",
+ "MetricName": "tma_info_frontend_unknown_branch_cost",
+ "PublicDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection. See Unknown_Branches node.",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
- "MetricExpr": "cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ / BR_INST_RETIRED.NEAR_TAKEN",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@",
"MetricGroup": "Branches;Fed;PGO",
"MetricName": "tma_info_inst_mix_bptkbranch",
"Unit": "cpu_core"
@@ -1445,11 +1522,11 @@
},
{
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ + cpu_core@FP_ARITH_INST_RETIRED.VECTOR@)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW.",
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW.",
"Unit": "cpu_core"
},
{
@@ -1458,7 +1535,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx128",
"MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting.",
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
"Unit": "cpu_core"
},
{
@@ -1467,30 +1544,30 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx256",
"MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting.",
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
"Unit": "cpu_core"
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@FP_ARITH_INST_RETIRED.SCALAR_DOUBLE@",
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_dp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting.",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
"Unit": "cpu_core"
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE@",
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_sp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting.",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
"Unit": "cpu_core"
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
"MetricGroup": "Branches;Fed;InsType",
"MetricName": "tma_info_inst_mix_ipbranch",
"MetricThreshold": "tma_info_inst_mix_ipbranch < 8",
@@ -1498,7 +1575,7 @@
},
{
"BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / BR_INST_RETIRED.NEAR_CALL",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.NEAR_CALL@",
"MetricGroup": "Branches;Fed;PGO",
"MetricName": "tma_info_inst_mix_ipcall",
"MetricThreshold": "tma_info_inst_mix_ipcall < 200",
@@ -1506,7 +1583,7 @@
},
{
"BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.SCALAR_DOUBLE@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * (cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE@) + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_INST_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_ipflop",
"MetricThreshold": "tma_info_inst_mix_ipflop < 10",
@@ -1514,15 +1591,22 @@
},
{
"BriefDescription": "Instructions per Load (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / MEM_INST_RETIRED.ALL_LOADS",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@MEM_INST_RETIRED.ALL_LOADS@",
"MetricGroup": "InsType",
"MetricName": "tma_info_inst_mix_ipload",
"MetricThreshold": "tma_info_inst_mix_ipload < 3",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Instructions per PAUSE (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / cpu_core@CPU_CLK_UNHALTED.PAUSE_INST@",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_ippause",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Instructions per Store (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / MEM_INST_RETIRED.ALL_STORES",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@",
"MetricGroup": "InsType",
"MetricName": "tma_info_inst_mix_ipstore",
"MetricThreshold": "tma_info_inst_mix_ipstore < 8",
@@ -1530,193 +1614,222 @@
},
{
"BriefDescription": "Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@SW_PREFETCH_ACCESS.T0\\,umask\\=0xF@",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@SW_PREFETCH_ACCESS.ANY@",
"MetricGroup": "Prefetches",
"MetricName": "tma_info_inst_mix_ipswpf",
"MetricThreshold": "tma_info_inst_mix_ipswpf < 100",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instruction per taken branch",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / BR_INST_RETIRED.NEAR_TAKEN",
+ "BriefDescription": "Instructions per taken branch",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 13",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp",
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * cpu_core@L1D.REPLACEMENT@ / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw",
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * cpu_core@L2_LINES_IN.ALL@ / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw",
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * cpu_core@OFFCORE_REQUESTS.ALL_REQUESTS@ / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_access_bw",
"MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_core_l3_cache_access_bw",
+ "MetricName": "tma_info_memory_core_l3_cache_access_bw_2t",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * cpu_core@LONGEST_LAT_CACHE.MISS@ / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw",
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t",
"Unit": "cpu_core"
},
{
"BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
- "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_fb_hpki",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@L1D.REPLACEMENT@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki",
"Unit": "cpu_core"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
- "MetricExpr": "1e3 * cpu_core@L2_RQSTS.ALL_DEMAND_DATA_RD@ / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.ALL_DEMAND_DATA_RD@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki_load",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@L2_LINES_IN.ALL@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
- "MetricExpr": "1e3 * (cpu_core@L2_RQSTS.REFERENCES@ - cpu_core@L2_RQSTS.MISS@) / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricExpr": "1e3 * (cpu_core@L2_RQSTS.REFERENCES@ - cpu_core@L2_RQSTS.MISS@) / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_all",
"Unit": "cpu_core"
},
{
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
- "MetricExpr": "1e3 * cpu_core@L2_RQSTS.DEMAND_DATA_RD_HIT@ / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.DEMAND_DATA_RD_HIT@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_load",
"Unit": "cpu_core"
},
{
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L2_MISS@ / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L2_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki",
"Unit": "cpu_core"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
- "MetricExpr": "1e3 * cpu_core@L2_RQSTS.MISS@ / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem;Offcore",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem;Offcore",
"MetricName": "tma_info_memory_l2mpki_all",
"Unit": "cpu_core"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
- "MetricExpr": "1e3 * cpu_core@L2_RQSTS.DEMAND_DATA_RD_MISS@ / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.DEMAND_DATA_RD_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki_load",
"Unit": "cpu_core"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L3_MISS@ / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki",
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.RFO_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricExpr": "cpu_core@L1D_PEND_MISS.PENDING@ / MEM_LOAD_COMPLETED.L1_MISS_ANY",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency",
+ "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@OFFCORE_REQUESTS.ALL_REQUESTS@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_l3_cache_access_bw",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricExpr": "cpu_core@L1D_PEND_MISS.PENDING@ / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@LONGEST_LAT_CACHE.MISS@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L3_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
- "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD@ / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DATA_RD@ / cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD@",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp",
+ "MetricName": "tma_info_memory_latency_data_l2_mlp",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
- "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD@ / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD@ / cpu_core@OFFCORE_REQUESTS.DEMAND_DATA_RD@",
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD@ / cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=1@",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp",
+ "MetricName": "tma_info_memory_latency_load_l2_mlp",
"Unit": "cpu_core"
},
{
"BriefDescription": "Average Latency for L3 cache miss demand Loads",
- "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD@ / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD@ / cpu_core@OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD@",
"MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l3_miss_latency",
+ "MetricName": "tma_info_memory_latency_load_l3_miss_latency",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t",
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricExpr": "cpu_core@L1D_PEND_MISS.PENDING@ / cpu_core@MEM_LOAD_COMPLETED.L1_MISS_ANY@",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t",
+ "BriefDescription": "\"Bus lock\" per kilo instruction",
+ "MetricExpr": "1e3 * cpu_core@SQ_MISC.BUS_LOCK@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_bus_lock_pki",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_access_bw",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t",
+ "BriefDescription": "Un-cacheable retired load per kilo instruction",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_MISC_RETIRED.UC@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_uc_load_pki",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t",
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricExpr": "cpu_core@L1D_PEND_MISS.PENDING@ / cpu_core@L1D_PEND_MISS.PENDING_CYCLES@",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Rate of L2 HW prefetched lines that were not used by demand accesses",
+ "MetricExpr": "cpu_core@L2_LINES_OUT.USELESS_HWPF@ / (cpu_core@L2_LINES_OUT.SILENT@ + cpu_core@L2_LINES_OUT.NON_SILENT@)",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_memory_prefetches_useless_hwpf",
+ "MetricThreshold": "tma_info_memory_prefetches_useless_hwpf > 0.15",
"Unit": "cpu_core"
},
{
"BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
- "MetricExpr": "1e3 * cpu_core@ITLB_MISSES.WALK_COMPLETED@ / INST_RETIRED.ANY",
+ "MetricExpr": "1e3 * cpu_core@ITLB_MISSES.WALK_COMPLETED@ / cpu_core@INST_RETIRED.ANY@",
"MetricGroup": "Fed;MemoryTLB",
"MetricName": "tma_info_memory_tlb_code_stlb_mpki",
"Unit": "cpu_core"
},
{
"BriefDescription": "STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
- "MetricExpr": "1e3 * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED@ / INST_RETIRED.ANY",
+ "MetricExpr": "1e3 * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED@ / cpu_core@INST_RETIRED.ANY@",
"MetricGroup": "Mem;MemoryTLB",
"MetricName": "tma_info_memory_tlb_load_stlb_mpki",
"Unit": "cpu_core"
@@ -1731,22 +1844,43 @@
},
{
"BriefDescription": "STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
- "MetricExpr": "1e3 * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED@ / INST_RETIRED.ANY",
+ "MetricExpr": "1e3 * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED@ / cpu_core@INST_RETIRED.ANY@",
"MetricGroup": "Mem;MemoryTLB",
"MetricName": "tma_info_memory_tlb_store_stlb_mpki",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / cpu_core@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / (cpu_core@UOPS_EXECUTED.CORE_CYCLES_GE_1@ / 2 if #SMT_on else cpu_core@UOPS_EXECUTED.THREAD\\,cmask\\=1@)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Average number of uops fetched from DSB per cycle",
+ "MetricExpr": "cpu_core@IDQ.DSB_UOPS@ / cpu_core@IDQ.DSB_CYCLES_ANY@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_dsb",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from LSD per cycle",
+ "MetricExpr": "cpu_core@LSD.UOPS@ / cpu_core@LSD.CYCLES_ACTIVE@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_lsd",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MITE per cycle",
+ "MetricExpr": "cpu_core@IDQ.MITE_UOPS@ / cpu_core@IDQ.MITE_CYCLES_ANY@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_mite",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Instructions per a microcode Assist invocation",
- "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@ASSISTS.ANY\\,umask\\=0x1B@",
- "MetricGroup": "Pipeline;Ret;Retire",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@ASSISTS.ANY@",
+ "MetricGroup": "MicroSeq;Pipeline;Ret;Retire",
"MetricName": "tma_info_pipeline_ipassist",
"MetricThreshold": "tma_info_pipeline_ipassist < 100e3",
"PublicDescription": "Instructions per a microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate)",
@@ -1762,39 +1896,54 @@
{
"BriefDescription": "Estimated fraction of retirement-cycles dealing with repeat instructions",
"MetricExpr": "cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
- "MetricGroup": "Pipeline;Ret",
+ "MetricGroup": "MicroSeq;Pipeline;Ret",
"MetricName": "tma_info_pipeline_strings_cycles",
"MetricThreshold": "tma_info_pipeline_strings_cycles > 0.1",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Fraction of cycles the processor is waiting yet unhalted; covering legacy PAUSE instruction, as well as C0.1 / C0.2 power-performance optimized states",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.C0_WAIT@ / tma_info_thread_clks",
+ "MetricGroup": "C0Wait",
+ "MetricName": "tma_info_system_c0_wait",
+ "MetricThreshold": "tma_info_system_c0_wait > 0.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc\\,cpu=cpu_core@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency",
+ "MetricName": "tma_info_system_core_frequency",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.REF_TSC@ / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.REF_TSC@ / msr@tsc\\,cpu=cpu_core@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / tma_info_system_time / 1e3",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
- "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_mem_bandwidth, tma_sq_full",
+ "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_mem_bandwidth, tma_sq_full",
"Unit": "cpu_core"
},
{
"BriefDescription": "Giga Floating Point Operations Per Second",
- "MetricExpr": "(cpu_core@FP_ARITH_INST_RETIRED.SCALAR_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.SCALAR_DOUBLE@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * (cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE@ + cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE@) + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@) / 1e9 / duration_time",
+ "MetricExpr": "(cpu_core@FP_ARITH_INST_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_INST_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE@) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine.",
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width",
"Unit": "cpu_core"
},
{
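
The system-level hunk above splits CPU utilization into two metrics. Taking the BriefDescriptions at face value, tma_info_system_cpus_utilized is the unhalted reference-cycle count divided by the TSC over the same window (the average number of unhalted CPUs), and tma_info_system_cpu_utilization divides that by the number of online CPUs. A toy evaluation with assumed values; the TSC handling here is simplified to a single wall-clock window:

    # Assumed values for a 10-second run on a 24-CPU machine (not measured).
    REF_TSC_CYCLES  = 1.2e11   # cpu_core@CPU_CLK_UNHALTED.REF_TSC@, all CPUs
    TSC_HZ          = 2.5e9    # invariant TSC frequency (assumed)
    seconds         = 10.0
    num_cpus_online = 24       # #num_cpus_online in the MetricExpr

    tsc = TSC_HZ * seconds                       # stand-in for msr@tsc@
    cpus_utilized = REF_TSC_CYCLES / tsc         # average unhalted CPUs
    cpu_utilization = cpus_utilized / num_cpus_online

    print(f"tma_info_system_cpus_utilized   = {cpus_utilized:.2f}")
    print(f"tma_info_system_cpu_utilization = {cpu_utilization:.1%}")
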
@@ -1807,14 +1956,14 @@
},
{
"BriefDescription": "Cycles Per Instruction for the Operating System (OS) Kernel mode",
- "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / cpu_core@INST_RETIRED.ANY_P@k",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD_P@k / cpu_core@INST_RETIRED.ANY_P@k",
"MetricGroup": "OS",
"MetricName": "tma_info_system_kernel_cpi",
"Unit": "cpu_core"
},
{
"BriefDescription": "Fraction of cycles spent in the Operating System (OS) Kernel mode",
- "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / CPU_CLK_UNHALTED.THREAD",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD_P@k / cpu_core@CPU_CLK_UNHALTED.THREAD@",
"MetricGroup": "OS",
"MetricName": "tma_info_system_kernel_utilization",
"MetricThreshold": "tma_info_system_kernel_utilization > 0.05",
@@ -1830,7 +1979,6 @@
},
{
"BriefDescription": "Average latency of data read request to external memory (in nanoseconds)",
- "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(UNC_ARB_TRK_OCCUPANCY.RD + UNC_ARB_DAT_OCCUPANCY.RD) / UNC_ARB_TRK_REQUESTS.RD",
"MetricGroup": "Mem;MemoryLat;SoC",
"MetricName": "tma_info_system_mem_read_latency",
@@ -1838,11 +1986,18 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "Average latency of all requests to external memory (in Uncore cycles)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(UNC_ARB_TRK_OCCUPANCY.ALL + UNC_ARB_DAT_OCCUPANCY.RD) / UNC_ARB_TRK_REQUESTS.ALL",
- "MetricGroup": "Mem;SoC",
- "MetricName": "tma_info_system_mem_request_latency",
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD_P@ / cpu_core@CPU_CLK_UNHALTED.THREAD@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "power@energy\\-pkg@ * 61 / (tma_info_system_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power",
"Unit": "cpu_core"
},
{
@@ -1860,13 +2015,28 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
- "MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
+ "MetricExpr": "tma_info_thread_clks / cpu_core@CPU_CLK_UNHALTED.REF_TSC@",
"MetricGroup": "Power",
"MetricName": "tma_info_system_turbo_utilization",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
"MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD@",
"MetricGroup": "Pipeline",
@@ -1882,7 +2052,7 @@
},
{
"BriefDescription": "The ratio of Executed- by Issued-Uops",
- "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / UOPS_ISSUED.ANY",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / cpu_core@UOPS_ISSUED.ANY@",
"MetricGroup": "Cor;Pipeline",
"MetricName": "tma_info_thread_execute_per_issue",
"PublicDescription": "The ratio of Executed- by Issued-Uops. Ratio > 1 suggests high rate of uop micro-fusions. Ratio < 1 suggest high rate of \"execute\" at rename stage.",
@@ -1911,23 +2081,32 @@
},
{
"BriefDescription": "Uops Per Instruction",
- "MetricExpr": "tma_retiring * tma_info_thread_slots / INST_RETIRED.ANY",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu_core@INST_RETIRED.ANY@",
"MetricGroup": "Pipeline;Ret;Retire",
"MetricName": "tma_info_thread_uoppi",
"MetricThreshold": "tma_info_thread_uoppi > 1.05",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Instruction per taken branch",
- "MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN",
+ "BriefDescription": "Uops per taken branch",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
"MetricThreshold": "tma_info_thread_uptb < 9",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This metric represents fraction of cycles where the Integer Divider unit was active.",
+ "MetricExpr": "tma_divider - tma_fp_divider",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_int_divider",
+ "MetricThreshold": "tma_int_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric represents overall Integer (Int) select operations fraction the CPU has executed (retired)",
- "MetricExpr": "tma_int_vector_128b + tma_int_vector_256b + tma_shuffles",
+ "MetricExpr": "tma_int_vector_128b + tma_int_vector_256b",
"MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_int_operations",
"MetricThreshold": "tma_int_operations > 0.1 & tma_light_operations > 0.6",
@@ -1946,19 +2125,19 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
+ "BriefDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
"MetricExpr": "(cpu_core@INT_VEC_RETIRED.ADD_256@ + cpu_core@INT_VEC_RETIRED.MUL_256@ + cpu_core@INT_VEC_RETIRED.VNNI_256@) / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P",
"MetricName": "tma_int_vector_256b",
"MetricThreshold": "tma_int_vector_256b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
- "PublicDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "cpu_core@ICACHE_TAG.STALLS@ / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
@@ -1966,29 +2145,49 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@ - cpu_core@MEMORY_ACTIVITY.STALLS_L1D_MISS@) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache",
+ "MetricExpr": "min(2 * (cpu_core@MEM_INST_RETIRED.ALL_LOADS@ - cpu_core@MEM_LOAD_RETIRED.FB_HIT@ - cpu_core@MEM_LOAD_RETIRED.L1_MISS@) * 20 / 100, max(cpu_core@CYCLE_ACTIVITY.CYCLES_MEM_ANY@ - cpu_core@MEMORY_ACTIVITY.CYCLES_L1D_MISS@, 0)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_dependency",
+ "MetricThreshold": "tma_l1_latency_dependency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache. The short latency of the L1D cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricExpr": "(cpu_core@MEMORY_ACTIVITY.STALLS_L1D_MISS@ - cpu_core@MEMORY_ACTIVITY.STALLS_L2_MISS@) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited)",
+ "MetricExpr": "3 * tma_info_system_core_frequency * cpu_core@MEM_LOAD_RETIRED.L2_HIT@ * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
+ "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_l2_bound_group",
+ "MetricName": "tma_l2_hit_latency",
+ "MetricThreshold": "tma_l2_hit_latency > 0.05 & (tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited). Avoiding L1 cache misses (i.e. L1 misses/L2 hits) will improve the latency. Sample with: MEM_LOAD_RETIRED.L2_HIT",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricExpr": "(cpu_core@MEMORY_ACTIVITY.STALLS_L2_MISS@ - cpu_core@MEMORY_ACTIVITY.STALLS_L3_MISS@) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
@@ -1996,12 +2195,12 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
- "MetricExpr": "9 * tma_info_system_average_frequency * cpu_core@MEM_LOAD_RETIRED.L3_HIT@ * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "MetricExpr": "9 * tma_info_system_core_frequency * (cpu_core@MEM_LOAD_RETIRED.L3_HIT@ * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_memory_latency, tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_bottleneck_data_cache_memory_latency, tma_mem_latency",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -2011,7 +2210,7 @@
"MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_lcp",
"MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
+ "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -2022,7 +2221,7 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
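The MetricExpr strings in these hunks are evaluated by perf's metric engine: each cpu_core@EVENT@ term is a raw counter value and each tma_info_* term is another metric defined elsewhere in the same file. As a rough illustration only (not part of the patch), a minimal Python sketch of how the tma_l2_hit_latency expression shown above could be computed from hypothetical counts; the numeric values and variable names are made up for the example.

    # Illustrative only: hypothetical counts standing in for what perf stat
    # would read from the cpu_core PMU.
    counts = {
        "MEM_LOAD_RETIRED.L2_HIT": 1_200_000,
        "MEM_LOAD_RETIRED.FB_HIT": 300_000,
        "MEM_LOAD_RETIRED.L1_MISS": 2_000_000,
    }
    core_frequency_ghz = 3.0      # stands in for tma_info_system_core_frequency
    thread_clks = 5_000_000_000   # stands in for tma_info_thread_clks

    # Literal transcription of the MetricExpr for tma_l2_hit_latency:
    # 3 * freq * L2_HIT * (1 + FB_HIT / L1_MISS / 2) / thread_clks
    tma_l2_hit_latency = (
        3 * core_frequency_ghz * counts["MEM_LOAD_RETIRED.L2_HIT"]
        * (1 + counts["MEM_LOAD_RETIRED.FB_HIT"] / counts["MEM_LOAD_RETIRED.L1_MISS"] / 2)
        / thread_clks
    )
    print(f"tma_l2_hit_latency ~ {tma_l2_hit_latency:.4%}")  # ScaleUnit is 100%

In a real run the same value would come from something like "perf stat -M tma_l2_hit_latency", which resolves the expression against live counters.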
@@ -2055,12 +2254,40 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@ / (cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_1g",
+ "MetricThreshold": "tma_load_stlb_miss_1g > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ / (cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_2m",
+ "MetricThreshold": "tma_load_stlb_miss_2m > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ / (cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_4k",
+ "MetricThreshold": "tma_load_stlb_miss_4k > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(16 * max(0, cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ - cpu_core@L2_RQSTS.ALL_RFO@) + cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@ * (10 * cpu_core@L2_RQSTS.RFO_HIT@ + min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO@))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -2069,7 +2296,7 @@
"MetricExpr": "(cpu_core@LSD.CYCLES_ACTIVE@ - cpu_core@LSD.CYCLES_OK@) / tma_info_core_core_clks / 2",
"MetricGroup": "FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_lsd",
- "MetricThreshold": "tma_lsd > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 6 > 0.35)",
+ "MetricThreshold": "tma_lsd > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit. LSD typically does well sustaining Uop supply. However; in some rare cases; optimal uop-delivery could not be reached for small loops whose size (in terms of number of uops) does not suit well the LSD structure.",
"ScaleUnit": "100%",
"Unit": "cpu_core"
@@ -2077,37 +2304,37 @@
{
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_bottleneck_memory_synchronization, tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD@) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_memory_latency, tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_latency, tma_l3_hit_latency",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
- "MetricExpr": "cpu_core@topdown\\-mem\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@) + 0 * tma_info_thread_slots",
+ "MetricExpr": "cpu_core@topdown\\-mem\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
"MetricGroup": "Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
"MetricName": "tma_memory_bound",
"MetricThreshold": "tma_memory_bound > 0.2 & tma_backend_bound > 0.2",
@@ -2118,11 +2345,10 @@
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to LFENCE Instructions.",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "13 * cpu_core@MISC2_RETIRED.LFENCE@ / tma_info_thread_clks",
- "MetricGroup": "TopdownL6;tma_L6_group;tma_serializing_operation_group",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
"MetricName": "tma_memory_fence",
- "MetricThreshold": "tma_memory_fence > 0.05 & (tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))))",
+ "MetricThreshold": "tma_memory_fence > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -2141,17 +2367,17 @@
"MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS",
"MetricName": "tma_microcode_sequencer",
"MetricThreshold": "tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1",
- "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
+ "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
"MetricExpr": "tma_branch_mispredicts / tma_bad_speculation * cpu_core@INT_MISC.CLEAR_RESTEER_CYCLES@ / tma_info_thread_clks",
- "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
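The MetricThreshold strings encode when a node is worth highlighting: comparisons yield 0/1 and the & and | operators act as logical AND/OR when perf evaluates the threshold, so a child node is only flagged while its TopDown parents also exceed their thresholds. A hedged sketch with made-up metric values, mirroring the tma_mispredicts_resteers threshold above (the dictionary and its values are hypothetical, not produced by the patch):

    # Hypothetical, already-computed metric values (fractions of slots/cycles).
    metrics = {
        "tma_frontend_bound": 0.22,
        "tma_fetch_latency": 0.14,
        "tma_branch_resteers": 0.07,
        "tma_mispredicts_resteers": 0.06,
    }

    # MetricThreshold: tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05
    #   & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))
    flagged = (
        metrics["tma_mispredicts_resteers"] > 0.05
        and metrics["tma_branch_resteers"] > 0.05
        and metrics["tma_fetch_latency"] > 0.1
        and metrics["tma_frontend_bound"] > 0.15
    )
    print("highlight tma_mispredicts_resteers:", flagged)  # True for these values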
@@ -2160,35 +2386,44 @@
"MetricExpr": "(cpu_core@IDQ.MITE_CYCLES_ANY@ - cpu_core@IDQ.MITE_CYCLES_OK@) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 6 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
- "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued",
+ "BriefDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles)",
"MetricExpr": "160 * cpu_core@ASSISTS.SSE_AVX_MIX@ / tma_info_thread_clks",
"MetricGroup": "TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group",
"MetricName": "tma_mixing_vectors",
"MetricThreshold": "tma_mixing_vectors > 0.05",
- "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "PublicDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the Microcode Sequencer (MS) unit - see Microcode_Sequencer node for details.",
+ "MetricExpr": "max(cpu_core@IDQ.MS_CYCLES_ANY@, cpu_core@UOPS_RETIRED.MS\\,cmask\\=1@ / (cpu_core@UOPS_RETIRED.SLOTS@ / cpu_core@UOPS_ISSUED.ANY@)) / tma_info_core_core_clks / 2.4",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_ms",
+ "MetricThreshold": "tma_ms > 0.05 & tma_fetch_bandwidth > 0.2",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
- "MetricExpr": "3 * cpu_core@UOPS_RETIRED.MS\\,cmask\\=1\\,edge@ / (tma_retiring * tma_info_thread_slots / cpu_core@UOPS_ISSUED.ANY@) / tma_info_thread_clks",
+ "MetricExpr": "3 * cpu_core@UOPS_RETIRED.MS\\,cmask\\=1\\,edge@ / (cpu_core@UOPS_RETIRED.SLOTS@ / cpu_core@UOPS_ISSUED.ANY@) / tma_info_thread_clks",
"MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO",
"MetricName": "tma_ms_switches",
"MetricThreshold": "tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: FRONTEND_RETIRED.MS_FLOWS. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
+ "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: FRONTEND_RETIRED.MS_FLOWS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused",
"MetricExpr": "tma_light_operations * (cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ - cpu_core@INST_RETIRED.MACRO_FUSED@) / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_non_fused_branches",
"MetricThreshold": "tma_non_fused_branches > 0.1 & tma_light_operations > 0.6",
"PublicDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused.",
@@ -2198,16 +2433,17 @@
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
"MetricExpr": "tma_light_operations * cpu_core@INST_RETIRED.NOP@ / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
"MetricName": "tma_nop_instructions",
- "MetricThreshold": "tma_nop_instructions > 0.1 & tma_light_operations > 0.6",
+ "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
"PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes",
- "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_int_operations + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches + tma_nop_instructions))",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_int_operations + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches))",
"MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_other_light_ops",
"MetricThreshold": "tma_other_light_ops > 0.3 & tma_light_operations > 0.6",
@@ -2216,6 +2452,24 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
+ "MetricExpr": "max(tma_branch_mispredicts * (1 - cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ / (cpu_core@INT_MISC.CLEARS_COUNT@ - cpu_core@MACHINE_CLEARS.COUNT@)), 0.0001)",
+ "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_other_mispredicts",
+ "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
+ "MetricExpr": "max(tma_machine_clears * (1 - cpu_core@MACHINE_CLEARS.MEMORY_ORDERING@ / cpu_core@MACHINE_CLEARS.COUNT@), 0.0001)",
+ "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_other_nukes",
+ "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Page Faults",
"MetricExpr": "99 * cpu_core@ASSISTS.PAGE_FAULT@ / tma_info_thread_slots",
"MetricGroup": "TopdownL5;tma_L5_group;tma_assists_group",
@@ -2246,18 +2500,19 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "cpu_core@UOPS_DISPATCHED.PORT_6@ / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
- "MetricExpr": "((cpu_core@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + tma_serializing_operation * (cpu_core@CYCLE_ACTIVITY.STALLS_TOTAL@ - cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@) + (cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ + tma_retiring * cpu_core@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=0xc@)) / tma_info_thread_clks if cpu_core@ARITH.DIV_ACTIVE@ < cpu_core@CYCLE_ACTIVITY.STALLS_TOTAL@ - cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@ else (cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ + tma_retiring * cpu_core@EXE_ACTIVITY.2_PORTS_UTIL\\,umask\\=0xc@) / tma_info_thread_clks)",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "((tma_ports_utilized_0 * tma_info_thread_clks + (cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ + tma_retiring * cpu_core@EXE_ACTIVITY.2_3_PORTS_UTIL@)) / tma_info_thread_clks if cpu_core@ARITH.DIV_ACTIVE@ < cpu_core@CYCLE_ACTIVITY.STALLS_TOTAL@ - cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@ else (cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ + tma_retiring * cpu_core@EXE_ACTIVITY.2_3_PORTS_UTIL@) / tma_info_thread_clks)",
"MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_ports_utilization",
"MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
@@ -2267,7 +2522,8 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "cpu_core@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ / tma_info_thread_clks + tma_serializing_operation * (cpu_core@CYCLE_ACTIVITY.STALLS_TOTAL@ - cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@) / tma_info_thread_clks",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
+ "MetricExpr": "(cpu_core@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ + max(cpu_core@RS.EMPTY_RESOURCE@ - cpu_core@RESOURCE_STALLS.SCOREBOARD@, 0)) / tma_info_thread_clks * (cpu_core@CYCLE_ACTIVITY.STALLS_TOTAL@ - cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@) / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -2277,6 +2533,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
"MetricExpr": "cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_1",
@@ -2287,7 +2544,6 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "cpu_core@EXE_ACTIVITY.2_PORTS_UTIL@ / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_2",
@@ -2298,11 +2554,10 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "cpu_core@UOPS_EXECUTED.CYCLES_GE_3@ / tma_info_thread_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
"ScaleUnit": "100%",
"Unit": "cpu_core"
@@ -2310,8 +2565,8 @@
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "cpu_core@topdown\\-retiring@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@) + 0 * tma_info_thread_slots",
- "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricExpr": "cpu_core@topdown\\-retiring@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -2321,30 +2576,30 @@
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
- "MetricExpr": "cpu_core@RESOURCE_STALLS.SCOREBOARD@ / tma_info_thread_clks",
- "MetricGroup": "PortsUtil;TopdownL5;tma_L5_group;tma_issueSO;tma_ports_utilized_0_group",
+ "MetricExpr": "cpu_core@RESOURCE_STALLS.SCOREBOARD@ / tma_info_thread_clks + tma_c02_wait",
+ "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
"MetricName": "tma_serializing_operation",
- "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)))",
+ "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
- "BriefDescription": "This metric represents Shuffle (cross \"vector lane\" data transfers) uops fraction the CPU has retired.",
- "MetricExpr": "cpu_core@INT_VEC_RETIRED.SHUFFLES@ / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "HPC;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group",
- "MetricName": "tma_shuffles",
- "MetricThreshold": "tma_shuffles > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer)",
+ "MetricExpr": "tma_light_operations * cpu_core@INT_VEC_RETIRED.SHUFFLES@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "HPC;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
+ "MetricName": "tma_shuffles_256b",
+ "MetricThreshold": "tma_shuffles_256b > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer). Shuffles may incur slow cross \"vector lane\" data transfers.",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "cpu_core@CPU_CLK_UNHALTED.PAUSE@ / tma_info_thread_clks",
- "MetricGroup": "TopdownL6;tma_L6_group;tma_serializing_operation_group",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
"MetricName": "tma_slow_pause",
- "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))))",
+ "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: CPU_CLK_UNHALTED.PAUSE_INST",
"ScaleUnit": "100%",
"Unit": "cpu_core"
@@ -2354,7 +2609,7 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu_core@LD_BLOCKS.NO_SR@ / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%",
"Unit": "cpu_core"
@@ -2372,10 +2627,10 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(cpu_core@XQ.FULL_CYCLES@ + cpu_core@L1D_PEND_MISS.L2_STALLS@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth",
+ "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
@@ -2402,7 +2657,7 @@
{
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricExpr": "(cpu_core@MEM_STORE_RETIRED.L2_HIT@ * 10 * (1 - cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@) + (1 - cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@) * min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO@)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -2438,6 +2693,33 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@ / (cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_1g",
+ "MetricThreshold": "tma_store_stlb_miss_1g > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ / (cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_2m",
+ "MetricThreshold": "tma_store_stlb_miss_2m > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ / (cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_4k",
+ "MetricThreshold": "tma_store_stlb_miss_4k > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores",
"MetricExpr": "9 * cpu_core@OCR.STREAMING_WR.ANY_RESPONSE@ / tma_info_thread_clks",
"MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group",
@@ -2450,16 +2732,16 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "cpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES@ / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit). Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
"ScaleUnit": "100%",
"Unit": "cpu_core"
},
{
"BriefDescription": "This metric serves as an approximation of legacy x87 usage",
- "MetricExpr": "tma_retiring * cpu_core@UOPS_EXECUTED.X87@ / UOPS_EXECUTED.THREAD",
+ "MetricExpr": "tma_retiring * cpu_core@UOPS_EXECUTED.X87@ / cpu_core@UOPS_EXECUTED.THREAD@",
"MetricGroup": "Compute;TopdownL4;tma_L4_group;tma_fp_arith_group",
"MetricName": "tma_x87_use",
"MetricThreshold": "tma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/cache.json b/tools/perf/pmu-events/arch/x86/alderlake/cache.json
index b3d7f8fb50df..4cd535baf703 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/cache.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D.HWPF_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.HWPF_MISS",
"SampleAfterValue": "1000003",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts the number of cache lines replaced in L1 data cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x48",
@@ -38,6 +42,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event L1D_PEND_MISS.L2_STALLS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.L2_STALL",
@@ -47,6 +52,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.L2_STALLS",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -56,6 +62,7 @@
},
{
"BriefDescription": "Number of L1D misses that are outstanding",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
@@ -65,6 +72,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -75,6 +83,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -83,7 +92,28 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Modified cache lines that are evicted by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.NON_SILENT",
+ "PublicDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines are in Modified state. Modified lines are written back to L3",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.SILENT",
+ "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache. These lines are typically in Shared or Exclusive state. A non-threaded event.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Cache lines that have been L2 hardware prefetched but not used by demand accesses",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "L2_LINES_OUT.USELESS_HWPF",
"PublicDescription": "Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cache",
@@ -92,7 +122,17 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the total number of L2 Cache accesses. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.ALL",
+ "PublicDescription": "Counts the total number of L2 Cache Accesses, includes hits, misses, rejects front door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only. Counts on a per core basis.",
+ "SampleAfterValue": "200003",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "All accesses to L2 cache [This event is alias to L2_RQSTS.REFERENCES]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.ALL",
"PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.REFERENCES]",
@@ -101,7 +141,28 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of L2 Cache accesses that resulted in a hit. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.HIT",
+ "PublicDescription": "Counts the number of L2 Cache accesses that resulted in a hit from a front door request only (does not include rejects or recycles), Counts on a per core basis.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 Cache accesses that resulted in a miss. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.MISS",
+ "PublicDescription": "Counts the number of L2 Cache accesses that resulted in a miss from a front door request only (does not include rejects or recycles). Counts on a per core basis.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Read requests with true-miss in L2 cache. [This event is alias to L2_RQSTS.MISS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.MISS",
"PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.MISS]",
@@ -111,6 +172,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts the total number of L2 code requests.",
@@ -120,6 +182,7 @@
},
{
"BriefDescription": "Demand Data Read access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts Demand Data Read requests accessing the L2 cache. These requests may hit or miss L2 cache. True-miss exclude misses that were merged with ongoing L2 misses. An access is counted once.",
@@ -129,6 +192,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"PublicDescription": "Counts demand requests that miss L2 cache.",
@@ -138,6 +202,7 @@
},
{
"BriefDescription": "L2_RQSTS.ALL_HWPF",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_HWPF",
"SampleAfterValue": "200003",
@@ -146,6 +211,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -155,6 +221,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
@@ -164,6 +231,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Counts L2 cache misses when fetching instructions.",
@@ -173,6 +241,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.",
@@ -182,6 +251,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "Counts demand Data Read requests with true-miss in the L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. An access is counted once.",
@@ -191,6 +261,7 @@
},
{
"BriefDescription": "L2_RQSTS.HWPF_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.HWPF_MISS",
"SampleAfterValue": "200003",
@@ -199,6 +270,7 @@
},
{
"BriefDescription": "Read requests with true-miss in L2 cache. [This event is alias to L2_REQUEST.MISS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
"PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.MISS]",
@@ -208,6 +280,7 @@
},
{
"BriefDescription": "All accesses to L2 cache [This event is alias to L2_REQUEST.ALL]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
"PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.ALL]",
@@ -217,6 +290,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
@@ -226,6 +300,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
@@ -235,6 +310,7 @@
},
{
"BriefDescription": "SW prefetch requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_HIT",
"PublicDescription": "Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -244,6 +320,7 @@
},
{
"BriefDescription": "SW prefetch requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_MISS",
"PublicDescription": "Counts Software prefetch requests that miss the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -252,16 +329,28 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x23",
+ "EventName": "L2_TRANS.L2_WB",
+ "PublicDescription": "Counts L2 writebacks that access L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
- "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"UMask": "0x41",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -271,15 +360,17 @@
},
{
"BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
- "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"UMask": "0x4f",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -289,6 +380,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
@@ -298,6 +390,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_DRAM_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in DRAM or MMIO (non-DRAM).",
@@ -307,6 +400,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_L2_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the L2 cache.",
@@ -316,6 +410,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the LLC or other core with HITE/F/M.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_LLC_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
@@ -325,6 +420,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD",
"SampleAfterValue": "200003",
@@ -333,6 +429,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_DRAM_HIT",
"SampleAfterValue": "200003",
@@ -341,6 +438,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_L2_HIT",
"SampleAfterValue": "200003",
@@ -349,6 +447,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the LLC or other core with HITE/F/M.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_LLC_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
@@ -358,94 +457,95 @@
},
{
"BriefDescription": "Retired load instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_LOADS",
- "PEBS": "1",
- "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW.",
+ "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x81",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_STORES",
- "PEBS": "1",
- "PublicDescription": "Counts all retired store instructions.",
+ "PublicDescription": "Counts all retired store instructions. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x82",
"Unit": "cpu_core"
},
{
"BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ANY",
- "PEBS": "1",
- "PublicDescription": "Counts all retired memory instructions - loads and stores.",
+ "PublicDescription": "Counts all retired memory instructions - loads and stores. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x83",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.LOCK_LOADS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with locked access.",
+ "PublicDescription": "Counts retired load instructions with locked access. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x21",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions that split across a cacheline boundary.",
+ "PublicDescription": "Counts retired load instructions that split across a cacheline boundary. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x41",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_STORES",
- "PEBS": "1",
- "PublicDescription": "Counts retired store instructions that split across a cacheline boundary.",
+ "PublicDescription": "Counts retired store instructions that split across a cacheline boundary. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x42",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
- "PEBS": "1",
- "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB).",
+ "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x11",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
- "PEBS": "1",
- "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB).",
+ "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x12",
"Unit": "cpu_core"
},
{
"BriefDescription": "Completed demand load uops that miss the L1 d-cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "MEM_LOAD_COMPLETED.L1_MISS_ANY",
"PublicDescription": "Number of completed demand load requests that missed the L1 data cache including shadow misses (FB hits, merge to an ongoing L1D miss)",
@@ -455,201 +555,262 @@
},
{
"BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3.",
+ "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x4",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x2",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3.",
+ "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x4",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required.",
+ "PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x8",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x2",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from local dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
- "PEBS": "1",
- "PublicDescription": "Retired load instructions which data sources missed L3 but serviced from local DRAM.",
+ "PublicDescription": "Retired load instructions which data sources missed L3 but serviced from local DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions with at least 1 uncacheable load or lock.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd4",
"EventName": "MEM_LOAD_MISC_RETIRED.UC",
- "PEBS": "1",
- "PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock).",
+ "PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock). Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x4",
"Unit": "cpu_core"
},
{
"BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.FB_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready.",
+ "PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x40",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x8",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with L2 cache hits as data sources.",
+ "PublicDescription": "Counts retired load instructions with L2 cache hits as data sources. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x2",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions missed L2 cache as data sources.",
+ "PublicDescription": "Counts retired load instructions missed L2 cache as data sources. Available PDIST counters: 0",
"SampleAfterValue": "100021",
"UMask": "0x10",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100021",
"UMask": "0x4",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "50021",
"UMask": "0x20",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in DRAM.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x80",
"Unit": "cpu_atom"
},
{
+ "BriefDescription": "Counts the number of load uops retired that hit in the L3 cache, in which a snoop was required and modified data was forwarded from another core or module.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.HITM",
+ "SampleAfterValue": "200003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that hit in the L1 data cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that miss in the L1 data cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Counts the number of load uops retired that hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x2",
"Unit": "cpu_atom"
},
{
+ "BriefDescription": "Counts the number of load uops retired that miss in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Counts the number of load uops retired that hit in the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L3_HIT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x4",
"Unit": "cpu_atom"
},
{
+ "BriefDescription": "Counts the number of load uops retired that hit in the L3 cache, in which a snoop was required, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_UOPS_RETIRED_MISC.HIT_E_F",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that miss in the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_UOPS_RETIRED_MISC.L3_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Counts the number of cycles that uops are blocked for any of the following reasons: load buffer, store buffer or RSV full.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.ALL",
"SampleAfterValue": "20003",
@@ -658,6 +819,7 @@
},
{
"BriefDescription": "Counts the number of cycles that uops are blocked due to a load buffer full condition.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.LD_BUF",
"SampleAfterValue": "20003",
@@ -666,6 +828,7 @@
},
{
"BriefDescription": "Counts the number of cycles that uops are blocked due to an RSV full condition.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.RSV",
"SampleAfterValue": "20003",
@@ -674,6 +837,7 @@
},
{
"BriefDescription": "Counts the number of cycles that uops are blocked due to a store buffer full condition.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.ST_BUF",
"SampleAfterValue": "20003",
@@ -682,6 +846,7 @@
},
{
"BriefDescription": "MEM_STORE_RETIRED.L2_HIT",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "MEM_STORE_RETIRED.L2_HIT",
"SampleAfterValue": "200003",
@@ -690,10 +855,10 @@
},
{
"BriefDescription": "Counts the number of load uops retired.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts the total number of load uops retired.",
"SampleAfterValue": "200003",
"UMask": "0x81",
@@ -701,10 +866,10 @@
},
{
"BriefDescription": "Counts the number of store uops retired.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of store uops retired.",
"SampleAfterValue": "200003",
"UMask": "0x82",
@@ -712,12 +877,12 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 128 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 128 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
@@ -725,12 +890,12 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 16 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 16 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
@@ -738,12 +903,12 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 256 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 256 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
@@ -751,12 +916,12 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 32 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 32 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
@@ -764,12 +929,12 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 4 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 4 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
@@ -777,12 +942,12 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 512 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 512 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
@@ -790,12 +955,12 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 64 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 64 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
@@ -803,33 +968,73 @@
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 8 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 8 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5",
"Unit": "cpu_atom"
},
{
+ "BriefDescription": "Counts the number of load uops retired that performed one or more locks.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x21",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Counts the number of retired split load uops.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x41",
"Unit": "cpu_atom"
},
{
+ "BriefDescription": "Counts the total number of load and store uops retired that missed in the second level TLB.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x13",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the second Level TLB.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x11",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of store ops retired that miss in the second level TLB.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x12",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Counts the number of stores uops retired. Counts with or without PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
- "PEBS": "2",
"PublicDescription": "Counts the number of stores uops retired. Counts with or without PEBS enabled. If PEBS is enabled and a PEBS record is generated, will populate PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x6",
@@ -837,6 +1042,7 @@
},
{
"BriefDescription": "Retired memory uops for any access",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe5",
"EventName": "MEM_UOP_RETIRED.ANY",
"PublicDescription": "Number of retired micro-operations (uops) for load or store memory accesses",
@@ -845,97 +1051,356 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.COREWB_M.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10008",
+ "PublicDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts writebacks of modified cachelines that hit in the L3 or were snooped from another core's caches.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.COREWB_M.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F803C0008",
+ "PublicDescription": "Counts writebacks of modified cachelines that hit in the L3 or were snooped from another core's caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts writebacks of non-modified cachelines that hit in the L3 or were snooped from another core's caches.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.COREWB_NONM.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F803C1000",
+ "PublicDescription": "Counts writebacks of non-modified cachelines that hit in the L3 or were snooped from another core's caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F803C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x4003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F803C0001",
+ "MSRValue": "0x1F803C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop hit in another cores caches, data forwarding is required as the data is modified. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit in another cores caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop hit in another cores caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F803C0002",
+ "MSRValue": "0x1F803C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit in another cores caches, data forwarding is required as the data is modified. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x4003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts all data read, code read, RFO and ITOM requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F803C4477",
+ "PublicDescription": "Counts all data read, code read, RFO and ITOM requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x14000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F803C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT.SNOOP_HIT_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x4003C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "OFFCORE_REQUESTS.ALL_REQUESTS",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"SampleAfterValue": "100003",
@@ -944,6 +1409,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.DATA_RD",
"PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -952,7 +1418,18 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Cacheable and noncacheable code read requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
+ "PublicDescription": "Counts both cacheable and non-cacheable code read requests.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -961,7 +1438,18 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
+ "PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "This event is deprecated. Refer to new event OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"Errata": "ADL038",
"EventCode": "0x20",
@@ -972,6 +1460,7 @@
},
{
"BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "ADL038",
"EventCode": "0x20",
@@ -981,7 +1470,19 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Cycles with offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Cycles where at least 1 outstanding demand data read request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
@@ -990,17 +1491,19 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "For every cycle where the core is waiting on at least 1 outstanding Demand RFO request, increments by 1.",
+ "BriefDescription": "Cycles with offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
- "PublicDescription": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
+ "PublicDescription": "Counts the number of offcore outstanding demand rfo Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
"SampleAfterValue": "1000003",
"UMask": "0x4",
"Unit": "cpu_core"
},
{
"BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
+ "Counter": "0,1,2,3",
"Errata": "ADL038",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
@@ -1009,7 +1512,18 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore, every cycle.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -1019,6 +1533,7 @@
},
{
"BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "SQ_MISC.BUS_LOCK",
"PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.",
@@ -1027,7 +1542,17 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.ANY",
+ "SampleAfterValue": "100003",
+ "UMask": "0xf",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Number of PREFETCHNTA instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.NTA",
"PublicDescription": "Counts the number of PREFETCHNTA instructions executed.",
@@ -1037,6 +1562,7 @@
},
{
"BriefDescription": "Number of PREFETCHW instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
"PublicDescription": "Counts the number of PREFETCHW instructions executed.",
@@ -1046,6 +1572,7 @@
},
{
"BriefDescription": "Number of PREFETCHT0 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.T0",
"PublicDescription": "Counts the number of PREFETCHT0 instructions executed.",
@@ -1055,6 +1582,7 @@
},
{
"BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.T1_T2",
"PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.",
@@ -1064,6 +1592,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to instruction cache misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ICACHE",
"SampleAfterValue": "1000003",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/floating-point.json b/tools/perf/pmu-events/arch/x86/alderlake/floating-point.json
index c8ba96c4a7f8..62fd70f220e5 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/floating-point.json
@@ -1,6 +1,16 @@
[
{
+ "BriefDescription": "Counts the number of cycles the floating point divider is in the loop stage.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.FPDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "ARITH.FPDIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb0",
"EventName": "ARITH.FPDIV_ACTIVE",
@@ -9,7 +19,17 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of floating point divider uops executed per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.FPDIV_UOPS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Counts all microcode FP assists.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.FP",
"PublicDescription": "Counts all microcode Floating Point assists.",
@@ -19,6 +39,7 @@
},
{
"BriefDescription": "ASSISTS.SSE_AVX_MIX",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.SSE_AVX_MIX",
"SampleAfterValue": "1000003",
@@ -26,7 +47,8 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "FP_ARITH_DISPATCHED.PORT_0",
+ "BriefDescription": "FP_ARITH_DISPATCHED.PORT_0 [This event is alias to FP_ARITH_DISPATCHED.V0]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.PORT_0",
"SampleAfterValue": "2000003",
@@ -34,7 +56,8 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "FP_ARITH_DISPATCHED.PORT_1",
+ "BriefDescription": "FP_ARITH_DISPATCHED.PORT_1 [This event is alias to FP_ARITH_DISPATCHED.V1]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.PORT_1",
"SampleAfterValue": "2000003",
@@ -42,7 +65,8 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "FP_ARITH_DISPATCHED.PORT_5",
+ "BriefDescription": "FP_ARITH_DISPATCHED.PORT_5 [This event is alias to FP_ARITH_DISPATCHED.V2]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.PORT_5",
"SampleAfterValue": "2000003",
@@ -50,7 +74,35 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "FP_ARITH_DISPATCHED.V0 [This event is alias to FP_ARITH_DISPATCHED.PORT_0]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V0",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.V1 [This event is alias to FP_ARITH_DISPATCHED.PORT_1]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V1",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.V2 [This event is alias to FP_ARITH_DISPATCHED.PORT_5]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V2",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -60,6 +112,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -69,6 +122,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -78,6 +132,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -87,6 +142,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -96,6 +152,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -105,6 +162,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -114,6 +172,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -123,6 +182,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"PublicDescription": "Number of any Vector retired FP arithmetic instructions. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -132,6 +192,7 @@
},
{
"BriefDescription": "Counts the number of floating point operations retired that required microcode assist.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.FP_ASSIST",
"PublicDescription": "Counts the number of floating point operations retired that required microcode assist, which is not a reflection of the number of FP operations, instructions or uops.",
@@ -141,9 +202,9 @@
},
{
"BriefDescription": "Counts the number of floating point divide uops retired (x87 and SSE, including x87 sqrt).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.FPDIV",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x8",
"Unit": "cpu_atom"
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/frontend.json b/tools/perf/pmu-events/arch/x86/alderlake/frontend.json
index 542ba4a81996..ff3b30c2619a 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number of BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Clears due to Unknown Branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Number of times the front-end is resteered when it finds a branch instruction in a fetch line. This is called Unknown Branch which occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "DECODE.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk.",
@@ -28,6 +31,7 @@
},
{
"BriefDescription": "Cycles the Microcode Sequencer is busy.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "DECODE.MS_BUSY",
"SampleAfterValue": "500009",
@@ -36,6 +40,7 @@
},
{
"BriefDescription": "DSB-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.",
@@ -45,232 +50,235 @@
},
{
"BriefDescription": "Retired Instructions who experienced DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x1",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
+ "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x11",
- "PEBS": "1",
- "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss.",
+ "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired Instructions who experienced iTLB true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ITLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x14",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss.",
+ "PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L1I_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x12",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss.",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L2_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x13",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss.",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x600106",
- "PEBS": "1",
- "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall.",
+ "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
"MSRIndex": "0x3F7",
"MSRValue": "0x608006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
"MSRIndex": "0x3F7",
"MSRValue": "0x601006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
"MSRIndex": "0x3F7",
"MSRValue": "0x600206",
- "PEBS": "1",
- "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
"MSRIndex": "0x3F7",
"MSRValue": "0x610006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x100206",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
"MSRIndex": "0x3F7",
"MSRValue": "0x602006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
"MSRIndex": "0x3F7",
"MSRValue": "0x600406",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
"MSRIndex": "0x3F7",
"MSRValue": "0x620006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
"MSRIndex": "0x3F7",
"MSRValue": "0x604006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
"MSRIndex": "0x3F7",
"MSRValue": "0x600806",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "FRONTEND_RETIRED.MS_FLOWS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.MS_FLOWS",
"MSRIndex": "0x3F7",
"MSRValue": "0x8",
- "PEBS": "1",
+ "PublicDescription": "FRONTEND_RETIRED.MS_FLOWS Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.STLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x15",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss.",
+ "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.UNKNOWN_BRANCH",
"MSRIndex": "0x3F7",
"MSRValue": "0x17",
- "PEBS": "1",
+ "PublicDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of requests to the instruction cache for one or more bytes of a cache line.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"PublicDescription": "Counts the total number of requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
@@ -280,6 +288,7 @@
},
{
"BriefDescription": "Counts the number of instruction cache misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "Counts the number of missed requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
@@ -289,6 +298,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE_DATA.STALLS",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The decode pipeline works at a 32 Byte granularity.",
@@ -297,7 +307,19 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "ICACHE_DATA.STALL_PERIODS",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0x80",
+ "EventName": "ICACHE_DATA.STALL_PERIODS",
+ "SampleAfterValue": "500009",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_TAG.STALLS",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
@@ -307,6 +329,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_ANY",
@@ -317,16 +340,18 @@
},
{
"BriefDescription": "Cycles DSB is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_OK",
- "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the DSB (Decode Stream Buffer) path. Count includes uops that may 'bypass' the IDQ.",
"SampleAfterValue": "2000003",
"UMask": "0x8",
"Unit": "cpu_core"
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
@@ -336,6 +361,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_ANY",
@@ -346,6 +372,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_OK",
@@ -356,6 +383,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -365,6 +393,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES_ANY",
@@ -375,6 +404,7 @@
},
{
"BriefDescription": "Number of switches from DSB or MITE to the MS",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -386,6 +416,7 @@
},
{
"BriefDescription": "Uops delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS).",
@@ -395,6 +426,7 @@
},
{
"BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c",
"EventName": "IDQ_BUBBLES.CORE",
"PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_UOPS_NOT_DELIVERED.CORE]",
@@ -404,6 +436,7 @@
},
{
"BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "6",
"EventCode": "0x9c",
"EventName": "IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE",
@@ -414,6 +447,7 @@
},
{
"BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x9c",
"EventName": "IDQ_BUBBLES.CYCLES_FE_WAS_OK",
@@ -425,6 +459,7 @@
},
{
"BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
"PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_BUBBLES.CORE]",
@@ -434,6 +469,7 @@
},
{
"BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "6",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -444,6 +480,7 @@
},
{
"BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_FE_WAS_OK]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/memory.json b/tools/perf/pmu-events/arch/x86/alderlake/memory.json
index 23d36164433f..a0260d5b8619 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to any number of reasons, including an L1 miss, WCB full, pagewalk, store address block or store data block, on a load that retires.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.ANY_AT_RET",
"SampleAfterValue": "1000003",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to a core bound stall including a store address match, a DTLB miss or a page walk that detains the load from retiring.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.L1_BOUND_AT_RET",
"SampleAfterValue": "1000003",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DL1 miss.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.L1_MISS_AT_RET",
"SampleAfterValue": "1000003",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.OTHER_AT_RET",
"PublicDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases such as pipeline conflicts, fences, etc.",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a pagewalk.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.PGWALK_AT_RET",
"SampleAfterValue": "1000003",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a store address match.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.ST_ADDR_AT_RET",
"SampleAfterValue": "1000003",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to memory ordering caused by a snoop from an external agent. Does not count internally generated machine clears such as those due to memory disambiguation.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"SampleAfterValue": "20003",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Number of machine clears due to memory ordering conflicts.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture",
@@ -76,6 +85,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.CYCLES_L1D_MISS",
@@ -85,6 +95,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.STALLS_L1D_MISS",
@@ -94,6 +105,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand cacheable load request is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.STALLS_L2_MISS",
@@ -104,6 +116,7 @@
},
{
"BriefDescription": "Execution stalls while L3 cache miss demand cacheable load request is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "9",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.STALLS_L3_MISS",
@@ -113,13 +126,26 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x400",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "53",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "1009",
"UMask": "0x1",
@@ -127,12 +153,12 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "20011",
"UMask": "0x1",
@@ -140,12 +166,12 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "503",
"UMask": "0x1",
@@ -153,12 +179,12 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100007",
"UMask": "0x1",
@@ -166,12 +192,12 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100003",
"UMask": "0x1",
@@ -179,12 +205,12 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "101",
"UMask": "0x1",
@@ -192,12 +218,12 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "2003",
"UMask": "0x1",
@@ -205,12 +231,12 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "50021",
"UMask": "0x1",
@@ -218,77 +244,174 @@
},
{
"BriefDescription": "Retired memory store access operations. A PDist event for PEBS Store Latency Facility.",
+ "Counter": "0",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.STORE_SAMPLE",
- "PEBS": "2",
- "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8",
+ "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8 Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x2",
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F84400004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3FBFC00001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts demand data reads that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS]",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS] Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3FBFC00002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS]",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS] Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784004000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F84404000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts demand data read requests that miss the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
"SampleAfterValue": "100003",
@@ -297,6 +420,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache. Note that this does not capture all elapsed cycles while requests are outstanding - only cycles from when the requests were known by the requesting core to have missed the L3 cache.",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/metricgroups.json b/tools/perf/pmu-events/arch/x86/alderlake/metricgroups.json
index 516eb0f93f02..855585fe6fae 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/metricgroups.json
@@ -2,9 +2,23 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "C0Wait": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -21,12 +35,18 @@
"Frontend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"HPC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"IcMiss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Ifetch": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"IntVector": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Load_Store_Miss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Mem_Exec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -57,6 +77,7 @@
"TopdownL4": "Metrics for top-down breakdown at level 4",
"TopdownL5": "Metrics for top-down breakdown at level 5",
"TopdownL6": "Metrics for top-down breakdown at level 6",
+ "load_store_bound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"tma_L1_group": "Metrics for top-down breakdown at level 1",
"tma_L2_group": "Metrics for top-down breakdown at level 2",
"tma_L3_group": "Metrics for top-down breakdown at level 3",
@@ -65,12 +86,13 @@
"tma_L6_group": "Metrics for top-down breakdown at level 6",
"tma_alu_op_utilization_group": "Metrics contributing to tma_alu_op_utilization category",
"tma_assists_group": "Metrics contributing to tma_assists category",
- "tma_backend_bound_aux_group": "Metrics contributing to tma_backend_bound_aux category",
"tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
- "tma_base_group": "Metrics contributing to tma_base category",
+ "tma_branch_mispredicts_group": "Metrics contributing to tma_branch_mispredicts category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
+ "tma_code_stlb_miss_group": "Metrics contributing to tma_code_stlb_miss category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -80,11 +102,14 @@
"tma_fp_vector_group": "Metrics contributing to tma_fp_vector category",
"tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
"tma_heavy_operations_group": "Metrics contributing to tma_heavy_operations category",
+ "tma_icache_misses_group": "Metrics contributing to tma_icache_misses category",
+ "tma_ifetch_bandwidth_group": "Metrics contributing to tma_ifetch_bandwidth category",
+ "tma_ifetch_latency_group": "Metrics contributing to tma_ifetch_latency category",
"tma_int_operations_group": "Metrics contributing to tma_int_operations category",
"tma_issue2P": "Metrics related by the issue $issue2P",
- "tma_issueBC": "Metrics related by the issue $issueBC",
"tma_issueBM": "Metrics related by the issue $issueBM",
"tma_issueBW": "Metrics related by the issue $issueBW",
+ "tma_issueComp": "Metrics related by the issue $issueComp",
"tma_issueD0": "Metrics related by the issue $issueD0",
"tma_issueFB": "Metrics related by the issue $issueFB",
"tma_issueFL": "Metrics related by the issue $issueFL",
@@ -100,17 +125,19 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_l2_bound_group": "Metrics contributing to tma_l2_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_load_stlb_miss_group": "Metrics contributing to tma_load_stlb_miss category",
"tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
- "tma_mem_scheduler_group": "Metrics contributing to tma_mem_scheduler category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
"tma_mite_group": "Metrics contributing to tma_mite category",
- "tma_nuke_group": "Metrics contributing to tma_nuke category",
+ "tma_other_light_ops_group": "Metrics contributing to tma_other_light_ops category",
"tma_ports_utilization_group": "Metrics contributing to tma_ports_utilization category",
"tma_ports_utilized_0_group": "Metrics contributing to tma_ports_utilized_0 category",
"tma_ports_utilized_3m_group": "Metrics contributing to tma_ports_utilized_3m category",
@@ -118,5 +145,6 @@
"tma_retiring_group": "Metrics contributing to tma_retiring category",
"tma_serializing_operation_group": "Metrics contributing to tma_serializing_operation category",
"tma_store_bound_group": "Metrics contributing to tma_store_bound category",
- "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category"
+ "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category",
+ "tma_store_stlb_miss_group": "Metrics contributing to tma_store_stlb_miss category"
}
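Beyond the new Bv* bottleneck groups and the extra TMA sub-groups, the metricgroups.json hunks above rename BigFoot to BigFootprint and drop groups such as tma_backend_bound_aux_group, tma_base_group, tma_issueBC, tma_mem_scheduler_group and tma_nuke_group. One way to list the full set of additions and removals is to compare the key sets of the old and new files; a sketch, where the old/new paths are placeholders rather than files shipped by this patch:

    import json

    # Placeholder paths: point these at pre- and post-patch copies of
    # tools/perf/pmu-events/arch/x86/alderlake/metricgroups.json.
    OLD = "metricgroups.json.orig"
    NEW = "metricgroups.json"

    with open(OLD) as f:
        old_groups = set(json.load(f))   # the file is one JSON object; keys are group names
    with open(NEW) as f:
        new_groups = set(json.load(f))

    print("added:  ", sorted(new_groups - old_groups))
    print("removed:", sorted(old_groups - new_groups))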
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/other.json b/tools/perf/pmu-events/arch/x86/alderlake/other.json
index 1db73e020215..af46cde26b54 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/other.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/other.json
@@ -1,14 +1,17 @@
[
{
- "BriefDescription": "ASSISTS.HARDWARE",
+ "BriefDescription": "Count all other hardware assists or traps that are not necessarily architecturally exposed (through a software handler) beyond FP; SSE-AVX mix and A/D assists who are counted by dedicated sub-events. the event also counts for Machine Ordering count.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.HARDWARE",
+ "PublicDescription": "Count all other hardware assists or traps that are not necessarily architecturally exposed (through a software handler) beyond FP; SSE-AVX mix and A/D assists who are counted by dedicated sub-events. This includes, but not limited to, assists at EXE or MEM uop writeback like AVX* load/store/gather/scatter (non-FP GSSE-assist ) , assists generated by ROB like PEBS and RTIT, Uncore trap, RAR (Remote Action Request) and CET (Control flow Enforcement Technology) assists. the event also counts for Machine Ordering count.",
"SampleAfterValue": "100003",
"UMask": "0x4",
"Unit": "cpu_core"
},
{
"BriefDescription": "ASSISTS.PAGE_FAULT",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.PAGE_FAULT",
"SampleAfterValue": "1000003",
@@ -17,6 +20,7 @@
},
{
"BriefDescription": "CORE_POWER.LICENSE_1",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LICENSE_1",
"SampleAfterValue": "200003",
@@ -25,6 +29,7 @@
},
{
"BriefDescription": "CORE_POWER.LICENSE_2",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LICENSE_2",
"SampleAfterValue": "200003",
@@ -33,6 +38,7 @@
},
{
"BriefDescription": "CORE_POWER.LICENSE_3",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LICENSE_3",
"SampleAfterValue": "200003",
@@ -40,129 +46,66 @@
"Unit": "cpu_core"
},
{
- "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response.",
- "EventCode": "0xB7",
- "EventName": "OCR.COREWB_M.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10008",
- "SampleAfterValue": "100003",
+ "BriefDescription": "This event is deprecated. [This event is alias to MISC_RETIRED.LBR_INSERTS]",
+ "Counter": "0,1,2,3,4,5",
+ "Deprecated": "1",
+ "EventCode": "0xe4",
+ "EventName": "LBR_INSERTS.ANY",
+ "SampleAfterValue": "1000003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts demand data reads that have any type of response.",
+ "BriefDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "EventName": "OCR.FULL_STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
+ "MSRValue": "0x800000010000",
+ "PublicDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts demand data reads that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
- "UMask": "0x1",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "BriefDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "EventName": "OCR.PARTIAL_STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10002",
+ "MSRValue": "0x400000010000",
+ "PublicDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10002",
- "SampleAfterValue": "100003",
- "UMask": "0x1",
- "Unit": "cpu_core"
- },
- {
"BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
- "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread.",
- "EventCode": "0xa5",
- "EventName": "RS.EMPTY",
- "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
- "SampleAfterValue": "1000003",
- "UMask": "0x7",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
- "CounterMask": "1",
- "EdgeDetect": "1",
- "EventCode": "0xa5",
- "EventName": "RS.EMPTY_COUNT",
- "Invert": "1",
- "PublicDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events)",
- "SampleAfterValue": "100003",
- "UMask": "0x7",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY_COUNT",
- "CounterMask": "1",
- "Deprecated": "1",
- "EdgeDetect": "1",
- "EventCode": "0xa5",
- "EventName": "RS_EMPTY.COUNT",
- "Invert": "1",
- "SampleAfterValue": "100003",
- "UMask": "0x7",
- "Unit": "cpu_core"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY",
- "Deprecated": "1",
- "EventCode": "0xa5",
- "EventName": "RS_EMPTY.CYCLES",
- "SampleAfterValue": "1000003",
- "UMask": "0x7",
- "Unit": "cpu_core"
- },
- {
"BriefDescription": "Cycles the uncore cannot take further requests",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x2d",
"EventName": "XQ.FULL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json b/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
index f9876bef16da..33d1f39e441f 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "This event is deprecated. Refer to new event ARITH.DIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb0",
@@ -10,7 +11,18 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of cycles when any of the floating point or integer dividers are active.",
+ "Counter": "0,1,2,3,4,5",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb0",
"EventName": "ARITH.DIV_ACTIVE",
@@ -20,7 +32,26 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of active floating point and integer dividers per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_OCCUPANCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point and integer divider uops executed per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_UOPS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "This event is deprecated. Refer to new event ARITH.FPDIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb0",
@@ -30,7 +61,18 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of cycles any of the two integer dividers are active.",
+ "Counter": "0,1,2,3,4,5",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.IDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "This event counts the cycles the integer divider is busy.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb0",
"EventName": "ARITH.IDIV_ACTIVE",
@@ -39,7 +81,26 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of active integer dividers per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.IDIV_OCCUPANCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of integer divider uops executed per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.IDIV_UOPS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "This event is deprecated. Refer to new event ARITH.IDIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb0",
@@ -50,6 +111,7 @@
},
{
"BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.ANY",
"PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware. Examples include AD (page Access Dirty), FP and AVX related assists.",
@@ -59,427 +121,428 @@
},
{
"BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
"SampleAfterValue": "200003",
"Unit": "cpu_atom"
},
{
"BriefDescription": "All branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
- "PublicDescription": "Counts all branch instructions retired.",
+ "PublicDescription": "Counts all branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"Unit": "cpu_core"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.NEAR_CALL",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf9",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the number of retired JCC (Jump on Conditional Code) branch instructions retired, includes both taken and not taken branches.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND",
- "PEBS": "1",
- "PublicDescription": "Counts conditional branch instructions retired.",
+ "PublicDescription": "Counts conditional branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x11",
"Unit": "cpu_core"
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_NTAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts not taken branch instructions retired.",
+ "PublicDescription": "Counts not taken branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x10",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts taken conditional branch instructions retired.",
+ "PublicDescription": "Counts taken conditional branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of far branch instructions retired, includes far jump, far call and return, and interrupt call and return.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xbf",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
- "PEBS": "1",
- "PublicDescription": "Counts far branch instructions retired.",
+ "PublicDescription": "Counts far branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x40",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Indirect near branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT",
- "PEBS": "1",
- "PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch.",
+ "PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x80",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.INDIRECT_CALL",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.IND_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.COND",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the number of near CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf9",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
- "PEBS": "1",
- "PublicDescription": "Counts both direct and indirect near call instructions retired.",
+ "PublicDescription": "Counts both direct and indirect near call instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x2",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf7",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
- "PEBS": "1",
- "PublicDescription": "Counts return instructions retired.",
+ "PublicDescription": "Counts return instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x8",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xc0",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts taken branch instructions retired.",
+ "PublicDescription": "Counts taken branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x20",
"Unit": "cpu_core"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.INDIRECT",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NON_RETURN_IND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the number of near relative CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.REL_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfd",
"Unit": "cpu_atom"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.NEAR_RETURN",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.RETURN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf7",
"Unit": "cpu_atom"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.COND_TAKEN",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.TAKEN_JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
"SampleAfterValue": "200003",
"Unit": "cpu_atom"
},
{
"BriefDescription": "All mispredicted branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
- "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
+ "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of mispredicted JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND",
- "PEBS": "1",
- "PublicDescription": "Counts mispredicted conditional branch instructions retired.",
+ "PublicDescription": "Counts mispredicted conditional branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x11",
"Unit": "cpu_core"
},
{
"BriefDescription": "Mispredicted non-taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_NTAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken.",
+ "PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x10",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of mispredicted taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe",
"Unit": "cpu_atom"
},
{
"BriefDescription": "number of branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts taken conditional mispredicted branch instructions retired.",
+ "PublicDescription": "Counts taken conditional mispredicted branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Miss-predicted near indirect branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT",
- "PEBS": "1",
- "PublicDescription": "Counts miss-predicted near indirect branch instructions retired excluding returns. TSX abort is an indirect branch.",
+ "PublicDescription": "Counts miss-predicted near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x80",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of mispredicted near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Mispredicted indirect CALL retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
- "PublicDescription": "Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect.",
+ "PublicDescription": "Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x2",
"Unit": "cpu_core"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.INDIRECT_CALL",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.IND_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.COND",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the number of mispredicted near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x80",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken.",
+ "PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x20",
"Unit": "cpu_core"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.INDIRECT",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NON_RETURN_IND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb",
"Unit": "cpu_atom"
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RET",
- "PEBS": "1",
- "PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired.",
+ "PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x8",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the number of mispredicted near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RETURN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf7",
"Unit": "cpu_atom"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.COND_TAKEN",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.TAKEN_JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.C01",
"PublicDescription": "Counts core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
@@ -489,6 +552,7 @@
},
{
"BriefDescription": "Core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.C02",
"PublicDescription": "Counts core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
@@ -498,6 +562,7 @@
},
{
"BriefDescription": "Core clocks when the thread is in the C0.1 or C0.2 or running a PAUSE in C0 ACPI state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.C0_WAIT",
"PublicDescription": "Counts core clocks when the thread is in the C0.1 or C0.2 power saving optimized states (TPAUSE or UMWAIT instructions) or running the PAUSE instruction.",
@@ -507,6 +572,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles. (Fixed event)",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.CORE",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1.",
"SampleAfterValue": "2000003",
@@ -515,6 +581,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.CORE_P",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses a programmable general purpose performance counter.",
@@ -523,6 +590,7 @@
},
{
"BriefDescription": "Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.DISTRIBUTED",
"PublicDescription": "This event distributes cycle counts between active hyperthreads, i.e., those in C0. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If all other hyperthreads are inactive (or disabled or do not exist), all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -532,6 +600,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"PublicDescription": "Counts Core crystal clock cycles when current thread is unhalted and the other thread is halted.",
@@ -541,6 +610,7 @@
},
{
"BriefDescription": "CPU_CLK_UNHALTED.PAUSE",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.PAUSE",
"SampleAfterValue": "2000003",
@@ -549,6 +619,7 @@
},
{
"BriefDescription": "CPU_CLK_UNHALTED.PAUSE_INST",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xec",
@@ -558,7 +629,18 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "This event is deprecated. Refer to new event CPU_CLK_UNHALTED.REF_TSC_P",
+ "Counter": "0,1,2,3,4,5",
+ "Deprecated": "1",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.REF",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Core crystal clock cycles. Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED",
"PublicDescription": "This event distributes Core crystal clock cycle counts between active hyperthreads, i.e., those in C0 sleep-state. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If one thread is active in a core, all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -568,6 +650,7 @@
},
{
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency. (Fixed event)",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses fixed counter 2.",
"SampleAfterValue": "2000003",
@@ -576,6 +659,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
@@ -584,6 +668,7 @@
},
{
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general purpose performance counter.",
@@ -593,6 +678,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
@@ -602,6 +688,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles. (Fixed event)",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1.",
"SampleAfterValue": "2000003",
@@ -610,6 +697,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -618,6 +706,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses a programmable general purpose performance counter.",
@@ -626,6 +715,7 @@
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -634,6 +724,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "8",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -643,6 +734,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -652,6 +744,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "16",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -661,6 +754,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "12",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -670,6 +764,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -679,6 +774,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -688,6 +784,7 @@
},
{
"BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
@@ -696,7 +793,17 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Cycles total of 2 or 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.2_3_PORTS_UTIL",
+ "SampleAfterValue": "2000003",
+ "UMask": "0xc",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
@@ -706,6 +813,7 @@
},
{
"BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
"PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -715,6 +823,7 @@
},
{
"BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
"PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -724,6 +833,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "5",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.BOUND_ON_LOADS",
@@ -733,6 +843,7 @@
},
{
"BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
@@ -743,6 +854,7 @@
},
{
"BriefDescription": "Cycles no uop executed while RS was not empty, the SB was not full and there was no outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS",
"PublicDescription": "Number of cycles total of 0 uops executed on all ports, Reservation Station (RS) was not empty, the Store Buffer (SB) was not full and there was no outstanding load.",
@@ -752,6 +864,7 @@
},
{
"BriefDescription": "Instruction decoders utilized in a cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "INST_DECODED.DECODERS",
"PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
@@ -761,42 +874,43 @@
},
{
"BriefDescription": "Counts the total number of instructions retired. (Fixed event)",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
- "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0.",
+ "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0. Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
- "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
+ "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter. Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Counts the total number of instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses a programmable general purpose performance counter.",
"SampleAfterValue": "2000003",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
"SampleAfterValue": "2000003",
"Unit": "cpu_core"
},
{
"BriefDescription": "INST_RETIRED.MACRO_FUSED",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.MACRO_FUSED",
"SampleAfterValue": "2000003",
@@ -805,6 +919,7 @@
},
{
"BriefDescription": "Retired NOP instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.NOP",
"PublicDescription": "Counts all retired NOP or ENDBR32/64 instructions",
@@ -814,15 +929,16 @@
},
{
"BriefDescription": "Precise instruction retired with PEBS precise-distribution",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.PREC_DIST",
- "PEBS": "1",
- "PublicDescription": "A version of INST_RETIRED that allows for a precise distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR++) feature to fix bias in how retired instructions get sampled. Use on Fixed Counter 0.",
+ "PublicDescription": "A version of INST_RETIRED that allows for a precise distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR++) feature to fix bias in how retired instructions get sampled. Use on Fixed Counter 0. Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Iterations of Repeat string retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.REP_ITERATION",
"PublicDescription": "Number of iterations of Repeat (REP) string retired instructions such as MOVS, CMPS, and SCAS. Each has a byte, word, and doubleword version and string instructions can be repeated using a repetition prefix, REP, that allows their architectural execution to be repeated a number of times as specified by the RCX register. Note the number of iterations is implementation-dependent.",
@@ -832,6 +948,7 @@
},
{
"BriefDescription": "Clears speculative count",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xad",
@@ -843,6 +960,7 @@
},
{
"BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
"PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
@@ -852,6 +970,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.RECOVERY_CYCLES",
"PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
@@ -861,6 +980,7 @@
},
{
"BriefDescription": "Bubble cycles of BAClear (Unknown Branch).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.UNKNOWN_BRANCH_CYCLES",
"MSRIndex": "0x3F7",
@@ -871,6 +991,7 @@
},
{
"BriefDescription": "TMA slots where uops got dropped",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.UOP_DROPPING",
"PublicDescription": "Estimated number of Top-down Microarchitecture Analysis slots that got dropped due to non front-end reasons",
@@ -880,6 +1001,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.128BIT",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.128BIT",
"SampleAfterValue": "1000003",
@@ -888,6 +1010,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.256BIT",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.256BIT",
"SampleAfterValue": "1000003",
@@ -896,6 +1019,7 @@
},
{
"BriefDescription": "integer ADD, SUB, SAD 128-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.ADD_128",
"PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 128-bit vector instructions.",
@@ -905,6 +1029,7 @@
},
{
"BriefDescription": "integer ADD, SUB, SAD 256-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.ADD_256",
"PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 256-bit vector instructions.",
@@ -914,6 +1039,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.MUL_256",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.MUL_256",
"SampleAfterValue": "1000003",
@@ -922,6 +1048,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.SHUFFLES",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.SHUFFLES",
"SampleAfterValue": "1000003",
@@ -930,6 +1057,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.VNNI_128",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.VNNI_128",
"SampleAfterValue": "1000003",
@@ -938,6 +1066,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.VNNI_256",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.VNNI_256",
"SampleAfterValue": "1000003",
@@ -946,25 +1075,26 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event LD_BLOCKS.ADDRESS_ALIAS",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.4K_ALIAS",
- "PEBS": "1",
"SampleAfterValue": "1000003",
"UMask": "0x4",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the number of retired loads that are blocked because it initially appears to be store forward blocked, but subsequently is shown not to be blocked based on 4K alias check.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.ADDRESS_ALIAS",
- "PEBS": "1",
"SampleAfterValue": "1000003",
"UMask": "0x4",
"Unit": "cpu_atom"
},
{
"BriefDescription": "False dependencies in MOB due to partial compare on address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.ADDRESS_ALIAS",
"PublicDescription": "Counts the number of times a load got blocked due to false dependencies in MOB due to partial compare on address.",
@@ -974,15 +1104,16 @@
},
{
"BriefDescription": "Counts the number of retired loads that are blocked because its address exactly matches an older store whose data is not ready.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.DATA_UNKNOWN",
- "PEBS": "1",
"SampleAfterValue": "1000003",
"UMask": "0x1",
"Unit": "cpu_atom"
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -992,6 +1123,7 @@
},
{
"BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
@@ -1001,15 +1133,17 @@
},
{
"BriefDescription": "Counts the number of demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PREFETCH.SWPF",
- "PublicDescription": "Counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
+ "PublicDescription": "Counts all software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
"SampleAfterValue": "100003",
"UMask": "0x1",
"Unit": "cpu_core"
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xa8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -1020,6 +1154,7 @@
},
{
"BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "6",
"EventCode": "0xa8",
"EventName": "LSD.CYCLES_OK",
@@ -1030,6 +1165,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa8",
"EventName": "LSD.UOPS",
"PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
@@ -1039,6 +1175,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xc3",
@@ -1050,6 +1187,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to memory ordering in which an internal load passes an older store within the same CPU.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.DISAMBIGUATION",
"SampleAfterValue": "20003",
@@ -1058,6 +1196,7 @@
},
{
"BriefDescription": "Counts the number of machines clears due to memory renaming.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MRN_NUKE",
"SampleAfterValue": "1000003",
@@ -1066,6 +1205,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to a page fault. Counts both I-Side and D-Side (Loads/Stores) page faults. A page fault occurs when either the page is not present, or an access violation occurs.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.PAGE_FAULT",
"SampleAfterValue": "20003",
@@ -1073,7 +1213,9 @@
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the number of machine clears that flush the pipeline and restart the machine with the use of microcode due to SMC, MEMORY_ORDERING, FP_ASSISTS, PAGE_FAULT, DISAMBIGUATION, and FPC_VIRTUAL_TRAP.",
+ "BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3,4,5",
+ "Deprecated": "1",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SLOW",
"SampleAfterValue": "20003",
@@ -1082,6 +1224,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to program modifying data (self modifying code) within 1K of a recently fetched code page.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC",
"SampleAfterValue": "20003",
@@ -1090,6 +1233,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -1099,6 +1243,7 @@
},
{
"BriefDescription": "LFENCE instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe0",
"EventName": "MISC2_RETIRED.LFENCE",
"PublicDescription": "number of LFENCE retired instructions",
@@ -1107,7 +1252,18 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of LBR entries recorded. Requires LBRs to be enabled in IA32_LBR_CTL. [This event is alias to LBR_INSERTS.ANY]",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xe4",
+ "EventName": "MISC_RETIRED.LBR_INSERTS",
+ "PublicDescription": "Counts the number of LBR entries recorded. Requires LBRs to be enabled in IA32_LBR_CTL. This event is PDIR on GP0 and NPEBS on all other GPs [This event is alias to LBR_INSERTS.ANY]",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Increments whenever there is an update to the LBR array.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc",
"EventName": "MISC_RETIRED.LBR_INSERTS",
"PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and branch type selection via MSR_LBR_SELECT.",
@@ -1117,6 +1273,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.",
@@ -1126,6 +1283,7 @@
},
{
"BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SCOREBOARD",
"SampleAfterValue": "100003",
@@ -1133,7 +1291,72 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY",
+ "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_COUNT",
+ "Invert": "1",
+ "PublicDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events)",
+ "SampleAfterValue": "100003",
+ "UMask": "0x7",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when Reservation Station (RS) is empty due to a resource in the back-end",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_RESOURCE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY_COUNT",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "Deprecated": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS_EMPTY.COUNT",
+ "Invert": "1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x7",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS_EMPTY.CYCLES",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots in a UMWAIT or TPAUSE instruction where no uop issues due to the instruction putting the CPU into the C0.1 activity state. For Tremont, UMWAIT and TPAUSE will only put the CPU into C0.1 activity state (not C0.2 activity state)",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x75",
+ "EventName": "SERIALIZATION.C01_MS_SCB",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Counts the number of issue slots not consumed by the backend due to a micro-sequencer (MS) scoreboard, which stalls the front-end from issuing from the UROM until a specified older uop retires.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x75",
"EventName": "SERIALIZATION.NON_C01_MS_SCB",
"PublicDescription": "Counts the number of issue slots not consumed by the backend due to a micro-sequencer (MS) scoreboard, which stalls the front-end from issuing from the UROM until a specified older uop retires. The most commonly executed instruction with an MS scoreboard is PAUSE.",
@@ -1143,6 +1366,7 @@
},
{
"BriefDescription": "TMA slots where no uops were being issued due to lack of back-end resources.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
"PublicDescription": "Number of slots in TMA method where no micro-operations were being issued from front-end to back-end of the machine due to lack of back-end resources.",
@@ -1152,6 +1376,7 @@
},
{
"BriefDescription": "TMA slots wasted due to incorrect speculations.",
+ "Counter": "0",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BAD_SPEC_SLOTS",
"PublicDescription": "Number of slots of TMA method that were wasted due to incorrect speculation. It covers all types of control-flow or data-related mis-speculations.",
@@ -1161,6 +1386,7 @@
},
{
"BriefDescription": "TMA slots wasted due to incorrect speculation by branch mispredictions",
+ "Counter": "0",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BR_MISPREDICT_SLOTS",
"PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of speculative operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
@@ -1170,6 +1396,7 @@
},
{
"BriefDescription": "TOPDOWN.MEMORY_BOUND_SLOTS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.MEMORY_BOUND_SLOTS",
"SampleAfterValue": "10000003",
@@ -1178,6 +1405,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
+ "Counter": "Fixed counter 3",
"EventName": "TOPDOWN.SLOTS",
"PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core. Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
"SampleAfterValue": "10000003",
@@ -1186,6 +1414,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.SLOTS_P",
"PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core.",
@@ -1195,6 +1424,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.ALL",
"PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the instruction queue (IQ) even if an FE_bound event occurs during this period. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
@@ -1203,6 +1433,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to fast nukes such as memory ordering and memory disambiguation machine clears.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.FASTNUKE",
"SampleAfterValue": "1000003",
@@ -1211,6 +1442,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS",
"SampleAfterValue": "1000003",
@@ -1219,6 +1451,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to branch mispredicts.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.MISPREDICT",
"SampleAfterValue": "1000003",
@@ -1227,6 +1460,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to a machine clear (nuke).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.NUKE",
"SampleAfterValue": "1000003",
@@ -1235,6 +1469,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots every cycle that were not consumed by the backend due to backend stalls.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.ALL",
"SampleAfterValue": "1000003",
@@ -1242,6 +1477,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to certain allocation restrictions.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS",
"SampleAfterValue": "1000003",
@@ -1250,6 +1486,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.MEM_SCHEDULER",
"SampleAfterValue": "1000003",
@@ -1258,6 +1495,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER",
"SampleAfterValue": "1000003",
@@ -1266,6 +1504,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.REGISTER",
"SampleAfterValue": "1000003",
@@ -1274,6 +1513,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to the reorder buffer being full (ROB stalls).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.REORDER_BUFFER",
"SampleAfterValue": "1000003",
@@ -1282,6 +1522,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.SERIALIZATION",
"SampleAfterValue": "1000003",
@@ -1290,6 +1531,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots every cycle that were not consumed by the backend due to frontend stalls.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ALL",
"SampleAfterValue": "1000003",
@@ -1297,6 +1539,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BACLEARS.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.BRANCH_DETECT",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
@@ -1306,6 +1549,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTCLEARS.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.BRANCH_RESTEER",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
@@ -1315,6 +1559,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to the microcode sequencer (MS).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.CISC",
"SampleAfterValue": "1000003",
@@ -1323,6 +1568,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to decode stalls.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.DECODE",
"SampleAfterValue": "1000003",
@@ -1331,6 +1577,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH",
"SampleAfterValue": "1000003",
@@ -1339,6 +1586,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to a latency related stalls including BACLEARs, BTCLEARs, ITLB misses, and ICache misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY",
"SampleAfterValue": "1000003",
@@ -1347,6 +1595,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to ITLB misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ITLB",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
@@ -1356,6 +1605,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to other common frontend stalls not categorized.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.OTHER",
"SampleAfterValue": "1000003",
@@ -1364,6 +1614,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to wrong predecodes.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.PREDECODE",
"SampleAfterValue": "1000003",
@@ -1372,14 +1623,15 @@
},
{
"BriefDescription": "Counts the total number of consumed retirement slots.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "TOPDOWN_RETIRING.ALL",
- "PEBS": "1",
"SampleAfterValue": "1000003",
"Unit": "cpu_atom"
},
{
"BriefDescription": "UOPS_DECODED.DEC0_UOPS",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UOPS_DECODED.DEC0_UOPS",
"SampleAfterValue": "1000003",
@@ -1388,6 +1640,7 @@
},
{
"BriefDescription": "Uops executed on port 0",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_0",
"PublicDescription": "Number of uops dispatch to execution port 0.",
@@ -1397,6 +1650,7 @@
},
{
"BriefDescription": "Uops executed on port 1",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_1",
"PublicDescription": "Number of uops dispatch to execution port 1.",
@@ -1406,6 +1660,7 @@
},
{
"BriefDescription": "Uops executed on ports 2, 3 and 10",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_2_3_10",
"PublicDescription": "Number of uops dispatch to execution ports 2, 3 and 10",
@@ -1415,6 +1670,7 @@
},
{
"BriefDescription": "Uops executed on ports 4 and 9",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_4_9",
"PublicDescription": "Number of uops dispatch to execution ports 4 and 9",
@@ -1424,6 +1680,7 @@
},
{
"BriefDescription": "Uops executed on ports 5 and 11",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_5_11",
"PublicDescription": "Number of uops dispatch to execution ports 5 and 11",
@@ -1433,6 +1690,7 @@
},
{
"BriefDescription": "Uops executed on port 6",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_6",
"PublicDescription": "Number of uops dispatch to execution port 6.",
@@ -1442,6 +1700,7 @@
},
{
"BriefDescription": "Uops executed on ports 7 and 8",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_7_8",
"PublicDescription": "Number of uops dispatch to execution ports 7 and 8.",
@@ -1451,6 +1710,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -1461,6 +1721,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -1471,6 +1732,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -1481,6 +1743,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -1491,6 +1754,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1",
@@ -1501,6 +1765,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2",
@@ -1511,6 +1776,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3",
@@ -1521,6 +1787,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4",
@@ -1531,6 +1798,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.STALLS",
@@ -1542,6 +1810,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UOPS_EXECUTED.STALLS",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb1",
@@ -1553,6 +1822,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.THREAD",
"SampleAfterValue": "2000003",
@@ -1561,6 +1831,7 @@
},
{
"BriefDescription": "Counts the number of x87 uops dispatched.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.X87",
"PublicDescription": "Counts the number of x87 uops executed.",
@@ -1569,7 +1840,17 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "Counts the number of uops issued by the front end every cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x0e",
+ "EventName": "UOPS_ISSUED.ANY",
+ "PublicDescription": "Counts the number of uops issued by the front end every cycle. When 4-uops are requested and only 2-uops are delivered, the event counts 2. Uops_issued correlates to the number of ROB entries. If uop takes 2 ROB slots it counts as 2 uops_issued.",
+ "SampleAfterValue": "200003",
+ "Unit": "cpu_atom"
+ },
+ {
"BriefDescription": "Uops that RAT issues to RS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xae",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
@@ -1578,15 +1859,26 @@
"Unit": "cpu_core"
},
{
+ "BriefDescription": "UOPS_ISSUED.CYCLES",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xae",
+ "EventName": "UOPS_ISSUED.CYCLES",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
"BriefDescription": "Counts the total number of uops retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.ALL",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Cycles with retired uop(s).",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.CYCLES",
@@ -1597,6 +1889,7 @@
},
{
"BriefDescription": "Retired uops except the last uop of each instruction.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.HEAVY",
"PublicDescription": "Counts the number of retired micro-operations (uops) except the last uop of each instruction. An instruction that is decoded into less than two uops does not contribute to the count.",
@@ -1606,18 +1899,18 @@
},
{
"BriefDescription": "Counts the number of integer divide uops retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.IDIV",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x10",
"Unit": "cpu_atom"
},
{
"BriefDescription": "Counts the number of uops that are from complex flows issued by the micro-sequencer (MS).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.MS",
- "PEBS": "1",
"PublicDescription": "Counts the number of uops that are from complex flows issued by the Microcode Sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
"SampleAfterValue": "2000003",
"UMask": "0x1",
@@ -1625,6 +1918,7 @@
},
{
"BriefDescription": "UOPS_RETIRED.MS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.MS",
"MSRIndex": "0x3F7",
@@ -1635,6 +1929,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.SLOTS",
"PublicDescription": "Counts the retirement slots used each cycle.",
@@ -1644,6 +1939,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.STALLS",
@@ -1655,6 +1951,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UOPS_RETIRED.STALLS",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xc2",
@@ -1666,9 +1963,9 @@
},
{
"BriefDescription": "Counts the number of x87 uops retired, includes those in MS flows.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.X87",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x2",
"Unit": "cpu_atom"
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/alderlake/uncore-interconnect.json
index 8bf020a9dfa8..b5604c7534e1 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/uncore-interconnect.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of requests allocated in Coherency Tracker.",
+ "Counter": "0,1",
"EventCode": "0x84",
"EventName": "UNC_ARB_COH_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -9,48 +10,59 @@
},
{
"BriefDescription": "Each cycle counts number of any coherent request at memory controller that were issued by any core.",
+ "Counter": "0",
"EventCode": "0x85",
"EventName": "UNC_ARB_DAT_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "ARB"
},
{
"BriefDescription": "Each cycle counts number of coherent reads pending on data return from memory controller that were issued by any core.",
+ "Counter": "0",
"EventCode": "0x85",
"EventName": "UNC_ARB_DAT_OCCUPANCY.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_ARB_REQ_TRK_REQUEST.DRD",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x81",
"EventName": "UNC_ARB_DAT_REQUESTS.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_ARB_DAT_OCCUPANCY.ALL",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x85",
"EventName": "UNC_ARB_IFA_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "ARB"
},
{
"BriefDescription": "Each cycle count number of 'valid' coherent Data Read entries . Such entry is defined as valid when it is allocated till deallocation. Doesn't include prefetches [This event is alias to UNC_ARB_TRK_OCCUPANCY.RD]",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_REQ_TRK_OCCUPANCY.DRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "Number of all coherent Data Read entries. Doesn't include prefetches [This event is alias to UNC_ARB_TRK_REQUESTS.RD]",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_REQ_TRK_REQUEST.DRD",
"PerPkg": "1",
@@ -59,6 +71,7 @@
},
{
"BriefDescription": "Each cycle counts number of all outgoing valid entries in ReqTrk. Such entry is defined as valid from its allocation in ReqTrk till deallocation. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -67,14 +80,17 @@
},
{
"BriefDescription": "Each cycle count number of 'valid' coherent Data Read entries . Such entry is defined as valid when it is allocated till deallocation. Doesn't include prefetches [This event is alias to UNC_ARB_REQ_TRK_OCCUPANCY.DRD]",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "Counts the number of coherent and in-coherent requests initiated by IA cores, processor graphic units, or LLC.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -83,6 +99,7 @@
},
{
"BriefDescription": "Number of all coherent Data Read entries. Doesn't include prefetches [This event is alias to UNC_ARB_REQ_TRK_REQUEST.DRD]",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.RD",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json b/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json
index 163d7e7755c4..bcf275cd592a 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts every 64B read request entering the Memory Controller 0 to DRAM (sum of all channels).",
+ "Counter": "0",
"EventCode": "0xff",
"EventName": "UNC_MC0_RDCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Counts every 64B write request entering the Memory Controller 0 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+ "Counter": "1",
"EventCode": "0xff",
"EventName": "UNC_MC0_WRCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Counts every 64B read request entering the Memory Controller 1 to DRAM (sum of all channels).",
+ "Counter": "3",
"EventCode": "0xff",
"EventName": "UNC_MC1_RDCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Counts every 64B write request entering the Memory Controller 1 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+ "Counter": "4",
"EventCode": "0xff",
"EventName": "UNC_MC1_WRCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "ACT command for a read request sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x24",
"EventName": "UNC_M_ACT_COUNT_RD",
"PerPkg": "1",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "ACT command sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x26",
"EventName": "UNC_M_ACT_COUNT_TOTAL",
"PerPkg": "1",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "ACT command for a write request sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x25",
"EventName": "UNC_M_ACT_COUNT_WR",
"PerPkg": "1",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Read CAS command sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x22",
"EventName": "UNC_M_CAS_COUNT_RD",
"PerPkg": "1",
@@ -63,6 +71,7 @@
},
{
"BriefDescription": "Write CAS command sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x23",
"EventName": "UNC_M_CAS_COUNT_WR",
"PerPkg": "1",
@@ -70,6 +79,7 @@
},
{
"BriefDescription": "Number of clocks",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x01",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
@@ -77,6 +87,7 @@
},
{
"BriefDescription": "incoming read request page status is Page Empty",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1D",
"EventName": "UNC_M_DRAM_PAGE_EMPTY_RD",
"PerPkg": "1",
@@ -84,6 +95,7 @@
},
{
"BriefDescription": "incoming write request page status is Page Empty",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x20",
"EventName": "UNC_M_DRAM_PAGE_EMPTY_WR",
"PerPkg": "1",
@@ -91,6 +103,7 @@
},
{
"BriefDescription": "incoming read request page status is Page Hit",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1C",
"EventName": "UNC_M_DRAM_PAGE_HIT_RD",
"PerPkg": "1",
@@ -98,6 +111,7 @@
},
{
"BriefDescription": "incoming write request page status is Page Hit",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1F",
"EventName": "UNC_M_DRAM_PAGE_HIT_WR",
"PerPkg": "1",
@@ -105,6 +119,7 @@
},
{
"BriefDescription": "incoming read request page status is Page Miss",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1E",
"EventName": "UNC_M_DRAM_PAGE_MISS_RD",
"PerPkg": "1",
@@ -112,6 +127,7 @@
},
{
"BriefDescription": "incoming write request page status is Page Miss",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x21",
"EventName": "UNC_M_DRAM_PAGE_MISS_WR",
"PerPkg": "1",
@@ -119,6 +135,7 @@
},
{
"BriefDescription": "Any Rank at Hot state",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x19",
"EventName": "UNC_M_DRAM_THERMAL_HOT",
"PerPkg": "1",
@@ -126,6 +143,7 @@
},
{
"BriefDescription": "Any Rank at Warm state",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1A",
"EventName": "UNC_M_DRAM_THERMAL_WARM",
"PerPkg": "1",
@@ -133,6 +151,7 @@
},
{
"BriefDescription": "Incoming read prefetch request from IA.",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x0A",
"EventName": "UNC_M_PREFETCH_RD",
"PerPkg": "1",
@@ -140,6 +159,7 @@
},
{
"BriefDescription": "PRE command sent to DRAM due to page table idle timer expiration",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x28",
"EventName": "UNC_M_PRE_COUNT_IDLE",
"PerPkg": "1",
@@ -147,6 +167,7 @@
},
{
"BriefDescription": "PRE command sent to DRAM for a read/write request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x27",
"EventName": "UNC_M_PRE_COUNT_PAGE_MISS",
"PerPkg": "1",
@@ -154,6 +175,7 @@
},
{
"BriefDescription": "Incoming VC0 read request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x02",
"EventName": "UNC_M_VC0_REQUESTS_RD",
"PerPkg": "1",
@@ -161,6 +183,7 @@
},
{
"BriefDescription": "Incoming VC0 write request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x03",
"EventName": "UNC_M_VC0_REQUESTS_WR",
"PerPkg": "1",
@@ -168,6 +191,7 @@
},
{
"BriefDescription": "Incoming VC1 read request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x04",
"EventName": "UNC_M_VC1_REQUESTS_RD",
"PerPkg": "1",
@@ -175,6 +199,7 @@
},
{
"BriefDescription": "Incoming VC1 write request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x05",
"EventName": "UNC_M_VC1_REQUESTS_WR",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/uncore-other.json b/tools/perf/pmu-events/arch/x86/alderlake/uncore-other.json
index 2af92e43b28a..1ac5b5ef8094 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/uncore-other.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/uncore-other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "This 48-bit fixed counter counts the UCLK cycles.",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_CLOCK.SOCKET",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/virtual-memory.json b/tools/perf/pmu-events/arch/x86/alderlake/virtual-memory.json
index 3827d292da80..132ce48af6d9 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
@@ -20,6 +22,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to load DTLB misses to any page size.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -29,6 +32,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -38,6 +42,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -47,6 +52,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -56,6 +62,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -65,6 +72,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.",
@@ -74,6 +82,7 @@
},
{
"BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
@@ -83,6 +92,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
@@ -93,6 +103,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to store DTLB misses to any page size.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -102,6 +113,7 @@
},
{
"BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -111,6 +123,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -120,6 +133,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -129,6 +143,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -138,6 +153,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.",
@@ -147,6 +163,7 @@
},
{
"BriefDescription": "Counts the number of page walks initiated by a instruction fetch that missed the first and second level TLBs.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSED_WALK",
"SampleAfterValue": "1000003",
@@ -155,6 +172,7 @@
},
{
"BriefDescription": "Counts the number of page walks due to an instruction fetch that miss the PDE (Page Directory Entry) cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.PDE_CACHE_MISS",
"SampleAfterValue": "2000003",
@@ -163,6 +181,7 @@
},
{
"BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).",
@@ -172,6 +191,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_ACTIVE",
@@ -182,6 +202,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -191,6 +212,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -200,6 +222,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -209,6 +232,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -218,6 +242,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.",
@@ -227,10 +252,44 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DTLB miss.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.DTLB_MISS_AT_RET",
"SampleAfterValue": "1000003",
"UMask": "0x90",
"Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event MEM_UOPS_RETIRED.STLB_MISS",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "Deprecated": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.DTLB_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x13",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event MEM_UOPS_RETIRED.STLB_MISS_LOADS",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "Deprecated": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.DTLB_MISS_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x11",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event MEM_UOPS_RETIRED.STLB_MISS_STORES",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "Deprecated": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.DTLB_MISS_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x12",
+ "Unit": "cpu_atom"
}
]
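
A common derived metric built from the TLB events above: DTLB_LOAD_MISSES.WALK_PENDING accumulates the number of in-flight walks each cycle while DTLB_LOAD_MISSES.WALK_COMPLETED counts finished walks, so their ratio approximates the average page-walk duration in cycles. The sketch below only performs that division on raw counts; it is an illustration with made-up numbers, not a metric defined in these files.

```python
def avg_page_walk_cycles(walk_pending: int, walk_completed: int) -> float:
    """Approximate average data-load page-walk duration in core cycles.

    walk_pending   -- sum of DTLB_LOAD_MISSES.WALK_PENDING (in-flight walks per cycle)
    walk_completed -- DTLB_LOAD_MISSES.WALK_COMPLETED (finished walks)
    """
    return walk_pending / walk_completed if walk_completed else 0.0

# Example: 90_000 pending-cycles over 3_000 completed walks -> 30 cycles per walk.
print(avg_page_walk_cycles(90_000, 3_000))
```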
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
index a35edf7d86a9..0f72c9192df6 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
@@ -1,75 +1,61 @@
[
{
"BriefDescription": "C10 residency percent per package",
- "MetricExpr": "cstate_pkg@c10\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c10\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C10_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C1 residency percent per core",
- "MetricExpr": "cstate_core@c1\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c1\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C1_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
- "MetricGroup": "Power",
- "MetricName": "C7_Pkg_Residency",
- "ScaleUnit": "100%"
- },
- {
"BriefDescription": "C8 residency percent per package",
- "MetricExpr": "cstate_pkg@c8\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c8\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C8_Pkg_Residency",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "C9 residency percent per package",
- "MetricExpr": "cstate_pkg@c9\\-residency@ / TSC",
- "MetricGroup": "Power",
- "MetricName": "C9_Pkg_Residency",
- "ScaleUnit": "100%"
- },
- {
"BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
"MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
"MetricGroup": "smi",
@@ -85,39 +71,28 @@
"ScaleUnit": "1SMI#"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to certain allocation restrictions.",
- "MetricExpr": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
- "MetricName": "tma_alloc_restriction",
- "MetricThreshold": "tma_alloc_restriction > 0.1",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to certain allocation restrictions",
+ "MetricExpr": "tma_core_bound",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_allocation_restriction",
+ "MetricThreshold": "tma_allocation_restriction > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "TOPDOWN_BE_BOUND.ALL / tma_info_core_slots",
+ "MetricExpr": "TOPDOWN_BE_BOUND.ALL / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "Default;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.1",
"MetricgroupNoGroup": "TopdownL1;Default",
- "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count. The rest of these subevents count backend stalls, in cycles, due to an outstanding request which is memory bound vs core bound. The subevents are not slot based events and therefore can not be precisely added or subtracted from the Backend_Bound_Aux subevents which are slot based.",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
- "DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "tma_backend_bound",
- "MetricGroup": "Default;TopdownL1;tma_L1_group",
- "MetricName": "tma_backend_bound_aux",
- "MetricThreshold": "tma_backend_bound_aux > 0.2",
- "MetricgroupNoGroup": "TopdownL1;Default",
- "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that UOPS must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count. All of these subevents count backend stalls, in slots, due to a resource limitation. These are not cycle based events and therefore can not be precisely added or subtracted from the Backend_Bound subevents which are cycle based. These subevents are supplementary to Backend_Bound and can be used to analyze results from a resource perspective at allocation.",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count",
"ScaleUnit": "100%"
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "(tma_info_core_slots - (TOPDOWN_FE_BOUND.ALL + TOPDOWN_BE_BOUND.ALL + TOPDOWN_RETIRING.ALL)) / tma_info_core_slots",
+ "MetricExpr": "(5 * CPU_CLK_UNHALTED.CORE - (TOPDOWN_FE_BOUND.ALL + TOPDOWN_BE_BOUND.ALL + TOPDOWN_RETIRING.ALL)) / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "Default;TopdownL1;tma_L1_group",
"MetricName": "tma_bad_speculation",
"MetricThreshold": "tma_bad_speculation > 0.15",
@@ -126,496 +101,492 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of uops that are not from the microsequencer.",
- "MetricExpr": "(TOPDOWN_RETIRING.ALL - UOPS_RETIRED.MS) / tma_info_core_slots",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_retiring_group",
- "MetricName": "tma_base",
- "MetricThreshold": "tma_base > 0.6",
- "MetricgroupNoGroup": "TopdownL2",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend",
- "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_DETECT / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend",
+ "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_DETECT / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
"MetricName": "tma_branch_detect",
- "MetricThreshold": "tma_branch_detect > 0.05",
- "PublicDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "MetricThreshold": "tma_branch_detect > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "PublicDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to branch mispredicts.",
- "MetricExpr": "TOPDOWN_BAD_SPECULATION.MISPREDICT / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to branch mispredicts",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.MISPREDICT / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
"MetricName": "tma_branch_mispredicts",
- "MetricThreshold": "tma_branch_mispredicts > 0.05",
+ "MetricThreshold": "tma_branch_mispredicts > 0.05 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
- "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_RESTEER / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_RESTEER / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
"MetricName": "tma_branch_resteer",
- "MetricThreshold": "tma_branch_resteer > 0.05",
+ "MetricThreshold": "tma_branch_resteer > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).",
- "MetricExpr": "TOPDOWN_FE_BOUND.CISC / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).",
+ "MetricExpr": "TOPDOWN_FE_BOUND.CISC / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
"MetricName": "tma_cisc",
- "MetricThreshold": "tma_cisc > 0.05",
+ "MetricThreshold": "tma_cisc > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of cycles due to backend bound stalls that are core execution bound and not attributed to outstanding demand load or store stalls.",
- "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+ "BriefDescription": "Counts the number of cycles due to backend bound stalls that are bounded by core restrictions and not attributed to an outstanding load or stores, or resource limitation",
+ "MetricExpr": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
"MetricName": "tma_core_bound",
- "MetricThreshold": "tma_core_bound > 0.1",
+ "MetricThreshold": "tma_core_bound > 0.1 & tma_backend_bound > 0.1",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to decode stalls.",
- "MetricExpr": "TOPDOWN_FE_BOUND.DECODE / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to decode stalls.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.DECODE / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
"MetricName": "tma_decode",
- "MetricThreshold": "tma_decode > 0.05",
+ "MetricThreshold": "tma_decode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to memory disambiguation.",
- "MetricExpr": "tma_nuke * (MACHINE_CLEARS.DISAMBIGUATION / MACHINE_CLEARS.SLOW)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_disambiguation",
- "MetricThreshold": "tma_disambiguation > 0.02",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode, classified as a fast nuke, due to memory ordering, memory disambiguation and memory renaming",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.FASTNUKE / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_fast_nuke",
+ "MetricThreshold": "tma_fast_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in DRAM or MMIO (Non-DRAM).",
- "MetricExpr": "MEM_BOUND_STALLS.LOAD_DRAM_HIT / tma_info_core_clks - max((MEM_BOUND_STALLS.LOAD - LD_HEAD.L1_MISS_AT_RET) / tma_info_core_clks, 0) * MEM_BOUND_STALLS.LOAD_DRAM_HIT / MEM_BOUND_STALLS.LOAD",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_dram_bound",
- "MetricThreshold": "tma_dram_bound > 0.1",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to frontend stalls.",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "TOPDOWN_FE_BOUND.ALL / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "Default;TopdownL1;tma_L1_group",
+ "MetricName": "tma_frontend_bound",
+ "MetricThreshold": "tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL1;Default",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear classified as a fast nuke due to memory ordering, memory disambiguation and memory renaming.",
- "MetricExpr": "TOPDOWN_BAD_SPECULATION.FASTNUKE / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
- "MetricName": "tma_fast_nuke",
- "MetricThreshold": "tma_fast_nuke > 0.05",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.ICACHE / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_icache_misses",
+ "MetricThreshold": "tma_icache_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
- "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
- "MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1",
+ "MetricName": "tma_ifetch_bandwidth",
+ "MetricThreshold": "tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
- "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions due to icache misses, itlb misses, branch detection, and resteer limitations.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
- "MetricName": "tma_fetch_latency",
- "MetricThreshold": "tma_fetch_latency > 0.15",
+ "MetricName": "tma_ifetch_latency",
+ "MetricThreshold": "tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to FP assists.",
- "MetricExpr": "tma_nuke * (MACHINE_CLEARS.FP_ASSIST / MACHINE_CLEARS.SLOW)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_fp_assist",
- "MetricThreshold": "tma_fp_assist > 0.02",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of time that retirement is stalled due to a first level data TLB miss",
+ "MetricExpr": "100 * (LD_HEAD.DTLB_MISS_AT_RET + LD_HEAD.PGWALK_AT_RET) / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_bottleneck_%_dtlb_miss_bound_cycles"
},
{
- "BriefDescription": "Counts the number of floating point divide operations per uop.",
- "MetricExpr": "UOPS_RETIRED.FPDIV / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_base_group",
- "MetricName": "tma_fpdiv_uops",
- "MetricThreshold": "tma_fpdiv_uops > 0.2",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss",
+ "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Ifetch",
+ "MetricName": "tma_info_bottleneck_%_ifetch_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss. See Info.Ifetch_Bound"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to frontend stalls.",
- "DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "TOPDOWN_FE_BOUND.ALL / tma_info_core_slots",
- "MetricGroup": "Default;TopdownL1;tma_L1_group",
- "MetricName": "tma_frontend_bound",
- "MetricThreshold": "tma_frontend_bound > 0.2",
- "MetricgroupNoGroup": "TopdownL1;Default",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of time that retirement is stalled due to an L1 miss",
+ "MetricExpr": "100 * MEM_BOUND_STALLS.LOAD / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Load_Store_Miss",
+ "MetricName": "tma_info_bottleneck_%_load_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.",
- "MetricExpr": "TOPDOWN_FE_BOUND.ICACHE / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
- "MetricName": "tma_icache_misses",
- "MetricThreshold": "tma_icache_misses > 0.05",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall",
+ "MetricExpr": "100 * LD_HEAD.ANY_AT_RET / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Mem_Exec",
+ "MetricName": "tma_info_bottleneck_%_mem_exec_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound"
+ },
+ {
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricName": "tma_info_br_inst_mix_ipbranch"
+ },
+ {
+      "BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.CALL",
+ "MetricName": "tma_info_br_inst_mix_ipcall"
},
{
- "BriefDescription": "",
- "MetricExpr": "CPU_CLK_UNHALTED.CORE",
- "MetricName": "tma_info_core_clks"
+      "BriefDescription": "Instructions per Far Branch (Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u",
+ "MetricName": "tma_info_br_inst_mix_ipfarbranch"
},
{
- "BriefDescription": "",
- "MetricExpr": "CPU_CLK_UNHALTED.CORE_P",
- "MetricName": "tma_info_core_clks_p"
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was not taken",
+ "MetricExpr": "INST_RETIRED.ANY / (BR_MISP_RETIRED.COND - BR_MISP_RETIRED.COND_TAKEN)",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_ntaken"
+ },
+ {
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was taken",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_taken"
+ },
+ {
+ "BriefDescription": "Instructions per retired indirect call or jump Branch Misprediction",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_indirect"
+ },
+ {
+ "BriefDescription": "Instructions per retired return Branch Misprediction",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.RETURN",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_ret"
+ },
+ {
+ "BriefDescription": "Instructions per retired Branch Misprediction",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
+ "MetricName": "tma_info_br_inst_mix_ipmispredict"
+ },
+ {
+ "BriefDescription": "Ratio of all branches which mispredict",
+ "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_ratio"
+ },
+ {
+ "BriefDescription": "Ratio between Mispredicted branches and unknown branches",
+ "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / BACLEARS.ANY",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_to_unknown_branch_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to load buffer full",
+ "MetricExpr": "100 * MEM_SCHEDULER_BLOCK.LD_BUF / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_buffer_stalls_%_load_buffer_stall_cycles"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to memory reservation stations full",
+ "MetricExpr": "100 * MEM_SCHEDULER_BLOCK.RSV / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_buffer_stalls_%_mem_rsv_stall_cycles"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to store buffer full",
+ "MetricExpr": "100 * MEM_SCHEDULER_BLOCK.ST_BUF / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_buffer_stalls_%_store_buffer_stall_cycles"
},
{
"BriefDescription": "Cycles Per Instruction",
- "MetricExpr": "tma_info_core_clks / INST_RETIRED.ANY",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE / INST_RETIRED.ANY",
"MetricName": "tma_info_core_cpi"
},
{
"BriefDescription": "Instructions Per Cycle",
- "MetricExpr": "INST_RETIRED.ANY / tma_info_core_clks",
+ "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.CORE",
"MetricName": "tma_info_core_ipc"
},
{
- "BriefDescription": "",
- "MetricExpr": "5 * tma_info_core_clks",
- "MetricName": "tma_info_core_slots"
- },
- {
"BriefDescription": "Uops Per Instruction",
"MetricExpr": "UOPS_RETIRED.ALL / INST_RETIRED.ANY",
"MetricName": "tma_info_core_upi"
},
{
- "BriefDescription": "Percent of instruction miss cost that hit in DRAM",
- "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_DRAM_HIT / MEM_BOUND_STALLS.IFETCH",
- "MetricName": "tma_info_frontend_inst_miss_cost_dramhit_percent"
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L2",
+ "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_L2_HIT / MEM_BOUND_STALLS.IFETCH",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit"
},
{
- "BriefDescription": "Percent of instruction miss cost that hit in the L2",
- "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_L2_HIT / MEM_BOUND_STALLS.IFETCH",
- "MetricName": "tma_info_frontend_inst_miss_cost_l2hit_percent"
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss doesn't hit in the L2",
+ "MetricExpr": "100 * (MEM_BOUND_STALLS.IFETCH_LLC_HIT + MEM_BOUND_STALLS.IFETCH_DRAM_HIT) / MEM_BOUND_STALLS.IFETCH",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2miss"
},
{
- "BriefDescription": "Percent of instruction miss cost that hit in the L3",
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L3",
"MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_LLC_HIT / MEM_BOUND_STALLS.IFETCH",
- "MetricName": "tma_info_frontend_inst_miss_cost_l3hit_percent"
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3hit"
},
{
- "BriefDescription": "Ratio of all branches which mispredict",
- "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.ALL_BRANCHES",
- "MetricName": "tma_info_inst_mix_branch_mispredict_ratio"
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss subsequently misses in the L3",
+ "MetricExpr": "100 * MEM_BOUND_STALLS.IFETCH_DRAM_HIT / MEM_BOUND_STALLS.IFETCH",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3miss"
},
{
- "BriefDescription": "Ratio between Mispredicted branches and unknown branches",
- "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / BACLEARS.ANY",
- "MetricName": "tma_info_inst_mix_branch_mispredict_to_unknown_branch_ratio"
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L2",
+ "MetricExpr": "100 * MEM_BOUND_STALLS.LOAD_L2_HIT / MEM_BOUND_STALLS.LOAD",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2hit"
},
{
- "BriefDescription": "Percentage of all uops which are FPDiv uops",
- "MetricExpr": "100 * UOPS_RETIRED.FPDIV / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_fpdiv_uop_ratio"
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses in the L2",
+ "MetricExpr": "100 * (MEM_BOUND_STALLS.LOAD_LLC_HIT + MEM_BOUND_STALLS.LOAD_DRAM_HIT) / MEM_BOUND_STALLS.LOAD",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2miss"
},
{
- "BriefDescription": "Percentage of all uops which are IDiv uops",
- "MetricExpr": "100 * UOPS_RETIRED.IDIV / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_idiv_uop_ratio"
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L3",
+ "MetricExpr": "100 * MEM_BOUND_STALLS.LOAD_LLC_HIT / MEM_BOUND_STALLS.LOAD",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3hit"
},
{
- "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
- "MetricName": "tma_info_inst_mix_ipbranch"
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses the L3",
+ "MetricExpr": "100 * MEM_BOUND_STALLS.LOAD_DRAM_HIT / MEM_BOUND_STALLS.LOAD",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3miss"
},
{
- "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.CALL",
- "MetricName": "tma_info_inst_mix_ipcall"
+ "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a pipeline block",
+ "MetricExpr": "100 * LD_HEAD.L1_BOUND_AT_RET / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_l1_bound"
},
{
- "BriefDescription": "Instructions per Far Branch",
- "MetricExpr": "INST_RETIRED.ANY / (BR_INST_RETIRED.FAR_BRANCH / 2)",
- "MetricName": "tma_info_inst_mix_ipfarbranch"
+ "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement",
+ "MetricExpr": "100 * (LD_HEAD.L1_BOUND_AT_RET + MEM_BOUND_STALLS.LOAD) / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_load_bound"
},
{
- "BriefDescription": "Instructions per Load",
- "MetricExpr": "INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_inst_mix_ipload"
+ "BriefDescription": "Counts the number of cycles the core is stalled due to store buffer full",
+ "MetricExpr": "100 * (MEM_SCHEDULER_BLOCK.ST_BUF / MEM_SCHEDULER_BLOCK.ALL) * tma_mem_scheduler",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_store_bound"
},
{
- "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was not taken",
- "MetricExpr": "INST_RETIRED.ANY / (BR_MISP_RETIRED.COND - BR_MISP_RETIRED.COND_TAKEN)",
- "MetricName": "tma_info_inst_mix_ipmisp_cond_ntaken"
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory disambiguation",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.DISAMBIGUATION / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_disamb_pki"
},
{
- "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was taken",
- "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN",
- "MetricName": "tma_info_inst_mix_ipmisp_cond_taken"
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to floating point assists",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.FP_ASSIST / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_fp_assist_pki"
},
{
- "BriefDescription": "Instructions per retired indirect call or jump Branch Misprediction",
- "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT",
- "MetricName": "tma_info_inst_mix_ipmisp_indirect"
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory ordering",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.MEMORY_ORDERING / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_monuke_pki"
},
{
- "BriefDescription": "Instructions per retired return Branch Misprediction",
- "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.RETURN",
- "MetricName": "tma_info_inst_mix_ipmisp_ret"
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory renaming",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.MRN_NUKE / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_mrn_pki"
},
{
- "BriefDescription": "Instructions per retired Branch Misprediction",
- "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
- "MetricName": "tma_info_inst_mix_ipmispredict"
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to page faults",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.PAGE_FAULT / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_page_fault_pki"
},
{
- "BriefDescription": "Instructions per Store",
- "MetricExpr": "INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_STORES",
- "MetricName": "tma_info_inst_mix_ipstore"
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to self-modifying code",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.SMC / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_smc_pki"
},
{
- "BriefDescription": "Percentage of all uops which are ucode ops",
- "MetricExpr": "100 * UOPS_RETIRED.MS / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_microcode_uop_ratio"
+ "BriefDescription": "Percentage of total non-speculative loads with an address aliasing block",
+ "MetricExpr": "100 * LD_BLOCKS.4K_ALIAS / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_adressaliasing"
},
{
- "BriefDescription": "Percentage of all uops which are x87 uops",
- "MetricExpr": "100 * UOPS_RETIRED.X87 / UOPS_RETIRED.ALL",
- "MetricName": "tma_info_inst_mix_x87_uop_ratio"
+ "BriefDescription": "Percentage of total non-speculative loads with a store forward or unknown store address block",
+ "MetricExpr": "100 * LD_BLOCKS.DATA_UNKNOWN / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_storefwdblk"
},
{
- "BriefDescription": "Percentage of total non-speculative loads with a address aliasing block",
- "MetricExpr": "100 * LD_BLOCKS.4K_ALIAS / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_l1_bound_address_alias_blocks"
+ "BriefDescription": "Percentage of Memory Execution Bound due to a first level data cache miss",
+ "MetricExpr": "100 * LD_HEAD.L1_MISS_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_l1miss"
},
{
- "BriefDescription": "Percentage of total non-speculative loads that are splits",
- "MetricExpr": "100 * MEM_UOPS_RETIRED.SPLIT_LOADS / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_l1_bound_load_splits"
+ "BriefDescription": "Percentage of Memory Execution Bound due to other block cases, such as pipeline conflicts, fences, etc",
+ "MetricExpr": "100 * LD_HEAD.OTHER_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_otherpipelineblks"
},
{
- "BriefDescription": "Percentage of total non-speculative loads with a store forward or unknown store address block",
- "MetricExpr": "100 * LD_BLOCKS.DATA_UNKNOWN / MEM_UOPS_RETIRED.ALL_LOADS",
- "MetricName": "tma_info_l1_bound_store_fwd_blocks"
+ "BriefDescription": "Percentage of Memory Execution Bound due to a pagewalk",
+ "MetricExpr": "100 * LD_HEAD.PGWALK_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_pagewalk"
},
{
- "BriefDescription": "Cycle cost per DRAM hit",
- "MetricExpr": "MEM_BOUND_STALLS.LOAD_DRAM_HIT / MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
- "MetricName": "tma_info_memory_cycles_per_demand_load_dram_hit"
+ "BriefDescription": "Percentage of Memory Execution Bound due to a second level TLB miss",
+ "MetricExpr": "100 * LD_HEAD.DTLB_MISS_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_stlbhit"
},
{
- "BriefDescription": "Cycle cost per L2 hit",
- "MetricExpr": "MEM_BOUND_STALLS.LOAD_L2_HIT / MEM_LOAD_UOPS_RETIRED.L2_HIT",
- "MetricName": "tma_info_memory_cycles_per_demand_load_l2_hit"
+ "BriefDescription": "Percentage of Memory Execution Bound due to a store forward address match",
+ "MetricExpr": "100 * LD_HEAD.ST_ADDR_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_storefwding"
},
{
- "BriefDescription": "Cycle cost per LLC hit",
- "MetricExpr": "MEM_BOUND_STALLS.LOAD_LLC_HIT / MEM_LOAD_UOPS_RETIRED.L3_HIT",
- "MetricName": "tma_info_memory_cycles_per_demand_load_l3_hit"
+ "BriefDescription": "Instructions per Load",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_mix_ipload"
},
{
- "BriefDescription": "load ops retired per 1000 instruction",
- "MetricExpr": "1e3 * MEM_UOPS_RETIRED.ALL_LOADS / INST_RETIRED.ANY",
- "MetricName": "tma_info_memory_memloadpki"
+ "BriefDescription": "Instructions per Store",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_STORES",
+ "MetricName": "tma_info_mem_mix_ipstore"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads that perform one or more locks",
+ "MetricExpr": "100 * MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_mix_load_locks_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads that are splits",
+ "MetricExpr": "100 * MEM_UOPS_RETIRED.SPLIT_LOADS / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_mix_load_splits_ratio"
+ },
+ {
+ "BriefDescription": "Ratio of mem load uops to all uops",
+ "MetricExpr": "1e3 * MEM_UOPS_RETIRED.ALL_LOADS / UOPS_RETIRED.ALL",
+ "MetricName": "tma_info_mem_mix_memload_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction",
+ "MetricExpr": "100 * SERIALIZATION.C01_MS_SCB / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricName": "tma_info_serialization_%_tpause_cycles"
},
{
"BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
"MetricName": "tma_info_system_cpu_utilization"
},
{
"BriefDescription": "Fraction of cycles spent in Kernel mode",
- "MetricExpr": "cpu@CPU_CLK_UNHALTED.CORE@k / CPU_CLK_UNHALTED.CORE",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE_P:k / CPU_CLK_UNHALTED.CORE",
"MetricGroup": "Summary",
"MetricName": "tma_info_system_kernel_utilization"
},
{
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE_P / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
- "MetricExpr": "tma_info_core_clks / CPU_CLK_UNHALTED.REF_TSC",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
"MetricName": "tma_info_system_turbo_utilization"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
- "MetricExpr": "TOPDOWN_FE_BOUND.ITLB / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_latency_group",
- "MetricName": "tma_itlb_misses",
- "MetricThreshold": "tma_itlb_misses > 0.05",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of all uops which are FPDiv uops",
+ "MetricExpr": "100 * UOPS_RETIRED.FPDIV / UOPS_RETIRED.ALL",
+ "MetricName": "tma_info_uop_mix_fpdiv_uop_ratio"
},
{
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a load block.",
- "MetricExpr": "LD_HEAD.L1_BOUND_AT_RET / tma_info_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_l1_bound",
- "MetricThreshold": "tma_l1_bound > 0.1",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of all uops which are IDiv uops",
+ "MetricExpr": "100 * UOPS_RETIRED.IDIV / UOPS_RETIRED.ALL",
+ "MetricName": "tma_info_uop_mix_idiv_uop_ratio"
},
{
- "BriefDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the L2 Cache.",
- "MetricExpr": "MEM_BOUND_STALLS.LOAD_L2_HIT / tma_info_core_clks - max((MEM_BOUND_STALLS.LOAD - LD_HEAD.L1_MISS_AT_RET) / tma_info_core_clks, 0) * MEM_BOUND_STALLS.LOAD_L2_HIT / MEM_BOUND_STALLS.LOAD",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_l2_bound",
- "MetricThreshold": "tma_l2_bound > 0.1",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of all uops which are microcode ops",
+ "MetricExpr": "100 * UOPS_RETIRED.MS / UOPS_RETIRED.ALL",
+ "MetricName": "tma_info_uop_mix_microcode_uop_ratio"
},
{
- "BriefDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
- "MetricExpr": "MEM_BOUND_STALLS.LOAD_LLC_HIT / tma_info_core_clks - max((MEM_BOUND_STALLS.LOAD - LD_HEAD.L1_MISS_AT_RET) / tma_info_core_clks, 0) * MEM_BOUND_STALLS.LOAD_LLC_HIT / MEM_BOUND_STALLS.LOAD",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_l3_bound",
- "MetricThreshold": "tma_l3_bound > 0.1",
- "ScaleUnit": "100%"
+ "BriefDescription": "Percentage of all uops which are x87 uops",
+ "MetricExpr": "100 * UOPS_RETIRED.X87 / UOPS_RETIRED.ALL",
+ "MetricName": "tma_info_uop_mix_x87_uop_ratio"
},
{
- "BriefDescription": "Counts the number of cycles, relative to the number of mem_scheduler slots, in which uops are blocked due to load buffer full",
- "MetricExpr": "tma_mem_scheduler * MEM_SCHEDULER_BLOCK.LD_BUF / MEM_SCHEDULER_BLOCK.ALL",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_mem_scheduler_group",
- "MetricName": "tma_ld_buffer",
- "MetricThreshold": "tma_ld_buffer > 0.05",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.ITLB / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_itlb_misses",
+ "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
- "MetricExpr": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS / tma_info_core_slots",
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
"MetricName": "tma_machine_clears",
- "MetricThreshold": "tma_machine_clears > 0.05",
+ "MetricThreshold": "tma_machine_clears > 0.05 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops.",
- "MetricExpr": "TOPDOWN_BE_BOUND.MEM_SCHEDULER / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops",
+ "MetricExpr": "TOPDOWN_BE_BOUND.MEM_SCHEDULER / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
"MetricName": "tma_mem_scheduler",
- "MetricThreshold": "tma_mem_scheduler > 0.1",
+ "MetricThreshold": "tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of cycles the core is stalled due to stores or loads.",
- "MetricExpr": "min(tma_backend_bound, LD_HEAD.ANY_AT_RET / tma_info_core_clks + tma_store_bound)",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
- "MetricName": "tma_memory_bound",
- "MetricThreshold": "tma_memory_bound > 0.2",
- "MetricgroupNoGroup": "TopdownL2",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to memory ordering.",
- "MetricExpr": "tma_nuke * (MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.SLOW)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_memory_ordering",
- "MetricThreshold": "tma_memory_ordering > 0.02",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS)",
- "MetricExpr": "UOPS_RETIRED.MS / tma_info_core_slots",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_retiring_group",
- "MetricName": "tma_ms_uops",
- "MetricThreshold": "tma_ms_uops > 0.05",
- "MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops.",
- "MetricExpr": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops",
+ "MetricExpr": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
"MetricName": "tma_non_mem_scheduler",
- "MetricThreshold": "tma_non_mem_scheduler > 0.1",
+ "MetricThreshold": "tma_non_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear (slow nuke).",
- "MetricExpr": "TOPDOWN_BAD_SPECULATION.NUKE / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that requires the use of microcode (slow nuke)",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.NUKE / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
"MetricName": "tma_nuke",
- "MetricThreshold": "tma_nuke > 0.05",
+ "MetricThreshold": "tma_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not categorized.",
- "MetricExpr": "TOPDOWN_FE_BOUND.OTHER / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not categorized.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.OTHER / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
"MetricName": "tma_other_fb",
- "MetricThreshold": "tma_other_fb > 0.05",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a number of other load blocks.",
- "MetricExpr": "LD_HEAD.OTHER_AT_RET / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_other_l1",
- "MetricThreshold": "tma_other_l1 > 0.05",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hits in the L2, LLC, DRAM or MMIO (Non-DRAM) but could not be correctly attributed or cycles in which the load miss is waiting on a request buffer.",
- "MetricExpr": "max(0, tma_memory_bound - (tma_store_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_dram_bound))",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_other_load_store",
- "MetricThreshold": "tma_other_load_store > 0.1",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of uops retired excluding ms and fp div uops.",
- "MetricExpr": "(TOPDOWN_RETIRING.ALL - UOPS_RETIRED.MS - UOPS_RETIRED.FPDIV) / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_base_group",
- "MetricName": "tma_other_ret",
- "MetricThreshold": "tma_other_ret > 0.3",
+ "MetricThreshold": "tma_other_fb > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to page faults.",
- "MetricExpr": "tma_nuke * (MACHINE_CLEARS.PAGE_FAULT / MACHINE_CLEARS.SLOW)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_page_fault",
- "MetricThreshold": "tma_page_fault > 0.02",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.",
- "MetricExpr": "TOPDOWN_FE_BOUND.PREDECODE / tma_info_core_slots",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.PREDECODE / (5 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
"MetricName": "tma_predecode",
- "MetricThreshold": "tma_predecode > 0.05",
+ "MetricThreshold": "tma_predecode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls).",
- "MetricExpr": "TOPDOWN_BE_BOUND.REGISTER / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls)",
+ "MetricExpr": "TOPDOWN_BE_BOUND.REGISTER / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
"MetricName": "tma_register",
- "MetricThreshold": "tma_register > 0.1",
+ "MetricThreshold": "tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls).",
- "MetricExpr": "TOPDOWN_BE_BOUND.REORDER_BUFFER / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls)",
+ "MetricExpr": "TOPDOWN_BE_BOUND.REORDER_BUFFER / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
"MetricName": "tma_reorder_buffer",
- "MetricThreshold": "tma_reorder_buffer > 0.1",
+ "MetricThreshold": "tma_reorder_buffer > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
- "MetricExpr": "tma_backend_bound",
- "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_aux_group",
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a resource limitation",
+ "MetricExpr": "tma_backend_bound - tma_core_bound",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
"MetricName": "tma_resource_bound",
- "MetricThreshold": "tma_resource_bound > 0.2",
+ "MetricThreshold": "tma_resource_bound > 0.2 & tma_backend_bound > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count.",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of issue slots that result in retirement slots.",
+ "BriefDescription": "Counts the number of issue slots that result in retirement slots",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "TOPDOWN_RETIRING.ALL / tma_info_core_slots",
+ "MetricExpr": "TOPDOWN_RETIRING.ALL / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "Default;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.75",
@@ -623,67 +594,11 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the number of cycles, relative to the number of mem_scheduler slots, in which uops are blocked due to RSV full relative",
- "MetricExpr": "tma_mem_scheduler * MEM_SCHEDULER_BLOCK.RSV / MEM_SCHEDULER_BLOCK.ALL",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_mem_scheduler_group",
- "MetricName": "tma_rsv",
- "MetricThreshold": "tma_rsv > 0.05",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).",
- "MetricExpr": "TOPDOWN_BE_BOUND.SERIALIZATION / tma_info_core_slots",
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS)",
+ "MetricExpr": "TOPDOWN_BE_BOUND.SERIALIZATION / (5 * CPU_CLK_UNHALTED.CORE)",
"MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
"MetricName": "tma_serialization",
- "MetricThreshold": "tma_serialization > 0.1",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of machine clears relative to the number of nuke slots due to SMC.",
- "MetricExpr": "tma_nuke * (MACHINE_CLEARS.SMC / MACHINE_CLEARS.SLOW)",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_nuke_group",
- "MetricName": "tma_smc",
- "MetricThreshold": "tma_smc > 0.02",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of cycles, relative to the number of mem_scheduler slots, in which uops are blocked due to store buffer full",
- "MetricExpr": "tma_store_bound",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_mem_scheduler_group",
- "MetricName": "tma_st_buffer",
- "MetricThreshold": "tma_st_buffer > 0.05",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a first level TLB miss.",
- "MetricExpr": "LD_HEAD.DTLB_MISS_AT_RET / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_stlb_hit",
- "MetricThreshold": "tma_stlb_hit > 0.05",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a second level TLB miss requiring a page walk.",
- "MetricExpr": "LD_HEAD.PGWALK_AT_RET / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_stlb_miss",
- "MetricThreshold": "tma_stlb_miss > 0.05",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of cycles the core is stalled due to store buffer full.",
- "MetricExpr": "tma_mem_scheduler * (MEM_SCHEDULER_BLOCK.ST_BUF / MEM_SCHEDULER_BLOCK.ALL)",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_store_bound",
- "MetricThreshold": "tma_store_bound > 0.1",
- "ScaleUnit": "100%"
- },
- {
- "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a store forward block.",
- "MetricExpr": "LD_HEAD.ST_ADDR_AT_RET / tma_info_core_clks",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
- "MetricName": "tma_store_fwd_blk",
- "MetricThreshold": "tma_store_fwd_blk > 0.05",
+ "MetricThreshold": "tma_serialization > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
"ScaleUnit": "100%"
}
]
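The level-1 expressions above all divide a TOPDOWN_* count by the same slot total, 5 * CPU_CLK_UNHALTED.CORE, which this patch inlines in place of the old tma_info_core_slots helper metric. As a minimal sketch of how those MetricExpr strings evaluate once perf has collected the raw counts (not part of the patch; the counts dict and its values below are invented for illustration), the Python below reproduces the four level-1 metrics, including the remainder-style tma_bad_speculation expression:

# Illustrative only: made-up counter values standing in for one perf stat run.
counts = {
    "CPU_CLK_UNHALTED.CORE": 1_000_000,
    "TOPDOWN_FE_BOUND.ALL": 1_200_000,
    "TOPDOWN_BE_BOUND.ALL": 900_000,
    "TOPDOWN_RETIRING.ALL": 2_400_000,
}

# Issue slots: 5 per unhalted core cycle, as in "(5 * CPU_CLK_UNHALTED.CORE)".
slots = 5 * counts["CPU_CLK_UNHALTED.CORE"]

tma_frontend_bound = counts["TOPDOWN_FE_BOUND.ALL"] / slots
tma_backend_bound = counts["TOPDOWN_BE_BOUND.ALL"] / slots
tma_retiring = counts["TOPDOWN_RETIRING.ALL"] / slots
# Bad speculation is whatever slot fraction the other three do not account for.
tma_bad_speculation = (slots - (counts["TOPDOWN_FE_BOUND.ALL"]
                                + counts["TOPDOWN_BE_BOUND.ALL"]
                                + counts["TOPDOWN_RETIRING.ALL"])) / slots

for name, value in [("tma_frontend_bound", tma_frontend_bound),
                    ("tma_backend_bound", tma_backend_bound),
                    ("tma_bad_speculation", tma_bad_speculation),
                    ("tma_retiring", tma_retiring)]:
    print(f"{name}: {value:.1%}")  # ScaleUnit "100%": report as percentages

With these invented numbers the four metrics print 24.0%, 18.0%, 10.0% and 48.0%, summing to 100% of slots; thresholds such as tma_frontend_bound > 0.2 are then applied to these fractions.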
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/cache.json b/tools/perf/pmu-events/arch/x86/alderlaken/cache.json
index 043445ae14a8..669f4979b651 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/cache.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/cache.json
@@ -1,22 +1,51 @@
[
{
+ "BriefDescription": "Counts the total number of L2 Cache accesses. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.ALL",
+ "PublicDescription": "Counts the total number of L2 Cache Accesses, includes hits, misses, rejects front door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only. Counts on a per core basis.",
+ "SampleAfterValue": "200003"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 Cache accesses that resulted in a hit. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.HIT",
+ "PublicDescription": "Counts the number of L2 Cache accesses that resulted in a hit from a front door request only (does not include rejects or recycles), Counts on a per core basis.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 Cache accesses that resulted in a miss. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.MISS",
+ "PublicDescription": "Counts the number of L2 Cache accesses that resulted in a miss from a front door request only (does not include rejects or recycles). Counts on a per core basis.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
- "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"UMask": "0x41"
},
{
"BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
- "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"UMask": "0x4f"
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
@@ -25,6 +54,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_DRAM_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in DRAM or MMIO (non-DRAM).",
@@ -33,6 +63,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_L2_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the L2 cache.",
@@ -41,6 +72,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the LLC or other core with HITE/F/M.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_LLC_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
@@ -49,6 +81,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD",
"SampleAfterValue": "200003",
@@ -56,6 +89,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_DRAM_HIT",
"SampleAfterValue": "200003",
@@ -63,6 +97,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_L2_HIT",
"SampleAfterValue": "200003",
@@ -70,6 +105,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the LLC or other core with HITE/F/M.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_LLC_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
@@ -78,33 +114,88 @@
},
{
"BriefDescription": "Counts the number of load uops retired that hit in DRAM.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x80"
},
{
+ "BriefDescription": "Counts the number of load uops retired that hit in the L3 cache, in which a snoop was required and modified data was forwarded from another core or module.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.HITM",
+ "SampleAfterValue": "200003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that hit in the L1 data cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that miss in the L1 data cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x8"
+ },
+ {
"BriefDescription": "Counts the number of load uops retired that hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
+ "BriefDescription": "Counts the number of load uops retired that miss in the L2 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10"
+ },
+ {
"BriefDescription": "Counts the number of load uops retired that hit in the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L3_HIT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x4"
},
{
+ "BriefDescription": "Counts the number of load uops retired that hit in the L3 cache, in which a snoop was required, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_UOPS_RETIRED_MISC.HIT_E_F",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that miss in the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_UOPS_RETIRED_MISC.L3_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
"BriefDescription": "Counts the number of cycles that uops are blocked for any of the following reasons: load buffer, store buffer or RSV full.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.ALL",
"SampleAfterValue": "20003",
@@ -112,6 +203,7 @@
},
{
"BriefDescription": "Counts the number of cycles that uops are blocked due to a load buffer full condition.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.LD_BUF",
"SampleAfterValue": "20003",
@@ -119,6 +211,7 @@
},
{
"BriefDescription": "Counts the number of cycles that uops are blocked due to an RSV full condition.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.RSV",
"SampleAfterValue": "20003",
@@ -126,6 +219,7 @@
},
{
"BriefDescription": "Counts the number of cycles that uops are blocked due to a store buffer full condition.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x04",
"EventName": "MEM_SCHEDULER_BLOCK.ST_BUF",
"SampleAfterValue": "20003",
@@ -133,195 +227,409 @@
},
{
"BriefDescription": "Counts the number of load uops retired.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts the total number of load uops retired.",
"SampleAfterValue": "200003",
"UMask": "0x81"
},
{
"BriefDescription": "Counts the number of store uops retired.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of store uops retired.",
"SampleAfterValue": "200003",
"UMask": "0x82"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 128 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 128 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 16 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 16 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 256 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 256 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 32 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 32 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 4 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 4 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 512 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 512 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 64 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 64 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 8 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
"PublicDescription": "Counts the number of tagged loads with an instruction latency that exceeds or equals the threshold of 8 cycles as defined in MEC_CR_PEBS_LD_LAT_THRESHOLD (3F6H). Only counts with PEBS enabled. If a PEBS record is generated, will populate the PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
+ "BriefDescription": "Counts the number of load uops retired that performed one or more locks.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x21"
+ },
+ {
"BriefDescription": "Counts the number of retired split load uops.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x41"
},
{
+ "BriefDescription": "Counts the total number of load and store uops retired that missed in the second level TLB.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x13"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the second Level TLB.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x11"
+ },
+ {
+ "BriefDescription": "Counts the number of store ops retired that miss in the second level TLB.",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x12"
+ },
+ {
"BriefDescription": "Counts the number of stores uops retired. Counts with or without PEBS enabled.",
+ "Counter": "0,1,2,3,4,5",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
- "PEBS": "2",
"PublicDescription": "Counts the number of stores uops retired. Counts with or without PEBS enabled. If PEBS is enabled and a PEBS record is generated, will populate PEBS Latency and PEBS Data Source fields accordingly.",
"SampleAfterValue": "1000003",
"UMask": "0x6"
},
{
+ "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.COREWB_M.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10008",
+ "PublicDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F803C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x4003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F803C0001",
+ "MSRValue": "0x1F803C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F803C0002",
+ "MSRValue": "0x1F803C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x4003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x14000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F803C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT.SNOOP_HIT_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x4003C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C4000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to instruction cache misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ICACHE",
"SampleAfterValue": "1000003",
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/floating-point.json b/tools/perf/pmu-events/arch/x86/alderlaken/floating-point.json
index 30e8ca3c1485..ed963fcb6485 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/floating-point.json
@@ -1,6 +1,23 @@
[
{
+ "BriefDescription": "Counts the number of cycles the floating point divider is in the loop stage.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.FPDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point divider uops executed per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.FPDIV_UOPS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
"BriefDescription": "Counts the number of floating point operations retired that required microcode assist.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.FP_ASSIST",
"PublicDescription": "Counts the number of floating point operations retired that required microcode assist, which is not a reflection of the number of FP operations, instructions or uops.",
@@ -9,9 +26,9 @@
},
{
"BriefDescription": "Counts the number of floating point divide uops retired (x87 and SSE, including x87 sqrt).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.FPDIV",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x8"
}
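
(Not part of the patch: one plausible way to combine the two new FP divider events with the existing UOPS_RETIRED.FPDIV entry above. The ratios below are illustrative derived metrics, not definitions from this file; the sample counts are made up, and CPU_CLK_UNHALTED.CORE comes from pipeline.json.)

def fp_divider_metrics(counts):
    """counts: raw event counts keyed by EventName, e.g. from a perf stat run."""
    active = counts["ARITH.FPDIV_ACTIVE"]      # cycles the FP divider is in its loop stage
    retired = counts["UOPS_RETIRED.FPDIV"]     # retired FP divide uops (x87/SSE, incl. x87 sqrt)
    cycles = counts["CPU_CLK_UNHALTED.CORE"]   # unhalted core cycles
    return {
        "fpdiv_active_fraction": active / cycles,      # share of cycles spent dividing
        "avg_cycles_per_fpdiv_uop": active / retired,  # rough average divide latency
    }

# Made-up numbers, purely to show the shape of the calculation.
print(fp_divider_metrics({"ARITH.FPDIV_ACTIVE": 1.2e6,
                          "UOPS_RETIRED.FPDIV": 3.0e4,
                          "CPU_CLK_UNHALTED.CORE": 5.0e8}))
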
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/frontend.json b/tools/perf/pmu-events/arch/x86/alderlaken/frontend.json
index 36898bab2bba..2a68f9969da0 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number of BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts the number of requests to the instruction cache for one or more bytes of a cache line.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"PublicDescription": "Counts the total number of requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Counts the number of instruction cache misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "Counts the number of missed requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/memory.json b/tools/perf/pmu-events/arch/x86/alderlaken/memory.json
index 863a3ba2b4b2..049c5e2630d7 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to any number of reasons, including an L1 miss, WCB full, pagewalk, store address block or store data block, on a load that retires.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.ANY_AT_RET",
"SampleAfterValue": "1000003",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to a core bound stall including a store address match, a DTLB miss or a page walk that detains the load from retiring.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.L1_BOUND_AT_RET",
"SampleAfterValue": "1000003",
@@ -15,6 +17,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DL1 miss.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.L1_MISS_AT_RET",
"SampleAfterValue": "1000003",
@@ -22,6 +25,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.OTHER_AT_RET",
"PublicDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases such as pipeline conflicts, fences, etc.",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a pagewalk.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.PGWALK_AT_RET",
"SampleAfterValue": "1000003",
@@ -37,6 +42,7 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a store address match.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.ST_ADDR_AT_RET",
"SampleAfterValue": "1000003",
@@ -44,44 +50,119 @@
},
{
"BriefDescription": "Counts the number of machine clears due to memory ordering caused by a snoop from an external agent. Does not count internally generated machine clears such as those due to memory disambiguation.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"SampleAfterValue": "20003",
"UMask": "0x2"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F84400004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS]",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS] Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS]",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F84400002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. [L3_MISS_LOCAL is alias to L3_MISS] Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x784004000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xB7",
+ "EventName": "OCR.SWPF_RD.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F84404000",
+ "PublicDescription": "Counts L1 data cache software prefetches which include T0/T1/T2 and NTA (except PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
}
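
(Not part of the patch: the L3_MISS_LOCAL entries above are explicitly described as aliases of L3_MISS and carry identical EventCode/MSRValue/UMask encodings. A short check over the patched memory.json makes any such aliases visible; the path is the one used in this patch, relative to a kernel tree.)

import json
from collections import defaultdict

path = "tools/perf/pmu-events/arch/x86/alderlaken/memory.json"
with open(path) as f:
    events = json.load(f)

# Group event names by their hardware encoding to surface alias pairs.
by_encoding = defaultdict(list)
for ev in events:
    key = (ev.get("EventCode"), ev.get("MSRValue"), ev.get("UMask"))
    by_encoding[key].append(ev["EventName"])

for key, names in sorted(by_encoding.items(), key=lambda kv: str(kv[0])):
    if len(names) > 1:
        print("same encoding", key, "->", ", ".join(names))
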
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/metricgroups.json b/tools/perf/pmu-events/arch/x86/alderlaken/metricgroups.json
index 7b2049cd2694..40984c23a6c9 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/metricgroups.json
@@ -1,26 +1,23 @@
{
+ "Flops": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Ifetch": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Load_Store_Miss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Mem_Exec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Summary": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"TopdownL1": "Metrics for top-down breakdown at level 1",
"TopdownL2": "Metrics for top-down breakdown at level 2",
"TopdownL3": "Metrics for top-down breakdown at level 3",
- "TopdownL4": "Metrics for top-down breakdown at level 4",
+ "load_store_bound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"tma_L1_group": "Metrics for top-down breakdown at level 1",
"tma_L2_group": "Metrics for top-down breakdown at level 2",
"tma_L3_group": "Metrics for top-down breakdown at level 3",
- "tma_L4_group": "Metrics for top-down breakdown at level 4",
- "tma_backend_bound_aux_group": "Metrics contributing to tma_backend_bound_aux category",
"tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
- "tma_base_group": "Metrics contributing to tma_base category",
- "tma_fetch_bandwidth_group": "Metrics contributing to tma_fetch_bandwidth category",
- "tma_fetch_latency_group": "Metrics contributing to tma_fetch_latency category",
+ "tma_core_bound_group": "Metrics contributing to tma_core_bound category",
"tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
- "tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_ifetch_bandwidth_group": "Metrics contributing to tma_ifetch_bandwidth category",
+ "tma_ifetch_latency_group": "Metrics contributing to tma_ifetch_latency category",
"tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
- "tma_mem_scheduler_group": "Metrics contributing to tma_mem_scheduler category",
- "tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
- "tma_nuke_group": "Metrics contributing to tma_nuke category",
- "tma_resource_bound_group": "Metrics contributing to tma_resource_bound category",
- "tma_retiring_group": "Metrics contributing to tma_retiring category"
+ "tma_resource_bound_group": "Metrics contributing to tma_resource_bound category"
}
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/other.json b/tools/perf/pmu-events/arch/x86/alderlaken/other.json
index 6336de61f628..144d7b06f240 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/other.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/other.json
@@ -1,37 +1,43 @@
[
{
- "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response.",
- "EventCode": "0xB7",
- "EventName": "OCR.COREWB_M.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10008",
- "SampleAfterValue": "100003",
+ "BriefDescription": "This event is deprecated. [This event is alias to MISC_RETIRED.LBR_INSERTS]",
+ "Counter": "0,1,2,3,4,5",
+ "Deprecated": "1",
+ "EventCode": "0xe4",
+ "EventName": "LBR_INSERTS.ANY",
+ "SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
- "BriefDescription": "Counts demand data reads that have any type of response.",
+ "BriefDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "EventName": "OCR.FULL_STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
+ "MSRValue": "0x800000010000",
+ "PublicDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
- "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "BriefDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "EventName": "OCR.PARTIAL_STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10002",
+ "MSRValue": "0x400000010000",
+ "PublicDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xB7",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
}
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/pipeline.json b/tools/perf/pmu-events/arch/x86/alderlaken/pipeline.json
index 3153bab527a9..1dd61baec1a9 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/pipeline.json
@@ -1,232 +1,283 @@
[
{
+ "BriefDescription": "Counts the number of cycles when any of the floating point or integer dividers are active.",
+ "Counter": "0,1,2,3,4,5",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Counts the number of active floating point and integer dividers per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_OCCUPANCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point and integer divider uops executed per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_UOPS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles any of the two integer dividers are active.",
+ "Counter": "0,1,2,3,4,5",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.IDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of active integer dividers per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.IDIV_OCCUPANCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of integer divider uops executed per cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.IDIV_UOPS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
"SampleAfterValue": "200003"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.NEAR_CALL",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf9"
},
{
"BriefDescription": "Counts the number of retired JCC (Jump on Conditional Code) branch instructions retired, includes both taken and not taken branches.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e"
},
{
"BriefDescription": "Counts the number of taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe"
},
{
"BriefDescription": "Counts the number of far branch instructions retired, includes far jump, far call and return, and interrupt call and return.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xbf"
},
{
"BriefDescription": "Counts the number of near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb"
},
{
"BriefDescription": "Counts the number of near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.INDIRECT_CALL",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.IND_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.COND",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e"
},
{
"BriefDescription": "Counts the number of near CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf9"
},
{
"BriefDescription": "Counts the number of near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf7"
},
{
"BriefDescription": "Counts the number of near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xc0"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.INDIRECT",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NON_RETURN_IND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb"
},
{
"BriefDescription": "Counts the number of near relative CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.REL_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfd"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.NEAR_RETURN",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.RETURN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf7"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.COND_TAKEN",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.TAKEN_JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe"
},
{
"BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
"SampleAfterValue": "200003"
},
{
"BriefDescription": "Counts the number of mispredicted JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e"
},
{
"BriefDescription": "Counts the number of mispredicted taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe"
},
{
"BriefDescription": "Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb"
},
{
"BriefDescription": "Counts the number of mispredicted near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.INDIRECT_CALL",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.IND_CALL",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfb"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.COND",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x7e"
},
{
"BriefDescription": "Counts the number of mispredicted near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x80"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.INDIRECT",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NON_RETURN_IND",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xeb"
},
{
"BriefDescription": "Counts the number of mispredicted near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RETURN",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xf7"
},
{
"BriefDescription": "This event is deprecated. Refer to new event BR_MISP_RETIRED.COND_TAKEN",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.TAKEN_JCC",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0xfe"
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles. (Fixed event)",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.CORE",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1.",
"SampleAfterValue": "2000003",
@@ -234,13 +285,24 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.CORE_P",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses a programmable general purpose performance counter.",
"SampleAfterValue": "2000003"
},
{
+ "BriefDescription": "This event is deprecated. Refer to new event CPU_CLK_UNHALTED.REF_TSC_P",
+ "Counter": "0,1,2,3,4,5",
+ "Deprecated": "1",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.REF",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency. (Fixed event)",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses fixed counter 2.",
"SampleAfterValue": "2000003",
@@ -248,6 +310,7 @@
},
{
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general purpose performance counter.",
@@ -256,6 +319,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles. (Fixed event)",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1.",
"SampleAfterValue": "2000003",
@@ -263,6 +327,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses a programmable general purpose performance counter.",
@@ -270,47 +335,48 @@
},
{
"BriefDescription": "Counts the total number of instructions retired. (Fixed event)",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
- "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0.",
+ "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0. Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the total number of instructions retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses a programmable general purpose performance counter.",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "This event is deprecated. Refer to new event LD_BLOCKS.ADDRESS_ALIAS",
+ "Counter": "0,1,2,3,4,5",
"Deprecated": "1",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.4K_ALIAS",
- "PEBS": "1",
"SampleAfterValue": "1000003",
"UMask": "0x4"
},
{
"BriefDescription": "Counts the number of retired loads that are blocked because it initially appears to be store forward blocked, but subsequently is shown not to be blocked based on 4K alias check.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.ADDRESS_ALIAS",
- "PEBS": "1",
"SampleAfterValue": "1000003",
"UMask": "0x4"
},
{
"BriefDescription": "Counts the number of retired loads that are blocked because its address exactly matches an older store whose data is not ready.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.DATA_UNKNOWN",
- "PEBS": "1",
"SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of machine clears due to memory ordering in which an internal load passes an older store within the same CPU.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.DISAMBIGUATION",
"SampleAfterValue": "20003",
@@ -318,6 +384,7 @@
},
{
"BriefDescription": "Counts the number of machines clears due to memory renaming.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MRN_NUKE",
"SampleAfterValue": "1000003",
@@ -325,13 +392,16 @@
},
{
"BriefDescription": "Counts the number of machine clears due to a page fault. Counts both I-Side and D-Side (Loads/Stores) page faults. A page fault occurs when either the page is not present, or an access violation occurs.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.PAGE_FAULT",
"SampleAfterValue": "20003",
"UMask": "0x20"
},
{
- "BriefDescription": "Counts the number of machine clears that flush the pipeline and restart the machine with the use of microcode due to SMC, MEMORY_ORDERING, FP_ASSISTS, PAGE_FAULT, DISAMBIGUATION, and FPC_VIRTUAL_TRAP.",
+ "BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3,4,5",
+ "Deprecated": "1",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SLOW",
"SampleAfterValue": "20003",
@@ -339,13 +409,32 @@
},
{
"BriefDescription": "Counts the number of machine clears due to program modifying data (self modifying code) within 1K of a recently fetched code page.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC",
"SampleAfterValue": "20003",
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts the number of LBR entries recorded. Requires LBRs to be enabled in IA32_LBR_CTL. [This event is alias to LBR_INSERTS.ANY]",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0xe4",
+ "EventName": "MISC_RETIRED.LBR_INSERTS",
+ "PublicDescription": "Counts the number of LBR entries recorded. Requires LBRs to be enabled in IA32_LBR_CTL. This event is PDIR on GP0 and NPEBS on all other GPs [This event is alias to LBR_INSERTS.ANY]",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots in a UMWAIT or TPAUSE instruction where no uop issues due to the instruction putting the CPU into the C0.1 activity state. For Tremont, UMWAIT and TPAUSE will only put the CPU into C0.1 activity state (not C0.2 activity state)",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x75",
+ "EventName": "SERIALIZATION.C01_MS_SCB",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "Counts the number of issue slots not consumed by the backend due to a micro-sequencer (MS) scoreboard, which stalls the front-end from issuing from the UROM until a specified older uop retires.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x75",
"EventName": "SERIALIZATION.NON_C01_MS_SCB",
"PublicDescription": "Counts the number of issue slots not consumed by the backend due to a micro-sequencer (MS) scoreboard, which stalls the front-end from issuing from the UROM until a specified older uop retires. The most commonly executed instruction with an MS scoreboard is PAUSE.",
@@ -354,6 +443,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.ALL",
"PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the instruction queue (IQ) even if an FE_bound event occurs during this period. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
@@ -361,6 +451,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to fast nukes such as memory ordering and memory disambiguation machine clears.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.FASTNUKE",
"SampleAfterValue": "1000003",
@@ -368,6 +459,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS",
"SampleAfterValue": "1000003",
@@ -375,6 +467,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to branch mispredicts.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.MISPREDICT",
"SampleAfterValue": "1000003",
@@ -382,6 +475,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to a machine clear (nuke).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.NUKE",
"SampleAfterValue": "1000003",
@@ -389,12 +483,14 @@
},
{
"BriefDescription": "Counts the total number of issue slots every cycle that were not consumed by the backend due to backend stalls.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.ALL",
"SampleAfterValue": "1000003"
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to certain allocation restrictions.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS",
"SampleAfterValue": "1000003",
@@ -402,6 +498,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.MEM_SCHEDULER",
"SampleAfterValue": "1000003",
@@ -409,6 +506,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER",
"SampleAfterValue": "1000003",
@@ -416,6 +514,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.REGISTER",
"SampleAfterValue": "1000003",
@@ -423,6 +522,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to the reorder buffer being full (ROB stalls).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.REORDER_BUFFER",
"SampleAfterValue": "1000003",
@@ -430,6 +530,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.SERIALIZATION",
"SampleAfterValue": "1000003",
@@ -437,12 +538,14 @@
},
{
"BriefDescription": "Counts the total number of issue slots every cycle that were not consumed by the backend due to frontend stalls.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ALL",
"SampleAfterValue": "1000003"
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BACLEARS.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.BRANCH_DETECT",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
@@ -451,6 +554,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTCLEARS.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.BRANCH_RESTEER",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
@@ -459,6 +563,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to the microcode sequencer (MS).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.CISC",
"SampleAfterValue": "1000003",
@@ -466,6 +571,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to decode stalls.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.DECODE",
"SampleAfterValue": "1000003",
@@ -473,6 +579,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH",
"SampleAfterValue": "1000003",
@@ -480,6 +587,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to a latency related stalls including BACLEARs, BTCLEARs, ITLB misses, and ICache misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY",
"SampleAfterValue": "1000003",
@@ -487,6 +595,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to ITLB misses.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ITLB",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
@@ -495,6 +604,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to other common frontend stalls not categorized.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.OTHER",
"SampleAfterValue": "1000003",
@@ -502,6 +612,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to wrong predecodes.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.PREDECODE",
"SampleAfterValue": "1000003",
@@ -509,40 +620,48 @@
},
{
"BriefDescription": "Counts the total number of consumed retirement slots.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "TOPDOWN_RETIRING.ALL",
- "PEBS": "1",
"SampleAfterValue": "1000003"
},
{
+ "BriefDescription": "Counts the number of uops issued by the front end every cycle.",
+ "Counter": "0,1,2,3,4,5",
+ "EventCode": "0x0e",
+ "EventName": "UOPS_ISSUED.ANY",
+ "PublicDescription": "Counts the number of uops issued by the front end every cycle. When 4-uops are requested and only 2-uops are delivered, the event counts 2. Uops_issued correlates to the number of ROB entries. If uop takes 2 ROB slots it counts as 2 uops_issued.",
+ "SampleAfterValue": "200003"
+ },
+ {
"BriefDescription": "Counts the total number of uops retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.ALL",
- "PEBS": "1",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Counts the number of integer divide uops retired.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.IDIV",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x10"
},
{
"BriefDescription": "Counts the number of uops that are from complex flows issued by the micro-sequencer (MS).",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.MS",
- "PEBS": "1",
"PublicDescription": "Counts the number of uops that are from complex flows issued by the Microcode Sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of x87 uops retired, includes those in MS flows.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.X87",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x2"
}
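
The hunks above consistently add an explicit "Counter" field to each core event (fixed-counter events get "Fixed counter N", general-purpose events get "0,1,2,3,4,5") and drop the per-event "PEBS" flags. A minimal sketch, not part of the patch, of inspecting the new field with Python and the standard json module; the file path below is an assumption and should point at the core-event JSON modified above.

# Minimal sketch (not part of the patch): list each event's allowed counters
# using the "Counter" field added above. The path is an assumption.
import json

def summarize_counters(path):
    with open(path) as f:
        events = json.load(f)                      # each pmu-events file is a JSON array of event dicts
    for ev in events:
        counters = ev.get("Counter", "unspecified")
        kind = "fixed" if counters.lower().startswith("fixed") else "general-purpose"
        print(f'{ev.get("EventName", "?")}: {counters} ({kind})')

summarize_counters("tools/perf/pmu-events/arch/x86/alderlaken/pipeline.json")  # assumed path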
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-interconnect.json
index 8bf020a9dfa8..b5604c7534e1 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-interconnect.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of requests allocated in Coherency Tracker.",
+ "Counter": "0,1",
"EventCode": "0x84",
"EventName": "UNC_ARB_COH_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -9,48 +10,59 @@
},
{
"BriefDescription": "Each cycle counts number of any coherent request at memory controller that were issued by any core.",
+ "Counter": "0",
"EventCode": "0x85",
"EventName": "UNC_ARB_DAT_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "ARB"
},
{
"BriefDescription": "Each cycle counts number of coherent reads pending on data return from memory controller that were issued by any core.",
+ "Counter": "0",
"EventCode": "0x85",
"EventName": "UNC_ARB_DAT_OCCUPANCY.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_ARB_REQ_TRK_REQUEST.DRD",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x81",
"EventName": "UNC_ARB_DAT_REQUESTS.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_ARB_DAT_OCCUPANCY.ALL",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x85",
"EventName": "UNC_ARB_IFA_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "ARB"
},
{
"BriefDescription": "Each cycle count number of 'valid' coherent Data Read entries . Such entry is defined as valid when it is allocated till deallocation. Doesn't include prefetches [This event is alias to UNC_ARB_TRK_OCCUPANCY.RD]",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_REQ_TRK_OCCUPANCY.DRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "Number of all coherent Data Read entries. Doesn't include prefetches [This event is alias to UNC_ARB_TRK_REQUESTS.RD]",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_REQ_TRK_REQUEST.DRD",
"PerPkg": "1",
@@ -59,6 +71,7 @@
},
{
"BriefDescription": "Each cycle counts number of all outgoing valid entries in ReqTrk. Such entry is defined as valid from its allocation in ReqTrk till deallocation. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -67,14 +80,17 @@
},
{
"BriefDescription": "Each cycle count number of 'valid' coherent Data Read entries . Such entry is defined as valid when it is allocated till deallocation. Doesn't include prefetches [This event is alias to UNC_ARB_REQ_TRK_OCCUPANCY.DRD]",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "ARB"
},
{
"BriefDescription": "Counts the number of coherent and in-coherent requests initiated by IA cores, processor graphic units, or LLC.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -83,6 +99,7 @@
},
{
"BriefDescription": "Number of all coherent Data Read entries. Doesn't include prefetches [This event is alias to UNC_ARB_REQ_TRK_REQUEST.DRD]",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.RD",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json
index 163d7e7755c4..bcf275cd592a 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts every 64B read request entering the Memory Controller 0 to DRAM (sum of all channels).",
+ "Counter": "0",
"EventCode": "0xff",
"EventName": "UNC_MC0_RDCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Counts every 64B write request entering the Memory Controller 0 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+ "Counter": "1",
"EventCode": "0xff",
"EventName": "UNC_MC0_WRCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Counts every 64B read request entering the Memory Controller 1 to DRAM (sum of all channels).",
+ "Counter": "3",
"EventCode": "0xff",
"EventName": "UNC_MC1_RDCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Counts every 64B write request entering the Memory Controller 1 to DRAM (sum of all channels). Each write request counts as a new request incrementing this counter. However, same cache line write requests (both full and partial) are combined to a single 64 byte data transfer to DRAM.",
+ "Counter": "4",
"EventCode": "0xff",
"EventName": "UNC_MC1_WRCAS_COUNT_FREERUN",
"PerPkg": "1",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "ACT command for a read request sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x24",
"EventName": "UNC_M_ACT_COUNT_RD",
"PerPkg": "1",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "ACT command sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x26",
"EventName": "UNC_M_ACT_COUNT_TOTAL",
"PerPkg": "1",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "ACT command for a write request sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x25",
"EventName": "UNC_M_ACT_COUNT_WR",
"PerPkg": "1",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Read CAS command sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x22",
"EventName": "UNC_M_CAS_COUNT_RD",
"PerPkg": "1",
@@ -63,6 +71,7 @@
},
{
"BriefDescription": "Write CAS command sent to DRAM",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x23",
"EventName": "UNC_M_CAS_COUNT_WR",
"PerPkg": "1",
@@ -70,6 +79,7 @@
},
{
"BriefDescription": "Number of clocks",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x01",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
@@ -77,6 +87,7 @@
},
{
"BriefDescription": "incoming read request page status is Page Empty",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1D",
"EventName": "UNC_M_DRAM_PAGE_EMPTY_RD",
"PerPkg": "1",
@@ -84,6 +95,7 @@
},
{
"BriefDescription": "incoming write request page status is Page Empty",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x20",
"EventName": "UNC_M_DRAM_PAGE_EMPTY_WR",
"PerPkg": "1",
@@ -91,6 +103,7 @@
},
{
"BriefDescription": "incoming read request page status is Page Hit",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1C",
"EventName": "UNC_M_DRAM_PAGE_HIT_RD",
"PerPkg": "1",
@@ -98,6 +111,7 @@
},
{
"BriefDescription": "incoming write request page status is Page Hit",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1F",
"EventName": "UNC_M_DRAM_PAGE_HIT_WR",
"PerPkg": "1",
@@ -105,6 +119,7 @@
},
{
"BriefDescription": "incoming read request page status is Page Miss",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1E",
"EventName": "UNC_M_DRAM_PAGE_MISS_RD",
"PerPkg": "1",
@@ -112,6 +127,7 @@
},
{
"BriefDescription": "incoming write request page status is Page Miss",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x21",
"EventName": "UNC_M_DRAM_PAGE_MISS_WR",
"PerPkg": "1",
@@ -119,6 +135,7 @@
},
{
"BriefDescription": "Any Rank at Hot state",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x19",
"EventName": "UNC_M_DRAM_THERMAL_HOT",
"PerPkg": "1",
@@ -126,6 +143,7 @@
},
{
"BriefDescription": "Any Rank at Warm state",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x1A",
"EventName": "UNC_M_DRAM_THERMAL_WARM",
"PerPkg": "1",
@@ -133,6 +151,7 @@
},
{
"BriefDescription": "Incoming read prefetch request from IA.",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x0A",
"EventName": "UNC_M_PREFETCH_RD",
"PerPkg": "1",
@@ -140,6 +159,7 @@
},
{
"BriefDescription": "PRE command sent to DRAM due to page table idle timer expiration",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x28",
"EventName": "UNC_M_PRE_COUNT_IDLE",
"PerPkg": "1",
@@ -147,6 +167,7 @@
},
{
"BriefDescription": "PRE command sent to DRAM for a read/write request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x27",
"EventName": "UNC_M_PRE_COUNT_PAGE_MISS",
"PerPkg": "1",
@@ -154,6 +175,7 @@
},
{
"BriefDescription": "Incoming VC0 read request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x02",
"EventName": "UNC_M_VC0_REQUESTS_RD",
"PerPkg": "1",
@@ -161,6 +183,7 @@
},
{
"BriefDescription": "Incoming VC0 write request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x03",
"EventName": "UNC_M_VC0_REQUESTS_WR",
"PerPkg": "1",
@@ -168,6 +191,7 @@
},
{
"BriefDescription": "Incoming VC1 read request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x04",
"EventName": "UNC_M_VC1_REQUESTS_RD",
"PerPkg": "1",
@@ -175,6 +199,7 @@
},
{
"BriefDescription": "Incoming VC1 write request",
+ "Counter": "0,1,2,3,4",
"EventCode": "0x05",
"EventName": "UNC_M_VC1_REQUESTS_WR",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-other.json b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-other.json
index 2af92e43b28a..1ac5b5ef8094 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/uncore-other.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/uncore-other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "This 48-bit fixed counter counts the UCLK cycles.",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_CLOCK.SOCKET",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/virtual-memory.json b/tools/perf/pmu-events/arch/x86/alderlaken/virtual-memory.json
index 67fd640f790e..d9c737a17df0 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of page walks completed due to load DTLB misses to any page size.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to store DTLB misses to any page size.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Counts the number of page walks initiated by a instruction fetch that missed the first and second level TLBs.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSED_WALK",
"SampleAfterValue": "1000003",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Counts the number of page walks due to an instruction fetch that miss the PDE (Page Directory Entry) cache.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.PDE_CACHE_MISS",
"SampleAfterValue": "2000003",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -39,9 +44,40 @@
},
{
"BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DTLB miss.",
+ "Counter": "0,1,2,3,4,5",
"EventCode": "0x05",
"EventName": "LD_HEAD.DTLB_MISS_AT_RET",
"SampleAfterValue": "1000003",
"UMask": "0x90"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event MEM_UOPS_RETIRED.STLB_MISS",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "Deprecated": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.DTLB_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x13"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event MEM_UOPS_RETIRED.STLB_MISS_LOADS",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "Deprecated": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.DTLB_MISS_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x11"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event MEM_UOPS_RETIRED.STLB_MISS_STORES",
+ "Counter": "0,1,2,3,4,5",
+ "Data_LA": "1",
+ "Deprecated": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.DTLB_MISS_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x12"
}
]
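
Several alderlaken events are now flagged "Deprecated": "1" with a pointer to a replacement (e.g. MEM_UOPS_RETIRED.DTLB_MISS above refers to MEM_UOPS_RETIRED.STLB_MISS). A minimal sketch, not part of the patch and assuming Python with only the standard library, of filtering deprecated entries out of an event listing; the directory path comes from the diff headers above.

# Minimal sketch (not part of the patch): skip events marked deprecated when
# building an event list from the alderlaken JSON files touched above.
import glob
import json

def active_events(json_dir):
    for path in sorted(glob.glob(f"{json_dir}/*.json")):
        with open(path) as f:
            for ev in json.load(f):
                if ev.get("Deprecated") == "1":
                    continue                       # e.g. LD_BLOCKS.4K_ALIAS, MEM_UOPS_RETIRED.DTLB_MISS
                yield ev.get("EventName"), ev.get("BriefDescription", "")

for name, brief in active_events("tools/perf/pmu-events/arch/x86/alderlaken"):
    print(f"{name}: {brief}")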
diff --git a/tools/perf/pmu-events/arch/x86/amdzen4/cache.json b/tools/perf/pmu-events/arch/x86/amdzen4/cache.json
index ecbe9660b2b3..e6d710cf3ce2 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen4/cache.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen4/cache.json
@@ -676,6 +676,10 @@
"EventCode": "0xac",
"BriefDescription": "Average sampled latency when data is sourced from DRAM in the same NUMA node.",
"UMask": "0x01",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -683,6 +687,10 @@
"EventCode": "0xac",
"BriefDescription": "Average sampled latency when data is sourced from DRAM in a different NUMA node.",
"UMask": "0x02",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -690,6 +698,10 @@
"EventCode": "0xac",
"BriefDescription": "Average sampled latency when data is sourced from another CCX's cache when the address was in the same NUMA node.",
"UMask": "0x04",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -697,6 +709,10 @@
"EventCode": "0xac",
"BriefDescription": "Average sampled latency when data is sourced from another CCX's cache when the address was in a different NUMA node.",
"UMask": "0x08",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -704,6 +720,10 @@
"EventCode": "0xac",
"BriefDescription": "Average sampled latency when data is sourced from extension memory (CXL) in the same NUMA node.",
"UMask": "0x10",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -711,6 +731,10 @@
"EventCode": "0xac",
"BriefDescription": "Average sampled latency when data is sourced from extension memory (CXL) in a different NUMA node.",
"UMask": "0x20",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -718,6 +742,10 @@
"EventCode": "0xac",
"BriefDescription": "Average sampled latency from all data sources.",
"UMask": "0x3f",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -725,6 +753,10 @@
"EventCode": "0xad",
"BriefDescription": "L3 cache fill requests sourced from DRAM in the same NUMA node.",
"UMask": "0x01",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -732,6 +764,10 @@
"EventCode": "0xad",
"BriefDescription": "L3 cache fill requests sourced from DRAM in a different NUMA node.",
"UMask": "0x02",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -739,6 +775,10 @@
"EventCode": "0xad",
"BriefDescription": "L3 cache fill requests sourced from another CCX's cache when the address was in the same NUMA node.",
"UMask": "0x04",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -746,6 +786,10 @@
"EventCode": "0xad",
"BriefDescription": "L3 cache fill requests sourced from another CCX's cache when the address was in a different NUMA node.",
"UMask": "0x08",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -753,6 +797,10 @@
"EventCode": "0xad",
"BriefDescription": "L3 cache fill requests sourced from extension memory (CXL) in the same NUMA node.",
"UMask": "0x10",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -760,6 +808,10 @@
"EventCode": "0xad",
"BriefDescription": "L3 cache fill requests sourced from extension memory (CXL) in a different NUMA node.",
"UMask": "0x20",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
},
{
@@ -767,6 +819,10 @@
"EventCode": "0xad",
"BriefDescription": "L3 cache fill requests sourced from all data sources.",
"UMask": "0x3f",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
"Unit": "L3PMC"
}
]
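
The amdzen4 cache.json hunks add the same four qualifiers (EnAllCores, EnAllSlices, SliceId, ThreadMask) to every sampled-latency and cache-fill L3PMC event. A minimal sketch, not part of the patch and assuming Python, that flags any L3PMC entry in that file still missing one of them.

# Minimal sketch (not part of the patch): report L3PMC events lacking the
# per-core/per-slice qualifiers added above.
import json

REQUIRED = ("EnAllCores", "EnAllSlices", "SliceId", "ThreadMask")

def check_l3pmc(path):
    with open(path) as f:
        events = json.load(f)
    for ev in events:
        if ev.get("Unit") != "L3PMC":
            continue
        missing = [k for k in REQUIRED if k not in ev]
        if missing:
            print(ev.get("EventName", ev.get("EventCode")), "missing:", missing)

check_l3pmc("tools/perf/pmu-events/arch/x86/amdzen4/cache.json")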
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/branch-prediction.json b/tools/perf/pmu-events/arch/x86/amdzen5/branch-prediction.json
new file mode 100644
index 000000000000..2d8d18cb85c1
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/branch-prediction.json
@@ -0,0 +1,93 @@
+[
+ {
+ "EventName": "bp_l1_tlb_miss_l2_tlb_hit",
+ "EventCode": "0x84",
+ "BriefDescription": "Instruction fetches that miss in the L1 ITLB but hit in the L2 ITLB."
+ },
+ {
+ "EventName": "bp_l1_tlb_miss_l2_tlb_miss.if4k",
+ "EventCode": "0x85",
+ "BriefDescription": "Instruction fetches that miss in both the L1 and L2 ITLBs (page-table walks are requested) for 4k pages.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "bp_l1_tlb_miss_l2_tlb_miss.if2m",
+ "EventCode": "0x85",
+ "BriefDescription": "Instruction fetches that miss in both the L1 and L2 ITLBs (page-table walks are requested) for 2M pages.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "bp_l1_tlb_miss_l2_tlb_miss.if1g",
+ "EventCode": "0x85",
+ "BriefDescription": "Instruction fetches that miss in both the L1 and L2 ITLBs (page-table walks are requested) for 1G pages.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "bp_l1_tlb_miss_l2_tlb_miss.coalesced_4k",
+ "EventCode": "0x85",
+ "BriefDescription": "Instruction fetches that miss in both the L1 and L2 ITLBs (page-table walks are requested) for coalesced pages. A coalesced page is a 16k page created from four adjacent 4k pages.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "bp_l1_tlb_miss_l2_tlb_miss.all",
+ "EventCode": "0x85",
+ "BriefDescription": "Instruction fetches that miss in both the L1 and L2 ITLBs (page-table walks are requested) for all page sizes.",
+ "UMask": "0x0f"
+ },
+ {
+ "EventName": "bp_l2_btb_correct",
+ "EventCode": "0x8b",
+ "BriefDescription": "L2 branch prediction overrides existing prediction (speculative)."
+ },
+ {
+ "EventName": "bp_dyn_ind_pred",
+ "EventCode": "0x8e",
+ "BriefDescription": "Dynamic indirect predictions (branch used the indirect predictor to make a prediction)."
+ },
+ {
+ "EventName": "bp_de_redirect",
+ "EventCode": "0x91",
+ "BriefDescription": "Number of times an early redirect is sent to branch predictor. This happens when either the decoder or dispatch logic is able to detect that the branch predictor needs to be redirected."
+ },
+ {
+ "EventName": "bp_l1_tlb_fetch_hit.if4k",
+ "EventCode": "0x94",
+ "BriefDescription": "Instruction fetches that hit in the L1 ITLB for 4k or coalesced pages. A coalesced page is a 16k page created from four adjacent 4k pages.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "bp_l1_tlb_fetch_hit.if2m",
+ "EventCode": "0x94",
+ "BriefDescription": "Instruction fetches that hit in the L1 ITLB for 2M pages.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "bp_l1_tlb_fetch_hit.if1g",
+ "EventCode": "0x94",
+ "BriefDescription": "Instruction fetches that hit in the L1 ITLB for 1G pages.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "bp_l1_tlb_fetch_hit.all",
+ "EventCode": "0x94",
+ "BriefDescription": "Instruction fetches that hit in the L1 ITLB for all page sizes.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "bp_redirects.resync",
+ "EventCode": "0x9f",
+ "BriefDescription": "Redirects of the branch predictor caused by resyncs.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "bp_redirects.ex_redir",
+ "EventCode": "0x9f",
+ "BriefDescription": "Redirects of the branch predictor caused by mispredicts.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "bp_redirects.all",
+ "EventCode": "0x9f",
+ "BriefDescription": "Redirects of the branch predictor."
+ }
+]
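
In the new amdzen5 branch-prediction.json above, each ".all" sub-event's UMask is the OR of its sibling UMasks on the same EventCode (0x0f = 0x01|0x02|0x04|0x08 for bp_l1_tlb_miss_l2_tlb_miss, 0x07 for bp_l1_tlb_fetch_hit). A minimal sketch, not part of the patch and assuming Python, of checking that property for the file added above; it only covers events that carry an explicit UMask.

# Minimal sketch (not part of the patch): verify that each ".all" UMask is the
# union of its sibling sub-event UMasks within the same EventCode.
import json
from collections import defaultdict

def check_umask_unions(path):
    with open(path) as f:
        events = json.load(f)
    by_code = defaultdict(list)
    for ev in events:
        if "UMask" in ev:                          # events without a UMask (e.g. bp_l2_btb_correct) are skipped
            by_code[ev["EventCode"]].append(ev)
    for code, evs in by_code.items():
        alls = [e for e in evs if e["EventName"].endswith(".all")]
        subs = [e for e in evs if not e["EventName"].endswith(".all")]
        if not alls or not subs:
            continue
        union = 0
        for e in subs:
            union |= int(e["UMask"], 16)
        for e in alls:
            match = int(e["UMask"], 16) == union
            print(f"{e['EventName']}: UMask={e['UMask']} union=0x{union:02x} match={match}")

check_umask_unions("tools/perf/pmu-events/arch/x86/amdzen5/branch-prediction.json")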
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/data-fabric.json b/tools/perf/pmu-events/arch/x86/amdzen5/data-fabric.json
new file mode 100644
index 000000000000..fa06569d881d
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/data-fabric.json
@@ -0,0 +1,1634 @@
+[
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_0",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 0.",
+ "EventCode": "0x1f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_1",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 1.",
+ "EventCode": "0x5f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_2",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 2.",
+ "EventCode": "0x9f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_3",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 3.",
+ "EventCode": "0xdf",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_4",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 4.",
+ "EventCode": "0x11f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_5",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 5.",
+ "EventCode": "0x15f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_6",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 6.",
+ "EventCode": "0x19f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_7",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 7.",
+ "EventCode": "0x1df",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_8",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 8.",
+ "EventCode": "0x21f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_9",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 9.",
+ "EventCode": "0x25f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_10",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 10.",
+ "EventCode": "0x29f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_read_data_beats_dram_11",
+ "PublicDescription": "Read data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 11.",
+ "EventCode": "0x2df",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_0",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 0.",
+ "EventCode": "0x1f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_1",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 1.",
+ "EventCode": "0x5f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_2",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 2.",
+ "EventCode": "0x9f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_3",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 3.",
+ "EventCode": "0xdf",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_4",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 4.",
+ "EventCode": "0x11f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_5",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 5.",
+ "EventCode": "0x15f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_6",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 6.",
+ "EventCode": "0x19f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_7",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 7.",
+ "EventCode": "0x1df",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_8",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 8.",
+ "EventCode": "0x21f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_9",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 9.",
+ "EventCode": "0x25f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_10",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 10.",
+ "EventCode": "0x29f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_write_data_beats_dram_11",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local socket and DRAM Channel 11.",
+ "EventCode": "0x2df",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_0",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 0.",
+ "EventCode": "0x1f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_1",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 1.",
+ "EventCode": "0x5f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_2",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 2.",
+ "EventCode": "0x9f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_3",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 3.",
+ "EventCode": "0xdf",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_4",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 4.",
+ "EventCode": "0x11f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_5",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 5.",
+ "EventCode": "0x15f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_6",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 6.",
+ "EventCode": "0x19f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_7",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 7.",
+ "EventCode": "0x1df",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_8",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 8.",
+ "EventCode": "0x21f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_9",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 9.",
+ "EventCode": "0x25f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_10",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 10.",
+ "EventCode": "0x29f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_write_data_beats_dram_11",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between remote socket and DRAM Channel 11.",
+ "EventCode": "0x2df",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_0",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 0.",
+ "EventCode": "0x1f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_1",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 1.",
+ "EventCode": "0x5f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_2",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 2.",
+ "EventCode": "0x9f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_3",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 3.",
+ "EventCode": "0xdf",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_4",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 4.",
+ "EventCode": "0x11f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_5",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 5.",
+ "EventCode": "0x15f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_6",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 6.",
+ "EventCode": "0x19f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_7",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 7.",
+ "EventCode": "0x1df",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_8",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 8.",
+ "EventCode": "0x21f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_9",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 9.",
+ "EventCode": "0x25f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_10",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 10.",
+ "EventCode": "0x29f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_write_data_beats_dram_11",
+ "PublicDescription": "Write data beats (64 bytes) for transactions between local or remote socket and DRAM Channel 11.",
+ "EventCode": "0x2df",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_0",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 0.",
+ "EventCode": "0x81f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_1",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 1.",
+ "EventCode": "0x85f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_2",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 2.",
+ "EventCode": "0x89f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_3",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 3.",
+ "EventCode": "0x8df",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_4",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 4.",
+ "EventCode": "0x91f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_5",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 5.",
+ "EventCode": "0x95f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_6",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 6.",
+ "EventCode": "0x99f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_read_data_beats_io_7",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local socket and IO Root Complex 7.",
+ "EventCode": "0x9df",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_0",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 0.",
+ "EventCode": "0x81f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_1",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 1.",
+ "EventCode": "0x85f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_2",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 2.",
+ "EventCode": "0x89f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_3",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 3.",
+ "EventCode": "0x8df",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_4",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 4.",
+ "EventCode": "0x91f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_5",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 5.",
+ "EventCode": "0x95f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_6",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 6.",
+ "EventCode": "0x99f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_upstream_write_data_beats_io_7",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local socket and IO Root Complex 7.",
+ "EventCode": "0x9df",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_0",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 0.",
+ "EventCode": "0x81f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_1",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 1.",
+ "EventCode": "0x85f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_2",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 2.",
+ "EventCode": "0x89f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_3",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 3.",
+ "EventCode": "0x8df",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_4",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 4.",
+ "EventCode": "0x91f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_5",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 5.",
+ "EventCode": "0x95f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_6",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 6.",
+ "EventCode": "0x99f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_read_data_beats_io_7",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between remote socket and IO Root Complex 7.",
+ "EventCode": "0x9df",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_0",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 0.",
+ "EventCode": "0x81f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_1",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 1.",
+ "EventCode": "0x85f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_2",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 2.",
+ "EventCode": "0x89f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_3",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 3.",
+ "EventCode": "0x8df",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_4",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 4.",
+ "EventCode": "0x91f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_5",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 5.",
+ "EventCode": "0x95f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_6",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 6.",
+ "EventCode": "0x99f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_upstream_write_data_beats_io_7",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between remote socket and IO Root Complex 7.",
+ "EventCode": "0x9df",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_0",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 0.",
+ "EventCode": "0x81f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_1",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 1.",
+ "EventCode": "0x85f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_2",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 2.",
+ "EventCode": "0x89f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_3",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 3.",
+ "EventCode": "0x8df",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_4",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 4.",
+ "EventCode": "0x91f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_5",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 5.",
+ "EventCode": "0x95f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_6",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 6.",
+ "EventCode": "0x99f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_read_data_beats_io_7",
+ "PublicDescription": "Upstream DMA read data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 7.",
+ "EventCode": "0x9df",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_0",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 0.",
+ "EventCode": "0x81f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_1",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 1.",
+ "EventCode": "0x85f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_2",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 2.",
+ "EventCode": "0x89f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_3",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 3.",
+ "EventCode": "0x8df",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_4",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 4.",
+ "EventCode": "0x91f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_5",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 5.",
+ "EventCode": "0x95f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_6",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 6.",
+ "EventCode": "0x99f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_upstream_write_data_beats_io_7",
+ "PublicDescription": "Upstream DMA write data beats (64 bytes) for transactions between local or remote socket and IO Root Complex 7.",
+ "EventCode": "0x9df",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_0",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 0.",
+ "EventCode": "0x41e",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_1",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 1.",
+ "EventCode": "0x45e",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_2",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 2.",
+ "EventCode": "0x49e",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_3",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 3.",
+ "EventCode": "0x4de",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_4",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 4.",
+ "EventCode": "0x51e",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_5",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 5.",
+ "EventCode": "0x55e",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_6",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 6.",
+ "EventCode": "0x59e",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_7",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 7.",
+ "EventCode": "0x5de",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_8",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 8.",
+ "EventCode": "0x41f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_9",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 9.",
+ "EventCode": "0x45f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_10",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 10.",
+ "EventCode": "0x49f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_11",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 11.",
+ "EventCode": "0x4df",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_12",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 12.",
+ "EventCode": "0x51f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_13",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 13.",
+ "EventCode": "0x55f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_14",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 14.",
+ "EventCode": "0x59f",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_cfi_15",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local socket and Core-to-Fabric Interface 15.",
+ "EventCode": "0x5df",
+ "UMask": "0x7fe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_0",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 0.",
+ "EventCode": "0x41e",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_1",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 1.",
+ "EventCode": "0x45e",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_2",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 2.",
+ "EventCode": "0x49e",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_3",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 3.",
+ "EventCode": "0x4de",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_4",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 4.",
+ "EventCode": "0x51e",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_5",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 5.",
+ "EventCode": "0x55e",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_6",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 6.",
+ "EventCode": "0x59e",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_7",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 7.",
+ "EventCode": "0x5de",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_8",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 8.",
+ "EventCode": "0x41f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_9",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 9.",
+ "EventCode": "0x45f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_10",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 10.",
+ "EventCode": "0x49f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_11",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 11.",
+ "EventCode": "0x4df",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_12",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 12.",
+ "EventCode": "0x51f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_13",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 13.",
+ "EventCode": "0x55f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_14",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 14.",
+ "EventCode": "0x59f",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_cfi_15",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and Core-to-Fabric Interface 15.",
+ "EventCode": "0x5df",
+ "UMask": "0x7ff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_0",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 0.",
+ "EventCode": "0x41e",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_1",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 1.",
+ "EventCode": "0x45e",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_2",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 2.",
+ "EventCode": "0x49e",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_3",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 3.",
+ "EventCode": "0x4de",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_4",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 4.",
+ "EventCode": "0x51e",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_5",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 5.",
+ "EventCode": "0x55e",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_6",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 6.",
+ "EventCode": "0x59e",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_7",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 7.",
+ "EventCode": "0x5de",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_8",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 8.",
+ "EventCode": "0x41f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_9",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 9.",
+ "EventCode": "0x45f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_10",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 10.",
+ "EventCode": "0x49f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_11",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 11.",
+ "EventCode": "0x4df",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_12",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 12.",
+ "EventCode": "0x51f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_13",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 13.",
+ "EventCode": "0x55f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_14",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 14.",
+ "EventCode": "0x59f",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_inbound_data_beats_cfi_15",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between remote socket and Core-to-Fabric Interface 15.",
+ "EventCode": "0x5df",
+ "UMask": "0xbfe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_0",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 0.",
+ "EventCode": "0x41e",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_1",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 1.",
+ "EventCode": "0x45e",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_2",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 2.",
+ "EventCode": "0x49e",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_3",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 3.",
+ "EventCode": "0x4de",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_4",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 4.",
+ "EventCode": "0x51e",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_5",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 5.",
+ "EventCode": "0x55e",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_6",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 6.",
+ "EventCode": "0x59e",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_7",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 7.",
+ "EventCode": "0x5de",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_8",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 8.",
+ "EventCode": "0x41f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_9",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 9.",
+ "EventCode": "0x45f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_10",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 10.",
+ "EventCode": "0x49f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_11",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 11.",
+ "EventCode": "0x4df",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_12",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 12.",
+ "EventCode": "0x51f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_13",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 13.",
+ "EventCode": "0x55f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_14",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 14.",
+ "EventCode": "0x59f",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "remote_socket_outbound_data_beats_cfi_15",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between remote socket and Core-to-Fabric Interface 15.",
+ "EventCode": "0x5df",
+ "UMask": "0xbff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_0",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 0.",
+ "EventCode": "0x41e",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_1",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 1.",
+ "EventCode": "0x45e",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_2",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 2.",
+ "EventCode": "0x49e",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_3",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 3.",
+ "EventCode": "0x4de",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_4",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 4.",
+ "EventCode": "0x51e",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_5",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 5.",
+ "EventCode": "0x55e",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_6",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 6.",
+ "EventCode": "0x59e",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_7",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 7.",
+ "EventCode": "0x5de",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_8",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 8.",
+ "EventCode": "0x41f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_9",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 9.",
+ "EventCode": "0x45f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_10",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 10.",
+ "EventCode": "0x49f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_11",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 11.",
+ "EventCode": "0x4df",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_12",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 12.",
+ "EventCode": "0x51f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_13",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 13.",
+ "EventCode": "0x55f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_14",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 14.",
+ "EventCode": "0x59f",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_inbound_data_beats_cfi_15",
+ "PublicDescription": "Inbound data beats (32 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 15.",
+ "EventCode": "0x5df",
+ "UMask": "0xffe",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_0",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 0.",
+ "EventCode": "0x41e",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_1",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 1.",
+ "EventCode": "0x45e",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_2",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 2.",
+ "EventCode": "0x49e",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_3",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 3.",
+ "EventCode": "0x4de",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_4",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 4.",
+ "EventCode": "0x51e",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_5",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 5.",
+ "EventCode": "0x55e",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_6",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 6.",
+ "EventCode": "0x59e",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_7",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 7.",
+ "EventCode": "0x5de",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_8",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 8.",
+ "EventCode": "0x41f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_9",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 9.",
+ "EventCode": "0x45f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_10",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 10.",
+ "EventCode": "0x49f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_11",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 11.",
+ "EventCode": "0x4df",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_12",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 12.",
+ "EventCode": "0x51f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_13",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 13.",
+ "EventCode": "0x55f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_14",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 14.",
+ "EventCode": "0x59f",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_or_remote_socket_outbound_data_beats_cfi_15",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local or remote socket and Core-to-Fabric Interface 15.",
+ "EventCode": "0x5df",
+ "UMask": "0xfff",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_link_0",
+ "PublicDescription": "Inbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 0.",
+ "EventCode": "0xd5f",
+ "UMask": "0xf3f",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_link_1",
+ "PublicDescription": "Inbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 1.",
+ "EventCode": "0xd9f",
+ "UMask": "0xf3f",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_link_2",
+ "PublicDescription": "Inbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 2.",
+ "EventCode": "0xddf",
+ "UMask": "0xf3f",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_link_3",
+ "PublicDescription": "Inbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 3.",
+ "EventCode": "0xe1f",
+ "UMask": "0xf3f",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_link_4",
+ "PublicDescription": "Inbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 4.",
+ "EventCode": "0xe5f",
+ "UMask": "0xf3f",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_inbound_data_beats_link_5",
+ "PublicDescription": "Inbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 5.",
+ "EventCode": "0xe9f",
+ "UMask": "0xf3f",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_link_0",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 0.",
+ "EventCode": "0xd5f",
+ "UMask": "0xf3e",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_link_1",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 1.",
+ "EventCode": "0xd9f",
+ "UMask": "0xf3e",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_link_2",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 2.",
+ "EventCode": "0xddf",
+ "UMask": "0xf3e",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_link_3",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 3.",
+ "EventCode": "0xe1f",
+ "UMask": "0xf3e",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_link_4",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 4.",
+ "EventCode": "0xe5f",
+ "UMask": "0xf3e",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ },
+ {
+ "EventName": "local_socket_outbound_data_beats_link_5",
+ "PublicDescription": "Outbound data beats (64 bytes) for transactions between local socket and remote socket over Cross-socket Link 5.",
+ "EventCode": "0xe9f",
+ "UMask": "0xf3e",
+ "PerPkg": "1",
+ "Unit": "DFPMC"
+ }
+]
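
A minimal sketch (not part of the patch) of how the DFPMC data-beat counts defined above could be turned into approximate bandwidth. Beat sizes follow the PublicDescription fields: 64 bytes for upstream DMA and outbound beats, 32 bytes for inbound CFI beats. The helper name, the sample count and the 1-second interval below are illustrative assumptions, not part of the event definitions.

# beats_to_gbps is a hypothetical helper, not part of perf or this patch.
def beats_to_gbps(beat_count, beat_bytes, elapsed_seconds):
    """Convert a raw data-beat count into GB/s over the sampled interval."""
    return (beat_count * beat_bytes) / elapsed_seconds / 1e9

# Example: 1,500,000 upstream DMA write beats (64 bytes each) observed over
# 1 second correspond to roughly 0.096 GB/s of DMA write traffic.
print(beats_to_gbps(1_500_000, 64, 1.0))
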
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/decode.json b/tools/perf/pmu-events/arch/x86/amdzen5/decode.json
new file mode 100644
index 000000000000..d0eff7f2a3ea
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/decode.json
@@ -0,0 +1,115 @@
+[
+ {
+ "EventName": "de_op_queue_empty",
+ "EventCode": "0xa9",
+ "BriefDescription": "Cycles where the op queue is empty. Such cycles indicate that the front-end is not delivering instructions fast enough."
+ },
+ {
+ "EventName": "de_src_op_disp.x86_decoder",
+ "EventCode": "0xaa",
+ "BriefDescription": "Ops dispatched from x86 decoder.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "de_src_op_disp.op_cache",
+ "EventCode": "0xaa",
+ "BriefDescription": "Ops dispatched from op cache.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "de_src_op_disp.all",
+ "EventCode": "0xaa",
+ "BriefDescription": "Ops dispatched from any source.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "de_dis_ops_from_decoder.any_fp_dispatch",
+ "EventCode": "0xab",
+ "BriefDescription": "Number of ops dispatched to the floating-point unit.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "de_dis_ops_from_decoder.any_integer_dispatch",
+ "EventCode": "0xab",
+ "BriefDescription": "Number of ops dispatched to the integer execution unit.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part1.int_phy_reg_file_rsrc_stall",
+ "EventCode": "0xae",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to an integer physical register file resource stall.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part1.load_queue_rsrc_stall",
+ "EventCode": "0xae",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a lack of load queue tokens.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part1.store_queue_rsrc_stall",
+ "EventCode": "0xae",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a lack of store queue tokens.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part1.taken_brnch_buffer_rsrc",
+ "EventCode": "0xae",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a taken branch buffer resource stall.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part1.fp_sch_rsrc_stall",
+ "EventCode": "0xae",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a floating-point non-schedulable queue token stall.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part2.al_tokens",
+ "EventCode": "0xaf",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to unavailability of ALU tokens.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part2.ag_tokens",
+ "EventCode": "0xaf",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to unavailability of agen tokens.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part2.ex_flush_recovery",
+ "EventCode": "0xaf",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to a pending integer execution flush recovery.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "de_dispatch_stall_cycle_dynamic_tokens_part2.retq",
+ "EventCode": "0xaf",
+ "BriefDescription": "Cycles where a dispatch group is valid but does not get dispatched due to unavailability of retire queue tokens.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "de_no_dispatch_per_slot.no_ops_from_frontend",
+ "EventCode": "0x1a0",
+ "BriefDescription": "In each cycle counts dispatch slots left empty because the front-end did not supply ops.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "de_no_dispatch_per_slot.backend_stalls",
+ "EventCode": "0x1a0",
+ "BriefDescription": "In each cycle counts ops unable to dispatch because of back-end stalls.",
+ "UMask": "0x1e"
+ },
+ {
+ "EventName": "de_no_dispatch_per_slot.smt_contention",
+ "EventCode": "0x1a0",
+ "BriefDescription": "In each cycle counts ops unable to dispatch because the dispatch cycle was granted to the other SMT thread.",
+ "UMask": "0x60"
+ },
+ {
+ "EventName": "de_additional_resource_stalls.dispatch_stalls",
+ "EventCode": "0x1a2",
+ "BriefDescription": "Counts additional cycles where dispatch is stalled due to a lack of dispatch resources.",
+ "UMask": "0x30"
+ }
+]
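
A minimal sketch (not part of the patch) of how the de_no_dispatch_per_slot events above can be read as per-slot counts: dividing by cycles times the dispatch width gives the fraction of dispatch slots lost to each cause. The dispatch width of 8 slots per cycle is an assumption here, as are the helper name and the sample numbers; consult the processor documentation for the actual width.

# frontend_idle_slot_fraction is a hypothetical helper, not part of perf.
def frontend_idle_slot_fraction(empty_slots, cycles, dispatch_width=8):
    """Fraction of dispatch slots left empty by the front end.

    empty_slots    - count from de_no_dispatch_per_slot.no_ops_from_frontend
    cycles         - unhalted core cycles over the same interval
    dispatch_width - assumed 8 dispatch slots per cycle (adjust if needed)
    """
    return empty_slots / (cycles * dispatch_width)

# Example: 2.0e9 empty slots over 1.0e9 cycles -> 25% of slots left idle
# by the front end.
print(frontend_idle_slot_fraction(2.0e9, 1.0e9))
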
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/execution.json b/tools/perf/pmu-events/arch/x86/amdzen5/execution.json
new file mode 100644
index 000000000000..5a46d3db74e7
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/execution.json
@@ -0,0 +1,174 @@
+[
+ {
+ "EventName": "ex_ret_instr",
+ "EventCode": "0xc0",
+ "BriefDescription": "Retired instructions."
+ },
+ {
+ "EventName": "ex_ret_ops",
+ "EventCode": "0xc1",
+ "BriefDescription": "Retired macro-ops."
+ },
+ {
+ "EventName": "ex_ret_brn",
+ "EventCode": "0xc2",
+ "BriefDescription": "Retired branch instructions (all types of architectural control flow changes, including exceptions and interrupts)."
+ },
+ {
+ "EventName": "ex_ret_brn_misp",
+ "EventCode": "0xc3",
+ "BriefDescription": "Retired branch instructions mispredicted."
+ },
+ {
+ "EventName": "ex_ret_brn_tkn",
+ "EventCode": "0xc4",
+ "BriefDescription": "Retired taken branch instructions (all types of architectural control flow changes, including exceptions and interrupts)."
+ },
+ {
+ "EventName": "ex_ret_brn_tkn_misp",
+ "EventCode": "0xc5",
+ "BriefDescription": "Retired taken branch instructions mispredicted."
+ },
+ {
+ "EventName": "ex_ret_brn_far",
+ "EventCode": "0xc6",
+ "BriefDescription": "Retired far control transfers (far call/jump/return, IRET, SYSCALL and SYSRET, plus exceptions and interrupts). Far control transfers are not subject to branch prediction."
+ },
+ {
+ "EventName": "ex_ret_near_ret",
+ "EventCode": "0xc8",
+ "BriefDescription": "Retired near returns (RET or RET Iw)."
+ },
+ {
+ "EventName": "ex_ret_near_ret_mispred",
+ "EventCode": "0xc9",
+ "BriefDescription": "Retired near returns mispredicted. Each misprediction incurs the same penalty as a mispredicted conditional branch instruction."
+ },
+ {
+ "EventName": "ex_ret_brn_ind_misp",
+ "EventCode": "0xca",
+ "BriefDescription": "Retired indirect branch instructions mispredicted (only EX mispredicts). Each misprediction incurs the same penalty as a mispredicted conditional branch instruction."
+ },
+ {
+ "EventName": "ex_ret_mmx_fp_instr.x87",
+ "EventCode": "0xcb",
+ "BriefDescription": "Retired x87 instructions.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ex_ret_mmx_fp_instr.mmx",
+ "EventCode": "0xcb",
+ "BriefDescription": "Retired MMX instructions.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ex_ret_mmx_fp_instr.sse",
+ "EventCode": "0xcb",
+ "BriefDescription": "Retired SSE instructions (includes SSE, SSE2, SSE3, SSSE3, SSE4A, SSE41, SSE42 and AVX).",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ex_ret_ind_brch_instr",
+ "EventCode": "0xcc",
+ "BriefDescription": "Retired indirect branch instructions."
+ },
+ {
+ "EventName": "ex_ret_cond",
+ "EventCode": "0xd1",
+ "BriefDescription": "Retired conditional branch instructions."
+ },
+ {
+ "EventName": "ex_div_busy",
+ "EventCode": "0xd3",
+ "BriefDescription": "Number of cycles the divider is busy."
+ },
+ {
+ "EventName": "ex_div_count",
+ "EventCode": "0xd4",
+ "BriefDescription": "Divide ops executed."
+ },
+ {
+ "EventName": "ex_no_retire.empty",
+ "EventCode": "0xd6",
+ "BriefDescription": "Cycles with no retire due to the lack of valid ops in the retire queue (may be caused by front-end bottlenecks or pipeline redirects).",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ex_no_retire.not_complete",
+ "EventCode": "0xd6",
+ "BriefDescription": "Cycles with no retire while the oldest op is waiting to be executed.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ex_no_retire.other",
+ "EventCode": "0xd6",
+ "BriefDescription": "Cycles with no retire caused by other reasons (retire breaks, traps, faults, etc.).",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "ex_no_retire.thread_not_selected",
+ "EventCode": "0xd6",
+ "BriefDescription": "Cycles with no retire because thread arbitration did not select the thread.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "ex_no_retire.load_not_complete",
+ "EventCode": "0xd6",
+ "BriefDescription": "Cycles with no retire while the oldest op is waiting for load data.",
+ "UMask": "0xa2"
+ },
+ {
+ "EventName": "ex_no_retire.all",
+ "EventCode": "0xd6",
+ "BriefDescription": "Cycles with no retire for any reason.",
+ "UMask": "0x1b"
+ },
+ {
+ "EventName": "ex_ret_ucode_instr",
+ "EventCode": "0x1c1",
+ "BriefDescription": "Retired microcoded instructions."
+ },
+ {
+ "EventName": "ex_ret_ucode_ops",
+ "EventCode": "0x1c2",
+ "BriefDescription": "Retired microcode ops."
+ },
+ {
+ "EventName": "ex_ret_msprd_brnch_instr_dir_msmtch",
+ "EventCode": "0x1c7",
+ "BriefDescription": "Retired branch instructions mispredicted due to direction mismatch."
+ },
+ {
+ "EventName": "ex_ret_uncond_brnch_instr_mispred",
+ "EventCode": "0x1c8",
+ "BriefDescription": "Retired unconditional indirect branch instructions mispredicted."
+ },
+ {
+ "EventName": "ex_ret_uncond_brnch_instr",
+ "EventCode": "0x1c9",
+ "BriefDescription": "Retired unconditional branch instructions."
+ },
+ {
+ "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops",
+ "EventCode": "0x1cf",
+ "BriefDescription": "Ops tagged by IBS.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ex_tagged_ibs_ops.ibs_tagged_ops_ret",
+ "EventCode": "0x1cf",
+ "BriefDescription": "Ops tagged by IBS that retired.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ex_tagged_ibs_ops.ibs_count_rollover",
+ "EventCode": "0x1cf",
+ "BriefDescription": "Ops not tagged by IBS due to a previous tagged op that has not yet signaled interrupt.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ex_ret_fused_instr",
+ "EventCode": "0x1d0",
+ "BriefDescription": "Retired fused instructions."
+ }
+]
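
A minimal sketch (not part of the patch) combining two of the retire events above, ex_ret_brn and ex_ret_brn_misp, into a branch misprediction rate. The helper name and the sample counts are illustrative only.

# branch_mispredict_rate is a hypothetical helper, not part of perf.
def branch_mispredict_rate(mispredicted, retired_branches):
    """Mispredicted retired branches as a fraction of all retired branches."""
    return mispredicted / retired_branches if retired_branches else 0.0

# Example: 4,000,000 mispredicts out of 200,000,000 retired branches -> 2%.
print(branch_mispredict_rate(4_000_000, 200_000_000))
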
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/floating-point.json b/tools/perf/pmu-events/arch/x86/amdzen5/floating-point.json
new file mode 100644
index 000000000000..9204bfb1d69e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/floating-point.json
@@ -0,0 +1,812 @@
+[
+ {
+ "EventName": "fp_ret_x87_fp_ops.add_sub_ops",
+ "EventCode": "0x02",
+ "BriefDescription": "Retired x87 floating-point add and subtract ops.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "fp_ret_x87_fp_ops.mul_ops",
+ "EventCode": "0x02",
+ "BriefDescription": "Retired x87 floating-point multiply ops.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "fp_ret_x87_fp_ops.div_sqrt_ops",
+ "EventCode": "0x02",
+ "BriefDescription": "Retired x87 floating-point divide and square root ops.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "fp_ret_x87_fp_ops.all",
+ "EventCode": "0x02",
+ "BriefDescription": "Retired x87 floating-point ops of all types.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.add_sub_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point add and subtract ops.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.mult_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point multiply ops.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.div_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point divide and square root ops.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.mac_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point multiply-accumulate ops (each operation is counted as 2 ops).",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.bfloat16_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point bfloat16 ops.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.scalar_single_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point scalar single-precision ops.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.packed_single_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point packed single-precision ops.",
+ "UMask": "0x60"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.scalar_double_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point scalar double-precision ops.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.packed_double_flops",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point packed double-precision ops.",
+ "UMask": "0xa0"
+ },
+ {
+ "EventName": "fp_ret_sse_avx_ops.all",
+ "EventCode": "0x03",
+ "BriefDescription": "Retired SSE and AVX floating-point ops of all types.",
+ "UMask": "0x0f"
+ },
+ {
+ "EventName": "fp_ops_retired_by_width.x87_uops_retired",
+ "EventCode": "0x08",
+ "BriefDescription": "Retired x87 floating-point ops.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "fp_ops_retired_by_width.mmx_uops_retired",
+ "EventCode": "0x08",
+ "BriefDescription": "Retired MMX floating-point ops.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "fp_ops_retired_by_width.scalar_uops_retired",
+ "EventCode": "0x08",
+ "BriefDescription": "Retired scalar floating-point ops.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "fp_ops_retired_by_width.pack_128_uops_retired",
+ "EventCode": "0x08",
+ "BriefDescription": "Retired packed 128-bit floating-point ops.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "fp_ops_retired_by_width.pack_256_uops_retired",
+ "EventCode": "0x08",
+ "BriefDescription": "Retired packed 256-bit floating-point ops.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "fp_ops_retired_by_width.pack_512_uops_retired",
+ "EventCode": "0x08",
+ "BriefDescription": "Retired packed 512-bit floating-point ops.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "fp_ops_retired_by_width.all",
+ "EventCode": "0x08",
+ "BriefDescription": "Retired floating-point ops of all widths.",
+ "UMask": "0x3f"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_add",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point add ops.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_sub",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point subtract ops.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_mul",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point multiply ops.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_mac",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point multiply-accumulate ops.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_div",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point divide ops.",
+ "UMask": "0x05"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_sqrt",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point square root ops.",
+ "UMask": "0x06"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_cmp",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point compare ops.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_cvt",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point convert ops.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_blend",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point blend ops.",
+ "UMask": "0x09"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_other",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point ops of other types.",
+ "UMask": "0x0e"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.scalar_all",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired scalar floating-point ops of all types.",
+ "UMask": "0x0f"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_add",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point add ops.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_sub",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point subtract ops.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_mul",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point multiply ops.",
+ "UMask": "0x30"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_mac",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point multiply-accumulate ops.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_div",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point divide ops.",
+ "UMask": "0x50"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_sqrt",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point square root ops.",
+ "UMask": "0x60"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_cmp",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point compare ops.",
+ "UMask": "0x70"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_cvt",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point convert ops.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_blend",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point blend ops.",
+ "UMask": "0x90"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_shuffle",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point shuffle ops (may include instructions not necessarily thought of as including shuffles e.g. horizontal add, dot product, and certain MOV instructions).",
+ "UMask": "0xb0"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_logical",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point logical ops.",
+ "UMask": "0xd0"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_other",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point ops of other types.",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.vector_all",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired vector floating-point ops of all types.",
+ "UMask": "0xf0"
+ },
+ {
+ "EventName": "fp_ops_retired_by_type.all",
+ "EventCode": "0x0a",
+ "BriefDescription": "Retired floating-point ops of all types.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_add",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer add.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_sub",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer subtract ops.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_mul",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer multiply ops.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_mac",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer multiply-accumulate ops.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_cmp",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer compare ops.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_shift",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer shift ops.",
+ "UMask": "0x09"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_mov",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer MOV ops.",
+ "UMask": "0x0a"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_shuffle",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer shuffle ops (may include instructions not necessarily thought of as including shuffles e.g. horizontal add, dot product, and certain MOV instructions).",
+ "UMask": "0x0b"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_pack",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer pack ops.",
+ "UMask": "0x0c"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_logical",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer logical ops.",
+ "UMask": "0x0d"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_other",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer multiply ops of other types.",
+ "UMask": "0x0e"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.mmx_all",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired MMX integer ops of all types.",
+ "UMask": "0x0f"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_add",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer add ops.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_sub",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer subtract ops.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_mul",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer multiply ops.",
+ "UMask": "0x30"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_mac",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer multiply-accumulate ops.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_aes",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer AES ops.",
+ "UMask": "0x50"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_sha",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer SHA ops.",
+ "UMask": "0x60"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_cmp",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer compare ops.",
+ "UMask": "0x70"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_clm",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer CLM ops.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_shift",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer shift ops.",
+ "UMask": "0x90"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_mov",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer MOV ops.",
+ "UMask": "0xa0"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_shuffle",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer shuffle ops (may include instructions not necessarily thought of as including shuffles e.g. horizontal add, dot product, and certain MOV instructions).",
+ "UMask": "0xb0"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_pack",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer pack ops.",
+ "UMask": "0xc0"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_logical",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer logical ops.",
+ "UMask": "0xd0"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_other",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer ops of other types.",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.sse_avx_all",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE and AVX integer ops of all types.",
+ "UMask": "0xf0"
+ },
+ {
+ "EventName": "sse_avx_ops_retired.all",
+ "EventCode": "0x0b",
+ "BriefDescription": "Retired SSE, AVX and MMX integer ops of all types.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_add",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point add ops.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_sub",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point subtract ops.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_mul",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point multiply ops.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_mac",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point multiply-accumulate ops.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_div",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point divide ops.",
+ "UMask": "0x05"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_sqrt",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point square root ops.",
+ "UMask": "0x06"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_cmp",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point compare ops.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_cvt",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point convert ops.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_blend",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point blend ops.",
+ "UMask": "0x09"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_shuffle",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point shuffle ops (may include instructions not necessarily thought of as including shuffles e.g. horizontal add, dot product, and certain MOV instructions).",
+ "UMask": "0x0b"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_logical",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point logical ops.",
+ "UMask": "0x0d"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_other",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point ops of other types.",
+ "UMask": "0x0e"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp128_all",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 128-bit packed floating-point ops of all types.",
+ "UMask": "0x0f"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_add",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point add ops.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_sub",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point subtract ops.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_mul",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point multiply ops.",
+ "UMask": "0x30"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_mac",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point multiply-accumulate ops.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_div",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point divide ops.",
+ "UMask": "0x50"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_sqrt",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point square root ops.",
+ "UMask": "0x60"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_cmp",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point compare ops.",
+ "UMask": "0x70"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_cvt",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point convert ops.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_blend",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point blend ops.",
+ "UMask": "0x90"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_shuffle",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point shuffle ops (may include instructions not necessarily thought of as including shuffles e.g. horizontal add, dot product, and certain MOV instructions).",
+ "UMask": "0xb0"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_logical",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point logical ops.",
+ "UMask": "0xd0"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_other",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point ops of other types.",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.fp256_all",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired 256-bit packed floating-point ops of all types.",
+ "UMask": "0xf0"
+ },
+ {
+ "EventName": "fp_pack_ops_retired.all",
+ "EventCode": "0x0c",
+ "BriefDescription": "Retired packed floating-point ops of all types.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_add",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer add ops.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_sub",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer subtract ops.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_mul",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer multiply ops.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_mac",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer multiply-accumulate ops.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_aes",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer AES ops.",
+ "UMask": "0x05"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_sha",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer SHA ops.",
+ "UMask": "0x06"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_cmp",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer compare ops.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_clm",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer CLM ops.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_shift",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer shift ops.",
+ "UMask": "0x09"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_mov",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer MOV ops.",
+ "UMask": "0x0a"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_shuffle",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer shuffle ops (may include instructions not necessarily thought of as including shuffles e.g. horizontal add, dot product, and certain MOV instructions).",
+ "UMask": "0x0b"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_pack",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer pack ops.",
+ "UMask": "0x0c"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_logical",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer logical ops.",
+ "UMask": "0x0d"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_other",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer ops of other types.",
+ "UMask": "0x0e"
+ },
+ {
+ "EventName": "packed_int_op_type.int128_all",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 128-bit packed integer ops of all types.",
+ "UMask": "0x0f"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_add",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer add ops.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_sub",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer subtract ops.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_mul",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer multiply ops.",
+ "UMask": "0x30"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_mac",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer multiply-accumulate ops.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_cmp",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer compare ops.",
+ "UMask": "0x70"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_shift",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer shift ops.",
+ "UMask": "0x90"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_mov",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer MOV ops.",
+ "UMask": "0xa0"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_shuffle",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer shuffle ops (may include instructions not necessarily thought of as including shuffles e.g. horizontal add, dot product, and certain MOV instructions).",
+ "UMask": "0xb0"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_pack",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer pack ops.",
+ "UMask": "0xc0"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_logical",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer logical ops.",
+ "UMask": "0xd0"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_other",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer ops of other types.",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "packed_int_op_type.int256_all",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired 256-bit packed integer ops of all types.",
+ "UMask": "0xf0"
+ },
+ {
+ "EventName": "packed_int_op_type.all",
+ "EventCode": "0x0d",
+ "BriefDescription": "Retired packed integer ops of all types.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "fp_disp_faults.x87_fill_fault",
+ "EventCode": "0x0e",
+ "BriefDescription": "Floating-point dispatch faults for x87 fills.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "fp_disp_faults.xmm_fill_fault",
+ "EventCode": "0x0e",
+ "BriefDescription": "Floating-point dispatch faults for XMM fills.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "fp_disp_faults.ymm_fill_fault",
+ "EventCode": "0x0e",
+ "BriefDescription": "Floating-point dispatch faults for YMM fills.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "fp_disp_faults.ymm_spill_fault",
+ "EventCode": "0x0e",
+ "BriefDescription": "Floating-point dispatch faults for YMM spills.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "fp_disp_faults.sse_avx_all",
+ "EventCode": "0x0e",
+ "BriefDescription": "Floating-point dispatch faults of all types for SSE and AVX ops.",
+ "UMask": "0x0e"
+ },
+ {
+ "EventName": "fp_disp_faults.all",
+ "EventCode": "0x0e",
+ "BriefDescription": "Floating-point dispatch faults of all types.",
+ "UMask": "0x0f"
+ }
+]
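
Illustrative note (not part of the patch): the floating-point, SSE/AVX and packed-op events above all follow one pattern, a single EventCode with many UMask variants plus an aggregate ".all" entry, each carrying an EventName and BriefDescription. A minimal Python sketch of a sanity check for such a file is shown below; the script name, REQUIRED key list and output format are assumptions of this sketch, and the kernel's real processing lives in tools/perf/pmu-events/jevents.py.

#!/usr/bin/env python3
# Illustrative sanity check for perf pmu-events JSON files such as the
# amdzen5 ones added in this patch. Not part of the kernel tree; the
# actual processing is done by tools/perf/pmu-events/jevents.py.
import json
import sys
from collections import defaultdict

REQUIRED = ("EventName", "EventCode", "BriefDescription")

def check(path):
    with open(path) as f:
        entries = json.load(f)
    names = set()
    by_code = defaultdict(int)
    for entry in entries:
        if "MetricName" in entry:
            continue  # metric entries use a different schema
        for key in REQUIRED:
            if key not in entry:
                print(f"{path}: {entry.get('EventName', '?')}: missing {key}")
        name = entry.get("EventName")
        if name in names:
            print(f"{path}: duplicate EventName {name}")
        names.add(name)
        by_code[entry.get("EventCode", "?")] += 1
    for code, count in sorted(by_code.items()):
        print(f"{path}: EventCode {code}: {count} umask variant(s)")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check(path)

Running it over the amdzen5 JSON files would report, per EventCode, how many umask variants were defined and flag missing required keys or duplicated event names.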
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/inst-cache.json b/tools/perf/pmu-events/arch/x86/amdzen5/inst-cache.json
new file mode 100644
index 000000000000..ad75e5bf9513
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/inst-cache.json
@@ -0,0 +1,72 @@
+[
+ {
+ "EventName": "ic_cache_fill_l2",
+ "EventCode": "0x82",
+ "BriefDescription": "Instruction cache lines (64 bytes) fulfilled from the L2 cache."
+ },
+ {
+ "EventName": "ic_cache_fill_sys",
+ "EventCode": "0x83",
+ "BriefDescription": "Instruction cache lines (64 bytes) fulfilled from system memory or another cache."
+ },
+ {
+ "EventName": "ic_fetch_ibs_events.fetch_tagged",
+ "EventCode": "0x188",
+ "BriefDescription": "Fetches tagged by Fetch IBS. Not all tagged fetches result in a valid sample and an IBS interrupt.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ic_fetch_ibs_events.sample_discarded",
+ "EventCode": "0x188",
+ "BriefDescription": "Fetches discarded after being tagged by Fetch IBS due to reasons other than IBS filtering.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ic_fetch_ibs_events.sample_filtered",
+ "EventCode": "0x188",
+ "BriefDescription": "Fetches discarded after being tagged by Fetch IBS due to IBS filtering.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "ic_fetch_ibs_events.sample_valid",
+ "EventCode": "0x188",
+ "BriefDescription": "Fetches tagged by Fetch IBS that result in a valid sample and an IBS interrupt.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "ic_tag_hit_miss.instruction_cache_hit",
+ "EventCode": "0x18e",
+ "BriefDescription": "Instruction cache hits.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "ic_tag_hit_miss.instruction_cache_miss",
+ "EventCode": "0x18e",
+ "BriefDescription": "Instruction cache misses.",
+ "UMask": "0x18"
+ },
+ {
+ "EventName": "ic_tag_hit_miss.all_instruction_cache_accesses",
+ "EventCode": "0x18e",
+ "BriefDescription": "Instruction cache accesses of all types.",
+ "UMask": "0x1f"
+ },
+ {
+ "EventName": "op_cache_hit_miss.op_cache_hit",
+ "EventCode": "0x28f",
+ "BriefDescription": "Op cache hits.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "op_cache_hit_miss.op_cache_miss",
+ "EventCode": "0x28f",
+ "BriefDescription": "Op cache misses.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "op_cache_hit_miss.all_op_cache_accesses",
+ "EventCode": "0x28f",
+ "BriefDescription": "Op cache accesses of all types.",
+ "UMask": "0x07"
+ }
+]
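
Illustrative note (not part of the patch): the ic_tag_hit_miss and op_cache_hit_miss umasks above split instruction-cache and op-cache accesses into hits, misses and totals, so hit rates fall out as simple ratios. The sketch below uses placeholder counts; in practice the values would come from perf stat on these event names.

# Sketch: derive L1I and op-cache hit rates from the ic_tag_hit_miss and
# op_cache_hit_miss events defined above. Counts are placeholders.
ic_hit = 9_500_000          # ic_tag_hit_miss.instruction_cache_hit
ic_all = 10_000_000         # ic_tag_hit_miss.all_instruction_cache_accesses
oc_hit = 42_000_000         # op_cache_hit_miss.op_cache_hit
oc_all = 45_000_000         # op_cache_hit_miss.all_op_cache_accesses

ic_hit_rate = ic_hit / ic_all if ic_all else 0.0
oc_hit_rate = oc_hit / oc_all if oc_all else 0.0
print(f"L1I hit rate:      {ic_hit_rate:.1%}")
print(f"Op cache hit rate: {oc_hit_rate:.1%}")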
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/l2-cache.json b/tools/perf/pmu-events/arch/x86/amdzen5/l2-cache.json
new file mode 100644
index 000000000000..d1de51a02922
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/l2-cache.json
@@ -0,0 +1,266 @@
+[
+ {
+ "EventName": "l2_request_g1.group2",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests of non-cacheable type (non-cached data and instructions reads, self-modifying code checks).",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "l2_request_g1.l2_hw_pf",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests: from hardware prefetchers to prefetch directly into L2 (hit or miss).",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "l2_request_g1.prefetch_l2_cmd",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests: prefetch directly into L2.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "l2_request_g1.cacheable_ic_read",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests: instruction cache reads.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "l2_request_g1.ls_rd_blk_c_s",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests: data cache shared reads.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "l2_request_g1.rd_blk_x",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests: data cache stores.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "l2_request_g1.rd_blk_l",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests: data cache reads including hardware and software prefetch.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "l2_request_g1.all_dc",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests of common types from L1 data cache (including prefetches).",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "l2_request_g1.all_no_prefetch",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests of common types not including prefetches.",
+ "UMask": "0xf1"
+ },
+ {
+ "EventName": "l2_request_g1.all",
+ "EventCode": "0x60",
+ "BriefDescription": "L2 cache requests of all types.",
+ "UMask": "0xf7"
+ },
+ {
+ "EventName": "l2_request_g2.ls_rd_sized_nc",
+ "EventCode": "0x61",
+ "BriefDescription": "L2 cache requests: non-coherent, non-cacheable LS sized reads.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "l2_request_g2.ls_rd_sized",
+ "EventCode": "0x61",
+ "BriefDescription": "L2 cache requests: coherent, non-cacheable LS sized reads.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "l2_wcb_req.wcb_close",
+ "EventCode": "0x63",
+ "BriefDescription": "Write Combining Buffer (WCB) closures.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ic_fill_miss",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: instruction cache request miss in L2.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ic_fill_hit_s",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: instruction cache hit non-modifiable line in L2.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ic_fill_hit_x",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: instruction cache hit modifiable line in L2.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ic_hit_in_l2",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) for instruction cache hits.",
+ "UMask": "0x06"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ic_access_in_l2",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) for instruction cache access.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ls_rd_blk_c",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: data cache request miss in L2.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ic_dc_miss_in_l2",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) for data and instruction cache misses.",
+ "UMask": "0x09"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ls_rd_blk_x",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: data cache store or state change hit in L2.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_s",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: data cache read hit non-modifiable line in L2.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ls_rd_blk_l_hit_x",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: data cache read hit modifiable line in L2.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ls_rd_blk_cs",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) with status: data cache shared read hit in L2.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "l2_cache_req_stat.dc_hit_in_l2",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) for data cache hits.",
+ "UMask": "0xf0"
+ },
+ {
+ "EventName": "l2_cache_req_stat.ic_dc_hit_in_l2",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) for data and instruction cache hits.",
+ "UMask": "0xf6"
+ },
+ {
+ "EventName": "l2_cache_req_stat.dc_access_in_l2",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) for data cache access.",
+ "UMask": "0xf8"
+ },
+ {
+ "EventName": "l2_cache_req_stat.all",
+ "EventCode": "0x64",
+ "BriefDescription": "Core to L2 cache requests (not including L2 prefetch) for data and instruction cache access.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "l2_pf_hit_l2.l2_hwpf",
+ "EventCode": "0x70",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which hit in the L2 cache and are generated from L2 hardware prefetchers.",
+ "UMask": "0x1f"
+ },
+ {
+ "EventName": "l2_pf_hit_l2.l1_dc_hwpf",
+ "EventCode": "0x70",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which hit in the L2 cache and are generated from L1 data hardware prefetchers.",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "l2_pf_hit_l2.l1_dc_l2_hwpf",
+ "EventCode": "0x70",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which hit in the L2 cache and are generated from L1 data and L2 hardware prefetchers.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "l2_pf_miss_l2_hit_l3.l2_hwpf",
+ "EventCode": "0x71",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which miss the L2 cache but hit in the L3 cache and are generated from L2 hardware prefetchers.",
+ "UMask": "0x1f"
+ },
+ {
+ "EventName": "l2_pf_miss_l2_hit_l3.l1_dc_hwpf",
+ "EventCode": "0x71",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which miss the L2 cache but hit in the L3 cache and are generated from L1 data hardware prefetchers.",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "l2_pf_miss_l2_hit_l3.l1_dc_l2_hwpf",
+ "EventCode": "0x71",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which miss the L2 cache but hit in the L3 cache and are generated from L1 data and L2 hardware prefetchers.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "l2_pf_miss_l2_l3.l2_hwpf",
+ "EventCode": "0x72",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which miss the L2 as well as the L3 caches and are generated from L2 hardware prefetchers.",
+ "UMask": "0x1f"
+ },
+ {
+ "EventName": "l2_pf_miss_l2_l3.l1_dc_hwpf",
+ "EventCode": "0x72",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which miss the L2 as well as the L3 caches and are generated from L1 data hardware prefetchers.",
+ "UMask": "0xe0"
+ },
+ {
+ "EventName": "l2_pf_miss_l2_l3.l1_dc_l2_hwpf",
+ "EventCode": "0x72",
+ "BriefDescription": "L2 prefetches accepted by the L2 pipeline which miss the L2 as well as the L3 caches and are generated from L1 data and L2 hardware prefetchers.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "l2_fill_rsp_src.local_ccx",
+ "EventCode": "0x165",
+ "BriefDescription": "L2 cache fills from L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "l2_fill_rsp_src.near_cache",
+ "EventCode": "0x165",
+ "BriefDescription": "L2 cache fills from cache of another CCX when the address was in the same NUMA node.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "l2_fill_rsp_src.dram_io_near",
+ "EventCode": "0x165",
+ "BriefDescription": "L2 cache fills from either DRAM or MMIO in the same NUMA node.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "l2_fill_rsp_src.far_cache",
+ "EventCode": "0x165",
+ "BriefDescription": "L2 cache fills from cache of another CCX when the address was in a different NUMA node.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "l2_fill_rsp_src.dram_io_far",
+ "EventCode": "0x165",
+ "BriefDescription": "L2 cache fills from either DRAM or MMIO in a different NUMA node (same or different socket).",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "l2_fill_rsp_src.alternate_memories",
+ "EventCode": "0x165",
+ "BriefDescription": "L2 cache fills from extension memory.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "l2_fill_rsp_src.all",
+ "EventCode": "0x165",
+ "BriefDescription": "L2 cache fills from all types of data sources.",
+ "UMask": "0xde"
+ }
+]
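
Illustrative note (not part of the patch): because l2_cache_req_stat encodes hits (umask 0xf6), misses (0x09) and all core requests (0xff) under one EventCode, an L2 demand hit rate is just a ratio of two counts. The values below are placeholders standing in for perf stat output.

# Sketch: L2 demand hit rate (excluding L2 hardware prefetch) from the
# l2_cache_req_stat umasks defined above.
l2_hits   = 8_000_000   # l2_cache_req_stat.ic_dc_hit_in_l2  (umask 0xf6)
l2_misses = 1_000_000   # l2_cache_req_stat.ic_dc_miss_in_l2 (umask 0x09)

l2_accesses = l2_hits + l2_misses   # equivalent to umask 0xff
print(f"L2 demand hit rate: {l2_hits / l2_accesses:.1%}")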
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/l3-cache.json b/tools/perf/pmu-events/arch/x86/amdzen5/l3-cache.json
new file mode 100644
index 000000000000..b50fe14d4520
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/l3-cache.json
@@ -0,0 +1,177 @@
+[
+ {
+ "EventName": "l3_lookup_state.l3_miss",
+ "EventCode": "0x04",
+ "BriefDescription": "L3 cache misses.",
+ "UMask": "0x01",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_lookup_state.l3_hit",
+ "EventCode": "0x04",
+ "BriefDescription": "L3 cache hits.",
+ "UMask": "0xfe",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_lookup_state.all_coherent_accesses_to_l3",
+ "EventCode": "0x04",
+ "BriefDescription": "L3 cache requests for all coherent accesses.",
+ "UMask": "0xff",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency.dram_near",
+ "EventCode": "0xac",
+ "BriefDescription": "Average sampled latency when data is sourced from DRAM in the same NUMA node.",
+ "UMask": "0x01",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency.dram_far",
+ "EventCode": "0xac",
+ "BriefDescription": "Average sampled latency when data is sourced from DRAM in a different NUMA node.",
+ "UMask": "0x02",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency.near_cache",
+ "EventCode": "0xac",
+ "BriefDescription": "Average sampled latency when data is sourced from another CCX's cache when the address was in the same NUMA node.",
+ "UMask": "0x04",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency.far_cache",
+ "EventCode": "0xac",
+ "BriefDescription": "Average sampled latency when data is sourced from another CCX's cache when the address was in a different NUMA node.",
+ "UMask": "0x08",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency.ext_near",
+ "EventCode": "0xac",
+ "BriefDescription": "Average sampled latency when data is sourced from extension memory (CXL) in the same NUMA node.",
+ "UMask": "0x10",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency.ext_far",
+ "EventCode": "0xac",
+ "BriefDescription": "Average sampled latency when data is sourced from extension memory (CXL) in a different NUMA node.",
+ "UMask": "0x20",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency.all",
+ "EventCode": "0xac",
+ "BriefDescription": "Average sampled latency from all data sources.",
+ "UMask": "0x3f",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency_requests.dram_near",
+ "EventCode": "0xad",
+ "BriefDescription": "L3 cache fill requests sourced from DRAM in the same NUMA node.",
+ "UMask": "0x01",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency_requests.dram_far",
+ "EventCode": "0xad",
+ "BriefDescription": "L3 cache fill requests sourced from DRAM in a different NUMA node.",
+ "UMask": "0x02",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency_requests.near_cache",
+ "EventCode": "0xad",
+ "BriefDescription": "L3 cache fill requests sourced from another CCX's cache when the address was in the same NUMA node.",
+ "UMask": "0x04",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency_requests.far_cache",
+ "EventCode": "0xad",
+ "BriefDescription": "L3 cache fill requests sourced from another CCX's cache when the address was in a different NUMA node.",
+ "UMask": "0x08",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency_requests.ext_near",
+ "EventCode": "0xad",
+ "BriefDescription": "L3 cache fill requests sourced from extension memory (CXL) in the same NUMA node.",
+ "UMask": "0x10",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency_requests.ext_far",
+ "EventCode": "0xad",
+ "BriefDescription": "L3 cache fill requests sourced from extension memory (CXL) in a different NUMA node.",
+ "UMask": "0x20",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ },
+ {
+ "EventName": "l3_xi_sampled_latency_requests.all",
+ "EventCode": "0xad",
+ "BriefDescription": "L3 cache fill requests sourced from all data sources.",
+ "UMask": "0x3f",
+ "EnAllCores": "0x1",
+ "EnAllSlices": "0x1",
+ "SliceId": "0x3",
+ "ThreadMask": "0x3",
+ "Unit": "L3PMC"
+ }
+]
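
Illustrative note (not part of the patch): the l3_xi_sampled_latency events accumulate sampled latency while l3_xi_sampled_latency_requests counts the sampled fill requests, so dividing the two per umask gives an average latency per sampled request. The result is in the hardware's sampling units; no conversion to nanoseconds is given here, so treat the numbers as relative and the counts below as placeholders.

# Sketch: per-source average sampled L3 fill latency, dividing the
# l3_xi_sampled_latency accumulator by the matching
# l3_xi_sampled_latency_requests count for the same umask.
samples = {
    # source: (latency_sum, request_count) -- placeholder values
    "dram_near":  (1_200_000, 10_000),
    "dram_far":   (  900_000,  5_000),
    "near_cache": (  300_000,  6_000),
}
for src, (lat, req) in samples.items():
    avg = lat / req if req else float("nan")
    print(f"{src:>10}: {avg:.1f} units/request")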
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json b/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json
new file mode 100644
index 000000000000..ff6627a77805
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json
@@ -0,0 +1,517 @@
+[
+ {
+ "EventName": "ls_bad_status2.stli_other",
+ "EventCode": "0x24",
+ "BriefDescription": "Store-to-load conflicts (load unable to complete due to a non-forwardable conflict with an older store).",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_locks.bus_lock",
+ "EventCode": "0x25",
+ "BriefDescription": "Retired Lock instructions which caused a bus lock.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_ret_cl_flush",
+ "EventCode": "0x26",
+ "BriefDescription": "Retired CLFLUSH instructions."
+ },
+ {
+ "EventName": "ls_ret_cpuid",
+ "EventCode": "0x27",
+ "BriefDescription": "Retired CPUID instructions."
+ },
+ {
+ "EventName": "ls_dispatch.ld_dispatch",
+ "EventCode": "0x29",
+ "BriefDescription": "Number of memory load operations dispatched to the load-store unit.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_dispatch.store_dispatch",
+ "EventCode": "0x29",
+ "BriefDescription": "Number of memory store operations dispatched to the load-store unit.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_dispatch.ld_st_dispatch",
+ "EventCode": "0x29",
+ "BriefDescription": "Number of memory load-store operations dispatched to the load-store unit.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ls_dispatch.all",
+ "EventCode": "0x29",
+ "BriefDescription": "Number of memory operations dispatched to the load-store unit.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "ls_smi_rx",
+ "EventCode": "0x2b",
+ "BriefDescription": "SMIs received."
+ },
+ {
+ "EventName": "ls_int_taken",
+ "EventCode": "0x2c",
+ "BriefDescription": "Interrupts taken."
+ },
+ {
+ "EventName": "ls_stlf",
+ "EventCode": "0x35",
+ "BriefDescription": "Store-to-load-forward (STLF) hits."
+ },
+ {
+ "EventName": "ls_st_commit_cancel2.st_commit_cancel_wcb_full",
+ "EventCode": "0x37",
+ "BriefDescription": "Non-cacheable store commits cancelled due to the non-cacheable commit buffer being full.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_mab_alloc.load_store_allocations",
+ "EventCode": "0x41",
+ "BriefDescription": "Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe for load-store allocations.",
+ "UMask": "0x3f"
+ },
+ {
+ "EventName": "ls_mab_alloc.hardware_prefetcher_allocations",
+ "EventCode": "0x41",
+ "BriefDescription": "Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe for hardware prefetcher allocations.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "ls_mab_alloc.all_allocations",
+ "EventCode": "0x41",
+ "BriefDescription": "Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe for all types of allocations.",
+ "UMask": "0x7f"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.local_l2",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from local L2 cache.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.local_ccx",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.local_all",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from local L2 cache, L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.near_cache",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from cache of another CCX when the address was in the same NUMA node.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.dram_io_near",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from either DRAM or MMIO in the same NUMA node.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.far_cache",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from cache of another CCX when the address was in a different NUMA node.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.remote_cache",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from cache of another CCX when the address was in the same or a different NUMA node.",
+ "UMask": "0x14"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.dram_io_far",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from either DRAM or MMIO in a different NUMA node (same or different socket).",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.dram_io_all",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from either DRAM or MMIO in the same or a different NUMA node (same or different socket).",
+ "UMask": "0x48"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.far_all",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from either cache of another CCX, DRAM or MMIO when the address was in a different NUMA node (same or different socket).",
+ "UMask": "0x50"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.alternate_memories",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from extension memory.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "ls_dmnd_fills_from_sys.all",
+ "EventCode": "0x43",
+ "BriefDescription": "Demand data cache fills from all types of data sources.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.local_l2",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from local L2 cache.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.local_ccx",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.local_all",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from local L2 cache or L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.near_cache",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from cache of another CCX when the address was in the same NUMA node.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.dram_io_near",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from either DRAM or MMIO in the same NUMA node.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.far_cache",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from cache of another CCX when the address was in a different NUMA node.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.remote_cache",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from cache of another CCX when the address was in the same or a different NUMA node.",
+ "UMask": "0x14"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.dram_io_far",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from either DRAM or MMIO in a different NUMA node (same or different socket).",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.dram_io_all",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from either DRAM or MMIO in any NUMA node (same or different socket).",
+ "UMask": "0x48"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.far_all",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from either cache of another CCX, DRAM or MMIO when the address was in a different NUMA node (same or different socket).",
+ "UMask": "0x50"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.alternate_memories",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from extension memory.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "ls_any_fills_from_sys.all",
+ "EventCode": "0x44",
+ "BriefDescription": "Any data cache fills from all types of data sources.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_hit",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB hits for 4k pages.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_coalesced_page_hit",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB hits for coalesced pages. A coalesced page is a 16k page created from four adjacent 4k pages.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_hit",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB hits for 2M pages.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_hit",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB hits for 1G pages.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_4k_l2_miss",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB misses (page-table walks are requested) for 4k pages.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_coalesced_page_miss",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB misses (page-table walks are requested) for coalesced pages. A coalesced page is a 16k page created from four adjacent 4k pages.",
+ "UMask": "0x20"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_2m_l2_miss",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB misses (page-table walks are requested) for 2M pages.",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.tlb_reload_1g_l2_miss",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB misses (page-table walks are requested) for 1G pages.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.all_l2_miss",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses with L2 DTLB misses (page-table walks are requested) for all page sizes.",
+ "UMask": "0xf0"
+ },
+ {
+ "EventName": "ls_l1_d_tlb_miss.all",
+ "EventCode": "0x45",
+ "BriefDescription": "L1 DTLB misses for all page sizes.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "ls_misal_loads.ma64",
+ "EventCode": "0x47",
+ "BriefDescription": "64B misaligned (cacheline crossing) loads.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_misal_loads.ma4k",
+ "EventCode": "0x47",
+ "BriefDescription": "4kB misaligned (page crossing) loads.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_pref_instr_disp.prefetch",
+ "EventCode": "0x4b",
+ "BriefDescription": "Software prefetch instructions dispatched (speculative) of type PrefetchT0 (move data to all cache levels), T1 (move data to all cache levels except L1) and T2 (move data to all cache levels except L1 and L2).",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_pref_instr_disp.prefetch_w",
+ "EventCode": "0x4b",
+ "BriefDescription": "Software prefetch instructions dispatched (speculative) of type PrefetchW (move data to L1 cache and mark it modifiable).",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_pref_instr_disp.prefetch_nta",
+ "EventCode": "0x4b",
+ "BriefDescription": "Software prefetch instructions dispatched (speculative) of type PrefetchNTA (move data with minimum cache pollution i.e. non-temporal access).",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ls_pref_instr_disp.all",
+ "EventCode": "0x4b",
+ "BriefDescription": "Software prefetch instructions dispatched (speculative) of all types.",
+ "UMask": "0x07"
+ },
+ {
+ "EventName": "wcb_close.full_line_64b",
+ "EventCode": "0x50",
+ "BriefDescription": "Number of events that caused a Write Combining Buffer (WCB) entry to close because all 64 bytes of the entry have been written to.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_inef_sw_pref.data_pipe_sw_pf_dc_hit",
+ "EventCode": "0x52",
+ "BriefDescription": "Software prefetches that did not fetch data outside of the processor core as the PREFETCH instruction saw a data cache hit.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_inef_sw_pref.mab_mch_cnt",
+ "EventCode": "0x52",
+ "BriefDescription": "Software prefetches that did not fetch data outside of the processor core as the PREFETCH instruction saw a match on an already allocated Miss Address Buffer (MAB).",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_inef_sw_pref.all",
+ "EventCode": "0x52",
+ "BriefDescript6ion": "Software prefetches that did not fetch data outside of the processor core for any reason.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.local_l2",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from local L2 cache.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.local_ccx",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.local_all",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from local L2 cache, L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.near_cache",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from cache of another CCX in the same NUMA node.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.dram_io_near",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from either DRAM or MMIO in the same NUMA node.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.far_cache",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from cache of another CCX in a different NUMA node.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.remote_cache",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from cache of another CCX when the address was in the same or a different NUMA node.",
+ "UMask": "0x14"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.dram_io_far",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from either DRAM or MMIO in a different NUMA node (same or different socket).",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.dram_io_all",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from either DRAM or MMIO in the same or a different NUMA node (same or different socket).",
+ "UMask": "0x48"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.far_all",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from either cache of another CCX, DRAM or MMIO when the address was in a different NUMA node (same or different socket).",
+ "UMask": "0x50"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.alternate_memories",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from extension memory.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "ls_sw_pf_dc_fills.all",
+ "EventCode": "0x59",
+ "BriefDescription": "Software prefetch data cache fills from all types of data sources.",
+ "UMask": "0xdf"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.local_l2",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from local L2 cache.",
+ "UMask": "0x01"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.local_ccx",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x02"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.local_all",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from local L2 cache, L3 cache or different L2 cache in the same CCX.",
+ "UMask": "0x03"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.near_cache",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from cache of another CCX when the address was in the same NUMA node.",
+ "UMask": "0x04"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.dram_io_near",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from either DRAM or MMIO in the same NUMA node.",
+ "UMask": "0x08"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.far_cache",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from cache of another CCX when the address was in a different NUMA node.",
+ "UMask": "0x10"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.remote_cache",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from cache of another CCX when the address was in the same or a different NUMA node.",
+ "UMask": "0x14"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.dram_io_far",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from either DRAM or MMIO in a different NUMA node (same or different socket).",
+ "UMask": "0x40"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.dram_io_all",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from either DRAM or MMIO in the same or a different NUMA node (same or different socket).",
+ "UMask": "0x48"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.far_all",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from either cache of another CCX, DRAM or MMIO when the address was in a different NUMA node (same or different socket).",
+ "UMask": "0x50"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.alternate_memories",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from extension memory.",
+ "UMask": "0x80"
+ },
+ {
+ "EventName": "ls_hw_pf_dc_fills.all",
+ "EventCode": "0x5a",
+ "BriefDescription": "Hardware prefetch data cache fills from all types of data sources.",
+ "UMask": "0xdf"
+ },
+ {
+ "EventName": "ls_alloc_mab_count",
+ "EventCode": "0x5f",
+ "BriefDescription": "In-flight L1 data cache misses i.e. Miss Address Buffer (MAB) allocations each cycle."
+ },
+ {
+ "EventName": "ls_not_halted_cyc",
+ "EventCode": "0x76",
+ "BriefDescription": "Core cycles not in halt."
+ },
+ {
+ "EventName": "ls_tlb_flush.all",
+ "EventCode": "0x78",
+ "BriefDescription": "All TLB Flushes.",
+ "UMask": "0xff"
+ },
+ {
+ "EventName": "ls_not_halted_p0_cyc.p0_freq_cyc",
+ "EventCode": "0x120",
+ "BriefDescription": "Reference cycles (P0 frequency) not in halt .",
+ "UMask": "0x1"
+ }
+]
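
Illustrative note (not part of the patch): the ls_dmnd_fills_from_sys umasks above classify demand data-cache fills by data source, so a locality breakdown is a matter of normalizing the per-source counts. The values below are placeholders for perf stat output.

# Sketch: where demand data-cache fills were sourced from, using the
# single-source ls_dmnd_fills_from_sys umasks defined above.
fills = {
    "local_l2":     700_000,   # umask 0x01
    "local_ccx":    150_000,   # umask 0x02
    "near_cache":    50_000,   # umask 0x04
    "dram_io_near":  80_000,   # umask 0x08
    "far_cache":     10_000,   # umask 0x10
    "dram_io_far":   10_000,   # umask 0x40
}
# Close to ls_dmnd_fills_from_sys.all when extension-memory fills are negligible.
total = sum(fills.values())
for src, n in fills.items():
    print(f"{src:>12}: {n / total:.1%}")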
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/memory-controller.json b/tools/perf/pmu-events/arch/x86/amdzen5/memory-controller.json
new file mode 100644
index 000000000000..1a629fc9474a
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/memory-controller.json
@@ -0,0 +1,101 @@
+[
+ {
+ "EventName": "umc_mem_clk",
+ "PublicDescription": "Number of memory clock (MEMCLK) cycles.",
+ "EventCode": "0x00",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_act_cmd.all",
+ "PublicDescription": "Number of ACTIVATE commands sent.",
+ "EventCode": "0x05",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_act_cmd.rd",
+ "PublicDescription": "Number of ACTIVATE commands sent for reads.",
+ "EventCode": "0x05",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_act_cmd.wr",
+ "PublicDescription": "Number of ACTIVATE commands sent for writes.",
+ "EventCode": "0x05",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_pchg_cmd.all",
+ "PublicDescription": "Number of PRECHARGE commands sent.",
+ "EventCode": "0x06",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_pchg_cmd.rd",
+ "PublicDescription": "Number of PRECHARGE commands sent for reads.",
+ "EventCode": "0x06",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_pchg_cmd.wr",
+ "PublicDescription": "Number of PRECHARGE commands sent for writes.",
+ "EventCode": "0x06",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_cas_cmd.all",
+ "PublicDescription": "Number of CAS commands sent.",
+ "EventCode": "0x0a",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_cas_cmd.rd",
+ "PublicDescription": "Number of CAS commands sent for reads.",
+ "EventCode": "0x0a",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_cas_cmd.wr",
+ "PublicDescription": "Number of CAS commands sent for writes.",
+ "EventCode": "0x0a",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_data_slot_clks.all",
+ "PublicDescription": "Number of clock cycles used by the data bus.",
+ "EventCode": "0x14",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_data_slot_clks.rd",
+ "PublicDescription": "Number of clock cycles used by the data bus for reads.",
+ "EventCode": "0x14",
+ "RdWrMask": "0x1",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ },
+ {
+ "EventName": "umc_data_slot_clks.wr",
+ "PublicDescription": "Number of clock cycles used by the data bus for writes.",
+ "EventCode": "0x14",
+ "RdWrMask": "0x2",
+ "PerPkg": "1",
+ "Unit": "UMCPMC"
+ }
+]
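
Illustrative note (not part of the patch): the umc_cas_cmd.rd/wr counters above can be turned into approximate DRAM bandwidth by assuming 64 bytes transferred per CAS command, the convention used by the existing amdzen4 recommended UMC metrics; treat that factor, and the counts below, as assumptions to verify against the PPR for a given part.

# Sketch: approximate DRAM read/write bandwidth from UMC CAS counts
# collected over a one-second interval (placeholder values).
cas_rd = 150_000_000     # umc_cas_cmd.rd
cas_wr =  50_000_000     # umc_cas_cmd.wr
interval_s = 1.0

BYTES_PER_CAS = 64       # assumed transfer size per CAS command
rd_bw = cas_rd * BYTES_PER_CAS / interval_s / 2**30
wr_bw = cas_wr * BYTES_PER_CAS / interval_s / 2**30
print(f"read:  {rd_bw:.2f} GiB/s")
print(f"write: {wr_bw:.2f} GiB/s")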
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/pipeline.json b/tools/perf/pmu-events/arch/x86/amdzen5/pipeline.json
new file mode 100644
index 000000000000..d860bf599cf2
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/pipeline.json
@@ -0,0 +1,99 @@
+[
+ {
+ "MetricName": "total_dispatch_slots",
+ "BriefDescription": "Total dispatch slots (up to 8 instructions can be dispatched in each cycle).",
+ "MetricExpr": "8 * ls_not_halted_cyc",
+ "ScaleUnit": "1slots"
+ },
+ {
+ "MetricName": "frontend_bound",
+ "BriefDescription": "Percentage of dispatch slots that remained unused because the frontend did not supply enough instructions/ops.",
+ "MetricExpr": "d_ratio(de_no_dispatch_per_slot.no_ops_from_frontend, total_dispatch_slots)",
+ "MetricGroup": "PipelineL1",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "bad_speculation",
+ "BriefDescription": "Percentage of dispatched ops that did not retire.",
+ "MetricExpr": "d_ratio(de_src_op_disp.all - ex_ret_ops, total_dispatch_slots)",
+ "MetricGroup": "PipelineL1",
+ "ScaleUnit": "100%ops"
+ },
+ {
+ "MetricName": "backend_bound",
+ "BriefDescription": "Percentage of dispatch slots that remained unused because of backend stalls.",
+ "MetricExpr": "d_ratio(de_no_dispatch_per_slot.backend_stalls, total_dispatch_slots)",
+ "MetricGroup": "PipelineL1",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "smt_contention",
+ "BriefDescription": "Percentage of dispatch slots that remained unused because the other thread was selected.",
+ "MetricExpr": "d_ratio(de_no_dispatch_per_slot.smt_contention, total_dispatch_slots)",
+ "MetricGroup": "PipelineL1",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "retiring",
+ "BriefDescription": "Percentage of dispatch slots used by ops that retired.",
+ "MetricExpr": "d_ratio(ex_ret_ops, total_dispatch_slots)",
+ "MetricGroup": "PipelineL1",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "frontend_bound_by_latency",
+ "BriefDescription": "Percentage of dispatch slots that remained unused because of a latency bottleneck in the frontend (such as instruction cache or TLB misses).",
+ "MetricExpr": "d_ratio((8 * cpu@de_no_dispatch_per_slot.no_ops_from_frontend\\,cmask\\=0x8@), total_dispatch_slots)",
+ "MetricGroup": "PipelineL2;frontend_bound_group",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "frontend_bound_by_bandwidth",
+ "BriefDescription": "Percentage of dispatch slots that remained unused because of a bandwidth bottleneck in the frontend (such as decode or op cache fetch bandwidth).",
+ "MetricExpr": "d_ratio(de_no_dispatch_per_slot.no_ops_from_frontend - (8 * cpu@de_no_dispatch_per_slot.no_ops_from_frontend\\,cmask\\=0x8@), total_dispatch_slots)",
+ "MetricGroup": "PipelineL2;frontend_bound_group",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "bad_speculation_from_mispredicts",
+ "BriefDescription": "Percentage of dispatched ops that were flushed due to branch mispredicts.",
+ "MetricExpr": "d_ratio(bad_speculation * ex_ret_brn_misp, ex_ret_brn_misp + bp_redirects.resync)",
+ "MetricGroup": "PipelineL2;bad_speculation_group",
+ "ScaleUnit": "100%ops"
+ },
+ {
+ "MetricName": "bad_speculation_from_pipeline_restarts",
+ "BriefDescription": "Percentage of dispatched ops that were flushed due to pipeline restarts (resyncs).",
+ "MetricExpr": "d_ratio(bad_speculation * bp_redirects.resync, ex_ret_brn_misp + bp_redirects.resync)",
+ "MetricGroup": "PipelineL2;bad_speculation_group",
+ "ScaleUnit": "100%ops"
+ },
+ {
+ "MetricName": "backend_bound_by_memory",
+ "BriefDescription": "Percentage of dispatch slots that remained unused because of stalls due to the memory subsystem.",
+ "MetricExpr": "backend_bound * d_ratio(ex_no_retire.load_not_complete, ex_no_retire.not_complete)",
+ "MetricGroup": "PipelineL2;backend_bound_group",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "backend_bound_by_cpu",
+ "BriefDescription": "Percentage of dispatch slots that remained unused because of stalls not related to the memory subsystem.",
+ "MetricExpr": "backend_bound * (1 - d_ratio(ex_no_retire.load_not_complete, ex_no_retire.not_complete))",
+ "MetricGroup": "PipelineL2;backend_bound_group",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "retiring_from_fastpath",
+ "BriefDescription": "Percentage of dispatch slots used by fastpath ops that retired.",
+ "MetricExpr": "retiring * (1 - d_ratio(ex_ret_ucode_ops, ex_ret_ops))",
+ "MetricGroup": "PipelineL2;retiring_group",
+ "ScaleUnit": "100%slots"
+ },
+ {
+ "MetricName": "retiring_from_microcode",
+ "BriefDescription": "Percentage of dispatch slots used by microcode ops that retired.",
+ "MetricExpr": "retiring * d_ratio(ex_ret_ucode_ops, ex_ret_ops)",
+ "MetricGroup": "PipelineL2;retiring_group",
+ "ScaleUnit": "100%slots"
+ }
+]
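
The level-2 entries above are simple products: a level-1 fraction of the dispatch-slot budget scaled by a second ratio, where d_ratio() in perf's metric-expression language is a zero-guarded division (0 when the denominator is 0). As a rough illustration of that arithmetic only, here is a minimal Python sketch of the backend_bound and retiring splits; every counter value in it is invented for the example, and the variable names merely echo the event names used in the JSON expressions above.

def d_ratio(num, den):
    # Mirrors perf's d_ratio() expression helper: division that yields 0 when den == 0.
    return num / den if den else 0.0

# Invented counts for one measurement window (illustration only).
total_dispatch_slots = 8_000_000
de_no_dispatch_backend_stalls = 2_400_000   # de_no_dispatch_per_slot.backend_stalls
ex_no_retire_load_not_complete = 900_000    # ex_no_retire.load_not_complete
ex_no_retire_not_complete = 1_200_000       # ex_no_retire.not_complete
ex_ret_ops = 2_800_000
ex_ret_ucode_ops = 200_000

# Level 1: fractions of the dispatch-slot budget.
backend_bound = d_ratio(de_no_dispatch_backend_stalls, total_dispatch_slots)
retiring = d_ratio(ex_ret_ops, total_dispatch_slots)

# Level 2: scale each level-1 bucket by a second ratio, as in the JSON above.
backend_bound_by_memory = backend_bound * d_ratio(ex_no_retire_load_not_complete,
                                                  ex_no_retire_not_complete)
backend_bound_by_cpu = backend_bound - backend_bound_by_memory    # i.e. backend_bound * (1 - ratio)
retiring_from_microcode = retiring * d_ratio(ex_ret_ucode_ops, ex_ret_ops)
retiring_from_fastpath = retiring - retiring_from_microcode       # i.e. retiring * (1 - ratio)

# ScaleUnit "100%slots" means perf reports these fractions as percentages of slots.
for name, value in [("backend_bound", backend_bound),
                    ("  backend_bound_by_memory", backend_bound_by_memory),
                    ("  backend_bound_by_cpu", backend_bound_by_cpu),
                    ("retiring", retiring),
                    ("  retiring_from_microcode", retiring_from_microcode),
                    ("  retiring_from_fastpath", retiring_from_fastpath)]:
    print(f"{name:27} {value:6.1%}")

perf evaluates the real MetricExpr strings itself; the sketch only makes it visible that each pair of level-2 metrics sums back to its level-1 parent.
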
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen5/recommended.json
new file mode 100644
index 000000000000..635d57e3bc15
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/recommended.json
@@ -0,0 +1,457 @@
+[
+ {
+ "MetricName": "branch_misprediction_rate",
+ "BriefDescription": "Execution-time branch misprediction rate (non-speculative).",
+ "MetricExpr": "d_ratio(ex_ret_brn_misp, ex_ret_brn)",
+ "MetricGroup": "branch_prediction",
+ "ScaleUnit": "1per_branch"
+ },
+ {
+ "MetricName": "all_data_cache_accesses_pti",
+ "BriefDescription": "All data cache accesses per thousand instructions.",
+ "MetricExpr": "ls_dispatch.all / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "all_l2_cache_accesses_pti",
+ "BriefDescription": "All L2 cache accesses per thousand instructions.",
+ "MetricExpr": "(l2_request_g1.all_no_prefetch + l2_pf_hit_l2.l2_hwpf + l2_pf_miss_l2_hit_l3.l2_hwpf + l2_pf_miss_l2_l3.l2_hwpf) / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_accesses_from_l1_ic_misses_pti",
+ "BriefDescription": "L2 cache accesses from L1 instruction cache misses (including prefetch) per thousand instructions.",
+ "MetricExpr": "l2_request_g1.cacheable_ic_read / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_accesses_from_l1_dc_misses_pti",
+ "BriefDescription": "L2 cache accesses from L1 data cache misses (including prefetch) per thousand instructions.",
+ "MetricExpr": "l2_request_g1.all_dc / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_accesses_from_l2_hwpf_pti",
+ "BriefDescription": "L2 cache accesses from L2 cache hardware prefetcher per thousand instructions.",
+ "MetricExpr": "(l2_pf_hit_l2.l1_dc_l2_hwpf + l2_pf_miss_l2_hit_l3.l1_dc_l2_hwpf + l2_pf_miss_l2_l3.l1_dc_l2_hwpf) / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "all_l2_cache_misses_pti",
+ "BriefDescription": "All L2 cache misses per thousand instructions.",
+ "MetricExpr": "(l2_cache_req_stat.ic_dc_miss_in_l2 + l2_pf_miss_l2_hit_l3.l2_hwpf + l2_pf_miss_l2_l3.l2_hwpf) / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_misses_from_l1_ic_miss_pti",
+ "BriefDescription": "L2 cache misses from L1 instruction cache misses per thousand instructions.",
+ "MetricExpr": "l2_cache_req_stat.ic_fill_miss / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_misses_from_l1_dc_miss_pti",
+ "BriefDescription": "L2 cache misses from L1 data cache misses per thousand instructions.",
+ "MetricExpr": "l2_cache_req_stat.ls_rd_blk_c / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_misses_from_l2_hwpf_pti",
+ "BriefDescription": "L2 cache misses from L2 cache hardware prefetcher per thousand instructions.",
+ "MetricExpr": "(l2_pf_miss_l2_hit_l3.l1_dc_l2_hwpf + l2_pf_miss_l2_l3.l1_dc_l2_hwpf) / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "all_l2_cache_hits_pti",
+ "BriefDescription": "All L2 cache hits per thousand instructions.",
+ "MetricExpr": "(l2_cache_req_stat.ic_dc_hit_in_l2 + l2_pf_hit_l2.l2_hwpf) / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_hits_from_l1_ic_miss_pti",
+ "BriefDescription": "L2 cache hits from L1 instruction cache misses per thousand instructions.",
+ "MetricExpr": "l2_cache_req_stat.ic_hit_in_l2 / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_hits_from_l1_dc_miss_pti",
+ "BriefDescription": "L2 cache hits from L1 data cache misses per thousand instructions.",
+ "MetricExpr": "l2_cache_req_stat.dc_hit_in_l2 / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_cache_hits_from_l2_hwpf_pti",
+ "BriefDescription": "L2 cache hits from L2 cache hardware prefetcher per thousand instructions.",
+ "MetricExpr": "l2_pf_hit_l2.l1_dc_l2_hwpf / instructions",
+ "MetricGroup": "l2_cache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l3_cache_accesses",
+ "BriefDescription": "L3 cache accesses.",
+ "MetricExpr": "l3_lookup_state.all_coherent_accesses_to_l3",
+ "MetricGroup": "l3_cache"
+ },
+ {
+ "MetricName": "l3_misses",
+ "BriefDescription": "L3 misses (including cacheline state change requests).",
+ "MetricExpr": "l3_lookup_state.l3_miss",
+ "MetricGroup": "l3_cache"
+ },
+ {
+ "MetricName": "l3_read_miss_latency",
+    "BriefDescription": "Average L3 read miss latency (in ns).",
+ "MetricExpr": "(l3_xi_sampled_latency.all * 10) / l3_xi_sampled_latency_requests.all",
+ "MetricGroup": "l3_cache",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "MetricName": "l3_read_miss_latency_for_local_dram",
+    "BriefDescription": "Average L3 read miss latency (in ns) for local DRAM.",
+ "MetricExpr": "(l3_xi_sampled_latency.dram_near * 10) / l3_xi_sampled_latency_requests.dram_near",
+ "MetricGroup": "l3_cache",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "MetricName": "l3_read_miss_latency_for_remote_dram",
+    "BriefDescription": "Average L3 read miss latency (in ns) for remote DRAM.",
+ "MetricExpr": "(l3_xi_sampled_latency.dram_far * 10) / l3_xi_sampled_latency_requests.dram_far",
+ "MetricGroup": "l3_cache",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "MetricName": "op_cache_fetch_miss_ratio",
+ "BriefDescription": "Op cache miss ratio for all fetches.",
+ "MetricExpr": "d_ratio(op_cache_hit_miss.op_cache_miss, op_cache_hit_miss.all_op_cache_accesses)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "ic_fetch_miss_ratio",
+ "BriefDescription": "Instruction cache miss ratio for all fetches. An instruction cache miss will not be counted by this metric if it is an OC hit.",
+ "MetricExpr": "d_ratio(ic_tag_hit_miss.instruction_cache_miss, ic_tag_hit_miss.all_instruction_cache_accesses)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "l1_data_cache_fills_from_memory_pti",
+ "BriefDescription": "L1 data cache fills from DRAM or MMIO in any NUMA node per thousand instructions.",
+ "MetricExpr": "ls_any_fills_from_sys.dram_io_all / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_data_cache_fills_from_remote_node_pti",
+ "BriefDescription": "L1 data cache fills from a different NUMA node per thousand instructions.",
+ "MetricExpr": "ls_any_fills_from_sys.far_all / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_data_cache_fills_from_same_ccx_pti",
+ "BriefDescription": "L1 data cache fills from within the same CCX per thousand instructions.",
+ "MetricExpr": "ls_any_fills_from_sys.local_all / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_data_cache_fills_from_different_ccx_pti",
+ "BriefDescription": "L1 data cache fills from another CCX cache in any NUMA node per thousand instructions.",
+ "MetricExpr": "ls_any_fills_from_sys.remote_cache / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "all_l1_data_cache_fills_pti",
+ "BriefDescription": "All L1 data cache fills per thousand instructions.",
+ "MetricExpr": "ls_any_fills_from_sys.all / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_demand_data_cache_fills_from_local_l2_pti",
+ "BriefDescription": "L1 demand data cache fills from local L2 cache per thousand instructions.",
+ "MetricExpr": "ls_dmnd_fills_from_sys.local_l2 / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_demand_data_cache_fills_from_same_ccx_pti",
+ "BriefDescription": "L1 demand data cache fills from within the same CCX per thousand instructions.",
+ "MetricExpr": "ls_dmnd_fills_from_sys.local_ccx / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_demand_data_cache_fills_from_near_cache_pti",
+ "BriefDescription": "L1 demand data cache fills from another CCX cache in the same NUMA node per thousand instructions.",
+ "MetricExpr": "ls_dmnd_fills_from_sys.near_cache / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_demand_data_cache_fills_from_near_memory_pti",
+ "BriefDescription": "L1 demand data cache fills from DRAM or MMIO in the same NUMA node per thousand instructions.",
+ "MetricExpr": "ls_dmnd_fills_from_sys.dram_io_near / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_demand_data_cache_fills_from_far_cache_pti",
+ "BriefDescription": "L1 demand data cache fills from another CCX cache in a different NUMA node per thousand instructions.",
+ "MetricExpr": "ls_dmnd_fills_from_sys.far_cache / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_demand_data_cache_fills_from_far_memory_pti",
+ "BriefDescription": "L1 demand data cache fills from DRAM or MMIO in a different NUMA node per thousand instructions.",
+ "MetricExpr": "ls_dmnd_fills_from_sys.dram_io_far / instructions",
+ "MetricGroup": "l1_dcache",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_itlb_misses_pti",
+ "BriefDescription": "L1 instruction TLB misses per thousand instructions.",
+ "MetricExpr": "(bp_l1_tlb_miss_l2_tlb_hit + bp_l1_tlb_miss_l2_tlb_miss.all) / instructions",
+ "MetricGroup": "tlb",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_itlb_misses_pti",
+ "BriefDescription": "L2 instruction TLB misses and instruction page walks per thousand instructions.",
+ "MetricExpr": "bp_l1_tlb_miss_l2_tlb_miss.all / instructions",
+ "MetricGroup": "tlb",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l1_dtlb_misses_pti",
+ "BriefDescription": "L1 data TLB misses per thousand instructions.",
+ "MetricExpr": "ls_l1_d_tlb_miss.all / instructions",
+ "MetricGroup": "tlb",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "l2_dtlb_misses_pti",
+ "BriefDescription": "L2 data TLB misses and data page walks per thousand instructions.",
+ "MetricExpr": "ls_l1_d_tlb_miss.all_l2_miss / instructions",
+ "MetricGroup": "tlb",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "all_tlbs_flushed_pti",
+ "BriefDescription": "All TLBs flushed per thousand instructions.",
+ "MetricExpr": "ls_tlb_flush.all / instructions",
+ "MetricGroup": "tlb",
+ "ScaleUnit": "1e3per_1k_instr"
+ },
+ {
+ "MetricName": "macro_ops_dispatched",
+ "BriefDescription": "Macro-ops dispatched.",
+ "MetricExpr": "de_src_op_disp.all",
+ "MetricGroup": "decoder"
+ },
+ {
+ "MetricName": "sse_avx_stalls",
+ "BriefDescription": "Mixed SSE/AVX stalls.",
+ "MetricExpr": "fp_disp_faults.sse_avx_all"
+ },
+ {
+ "MetricName": "macro_ops_retired",
+ "BriefDescription": "Macro-ops retired.",
+ "MetricExpr": "ex_ret_ops"
+ },
+ {
+ "MetricName": "umc_data_bus_utilization",
+ "BriefDescription": "Memory controller data bus utilization.",
+ "MetricExpr": "d_ratio(umc_data_slot_clks.all / 2, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "umc_cas_cmd_rate",
+ "BriefDescription": "Memory controller CAS command rate.",
+ "MetricExpr": "d_ratio(umc_cas_cmd.all * 1000, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1per_memclk"
+ },
+ {
+ "MetricName": "umc_cas_cmd_read_ratio",
+ "BriefDescription": "Ratio of memory controller CAS commands for reads.",
+ "MetricExpr": "d_ratio(umc_cas_cmd.rd, umc_cas_cmd.all)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "umc_cas_cmd_write_ratio",
+ "BriefDescription": "Ratio of memory controller CAS commands for writes.",
+ "MetricExpr": "d_ratio(umc_cas_cmd.wr, umc_cas_cmd.all)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "MetricName": "umc_mem_read_bandwidth",
+ "BriefDescription": "Estimated memory read bandwidth.",
+ "MetricExpr": "(umc_cas_cmd.rd * 64) / 1e6 / duration_time",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "MetricName": "umc_mem_write_bandwidth",
+ "BriefDescription": "Estimated memory write bandwidth.",
+ "MetricExpr": "(umc_cas_cmd.wr * 64) / 1e6 / duration_time",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "MetricName": "umc_mem_bandwidth",
+ "BriefDescription": "Estimated combined memory bandwidth.",
+ "MetricExpr": "(umc_cas_cmd.all * 64) / 1e6 / duration_time",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "MetricName": "umc_activate_cmd_rate",
+ "BriefDescription": "Memory controller ACTIVATE command rate.",
+ "MetricExpr": "d_ratio(umc_act_cmd.all * 1000, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1per_memclk"
+ },
+ {
+ "MetricName": "umc_precharge_cmd_rate",
+ "BriefDescription": "Memory controller PRECHARGE command rate.",
+ "MetricExpr": "d_ratio(umc_pchg_cmd.all * 1000, umc_mem_clk)",
+ "MetricGroup": "memory_controller",
+ "PerPkg": "1",
+ "ScaleUnit": "1per_memclk"
+ },
+ {
+ "MetricName": "dram_read_bandwidth_for_local_or_remote_socket",
+ "BriefDescription": "DRAM read data bandwidth for accesses in local or remote socket.",
+ "MetricExpr": "(local_or_remote_socket_read_data_beats_dram_0 + local_or_remote_socket_read_data_beats_dram_1 + local_or_remote_socket_read_data_beats_dram_2 + local_or_remote_socket_read_data_beats_dram_3 + local_or_remote_socket_read_data_beats_dram_4 + local_or_remote_socket_read_data_beats_dram_5 + local_or_remote_socket_read_data_beats_dram_6 + local_or_remote_socket_read_data_beats_dram_7 + local_or_remote_socket_read_data_beats_dram_8 + local_or_remote_socket_read_data_beats_dram_9 + local_or_remote_socket_read_data_beats_dram_10 + local_or_remote_socket_read_data_beats_dram_11) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "dram_write_bandwidth_for_local_socket",
+ "BriefDescription": "DRAM write data bandwidth for accesses in local socket.",
+ "MetricExpr": "(local_socket_write_data_beats_dram_0 + local_socket_write_data_beats_dram_1 + local_socket_write_data_beats_dram_2 + local_socket_write_data_beats_dram_3 + local_socket_write_data_beats_dram_4 + local_socket_write_data_beats_dram_5 + local_socket_write_data_beats_dram_6 + local_socket_write_data_beats_dram_7 + local_socket_write_data_beats_dram_8 + local_socket_write_data_beats_dram_9 + local_socket_write_data_beats_dram_10 + local_socket_write_data_beats_dram_11) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "dram_write_bandwidth_for_remote_socket",
+ "BriefDescription": "DRAM write data bandwidth for accesses in remote socket.",
+ "MetricExpr": "(remote_socket_write_data_beats_dram_0 + remote_socket_write_data_beats_dram_1 + remote_socket_write_data_beats_dram_2 + remote_socket_write_data_beats_dram_3 + remote_socket_write_data_beats_dram_4 + remote_socket_write_data_beats_dram_5 + remote_socket_write_data_beats_dram_6 + remote_socket_write_data_beats_dram_7 + remote_socket_write_data_beats_dram_8 + remote_socket_write_data_beats_dram_9 + remote_socket_write_data_beats_dram_10 + remote_socket_write_data_beats_dram_11) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "dram_write_bandwidth_for_local_or_remote_socket",
+ "BriefDescription": "DRAM write data bandwidth for accesses in local or remote socket.",
+ "MetricExpr": "(local_or_remote_socket_write_data_beats_dram_0 + local_or_remote_socket_write_data_beats_dram_1 + local_or_remote_socket_write_data_beats_dram_2 + local_or_remote_socket_write_data_beats_dram_3 + local_or_remote_socket_write_data_beats_dram_4 + local_or_remote_socket_write_data_beats_dram_5 + local_or_remote_socket_write_data_beats_dram_6 + local_or_remote_socket_write_data_beats_dram_7 + local_or_remote_socket_write_data_beats_dram_8 + local_or_remote_socket_write_data_beats_dram_9 + local_or_remote_socket_write_data_beats_dram_10 + local_or_remote_socket_write_data_beats_dram_11) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "upstream_dma_read_bandwidth_for_local_socket",
+ "BriefDescription": "Upstream DMA read data bandwidth for accesses in local socket.",
+ "MetricExpr": "(local_socket_upstream_read_data_beats_io_0 + local_socket_upstream_read_data_beats_io_1 + local_socket_upstream_read_data_beats_io_2 + local_socket_upstream_read_data_beats_io_3 + local_socket_upstream_read_data_beats_io_4 + local_socket_upstream_read_data_beats_io_5 + local_socket_upstream_read_data_beats_io_6 + local_socket_upstream_read_data_beats_io_7) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "upstream_dma_write_bandwidth_for_local_socket",
+ "BriefDescription": "Upstream DMA write data bandwidth for accesses in local socket.",
+ "MetricExpr": "(local_socket_upstream_write_data_beats_io_0 + local_socket_upstream_write_data_beats_io_1 + local_socket_upstream_write_data_beats_io_2 + local_socket_upstream_write_data_beats_io_3 + local_socket_upstream_write_data_beats_io_4 + local_socket_upstream_write_data_beats_io_5 + local_socket_upstream_write_data_beats_io_6 + local_socket_upstream_write_data_beats_io_7) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "upstream_dma_read_bandwidth_for_remote_socket",
+ "BriefDescription": "Upstream DMA read data bandwidth for accesses in remote socket.",
+ "MetricExpr": "(remote_socket_upstream_read_data_beats_io_0 + remote_socket_upstream_read_data_beats_io_1 + remote_socket_upstream_read_data_beats_io_2 + remote_socket_upstream_read_data_beats_io_3 + remote_socket_upstream_read_data_beats_io_4 + remote_socket_upstream_read_data_beats_io_5 + remote_socket_upstream_read_data_beats_io_6 + remote_socket_upstream_read_data_beats_io_7) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "upstream_dma_write_bandwidth_for_remote_socket",
+ "BriefDescription": "Upstream DMA write data bandwidth for accesses in remote socket.",
+ "MetricExpr": "(remote_socket_upstream_write_data_beats_io_0 + remote_socket_upstream_write_data_beats_io_1 + remote_socket_upstream_write_data_beats_io_2 + remote_socket_upstream_write_data_beats_io_3 + remote_socket_upstream_write_data_beats_io_4 + remote_socket_upstream_write_data_beats_io_5 + remote_socket_upstream_write_data_beats_io_6 + remote_socket_upstream_write_data_beats_io_7) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "core_inbound_data_bandwidth_for_local_socket",
+ "BriefDescription": "Core inbound data bandwidth for accesses in local socket.",
+ "MetricExpr": "(local_socket_inbound_data_beats_cfi_0 + local_socket_inbound_data_beats_cfi_1 + local_socket_inbound_data_beats_cfi_2 + local_socket_inbound_data_beats_cfi_3 + local_socket_inbound_data_beats_cfi_4 + local_socket_inbound_data_beats_cfi_5 + local_socket_inbound_data_beats_cfi_6 + local_socket_inbound_data_beats_cfi_7 + local_socket_inbound_data_beats_cfi_8 + local_socket_inbound_data_beats_cfi_9 + local_socket_inbound_data_beats_cfi_10 + local_socket_inbound_data_beats_cfi_11 + local_socket_inbound_data_beats_cfi_12 + local_socket_inbound_data_beats_cfi_13 + local_socket_inbound_data_beats_cfi_14 + local_socket_inbound_data_beats_cfi_15) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "3.2e-5MB/s"
+ },
+ {
+ "MetricName": "core_outbound_data_bandwidth_for_local_socket",
+ "BriefDescription": "Core outbound data bandwidth for accesses in local socket.",
+ "MetricExpr": "(local_socket_outbound_data_beats_cfi_0 + local_socket_outbound_data_beats_cfi_1 + local_socket_outbound_data_beats_cfi_2 + local_socket_outbound_data_beats_cfi_3 + local_socket_outbound_data_beats_cfi_4 + local_socket_outbound_data_beats_cfi_5 + local_socket_outbound_data_beats_cfi_6 + local_socket_outbound_data_beats_cfi_7 + local_socket_outbound_data_beats_cfi_8 + local_socket_outbound_data_beats_cfi_9 + local_socket_outbound_data_beats_cfi_10 + local_socket_outbound_data_beats_cfi_11 + local_socket_outbound_data_beats_cfi_12 + local_socket_outbound_data_beats_cfi_13 + local_socket_outbound_data_beats_cfi_14 + local_socket_outbound_data_beats_cfi_15) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "core_inbound_data_bandwidth_for_remote_socket",
+ "BriefDescription": "Core inbound data bandwidth for accesses in remote socket.",
+ "MetricExpr": "(remote_socket_inbound_data_beats_cfi_0 + remote_socket_inbound_data_beats_cfi_1 + remote_socket_inbound_data_beats_cfi_2 + remote_socket_inbound_data_beats_cfi_3 + remote_socket_inbound_data_beats_cfi_4 + remote_socket_inbound_data_beats_cfi_5 + remote_socket_inbound_data_beats_cfi_6 + remote_socket_inbound_data_beats_cfi_7 + remote_socket_inbound_data_beats_cfi_8 + remote_socket_inbound_data_beats_cfi_9 + remote_socket_inbound_data_beats_cfi_10 + remote_socket_inbound_data_beats_cfi_11 + remote_socket_inbound_data_beats_cfi_12 + remote_socket_inbound_data_beats_cfi_13 + remote_socket_inbound_data_beats_cfi_14 + remote_socket_inbound_data_beats_cfi_15) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "3.2e-5MB/s"
+ },
+ {
+ "MetricName": "core_outbound_data_bandwidth_for_remote_socket",
+ "BriefDescription": "Core outbound data bandwidth for accesses in remote socket.",
+ "MetricExpr": "(remote_socket_outbound_data_beats_cfi_0 + remote_socket_outbound_data_beats_cfi_1 + remote_socket_outbound_data_beats_cfi_2 + remote_socket_outbound_data_beats_cfi_3 + remote_socket_outbound_data_beats_cfi_4 + remote_socket_outbound_data_beats_cfi_5 + remote_socket_outbound_data_beats_cfi_6 + remote_socket_outbound_data_beats_cfi_7 + remote_socket_outbound_data_beats_cfi_8 + remote_socket_outbound_data_beats_cfi_9 + remote_socket_outbound_data_beats_cfi_10 + remote_socket_outbound_data_beats_cfi_11 + remote_socket_outbound_data_beats_cfi_12 + remote_socket_outbound_data_beats_cfi_13 + remote_socket_outbound_data_beats_cfi_14 + remote_socket_outbound_data_beats_cfi_15) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "cross_socket_inbound_data_bandwidth_for_local_socket",
+ "BriefDescription": "Inbound data bandwidth for accesses between local socket and remote socket.",
+ "MetricExpr": "(local_socket_inbound_data_beats_link_0 + local_socket_inbound_data_beats_link_1 + local_socket_inbound_data_beats_link_2 + local_socket_inbound_data_beats_link_3 + local_socket_inbound_data_beats_link_4 + local_socket_inbound_data_beats_link_5) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ },
+ {
+ "MetricName": "cross_socket_outbound_data_bandwidth_for_local_socket",
+ "BriefDescription": "Outbound data bandwidth for accesses between local socket and remote socket.",
+ "MetricExpr": "(local_socket_outbound_data_beats_link_0 + local_socket_outbound_data_beats_link_1 + local_socket_outbound_data_beats_link_2 + local_socket_outbound_data_beats_link_3 + local_socket_outbound_data_beats_link_4 + local_socket_outbound_data_beats_link_5) / duration_time",
+ "MetricGroup": "data_fabric",
+ "PerPkg": "1",
+ "ScaleUnit": "6.4e-5MB/s"
+ }
+]
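
All of the bandwidth metrics above reduce to the same arithmetic: a transfer count multiplied by a transfer size and divided by the measurement interval. The UMC expressions multiply CAS commands by 64 (consistent with one 64-byte cacheline per CAS command) and report MB/s directly, while the data-fabric metrics sum per-port beat counters and leave the byte conversion to ScaleUnit, where 6.4e-5 MB corresponds to 64 bytes per beat and 3.2e-5 MB to 32 bytes. A minimal Python sketch of that conversion follows; the counter values are invented, and elapsed_seconds merely stands in for the duration_time term in the expressions.

def mb_per_s(transfers, bytes_per_transfer, seconds):
    # bytes -> megabytes (1e6 bytes) per second, matching the "1MB/s" ScaleUnit above
    return transfers * bytes_per_transfer / 1e6 / seconds

# Invented counts for a one-second window (illustration only).
umc_cas_cmd_rd = 1_500_000_000
umc_cas_cmd_wr = 500_000_000
elapsed_seconds = 1.0  # stands in for duration_time in the expressions above

print(f"umc_mem_read_bandwidth  {mb_per_s(umc_cas_cmd_rd, 64, elapsed_seconds):10.1f} MB/s")
print(f"umc_mem_write_bandwidth {mb_per_s(umc_cas_cmd_wr, 64, elapsed_seconds):10.1f} MB/s")
print(f"umc_mem_bandwidth       {mb_per_s(umc_cas_cmd_rd + umc_cas_cmd_wr, 64, elapsed_seconds):10.1f} MB/s")

# The data-fabric metrics sum per-port beat counters and let ScaleUnit do the
# byte conversion: 6.4e-5 MB per beat is 64 bytes, 3.2e-5 MB per beat is 32 bytes.
dram_read_beats = 4_000_000_000  # invented sum of the *_read_data_beats_dram_0..11 counters
print(f"dram_read_bandwidth     {dram_read_beats * 6.4e-5 / elapsed_seconds:10.1f} MB/s")

Which counters feed which metric is exactly as spelled out in the MetricExpr strings; the sketch only shows the unit handling.
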
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/arl-metrics.json b/tools/perf/pmu-events/arch/x86/arrowlake/arl-metrics.json
new file mode 100644
index 000000000000..4f1f77404943
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/arl-metrics.json
@@ -0,0 +1,2795 @@
+[
+ {
+ "BriefDescription": "C10 residency percent per package",
+ "MetricExpr": "cstate_pkg@c10\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C10_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C1 residency percent per core",
+ "MetricExpr": "cstate_core@c1\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C1_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C2 residency percent per package",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C2_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C3 residency percent per package",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C3_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per core",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per package",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C7 residency percent per core",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C7_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C8 residency percent per package",
+ "MetricExpr": "cstate_pkg@c8\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C8_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
+ "MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
+ "MetricGroup": "smi",
+ "MetricName": "smi_cycles",
+ "MetricThreshold": "smi_cycles > 0.1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Number of SMI interrupts.",
+ "MetricExpr": "msr@smi@",
+ "MetricGroup": "smi",
+ "MetricName": "smi_num",
+ "ScaleUnit": "1SMI#"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to certain allocation restrictions",
+ "MetricExpr": "tma_core_bound",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_allocation_restriction",
+ "MetricThreshold": "tma_allocation_restriction > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.ALL_P@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "Default;TopdownL1;tma_L1_group",
+ "MetricName": "tma_backend_bound",
+ "MetricThreshold": "tma_backend_bound > 0.1",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.ALL_P@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "Default;TopdownL1;tma_L1_group",
+ "MetricName": "tma_bad_speculation",
+ "MetricThreshold": "tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the instruction queue (IQ). Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction, or lack thereof, was corrected by a later branch predictor in the frontend",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.BRANCH_DETECT@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_branch_detect",
+ "MetricThreshold": "tma_branch_detect > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+        "PublicDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction, or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to branch mispredicts",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.MISPREDICT@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
+ "MetricName": "tma_branch_mispredicts",
+ "MetricThreshold": "tma_branch_mispredicts > 0.05 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.BRANCH_RESTEER@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_branch_resteer",
+ "MetricThreshold": "tma_branch_resteer > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.CISC@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_cisc",
+ "MetricThreshold": "tma_cisc > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of cycles due to backend bound stalls that are bounded by core restrictions and not attributed to outstanding loads or stores, or a resource limitation",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_core_bound",
+ "MetricThreshold": "tma_core_bound > 0.1 & tma_backend_bound > 0.1",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to decode stalls.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.DECODE@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_decode",
+ "MetricThreshold": "tma_decode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode, classified as a fast nuke, due to memory ordering, memory disambiguation and memory renaming",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.FASTNUKE@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_fast_nuke",
+ "MetricThreshold": "tma_fast_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to frontend stalls.",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ALL@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "Default;TopdownL1;tma_L1_group",
+ "MetricName": "tma_frontend_bound",
+ "MetricThreshold": "tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ICACHE@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_icache_misses",
+ "MetricThreshold": "tma_icache_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
+ "MetricName": "tma_ifetch_bandwidth",
+ "MetricThreshold": "tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions due to icache misses, itlb misses, branch detection, and resteer limitations.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.FRONTEND_LATENCY@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
+ "MetricName": "tma_ifetch_latency",
+ "MetricThreshold": "tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per Floating Point (FP) Operation",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@FP_FLOPS_RETIRED.ALL@",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipflop",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic instruction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@FP_INST_RETIRED.ALL@",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@FP_INST_RETIRED.128B_DP@ + cpu_atom@FP_INST_RETIRED.128B_SP@)",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith_avx128",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX 256-bit instruction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@FP_INST_RETIRED.256B_DP@ + cpu_atom@FP_INST_RETIRED.256B_SP@)",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith_avx256",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@FP_INST_RETIRED.64B_DP@",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith_scalar_dp",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@FP_INST_RETIRED.32B_SP@",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith_scalar_sp",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that retirement is stalled due to a first level data TLB miss",
+ "MetricExpr": "100 * (cpu_atom@LD_HEAD.DTLB_MISS_AT_RET@ + cpu_atom@LD_HEAD.PGWALK_AT_RET@) / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_bottleneck_%_dtlb_miss_bound_cycles",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Ifetch",
+ "MetricName": "tma_info_bottleneck_%_ifetch_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss. See Info.Ifetch_Bound",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that retirement is stalled due to an L1 miss",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Load_Store_Miss",
+ "MetricName": "tma_info_bottleneck_%_load_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.ANY_AT_RET@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Mem_Exec",
+ "MetricName": "tma_info_bottleneck_%_mem_exec_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricName": "tma_info_br_inst_mix_ipbranch",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.NEAR_CALL@",
+ "MetricName": "tma_info_br_inst_mix_ipcall",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Instructions per Far Branch (Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_INST_RETIRED.FAR_BRANCH@u",
+ "MetricName": "tma_info_br_inst_mix_ipfarbranch",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was not taken",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / (cpu_atom@BR_MISP_RETIRED.COND@ - cpu_atom@BR_MISP_RETIRED.COND_TAKEN@)",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_ntaken",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was taken",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.COND_TAKEN@",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_taken",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per retired indirect call or jump Branch Misprediction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.INDIRECT@",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_indirect",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per retired return Branch Misprediction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.RETURN@",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_ret",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per retired Branch Misprediction",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@",
+ "MetricName": "tma_info_br_inst_mix_ipmispredict",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Ratio of all branches which mispredict",
+ "MetricExpr": "cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / cpu_atom@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Ratio between Mispredicted branches and unknown branches",
+ "MetricExpr": "cpu_atom@BR_MISP_RETIRED.ALL_BRANCHES@ / cpu_atom@BACLEARS.ANY@",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_to_unknown_branch_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to load buffer full",
+ "MetricExpr": "100 * cpu_atom@MEM_SCHEDULER_BLOCK.LD_BUF@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_buffer_stalls_%_load_buffer_stall_cycles",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to memory reservation stations full",
+ "MetricExpr": "100 * cpu_atom@MEM_SCHEDULER_BLOCK.RSV@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_buffer_stalls_%_mem_rsv_stall_cycles",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to store buffer full",
+ "MetricExpr": "100 * cpu_atom@MEM_SCHEDULER_BLOCK.ST_BUF@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_buffer_stalls_%_store_buffer_stall_cycles",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_core_cpi",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Floating Point Operations Per Cycle",
+ "MetricExpr": "cpu_atom@FP_FLOPS_RETIRED.ALL@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_core_flopc",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions Per Cycle",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_core_ipc",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Uops Per Instruction",
+ "MetricExpr": "cpu_atom@TOPDOWN_RETIRING.ALL@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_core_upi",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L2",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.L2_HIT@ / cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss doesn't hit in the L2",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.L2_MISS@ / cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2miss",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L3",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_IFETCH.LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3hit",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss subsequently misses in the L3",
+ "MetricExpr": "100 * (cpu_atom@MEM_BOUND_STALLS_IFETCH.L2_MISS@ - cpu_atom@MEM_BOUND_STALLS_IFETCH.LLC_HIT@) / cpu_atom@MEM_BOUND_STALLS_IFETCH.ALL@",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3miss",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L2",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_LOAD.L2_HIT@ / cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2hit",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses in the L2",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_LOAD.L2_MISS@ / cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2miss",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L3",
+ "MetricExpr": "100 * cpu_atom@MEM_BOUND_STALLS_LOAD.LLC_HIT@ / cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3hit",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses the L3",
+ "MetricExpr": "100 * (cpu_atom@MEM_BOUND_STALLS_LOAD.L2_MISS@ - cpu_atom@MEM_BOUND_STALLS_LOAD.LLC_HIT@) / cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3miss",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Percentage of cycles that the oldest load of the load buffer is stalled at retirement due to a pipeline block",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_l1_bound",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Percentage of cycles that the oldest load of the load buffer is stalled at retirement",
+ "MetricExpr": "100 * (cpu_atom@LD_HEAD.L1_BOUND_AT_RET@ + cpu_atom@MEM_BOUND_STALLS_LOAD.ALL@) / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_load_bound",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Percentage of cycles the core is stalled due to store buffer full",
+ "MetricExpr": "100 * (cpu_atom@MEM_SCHEDULER_BLOCK.ST_BUF@ / cpu_atom@MEM_SCHEDULER_BLOCK.ALL@) * tma_mem_scheduler",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_store_bound",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory disambiguation",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.DISAMBIGUATION@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_disamb_pki",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to floating point assists",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.FP_ASSIST@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_fp_assist_pki",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory ordering",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.MEMORY_ORDERING@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_monuke_pki",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to memory renaming",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.MRN_NUKE@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_mrn_pki",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to page faults",
+ "MetricExpr": "1e3 * cpu_atom@MACHINE_CLEARS.PAGE_FAULT@ / cpu_atom@INST_RETIRED.ANY@",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_page_fault_pki",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads with an address aliasing block",
+ "MetricExpr": "100 * cpu_atom@LD_BLOCKS.ADDRESS_ALIAS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_adressaliasing",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads with a store forward or unknown store address block",
+ "MetricExpr": "100 * cpu_atom@LD_BLOCKS.DATA_UNKNOWN@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_storefwdblk",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a first level data cache miss",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.L1_MISS_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_l1miss",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to other block cases, such as pipeline conflicts, fences, etc",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.OTHER_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_otherpipelineblks",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a pagewalk",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.PGWALK_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_pagewalk",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a second level TLB miss",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.DTLB_MISS_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_stlbhit",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a store forward address match",
+ "MetricExpr": "100 * cpu_atom@LD_HEAD.ST_ADDR_AT_RET@ / cpu_atom@LD_HEAD.ANY_AT_RET@",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_storefwding",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per Load",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_mix_ipload",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instructions per Store",
+ "MetricExpr": "cpu_atom@INST_RETIRED.ANY@ / cpu_atom@MEM_UOPS_RETIRED.ALL_STORES@",
+ "MetricName": "tma_info_mem_mix_ipstore",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads that perform one or more locks",
+ "MetricExpr": "100 * cpu_atom@MEM_UOPS_RETIRED.LOCK_LOADS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_mix_load_locks_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads that are splits",
+ "MetricExpr": "100 * cpu_atom@MEM_UOPS_RETIRED.SPLIT_LOADS@ / cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@",
+ "MetricName": "tma_info_mem_mix_load_splits_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Ratio of mem load uops to all uops, expressed per thousand uops",
+ "MetricExpr": "1e3 * cpu_atom@MEM_UOPS_RETIRED.ALL_LOADS@ / cpu_atom@TOPDOWN_RETIRING.ALL@",
+ "MetricName": "tma_info_mem_mix_memload_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction",
+ "MetricExpr": "100 * cpu_atom@SERIALIZATION.C01_MS_SCB@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricName": "tma_info_serialization_%_tpause_cycles",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Average CPU Utilization",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.REF_TSC@ / msr@tsc\\,cpu=cpu_atom@",
+ "MetricName": "tma_info_system_cpu_utilization",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Giga Floating Point Operations Per Second",
+ "MetricExpr": "cpu_atom@FP_FLOPS_RETIRED.ALL@ / (duration_time * 1e9)",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_system_gflops",
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Fraction of cycles spent in Kernel mode",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE_P@k / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_kernel_utilization",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE_P@ / cpu_atom@CPU_CLK_UNHALTED.CORE@",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Average Frequency Utilization relative to nominal frequency",
+ "MetricExpr": "cpu_atom@CPU_CLK_UNHALTED.CORE@ / cpu_atom@CPU_CLK_UNHALTED.REF_TSC@",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_system_turbo_utilization",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are FPDiv uops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.FPDIV@ / cpu_atom@TOPDOWN_RETIRING.ALL@",
+ "MetricName": "tma_info_uop_mix_fpdiv_uop_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are IDiv uops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.IDIV@ / cpu_atom@TOPDOWN_RETIRING.ALL@",
+ "MetricName": "tma_info_uop_mix_idiv_uop_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are microcode ops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.MS@ / cpu_atom@TOPDOWN_RETIRING.ALL@",
+ "MetricName": "tma_info_uop_mix_microcode_uop_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are x87 uops",
+ "MetricExpr": "100 * cpu_atom@UOPS_RETIRED.X87@ / cpu_atom@TOPDOWN_RETIRING.ALL@",
+ "MetricName": "tma_info_uop_mix_x87_uop_ratio",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.ITLB_MISS@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_itlb_misses",
+ "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
+ "MetricName": "tma_machine_clears",
+ "MetricThreshold": "tma_machine_clears > 0.05 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.MEM_SCHEDULER@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_mem_scheduler",
+ "MetricThreshold": "tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_non_mem_scheduler",
+ "MetricThreshold": "tma_non_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that requires the use of microcode (slow nuke)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BAD_SPECULATION.NUKE@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_nuke",
+ "MetricThreshold": "tma_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not categorized.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.OTHER@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_other_fb",
+ "MetricThreshold": "tma_other_fb > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.",
+ "MetricExpr": "cpu_atom@TOPDOWN_FE_BOUND.PREDECODE@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_predecode",
+ "MetricThreshold": "tma_predecode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.REGISTER@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_register",
+ "MetricThreshold": "tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.REORDER_BUFFER@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_reorder_buffer",
+ "MetricThreshold": "tma_reorder_buffer > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a resource limitation",
+ "MetricExpr": "tma_backend_bound - tma_core_bound",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_resource_bound",
+ "MetricThreshold": "tma_resource_bound > 0.2 & tma_backend_bound > 0.1",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that result in retirement slots",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_atom@TOPDOWN_RETIRING.ALL@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "Default;TopdownL1;tma_L1_group",
+ "MetricName": "tma_retiring",
+ "MetricThreshold": "tma_retiring > 0.75",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS)",
+ "MetricExpr": "cpu_atom@TOPDOWN_BE_BOUND.SERIALIZATION@ / (8 * cpu_atom@CPU_CLK_UNHALTED.CORE@)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_serialization",
+ "MetricThreshold": "tma_serialization > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Uncore frequency per die [GHZ]",
+ "MetricExpr": "tma_info_system_socket_clks / #num_dies / duration_time / 1e9",
+ "MetricGroup": "SoC",
+ "MetricName": "UNCORE_FREQ",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+ "MetricExpr": "cpu_core@UOPS_DISPATCHED.ALU@ / (6 * tma_info_thread_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_alu_op_utilization",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
+ "MetricExpr": "78 * cpu_core@ASSISTS.ANY@ / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricName": "tma_assists",
+ "MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
+ "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU retired uops as a result of handing SSE to AVX* or AVX* to SSE transition Assists.",
+ "MetricExpr": "63 * cpu_core@ASSISTS.SSE_AVX_MIX@ / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_avx_assists",
+ "MetricThreshold": "tma_avx_assists > 0.1",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_core@topdown\\-be\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_backend_bound",
+ "MetricThreshold": "tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_core@topdown\\-bad\\-spec@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_bad_speculation",
+ "MetricThreshold": "tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
+ "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
+ "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
+ "MetricName": "tma_bottleneck_big_code",
+ "MetricThreshold": "tma_bottleneck_big_code > 20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+ "MetricExpr": "100 * ((cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ + 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@INST_RETIRED.NOP@) / tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Ret",
+ "MetricName": "tma_bottleneck_branching_overhead",
+ "MetricThreshold": "tma_bottleneck_branching_overhead > 5",
+ "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
+ "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
+ "MetricGroup": "BvCB;Cor;tma_issueComp",
+ "MetricName": "tma_bottleneck_compute_bound_est",
+ "MetricThreshold": "tma_bottleneck_compute_bound_est > 20",
+ "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: ",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_l1_latency_capacity + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_early_blk + tma_store_fwd_blk)))",
+ "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
+ "MetricName": "tma_bottleneck_data_cache_memory_bandwidth",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_bandwidth > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_dependency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_capacity + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_early_blk + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_capacity / (tma_dtlb_load + tma_fb_full + tma_l1_latency_capacity + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_early_blk + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_lock_latency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_capacity + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_early_blk + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_loads / (tma_dtlb_load + tma_fb_full + tma_l1_latency_capacity + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_early_blk + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_stores / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
+ "MetricName": "tma_bottleneck_data_cache_memory_latency",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_latency > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
+ "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - (1 - cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms)) - tma_bottleneck_big_code",
+ "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
+ "MetricName": "tma_bottleneck_instruction_fetch_bw",
+ "MetricThreshold": "tma_bottleneck_instruction_fetch_bw > 20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of irregular execution (e.g",
+ "MetricExpr": "100 * ((1 - cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + cpu_core@RS.EMPTY_RESOURCE@ / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_microcode_sequencer + tma_few_uops_instructions) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
+ "MetricName": "tma_bottleneck_irregular_overhead",
+ "MetricThreshold": "tma_bottleneck_irregular_overhead > 10",
+ "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / (tma_dtlb_load + tma_fb_full + tma_l1_latency_capacity + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_early_blk + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
+ "MetricName": "tma_bottleneck_memory_data_tlbs",
+ "MetricThreshold": "tma_bottleneck_memory_data_tlbs > 20",
+ "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
+ "MetricGroup": "BvMS;LockCont;Mem;Offcore;tma_issueSyncxn",
+ "MetricName": "tma_bottleneck_memory_synchronization",
+ "MetricThreshold": "tma_bottleneck_memory_synchronization > 10",
+ "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+ "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
+ "MetricName": "tma_bottleneck_mispredictions",
+ "MetricThreshold": "tma_bottleneck_mispredictions > 20",
+ "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+ "MetricExpr": "100 - (tma_bottleneck_big_code + tma_bottleneck_instruction_fetch_bw + tma_bottleneck_mispredictions + tma_bottleneck_data_cache_memory_bandwidth + tma_bottleneck_data_cache_memory_latency + tma_bottleneck_memory_data_tlbs + tma_bottleneck_memory_synchronization + tma_bottleneck_compute_bound_est + tma_bottleneck_irregular_overhead + tma_bottleneck_branching_overhead + tma_bottleneck_useful_work)",
+ "MetricGroup": "BvOB;Cor;Offcore",
+ "MetricName": "tma_bottleneck_other_bottlenecks",
+ "MetricThreshold": "tma_bottleneck_other_bottlenecks > 20",
+ "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+ "MetricExpr": "100 * (tma_retiring - (cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ + 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@INST_RETIRED.NOP@) / tma_info_thread_slots - tma_microcode_sequencer / (tma_microcode_sequencer + tma_few_uops_instructions) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "BvUW;Ret",
+ "MetricName": "tma_bottleneck_useful_work",
+ "MetricThreshold": "tma_bottleneck_useful_work > 20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
+ "MetricExpr": "cpu_core@topdown\\-br\\-mispredict@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricName": "tma_branch_mispredicts",
+ "MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_bottleneck_mispredictions, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+ "MetricExpr": "cpu_core@INT_MISC.CLEAR_RESTEER_CYCLES@ / tma_info_thread_clks + tma_unknown_branches",
+ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_branch_resteers",
+ "MetricThreshold": "tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.1 power-performance optimized state (Faster wakeup time; Smaller power savings).",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.C01@ / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c01_wait",
+ "MetricThreshold": "tma_c01_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.2 power-performance optimized state (Slower wakeup time; Larger power savings).",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.C02@ / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c02_wait",
+ "MetricThreshold": "tma_c02_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
+ "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricName": "tma_cisc",
+ "MetricThreshold": "tma_cisc > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears",
+ "MetricExpr": "(1 - tma_branch_mispredicts / tma_bad_speculation) * cpu_core@INT_MISC.CLEAR_RESTEER_CYCLES@ / tma_info_thread_clks",
+ "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueMC",
+ "MetricName": "tma_clears_resteers",
+ "MetricThreshold": "tma_clears_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that hit in the L2 cache.",
+ "MetricExpr": "max(0, cpu_core@FRONTEND_RETIRED.L1I_MISS@ * cpu_core@FRONTEND_RETIRED.L1I_MISS@R / tma_info_thread_clks - tma_code_l2_miss)",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_hit",
+ "MetricThreshold": "tma_code_l2_hit > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that miss in the L2 cache.",
+ "MetricExpr": "cpu_core@FRONTEND_RETIRED.L2_MISS@ * cpu_core@FRONTEND_RETIRED.L2_MISS@R / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_miss",
+ "MetricThreshold": "tma_code_l2_miss > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) ITLB was missed by instructions fetches, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, cpu_core@FRONTEND_RETIRED.ITLB_MISS@ * cpu_core@FRONTEND_RETIRED.ITLB_MISS@R / tma_info_thread_clks - tma_code_stlb_miss)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_hit",
+ "MetricThreshold": "tma_code_stlb_hit > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by instruction fetches, performing a hardware page walk",
+ "MetricExpr": "cpu_core@FRONTEND_RETIRED.STLB_MISS@ * cpu_core@FRONTEND_RETIRED.STLB_MISS@R / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_miss",
+ "MetricThreshold": "tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for (instruction) code accesses.",
+ "MetricExpr": "cpu_core@ITLB_MISSES.WALK_ACTIVE@ / tma_info_thread_clks * cpu_core@ITLB_MISSES.WALK_COMPLETED_2M_4M@ / (cpu_core@ITLB_MISSES.WALK_COMPLETED_4K@ + cpu_core@ITLB_MISSES.WALK_COMPLETED_2M_4M@)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_2m",
+ "MetricThreshold": "tma_code_stlb_miss_2m > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for (instruction) code accesses.",
+ "MetricExpr": "cpu_core@ITLB_MISSES.WALK_ACTIVE@ / tma_info_thread_clks * cpu_core@ITLB_MISSES.WALK_COMPLETED_4K@ / (cpu_core@ITLB_MISSES.WALK_COMPLETED_4K@ + cpu_core@ITLB_MISSES.WALK_COMPLETED_2M_4M@)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_4k",
+ "MetricThreshold": "tma_code_stlb_miss_4k > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by non-taken conditional branches.",
+ "MetricExpr": "cpu_core@BR_MISP_RETIRED.COND_NTAKEN_COST@ * cpu_core@BR_MISP_RETIRED.COND_NTAKEN_COST@R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_cond_nt_mispredicts",
+ "MetricThreshold": "tma_cond_nt_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to misprediction by backward-taken conditional branches.",
+ "MetricExpr": "cpu_core@BR_MISP_RETIRED.COND_TAKEN_BWD_COST@ * cpu_core@BR_MISP_RETIRED.COND_TAKEN_BWD_COST@R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_cond_tk_bwd_mispredicts",
+ "MetricThreshold": "tma_cond_tk_bwd_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to misprediction by forward-taken conditional branches.",
+ "MetricExpr": "cpu_core@BR_MISP_RETIRED.COND_TAKEN_FWD_COST@ * cpu_core@BR_MISP_RETIRED.COND_TAKEN_FWD_COST@R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_cond_tk_fwd_mispredicts",
+ "MetricThreshold": "tma_cond_tk_fwd_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+ "MetricExpr": "(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS@R, 24 * tma_info_system_core_frequency) + cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM@R, 25 * tma_info_system_core_frequency)) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricName": "tma_contested_accesses",
+ "MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_bottleneck_memory_synchronization, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck",
+ "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+ "MetricGroup": "Backend;Compute;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_core_bound",
+ "MetricThreshold": "tma_core_bound > 0.1 & tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD@R, 24 * tma_info_system_core_frequency) + cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@ * min(cpu_core@MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD@R, 25 * tma_info_system_core_frequency)) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricName": "tma_data_sharing",
+ "MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+ "MetricExpr": "cpu_core@ARITH.DIV_ACTIVE@ / tma_info_thread_clks",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_divider",
+ "MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIV_ACTIVE",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
+ "MetricExpr": "cpu_core@MEMORY_STALLS.MEM@ / tma_info_thread_clks",
+ "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_dram_bound",
+ "MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
+ "MetricExpr": "(cpu_core@IDQ.DSB_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / 2 + cpu_core@IDQ.DSB_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.STARVATION_CYCLES@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",
+ "MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_dsb",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
+ "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+ "MetricExpr": "cpu_core@DSB2MITE_SWITCHES.PENALTY_CYCLES@ / tma_info_thread_clks",
+ "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
+ "MetricName": "tma_dsb_switches",
+ "MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
+ "MetricExpr": "cpu_core@MEM_INST_RETIRED.STLB_HIT_LOADS@ * min(cpu_core@MEM_INST_RETIRED.STLB_HIT_LOADS@R, 7) / tma_info_thread_clks + tma_load_stlb_miss",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricName": "tma_dtlb_load",
+ "MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_store",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+ "MetricExpr": "cpu_core@MEM_INST_RETIRED.STLB_HIT_STORES@ * min(cpu_core@MEM_INST_RETIRED.STLB_HIT_STORES@R, 7) / tma_info_thread_clks + tma_store_stlb_miss",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricName": "tma_dtlb_store",
+ "MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_load",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+ "MetricExpr": "28 * tma_info_system_core_frequency * cpu_core@OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM@ / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricName": "tma_false_sharing",
+ "MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+ "MetricExpr": "cpu_core@L1D_MISS.FB_FULL@ / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricName": "tma_fb_full",
+ "MetricThreshold": "tma_fb_full > 0.3",
+ "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+ "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)",
+ "MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
+ "MetricName": "tma_fetch_bandwidth",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+ "MetricExpr": "cpu_core@topdown\\-fetch\\-lat@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+ "MetricName": "tma_fetch_latency",
+ "MetricThreshold": "tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops",
+ "MetricExpr": "max(0, tma_heavy_operations - tma_microcode_sequencer)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0",
+ "MetricName": "tma_few_uops_instructions",
+ "MetricThreshold": "tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)",
+ "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector",
+ "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_fp_arith",
+ "MetricThreshold": "tma_fp_arith > 0.2 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists",
+ "MetricExpr": "30 * cpu_core@ASSISTS.FP@ / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_fp_assists",
+ "MetricThreshold": "tma_fp_assists > 0.1",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals).",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Floating-Point Divider unit was active.",
+ "MetricExpr": "cpu_core@ARITH.FPDIV_ACTIVE@ / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_fp_divider",
+ "MetricThreshold": "tma_fp_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
+ "MetricExpr": "cpu_core@FP_ARITH_OPS_RETIRED.SCALAR@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
+ "MetricName": "tma_fp_scalar",
+ "MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
+ "MetricExpr": "cpu_core@FP_ARITH_OPS_RETIRED.VECTOR@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
+ "MetricName": "tma_fp_vector",
+ "MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors",
+ "MetricExpr": "(cpu_core@FP_ARITH_OPS_RETIRED.128B_PACKED_DOUBLE@ + cpu_core@FP_ARITH_OPS_RETIRED.128B_PACKED_SINGLE@) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_128b",
+ "MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors",
+ "MetricExpr": "cpu_core@FP_ARITH_OPS_RETIRED.VECTOR\\,umask\\=0x30@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_256b",
+ "MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_core@topdown\\-fe\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_frontend_bound",
+ "MetricThreshold": "tma_frontend_bound > 0.15",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Pipeline_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions",
+ "MetricExpr": "tma_light_operations * cpu_core@INST_RETIRED.MACRO_FUSED@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_fused_instructions",
+ "MetricThreshold": "tma_fused_instructions > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences",
+ "MetricExpr": "cpu_core@topdown\\-heavy\\-ops@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group",
+ "MetricName": "tma_heavy_operations",
+ "MetricThreshold": "tma_heavy_operations > 0.1",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+]). Sample with: UOPS_RETIRED.HEAVY",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
+ "MetricExpr": "cpu_core@ICACHE_DATA.STALLS@ / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_icache_misses",
+ "MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by indirect CALL instructions.",
+ "MetricExpr": "cpu_core@BR_MISP_RETIRED.INDIRECT_CALL_COST@ * cpu_core@BR_MISP_RETIRED.INDIRECT_CALL_COST@R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_ind_call_mispredicts",
+ "MetricThreshold": "tma_ind_call_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by indirect JMP instructions.",
+ "MetricExpr": "max((cpu_core@BR_MISP_RETIRED.INDIRECT_COST@ * cpu_core@BR_MISP_RETIRED.INDIRECT_COST@R - cpu_core@BR_MISP_RETIRED.INDIRECT_CALL_COST@ * cpu_core@BR_MISP_RETIRED.INDIRECT_CALL_COST@R) / tma_info_thread_clks, 0)",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_ind_jump_mispredicts",
+ "MetricThreshold": "tma_ind_jump_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+ "MetricExpr": "tma_bottleneck_mispredictions * tma_info_thread_slots / 8 / cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ / 100",
+ "MetricGroup": "Bad;BrMispredicts;tma_issueBM",
+ "MetricName": "tma_info_bad_spec_branch_misprediction_cost",
+ "PublicDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_mispredicts_resteers",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.COND_NTAKEN@",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_cond_ntaken",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_cond_ntaken < 200",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for conditional backward-taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.COND_TAKEN_BWD@",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_cond_taken_bwd",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for conditional forward-taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.COND_TAKEN_FWD@",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_cond_taken_fwd",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.INDIRECT@",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_indirect",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for return branches (lower number means higher occurrence rate).",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.RET@",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_ret",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_ret < 500",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear) (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmispredict",
+ "MetricThreshold": "tma_info_bad_spec_ipmispredict < 200",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Speculative to Retired ratio of all clears (covering Mispredicts and nukes)",
+ "MetricExpr": "cpu_core@INT_MISC.CLEARS_COUNT@ / (cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ + cpu_core@MACHINE_CLEARS.COUNT@)",
+ "MetricGroup": "BrMispredicts",
+ "MetricName": "tma_info_bad_spec_spec_clears_ratio",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_lsd + tma_mite + tma_ms)))",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite + tma_ms))",
+ "MetricGroup": "DSBmiss;Fed;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_misses",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL",
+ "MetricName": "tma_info_botlnk_l2_ic_misses",
+ "MetricThreshold": "tma_info_botlnk_l2_ic_misses > 5",
+ "PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: ",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are CALL or RET",
+ "MetricExpr": "(cpu_core@BR_INST_RETIRED.NEAR_CALL@ + cpu_core@BR_INST_RETIRED.NEAR_RETURN@) / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_callret",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are non-taken conditionals",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.COND_NTAKEN@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricGroup": "Bad;Branches;CodeGen;PGO",
+ "MetricName": "tma_info_branches_cond_nt",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are forward taken conditionals",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.COND_TAKEN_BWD@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricGroup": "Bad;Branches;CodeGen;PGO",
+ "MetricName": "tma_info_branches_cond_tk_bwd",
+ "MetricThreshold": "tma_info_branches_cond_tk_bwd > 0.3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are forward taken conditionals",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.COND_TAKEN_FWD@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricGroup": "Bad;Branches;CodeGen;PGO",
+ "MetricName": "tma_info_branches_cond_tk_fwd",
+ "MetricThreshold": "tma_info_branches_cond_tk_fwd > 0.2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
+ "MetricExpr": "(cpu_core@BR_INST_RETIRED.NEAR_TAKEN@ - cpu_core@BR_INST_RETIRED.COND_TAKEN_BWD@ - cpu_core@BR_INST_RETIRED.COND_TAKEN_FWD@ - 2 * cpu_core@BR_INST_RETIRED.NEAR_CALL@) / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_jump",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of branches of other types (not individually covered by other metrics in Info.Branches group)",
+ "MetricExpr": "1 - (tma_info_branches_cond_nt + tma_info_branches_cond_tk_bwd + tma_info_branches_cond_tk_fwd + tma_info_branches_callret + tma_info_branches_jump)",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_other_branches",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "uops Executed per Cycle",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / tma_info_thread_clks",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_core_epc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Floating Point Operations Per Cycle",
+ "MetricExpr": "(cpu_core@FP_ARITH_OPS_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_OPS_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_OPS_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_OPS_RETIRED.256B_PACKED_SINGLE@) / tma_info_thread_clks",
+ "MetricGroup": "Flops;Ret",
+ "MetricName": "tma_info_core_flopc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
+ "MetricExpr": "(cpu_core@FP_ARITH_DISPATCHED.V0@ + cpu_core@FP_ARITH_DISPATCHED.V1@ + cpu_core@FP_ARITH_DISPATCHED.V2@ + cpu_core@FP_ARITH_DISPATCHED.V3@) / (4 * tma_info_thread_clks)",
+ "MetricGroup": "Cor;Flops;HPC",
+ "MetricName": "tma_info_core_fp_arith_utilization",
+ "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common).",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / cpu_core@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
+ "MetricName": "tma_info_core_ilp",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
+ "MetricExpr": "cpu_core@IDQ.DSB_UOPS@ / cpu_core@UOPS_ISSUED.ANY@",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_frontend_dsb_coverage",
+ "MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 8 > 0.35",
+ "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
+ "MetricExpr": "cpu_core@DSB2MITE_SWITCHES.PENALTY_CYCLES@ / cpu_core@DSB2MITE_SWITCHES.PENALTY_CYCLES\\,cmask\\=1\\,edge@",
+ "MetricGroup": "DSBmiss",
+ "MetricName": "tma_info_frontend_dsb_switch_cost",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to retired DSB misses",
+ "MetricExpr": "cpu_core@FRONTEND_RETIRED.ANY_DSB_MISS@ * cpu_core@FRONTEND_RETIRED.ANY_DSB_MISS@R / tma_info_thread_clks",
+ "MetricGroup": "DSBmiss;Fed;FetchLat",
+ "MetricName": "tma_info_frontend_dsb_switches_ret",
+ "MetricThreshold": "tma_info_frontend_dsb_switches_ret > 0.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of Uops issued by front-end when it issued something",
+ "MetricExpr": "cpu_core@UOPS_ISSUED.ANY@ / cpu_core@UOPS_ISSUED.ANY\\,cmask\\=1@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_frontend_fetch_upc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average Latency for L1 instruction cache misses",
+ "MetricExpr": "cpu_core@ICACHE_DATA.STALLS@ / cpu_core@ICACHE_DATA.STALL_PERIODS@",
+ "MetricGroup": "Fed;FetchLat;IcMiss",
+ "MetricName": "tma_info_frontend_icache_miss_latency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per non-speculative DSB miss (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@FRONTEND_RETIRED.ANY_DSB_MISS@",
+ "MetricGroup": "DSBmiss;Fed",
+ "MetricName": "tma_info_frontend_ipdsb_miss_ret",
+ "MetricThreshold": "tma_info_frontend_ipdsb_miss_ret < 50",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per speculative Unknown Branch Misprediction (BAClear) (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / cpu_core@BACLEARS.ANY@",
+ "MetricGroup": "Fed",
+ "MetricName": "tma_info_frontend_ipunknown_branch",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache true code cacheline misses per kilo instruction",
+ "MetricExpr": "1e3 * cpu_core@FRONTEND_RETIRED.L2_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "IcMiss",
+ "MetricName": "tma_info_frontend_l2mpki_code",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache speculative code cacheline misses per kilo instruction",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.CODE_RD_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "IcMiss",
+ "MetricName": "tma_info_frontend_l2mpki_code_all",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of Uops delivered by the LSD (Loop Stream Detector; aka Loop Cache)",
+ "MetricExpr": "cpu_core@LSD.UOPS@ / cpu_core@UOPS_ISSUED.ANY@",
+ "MetricGroup": "Fed;LSD",
+ "MetricName": "tma_info_frontend_lsd_coverage",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to retired operations that invoke the Microcode Sequencer",
+ "MetricExpr": "cpu_core@FRONTEND_RETIRED.MS_FLOWS@ * cpu_core@FRONTEND_RETIRED.MS_FLOWS@R / tma_info_thread_clks",
+ "MetricGroup": "Fed;FetchLat;MicroSeq",
+ "MetricName": "tma_info_frontend_ms_latency_ret",
+ "MetricThreshold": "tma_info_frontend_ms_latency_ret > 0.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.NEAR_TAKEN@ / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection",
+ "MetricExpr": "cpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES@ / cpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES\\,cmask\\=1\\,edge@",
+ "MetricGroup": "Fed",
+ "MetricName": "tma_info_frontend_unknown_branch_cost",
+ "PublicDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection. See Unknown_Branches node.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to retired branches who got branch address clears",
+ "MetricExpr": "cpu_core@FRONTEND_RETIRED.UNKNOWN_BRANCH@ * cpu_core@FRONTEND_RETIRED.UNKNOWN_BRANCH@R / tma_info_thread_clks",
+ "MetricGroup": "Fed;FetchLat",
+ "MetricName": "tma_info_frontend_unknown_branches_ret",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Branch instructions per taken branch.",
+ "MetricExpr": "cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@",
+ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "tma_info_inst_mix_bptkbranch",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total number of retired Instructions",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Summary;TmaL1;tma_L1_group",
+ "MetricName": "tma_info_inst_mix_instructions",
+ "PublicDescription": "Total number of retired Instructions. Sample with: INST_RETIRED.PREC_DIST",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_OPS_RETIRED.SCALAR@ + cpu_core@FP_ARITH_OPS_RETIRED.VECTOR@)",
+ "MetricGroup": "Flops;InsType",
+ "MetricName": "tma_info_inst_mix_iparith",
+ "MetricThreshold": "tma_info_inst_mix_iparith < 10",
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_OPS_RETIRED.128B_PACKED_DOUBLE@ + cpu_core@FP_ARITH_OPS_RETIRED.128B_PACKED_SINGLE@)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx128",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_OPS_RETIRED.256B_PACKED_DOUBLE@ + cpu_core@FP_ARITH_OPS_RETIRED.256B_PACKED_SINGLE@)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx256",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@FP_ARITH_OPS_RETIRED.SCALAR_DOUBLE@",
+ "MetricGroup": "Flops;FpScalar;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_dp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@FP_ARITH_OPS_RETIRED.SCALAR_SINGLE@",
+ "MetricGroup": "Flops;FpScalar;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_sp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.ALL_BRANCHES@",
+ "MetricGroup": "Branches;Fed;InsType",
+ "MetricName": "tma_info_inst_mix_ipbranch",
+ "MetricThreshold": "tma_info_inst_mix_ipbranch < 8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.NEAR_CALL@",
+ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "tma_info_inst_mix_ipcall",
+ "MetricThreshold": "tma_info_inst_mix_ipcall < 200",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / (cpu_core@FP_ARITH_OPS_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_OPS_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_OPS_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_OPS_RETIRED.256B_PACKED_SINGLE@)",
+ "MetricGroup": "Flops;InsType",
+ "MetricName": "tma_info_inst_mix_ipflop",
+ "MetricThreshold": "tma_info_inst_mix_ipflop < 10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per Load (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@MEM_INST_RETIRED.ALL_LOADS@",
+ "MetricGroup": "InsType",
+ "MetricName": "tma_info_inst_mix_ipload",
+ "MetricThreshold": "tma_info_inst_mix_ipload < 3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per PAUSE (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / cpu_core@CPU_CLK_UNHALTED.PAUSE_INST@",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_ippause",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per Store (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@",
+ "MetricGroup": "InsType",
+ "MetricName": "tma_info_inst_mix_ipstore",
+ "MetricThreshold": "tma_info_inst_mix_ipstore < 8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@MEM_INST_RETIRED.ALL_SWPF@",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_inst_mix_ipswpf",
+ "MetricThreshold": "tma_info_inst_mix_ipswpf < 100",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per taken branch",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@",
+ "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
+ "MetricName": "tma_info_inst_mix_iptb",
+ "MetricThreshold": "tma_info_inst_mix_iptb < 17",
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_fb_hpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@L1D.L1_REPLACEMENT@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the Level 0 within L1D cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@L1D.L0_REPLACEMENT@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1dl0_cache_fill_bw",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L0 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * (cpu_core@MEM_LOAD_RETIRED.L1_MISS@ + cpu_core@MEM_LOAD_RETIRED.L1_HIT_L1@) / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l1dl0_mpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l1mpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.ALL_DEMAND_DATA_RD@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l1mpki_load",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@L2_LINES_IN.ALL@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1e3 * (cpu_core@L2_RQSTS.REFERENCES@ - cpu_core@L2_RQSTS.MISS@) / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2hpki_all",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.DEMAND_DATA_RD_HIT@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2hpki_load",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L2_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Backend;CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2mpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_all",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.DEMAND_DATA_RD_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2mpki_load",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * cpu_core@L2_RQSTS.RFO_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@OFFCORE_REQUESTS.ALL_REQUESTS@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_l3_cache_access_bw",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * cpu_core@LONGEST_LAT_CACHE.MISS@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_RETIRED.L3_MISS@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average Parallel L2 cache miss data reads",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DATA_RD@ / cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD@",
+ "MetricGroup": "Memory_BW;Offcore",
+ "MetricName": "tma_info_memory_latency_data_l2_mlp",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average Latency for L2 cache miss demand Loads",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD@ / cpu_core@OFFCORE_REQUESTS.DEMAND_DATA_RD@",
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average Parallel L2 cache miss demand Loads",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD@ / cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=1@",
+ "MetricGroup": "Memory_BW;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_mlp",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average Latency for L3 cache miss demand Loads",
+ "MetricExpr": "cpu_core@OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD@ / cpu_core@OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD@",
+ "MetricGroup": "Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l3_miss_latency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricExpr": "cpu_core@L1D_PENDING.LOAD@ / cpu_core@L1D_MISS.LOAD@",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "\"Bus lock\" per kilo instruction",
+ "MetricExpr": "1e3 * cpu_core@SQ_MISC.BUS_LOCK@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_bus_lock_pki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Un-cacheable retired load per kilo instruction",
+ "MetricExpr": "1e3 * cpu_core@MEM_LOAD_MISC_RETIRED.UC@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_uc_load_pki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricExpr": "cpu_core@L1D_PENDING.LOAD@ / cpu_core@L1D_PENDING.LOAD_CYCLES@",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Rate of L2 HW prefetched lines that were not used by demand accesses",
+ "MetricExpr": "cpu_core@L2_LINES_OUT.USELESS_HWPF@ / (cpu_core@L2_LINES_OUT.SILENT@ + cpu_core@L2_LINES_OUT.NON_SILENT@)",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_memory_prefetches_useless_hwpf",
+ "MetricThreshold": "tma_info_memory_prefetches_useless_hwpf > 0.15",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * cpu_core@ITLB_MISSES.WALK_COMPLETED@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Fed;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_code_stlb_mpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to STLB misses by demand loads",
+ "MetricExpr": "cpu_core@MEM_INST_RETIRED.STLB_MISS_LOADS@ * cpu_core@MEM_INST_RETIRED.STLB_MISS_LOADS@R / tma_info_thread_clks",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_load_stlb_miss_ret",
+ "MetricThreshold": "tma_info_memory_tlb_load_stlb_miss_ret > 0.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_load_stlb_mpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
+ "MetricExpr": "(cpu_core@ITLB_MISSES.WALK_PENDING@ + cpu_core@DTLB_LOAD_MISSES.WALK_PENDING@ + cpu_core@DTLB_STORE_MISSES.WALK_PENDING@) / (4 * tma_info_thread_clks)",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_page_walks_utilization",
+ "MetricThreshold": "tma_info_memory_tlb_page_walks_utilization > 0.5",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to STLB misses by demand stores",
+ "MetricExpr": "cpu_core@MEM_INST_RETIRED.STLB_MISS_STORES@ * cpu_core@MEM_INST_RETIRED.STLB_MISS_STORES@R / tma_info_thread_clks",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_store_stlb_miss_ret",
+ "MetricThreshold": "tma_info_memory_tlb_store_stlb_miss_ret > 0.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED@ / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_store_stlb_mpki",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from DSB per cycle",
+ "MetricExpr": "cpu_core@IDQ.DSB_UOPS@ / cpu_core@IDQ.DSB_CYCLES_ANY@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_dsb",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from LSD per cycle",
+ "MetricExpr": "cpu_core@LSD.UOPS@ / cpu_core@LSD.CYCLES_ACTIVE@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_lsd",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MITE per cycle",
+ "MetricExpr": "cpu_core@IDQ.MITE_UOPS@ / cpu_core@IDQ.MITE_CYCLES_ANY@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_mite",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MS per cycle",
+ "MetricExpr": "cpu_core@IDQ.MS_UOPS@ / cpu_core@IDQ.MS_UOPS\\,cmask\\=1@",
+ "MetricGroup": "Fed;FetchLat;MicroSeq",
+ "MetricName": "tma_info_pipeline_fetch_ms",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per a microcode Assist invocation",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@ASSISTS.ANY@",
+ "MetricGroup": "MicroSeq;Pipeline;Ret;Retire",
+ "MetricName": "tma_info_pipeline_ipassist",
+ "MetricThreshold": "tma_info_pipeline_ipassist < 100e3",
+ "PublicDescription": "Instructions per a microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate)",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu_core@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+ "MetricGroup": "Pipeline;Ret",
+ "MetricName": "tma_info_pipeline_retire",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Estimated fraction of retirement-cycles dealing with repeat instructions",
+ "MetricExpr": "cpu_core@INST_RETIRED.REP_ITERATION@ / cpu_core@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+ "MetricGroup": "MicroSeq;Pipeline;Ret",
+ "MetricName": "tma_info_pipeline_strings_cycles",
+ "MetricThreshold": "tma_info_pipeline_strings_cycles > 0.1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of cycles the processor is waiting yet unhalted; covering legacy PAUSE instruction, as well as C0.1 / C0.2 power-performance optimized states",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.C0_WAIT@ / tma_info_thread_clks",
+ "MetricGroup": "C0Wait",
+ "MetricName": "tma_info_system_c0_wait",
+ "MetricThreshold": "tma_info_system_c0_wait > 0.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc\\,cpu=cpu_core@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Power;Summary",
+ "MetricName": "tma_info_system_core_frequency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
+ "MetricGroup": "HPC;Summary",
+ "MetricName": "tma_info_system_cpu_utilization",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.REF_TSC@ / msr@tsc\\,cpu=cpu_core@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
+ "MetricExpr": "32 * UNC_M_TOTAL_DATA / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
+ "MetricName": "tma_info_system_dram_bw_use",
+ "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_mem_bandwidth, tma_sq_full",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Giga Floating Point Operations Per Second",
+ "MetricExpr": "(cpu_core@FP_ARITH_OPS_RETIRED.SCALAR@ + 2 * cpu_core@FP_ARITH_OPS_RETIRED.128B_PACKED_DOUBLE@ + 4 * cpu_core@FP_ARITH_OPS_RETIRED.4_FLOPS@ + 8 * cpu_core@FP_ARITH_OPS_RETIRED.256B_PACKED_SINGLE@) / 1e9 / tma_info_system_time",
+ "MetricGroup": "Cor;Flops;HPC",
+ "MetricName": "tma_info_system_gflops",
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / cpu_core@BR_INST_RETIRED.FAR_BRANCH@u",
+ "MetricGroup": "Branches;OS",
+ "MetricName": "tma_info_system_ipfarbranch",
+ "MetricThreshold": "tma_info_system_ipfarbranch < 1e6",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction for the Operating System (OS) Kernel mode",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD_P@k / cpu_core@INST_RETIRED.ANY_P@k",
+ "MetricGroup": "OS",
+ "MetricName": "tma_info_system_kernel_cpi",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fraction of cycles spent in the Operating System (OS) Kernel mode",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD_P@k / cpu_core@CPU_CLK_UNHALTED.THREAD@",
+ "MetricGroup": "OS",
+ "MetricName": "tma_info_system_kernel_utilization",
+ "MetricThreshold": "tma_info_system_kernel_utilization > 0.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD_P@ / cpu_core@CPU_CLK_UNHALTED.THREAD@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "power@energy\\-pkg@ * 61 / (tma_info_system_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Socket actual clocks when any core is active on that socket",
+ "MetricExpr": "UNC_CLOCK.SOCKET",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_socket_clks",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Average Frequency Utilization relative nominal frequency",
+ "MetricExpr": "tma_info_thread_clks / cpu_core@CPU_CLK_UNHALTED.REF_TSC@",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_system_turbo_utilization",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.THREAD@",
+ "MetricGroup": "Pipeline",
+ "MetricName": "tma_info_thread_clks",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
+ "MetricExpr": "1 / tma_info_thread_ipc",
+ "MetricGroup": "Mem;Pipeline",
+ "MetricName": "tma_info_thread_cpi",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "The ratio of Executed- by Issued-Uops",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.THREAD@ / cpu_core@UOPS_ISSUED.ANY@",
+ "MetricGroup": "Cor;Pipeline",
+ "MetricName": "tma_info_thread_execute_per_issue",
+ "PublicDescription": "The ratio of Executed- by Issued-Uops. Ratio > 1 suggests high rate of uop micro-fusions. Ratio < 1 suggest high rate of \"execute\" at rename stage.",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
+ "MetricExpr": "cpu_core@INST_RETIRED.ANY@ / tma_info_thread_clks",
+ "MetricGroup": "Ret;Summary",
+ "MetricName": "tma_info_thread_ipc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
+ "MetricExpr": "cpu_core@TOPDOWN.SLOTS@",
+ "MetricGroup": "TmaL1;tma_L1_group",
+ "MetricName": "tma_info_thread_slots",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops Per Instruction",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu_core@INST_RETIRED.ANY@",
+ "MetricGroup": "Pipeline;Ret;Retire",
+ "MetricName": "tma_info_thread_uoppi",
+ "MetricThreshold": "tma_info_thread_uoppi > 1.05",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops per taken branch",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu_core@BR_INST_RETIRED.NEAR_TAKEN@",
+ "MetricGroup": "Branches;Fed;FetchBW",
+ "MetricName": "tma_info_thread_uptb",
+ "MetricThreshold": "tma_info_thread_uptb < 12",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Integer Divider unit was active.",
+ "MetricExpr": "tma_divider - tma_fp_divider",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_int_divider",
+ "MetricThreshold": "tma_int_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents overall Integer (Int) select operations fraction the CPU has executed (retired)",
+ "MetricExpr": "tma_int_vector_128b + tma_int_vector_256b",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_int_operations",
+ "MetricThreshold": "tma_int_operations > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents overall Integer (Int) select operations fraction the CPU has executed (retired). Vector/Matrix Int operations and shuffles are counted. Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
+ "MetricExpr": "cpu_core@INT_VEC_RETIRED.128BIT@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P",
+ "MetricName": "tma_int_vector_128b",
+ "MetricThreshold": "tma_int_vector_128b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
+ "MetricExpr": "cpu_core@INT_VEC_RETIRED.256BIT@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P",
+ "MetricName": "tma_int_vector_256b",
+ "MetricThreshold": "tma_int_vector_256b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+ "MetricExpr": "cpu_core@ICACHE_TAG.STALLS@ / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_itlb_misses",
+ "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
+ "MetricExpr": "cpu_core@MEMORY_STALLS.L1@ / tma_info_thread_clks",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricName": "tma_l1_bound",
+ "MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit Level 1 after missing Level 0 within the L1D cache.",
+ "MetricExpr": "cpu_core@MEM_LOAD_RETIRED.L1_HIT_L1@ * min(cpu_core@MEM_LOAD_RETIRED.L1_HIT_L1@R, 9) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_capacity",
+ "MetricThreshold": "tma_l1_latency_capacity > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache",
+ "MetricExpr": "4 * cpu_core@DEPENDENT_LOADS.ANY\\,cmask\\=1@ / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_dependency",
+ "MetricThreshold": "tma_l1_latency_dependency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache. The short latency of the L1D cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+ "MetricExpr": "cpu_core@MEMORY_STALLS.L2@ / tma_info_thread_clks",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_l2_bound",
+ "MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited)",
+ "MetricExpr": "cpu_core@MEM_LOAD_RETIRED.L2_HIT@ * min(cpu_core@MEM_LOAD_RETIRED.L2_HIT@R, 3 * tma_info_system_core_frequency) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
+ "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_l2_bound_group",
+ "MetricName": "tma_l2_hit_latency",
+ "MetricThreshold": "tma_l2_hit_latency > 0.05 & (tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited). Avoiding L1 cache misses (i.e. L1 misses/L2 hits) will improve the latency. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
+ "MetricExpr": "cpu_core@MEMORY_STALLS.L3@ / tma_info_thread_clks",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_l3_bound",
+ "MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "MetricExpr": "cpu_core@MEM_LOAD_RETIRED.L3_HIT@ * min(cpu_core@MEM_LOAD_RETIRED.L3_HIT@R, 9 * tma_info_system_core_frequency) * (1 + cpu_core@MEM_LOAD_RETIRED.FB_HIT@ / cpu_core@MEM_LOAD_RETIRED.L1_MISS@ / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricName": "tma_l3_hit_latency",
+ "MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_bottleneck_data_cache_memory_latency, tma_mem_latency",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+ "MetricExpr": "cpu_core@DECODE.LCP@ / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
+ "MetricName": "tma_lcp",
+ "MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation)",
+ "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)",
+ "MetricGroup": "Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group",
+ "MetricName": "tma_light_operations",
+ "MetricThreshold": "tma_light_operations > 0.6",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
+ "MetricExpr": "cpu_core@UOPS_DISPATCHED.LOAD@ / (3 * tma_info_thread_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_load_op_utilization",
+ "MetricThreshold": "tma_load_op_utilization > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3_10",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) DTLB was missed by load accesses, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_dtlb_load - tma_load_stlb_miss)",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group",
+ "MetricName": "tma_load_stlb_hit",
+ "MetricThreshold": "tma_load_stlb_hit > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by load accesses, performing a hardware page walk",
+ "MetricExpr": "cpu_core@DTLB_LOAD_MISSES.WALK_ACTIVE@ / tma_info_thread_clks",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group",
+ "MetricName": "tma_load_stlb_miss",
+ "MetricThreshold": "tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@ / (cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_1g",
+ "MetricThreshold": "tma_load_stlb_miss_1g > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ / (cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_2m",
+ "MetricThreshold": "tma_load_stlb_miss_2m > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ / (cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_LOAD_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_4k",
+ "MetricThreshold": "tma_load_stlb_miss_4k > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ * cpu_core@MEM_INST_RETIRED.LOCK_LOADS@R / tma_info_thread_clks",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricName": "tma_lock_latency",
+ "MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit",
+ "MetricExpr": "cpu_core@LSD.UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / tma_info_thread_clks / 2",
+ "MetricGroup": "FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_lsd",
+ "MetricThreshold": "tma_lsd > 0.15 & tma_fetch_bandwidth > 0.2",
+ "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit. LSD typically does well sustaining Uop supply. However; in some rare cases; optimal uop-delivery could not be reached for small loops whose size (in terms of number of uops) does not suit well the LSD structure.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
+ "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricName": "tma_machine_clears",
+ "MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_bottleneck_memory_synchronization, tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
+ "MetricExpr": "min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricName": "tma_mem_bandwidth",
+ "MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
+ "MetricExpr": "min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD@) / tma_info_thread_clks - tma_mem_bandwidth",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricName": "tma_mem_latency",
+ "MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_latency, tma_l3_hit_latency",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
+ "MetricExpr": "cpu_core@topdown\\-mem\\-bound@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_memory_bound",
+ "MetricThreshold": "tma_memory_bound > 0.2 & tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2",
+ "PublicDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to LFENCE Instructions.",
+ "MetricExpr": "13 * cpu_core@MISC2_RETIRED.LFENCE@ / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_memory_fence",
+ "MetricThreshold": "tma_memory_fence > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses.",
+ "MetricExpr": "tma_light_operations * cpu_core@MEM_UOP_RETIRED.ANY@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_memory_operations",
+ "MetricThreshold": "tma_memory_operations > 0.1 & tma_light_operations > 0.6",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+ "MetricExpr": "cpu_core@UOPS_RETIRED.MS@ / tma_info_thread_slots",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS",
+ "MetricName": "tma_microcode_sequencer",
+ "MetricThreshold": "tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1",
+ "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
+ "MetricExpr": "tma_branch_mispredicts / tma_bad_speculation * cpu_core@INT_MISC.CLEAR_RESTEER_CYCLES@ / tma_info_thread_clks",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricName": "tma_mispredicts_resteers",
+ "MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)",
+ "MetricExpr": "(cpu_core@IDQ.MITE_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / 2 + cpu_core@IDQ.MITE_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.STARVATION_CYCLES@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",
+ "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_mite",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
+ "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles)",
+ "MetricExpr": "160 * cpu_core@ASSISTS.SSE_AVX_MIX@ / tma_info_thread_clks",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group",
+ "MetricName": "tma_mixing_vectors",
+ "MetricThreshold": "tma_mixing_vectors > 0.05",
+ "PublicDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the Microcode Sequencer (MS) unit - see Microcode_Sequencer node for details.",
+ "MetricExpr": "cpu_core@IDQ.MS_CYCLES_ANY@ / tma_info_thread_clks / 1.8",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_ms",
+ "MetricThreshold": "tma_ms > 0.05 & tma_fetch_bandwidth > 0.2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+ "MetricExpr": "3 * cpu_core@IDQ.MS_SWITCHES@ / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO",
+ "MetricName": "tma_ms_switches",
+ "MetricThreshold": "tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused",
+ "MetricExpr": "tma_light_operations * (cpu_core@BR_INST_RETIRED.ALL_BRANCHES@ - cpu_core@INST_RETIRED.BR_FUSED@) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_non_fused_branches",
+ "MetricThreshold": "tma_non_fused_branches > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
+ "MetricExpr": "tma_light_operations * cpu_core@INST_RETIRED.NOP@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
+ "MetricName": "tma_nop_instructions",
+ "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "max(0, tma_light_operations - (tma_x87_use + (cpu_core@FP_ARITH_OPS_RETIRED.SCALAR@ + cpu_core@FP_ARITH_OPS_RETIRED.VECTOR@) / (tma_retiring * tma_info_thread_slots) + (cpu_core@INT_VEC_RETIRED.ADD_128@ + cpu_core@INT_VEC_RETIRED.VNNI_128@ + cpu_core@INT_VEC_RETIRED.ADD_256@ + cpu_core@INT_VEC_RETIRED.MUL_256@ + cpu_core@INT_VEC_RETIRED.VNNI_256@) / (tma_retiring * tma_info_thread_slots) + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches))",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_other_light_ops",
+ "MetricThreshold": "tma_other_light_ops > 0.3 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
+ "MetricExpr": "max(tma_branch_mispredicts * (1 - cpu_core@BR_MISP_RETIRED.ALL_BRANCHES@ / (cpu_core@INT_MISC.CLEARS_COUNT@ - cpu_core@MACHINE_CLEARS.COUNT@)), 0.0001)",
+ "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_other_mispredicts",
+ "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
+ "MetricExpr": "max(tma_machine_clears * (1 - cpu_core@MACHINE_CLEARS.MEMORY_ORDERING@ / cpu_core@MACHINE_CLEARS.COUNT@), 0.0001)",
+ "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_other_nukes",
+ "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Page Faults",
+ "MetricExpr": "99 * cpu_core@ASSISTS.PAGE_FAULT@ / tma_info_thread_slots",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_page_faults",
+ "MetricThreshold": "tma_page_faults > 0.05",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Page Faults. A Page Fault may apply on first application access to a memory page. Note operating system handling of page faults accounts for the majority of its cost.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "((cpu_core@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ + (cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ + tma_retiring * cpu_core@EXE_ACTIVITY.2_3_PORTS_UTIL@)) / tma_info_thread_clks if cpu_core@ARITH.DIV_ACTIVE@ < cpu_core@CYCLE_ACTIVITY.STALLS_TOTAL@ - cpu_core@EXE_ACTIVITY.BOUND_ON_LOADS@ else (cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ + tma_retiring * cpu_core@EXE_ACTIVITY.2_3_PORTS_UTIL@) / tma_info_thread_clks)",
+ "MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_ports_utilization",
+ "MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
+ "MetricExpr": "cpu_core@EXE_ACTIVITY.EXE_BOUND_0_PORTS@ / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_0",
+ "MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
+ "MetricExpr": "cpu_core@EXE_ACTIVITY.1_PORTS_UTIL@ / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_1",
+ "MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or over oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL. Related metrics: tma_l1_bound",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "cpu_core@EXE_ACTIVITY.2_PORTS_UTIL@ / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_2",
+ "MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "cpu_core@UOPS_EXECUTED.CYCLES_GE_3@ / tma_info_thread_clks",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_3m",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by (indirect) RET instructions.",
+ "MetricExpr": "cpu_core@BR_MISP_RETIRED.RET_COST@ * cpu_core@BR_MISP_RETIRED.RET_COST@R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_ret_mispredicts",
+ "MetricThreshold": "tma_ret_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "cpu_core@topdown\\-retiring@ / (cpu_core@topdown\\-fe\\-bound@ + cpu_core@topdown\\-bad\\-spec@ + cpu_core@topdown\\-retiring@ + cpu_core@topdown\\-be\\-bound@)",
+ "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_retiring",
+ "MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
+ "MetricExpr": "(cpu_core@BE_STALLS.SCOREBOARD@ + cpu_core@CPU_CLK_UNHALTED.C02@) / tma_info_thread_clks",
+ "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
+ "MetricName": "tma_serializing_operation",
+ "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: PARTIAL_RAT_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer)",
+ "MetricExpr": "tma_light_operations * cpu_core@INT_VEC_RETIRED.SHUFFLES@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "HPC;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
+ "MetricName": "tma_shuffles_256b",
+ "MetricThreshold": "tma_shuffles_256b > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer). Shuffles may incur slow cross \"vector lane\" data transfers.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
+ "MetricExpr": "cpu_core@CPU_CLK_UNHALTED.PAUSE@ / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_slow_pause",
+ "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: CPU_CLK_UNHALTED.PAUSE_INST",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary",
+ "MetricExpr": "cpu_core@MEM_INST_RETIRED.SPLIT_LOADS@ * min(cpu_core@MEM_INST_RETIRED.SPLIT_LOADS@R, tma_info_memory_load_miss_real_latency) / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_split_loads",
+ "MetricThreshold": "tma_split_loads > 0.3",
+ "PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents rate of split store accesses",
+ "MetricExpr": "cpu_core@MEM_INST_RETIRED.SPLIT_STORES@ * min(cpu_core@MEM_INST_RETIRED.SPLIT_STORES@R, 1) / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group",
+ "MetricName": "tma_split_stores",
+ "MetricThreshold": "tma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
+ "MetricExpr": "(cpu_core@XQ.FULL@ + cpu_core@L1D_MISS.L2_STALLS@) / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricName": "tma_sq_full",
+ "MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write",
+ "MetricExpr": "cpu_core@EXE_ACTIVITY.BOUND_ON_STORES@ / tma_info_thread_clks",
+ "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_store_bound",
+ "MetricThreshold": "tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates clocks wasted due to loads blocked due to unknown store address (did not do memory disambiguation) or due to unknown store data",
+ "MetricExpr": "7 * cpu_core@LD_BLOCKS.STORE_EARLY\\,cmask\\=1@ / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_store_early_blk",
+ "MetricThreshold": "tma_store_early_blk > 0.2",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
+ "MetricExpr": "13 * cpu_core@LD_BLOCKS.STORE_FORWARD@ / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_store_fwd_blk",
+ "MetricThreshold": "tma_store_fwd_blk > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline; a load can avoid waiting for memory if a prior in-flight store is writing the data that the load wants to read (store forwarding process). However; in some cases the load may be blocked for a significant time pending the store forward. For example; when the prior store is writing a smaller region than the load is reading.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+ "MetricExpr": "(cpu_core@MEM_STORE_RETIRED.L2_HIT@ * 10 * (1 - cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@) + (1 - cpu_core@MEM_INST_RETIRED.LOCK_LOADS@ / cpu_core@MEM_INST_RETIRED.ALL_STORES@) * min(cpu_core@CPU_CLK_UNHALTED.THREAD@, cpu_core@OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO@)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricName": "tma_store_latency",
+ "MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations",
+ "MetricExpr": "(cpu_core@UOPS_DISPATCHED.STD@ + cpu_core@UOPS_DISPATCHED.STA@) / (7 * tma_info_thread_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_store_op_utilization",
+ "MetricThreshold": "tma_store_op_utilization > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations. Sample with: UOPS_DISPATCHED.PORT_7_8",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the TLB was missed by store accesses, hitting in the second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_dtlb_store - tma_store_stlb_miss)",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group",
+ "MetricName": "tma_store_stlb_hit",
+ "MetricThreshold": "tma_store_stlb_hit > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk",
+ "MetricExpr": "cpu_core@DTLB_STORE_MISSES.WALK_ACTIVE@ / tma_info_thread_clks",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group",
+ "MetricName": "tma_store_stlb_miss",
+ "MetricThreshold": "tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@ / (cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_1g",
+ "MetricThreshold": "tma_store_stlb_miss_1g > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ / (cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_2m",
+ "MetricThreshold": "tma_store_stlb_miss_2m > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ / (cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_4K@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M@ + cpu_core@DTLB_STORE_MISSES.WALK_COMPLETED_1G@)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_4k",
+ "MetricThreshold": "tma_store_stlb_miss_4k > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores",
+ "MetricExpr": "9 * cpu_core@OCR.STREAMING_WR.ANY_RESPONSE@ / tma_info_thread_clks",
+ "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group",
+ "MetricName": "tma_streaming_stores",
+ "MetricThreshold": "tma_streaming_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE. Related metrics: tma_fb_full",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
+ "MetricExpr": "cpu_core@INT_MISC.UNKNOWN_BRANCH_CYCLES@ / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricName": "tma_unknown_branches",
+ "MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+ "MetricExpr": "tma_retiring * cpu_core@UOPS_EXECUTED.X87@ / cpu_core@UOPS_EXECUTED.THREAD@",
+ "MetricGroup": "Compute;TopdownL4;tma_L4_group;tma_fp_arith_group",
+ "MetricName": "tma_x87_use",
+ "MetricThreshold": "tma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
+ "ScaleUnit": "100%",
+ "Unit": "cpu_core"
+ }
+]
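A minimal usage sketch, assuming these Arrow Lake metric definitions are built into perf's pmu-events tables: each metric above should then be addressable by its MetricName or by one of its MetricGroup entries via perf stat -M, with --cputype selecting the hybrid PMU matching the "Unit" field (cpu_core here). The workload name ./my_workload is a placeholder.

    # Level-2 TMA breakdown on the performance cores, system-wide for one second
    perf stat -M TopdownL2 --cputype core -a -- sleep 1

    # Evaluate a single metric defined above, e.g. tma_memory_bound, for one program
    perf stat -M tma_memory_bound --cputype core -- ./my_workload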
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/cache.json b/tools/perf/pmu-events/arch/x86/arrowlake/cache.json
new file mode 100644
index 000000000000..30dd56b487ba
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/cache.json
@@ -0,0 +1,1744 @@
+[
+ {
+ "BriefDescription": "Counts the number of request that were not accepted into the L2Q because the L2Q is FULL.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x31",
+ "EventName": "CORE_REJECT_L2Q.ANY",
+ "PublicDescription": "Counts the number of (demand and L1 prefetchers) core requests rejected by the L2Q due to a full or nearly full w condition which likely indicates back pressure from L2Q. It also counts requests that would have gone directly to the XQ, but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to insure fairness between cores, or to delay a cores dirty eviction when the address conflicts incoming external snoops. (Note that L2 prefetcher requests that are dropped are not counted by this event.)",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of L1D cacheline (dirty) evictions caused by load misses, stores, and prefetches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x51",
+ "EventName": "DL1.DIRTY_EVICTION",
+ "PublicDescription": "Counts the number of L1D cacheline (dirty) evictions caused by load misses, stores, and prefetches. Does not count evictions or dirty writebacks caused by snoops. Does not count a replacement unless a (dirty) line was written back.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines replaced in L0 data cache.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x51",
+ "EventName": "L1D.L0_REPLACEMENT",
+ "PublicDescription": "Counts L0 data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cachelines replaced into the L1 d-cache. Successful replacements only (not blocked) and exclude WB-miss case",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x51",
+ "EventName": "L1D.L1_REPLACEMENT",
+ "PublicDescription": "Counts cachelines replaced into the L1 d-cache.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cachelines replaced into the L0 and L1 d-cache. Successful replacements only (not blocked) and exclude WB-miss case",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x51",
+ "EventName": "L1D.REPLACEMENT",
+ "PublicDescription": "Counts cachelines replaced into the L0 and L1 d-cache.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x49",
+ "EventName": "L1D_MISS.FB_FULL",
+ "PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x49",
+ "EventName": "L1D_MISS.L2_STALLS",
+ "PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of demand requests that missed L1D cache",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x49",
+ "EventName": "L1D_MISS.LOAD",
+ "PublicDescription": "Count occurrences (rising-edge) of DCACHE_PENDING sub-event0. Impl. sends per-port binary inc-bit the occupancy increases* (at FB alloc or promotion).",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of L1D misses that are outstanding",
+ "Counter": "2",
+ "EventCode": "0x48",
+ "EventName": "L1D_PENDING.LOAD",
+ "PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
+ "CounterMask": "1",
+ "EventCode": "0x48",
+ "EventName": "L1D_PENDING.LOAD_CYCLES",
+ "PublicDescription": "Counts duration of L1D miss outstanding in cycles.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.ALL",
+ "PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1f",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Exclusive state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.E",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Exclusive state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Forward state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.F",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Forward state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Modified state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.M",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Modified state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Shared state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.S",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Shared state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Modified cache lines that are evicted by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.NON_SILENT",
+ "PublicDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines are in Modified state. Modified lines are written back to L3",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 cache lines that are evicted due to an L2 cache fill",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.NON_SILENT",
+ "PublicDescription": "Counts the number of L2 cache lines that are evicted due to an L2 cache fill. Increments on the core that brought the line in originally.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.SILENT",
+ "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache. These lines are typically in Shared or Exclusive state. A non-threaded event.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 cache lines that are silently dropped due to an L2 cache fill",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.SILENT",
+ "PublicDescription": "Counts the number of L2 cache lines that are silently dropped due to an L2 cache fill. Increments on the core that brought the line in originally.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cache lines that have been L2 hardware prefetched but not used by demand accesses",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.USELESS_HWPF",
+ "PublicDescription": "Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cache",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of demand and prefetch transactions that the External Queue (XQ) rejects due to a full or near full condition.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x30",
+ "EventName": "L2_REJECT_XQ.ANY",
+ "PublicDescription": "Counts the number of demand and prefetch transactions that the External Queue (XQ) rejects due to a full or near full condition which likely indicates back pressure from the IDI link. The XQ may reject transactions from the L2Q (non-cacheable requests), BBL (L2 misses) and WOB (L2 write-back victims).",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "All accesses to L2 cache [This event is alias to L2_RQSTS.REFERENCES, L2_RQSTS.ANY]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.ALL",
+ "PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.REFERENCES, L2_RQSTS.ANY]",
+ "SampleAfterValue": "200003",
+ "UMask": "0xff",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 Cache Accesses that resulted in a Hit from a front door request only (does not include rejects or recycles), per core event",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Read requests with true-miss in L2 cache [This event is alias to L2_RQSTS.MISS]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.MISS",
+ "PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.MISS]",
+ "SampleAfterValue": "200003",
+ "UMask": "0x3f",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of total L2 Cache Accesses that resulted in a Miss from a front door request only (does not include rejects or recycles), per core event",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 Cache Accesses that miss the L2 and get BBL reject short and long rejects, per core event",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.REJECTS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.ALL_CODE_RD",
+ "PublicDescription": "Counts the total number of L2 code requests.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xe4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Demand Data Read access L2 cache",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
+ "PublicDescription": "Counts Demand Data Read requests accessing the L2 cache. These requests may hit or miss L2 cache. True-miss exclude misses that were merged with ongoing L2 misses. An access is counted once.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xe1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "All accesses to L2 cache [This event is alias to L2_RQSTS.REFERENCES, L2_REQUEST.ALL]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.ANY",
+ "PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.REFERENCES, L2_REQUEST.ALL]",
+ "SampleAfterValue": "200003",
+ "UMask": "0xff",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.CODE_RD_HIT",
+ "PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x44",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.CODE_RD_MISS",
+ "PublicDescription": "Counts L2 cache misses when fetching instructions.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x24",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
+ "PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x41",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Demand Data Read miss L2 cache",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
+ "PublicDescription": "Counts demand Data Read requests with true-miss in the L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. An access is counted once.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x21",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Read requests with true-miss in L2 cache [This event is alias to L2_REQUEST.MISS]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.MISS",
+ "PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.MISS]",
+ "SampleAfterValue": "200003",
+ "UMask": "0x3f",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "All accesses to L2 cache [This event is alias to L2_REQUEST.ALL,L2_RQSTS.ANY]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.REFERENCES",
+ "PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.ALL,L2_RQSTS.ANY]",
+ "SampleAfterValue": "200003",
+ "UMask": "0xff",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.RFO_HIT",
+ "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x42",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.RFO_MISS",
+ "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x22",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x42",
+ "EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
+ "PublicDescription": "This event counts the number of cycles when the L1D is locked.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2e",
+ "EventName": "LONGEST_LAT_CACHE.MISS",
+ "PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x41",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2e",
+ "EventName": "LONGEST_LAT_CACHE.REFERENCE",
+ "PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4f",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x2e",
+ "EventName": "LONGEST_LAT_CACHE.REFERENCE",
+ "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4f",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7f",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.L2_HIT",
+ "PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the L2 cache. Includes L2 Hit resulting from and L1D eviction of another core in the same module which is longer latency than a typical L2 hit.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.L2_HIT",
+ "PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the L2 cache.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which missed in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.L2_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7e",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to an icache or itlb miss which hit in the LLC.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.LLC_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to an L1 demand load miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7f",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to an L1 demand load miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7f",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.L2_HIT",
+ "PublicDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the L2 cache. Includes L2 Hit resulting from and L1D eviction of another core in the same module which is longer latency than a typical L2 hit.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.L2_HIT",
+ "PublicDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the L2 cache.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which missed in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.L2_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7e",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which missed in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.L2_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7e",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a demand load miss which hit in the LLC.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.LLC_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a demand load miss which missed all the local caches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.LLC_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x78",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a demand load miss which missed all the local caches. If the core has access to an L3 cache, an LLC miss refers to an L3 cache miss, otherwise it is an L2 cache miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.LLC_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x78",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled to a store buffer full condition",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.SBFULL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts all retired load instructions.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.ALL_LOADS",
+ "PublicDescription": "Counts Instructions with at least one architecturally visible load retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x81",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.ALL_STORES",
+ "PublicDescription": "Counts all retired store instructions. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x82",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired software prefetch instructions.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.ALL_SWPF",
+ "PublicDescription": "Counts all retired software prefetch instructions. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x84",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.ANY",
+ "PublicDescription": "Counts all retired memory instructions - loads and stores. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x87",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.LOCK_LOADS",
+ "PublicDescription": "Counts retired load instructions with locked access. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x21",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
+ "PublicDescription": "Counts retired load instructions that split across a cacheline boundary. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x41",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.SPLIT_STORES",
+ "PublicDescription": "Counts retired store instructions that split across a cacheline boundary. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x42",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that hit the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_HIT_ANY",
+ "PublicDescription": "Number of retired instructions with a clean hit in the 2nd-level TLB (STLB). Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0xf",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions that hit the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_HIT_LOADS",
+ "PublicDescription": "Number of retired load instructions with a clean hit in the 2nd-level TLB (STLB). Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x9",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired store instructions that hit the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_HIT_STORES",
+ "PublicDescription": "Number of retired store instructions that hit in the 2nd-level TLB (STLB). Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0xa",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired SWPF instructions that hit the STLB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_HIT_SWPF",
+ "PublicDescription": "Number of retired SWPF instructions that hit in the 2nd-level TLB (STLB). Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_ANY",
+ "PublicDescription": "Retired instructions that miss the STLB. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x17",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
+ "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x11",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
+ "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x12",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired SWPF instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_SWPF",
+ "PublicDescription": "Number of retired SWPF instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x14",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were a cross-core Snoop hits and forwards data from an in on-package core cache (induced by NI$)",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
+ "PublicDescription": "Counts retired load instructions whose data sources were a cross-core Snoop hits and forwards data from an in on-package core cache (induced by NI$) Available PDIST counters: 0,1",
+ "SampleAfterValue": "20011",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3, Hit-with-FWD is normally excluded.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM",
+ "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3, Hit-with-FWD is normally excluded. Available PDIST counters: 0,1",
+ "SampleAfterValue": "20011",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
+ "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache. Available PDIST counters: 0,1",
+ "SampleAfterValue": "20011",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
+ "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache. Available PDIST counters: 0,1",
+ "SampleAfterValue": "20011",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions with at least 1 uncacheable load or lock.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd4",
+ "EventName": "MEM_LOAD_MISC_RETIRED.UC",
+ "PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock). Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.FB_HIT",
+ "PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L1_HIT",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x101",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts retired load instructions with at least one uop that hit in the Level 0 of the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L1_HIT_L0",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the Level 0 of the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts retired load instructions with at least one uop that hit in the Level 1 of the L1 data cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L1_HIT_L1",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the Level 1 of the L1 data cache. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L1_MISS",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache. Available PDIST counters: 0,1",
+ "SampleAfterValue": "200003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L2_HIT",
+ "PublicDescription": "Counts retired load instructions with L2 cache hits as data sources. Available PDIST counters: 0,1",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L2_MISS",
+ "PublicDescription": "Counts retired load instructions missed L2 cache as data sources. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100021",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L3_HIT",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100021",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L3_MISS",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache. Available PDIST counters: 0,1",
+ "SampleAfterValue": "50021",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit the L1 data cache",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit the L1 data cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the L1 data cache",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the L1 data cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x40",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit in the L2 cache",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
+ "PublicDescription": "Counts the number of load ops retired that hit in the L2 cache. Includes L2 Hit resulting from and L1D eviction of another core in the same module which is longer latency than a typical L2 hit.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the L2 cache",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x80",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit in the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L3_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1c",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of loads that hit in a write combining buffer (WCB), excluding the first load that caused the WCB to allocate.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.WCB_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of loads that hit in a write combining buffer (WCB), excluding the first load that caused the WCB to allocate.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.WCB_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked for any of the following reasons: load buffer, store buffer or RSV full.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked for any of the following reasons: load buffer, store buffer or RSV full.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.ALL",
+ "SampleAfterValue": "20003",
+ "UMask": "0x7",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to load buffer full",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.LD_BUF",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to a load buffer full condition.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.LD_BUF",
+ "SampleAfterValue": "20003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to RSV full",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.RSV",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to an RSV full condition.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.RSV",
+ "SampleAfterValue": "20003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to store buffer full",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.ST_BUF",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to a store buffer full condition.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.ST_BUF",
+ "SampleAfterValue": "20003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "MEM_STORE_RETIRED.L2_HIT",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x44",
+ "EventName": "MEM_STORE_RETIRED.L2_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x81",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x81",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of store uops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.ALL_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x82",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of store ops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.ALL_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x82",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_1024",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x400",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x80",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x80",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x10",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x10",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_2048",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x800",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x100",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x100",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x20",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x20",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x4",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x4",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x200",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x200",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x40",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x40",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x8",
+ "SampleAfterValue": "200003",
+ "UMask": "0x5",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x8",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that performed one or more locks",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x21",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that performed one or more locks",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x21",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of memory uops retired that were splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x43",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of memory uops retired that were splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x43",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retired split load uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x41",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retired split load uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x41",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retired split store uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x42",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retired split store uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x42",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of memory uops retired that missed in the second level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x13",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that miss in the second Level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x11",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of store uops retired that miss in the second level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x12",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of stores uops retired same as MEM_UOPS_RETIRED.ALL_STORES",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
+ "SampleAfterValue": "200003",
+ "UMask": "0x6",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of stores uops retired same as MEM_UOPS_RETIRED.ALL_STORES",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Retired memory uops for any access",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe5",
+ "EventName": "MEM_UOP_RETIRED.ANY",
+ "PublicDescription": "Number of retired micro-operations (uops) for load or store memory accesses",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xf",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts writebacks of modified cachelines that hit in the L3 or were snooped from another core's caches.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.COREWB_M.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x7E001E00008",
+ "PublicDescription": "Counts writebacks of modified cachelines that hit in the L3 or were snooped from another core's caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts writebacks of non-modified cachelines that hit in the L3 or were snooped from another core's caches.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.COREWB_NONM.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x7E001E01000",
+ "PublicDescription": "Counts writebacks of non-modified cachelines that hit in the L3 or were snooped from another core's caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x40001E00001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop hit in another cores caches, data forwarding is required as the data is modified. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop hit in another cores caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x20001E00001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop hit in another cores caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x40001E00002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop hit in another cores caches, data forwarding is required as the data is modified. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts all data read, code read, RFO and ITOM requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x7E001E04477",
+ "PublicDescription": "Counts all data read, code read, RFO and ITOM requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Any memory transaction that reached the SQ.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
+ "PublicDescription": "Counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, etc..",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DATA_RD",
+ "PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cacheable and Non-Cacheable code read requests",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
+ "PublicDescription": "Counts both cacheable and Non-Cacheable code read requests.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
+ "PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
+ "PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
+ "PublicDescription": "Counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles with offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where at least 1 outstanding demand data read request is pending.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles with offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
+ "PublicDescription": "Counts the number of offcore outstanding demand rfo Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
+ "PublicDescription": "Counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore, every cycle.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
+ "PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Store Read transactions pending for off-core. Highly correlated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
+ "PublicDescription": "Counts the number of off-core outstanding read-for-ownership (RFO) store transactions every cycle. An RFO transaction is considered to be in the Off-core outstanding state between L2 cache miss and transaction completion.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2c",
+ "EventName": "SQ_MISC.BUS_LOCK",
+ "PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.ANY",
+ "SampleAfterValue": "100003",
+ "UMask": "0xf",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHNTA instructions executed.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.NTA",
+ "PublicDescription": "Counts the number of PREFETCHNTA instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHW instructions executed.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
+ "PublicDescription": "Counts the number of PREFETCHW instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHT0 instructions executed.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.T0",
+ "PublicDescription": "Counts the number of PREFETCHT0 instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.T1_T2",
+ "PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to an icache miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ICACHE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to an icache miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ICACHE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/floating-point.json b/tools/perf/pmu-events/arch/x86/arrowlake/floating-point.json
new file mode 100644
index 000000000000..23a80c526aa1
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/floating-point.json
@@ -0,0 +1,532 @@
+[
+ {
+ "BriefDescription": "Cycles when floating-point divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0xb0",
+ "EventName": "ARITH.FPDIV_ACTIVE",
+ "PublicDescription": "Counts cycles when divide unit is busy executing divide or square root operations. Accounts for floating-point operations only.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles when any of the floating point dividers are active.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.FPDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts all microcode FP assists.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.FP",
+ "PublicDescription": "Counts all microcode Floating Point assists.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "ASSISTS.SSE_AVX_MIX",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.SSE_AVX_MIX",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of FP-arith-uops dispatched on 1st VEC port (port 0). FP-arith-uops are of type ADD* / SUB* / MUL / FMA* / DPP.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V0",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of FP-arith-uops dispatched on 2nd VEC port (port 1)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V1",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of FP-arith-uops dispatched on 3rd VEC port (port 5)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V2",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of FP-arith-uops dispatched on 4th VEC port",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V3",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.128B_PACKED_DOUBLE",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.128B_PACKED_SINGLE",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.256B_PACKED_DOUBLE",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.256B_PACKED_SINGLE",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.4_FLOPS",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
+ "SampleAfterValue": "100003",
+ "UMask": "0x18",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.SCALAR",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.SCALAR_DOUBLE",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.SCALAR_SINGLE",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event FP_ARITH_OPS_RETIRED.VECTOR",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "Deprecated": "1",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.VECTOR",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3c",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "FP_ARITH_INST_RETIRED.VECTOR_128B [This event is alias to FP_ARITH_OPS_RETIRED.VECTOR_128B]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.VECTOR_128B",
+ "SampleAfterValue": "100003",
+ "UMask": "0xc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "FP_ARITH_INST_RETIRED.VECTOR_256B [This event is alias to FP_ARITH_OPS_RETIRED.VECTOR_256B]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.VECTOR_256B",
+ "SampleAfterValue": "100003",
+ "UMask": "0x30",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.128B_PACKED_DOUBLE",
+ "PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.128B_PACKED_SINGLE",
+ "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.256B_PACKED_DOUBLE",
+ "PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.256B_PACKED_SINGLE",
+ "PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.4_FLOPS",
+ "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x18",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.SCALAR",
+ "PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.SCALAR_DOUBLE",
+ "PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.SCALAR_SINGLE",
+ "PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.VECTOR",
+ "PublicDescription": "Number of any Vector retired FP arithmetic instructions. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3c",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "FP_ARITH_OPS_RETIRED.VECTOR_128B [This event is alias to FP_ARITH_INST_RETIRED.VECTOR_128B]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.VECTOR_128B",
+ "SampleAfterValue": "100003",
+ "UMask": "0xc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "FP_ARITH_OPS_RETIRED.VECTOR_256B [This event is alias to FP_ARITH_INST_RETIRED.VECTOR_256B]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_OPS_RETIRED.VECTOR_256B",
+ "SampleAfterValue": "100003",
+ "UMask": "0x30",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of all types of floating point operations per uop with all default weighting",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of all types of floating point operations per uop with all default weighting",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to FP_FLOPS_RETIRED.FP64]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations that produce 32 bit single precision results",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.FP32",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations that produce 32 bit single precision results [This event is alias to FP_FLOPS_RETIRED.SP]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.FP32",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations that produce 64 bit double precision results",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.FP64",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations that produce 64 bit double precision results [This event is alias to FP_FLOPS_RETIRED.DP]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.FP64",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to FP_FLOPS_RETIRED.FP32]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 128 bit double precision floating point. This may be SSE or AVX.128 operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.128B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the total number of floating point retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.128B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 128 bit single precision floating point. This may be SSE or AVX.128 operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.128B_SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 128 bit single precision floating point. This may be SSE or AVX.128 operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.128B_SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 256 bit double precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.256B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 256 bit double precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.256B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 256 bit single precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.256B_SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a scalar 32bit single precision floating point",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.32B_SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a scalar 32bit single precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.32B_SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a scalar 64 bit double precision floating point",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.64B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a scalar 64 bit double precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.64B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the total number of floating point retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3f",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of uops executed on floating point and vector integer port 0, 1, 2, 3.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "FP_VINT_UOPS_EXECUTED.PRIMARY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1e",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of uops executed on floating point and vector integer store data port.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "FP_VINT_UOPS_EXECUTED.STD",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations retired that required microcode assist.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.FP_ASSIST",
+ "PublicDescription": "Counts the number of floating point operations retired that required microcode assist, which is not a reflection of the number of FP operations, instructions or uops.",
+ "SampleAfterValue": "20003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations retired that required microcode assist.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.FP_ASSIST",
+ "PublicDescription": "Counts the number of floating point operations retired that required microcode assist, which is not a reflection of the number of FP operations, instructions or uops.",
+ "SampleAfterValue": "20003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point divide uops retired (x87 and sse, including x87 sqrt)",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.FPDIV",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point divide uops retired (x87 and sse, including x87 sqrt).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.FPDIV",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_lowpower"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/frontend.json b/tools/perf/pmu-events/arch/x86/arrowlake/frontend.json
new file mode 100644
index 000000000000..db2ef84ca041
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/frontend.json
@@ -0,0 +1,745 @@
+[
+ {
+ "BriefDescription": "Counts the total number of BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe6",
+ "EventName": "BACLEARS.ANY",
+ "PublicDescription": "Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Clears due to Unknown Branches.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x60",
+ "EventName": "BACLEARS.ANY",
+ "PublicDescription": "Number of times the front-end is resteered when it finds a branch instruction in a fetch line. This is called Unknown Branch which occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the total number of BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe6",
+ "EventName": "BACLEARS.ANY",
+ "PublicDescription": "Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x87",
+ "EventName": "DECODE.LCP",
+ "PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles the Microcode Sequencer is busy.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x87",
+ "EventName": "DECODE.MS_BUSY",
+ "SampleAfterValue": "500009",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "DSB-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x61",
+ "EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
+ "PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged with having preceded with frontend bound behavior",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ALL",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged with having preceded with frontend bound behavior",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ALL",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Retired ANT branches",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ANY_ANT",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x9",
+ "PublicDescription": "Always Not Taken (ANT) conditional retired branches (no BTB entry and not mispredicted) Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x1",
+ "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired that are tagged after a branch instruction causes bubbles/empty issue slots due to a baclear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.BRANCH_DETECT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired that are tagged after a branch instruction causes bubbles/empty issue slots due to a baclear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.BRANCH_DETECT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired that are tagged after a branch instruction causes bubbles /empty issue slots due to a btclear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.BRANCH_RESTEER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired that are tagged after a branch instruction causes bubbles /empty issue slots due to a btclear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.BRANCH_RESTEER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged following an ms flow due to the bubble/wasted issue slot from exiting long ms flow",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.CISC",
+ "PublicDescription": "Counts the number of instructions retired that were tagged following an ms flow due to the bubble/wasted issue slot from exiting long ms flow",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged following an ms flow due to the bubble/wasted issue slot from exiting long ms flow",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.CISC",
+ "PublicDescription": "Counts the number of instructions retired that were tagged following an ms flow due to the bubble/wasted issue slot from exiting long ms flow",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged every cycle the decoder is unable to send 4 uops",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.DECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged every cycle the decoder is unable to send 3 uops per cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.DECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.DSB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x11",
+ "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to icache miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ICACHE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to icache miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ICACHE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to ITLB miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ITLB_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced iTLB true miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ITLB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x14",
+ "PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to ITLB miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ITLB_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.L1I_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x12",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.L2_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x13",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x608006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x601006",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x600206",
+ "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x610006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x100206",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x602006",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x600406",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x620006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x604006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x600806",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Mispredicted Retired ANT branches",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.MISP_ANT",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x9",
+ "PublicDescription": "ANT retired branches that got just mispredicted Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts flows delivered by the Microcode Sequencer",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.MS_FLOWS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x8",
+ "PublicDescription": "Counts flows delivered by the Microcode Sequencer Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired tagged after a wasted issue slot if none of the previous events occurred",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.OTHER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired tagged after a wasted issue slot if none of the previous events occurred",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.OTHER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired that are tagged after a branch instruction causes bubbles/empty issue slots due to a predecode wrong",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.PREDECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instruction retired that are tagged after a branch instruction causes bubbles/empty issue slots due to a predecode wrong.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.PREDECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.STLB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x15",
+ "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired instructions that caused clears due to being Unknown Branches.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x17",
+ "PublicDescription": "Number retired branch instructions that caused the front-end to be resteered when it finds the instruction in a fetch line. This is called Unknown Branch which occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to Instruction L1 cache miss, that hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "FRONTEND_RETIRED_SOURCE.ICACHE_L2_HIT",
+ "PublicDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to Instruction L1 cache miss, that hit in the L2 cache. Includes L2 Hit resulting from and L1D eviction of another core in the same module which is longer latency than a typical L2 hit.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to Instruction L1 cache miss, that missed in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "FRONTEND_RETIRED_SOURCE.ICACHE_L2_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xe",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to Instruction L1 cache miss, that hit in the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "FRONTEND_RETIRED_SOURCE.ICACHE_L3_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to ITLB miss that hit in the second level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "FRONTEND_RETIRED_SOURCE.ITLB_STLB_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to ITLB miss that also missed the second level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "FRONTEND_RETIRED_SOURCE.ITLB_STLB_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x80",
+ "EventName": "ICACHE.ACCESSES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x3",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x80",
+ "EventName": "ICACHE.ACCESSES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x3",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump and the instruction cache registers bytes are present.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x80",
+ "EventName": "ICACHE.HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump and the instruction cache registers bytes are not present. -",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x80",
+ "EventName": "ICACHE.MISSES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump and the instruction cache registers bytes are not present. -",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x80",
+ "EventName": "ICACHE.MISSES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x80",
+ "EventName": "ICACHE_DATA.STALLS",
+ "PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The decode pipeline works at a 32 Byte granularity.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "ICACHE_DATA.STALL_PERIODS",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0x80",
+ "EventName": "ICACHE_DATA.STALL_PERIODS",
+ "SampleAfterValue": "500009",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x83",
+ "EventName": "ICACHE_TAG.STALLS",
+ "PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.DSB_CYCLES_ANY",
+ "PublicDescription": "Counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles DSB is delivering optimal number of Uops",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "8",
+ "EventCode": "0x79",
+ "EventName": "IDQ.DSB_CYCLES_OK",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the DSB (Decode Stream Buffer) path. Count includes uops that may 'bypass' the IDQ.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x79",
+ "EventName": "IDQ.DSB_UOPS",
+ "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MITE_CYCLES_ANY",
+ "PublicDescription": "Counts the number of cycles uops were delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles MITE is delivering optimal number of Uops",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "8",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MITE_CYCLES_OK",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MITE_UOPS",
+ "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when uops are being delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MS_CYCLES_ANY",
+ "PublicDescription": "Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Uops maybe initiated by Decode Stream Buffer (DSB) or MITE.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of switches from DSB or MITE to the MS",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MS_SWITCHES",
+ "PublicDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops initiated by MITE or Decode Stream Buffer (DSB) and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MS_UOPS",
+ "PublicDescription": "Counts the number of uops initiated by MITE or Decode Stream Buffer (DSB) and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event counts a subset of the Topdown Slots event that when no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_BUBBLES.CORE",
+ "PublicDescription": "This event counts a subset of the Topdown Slots event that when no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations. Software can use this event as the numerator for the Frontend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to IDQ_BUBBLES.STARVATION_CYCLES]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "8",
+ "Deprecated": "1",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_BUBBLES.CYCLES_FE_WAS_OK",
+ "Invert": "1",
+ "PublicDescription": "Counts the number of cycles when the optimal number of uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when no uops are delivered by the IDQ for 2 or more cycles when backend of the machine is not stalled - normally indicating a Fetch Latency issue",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_BUBBLES.FETCH_LATENCY",
+ "PublicDescription": "Counts the number of cycles when no uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls for 2 or more cycles - normally indicating a Fetch Latency issue.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "8",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_BUBBLES.STARVATION_CYCLES",
+ "PublicDescription": "Counts the number of cycles when no uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. [This event is alias to IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE]",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the micro-sequencer is busy.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "MS_DECODED.MS_BUSY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ }
+]
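The entries above follow the usual perf pmu-events schema: EventCode and UMask pick the event, CounterMask, EdgeDetect and Invert fill the counter-mask, edge-detect and invert bits, and Unit routes the alias to the matching hybrid PMU (cpu_core, cpu_atom or cpu_lowpower). As a rough sketch, assuming only the documented Intel PERFEVTSEL bit layout that perf's event=/umask=/cmask=/edge format terms mirror (nothing Arrow Lake specific, and not part of this patch), an entry such as IDQ.MS_SWITCHES maps to a raw config value like so:

    # Hedged sketch: compute the PERFEVTSEL-style config value that perf's
    # cpu_core/event=...,umask=...,cmask=...,edge/ syntax encodes for one of
    # the JSON entries above. Bit positions are the documented Intel layout:
    # event select 0-7, umask 8-15, edge 18, inv 23, cmask 24-31.
    def raw_config(entry):
        cfg = int(entry["EventCode"].split(",")[0], 16)   # event select
        cfg |= int(entry.get("UMask", "0"), 16) << 8      # unit mask
        cfg |= int(entry.get("EdgeDetect", "0")) << 18    # edge detect
        cfg |= int(entry.get("Invert", "0")) << 23        # invert cmask test
        cfg |= int(entry.get("CounterMask", "0")) << 24   # counter mask
        return cfg

    ms_switches = {"EventCode": "0x79", "UMask": "0x20",
                   "CounterMask": "1", "EdgeDetect": "1"}
    print(hex(raw_config(ms_switches)))   # 0x1042079

which is what something like perf stat -e cpu_core/event=0x79,umask=0x20,cmask=1,edge/ would program once the cpu_core PMU is present.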
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/memory.json b/tools/perf/pmu-events/arch/x86/arrowlake/memory.json
new file mode 100644
index 000000000000..aba1e27e5e37
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/memory.json
@@ -0,0 +1,401 @@
+[
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to any number of reasons, including an L1 miss, WCB full, pagewalk, store address block or store data block, on a load that retires.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.ANY_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xff",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to any number of reasons, including an L1 miss, WCB full, pagewalk, store address block or store data block, on a load that retires.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.ANY_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xff",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to a core bound stall including a store address match, a DTLB miss or a page walk that detains the load from retiring.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.L1_BOUND_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xf4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to a core bound stall including a store address match, a DTLB miss or a page walk that detains the load from retiring.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.L1_BOUND_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xf4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to a DL1 miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.L1_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DL1 miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.L1_MISS_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x81",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DL1 miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.L1_MISS_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x81",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.OTHER_AT_RET",
+ "PublicDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases such as pipeline conflicts, fences, etc.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc0",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.OTHER_AT_RET",
+ "PublicDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases such as pipeline conflicts, fences, etc.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc0",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a pagewalk.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.PGWALK_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xa0",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a pagewalk.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.PGWALK_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xa0",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a store address match.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.ST_ADDR_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x84",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a store address match.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.ST_ADDR_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x84",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to request buffers full or lock in progress.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.WCB_FULL_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x82",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of memory ordering machine clears triggered due to a snoop from an external agent. Does not count internally generated machine clears such as those due to disambiguations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
+ "SampleAfterValue": "20003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of machine clears due to memory ordering conflicts.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
+ "PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to memory ordering caused by a snoop from an external agent. Does not count internally generated machine clears such as those due to memory disambiguation.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
+ "SampleAfterValue": "20003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x400",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "53",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x80",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "1009",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x10",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "20011",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_2048",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x800",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "23",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x100",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "503",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x20",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "100007",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x4",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x200",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "101",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x40",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "2003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
+ "Counter": "2,3,4,5,6,7,8,9",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x8",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "50021",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired memory store access operations. A PDist event for PEBS Store Latency Facility.",
+ "Counter": "0,1",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.STORE_SAMPLE",
+ "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8 Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts misaligned loads that are 4K page splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x13",
+ "EventName": "MISALIGN_MEM_REF.LOAD_PAGE_SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts misaligned loads that are 4K page splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x13",
+ "EventName": "MISALIGN_MEM_REF.LOAD_PAGE_SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts misaligned stores that are 4K page splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x13",
+ "EventName": "MISALIGN_MEM_REF.STORE_PAGE_SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts misaligned stores that are 4K page splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x13",
+ "EventName": "MISALIGN_MEM_REF.STORE_PAGE_SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1E780000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0xFE7F8000001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0xFE7F8000002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts demand data read requests that miss the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where data return is pending for a Demand Data Read request who miss L3 cache.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD",
+ "PublicDescription": "Cycles with at least 1 Demand Data Read requests who miss L3 cache in the superQ.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD",
+ "PublicDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache. Note that this does not capture all elapsed cycles while requests are outstanding - only cycles from when the requests were known by the requesting core to have missed the L3 cache.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ }
+]
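The MEM_TRANS_RETIRED.LOAD_LATENCY_GT_* entries above all share EventCode 0xcd and UMask 0x1; they differ only in the threshold written to MSR 0x3F6 (MSRValue, the minimum load latency in core cycles), and SampleAfterValue shrinks as the threshold grows so the rarer long-latency loads still get sampled. A minimal sketch, assuming perf's generic ldlat format term is what carries the MSR 0x3F6 threshold here (the helper name is illustrative, not part of the patch):

    # Hedged sketch: build the perf event string for a PEBS load-latency
    # entry from its JSON fields. Assumes ldlat maps to the MSR 0x3F6
    # threshold, as the MSRIndex/MSRValue fields above suggest.
    def perf_event_string(entry, pmu="cpu_core"):
        event = int(entry["EventCode"], 16)
        umask = int(entry["UMask"], 16)
        ldlat = int(entry["MSRValue"], 16)   # minimum latency, in cycles
        return f"{pmu}/event={event:#x},umask={umask:#x},ldlat={ldlat}/pp"

    gt_128 = {"EventCode": "0xcd", "UMask": "0x1", "MSRValue": "0x80"}
    print(perf_event_string(gt_128))
    # cpu_core/event=0xcd,umask=0x1,ldlat=128/pp

Once the JSON is merged, the same event should also be reachable through its lowercased alias, e.g. perf record -e cpu_core/mem_trans_retired.load_latency_gt_128/pp.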
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/metricgroups.json b/tools/perf/pmu-events/arch/x86/arrowlake/metricgroups.json
new file mode 100644
index 000000000000..855585fe6fae
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/metricgroups.json
@@ -0,0 +1,150 @@
+{
+ "Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "C0Wait": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DSB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DSBmiss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DataSharing": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Fed": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FetchBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FetchLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Flops": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FpScalar": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FpVector": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Frontend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "HPC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IcMiss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Ifetch": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IntVector": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Load_Store_Miss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Mem_Exec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryTLB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Memory_BW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Memory_Lat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MicroSeq": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "OS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Offcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "PGO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Server": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Snoop": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "SoC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Summary": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL1": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL2": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL3mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TopdownL1": "Metrics for top-down breakdown at level 1",
+ "TopdownL2": "Metrics for top-down breakdown at level 2",
+ "TopdownL3": "Metrics for top-down breakdown at level 3",
+ "TopdownL4": "Metrics for top-down breakdown at level 4",
+ "TopdownL5": "Metrics for top-down breakdown at level 5",
+ "TopdownL6": "Metrics for top-down breakdown at level 6",
+ "load_store_bound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "tma_L1_group": "Metrics for top-down breakdown at level 1",
+ "tma_L2_group": "Metrics for top-down breakdown at level 2",
+ "tma_L3_group": "Metrics for top-down breakdown at level 3",
+ "tma_L4_group": "Metrics for top-down breakdown at level 4",
+ "tma_L5_group": "Metrics for top-down breakdown at level 5",
+ "tma_L6_group": "Metrics for top-down breakdown at level 6",
+ "tma_alu_op_utilization_group": "Metrics contributing to tma_alu_op_utilization category",
+ "tma_assists_group": "Metrics contributing to tma_assists category",
+ "tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
+ "tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
+ "tma_branch_mispredicts_group": "Metrics contributing to tma_branch_mispredicts category",
+ "tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
+ "tma_code_stlb_miss_group": "Metrics contributing to tma_code_stlb_miss category",
+ "tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
+ "tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
+ "tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
+ "tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
+ "tma_fetch_bandwidth_group": "Metrics contributing to tma_fetch_bandwidth category",
+ "tma_fetch_latency_group": "Metrics contributing to tma_fetch_latency category",
+ "tma_fp_arith_group": "Metrics contributing to tma_fp_arith category",
+ "tma_fp_vector_group": "Metrics contributing to tma_fp_vector category",
+ "tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
+ "tma_heavy_operations_group": "Metrics contributing to tma_heavy_operations category",
+ "tma_icache_misses_group": "Metrics contributing to tma_icache_misses category",
+ "tma_ifetch_bandwidth_group": "Metrics contributing to tma_ifetch_bandwidth category",
+ "tma_ifetch_latency_group": "Metrics contributing to tma_ifetch_latency category",
+ "tma_int_operations_group": "Metrics contributing to tma_int_operations category",
+ "tma_issue2P": "Metrics related by the issue $issue2P",
+ "tma_issueBM": "Metrics related by the issue $issueBM",
+ "tma_issueBW": "Metrics related by the issue $issueBW",
+ "tma_issueComp": "Metrics related by the issue $issueComp",
+ "tma_issueD0": "Metrics related by the issue $issueD0",
+ "tma_issueFB": "Metrics related by the issue $issueFB",
+ "tma_issueFL": "Metrics related by the issue $issueFL",
+ "tma_issueL1": "Metrics related by the issue $issueL1",
+ "tma_issueLat": "Metrics related by the issue $issueLat",
+ "tma_issueMC": "Metrics related by the issue $issueMC",
+ "tma_issueMS": "Metrics related by the issue $issueMS",
+ "tma_issueMV": "Metrics related by the issue $issueMV",
+ "tma_issueRFO": "Metrics related by the issue $issueRFO",
+ "tma_issueSL": "Metrics related by the issue $issueSL",
+ "tma_issueSO": "Metrics related by the issue $issueSO",
+ "tma_issueSmSt": "Metrics related by the issue $issueSmSt",
+ "tma_issueSpSt": "Metrics related by the issue $issueSpSt",
+ "tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
+ "tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
+ "tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_l2_bound_group": "Metrics contributing to tma_l2_bound category",
+ "tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
+ "tma_light_operations_group": "Metrics contributing to tma_light_operations category",
+ "tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_load_stlb_miss_group": "Metrics contributing to tma_load_stlb_miss category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
+ "tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
+ "tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
+ "tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
+ "tma_mite_group": "Metrics contributing to tma_mite category",
+ "tma_other_light_ops_group": "Metrics contributing to tma_other_light_ops category",
+ "tma_ports_utilization_group": "Metrics contributing to tma_ports_utilization category",
+ "tma_ports_utilized_0_group": "Metrics contributing to tma_ports_utilized_0 category",
+ "tma_ports_utilized_3m_group": "Metrics contributing to tma_ports_utilized_3m category",
+ "tma_resource_bound_group": "Metrics contributing to tma_resource_bound category",
+ "tma_retiring_group": "Metrics contributing to tma_retiring category",
+ "tma_serializing_operation_group": "Metrics contributing to tma_serializing_operation category",
+ "tma_store_bound_group": "Metrics contributing to tma_store_bound category",
+ "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category",
+ "tma_store_stlb_miss_group": "Metrics contributing to tma_store_stlb_miss category"
+}
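metricgroups.json only carries the human-readable description for each group name; the metrics defined elsewhere for this platform tag themselves with these group names. perf consumes the mapping so that, for example, perf list metricgroup can enumerate the groups with their descriptions and perf stat -M TopdownL1 can collect a whole top-down level in one run.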
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/other.json b/tools/perf/pmu-events/arch/x86/arrowlake/other.json
new file mode 100644
index 000000000000..ab7aac14e697
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/other.json
@@ -0,0 +1,90 @@
+[
+ {
+ "BriefDescription": "Count all other hardware assists or traps that are not necessarily architecturally exposed (through a software handler) beyond FP; SSE-AVX mix and A/D assists who are counted by dedicated sub-events.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.HARDWARE",
+ "PublicDescription": "Count all other hardware assists or traps that are not necessarily architecturally exposed (through a software handler) beyond FP; SSE-AVX mix and A/D assists who are counted by dedicated sub-events. This includes, but not limited to, assists at EXE or MEM uop writeback like AVX* load/store/gather/scatter (non-FP GSSE-assist ) , assists generated by ROB like PEBS and RTIT, Uncore trap, RAR (Remote Action Request) and CET (Control flow Enforcement Technology) assists.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "ASSISTS.PAGE_FAULT",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.PAGE_FAULT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to MISC_RETIRED.LBR_INSERTS]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xe4",
+ "EventName": "LBR_INSERTS.ANY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.FULL_STREAMING_WR.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x800000010000",
+ "PublicDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.PARTIAL_STREAMING_WR.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x400000010000",
+ "PublicDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles the uncore cannot take further requests",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x2d",
+ "EventName": "XQ.FULL",
+ "PublicDescription": "number of cycles when the thread is active and the uncore cannot take any further requests (for example prefetches, loads or stores initiated by the Core that miss the L2 cache).",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ }
+]
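The OCR.* entries in this file are off-core response events: the EventCode selects the OCR facility and MSRValue is the request/response bit pattern that perf programs into the MSR pair named by MSRIndex (0x1a6/0x1a7, the OFFCORE_RSP MSRs). Assuming the usual Intel off-core response wiring, where that pattern travels through the PMU's offcore_rsp (config1) format term, raw use would look roughly like perf stat -e cpu_atom/event=0xB7,umask=0x1,offcore_rsp=0x10800/ for OCR.STREAMING_WR.ANY_RESPONSE; the JSON alias hides this detail once merged.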
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/pipeline.json b/tools/perf/pmu-events/arch/x86/arrowlake/pipeline.json
new file mode 100644
index 000000000000..0651e2c4561e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/pipeline.json
@@ -0,0 +1,2500 @@
+[
+ {
+ "BriefDescription": "Counts the number of cycles when any of the floating point or integer dividers are active.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0xb0",
+ "EventName": "ARITH.DIV_ACTIVE",
+ "PublicDescription": "Counts cycles when divide unit is busy executing divide or square root operations. Accounts for integer and floating-point operations.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x9",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles when any of the floating point or integer dividers are active.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles when integer divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0xb0",
+ "EventName": "ARITH.IDIV_ACTIVE",
+ "PublicDescription": "Counts cycles when divide unit is busy executing divide or square root operations. Accounts for integer operations only.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.ANY",
+ "PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware. Examples include AD (page Access Dirty), FP and AVX related assists.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1f",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa2",
+ "EventName": "BE_STALLS.SCOREBOARD",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
+ "SampleAfterValue": "200003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "All branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts all branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Errata": "ARL010, ARL011",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
+ "SampleAfterValue": "200003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts retired JCC (Jump on Conditional Code) branch instructions retired includes both taken and not taken branches",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND",
+ "SampleAfterValue": "200003",
+ "UMask": "0x7e",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND",
+ "PublicDescription": "Counts conditional branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x111",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of retired JCC (Jump on Conditional Code) branch instructions retired, includes both taken and not taken branches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Errata": "ARL011",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND",
+ "SampleAfterValue": "200003",
+ "UMask": "0x7e",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_NTAKEN",
+ "PublicDescription": "Counts not taken branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of taken JCC branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfe",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_TAKEN",
+ "PublicDescription": "Counts taken conditional branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x101",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfe",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Taken backward conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_TAKEN_BWD",
+ "PublicDescription": "Counts taken backward conditional branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Taken forward conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_TAKEN_FWD",
+ "PublicDescription": "Counts taken forward conditional branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x102",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of far branch instructions retired, includes far jump, far call and return, and Interrupt call and return",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.FAR_BRANCH",
+ "SampleAfterValue": "200003",
+ "UMask": "0xbf",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.FAR_BRANCH",
+ "PublicDescription": "Counts far branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of far branch instructions retired, includes far jump, far call and return, and interrupt call and return.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Errata": "ARL011",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.FAR_BRANCH",
+ "SampleAfterValue": "200003",
+ "UMask": "0xbf",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect JMP and near indirect CALL branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT",
+ "SampleAfterValue": "200003",
+ "UMask": "0xeb",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Indirect near branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT",
+ "PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Errata": "ARL011",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT",
+ "SampleAfterValue": "200003",
+ "UMask": "0xeb",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect CALL branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfb",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Errata": "ARL011",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfb",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect JMP branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT_JMP",
+ "SampleAfterValue": "200003",
+ "UMask": "0xef",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of near CALL branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf9",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_CALL",
+ "PublicDescription": "Counts both direct and indirect near call instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of near CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Errata": "ARL010, ARL011",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf9",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of near RET branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_RETURN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf7",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_RETURN",
+ "PublicDescription": "Counts return instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_RETURN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf7",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_TAKEN",
+ "PublicDescription": "Counts taken branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Errata": "ARL011",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xc0",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of near relative CALL branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.REL_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfd",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of near relative CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.REL_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfd",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of near relative JMP branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.REL_JMP",
+ "SampleAfterValue": "200003",
+ "UMask": "0xdf",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
+ "SampleAfterValue": "200003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "All mispredicted branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
+ "SampleAfterValue": "200003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "All mispredicted branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.ALL_BRANCHES_COST",
+ "PublicDescription": "All mispredicted branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x44",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted JCC branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND",
+ "SampleAfterValue": "200003",
+ "UMask": "0x7e",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND",
+ "PublicDescription": "Counts mispredicted conditional branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x111",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND",
+ "SampleAfterValue": "200003",
+ "UMask": "0x7e",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Mispredicted conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_COST",
+ "PublicDescription": "Mispredicted conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x151",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Mispredicted non-taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_NTAKEN",
+ "PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Mispredicted non-taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_NTAKEN_COST",
+ "PublicDescription": "Mispredicted non-taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x50",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted taken JCC branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfe",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "number of branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN",
+ "PublicDescription": "Counts taken conditional mispredicted branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x101",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfe",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "number of branch instructions retired that were mispredicted and taken backward.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN_BWD",
+ "PublicDescription": "Counts taken backward conditional mispredicted branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "number of branch instructions retired that were mispredicted and taken backward. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN_BWD_COST",
+ "PublicDescription": "number of branch instructions retired that were mispredicted and taken backward. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x8001",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Mispredicted taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN_COST",
+ "PublicDescription": "Mispredicted taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x141",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "number of branch instructions retired that were mispredicted and taken forward.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN_FWD",
+ "PublicDescription": "Counts taken forward conditional mispredicted branch instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "number of branch instructions retired that were mispredicted and taken forward. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN_FWD_COST",
+ "PublicDescription": "number of branch instructions retired that were mispredicted and taken forward. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x8002",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT",
+ "SampleAfterValue": "200003",
+ "UMask": "0xeb",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Mispredicted near indirect branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT",
+        "PublicDescription": "Counts mispredicted near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT",
+ "SampleAfterValue": "200003",
+ "UMask": "0xeb",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect CALL branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfb",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Mispredicted indirect CALL retired.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
+ "PublicDescription": "Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfb",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Mispredicted indirect CALL retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_CALL_COST",
+ "PublicDescription": "Mispredicted indirect CALL retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x42",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Mispredicted near indirect branch instructions retired (excluding returns). This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_COST",
+ "PublicDescription": "Mispredicted near indirect branch instructions retired (excluding returns). This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100003",
+ "UMask": "0xc0",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect JMP branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_JMP",
+ "SampleAfterValue": "200003",
+ "UMask": "0xef",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
+ "PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0x80",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Mispredicted taken near branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.NEAR_TAKEN_COST",
+ "PublicDescription": "Mispredicted taken near branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "400009",
+ "UMask": "0x60",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non-PEBS.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.RET",
+ "PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near RET branch instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.RETURN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf7",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.RETURN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf7",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Mispredicted ret instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.RET_COST",
+ "PublicDescription": "Mispredicted ret instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0,1",
+ "SampleAfterValue": "100007",
+ "UMask": "0x48",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.C01",
+ "PublicDescription": "Counts core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.C02",
+ "PublicDescription": "Counts core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Core clocks when the thread is in the C0.1 or C0.2 or running a PAUSE in C0 ACPI state.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.C0_WAIT",
+ "PublicDescription": "Counts core clocks when the thread is in the C0.1 or C0.2 power saving optimized states (TPAUSE or UMWAIT instructions) or running the PAUSE instruction.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x70",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles.",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.CORE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Core cycles when the core is not in a halt state.",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.CORE",
+ "PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the programmable counters available for other events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.CORE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.CORE_P",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Thread cycles when thread is not in halt state [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.CORE_P",
+ "PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time. [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.CORE_P",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Core clocks when a PAUSE is pending.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.PAUSE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of Pause instructions",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.PAUSE_INST",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted reference clock cycles.",
+ "Counter": "Fixed counter 2",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x3",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC",
+        "PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. Note: On all current platforms this event stops counting during the duty-off periods of 'throttling (TM)' states, when the processor is 'halted'. The counter update is done at a lower clock rate than the core clock, so the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed, software clears the overflow status bit and resets the counter to less than MAX. The reset value is not clocked into the counter immediately, so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled), after which the reset value gets clocked into the counter. Therefore, software will get the interrupt and read the overflow status bit as '1' (bit 34) while the counter value is less than MAX. Software should ignore this case.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted reference clock cycles",
+ "Counter": "Fixed counter 2",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x3",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted reference clock cycles",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
+ "PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general purpose performance counter.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
+        "PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. Note: On all current platforms this event stops counting during the duty-off periods of 'throttling (TM)' states, when the processor is 'halted'. The counter update is done at a lower clock rate than the core clock, so the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed, software clears the overflow status bit and resets the counter to less than MAX. The reset value is not clocked into the counter immediately, so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled), after which the reset value gets clocked into the counter. Therefore, software will get the interrupt and read the overflow status bit as '1' (bit 34) while the counter value is less than MAX. Software should ignore this case.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
+ "PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general purpose performance counter.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles.",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.THREAD",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Core cycles when the thread is not in a halt state.",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.THREAD",
+ "PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the programmable counters available for other events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.THREAD",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.THREAD_P",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Thread cycles when thread is not in halt state [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.THREAD_P",
+ "PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time. [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.THREAD_P",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "16",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "4",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of times a load depends on another load that had just written back its data, or did so one or two cycles earlier. This event supports indirect dependency through a single uop.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x02",
+ "EventName": "DEPENDENT_LOADS.ANY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
+ "PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles total of 2 or 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.2_3_PORTS_UTIL",
+ "SampleAfterValue": "2000003",
+ "UMask": "0xc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
+ "PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
+ "PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
+ "PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "5",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.BOUND_ON_LOADS",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x21",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "2",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
+ "PublicDescription": "Counts cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Cycles where no uop executed while the RS was not empty, the SB was not full, and there was no outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS",
+ "PublicDescription": "Number of cycles total of 0 uops executed on all ports, Reservation Station (RS) was not empty, the Store Buffer (SB) was not full and there was no outstanding load.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Instruction decoders utilized in a cycle",
+ "Counter": "2",
+ "EventCode": "0x75",
+ "EventName": "INST_DECODED.DECODERS",
+ "PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of instructions retired.",
+ "Counter": "Fixed counter 0",
+ "EventName": "INST_RETIRED.ANY",
+ "PublicDescription": "Fixed Counter: Counts the number of instructions retired. Available PDIST counters: 32",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
+ "Counter": "Fixed counter 0",
+ "EventName": "INST_RETIRED.ANY",
+ "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter. Available PDIST counters: 32",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of instructions retired",
+ "Counter": "Fixed counter 0",
+ "EventName": "INST_RETIRED.ANY",
+ "PublicDescription": "Fixed Counter: Counts the number of instructions retired Available PDIST counters: 32",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.ANY_P",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.ANY_P",
+ "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter. Available PDIST counters: 0,1",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.ANY_P",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "retired macro-fused uops when there is a branch in the macro-fused pair (the two instructions that got macro-fused count once in this pmon)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.BR_FUSED",
+ "PublicDescription": "retired macro-fused uops when there is a branch in the macro-fused pair (the two instructions that got macro-fused count once in this pmon) Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "INST_RETIRED.MACRO_FUSED",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.MACRO_FUSED",
+ "PublicDescription": "INST_RETIRED.MACRO_FUSED Available PDIST counters: 0,1",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x30",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired NOP instructions.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.NOP",
+ "PublicDescription": "Counts all retired NOP or ENDBR32/64 or PREFETCHIT0/1 instructions Available PDIST counters: 0,1",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Precise instruction retired with PEBS precise-distribution",
+ "Counter": "Fixed counter 0",
+ "EventName": "INST_RETIRED.PREC_DIST",
+ "PublicDescription": "A version of INST_RETIRED that allows for a precise distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR++) feature to fix bias in how retired instructions get sampled. Use on Fixed Counter 0. Available PDIST counters: 32",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Iterations of Repeat string retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.REP_ITERATION",
+ "PublicDescription": "Number of iterations of Repeat (REP) string retired instructions such as MOVS, CMPS, and SCAS. Each has a byte, word, and doubleword version and string instructions can be repeated using a repetition prefix, REP, that allows their architectural execution to be repeated a number of times as specified by the RCX register. Note the number of iterations is implementation-dependent. Available PDIST counters: 0,1",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Bubble cycles of BPClear.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.BPCLEAR_CYCLES",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0xB",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Clears speculative count",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.CLEARS_COUNT",
+ "PublicDescription": "Counts the number of speculative clears due to any type of branch misprediction or machine clears",
+ "SampleAfterValue": "500009",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
+ "PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x80",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.RECOVERY_CYCLES",
+ "PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Bubble cycles of BAClear (Unknown Branch).",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.UNKNOWN_BRANCH_CYCLES",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x7",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "TMA slots where uops got dropped",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.UOP_DROPPING",
+ "PublicDescription": "Estimated number of Top-down Microarchitecture Analysis slots that got dropped due to non front-end reasons",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of uops executed on secondary integer ports 0,1,2,3.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "INT_UOPS_EXECUTED.2ND",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of uops executed on a load port.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "INT_UOPS_EXECUTED.LD",
+ "PublicDescription": "Counts the number of uops executed on a load port. This event counts for integer uops even if the destination is FP/vector",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of uops executed on integer ports 0,1,2,3.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "INT_UOPS_EXECUTED.PRIMARY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x78",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of uops executed on a Store address port.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "INT_UOPS_EXECUTED.STA",
+ "PublicDescription": "Counts the number of uops executed on a Store address port. This event counts integer uops even if the data source is FP/vector",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of uops executed on an integer store data and jump port.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "INT_UOPS_EXECUTED.STD_JMP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of vector integer instructions retired of 128-bit vector-width.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.128BIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x13",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of vector integer instructions retired of 256-bit vector-width.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.256BIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xac",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "integer ADD, SUB, SAD 128-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.ADD_128",
+ "PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 128-bit vector instructions.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "integer ADD, SUB, SAD 256-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.ADD_256",
+ "PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 256-bit vector instructions.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.MUL_256",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.MUL_256",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.SHUFFLES",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.SHUFFLES",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.VNNI_128",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.VNNI_128",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.VNNI_256",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.VNNI_256",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of retired loads that are blocked because they initially appear to be store-forward blocked, but are subsequently shown not to be blocked based on a 4K alias check.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.ADDRESS_ALIAS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "False dependencies in MOB due to partial compare on address.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.ADDRESS_ALIAS",
+ "PublicDescription": "Counts the number of times a load got blocked due to false dependencies in MOB due to partial compare on address.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of retired loads that are blocked because they initially appear to be store-forward blocked, but are subsequently shown not to be blocked based on a 4K alias check.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.ADDRESS_ALIAS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+        "BriefDescription": "Counts the number of occurrences where a retired load gets blocked because its address exactly matches an older store whose data is not ready (a.k.a. unknown), i.e. unready_fwd.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.DATA_UNKNOWN",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of retired loads that are blocked because their address exactly matches an older store whose data is not ready.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.DATA_UNKNOWN",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.NO_SR",
+ "PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x88",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of times a load got blocked early due to a preceding store operation with an unknown address or unknown data, excluding in-line (immediate) wakeups.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.STORE_EARLY",
+ "SampleAfterValue": "100003",
+ "UMask": "0xa1",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of occurrences where a retired load gets blocked because its address partially overlaps with an older store (size mismatch), i.e. unknown_sta/bad_forward.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.STORE_FORWARD",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.STORE_FORWARD",
+ "PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x82",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of retired loads that are blocked because their address partially overlaps with an older store.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.STORE_FORWARD",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0xa8",
+ "EventName": "LSD.CYCLES_ACTIVE",
+ "PublicDescription": "Counts the cycles when at least one uop is delivered by the LSD (Loop-stream detector).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "8",
+ "EventCode": "0xa8",
+ "EventName": "LSD.CYCLES_OK",
+ "PublicDescription": "Counts the cycles when optimal number of uops is delivered by the LSD (Loop-stream detector).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa8",
+ "EventName": "LSD.UOPS",
+ "PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts all machine clears for any reason including, but not limited to memory ordering, SMC, and FP assist.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.ANY",
+ "SampleAfterValue": "20003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.COUNT",
+ "PublicDescription": "Counts the number of machine clears (nukes) of any type.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of memory ordering machine clears triggered due to an internal load passing an older store within the same CPU.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.DISAMBIGUATION",
+ "SampleAfterValue": "20003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to memory ordering in which an internal load passes an older store within the same CPU.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.DISAMBIGUATION",
+ "SampleAfterValue": "20003",
+ "UMask": "0x8",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of nukes due to memory renaming",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.MRN_NUKE",
+ "SampleAfterValue": "20003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of times that the machine clears due to a page fault. Covers both I-Side and D-Side (Loads/Stores) page faults. A page fault occurs when either the page is not present, or an access violation occurs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.PAGE_FAULT",
+ "SampleAfterValue": "20003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to a page fault. Counts both I-Side and D-Side (Loads/Stores) page faults. A page fault occurs when either the page is not present, or an access violation occurs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.PAGE_FAULT",
+ "SampleAfterValue": "20003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.SLOW",
+ "SampleAfterValue": "20003",
+ "UMask": "0x6e",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.SLOW",
+ "SampleAfterValue": "20003",
+ "UMask": "0x6f",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.SMC",
+ "PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to program modifying data (self modifying code) within 1K of a recently fetched code page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.SMC",
+ "SampleAfterValue": "20003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts cycles where no execution is happening due to loads waiting for L1 cache (that is: no execution & load in flight & no load missed L1 cache)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x46",
+ "EventName": "MEMORY_STALLS.L1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts cycles where no execution is happening due to loads waiting for L2 cache (that is: no execution & load in flight & load missed L1 & no load missed L2 cache)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x46",
+ "EventName": "MEMORY_STALLS.L2",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts cycles where no execution is happening due to loads waiting for L3 cache (that is: no execution & load in flight & load missed L1 & load missed L2 cache & no load missed L3 Cache)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x46",
+ "EventName": "MEMORY_STALLS.L3",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts cycles where no execution is happening due to loads waiting for Memory (that is: no execution & load in flight & a load missed L3 cache)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x46",
+ "EventName": "MEMORY_STALLS.MEM",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "LFENCE instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe0",
+ "EventName": "MISC2_RETIRED.LFENCE",
+        "PublicDescription": "Number of retired LFENCE instructions",
+ "SampleAfterValue": "400009",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "LBR record is inserted",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xe4",
+ "EventName": "MISC_RETIRED.LBR_INSERTS",
+ "PublicDescription": "LBR record is inserted Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of Last Branch Record (LBR) entries. Requires LBRs to be enabled and configured in IA32_LBR_CTL. [This event is alias to LBR_INSERTS.ANY]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe4",
+ "EventName": "MISC_RETIRED.LBR_INSERTS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY",
+ "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_COUNT",
+ "Invert": "1",
+ "PublicDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events)",
+ "SampleAfterValue": "100003",
+ "UMask": "0x7",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles when RS was empty and a resource allocation stall is asserted",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_RESOURCE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots in a UMWAIT or TPAUSE instruction where no uop issues due to the instruction putting the CPU into the C0.1 activity state.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x75",
+ "EventName": "SERIALIZATION.C01_MS_SCB",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots in a UMWAIT or TPAUSE instruction where no uop issues due to the instruction putting the CPU into the C0.1 activity state.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x75",
+ "EventName": "SERIALIZATION.C01_MS_SCB",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots where no uop could issue due to an IQ scoreboard that stalls allocation until a specified older uop retires or (in the case of jump scoreboard) executes. Commonly executed instructions with IQ scoreboards include LFENCE and MFENCE.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x75",
+ "EventName": "SERIALIZATION.IQ_JEU_SCB",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots not consumed by the backend due to a micro-sequencer (MS) scoreboard, which stalls the front-end from issuing from the UROM until a specified older uop retires.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x75",
+ "EventName": "SERIALIZATION.NON_C01_MS_SCB",
+ "PublicDescription": "Counts the number of issue slots not consumed by the backend due to a micro-sequencer (MS) scoreboard, which stalls the front-end from issuing from the UROM until a specified older uop retires. The most commonly executed instruction with an MS scoreboard is PAUSE.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
+ "PublicDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions. Software can use this event as the numerator for the Backend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "TMA slots wasted due to incorrect speculations.",
+ "Counter": "0",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.BAD_SPEC_SLOTS",
+ "PublicDescription": "Number of slots of TMA method that were wasted due to incorrect speculation. It covers all types of control-flow or data-related mis-speculations.",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "TMA slots wasted due to incorrect speculation by branch mispredictions",
+ "Counter": "0",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.BR_MISPREDICT_SLOTS",
+ "PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of speculative operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "TOPDOWN.MEMORY_BOUND_SLOTS",
+ "Counter": "3",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.MEMORY_BOUND_SLOTS",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
+ "Counter": "Fixed counter 3",
+ "EventName": "TOPDOWN.SLOTS",
+ "PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.SLOTS_P",
+ "PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method.",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.ALL_P",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.ALL_P",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows, and while uops are not yet available in the instruction queue (IQ) or until an FE_BOUND event occurs besides OTHER and CISC. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to Fast Nukes such as Memory Ordering Machine clears and MRN nukes",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.FASTNUKE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to Fast Nukes such as Memory Ordering Machine clears and MRN nukes",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.FASTNUKE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to Branch Mispredict",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.MISPREDICT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to Branch Mispredict",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.MISPREDICT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to a machine clear (nuke).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.NUKE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to a machine clear (nuke).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.NUKE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls [This event is alias to TOPDOWN_BE_BOUND.ALL_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN_BE_BOUND.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to certain allocation restrictions",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to certain allocation restrictions",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls [This event is alias to TOPDOWN_BE_BOUND.ALL]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN_BE_BOUND.ALL_P",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.ALL_P",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to memory reservation stall (scheduler not being able to accept another uop). This could be caused by RSV full or load/store buffer block.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.MEM_SCHEDULER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to memory reservation stall (scheduler not being able to accept another uop). This could be caused by RSV full or load/store buffer block.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.MEM_SCHEDULER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+        "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to IEC and FPC RAT stalls - which can be due to the FIQ and IEC reservation station stall (integer, FP and SIMD scheduler not being able to accept another uop).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to IEC and FPC RAT stalls - which can be due to the FIQ and IEC reservation station stall (integer, FP and SIMD scheduler not being able to accept another uop).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to mrbl stall. A 'marble' refers to a physical register file entry, also known as the physical destination (PDST).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.REGISTER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to mrbl stall. A 'marble' refers to a physical register file entry, also known as the physical destination (PDST).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.REGISTER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to ROB full",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.REORDER_BUFFER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to ROB full",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.REORDER_BUFFER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to iq/jeu scoreboards or ms scb",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.SERIALIZATION",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to iq/jeu scoreboards or ms scb",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.SERIALIZATION",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of retirement slots not consumed due to front end stalls.",
+ "Counter": "Fixed counter 5",
+ "EventName": "TOPDOWN_FE_BOUND.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to front end stalls",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x9c",
+ "EventName": "TOPDOWN_FE_BOUND.ALL_P",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to front end stalls",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ALL_P",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BAClear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.BRANCH_DETECT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BAClear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.BRANCH_DETECT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTClear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.BRANCH_RESTEER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTClear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.BRANCH_RESTEER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to ms",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.CISC",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to ms",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.CISC",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to decode stall",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.DECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to decode stall",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.DECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8d",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8d",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to latency related stalls including BACLEARs, BTCLEARs, ITLB misses, and ICache misses.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x72",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to latency related stalls including BACLEARs, BTCLEARs, ITLB misses, and ICache misses.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x72",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to TOPDOWN_FE_BOUND.ITLB_MISS]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ITLB",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to itlb miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ITLB_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to itlb miss [This event is alias to TOPDOWN_FE_BOUND.ITLB]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ITLB_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend that do not categorize into any other common frontend stall",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.OTHER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend that do not categorize into any other common frontend stall",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.OTHER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to predecode wrong",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.PREDECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to predecode wrong",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.PREDECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of consumed retirement slots.",
+ "Counter": "Fixed counter 6",
+ "EventName": "TOPDOWN_RETIRING.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of consumed retirement slots.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "TOPDOWN_RETIRING.ALL_P",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of consumed retirement slots.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x72",
+ "EventName": "TOPDOWN_RETIRING.ALL_P",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Number of non dec-by-all uops decoded by decoder",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x76",
+ "EventName": "UOPS_DECODED.DEC0_UOPS",
+ "PublicDescription": "This event counts the number of not dec-by-all uops decoded by decoder 0.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops executed on INT EU ALU ports.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.ALU",
+        "PublicDescription": "Number of ALU integer uops dispatched to execution.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops executed on any INT EU ports",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.INT_EU_ALL",
+ "PublicDescription": "Number of integer uops dispatched to execution.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Number of Uops dispatched/executed by any of the 3 JEUs (all uops that hold the JEU including macro; micro jumps; fetch-from-eip)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.JMP",
+        "PublicDescription": "Number of jump uops dispatched to execution",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops executed on Load ports",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.LOAD",
+ "PublicDescription": "Number of Load uops dispatched to execution.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of (shift) 1-cycle Uops dispatched/executed by any of the Shift Eus",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.SHIFT",
+        "PublicDescription": "Number of SHIFT integer uops dispatched to execution",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of Uops dispatched/executed by Slow EU (e.g. 3+ cycles LEA, >1 cycles shift, iDIVs, CR; *H operation)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.SLOW",
+        "PublicDescription": "Number of Slow integer uops dispatched to execution.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Number of Uops dispatched on STA ports",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.STA",
+        "PublicDescription": "Number of STA (Store Address) uops dispatched to execution",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x80",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Uops executed on STD ports",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.STD",
+        "PublicDescription": "Number of STD (Store Data) uops dispatched to execution",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "3",
+ "CounterMask": "1",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_1",
+ "PublicDescription": "Cycles where at least 1 uop was executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "3",
+ "CounterMask": "2",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_2",
+ "PublicDescription": "Cycles where at least 2 uops were executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "3",
+ "CounterMask": "3",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_3",
+ "PublicDescription": "Cycles where at least 3 uops were executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "3",
+ "CounterMask": "4",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_4",
+ "PublicDescription": "Cycles where at least 4 uops were executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "3",
+ "CounterMask": "1",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.STALLS",
+ "Invert": "1",
+ "PublicDescription": "Counts cycles during which no uops were dispatched from the Reservation Station (RS) per thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "3",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.THREAD",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of x87 uops dispatched.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.X87",
+ "PublicDescription": "Counts the number of x87 uops executed.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "When 4-uops are requested and only 2-uops are delivered, the event counts 2. Uops_issued correlates to the number of ROB entries. If a uop takes 2 ROB slots it counts as 2 uops_issued.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x0e",
+ "EventName": "UOPS_ISSUED.ANY",
+ "SampleAfterValue": "200003",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Uops that RAT issues to RS",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xae",
+ "EventName": "UOPS_ISSUED.ANY",
+ "PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of uops issued by the front end every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x0e",
+ "EventName": "UOPS_ISSUED.ANY",
+        "PublicDescription": "Counts the number of uops issued by the front end every cycle. When 4-uops are requested and only 2-uops are delivered, the event counts 2. Uops_issued correlates to the number of ROB entries. If a uop takes 2 ROB slots it counts as 2 uops_issued.",
+ "SampleAfterValue": "1000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "UOPS_ISSUED.CYCLES",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0xae",
+ "EventName": "UOPS_ISSUED.CYCLES",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the total number of uops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.ALL",
+ "SampleAfterValue": "2000003",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles with retired uop(s).",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.CYCLES",
+ "PublicDescription": "Counts cycles where at least one uop has retired.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Retired uops except the last uop of each instruction.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.HEAVY",
+ "PublicDescription": "Counts the number of retired micro-operations (uops) except the last uop of each instruction. An instruction that is decoded into less than two uops does not contribute to the count.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of integer divide uops retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.IDIV",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of integer divide uops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.IDIV",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.MS",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "UOPS_RETIRED.MS",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.MS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x8",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.MS",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Number of non-speculative switches to the Microcode Sequencer (MS)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.MS_SWITCHES",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x8",
+ "PublicDescription": "Switches to the Microcode Sequencer",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance, for example as measured by the instructions-per-cycle metric.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.SLOTS",
+        "PublicDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance, for example as measured by the instructions-per-cycle metric. Software can use this event as the numerator for the Retiring metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.STALLS",
+ "Invert": "1",
+ "PublicDescription": "This event counts cycles without actually retired uops.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of x87 uops retired, includes those in ms flows",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.X87",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of x87 uops retired, includes those in ms flows",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.X87",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/uncore-cache.json b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-cache.json
new file mode 100644
index 000000000000..f294852dfbe6
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-cache.json
@@ -0,0 +1,20 @@
+[
+ {
+        "BriefDescription": "Number of all entries allocated. Also includes retries.",
+ "Counter": "0,1",
+ "EventCode": "0x35",
+ "EventName": "UNC_HAC_CBO_TOR_ALLOCATION.ALL",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "HAC_CBO"
+ },
+ {
+ "BriefDescription": "Asserted on coherent DRD + DRdPref allocations into the queue. Cacheable only",
+ "Counter": "0,1",
+ "EventCode": "0x35",
+ "EventName": "UNC_HAC_CBO_TOR_ALLOCATION.DRD",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "HAC_CBO"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-interconnect.json
new file mode 100644
index 000000000000..15971f0c71db
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-interconnect.json
@@ -0,0 +1,47 @@
+[
+ {
+ "BriefDescription": "Number of all coherent Data Read entries. Doesn't include prefetches",
+ "Counter": "0,1",
+ "EventCode": "0x81",
+ "EventName": "UNC_HAC_ARB_REQ_TRK_REQUEST.DRD",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "HAC_ARB"
+ },
+ {
+ "BriefDescription": "Number of all CMI transactions",
+ "Counter": "0,1",
+ "EventCode": "0x8A",
+ "EventName": "UNC_HAC_ARB_TRANSACTIONS.ALL",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "HAC_ARB"
+ },
+ {
+ "BriefDescription": "Number of all CMI reads",
+ "Counter": "0,1",
+ "EventCode": "0x8A",
+ "EventName": "UNC_HAC_ARB_TRANSACTIONS.READS",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "HAC_ARB"
+ },
+ {
+ "BriefDescription": "Number of all CMI writes not including Mflush",
+ "Counter": "0,1",
+ "EventCode": "0x8A",
+ "EventName": "UNC_HAC_ARB_TRANSACTIONS.WRITES",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "HAC_ARB"
+ },
+ {
+ "BriefDescription": "Total number of all outgoing entries allocated. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "0,1",
+ "EventCode": "0x81",
+ "EventName": "UNC_HAC_ARB_TRK_REQUESTS.ALL",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "HAC_ARB"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/uncore-memory.json b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-memory.json
new file mode 100644
index 000000000000..ceb8839f0767
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-memory.json
@@ -0,0 +1,160 @@
+[
+ {
+ "BriefDescription": "Counts every CAS read command sent from the Memory Controller 0 to DRAM (sum of all channels).",
+ "Counter": "0",
+ "EventCode": "0xff",
+ "EventName": "UNC_MC0_RDCAS_COUNT_FREERUN",
+ "PerPkg": "1",
+        "PublicDescription": "Counts every CAS read command sent from the Memory Controller 0 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data.",
+ "UMask": "0x20",
+ "Unit": "imc_free_running_0"
+ },
+ {
+ "BriefDescription": "Counts every read and write request entering the Memory Controller 0.",
+ "Counter": "2",
+ "EventCode": "0xff",
+ "EventName": "UNC_MC0_TOTAL_REQCOUNT_FREERUN",
+ "PerPkg": "1",
+ "PublicDescription": "Counts every read and write request entering the Memory Controller 0 (sum of all channels). All requests are counted as one, whether they are 32B or 64B Read/Write or partial/full line writes. Some write requests to the same address may merge to a single write command to DRAM. Therefore, the total request count may be higher than total DRAM BW.",
+ "UMask": "0x10",
+ "Unit": "imc_free_running_0"
+ },
+ {
+ "BriefDescription": "Counts every CAS write command sent from the Memory Controller 0 to DRAM (sum of all channels).",
+ "Counter": "1",
+ "EventCode": "0xff",
+ "EventName": "UNC_MC0_WRCAS_COUNT_FREERUN",
+ "PerPkg": "1",
+        "PublicDescription": "Counts every CAS write command sent from the Memory Controller 0 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data.",
+ "UMask": "0x30",
+ "Unit": "imc_free_running_0"
+ },
+ {
+ "BriefDescription": "Counts every CAS read command sent from the Memory Controller 1 to DRAM (sum of all channels).",
+ "Counter": "3",
+ "EventCode": "0xff",
+ "EventName": "UNC_MC1_RDCAS_COUNT_FREERUN",
+ "PerPkg": "1",
+        "PublicDescription": "Counts every CAS read command sent from the Memory Controller 1 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data.",
+ "UMask": "0x20",
+ "Unit": "imc_free_running_1"
+ },
+ {
+ "BriefDescription": "Counts every read and write request entering the Memory Controller 1.",
+ "Counter": "5",
+ "EventCode": "0xff",
+ "EventName": "UNC_MC1_TOTAL_REQCOUNT_FREERUN",
+ "PerPkg": "1",
+ "PublicDescription": "Counts every read and write request entering the Memory Controller 1 (sum of all channels). All requests are counted as one, whether they are 32B or 64B Read/Write or partial/full line writes. Some write requests to the same address may merge to a single write command to DRAM. Therefore, the total request count may be higher than total DRAM BW.",
+ "UMask": "0x10",
+ "Unit": "imc_free_running_1"
+ },
+ {
+ "BriefDescription": "Counts every CAS write command sent from the Memory Controller 1 to DRAM (sum of all channels).",
+ "Counter": "4",
+ "EventCode": "0xff",
+ "EventName": "UNC_MC1_WRCAS_COUNT_FREERUN",
+ "PerPkg": "1",
+        "PublicDescription": "Counts every CAS write command sent from the Memory Controller 1 to DRAM (sum of all channels). Each CAS command can be for 32B or 64B of data.",
+ "UMask": "0x30",
+ "Unit": "imc_free_running_1"
+ },
+ {
+ "BriefDescription": "ACT command for a read request sent to DRAM",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x24",
+ "EventName": "UNC_M_ACT_COUNT_RD",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "ACT command sent to DRAM",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x26",
+ "EventName": "UNC_M_ACT_COUNT_TOTAL",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "ACT command for a write request sent to DRAM",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x25",
+ "EventName": "UNC_M_ACT_COUNT_WR",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Read CAS command sent to DRAM",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_CAS_COUNT_RD",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Write CAS command sent to DRAM",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x23",
+ "EventName": "UNC_M_CAS_COUNT_WR",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Any Rank at Hot state",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x19",
+ "EventName": "UNC_M_DRAM_THERMAL_HOT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Any Rank at Warm state",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x1A",
+ "EventName": "UNC_M_DRAM_THERMAL_WARM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "PRE command sent to DRAM due to page table idle timer expiration",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x28",
+ "EventName": "UNC_M_PRE_COUNT_IDLE",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "PRE command sent to DRAM for a read/write request",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x27",
+ "EventName": "UNC_M_PRE_COUNT_PAGE_MISS",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of bytes read from DRAM, in 32B chunks. Counter increments by 1 after receiving 32B chunk data.",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x3A",
+ "EventName": "UNC_M_RD_DATA",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Total number of read and write byte transfers to/from DRAM, in 32B chunks. Counter increments by 1 after sending or receiving 32B chunk data.",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x3C",
+ "EventName": "UNC_M_TOTAL_DATA",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of bytes written to DRAM, in 32B chunks. Counter increments by 1 after sending 32B chunk data.",
+ "Counter": "0,1,2,3,4",
+ "EventCode": "0x3B",
+ "EventName": "UNC_M_WR_DATA",
+ "PerPkg": "1",
+ "Unit": "iMC"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/haswell/uncore-other.json b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-other.json
index 2af92e43b28a..b3f9c588b410 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/uncore-other.json
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/uncore-other.json
@@ -1,9 +1,10 @@
[
{
"BriefDescription": "This 48-bit fixed counter counts the UCLK cycles.",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_CLOCK.SOCKET",
"PerPkg": "1",
- "Unit": "CLOCK"
+ "Unit": "CNCU"
}
]
diff --git a/tools/perf/pmu-events/arch/x86/arrowlake/virtual-memory.json b/tools/perf/pmu-events/arch/x86/arrowlake/virtual-memory.json
new file mode 100644
index 000000000000..a3e4a4f3ab45
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/arrowlake/virtual-memory.json
@@ -0,0 +1,522 @@
+[
+ {
+ "BriefDescription": "Counts the number of page walks initiated by a demand load that missed the first and second level TLBs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.MISS_CAUSED_WALK",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of first level TLB misses but second level hits due to a demand load that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.STLB_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.STLB_HIT",
+ "PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x320",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of first level TLB misses but second level hits due to a demand load that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.STLB_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
+ "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a demand load.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0xe",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
+ "SampleAfterValue": "200003",
+ "UMask": "0xe",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data load to a 1G page.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
+ "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data load to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks initiated by a store that missed the first and second level TLBs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.MISS_CAUSED_WALK",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of first level TLB misses but second level hits due to stores that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.STLB_HIT",
+        "PublicDescription": "Counts the number of first level TLB misses but second level hits due to stores that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLB.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.STLB_HIT",
+ "PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x320",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of first level TLB misses but second level hits due to stores that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.STLB_HIT",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
+ "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a store.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0xe",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 1G page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
+ "SampleAfterValue": "2000003",
+ "UMask": "0xe",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data store to a 1G page.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
+ "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data store to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+        "BriefDescription": "Counts the number of page walks initiated by an instruction fetch that missed the first and second level TLBs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.MISS_CAUSED_WALK",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1",
+ "Unit": "cpu_atom"
+ },
+ {
+        "BriefDescription": "Counts the number of page walks initiated by an instruction fetch that missed the first and second level TLBs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.MISS_CAUSED_WALK",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1",
+ "Unit": "cpu_lowpower"
+ },
+ {
+        "BriefDescription": "Counts the number of first level TLB misses but second level hits due to an instruction fetch that did not start a page walk. Accounts for all page sizes. Will result in an ITLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.STLB_HIT",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.STLB_HIT",
+ "PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x120",
+ "Unit": "cpu_core"
+ },
+ {
+        "BriefDescription": "Counts the number of first level TLB misses but second level hits due to an instruction fetch that did not start a page walk. Accounts for all page sizes. Will result in an ITLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.STLB_HIT",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "CounterMask": "1",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_ACTIVE",
+ "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a code (instruction fetch) request.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0xe",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0xe",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xe",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding for iside in PMH every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for iside in PMH every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals. Walks could be counted by edge detecting on this event, but would count restarted suspended walks.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7,8,9",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10",
+ "Unit": "cpu_core"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding for iside in PMH every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for iside in PMH every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals. Walks could be counted by edge detecting on this event, but would count restarted suspended walks.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10",
+ "Unit": "cpu_lowpower"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DTLB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.DTLB_MISS_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x90",
+ "Unit": "cpu_atom"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DTLB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.DTLB_MISS_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x90",
+ "Unit": "cpu_lowpower"
+ }
+]
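
The block above closes the additions for a hybrid part: the same EventName (ITLB_MISSES.WALK_COMPLETED, ITLB_MISSES.WALK_PENDING, and so on) appears once per "Unit" (cpu_core, cpu_atom, cpu_lowpower), each with its own EventCode, Counter list and SampleAfterValue, and perf resolves the encoding from whichever PMU the event is opened on (roughly cpu_core/ITLB_MISSES.WALK_COMPLETED/ versus cpu_atom/ITLB_MISSES.WALK_COMPLETED/ on the command line). A minimal Python sketch, using a placeholder file name because the file for this hunk is not named in this excerpt, that groups the entries by EventName and prints the per-unit encodings:

    # Minimal sketch; "virtual-memory.json" is a placeholder, not taken from this patch.
    import json
    from collections import defaultdict

    events = json.load(open("virtual-memory.json"))
    by_name = defaultdict(list)
    for ev in events:
        by_name[ev["EventName"]].append(ev)

    for name, variants in sorted(by_name.items()):
        if len(variants) > 1:                          # one entry per core type
            for v in variants:
                print(name, v.get("Unit"), v["EventCode"], v.get("UMask"), v["Counter"])
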
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/cache.json b/tools/perf/pmu-events/arch/x86/bonnell/cache.json
index 1ca95a70d48a..86582bb8aa39 100644
--- a/tools/perf/pmu-events/arch/x86/bonnell/cache.json
+++ b/tools/perf/pmu-events/arch/x86/bonnell/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1 Data Cacheable reads and writes",
+ "Counter": "0,1",
"EventCode": "0x40",
"EventName": "L1D_CACHE.ALL_CACHE_REF",
"SampleAfterValue": "2000000",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "L1 Data reads and writes",
+ "Counter": "0,1",
"EventCode": "0x40",
"EventName": "L1D_CACHE.ALL_REF",
"SampleAfterValue": "2000000",
@@ -15,6 +17,7 @@
},
{
"BriefDescription": "Modified cache lines evicted from the L1 data cache",
+ "Counter": "0,1",
"EventCode": "0x40",
"EventName": "L1D_CACHE.EVICT",
"SampleAfterValue": "200000",
@@ -22,6 +25,7 @@
},
{
"BriefDescription": "L1 Cacheable Data Reads",
+ "Counter": "0,1",
"EventCode": "0x40",
"EventName": "L1D_CACHE.LD",
"SampleAfterValue": "2000000",
@@ -29,6 +33,7 @@
},
{
"BriefDescription": "L1 Data line replacements",
+ "Counter": "0,1",
"EventCode": "0x40",
"EventName": "L1D_CACHE.REPL",
"SampleAfterValue": "200000",
@@ -36,6 +41,7 @@
},
{
"BriefDescription": "Modified cache lines allocated in the L1 data cache",
+ "Counter": "0,1",
"EventCode": "0x40",
"EventName": "L1D_CACHE.REPLM",
"SampleAfterValue": "200000",
@@ -43,6 +49,7 @@
},
{
"BriefDescription": "L1 Cacheable Data Writes",
+ "Counter": "0,1",
"EventCode": "0x40",
"EventName": "L1D_CACHE.ST",
"SampleAfterValue": "2000000",
@@ -50,6 +57,7 @@
},
{
"BriefDescription": "Cycles L2 address bus is in use.",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "L2_ADS.SELF",
"SampleAfterValue": "200000",
@@ -57,6 +65,7 @@
},
{
"BriefDescription": "All data requests from the L1 data cache",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "L2_DATA_RQSTS.SELF.E_STATE",
"SampleAfterValue": "200000",
@@ -64,6 +73,7 @@
},
{
"BriefDescription": "All data requests from the L1 data cache",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "L2_DATA_RQSTS.SELF.I_STATE",
"SampleAfterValue": "200000",
@@ -71,6 +81,7 @@
},
{
"BriefDescription": "All data requests from the L1 data cache",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "L2_DATA_RQSTS.SELF.MESI",
"SampleAfterValue": "200000",
@@ -78,6 +89,7 @@
},
{
"BriefDescription": "All data requests from the L1 data cache",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "L2_DATA_RQSTS.SELF.M_STATE",
"SampleAfterValue": "200000",
@@ -85,6 +97,7 @@
},
{
"BriefDescription": "All data requests from the L1 data cache",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "L2_DATA_RQSTS.SELF.S_STATE",
"SampleAfterValue": "200000",
@@ -92,6 +105,7 @@
},
{
"BriefDescription": "Cycles the L2 cache data bus is busy.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "L2_DBUS_BUSY.SELF",
"SampleAfterValue": "200000",
@@ -99,6 +113,7 @@
},
{
"BriefDescription": "Cycles the L2 transfers data to the core.",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "L2_DBUS_BUSY_RD.SELF",
"SampleAfterValue": "200000",
@@ -106,6 +121,7 @@
},
{
"BriefDescription": "L2 cacheable instruction fetch requests",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "L2_IFETCH.SELF.E_STATE",
"SampleAfterValue": "200000",
@@ -113,6 +129,7 @@
},
{
"BriefDescription": "L2 cacheable instruction fetch requests",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "L2_IFETCH.SELF.I_STATE",
"SampleAfterValue": "200000",
@@ -120,6 +137,7 @@
},
{
"BriefDescription": "L2 cacheable instruction fetch requests",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "L2_IFETCH.SELF.MESI",
"SampleAfterValue": "200000",
@@ -127,6 +145,7 @@
},
{
"BriefDescription": "L2 cacheable instruction fetch requests",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "L2_IFETCH.SELF.M_STATE",
"SampleAfterValue": "200000",
@@ -134,6 +153,7 @@
},
{
"BriefDescription": "L2 cacheable instruction fetch requests",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "L2_IFETCH.SELF.S_STATE",
"SampleAfterValue": "200000",
@@ -141,6 +161,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.ANY.E_STATE",
"SampleAfterValue": "200000",
@@ -148,6 +169,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.ANY.I_STATE",
"SampleAfterValue": "200000",
@@ -155,6 +177,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.ANY.MESI",
"SampleAfterValue": "200000",
@@ -162,6 +185,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.ANY.M_STATE",
"SampleAfterValue": "200000",
@@ -169,6 +193,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.ANY.S_STATE",
"SampleAfterValue": "200000",
@@ -176,6 +201,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.DEMAND.E_STATE",
"SampleAfterValue": "200000",
@@ -183,6 +209,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.DEMAND.I_STATE",
"SampleAfterValue": "200000",
@@ -190,6 +217,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.DEMAND.MESI",
"SampleAfterValue": "200000",
@@ -197,6 +225,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.DEMAND.M_STATE",
"SampleAfterValue": "200000",
@@ -204,6 +233,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.DEMAND.S_STATE",
"SampleAfterValue": "200000",
@@ -211,6 +241,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.PREFETCH.E_STATE",
"SampleAfterValue": "200000",
@@ -218,6 +249,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.PREFETCH.I_STATE",
"SampleAfterValue": "200000",
@@ -225,6 +257,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.PREFETCH.MESI",
"SampleAfterValue": "200000",
@@ -232,6 +265,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.PREFETCH.M_STATE",
"SampleAfterValue": "200000",
@@ -239,6 +273,7 @@
},
{
"BriefDescription": "L2 cache reads",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "L2_LD.SELF.PREFETCH.S_STATE",
"SampleAfterValue": "200000",
@@ -246,6 +281,7 @@
},
{
"BriefDescription": "All read requests from L1 instruction and data caches",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "L2_LD_IFETCH.SELF.E_STATE",
"SampleAfterValue": "200000",
@@ -253,6 +289,7 @@
},
{
"BriefDescription": "All read requests from L1 instruction and data caches",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "L2_LD_IFETCH.SELF.I_STATE",
"SampleAfterValue": "200000",
@@ -260,6 +297,7 @@
},
{
"BriefDescription": "All read requests from L1 instruction and data caches",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "L2_LD_IFETCH.SELF.MESI",
"SampleAfterValue": "200000",
@@ -267,6 +305,7 @@
},
{
"BriefDescription": "All read requests from L1 instruction and data caches",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "L2_LD_IFETCH.SELF.M_STATE",
"SampleAfterValue": "200000",
@@ -274,6 +313,7 @@
},
{
"BriefDescription": "All read requests from L1 instruction and data caches",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "L2_LD_IFETCH.SELF.S_STATE",
"SampleAfterValue": "200000",
@@ -281,6 +321,7 @@
},
{
"BriefDescription": "L2 cache misses.",
+ "Counter": "0,1",
"EventCode": "0x24",
"EventName": "L2_LINES_IN.SELF.ANY",
"SampleAfterValue": "200000",
@@ -288,6 +329,7 @@
},
{
"BriefDescription": "L2 cache misses.",
+ "Counter": "0,1",
"EventCode": "0x24",
"EventName": "L2_LINES_IN.SELF.DEMAND",
"SampleAfterValue": "200000",
@@ -295,6 +337,7 @@
},
{
"BriefDescription": "L2 cache misses.",
+ "Counter": "0,1",
"EventCode": "0x24",
"EventName": "L2_LINES_IN.SELF.PREFETCH",
"SampleAfterValue": "200000",
@@ -302,6 +345,7 @@
},
{
"BriefDescription": "L2 cache lines evicted.",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "L2_LINES_OUT.SELF.ANY",
"SampleAfterValue": "200000",
@@ -309,6 +353,7 @@
},
{
"BriefDescription": "L2 cache lines evicted.",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "L2_LINES_OUT.SELF.DEMAND",
"SampleAfterValue": "200000",
@@ -316,6 +361,7 @@
},
{
"BriefDescription": "L2 cache lines evicted.",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "L2_LINES_OUT.SELF.PREFETCH",
"SampleAfterValue": "200000",
@@ -323,6 +369,7 @@
},
{
"BriefDescription": "L2 locked accesses",
+ "Counter": "0,1",
"EventCode": "0x2B",
"EventName": "L2_LOCK.SELF.E_STATE",
"SampleAfterValue": "200000",
@@ -330,6 +377,7 @@
},
{
"BriefDescription": "L2 locked accesses",
+ "Counter": "0,1",
"EventCode": "0x2B",
"EventName": "L2_LOCK.SELF.I_STATE",
"SampleAfterValue": "200000",
@@ -337,6 +385,7 @@
},
{
"BriefDescription": "L2 locked accesses",
+ "Counter": "0,1",
"EventCode": "0x2B",
"EventName": "L2_LOCK.SELF.MESI",
"SampleAfterValue": "200000",
@@ -344,6 +393,7 @@
},
{
"BriefDescription": "L2 locked accesses",
+ "Counter": "0,1",
"EventCode": "0x2B",
"EventName": "L2_LOCK.SELF.M_STATE",
"SampleAfterValue": "200000",
@@ -351,6 +401,7 @@
},
{
"BriefDescription": "L2 locked accesses",
+ "Counter": "0,1",
"EventCode": "0x2B",
"EventName": "L2_LOCK.SELF.S_STATE",
"SampleAfterValue": "200000",
@@ -358,6 +409,7 @@
},
{
"BriefDescription": "L2 cache line modifications.",
+ "Counter": "0,1",
"EventCode": "0x25",
"EventName": "L2_M_LINES_IN.SELF",
"SampleAfterValue": "200000",
@@ -365,6 +417,7 @@
},
{
"BriefDescription": "Modified lines evicted from the L2 cache",
+ "Counter": "0,1",
"EventCode": "0x27",
"EventName": "L2_M_LINES_OUT.SELF.ANY",
"SampleAfterValue": "200000",
@@ -372,6 +425,7 @@
},
{
"BriefDescription": "Modified lines evicted from the L2 cache",
+ "Counter": "0,1",
"EventCode": "0x27",
"EventName": "L2_M_LINES_OUT.SELF.DEMAND",
"SampleAfterValue": "200000",
@@ -379,6 +433,7 @@
},
{
"BriefDescription": "Modified lines evicted from the L2 cache",
+ "Counter": "0,1",
"EventCode": "0x27",
"EventName": "L2_M_LINES_OUT.SELF.PREFETCH",
"SampleAfterValue": "200000",
@@ -386,6 +441,7 @@
},
{
"BriefDescription": "Cycles no L2 cache requests are pending",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "L2_NO_REQ.SELF",
"SampleAfterValue": "200000",
@@ -393,6 +449,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.ANY.E_STATE",
"SampleAfterValue": "200000",
@@ -400,6 +457,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.ANY.I_STATE",
"SampleAfterValue": "200000",
@@ -407,6 +465,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.ANY.MESI",
"SampleAfterValue": "200000",
@@ -414,6 +473,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.ANY.M_STATE",
"SampleAfterValue": "200000",
@@ -421,6 +481,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.ANY.S_STATE",
"SampleAfterValue": "200000",
@@ -428,6 +489,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.DEMAND.E_STATE",
"SampleAfterValue": "200000",
@@ -435,6 +497,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.DEMAND.I_STATE",
"SampleAfterValue": "200000",
@@ -442,6 +505,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.DEMAND.MESI",
"SampleAfterValue": "200000",
@@ -449,6 +513,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.DEMAND.M_STATE",
"SampleAfterValue": "200000",
@@ -456,6 +521,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.DEMAND.S_STATE",
"SampleAfterValue": "200000",
@@ -463,6 +529,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.PREFETCH.E_STATE",
"SampleAfterValue": "200000",
@@ -470,6 +537,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.PREFETCH.I_STATE",
"SampleAfterValue": "200000",
@@ -477,6 +545,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.PREFETCH.MESI",
"SampleAfterValue": "200000",
@@ -484,6 +553,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.PREFETCH.M_STATE",
"SampleAfterValue": "200000",
@@ -491,6 +561,7 @@
},
{
"BriefDescription": "Rejected L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x30",
"EventName": "L2_REJECT_BUSQ.SELF.PREFETCH.S_STATE",
"SampleAfterValue": "200000",
@@ -498,6 +569,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.ANY.E_STATE",
"SampleAfterValue": "200000",
@@ -505,6 +577,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.ANY.I_STATE",
"SampleAfterValue": "200000",
@@ -512,6 +585,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.ANY.MESI",
"SampleAfterValue": "200000",
@@ -519,6 +593,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.ANY.M_STATE",
"SampleAfterValue": "200000",
@@ -526,6 +601,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.ANY.S_STATE",
"SampleAfterValue": "200000",
@@ -533,6 +609,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.DEMAND.E_STATE",
"SampleAfterValue": "200000",
@@ -540,6 +617,7 @@
},
{
"BriefDescription": "L2 cache demand requests from this core that missed the L2",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.DEMAND.I_STATE",
"SampleAfterValue": "200000",
@@ -547,6 +625,7 @@
},
{
"BriefDescription": "L2 cache demand requests from this core",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.DEMAND.MESI",
"SampleAfterValue": "200000",
@@ -554,6 +633,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.DEMAND.M_STATE",
"SampleAfterValue": "200000",
@@ -561,6 +641,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.DEMAND.S_STATE",
"SampleAfterValue": "200000",
@@ -568,6 +649,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.PREFETCH.E_STATE",
"SampleAfterValue": "200000",
@@ -575,6 +657,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.PREFETCH.I_STATE",
"SampleAfterValue": "200000",
@@ -582,6 +665,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.PREFETCH.MESI",
"SampleAfterValue": "200000",
@@ -589,6 +673,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.PREFETCH.M_STATE",
"SampleAfterValue": "200000",
@@ -596,6 +681,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "L2_RQSTS.SELF.PREFETCH.S_STATE",
"SampleAfterValue": "200000",
@@ -603,6 +689,7 @@
},
{
"BriefDescription": "L2 store requests",
+ "Counter": "0,1",
"EventCode": "0x2A",
"EventName": "L2_ST.SELF.E_STATE",
"SampleAfterValue": "200000",
@@ -610,6 +697,7 @@
},
{
"BriefDescription": "L2 store requests",
+ "Counter": "0,1",
"EventCode": "0x2A",
"EventName": "L2_ST.SELF.I_STATE",
"SampleAfterValue": "200000",
@@ -617,6 +705,7 @@
},
{
"BriefDescription": "L2 store requests",
+ "Counter": "0,1",
"EventCode": "0x2A",
"EventName": "L2_ST.SELF.MESI",
"SampleAfterValue": "200000",
@@ -624,6 +713,7 @@
},
{
"BriefDescription": "L2 store requests",
+ "Counter": "0,1",
"EventCode": "0x2A",
"EventName": "L2_ST.SELF.M_STATE",
"SampleAfterValue": "200000",
@@ -631,6 +721,7 @@
},
{
"BriefDescription": "L2 store requests",
+ "Counter": "0,1",
"EventCode": "0x2A",
"EventName": "L2_ST.SELF.S_STATE",
"SampleAfterValue": "200000",
@@ -638,6 +729,7 @@
},
{
"BriefDescription": "Retired loads that hit the L2 cache (precise event).",
+ "Counter": "0,1",
"EventCode": "0xCB",
"EventName": "MEM_LOAD_RETIRED.L2_HIT",
"SampleAfterValue": "200000",
@@ -645,6 +737,7 @@
},
{
"BriefDescription": "Retired loads that miss the L2 cache",
+ "Counter": "0,1",
"EventCode": "0xCB",
"EventName": "MEM_LOAD_RETIRED.L2_MISS",
"SampleAfterValue": "10000",
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/counter.json b/tools/perf/pmu-events/arch/x86/bonnell/counter.json
new file mode 100644
index 000000000000..eb89b55f31bd
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/bonnell/counter.json
@@ -0,0 +1,7 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "4",
+ "CountersNumGeneric": "2"
+ }
+] \ No newline at end of file
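
counter.json is a new per-architecture file that declares, for each PMU unit, how many fixed and how many generic counters it exposes; the per-event "Counter" strings added throughout this patch have to fit within that generic range (fixed-counter events are annotated separately, see the pipeline.json hunk further down). A hedged cross-check sketch, illustration only rather than a check the build is shown here to perform:

    # Hedged sketch, illustration only: check that every event's "Counter" list
    # fits within the generic counters declared in counter.json for the same
    # architecture directory (path taken from the diff header above).
    import glob
    import json
    import os

    arch_dir = "tools/perf/pmu-events/arch/x86/bonnell"
    layout = {c["Unit"]: int(c["CountersNumGeneric"])
              for c in json.load(open(os.path.join(arch_dir, "counter.json")))}

    for path in glob.glob(os.path.join(arch_dir, "*.json")):
        if path.endswith("counter.json"):
            continue
        for ev in json.load(open(path)):
            ctr = ev.get("Counter", "")
            if not ctr or "Fixed" in ctr:         # fixed counters are declared separately
                continue
            used = [int(x) for x in ctr.split(",")]
            limit = layout.get(ev.get("Unit", "core"), 0)
            assert max(used) < limit, (path, ev["EventName"], ctr)
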
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/floating-point.json b/tools/perf/pmu-events/arch/x86/bonnell/floating-point.json
index 18bf5ec47e72..d1bd5be95a15 100644
--- a/tools/perf/pmu-events/arch/x86/bonnell/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/bonnell/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Floating point assists for retired operations.",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "FP_ASSIST.AR",
"SampleAfterValue": "10000",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Floating point assists.",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "FP_ASSIST.S",
"SampleAfterValue": "10000",
@@ -15,12 +17,14 @@
},
{
"BriefDescription": "SIMD assists invoked.",
+ "Counter": "0,1",
"EventCode": "0xCD",
"EventName": "SIMD_ASSIST",
"SampleAfterValue": "100000"
},
{
"BriefDescription": "Retired computational Streaming SIMD Extensions (SSE) packed-single instructions.",
+ "Counter": "0,1",
"EventCode": "0xCA",
"EventName": "SIMD_COMP_INST_RETIRED.PACKED_SINGLE",
"SampleAfterValue": "2000000",
@@ -28,6 +32,7 @@
},
{
"BriefDescription": "Retired computational Streaming SIMD Extensions 2 (SSE2) scalar-double instructions.",
+ "Counter": "0,1",
"EventCode": "0xCA",
"EventName": "SIMD_COMP_INST_RETIRED.SCALAR_DOUBLE",
"SampleAfterValue": "2000000",
@@ -35,6 +40,7 @@
},
{
"BriefDescription": "Retired computational Streaming SIMD Extensions (SSE) scalar-single instructions.",
+ "Counter": "0,1",
"EventCode": "0xCA",
"EventName": "SIMD_COMP_INST_RETIRED.SCALAR_SINGLE",
"SampleAfterValue": "2000000",
@@ -42,12 +48,14 @@
},
{
"BriefDescription": "SIMD Instructions retired.",
+ "Counter": "0,1",
"EventCode": "0xCE",
"EventName": "SIMD_INSTR_RETIRED",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "Retired Streaming SIMD Extensions (SSE) packed-single instructions.",
+ "Counter": "0,1",
"EventCode": "0xC7",
"EventName": "SIMD_INST_RETIRED.PACKED_SINGLE",
"SampleAfterValue": "2000000",
@@ -55,6 +63,7 @@
},
{
"BriefDescription": "Retired Streaming SIMD Extensions 2 (SSE2) scalar-double instructions.",
+ "Counter": "0,1",
"EventCode": "0xC7",
"EventName": "SIMD_INST_RETIRED.SCALAR_DOUBLE",
"SampleAfterValue": "2000000",
@@ -62,6 +71,7 @@
},
{
"BriefDescription": "Retired Streaming SIMD Extensions (SSE) scalar-single instructions.",
+ "Counter": "0,1",
"EventCode": "0xC7",
"EventName": "SIMD_INST_RETIRED.SCALAR_SINGLE",
"SampleAfterValue": "2000000",
@@ -69,6 +79,7 @@
},
{
"BriefDescription": "Retired Streaming SIMD Extensions 2 (SSE2) vector instructions.",
+ "Counter": "0,1",
"EventCode": "0xC7",
"EventName": "SIMD_INST_RETIRED.VECTOR",
"SampleAfterValue": "2000000",
@@ -76,12 +87,14 @@
},
{
"BriefDescription": "Saturated arithmetic instructions retired.",
+ "Counter": "0,1",
"EventCode": "0xCF",
"EventName": "SIMD_SAT_INSTR_RETIRED",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "SIMD saturated arithmetic micro-ops retired.",
+ "Counter": "0,1",
"EventCode": "0xB1",
"EventName": "SIMD_SAT_UOP_EXEC.AR",
"SampleAfterValue": "2000000",
@@ -89,12 +102,14 @@
},
{
"BriefDescription": "SIMD saturated arithmetic micro-ops executed.",
+ "Counter": "0,1",
"EventCode": "0xB1",
"EventName": "SIMD_SAT_UOP_EXEC.S",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "SIMD micro-ops retired (excluding stores).",
+ "Counter": "0,1",
"EventCode": "0xB0",
"EventName": "SIMD_UOPS_EXEC.AR",
"PEBS": "2",
@@ -103,12 +118,14 @@
},
{
"BriefDescription": "SIMD micro-ops executed (excluding stores).",
+ "Counter": "0,1",
"EventCode": "0xB0",
"EventName": "SIMD_UOPS_EXEC.S",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "SIMD packed arithmetic micro-ops retired",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.ARITHMETIC.AR",
"SampleAfterValue": "2000000",
@@ -116,6 +133,7 @@
},
{
"BriefDescription": "SIMD packed arithmetic micro-ops executed",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.ARITHMETIC.S",
"SampleAfterValue": "2000000",
@@ -123,6 +141,7 @@
},
{
"BriefDescription": "SIMD packed logical micro-ops retired",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.LOGICAL.AR",
"SampleAfterValue": "2000000",
@@ -130,6 +149,7 @@
},
{
"BriefDescription": "SIMD packed logical micro-ops executed",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.LOGICAL.S",
"SampleAfterValue": "2000000",
@@ -137,6 +157,7 @@
},
{
"BriefDescription": "SIMD packed multiply micro-ops retired",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.MUL.AR",
"SampleAfterValue": "2000000",
@@ -144,6 +165,7 @@
},
{
"BriefDescription": "SIMD packed multiply micro-ops executed",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.MUL.S",
"SampleAfterValue": "2000000",
@@ -151,6 +173,7 @@
},
{
"BriefDescription": "SIMD packed micro-ops retired",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.PACK.AR",
"SampleAfterValue": "2000000",
@@ -158,6 +181,7 @@
},
{
"BriefDescription": "SIMD packed micro-ops executed",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.PACK.S",
"SampleAfterValue": "2000000",
@@ -165,6 +189,7 @@
},
{
"BriefDescription": "SIMD packed shift micro-ops retired",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.SHIFT.AR",
"SampleAfterValue": "2000000",
@@ -172,6 +197,7 @@
},
{
"BriefDescription": "SIMD packed shift micro-ops executed",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.SHIFT.S",
"SampleAfterValue": "2000000",
@@ -179,6 +205,7 @@
},
{
"BriefDescription": "SIMD unpacked micro-ops retired",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.UNPACK.AR",
"SampleAfterValue": "2000000",
@@ -186,6 +213,7 @@
},
{
"BriefDescription": "SIMD unpacked micro-ops executed",
+ "Counter": "0,1",
"EventCode": "0xB3",
"EventName": "SIMD_UOP_TYPE_EXEC.UNPACK.S",
"SampleAfterValue": "2000000",
@@ -193,6 +221,7 @@
},
{
"BriefDescription": "Floating point computational micro-ops retired.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "X87_COMP_OPS_EXE.ANY.AR",
"PEBS": "2",
@@ -201,6 +230,7 @@
},
{
"BriefDescription": "Floating point computational micro-ops executed.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "X87_COMP_OPS_EXE.ANY.S",
"SampleAfterValue": "2000000",
@@ -208,6 +238,7 @@
},
{
"BriefDescription": "FXCH uops retired.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "X87_COMP_OPS_EXE.FXCH.AR",
"PEBS": "2",
@@ -216,6 +247,7 @@
},
{
"BriefDescription": "FXCH uops executed.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "X87_COMP_OPS_EXE.FXCH.S",
"SampleAfterValue": "2000000",
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/frontend.json b/tools/perf/pmu-events/arch/x86/bonnell/frontend.json
index 42284c02c11d..7657fd6d3a11 100644
--- a/tools/perf/pmu-events/arch/x86/bonnell/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/bonnell/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "BACLEARS asserted.",
+ "Counter": "0,1",
"EventCode": "0xE6",
"EventName": "BACLEARS.ANY",
"SampleAfterValue": "2000000",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Cycles during which instruction fetches are stalled.",
+ "Counter": "0,1",
"EventCode": "0x86",
"EventName": "CYCLES_ICACHE_MEM_STALLED.ICACHE_MEM_STALLED",
"SampleAfterValue": "2000000",
@@ -15,6 +17,7 @@
},
{
"BriefDescription": "Decode stall due to IQ full",
+ "Counter": "0,1",
"EventCode": "0x87",
"EventName": "DECODE_STALL.IQ_FULL",
"SampleAfterValue": "2000000",
@@ -22,6 +25,7 @@
},
{
"BriefDescription": "Decode stall due to PFB empty",
+ "Counter": "0,1",
"EventCode": "0x87",
"EventName": "DECODE_STALL.PFB_EMPTY",
"SampleAfterValue": "2000000",
@@ -29,6 +33,7 @@
},
{
"BriefDescription": "Instruction fetches.",
+ "Counter": "0,1",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"SampleAfterValue": "200000",
@@ -36,6 +41,7 @@
},
{
"BriefDescription": "Icache hit",
+ "Counter": "0,1",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"SampleAfterValue": "200000",
@@ -43,6 +49,7 @@
},
{
"BriefDescription": "Icache miss",
+ "Counter": "0,1",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"SampleAfterValue": "200000",
@@ -50,6 +57,7 @@
},
{
"BriefDescription": "All Instructions decoded",
+ "Counter": "0,1",
"EventCode": "0xAA",
"EventName": "MACRO_INSTS.ALL_DECODED",
"SampleAfterValue": "2000000",
@@ -57,6 +65,7 @@
},
{
"BriefDescription": "CISC macro instructions decoded",
+ "Counter": "0,1",
"EventCode": "0xAA",
"EventName": "MACRO_INSTS.CISC_DECODED",
"SampleAfterValue": "2000000",
@@ -64,6 +73,7 @@
},
{
"BriefDescription": "Non-CISC macro instructions decoded",
+ "Counter": "0,1",
"EventCode": "0xAA",
"EventName": "MACRO_INSTS.NON_CISC_DECODED",
"SampleAfterValue": "2000000",
@@ -71,6 +81,7 @@
},
{
"BriefDescription": "This event counts the cycles where 1 or more uops are issued by the micro-sequencer (MS), including microcode assists and inserted flows, and written to the IQ.",
+ "Counter": "0,1",
"CounterMask": "1",
"EventCode": "0xA9",
"EventName": "UOPS.MS_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/memory.json b/tools/perf/pmu-events/arch/x86/bonnell/memory.json
index ac02dc2482c8..f8b45b6fb4d3 100644
--- a/tools/perf/pmu-events/arch/x86/bonnell/memory.json
+++ b/tools/perf/pmu-events/arch/x86/bonnell/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Nonzero segbase 1 bubble",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.BUBBLE",
"SampleAfterValue": "200000",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Nonzero segbase load 1 bubble",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.LD_BUBBLE",
"SampleAfterValue": "200000",
@@ -15,6 +17,7 @@
},
{
"BriefDescription": "Load splits",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.LD_SPLIT",
"SampleAfterValue": "200000",
@@ -22,6 +25,7 @@
},
{
"BriefDescription": "Load splits (At Retirement)",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.LD_SPLIT.AR",
"SampleAfterValue": "200000",
@@ -29,6 +33,7 @@
},
{
"BriefDescription": "Nonzero segbase ld-op-st 1 bubble",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.RMW_BUBBLE",
"SampleAfterValue": "200000",
@@ -36,6 +41,7 @@
},
{
"BriefDescription": "ld-op-st splits",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.RMW_SPLIT",
"SampleAfterValue": "200000",
@@ -43,6 +49,7 @@
},
{
"BriefDescription": "Memory references that cross an 8-byte boundary.",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.SPLIT",
"SampleAfterValue": "200000",
@@ -50,6 +57,7 @@
},
{
"BriefDescription": "Memory references that cross an 8-byte boundary (At Retirement)",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.SPLIT.AR",
"SampleAfterValue": "200000",
@@ -57,6 +65,7 @@
},
{
"BriefDescription": "Nonzero segbase store 1 bubble",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.ST_BUBBLE",
"SampleAfterValue": "200000",
@@ -64,6 +73,7 @@
},
{
"BriefDescription": "Store splits",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.ST_SPLIT",
"SampleAfterValue": "200000",
@@ -71,6 +81,7 @@
},
{
"BriefDescription": "Store splits (Ar Retirement)",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "MISALIGN_MEM_REF.ST_SPLIT.AR",
"SampleAfterValue": "200000",
@@ -78,6 +89,7 @@
},
{
"BriefDescription": "L1 hardware prefetch request",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.HW_PREFETCH",
"SampleAfterValue": "2000000",
@@ -85,6 +97,7 @@
},
{
"BriefDescription": "Streaming SIMD Extensions (SSE) Prefetch NTA instructions executed",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.PREFETCHNTA",
"SampleAfterValue": "200000",
@@ -92,6 +105,7 @@
},
{
"BriefDescription": "Streaming SIMD Extensions (SSE) PrefetchT0 instructions executed.",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.PREFETCHT0",
"SampleAfterValue": "200000",
@@ -99,6 +113,7 @@
},
{
"BriefDescription": "Streaming SIMD Extensions (SSE) PrefetchT1 instructions executed.",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.PREFETCHT1",
"SampleAfterValue": "200000",
@@ -106,6 +121,7 @@
},
{
"BriefDescription": "Streaming SIMD Extensions (SSE) PrefetchT2 instructions executed.",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.PREFETCHT2",
"SampleAfterValue": "200000",
@@ -113,6 +129,7 @@
},
{
"BriefDescription": "Any Software prefetch",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.SOFTWARE_PREFETCH",
"SampleAfterValue": "200000",
@@ -120,6 +137,7 @@
},
{
"BriefDescription": "Any Software prefetch",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.SOFTWARE_PREFETCH.AR",
"SampleAfterValue": "200000",
@@ -127,6 +145,7 @@
},
{
"BriefDescription": "Streaming SIMD Extensions (SSE) PrefetchT1 and PrefetchT2 instructions executed",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "PREFETCH.SW_L2",
"SampleAfterValue": "200000",
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/other.json b/tools/perf/pmu-events/arch/x86/bonnell/other.json
index 782594c8bda5..6e6f64b96834 100644
--- a/tools/perf/pmu-events/arch/x86/bonnell/other.json
+++ b/tools/perf/pmu-events/arch/x86/bonnell/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Bus queue is empty.",
+ "Counter": "0,1",
"EventCode": "0x7D",
"EventName": "BUSQ_EMPTY.SELF",
"SampleAfterValue": "200000",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Number of Bus Not Ready signals asserted.",
+ "Counter": "0,1",
"EventCode": "0x61",
"EventName": "BUS_BNR_DRV.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -15,12 +17,14 @@
},
{
"BriefDescription": "Number of Bus Not Ready signals asserted.",
+ "Counter": "0,1",
"EventCode": "0x61",
"EventName": "BUS_BNR_DRV.THIS_AGENT",
"SampleAfterValue": "200000"
},
{
"BriefDescription": "Bus cycles while processor receives data.",
+ "Counter": "0,1",
"EventCode": "0x64",
"EventName": "BUS_DATA_RCV.SELF",
"SampleAfterValue": "200000",
@@ -28,6 +32,7 @@
},
{
"BriefDescription": "Bus cycles when data is sent on the bus.",
+ "Counter": "0,1",
"EventCode": "0x62",
"EventName": "BUS_DRDY_CLOCKS.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -35,12 +40,14 @@
},
{
"BriefDescription": "Bus cycles when data is sent on the bus.",
+ "Counter": "0,1",
"EventCode": "0x62",
"EventName": "BUS_DRDY_CLOCKS.THIS_AGENT",
"SampleAfterValue": "200000"
},
{
"BriefDescription": "HITM signal asserted.",
+ "Counter": "0,1",
"EventCode": "0x7B",
"EventName": "BUS_HITM_DRV.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -48,12 +55,14 @@
},
{
"BriefDescription": "HITM signal asserted.",
+ "Counter": "0,1",
"EventCode": "0x7B",
"EventName": "BUS_HITM_DRV.THIS_AGENT",
"SampleAfterValue": "200000"
},
{
"BriefDescription": "HIT signal asserted.",
+ "Counter": "0,1",
"EventCode": "0x7A",
"EventName": "BUS_HIT_DRV.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -61,12 +70,14 @@
},
{
"BriefDescription": "HIT signal asserted.",
+ "Counter": "0,1",
"EventCode": "0x7A",
"EventName": "BUS_HIT_DRV.THIS_AGENT",
"SampleAfterValue": "200000"
},
{
"BriefDescription": "IO requests waiting in the bus queue.",
+ "Counter": "0,1",
"EventCode": "0x7F",
"EventName": "BUS_IO_WAIT.SELF",
"SampleAfterValue": "200000",
@@ -74,6 +85,7 @@
},
{
"BriefDescription": "Bus cycles when a LOCK signal is asserted.",
+ "Counter": "0,1",
"EventCode": "0x63",
"EventName": "BUS_LOCK_CLOCKS.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -81,6 +93,7 @@
},
{
"BriefDescription": "Bus cycles when a LOCK signal is asserted.",
+ "Counter": "0,1",
"EventCode": "0x63",
"EventName": "BUS_LOCK_CLOCKS.SELF",
"SampleAfterValue": "200000",
@@ -88,6 +101,7 @@
},
{
"BriefDescription": "Outstanding cacheable data read bus requests duration.",
+ "Counter": "0,1",
"EventCode": "0x60",
"EventName": "BUS_REQUEST_OUTSTANDING.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -95,6 +109,7 @@
},
{
"BriefDescription": "Outstanding cacheable data read bus requests duration.",
+ "Counter": "0,1",
"EventCode": "0x60",
"EventName": "BUS_REQUEST_OUTSTANDING.SELF",
"SampleAfterValue": "200000",
@@ -102,6 +117,7 @@
},
{
"BriefDescription": "All bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x70",
"EventName": "BUS_TRANS_ANY.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -109,6 +125,7 @@
},
{
"BriefDescription": "All bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x70",
"EventName": "BUS_TRANS_ANY.SELF",
"SampleAfterValue": "200000",
@@ -116,6 +133,7 @@
},
{
"BriefDescription": "Burst read bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x65",
"EventName": "BUS_TRANS_BRD.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -123,6 +141,7 @@
},
{
"BriefDescription": "Burst read bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x65",
"EventName": "BUS_TRANS_BRD.SELF",
"SampleAfterValue": "200000",
@@ -130,6 +149,7 @@
},
{
"BriefDescription": "Burst (full cache-line) bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6E",
"EventName": "BUS_TRANS_BURST.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -137,6 +157,7 @@
},
{
"BriefDescription": "Burst (full cache-line) bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6E",
"EventName": "BUS_TRANS_BURST.SELF",
"SampleAfterValue": "200000",
@@ -144,6 +165,7 @@
},
{
"BriefDescription": "Deferred bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6D",
"EventName": "BUS_TRANS_DEF.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -151,6 +173,7 @@
},
{
"BriefDescription": "Deferred bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6D",
"EventName": "BUS_TRANS_DEF.SELF",
"SampleAfterValue": "200000",
@@ -158,6 +181,7 @@
},
{
"BriefDescription": "Instruction-fetch bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x68",
"EventName": "BUS_TRANS_IFETCH.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -165,6 +189,7 @@
},
{
"BriefDescription": "Instruction-fetch bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x68",
"EventName": "BUS_TRANS_IFETCH.SELF",
"SampleAfterValue": "200000",
@@ -172,6 +197,7 @@
},
{
"BriefDescription": "Invalidate bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x69",
"EventName": "BUS_TRANS_INVAL.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -179,6 +205,7 @@
},
{
"BriefDescription": "Invalidate bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x69",
"EventName": "BUS_TRANS_INVAL.SELF",
"SampleAfterValue": "200000",
@@ -186,6 +213,7 @@
},
{
"BriefDescription": "IO bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6C",
"EventName": "BUS_TRANS_IO.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -193,6 +221,7 @@
},
{
"BriefDescription": "IO bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6C",
"EventName": "BUS_TRANS_IO.SELF",
"SampleAfterValue": "200000",
@@ -200,6 +229,7 @@
},
{
"BriefDescription": "Memory bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6F",
"EventName": "BUS_TRANS_MEM.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -207,6 +237,7 @@
},
{
"BriefDescription": "Memory bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6F",
"EventName": "BUS_TRANS_MEM.SELF",
"SampleAfterValue": "200000",
@@ -214,6 +245,7 @@
},
{
"BriefDescription": "Partial bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6B",
"EventName": "BUS_TRANS_P.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -221,6 +253,7 @@
},
{
"BriefDescription": "Partial bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x6B",
"EventName": "BUS_TRANS_P.SELF",
"SampleAfterValue": "200000",
@@ -228,6 +261,7 @@
},
{
"BriefDescription": "Partial write bus transaction.",
+ "Counter": "0,1",
"EventCode": "0x6A",
"EventName": "BUS_TRANS_PWR.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -235,6 +269,7 @@
},
{
"BriefDescription": "Partial write bus transaction.",
+ "Counter": "0,1",
"EventCode": "0x6A",
"EventName": "BUS_TRANS_PWR.SELF",
"SampleAfterValue": "200000",
@@ -242,6 +277,7 @@
},
{
"BriefDescription": "RFO bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x66",
"EventName": "BUS_TRANS_RFO.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -249,6 +285,7 @@
},
{
"BriefDescription": "RFO bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x66",
"EventName": "BUS_TRANS_RFO.SELF",
"SampleAfterValue": "200000",
@@ -256,6 +293,7 @@
},
{
"BriefDescription": "Explicit writeback bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x67",
"EventName": "BUS_TRANS_WB.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -263,6 +301,7 @@
},
{
"BriefDescription": "Explicit writeback bus transactions.",
+ "Counter": "0,1",
"EventCode": "0x67",
"EventName": "BUS_TRANS_WB.SELF",
"SampleAfterValue": "200000",
@@ -270,6 +309,7 @@
},
{
"BriefDescription": "Cycles during which interrupts are disabled.",
+ "Counter": "0,1",
"EventCode": "0xC6",
"EventName": "CYCLES_INT_MASKED.CYCLES_INT_MASKED",
"SampleAfterValue": "2000000",
@@ -277,26 +317,22 @@
},
{
"BriefDescription": "Cycles during which interrupts are pending and disabled.",
+ "Counter": "0,1",
"EventCode": "0xC6",
"EventName": "CYCLES_INT_MASKED.CYCLES_INT_PENDING_AND_MASKED",
"SampleAfterValue": "2000000",
"UMask": "0x2"
},
{
- "BriefDescription": "Memory cluster signals to block micro-op dispatch for any reason",
- "EventCode": "0x9",
- "EventName": "DISPATCH_BLOCKED.ANY",
- "SampleAfterValue": "200000",
- "UMask": "0x20"
- },
- {
"BriefDescription": "Number of Enhanced Intel SpeedStep(R) Technology (EIST) transitions",
+ "Counter": "0,1",
"EventCode": "0x3A",
"EventName": "EIST_TRANS",
"SampleAfterValue": "200000"
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.ALL_AGENTS.ANY",
"SampleAfterValue": "200000",
@@ -304,6 +340,7 @@
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.ALL_AGENTS.CLEAN",
"SampleAfterValue": "200000",
@@ -311,6 +348,7 @@
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.ALL_AGENTS.HIT",
"SampleAfterValue": "200000",
@@ -318,6 +356,7 @@
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.ALL_AGENTS.HITM",
"SampleAfterValue": "200000",
@@ -325,6 +364,7 @@
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.THIS_AGENT.ANY",
"SampleAfterValue": "200000",
@@ -332,6 +372,7 @@
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.THIS_AGENT.CLEAN",
"SampleAfterValue": "200000",
@@ -339,6 +380,7 @@
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.THIS_AGENT.HIT",
"SampleAfterValue": "200000",
@@ -346,6 +388,7 @@
},
{
"BriefDescription": "External snoops.",
+ "Counter": "0,1",
"EventCode": "0x77",
"EventName": "EXT_SNOOP.THIS_AGENT.HITM",
"SampleAfterValue": "200000",
@@ -353,12 +396,14 @@
},
{
"BriefDescription": "Hardware interrupts received.",
+ "Counter": "0,1",
"EventCode": "0xC8",
"EventName": "HW_INT_RCV",
"SampleAfterValue": "200000"
},
{
"BriefDescription": "Number of segment register loads.",
+ "Counter": "0,1",
"EventCode": "0x6",
"EventName": "SEGMENT_REG_LOADS.ANY",
"SampleAfterValue": "200000",
@@ -366,6 +411,7 @@
},
{
"BriefDescription": "Bus stalled for snoops.",
+ "Counter": "0,1",
"EventCode": "0x7E",
"EventName": "SNOOP_STALL_DRV.ALL_AGENTS",
"SampleAfterValue": "200000",
@@ -373,6 +419,7 @@
},
{
"BriefDescription": "Bus stalled for snoops.",
+ "Counter": "0,1",
"EventCode": "0x7E",
"EventName": "SNOOP_STALL_DRV.SELF",
"SampleAfterValue": "200000",
@@ -380,6 +427,7 @@
},
{
"BriefDescription": "Number of thermal trips",
+ "Counter": "0,1",
"EventCode": "0x3B",
"EventName": "THERMAL_TRIP",
"SampleAfterValue": "200000",
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/pipeline.json b/tools/perf/pmu-events/arch/x86/bonnell/pipeline.json
index 91b98ee8ba9a..48d3d053a369 100644
--- a/tools/perf/pmu-events/arch/x86/bonnell/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/bonnell/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Bogus branches",
+ "Counter": "0,1",
"EventCode": "0xE4",
"EventName": "BOGUS_BR",
"SampleAfterValue": "2000000",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Branch instructions decoded",
+ "Counter": "0,1",
"EventCode": "0xE0",
"EventName": "BR_INST_DECODED",
"SampleAfterValue": "2000000",
@@ -15,12 +17,14 @@
},
{
"BriefDescription": "Retired branch instructions.",
+ "Counter": "0,1",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ANY",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "Retired branch instructions.",
+ "Counter": "0,1",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ANY1",
"SampleAfterValue": "2000000",
@@ -28,6 +32,7 @@
},
{
"BriefDescription": "Retired mispredicted branch instructions (precise event).",
+ "Counter": "0,1",
"EventCode": "0xC5",
"EventName": "BR_INST_RETIRED.MISPRED",
"PEBS": "1",
@@ -35,6 +40,7 @@
},
{
"BriefDescription": "Retired branch instructions that were mispredicted not-taken.",
+ "Counter": "0,1",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.MISPRED_NOT_TAKEN",
"SampleAfterValue": "200000",
@@ -42,6 +48,7 @@
},
{
"BriefDescription": "Retired branch instructions that were mispredicted taken.",
+ "Counter": "0,1",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.MISPRED_TAKEN",
"SampleAfterValue": "200000",
@@ -49,6 +56,7 @@
},
{
"BriefDescription": "Retired branch instructions that were predicted not-taken.",
+ "Counter": "0,1",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.PRED_NOT_TAKEN",
"SampleAfterValue": "2000000",
@@ -56,6 +64,7 @@
},
{
"BriefDescription": "Retired branch instructions that were predicted taken.",
+ "Counter": "0,1",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.PRED_TAKEN",
"SampleAfterValue": "2000000",
@@ -63,6 +72,7 @@
},
{
"BriefDescription": "Retired taken branch instructions.",
+ "Counter": "0,1",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.TAKEN",
"SampleAfterValue": "2000000",
@@ -70,6 +80,7 @@
},
{
"BriefDescription": "All macro conditional branch instructions.",
+ "Counter": "0,1",
"EventCode": "0x88",
"EventName": "BR_INST_TYPE_RETIRED.COND",
"SampleAfterValue": "2000000",
@@ -77,6 +88,7 @@
},
{
"BriefDescription": "Only taken macro conditional branch instructions",
+ "Counter": "0,1",
"EventCode": "0x88",
"EventName": "BR_INST_TYPE_RETIRED.COND_TAKEN",
"SampleAfterValue": "2000000",
@@ -84,6 +96,7 @@
},
{
"BriefDescription": "All non-indirect calls",
+ "Counter": "0,1",
"EventCode": "0x88",
"EventName": "BR_INST_TYPE_RETIRED.DIR_CALL",
"SampleAfterValue": "2000000",
@@ -91,6 +104,7 @@
},
{
"BriefDescription": "All indirect branches that are not calls.",
+ "Counter": "0,1",
"EventCode": "0x88",
"EventName": "BR_INST_TYPE_RETIRED.IND",
"SampleAfterValue": "2000000",
@@ -98,6 +112,7 @@
},
{
"BriefDescription": "All indirect calls, including both register and memory indirect.",
+ "Counter": "0,1",
"EventCode": "0x88",
"EventName": "BR_INST_TYPE_RETIRED.IND_CALL",
"SampleAfterValue": "2000000",
@@ -105,6 +120,7 @@
},
{
"BriefDescription": "All indirect branches that have a return mnemonic",
+ "Counter": "0,1",
"EventCode": "0x88",
"EventName": "BR_INST_TYPE_RETIRED.RET",
"SampleAfterValue": "2000000",
@@ -112,6 +128,7 @@
},
{
"BriefDescription": "All macro unconditional branch instructions, excluding calls and indirects",
+ "Counter": "0,1",
"EventCode": "0x88",
"EventName": "BR_INST_TYPE_RETIRED.UNCOND",
"SampleAfterValue": "2000000",
@@ -119,6 +136,7 @@
},
{
"BriefDescription": "Mispredicted cond branch instructions retired",
+ "Counter": "0,1",
"EventCode": "0x89",
"EventName": "BR_MISSP_TYPE_RETIRED.COND",
"SampleAfterValue": "200000",
@@ -126,6 +144,7 @@
},
{
"BriefDescription": "Mispredicted and taken cond branch instructions retired",
+ "Counter": "0,1",
"EventCode": "0x89",
"EventName": "BR_MISSP_TYPE_RETIRED.COND_TAKEN",
"SampleAfterValue": "200000",
@@ -133,6 +152,7 @@
},
{
"BriefDescription": "Mispredicted ind branches that are not calls",
+ "Counter": "0,1",
"EventCode": "0x89",
"EventName": "BR_MISSP_TYPE_RETIRED.IND",
"SampleAfterValue": "200000",
@@ -140,6 +160,7 @@
},
{
"BriefDescription": "Mispredicted indirect calls, including both register and memory indirect.",
+ "Counter": "0,1",
"EventCode": "0x89",
"EventName": "BR_MISSP_TYPE_RETIRED.IND_CALL",
"SampleAfterValue": "200000",
@@ -147,6 +168,7 @@
},
{
"BriefDescription": "Mispredicted return branches",
+ "Counter": "0,1",
"EventCode": "0x89",
"EventName": "BR_MISSP_TYPE_RETIRED.RETURN",
"SampleAfterValue": "200000",
@@ -154,6 +176,7 @@
},
{
"BriefDescription": "Bus cycles when core is not halted",
+ "Counter": "0,1",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.BUS",
"SampleAfterValue": "200000",
@@ -161,31 +184,44 @@
},
{
"BriefDescription": "Core cycles when core is not halted",
+ "Counter": "Fixed counter 2",
"EventCode": "0xA",
"EventName": "CPU_CLK_UNHALTED.CORE",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "Core cycles when core is not halted",
+ "Counter": "0,1",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.CORE_P",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "Reference cycles when core is not halted.",
+ "Counter": "Fixed counter 3",
"EventCode": "0xA",
"EventName": "CPU_CLK_UNHALTED.REF",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "Cycles the divider is busy.",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "CYCLES_DIV_BUSY",
"SampleAfterValue": "2000000",
"UMask": "0x1"
},
{
+ "BriefDescription": "Memory cluster signals to block micro-op dispatch for any reason",
+ "Counter": "0,1",
+ "EventCode": "0x9",
+ "EventName": "DISPATCH_BLOCKED.ANY",
+ "SampleAfterValue": "200000",
+ "UMask": "0x20"
+ },
+ {
"BriefDescription": "Divide operations retired",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "DIV.AR",
"SampleAfterValue": "2000000",
@@ -193,6 +229,7 @@
},
{
"BriefDescription": "Divide operations executed.",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "DIV.S",
"SampleAfterValue": "2000000",
@@ -200,12 +237,14 @@
},
{
"BriefDescription": "Instructions retired.",
+ "Counter": "Fixed counter 1",
"EventCode": "0xA",
"EventName": "INST_RETIRED.ANY",
"SampleAfterValue": "2000000"
},
{
"BriefDescription": "Instructions retired (precise event).",
+ "Counter": "0,1",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
"PEBS": "2",
@@ -213,6 +252,7 @@
},
{
"BriefDescription": "Self-Modifying Code detected.",
+ "Counter": "0,1",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"SampleAfterValue": "200000",
@@ -220,6 +260,7 @@
},
{
"BriefDescription": "Multiply operations retired",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "MUL.AR",
"SampleAfterValue": "2000000",
@@ -227,6 +268,7 @@
},
{
"BriefDescription": "Multiply operations executed.",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "MUL.S",
"SampleAfterValue": "2000000",
@@ -234,6 +276,7 @@
},
{
"BriefDescription": "Micro-op reissues for any cause",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "REISSUE.ANY",
"SampleAfterValue": "200000",
@@ -241,6 +284,7 @@
},
{
"BriefDescription": "Micro-op reissues for any cause (At Retirement)",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "REISSUE.ANY.AR",
"SampleAfterValue": "200000",
@@ -248,6 +292,7 @@
},
{
"BriefDescription": "Micro-op reissues on a store-load collision",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "REISSUE.OVERLAP_STORE",
"SampleAfterValue": "200000",
@@ -255,6 +300,7 @@
},
{
"BriefDescription": "Micro-op reissues on a store-load collision (At Retirement)",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "REISSUE.OVERLAP_STORE.AR",
"SampleAfterValue": "200000",
@@ -262,6 +308,7 @@
},
{
"BriefDescription": "Cycles issue is stalled due to div busy.",
+ "Counter": "0,1",
"EventCode": "0xDC",
"EventName": "RESOURCE_STALLS.DIV_BUSY",
"SampleAfterValue": "2000000",
@@ -269,6 +316,7 @@
},
{
"BriefDescription": "All store forwards",
+ "Counter": "0,1",
"EventCode": "0x2",
"EventName": "STORE_FORWARDS.ANY",
"SampleAfterValue": "200000",
@@ -276,6 +324,7 @@
},
{
"BriefDescription": "Good store forwards",
+ "Counter": "0,1",
"EventCode": "0x2",
"EventName": "STORE_FORWARDS.GOOD",
"SampleAfterValue": "200000",
@@ -283,6 +332,7 @@
},
{
"BriefDescription": "Micro-ops retired.",
+ "Counter": "0,1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ANY",
"SampleAfterValue": "2000000",
@@ -290,6 +340,7 @@
},
{
"BriefDescription": "Cycles no micro-ops retired.",
+ "Counter": "0,1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALLED_CYCLES",
"SampleAfterValue": "2000000",
@@ -297,6 +348,7 @@
},
{
"BriefDescription": "Periods no micro-ops retired.",
+ "Counter": "0,1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALLS",
"SampleAfterValue": "2000000",
diff --git a/tools/perf/pmu-events/arch/x86/bonnell/virtual-memory.json b/tools/perf/pmu-events/arch/x86/bonnell/virtual-memory.json
index 82e07c73cff0..e8512c585572 100644
--- a/tools/perf/pmu-events/arch/x86/bonnell/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/bonnell/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Memory accesses that missed the DTLB.",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "DATA_TLB_MISSES.DTLB_MISS",
"SampleAfterValue": "200000",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "DTLB misses due to load operations.",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "DATA_TLB_MISSES.DTLB_MISS_LD",
"SampleAfterValue": "200000",
@@ -15,6 +17,7 @@
},
{
"BriefDescription": "DTLB misses due to store operations.",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "DATA_TLB_MISSES.DTLB_MISS_ST",
"SampleAfterValue": "200000",
@@ -22,6 +25,7 @@
},
{
"BriefDescription": "L0 DTLB misses due to load operations.",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "DATA_TLB_MISSES.L0_DTLB_MISS_LD",
"SampleAfterValue": "200000",
@@ -29,6 +33,7 @@
},
{
"BriefDescription": "L0 DTLB misses due to store operations",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "DATA_TLB_MISSES.L0_DTLB_MISS_ST",
"SampleAfterValue": "200000",
@@ -36,6 +41,7 @@
},
{
"BriefDescription": "ITLB flushes.",
+ "Counter": "0,1",
"EventCode": "0x82",
"EventName": "ITLB.FLUSH",
"SampleAfterValue": "200000",
@@ -43,6 +49,7 @@
},
{
"BriefDescription": "ITLB hits.",
+ "Counter": "0,1",
"EventCode": "0x82",
"EventName": "ITLB.HIT",
"SampleAfterValue": "200000",
@@ -50,6 +57,7 @@
},
{
"BriefDescription": "ITLB misses.",
+ "Counter": "0,1",
"EventCode": "0x82",
"EventName": "ITLB.MISSES",
"PEBS": "2",
@@ -58,6 +66,7 @@
},
{
"BriefDescription": "Retired loads that miss the DTLB (precise event).",
+ "Counter": "0,1",
"EventCode": "0xCB",
"EventName": "MEM_LOAD_RETIRED.DTLB_MISS",
"PEBS": "1",
@@ -66,6 +75,7 @@
},
{
"BriefDescription": "Duration of page-walks in core cycles",
+ "Counter": "0,1",
"EventCode": "0xC",
"EventName": "PAGE_WALKS.CYCLES",
"SampleAfterValue": "2000000",
@@ -73,6 +83,7 @@
},
{
"BriefDescription": "Duration of D-side only page walks",
+ "Counter": "0,1",
"EventCode": "0xC",
"EventName": "PAGE_WALKS.D_SIDE_CYCLES",
"SampleAfterValue": "2000000",
@@ -80,6 +91,7 @@
},
{
"BriefDescription": "Number of D-side only page walks",
+ "Counter": "0,1",
"EventCode": "0xC",
"EventName": "PAGE_WALKS.D_SIDE_WALKS",
"SampleAfterValue": "200000",
@@ -87,6 +99,7 @@
},
{
"BriefDescription": "Duration of I-Side page walks",
+ "Counter": "0,1",
"EventCode": "0xC",
"EventName": "PAGE_WALKS.I_SIDE_CYCLES",
"SampleAfterValue": "2000000",
@@ -94,6 +107,7 @@
},
{
"BriefDescription": "Number of I-Side page walks",
+ "Counter": "0,1",
"EventCode": "0xC",
"EventName": "PAGE_WALKS.I_SIDE_WALKS",
"SampleAfterValue": "200000",
@@ -101,6 +115,7 @@
},
{
"BriefDescription": "Number of page-walks executed.",
+ "Counter": "0,1",
"EventCode": "0xC",
"EventName": "PAGE_WALKS.WALKS",
"SampleAfterValue": "200000",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json b/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json
index 55a10b0bf36f..1d8e910f5961 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/bdw-metrics.json
@@ -1,49 +1,49 @@
[
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per core",
- "MetricExpr": "cstate_core@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
@@ -80,17 +80,16 @@
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slots",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY",
@@ -98,9 +97,8 @@
},
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1",
@@ -121,7 +119,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -139,7 +137,6 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_cisc",
@@ -160,7 +157,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS)))) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -181,7 +178,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -190,10 +187,10 @@
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.FPU_DIV_ACTIVE / tma_info_core_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_UOPS",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
"ScaleUnit": "100%"
},
{
@@ -211,7 +208,7 @@
"MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -227,7 +224,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_LOAD_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store",
@@ -236,7 +233,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_STORE_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load",
@@ -245,7 +242,7 @@
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
"MetricExpr": "60 * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
@@ -255,7 +252,7 @@
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
"PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
@@ -266,7 +263,7 @@
"MetricExpr": "tma_frontend_bound - tma_fetch_latency",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
@@ -292,7 +289,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / UOPS_RETIRED.RETIRE_SLOTS",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -301,7 +298,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@ / UOPS_RETIRED.RETIRE_SLOTS",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.VECTOR / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -314,7 +311,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_128b",
"MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -323,13 +320,13 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_256b",
"MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots",
- "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1",
@@ -343,27 +340,20 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses.",
"MetricExpr": "ICACHE.IFDATA_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
- "MetricExpr": "(tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES",
- "MetricGroup": "Bad;BrMispredicts;tma_issueBM",
- "MetricName": "tma_info_bad_spec_branch_misprediction_cost",
- "PublicDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_mispredicts_resteers"
- },
- {
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * cpu@BR_MISP_EXEC.ALL_BRANCHES\\,umask\\=0xE4@)",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
@@ -389,20 +379,20 @@
},
{
"BriefDescription": "Floating Point Operations Per Cycle",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks",
"MetricGroup": "Flops;Ret",
"MetricName": "tma_info_core_flopc"
},
{
"BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@) / (2 * tma_info_core_core_clks)",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR) / (2 * tma_info_core_core_clks)",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_core_fp_arith_utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
},
@@ -421,6 +411,12 @@
"MetricName": "tma_info_frontend_ipunknown_branch"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -435,11 +431,11 @@
},
{
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
@@ -447,7 +443,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx128",
"MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
@@ -455,7 +451,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx256",
"MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
@@ -463,7 +459,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_dp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
@@ -471,7 +467,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_sp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
@@ -489,7 +485,7 @@
},
{
"BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_ipflop",
"MetricThreshold": "tma_info_inst_mix_ipflop < 10"
@@ -509,129 +505,129 @@
"MetricThreshold": "tma_info_inst_mix_ipstore < 8"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 9",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_all"
},
{
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_load"
},
{
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem;Offcore",
+ "MetricGroup": "CacheHits;Mem;Offcore",
"MetricName": "tma_info_memory_l2mpki_all"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki_load"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "0",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
"BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
@@ -641,8 +637,8 @@
"MetricThreshold": "tma_info_memory_tlb_page_walks_utilization > 0.5"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute"
},
@@ -653,30 +649,36 @@
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / tma_info_system_time / 1e3",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
"PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
{
"BriefDescription": "Giga Floating Point Operations Per Second",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / duration_time",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
},
{
"BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
@@ -699,17 +701,17 @@
"MetricThreshold": "tma_info_system_kernel_utilization > 0.05"
},
{
- "BriefDescription": "Average number of parallel requests to external memory",
- "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / UNC_ARB_TRK_OCCUPANCY.CYCLES_WITH_ANY_REQUEST",
- "MetricGroup": "Mem;SoC",
- "MetricName": "tma_info_system_mem_parallel_requests",
- "PublicDescription": "Average number of parallel requests to external memory. Accounts for all requests"
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
},
{
- "BriefDescription": "Average latency of all requests to external memory (in Uncore cycles)",
- "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / UNC_ARB_TRK_REQUESTS.ALL",
- "MetricGroup": "Mem;SoC",
- "MetricName": "tma_info_system_mem_request_latency"
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "power@energy\\-pkg@ * 61 / (tma_info_system_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
},
{
"BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
@@ -724,6 +726,13 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
@@ -768,7 +777,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
@@ -777,25 +786,25 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS",
@@ -805,20 +814,20 @@
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricConstraint": "NO_GROUP_EVENTS_SMT",
"MetricExpr": "MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_MISS / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
"ScaleUnit": "100%"
},
{
@@ -837,12 +846,11 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_load_op_utilization",
@@ -854,7 +862,7 @@
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
@@ -864,7 +872,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -872,21 +880,21 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
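As a rough illustration of how the tma_mem_bandwidth and tma_mem_latency expressions above combine their events, here is a minimal Python sketch with hypothetical counts; it assumes tma_info_thread_clks reduces to CPU_CLK_UNHALTED.THREAD, which this hunk does not spell out.

# Hypothetical raw counts for one measurement interval.
counts = {
    "CPU_CLK_UNHALTED.THREAD": 1_000_000,
    # cmask=4: cycles with at least 4 data-read requests outstanding
    "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD:cmask=4": 220_000,
    # cycles with at least 1 data-read request outstanding
    "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD": 480_000,
}

clks = counts["CPU_CLK_UNHALTED.THREAD"]

# Cycles likely limited by DRAM bandwidth: the data-read queue is deep (>= 4 outstanding).
tma_mem_bandwidth = min(clks, counts["OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD:cmask=4"]) / clks

# Cycles likely limited by DRAM latency: reads are outstanding, but not enough
# of them to be classified as bandwidth-bound.
tma_mem_latency = (min(clks, counts["OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD"]) / clks
                   - tma_mem_bandwidth)

print(f"tma_mem_bandwidth = {tma_mem_bandwidth:.1%}")  # 22.0%
print(f"tma_mem_latency   = {tma_mem_latency:.1%}")    # 26.0%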
@@ -912,7 +920,7 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)",
- "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
@@ -923,7 +931,7 @@
"MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.",
"ScaleUnit": "100%"
},
@@ -942,7 +950,7 @@
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_0",
"MetricThreshold": "tma_port_0 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED_PORT.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -951,7 +959,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_1",
"MetricThreshold": "tma_port_1 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -991,12 +999,12 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1020,7 +1028,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else (CYCLE_ACTIVITY.STALLS_TOTAL - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else CYCLE_ACTIVITY.STALLS_TOTAL - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1029,7 +1037,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_1",
"MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1038,7 +1046,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_2",
"MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1048,15 +1056,15 @@
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).",
"MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1",
@@ -1069,7 +1077,7 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
@@ -1085,7 +1093,7 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
@@ -1113,7 +1121,7 @@
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 9 * (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) + (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -1130,10 +1138,10 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "tma_branch_resteers - tma_mispredicts_resteers - tma_clears_resteers",
- "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit). Sample with: BACLEARS.ANY",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY",
"ScaleUnit": "100%"
},
{
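The broadwell/cache.json hunks below add a "Counter" field to every event, listing which general-purpose counters the event can be scheduled on (most events allow "0,1,2,3", while L1D_PEND_MISS.PENDING and its CounterMask variants are restricted to counter 2). A minimal Python sketch of how a tool might consume that field, assuming the path from the diff header and that the field is a comma-separated list of counter indices:

import json

# Path taken from the diff header; adjust to your kernel tree.
path = "tools/perf/pmu-events/arch/x86/broadwell/cache.json"

with open(path) as f:
    events = json.load(f)

for ev in events:
    # "Counter" lists the general-purpose counters the event may use,
    # e.g. "0,1,2,3" for unrestricted events or "2" for L1D_PEND_MISS.PENDING.
    counters = [c for c in ev.get("Counter", "").split(",") if c]
    if 0 < len(counters) < 4:
        print(f'{ev["EventName"]}: restricted to counter(s) {",".join(counters)}')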
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/cache.json b/tools/perf/pmu-events/arch/x86/broadwell/cache.json
index f8ee5aefccea..fd3df87318c5 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/cache.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "This event counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles a demand request was blocked due to Fill Buffers unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
@@ -17,14 +19,16 @@
},
{
"BriefDescription": "L1D miss outstandings duration in cycles",
+ "Counter": "2",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
- "PublicDescription": "This event counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand; from the demand Hit FB, if it is allocated by hardware or software prefetch.\nNote: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
+ "PublicDescription": "This event counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand; from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -35,6 +39,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Not rejected writebacks that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_DEMAND_RQSTS.WB_HIT",
"PublicDescription": "This event counts the number of WB requests that hit L2 cache.",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "This event counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "L2 cache lines in E state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.E",
"PublicDescription": "This event counts the number of L2 cache lines in the Exclusive state filling the L2. Counting does not cover rejects.",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "L2 cache lines in I state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.I",
"PublicDescription": "This event counts the number of L2 cache lines in the Invalidate state filling the L2. Counting does not cover rejects.",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "L2 cache lines in S state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.S",
"PublicDescription": "This event counts the number of L2 cache lines in the Shared state filling the L2. Counting does not cover rejects.",
@@ -83,6 +93,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by demand.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_CLEAN",
"SampleAfterValue": "100003",
@@ -90,6 +101,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "This event counts the total number of L2 code requests.",
@@ -98,6 +110,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "This event counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.",
@@ -106,6 +119,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"SampleAfterValue": "200003",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
"SampleAfterValue": "200003",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Requests from L2 hardware prefetchers",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "This event counts the total number of requests from the L2 hardware prefetchers.",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "This event counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -136,6 +153,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"SampleAfterValue": "200003",
@@ -143,6 +161,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"SampleAfterValue": "200003",
@@ -150,6 +169,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests, initiated by load instructions, that hit L2 cache.",
@@ -158,6 +178,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "This event counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "L2 prefetch requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_HIT",
"PublicDescription": "This event counts the number of requests from the L2 hardware prefetchers that hit L2 cache. L3 prefetch new types.",
@@ -174,6 +196,7 @@
},
{
"BriefDescription": "L2 prefetch requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_MISS",
"PublicDescription": "This event counts the number of requests from the L2 hardware prefetchers that miss L2 cache.",
@@ -182,6 +205,7 @@
},
{
"BriefDescription": "All requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
"SampleAfterValue": "200003",
@@ -189,6 +213,7 @@
},
{
"BriefDescription": "All L2 requests.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
"SampleAfterValue": "200003",
@@ -196,6 +221,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"SampleAfterValue": "200003",
@@ -203,6 +229,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"SampleAfterValue": "200003",
@@ -210,6 +237,7 @@
},
{
"BriefDescription": "L2 or L3 HW prefetches that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_PF",
"PublicDescription": "This event counts L2 or L3 HW prefetches that access L2 cache including rejects.",
@@ -218,6 +246,7 @@
},
{
"BriefDescription": "Transactions accessing L2 pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_REQUESTS",
"PublicDescription": "This event counts transactions that access the L2 pipe including snoops, pagewalks, and so on.",
@@ -226,6 +255,7 @@
},
{
"BriefDescription": "L2 cache accesses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.CODE_RD",
"PublicDescription": "This event counts the number of L2 cache accesses when fetching instructions.",
@@ -234,6 +264,7 @@
},
{
"BriefDescription": "Demand Data Read requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.DEMAND_DATA_RD",
"PublicDescription": "This event counts Demand Data Read requests that access L2 cache, including rejects.",
@@ -242,6 +273,7 @@
},
{
"BriefDescription": "L1D writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L1D_WB",
"PublicDescription": "This event counts L1D writebacks that access L2 cache.",
@@ -250,6 +282,7 @@
},
{
"BriefDescription": "L2 fill requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_FILL",
"PublicDescription": "This event counts L2 fill requests that access L2 cache.",
@@ -258,6 +291,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "This event counts L2 writebacks that access L2 cache.",
@@ -266,6 +300,7 @@
},
{
"BriefDescription": "RFO requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.RFO",
"PublicDescription": "This event counts Read for Ownership (RFO) requests that access L2 cache.",
@@ -274,6 +309,7 @@
},
{
"BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
"PublicDescription": "This event counts the number of cycles when the L1D is locked. It is a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION).",
@@ -282,6 +318,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "This event counts core-originated cacheable demand requests that miss the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.",
@@ -290,6 +327,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "This event counts core-originated cacheable demand requests that refer to the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.",
@@ -298,6 +336,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -309,6 +348,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were HitM responses from shared L3.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -320,6 +360,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -331,6 +372,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -342,6 +384,7 @@
},
{
"BriefDescription": "Data from local DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70, BDM100",
"EventCode": "0xD3",
@@ -353,26 +396,29 @@
},
{
"BriefDescription": "Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HIT_LFB",
"PEBS": "1",
- "PublicDescription": "This event counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready.\nNote: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.",
+ "PublicDescription": "This event counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.",
"SampleAfterValue": "100003",
"UMask": "0x40"
},
{
"BriefDescription": "Retired load uops with L1 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"PEBS": "1",
- "PublicDescription": "This event counts retired load uops which data sources were hits in the nearest-level (L1) cache.\nNote: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.",
+ "PublicDescription": "This event counts retired load uops which data sources were hits in the nearest-level (L1) cache. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load uops misses in L1 cache as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
@@ -383,6 +429,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM35",
"EventCode": "0xD1",
@@ -394,6 +441,7 @@
},
{
"BriefDescription": "Miss in mid-level (L2) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
@@ -404,6 +452,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were data hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD1",
@@ -415,6 +464,7 @@
},
{
"BriefDescription": "Miss in last-level (L3) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100, BDE70",
"EventCode": "0xD1",
@@ -425,6 +475,7 @@
},
{
"BriefDescription": "Retired load uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
@@ -435,6 +486,7 @@
},
{
"BriefDescription": "Retired store uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
@@ -445,6 +497,7 @@
},
{
"BriefDescription": "Retired load uops with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM35",
"EventCode": "0xD0",
@@ -456,6 +509,7 @@
},
{
"BriefDescription": "Retired load uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
@@ -466,6 +520,7 @@
},
{
"BriefDescription": "Retired store uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
@@ -476,6 +531,7 @@
},
{
"BriefDescription": "Retired load uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
@@ -486,6 +542,7 @@
},
{
"BriefDescription": "Retired store uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
@@ -496,6 +553,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "This event counts the demand and prefetch data reads. All Core Data Reads include cacheable Demands and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -504,6 +562,7 @@
},
{
"BriefDescription": "Any memory transaction that reached the SQ.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"PublicDescription": "This event counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, and so on.",
@@ -512,6 +571,7 @@
},
{
"BriefDescription": "Cacheable and non-cacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "This event counts both cacheable and non-cacheable code read requests.",
@@ -520,6 +580,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "This event counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -528,6 +589,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "This event counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
@@ -536,14 +598,16 @@
},
{
"BriefDescription": "Offcore requests buffer cannot take more entries for this thread core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
- "PublicDescription": "This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full.\nNote: Writeback pending FIFO has six entries.",
+ "PublicDescription": "This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full. Note: Writeback pending FIFO has six entries.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
@@ -553,6 +617,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -563,6 +628,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -573,6 +639,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -583,6 +650,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
@@ -592,15 +660,17 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
- "PublicDescription": "This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS.\nNote: A prefetch promoted to Demand is counted from the promotion point.",
+ "PublicDescription": "This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS. Note: A prefetch promoted to Demand is counted from the promotion point.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -610,6 +680,7 @@
},
{
"BriefDescription": "Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
@@ -619,6 +690,7 @@
},
{
"BriefDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE",
"SampleAfterValue": "100003",
@@ -626,6 +698,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -635,6 +708,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -644,6 +718,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -653,6 +728,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -662,6 +738,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -671,6 +748,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -680,6 +758,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -689,6 +768,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -698,6 +778,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -707,6 +788,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -716,6 +798,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -725,6 +808,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -734,6 +818,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -743,6 +828,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -752,6 +838,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -761,6 +848,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -770,6 +858,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -779,6 +868,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -788,6 +878,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -797,6 +888,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -806,6 +898,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -815,6 +908,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -824,6 +918,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -833,6 +928,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -842,6 +938,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -851,6 +948,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -860,6 +958,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -869,6 +968,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -878,6 +978,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -887,6 +988,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -896,6 +998,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -905,6 +1008,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -914,6 +1018,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -923,6 +1028,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -932,6 +1038,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -941,6 +1048,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -950,6 +1058,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -959,6 +1068,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -968,6 +1078,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -977,6 +1088,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -986,6 +1098,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -995,6 +1108,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1004,6 +1118,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1013,6 +1128,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1022,6 +1138,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1031,6 +1148,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1040,6 +1158,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1049,6 +1168,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1058,6 +1178,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1067,6 +1188,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1076,6 +1198,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1085,6 +1208,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1094,6 +1218,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1103,6 +1228,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1112,6 +1238,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1121,6 +1248,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1130,6 +1258,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1139,6 +1268,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1148,6 +1278,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1157,6 +1288,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1166,6 +1298,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1175,6 +1308,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1184,6 +1318,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1193,6 +1328,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1202,6 +1338,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1211,6 +1348,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive) have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1220,6 +1358,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1229,6 +1368,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1238,6 +1378,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1247,6 +1388,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1256,6 +1398,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1265,6 +1408,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1274,6 +1418,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1283,6 +1428,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1292,6 +1438,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1301,6 +1448,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1310,6 +1458,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1319,6 +1468,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1328,6 +1478,7 @@
},
{
"BriefDescription": "Counts all demand code reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1337,6 +1488,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1346,6 +1498,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1355,6 +1508,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1364,6 +1518,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1373,6 +1528,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1382,6 +1538,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1391,6 +1548,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1400,6 +1558,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1409,6 +1568,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1418,6 +1578,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1427,6 +1588,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1436,6 +1598,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1445,6 +1608,7 @@
},
{
"BriefDescription": "Counts demand data reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1454,6 +1618,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1463,6 +1628,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1472,6 +1638,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1481,6 +1648,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1490,6 +1658,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1499,6 +1668,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1508,6 +1678,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1517,6 +1688,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1526,6 +1698,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1535,6 +1708,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1544,6 +1718,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1553,6 +1728,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1562,6 +1738,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1571,6 +1748,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1580,6 +1758,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1589,6 +1768,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1598,6 +1778,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1607,6 +1788,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1616,6 +1798,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1625,6 +1808,7 @@
},
{
"BriefDescription": "Counts any other requests have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1634,6 +1818,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1643,6 +1828,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1652,6 +1838,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1661,6 +1848,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1670,6 +1858,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1679,6 +1868,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1688,6 +1878,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1697,6 +1888,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1706,6 +1898,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1715,6 +1908,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1724,6 +1918,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1733,6 +1928,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1742,6 +1938,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1751,6 +1948,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1760,6 +1958,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1769,6 +1968,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1778,6 +1978,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1787,6 +1988,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1796,6 +1998,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1805,6 +2008,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1814,6 +2018,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1823,6 +2028,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1832,6 +2038,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1841,6 +2048,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1850,6 +2058,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1859,6 +2068,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1868,6 +2078,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1877,6 +2088,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1886,6 +2098,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1895,6 +2108,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1904,6 +2118,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1913,6 +2128,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1922,6 +2138,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1931,6 +2148,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1940,6 +2158,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1949,6 +2168,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1958,6 +2178,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1967,6 +2188,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1976,6 +2198,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1985,6 +2208,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1994,6 +2218,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2003,6 +2228,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2012,6 +2238,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2021,6 +2248,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2030,6 +2258,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2039,6 +2268,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2048,6 +2278,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2057,6 +2288,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2066,6 +2298,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2075,6 +2308,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2084,6 +2318,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2093,6 +2328,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2102,6 +2338,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2111,6 +2348,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2120,6 +2358,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2129,6 +2368,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2138,6 +2378,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2147,6 +2388,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2156,6 +2398,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2165,6 +2408,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2174,6 +2418,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2183,6 +2428,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2192,6 +2438,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2201,6 +2448,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2210,6 +2458,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2219,6 +2468,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2228,6 +2478,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2237,6 +2488,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2246,6 +2498,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2255,6 +2508,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2264,6 +2518,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2273,6 +2528,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2282,6 +2538,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2291,6 +2548,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2300,6 +2558,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2309,6 +2568,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2318,6 +2578,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2327,6 +2588,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2336,6 +2598,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2345,6 +2608,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2354,6 +2618,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2363,6 +2628,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2372,6 +2638,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2381,6 +2648,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2390,6 +2658,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2399,6 +2668,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2408,6 +2678,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2417,6 +2688,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2426,6 +2698,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2435,6 +2708,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2444,6 +2718,7 @@
},
{
"BriefDescription": "Split locks in SQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"PublicDescription": "This event counts the number of split locks in the super queue.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/counter.json b/tools/perf/pmu-events/arch/x86/broadwell/counter.json
new file mode 100644
index 000000000000..1be6522e2bbc
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/broadwell/counter.json
@@ -0,0 +1,22 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "ARB",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "cbox_0",
+ "CountersNumFixed": 1,
+ "CountersNumGeneric": "0"
+ }
+]
\ No newline at end of file
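
Note: the new counter.json records, per PMU unit, how many fixed and general-purpose counters this model exposes, while the per-event "Counter" fields added throughout the hunks above list the generic counters each event may be programmed on. As a rough illustration of how the two relate (a sketch only; the paths and the check itself are assumptions, not part of the kernel's jevents tooling):

    import json

    ARCH_DIR = "tools/perf/pmu-events/arch/x86/broadwell"  # assumed path inside a kernel tree

    # Index the per-unit counter totals added by counter.json.
    with open(ARCH_DIR + "/counter.json") as f:
        units = {u["Unit"].lower(): u for u in json.load(f)}

    num_generic = int(units["core"]["CountersNumGeneric"])  # "4" on this model

    # Cross-check one event file: every listed counter should be a valid
    # general-purpose counter index for the core unit.
    with open(ARCH_DIR + "/floating-point.json") as f:
        events = json.load(f)

    for ev in events:
        counters = [int(c) for c in ev.get("Counter", "").split(",") if c.strip().isdigit()]
        too_high = [c for c in counters if c >= num_generic]
        if too_high:
            print(f'{ev["EventName"]}: counters {too_high} exceed 0..{num_generic - 1}')
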
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/floating-point.json b/tools/perf/pmu-events/arch/x86/broadwell/floating-point.json
index 986869252e71..9bf595af3f42 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 4 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 4 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 8 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational double precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.DOUBLE",
"SampleAfterValue": "2000006",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational packed floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* packed double and single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.PACKED",
"SampleAfterValue": "2000004",
@@ -55,6 +62,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computation operation. Applies to SSE* and AVX* scalar double and single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -63,6 +71,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -71,6 +80,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -79,6 +89,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational single precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SINGLE",
"SampleAfterValue": "2000005",
@@ -86,6 +97,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"SampleAfterValue": "2000003",
@@ -93,6 +105,7 @@
},
{
"BriefDescription": "Cycles with any input/output SSE or FP assist",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.ANY",
@@ -102,6 +115,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to input values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_INPUT",
"PublicDescription": "This event counts any input SSE* FP assist - invalid operation, denormal operand, dividing by zero, SNaN operand. Counting includes only cases involving penalties that required micro-code assist intervention.",
@@ -110,6 +124,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to Output values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_OUTPUT",
"PublicDescription": "This event counts the number of SSE* floating point (FP) micro-code assist (numeric overflow/underflow) when the output value (destination register) is invalid. Counting covers only cases involving penalties that require micro-code assist intervention.",
@@ -118,6 +133,7 @@
},
{
"BriefDescription": "Number of X87 assists due to input value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_INPUT",
"PublicDescription": "This event counts x87 floating point (FP) micro-code assist (invalid operation, denormal operand, SNaN operand) when the input value (one of the source operands to an FP instruction) is invalid.",
@@ -126,6 +142,7 @@
},
{
"BriefDescription": "Number of X87 assists due to output value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_OUTPUT",
"PublicDescription": "This event counts the number of x87 floating point (FP) micro-code assist (numeric overflow/underflow, inexact result) when the output value (destination register) is invalid.",
@@ -134,6 +151,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -141,6 +159,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -148,6 +167,7 @@
},
{
"BriefDescription": "Number of transitions from AVX-256 to legacy SSE when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "BDM30",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.AVX_TO_SSE",
@@ -157,6 +177,7 @@
},
{
"BriefDescription": "Number of transitions from SSE to AVX-256 when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "BDM30",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.SSE_TO_AVX",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "Micro-op dispatches cancelled due to insufficient SIMD physical register file read ports",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UOP_DISPATCHES_CANCELLED.SIMD_PRF",
"PublicDescription": "This event counts the number of micro-operations cancelled after they were dispatched from the scheduler to the execution units when the total number of physical register read ports across all dispatch ports exceeds the read bandwidth of the physical register file. The SIMD_PRF subevent applies to the following instructions: VDPPS, DPPS, VPCMPESTRI, PCMPESTRI, VPCMPESTRM, PCMPESTRM, VFMADD*, VFMADDSUB*, VFMSUB*, VMSUBADD*, VFNMADD*, VFNMSUB*. See the Broadwell Optimization Guide for more information.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/frontend.json b/tools/perf/pmu-events/arch/x86/broadwell/frontend.json
index bd5da39564e1..018020a51436 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"SampleAfterValue": "100003",
@@ -8,14 +9,16 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
- "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. \nMM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs.\nPenalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
+ "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PublicDescription": "This event counts the number of both cacheable and noncacheable Instruction Cache, Streaming Buffer and Victim Cache Reads including UC fetches.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFDATA_STALL",
"PublicDescription": "This event counts cycles during which the demand fetch waits for data (wfdM104H) from L2 or iSB (opportunistic hit).",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Misses. Includes Uncacheable accesses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "This event counts the number of instruction cache, streaming buffer and victim cache misses. Counting includes UC accesses.",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS",
@@ -76,6 +85,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES",
@@ -85,6 +95,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may bypass the IDQ.",
@@ -93,6 +104,7 @@
},
{
"BriefDescription": "Instruction Decode Queue (IDQ) empty cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.EMPTY",
"PublicDescription": "This counts the number of cycles that the instruction decoder queue is empty and can indicate that the application may be bound in the front end. It does not determine whether there are uops being delivered to the Alloc stage since uops can be delivered by bypass skipping the Instruction Decode Queue (IDQ) when it is empty.",
@@ -101,6 +113,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_ALL_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may bypass the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -109,6 +122,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES",
@@ -118,6 +132,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may bypass the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -126,6 +141,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES",
@@ -135,6 +151,7 @@
},
{
"BriefDescription": "Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_CYCLES",
@@ -144,6 +161,7 @@
},
{
"BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -154,6 +172,7 @@
},
{
"BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_UOPS",
"PublicDescription": "This event counts the number of uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ.",
@@ -162,6 +181,7 @@
},
{
"BriefDescription": "Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_MITE_UOPS",
"PublicDescription": "This event counts the number of uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ.",
@@ -170,6 +190,7 @@
},
{
"BriefDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -179,6 +200,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "This event counts the total number of uops delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ. Uops maybe initiated by Decode Stream Buffer (DSB) or MITE.",
@@ -187,14 +209,16 @@
},
{
"BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
- "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when:\n a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread;\n b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); \n c. Instruction Decode Queue (IDQ) delivers four uops.",
+ "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread; b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); c. Instruction Decode Queue (IDQ) delivers four uops.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -204,6 +228,7 @@
},
{
"BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
@@ -213,6 +238,7 @@
},
{
"BriefDescription": "Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE",
@@ -222,6 +248,7 @@
},
{
"BriefDescription": "Cycles with less than 2 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE",
@@ -230,6 +257,7 @@
},
{
"BriefDescription": "Cycles with less than 3 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/memory.json b/tools/perf/pmu-events/arch/x86/broadwell/memory.json
index ac7cdb831960..bbc5d7deea6b 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of times HLE abort was triggered",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED",
"PEBS": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an HLE abort was attributed to a Memory condition (See TSX_Memory event for additional details).",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to uncommon conditions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC2",
"PublicDescription": "Number of times the TSX watchdog signaled an HLE abort.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC3",
"PublicDescription": "Number of times a disallowed operation caused an HLE abort.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC4",
"PublicDescription": "Number of times HLE caused a fault.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times HLE aborted and was not due to the abort conditions in subevents 3-6.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of times HLE commit succeeded",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.COMMIT",
"PublicDescription": "Number of times HLE commit succeeded.",
@@ -58,22 +65,25 @@
},
{
"BriefDescription": "Number of times we entered an HLE region; does not count nested transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.START",
- "PublicDescription": "Number of times we entered an HLE region\n does not count nested transactions.",
+ "PublicDescription": "Number of times we entered an HLE region does not count nested transactions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
- "PublicDescription": "This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following:\n1. memory disambiguation,\n2. external snoop, or\n3. cross SMT-HW-thread snoop (stores) hitting load buffer.",
+ "PublicDescription": "This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following: 1. memory disambiguation, 2. external snoop, or 3. cross SMT-HW-thread snoop (stores) hitting load buffer.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
{
"BriefDescription": "Randomly selected loads with latency value being above 128",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -87,6 +97,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 16",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -100,6 +111,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 256",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -113,6 +125,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 32",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -126,6 +139,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 4",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -139,6 +153,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 512",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -152,6 +167,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 64",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -165,6 +181,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 8",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -178,6 +195,7 @@
},
{
"BriefDescription": "Speculative cache line split load uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.LOADS",
"PublicDescription": "This event counts speculative cache-line split load uops dispatched to the L1 cache.",
@@ -186,6 +204,7 @@
},
{
"BriefDescription": "Speculative cache line split STA uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.STORES",
"PublicDescription": "This event counts speculative cache line split store-address (STA) uops dispatched to the L1 cache.",
@@ -194,6 +213,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -203,6 +223,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -212,6 +233,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -221,6 +243,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -230,6 +253,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -239,6 +263,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -248,6 +273,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -257,6 +283,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -266,6 +293,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -275,6 +303,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -284,6 +313,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -293,6 +323,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -302,6 +333,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -311,6 +343,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -320,6 +353,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -329,6 +363,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -338,6 +373,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -347,6 +383,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -356,6 +393,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -365,6 +403,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -374,6 +413,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -383,6 +423,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -392,6 +433,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -401,6 +443,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -410,6 +453,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -419,6 +463,7 @@
},
{
"BriefDescription": "Counts all prefetch code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_CODE_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -428,6 +473,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -437,6 +483,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -446,6 +493,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -455,6 +503,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -464,6 +513,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -473,6 +523,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -482,6 +533,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -491,6 +543,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -500,6 +553,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -509,6 +563,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -518,6 +573,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -527,6 +583,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -536,6 +593,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -545,6 +603,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -554,6 +613,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -563,6 +623,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -572,6 +633,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -581,6 +643,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -590,6 +653,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -599,6 +663,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -608,6 +673,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -617,6 +683,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -626,6 +693,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -635,6 +703,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -644,6 +713,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -653,6 +723,7 @@
},
{
"BriefDescription": "Counts prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -662,6 +733,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -671,6 +743,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -680,6 +753,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -689,6 +763,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -698,6 +773,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -707,6 +783,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -716,6 +793,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -725,6 +803,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -734,6 +813,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -743,6 +823,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -752,6 +833,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -761,6 +843,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -770,6 +853,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -779,6 +863,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -788,6 +873,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -797,6 +883,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -806,6 +893,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -815,6 +903,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -824,6 +913,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -833,6 +923,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -842,6 +933,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -851,6 +943,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -860,6 +953,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -869,6 +963,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -878,6 +973,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -887,6 +983,7 @@
},
{
"BriefDescription": "Counts writebacks (modified to exclusive)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -896,6 +993,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -905,6 +1003,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -914,6 +1013,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -923,6 +1023,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -932,6 +1033,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -941,6 +1043,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -950,6 +1053,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -959,6 +1063,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -968,6 +1073,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -977,6 +1083,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -986,6 +1093,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -995,6 +1103,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1004,6 +1113,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1013,6 +1123,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1022,6 +1133,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1031,6 +1143,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1040,6 +1153,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1049,6 +1163,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1058,6 +1173,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1067,6 +1183,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1076,6 +1193,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1085,6 +1203,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1094,6 +1213,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1103,6 +1223,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1112,6 +1233,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1121,6 +1243,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1130,6 +1253,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1139,6 +1263,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1148,6 +1273,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1157,6 +1283,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1166,6 +1293,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1175,6 +1303,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1184,6 +1313,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1193,6 +1323,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1202,6 +1333,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1211,6 +1343,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1220,6 +1353,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1229,6 +1363,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1238,6 +1373,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1247,6 +1383,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1256,6 +1393,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1265,6 +1403,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1274,6 +1413,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1283,6 +1423,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1292,6 +1433,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1301,6 +1443,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1310,6 +1453,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1319,6 +1463,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1328,6 +1473,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1337,6 +1483,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1346,6 +1493,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1355,6 +1503,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1364,6 +1513,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1373,6 +1523,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1382,6 +1533,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1391,6 +1543,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1400,6 +1553,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1409,6 +1563,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1418,6 +1573,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1427,6 +1583,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1436,6 +1593,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1445,6 +1603,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1454,6 +1613,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1463,6 +1623,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1472,6 +1633,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1481,6 +1643,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1490,6 +1653,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1499,6 +1663,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1508,6 +1673,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1517,6 +1683,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1526,6 +1693,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1535,6 +1703,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1544,6 +1713,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1553,6 +1723,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1562,6 +1733,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1571,6 +1743,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1580,6 +1753,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1589,6 +1763,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1598,6 +1773,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1607,6 +1783,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1616,6 +1793,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1625,6 +1803,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1634,6 +1813,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1643,6 +1823,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1652,6 +1833,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1661,6 +1843,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1670,6 +1853,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1679,6 +1863,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1688,6 +1873,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1697,6 +1883,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1706,6 +1893,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1715,6 +1903,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1724,6 +1913,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1733,6 +1923,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1742,6 +1933,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1751,6 +1943,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1760,6 +1953,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1769,6 +1963,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1778,6 +1973,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1787,6 +1983,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1796,6 +1993,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1805,6 +2003,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1814,6 +2013,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1823,6 +2023,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1832,6 +2033,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1841,6 +2043,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1850,6 +2053,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1859,6 +2063,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1868,6 +2073,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1877,6 +2083,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1886,6 +2093,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1895,6 +2103,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1904,6 +2113,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1913,6 +2123,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1922,6 +2133,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1931,6 +2143,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1940,6 +2153,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1949,6 +2163,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1958,6 +2173,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1967,6 +2183,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1976,6 +2193,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1985,6 +2203,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1994,6 +2213,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_NON_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2003,6 +2223,7 @@
},
{
"BriefDescription": "Number of times RTM abort was triggered",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
"PEBS": "1",
@@ -2012,6 +2233,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an RTM abort was attributed to a Memory condition (See TSX_Memory event for additional details).",
@@ -2020,6 +2242,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC2",
"PublicDescription": "Number of times the TSX watchdog signaled an RTM abort.",
@@ -2028,6 +2251,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC3",
"PublicDescription": "Number of times a disallowed operation caused an RTM abort.",
@@ -2036,6 +2260,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC4",
"PublicDescription": "Number of times a RTM caused a fault.",
@@ -2044,6 +2269,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times RTM aborted and was not due to the abort conditions in subevents 3-6.",
@@ -2052,6 +2278,7 @@
},
{
"BriefDescription": "Number of times RTM commit succeeded",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Number of times RTM commit succeeded.",
@@ -2060,14 +2287,16 @@
},
{
"BriefDescription": "Number of times we entered an RTM region; does not count nested transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.START",
- "PublicDescription": "Number of times we entered an RTM region\n does not count nested transactions.",
+ "PublicDescription": "Number of times we entered an RTM region does not count nested transactions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC1",
"SampleAfterValue": "2000003",
@@ -2075,6 +2304,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"PublicDescription": "Unfriendly TSX abort triggered by a vzeroupper instruction.",
@@ -2083,6 +2313,7 @@
},
{
"BriefDescription": "Counts the number of times an instruction execution caused the transactional nest count supported to be exceeded",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"PublicDescription": "Unfriendly TSX abort triggered by a nest count that is too deep.",
@@ -2091,6 +2322,7 @@
},
{
"BriefDescription": "Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC4",
"PublicDescription": "RTM region detected inside HLE.",
@@ -2099,6 +2331,7 @@
},
{
"BriefDescription": "Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC5",
"SampleAfterValue": "2000003",
@@ -2106,6 +2339,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to an evicted line caused by a transaction overflow",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Number of times a TSX Abort was triggered due to an evicted line caused by a transaction overflow.",
@@ -2114,6 +2348,7 @@
},
{
"BriefDescription": "Number of times a TSX line had a cache conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Number of times a TSX line had a cache conflict.",
@@ -2122,6 +2357,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to release/commit but data and address mismatch",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH",
"PublicDescription": "Number of times a TSX Abort was triggered due to release/commit but data and address mismatch.",
@@ -2130,6 +2366,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY",
"PublicDescription": "Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty.",
@@ -2138,6 +2375,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT",
"PublicDescription": "Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer.",
@@ -2146,6 +2384,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to a non-release/commit store to lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK",
"PublicDescription": "Number of times a TSX Abort was triggered due to a non-release/commit store to lock.",
@@ -2154,6 +2393,7 @@
},
{
"BriefDescription": "Number of times we could not allocate Lock Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL",
"PublicDescription": "Number of times we could not allocate Lock Buffer.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/metricgroups.json b/tools/perf/pmu-events/arch/x86/broadwell/metricgroups.json
index f6a0258e3241..0863375bdead 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/metricgroups.json
@@ -2,9 +2,21 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -23,8 +35,11 @@
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -38,6 +53,7 @@
"Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -65,6 +81,7 @@
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -90,10 +107,12 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/other.json b/tools/perf/pmu-events/arch/x86/broadwell/other.json
index 1c2a5b001949..f0de6a71719b 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/other.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Unhalted core cycles when the thread is in ring 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING0",
"PublicDescription": "This event counts the unhalted core cycles during which the thread is in the ring 0 privileged mode.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of intervals between processor halts while thread is in ring 0",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5C",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Unhalted core cycles when thread is in rings 1, 2, or 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING123",
"PublicDescription": "This event counts unhalted core cycles during which the thread is in rings 1, 2, or 3.",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Cycles when L1 and L2 are locked due to UC or split lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION",
"PublicDescription": "This event counts cycles in which the L1 and L2 are locked due to a UC lock or split lock. A lock is asserted in case of locked memory access, due to noncacheable memory, locked operation that spans two cache lines, or a page walk from the noncacheable page table. L1D and L2 locks have a very high performance penalty and it is highly recommended to avoid such access.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/pipeline.json b/tools/perf/pmu-events/arch/x86/broadwell/pipeline.json
index 9a902d2160e6..962cd07eb307 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles when divider is busy executing divide operations",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "ARITH.FPU_DIV_ACTIVE",
"PublicDescription": "This event counts the number of the divide operations executed. Uses edge-detect and a cmask value of 1 on ARITH.FPU_DIV_ACTIVE to get the number of the divide operations executed.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Speculative and retired branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_BRANCHES",
"PublicDescription": "This event counts both taken and not taken speculative and retired branch instructions.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_CONDITIONAL",
"PublicDescription": "This event counts both taken and not taken speculative and retired macro-conditional branch instructions.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Speculative and retired macro-unconditional branches excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_JMP",
"PublicDescription": "This event counts both taken and not taken speculative and retired macro-unconditional branch instructions, excluding calls and indirects.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_NEAR_CALL",
"PublicDescription": "This event counts both taken and not taken speculative and retired direct near calls.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts both taken and not taken speculative and retired indirect branches excluding calls and return branches.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Speculative and retired indirect return branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN",
"PublicDescription": "This event counts both taken and not taken speculative and retired indirect branches that have a return mnemonic.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Not taken macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "This event counts not taken macro-conditional branch instructions.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "This event counts taken speculative and retired macro-conditional branch instructions.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branch instructions excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_JUMP",
"PublicDescription": "This event counts taken speculative and retired macro-conditional branch instructions excluding calls and indirect branches.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Taken speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL",
"PublicDescription": "This event counts taken speculative and retired direct near calls.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts taken speculative and retired indirect branches excluding calls and return branches.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"PublicDescription": "This event counts taken speculative and retired indirect calls including both register and memory indirect.",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN",
"PublicDescription": "This event counts taken speculative and retired indirect branches that have a return mnemonic.",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PublicDescription": "This event counts all (macro) branch instructions retired.",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired. (Precise Event - PEBS)",
+ "Counter": "0,1,2,3",
"Errata": "BDW98",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS",
@@ -130,6 +146,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "BDW98",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -157,6 +176,7 @@
},
{
"BriefDescription": "Direct and indirect macro near call instructions retired (captured in ring 3).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL_R3",
"PEBS": "1",
@@ -166,6 +186,7 @@
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
"PEBS": "1",
@@ -175,6 +196,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -184,6 +206,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NOT_TAKEN",
"PublicDescription": "This event counts not taken branch instructions retired.",
@@ -192,6 +215,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_BRANCHES",
"PublicDescription": "This event counts both taken and not taken speculative and retired mispredicted branch instructions.",
@@ -200,6 +224,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_CONDITIONAL",
"PublicDescription": "This event counts both taken and not taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -208,6 +233,7 @@
},
{
"BriefDescription": "Mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts both taken and not taken mispredicted indirect branches excluding calls and returns.",
@@ -216,6 +242,7 @@
},
{
"BriefDescription": "Speculative mispredicted indirect branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.INDIRECT",
"PublicDescription": "Counts speculatively miss-predicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded).",
@@ -224,6 +251,7 @@
},
{
"BriefDescription": "Not taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "This event counts not taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -232,6 +260,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "This event counts taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -240,6 +269,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts taken speculative and retired mispredicted indirect branches excluding calls and returns.",
@@ -248,6 +278,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -255,6 +286,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_RETURN_NEAR",
"PublicDescription": "This event counts taken speculative and retired mispredicted indirect branches that have a return mnemonic.",
@@ -263,6 +295,7 @@
},
{
"BriefDescription": "All mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PublicDescription": "This event counts all mispredicted macro branch instructions retired.",
@@ -270,6 +303,7 @@
},
{
"BriefDescription": "Mispredicted macro branch instructions retired. (Precise Event - PEBS)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -279,6 +313,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -288,6 +323,7 @@
},
{
"BriefDescription": "number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -297,6 +333,7 @@
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.RET",
"PEBS": "1",
@@ -306,6 +343,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -313,6 +351,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK",
"PublicDescription": "This is a fixed-frequency event programmed to general counters. It counts when the core is unhalted at 100 Mhz.",
@@ -322,6 +361,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "100003",
@@ -329,6 +369,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -336,13 +377,15 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
- "PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. \nNote: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. This event is clocked by base clock (100 Mhz) on Sandy Bridge. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
+ "PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. This event is clocked by base clock (100 Mhz) on Sandy Bridge. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
"UMask": "0x3"
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate).",
@@ -352,6 +395,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "100003",
@@ -359,6 +403,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "This event counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -367,12 +412,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD_ANY",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -381,12 +428,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -395,6 +444,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_PENDING",
@@ -404,6 +454,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -412,6 +463,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_PENDING",
@@ -421,6 +473,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_LDM_PENDING",
@@ -430,6 +483,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -438,6 +492,7 @@
},
{
"BriefDescription": "This event increments by 1 for every cycle where there was no execute for this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_NO_EXECUTE",
@@ -447,6 +502,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -455,6 +511,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_PENDING",
@@ -464,6 +521,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -472,6 +530,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_PENDING",
@@ -481,6 +540,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_LDM_PENDING",
@@ -490,6 +550,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -498,6 +559,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -506,6 +568,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "This event counts stalls occurred due to changing prefix length (66, 67 or REX.W when they change the length of the decoded instruction). Occurrences counting is proportional to the number of prefixes in a 16B-line. This may result in the following penalties: three-cycle penalty for each LCP in a 16-byte chunk.",
@@ -514,13 +577,15 @@
},
{
"BriefDescription": "Instructions retired from execution.",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. \nNotes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. \nCounting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
+ "PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. Counting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3",
"Errata": "BDM61",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
@@ -529,6 +594,7 @@
},
{
"BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
"Errata": "BDM11, BDM55",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
@@ -539,6 +605,7 @@
},
{
"BriefDescription": "FP operations retired. X87 FP operations that have no exceptions:",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.X87",
"PublicDescription": "This event counts FP operations retired. For X87 FP operations that have no exceptions counting also includes flows that have several X87, or flows that use X87 uops in the exception handling.",
@@ -547,6 +614,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x0D",
"EventName": "INT_MISC.RAT_STALL_CYCLES",
"PublicDescription": "This event counts the number of cycles during which Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the current thread. This also includes the cycles during which the Allocator is serving another thread.",
@@ -555,6 +623,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
@@ -565,6 +634,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES_ANY",
@@ -573,6 +643,7 @@
},
{
"BriefDescription": "This event counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"SampleAfterValue": "100003",
@@ -580,14 +651,16 @@
},
{
"BriefDescription": "Cases when loads get true Block-on-Store blocking code preventing store forwarding",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
- "PublicDescription": "This event counts how many times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when:\n - preceding store conflicts with the load (incomplete overlap);\n - store forwarding is impossible due to u-arch limitations;\n - preceding lock RMW operations are not forwarded;\n - store has the no-forward bit set (uncacheable/page-split/masked stores);\n - all-blocking stores are used (mostly, fences and port I/O);\nand others.\nThe most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on store due to being non-WB memory type or a lock. These cases are covered by other events.\nSee the table of not supported store forwards in the Optimization Guide.",
+ "PublicDescription": "This event counts how many times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when: - preceding store conflicts with the load (incomplete overlap); - store forwarding is impossible due to u-arch limitations; - preceding lock RMW operations are not forwarded; - store has the no-forward bit set (uncacheable/page-split/masked stores); - all-blocking stores are used (mostly, fences and port I/O); and others. The most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on store due to being non-WB memory type or a lock. These cases are covered by other events. See the table of not supported store forwards in the Optimization Guide.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
{
"BriefDescription": "False dependencies in MOB due to partial compare",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "This event counts false dependencies in MOB when the partial comparison upon loose net check and dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4k aliased.",
@@ -596,6 +669,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for hardware prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "LOAD_HIT_PRE.HW_PF",
"PublicDescription": "This event counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the hardware prefetch.",
@@ -604,6 +678,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for software prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PRE.SW_PF",
"PublicDescription": "This event counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by asm inspection of the nearby instructions.",
@@ -612,6 +687,7 @@
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_4_UOPS",
@@ -620,6 +696,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -628,6 +705,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "LSD.UOPS",
"SampleAfterValue": "2000003",
@@ -635,6 +713,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xC3",
@@ -644,6 +723,7 @@
},
{
"BriefDescription": "Cycles there was a Nuke. Account for both thread-specific and All Thread Nukes.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.CYCLES",
"PublicDescription": "This event counts both thread-specific (TS) and all-thread (AT) nukes.",
@@ -652,6 +732,7 @@
},
{
"BriefDescription": "This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MASKMOV",
"PublicDescription": "Maskmov false fault - counts number of time ucode passes through Maskmov flow due to instruction's mask being 0 while the flow was completed without raising a fault.",
@@ -660,6 +741,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "This event counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -668,6 +750,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -675,6 +758,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -682,6 +766,7 @@
},
{
"BriefDescription": "Number of times any microcode assist is invoked by HW upon uop writeback.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.ANY_WB_ASSIST",
"SampleAfterValue": "100003",
@@ -689,6 +774,7 @@
},
{
"BriefDescription": "Resource-related stall cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.ANY",
"PublicDescription": "This event counts resource-related stall cycles.",
@@ -697,6 +783,7 @@
},
{
"BriefDescription": "Cycles stalled due to re-order buffer full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ROB",
"PublicDescription": "This event counts ROB full stall cycles. This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -705,6 +792,7 @@
},
{
"BriefDescription": "Cycles stalled due to no eligible RS entry available.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.RS",
"PublicDescription": "This event counts stall cycles caused by absence of eligible entries in the reservation station (RS). This may result from RS overflow, or from RS deallocation because of the RS array Write Port allocation scheme (each RS entry has two write ports instead of four. As a result, empty entries could not be used, although RS is not really full). This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -713,6 +801,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "This event counts stall cycles caused by the store buffer (SB) overflow (excluding draining from synch). This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -721,6 +810,7 @@
},
{
"BriefDescription": "Count cases of saving new LBR",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.LBR_INSERTS",
"PublicDescription": "This event counts cases of saving new LBR records by hardware. This assumes proper enabling of LBRs and takes into account LBR filtering done by the LBR_SELECT register.",
@@ -729,14 +819,16 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
- "PublicDescription": "This event counts cycles during which the reservation station (RS) is empty for the thread.\nNote: In ST-mode, not active thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issues.",
+ "PublicDescription": "This event counts cycles during which the reservation station (RS) is empty for the thread. Note: In ST-mode, not active thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issues.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -747,6 +839,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.",
@@ -755,6 +848,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.",
@@ -763,6 +857,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.",
@@ -771,6 +866,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.",
@@ -779,6 +875,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.",
@@ -787,6 +884,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.",
@@ -795,6 +893,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_6",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.",
@@ -803,6 +902,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_7",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.",
@@ -811,6 +911,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
"PublicDescription": "Number of uops executed from any thread.",
@@ -819,6 +920,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -827,6 +929,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -835,6 +938,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -843,6 +947,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -851,6 +956,7 @@
},
{
"BriefDescription": "Cycles with no micro-ops executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE",
"Invert": "1",
@@ -859,6 +965,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC",
@@ -867,6 +974,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC",
@@ -875,6 +983,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC",
@@ -883,6 +992,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC",
@@ -891,6 +1001,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -901,6 +1012,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.THREAD",
"PublicDescription": "Number of uops to be executed per-thread each cycle.",
@@ -909,6 +1021,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.",
@@ -918,6 +1031,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0_CORE",
"SampleAfterValue": "2000003",
@@ -925,6 +1039,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.",
@@ -934,6 +1049,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 1.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1_CORE",
"SampleAfterValue": "2000003",
@@ -941,6 +1057,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.",
@@ -950,6 +1067,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2_CORE",
"SampleAfterValue": "2000003",
@@ -957,6 +1075,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.",
@@ -966,6 +1085,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3_CORE",
"SampleAfterValue": "2000003",
@@ -973,6 +1093,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.",
@@ -982,6 +1103,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 4.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4_CORE",
"SampleAfterValue": "2000003",
@@ -989,6 +1111,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.",
@@ -998,6 +1121,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 5.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5_CORE",
"SampleAfterValue": "2000003",
@@ -1005,6 +1129,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.",
@@ -1014,6 +1139,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 6.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6_CORE",
"SampleAfterValue": "2000003",
@@ -1021,6 +1147,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.",
@@ -1030,6 +1157,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 7.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7_CORE",
"SampleAfterValue": "2000003",
@@ -1037,6 +1165,7 @@
},
{
"BriefDescription": "Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "This event counts the number of Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS).",
@@ -1045,14 +1174,16 @@
},
{
"BriefDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive; added by GSR u-arch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.FLAGS_MERGE",
- "PublicDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive\n added by GSR u-arch.",
+ "PublicDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive added by GSR u-arch.",
"SampleAfterValue": "2000003",
"UMask": "0x10"
},
{
"BriefDescription": "Number of Multiply packed/scalar single precision uops allocated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SINGLE_MUL",
"SampleAfterValue": "2000003",
@@ -1060,6 +1191,7 @@
},
{
"BriefDescription": "Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SLOW_LEA",
"SampleAfterValue": "2000003",
@@ -1067,6 +1199,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -1077,6 +1210,7 @@
},
{
"BriefDescription": "Actually retired uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ALL",
"PEBS": "1",
@@ -1086,6 +1220,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.RETIRE_SLOTS",
"PEBS": "1",
@@ -1095,6 +1230,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -1105,6 +1241,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "16",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/uncore-cache.json b/tools/perf/pmu-events/arch/x86/broadwell/uncore-cache.json
index c5cc43825cb9..c4c57febdc72 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/uncore-cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L3 Lookup any request that access cache and found line in E or S-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_ES",
"PerPkg": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in I-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_I",
"PerPkg": "1",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in M-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_M",
"PerPkg": "1",
@@ -28,6 +31,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in MESI-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_MESI",
"PerPkg": "1",
@@ -37,6 +41,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in E or S-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_ES",
"PerPkg": "1",
@@ -46,6 +51,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in I-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_I",
"PerPkg": "1",
@@ -55,6 +61,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in M-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_M",
"PerPkg": "1",
@@ -64,6 +71,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in any MESI-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_MESI",
"PerPkg": "1",
@@ -73,6 +81,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in E or S-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_ES",
"PerPkg": "1",
@@ -82,6 +91,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in M-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_M",
"PerPkg": "1",
@@ -91,6 +101,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in MESI-state",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_MESI",
"PerPkg": "1",
@@ -100,6 +111,7 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which hits a modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HITM_XCORE",
"PerPkg": "1",
@@ -108,6 +120,7 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which hits a non-modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HIT_XCORE",
"PerPkg": "1",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "A cross-core snoop resulted from L3 Eviction which misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_EVICTION",
"PerPkg": "1",
@@ -124,10 +138,20 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_XCORE",
"PerPkg": "1",
"UMask": "0x41",
"Unit": "CBOX"
+ },
+ {
+ "BriefDescription": "This 48-bit fixed counter counts the UCLK cycles",
+ "Counter": "FIXED",
+ "EventCode": "0xff",
+ "EventName": "UNC_CLOCK.SOCKET",
+ "PerPkg": "1",
+ "PublicDescription": "This 48-bit fixed counter counts the UCLK cycles.",
+ "Unit": "cbox_0"
}
]
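
Every event entry in this file now carries a "Counter" field: "0,1" for the two programmable CBox counters, and "FIXED" for the new UNC_CLOCK.SOCKET UCLK entry. A minimal sketch of how that addition could be sanity-checked from a perf source tree follows; the script and the hard-coded path are illustrative only and not part of the patch.

    #!/usr/bin/env python3
    # Illustrative sketch only (not part of the patch): summarize the "Counter"
    # values declared in one pmu-events JSON file, assuming a perf source tree
    # in the current directory.
    import json
    from collections import Counter as Tally

    path = "tools/perf/pmu-events/arch/x86/broadwell/uncore-cache.json"
    with open(path) as f:
        events = json.load(f)  # each file is a flat JSON array of event objects

    # Group event names by the counters they may be scheduled on.
    tally = Tally(ev.get("Counter", "<missing>") for ev in events)
    for counters, count in sorted(tally.items()):
        print(f'Counter "{counters}": {count} events')

    # After this change every entry should declare a Counter: "0,1" for the
    # programmable CBox counters, "FIXED" for UNC_CLOCK.SOCKET.
    missing = [ev["EventName"] for ev in events if "Counter" not in ev]
    print("events without Counter:", missing or "none")
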
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/broadwell/uncore-interconnect.json
index 64af685274a2..99f8cc992a24 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/uncore-interconnect.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of entries allocated. Account for Any type: e.g. Snoop, Core aperture, etc.",
+ "Counter": "0,1",
"EventCode": "0x84",
"EventName": "UNC_ARB_COH_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Each cycle counts number of all Core outgoing valid entries. Such entry is defined as valid from its allocation till first of IDI0 or DRS0 messages is sent out. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Cycles with at least one request outstanding is waiting for data return from memory controller. Account for coherent and non-coherent requests initiated by IA Cores, Processor Graphics Unit, or LLC.;",
+ "Counter": "0",
"CounterMask": "1",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.CYCLES_WITH_ANY_REQUEST",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Each cycle count number of 'valid' coherent Data Read entries that are in DirectData mode. Such entry is defined as valid when it is allocated till data sent to Core (first chunk, IDI0). Applicable for IA Cores' requests in normal case.",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.DRD_DIRECT",
"PerPkg": "1",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "Total number of Core outgoing entries allocated. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Number of Core coherent Data Read entries allocated in DirectData mode",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.DRD_DIRECT",
"PerPkg": "1",
@@ -52,6 +58,7 @@
},
{
"BriefDescription": "Number of Writes allocated - any write transactions: full/partials writes and evictions.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.WRITES",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwell/virtual-memory.json b/tools/perf/pmu-events/arch/x86/broadwell/virtual-memory.json
index 93621e004d88..eb1d9541e26c 100644
--- a/tools/perf/pmu-events/arch/x86/broadwell/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwell/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Load misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Load operations that miss the first DTLB level but hit the second and do not cause page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"SampleAfterValue": "2000003",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_2M",
"SampleAfterValue": "2000003",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_4K",
"SampleAfterValue": "2000003",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
@@ -39,6 +44,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (2M/4M).",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (4K).",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_DURATION",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
@@ -84,6 +94,7 @@
},
{
"BriefDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"SampleAfterValue": "100003",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_2M",
"SampleAfterValue": "100003",
@@ -98,6 +110,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_4K",
"SampleAfterValue": "100003",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (2M/4M)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_DURATION",
@@ -149,6 +167,7 @@
},
{
"BriefDescription": "Cycle count for an Extended Page table walk.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "EPT.WALK_CYCLES",
"PublicDescription": "This event counts cycles for an extended page table walk. The Extended Page directory cache differs from standard TLB caches by the operating system that use it. Virtual machine operating systems use the extended page directory cache, while guest operating systems use the standard TLB caches.",
@@ -157,6 +176,7 @@
},
{
"BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "ITLB.ITLB_FLUSH",
"PublicDescription": "This event counts the number of flushes of the big or small ITLB pages. Counting include both TLB Flush (covering all sets) and TLB Set Clear (set-specific).",
@@ -165,6 +185,7 @@
},
{
"BriefDescription": "Misses at all ITLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
@@ -174,6 +195,7 @@
},
{
"BriefDescription": "Operations that miss the first ITLB level but hit the second and do not cause any page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"SampleAfterValue": "100003",
@@ -181,6 +203,7 @@
},
{
"BriefDescription": "Code misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_2M",
"SampleAfterValue": "100003",
@@ -188,6 +211,7 @@
},
{
"BriefDescription": "Core misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_4K",
"SampleAfterValue": "100003",
@@ -195,6 +219,7 @@
},
{
"BriefDescription": "Misses in all ITLB levels that cause completed page walks.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
@@ -203,6 +228,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
@@ -212,6 +238,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
@@ -221,6 +248,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
@@ -230,6 +258,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_DURATION",
@@ -239,6 +268,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L1+FB.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L1",
@@ -247,6 +277,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L2.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L2",
@@ -255,6 +286,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L3 + XSNP.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L3",
@@ -263,6 +295,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in Memory.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_MEMORY",
@@ -271,6 +304,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L1+FB.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L1",
@@ -279,6 +313,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L2.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L2",
@@ -287,6 +322,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L3 + XSNP.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L3",
@@ -295,6 +331,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "This event counts the number of DTLB flush attempts of the thread-specific entries.",
@@ -303,6 +340,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "This event counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, and so on).",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json b/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json
index e1f55fcfa0d0..a5e408ca46a7 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/bdwde-metrics.json
@@ -1,49 +1,49 @@
[
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per core",
- "MetricExpr": "cstate_core@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
@@ -80,17 +80,16 @@
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slots",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
@@ -98,9 +97,8 @@
},
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1",
@@ -121,7 +119,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -139,7 +137,6 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_cisc",
@@ -160,7 +157,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS)))) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -181,7 +178,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -190,10 +187,10 @@
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.FPU_DIV_ACTIVE / tma_info_core_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIV_ACTIVE",
"ScaleUnit": "100%"
},
{
@@ -203,7 +200,7 @@
"MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_dram_bound",
"MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
"ScaleUnit": "100%"
},
{
@@ -211,7 +208,7 @@
"MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -227,7 +224,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_LOAD_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store",
@@ -236,7 +233,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_STORE_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load",
@@ -246,7 +243,7 @@
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
"PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
@@ -257,9 +254,9 @@
"MetricExpr": "tma_frontend_bound - tma_fetch_latency",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
},
{
@@ -283,7 +280,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / UOPS_RETIRED.RETIRE_SLOTS",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -292,7 +289,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@ / UOPS_RETIRED.RETIRE_SLOTS",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.VECTOR / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -305,7 +302,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_128b",
"MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -314,13 +311,13 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_256b",
"MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots",
- "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1",
@@ -334,28 +331,21 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+]). Sample with: UOPS_RETIRED.HEAVY",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
"MetricExpr": "ICACHE.IFDATA_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
- "MetricExpr": "(tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES",
- "MetricGroup": "Bad;BrMispredicts;tma_issueBM",
- "MetricName": "tma_info_bad_spec_branch_misprediction_cost",
- "PublicDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_mispredicts_resteers"
- },
- {
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * cpu@BR_MISP_EXEC.ALL_BRANCHES\\,umask\\=0xE4@)",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
@@ -381,20 +371,20 @@
},
{
"BriefDescription": "Floating Point Operations Per Cycle",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks",
"MetricGroup": "Flops;Ret",
"MetricName": "tma_info_core_flopc"
},
{
"BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@) / (2 * tma_info_core_core_clks)",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR) / (2 * tma_info_core_core_clks)",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_core_fp_arith_utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
},
@@ -413,6 +403,12 @@
"MetricName": "tma_info_frontend_ipunknown_branch"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -427,11 +423,11 @@
},
{
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
@@ -439,7 +435,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx128",
"MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
@@ -447,7 +443,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx256",
"MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
@@ -455,7 +451,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_dp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
@@ -463,7 +459,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_sp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
@@ -481,7 +477,7 @@
},
{
"BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_ipflop",
"MetricThreshold": "tma_info_inst_mix_ipflop < 10"
@@ -501,129 +497,129 @@
"MetricThreshold": "tma_info_inst_mix_ipstore < 8"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 9",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_all"
},
{
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_load"
},
{
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem;Offcore",
+ "MetricGroup": "CacheHits;Mem;Offcore",
"MetricName": "tma_info_memory_l2mpki_all"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki_load"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "0",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
"BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
@@ -633,8 +629,8 @@
"MetricThreshold": "tma_info_memory_tlb_page_walks_utilization > 0.5"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute"
},
@@ -645,30 +641,36 @@
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_time",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
"PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
{
"BriefDescription": "Giga Floating Point Operations Per Second",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / duration_time",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
},
{
"BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
@@ -691,6 +693,19 @@
"MetricThreshold": "tma_info_system_kernel_utilization > 0.05"
},
{
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "power@energy\\-pkg@ * 61 / (tma_info_system_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
+ },
+ {
"BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
"MetricExpr": "(1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0)",
"MetricGroup": "SMT",
@@ -703,6 +718,13 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
@@ -747,7 +769,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
@@ -756,48 +778,48 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricConstraint": "NO_GROUP_EVENTS_SMT",
"MetricExpr": "MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_MISS / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
"ScaleUnit": "100%"
},
{
@@ -816,12 +838,11 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_load_op_utilization",
@@ -833,17 +854,17 @@
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -851,21 +872,21 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
@@ -891,7 +912,7 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)",
- "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
@@ -902,7 +923,7 @@
"MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
"ScaleUnit": "100%"
},
@@ -921,7 +942,7 @@
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_0",
"MetricThreshold": "tma_port_0 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -930,7 +951,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_1",
"MetricThreshold": "tma_port_1 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -968,12 +989,12 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -996,7 +1017,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else (CYCLE_ACTIVITY.STALLS_TOTAL - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else CYCLE_ACTIVITY.STALLS_TOTAL - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1005,7 +1026,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_1",
"MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1014,7 +1035,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_2",
"MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1024,16 +1045,16 @@
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
"MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1",
@@ -1046,7 +1067,7 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
@@ -1062,7 +1083,7 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
@@ -1090,7 +1111,7 @@
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 9 * (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) + (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -1108,10 +1129,10 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "tma_branch_resteers - tma_mispredicts_resteers - tma_clears_resteers",
- "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit). Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
"ScaleUnit": "100%"
},
{
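[Editor's note, not part of the patch: the metric entries above are declarative — perf evaluates the "MetricExpr" string from raw event counts at report time. A minimal sketch of that idea for the tma_info_memory_mlp expression shown earlier in this hunk, using made-up placeholder counts, assuming nothing beyond the two events named in the expression:

# Illustrative sketch only: evaluate the MLP metric expression
#   L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES
# from raw counter values (the numbers below are invented placeholders).
def mlp(pending: float, pending_cycles: float) -> float:
    # Average number of outstanding L1D demand-load misses per cycle
    # in which at least one such miss is outstanding.
    return pending / pending_cycles if pending_cycles else 0.0

print(mlp(1_200_000, 400_000))  # -> 3.0
]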
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/cache.json b/tools/perf/pmu-events/arch/x86/broadwellde/cache.json
index 6784331ac1cb..49d8de8f1b51 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/cache.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "This event counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles a demand request was blocked due to Fill Buffers unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
@@ -17,14 +19,16 @@
},
{
"BriefDescription": "L1D miss outstandings duration in cycles",
+ "Counter": "2",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
- "PublicDescription": "This event counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand; from the demand Hit FB, if it is allocated by hardware or software prefetch.\nNote: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
+ "PublicDescription": "This event counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand; from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -35,6 +39,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Not rejected writebacks that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_DEMAND_RQSTS.WB_HIT",
"PublicDescription": "This event counts the number of WB requests that hit L2 cache.",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "This event counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "L2 cache lines in E state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.E",
"PublicDescription": "This event counts the number of L2 cache lines in the Exclusive state filling the L2. Counting does not cover rejects.",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "L2 cache lines in I state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.I",
"PublicDescription": "This event counts the number of L2 cache lines in the Invalidate state filling the L2. Counting does not cover rejects.",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "L2 cache lines in S state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.S",
"PublicDescription": "This event counts the number of L2 cache lines in the Shared state filling the L2. Counting does not cover rejects.",
@@ -83,6 +93,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by demand.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_CLEAN",
"SampleAfterValue": "100003",
@@ -90,6 +101,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "This event counts the total number of L2 code requests.",
@@ -98,6 +110,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "This event counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.",
@@ -106,6 +119,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"SampleAfterValue": "200003",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
"SampleAfterValue": "200003",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Requests from L2 hardware prefetchers",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "This event counts the total number of requests from the L2 hardware prefetchers.",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "This event counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -136,6 +153,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"SampleAfterValue": "200003",
@@ -143,6 +161,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"SampleAfterValue": "200003",
@@ -150,6 +169,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests, initiated by load instructions, that hit L2 cache.",
@@ -158,6 +178,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "This event counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "L2 prefetch requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_HIT",
"PublicDescription": "This event counts the number of requests from the L2 hardware prefetchers that hit L2 cache. L3 prefetch new types.",
@@ -174,6 +196,7 @@
},
{
"BriefDescription": "L2 prefetch requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_MISS",
"PublicDescription": "This event counts the number of requests from the L2 hardware prefetchers that miss L2 cache.",
@@ -182,6 +205,7 @@
},
{
"BriefDescription": "All requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
"SampleAfterValue": "200003",
@@ -189,6 +213,7 @@
},
{
"BriefDescription": "All L2 requests.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
"SampleAfterValue": "200003",
@@ -196,6 +221,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"SampleAfterValue": "200003",
@@ -203,6 +229,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"SampleAfterValue": "200003",
@@ -210,6 +237,7 @@
},
{
"BriefDescription": "L2 or L3 HW prefetches that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_PF",
"PublicDescription": "This event counts L2 or L3 HW prefetches that access L2 cache including rejects.",
@@ -218,6 +246,7 @@
},
{
"BriefDescription": "Transactions accessing L2 pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_REQUESTS",
"PublicDescription": "This event counts transactions that access the L2 pipe including snoops, pagewalks, and so on.",
@@ -226,6 +255,7 @@
},
{
"BriefDescription": "L2 cache accesses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.CODE_RD",
"PublicDescription": "This event counts the number of L2 cache accesses when fetching instructions.",
@@ -234,6 +264,7 @@
},
{
"BriefDescription": "Demand Data Read requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.DEMAND_DATA_RD",
"PublicDescription": "This event counts Demand Data Read requests that access L2 cache, including rejects.",
@@ -242,6 +273,7 @@
},
{
"BriefDescription": "L1D writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L1D_WB",
"PublicDescription": "This event counts L1D writebacks that access L2 cache.",
@@ -250,6 +282,7 @@
},
{
"BriefDescription": "L2 fill requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_FILL",
"PublicDescription": "This event counts L2 fill requests that access L2 cache.",
@@ -258,6 +291,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "This event counts L2 writebacks that access L2 cache.",
@@ -266,6 +300,7 @@
},
{
"BriefDescription": "RFO requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.RFO",
"PublicDescription": "This event counts Read for Ownership (RFO) requests that access L2 cache.",
@@ -274,6 +309,7 @@
},
{
"BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
"PublicDescription": "This event counts the number of cycles when the L1D is locked. It is a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION).",
@@ -282,6 +318,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "This event counts core-originated cacheable demand requests that miss the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.",
@@ -290,6 +327,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "This event counts core-originated cacheable demand requests that refer to the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.",
@@ -298,6 +336,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -309,6 +348,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were HitM responses from shared L3.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -320,6 +360,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -331,6 +372,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -342,6 +384,7 @@
},
{
"BriefDescription": "Data from local DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70, BDM100",
"EventCode": "0xD3",
@@ -353,6 +396,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: remote DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70",
"EventCode": "0xD3",
@@ -363,6 +407,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: forwarded from remote cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70",
"EventCode": "0xD3",
@@ -373,6 +418,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: Remote cache HITM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70",
"EventCode": "0xD3",
@@ -383,26 +429,29 @@
},
{
"BriefDescription": "Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HIT_LFB",
"PEBS": "1",
- "PublicDescription": "This event counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready.\nNote: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.",
+ "PublicDescription": "This event counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.",
"SampleAfterValue": "100003",
"UMask": "0x40"
},
{
"BriefDescription": "Retired load uops with L1 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"PEBS": "1",
- "PublicDescription": "This event counts retired load uops which data sources were hits in the nearest-level (L1) cache.\nNote: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.",
+ "PublicDescription": "This event counts retired load uops which data sources were hits in the nearest-level (L1) cache. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load uops misses in L1 cache as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
@@ -413,6 +462,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM35",
"EventCode": "0xD1",
@@ -424,6 +474,7 @@
},
{
"BriefDescription": "Miss in mid-level (L2) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
@@ -434,6 +485,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were data hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD1",
@@ -445,6 +497,7 @@
},
{
"BriefDescription": "Miss in last-level (L3) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100, BDE70",
"EventCode": "0xD1",
@@ -455,6 +508,7 @@
},
{
"BriefDescription": "Retired load uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
@@ -465,6 +519,7 @@
},
{
"BriefDescription": "Retired store uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
@@ -475,6 +530,7 @@
},
{
"BriefDescription": "Retired load uops with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM35",
"EventCode": "0xD0",
@@ -486,6 +542,7 @@
},
{
"BriefDescription": "Retired load uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
@@ -496,6 +553,7 @@
},
{
"BriefDescription": "Retired store uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
@@ -506,6 +564,7 @@
},
{
"BriefDescription": "Retired load uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
@@ -516,6 +575,7 @@
},
{
"BriefDescription": "Retired store uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
@@ -526,6 +586,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "This event counts the demand and prefetch data reads. All Core Data Reads include cacheable Demands and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -534,6 +595,7 @@
},
{
"BriefDescription": "Any memory transaction that reached the SQ.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"PublicDescription": "This event counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, and so on.",
@@ -542,6 +604,7 @@
},
{
"BriefDescription": "Cacheable and non-cacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "This event counts both cacheable and non-cacheable code read requests.",
@@ -550,6 +613,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "This event counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -558,6 +622,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "This event counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
@@ -566,14 +631,16 @@
},
{
"BriefDescription": "Offcore requests buffer cannot take more entries for this thread core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
- "PublicDescription": "This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full.\nNote: Writeback pending FIFO has six entries.",
+ "PublicDescription": "This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full. Note: Writeback pending FIFO has six entries.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
@@ -583,6 +650,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -593,6 +661,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -603,6 +672,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -613,6 +683,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
@@ -622,15 +693,17 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
- "PublicDescription": "This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS.\nNote: A prefetch promoted to Demand is counted from the promotion point.",
+ "PublicDescription": "This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS. Note: A prefetch promoted to Demand is counted from the promotion point.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -640,6 +713,7 @@
},
{
"BriefDescription": "Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
@@ -649,6 +723,7 @@
},
{
"BriefDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE",
"SampleAfterValue": "100003",
@@ -656,6 +731,7 @@
},
{
"BriefDescription": "Split locks in SQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"PublicDescription": "This event counts the number of split locks in the super queue.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/counter.json b/tools/perf/pmu-events/arch/x86/broadwellde/counter.json
new file mode 100644
index 000000000000..ada968d0a038
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/counter.json
@@ -0,0 +1,42 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "HA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "R2PCIe",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "iMC",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ }
+] \ No newline at end of file
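[Editor's note, not part of the patch: the new counter.json above describes, per PMU unit, how many fixed and generic counters are available. A minimal sketch of reading it back, assuming the file path of a kernel source tree:

# Illustrative sketch only: summarize counters per PMU unit from the new file.
import json

with open("tools/perf/pmu-events/arch/x86/broadwellde/counter.json") as f:
    units = json.load(f)

for u in units:
    print(f'{u["Unit"]}: {u["CountersNumGeneric"]} generic, {u["CountersNumFixed"]} fixed')
# Expected from the content above: core has 4 generic + 3 fixed counters;
# CBOX, HA, PCU, R2PCIe and iMC have 4 generic; IRP and UBOX have 2 generic.
]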
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/floating-point.json b/tools/perf/pmu-events/arch/x86/broadwellde/floating-point.json
index 986869252e71..9bf595af3f42 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 4 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 4 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 8 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational double precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.DOUBLE",
"SampleAfterValue": "2000006",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational packed floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* packed double and single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.PACKED",
"SampleAfterValue": "2000004",
@@ -55,6 +62,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computation operation. Applies to SSE* and AVX* scalar double and single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -63,6 +71,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -71,6 +80,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -79,6 +89,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational single precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SINGLE",
"SampleAfterValue": "2000005",
@@ -86,6 +97,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"SampleAfterValue": "2000003",
@@ -93,6 +105,7 @@
},
{
"BriefDescription": "Cycles with any input/output SSE or FP assist",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.ANY",
@@ -102,6 +115,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to input values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_INPUT",
"PublicDescription": "This event counts any input SSE* FP assist - invalid operation, denormal operand, dividing by zero, SNaN operand. Counting includes only cases involving penalties that required micro-code assist intervention.",
@@ -110,6 +124,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to Output values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_OUTPUT",
"PublicDescription": "This event counts the number of SSE* floating point (FP) micro-code assist (numeric overflow/underflow) when the output value (destination register) is invalid. Counting covers only cases involving penalties that require micro-code assist intervention.",
@@ -118,6 +133,7 @@
},
{
"BriefDescription": "Number of X87 assists due to input value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_INPUT",
"PublicDescription": "This event counts x87 floating point (FP) micro-code assist (invalid operation, denormal operand, SNaN operand) when the input value (one of the source operands to an FP instruction) is invalid.",
@@ -126,6 +142,7 @@
},
{
"BriefDescription": "Number of X87 assists due to output value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_OUTPUT",
"PublicDescription": "This event counts the number of x87 floating point (FP) micro-code assist (numeric overflow/underflow, inexact result) when the output value (destination register) is invalid.",
@@ -134,6 +151,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -141,6 +159,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -148,6 +167,7 @@
},
{
"BriefDescription": "Number of transitions from AVX-256 to legacy SSE when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "BDM30",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.AVX_TO_SSE",
@@ -157,6 +177,7 @@
},
{
"BriefDescription": "Number of transitions from SSE to AVX-256 when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "BDM30",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.SSE_TO_AVX",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "Micro-op dispatches cancelled due to insufficient SIMD physical register file read ports",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UOP_DISPATCHES_CANCELLED.SIMD_PRF",
"PublicDescription": "This event counts the number of micro-operations cancelled after they were dispatched from the scheduler to the execution units when the total number of physical register read ports across all dispatch ports exceeds the read bandwidth of the physical register file. The SIMD_PRF subevent applies to the following instructions: VDPPS, DPPS, VPCMPESTRI, PCMPESTRI, VPCMPESTRM, PCMPESTRM, VFMADD*, VFMADDSUB*, VFMSUB*, VMSUBADD*, VFNMADD*, VFNMSUB*. See the Broadwell Optimization Guide for more information.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/frontend.json b/tools/perf/pmu-events/arch/x86/broadwellde/frontend.json
index bd5da39564e1..018020a51436 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"SampleAfterValue": "100003",
@@ -8,14 +9,16 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
- "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. \nMM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs.\nPenalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
+ "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PublicDescription": "This event counts the number of both cacheable and noncacheable Instruction Cache, Streaming Buffer and Victim Cache Reads including UC fetches.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFDATA_STALL",
"PublicDescription": "This event counts cycles during which the demand fetch waits for data (wfdM104H) from L2 or iSB (opportunistic hit).",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Misses. Includes Uncacheable accesses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "This event counts the number of instruction cache, streaming buffer and victim cache misses. Counting includes UC accesses.",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS",
@@ -76,6 +85,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES",
@@ -85,6 +95,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may bypass the IDQ.",
@@ -93,6 +104,7 @@
},
{
"BriefDescription": "Instruction Decode Queue (IDQ) empty cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.EMPTY",
"PublicDescription": "This counts the number of cycles that the instruction decoder queue is empty and can indicate that the application may be bound in the front end. It does not determine whether there are uops being delivered to the Alloc stage since uops can be delivered by bypass skipping the Instruction Decode Queue (IDQ) when it is empty.",
@@ -101,6 +113,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_ALL_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may bypass the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -109,6 +122,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES",
@@ -118,6 +132,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may bypass the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -126,6 +141,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES",
@@ -135,6 +151,7 @@
},
{
"BriefDescription": "Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_CYCLES",
@@ -144,6 +161,7 @@
},
{
"BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -154,6 +172,7 @@
},
{
"BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_UOPS",
"PublicDescription": "This event counts the number of uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ.",
@@ -162,6 +181,7 @@
},
{
"BriefDescription": "Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_MITE_UOPS",
"PublicDescription": "This event counts the number of uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ.",
@@ -170,6 +190,7 @@
},
{
"BriefDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -179,6 +200,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "This event counts the total number of uops delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ. Uops maybe initiated by Decode Stream Buffer (DSB) or MITE.",
@@ -187,14 +209,16 @@
},
{
"BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
- "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when:\n a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread;\n b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); \n c. Instruction Decode Queue (IDQ) delivers four uops.",
+ "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread; b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); c. Instruction Decode Queue (IDQ) delivers four uops.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -204,6 +228,7 @@
},
{
"BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
@@ -213,6 +238,7 @@
},
{
"BriefDescription": "Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE",
@@ -222,6 +248,7 @@
},
{
"BriefDescription": "Cycles with less than 2 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE",
@@ -230,6 +257,7 @@
},
{
"BriefDescription": "Cycles with less than 3 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/memory.json b/tools/perf/pmu-events/arch/x86/broadwellde/memory.json
index 041b6ff4062e..9fca9a4eeb48 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of times HLE abort was triggered",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED",
"PEBS": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an HLE abort was attributed to a Memory condition (See TSX_Memory event for additional details).",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to uncommon conditions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC2",
"PublicDescription": "Number of times the TSX watchdog signaled an HLE abort.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC3",
"PublicDescription": "Number of times a disallowed operation caused an HLE abort.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC4",
"PublicDescription": "Number of times HLE caused a fault.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times HLE aborted and was not due to the abort conditions in subevents 3-6.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of times HLE commit succeeded",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.COMMIT",
"PublicDescription": "Number of times HLE commit succeeded.",
@@ -58,22 +65,25 @@
},
{
"BriefDescription": "Number of times we entered an HLE region; does not count nested transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.START",
- "PublicDescription": "Number of times we entered an HLE region\n does not count nested transactions.",
+ "PublicDescription": "Number of times we entered an HLE region does not count nested transactions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
- "PublicDescription": "This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following:\n1. memory disambiguation,\n2. external snoop, or\n3. cross SMT-HW-thread snoop (stores) hitting load buffer.",
+ "PublicDescription": "This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following: 1. memory disambiguation, 2. external snoop, or 3. cross SMT-HW-thread snoop (stores) hitting load buffer.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
{
"BriefDescription": "Randomly selected loads with latency value being above 128",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -87,6 +97,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 16",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -100,6 +111,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 256",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -113,6 +125,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 32",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -126,6 +139,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 4",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -139,6 +153,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 512",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -152,6 +167,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 64",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -165,6 +181,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 8",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -178,6 +195,7 @@
},
{
"BriefDescription": "Speculative cache line split load uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.LOADS",
"PublicDescription": "This event counts speculative cache-line split load uops dispatched to the L1 cache.",
@@ -186,6 +204,7 @@
},
{
"BriefDescription": "Speculative cache line split STA uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.STORES",
"PublicDescription": "This event counts speculative cache line split store-address (STA) uops dispatched to the L1 cache.",
@@ -194,6 +213,7 @@
},
{
"BriefDescription": "Number of times RTM abort was triggered",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
"PEBS": "1",
@@ -203,6 +223,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an RTM abort was attributed to a Memory condition (See TSX_Memory event for additional details).",
@@ -211,6 +232,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC2",
"PublicDescription": "Number of times the TSX watchdog signaled an RTM abort.",
@@ -219,6 +241,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC3",
"PublicDescription": "Number of times a disallowed operation caused an RTM abort.",
@@ -227,6 +250,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC4",
"PublicDescription": "Number of times a RTM caused a fault.",
@@ -235,6 +259,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times RTM aborted and was not due to the abort conditions in subevents 3-6.",
@@ -243,6 +268,7 @@
},
{
"BriefDescription": "Number of times RTM commit succeeded",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Number of times RTM commit succeeded.",
@@ -251,14 +277,16 @@
},
{
"BriefDescription": "Number of times we entered an RTM region; does not count nested transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.START",
- "PublicDescription": "Number of times we entered an RTM region\n does not count nested transactions.",
+ "PublicDescription": "Number of times we entered an RTM region does not count nested transactions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC1",
"SampleAfterValue": "2000003",
@@ -266,6 +294,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"PublicDescription": "Unfriendly TSX abort triggered by a vzeroupper instruction.",
@@ -274,6 +303,7 @@
},
{
"BriefDescription": "Counts the number of times an instruction execution caused the transactional nest count supported to be exceeded",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"PublicDescription": "Unfriendly TSX abort triggered by a nest count that is too deep.",
@@ -282,6 +312,7 @@
},
{
"BriefDescription": "Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC4",
"PublicDescription": "RTM region detected inside HLE.",
@@ -290,6 +321,7 @@
},
{
"BriefDescription": "Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC5",
"SampleAfterValue": "2000003",
@@ -297,6 +329,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to an evicted line caused by a transaction overflow",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Number of times a TSX Abort was triggered due to an evicted line caused by a transaction overflow.",
@@ -305,6 +338,7 @@
},
{
"BriefDescription": "Number of times a TSX line had a cache conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Number of times a TSX line had a cache conflict.",
@@ -313,6 +347,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to release/commit but data and address mismatch",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH",
"PublicDescription": "Number of times a TSX Abort was triggered due to release/commit but data and address mismatch.",
@@ -321,6 +356,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY",
"PublicDescription": "Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty.",
@@ -329,6 +365,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT",
"PublicDescription": "Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer.",
@@ -337,6 +374,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to a non-release/commit store to lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK",
"PublicDescription": "Number of times a TSX Abort was triggered due to a non-release/commit store to lock.",
@@ -345,6 +383,7 @@
},
{
"BriefDescription": "Number of times we could not allocate Lock Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL",
"PublicDescription": "Number of times we could not allocate Lock Buffer.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/metricgroups.json b/tools/perf/pmu-events/arch/x86/broadwellde/metricgroups.json
index f6a0258e3241..0863375bdead 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/metricgroups.json
@@ -2,9 +2,21 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -23,8 +35,11 @@
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -38,6 +53,7 @@
"Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -65,6 +81,7 @@
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -90,10 +107,12 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/other.json b/tools/perf/pmu-events/arch/x86/broadwellde/other.json
index 1c2a5b001949..f0de6a71719b 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/other.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Unhalted core cycles when the thread is in ring 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING0",
"PublicDescription": "This event counts the unhalted core cycles during which the thread is in the ring 0 privileged mode.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of intervals between processor halts while thread is in ring 0",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5C",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Unhalted core cycles when thread is in rings 1, 2, or 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING123",
"PublicDescription": "This event counts unhalted core cycles during which the thread is in rings 1, 2, or 3.",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Cycles when L1 and L2 are locked due to UC or split lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION",
"PublicDescription": "This event counts cycles in which the L1 and L2 are locked due to a UC lock or split lock. A lock is asserted in case of locked memory access, due to noncacheable memory, locked operation that spans two cache lines, or a page walk from the noncacheable page table. L1D and L2 locks have a very high performance penalty and it is highly recommended to avoid such access.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/pipeline.json b/tools/perf/pmu-events/arch/x86/broadwellde/pipeline.json
index 9a902d2160e6..962cd07eb307 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles when divider is busy executing divide operations",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "ARITH.FPU_DIV_ACTIVE",
"PublicDescription": "This event counts the number of the divide operations executed. Uses edge-detect and a cmask value of 1 on ARITH.FPU_DIV_ACTIVE to get the number of the divide operations executed.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Speculative and retired branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_BRANCHES",
"PublicDescription": "This event counts both taken and not taken speculative and retired branch instructions.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_CONDITIONAL",
"PublicDescription": "This event counts both taken and not taken speculative and retired macro-conditional branch instructions.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Speculative and retired macro-unconditional branches excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_JMP",
"PublicDescription": "This event counts both taken and not taken speculative and retired macro-unconditional branch instructions, excluding calls and indirects.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_NEAR_CALL",
"PublicDescription": "This event counts both taken and not taken speculative and retired direct near calls.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts both taken and not taken speculative and retired indirect branches excluding calls and return branches.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Speculative and retired indirect return branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN",
"PublicDescription": "This event counts both taken and not taken speculative and retired indirect branches that have a return mnemonic.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Not taken macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "This event counts not taken macro-conditional branch instructions.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "This event counts taken speculative and retired macro-conditional branch instructions.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branch instructions excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_JUMP",
"PublicDescription": "This event counts taken speculative and retired macro-conditional branch instructions excluding calls and indirect branches.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Taken speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL",
"PublicDescription": "This event counts taken speculative and retired direct near calls.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts taken speculative and retired indirect branches excluding calls and return branches.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"PublicDescription": "This event counts taken speculative and retired indirect calls including both register and memory indirect.",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN",
"PublicDescription": "This event counts taken speculative and retired indirect branches that have a return mnemonic.",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PublicDescription": "This event counts all (macro) branch instructions retired.",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired. (Precise Event - PEBS)",
+ "Counter": "0,1,2,3",
"Errata": "BDW98",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS",
@@ -130,6 +146,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "BDW98",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -157,6 +176,7 @@
},
{
"BriefDescription": "Direct and indirect macro near call instructions retired (captured in ring 3).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL_R3",
"PEBS": "1",
@@ -166,6 +186,7 @@
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
"PEBS": "1",
@@ -175,6 +196,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -184,6 +206,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NOT_TAKEN",
"PublicDescription": "This event counts not taken branch instructions retired.",
@@ -192,6 +215,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_BRANCHES",
"PublicDescription": "This event counts both taken and not taken speculative and retired mispredicted branch instructions.",
@@ -200,6 +224,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_CONDITIONAL",
"PublicDescription": "This event counts both taken and not taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -208,6 +233,7 @@
},
{
"BriefDescription": "Mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts both taken and not taken mispredicted indirect branches excluding calls and returns.",
@@ -216,6 +242,7 @@
},
{
"BriefDescription": "Speculative mispredicted indirect branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.INDIRECT",
"PublicDescription": "Counts speculatively miss-predicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded).",
@@ -224,6 +251,7 @@
},
{
"BriefDescription": "Not taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "This event counts not taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -232,6 +260,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "This event counts taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -240,6 +269,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts taken speculative and retired mispredicted indirect branches excluding calls and returns.",
@@ -248,6 +278,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -255,6 +286,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_RETURN_NEAR",
"PublicDescription": "This event counts taken speculative and retired mispredicted indirect branches that have a return mnemonic.",
@@ -263,6 +295,7 @@
},
{
"BriefDescription": "All mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PublicDescription": "This event counts all mispredicted macro branch instructions retired.",
@@ -270,6 +303,7 @@
},
{
"BriefDescription": "Mispredicted macro branch instructions retired. (Precise Event - PEBS)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -279,6 +313,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -288,6 +323,7 @@
},
{
"BriefDescription": "number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -297,6 +333,7 @@
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.RET",
"PEBS": "1",
@@ -306,6 +343,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -313,6 +351,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK",
"PublicDescription": "This is a fixed-frequency event programmed to general counters. It counts when the core is unhalted at 100 Mhz.",
@@ -322,6 +361,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "100003",
@@ -329,6 +369,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -336,13 +377,15 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
- "PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. \nNote: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. This event is clocked by base clock (100 Mhz) on Sandy Bridge. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
+ "PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. This event is clocked by base clock (100 Mhz) on Sandy Bridge. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
"UMask": "0x3"
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate).",
@@ -352,6 +395,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "100003",
@@ -359,6 +403,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "This event counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -367,12 +412,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD_ANY",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -381,12 +428,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -395,6 +444,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_PENDING",
@@ -404,6 +454,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -412,6 +463,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_PENDING",
@@ -421,6 +473,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_LDM_PENDING",
@@ -430,6 +483,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -438,6 +492,7 @@
},
{
"BriefDescription": "This event increments by 1 for every cycle where there was no execute for this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_NO_EXECUTE",
@@ -447,6 +502,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -455,6 +511,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_PENDING",
@@ -464,6 +521,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -472,6 +530,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_PENDING",
@@ -481,6 +540,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_LDM_PENDING",
@@ -490,6 +550,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -498,6 +559,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -506,6 +568,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "This event counts stalls occurred due to changing prefix length (66, 67 or REX.W when they change the length of the decoded instruction). Occurrences counting is proportional to the number of prefixes in a 16B-line. This may result in the following penalties: three-cycle penalty for each LCP in a 16-byte chunk.",
@@ -514,13 +577,15 @@
},
{
"BriefDescription": "Instructions retired from execution.",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. \nNotes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. \nCounting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
+ "PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. Counting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3",
"Errata": "BDM61",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
@@ -529,6 +594,7 @@
},
{
"BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
"Errata": "BDM11, BDM55",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
@@ -539,6 +605,7 @@
},
{
"BriefDescription": "FP operations retired. X87 FP operations that have no exceptions:",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.X87",
"PublicDescription": "This event counts FP operations retired. For X87 FP operations that have no exceptions counting also includes flows that have several X87, or flows that use X87 uops in the exception handling.",
@@ -547,6 +614,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x0D",
"EventName": "INT_MISC.RAT_STALL_CYCLES",
"PublicDescription": "This event counts the number of cycles during which Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the current thread. This also includes the cycles during which the Allocator is serving another thread.",
@@ -555,6 +623,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
@@ -565,6 +634,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES_ANY",
@@ -573,6 +643,7 @@
},
{
"BriefDescription": "This event counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"SampleAfterValue": "100003",
@@ -580,14 +651,16 @@
},
{
"BriefDescription": "Cases when loads get true Block-on-Store blocking code preventing store forwarding",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
- "PublicDescription": "This event counts how many times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when:\n - preceding store conflicts with the load (incomplete overlap);\n - store forwarding is impossible due to u-arch limitations;\n - preceding lock RMW operations are not forwarded;\n - store has the no-forward bit set (uncacheable/page-split/masked stores);\n - all-blocking stores are used (mostly, fences and port I/O);\nand others.\nThe most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on store due to being non-WB memory type or a lock. These cases are covered by other events.\nSee the table of not supported store forwards in the Optimization Guide.",
+ "PublicDescription": "This event counts how many times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when: - preceding store conflicts with the load (incomplete overlap); - store forwarding is impossible due to u-arch limitations; - preceding lock RMW operations are not forwarded; - store has the no-forward bit set (uncacheable/page-split/masked stores); - all-blocking stores are used (mostly, fences and port I/O); and others. The most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on store due to being non-WB memory type or a lock. These cases are covered by other events. See the table of not supported store forwards in the Optimization Guide.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
{
"BriefDescription": "False dependencies in MOB due to partial compare",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "This event counts false dependencies in MOB when the partial comparison upon loose net check and dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4k aliased.",
@@ -596,6 +669,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for hardware prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "LOAD_HIT_PRE.HW_PF",
"PublicDescription": "This event counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the hardware prefetch.",
@@ -604,6 +678,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for software prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PRE.SW_PF",
"PublicDescription": "This event counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by asm inspection of the nearby instructions.",
@@ -612,6 +687,7 @@
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_4_UOPS",
@@ -620,6 +696,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -628,6 +705,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "LSD.UOPS",
"SampleAfterValue": "2000003",
@@ -635,6 +713,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xC3",
@@ -644,6 +723,7 @@
},
{
"BriefDescription": "Cycles there was a Nuke. Account for both thread-specific and All Thread Nukes.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.CYCLES",
"PublicDescription": "This event counts both thread-specific (TS) and all-thread (AT) nukes.",
@@ -652,6 +732,7 @@
},
{
"BriefDescription": "This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MASKMOV",
"PublicDescription": "Maskmov false fault - counts number of time ucode passes through Maskmov flow due to instruction's mask being 0 while the flow was completed without raising a fault.",
@@ -660,6 +741,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "This event counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -668,6 +750,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -675,6 +758,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -682,6 +766,7 @@
},
{
"BriefDescription": "Number of times any microcode assist is invoked by HW upon uop writeback.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.ANY_WB_ASSIST",
"SampleAfterValue": "100003",
@@ -689,6 +774,7 @@
},
{
"BriefDescription": "Resource-related stall cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.ANY",
"PublicDescription": "This event counts resource-related stall cycles.",
@@ -697,6 +783,7 @@
},
{
"BriefDescription": "Cycles stalled due to re-order buffer full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ROB",
"PublicDescription": "This event counts ROB full stall cycles. This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -705,6 +792,7 @@
},
{
"BriefDescription": "Cycles stalled due to no eligible RS entry available.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.RS",
"PublicDescription": "This event counts stall cycles caused by absence of eligible entries in the reservation station (RS). This may result from RS overflow, or from RS deallocation because of the RS array Write Port allocation scheme (each RS entry has two write ports instead of four. As a result, empty entries could not be used, although RS is not really full). This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -713,6 +801,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "This event counts stall cycles caused by the store buffer (SB) overflow (excluding draining from synch). This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -721,6 +810,7 @@
},
{
"BriefDescription": "Count cases of saving new LBR",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.LBR_INSERTS",
"PublicDescription": "This event counts cases of saving new LBR records by hardware. This assumes proper enabling of LBRs and takes into account LBR filtering done by the LBR_SELECT register.",
@@ -729,14 +819,16 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
- "PublicDescription": "This event counts cycles during which the reservation station (RS) is empty for the thread.\nNote: In ST-mode, not active thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issues.",
+ "PublicDescription": "This event counts cycles during which the reservation station (RS) is empty for the thread. Note: In ST-mode, not active thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issues.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -747,6 +839,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.",
@@ -755,6 +848,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.",
@@ -763,6 +857,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.",
@@ -771,6 +866,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.",
@@ -779,6 +875,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.",
@@ -787,6 +884,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.",
@@ -795,6 +893,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_6",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.",
@@ -803,6 +902,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_7",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.",
@@ -811,6 +911,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
"PublicDescription": "Number of uops executed from any thread.",
@@ -819,6 +920,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -827,6 +929,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -835,6 +938,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -843,6 +947,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -851,6 +956,7 @@
},
{
"BriefDescription": "Cycles with no micro-ops executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE",
"Invert": "1",
@@ -859,6 +965,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC",
@@ -867,6 +974,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC",
@@ -875,6 +983,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC",
@@ -883,6 +992,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC",
@@ -891,6 +1001,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -901,6 +1012,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.THREAD",
"PublicDescription": "Number of uops to be executed per-thread each cycle.",
@@ -909,6 +1021,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.",
@@ -918,6 +1031,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0_CORE",
"SampleAfterValue": "2000003",
@@ -925,6 +1039,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.",
@@ -934,6 +1049,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 1.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1_CORE",
"SampleAfterValue": "2000003",
@@ -941,6 +1057,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.",
@@ -950,6 +1067,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2_CORE",
"SampleAfterValue": "2000003",
@@ -957,6 +1075,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.",
@@ -966,6 +1085,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3_CORE",
"SampleAfterValue": "2000003",
@@ -973,6 +1093,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.",
@@ -982,6 +1103,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 4.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4_CORE",
"SampleAfterValue": "2000003",
@@ -989,6 +1111,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.",
@@ -998,6 +1121,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 5.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5_CORE",
"SampleAfterValue": "2000003",
@@ -1005,6 +1129,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.",
@@ -1014,6 +1139,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 6.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6_CORE",
"SampleAfterValue": "2000003",
@@ -1021,6 +1147,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.",
@@ -1030,6 +1157,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 7.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7_CORE",
"SampleAfterValue": "2000003",
@@ -1037,6 +1165,7 @@
},
{
"BriefDescription": "Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "This event counts the number of Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS).",
@@ -1045,14 +1174,16 @@
},
{
"BriefDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive; added by GSR u-arch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.FLAGS_MERGE",
- "PublicDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive\n added by GSR u-arch.",
+ "PublicDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive added by GSR u-arch.",
"SampleAfterValue": "2000003",
"UMask": "0x10"
},
{
"BriefDescription": "Number of Multiply packed/scalar single precision uops allocated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SINGLE_MUL",
"SampleAfterValue": "2000003",
@@ -1060,6 +1191,7 @@
},
{
"BriefDescription": "Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SLOW_LEA",
"SampleAfterValue": "2000003",
@@ -1067,6 +1199,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -1077,6 +1210,7 @@
},
{
"BriefDescription": "Actually retired uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ALL",
"PEBS": "1",
@@ -1086,6 +1220,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.RETIRE_SLOTS",
"PEBS": "1",
@@ -1095,6 +1230,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -1105,6 +1241,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "16",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-cache.json b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-cache.json
index 56bba6d4e0f6..bfc499fdfdeb 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Bounce Control",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_C_BOUNCE_CONTROL",
"PerPkg": "1",
@@ -8,12 +9,14 @@
},
{
"BriefDescription": "Uncore Clocks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_C_CLOCKTICKS",
"PerPkg": "1",
"Unit": "CBOX"
},
{
"BriefDescription": "Counter 0 Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_C_COUNTER0_OCCUPANCY",
"PerPkg": "1",
@@ -22,6 +25,7 @@
},
{
"BriefDescription": "FaST wire asserted",
+ "Counter": "0,1",
"EventCode": "0x9",
"EventName": "UNC_C_FAST_ASSERTED",
"PerPkg": "1",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Cache Lookups; Any Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.ANY",
"PerPkg": "1",
@@ -39,6 +44,7 @@
},
{
"BriefDescription": "Cache Lookups; Data Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.DATA_READ",
"PerPkg": "1",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Cache Lookups; Lookups that Match NID",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.NID",
"PerPkg": "1",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Cache Lookups; Any Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.READ",
"PerPkg": "1",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Cache Lookups; External Snoop Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.REMOTE_SNOOP",
"PerPkg": "1",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "Cache Lookups; Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.WRITE",
"PerPkg": "1",
@@ -84,6 +94,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.E_STATE",
"PerPkg": "1",
@@ -93,6 +104,7 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.F_STATE",
"PerPkg": "1",
@@ -102,6 +114,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.I_STATE",
"PerPkg": "1",
@@ -111,6 +124,7 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.MISS",
"PerPkg": "1",
@@ -120,6 +134,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.M_STATE",
"PerPkg": "1",
@@ -129,6 +144,7 @@
},
{
"BriefDescription": "Lines Victimized; Victimized Lines that Match NID",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.NID",
"PerPkg": "1",
@@ -138,6 +154,7 @@
},
{
"BriefDescription": "Cbo Misc; DRd hitting non-M with raw CV=0",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.CVZERO_PREFETCH_MISS",
"PerPkg": "1",
@@ -147,6 +164,7 @@
},
{
"BriefDescription": "Cbo Misc; Clean Victim with raw CV=0",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.CVZERO_PREFETCH_VICTIM",
"PerPkg": "1",
@@ -156,6 +174,7 @@
},
{
"BriefDescription": "Cbo Misc; RFO HitS",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.RFO_HIT_S",
"PerPkg": "1",
@@ -165,6 +184,7 @@
},
{
"BriefDescription": "Cbo Misc; Silent Snoop Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.RSPI_WAS_FSE",
"PerPkg": "1",
@@ -174,6 +194,7 @@
},
{
"BriefDescription": "Cbo Misc",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.STARTED",
"PerPkg": "1",
@@ -183,6 +204,7 @@
},
{
"BriefDescription": "Cbo Misc; Write Combining Aliasing",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.WC_ALIASING",
"PerPkg": "1",
@@ -192,6 +214,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE0",
"PerPkg": "1",
@@ -201,6 +224,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE1",
"PerPkg": "1",
@@ -210,6 +234,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE2",
"PerPkg": "1",
@@ -219,6 +244,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE3",
"PerPkg": "1",
@@ -228,6 +254,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Bits Decremented",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.LRU_DECREMENT",
"PerPkg": "1",
@@ -237,6 +264,7 @@
},
{
"BriefDescription": "LRU Queue; Non-0 Aged Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.VICTIM_NON_ZERO",
"PerPkg": "1",
@@ -246,6 +274,7 @@
},
{
"BriefDescription": "AD Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.ALL",
"PerPkg": "1",
@@ -255,6 +284,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -264,6 +294,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.CW",
"PerPkg": "1",
@@ -273,6 +304,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -282,6 +314,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN_ODD",
"PerPkg": "1",
@@ -291,6 +324,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP_EVEN",
"PerPkg": "1",
@@ -300,6 +334,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP_ODD",
"PerPkg": "1",
@@ -309,6 +344,7 @@
},
{
"BriefDescription": "AK Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -318,6 +354,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -327,6 +364,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.CW",
"PerPkg": "1",
@@ -336,6 +374,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -345,6 +384,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN_ODD",
"PerPkg": "1",
@@ -354,6 +394,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP_EVEN",
"PerPkg": "1",
@@ -363,6 +404,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP_ODD",
"PerPkg": "1",
@@ -372,6 +414,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -381,6 +424,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -390,6 +434,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.CW",
"PerPkg": "1",
@@ -399,6 +444,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -408,6 +454,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN_ODD",
"PerPkg": "1",
@@ -417,6 +464,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP_EVEN",
"PerPkg": "1",
@@ -426,6 +474,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP_ODD",
"PerPkg": "1",
@@ -435,6 +484,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.AD",
"PerPkg": "1",
@@ -443,6 +493,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.AK",
"PerPkg": "1",
@@ -451,6 +502,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.BL",
"PerPkg": "1",
@@ -459,6 +511,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.IV",
"PerPkg": "1",
@@ -467,6 +520,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -476,6 +530,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.DN",
"PerPkg": "1",
@@ -485,6 +540,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.DOWN",
"PerPkg": "1",
@@ -494,6 +550,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.UP",
"PerPkg": "1",
@@ -503,6 +560,7 @@
},
{
"BriefDescription": "AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.AD",
"PerPkg": "1",
@@ -511,6 +569,7 @@
},
{
"BriefDescription": "AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.AK",
"PerPkg": "1",
@@ -519,6 +578,7 @@
},
{
"BriefDescription": "BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.BL",
"PerPkg": "1",
@@ -527,6 +587,7 @@
},
{
"BriefDescription": "IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.IV",
"PerPkg": "1",
@@ -535,6 +596,7 @@
},
{
"BriefDescription": "Number of cycles the Cbo is actively throttling traffic onto the Ring in order to limit bounce traffic.",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_C_RING_SRC_THRTL",
"PerPkg": "1",
@@ -542,15 +604,17 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.IPQ",
"PerPkg": "1",
- "PublicDescription": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IPQ is externally startved and therefore we are blocking the IRQ.",
+ "PublicDescription": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IPQ is externally starved and therefore we are blocking the IRQ.",
"UMask": "0x2",
"Unit": "CBOX"
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.IRQ",
"PerPkg": "1",
@@ -560,6 +624,7 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; ISMQ_BID",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.ISMQ_BIDS",
"PerPkg": "1",
@@ -569,6 +634,7 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.PRQ",
"PerPkg": "1",
@@ -578,6 +644,7 @@
},
{
"BriefDescription": "Ingress Allocations; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IPQ",
"PerPkg": "1",
@@ -587,6 +654,7 @@
},
{
"BriefDescription": "Ingress Allocations; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IRQ",
"PerPkg": "1",
@@ -596,6 +664,7 @@
},
{
"BriefDescription": "Ingress Allocations; IRQ Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IRQ_REJ",
"PerPkg": "1",
@@ -605,6 +674,7 @@
},
{
"BriefDescription": "Ingress Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.PRQ",
"PerPkg": "1",
@@ -614,6 +684,7 @@
},
{
"BriefDescription": "Ingress Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.PRQ_REJ",
"PerPkg": "1",
@@ -623,6 +694,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.IPQ",
"PerPkg": "1",
@@ -632,6 +704,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.IRQ",
"PerPkg": "1",
@@ -641,6 +714,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; ISMQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.ISMQ",
"PerPkg": "1",
@@ -650,6 +724,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.PRQ",
"PerPkg": "1",
@@ -659,6 +734,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Address Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.ADDR_CONFLICT",
"PerPkg": "1",
@@ -668,6 +744,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.ANY",
"PerPkg": "1",
@@ -677,6 +754,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.FULL",
"PerPkg": "1",
@@ -686,6 +764,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -695,6 +774,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_C_RxR_IPQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -704,6 +784,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_C_RxR_IPQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -713,6 +794,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Address Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.ADDR_CONFLICT",
"PerPkg": "1",
@@ -722,6 +804,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.ANY",
"PerPkg": "1",
@@ -731,6 +814,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.FULL",
"PerPkg": "1",
@@ -740,6 +824,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No IIO Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.IIO_CREDITS",
"PerPkg": "1",
@@ -749,6 +834,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.NID",
"PerPkg": "1",
@@ -758,6 +844,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -767,6 +854,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No RTIDs",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.RTID",
"PerPkg": "1",
@@ -776,6 +864,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -785,6 +874,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No BL Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.BL_SBO",
"PerPkg": "1",
@@ -794,6 +884,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -803,6 +894,7 @@
},
{
"BriefDescription": "ISMQ Retries; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.ANY",
"PerPkg": "1",
@@ -812,6 +904,7 @@
},
{
"BriefDescription": "ISMQ Retries; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.FULL",
"PerPkg": "1",
@@ -821,6 +914,7 @@
},
{
"BriefDescription": "ISMQ Retries; No IIO Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.IIO_CREDITS",
"PerPkg": "1",
@@ -830,6 +924,7 @@
},
{
"BriefDescription": "ISMQ Retries",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.NID",
"PerPkg": "1",
@@ -839,6 +934,7 @@
},
{
"BriefDescription": "ISMQ Retries; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -848,6 +944,7 @@
},
{
"BriefDescription": "ISMQ Retries; No RTIDs",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.RTID",
"PerPkg": "1",
@@ -857,6 +954,7 @@
},
{
"BriefDescription": "ISMQ Retries",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.WB_CREDITS",
"PerPkg": "1",
@@ -866,6 +964,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -875,6 +974,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; No BL Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.BL_SBO",
"PerPkg": "1",
@@ -884,6 +984,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -893,6 +994,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IPQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IPQ",
"PerPkg": "1",
@@ -902,6 +1004,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IRQ",
"PerPkg": "1",
@@ -911,6 +1014,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IRQ Rejected",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IRQ_REJ",
"PerPkg": "1",
@@ -920,6 +1024,7 @@
},
{
"BriefDescription": "Ingress Occupancy; PRQ Rejects",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.PRQ_REJ",
"PerPkg": "1",
@@ -929,6 +1034,7 @@
},
{
"BriefDescription": "SBo Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_C_SBO_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -938,6 +1044,7 @@
},
{
"BriefDescription": "SBo Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_C_SBO_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -947,6 +1054,7 @@
},
{
"BriefDescription": "SBo Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x3E",
"EventName": "UNC_C_SBO_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -956,6 +1064,7 @@
},
{
"BriefDescription": "SBo Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x3E",
"EventName": "UNC_C_SBO_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -965,6 +1074,7 @@
},
{
"BriefDescription": "TOR Inserts; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.ALL",
"PerPkg": "1",
@@ -974,6 +1084,7 @@
},
{
"BriefDescription": "TOR Inserts; Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.EVICTION",
"PerPkg": "1",
@@ -983,6 +1094,7 @@
},
{
"BriefDescription": "TOR Inserts; Local Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOCAL",
"PerPkg": "1",
@@ -992,6 +1104,7 @@
},
{
"BriefDescription": "TOR Inserts; Local Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOCAL_OPCODE",
"PerPkg": "1",
@@ -1001,6 +1114,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Local Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL",
"PerPkg": "1",
@@ -1010,6 +1124,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Local Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE",
"PerPkg": "1",
@@ -1019,6 +1134,7 @@
},
{
"BriefDescription": "TOR Inserts; Miss Opcode Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_OPCODE",
"PerPkg": "1",
@@ -1028,6 +1144,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Remote Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE",
"PerPkg": "1",
@@ -1037,6 +1154,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Remote Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE",
"PerPkg": "1",
@@ -1046,6 +1164,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_ALL",
"PerPkg": "1",
@@ -1055,6 +1174,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_EVICTION",
"PerPkg": "1",
@@ -1064,6 +1184,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Miss All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_MISS_ALL",
"PerPkg": "1",
@@ -1073,6 +1194,7 @@
},
{
"BriefDescription": "TOR Inserts; NID and Opcode Matched Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_MISS_OPCODE",
"PerPkg": "1",
@@ -1082,6 +1204,7 @@
},
{
"BriefDescription": "TOR Inserts; NID and Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_OPCODE",
"PerPkg": "1",
@@ -1091,6 +1214,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_WB",
"PerPkg": "1",
@@ -1100,6 +1224,7 @@
},
{
"BriefDescription": "TOR Inserts; Opcode Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.OPCODE",
"PerPkg": "1",
@@ -1109,6 +1234,7 @@
},
{
"BriefDescription": "TOR Inserts; Remote Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.REMOTE",
"PerPkg": "1",
@@ -1118,6 +1244,7 @@
},
{
"BriefDescription": "TOR Inserts; Remote Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.REMOTE_OPCODE",
"PerPkg": "1",
@@ -1127,6 +1254,7 @@
},
{
"BriefDescription": "TOR Inserts; Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.WB",
"PerPkg": "1",
@@ -1136,6 +1264,7 @@
},
{
"BriefDescription": "TOR Occupancy; Any",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -1145,6 +1274,7 @@
},
{
"BriefDescription": "TOR Occupancy; Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.EVICTION",
"PerPkg": "1",
@@ -1154,6 +1284,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -1163,6 +1294,7 @@
},
{
"BriefDescription": "TOR Occupancy; Local Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOCAL_OPCODE",
"PerPkg": "1",
@@ -1172,6 +1304,7 @@
},
{
"BriefDescription": "TOR Occupancy; Miss All",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_ALL",
"PerPkg": "1",
@@ -1181,6 +1314,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_LOCAL",
"PerPkg": "1",
@@ -1190,6 +1324,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses to Local Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_LOCAL_OPCODE",
"PerPkg": "1",
@@ -1199,6 +1334,7 @@
},
{
"BriefDescription": "TOR Occupancy; Miss Opcode Match",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_OPCODE",
"PerPkg": "1",
@@ -1208,6 +1344,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_REMOTE",
"PerPkg": "1",
@@ -1217,6 +1354,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses to Remote Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_REMOTE_OPCODE",
"PerPkg": "1",
@@ -1226,6 +1364,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_ALL",
"PerPkg": "1",
@@ -1235,6 +1374,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_EVICTION",
"PerPkg": "1",
@@ -1244,6 +1384,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_MISS_ALL",
"PerPkg": "1",
@@ -1253,6 +1394,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID and Opcode Matched Miss",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_MISS_OPCODE",
"PerPkg": "1",
@@ -1262,6 +1404,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID and Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_OPCODE",
"PerPkg": "1",
@@ -1271,6 +1414,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched Writebacks",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_WB",
"PerPkg": "1",
@@ -1280,6 +1424,7 @@
},
{
"BriefDescription": "TOR Occupancy; Opcode Match",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.OPCODE",
"PerPkg": "1",
@@ -1289,6 +1434,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -1298,6 +1444,7 @@
},
{
"BriefDescription": "TOR Occupancy; Remote Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.REMOTE_OPCODE",
"PerPkg": "1",
@@ -1307,6 +1454,7 @@
},
{
"BriefDescription": "TOR Occupancy; Writebacks",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.WB",
"PerPkg": "1",
@@ -1316,6 +1464,7 @@
},
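The "Counter" fields being added in this hunk restrict which physical CBox counters an event may be scheduled on: the UNC_C_TOR_OCCUPANCY events above are limited to counter 0, while the UNC_C_TOR_INSERTS events may use any of counters 0-3. The sketch below illustrates the kind of constraint check a tool might perform before grouping events; it is a minimal, hypothetical helper in Python, not part of perf or of this patch.

# Hypothetical constraint check: can this set of uncore events be scheduled
# together, given each event's allowed-counter list from the JSON "Counter" field?
from itertools import permutations

def schedulable(events: dict[str, str]) -> bool:
    """events maps an event name to its "Counter" string, e.g. "0,1,2,3"."""
    allowed = [set(int(c) for c in counters.split(",")) for counters in events.values()]
    # Try every assignment of distinct counters (fine for the 4-counter CBox).
    return any(
        all(slot in ok for slot, ok in zip(perm, allowed))
        for perm in permutations(range(4), len(allowed))
    )

# Example: two occupancy events both need counter 0, so they cannot be co-scheduled.
print(schedulable({"UNC_C_TOR_INSERTS.ALL": "0,1,2,3", "UNC_C_TOR_OCCUPANCY.ALL": "0"}))   # True
print(schedulable({"UNC_C_TOR_OCCUPANCY.ALL": "0", "UNC_C_TOR_OCCUPANCY.MISS_ALL": "0"}))  # False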
{
"BriefDescription": "Onto AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.AD",
"PerPkg": "1",
@@ -1324,6 +1473,7 @@
},
{
"BriefDescription": "Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.AK",
"PerPkg": "1",
@@ -1332,6 +1482,7 @@
},
{
"BriefDescription": "Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.BL",
"PerPkg": "1",
@@ -1340,6 +1491,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AD_CACHE",
"PerPkg": "1",
@@ -1349,6 +1501,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AD_CORE",
"PerPkg": "1",
@@ -1358,6 +1511,7 @@
},
{
"BriefDescription": "Egress Allocations; AK - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AK_CACHE",
"PerPkg": "1",
@@ -1367,6 +1521,7 @@
},
{
"BriefDescription": "Egress Allocations; AK - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AK_CORE",
"PerPkg": "1",
@@ -1376,6 +1531,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Cacheno",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.BL_CACHE",
"PerPkg": "1",
@@ -1385,6 +1541,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.BL_CORE",
"PerPkg": "1",
@@ -1394,6 +1551,7 @@
},
{
"BriefDescription": "Egress Allocations; IV - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.IV_CACHE",
"PerPkg": "1",
@@ -1403,6 +1561,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AD Ring (to core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.AD_CORE",
"PerPkg": "1",
@@ -1412,6 +1571,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.AK_BOTH",
"PerPkg": "1",
@@ -1421,6 +1581,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.BL_BOTH",
"PerPkg": "1",
@@ -1430,6 +1591,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto IV Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.IV",
"PerPkg": "1",
@@ -1439,6 +1601,7 @@
},
{
"BriefDescription": "BT Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_H_BT_CYCLES_NE",
"PerPkg": "1",
@@ -1447,6 +1610,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_BL_HAZARD",
"PerPkg": "1",
@@ -1456,6 +1620,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Snoop Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_SNP_HAZARD",
"PerPkg": "1",
@@ -1465,6 +1630,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.RSPACKCFLT_HAZARD",
"PerPkg": "1",
@@ -1474,6 +1640,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.WBMDATA_HAZARD",
"PerPkg": "1",
@@ -1483,24 +1650,27 @@
},
{
"BriefDescription": "HA to iMC Bypass; Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_H_BYPASS_IMC.NOT_TAKEN",
"PerPkg": "1",
- "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filted by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass.",
+ "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "HA to iMC Bypass; Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_H_BYPASS_IMC.TAKEN",
"PerPkg": "1",
- "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filted by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the bypass.",
+ "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the bypass.",
"UMask": "0x1",
"Unit": "HA"
},
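Each entry above ultimately describes a raw event selection: perf combines the EventCode and UMask fields into an event/umask pair for the matching uncore PMU. Below is a minimal sketch of that mapping using the conventional pmu/event=...,umask=.../ syntax; the PMU instance name used here (uncore_ha_0) is an assumption, since the actual device name depends on the Unit field and on what the running system exposes under /sys/bus/event_source/devices/.

# Illustrative only: build a perf-style raw event string from one JSON entry.
def to_perf_event(entry: dict, pmu: str = "uncore_ha_0") -> str:
    # "uncore_ha_0" is an assumed PMU instance name for Unit "HA".
    terms = ["event=" + entry["EventCode"]]
    if "UMask" in entry:
        terms.append("umask=" + entry["UMask"])
    return pmu + "/" + ",".join(terms) + "/"

bypass_taken = {
    "EventName": "UNC_H_BYPASS_IMC.TAKEN",
    "EventCode": "0x14",
    "UMask": "0x1",
}
print(to_perf_event(bypass_taken))  # uncore_ha_0/event=0x14,umask=0x1/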
{
"BriefDescription": "uclks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_H_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer like the QPI Agent.",
@@ -1508,6 +1678,7 @@
},
{
"BriefDescription": "Direct2Core Messages Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_H_DIRECT2CORE_COUNT",
"PerPkg": "1",
@@ -1516,6 +1687,7 @@
},
{
"BriefDescription": "Cycles when Direct2Core was Disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_H_DIRECT2CORE_CYCLES_DISABLED",
"PerPkg": "1",
@@ -1524,6 +1696,7 @@
},
{
"BriefDescription": "Number of Reads that had Direct2Core Overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_H_DIRECT2CORE_TXN_OVERRIDE",
"PerPkg": "1",
@@ -1532,6 +1705,7 @@
},
{
"BriefDescription": "Directory Lat Opt Return",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_H_DIRECTORY_LAT_OPT",
"PerPkg": "1",
@@ -1540,6 +1714,7 @@
},
{
"BriefDescription": "Directory Lookups; Snoop Not Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_H_DIRECTORY_LOOKUP.NO_SNP",
"PerPkg": "1",
@@ -1549,6 +1724,7 @@
},
{
"BriefDescription": "Directory Lookups; Snoop Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_H_DIRECTORY_LOOKUP.SNP",
"PerPkg": "1",
@@ -1558,6 +1734,7 @@
},
{
"BriefDescription": "Directory Updates; Any Directory Update",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.ANY",
"PerPkg": "1",
@@ -1567,6 +1744,7 @@
},
{
"BriefDescription": "Directory Updates; Directory Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.CLEAR",
"PerPkg": "1",
@@ -1576,6 +1754,7 @@
},
{
"BriefDescription": "Directory Updates; Directory Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.SET",
"PerPkg": "1",
@@ -1585,6 +1764,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1593,6 +1773,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ALL",
"PerPkg": "1",
@@ -1601,6 +1782,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ALLOCS",
"PerPkg": "1",
@@ -1609,6 +1791,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.EVICTS",
"PerPkg": "1",
@@ -1617,6 +1800,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.HOM",
"PerPkg": "1",
@@ -1625,6 +1809,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Invalidations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.INVALS",
"PerPkg": "1",
@@ -1633,6 +1818,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.READ_OR_INVITOE",
"PerPkg": "1",
@@ -1641,6 +1827,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSP",
"PerPkg": "1",
@@ -1649,6 +1836,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -1657,6 +1845,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -1665,6 +1854,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDS",
"PerPkg": "1",
@@ -1673,6 +1863,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.WBMTOE_OR_S",
"PerPkg": "1",
@@ -1681,6 +1872,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.WBMTOI",
"PerPkg": "1",
@@ -1689,6 +1881,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1697,6 +1890,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.ALL",
"PerPkg": "1",
@@ -1705,6 +1899,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.HOM",
"PerPkg": "1",
@@ -1713,6 +1908,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.READ_OR_INVITOE",
"PerPkg": "1",
@@ -1721,6 +1917,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSP",
"PerPkg": "1",
@@ -1729,6 +1926,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -1737,6 +1935,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -1745,6 +1944,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDS",
"PerPkg": "1",
@@ -1753,6 +1953,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.WBMTOE_OR_S",
"PerPkg": "1",
@@ -1761,6 +1962,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.WBMTOI",
"PerPkg": "1",
@@ -1769,6 +1971,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1777,6 +1980,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ALL",
"PerPkg": "1",
@@ -1785,6 +1989,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ALLOCS",
"PerPkg": "1",
@@ -1793,6 +1998,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.HOM",
"PerPkg": "1",
@@ -1801,6 +2007,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; Invalidations",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.INVALS",
"PerPkg": "1",
@@ -1809,6 +2016,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.READ_OR_INVITOE",
"PerPkg": "1",
@@ -1817,6 +2025,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSP",
"PerPkg": "1",
@@ -1825,6 +2034,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -1833,6 +2043,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -1841,6 +2052,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDS",
"PerPkg": "1",
@@ -1849,6 +2061,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.WBMTOE_OR_S",
"PerPkg": "1",
@@ -1857,6 +2070,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.WBMTOI",
"PerPkg": "1",
@@ -1865,6 +2079,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; AD to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI0",
"PerPkg": "1",
@@ -1874,6 +2089,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; AD to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI1",
"PerPkg": "1",
@@ -1883,6 +2099,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI2",
"PerPkg": "1",
@@ -1892,6 +2109,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI0",
"PerPkg": "1",
@@ -1901,6 +2119,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI1",
"PerPkg": "1",
@@ -1910,6 +2129,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI2",
"PerPkg": "1",
@@ -1919,6 +2139,7 @@
},
{
"BriefDescription": "HA to iMC Normal Priority Reads Issued; Normal Priority",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_H_IMC_READS.NORMAL",
"PerPkg": "1",
@@ -1928,6 +2149,7 @@
},
{
"BriefDescription": "Retry Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_H_IMC_RETRY",
"PerPkg": "1",
@@ -1935,6 +2157,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; All Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.ALL",
"PerPkg": "1",
@@ -1944,6 +2167,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; Full Line Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.FULL",
"PerPkg": "1",
@@ -1953,6 +2177,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; ISOCH Full Line",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.FULL_ISOCH",
"PerPkg": "1",
@@ -1962,6 +2187,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; Partial Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.PARTIAL",
"PerPkg": "1",
@@ -1971,6 +2197,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; ISOCH Partial",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.PARTIAL_ISOCH",
"PerPkg": "1",
@@ -1980,6 +2207,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_H_IOT_BACKPRESSURE.HUB",
"PerPkg": "1",
@@ -1988,6 +2216,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_H_IOT_BACKPRESSURE.SAT",
"PerPkg": "1",
@@ -1996,6 +2225,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x64",
"EventName": "UNC_H_IOT_CTS_EAST_LO.CTS0",
"PerPkg": "1",
@@ -2005,6 +2235,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x64",
"EventName": "UNC_H_IOT_CTS_EAST_LO.CTS1",
"PerPkg": "1",
@@ -2014,6 +2245,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0x65",
"EventName": "UNC_H_IOT_CTS_HI.CTS2",
"PerPkg": "1",
@@ -2023,6 +2255,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0x65",
"EventName": "UNC_H_IOT_CTS_HI.CTS3",
"PerPkg": "1",
@@ -2032,6 +2265,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_H_IOT_CTS_WEST_LO.CTS0",
"PerPkg": "1",
@@ -2041,6 +2275,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_H_IOT_CTS_WEST_LO.CTS1",
"PerPkg": "1",
@@ -2050,6 +2285,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Cancelled",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.CANCELLED",
"PerPkg": "1",
@@ -2059,6 +2295,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Local InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2068,6 +2305,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Local Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.READS_LOCAL",
"PerPkg": "1",
@@ -2077,6 +2315,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Reads Local - Useful",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.READS_LOCAL_USEFUL",
"PerPkg": "1",
@@ -2086,6 +2325,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.REMOTE",
"PerPkg": "1",
@@ -2095,6 +2335,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Remote - Useful",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.REMOTE_USEFUL",
"PerPkg": "1",
@@ -2104,6 +2345,7 @@
},
{
"BriefDescription": "OSB Early Data Return; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.ALL",
"PerPkg": "1",
@@ -2113,6 +2355,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Local I",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_LOCAL_I",
"PerPkg": "1",
@@ -2122,6 +2365,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Local S",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_LOCAL_S",
"PerPkg": "1",
@@ -2131,6 +2375,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Remote I",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_REMOTE_I",
"PerPkg": "1",
@@ -2140,6 +2385,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Remote S",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_REMOTE_S",
"PerPkg": "1",
@@ -2149,6 +2395,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local InvItoEs",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2158,6 +2405,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote InvItoEs",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.INVITOE_REMOTE",
"PerPkg": "1",
@@ -2167,6 +2415,7 @@
},
{
"BriefDescription": "Read and Write Requests; Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS",
"PerPkg": "1",
@@ -2176,6 +2425,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS_LOCAL",
"PerPkg": "1",
@@ -2185,6 +2435,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS_REMOTE",
"PerPkg": "1",
@@ -2194,6 +2445,7 @@
},
{
"BriefDescription": "Read and Write Requests; Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES",
"PerPkg": "1",
@@ -2203,6 +2455,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES_LOCAL",
"PerPkg": "1",
@@ -2212,6 +2465,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES_REMOTE",
"PerPkg": "1",
@@ -2221,6 +2475,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -2230,6 +2485,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2239,6 +2495,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -2248,6 +2505,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW",
"PerPkg": "1",
@@ -2257,6 +2515,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -2266,6 +2525,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -2275,6 +2535,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -2284,6 +2545,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -2293,6 +2555,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2302,6 +2565,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -2311,6 +2575,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW",
"PerPkg": "1",
@@ -2320,6 +2585,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -2329,6 +2595,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -2338,6 +2605,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -2347,6 +2615,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -2356,6 +2625,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2365,6 +2635,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -2374,6 +2645,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW",
"PerPkg": "1",
@@ -2383,6 +2655,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -2392,6 +2665,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -2401,42 +2675,47 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN0",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN2",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
"UMask": "0x4",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN3",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
"UMask": "0x8",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0",
"PerPkg": "1",
@@ -2446,6 +2725,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1",
"PerPkg": "1",
@@ -2455,6 +2735,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2",
"PerPkg": "1",
@@ -2464,6 +2745,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN3",
"PerPkg": "1",
@@ -2473,6 +2755,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_H_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2482,6 +2765,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_H_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2491,6 +2775,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_H_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2500,6 +2785,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_H_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2509,6 +2795,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_H_SBO1_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2518,6 +2805,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_H_SBO1_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2527,6 +2815,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_H_SBO1_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2536,6 +2825,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_H_SBO1_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2545,6 +2835,7 @@
},
{
"BriefDescription": "Data beat the Snoop Responses; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_H_SNOOPS_RSP_AFTER_DATA.LOCAL",
"PerPkg": "1",
@@ -2554,6 +2845,7 @@
},
{
"BriefDescription": "Data beat the Snoop Responses; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_H_SNOOPS_RSP_AFTER_DATA.REMOTE",
"PerPkg": "1",
@@ -2563,6 +2855,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -2572,6 +2865,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.LOCAL",
"PerPkg": "1",
@@ -2581,6 +2875,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.REMOTE",
"PerPkg": "1",
@@ -2590,6 +2885,7 @@
},
{
"BriefDescription": "Tracker Snoops Outstanding Accumulator; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_H_SNOOP_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -2599,6 +2895,7 @@
},
{
"BriefDescription": "Tracker Snoops Outstanding Accumulator; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_H_SNOOP_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -2608,6 +2905,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RSPCNFLCT*",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPCNFLCT",
"PerPkg": "1",
@@ -2617,6 +2915,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPI",
"PerPkg": "1",
@@ -2626,6 +2925,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPIFWD",
"PerPkg": "1",
@@ -2635,6 +2935,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPS",
"PerPkg": "1",
@@ -2644,6 +2945,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPSFWD",
"PerPkg": "1",
@@ -2653,6 +2955,7 @@
},
{
"BriefDescription": "Snoop Responses Received; Rsp*Fwd*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSP_FWD_WB",
"PerPkg": "1",
@@ -2662,6 +2965,7 @@
},
{
"BriefDescription": "Snoop Responses Received; Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSP_WB",
"PerPkg": "1",
@@ -2671,6 +2975,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Other",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.OTHER",
"PerPkg": "1",
@@ -2680,6 +2985,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspCnflct",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPCNFLCT",
"PerPkg": "1",
@@ -2689,6 +2995,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPI",
"PerPkg": "1",
@@ -2698,6 +3005,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPIFWD",
"PerPkg": "1",
@@ -2707,6 +3015,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPS",
"PerPkg": "1",
@@ -2716,6 +3025,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPSFWD",
"PerPkg": "1",
@@ -2725,6 +3035,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*FWD*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPxFWDxWB",
"PerPkg": "1",
@@ -2734,6 +3045,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPxWB",
"PerPkg": "1",
@@ -2743,6 +3055,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -2752,6 +3065,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -2761,6 +3075,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -2770,6 +3085,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -2779,6 +3095,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION0",
"PerPkg": "1",
@@ -2788,6 +3105,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION1",
"PerPkg": "1",
@@ -2797,6 +3115,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION2",
"PerPkg": "1",
@@ -2806,6 +3125,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION3",
"PerPkg": "1",
@@ -2815,6 +3135,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION4",
"PerPkg": "1",
@@ -2824,6 +3145,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION5",
"PerPkg": "1",
@@ -2833,6 +3155,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION6",
"PerPkg": "1",
@@ -2842,6 +3165,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION7",
"PerPkg": "1",
@@ -2851,6 +3175,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION10",
"PerPkg": "1",
@@ -2860,6 +3185,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 11",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION11",
"PerPkg": "1",
@@ -2869,6 +3195,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION8",
"PerPkg": "1",
@@ -2878,6 +3205,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION9",
"PerPkg": "1",
@@ -2887,6 +3215,7 @@
},
{
"BriefDescription": "Tracker Cycles Full; Cycles Completely Used",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_H_TRACKER_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -2896,6 +3225,7 @@
},
{
"BriefDescription": "Tracker Cycles Full; Cycles GP Completely Used",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_H_TRACKER_CYCLES_FULL.GP",
"PerPkg": "1",
@@ -2905,33 +3235,37 @@
},
{
"BriefDescription": "Tracker Cycles Not Empty; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.ALL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets.",
"UMask": "0x3",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.LOCAL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.REMOTE",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local InvItoE Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2941,6 +3275,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote InvItoE Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.INVITOE_REMOTE",
"PerPkg": "1",
@@ -2950,6 +3285,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local Read Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.READS_LOCAL",
"PerPkg": "1",
@@ -2959,6 +3295,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote Read Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.READS_REMOTE",
"PerPkg": "1",
@@ -2968,6 +3305,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.WRITES_LOCAL",
"PerPkg": "1",
@@ -2977,6 +3315,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.WRITES_REMOTE",
"PerPkg": "1",
@@ -2986,6 +3325,7 @@
},
{
"BriefDescription": "Data Pending Occupancy Accumulator; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_H_TRACKER_PENDING_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -2995,6 +3335,7 @@
},
{
"BriefDescription": "Data Pending Occupancy Accumulator; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_H_TRACKER_PENDING_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -3004,6 +3345,7 @@
},
{
"BriefDescription": "Outbound NDR Ring Transactions; Non-data Responses",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_H_TxR_AD.HOM",
"PerPkg": "1",
@@ -3013,6 +3355,7 @@
},
{
"BriefDescription": "AD Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3022,6 +3365,7 @@
},
{
"BriefDescription": "AD Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3031,6 +3375,7 @@
},
{
"BriefDescription": "AD Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3040,6 +3385,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3049,6 +3395,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3058,6 +3405,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3067,6 +3415,7 @@
},
{
"BriefDescription": "AD Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.ALL",
"PerPkg": "1",
@@ -3076,6 +3425,7 @@
},
{
"BriefDescription": "AD Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3085,6 +3435,7 @@
},
{
"BriefDescription": "AD Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3094,6 +3445,7 @@
},
{
"BriefDescription": "AK Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3103,6 +3455,7 @@
},
{
"BriefDescription": "AK Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3112,6 +3465,7 @@
},
{
"BriefDescription": "AK Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3121,6 +3475,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3130,6 +3485,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3139,6 +3495,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3148,6 +3505,7 @@
},
{
"BriefDescription": "AK Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.ALL",
"PerPkg": "1",
@@ -3157,6 +3515,7 @@
},
{
"BriefDescription": "AK Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3166,6 +3525,7 @@
},
{
"BriefDescription": "AK Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3175,6 +3535,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_CACHE",
"PerPkg": "1",
@@ -3184,6 +3545,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Core",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_CORE",
"PerPkg": "1",
@@ -3193,6 +3555,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to QPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_QPI",
"PerPkg": "1",
@@ -3202,6 +3565,7 @@
},
{
"BriefDescription": "BL Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3211,6 +3575,7 @@
},
{
"BriefDescription": "BL Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3220,6 +3585,7 @@
},
{
"BriefDescription": "BL Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3229,6 +3595,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3238,6 +3605,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3247,6 +3615,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3256,6 +3625,7 @@
},
{
"BriefDescription": "BL Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.ALL",
"PerPkg": "1",
@@ -3265,6 +3635,7 @@
},
{
"BriefDescription": "BL Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3274,6 +3645,7 @@
},
{
"BriefDescription": "BL Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3283,6 +3655,7 @@
},
{
"BriefDescription": "Injection Starvation; For AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_H_TxR_STARVED.AK",
"PerPkg": "1",
@@ -3292,6 +3665,7 @@
},
{
"BriefDescription": "Injection Starvation; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_H_TxR_STARVED.BL",
"PerPkg": "1",
@@ -3301,42 +3675,47 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN0",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN2",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
"UMask": "0x4",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN3",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
"UMask": "0x8",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN0",
"PerPkg": "1",
@@ -3346,6 +3725,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN1",
"PerPkg": "1",
@@ -3355,6 +3735,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN2",
"PerPkg": "1",
@@ -3364,6 +3745,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN3",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
index 910395977a6e..5ccc49cb66a6 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Total Write Cache Occupancy; Any Source",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.ANY",
"PerPkg": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Total Write Cache Occupancy; Select Source",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.SOURCE",
"PerPkg": "1",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Clocks in the IRP",
+ "Counter": "0,1",
"EventName": "UNC_I_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Number of clocks in the IRP.",
@@ -26,78 +29,87 @@
},
{
"BriefDescription": "Coherent Ops; CLFlush",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.CLFLUSH",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; CRd",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.CRD",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; DRd",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.DRD",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIDCAHin5t",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCIDCAHINT",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIRdCur",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCIRDCUR",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIItoM",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCITOM",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; RFO",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.RFO",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; WbMtoI",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.WBMTOI",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Atomic Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_ATOMIC_INSERT",
"PerPkg": "1",
@@ -107,6 +119,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Read Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_RD_INSERT",
"PerPkg": "1",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Write Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_WR_INSERT",
"PerPkg": "1",
@@ -125,6 +139,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Rejects",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_REJ",
"PerPkg": "1",
@@ -134,6 +149,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Requests",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_REQ",
"PerPkg": "1",
@@ -143,6 +159,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Transfers From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_XFER",
"PerPkg": "1",
@@ -152,6 +169,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Prefetch Ack Hints From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.PF_ACK_HINT",
"PerPkg": "1",
@@ -161,6 +179,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Prefetch TimeOut",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.PF_TIMEOUT",
"PerPkg": "1",
@@ -170,6 +189,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Data Throttled",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.DATA_THROTTLE",
"PerPkg": "1",
@@ -179,6 +199,7 @@
},
{
"BriefDescription": "Misc Events - Set 1",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.LOST_FWD",
"PerPkg": "1",
@@ -188,6 +209,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Invalid",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
"PerPkg": "1",
@@ -197,6 +219,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Valid",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SEC_RCVD_VLD",
"PerPkg": "1",
@@ -206,6 +229,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of E Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_E",
"PerPkg": "1",
@@ -215,6 +239,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of I Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_I",
"PerPkg": "1",
@@ -224,6 +249,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of M Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_M",
"PerPkg": "1",
@@ -233,6 +259,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of S Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_S",
"PerPkg": "1",
@@ -242,6 +269,7 @@
},
{
"BriefDescription": "AK Ingress Occupancy",
+ "Counter": "0,1",
"EventCode": "0xA",
"EventName": "UNC_I_RxR_AK_INSERTS",
"PerPkg": "1",
@@ -250,6 +278,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
@@ -258,6 +287,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - DRS",
+ "Counter": "0,1",
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
@@ -266,6 +296,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_DRS_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
@@ -274,6 +305,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
@@ -282,6 +314,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - NCB",
+ "Counter": "0,1",
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
@@ -290,6 +323,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCB_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
@@ -298,6 +332,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
@@ -306,6 +341,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - NCS",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
@@ -314,6 +350,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCS_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
@@ -322,6 +359,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit E or S",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_ES",
"PerPkg": "1",
@@ -331,6 +369,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit I",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_I",
"PerPkg": "1",
@@ -340,6 +379,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit M",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_M",
"PerPkg": "1",
@@ -349,6 +389,7 @@
},
{
"BriefDescription": "Snoop Responses; Miss",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.MISS",
"PerPkg": "1",
@@ -358,6 +399,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpCode",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPCODE",
"PerPkg": "1",
@@ -367,6 +409,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpData",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPDATA",
"PerPkg": "1",
@@ -376,6 +419,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpInv",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPINV",
"PerPkg": "1",
@@ -385,6 +429,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Atomic",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.ATOMIC",
"PerPkg": "1",
@@ -394,6 +439,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Other",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.OTHER",
"PerPkg": "1",
@@ -403,6 +449,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Read Prefetches",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.RD_PREF",
"PerPkg": "1",
@@ -412,6 +459,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Reads",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.READS",
"PerPkg": "1",
@@ -421,6 +469,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Writes",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.WRITES",
"PerPkg": "1",
@@ -430,6 +479,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Write Prefetches",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.WR_PREF",
"PerPkg": "1",
@@ -439,6 +489,7 @@
},
{
"BriefDescription": "No AD Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x18",
"EventName": "UNC_I_TxR_AD_STALL_CREDIT_CYCLES",
"PerPkg": "1",
@@ -447,6 +498,7 @@
},
{
"BriefDescription": "No BL Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x19",
"EventName": "UNC_I_TxR_BL_STALL_CREDIT_CYCLES",
"PerPkg": "1",
@@ -455,6 +507,7 @@
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xE",
"EventName": "UNC_I_TxR_DATA_INSERTS_NCB",
"PerPkg": "1",
@@ -463,6 +516,7 @@
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xF",
"EventName": "UNC_I_TxR_DATA_INSERTS_NCS",
"PerPkg": "1",
@@ -471,6 +525,7 @@
},
{
"BriefDescription": "Outbound Request Queue Occupancy",
+ "Counter": "0,1",
"EventCode": "0xD",
"EventName": "UNC_I_TxR_REQUEST_OCCUPANCY",
"PerPkg": "1",
@@ -479,6 +534,7 @@
},
{
"BriefDescription": "VLW Received",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD",
"PerPkg": "1",
@@ -488,6 +544,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.DISABLE",
"PerPkg": "1",
@@ -497,6 +554,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.ENABLE",
"PerPkg": "1",
@@ -506,6 +564,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.U2C_DISABLE",
"PerPkg": "1",
@@ -515,6 +574,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.U2C_ENABLE",
"PerPkg": "1",
@@ -524,6 +584,7 @@
},
{
"BriefDescription": "Cycles PHOLD Assert to Ack; Assert to ACK",
+ "Counter": "0,1",
"EventCode": "0x45",
"EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK",
"PerPkg": "1",
@@ -533,6 +594,7 @@
},
{
"BriefDescription": "RACU Request",
+ "Counter": "0,1",
"EventCode": "0x46",
"EventName": "UNC_U_RACU_REQUESTS",
"PerPkg": "1",
@@ -541,6 +603,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Correctable Machine Check",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.CMC",
"PerPkg": "1",
@@ -550,6 +613,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Livelock",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.LIVELOCK",
"PerPkg": "1",
@@ -559,6 +623,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; LTError",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.LTERROR",
"PerPkg": "1",
@@ -568,6 +633,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Monitor T0",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.MONITOR_T0",
"PerPkg": "1",
@@ -577,6 +643,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Monitor T1",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.MONITOR_T1",
"PerPkg": "1",
@@ -586,6 +653,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Other",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.OTHER",
"PerPkg": "1",
@@ -595,6 +663,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Trap",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.TRAP",
"PerPkg": "1",
@@ -604,6 +673,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Uncorrectable Machine Check",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.UMC",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-io.json b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-io.json
index 01e04daf03da..daef7accdbcb 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-io.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-io.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of uclks in domain",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_R2_CLOCKTICKS",
"PerPkg": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.ISOCH_QPI0",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.ISOCH_QPI0",
"PerPkg": "1",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.ISOCH_QPI1",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.ISOCH_QPI1",
"PerPkg": "1",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.PRQ_QPI0",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.PRQ_QPI0",
"PerPkg": "1",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.PRQ_QPI1",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.PRQ_QPI1",
"PerPkg": "1",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; DRS",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.DRS",
"PerPkg": "1",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; NCB",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.NCB",
"PerPkg": "1",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; NCS",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.NCS",
"PerPkg": "1",
@@ -68,6 +76,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; DRS",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.DRS",
"PerPkg": "1",
@@ -77,6 +86,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; NCB",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.NCB",
"PerPkg": "1",
@@ -86,6 +96,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; NCS",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.NCS",
"PerPkg": "1",
@@ -95,6 +106,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.ALL",
"PerPkg": "1",
@@ -104,6 +116,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -113,6 +126,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -122,6 +136,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -131,6 +146,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW",
"PerPkg": "1",
@@ -140,6 +156,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -149,6 +166,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -158,6 +176,7 @@
},
{
"BriefDescription": "AK Ingress Bounced; Dn",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_R2_RING_AK_BOUNCES.DN",
"PerPkg": "1",
@@ -167,6 +186,7 @@
},
{
"BriefDescription": "AK Ingress Bounced; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_R2_RING_AK_BOUNCES.UP",
"PerPkg": "1",
@@ -176,6 +196,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -185,6 +206,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -194,6 +216,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -203,6 +226,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -212,6 +236,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW",
"PerPkg": "1",
@@ -221,6 +246,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -230,6 +256,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -239,6 +266,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -248,6 +276,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -257,6 +286,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -266,6 +296,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -275,6 +306,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW",
"PerPkg": "1",
@@ -284,6 +316,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -293,6 +326,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -302,6 +336,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -311,6 +346,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.CCW",
"PerPkg": "1",
@@ -320,6 +356,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.CW",
"PerPkg": "1",
@@ -329,6 +366,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NCB",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R2_RxR_CYCLES_NE.NCB",
"PerPkg": "1",
@@ -338,6 +376,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NCS",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R2_RxR_CYCLES_NE.NCS",
"PerPkg": "1",
@@ -347,6 +386,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCB",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R2_RxR_INSERTS.NCB",
"PerPkg": "1",
@@ -356,6 +396,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCS",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R2_RxR_INSERTS.NCS",
"PerPkg": "1",
@@ -365,6 +406,7 @@
},
{
"BriefDescription": "Ingress Occupancy Accumulator; DRS",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R2_RxR_OCCUPANCY.DRS",
"PerPkg": "1",
@@ -374,6 +416,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R2_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -383,6 +426,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R2_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -392,6 +436,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R2_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -401,6 +446,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R2_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -410,6 +456,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -419,6 +466,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -428,6 +476,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -437,6 +486,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -446,6 +496,7 @@
},
{
"BriefDescription": "Egress Cycles Full; AD",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.AD",
"PerPkg": "1",
@@ -455,6 +506,7 @@
},
{
"BriefDescription": "Egress Cycles Full; AK",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.AK",
"PerPkg": "1",
@@ -464,6 +516,7 @@
},
{
"BriefDescription": "Egress Cycles Full; BL",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.BL",
"PerPkg": "1",
@@ -473,6 +526,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; AD",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.AD",
"PerPkg": "1",
@@ -482,6 +536,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; AK",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.AK",
"PerPkg": "1",
@@ -491,6 +546,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; BL",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.BL",
"PerPkg": "1",
@@ -500,6 +556,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AD CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_AD",
"PerPkg": "1",
@@ -509,6 +566,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_AK",
"PerPkg": "1",
@@ -518,6 +576,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_BL",
"PerPkg": "1",
@@ -527,6 +586,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_AD",
"PerPkg": "1",
@@ -536,6 +596,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_AK",
"PerPkg": "1",
@@ -545,6 +606,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_BL",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-memory.json b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-memory.json
index a764234a3584..ddc83d3885ae 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "DRAM Activate Count; Activate due to Write",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.BYP",
"PerPkg": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Read",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.RD",
"PerPkg": "1",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Write",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.WR",
"PerPkg": "1",
@@ -28,6 +31,7 @@
},
{
"BriefDescription": "ACT command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.ACT",
"PerPkg": "1",
@@ -36,6 +40,7 @@
},
{
"BriefDescription": "CAS command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.CAS",
"PerPkg": "1",
@@ -44,6 +49,7 @@
},
{
"BriefDescription": "PRE command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.PRE",
"PerPkg": "1",
@@ -52,6 +58,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR_CAS (w/ and w/out auto-pre)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
@@ -61,6 +68,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM Reads (RD_CAS + Underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD",
"PerPkg": "1",
@@ -70,6 +78,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM RD_CAS (w/ and w/out auto-pre)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_REG",
"PerPkg": "1",
@@ -79,6 +88,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Read CAS issued in RMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_RMM",
"PerPkg": "1",
@@ -87,6 +97,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Underfill Read Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL",
"PerPkg": "1",
@@ -96,6 +107,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Read CAS issued in WMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_WMM",
"PerPkg": "1",
@@ -104,6 +116,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR_CAS (both Modes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR",
"PerPkg": "1",
@@ -113,6 +126,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_RMM",
"PerPkg": "1",
@@ -122,6 +136,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_WMM",
"PerPkg": "1",
@@ -131,12 +146,14 @@
},
{
"BriefDescription": "DRAM Clockticks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M_DCLOCKTICKS",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_M_DRAM_PRE_ALL",
"PerPkg": "1",
@@ -145,6 +162,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_M_DRAM_REFRESH.HIGH",
"PerPkg": "1",
@@ -154,6 +172,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_M_DRAM_REFRESH.PANIC",
"PerPkg": "1",
@@ -163,6 +182,7 @@
},
{
"BriefDescription": "ECC Correctable Errors",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_M_ECC_CORRECTABLE_ERRORS",
"PerPkg": "1",
@@ -171,6 +191,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Isoch Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.ISOCH",
"PerPkg": "1",
@@ -180,6 +201,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Partial Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.PARTIAL",
"PerPkg": "1",
@@ -189,6 +211,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.READ",
"PerPkg": "1",
@@ -198,6 +221,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.WRITE",
"PerPkg": "1",
@@ -207,6 +231,7 @@
},
{
"BriefDescription": "Channel DLLOFF Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M_POWER_CHANNEL_DLLOFF",
"PerPkg": "1",
@@ -215,6 +240,7 @@
},
{
"BriefDescription": "Channel PPD Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M_POWER_CHANNEL_PPD",
"PerPkg": "1",
@@ -223,6 +249,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK0",
"PerPkg": "1",
@@ -232,6 +259,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK1",
"PerPkg": "1",
@@ -241,6 +269,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK2",
"PerPkg": "1",
@@ -250,6 +279,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK3",
"PerPkg": "1",
@@ -259,6 +289,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK4",
"PerPkg": "1",
@@ -268,6 +299,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK5",
"PerPkg": "1",
@@ -277,6 +309,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK6",
"PerPkg": "1",
@@ -286,6 +319,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK7",
"PerPkg": "1",
@@ -295,6 +329,7 @@
},
{
"BriefDescription": "Critical Throttle Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES",
"PerPkg": "1",
@@ -303,6 +338,7 @@
},
{
"BriefDescription": "UNC_M_POWER_PCU_THROTTLING",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M_POWER_PCU_THROTTLING",
"PerPkg": "1",
@@ -310,6 +346,7 @@
},
{
"BriefDescription": "Clock-Enabled Self-Refresh",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M_POWER_SELF_REFRESH",
"PerPkg": "1",
@@ -318,6 +355,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK0",
"PerPkg": "1",
@@ -327,6 +365,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK1",
"PerPkg": "1",
@@ -336,6 +375,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK2",
"PerPkg": "1",
@@ -345,6 +385,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK3",
"PerPkg": "1",
@@ -354,6 +395,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK4",
"PerPkg": "1",
@@ -363,6 +405,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK5",
"PerPkg": "1",
@@ -372,6 +415,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK6",
"PerPkg": "1",
@@ -381,6 +425,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK7",
"PerPkg": "1",
@@ -390,6 +435,7 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Read Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_RD",
"PerPkg": "1",
@@ -399,6 +445,7 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Write Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_WR",
"PerPkg": "1",
@@ -408,6 +455,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.BYP",
"PerPkg": "1",
@@ -417,6 +465,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to timer expiration",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_CLOSE",
"PerPkg": "1",
@@ -426,6 +475,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharges due to page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_MISS",
"PerPkg": "1",
@@ -435,6 +485,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to read",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.RD",
"PerPkg": "1",
@@ -444,6 +495,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to write",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.WR",
"PerPkg": "1",
@@ -453,6 +505,7 @@
},
{
"BriefDescription": "Read CAS issued with HIGH priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.HIGH",
"PerPkg": "1",
@@ -461,6 +514,7 @@
},
{
"BriefDescription": "Read CAS issued with LOW priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.LOW",
"PerPkg": "1",
@@ -469,6 +523,7 @@
},
{
"BriefDescription": "Read CAS issued with MEDIUM priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.MED",
"PerPkg": "1",
@@ -477,6 +532,7 @@
},
{
"BriefDescription": "Read CAS issued with PANIC NON ISOCH priority (starved)",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.PANIC",
"PerPkg": "1",
@@ -485,6 +541,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.ALLBANKS",
"PerPkg": "1",
@@ -494,6 +551,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK0",
"PerPkg": "1",
@@ -502,6 +560,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK1",
"PerPkg": "1",
@@ -511,6 +570,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK10",
"PerPkg": "1",
@@ -520,6 +580,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK11",
"PerPkg": "1",
@@ -529,6 +590,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK12",
"PerPkg": "1",
@@ -538,6 +600,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK13",
"PerPkg": "1",
@@ -547,6 +610,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK14",
"PerPkg": "1",
@@ -556,6 +620,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK15",
"PerPkg": "1",
@@ -565,6 +630,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK2",
"PerPkg": "1",
@@ -574,6 +640,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK3",
"PerPkg": "1",
@@ -583,6 +650,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK4",
"PerPkg": "1",
@@ -592,6 +660,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK5",
"PerPkg": "1",
@@ -601,6 +670,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK6",
"PerPkg": "1",
@@ -610,6 +680,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK7",
"PerPkg": "1",
@@ -619,6 +690,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK8",
"PerPkg": "1",
@@ -628,6 +700,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK9",
"PerPkg": "1",
@@ -637,6 +710,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG0",
"PerPkg": "1",
@@ -646,6 +720,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG1",
"PerPkg": "1",
@@ -655,6 +730,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG2",
"PerPkg": "1",
@@ -664,6 +740,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG3",
"PerPkg": "1",
@@ -673,6 +750,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.ALLBANKS",
"PerPkg": "1",
@@ -682,6 +760,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK0",
"PerPkg": "1",
@@ -690,6 +769,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK1",
"PerPkg": "1",
@@ -699,6 +779,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK10",
"PerPkg": "1",
@@ -708,6 +789,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK11",
"PerPkg": "1",
@@ -717,6 +799,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK12",
"PerPkg": "1",
@@ -726,6 +809,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK13",
"PerPkg": "1",
@@ -735,6 +819,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK14",
"PerPkg": "1",
@@ -744,6 +829,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK15",
"PerPkg": "1",
@@ -753,6 +839,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK2",
"PerPkg": "1",
@@ -762,6 +849,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK3",
"PerPkg": "1",
@@ -771,6 +859,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK4",
"PerPkg": "1",
@@ -780,6 +869,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK5",
"PerPkg": "1",
@@ -789,6 +879,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK6",
"PerPkg": "1",
@@ -798,6 +889,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK7",
"PerPkg": "1",
@@ -807,6 +899,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK8",
"PerPkg": "1",
@@ -816,6 +909,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK9",
"PerPkg": "1",
@@ -825,6 +919,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG0",
"PerPkg": "1",
@@ -834,6 +929,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG1",
"PerPkg": "1",
@@ -843,6 +939,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG2",
"PerPkg": "1",
@@ -852,6 +949,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG3",
"PerPkg": "1",
@@ -861,6 +959,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK0",
"PerPkg": "1",
@@ -869,6 +968,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.ALLBANKS",
"PerPkg": "1",
@@ -878,6 +978,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK0",
"PerPkg": "1",
@@ -886,6 +987,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK1",
"PerPkg": "1",
@@ -895,6 +997,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK10",
"PerPkg": "1",
@@ -904,6 +1007,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK11",
"PerPkg": "1",
@@ -913,6 +1017,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK12",
"PerPkg": "1",
@@ -922,6 +1027,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK13",
"PerPkg": "1",
@@ -931,6 +1037,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK14",
"PerPkg": "1",
@@ -940,6 +1047,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK15",
"PerPkg": "1",
@@ -949,6 +1057,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK2",
"PerPkg": "1",
@@ -958,6 +1067,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK3",
"PerPkg": "1",
@@ -967,6 +1077,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK4",
"PerPkg": "1",
@@ -976,6 +1087,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK5",
"PerPkg": "1",
@@ -985,6 +1097,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK6",
"PerPkg": "1",
@@ -994,6 +1107,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK7",
"PerPkg": "1",
@@ -1003,6 +1117,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK8",
"PerPkg": "1",
@@ -1012,6 +1127,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK9",
"PerPkg": "1",
@@ -1021,6 +1137,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG0",
"PerPkg": "1",
@@ -1030,6 +1147,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG1",
"PerPkg": "1",
@@ -1039,6 +1157,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG2",
"PerPkg": "1",
@@ -1048,6 +1167,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG3",
"PerPkg": "1",
@@ -1057,6 +1177,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.ALLBANKS",
"PerPkg": "1",
@@ -1066,6 +1187,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK0",
"PerPkg": "1",
@@ -1074,6 +1196,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK1",
"PerPkg": "1",
@@ -1083,6 +1206,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK10",
"PerPkg": "1",
@@ -1092,6 +1216,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK11",
"PerPkg": "1",
@@ -1101,6 +1226,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK12",
"PerPkg": "1",
@@ -1110,6 +1236,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK13",
"PerPkg": "1",
@@ -1119,6 +1246,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK14",
"PerPkg": "1",
@@ -1128,6 +1256,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK15",
"PerPkg": "1",
@@ -1137,6 +1266,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK2",
"PerPkg": "1",
@@ -1146,6 +1276,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK3",
"PerPkg": "1",
@@ -1155,6 +1286,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK4",
"PerPkg": "1",
@@ -1164,6 +1296,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK5",
"PerPkg": "1",
@@ -1173,6 +1306,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK6",
"PerPkg": "1",
@@ -1182,6 +1316,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK7",
"PerPkg": "1",
@@ -1191,6 +1326,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK8",
"PerPkg": "1",
@@ -1200,6 +1336,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK9",
"PerPkg": "1",
@@ -1209,6 +1346,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG0",
"PerPkg": "1",
@@ -1218,6 +1356,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG1",
"PerPkg": "1",
@@ -1227,6 +1366,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG2",
"PerPkg": "1",
@@ -1236,6 +1376,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG3",
"PerPkg": "1",
@@ -1245,6 +1386,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.ALLBANKS",
"PerPkg": "1",
@@ -1254,6 +1396,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK0",
"PerPkg": "1",
@@ -1262,6 +1405,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK1",
"PerPkg": "1",
@@ -1271,6 +1415,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK10",
"PerPkg": "1",
@@ -1280,6 +1425,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK11",
"PerPkg": "1",
@@ -1289,6 +1435,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK12",
"PerPkg": "1",
@@ -1298,6 +1445,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK13",
"PerPkg": "1",
@@ -1307,6 +1455,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK14",
"PerPkg": "1",
@@ -1316,6 +1465,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK15",
"PerPkg": "1",
@@ -1325,6 +1475,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK2",
"PerPkg": "1",
@@ -1334,6 +1485,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK3",
"PerPkg": "1",
@@ -1343,6 +1495,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK4",
"PerPkg": "1",
@@ -1352,6 +1505,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK5",
"PerPkg": "1",
@@ -1361,6 +1515,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK6",
"PerPkg": "1",
@@ -1370,6 +1525,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK7",
"PerPkg": "1",
@@ -1379,6 +1535,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK8",
"PerPkg": "1",
@@ -1388,6 +1545,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK9",
"PerPkg": "1",
@@ -1397,6 +1555,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG0",
"PerPkg": "1",
@@ -1406,6 +1565,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG1",
"PerPkg": "1",
@@ -1415,6 +1575,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG2",
"PerPkg": "1",
@@ -1424,6 +1585,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG3",
"PerPkg": "1",
@@ -1433,6 +1595,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.ALLBANKS",
"PerPkg": "1",
@@ -1442,6 +1605,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK0",
"PerPkg": "1",
@@ -1450,6 +1614,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK1",
"PerPkg": "1",
@@ -1459,6 +1624,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK10",
"PerPkg": "1",
@@ -1468,6 +1634,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK11",
"PerPkg": "1",
@@ -1477,6 +1644,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK12",
"PerPkg": "1",
@@ -1486,6 +1654,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK13",
"PerPkg": "1",
@@ -1495,6 +1664,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK14",
"PerPkg": "1",
@@ -1504,6 +1674,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK15",
"PerPkg": "1",
@@ -1513,6 +1684,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK2",
"PerPkg": "1",
@@ -1522,6 +1694,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK3",
"PerPkg": "1",
@@ -1531,6 +1704,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK4",
"PerPkg": "1",
@@ -1540,6 +1714,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK5",
"PerPkg": "1",
@@ -1549,6 +1724,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK6",
"PerPkg": "1",
@@ -1558,6 +1734,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK7",
"PerPkg": "1",
@@ -1567,6 +1744,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK8",
"PerPkg": "1",
@@ -1576,6 +1754,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK9",
"PerPkg": "1",
@@ -1585,6 +1764,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG0",
"PerPkg": "1",
@@ -1594,6 +1774,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG1",
"PerPkg": "1",
@@ -1603,6 +1784,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG2",
"PerPkg": "1",
@@ -1612,6 +1794,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG3",
"PerPkg": "1",
@@ -1621,6 +1804,7 @@
},
{
"BriefDescription": "Read Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE",
"PerPkg": "1",
@@ -1629,6 +1813,7 @@
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS",
"PerPkg": "1",
@@ -1637,6 +1822,7 @@
},
{
"BriefDescription": "VMSE MXB write buffer occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M_VMSE_MXB_WR_OCCUPANCY",
"PerPkg": "1",
@@ -1644,6 +1830,7 @@
},
{
"BriefDescription": "VMSE WR PUSH issued; VMSE write PUSH issued in RMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M_VMSE_WR_PUSH.RMM",
"PerPkg": "1",
@@ -1652,6 +1839,7 @@
},
{
"BriefDescription": "VMSE WR PUSH issued; VMSE write PUSH issued in WMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M_VMSE_WR_PUSH.WMM",
"PerPkg": "1",
@@ -1660,6 +1848,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold; Transition from WMM to RMM because of starve counter",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.LOW_THRESH",
"PerPkg": "1",
@@ -1668,6 +1857,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.STARVE",
"PerPkg": "1",
@@ -1676,6 +1866,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.VMSE_RETRY",
"PerPkg": "1",
@@ -1684,6 +1875,7 @@
},
{
"BriefDescription": "Write Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M_WPQ_CYCLES_FULL",
"PerPkg": "1",
@@ -1692,6 +1884,7 @@
},
{
"BriefDescription": "Write Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE",
"PerPkg": "1",
@@ -1700,6 +1893,7 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT",
"PerPkg": "1",
@@ -1708,6 +1902,7 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT",
"PerPkg": "1",
@@ -1716,6 +1911,7 @@
},
{
"BriefDescription": "Not getting the requested Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_M_WRONG_MM",
"PerPkg": "1",
@@ -1723,6 +1919,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.ALLBANKS",
"PerPkg": "1",
@@ -1732,6 +1929,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK0",
"PerPkg": "1",
@@ -1740,6 +1938,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK1",
"PerPkg": "1",
@@ -1749,6 +1948,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK10",
"PerPkg": "1",
@@ -1758,6 +1958,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK11",
"PerPkg": "1",
@@ -1767,6 +1968,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK12",
"PerPkg": "1",
@@ -1776,6 +1978,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK13",
"PerPkg": "1",
@@ -1785,6 +1988,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK14",
"PerPkg": "1",
@@ -1794,6 +1998,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK15",
"PerPkg": "1",
@@ -1803,6 +2008,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK2",
"PerPkg": "1",
@@ -1812,6 +2018,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK3",
"PerPkg": "1",
@@ -1821,6 +2028,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK4",
"PerPkg": "1",
@@ -1830,6 +2038,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK5",
"PerPkg": "1",
@@ -1839,6 +2048,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK6",
"PerPkg": "1",
@@ -1848,6 +2058,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK7",
"PerPkg": "1",
@@ -1857,6 +2068,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK8",
"PerPkg": "1",
@@ -1866,6 +2078,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK9",
"PerPkg": "1",
@@ -1875,6 +2088,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG0",
"PerPkg": "1",
@@ -1884,6 +2098,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG1",
"PerPkg": "1",
@@ -1893,6 +2108,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG2",
"PerPkg": "1",
@@ -1902,6 +2118,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG3",
"PerPkg": "1",
@@ -1911,6 +2128,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.ALLBANKS",
"PerPkg": "1",
@@ -1920,6 +2138,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK0",
"PerPkg": "1",
@@ -1928,6 +2147,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK1",
"PerPkg": "1",
@@ -1937,6 +2157,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK10",
"PerPkg": "1",
@@ -1946,6 +2167,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK11",
"PerPkg": "1",
@@ -1955,6 +2177,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK12",
"PerPkg": "1",
@@ -1964,6 +2187,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK13",
"PerPkg": "1",
@@ -1973,6 +2197,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK14",
"PerPkg": "1",
@@ -1982,6 +2207,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK15",
"PerPkg": "1",
@@ -1991,6 +2217,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK2",
"PerPkg": "1",
@@ -2000,6 +2227,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK3",
"PerPkg": "1",
@@ -2009,6 +2237,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK4",
"PerPkg": "1",
@@ -2018,6 +2247,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK5",
"PerPkg": "1",
@@ -2027,6 +2257,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK6",
"PerPkg": "1",
@@ -2036,6 +2267,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK7",
"PerPkg": "1",
@@ -2045,6 +2277,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK8",
"PerPkg": "1",
@@ -2054,6 +2287,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK9",
"PerPkg": "1",
@@ -2063,6 +2297,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG0",
"PerPkg": "1",
@@ -2072,6 +2307,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG1",
"PerPkg": "1",
@@ -2081,6 +2317,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG2",
"PerPkg": "1",
@@ -2090,6 +2327,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG3",
"PerPkg": "1",
@@ -2099,6 +2337,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.ALLBANKS",
"PerPkg": "1",
@@ -2108,6 +2347,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK0",
"PerPkg": "1",
@@ -2116,6 +2356,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK1",
"PerPkg": "1",
@@ -2125,6 +2366,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK10",
"PerPkg": "1",
@@ -2134,6 +2376,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK11",
"PerPkg": "1",
@@ -2143,6 +2386,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK12",
"PerPkg": "1",
@@ -2152,6 +2396,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK13",
"PerPkg": "1",
@@ -2161,6 +2406,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK14",
"PerPkg": "1",
@@ -2170,6 +2416,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK15",
"PerPkg": "1",
@@ -2179,6 +2426,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK2",
"PerPkg": "1",
@@ -2188,6 +2436,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK3",
"PerPkg": "1",
@@ -2197,6 +2446,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK4",
"PerPkg": "1",
@@ -2206,6 +2456,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK5",
"PerPkg": "1",
@@ -2215,6 +2466,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK6",
"PerPkg": "1",
@@ -2224,6 +2476,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK7",
"PerPkg": "1",
@@ -2233,6 +2486,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK8",
"PerPkg": "1",
@@ -2242,6 +2496,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK9",
"PerPkg": "1",
@@ -2251,6 +2506,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG0",
"PerPkg": "1",
@@ -2260,6 +2516,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG1",
"PerPkg": "1",
@@ -2269,6 +2526,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG2",
"PerPkg": "1",
@@ -2278,6 +2536,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG3",
"PerPkg": "1",
@@ -2287,6 +2546,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.ALLBANKS",
"PerPkg": "1",
@@ -2296,6 +2556,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK0",
"PerPkg": "1",
@@ -2304,6 +2565,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK1",
"PerPkg": "1",
@@ -2313,6 +2575,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK10",
"PerPkg": "1",
@@ -2322,6 +2585,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK11",
"PerPkg": "1",
@@ -2331,6 +2595,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK12",
"PerPkg": "1",
@@ -2340,6 +2605,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK13",
"PerPkg": "1",
@@ -2349,6 +2615,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK14",
"PerPkg": "1",
@@ -2358,6 +2625,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK15",
"PerPkg": "1",
@@ -2367,6 +2635,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK2",
"PerPkg": "1",
@@ -2376,6 +2645,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK3",
"PerPkg": "1",
@@ -2385,6 +2655,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK4",
"PerPkg": "1",
@@ -2394,6 +2665,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK5",
"PerPkg": "1",
@@ -2403,6 +2675,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK6",
"PerPkg": "1",
@@ -2412,6 +2685,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK7",
"PerPkg": "1",
@@ -2421,6 +2695,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK8",
"PerPkg": "1",
@@ -2430,6 +2705,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK9",
"PerPkg": "1",
@@ -2439,6 +2715,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG0",
"PerPkg": "1",
@@ -2448,6 +2725,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG1",
"PerPkg": "1",
@@ -2457,6 +2735,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG2",
"PerPkg": "1",
@@ -2466,6 +2745,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG3",
"PerPkg": "1",
@@ -2475,6 +2755,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.ALLBANKS",
"PerPkg": "1",
@@ -2484,6 +2765,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK0",
"PerPkg": "1",
@@ -2492,6 +2774,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK1",
"PerPkg": "1",
@@ -2501,6 +2784,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK10",
"PerPkg": "1",
@@ -2510,6 +2794,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK11",
"PerPkg": "1",
@@ -2519,6 +2804,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK12",
"PerPkg": "1",
@@ -2528,6 +2814,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK13",
"PerPkg": "1",
@@ -2537,6 +2824,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK14",
"PerPkg": "1",
@@ -2546,6 +2834,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK15",
"PerPkg": "1",
@@ -2555,6 +2844,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK2",
"PerPkg": "1",
@@ -2564,6 +2854,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK3",
"PerPkg": "1",
@@ -2573,6 +2864,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK4",
"PerPkg": "1",
@@ -2582,6 +2874,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK5",
"PerPkg": "1",
@@ -2591,6 +2884,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK6",
"PerPkg": "1",
@@ -2600,6 +2894,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK7",
"PerPkg": "1",
@@ -2609,6 +2904,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK8",
"PerPkg": "1",
@@ -2618,6 +2914,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK9",
"PerPkg": "1",
@@ -2627,6 +2924,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG0",
"PerPkg": "1",
@@ -2636,6 +2934,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG1",
"PerPkg": "1",
@@ -2645,6 +2944,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG2",
"PerPkg": "1",
@@ -2654,6 +2954,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG3",
"PerPkg": "1",
@@ -2663,6 +2964,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.ALLBANKS",
"PerPkg": "1",
@@ -2672,6 +2974,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK0",
"PerPkg": "1",
@@ -2680,6 +2983,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK1",
"PerPkg": "1",
@@ -2689,6 +2993,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK10",
"PerPkg": "1",
@@ -2698,6 +3003,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK11",
"PerPkg": "1",
@@ -2707,6 +3013,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK12",
"PerPkg": "1",
@@ -2716,6 +3023,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK13",
"PerPkg": "1",
@@ -2725,6 +3033,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK14",
"PerPkg": "1",
@@ -2734,6 +3043,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK15",
"PerPkg": "1",
@@ -2743,6 +3053,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK2",
"PerPkg": "1",
@@ -2752,6 +3063,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK3",
"PerPkg": "1",
@@ -2761,6 +3073,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK4",
"PerPkg": "1",
@@ -2770,6 +3083,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK5",
"PerPkg": "1",
@@ -2779,6 +3093,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK6",
"PerPkg": "1",
@@ -2788,6 +3103,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK7",
"PerPkg": "1",
@@ -2797,6 +3113,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK8",
"PerPkg": "1",
@@ -2806,6 +3123,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK9",
"PerPkg": "1",
@@ -2815,6 +3133,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG0",
"PerPkg": "1",
@@ -2824,6 +3143,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG1",
"PerPkg": "1",
@@ -2833,6 +3153,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG2",
"PerPkg": "1",
@@ -2842,6 +3163,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG3",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-power.json b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-power.json
index 83d20130c217..afdc636b9855 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-power.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-power.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "pclk Cycles",
+ "Counter": "0,1,2,3",
"EventName": "UNC_P_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_P_CORE0_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_P_CORE10_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_P_CORE11_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_P_CORE12_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_P_CORE13_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_P_CORE14_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_P_CORE15_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -64,6 +72,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_P_CORE16_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_P_CORE17_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -80,6 +90,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_P_CORE1_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -88,6 +99,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_P_CORE2_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -96,6 +108,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_P_CORE3_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -104,6 +117,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_P_CORE4_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -112,6 +126,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_P_CORE5_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_P_CORE6_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_P_CORE7_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -136,6 +153,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_P_CORE8_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -144,6 +162,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_P_CORE9_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -152,6 +171,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_P_DEMOTIONS_CORE0",
"PerPkg": "1",
@@ -160,6 +180,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_P_DEMOTIONS_CORE1",
"PerPkg": "1",
@@ -168,6 +189,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_P_DEMOTIONS_CORE10",
"PerPkg": "1",
@@ -176,6 +198,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3B",
"EventName": "UNC_P_DEMOTIONS_CORE11",
"PerPkg": "1",
@@ -184,6 +207,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_P_DEMOTIONS_CORE12",
"PerPkg": "1",
@@ -192,6 +216,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_P_DEMOTIONS_CORE13",
"PerPkg": "1",
@@ -200,6 +225,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_P_DEMOTIONS_CORE14",
"PerPkg": "1",
@@ -208,6 +234,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_P_DEMOTIONS_CORE15",
"PerPkg": "1",
@@ -216,6 +243,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_P_DEMOTIONS_CORE16",
"PerPkg": "1",
@@ -224,6 +252,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_P_DEMOTIONS_CORE17",
"PerPkg": "1",
@@ -232,6 +261,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_P_DEMOTIONS_CORE2",
"PerPkg": "1",
@@ -240,6 +270,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_P_DEMOTIONS_CORE3",
"PerPkg": "1",
@@ -248,6 +279,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_P_DEMOTIONS_CORE4",
"PerPkg": "1",
@@ -256,6 +288,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_P_DEMOTIONS_CORE5",
"PerPkg": "1",
@@ -264,6 +297,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_P_DEMOTIONS_CORE6",
"PerPkg": "1",
@@ -272,6 +306,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_P_DEMOTIONS_CORE7",
"PerPkg": "1",
@@ -280,6 +315,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_P_DEMOTIONS_CORE8",
"PerPkg": "1",
@@ -288,6 +324,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_P_DEMOTIONS_CORE9",
"PerPkg": "1",
@@ -296,6 +333,7 @@
},
{
"BriefDescription": "Thermal Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
"PerPkg": "1",
@@ -304,6 +342,7 @@
},
{
"BriefDescription": "OS Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_P_FREQ_MAX_OS_CYCLES",
"PerPkg": "1",
@@ -312,6 +351,7 @@
},
{
"BriefDescription": "Power Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
"PerPkg": "1",
@@ -320,6 +360,7 @@
},
{
"BriefDescription": "IO P Limit Strongest Lower Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES",
"PerPkg": "1",
@@ -328,6 +369,7 @@
},
{
"BriefDescription": "Cycles spent changing Frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_P_FREQ_TRANS_CYCLES",
"PerPkg": "1",
@@ -336,6 +378,7 @@
},
{
"BriefDescription": "Memory Phase Shedding Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES",
"PerPkg": "1",
@@ -344,6 +387,7 @@
},
{
"BriefDescription": "Package C State Residency - C0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES",
"PerPkg": "1",
@@ -352,6 +396,7 @@
},
{
"BriefDescription": "Package C State Residency - C1E",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_P_PKG_RESIDENCY_C1E_CYCLES",
"PerPkg": "1",
@@ -360,6 +405,7 @@
},
{
"BriefDescription": "Package C State Residency - C2E",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
"PerPkg": "1",
@@ -368,6 +414,7 @@
},
{
"BriefDescription": "Package C State Residency - C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES",
"PerPkg": "1",
@@ -376,6 +423,7 @@
},
{
"BriefDescription": "Package C State Residency - C6",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
"PerPkg": "1",
@@ -384,6 +432,7 @@
},
{
"BriefDescription": "Package C7 State Residency",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_P_PKG_RESIDENCY_C7_CYCLES",
"PerPkg": "1",
@@ -392,30 +441,37 @@
},
{
"BriefDescription": "Number of cores in C-State; C0 and C1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0",
+ "Filter": "occ_sel=1",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3",
+ "Filter": "occ_sel=2",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C6 and C7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6",
+ "Filter": "occ_sel=3",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "External Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
"PerPkg": "1",
@@ -424,6 +480,7 @@
},
{
"BriefDescription": "Internal Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
"PerPkg": "1",
@@ -432,6 +489,7 @@
},
{
"BriefDescription": "Total Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_P_TOTAL_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -440,6 +498,7 @@
},
{
"BriefDescription": "UNC_P_UFS_TRANSITIONS_RING_GV",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "UNC_P_UFS_TRANSITIONS_RING_GV",
"PerPkg": "1",
@@ -448,6 +507,7 @@
},
{
"BriefDescription": "VR Hot",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_P_VR_HOT_CYCLES",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/virtual-memory.json b/tools/perf/pmu-events/arch/x86/broadwellde/virtual-memory.json
index 93621e004d88..eb1d9541e26c 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Load misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Load operations that miss the first DTLB level but hit the second and do not cause page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"SampleAfterValue": "2000003",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_2M",
"SampleAfterValue": "2000003",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_4K",
"SampleAfterValue": "2000003",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
@@ -39,6 +44,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (2M/4M).",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (4K).",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_DURATION",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
@@ -84,6 +94,7 @@
},
{
"BriefDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"SampleAfterValue": "100003",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_2M",
"SampleAfterValue": "100003",
@@ -98,6 +110,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_4K",
"SampleAfterValue": "100003",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (2M/4M)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_DURATION",
@@ -149,6 +167,7 @@
},
{
"BriefDescription": "Cycle count for an Extended Page table walk.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "EPT.WALK_CYCLES",
"PublicDescription": "This event counts cycles for an extended page table walk. The Extended Page directory cache differs from standard TLB caches by the operating system that use it. Virtual machine operating systems use the extended page directory cache, while guest operating systems use the standard TLB caches.",
@@ -157,6 +176,7 @@
},
{
"BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "ITLB.ITLB_FLUSH",
"PublicDescription": "This event counts the number of flushes of the big or small ITLB pages. Counting include both TLB Flush (covering all sets) and TLB Set Clear (set-specific).",
@@ -165,6 +185,7 @@
},
{
"BriefDescription": "Misses at all ITLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
@@ -174,6 +195,7 @@
},
{
"BriefDescription": "Operations that miss the first ITLB level but hit the second and do not cause any page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"SampleAfterValue": "100003",
@@ -181,6 +203,7 @@
},
{
"BriefDescription": "Code misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_2M",
"SampleAfterValue": "100003",
@@ -188,6 +211,7 @@
},
{
"BriefDescription": "Core misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_4K",
"SampleAfterValue": "100003",
@@ -195,6 +219,7 @@
},
{
"BriefDescription": "Misses in all ITLB levels that cause completed page walks.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
@@ -203,6 +228,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
@@ -212,6 +238,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
@@ -221,6 +248,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
@@ -230,6 +258,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_DURATION",
@@ -239,6 +268,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L1+FB.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L1",
@@ -247,6 +277,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L2.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L2",
@@ -255,6 +286,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L3 + XSNP.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L3",
@@ -263,6 +295,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in Memory.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_MEMORY",
@@ -271,6 +304,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L1+FB.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L1",
@@ -279,6 +313,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L2.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L2",
@@ -287,6 +322,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L3 + XSNP.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L3",
@@ -295,6 +331,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "This event counts the number of DTLB flush attempts of the thread-specific entries.",
@@ -303,6 +340,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "This event counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, and so on).",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json b/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json
index b319e4edc238..5b83b040060c 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/bdx-metrics.json
@@ -1,49 +1,49 @@
[
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per core",
- "MetricExpr": "cstate_core@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
@@ -55,7 +55,7 @@
"MetricName": "UNCORE_FREQ"
},
{
- "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles.",
+ "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY",
"MetricName": "cpi",
"ScaleUnit": "1per_instr"
@@ -68,7 +68,7 @@
},
{
"BriefDescription": "Percentage of time spent in the active CPU power state C0",
- "MetricExpr": "tma_info_system_cpu_utilization",
+ "MetricExpr": "tma_info_system_cpus_utilized",
"MetricName": "cpu_utilization",
"ScaleUnit": "100%"
},
@@ -76,24 +76,24 @@
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions",
"MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_load_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions",
"MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_store_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
- "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU.",
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
"MetricExpr": "cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=0x19e@ * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_read",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.",
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU",
"MetricExpr": "(cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=0x1c8\\,filter_tid\\=0x3e@ + cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=0x180\\,filter_tid\\=0x3e@) * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_write",
"ScaleUnit": "1MB/s"
@@ -102,14 +102,14 @@
"BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
"MetricName": "itlb_large_page_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "itlb_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
@@ -209,13 +209,13 @@
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_opc\\=0x182@ / (cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_opc\\=0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=0x182@)",
"MetricName": "numa_reads_addressed_to_local_dram",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=0x182@ / (cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_opc\\=0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=0x182@)",
"MetricName": "numa_reads_addressed_to_remote_dram",
"ScaleUnit": "100%"
@@ -282,17 +282,16 @@
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slots",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY",
@@ -300,9 +299,8 @@
},
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1",
@@ -323,7 +321,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -341,7 +339,6 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_cisc",
@@ -362,7 +359,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD)))) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -383,7 +380,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -392,10 +389,10 @@
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.FPU_DIV_ACTIVE / tma_info_core_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_UOPS",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
"ScaleUnit": "100%"
},
{
@@ -413,7 +410,7 @@
"MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -429,7 +426,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + cpu@DTLB_LOAD_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_LOAD_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store",
@@ -438,7 +435,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + cpu@DTLB_STORE_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * DTLB_STORE_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load",
@@ -447,7 +444,7 @@
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
"MetricExpr": "(200 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_HITM + 60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
@@ -457,7 +454,7 @@
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
"PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
@@ -468,7 +465,7 @@
"MetricExpr": "tma_frontend_bound - tma_fetch_latency",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
@@ -494,7 +491,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / UOPS_RETIRED.RETIRE_SLOTS",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -503,7 +500,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@ / UOPS_RETIRED.RETIRE_SLOTS",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.VECTOR / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -516,7 +513,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_128b",
"MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -525,13 +522,13 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_256b",
"MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots",
- "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1",
@@ -545,27 +542,20 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses.",
"MetricExpr": "ICACHE.IFDATA_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
- "MetricExpr": "(tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES",
- "MetricGroup": "Bad;BrMispredicts;tma_issueBM",
- "MetricName": "tma_info_bad_spec_branch_misprediction_cost",
- "PublicDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_mispredicts_resteers"
- },
- {
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * cpu@BR_MISP_EXEC.ALL_BRANCHES\\,umask\\=0xE4@)",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
@@ -591,20 +581,20 @@
},
{
"BriefDescription": "Floating Point Operations Per Cycle",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / tma_info_core_core_clks",
"MetricGroup": "Flops;Ret",
"MetricName": "tma_info_core_flopc"
},
{
"BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@) / (2 * tma_info_core_core_clks)",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR) / (2 * tma_info_core_core_clks)",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_core_fp_arith_utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
},
@@ -623,6 +613,12 @@
"MetricName": "tma_info_frontend_ipunknown_branch"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -637,11 +633,11 @@
},
{
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0x3c@)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
@@ -649,7 +645,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx128",
"MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
@@ -657,7 +653,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx256",
"MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
@@ -665,7 +661,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_dp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
@@ -673,7 +669,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_sp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
@@ -691,7 +687,7 @@
},
{
"BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_ipflop",
"MetricThreshold": "tma_info_inst_mix_ipflop < 10"
@@ -711,129 +707,129 @@
"MetricThreshold": "tma_info_inst_mix_ipstore < 8"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 9",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_all"
},
{
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_load"
},
{
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem;Offcore",
+ "MetricGroup": "CacheHits;Mem;Offcore",
"MetricName": "tma_info_memory_l2mpki_all"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki_load"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "0",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
"BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
@@ -843,8 +839,8 @@
"MetricThreshold": "tma_info_memory_tlb_page_walks_utilization > 0.5"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute"
},
@@ -855,30 +851,36 @@
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_time",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
"PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
{
"BriefDescription": "Giga Floating Point Operations Per Second",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / duration_time",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
},
{
"BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
@@ -902,6 +904,7 @@
},
{
"BriefDescription": "Average number of parallel data read requests to external memory",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182@ / UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182\\,thresh\\=1@",
"MetricGroup": "Mem;MemoryBW;SoC",
"MetricName": "tma_info_system_mem_parallel_reads",
@@ -909,12 +912,25 @@
},
{
"BriefDescription": "Average latency of data read request to external memory (in nanoseconds)",
- "MetricExpr": "1e9 * (UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182@ / UNC_C_TOR_INSERTS.MISS_OPCODE@filter_opc\\=0x182@) / (tma_info_system_socket_clks / duration_time)",
+ "MetricExpr": "1e9 * (UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182@ / UNC_C_TOR_INSERTS.MISS_OPCODE@filter_opc\\=0x182@) / (tma_info_system_socket_clks / tma_info_system_time)",
"MetricGroup": "Mem;MemoryLat;SoC",
"MetricName": "tma_info_system_mem_read_latency",
"PublicDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+]memory-controller only)"
},
{
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "(power@energy\\-pkg@ * 61 + 15.6 * power@energy\\-ram@) / (duration_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
+ },
+ {
"BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
"MetricExpr": "(1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0)",
"MetricGroup": "SMT",
@@ -927,12 +943,25 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
"MetricName": "tma_info_system_turbo_utilization"
},
{
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency"
+ },
+ {
"BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD",
"MetricGroup": "Pipeline",
@@ -971,7 +1000,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
@@ -980,25 +1009,25 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + cpu@ITLB_MISSES.WALK_DURATION\\,cmask\\=1@ + 7 * ITLB_MISSES.WALK_COMPLETED) / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS",
@@ -1008,20 +1037,20 @@
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricConstraint": "NO_GROUP_EVENTS_SMT",
"MetricExpr": "MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_MISS / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "41 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
"ScaleUnit": "100%"
},
{
@@ -1040,12 +1069,11 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_load_op_utilization",
@@ -1058,8 +1086,8 @@
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
"MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_local_dram",
- "MetricThreshold": "tma_local_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "MetricName": "tma_local_mem",
+ "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM_PS",
"ScaleUnit": "100%"
},
@@ -1067,7 +1095,7 @@
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
@@ -1077,7 +1105,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -1085,21 +1113,21 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
@@ -1125,7 +1153,7 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES * tma_branch_resteers / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT + BACLEARS.ANY)",
- "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
@@ -1136,7 +1164,7 @@
"MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.",
"ScaleUnit": "100%"
},
@@ -1155,7 +1183,7 @@
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_0",
"MetricThreshold": "tma_port_0 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED_PORT.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1164,7 +1192,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_1",
"MetricThreshold": "tma_port_1 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1204,12 +1232,12 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1233,7 +1261,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else (CYCLE_ACTIVITY.STALLS_TOTAL - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else CYCLE_ACTIVITY.STALLS_TOTAL - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1242,7 +1270,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_1",
"MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1251,7 +1279,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_2",
"MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1261,9 +1289,9 @@
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).",
"MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"ScaleUnit": "100%"
},
{
@@ -1281,15 +1309,15 @@
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "310 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
"MetricGroup": "Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_remote_dram",
- "MetricThreshold": "tma_remote_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "MetricName": "tma_remote_mem",
+ "MetricThreshold": "tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1",
@@ -1302,7 +1330,7 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
@@ -1318,7 +1346,7 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
@@ -1346,7 +1374,7 @@
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 9 * (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) + (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -1363,10 +1391,10 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "tma_branch_resteers - tma_mispredicts_resteers - tma_clears_resteers",
- "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit). Sample with: BACLEARS.ANY",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY",
"ScaleUnit": "100%"
},
{
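The reworked system-info metrics above all follow the same pattern: raw event counts are scaled and then divided by a clock or time metric (now tma_info_system_time rather than duration_time). As a minimal sketch (not part of the patch), the tma_info_system_dram_bw_use expression can be evaluated by hand; the CAS counts and the 2-second window below are hypothetical, and 64 is the number of bytes a single CAS command transfers.

# Minimal sketch (Python), hypothetical counts: evaluates
# 64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / tma_info_system_time
def dram_bw_use_gbps(cas_rd, cas_wr, elapsed_s):
    """Average external memory bandwidth use in GB/s."""
    return 64 * (cas_rd + cas_wr) / 1e9 / elapsed_s

if __name__ == "__main__":
    # e.g. 1.2e9 read CAS + 0.4e9 write CAS over a 2-second window
    print(f"{dram_bw_use_gbps(1.2e9, 0.4e9, 2.0):.1f} GB/s")  # 51.2 GB/s

The tma_info_system_gflops expression works the same way, except each FP_ARITH_INST_RETIRED sub-event is first weighted by the FLOPs it represents (1, 2, 4 or 8) before dividing by 1e9 and the run time.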
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/cache.json b/tools/perf/pmu-events/arch/x86/broadwellx/cache.json
index 781e7c64e71f..59bc8c71487e 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/cache.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "This event counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles a demand request was blocked due to Fill Buffers unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
@@ -17,14 +19,16 @@
},
{
"BriefDescription": "L1D miss outstandings duration in cycles",
+ "Counter": "2",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
- "PublicDescription": "This event counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand; from the demand Hit FB, if it is allocated by hardware or software prefetch.\nNote: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
+ "PublicDescription": "This event counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand; from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -35,6 +39,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Not rejected writebacks that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_DEMAND_RQSTS.WB_HIT",
"PublicDescription": "This event counts the number of WB requests that hit L2 cache.",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "This event counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "L2 cache lines in E state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.E",
"PublicDescription": "This event counts the number of L2 cache lines in the Exclusive state filling the L2. Counting does not cover rejects.",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "L2 cache lines in I state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.I",
"PublicDescription": "This event counts the number of L2 cache lines in the Invalidate state filling the L2. Counting does not cover rejects.",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "L2 cache lines in S state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.S",
"PublicDescription": "This event counts the number of L2 cache lines in the Shared state filling the L2. Counting does not cover rejects.",
@@ -83,6 +93,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by demand.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_CLEAN",
"SampleAfterValue": "100003",
@@ -90,6 +101,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "This event counts the total number of L2 code requests.",
@@ -98,6 +110,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "This event counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.",
@@ -106,6 +119,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"SampleAfterValue": "200003",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
"SampleAfterValue": "200003",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Requests from L2 hardware prefetchers",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "This event counts the total number of requests from the L2 hardware prefetchers.",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "This event counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -136,6 +153,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"SampleAfterValue": "200003",
@@ -143,6 +161,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"SampleAfterValue": "200003",
@@ -150,6 +169,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests, initiated by load instructions, that hit L2 cache.",
@@ -158,6 +178,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "This event counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "L2 prefetch requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_HIT",
"PublicDescription": "This event counts the number of requests from the L2 hardware prefetchers that hit L2 cache. L3 prefetch new types.",
@@ -174,6 +196,7 @@
},
{
"BriefDescription": "L2 prefetch requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_MISS",
"PublicDescription": "This event counts the number of requests from the L2 hardware prefetchers that miss L2 cache.",
@@ -182,6 +205,7 @@
},
{
"BriefDescription": "All requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
"SampleAfterValue": "200003",
@@ -189,6 +213,7 @@
},
{
"BriefDescription": "All L2 requests.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
"SampleAfterValue": "200003",
@@ -196,6 +221,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"SampleAfterValue": "200003",
@@ -203,6 +229,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"SampleAfterValue": "200003",
@@ -210,6 +237,7 @@
},
{
"BriefDescription": "L2 or L3 HW prefetches that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_PF",
"PublicDescription": "This event counts L2 or L3 HW prefetches that access L2 cache including rejects.",
@@ -218,6 +246,7 @@
},
{
"BriefDescription": "Transactions accessing L2 pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_REQUESTS",
"PublicDescription": "This event counts transactions that access the L2 pipe including snoops, pagewalks, and so on.",
@@ -226,6 +255,7 @@
},
{
"BriefDescription": "L2 cache accesses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.CODE_RD",
"PublicDescription": "This event counts the number of L2 cache accesses when fetching instructions.",
@@ -234,6 +264,7 @@
},
{
"BriefDescription": "Demand Data Read requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.DEMAND_DATA_RD",
"PublicDescription": "This event counts Demand Data Read requests that access L2 cache, including rejects.",
@@ -242,6 +273,7 @@
},
{
"BriefDescription": "L1D writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L1D_WB",
"PublicDescription": "This event counts L1D writebacks that access L2 cache.",
@@ -250,6 +282,7 @@
},
{
"BriefDescription": "L2 fill requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_FILL",
"PublicDescription": "This event counts L2 fill requests that access L2 cache.",
@@ -258,6 +291,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "This event counts L2 writebacks that access L2 cache.",
@@ -266,6 +300,7 @@
},
{
"BriefDescription": "RFO requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.RFO",
"PublicDescription": "This event counts Read for Ownership (RFO) requests that access L2 cache.",
@@ -274,6 +309,7 @@
},
{
"BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
"PublicDescription": "This event counts the number of cycles when the L1D is locked. It is a superset of the 0x1 mask (BUS_LOCK_CLOCKS.BUS_LOCK_DURATION).",
@@ -282,6 +318,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "This event counts core-originated cacheable demand requests that miss the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.",
@@ -290,6 +327,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "This event counts core-originated cacheable demand requests that refer to the last level cache (LLC). Demand requests include loads, RFOs, and hardware prefetches from L1D, and instruction fetches from IFU.",
@@ -298,6 +336,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -309,6 +348,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were HitM responses from shared L3.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -320,6 +360,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -331,6 +372,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD2",
@@ -342,6 +384,7 @@
},
{
"BriefDescription": "Data from local DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70, BDM100",
"EventCode": "0xD3",
@@ -353,6 +396,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: remote DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70",
"EventCode": "0xD3",
@@ -363,6 +407,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: forwarded from remote cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70",
"EventCode": "0xD3",
@@ -373,6 +418,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: Remote cache HITM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDE70",
"EventCode": "0xD3",
@@ -383,26 +429,29 @@
},
{
"BriefDescription": "Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HIT_LFB",
"PEBS": "1",
- "PublicDescription": "This event counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready.\nNote: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.",
+ "PublicDescription": "This event counts retired load uops which data sources were load uops missed L1 but hit a fill buffer due to a preceding miss to the same cache line with the data not ready. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load.",
"SampleAfterValue": "100003",
"UMask": "0x40"
},
{
"BriefDescription": "Retired load uops with L1 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"PEBS": "1",
- "PublicDescription": "This event counts retired load uops which data sources were hits in the nearest-level (L1) cache.\nNote: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.",
+ "PublicDescription": "This event counts retired load uops which data sources were hits in the nearest-level (L1) cache. Note: Only two data-sources of L1/FB are applicable for AVX-256bit even though the corresponding AVX load could be serviced by a deeper level in the memory hierarchy. Data source is reported for the Low-half load. This event also counts SW prefetches independent of the actual data source.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load uops misses in L1 cache as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
@@ -413,6 +462,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM35",
"EventCode": "0xD1",
@@ -424,6 +474,7 @@
},
{
"BriefDescription": "Miss in mid-level (L2) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
@@ -434,6 +485,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were data hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100",
"EventCode": "0xD1",
@@ -445,6 +497,7 @@
},
{
"BriefDescription": "Miss in last-level (L3) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM100, BDE70",
"EventCode": "0xD1",
@@ -455,6 +508,7 @@
},
{
"BriefDescription": "Retired load uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
@@ -465,6 +519,7 @@
},
{
"BriefDescription": "Retired store uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
@@ -475,6 +530,7 @@
},
{
"BriefDescription": "Retired load uops with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "BDM35",
"EventCode": "0xD0",
@@ -486,6 +542,7 @@
},
{
"BriefDescription": "Retired load uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
@@ -496,6 +553,7 @@
},
{
"BriefDescription": "Retired store uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
@@ -506,6 +564,7 @@
},
{
"BriefDescription": "Retired load uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
@@ -516,6 +575,7 @@
},
{
"BriefDescription": "Retired store uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
@@ -526,6 +586,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "This event counts the demand and prefetch data reads. All Core Data Reads include cacheable Demands and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -534,6 +595,7 @@
},
{
"BriefDescription": "Any memory transaction that reached the SQ.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"PublicDescription": "This event counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, and so on.",
@@ -542,6 +604,7 @@
},
{
"BriefDescription": "Cacheable and non-cacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "This event counts both cacheable and non-cacheable code read requests.",
@@ -550,6 +613,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "This event counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -558,6 +622,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "This event counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
@@ -566,14 +631,16 @@
},
{
"BriefDescription": "Offcore requests buffer cannot take more entries for this thread core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
- "PublicDescription": "This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full.\nNote: Writeback pending FIFO has six entries.",
+ "PublicDescription": "This event counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full. Note: Writeback pending FIFO has six entries.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
@@ -583,6 +650,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -593,6 +661,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -603,6 +672,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -613,6 +683,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
@@ -622,15 +693,17 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
- "PublicDescription": "This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS.\nNote: A prefetch promoted to Demand is counted from the promotion point.",
+ "PublicDescription": "This event counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS. Note: A prefetch promoted to Demand is counted from the promotion point.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"Errata": "BDM76",
"EventCode": "0x60",
@@ -640,6 +713,7 @@
},
{
"BriefDescription": "Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "BDM76",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
@@ -649,6 +723,7 @@
},
{
"BriefDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE",
"SampleAfterValue": "100003",
@@ -656,6 +731,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -665,6 +741,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -674,6 +751,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -683,6 +761,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -692,6 +771,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -701,6 +781,7 @@
},
{
"BriefDescription": "Counts all requests hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_REQUESTS.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -710,6 +791,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -719,6 +801,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -728,6 +811,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -737,6 +821,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -746,6 +831,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -755,6 +841,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_RFO.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -764,6 +851,7 @@
},
{
"BriefDescription": "Split locks in SQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"PublicDescription": "This event counts the number of split locks in the super queue.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/counter.json b/tools/perf/pmu-events/arch/x86/broadwellx/counter.json
new file mode 100644
index 000000000000..9fde9c0a896d
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/counter.json
@@ -0,0 +1,57 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "HA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "QPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "R2PCIe",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "R3QPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "3"
+ },
+ {
+ "Unit": "SBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UBOX",
+ "CountersNumFixed": "1",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "iMC",
+ "CountersNumFixed": "1",
+ "CountersNumGeneric": "4"
+ }
+] \ No newline at end of file
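
The new counter.json above records, for each PMU unit, how many fixed and generic counters exist, while the per-event "Counter" field added throughout this patch lists the generic counters an event may be scheduled on (e.g. "0,1,2,3", or just "3" for the latency events). Below is a minimal, illustrative Python sketch of how such a file could be consumed; the file path and the sample event dictionary are assumptions for illustration only and are not part of the patch.

    #!/usr/bin/env python3
    # Illustrative sketch (not part of the patch): read the broadwellx
    # counter.json added above and report the counter layout per unit,
    # then split a per-event "Counter" string into counter indices.
    import json

    # Assumed path relative to a kernel tree checkout.
    COUNTER_JSON = "tools/perf/pmu-events/arch/x86/broadwellx/counter.json"

    with open(COUNTER_JSON) as f:
        units = json.load(f)

    for unit in units:
        # CountersNumGeneric/CountersNumFixed are stored as strings in the JSON.
        print(f'{unit["Unit"]:>6}: {unit["CountersNumGeneric"]} generic, '
              f'{unit["CountersNumFixed"]} fixed')

    # Hypothetical event entry mirroring the shape of the hunks above.
    sample_event = {"EventName": "SQ_MISC.SPLIT_LOCK", "Counter": "0,1,2,3"}
    allowed = [int(c) for c in sample_event["Counter"].split(",")]
    print(sample_event["EventName"], "may be scheduled on counters", allowed)
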
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/floating-point.json b/tools/perf/pmu-events/arch/x86/broadwellx/floating-point.json
index 986869252e71..9bf595af3f42 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 4 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 4 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 8 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational double precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.DOUBLE",
"SampleAfterValue": "2000006",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational packed floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* packed double and single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.PACKED",
"SampleAfterValue": "2000004",
@@ -55,6 +62,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computation operation. Applies to SSE* and AVX* scalar double and single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -63,6 +71,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -71,6 +80,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -79,6 +89,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational single precision floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar and packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SINGLE",
"SampleAfterValue": "2000005",
@@ -86,6 +97,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"SampleAfterValue": "2000003",
@@ -93,6 +105,7 @@
},
{
"BriefDescription": "Cycles with any input/output SSE or FP assist",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.ANY",
@@ -102,6 +115,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to input values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_INPUT",
"PublicDescription": "This event counts any input SSE* FP assist - invalid operation, denormal operand, dividing by zero, SNaN operand. Counting includes only cases involving penalties that required micro-code assist intervention.",
@@ -110,6 +124,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to Output values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_OUTPUT",
"PublicDescription": "This event counts the number of SSE* floating point (FP) micro-code assist (numeric overflow/underflow) when the output value (destination register) is invalid. Counting covers only cases involving penalties that require micro-code assist intervention.",
@@ -118,6 +133,7 @@
},
{
"BriefDescription": "Number of X87 assists due to input value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_INPUT",
"PublicDescription": "This event counts x87 floating point (FP) micro-code assist (invalid operation, denormal operand, SNaN operand) when the input value (one of the source operands to an FP instruction) is invalid.",
@@ -126,6 +142,7 @@
},
{
"BriefDescription": "Number of X87 assists due to output value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_OUTPUT",
"PublicDescription": "This event counts the number of x87 floating point (FP) micro-code assist (numeric overflow/underflow, inexact result) when the output value (destination register) is invalid.",
@@ -134,6 +151,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -141,6 +159,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -148,6 +167,7 @@
},
{
"BriefDescription": "Number of transitions from AVX-256 to legacy SSE when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "BDM30",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.AVX_TO_SSE",
@@ -157,6 +177,7 @@
},
{
"BriefDescription": "Number of transitions from SSE to AVX-256 when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "BDM30",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.SSE_TO_AVX",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "Micro-op dispatches cancelled due to insufficient SIMD physical register file read ports",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UOP_DISPATCHES_CANCELLED.SIMD_PRF",
"PublicDescription": "This event counts the number of micro-operations cancelled after they were dispatched from the scheduler to the execution units when the total number of physical register read ports across all dispatch ports exceeds the read bandwidth of the physical register file. The SIMD_PRF subevent applies to the following instructions: VDPPS, DPPS, VPCMPESTRI, PCMPESTRI, VPCMPESTRM, PCMPESTRM, VFMADD*, VFMADDSUB*, VFMSUB*, VMSUBADD*, VFNMADD*, VFNMSUB*. See the Broadwell Optimization Guide for more information.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/frontend.json b/tools/perf/pmu-events/arch/x86/broadwellx/frontend.json
index bd5da39564e1..018020a51436 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"SampleAfterValue": "100003",
@@ -8,14 +9,16 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
- "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. \nMM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs.\nPenalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
+ "PublicDescription": "This event counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs. Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PublicDescription": "This event counts the number of both cacheable and noncacheable Instruction Cache, Streaming Buffer and Victim Cache Reads including UC fetches.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFDATA_STALL",
"PublicDescription": "This event counts cycles during which the demand fetch waits for data (wfdM104H) from L2 or iSB (opportunistic hit).",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Misses. Includes Uncacheable accesses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "This event counts the number of instruction cache, streaming buffer and victim cache misses. Counting includes UC accesses.",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS",
@@ -76,6 +85,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES",
@@ -85,6 +95,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may bypass the IDQ.",
@@ -93,6 +104,7 @@
},
{
"BriefDescription": "Instruction Decode Queue (IDQ) empty cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.EMPTY",
"PublicDescription": "This counts the number of cycles that the instruction decoder queue is empty and can indicate that the application may be bound in the front end. It does not determine whether there are uops being delivered to the Alloc stage since uops can be delivered by bypass skipping the Instruction Decode Queue (IDQ) when it is empty.",
@@ -101,6 +113,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_ALL_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may bypass the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -109,6 +122,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES",
@@ -118,6 +132,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "This event counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may bypass the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -126,6 +141,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES",
@@ -135,6 +151,7 @@
},
{
"BriefDescription": "Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_CYCLES",
@@ -144,6 +161,7 @@
},
{
"BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -154,6 +172,7 @@
},
{
"BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_UOPS",
"PublicDescription": "This event counts the number of uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ.",
@@ -162,6 +181,7 @@
},
{
"BriefDescription": "Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_MITE_UOPS",
"PublicDescription": "This event counts the number of uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ.",
@@ -170,6 +190,7 @@
},
{
"BriefDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -179,6 +200,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "This event counts the total number of uops delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may bypass the IDQ. Uops maybe initiated by Decode Stream Buffer (DSB) or MITE.",
@@ -187,14 +209,16 @@
},
{
"BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
- "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when:\n a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread;\n b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); \n c. Instruction Decode Queue (IDQ) delivers four uops.",
+ "PublicDescription": "This event counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread; b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions); c. Instruction Decode Queue (IDQ) delivers four uops.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -204,6 +228,7 @@
},
{
"BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
@@ -213,6 +238,7 @@
},
{
"BriefDescription": "Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE",
@@ -222,6 +248,7 @@
},
{
"BriefDescription": "Cycles with less than 2 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE",
@@ -230,6 +257,7 @@
},
{
"BriefDescription": "Cycles with less than 3 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/memory.json b/tools/perf/pmu-events/arch/x86/broadwellx/memory.json
index a7449e5b68dc..093c8b564fc0 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of times HLE abort was triggered",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED",
"PEBS": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an HLE abort was attributed to a Memory condition (See TSX_Memory event for additional details).",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to uncommon conditions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC2",
"PublicDescription": "Number of times the TSX watchdog signaled an HLE abort.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC3",
"PublicDescription": "Number of times a disallowed operation caused an HLE abort.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC4",
"PublicDescription": "Number of times HLE caused a fault.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times HLE aborted and was not due to the abort conditions in subevents 3-6.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of times HLE commit succeeded",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.COMMIT",
"PublicDescription": "Number of times HLE commit succeeded.",
@@ -58,22 +65,25 @@
},
{
"BriefDescription": "Number of times we entered an HLE region; does not count nested transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.START",
- "PublicDescription": "Number of times we entered an HLE region\n does not count nested transactions.",
+ "PublicDescription": "Number of times we entered an HLE region does not count nested transactions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
- "PublicDescription": "This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following:\n1. memory disambiguation,\n2. external snoop, or\n3. cross SMT-HW-thread snoop (stores) hitting load buffer.",
+ "PublicDescription": "This event counts the number of memory ordering Machine Clears detected. Memory Ordering Machine Clears can result from one of the following: 1. memory disambiguation, 2. external snoop, or 3. cross SMT-HW-thread snoop (stores) hitting load buffer.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
{
"BriefDescription": "Randomly selected loads with latency value being above 128",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -87,6 +97,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 16",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -100,6 +111,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 256",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -113,6 +125,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 32",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -126,6 +139,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 4",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -139,6 +153,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 512",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -152,6 +167,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 64",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -165,6 +181,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 8",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "BDM100, BDM35",
"EventCode": "0xcd",
@@ -178,6 +195,7 @@
},
{
"BriefDescription": "Speculative cache line split load uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.LOADS",
"PublicDescription": "This event counts speculative cache-line split load uops dispatched to the L1 cache.",
@@ -186,6 +204,7 @@
},
{
"BriefDescription": "Speculative cache line split STA uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.STORES",
"PublicDescription": "This event counts speculative cache line split store-address (STA) uops dispatched to the L1 cache.",
@@ -194,6 +213,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -203,6 +223,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -212,6 +233,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -221,6 +243,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -230,6 +253,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and the data is returned from remote dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.REMOTE_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -239,6 +263,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and the modified data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -248,6 +273,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and clean or shared data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -257,6 +283,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -266,6 +293,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -275,6 +303,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and the data is returned from remote dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.REMOTE_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -284,6 +313,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and the modified data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -293,6 +323,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and clean or shared data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -302,6 +333,7 @@
},
{
"BriefDescription": "Counts all requests miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_REQUESTS.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -311,6 +343,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -320,6 +353,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -329,6 +363,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -338,6 +373,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) miss the L3 and the modified data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -347,6 +383,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_CODE_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -356,6 +393,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_RFO.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -365,6 +403,7 @@
},
{
"BriefDescription": "Number of times RTM abort was triggered",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
"PEBS": "1",
@@ -374,6 +413,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an RTM abort was attributed to a Memory condition (See TSX_Memory event for additional details).",
@@ -382,6 +422,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC2",
"PublicDescription": "Number of times the TSX watchdog signaled an RTM abort.",
@@ -390,6 +431,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC3",
"PublicDescription": "Number of times a disallowed operation caused an RTM abort.",
@@ -398,6 +440,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC4",
"PublicDescription": "Number of times a RTM caused a fault.",
@@ -406,6 +449,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times RTM aborted and was not due to the abort conditions in subevents 3-6.",
@@ -414,6 +458,7 @@
},
{
"BriefDescription": "Number of times RTM commit succeeded",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Number of times RTM commit succeeded.",
@@ -422,14 +467,16 @@
},
{
"BriefDescription": "Number of times we entered an RTM region; does not count nested transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.START",
- "PublicDescription": "Number of times we entered an RTM region\n does not count nested transactions.",
+ "PublicDescription": "Number of times we entered an RTM region does not count nested transactions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC1",
"SampleAfterValue": "2000003",
@@ -437,6 +484,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"PublicDescription": "Unfriendly TSX abort triggered by a vzeroupper instruction.",
@@ -445,6 +493,7 @@
},
{
"BriefDescription": "Counts the number of times an instruction execution caused the transactional nest count supported to be exceeded",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"PublicDescription": "Unfriendly TSX abort triggered by a nest count that is too deep.",
@@ -453,6 +502,7 @@
},
{
"BriefDescription": "Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC4",
"PublicDescription": "RTM region detected inside HLE.",
@@ -461,6 +511,7 @@
},
{
"BriefDescription": "Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC5",
"SampleAfterValue": "2000003",
@@ -468,6 +519,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to an evicted line caused by a transaction overflow",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Number of times a TSX Abort was triggered due to an evicted line caused by a transaction overflow.",
@@ -476,6 +528,7 @@
},
{
"BriefDescription": "Number of times a TSX line had a cache conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Number of times a TSX line had a cache conflict.",
@@ -484,6 +537,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to release/commit but data and address mismatch",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH",
"PublicDescription": "Number of times a TSX Abort was triggered due to release/commit but data and address mismatch.",
@@ -492,6 +546,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY",
"PublicDescription": "Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty.",
@@ -500,6 +555,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT",
"PublicDescription": "Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer.",
@@ -508,6 +564,7 @@
},
{
"BriefDescription": "Number of times a TSX Abort was triggered due to a non-release/commit store to lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK",
"PublicDescription": "Number of times a TSX Abort was triggered due to a non-release/commit store to lock.",
@@ -516,6 +573,7 @@
},
{
"BriefDescription": "Number of times we could not allocate Lock Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL",
"PublicDescription": "Number of times we could not allocate Lock Buffer.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/metricgroups.json b/tools/perf/pmu-events/arch/x86/broadwellx/metricgroups.json
index f6a0258e3241..0863375bdead 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/metricgroups.json
@@ -2,9 +2,21 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -23,8 +35,11 @@
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -38,6 +53,7 @@
"Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -65,6 +81,7 @@
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -90,10 +107,12 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/other.json b/tools/perf/pmu-events/arch/x86/broadwellx/other.json
index 1c2a5b001949..f0de6a71719b 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/other.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Unhalted core cycles when the thread is in ring 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING0",
"PublicDescription": "This event counts the unhalted core cycles during which the thread is in the ring 0 privileged mode.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of intervals between processor halts while thread is in ring 0",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5C",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Unhalted core cycles when thread is in rings 1, 2, or 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING123",
"PublicDescription": "This event counts unhalted core cycles during which the thread is in rings 1, 2, or 3.",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Cycles when L1 and L2 are locked due to UC or split lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION",
"PublicDescription": "This event counts cycles in which the L1 and L2 are locked due to a UC lock or split lock. A lock is asserted in case of locked memory access, due to noncacheable memory, locked operation that spans two cache lines, or a page walk from the noncacheable page table. L1D and L2 locks have a very high performance penalty and it is highly recommended to avoid such access.",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/pipeline.json b/tools/perf/pmu-events/arch/x86/broadwellx/pipeline.json
index 9a902d2160e6..962cd07eb307 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles when divider is busy executing divide operations",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "ARITH.FPU_DIV_ACTIVE",
"PublicDescription": "This event counts the number of the divide operations executed. Uses edge-detect and a cmask value of 1 on ARITH.FPU_DIV_ACTIVE to get the number of the divide operations executed.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Speculative and retired branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_BRANCHES",
"PublicDescription": "This event counts both taken and not taken speculative and retired branch instructions.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_CONDITIONAL",
"PublicDescription": "This event counts both taken and not taken speculative and retired macro-conditional branch instructions.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Speculative and retired macro-unconditional branches excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_JMP",
"PublicDescription": "This event counts both taken and not taken speculative and retired macro-unconditional branch instructions, excluding calls and indirects.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_NEAR_CALL",
"PublicDescription": "This event counts both taken and not taken speculative and retired direct near calls.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts both taken and not taken speculative and retired indirect branches excluding calls and return branches.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Speculative and retired indirect return branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN",
"PublicDescription": "This event counts both taken and not taken speculative and retired indirect branches that have a return mnemonic.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Not taken macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "This event counts not taken macro-conditional branch instructions.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "This event counts taken speculative and retired macro-conditional branch instructions.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branch instructions excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_JUMP",
"PublicDescription": "This event counts taken speculative and retired macro-conditional branch instructions excluding calls and indirect branches.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Taken speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL",
"PublicDescription": "This event counts taken speculative and retired direct near calls.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts taken speculative and retired indirect branches excluding calls and return branches.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"PublicDescription": "This event counts taken speculative and retired indirect calls including both register and memory indirect.",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN",
"PublicDescription": "This event counts taken speculative and retired indirect branches that have a return mnemonic.",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PublicDescription": "This event counts all (macro) branch instructions retired.",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired. (Precise Event - PEBS)",
+ "Counter": "0,1,2,3",
"Errata": "BDW98",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS",
@@ -130,6 +146,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "BDW98",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -157,6 +176,7 @@
},
{
"BriefDescription": "Direct and indirect macro near call instructions retired (captured in ring 3).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL_R3",
"PEBS": "1",
@@ -166,6 +186,7 @@
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
"PEBS": "1",
@@ -175,6 +196,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -184,6 +206,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NOT_TAKEN",
"PublicDescription": "This event counts not taken branch instructions retired.",
@@ -192,6 +215,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_BRANCHES",
"PublicDescription": "This event counts both taken and not taken speculative and retired mispredicted branch instructions.",
@@ -200,6 +224,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_CONDITIONAL",
"PublicDescription": "This event counts both taken and not taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -208,6 +233,7 @@
},
{
"BriefDescription": "Mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts both taken and not taken mispredicted indirect branches excluding calls and returns.",
@@ -216,6 +242,7 @@
},
{
"BriefDescription": "Speculative mispredicted indirect branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.INDIRECT",
"PublicDescription": "Counts speculatively miss-predicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded).",
@@ -224,6 +251,7 @@
},
{
"BriefDescription": "Not taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "This event counts not taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -232,6 +260,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "This event counts taken speculative and retired mispredicted macro conditional branch instructions.",
@@ -240,6 +269,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "This event counts taken speculative and retired mispredicted indirect branches excluding calls and returns.",
@@ -248,6 +278,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -255,6 +286,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_RETURN_NEAR",
"PublicDescription": "This event counts taken speculative and retired mispredicted indirect branches that have a return mnemonic.",
@@ -263,6 +295,7 @@
},
{
"BriefDescription": "All mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PublicDescription": "This event counts all mispredicted macro branch instructions retired.",
@@ -270,6 +303,7 @@
},
{
"BriefDescription": "Mispredicted macro branch instructions retired. (Precise Event - PEBS)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -279,6 +313,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -288,6 +323,7 @@
},
{
"BriefDescription": "number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -297,6 +333,7 @@
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.RET",
"PEBS": "1",
@@ -306,6 +343,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -313,6 +351,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK",
"PublicDescription": "This is a fixed-frequency event programmed to general counters. It counts when the core is unhalted at 100 Mhz.",
@@ -322,6 +361,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "100003",
@@ -329,6 +369,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -336,13 +377,15 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
- "PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. \nNote: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. This event is clocked by base clock (100 Mhz) on Sandy Bridge. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
+ "PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. This event is clocked by base clock (100 Mhz) on Sandy Bridge. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
"UMask": "0x3"
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate).",
@@ -352,6 +395,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "100003",
@@ -359,6 +403,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "This event counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -367,12 +412,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD_ANY",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -381,12 +428,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -395,6 +444,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_PENDING",
@@ -404,6 +454,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -412,6 +463,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_PENDING",
@@ -421,6 +473,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_LDM_PENDING",
@@ -430,6 +483,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -438,6 +492,7 @@
},
{
"BriefDescription": "This event increments by 1 for every cycle where there was no execute for this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_NO_EXECUTE",
@@ -447,6 +502,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -455,6 +511,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_PENDING",
@@ -464,6 +521,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -472,6 +530,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_PENDING",
@@ -481,6 +540,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_LDM_PENDING",
@@ -490,6 +550,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -498,6 +559,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -506,6 +568,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "This event counts stalls occurred due to changing prefix length (66, 67 or REX.W when they change the length of the decoded instruction). Occurrences counting is proportional to the number of prefixes in a 16B-line. This may result in the following penalties: three-cycle penalty for each LCP in a 16-byte chunk.",
@@ -514,13 +577,15 @@
},
{
"BriefDescription": "Instructions retired from execution.",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. \nNotes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. \nCounting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
+ "PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. Counting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3",
"Errata": "BDM61",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
@@ -529,6 +594,7 @@
},
{
"BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
"Errata": "BDM11, BDM55",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
@@ -539,6 +605,7 @@
},
{
"BriefDescription": "FP operations retired. X87 FP operations that have no exceptions:",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.X87",
"PublicDescription": "This event counts FP operations retired. For X87 FP operations that have no exceptions counting also includes flows that have several X87, or flows that use X87 uops in the exception handling.",
@@ -547,6 +614,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x0D",
"EventName": "INT_MISC.RAT_STALL_CYCLES",
"PublicDescription": "This event counts the number of cycles during which Resource Allocation Table (RAT) external stall is sent to Instruction Decode Queue (IDQ) for the current thread. This also includes the cycles during which the Allocator is serving another thread.",
@@ -555,6 +623,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
@@ -565,6 +634,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES_ANY",
@@ -573,6 +643,7 @@
},
{
"BriefDescription": "This event counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"SampleAfterValue": "100003",
@@ -580,14 +651,16 @@
},
{
"BriefDescription": "Cases when loads get true Block-on-Store blocking code preventing store forwarding",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
- "PublicDescription": "This event counts how many times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when:\n - preceding store conflicts with the load (incomplete overlap);\n - store forwarding is impossible due to u-arch limitations;\n - preceding lock RMW operations are not forwarded;\n - store has the no-forward bit set (uncacheable/page-split/masked stores);\n - all-blocking stores are used (mostly, fences and port I/O);\nand others.\nThe most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on store due to being non-WB memory type or a lock. These cases are covered by other events.\nSee the table of not supported store forwards in the Optimization Guide.",
+ "PublicDescription": "This event counts how many times the load operation got the true Block-on-Store blocking code preventing store forwarding. This includes cases when: - preceding store conflicts with the load (incomplete overlap); - store forwarding is impossible due to u-arch limitations; - preceding lock RMW operations are not forwarded; - store has the no-forward bit set (uncacheable/page-split/masked stores); - all-blocking stores are used (mostly, fences and port I/O); and others. The most common case is a load blocked due to its address range overlapping with a preceding smaller uncompleted store. Note: This event does not take into account cases of out-of-SW-control (for example, SbTailHit), unknown physical STA, and cases of blocking loads on store due to being non-WB memory type or a lock. These cases are covered by other events. See the table of not supported store forwards in the Optimization Guide.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
{
"BriefDescription": "False dependencies in MOB due to partial compare",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "This event counts false dependencies in MOB when the partial comparison upon loose net check and dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4k aliased.",
@@ -596,6 +669,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for hardware prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "LOAD_HIT_PRE.HW_PF",
"PublicDescription": "This event counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the hardware prefetch.",
@@ -604,6 +678,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for software prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PRE.SW_PF",
"PublicDescription": "This event counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by asm inspection of the nearby instructions.",
@@ -612,6 +687,7 @@
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_4_UOPS",
@@ -620,6 +696,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -628,6 +705,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "LSD.UOPS",
"SampleAfterValue": "2000003",
@@ -635,6 +713,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xC3",
@@ -644,6 +723,7 @@
},
{
"BriefDescription": "Cycles there was a Nuke. Account for both thread-specific and All Thread Nukes.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.CYCLES",
"PublicDescription": "This event counts both thread-specific (TS) and all-thread (AT) nukes.",
@@ -652,6 +732,7 @@
},
{
"BriefDescription": "This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MASKMOV",
"PublicDescription": "Maskmov false fault - counts number of time ucode passes through Maskmov flow due to instruction's mask being 0 while the flow was completed without raising a fault.",
@@ -660,6 +741,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "This event counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -668,6 +750,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -675,6 +758,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -682,6 +766,7 @@
},
{
"BriefDescription": "Number of times any microcode assist is invoked by HW upon uop writeback.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.ANY_WB_ASSIST",
"SampleAfterValue": "100003",
@@ -689,6 +774,7 @@
},
{
"BriefDescription": "Resource-related stall cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.ANY",
"PublicDescription": "This event counts resource-related stall cycles.",
@@ -697,6 +783,7 @@
},
{
"BriefDescription": "Cycles stalled due to re-order buffer full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ROB",
"PublicDescription": "This event counts ROB full stall cycles. This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -705,6 +792,7 @@
},
{
"BriefDescription": "Cycles stalled due to no eligible RS entry available.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.RS",
"PublicDescription": "This event counts stall cycles caused by absence of eligible entries in the reservation station (RS). This may result from RS overflow, or from RS deallocation because of the RS array Write Port allocation scheme (each RS entry has two write ports instead of four. As a result, empty entries could not be used, although RS is not really full). This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -713,6 +801,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "This event counts stall cycles caused by the store buffer (SB) overflow (excluding draining from synch). This counts cycles that the pipeline backend blocked uop delivery from the front end.",
@@ -721,6 +810,7 @@
},
{
"BriefDescription": "Count cases of saving new LBR",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.LBR_INSERTS",
"PublicDescription": "This event counts cases of saving new LBR records by hardware. This assumes proper enabling of LBRs and takes into account LBR filtering done by the LBR_SELECT register.",
@@ -729,14 +819,16 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
- "PublicDescription": "This event counts cycles during which the reservation station (RS) is empty for the thread.\nNote: In ST-mode, not active thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issues.",
+ "PublicDescription": "This event counts cycles during which the reservation station (RS) is empty for the thread. Note: In ST-mode, not active thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issues.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -747,6 +839,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.",
@@ -755,6 +848,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.",
@@ -763,6 +857,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.",
@@ -771,6 +866,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.",
@@ -779,6 +875,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.",
@@ -787,6 +884,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.",
@@ -795,6 +893,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_6",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.",
@@ -803,6 +902,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_7",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.",
@@ -811,6 +911,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
"PublicDescription": "Number of uops executed from any thread.",
@@ -819,6 +920,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -827,6 +929,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -835,6 +938,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -843,6 +947,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -851,6 +956,7 @@
},
{
"BriefDescription": "Cycles with no micro-ops executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE",
"Invert": "1",
@@ -859,6 +965,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC",
@@ -867,6 +974,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC",
@@ -875,6 +983,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC",
@@ -883,6 +992,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC",
@@ -891,6 +1001,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -901,6 +1012,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.THREAD",
"PublicDescription": "Number of uops to be executed per-thread each cycle.",
@@ -909,6 +1021,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 0.",
@@ -918,6 +1031,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0_CORE",
"SampleAfterValue": "2000003",
@@ -925,6 +1039,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 1.",
@@ -934,6 +1049,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 1.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1_CORE",
"SampleAfterValue": "2000003",
@@ -941,6 +1057,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 2.",
@@ -950,6 +1067,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2_CORE",
"SampleAfterValue": "2000003",
@@ -957,6 +1075,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 3.",
@@ -966,6 +1085,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3_CORE",
"SampleAfterValue": "2000003",
@@ -973,6 +1093,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 4.",
@@ -982,6 +1103,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 4.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4_CORE",
"SampleAfterValue": "2000003",
@@ -989,6 +1111,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 5.",
@@ -998,6 +1121,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 5.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5_CORE",
"SampleAfterValue": "2000003",
@@ -1005,6 +1129,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 6.",
@@ -1014,6 +1139,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 6.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6_CORE",
"SampleAfterValue": "2000003",
@@ -1021,6 +1147,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7",
"PublicDescription": "This event counts, on the per-thread basis, cycles during which uops are dispatched from the Reservation Station (RS) to port 7.",
@@ -1030,6 +1157,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 7.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7_CORE",
"SampleAfterValue": "2000003",
@@ -1037,6 +1165,7 @@
},
{
"BriefDescription": "Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "This event counts the number of Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS).",
@@ -1045,14 +1174,16 @@
},
{
"BriefDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive; added by GSR u-arch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.FLAGS_MERGE",
- "PublicDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive\n added by GSR u-arch.",
+ "PublicDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive added by GSR u-arch.",
"SampleAfterValue": "2000003",
"UMask": "0x10"
},
{
"BriefDescription": "Number of Multiply packed/scalar single precision uops allocated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SINGLE_MUL",
"SampleAfterValue": "2000003",
@@ -1060,6 +1191,7 @@
},
{
"BriefDescription": "Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SLOW_LEA",
"SampleAfterValue": "2000003",
@@ -1067,6 +1199,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -1077,6 +1210,7 @@
},
{
"BriefDescription": "Actually retired uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ALL",
"PEBS": "1",
@@ -1086,6 +1220,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.RETIRE_SLOTS",
"PEBS": "1",
@@ -1095,6 +1230,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -1105,6 +1241,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "16",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json
index 400d784d1457..8f6955615bfd 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "LLC prefetch misses for code reads. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.CODE_LLC_PREFETCH",
"Filter": "filter_opc=0x191",
@@ -12,6 +13,7 @@
},
{
"BriefDescription": "LLC prefetch misses for data reads. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.DATA_LLC_PREFETCH",
"Filter": "filter_opc=0x192",
@@ -23,6 +25,7 @@
},
{
"BriefDescription": "LLC misses - demand and prefetch data reads - excludes LLC prefetches. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.DATA_READ",
"Filter": "filter_opc=0x182",
@@ -34,6 +37,7 @@
},
{
"BriefDescription": "MMIO reads. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.MMIO_READ",
"Filter": "filter_opc=0x187,filter_nc=1",
@@ -45,6 +49,7 @@
},
{
"BriefDescription": "MMIO writes. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.MMIO_WRITE",
"Filter": "filter_opc=0x18f,filter_nc=1",
@@ -56,6 +61,7 @@
},
{
"BriefDescription": "PCIe write misses (full cache line). Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.PCIE_NON_SNOOP_WRITE",
"Filter": "filter_opc=0x1c8,filter_tid=0x3e",
@@ -67,6 +73,7 @@
},
{
"BriefDescription": "LLC misses for PCIe read current. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.PCIE_READ",
"Filter": "filter_opc=0x19e",
@@ -78,6 +85,7 @@
},
{
"BriefDescription": "ItoM write misses (as part of fast string memcpy stores) + PCIe full line writes. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.PCIE_WRITE",
"Filter": "filter_opc=0x1c8",
@@ -89,6 +97,7 @@
},
{
"BriefDescription": "LLC prefetch misses for RFO. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.RFO_LLC_PREFETCH",
"Filter": "filter_opc=0x190",
@@ -100,6 +109,7 @@
},
{
"BriefDescription": "LLC misses - Uncacheable reads (from cpu) . Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.UNCACHEABLE",
"Filter": "filter_opc=0x187",
@@ -111,6 +121,7 @@
},
{
"BriefDescription": "L2 demand and L2 prefetch code references to LLC. Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.CODE_LLC_PREFETCH",
"Filter": "filter_opc=0x181",
@@ -122,6 +133,7 @@
},
{
"BriefDescription": "PCIe writes (partial cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.PCIE_NS_PARTIAL_WRITE",
"Filter": "filter_opc=0x180,filter_tid=0x3e",
@@ -132,6 +144,7 @@
},
{
"BriefDescription": "PCIe read current. Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.PCIE_READ",
"Filter": "filter_opc=0x19e",
@@ -143,6 +156,7 @@
},
{
"BriefDescription": "PCIe write references (full cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.PCIE_WRITE",
"Filter": "filter_opc=0x1c8,filter_tid=0x3e",
@@ -154,6 +168,7 @@
},
{
"BriefDescription": "Streaming stores (full cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.STREAMING_FULL",
"Filter": "filter_opc=0x18c",
@@ -165,6 +180,7 @@
},
{
"BriefDescription": "Streaming stores (partial cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.STREAMING_PARTIAL",
"Filter": "filter_opc=0x18d",
@@ -176,6 +192,7 @@
},
{
"BriefDescription": "Bounce Control",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_C_BOUNCE_CONTROL",
"PerPkg": "1",
@@ -183,12 +200,14 @@
},
{
"BriefDescription": "Uncore Clocks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_C_CLOCKTICKS",
"PerPkg": "1",
"Unit": "CBOX"
},
{
"BriefDescription": "Counter 0 Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_C_COUNTER0_OCCUPANCY",
"PerPkg": "1",
@@ -197,6 +216,7 @@
},
{
"BriefDescription": "FaST wire asserted",
+ "Counter": "0,1",
"EventCode": "0x9",
"EventName": "UNC_C_FAST_ASSERTED",
"PerPkg": "1",
@@ -205,6 +225,7 @@
},
{
"BriefDescription": "All LLC Misses (code+ data rd + data wr - including demand and prefetch)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.ANY",
"Filter": "filter_state=0x1",
@@ -216,6 +237,7 @@
},
{
"BriefDescription": "Cache Lookups; Data Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.DATA_READ",
"PerPkg": "1",
@@ -225,6 +247,7 @@
},
{
"BriefDescription": "Cache Lookups; Lookups that Match NID",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.NID",
"PerPkg": "1",
@@ -234,6 +257,7 @@
},
{
"BriefDescription": "Cache Lookups; Any Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.READ",
"PerPkg": "1",
@@ -243,6 +267,7 @@
},
{
"BriefDescription": "Cache Lookups; External Snoop Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.REMOTE_SNOOP",
"PerPkg": "1",
@@ -252,6 +277,7 @@
},
{
"BriefDescription": "Cache Lookups; Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.WRITE",
"PerPkg": "1",
@@ -261,6 +287,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.E_STATE",
"PerPkg": "1",
@@ -270,6 +297,7 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.F_STATE",
"PerPkg": "1",
@@ -279,6 +307,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.I_STATE",
"PerPkg": "1",
@@ -288,6 +317,7 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.MISS",
"PerPkg": "1",
@@ -297,6 +327,7 @@
},
{
"BriefDescription": "M line evictions from LLC (writebacks to memory)",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.M_STATE",
"PerPkg": "1",
@@ -307,6 +338,7 @@
},
{
"BriefDescription": "Lines Victimized; Victimized Lines that Match NID",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.NID",
"PerPkg": "1",
@@ -316,6 +348,7 @@
},
{
"BriefDescription": "Cbo Misc; DRd hitting non-M with raw CV=0",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.CVZERO_PREFETCH_MISS",
"PerPkg": "1",
@@ -325,6 +358,7 @@
},
{
"BriefDescription": "Cbo Misc; Clean Victim with raw CV=0",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.CVZERO_PREFETCH_VICTIM",
"PerPkg": "1",
@@ -334,6 +368,7 @@
},
{
"BriefDescription": "Cbo Misc; RFO HitS",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.RFO_HIT_S",
"PerPkg": "1",
@@ -343,6 +378,7 @@
},
{
"BriefDescription": "Cbo Misc; Silent Snoop Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.RSPI_WAS_FSE",
"PerPkg": "1",
@@ -352,6 +388,7 @@
},
{
"BriefDescription": "Cbo Misc",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.STARTED",
"PerPkg": "1",
@@ -361,6 +398,7 @@
},
{
"BriefDescription": "Cbo Misc; Write Combining Aliasing",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.WC_ALIASING",
"PerPkg": "1",
@@ -370,6 +408,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE0",
"PerPkg": "1",
@@ -379,6 +418,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE1",
"PerPkg": "1",
@@ -388,6 +428,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE2",
"PerPkg": "1",
@@ -397,6 +438,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE3",
"PerPkg": "1",
@@ -406,6 +448,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Bits Decremented",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.LRU_DECREMENT",
"PerPkg": "1",
@@ -415,6 +458,7 @@
},
{
"BriefDescription": "LRU Queue; Non-0 Aged Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.VICTIM_NON_ZERO",
"PerPkg": "1",
@@ -424,6 +468,7 @@
},
{
"BriefDescription": "AD Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.ALL",
"PerPkg": "1",
@@ -433,6 +478,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN",
"PerPkg": "1",
@@ -442,6 +488,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -451,6 +498,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN_ODD",
"PerPkg": "1",
@@ -460,6 +508,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP",
"PerPkg": "1",
@@ -469,6 +518,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP_EVEN",
"PerPkg": "1",
@@ -478,6 +528,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP_ODD",
"PerPkg": "1",
@@ -487,6 +538,7 @@
},
{
"BriefDescription": "AK Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -496,6 +548,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN",
"PerPkg": "1",
@@ -505,6 +558,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -514,6 +568,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN_ODD",
"PerPkg": "1",
@@ -523,6 +578,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP",
"PerPkg": "1",
@@ -532,6 +588,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP_EVEN",
"PerPkg": "1",
@@ -541,6 +598,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP_ODD",
"PerPkg": "1",
@@ -550,6 +608,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -559,6 +618,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN",
"PerPkg": "1",
@@ -568,6 +628,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -577,6 +638,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN_ODD",
"PerPkg": "1",
@@ -586,6 +648,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP",
"PerPkg": "1",
@@ -595,6 +658,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP_EVEN",
"PerPkg": "1",
@@ -604,6 +668,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP_ODD",
"PerPkg": "1",
@@ -613,6 +678,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.AD",
"PerPkg": "1",
@@ -621,6 +687,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.AK",
"PerPkg": "1",
@@ -629,6 +696,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.BL",
"PerPkg": "1",
@@ -637,6 +705,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.IV",
"PerPkg": "1",
@@ -645,6 +714,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -654,6 +724,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.DN",
"PerPkg": "1",
@@ -663,6 +734,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.DOWN",
"PerPkg": "1",
@@ -672,6 +744,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.UP",
"PerPkg": "1",
@@ -681,6 +754,7 @@
},
{
"BriefDescription": "AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.AD",
"PerPkg": "1",
@@ -689,6 +763,7 @@
},
{
"BriefDescription": "AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.AK",
"PerPkg": "1",
@@ -697,6 +772,7 @@
},
{
"BriefDescription": "BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.BL",
"PerPkg": "1",
@@ -705,6 +781,7 @@
},
{
"BriefDescription": "IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.IV",
"PerPkg": "1",
@@ -713,6 +790,7 @@
},
{
"BriefDescription": "Number of cycles the Cbo is actively throttling traffic onto the Ring in order to limit bounce traffic.",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_C_RING_SRC_THRTL",
"PerPkg": "1",
@@ -720,15 +798,17 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.IPQ",
"PerPkg": "1",
- "PublicDescription": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IPQ is externally startved and therefore we are blocking the IRQ.",
+ "PublicDescription": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IPQ is externally starved and therefore we are blocking the IRQ.",
"UMask": "0x2",
"Unit": "CBOX"
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.IRQ",
"PerPkg": "1",
@@ -738,6 +818,7 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; ISMQ_BID",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.ISMQ_BIDS",
"PerPkg": "1",
@@ -747,6 +828,7 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.PRQ",
"PerPkg": "1",
@@ -756,6 +838,7 @@
},
{
"BriefDescription": "Ingress Allocations; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IPQ",
"PerPkg": "1",
@@ -765,6 +848,7 @@
},
{
"BriefDescription": "Ingress Allocations; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IRQ",
"PerPkg": "1",
@@ -774,6 +858,7 @@
},
{
"BriefDescription": "Ingress Allocations; IRQ Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IRQ_REJ",
"PerPkg": "1",
@@ -783,6 +868,7 @@
},
{
"BriefDescription": "Ingress Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.PRQ",
"PerPkg": "1",
@@ -792,6 +878,7 @@
},
{
"BriefDescription": "Ingress Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.PRQ_REJ",
"PerPkg": "1",
@@ -801,6 +888,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.IPQ",
"PerPkg": "1",
@@ -810,6 +898,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.IRQ",
"PerPkg": "1",
@@ -819,6 +908,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; ISMQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.ISMQ",
"PerPkg": "1",
@@ -828,6 +918,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.PRQ",
"PerPkg": "1",
@@ -837,6 +928,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Address Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.ADDR_CONFLICT",
"PerPkg": "1",
@@ -846,6 +938,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.ANY",
"PerPkg": "1",
@@ -855,6 +948,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.FULL",
"PerPkg": "1",
@@ -864,6 +958,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -873,6 +968,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_C_RxR_IPQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -882,6 +978,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_C_RxR_IPQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -891,6 +988,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Address Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.ADDR_CONFLICT",
"PerPkg": "1",
@@ -900,6 +998,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.ANY",
"PerPkg": "1",
@@ -909,6 +1008,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.FULL",
"PerPkg": "1",
@@ -918,6 +1018,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No IIO Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.IIO_CREDITS",
"PerPkg": "1",
@@ -927,6 +1028,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.NID",
"PerPkg": "1",
@@ -936,6 +1038,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -945,6 +1048,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No RTIDs",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.RTID",
"PerPkg": "1",
@@ -954,6 +1058,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -963,6 +1068,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No BL Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.BL_SBO",
"PerPkg": "1",
@@ -972,6 +1078,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -981,6 +1088,7 @@
},
{
"BriefDescription": "ISMQ Retries; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.ANY",
"PerPkg": "1",
@@ -990,6 +1098,7 @@
},
{
"BriefDescription": "ISMQ Retries; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.FULL",
"PerPkg": "1",
@@ -999,6 +1108,7 @@
},
{
"BriefDescription": "ISMQ Retries; No IIO Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.IIO_CREDITS",
"PerPkg": "1",
@@ -1008,6 +1118,7 @@
},
{
"BriefDescription": "ISMQ Retries",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.NID",
"PerPkg": "1",
@@ -1017,6 +1128,7 @@
},
{
"BriefDescription": "ISMQ Retries; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -1026,6 +1138,7 @@
},
{
"BriefDescription": "ISMQ Retries; No RTIDs",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.RTID",
"PerPkg": "1",
@@ -1035,6 +1148,7 @@
},
{
"BriefDescription": "ISMQ Retries",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.WB_CREDITS",
"PerPkg": "1",
@@ -1044,6 +1158,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -1053,6 +1168,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; No BL Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.BL_SBO",
"PerPkg": "1",
@@ -1062,6 +1178,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -1071,6 +1188,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IPQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IPQ",
"PerPkg": "1",
@@ -1080,6 +1198,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IRQ",
"PerPkg": "1",
@@ -1089,6 +1208,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IRQ Rejected",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IRQ_REJ",
"PerPkg": "1",
@@ -1098,6 +1218,7 @@
},
{
"BriefDescription": "Ingress Occupancy; PRQ Rejects",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.PRQ_REJ",
"PerPkg": "1",
@@ -1107,6 +1228,7 @@
},
{
"BriefDescription": "SBo Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_C_SBO_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -1116,6 +1238,7 @@
},
{
"BriefDescription": "SBo Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_C_SBO_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -1125,6 +1248,7 @@
},
{
"BriefDescription": "SBo Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x3E",
"EventName": "UNC_C_SBO_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -1134,6 +1258,7 @@
},
{
"BriefDescription": "SBo Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x3E",
"EventName": "UNC_C_SBO_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -1143,6 +1268,7 @@
},
{
"BriefDescription": "TOR Inserts; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.ALL",
"PerPkg": "1",
@@ -1152,6 +1278,7 @@
},
{
"BriefDescription": "TOR Inserts; Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.EVICTION",
"PerPkg": "1",
@@ -1161,6 +1288,7 @@
},
{
"BriefDescription": "TOR Inserts; Local Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOCAL",
"PerPkg": "1",
@@ -1170,6 +1298,7 @@
},
{
"BriefDescription": "TOR Inserts; Local Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOCAL_OPCODE",
"PerPkg": "1",
@@ -1179,6 +1308,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Local Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL",
"PerPkg": "1",
@@ -1188,6 +1318,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Local Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE",
"PerPkg": "1",
@@ -1197,6 +1328,7 @@
},
{
"BriefDescription": "TOR Inserts; Miss Opcode Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_OPCODE",
"PerPkg": "1",
@@ -1206,6 +1338,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Remote Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE",
"PerPkg": "1",
@@ -1215,6 +1348,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Remote Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE",
"PerPkg": "1",
@@ -1224,6 +1358,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_ALL",
"PerPkg": "1",
@@ -1233,6 +1368,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_EVICTION",
"PerPkg": "1",
@@ -1242,6 +1378,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Miss All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_MISS_ALL",
"PerPkg": "1",
@@ -1251,6 +1388,7 @@
},
{
"BriefDescription": "TOR Inserts; NID and Opcode Matched Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_MISS_OPCODE",
"PerPkg": "1",
@@ -1260,6 +1398,7 @@
},
{
"BriefDescription": "TOR Inserts; NID and Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_OPCODE",
"PerPkg": "1",
@@ -1269,6 +1408,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_WB",
"PerPkg": "1",
@@ -1278,6 +1418,7 @@
},
{
"BriefDescription": "TOR Inserts; Opcode Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.OPCODE",
"PerPkg": "1",
@@ -1287,6 +1428,7 @@
},
{
"BriefDescription": "TOR Inserts; Remote Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.REMOTE",
"PerPkg": "1",
@@ -1296,6 +1438,7 @@
},
{
"BriefDescription": "TOR Inserts; Remote Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.REMOTE_OPCODE",
"PerPkg": "1",
@@ -1305,6 +1448,7 @@
},
{
"BriefDescription": "TOR Inserts; Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.WB",
"PerPkg": "1",
@@ -1314,6 +1458,7 @@
},
{
"BriefDescription": "TOR Occupancy; Any",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -1323,6 +1468,7 @@
},
{
"BriefDescription": "TOR Occupancy; Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.EVICTION",
"PerPkg": "1",
@@ -1332,6 +1478,7 @@
},
{
"BriefDescription": "Occupancy counter for LLC data reads (demand and L2 prefetch). Derived from unc_c_tor_occupancy.miss_opcode",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LLC_DATA_READ",
"Filter": "filter_opc=0x182",
@@ -1342,6 +1489,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -1351,6 +1499,7 @@
},
{
"BriefDescription": "TOR Occupancy; Local Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOCAL_OPCODE",
"PerPkg": "1",
@@ -1360,6 +1509,7 @@
},
{
"BriefDescription": "TOR Occupancy; Miss All",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_ALL",
"PerPkg": "1",
@@ -1369,6 +1519,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_LOCAL",
"PerPkg": "1",
@@ -1378,6 +1529,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses to Local Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_LOCAL_OPCODE",
"PerPkg": "1",
@@ -1387,6 +1539,7 @@
},
{
"BriefDescription": "TOR Occupancy; Miss Opcode Match",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_OPCODE",
"PerPkg": "1",
@@ -1396,6 +1549,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_REMOTE",
"PerPkg": "1",
@@ -1405,6 +1559,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses to Remote Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_REMOTE_OPCODE",
"PerPkg": "1",
@@ -1414,6 +1569,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_ALL",
"PerPkg": "1",
@@ -1423,6 +1579,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_EVICTION",
"PerPkg": "1",
@@ -1432,6 +1589,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_MISS_ALL",
"PerPkg": "1",
@@ -1441,6 +1599,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID and Opcode Matched Miss",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_MISS_OPCODE",
"PerPkg": "1",
@@ -1450,6 +1609,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID and Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_OPCODE",
"PerPkg": "1",
@@ -1459,6 +1619,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched Writebacks",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_WB",
"PerPkg": "1",
@@ -1468,6 +1629,7 @@
},
{
"BriefDescription": "TOR Occupancy; Opcode Match",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.OPCODE",
"PerPkg": "1",
@@ -1477,6 +1639,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -1486,6 +1649,7 @@
},
{
"BriefDescription": "TOR Occupancy; Remote Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.REMOTE_OPCODE",
"PerPkg": "1",
@@ -1495,6 +1659,7 @@
},
{
"BriefDescription": "TOR Occupancy; Writebacks",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.WB",
"PerPkg": "1",
@@ -1504,6 +1669,7 @@
},
{
"BriefDescription": "Onto AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.AD",
"PerPkg": "1",
@@ -1512,6 +1678,7 @@
},
{
"BriefDescription": "Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.AK",
"PerPkg": "1",
@@ -1520,6 +1687,7 @@
},
{
"BriefDescription": "Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.BL",
"PerPkg": "1",
@@ -1528,6 +1696,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AD_CACHE",
"PerPkg": "1",
@@ -1537,6 +1706,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AD_CORE",
"PerPkg": "1",
@@ -1546,6 +1716,7 @@
},
{
"BriefDescription": "Egress Allocations; AK - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AK_CACHE",
"PerPkg": "1",
@@ -1555,6 +1726,7 @@
},
{
"BriefDescription": "Egress Allocations; AK - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AK_CORE",
"PerPkg": "1",
@@ -1564,6 +1736,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Cacheno",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.BL_CACHE",
"PerPkg": "1",
@@ -1573,6 +1746,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.BL_CORE",
"PerPkg": "1",
@@ -1582,6 +1756,7 @@
},
{
"BriefDescription": "Egress Allocations; IV - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.IV_CACHE",
"PerPkg": "1",
@@ -1591,6 +1766,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AD Ring (to core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.AD_CORE",
"PerPkg": "1",
@@ -1600,6 +1776,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.AK_BOTH",
"PerPkg": "1",
@@ -1609,6 +1786,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.BL_BOTH",
"PerPkg": "1",
@@ -1618,6 +1796,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto IV Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.IV",
"PerPkg": "1",
@@ -1627,6 +1806,7 @@
},
{
"BriefDescription": "BT Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_H_BT_CYCLES_NE",
"PerPkg": "1",
@@ -1635,6 +1815,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_BL_HAZARD",
"PerPkg": "1",
@@ -1644,6 +1825,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Snoop Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_SNP_HAZARD",
"PerPkg": "1",
@@ -1653,6 +1835,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.RSPACKCFLT_HAZARD",
"PerPkg": "1",
@@ -1662,6 +1845,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.WBMDATA_HAZARD",
"PerPkg": "1",
@@ -1671,24 +1855,27 @@
},
{
"BriefDescription": "HA to iMC Bypass; Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_H_BYPASS_IMC.NOT_TAKEN",
"PerPkg": "1",
- "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filted by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass.",
+ "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "HA to iMC Bypass; Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_H_BYPASS_IMC.TAKEN",
"PerPkg": "1",
- "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filted by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the bypass.",
+ "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the bypass.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "uclks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_H_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer like the QPI Agent.",
@@ -1696,6 +1883,7 @@
},
{
"BriefDescription": "Direct2Core Messages Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_H_DIRECT2CORE_COUNT",
"PerPkg": "1",
@@ -1704,6 +1892,7 @@
},
{
"BriefDescription": "Cycles when Direct2Core was Disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_H_DIRECT2CORE_CYCLES_DISABLED",
"PerPkg": "1",
@@ -1712,6 +1901,7 @@
},
{
"BriefDescription": "Number of Reads that had Direct2Core Overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_H_DIRECT2CORE_TXN_OVERRIDE",
"PerPkg": "1",
@@ -1720,6 +1910,7 @@
},
{
"BriefDescription": "Directory Lat Opt Return",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_H_DIRECTORY_LAT_OPT",
"PerPkg": "1",
@@ -1728,6 +1919,7 @@
},
{
"BriefDescription": "Directory Lookups; Snoop Not Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_H_DIRECTORY_LOOKUP.NO_SNP",
"PerPkg": "1",
@@ -1737,6 +1929,7 @@
},
{
"BriefDescription": "Directory Lookups; Snoop Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_H_DIRECTORY_LOOKUP.SNP",
"PerPkg": "1",
@@ -1746,6 +1939,7 @@
},
{
"BriefDescription": "Directory Updates; Any Directory Update",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.ANY",
"PerPkg": "1",
@@ -1755,6 +1949,7 @@
},
{
"BriefDescription": "Directory Updates; Directory Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.CLEAR",
"PerPkg": "1",
@@ -1764,6 +1959,7 @@
},
{
"BriefDescription": "Directory Updates; Directory Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.SET",
"PerPkg": "1",
@@ -1773,6 +1969,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1781,6 +1978,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ALL",
"PerPkg": "1",
@@ -1789,6 +1987,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ALLOCS",
"PerPkg": "1",
@@ -1797,6 +1996,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.EVICTS",
"PerPkg": "1",
@@ -1805,6 +2005,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.HOM",
"PerPkg": "1",
@@ -1813,6 +2014,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Invalidations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.INVALS",
"PerPkg": "1",
@@ -1821,6 +2023,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.READ_OR_INVITOE",
"PerPkg": "1",
@@ -1829,6 +2032,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSP",
"PerPkg": "1",
@@ -1837,6 +2041,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -1845,6 +2050,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -1853,6 +2059,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDS",
"PerPkg": "1",
@@ -1861,6 +2068,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.WBMTOE_OR_S",
"PerPkg": "1",
@@ -1869,6 +2077,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.WBMTOI",
"PerPkg": "1",
@@ -1877,6 +2086,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1885,6 +2095,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.ALL",
"PerPkg": "1",
@@ -1893,6 +2104,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.HOM",
"PerPkg": "1",
@@ -1901,6 +2113,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.READ_OR_INVITOE",
"PerPkg": "1",
@@ -1909,6 +2122,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSP",
"PerPkg": "1",
@@ -1917,6 +2131,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -1925,6 +2140,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -1933,6 +2149,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDS",
"PerPkg": "1",
@@ -1941,6 +2158,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.WBMTOE_OR_S",
"PerPkg": "1",
@@ -1949,6 +2167,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.WBMTOI",
"PerPkg": "1",
@@ -1957,6 +2176,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1965,6 +2185,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ALL",
"PerPkg": "1",
@@ -1973,6 +2194,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ALLOCS",
"PerPkg": "1",
@@ -1981,6 +2203,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.HOM",
"PerPkg": "1",
@@ -1989,6 +2212,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; Invalidations",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.INVALS",
"PerPkg": "1",
@@ -1997,6 +2221,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.READ_OR_INVITOE",
"PerPkg": "1",
@@ -2005,6 +2230,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSP",
"PerPkg": "1",
@@ -2013,6 +2239,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -2021,6 +2248,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -2029,6 +2257,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDS",
"PerPkg": "1",
@@ -2037,6 +2266,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.WBMTOE_OR_S",
"PerPkg": "1",
@@ -2045,6 +2275,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.WBMTOI",
"PerPkg": "1",
@@ -2053,6 +2284,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; AD to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI0",
"PerPkg": "1",
@@ -2062,6 +2294,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; AD to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI1",
"PerPkg": "1",
@@ -2071,6 +2304,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI2",
"PerPkg": "1",
@@ -2080,6 +2314,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI0",
"PerPkg": "1",
@@ -2089,6 +2324,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI1",
"PerPkg": "1",
@@ -2098,6 +2334,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI2",
"PerPkg": "1",
@@ -2107,6 +2344,7 @@
},
{
"BriefDescription": "HA to iMC Normal Priority Reads Issued; Normal Priority",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_H_IMC_READS.NORMAL",
"PerPkg": "1",
@@ -2116,6 +2354,7 @@
},
{
"BriefDescription": "Retry Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_H_IMC_RETRY",
"PerPkg": "1",
@@ -2123,6 +2362,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; All Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.ALL",
"PerPkg": "1",
@@ -2132,6 +2372,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; Full Line Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.FULL",
"PerPkg": "1",
@@ -2141,6 +2382,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; ISOCH Full Line",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.FULL_ISOCH",
"PerPkg": "1",
@@ -2150,6 +2392,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; Partial Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.PARTIAL",
"PerPkg": "1",
@@ -2159,6 +2402,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; ISOCH Partial",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.PARTIAL_ISOCH",
"PerPkg": "1",
@@ -2168,6 +2412,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_H_IOT_BACKPRESSURE.HUB",
"PerPkg": "1",
@@ -2176,6 +2421,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_H_IOT_BACKPRESSURE.SAT",
"PerPkg": "1",
@@ -2184,6 +2430,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x64",
"EventName": "UNC_H_IOT_CTS_EAST_LO.CTS0",
"PerPkg": "1",
@@ -2193,6 +2440,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x64",
"EventName": "UNC_H_IOT_CTS_EAST_LO.CTS1",
"PerPkg": "1",
@@ -2202,6 +2450,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0x65",
"EventName": "UNC_H_IOT_CTS_HI.CTS2",
"PerPkg": "1",
@@ -2211,6 +2460,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0x65",
"EventName": "UNC_H_IOT_CTS_HI.CTS3",
"PerPkg": "1",
@@ -2220,6 +2470,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_H_IOT_CTS_WEST_LO.CTS0",
"PerPkg": "1",
@@ -2229,6 +2480,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_H_IOT_CTS_WEST_LO.CTS1",
"PerPkg": "1",
@@ -2238,6 +2490,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Cancelled",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.CANCELLED",
"PerPkg": "1",
@@ -2247,6 +2500,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Local InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2256,6 +2510,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Local Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.READS_LOCAL",
"PerPkg": "1",
@@ -2265,6 +2520,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Reads Local - Useful",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.READS_LOCAL_USEFUL",
"PerPkg": "1",
@@ -2274,6 +2530,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.REMOTE",
"PerPkg": "1",
@@ -2283,6 +2540,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Remote - Useful",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.REMOTE_USEFUL",
"PerPkg": "1",
@@ -2292,6 +2550,7 @@
},
{
"BriefDescription": "OSB Early Data Return; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.ALL",
"PerPkg": "1",
@@ -2301,6 +2560,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Local I",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_LOCAL_I",
"PerPkg": "1",
@@ -2310,6 +2570,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Local S",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_LOCAL_S",
"PerPkg": "1",
@@ -2319,6 +2580,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Remote I",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_REMOTE_I",
"PerPkg": "1",
@@ -2328,6 +2590,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Remote S",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_REMOTE_S",
"PerPkg": "1",
@@ -2337,6 +2600,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local InvItoEs",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2346,6 +2610,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote InvItoEs",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.INVITOE_REMOTE",
"PerPkg": "1",
@@ -2355,6 +2620,7 @@
},
{
"BriefDescription": "Read and Write Requests; Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS",
"PerPkg": "1",
@@ -2364,6 +2630,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS_LOCAL",
"PerPkg": "1",
@@ -2373,6 +2640,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS_REMOTE",
"PerPkg": "1",
@@ -2382,6 +2650,7 @@
},
{
"BriefDescription": "Read and Write Requests; Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES",
"PerPkg": "1",
@@ -2391,6 +2660,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES_LOCAL",
"PerPkg": "1",
@@ -2400,6 +2670,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES_REMOTE",
"PerPkg": "1",
@@ -2409,6 +2680,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -2418,6 +2690,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2427,6 +2700,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -2436,6 +2710,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW",
"PerPkg": "1",
@@ -2445,6 +2720,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -2454,6 +2730,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -2463,6 +2740,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -2472,6 +2750,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -2481,6 +2760,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2490,6 +2770,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -2499,6 +2780,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW",
"PerPkg": "1",
@@ -2508,6 +2790,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -2517,6 +2800,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -2526,6 +2810,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -2535,6 +2820,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -2544,6 +2830,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2553,6 +2840,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -2562,6 +2850,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW",
"PerPkg": "1",
@@ -2571,6 +2860,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -2580,6 +2870,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -2589,42 +2880,47 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN0",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN2",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
"UMask": "0x4",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN3",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
"UMask": "0x8",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0",
"PerPkg": "1",
@@ -2634,6 +2930,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1",
"PerPkg": "1",
@@ -2643,6 +2940,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2",
"PerPkg": "1",
@@ -2652,6 +2950,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN3",
"PerPkg": "1",
@@ -2661,6 +2960,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_H_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2670,6 +2970,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_H_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2679,6 +2980,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_H_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2688,6 +2990,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_H_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2697,6 +3000,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_H_SBO1_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2706,6 +3010,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_H_SBO1_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2715,6 +3020,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_H_SBO1_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2724,6 +3030,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_H_SBO1_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2733,6 +3040,7 @@
},
{
"BriefDescription": "Data beat the Snoop Responses; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_H_SNOOPS_RSP_AFTER_DATA.LOCAL",
"PerPkg": "1",
@@ -2742,6 +3050,7 @@
},
{
"BriefDescription": "Data beat the Snoop Responses; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_H_SNOOPS_RSP_AFTER_DATA.REMOTE",
"PerPkg": "1",
@@ -2751,6 +3060,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -2760,6 +3070,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.LOCAL",
"PerPkg": "1",
@@ -2769,6 +3080,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.REMOTE",
"PerPkg": "1",
@@ -2778,6 +3090,7 @@
},
{
"BriefDescription": "Tracker Snoops Outstanding Accumulator; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_H_SNOOP_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -2787,6 +3100,7 @@
},
{
"BriefDescription": "Tracker Snoops Outstanding Accumulator; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_H_SNOOP_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -2796,6 +3110,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RSPCNFLCT*",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPCNFLCT",
"PerPkg": "1",
@@ -2805,6 +3120,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPI",
"PerPkg": "1",
@@ -2814,6 +3130,7 @@
},
{
"BriefDescription": "M line forwarded from remote cache with no writeback to memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPIFWD",
"PerPkg": "1",
@@ -2824,6 +3141,7 @@
},
{
"BriefDescription": "Shared line response from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPS",
"PerPkg": "1",
@@ -2834,6 +3152,7 @@
},
{
"BriefDescription": "Shared line forwarded from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPSFWD",
"PerPkg": "1",
@@ -2844,6 +3163,7 @@
},
{
"BriefDescription": "M line forwarded from remote cache along with writeback to memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSP_FWD_WB",
"PerPkg": "1",
@@ -2854,6 +3174,7 @@
},
{
"BriefDescription": "Snoop Responses Received; Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSP_WB",
"PerPkg": "1",
@@ -2863,6 +3184,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Other",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.OTHER",
"PerPkg": "1",
@@ -2872,6 +3194,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspCnflct",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPCNFLCT",
"PerPkg": "1",
@@ -2881,6 +3204,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPI",
"PerPkg": "1",
@@ -2890,6 +3214,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPIFWD",
"PerPkg": "1",
@@ -2899,6 +3224,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPS",
"PerPkg": "1",
@@ -2908,6 +3234,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPSFWD",
"PerPkg": "1",
@@ -2917,6 +3244,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*FWD*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPxFWDxWB",
"PerPkg": "1",
@@ -2926,6 +3254,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPxWB",
"PerPkg": "1",
@@ -2935,6 +3264,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -2944,6 +3274,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -2953,6 +3284,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -2962,6 +3294,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -2971,6 +3304,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION0",
"PerPkg": "1",
@@ -2980,6 +3314,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION1",
"PerPkg": "1",
@@ -2989,6 +3324,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION2",
"PerPkg": "1",
@@ -2998,6 +3334,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION3",
"PerPkg": "1",
@@ -3007,6 +3344,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION4",
"PerPkg": "1",
@@ -3016,6 +3354,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION5",
"PerPkg": "1",
@@ -3025,6 +3364,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION6",
"PerPkg": "1",
@@ -3034,6 +3374,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION7",
"PerPkg": "1",
@@ -3043,6 +3384,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION10",
"PerPkg": "1",
@@ -3052,6 +3394,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 11",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION11",
"PerPkg": "1",
@@ -3061,6 +3404,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION8",
"PerPkg": "1",
@@ -3070,6 +3414,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION9",
"PerPkg": "1",
@@ -3079,6 +3424,7 @@
},
{
"BriefDescription": "Tracker Cycles Full; Cycles Completely Used",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_H_TRACKER_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3088,6 +3434,7 @@
},
{
"BriefDescription": "Tracker Cycles Full; Cycles GP Completely Used",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_H_TRACKER_CYCLES_FULL.GP",
"PerPkg": "1",
@@ -3097,33 +3444,37 @@
},
{
"BriefDescription": "Tracker Cycles Not Empty; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.ALL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets.",
"UMask": "0x3",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.LOCAL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.REMOTE",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets.",
"UMask": "0x2",
"Unit": "HA"
},
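The tracker descriptions above note that the not-empty cycle counts pair with the occupancy accumulator to give an average queue occupancy. A minimal sketch of that arithmetic follows; the counts are made up and the particular pairing of umasks is an assumption for the sake of the example.

    # Average tracker depth from the per-cycle occupancy accumulator.
    occ_reads_local = 4.8e9   # UNC_H_TRACKER_OCCUPANCY.READS_LOCAL (accumulator)
    ne_local        = 1.6e9   # UNC_H_TRACKER_CYCLES_NE.LOCAL
    ha_cycles       = 2.5e9   # an HA clock count over the same interval

    avg_depth_while_busy = occ_reads_local / ne_local    # average entries when the pool is non-empty
    avg_depth_overall    = occ_reads_local / ha_cycles   # average entries over all cycles

    print(f"avg tracker depth while non-empty: {avg_depth_while_busy:.1f}")
    print(f"avg tracker depth overall        : {avg_depth_overall:.1f}")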
{
"BriefDescription": "Tracker Occupancy Accumulator; Local InvItoE Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.INVITOE_LOCAL",
"PerPkg": "1",
@@ -3133,6 +3484,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote InvItoE Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.INVITOE_REMOTE",
"PerPkg": "1",
@@ -3142,6 +3494,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local Read Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.READS_LOCAL",
"PerPkg": "1",
@@ -3151,6 +3504,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote Read Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.READS_REMOTE",
"PerPkg": "1",
@@ -3160,6 +3514,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.WRITES_LOCAL",
"PerPkg": "1",
@@ -3169,6 +3524,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.WRITES_REMOTE",
"PerPkg": "1",
@@ -3178,6 +3534,7 @@
},
{
"BriefDescription": "Data Pending Occupancy Accumulator; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_H_TRACKER_PENDING_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -3187,6 +3544,7 @@
},
{
"BriefDescription": "Data Pending Occupancy Accumulator; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_H_TRACKER_PENDING_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -3196,6 +3554,7 @@
},
{
"BriefDescription": "Outbound NDR Ring Transactions; Non-data Responses",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_H_TxR_AD.HOM",
"PerPkg": "1",
@@ -3205,6 +3564,7 @@
},
{
"BriefDescription": "AD Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3214,6 +3574,7 @@
},
{
"BriefDescription": "AD Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3223,6 +3584,7 @@
},
{
"BriefDescription": "AD Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3232,6 +3594,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3241,6 +3604,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3250,6 +3614,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3259,6 +3624,7 @@
},
{
"BriefDescription": "AD Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.ALL",
"PerPkg": "1",
@@ -3268,6 +3634,7 @@
},
{
"BriefDescription": "AD Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3277,6 +3644,7 @@
},
{
"BriefDescription": "AD Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3286,6 +3654,7 @@
},
{
"BriefDescription": "AK Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3295,6 +3664,7 @@
},
{
"BriefDescription": "AK Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3304,6 +3674,7 @@
},
{
"BriefDescription": "AK Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3313,6 +3684,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3322,6 +3694,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3331,6 +3704,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3340,6 +3714,7 @@
},
{
"BriefDescription": "AK Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.ALL",
"PerPkg": "1",
@@ -3349,6 +3724,7 @@
},
{
"BriefDescription": "AK Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3358,6 +3734,7 @@
},
{
"BriefDescription": "AK Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3367,6 +3744,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_CACHE",
"PerPkg": "1",
@@ -3376,6 +3754,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Core",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_CORE",
"PerPkg": "1",
@@ -3385,6 +3764,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to QPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_QPI",
"PerPkg": "1",
@@ -3394,6 +3774,7 @@
},
{
"BriefDescription": "BL Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3403,6 +3784,7 @@
},
{
"BriefDescription": "BL Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3412,6 +3794,7 @@
},
{
"BriefDescription": "BL Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3421,6 +3804,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3430,6 +3814,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3439,6 +3824,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3448,6 +3834,7 @@
},
{
"BriefDescription": "BL Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.ALL",
"PerPkg": "1",
@@ -3457,6 +3844,7 @@
},
{
"BriefDescription": "BL Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3466,6 +3854,7 @@
},
{
"BriefDescription": "BL Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3475,6 +3864,7 @@
},
{
"BriefDescription": "Injection Starvation; For AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_H_TxR_STARVED.AK",
"PerPkg": "1",
@@ -3484,6 +3874,7 @@
},
{
"BriefDescription": "Injection Starvation; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_H_TxR_STARVED.BL",
"PerPkg": "1",
@@ -3493,42 +3884,47 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN0",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN2",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
"UMask": "0x4",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN3",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
"UMask": "0x8",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN0",
"PerPkg": "1",
@@ -3538,6 +3934,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN1",
"PerPkg": "1",
@@ -3547,6 +3944,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN2",
"PerPkg": "1",
@@ -3556,6 +3954,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN3",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
index b9fb216bee16..56d729c8e946 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
@@ -1,24 +1,7 @@
[
{
- "BriefDescription": "Number of non data (control) flits transmitted . Derived from unc_q_txl_flits_g0.non_data",
- "EventName": "QPI_CTL_BANDWIDTH_TX",
- "PerPkg": "1",
- "PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-data flits transmitted across QPI. This basically tracks the protocol overhead on the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This includes the header flits for data packets.",
- "ScaleUnit": "8Bytes",
- "UMask": "0x4",
- "Unit": "QPI"
- },
- {
- "BriefDescription": "Number of data flits transmitted . Derived from unc_q_txl_flits_g0.data",
- "EventName": "QPI_DATA_BANDWIDTH_TX",
- "PerPkg": "1",
- "PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flits transmitted over QPI. Each flit contains 64b of data. This includes both DRS and NCB data flits (coherent and non-coherent). This can be used to calculate the data bandwidth of the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This does not include the header flits that go in data packets.",
- "ScaleUnit": "8Bytes",
- "UMask": "0x2",
- "Unit": "QPI"
- },
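The two derived events removed here spelled out the flit-to-bandwidth conversion (each data flit carries 8 bytes of payload in L0, 4 bytes in L0p, and every flit is 80 bits on the wire, hence the "8Bytes" ScaleUnit). A minimal sketch of that arithmetic, purely illustrative and with invented counts:

    # Convert raw QPI flit counts to bandwidth figures, per the removed descriptions.
    data_flits  = 3.0e9    # data-flit count over the sample window (L0 assumed)
    total_flits = 4.2e9    # all non-idle flits over the same window
    seconds     = 1.0      # length of the sample window

    data_bw_bytes = data_flits * 8     # "data flits * 8B / time" (use 4B for L0p)
    link_bits     = total_flits * 80   # raw link utilisation: flits * 80b / time

    print(f"data bandwidth : {data_bw_bytes / seconds / 1e9:.2f} GB/s")
    print(f"link traffic   : {link_bits / seconds / 1e9:.2f} Gbit/s")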
- {
"BriefDescription": "Total Write Cache Occupancy; Any Source",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.ANY",
"PerPkg": "1",
@@ -28,6 +11,7 @@
},
{
"BriefDescription": "Total Write Cache Occupancy; Select Source",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.SOURCE",
"PerPkg": "1",
@@ -37,6 +21,7 @@
},
{
"BriefDescription": "Clocks in the IRP",
+ "Counter": "0,1",
"EventName": "UNC_I_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Number of clocks in the IRP.",
@@ -44,78 +29,87 @@
},
{
"BriefDescription": "Coherent Ops; CLFlush",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.CLFLUSH",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x80",
"Unit": "IRP"
},
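Besides the "servied" -> "serviced" wording fixes, the hunks in this file add the same "Counter" field as above, here "0,1" rather than "0,1,2,3", presumably because this box exposes fewer generic counters. One quick, illustrative way to eyeball the constraints after the patch is applied (the path comes from the diff header; the grouping itself is not part of the change):

    import json
    from collections import Counter as Tally   # avoid clashing with the JSON "Counter" key
    from pathlib import Path

    # Adjust to wherever the kernel tree is checked out.
    path = Path("tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json")
    events = json.loads(path.read_text())

    # Tally how many events are restricted to each counter set ("0,1", "0,1,2,3", ...).
    tally = Tally(ev.get("Counter", "<missing>") for ev in events)
    for constraint, count in sorted(tally.items()):
        print(f'Counter "{constraint}": {count} events')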
{
"BriefDescription": "Coherent Ops; CRd",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.CRD",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; DRd",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.DRD",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIDCAHin5t",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCIDCAHINT",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIRdCur",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCIRDCUR",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIItoM",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCITOM",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; RFO",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.RFO",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; WbMtoI",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.WBMTOI",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Atomic Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_ATOMIC_INSERT",
"PerPkg": "1",
@@ -125,6 +119,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Read Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_RD_INSERT",
"PerPkg": "1",
@@ -134,6 +129,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Write Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_WR_INSERT",
"PerPkg": "1",
@@ -143,6 +139,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Rejects",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_REJ",
"PerPkg": "1",
@@ -152,6 +149,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Requests",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_REQ",
"PerPkg": "1",
@@ -161,6 +159,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Transfers From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_XFER",
"PerPkg": "1",
@@ -170,6 +169,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Prefetch Ack Hints From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.PF_ACK_HINT",
"PerPkg": "1",
@@ -179,6 +179,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Prefetch TimeOut",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.PF_TIMEOUT",
"PerPkg": "1",
@@ -188,6 +189,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Data Throttled",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.DATA_THROTTLE",
"PerPkg": "1",
@@ -197,6 +199,7 @@
},
{
"BriefDescription": "Misc Events - Set 1",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.LOST_FWD",
"PerPkg": "1",
@@ -206,6 +209,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Invalid",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
"PerPkg": "1",
@@ -215,6 +219,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Valid",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SEC_RCVD_VLD",
"PerPkg": "1",
@@ -224,6 +229,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of E Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_E",
"PerPkg": "1",
@@ -233,6 +239,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of I Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_I",
"PerPkg": "1",
@@ -242,6 +249,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of M Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_M",
"PerPkg": "1",
@@ -251,6 +259,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of S Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_S",
"PerPkg": "1",
@@ -260,6 +269,7 @@
},
{
"BriefDescription": "AK Ingress Occupancy",
+ "Counter": "0,1",
"EventCode": "0xA",
"EventName": "UNC_I_RxR_AK_INSERTS",
"PerPkg": "1",
@@ -268,6 +278,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
@@ -276,6 +287,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - DRS",
+ "Counter": "0,1",
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
@@ -284,6 +296,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_DRS_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
@@ -292,6 +305,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
@@ -300,6 +314,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - NCB",
+ "Counter": "0,1",
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
@@ -308,6 +323,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCB_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
@@ -316,6 +332,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
@@ -324,6 +341,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - NCS",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
@@ -332,6 +350,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCS_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
@@ -340,6 +359,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit E or S",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_ES",
"PerPkg": "1",
@@ -349,6 +369,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit I",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_I",
"PerPkg": "1",
@@ -358,6 +379,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit M",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_M",
"PerPkg": "1",
@@ -367,6 +389,7 @@
},
{
"BriefDescription": "Snoop Responses; Miss",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.MISS",
"PerPkg": "1",
@@ -376,6 +399,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpCode",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPCODE",
"PerPkg": "1",
@@ -385,6 +409,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpData",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPDATA",
"PerPkg": "1",
@@ -394,6 +419,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpInv",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPINV",
"PerPkg": "1",
@@ -403,6 +429,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Atomic",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.ATOMIC",
"PerPkg": "1",
@@ -412,6 +439,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Other",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.OTHER",
"PerPkg": "1",
@@ -421,6 +449,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Read Prefetches",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.RD_PREF",
"PerPkg": "1",
@@ -430,6 +459,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Reads",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.READS",
"PerPkg": "1",
@@ -439,6 +469,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Writes",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.WRITES",
"PerPkg": "1",
@@ -448,6 +479,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Write Prefetches",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.WR_PREF",
"PerPkg": "1",
@@ -457,6 +489,7 @@
},
{
"BriefDescription": "No AD Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x18",
"EventName": "UNC_I_TxR_AD_STALL_CREDIT_CYCLES",
"PerPkg": "1",
@@ -465,6 +498,7 @@
},
{
"BriefDescription": "No BL Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x19",
"EventName": "UNC_I_TxR_BL_STALL_CREDIT_CYCLES",
"PerPkg": "1",
@@ -473,6 +507,7 @@
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xE",
"EventName": "UNC_I_TxR_DATA_INSERTS_NCB",
"PerPkg": "1",
@@ -481,6 +516,7 @@
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xF",
"EventName": "UNC_I_TxR_DATA_INSERTS_NCS",
"PerPkg": "1",
@@ -489,6 +525,7 @@
},
{
"BriefDescription": "Outbound Request Queue Occupancy",
+ "Counter": "0,1",
"EventCode": "0xD",
"EventName": "UNC_I_TxR_REQUEST_OCCUPANCY",
"PerPkg": "1",
@@ -497,6 +534,7 @@
},
{
"BriefDescription": "Number of qfclks",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_Q_CLOCKTICKS",
"PerPkg": "1",
@@ -505,6 +543,7 @@
},
{
"BriefDescription": "Count of CTO Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_Q_CTO_COUNT",
"PerPkg": "1",
@@ -513,6 +552,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS",
"PerPkg": "1",
@@ -522,6 +562,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_MISS",
"PerPkg": "1",
@@ -531,6 +572,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress and RBT Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT",
"PerPkg": "1",
@@ -540,6 +582,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss, Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT_MISS",
"PerPkg": "1",
@@ -549,6 +592,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - RBT Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_MISS",
"PerPkg": "1",
@@ -558,6 +602,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - RBT Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_RBT_HIT",
"PerPkg": "1",
@@ -567,6 +612,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - RBT Miss and Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_RBT_MISS",
"PerPkg": "1",
@@ -576,6 +622,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Success",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.SUCCESS_RBT_HIT",
"PerPkg": "1",
@@ -585,6 +632,7 @@
},
{
"BriefDescription": "Cycles in L1",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_Q_L1_POWER_CYCLES",
"PerPkg": "1",
@@ -593,6 +641,7 @@
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_Q_RxL0P_POWER_CYCLES",
"PerPkg": "1",
@@ -601,6 +650,7 @@
},
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_Q_RxL0_POWER_CYCLES",
"PerPkg": "1",
@@ -609,6 +659,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Bypassed",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_Q_RxL_BYPASSED",
"PerPkg": "1",
@@ -617,6 +668,7 @@
},
{
"BriefDescription": "CRC Errors Detected; LinkInit",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_CRC_ERRORS.LINK_INIT",
"PerPkg": "1",
@@ -626,6 +678,7 @@
},
{
"BriefDescription": "UNC_Q_RxL_CRC_ERRORS.NORMAL_OP",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_CRC_ERRORS.NORMAL_OP",
"PerPkg": "1",
@@ -634,6 +687,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.DRS",
"PerPkg": "1",
@@ -643,6 +697,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.HOM",
"PerPkg": "1",
@@ -652,6 +707,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCB",
"PerPkg": "1",
@@ -661,6 +717,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCS",
"PerPkg": "1",
@@ -670,6 +727,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.NDR",
"PerPkg": "1",
@@ -679,6 +737,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.SNP",
"PerPkg": "1",
@@ -688,6 +747,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.DRS",
"PerPkg": "1",
@@ -697,6 +757,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.HOM",
"PerPkg": "1",
@@ -706,6 +767,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.NCB",
"PerPkg": "1",
@@ -715,6 +777,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.NCS",
"PerPkg": "1",
@@ -724,6 +787,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.NDR",
"PerPkg": "1",
@@ -733,6 +797,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.SNP",
"PerPkg": "1",
@@ -742,6 +807,7 @@
},
{
"BriefDescription": "VNA Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VNA",
"PerPkg": "1",
@@ -750,6 +816,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_Q_RxL_CYCLES_NE",
"PerPkg": "1",
@@ -758,6 +825,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_Q_RxL_CYCLES_NE_DRS.VN0",
"PerPkg": "1",
@@ -767,6 +835,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_Q_RxL_CYCLES_NE_DRS.VN1",
"PerPkg": "1",
@@ -776,6 +845,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_Q_RxL_CYCLES_NE_HOM.VN0",
"PerPkg": "1",
@@ -785,6 +855,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_Q_RxL_CYCLES_NE_HOM.VN1",
"PerPkg": "1",
@@ -794,6 +865,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCB.VN0",
"PerPkg": "1",
@@ -803,6 +875,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCB.VN1",
"PerPkg": "1",
@@ -812,6 +885,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCS.VN0",
"PerPkg": "1",
@@ -821,6 +895,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCS.VN1",
"PerPkg": "1",
@@ -830,6 +905,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_Q_RxL_CYCLES_NE_NDR.VN0",
"PerPkg": "1",
@@ -839,6 +915,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_Q_RxL_CYCLES_NE_NDR.VN1",
"PerPkg": "1",
@@ -848,6 +925,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_RxL_CYCLES_NE_SNP.VN0",
"PerPkg": "1",
@@ -857,6 +935,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_RxL_CYCLES_NE_SNP.VN1",
"PerPkg": "1",
@@ -866,6 +945,7 @@
},
{
"BriefDescription": "Flits Received - Group 0; Idle and Null Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_RxL_FLITS_G0.IDLE",
"PerPkg": "1",
@@ -875,6 +955,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; DRS Flits (both Header and Data)",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.DRS",
"PerPkg": "1",
@@ -884,6 +965,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; DRS Data Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.DRS_DATA",
"PerPkg": "1",
@@ -893,6 +975,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; DRS Header Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.DRS_NONDATA",
"PerPkg": "1",
@@ -902,6 +985,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; HOM Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.HOM",
"PerPkg": "1",
@@ -911,6 +995,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; HOM Non-Request Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.HOM_NONREQ",
"PerPkg": "1",
@@ -920,6 +1005,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; HOM Request Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.HOM_REQ",
"PerPkg": "1",
@@ -929,6 +1015,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; SNP Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.SNP",
"PerPkg": "1",
@@ -938,6 +1025,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCB",
"PerPkg": "1",
@@ -947,6 +1035,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent data Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCB_DATA",
"PerPkg": "1",
@@ -956,6 +1045,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent non-data Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCB_NONDATA",
"PerPkg": "1",
@@ -965,6 +1055,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent standard Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCS",
"PerPkg": "1",
@@ -974,6 +1065,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Data Response Rx Flits - AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NDR_AD",
"PerPkg": "1",
@@ -983,6 +1075,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Data Response Rx Flits - AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NDR_AK",
"PerPkg": "1",
@@ -992,6 +1085,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_Q_RxL_INSERTS",
"PerPkg": "1",
@@ -1000,6 +1094,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_Q_RxL_INSERTS_DRS.VN0",
"PerPkg": "1",
@@ -1009,6 +1104,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_Q_RxL_INSERTS_DRS.VN1",
"PerPkg": "1",
@@ -1018,6 +1114,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_Q_RxL_INSERTS_HOM.VN0",
"PerPkg": "1",
@@ -1027,6 +1124,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_Q_RxL_INSERTS_HOM.VN1",
"PerPkg": "1",
@@ -1036,6 +1134,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_Q_RxL_INSERTS_NCB.VN0",
"PerPkg": "1",
@@ -1045,6 +1144,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_Q_RxL_INSERTS_NCB.VN1",
"PerPkg": "1",
@@ -1054,6 +1154,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_Q_RxL_INSERTS_NCS.VN0",
"PerPkg": "1",
@@ -1063,6 +1164,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_Q_RxL_INSERTS_NCS.VN1",
"PerPkg": "1",
@@ -1072,6 +1174,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xE",
"EventName": "UNC_Q_RxL_INSERTS_NDR.VN0",
"PerPkg": "1",
@@ -1081,6 +1184,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xE",
"EventName": "UNC_Q_RxL_INSERTS_NDR.VN1",
"PerPkg": "1",
@@ -1090,6 +1194,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_Q_RxL_INSERTS_SNP.VN0",
"PerPkg": "1",
@@ -1099,6 +1204,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_Q_RxL_INSERTS_SNP.VN1",
"PerPkg": "1",
@@ -1108,6 +1214,7 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_Q_RxL_OCCUPANCY",
"PerPkg": "1",
@@ -1116,6 +1223,7 @@
},
{
"BriefDescription": "RxQ Occupancy - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_Q_RxL_OCCUPANCY_DRS.VN0",
"PerPkg": "1",
@@ -1125,6 +1233,7 @@
},
{
"BriefDescription": "RxQ Occupancy - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_Q_RxL_OCCUPANCY_DRS.VN1",
"PerPkg": "1",
@@ -1134,6 +1243,7 @@
},
{
"BriefDescription": "RxQ Occupancy - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_Q_RxL_OCCUPANCY_HOM.VN0",
"PerPkg": "1",
@@ -1143,6 +1253,7 @@
},
{
"BriefDescription": "RxQ Occupancy - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_Q_RxL_OCCUPANCY_HOM.VN1",
"PerPkg": "1",
@@ -1152,6 +1263,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCB.VN0",
"PerPkg": "1",
@@ -1161,6 +1273,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCB.VN1",
"PerPkg": "1",
@@ -1170,6 +1283,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCS.VN0",
"PerPkg": "1",
@@ -1179,6 +1293,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCS.VN1",
"PerPkg": "1",
@@ -1188,6 +1303,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_Q_RxL_OCCUPANCY_NDR.VN0",
"PerPkg": "1",
@@ -1197,6 +1313,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_Q_RxL_OCCUPANCY_NDR.VN1",
"PerPkg": "1",
@@ -1206,6 +1323,7 @@
},
{
"BriefDescription": "RxQ Occupancy - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_Q_RxL_OCCUPANCY_SNP.VN0",
"PerPkg": "1",
@@ -1215,6 +1333,7 @@
},
{
"BriefDescription": "RxQ Occupancy - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_Q_RxL_OCCUPANCY_SNP.VN1",
"PerPkg": "1",
@@ -1224,6 +1343,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_DRS",
"PerPkg": "1",
@@ -1233,6 +1353,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_HOM",
"PerPkg": "1",
@@ -1242,6 +1363,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_NCB",
"PerPkg": "1",
@@ -1251,6 +1373,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_NCS",
"PerPkg": "1",
@@ -1260,6 +1383,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_NDR",
"PerPkg": "1",
@@ -1269,6 +1393,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_SNP",
"PerPkg": "1",
@@ -1278,6 +1403,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.EGRESS_CREDITS",
"PerPkg": "1",
@@ -1287,6 +1413,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; GV",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.GV",
"PerPkg": "1",
@@ -1296,6 +1423,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_DRS",
"PerPkg": "1",
@@ -1305,6 +1433,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_HOM",
"PerPkg": "1",
@@ -1314,6 +1443,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_NCB",
"PerPkg": "1",
@@ -1323,6 +1453,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_NCS",
"PerPkg": "1",
@@ -1332,6 +1463,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_NDR",
"PerPkg": "1",
@@ -1341,6 +1473,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_SNP",
"PerPkg": "1",
@@ -1350,6 +1483,7 @@
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_Q_TxL0P_POWER_CYCLES",
"PerPkg": "1",
@@ -1358,6 +1492,7 @@
},
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_Q_TxL0_POWER_CYCLES",
"PerPkg": "1",
@@ -1366,6 +1501,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Bypassed",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_Q_TxL_BYPASSED",
"PerPkg": "1",
@@ -1374,6 +1510,7 @@
},
{
"BriefDescription": "Cycles Stalled with no LLR Credits; LLR is almost full",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_TxL_CRC_NO_CREDITS.ALMOST_FULL",
"PerPkg": "1",
@@ -1383,6 +1520,7 @@
},
{
"BriefDescription": "Cycles Stalled with no LLR Credits; LLR is full",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_TxL_CRC_NO_CREDITS.FULL",
"PerPkg": "1",
@@ -1392,6 +1530,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Cycles not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_Q_TxL_CYCLES_NE",
"PerPkg": "1",
@@ -1400,6 +1539,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 0; Data Tx Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G0.DATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flits transmitted over QPI. Each flit contains 64b of data. This includes both DRS and NCB data flits (coherent and non-coherent). This can be used to calculate the data bandwidth of the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This does not include the header flits that go in data packets.",
@@ -1408,6 +1548,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 0; Non-Data protocol Tx Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G0.NON_DATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-data flits transmitted across QPI. This basically tracks the protocol overhead on the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This includes the header flits for data packets.",
@@ -1416,6 +1557,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; DRS Flits (both Header and Data)",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.DRS",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency.",
@@ -1424,6 +1566,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; DRS Data Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.DRS_DATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of data flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits transmitted over the NCB channel which transmits non-coherent data. This includes only the data flits (not the header).",
@@ -1432,6 +1575,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; DRS Header Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.DRS_NONDATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of protocol flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits transmitted over the NCB channel which transmits non-coherent data. This includes only the header flits (not the data). This includes extended headers.",
@@ -1440,6 +1584,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; HOM Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.HOM",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of flits transmitted over QPI on the home channel.",
@@ -1448,6 +1593,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; HOM Non-Request Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.HOM_NONREQ",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of non-request flits transmitted over QPI on the home channel. These are most commonly snoop responses, and this event can be used as a proxy for that.",
@@ -1456,6 +1602,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; HOM Request Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.HOM_REQ",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of data request transmitted over QPI on the home channel. This basically counts the number of remote memory requests transmitted over QPI. In conjunction with the local read count in the Home Agent, one can calculate the number of LLC Misses.",
@@ -1464,6 +1611,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; SNP Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.SNP",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of snoop request flits transmitted over QPI. These requests are contained in the snoop channel. This does not include snoop responses, which are transmitted on the home channel.",
@@ -1472,6 +1620,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent Bypass Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCB",
"PerPkg": "1",
@@ -1481,6 +1630,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent data Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCB_DATA",
"PerPkg": "1",
@@ -1490,6 +1640,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent non-data Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCB_NONDATA",
"PerPkg": "1",
@@ -1499,6 +1650,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent standard Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCS",
"PerPkg": "1",
@@ -1508,6 +1660,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Data Response Tx Flits - AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NDR_AD",
"PerPkg": "1",
@@ -1517,6 +1670,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Data Response Tx Flits - AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NDR_AK",
"PerPkg": "1",
@@ -1526,6 +1680,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_Q_TxL_INSERTS",
"PerPkg": "1",
@@ -1534,6 +1689,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_Q_TxL_OCCUPANCY",
"PerPkg": "1",
@@ -1542,6 +1698,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1551,6 +1708,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1560,6 +1718,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1569,6 +1728,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1578,6 +1738,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1587,6 +1748,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1596,6 +1758,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1605,6 +1768,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1614,6 +1778,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1623,6 +1788,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1632,6 +1798,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1641,6 +1808,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1650,6 +1818,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AK NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_Q_TxR_AK_NDR_CREDIT_ACQUIRED",
"PerPkg": "1",
@@ -1658,6 +1827,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AK NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_Q_TxR_AK_NDR_CREDIT_OCCUPANCY",
"PerPkg": "1",
@@ -1666,6 +1836,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1675,6 +1846,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1684,6 +1856,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - DRS; for Shared VN",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN_SHR",
"PerPkg": "1",
@@ -1693,6 +1866,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1702,6 +1876,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1711,6 +1886,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL DRS; for Shared VN",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN_SHR",
"PerPkg": "1",
@@ -1720,6 +1896,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1729,6 +1906,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1738,6 +1916,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1747,6 +1926,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1756,6 +1936,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1765,6 +1946,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1774,6 +1956,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1783,6 +1966,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1792,6 +1976,7 @@
},
{
"BriefDescription": "VNA Credits Returned",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_Q_VNA_CREDIT_RETURNS",
"PerPkg": "1",
@@ -1800,6 +1985,7 @@
},
{
"BriefDescription": "VNA Credits Pending Return - Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_Q_VNA_CREDIT_RETURN_OCCUPANCY",
"PerPkg": "1",
@@ -1808,6 +1994,7 @@
},
{
"BriefDescription": "Number of uclks in domain",
+ "Counter": "0,1,2",
"EventCode": "0x1",
"EventName": "UNC_R3_CLOCKTICKS",
"PerPkg": "1",
@@ -1816,6 +2003,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO10",
"PerPkg": "1",
@@ -1825,6 +2013,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO11",
"PerPkg": "1",
@@ -1834,6 +2023,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO12",
"PerPkg": "1",
@@ -1843,6 +2033,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO13",
"PerPkg": "1",
@@ -1852,6 +2043,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO14_16",
"PerPkg": "1",
@@ -1861,6 +2053,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO8",
"PerPkg": "1",
@@ -1870,6 +2063,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO9",
"PerPkg": "1",
@@ -1879,6 +2073,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO_15_17",
"PerPkg": "1",
@@ -1888,6 +2083,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO0",
"PerPkg": "1",
@@ -1897,6 +2093,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO1",
"PerPkg": "1",
@@ -1906,6 +2103,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO2",
"PerPkg": "1",
@@ -1915,6 +2113,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO3",
"PerPkg": "1",
@@ -1924,6 +2123,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO4",
"PerPkg": "1",
@@ -1933,6 +2133,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO5",
"PerPkg": "1",
@@ -1942,6 +2143,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO6",
"PerPkg": "1",
@@ -1951,6 +2153,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO7",
"PerPkg": "1",
@@ -1960,6 +2163,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.HA0",
"PerPkg": "1",
@@ -1969,6 +2173,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.HA1",
"PerPkg": "1",
@@ -1978,6 +2183,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.R2_NCB",
"PerPkg": "1",
@@ -1987,6 +2193,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.R2_NCS",
"PerPkg": "1",
@@ -1996,6 +2203,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0xB",
"EventName": "UNC_R3_IOT_BACKPRESSURE.HUB",
"PerPkg": "1",
@@ -2004,6 +2212,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0xB",
"EventName": "UNC_R3_IOT_BACKPRESSURE.SAT",
"PerPkg": "1",
@@ -2012,6 +2221,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0xD",
"EventName": "UNC_R3_IOT_CTS_HI.CTS2",
"PerPkg": "1",
@@ -2021,6 +2231,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0xD",
"EventName": "UNC_R3_IOT_CTS_HI.CTS3",
"PerPkg": "1",
@@ -2030,6 +2241,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0xC",
"EventName": "UNC_R3_IOT_CTS_LO.CTS0",
"PerPkg": "1",
@@ -2039,6 +2251,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0xC",
"EventName": "UNC_R3_IOT_CTS_LO.CTS1",
"PerPkg": "1",
@@ -2048,6 +2261,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_HOM",
"PerPkg": "1",
@@ -2057,6 +2271,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_NDR",
"PerPkg": "1",
@@ -2066,6 +2281,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_SNP",
"PerPkg": "1",
@@ -2075,6 +2291,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2084,6 +2301,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2093,6 +2311,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2102,6 +2321,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2111,6 +2331,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2120,6 +2341,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2129,6 +2351,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2138,6 +2361,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2147,6 +2371,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2156,6 +2381,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2165,6 +2391,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2174,6 +2401,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2183,6 +2411,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_HOM",
"PerPkg": "1",
@@ -2192,6 +2421,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_NDR",
"PerPkg": "1",
@@ -2201,6 +2431,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_SNP",
"PerPkg": "1",
@@ -2210,6 +2441,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2219,6 +2451,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2228,6 +2461,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2237,6 +2471,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2246,6 +2481,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; All",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.ALL",
"PerPkg": "1",
@@ -2255,6 +2491,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -2264,6 +2501,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2273,6 +2511,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -2282,6 +2521,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CW",
"PerPkg": "1",
@@ -2291,6 +2531,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -2300,6 +2541,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -2309,6 +2551,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; All",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -2318,6 +2561,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -2327,6 +2571,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2336,6 +2581,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -2345,6 +2591,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CW",
"PerPkg": "1",
@@ -2354,6 +2601,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -2363,6 +2611,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -2372,6 +2621,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; All",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -2381,6 +2631,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -2390,6 +2641,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2399,6 +2651,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -2408,6 +2661,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CW",
"PerPkg": "1",
@@ -2417,6 +2671,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -2426,6 +2681,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -2435,6 +2691,7 @@
},
{
"BriefDescription": "R3 IV Ring in Use; Any",
+ "Counter": "0,1,2",
"EventCode": "0xA",
"EventName": "UNC_R3_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -2444,6 +2701,7 @@
},
{
"BriefDescription": "R3 IV Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0xA",
"EventName": "UNC_R3_RING_IV_USED.CW",
"PerPkg": "1",
@@ -2453,6 +2711,7 @@
},
{
"BriefDescription": "Ring Stop Starved; AK",
+ "Counter": "0,1,2",
"EventCode": "0xE",
"EventName": "UNC_R3_RING_SINK_STARVED.AK",
"PerPkg": "1",
@@ -2462,6 +2721,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; HOM",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R3_RxR_CYCLES_NE.HOM",
"PerPkg": "1",
@@ -2471,6 +2731,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NDR",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R3_RxR_CYCLES_NE.NDR",
"PerPkg": "1",
@@ -2480,6 +2741,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; SNP",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R3_RxR_CYCLES_NE.SNP",
"PerPkg": "1",
@@ -2489,6 +2751,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; DRS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.DRS",
"PerPkg": "1",
@@ -2498,6 +2761,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; HOM",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.HOM",
"PerPkg": "1",
@@ -2507,6 +2771,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; NCB",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.NCB",
"PerPkg": "1",
@@ -2516,6 +2781,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; NCS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.NCS",
"PerPkg": "1",
@@ -2525,6 +2791,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; NDR",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.NDR",
"PerPkg": "1",
@@ -2534,6 +2801,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; SNP",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.SNP",
"PerPkg": "1",
@@ -2543,6 +2811,7 @@
},
{
"BriefDescription": "Ingress Allocations; DRS",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.DRS",
"PerPkg": "1",
@@ -2552,6 +2821,7 @@
},
{
"BriefDescription": "Ingress Allocations; HOM",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.HOM",
"PerPkg": "1",
@@ -2561,6 +2831,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCB",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.NCB",
"PerPkg": "1",
@@ -2570,6 +2841,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCS",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.NCS",
"PerPkg": "1",
@@ -2579,6 +2851,7 @@
},
{
"BriefDescription": "Ingress Allocations; NDR",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.NDR",
"PerPkg": "1",
@@ -2588,6 +2861,7 @@
},
{
"BriefDescription": "Ingress Allocations; SNP",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.SNP",
"PerPkg": "1",
@@ -2597,6 +2871,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; DRS",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.DRS",
"PerPkg": "1",
@@ -2606,6 +2881,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; HOM",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.HOM",
"PerPkg": "1",
@@ -2615,6 +2891,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; NCB",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.NCB",
"PerPkg": "1",
@@ -2624,6 +2901,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; NCS",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.NCS",
"PerPkg": "1",
@@ -2633,6 +2911,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; NDR",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.NDR",
"PerPkg": "1",
@@ -2642,6 +2921,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; SNP",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.SNP",
"PerPkg": "1",
@@ -2651,6 +2931,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; DRS",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.DRS",
"PerPkg": "1",
@@ -2660,6 +2941,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; HOM",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.HOM",
"PerPkg": "1",
@@ -2669,6 +2951,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; NCB",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.NCB",
"PerPkg": "1",
@@ -2678,6 +2961,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; NCS",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.NCS",
"PerPkg": "1",
@@ -2687,6 +2971,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; NDR",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.NDR",
"PerPkg": "1",
@@ -2696,6 +2981,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; SNP",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.SNP",
"PerPkg": "1",
@@ -2705,6 +2991,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R3_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2714,6 +3001,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R3_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2723,6 +3011,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R3_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2732,6 +3021,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R3_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2741,6 +3031,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For AD Ring",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "UNC_R3_SBO1_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2750,6 +3041,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For BL Ring",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "UNC_R3_SBO1_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2759,6 +3051,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x2B",
"EventName": "UNC_R3_SBO1_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2768,6 +3061,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x2B",
"EventName": "UNC_R3_SBO1_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2777,6 +3071,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -2786,6 +3081,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -2795,6 +3091,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -2804,6 +3101,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -2813,6 +3111,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AD CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.DN_AD",
"PerPkg": "1",
@@ -2822,6 +3121,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.DN_AK",
"PerPkg": "1",
@@ -2831,6 +3131,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.DN_BL",
"PerPkg": "1",
@@ -2840,6 +3141,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.UP_AD",
"PerPkg": "1",
@@ -2849,6 +3151,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.UP_AK",
"PerPkg": "1",
@@ -2858,6 +3161,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.UP_BL",
"PerPkg": "1",
@@ -2867,6 +3171,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.DRS",
"PerPkg": "1",
@@ -2876,6 +3181,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.HOM",
"PerPkg": "1",
@@ -2885,6 +3191,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.NCB",
"PerPkg": "1",
@@ -2894,6 +3201,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.NCS",
"PerPkg": "1",
@@ -2903,6 +3211,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.NDR",
"PerPkg": "1",
@@ -2912,6 +3221,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.SNP",
"PerPkg": "1",
@@ -2921,6 +3231,7 @@
},
{
"BriefDescription": "VN0 Credit Used; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.DRS",
"PerPkg": "1",
@@ -2930,6 +3241,7 @@
},
{
"BriefDescription": "VN0 Credit Used; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.HOM",
"PerPkg": "1",
@@ -2939,6 +3251,7 @@
},
{
"BriefDescription": "VN0 Credit Used; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.NCB",
"PerPkg": "1",
@@ -2948,6 +3261,7 @@
},
{
"BriefDescription": "VN0 Credit Used; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.NCS",
"PerPkg": "1",
@@ -2957,6 +3271,7 @@
},
{
"BriefDescription": "VN0 Credit Used; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.NDR",
"PerPkg": "1",
@@ -2966,6 +3281,7 @@
},
{
"BriefDescription": "VN0 Credit Used; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.SNP",
"PerPkg": "1",
@@ -2975,6 +3291,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.DRS",
"PerPkg": "1",
@@ -2984,6 +3301,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.HOM",
"PerPkg": "1",
@@ -2993,6 +3311,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.NCB",
"PerPkg": "1",
@@ -3002,6 +3321,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.NCS",
"PerPkg": "1",
@@ -3011,6 +3331,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.NDR",
"PerPkg": "1",
@@ -3020,6 +3341,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.SNP",
"PerPkg": "1",
@@ -3029,6 +3351,7 @@
},
{
"BriefDescription": "VN1 Credit Used; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.DRS",
"PerPkg": "1",
@@ -3038,6 +3361,7 @@
},
{
"BriefDescription": "VN1 Credit Used; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.HOM",
"PerPkg": "1",
@@ -3047,6 +3371,7 @@
},
{
"BriefDescription": "VN1 Credit Used; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.NCB",
"PerPkg": "1",
@@ -3056,6 +3381,7 @@
},
{
"BriefDescription": "VN1 Credit Used; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.NCS",
"PerPkg": "1",
@@ -3065,6 +3391,7 @@
},
{
"BriefDescription": "VN1 Credit Used; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.NDR",
"PerPkg": "1",
@@ -3074,6 +3401,7 @@
},
{
"BriefDescription": "VN1 Credit Used; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.SNP",
"PerPkg": "1",
@@ -3083,6 +3411,7 @@
},
{
"BriefDescription": "VNA credit Acquisitions; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R3_VNA_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -3092,6 +3421,7 @@
},
{
"BriefDescription": "VNA credit Acquisitions; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R3_VNA_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -3101,6 +3431,7 @@
},
{
"BriefDescription": "VNA Credit Reject; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.DRS",
"PerPkg": "1",
@@ -3110,6 +3441,7 @@
},
{
"BriefDescription": "VNA Credit Reject; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.HOM",
"PerPkg": "1",
@@ -3119,6 +3451,7 @@
},
{
"BriefDescription": "VNA Credit Reject; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.NCB",
"PerPkg": "1",
@@ -3128,6 +3461,7 @@
},
{
"BriefDescription": "VNA Credit Reject; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.NCS",
"PerPkg": "1",
@@ -3137,6 +3471,7 @@
},
{
"BriefDescription": "VNA Credit Reject; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.NDR",
"PerPkg": "1",
@@ -3146,6 +3481,7 @@
},
{
"BriefDescription": "VNA Credit Reject; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.SNP",
"PerPkg": "1",
@@ -3155,6 +3491,7 @@
},
{
"BriefDescription": "Bounce Control",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_S_BOUNCE_CONTROL",
"PerPkg": "1",
@@ -3162,12 +3499,14 @@
},
{
"BriefDescription": "Uncore Clocks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_S_CLOCKTICKS",
"PerPkg": "1",
"Unit": "SBOX"
},
{
"BriefDescription": "FaST wire asserted",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_S_FAST_ASSERTED",
"PerPkg": "1",
@@ -3176,6 +3515,7 @@
},
{
"BriefDescription": "AD Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.ALL",
"PerPkg": "1",
@@ -3185,6 +3525,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.DOWN",
"PerPkg": "1",
@@ -3194,6 +3535,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Event",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -3203,6 +3545,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.DOWN_ODD",
"PerPkg": "1",
@@ -3212,6 +3555,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.UP",
"PerPkg": "1",
@@ -3221,6 +3565,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.UP_EVEN",
"PerPkg": "1",
@@ -3230,6 +3575,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.UP_ODD",
"PerPkg": "1",
@@ -3239,6 +3585,7 @@
},
{
"BriefDescription": "AK Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -3248,6 +3595,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.DOWN",
"PerPkg": "1",
@@ -3257,6 +3605,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Event",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -3266,6 +3615,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.DOWN_ODD",
"PerPkg": "1",
@@ -3275,6 +3625,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.UP",
"PerPkg": "1",
@@ -3284,6 +3635,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.UP_EVEN",
"PerPkg": "1",
@@ -3293,6 +3645,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.UP_ODD",
"PerPkg": "1",
@@ -3302,6 +3655,7 @@
},
{
"BriefDescription": "BL Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -3311,6 +3665,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.DOWN",
"PerPkg": "1",
@@ -3320,6 +3675,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Event",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -3329,6 +3685,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.DOWN_ODD",
"PerPkg": "1",
@@ -3338,6 +3695,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.UP",
"PerPkg": "1",
@@ -3347,6 +3705,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.UP_EVEN",
"PerPkg": "1",
@@ -3356,6 +3715,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.UP_ODD",
"PerPkg": "1",
@@ -3365,6 +3725,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.AD_CACHE",
"PerPkg": "1",
@@ -3373,6 +3734,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.AK_CORE",
"PerPkg": "1",
@@ -3381,6 +3743,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.BL_CORE",
"PerPkg": "1",
@@ -3389,6 +3752,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.IV_CORE",
"PerPkg": "1",
@@ -3397,6 +3761,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_S_RING_IV_USED.DN",
"PerPkg": "1",
@@ -3406,6 +3771,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_S_RING_IV_USED.UP",
"PerPkg": "1",
@@ -3415,6 +3781,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.AD_CACHE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.AD_CACHE",
"PerPkg": "1",
@@ -3423,6 +3790,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.AK_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.AK_CORE",
"PerPkg": "1",
@@ -3431,6 +3799,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.BL_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.BL_CORE",
"PerPkg": "1",
@@ -3439,6 +3808,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.IV_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.IV_CORE",
"PerPkg": "1",
@@ -3447,6 +3817,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.AD_BNC",
"PerPkg": "1",
@@ -3456,6 +3827,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.AD_CRD",
"PerPkg": "1",
@@ -3465,6 +3837,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.BL_BNC",
"PerPkg": "1",
@@ -3474,6 +3847,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.BL_CRD",
"PerPkg": "1",
@@ -3483,6 +3857,7 @@
},
{
"BriefDescription": "Bypass; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.AD_BNC",
"PerPkg": "1",
@@ -3492,6 +3867,7 @@
},
{
"BriefDescription": "Bypass; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.AD_CRD",
"PerPkg": "1",
@@ -3501,6 +3877,7 @@
},
{
"BriefDescription": "Bypass; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.AK",
"PerPkg": "1",
@@ -3510,6 +3887,7 @@
},
{
"BriefDescription": "Bypass; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.BL_BNC",
"PerPkg": "1",
@@ -3519,6 +3897,7 @@
},
{
"BriefDescription": "Bypass; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.BL_CRD",
"PerPkg": "1",
@@ -3528,6 +3907,7 @@
},
{
"BriefDescription": "Bypass; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.IV",
"PerPkg": "1",
@@ -3537,6 +3917,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.AD_BNC",
"PerPkg": "1",
@@ -3546,6 +3927,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.AD_CRD",
"PerPkg": "1",
@@ -3555,6 +3937,7 @@
},
{
"BriefDescription": "Injection Starvation; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.AK",
"PerPkg": "1",
@@ -3564,6 +3947,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.BL_BNC",
"PerPkg": "1",
@@ -3573,6 +3957,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.BL_CRD",
"PerPkg": "1",
@@ -3582,6 +3967,7 @@
},
{
"BriefDescription": "Injection Starvation; IVF Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.IFV",
"PerPkg": "1",
@@ -3591,6 +3977,7 @@
},
{
"BriefDescription": "Injection Starvation; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.IV",
"PerPkg": "1",
@@ -3600,6 +3987,7 @@
},
{
"BriefDescription": "Ingress Allocations; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.AD_BNC",
"PerPkg": "1",
@@ -3609,6 +3997,7 @@
},
{
"BriefDescription": "Ingress Allocations; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.AD_CRD",
"PerPkg": "1",
@@ -3618,6 +4007,7 @@
},
{
"BriefDescription": "Ingress Allocations; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.AK",
"PerPkg": "1",
@@ -3627,6 +4017,7 @@
},
{
"BriefDescription": "Ingress Allocations; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.BL_BNC",
"PerPkg": "1",
@@ -3636,6 +4027,7 @@
},
{
"BriefDescription": "Ingress Allocations; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.BL_CRD",
"PerPkg": "1",
@@ -3645,6 +4037,7 @@
},
{
"BriefDescription": "Ingress Allocations; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.IV",
"PerPkg": "1",
@@ -3654,6 +4047,7 @@
},
{
"BriefDescription": "Ingress Occupancy; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.AD_BNC",
"PerPkg": "1",
@@ -3663,6 +4057,7 @@
},
{
"BriefDescription": "Ingress Occupancy; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.AD_CRD",
"PerPkg": "1",
@@ -3672,6 +4067,7 @@
},
{
"BriefDescription": "Ingress Occupancy; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.AK",
"PerPkg": "1",
@@ -3681,6 +4077,7 @@
},
{
"BriefDescription": "Ingress Occupancy; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.BL_BNC",
"PerPkg": "1",
@@ -3690,6 +4087,7 @@
},
{
"BriefDescription": "Ingress Occupancy; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.BL_CRD",
"PerPkg": "1",
@@ -3699,6 +4097,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.IV",
"PerPkg": "1",
@@ -3708,6 +4107,7 @@
},
{
"BriefDescription": "UNC_S_TxR_ADS_USED.AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_S_TxR_ADS_USED.AD",
"PerPkg": "1",
@@ -3716,6 +4116,7 @@
},
{
"BriefDescription": "UNC_S_TxR_ADS_USED.AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_S_TxR_ADS_USED.AK",
"PerPkg": "1",
@@ -3724,6 +4125,7 @@
},
{
"BriefDescription": "UNC_S_TxR_ADS_USED.BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_S_TxR_ADS_USED.BL",
"PerPkg": "1",
@@ -3732,6 +4134,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.AD_BNC",
"PerPkg": "1",
@@ -3741,6 +4144,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.AD_CRD",
"PerPkg": "1",
@@ -3750,6 +4154,7 @@
},
{
"BriefDescription": "Egress Allocations; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.AK",
"PerPkg": "1",
@@ -3759,6 +4164,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.BL_BNC",
"PerPkg": "1",
@@ -3768,6 +4174,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.BL_CRD",
"PerPkg": "1",
@@ -3777,6 +4184,7 @@
},
{
"BriefDescription": "Egress Allocations; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.IV",
"PerPkg": "1",
@@ -3786,6 +4194,7 @@
},
{
"BriefDescription": "Egress Occupancy; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.AD_BNC",
"PerPkg": "1",
@@ -3795,6 +4204,7 @@
},
{
"BriefDescription": "Egress Occupancy; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.AD_CRD",
"PerPkg": "1",
@@ -3804,6 +4214,7 @@
},
{
"BriefDescription": "Egress Occupancy; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.AK",
"PerPkg": "1",
@@ -3813,6 +4224,7 @@
},
{
"BriefDescription": "Egress Occupancy; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.BL_BNC",
"PerPkg": "1",
@@ -3822,6 +4234,7 @@
},
{
"BriefDescription": "Egress Occupancy; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.BL_CRD",
"PerPkg": "1",
@@ -3831,6 +4244,7 @@
},
{
"BriefDescription": "Egress Occupancy; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.IV",
"PerPkg": "1",
@@ -3840,6 +4254,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.AD",
"PerPkg": "1",
@@ -3849,6 +4264,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.AK",
"PerPkg": "1",
@@ -3858,6 +4274,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.BL",
"PerPkg": "1",
@@ -3867,6 +4284,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto IV Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.IV",
"PerPkg": "1",
@@ -3876,6 +4294,7 @@
},
{
"BriefDescription": "Clockticks in the UBOX using a dedicated 48-bit Fixed Counter",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_U_CLOCKTICKS",
"PerPkg": "1",
@@ -3883,6 +4302,7 @@
},
{
"BriefDescription": "VLW Received",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD",
"PerPkg": "1",
@@ -3892,6 +4312,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.DISABLE",
"PerPkg": "1",
@@ -3901,6 +4322,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.ENABLE",
"PerPkg": "1",
@@ -3910,6 +4332,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.U2C_DISABLE",
"PerPkg": "1",
@@ -3919,6 +4342,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.U2C_ENABLE",
"PerPkg": "1",
@@ -3928,6 +4352,7 @@
},
{
"BriefDescription": "Cycles PHOLD Assert to Ack; Assert to ACK",
+ "Counter": "0,1",
"EventCode": "0x45",
"EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK",
"PerPkg": "1",
@@ -3937,6 +4362,7 @@
},
{
"BriefDescription": "RACU Request",
+ "Counter": "0,1",
"EventCode": "0x46",
"EventName": "UNC_U_RACU_REQUESTS",
"PerPkg": "1",
@@ -3945,6 +4371,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Correctable Machine Check",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.CMC",
"PerPkg": "1",
@@ -3954,6 +4381,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Livelock",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.LIVELOCK",
"PerPkg": "1",
@@ -3963,6 +4391,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; LTError",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.LTERROR",
"PerPkg": "1",
@@ -3972,6 +4401,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Monitor T0",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.MONITOR_T0",
"PerPkg": "1",
@@ -3981,6 +4411,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Monitor T1",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.MONITOR_T1",
"PerPkg": "1",
@@ -3990,6 +4421,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Other",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.OTHER",
"PerPkg": "1",
@@ -3999,6 +4431,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Trap",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.TRAP",
"PerPkg": "1",
@@ -4008,6 +4441,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Uncorrectable Machine Check",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.UMC",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-io.json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-io.json
index 01e04daf03da..daef7accdbcb 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-io.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-io.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of uclks in domain",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_R2_CLOCKTICKS",
"PerPkg": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.ISOCH_QPI0",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.ISOCH_QPI0",
"PerPkg": "1",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.ISOCH_QPI1",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.ISOCH_QPI1",
"PerPkg": "1",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.PRQ_QPI0",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.PRQ_QPI0",
"PerPkg": "1",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.PRQ_QPI1",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.PRQ_QPI1",
"PerPkg": "1",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; DRS",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.DRS",
"PerPkg": "1",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; NCB",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.NCB",
"PerPkg": "1",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; NCS",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.NCS",
"PerPkg": "1",
@@ -68,6 +76,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; DRS",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.DRS",
"PerPkg": "1",
@@ -77,6 +86,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; NCB",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.NCB",
"PerPkg": "1",
@@ -86,6 +96,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; NCS",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.NCS",
"PerPkg": "1",
@@ -95,6 +106,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.ALL",
"PerPkg": "1",
@@ -104,6 +116,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -113,6 +126,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -122,6 +136,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -131,6 +146,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW",
"PerPkg": "1",
@@ -140,6 +156,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -149,6 +166,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -158,6 +176,7 @@
},
{
"BriefDescription": "AK Ingress Bounced; Dn",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_R2_RING_AK_BOUNCES.DN",
"PerPkg": "1",
@@ -167,6 +186,7 @@
},
{
"BriefDescription": "AK Ingress Bounced; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_R2_RING_AK_BOUNCES.UP",
"PerPkg": "1",
@@ -176,6 +196,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -185,6 +206,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -194,6 +216,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -203,6 +226,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -212,6 +236,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW",
"PerPkg": "1",
@@ -221,6 +246,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -230,6 +256,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -239,6 +266,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -248,6 +276,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -257,6 +286,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -266,6 +296,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -275,6 +306,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW",
"PerPkg": "1",
@@ -284,6 +316,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -293,6 +326,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -302,6 +336,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -311,6 +346,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.CCW",
"PerPkg": "1",
@@ -320,6 +356,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.CW",
"PerPkg": "1",
@@ -329,6 +366,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NCB",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R2_RxR_CYCLES_NE.NCB",
"PerPkg": "1",
@@ -338,6 +376,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NCS",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R2_RxR_CYCLES_NE.NCS",
"PerPkg": "1",
@@ -347,6 +386,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCB",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R2_RxR_INSERTS.NCB",
"PerPkg": "1",
@@ -356,6 +396,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCS",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R2_RxR_INSERTS.NCS",
"PerPkg": "1",
@@ -365,6 +406,7 @@
},
{
"BriefDescription": "Ingress Occupancy Accumulator; DRS",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R2_RxR_OCCUPANCY.DRS",
"PerPkg": "1",
@@ -374,6 +416,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R2_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -383,6 +426,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R2_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -392,6 +436,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R2_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -401,6 +446,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R2_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -410,6 +456,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -419,6 +466,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -428,6 +476,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -437,6 +486,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -446,6 +496,7 @@
},
{
"BriefDescription": "Egress Cycles Full; AD",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.AD",
"PerPkg": "1",
@@ -455,6 +506,7 @@
},
{
"BriefDescription": "Egress Cycles Full; AK",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.AK",
"PerPkg": "1",
@@ -464,6 +516,7 @@
},
{
"BriefDescription": "Egress Cycles Full; BL",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.BL",
"PerPkg": "1",
@@ -473,6 +526,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; AD",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.AD",
"PerPkg": "1",
@@ -482,6 +536,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; AK",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.AK",
"PerPkg": "1",
@@ -491,6 +546,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; BL",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.BL",
"PerPkg": "1",
@@ -500,6 +556,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AD CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_AD",
"PerPkg": "1",
@@ -509,6 +566,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_AK",
"PerPkg": "1",
@@ -518,6 +576,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_BL",
"PerPkg": "1",
@@ -527,6 +586,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_AD",
"PerPkg": "1",
@@ -536,6 +596,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_AK",
"PerPkg": "1",
@@ -545,6 +606,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_BL",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json
index b5a33e7a68c6..ca09c1286485 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "read requests to memory controller. Derived from unc_m_cas_count.rd",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "LLC_MISSES.MEM_READ",
"PerPkg": "1",
@@ -11,6 +12,7 @@
},
{
"BriefDescription": "write requests to memory controller. Derived from unc_m_cas_count.wr",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "LLC_MISSES.MEM_WRITE",
"PerPkg": "1",
@@ -21,6 +23,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Write",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.BYP",
"PerPkg": "1",
@@ -30,6 +33,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Read",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.RD",
"PerPkg": "1",
@@ -39,6 +43,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Write",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.WR",
"PerPkg": "1",
@@ -48,6 +53,7 @@
},
{
"BriefDescription": "ACT command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.ACT",
"PerPkg": "1",
@@ -56,6 +62,7 @@
},
{
"BriefDescription": "CAS command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.CAS",
"PerPkg": "1",
@@ -64,6 +71,7 @@
},
{
"BriefDescription": "PRE command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.PRE",
"PerPkg": "1",
@@ -72,6 +80,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR_CAS (w/ and w/out auto-pre)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
@@ -81,6 +90,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM Reads (RD_CAS + Underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD",
"PerPkg": "1",
@@ -90,6 +100,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM RD_CAS (w/ and w/out auto-pre)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_REG",
"PerPkg": "1",
@@ -99,6 +110,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Read CAS issued in RMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_RMM",
"PerPkg": "1",
@@ -107,6 +119,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Underfill Read Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL",
"PerPkg": "1",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Read CAS issued in WMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_WMM",
"PerPkg": "1",
@@ -124,6 +138,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR_CAS (both Modes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR",
"PerPkg": "1",
@@ -133,6 +148,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_RMM",
"PerPkg": "1",
@@ -142,6 +158,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_WMM",
"PerPkg": "1",
@@ -151,6 +168,7 @@
},
{
"BriefDescription": "Clockticks in the Memory Controller using a dedicated 48-bit Fixed Counter",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
@@ -158,18 +176,22 @@
},
{
"BriefDescription": "Clockticks in the Memory Controller using one of the programmable counters",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M_CLOCKTICKS_P",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_CLOCKTICKS_P",
+ "Counter": "0,1,2,3",
+ "Deprecated": "1",
"EventName": "UNC_M_DCLOCKTICKS",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_M_DRAM_PRE_ALL",
"PerPkg": "1",
@@ -178,6 +200,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_M_DRAM_REFRESH.HIGH",
"PerPkg": "1",
@@ -187,6 +210,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_M_DRAM_REFRESH.PANIC",
"PerPkg": "1",
@@ -196,6 +220,7 @@
},
{
"BriefDescription": "ECC Correctable Errors",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_M_ECC_CORRECTABLE_ERRORS",
"PerPkg": "1",
@@ -204,6 +229,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Isoch Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.ISOCH",
"PerPkg": "1",
@@ -213,6 +239,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Partial Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.PARTIAL",
"PerPkg": "1",
@@ -222,6 +249,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.READ",
"PerPkg": "1",
@@ -231,6 +259,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.WRITE",
"PerPkg": "1",
@@ -240,6 +269,7 @@
},
{
"BriefDescription": "Channel DLLOFF Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M_POWER_CHANNEL_DLLOFF",
"PerPkg": "1",
@@ -248,6 +278,7 @@
},
{
"BriefDescription": "Channel PPD Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M_POWER_CHANNEL_PPD",
"PerPkg": "1",
@@ -256,6 +287,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK0",
"PerPkg": "1",
@@ -265,6 +297,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK1",
"PerPkg": "1",
@@ -274,6 +307,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK2",
"PerPkg": "1",
@@ -283,6 +317,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK3",
"PerPkg": "1",
@@ -292,6 +327,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK4",
"PerPkg": "1",
@@ -301,6 +337,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK5",
"PerPkg": "1",
@@ -310,6 +347,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK6",
"PerPkg": "1",
@@ -319,6 +357,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK7",
"PerPkg": "1",
@@ -328,6 +367,7 @@
},
{
"BriefDescription": "Critical Throttle Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES",
"PerPkg": "1",
@@ -336,6 +376,7 @@
},
{
"BriefDescription": "UNC_M_POWER_PCU_THROTTLING",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M_POWER_PCU_THROTTLING",
"PerPkg": "1",
@@ -343,6 +384,7 @@
},
{
"BriefDescription": "Clock-Enabled Self-Refresh",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M_POWER_SELF_REFRESH",
"PerPkg": "1",
@@ -351,6 +393,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK0",
"PerPkg": "1",
@@ -360,6 +403,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK1",
"PerPkg": "1",
@@ -369,6 +413,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK2",
"PerPkg": "1",
@@ -378,6 +423,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK3",
"PerPkg": "1",
@@ -387,6 +433,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK4",
"PerPkg": "1",
@@ -396,6 +443,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK5",
"PerPkg": "1",
@@ -405,6 +453,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK6",
"PerPkg": "1",
@@ -414,6 +463,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK7",
"PerPkg": "1",
@@ -423,6 +473,7 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Read Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_RD",
"PerPkg": "1",
@@ -432,6 +483,7 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Write Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_WR",
"PerPkg": "1",
@@ -441,6 +493,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.BYP",
"PerPkg": "1",
@@ -450,6 +503,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to timer expiration",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_CLOSE",
"PerPkg": "1",
@@ -459,6 +513,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharges due to page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_MISS",
"PerPkg": "1",
@@ -468,6 +523,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to read",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.RD",
"PerPkg": "1",
@@ -477,6 +533,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to write",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.WR",
"PerPkg": "1",
@@ -486,6 +543,7 @@
},
{
"BriefDescription": "Read CAS issued with HIGH priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.HIGH",
"PerPkg": "1",
@@ -494,6 +552,7 @@
},
{
"BriefDescription": "Read CAS issued with LOW priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.LOW",
"PerPkg": "1",
@@ -502,6 +561,7 @@
},
{
"BriefDescription": "Read CAS issued with MEDIUM priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.MED",
"PerPkg": "1",
@@ -510,6 +570,7 @@
},
{
"BriefDescription": "Read CAS issued with PANIC NON ISOCH priority (starved)",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.PANIC",
"PerPkg": "1",
@@ -518,6 +579,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.ALLBANKS",
"PerPkg": "1",
@@ -527,6 +589,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK0",
"PerPkg": "1",
@@ -535,6 +598,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK1",
"PerPkg": "1",
@@ -544,6 +608,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK10",
"PerPkg": "1",
@@ -553,6 +618,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK11",
"PerPkg": "1",
@@ -562,6 +628,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK12",
"PerPkg": "1",
@@ -571,6 +638,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK13",
"PerPkg": "1",
@@ -580,6 +648,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK14",
"PerPkg": "1",
@@ -589,6 +658,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK15",
"PerPkg": "1",
@@ -598,6 +668,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK2",
"PerPkg": "1",
@@ -607,6 +678,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK3",
"PerPkg": "1",
@@ -616,6 +688,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK4",
"PerPkg": "1",
@@ -625,6 +698,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK5",
"PerPkg": "1",
@@ -634,6 +708,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK6",
"PerPkg": "1",
@@ -643,6 +718,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK7",
"PerPkg": "1",
@@ -652,6 +728,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK8",
"PerPkg": "1",
@@ -661,6 +738,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK9",
"PerPkg": "1",
@@ -670,6 +748,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG0",
"PerPkg": "1",
@@ -679,6 +758,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG1",
"PerPkg": "1",
@@ -688,6 +768,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG2",
"PerPkg": "1",
@@ -697,6 +778,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG3",
"PerPkg": "1",
@@ -706,6 +788,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.ALLBANKS",
"PerPkg": "1",
@@ -715,6 +798,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK0",
"PerPkg": "1",
@@ -723,6 +807,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK1",
"PerPkg": "1",
@@ -732,6 +817,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK10",
"PerPkg": "1",
@@ -741,6 +827,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK11",
"PerPkg": "1",
@@ -750,6 +837,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK12",
"PerPkg": "1",
@@ -759,6 +847,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK13",
"PerPkg": "1",
@@ -768,6 +857,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK14",
"PerPkg": "1",
@@ -777,6 +867,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK15",
"PerPkg": "1",
@@ -786,6 +877,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK2",
"PerPkg": "1",
@@ -795,6 +887,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK3",
"PerPkg": "1",
@@ -804,6 +897,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK4",
"PerPkg": "1",
@@ -813,6 +907,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK5",
"PerPkg": "1",
@@ -822,6 +917,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK6",
"PerPkg": "1",
@@ -831,6 +927,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK7",
"PerPkg": "1",
@@ -840,6 +937,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK8",
"PerPkg": "1",
@@ -849,6 +947,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK9",
"PerPkg": "1",
@@ -858,6 +957,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG0",
"PerPkg": "1",
@@ -867,6 +967,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG1",
"PerPkg": "1",
@@ -876,6 +977,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG2",
"PerPkg": "1",
@@ -885,6 +987,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG3",
"PerPkg": "1",
@@ -894,6 +997,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK0",
"PerPkg": "1",
@@ -902,6 +1006,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.ALLBANKS",
"PerPkg": "1",
@@ -911,6 +1016,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK0",
"PerPkg": "1",
@@ -919,6 +1025,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK1",
"PerPkg": "1",
@@ -928,6 +1035,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK10",
"PerPkg": "1",
@@ -937,6 +1045,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK11",
"PerPkg": "1",
@@ -946,6 +1055,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK12",
"PerPkg": "1",
@@ -955,6 +1065,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK13",
"PerPkg": "1",
@@ -964,6 +1075,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK14",
"PerPkg": "1",
@@ -973,6 +1085,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK15",
"PerPkg": "1",
@@ -982,6 +1095,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK2",
"PerPkg": "1",
@@ -991,6 +1105,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK3",
"PerPkg": "1",
@@ -1000,6 +1115,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK4",
"PerPkg": "1",
@@ -1009,6 +1125,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK5",
"PerPkg": "1",
@@ -1018,6 +1135,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK6",
"PerPkg": "1",
@@ -1027,6 +1145,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK7",
"PerPkg": "1",
@@ -1036,6 +1155,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK8",
"PerPkg": "1",
@@ -1045,6 +1165,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK9",
"PerPkg": "1",
@@ -1054,6 +1175,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG0",
"PerPkg": "1",
@@ -1063,6 +1185,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG1",
"PerPkg": "1",
@@ -1072,6 +1195,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG2",
"PerPkg": "1",
@@ -1081,6 +1205,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG3",
"PerPkg": "1",
@@ -1090,6 +1215,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.ALLBANKS",
"PerPkg": "1",
@@ -1099,6 +1225,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK0",
"PerPkg": "1",
@@ -1107,6 +1234,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK1",
"PerPkg": "1",
@@ -1116,6 +1244,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK10",
"PerPkg": "1",
@@ -1125,6 +1254,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK11",
"PerPkg": "1",
@@ -1134,6 +1264,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK12",
"PerPkg": "1",
@@ -1143,6 +1274,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK13",
"PerPkg": "1",
@@ -1152,6 +1284,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK14",
"PerPkg": "1",
@@ -1161,6 +1294,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK15",
"PerPkg": "1",
@@ -1170,6 +1304,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK2",
"PerPkg": "1",
@@ -1179,6 +1314,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK3",
"PerPkg": "1",
@@ -1188,6 +1324,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK4",
"PerPkg": "1",
@@ -1197,6 +1334,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK5",
"PerPkg": "1",
@@ -1206,6 +1344,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK6",
"PerPkg": "1",
@@ -1215,6 +1354,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK7",
"PerPkg": "1",
@@ -1224,6 +1364,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK8",
"PerPkg": "1",
@@ -1233,6 +1374,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK9",
"PerPkg": "1",
@@ -1242,6 +1384,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG0",
"PerPkg": "1",
@@ -1251,6 +1394,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG1",
"PerPkg": "1",
@@ -1260,6 +1404,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG2",
"PerPkg": "1",
@@ -1269,6 +1414,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG3",
"PerPkg": "1",
@@ -1278,6 +1424,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.ALLBANKS",
"PerPkg": "1",
@@ -1287,6 +1434,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK0",
"PerPkg": "1",
@@ -1295,6 +1443,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK1",
"PerPkg": "1",
@@ -1304,6 +1453,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK10",
"PerPkg": "1",
@@ -1313,6 +1463,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK11",
"PerPkg": "1",
@@ -1322,6 +1473,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK12",
"PerPkg": "1",
@@ -1331,6 +1483,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK13",
"PerPkg": "1",
@@ -1340,6 +1493,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK14",
"PerPkg": "1",
@@ -1349,6 +1503,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK15",
"PerPkg": "1",
@@ -1358,6 +1513,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK2",
"PerPkg": "1",
@@ -1367,6 +1523,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK3",
"PerPkg": "1",
@@ -1376,6 +1533,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK4",
"PerPkg": "1",
@@ -1385,6 +1543,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK5",
"PerPkg": "1",
@@ -1394,6 +1553,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK6",
"PerPkg": "1",
@@ -1403,6 +1563,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK7",
"PerPkg": "1",
@@ -1412,6 +1573,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK8",
"PerPkg": "1",
@@ -1421,6 +1583,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK9",
"PerPkg": "1",
@@ -1430,6 +1593,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG0",
"PerPkg": "1",
@@ -1439,6 +1603,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG1",
"PerPkg": "1",
@@ -1448,6 +1613,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG2",
"PerPkg": "1",
@@ -1457,6 +1623,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG3",
"PerPkg": "1",
@@ -1466,6 +1633,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.ALLBANKS",
"PerPkg": "1",
@@ -1475,6 +1643,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK0",
"PerPkg": "1",
@@ -1483,6 +1652,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK1",
"PerPkg": "1",
@@ -1492,6 +1662,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK10",
"PerPkg": "1",
@@ -1501,6 +1672,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK11",
"PerPkg": "1",
@@ -1510,6 +1682,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK12",
"PerPkg": "1",
@@ -1519,6 +1692,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK13",
"PerPkg": "1",
@@ -1528,6 +1702,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK14",
"PerPkg": "1",
@@ -1537,6 +1712,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK15",
"PerPkg": "1",
@@ -1546,6 +1722,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK2",
"PerPkg": "1",
@@ -1555,6 +1732,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK3",
"PerPkg": "1",
@@ -1564,6 +1742,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK4",
"PerPkg": "1",
@@ -1573,6 +1752,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK5",
"PerPkg": "1",
@@ -1582,6 +1762,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK6",
"PerPkg": "1",
@@ -1591,6 +1772,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK7",
"PerPkg": "1",
@@ -1600,6 +1782,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK8",
"PerPkg": "1",
@@ -1609,6 +1792,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK9",
"PerPkg": "1",
@@ -1618,6 +1802,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG0",
"PerPkg": "1",
@@ -1627,6 +1812,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG1",
"PerPkg": "1",
@@ -1636,6 +1822,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG2",
"PerPkg": "1",
@@ -1645,6 +1832,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG3",
"PerPkg": "1",
@@ -1654,6 +1842,7 @@
},
{
"BriefDescription": "Read Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE",
"PerPkg": "1",
@@ -1662,6 +1851,7 @@
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS",
"PerPkg": "1",
@@ -1670,6 +1860,7 @@
},
{
"BriefDescription": "VMSE MXB write buffer occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M_VMSE_MXB_WR_OCCUPANCY",
"PerPkg": "1",
@@ -1677,6 +1868,7 @@
},
{
"BriefDescription": "VMSE WR PUSH issued; VMSE write PUSH issued in RMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M_VMSE_WR_PUSH.RMM",
"PerPkg": "1",
@@ -1685,6 +1877,7 @@
},
{
"BriefDescription": "VMSE WR PUSH issued; VMSE write PUSH issued in WMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M_VMSE_WR_PUSH.WMM",
"PerPkg": "1",
@@ -1693,6 +1886,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold; Transition from WMM to RMM because of starve counter",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.LOW_THRESH",
"PerPkg": "1",
@@ -1701,6 +1895,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.STARVE",
"PerPkg": "1",
@@ -1709,6 +1904,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.VMSE_RETRY",
"PerPkg": "1",
@@ -1717,6 +1913,7 @@
},
{
"BriefDescription": "Write Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M_WPQ_CYCLES_FULL",
"PerPkg": "1",
@@ -1725,6 +1922,7 @@
},
{
"BriefDescription": "Write Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE",
"PerPkg": "1",
@@ -1733,6 +1931,7 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT",
"PerPkg": "1",
@@ -1741,6 +1940,7 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT",
"PerPkg": "1",
@@ -1749,6 +1949,7 @@
},
{
"BriefDescription": "Not getting the requested Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_M_WRONG_MM",
"PerPkg": "1",
@@ -1756,6 +1957,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.ALLBANKS",
"PerPkg": "1",
@@ -1765,6 +1967,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK0",
"PerPkg": "1",
@@ -1773,6 +1976,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK1",
"PerPkg": "1",
@@ -1782,6 +1986,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK10",
"PerPkg": "1",
@@ -1791,6 +1996,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK11",
"PerPkg": "1",
@@ -1800,6 +2006,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK12",
"PerPkg": "1",
@@ -1809,6 +2016,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK13",
"PerPkg": "1",
@@ -1818,6 +2026,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK14",
"PerPkg": "1",
@@ -1827,6 +2036,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK15",
"PerPkg": "1",
@@ -1836,6 +2046,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK2",
"PerPkg": "1",
@@ -1845,6 +2056,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK3",
"PerPkg": "1",
@@ -1854,6 +2066,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK4",
"PerPkg": "1",
@@ -1863,6 +2076,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK5",
"PerPkg": "1",
@@ -1872,6 +2086,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK6",
"PerPkg": "1",
@@ -1881,6 +2096,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK7",
"PerPkg": "1",
@@ -1890,6 +2106,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK8",
"PerPkg": "1",
@@ -1899,6 +2116,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK9",
"PerPkg": "1",
@@ -1908,6 +2126,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG0",
"PerPkg": "1",
@@ -1917,6 +2136,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG1",
"PerPkg": "1",
@@ -1926,6 +2146,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG2",
"PerPkg": "1",
@@ -1935,6 +2156,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG3",
"PerPkg": "1",
@@ -1944,6 +2166,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.ALLBANKS",
"PerPkg": "1",
@@ -1953,6 +2176,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK0",
"PerPkg": "1",
@@ -1961,6 +2185,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK1",
"PerPkg": "1",
@@ -1970,6 +2195,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK10",
"PerPkg": "1",
@@ -1979,6 +2205,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK11",
"PerPkg": "1",
@@ -1988,6 +2215,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK12",
"PerPkg": "1",
@@ -1997,6 +2225,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK13",
"PerPkg": "1",
@@ -2006,6 +2235,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK14",
"PerPkg": "1",
@@ -2015,6 +2245,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK15",
"PerPkg": "1",
@@ -2024,6 +2255,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK2",
"PerPkg": "1",
@@ -2033,6 +2265,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK3",
"PerPkg": "1",
@@ -2042,6 +2275,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK4",
"PerPkg": "1",
@@ -2051,6 +2285,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK5",
"PerPkg": "1",
@@ -2060,6 +2295,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK6",
"PerPkg": "1",
@@ -2069,6 +2305,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK7",
"PerPkg": "1",
@@ -2078,6 +2315,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK8",
"PerPkg": "1",
@@ -2087,6 +2325,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK9",
"PerPkg": "1",
@@ -2096,6 +2335,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG0",
"PerPkg": "1",
@@ -2105,6 +2345,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG1",
"PerPkg": "1",
@@ -2114,6 +2355,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG2",
"PerPkg": "1",
@@ -2123,6 +2365,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG3",
"PerPkg": "1",
@@ -2132,6 +2375,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.ALLBANKS",
"PerPkg": "1",
@@ -2141,6 +2385,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK0",
"PerPkg": "1",
@@ -2149,6 +2394,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK1",
"PerPkg": "1",
@@ -2158,6 +2404,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK10",
"PerPkg": "1",
@@ -2167,6 +2414,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK11",
"PerPkg": "1",
@@ -2176,6 +2424,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK12",
"PerPkg": "1",
@@ -2185,6 +2434,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK13",
"PerPkg": "1",
@@ -2194,6 +2444,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK14",
"PerPkg": "1",
@@ -2203,6 +2454,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK15",
"PerPkg": "1",
@@ -2212,6 +2464,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK2",
"PerPkg": "1",
@@ -2221,6 +2474,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK3",
"PerPkg": "1",
@@ -2230,6 +2484,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK4",
"PerPkg": "1",
@@ -2239,6 +2494,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK5",
"PerPkg": "1",
@@ -2248,6 +2504,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK6",
"PerPkg": "1",
@@ -2257,6 +2514,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK7",
"PerPkg": "1",
@@ -2266,6 +2524,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK8",
"PerPkg": "1",
@@ -2275,6 +2534,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK9",
"PerPkg": "1",
@@ -2284,6 +2544,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG0",
"PerPkg": "1",
@@ -2293,6 +2554,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG1",
"PerPkg": "1",
@@ -2302,6 +2564,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG2",
"PerPkg": "1",
@@ -2311,6 +2574,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG3",
"PerPkg": "1",
@@ -2320,6 +2584,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.ALLBANKS",
"PerPkg": "1",
@@ -2329,6 +2594,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK0",
"PerPkg": "1",
@@ -2337,6 +2603,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK1",
"PerPkg": "1",
@@ -2346,6 +2613,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK10",
"PerPkg": "1",
@@ -2355,6 +2623,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK11",
"PerPkg": "1",
@@ -2364,6 +2633,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK12",
"PerPkg": "1",
@@ -2373,6 +2643,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK13",
"PerPkg": "1",
@@ -2382,6 +2653,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK14",
"PerPkg": "1",
@@ -2391,6 +2663,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK15",
"PerPkg": "1",
@@ -2400,6 +2673,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK2",
"PerPkg": "1",
@@ -2409,6 +2683,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK3",
"PerPkg": "1",
@@ -2418,6 +2693,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK4",
"PerPkg": "1",
@@ -2427,6 +2703,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK5",
"PerPkg": "1",
@@ -2436,6 +2713,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK6",
"PerPkg": "1",
@@ -2445,6 +2723,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK7",
"PerPkg": "1",
@@ -2454,6 +2733,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK8",
"PerPkg": "1",
@@ -2463,6 +2743,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK9",
"PerPkg": "1",
@@ -2472,6 +2753,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG0",
"PerPkg": "1",
@@ -2481,6 +2763,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG1",
"PerPkg": "1",
@@ -2490,6 +2773,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG2",
"PerPkg": "1",
@@ -2499,6 +2783,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG3",
"PerPkg": "1",
@@ -2508,6 +2793,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.ALLBANKS",
"PerPkg": "1",
@@ -2517,6 +2803,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK0",
"PerPkg": "1",
@@ -2525,6 +2812,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK1",
"PerPkg": "1",
@@ -2534,6 +2822,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK10",
"PerPkg": "1",
@@ -2543,6 +2832,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK11",
"PerPkg": "1",
@@ -2552,6 +2842,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK12",
"PerPkg": "1",
@@ -2561,6 +2852,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK13",
"PerPkg": "1",
@@ -2570,6 +2862,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK14",
"PerPkg": "1",
@@ -2579,6 +2872,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK15",
"PerPkg": "1",
@@ -2588,6 +2882,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK2",
"PerPkg": "1",
@@ -2597,6 +2892,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK3",
"PerPkg": "1",
@@ -2606,6 +2902,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK4",
"PerPkg": "1",
@@ -2615,6 +2912,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK5",
"PerPkg": "1",
@@ -2624,6 +2922,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK6",
"PerPkg": "1",
@@ -2633,6 +2932,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK7",
"PerPkg": "1",
@@ -2642,6 +2942,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK8",
"PerPkg": "1",
@@ -2651,6 +2952,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK9",
"PerPkg": "1",
@@ -2660,6 +2962,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG0",
"PerPkg": "1",
@@ -2669,6 +2972,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG1",
"PerPkg": "1",
@@ -2678,6 +2982,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG2",
"PerPkg": "1",
@@ -2687,6 +2992,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG3",
"PerPkg": "1",
@@ -2696,6 +3002,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.ALLBANKS",
"PerPkg": "1",
@@ -2705,6 +3012,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK0",
"PerPkg": "1",
@@ -2713,6 +3021,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK1",
"PerPkg": "1",
@@ -2722,6 +3031,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK10",
"PerPkg": "1",
@@ -2731,6 +3041,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK11",
"PerPkg": "1",
@@ -2740,6 +3051,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK12",
"PerPkg": "1",
@@ -2749,6 +3061,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK13",
"PerPkg": "1",
@@ -2758,6 +3071,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK14",
"PerPkg": "1",
@@ -2767,6 +3081,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK15",
"PerPkg": "1",
@@ -2776,6 +3091,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK2",
"PerPkg": "1",
@@ -2785,6 +3101,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK3",
"PerPkg": "1",
@@ -2794,6 +3111,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK4",
"PerPkg": "1",
@@ -2803,6 +3121,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK5",
"PerPkg": "1",
@@ -2812,6 +3131,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK6",
"PerPkg": "1",
@@ -2821,6 +3141,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK7",
"PerPkg": "1",
@@ -2830,6 +3151,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK8",
"PerPkg": "1",
@@ -2839,6 +3161,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK9",
"PerPkg": "1",
@@ -2848,6 +3171,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG0",
"PerPkg": "1",
@@ -2857,6 +3181,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG1",
"PerPkg": "1",
@@ -2866,6 +3191,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG2",
"PerPkg": "1",
@@ -2875,6 +3201,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG3",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-power.json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-power.json
index 83d20130c217..afdc636b9855 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-power.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-power.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "pclk Cycles",
+ "Counter": "0,1,2,3",
"EventName": "UNC_P_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_P_CORE0_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_P_CORE10_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_P_CORE11_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_P_CORE12_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_P_CORE13_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_P_CORE14_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_P_CORE15_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -64,6 +72,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_P_CORE16_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_P_CORE17_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -80,6 +90,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_P_CORE1_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -88,6 +99,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_P_CORE2_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -96,6 +108,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_P_CORE3_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -104,6 +117,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_P_CORE4_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -112,6 +126,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_P_CORE5_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_P_CORE6_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_P_CORE7_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -136,6 +153,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_P_CORE8_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -144,6 +162,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_P_CORE9_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -152,6 +171,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_P_DEMOTIONS_CORE0",
"PerPkg": "1",
@@ -160,6 +180,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_P_DEMOTIONS_CORE1",
"PerPkg": "1",
@@ -168,6 +189,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_P_DEMOTIONS_CORE10",
"PerPkg": "1",
@@ -176,6 +198,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3B",
"EventName": "UNC_P_DEMOTIONS_CORE11",
"PerPkg": "1",
@@ -184,6 +207,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_P_DEMOTIONS_CORE12",
"PerPkg": "1",
@@ -192,6 +216,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_P_DEMOTIONS_CORE13",
"PerPkg": "1",
@@ -200,6 +225,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_P_DEMOTIONS_CORE14",
"PerPkg": "1",
@@ -208,6 +234,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_P_DEMOTIONS_CORE15",
"PerPkg": "1",
@@ -216,6 +243,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_P_DEMOTIONS_CORE16",
"PerPkg": "1",
@@ -224,6 +252,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_P_DEMOTIONS_CORE17",
"PerPkg": "1",
@@ -232,6 +261,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_P_DEMOTIONS_CORE2",
"PerPkg": "1",
@@ -240,6 +270,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_P_DEMOTIONS_CORE3",
"PerPkg": "1",
@@ -248,6 +279,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_P_DEMOTIONS_CORE4",
"PerPkg": "1",
@@ -256,6 +288,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_P_DEMOTIONS_CORE5",
"PerPkg": "1",
@@ -264,6 +297,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_P_DEMOTIONS_CORE6",
"PerPkg": "1",
@@ -272,6 +306,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_P_DEMOTIONS_CORE7",
"PerPkg": "1",
@@ -280,6 +315,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_P_DEMOTIONS_CORE8",
"PerPkg": "1",
@@ -288,6 +324,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_P_DEMOTIONS_CORE9",
"PerPkg": "1",
@@ -296,6 +333,7 @@
},
{
"BriefDescription": "Thermal Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
"PerPkg": "1",
@@ -304,6 +342,7 @@
},
{
"BriefDescription": "OS Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_P_FREQ_MAX_OS_CYCLES",
"PerPkg": "1",
@@ -312,6 +351,7 @@
},
{
"BriefDescription": "Power Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
"PerPkg": "1",
@@ -320,6 +360,7 @@
},
{
"BriefDescription": "IO P Limit Strongest Lower Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES",
"PerPkg": "1",
@@ -328,6 +369,7 @@
},
{
"BriefDescription": "Cycles spent changing Frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_P_FREQ_TRANS_CYCLES",
"PerPkg": "1",
@@ -336,6 +378,7 @@
},
{
"BriefDescription": "Memory Phase Shedding Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES",
"PerPkg": "1",
@@ -344,6 +387,7 @@
},
{
"BriefDescription": "Package C State Residency - C0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES",
"PerPkg": "1",
@@ -352,6 +396,7 @@
},
{
"BriefDescription": "Package C State Residency - C1E",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_P_PKG_RESIDENCY_C1E_CYCLES",
"PerPkg": "1",
@@ -360,6 +405,7 @@
},
{
"BriefDescription": "Package C State Residency - C2E",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
"PerPkg": "1",
@@ -368,6 +414,7 @@
},
{
"BriefDescription": "Package C State Residency - C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES",
"PerPkg": "1",
@@ -376,6 +423,7 @@
},
{
"BriefDescription": "Package C State Residency - C6",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
"PerPkg": "1",
@@ -384,6 +432,7 @@
},
{
"BriefDescription": "Package C7 State Residency",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_P_PKG_RESIDENCY_C7_CYCLES",
"PerPkg": "1",
@@ -392,30 +441,37 @@
},
{
"BriefDescription": "Number of cores in C-State; C0 and C1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0",
+ "Filter": "occ_sel=1",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3",
+ "Filter": "occ_sel=2",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C6 and C7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6",
+ "Filter": "occ_sel=3",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "External Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
"PerPkg": "1",
@@ -424,6 +480,7 @@
},
{
"BriefDescription": "Internal Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
"PerPkg": "1",
@@ -432,6 +489,7 @@
},
{
"BriefDescription": "Total Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_P_TOTAL_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -440,6 +498,7 @@
},
{
"BriefDescription": "UNC_P_UFS_TRANSITIONS_RING_GV",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "UNC_P_UFS_TRANSITIONS_RING_GV",
"PerPkg": "1",
@@ -448,6 +507,7 @@
},
{
"BriefDescription": "VR Hot",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_P_VR_HOT_CYCLES",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/virtual-memory.json b/tools/perf/pmu-events/arch/x86/broadwellx/virtual-memory.json
index 93621e004d88..eb1d9541e26c 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Load misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Load operations that miss the first DTLB level but hit the second and do not cause page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"SampleAfterValue": "2000003",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_2M",
"SampleAfterValue": "2000003",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_4K",
"SampleAfterValue": "2000003",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
@@ -39,6 +44,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (2M/4M).",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (4K).",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_DURATION",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
@@ -84,6 +94,7 @@
},
{
"BriefDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"SampleAfterValue": "100003",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_2M",
"SampleAfterValue": "100003",
@@ -98,6 +110,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_4K",
"SampleAfterValue": "100003",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (2M/4M)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_DURATION",
@@ -149,6 +167,7 @@
},
{
"BriefDescription": "Cycle count for an Extended Page table walk.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "EPT.WALK_CYCLES",
"PublicDescription": "This event counts cycles for an extended page table walk. The Extended Page directory cache differs from standard TLB caches by the operating system that use it. Virtual machine operating systems use the extended page directory cache, while guest operating systems use the standard TLB caches.",
@@ -157,6 +176,7 @@
},
{
"BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "ITLB.ITLB_FLUSH",
"PublicDescription": "This event counts the number of flushes of the big or small ITLB pages. Counting include both TLB Flush (covering all sets) and TLB Set Clear (set-specific).",
@@ -165,6 +185,7 @@
},
{
"BriefDescription": "Misses at all ITLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
@@ -174,6 +195,7 @@
},
{
"BriefDescription": "Operations that miss the first ITLB level but hit the second and do not cause any page walks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"SampleAfterValue": "100003",
@@ -181,6 +203,7 @@
},
{
"BriefDescription": "Code misses that miss the DTLB and hit the STLB (2M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_2M",
"SampleAfterValue": "100003",
@@ -188,6 +211,7 @@
},
{
"BriefDescription": "Core misses that miss the DTLB and hit the STLB (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_4K",
"SampleAfterValue": "100003",
@@ -195,6 +219,7 @@
},
{
"BriefDescription": "Misses in all ITLB levels that cause completed page walks.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
@@ -203,6 +228,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
@@ -212,6 +238,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
@@ -221,6 +248,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
@@ -230,6 +258,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"Errata": "BDM69",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_DURATION",
@@ -239,6 +268,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L1+FB.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L1",
@@ -247,6 +277,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L2.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L2",
@@ -255,6 +286,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L3 + XSNP.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L3",
@@ -263,6 +295,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in Memory.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_MEMORY",
@@ -271,6 +304,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L1+FB.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L1",
@@ -279,6 +313,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L2.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L2",
@@ -287,6 +322,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L3 + XSNP.",
+ "Counter": "0,1,2,3",
"Errata": "BDM69, BDM98",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L3",
@@ -295,6 +331,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "This event counts the number of DTLB flush attempts of the thread-specific entries.",
@@ -303,6 +340,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "This event counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, and so on).",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/cache.json b/tools/perf/pmu-events/arch/x86/cascadelakex/cache.json
index a842f05cb60d..d113c14aa7c9 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/cache.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/cache.json
@@ -1,6 +1,81 @@
[
{
+ "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IFWDFE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xEF",
+ "EventName": "CORE_SNOOP_RESPONSE.RSP_IFWDFE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IFWDM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xEF",
+ "EventName": "CORE_SNOOP_RESPONSE.RSP_IFWDM",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IHITFSE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xEF",
+ "EventName": "CORE_SNOOP_RESPONSE.RSP_IHITFSE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IHITI",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xEF",
+ "EventName": "CORE_SNOOP_RESPONSE.RSP_IHITI",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SFWDFE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xEF",
+ "EventName": "CORE_SNOOP_RESPONSE.RSP_SFWDFE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SFWDM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xEF",
+ "EventName": "CORE_SNOOP_RESPONSE.RSP_SFWDM",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SHITFSE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xEF",
+ "EventName": "CORE_SNOOP_RESPONSE.RSP_SHITFSE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts number of cache lines that are dropped and not written back to L3 as they are deemed to be less likely to be reused shortly",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xFE",
+ "EventName": "IDI_MISC.WB_DOWNGRADE",
+ "PublicDescription": "Counts number of cache lines that are dropped and not written back to L3 as they are deemed to be less likely to be reused shortly.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts number of cache lines that are allocated and written back to L3 with the intention that they are more likely to be reused shortly",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xFE",
+ "EventName": "IDI_MISC.WB_UPGRADE",
+ "PublicDescription": "Counts number of cache lines that are allocated and written back to L3 with the intention that they are more likely to be reused shortly.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -9,6 +84,7 @@
},
{
"BriefDescription": "Number of times a request needed a FB entry but there was no entry available for it. That is the FB unavailability was dominant reason for blocking the request. A request includes cacheable/uncacheable demands that is load, store or SW prefetch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
"PublicDescription": "Number of times a request needed a FB (Fill Buffer) entry but there was no entry available for it. A request includes cacheable/uncacheable demands that are load, store or SW prefetch instructions.",
@@ -17,6 +93,7 @@
},
{
"BriefDescription": "L1D miss outstandings duration in cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Counts duration of L1D miss outstanding, that is each cycle number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch.Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
@@ -25,6 +102,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -35,6 +113,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -43,6 +122,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -51,6 +131,7 @@
},
{
"BriefDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines can be either in modified state or clean state. Modified lines may either be written back to L3 or directly written to memory and not allocated in L3. Clean lines may either be allocated in L3 or dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.NON_SILENT",
"PublicDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines can be either in modified state or clean state. Modified lines may either be written back to L3 or directly written to memory and not allocated in L3. Clean lines may either be allocated in L3 or dropped.",
@@ -59,6 +140,7 @@
},
{
"BriefDescription": "Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill. These lines are typically in Shared state. A non-threaded event.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.SILENT",
"SampleAfterValue": "200003",
@@ -66,6 +148,7 @@
},
{
"BriefDescription": "Counts the number of lines that have been hardware prefetched but not used and now evicted by L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.USELESS_HWPF",
"SampleAfterValue": "200003",
@@ -73,6 +156,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event L2_LINES_OUT.USELESS_HWPF",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.USELESS_PREF",
@@ -81,6 +165,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts the total number of L2 code requests.",
@@ -89,6 +174,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.",
@@ -97,6 +183,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"PublicDescription": "Demand requests that miss L2 cache.",
@@ -105,6 +192,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
"PublicDescription": "Demand requests to L2 cache.",
@@ -113,6 +201,7 @@
},
{
"BriefDescription": "Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "Counts the total number of requests from the L2 hardware prefetchers.",
@@ -121,6 +210,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -129,6 +219,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
@@ -137,6 +228,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Counts L2 cache misses when fetching instructions.",
@@ -145,6 +237,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests, initiated by load instructions, that hit L2 cache",
@@ -153,6 +246,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "Counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.",
@@ -161,6 +255,7 @@
},
{
"BriefDescription": "All requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
"PublicDescription": "All requests that miss L2 cache.",
@@ -169,6 +264,7 @@
},
{
"BriefDescription": "Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.PF_HIT",
"PublicDescription": "Counts requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that hit L2 cache.",
@@ -177,6 +273,7 @@
},
{
"BriefDescription": "Requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.PF_MISS",
"PublicDescription": "Counts requests from the L1/L2/L3 hardware prefetchers or Load software prefetches that miss L2 cache.",
@@ -185,6 +282,7 @@
},
{
"BriefDescription": "All L2 requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
"PublicDescription": "All L2 requests.",
@@ -193,6 +291,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
@@ -201,6 +300,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
@@ -209,6 +309,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "Counts L2 writebacks that access L2 cache.",
@@ -217,6 +318,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed L3",
+ "Counter": "0,1,2,3",
"Errata": "SKL057",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
@@ -226,6 +328,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to L3",
+ "Counter": "0,1,2,3",
"Errata": "SKL057",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
@@ -235,6 +338,7 @@
},
{
"BriefDescription": "Retired load instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.ALL_LOADS",
@@ -245,6 +349,7 @@
},
{
"BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.ALL_STORES",
@@ -255,6 +360,7 @@
},
{
"BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.ANY",
@@ -265,6 +371,7 @@
},
{
"BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.LOCK_LOADS",
@@ -274,6 +381,7 @@
},
{
"BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
@@ -284,6 +392,7 @@
},
{
"BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.SPLIT_STORES",
@@ -294,6 +403,7 @@
},
{
"BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
@@ -304,6 +414,7 @@
},
{
"BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
@@ -314,6 +425,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT",
@@ -324,6 +436,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM",
@@ -334,6 +447,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
@@ -343,6 +457,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources were hits in L3 without snoops required",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE",
@@ -353,6 +468,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from local dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
@@ -363,6 +479,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from remote dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM",
@@ -372,6 +489,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources was forwarded from a remote cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD",
@@ -381,6 +499,7 @@
},
{
"BriefDescription": "Retired load instructions whose data sources was remote HITM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM",
@@ -391,6 +510,7 @@
},
{
"BriefDescription": "Retired load instructions with remote Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM",
@@ -401,6 +521,7 @@
},
{
"BriefDescription": "Retired instructions with at least 1 uncacheable load or lock.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD4",
"EventName": "MEM_LOAD_MISC_RETIRED.UC",
@@ -410,6 +531,7 @@
},
{
"BriefDescription": "Retired load instructions which data sources were load missed L1 but hit FB due to preceding miss to the same cache line with data not ready",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.FB_HIT",
@@ -420,6 +542,7 @@
},
{
"BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.L1_HIT",
@@ -430,6 +553,7 @@
},
{
"BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.L1_MISS",
@@ -440,6 +564,7 @@
},
{
"BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.L2_HIT",
@@ -450,6 +575,7 @@
},
{
"BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.L2_MISS",
@@ -460,6 +586,7 @@
},
{
"BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.L3_HIT",
@@ -470,6 +597,7 @@
},
{
"BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.L3_MISS",
@@ -480,6 +608,7 @@
},
{
"BriefDescription": "Retired load instructions with local Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_RETIRED.LOCAL_PMM",
@@ -490,6 +619,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.ANY_SNOOP OCR.ALL_DATA_RD.L3_HIT.ANY_SNOOP OCR.ALL_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -499,6 +629,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -508,6 +639,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -517,6 +649,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -526,6 +659,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -535,6 +669,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -544,6 +679,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.SNOOP_MISS OCR.ALL_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -553,6 +689,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT.SNOOP_NONE OCR.ALL_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -562,6 +699,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_E.ANY_SNOOP OCR.ALL_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -571,6 +709,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_E.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -580,6 +719,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -589,6 +729,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -598,6 +739,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -607,6 +749,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -616,6 +759,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -625,6 +769,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_F.ANY_SNOOP OCR.ALL_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -634,6 +779,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_F.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -643,6 +789,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -652,6 +799,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -661,6 +809,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -670,6 +819,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -679,6 +829,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -688,6 +839,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_M.ANY_SNOOP OCR.ALL_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -697,6 +849,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_M.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -706,6 +859,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -715,6 +869,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -724,6 +879,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -733,6 +889,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -742,6 +899,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -751,6 +909,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_S.ANY_SNOOP OCR.ALL_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -760,6 +919,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_S.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -769,6 +929,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -778,6 +939,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -787,6 +949,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -796,6 +959,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -805,6 +969,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -814,6 +979,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -823,6 +989,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -832,6 +999,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -841,6 +1009,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -850,6 +1019,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -859,6 +1029,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -868,6 +1039,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISS OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -877,6 +1049,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONE OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -886,6 +1059,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_E.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -895,6 +1069,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_E.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -904,6 +1079,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -913,6 +1089,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -922,6 +1099,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -931,6 +1109,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -940,6 +1119,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -949,6 +1129,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_F.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -958,6 +1139,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_F.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -967,6 +1149,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -976,6 +1159,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -985,6 +1169,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -994,6 +1179,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1003,6 +1189,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1012,6 +1199,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_M.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1021,6 +1209,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_M.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1030,6 +1219,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1039,6 +1229,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1048,6 +1239,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1057,6 +1249,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1066,6 +1259,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1075,6 +1269,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_S.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1084,6 +1279,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_S.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1093,6 +1289,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1102,6 +1299,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1111,6 +1309,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1120,6 +1319,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1129,6 +1329,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1138,6 +1339,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1147,6 +1349,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1156,6 +1359,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1165,6 +1369,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1174,6 +1379,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1183,6 +1389,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1192,6 +1399,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.SNOOP_MISS OCR.ALL_PF_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1201,6 +1409,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT.SNOOP_NONE OCR.ALL_PF_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1210,6 +1419,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_E.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1219,6 +1429,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_E.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1228,6 +1439,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1237,6 +1449,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1246,6 +1459,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_E.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1255,6 +1469,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1264,6 +1479,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1273,6 +1489,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_F.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1282,6 +1499,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_F.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1291,6 +1509,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1300,6 +1519,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1309,6 +1529,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_F.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1318,6 +1539,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1327,6 +1549,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1336,6 +1559,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_M.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1345,6 +1569,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_M.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1354,6 +1579,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1363,6 +1589,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1372,6 +1599,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_M.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1381,6 +1609,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1390,6 +1619,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1399,6 +1629,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_S.ANY_SNOOP OCR.ALL_PF_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1408,6 +1639,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_S.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1417,6 +1649,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1426,6 +1659,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1435,6 +1669,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_S.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1444,6 +1679,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1453,6 +1689,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1462,6 +1699,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.ANY_SNOOP OCR.ALL_READS.L3_HIT.ANY_SNOOP OCR.ALL_READS.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1471,6 +1709,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1480,6 +1719,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1489,6 +1729,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1498,6 +1739,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1507,6 +1749,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1516,6 +1759,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.SNOOP_MISS OCR.ALL_READS.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1525,6 +1769,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT.SNOOP_NONE OCR.ALL_READS.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1534,6 +1779,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_E.ANY_SNOOP OCR.ALL_READS.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1543,6 +1789,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_E.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1552,6 +1799,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1561,6 +1809,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1570,6 +1819,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_E.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1579,6 +1829,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1588,6 +1839,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1597,6 +1849,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_F.ANY_SNOOP OCR.ALL_READS.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1606,6 +1859,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_F.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1615,6 +1869,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1624,6 +1879,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1633,6 +1889,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_F.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1642,6 +1899,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1651,6 +1909,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1660,6 +1919,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_M.ANY_SNOOP OCR.ALL_READS.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1669,6 +1929,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_M.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1678,6 +1939,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1687,6 +1949,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1696,6 +1959,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_M.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1705,6 +1969,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1714,6 +1979,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1723,6 +1989,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_S.ANY_SNOOP OCR.ALL_READS.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1732,6 +1999,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_S.HITM_OTHER_CORE OCR.ALL_READS.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1741,6 +2009,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1750,6 +2019,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1759,6 +2029,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_S.NO_SNOOP_NEEDED OCR.ALL_READS.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1768,6 +2039,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1777,6 +2049,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1786,6 +2059,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.ANY_SNOOP OCR.ALL_RFO.L3_HIT.ANY_SNOOP OCR.ALL_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1795,6 +2069,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1804,6 +2079,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1813,6 +2089,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1822,6 +2099,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1831,6 +2109,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1840,6 +2119,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.SNOOP_MISS OCR.ALL_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1849,6 +2129,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT.SNOOP_NONE OCR.ALL_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1858,6 +2139,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_E.ANY_SNOOP OCR.ALL_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1867,6 +2149,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_E.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1876,6 +2159,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1885,6 +2169,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1894,6 +2179,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_E.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1903,6 +2189,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1912,6 +2199,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1921,6 +2209,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_F.ANY_SNOOP OCR.ALL_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1930,6 +2219,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_F.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1939,6 +2229,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1948,6 +2239,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1957,6 +2249,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_F.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1966,6 +2259,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1975,6 +2269,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1984,6 +2279,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_M.ANY_SNOOP OCR.ALL_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1993,6 +2289,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_M.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2002,6 +2299,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2011,6 +2309,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2020,6 +2319,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_M.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2029,6 +2329,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2038,6 +2339,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2047,6 +2349,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_S.ANY_SNOOP OCR.ALL_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2056,6 +2359,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_S.HITM_OTHER_CORE OCR.ALL_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2065,6 +2369,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2074,6 +2379,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2083,6 +2389,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_S.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2092,6 +2399,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2101,6 +2409,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2109,7 +2418,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all demand code reads have any response type.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP OCR.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2119,6 +2439,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CORE OCR.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2128,6 +2449,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2137,6 +2459,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2146,6 +2469,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED OCR.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2155,6 +2479,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2164,6 +2489,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2173,6 +2499,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2182,6 +2509,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2191,6 +2519,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2200,6 +2529,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2209,6 +2539,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2218,6 +2549,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2227,6 +2559,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2236,6 +2569,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2245,6 +2579,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2254,6 +2589,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2263,6 +2599,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2272,6 +2609,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2281,6 +2619,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2290,6 +2629,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2299,6 +2639,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2308,6 +2649,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2317,6 +2659,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2326,6 +2669,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2335,6 +2679,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2344,6 +2689,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2353,6 +2699,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2362,6 +2709,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2371,6 +2719,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2380,6 +2729,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2389,6 +2739,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2398,6 +2749,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2407,6 +2759,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2416,6 +2769,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2425,6 +2779,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2433,7 +2788,118 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F80400004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x80400004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100400004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F80020004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1000020004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x800020004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x400020004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100020004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x200020004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x80020004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads have any response type.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT.ANY_SNOOP OCR.DEMAND_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2443,6 +2909,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2452,6 +2919,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2461,6 +2929,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2470,6 +2939,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2479,6 +2949,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2488,6 +2959,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2497,6 +2969,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2506,6 +2979,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2515,6 +2989,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2524,6 +2999,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2533,6 +3009,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2542,6 +3019,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2551,6 +3029,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2560,6 +3039,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2569,6 +3049,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2578,6 +3059,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2587,6 +3069,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2596,6 +3079,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2605,6 +3089,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2614,6 +3099,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2623,6 +3109,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2632,6 +3119,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2641,6 +3129,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2650,6 +3139,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2659,6 +3149,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2668,6 +3159,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2677,6 +3169,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2686,6 +3179,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2695,6 +3189,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2704,6 +3199,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2713,6 +3209,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2722,6 +3219,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2731,6 +3229,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2740,6 +3239,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2749,6 +3249,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2757,7 +3258,118 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F80400001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x80400001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100400001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F80020001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1000020001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x800020001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x400020001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100020001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x200020001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x80020001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) have any response type.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT.ANY_SNOOP OCR.DEMAND_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2767,6 +3379,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2776,6 +3389,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2785,6 +3399,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2794,6 +3409,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2803,6 +3419,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2812,6 +3429,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2821,6 +3439,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2830,6 +3449,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2839,6 +3459,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2848,6 +3469,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2857,6 +3479,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2866,6 +3489,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2875,6 +3499,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2884,6 +3509,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2893,6 +3519,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2902,6 +3529,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2911,6 +3539,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2920,6 +3549,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2929,6 +3559,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2938,6 +3569,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2947,6 +3579,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2956,6 +3589,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2965,6 +3599,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2974,6 +3609,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2983,6 +3619,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2992,6 +3629,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3001,6 +3639,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3010,6 +3649,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3019,6 +3659,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3028,6 +3669,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3037,6 +3679,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3046,6 +3689,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3055,6 +3699,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3064,6 +3709,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3073,6 +3719,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3081,7 +3728,108 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F80400002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x80400002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100400002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F80020002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1000020002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x800020002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x400020002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100020002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x200020002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x80020002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT.ANY_SNOOP OCR.OTHER.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3091,6 +3839,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT.HITM_OTHER_CORE OCR.OTHER.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3100,6 +3849,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT.HIT_OTHER_CORE_FWD OCR.OTHER.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3109,6 +3859,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.OTHER.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3118,6 +3869,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT.NO_SNOOP_NEEDED OCR.OTHER.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3127,6 +3879,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3136,6 +3889,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3145,6 +3899,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3154,6 +3909,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3163,6 +3919,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3172,6 +3929,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3181,6 +3939,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3190,6 +3949,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3199,6 +3959,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3208,6 +3969,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3217,6 +3979,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3226,6 +3989,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3235,6 +3999,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3244,6 +4009,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3253,6 +4019,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3262,6 +4029,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3271,6 +4039,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3280,6 +4049,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3289,6 +4059,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3298,6 +4069,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3307,6 +4079,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3316,6 +4089,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3325,6 +4099,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3334,6 +4109,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3343,6 +4119,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3352,6 +4129,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3361,6 +4139,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3370,6 +4149,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3379,6 +4159,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3388,6 +4169,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3397,6 +4179,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3406,6 +4189,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT.ANY_SNOOP OCR.PF_L1D_AND_SW.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3415,6 +4199,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT.HITM_OTHER_CORE OCR.PF_L1D_AND_SW.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3424,6 +4209,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_FWD OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3433,6 +4219,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3442,6 +4229,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDED OCR.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3451,6 +4239,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3460,6 +4249,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3469,6 +4259,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3478,6 +4269,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3487,6 +4279,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3496,6 +4289,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3505,6 +4299,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3514,6 +4309,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3523,6 +4319,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3532,6 +4329,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3541,6 +4339,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3550,6 +4349,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3559,6 +4359,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3568,6 +4369,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3577,6 +4379,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3586,6 +4389,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3595,6 +4399,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3604,6 +4409,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3613,6 +4419,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3622,6 +4429,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3631,6 +4439,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3640,6 +4449,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3649,6 +4459,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3658,6 +4469,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3667,6 +4479,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3676,6 +4489,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3685,6 +4499,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3694,6 +4509,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3703,6 +4519,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3712,6 +4529,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3721,6 +4539,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3730,6 +4549,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT.ANY_SNOOP OCR.PF_L2_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3739,6 +4559,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.PF_L2_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3748,6 +4569,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3757,6 +4579,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3766,6 +4589,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3775,6 +4599,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3784,6 +4609,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3793,6 +4619,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3802,6 +4629,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3811,6 +4639,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3820,6 +4649,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3829,6 +4659,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3838,6 +4669,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3847,6 +4679,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3856,6 +4689,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3865,6 +4699,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3874,6 +4709,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3883,6 +4719,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3892,6 +4729,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3901,6 +4739,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3910,6 +4749,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3919,6 +4759,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3928,6 +4769,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3937,6 +4779,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3946,6 +4789,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3955,6 +4799,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3964,6 +4809,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3973,6 +4819,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3982,6 +4829,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3991,6 +4839,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4000,6 +4849,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4009,6 +4859,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4018,6 +4869,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4027,6 +4879,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4036,6 +4889,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4045,6 +4899,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4054,6 +4909,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT.ANY_SNOOP OCR.PF_L2_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4063,6 +4919,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4072,6 +4929,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4081,6 +4939,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4090,6 +4949,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4099,6 +4959,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4108,6 +4969,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4117,6 +4979,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4126,6 +4989,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4135,6 +4999,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4144,6 +5009,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4153,6 +5019,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4162,6 +5029,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4171,6 +5039,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4180,6 +5049,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4189,6 +5059,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4198,6 +5069,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4207,6 +5079,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4216,6 +5089,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4225,6 +5099,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4234,6 +5109,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4243,6 +5119,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4252,6 +5129,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4261,6 +5139,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4270,6 +5149,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4279,6 +5159,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4288,6 +5169,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4297,6 +5179,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4306,6 +5189,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4315,6 +5199,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4324,6 +5209,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4333,6 +5219,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4342,6 +5229,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4351,6 +5239,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4360,6 +5249,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4369,6 +5259,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4378,6 +5269,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT.ANY_SNOOP OCR.PF_L3_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4387,6 +5279,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT.HITM_OTHER_CORE OCR.PF_L3_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4396,6 +5289,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4405,6 +5299,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4414,6 +5309,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDED OCR.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4423,6 +5319,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4432,6 +5329,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4441,6 +5339,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4450,6 +5349,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4459,6 +5359,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4468,6 +5369,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4477,6 +5379,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4486,6 +5389,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4495,6 +5399,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4504,6 +5409,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4513,6 +5419,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4522,6 +5429,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4531,6 +5439,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4540,6 +5449,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4549,6 +5459,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4558,6 +5469,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4567,6 +5479,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4576,6 +5489,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4585,6 +5499,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4594,6 +5509,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4603,6 +5519,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4612,6 +5529,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4621,6 +5539,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4630,6 +5549,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4639,6 +5559,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4648,6 +5569,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4657,6 +5579,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4666,6 +5589,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4675,6 +5599,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4684,6 +5609,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4693,6 +5619,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4702,6 +5629,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT.ANY_SNOOP OCR.PF_L3_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4711,6 +5639,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT.HITM_OTHER_CORE OCR.PF_L3_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4720,6 +5649,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_FWD OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4729,6 +5659,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4738,6 +5669,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED OCR.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4747,6 +5679,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4756,6 +5689,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4765,6 +5699,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4774,6 +5709,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_E.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4783,6 +5719,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_E.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4792,6 +5729,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4801,6 +5739,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4810,6 +5749,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4819,6 +5759,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_E.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4828,6 +5769,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_E.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4837,6 +5779,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_F.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4846,6 +5789,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_F.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4855,6 +5799,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4864,6 +5809,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4873,6 +5819,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4882,6 +5829,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_F.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4891,6 +5839,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_F.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4900,6 +5849,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_M.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4909,6 +5859,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_M.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4918,6 +5869,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4927,6 +5879,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4936,6 +5889,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -4945,6 +5899,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_M.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -4954,6 +5909,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_M.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4963,6 +5919,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_S.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -4972,6 +5929,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_S.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -4981,6 +5939,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4990,6 +5949,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -4999,6 +5959,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -5008,6 +5969,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_S.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -5017,6 +5979,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_HIT_S.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -5026,6 +5989,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -5034,6 +5998,7 @@
},
{
"BriefDescription": "Any memory transaction that reached the SQ.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"PublicDescription": "Counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, etc..",
@@ -5042,6 +6007,7 @@
},
{
"BriefDescription": "Cacheable and non-cacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "Counts both cacheable and non-cacheable code read requests.",
@@ -5050,6 +6016,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -5058,6 +6025,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
@@ -5066,6 +6034,7 @@
},
{
"BriefDescription": "Offcore requests buffer cannot take more entries for this thread core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
"PublicDescription": "Counts the number of cases when the offcore requests buffer cannot take more entries for the core. This can happen when the superqueue does not contain eligible entries, or when L1D writeback pending FIFO requests is full.Note: Writeback pending FIFO has six entries.",
@@ -5074,6 +6043,7 @@
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
"PublicDescription": "Counts the number of offcore outstanding cacheable Core Data Read transactions in the super queue every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.",
@@ -5082,6 +6052,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
@@ -5091,6 +6062,7 @@
},
{
"BriefDescription": "Cycles with offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
@@ -5100,6 +6072,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
@@ -5109,6 +6082,7 @@
},
{
"BriefDescription": "Cycles with offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
@@ -5118,6 +6092,7 @@
},
{
"BriefDescription": "Offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore, every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
"PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
@@ -5126,6 +6101,7 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "Counts the number of offcore outstanding Demand Data Read transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor. See the corresponding Umask under OFFCORE_REQUESTS.Note: A prefetch promoted to Demand is counted from the promotion point.",
@@ -5134,6 +6110,7 @@
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_GE_6",
@@ -5142,6 +6119,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
"PublicDescription": "Counts the number of offcore outstanding RFO (store) transactions in the super queue (SQ) every cycle. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.",
@@ -5150,6 +6128,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.ANY_RESPONSE",
@@ -5160,6 +6139,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.ANY_SNOOP",
@@ -5170,6 +6150,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE",
@@ -5180,6 +6161,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -5190,6 +6172,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -5200,6 +6183,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
@@ -5210,6 +6194,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -5220,6 +6205,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_MISS",
@@ -5230,6 +6216,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_NONE",
@@ -5240,6 +6227,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_E.ANY_SNOOP",
@@ -5250,6 +6238,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
@@ -5260,6 +6249,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -5270,6 +6260,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -5280,6 +6271,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -5290,6 +6282,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_E.SNOOP_MISS",
@@ -5300,6 +6293,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_E.SNOOP_NONE",
@@ -5310,6 +6304,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_F.ANY_SNOOP",
@@ -5320,6 +6315,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
@@ -5330,6 +6326,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -5340,6 +6337,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -5350,6 +6348,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -5360,6 +6359,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_F.SNOOP_MISS",
@@ -5370,6 +6370,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_F.SNOOP_NONE",
@@ -5380,6 +6381,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_M.ANY_SNOOP",
@@ -5390,6 +6392,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
@@ -5400,6 +6403,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -5410,6 +6414,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -5420,6 +6425,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -5430,6 +6436,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_M.SNOOP_MISS",
@@ -5440,6 +6447,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_M.SNOOP_NONE",
@@ -5450,6 +6458,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_S.ANY_SNOOP",
@@ -5460,6 +6469,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
@@ -5470,6 +6480,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -5480,6 +6491,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -5490,6 +6502,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -5500,6 +6513,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_S.SNOOP_MISS",
@@ -5510,6 +6524,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT_S.SNOOP_NONE",
@@ -5520,6 +6535,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -5530,6 +6546,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -5540,6 +6557,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -5550,6 +6568,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
@@ -5560,6 +6579,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -5570,6 +6590,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -5580,6 +6601,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -5590,6 +6612,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -5600,6 +6623,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
@@ -5610,6 +6634,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
@@ -5620,6 +6645,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.ANY_RESPONSE",
@@ -5630,6 +6656,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.ANY_SNOOP",
@@ -5640,6 +6667,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.HITM_OTHER_CORE",
@@ -5650,6 +6678,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -5660,6 +6689,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -5670,6 +6700,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
@@ -5680,6 +6711,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -5690,6 +6722,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_MISS",
@@ -5700,6 +6733,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_NONE",
@@ -5710,6 +6744,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_E.ANY_SNOOP",
@@ -5720,6 +6755,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
@@ -5730,6 +6766,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -5740,6 +6777,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -5750,6 +6788,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -5760,6 +6799,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_MISS",
@@ -5770,6 +6810,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_E.SNOOP_NONE",
@@ -5780,6 +6821,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_F.ANY_SNOOP",
@@ -5790,6 +6832,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
@@ -5800,6 +6843,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -5810,6 +6854,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -5820,6 +6865,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -5830,6 +6876,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_MISS",
@@ -5840,6 +6887,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_F.SNOOP_NONE",
@@ -5850,6 +6898,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_M.ANY_SNOOP",
@@ -5860,6 +6909,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
@@ -5870,6 +6920,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -5880,6 +6931,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -5890,6 +6942,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -5900,6 +6953,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_MISS",
@@ -5910,6 +6964,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_M.SNOOP_NONE",
@@ -5920,6 +6975,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_S.ANY_SNOOP",
@@ -5930,6 +6986,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
@@ -5940,6 +6997,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -5950,6 +7008,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -5960,6 +7019,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -5970,6 +7030,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_MISS",
@@ -5980,6 +7041,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT_S.SNOOP_NONE",
@@ -5990,6 +7052,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -6000,6 +7063,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -6010,6 +7074,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -6020,6 +7085,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
@@ -6030,6 +7096,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -6040,6 +7107,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -6050,6 +7118,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -6060,6 +7129,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -6070,6 +7140,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
@@ -6080,6 +7151,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
@@ -6090,6 +7162,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.ANY_RESPONSE",
@@ -6100,6 +7173,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.ANY_SNOOP",
@@ -6110,6 +7184,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.HITM_OTHER_CORE",
@@ -6120,6 +7195,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -6130,6 +7206,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -6140,6 +7217,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED",
@@ -6150,6 +7228,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -6160,6 +7239,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_MISS",
@@ -6170,6 +7250,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_NONE",
@@ -6180,6 +7261,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_E.ANY_SNOOP",
@@ -6190,6 +7272,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_E.HITM_OTHER_CORE",
@@ -6200,6 +7283,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -6210,6 +7294,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -6220,6 +7305,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -6230,6 +7316,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_E.SNOOP_MISS",
@@ -6240,6 +7327,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_E.SNOOP_NONE",
@@ -6250,6 +7338,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_F.ANY_SNOOP",
@@ -6260,6 +7349,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_F.HITM_OTHER_CORE",
@@ -6270,6 +7360,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -6280,6 +7371,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -6290,6 +7382,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -6300,6 +7393,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_F.SNOOP_MISS",
@@ -6310,6 +7404,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_F.SNOOP_NONE",
@@ -6320,6 +7415,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_M.ANY_SNOOP",
@@ -6330,6 +7426,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_M.HITM_OTHER_CORE",
@@ -6340,6 +7437,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -6350,6 +7448,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -6360,6 +7459,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -6370,6 +7470,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_M.SNOOP_MISS",
@@ -6380,6 +7481,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_M.SNOOP_NONE",
@@ -6390,6 +7492,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_S.ANY_SNOOP",
@@ -6400,6 +7503,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_S.HITM_OTHER_CORE",
@@ -6410,6 +7514,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -6420,6 +7525,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -6430,6 +7536,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -6440,6 +7547,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_S.SNOOP_MISS",
@@ -6450,6 +7558,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT_S.SNOOP_NONE",
@@ -6460,6 +7569,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -6470,6 +7580,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -6480,6 +7591,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -6490,6 +7602,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.ANY_SNOOP",
@@ -6500,6 +7613,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -6510,6 +7624,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -6520,6 +7635,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -6530,6 +7646,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -6540,6 +7657,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_MISS",
@@ -6550,6 +7668,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_NONE",
@@ -6560,6 +7679,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.ANY_RESPONSE",
@@ -6570,6 +7690,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.ANY_SNOOP",
@@ -6580,6 +7701,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.HITM_OTHER_CORE",
@@ -6590,6 +7712,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -6600,6 +7723,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -6610,6 +7734,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.NO_SNOOP_NEEDED",
@@ -6620,6 +7745,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -6630,6 +7756,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.SNOOP_MISS",
@@ -6640,6 +7767,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.SNOOP_NONE",
@@ -6650,6 +7778,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_E.ANY_SNOOP",
@@ -6660,6 +7789,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_E.HITM_OTHER_CORE",
@@ -6670,6 +7800,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -6680,6 +7811,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -6690,6 +7822,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -6700,6 +7833,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_E.SNOOP_MISS",
@@ -6710,6 +7844,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_E.SNOOP_NONE",
@@ -6720,6 +7855,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_F.ANY_SNOOP",
@@ -6730,6 +7866,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_F.HITM_OTHER_CORE",
@@ -6740,6 +7877,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -6750,6 +7888,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -6760,6 +7899,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -6770,6 +7910,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_F.SNOOP_MISS",
@@ -6780,6 +7921,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_F.SNOOP_NONE",
@@ -6790,6 +7932,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_M.ANY_SNOOP",
@@ -6800,6 +7943,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_M.HITM_OTHER_CORE",
@@ -6810,6 +7954,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -6820,6 +7965,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -6830,6 +7976,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -6840,6 +7987,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_M.SNOOP_MISS",
@@ -6850,6 +7998,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_M.SNOOP_NONE",
@@ -6860,6 +8009,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_S.ANY_SNOOP",
@@ -6870,6 +8020,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_S.HITM_OTHER_CORE",
@@ -6880,6 +8031,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -6890,6 +8042,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -6900,6 +8053,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -6910,6 +8064,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_S.SNOOP_MISS",
@@ -6920,6 +8075,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT_S.SNOOP_NONE",
@@ -6930,6 +8086,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -6940,6 +8097,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -6950,6 +8108,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -6960,6 +8119,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.SUPPLIER_NONE.ANY_SNOOP",
@@ -6970,6 +8130,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -6980,6 +8141,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -6990,6 +8152,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -7000,6 +8163,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -7010,6 +8174,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.SUPPLIER_NONE.SNOOP_MISS",
@@ -7020,6 +8185,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.SUPPLIER_NONE.SNOOP_NONE",
@@ -7030,6 +8196,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.ANY_RESPONSE",
@@ -7040,6 +8207,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.ANY_SNOOP",
@@ -7050,6 +8218,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HITM_OTHER_CORE",
@@ -7060,6 +8229,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -7070,6 +8240,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -7080,6 +8251,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED",
@@ -7090,6 +8262,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -7100,6 +8273,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_MISS",
@@ -7110,6 +8284,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_NONE",
@@ -7120,6 +8295,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_E.ANY_SNOOP",
@@ -7130,6 +8306,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_E.HITM_OTHER_CORE",
@@ -7140,6 +8317,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -7150,6 +8328,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -7160,6 +8339,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -7170,6 +8350,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_E.SNOOP_MISS",
@@ -7180,6 +8361,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_E.SNOOP_NONE",
@@ -7190,6 +8372,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_F.ANY_SNOOP",
@@ -7200,6 +8383,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_F.HITM_OTHER_CORE",
@@ -7210,6 +8394,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -7220,6 +8405,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -7230,6 +8416,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -7240,6 +8427,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_F.SNOOP_MISS",
@@ -7250,6 +8438,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_F.SNOOP_NONE",
@@ -7260,6 +8449,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_M.ANY_SNOOP",
@@ -7270,6 +8460,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_M.HITM_OTHER_CORE",
@@ -7280,6 +8471,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -7290,6 +8482,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -7300,6 +8493,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -7310,6 +8504,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_M.SNOOP_MISS",
@@ -7320,6 +8515,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_M.SNOOP_NONE",
@@ -7330,6 +8526,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_S.ANY_SNOOP",
@@ -7340,6 +8537,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_S.HITM_OTHER_CORE",
@@ -7350,6 +8548,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -7360,6 +8559,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -7370,6 +8570,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -7380,6 +8581,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_S.SNOOP_MISS",
@@ -7390,6 +8592,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT_S.SNOOP_NONE",
@@ -7400,6 +8603,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -7410,6 +8614,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -7420,6 +8625,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -7430,6 +8636,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.ANY_SNOOP",
@@ -7440,6 +8647,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -7450,6 +8658,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -7460,6 +8669,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -7470,6 +8680,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -7480,6 +8691,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_MISS",
@@ -7490,6 +8702,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.SUPPLIER_NONE.SNOOP_NONE",
@@ -7500,6 +8713,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE",
@@ -7510,6 +8724,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.ANY_SNOOP",
@@ -7520,6 +8735,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CORE",
@@ -7530,6 +8746,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -7540,6 +8757,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -7550,6 +8768,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED",
@@ -7560,6 +8779,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -7570,6 +8790,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_MISS",
@@ -7580,6 +8801,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_NONE",
@@ -7590,6 +8812,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_E.ANY_SNOOP",
@@ -7600,6 +8823,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_E.HITM_OTHER_CORE",
@@ -7610,6 +8834,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -7620,6 +8845,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -7630,6 +8856,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -7640,6 +8867,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_E.SNOOP_MISS",
@@ -7650,6 +8878,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_E.SNOOP_NONE",
@@ -7660,6 +8889,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_F.ANY_SNOOP",
@@ -7670,6 +8900,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_F.HITM_OTHER_CORE",
@@ -7680,6 +8911,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -7690,6 +8922,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -7700,6 +8933,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -7710,6 +8944,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_F.SNOOP_MISS",
@@ -7720,6 +8955,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_F.SNOOP_NONE",
@@ -7730,6 +8966,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_M.ANY_SNOOP",
@@ -7740,6 +8977,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_M.HITM_OTHER_CORE",
@@ -7750,6 +8988,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -7760,6 +8999,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -7770,6 +9010,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -7780,6 +9021,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_M.SNOOP_MISS",
@@ -7790,6 +9032,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_M.SNOOP_NONE",
@@ -7800,6 +9043,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_S.ANY_SNOOP",
@@ -7810,6 +9054,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_S.HITM_OTHER_CORE",
@@ -7820,6 +9065,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -7830,6 +9076,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -7840,6 +9087,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -7850,6 +9098,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_S.SNOOP_MISS",
@@ -7860,6 +9109,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT_S.SNOOP_NONE",
@@ -7870,6 +9120,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -7880,6 +9131,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -7890,6 +9142,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -7900,6 +9153,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
@@ -7910,6 +9164,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -7920,6 +9175,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -7930,6 +9186,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -7940,6 +9197,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -7950,6 +9208,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
@@ -7960,6 +9219,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
@@ -7970,6 +9230,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE",
@@ -7980,6 +9241,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.ANY_SNOOP",
@@ -7990,6 +9252,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE",
@@ -8000,6 +9263,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -8010,6 +9274,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -8020,6 +9285,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
@@ -8030,6 +9296,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -8040,6 +9307,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_MISS",
@@ -8050,6 +9318,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_NONE",
@@ -8060,6 +9329,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_E.ANY_SNOOP",
@@ -8070,6 +9340,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
@@ -8080,6 +9351,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -8090,6 +9362,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -8100,6 +9373,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -8110,6 +9384,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_E.SNOOP_MISS",
@@ -8120,6 +9395,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_E.SNOOP_NONE",
@@ -8130,6 +9406,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_F.ANY_SNOOP",
@@ -8140,6 +9417,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
@@ -8150,6 +9428,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -8160,6 +9439,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -8170,6 +9450,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -8180,6 +9461,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_F.SNOOP_MISS",
@@ -8190,6 +9472,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_F.SNOOP_NONE",
@@ -8200,6 +9483,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_M.ANY_SNOOP",
@@ -8210,6 +9494,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
@@ -8220,6 +9505,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -8230,6 +9516,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -8240,6 +9527,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -8250,6 +9538,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_M.SNOOP_MISS",
@@ -8260,6 +9549,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_M.SNOOP_NONE",
@@ -8270,6 +9560,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_S.ANY_SNOOP",
@@ -8280,6 +9571,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
@@ -8290,6 +9582,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -8300,6 +9593,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -8310,6 +9604,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -8320,6 +9615,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_S.SNOOP_MISS",
@@ -8330,6 +9626,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT_S.SNOOP_NONE",
@@ -8340,6 +9637,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -8350,6 +9648,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -8360,6 +9659,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -8370,6 +9670,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
@@ -8380,6 +9681,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -8390,6 +9692,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -8400,6 +9703,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -8410,6 +9714,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -8420,6 +9725,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
@@ -8430,6 +9736,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
@@ -8440,6 +9747,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE",
@@ -8450,6 +9758,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.ANY_SNOOP",
@@ -8460,6 +9769,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE",
@@ -8470,6 +9780,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -8480,6 +9791,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -8490,6 +9802,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED",
@@ -8500,6 +9813,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -8510,6 +9824,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_MISS",
@@ -8520,6 +9835,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_NONE",
@@ -8530,6 +9846,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_E.ANY_SNOOP",
@@ -8540,6 +9857,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_E.HITM_OTHER_CORE",
@@ -8550,6 +9868,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -8560,6 +9879,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -8570,6 +9890,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -8580,6 +9901,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_E.SNOOP_MISS",
@@ -8590,6 +9912,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_E.SNOOP_NONE",
@@ -8600,6 +9923,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_F.ANY_SNOOP",
@@ -8610,6 +9934,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_F.HITM_OTHER_CORE",
@@ -8620,6 +9945,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -8630,6 +9956,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -8640,6 +9967,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -8650,6 +9978,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_F.SNOOP_MISS",
@@ -8660,6 +9989,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_F.SNOOP_NONE",
@@ -8670,6 +10000,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_M.ANY_SNOOP",
@@ -8680,6 +10011,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_M.HITM_OTHER_CORE",
@@ -8690,6 +10022,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -8700,6 +10033,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -8710,6 +10044,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -8720,6 +10055,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_M.SNOOP_MISS",
@@ -8730,6 +10066,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_M.SNOOP_NONE",
@@ -8740,6 +10077,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_S.ANY_SNOOP",
@@ -8750,6 +10088,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_S.HITM_OTHER_CORE",
@@ -8760,6 +10099,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -8770,6 +10110,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -8780,6 +10121,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -8790,6 +10132,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_S.SNOOP_MISS",
@@ -8800,6 +10143,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT_S.SNOOP_NONE",
@@ -8810,6 +10154,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -8820,6 +10165,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -8830,6 +10176,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -8840,6 +10187,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.SUPPLIER_NONE.ANY_SNOOP",
@@ -8850,6 +10198,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -8860,6 +10209,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -8870,6 +10220,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -8880,6 +10231,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -8890,6 +10242,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.SUPPLIER_NONE.SNOOP_MISS",
@@ -8900,6 +10253,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.SUPPLIER_NONE.SNOOP_NONE",
@@ -8910,6 +10264,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.ANY_RESPONSE",
@@ -8920,6 +10275,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.ANY_SNOOP",
@@ -8930,6 +10286,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.HITM_OTHER_CORE",
@@ -8940,6 +10297,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -8950,6 +10308,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -8960,6 +10319,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.NO_SNOOP_NEEDED",
@@ -8970,6 +10330,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -8980,6 +10341,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_MISS",
@@ -8990,6 +10352,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT.SNOOP_NONE",
@@ -9000,6 +10363,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_E.ANY_SNOOP",
@@ -9010,6 +10374,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_E.HITM_OTHER_CORE",
@@ -9020,6 +10385,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -9030,6 +10396,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -9040,6 +10407,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -9050,6 +10418,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_E.SNOOP_MISS",
@@ -9060,6 +10429,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_E.SNOOP_NONE",
@@ -9070,6 +10440,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_F.ANY_SNOOP",
@@ -9080,6 +10451,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_F.HITM_OTHER_CORE",
@@ -9090,6 +10462,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -9100,6 +10473,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -9110,6 +10484,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -9120,6 +10495,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_F.SNOOP_MISS",
@@ -9130,6 +10506,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_F.SNOOP_NONE",
@@ -9140,6 +10517,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_M.ANY_SNOOP",
@@ -9150,6 +10528,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_M.HITM_OTHER_CORE",
@@ -9160,6 +10539,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -9170,6 +10550,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -9180,6 +10561,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -9190,6 +10572,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_M.SNOOP_MISS",
@@ -9200,6 +10583,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_M.SNOOP_NONE",
@@ -9210,6 +10594,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_S.ANY_SNOOP",
@@ -9220,6 +10605,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_S.HITM_OTHER_CORE",
@@ -9230,6 +10616,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -9240,6 +10627,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -9250,6 +10638,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -9260,6 +10649,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_S.SNOOP_MISS",
@@ -9270,6 +10660,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_HIT_S.SNOOP_NONE",
@@ -9280,6 +10671,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -9290,6 +10682,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -9300,6 +10693,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -9310,6 +10704,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.ANY_SNOOP",
@@ -9320,6 +10715,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -9330,6 +10726,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -9340,6 +10737,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -9350,6 +10748,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -9360,6 +10759,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_MISS",
@@ -9370,6 +10770,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.SUPPLIER_NONE.SNOOP_NONE",
@@ -9380,6 +10781,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.ANY_RESPONSE",
@@ -9390,6 +10792,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.ANY_SNOOP",
@@ -9400,6 +10803,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.HITM_OTHER_CORE",
@@ -9410,6 +10814,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -9420,6 +10825,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -9430,6 +10836,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDED",
@@ -9440,6 +10847,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -9450,6 +10858,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_MISS",
@@ -9460,6 +10869,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_NONE",
@@ -9470,6 +10880,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_E.ANY_SNOOP",
@@ -9480,6 +10891,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_E.HITM_OTHER_CORE",
@@ -9490,6 +10902,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -9500,6 +10913,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -9510,6 +10924,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -9520,6 +10935,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_E.SNOOP_MISS",
@@ -9530,6 +10946,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_E.SNOOP_NONE",
@@ -9540,6 +10957,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_F.ANY_SNOOP",
@@ -9550,6 +10968,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_F.HITM_OTHER_CORE",
@@ -9560,6 +10979,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -9570,6 +10990,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -9580,6 +11001,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -9590,6 +11012,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_F.SNOOP_MISS",
@@ -9600,6 +11023,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_F.SNOOP_NONE",
@@ -9610,6 +11034,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_M.ANY_SNOOP",
@@ -9620,6 +11045,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_M.HITM_OTHER_CORE",
@@ -9630,6 +11056,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -9640,6 +11067,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -9650,6 +11078,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -9660,6 +11089,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_M.SNOOP_MISS",
@@ -9670,6 +11100,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_M.SNOOP_NONE",
@@ -9680,6 +11111,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_S.ANY_SNOOP",
@@ -9690,6 +11122,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_S.HITM_OTHER_CORE",
@@ -9700,6 +11133,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -9710,6 +11144,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -9720,6 +11155,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -9730,6 +11166,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_S.SNOOP_MISS",
@@ -9740,6 +11177,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT_S.SNOOP_NONE",
@@ -9750,6 +11188,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -9760,6 +11199,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -9770,6 +11210,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -9780,6 +11221,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.SUPPLIER_NONE.ANY_SNOOP",
@@ -9790,6 +11232,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -9800,6 +11243,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -9810,6 +11254,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -9820,6 +11265,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -9830,6 +11276,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_MISS",
@@ -9840,6 +11287,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_NONE",
@@ -9850,6 +11298,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.ANY_RESPONSE",
@@ -9860,6 +11309,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.ANY_SNOOP",
@@ -9870,6 +11320,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.HITM_OTHER_CORE",
@@ -9880,6 +11331,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -9890,6 +11342,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -9900,6 +11353,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
@@ -9910,6 +11364,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -9920,6 +11375,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_MISS",
@@ -9930,6 +11386,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_NONE",
@@ -9940,6 +11397,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_E.ANY_SNOOP",
@@ -9950,6 +11408,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
@@ -9960,6 +11419,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -9970,6 +11430,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -9980,6 +11441,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -9990,6 +11452,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_E.SNOOP_MISS",
@@ -10000,6 +11463,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_E.SNOOP_NONE",
@@ -10010,6 +11474,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_F.ANY_SNOOP",
@@ -10020,6 +11485,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
@@ -10030,6 +11496,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -10040,6 +11507,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -10050,6 +11518,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -10060,6 +11529,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_F.SNOOP_MISS",
@@ -10070,6 +11540,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_F.SNOOP_NONE",
@@ -10080,6 +11551,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_M.ANY_SNOOP",
@@ -10090,6 +11562,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
@@ -10100,6 +11573,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -10110,6 +11584,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -10120,6 +11595,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -10130,6 +11606,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_M.SNOOP_MISS",
@@ -10140,6 +11617,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_M.SNOOP_NONE",
@@ -10150,6 +11628,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_S.ANY_SNOOP",
@@ -10160,6 +11639,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
@@ -10170,6 +11650,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -10180,6 +11661,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -10190,6 +11672,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -10200,6 +11683,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_S.SNOOP_MISS",
@@ -10210,6 +11694,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT_S.SNOOP_NONE",
@@ -10220,6 +11705,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -10230,6 +11716,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -10240,6 +11727,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -10250,6 +11738,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
@@ -10260,6 +11749,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -10270,6 +11760,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -10280,6 +11771,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -10290,6 +11782,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -10300,6 +11793,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
@@ -10310,6 +11804,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
@@ -10320,6 +11815,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.ANY_RESPONSE",
@@ -10330,6 +11826,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.ANY_SNOOP",
@@ -10340,6 +11837,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE",
@@ -10350,6 +11848,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -10360,6 +11859,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -10370,6 +11870,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED",
@@ -10380,6 +11881,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -10390,6 +11892,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_MISS",
@@ -10400,6 +11903,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_NONE",
@@ -10410,6 +11914,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_E.ANY_SNOOP",
@@ -10420,6 +11925,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_E.HITM_OTHER_CORE",
@@ -10430,6 +11936,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -10440,6 +11947,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -10450,6 +11958,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -10460,6 +11969,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_E.SNOOP_MISS",
@@ -10470,6 +11980,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_E.SNOOP_NONE",
@@ -10480,6 +11991,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_F.ANY_SNOOP",
@@ -10490,6 +12002,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_F.HITM_OTHER_CORE",
@@ -10500,6 +12013,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -10510,6 +12024,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -10520,6 +12035,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -10530,6 +12046,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_F.SNOOP_MISS",
@@ -10540,6 +12057,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_F.SNOOP_NONE",
@@ -10550,6 +12068,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_M.ANY_SNOOP",
@@ -10560,6 +12079,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_M.HITM_OTHER_CORE",
@@ -10570,6 +12090,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -10580,6 +12101,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -10590,6 +12112,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -10600,6 +12123,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_M.SNOOP_MISS",
@@ -10610,6 +12134,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_M.SNOOP_NONE",
@@ -10620,6 +12145,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_S.ANY_SNOOP",
@@ -10630,6 +12156,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_S.HITM_OTHER_CORE",
@@ -10640,6 +12167,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -10650,6 +12178,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -10660,6 +12189,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -10670,6 +12200,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_S.SNOOP_MISS",
@@ -10680,6 +12211,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT_S.SNOOP_NONE",
@@ -10690,6 +12222,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -10700,6 +12233,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -10710,6 +12244,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -10720,6 +12255,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.ANY_SNOOP",
@@ -10730,6 +12266,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -10740,6 +12277,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -10750,6 +12288,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -10760,6 +12299,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -10770,6 +12310,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_MISS",
@@ -10780,6 +12321,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.SUPPLIER_NONE.SNOOP_NONE",
@@ -10790,6 +12332,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.ANY_RESPONSE",
@@ -10800,6 +12343,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.ANY_SNOOP",
@@ -10810,6 +12354,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.HITM_OTHER_CORE",
@@ -10820,6 +12365,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -10830,6 +12376,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -10840,6 +12387,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
@@ -10850,6 +12398,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -10860,6 +12409,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_MISS",
@@ -10870,6 +12420,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_NONE",
@@ -10880,6 +12431,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_E.ANY_SNOOP",
@@ -10890,6 +12442,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_E.HITM_OTHER_CORE",
@@ -10900,6 +12453,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -10910,6 +12464,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -10920,6 +12475,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -10930,6 +12486,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_E.SNOOP_MISS",
@@ -10940,6 +12497,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_E.SNOOP_NONE",
@@ -10950,6 +12508,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_F.ANY_SNOOP",
@@ -10960,6 +12519,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_F.HITM_OTHER_CORE",
@@ -10970,6 +12530,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -10980,6 +12541,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -10990,6 +12552,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -11000,6 +12563,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_F.SNOOP_MISS",
@@ -11010,6 +12574,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_F.SNOOP_NONE",
@@ -11020,6 +12585,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_M.ANY_SNOOP",
@@ -11030,6 +12596,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_M.HITM_OTHER_CORE",
@@ -11040,6 +12607,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -11050,6 +12618,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -11060,6 +12629,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -11070,6 +12640,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_M.SNOOP_MISS",
@@ -11080,6 +12651,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_M.SNOOP_NONE",
@@ -11090,6 +12662,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_S.ANY_SNOOP",
@@ -11100,6 +12673,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_S.HITM_OTHER_CORE",
@@ -11110,6 +12684,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -11120,6 +12695,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -11130,6 +12706,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -11140,6 +12717,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_S.SNOOP_MISS",
@@ -11150,6 +12728,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT_S.SNOOP_NONE",
@@ -11160,6 +12739,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -11170,6 +12750,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -11180,6 +12761,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -11190,6 +12772,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
@@ -11200,6 +12783,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -11210,6 +12794,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -11220,6 +12805,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -11230,6 +12816,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -11240,6 +12827,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
@@ -11250,6 +12838,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
@@ -11260,6 +12849,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.ANY_RESPONSE",
@@ -11270,6 +12860,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.ANY_SNOOP",
@@ -11280,6 +12871,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.HITM_OTHER_CORE",
@@ -11290,6 +12882,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_FWD",
@@ -11300,6 +12893,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
@@ -11310,6 +12904,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED",
@@ -11320,6 +12915,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
@@ -11330,6 +12926,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_MISS",
@@ -11340,6 +12937,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_NONE",
@@ -11350,6 +12948,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_E.ANY_SNOOP",
@@ -11360,6 +12959,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_E.HITM_OTHER_CORE",
@@ -11370,6 +12970,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_FWD",
@@ -11380,6 +12981,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_E.HIT_OTHER_CORE_NO_FWD",
@@ -11390,6 +12992,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_E.NO_SNOOP_NEEDED",
@@ -11400,6 +13003,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_E.SNOOP_MISS",
@@ -11410,6 +13014,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_E.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_E.SNOOP_NONE",
@@ -11420,6 +13025,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_F.ANY_SNOOP",
@@ -11430,6 +13036,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_F.HITM_OTHER_CORE",
@@ -11440,6 +13047,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_FWD",
@@ -11450,6 +13058,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_F.HIT_OTHER_CORE_NO_FWD",
@@ -11460,6 +13069,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_F.NO_SNOOP_NEEDED",
@@ -11470,6 +13080,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_F.SNOOP_MISS",
@@ -11480,6 +13091,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_F.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_F.SNOOP_NONE",
@@ -11490,6 +13102,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_M.ANY_SNOOP",
@@ -11500,6 +13113,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_M.HITM_OTHER_CORE",
@@ -11510,6 +13124,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_FWD",
@@ -11520,6 +13135,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_M.HIT_OTHER_CORE_NO_FWD",
@@ -11530,6 +13146,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_M.NO_SNOOP_NEEDED",
@@ -11540,6 +13157,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_M.SNOOP_MISS",
@@ -11550,6 +13168,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_M.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_M.SNOOP_NONE",
@@ -11560,6 +13179,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_S.ANY_SNOOP",
@@ -11570,6 +13190,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_S.HITM_OTHER_CORE",
@@ -11580,6 +13201,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_FWD",
@@ -11590,6 +13212,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_S.HIT_OTHER_CORE_NO_FWD",
@@ -11600,6 +13223,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_S.NO_SNOOP_NEEDED",
@@ -11610,6 +13234,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_S.SNOOP_MISS",
@@ -11620,6 +13245,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_HIT_S.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT_S.SNOOP_NONE",
@@ -11630,6 +13256,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
@@ -11640,6 +13267,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
@@ -11650,6 +13278,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
@@ -11660,6 +13289,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.ANY_SNOOP",
@@ -11670,6 +13300,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
@@ -11680,6 +13311,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
@@ -11690,6 +13322,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
@@ -11700,6 +13333,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
@@ -11710,6 +13344,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_MISS",
@@ -11720,6 +13355,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.SUPPLIER_NONE.SNOOP_NONE",
@@ -11730,6 +13366,7 @@
},
{
"BriefDescription": "Number of cache line split locks sent to uncore.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"PublicDescription": "Counts the number of cache line split locks sent to the uncore.",
@@ -11737,7 +13374,16 @@
"UMask": "0x10"
},
{
+ "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "SW_PREFETCH_ACCESS.ANY",
+ "SampleAfterValue": "2000003",
+ "UMask": "0xf"
+ },
+ {
"BriefDescription": "Number of PREFETCHNTA instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.NTA",
"SampleAfterValue": "2000003",
@@ -11745,6 +13391,7 @@
},
{
"BriefDescription": "Number of PREFETCHW instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
"SampleAfterValue": "2000003",
@@ -11752,6 +13399,7 @@
},
{
"BriefDescription": "Number of PREFETCHT0 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T0",
"SampleAfterValue": "2000003",
@@ -11759,6 +13407,7 @@
},
{
"BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T1_T2",
"SampleAfterValue": "2000003",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json b/tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json
index 8bc6c0707856..2e50a91b6728 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/clx-metrics.json
@@ -1,49 +1,49 @@
[
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per core",
- "MetricExpr": "cstate_core@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
@@ -55,7 +55,7 @@
"MetricName": "UNCORE_FREQ"
},
{
- "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles.",
+ "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY",
"MetricName": "cpi",
"ScaleUnit": "1per_instr"
@@ -68,7 +68,7 @@
},
{
"BriefDescription": "Percentage of time spent in the active CPU power state C0",
- "MetricExpr": "tma_info_system_cpu_utilization",
+ "MetricExpr": "tma_info_system_cpus_utilized",
"MetricName": "cpu_utilization",
"ScaleUnit": "100%"
},
@@ -76,31 +76,31 @@
"BriefDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions",
"MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
"MetricName": "dtlb_2mb_large_page_load_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions",
"MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_load_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions",
"MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_store_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
- "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU.",
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
"MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3) * 4 / 1e6 / duration_time",
"MetricName": "io_bandwidth_read",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.",
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU",
"MetricExpr": "(UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART0 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART1 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART2 + UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART3) * 4 / 1e6 / duration_time",
"MetricName": "io_bandwidth_write",
"ScaleUnit": "1MB/s"
@@ -109,14 +109,14 @@
"BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
"MetricName": "itlb_large_page_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "itlb_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
@@ -163,7 +163,7 @@
},
{
"BriefDescription": "Ratio of number of code read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
- "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12CC0233@ / INST_RETIRED.ANY",
+ "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12cc0233@ / INST_RETIRED.ANY",
"MetricName": "llc_code_read_mpi_demand_plus_prefetch",
"ScaleUnit": "1per_instr"
},
@@ -187,29 +187,35 @@
},
{
"BriefDescription": "Ratio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
- "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12D40433@ / INST_RETIRED.ANY",
+ "MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x12d40433@ / INST_RETIRED.ANY",
"MetricName": "llc_data_read_mpi_demand_plus_prefetch",
"ScaleUnit": "1per_instr"
},
{
- "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to local memory.",
+ "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to local memory",
"MetricExpr": "UNC_CHA_REQUESTS.READS_LOCAL * 64 / 1e6 / duration_time",
"MetricName": "llc_miss_local_memory_bandwidth_read",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to local memory.",
+ "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to local memory",
"MetricExpr": "UNC_CHA_REQUESTS.WRITES_LOCAL * 64 / 1e6 / duration_time",
"MetricName": "llc_miss_local_memory_bandwidth_write",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to remote memory.",
+ "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to remote memory",
"MetricExpr": "UNC_CHA_REQUESTS.READS_REMOTE * 64 / 1e6 / duration_time",
"MetricName": "llc_miss_remote_memory_bandwidth_read",
"ScaleUnit": "1MB/s"
},
{
+ "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to remote memory",
+ "MetricExpr": "UNC_CHA_REQUESTS.WRITES_REMOTE * 64 / 1e6 / duration_time",
+ "MetricName": "llc_miss_remote_memory_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
"BriefDescription": "The ratio of number of completed memory load instructions to the total number completed instructions",
"MetricExpr": "MEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANY",
"MetricName": "loads_per_instr",
@@ -234,13 +240,13 @@
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x40432@ / (cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x40432@ + cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x40431@)",
"MetricName": "numa_reads_addressed_to_local_dram",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x40431@ / (cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x40432@ + cha@UNC_CHA_TOR_INSERTS.IA_MISS\\,config1\\=0x40431@)",
"MetricName": "numa_reads_addressed_to_remote_dram",
"ScaleUnit": "100%"
@@ -313,16 +319,17 @@
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slots",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * (FP_ASSIST.ANY + OTHER_ASSISTS.ANY) / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "34 * (FP_ASSIST.ANY + OTHER_ASSISTS.ANY) / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY",
@@ -331,7 +338,7 @@
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
"MetricExpr": "1 - tma_frontend_bound - (UOPS_ISSUED.ANY + 4 * (INT_MISC.RECOVERY_CYCLES_ANY / 2 if #SMT_on else INT_MISC.RECOVERY_CYCLES)) / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1",
@@ -349,14 +356,118 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
+ "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
+ "MetricName": "tma_bottleneck_big_code",
+ "MetricThreshold": "tma_bottleneck_big_code > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+ "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Ret",
+ "MetricName": "tma_bottleneck_branching_overhead",
+ "MetricThreshold": "tma_bottleneck_branching_overhead > 5",
+ "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)"
+ },
+ {
+ "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
+ "MetricGroup": "BvCB;Cor;tma_issueComp",
+ "MetricName": "tma_bottleneck_compute_bound_est",
+ "MetricThreshold": "tma_bottleneck_compute_bound_est > 20",
+ "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: "
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+ "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
+ "MetricName": "tma_bottleneck_data_cache_memory_bandwidth",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_bandwidth > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_dependency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_lock_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_loads / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_stores / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)))",
+ "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
+ "MetricName": "tma_bottleneck_data_cache_memory_latency",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_latency > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_bottleneck_big_code",
+ "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
+ "MetricName": "tma_bottleneck_instruction_fetch_bw",
+ "MetricThreshold": "tma_bottleneck_instruction_fetch_bw > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of irregular execution (e.g",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * (10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts)) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + tma_core_bound * RS_EVENTS.EMPTY_CYCLES / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
+ "MetricName": "tma_bottleneck_irregular_overhead",
+ "MetricThreshold": "tma_bottleneck_irregular_overhead > 10",
+ "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)))",
+ "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
+ "MetricName": "tma_bottleneck_memory_data_tlbs",
+ "MetricThreshold": "tma_bottleneck_memory_data_tlbs > 20",
+ "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
+ "MetricGroup": "BvMS;LockCont;Mem;Offcore;tma_issueSyncxn",
+ "MetricName": "tma_bottleneck_memory_synchronization",
+ "MetricThreshold": "tma_bottleneck_memory_synchronization > 10",
+ "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
+ "MetricName": "tma_bottleneck_mispredictions",
+ "MetricThreshold": "tma_bottleneck_mispredictions > 20",
+ "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 - (tma_bottleneck_big_code + tma_bottleneck_instruction_fetch_bw + tma_bottleneck_mispredictions + tma_bottleneck_data_cache_memory_bandwidth + tma_bottleneck_data_cache_memory_latency + tma_bottleneck_memory_data_tlbs + tma_bottleneck_memory_synchronization + tma_bottleneck_compute_bound_est + tma_bottleneck_irregular_overhead + tma_bottleneck_branching_overhead + tma_bottleneck_useful_work)",
+ "MetricGroup": "BvOB;Cor;Offcore",
+ "MetricName": "tma_bottleneck_other_bottlenecks",
+ "MetricThreshold": "tma_bottleneck_other_bottlenecks > 20",
+ "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
+ },
+ {
+ "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "BvUW;Ret",
+ "MetricName": "tma_bottleneck_useful_work",
+ "MetricThreshold": "tma_bottleneck_useful_work > 20"
+ },
+ {
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_bottleneck_mispredictions, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
"ScaleUnit": "100%"
},
{
@@ -370,6 +481,7 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_cisc",
@@ -387,13 +499,45 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) ITLB was missed by instructions fetches, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_itlb_misses - tma_code_stlb_miss)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_hit",
+ "MetricThreshold": "tma_code_stlb_hit > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by instruction fetches, performing a hardware page walk",
+ "MetricExpr": "ITLB_MISSES.WALK_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_miss",
+ "MetricThreshold": "tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_2M_4M / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_2m",
+ "MetricThreshold": "tma_code_stlb_miss_2m > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_4K / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_4k",
+ "MetricThreshold": "tma_code_stlb_miss_4k > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(44 * tma_info_system_average_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD))) + 44 * tma_info_system_average_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricExpr": "(44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD))) + 44 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
@@ -408,13 +552,22 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external CXL Memory by loads (e.g",
+ "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0))) if #has_pmem > 0 else 1)) * (CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_cxl_mem_bound",
+ "MetricThreshold": "tma_cxl_mem_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external CXL Memory by loads (e.g. 3D-Xpoint (Crystal Ridge, a.k.a. IXP) memory, PMM - Persistent Memory Module [from CLX to SPR] or any other CXL Type3 Memory [EMR onwards]).",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "44 * tma_info_system_average_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricExpr": "44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE / (OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE + OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
@@ -422,14 +575,14 @@
"MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=1@ - cpu@INST_DECODED.DECODERS\\,cmask\\=2@) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group",
"MetricName": "tma_decoder0_alone",
- "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35))",
+ "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.DIVIDER_ACTIVE / tma_info_thread_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
@@ -438,19 +591,19 @@
{
"BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound - tma_pmm_bound if #has_pmem > 0 else CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound)",
+ "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound - tma_cxl_mem_bound if #has_pmem > 0 else CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound)",
"MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_dram_bound",
"MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
- "MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
+ "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -460,46 +613,46 @@
"MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_dsb_switches",
"MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "min(9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_store",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_load",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(110 * tma_info_system_average_frequency * (OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM + OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM) + 47.5 * tma_info_system_average_frequency * (OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE + OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE)) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricExpr": "(110 * tma_info_system_core_frequency * (OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM + OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM) + 47.5 * tma_info_system_core_frequency * (OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE + OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE)) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
- "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
+ "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
"ScaleUnit": "100%"
},
{
@@ -507,9 +660,9 @@
"MetricExpr": "tma_frontend_bound - tma_fetch_latency",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
},
{
@@ -523,12 +676,13 @@
"ScaleUnit": "100%"
},
{
-        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops",
+        "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or more uops",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "tma_heavy_operations - tma_microcode_sequencer",
"MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0",
"MetricName": "tma_few_uops_instructions",
"MetricThreshold": "tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1",
-        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
+        "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or more uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
"ScaleUnit": "100%"
},
{
@@ -542,8 +696,17 @@
"ScaleUnit": "100%"
},
{
+        "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists",
+ "MetricExpr": "34 * FP_ASSIST.ANY / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_fp_assists",
+ "MetricThreshold": "tma_fp_assists > 0.1",
+        "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals).",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / UOPS_RETIRED.RETIRE_SLOTS",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -552,7 +715,6 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
- "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@ / UOPS_RETIRED.RETIRE_SLOTS",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
@@ -566,7 +728,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_128b",
"MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -575,7 +737,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_256b",
"MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -590,7 +752,7 @@
{
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots",
- "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1",
@@ -600,10 +762,10 @@
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions",
"MetricExpr": "tma_light_operations * UOPS_RETIRED.MACRO_FUSED / UOPS_RETIRED.RETIRE_SLOTS",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_fused_instructions",
"MetricThreshold": "tma_fused_instructions > 0.1 & tma_light_operations > 0.6",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. The instruction pairs of CMP+JCC or DEC+JCC are commonly used examples.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}",
"ScaleUnit": "100%"
},
{
@@ -613,40 +775,47 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
"MetricExpr": "(ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@) / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
- "MetricExpr": "(tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES",
+ "BriefDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "tma_bottleneck_mispredictions * tma_info_thread_slots / 4 / BR_MISP_RETIRED.ALL_BRANCHES / 100",
"MetricGroup": "Bad;BrMispredicts;tma_issueBM",
"MetricName": "tma_info_bad_spec_branch_misprediction_cost",
- "PublicDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers"
+ "PublicDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_mispredicts_resteers"
},
{
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * cpu@BR_MISP_EXEC.ALL_BRANCHES\\,umask\\=0xE4@)",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
},
{
"BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear) (lower number means higher occurrence rate)",
- "MetricExpr": "tma_info_core_ipmispredict",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
"MetricGroup": "Bad;BadSpec;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmispredict",
"MetricThreshold": "tma_info_bad_spec_ipmispredict < 200"
},
{
+ "BriefDescription": "Speculative to Retired ratio of all clears (covering Mispredicts and nukes)",
+ "MetricExpr": "INT_MISC.CLEARS_COUNT / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)",
+ "MetricGroup": "BrMispredicts",
+ "MetricName": "tma_info_bad_spec_spec_clears_ratio"
+ },
+ {
"BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
@@ -655,16 +824,26 @@
"MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5"
},
{
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite)))",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
"BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite))",
"MetricGroup": "DSBmiss;Fed;tma_issueFB",
"MetricName": "tma_info_botlnk_l2_dsb_misses",
"MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
- "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
},
{
"BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
"MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL",
"MetricName": "tma_info_botlnk_l2_ic_misses",
@@ -672,67 +851,6 @@
"PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: "
},
{
- "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
- "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB;tma_issueBC",
- "MetricName": "tma_info_bottleneck_big_code",
- "MetricThreshold": "tma_info_bottleneck_big_code > 20",
- "PublicDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses). Related metrics: tma_info_bottleneck_branching_overhead"
- },
- {
- "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
- "MetricExpr": "100 * ((BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL)) / tma_info_thread_slots)",
- "MetricGroup": "Ret;tma_issueBC",
- "MetricName": "tma_info_bottleneck_branching_overhead",
- "MetricThreshold": "tma_info_bottleneck_branching_overhead > 10",
- "PublicDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls). Related metrics: tma_info_bottleneck_big_code"
- },
- {
- "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code",
- "MetricGroup": "Fed;FetchBW;Frontend",
- "MetricName": "tma_info_bottleneck_instruction_fetch_bw",
- "MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20"
- },
- {
- "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))",
- "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW",
- "MetricName": "tma_info_bottleneck_memory_bandwidth",
- "MetricThreshold": "tma_info_bottleneck_memory_bandwidth > 20",
- "PublicDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency)))",
- "MetricGroup": "Mem;MemoryTLB;Offcore;tma_issueTLB",
- "MetricName": "tma_info_bottleneck_memory_data_tlbs",
- "MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20",
- "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound))",
- "MetricGroup": "Mem;MemoryLat;Offcore;tma_issueLat",
- "MetricName": "tma_info_bottleneck_memory_latency",
- "MetricThreshold": "tma_info_bottleneck_memory_latency > 20",
- "PublicDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches). Related metrics: tma_l3_hit_latency, tma_mem_latency"
- },
- {
- "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
- "MetricGroup": "Bad;BadSpec;BrMispredicts;tma_issueBM",
- "MetricName": "tma_info_bottleneck_mispredictions",
- "MetricThreshold": "tma_info_bottleneck_mispredictions > 20",
- "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
- },
- {
"BriefDescription": "Fraction of branches that are CALL or RET",
"MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
"MetricGroup": "Bad;Branches",
@@ -753,7 +871,7 @@
{
"BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - (BR_INST_RETIRED.COND - BR_INST_RETIRED.NOT_TAKEN) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
"MetricGroup": "Bad;Branches",
"MetricName": "tma_info_branches_jump"
},
@@ -770,39 +888,38 @@
"MetricName": "tma_info_core_coreipc"
},
{
+ "BriefDescription": "uops Executed per Cycle",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / tma_info_thread_clks",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_core_epc"
+ },
+ {
"BriefDescription": "Floating Point Operations Per Cycle",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
"MetricGroup": "Flops;Ret",
"MetricName": "tma_info_core_flopc"
},
{
"BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_core_fp_arith_utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
},
{
- "BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear)",
- "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
- "MetricGroup": "Bad;BadSpec;BrMispredicts;TopdownL1;tma_L1_group",
- "MetricName": "tma_info_core_ipmispredict",
- "MetricgroupNoGroup": "TopdownL1"
- },
- {
"BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
"MetricExpr": "IDQ.DSB_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS)",
"MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
"MetricName": "tma_info_frontend_dsb_coverage",
"MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 4 > 0.35",
- "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
+ "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
},
{
"BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
@@ -848,6 +965,12 @@
"MetricName": "tma_info_frontend_l2mpki_code_all"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -862,12 +985,11 @@
},
{
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
@@ -875,7 +997,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx128",
"MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
@@ -883,7 +1005,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx256",
"MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate)",
@@ -891,7 +1013,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx512",
"MetricThreshold": "tma_info_inst_mix_iparith_avx512 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
@@ -899,7 +1021,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_dp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
@@ -907,7 +1029,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_sp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
@@ -926,7 +1048,7 @@
{
"BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_ipflop",
"MetricThreshold": "tma_info_inst_mix_ipflop < 10"
@@ -939,6 +1061,12 @@
"MetricThreshold": "tma_info_inst_mix_ipload < 3"
},
{
+ "BriefDescription": "Instructions per PAUSE (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / ROB_MISC_EVENTS.PAUSE_INST",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_ippause"
+ },
+ {
"BriefDescription": "Instructions per Store (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_STORES",
"MetricGroup": "InsType",
@@ -947,30 +1075,30 @@
},
{
"BriefDescription": "Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / cpu@SW_PREFETCH_ACCESS.T0\\,umask\\=0xF@",
+ "MetricExpr": "INST_RETIRED.ANY / SW_PREFETCH_ACCESS.ANY",
"MetricGroup": "Prefetches",
"MetricName": "tma_info_inst_mix_ipswpf",
"MetricThreshold": "tma_info_inst_mix_ipswpf < 100"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 9",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
@@ -986,124 +1114,136 @@
},
{
"BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_access_bw",
"MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_core_l3_cache_access_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_access_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
},
{
"BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_fb_hpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
+ },
+ {
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki_load"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_all"
},
{
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_load"
},
{
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem;Offcore",
+ "MetricGroup": "CacheHits;Mem;Offcore",
"MetricName": "tma_info_memory_l2mpki_all"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki_load"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_l3_cache_access_bw"
},
{
-        "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand loads when there is at least one such miss)",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
-        "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand loads when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
+ "BriefDescription": "Un-cacheable retired load per kilo instruction",
+ "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_uc_load_pki"
},
{
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_access_bw",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+        "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand loads when there is at least one such miss)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+        "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand loads when there is at least one such miss. Per-Logical Processor)"
},
{
"BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
@@ -1132,55 +1272,101 @@
"MetricName": "tma_info_memory_tlb_store_stlb_mpki"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+        "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute"
},
{
+ "BriefDescription": "Average number of uops fetched from DSB per cycle",
+ "MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_dsb"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MITE per cycle",
+ "MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_mite"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MS per cycle",
+ "MetricExpr": "IDQ.MS_UOPS / cpu@IDQ.MS_UOPS\\,cmask\\=1@",
+ "MetricGroup": "Fed;FetchLat;MicroSeq",
+ "MetricName": "tma_info_pipeline_fetch_ms"
+ },
+ {
+        "BriefDescription": "Instructions per microcode Assist invocation",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ASSIST.ANY + OTHER_ASSISTS.ANY)",
+ "MetricGroup": "MicroSeq;Pipeline;Ret;Retire",
+ "MetricName": "tma_info_pipeline_ipassist",
+ "MetricThreshold": "tma_info_pipeline_ipassist < 100e3",
+        "PublicDescription": "Instructions per microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate)"
+ },
+ {
"BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / cpu@UOPS_RETIRED.RETIRE_SLOTS\\,cmask\\=1@",
"MetricGroup": "Pipeline;Ret",
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [GB / sec]",
+ "MetricExpr": "(64 * UNC_M_PMM_RPQ_INSERTS / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_read_bw"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes [GB / sec]",
+ "MetricExpr": "(64 * UNC_M_PMM_WPQ_INSERTS / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_write_bw"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_time",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
- "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_mem_bandwidth, tma_sq_full"
+ "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
{
"BriefDescription": "Giga Floating Point Operations Per Second",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * (FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE) + 8 * (FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / duration_time",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
},
{
"BriefDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]",
- "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3) * 4 / 1e9 / duration_time",
- "MetricGroup": "IoBW;Mem;Server;SoC",
- "MetricName": "tma_info_system_io_read_bw"
+ "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3) * 4 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_read_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]. Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU"
},
{
"BriefDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]",
- "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3) * 4 / 1e9 / duration_time",
- "MetricGroup": "IoBW;Mem;Server;SoC",
- "MetricName": "tma_info_system_io_write_bw"
+ "MetricExpr": "(UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3) * 4 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_write_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]. Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU"
},
{
"BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
@@ -1205,7 +1391,7 @@
{
"BriefDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]",
"MetricExpr": "1e9 * (UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSERTS) / imc_0@event\\=0x0@",
- "MetricGroup": "Mem;MemoryLat;Server;SoC",
+ "MetricGroup": "MemOffcore;MemoryLat;Server;SoC",
"MetricName": "tma_info_system_mem_dram_read_latency",
"PublicDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches"
},
@@ -1219,28 +1405,29 @@
{
"BriefDescription": "Average latency of data read request to external 3D X-Point memory [in nanoseconds]",
"MetricExpr": "(1e9 * (UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS) / imc_0@event\\=0x0@ if #has_pmem > 0 else 0)",
- "MetricGroup": "Mem;MemoryLat;Server;SoC",
+ "MetricGroup": "MemOffcore;MemoryLat;Server;SoC",
"MetricName": "tma_info_system_mem_pmm_read_latency",
"PublicDescription": "Average latency of data read request to external 3D X-Point memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches"
},
{
"BriefDescription": "Average latency of data read request to external memory (in nanoseconds)",
- "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (tma_info_system_socket_clks / duration_time)",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (tma_info_system_socket_clks / tma_info_system_time)",
"MetricGroup": "Mem;MemoryLat;SoC",
"MetricName": "tma_info_system_mem_read_latency",
"PublicDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+]memory-controller only)"
},
{
- "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [GB / sec]",
- "MetricExpr": "(64 * UNC_M_PMM_RPQ_INSERTS / 1e9 / duration_time if #has_pmem > 0 else 0)",
- "MetricGroup": "Mem;MemoryBW;Server;SoC",
- "MetricName": "tma_info_system_pmm_read_bw"
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
},
{
- "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes [GB / sec]",
- "MetricExpr": "(64 * UNC_M_PMM_WPQ_INSERTS / 1e9 / duration_time if #has_pmem > 0 else 0)",
- "MetricGroup": "Mem;MemoryBW;Server;SoC",
- "MetricName": "tma_info_system_pmm_write_bw"
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "(power@energy\\-pkg@ * 61 + 15.6 * power@energy\\-ram@) / (duration_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
},
{
"BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0",
@@ -1278,12 +1465,25 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
"MetricName": "tma_info_system_turbo_utilization"
},
{
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency"
+ },
+ {
"BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD",
"MetricGroup": "Pipeline",
@@ -1322,7 +1522,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
@@ -1330,57 +1530,76 @@
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
- "MetricExpr": "ICACHE_64B.IFTAG_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
-        "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads that miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+        "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads that miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_dependency",
+ "MetricThreshold": "tma_l1_latency_dependency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache. The short latency of the L1D cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited)",
+ "MetricExpr": "3.5 * tma_info_system_core_frequency * MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_l2_bound_group",
+ "MetricName": "tma_l2_hit_latency",
+ "MetricThreshold": "tma_l2_hit_latency > 0.05 & (tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited). Avoiding L1 cache misses (i.e. L1 misses/L2 hits) will improve the latency. Sample with: MEM_LOAD_RETIRED.L2_HIT",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
- "MetricExpr": "17 * tma_info_system_average_frequency * MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "MetricExpr": "17 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_memory_latency, tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_bottleneck_data_cache_memory_latency, tma_mem_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
- "MetricExpr": "ILD_STALL.LCP / tma_info_thread_clks",
+ "MetricExpr": "DECODE.LCP / tma_info_thread_clks",
"MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_lcp",
"MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
+ "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
"ScaleUnit": "100%"
},
{
@@ -1390,11 +1609,12 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_load_op_utilization",
@@ -1420,50 +1640,78 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data load accesses.",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_1G / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_1g",
+ "MetricThreshold": "tma_load_stlb_miss_1g > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data load accesses.",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_2m",
+ "MetricThreshold": "tma_load_stlb_miss_2m > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data load accesses.",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_4K / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_4k",
+ "MetricThreshold": "tma_load_stlb_miss_4k > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory",
- "MetricExpr": "59.5 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricExpr": "59.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_local_dram",
- "MetricThreshold": "tma_local_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM_PS",
+ "MetricName": "tma_local_mem",
+ "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(12 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (11 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_bottleneck_memory_synchronization, tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_memory_latency, tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_latency, tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
@@ -1491,16 +1739,17 @@
"MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS",
"MetricName": "tma_microcode_sequencer",
"MetricThreshold": "tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1",
- "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
+ "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
- "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
"ScaleUnit": "100%"
},
{
@@ -1508,17 +1757,17 @@
"MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued",
+ "BriefDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles)",
"MetricExpr": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY",
"MetricGroup": "TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group",
"MetricName": "tma_mixing_vectors",
"MetricThreshold": "tma_mixing_vectors > 0.05",
- "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "PublicDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
"ScaleUnit": "100%"
},
{
@@ -1527,13 +1776,13 @@
"MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO",
"MetricName": "tma_ms_switches",
"MetricThreshold": "tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
+ "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused",
"MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHES - UOPS_RETIRED.MACRO_FUSED) / UOPS_RETIRED.RETIRE_SLOTS",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_non_fused_branches",
"MetricThreshold": "tma_non_fused_branches > 0.1 & tma_light_operations > 0.6",
"PublicDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused.",
@@ -1542,15 +1791,16 @@
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
"MetricExpr": "tma_light_operations * INST_RETIRED.NOP / UOPS_RETIRED.RETIRE_SLOTS",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
"MetricName": "tma_nop_instructions",
- "MetricThreshold": "tma_nop_instructions > 0.1 & tma_light_operations > 0.6",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+ "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP_PS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes",
- "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches + tma_nop_instructions))",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches))",
"MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_other_light_ops",
"MetricThreshold": "tma_other_light_ops > 0.3 & tma_light_operations > 0.6",
@@ -1558,13 +1808,21 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external 3D-Xpoint (Crystal Ridge, a.k.a",
+ "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0))) if #has_pmem > 0 else 0)) * (CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)",
- "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_pmm_bound",
- "MetricThreshold": "tma_pmm_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external 3D-Xpoint (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Memory Module.",
+ "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)",
+ "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_other_mispredicts",
+ "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)",
+ "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_other_nukes",
+ "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
"ScaleUnit": "100%"
},
{
@@ -1573,7 +1831,7 @@
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_0",
"MetricThreshold": "tma_port_0 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED_PORT.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1582,7 +1840,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_1",
"MetricThreshold": "tma_port_1 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1622,12 +1880,12 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1641,7 +1899,8 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
- "MetricExpr": "((EXE_ACTIVITY.EXE_BOUND_0_PORTS + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_thread_clks)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_thread_clks)",
"MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_ports_utilization",
"MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
@@ -1650,7 +1909,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_NONE / 2 if #SMT_on else CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / tma_info_core_core_clks",
+ "MetricExpr": "EXE_ACTIVITY.EXE_BOUND_0_PORTS / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1678,34 +1937,34 @@
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).",
"MetricExpr": "(UOPS_EXECUTED.CORE_CYCLES_GE_3 / 2 if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_3) / tma_info_core_core_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues",
"MetricConstraint": "NO_GROUP_EVENTS_NMI",
- "MetricExpr": "(89.5 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 89.5 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricExpr": "(89.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 89.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group",
"MetricName": "tma_remote_cache",
"MetricThreshold": "tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory",
- "MetricExpr": "127 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricExpr": "127 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_remote_dram",
- "MetricThreshold": "tma_remote_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "MetricName": "tma_remote_mem",
+ "MetricThreshold": "tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1",
@@ -1715,18 +1974,18 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
"MetricExpr": "PARTIAL_RAT_STALLS.SCOREBOARD / tma_info_thread_clks",
- "MetricGroup": "PortsUtil;TopdownL5;tma_L5_group;tma_issueSO;tma_ports_utilized_0_group",
+ "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
"MetricName": "tma_serializing_operation",
- "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)))",
+ "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: PARTIAL_RAT_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
"MetricExpr": "40 * ROB_MISC_EVENTS.PAUSE_INST / tma_info_thread_clks",
- "MetricGroup": "TopdownL6;tma_L6_group;tma_serializing_operation_group",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
"MetricName": "tma_slow_pause",
- "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))))",
+ "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: MISC_RETIRED.PAUSE_INST",
"ScaleUnit": "100%"
},
@@ -1736,7 +1995,7 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
@@ -1752,10 +2011,10 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth",
+ "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
"ScaleUnit": "100%"
},
{
@@ -1780,7 +2039,7 @@
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 11 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -1811,12 +2070,39 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data store accesses.",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_1G / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_1g",
+ "MetricThreshold": "tma_store_stlb_miss_1g > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data store accesses.",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_2m",
+ "MetricThreshold": "tma_store_stlb_miss_2m > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data store accesses.",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_4K / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_4k",
+ "MetricThreshold": "tma_store_stlb_miss_4k > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "9 * BACLEARS.ANY / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit). Sample with: BACLEARS.ANY",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY",
"ScaleUnit": "100%"
},
{
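The MetricExpr strings in the hunks above are plain arithmetic over raw PMU event counts, which perf evaluates after collecting the named events. As a minimal sketch only (the Python helpers and the sample counts below are illustrative, not part of the patch), this is how the updated tma_itlb_misses expression, ICACHE_TAG.STALLS / tma_info_thread_clks, resolves to the percentage implied by its ScaleUnit, assuming tma_info_thread_clks maps to CPU_CLK_UNHALTED.THREAD as the thread-clocks entry above does:

# Sketch only: evaluate one MetricExpr from the hunk above with made-up counts.
counts = {
    "ICACHE_TAG.STALLS": 1.2e7,        # hypothetical event totals from a perf stat run
    "CPU_CLK_UNHALTED.THREAD": 9.5e8,
}

def tma_info_thread_clks(c):
    # Per the thread-clocks metric in this file, this is unhalted thread cycles.
    return c["CPU_CLK_UNHALTED.THREAD"]

def tma_itlb_misses(c):
    # Mirrors the updated MetricExpr: ICACHE_TAG.STALLS / tma_info_thread_clks.
    return c["ICACHE_TAG.STALLS"] / tma_info_thread_clks(c)

# ScaleUnit "100%" means the ratio is reported as a percentage of cycles.
print(f"tma_itlb_misses = {100 * tma_itlb_misses(counts):.2f}%")

The MetricThreshold strings are evaluated the same way: the computed value is compared against its threshold expression (here > 0.05, gated on the parent fetch-latency and frontend-bound nodes) to decide whether the node gets highlighted in a TopDown report.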
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/counter.json b/tools/perf/pmu-events/arch/x86/cascadelakex/counter.json
new file mode 100644
index 000000000000..e94b76404856
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/counter.json
@@ -0,0 +1,52 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CHA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IIO",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M2M",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "iMC",
+ "CountersNumFixed": "1",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M3UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "3"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UBOX",
+ "CountersNumFixed": "1",
+ "CountersNumGeneric": "2"
+ }
+] \ No newline at end of file
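The new counter.json above records each PMU unit's inventory of fixed and generic counters for the perf event-table generator. As a rough illustration only (the checker below is hypothetical, not perf code): with the core unit declaring four generic counters, a group of five generic-counter events cannot be scheduled together without multiplexing.

# Illustration only (hypothetical helper, not from the patch): use the
# counter.json inventory to check whether an event group fits a unit's
# generic counters.
import json

counter_json = """
[
  {"Unit": "core", "CountersNumFixed": "3", "CountersNumGeneric": "4"},
  {"Unit": "CHA",  "CountersNumFixed": "0", "CountersNumGeneric": "4"}
]
"""

generic = {e["Unit"]: int(e["CountersNumGeneric"]) for e in json.loads(counter_json)}

def group_fits(unit, n_generic_events):
    # True if the events could co-exist on the unit's generic counters,
    # ignoring fixed-counter events and other scheduling constraints.
    return n_generic_events <= generic.get(unit, 0)

print(group_fits("core", 5))  # False: only 4 generic core counters are declared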
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/floating-point.json b/tools/perf/pmu-events/arch/x86/cascadelakex/floating-point.json
index bb4d5101f962..3ef6f00f1135 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts once for most SIMD 128-bit packed computational single precision floating-point instruction retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Counts once for most SIMD 128-bit packed computational single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Counts once for most SIMD 256-bit packed double computational precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Counts once for most SIMD 256-bit packed double computational precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Counts once for most SIMD 256-bit packed single computational precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Counts once for most SIMD 256-bit packed single computational precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.8_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Counts once for most SIMD scalar computational floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Counts once for most SIMD scalar computational single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "Counts once for most SIMD scalar computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Counts once for most SIMD scalar computational double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Counts once for most SIMD scalar computational single precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Counts once for most SIMD scalar computational single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xC7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"SampleAfterValue": "2000003",
@@ -96,30 +108,34 @@
},
{
"BriefDescription": "Intel AVX-512 computational 512-bit packed BFloat16 instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCF",
"EventName": "FP_ARITH_INST_RETIRED2.128BIT_PACKED_BF16",
- "PublicDescription": "Counts once for each Intel AVX-512 computational 512-bit packed BFloat16 floating-point instruction retired. Applies to the ZMM based VDPBF16PS instruction. Each count represents 64 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lake.",
+ "PublicDescription": "Counts once for each Intel AVX-512 computational 128-bit packed BFloat16 floating-point instruction retired. Applies to the XMM based VDPBF16PS instruction. Each count represents 16 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lake.",
"SampleAfterValue": "2000003",
"UMask": "0x20"
},
{
"BriefDescription": "Intel AVX-512 computational 128-bit packed BFloat16 instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCF",
"EventName": "FP_ARITH_INST_RETIRED2.256BIT_PACKED_BF16",
- "PublicDescription": "Counts once for each Intel AVX-512 computational 128-bit packed BFloat16 floating-point instruction retired. Applies to the XMM based VDPBF16PS instruction. Each count represents 16 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lake.",
+ "PublicDescription": "Counts once for each Intel AVX-512 computational 256-bit packed BFloat16 floating-point instruction retired. Applies to the YMM based VDPBF16PS instruction. Each count represents 32 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lake.",
"SampleAfterValue": "2000003",
"UMask": "0x40"
},
{
"BriefDescription": "Intel AVX-512 computational 256-bit packed BFloat16 instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCF",
"EventName": "FP_ARITH_INST_RETIRED2.512BIT_PACKED_BF16",
- "PublicDescription": "Counts once for each Intel AVX-512 computational 256-bit packed BFloat16 floating-point instruction retired. Applies to the YMM based VDPBF16PS instruction. Each count represents 32 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lake.",
+ "PublicDescription": "Counts once for each Intel AVX-512 computational 512-bit packed BFloat16 floating-point instruction retired. Applies to the ZMM based VDPBF16PS instruction. Each count represents 64 computation operations. This event is only supported on products formerly named Cooper Lake and is not supported on products formerly named Cascade Lake.",
"SampleAfterValue": "2000003",
"UMask": "0x80"
},
{
"BriefDescription": "Cycles with any input/output SSE or FP assist",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.ANY",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/frontend.json b/tools/perf/pmu-events/arch/x86/cascadelakex/frontend.json
index 095904c77001..0e1dedce00f2 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to ILD_STALL.LCP]",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "DECODE.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to ILD_STALL.LCP]",
@@ -17,14 +19,16 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switches",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.COUNT",
- "PublicDescription": "This event counts the number of the Decode Stream Buffer (DSB)-to-MITE switches including all misses because of missing Decode Stream Buffer (DSB) cache and u-arch forced misses.\nNote: Invoking MITE requires two or three cycles delay.",
+ "PublicDescription": "This event counts the number of the Decode Stream Buffer (DSB)-to-MITE switches including all misses because of missing Decode Stream Buffer (DSB) cache and u-arch forced misses. Note: Invoking MITE requires two or three cycles delay.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"PublicDescription": "Counts Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles. These cycles do not include uops routed through because of the switch itself, for example, when Instruction Decode Queue (IDQ) pre-allocation is unavailable, or Instruction Decode Queue (IDQ) is full. SBD-to-MITE switch true penalty cycles happen after the merge mux (MM) receives Decode Stream Buffer (DSB) Sync-indication until receiving the first MITE uop. MM is placed before Instruction Decode Queue (IDQ) to merge uops being fed from the MITE and Decode Stream Buffer (DSB) paths. Decode Stream Buffer (DSB) inserts the Sync-indication whenever a Decode Stream Buffer (DSB)-to-MITE switch occurs.Penalty: A Decode Stream Buffer (DSB) hit followed by a Decode Stream Buffer (DSB) miss can cost up to six cycles in which no uops are delivered to the IDQ. Most often, such switches from the Decode Stream Buffer (DSB) to the legacy pipeline cost 02 cycles.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Retired Instructions who experienced DSB miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
"MSRIndex": "0x3F7",
@@ -44,6 +49,7 @@
},
{
"BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.DSB_MISS",
"MSRIndex": "0x3F7",
@@ -55,6 +61,7 @@
},
{
"BriefDescription": "Retired Instructions who experienced iTLB true miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.ITLB_MISS",
"MSRIndex": "0x3F7",
@@ -66,6 +73,7 @@
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.L1I_MISS",
"MSRIndex": "0x3F7",
@@ -76,6 +84,7 @@
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.L2_MISS",
"MSRIndex": "0x3F7",
@@ -86,6 +95,7 @@
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_1",
"MSRIndex": "0x3F7",
@@ -97,6 +107,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
"MSRIndex": "0x3F7",
@@ -107,6 +118,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
"MSRIndex": "0x3F7",
@@ -118,6 +130,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
"MSRIndex": "0x3F7",
@@ -128,6 +141,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
"MSRIndex": "0x3F7",
@@ -138,6 +152,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
"MSRIndex": "0x3F7",
@@ -149,6 +164,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 2 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_2",
"MSRIndex": "0x3F7",
@@ -159,6 +175,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 3 bubble-slots for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_3",
"MSRIndex": "0x3F7",
@@ -169,6 +186,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
"MSRIndex": "0x3F7",
@@ -180,6 +198,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
"MSRIndex": "0x3F7",
@@ -190,6 +209,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
"MSRIndex": "0x3F7",
@@ -200,6 +220,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
"MSRIndex": "0x3F7",
@@ -210,6 +231,7 @@
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
"MSRIndex": "0x3F7",
@@ -221,6 +243,7 @@
},
{
"BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "FRONTEND_RETIRED.STLB_MISS",
"MSRIndex": "0x3F7",
@@ -232,6 +255,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE_16B.IFDATA_STALL",
"PublicDescription": "Cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity.",
@@ -240,6 +264,7 @@
},
{
"BriefDescription": "Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_HIT",
"SampleAfterValue": "200003",
@@ -247,6 +272,7 @@
},
{
"BriefDescription": "Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_MISS",
"SampleAfterValue": "200003",
@@ -254,6 +280,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_STALL",
"SampleAfterValue": "200003",
@@ -261,22 +288,25 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_TAG.STALLS",
"SampleAfterValue": "200003",
"UMask": "0x4"
},
{
- "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops [This event is alias to IDQ.DSB_CYCLES_OK]",
+ "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 or more Uops [This event is alias to IDQ.DSB_CYCLES_OK]",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS",
- "PublicDescription": "Counts the number of cycles 4 uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.DSB_CYCLES_OK]",
+ "PublicDescription": "Counts the number of cycles 4 or more uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.DSB_CYCLES_OK]",
"SampleAfterValue": "2000003",
"UMask": "0x18"
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop [This event is alias to IDQ.DSB_CYCLES_ANY]",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS",
@@ -286,6 +316,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS",
@@ -295,6 +326,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS",
@@ -304,6 +336,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES",
@@ -313,6 +346,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop [This event is alias to IDQ.ALL_DSB_CYCLES_ANY_UOPS]",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_ANY",
@@ -321,16 +355,18 @@
"UMask": "0x18"
},
{
- "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops [This event is alias to IDQ.ALL_DSB_CYCLES_4_UOPS]",
+ "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 or more Uops [This event is alias to IDQ.ALL_DSB_CYCLES_4_UOPS]",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_OK",
- "PublicDescription": "Counts the number of cycles 4 uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.ALL_DSB_CYCLES_4_UOPS]",
+ "PublicDescription": "Counts the number of cycles 4 or more uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Count includes uops that may 'bypass' the IDQ. [This event is alias to IDQ.ALL_DSB_CYCLES_4_UOPS]",
"SampleAfterValue": "2000003",
"UMask": "0x18"
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path. Counting includes uops that may 'bypass' the IDQ.",
@@ -339,6 +375,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES",
@@ -348,6 +385,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. Counting includes uops that may 'bypass' the IDQ. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -356,6 +394,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES",
@@ -365,6 +404,7 @@
},
{
"BriefDescription": "Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_CYCLES",
@@ -374,6 +414,7 @@
},
{
"BriefDescription": "Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_MITE_UOPS",
"PublicDescription": "Counts the number of uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ.",
@@ -382,6 +423,7 @@
},
{
"BriefDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -392,6 +434,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MS.",
@@ -400,6 +443,7 @@
},
{
"BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
"PublicDescription": "Counts the number of uops not delivered to Resource Allocation Table (RAT) per thread adding 4 x when Resource Allocation Table (RAT) is not stalled and Instruction Decode Queue (IDQ) delivers x uops to Resource Allocation Table (RAT) (where x belongs to {0,1,2,3}). Counting does not cover cases when: a. IDQ-Resource Allocation Table (RAT) pipe serves the other thread. b. Resource Allocation Table (RAT) is stalled for the thread (including uop drops and clear BE conditions). c. Instruction Decode Queue (IDQ) delivers four uops.",
@@ -408,6 +452,7 @@
},
{
"BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -417,6 +462,7 @@
},
{
"BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
@@ -426,6 +472,7 @@
},
{
"BriefDescription": "Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE",
@@ -435,6 +482,7 @@
},
{
"BriefDescription": "Cycles with less than 2 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE",
@@ -444,6 +492,7 @@
},
{
"BriefDescription": "Cycles with less than 3 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/memory.json b/tools/perf/pmu-events/arch/x86/cascadelakex/memory.json
index a00ad0aaf1ba..bab4ca603f08 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/memory.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L3_MISS",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to any reasons (multiple categories may count as one).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.ABORTED",
"PEBS": "1",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to unfriendly events (such as interrupts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.ABORTED_EVENTS",
"SampleAfterValue": "2000003",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.ABORTED_MEM",
"SampleAfterValue": "2000003",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.ABORTED_MEMTYPE",
"PublicDescription": "Number of times an HLE execution aborted due to incompatible memory type.",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to hardware timer expiration.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.ABORTED_TIMER",
"SampleAfterValue": "2000003",
@@ -55,6 +62,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.ABORTED_UNFRIENDLY",
"SampleAfterValue": "2000003",
@@ -62,6 +70,7 @@
},
{
"BriefDescription": "Number of times an HLE execution successfully committed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.COMMIT",
"PublicDescription": "Number of times HLE commit succeeded.",
@@ -70,6 +79,7 @@
},
{
"BriefDescription": "Number of times an HLE execution started.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.START",
"PublicDescription": "Number of times we entered an HLE region. Does not count nested transactions.",
@@ -78,6 +88,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
+ "Counter": "0,1,2,3",
"Errata": "SKL089",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
@@ -87,6 +98,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
@@ -99,6 +111,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
@@ -111,6 +124,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
@@ -123,6 +137,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
@@ -135,6 +150,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
@@ -147,6 +163,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
@@ -159,6 +176,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
@@ -171,6 +189,7 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
@@ -183,6 +202,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -192,6 +212,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -201,6 +222,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -210,6 +232,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -219,6 +242,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -228,6 +252,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.REMOTE_HITM OCR.ALL_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -237,6 +262,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD OCR.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -246,6 +272,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.SNOOP_MISS OCR.ALL_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -255,6 +282,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS.SNOOP_NONE OCR.ALL_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -264,6 +292,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -273,6 +302,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -282,6 +312,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -291,6 +322,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -300,6 +332,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -309,6 +342,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -318,6 +352,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -327,6 +362,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -336,6 +372,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -345,6 +382,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -354,6 +392,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -363,6 +402,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -372,6 +412,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -381,6 +422,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -390,6 +432,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -399,6 +442,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -408,6 +452,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -417,6 +462,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -426,6 +472,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -435,6 +482,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -444,6 +492,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -453,6 +502,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITM OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -462,6 +512,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -471,6 +522,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -480,6 +532,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_NONE OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -489,6 +542,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -498,6 +552,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -507,6 +562,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -516,6 +572,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -525,6 +582,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -534,6 +592,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -543,6 +602,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -552,6 +612,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -561,6 +622,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -570,6 +632,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -579,6 +642,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -588,6 +652,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -597,6 +662,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -606,6 +672,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -615,6 +682,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -624,6 +692,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -633,6 +702,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.ANY_SNOOP OCR.ALL_PF_RFO.L3_MISS.ANY_SNOOP OCR.ALL_PF_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -642,6 +712,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_MISS.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -651,6 +722,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -660,6 +732,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -669,6 +742,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -678,6 +752,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.REMOTE_HITM OCR.ALL_PF_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -687,6 +762,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWARD OCR.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -696,6 +772,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.SNOOP_MISS OCR.ALL_PF_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -705,6 +782,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS.SNOOP_NONE OCR.ALL_PF_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -714,6 +792,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -723,6 +802,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -732,6 +812,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -741,6 +822,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -750,6 +832,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -759,6 +842,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -768,6 +852,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -777,6 +862,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -786,6 +872,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -795,6 +882,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -804,6 +892,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -813,6 +902,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -822,6 +912,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -831,6 +922,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -840,6 +932,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -849,6 +942,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -858,6 +952,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.ANY_SNOOP OCR.ALL_READS.L3_MISS.ANY_SNOOP OCR.ALL_READS.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -867,6 +962,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.HITM_OTHER_CORE OCR.ALL_READS.L3_MISS.HITM_OTHER_CORE OCR.ALL_READS.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -876,6 +972,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -885,6 +982,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -894,6 +992,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_READS.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_READS.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -903,6 +1002,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.REMOTE_HITM OCR.ALL_READS.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -912,6 +1012,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.REMOTE_HIT_FORWARD OCR.ALL_READS.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -921,6 +1022,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.SNOOP_MISS OCR.ALL_READS.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -930,6 +1032,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS.SNOOP_NONE OCR.ALL_READS.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -939,6 +1042,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.ANY_SNOOP OCR.ALL_READS.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -948,6 +1052,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -957,6 +1062,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -966,6 +1072,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -975,6 +1082,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED OCR.ALL_READS.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -984,6 +1092,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -993,6 +1102,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1002,6 +1112,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1011,6 +1122,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_READS.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1020,6 +1132,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1029,6 +1142,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1038,6 +1152,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1047,6 +1162,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1056,6 +1172,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1065,6 +1182,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1074,6 +1192,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1083,6 +1202,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.ANY_SNOOP OCR.ALL_RFO.L3_MISS.ANY_SNOOP OCR.ALL_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1092,6 +1212,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.HITM_OTHER_CORE OCR.ALL_RFO.L3_MISS.HITM_OTHER_CORE OCR.ALL_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1101,6 +1222,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1110,6 +1232,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1119,6 +1242,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_MISS.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1128,6 +1252,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.REMOTE_HITM OCR.ALL_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1137,6 +1262,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD OCR.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1146,6 +1272,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.SNOOP_MISS OCR.ALL_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1155,6 +1282,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS.SNOOP_NONE OCR.ALL_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1164,6 +1292,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1173,6 +1302,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1182,6 +1312,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1191,6 +1322,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1200,6 +1332,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1209,6 +1342,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1218,6 +1352,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1227,6 +1362,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1236,6 +1372,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD OCR.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1245,6 +1382,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1254,6 +1392,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1263,6 +1402,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1272,6 +1412,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1281,6 +1422,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1290,6 +1432,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1299,6 +1442,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1308,6 +1452,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.ANY_SNOOP OCR.DEMAND_CODE_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1317,6 +1462,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.HITM_OTHER_CORE OCR.DEMAND_CODE_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1326,6 +1472,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1335,6 +1482,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1344,6 +1492,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.NO_SNOOP_NEEDED OCR.DEMAND_CODE_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1353,6 +1502,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1362,6 +1512,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1371,6 +1522,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1380,6 +1532,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1389,6 +1542,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1398,6 +1552,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1407,6 +1562,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1416,6 +1572,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1425,6 +1582,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1434,6 +1592,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1443,6 +1602,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1452,6 +1612,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1461,6 +1622,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1470,6 +1632,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1479,6 +1642,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1488,6 +1652,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1497,6 +1662,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1506,6 +1672,7 @@
},
{
"BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1515,6 +1682,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1524,6 +1692,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1533,6 +1702,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.ANY_SNOOP OCR.DEMAND_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1542,6 +1712,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.DEMAND_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1551,6 +1722,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1560,6 +1732,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1569,6 +1742,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.DEMAND_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1578,6 +1752,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1587,6 +1762,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1596,6 +1772,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1605,6 +1782,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1614,6 +1792,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1623,6 +1802,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1632,6 +1812,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1641,6 +1822,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1650,6 +1832,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1659,6 +1842,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1668,6 +1852,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1677,6 +1862,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1686,6 +1872,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1695,6 +1882,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1704,6 +1892,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1713,6 +1902,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1722,6 +1912,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1731,6 +1922,7 @@
},
{
"BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1740,6 +1932,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1749,6 +1942,7 @@
},
{
"BriefDescription": "Counts demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1758,6 +1952,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.ANY_SNOOP OCR.DEMAND_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1767,6 +1962,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.HITM_OTHER_CORE OCR.DEMAND_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1776,6 +1972,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_FWD OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1785,6 +1982,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1794,6 +1992,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.NO_SNOOP_NEEDED OCR.DEMAND_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1803,6 +2002,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -1812,6 +2012,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1821,6 +2022,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1830,6 +2032,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1839,6 +2042,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1848,6 +2052,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1857,6 +2062,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1866,6 +2072,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1875,6 +2082,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1884,6 +2092,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1893,6 +2102,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1902,6 +2112,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1911,6 +2122,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1920,6 +2132,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1929,6 +2142,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1938,6 +2152,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1947,6 +2162,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1956,6 +2172,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1965,6 +2182,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1974,6 +2192,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1983,6 +2202,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.ANY_SNOOP OCR.OTHER.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1992,6 +2212,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.HITM_OTHER_CORE OCR.OTHER.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2001,6 +2222,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.HIT_OTHER_CORE_FWD OCR.OTHER.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2010,6 +2232,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.OTHER.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2019,6 +2242,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.NO_SNOOP_NEEDED OCR.OTHER.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2028,6 +2252,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2037,6 +2262,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2046,6 +2272,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2055,6 +2282,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2064,6 +2292,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2073,6 +2302,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2082,6 +2312,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2091,6 +2322,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2100,6 +2332,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2109,6 +2342,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2118,6 +2352,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2127,6 +2362,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2136,6 +2372,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2145,6 +2382,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2154,6 +2392,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2163,6 +2402,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2172,6 +2412,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2181,6 +2422,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2190,6 +2432,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2199,6 +2442,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2208,6 +2452,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.ANY_SNOOP OCR.PF_L1D_AND_SW.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2217,6 +2462,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.HITM_OTHER_CORE OCR.PF_L1D_AND_SW.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2226,6 +2472,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_FWD OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2235,6 +2482,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2244,6 +2492,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.NO_SNOOP_NEEDED OCR.PF_L1D_AND_SW.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2253,6 +2502,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2262,6 +2512,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2271,6 +2522,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2280,6 +2532,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2289,6 +2542,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2298,6 +2552,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2307,6 +2562,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2316,6 +2572,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2325,6 +2582,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2334,6 +2592,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2343,6 +2602,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2352,6 +2612,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2361,6 +2622,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2370,6 +2632,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2379,6 +2642,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2388,6 +2652,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2397,6 +2662,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2406,6 +2672,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2415,6 +2682,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2424,6 +2692,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2433,6 +2702,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.ANY_SNOOP OCR.PF_L2_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2442,6 +2712,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.PF_L2_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2451,6 +2722,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2460,6 +2732,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2469,6 +2742,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.PF_L2_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2478,6 +2752,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2487,6 +2762,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2496,6 +2772,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2505,6 +2782,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2514,6 +2792,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2523,6 +2802,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2532,6 +2812,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2541,6 +2822,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2550,6 +2832,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2559,6 +2842,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2568,6 +2852,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2577,6 +2862,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2586,6 +2872,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2595,6 +2882,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2604,6 +2892,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2613,6 +2902,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2622,6 +2912,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2631,6 +2922,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2640,6 +2932,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2649,6 +2942,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2658,6 +2952,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.ANY_SNOOP OCR.PF_L2_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2667,6 +2962,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.HITM_OTHER_CORE OCR.PF_L2_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2676,6 +2972,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_FWD OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2685,6 +2982,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2694,6 +2992,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.NO_SNOOP_NEEDED OCR.PF_L2_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2703,6 +3002,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2712,6 +3012,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2721,6 +3022,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2730,6 +3032,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2739,6 +3042,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2748,6 +3052,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2757,6 +3062,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2766,6 +3072,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2775,6 +3082,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2784,6 +3092,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2793,6 +3102,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2802,6 +3112,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2811,6 +3122,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2820,6 +3132,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2829,6 +3142,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2838,6 +3152,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2847,6 +3162,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2856,6 +3172,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2865,6 +3182,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2874,6 +3192,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2883,6 +3202,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.ANY_SNOOP OCR.PF_L3_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2892,6 +3212,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.HITM_OTHER_CORE OCR.PF_L3_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2901,6 +3222,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2910,6 +3232,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2919,6 +3242,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.NO_SNOOP_NEEDED OCR.PF_L3_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -2928,6 +3252,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -2937,6 +3262,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2946,6 +3272,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -2955,6 +3282,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2964,6 +3292,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -2973,6 +3302,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -2982,6 +3312,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -2991,6 +3322,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3000,6 +3332,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3009,6 +3342,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3018,6 +3352,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3027,6 +3362,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3036,6 +3372,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3045,6 +3382,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3054,6 +3392,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3063,6 +3402,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3072,6 +3412,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3081,6 +3422,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3090,6 +3432,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3099,6 +3442,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3108,6 +3452,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.ANY_SNOOP OCR.PF_L3_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3117,6 +3462,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.HITM_OTHER_CORE OCR.PF_L3_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3126,6 +3472,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_FWD OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3135,6 +3482,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3144,6 +3492,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.NO_SNOOP_NEEDED OCR.PF_L3_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3153,6 +3502,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -3162,6 +3512,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3171,6 +3522,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3180,6 +3532,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3189,6 +3542,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3198,6 +3552,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3207,6 +3562,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3216,6 +3572,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3225,6 +3582,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3234,6 +3592,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3243,6 +3602,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3252,6 +3612,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3261,6 +3622,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3270,6 +3632,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -3279,6 +3642,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3288,6 +3652,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3297,6 +3662,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -3306,6 +3672,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -3315,6 +3682,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -3324,6 +3692,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -3333,6 +3702,7 @@
},
{
"BriefDescription": "Demand Data Read requests who miss L3 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
"PublicDescription": "Demand Data Read requests who miss L3 cache.",
@@ -3341,6 +3711,7 @@
},
{
"BriefDescription": "Cycles with at least 1 Demand Data Read requests who miss L3 cache in the superQ.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD",
@@ -3349,6 +3720,7 @@
},
{
"BriefDescription": "Counts number of Offcore outstanding Demand Data Read requests that miss L3 cache in the superQ every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD",
"SampleAfterValue": "2000003",
@@ -3356,6 +3728,7 @@
},
{
"BriefDescription": "Cycles with at least 6 Demand Data Read requests that miss L3 cache in the superQ.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD_GE_6",
@@ -3364,6 +3737,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.ANY_SNOOP",
@@ -3374,6 +3748,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.HITM_OTHER_CORE",
@@ -3384,6 +3759,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -3394,6 +3770,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -3404,6 +3781,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
@@ -3414,6 +3792,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.REMOTE_HITM",
@@ -3424,6 +3803,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
@@ -3434,6 +3814,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_MISS",
@@ -3444,6 +3825,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_NONE",
@@ -3454,6 +3836,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -3464,6 +3847,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -3474,6 +3858,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -3484,6 +3869,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -3494,6 +3880,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -3504,6 +3891,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -3514,6 +3902,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -3524,6 +3913,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -3534,6 +3924,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -3544,6 +3935,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -3554,6 +3946,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -3564,6 +3957,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -3574,6 +3968,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -3584,6 +3979,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -3594,6 +3990,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -3604,6 +4001,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -3614,6 +4012,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.ANY_SNOOP",
@@ -3624,6 +4023,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.HITM_OTHER_CORE",
@@ -3634,6 +4034,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -3644,6 +4045,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -3654,6 +4056,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
@@ -3664,6 +4067,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.REMOTE_HITM",
@@ -3674,6 +4078,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
@@ -3684,6 +4089,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS",
@@ -3694,6 +4100,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_NONE",
@@ -3704,6 +4111,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -3714,6 +4122,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -3724,6 +4133,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -3734,6 +4144,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -3744,6 +4155,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -3754,6 +4166,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -3764,6 +4177,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -3774,6 +4188,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -3784,6 +4199,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -3794,6 +4210,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -3804,6 +4221,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -3814,6 +4232,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -3824,6 +4243,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -3834,6 +4254,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -3844,6 +4265,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -3854,6 +4276,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -3864,6 +4287,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.ANY_SNOOP",
@@ -3874,6 +4298,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.HITM_OTHER_CORE",
@@ -3884,6 +4309,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -3894,6 +4320,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -3904,6 +4331,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.NO_SNOOP_NEEDED",
@@ -3914,6 +4342,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.REMOTE_HITM",
@@ -3924,6 +4353,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWARD",
@@ -3934,6 +4364,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_MISS",
@@ -3944,6 +4375,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_NONE",
@@ -3954,6 +4386,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -3964,6 +4397,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -3974,6 +4408,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -3984,6 +4419,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -3994,6 +4430,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -4004,6 +4441,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -4014,6 +4452,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4024,6 +4463,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -4034,6 +4474,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4044,6 +4485,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -4054,6 +4496,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -4064,6 +4507,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -4074,6 +4518,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4084,6 +4529,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -4094,6 +4540,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -4104,6 +4551,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -4114,6 +4562,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.ANY_SNOOP",
@@ -4124,6 +4573,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.HITM_OTHER_CORE",
@@ -4134,6 +4584,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -4144,6 +4595,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -4154,6 +4606,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.NO_SNOOP_NEEDED",
@@ -4164,6 +4617,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.REMOTE_HITM",
@@ -4174,6 +4628,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.REMOTE_HIT_FORWARD",
@@ -4184,6 +4639,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.SNOOP_MISS",
@@ -4194,6 +4650,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.SNOOP_NONE",
@@ -4204,6 +4661,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -4214,6 +4672,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -4224,6 +4683,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -4234,6 +4694,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4244,6 +4705,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -4254,6 +4716,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -4264,6 +4727,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4274,6 +4738,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -4284,6 +4749,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4294,6 +4760,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -4304,6 +4771,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -4314,6 +4782,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -4324,6 +4793,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4334,6 +4804,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -4344,6 +4815,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -4354,6 +4826,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -4364,6 +4837,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.ANY_SNOOP",
@@ -4374,6 +4848,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.HITM_OTHER_CORE",
@@ -4384,6 +4859,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -4394,6 +4870,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -4404,6 +4881,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.NO_SNOOP_NEEDED",
@@ -4414,6 +4892,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.REMOTE_HITM",
@@ -4424,6 +4903,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD",
@@ -4434,6 +4914,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_MISS",
@@ -4444,6 +4925,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_NONE",
@@ -4454,6 +4936,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -4464,6 +4947,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -4474,6 +4958,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -4484,6 +4969,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4494,6 +4980,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -4504,6 +4991,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -4514,6 +5002,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4524,6 +5013,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -4534,6 +5024,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4544,6 +5035,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -4554,6 +5046,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -4564,6 +5057,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -4574,6 +5068,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4584,6 +5079,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -4594,6 +5090,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -4604,6 +5101,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -4614,6 +5112,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.ANY_SNOOP",
@@ -4624,6 +5123,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.HITM_OTHER_CORE",
@@ -4634,6 +5134,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -4644,6 +5145,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -4654,6 +5156,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.NO_SNOOP_NEEDED",
@@ -4664,6 +5167,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.REMOTE_HITM",
@@ -4674,6 +5178,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.REMOTE_HIT_FORWARD",
@@ -4684,6 +5189,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS",
@@ -4694,6 +5200,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_NONE",
@@ -4704,6 +5211,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -4714,6 +5222,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -4724,6 +5233,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -4734,6 +5244,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4744,6 +5255,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -4754,6 +5266,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -4764,6 +5277,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4774,6 +5288,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -4784,6 +5299,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -4794,6 +5310,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -4804,6 +5321,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -4814,6 +5332,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -4824,6 +5343,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4834,6 +5354,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -4844,6 +5365,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -4854,6 +5376,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -4864,6 +5387,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.ANY_SNOOP",
@@ -4874,6 +5398,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.HITM_OTHER_CORE",
@@ -4884,6 +5409,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -4894,6 +5420,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -4904,6 +5431,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
@@ -4914,6 +5442,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.REMOTE_HITM",
@@ -4924,6 +5453,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
@@ -4934,6 +5464,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS",
@@ -4944,6 +5475,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_NONE",
@@ -4954,6 +5486,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -4964,6 +5497,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -4974,6 +5508,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -4984,6 +5519,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -4994,6 +5530,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -5004,6 +5541,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -5014,6 +5552,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5024,6 +5563,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -5034,6 +5574,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5044,6 +5585,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -5054,6 +5596,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -5064,6 +5607,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -5074,6 +5618,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5084,6 +5629,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -5094,6 +5640,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -5104,6 +5651,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -5114,6 +5662,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.ANY_SNOOP",
@@ -5124,6 +5673,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.HITM_OTHER_CORE",
@@ -5134,6 +5684,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -5144,6 +5695,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -5154,6 +5706,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.NO_SNOOP_NEEDED",
@@ -5164,6 +5717,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.REMOTE_HITM",
@@ -5174,6 +5728,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.REMOTE_HIT_FORWARD",
@@ -5184,6 +5739,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_MISS",
@@ -5194,6 +5750,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_NONE",
@@ -5204,6 +5761,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -5214,6 +5772,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -5224,6 +5783,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -5234,6 +5794,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5244,6 +5805,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -5254,6 +5816,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -5264,6 +5827,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5274,6 +5838,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -5284,6 +5849,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5294,6 +5860,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -5304,6 +5871,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -5314,6 +5882,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -5324,6 +5893,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5334,6 +5904,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -5344,6 +5915,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -5354,6 +5926,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -5364,6 +5937,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.ANY_SNOOP",
@@ -5374,6 +5948,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.HITM_OTHER_CORE",
@@ -5384,6 +5959,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -5394,6 +5970,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -5404,6 +5981,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.NO_SNOOP_NEEDED",
@@ -5414,6 +5992,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.REMOTE_HITM",
@@ -5424,6 +6003,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.REMOTE_HIT_FORWARD",
@@ -5434,6 +6014,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.SNOOP_MISS",
@@ -5444,6 +6025,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS.SNOOP_NONE",
@@ -5454,6 +6036,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -5464,6 +6047,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -5474,6 +6058,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -5484,6 +6069,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5494,6 +6080,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -5504,6 +6091,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -5514,6 +6102,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5524,6 +6113,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -5534,6 +6124,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5544,6 +6135,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -5554,6 +6146,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -5564,6 +6157,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -5574,6 +6168,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5584,6 +6179,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -5594,6 +6190,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -5604,6 +6201,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.OTHER.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -5614,6 +6212,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.ANY_SNOOP",
@@ -5624,6 +6223,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.HITM_OTHER_CORE",
@@ -5634,6 +6234,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -5644,6 +6245,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -5654,6 +6256,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.NO_SNOOP_NEEDED",
@@ -5664,6 +6267,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.REMOTE_HITM",
@@ -5674,6 +6278,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.REMOTE_HIT_FORWARD",
@@ -5684,6 +6289,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.SNOOP_MISS",
@@ -5694,6 +6300,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.SNOOP_NONE",
@@ -5704,6 +6311,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -5714,6 +6322,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -5724,6 +6333,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -5734,6 +6344,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5744,6 +6355,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -5754,6 +6366,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -5764,6 +6377,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5774,6 +6388,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -5784,6 +6399,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -5794,6 +6410,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -5804,6 +6421,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -5814,6 +6432,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -5824,6 +6443,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5834,6 +6454,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -5844,6 +6465,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -5854,6 +6476,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -5864,6 +6487,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.ANY_SNOOP",
@@ -5874,6 +6498,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.HITM_OTHER_CORE",
@@ -5884,6 +6509,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -5894,6 +6520,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -5904,6 +6531,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
@@ -5914,6 +6542,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.REMOTE_HITM",
@@ -5924,6 +6553,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
@@ -5934,6 +6564,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS",
@@ -5944,6 +6575,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_NONE",
@@ -5954,6 +6586,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -5964,6 +6597,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -5974,6 +6608,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -5984,6 +6619,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -5994,6 +6630,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -6004,6 +6641,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -6014,6 +6652,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6024,6 +6663,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -6034,6 +6674,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6044,6 +6685,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -6054,6 +6696,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -6064,6 +6707,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -6074,6 +6718,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -6084,6 +6729,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -6094,6 +6740,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -6104,6 +6751,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -6114,6 +6762,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.ANY_SNOOP",
@@ -6124,6 +6773,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.HITM_OTHER_CORE",
@@ -6134,6 +6784,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -6144,6 +6795,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -6154,6 +6806,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.NO_SNOOP_NEEDED",
@@ -6164,6 +6817,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HITM",
@@ -6174,6 +6828,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HIT_FORWARD",
@@ -6184,6 +6839,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_MISS",
@@ -6194,6 +6850,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_NONE",
@@ -6204,6 +6861,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -6214,6 +6872,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -6224,6 +6883,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -6234,6 +6894,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -6244,6 +6905,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -6254,6 +6916,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -6264,6 +6927,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6274,6 +6938,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -6284,6 +6949,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6294,6 +6960,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -6304,6 +6971,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -6314,6 +6982,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -6324,6 +6993,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -6334,6 +7004,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -6344,6 +7015,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -6354,6 +7026,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -6364,6 +7037,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.ANY_SNOOP",
@@ -6374,6 +7048,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.HITM_OTHER_CORE",
@@ -6384,6 +7059,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -6394,6 +7070,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -6404,6 +7081,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.NO_SNOOP_NEEDED",
@@ -6414,6 +7092,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.REMOTE_HITM",
@@ -6424,6 +7103,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
@@ -6434,6 +7114,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS",
@@ -6444,6 +7125,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_NONE",
@@ -6454,6 +7136,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -6464,6 +7147,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -6474,6 +7158,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -6484,6 +7169,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -6494,6 +7180,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -6504,6 +7191,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -6514,6 +7202,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6524,6 +7213,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -6534,6 +7224,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6544,6 +7235,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -6554,6 +7246,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -6564,6 +7257,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -6574,6 +7268,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -6584,6 +7279,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -6594,6 +7290,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -6604,6 +7301,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -6614,6 +7312,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.ANY_SNOOP",
@@ -6624,6 +7323,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.HITM_OTHER_CORE",
@@ -6634,6 +7334,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_FWD",
@@ -6644,6 +7345,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.HIT_OTHER_CORE_NO_FWD",
@@ -6654,6 +7356,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.NO_SNOOP_NEEDED",
@@ -6664,6 +7367,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.REMOTE_HITM",
@@ -6674,6 +7378,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWARD",
@@ -6684,6 +7389,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_MISS",
@@ -6694,6 +7400,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_NONE",
@@ -6704,6 +7411,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.ANY_SNOOP",
@@ -6714,6 +7422,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HITM_OTHER_CORE",
@@ -6724,6 +7433,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_FWD",
@@ -6734,6 +7444,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -6744,6 +7455,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.NO_SNOOP_NEEDED",
@@ -6754,6 +7466,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS",
@@ -6764,6 +7477,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6774,6 +7488,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_NONE",
@@ -6784,6 +7499,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
@@ -6794,6 +7510,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.ANY_SNOOP",
@@ -6804,6 +7521,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HITM_OTHER_CORE",
@@ -6814,6 +7532,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_FWD",
@@ -6824,6 +7543,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.HIT_OTHER_CORE_NO_FWD",
@@ -6834,6 +7554,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.NO_SNOOP_NEEDED",
@@ -6844,6 +7565,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_MISS",
@@ -6854,6 +7576,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_HOP1_DRAM.SNOOP_NONE",
@@ -6864,15 +7587,17 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to any reasons (multiple categories may count as one).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.ABORTED",
- "PEBS": "1",
+ "PEBS": "2",
"PublicDescription": "Number of times RTM abort was triggered.",
"SampleAfterValue": "2000003",
"UMask": "0x4"
},
{
"BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.ABORTED_EVENTS",
"PublicDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).",
@@ -6881,6 +7606,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.ABORTED_MEM",
"PublicDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
@@ -6889,6 +7615,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.ABORTED_MEMTYPE",
"PublicDescription": "Number of times an RTM execution aborted due to incompatible memory type.",
@@ -6897,6 +7624,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to uncommon conditions.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.ABORTED_TIMER",
"SampleAfterValue": "2000003",
@@ -6904,6 +7632,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY",
"PublicDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions.",
@@ -6912,6 +7641,7 @@
},
{
"BriefDescription": "Number of times an RTM execution successfully committed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Number of times RTM commit succeeded.",
@@ -6920,6 +7650,7 @@
},
{
"BriefDescription": "Number of times an RTM execution started.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.START",
"PublicDescription": "Number of times we entered an RTM region. Does not count nested transactions.",
@@ -6928,6 +7659,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC1",
"SampleAfterValue": "2000003",
@@ -6935,6 +7667,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"PublicDescription": "Unfriendly TSX abort triggered by a vzeroupper instruction.",
@@ -6943,6 +7676,7 @@
},
{
"BriefDescription": "Counts the number of times an instruction execution caused the transactional nest count supported to be exceeded",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"PublicDescription": "Unfriendly TSX abort triggered by a nest count that is too deep.",
@@ -6951,6 +7685,7 @@
},
{
"BriefDescription": "Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC4",
"PublicDescription": "RTM region detected inside HLE.",
@@ -6959,6 +7694,7 @@
},
{
"BriefDescription": "Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC5",
"PublicDescription": "Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.",
@@ -6967,6 +7703,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data capacity limitation for transactional reads or writes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY",
"SampleAfterValue": "2000003",
@@ -6974,6 +7711,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Number of times a TSX line had a cache conflict.",
@@ -6982,6 +7720,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to XRELEASE lock not satisfying the address and value requirements in the elision buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH",
"PublicDescription": "Number of times a TSX Abort was triggered due to release/commit but data and address mismatch.",
@@ -6990,6 +7729,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY",
"PublicDescription": "Number of times a TSX Abort was triggered due to commit but Lock Buffer not empty.",
@@ -6998,6 +7738,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT",
"PublicDescription": "Number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer.",
@@ -7006,6 +7747,7 @@
},
{
"BriefDescription": "Number of times a HLE transactional region aborted due to a non XRELEASE prefixed instruction writing to an elided lock in the elision buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK",
"PublicDescription": "Number of times a TSX Abort was triggered due to a non-release/commit store to lock.",
@@ -7014,6 +7756,7 @@
},
{
"BriefDescription": "Number of times HLE lock could not be elided due to ElisionBufferAvailable being zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL",
"PublicDescription": "Number of times we could not allocate Lock Buffer.",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/metricgroups.json b/tools/perf/pmu-events/arch/x86/cascadelakex/metricgroups.json
index bc6a9a4d27a9..a579603f720b 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/metricgroups.json
@@ -2,9 +2,22 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -25,8 +38,11 @@
"IoBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -64,10 +80,14 @@
"tma_L5_group": "Metrics for top-down breakdown at level 5",
"tma_L6_group": "Metrics for top-down breakdown at level 6",
"tma_alu_op_utilization_group": "Metrics contributing to tma_alu_op_utilization category",
+ "tma_assists_group": "Metrics contributing to tma_assists category",
"tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
+ "tma_branch_mispredicts_group": "Metrics contributing to tma_branch_mispredicts category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
+ "tma_code_stlb_miss_group": "Metrics contributing to tma_code_stlb_miss category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -78,9 +98,9 @@
"tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
"tma_heavy_operations_group": "Metrics contributing to tma_heavy_operations category",
"tma_issue2P": "Metrics related by the issue $issue2P",
- "tma_issueBC": "Metrics related by the issue $issueBC",
"tma_issueBM": "Metrics related by the issue $issueBM",
"tma_issueBW": "Metrics related by the issue $issueBW",
+ "tma_issueComp": "Metrics related by the issue $issueComp",
"tma_issueD0": "Metrics related by the issue $issueD0",
"tma_issueFB": "Metrics related by the issue $issueFB",
"tma_issueFL": "Metrics related by the issue $issueFL",
@@ -96,19 +116,25 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_l2_bound_group": "Metrics contributing to tma_l2_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_load_stlb_miss_group": "Metrics contributing to tma_load_stlb_miss category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
"tma_mite_group": "Metrics contributing to tma_mite category",
+ "tma_other_light_ops_group": "Metrics contributing to tma_other_light_ops category",
"tma_ports_utilization_group": "Metrics contributing to tma_ports_utilization category",
"tma_ports_utilized_0_group": "Metrics contributing to tma_ports_utilized_0 category",
"tma_ports_utilized_3m_group": "Metrics contributing to tma_ports_utilized_3m category",
"tma_retiring_group": "Metrics contributing to tma_retiring category",
"tma_serializing_operation_group": "Metrics contributing to tma_serializing_operation category",
"tma_store_bound_group": "Metrics contributing to tma_store_bound category",
- "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category"
+ "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category",
+ "tma_store_stlb_miss_group": "Metrics contributing to tma_store_stlb_miss category"
}
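
For context on what the hunks above and below are doing mechanically: each pmu-events file is a flat JSON array of event objects (fields such as "EventName", "EventCode", "UMask"), and this series adds a "Counter": "0,1,2,3" field to every entry to record which general-purpose counters the event may be scheduled on. A minimal sketch, assuming only the layout visible in these hunks (the file path below is just an example, not part of the patch), that flags entries still missing the new field:

#!/usr/bin/env python3
# Not part of the kernel patch: list event entries in a pmu-events JSON file
# that lack the "Counter" field this series adds. Assumes the flat
# [{...}, {...}] array-of-objects layout shown in the hunks above.
import json
import sys

def events_missing_counter(path):
    with open(path) as f:
        events = json.load(f)  # each pmu-events file is a JSON array of event objects
    return [e.get("EventName", "<unnamed>")
            for e in events if "Counter" not in e]

if __name__ == "__main__":
    # Example path only; point this at any of the files touched by the diff.
    path = sys.argv[1] if len(sys.argv) > 1 else \
        "tools/perf/pmu-events/arch/x86/cascadelakex/other.json"
    for name in events_missing_counter(path):
        print(name)

Run against a tree with the patch applied, the script should print nothing for the files updated here; run against the pre-patch JSON, it lists every entry the hunks above annotate.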
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/other.json b/tools/perf/pmu-events/arch/x86/cascadelakex/other.json
index 3ab5e91a4c1c..51833bce994e 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/other.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL0_TURBO_LICENSE",
"PublicDescription": "Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL1_TURBO_LICENSE",
"PublicDescription": "Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions.",
@@ -17,14 +19,16 @@
},
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX512 turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL2_TURBO_LICENSE",
- "PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server michroarchtecture). This includes high current AVX 512-bit instructions.",
+ "PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture). This includes high current AVX 512-bit instructions.",
"SampleAfterValue": "200003",
"UMask": "0x20"
},
{
"BriefDescription": "Core cycles the core was throttled due to a pending power level request.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.THROTTLE",
"PublicDescription": "Core cycles the out-of-order engine was throttled due to a pending power level request.",
@@ -32,56 +36,8 @@
"UMask": "0x40"
},
{
- "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IFWDFE",
- "EventCode": "0xEF",
- "EventName": "CORE_SNOOP_RESPONSE.RSP_IFWDFE",
- "SampleAfterValue": "2000003",
- "UMask": "0x20"
- },
- {
- "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IFWDM",
- "EventCode": "0xEF",
- "EventName": "CORE_SNOOP_RESPONSE.RSP_IFWDM",
- "SampleAfterValue": "2000003",
- "UMask": "0x10"
- },
- {
- "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IHITFSE",
- "EventCode": "0xEF",
- "EventName": "CORE_SNOOP_RESPONSE.RSP_IHITFSE",
- "SampleAfterValue": "2000003",
- "UMask": "0x2"
- },
- {
- "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_IHITI",
- "EventCode": "0xEF",
- "EventName": "CORE_SNOOP_RESPONSE.RSP_IHITI",
- "SampleAfterValue": "2000003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SFWDFE",
- "EventCode": "0xEF",
- "EventName": "CORE_SNOOP_RESPONSE.RSP_SFWDFE",
- "SampleAfterValue": "2000003",
- "UMask": "0x40"
- },
- {
- "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SFWDM",
- "EventCode": "0xEF",
- "EventName": "CORE_SNOOP_RESPONSE.RSP_SFWDM",
- "SampleAfterValue": "2000003",
- "UMask": "0x8"
- },
- {
- "BriefDescription": "CORE_SNOOP_RESPONSE.RSP_SHITFSE",
- "EventCode": "0xEF",
- "EventName": "CORE_SNOOP_RESPONSE.RSP_SHITFSE",
- "SampleAfterValue": "2000003",
- "UMask": "0x4"
- },
- {
"BriefDescription": "Number of hardware interrupts received by the processor.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCB",
"EventName": "HW_INTERRUPTS.RECEIVED",
"PublicDescription": "Counts the number of hardware interruptions received by the processor.",
@@ -89,23 +45,8 @@
"UMask": "0x1"
},
{
- "BriefDescription": "Counts number of cache lines that are dropped and not written back to L3 as they are deemed to be less likely to be reused shortly",
- "EventCode": "0xFE",
- "EventName": "IDI_MISC.WB_DOWNGRADE",
- "PublicDescription": "Counts number of cache lines that are dropped and not written back to L3 as they are deemed to be less likely to be reused shortly.",
- "SampleAfterValue": "100003",
- "UMask": "0x4"
- },
- {
- "BriefDescription": "Counts number of cache lines that are allocated and written back to L3 with the intention that they are more likely to be reused shortly",
- "EventCode": "0xFE",
- "EventName": "IDI_MISC.WB_UPGRADE",
- "PublicDescription": "Counts number of cache lines that are allocated and written back to L3 with the intention that they are more likely to be reused shortly.",
- "SampleAfterValue": "100003",
- "UMask": "0x2"
- },
- {
"BriefDescription": "OCR.ALL_DATA_RD.ANY_RESPONSE have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -115,6 +56,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -124,6 +66,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -133,6 +76,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -142,6 +86,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.SUPPLIER_NONE.ANY_SNOOP OCR.ALL_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -151,6 +96,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE OCR.ALL_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -160,6 +106,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -169,6 +116,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -178,6 +126,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED OCR.ALL_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -187,6 +136,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -196,6 +146,7 @@
},
{
"BriefDescription": "OCR.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -205,6 +156,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.ANY_RESPONSE have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -214,6 +166,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -223,6 +176,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -232,6 +186,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -241,6 +196,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.ANY_SNOOP OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -250,6 +206,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -259,6 +216,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -268,6 +226,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -277,6 +236,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -286,6 +246,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -295,6 +256,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -304,6 +266,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.ANY_RESPONSE have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -313,6 +276,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -322,6 +286,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -331,6 +296,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -340,6 +306,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.SUPPLIER_NONE.ANY_SNOOP OCR.ALL_PF_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -349,6 +316,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.SUPPLIER_NONE.HITM_OTHER_CORE OCR.ALL_PF_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -358,6 +326,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -367,6 +336,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -376,6 +346,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED OCR.ALL_PF_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -385,6 +356,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -394,6 +366,7 @@
},
{
"BriefDescription": "OCR.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_PF_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -403,6 +376,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.ANY_RESPONSE have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -412,6 +386,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.PMM_HIT_LOCAL_PMM.ANY_SNOOP OCR.ALL_READS.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -421,6 +396,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NONE OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -430,6 +406,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -439,6 +416,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.SUPPLIER_NONE.ANY_SNOOP OCR.ALL_READS.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -448,6 +426,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.SUPPLIER_NONE.HITM_OTHER_CORE OCR.ALL_READS.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -457,6 +436,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_FWD OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -466,6 +446,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -475,6 +456,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.SUPPLIER_NONE.NO_SNOOP_NEEDED OCR.ALL_READS.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -484,6 +466,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -493,6 +476,7 @@
},
{
"BriefDescription": "OCR.ALL_READS.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_READS.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -502,6 +486,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.ANY_RESPONSE have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -511,6 +496,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -520,6 +506,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -529,6 +516,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -538,6 +526,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.SUPPLIER_NONE.ANY_SNOOP OCR.ALL_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -547,6 +536,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.SUPPLIER_NONE.HITM_OTHER_CORE OCR.ALL_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -556,6 +546,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -565,6 +556,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -574,6 +566,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED OCR.ALL_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -583,6 +576,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.SUPPLIER_NONE.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -592,6 +586,7 @@
},
{
"BriefDescription": "OCR.ALL_RFO.SUPPLIER_NONE.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ALL_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -600,304 +595,8 @@
"UMask": "0x1"
},
{
- "BriefDescription": "Counts all demand code reads have any response type.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F80400004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x80400004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100400004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.ANY_SNOOP",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F80020004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x1000020004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x800020004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x400020004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads OCR.DEMAND_CODE_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100020004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_MISS",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x200020004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand code reads",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SUPPLIER_NONE.SNOOP_NONE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x80020004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads have any response type.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F80400001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x80400001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100400001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F80020001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x1000020001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x800020001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x400020001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads OCR.DEMAND_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100020001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x200020001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x80020001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) have any response type.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F80400002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x80400002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100400002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.ANY_SNOOP",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.ANY_SNOOP",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F80020002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x1000020002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x800020002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x400020002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs) OCR.DEMAND_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100020002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs)",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_MISS",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x200020002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all demand data writes (RFOs)",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SUPPLIER_NONE.SNOOP_NONE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x80020002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
"BriefDescription": "Counts any other requests have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -907,6 +606,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -916,6 +616,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -925,6 +626,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -934,6 +636,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -943,6 +646,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -952,6 +656,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -961,6 +666,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -970,6 +676,7 @@
},
{
"BriefDescription": "Counts any other requests OCR.OTHER.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -979,6 +686,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -988,6 +696,7 @@
},
{
"BriefDescription": "Counts any other requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -997,6 +706,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1006,6 +716,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1015,6 +726,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1024,6 +736,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1033,6 +746,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1042,6 +756,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1051,6 +766,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1060,6 +776,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1069,6 +786,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests OCR.PF_L1D_AND_SW.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1078,6 +796,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1087,6 +806,7 @@
},
{
"BriefDescription": "Counts L1 data cache hardware prefetch requests and software prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L1D_AND_SW.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1096,6 +816,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1105,6 +826,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1114,6 +836,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1123,6 +846,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1132,6 +856,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1141,6 +866,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1150,6 +876,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1159,6 +886,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1168,6 +896,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads OCR.PF_L2_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1177,6 +906,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1186,6 +916,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1195,6 +926,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1204,6 +936,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1213,6 +946,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1222,6 +956,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1231,6 +966,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1240,6 +976,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1249,6 +986,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1258,6 +996,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1267,6 +1006,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs OCR.PF_L2_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1276,6 +1016,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1285,6 +1026,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L2_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1294,6 +1036,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1303,6 +1046,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1312,6 +1056,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1321,6 +1066,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1330,6 +1076,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1339,6 +1086,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1348,6 +1096,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1357,6 +1106,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1366,6 +1116,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads OCR.PF_L3_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1375,6 +1126,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1384,6 +1136,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_DATA_RD.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1393,6 +1146,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs have any response type.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1402,6 +1156,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1411,6 +1166,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1420,6 +1176,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.PMM_HIT_LOCAL_PMM.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1429,6 +1186,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.SUPPLIER_NONE.ANY_SNOOP",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.SUPPLIER_NONE.ANY_SNOOP",
"MSRIndex": "0x1a6,0x1a7",
@@ -1438,6 +1196,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.SUPPLIER_NONE.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -1447,6 +1206,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1456,6 +1216,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.SUPPLIER_NONE.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1465,6 +1226,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs OCR.PF_L3_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.SUPPLIER_NONE.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -1474,6 +1236,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.SUPPLIER_NONE.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -1483,6 +1246,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PF_L3_RFO.SUPPLIER_NONE.SNOOP_NONE",
"MSRIndex": "0x1a6,0x1a7",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/pipeline.json b/tools/perf/pmu-events/arch/x86/cascadelakex/pipeline.json
index 66d686cc933e..9a1349527b66 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles when divide unit is busy executing divide or square root operations. Accounts for integer and floating-point operations.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x14",
"EventName": "ARITH.DIVIDER_ACTIVE",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired. [This event is alias to BR_INST_RETIRED.CONDITIONAL]",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.COND",
@@ -36,6 +40,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired. [This event is alias to BR_INST_RETIRED.COND]",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CONDITIONAL",
@@ -46,6 +51,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_NTAKEN",
@@ -55,6 +61,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
@@ -65,6 +72,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
@@ -75,6 +83,7 @@
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
@@ -85,6 +94,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
@@ -95,6 +105,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NOT_TAKEN",
@@ -104,6 +115,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_BRANCHES",
"PublicDescription": "This event counts both taken and not taken speculative and retired mispredicted branch instructions.",
@@ -112,6 +124,7 @@
},
{
"BriefDescription": "Speculative mispredicted indirect branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.INDIRECT",
"PublicDescription": "Counts speculatively miss-predicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded).",
@@ -120,6 +133,7 @@
},
{
"BriefDescription": "All mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
@@ -127,6 +141,7 @@
},
{
"BriefDescription": "Mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -136,6 +151,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -145,6 +161,7 @@
},
{
"BriefDescription": "Mispredicted direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -154,6 +171,7 @@
},
{
"BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -162,6 +180,7 @@
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.RET",
"PEBS": "1",
@@ -171,6 +190,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "25003",
@@ -178,6 +198,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when the thread is unhalted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK",
"SampleAfterValue": "25003",
@@ -186,6 +207,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core crystal clock cycles when at least one thread on the physical core is unhalted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "25003",
@@ -193,6 +215,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "25003",
@@ -200,6 +223,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
@@ -207,6 +231,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when the thread is unhalted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"SampleAfterValue": "25003",
@@ -215,6 +240,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core crystal clock cycles when at least one thread on the physical core is unhalted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "25003",
@@ -222,6 +248,7 @@
},
{
"BriefDescription": "Counts when there is a transition from ring 1, 2 or 3 to ring 0.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x3C",
@@ -231,6 +258,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -239,12 +267,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD_ANY",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -253,12 +283,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -267,6 +299,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -275,6 +308,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "16",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -283,6 +317,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -291,6 +326,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -299,6 +335,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "20",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -307,6 +344,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -315,6 +353,7 @@
},
{
"BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
@@ -323,6 +362,7 @@
},
{
"BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
@@ -331,6 +371,7 @@
},
{
"BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
"PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -339,6 +380,7 @@
},
{
"BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
"PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -347,6 +389,7 @@
},
{
"BriefDescription": "Cycles where the Store Buffer was full and no outstanding load.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
"SampleAfterValue": "2000003",
@@ -354,6 +397,7 @@
},
{
"BriefDescription": "Cycles where no uops were executed, the Reservation Station was not empty, the Store Buffer was full and there was no outstanding load.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS",
"PublicDescription": "Counts cycles during which no uops were executed on all ports and Reservation Station (RS) was not empty.",
@@ -362,6 +406,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to DECODE.LCP]",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to DECODE.LCP]",
@@ -370,6 +415,7 @@
},
{
"BriefDescription": "Instruction decoders utilized in a cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "INST_DECODED.DECODERS",
"PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
@@ -378,6 +424,7 @@
},
{
"BriefDescription": "Instructions retired from execution.",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
"PublicDescription": "Counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, Counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. INST_RETIRED.ANY_P is counted by a programmable counter and it is an architectural performance event. Counting: Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
"SampleAfterValue": "2000003",
@@ -385,6 +432,7 @@
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3",
"Errata": "SKL091, SKL044",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
@@ -393,6 +441,7 @@
},
{
"BriefDescription": "Number of all retired NOP instructions.",
+ "Counter": "0,1,2,3",
"Errata": "SKL091, SKL044",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.NOP",
@@ -402,6 +451,7 @@
},
{
"BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
"Errata": "SKL091, SKL044",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
@@ -412,6 +462,7 @@
},
{
"BriefDescription": "Number of cycles using always true condition applied to PEBS instructions retired event.",
+ "Counter": "0,2,3",
"CounterMask": "10",
"Errata": "SKL091, SKL044",
"EventCode": "0xC0",
@@ -424,6 +475,7 @@
},
{
"BriefDescription": "Clears speculative count",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x0D",
@@ -434,6 +486,7 @@
},
{
"BriefDescription": "Cycles the issue-stage is waiting for front-end to fetch from resteered path following branch misprediction or machine clear events.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0D",
"EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
"SampleAfterValue": "2000003",
@@ -441,6 +494,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
"PublicDescription": "Core cycles the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
@@ -450,6 +504,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).",
+ "Counter": "0,1,2,3",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES_ANY",
"SampleAfterValue": "2000003",
@@ -457,6 +512,7 @@
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -465,6 +521,7 @@
},
{
"BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
@@ -473,6 +530,7 @@
},
{
"BriefDescription": "False dependencies in MOB due to partial compare on address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "Counts false dependencies in MOB when the partial comparison upon loose net check and dependency was resolved by the Enhanced Loose net mechanism. This may not result in high performance penalties. Loose net checks can fail when loads and stores are 4k aliased.",
@@ -481,14 +539,16 @@
},
{
"BriefDescription": "Demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "LOAD_HIT_PRE.SW_PF",
- "PublicDescription": "Counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
+ "PublicDescription": "Counts all software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder. [This event is alias to LSD.CYCLES_OK]",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_4_UOPS",
@@ -498,6 +558,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -507,6 +568,7 @@
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder. [This event is alias to LSD.CYCLES_4_UOPS]",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_OK",
@@ -516,6 +578,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "LSD.UOPS",
"PublicDescription": "Number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
@@ -524,6 +587,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xC3",
@@ -533,6 +597,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -541,6 +606,7 @@
},
{
"BriefDescription": "Number of times a microcode assist is invoked by HW other than FP-assist. Examples include AD (page Access Dirty) and AVX* related assists.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.ANY",
"SampleAfterValue": "100003",
@@ -548,6 +614,7 @@
},
{
"BriefDescription": "Cycles where the pipeline is stalled due to serializing operations.",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "PARTIAL_RAT_STALLS.SCOREBOARD",
"PublicDescription": "This event counts cycles during which the microcode scoreboard stalls happen.",
@@ -556,6 +623,7 @@
},
{
"BriefDescription": "Resource-related stall cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.ANY",
"PublicDescription": "Counts resource-related stall cycles.",
@@ -564,6 +632,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.",
@@ -572,6 +641,7 @@
},
{
"BriefDescription": "Increments whenever there is an update to the LBR array.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.LBR_INSERTS",
"PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and branch type selection via MSR_LBR_SELECT.",
@@ -580,6 +650,7 @@
},
{
"BriefDescription": "Number of retired PAUSE instructions (that do not end up with a VMExit to the VMM; TSX aborted Instructions may be counted). This event is not supported on first SKL and KBL products.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.PAUSE_INST",
"SampleAfterValue": "2000003",
@@ -587,6 +658,7 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
"PublicDescription": "Counts cycles during which the reservation station (RS) is empty for the thread.; Note: In ST-mode, not active thread should drive 0. This is usually caused by severely costly branch mispredictions, or allocator/FE issues.",
@@ -595,6 +667,7 @@
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -606,6 +679,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0.",
@@ -614,6 +688,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 1.",
@@ -622,6 +697,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 2.",
@@ -630,6 +706,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 3.",
@@ -638,6 +715,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 4.",
@@ -646,6 +724,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 5.",
@@ -654,6 +733,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_6",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 6.",
@@ -662,6 +742,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_7",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 7.",
@@ -670,6 +751,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
"PublicDescription": "Number of uops executed from any thread.",
@@ -678,6 +760,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -686,6 +769,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -694,6 +778,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -702,6 +787,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -710,6 +796,7 @@
},
{
"BriefDescription": "Cycles with no micro-ops executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE",
@@ -719,6 +806,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC",
@@ -728,6 +816,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC",
@@ -737,6 +826,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC",
@@ -746,6 +836,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC",
@@ -755,6 +846,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -765,6 +857,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.THREAD",
"PublicDescription": "Number of uops to be executed per-thread each cycle.",
@@ -773,6 +866,7 @@
},
{
"BriefDescription": "Counts the number of x87 uops dispatched.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.X87",
"PublicDescription": "Counts the number of x87 uops executed.",
@@ -781,6 +875,7 @@
},
{
"BriefDescription": "Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
@@ -789,6 +884,7 @@
},
{
"BriefDescription": "Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SLOW_LEA",
"SampleAfterValue": "2000003",
@@ -796,6 +892,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -806,6 +903,7 @@
},
{
"BriefDescription": "Uops inserted at issue-stage in order to preserve upper bits of vector registers.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH",
"PublicDescription": "Counts the number of Blend Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed since every Intel SSE instruction executed in Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to Mixing Intel AVX and Intel SSE Code section of the Optimization Guide.",
@@ -814,6 +912,7 @@
},
{
"BriefDescription": "Number of macro-fused uops retired. (non precise)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.MACRO_FUSED",
"PublicDescription": "Counts the number of macro-fused uops retired. (non precise)",
@@ -822,6 +921,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.RETIRE_SLOTS",
"PublicDescription": "Counts the retirement slots used.",
@@ -830,6 +930,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -840,6 +941,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "16",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json
index 2c880535cc82..c9596e18ec09 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "MMIO reads. Derived from unc_cha_tor_inserts.ia_miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.MMIO_READ",
"Filter": "config1=0x40040e33",
@@ -11,6 +12,7 @@
},
{
"BriefDescription": "MMIO writes. Derived from unc_cha_tor_inserts.ia_miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.MMIO_WRITE",
"Filter": "config1=0x40041e33",
@@ -21,6 +23,7 @@
},
{
"BriefDescription": "LLC misses - Uncacheable reads (from cpu) . Derived from unc_cha_tor_inserts.ia_miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.UNCACHEABLE",
"Filter": "config1=0x40e33",
@@ -31,6 +34,7 @@
},
{
"BriefDescription": "Streaming stores (full cache line). Derived from unc_cha_tor_inserts.ia_miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.STREAMING_FULL",
"Filter": "config1=0x41833",
@@ -42,6 +46,7 @@
},
{
"BriefDescription": "Streaming stores (partial cache line). Derived from unc_cha_tor_inserts.ia_miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.STREAMING_PARTIAL",
"Filter": "config1=0x41a33",
@@ -53,8 +58,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -62,8 +69,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -71,8 +80,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -80,8 +91,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -89,8 +102,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -98,8 +113,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -107,8 +124,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -116,8 +135,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -125,8 +146,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -134,8 +157,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -143,8 +168,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -152,8 +179,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -161,8 +190,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -170,8 +201,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -179,8 +212,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -188,8 +223,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -197,8 +234,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -206,8 +245,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -215,8 +256,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -224,8 +267,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -233,8 +278,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -242,8 +289,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -251,8 +300,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -260,8 +311,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -269,8 +322,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -278,8 +333,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -287,8 +344,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -296,8 +355,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -305,8 +366,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -314,8 +377,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -323,8 +388,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -332,8 +399,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -341,8 +410,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -350,8 +421,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -359,8 +432,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -368,8 +443,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -377,8 +454,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -386,8 +465,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -395,8 +476,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -404,8 +487,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -413,8 +498,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -422,8 +509,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -431,8 +520,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -440,8 +531,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -449,8 +542,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -458,8 +553,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -467,8 +564,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -476,8 +575,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -485,8 +586,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass; Intermediate bypass Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the intermediate bypass.",
"UMask": "0x2",
@@ -494,8 +597,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass; Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.NOT_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass, and issues a read to memory. Note that transactions that did not take the bypass but did not issue read to memory will not be counted.",
"UMask": "0x4",
@@ -503,8 +608,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass; Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the full bypass.",
"UMask": "0x1",
@@ -512,6 +619,7 @@
},
{
"BriefDescription": "Uncore cache clock ticks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_CHA_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Counts clockticks of the clock controlling the uncore caching and home agent (CHA).",
@@ -519,55 +627,69 @@
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_CHA_CMS_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "Core PMA Events; C1 State",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_CHA_CORE_PMA.C1_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Core PMA Events; C1 Transition",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_CHA_CORE_PMA.C1_TRANSITION",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Core PMA Events; C6 State",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_CHA_CORE_PMA.C6_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Core PMA Events; C6 Transition",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_CHA_CORE_PMA.C6_TRANSITION",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Core PMA Events; GV",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_CHA_CORE_PMA.GV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Core Cross Snoops Issued; Any Cycle with Multiple Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.ANY_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0xe2",
@@ -575,8 +697,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Any Single Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.ANY_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0xe1",
@@ -584,8 +708,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Any Snoop to Remote Node",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.ANY_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0xe4",
@@ -593,6 +719,7 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Multiple Core Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.CORE_GTONE",
"PerPkg": "1",
@@ -602,8 +729,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Single Core Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.CORE_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x41",
@@ -611,8 +740,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Core Request to Remote Node",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.CORE_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x44",
@@ -620,6 +751,7 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Multiple Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EVICT_GTONE",
"PerPkg": "1",
@@ -629,8 +761,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Single Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EVICT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x81",
@@ -638,8 +772,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Eviction to Remote Node",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EVICT_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x84",
@@ -647,8 +783,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Multiple External Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EXT_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x22",
@@ -656,8 +794,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; Single External Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EXT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x21",
@@ -665,8 +805,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued; External Snoop to Remote Node",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EXT_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x24",
@@ -674,14 +816,17 @@
},
{
"BriefDescription": "Counter 0 Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_CHA_COUNTER0_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Since occupancy counts can only be captured in the Cbo's 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0. The filtering available is found in the control register - threshold, invert and edge detect. E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entry.",
"Unit": "CHA"
},
{
"BriefDescription": "Multi-socket cacheline Directory state lookups; Snoop Not Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_CHA_DIR_LOOKUP.NO_SNP",
"PerPkg": "1",
@@ -691,6 +836,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory state lookups; Snoop Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_CHA_DIR_LOOKUP.SNP",
"PerPkg": "1",
@@ -700,6 +846,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory state updates; Directory Updated memory write from the HA pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_CHA_DIR_UPDATE.HA",
"PerPkg": "1",
@@ -709,6 +856,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory state updates; Directory Updated memory write from TOR pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_CHA_DIR_UPDATE.TOR",
"PerPkg": "1",
@@ -718,8 +866,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -727,8 +877,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -736,6 +888,7 @@
},
{
"BriefDescription": "FaST wire asserted; Horizontal",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_FAST_ASSERTED.HORZ",
"PerPkg": "1",
@@ -745,8 +898,10 @@
},
{
"BriefDescription": "FaST wire asserted; Vertical",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_FAST_ASSERTED.VERT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.",
"UMask": "0x1",
@@ -754,6 +909,7 @@
},
{
"BriefDescription": "Read request from a remote socket which hit in the HitMe Cache to a line In the E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.EX_RDS",
"PerPkg": "1",
@@ -763,80 +919,100 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Shared hit and op is RdInvOwn, RdInv, Inv*",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.SHARED_OWNREQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoI, WbPushMtoI, WbFlush, or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.WBMTOI_OR_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RdCode, RdData, RdDataMigratory, RdCur, RdInvOwn, RdInv, Inv*",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_CHA_HITME_LOOKUP.READ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_CHA_HITME_LOOKUP.WRITE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache; No SF/LLC HitS/F and op is RdInvOwn",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache; op is RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.READ_OR_INV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache; SF/LLC HitS/F and op is RdInvOwn",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.SHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache; Deallocate HitME$ on Reads without RspFwdI*",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Received RspFwdI* for a local request, but converted HitME$ to SF entry",
"UMask": "0x1",
@@ -844,16 +1020,20 @@
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache; Update HitMe Cache on RdInvOwn even if not RspFwdI*",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.RSPFWDI_REM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Updated HitME$ on RspFwdI* or local HitM/E received for a remote request",
"UMask": "0x2",
@@ -861,16 +1041,20 @@
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache; Update HitMe Cache to SHARed",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.SHARED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Horizontal AD Ring In Use; Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -878,8 +1062,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -887,8 +1073,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -896,8 +1084,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -905,8 +1095,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -914,8 +1106,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -923,8 +1117,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -932,8 +1128,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -941,8 +1139,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -950,8 +1150,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -959,8 +1161,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -968,8 +1172,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -977,8 +1183,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use; Left",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_CHA_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -986,8 +1194,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use; Right",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_CHA_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -995,6 +1205,7 @@
},
{
"BriefDescription": "Normal priority reads issued to the memory controller from the CHA",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_CHA_IMC_READS_COUNT.NORMAL",
"PerPkg": "1",
@@ -1004,8 +1215,10 @@
},
{
"BriefDescription": "HA to iMC Reads Issued; ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_CHA_IMC_READS_COUNT.PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count of the number of reads issued to any of the memory controller channels. This can be filtered by the priority of the reads.",
"UMask": "0x2",
@@ -1013,6 +1226,7 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued; Full Line Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL",
"PerPkg": "1",
@@ -1022,8 +1236,10 @@
},
{
"BriefDescription": "Writes Issued to the iMC by the HA; Full Line MIG",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_MIG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"UMask": "0x10",
@@ -1031,8 +1247,10 @@
},
{
"BriefDescription": "Writes Issued to the iMC by the HA; ISOCH Full Line",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"UMask": "0x4",
@@ -1040,8 +1258,10 @@
},
{
"BriefDescription": "Writes Issued to the iMC by the HA; Partial Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"UMask": "0x2",
@@ -1049,8 +1269,10 @@
},
{
"BriefDescription": "Writes Issued to the iMC by the HA; Partial MIG",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_MIG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.; Filter for memory controller 5 only.",
"UMask": "0x20",
@@ -1058,8 +1280,10 @@
},
{
"BriefDescription": "Writes Issued to the iMC by the HA; ISOCH Partial",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of writes issued from the HA into the memory controller. This counts for all four channels. It can be filtered by full/partial and ISOCH/non-ISOCH.",
"UMask": "0x8",
@@ -1067,64 +1291,80 @@
},
{
"BriefDescription": "Counts Number of times IODC entry allocation is attempted; Number of IODC allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_CHA_IODC_ALLOC.INVITOM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of times IODC entry allocation is attempted; Number of IODC allocations dropped due to IODC Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_CHA_IODC_ALLOC.IODCFULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of times IODC entry allocation is attempted; Number of IDOC allocation dropped due to OSB gate",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_CHA_IODC_ALLOC.OSBGATED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Counts number of IODC deallocations; IODC deallocated due to any reason",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_CHA_IODC_DEALLOC.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Counts number of IODC deallocations; IODC deallocated due to conflicting transaction",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_CHA_IODC_DEALLOC.SNPOUT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Counts number of IODC deallocations; IODC deallocated due to WbMtoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_CHA_IODC_DEALLOC.WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Counts number of IODC deallocations; IODC deallocated due to WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_CHA_IODC_DEALLOC.WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Counts number of IODC deallocations; IODC deallocated due to WbPushMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_CHA_IODC_DEALLOC.WBPUSHMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Moved to Cbo section",
"UMask": "0x4",
@@ -1132,8 +1372,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Any Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.ANY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ. This does not include lookups originating from the ISMQ.",
"UMask": "0x11",
@@ -1141,8 +1383,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Read transactions",
"UMask": "0x3",
@@ -1150,8 +1394,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Local",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.",
"UMask": "0x31",
@@ -1159,8 +1405,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.",
"UMask": "0x91",
@@ -1168,8 +1416,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; External Snoop Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNOOP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for only snoop requests coming from the remote socket(s) through the IPQ.",
"UMask": "0x9",
@@ -1177,8 +1427,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.WRITE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Writeback transactions from L2 to the LLC This includes all write transactions -- both Cacheable and UC.",
"UMask": "0x5",
@@ -1186,35 +1438,43 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_E",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.E_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_F",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.F_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized; Local - All Lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2f",
@@ -1222,8 +1482,10 @@
},
{
"BriefDescription": "Lines Victimized; Local - Lines in E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x22",
@@ -1231,8 +1493,10 @@
},
{
"BriefDescription": "Lines Victimized; Local - Lines in F State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x28",
@@ -1240,8 +1504,10 @@
},
{
"BriefDescription": "Lines Victimized; Local - Lines in M State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x21",
@@ -1249,8 +1515,10 @@
},
{
"BriefDescription": "Lines Victimized; Local - Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x24",
@@ -1258,26 +1526,32 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_M",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.M_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.REMOTE_ALL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized; Remote - All Lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x8f",
@@ -1285,8 +1559,10 @@
},
{
"BriefDescription": "Lines Victimized; Remote - Lines in E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x82",
@@ -1294,8 +1570,10 @@
},
{
"BriefDescription": "Lines Victimized; Remote - Lines in F State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x88",
@@ -1303,8 +1581,10 @@
},
{
"BriefDescription": "Lines Victimized; Remote - Lines in M State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x81",
@@ -1312,8 +1592,10 @@
},
{
"BriefDescription": "Lines Victimized; Remote - Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x84",
@@ -1321,15 +1603,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_S",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.S_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized; Lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_E",
"PerPkg": "1",
@@ -1339,6 +1624,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in F State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_F",
"PerPkg": "1",
@@ -1348,6 +1634,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_M",
"PerPkg": "1",
@@ -1357,6 +1644,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_S",
"PerPkg": "1",
@@ -1366,8 +1654,10 @@
},
{
"BriefDescription": "Cbo Misc; CV0 Prefetch Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.CV0_PREF_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous events in the Cbo.",
"UMask": "0x20",
@@ -1375,8 +1665,10 @@
},
{
"BriefDescription": "Cbo Misc; CV0 Prefetch Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.CV0_PREF_VIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous events in the Cbo.",
"UMask": "0x10",
@@ -1384,6 +1676,7 @@
},
{
"BriefDescription": "Number of times that an RFO hit in S state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.RFO_HIT_S",
"PerPkg": "1",
@@ -1393,8 +1686,10 @@
},
{
"BriefDescription": "Cbo Misc; Silent Snoop Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.RSPI_WAS_FSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous events in the Cbo.; Counts the number of times when a Snoop hit in FSE states and triggered a silent eviction. This is useful because this information is lost in the PRE encodings.",
"UMask": "0x1",
@@ -1402,8 +1697,10 @@
},
{
"BriefDescription": "Cbo Misc; Write Combining Aliasing",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.WC_ALIASING",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous events in the Cbo.; Counts the number of times that a USWC write (WCIL(F)) transaction hit in the LLC in M state, triggering a WBMtoI followed by the USWC write. This occurs when there is WC aliasing.",
"UMask": "0x2",
@@ -1411,16 +1708,20 @@
},
{
"BriefDescription": "OSB Snoop Broadcast",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"Unit": "CHA"
},
{
"BriefDescription": "Memory Mode related events; Counts the number of times CHA saw NM Set conflict in IODC",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.IODC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "2LM related events; Counts the number of times CHA saw NM Set conflict in IODC",
"UMask": "0x10",
@@ -1428,8 +1729,10 @@
},
{
"BriefDescription": "Memory Mode related events; Counts the number of times CHA saw NM Set conflict in SF/LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "NM evictions due to another read to the same near memory set in the LLC.",
"UMask": "0x2",
@@ -1437,8 +1740,10 @@
},
{
"BriefDescription": "Memory Mode related events; Counts the number of times CHA saw NM Set conflict in SF/LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.SF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "NM evictions due to another read to the same near memory set in the SF.",
"UMask": "0x1",
@@ -1446,8 +1751,10 @@
},
{
"BriefDescription": "Memory Mode related events; Counts the number of times CHA saw NM Set conflict in TOR",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.TOR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Reject in the CHA due to a pending read to the same near memory set in the TOR.",
"UMask": "0x4",
@@ -1455,8 +1762,10 @@
},
{
"BriefDescription": "Memory mode related events; Counts the number of times CHA saw NM Set conflict in TOR and the transaction was rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.TOR_REJECT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Rejects in the CHA due to a pending read to the same near memory set in the TOR.",
"UMask": "0x8",
@@ -1464,8 +1773,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC0_SMI2",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.EDC0_SMI2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue.; Filter for memory controller 2 only.",
"UMask": "0x4",
@@ -1473,8 +1784,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC1_SMI3",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.EDC1_SMI3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue.; Filter for memory controller 3 only.",
"UMask": "0x8",
@@ -1482,8 +1795,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC2_SMI4",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.EDC2_SMI4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue.; Filter for memory controller 4 only.",
"UMask": "0x10",
@@ -1491,8 +1806,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty; EDC3_SMI5",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.EDC3_SMI5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue.; Filter for memory controller 5 only.",
"UMask": "0x20",
@@ -1500,8 +1817,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty; MC0_SMI0",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC0_SMI0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue.; Filter for memory controller 0 only.",
"UMask": "0x1",
@@ -1509,8 +1828,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty; MC1_SMI1",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC1_SMI1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue.; Filter for memory controller 1 only.",
"UMask": "0x2",
@@ -1518,6 +1839,7 @@
},
{
"BriefDescription": "Local requests for exclusive ownership of a cache line without receiving data",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE_LOCAL",
"PerPkg": "1",
@@ -1527,6 +1849,7 @@
},
{
"BriefDescription": "Local requests for exclusive ownership of a cache line without receiving data",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE_REMOTE",
"PerPkg": "1",
@@ -1536,6 +1859,7 @@
},
{
"BriefDescription": "Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS",
"PerPkg": "1",
@@ -1545,6 +1869,7 @@
},
{
"BriefDescription": "Read requests from a unit on this socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS_LOCAL",
"PerPkg": "1",
@@ -1554,6 +1879,7 @@
},
{
"BriefDescription": "Read requests from a remote socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS_REMOTE",
"PerPkg": "1",
@@ -1563,6 +1889,7 @@
},
{
"BriefDescription": "Write requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES",
"PerPkg": "1",
@@ -1572,6 +1899,7 @@
},
{
"BriefDescription": "Write Requests from a unit on this socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES_LOCAL",
"PerPkg": "1",
@@ -1581,6 +1909,7 @@
},
{
"BriefDescription": "Read and Write Requests; Writes Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES_REMOTE",
"PerPkg": "1",
@@ -1590,8 +1919,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -1599,8 +1930,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -1608,8 +1941,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -1617,8 +1952,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -1626,8 +1963,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -1635,8 +1974,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -1644,8 +1985,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -1653,8 +1996,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -1662,87 +2007,109 @@
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; Acknowledgements to Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Source Throttle",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Allocations; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x4",
@@ -1750,6 +2117,7 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IRQ",
"PerPkg": "1",
@@ -1759,8 +2127,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations; IRQ Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x2",
@@ -1768,8 +2138,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x10",
@@ -1777,8 +2149,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.PRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x20",
@@ -1786,8 +2160,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations; RRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x40",
@@ -1795,8 +2171,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations; WBQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x80",
@@ -1804,238 +2182,297 @@
},
{
"BriefDescription": "Ingress Probe Queue Rejects; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; Merging these two together to make room for ANY_REJECT_*0",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress Probe Queue Rejects; Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Merging these two together to make room for ANY_REJECT_*0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH",
"PerPkg": "1",
@@ -2044,24 +2481,30 @@
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "ISMQ Rejects; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x1",
@@ -2069,8 +2512,10 @@
},
{
"BriefDescription": "ISMQ Rejects; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -2078,8 +2523,10 @@
},
{
"BriefDescription": "ISMQ Rejects; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x40",
@@ -2087,8 +2534,10 @@
},
{
"BriefDescription": "ISMQ Rejects; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x10",
@@ -2096,8 +2545,10 @@
},
{
"BriefDescription": "ISMQ Rejects; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x20",
@@ -2105,8 +2556,10 @@
},
{
"BriefDescription": "ISMQ Rejects; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x4",
@@ -2114,8 +2567,10 @@
},
{
"BriefDescription": "ISMQ Rejects; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x8",
@@ -2123,8 +2578,10 @@
},
{
"BriefDescription": "ISMQ Rejects; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x80",
@@ -2132,8 +2589,10 @@
},
{
"BriefDescription": "ISMQ Retries; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x1",
@@ -2141,8 +2600,10 @@
},
{
"BriefDescription": "ISMQ Retries; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -2150,8 +2611,10 @@
},
{
"BriefDescription": "ISMQ Retries; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x40",
@@ -2159,8 +2622,10 @@
},
{
"BriefDescription": "ISMQ Retries; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x10",
@@ -2168,8 +2633,10 @@
},
{
"BriefDescription": "ISMQ Retries; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x20",
@@ -2177,8 +2644,10 @@
},
{
"BriefDescription": "ISMQ Retries; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x4",
@@ -2186,8 +2655,10 @@
},
{
"BriefDescription": "ISMQ Retries; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x8",
@@ -2195,8 +2666,10 @@
},
{
"BriefDescription": "ISMQ Retries; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x80",
@@ -2204,8 +2677,10 @@
},
{
"BriefDescription": "ISMQ Rejects; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_CHA_RxC_ISMQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x1",
@@ -2213,8 +2688,10 @@
},
{
"BriefDescription": "ISMQ Rejects; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_CHA_RxC_ISMQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -2222,8 +2699,10 @@
},
{
"BriefDescription": "ISMQ Retries; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_CHA_RxC_ISMQ1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x1",
@@ -2231,8 +2710,10 @@
},
{
"BriefDescription": "ISMQ Retries; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_CHA_RxC_ISMQ1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -2240,8 +2721,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy; IPQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x4",
@@ -2249,6 +2732,7 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy; IRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.IRQ",
"PerPkg": "1",
@@ -2258,8 +2742,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy; RRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x40",
@@ -2267,8 +2753,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy; WBQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x80",
@@ -2276,8 +2764,10 @@
},
{
"BriefDescription": "Other Retries; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x1",
@@ -2285,8 +2775,10 @@
},
{
"BriefDescription": "Other Retries; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x2",
@@ -2294,8 +2786,10 @@
},
{
"BriefDescription": "Other Retries; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x40",
@@ -2303,8 +2797,10 @@
},
{
"BriefDescription": "Other Retries; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x10",
@@ -2312,8 +2808,10 @@
},
{
"BriefDescription": "Other Retries; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x20",
@@ -2321,8 +2819,10 @@
},
{
"BriefDescription": "Other Retries; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x4",
@@ -2330,8 +2830,10 @@
},
{
"BriefDescription": "Other Retries; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x8",
@@ -2339,8 +2841,10 @@
},
{
"BriefDescription": "Other Retries; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x80",
@@ -2348,8 +2852,10 @@
},
{
"BriefDescription": "Other Retries; Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x40",
@@ -2357,8 +2863,10 @@
},
{
"BriefDescription": "Other Retries; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x1",
@@ -2366,8 +2874,10 @@
},
{
"BriefDescription": "Other Retries; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x2",
@@ -2375,8 +2885,10 @@
},
{
"BriefDescription": "Other Retries; Merging these two together to make room for ANY_REJECT_*0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x20",
@@ -2384,8 +2896,10 @@
},
{
"BriefDescription": "Other Retries; LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x4",
@@ -2393,8 +2907,10 @@
},
{
"BriefDescription": "Other Retries; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x80",
@@ -2402,8 +2918,10 @@
},
{
"BriefDescription": "Other Retries; SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x8",
@@ -2411,8 +2929,10 @@
},
{
"BriefDescription": "Other Retries; Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x10",
@@ -2420,136 +2940,170 @@
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Request Queue Retries; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x1",
@@ -2557,8 +3111,10 @@
},
{
"BriefDescription": "Request Queue Retries; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x2",
@@ -2566,8 +3122,10 @@
},
{
"BriefDescription": "Request Queue Retries; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x40",
@@ -2575,8 +3133,10 @@
},
{
"BriefDescription": "Request Queue Retries; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x10",
@@ -2584,8 +3144,10 @@
},
{
"BriefDescription": "Request Queue Retries; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x20",
@@ -2593,8 +3155,10 @@
},
{
"BriefDescription": "Request Queue Retries; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x4",
@@ -2602,8 +3166,10 @@
},
{
"BriefDescription": "Request Queue Retries; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x8",
@@ -2611,8 +3177,10 @@
},
{
"BriefDescription": "Request Queue Retries; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x80",
@@ -2620,8 +3188,10 @@
},
{
"BriefDescription": "Request Queue Retries; Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x40",
@@ -2629,8 +3199,10 @@
},
{
"BriefDescription": "Request Queue Retries; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x1",
@@ -2638,8 +3210,10 @@
},
{
"BriefDescription": "Request Queue Retries; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x2",
@@ -2647,8 +3221,10 @@
},
{
"BriefDescription": "Request Queue Retries; Merging these two together to make room for ANY_REJECT_*0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x20",
@@ -2656,8 +3232,10 @@
},
{
"BriefDescription": "Request Queue Retries; LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x4",
@@ -2665,8 +3243,10 @@
},
{
"BriefDescription": "Request Queue Retries; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x80",
@@ -2674,8 +3254,10 @@
},
{
"BriefDescription": "Request Queue Retries; SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x8",
@@ -2683,8 +3265,10 @@
},
{
"BriefDescription": "Request Queue Retries; Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x10",
@@ -2692,8 +3276,10 @@
},
{
"BriefDescription": "RRQ Rejects; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x1",
@@ -2701,8 +3287,10 @@
},
{
"BriefDescription": "RRQ Rejects; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x2",
@@ -2710,8 +3298,10 @@
},
{
"BriefDescription": "RRQ Rejects; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x40",
@@ -2719,8 +3309,10 @@
},
{
"BriefDescription": "RRQ Rejects; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x10",
@@ -2728,8 +3320,10 @@
},
{
"BriefDescription": "RRQ Rejects; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x20",
@@ -2737,8 +3331,10 @@
},
{
"BriefDescription": "RRQ Rejects; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x4",
@@ -2746,8 +3342,10 @@
},
{
"BriefDescription": "RRQ Rejects; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x8",
@@ -2755,8 +3353,10 @@
},
{
"BriefDescription": "RRQ Rejects; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x80",
@@ -2764,8 +3364,10 @@
},
{
"BriefDescription": "RRQ Rejects; Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x40",
@@ -2773,8 +3375,10 @@
},
{
"BriefDescription": "RRQ Rejects; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x1",
@@ -2782,8 +3386,10 @@
},
{
"BriefDescription": "RRQ Rejects; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x2",
@@ -2791,8 +3397,10 @@
},
{
"BriefDescription": "RRQ Rejects; Merging these two together to make room for ANY_REJECT_*0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x20",
@@ -2800,8 +3408,10 @@
},
{
"BriefDescription": "RRQ Rejects; LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x4",
@@ -2809,8 +3419,10 @@
},
{
"BriefDescription": "RRQ Rejects; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x80",
@@ -2818,8 +3430,10 @@
},
{
"BriefDescription": "RRQ Rejects; SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x8",
@@ -2827,8 +3441,10 @@
},
{
"BriefDescription": "RRQ Rejects; Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x10",
@@ -2836,8 +3452,10 @@
},
{
"BriefDescription": "WBQ Rejects; AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x1",
@@ -2845,8 +3463,10 @@
},
{
"BriefDescription": "WBQ Rejects; AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x2",
@@ -2854,8 +3474,10 @@
},
{
"BriefDescription": "WBQ Rejects; Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x40",
@@ -2863,8 +3485,10 @@
},
{
"BriefDescription": "WBQ Rejects; BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x10",
@@ -2872,8 +3496,10 @@
},
{
"BriefDescription": "WBQ Rejects; BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x20",
@@ -2881,8 +3507,10 @@
},
{
"BriefDescription": "WBQ Rejects; BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x4",
@@ -2890,8 +3518,10 @@
},
{
"BriefDescription": "WBQ Rejects; BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x8",
@@ -2899,8 +3529,10 @@
},
{
"BriefDescription": "WBQ Rejects; Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x80",
@@ -2908,8 +3540,10 @@
},
{
"BriefDescription": "WBQ Rejects; Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x40",
@@ -2917,8 +3551,10 @@
},
{
"BriefDescription": "WBQ Rejects; ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x1",
@@ -2926,8 +3562,10 @@
},
{
"BriefDescription": "WBQ Rejects; HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x2",
@@ -2935,8 +3573,10 @@
},
{
"BriefDescription": "WBQ Rejects; Merging these two together to make room for ANY_REJECT_*0",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x20",
@@ -2944,8 +3584,10 @@
},
{
"BriefDescription": "WBQ Rejects; LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x4",
@@ -2953,8 +3595,10 @@
},
{
"BriefDescription": "WBQ Rejects; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x80",
@@ -2962,8 +3606,10 @@
},
{
"BriefDescription": "WBQ Rejects; SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x8",
@@ -2971,8 +3617,10 @@
},
{
"BriefDescription": "WBQ Rejects; Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x10",
@@ -2980,8 +3628,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x1",
@@ -2989,8 +3639,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x10",
@@ -2998,8 +3650,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x4",
@@ -3007,8 +3661,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x40",
@@ -3016,8 +3672,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_RxR_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x1",
@@ -3025,8 +3683,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x10",
@@ -3034,8 +3694,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_RxR_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x2",
@@ -3043,8 +3705,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_RxR_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x4",
@@ -3052,8 +3716,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x40",
@@ -3061,8 +3727,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_RxR_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x8",
@@ -3070,8 +3738,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x1",
@@ -3079,8 +3749,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x10",
@@ -3088,8 +3760,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x2",
@@ -3097,8 +3771,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x4",
@@ -3106,8 +3782,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x40",
@@ -3115,8 +3793,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; IFV - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x80",
@@ -3124,8 +3804,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x8",
@@ -3133,8 +3815,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_RxR_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -3142,8 +3826,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -3151,8 +3837,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_RxR_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -3160,8 +3848,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_RxR_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -3169,8 +3859,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -3178,8 +3870,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_RxR_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -3187,8 +3881,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -3196,8 +3892,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -3205,8 +3903,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -3214,8 +3914,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -3223,8 +3925,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -3232,8 +3936,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -3241,6 +3947,7 @@
},
{
"BriefDescription": "Snoop filter capacity evictions for E-state entries.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_CHA_SF_EVICTION.E_STATE",
"PerPkg": "1",
@@ -3250,6 +3957,7 @@
},
{
"BriefDescription": "Snoop filter capacity evictions for M-state entries.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_CHA_SF_EVICTION.M_STATE",
"PerPkg": "1",
@@ -3259,6 +3967,7 @@
},
{
"BriefDescription": "Snoop filter capacity evictions for S-state entries.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_CHA_SF_EVICTION.S_STATE",
"PerPkg": "1",
@@ -3268,8 +3977,10 @@
},
{
"BriefDescription": "Snoops Sent; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of snoops issued by the HA.",
"UMask": "0x1",
@@ -3277,8 +3988,10 @@
},
{
"BriefDescription": "Snoops Sent; Broadcast snoop for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.BCST_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of snoops issued by the HA.; Counts the number of broadcast snoops issued by the HA. This filter includes only requests coming from local sockets.",
"UMask": "0x10",
@@ -3286,8 +3999,10 @@
},
{
"BriefDescription": "Snoops Sent; Broadcast snoops for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.BCST_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of snoops issued by the HA.; Counts the number of broadcast snoops issued by the HA.This filter includes only requests coming from remote sockets.",
"UMask": "0x20",
@@ -3295,8 +4010,10 @@
},
{
"BriefDescription": "Snoops Sent; Directed snoops for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of snoops issued by the HA.; Counts the number of directed snoops issued by the HA. This filter includes only requests coming from local sockets.",
"UMask": "0x40",
@@ -3304,8 +4021,10 @@
},
{
"BriefDescription": "Snoops Sent; Directed snoops for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of snoops issued by the HA.; Counts the number of directed snoops issued by the HA. This filter includes only requests coming from remote sockets.",
"UMask": "0x80",
@@ -3313,8 +4032,10 @@
},
{
"BriefDescription": "Snoops Sent; Broadcast or directed Snoops sent for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of snoops issued by the HA.; Counts the number of broadcast or directed snoops issued by the HA per request. This filter includes only requests coming from the local socket.",
"UMask": "0x4",
@@ -3322,8 +4043,10 @@
},
{
"BriefDescription": "Snoops Sent; Broadcast or directed Snoops sent for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of snoops issued by the HA.; Counts the number of broadcast or directed snoops issued by the HA per request. This filter includes only requests coming from the remote socket.",
"UMask": "0x8",
@@ -3331,6 +4054,7 @@
},
{
"BriefDescription": "RspCnflct* Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPCNFLCTS",
"PerPkg": "1",
@@ -3340,8 +4064,10 @@
},
{
"BriefDescription": "Snoop Responses Received; RspFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1.; Filters for a snoop response of RspFwd to a CA request. This snoop response is only possible for RdCur when a snoop HITM/E in a remote caching agent and it directly forwards data to a requestor without changing the requestor's cache line state.",
"UMask": "0x80",
@@ -3349,6 +4075,7 @@
},
{
"BriefDescription": "RspI Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPI",
"PerPkg": "1",
@@ -3358,6 +4085,7 @@
},
{
"BriefDescription": "RspIFwd Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPIFWD",
"PerPkg": "1",
@@ -3367,8 +4095,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : RspS : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for snoop responses of RspS. RspS is returned when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with S RspS.",
"UMask": "0x2",
@@ -3376,6 +4106,7 @@
},
{
"BriefDescription": "RspSFwd Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPSFWD",
"PerPkg": "1",
@@ -3385,6 +4116,7 @@
},
{
"BriefDescription": "Rsp*Fwd*WB Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSP_FWD_WB",
"PerPkg": "1",
@@ -3394,6 +4126,7 @@
},
{
"BriefDescription": "Rsp*WB Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSP_WBWB",
"PerPkg": "1",
@@ -3403,8 +4136,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspCnflct",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPCNFLCT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for snoops responses of RspConflict to local CA requests. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent. This triggers conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI.",
"UMask": "0x40",
@@ -3412,8 +4147,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for a snoop response of RspFwd to local CA requests. This snoop response is only possible for RdCur when a snoop HITM/E in a remote caching agent and it directly forwards data to a requestor without changing the requestor's cache line state.",
"UMask": "0x80",
@@ -3421,8 +4158,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for snoops responses of RspI to local CA requests. RspI is returned when the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO hits non-modified data).",
"UMask": "0x1",
@@ -3430,8 +4169,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPIFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for snoop responses of RspIFwd to local CA requests. This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states. This is commonly returned with RFO transactions. It can be either a HitM or a HitFE.",
"UMask": "0x4",
@@ -3439,8 +4180,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for snoop responses of RspS to local CA requests. RspS is returned when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with S RspS.",
"UMask": "0x2",
@@ -3448,8 +4191,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPSFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for a snoop response of RspSFwd to local CA requests. This is returned when a remote caching agent forwards data but holds on to its current copy. This is common for data and code reads that hit in a remote socket in E or F state.",
"UMask": "0x8",
@@ -3457,8 +4202,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*FWD*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSP_FWD_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for a snoop response of Rsp*Fwd*WB to local CA requests. This snoop response is only used in 4s systems. It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory.",
"UMask": "0x20",
@@ -3466,8 +4213,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSP_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snoop responses received for a Local request; Filters for a snoop response of RspIWB or RspSWB to local CA requests. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.",
"UMask": "0x10",
@@ -3475,8 +4224,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -3484,8 +4235,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -3493,8 +4246,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -3502,8 +4257,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -3511,8 +4268,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -3520,8 +4279,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -3529,8 +4290,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -3538,8 +4301,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -3547,8 +4312,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -3556,8 +4323,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -3565,8 +4334,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -3574,8 +4345,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -3583,8 +4356,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -3592,8 +4367,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -3601,8 +4378,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -3610,8 +4389,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -3619,8 +4400,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -3628,8 +4411,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -3637,8 +4422,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -3646,8 +4433,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -3655,8 +4444,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -3664,8 +4455,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -3673,8 +4466,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -3682,8 +4477,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -3691,8 +4488,10 @@
},
{
"BriefDescription": "TOR Inserts; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0xff",
@@ -3700,8 +4499,10 @@
},
{
"BriefDescription": "TOR Inserts; Hits from Local",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ALL_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x15",
@@ -3709,8 +4510,10 @@
},
{
"BriefDescription": "TOR Inserts; All from Local iA and IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ALL_IO_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; All locally initiated requests",
"UMask": "0x35",
@@ -3718,8 +4521,10 @@
},
{
"BriefDescription": "TOR Inserts; Misses from Local",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ALL_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x25",
@@ -3727,8 +4532,10 @@
},
{
"BriefDescription": "TOR Inserts; SF/LLC Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
"UMask": "0x2",
@@ -3736,8 +4543,10 @@
},
{
"BriefDescription": "TOR Inserts; Hit (Not a Miss)",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; HITs (hit is defined to be not a miss [see below], as a result for any request allocated into the TOR, one of either HIT or MISS must be true)",
"UMask": "0x10",
@@ -3745,6 +4554,7 @@
},
{
"BriefDescription": "TOR Inserts; All from Local iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA",
"PerPkg": "1",
@@ -3754,6 +4564,7 @@
},
{
"BriefDescription": "TOR Inserts; Hits from Local iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT",
"PerPkg": "1",
@@ -3763,6 +4574,7 @@
},
{
"BriefDescription": "TOR Inserts : CRds issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD",
"Filter": "config1=0x40233",
@@ -3773,6 +4585,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD",
"Filter": "config1=0x40433",
@@ -3783,6 +4596,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefCRD",
"Filter": "config1=0x4b233",
@@ -3792,6 +4606,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefDRD",
"Filter": "config1=0x4b433",
@@ -3801,6 +4616,7 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LlcPrefRFO",
"Filter": "config1=0x4b033",
@@ -3811,6 +4627,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO",
"Filter": "config1=0x40033",
@@ -3821,6 +4638,7 @@
},
{
"BriefDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS",
"PerPkg": "1",
@@ -3830,6 +4648,7 @@
},
{
"BriefDescription": "TOR Inserts : CRds issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD",
"Filter": "config1=0x40233",
@@ -3840,6 +4659,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD",
"Filter": "config1=0x40433",
@@ -3850,6 +4670,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefCRD",
"Filter": "config1=0x4b233",
@@ -3859,6 +4680,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefDRD",
"Filter": "config1=0x4b433",
@@ -3868,6 +4690,7 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LlcPrefRFO",
"Filter": "config1=0x4b033",
@@ -3878,6 +4701,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO",
"Filter": "config1=0x40033",
@@ -3888,8 +4712,10 @@
},
{
"BriefDescription": "TOR Inserts; All from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; All locally generated IO traffic",
"UMask": "0x34",
@@ -3897,6 +4723,7 @@
},
{
"BriefDescription": "TOR Inserts; Hits from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT",
"PerPkg": "1",
@@ -3906,6 +4733,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS",
"PerPkg": "1",
@@ -3915,8 +4743,10 @@
},
{
"BriefDescription": "TOR Inserts; ItoM misses from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM",
+ "Experimental": "1",
"Filter": "config1=0x49033",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO ItoM requests that miss the LLC. An ItoM request is used by IIO to request a data write without first reading the data for ownership.",
@@ -3925,8 +4755,10 @@
},
{
"BriefDescription": "TOR Inserts; RdCur misses from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RDCUR",
+ "Experimental": "1",
"Filter": "config1=0x43C33",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO RdCur requests and miss the LLC. A RdCur request is used by IIO to read data without changing state.",
@@ -3935,8 +4767,10 @@
},
{
"BriefDescription": "TOR Inserts; RFO misses from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO",
+ "Experimental": "1",
"Filter": "config1=0x40033",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that are generated from local IO RFO requests that miss the LLC. A read for ownership (RFO) requests a cache line to be cached in E state with the intent to modify.",
@@ -3945,8 +4779,10 @@
},
{
"BriefDescription": "TOR Inserts; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x8",
@@ -3954,26 +4790,32 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IPQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x18",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IPQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x28",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x1",
@@ -3981,17 +4823,21 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x37",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; Misses. (a miss is defined to be any transaction from the IRQ, PRQ, RRQ, IPQ or (in the victim case) the ISMQ, that required the CHA to spawn a new UPI/SMI3 request on the UPI fabric (including UPI snoops and/or any RD/WR to a local memory controller, in the event that the CHA is the home node)). Basically, if the LLC/SF/MLC complex were not able to service the request without involving another agent...it is a miss. If only IDI snoops were required, it is not a miss (that means the SF/MLC com",
"UMask": "0x20",
@@ -3999,8 +4845,10 @@
},
{
"BriefDescription": "TOR Inserts; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x4",
@@ -4008,6 +4856,7 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.REM_ALL",
@@ -4017,44 +4866,54 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.RRQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x50",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.RRQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x60",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.WBQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x90",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.WBQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa0",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : All",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xff",
@@ -4062,8 +4921,10 @@
},
{
"BriefDescription": "TOR Occupancy; All from Local",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.ALL_FROM_LOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. There are a number of subevent 'filters' but only a subset of the subevent combinations are valid. Subevents that require an opcode or NID match require the Cn_MSR_PMON_BOX_FILTER.{opc, nid} field to be set. If, for example, one wanted to count DRD Local Misses, one should select MISS_OPC_MATCH and set Cn_MSR_PMON_BOX_FILTER.opc to DRD (0x182); All remotely generated requests",
"UMask": "0x37",
@@ -4071,8 +4932,10 @@
},
{
"BriefDescription": "TOR Occupancy; Hits from Local",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.ALL_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x17",
@@ -4080,8 +4943,10 @@
},
{
"BriefDescription": "TOR Occupancy; Misses from Local",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.ALL_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x27",
@@ -4089,8 +4954,10 @@
},
{
"BriefDescription": "TOR Occupancy; SF/LLC Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T; TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
"UMask": "0x2",
@@ -4098,8 +4965,10 @@
},
{
"BriefDescription": "TOR Occupancy; Hit (Not a Miss)",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T; HITs (hit is defined to be not a miss [see below], as a result for any request allocated into the TOR, one of either HIT or MISS must be true)",
"UMask": "0x10",
@@ -4107,6 +4976,7 @@
},
{
"BriefDescription": "TOR Occupancy; All from Local iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA",
"PerPkg": "1",
@@ -4116,6 +4986,7 @@
},
{
"BriefDescription": "TOR Occupancy; Hits from Local iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT",
"PerPkg": "1",
@@ -4125,6 +4996,7 @@
},
{
"BriefDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
"Filter": "config1=0x40233",
@@ -4135,6 +5007,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD",
"Filter": "config1=0x40433",
@@ -4145,6 +5018,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefCRD",
"Filter": "config1=0x4b233",
@@ -4154,6 +5028,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefDRD",
"Filter": "config1=0x4b433",
@@ -4163,6 +5038,7 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LlcPrefRFO",
"Filter": "config1=0x4b033",
@@ -4173,6 +5049,7 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
"Filter": "config1=0x40033",
@@ -4183,6 +5060,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses from Local iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS",
"PerPkg": "1",
@@ -4192,6 +5070,7 @@
},
{
"BriefDescription": "TOR Occupancy : CRds issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
"Filter": "config1=0x40233",
@@ -4202,6 +5081,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD",
"Filter": "config1=0x40433",
@@ -4212,6 +5092,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefCRD",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefCRD",
"Filter": "config1=0x4b233",
@@ -4221,6 +5102,7 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefDRD",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefDRD",
"Filter": "config1=0x4b433",
@@ -4230,6 +5112,7 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LlcPrefRFO",
"Filter": "config1=0x4b033",
@@ -4240,6 +5123,7 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
"Filter": "config1=0x40033",
@@ -4250,8 +5134,10 @@
},
{
"BriefDescription": "TOR Occupancy; All from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T; All locally generated IO traffic",
"UMask": "0x34",
@@ -4259,8 +5145,10 @@
},
{
"BriefDescription": "TOR Occupancy; Hits from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x14",
@@ -4268,8 +5156,10 @@
},
{
"BriefDescription": "TOR Occupancy; Misses from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x24",
@@ -4277,8 +5167,10 @@
},
{
"BriefDescription": "TOR Occupancy; ITOM Misses from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
+ "Experimental": "1",
"Filter": "config1=0x49033",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO ItoM requests that miss the LLC. An ItoM is used by IIO to request a data write without first reading the data for ownership.",
@@ -4287,8 +5179,10 @@
},
{
"BriefDescription": "TOR Occupancy; RDCUR misses from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RDCUR",
+ "Experimental": "1",
"Filter": "config1=0x43C33",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RdCur requests that miss the LLC. A RdCur request is used by IIO to read data without changing state.",
@@ -4297,8 +5191,10 @@
},
{
"BriefDescription": "TOR Occupancy; RFO misses from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+ "Experimental": "1",
"Filter": "config1=0x40033",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that are generated from local IO RFO requests that miss the LLC. A read for ownership (RFO) requests data to be cached in E state with the intent to modify.",
@@ -4307,8 +5203,10 @@
},
{
"BriefDescription": "TOR Occupancy; IPQ",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x8",
@@ -4316,26 +5214,32 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x18",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x28",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy; IRQ",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x1",
@@ -4343,17 +5247,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.ALL_FROM_LOC",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x37",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy; Miss",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T; Misses. (a miss is defined to be any transaction from the IRQ, PRQ, RRQ, IPQ or (in the victim case) the ISMQ, that required the CHA to spawn a new UPI/SMI3 request on the UPI fabric (including UPI snoops and/or any RD/WR to a local memory controller, in the event that the CHA is the home node)). Basically, if the LLC/SF/MLC complex were not able to service the request without involving another agent...it is a miss. If only IDI snoops were required, it is not a miss (that means the SF/MLC com",
"UMask": "0x20",
@@ -4361,8 +5269,10 @@
},
{
"BriefDescription": "TOR Occupancy; PRQ",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x4",
@@ -4370,8 +5280,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -4379,8 +5291,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -4388,8 +5302,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -4397,8 +5313,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -4406,8 +5324,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -4415,8 +5335,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -4424,8 +5346,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -4433,8 +5357,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -4442,8 +5368,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -4451,8 +5379,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -4460,8 +5390,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -4469,8 +5401,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -4478,8 +5412,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -4487,8 +5423,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -4496,8 +5434,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -4505,8 +5445,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -4514,8 +5456,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -4523,8 +5467,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -4532,8 +5478,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -4541,8 +5489,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -4550,8 +5500,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -4559,8 +5511,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -4568,8 +5522,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -4577,8 +5533,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -4586,8 +5544,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -4595,8 +5555,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -4604,8 +5566,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -4613,8 +5577,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -4622,8 +5588,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -4631,8 +5599,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x1",
@@ -4640,8 +5610,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x20",
@@ -4649,8 +5621,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x2",
@@ -4658,8 +5632,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_HORZ_NACK.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x4",
@@ -4667,8 +5643,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x40",
@@ -4676,8 +5654,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_HORZ_NACK.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x8",
@@ -4685,8 +5665,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -4694,8 +5676,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -4703,8 +5687,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -4712,8 +5698,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -4721,8 +5709,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -4730,8 +5720,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -4739,8 +5731,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x1",
@@ -4748,8 +5742,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x2",
@@ -4757,8 +5753,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x4",
@@ -4766,8 +5764,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x8",
@@ -4775,8 +5775,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -4784,8 +5786,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -4793,8 +5797,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -4802,8 +5808,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -4811,8 +5819,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -4820,8 +5830,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -4829,8 +5841,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -4838,8 +5852,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -4847,8 +5863,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -4856,8 +5874,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -4865,8 +5885,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -4874,8 +5896,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -4883,8 +5907,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -4892,8 +5918,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -4901,8 +5929,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -4910,8 +5940,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -4919,8 +5951,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -4928,8 +5962,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -4937,8 +5973,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -4946,8 +5984,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -4955,8 +5995,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -4964,8 +6006,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -4973,8 +6017,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -4982,8 +6028,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -4991,8 +6039,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -5000,8 +6050,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -5009,8 +6061,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -5018,8 +6072,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_INSERTS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5027,8 +6083,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_INSERTS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -5036,8 +6094,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_INSERTS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5045,8 +6105,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_INSERTS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -5054,8 +6116,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_INSERTS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -5063,8 +6127,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_INSERTS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -5072,8 +6138,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -5081,8 +6149,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -5090,8 +6160,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x10",
@@ -5099,8 +6171,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -5108,8 +6182,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x20",
@@ -5117,8 +6193,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x4",
@@ -5126,8 +6204,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x40",
@@ -5135,8 +6215,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x8",
@@ -5144,8 +6226,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5153,8 +6237,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -5162,8 +6248,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5171,8 +6259,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -5180,8 +6270,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -5189,8 +6281,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -5198,8 +6292,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -5207,8 +6303,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -5216,8 +6314,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x10",
@@ -5225,8 +6325,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -5234,8 +6336,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x20",
@@ -5243,8 +6347,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -5252,8 +6358,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x40",
@@ -5261,8 +6369,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x8",
@@ -5270,8 +6380,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; AD REQ Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x4",
@@ -5279,8 +6391,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; AD RSP VN0 Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x8",
@@ -5288,8 +6402,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; BL NCB Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x40",
@@ -5297,8 +6413,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; BL NCS Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x80",
@@ -5306,8 +6424,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; BL RSP Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x10",
@@ -5315,8 +6435,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; BL DRS Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x20",
@@ -5324,8 +6446,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; VN0 Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x2",
@@ -5333,8 +6457,10 @@
},
{
"BriefDescription": "UPI Ingress Credit Allocations; VNA Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_CHA_UPI_CREDITS_ACQUIRED.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of UPI credits acquired for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This can be used with the Credit Occupancy event in order to calculate average credit lifetime. This event supports filtering to cover the VNA/VN0 credits and the different message classes. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x1",
@@ -5342,8 +6468,10 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; AD REQ VN0 Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x4",
@@ -5351,8 +6479,10 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; AD RSP VN0 Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x8",
@@ -5360,8 +6490,10 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; BL NCB VN0 Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x40",
@@ -5369,6 +6501,7 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; BL NCS VN0 Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_NCS",
"PerPkg": "1",
@@ -5378,8 +6511,10 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; BL RSP VN0 Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x10",
@@ -5387,8 +6522,10 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; BL DRS VN0 Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VN0_BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x20",
@@ -5396,8 +6533,10 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; AD VNA Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VNA_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x1",
@@ -5405,8 +6544,10 @@
},
{
"BriefDescription": "UPI Ingress Credits In Use Cycles; BL VNA Credits",
+ "Counter": "0",
"EventCode": "0x3B",
"EventName": "UNC_CHA_UPI_CREDIT_OCCUPANCY.VNA_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of UPI credits available in each cycle for either the AD or BL ring. In order to send snoops, snoop responses, requests, data, etc to the UPI agent on the ring, it is necessary to first acquire a credit for the UPI ingress buffer. This stat increments by the number of credits that are available each cycle. This can be used in conjunction with the Credit Acquired event in order to calculate average credit lifetime. This event supports filtering for the different types of credits that are available. Note that you must select the link that you would like to monitor using the link select register, and you can only monitor 1 link at a time.",
"UMask": "0x2",
@@ -5414,8 +6555,10 @@
},
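The occupancy descriptions above pair with a companion credit-acquired count to give an average credit lifetime (total credit-cycles divided by credits acquired). A minimal sketch of that arithmetic follows, assuming UNC_CHA_UPI_CREDIT_ACQUIRED as the companion event name and using made-up counter readings; it illustrates the calculation the descriptions suggest and is not part of this event file.

def avg_credit_lifetime_cycles(occupancy_count: int, acquired_count: int) -> float:
    # UNC_CHA_UPI_CREDIT_OCCUPANCY.* accumulates credits-in-use per cycle;
    # dividing by the matching credit-acquired count over the same interval
    # yields the mean number of cycles each credit stayed in use.
    return occupancy_count / acquired_count if acquired_count else 0.0

# Hypothetical readings over one measurement interval:
print(avg_credit_lifetime_cycles(occupancy_count=1_200_000, acquired_count=30_000))  # 40.0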
{
"BriefDescription": "Vertical AD Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5423,8 +6566,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5432,8 +6577,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5441,8 +6588,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5450,8 +6599,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5459,8 +6610,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5468,8 +6621,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5477,8 +6632,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5486,8 +6643,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5495,8 +6654,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5504,8 +6665,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5513,8 +6676,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5522,8 +6687,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_CHA_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -5531,8 +6698,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_CHA_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -5540,8 +6709,10 @@
},
{
"BriefDescription": "WbPushMtoI; Pushed to LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_CHA_WB_PUSH_MTOI.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when the CHA was received WbPushMtoI; Counts the number of times when the CHA was able to push WbPushMToI to LLC",
"UMask": "0x1",
@@ -5549,8 +6720,10 @@
},
{
"BriefDescription": "WbPushMtoI; Pushed to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_CHA_WB_PUSH_MTOI.MEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when the CHA was received WbPushMtoI; Counts the number of times when the CHA was unable to push WbPushMToI to LLC (hence pushed it to MEM)",
"UMask": "0x2",
@@ -5558,8 +6731,10 @@
},
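Taken together, the two WbPushMtoI umasks above show how often the CHA had to fall back to memory. A small sketch of that derived ratio follows, with hypothetical counter values; the ratio is an illustration and is not defined anywhere in this file.

def wb_push_mem_ratio(llc_count: int, mem_count: int) -> float:
    # UNC_CHA_WB_PUSH_MTOI.LLC counts pushes that landed in the LLC;
    # UNC_CHA_WB_PUSH_MTOI.MEM counts the ones that had to go to memory.
    total = llc_count + mem_count
    return mem_count / total if total else 0.0

# Hypothetical readings over one measurement interval:
print(wb_push_mem_ratio(llc_count=95_000, mem_count=5_000))  # 0.05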
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC0_SMI2",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC0_SMI2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.; Filter for memory controller 2 only.",
"UMask": "0x4",
@@ -5567,8 +6742,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC1_SMI3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC1_SMI3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.; Filter for memory controller 3 only.",
"UMask": "0x8",
@@ -5576,8 +6753,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC2_SMI4",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC2_SMI4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.; Filter for memory controller 4 only.",
"UMask": "0x10",
@@ -5585,8 +6764,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty; EDC3_SMI5",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.EDC3_SMI5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.; Filter for memory controller 5 only.",
"UMask": "0x20",
@@ -5594,8 +6775,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty; MC0_SMI0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC0_SMI0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.; Filter for memory controller 0 only.",
"UMask": "0x1",
@@ -5603,8 +6786,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty; MC1_SMI1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC1_SMI1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue.; Filter for memory controller 1 only.",
"UMask": "0x2",
@@ -5612,8 +6797,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Any RspIFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.ANY_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Any Request - Response I to Fwd F/E",
"UMask": "0xe4",
@@ -5621,8 +6808,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.ANY_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Any Request - Response I to Fwd M",
"UMask": "0xf0",
@@ -5630,8 +6819,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Any RspSFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.ANY_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Any Request - Response S to Fwd F/E",
"UMask": "0xe2",
@@ -5639,8 +6830,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Any RspSFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.ANY_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Any Request - Response S to Fwd M",
"UMask": "0xe8",
@@ -5648,8 +6841,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Any RspHitFSE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.ANY_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Any Request - Response any to Hit F/S/E",
"UMask": "0xe1",
@@ -5657,8 +6852,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Core RspIFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.CORE_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Core Request - Response I to Fwd F/E",
"UMask": "0x44",
@@ -5666,8 +6863,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Core RspIFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.CORE_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Core Request - Response I to Fwd M",
"UMask": "0x50",
@@ -5675,8 +6874,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Core RspSFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.CORE_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Core Request - Response S to Fwd F/E",
"UMask": "0x42",
@@ -5684,8 +6885,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Core RspSFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.CORE_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Core Request - Response S to Fwd M",
"UMask": "0x48",
@@ -5693,8 +6896,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Core RspHitFSE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.CORE_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Core Request - Response any to Hit F/S/E",
"UMask": "0x41",
@@ -5702,8 +6907,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Evict RspIFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Eviction Request - Response I to Fwd F/E",
"UMask": "0x84",
@@ -5711,8 +6918,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Evict RspIFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Eviction Request - Response I to Fwd M",
"UMask": "0x90",
@@ -5720,8 +6929,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Evict RspSFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Eviction Request - Response S to Fwd F/E",
"UMask": "0x82",
@@ -5729,8 +6940,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Evict RspSFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Eviction Request - Response S to Fwd M",
"UMask": "0x88",
@@ -5738,8 +6951,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; Evict RspHitFSE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EVICT_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; Eviction Request - Response any to Hit F/S/E",
"UMask": "0x81",
@@ -5747,8 +6962,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; External RspIFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EXT_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; External Request - Response I to Fwd F/E",
"UMask": "0x24",
@@ -5756,8 +6973,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; External RspIFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EXT_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; External Request - Response I to Fwd M",
"UMask": "0x30",
@@ -5765,8 +6984,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; External RspSFwdFE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EXT_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; External Request - Response S to Fwd F/E",
"UMask": "0x22",
@@ -5774,8 +6995,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; External RspSFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EXT_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; External Request - Response S to Fwd M",
"UMask": "0x28",
@@ -5783,8 +7006,10 @@
},
{
"BriefDescription": "Core Cross Snoop Responses; External RspHitFSE",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_CHA_XSNP_RESP.EXT_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of core cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type. This event can be filtered based on who triggered the initial snoop(s): from Evictions, Core or External (i.e. from a remote node) Requests. And the event can be filtered based on the responses: RspX_Fwd/HitY where Y is the state prior to the snoop response and X is the state following.; External Request - Response any to Hit F/S/E",
"UMask": "0x21",
@@ -5792,6 +7017,7 @@
},
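The UNC_CHA_XSNP_RESP umasks listed above follow a regular pattern: the upper bits encode who triggered the snoop (External 0x20, Core 0x40, Eviction 0x80, Any 0xe0) and the lower bits encode the response type, OR'ed together. The sketch below reconstructs that composition purely from the entries in this hunk; it is an inference from the table, not a statement of the hardware specification.

TRIGGER = {"EXT": 0x20, "CORE": 0x40, "EVICT": 0x80, "ANY": 0xE0}
RESPONSE = {
    "RSP_HITFSE": 0x01,  # response any to Hit F/S/E
    "RSPS_FWDFE": 0x02,  # Response S to Fwd F/E
    "RSPI_FWDFE": 0x04,  # Response I to Fwd F/E
    "RSPS_FWDM": 0x08,   # Response S to Fwd M
    "RSPI_FWDM": 0x10,   # Response I to Fwd M
}

def xsnp_umask(trigger: str, response: str) -> int:
    # Compose a UMask the same way the UNC_CHA_XSNP_RESP entries above do.
    return TRIGGER[trigger] | RESPONSE[response]

assert xsnp_umask("CORE", "RSPI_FWDFE") == 0x44  # matches CORE_RSPI_FWDFE above
assert xsnp_umask("ANY", "RSPI_FWDM") == 0xF0    # matches ANY_RSPI_FWDM above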
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CLOCKTICKS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventName": "UNC_C_CLOCKTICKS",
"PerPkg": "1",
@@ -5799,6 +7025,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_FAST_ASSERTED.HORZ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA5",
"EventName": "UNC_C_FAST_ASSERTED",
@@ -5808,15 +7035,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.ANY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.ANY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.DATA_READ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.DATA_READ",
@@ -5826,24 +7056,29 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x31",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x91",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.REMOTE_SNOOP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.REMOTE_SNOOP",
@@ -5853,15 +7088,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.WRITE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.WRITE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_E",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.E_STATE",
@@ -5871,6 +7109,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_F",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.F_STATE",
@@ -5880,15 +7119,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.LOCAL_ALL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2f",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_M",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.M_STATE",
@@ -5898,15 +7140,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.REMOTE_ALL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_VICTIMS.TOTAL_S",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.S_STATE",
@@ -5916,59 +7161,72 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SRC_THRTL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA4",
"EventName": "UNC_C_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.EVICT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.HIT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IPQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.IPQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x18",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.IPQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x28",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.IRQ",
@@ -5978,6 +7236,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_HIT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.IRQ_HIT",
@@ -5987,6 +7246,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.IRQ_MISS",
@@ -5996,51 +7256,62 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x37",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOC_IA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x31",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IO",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOC_IO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x34",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.PRQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IO_HIT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.PRQ_HIT",
@@ -6050,6 +7321,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IO_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.PRQ_MISS",
@@ -6059,6 +7331,7 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.REM_ALL",
@@ -6068,87 +7341,106 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.RRQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x50",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.RRQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x60",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.WBQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x90",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.WBQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa0",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.EVICT",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.HIT",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IPQ",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.IPQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x18",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.IPQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x28",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IA",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.IRQ",
@@ -6158,6 +7450,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IA_HIT",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.IRQ_HIT",
@@ -6167,6 +7460,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IA_MISS",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.IRQ_MISS",
@@ -6176,608 +7470,743 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x37",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IA",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOC_IA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x31",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IO",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOC_IO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x34",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.MISS",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.PRQ",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IO_HIT",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.PRQ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_OCCUPANCY.IO_MISS",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.PRQ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x24",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x80",
"EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x80",
"EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x80",
"EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x80",
"EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x80",
"EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_ACQUIRED.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x80",
"EventName": "UNC_H_AG0_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x82",
"EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x82",
"EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x82",
"EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x82",
"EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x82",
"EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_AD_CRD_OCCUPANCY.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x82",
"EventName": "UNC_H_AG0_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x88",
"EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x88",
"EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x88",
"EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x88",
"EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x88",
"EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_ACQUIRED.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x88",
"EventName": "UNC_H_AG0_BL_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8A",
"EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8A",
"EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8A",
"EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8A",
"EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8A",
"EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG0_BL_CRD_OCCUPANCY.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8A",
"EventName": "UNC_H_AG0_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_ACQUIRED.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_H_AG1_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x86",
"EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x86",
"EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x86",
"EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x86",
"EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x86",
"EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_AD_CRD_OCCUPANCY.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x86",
"EventName": "UNC_H_AG1_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8E",
"EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8E",
"EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8E",
"EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8E",
"EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8E",
"EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CRD_OCCUPANCY.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8E",
"EventName": "UNC_H_AG1_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8C",
"EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8C",
"EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8C",
"EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8C",
"EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8C",
"EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_AG1_BL_CREDITS_ACQUIRED.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x8C",
"EventName": "UNC_H_AG1_BL_CREDITS_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x57",
"EventName": "UNC_H_BYPASS_CHA_IMC.INTERMEDIATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_BYPASS_CHA_IMC.NOT_TAKEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x57",
"EventName": "UNC_H_BYPASS_CHA_IMC.NOT_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_BYPASS_CHA_IMC.TAKEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x57",
"EventName": "UNC_H_BYPASS_CHA_IMC.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CMS_CLOCKTICKS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_H_CLOCK",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_PMA.C1_STATE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x17",
"EventName": "UNC_H_CORE_PMA.C1_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_PMA.C1_TRANSITION",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x17",
"EventName": "UNC_H_CORE_PMA.C1_TRANSITION",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_PMA.C6_STATE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x17",
"EventName": "UNC_H_CORE_PMA.C6_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_PMA.C6_TRANSITION",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x17",
"EventName": "UNC_H_CORE_PMA.C6_TRANSITION",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_PMA.GV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x17",
"EventName": "UNC_H_CORE_PMA.GV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.ANY_GTONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.ANY_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.ANY_ONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.ANY_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.ANY_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.ANY_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.CORE_GTONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.CORE_GTONE",
@@ -6787,24 +8216,29 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.CORE_ONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.CORE_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x41",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.CORE_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.CORE_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x44",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EVICT_GTONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.EVICT_GTONE",
@@ -6814,59 +8248,72 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EVICT_ONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.EVICT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x81",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EVICT_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.EVICT_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x84",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EXT_GTONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.EXT_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x22",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EXT_ONE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.EXT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x21",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_CORE_SNP.EXT_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x33",
"EventName": "UNC_H_CORE_SNP.EXT_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x24",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_COUNTER0_OCCUPANCY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x1F",
"EventName": "UNC_H_COUNTER0_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_DIR_LOOKUP.NO_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x53",
"EventName": "UNC_H_DIR_LOOKUP.NO_SNP",
@@ -6876,6 +8323,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_DIR_LOOKUP.SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x53",
"EventName": "UNC_H_DIR_LOOKUP.SNP",
@@ -6885,6 +8333,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_DIR_UPDATE.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x54",
"EventName": "UNC_H_DIR_UPDATE.HA",
@@ -6894,6 +8343,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_DIR_UPDATE.TOR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x54",
"EventName": "UNC_H_DIR_UPDATE.TOR",
@@ -6903,24 +8353,29 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAE",
"EventName": "UNC_H_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAE",
"EventName": "UNC_H_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_HIT.EX_RDS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5F",
"EventName": "UNC_H_HITME_HIT.EX_RDS",
@@ -6930,411 +8385,502 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_HIT.SHARED_OWNREQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5F",
"EventName": "UNC_H_HITME_HIT.SHARED_OWNREQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_HIT.WBMTOE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5F",
"EventName": "UNC_H_HITME_HIT.WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_HIT.WBMTOI_OR_S",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5F",
"EventName": "UNC_H_HITME_HIT.WBMTOI_OR_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_LOOKUP.READ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5E",
"EventName": "UNC_H_HITME_LOOKUP.READ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_LOOKUP.WRITE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5E",
"EventName": "UNC_H_HITME_LOOKUP.WRITE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x60",
"EventName": "UNC_H_HITME_MISS.NOTSHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_MISS.READ_OR_INV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x60",
"EventName": "UNC_H_HITME_MISS.READ_OR_INV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_MISS.SHARED_RDINVOWN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x60",
"EventName": "UNC_H_HITME_MISS.SHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.DEALLOCATE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x61",
"EventName": "UNC_H_HITME_UPDATE.DEALLOCATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x61",
"EventName": "UNC_H_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.RDINVOWN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x61",
"EventName": "UNC_H_HITME_UPDATE.RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.RSPFWDI_REM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x61",
"EventName": "UNC_H_HITME_UPDATE.RSPFWDI_REM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HITME_UPDATE.SHARED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x61",
"EventName": "UNC_H_HITME_UPDATE.SHARED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA7",
"EventName": "UNC_H_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA7",
"EventName": "UNC_H_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA7",
"EventName": "UNC_H_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA7",
"EventName": "UNC_H_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA9",
"EventName": "UNC_H_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA9",
"EventName": "UNC_H_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA9",
"EventName": "UNC_H_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA9",
"EventName": "UNC_H_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAB",
"EventName": "UNC_H_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAB",
"EventName": "UNC_H_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAB",
"EventName": "UNC_H_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAB",
"EventName": "UNC_H_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_IV_IN_USE.LEFT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAD",
"EventName": "UNC_H_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_HORZ_RING_IV_IN_USE.RIGHT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAD",
"EventName": "UNC_H_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_READS_COUNT.NORMAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x59",
"EventName": "UNC_H_IMC_READS_COUNT.NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_READS_COUNT.PRIORITY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x59",
"EventName": "UNC_H_IMC_READS_COUNT.PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.FULL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5B",
"EventName": "UNC_H_IMC_WRITES_COUNT.FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.FULL_MIG",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5B",
"EventName": "UNC_H_IMC_WRITES_COUNT.FULL_MIG",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5B",
"EventName": "UNC_H_IMC_WRITES_COUNT.FULL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.PARTIAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5B",
"EventName": "UNC_H_IMC_WRITES_COUNT.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.PARTIAL_MIG",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5B",
"EventName": "UNC_H_IMC_WRITES_COUNT.PARTIAL_MIG",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5B",
"EventName": "UNC_H_IMC_WRITES_COUNT.PARTIAL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_ALLOC.INVITOM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x62",
"EventName": "UNC_H_IODC_ALLOC.INVITOM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_ALLOC.IODCFULL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x62",
"EventName": "UNC_H_IODC_ALLOC.IODCFULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_ALLOC.OSBGATED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x62",
"EventName": "UNC_H_IODC_ALLOC.OSBGATED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.ALL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x63",
"EventName": "UNC_H_IODC_DEALLOC.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.SNPOUT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x63",
"EventName": "UNC_H_IODC_DEALLOC.SNPOUT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.WBMTOE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x63",
"EventName": "UNC_H_IODC_DEALLOC.WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.WBMTOI",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x63",
"EventName": "UNC_H_IODC_DEALLOC.WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_IODC_DEALLOC.WBPUSHMTOI",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x63",
"EventName": "UNC_H_IODC_DEALLOC.WBPUSHMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_MISC.CV0_PREF_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x39",
"EventName": "UNC_H_MISC.CV0_PREF_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_MISC.CV0_PREF_VIC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x39",
"EventName": "UNC_H_MISC.CV0_PREF_VIC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_MISC.RFO_HIT_S",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x39",
"EventName": "UNC_H_MISC.RFO_HIT_S",
@@ -7344,86 +8890,105 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_MISC.RSPI_WAS_FSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x39",
"EventName": "UNC_H_MISC.RSPI_WAS_FSE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_MISC.WC_ALIASING",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x39",
"EventName": "UNC_H_MISC.WC_ALIASING",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_OSB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x55",
"EventName": "UNC_H_OSB",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC0_SMI2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x58",
"EventName": "UNC_H_READ_NO_CREDITS.EDC0_SMI2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC1_SMI3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x58",
"EventName": "UNC_H_READ_NO_CREDITS.EDC1_SMI3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC2_SMI4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x58",
"EventName": "UNC_H_READ_NO_CREDITS.EDC2_SMI4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.EDC3_SMI5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x58",
"EventName": "UNC_H_READ_NO_CREDITS.EDC3_SMI5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.MC0_SMI0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x58",
"EventName": "UNC_H_READ_NO_CREDITS.MC0_SMI0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_READ_NO_CREDITS.MC1_SMI1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x58",
"EventName": "UNC_H_READ_NO_CREDITS.MC1_SMI1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_REQUESTS.INVITOE_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.INVITOE_LOCAL",
@@ -7433,6 +8998,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_REQUESTS.INVITOE_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.INVITOE_REMOTE",
@@ -7442,6 +9008,7 @@
},
{
"BriefDescription": "read requests from home agent",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.READS",
@@ -7451,6 +9018,7 @@
},
{
"BriefDescription": "read requests from local home agent",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.READS_LOCAL",
@@ -7460,15 +9028,18 @@
},
{
"BriefDescription": "read requests from remote home agent",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.READS_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "write requests from home agent",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.WRITES",
@@ -7478,6 +9049,7 @@
},
{
"BriefDescription": "write requests from local home agent",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.WRITES_LOCAL",
@@ -7487,177 +9059,216 @@
},
{
"BriefDescription": "write requests from remote home agent",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x50",
"EventName": "UNC_H_REQUESTS.WRITES_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.AD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA1",
"EventName": "UNC_H_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.AK",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA1",
"EventName": "UNC_H_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.BL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA1",
"EventName": "UNC_H_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_HORZ.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA1",
"EventName": "UNC_H_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.AD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA0",
"EventName": "UNC_H_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.AK",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA0",
"EventName": "UNC_H_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.BL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA0",
"EventName": "UNC_H_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_BOUNCES_VERT.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA0",
"EventName": "UNC_H_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.AD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA3",
"EventName": "UNC_H_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.AK",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA3",
"EventName": "UNC_H_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA3",
"EventName": "UNC_H_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.BL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA3",
"EventName": "UNC_H_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_HORZ.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA3",
"EventName": "UNC_H_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.AD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA2",
"EventName": "UNC_H_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.AK",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA2",
"EventName": "UNC_H_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.BL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA2",
"EventName": "UNC_H_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RING_SINK_STARVED_VERT.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA2",
"EventName": "UNC_H_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.IPQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x13",
"EventName": "UNC_H_RxC_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.IRQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x13",
"EventName": "UNC_H_RxC_INSERTS.IRQ",
@@ -7667,276 +9278,337 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.IRQ_REJ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x13",
"EventName": "UNC_H_RxC_INSERTS.IRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.PRQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x13",
"EventName": "UNC_H_RxC_INSERTS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.PRQ_REJ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x13",
"EventName": "UNC_H_RxC_INSERTS.PRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.RRQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x13",
"EventName": "UNC_H_RxC_INSERTS.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_INSERTS.WBQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x13",
"EventName": "UNC_H_RxC_INSERTS.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ0_REJECT.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x22",
"EventName": "UNC_H_RxC_IPQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ0_REJECT.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x22",
"EventName": "UNC_H_RxC_IPQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ0_REJECT.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x22",
"EventName": "UNC_H_RxC_IPQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ0_REJECT.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x22",
"EventName": "UNC_H_RxC_IPQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ0_REJECT.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x22",
"EventName": "UNC_H_RxC_IPQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ0_REJECT.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x22",
"EventName": "UNC_H_RxC_IPQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.ALLOW_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.ANY_IPQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.LLC_OR_SF_WAY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.LLC_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.PA_MATCH",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.SF_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IPQ1_REJECT.VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x23",
"EventName": "UNC_H_RxC_IPQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ0_REJECT.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x18",
"EventName": "UNC_H_RxC_IRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ0_REJECT.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x18",
"EventName": "UNC_H_RxC_IRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ0_REJECT.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x18",
"EventName": "UNC_H_RxC_IRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ0_REJECT.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x18",
"EventName": "UNC_H_RxC_IRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ0_REJECT.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x18",
"EventName": "UNC_H_RxC_IRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ0_REJECT.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x18",
"EventName": "UNC_H_RxC_IRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.ALLOW_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.ANY_REJECT_IRQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.LLC_OR_SF_WAY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.LLC_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.PA_MATCH",
@@ -7946,177 +9618,216 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.SF_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_IRQ1_REJECT.VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x19",
"EventName": "UNC_H_RxC_IRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_REJECT.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "UNC_H_RxC_ISMQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_REJECT.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "UNC_H_RxC_ISMQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_REJECT.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_REJECT.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_REJECT.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_REJECT.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "UNC_H_RxC_ISMQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_RETRY.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2C",
"EventName": "UNC_H_RxC_ISMQ0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_RETRY.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2C",
"EventName": "UNC_H_RxC_ISMQ0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_RETRY.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2C",
"EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_RETRY.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2C",
"EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_RETRY.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2C",
"EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ0_RETRY.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2C",
"EventName": "UNC_H_RxC_ISMQ0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ1_REJECT.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x25",
"EventName": "UNC_H_RxC_ISMQ1_REJECT.ANY_ISMQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ1_REJECT.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x25",
"EventName": "UNC_H_RxC_ISMQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ1_RETRY.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2D",
"EventName": "UNC_H_RxC_ISMQ1_RETRY.ANY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_ISMQ1_RETRY.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2D",
"EventName": "UNC_H_RxC_ISMQ1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OCCUPANCY.IPQ",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x11",
"EventName": "UNC_H_RxC_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OCCUPANCY.IRQ",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x11",
"EventName": "UNC_H_RxC_OCCUPANCY.IRQ",
@@ -8126,1005 +9837,1228 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OCCUPANCY.RRQ",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x11",
"EventName": "UNC_H_RxC_OCCUPANCY.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OCCUPANCY.WBQ",
+ "Counter": "0",
"Deprecated": "1",
"EventCode": "0x11",
"EventName": "UNC_H_RxC_OCCUPANCY.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER0_RETRY.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2E",
"EventName": "UNC_H_RxC_OTHER0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER0_RETRY.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2E",
"EventName": "UNC_H_RxC_OTHER0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER0_RETRY.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2E",
"EventName": "UNC_H_RxC_OTHER0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER0_RETRY.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2E",
"EventName": "UNC_H_RxC_OTHER0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER0_RETRY.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2E",
"EventName": "UNC_H_RxC_OTHER0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER0_RETRY.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2E",
"EventName": "UNC_H_RxC_OTHER0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.ALLOW_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.ANY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.LLC_OR_SF_WAY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.LLC_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.PA_MATCH",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.SF_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_OTHER1_RETRY.VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2F",
"EventName": "UNC_H_RxC_OTHER1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ0_REJECT.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x20",
"EventName": "UNC_H_RxC_PRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ0_REJECT.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x20",
"EventName": "UNC_H_RxC_PRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ0_REJECT.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x20",
"EventName": "UNC_H_RxC_PRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ0_REJECT.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x20",
"EventName": "UNC_H_RxC_PRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ0_REJECT.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x20",
"EventName": "UNC_H_RxC_PRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ0_REJECT.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x20",
"EventName": "UNC_H_RxC_PRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.ALLOW_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.ANY_PRQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.LLC_OR_SF_WAY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.LLC_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.PA_MATCH",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.SF_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_PRQ1_REJECT.VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x21",
"EventName": "UNC_H_RxC_PRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q0_RETRY.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2A",
"EventName": "UNC_H_RxC_REQ_Q0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q0_RETRY.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2A",
"EventName": "UNC_H_RxC_REQ_Q0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2A",
"EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2A",
"EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q0_RETRY.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2A",
"EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q0_RETRY.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2A",
"EventName": "UNC_H_RxC_REQ_Q0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.ALLOW_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.ANY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.LLC_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.PA_MATCH",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.SF_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_REQ_Q1_RETRY.VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2B",
"EventName": "UNC_H_RxC_REQ_Q1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ0_REJECT.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x26",
"EventName": "UNC_H_RxC_RRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ0_REJECT.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x26",
"EventName": "UNC_H_RxC_RRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ0_REJECT.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x26",
"EventName": "UNC_H_RxC_RRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ0_REJECT.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x26",
"EventName": "UNC_H_RxC_RRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ0_REJECT.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x26",
"EventName": "UNC_H_RxC_RRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ0_REJECT.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x26",
"EventName": "UNC_H_RxC_RRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.ALLOW_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.ANY_RRQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.LLC_OR_SF_WAY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.LLC_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.PA_MATCH",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.SF_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_RRQ1_REJECT.VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x27",
"EventName": "UNC_H_RxC_RRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ0_REJECT.AD_REQ_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x28",
"EventName": "UNC_H_RxC_WBQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ0_REJECT.AD_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x28",
"EventName": "UNC_H_RxC_WBQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ0_REJECT.BL_NCB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x28",
"EventName": "UNC_H_RxC_WBQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ0_REJECT.BL_NCS_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x28",
"EventName": "UNC_H_RxC_WBQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ0_REJECT.BL_RSP_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x28",
"EventName": "UNC_H_RxC_WBQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ0_REJECT.BL_WB_VN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x28",
"EventName": "UNC_H_RxC_WBQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.ALLOW_SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.ANY0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.ANY_WBQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.HA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.LLC_OR_SF_WAY",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.LLC_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.PA_MATCH",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.SF_VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxC_WBQ1_REJECT.VICTIM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x29",
"EventName": "UNC_H_RxC_WBQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BUSY_STARVED.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB4",
"EventName": "UNC_H_RxR_BUSY_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BUSY_STARVED.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB4",
"EventName": "UNC_H_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BUSY_STARVED.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB4",
"EventName": "UNC_H_RxR_BUSY_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BUSY_STARVED.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB4",
"EventName": "UNC_H_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BYPASS.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB2",
"EventName": "UNC_H_RxR_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BYPASS.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB2",
"EventName": "UNC_H_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BYPASS.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB2",
"EventName": "UNC_H_RxR_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BYPASS.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB2",
"EventName": "UNC_H_RxR_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BYPASS.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB2",
"EventName": "UNC_H_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_BYPASS.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB2",
"EventName": "UNC_H_RxR_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_CRD_STARVED.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB3",
"EventName": "UNC_H_RxR_CRD_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_CRD_STARVED.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB3",
"EventName": "UNC_H_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_CRD_STARVED.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB3",
"EventName": "UNC_H_RxR_CRD_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_CRD_STARVED.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB3",
"EventName": "UNC_H_RxR_CRD_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_CRD_STARVED.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB3",
"EventName": "UNC_H_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_CRD_STARVED.IFV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB3",
"EventName": "UNC_H_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_CRD_STARVED.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB3",
"EventName": "UNC_H_RxR_CRD_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_INSERTS.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB1",
"EventName": "UNC_H_RxR_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_INSERTS.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB1",
"EventName": "UNC_H_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_INSERTS.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB1",
"EventName": "UNC_H_RxR_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_INSERTS.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB1",
"EventName": "UNC_H_RxR_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_INSERTS.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB1",
"EventName": "UNC_H_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_INSERTS.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB1",
"EventName": "UNC_H_RxR_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_OCCUPANCY.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB0",
"EventName": "UNC_H_RxR_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_OCCUPANCY.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB0",
"EventName": "UNC_H_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_OCCUPANCY.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB0",
"EventName": "UNC_H_RxR_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_OCCUPANCY.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB0",
"EventName": "UNC_H_RxR_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_OCCUPANCY.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB0",
"EventName": "UNC_H_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_RxR_OCCUPANCY.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xB0",
"EventName": "UNC_H_RxR_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SF_EVICTION.E_STATE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x3D",
"EventName": "UNC_H_SF_EVICTION.E_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SF_EVICTION.M_STATE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x3D",
"EventName": "UNC_H_SF_EVICTION.M_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SF_EVICTION.S_STATE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x3D",
"EventName": "UNC_H_SF_EVICTION.S_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOPS_SENT.ALL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x51",
"EventName": "UNC_H_SNOOPS_SENT.",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOPS_SENT.BCST_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x51",
"EventName": "UNC_H_SNOOPS_SENT.BCST_LOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOPS_SENT.BCST_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x51",
"EventName": "UNC_H_SNOOPS_SENT.BCST_REM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOPS_SENT.DIRECT_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x51",
"EventName": "UNC_H_SNOOPS_SENT.DIRECT_LOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOPS_SENT.DIRECT_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x51",
"EventName": "UNC_H_SNOOPS_SENT.DIRECT_REM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOPS_SENT.LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x51",
"EventName": "UNC_H_SNOOPS_SENT.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOPS_SENT.REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x51",
"EventName": "UNC_H_SNOOPS_SENT.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPCNFLCTS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSPCNFLCT",
@@ -9134,24 +11068,29 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPFWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPI",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPIFWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSPIFWD",
@@ -9161,15 +11100,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSPSFWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSPSFWD",
@@ -9179,6 +11121,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSP_FWD_WB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSP_FWD_WB",
@@ -9188,1575 +11131,1925 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP.RSP_WBWB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5C",
"EventName": "UNC_H_SNOOP_RESP.RSP_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSPCNFLCT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPCNFLCT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSPFWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSPI",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSPIFWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPIFWD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSPS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSPSFWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSPSFWD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSP_FWD_WB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSP_FWD_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_SNOOP_RESP_LOCAL.RSP_WB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5D",
"EventName": "UNC_H_SNP_RSP_RCV_LOCAL.RSP_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD0",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD0",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD0",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD0",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD0",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD0",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD2",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD2",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD2",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD2",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD2",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD2",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD4",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD4",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD4",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD4",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD4",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD4",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD6",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD6",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD6",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD6",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD6",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xD6",
"EventName": "UNC_H_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_ADS_USED.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9D",
"EventName": "UNC_H_TxR_HORZ_ADS_USED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_ADS_USED.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9D",
"EventName": "UNC_H_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_ADS_USED.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9D",
"EventName": "UNC_H_TxR_HORZ_ADS_USED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_ADS_USED.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9D",
"EventName": "UNC_H_TxR_HORZ_ADS_USED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_ADS_USED.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9D",
"EventName": "UNC_H_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_BYPASS.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9F",
"EventName": "UNC_H_TxR_HORZ_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_BYPASS.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9F",
"EventName": "UNC_H_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_BYPASS.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9F",
"EventName": "UNC_H_TxR_HORZ_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_BYPASS.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9F",
"EventName": "UNC_H_TxR_HORZ_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_BYPASS.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9F",
"EventName": "UNC_H_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_BYPASS.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9F",
"EventName": "UNC_H_TxR_HORZ_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x96",
"EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x96",
"EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_FULL.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x96",
"EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x96",
"EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x96",
"EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_FULL.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x96",
"EventName": "UNC_H_TxR_HORZ_CYCLES_FULL.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_NE.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x97",
"EventName": "UNC_H_TxR_HORZ_CYCLES_NE.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x97",
"EventName": "UNC_H_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_NE.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x97",
"EventName": "UNC_H_TxR_HORZ_CYCLES_NE.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_NE.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x97",
"EventName": "UNC_H_TxR_HORZ_CYCLES_NE.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x97",
"EventName": "UNC_H_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_CYCLES_NE.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x97",
"EventName": "UNC_H_TxR_HORZ_CYCLES_NE.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_INSERTS.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x95",
"EventName": "UNC_H_TxR_HORZ_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_INSERTS.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x95",
"EventName": "UNC_H_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_INSERTS.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x95",
"EventName": "UNC_H_TxR_HORZ_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_INSERTS.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x95",
"EventName": "UNC_H_TxR_HORZ_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_INSERTS.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x95",
"EventName": "UNC_H_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_INSERTS.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x95",
"EventName": "UNC_H_TxR_HORZ_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_NACK.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x99",
"EventName": "UNC_H_TxR_HORZ_NACK.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_NACK.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x99",
"EventName": "UNC_H_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_NACK.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x99",
"EventName": "UNC_H_TxR_HORZ_NACK.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_NACK.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x99",
"EventName": "UNC_H_TxR_HORZ_NACK.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_NACK.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x99",
"EventName": "UNC_H_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_NACK.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x99",
"EventName": "UNC_H_TxR_HORZ_NACK.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_OCCUPANCY.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x94",
"EventName": "UNC_H_TxR_HORZ_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x94",
"EventName": "UNC_H_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_OCCUPANCY.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x94",
"EventName": "UNC_H_TxR_HORZ_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_OCCUPANCY.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x94",
"EventName": "UNC_H_TxR_HORZ_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x94",
"EventName": "UNC_H_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_OCCUPANCY.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x94",
"EventName": "UNC_H_TxR_HORZ_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_STARVED.AD_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9B",
"EventName": "UNC_H_TxR_HORZ_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_STARVED.AK_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9B",
"EventName": "UNC_H_TxR_HORZ_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_STARVED.BL_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9B",
"EventName": "UNC_H_TxR_HORZ_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_HORZ_STARVED.IV_BNC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9B",
"EventName": "UNC_H_TxR_HORZ_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_ADS_USED.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9C",
"EventName": "UNC_H_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_ADS_USED.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9C",
"EventName": "UNC_H_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_ADS_USED.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9C",
"EventName": "UNC_H_TxR_VERT_ADS_USED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_ADS_USED.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9C",
"EventName": "UNC_H_TxR_VERT_ADS_USED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_ADS_USED.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9C",
"EventName": "UNC_H_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_ADS_USED.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9C",
"EventName": "UNC_H_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_BYPASS.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9E",
"EventName": "UNC_H_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_BYPASS.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9E",
"EventName": "UNC_H_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_BYPASS.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9E",
"EventName": "UNC_H_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_BYPASS.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9E",
"EventName": "UNC_H_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_BYPASS.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9E",
"EventName": "UNC_H_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_BYPASS.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9E",
"EventName": "UNC_H_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_BYPASS.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9E",
"EventName": "UNC_H_TxR_VERT_BYPASS.IV_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x92",
"EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_FULL.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x92",
"EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x92",
"EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_FULL.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x92",
"EventName": "UNC_H_TxR_VERT_CYCLES_FULL.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x92",
"EventName": "UNC_H_TxR_VERT_CYCLES_FULL.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_FULL.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x92",
"EventName": "UNC_H_TxR_VERT_CYCLES_FULL.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_FULL.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x92",
"EventName": "UNC_H_TxR_VERT_CYCLES_FULL.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x93",
"EventName": "UNC_H_TxR_VERT_CYCLES_NE.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_NE.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x93",
"EventName": "UNC_H_TxR_VERT_CYCLES_NE.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x93",
"EventName": "UNC_H_TxR_VERT_CYCLES_NE.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_NE.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x93",
"EventName": "UNC_H_TxR_VERT_CYCLES_NE.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x93",
"EventName": "UNC_H_TxR_VERT_CYCLES_NE.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_NE.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x93",
"EventName": "UNC_H_TxR_VERT_CYCLES_NE.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_CYCLES_NE.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x93",
"EventName": "UNC_H_TxR_VERT_CYCLES_NE.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_INSERTS.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x91",
"EventName": "UNC_H_TxR_VERT_INSERTS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_INSERTS.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x91",
"EventName": "UNC_H_TxR_VERT_INSERTS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_INSERTS.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x91",
"EventName": "UNC_H_TxR_VERT_INSERTS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_INSERTS.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x91",
"EventName": "UNC_H_TxR_VERT_INSERTS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_INSERTS.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x91",
"EventName": "UNC_H_TxR_VERT_INSERTS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_INSERTS.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x91",
"EventName": "UNC_H_TxR_VERT_INSERTS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_INSERTS.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x91",
"EventName": "UNC_H_TxR_VERT_INSERTS.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_NACK.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x98",
"EventName": "UNC_H_TxR_VERT_NACK.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_NACK.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x98",
"EventName": "UNC_H_TxR_VERT_NACK.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_NACK.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x98",
"EventName": "UNC_H_TxR_VERT_NACK.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_NACK.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x98",
"EventName": "UNC_H_TxR_VERT_NACK.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_NACK.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x98",
"EventName": "UNC_H_TxR_VERT_NACK.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_NACK.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x98",
"EventName": "UNC_H_TxR_VERT_NACK.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_NACK.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x98",
"EventName": "UNC_H_TxR_VERT_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x90",
"EventName": "UNC_H_TxR_VERT_OCCUPANCY.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_OCCUPANCY.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x90",
"EventName": "UNC_H_TxR_VERT_OCCUPANCY.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x90",
"EventName": "UNC_H_TxR_VERT_OCCUPANCY.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_OCCUPANCY.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x90",
"EventName": "UNC_H_TxR_VERT_OCCUPANCY.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x90",
"EventName": "UNC_H_TxR_VERT_OCCUPANCY.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_OCCUPANCY.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x90",
"EventName": "UNC_H_TxR_VERT_OCCUPANCY.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_OCCUPANCY.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x90",
"EventName": "UNC_H_TxR_VERT_OCCUPANCY.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_STARVED.AD_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9A",
"EventName": "UNC_H_TxR_VERT_STARVED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_STARVED.AD_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9A",
"EventName": "UNC_H_TxR_VERT_STARVED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_STARVED.AK_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9A",
"EventName": "UNC_H_TxR_VERT_STARVED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_STARVED.AK_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9A",
"EventName": "UNC_H_TxR_VERT_STARVED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_STARVED.BL_AG0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9A",
"EventName": "UNC_H_TxR_VERT_STARVED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_STARVED.BL_AG1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9A",
"EventName": "UNC_H_TxR_VERT_STARVED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TxR_VERT_STARVED.IV",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x9A",
"EventName": "UNC_H_TxR_VERT_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA6",
"EventName": "UNC_H_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AD_IN_USE.DN_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA6",
"EventName": "UNC_H_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA6",
"EventName": "UNC_H_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AD_IN_USE.UP_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA6",
"EventName": "UNC_H_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA8",
"EventName": "UNC_H_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AK_IN_USE.DN_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA8",
"EventName": "UNC_H_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA8",
"EventName": "UNC_H_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_AK_IN_USE.UP_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xA8",
"EventName": "UNC_H_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAA",
"EventName": "UNC_H_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_BL_IN_USE.DN_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAA",
"EventName": "UNC_H_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAA",
"EventName": "UNC_H_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_BL_IN_USE.UP_ODD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAA",
"EventName": "UNC_H_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_IV_IN_USE.DN",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAC",
"EventName": "UNC_H_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_VERT_RING_IV_IN_USE.UP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xAC",
"EventName": "UNC_H_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WB_PUSH_MTOI.LLC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x56",
"EventName": "UNC_H_WB_PUSH_MTOI.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WB_PUSH_MTOI.MEM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x56",
"EventName": "UNC_H_WB_PUSH_MTOI.MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WRITE_NO_CREDITS.EDC0_SMI2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5A",
"EventName": "UNC_H_WRITE_NO_CREDITS.EDC0_SMI2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WRITE_NO_CREDITS.EDC1_SMI3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5A",
"EventName": "UNC_H_WRITE_NO_CREDITS.EDC1_SMI3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WRITE_NO_CREDITS.EDC2_SMI4",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5A",
"EventName": "UNC_H_WRITE_NO_CREDITS.EDC2_SMI4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WRITE_NO_CREDITS.EDC3_SMI5",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5A",
"EventName": "UNC_H_WRITE_NO_CREDITS.EDC3_SMI5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WRITE_NO_CREDITS.MC0_SMI0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5A",
"EventName": "UNC_H_WRITE_NO_CREDITS.MC0_SMI0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_WRITE_NO_CREDITS.MC1_SMI1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5A",
"EventName": "UNC_H_WRITE_NO_CREDITS.MC1_SMI1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.ANY_RSPI_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.ANY_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.ANY_RSPI_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.ANY_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf0",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.ANY_RSPS_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.ANY_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.ANY_RSPS_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.ANY_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe8",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.ANY_RSP_HITFSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.ANY_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.CORE_RSPI_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.CORE_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x44",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.CORE_RSPI_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.CORE_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x50",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.CORE_RSPS_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.CORE_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x42",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.CORE_RSPS_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.CORE_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x48",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.CORE_RSP_HITFSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.CORE_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x41",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EVICT_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x84",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EVICT_RSPI_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EVICT_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x90",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EVICT_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x82",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EVICT_RSPS_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EVICT_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x88",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EVICT_RSP_HITFSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EVICT_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x81",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EXT_RSPI_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EXT_RSPI_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x24",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EXT_RSPI_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EXT_RSPI_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x30",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EXT_RSPS_FWDFE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EXT_RSPS_FWDFE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x22",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EXT_RSPS_FWDM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EXT_RSPS_FWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x28",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_XSNP_RESP.EXT_RSP_HITFSE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x32",
"EventName": "UNC_H_XSNP_RESP.EXT_RSP_HITFSE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x21",
"Unit": "CHA"
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-interconnect.json
index 1a342dff1503..4efbfeb338a6 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-interconnect.json
@@ -1,8 +1,10 @@
[
{
"BriefDescription": "Total Write Cache Occupancy; Any Source",
+ "Counter": "0,1",
"EventCode": "0xF",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.ANY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.; Tracks all requests from any source port.",
"UMask": "0x1",
@@ -10,8 +12,10 @@
},
{
"BriefDescription": "Total Write Cache Occupancy; Snoops",
+ "Counter": "0,1",
"EventCode": "0xF",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.IV_Q",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.",
"UMask": "0x2",
@@ -19,6 +23,7 @@
},
{
"BriefDescription": "Total IRP occupancy of inbound read and write requests.",
+ "Counter": "0,1",
"EventCode": "0xF",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.MEM",
"PerPkg": "1",
@@ -28,58 +33,71 @@
},
{
"BriefDescription": "IRP Clocks",
+ "Counter": "0,1",
"EventCode": "0x1",
"EventName": "UNC_I_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; CLFlush",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.CLFLUSH",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; CRd",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.CRD",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; DRd",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.DRD",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIDCAHin5t",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.PCIDCAHINT",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIRdCur",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.PCIRDCUR",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "PCIITOM request issued by the IRP unit to the mesh with the intention of writing a full cacheline.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.PCITOM",
"PerPkg": "1",
@@ -89,6 +107,7 @@
},
{
"BriefDescription": "RFO request issued by the IRP unit to the mesh with the intention of writing a partial cacheline.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.RFO",
"PerPkg": "1",
@@ -98,22 +117,27 @@
},
{
"BriefDescription": "Coherent Ops; WbMtoI",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "FAF RF full",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_FAF_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "Inbound read requests received by the IRP and inserted into the FAF queue.",
+ "Counter": "0,1",
"EventCode": "0x18",
"EventName": "UNC_I_FAF_INSERTS",
"PerPkg": "1",
@@ -122,6 +146,7 @@
},
{
"BriefDescription": "Occupancy of the IRP FAF queue.",
+ "Counter": "0,1",
"EventCode": "0x19",
"EventName": "UNC_I_FAF_OCCUPANCY",
"PerPkg": "1",
@@ -130,95 +155,119 @@
},
{
"BriefDescription": "FAF allocation -- sent to ADQ",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_FAF_TRANSACTIONS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "All Inserts Inbound (p2p + faf + cset)",
+ "Counter": "0,1",
"EventCode": "0x1E",
"EventName": "UNC_I_IRP_ALL.INBOUND_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "All Inserts Outbound (BL, AK, Snoops)",
+ "Counter": "0,1",
"EventCode": "0x1E",
"EventName": "UNC_I_IRP_ALL.OUTBOUND_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Atomic Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.2ND_ATOMIC_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Read Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.2ND_RD_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Write Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.2ND_WR_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Rejects",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.FAST_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Requests",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.FAST_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Transfers From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.FAST_XFER",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Prefetch Ack Hints From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.PF_ACK_HINT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_MISC0.UNKNOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 1; Lost Forward",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_MISC1.LOST_FWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop pulled away ownership before a write was committed",
"UMask": "0x10",
@@ -226,8 +275,10 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Invalid",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Secondary received a transfer that did not have sufficient MESI state",
"UMask": "0x20",
@@ -235,8 +286,10 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Valid",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_MISC1.SEC_RCVD_VLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Secondary received a transfer that did have sufficient MESI state",
"UMask": "0x40",
@@ -244,8 +297,10 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of E Line",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_MISC1.SLOW_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Secondary received a transfer that did have sufficient MESI state",
"UMask": "0x4",
@@ -253,8 +308,10 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of I Line",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_MISC1.SLOW_I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop took cacheline ownership before write from data was committed.",
"UMask": "0x1",
@@ -262,8 +319,10 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of M Line",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_MISC1.SLOW_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop took cacheline ownership before write from data was committed.",
"UMask": "0x8",
@@ -271,8 +330,10 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of S Line",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_MISC1.SLOW_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Secondary received a transfer that did not have sufficient MESI state",
"UMask": "0x2",
@@ -280,88 +341,110 @@
},
{
"BriefDescription": "P2P Requests",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_P2P_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "P2P requests from the ITC",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Occupancy",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_P2P_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "P2P B & S Queue Occupancy",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; P2P completions",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.CMPL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; match if local only",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.LOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; match if local and target matches",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.LOC_AND_TGT_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; P2P Message",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.MSG",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; P2P reads",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; Match if remote only",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.REM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; match if remote and target matches",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.REM_AND_TGT_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions; P2P Writes",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Responses to snoops of any type that hit M, E, S or I line in the IIO",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit M, E, S or I line in the IIO",
"UMask": "0x7e",
@@ -369,8 +452,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit E or S line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_ES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit E or S line in the IIO cache",
"UMask": "0x74",
@@ -378,8 +463,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit I line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit I line in the IIO cache",
"UMask": "0x72",
@@ -387,8 +474,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit M line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit M line in the IIO cache",
"UMask": "0x78",
@@ -396,8 +485,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that miss the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that miss the IIO cache",
"UMask": "0x71",
@@ -405,64 +496,80 @@
},
{
"BriefDescription": "Snoop Responses; Hit E or S",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_ES",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses; Hit I",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses; Hit M",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_M",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses; Miss",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses; SnpCode",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPCODE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses; SnpData",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPDATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses; SnpInv",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPINV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Inbound Transaction Count; Atomic",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.ATOMIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of atomic transactions",
"UMask": "0x10",
@@ -470,8 +577,10 @@
},
{
"BriefDescription": "Inbound Transaction Count; Other",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.OTHER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of 'other' kinds of transactions.",
"UMask": "0x20",
@@ -479,8 +588,10 @@
},
{
"BriefDescription": "Inbound Transaction Count; Read Prefetches",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.RD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks the number of read prefetches.",
"UMask": "0x4",
@@ -488,8 +599,10 @@
},
{
"BriefDescription": "Inbound Transaction Count; Reads",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.READS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only read requests (not including read prefetches).",
"UMask": "0x1",
@@ -497,15 +610,18 @@
},
{
"BriefDescription": "Inbound Transaction Count; Writes",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.WRITES",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Trackes only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.",
+ "PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Inbound write (fast path) requests received by the IRP.",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.WR_PREF",
"PerPkg": "1",
@@ -515,118 +631,150 @@
},
{
"BriefDescription": "AK Egress Allocations",
+ "Counter": "0,1",
"EventCode": "0xB",
"EventName": "UNC_I_TxC_AK_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "UNC_I_TxC_BL_DRS_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x2",
"EventName": "UNC_I_TxC_BL_DRS_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "UNC_I_TxC_BL_DRS_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x6",
"EventName": "UNC_I_TxC_BL_NCB_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "UNC_I_TxC_BL_NCB_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x9",
"EventName": "UNC_I_TxC_BL_NCB_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "UNC_I_TxC_BL_NCS_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x4",
"EventName": "UNC_I_TxC_BL_NCS_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0xA",
"EventName": "UNC_I_TxC_BL_NCS_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "No AD Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x1A",
"EventName": "UNC_I_TxR2_AD_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number times when it is not possible to issue a request to the R2PCIe because there are no AD Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "No BL Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x1B",
"EventName": "UNC_I_TxR2_BL_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xD",
"EventName": "UNC_I_TxS_DATA_INSERTS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of requests issued to the switch (towards the devices).",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xE",
"EventName": "UNC_I_TxS_DATA_INSERTS_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of requests issued to the switch (towards the devices).",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Request Queue Occupancy",
+ "Counter": "0,1",
"EventCode": "0xC",
"EventName": "UNC_I_TxS_REQUEST_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices). This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests.",
"Unit": "IRP"
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -634,8 +782,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -643,8 +793,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -652,8 +804,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -661,8 +815,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -670,8 +826,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -679,8 +837,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -688,8 +848,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -697,8 +859,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -706,8 +870,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -715,8 +881,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -724,8 +892,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -733,8 +903,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -742,8 +914,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -751,8 +925,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -760,8 +936,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -769,8 +947,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -778,8 +958,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -787,8 +969,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -796,8 +980,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -805,8 +991,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -814,8 +1002,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -823,8 +1013,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -832,8 +1024,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -841,8 +1035,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -850,8 +1046,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -859,8 +1057,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -868,8 +1068,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -877,8 +1079,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -886,8 +1090,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -895,8 +1101,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -904,8 +1112,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -913,8 +1123,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -922,8 +1134,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -931,8 +1145,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -940,8 +1156,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -949,8 +1167,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -958,8 +1178,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -967,8 +1189,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -976,8 +1200,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -985,8 +1211,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -994,8 +1222,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -1003,8 +1233,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -1012,8 +1244,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -1021,8 +1255,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -1030,8 +1266,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -1039,8 +1277,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -1048,8 +1288,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CREDITS_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -1057,6 +1299,7 @@
},
{
"BriefDescription": "Traffic in which the M2M to iMC Bypass was not taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M2M_BYPASS_M2M_Egress.NOT_TAKEN",
"PerPkg": "1",
@@ -1066,43 +1309,54 @@
},
{
"BriefDescription": "M2M to iMC Bypass; Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M2M_BYPASS_M2M_Egress.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC Bypass; Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_BYPASS_M2M_INGRESS.NOT_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC Bypass; Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_BYPASS_M2M_INGRESS.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles - at UCLK",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M2M_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M2M_CMS_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles when direct to core mode (which bypasses the CHA) was disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE",
"PerPkg": "1",
@@ -1111,6 +1365,7 @@
},
{
"BriefDescription": "Messages sent direct to core (bypassing the CHA)",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M2M_DIRECT2CORE_TAKEN",
"PerPkg": "1",
@@ -1119,6 +1374,7 @@
},
{
"BriefDescription": "Number of reads in which direct to core transaction were overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE",
"PerPkg": "1",
@@ -1127,6 +1383,7 @@
},
{
"BriefDescription": "Number of reads in which direct to Intel(R) UPI transactions were overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_CREDITS",
"PerPkg": "1",
@@ -1135,6 +1392,7 @@
},
{
"BriefDescription": "Cycles when direct to Intel(R) UPI was disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE",
"PerPkg": "1",
@@ -1143,6 +1401,7 @@
},
{
"BriefDescription": "Messages sent direct to the Intel(R) UPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_M2M_DIRECT2UPI_TAKEN",
"PerPkg": "1",
@@ -1151,6 +1410,7 @@
},
{
"BriefDescription": "Number of reads that a message sent direct2 Intel(R) UPI was overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_M2M_DIRECT2UPI_TXN_OVERRIDE",
"PerPkg": "1",
@@ -1159,70 +1419,87 @@
},
{
"BriefDescription": "Directory Hit; On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit; On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit; On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit; On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit; On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit; On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit; On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit; On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory lookups (any state found)",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.ANY",
"PerPkg": "1",
@@ -1232,6 +1509,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookups (cacheline found in A state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_A",
"PerPkg": "1",
@@ -1241,6 +1519,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in I state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_I",
"PerPkg": "1",
@@ -1250,6 +1529,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in S state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_S",
"PerPkg": "1",
@@ -1259,70 +1539,87 @@
},
{
"BriefDescription": "Directory Miss; On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss; On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss; On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss; On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss; On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss; On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss; On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss; On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from A to I",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A2I",
"PerPkg": "1",
@@ -1332,6 +1629,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from A to S",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A2S",
"PerPkg": "1",
@@ -1341,6 +1639,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from/to Any state",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.ANY",
"PerPkg": "1",
@@ -1350,6 +1649,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from I to A",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I2A",
"PerPkg": "1",
@@ -1359,6 +1659,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from I to S",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I2S",
"PerPkg": "1",
@@ -1368,6 +1669,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from S to A",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S2A",
"PerPkg": "1",
@@ -1377,6 +1679,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from S to I",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S2I",
"PerPkg": "1",
@@ -1386,8 +1689,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -1395,8 +1700,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -1404,8 +1711,10 @@
},
{
"BriefDescription": "FaST wire asserted; Horizontal",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_FAST_ASSERTED.HORZ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.",
"UMask": "0x2",
@@ -1413,8 +1722,10 @@
},
{
"BriefDescription": "FaST wire asserted; Vertical",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_FAST_ASSERTED.VERT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.",
"UMask": "0x1",
@@ -1422,8 +1733,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1431,8 +1744,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1440,8 +1755,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1449,8 +1766,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1458,8 +1777,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1467,8 +1788,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1476,8 +1799,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1485,8 +1810,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA9",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1494,8 +1821,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1503,8 +1832,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1512,8 +1843,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1521,8 +1854,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1530,8 +1865,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use; Left",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M2M_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -1539,8 +1876,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use; Right",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M2M_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -1548,6 +1887,7 @@
},
{
"BriefDescription": "Reads to iMC issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.ALL",
"PerPkg": "1",
@@ -1557,22 +1897,27 @@
},
{
"BriefDescription": "M2M Reads Issued to iMC; All, regardless of priority.",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.FROM_TRANSGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC; Critical Priority",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Reads to iMC issued at Normal Priority (Non-Isochronous)",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.NORMAL",
"PerPkg": "1",
@@ -1582,6 +1927,7 @@
},
{
"BriefDescription": "Read requests to Intel(R) Optane(TM) DC persistent memory issued to the iMC from M2M",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.TO_PMM",
"PerPkg": "1",
@@ -1591,6 +1937,7 @@
},
{
"BriefDescription": "Writes to iMC issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.ALL",
"PerPkg": "1",
@@ -1600,30 +1947,37 @@
},
{
"BriefDescription": "M2M Writes Issued to iMC; All, regardless of priority.",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.FROM_TRANSGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC; Full Line Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC; ISOCH Full Line",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC; All, regardless of priority.",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.NI",
"PerPkg": "1",
@@ -1632,6 +1986,7 @@
},
{
"BriefDescription": "Partial Non-Isochronous writes to the iMC",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.PARTIAL",
"PerPkg": "1",
@@ -1641,14 +1996,17 @@
},
{
"BriefDescription": "M2M Writes Issued to iMC; ISOCH Partial",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Write requests to Intel(R) Optane(TM) DC persistent memory issued to the iMC from M2M",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.TO_PMM",
"PerPkg": "1",
@@ -1658,84 +2016,105 @@
},
{
"BriefDescription": "Number Packet Header Matches; MC Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M2M_PKT_MATCH.MC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Number Packet Header Matches; Mesh Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M2M_PKT_MATCH.MESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC RPQ Cycles w/Credits - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "UNC_M2M_PMM_RPQ_CYCLES_REG_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC RPQ Cycles w/Credits - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "UNC_M2M_PMM_RPQ_CYCLES_REG_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC RPQ Cycles w/Credits - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "UNC_M2M_PMM_RPQ_CYCLES_REG_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M2M_PMM_WPQ_CYCLES_REG_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M2M_PMM_WPQ_CYCLES_REG_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M2M_PMM_WPQ_CYCLES_REG_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M2M_PREFCAM_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M2M_PREFCAM_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch requests that got turn into a demand request",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_PREFCAM_DEMAND_PROMOTIONS",
"PerPkg": "1",
@@ -1744,6 +2123,7 @@
},
{
"BriefDescription": "Inserts into the Memory Controller Prefetch Queue",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M2M_PREFCAM_INSERTS",
"PerPkg": "1",
@@ -1752,15 +2132,19 @@
},
{
"BriefDescription": "Prefetch CAM Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -1768,8 +2152,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -1777,8 +2163,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -1786,8 +2174,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -1795,8 +2185,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -1804,8 +2196,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -1813,8 +2207,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -1822,8 +2218,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -1831,174 +2229,217 @@
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; Acknowledgements to Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Source Throttle",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2M_RPQ_CYCLES_REG_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2M_RPQ_CYCLES_REG_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2M_RPQ_CYCLES_REG_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_CYCLES_SPEC_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M2M_RxC_AD_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_M2M_RxC_AD_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M2M_RxC_AD_INSERTS",
"PerPkg": "1",
@@ -2007,6 +2448,7 @@
},
{
"BriefDescription": "AD Ingress (from CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M2M_RxC_AD_OCCUPANCY",
"PerPkg": "1",
@@ -2014,20 +2456,25 @@
},
{
"BriefDescription": "BL Ingress (from CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M2M_RxC_BL_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Ingress (from CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M2M_RxC_BL_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_M2M_RxC_BL_INSERTS",
"PerPkg": "1",
@@ -2035,6 +2482,7 @@
},
{
"BriefDescription": "BL Ingress (from CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_M2M_RxC_BL_OCCUPANCY",
"PerPkg": "1",
@@ -2042,8 +2490,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x1",
@@ -2051,8 +2501,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x10",
@@ -2060,8 +2512,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x4",
@@ -2069,8 +2523,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x40",
@@ -2078,8 +2534,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_RxR_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x1",
@@ -2087,8 +2545,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x10",
@@ -2096,8 +2556,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_RxR_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x2",
@@ -2105,8 +2567,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_RxR_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x4",
@@ -2114,8 +2578,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x40",
@@ -2123,8 +2589,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_RxR_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x8",
@@ -2132,8 +2600,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x1",
@@ -2141,8 +2611,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x10",
@@ -2150,8 +2622,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x2",
@@ -2159,8 +2633,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x4",
@@ -2168,8 +2644,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x40",
@@ -2177,8 +2655,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; IFV - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x80",
@@ -2186,8 +2666,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x8",
@@ -2195,8 +2677,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_RxR_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -2204,8 +2688,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -2213,8 +2699,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_RxR_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -2222,8 +2710,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_RxR_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -2231,8 +2721,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -2240,8 +2732,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_RxR_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -2249,8 +2743,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -2258,8 +2754,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -2267,8 +2765,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -2276,8 +2776,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -2285,8 +2787,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -2294,8 +2798,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -2303,8 +2809,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -2312,8 +2820,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -2321,8 +2831,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -2330,8 +2842,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -2339,8 +2853,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -2348,8 +2864,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -2357,8 +2875,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -2366,8 +2886,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -2375,8 +2897,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -2384,8 +2908,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -2393,8 +2919,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -2402,8 +2930,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -2411,8 +2941,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -2420,8 +2952,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -2429,8 +2963,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -2438,8 +2974,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -2447,8 +2985,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -2456,8 +2996,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -2465,8 +3007,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -2474,8 +3018,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -2483,8 +3029,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -2492,8 +3040,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -2501,8 +3051,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -2510,8 +3062,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -2519,8 +3073,10 @@
},
{
"BriefDescription": "Clean line read hits(Regular and RFO) to Near Memory(DRAM cache) in Memory Mode and regular reads to DRAM in 1LM",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_CLEAN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tag Hit; Read Hit from NearMem, Clean Line",
"UMask": "0x1",
@@ -2528,6 +3084,7 @@
},
{
"BriefDescription": "Dirty line read hits(Regular and RFO) to Near Memory(DRAM cache) in Memory Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_DIRTY",
"PerPkg": "1",
@@ -2537,6 +3094,7 @@
},
{
"BriefDescription": "Clean line underfill read hits to Near Memory(DRAM cache) in Memory Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_UFILL_HIT_CLEAN",
"PerPkg": "1",
@@ -2546,6 +3104,7 @@
},
{
"BriefDescription": "Dirty line underfill read hits to Near Memory(DRAM cache) in Memory Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_UFILL_HIT_DIRTY",
"PerPkg": "1",
@@ -2555,151 +3114,190 @@
},
{
"BriefDescription": "Number AD Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2M_TGR_AD_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number BL Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2M_TGR_BL_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Full; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2M_TRACKER_CYCLES_FULL.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Full; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2M_TRACKER_CYCLES_FULL.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Full; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2M_TRACKER_CYCLES_FULL.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2M_TRACKER_CYCLES_NE.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2M_TRACKER_CYCLES_NE.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2M_TRACKER_CYCLES_NE.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Data Pending Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2M_TRACKER_PENDING_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Credit Acquired",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_M2M_TxC_AD_CREDITS_ACQUIRED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Credits Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE",
"EventName": "UNC_M2M_TxC_AD_CREDIT_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_M2M_TxC_AD_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_M2M_TxC_AD_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_M2M_TxC_AD_INSERTS",
"PerPkg": "1",
@@ -2707,20 +3305,25 @@
},
{
"BriefDescription": "Cycles with No AD Egress (to CMS) Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_M2M_TxC_AD_NO_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No AD Egress (to CMS) Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2M_TxC_AD_NO_CREDIT_STALLED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_M2M_TxC_AD_OCCUPANCY",
"PerPkg": "1",
@@ -2728,430 +3331,537 @@
},
{
"BriefDescription": "Outbound Ring Transactions on AK; CRD Transactions to Cbo",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_M2M_TxC_AK.CRD_CBO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound Ring Transactions on AK; NDR Transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_M2M_TxC_AK.NDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Credit Acquired; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_M2M_TxC_AK_CREDITS_ACQUIRED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Credit Acquired; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_M2M_TxC_AK_CREDITS_ACQUIRED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Credits Occupancy; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_M2M_TxC_AK_CREDIT_OCCUPANCY.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Credits Occupancy; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_M2M_TxC_AK_CREDIT_OCCUPANCY.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Read Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.RDCRD0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Read Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.RDCRD1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x88",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Write Compare Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCMP0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Write Compare Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCMP1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa0",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Write Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCRD0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full; Write Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCRD1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x90",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty; Read Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.RDCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty; Write Compare Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.WRCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty; Write Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.WRCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations; Prefetch Read Cam Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.PREF_RD_CAM_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations; Read Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.RDCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations; Write Compare Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.WRCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations; Write Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.WRCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No AK Egress (to CMS) Credits; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_CYCLES.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No AK Egress (to CMS) Credits; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_CYCLES.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No AK Egress (to CMS) Credits; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_STALLED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No AK Egress (to CMS) Credits; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_STALLED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy; Read Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.RDCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy; Write Compare Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.WRCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy; Write Credit Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.WRCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Sideband",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_M2M_TxC_AK_SIDEBAND.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Sideband",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_M2M_TxC_AK_SIDEBAND.WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_TxC_BL.DRS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Core",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_TxC_BL.DRS_CORE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to QPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_TxC_BL.DRS_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Credit Acquired; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2M_TxC_BL_CREDITS_ACQUIRED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Credit Acquired; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2M_TxC_BL_CREDITS_ACQUIRED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Credits Occupancy; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2M_TxC_BL_CREDIT_OCCUPANCY.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Credits Occupancy; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2M_TxC_BL_CREDIT_OCCUPANCY.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Full; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Full; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_TxC_BL_CYCLES_NE.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Not Empty; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_TxC_BL_CYCLES_NE.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Not Empty; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_TxC_BL_CYCLES_NE.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2M_TxC_BL_INSERTS.ALL",
"PerPkg": "1",
@@ -3160,54 +3870,67 @@
},
{
"BriefDescription": "BL Egress (to CMS) Allocations; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2M_TxC_BL_INSERTS.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Allocations; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2M_TxC_BL_INSERTS.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No BL Egress (to CMS) Credits; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_CYCLES.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No BL Egress (to CMS) Credits; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_CYCLES.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No BL Egress (to CMS) Credits; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_STALLED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No BL Egress (to CMS) Credits; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_STALLED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Occupancy; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2M_TxC_BL_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -3216,24 +3939,30 @@
},
{
"BriefDescription": "BL Egress (to CMS) Occupancy; Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2M_TxC_BL_OCCUPANCY.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Occupancy; Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2M_TxC_BL_OCCUPANCY.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "CMS Horizontal ADS Used; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -3241,8 +3970,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -3250,8 +3981,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -3259,8 +3992,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -3268,8 +4003,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -3277,8 +4014,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -3286,8 +4025,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -3295,8 +4036,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -3304,8 +4047,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -3313,8 +4058,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -3322,8 +4069,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9F",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -3331,8 +4080,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -3340,8 +4091,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -3349,8 +4102,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -3358,8 +4113,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -3367,8 +4124,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -3376,8 +4135,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -3385,8 +4146,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -3394,8 +4157,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -3403,8 +4168,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -3412,8 +4179,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -3421,8 +4190,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -3430,8 +4201,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -3439,8 +4212,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -3448,8 +4223,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -3457,8 +4234,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -3466,8 +4245,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -3475,8 +4256,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -3484,8 +4267,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -3493,8 +4278,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x1",
@@ -3502,8 +4289,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x20",
@@ -3511,8 +4300,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x2",
@@ -3520,8 +4311,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_HORZ_NACK.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x4",
@@ -3529,8 +4322,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x40",
@@ -3538,8 +4333,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_HORZ_NACK.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x8",
@@ -3547,8 +4344,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -3556,8 +4355,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AD - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -3565,8 +4366,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -3574,8 +4377,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -3583,8 +4388,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; BL - Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -3592,8 +4399,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -3601,8 +4410,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x1",
@@ -3610,8 +4421,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; AK - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x2",
@@ -3619,8 +4432,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x4",
@@ -3628,8 +4443,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; IV - Bounce",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x8",
@@ -3637,8 +4454,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -3646,8 +4465,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -3655,8 +4476,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -3664,8 +4487,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -3673,8 +4498,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -3682,8 +4509,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -3691,8 +4520,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -3700,8 +4531,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -3709,8 +4542,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -3718,8 +4553,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -3727,8 +4564,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -3736,8 +4575,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -3745,8 +4586,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -3754,8 +4597,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -3763,8 +4608,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -3772,8 +4619,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -3781,8 +4630,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -3790,8 +4641,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -3799,8 +4652,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -3808,8 +4663,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -3817,8 +4674,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -3826,8 +4685,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -3835,8 +4696,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -3844,8 +4707,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -3853,8 +4718,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -3862,8 +4729,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -3871,8 +4740,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -3880,8 +4751,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_INSERTS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -3889,8 +4762,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_INSERTS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -3898,8 +4773,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_INSERTS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -3907,8 +4784,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_INSERTS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -3916,8 +4795,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_INSERTS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -3925,8 +4806,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_INSERTS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -3934,8 +4817,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -3943,8 +4828,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -3952,8 +4839,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x10",
@@ -3961,8 +4850,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -3970,8 +4861,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x20",
@@ -3979,8 +4872,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x4",
@@ -3988,8 +4883,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x40",
@@ -3997,8 +4894,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x8",
@@ -4006,8 +4905,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -4015,8 +4916,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -4024,8 +4927,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -4033,8 +4938,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -4042,8 +4949,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -4051,8 +4960,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -4060,8 +4971,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -4069,8 +4982,10 @@
},
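The UNC_M2M_TxR_VERT_INSERTS and UNC_M2M_TxR_VERT_OCCUPANCY groups above pair naturally: occupancy accumulates the number of queued entries each cycle, while inserts counts allocations, so their ratio gives the average time an entry spends in the egress (Little's Law). A minimal Python sketch with placeholder counts rather than measured values:

# Placeholder counts standing in for UNC_M2M_TxR_VERT_OCCUPANCY.AD_AG0,
# UNC_M2M_TxR_VERT_INSERTS.AD_AG0 and the uncore clock over one interval.
occupancy_sum = 1_200_000   # sum over cycles of entries in the egress
inserts       = 150_000     # allocations into the egress
clockticks    = 2_000_000   # uncore cycles in the same window

avg_depth     = occupancy_sum / clockticks   # average entries queued per cycle
avg_residency = occupancy_sum / inserts      # average cycles per entry (Little's Law)
print(f"avg egress depth {avg_depth:.2f} entries, "
      f"avg residency {avg_residency:.1f} cycles")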
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -4078,8 +4993,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x10",
@@ -4087,8 +5004,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -4096,8 +5015,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x20",
@@ -4105,8 +5026,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -4114,8 +5037,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x40",
@@ -4123,8 +5048,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x8",
@@ -4132,8 +5059,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -4141,8 +5070,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -4150,8 +5081,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -4159,8 +5092,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -4168,8 +5103,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -4177,8 +5114,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -4186,8 +5125,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -4195,8 +5136,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -4204,8 +5147,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -4213,8 +5158,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -4222,8 +5169,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -4231,8 +5180,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -4240,8 +5191,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M2M_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -4249,8 +5202,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M2M_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -4258,179 +5213,223 @@
},
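The IV description notes there is only one IV ring, so watching it fully means selecting both directions; sub-event umask bits for the same event code can typically be OR'd into a single counter, so the combined umask is just the OR of the values listed above. A sketch of that arithmetic:

# UMask values copied from UNC_M2M_VERT_RING_IV_IN_USE above: UP = 0x1, DN = 0x4.
iv_umasks = {"UP": 0x1, "DN": 0x4}

combined = 0
for name, mask in iv_umasks.items():
    combined |= mask          # umask bits OR together on a single counter
print(f"umask selecting both IV directions: {combined:#x}")   # 0x5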
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
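Entries carrying "Deprecated": "1" name their replacement inside the BriefDescription text ("Refer to new event ..."), which makes the old-to-new mapping easy to extract mechanically. A minimal sketch over one entry hand-copied from above; in practice the input would be the whole parsed event list:

import re

events = [
    {"BriefDescription": "This event is deprecated. Refer to new event "
                         "UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN0",
     "Deprecated": "1",
     "EventName": "UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN0"},
]

renames = {}
for ev in events:
    if ev.get("Deprecated") == "1":
        m = re.search(r"Refer to new event (\S+)", ev["BriefDescription"])
        if m:
            renames[ev["EventName"]] = m.group(1)

print(renames)
# {'UNC_M2M_WPQ_CYCLES_NO_REG_CREDITS.CHN0': 'UNC_M2M_WPQ_CYCLES_REG_CREDITS.CHN0'}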
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_M2M_WPQ_CYCLES_SPEC_CREDITS.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_M2M_WPQ_CYCLES_SPEC_CREDITS.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_M2M_WPQ_CYCLES_SPEC_CREDITS.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Full; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_FULL.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Full; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_FULL.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Full; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_FULL.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_NE.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_NE.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WRITE_TRACKER_CYCLES_NE.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M2M_WRITE_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M2M_WRITE_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M2M_WRITE_TRACKER_INSERTS.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M2M_WRITE_TRACKER_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M2M_WRITE_TRACKER_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M2M_WRITE_TRACKER_OCCUPANCY.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
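The write-tracker events above split into cycles-full, cycles-not-empty, inserts and occupancy per channel; together they show whether a channel is back-pressured and how long writes sit in the tracker. A sketch with placeholder counts for one channel:

# Placeholder counts for channel 0 over one measurement window.
clockticks = 2_000_000
ch0 = {"CYCLES_FULL": 40_000, "CYCLES_NE": 1_500_000,
       "INSERTS": 300_000, "OCCUPANCY": 3_000_000}

print(f"tracker full     : {ch0['CYCLES_FULL'] / clockticks:.1%} of cycles")
print(f"tracker not empty: {ch0['CYCLES_NE'] / clockticks:.1%} of cycles")
print(f"avg writes held  : {ch0['OCCUPANCY'] / clockticks:.2f}")
print(f"avg cycles/write : {ch0['OCCUPANCY'] / ch0['INSERTS']:.1f}")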
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4438,8 +5437,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4447,8 +5448,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4456,8 +5459,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -4465,8 +5470,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -4474,8 +5481,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -4483,8 +5492,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4492,8 +5503,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4501,8 +5514,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4510,8 +5525,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -4519,8 +5536,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -4528,8 +5547,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -4537,8 +5558,10 @@
},
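The per-transgress umasks in the credit events above follow a one-hot encoding, TGRn selecting bit n (0x1 through 0x20), so the six transgresses can be filtered individually or combined. A quick sanity check of that pattern:

# UMask values copied from the UNC_M3UPI_AG0_AD_CRD_* entries above.
tgr_umasks = {0: 0x1, 1: 0x2, 2: 0x4, 3: 0x8, 4: 0x10, 5: 0x20}

# Each transgress n is encoded as bit n, i.e. 1 << n.
assert all(mask == 1 << n for n, mask in tgr_umasks.items())
print("per-transgress umasks are one-hot:",
      {n: hex(m) for n, m in tgr_umasks.items()})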
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4546,8 +5569,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4555,8 +5580,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4564,8 +5591,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -4573,8 +5602,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -4582,8 +5613,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -4591,8 +5624,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4600,8 +5635,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4609,8 +5646,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4618,8 +5657,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -4627,8 +5668,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -4636,8 +5679,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -4645,8 +5690,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4654,8 +5701,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4663,8 +5712,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4672,8 +5723,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -4681,8 +5734,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -4690,8 +5745,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -4699,8 +5756,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4708,8 +5767,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4717,8 +5778,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4726,8 +5789,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -4735,8 +5800,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -4744,8 +5811,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -4753,8 +5822,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 0",
+ "Counter": "0",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4762,8 +5833,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 1",
+ "Counter": "0",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4771,8 +5844,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 2",
+ "Counter": "0",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4780,8 +5855,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 3",
+ "Counter": "0",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -4789,8 +5866,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 4",
+ "Counter": "0",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -4798,8 +5877,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy; For Transgress 5",
+ "Counter": "0",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -4807,8 +5888,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4816,8 +5899,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4825,8 +5910,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4834,8 +5921,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -4843,8 +5932,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -4852,8 +5943,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CREDITS_ACQUIRED.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -4861,8 +5954,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty; Requests",
+ "Counter": "0,1,2",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x4",
@@ -4870,8 +5965,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty; Snoops",
+ "Counter": "0,1,2",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x8",
@@ -4879,8 +5976,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty; VNA Messages",
+ "Counter": "0,1,2",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x1",
@@ -4888,8 +5987,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty; Writebacks",
+ "Counter": "0,1,2",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x2",
@@ -4897,39 +5998,49 @@
},
{
"BriefDescription": "Number of uclks in domain",
+ "Counter": "0,1,2",
"EventCode": "0x1",
"EventName": "UNC_M3UPI_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of uclks in the M3 uclk domain. This could be slightly different than the count in the Ubox because of enable/freeze delays. However, because the M3 is close to the Ubox, they generally should not diverge by more than a handful of cycles.",
"Unit": "M3UPI"
},
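UNC_M3UPI_CLOCKTICKS provides the uclk reference for this box (per its description, usually within a handful of cycles of the Ubox clock), so cycle-type counts such as the CHA AD credits-empty events above read most naturally as a fraction of it. A sketch with placeholder counts:

# Placeholder counts: credits-empty cycles per message class vs. M3 uclk.
m3upi_clockticks = 2_000_000
ad_credits_empty = {"VNA": 10_000, "WB": 2_500, "REQ": 40_000, "SNP": 1_200}

for msg_class, cycles in ad_credits_empty.items():
    print(f"CHA AD credits empty ({msg_class}): "
          f"{cycles / m3upi_clockticks:.2%} of uclk cycles")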
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2",
"EventCode": "0xC0",
"EventName": "UNC_M3UPI_CMS_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "D2C Sent",
+ "Counter": "0,1,2",
"EventCode": "0x2B",
"EventName": "UNC_M3UPI_D2C_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases BL sends direct to core",
"Unit": "M3UPI"
},
{
"BriefDescription": "D2U Sent",
+ "Counter": "0,1,2",
"EventCode": "0x2A",
"EventName": "UNC_M3UPI_D2U_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cases where SMI3 sends D2U command",
"Unit": "M3UPI"
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements; Down",
+ "Counter": "0,1,2",
"EventCode": "0xAE",
"EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -4937,8 +6048,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements; Up",
+ "Counter": "0,1,2",
"EventCode": "0xAE",
"EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -4946,8 +6059,10 @@
},
{
"BriefDescription": "FaST wire asserted; Horizontal",
+ "Counter": "0,1,2",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_FAST_ASSERTED.HORZ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.",
"UMask": "0x2",
@@ -4955,8 +6070,10 @@
},
{
"BriefDescription": "FaST wire asserted; Vertical",
+ "Counter": "0,1,2",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_FAST_ASSERTED.VERT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles either the local or incoming distress signals are asserted. Incoming distress includes up, dn and across.",
"UMask": "0x1",
@@ -4964,8 +6081,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Left and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -4973,8 +6092,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Left and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -4982,8 +6103,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Right and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -4991,8 +6114,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use; Right and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5000,8 +6125,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Left and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA9",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5009,8 +6136,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Left and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA9",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5018,8 +6147,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Right and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA9",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5027,8 +6158,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use; Right and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA9",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5036,8 +6169,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Left and Even",
+ "Counter": "0,1,2",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5045,8 +6180,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Left and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5054,8 +6191,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Right and Even",
+ "Counter": "0,1,2",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5063,8 +6202,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use; Right and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5072,8 +6213,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use; Left",
+ "Counter": "0,1,2",
"EventCode": "0xAD",
"EventName": "UNC_M3UPI_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -5081,8 +6224,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use; Right",
+ "Counter": "0,1,2",
"EventCode": "0xAD",
"EventName": "UNC_M3UPI_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -5090,8 +6235,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty; IIO0 and IIO1 share the same ring destination. (1 VN0 credit only)",
+ "Counter": "0,1,2",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO0_IIO1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No vn0 and vna credits available to send to M2",
"UMask": "0x1",
@@ -5099,8 +6246,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty; IIO2",
+ "Counter": "0,1,2",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No vn0 and vna credits available to send to M2",
"UMask": "0x2",
@@ -5108,8 +6257,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty; IIO3",
+ "Counter": "0,1,2",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No vn0 and vna credits available to send to M2",
"UMask": "0x4",
@@ -5117,8 +6268,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty; IIO4",
+ "Counter": "0,1,2",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No vn0 and vna credits available to send to M2",
"UMask": "0x8",
@@ -5126,8 +6279,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty; IIO5",
+ "Counter": "0,1,2",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No vn0 and vna credits available to send to M2",
"UMask": "0x10",
@@ -5135,8 +6290,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty; All IIO targets for NCS are in single mask. ORs them together",
+ "Counter": "0,1,2",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No vn0 and vna credits available to send to M2",
"UMask": "0x20",
@@ -5144,8 +6301,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty; Selected M2p BL NCS credits",
+ "Counter": "0,1,2",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS_SEL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No vn0 and vna credits available to send to M2",
"UMask": "0x40",
@@ -5153,8 +6312,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received; AD - Slot 0",
+ "Counter": "0,1,2",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x1",
@@ -5162,8 +6323,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received; AD - Slot 1",
+ "Counter": "0,1,2",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x2",
@@ -5171,8 +6334,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received; AD - Slot 2",
+ "Counter": "0,1,2",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x4",
@@ -5180,8 +6345,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received; AK - Slot 0",
+ "Counter": "0,1,2",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x10",
@@ -5189,8 +6356,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received; AK - Slot 2",
+ "Counter": "0,1,2",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x20",
@@ -5198,8 +6367,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received; BL - Slot 0",
+ "Counter": "0,1,2",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.BL_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x8",
@@ -5207,8 +6378,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; AD",
+ "Counter": "0,1,2",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -5216,8 +6389,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; AK",
+ "Counter": "0,1,2",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -5225,8 +6400,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; BL",
+ "Counter": "0,1,2",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -5234,8 +6411,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring.; IV",
+ "Counter": "0,1,2",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -5243,8 +6422,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; AD",
+ "Counter": "0,1,2",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -5252,8 +6433,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Acknowledgements to core",
+ "Counter": "0,1,2",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -5261,8 +6444,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Data Responses to core",
+ "Counter": "0,1,2",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -5270,8 +6455,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -5279,87 +6466,109 @@
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; AD",
+ "Counter": "0,1,2",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; AK",
+ "Counter": "0,1,2",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; Acknowledgements to Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; BL",
+ "Counter": "0,1,2",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring; IV",
+ "Counter": "0,1,2",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; AD",
+ "Counter": "0,1,2",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Acknowledgements to core",
+ "Counter": "0,1,2",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Data Responses to core",
+ "Counter": "0,1,2",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring; Snoops of processor's cache.",
+ "Counter": "0,1,2",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "Source Throttle",
+ "Counter": "0,1,2",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Lost Arb for VN0; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message requested but lost arbitration; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5367,8 +6576,10 @@
},
{
"BriefDescription": "Lost Arb for VN0; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message requested but lost arbitration; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5376,8 +6587,10 @@
},
{
"BriefDescription": "Lost Arb for VN0; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message requested but lost arbitration; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5385,8 +6598,10 @@
},
{
"BriefDescription": "Lost Arb for VN0; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message requested but lost arbitration; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5394,8 +6609,10 @@
},
{
"BriefDescription": "Lost Arb for VN0; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message requested but lost arbitration; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5403,8 +6620,10 @@
},
{
"BriefDescription": "Lost Arb for VN0; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message requested but lost arbitration; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5412,8 +6631,10 @@
},
{
"BriefDescription": "Lost Arb for VN0; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message requested but lost arbitration; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5421,8 +6642,10 @@
},
{
"BriefDescription": "Lost Arb for VN1; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message requested but lost arbitration; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5430,8 +6653,10 @@
},
{
"BriefDescription": "Lost Arb for VN1; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message requested but lost arbitration; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5439,8 +6664,10 @@
},
{
"BriefDescription": "Lost Arb for VN1; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message requested but lost arbitration; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5448,8 +6675,10 @@
},
{
"BriefDescription": "Lost Arb for VN1; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message requested but lost arbitration; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5457,8 +6686,10 @@
},
{
"BriefDescription": "Lost Arb for VN1; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message requested but lost arbitration; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5466,8 +6697,10 @@
},
{
"BriefDescription": "Lost Arb for VN1; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message requested but lost arbitration; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5475,8 +6708,10 @@
},
{
"BriefDescription": "Lost Arb for VN1; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message requested but lost arbitration; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5484,8 +6719,10 @@
},
{
"BriefDescription": "Arb Miscellaneous; AD, BL Parallel Win",
+ "Counter": "0,1,2",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD and BL messages won arbitration concurrently / in parallel",
"UMask": "0x40",
@@ -5493,8 +6730,10 @@
},
{
"BriefDescription": "Arb Miscellaneous; No Progress on Pending AD VN0",
+ "Counter": "0,1,2",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arbitration stage made no progress on pending ad vn0 messages because slotting stage cannot accept new message",
"UMask": "0x4",
@@ -5502,8 +6741,10 @@
},
{
"BriefDescription": "Arb Miscellaneous; No Progress on Pending AD VN1",
+ "Counter": "0,1,2",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arbitration stage made no progress on pending ad vn1 messages because slotting stage cannot accept new message",
"UMask": "0x8",
@@ -5511,8 +6752,10 @@
},
{
"BriefDescription": "Arb Miscellaneous; No Progress on Pending BL VN0",
+ "Counter": "0,1,2",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arbitration stage made no progress on pending bl vn0 messages because slotting stage cannot accept new message",
"UMask": "0x10",
@@ -5520,8 +6763,10 @@
},
{
"BriefDescription": "Arb Miscellaneous; No Progress on Pending BL VN1",
+ "Counter": "0,1,2",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arbitration stage made no progress on pending bl vn1 messages because slotting stage cannot accept new message",
"UMask": "0x20",
@@ -5529,8 +6774,10 @@
},
{
"BriefDescription": "Arb Miscellaneous; Parallel Bias to VN0",
+ "Counter": "0,1,2",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.PAR_BIAS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0/VN1 arbiter gave second, consecutive win to vn0, delaying vn1 win, because vn0 offered parallel ad/bl",
"UMask": "0x1",
@@ -5538,8 +6785,10 @@
},
{
"BriefDescription": "Arb Miscellaneous; Parallel Bias to VN1",
+ "Counter": "0,1,2",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.PAR_BIAS_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0/VN1 arbiter gave second, consecutive win to vn1, delaying vn0 win, because vn1 offered parallel ad/bl",
"UMask": "0x2",
@@ -5547,8 +6796,10 @@
},
{
"BriefDescription": "Can't Arb for VN0; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message was not able to request arbitration while some other message won arbitration; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5556,8 +6807,10 @@
},
{
"BriefDescription": "Can't Arb for VN0; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message was not able to request arbitration while some other message won arbitration; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5565,8 +6818,10 @@
},
{
"BriefDescription": "Can't Arb for VN0; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message was not able to request arbitration while some other message won arbitration; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5574,8 +6829,10 @@
},
{
"BriefDescription": "Can't Arb for VN0; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message was not able to request arbitration while some other message won arbitration; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5583,8 +6840,10 @@
},
{
"BriefDescription": "Can't Arb for VN0; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message was not able to request arbitration while some other message won arbitration; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5592,8 +6851,10 @@
},
{
"BriefDescription": "Can't Arb for VN0; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message was not able to request arbitration while some other message won arbitration; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5601,8 +6862,10 @@
},
{
"BriefDescription": "Can't Arb for VN0; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message was not able to request arbitration while some other message won arbitration; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5610,8 +6873,10 @@
},
{
"BriefDescription": "Can't Arb for VN1; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message was not able to request arbitration while some other message won arbitration; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5619,8 +6884,10 @@
},
{
"BriefDescription": "Can't Arb for VN1; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message was not able to request arbitration while some other message won arbitration; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5628,8 +6895,10 @@
},
{
"BriefDescription": "Can't Arb for VN1; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message was not able to request arbitration while some other message won arbitration; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5637,8 +6906,10 @@
},
{
"BriefDescription": "Can't Arb for VN1; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message was not able to request arbitration while some other message won arbitration; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5646,8 +6917,10 @@
},
{
"BriefDescription": "Can't Arb for VN1; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message was not able to request arbitration while some other message won arbitration; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5655,8 +6928,10 @@
},
{
"BriefDescription": "Can't Arb for VN1; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message was not able to request arbitration while some other message won arbitration; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5664,8 +6939,10 @@
},
{
"BriefDescription": "Can't Arb for VN1; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOAD_REQ_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message was not able to request arbitration while some other message won arbitration; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5673,8 +6950,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message is blocked from requesting arbitration due to lack of remote UPI credits; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5682,8 +6961,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message is blocked from requesting arbitration due to lack of remote UPI credits; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5691,8 +6972,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message is blocked from requesting arbitration due to lack of remote UPI credits; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5700,8 +6983,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message is blocked from requesting arbitration due to lack of remote UPI credits; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5709,8 +6994,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message is blocked from requesting arbitration due to lack of remote UPI credits; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5718,8 +7005,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message is blocked from requesting arbitration due to lack of remote UPI credits; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5727,8 +7016,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message is blocked from requesting arbitration due to lack of remote UPI credits; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5736,8 +7027,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message is blocked from requesting arbitration due to lack of remote UPI credits; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5745,8 +7038,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message is blocked from requesting arbitration due to lack of remote UPI credits; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5754,8 +7049,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message is blocked from requesting arbitration due to lack of remote UPI credits; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5763,8 +7060,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message is blocked from requesting arbitration due to lack of remote UPI credits; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5772,8 +7071,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message is blocked from requesting arbitration due to lack of remote UPI credits; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5781,8 +7082,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message is blocked from requesting arbitration due to lack of remote UPI credits; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5790,8 +7093,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRED_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message is blocked from requesting arbitration due to lack of remote UPI credits; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5799,8 +7104,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses; AD to Slot 0 on BL Arb",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_BL_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times message is bypassed around the Ingress Queue; AD is taking bypass to slot 0 of independent flit while bl message is in arbitration",
"UMask": "0x2",
@@ -5808,8 +7115,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses; AD to Slot 0 on Idle",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times message is bypassed around the Ingress Queue; AD is taking bypass to slot 0 of independent flit while pipeline is idle",
"UMask": "0x1",
@@ -5817,8 +7126,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses; AD + BL to Slot 1",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S1_BL_SLOT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times message is bypassed around the Ingress Queue; AD is taking bypass to flit slot 1 while merging with bl message in same flit",
"UMask": "0x4",
@@ -5826,8 +7137,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses; AD + BL to Slot 2",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S2_BL_SLOT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times message is bypassed around the Ingress Queue; AD is taking bypass to flit slot 2 while merging with bl message in same flit",
"UMask": "0x8",
@@ -5835,8 +7148,10 @@
},
{
"BriefDescription": "VN0 message lost contest for flit; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5844,8 +7159,10 @@
},
{
"BriefDescription": "VN0 message lost contest for flit; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5853,8 +7170,10 @@
},
{
"BriefDescription": "VN0 message lost contest for flit; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5862,8 +7181,10 @@
},
{
"BriefDescription": "VN0 message lost contest for flit; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5871,8 +7192,10 @@
},
{
"BriefDescription": "VN0 message lost contest for flit; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5880,8 +7203,10 @@
},
{
"BriefDescription": "VN0 message lost contest for flit; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5889,8 +7214,10 @@
},
{
"BriefDescription": "VN0 message lost contest for flit; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN0 packets lost the contest for Flit Slot 0.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5898,8 +7225,10 @@
},
{
"BriefDescription": "VN1 message lost contest for flit; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -5907,8 +7236,10 @@
},
{
"BriefDescription": "VN1 message lost contest for flit; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -5916,8 +7247,10 @@
},
{
"BriefDescription": "VN1 message lost contest for flit; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -5925,8 +7258,10 @@
},
{
"BriefDescription": "VN1 message lost contest for flit; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -5934,8 +7269,10 @@
},
{
"BriefDescription": "VN1 message lost contest for flit; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -5943,8 +7280,10 @@
},
{
"BriefDescription": "VN1 message lost contest for flit; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -5952,8 +7291,10 @@
},
{
"BriefDescription": "VN1 message lost contest for flit; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_COLLISION_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress VN1 packets lost the contest for Flit Slot 0.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -5961,8 +7302,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events; Any In BGF FIFO",
+ "Counter": "0,1,2",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Indication that at least one packet (flit) is in the bgf (fifo only)",
"UMask": "0x1",
@@ -5970,8 +7313,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events; Any in BGF Path",
+ "Counter": "0,1,2",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_PATH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Indication that at least one packet (flit) is in the bgf path (i.e. pipe to fifo)",
"UMask": "0x2",
@@ -5979,8 +7324,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events; No D2K For Arb",
+ "Counter": "0,1,2",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.NO_D2K_FOR_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 or VN1 BL RSP message was blocked from arbitration request due to lack of D2K CMP credits",
"UMask": "0x4",
@@ -5988,8 +7335,10 @@
},
{
"BriefDescription": "Credit Occupancy; D2K Credits",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.D2K_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "D2K completion fifo credit occupancy (credits in use), accumulated across all cycles",
"UMask": "0x10",
@@ -5997,8 +7346,10 @@
},
{
"BriefDescription": "Credit Occupancy; Packets in BGF FIFO",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in fifo",
"UMask": "0x2",
@@ -6006,8 +7357,10 @@
},
{
"BriefDescription": "Credit Occupancy; Packets in BGF Path",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_PATH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in path (i.e. pipe to fifo or fifo)",
"UMask": "0x4",
@@ -6015,8 +7368,10 @@
},
{
"BriefDescription": "Credit Occupancy",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "count of bl messages in pump-1-pending state, in completion fifo only",
"UMask": "0x40",
@@ -6024,8 +7379,10 @@
},
{
"BriefDescription": "Credit Occupancy",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_TOTAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "count of bl messages in pump-1-pending state, in marker table and in fifo",
"UMask": "0x20",
@@ -6033,8 +7390,10 @@
},
{
"BriefDescription": "Credit Occupancy; Transmit Credits",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.TxQ_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Link layer transmit queue credit occupancy (credits in use), accumulated across all cycles",
"UMask": "0x8",
@@ -6042,8 +7401,10 @@
},
{
"BriefDescription": "Credit Occupancy; VNA In Use",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.VNA_IN_USE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote UPI VNA credit occupancy (number of credits in use), accumulated across all cycles",
"UMask": "0x1",
@@ -6051,8 +7412,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6060,8 +7423,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6069,8 +7434,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6078,8 +7445,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6087,8 +7456,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6096,8 +7467,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6105,8 +7478,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -6114,8 +7489,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6123,8 +7500,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6132,8 +7511,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6141,8 +7522,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6150,8 +7533,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6159,8 +7544,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6168,8 +7555,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -6177,8 +7566,10 @@
},
{
"BriefDescription": "Data Flit Not Sent; All",
+ "Counter": "0,1,2",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_DATA_NOT_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data flit is ready for transmission but could not be sent",
"UMask": "0x1",
@@ -6186,8 +7577,10 @@
},
{
"BriefDescription": "Data Flit Not Sent; No BGF Credits",
+ "Counter": "0,1,2",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_DATA_NOT_SENT.NO_BGF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data flit is ready for transmission but could not be sent",
"UMask": "0x2",
@@ -6195,8 +7588,10 @@
},
{
"BriefDescription": "Data Flit Not Sent; No TxQ Credits",
+ "Counter": "0,1,2",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_DATA_NOT_SENT.NO_TXQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data flit is ready for transmission but could not be sent",
"UMask": "0x4",
@@ -6204,8 +7599,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence; Wait on Pump 0",
+ "Counter": "0,1,2",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P0_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "generating bl data flit sequence; waiting for data pump 0",
"UMask": "0x1",
@@ -6213,8 +7610,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_AT_LIMIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "pump-1-pending logic is at capacity (pending table plus completion fifo at limit)",
"UMask": "0x10",
@@ -6222,8 +7621,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_BUSY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "pump-1-pending logic is tracking at least one message",
"UMask": "0x8",
@@ -6231,8 +7632,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_FIFO_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "pump-1-pending completion fifo is full",
"UMask": "0x40",
@@ -6240,8 +7643,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_HOLD_P0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "pump-1-pending logic is at or near capacity, such that pump-0-only bl messages are getting stalled in slotting stage",
"UMask": "0x20",
@@ -6249,8 +7654,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_TO_LIMBO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "a bl message finished but is in limbo and moved to pump-1-pending logic",
"UMask": "0x4",
@@ -6258,8 +7665,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence; Wait on Pump 1",
+ "Counter": "0,1,2",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "generating bl data flit sequence; waiting for data pump 1",
"UMask": "0x2",
@@ -6267,15 +7676,19 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC",
+ "Counter": "0,1,2",
"EventCode": "0x5A",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sent Header Flit; One Message",
+ "Counter": "0,1,2",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SENT.1_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "One message in flit; VNA or non-VNA flit",
"UMask": "0x1",
@@ -6283,8 +7696,10 @@
},
{
"BriefDescription": "Sent Header Flit; One Message in non-VNA",
+ "Counter": "0,1,2",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SENT.1_MSG_VNX",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "One message in flit; non-VNA flit",
"UMask": "0x8",
@@ -6292,8 +7707,10 @@
},
{
"BriefDescription": "Sent Header Flit; Two Messages",
+ "Counter": "0,1,2",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SENT.2_MSGS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Two messages in flit; VNA flit",
"UMask": "0x2",
@@ -6301,8 +7718,10 @@
},
{
"BriefDescription": "Sent Header Flit; Three Messages",
+ "Counter": "0,1,2",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SENT.3_MSGS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Three messages in flit; VNA flit",
"UMask": "0x4",
@@ -6310,40 +7729,50 @@
},
{
"BriefDescription": "Sent Header Flit",
+ "Counter": "0,1,2",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SENT.SLOTS_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sent Header Flit",
+ "Counter": "0,1,2",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SENT.SLOTS_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sent Header Flit",
+ "Counter": "0,1,2",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SENT.SLOTS_3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "Slotting BL Message Into Header Flit; All",
+ "Counter": "0,1,2",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Slotting BL Message Into Header Flit; Needs Data Flit",
+ "Counter": "0,1,2",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.NEED_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL message requires data flit sequence",
"UMask": "0x2",
@@ -6351,8 +7780,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit; Wait on Pump 0",
+ "Counter": "0,1,2",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P0_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Waiting for header pump 0",
"UMask": "0x4",
@@ -6360,8 +7791,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit; Don't Need Pump 1",
+ "Counter": "0,1,2",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header pump 1 is not required for flit",
"UMask": "0x10",
@@ -6369,8 +7802,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit; Don't Need Pump 1 - Bubble",
+ "Counter": "0,1,2",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_BUT_BUBBLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header pump 1 is not required for flit but flit transmission delayed",
"UMask": "0x20",
@@ -6378,8 +7813,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit; Don't Need Pump 1 - Not Avail",
+ "Counter": "0,1,2",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_NOT_AVAIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header pump 1 is not required for flit and not available",
"UMask": "0x40",
@@ -6387,8 +7824,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit; Wait on Pump 1",
+ "Counter": "0,1,2",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Waiting for header pump 1",
"UMask": "0x8",
@@ -6396,8 +7835,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Accumulate",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; Header flit slotting control state machine is in any accumulate state; multi-message flit may be assembled over multiple cycles",
"UMask": "0x1",
@@ -6405,8 +7846,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Accumulate Ready",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; header flit slotting control state machine is in accum_ready state; flit is ready to send but transmission is blocked; more messages may be slotted into flit",
"UMask": "0x2",
@@ -6414,8 +7857,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Accumulate Wasted",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_WASTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; Flit is being assembled over multiple cycles, but no additional message is being slotted into flit in current cycle; accumulate cycle is wasted",
"UMask": "0x4",
@@ -6423,8 +7868,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Run-Ahead - Blocked",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_BLOCKED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; Header flit slotting entered run-ahead state; new header flit is started while transmission of prior, fully assembled flit is blocked",
"UMask": "0x8",
@@ -6432,8 +7879,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Run-Ahead - Message",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; Header flit slotting is in run-ahead to start new flit, and message is actually slotted into new flit",
"UMask": "0x10",
@@ -6441,8 +7890,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Parallel Ok",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.PAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; New header flit construction may proceed in parallel with data flit sequence",
"UMask": "0x20",
@@ -6450,8 +7901,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Parallel Flit Finished",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.PAR_FLIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; Header flit finished assembly in parallel with data flit sequence",
"UMask": "0x80",
@@ -6459,8 +7912,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1; Parallel Message",
+ "Counter": "0,1,2",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.PAR_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 1; Message is slotted into header flit in parallel with data flit sequence",
"UMask": "0x40",
@@ -6468,8 +7923,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2; Rate-matching Stall",
+ "Counter": "0,1,2",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 2; Rate-matching stall injected",
"UMask": "0x1",
@@ -6477,8 +7934,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2; Rate-matching Stall - No Message",
+ "Counter": "0,1,2",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL_NOMSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Events related to Header Flit Generation - Set 2; Rate matching stall injected, but no additional message slotted during stall cycle",
"UMask": "0x2",
@@ -6486,8 +7945,10 @@
},
{
"BriefDescription": "Header Not Sent; All",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent",
"UMask": "0x1",
@@ -6495,8 +7956,10 @@
},
{
"BriefDescription": "Header Not Sent; No BGF Credits",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent; No BGF credits available",
"UMask": "0x2",
@@ -6504,8 +7967,10 @@
},
{
"BriefDescription": "Header Not Sent; No BGF Credits + No Extra Message Slotted",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_BGF_NO_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent; No BGF credits available; no additional message slotted into flit",
"UMask": "0x8",
@@ -6513,8 +7978,10 @@
},
{
"BriefDescription": "Header Not Sent; No TxQ Credits",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_TXQ_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent; No TxQ credits available",
"UMask": "0x4",
@@ -6522,8 +7989,10 @@
},
{
"BriefDescription": "Header Not Sent; No TxQ Credits + No Extra Message Slotted",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.NO_TXQ_NO_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent; No TxQ credits available; no additional message slotted into flit",
"UMask": "0x10",
@@ -6531,8 +8000,10 @@
},
{
"BriefDescription": "Header Not Sent; Sent - One Slot Taken",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.ONE_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent; sending header flit with only one slot taken (two slots free)",
"UMask": "0x20",
@@ -6540,8 +8011,10 @@
},
{
"BriefDescription": "Header Not Sent; Sent - Three Slots Taken",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.THREE_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent; sending header flit with three slots taken (no slots free)",
"UMask": "0x80",
@@ -6549,8 +8022,10 @@
},
{
"BriefDescription": "Header Not Sent; Sent - Two Slots Taken",
+ "Counter": "0,1,2",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_FLIT_NOT_SENT.TWO_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "header flit is ready for transmission but could not be sent; sending header flit with only two slots taken (one slots free)",
"UMask": "0x40",
@@ -6558,8 +8033,10 @@
},
{
"BriefDescription": "Message Held; Can't Slot AD",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "some AD message could not be slotted (logical OR of all AD events under INGR_SLOT_CANT_MC_VN{0,1})",
"UMask": "0x40",
@@ -6567,8 +8044,10 @@
},
{
"BriefDescription": "Message Held; Can't Slot BL",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "some BL message could not be slotted (logical OR of all BL events under INGR_SLOT_CANT_MC_VN{0,1})",
"UMask": "0x80",
@@ -6576,8 +8055,10 @@
},
{
"BriefDescription": "Message Held; Parallel AD Lost",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_AD_LOST",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "some AD message lost contest for slot 0 (logical OR of all AD events under INGR_SLOT_LOST_MC_VN{0,1})",
"UMask": "0x10",
@@ -6585,8 +8066,10 @@
},
{
"BriefDescription": "Message Held; Parallel Attempt",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_ATTEMPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ad and bl messages attempted to slot into the same flit in parallel",
"UMask": "0x4",
@@ -6594,8 +8077,10 @@
},
{
"BriefDescription": "Message Held; Parallel BL Lost",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_BL_LOST",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "some BL message lost contest for slot 0 (logical OR of all BL events under INGR_SLOT_LOST_MC_VN{0,1})",
"UMask": "0x20",
@@ -6603,8 +8088,10 @@
},
{
"BriefDescription": "Message Held; Parallel Success",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_SUCCESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ad and bl messages were actually slotted into the same flit in parallel",
"UMask": "0x8",
@@ -6612,8 +8099,10 @@
},
{
"BriefDescription": "Message Held; VN0",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "vn0 message(s) that couldn't be slotted into last vn0 flit are held in slotting stage while processing vn1 flit",
"UMask": "0x1",
@@ -6621,8 +8110,10 @@
},
{
"BriefDescription": "Message Held; VN1",
+ "Counter": "0,1,2",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_HELD.VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "vn1 message(s) that couldn't be slotted into last vn1 flit are held in slotting stage while processing vn0 flit",
"UMask": "0x2",
@@ -6630,8 +8121,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6639,8 +8132,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6648,8 +8143,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6657,8 +8154,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6666,8 +8165,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6675,8 +8176,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6684,8 +8187,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -6693,8 +8198,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6702,8 +8209,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6711,8 +8220,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6720,8 +8231,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6729,8 +8242,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6738,8 +8253,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6747,8 +8264,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -6756,8 +8275,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6765,8 +8286,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6774,8 +8297,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6783,8 +8308,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6792,8 +8319,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6801,8 +8330,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6810,8 +8341,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -6819,8 +8352,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6828,8 +8363,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6837,8 +8374,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6846,8 +8385,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6855,8 +8396,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6864,8 +8407,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6873,8 +8418,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -6882,8 +8429,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6891,8 +8440,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6900,8 +8451,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6909,8 +8462,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6918,8 +8473,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6927,8 +8484,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6936,8 +8495,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -6945,8 +8506,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -6954,8 +8517,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -6963,8 +8528,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -6972,8 +8539,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -6981,8 +8550,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit; NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -6990,8 +8561,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -6999,8 +8572,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -7008,32 +8583,40 @@
},
{
"BriefDescription": "SMI3 Prefetch Messages; Lost Arbitration",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.ARB_LOST",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "SMI3 Prefetch Messages; Arrived",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.ARRIVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "SMI3 Prefetch Messages; Dropped - Old",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.DROP_OLD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "SMI3 Prefetch Messages; Dropped - Wrap",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.DROP_WRAP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Dropped because it was overwritten by new message while prefetch queue was full",
"UMask": "0x10",
@@ -7041,16 +8624,20 @@
},
{
"BriefDescription": "SMI3 Prefetch Messages; Slotted",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_M3UPI_RxC_SMI3_PFTCH.SLOTTED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "Remote VNA Credits; Any In Use",
+ "Counter": "0,1,2",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.ANY_IN_USE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "At least one remote vna credit is in use",
"UMask": "0x20",
@@ -7058,8 +8645,10 @@
},
{
"BriefDescription": "Remote VNA Credits; Corrected",
+ "Counter": "0,1,2",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.CORRECTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of remote vna credits corrected (local return) per cycle",
"UMask": "0x2",
@@ -7067,8 +8656,10 @@
},
{
"BriefDescription": "Remote VNA Credits; Level < 1",
+ "Counter": "0,1,2",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote vna credit level is less than 1 (i.e. no vna credits available)",
"UMask": "0x4",
@@ -7076,8 +8667,10 @@
},
{
"BriefDescription": "Remote VNA Credits; Level < 4",
+ "Counter": "0,1,2",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote vna credit level is less than 4; bl (or ad requiring 4 vna) cannot arb on vna",
"UMask": "0x8",
@@ -7085,8 +8678,10 @@
},
{
"BriefDescription": "Remote VNA Credits; Level < 5",
+ "Counter": "0,1,2",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote vna credit level is less than 5; parallel ad/bl arb on vna not possible",
"UMask": "0x10",
@@ -7094,8 +8689,10 @@
},
{
"BriefDescription": "Remote VNA Credits; Used",
+ "Counter": "0,1,2",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.USED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of remote vna credits consumed per cycle",
"UMask": "0x1",
@@ -7103,8 +8700,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x1",
@@ -7112,8 +8711,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x10",
@@ -7121,8 +8722,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x4",
@@ -7130,8 +8733,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x40",
@@ -7139,8 +8744,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x1",
@@ -7148,8 +8755,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x10",
@@ -7157,8 +8766,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x2",
@@ -7166,8 +8777,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_RxR_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x4",
@@ -7175,8 +8788,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x40",
@@ -7184,8 +8799,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_RxR_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the CMS Ingress",
"UMask": "0x8",
@@ -7193,8 +8810,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x1",
@@ -7202,8 +8821,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x10",
@@ -7211,8 +8832,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x2",
@@ -7220,8 +8843,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x4",
@@ -7229,8 +8854,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x40",
@@ -7238,8 +8865,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; IFV - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x80",
@@ -7247,8 +8876,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x8",
@@ -7256,8 +8887,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -7265,8 +8898,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -7274,8 +8909,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -7283,8 +8920,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_RxR_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -7292,8 +8931,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -7301,8 +8942,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_RxR_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -7310,8 +8953,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -7319,8 +8964,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -7328,8 +8975,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -7337,8 +8986,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -7346,8 +8997,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -7355,8 +9008,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -7364,8 +9019,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7373,8 +9030,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7382,8 +9041,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7391,8 +9052,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7400,8 +9063,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7409,8 +9074,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7418,8 +9085,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7427,8 +9096,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7436,8 +9107,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7445,8 +9118,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7454,8 +9129,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7463,8 +9140,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7472,8 +9151,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7481,8 +9162,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7490,8 +9173,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7499,8 +9184,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7508,8 +9195,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7517,8 +9206,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7526,8 +9217,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 0",
+ "Counter": "0,1,2",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7535,8 +9228,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 1",
+ "Counter": "0,1,2",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7544,8 +9239,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 2",
+ "Counter": "0,1,2",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7553,8 +9250,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 3",
+ "Counter": "0,1,2",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7562,8 +9261,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 4",
+ "Counter": "0,1,2",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7571,8 +9272,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits; For Transgress 5",
+ "Counter": "0,1,2",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7580,8 +9283,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x1",
@@ -7589,8 +9294,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x4",
@@ -7598,8 +9305,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x2",
@@ -7607,8 +9316,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x8",
@@ -7616,8 +9327,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x10",
@@ -7625,8 +9338,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x40",
@@ -7634,8 +9349,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x20",
@@ -7643,8 +9360,10 @@
},
{
"BriefDescription": "Failed ARB for AD; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD arb but no win; arb request asserted but not won",
"UMask": "0x80",
@@ -7652,8 +9371,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x1",
@@ -7661,8 +9382,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x2",
@@ -7670,8 +9393,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x4",
@@ -7679,8 +9404,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.BL_EARLY_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x8",
@@ -7688,8 +9415,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x1",
@@ -7697,8 +9426,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x4",
@@ -7706,8 +9437,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x2",
@@ -7715,8 +9448,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x8",
@@ -7724,8 +9459,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x10",
@@ -7733,8 +9470,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x40",
@@ -7742,8 +9481,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x20",
@@ -7751,8 +9492,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x80",
@@ -7760,8 +9503,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x1",
@@ -7769,8 +9514,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x4",
@@ -7778,8 +9525,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x2",
@@ -7787,8 +9536,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x8",
@@ -7796,8 +9547,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x10",
@@ -7805,8 +9558,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x40",
@@ -7814,8 +9569,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x20",
@@ -7823,64 +9580,80 @@
},
{
"BriefDescription": "AD Flow Q Occupancy; VN0 REQ Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy; VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy; VN0 SNP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy; VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy; VN1 REQ Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy; VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy; VN1 SNP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "Number of Snoop Targets; CHA on VN0",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_CHA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of VN0 Snpf to CHA",
"UMask": "0x4",
@@ -7888,8 +9661,10 @@
},
{
"BriefDescription": "Number of Snoop Targets; Non Idle cycles on VN0",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_NON_IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of non-idle cycles in issuing Vn0 Snpf",
"UMask": "0x40",
@@ -7897,8 +9672,10 @@
},
{
"BriefDescription": "Number of Snoop Targets; Peer UPI0 on VN0",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_PEER_UPI0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of VN0 Snpf to peer UPI0",
"UMask": "0x1",
@@ -7906,8 +9683,10 @@
},
{
"BriefDescription": "Number of Snoop Targets; Peer UPI1 on VN0",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN0_PEER_UPI1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of VN0 Snpf to peer UPI1",
"UMask": "0x2",
@@ -7915,8 +9694,10 @@
},
{
"BriefDescription": "Number of Snoop Targets; CHA on VN1",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_CHA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of VN1 Snpf to CHA",
"UMask": "0x20",
@@ -7924,8 +9705,10 @@
},
{
"BriefDescription": "Number of Snoop Targets; Non Idle cycles on VN1",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_NON_IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of non-idle cycles in issuing Vn1 Snpf",
"UMask": "0x80",
@@ -7933,8 +9716,10 @@
},
{
"BriefDescription": "Number of Snoop Targets; Peer UPI0 on VN1",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_PEER_UPI0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of VN1 Snpf to peer UPI0",
"UMask": "0x8",
@@ -7942,8 +9727,10 @@
},
{
"BriefDescription": "Number of Snoop Targets; Peer UPI1 on VN1",
+ "Counter": "0",
"EventCode": "0x3C",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP1_VN1.VN1_PEER_UPI1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of snpfanout targets and non-idle cycles can be used to calculate average snpfanout latency; Number of VN1 Snpf to peer UPI1",
"UMask": "0x10",
@@ -7951,8 +9738,10 @@
},
{
"BriefDescription": "Snoop Arbitration; FlowQ Won",
+ "Counter": "0,1,2",
"EventCode": "0x3D",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN0_SNPFP_NONSNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outcome of SnpF pending arbitration; FlowQ txn issued when SnpF pending on Vn0",
"UMask": "0x1",
@@ -7960,8 +9749,10 @@
},
{
"BriefDescription": "Snoop Arbitration; FlowQ SnpF Won",
+ "Counter": "0,1,2",
"EventCode": "0x3D",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN0_SNPFP_VN2SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outcome of SnpF pending arbitration; FlowQ Vn0 SnpF issued when SnpF pending on Vn1",
"UMask": "0x4",
@@ -7969,8 +9760,10 @@
},
{
"BriefDescription": "Snoop Arbitration; FlowQ Won",
+ "Counter": "0,1,2",
"EventCode": "0x3D",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN1_SNPFP_NONSNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outcome of SnpF pending arbitration; FlowQ txn issued when SnpF pending on Vn1",
"UMask": "0x2",
@@ -7978,8 +9771,10 @@
},
{
"BriefDescription": "Snoop Arbitration; FlowQ SnpF Won",
+ "Counter": "0,1,2",
"EventCode": "0x3D",
"EventName": "UNC_M3UPI_TxC_AD_SNPF_GRP2_VN1.VN1_SNPFP_VN0SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outcome of SnpF pending arbitration; FlowQ Vn1 SnpF issued when SnpF pending on Vn0",
"UMask": "0x8",
@@ -7987,8 +9782,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - Credit Available; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x34",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request with prior cycle credit check complete and credit avail",
"UMask": "0x1",
@@ -7996,8 +9793,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - Credit Available; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x34",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request with prior cycle credit check complete and credit avail",
"UMask": "0x2",
@@ -8005,8 +9804,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - Credit Available; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x34",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request with prior cycle credit check complete and credit avail",
"UMask": "0x8",
@@ -8014,8 +9815,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - Credit Available; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x34",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request with prior cycle credit check complete and credit avail",
"UMask": "0x10",
@@ -8023,8 +9826,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - Credit Available; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x34",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request with prior cycle credit check complete and credit avail",
"UMask": "0x20",
@@ -8032,8 +9837,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - Credit Available; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x34",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_CRD_AVAIL.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request with prior cycle credit check complete and credit avail",
"UMask": "0x80",
@@ -8041,8 +9848,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - New Message; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x33",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x1",
@@ -8050,8 +9859,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - New Message; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x33",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x2",
@@ -8059,8 +9870,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - New Message; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x33",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x8",
@@ -8068,8 +9881,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - New Message; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x33",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x10",
@@ -8077,8 +9892,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - New Message; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x33",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x20",
@@ -8086,8 +9903,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - New Message; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x33",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NEW_MSG.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x80",
@@ -8095,8 +9914,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x1",
@@ -8104,8 +9925,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x4",
@@ -8113,8 +9936,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x2",
@@ -8122,8 +9947,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x8",
@@ -8131,8 +9958,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x10",
@@ -8140,8 +9969,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x40",
@@ -8149,8 +9980,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x20",
@@ -8158,8 +9991,10 @@
},
{
"BriefDescription": "Speculative ARB for AD - No Credit; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x32",
"EventName": "UNC_M3UPI_TxC_AD_SPEC_ARB_NO_OTHER_PEND.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x80",
@@ -8167,22 +10002,28 @@
},
{
"BriefDescription": "AK Flow Q Inserts",
+ "Counter": "0,1,2",
"EventCode": "0x2F",
"EventName": "UNC_M3UPI_TxC_AK_FLQ_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "AK Flow Q Occupancy",
+ "Counter": "0",
"EventCode": "0x1E",
"EventName": "UNC_M3UPI_TxC_AK_FLQ_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Failed ARB for BL; VN0 NCB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x4",
@@ -8190,8 +10031,10 @@
},
{
"BriefDescription": "Failed ARB for BL; VN0 NCS Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x8",
@@ -8199,8 +10042,10 @@
},
{
"BriefDescription": "Failed ARB for BL; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x1",
@@ -8208,8 +10053,10 @@
},
{
"BriefDescription": "Failed ARB for BL; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x2",
@@ -8217,8 +10064,10 @@
},
{
"BriefDescription": "Failed ARB for BL; VN1 NCS Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x40",
@@ -8226,8 +10075,10 @@
},
{
"BriefDescription": "Failed ARB for BL; VN1 NCB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x80",
@@ -8235,8 +10086,10 @@
},
{
"BriefDescription": "Failed ARB for BL; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x10",
@@ -8244,8 +10097,10 @@
},
{
"BriefDescription": "Failed ARB for BL; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL arb but no win; arb request asserted but not won",
"UMask": "0x20",
@@ -8253,8 +10108,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x1",
@@ -8262,8 +10119,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x4",
@@ -8271,8 +10130,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x2",
@@ -8280,8 +10141,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x8",
@@ -8289,8 +10152,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x10",
@@ -8298,8 +10163,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x40",
@@ -8307,8 +10174,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x20",
@@ -8316,8 +10185,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x80",
@@ -8325,8 +10196,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x1",
@@ -8334,8 +10207,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x2",
@@ -8343,8 +10218,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN0 NCS Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x8",
@@ -8352,8 +10229,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN0 NCB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x4",
@@ -8361,8 +10240,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x10",
@@ -8370,8 +10251,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x20",
@@ -8379,8 +10262,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN1_NCB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x80",
@@ -8388,8 +10273,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts; VN1_NCS Messages",
+ "Counter": "0,1,2",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x40",
@@ -8397,72 +10284,90 @@
},
{
"BriefDescription": "BL Flow Q Occupancy; VN0 NCB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy; VN0 NCS Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy; VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy; VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy; VN1_NCS Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy; VN1_NCB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy; VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy; VN1 WB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "Speculative ARB for BL - New Message; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x38",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x2",
@@ -8470,8 +10375,10 @@
},
{
"BriefDescription": "Speculative ARB for BL - New Message; VN0 NCS Messages",
+ "Counter": "0,1,2",
"EventCode": "0x38",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x8",
@@ -8479,8 +10386,10 @@
},
{
"BriefDescription": "Speculative ARB for BL - New Message; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x38",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x1",
@@ -8488,8 +10397,10 @@
},
{
"BriefDescription": "Speculative ARB for BL - New Message; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x38",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x20",
@@ -8497,8 +10408,10 @@
},
{
"BriefDescription": "Speculative ARB for BL - New Message; VN1 NCB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x38",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x80",
@@ -8506,8 +10419,10 @@
},
{
"BriefDescription": "Speculative ARB for BL - New Message; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x38",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NEW_MSG.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request due to new message arriving on a specific channel (MC/VN)",
"UMask": "0x10",
@@ -8515,8 +10430,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN0 NCB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x4",
@@ -8524,8 +10441,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN0 NCS Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x8",
@@ -8533,8 +10452,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x1",
@@ -8542,8 +10463,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN0 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x2",
@@ -8551,8 +10474,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN1 NCS Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x40",
@@ -8560,8 +10485,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN1 NCB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x80",
@@ -8569,8 +10496,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x10",
@@ -8578,8 +10507,10 @@
},
{
"BriefDescription": "Speculative ARB for AD Failed - No Credit; VN1 WB Messages",
+ "Counter": "0,1,2",
"EventCode": "0x37",
"EventName": "UNC_M3UPI_TxC_BL_SPEC_ARB_NO_OTHER_PEND.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL speculative arb request asserted due to no other channel being active (have a valid entry but don't have credits to send)",
"UMask": "0x20",
@@ -8587,8 +10518,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8596,8 +10529,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8605,8 +10540,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -8614,8 +10551,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8623,8 +10562,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8632,8 +10573,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9F",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8641,8 +10584,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x9F",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8650,8 +10595,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9F",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -8659,8 +10606,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9F",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8668,8 +10617,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x9F",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8677,8 +10628,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9F",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -8686,8 +10639,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8695,8 +10650,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8704,8 +10661,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8713,8 +10672,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8722,8 +10683,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8731,8 +10694,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8740,8 +10705,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8749,8 +10716,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8758,8 +10727,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8767,8 +10738,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8776,8 +10749,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8785,8 +10760,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8794,8 +10771,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8803,8 +10782,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8812,8 +10793,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8821,8 +10804,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8830,8 +10815,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8839,8 +10826,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8848,8 +10837,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x1",
@@ -8857,8 +10848,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x20",
@@ -8866,8 +10859,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x2",
@@ -8875,8 +10870,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x4",
@@ -8884,8 +10881,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x40",
@@ -8893,8 +10892,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x8",
@@ -8902,8 +10903,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8911,8 +10914,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AD - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8920,8 +10925,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8929,8 +10936,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8938,8 +10947,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; BL - Credit",
+ "Counter": "0,1,2",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8947,8 +10958,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8956,8 +10969,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; AD - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9B",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x1",
@@ -8965,8 +10980,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; AK - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9B",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AK_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x2",
@@ -8974,8 +10991,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; BL - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9B",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x4",
@@ -8983,8 +11002,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation; IV - Bounce",
+ "Counter": "0,1,2",
"EventCode": "0x9B",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.IV_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x8",
@@ -8992,8 +11013,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -9001,8 +11024,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -9010,8 +11035,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -9019,8 +11046,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -9028,8 +11057,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -9037,8 +11068,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -9046,8 +11079,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -9055,8 +11090,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -9064,8 +11101,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -9073,8 +11112,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -9082,8 +11123,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -9091,8 +11134,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -9100,8 +11145,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used; IV",
+ "Counter": "0,1,2",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -9109,8 +11156,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9118,8 +11167,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -9127,8 +11178,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9136,8 +11189,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9145,8 +11200,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9154,8 +11211,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9163,8 +11222,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full; IV",
+ "Counter": "0,1,2",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9172,8 +11233,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9181,8 +11244,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -9190,8 +11255,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9199,8 +11266,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9208,8 +11277,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9217,8 +11288,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9226,8 +11299,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty; IV",
+ "Counter": "0,1,2",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9235,8 +11310,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9244,8 +11321,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -9253,8 +11332,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9262,8 +11343,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9271,8 +11354,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9280,8 +11365,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9289,8 +11376,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations; IV",
+ "Counter": "0,1,2",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9298,8 +11387,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -9307,8 +11398,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x10",
@@ -9316,8 +11409,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -9325,8 +11420,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x20",
@@ -9334,8 +11431,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x4",
@@ -9343,8 +11442,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x40",
@@ -9352,8 +11453,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs; IV",
+ "Counter": "0,1,2",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x8",
@@ -9361,8 +11464,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9370,8 +11475,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -9379,8 +11486,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9388,8 +11497,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9397,8 +11508,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9406,8 +11519,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9415,8 +11530,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy; IV",
+ "Counter": "0,1,2",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh.; Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9424,8 +11541,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AD - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -9433,8 +11552,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AD - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x10",
@@ -9442,8 +11563,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AK - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -9451,8 +11574,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; AK - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x20",
@@ -9460,8 +11585,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; BL - Agent 0",
+ "Counter": "0,1,2",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -9469,8 +11596,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; BL - Agent 1",
+ "Counter": "0,1,2",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x40",
@@ -9478,8 +11607,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation; IV",
+ "Counter": "0,1,2",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x8",
@@ -9487,8 +11618,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPIs on the AD Ring",
"UMask": "0x2",
@@ -9496,8 +11629,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPIs on the AD Ring",
"UMask": "0x8",
@@ -9505,8 +11640,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPIs on the AD Ring",
"UMask": "0x4",
@@ -9514,8 +11651,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPIs on the AD Ring",
"UMask": "0x10",
@@ -9523,8 +11662,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPIs on the AD Ring",
"UMask": "0x40",
@@ -9532,8 +11673,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPIs on the AD Ring",
"UMask": "0x20",
@@ -9541,8 +11684,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty; VNA",
+ "Counter": "0,1,2",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPIs on the AD Ring",
"UMask": "0x1",
@@ -9550,8 +11695,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty; VN0 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_NCS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x4",
@@ -9559,8 +11706,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty; VN0 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x2",
@@ -9568,8 +11717,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty; VN0 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x8",
@@ -9577,8 +11728,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty; VN1 RSP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_NCS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x20",
@@ -9586,8 +11739,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty; VN1 REQ Messages",
+ "Counter": "0,1,2",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x10",
@@ -9595,8 +11750,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty; VN1 SNP Messages",
+ "Counter": "0,1,2",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x40",
@@ -9604,8 +11761,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty; VNA",
+ "Counter": "0,1,2",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x1",
@@ -9613,6 +11772,7 @@
},
{
"BriefDescription": "Prefetches generated by the flow control queue of the M3UPI unit.",
+ "Counter": "0,1,2",
"EventCode": "0x29",
"EventName": "UNC_M3UPI_UPI_PREFETCH_SPAWN",
"PerPkg": "1",
@@ -9621,8 +11781,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Down and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9630,8 +11792,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9639,8 +11803,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Up and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9648,8 +11814,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9657,8 +11825,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Down and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA8",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9666,8 +11836,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA8",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9675,8 +11847,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Up and Even",
+ "Counter": "0,1,2",
"EventCode": "0xA8",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9684,8 +11858,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xA8",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9693,8 +11869,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Down and Even",
+ "Counter": "0,1,2",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9702,8 +11880,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9711,8 +11891,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Up and Even",
+ "Counter": "0,1,2",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9720,8 +11902,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9729,8 +11913,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use; Down",
+ "Counter": "0,1,2",
"EventCode": "0xAC",
"EventName": "UNC_M3UPI_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -9738,8 +11924,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use; Up",
+ "Counter": "0,1,2",
"EventCode": "0xAC",
"EventName": "UNC_M3UPI_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -9747,8 +11935,10 @@
},
{
"BriefDescription": "VN0 Credit Used; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9756,8 +11946,10 @@
},
{
"BriefDescription": "VN0 Credit Used; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9765,8 +11957,10 @@
},
{
"BriefDescription": "VN0 Credit Used; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9774,8 +11968,10 @@
},
{
"BriefDescription": "VN0 Credit Used; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9783,8 +11979,10 @@
},
{
"BriefDescription": "VN0 Credit Used; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9792,8 +11990,10 @@
},
{
"BriefDescription": "VN0 Credit Used; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9801,8 +12001,10 @@
},
{
"BriefDescription": "VN0 No Credits; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN0 Credits; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9810,8 +12012,10 @@
},
{
"BriefDescription": "VN0 No Credits; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN0 Credits; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9819,8 +12023,10 @@
},
{
"BriefDescription": "VN0 No Credits; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN0 Credits; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9828,8 +12034,10 @@
},
{
"BriefDescription": "VN0 No Credits; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN0 Credits; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9837,8 +12045,10 @@
},
{
"BriefDescription": "VN0 No Credits; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN0 Credits; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9846,8 +12056,10 @@
},
{
"BriefDescription": "VN0 No Credits; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN0 Credits; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9855,8 +12067,10 @@
},
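The VN0 credit descriptions above spell out the fallback scheme (requests try a shared VNA credit first, then fall back to the per-message-class VN0 pool), and the companion UNC_M3UPI_VN0_NO_CREDITS events count starvation cycles. A hedged Python sketch that assembles a perf stat command over the per-message-class umasks follows; it assumes the UNC_M3UPI_* aliases resolve on the target kernel/perf combination, which they may not everywhere.

import shlex
import subprocess

CLASSES = ["REQ", "SNP", "RSP", "WB", "NCB", "NCS"]
events = [f"UNC_M3UPI_VN0_CREDITS_USED.{c}" for c in CLASSES]
events += [f"UNC_M3UPI_VN0_NO_CREDITS.{c}" for c in CLASSES]

# Count system-wide for 10 seconds; a missing alias only makes perf complain.
cmd = ["perf", "stat", "-a", "-e", ",".join(events), "--", "sleep", "10"]
print("Running:", shlex.join(cmd))
subprocess.run(cmd, check=False)
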
{
"BriefDescription": "VN1 Credit Used; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9864,8 +12078,10 @@
},
{
"BriefDescription": "VN1 Credit Used; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9873,8 +12089,10 @@
},
{
"BriefDescription": "VN1 Credit Used; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9882,8 +12100,10 @@
},
{
"BriefDescription": "VN1 Credit Used; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9891,8 +12111,10 @@
},
{
"BriefDescription": "VN1 Credit Used; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9900,8 +12122,10 @@
},
{
"BriefDescription": "VN1 Credit Used; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers.; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9909,8 +12133,10 @@
},
{
"BriefDescription": "VN1 No Credits; WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN1 Credits; Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9918,8 +12144,10 @@
},
{
"BriefDescription": "VN1 No Credits; NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN1 Credits; Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9927,8 +12155,10 @@
},
{
"BriefDescription": "VN1 No Credits; REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN1 Credits; Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9936,8 +12166,10 @@
},
{
"BriefDescription": "VN1 No Credits; RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN1 Credits; Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9945,8 +12177,10 @@
},
{
"BriefDescription": "VN1 No Credits; SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN1 Credits; Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9954,8 +12188,10 @@
},
{
"BriefDescription": "VN1 No Credits; RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of Cycles there were no VN1 Credits; Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9963,15 +12199,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M2M_TxC_BL.DRS_UPI",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x40",
"EventName": "UNC_NoUnit_TxC_BL.DRS_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Clocks of the Intel(R) Ultra Path Interconnect (UPI)",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_UPI_CLOCKTICKS",
"PerPkg": "1",
@@ -9980,6 +12219,7 @@
},
{
"BriefDescription": "Data Response packets that go direct to core",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2C",
"PerPkg": "1",
@@ -9989,6 +12229,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_DIRECT_ATTEMPTS.D2U",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x12",
"EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2K",
@@ -9998,6 +12239,7 @@
},
{
"BriefDescription": "Data Response packets that go direct to Intel(R) UPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2U",
"PerPkg": "1",
@@ -10007,70 +12249,87 @@
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles Intel(R) UPI is in L1 power mode (shutdown)",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_UPI_L1_POWER_CYCLES",
"PerPkg": "1",
@@ -10079,164 +12338,205 @@
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_CRD_RETURN_BLOCKED",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_UPI_M3_CRD_RETURN_BLOCKED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles where phy is not in L0, L0c, L0p, L1",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_UPI_PHY_INIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "L1 Req Nack",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_UPI_POWER_L1_NACK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times a link sends/receives a LinkReqNAck. When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. This requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqNAck refers to receiving an NAck (meaning this agent's Tx originally requested the power change). A Tx LinkReqNAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).",
"Unit": "UPI"
},
{
"BriefDescription": "L1 Req (same as L1 Ack).",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_UPI_POWER_L1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times a link sends/receives a LinkReqAck. When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. This requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqAck refers to receiving an Ack (meaning this agent's Tx originally requested the power change). A Tx LinkReqAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles the Rx of the Intel(R) UPI is in L0p power mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_UPI_RxL0P_POWER_CYCLES",
"PerPkg": "1",
@@ -10245,16 +12545,20 @@
},
{
"BriefDescription": "Cycles in L0. Receive side.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_UPI_RxL0_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of UPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Unit": "UPI"
},
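UNC_UPI_CLOCKTICKS provides the link-layer time base, and the L0p/L0 power-cycle events above count cycles spent in each link power state, so state residency is simply a ratio of the two counts taken over the same interval on the same port. A small illustrative Python helper (the example numbers are made up):

def l0p_residency(l0p_power_cycles: int, upi_clockticks: int) -> float:
    """Fraction of UPI link-layer cycles spent in the L0p low-power state."""
    if upi_clockticks == 0:
        return 0.0
    return l0p_power_cycles / upi_clockticks

# Example: 1.2e9 L0p cycles out of 2.4e9 link clocks -> 50% residency.
print(f"{l0p_residency(1_200_000_000, 2_400_000_000):.1%}")
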
{
"BriefDescription": "Matches on Receive path of a UPI Port; Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCB",
"UMask": "0xe",
@@ -10262,8 +12566,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCB",
"UMask": "0x10e",
@@ -10271,8 +12577,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCS",
"UMask": "0xf",
@@ -10280,8 +12588,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCS",
"UMask": "0x10f",
@@ -10289,8 +12599,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQ Message Class",
"UMask": "0x8",
@@ -10298,8 +12610,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Request Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match REQ Opcodes - Specified in Umask[7:4]",
"UMask": "0x108",
@@ -10307,24 +12621,30 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Response - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPCNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1aa",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Response - Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12a",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Response - Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0xc",
@@ -10332,8 +12652,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Response - Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0x10c",
@@ -10341,8 +12663,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Response - No Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - RSP",
"UMask": "0xa",
@@ -10350,8 +12674,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Response - No Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - RSP",
"UMask": "0x10a",
@@ -10359,8 +12685,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "SNP Message Class",
"UMask": "0x9",
@@ -10368,8 +12696,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Snoop Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match SNP Opcodes - Specified in Umask[7:4]",
"UMask": "0x109",
@@ -10377,8 +12707,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Writeback",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0xd",
@@ -10386,8 +12718,10 @@
},
{
"BriefDescription": "Matches on Receive path of a UPI Port; Writeback",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0x10d",
@@ -10395,6 +12729,7 @@
},
{
"BriefDescription": "FLITs received which bypassed the Slot0 Receive Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT0",
"PerPkg": "1",
@@ -10404,6 +12739,7 @@
},
{
"BriefDescription": "FLITs received which bypassed the Slot0 Receive Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT1",
"PerPkg": "1",
@@ -10413,6 +12749,7 @@
},
{
"BriefDescription": "FLITs received which bypassed the Slot0 Receive Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT2",
"PerPkg": "1",
@@ -10422,30 +12759,37 @@
},
{
"BriefDescription": "VN0 Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "VN1 Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "VNA Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times that an RxQ VNA credit was consumed (i.e. message uses a VNA credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "Valid data FLITs received from any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.ALL_DATA",
"PerPkg": "1",
@@ -10455,6 +12799,7 @@
},
{
"BriefDescription": "Null FLITs received from any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.ALL_NULL",
"PerPkg": "1",
@@ -10464,8 +12809,10 @@
},
{
"BriefDescription": "Valid Flits Received; Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting..",
"UMask": "0x8",
@@ -10473,8 +12820,10 @@
},
{
"BriefDescription": "Valid Flits Received; Idle",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x47",
@@ -10482,8 +12831,10 @@
},
{
"BriefDescription": "Valid Flits Received; LLCRD Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
"UMask": "0x10",
@@ -10491,8 +12842,10 @@
},
{
"BriefDescription": "Valid Flits Received; LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
"UMask": "0x40",
@@ -10500,6 +12853,7 @@
},
{
"BriefDescription": "Protocol header and credit FLITs received from any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.NON_DATA",
"PerPkg": "1",
@@ -10509,6 +12863,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_FLITS.ALL_NULL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.NULL",
@@ -10518,8 +12873,10 @@
},
{
"BriefDescription": "Valid Flits Received; Protocol Header",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
"UMask": "0x80",
@@ -10527,17 +12884,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_FLITS.PROTHDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.PROT_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "Valid Flits Received; Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Count Slot 0 - Other mask bits determine types of headers to count.",
"UMask": "0x1",
@@ -10545,8 +12906,10 @@
},
{
"BriefDescription": "Valid Flits Received; Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Count Slot 1 - Other mask bits determine types of headers to count.",
"UMask": "0x2",
@@ -10554,8 +12917,10 @@
},
{
"BriefDescription": "Valid Flits Received; Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_UPI_RxL_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Count Slot 2 - Other mask bits determine types of headers to count.",
"UMask": "0x4",
@@ -10563,62 +12928,76 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.NCB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.NCS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.REQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_HDR_MATCH.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_HDR_MATCH.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_HDR_MATCH.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_RxL_BASIC_HDR_MATCH.WB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x5",
"EventName": "UNC_UPI_RxL_HDR_MATCH.WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "UPI"
},
{
"BriefDescription": "RxQ Flit Buffer Allocations; Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x1",
@@ -10626,8 +13005,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations; Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x2",
@@ -10635,8 +13016,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations; Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x4",
@@ -10644,8 +13027,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets; Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x1",
@@ -10653,8 +13038,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets; Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x2",
@@ -10662,8 +13049,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets; Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x4",
@@ -10671,118 +13060,147 @@
},
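The insert and occupancy descriptions above point at two derived metrics per slot: average flit-buffer lifetime (occupancy divided by allocations) and average depth (occupancy divided by not-empty cycles). The not-empty event is referenced by the text but does not appear in these hunks, so treat its exact name as an assumption. A short Python sketch with made-up counts:

def rx_flit_buffer_stats(occupancy: int, inserts: int, not_empty_cycles: int):
    """Little's-law style averages for one RxQ slot."""
    avg_lifetime = occupancy / inserts if inserts else 0.0                  # cycles per flit
    avg_depth = occupancy / not_empty_cycles if not_empty_cycles else 0.0   # buffer entries
    return avg_lifetime, avg_depth

life, depth = rx_flit_buffer_stats(occupancy=900_000, inserts=60_000,
                                   not_empty_cycles=300_000)
print(f"avg lifetime ~{life:.1f} cycles, avg depth ~{depth:.2f} entries")
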
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in which the Tx of the Intel(R) Ultra Path Interconnect (UPI) is in L0p power mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES",
"PerPkg": "1",
@@ -10791,30 +13209,38 @@
},
{
"BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0. Transmit side.",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_UPI_TxL0_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of UPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCB",
"UMask": "0xe",
@@ -10822,8 +13248,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCB",
"UMask": "0x10e",
@@ -10831,8 +13259,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCS",
"UMask": "0xf",
@@ -10840,8 +13270,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - NCS",
"UMask": "0x10f",
@@ -10849,8 +13281,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "REQ Message Class",
"UMask": "0x8",
@@ -10858,8 +13292,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Request Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match REQ Opcodes - Specified in Umask[7:4]",
"UMask": "0x108",
@@ -10867,24 +13303,30 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Response - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPCNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1aa",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Response - Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12a",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Response - Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0xc",
@@ -10892,8 +13334,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Response - Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0x10c",
@@ -10901,8 +13345,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Response - No Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - RSP",
"UMask": "0xa",
@@ -10910,8 +13356,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Response - No Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class - RSP",
"UMask": "0x10a",
@@ -10919,8 +13367,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "SNP Message Class",
"UMask": "0x9",
@@ -10928,8 +13378,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Snoop Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match SNP Opcodes - Specified in Umask[7:4]",
"UMask": "0x109",
@@ -10937,8 +13389,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Writeback",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0xd",
@@ -10946,8 +13400,10 @@
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port; Writeback",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Match Message Class -WB",
"UMask": "0x10d",
@@ -10955,6 +13411,7 @@
},
{
"BriefDescription": "FLITs that bypassed the TxL Buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_UPI_TxL_BYPASSED",
"PerPkg": "1",
@@ -10963,6 +13420,7 @@
},
{
"BriefDescription": "Valid data FLITs transmitted via any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.ALL_DATA",
"PerPkg": "1",
@@ -10972,6 +13430,7 @@
},
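UNC_UPI_TxL_FLITS.ALL_DATA counts data flits across all slots, and the credit descriptions earlier in this file use nine flits per 64-byte transfer as their example, so transmit data bandwidth is often approximated as ALL_DATA * 64/9 bytes over the measurement interval. Treat the 64/9 scaling as an assumption rather than a specification; a minimal Python sketch:

def upi_tx_data_bw_mbps(all_data_flits: int, interval_seconds: float) -> float:
    """Approximate UPI transmit data bandwidth in MB/s (assumes 9 flits per 64-byte line)."""
    return all_data_flits * 64.0 / 9.0 / interval_seconds / 1e6

# Example: 4.5e9 data flits over 10 s -> roughly 3200 MB/s.
print(f"{upi_tx_data_bw_mbps(4_500_000_000, 10.0):.0f} MB/s")
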
{
"BriefDescription": "Null FLITs transmitted from any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.ALL_NULL",
"PerPkg": "1",
@@ -10981,6 +13440,7 @@
},
{
"BriefDescription": "Valid Flits Sent; Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.DATA",
"PerPkg": "1",
@@ -10990,6 +13450,7 @@
},
{
"BriefDescription": "Idle FLITs transmitted",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.IDLE",
"PerPkg": "1",
@@ -10999,8 +13460,10 @@
},
{
"BriefDescription": "Valid Flits Sent; LLCRD Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
"UMask": "0x10",
@@ -11008,8 +13471,10 @@
},
{
"BriefDescription": "Valid Flits Sent; LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
"UMask": "0x40",
@@ -11017,6 +13482,7 @@
},
{
"BriefDescription": "Protocol header and credit FLITs transmitted across any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.NON_DATA",
"PerPkg": "1",
@@ -11026,6 +13492,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_FLITS.ALL_NULL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.NULL",
@@ -11035,8 +13502,10 @@
},
{
"BriefDescription": "Valid Flits Sent; Protocol Header",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
"UMask": "0x80",
@@ -11044,17 +13513,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_FLITS.PROTHDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.PROT_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "Valid Flits Sent; Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Count Slot 0 - Other mask bits determine types of headers to count.",
"UMask": "0x1",
@@ -11062,8 +13535,10 @@
},
{
"BriefDescription": "Valid Flits Sent; Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Count Slot 1 - Other mask bits determine types of headers to count.",
"UMask": "0x2",
@@ -11071,8 +13546,10 @@
},
{
"BriefDescription": "Valid Flits Sent; Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_UPI_TxL_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Shows legal flit time (hides impact of L0p and L0c).; Count Slot 2 - Other mask bits determine types of headers to count.",
"UMask": "0x4",
@@ -11080,157 +13557,195 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.DATA_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.DUAL_SLOT_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.LOC",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.NCB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.NCS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.NON_DATA_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.REM",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.REQ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.RSP_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.RSP_NODATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.SGL_SLOT_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.SNP",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "UPI"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_UPI_TxL_BASIC_HDR_MATCH.WB",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x4",
"EventName": "UNC_UPI_TxL_HDR_MATCH.WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "UPI"
},
{
"BriefDescription": "Tx Flit Buffer Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_UPI_TxL_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of allocations into the UPI Tx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"Unit": "UPI"
},
{
"BriefDescription": "Tx Flit Buffer Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_UPI_TxL_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the number of flits in the TxQ. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "VNA Credits Pending Return - Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_UPI_VNA_CREDIT_RETURN_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of VNA credits in the Rx side that are waitng to be returned back across the link.",
"Unit": "UPI"
},
{
"BriefDescription": "Clockticks in the UBOX using a dedicated 48-bit Fixed Counter",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_U_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UBOX"
},
{
"BriefDescription": "Message Received",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.",
"UMask": "0x8",
@@ -11238,8 +13753,10 @@
},
{
"BriefDescription": "Message Received",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.INT_PRIO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.",
"UMask": "0x10",
@@ -11247,8 +13764,10 @@
},
{
"BriefDescription": "Message Received; IPI",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.IPI_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.; Inter Processor Interrupts",
"UMask": "0x4",
@@ -11256,8 +13775,10 @@
},
{
"BriefDescription": "Message Received; MSI",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.MSI_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.; Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)",
"UMask": "0x2",
@@ -11265,8 +13786,10 @@
},
{
"BriefDescription": "Message Received; VLW",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.VLW_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Virtual Logical Wire (legacy) message were received from Uncore.",
"UMask": "0x1",
@@ -11274,16 +13797,20 @@
},
{
"BriefDescription": "IDI Lock/SplitLock Cycles",
+ "Counter": "0,1",
"EventCode": "0x44",
"EventName": "UNC_U_LOCK_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times an IDI Lock/SplitLock sequence was started",
"Unit": "UBOX"
},
{
"BriefDescription": "Cycles PHOLD Assert to Ack; Assert to ACK",
+ "Counter": "0,1",
"EventCode": "0x45",
"EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PHOLD cycles.",
"UMask": "0x1",
@@ -11291,44 +13818,42 @@
},
{
"BriefDescription": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY",
+ "Counter": "0,1",
"EventCode": "0x4C",
"EventName": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_RACU_DRNG.RDRAND",
+ "Counter": "0,1",
"EventCode": "0x4C",
"EventName": "UNC_U_RACU_DRNG.RDRAND",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_RACU_DRNG.RDSEED",
+ "Counter": "0,1",
"EventCode": "0x4C",
"EventName": "UNC_U_RACU_DRNG.RDSEED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "RACU Request",
+ "Counter": "0,1",
"EventCode": "0x46",
"EventName": "UNC_U_RACU_REQUESTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number outstanding register requests within message channel tracker",
"Unit": "UBOX"
- },
- {
- "BriefDescription": "UPI interconnect send bandwidth for payload. Derived from unc_upi_txl_flits.all_data",
- "EventCode": "0x2",
- "EventName": "UPI_DATA_BANDWIDTH_TX",
- "PerPkg": "1",
- "PublicDescription": "Counts valid data FLITs (80 bit FLow control unITs: 64bits of data) transmitted (TxL) via any of the 3 Intel(R) Ultra Path Interconnect (UPI) slots on this UPI unit.",
- "ScaleUnit": "7.11E-06Bytes",
- "UMask": "0xf",
- "Unit": "UPI"
}
]
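
Note on the two fields these hunks introduce: "Counter" lists the counters within the unit's PMU that the event can be scheduled on (for example "0,1,2,3", "0,1", "2,3", or "FIXED" for a dedicated fixed counter), and "Experimental": "1" appears to flag events whose counts have not been fully validated. The snippet below is a minimal, hypothetical sketch of how these JSON entries could be inspected; it is not part of the patch and does not reflect perf's own jevents tooling. The default path is taken from the diff header above; pass another event file as the first argument.

#!/usr/bin/env python3
# Illustrative only: summarize the "Counter" and "Experimental" fields added
# by the hunks above. Not part of the patch; perf itself consumes these files
# through its jevents tooling at build time.
import json
import sys
from pathlib import Path

default = "tools/perf/pmu-events/arch/x86/cascadelakex/uncore-interconnect.json"
path = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(default)

events = json.loads(path.read_text())

experimental = 0
for ev in events:
    name = ev.get("EventName", "?")
    unit = ev.get("Unit", "?")
    # "Counter" is a comma-separated list of usable counters, or "FIXED".
    counters = ev.get("Counter", "unspecified")
    if ev.get("Experimental") == "1":
        experimental += 1
        tag = " [experimental]"
    else:
        tag = ""
    print(f"{unit:6s} {name:55s} counters={counters}{tag}")

print(f"{experimental} of {len(events)} events are marked Experimental",
      file=sys.stderr)

Run against uncore-io.json (patched below), the same sketch would show the IIO events restricted to counters "0,1" or "2,3", matching the hunks that follow.
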
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-io.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-io.json
index 743c91f3d2f0..bce46dd4f395 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-io.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-io.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "PCI Express bandwidth reading at IIO. Derived from unc_iio_data_req_of_cpu.mem_read.part0",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "LLC_MISSES.PCIE_READ",
"FCMask": "0x07",
@@ -16,6 +17,7 @@
},
{
"BriefDescription": "PCI Express bandwidth writing at IIO. Derived from unc_iio_data_req_of_cpu.mem_write.part0",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "LLC_MISSES.PCIE_WRITE",
"FCMask": "0x07",
@@ -31,6 +33,7 @@
},
{
"BriefDescription": "Clockticks of the IIO Traffic Controller",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_IIO_CLOCKTICKS",
"PerPkg": "1",
@@ -39,6 +42,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0-3",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS",
"FCMask": "0x4",
@@ -49,6 +53,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0",
"FCMask": "0x4",
@@ -59,6 +64,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1",
"FCMask": "0x4",
@@ -69,6 +75,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2",
"FCMask": "0x4",
@@ -79,6 +86,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3",
"FCMask": "0x4",
@@ -89,8 +97,10 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts; Port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x01",
@@ -99,8 +109,10 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts; Port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x02",
@@ -109,8 +121,10 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts; Port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x04",
@@ -119,8 +133,10 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts; Port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.PORT3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x08",
@@ -129,6 +145,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0-3",
+ "Counter": "2,3",
"EventCode": "0xD5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
"FCMask": "0x04",
@@ -138,6 +155,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0",
+ "Counter": "2,3",
"EventCode": "0xD5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0",
"FCMask": "0x04",
@@ -147,6 +165,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 1",
+ "Counter": "2,3",
"EventCode": "0xD5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1",
"FCMask": "0x04",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 2",
+ "Counter": "2,3",
"EventCode": "0xD5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2",
"FCMask": "0x04",
@@ -165,6 +185,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 3",
+ "Counter": "2,3",
"EventCode": "0xD5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3",
"FCMask": "0x04",
@@ -174,8 +195,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -185,8 +208,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -196,8 +221,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -207,8 +234,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -218,8 +247,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -229,8 +260,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -240,8 +273,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -251,8 +286,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -262,8 +299,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -273,8 +312,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -284,8 +325,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -295,8 +338,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -306,8 +351,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -317,8 +364,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -328,8 +377,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -339,8 +390,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -350,8 +403,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -361,8 +416,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -372,8 +429,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -383,8 +442,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -394,8 +455,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -405,8 +468,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -416,8 +481,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -427,8 +494,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -438,6 +507,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part0",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -449,6 +519,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part1",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -460,6 +531,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part2",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -471,6 +543,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part3",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -482,8 +555,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -493,8 +568,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core reading from Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -504,6 +581,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part0 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -515,6 +593,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part1 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -526,6 +605,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part2 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -537,6 +617,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part3 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -548,8 +629,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -559,8 +642,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -570,6 +655,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0",
"FCMask": "0x07",
@@ -581,6 +667,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part1",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1",
"FCMask": "0x07",
@@ -592,6 +679,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part2",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2",
"FCMask": "0x07",
@@ -603,6 +691,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part3",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3",
"FCMask": "0x07",
@@ -614,8 +703,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -625,8 +716,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -636,6 +729,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part0 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0",
"FCMask": "0x07",
@@ -647,6 +741,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part1 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1",
"FCMask": "0x07",
@@ -658,6 +753,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part2 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2",
"FCMask": "0x07",
@@ -669,6 +765,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part3 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3",
"FCMask": "0x07",
@@ -680,8 +777,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -691,8 +790,10 @@
},
{
"BriefDescription": "Data requested by the CPU; Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -702,8 +803,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -713,8 +816,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -724,8 +829,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -735,8 +842,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -746,8 +855,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -757,8 +868,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -768,8 +881,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -779,8 +894,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -790,8 +907,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -801,8 +920,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -812,6 +933,7 @@
},
{
"BriefDescription": "PCI Express bandwidth reading at IIO, part 0",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -823,6 +945,7 @@
},
{
"BriefDescription": "PCI Express bandwidth reading at IIO, part 1",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -834,6 +957,7 @@
},
{
"BriefDescription": "PCI Express bandwidth reading at IIO, part 2",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -845,6 +969,7 @@
},
{
"BriefDescription": "PCI Express bandwidth reading at IIO, part 3",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -856,8 +981,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -867,8 +994,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -878,6 +1007,7 @@
},
{
"BriefDescription": "PCI Express bandwidth writing at IIO, part 0",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -889,6 +1019,7 @@
},
{
"BriefDescription": "PCI Express bandwidth writing at IIO, part 1",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -900,6 +1031,7 @@
},
{
"BriefDescription": "PCI Express bandwidth writing at IIO, part 2",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -911,6 +1043,7 @@
},
{
"BriefDescription": "PCI Express bandwidth writing at IIO, part 3",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -922,8 +1055,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -933,8 +1068,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -944,8 +1081,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -955,8 +1094,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -966,8 +1107,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -977,8 +1120,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -988,8 +1133,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -999,8 +1146,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1010,6 +1159,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0",
"FCMask": "0x07",
@@ -1021,6 +1171,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by IIO Part1 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1",
"FCMask": "0x07",
@@ -1032,6 +1183,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by IIO Part2 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2",
"FCMask": "0x07",
@@ -1043,6 +1195,7 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by IIO Part3 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3",
"FCMask": "0x07",
@@ -1054,8 +1207,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1065,8 +1220,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1076,6 +1233,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0",
"FCMask": "0x07",
@@ -1087,6 +1245,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1",
"FCMask": "0x07",
@@ -1098,6 +1257,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2",
"FCMask": "0x07",
@@ -1109,6 +1269,7 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3",
"FCMask": "0x07",
@@ -1120,8 +1281,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1131,8 +1294,10 @@
},
{
"BriefDescription": "Data requested of the CPU; Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1142,29 +1307,37 @@
},
{
"BriefDescription": "Num Link Correctable Errors",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_IIO_LINK_NUM_CORR_ERR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IIO"
},
{
"BriefDescription": "Num Link Retries",
+ "Counter": "0,1,2,3",
"EventCode": "0xE",
"EventName": "UNC_IIO_LINK_NUM_RETRIES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IIO"
},
{
"BriefDescription": "Number packets that passed the Mask/Match Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_IIO_MASK_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IIO"
},
{
"BriefDescription": "AND Mask/match for debug bus; Non-PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if all bits specified by mask match",
"UMask": "0x1",
@@ -1172,8 +1345,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus; Non-PCIE bus and PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if all bits specified by mask match",
"UMask": "0x8",
@@ -1181,8 +1356,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus; Non-PCIE bus and !(PCIE bus)",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if all bits specified by mask match",
"UMask": "0x4",
@@ -1190,8 +1367,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus; PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if all bits specified by mask match",
"UMask": "0x2",
@@ -1199,8 +1378,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus; !(Non-PCIE bus) and PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if all bits specified by mask match",
"UMask": "0x10",
@@ -1208,8 +1389,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if all bits specified by mask match",
"UMask": "0x20",
@@ -1217,8 +1400,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus; Non-PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if any bits specified by mask match",
"UMask": "0x1",
@@ -1226,8 +1411,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus; Non-PCIE bus and PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if any bits specified by mask match",
"UMask": "0x8",
@@ -1235,8 +1422,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus; Non-PCIE bus and !(PCIE bus)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if any bits specified by mask match",
"UMask": "0x4",
@@ -1244,8 +1433,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus; PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if any bits specified by mask match",
"UMask": "0x2",
@@ -1253,8 +1444,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus; !(Non-PCIE bus) and PCIE bus",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if any bits specified by mask match",
"UMask": "0x10",
@@ -1262,8 +1455,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus; !(Non-PCIE bus) and !(PCIE bus)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Asserted if any bits specified by mask match",
"UMask": "0x20",
@@ -1271,15 +1466,19 @@
},
{
"BriefDescription": "Counting disabled",
+ "Counter": "0,1,2,3",
"EventName": "UNC_IIO_NOTHING",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IIO"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1288,9 +1487,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1299,9 +1500,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART2",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1310,9 +1513,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART3",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1321,9 +1526,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1332,9 +1539,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.VTD1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMIC.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1343,9 +1552,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1354,9 +1565,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1365,9 +1578,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART2",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1376,9 +1591,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.ATOMICCMP.PART3",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.ATOMICCMP.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1387,6 +1604,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART0",
@@ -1398,6 +1616,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART1",
@@ -1409,6 +1628,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART2",
@@ -1420,6 +1640,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.PART3",
@@ -1431,9 +1652,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1442,9 +1665,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.VTD1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1453,6 +1678,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART0",
@@ -1464,6 +1690,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART1",
@@ -1475,6 +1702,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART2",
@@ -1486,6 +1714,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.PART3",
@@ -1497,9 +1726,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1508,9 +1739,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.VTD1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1519,9 +1752,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1530,9 +1765,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1541,9 +1778,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART2",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1552,9 +1791,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.PART3",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1563,9 +1804,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1574,9 +1817,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.MSG.VTD1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.MSG.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1585,9 +1830,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1596,9 +1843,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1607,9 +1856,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1618,9 +1869,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1629,9 +1882,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1640,9 +1895,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.VTD1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1651,9 +1908,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1662,9 +1921,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1673,9 +1934,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1684,9 +1947,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1695,9 +1960,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD0",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1706,9 +1973,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.VTD1",
+ "Counter": "0,1",
"Deprecated": "1",
"EventCode": "0x83",
"EventName": "UNC_IIO_PAYLOAD_BYTES_IN.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1717,9 +1986,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1728,9 +1999,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1739,9 +2012,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1750,9 +2025,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1761,9 +2038,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1772,9 +2051,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1783,9 +2064,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1794,9 +2077,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1805,9 +2090,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1816,9 +2103,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1827,9 +2116,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1838,9 +2129,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.CFG_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1849,9 +2142,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1860,9 +2155,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1871,9 +2168,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1882,9 +2181,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1893,9 +2194,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1904,9 +2207,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_READ.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1915,9 +2220,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1926,9 +2233,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -1937,9 +2246,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -1948,9 +2259,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -1959,9 +2272,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1970,9 +2285,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.IO_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1981,9 +2298,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -1992,9 +2311,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2003,9 +2324,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2014,9 +2337,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2025,9 +2350,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2036,9 +2363,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2047,9 +2376,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2058,9 +2389,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2069,9 +2402,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2080,9 +2415,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2091,9 +2428,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2102,9 +2441,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2113,9 +2454,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2124,9 +2467,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2135,9 +2480,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2146,9 +2493,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2157,9 +2506,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2168,9 +2519,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2179,9 +2532,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2190,9 +2545,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2201,9 +2558,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2212,9 +2571,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2223,9 +2584,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD0",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2234,9 +2597,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.VTD1",
+ "Counter": "2,3",
"Deprecated": "1",
"EventCode": "0xC0",
"EventName": "UNC_IIO_PAYLOAD_BYTES_OUT.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2245,17 +2610,21 @@
},
{
"BriefDescription": "Symbol Times on Link",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_IIO_SYMBOL_TIMES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Gen1 - increment once every 4nS, Gen2 - increment once every 2nS, Gen3 - increment once every 1nS",
"Unit": "IIO"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMIC.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2264,9 +2633,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMIC.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2275,9 +2646,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMIC.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2286,9 +2659,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMIC.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2297,9 +2672,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMIC.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2308,9 +2685,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMIC.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2319,9 +2698,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2330,9 +2711,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2341,9 +2724,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2352,9 +2737,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.ATOMICCMP.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2363,9 +2750,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2374,9 +2763,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2385,9 +2776,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2396,9 +2789,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2407,9 +2802,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2418,9 +2815,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2429,9 +2828,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2440,9 +2841,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2451,9 +2854,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2462,9 +2867,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2473,9 +2880,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2484,9 +2893,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2495,9 +2906,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MSG.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2506,9 +2919,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MSG.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2517,9 +2932,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MSG.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2528,9 +2945,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MSG.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2539,9 +2958,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MSG.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2550,9 +2971,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.MSG.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2561,9 +2984,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2572,9 +2997,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2583,9 +3010,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2594,9 +3023,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2605,9 +3036,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2616,9 +3049,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2627,9 +3062,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2638,9 +3075,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2649,9 +3088,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2660,9 +3101,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2671,9 +3114,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2682,9 +3127,11 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_IN.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2693,9 +3140,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2704,9 +3153,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2715,9 +3166,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2726,9 +3179,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2737,9 +3192,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2748,9 +3205,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2759,9 +3218,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2770,9 +3231,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2781,9 +3244,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2792,9 +3257,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2803,9 +3270,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.CFG_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2814,9 +3283,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2825,9 +3296,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2836,9 +3309,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2847,9 +3322,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2858,9 +3335,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2869,9 +3348,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2880,9 +3361,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2891,9 +3374,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2902,9 +3387,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2913,9 +3400,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2924,9 +3413,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2935,9 +3426,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.IO_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2946,9 +3439,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -2957,9 +3452,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -2968,9 +3465,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -2979,9 +3478,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -2990,9 +3491,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3001,9 +3504,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3012,9 +3517,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -3023,9 +3530,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -3034,9 +3543,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -3045,9 +3556,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -3056,9 +3569,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3067,9 +3582,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3078,9 +3595,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -3089,9 +3608,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -3100,9 +3621,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -3111,9 +3634,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -3122,9 +3647,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3133,9 +3660,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3144,9 +3673,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x1",
@@ -3155,9 +3686,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x2",
@@ -3166,9 +3699,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x4",
@@ -3177,9 +3712,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x8",
@@ -3188,9 +3725,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD0",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3199,9 +3738,11 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD1",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_OUT.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x7",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3210,8 +3751,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3221,8 +3764,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3232,8 +3777,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3243,8 +3790,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3254,8 +3803,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3265,8 +3816,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3276,8 +3829,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3287,8 +3842,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3298,8 +3855,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3309,8 +3868,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3320,8 +3881,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3331,8 +3894,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3342,8 +3907,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3353,8 +3920,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3364,8 +3933,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3375,8 +3946,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3386,8 +3959,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3397,8 +3972,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3408,8 +3985,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3419,8 +3998,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3430,8 +4011,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3441,8 +4024,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3452,8 +4037,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3463,8 +4050,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3474,6 +4063,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -3485,6 +4075,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part1",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -3496,6 +4087,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part2",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -3507,6 +4099,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part3",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -3518,8 +4111,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3529,8 +4124,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3540,6 +4137,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part0 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -3551,6 +4149,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part1 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -3562,6 +4161,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part2 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -3573,6 +4173,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part3 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -3584,8 +4185,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3595,8 +4198,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3606,6 +4211,7 @@
},
{
"BriefDescription": "Peer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART0",
"FCMask": "0x07",
@@ -3617,6 +4223,7 @@
},
{
"BriefDescription": "Peer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part1",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART1",
"FCMask": "0x07",
@@ -3628,6 +4235,7 @@
},
{
"BriefDescription": "Peer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part2",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART2",
"FCMask": "0x07",
@@ -3639,6 +4247,7 @@
},
{
"BriefDescription": "Peer to peer read request for up to a 64 byte transaction is made by a different IIO unit to IIO Part3",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART3",
"FCMask": "0x07",
@@ -3650,8 +4259,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3661,8 +4272,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3672,6 +4285,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made to IIO Part0 by a different IIO unit",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0",
"FCMask": "0x07",
@@ -3683,6 +4297,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made to IIO Part1 by a different IIO unit",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1",
"FCMask": "0x07",
@@ -3694,6 +4309,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made to IIO Part2 by a different IIO unit",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2",
"FCMask": "0x07",
@@ -3705,6 +4321,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made to IIO Part3 by a different IIO unit",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3",
"FCMask": "0x07",
@@ -3716,8 +4333,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3727,8 +4346,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU; Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3738,8 +4359,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3749,8 +4372,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3760,8 +4385,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3771,8 +4398,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3782,8 +4411,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3793,8 +4424,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3804,8 +4437,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3815,8 +4450,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3826,8 +4463,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3837,8 +4476,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Completion of atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMICCMP.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3848,6 +4489,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part0 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -3859,6 +4501,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part1 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -3870,6 +4513,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part2 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -3881,6 +4525,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part3 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -3892,8 +4537,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3903,8 +4550,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3914,6 +4563,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part0 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -3925,6 +4575,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part1 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -3936,6 +4587,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part2 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -3947,6 +4599,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part3 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -3958,8 +4611,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3969,8 +4624,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3980,8 +4637,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3991,8 +4650,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -4002,8 +4663,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -4013,8 +4676,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -4024,8 +4689,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -4035,8 +4702,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -4046,6 +4715,7 @@
},
{
"BriefDescription": "Peer to peer read request of up to a 64 byte transaction is made by IIO Part0 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART0",
"FCMask": "0x07",
@@ -4057,6 +4727,7 @@
},
{
"BriefDescription": "Peer to peer read request of up to a 64 byte transaction is made by IIO Part1 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART1",
"FCMask": "0x07",
@@ -4068,6 +4739,7 @@
},
{
"BriefDescription": "Peer to peer read request of up to a 64 byte transaction is made by IIO Part2 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART2",
"FCMask": "0x07",
@@ -4079,6 +4751,7 @@
},
{
"BriefDescription": "Peer to peer read request of up to a 64 byte transaction is made by IIO Part3 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART3",
"FCMask": "0x07",
@@ -4090,8 +4763,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -4101,8 +4776,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -4112,6 +4789,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made by IIO Part0 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0",
"FCMask": "0x07",
@@ -4123,6 +4801,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made by IIO Part1 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1",
"FCMask": "0x07",
@@ -4134,6 +4813,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made by IIO Part2 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2",
"FCMask": "0x07",
@@ -4145,6 +4825,7 @@
},
{
"BriefDescription": "Peer to peer write request of up to a 64 byte transaction is made by IIO Part3 to an IIO target",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3",
"FCMask": "0x07",
@@ -4156,8 +4837,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.VTD0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -4167,8 +4850,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU; Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.VTD1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -4178,72 +4863,90 @@
},
{
"BriefDescription": "VTd Access; context cache miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.CTXT_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Access; L1 miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.L1_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Access; L2 miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.L2_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Access; L3 miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.L3_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Access; Vtd hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.L4_PAGE_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Access; TLB miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.TLB1_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Access; TLB is full",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.TLB_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Access; TLB miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_VTD_ACCESS.TLB_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IIO"
},
{
"BriefDescription": "VTd Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_VTD_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IIO"
}
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json
index d82d2cca6f0a..265cdf334f6a 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "read requests to memory controller. Derived from unc_m_cas_count.rd",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "LLC_MISSES.MEM_READ",
"PerPkg": "1",
@@ -11,6 +12,7 @@
},
{
"BriefDescription": "write requests to memory controller. Derived from unc_m_cas_count.wr",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "LLC_MISSES.MEM_WRITE",
"PerPkg": "1",
@@ -21,8 +23,10 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.BYP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x8",
@@ -30,8 +34,10 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Read",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x1",
@@ -39,6 +45,7 @@
},
{
"BriefDescription": "DRAM Page Activate commands sent due to a write request",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.WR",
"PerPkg": "1",
@@ -48,30 +55,37 @@
},
{
"BriefDescription": "ACT command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.ACT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "CAS command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.CAS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "PRE command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.PRE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "All DRAM CAS Commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
@@ -81,6 +95,7 @@
},
{
"BriefDescription": "All DRAM Read CAS Commands issued (including underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD",
"PerPkg": "1",
@@ -90,14 +105,17 @@
},
{
"BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; Read CAS issued in Read ISOCH Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "All DRAM Read CAS Commands issued (does not include underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_REG",
"PerPkg": "1",
@@ -107,14 +125,17 @@
},
{
"BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; Read CAS issued in RMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_RMM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM Underfill Read CAS Commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL",
"PerPkg": "1",
@@ -124,14 +145,17 @@
},
{
"BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; Read CAS issued in WMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_WMM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "All DRAM Write CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR",
"PerPkg": "1",
@@ -141,16 +165,20 @@
},
{
"BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; Read CAS issued in Write ISOCH Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_RMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of Opportunistic DRAM Write CAS commands issued on this channel while in Read-Major-Mode.",
"UMask": "0x8",
@@ -158,6 +186,7 @@
},
{
"BriefDescription": "DRAM CAS (Column Address Strobe) Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_WMM",
"PerPkg": "1",
@@ -167,6 +196,7 @@
},
{
"BriefDescription": "Memory controller clock ticks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Counts clockticks of the fixed frequency clock of the memory controller using one of the programmable counters.",
@@ -174,63 +204,79 @@
},
{
"BriefDescription": "Clockticks in the Memory Controller using a dedicated 48-bit Fixed Counter",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_M_CLOCKTICKS_F",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_M_DRAM_PRE_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times that the precharge all command was sent.",
"Unit": "iMC"
},
{
"BriefDescription": "ECC Correctable Errors",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_M_ECC_CORRECTABLE_ERRORS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of ECC errors detected and corrected by the iMC on this channel. This counter is only useful with ECC DRAM devices. This count will increment one time for each correction regardless of the number of bits corrected. The iMC can correct up to 4 bit errors in independent channel mode and 8 bit errors in lockstep mode.",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_MAJMODE2.DRAM_CYC",
+ "Counter": "0,1,2,3",
"EventCode": "0xED",
"EventName": "UNC_M_MAJMODE2.DRAM_CYC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_MAJMODE2.DRAM_ENTER",
+ "Counter": "0,1,2,3",
"EventCode": "0xED",
"EventName": "UNC_M_MAJMODE2.DRAM_ENTER",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Major Mode 2 : Cycles in PMM major mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xED",
"EventName": "UNC_M_MAJMODE2.PMM_CYC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Major Mode 2 : Entered PMM major mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xED",
"EventName": "UNC_M_MAJMODE2.PMM_ENTER",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Cycles in a Major Mode; Isoch Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.; We group these two modes together so that we can use four counters to track each of the major modes at one time. These major modes are used whenever there is an ISOCH txn in the memory controller. In these mode, only ISOCH transactions are processed.",
"UMask": "0x8",
@@ -238,8 +284,10 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Partial Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.; This major mode is used to drain starved underfill reads. Regular reads and writes are blocked and only underfill reads will be processed.",
"UMask": "0x4",
@@ -247,8 +295,10 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.; Read Major Mode is the default mode for the iMC, as reads are generally more critical to forward progress than writes.",
"UMask": "0x1",
@@ -256,8 +306,10 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.WRITE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of cycles spent in a major mode (selected by a filter) on the given channel. Major modea are channel-wide, and not a per-rank (or dimm or bank) mode.; This mode is triggered when the WPQ hits high occupancy and causes writes to be higher priority than reads. This can cause blips in the available read bandwidth in the system and temporarily increase read latencies in order to achieve better bus utilizations and higher bandwidth.",
"UMask": "0x2",
@@ -265,6 +317,7 @@
},
{
"BriefDescription": "Intel Optane DC persistent memory bandwidth read (MB/sec). Derived from unc_m_pmm_rpq_inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M_PMM_BANDWIDTH.READ",
"PerPkg": "1",
@@ -273,6 +326,7 @@
},
{
"BriefDescription": "Intel Optane DC persistent memory bandwidth total (MB/sec). Derived from unc_m_pmm_rpq_inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M_PMM_BANDWIDTH.TOTAL",
"MetricExpr": "UNC_M_PMM_RPQ_INSERTS + UNC_M_PMM_WPQ_INSERTS",
@@ -283,6 +337,7 @@
},
{
"BriefDescription": "Intel Optane DC persistent memory bandwidth write (MB/sec). Derived from unc_m_pmm_wpq_inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xE7",
"EventName": "UNC_M_PMM_BANDWIDTH.WRITE",
"PerPkg": "1",
@@ -291,6 +346,7 @@
},
{
"BriefDescription": "All commands for Intel(R) Optane(TM) DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.ALL",
"PerPkg": "1",
@@ -299,22 +355,27 @@
},
{
"BriefDescription": "Misc Commands (error, flow ACKs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.MISC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Misc GNTs",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.MISC_GNT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Regular reads(RPQ) commands for Intel(R) Optane(TM) DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.RD",
"PerPkg": "1",
@@ -324,14 +385,17 @@
},
{
"BriefDescription": "RPQ GNTs",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.RPQ_GNTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Underfill read commands for Intel(R) Optane(TM) DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.UFILL_RD",
"PerPkg": "1",
@@ -341,14 +405,17 @@
},
{
"BriefDescription": "Underfill GNTs",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.WPQ_GNTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Write commands for Intel(R) Optane(TM) DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.WR",
"PerPkg": "1",
@@ -358,102 +425,127 @@
},
{
"BriefDescription": "Expected No data packet (ERID matched NDP encoding)",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.NODATA_EXP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Unexpected No data packet (ERID matched a Read, but data was a NDP)",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.NODATA_UNEXP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Opportunistic Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.OPP_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM ECC Errors",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ECC_ERROR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "PMM ERID detectable parity error",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ERID_ERROR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Read Requests - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.REQS_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Read Requests - Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.REQS_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Major Mode; Cycles PMM is in Partial Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xEC",
"EventName": "UNC_M_PMM_MAJMODE1.PARTIAL_WR_CYC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xEC",
"EventName": "UNC_M_PMM_MAJMODE1.PARTIAL_WR_ENTER",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xEC",
"EventName": "UNC_M_PMM_MAJMODE1.PARTIAL_WR_EXIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Major Mode; Cycles PMM is in Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xEC",
"EventName": "UNC_M_PMM_MAJMODE1.RD_CYC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Major Mode; Cycles PMM is in Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xEC",
"EventName": "UNC_M_PMM_MAJMODE1.WR_CYC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Intel Optane DC persistent memory read latency (ns). Derived from unc_m_pmm_rpq_occupancy.all",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_READ_LATENCY",
"MetricExpr": "UNC_M_PMM_RPQ_OCCUPANCY.ALL / UNC_M_PMM_RPQ_INSERTS / UNC_M_CLOCKTICKS",
@@ -465,20 +557,25 @@
},
{
"BriefDescription": "PMM Read Queue Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M_PMM_RPQ_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M_PMM_RPQ_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Write requests allocated in the PMM Write Pending Queue for Intel Optane DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M_PMM_RPQ_INSERTS",
"PerPkg": "1",
@@ -486,6 +583,7 @@
},
{
"BriefDescription": "Read Pending Queue Occupancy of all read requests for Intel Optane DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -494,28 +592,35 @@
},
{
"BriefDescription": "PMM Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.GNT_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Write Queue Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_M_PMM_WPQ_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Write Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M_PMM_WPQ_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Write requests allocated in the PMM Write Pending Queue for Intel Optane DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xE7",
"EventName": "UNC_M_PMM_WPQ_INSERTS",
"PerPkg": "1",
@@ -523,6 +628,7 @@
},
{
"BriefDescription": "Write Pending Queue Occupancy of all write requests for Intel(R) Optane(TM) DC persistent memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -531,44 +637,55 @@
},
{
"BriefDescription": "PMM Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.CAS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.PWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PMM_WPQ_PCOMMIT",
+ "Counter": "0,1,2,3",
"EventCode": "0xE8",
"EventName": "UNC_M_PMM_WPQ_PCOMMIT",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PMM_WPQ_PCOMMIT_CYC",
+ "Counter": "0,1,2,3",
"EventCode": "0xE9",
"EventName": "UNC_M_PMM_WPQ_PCOMMIT_CYC",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Channel DLLOFF Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M_POWER_CHANNEL_DLLOFF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles when all the ranks in the channel are in CKE Slow (DLLOFF) mode.",
"Unit": "iMC"
},
{
"BriefDescription": "Cycles where DRAM ranks are in power down (CKE) mode+C37",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M_POWER_CHANNEL_PPD",
"MetricExpr": "(UNC_M_POWER_CHANNEL_PPD / UNC_M_CLOCKTICKS) * 100",
@@ -579,8 +696,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x1",
@@ -588,8 +707,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x2",
@@ -597,8 +718,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x4",
@@ -606,8 +729,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x8",
@@ -615,8 +740,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x10",
@@ -624,8 +751,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x20",
@@ -633,8 +762,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x40",
@@ -642,8 +773,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x80",
@@ -651,21 +784,26 @@
},
{
"BriefDescription": "Critical Throttle Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the iMC is in critical thermal throttling. When this happens, all traffic is blocked. This should be rare unless something bad is going on in the platform. There is no filtering by rank for this event.",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_POWER_PCU_THROTTLING",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M_POWER_PCU_THROTTLING",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Cycles Memory is in self refresh power mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M_POWER_SELF_REFRESH",
"MetricExpr": "(UNC_M_POWER_SELF_REFRESH / UNC_M_CLOCKTICKS) * 100",
@@ -676,8 +814,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.; Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.",
"UMask": "0x1",
@@ -685,8 +825,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x2",
@@ -694,8 +836,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x4",
@@ -703,8 +847,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x8",
@@ -712,8 +858,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x10",
@@ -721,8 +869,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x20",
@@ -730,8 +880,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x40",
@@ -739,8 +891,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x80",
@@ -748,8 +902,10 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Read Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.; Filter for when a read preempts another read.",
"UMask": "0x1",
@@ -757,8 +913,10 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Write Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_WR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times a read in the iMC preempts another read or write. Generally reads to an open page are issued ahead of requests to closed pages. This improves the page hit rate of the system. However, high priority requests can cause pages of active requests to be closed in order to get them out. This will reduce the latency of the high-priority request at the expense of lower bandwidth and increased overall average latency.; Filter for when a read preempts a write.",
"UMask": "0x2",
@@ -766,8 +924,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.BYP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x10",
@@ -775,8 +935,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to timer expiration",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_CLOSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of DRAM Precharge commands sent on this channel.; Counts the number of DRAM Precharge commands sent on this channel as a result of the page close counter expiring. This does not include implicit precharge commands sent in auto-precharge mode.",
"UMask": "0x2",
@@ -784,6 +946,7 @@
},
{
"BriefDescription": "Pre-charges due to page misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_MISS",
"PerPkg": "1",
@@ -793,6 +956,7 @@
},
{
"BriefDescription": "Pre-charge for reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.RD",
"PerPkg": "1",
@@ -802,8 +966,10 @@
},
{
"BriefDescription": "Pre-charge for writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.WR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x8",
@@ -811,1390 +977,1739 @@
},
{
"BriefDescription": "Read CAS issued with HIGH priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.HIGH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Read CAS issued with LOW priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.LOW",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Read CAS issued with MEDIUM priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.MED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Read CAS issued with PANIC NON ISOCH priority (starved)",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.PANIC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 3; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M_RD_CAS_RANK3.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M_RPQ_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. This count should be similar count in the HA which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see RPQ become full except for potentially during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries.",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS",
"PerPkg": "1",
@@ -2203,6 +2718,7 @@
},
{
"BriefDescription": "Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M_RPQ_OCCUPANCY",
"PerPkg": "1",
@@ -2211,452 +2727,565 @@
},
{
"BriefDescription": "Scoreboard Accesses; Write Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.FM_RD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses; Write Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.FM_WR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses; FM read completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.NM_RD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses; FM write completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.NM_WR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses; Read Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.RD_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses; Read Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.RD_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses; NM read completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.WR_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses; NM write completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.WR_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Alloc",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.ALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Dealloc",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.DEALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Read Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FMRD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Write Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FMWR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Read Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.NMRD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Write Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.NMWR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.REJ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Valid",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.VLD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M_SB_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Cycles Not-Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M_SB_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Block region reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Block region writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Dealloc all commands (for error flows)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.DEALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Patrol inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.PATROL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Persistent Mem reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.PMM_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Persistent Mem writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.PMM_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts; Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy; Block region reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy; Block region writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy; Patrol",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.PATROL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy; Persistent Mem reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy; Persistent Mem writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy; Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy; Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected; FM requests rejected due to full address conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.FM_ADDR_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected; NM requests rejected due to set conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.NM_SET_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected; Patrol requests rejected due to set conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.PATROL_SET_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMRD_CLR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMRD_SET",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Write - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMWR_CLR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMWR_SET",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMRD_CLR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMRD_SET",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Write - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMWR_CLR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMWR_SET",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Read",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Far Mem Write",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Read",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.NMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Near Mem Write",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.NMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.DDR4_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.DDR4_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.NEW",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.NEW",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.OCC",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.OCC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM0_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM0_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM1_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM1_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM2_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM2_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.RD_HIT",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.RD_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.RD_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.RD_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "All hits to Near Memory(DRAM cache) in Memory Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.HIT",
"PerPkg": "1",
@@ -2666,6 +3295,7 @@
},
{
"BriefDescription": "All Clean line misses to Near Memory(DRAM cache) in Memory Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.MISS_CLEAN",
"PerPkg": "1",
@@ -2675,6 +3305,7 @@
},
{
"BriefDescription": "All dirty line misses to Near Memory(DRAM cache) in Memory Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.MISS_DIRTY",
"PerPkg": "1",
@@ -2684,46 +3315,57 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold; Transition from WMM to RMM because of starve counter",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.LOW_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.STARVE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.VMSE_RETRY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M_WPQ_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overhead.",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies.",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS",
"PerPkg": "1",
@@ -2732,6 +3374,7 @@
},
{
"BriefDescription": "Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M_WPQ_OCCUPANCY",
"PerPkg": "1",
@@ -2740,1359 +3383,1701 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"Unit": "iMC"
},
{
"BriefDescription": "Not getting the requested Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_M_WRONG_MM",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 2; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M_WR_CAS_RANK2.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 3; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M_WR_CAS_RANK3.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.ALLBANKS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK0",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK10",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK11",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xb",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK12",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK13",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK14",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK15",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK6",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x6",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK8",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK9",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x9",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x11",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x12",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x13",
"Unit": "iMC"
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x14",
"Unit": "iMC"
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-power.json b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-power.json
index c6254af7a468..809b86dde933 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-power.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/uncore-power.json
@@ -1,195 +1,248 @@
[
{
"BriefDescription": "pclk Cycles",
+ "Counter": "0,1,2,3",
"EventName": "UNC_P_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_CORE_TRANSITION_CYCLES",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_P_CORE_TRANSITION_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_DEMOTIONS",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_P_DEMOTIONS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 0 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "UNC_P_FIVR_PS_PS0_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles spent in phase-shedding power state 0",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 1 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UNC_P_FIVR_PS_PS1_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles spent in phase-shedding power state 1",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 2 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x77",
"EventName": "UNC_P_FIVR_PS_PS2_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles spent in phase-shedding power state 2",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 3 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x78",
"EventName": "UNC_P_FIVR_PS_PS3_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles spent in phase-shedding power state 3",
"Unit": "PCU"
},
{
"BriefDescription": "Thermal Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when thermal conditions are the upper limit on frequency. This is related to the THERMAL_THROTTLE CYCLES_ABOVE_TEMP event, which always counts cycles when we are above the thermal temperature. This event (STRONGEST_UPPER_LIMIT) is sampled at the output of the algorithm that determines the actual frequency, while THERMAL_THROTTLE looks at the input.",
"Unit": "PCU"
},
{
"BriefDescription": "Power Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when power is the upper limit on frequency.",
"Unit": "PCU"
},
{
"BriefDescription": "IO P Limit Strongest Lower Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs to the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW. This is necessary for when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidth.",
"Unit": "PCU"
},
{
"BriefDescription": "Cycles spent changing Frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_P_FREQ_TRANS_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the system is changing frequency. This can not be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_MCP_PROCHOT_CYCLES",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_P_MCP_PROCHOT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Memory Phase Shedding Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the package was in C0. This event can be used in conjunction with edge detect to count C0 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C2E",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the package was in C2E. This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the package was in C3. This event can be used in conjunction with edge detect to count C3 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C6",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles when the package was in C6. This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_PMAX_THROTTLED_CYCLES",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_P_PMAX_THROTTLED_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C0 and C1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "UMask": "0x40",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "UMask": "0x80",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C6 and C7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "UMask": "0xc0",
"Unit": "PCU"
},
{
"BriefDescription": "External Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU"
},
{
"BriefDescription": "Internal Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of cycles that we are in Internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU"
},
{
"BriefDescription": "Total Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_P_TOTAL_TRANSITION_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cycles spent performing core C state transitions across all cores.",
"Unit": "PCU"
},
{
"BriefDescription": "VR Hot",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_P_VR_HOT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
}
diff --git a/tools/perf/pmu-events/arch/x86/cascadelakex/virtual-memory.json b/tools/perf/pmu-events/arch/x86/cascadelakex/virtual-memory.json
index f59405877ae8..ad33fff57c03 100644
--- a/tools/perf/pmu-events/arch/x86/cascadelakex/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/cascadelakex/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Load misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Counts demand data loads that caused a page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels, but the walk need not have completed.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a load. EPT page walk duration are excluded in Skylake.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data load to a 1G page",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data load to a 2M/4M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data load to a 4K page",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a load. EPT page walk duration are excluded in Skylake.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a load. EPT page walk duration are excluded in Skylake microarchitecture.",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Counts demand data stores that caused a page walk of any page size (4K/2M/4M/1G). This implies it missed in all TLB levels, but the walk need not have completed.",
@@ -74,6 +83,7 @@
},
{
"BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
@@ -82,6 +92,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store. EPT page walk duration are excluded in Skylake.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -99,6 +111,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data store to a 1G page",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -107,6 +120,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data store to a 2M/4M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -115,6 +129,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data store to a 4K page",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -123,6 +138,7 @@
},
{
"BriefDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a store. EPT page walk duration are excluded in Skylake.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for a store. EPT page walk duration are excluded in Skylake microarchitecture.",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Counts 1 per cycle for each PMH that is busy with a EPT (Extended Page Table) walk for any request type.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.WALK_PENDING",
"PublicDescription": "Counts cycles for each PMH (Page Miss Handler) that is busy with an EPT (Extended Page Table) walk for any request type.",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "ITLB.ITLB_FLUSH",
"PublicDescription": "Counts the number of flushes of the big or small ITLB pages. Counting include both TLB Flush (covering all sets) and TLB Set Clear (set-specific).",
@@ -147,6 +165,7 @@
},
{
"BriefDescription": "Misses at all ITLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Counts page walks of any page size (4K/2M/4M/1G) caused by a code fetch. This implies it missed in the ITLB and further levels of TLB, but the walk need not have completed.",
@@ -155,6 +174,7 @@
},
{
"BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"SampleAfterValue": "100003",
@@ -162,6 +182,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request. EPT page walk duration are excluded in Skylake.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_ACTIVE",
@@ -171,6 +192,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -179,6 +201,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -187,6 +210,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -195,6 +219,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -203,14 +228,16 @@
},
{
"BriefDescription": "Counts 1 per cycle for each PMH that is busy with a page walk for an instruction fetch request. EPT page walk duration are excluded in Skylake.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_PENDING",
- "PublicDescription": "Counts 1 per cycle for each PMH (Page Miss Handler) that is busy with a page walk for an instruction fetch request. EPT page walk duration are excluded in Skylake michroarchitecture.",
+ "PublicDescription": "Counts 1 per cycle for each PMH (Page Miss Handler) that is busy with a page walk for an instruction fetch request. EPT page walk duration are excluded in Skylake microarchitecture.",
"SampleAfterValue": "100003",
"UMask": "0x10"
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "Counts the number of DTLB flush attempts of the thread-specific entries.",
@@ -219,6 +246,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.).",
diff --git a/tools/perf/pmu-events/arch/x86/clearwaterforest/cache.json b/tools/perf/pmu-events/arch/x86/clearwaterforest/cache.json
new file mode 100644
index 000000000000..ecb7dc252208
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/clearwaterforest/cache.json
@@ -0,0 +1,179 @@
+[
+ {
+ "BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x2e",
+ "EventName": "LONGEST_LAT_CACHE.MISS",
+ "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x41"
+ },
+ {
+ "BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x2e",
+ "EventName": "LONGEST_LAT_CACHE.REFERENCE",
+ "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4f"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
+ "PublicDescription": "Counts the number of load ops retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x81"
+ },
+ {
+ "BriefDescription": "Counts the number of store ops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.ALL_STORES",
+ "PublicDescription": "Counts the number of store ops retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x82"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_1024",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x400",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x80",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x10",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_2048",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x800",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x100",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x20",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x4",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x200",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x40",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x8",
+ "PublicDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+    "BriefDescription": "Counts the number of store uops retired, same as MEM_UOPS_RETIRED.ALL_STORES",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
+    "PublicDescription": "Counts the number of store uops retired, same as MEM_UOPS_RETIRED.ALL_STORES. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/clearwaterforest/counter.json b/tools/perf/pmu-events/arch/x86/clearwaterforest/counter.json
new file mode 100644
index 000000000000..a0eaf5b6f2bc
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/clearwaterforest/counter.json
@@ -0,0 +1,7 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "39"
+ }
+] \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/x86/clearwaterforest/frontend.json b/tools/perf/pmu-events/arch/x86/clearwaterforest/frontend.json
new file mode 100644
index 000000000000..7a9250e5c8f2
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/clearwaterforest/frontend.json
@@ -0,0 +1,18 @@
+[
+ {
+    "BriefDescription": "Counts every time the code stream enters a new cache line by walking sequentially from the previous line or being redirected by a jump.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x80",
+ "EventName": "ICACHE.ACCESSES",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+    "BriefDescription": "Counts every time the code stream enters a new cache line by walking sequentially from the previous line or being redirected by a jump and the instruction cache registers bytes are not present.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x80",
+ "EventName": "ICACHE.MISSES",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/clearwaterforest/memory.json b/tools/perf/pmu-events/arch/x86/clearwaterforest/memory.json
new file mode 100644
index 000000000000..58e543550279
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/clearwaterforest/memory.json
@@ -0,0 +1,24 @@
+[
+ {
+ "BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x33FBFC00001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x33FBFC00002",
+ "PublicDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/clearwaterforest/pipeline.json b/tools/perf/pmu-events/arch/x86/clearwaterforest/pipeline.json
new file mode 100644
index 000000000000..26bd12fefa3d
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/clearwaterforest/pipeline.json
@@ -0,0 +1,115 @@
+[
+ {
+ "BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
+ "PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles.",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.CORE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted core clock cycles. [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.CORE_P",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted reference clock cycles.",
+ "Counter": "Fixed counter 2",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
+ "PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general purpose performance counter.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles.",
+ "Counter": "Fixed counter 1",
+ "EventName": "CPU_CLK_UNHALTED.THREAD",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted core clock cycles. [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.THREAD_P",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of instructions retired.",
+ "Counter": "Fixed counter 0",
+ "EventName": "INST_RETIRED.ANY",
+ "PublicDescription": "Fixed Counter: Counts the number of instructions retired. Available PDIST counters: 32",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.ANY_P",
+ "PublicDescription": "Counts the number of instructions retired. Available PDIST counters: 0,1",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
+ "Counter": "36",
+ "EventName": "TOPDOWN_BAD_SPECULATION.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls. [This event is alias to TOPDOWN_BE_BOUND.ALL_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN_BE_BOUND.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls. [This event is alias to TOPDOWN_BE_BOUND.ALL]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN_BE_BOUND.ALL_P",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of retirement slots not consumed due to front end stalls.",
+ "Counter": "37",
+ "EventName": "TOPDOWN_FE_BOUND.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6"
+ },
+ {
+ "BriefDescription": "Fixed Counter: Counts the number of consumed retirement slots.",
+ "Counter": "38",
+ "EventName": "TOPDOWN_RETIRING.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/clearwaterforest/virtual-memory.json b/tools/perf/pmu-events/arch/x86/clearwaterforest/virtual-memory.json
new file mode 100644
index 000000000000..78f2b835c1fa
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/clearwaterforest/virtual-memory.json
@@ -0,0 +1,29 @@
+[
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to any page size.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xe"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to any page size.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xe"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xe"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/cache.json b/tools/perf/pmu-events/arch/x86/elkhartlake/cache.json
index c6be60584522..3410caf8a57a 100644
--- a/tools/perf/pmu-events/arch/x86/elkhartlake/cache.json
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of core requests (demand and L1 prefetchers) rejected by the L2 queue (L2Q) due to a full condition.",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "CORE_REJECT_L2Q.ANY",
"PublicDescription": "Counts the number of (demand and L1 prefetchers) core requests rejected by the L2 queue (L2Q) due to a full or nearly full condition, which likely indicates back pressure from L2Q. It also counts requests that would have gone directly to the External Queue (XQ), but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a cores dirty eviction when the address conflicts incoming external snoops. (Note that L2 prefetcher requests that are dropped are not counted by this event). Counts on a per core basis.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Counts the number of L1D cacheline (dirty) evictions caused by load misses, stores, and prefetches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "DL1.DIRTY_EVICTION",
"PublicDescription": "Counts the number of L1D cacheline (dirty) evictions caused by load misses, stores, and prefetches. Does not count evictions or dirty writebacks caused by snoops. Does not count a replacement unless a (dirty) line was written back.",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Counts the number of demand and prefetch transactions that the External Queue (XQ) rejects due to a full or near full condition.",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "L2_REJECT_XQ.ANY",
"PublicDescription": "Counts the number of demand and prefetch transactions that the External Queue (XQ) rejects due to a full or near full condition which likely indicates back pressure from the IDI link. The XQ may reject transactions from the L2Q (non-cacheable requests), BBL (L2 misses) and WOB (L2 write-back victims).",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Counts the total number of L2 Cache accesses. Counts on a per core basis.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.ALL",
"PublicDescription": "Counts the total number of L2 Cache Accesses, includes hits, misses, rejects front door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only. Counts on a per core basis.",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Counts the number of L2 Cache accesses that resulted in a hit. Counts on a per core basis.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.HIT",
"PublicDescription": "Counts the number of L2 Cache accesses that resulted in a hit from a front door request only (does not include rejects or recycles), Counts on a per core basis.",
@@ -38,6 +43,7 @@
},
{
"BriefDescription": "Counts the number of L2 Cache accesses that resulted in a miss. Counts on a per core basis.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.MISS",
"PublicDescription": "Counts the number of L2 Cache accesses that resulted in a miss from a front door request only (does not include rejects or recycles). Counts on a per core basis.",
@@ -46,6 +52,7 @@
},
{
"BriefDescription": "Counts the number of L2 Cache accesses that miss the L2 and get rejected. Counts on a per core basis.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.REJECTS",
"PublicDescription": "Counts the number of L2 Cache accesses that miss the L2 and get BBL reject short and long rejects (includes those counted in L2_reject_XQ.any). Counts on a per core basis.",
@@ -54,6 +61,7 @@
},
{
"BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
@@ -62,6 +70,7 @@
},
{
"BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
@@ -70,6 +79,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
@@ -78,6 +88,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_DRAM_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or translation lookaside buffer (TLB) miss which hit in DRAM or MMIO (non-DRAM).",
@@ -86,6 +97,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_L2_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the L2 cache.",
@@ -94,6 +106,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the LLC or other core with HITE/F/M.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.IFETCH_LLC_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
@@ -102,6 +115,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in the L2, LLC, DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD",
"SampleAfterValue": "200003",
@@ -109,6 +123,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load miss which hit in DRAM or MMIO (Non-DRAM).",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_DRAM_HIT",
"SampleAfterValue": "200003",
@@ -116,6 +131,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_L2_HIT",
"SampleAfterValue": "200003",
@@ -123,6 +139,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the LLC or other core with HITE/F/M.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.LOAD_LLC_HIT",
"PublicDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the Last Level Cache (LLC) or other core with HITE/F/M.",
@@ -131,6 +148,7 @@
},
{
"BriefDescription": "Counts the number of cycles the core is stalled due to a store buffer being full.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "MEM_BOUND_STALLS.STORE_BUFFER_FULL",
"SampleAfterValue": "200003",
@@ -138,753 +156,1126 @@
},
{
"BriefDescription": "Counts the number of load uops retired that hit in DRAM.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that hit in DRAM. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x80"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in the L3 cache, in which a snoop was required and modified data was forwarded from another core or module.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HITM",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that hit in the L3 cache, in which a snoop was required and modified data was forwarded from another core or module. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x20"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in the L1 data cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that hit in the L1 data cache. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of load uops retired that miss in the L1 data cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that miss in the L1 data cache. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x8"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in the L2 cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that hit in the L2 cache. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of load uops retired that miss in the L2 cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that miss in the L2 cache. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x10"
},
{
"BriefDescription": "Counts the number of load uops retired that hit in the L3 cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L3_HIT",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that hit in the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x4"
},
{
"BriefDescription": "Counts the number of memory uops retired.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL",
"PEBS": "1",
- "PublicDescription": "Counts the number of memory uops retired. A single uop that performs both a load AND a store will be counted as 1, not 2 (e.g. ADD [mem], CONST)",
+ "PublicDescription": "Counts the number of memory uops retired. A single uop that performs both a load AND a store will be counted as 1, not 2 (e.g. ADD [mem], CONST) Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x83"
},
{
"BriefDescription": "Counts the number of load uops retired.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
"PEBS": "1",
- "PublicDescription": "Counts the total number of load uops retired.",
+ "PublicDescription": "Counts the total number of load uops retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x81"
},
{
"BriefDescription": "Counts the number of store uops retired.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
"PEBS": "1",
- "PublicDescription": "Counts the total number of store uops retired.",
+ "PublicDescription": "Counts the total number of store uops retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x82"
},
{
"BriefDescription": "Counts the number of load uops retired that performed one or more locks.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that performed one or more locks. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x21"
},
{
"BriefDescription": "Counts the number of memory uops retired that were splits.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.SPLIT",
"PEBS": "1",
+ "PublicDescription": "Counts the number of memory uops retired that were splits. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x43"
},
{
"BriefDescription": "Counts the number of retired split load uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired split load uops. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x41"
},
{
"BriefDescription": "Counts the number of retired split store uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired split store uops. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x42"
},
{
"BriefDescription": "Counts all code reads that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0044",
+ "PublicDescription": "Counts all code reads that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0044",
+ "PublicDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0044",
+ "PublicDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0044",
+ "PublicDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0044",
+ "PublicDescription": "Counts all code reads that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all code reads that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0044",
+ "PublicDescription": "Counts all code reads that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.COREWB_M.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3000000010000",
+ "PublicDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.COREWB_M.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3001F803C0000",
+ "PublicDescription": "Counts modified writebacks from L1 cache and L2 cache that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.COREWB_M.OUTSTANDING",
+ "MSRIndex": "0x1a6",
+ "MSRValue": "0x8003000000000000",
+ "PublicDescription": "Counts modified writebacks from L1 cache and L2 cache that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.OUTSTANDING",
+ "MSRIndex": "0x1a6",
+ "MSRValue": "0x8000000000000001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.ANY_RESPONSE",
+ "Counter": "0,1,2,3",
+ "Deprecated": "1",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.ANY_RESPONSE Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HITM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HITM Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HIT_NO_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HIT_NO_FWD Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_HIT_WITH_FWD Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_MISS Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_NOT_NEEDED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_HIT.SNOOP_NOT_NEEDED Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.OUTSTANDING",
+ "Counter": "0,1,2,3",
+ "Deprecated": "1",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_RD.OUTSTANDING",
+ "MSRIndex": "0x1a6",
+ "MSRValue": "0x8000000000000001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.OUTSTANDING Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_RFO.OUTSTANDING",
+ "MSRIndex": "0x1a6",
+ "MSRValue": "0x8000000000000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify a full 64 byte cacheline that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.FULL_STREAMING_WR.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x801F803C0000",
+ "PublicDescription": "Counts streaming stores which modify a full 64 byte cacheline that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache hardware prefetches and software prefetches (except PREFETCHW and PFRFO) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L1D_AND_SWPF.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10400",
+ "PublicDescription": "Counts L1 data cache hardware prefetches and software prefetches (except PREFETCHW and PFRFO) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L1 data cache hardware prefetches and software prefetches (except PREFETCHW and PFRFO) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0400",
+ "PublicDescription": "Counts L1 data cache hardware prefetches and software prefetches (except PREFETCHW and PFRFO) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_CODE_RD.OUTSTANDING",
+ "MSRIndex": "0x1a6",
+ "MSRValue": "0x8000000000000040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_RFO.OUTSTANDING",
+ "MSRIndex": "0x1a6",
+ "MSRValue": "0x8000000000000020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.L1WB_M.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1000000010000",
+ "PublicDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.L1WB_M.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1001F803C0000",
+ "PublicDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.L2WB_M.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x2000000010000",
+ "PublicDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.L2WB_M.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2001F803C0000",
+ "PublicDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.PARTIAL_STREAMING_WR.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x401F803C0000",
+ "PublicDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that were supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2003C0477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1003C0477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.READS_TO_CORE.OUTSTANDING",
+ "MSRIndex": "0x1a6",
+ "MSRValue": "0x8000000000000477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.STREAMING_WR.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F803C0800",
+ "PublicDescription": "Counts streaming stores that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x101F803C0000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1010003C0000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1004003C0000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, but no data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1008003C0000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent but the snoop missed.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1002003C0000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by the L3 cache where a snoop was sent but the snoop missed. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were supplied by the L3 cache where no snoop was needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1001003C0000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by the L3 cache where no snoop was needed to satisfy the request. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory writes that were supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_WR.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x201F803C0000",
+ "PublicDescription": "Counts uncached memory writes that were supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to instruction cache misses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ICACHE",
"SampleAfterValue": "1000003",
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/counter.json b/tools/perf/pmu-events/arch/x86/elkhartlake/counter.json
new file mode 100644
index 000000000000..aa443347b694
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/counter.json
@@ -0,0 +1,7 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ }
+]
\ No newline at end of file
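For reference, the new counter.json describes the core PMU's counter layout on this platform (3 fixed and 4 general-purpose counters, which matches the "Counter": "0,1,2,3" lists added throughout the patch). A minimal sketch of reading it, assuming a kernel checkout and Python's standard json module, with the path taken from the hunk header above:

import json

# Assumed path, copied from the diff header above; adjust for your checkout.
path = "tools/perf/pmu-events/arch/x86/elkhartlake/counter.json"
with open(path) as f:
    units = json.load(f)

for unit in units:
    # Each entry names a PMU unit and how many fixed/generic counters it exposes.
    print(f"unit={unit['Unit']} fixed={unit['CountersNumFixed']} generic={unit['CountersNumGeneric']}")
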
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/floating-point.json b/tools/perf/pmu-events/arch/x86/elkhartlake/floating-point.json
index 88522244b760..f47d97dfe0d9 100644
--- a/tools/perf/pmu-events/arch/x86/elkhartlake/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of cycles the floating point divider is busy.",
+ "Counter": "0,1,2,3",
"EventCode": "0xcd",
"EventName": "CYCLES_DIV_BUSY.FPDIV",
"PublicDescription": "Counts the number of cycles the floating point divider is busy. Does not imply a stall waiting for the divider.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts the number of floating point operations retired that required microcode assist.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.FP_ASSIST",
"PublicDescription": "Counts the number of floating point operations retired that required microcode assist, which is not a reflection of the number of FP operations, instructions or uops.",
@@ -17,9 +19,11 @@
},
{
"BriefDescription": "Counts the number of floating point divide uops retired (x87 and SSE, including x87 sqrt).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.FPDIV",
"PEBS": "1",
+ "PublicDescription": "Counts the number of floating point divide uops retired (x87 and SSE, including x87 sqrt). Available PDIST counters: 0",
"SampleAfterValue": "2000003",
"UMask": "0x8"
}
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/frontend.json b/tools/perf/pmu-events/arch/x86/elkhartlake/frontend.json
index 5ba998e06592..6d131ed90242 100644
--- a/tools/perf/pmu-events/arch/x86/elkhartlake/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number of BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts the number of BACLEARS due to a conditional jump.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.COND",
"SampleAfterValue": "200003",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Counts the number of BACLEARS due to an indirect branch.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.INDIRECT",
"SampleAfterValue": "200003",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Counts the number of BACLEARS due to a return branch.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.RETURN",
"SampleAfterValue": "200003",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Counts the number of BACLEARS due to a direct, unconditional jump.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.UNCOND",
"SampleAfterValue": "200003",
@@ -37,6 +42,7 @@
},
{
"BriefDescription": "Counts the number of times a decode restriction reduces the decode throughput due to wrong instruction length prediction.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe9",
"EventName": "DECODE_RESTRICTION.PREDECODE_WRONG",
"SampleAfterValue": "200003",
@@ -44,6 +50,7 @@
},
{
"BriefDescription": "Counts the number of requests to the instruction cache for one or more bytes of a cache line.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"PublicDescription": "Counts the total number of requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
@@ -52,6 +59,7 @@
},
{
"BriefDescription": "Counts the number of instruction cache hits.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PublicDescription": "Counts the number of requests that hit in the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
@@ -60,6 +68,7 @@
},
{
"BriefDescription": "Counts the number of instruction cache misses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "Counts the number of missed requests to the instruction cache. The event only counts new cache line accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count as one. Specifically, the event counts when accesses from sequential code crosses the cache line boundary, or when a branch target is moved to a new line or to a non-sequential byte chunk of the same line.",
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/memory.json b/tools/perf/pmu-events/arch/x86/elkhartlake/memory.json
index c02eb0e836ad..417cd78fc048 100644
--- a/tools/perf/pmu-events/arch/x86/elkhartlake/memory.json
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of machine clears due to memory ordering caused by a snoop from an external agent. Does not count internally generated machine clears such as those due to memory disambiguation.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"SampleAfterValue": "20003",
@@ -8,352 +9,652 @@
},
{
"BriefDescription": "Counts the number of misaligned load uops that are 4K page splits.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "MISALIGN_MEM_REF.LOAD_PAGE_SPLIT",
"PEBS": "1",
+ "PublicDescription": "Counts the number of misaligned load uops that are 4K page splits. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of misaligned store uops that are 4K page splits.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "MISALIGN_MEM_REF.STORE_PAGE_SPLIT",
"PEBS": "1",
+ "PublicDescription": "Counts the number of misaligned store uops that are 4K page splits. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x4"
},
{
+ "BriefDescription": "Counts all code reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.ALL_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000044",
+ "PublicDescription": "Counts all code reads that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts all code reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000044",
+ "PublicDescription": "Counts all code reads that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all code reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000044",
+ "PublicDescription": "Counts all code reads that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all code reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.ALL_CODE_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000044",
+ "PublicDescription": "Counts all code reads that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.COREWB_M.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3002184000000",
+ "PublicDescription": "Counts modified writebacks from L1 cache and L2 cache that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.COREWB_M.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3002184000000",
+ "PublicDescription": "Counts modified writebacks from L1 cache and L2 cache that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000001",
+ "PublicDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.DRAM",
+ "Counter": "0,1,2,3",
+ "Deprecated": "1",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.DRAM Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_MISS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_MISS Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_MISS_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.L3_MISS_LOCAL Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.LOCAL_DRAM",
+ "Counter": "0,1,2,3",
+ "Deprecated": "1",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000001",
+ "PublicDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.LOCAL_DRAM Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify a full 64 byte cacheline that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.FULL_STREAMING_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x802184000000",
+ "PublicDescription": "Counts streaming stores which modify a full 64 byte cacheline that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify a full 64 byte cacheline that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.FULL_STREAMING_WR.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x802184000000",
+ "PublicDescription": "Counts streaming stores which modify a full 64 byte cacheline that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_CODE_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_CODE_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000040",
+ "PublicDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_DATA_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000010",
+ "PublicDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.HWPF_L2_RFO.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.HWPF_L2_RFO.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000020",
+ "PublicDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.L1WB_M.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1002184000000",
+ "PublicDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.L1WB_M.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1002184000000",
+ "PublicDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.L2WB_M.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2002184000000",
+ "PublicDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.L2WB_M.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2002184000000",
+ "PublicDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O accesses, that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.OTHER.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184008000",
+ "PublicDescription": "Counts miscellaneous requests, such as I/O accesses, that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O accesses, that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.OTHER.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184008000",
+ "PublicDescription": "Counts miscellaneous requests, such as I/O accesses, that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.PARTIAL_STREAMING_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x402184000000",
+ "PublicDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.PARTIAL_STREAMING_WR.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x402184000000",
+ "PublicDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all hardware and software prefetches that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.PREFETCHES.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000470",
+ "PublicDescription": "Counts all hardware and software prefetches that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.READS_TO_CORE.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000477",
+ "PublicDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.STREAMING_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000800",
+ "PublicDescription": "Counts streaming stores that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.STREAMING_WR.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x2184000800",
+ "PublicDescription": "Counts streaming stores that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts uncached memory reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.UC_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100184000000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x102184000000",
+ "PublicDescription": "Counts uncached memory reads that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x102184000000",
+ "PublicDescription": "Counts uncached memory reads that were not supplied by the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts uncached memory reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0XB7",
+ "EventName": "OCR.UC_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100184000000",
+ "PublicDescription": "Counts uncached memory reads that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory writes that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x202184000000",
+ "PublicDescription": "Counts uncached memory writes that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory writes that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_WR.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x202184000000",
+ "PublicDescription": "Counts uncached memory writes that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
}
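The bulk of this change is mechanical: each event entry gains a "Counter": "0,1,2,3" field and, in many cases, a PublicDescription noting the available PDIST counters. A minimal sketch, assuming the same checkout layout as above, that scans the elkhartlake event files and flags any event entry still missing the new field:

import glob
import json

# Assumed layout: all per-topic JSON files for this platform sit in one directory.
for path in sorted(glob.glob("tools/perf/pmu-events/arch/x86/elkhartlake/*.json")):
    with open(path) as f:
        entries = json.load(f)
    for entry in entries:
        # counter.json entries have no EventName, so this guard skips them.
        if "EventName" in entry and "Counter" not in entry:
            print(f"{path}: {entry['EventName']} is missing a Counter field")
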
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/other.json b/tools/perf/pmu-events/arch/x86/elkhartlake/other.json
index fefbc383b840..2cdc6b64f31d 100644
--- a/tools/perf/pmu-events/arch/x86/elkhartlake/other.json
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "This event is deprecated. Refer to new event BUS_LOCK.SELF_LOCKS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EdgeDetect": "1",
"EventCode": "0x63",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts the number of unhalted cycles a core is blocked due to an accepted lock issued by other cores.",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "BUS_LOCK.BLOCK_CYCLES",
"PublicDescription": "Counts the number of unhalted cycles a core is blocked due to an accepted lock issued by other cores. Counts on a per core basis.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event BUS_LOCK.BLOCK_CYCLES",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x63",
"EventName": "BUS_LOCK.CYCLES_OTHER_BLOCK",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event BUS_LOCK.LOCK_CYCLES",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x63",
"EventName": "BUS_LOCK.CYCLES_SELF_BLOCK",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Counts the number of unhalted cycles a core is blocked due to an accepted lock it issued.",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "BUS_LOCK.LOCK_CYCLES",
"PublicDescription": "Counts the number of unhalted cycles a core is blocked due to an accepted lock it issued. Counts on a per core basis.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Counts the number of bus locks a core issued its self (e.g. lock to UC or Split Lock) and does not include cache locks.",
+ "Counter": "0,1,2,3",
"EdgeDetect": "1",
"EventCode": "0x63",
"EventName": "BUS_LOCK.SELF_LOCKS",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event MEM_BOUND_STALLS.LOAD_DRAM_HIT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "C0_STALLS.LOAD_DRAM_HIT",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event MEM_BOUND_STALLS.LOAD_L2_HIT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "C0_STALLS.LOAD_L2_HIT",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event MEM_BOUND_STALLS.LOAD_LLC_HIT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "C0_STALLS.LOAD_LLC_HIT",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "Counts the number of core cycles during which interrupts are masked (disabled).",
+ "Counter": "0,1,2,3",
"EventCode": "0xcb",
"EventName": "HW_INTERRUPTS.MASKED",
"PublicDescription": "Counts the number of core cycles during which interrupts are masked (disabled). Increments by 1 each core cycle that EFLAGS.IF is 0, regardless of whether interrupts are pending or not.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Counts the number of core cycles during which there are pending interrupts while interrupts are masked (disabled).",
+ "Counter": "0,1,2,3",
"EventCode": "0xcb",
"EventName": "HW_INTERRUPTS.PENDING_AND_MASKED",
"PublicDescription": "Counts the number of core cycles during which there are pending interrupts while interrupts are masked (disabled). Increments by 1 each core cycle that both EFLAGS.IF is 0 and an INTR is pending (which means the APIC is telling the ROB to cause an INTR). This event does not increment if EFLAGS.IF is 0 but all interrupt in the APICs Interrupt Request Register (IRR) are inhibited by the PPR (thus either by ISRV or TPR) because in these cases the interrupts would be held up in the APIC and would not be pended to the ROB. This event does count when an interrupt is only inhibited by MOV/POP SS state machines or the STI state machine. These extra inhibits only last for a single instructions and would not be important.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Counts the number of hardware interrupts received by the processor.",
+ "Counter": "0,1,2,3",
"EventCode": "0xcb",
"EventName": "HW_INTERRUPTS.RECEIVED",
"SampleAfterValue": "203",
@@ -96,446 +108,111 @@
},
{
"BriefDescription": "Counts all code reads that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10044",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all code reads that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.ALL_CODE_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000044",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all code reads that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.ALL_CODE_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000044",
+ "PublicDescription": "Counts all code reads that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all code reads that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.ALL_CODE_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
"MSRValue": "0x8000000000000044",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.COREWB_M.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3000000010000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts modified writebacks from L1 cache and L2 cache that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
- "EventCode": "0XB7",
- "EventName": "OCR.COREWB_M.OUTSTANDING",
- "MSRIndex": "0x1a6",
- "MSRValue": "0x8003000000000000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_CODE_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts cacheable demand data reads, L1 data cache hardware prefetches and software prefetches (except PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_AND_L1PF_RD.OUTSTANDING",
- "MSRIndex": "0x1a6",
- "MSRValue": "0x8000000000000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.ANY_RESPONSE",
- "Deprecated": "1",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.DRAM",
- "Deprecated": "1",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.LOCAL_DRAM",
- "Deprecated": "1",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event OCR.DEMAND_DATA_AND_L1PF_RD.OUTSTANDING",
- "Deprecated": "1",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_DATA_RD.OUTSTANDING",
- "MSRIndex": "0x1a6",
- "MSRValue": "0x8000000000000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_RFO.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
- "EventCode": "0XB7",
- "EventName": "OCR.DEMAND_RFO.OUTSTANDING",
- "MSRIndex": "0x1a6",
- "MSRValue": "0x8000000000000002",
+ "PublicDescription": "Counts all code reads that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.FULL_STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x800000010000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L1 data cache hardware prefetches and software prefetches (except PREFETCHW and PFRFO) that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L1D_AND_SWPF.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10400",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_CODE_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10040",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_CODE_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000040",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_CODE_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000040",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch code reads (written to the L2 cache only) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_CODE_RD.OUTSTANDING",
- "MSRIndex": "0x1a6",
- "MSRValue": "0x8000000000000040",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10010",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_DATA_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000010",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch data reads (written to the L2 cache only) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_DATA_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000010",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10020",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_RFO.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000020",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_RFO.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000020",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L2 cache hardware prefetch RFOs (written to the L2 cache only) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
- "EventCode": "0XB7",
- "EventName": "OCR.HWPF_L2_RFO.OUTSTANDING",
- "MSRIndex": "0x1a6",
- "MSRValue": "0x8000000000000020",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts modified writebacks from L1 cache that miss the L2 cache that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.L1WB_M.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x1000000010000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts modified writeBacks from L2 cache that miss the L3 cache that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.L2WB_M.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x2000000010000",
+ "PublicDescription": "Counts streaming stores which modify a full 64 byte cacheline that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O accesses, that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.OTHER.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x18000",
+ "PublicDescription": "Counts miscellaneous requests, such as I/O accesses, that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.PARTIAL_STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x400000010000",
+ "PublicDescription": "Counts streaming stores which modify only part of a 64 byte cacheline that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all hardware and software prefetches that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.PREFETCHES.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10470",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
- "EventCode": "0XB7",
- "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.READS_TO_CORE.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
- "EventCode": "0XB7",
- "EventName": "OCR.READS_TO_CORE.OUTSTANDING",
- "MSRIndex": "0x1a6",
- "MSRValue": "0x8000000000000477",
+ "PublicDescription": "Counts all hardware and software prefetches that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x100000010000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts uncached memory reads that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.UC_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100184000000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts uncached memory reads that were supplied by DRAM.",
- "EventCode": "0XB7",
- "EventName": "OCR.UC_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100184000000",
+ "PublicDescription": "Counts uncached memory reads that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory reads that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency).",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
"MSRValue": "0x8000100000000000",
+ "PublicDescription": "Counts uncached memory reads that have an outstanding request. Returns the number of cycles until the response is received (i.e. XQ to XQ latency). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts uncached memory writes that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0XB7",
"EventName": "OCR.UC_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x200000010000",
+ "PublicDescription": "Counts uncached memory writes that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
}
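
The OCR.* entries above are off-core response events: the value in MSRValue is programmed into the MSR named by MSRIndex (0x1a6/0x1a7 here), the newly added Counter field records which general-purpose counters the event may be scheduled on, and the "Available PDIST counters: 0" note is carried in PublicDescription. As a quick, illustrative way to inspect what these hunks leave behind, here is a minimal Python sketch; it assumes only the standard library and takes the path of one of these JSON files (for example from a checkout under tools/perf/pmu-events/arch/x86/elkhartlake/) on the command line, which is not something the patch itself provides.

    import json
    import sys

    # Load one pmu-events JSON file: a flat list of event dictionaries.
    with open(sys.argv[1]) as f:
        events = json.load(f)

    # Print the off-core response (OCR.*) events with the fields this patch touches.
    for ev in events:
        name = ev.get("EventName", "")
        if not name.startswith("OCR."):
            continue
        print(f'{name}: EventCode={ev["EventCode"]}, '
              f'MSRIndex={ev.get("MSRIndex", "-")}, MSRValue={ev.get("MSRValue", "-")}, '
              f'Counter={ev.get("Counter", "unspecified")}')

Running this against the pre- and post-patch versions of the file makes the removals above (the DRAM/LOCAL_DRAM variants and the deprecated OCR.DEMAND_DATA_RD.* aliases) easy to compare by event name.
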
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/pipeline.json b/tools/perf/pmu-events/arch/x86/elkhartlake/pipeline.json
index c483c0838e08..0fc2e821b14a 100644
--- a/tools/perf/pmu-events/arch/x86/elkhartlake/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/pipeline.json
@@ -1,126 +1,155 @@
[
{
"BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PEBS": "1",
- "PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
+ "PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for. Available PDIST counters: 0",
"SampleAfterValue": "200003"
},
{
"BriefDescription": "Counts the number of near CALL branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.CALL",
"PEBS": "1",
+ "PublicDescription": "Counts the number of near CALL branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xf9"
},
{
"BriefDescription": "Counts the number of far branch instructions retired, includes far jump, far call and return, and interrupt call and return.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
"PEBS": "1",
+ "PublicDescription": "Counts the number of far branch instructions retired, includes far jump, far call and return, and interrupt call and return. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xbf"
},
{
"BriefDescription": "Counts the number of near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.IND_CALL",
"PEBS": "1",
+ "PublicDescription": "Counts the number of near indirect CALL branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xfb"
},
{
"BriefDescription": "Counts the number of retired JCC (Jump on Conditional Code) branch instructions retired, includes both taken and not taken branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.JCC",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired JCC (Jump on Conditional Code) branch instructions retired, includes both taken and not taken branches. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x7e"
},
{
"BriefDescription": "Counts the number of near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NON_RETURN_IND",
"PEBS": "1",
+ "PublicDescription": "Counts the number of near indirect JMP and near indirect CALL branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xeb"
},
{
"BriefDescription": "Counts the number of near relative CALL branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.REL_CALL",
"PEBS": "1",
+ "PublicDescription": "Counts the number of near relative CALL branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xfd"
},
{
"BriefDescription": "Counts the number of near RET branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.RETURN",
"PEBS": "1",
+ "PublicDescription": "Counts the number of near RET branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xf7"
},
{
"BriefDescription": "Counts the number of taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.TAKEN_JCC",
"PEBS": "1",
+ "PublicDescription": "Counts the number of taken JCC (Jump on Conditional Code) branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xfe"
},
{
"BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PEBS": "1",
- "PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
+ "PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path. Available PDIST counters: 0",
"SampleAfterValue": "200003"
},
{
"BriefDescription": "Counts the number of mispredicted near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.IND_CALL",
"PEBS": "1",
+ "PublicDescription": "Counts the number of mispredicted near indirect CALL branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xfb"
},
{
"BriefDescription": "Counts the number of mispredicted JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.JCC",
"PEBS": "1",
+ "PublicDescription": "Counts the number of mispredicted JCC (Jump on Conditional Code) branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x7e"
},
{
"BriefDescription": "Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NON_RETURN_IND",
"PEBS": "1",
+ "PublicDescription": "Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xeb"
},
{
"BriefDescription": "Counts the number of mispredicted near RET branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RETURN",
"PEBS": "1",
+ "PublicDescription": "Counts the number of mispredicted near RET branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xf7"
},
{
"BriefDescription": "Counts the number of mispredicted taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.TAKEN_JCC",
"PEBS": "1",
+ "PublicDescription": "Counts the number of mispredicted taken JCC (Jump on Conditional Code) branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0xfe"
},
{
"BriefDescription": "Counts the total number of BTCLEARS.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe8",
"EventName": "BTCLEAR.ANY",
"PublicDescription": "Counts the total number of BTCLEARS which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
@@ -128,6 +157,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles. (Fixed event)",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.CORE",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1.",
"SampleAfterValue": "2000003",
@@ -135,6 +165,7 @@
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.CORE_P",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. The core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses a programmable general purpose performance counter.",
@@ -142,6 +173,7 @@
},
{
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses fixed counter 2.",
@@ -150,6 +182,7 @@
},
{
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency. (Fixed event)",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses fixed counter 2.",
"SampleAfterValue": "2000003",
@@ -157,6 +190,7 @@
},
{
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general purpose performance counter.",
@@ -165,6 +199,7 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xcd",
"EventName": "CYCLES_DIV_BUSY.ANY",
@@ -172,6 +207,7 @@
},
{
"BriefDescription": "Counts the number of cycles the integer divider is busy.",
+ "Counter": "0,1,2,3",
"EventCode": "0xcd",
"EventName": "CYCLES_DIV_BUSY.IDIV",
"PublicDescription": "Counts the number of cycles the integer divider is busy. Does not imply a stall waiting for the divider.",
@@ -180,60 +216,72 @@
},
{
"BriefDescription": "Counts the total number of instructions retired. (Fixed event)",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
"PEBS": "1",
- "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0.",
+ "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0. Available PDIST counters: 0",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the total number of instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
"PEBS": "1",
- "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses a programmable general purpose performance counter.",
+ "PublicDescription": "Counts the total number of instructions that retired. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. This event continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses a programmable general purpose performance counter. Available PDIST counters: 0",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Counts the number of retired loads that are blocked because it initially appears to be store forward blocked, but subsequently is shown not to be blocked based on 4K alias check.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.4K_ALIAS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired loads that are blocked because it initially appears to be store forward blocked, but subsequently is shown not to be blocked based on 4K alias check. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x4"
},
{
"BriefDescription": "Counts the number of retired loads that are blocked for any of the following reasons: DTLB miss, address alias, store forward or data unknown (includes memory disambiguation blocks and ESP consuming load blocks).",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.ALL",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired loads that are blocked for any of the following reasons: DTLB miss, address alias, store forward or data unknown (includes memory disambiguation blocks and ESP consuming load blocks). Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x10"
},
{
"BriefDescription": "Counts the number of retired loads that are blocked because its address exactly matches an older store whose data is not ready.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.DATA_UNKNOWN",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired loads that are blocked because its address exactly matches an older store whose data is not ready. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of retired loads that are blocked because its address partially overlapped with an older store.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired loads that are blocked because its address partially overlapped with an older store. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the total number of machine clears for any reason including, but not limited to, memory ordering, memory disambiguation, SMC, and FP assist.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.ANY",
"SampleAfterValue": "20003"
},
{
"BriefDescription": "Counts the number of machine clears due to memory ordering in which an internal load passes an older store within the same CPU.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.DISAMBIGUATION",
"SampleAfterValue": "20003",
@@ -241,6 +289,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to a page fault. Counts both I-Side and D-Side (Loads/Stores) page faults. A page fault occurs when either the page is not present, or an access violation occurs.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.PAGE_FAULT",
"SampleAfterValue": "20003",
@@ -248,6 +297,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to program modifying data (self modifying code) within 1K of a recently fetched code page.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC",
"SampleAfterValue": "20003",
@@ -255,6 +305,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.ALL",
"PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the instruction queue (IQ) even if an FE_bound event occurs during this period. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
@@ -263,6 +314,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to fast nukes such as memory ordering and memory disambiguation machine clears.",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.FASTNUKE",
"SampleAfterValue": "1000003",
@@ -270,6 +322,7 @@
},
{
"BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS",
"SampleAfterValue": "1000003",
@@ -277,6 +330,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to branch mispredicts.",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.MISPREDICT",
"SampleAfterValue": "1000003",
@@ -284,6 +338,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event TOPDOWN_BAD_SPECULATION.FASTNUKE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.MONUKE",
@@ -292,12 +347,14 @@
},
{
"BriefDescription": "Counts the total number of issue slots every cycle that were not consumed by the backend due to backend stalls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.ALL",
"SampleAfterValue": "1000003"
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to certain allocation restrictions.",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS",
"SampleAfterValue": "1000003",
@@ -305,6 +362,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.MEM_SCHEDULER",
"SampleAfterValue": "1000003",
@@ -312,6 +370,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER",
"SampleAfterValue": "1000003",
@@ -319,6 +378,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls).",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.REGISTER",
"SampleAfterValue": "1000003",
@@ -326,6 +386,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to the reorder buffer being full (ROB stalls).",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.REORDER_BUFFER",
"SampleAfterValue": "1000003",
@@ -333,6 +394,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS).",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.SERIALIZATION",
"SampleAfterValue": "1000003",
@@ -340,6 +402,7 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.STORE_BUFFER",
@@ -348,12 +411,14 @@
},
{
"BriefDescription": "Counts the total number of issue slots every cycle that were not consumed by the backend due to frontend stalls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ALL",
"SampleAfterValue": "1000003"
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BACLEARS.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.BRANCH_DETECT",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
@@ -362,6 +427,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTCLEARS.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.BRANCH_RESTEER",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
@@ -370,6 +436,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to the microcode sequencer (MS).",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.CISC",
"SampleAfterValue": "1000003",
@@ -377,6 +444,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to decode stalls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.DECODE",
"SampleAfterValue": "1000003",
@@ -384,6 +452,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to ITLB misses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ITLB",
"PublicDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
@@ -392,6 +461,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to other common frontend stalls not categorized.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.OTHER",
"SampleAfterValue": "1000003",
@@ -399,6 +469,7 @@
},
{
"BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to wrong predecodes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.PREDECODE",
"SampleAfterValue": "1000003",
@@ -406,13 +477,16 @@
},
{
"BriefDescription": "Counts the total number of consumed retirement slots.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "TOPDOWN_RETIRING.ALL",
"PEBS": "1",
+ "PublicDescription": "Counts the total number of consumed retirement slots. Available PDIST counters: 0",
"SampleAfterValue": "1000003"
},
{
"BriefDescription": "Counts the number of uops issued by the front end every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0e",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts the number of uops issued by the front end every cycle. When 4-uops are requested and only 2-uops are delivered, the event counts 2. Uops_issued correlates to the number of ROB entries. If uop takes 2 ROB slots it counts as 2 uops_issued.",
@@ -420,33 +494,40 @@
},
{
"BriefDescription": "Counts the total number of uops retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.ALL",
"PEBS": "1",
+ "PublicDescription": "Counts the total number of uops retired. Available PDIST counters: 0",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Counts the number of integer divide uops retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.IDIV",
"PEBS": "1",
+ "PublicDescription": "Counts the number of integer divide uops retired. Available PDIST counters: 0",
"SampleAfterValue": "2000003",
"UMask": "0x10"
},
{
"BriefDescription": "Counts the number of uops that are from complex flows issued by the micro-sequencer (MS).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.MS",
"PEBS": "1",
- "PublicDescription": "Counts the number of uops that are from complex flows issued by the Microcode Sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
+ "PublicDescription": "Counts the number of uops that are from complex flows issued by the Microcode Sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows. Available PDIST counters: 0",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of x87 uops retired, includes those in MS flows.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.X87",
"PEBS": "1",
+ "PublicDescription": "Counts the number of x87 uops retired, includes those in MS flows. Available PDIST counters: 0",
"SampleAfterValue": "2000003",
"UMask": "0x2"
}
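
The pipeline hunks above make two kinds of additions: a Counter field on every event, separating fixed-counter events (INST_RETIRED.ANY on fixed counter 0, CPU_CLK_UNHALTED.CORE on fixed counter 1, CPU_CLK_UNHALTED.REF_TSC on fixed counter 2) from events limited to general-purpose counters 0-3, and an "Available PDIST counters: 0" suffix on the public descriptions of the PEBS events. A small sketch under the same assumptions as before (Python standard library, file path supplied on the command line) groups the events by the new field:

    import json
    import sys
    from collections import defaultdict

    # Group event names by the "Counter" field added in this patch.
    by_counter = defaultdict(list)
    with open(sys.argv[1]) as f:
        for ev in json.load(f):
            by_counter[ev.get("Counter", "unspecified")].append(ev["EventName"])

    for counter, names in sorted(by_counter.items()):
        print(f'{counter}: {", ".join(sorted(names))}')
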
diff --git a/tools/perf/pmu-events/arch/x86/elkhartlake/virtual-memory.json b/tools/perf/pmu-events/arch/x86/elkhartlake/virtual-memory.json
index cabe29e70e79..bf56d72bb4a7 100644
--- a/tools/perf/pmu-events/arch/x86/elkhartlake/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/elkhartlake/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of page walks due to loads that miss the PDE (Page Directory Entry) cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.PDE_CACHE_MISS",
"SampleAfterValue": "200003",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Counts the number of first level TLB misses but second level hits due to a demand load that did not start a page walk. Account for all page sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"SampleAfterValue": "200003",
@@ -15,6 +17,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to load DTLB misses to any page size.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 1GB pages. Includes page walks that page fault.",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
@@ -39,6 +44,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
@@ -47,6 +53,7 @@
},
{
"BriefDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for demand loads every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for demand loads every cycle. A page walk is outstanding from start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
@@ -55,6 +62,7 @@
},
{
"BriefDescription": "Counts the number of page walks due to stores that miss the PDE (Page Directory Entry) cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.PDE_CACHE_MISS",
"SampleAfterValue": "2000003",
@@ -62,6 +70,7 @@
},
{
"BriefDescription": "Counts the number of first level TLB misses but second level hits due to stores that did not start a page walk. Account for all pages sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"SampleAfterValue": "2000003",
@@ -69,6 +78,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to store DTLB misses to any page size.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -77,6 +87,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 1G pages. Includes page walks that page fault.",
@@ -85,6 +96,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
@@ -93,6 +105,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
@@ -101,6 +114,7 @@
},
{
"BriefDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle. A page walk is outstanding from start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
@@ -109,6 +123,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Directory Entry hits.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.EPDE_HIT",
"PublicDescription": "Counts the number of Extended Page Directory Entry hits. The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches.",
@@ -117,6 +132,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Directory Entry misses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.EPDE_MISS",
"PublicDescription": "Counts the number Extended Page Directory Entry misses. The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches.",
@@ -125,6 +141,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Directory Pointer Entry hits.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.EPDPE_HIT",
"PublicDescription": "Counts the number Extended Page Directory Pointer Entry hits. The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches.",
@@ -133,6 +150,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Directory Pointer Entry misses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.EPDPE_MISS",
"PublicDescription": "Counts the number Extended Page Directory Pointer Entry misses. The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches.",
@@ -141,6 +159,7 @@
},
{
"BriefDescription": "Counts the number of page walks outstanding for an Extended Page table walk including GTLB hits per cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for an Extended Page table walk including GTLB hits per cycle. The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches.",
@@ -149,6 +168,7 @@
},
{
"BriefDescription": "Counts the number of times there was an ITLB miss and a new translation was filled into the ITLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "ITLB.FILLS",
"PublicDescription": "Counts the number of times the machine was unable to find a translation in the Instruction Translation Lookaside Buffer (ITLB) and a new translation was filled into the ITLB. The event is speculative in nature, but will not count translations (page walks) that are begun and not finished, or translations that are finished but not filled into the ITLB.",
@@ -157,6 +177,7 @@
},
{
"BriefDescription": "Counts the number of page walks due to an instruction fetch that miss the PDE (Page Directory Entry) cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.PDE_CACHE_MISS",
"SampleAfterValue": "2000003",
@@ -164,6 +185,7 @@
},
{
"BriefDescription": "Counts the number of first level TLB misses but second level hits due to an instruction fetch that did not start a page walk. Account for all pages sizes. Will result in an ITLB write from STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"SampleAfterValue": "2000003",
@@ -171,6 +193,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
@@ -179,6 +202,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 1G pages. Includes page walks that page fault.",
@@ -187,6 +211,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
@@ -195,6 +220,7 @@
},
{
"BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
@@ -203,6 +229,7 @@
},
{
"BriefDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for instruction fetches every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for instruction fetches every cycle. A page walk is outstanding from start till PMH becomes idle again (ready to serve next walk).",
@@ -211,36 +238,44 @@
},
{
"BriefDescription": "Counts the number of retired loads that are blocked due to a first level TLB miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.DTLB_MISS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of retired loads that are blocked due to a first level TLB miss. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x8"
},
{
"BriefDescription": "Counts the number of memory uops retired that missed in the second level TLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of memory uops retired that missed in the second level TLB. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x13"
},
{
"BriefDescription": "Counts the number of load uops retired that miss in the second Level TLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS_LOADS",
"PEBS": "1",
+ "PublicDescription": "Counts the number of load uops retired that miss in the second Level TLB. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x11"
},
{
"BriefDescription": "Counts the number of store uops retired that miss in the second level TLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS_STORES",
"PEBS": "1",
+ "PublicDescription": "Counts the number of store uops retired that miss in the second level TLB. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x12"
}
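
For the plain EventCode/UMask events in the virtual-memory file above, the JSON fields correspond to perf's raw event syntax (event select in bits 0-7 and unit mask in bits 8-15 of the IA32_PERFEVTSELx register). The sketch below is illustrative only, not how the perf build consumes these files: it reuses the command-line-path assumption, skips fixed-counter entries that have no EventCode, and ignores OCR.* events, which additionally require the off-core response MSR value.

    import json
    import sys

    def to_raw_syntax(ev):
        """Build a 'cpu/event=...,umask=.../' string that perf stat -e accepts."""
        event = int(ev["EventCode"], 16)
        umask = int(ev.get("UMask", "0"), 16)
        return f"cpu/event={event:#x},umask={umask:#x}/"

    with open(sys.argv[1]) as f:
        for ev in json.load(f):
            if "EventCode" not in ev or "MSRValue" in ev:
                continue  # skip fixed-counter and off-core response entries
            print(f'{ev["EventName"]}: {to_raw_syntax(ev)}')

On a machine with the matching CPU, each printed string should behave like the named alias that the perf build generates from the same JSON entry.
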
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json
index ab09bd9fb409..26568e4b77f7 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/cache.json
@@ -1,6 +1,70 @@
[
{
+ "BriefDescription": "Hit snoop reply with data, line invalidated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "CORE_SNOOP_RESPONSE.I_FWD_FE",
+ "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's cache, after the data is forwarded back to the requestor and indicating the data was found unmodified in the (FE) Forward or Exclusive State in this cores caches cache. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "HitM snoop reply with data, line invalidated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "CORE_SNOOP_RESPONSE.I_FWD_M",
+ "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's caches, after the data is forwarded back to the requestor, and indicating the data was found modified(M) in this cores caches cache (aka HitM response). A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Hit snoop reply without sending the data, line invalidated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "CORE_SNOOP_RESPONSE.I_HIT_FSE",
+ "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated in this core's caches without forwarded back to the requestor. The line was in Forward, Shared or Exclusive (FSE) state in this cores caches. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Line not found snoop reply",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "CORE_SNOOP_RESPONSE.MISS",
+ "PublicDescription": "Counts responses to snoops indicating that the data was not found (IHitI) in this core's caches. A single snoop response from the core counts on all hyperthreads of the Core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Hit snoop reply with data, line kept in Shared state.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "CORE_SNOOP_RESPONSE.S_FWD_FE",
+ "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (FS) Forward or Shared state. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "HitM snoop reply with data, line kept in Shared state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "CORE_SNOOP_RESPONSE.S_FWD_M",
+ "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (M)odified state. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Hit snoop reply without sending the data, line kept in Shared state.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "CORE_SNOOP_RESPONSE.S_HIT_FSE",
+ "PublicDescription": "Counts responses to snoops indicating the line was kept on this core in the (S)hared state, and that the data was found unmodified but not forwarded back to the requestor, initially the data was found in the cache in the (FSE) Forward, Shared state or Exclusive state. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "L1D.HWPF_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.HWPF_MISS",
"SampleAfterValue": "1000003",
@@ -8,6 +72,7 @@
},
{
"BriefDescription": "Counts the number of cache lines replaced in L1 data cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -16,6 +81,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -24,6 +90,7 @@
},
{
"BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x48",
@@ -34,6 +101,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event L1D_PEND_MISS.L2_STALLS",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.L2_STALL",
@@ -42,6 +110,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.L2_STALLS",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -50,6 +119,7 @@
},
{
"BriefDescription": "Number of L1D misses that are outstanding",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
@@ -58,6 +128,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -67,6 +138,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -74,22 +146,26 @@
"UMask": "0x1f"
},
{
- "BriefDescription": "L2_LINES_OUT.NON_SILENT",
+ "BriefDescription": "Modified cache lines that are evicted by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "L2_LINES_OUT.NON_SILENT",
+ "PublicDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines are in Modified state. Modified lines are written back to L3",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
- "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache when triggered by an L2 cache fill.",
+ "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "L2_LINES_OUT.SILENT",
- "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill. These lines are typically in Shared or Exclusive state. A non-threaded event.",
+ "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache. These lines are typically in Shared or Exclusive state. A non-threaded event.",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Cache lines that have been L2 hardware prefetched but not used by demand accesses",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "L2_LINES_OUT.USELESS_HWPF",
"PublicDescription": "Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cache",
@@ -98,6 +174,7 @@
},
{
"BriefDescription": "All accesses to L2 cache [This event is alias to L2_RQSTS.REFERENCES]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.ALL",
"PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.REFERENCES]",
@@ -106,6 +183,7 @@
},
{
"BriefDescription": "Read requests with true-miss in L2 cache. [This event is alias to L2_RQSTS.MISS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_REQUEST.MISS",
"PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.MISS]",
@@ -114,6 +192,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts the total number of L2 code requests.",
@@ -122,6 +201,7 @@
},
{
"BriefDescription": "Demand Data Read access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts Demand Data Read requests accessing the L2 cache. These requests may hit or miss L2 cache. True-miss exclude misses that were merged with ongoing L2 misses. An access is counted once.",
@@ -130,6 +210,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"PublicDescription": "Counts demand requests that miss L2 cache.",
@@ -138,6 +219,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
"PublicDescription": "Counts demand requests to L2 cache.",
@@ -146,6 +228,7 @@
},
{
"BriefDescription": "L2_RQSTS.ALL_HWPF",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_HWPF",
"SampleAfterValue": "200003",
@@ -153,6 +236,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -161,6 +245,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
@@ -169,6 +254,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Counts L2 cache misses when fetching instructions.",
@@ -177,6 +263,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.",
@@ -185,6 +272,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "Counts demand Data Read requests with true-miss in the L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. An access is counted once.",
@@ -193,6 +281,7 @@
},
{
"BriefDescription": "L2_RQSTS.HWPF_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.HWPF_MISS",
"SampleAfterValue": "200003",
@@ -200,6 +289,7 @@
},
{
"BriefDescription": "Read requests with true-miss in L2 cache. [This event is alias to L2_REQUEST.MISS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
"PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.MISS]",
@@ -208,6 +298,7 @@
},
{
"BriefDescription": "All accesses to L2 cache [This event is alias to L2_REQUEST.ALL]",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
"PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.ALL]",
@@ -216,6 +307,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
@@ -224,6 +316,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
@@ -232,6 +325,7 @@
},
{
"BriefDescription": "SW prefetch requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_HIT",
"PublicDescription": "Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -240,6 +334,7 @@
},
{
"BriefDescription": "SW prefetch requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_MISS",
"PublicDescription": "Counts Software prefetch requests that miss the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -247,7 +342,17 @@
"UMask": "0x28"
},
{
+ "BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x23",
+ "EventName": "L2_TRANS.L2_WB",
+ "PublicDescription": "Counts L2 writebacks that access L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x40"
+ },
+ {
"BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -256,6 +361,7 @@
},
{
"BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -264,86 +370,87 @@
},
{
"BriefDescription": "Retired load instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_LOADS",
- "PEBS": "1",
- "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW.",
+ "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x81"
},
{
"BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_STORES",
- "PEBS": "1",
- "PublicDescription": "Counts all retired store instructions.",
+ "PublicDescription": "Counts all retired store instructions. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x82"
},
{
"BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ANY",
- "PEBS": "1",
- "PublicDescription": "Counts all retired memory instructions - loads and stores.",
+ "PublicDescription": "Counts all retired memory instructions - loads and stores. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x83"
},
{
"BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.LOCK_LOADS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with locked access.",
+ "PublicDescription": "Counts retired load instructions with locked access. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x21"
},
{
"BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions that split across a cacheline boundary.",
+ "PublicDescription": "Counts retired load instructions that split across a cacheline boundary. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x41"
},
{
"BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_STORES",
- "PEBS": "1",
- "PublicDescription": "Counts retired store instructions that split across a cacheline boundary.",
+ "PublicDescription": "Counts retired store instructions that split across a cacheline boundary. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x42"
},
{
"BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
- "PEBS": "1",
- "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB).",
+ "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x11"
},
{
"BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
- "PEBS": "1",
- "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB).",
+ "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x12"
},
{
"BriefDescription": "Completed demand load uops that miss the L1 d-cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "MEM_LOAD_COMPLETED.L1_MISS_ANY",
"PublicDescription": "Number of completed demand load requests that missed the L1 data cache including shadow misses (FB hits, merge to an ongoing L1D miss)",
@@ -352,164 +459,167 @@
},
{
"BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3.",
+ "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x4"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required.",
+ "PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x8"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from local dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
- "PEBS": "1",
- "PublicDescription": "Retired load instructions which data sources missed L3 but serviced from local DRAM.",
+ "PublicDescription": "Retired load instructions which data sources missed L3 but serviced from local DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM",
- "PEBS": "1",
+ "PublicDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions whose data sources was forwarded from a remote cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD",
- "PEBS": "1",
- "PublicDescription": "Retired load instructions whose data sources was forwarded from a remote cache.",
+ "PublicDescription": "Retired load instructions whose data sources was forwarded from a remote cache. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x8"
},
{
"BriefDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM",
- "PEBS": "1",
+ "PublicDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x4"
},
{
"BriefDescription": "Retired instructions with at least 1 uncacheable load or lock.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd4",
"EventName": "MEM_LOAD_MISC_RETIRED.UC",
- "PEBS": "1",
- "PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock).",
+ "PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock). Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x4"
},
{
"BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.FB_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready.",
+ "PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x40"
},
{
"BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x8"
},
{
"BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with L2 cache hits as data sources.",
+ "PublicDescription": "Counts retired load instructions with L2 cache hits as data sources. Available PDIST counters: 0",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions missed L2 cache as data sources.",
+ "PublicDescription": "Counts retired load instructions missed L2 cache as data sources. Available PDIST counters: 0",
"SampleAfterValue": "100021",
"UMask": "0x10"
},
{
"BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_HIT",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100021",
"UMask": "0x4"
},
{
"BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_MISS",
- "PEBS": "1",
- "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache.",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "50021",
"UMask": "0x20"
},
{
"BriefDescription": "MEM_STORE_RETIRED.L2_HIT",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "MEM_STORE_RETIRED.L2_HIT",
"SampleAfterValue": "200003",
@@ -517,6 +627,7 @@
},
{
"BriefDescription": "Retired memory uops for any access",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe5",
"EventName": "MEM_UOP_RETIRED.ANY",
"PublicDescription": "Number of retired micro-operations (uops) for load or store memory accesses",
@@ -524,259 +635,426 @@
"UMask": "0x3"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F803C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1008000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x808000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F803C0001",
+ "PublicDescription": "Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1030000001",
+ "PublicDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x830000001",
+ "PublicDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1008000001",
+ "PublicDescription": "Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x808000001",
+ "PublicDescription": "Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F3FFC0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F803C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1008000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x808000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts data load hardware prefetch requests to the L1 data cache that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.HWPF_L1D.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10400",
+ "PublicDescription": "Counts data load hardware prefetch requests to the L1 data cache that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts hardware prefetches (which bring data to L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.HWPF_L2.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10070",
+ "PublicDescription": "Counts hardware prefetches (which bring data to L2) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts hardware prefetches to the L3 only that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.HWPF_L3.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x12380",
+ "PublicDescription": "Counts hardware prefetches to the L3 only that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.HWPF_L3.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x80082380",
+ "PublicDescription": "Counts hardware prefetches to the L3 only that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.HWPF_L3.REMOTE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x90002380",
+ "PublicDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts writebacks of modified cachelines and streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.MODIFIED_WRITE.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10808",
+ "PublicDescription": "Counts writebacks of modified cachelines and streaming stores that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F3FFC4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F003C4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10003C4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop that hit in another core, which did not forward the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x4003C4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop that hit in another core, which did not forward the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x8003C4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F33004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified).",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1830004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified). Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1030004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x830004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1008004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x808004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO), hardware prefetch RFOs (which bring data to L2), and software prefetches for exclusive ownership (PREFETCHW) that hit to a (M)odified cacheline in the L3 or snoop filter.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.RFO_TO_CORE.L3_HIT_M",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x1F80040022",
+ "PublicDescription": "Counts demand reads for ownership (RFO), hardware prefetch RFOs (which bring data to L2), and software prefetches for exclusive ownership (PREFETCHW) that hit to a (M)odified cacheline in the L3 or snoop filter. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.STREAMING_WR.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x80080800",
+ "PublicDescription": "Counts streaming stores that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "OFFCORE_REQUESTS.ALL_REQUESTS",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"SampleAfterValue": "100003",
@@ -784,6 +1062,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.DATA_RD",
"PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -791,7 +1070,17 @@
"UMask": "0x8"
},
{
+ "BriefDescription": "Cacheable and noncacheable code read requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
+ "PublicDescription": "Counts both cacheable and non-cacheable code read requests.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -799,7 +1088,17 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
+ "PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "This event is deprecated. Refer to new event OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
@@ -808,6 +1107,7 @@
},
{
"BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
@@ -815,7 +1115,18 @@
"UMask": "0x8"
},
{
+ "BriefDescription": "Cycles with offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
"BriefDescription": "Cycles where at least 1 outstanding demand data read request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
@@ -824,6 +1135,7 @@
},
{
"BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
@@ -832,13 +1144,24 @@
},
{
"BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
"SampleAfterValue": "1000003",
"UMask": "0x8"
},
{
+ "BriefDescription": "Offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore, every cycle.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
"BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -847,6 +1170,7 @@
},
{
"BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "SQ_MISC.BUS_LOCK",
"PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.",
@@ -854,7 +1178,16 @@
"UMask": "0x10"
},
{
+ "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.ANY",
+ "SampleAfterValue": "100003",
+ "UMask": "0xf"
+ },
+ {
"BriefDescription": "Number of PREFETCHNTA instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.NTA",
"PublicDescription": "Counts the number of PREFETCHNTA instructions executed.",
@@ -863,6 +1196,7 @@
},
{
"BriefDescription": "Number of PREFETCHW instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
"PublicDescription": "Counts the number of PREFETCHW instructions executed.",
@@ -871,6 +1205,7 @@
},
{
"BriefDescription": "Number of PREFETCHT0 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.T0",
"PublicDescription": "Counts the number of PREFETCHT0 instructions executed.",
@@ -879,6 +1214,7 @@
},
{
"BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "SW_PREFETCH_ACCESS.T1_T2",
"PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/counter.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/counter.json
new file mode 100644
index 000000000000..088d5954747c
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/counter.json
@@ -0,0 +1,82 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "4",
+ "CountersNumGeneric": "8"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "M2PCIe",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IIO",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "iMC",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M2M",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M3UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CHA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CXLCM",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "8"
+ },
+ {
+ "Unit": "CXLDP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "MCHBM",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M2HBM",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "MDF",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ }
+]
\ No newline at end of file
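The new counter.json above declares, per PMU unit, how many fixed and generic counters the emeraldrapids tables assume: 4 fixed plus 8 generic for the core, and mostly 4 generic counters for the uncore units (2 for IRP and UBOX, 8 for CXLCM). The sketch below is an assumed illustration rather than anything the patch adds; it reads that file and cross-checks a "Counter": "0,1,2,3" restriction against the core unit's generic counter count.

import json

# Path of the counter.json added by this patch; reading it from a local
# checkout is an assumption made for illustration.
with open("tools/perf/pmu-events/arch/x86/emeraldrapids/counter.json") as f:
    units = json.load(f)

# Summarize the declared counter resources per PMU unit.
for unit in units:
    print(f"{unit['Unit']:8s} fixed={unit['CountersNumFixed']} generic={unit['CountersNumGeneric']}")

# A "Counter": "0,1,2,3" event must only name indices below the core unit's
# CountersNumGeneric value (8 in the file above).
core = next(u for u in units if u["Unit"] == "core")
limit = int(core["CountersNumGeneric"])
assert all(int(i) < limit for i in "0,1,2,3".split(","))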
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json
new file mode 100644
index 000000000000..433ae5f50704
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/emr-metrics.json
@@ -0,0 +1,2286 @@
+[
+ {
+ "BriefDescription": "C1 residency percent per core",
+ "MetricExpr": "cstate_core@c1\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C1_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C2 residency percent per package",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C2_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per core",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per package",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uncore frequency per die [GHZ]",
+ "MetricExpr": "tma_info_system_socket_clks / #num_dies / duration_time / 1e9",
+ "MetricGroup": "SoC",
+ "MetricName": "UNCORE_FREQ"
+ },
+ {
+ "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY",
+ "MetricName": "cpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "The average number of cores that are in cstate C0 as observed by the power control unit (PCU)",
+ "MetricExpr": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C0 / UNC_P_CLOCKTICKS * #num_packages",
+ "MetricGroup": "cpu_cstate",
+ "MetricName": "cpu_cstate_c0"
+ },
+ {
+ "BriefDescription": "The average number of cores are in cstate C6 as observed by the power control unit (PCU)",
+ "MetricExpr": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C6 / UNC_P_CLOCKTICKS * #num_packages",
+ "MetricGroup": "cpu_cstate",
+ "MetricName": "cpu_cstate_c6"
+ },
+ {
+ "BriefDescription": "CPU operating frequency (in GHz)",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC * #SYSTEM_TSC_FREQ / 1e9",
+ "MetricName": "cpu_operating_frequency",
+ "ScaleUnit": "1GHz"
+ },
+ {
+ "BriefDescription": "Percentage of time spent in the active CPU power state C0",
+ "MetricExpr": "tma_info_system_cpus_utilized",
+ "MetricName": "cpu_utilization",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions",
+ "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_2mb_large_page_load_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions",
+ "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_load_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions",
+ "MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_store_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of inbound IO reads that are initiated by end device controllers that are requesting memory from the CPU and miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_l3_miss",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the local CPU socket",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_local",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from a remote CPU socket",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_remote",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of inbound IO writes that are initiated by end device controllers that are writing memory to the CPU",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_l3_miss",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the local CPU socket",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_local",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to a remote CPU socket",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_remote",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Percentage of inbound full cacheline writes initiated by end device controllers that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM / UNC_CHA_TOR_INSERTS.IO_ITOM",
+ "MetricName": "io_full_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Percentage of inbound partial cacheline writes initiated by end device controllers that miss the L3 cache",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_RFO) / (UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_RFO)",
+ "MetricName": "io_partial_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Percentage of inbound reads initiated by end device controllers that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR / UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
+ "MetricName": "io_read_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions",
+ "MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
+ "MetricName": "itlb_2nd_level_large_page_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions",
+ "MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "itlb_2nd_level_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read requests missing in L1 instruction cache (includes prefetches) to the total number of completed instructions",
+ "MetricExpr": "L2_RQSTS.ALL_CODE_RD / INST_RETIRED.ANY",
+ "MetricName": "l1_i_code_read_misses_with_prefetches_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of demand load requests hitting in L1 data cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_RETIRED.L1_HIT / INST_RETIRED.ANY",
+ "MetricName": "l1d_demand_data_read_hits_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of requests missing L1 data cache (includes data+rfo w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY",
+ "MetricName": "l1d_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read request missing L2 cache to the total number of completed instructions",
+ "MetricExpr": "L2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_code_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed demand load requests hitting in L2 cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_RETIRED.L2_HIT / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_data_read_hits_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed data read request missing L2 cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_data_read_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of requests missing L2 cache (includes code+data+rfo w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY",
+ "MetricName": "l2_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD / INST_RETIRED.ANY",
+ "MetricName": "llc_code_read_mpi_demand_plus_prefetch",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA + UNC_CHA_TOR_INSERTS.IA_MISS_DRD + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF) / INST_RETIRED.ANY",
+ "MetricName": "llc_data_read_mpi_demand_plus_prefetch",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_latency",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to local memory in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_latency_for_local_requests",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to remote memory in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_latency_for_remote_requests",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to DRAM in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_to_dram_latency",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "The ratio of number of completed memory load instructions to the total number completed instructions",
+ "MetricExpr": "MEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANY",
+ "MetricName": "loads_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "DDR memory read bandwidth (MB/sec)",
+ "MetricExpr": "UNC_M_CAS_COUNT.RD * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "DDR memory bandwidth (MB/sec)",
+ "MetricExpr": "(UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_total",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "DDR memory write bandwidth (MB/sec)",
+ "MetricExpr": "UNC_M_CAS_COUNT.WR * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Memory write bandwidth (MB/sec) caused by directory updates; includes DDR and Intel(R) Optane(TM) Persistent Memory(PMEM)",
+ "MetricExpr": "(UNC_CHA_DIR_UPDATE.HA + UNC_CHA_DIR_UPDATE.TOR + UNC_M2M_DIRECTORY_UPDATE.ANY) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_extra_write_bw_due_to_directory_updates",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE)",
+ "MetricName": "numa_reads_addressed_to_local_dram",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE)",
+ "MetricName": "numa_reads_addressed_to_remote_dram",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uops delivered from decoded instruction cache (decoded stream buffer or DSB) as a percent of total uops delivered to Instruction Decode Queue",
+ "MetricExpr": "IDQ.DSB_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)",
+ "MetricName": "percent_uops_delivered_from_decoded_icache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uops delivered from legacy decode pipeline (Micro-instruction Translation Engine or MITE) as a percent of total uops delivered to Instruction Decode Queue",
+ "MetricExpr": "IDQ.MITE_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)",
+ "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uops delivered from microcode sequencer (MS) as a percent of total uops delivered to Instruction Decode Queue",
+ "MetricExpr": "IDQ.MS_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)",
+ "MetricName": "percent_uops_delivered_from_microcode_sequencer",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
+ "MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
+ "MetricGroup": "smi",
+ "MetricName": "smi_cycles",
+ "MetricThreshold": "smi_cycles > 0.1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Number of SMI interrupts.",
+ "MetricExpr": "msr@smi@",
+ "MetricGroup": "smi",
+ "MetricName": "smi_num",
+ "ScaleUnit": "1SMI#"
+ },
+ {
+ "BriefDescription": "The ratio of number of completed memory store instructions to the total number completed instructions",
+ "MetricExpr": "MEM_INST_RETIRED.ALL_STORES / INST_RETIRED.ANY",
+ "MetricName": "stores_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+ "MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5_11 + UOPS_DISPATCHED.PORT_6) / (5 * tma_info_core_core_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_alu_op_utilization",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the Advanced Matrix eXtensions (AMX) execution engine was busy with tile (arithmetic) operations",
+ "MetricExpr": "EXE.AMX_BUSY / tma_info_core_core_clks",
+ "MetricGroup": "BvCB;Compute;HPC;Server;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_amx_busy",
+ "MetricThreshold": "tma_amx_busy > 0.5 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
+ "MetricExpr": "78 * ASSISTS.ANY / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricName": "tma_assists",
+ "MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
+ "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU retired uops as a result of handing SSE to AVX* or AVX* to SSE transition Assists.",
+ "MetricExpr": "63 * ASSISTS.SSE_AVX_MIX / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_avx_assists",
+ "MetricThreshold": "tma_avx_assists > 0.1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_backend_bound",
+ "MetricThreshold": "tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "max(1 - (tma_frontend_bound + tma_backend_bound + tma_retiring), 0)",
+ "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_bad_speculation",
+ "MetricThreshold": "tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
+ "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
+ "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
+ "MetricName": "tma_bottleneck_big_code",
+ "MetricThreshold": "tma_bottleneck_big_code > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+ "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Ret",
+ "MetricName": "tma_bottleneck_branching_overhead",
+ "MetricThreshold": "tma_bottleneck_branching_overhead > 5",
+ "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)"
+ },
+ {
+ "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
+ "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * tma_amx_busy / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
+ "MetricGroup": "BvCB;Cor;tma_issueComp",
+ "MetricName": "tma_bottleneck_compute_bound_est",
+ "MetricThreshold": "tma_bottleneck_compute_bound_est > 20",
+ "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: "
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + 0 / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+ "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
+ "MetricName": "tma_bottleneck_data_cache_memory_bandwidth",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_bandwidth > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + 0 / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_dependency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_lock_latency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_loads / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_stores / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
+ "MetricName": "tma_bottleneck_data_cache_memory_latency",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_latency > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
+ "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - (1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms)) - tma_bottleneck_big_code",
+ "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
+ "MetricName": "tma_bottleneck_instruction_fetch_bw",
+ "MetricThreshold": "tma_bottleneck_instruction_fetch_bw > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of irregular execution (e.g",
+ "MetricExpr": "100 * ((1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + RS.EMPTY_RESOURCE / tma_info_thread_clks * tma_ports_utilized_0) / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
+ "MetricName": "tma_bottleneck_irregular_overhead",
+ "MetricThreshold": "tma_bottleneck_irregular_overhead > 10",
+ "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
+ "MetricName": "tma_bottleneck_memory_data_tlbs",
+ "MetricThreshold": "tma_bottleneck_memory_data_tlbs > 20",
+ "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
+ "MetricGroup": "BvMS;LockCont;Mem;Offcore;tma_issueSyncxn",
+ "MetricName": "tma_bottleneck_memory_synchronization",
+ "MetricThreshold": "tma_bottleneck_memory_synchronization > 10",
+ "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+ "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
+ "MetricName": "tma_bottleneck_mispredictions",
+ "MetricThreshold": "tma_bottleneck_mispredictions > 20",
+ "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+ "MetricExpr": "100 - (tma_bottleneck_big_code + tma_bottleneck_instruction_fetch_bw + tma_bottleneck_mispredictions + tma_bottleneck_data_cache_memory_bandwidth + tma_bottleneck_data_cache_memory_latency + tma_bottleneck_memory_data_tlbs + tma_bottleneck_memory_synchronization + tma_bottleneck_compute_bound_est + tma_bottleneck_irregular_overhead + tma_bottleneck_branching_overhead + tma_bottleneck_useful_work)",
+ "MetricGroup": "BvOB;Cor;Offcore",
+ "MetricName": "tma_bottleneck_other_bottlenecks",
+ "MetricThreshold": "tma_bottleneck_other_bottlenecks > 20",
+ "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
+ },
+ {
+ "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+ "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "BvUW;Ret",
+ "MetricName": "tma_bottleneck_useful_work",
+ "MetricThreshold": "tma_bottleneck_useful_work > 20"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-br\\-mispredict / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;Default;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricName": "tma_branch_mispredicts",
+ "MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_bottleneck_mispredictions, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+ "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks + tma_unknown_branches",
+ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_branch_resteers",
+ "MetricThreshold": "tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.1 power-performance optimized state (Faster wakeup time; Smaller power savings).",
+ "MetricExpr": "CPU_CLK_UNHALTED.C01 / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c01_wait",
+ "MetricThreshold": "tma_c01_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.2 power-performance optimized state (Slower wakeup time; Larger power savings).",
+ "MetricExpr": "CPU_CLK_UNHALTED.C02 / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c02_wait",
+ "MetricThreshold": "tma_c02_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
+ "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricName": "tma_cisc",
+ "MetricThreshold": "tma_cisc > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears",
+ "MetricExpr": "(1 - tma_branch_mispredicts / tma_bad_speculation) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueMC",
+ "MetricName": "tma_clears_resteers",
+ "MetricThreshold": "tma_clears_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that hit in the L2 cache.",
+ "MetricExpr": "max(0, tma_icache_misses - tma_code_l2_miss)",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_hit",
+ "MetricThreshold": "tma_code_l2_hit > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that miss in the L2 cache.",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_miss",
+ "MetricThreshold": "tma_code_l2_miss > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) ITLB was missed by instructions fetches, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_itlb_misses - tma_code_stlb_miss)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_hit",
+ "MetricThreshold": "tma_code_stlb_hit > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by instruction fetches, performing a hardware page walk",
+ "MetricExpr": "ITLB_MISSES.WALK_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_miss",
+ "MetricThreshold": "tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_2M_4M / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_2m",
+ "MetricThreshold": "tma_code_stlb_miss_2m > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_4K / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_4k",
+ "MetricThreshold": "tma_code_stlb_miss_4k > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+ "MetricExpr": "(76.6 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 74.6 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricName": "tma_contested_accesses",
+ "MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_bottleneck_memory_synchronization, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+ "MetricGroup": "Backend;Compute;Default;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_core_bound",
+ "MetricThreshold": "tma_core_bound > 0.1 & tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "74.6 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricName": "tma_data_sharing",
+ "MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder",
+ "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=1@ - cpu@INST_DECODED.DECODERS\\,cmask\\=2@) / tma_info_core_core_clks / 2",
+ "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group",
+ "MetricName": "tma_decoder0_alone",
+ "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+ "MetricExpr": "ARITH.DIV_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_divider",
+ "MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIV_ACTIVE",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
+ "MetricExpr": "MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks",
+ "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_dram_bound",
+ "MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
+ "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / tma_info_core_core_clks / 2",
+ "MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_dsb",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
+ "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+ "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
+ "MetricName": "tma_dsb_switches",
+ "MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
+ "MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricName": "tma_dtlb_load",
+ "MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_store",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+ "MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricName": "tma_dtlb_store",
+ "MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_load",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+ "MetricExpr": "(170 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_MISS@offcore_rsp\\=0x103b800002@ + 81 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricName": "tma_false_sharing",
+ "MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+ "MetricExpr": "L1D_PEND_MISS.FB_FULL / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricName": "tma_fb_full",
+ "MetricThreshold": "tma_fb_full > 0.3",
+ "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)",
+ "MetricGroup": "Default;FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
+ "MetricName": "tma_fetch_bandwidth",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-fetch\\-lat / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
+ "MetricGroup": "Default;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+ "MetricName": "tma_fetch_latency",
+ "MetricThreshold": "tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops",
+ "MetricExpr": "max(0, tma_heavy_operations - tma_microcode_sequencer)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0",
+ "MetricName": "tma_few_uops_instructions",
+ "MetricThreshold": "tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)",
+ "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector",
+ "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_fp_arith",
+ "MetricThreshold": "tma_fp_arith > 0.2 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists",
+ "MetricExpr": "30 * ASSISTS.FP / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_fp_assists",
+ "MetricThreshold": "tma_fp_assists > 0.1",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Floating-Point Divider unit was active.",
+ "MetricExpr": "ARITH.FPDIV_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_fp_divider",
+ "MetricThreshold": "tma_fp_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
+ "MetricName": "tma_fp_scalar",
+ "MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
+ "MetricName": "tma_fp_vector",
+ "MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.128B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_128b",
+ "MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.256B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_256b",
+ "MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_512b",
+ "MetricThreshold": "tma_fp_vector_512b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
+ "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_frontend_bound",
+ "MetricThreshold": "tma_frontend_bound > 0.15",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Pipeline_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions",
+ "MetricExpr": "tma_light_operations * INST_RETIRED.MACRO_FUSED / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_fused_instructions",
+ "MetricThreshold": "tma_fused_instructions > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-heavy\\-ops / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "Default;Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group",
+ "MetricName": "tma_heavy_operations",
+ "MetricThreshold": "tma_heavy_operations > 0.1",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+]). Sample with: UOPS_RETIRED.HEAVY",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
+ "MetricExpr": "ICACHE_DATA.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_icache_misses",
+ "MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+ "MetricExpr": "tma_bottleneck_mispredictions * tma_info_thread_slots / 6 / BR_MISP_RETIRED.ALL_BRANCHES / 100",
+ "MetricGroup": "Bad;BrMispredicts;tma_issueBM",
+ "MetricName": "tma_info_bad_spec_branch_misprediction_cost",
+ "PublicDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_mispredicts_resteers"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_NTAKEN",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_cond_ntaken",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_cond_ntaken < 200"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for conditional taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_cond_taken",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_cond_taken < 200"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_indirect",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for return branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.RET",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_ret",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_ret < 500"
+ },
+ {
+ "BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear) (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmispredict",
+ "MetricThreshold": "tma_info_bad_spec_ipmispredict < 200"
+ },
+ {
+ "BriefDescription": "Speculative to Retired ratio of all clears (covering Mispredicts and nukes)",
+ "MetricExpr": "INT_MISC.CLEARS_COUNT / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)",
+ "MetricGroup": "BrMispredicts",
+ "MetricName": "tma_info_bad_spec_spec_clears_ratio"
+ },
+ {
+ "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
+ "MetricExpr": "(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
+ "MetricGroup": "Cor;SMT",
+ "MetricName": "tma_info_botlnk_l0_core_bound_likely",
+ "MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite + tma_ms)))",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite + tma_ms))",
+ "MetricGroup": "DSBmiss;Fed;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_misses",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL",
+ "MetricName": "tma_info_botlnk_l2_ic_misses",
+ "MetricThreshold": "tma_info_botlnk_l2_ic_misses > 5",
+ "PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: "
+ },
+ {
+ "BriefDescription": "Fraction of branches that are CALL or RET",
+ "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_callret"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are non-taken conditionals",
+ "MetricExpr": "BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches;CodeGen;PGO",
+ "MetricName": "tma_info_branches_cond_nt"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are taken conditionals",
+ "MetricExpr": "BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches;CodeGen;PGO",
+ "MetricName": "tma_info_branches_cond_tk"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
+ "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_jump"
+ },
+ {
+ "BriefDescription": "Fraction of branches of other types (not individually covered by other metrics in Info.Branches group)",
+ "MetricExpr": "1 - (tma_info_branches_cond_nt + tma_info_branches_cond_tk + tma_info_branches_callret + tma_info_branches_jump)",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_other_branches"
+ },
+ {
+ "BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
+ "MetricExpr": "(CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else tma_info_thread_clks)",
+ "MetricGroup": "SMT",
+ "MetricName": "tma_info_core_core_clks"
+ },
+ {
+ "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
+ "MetricExpr": "INST_RETIRED.ANY / tma_info_core_core_clks",
+ "MetricGroup": "Ret;SMT;TmaL1;tma_L1_group",
+ "MetricName": "tma_info_core_coreipc"
+ },
+ {
+ "BriefDescription": "uops Executed per Cycle",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / tma_info_thread_clks",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_core_epc"
+ },
+ {
+ "BriefDescription": "Floating Point Operations Per Cycle",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / tma_info_core_core_clks",
+ "MetricGroup": "Flops;Ret",
+ "MetricName": "tma_info_core_flopc"
+ },
+ {
+ "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
+ "MetricExpr": "(FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.PORT_1 + FP_ARITH_DISPATCHED.PORT_5) / (2 * tma_info_core_core_clks)",
+ "MetricGroup": "Cor;Flops;HPC",
+ "MetricName": "tma_info_core_fp_arith_utilization",
+ "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
+ },
+ {
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
+ "MetricName": "tma_info_core_ilp"
+ },
+ {
+ "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
+ "MetricExpr": "IDQ.DSB_UOPS / UOPS_ISSUED.ANY",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_frontend_dsb_coverage",
+ "MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 6 > 0.35",
+ "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
+ "BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
+ "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / cpu@DSB2MITE_SWITCHES.PENALTY_CYCLES\\,cmask\\=1\\,edge@",
+ "MetricGroup": "DSBmiss",
+ "MetricName": "tma_info_frontend_dsb_switch_cost"
+ },
+ {
+ "BriefDescription": "Average number of Uops issued by front-end when it issued something",
+ "MetricExpr": "UOPS_ISSUED.ANY / cpu@UOPS_ISSUED.ANY\\,cmask\\=1@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_frontend_fetch_upc"
+ },
+ {
+ "BriefDescription": "Average Latency for L1 instruction cache misses",
+ "MetricExpr": "ICACHE_DATA.STALLS / cpu@ICACHE_DATA.STALLS\\,cmask\\=1\\,edge@",
+ "MetricGroup": "Fed;FetchLat;IcMiss",
+ "MetricName": "tma_info_frontend_icache_miss_latency"
+ },
+ {
+ "BriefDescription": "Instructions per non-speculative DSB miss (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS",
+ "MetricGroup": "DSBmiss;Fed",
+ "MetricName": "tma_info_frontend_ipdsb_miss_ret",
+ "MetricThreshold": "tma_info_frontend_ipdsb_miss_ret < 50"
+ },
+ {
+ "BriefDescription": "Instructions per speculative Unknown Branch Misprediction (BAClear) (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / BACLEARS.ANY",
+ "MetricGroup": "Fed",
+ "MetricName": "tma_info_frontend_ipunknown_branch"
+ },
+ {
+ "BriefDescription": "L2 cache true code cacheline misses per kilo instruction",
+ "MetricExpr": "1e3 * FRONTEND_RETIRED.L2_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "IcMiss",
+ "MetricName": "tma_info_frontend_l2mpki_code"
+ },
+ {
+ "BriefDescription": "L2 cache speculative code cacheline misses per kilo instruction",
+ "MetricExpr": "1e3 * L2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "IcMiss",
+ "MetricName": "tma_info_frontend_l2mpki_code_all"
+ },
+ {
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
+ "BriefDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection",
+ "MetricExpr": "INT_MISC.UNKNOWN_BRANCH_CYCLES / cpu@INT_MISC.UNKNOWN_BRANCH_CYCLES\\,cmask\\=1\\,edge@",
+ "MetricGroup": "Fed",
+ "MetricName": "tma_info_frontend_unknown_branch_cost",
+ "PublicDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection. See Unknown_Branches node."
+ },
+ {
+ "BriefDescription": "Branch instructions per taken branch.",
+ "MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
+ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "tma_info_inst_mix_bptkbranch"
+ },
+ {
+ "BriefDescription": "Total number of retired Instructions",
+ "MetricExpr": "INST_RETIRED.ANY",
+ "MetricGroup": "Summary;TmaL1;tma_L1_group",
+ "MetricName": "tma_info_inst_mix_instructions",
+ "PublicDescription": "Total number of retired Instructions. Sample with: INST_RETIRED.PREC_DIST"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR + (FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR))",
+ "MetricGroup": "Flops;InsType",
+ "MetricName": "tma_info_inst_mix_iparith",
+ "MetricThreshold": "tma_info_inst_mix_iparith < 10",
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.128B_PACKED_HALF)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx128",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.256B_PACKED_HALF)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx256",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.512B_PACKED_HALF)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx512",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx512 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
+ "MetricGroup": "Flops;FpScalar;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_dp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Half-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED2.SCALAR",
+ "MetricGroup": "Flops;FpScalar;InsType;Server",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_hp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_hp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Half-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
+ "MetricGroup": "Flops;FpScalar;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_sp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Branches;Fed;InsType",
+ "MetricName": "tma_info_inst_mix_ipbranch",
+ "MetricThreshold": "tma_info_inst_mix_ipbranch < 8"
+ },
+ {
+ "BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_CALL",
+ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "tma_info_inst_mix_ipcall",
+ "MetricThreshold": "tma_info_inst_mix_ipcall < 200"
+ },
+ {
+ "BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF)",
+ "MetricGroup": "Flops;InsType",
+ "MetricName": "tma_info_inst_mix_ipflop",
+ "MetricThreshold": "tma_info_inst_mix_ipflop < 10"
+ },
+ {
+ "BriefDescription": "Instructions per Load (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_LOADS",
+ "MetricGroup": "InsType",
+ "MetricName": "tma_info_inst_mix_ipload",
+ "MetricThreshold": "tma_info_inst_mix_ipload < 3"
+ },
+ {
+ "BriefDescription": "Instructions per PAUSE (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / CPU_CLK_UNHALTED.PAUSE_INST",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_ippause"
+ },
+ {
+ "BriefDescription": "Instructions per Store (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_STORES",
+ "MetricGroup": "InsType",
+ "MetricName": "tma_info_inst_mix_ipstore",
+ "MetricThreshold": "tma_info_inst_mix_ipstore < 8"
+ },
+ {
+ "BriefDescription": "Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / SW_PREFETCH_ACCESS.ANY",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_inst_mix_ipswpf",
+ "MetricThreshold": "tma_info_inst_mix_ipswpf < 100"
+ },
+ {
+ "BriefDescription": "Instructions per taken branch",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
+ "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
+ "MetricName": "tma_info_inst_mix_iptb",
+ "MetricThreshold": "tma_info_inst_mix_iptb < 13",
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
+ },
+ {
+ "BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
+ "MetricExpr": "1e3 * L2_LINES_OUT.NON_SILENT / tma_info_inst_mix_instructions",
+ "MetricGroup": "L2Evicts;Mem;Server",
+ "MetricName": "tma_info_memory_core_l2_evictions_nonsilent_pki"
+ },
+ {
+ "BriefDescription": "Rate of silent evictions from the L2 cache per Kilo instruction where the evicted lines are dropped (no writeback to L3 or memory)",
+ "MetricExpr": "1e3 * L2_LINES_OUT.SILENT / tma_info_inst_mix_instructions",
+ "MetricGroup": "L2Evicts;Mem;Server",
+ "MetricName": "tma_info_memory_core_l2_evictions_silent_pki"
+ },
+ {
+ "BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l3_cache_access_bw",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_core_l3_cache_access_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_fb_hpki"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l1mpki"
+ },
+ {
+ "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l1mpki_load"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2hpki_all"
+ },
+ {
+ "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2hpki_load"
+ },
+ {
+ "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Backend;CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2mpki"
+ },
+ {
+ "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_all"
+ },
+ {
+ "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2mpki_load"
+ },
+ {
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
+ },
+ {
+ "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_l3_cache_access_bw"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
+ },
+ {
+ "BriefDescription": "Average Parallel L2 cache miss data reads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
+ "MetricGroup": "Memory_BW;Offcore",
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
+ },
+ {
+ "BriefDescription": "Average Latency for L2 cache miss demand Loads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
+ },
+ {
+ "BriefDescription": "Average Parallel L2 cache miss demand Loads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=1@",
+ "MetricGroup": "Memory_BW;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
+ },
+ {
+ "BriefDescription": "Average Latency for L3 cache miss demand Loads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
+ "MetricGroup": "Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l3_miss_latency"
+ },
+ {
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / MEM_LOAD_COMPLETED.L1_MISS_ANY",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
+ },
+ {
+ "BriefDescription": "\"Bus lock\" per kilo instruction",
+ "MetricExpr": "1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_bus_lock_pki"
+ },
+ {
+ "BriefDescription": "Off-core accesses per kilo instruction for modified write requests",
+ "MetricExpr": "1e3 * OCR.MODIFIED_WRITE.ANY_RESPONSE / tma_info_inst_mix_instructions",
+ "MetricGroup": "Offcore;Server",
+ "MetricName": "tma_info_memory_mix_offcore_mwrite_any_pki"
+ },
+ {
+ "BriefDescription": "Off-core accesses per kilo instruction for reads-to-core requests (speculative; including in-core HW prefetches)",
+ "MetricExpr": "1e3 * OCR.READS_TO_CORE.ANY_RESPONSE / tma_info_inst_mix_instructions",
+ "MetricGroup": "CacheHits;Offcore;Server",
+ "MetricName": "tma_info_memory_mix_offcore_read_any_pki"
+ },
+ {
+ "BriefDescription": "L3 cache misses per kilo instruction for reads-to-core requests (speculative; including in-core HW prefetches)",
+ "MetricExpr": "1e3 * OCR.READS_TO_CORE.L3_MISS / tma_info_inst_mix_instructions",
+ "MetricGroup": "Offcore;Server",
+ "MetricName": "tma_info_memory_mix_offcore_read_l3m_pki"
+ },
+ {
+ "BriefDescription": "Un-cacheable retired load per kilo instruction",
+ "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_uc_load_pki"
+ },
+ {
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ },
+ {
+ "BriefDescription": "Rate of L2 HW prefetched lines that were not used by demand accesses",
+ "MetricExpr": "L2_LINES_OUT.USELESS_HWPF / (L2_LINES_OUT.SILENT + L2_LINES_OUT.NON_SILENT)",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_memory_prefetches_useless_hwpf",
+ "MetricThreshold": "tma_info_memory_prefetches_useless_hwpf > 0.15"
+ },
+ {
+ "BriefDescription": "Average DRAM BW for Reads-to-Core (R2C) covering for memory attached to local- and remote-socket",
+ "MetricExpr": "64 * OCR.READS_TO_CORE.DRAM / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;Mem;MemoryBW;Offcore;Server",
+ "MetricName": "tma_info_memory_soc_r2c_dram_bw",
+ "PublicDescription": "Average DRAM BW for Reads-to-Core (R2C) covering for memory attached to local- and remote-socket. See R2C_Offcore_BW."
+ },
+ {
+ "BriefDescription": "Average L3-cache miss BW for Reads-to-Core (R2C)",
+ "MetricExpr": "64 * OCR.READS_TO_CORE.L3_MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;Mem;MemoryBW;Offcore;Server",
+ "MetricName": "tma_info_memory_soc_r2c_l3m_bw",
+ "PublicDescription": "Average L3-cache miss BW for Reads-to-Core (R2C). This covering going to DRAM or other memory off-chip memory tears. See R2C_Offcore_BW."
+ },
+ {
+ "BriefDescription": "Average Off-core access BW for Reads-to-Core (R2C)",
+ "MetricExpr": "64 * OCR.READS_TO_CORE.ANY_RESPONSE / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;Mem;MemoryBW;Offcore;Server",
+ "MetricName": "tma_info_memory_soc_r2c_offcore_bw",
+ "PublicDescription": "Average Off-core access BW for Reads-to-Core (R2C). R2C account for demand or prefetch load/RFO/code access that fill data into the Core caches."
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricGroup": "Fed;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_code_stlb_mpki"
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_load_stlb_mpki"
+ },
+ {
+ "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
+ "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (4 * tma_info_core_core_clks)",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_page_walks_utilization",
+ "MetricThreshold": "tma_info_memory_tlb_page_walks_utilization > 0.5"
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_store_stlb_mpki"
+ },
+ {
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@)",
+ "MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
+ "MetricName": "tma_info_pipeline_execute"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from DSB per cycle",
+ "MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_dsb"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MITE per cycle",
+ "MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_mite"
+ },
+ {
+ "BriefDescription": "Instructions per a microcode Assist invocation",
+ "MetricExpr": "INST_RETIRED.ANY / ASSISTS.ANY",
+ "MetricGroup": "MicroSeq;Pipeline;Ret;Retire",
+ "MetricName": "tma_info_pipeline_ipassist",
+ "MetricThreshold": "tma_info_pipeline_ipassist < 100e3",
+ "PublicDescription": "Instructions per a microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate)"
+ },
+ {
+ "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+ "MetricGroup": "Pipeline;Ret",
+ "MetricName": "tma_info_pipeline_retire"
+ },
+ {
+ "BriefDescription": "Estimated fraction of retirement-cycles dealing with repeat instructions",
+ "MetricExpr": "INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+ "MetricGroup": "MicroSeq;Pipeline;Ret",
+ "MetricName": "tma_info_pipeline_strings_cycles",
+ "MetricThreshold": "tma_info_pipeline_strings_cycles > 0.1"
+ },
+ {
+ "BriefDescription": "Fraction of cycles the processor is waiting yet unhalted; covering legacy PAUSE instruction, as well as C0.1 / C0.2 power-performance optimized states",
+ "MetricExpr": "CPU_CLK_UNHALTED.C0_WAIT / tma_info_thread_clks",
+ "MetricGroup": "C0Wait",
+ "MetricName": "tma_info_system_c0_wait",
+ "MetricThreshold": "tma_info_system_c0_wait > 0.05"
+ },
+ {
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Power;Summary",
+ "MetricName": "tma_info_system_core_frequency"
+ },
+ {
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
+ "MetricGroup": "HPC;Summary",
+ "MetricName": "tma_info_system_cpu_utilization"
+ },
+ {
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [GB / sec]",
+ "MetricExpr": "(64 * UNC_M_PMM_RPQ_INSERTS / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_read_bw"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes [GB / sec]",
+ "MetricExpr": "(64 * UNC_M_PMM_WPQ_INSERTS / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_write_bw"
+ },
+ {
+ "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
+ "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
+ "MetricName": "tma_info_system_dram_bw_use",
+ "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_mem_bandwidth, tma_sq_full"
+ },
+ {
+ "BriefDescription": "Giga Floating Point Operations Per Second",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR_HALF + 2 * (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF) + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * (FP_ARITH_INST_RETIRED2.128B_PACKED_HALF + FP_ARITH_INST_RETIRED.8_FLOPS) + 16 * (FP_ARITH_INST_RETIRED2.256B_PACKED_HALF + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) + 32 * FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / 1e9 / tma_info_system_time",
+ "MetricGroup": "Cor;Flops;HPC",
+ "MetricName": "tma_info_system_gflops",
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
+ },
+ {
+ "BriefDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_read_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]. Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU"
+ },
+ {
+ "BriefDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR) * 64 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_write_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]. Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU"
+ },
+ {
+ "BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u",
+ "MetricGroup": "Branches;OS",
+ "MetricName": "tma_info_system_ipfarbranch",
+ "MetricThreshold": "tma_info_system_ipfarbranch < 1e6"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction for the Operating System (OS) Kernel mode",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / INST_RETIRED.ANY_P:k",
+ "MetricGroup": "OS",
+ "MetricName": "tma_info_system_kernel_cpi"
+ },
+ {
+ "BriefDescription": "Fraction of cycles spent in the Operating System (OS) Kernel mode",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "OS",
+ "MetricName": "tma_info_system_kernel_utilization",
+ "MetricThreshold": "tma_info_system_kernel_utilization > 0.05"
+ },
+ {
+ "BriefDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / uncore_cha_0@event\\=0x1@",
+ "MetricGroup": "MemOffcore;MemoryLat;Server;SoC",
+ "MetricName": "tma_info_system_mem_dram_read_latency",
+ "PublicDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches"
+ },
+ {
+ "BriefDescription": "Fraction of Uncore cycles where requests got rejected due to duplicate address already in IRQ ingress queue in the cache homing agent",
+ "MetricExpr": "UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH / UNC_CHA_CLOCKTICKS",
+ "MetricGroup": "LockCont;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_mem_irq_duplicate_address",
+ "MetricThreshold": "tma_info_system_mem_irq_duplicate_address > 0.1"
+ },
+ {
+ "BriefDescription": "Average number of parallel data read requests to external memory",
+ "MetricExpr": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD@thresh\\=1@",
+ "MetricGroup": "Mem;MemoryBW;SoC",
+ "MetricName": "tma_info_system_mem_parallel_reads",
+ "PublicDescription": "Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches"
+ },
+ {
+ "BriefDescription": "Average latency of data read request to external memory (in nanoseconds)",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (tma_info_system_socket_clks / tma_info_system_time)",
+ "MetricGroup": "Mem;MemoryLat;SoC",
+ "MetricName": "tma_info_system_mem_read_latency",
+ "PublicDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+]memory-controller only)"
+ },
+ {
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "(power@energy\\-pkg@ * 61 + 15.6 * power@energy\\-ram@) / (duration_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
+ },
+ {
+ "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
+ "MetricExpr": "(1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_DISTRIBUTED if #SMT_on else 0)",
+ "MetricGroup": "SMT",
+ "MetricName": "tma_info_system_smt_2t_utilization"
+ },
+ {
+ "BriefDescription": "Socket actual clocks when any core is active on that socket",
+ "MetricExpr": "uncore_cha_0@event\\=0x1@",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_socket_clks"
+ },
+ {
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
+ "BriefDescription": "Average Frequency Utilization relative nominal frequency",
+ "MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_system_turbo_utilization"
+ },
+ {
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency"
+ },
+ {
+ "BriefDescription": "Cross-socket Ultra Path Interconnect (UPI) data transmit bandwidth for data only [MB / sec]",
+ "MetricExpr": "UNC_UPI_TxL_FLITS.ALL_DATA * 64 / 9 / 1e6",
+ "MetricGroup": "Server;SoC",
+ "MetricName": "tma_info_system_upi_data_transmit_bw"
+ },
+ {
+ "BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Pipeline",
+ "MetricName": "tma_info_thread_clks"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
+ "MetricExpr": "1 / tma_info_thread_ipc",
+ "MetricGroup": "Mem;Pipeline",
+ "MetricName": "tma_info_thread_cpi"
+ },
+ {
+ "BriefDescription": "The ratio of Executed- by Issued-Uops",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY",
+ "MetricGroup": "Cor;Pipeline",
+ "MetricName": "tma_info_thread_execute_per_issue",
+ "PublicDescription": "The ratio of Executed- by Issued-Uops. Ratio > 1 suggests high rate of uop micro-fusions. Ratio < 1 suggest high rate of \"execute\" at rename stage."
+ },
+ {
+ "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
+ "MetricExpr": "INST_RETIRED.ANY / tma_info_thread_clks",
+ "MetricGroup": "Ret;Summary",
+ "MetricName": "tma_info_thread_ipc"
+ },
+ {
+ "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
+ "MetricExpr": "TOPDOWN.SLOTS",
+ "MetricGroup": "TmaL1;tma_L1_group",
+ "MetricName": "tma_info_thread_slots"
+ },
+ {
+ "BriefDescription": "Fraction of Physical Core issue-slots utilized by this Logical Processor",
+ "MetricExpr": "(tma_info_thread_slots / (TOPDOWN.SLOTS / 2) if #SMT_on else 1)",
+ "MetricGroup": "SMT;TmaL1;tma_L1_group",
+ "MetricName": "tma_info_thread_slots_utilization"
+ },
+ {
+ "BriefDescription": "Uops Per Instruction",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / INST_RETIRED.ANY",
+ "MetricGroup": "Pipeline;Ret;Retire",
+ "MetricName": "tma_info_thread_uoppi",
+ "MetricThreshold": "tma_info_thread_uoppi > 1.05"
+ },
+ {
+ "BriefDescription": "Uops per taken branch",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN",
+ "MetricGroup": "Branches;Fed;FetchBW",
+ "MetricName": "tma_info_thread_uptb",
+ "MetricThreshold": "tma_info_thread_uptb < 9"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Integer Divider unit was active.",
+ "MetricExpr": "tma_divider - tma_fp_divider",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_int_divider",
+ "MetricThreshold": "tma_int_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents overall Integer (Int) select operations fraction the CPU has executed (retired)",
+ "MetricExpr": "tma_int_vector_128b + tma_int_vector_256b",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_int_operations",
+ "MetricThreshold": "tma_int_operations > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents overall Integer (Int) select operations fraction the CPU has executed (retired). Vector/Matrix Int operations and shuffles are counted. Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
+ "MetricExpr": "(INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIRED.VNNI_128) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P",
+ "MetricName": "tma_int_vector_128b",
+ "MetricThreshold": "tma_int_vector_128b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
+ "MetricExpr": "(INT_VEC_RETIRED.ADD_256 + INT_VEC_RETIRED.MUL_256 + INT_VEC_RETIRED.VNNI_256) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P",
+ "MetricName": "tma_int_vector_256b",
+ "MetricThreshold": "tma_int_vector_256b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+ "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_itlb_misses",
+ "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
+ "MetricExpr": "max((EXE_ACTIVITY.BOUND_ON_LOADS - MEMORY_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricName": "tma_l1_bound",
+ "MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache",
+ "MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_dependency",
+ "MetricThreshold": "tma_l1_latency_dependency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache. The short latency of the L1D cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+ "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_l2_bound",
+ "MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited)",
+ "MetricExpr": "4.4 * tma_info_system_core_frequency * MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_l2_bound_group",
+ "MetricName": "tma_l2_hit_latency",
+ "MetricThreshold": "tma_l2_hit_latency > 0.05 & (tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited). Avoiding L1 cache misses (i.e. L1 misses/L2 hits) will improve the latency. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
+ "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_l3_bound",
+ "MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "MetricExpr": "32.6 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricName": "tma_l3_hit_latency",
+ "MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_bottleneck_data_cache_memory_latency, tma_mem_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+ "MetricExpr": "DECODE.LCP / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
+ "MetricName": "tma_lcp",
+ "MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation)",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)",
+ "MetricGroup": "Default;Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group",
+ "MetricName": "tma_light_operations",
+ "MetricThreshold": "tma_light_operations > 0.6",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_2_3_10 / (3 * tma_info_core_core_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_load_op_utilization",
+ "MetricThreshold": "tma_load_op_utilization > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3_10",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) DTLB was missed by load accesses, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "tma_dtlb_load - tma_load_stlb_miss",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group",
+ "MetricName": "tma_load_stlb_hit",
+ "MetricThreshold": "tma_load_stlb_hit > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by load accesses, performing a hardware page walk",
+ "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group",
+ "MetricName": "tma_load_stlb_miss",
+ "MetricThreshold": "tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_1G / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_1g",
+ "MetricThreshold": "tma_load_stlb_miss_1g > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_2m",
+ "MetricThreshold": "tma_load_stlb_miss_2m > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_4K / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_4k",
+ "MetricThreshold": "tma_load_stlb_miss_4k > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory",
+ "MetricExpr": "72 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group",
+ "MetricName": "tma_local_mem",
+ "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricName": "tma_lock_latency",
+ "MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
+ "MetricGroup": "BadSpec;BvMS;Default;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricName": "tma_machine_clears",
+ "MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_bottleneck_memory_synchronization, tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to memory bandwidth Allocation feature (RDT's memory bandwidth throttling).",
+ "MetricExpr": "INT_MISC.MBA_STALLS / tma_info_thread_clks",
+ "MetricGroup": "MemoryBW;Offcore;Server;TopdownL5;tma_L5_group;tma_mem_bandwidth_group",
+ "MetricName": "tma_mba_stalls",
+ "MetricThreshold": "tma_mba_stalls > 0.1 & (tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
+ "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricName": "tma_mem_bandwidth",
+ "MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
+ "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricName": "tma_mem_latency",
+ "MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_latency, tma_l3_hit_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-mem\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "Backend;Default;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_memory_bound",
+ "MetricThreshold": "tma_memory_bound > 0.2 & tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to LFENCE Instructions.",
+ "MetricExpr": "13 * MISC2_RETIRED.LFENCE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_memory_fence",
+ "MetricThreshold": "tma_memory_fence > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses.",
+ "MetricExpr": "tma_light_operations * MEM_UOP_RETIRED.ANY / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_memory_operations",
+ "MetricThreshold": "tma_memory_operations > 0.1 & tma_light_operations > 0.6",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+ "MetricExpr": "UOPS_RETIRED.MS / tma_info_thread_slots",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS",
+ "MetricName": "tma_microcode_sequencer",
+ "MetricThreshold": "tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1",
+ "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
+ "MetricExpr": "tma_branch_mispredicts / tma_bad_speculation * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricName": "tma_mispredicts_resteers",
+ "MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)",
+ "MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / tma_info_core_core_clks / 2",
+ "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_mite",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
+ "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles)",
+ "MetricExpr": "160 * ASSISTS.SSE_AVX_MIX / tma_info_thread_clks",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group",
+ "MetricName": "tma_mixing_vectors",
+ "MetricThreshold": "tma_mixing_vectors > 0.05",
+ "PublicDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the Microcode Sequencer (MS) unit - see Microcode_Sequencer node for details.",
+ "MetricExpr": "max(IDQ.MS_CYCLES_ANY, cpu@UOPS_RETIRED.MS\\,cmask\\=1@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY)) / tma_info_core_core_clks / 2.4",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_ms",
+ "MetricThreshold": "tma_ms > 0.05 & tma_fetch_bandwidth > 0.2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+ "MetricExpr": "3 * cpu@UOPS_RETIRED.MS\\,cmask\\=1\\,edge@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY) / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO",
+ "MetricName": "tma_ms_switches",
+ "MetricThreshold": "tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused",
+ "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHES - INST_RETIRED.MACRO_FUSED) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_non_fused_branches",
+ "MetricThreshold": "tma_non_fused_branches > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
+ "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
+ "MetricName": "tma_nop_instructions",
+ "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_int_operations + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches))",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_other_light_ops",
+ "MetricThreshold": "tma_other_light_ops > 0.3 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
+ "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)",
+ "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_other_mispredicts",
+ "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
+ "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)",
+ "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_other_nukes",
+ "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Page Faults",
+ "MetricExpr": "99 * ASSISTS.PAGE_FAULT / tma_info_thread_slots",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_page_faults",
+ "MetricThreshold": "tma_page_faults > 0.05",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Page Faults. A Page Fault may apply on first application access to a memory page. Note operating system handling of page faults accounts for the majority of its cost.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch)",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_0 / tma_info_core_core_clks",
+ "MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
+ "MetricName": "tma_port_0",
+ "MetricThreshold": "tma_port_0 > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU)",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_1 / tma_info_core_core_clks",
+ "MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
+ "MetricName": "tma_port_1",
+ "MetricThreshold": "tma_port_1 > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_6 / tma_info_core_core_clks",
+ "MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
+ "MetricName": "tma_port_6",
+ "MetricThreshold": "tma_port_6 > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_3_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIV_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_3_PORTS_UTIL) / tma_info_thread_clks)",
+ "MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_ports_utilization",
+ "MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
+ "MetricExpr": "(EXE_ACTIVITY.EXE_BOUND_0_PORTS + max(RS.EMPTY_RESOURCE - RESOURCE_STALLS.SCOREBOARD, 0)) / tma_info_thread_clks * (CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS) / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_0",
+ "MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
+ "MetricExpr": "EXE_ACTIVITY.1_PORTS_UTIL / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_1",
+ "MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or over oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL. Related metrics: tma_l1_bound",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "EXE_ACTIVITY.2_PORTS_UTIL / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_2",
+ "MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_3m",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues",
+ "MetricExpr": "(133 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 133 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group",
+ "MetricName": "tma_remote_cache",
+ "MetricThreshold": "tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory",
+ "MetricExpr": "153 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group",
+ "MetricName": "tma_remote_mem",
+ "MetricThreshold": "tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_retiring",
+ "MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
+ "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks + tma_c02_wait",
+ "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
+ "MetricName": "tma_serializing_operation",
+ "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer)",
+ "MetricExpr": "tma_light_operations * INT_VEC_RETIRED.SHUFFLES / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "HPC;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
+ "MetricName": "tma_shuffles_256b",
+ "MetricThreshold": "tma_shuffles_256b > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer). Shuffles may incur slow cross \"vector lane\" data transfers.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
+ "MetricExpr": "CPU_CLK_UNHALTED.PAUSE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_slow_pause",
+ "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: CPU_CLK_UNHALTED.PAUSE_INST",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary",
+ "MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_split_loads",
+ "MetricThreshold": "tma_split_loads > 0.3",
+ "PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents rate of split store accesses",
+ "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group",
+ "MetricName": "tma_split_stores",
+ "MetricThreshold": "tma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
+ "MetricExpr": "(XQ.FULL_CYCLES + L1D_PEND_MISS.L2_STALLS) / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricName": "tma_sq_full",
+ "MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write",
+ "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / tma_info_thread_clks",
+ "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_store_bound",
+ "MetricThreshold": "tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
+ "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_store_fwd_blk",
+ "MetricThreshold": "tma_store_fwd_blk > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline; a load can avoid waiting for memory if a prior in-flight store is writing the data that the load wants to read (store forwarding process). However; in some cases the load may be blocked for a significant time pending the store forward. For example; when the prior store is writing a smaller region than the load is reading.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+ "MetricExpr": "(MEM_STORE_RETIRED.L2_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricName": "tma_store_latency",
+ "MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations",
+ "MetricExpr": "(UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_8) / (4 * tma_info_core_core_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_store_op_utilization",
+ "MetricThreshold": "tma_store_op_utilization > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations. Sample with: UOPS_DISPATCHED.PORT_7_8",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the TLB was missed by store accesses, hitting in the second-level TLB (STLB)",
+ "MetricExpr": "tma_dtlb_store - tma_store_stlb_miss",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group",
+ "MetricName": "tma_store_stlb_hit",
+ "MetricThreshold": "tma_store_stlb_hit > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk",
+ "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / tma_info_core_core_clks",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group",
+ "MetricName": "tma_store_stlb_miss",
+ "MetricThreshold": "tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_1G / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_1g",
+ "MetricThreshold": "tma_store_stlb_miss_1g > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_2m",
+ "MetricThreshold": "tma_store_stlb_miss_2m > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_4K / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_4k",
+ "MetricThreshold": "tma_store_stlb_miss_4k > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores",
+ "MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / tma_info_thread_clks",
+ "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group",
+ "MetricName": "tma_streaming_stores",
+ "MetricThreshold": "tma_streaming_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE. Related metrics: tma_fb_full",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
+ "MetricExpr": "INT_MISC.UNKNOWN_BRANCH_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricName": "tma_unknown_branches",
+ "MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+ "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD",
+ "MetricGroup": "Compute;TopdownL4;tma_L4_group;tma_fp_arith_group",
+ "MetricName": "tma_x87_use",
+ "MetricThreshold": "tma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uncore operating frequency in GHz",
+ "MetricExpr": "UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_CLOCKTICKS) * #num_packages) / 1e9 / duration_time",
+ "MetricName": "uncore_frequency",
+ "ScaleUnit": "1GHz"
+ },
+ {
+ "BriefDescription": "Intel(R) Ultra Path Interconnect (UPI) data transmit bandwidth (MB/sec)",
+ "MetricExpr": "UNC_UPI_TxL_FLITS.ALL_DATA * 7.111111111111111 / 1e6 / duration_time",
+ "MetricName": "upi_data_transmit_bw",
+ "ScaleUnit": "1MB/s"
+ }
+]
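
As a quick sanity check on the metric arithmetic added above, the following minimal Python sketch evaluates the upi_data_transmit_bw expression by hand from raw counter values. It is illustrative only: perf's own metric engine parses and evaluates MetricExpr strings itself, and the input numbers below are hypothetical.

def upi_data_transmit_bw_mb_per_s(all_data_flits, duration_time_s):
    # Mirrors the MetricExpr above:
    #   UNC_UPI_TxL_FLITS.ALL_DATA * 7.111111111111111 / 1e6 / duration_time
    # The 7.111... constant is 64/9, which converts the transmitted data
    # flit count into bytes before scaling to MB/s.
    return all_data_flits * (64 / 9) / 1e6 / duration_time_s

if __name__ == "__main__":
    # Hypothetical counter value and interval, purely to exercise the formula.
    print(upi_data_transmit_bw_mb_per_s(9e9, 1.0))  # -> 64000.0 MB/s
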
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/floating-point.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/floating-point.json
index 1bdefaf96287..bc475e163227 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "ARITH.FPDIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb0",
"EventName": "ARITH.FPDIV_ACTIVE",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts all microcode FP assists.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.FP",
"PublicDescription": "Counts all microcode Floating Point assists.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "ASSISTS.SSE_AVX_MIX",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.SSE_AVX_MIX",
"SampleAfterValue": "1000003",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "FP_ARITH_DISPATCHED.PORT_0 [This event is alias to FP_ARITH_DISPATCHED.V0]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.PORT_0",
"SampleAfterValue": "2000003",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "FP_ARITH_DISPATCHED.PORT_1 [This event is alias to FP_ARITH_DISPATCHED.V1]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.PORT_1",
"SampleAfterValue": "2000003",
@@ -38,6 +43,7 @@
},
{
"BriefDescription": "FP_ARITH_DISPATCHED.PORT_5 [This event is alias to FP_ARITH_DISPATCHED.V2]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.PORT_5",
"SampleAfterValue": "2000003",
@@ -45,6 +51,7 @@
},
{
"BriefDescription": "FP_ARITH_DISPATCHED.V0 [This event is alias to FP_ARITH_DISPATCHED.PORT_0]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.V0",
"SampleAfterValue": "2000003",
@@ -52,6 +59,7 @@
},
{
"BriefDescription": "FP_ARITH_DISPATCHED.V1 [This event is alias to FP_ARITH_DISPATCHED.PORT_1]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.V1",
"SampleAfterValue": "2000003",
@@ -59,6 +67,7 @@
},
{
"BriefDescription": "FP_ARITH_DISPATCHED.V2 [This event is alias to FP_ARITH_DISPATCHED.PORT_5]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb3",
"EventName": "FP_ARITH_DISPATCHED.V2",
"SampleAfterValue": "2000003",
@@ -66,6 +75,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -74,6 +84,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -82,6 +93,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -90,6 +102,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -98,6 +111,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -106,6 +120,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -114,6 +129,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -122,6 +138,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.8_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -130,6 +147,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -138,6 +156,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -146,6 +165,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -154,6 +174,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"PublicDescription": "Number of any Vector retired FP arithmetic instructions. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -162,6 +183,7 @@
},
{
"BriefDescription": "FP_ARITH_INST_RETIRED2.128B_PACKED_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcf",
"EventName": "FP_ARITH_INST_RETIRED2.128B_PACKED_HALF",
"SampleAfterValue": "100003",
@@ -169,6 +191,7 @@
},
{
"BriefDescription": "FP_ARITH_INST_RETIRED2.256B_PACKED_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcf",
"EventName": "FP_ARITH_INST_RETIRED2.256B_PACKED_HALF",
"SampleAfterValue": "100003",
@@ -176,6 +199,7 @@
},
{
"BriefDescription": "FP_ARITH_INST_RETIRED2.512B_PACKED_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcf",
"EventName": "FP_ARITH_INST_RETIRED2.512B_PACKED_HALF",
"SampleAfterValue": "100003",
@@ -183,6 +207,7 @@
},
{
"BriefDescription": "FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcf",
"EventName": "FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF",
"SampleAfterValue": "100003",
@@ -190,6 +215,7 @@
},
{
"BriefDescription": "Number of all Scalar Half-Precision FP arithmetic instructions(1) retired - regular and complex.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcf",
"EventName": "FP_ARITH_INST_RETIRED2.SCALAR",
"PublicDescription": "FP_ARITH_INST_RETIRED2.SCALAR",
@@ -198,6 +224,7 @@
},
{
"BriefDescription": "FP_ARITH_INST_RETIRED2.SCALAR_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcf",
"EventName": "FP_ARITH_INST_RETIRED2.SCALAR_HALF",
"SampleAfterValue": "100003",
@@ -205,6 +232,7 @@
},
{
"BriefDescription": "Number of all Vector (also called packed) Half-Precision FP arithmetic instructions(1) retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcf",
"EventName": "FP_ARITH_INST_RETIRED2.VECTOR",
"PublicDescription": "FP_ARITH_INST_RETIRED2.VECTOR",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/frontend.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/frontend.json
index 9e53da55d0c1..793c486ffabe 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Clears due to Unknown Branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Number of times the front-end is resteered when it finds a branch instruction in a fetch line. This is called Unknown Branch which occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "DECODE.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Cycles the Microcode Sequencer is busy.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "DECODE.MS_BUSY",
"SampleAfterValue": "500009",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "DSB-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.",
@@ -32,213 +36,216 @@
},
{
"BriefDescription": "Retired Instructions who experienced DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x1",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
+ "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x11",
- "PEBS": "1",
- "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss.",
+ "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced iTLB true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ITLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x14",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss.",
+ "PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L1I_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x12",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss.",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L2_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x13",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss.",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x600106",
- "PEBS": "1",
- "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall.",
+ "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
"MSRIndex": "0x3F7",
"MSRValue": "0x608006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
"MSRIndex": "0x3F7",
"MSRValue": "0x601006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
"MSRIndex": "0x3F7",
"MSRValue": "0x600206",
- "PEBS": "1",
- "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
"MSRIndex": "0x3F7",
"MSRValue": "0x610006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x100206",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
"MSRIndex": "0x3F7",
"MSRValue": "0x602006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
"MSRIndex": "0x3F7",
"MSRValue": "0x600406",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
"MSRIndex": "0x3F7",
"MSRValue": "0x620006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
"MSRIndex": "0x3F7",
"MSRValue": "0x604006",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
"MSRIndex": "0x3F7",
"MSRValue": "0x600806",
- "PEBS": "1",
- "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops.",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "FRONTEND_RETIRED.MS_FLOWS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.MS_FLOWS",
"MSRIndex": "0x3F7",
"MSRValue": "0x8",
- "PEBS": "1",
+ "PublicDescription": "FRONTEND_RETIRED.MS_FLOWS Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.STLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x15",
- "PEBS": "1",
- "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss.",
+ "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.UNKNOWN_BRANCH",
"MSRIndex": "0x3F7",
"MSRValue": "0x17",
- "PEBS": "1",
+ "PublicDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE_DATA.STALLS",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The decode pipeline works at a 32 Byte granularity.",
@@ -246,7 +253,18 @@
"UMask": "0x4"
},
{
+ "BriefDescription": "ICACHE_DATA.STALL_PERIODS",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0x80",
+ "EventName": "ICACHE_DATA.STALL_PERIODS",
+ "SampleAfterValue": "500009",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_TAG.STALLS",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
@@ -255,6 +273,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_ANY",
@@ -264,15 +283,17 @@
},
{
"BriefDescription": "Cycles DSB is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_OK",
- "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the DSB (Decode Stream Buffer) path. Count includes uops that may 'bypass' the IDQ.",
"SampleAfterValue": "2000003",
"UMask": "0x8"
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
@@ -281,6 +302,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_ANY",
@@ -290,6 +312,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_OK",
@@ -299,6 +322,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -307,6 +331,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES_ANY",
@@ -316,6 +341,7 @@
},
{
"BriefDescription": "Number of switches from DSB or MITE to the MS",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -326,6 +352,7 @@
},
{
"BriefDescription": "Uops delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS).",
@@ -334,6 +361,7 @@
},
{
"BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c",
"EventName": "IDQ_BUBBLES.CORE",
"PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_UOPS_NOT_DELIVERED.CORE]",
@@ -342,6 +370,7 @@
},
{
"BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "6",
"EventCode": "0x9c",
"EventName": "IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE",
@@ -351,6 +380,7 @@
},
{
"BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x9c",
"EventName": "IDQ_BUBBLES.CYCLES_FE_WAS_OK",
@@ -361,6 +391,7 @@
},
{
"BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
"PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_BUBBLES.CORE]",
@@ -369,6 +400,7 @@
},
{
"BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "6",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -378,6 +410,7 @@
},
{
"BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_FE_WAS_OK]",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/memory.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/memory.json
index e8bf7c9c44e1..5e6c1f05c981 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/memory.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of machine clears due to memory ordering conflicts.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.CYCLES_L1D_MISS",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.STALLS_L1D_MISS",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand cacheable load request is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.STALLS_L2_MISS",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Execution stalls while L3 cache miss demand cacheable load request is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "9",
"EventCode": "0x47",
"EventName": "MEMORY_ACTIVITY.STALLS_L3_MISS",
@@ -50,203 +56,423 @@
"UMask": "0x9"
},
{
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x400",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency.",
+ "SampleAfterValue": "53",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "1009",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "20011",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "503",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "101",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "2003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "50021",
"UMask": "0x1"
},
{
"BriefDescription": "Retired memory store access operations. A PDist event for PEBS Store Latency Facility.",
+ "Counter": "0",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.STORE_SAMPLE",
- "PEBS": "2",
- "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8",
+ "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8 Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x2"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3FBFC00004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3FBFC00001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x730000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F3FC00002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that missed the local socket's L1, L2, and L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.HWPF_L3.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x94002380",
+ "PublicDescription": "Counts hardware prefetches to the L3 only that missed the local socket's L1, L2, and L3 caches. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.HWPF_L3.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x84002380",
+ "PublicDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F3FC04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F04C04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster. It does not count misses to the L3 which go to Local CXL Type 2 Memory or Local Non DRAM.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL_SOCKET",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x70CC04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster. It does not count misses to the L3 which go to Local CXL Type 2 Memory or Local Non DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x70C004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x730004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_MEMORY",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x733004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that missed the local socket's L1, L2, and L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.STREAMING_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x94000800",
+ "PublicDescription": "Counts streaming stores that missed the local socket's L1, L2, and L3 caches. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.STREAMING_WR.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x84000800",
+ "PublicDescription": "Counts streaming stores that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.WRITE_ESTIMATE.MEMORY",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0xFBFF80822",
+ "PublicDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM) Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand data read requests that miss the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
"SampleAfterValue": "100003",
@@ -254,6 +480,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache. Note that this does not capture all elapsed cycles while requests are outstanding - only cycles from when the requests were known by the requesting core to have missed the L3 cache.",
@@ -262,22 +489,25 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
- "PublicDescription": "Counts the number of times RTM abort was triggered.",
+ "PublicDescription": "Counts the number of times RTM abort was triggered. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x4"
},
{
- "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_EVENTS",
- "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).",
+ "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt).",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEM",
"PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
@@ -286,6 +516,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEMTYPE",
"PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.",
@@ -294,6 +525,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY",
"PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.",
@@ -302,6 +534,7 @@
},
{
"BriefDescription": "Number of times an RTM execution successfully committed",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Counts the number of times RTM commit succeeded.",
@@ -310,6 +543,7 @@
},
{
"BriefDescription": "Number of times an RTM execution started.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.START",
"PublicDescription": "Counts the number of times we entered an RTM region. Does not count nested transactions.",
@@ -318,6 +552,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_READ",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads",
@@ -326,6 +561,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional writes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes.",
@@ -334,6 +570,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Counts the number of times a TSX line had a cache conflict.",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/metricgroups.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/metricgroups.json
new file mode 100644
index 000000000000..9129fb7b7ce4
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/metricgroups.json
@@ -0,0 +1,145 @@
+{
+ "Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "C0Wait": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DSB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DSBmiss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DataSharing": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Fed": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FetchBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FetchLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Flops": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FpScalar": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FpVector": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Frontend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "HPC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IcMiss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IntVector": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IoBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryTLB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Memory_BW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Memory_Lat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MicroSeq": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "OS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Offcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "PGO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Server": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Snoop": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "SoC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Summary": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL1": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL2": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL3mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TopdownL1": "Metrics for top-down breakdown at level 1",
+ "TopdownL2": "Metrics for top-down breakdown at level 2",
+ "TopdownL3": "Metrics for top-down breakdown at level 3",
+ "TopdownL4": "Metrics for top-down breakdown at level 4",
+ "TopdownL5": "Metrics for top-down breakdown at level 5",
+ "TopdownL6": "Metrics for top-down breakdown at level 6",
+ "tma_L1_group": "Metrics for top-down breakdown at level 1",
+ "tma_L2_group": "Metrics for top-down breakdown at level 2",
+ "tma_L3_group": "Metrics for top-down breakdown at level 3",
+ "tma_L4_group": "Metrics for top-down breakdown at level 4",
+ "tma_L5_group": "Metrics for top-down breakdown at level 5",
+ "tma_L6_group": "Metrics for top-down breakdown at level 6",
+ "tma_alu_op_utilization_group": "Metrics contributing to tma_alu_op_utilization category",
+ "tma_assists_group": "Metrics contributing to tma_assists category",
+ "tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
+ "tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
+ "tma_branch_mispredicts_group": "Metrics contributing to tma_branch_mispredicts category",
+ "tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
+ "tma_code_stlb_miss_group": "Metrics contributing to tma_code_stlb_miss category",
+ "tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
+ "tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
+ "tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
+ "tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
+ "tma_fetch_bandwidth_group": "Metrics contributing to tma_fetch_bandwidth category",
+ "tma_fetch_latency_group": "Metrics contributing to tma_fetch_latency category",
+ "tma_fp_arith_group": "Metrics contributing to tma_fp_arith category",
+ "tma_fp_vector_group": "Metrics contributing to tma_fp_vector category",
+ "tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
+ "tma_heavy_operations_group": "Metrics contributing to tma_heavy_operations category",
+ "tma_icache_misses_group": "Metrics contributing to tma_icache_misses category",
+ "tma_int_operations_group": "Metrics contributing to tma_int_operations category",
+ "tma_issue2P": "Metrics related by the issue $issue2P",
+ "tma_issueBM": "Metrics related by the issue $issueBM",
+ "tma_issueBW": "Metrics related by the issue $issueBW",
+ "tma_issueComp": "Metrics related by the issue $issueComp",
+ "tma_issueD0": "Metrics related by the issue $issueD0",
+ "tma_issueFB": "Metrics related by the issue $issueFB",
+ "tma_issueFL": "Metrics related by the issue $issueFL",
+ "tma_issueL1": "Metrics related by the issue $issueL1",
+ "tma_issueLat": "Metrics related by the issue $issueLat",
+ "tma_issueMC": "Metrics related by the issue $issueMC",
+ "tma_issueMS": "Metrics related by the issue $issueMS",
+ "tma_issueMV": "Metrics related by the issue $issueMV",
+ "tma_issueRFO": "Metrics related by the issue $issueRFO",
+ "tma_issueSL": "Metrics related by the issue $issueSL",
+ "tma_issueSO": "Metrics related by the issue $issueSO",
+ "tma_issueSmSt": "Metrics related by the issue $issueSmSt",
+ "tma_issueSpSt": "Metrics related by the issue $issueSpSt",
+ "tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
+ "tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
+ "tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_l2_bound_group": "Metrics contributing to tma_l2_bound category",
+ "tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
+ "tma_light_operations_group": "Metrics contributing to tma_light_operations category",
+ "tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_load_stlb_miss_group": "Metrics contributing to tma_load_stlb_miss category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
+ "tma_mem_bandwidth_group": "Metrics contributing to tma_mem_bandwidth category",
+ "tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
+ "tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
+ "tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
+ "tma_mite_group": "Metrics contributing to tma_mite category",
+ "tma_other_light_ops_group": "Metrics contributing to tma_other_light_ops category",
+ "tma_ports_utilization_group": "Metrics contributing to tma_ports_utilization category",
+ "tma_ports_utilized_0_group": "Metrics contributing to tma_ports_utilized_0 category",
+ "tma_ports_utilized_3m_group": "Metrics contributing to tma_ports_utilized_3m category",
+ "tma_retiring_group": "Metrics contributing to tma_retiring category",
+ "tma_serializing_operation_group": "Metrics contributing to tma_serializing_operation category",
+ "tma_store_bound_group": "Metrics contributing to tma_store_bound category",
+ "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category",
+ "tma_store_stlb_miss_group": "Metrics contributing to tma_store_stlb_miss category"
+}
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/other.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/other.json
index 2f375a6badcd..21f49f609ed4 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/other.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/other.json
@@ -1,310 +1,51 @@
[
{
"BriefDescription": "ASSISTS.PAGE_FAULT",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.PAGE_FAULT",
"SampleAfterValue": "1000003",
"UMask": "0x8"
},
{
- "BriefDescription": "Counts the cycles where the AMX (Advance Matrix Extension) unit is busy performing an operation.",
- "EventCode": "0xb7",
- "EventName": "EXE.AMX_BUSY",
- "SampleAfterValue": "2000003",
- "UMask": "0x2"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_CODE_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_CODE_RD.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x730000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F3FFC0002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_RFO.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000002",
+ "BriefDescription": "HW_INTERRUPTS.MASKED",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcb",
+ "EventName": "HW_INTERRUPTS.MASKED",
"SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_RFO.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts data load hardware prefetch requests to the L1 data cache that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.HWPF_L1D.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10400",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetches (which bring data to L2) that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.HWPF_L2.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10070",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetches to the L3 only that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.HWPF_L3.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x12380",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.HWPF_L3.REMOTE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x90002380",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts writebacks of modified cachelines and streaming stores that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.MODIFIED_WRITE.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10808",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F3FFC4477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C004477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104004477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x70C004477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.REMOTE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F33004477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.REMOTE_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x730004477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
+ "UMask": "0x2"
},
{
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.REMOTE_MEMORY",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x733004477",
+ "BriefDescription": "HW_INTERRUPTS.PENDING_AND_MASKED",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcb",
+ "EventName": "HW_INTERRUPTS.PENDING_AND_MASKED",
"SampleAfterValue": "100003",
- "UMask": "0x1"
+ "UMask": "0x4"
},
{
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.READS_TO_CORE.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708004477",
- "SampleAfterValue": "100003",
+ "BriefDescription": "Number of hardware interrupts received by the processor.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcb",
+ "EventName": "HW_INTERRUPTS.RECEIVED",
+ "PublicDescription": "Counts the number of hardware interruptions received by the processor.",
+ "SampleAfterValue": "203",
"UMask": "0x1"
},
{
"BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
- "BriefDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM)",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.WRITE_ESTIMATE.MEMORY",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0xFBFF80822",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread.",
- "EventCode": "0xa5",
- "EventName": "RS.EMPTY",
- "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
- "SampleAfterValue": "1000003",
- "UMask": "0x7"
- },
- {
- "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
- "CounterMask": "1",
- "EdgeDetect": "1",
- "EventCode": "0xa5",
- "EventName": "RS.EMPTY_COUNT",
- "Invert": "1",
- "PublicDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events)",
- "SampleAfterValue": "100003",
- "UMask": "0x7"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY_COUNT",
- "CounterMask": "1",
- "Deprecated": "1",
- "EdgeDetect": "1",
- "EventCode": "0xa5",
- "EventName": "RS_EMPTY.COUNT",
- "Invert": "1",
- "SampleAfterValue": "100003",
- "UMask": "0x7"
- },
- {
- "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY",
- "Deprecated": "1",
- "EventCode": "0xa5",
- "EventName": "RS_EMPTY.CYCLES",
- "SampleAfterValue": "1000003",
- "UMask": "0x7"
- },
- {
"BriefDescription": "Cycles the uncore cannot take further requests",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x2d",
"EventName": "XQ.FULL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/pipeline.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/pipeline.json
index 1f8200fb8964..1fa7957956df 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "This event is deprecated. Refer to new event ARITH.DIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb0",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb0",
"EventName": "ARITH.DIV_ACTIVE",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event ARITH.FPDIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb0",
@@ -28,6 +31,7 @@
},
{
"BriefDescription": "This event counts the cycles the integer divider is busy.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb0",
"EventName": "ARITH.IDIV_ACTIVE",
@@ -36,6 +40,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event ARITH.IDIV_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb0",
@@ -45,6 +50,7 @@
},
{
"BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.ANY",
"PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware. Examples include AD (page Access Dirty), FP and AVX related assists.",
@@ -53,157 +59,158 @@
},
{
"BriefDescription": "All branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
- "PublicDescription": "Counts all branch instructions retired.",
+ "PublicDescription": "Counts all branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009"
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND",
- "PEBS": "1",
- "PublicDescription": "Counts conditional branch instructions retired.",
+ "PublicDescription": "Counts conditional branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x11"
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_NTAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts not taken branch instructions retired.",
+ "PublicDescription": "Counts not taken branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x10"
},
{
"BriefDescription": "Taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts taken conditional branch instructions retired.",
+ "PublicDescription": "Counts taken conditional branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x1"
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
- "PEBS": "1",
- "PublicDescription": "Counts far branch instructions retired.",
+ "PublicDescription": "Counts far branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x40"
},
{
"BriefDescription": "Indirect near branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT",
- "PEBS": "1",
- "PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch.",
+ "PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
- "PEBS": "1",
- "PublicDescription": "Counts both direct and indirect near call instructions retired.",
+ "PublicDescription": "Counts both direct and indirect near call instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x2"
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
- "PEBS": "1",
- "PublicDescription": "Counts return instructions retired.",
+ "PublicDescription": "Counts return instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x8"
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts taken branch instructions retired.",
+ "PublicDescription": "Counts taken branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x20"
},
{
"BriefDescription": "All mispredicted branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
- "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
+ "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path. Available PDIST counters: 0",
"SampleAfterValue": "400009"
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND",
- "PEBS": "1",
- "PublicDescription": "Counts mispredicted conditional branch instructions retired.",
+ "PublicDescription": "Counts mispredicted conditional branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x11"
},
{
"BriefDescription": "Mispredicted non-taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_NTAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken.",
+ "PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x10"
},
{
"BriefDescription": "number of branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts taken conditional mispredicted branch instructions retired.",
+ "PublicDescription": "Counts taken conditional mispredicted branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x1"
},
{
"BriefDescription": "Miss-predicted near indirect branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT",
- "PEBS": "1",
- "PublicDescription": "Counts miss-predicted near indirect branch instructions retired excluding returns. TSX abort is an indirect branch.",
+ "PublicDescription": "Counts miss-predicted near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
{
"BriefDescription": "Mispredicted indirect CALL retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
- "PublicDescription": "Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect.",
+ "PublicDescription": "Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x2"
},
{
"BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
- "PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken.",
+ "PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken. Available PDIST counters: 0",
"SampleAfterValue": "400009",
"UMask": "0x20"
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RET",
- "PEBS": "1",
- "PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired.",
+ "PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x8"
},
{
"BriefDescription": "Core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.C01",
"PublicDescription": "Counts core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
@@ -212,6 +219,7 @@
},
{
"BriefDescription": "Core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.C02",
"PublicDescription": "Counts core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
@@ -220,6 +228,7 @@
},
{
"BriefDescription": "Core clocks when the thread is in the C0.1 or C0.2 or running a PAUSE in C0 ACPI state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.C0_WAIT",
"PublicDescription": "Counts core clocks when the thread is in the C0.1 or C0.2 power saving optimized states (TPAUSE or UMWAIT instructions) or running the PAUSE instruction.",
@@ -228,6 +237,7 @@
},
{
"BriefDescription": "Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.DISTRIBUTED",
"PublicDescription": "This event distributes cycle counts between active hyperthreads, i.e., those in C0. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If all other hyperthreads are inactive (or disabled or do not exist), all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -236,6 +246,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"PublicDescription": "Counts Core crystal clock cycles when current thread is unhalted and the other thread is halted.",
@@ -244,6 +255,7 @@
},
{
"BriefDescription": "CPU_CLK_UNHALTED.PAUSE",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.PAUSE",
"SampleAfterValue": "2000003",
@@ -251,6 +263,7 @@
},
{
"BriefDescription": "CPU_CLK_UNHALTED.PAUSE_INST",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xec",
@@ -260,6 +273,7 @@
},
{
"BriefDescription": "Core crystal clock cycles. Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED",
"PublicDescription": "This event distributes Core crystal clock cycle counts between active hyperthreads, i.e., those in C0 sleep-state. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If one thread is active in a core, all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -268,6 +282,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
@@ -275,6 +290,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
@@ -283,6 +299,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -290,6 +307,7 @@
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -297,6 +315,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "8",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -305,6 +324,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -313,6 +333,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "16",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -321,6 +342,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "12",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -329,6 +351,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -337,6 +360,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -344,7 +368,16 @@
"UMask": "0x4"
},
{
+ "BriefDescription": "Counts the cycles where the AMX (Advance Matrix Extension) unit is busy performing an operation.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb7",
+ "EventName": "EXE.AMX_BUSY",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
"BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
@@ -352,7 +385,16 @@
"UMask": "0x2"
},
{
+ "BriefDescription": "Cycles total of 2 or 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.2_3_PORTS_UTIL",
+ "SampleAfterValue": "2000003",
+ "UMask": "0xc"
+ },
+ {
"BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
@@ -361,6 +403,7 @@
},
{
"BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
"PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -369,6 +412,7 @@
},
{
"BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
"PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -377,6 +421,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "5",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.BOUND_ON_LOADS",
@@ -385,6 +430,7 @@
},
{
"BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
@@ -394,6 +440,7 @@
},
{
"BriefDescription": "Cycles no uop executed while RS was not empty, the SB was not full and there was no outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS",
"PublicDescription": "Number of cycles total of 0 uops executed on all ports, Reservation Station (RS) was not empty, the Store Buffer (SB) was not full and there was no outstanding load.",
@@ -402,6 +449,7 @@
},
{
"BriefDescription": "Instruction decoders utilized in a cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "INST_DECODED.DECODERS",
"PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
@@ -410,22 +458,23 @@
},
{
"BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
- "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
+ "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter. Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "INST_RETIRED.MACRO_FUSED",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.MACRO_FUSED",
"SampleAfterValue": "2000003",
@@ -433,6 +482,7 @@
},
{
"BriefDescription": "Retired NOP instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.NOP",
"PublicDescription": "Counts all retired NOP or ENDBR32/64 instructions",
@@ -441,14 +491,15 @@
},
{
"BriefDescription": "Precise instruction retired with PEBS precise-distribution",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.PREC_DIST",
- "PEBS": "1",
- "PublicDescription": "A version of INST_RETIRED that allows for a precise distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR++) feature to fix bias in how retired instructions get sampled. Use on Fixed Counter 0.",
+ "PublicDescription": "A version of INST_RETIRED that allows for a precise distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR++) feature to fix bias in how retired instructions get sampled. Use on Fixed Counter 0. Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Iterations of Repeat string retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.REP_ITERATION",
"PublicDescription": "Number of iterations of Repeat (REP) string retired instructions such as MOVS, CMPS, and SCAS. Each has a byte, word, and doubleword version and string instructions can be repeated using a repetition prefix, REP, that allows their architectural execution to be repeated a number of times as specified by the RCX register. Note the number of iterations is implementation-dependent.",
@@ -457,6 +508,7 @@
},
{
"BriefDescription": "Clears speculative count",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xad",
@@ -467,6 +519,7 @@
},
{
"BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
"PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
@@ -475,6 +528,7 @@
},
{
"BriefDescription": "INT_MISC.MBA_STALLS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.MBA_STALLS",
"SampleAfterValue": "1000003",
@@ -482,6 +536,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.RECOVERY_CYCLES",
"PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
@@ -490,6 +545,7 @@
},
{
"BriefDescription": "Bubble cycles of BAClear (Unknown Branch).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.UNKNOWN_BRANCH_CYCLES",
"MSRIndex": "0x3F7",
@@ -499,6 +555,7 @@
},
{
"BriefDescription": "TMA slots where uops got dropped",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xad",
"EventName": "INT_MISC.UOP_DROPPING",
"PublicDescription": "Estimated number of Top-down Microarchitecture Analysis slots that got dropped due to non front-end reasons",
@@ -507,6 +564,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.128BIT",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.128BIT",
"SampleAfterValue": "1000003",
@@ -514,6 +572,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.256BIT",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.256BIT",
"SampleAfterValue": "1000003",
@@ -521,6 +580,7 @@
},
{
"BriefDescription": "integer ADD, SUB, SAD 128-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.ADD_128",
"PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 128-bit vector instructions.",
@@ -529,6 +589,7 @@
},
{
"BriefDescription": "integer ADD, SUB, SAD 256-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.ADD_256",
"PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 256-bit vector instructions.",
@@ -537,6 +598,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.MUL_256",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.MUL_256",
"SampleAfterValue": "1000003",
@@ -544,6 +606,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.SHUFFLES",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.SHUFFLES",
"SampleAfterValue": "1000003",
@@ -551,6 +614,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.VNNI_128",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.VNNI_128",
"SampleAfterValue": "1000003",
@@ -558,6 +622,7 @@
},
{
"BriefDescription": "INT_VEC_RETIRED.VNNI_256",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe7",
"EventName": "INT_VEC_RETIRED.VNNI_256",
"SampleAfterValue": "1000003",
@@ -565,6 +630,7 @@
},
{
"BriefDescription": "False dependencies in MOB due to partial compare on address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.ADDRESS_ALIAS",
"PublicDescription": "Counts the number of times a load got blocked due to false dependencies in MOB due to partial compare on address.",
@@ -573,6 +639,7 @@
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -581,6 +648,7 @@
},
{
"BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
@@ -589,14 +657,16 @@
},
{
"BriefDescription": "Counts the number of demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PREFETCH.SWPF",
- "PublicDescription": "Counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
+ "PublicDescription": "Counts all software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xa8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -606,6 +676,7 @@
},
{
"BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "6",
"EventCode": "0xa8",
"EventName": "LSD.CYCLES_OK",
@@ -615,6 +686,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa8",
"EventName": "LSD.UOPS",
"PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
@@ -623,6 +695,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xc3",
@@ -633,6 +706,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -641,6 +715,7 @@
},
{
"BriefDescription": "LFENCE instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xe0",
"EventName": "MISC2_RETIRED.LFENCE",
"PublicDescription": "number of LFENCE retired instructions",
@@ -649,6 +724,7 @@
},
{
"BriefDescription": "Increments whenever there is an update to the LBR array.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc",
"EventName": "MISC_RETIRED.LBR_INSERTS",
"PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and branch type selection via MSR_LBR_SELECT.",
@@ -657,6 +733,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.",
@@ -665,13 +742,65 @@
},
{
"BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SCOREBOARD",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
{
+ "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY",
+ "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7"
+ },
+ {
+ "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_COUNT",
+ "Invert": "1",
+ "PublicDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events)",
+ "SampleAfterValue": "100003",
+ "UMask": "0x7"
+ },
+ {
+ "BriefDescription": "Cycles when Reservation Station (RS) is empty due to a resource in the back-end",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_RESOURCE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY_COUNT",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "Deprecated": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS_EMPTY.COUNT",
+ "Invert": "1",
+ "SampleAfterValue": "100003",
+ "UMask": "0x7"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event RS.EMPTY",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS_EMPTY.CYCLES",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7"
+ },
+ {
"BriefDescription": "TMA slots where no uops were being issued due to lack of back-end resources.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
"PublicDescription": "Number of slots in TMA method where no micro-operations were being issued from front-end to back-end of the machine due to lack of back-end resources.",
@@ -680,6 +809,7 @@
},
{
"BriefDescription": "TMA slots wasted due to incorrect speculations.",
+ "Counter": "0",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BAD_SPEC_SLOTS",
"PublicDescription": "Number of slots of TMA method that were wasted due to incorrect speculation. It covers all types of control-flow or data-related mis-speculations.",
@@ -688,6 +818,7 @@
},
{
"BriefDescription": "TMA slots wasted due to incorrect speculation by branch mispredictions",
+ "Counter": "0",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BR_MISPREDICT_SLOTS",
"PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of speculative operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
@@ -696,6 +827,7 @@
},
{
"BriefDescription": "TOPDOWN.MEMORY_BOUND_SLOTS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.MEMORY_BOUND_SLOTS",
"SampleAfterValue": "10000003",
@@ -703,6 +835,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
+ "Counter": "Fixed counter 3",
"EventName": "TOPDOWN.SLOTS",
"PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core. Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
"SampleAfterValue": "10000003",
@@ -710,6 +843,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.SLOTS_P",
"PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core.",
@@ -718,6 +852,7 @@
},
{
"BriefDescription": "UOPS_DECODED.DEC0_UOPS",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UOPS_DECODED.DEC0_UOPS",
"SampleAfterValue": "1000003",
@@ -725,6 +860,7 @@
},
{
"BriefDescription": "Uops executed on port 0",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_0",
"PublicDescription": "Number of uops dispatch to execution port 0.",
@@ -733,6 +869,7 @@
},
{
"BriefDescription": "Uops executed on port 1",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_1",
"PublicDescription": "Number of uops dispatch to execution port 1.",
@@ -741,6 +878,7 @@
},
{
"BriefDescription": "Uops executed on ports 2, 3 and 10",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_2_3_10",
"PublicDescription": "Number of uops dispatch to execution ports 2, 3 and 10",
@@ -749,6 +887,7 @@
},
{
"BriefDescription": "Uops executed on ports 4 and 9",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_4_9",
"PublicDescription": "Number of uops dispatch to execution ports 4 and 9",
@@ -757,6 +896,7 @@
},
{
"BriefDescription": "Uops executed on ports 5 and 11",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_5_11",
"PublicDescription": "Number of uops dispatch to execution ports 5 and 11",
@@ -765,6 +905,7 @@
},
{
"BriefDescription": "Uops executed on port 6",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_6",
"PublicDescription": "Number of uops dispatch to execution port 6.",
@@ -773,6 +914,7 @@
},
{
"BriefDescription": "Uops executed on ports 7 and 8",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb2",
"EventName": "UOPS_DISPATCHED.PORT_7_8",
"PublicDescription": "Number of uops dispatch to execution ports 7 and 8.",
@@ -781,6 +923,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE",
"PublicDescription": "Counts the number of uops executed from any thread.",
@@ -789,6 +932,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -798,6 +942,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -807,6 +952,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -816,6 +962,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -825,6 +972,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1",
@@ -834,6 +982,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2",
@@ -843,6 +992,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3",
@@ -852,6 +1002,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4",
@@ -861,6 +1012,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.STALLS",
@@ -871,6 +1023,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UOPS_EXECUTED.STALLS",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xb1",
@@ -881,6 +1034,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.THREAD",
"SampleAfterValue": "2000003",
@@ -888,6 +1042,7 @@
},
{
"BriefDescription": "Counts the number of x87 uops dispatched.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.X87",
"PublicDescription": "Counts the number of x87 uops executed.",
@@ -896,6 +1051,7 @@
},
{
"BriefDescription": "Uops that RAT issues to RS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xae",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
@@ -903,7 +1059,17 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "UOPS_ISSUED.CYCLES",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xae",
+ "EventName": "UOPS_ISSUED.CYCLES",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Cycles with retired uop(s).",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.CYCLES",
@@ -913,6 +1079,7 @@
},
{
"BriefDescription": "Retired uops except the last uop of each instruction.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.HEAVY",
"PublicDescription": "Counts the number of retired micro-operations (uops) except the last uop of each instruction. An instruction that is decoded into less than two uops does not contribute to the count.",
@@ -921,6 +1088,7 @@
},
{
"BriefDescription": "UOPS_RETIRED.MS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.MS",
"MSRIndex": "0x3F7",
@@ -930,6 +1098,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.SLOTS",
"PublicDescription": "Counts the retirement slots used each cycle.",
@@ -938,6 +1107,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.STALLS",
@@ -948,6 +1118,7 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UOPS_RETIRED.STALLS",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"Deprecated": "1",
"EventCode": "0xc2",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json
index bf5a511b99d1..92cf47967f0b 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cache.json
@@ -1,8 +1,10 @@
[
{
"BriefDescription": "CHA to iMC Bypass : Intermediate bypass Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Bypass : Intermediate bypass Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that succeeded in taking the intermediate bypass.",
"UMask": "0x2",
@@ -10,8 +12,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass : Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.NOT_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Bypass : Not Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that could not take the bypass, and issues a read to memory. Note that transactions that did not take the bypass but did not issue read to memory will not be counted.",
"UMask": "0x4",
@@ -19,8 +23,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass : Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Bypass : Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that succeeded in taking the full bypass.",
"UMask": "0x1",
@@ -28,6 +34,7 @@
},
{
"BriefDescription": "CHA Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_CHA_CLOCKTICKS",
"PerPkg": "1",
@@ -36,6 +43,7 @@
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_CHA_CMS_CLOCKTICKS",
"PerPkg": "1",
@@ -43,8 +51,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Any Cycle with Multiple Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.ANY_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Any Cycle with Multiple Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0xf2",
@@ -52,8 +62,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Any Single Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.ANY_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Any Single Snoop : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0xf1",
@@ -61,8 +73,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple Core Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.CORE_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple Core Requests : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x42",
@@ -70,8 +84,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single Core Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.CORE_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single Core Requests : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x41",
@@ -79,8 +95,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EVICT_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple Eviction : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x82",
@@ -88,8 +106,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EVICT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single Eviction : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x81",
@@ -97,8 +117,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple External Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EXT_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple External Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x22",
@@ -106,8 +128,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single External Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EXT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single External Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x21",
@@ -115,8 +139,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple Snoop Targets from Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.REMOTE_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple Snoop Targets from Remote : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x12",
@@ -124,8 +150,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single Snoop Target from Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.REMOTE_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single Snoop Target from Remote : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x11",
@@ -133,96 +161,120 @@
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6e",
"EventName": "UNC_CHA_DIRECT_GO.HA_SUPPRESS_DRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6e",
"EventName": "UNC_CHA_DIRECT_GO.HA_SUPPRESS_NO_D2C",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6e",
"EventName": "UNC_CHA_DIRECT_GO.HA_TOR_DEALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.EXTCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.FAST_GO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.FAST_GO_PULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.GO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.GO_PULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.IDLE_DUE_SUPPRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.NOP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_CHA_DIRECT_GO_OPC.PULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Multi-socket cacheline Directory state lookups; Snoop Not Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_CHA_DIR_LOOKUP.NO_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts transactions that looked into the multi-socket cacheline Directory state, and therefore did not send a snoop because the Directory indicated it was not needed.",
"UMask": "0x2",
@@ -230,8 +282,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory state lookups; Snoop Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_CHA_DIR_LOOKUP.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts transactions that looked into the multi-socket cacheline Directory state, and sent one or more snoops, because the Directory indicated it was needed.",
"UMask": "0x1",
@@ -239,6 +293,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory state updates; Directory Updated memory write from the HA pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_CHA_DIR_UPDATE.HA",
"PerPkg": "1",
@@ -248,6 +303,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory state updates; Directory Updated memory write from TOR pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_CHA_DIR_UPDATE.TOR",
"PerPkg": "1",
@@ -256,9 +312,22 @@
"Unit": "CHA"
},
{
+ "BriefDescription": "Distress signal asserted : DPT Remote",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xaf",
+ "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_NONLOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Distress signal asserted : DPT Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle received by this tile",
+ "UMask": "0x8",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -266,8 +335,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -275,8 +346,10 @@
},
{
"BriefDescription": "Read request from a remote socket which hit in the HitMe Cache to a line In the E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_CHA_HITME_HIT.EX_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts read requests from a remote socket which hit in the HitME cache (used to cache the multi-socket Directory state) to a line in the E(Exclusive) state. This includes the following read opcodes (RdCode, RdData, RdDataMigratory, RdCur, RdInv*, Inv*).",
"UMask": "0x1",
@@ -284,80 +357,100 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache : Shared hit and op is RdInvOwn, RdInv, Inv*",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_CHA_HITME_HIT.SHARED_OWNREQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache : op is WbMtoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_CHA_HITME_HIT.WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache : op is WbMtoI, WbPushMtoI, WbFlush, or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_CHA_HITME_HIT.WBMTOI_OR_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed : op is RdCode, RdData, RdDataMigratory, RdCur, RdInvOwn, RdInv, Inv*",
+ "Counter": "0,1,2,3",
"EventCode": "0x5e",
"EventName": "UNC_CHA_HITME_LOOKUP.READ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed : op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5e",
"EventName": "UNC_CHA_HITME_LOOKUP.WRITE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache : No SF/LLC HitS/F and op is RdInvOwn",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache : op is RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.READ_OR_INV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache : SF/LLC HitS/F and op is RdInvOwn",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.SHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Deallocate HitME$ on Reads without RspFwdI*",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request : Received RspFwdI* for a local request, but converted HitME$ to SF entry",
"UMask": "0x1",
@@ -365,16 +458,20 @@
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Update HitMe Cache on RdInvOwn even if not RspFwdI*",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.RSPFWDI_REM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote request : Updated HitME$ on RspFwdI* or local HitM/E received for a remote request",
"UMask": "0x2",
@@ -382,14 +479,17 @@
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Update HitMe Cache to SHARed",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.SHARED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Normal priority reads issued to the memory controller from the CHA",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_CHA_IMC_READS_COUNT.NORMAL",
"PerPkg": "1",
@@ -399,8 +499,10 @@
},
{
"BriefDescription": "HA to iMC Reads Issued : ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_CHA_IMC_READS_COUNT.PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "HA to iMC Reads Issued : ISOCH : Count of the number of reads issued to any of the memory controller channels. This can be filtered by the priority of the reads.",
"UMask": "0x2",
@@ -408,6 +510,7 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued; Full Line Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x5b",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL",
"PerPkg": "1",
@@ -417,8 +520,10 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line",
+ "Counter": "0,1,2,3",
"EventCode": "0x5b",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line : Counts the total number of full line writes issued from the HA into the memory controller.",
"UMask": "0x4",
@@ -426,8 +531,10 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x5b",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH : Counts the total number of full line writes issued from the HA into the memory controller.",
"UMask": "0x2",
@@ -435,8 +542,10 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial",
+ "Counter": "0,1,2,3",
"EventCode": "0x5b",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial : Counts the total number of full line writes issued from the HA into the memory controller.",
"UMask": "0x8",
@@ -444,8 +553,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Any Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ. This does not include lookups originating from the ISMQ.",
"UMask": "0x1fffff",
@@ -453,8 +564,10 @@
},
{
"BriefDescription": "Cache Lookups : All transactions from Remote Agents",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.ALL_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : All transactions from Remote Agents : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x17e0ff",
@@ -462,16 +575,20 @@
},
{
"BriefDescription": "Cache Lookups : All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.ANY_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : All Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Any local or remote transaction to the LLC, including prefetch.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : CRd Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"UMask": "0x1bd0ff",
@@ -479,24 +596,30 @@
},
{
"BriefDescription": "Cache Lookups : CRd Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_READ_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Local non-prefetch requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.COREPREF_OR_DMND_LOCAL_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Local non-prefetch requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Any local transaction to the LLC, not including prefetch",
"Unit": "CHA"
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x1bc1ff",
@@ -504,8 +627,10 @@
},
{
"BriefDescription": "Cache Lookups : Data Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Data Reads : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1fc1ff",
@@ -513,16 +638,20 @@
},
{
"BriefDescription": "Cache Lookups : Data Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Data Read Request : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Read transactions.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Demand Data Reads, Core and LLC prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Demand Data Reads, Core and LLC prefetches : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x841ff",
@@ -530,8 +659,10 @@
},
{
"BriefDescription": "Cache Lookups : Data Read Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Data Read Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1fc101",
@@ -539,8 +670,10 @@
},
{
"BriefDescription": "Cache Lookups : E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : E State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Hit Exclusive State",
"UMask": "0x20",
@@ -548,8 +681,10 @@
},
{
"BriefDescription": "Cache Lookups : F State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : F State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Hit Forward State",
"UMask": "0x80",
@@ -557,8 +692,10 @@
},
{
"BriefDescription": "Cache Lookups : Flush or Invalidate Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_INV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.",
"UMask": "0x1a44ff",
@@ -566,16 +703,20 @@
},
{
"BriefDescription": "Cache Lookups : Flush",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_OR_INV_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : I State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Miss",
"UMask": "0x1",
@@ -583,16 +724,20 @@
},
{
"BriefDescription": "Cache Lookups : Local LLC prefetch requests (from LLC)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LLCPREF_LOCAL_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Local LLC prefetch requests (from LLC) : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Any local LLC prefetch to the LLC",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Transactions homed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCALLY_HOMED_ADDRESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed locally : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Transaction whose address resides in the local MC.",
"UMask": "0xbdfff",
@@ -600,8 +745,10 @@
},
{
"BriefDescription": "Cache Lookups : CRd Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_CODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"UMask": "0x19d0ff",
@@ -609,8 +756,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Request that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DATA_RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x19c1ff",
@@ -618,8 +767,10 @@
},
{
"BriefDescription": "Cache Lookups : Demand CRd Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_CODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"UMask": "0x1850ff",
@@ -627,8 +778,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Demand Data Reads that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_DATA_RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x1841ff",
@@ -636,8 +789,10 @@
},
{
"BriefDescription": "Cache Lookups : Demand RFO Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x1848ff",
@@ -645,16 +800,20 @@
},
{
"BriefDescription": "Cache Lookups : Transactions homed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed locally : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Transaction whose address resides in the local MC.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Flush or Invalidate Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_FLUSH_INV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.",
"UMask": "0x1844ff",
@@ -662,8 +821,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Prefetch requests to the LLC that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_LLC_PF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x189dff",
@@ -671,8 +832,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Prefetches that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x199dff",
@@ -680,8 +843,10 @@
},
{
"BriefDescription": "Cache Lookups : CRd Prefetches that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_CODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"UMask": "0x1910ff",
@@ -689,8 +854,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Prefetches that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_DATA_RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x1981ff",
@@ -698,8 +865,10 @@
},
{
"BriefDescription": "Cache Lookups : RFO Prefetches that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x1908ff",
@@ -707,8 +876,10 @@
},
{
"BriefDescription": "Cache Lookups : RFO Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x19c8ff",
@@ -716,8 +887,10 @@
},
{
"BriefDescription": "Cache Lookups : M State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : M State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Hit Modified State",
"UMask": "0x40",
@@ -725,8 +898,10 @@
},
{
"BriefDescription": "Cache Lookups : All Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.MISS_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1fe001",
@@ -734,24 +909,30 @@
},
{
"BriefDescription": "Cache Lookups : Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.OTHER_REQ_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Write Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Writeback transactions from L2 to the LLC This includes all write transactions -- both Cacheable and UC.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Remote non-snoop requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.PREF_OR_DMND_REMOTE_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Remote non-snoop requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Remote non-snoop transactions to the LLC.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Transactions homed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTELY_HOMED_ADDRESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed remotely : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Transaction whose address resides in a remote MC",
"UMask": "0x15dfff",
@@ -759,8 +940,10 @@
},
{
"BriefDescription": "Cache Lookups : CRd Requests that come from a Remote socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_CODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"UMask": "0x1a10ff",
@@ -768,8 +951,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Requests that come from a Remote socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_DATA_RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x1a01ff",
@@ -777,16 +962,20 @@
},
{
"BriefDescription": "Cache Lookups : Transactions homed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed remotely : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Transaction whose address resides in a remote MC",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Flush or Invalidate requests that come from a Remote socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_FLUSH_INV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.",
"UMask": "0x1a04ff",
@@ -794,8 +983,10 @@
},
{
"BriefDescription": "Cache Lookups : Filters Requests for those that write info into the cache that come from a remote socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_OTHER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Write Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Writeback transactions from L2 to the LLC This includes all write transactions -- both Cacheable and UC.",
"UMask": "0x1a02ff",
@@ -803,8 +994,10 @@
},
{
"BriefDescription": "Cache Lookups : RFO Requests that come from a Remote socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x1a08ff",
@@ -812,16 +1005,20 @@
},
{
"BriefDescription": "Cache Lookups : Remote snoop requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNOOP_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Remote snoop requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Remote snoop transactions to the LLC.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Snoop Requests from a Remote Socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ. This does not include lookups originating from the ISMQ.",
"UMask": "0x1c19ff",
@@ -829,8 +1026,10 @@
},
{
"BriefDescription": "Cache Lookups : RFO Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x1bc8ff",
@@ -838,16 +1037,20 @@
},
{
"BriefDescription": "Cache Lookups : RFO Request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Locally HOMed RFOs - Demand and Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x9c8ff",
@@ -855,8 +1058,10 @@
},
{
"BriefDescription": "Cache Lookups : S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : S State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Hit Shared State",
"UMask": "0x10",
@@ -864,8 +1069,10 @@
},
{
"BriefDescription": "Cache Lookups : SnoopFilter - E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.SF_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : SnoopFilter - E State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : SF Hit Exclusive State",
"UMask": "0x4",
@@ -873,8 +1080,10 @@
},
{
"BriefDescription": "Cache Lookups : SnoopFilter - H State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.SF_H",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : SnoopFilter - H State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : SF Hit HitMe State",
"UMask": "0x8",
@@ -882,8 +1091,10 @@
},
{
"BriefDescription": "Cache Lookups : SnoopFilter - S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.SF_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : SnoopFilter - S State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : SF Hit Shared State",
"UMask": "0x2",
@@ -891,8 +1102,10 @@
},
{
"BriefDescription": "Cache Lookups : Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.WRITE_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Requests that install or change a line in the LLC. Examples: Writebacks from Core L2's and UPI. Prefetches into the LLC.",
"UMask": "0x842ff",
@@ -900,8 +1113,10 @@
},
{
"BriefDescription": "Cache Lookups : Remote Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.WRITE_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x17c2ff",
@@ -909,8 +1124,10 @@
},
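
The entries above all share EventCode 0x34 (LLC lookups) and differ only in their UMask filter; the entries that follow switch to EventCode 0x37 (lines victimized). A small sketch, under the same placeholder-filename assumption as above, that groups entries by EventCode and prints their UMask values:

#!/usr/bin/env python3
"""Group pmu-events entries by EventCode and show how the UMask
filter values fan out for a single code."""
import json
from collections import defaultdict

with open("uncore-cache.json") as f:  # hypothetical local path
    events = json.load(f)

by_code = defaultdict(list)
for e in events:
    by_code[e.get("EventCode", "?")].append(e)

for code in ("0x34", "0x37"):  # LLC_LOOKUP and LLC_VICTIMS in this file
    print(f"EventCode {code}:")
    for e in by_code[code]:
        print(f"  {e['EventName']:<45} UMask={e.get('UMask', '-')}")
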
{
"BriefDescription": "Lines Victimized : Lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.E_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Lines in E state : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2",
@@ -918,8 +1135,10 @@
},
{
"BriefDescription": "Lines Victimized : IA traffic",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : IA traffic : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x20",
@@ -927,8 +1146,10 @@
},
{
"BriefDescription": "Lines Victimized : IO traffic",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.IO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : IO traffic : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x10",
@@ -936,8 +1157,10 @@
},
{
"BriefDescription": "All LLC lines in E state that are victimized on a fill from an IO device",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.IO_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x12",
@@ -945,8 +1168,10 @@
},
{
"BriefDescription": "All LLC lines in F or S state that are victimized on a fill from an IO device",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.IO_FS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x1c",
@@ -954,8 +1179,10 @@
},
{
"BriefDescription": "All LLC lines in M state that are victimized on a fill from an IO device",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.IO_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x11",
@@ -963,8 +1190,10 @@
},
{
"BriefDescription": "All LLC lines in any state that are victimized on a fill from an IO device",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.IO_MESF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x1f",
@@ -972,8 +1201,10 @@
},
{
"BriefDescription": "Lines Victimized; Local - All Lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x200f",
@@ -981,8 +1212,10 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2002",
@@ -990,8 +1223,10 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2001",
@@ -999,16 +1234,20 @@
},
{
"BriefDescription": "Lines Victimized : Local Only",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Local Only : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2004",
@@ -1016,8 +1255,10 @@
},
{
"BriefDescription": "Lines Victimized : Lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.M_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Lines in M state : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x1",
@@ -1025,8 +1266,10 @@
},
{
"BriefDescription": "Lines Victimized; Remote - All Lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x800f",
@@ -1034,8 +1277,10 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x8002",
@@ -1043,8 +1288,10 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x8001",
@@ -1052,16 +1299,20 @@
},
{
"BriefDescription": "Lines Victimized : Remote Only",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Remote Only : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x8004",
@@ -1069,8 +1320,10 @@
},
{
"BriefDescription": "Lines Victimized : Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.S_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Lines in S State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x4",
@@ -1078,8 +1331,10 @@
},
{
"BriefDescription": "All LLC lines in E state that are victimized on a fill",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2",
@@ -1087,8 +1342,10 @@
},
{
"BriefDescription": "All LLC lines in M state that are victimized on a fill",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x1",
@@ -1096,8 +1353,10 @@
},
{
"BriefDescription": "All LLC lines in S state that are victimized on a fill",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x4",
@@ -1105,8 +1364,10 @@
},
{
"BriefDescription": "Cbo Misc : CV0 Prefetch Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.CV0_PREF_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : CV0 Prefetch Miss : Miscellaneous events in the Cbo.",
"UMask": "0x20",
@@ -1114,8 +1375,10 @@
},
{
"BriefDescription": "Cbo Misc : CV0 Prefetch Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.CV0_PREF_VIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : CV0 Prefetch Victim : Miscellaneous events in the Cbo.",
"UMask": "0x10",
@@ -1123,8 +1386,10 @@
},
{
"BriefDescription": "Number of times that an RFO hit in S state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.RFO_HIT_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a RFO (the Read for Ownership issued before a write) request hit a cacheline in the S (Shared) state.",
"UMask": "0x8",
@@ -1132,8 +1397,10 @@
},
{
"BriefDescription": "Cbo Misc : Silent Snoop Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.RSPI_WAS_FSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : Silent Snoop Eviction : Miscellaneous events in the Cbo. : Counts the number of times when a Snoop hit in FSE states and triggered a silent eviction. This is useful because this information is lost in the PRE encodings.",
"UMask": "0x1",
@@ -1141,8 +1408,10 @@
},
{
"BriefDescription": "Cbo Misc : Write Combining Aliasing",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.WC_ALIASING",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : Write Combining Aliasing : Miscellaneous events in the Cbo. : Counts the number of times that a USWC write (WCIL(F)) transaction hit in the LLC in M state, triggering a WBMtoI followed by the USWC write. This occurs when there is WC aliasing.",
"UMask": "0x2",
@@ -1150,8 +1419,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Local InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.LOCAL_INVITOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Local InvItoE : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x1",
@@ -1159,8 +1430,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Local Rd",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.LOCAL_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Local Rd : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x2",
@@ -1168,8 +1441,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Off",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.OFF_PWRHEURISTIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Off : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x20",
@@ -1177,8 +1452,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Remote Rd",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.REMOTE_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Remote Rd : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x4",
@@ -1186,8 +1463,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Remote Rd InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.REMOTE_READINVITOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Remote Rd InvItoE : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x8",
@@ -1195,8 +1474,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadcast",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.RFO_HITS_SNP_BCAST",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadcast : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x10",
@@ -1204,32 +1485,40 @@
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Memory Mode related events; Counts the number of times CHA saw a Near Memory set conflict in SF/LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Near Memory evictions due to another read to the same Near Memory set in the LLC.",
"UMask": "0x2",
@@ -1237,8 +1526,10 @@
},
{
"BriefDescription": "Memory Mode related events; Counts the number of times CHA saw a Near memory set conflict in SF/LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.SF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Near Memory evictions due to another read to the same Near Memory set in the SF",
"UMask": "0x1",
@@ -1246,8 +1537,10 @@
},
{
"BriefDescription": "Memory Mode related events; Counts the number of times CHA saw a Near Memory set conflict in TOR",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.TOR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Reject in the CHA due to a pending read to the same Near Memory set in the TOR.",
"UMask": "0x4",
@@ -1255,88 +1548,110 @@
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.IODC",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.IODC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.DDR4_FAST_INSERT",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.DDR4_FAST_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.REJ_IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.REJ_IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.SLOWTORQ_SKIP",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.SLOWTORQ_SKIP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.SLOW_INSERT",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.SLOW_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.THROTTLE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE_IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.THROTTLE_IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE_PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.THROTTLE_PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_FAST_FIFO",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_FAST_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": count # of FAST TOR Request inserted to ha_tor_req_fifo",
"UMask": "0x2",
@@ -1344,16 +1659,20 @@
},
{
"BriefDescription": "Number of SLOW TOR Request inserted to ha_pmm_tor_req_fifo",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_SLOW_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC0",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC0 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 0 only.",
"UMask": "0x1",
@@ -1361,8 +1680,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC1",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC1 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 1 only.",
"UMask": "0x2",
@@ -1370,8 +1691,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC2",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC2 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 2 only.",
"UMask": "0x4",
@@ -1379,8 +1702,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC3",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC3 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 3 only.",
"UMask": "0x8",
@@ -1388,8 +1713,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC4",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC4 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 4 only.",
"UMask": "0x10",
@@ -1397,8 +1724,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC5",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC5 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 5 only.",
"UMask": "0x20",
@@ -1406,8 +1735,10 @@
},
{
"BriefDescription": "Requests for exclusive ownership of a cache line without receiving data",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA.",
"UMask": "0x30",
@@ -1415,6 +1746,7 @@
},
{
"BriefDescription": "Local requests for exclusive ownership of a cache line without receiving data",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE_LOCAL",
"PerPkg": "1",
@@ -1424,6 +1756,7 @@
},
{
"BriefDescription": "Remote requests for exclusive ownership of a cache line without receiving data",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE_REMOTE",
"PerPkg": "1",
@@ -1433,6 +1766,7 @@
},
{
"BriefDescription": "Read requests made into the CHA",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS",
"PerPkg": "1",
@@ -1442,6 +1776,7 @@
},
{
"BriefDescription": "Read requests from a unit on this socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS_LOCAL",
"PerPkg": "1",
@@ -1451,6 +1786,7 @@
},
{
"BriefDescription": "Read requests from a remote socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS_REMOTE",
"PerPkg": "1",
@@ -1460,6 +1796,7 @@
},
{
"BriefDescription": "Write requests made into the CHA",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES",
"PerPkg": "1",
@@ -1469,6 +1806,7 @@
},
{
"BriefDescription": "Write Requests from a unit on this socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES_LOCAL",
"PerPkg": "1",
@@ -1478,6 +1816,7 @@
},
{
"BriefDescription": "Read and Write Requests; Writes Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES_REMOTE",
"PerPkg": "1",
@@ -1487,8 +1826,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : IPQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x4",
@@ -1496,8 +1837,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : IRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x1",
@@ -1505,8 +1848,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : IRQ Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : IRQ Rejected : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x2",
@@ -1514,8 +1859,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : PRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x10",
@@ -1523,8 +1870,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.PRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : PRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x20",
@@ -1532,8 +1881,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : RRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : RRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x40",
@@ -1541,8 +1892,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : WBQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : WBQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x80",
@@ -1550,8 +1903,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -1559,8 +1914,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -1568,8 +1925,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request : Can't inject AK ring message",
"UMask": "0x40",
@@ -1577,8 +1936,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0 : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -1586,8 +1947,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0 : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -1595,8 +1958,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0 : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -1604,8 +1969,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0 : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -1613,8 +1980,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request : Can't inject IV ring message",
"UMask": "0x80",
@@ -1622,16 +1991,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : ANY0 : Any condition listed in the IPQ0 Reject counter was true",
"UMask": "0x1",
@@ -1639,16 +2012,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -1656,16 +2033,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -1673,8 +2054,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -1682,16 +2065,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -1699,8 +2086,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -1708,8 +2097,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request : Can't inject AK ring message",
"UMask": "0x40",
@@ -1717,8 +2108,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0 : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -1726,8 +2119,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0 : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -1735,8 +2130,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0 : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -1744,8 +2141,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0 : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -1753,8 +2152,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request : Can't inject IV ring message",
"UMask": "0x80",
@@ -1762,16 +2163,20 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : ANY0 : Any condition listed in the IRQ0 Reject counter was true",
"UMask": "0x1",
@@ -1779,16 +2184,20 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LLC or SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LLC or SF Way : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -1796,24 +2205,30 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -1821,16 +2236,20 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -1838,8 +2257,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -1847,8 +2268,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject AK ring message",
"UMask": "0x40",
@@ -1856,8 +2279,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -1865,8 +2290,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -1874,8 +2301,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -1883,8 +2312,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -1892,8 +2323,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject IV ring message",
"UMask": "0x80",
@@ -1901,8 +2334,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -1910,8 +2345,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -1919,8 +2356,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject AK ring message",
"UMask": "0x40",
@@ -1928,8 +2367,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -1937,8 +2378,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -1946,8 +2389,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -1955,8 +2400,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -1964,8 +2411,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject IV ring message",
"UMask": "0x80",
@@ -1973,8 +2422,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_CHA_RxC_ISMQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 1 : ANY0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Any condition listed in the ISMQ0 Reject counter was true",
"UMask": "0x1",
@@ -1982,8 +2433,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_CHA_RxC_ISMQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 1 : HA : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -1991,8 +2444,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_CHA_RxC_ISMQ1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 1 : ANY0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Any condition listed in the ISMQ0 Reject counter was true",
"UMask": "0x1",
@@ -2000,8 +2455,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_CHA_RxC_ISMQ1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 1 : HA : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -2009,8 +2466,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy : IPQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Occupancy : IPQ : Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x4",
@@ -2018,8 +2477,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy : RRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Occupancy : RRQ : Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x40",
@@ -2027,8 +2488,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy : WBQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Occupancy : WBQ : Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x80",
@@ -2036,8 +2499,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : AD REQ on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -2045,8 +2510,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : AD RSP on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -2054,8 +2521,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : Non UPI AK Request : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Can't inject AK ring message",
"UMask": "0x40",
@@ -2063,8 +2532,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL NCB on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -2072,8 +2543,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL NCS on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -2081,8 +2554,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL RSP on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -2090,8 +2565,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL WB on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -2099,8 +2576,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : Non UPI IV Request : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Can't inject IV ring message",
"UMask": "0x80",
@@ -2108,8 +2587,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : Allow Snoop : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x40",
@@ -2117,8 +2598,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : ANY0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Any condition listed in the Other0 Reject counter was true",
"UMask": "0x1",
@@ -2126,8 +2609,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : HA : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x2",
@@ -2135,8 +2620,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : LLC OR SF Way : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -2144,8 +2631,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : LLC Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x4",
@@ -2153,8 +2642,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : PhyAddr Match : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -2162,8 +2653,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : SF Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -2171,8 +2664,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x10",
@@ -2180,8 +2675,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -2189,8 +2686,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -2198,8 +2697,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request : Can't inject AK ring message",
"UMask": "0x40",
@@ -2207,8 +2708,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0 : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -2216,8 +2719,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0 : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -2225,8 +2730,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0 : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -2234,8 +2741,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0 : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -2243,8 +2752,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request : Can't inject IV ring message",
"UMask": "0x80",
@@ -2252,16 +2763,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : ANY0 : Any condition listed in the PRQ0 Reject counter was true",
"UMask": "0x1",
@@ -2269,16 +2784,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -2286,16 +2805,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -2303,8 +2826,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -2312,16 +2837,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Request Queue Retries - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : AD REQ on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -2329,8 +2858,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : AD RSP on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -2338,8 +2869,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : Non UPI AK Request : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Can't inject AK ring message",
"UMask": "0x40",
@@ -2347,8 +2880,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL NCB on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -2356,8 +2891,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL NCS on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -2365,8 +2902,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL RSP on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -2374,8 +2913,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL WB on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -2383,8 +2924,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : Non UPI IV Request : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Can't inject IV ring message",
"UMask": "0x80",
@@ -2392,8 +2935,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : Allow Snoop : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x40",
@@ -2401,8 +2946,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : ANY0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Any condition listed in the WBQ0 Reject counter was true",
"UMask": "0x1",
@@ -2410,8 +2957,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : HA : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x2",
@@ -2419,8 +2968,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : LLC OR SF Way : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -2428,8 +2979,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : LLC Victim : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x4",
@@ -2437,8 +2990,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : PhyAddr Match : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -2446,8 +3001,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : SF Victim : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -2455,8 +3012,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : Victim : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x10",
@@ -2464,8 +3023,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -2473,8 +3034,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -2482,8 +3045,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Can't inject AK ring message",
"UMask": "0x40",
@@ -2491,8 +3056,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -2500,8 +3067,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -2509,8 +3078,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -2518,8 +3089,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -2527,8 +3100,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Can't inject IV ring message",
"UMask": "0x80",
@@ -2536,8 +3111,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : Allow Snoop : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x40",
@@ -2545,8 +3122,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : ANY0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Any condition listed in the RRQ0 Reject counter was true",
"UMask": "0x1",
@@ -2554,8 +3133,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : HA : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x2",
@@ -2563,8 +3144,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : LLC OR SF Way : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -2572,8 +3155,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : LLC Victim : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x4",
@@ -2581,8 +3166,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : PhyAddr Match : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -2590,8 +3177,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : SF Victim : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -2599,8 +3188,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : Victim : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x10",
@@ -2608,8 +3199,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -2617,8 +3210,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -2626,8 +3221,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Can't inject AK ring message",
"UMask": "0x40",
@@ -2635,8 +3232,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -2644,8 +3243,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -2653,8 +3254,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -2662,8 +3265,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -2671,8 +3276,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Can't inject IV ring message",
"UMask": "0x80",
@@ -2680,8 +3287,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : Allow Snoop : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x40",
@@ -2689,8 +3298,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : ANY0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Any condition listed in the WBQ0 Reject counter was true",
"UMask": "0x1",
@@ -2698,8 +3309,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : HA : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x2",
@@ -2707,8 +3320,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : LLC OR SF Way : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -2716,8 +3331,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : LLC Victim : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x4",
@@ -2725,8 +3342,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : PhyAddr Match : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -2734,8 +3353,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : SF Victim : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -2743,8 +3364,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : Victim : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x10",
@@ -2752,8 +3375,10 @@
},
{
"BriefDescription": "Snoops Sent : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : All : Counts the number of snoops issued by the HA.",
"UMask": "0x1",
@@ -2761,8 +3386,10 @@
},
{
"BriefDescription": "Snoops Sent : Broadcast snoop for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.BCST_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Broadcast snoop for Local Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast snoops issued by the HA. This filter includes only requests coming from local sockets.",
"UMask": "0x10",
@@ -2770,8 +3397,10 @@
},
{
"BriefDescription": "Snoops Sent : Broadcast snoops for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.BCST_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Broadcast snoops for Remote Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast snoops issued by the HA.This filter includes only requests coming from remote sockets.",
"UMask": "0x20",
@@ -2779,8 +3408,10 @@
},
{
"BriefDescription": "Snoops Sent : Directed snoops for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Directed snoops for Local Requests : Counts the number of snoops issued by the HA. : Counts the number of directed snoops issued by the HA. This filter includes only requests coming from local sockets.",
"UMask": "0x40",
@@ -2788,8 +3419,10 @@
},
{
"BriefDescription": "Snoops Sent : Directed snoops for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Directed snoops for Remote Requests : Counts the number of snoops issued by the HA. : Counts the number of directed snoops issued by the HA. This filter includes only requests coming from remote sockets.",
"UMask": "0x80",
@@ -2797,8 +3430,10 @@
},
{
"BriefDescription": "Snoops Sent : Broadcast or directed Snoops sent for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Broadcast or directed Snoops sent for Local Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast or directed snoops issued by the HA per request. This filter includes only requests coming from the local socket.",
"UMask": "0x4",
@@ -2806,8 +3441,10 @@
},
{
"BriefDescription": "Snoops Sent : Broadcast or directed Snoops sent for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Broadcast or directed Snoops sent for Remote Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast or directed snoops issued by the HA per request. This filter includes only requests coming from the remote socket.",
"UMask": "0x8",
@@ -2815,8 +3452,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RSPCNFLCT*",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPCNFLCT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : RSPCNFLCT* : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for snoops responses of RspConflict. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent. This triggers conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI.",
"UMask": "0x40",
@@ -2824,8 +3463,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RspFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : RspFwd : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for a snoop response of RspFwd to a CA request. This snoop response is only possible for RdCur when a snoop HITM/E in a remote caching agent and it directly forwards data to a requestor without changing the requestor's cache line state.",
"UMask": "0x80",
@@ -2833,8 +3474,10 @@
},
{
"BriefDescription": "Snoop Responses Received : Rsp*Fwd*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPFWDWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : Rsp*Fwd*WB : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for a snoop response of Rsp*Fwd*WB. This snoop response is only used in 4s systems. It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory.",
"UMask": "0x20",
@@ -2842,8 +3485,10 @@
},
{
"BriefDescription": "RspI Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a transaction with the opcode type RspI Snoop Response was received which indicates the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO: the Read for Ownership issued before a write hits non-modified data).",
"UMask": "0x1",
@@ -2851,8 +3496,10 @@
},
{
"BriefDescription": "RspIFwd Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPIFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a a transaction with the opcode type RspIFwd Snoop Response was received which indicates a remote caching agent forwarded the data and the requesting agent is able to acquire the data in E (Exclusive) or M (modified) states. This is commonly returned with RFO (the Read for Ownership issued before a write) transactions. The snoop could have either been to a cacheline in the M,E,F (Modified, Exclusive or Forward) states.",
"UMask": "0x4",
@@ -2860,8 +3507,10 @@
},
{
"BriefDescription": "RspS Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a transaction with the opcode type RspS Snoop Response was received which indicates when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with S RspS.",
"UMask": "0x2",
@@ -2869,8 +3518,10 @@
},
{
"BriefDescription": "RspSFwd Snoop Responses Received",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPSFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a a transaction with the opcode type RspSFwd Snoop Response was received which indicates a remote caching agent forwarded the data but held on to its current copy. This is common for data and code reads that hit in a remote socket in E (Exclusive) or F (Forward) state.",
"UMask": "0x8",
@@ -2878,8 +3529,10 @@
},
{
"BriefDescription": "Snoop Responses Received : Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_CHA_SNOOP_RESP.RSPWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : Rsp*WB : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for a snoop response of RspIWB or RspSWB. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.",
"UMask": "0x10",
@@ -2887,8 +3540,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspCnflct",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPCNFLCT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspCnflct : Number of snoop responses received for a Local request : Filters for snoops responses of RspConflict to local CA requests. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent. This triggers conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI.",
"UMask": "0x40",
@@ -2896,8 +3551,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspFwd : Number of snoop responses received for a Local request : Filters for a snoop response of RspFwd to local CA requests. This snoop response is only possible for RdCur when a snoop HITM/E in a remote caching agent and it directly forwards data to a requestor without changing the requestor's cache line state.",
"UMask": "0x80",
@@ -2905,8 +3562,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : Rsp*FWD*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWDWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : Rsp*FWD*WB : Number of snoop responses received for a Local request : Filters for a snoop response of Rsp*Fwd*WB to local CA requests. This snoop response is only used in 4s systems. It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory.",
"UMask": "0x20",
@@ -2914,8 +3573,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspI : Number of snoop responses received for a Local request : Filters for snoops responses of RspI to local CA requests. RspI is returned when the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO hits non-modified data).",
"UMask": "0x1",
@@ -2923,8 +3584,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPIFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspIFwd : Number of snoop responses received for a Local request : Filters for snoop responses of RspIFwd to local CA requests. This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states. This is commonly returned with RFO transactions. It can be either a HitM or a HitFE.",
"UMask": "0x4",
@@ -2932,8 +3595,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspS : Number of snoop responses received for a Local request : Filters for snoop responses of RspS to local CA requests. RspS is returned when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with S RspS.",
"UMask": "0x2",
@@ -2941,8 +3606,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPSFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspSFwd : Number of snoop responses received for a Local request : Filters for a snoop response of RspSFwd to local CA requests. This is returned when a remote caching agent forwards data but holds on to its current copy. This is common for data and code reads that hit in a remote socket in E or F state.",
"UMask": "0x8",
@@ -2950,8 +3617,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : Rsp*WB : Number of snoop responses received for a Local request : Filters for a snoop response of RspIWB or RspSWB to local CA requests. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.",
"UMask": "0x10",
@@ -2959,56 +3628,70 @@
},
{
"BriefDescription": "Misc Snoop Responses Received : MtoI RspIDataM",
+ "Counter": "0,1,2,3",
"EventCode": "0x6b",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.MTOI_RSPDATAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : MtoI RspIFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x6b",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.MTOI_RSPIFWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : Pull Data Partial - Hit LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x6b",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.PULLDATAPTL_HITLLC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : Pull Data Partial - Hit SF",
+ "Counter": "0,1,2,3",
"EventCode": "0x6b",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.PULLDATAPTL_HITSF",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : RspIFwdPtl Hit LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x6b",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.RSPIFWDMPTL_HITLLC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : RspIFwdPtl Hit SF",
+ "Counter": "0,1,2,3",
"EventCode": "0x6b",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.RSPIFWDMPTL_HITSF",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0xc001ffff",
@@ -3016,16 +3699,20 @@
},
{
"BriefDescription": "TOR Inserts : DDR Access",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DDR Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : SF/LLC Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : SF/LLC Evictions : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
"UMask": "0x2",
@@ -3033,14 +3720,17 @@
},
{
"BriefDescription": "TOR Inserts : Just Hits",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Hits : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; All from Local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA",
"PerPkg": "1",
@@ -3050,6 +3740,7 @@
},
{
"BriefDescription": "TOR Inserts;CLFlush from Local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSH",
"PerPkg": "1",
@@ -3059,8 +3750,10 @@
},
{
"BriefDescription": "TOR Inserts;CLFlushOpt from Local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSHOPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; CLFlushOpt events that are initiated from the Core",
"UMask": "0xc8d7ff01",
@@ -3068,6 +3761,7 @@
},
{
"BriefDescription": "TOR Inserts; CRd from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CRD",
"PerPkg": "1",
@@ -3077,8 +3771,10 @@
},
{
"BriefDescription": "TOR Inserts; CRd Pref from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Code read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc88fff01",
@@ -3086,6 +3782,7 @@
},
{
"BriefDescription": "TOR Inserts; DRd from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD",
"PerPkg": "1",
@@ -3095,8 +3792,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores due to a page walk : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837ff01",
@@ -3104,8 +3803,10 @@
},
{
"BriefDescription": "TOR Inserts; DRd Opt from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read opt from local IA that misses in the snoop filter",
"UMask": "0xc827ff01",
@@ -3113,8 +3814,10 @@
},
{
"BriefDescription": "TOR Inserts; DRd Opt Pref from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read opt prefetch from local IA that misses in the snoop filter",
"UMask": "0xc8a7ff01",
@@ -3122,6 +3825,7 @@
},
{
"BriefDescription": "TOR Inserts; DRd Pref from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_PREF",
"PerPkg": "1",
@@ -3131,6 +3835,7 @@
},
{
"BriefDescription": "TOR Inserts; Hits from Local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT",
"PerPkg": "1",
@@ -3140,6 +3845,7 @@
},
{
"BriefDescription": "TOR Inserts; CRd hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD",
"PerPkg": "1",
@@ -3149,6 +3855,7 @@
},
{
"BriefDescription": "TOR Inserts; CRd Pref hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD_PREF",
"PerPkg": "1",
@@ -3158,16 +3865,20 @@
},
{
"BriefDescription": "All requests issued from IA cores to CXL accelerator memory regions that hit the LLC.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c0018101",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_HIT_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c0008101",
@@ -3175,6 +3886,7 @@
},
{
"BriefDescription": "TOR Inserts; DRd hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD",
"PerPkg": "1",
@@ -3184,8 +3896,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores due to page walks that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837fd01",
@@ -3193,8 +3907,10 @@
},
{
"BriefDescription": "TOR Inserts; DRd Opt hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read opt from local IA that hits in the snoop filter",
"UMask": "0xc827fd01",
@@ -3202,8 +3918,10 @@
},
{
"BriefDescription": "TOR Inserts; DRd Opt Pref hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read opt prefetch from local IA that hits in the snoop filter",
"UMask": "0xc8a7fd01",
@@ -3211,6 +3929,7 @@
},
{
"BriefDescription": "TOR Inserts; DRd Pref hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_PREF",
"PerPkg": "1",
@@ -3220,8 +3939,10 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by iA Cores that Hit LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fd01",
@@ -3229,8 +3950,10 @@
},
{
"BriefDescription": "TOR Inserts; LLCPrefCode hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Last level cache prefetch code read from local IA that hits in the snoop filter",
"UMask": "0xcccffd01",
@@ -3238,8 +3961,10 @@
},
{
"BriefDescription": "TOR Inserts; LLCPrefData hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Last level cache prefetch data read from local IA that hits in the snoop filter",
"UMask": "0xccd7fd01",
@@ -3247,6 +3972,7 @@
},
{
"BriefDescription": "TOR Inserts; LLCPrefRFO hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFRFO",
"PerPkg": "1",
@@ -3256,6 +3982,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO",
"PerPkg": "1",
@@ -3265,6 +3992,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO Pref hits from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO_PREF",
"PerPkg": "1",
@@ -3274,8 +4002,10 @@
},
{
"BriefDescription": "TOR Inserts;ItoM from Local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.; ItoM events that are initiated from the Core",
"UMask": "0xcc47ff01",
@@ -3283,8 +4013,10 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_ITOMCACHENEAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcd47ff01",
@@ -3292,8 +4024,10 @@
},
{
"BriefDescription": "TOR Inserts; LLCPrefCode from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Last level cache prefetch code read from local IA.",
"UMask": "0xcccfff01",
@@ -3301,6 +4035,7 @@
},
{
"BriefDescription": "TOR Inserts; LLCPrefData from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFDATA",
"PerPkg": "1",
@@ -3310,6 +4045,7 @@
},
{
"BriefDescription": "TOR Inserts; LLCPrefRFO from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFRFO",
"PerPkg": "1",
@@ -3319,6 +4055,7 @@
},
{
"BriefDescription": "TOR Inserts; misses from Local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS",
"PerPkg": "1",
@@ -3328,6 +4065,7 @@
},
{
"BriefDescription": "TOR Inserts for CRd misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD",
"PerPkg": "1",
@@ -3337,16 +4075,20 @@
},
{
"BriefDescription": "CRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRDMORPH_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c80b8201",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80efe01",
@@ -3354,6 +4096,7 @@
},
{
"BriefDescription": "TOR Inserts; CRd Pref misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF",
"PerPkg": "1",
@@ -3363,8 +4106,10 @@
},
{
"BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88efe01",
@@ -3372,8 +4117,10 @@
},
{
"BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88f7e01",
@@ -3381,8 +4128,10 @@
},
{
"BriefDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80f7e01",
@@ -3390,16 +4139,20 @@
},
{
"BriefDescription": "All requests issued from IA cores to CXL accelerator memory regions that miss the LLC.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c0018201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c0008201",
@@ -3407,6 +4160,7 @@
},
{
"BriefDescription": "TOR Inserts for DRd misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD",
"PerPkg": "1",
@@ -3416,16 +4170,20 @@
},
{
"BriefDescription": "DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRDMORPH_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8138201",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores due to a page walk that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837fe01",
@@ -3433,16 +4191,20 @@
},
{
"BriefDescription": "DRds issued from an IA core which miss the L3 and target memory in a CXL type 2 memory expander card.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8178201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8168201",
@@ -3450,8 +4212,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_EXP_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_EXP_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x20c8168201",
@@ -3459,6 +4223,7 @@
},
{
"BriefDescription": "TOR Inserts for DRds issued by IA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR",
"PerPkg": "1",
@@ -3468,6 +4233,7 @@
},
{
"BriefDescription": "TOR Inserts for DRd misses from local IA targeting local memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL",
"PerPkg": "1",
@@ -3477,6 +4243,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_DDR",
"PerPkg": "1",
@@ -3486,6 +4253,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_PMM",
"PerPkg": "1",
@@ -3495,8 +4263,10 @@
},
{
"BriefDescription": "TOR Inserts; DRd Opt misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read opt from local IA that misses in the snoop filter",
"UMask": "0xc827fe01",
@@ -3504,8 +4274,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8268201",
@@ -3513,8 +4285,10 @@
},
{
"BriefDescription": "TOR Inserts; DRd Opt Pref misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read opt prefetch from local IA that misses in the snoop filter",
"UMask": "0xc8a7fe01",
@@ -3522,8 +4296,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8a68201",
@@ -3531,6 +4307,7 @@
},
{
"BriefDescription": "TOR Inserts for DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM",
"PerPkg": "1",
@@ -3540,6 +4317,7 @@
},
{
"BriefDescription": "TOR Inserts for DRd Pref misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF",
"PerPkg": "1",
@@ -3549,25 +4327,42 @@
},
{
"BriefDescription": "L2 data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8978201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8968201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_EXP_LOCAL",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20c8968201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978601",
@@ -3575,6 +4370,7 @@
},
{
"BriefDescription": "TOR Inserts for DRd Pref misses from local IA targeting local memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL",
"PerPkg": "1",
@@ -3584,8 +4380,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968601",
@@ -3593,8 +4391,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968a01",
@@ -3602,8 +4402,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978a01",
@@ -3611,6 +4413,7 @@
},
{
"BriefDescription": "TOR Inserts for DRd Pref misses from local IA targeting remote memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE",
"PerPkg": "1",
@@ -3620,8 +4423,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970601",
@@ -3629,8 +4434,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970a01",
@@ -3638,6 +4445,7 @@
},
{
"BriefDescription": "TOR Inserts for DRd misses from local IA targeting remote memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE",
"PerPkg": "1",
@@ -3647,6 +4455,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_DDR",
"PerPkg": "1",
@@ -3656,6 +4465,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_PMM",
"PerPkg": "1",
@@ -3665,8 +4475,10 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by iA Cores that Missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fe01",
@@ -3674,8 +4486,10 @@
},
{
"BriefDescription": "TOR Inserts; LLCPrefCode misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Last level cache prefetch code read from local IA that misses in the snoop filter",
"UMask": "0xcccffe01",
@@ -3683,14 +4497,17 @@
},
{
"BriefDescription": "LLC Prefetch Code transactions issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFCODE_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10cccf8201",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; LLCPrefData misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA",
"PerPkg": "1",
@@ -3700,23 +4517,39 @@
},
{
"BriefDescription": "LLC data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10ccd78201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10ccd68201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_EXP_LOCAL",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20ccd68201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts; LLCPrefRFO misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO",
"PerPkg": "1",
@@ -3726,25 +4559,42 @@
},
{
"BriefDescription": "L2 RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8878201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8868201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_EXP_LOCAL",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20c8868201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668601",
@@ -3752,8 +4602,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668a01",
@@ -3761,8 +4613,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8601",
@@ -3770,8 +4624,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8a01",
@@ -3779,8 +4635,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670601",
@@ -3788,8 +4646,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670a01",
@@ -3797,8 +4657,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0601",
@@ -3806,8 +4668,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0a01",
@@ -3815,6 +4679,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO",
"PerPkg": "1",
@@ -3824,24 +4689,30 @@
},
{
"BriefDescription": "RFO and L2 RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFOMORPH_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8038201",
"Unit": "CHA"
},
{
"BriefDescription": "RFOs issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8078201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8068201",
@@ -3849,8 +4720,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_EXP_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_EXP_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x20c8068201",
@@ -3858,6 +4731,7 @@
},
{
"BriefDescription": "TOR Inserts RFO misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_LOCAL",
"PerPkg": "1",
@@ -3867,6 +4741,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO pref misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF",
"PerPkg": "1",
@@ -3876,23 +4751,39 @@
},
{
"BriefDescription": "LLC RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10ccc78201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_ACC_LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10ccc68201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_EXP_LOCAL",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20ccc68201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts; RFO prefetch misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_LOCAL",
"PerPkg": "1",
@@ -3902,6 +4793,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO prefetch misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_REMOTE",
"PerPkg": "1",
@@ -3911,6 +4803,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_REMOTE",
"PerPkg": "1",
@@ -3920,8 +4813,10 @@
},
{
"BriefDescription": "TOR Inserts : UCRdFs issued by iA Cores that Missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_UCRDF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc877de01",
@@ -3929,8 +4824,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86ffe01",
@@ -3938,8 +4835,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLF issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867fe01",
@@ -3947,8 +4846,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678601",
@@ -3956,8 +4857,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678a01",
@@ -3965,8 +4868,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8601",
@@ -3974,8 +4879,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8a01",
@@ -3983,8 +4890,10 @@
},
{
"BriefDescription": "TOR Inserts : WiLs issued by iA Cores that Missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc87fde01",
@@ -3992,6 +4901,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_RFO",
"PerPkg": "1",
@@ -4001,6 +4911,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO pref from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_RFO_PREF",
"PerPkg": "1",
@@ -4010,6 +4921,7 @@
},
{
"BriefDescription": "TOR Inserts;SpecItoM from Local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_SPECITOM",
"PerPkg": "1",
@@ -4019,8 +4931,10 @@
},
{
"BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc3fff01",
@@ -4028,8 +4942,10 @@
},
{
"BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc37ff01",
@@ -4037,8 +4953,10 @@
},
{
"BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc2fff01",
@@ -4046,8 +4964,10 @@
},
{
"BriefDescription": "TOR Inserts : WbMtoIs issued by an iA Cores. Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbMtoIs issued by iA Cores . (Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc27ff01",
@@ -4055,8 +4975,10 @@
},
{
"BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBSTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc67ff01",
@@ -4064,8 +4986,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86fff01",
@@ -4073,8 +4997,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLF issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867ff01",
@@ -4082,6 +5008,7 @@
},
{
"BriefDescription": "TOR Inserts; All from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO",
"PerPkg": "1",
@@ -4091,6 +5018,7 @@
},
{
"BriefDescription": "TOR Inserts : CLFlushes issued by IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_CLFLUSH",
"PerPkg": "1",
@@ -4100,6 +5028,7 @@
},
{
"BriefDescription": "TOR Inserts; Hits from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT",
"PerPkg": "1",
@@ -4109,6 +5038,7 @@
},
{
"BriefDescription": "TOR Inserts; ItoM hits from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOM",
"PerPkg": "1",
@@ -4118,6 +5048,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR",
"PerPkg": "1",
@@ -4127,6 +5058,7 @@
},
{
"BriefDescription": "TOR Inserts; RdCur and FsRdCur hits from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_PCIRDCUR",
"PerPkg": "1",
@@ -4136,6 +5068,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO hits from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_RFO",
"PerPkg": "1",
@@ -4145,6 +5078,7 @@
},
{
"BriefDescription": "TOR Inserts for ItoM from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM",
"PerPkg": "1",
@@ -4154,6 +5088,7 @@
},
{
"BriefDescription": "TOR Inserts for ItoMCacheNears from IO devices.",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR",
"PerPkg": "1",
@@ -4162,7 +5097,48 @@
"Unit": "CHA"
},
{
+ "BriefDescription": "ItoMCacheNear (partial write) transactions from an IO device that addresses memory on the local socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcd42ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNear (partial write) transactions from an IO device that addresses memory on a remote socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcd437f04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM (write) transactions from an IO device that addresses memory on the local socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcc42ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM (write) transactions from an IO device that addresses memory on a remote socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcc437f04",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts; Misses from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS",
"PerPkg": "1",
@@ -4171,7 +5147,8 @@
"Unit": "CHA"
},
{
- "BriefDescription": "TOR Inserts; ItoM misses from local IO",
+ "BriefDescription": "TOR Inserts : ItoM, indicating a full cacheline write request, from IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM",
"PerPkg": "1",
@@ -4181,6 +5158,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR",
"PerPkg": "1",
@@ -4189,7 +5167,8 @@
"Unit": "CHA"
},
{
- "BriefDescription": "TOR Inserts; RdCur and FsRdCur misses from local IO",
+ "BriefDescription": "TOR Inserts; RdCur and FsRdCur requests from local IO that miss LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR",
"PerPkg": "1",
@@ -4199,6 +5178,7 @@
},
{
"BriefDescription": "TOR Inserts; RFO misses from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO",
"PerPkg": "1",
@@ -4208,6 +5188,7 @@
},
{
"BriefDescription": "TOR Inserts for RdCur from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
"PerPkg": "1",
@@ -4216,7 +5197,28 @@
"Unit": "CHA"
},
{
+ "BriefDescription": "PCIRDCUR (read) transactions from an IO device that addresses memory on a remote socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xc8f2ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCUR (read) transactions from an IO device that addresses memory on the local socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xc8f37f04",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts; RFO from local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_RFO",
"PerPkg": "1",
@@ -4226,6 +5228,7 @@
},
{
"BriefDescription": "TOR Inserts : WbMtoIs issued by IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_WBMTOI",
"PerPkg": "1",
@@ -4235,8 +5238,10 @@
},
{
"BriefDescription": "TOR Inserts : IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : IPQ : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x8",
@@ -4244,8 +5249,10 @@
},
{
"BriefDescription": "TOR Inserts : IRQ - iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IRQ_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : IRQ - iA : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : From an iA Core",
"UMask": "0x1",
@@ -4253,8 +5260,10 @@
},
{
"BriefDescription": "TOR Inserts : IRQ - Non iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IRQ_NON_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : IRQ - Non iA : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x10",
@@ -4262,24 +5271,30 @@
},
{
"BriefDescription": "TOR Inserts : Just ISOC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ISOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just ISOC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just Local Targets",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOCAL_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Local Targets : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : All from Local iA and IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All from Local iA and IO : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : All locally initiated requests",
"UMask": "0xc000ff05",
@@ -4287,8 +5302,10 @@
},
{
"BriefDescription": "TOR Inserts : All from Local iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOC_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All from Local iA : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : All locally initiated requests from iA Cores",
"UMask": "0xc000ff01",
@@ -4296,8 +5313,10 @@
},
{
"BriefDescription": "TOR Inserts : All from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOC_IO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All from Local IO : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : All locally generated IO traffic",
"UMask": "0xc000ff04",
@@ -4305,80 +5324,100 @@
},
{
"BriefDescription": "TOR Inserts : Match the Opcode in b[29:19] of the extended umask field",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MATCH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Match the Opcode in b[29:19] of the extended umask field : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Misses : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : MMCFG Access",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MMCFG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : MMCFG Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : MMIO Access",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MMIO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : MMIO Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just NearMem",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just NearMem : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just NonCoherent",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.NONCOH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just NonCoherent : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just NotNearMem",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.NOT_NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just NotNearMem : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : PMM Access",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : PM Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Match the PreMorphed Opcode in b[29:19] of the extended umask field",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PREMORPH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Match the PreMorphed Opcode in b[29:19] of the extended umask field : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : PRQ - IOSF",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PRQ_IOSF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : PRQ - IOSF : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : From a PCIe Device",
"UMask": "0x4",
@@ -4386,8 +5425,10 @@
},
{
"BriefDescription": "TOR Inserts : PRQ - Non IOSF",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PRQ_NON_IOSF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : PRQ - Non IOSF : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x20",
@@ -4395,16 +5436,20 @@
},
{
"BriefDescription": "TOR Inserts : Just Remote Targets",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.REMOTE_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Remote Targets : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : All from Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.REM_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All from Remote : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : All remote requests (e.g. snoops, writebacks) that came from remote sockets",
"UMask": "0xc001ffc8",
@@ -4412,8 +5457,10 @@
},
{
"BriefDescription": "TOR Inserts : All Snoops from Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.REM_SNPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All Snoops from Remote : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. : All snoops to this LLC that came from remote sockets",
"UMask": "0xc001ff08",
@@ -4421,17 +5468,71 @@
},
{
"BriefDescription": "TOR Inserts : RRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : RRQ : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x40",
"Unit": "CHA"
},
{
+ "BriefDescription": "TOR Inserts for INVXTOM opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.RRQ_MISS_INVXTOM_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e87e8240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Inserts for RDCODE opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.RRQ_MISS_RDCODE_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e80e8240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Inserts for RDCUR opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.RRQ_MISS_RDCUR_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e8068240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Inserts for RDDATA opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.RRQ_MISS_RDDATA_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e8168240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Inserts for RDINVOWN_OPT opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.RRQ_MISS_RDINVOWN_OPT_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e8268240",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts; All Snoops from Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.SNPS_FROM_REM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. All snoops to this LLC that came from remote sockets.",
"UMask": "0xc001ff08",
@@ -4439,8 +5540,10 @@
},
{
"BriefDescription": "TOR Inserts : WBQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WBQ : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent.",
"UMask": "0x80",
@@ -4448,8 +5551,10 @@
},
{
"BriefDescription": "TOR Occupancy : All",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0xc001ffff",
@@ -4457,16 +5562,20 @@
},
{
"BriefDescription": "TOR Occupancy : DDR Access",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DDR Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : SF/LLC Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : SF/LLC Evictions : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
"UMask": "0x2",
@@ -4474,14 +5583,17 @@
},
{
"BriefDescription": "TOR Occupancy : Just Hits",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Hits : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy; All from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA",
"PerPkg": "1",
@@ -4491,6 +5603,7 @@
},
{
"BriefDescription": "TOR Occupancy : CLFlushes issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSH",
"PerPkg": "1",
@@ -4500,8 +5613,10 @@
},
{
"BriefDescription": "TOR Occupancy : CLFlushOpts issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSHOPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CLFlushOpts issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8d7ff01",
@@ -4509,6 +5624,7 @@
},
{
"BriefDescription": "TOR Occupancy; CRd from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD",
"PerPkg": "1",
@@ -4518,8 +5634,10 @@
},
{
"BriefDescription": "TOR Occupancy; CRd Pref from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Code read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc88fff01",
@@ -4527,6 +5645,7 @@
},
{
"BriefDescription": "TOR Occupancy; DRd from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD",
"PerPkg": "1",
@@ -4536,8 +5655,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRDPTE",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -4547,8 +5668,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Opt from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read opt from local IA that misses in the snoop filter",
"UMask": "0xc827ff01",
@@ -4556,8 +5679,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Opt Pref from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read opt prefetch from local IA that misses in the snoop filter",
"UMask": "0xc8a7ff01",
@@ -4565,6 +5690,7 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Pref from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_PREF",
"PerPkg": "1",
@@ -4574,6 +5700,7 @@
},
{
"BriefDescription": "TOR Occupancy; Hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT",
"PerPkg": "1",
@@ -4583,6 +5710,7 @@
},
{
"BriefDescription": "TOR Occupancy; CRd hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
"PerPkg": "1",
@@ -4592,6 +5720,7 @@
},
{
"BriefDescription": "TOR Occupancy; CRd Pref hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD_PREF",
"PerPkg": "1",
@@ -4601,16 +5730,20 @@
},
{
"BriefDescription": "TOR Occupancy for All requests issued from IA cores to CXL accelerator memory regions that hit the LLC.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c0018101",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c0008101",
@@ -4618,6 +5751,7 @@
},
{
"BriefDescription": "TOR Occupancy; DRd hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD",
"PerPkg": "1",
@@ -4627,8 +5761,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRDPTE",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -4638,8 +5774,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Opt hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read opt from local IA that hits in the snoop filter",
"UMask": "0xc827fd01",
@@ -4647,8 +5785,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Opt Pref hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read opt prefetch from local IA that hits in the snoop filter",
"UMask": "0xc8a7fd01",
@@ -4656,6 +5796,7 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Pref hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_PREF",
"PerPkg": "1",
@@ -4665,8 +5806,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fd01",
@@ -4674,8 +5817,10 @@
},
{
"BriefDescription": "TOR Occupancy; LLCPrefCode hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Last level cache prefetch code read from local IA that hits in the snoop filter",
"UMask": "0xcccffd01",
@@ -4683,8 +5828,10 @@
},
{
"BriefDescription": "TOR Occupancy; LLCPrefData hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFDATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Last level cache prefetch data read from local IA that hits in the snoop filter",
"UMask": "0xccd7fd01",
@@ -4692,6 +5839,7 @@
},
{
"BriefDescription": "TOR Occupancy; LLCPrefRFO hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFRFO",
"PerPkg": "1",
@@ -4701,6 +5849,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
"PerPkg": "1",
@@ -4710,6 +5859,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO Pref hits from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO_PREF",
"PerPkg": "1",
@@ -4719,8 +5869,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47ff01",
@@ -4728,8 +5880,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOMCACHENEAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcd47ff01",
@@ -4737,8 +5891,10 @@
},
{
"BriefDescription": "TOR Occupancy; LLCPrefCode from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Last level cache prefetch data read from local IA.",
"UMask": "0xcccfff01",
@@ -4746,6 +5902,7 @@
},
{
"BriefDescription": "TOR Occupancy; LLCPrefData from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFDATA",
"PerPkg": "1",
@@ -4755,6 +5912,7 @@
},
{
"BriefDescription": "TOR Occupancy; LLCPrefRFO from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFRFO",
"PerPkg": "1",
@@ -4764,6 +5922,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses from Local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS",
"PerPkg": "1",
@@ -4773,6 +5932,7 @@
},
{
"BriefDescription": "TOR Occupancy; CRd misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
"PerPkg": "1",
@@ -4782,16 +5942,20 @@
},
{
"BriefDescription": "TOR Occupancy for CRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRDMORPH_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c80b8201",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80efe01",
@@ -4799,8 +5963,10 @@
},
{
"BriefDescription": "TOR Occupancy; CRd Pref misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Code read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc88ffe01",
@@ -4808,8 +5974,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88efe01",
@@ -4817,8 +5985,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88f7e01",
@@ -4826,8 +5996,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80f7e01",
@@ -4835,16 +6007,20 @@
},
{
"BriefDescription": "TOR Occupancy for All requests issued from IA cores to CXL accelerator memory regions that miss the LLC.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c0018201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c0008201",
@@ -4852,6 +6028,7 @@
},
{
"BriefDescription": "TOR Occupancy for DRd misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD",
"PerPkg": "1",
@@ -4861,16 +6038,20 @@
},
{
"BriefDescription": "TOR Occupancy for DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRDMORPH_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8138201",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRDPTE",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -4880,16 +6061,20 @@
},
{
"BriefDescription": "TOR Occupancy for DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 memory expander card.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8178201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8168201",
@@ -4897,8 +6082,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_EXP_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_EXP_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x20c8168201",
@@ -4906,6 +6093,7 @@
},
{
"BriefDescription": "TOR Occupancy for DRds issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR",
"PerPkg": "1",
@@ -4915,6 +6103,7 @@
},
{
"BriefDescription": "TOR Occupancy for DRd misses from local IA targeting local memory",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL",
"PerPkg": "1",
@@ -4924,6 +6113,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_DDR",
"PerPkg": "1",
@@ -4933,6 +6123,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_PMM",
"PerPkg": "1",
@@ -4942,8 +6133,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Opt misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read opt from local IA that misses in the snoop filter",
"UMask": "0xc827fe01",
@@ -4951,8 +6144,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8268201",
@@ -4960,8 +6155,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Opt Pref misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read opt prefetch from local IA that misses in the snoop filter",
"UMask": "0xc8a7fe01",
@@ -4969,8 +6166,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8a68201",
@@ -4978,6 +6177,7 @@
},
{
"BriefDescription": "TOR Occupancy for DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM",
"PerPkg": "1",
@@ -4987,6 +6187,7 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Pref misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF",
"PerPkg": "1",
@@ -4996,25 +6197,42 @@
},
{
"BriefDescription": "TOR Occupancy for L2 data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8978201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8968201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_EXP_LOCAL",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20c8968201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978601",
@@ -5022,8 +6240,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Pref misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc896fe01",
@@ -5031,8 +6251,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968601",
@@ -5040,8 +6262,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968a01",
@@ -5049,8 +6273,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978a01",
@@ -5058,6 +6284,7 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Pref misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE",
"PerPkg": "1",
@@ -5067,8 +6294,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970601",
@@ -5076,8 +6305,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970a01",
@@ -5085,6 +6316,7 @@
},
{
"BriefDescription": "TOR Occupancy for DRd misses from local IA targeting remote memory",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE",
"PerPkg": "1",
@@ -5094,6 +6326,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_DDR",
"PerPkg": "1",
@@ -5103,6 +6336,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_PMM",
"PerPkg": "1",
@@ -5112,8 +6346,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fe01",
@@ -5121,8 +6357,10 @@
},
{
"BriefDescription": "TOR Occupancy; LLCPrefCode misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Last level cache prefetch code read from local IA that misses in the snoop filter",
"UMask": "0xcccffe01",
@@ -5130,14 +6368,17 @@
},
{
"BriefDescription": "TOR Occupancy for LLC Prefetch Code transactions issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFCODE_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10cccf8201",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy; LLCPrefData misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA",
"PerPkg": "1",
@@ -5147,23 +6388,39 @@
},
{
"BriefDescription": "TOR Occupancy for LLC data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10ccd78201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10ccd68201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_EXP_LOCAL",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20ccd68201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Occupancy; LLCPrefRFO misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO",
"PerPkg": "1",
@@ -5173,25 +6430,42 @@
},
{
"BriefDescription": "TOR Occupancy for L2 RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8878201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8868201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_EXP_LOCAL",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20c8868201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668601",
@@ -5199,8 +6473,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668a01",
@@ -5208,8 +6484,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8601",
@@ -5217,8 +6495,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8a01",
@@ -5226,8 +6506,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670601",
@@ -5235,8 +6517,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670a01",
@@ -5244,8 +6528,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0601",
@@ -5253,8 +6539,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0a01",
@@ -5262,6 +6550,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
"PerPkg": "1",
@@ -5271,24 +6560,30 @@
},
{
"BriefDescription": "TOR Occupancy for RFO and L2 RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFOMORPH_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8038201",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy for RFOs issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10c8078201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10c8068201",
@@ -5296,8 +6591,10 @@
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_EXP_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_EXP_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x20c8068201",
@@ -5305,6 +6602,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_LOCAL",
"PerPkg": "1",
@@ -5314,6 +6612,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO prefetch misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF",
"PerPkg": "1",
@@ -5323,23 +6622,39 @@
},
{
"BriefDescription": "TOR Occupancy for LLC RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_ACC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10ccc78201",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_ACC_LOCAL",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_ACC_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x000",
"UMask": "0x10ccc68201",
"Unit": "CHA"
},
{
+ "BriefDescription": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_EXP_LOCAL",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20ccc68201",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Occupancy; RFO prefetch misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_LOCAL",
"PerPkg": "1",
@@ -5349,6 +6664,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO prefetch misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_REMOTE",
"PerPkg": "1",
@@ -5358,6 +6674,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_REMOTE",
"PerPkg": "1",
@@ -5367,8 +6684,10 @@
},
{
"BriefDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_UCRDF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc877de01",
@@ -5376,8 +6695,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86ffe01",
@@ -5385,8 +6706,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867fe01",
@@ -5394,8 +6717,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678601",
@@ -5403,8 +6728,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678a01",
@@ -5412,8 +6739,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8601",
@@ -5421,8 +6750,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8a01",
@@ -5430,8 +6761,10 @@
},
{
"BriefDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc87fde01",
@@ -5439,6 +6772,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO",
"PerPkg": "1",
@@ -5448,6 +6782,7 @@
},
{
"BriefDescription": "TOR Occupancy; RFO prefetch from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO_PREF",
"PerPkg": "1",
@@ -5457,6 +6792,7 @@
},
{
"BriefDescription": "TOR Occupancy : SpecItoMs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_SPECITOM",
"PerPkg": "1",
@@ -5466,8 +6802,10 @@
},
{
"BriefDescription": "TOR Occupancy : WbMtoIs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WbMtoIs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc27ff01",
@@ -5475,8 +6813,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86fff01",
@@ -5484,8 +6824,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLF issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867ff01",
@@ -5493,6 +6835,7 @@
},
{
"BriefDescription": "TOR Occupancy; All from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO",
"PerPkg": "1",
@@ -5502,8 +6845,10 @@
},
{
"BriefDescription": "TOR Occupancy : CLFlushes issued by IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_CLFLUSH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CLFlushes issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8c3ff04",
@@ -5511,6 +6856,7 @@
},
{
"BriefDescription": "TOR Occupancy; Hits from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT",
"PerPkg": "1",
@@ -5520,6 +6866,7 @@
},
{
"BriefDescription": "TOR Occupancy; ITOM hits from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOM",
"PerPkg": "1",
@@ -5529,6 +6876,7 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOMCACHENEAR",
"PerPkg": "1",
@@ -5538,6 +6886,7 @@
},
{
"BriefDescription": "TOR Occupancy; RdCur and FsRdCur hits from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_PCIRDCUR",
"PerPkg": "1",
@@ -5547,8 +6896,10 @@
},
{
"BriefDescription": "TOR Occupancy; RFO hits from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc803fd04",
@@ -5556,6 +6907,7 @@
},
{
"BriefDescription": "TOR Occupancy; ITOM from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOM",
"PerPkg": "1",
@@ -5565,8 +6917,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOMCACHENEAR",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -5576,6 +6930,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS",
"PerPkg": "1",
@@ -5585,6 +6940,7 @@
},
{
"BriefDescription": "TOR Occupancy; ITOM misses from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
"PerPkg": "1",
@@ -5594,6 +6950,7 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR",
"PerPkg": "1",
@@ -5602,7 +6959,52 @@
"Unit": "CHA"
},
{
+ "BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC and targets local memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcd42fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC and targets remote memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcd437e04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy; ITOM misses from local IO and targets local memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcc42fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy; ITOM misses from local IO and targets remote memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xcc437e04",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Occupancy; RdCur and FsRdCur misses from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR",
"PerPkg": "1",
@@ -5611,9 +7013,33 @@
"Unit": "CHA"
},
{
+ "BriefDescription": "TOR Occupancy; RdCur and FsRdCur misses from local IO and targets local memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xc8f2fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy; RdCur and FsRdCur misses from local IO and targets remote memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xc8f37e04",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Occupancy; RFO misses from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc803fe04",
@@ -5621,6 +7047,7 @@
},
{
"BriefDescription": "TOR Occupancy; RdCur and FsRdCur from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_PCIRDCUR",
"PerPkg": "1",
@@ -5630,8 +7057,10 @@
},
{
"BriefDescription": "TOR Occupancy; ItoM from local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc803ff04",
@@ -5639,8 +7068,10 @@
},
{
"BriefDescription": "TOR Occupancy : WbMtoIs issued by IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WbMtoIs issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc23ff04",
@@ -5648,8 +7079,10 @@
},
{
"BriefDescription": "TOR Occupancy : IPQ",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : IPQ : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x8",
@@ -5657,8 +7090,10 @@
},
{
"BriefDescription": "TOR Occupancy : IRQ - iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : IRQ - iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : From an iA Core",
"UMask": "0x1",
@@ -5666,8 +7101,10 @@
},
{
"BriefDescription": "TOR Occupancy : IRQ - Non iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ_NON_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : IRQ - Non iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x10",
@@ -5675,24 +7112,30 @@
},
{
"BriefDescription": "TOR Occupancy : Just ISOC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.ISOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just ISOC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just Local Targets",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOCAL_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Local Targets : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : All from Local iA and IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All from Local iA and IO : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All locally initiated requests",
"UMask": "0xc000ff05",
@@ -5700,8 +7143,10 @@
},
{
"BriefDescription": "TOR Occupancy : All from Local iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All from Local iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All locally initiated requests from iA Cores",
"UMask": "0xc000ff01",
@@ -5709,8 +7154,10 @@
},
{
"BriefDescription": "TOR Occupancy : All from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All from Local IO : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All locally generated IO traffic",
"UMask": "0xc000ff04",
@@ -5718,80 +7165,100 @@
},
{
"BriefDescription": "TOR Occupancy : Match the Opcode in b[29:19] of the extended umask field",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MATCH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Match the Opcode in b[29:19] of the extended umask field : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just Misses",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Misses : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : MMCFG Access",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MMCFG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : MMCFG Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : MMIO Access",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MMIO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : MMIO Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just NearMem",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just NearMem : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just NonCoherent",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.NONCOH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just NonCoherent : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just NotNearMem",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.NOT_NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just NotNearMem : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : PMM Access",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : PMM Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Match the PreMorphed Opcode in b[29:19] of the extended umask field",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PREMORPH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Match the PreMorphed Opcode in b[29:19] of the extended umask field : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : PRQ - IOSF",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : PRQ - IOSF : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : From a PCIe Device",
"UMask": "0x4",
@@ -5799,8 +7266,10 @@
},
{
"BriefDescription": "TOR Occupancy : PRQ - Non IOSF",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ_NON_IOSF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : PRQ - Non IOSF : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x20",
@@ -5808,16 +7277,20 @@
},
{
"BriefDescription": "TOR Occupancy : Just Remote Targets",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.REMOTE_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Remote Targets : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : All from Remote",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.REM_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All from Remote : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All remote requests (e.g. snoops, writebacks) that came from remote sockets",
"UMask": "0xc001ffc8",
@@ -5825,8 +7298,10 @@
},
{
"BriefDescription": "TOR Occupancy : All Snoops from Remote",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.REM_SNPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All Snoops from Remote : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T : All snoops to this LLC that came from remote sockets",
"UMask": "0xc001ff08",
@@ -5834,17 +7309,71 @@
},
{
"BriefDescription": "TOR Occupancy : RRQ",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RRQ : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x40",
"Unit": "CHA"
},
{
+ "BriefDescription": "TOR Occupancy for INVXTOM opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.RRQ_MISS_INVXTOM_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e87e8240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RDCODE opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.RRQ_MISS_RDCODE_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e80e8240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RDCUR opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.RRQ_MISS_RDCUR_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e8068240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RDDATA opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.RRQ_MISS_RDDATA_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e8168240",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RDINVOWN_OPT opcodes received from a remote socket which miss the L3 and target memory in a CXL type 3 memory expander local to this socket.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.RRQ_MISS_RDINVOWN_OPT_CXL_EXP_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20e8268240",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Occupancy; All Snoops from Remote",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.SNPS_FROM_REM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. All snoops to this LLC that came from remote sockets.",
"UMask": "0xc001ff08",
@@ -5852,8 +7381,10 @@
},
{
"BriefDescription": "TOR Occupancy : WBQ",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WBQ : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. T",
"UMask": "0x80",
@@ -5861,8 +7392,10 @@
},
{
"BriefDescription": "WbPushMtoI : Pushed to LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_CHA_WB_PUSH_MTOI.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbPushMtoI : Pushed to LLC : Counts the number of times when the CHA was received WbPushMtoI : Counts the number of times when the CHA was able to push WbPushMToI to LLC",
"UMask": "0x1",
@@ -5870,8 +7403,10 @@
},
{
"BriefDescription": "WbPushMtoI : Pushed to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_CHA_WB_PUSH_MTOI.MEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbPushMtoI : Pushed to Memory : Counts the number of times when the CHA was received WbPushMtoI : Counts the number of times when the CHA was unable to push WbPushMToI to LLC (hence pushed it to MEM)",
"UMask": "0x2",
@@ -5879,8 +7414,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5a",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC0 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 0 only.",
"UMask": "0x1",
@@ -5888,8 +7425,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5a",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC1 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 1 only.",
"UMask": "0x2",
@@ -5897,8 +7436,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC2",
+ "Counter": "0,1,2,3",
"EventCode": "0x5a",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC2 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 2 only.",
"UMask": "0x4",
@@ -5906,8 +7447,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5a",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC3 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 3 only.",
"UMask": "0x8",
@@ -5915,8 +7458,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC4",
+ "Counter": "0,1,2,3",
"EventCode": "0x5a",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC4 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 4 only.",
"UMask": "0x10",
@@ -5924,8 +7469,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC5",
+ "Counter": "0,1,2,3",
"EventCode": "0x5a",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC5 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 5 only.",
"UMask": "0x20",
@@ -5933,8 +7480,10 @@
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 0?) - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP0_CONFLICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 0?) - Conflict : Number of XPT prefetches dropped due to AD CMS write port contention",
"UMask": "0x8",
@@ -5942,8 +7491,10 @@
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 0?) - No Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP0_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 0?) - No Credits : Number of XPT prefetches dropped due to lack of XPT AD egress credits",
"UMask": "0x4",
@@ -5951,8 +7502,10 @@
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 1?) - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP1_CONFLICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 1?) - Conflict : Number of XPT prefetches dropped due to AD CMS write port contention",
"UMask": "0x80",
@@ -5960,8 +7513,10 @@
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 1?) - No Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP1_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 1?) - No Credits : Number of XPT prefetches dropped due to lack of XPT AD egress credits",
"UMask": "0x40",
@@ -5969,8 +7524,10 @@
},
{
"BriefDescription": "XPT Prefetches : Sent (on 0?)",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.SENT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Sent (on 0?) : Number of XPT prefetches sent",
"UMask": "0x1",
@@ -5978,8 +7535,10 @@
},
{
"BriefDescription": "XPT Prefetches : Sent (on 1?)",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.SENT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Sent (on 1?) : Number of XPT prefetches sent",
"UMask": "0x10",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cxl.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cxl.json
index f3e84fd88de3..ff81f3a6426a 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cxl.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-cxl.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of lfclk ticks",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x01",
"EventName": "UNC_CXLCM_CLOCKTICKS",
"PerPkg": "1",
@@ -9,390 +10,487 @@
},
{
"BriefDescription": "Number of Allocation to Mem Rxx AGF 0",
+ "Counter": "4,5,6,7",
"EventCode": "0x43",
"EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Req AGF0",
+ "Counter": "4,5,6,7",
"EventCode": "0x43",
"EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_REQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Rsp AGF",
+ "Counter": "4,5,6,7",
"EventCode": "0x43",
"EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_REQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Data AGF",
+ "Counter": "4,5,6,7",
"EventCode": "0x43",
"EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_RSP0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Rsp AGF",
+ "Counter": "4,5,6,7",
"EventCode": "0x43",
"EventName": "UNC_CXLCM_RxC_AGF_INSERTS.CACHE_RSP1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Req AGF 1",
+ "Counter": "4,5,6,7",
"EventCode": "0x43",
"EventName": "UNC_CXLCM_RxC_AGF_INSERTS.MEM_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Mem Data AGF",
+ "Counter": "4,5,6,7",
"EventCode": "0x43",
"EventName": "UNC_CXLCM_RxC_AGF_INSERTS.MEM_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Flits with AK set",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.AK_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Flits with BE set",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.BE_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of control flits received",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.CTRL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Headerless flits received",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.NO_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of protocol flits received",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.PROT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Flits with SZ set",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.SZ_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of flits received",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.VALID",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of valid messages in the flit",
+ "Counter": "4,5,6,7",
"EventCode": "0x4b",
"EventName": "UNC_CXLCM_RxC_FLITS.VALID_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of CRC errors detected",
+ "Counter": "4,5,6,7",
"EventCode": "0x40",
"EventName": "UNC_CXLCM_RxC_MISC.CRC_ERRORS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Init flits sent",
+ "Counter": "4,5,6,7",
"EventCode": "0x40",
"EventName": "UNC_CXLCM_RxC_MISC.INIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of LLCRD flits sent",
+ "Counter": "4,5,6,7",
"EventCode": "0x40",
"EventName": "UNC_CXLCM_RxC_MISC.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Retry flits sent",
+ "Counter": "4,5,6,7",
"EventCode": "0x40",
"EventName": "UNC_CXLCM_RxC_MISC.RETRY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles the Packing Buffer is Full",
+ "Counter": "4,5,6,7",
"EventCode": "0x52",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.CACHE_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles the Packing Buffer is Full",
+ "Counter": "4,5,6,7",
"EventCode": "0x52",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.CACHE_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles the Packing Buffer is Full",
+ "Counter": "4,5,6,7",
"EventCode": "0x52",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.CACHE_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles the Packing Buffer is Full",
+ "Counter": "4,5,6,7",
"EventCode": "0x52",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.MEM_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles the Packing Buffer is Full",
+ "Counter": "4,5,6,7",
"EventCode": "0x52",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_FULL.MEM_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Data Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x41",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.CACHE_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Req Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x41",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.CACHE_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Rsp Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x41",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.CACHE_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Mem Data Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x41",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.MEM_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Mem Rxx Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x41",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.MEM_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles of Not Empty for Cache Data Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x42",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.CACHE_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles of Not Empty for Cache Req Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x42",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.CACHE_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles of Not Empty for Cache Rsp Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x42",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.CACHE_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles of Not Empty for Mem Data Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x42",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.MEM_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of cycles of Not Empty for Mem Rxx Packing buffer",
+ "Counter": "4,5,6,7",
"EventCode": "0x42",
"EventName": "UNC_CXLCM_RxC_PACK_BUF_NE.MEM_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Flits with AK set",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_CXLCM_TxC_FLITS.AK_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Flits with BE set",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_CXLCM_TxC_FLITS.BE_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of control flits packed",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_CXLCM_TxC_FLITS.CTRL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Headerless flits packed",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_CXLCM_TxC_FLITS.NO_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of protocol flits packed",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_CXLCM_TxC_FLITS.PROT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of Flits with SZ set",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_CXLCM_TxC_FLITS.SZ_HDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CXLCM"
},
{
"BriefDescription": "Count the number of flits packed",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_CXLCM_TxC_FLITS.VALID",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Data Packing buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Req Packing buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_REQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Rsp1 Packing buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_REQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Rsp0 Packing buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_RSP0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Cache Req Packing buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.CACHE_RSP1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Mem Data Packing buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.MEM_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLCM"
},
{
"BriefDescription": "Number of Allocation to Mem Rxx Packing buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLCM_TxC_PACK_BUF_INSERTS.MEM_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLCM"
},
{
"BriefDescription": "Counts the number of uclk ticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_CXLDP_CLOCKTICKS",
"PerPkg": "1",
@@ -401,48 +499,60 @@
},
{
"BriefDescription": "Number of Allocation to M2S Data AGF",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLDP_TxC_AGF_INSERTS.M2S_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CXLDP"
},
{
"BriefDescription": "Number of Allocation to M2S Req AGF",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLDP_TxC_AGF_INSERTS.M2S_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CXLDP"
},
{
"BriefDescription": "Number of Allocation to U2C Data AGF",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CXLDP"
},
{
"BriefDescription": "Number of Allocation to U2C Req AGF",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CXLDP"
},
{
"BriefDescription": "Number of Allocation to U2C Rsp AGF 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_RSP0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CXLDP"
},
{
"BriefDescription": "Number of Allocation to U2C Rsp AGF 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_CXLDP_TxC_AGF_INSERTS.U2C_RSP1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CXLDP"
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-interconnect.json
index 65d088556bae..8b1ae9540066 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-interconnect.json
@@ -1,8 +1,10 @@
[
{
"BriefDescription": "Total IRP occupancy of inbound read and write requests to coherent memory.",
+ "Counter": "0,1",
"EventCode": "0x0f",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.MEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Total IRP occupancy of inbound read and write requests to coherent memory. This is effectively the sum of read occupancy and write occupancy.",
"UMask": "0x4",
@@ -10,6 +12,7 @@
},
{
"BriefDescription": "IRP Clockticks",
+ "Counter": "0,1",
"EventCode": "0x01",
"EventName": "UNC_I_CLOCKTICKS",
"PerPkg": "1",
@@ -18,6 +21,7 @@
},
{
"BriefDescription": "FAF RF full",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_FAF_FULL",
"PerPkg": "1",
@@ -25,6 +29,7 @@
},
{
"BriefDescription": "FAF - request insert from TC.",
+ "Counter": "0,1",
"EventCode": "0x18",
"EventName": "UNC_I_FAF_INSERTS",
"PerPkg": "1",
@@ -32,6 +37,7 @@
},
{
"BriefDescription": "FAF occupancy",
+ "Counter": "0,1",
"EventCode": "0x19",
"EventName": "UNC_I_FAF_OCCUPANCY",
"PerPkg": "1",
@@ -39,6 +45,7 @@
},
{
"BriefDescription": "FAF allocation -- sent to ADQ",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_FAF_TRANSACTIONS",
"PerPkg": "1",
@@ -46,14 +53,17 @@
},
{
"BriefDescription": ": All Inserts Outbound (BL, AK, Snoops)",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_I_IRP_ALL.EVICTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": ": All Inserts Inbound (p2p + faf + cset)",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_I_IRP_ALL.INBOUND_INSERTS",
"PerPkg": "1",
@@ -62,78 +72,97 @@
},
{
"BriefDescription": ": All Inserts Outbound (BL, AK, Snoops)",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_I_IRP_ALL.OUTBOUND_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Atomic Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.2ND_ATOMIC_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Read Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.2ND_RD_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Write Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.2ND_WR_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Fastpath Rejects",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.FAST_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Fastpath Requests",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.FAST_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Fastpath Transfers From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.FAST_XFER",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Prefetch Ack Hints From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.PF_ACK_HINT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Slow path fwpf didn't find prefetch",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.SLOWPATH_FWPF_NO_PRF",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 1 : Lost Forward",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.LOST_FWD",
"PerPkg": "1",
@@ -143,8 +172,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Received Invalid",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Received Invalid : Secondary received a transfer that did not have sufficient MESI state",
"UMask": "0x20",
@@ -152,8 +183,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Received Valid",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SEC_RCVD_VLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Received Valid : Secondary received a transfer that did have sufficient MESI state",
"UMask": "0x40",
@@ -161,8 +194,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of E Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of E Line : Secondary received a transfer that did have sufficient MESI state",
"UMask": "0x4",
@@ -170,8 +205,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of I Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of I Line : Snoop took cacheline ownership before write from data was committed.",
"UMask": "0x1",
@@ -179,8 +216,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of M Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of M Line : Snoop took cacheline ownership before write from data was committed.",
"UMask": "0x8",
@@ -188,8 +227,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of S Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of S Line : Secondary received a transfer that did not have sufficient MESI state",
"UMask": "0x2",
@@ -197,8 +238,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit M, E, S or I line in the IIO",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit M, E, S or I line in the IIO",
"UMask": "0x7e",
@@ -206,8 +249,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit E or S line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_ES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit E or S line in the IIO cache",
"UMask": "0x74",
@@ -215,8 +260,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit I line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit I line in the IIO cache",
"UMask": "0x72",
@@ -224,6 +271,7 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit M line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_M",
"PerPkg": "1",
@@ -233,8 +281,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that miss the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that miss the IIO cache",
"UMask": "0x71",
@@ -242,62 +292,77 @@
},
{
"BriefDescription": "Snoop Responses : Hit E or S",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_ES",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : Hit I",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : Hit M",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_M",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : Miss",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : SnpCode",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPCODE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : SnpData",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPDATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : SnpInv",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPINV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Inbound write (fast path) requests received by the IRP.",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.WR_PREF",
"PerPkg": "1",
@@ -307,132 +372,167 @@
},
{
"BriefDescription": "AK Egress Allocations",
+ "Counter": "0,1",
"EventCode": "0x0b",
"EventName": "UNC_I_TxC_AK_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x05",
"EventName": "UNC_I_TxC_BL_DRS_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_I_TxC_BL_DRS_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x08",
"EventName": "UNC_I_TxC_BL_DRS_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x06",
"EventName": "UNC_I_TxC_BL_NCB_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_I_TxC_BL_NCB_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x09",
"EventName": "UNC_I_TxC_BL_NCB_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x07",
"EventName": "UNC_I_TxC_BL_NCS_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x04",
"EventName": "UNC_I_TxC_BL_NCS_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x0a",
"EventName": "UNC_I_TxC_BL_NCS_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "UNC_I_TxR2_AD01_STALL_CREDIT_CYCLES",
+ "Counter": "0,1",
"EventCode": "0x1c",
"EventName": "UNC_I_TxR2_AD01_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Counts the number times when it is not possible to issue a request to the M2PCIe because there are no Egress Credits available on AD0, A1 or AD0AD1 both. Stalls on both AD0 and AD1 will count as 2",
"Unit": "IRP"
},
{
"BriefDescription": "No AD0 Egress Credits Stalls",
+ "Counter": "0,1",
"EventCode": "0x1a",
"EventName": "UNC_I_TxR2_AD0_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No AD0 Egress Credits Stalls : Counts the number times when it is not possible to issue a request to the M2PCIe because there are no AD0 Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "No AD1 Egress Credits Stalls",
+ "Counter": "0,1",
"EventCode": "0x1b",
"EventName": "UNC_I_TxR2_AD1_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No AD1 Egress Credits Stalls : Counts the number times when it is not possible to issue a request to the M2PCIe because there are no AD1 Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "No BL Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x1d",
"EventName": "UNC_I_TxR2_BL_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No BL Egress Credit Stalls : Counts the number times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0x0d",
"EventName": "UNC_I_TxS_DATA_INSERTS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outbound Read Requests : Counts the number of requests issued to the switch (towards the devices).",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0x0e",
"EventName": "UNC_I_TxS_DATA_INSERTS_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outbound Read Requests : Counts the number of requests issued to the switch (towards the devices).",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Request Queue Occupancy",
+ "Counter": "0,1",
"EventCode": "0x0c",
"EventName": "UNC_I_TxS_REQUEST_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outbound Request Queue Occupancy : Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices). This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests.",
"Unit": "IRP"
},
{
"BriefDescription": "M2M Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M2M_CLOCKTICKS",
"PerPkg": "1",
@@ -441,6 +541,7 @@
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_M2M_CMS_CLOCKTICKS",
"PerPkg": "1",
@@ -448,16 +549,20 @@
},
{
"BriefDescription": "Cycles when direct to core mode (which bypasses the CHA) was disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles when direct to core mode, which bypasses the CHA, was disabled : Non Cisgress",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE.NON_CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles when direct to core mode, which bypasses the CHA, was disabled : Non Cisgress : Counts the number of time non cisgress D2C was not honoured by egress due to directory state constraints",
"UMask": "0x2",
@@ -465,39 +570,49 @@
},
{
"BriefDescription": "Counts the time when FM didn't do d2c for fill reads (cross tile case)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_NOTFORKED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number of reads in which direct to core transaction were overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "Number of reads in which direct to core transaction was overridden : Cisgress",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE.CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Number of reads in which direct to core transaction was overridden : 2LM Hit?",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE.PMM_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Number of times a direct to UPI transaction was overridden.",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_M2M_DIRECT2UPITXN_OVERRIDE.PMM_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of times a direct to UPI transaction was overridden. : Counts the number of times D2K wasn't honored even though the incoming request had d2k set",
"UMask": "0x1",
@@ -505,24 +620,30 @@
},
{
"BriefDescription": "Number of reads in which direct to Intel UPI transactions were overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles when direct to Intel UPI was disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles when Direct2UPI was Disabled : Cisgress D2U Ignored",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE.CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles when Direct2UPI was Disabled : Cisgress D2U Ignored : Counts cisgress d2K that was not honored due to directory constraints",
"UMask": "0x4",
@@ -530,8 +651,10 @@
},
{
"BriefDescription": "Cycles when Direct2UPI was Disabled : Egress Ignored D2U",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE.EGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles when Direct2UPI was Disabled : Egress Ignored D2U : Counts the number of time D2K was not honoured by egress due to directory state constraints",
"UMask": "0x1",
@@ -539,8 +662,10 @@
},
{
"BriefDescription": "Cycles when Direct2UPI was Disabled : Non Cisgress D2U Ignored",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE.NON_CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles when Direct2UPI was Disabled : Non Cisgress D2U Ignored : Counts non cisgress d2K that was not honored due to directory constraints",
"UMask": "0x2",
@@ -548,8 +673,10 @@
},
{
"BriefDescription": "Messages sent direct to the Intel UPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2M_DIRECT2UPI_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times egress did D2K (Direct to KTI)",
"UMask": "0x7",
@@ -557,86 +684,107 @@
},
{
"BriefDescription": "Number of reads that a message sent direct2 Intel UPI was overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x1c",
"EventName": "UNC_M2M_DIRECT2UPI_TXN_OVERRIDE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "Number of times a direct to UPI transaction was overridden.",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_M2M_DIRECT2UPI_TXN_OVERRIDE.CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory lookups (any state found)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.ANY",
"PerPkg": "1",
@@ -646,6 +794,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookups (cacheline found in A state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_A",
"PerPkg": "1",
@@ -655,6 +804,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in I state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_I",
"PerPkg": "1",
@@ -664,6 +814,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in S state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_S",
"PerPkg": "1",
@@ -673,86 +824,107 @@
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from A to I",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A2I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x320",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from A to S",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A2S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x340",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from/to Any state",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.ANY",
"PerPkg": "1",
@@ -761,8 +933,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_I_HIT_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from A to I to non persistent memory (DRAM or HBM)",
"UMask": "0x120",
@@ -770,8 +944,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_I_MISS_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 2lm miss data returns that would result in directory update from A to I to non persistent memory (DRAM or HBM)",
"UMask": "0x220",
@@ -779,8 +955,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_S_HIT_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from A to S to non persistent memory (DRAM or HBM)",
"UMask": "0x140",
@@ -788,8 +966,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.A_TO_S_MISS_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 2lm miss data returns that would result in directory update from A to S to non persistent memory (DRAM or HBM)",
"UMask": "0x240",
@@ -797,8 +977,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.HIT_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts any 1lm or 2lm hit data return that would result in directory update to non persistent memory (DRAM or HBM)",
"UMask": "0x101",
@@ -806,24 +988,30 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from I to A",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I2A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x304",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from I to S",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I2S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x302",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I_TO_A_HIT_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from I to A to non persistent memory (DRAM or HBM)",
"UMask": "0x104",
@@ -831,8 +1019,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I_TO_A_MISS_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 2lm miss data returns that would result in directory update from I to A to non persistent memory (DRAM or HBM)",
"UMask": "0x204",
@@ -840,8 +1030,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I_TO_S_HIT_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from I to S to non persistent memory (DRAM or HBM)",
"UMask": "0x102",
@@ -849,8 +1041,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.I_TO_S_MISS_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 2lm miss data returns that would result in directory update from I to S to non persistent memory (DRAM or HBM)",
"UMask": "0x202",
@@ -858,8 +1052,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.MISS_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts any 2lm miss data return that would result in directory update to non persistent memory (DRAM or HBM)",
"UMask": "0x201",
@@ -867,24 +1063,30 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from S to A",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S2A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x310",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from S to I",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S2I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x308",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_A_HIT_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from S to A to non persistent memory (DRAM or HBM)",
"UMask": "0x110",
@@ -892,8 +1094,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_A_MISS_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 2lm miss data returns that would result in directory update from S to A to non persistent memory (DRAM or HBM)",
"UMask": "0x210",
@@ -901,8 +1105,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_I_HIT_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 1lm or 2lm hit data returns that would result in directory update from S to I to non persistent memory (DRAM or HBM)",
"UMask": "0x108",
@@ -910,8 +1116,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.S_TO_I_MISS_NON_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts 2lm miss data returns that would result in directory update from S to I to non persistent memory (DRAM or HBM)",
"UMask": "0x208",
@@ -919,8 +1127,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x80000004",
@@ -928,8 +1138,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x80000001",
@@ -937,40 +1149,50 @@
},
{
"BriefDescription": "Count when Starve Glocab counter is at 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2M_IGR_STARVE_WINNER.MASK7",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Reads to iMC issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x304",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0.TO_NM1LM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0.TO_NM1LM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x108",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0.TO_NMCache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0.TO_NMCache",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x110",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0_ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -979,24 +1201,30 @@
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0_FROM_TGR",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x140",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0_ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x102",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0_NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0_NORMAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1005,24 +1233,30 @@
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_CACHE",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x110",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_MEM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x108",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH0_TO_PMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH0_TO_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1031,24 +1265,30 @@
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1.TO_NM1LM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1.TO_NM1LM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x208",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1.TO_NMCache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1.TO_NMCache",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x210",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1_ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1057,24 +1297,30 @@
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1_FROM_TGR",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x240",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1_ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x202",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1_NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1_NORMAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1083,24 +1329,30 @@
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_CACHE",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x210",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_MEM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x208",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.CH1_TO_PMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.CH1_TO_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1109,62 +1361,77 @@
},
{
"BriefDescription": "UNC_M2M_IMC_READS.FROM_TGR",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x340",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x302",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x301",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.TO_DDR_AS_CACHE",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x310",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.TO_DDR_AS_MEM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x308",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.TO_NM1LM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.TO_NM1LM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x308",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.TO_NMCACHE",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.TO_NMCACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x310",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_READS.TO_PMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_IMC_READS.TO_PMM",
"PerPkg": "1",
@@ -1173,23 +1440,29 @@
},
{
"BriefDescription": "All Writes - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1810",
"Unit": "M2M"
},
{
"BriefDescription": "Non-Inclusive - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0.NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_WRITES.CH0_ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1198,15 +1471,19 @@
},
{
"BriefDescription": "From TGR - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_WRITES.CH0_FULL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_FULL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1215,30 +1492,38 @@
},
{
"BriefDescription": "UNC_M2M_IMC_WRITES.CH0_FULL_ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x804",
"Unit": "M2M"
},
{
"BriefDescription": "Non-Inclusive - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Non-Inclusive Miss - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_NI_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_WRITES.CH0_PARTIAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_PARTIAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1247,32 +1532,40 @@
},
{
"BriefDescription": "UNC_M2M_IMC_WRITES.CH0_PARTIAL_ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x808",
"Unit": "M2M"
},
{
"BriefDescription": "DDR, acting as Cache - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x840",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_MEM",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x820",
"Unit": "M2M"
},
{
"BriefDescription": "PMM - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH0_TO_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1282,15 +1575,19 @@
},
{
"BriefDescription": "Non-Inclusive - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1.NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "All Writes - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1299,15 +1596,19 @@
},
{
"BriefDescription": "From TGR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Full Line Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_FULL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1316,30 +1617,38 @@
},
{
"BriefDescription": "ISOCH Full Line - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1004",
"Unit": "M2M"
},
{
"BriefDescription": "Non-Inclusive - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Non-Inclusive Miss - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_NI_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Partial Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_PARTIAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1348,32 +1657,40 @@
},
{
"BriefDescription": "ISOCH Partial - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1008",
"Unit": "M2M"
},
{
"BriefDescription": "DDR, acting as Cache - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1040",
"Unit": "M2M"
},
{
"BriefDescription": "DDR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1020",
"Unit": "M2M"
},
{
"BriefDescription": "PMM - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.CH1_TO_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1383,75 +1700,94 @@
},
{
"BriefDescription": "From TGR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Full Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1801",
"Unit": "M2M"
},
{
"BriefDescription": "ISOCH Full Line - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1804",
"Unit": "M2M"
},
{
"BriefDescription": "Non-Inclusive - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Non-Inclusive Miss - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.NI_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Partial Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1802",
"Unit": "M2M"
},
{
"BriefDescription": "ISOCH Partial - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1808",
"Unit": "M2M"
},
{
"BriefDescription": "DDR, acting as Cache - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1840",
"Unit": "M2M"
},
{
"BriefDescription": "DDR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1820",
"Unit": "M2M"
},
{
"BriefDescription": "PMM - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_IMC_WRITES.TO_PMM",
"PerPkg": "1",
@@ -1460,143 +1796,179 @@
},
{
"BriefDescription": "UNC_M2M_PREFCAM_CIS_DROPS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_M2M_PREFCAM_CIS_DROPS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH0_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH0_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH1_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH1_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "M2M"
},
{
"BriefDescription": ": UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "M2M"
},
{
"BriefDescription": ": XPT - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "M2M"
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.RD_MERGED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.WR_MERGED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.WR_SQUASHED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH0_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH0_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH1_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH1_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_PREFCAM_INSERTS.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_PREFCAM_INSERTS.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Prefetch CAM Inserts : XPT -All Channels",
"UMask": "0x5",
@@ -1604,108 +1976,135 @@
},
{
"BriefDescription": "Prefetch CAM Occupancy : All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M2M_PREFCAM_RESP_MISS.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": ": Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_M2M_PREFCAM_RESP_MISS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": ": Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_M2M_PREFCAM_RESP_MISS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.1LM_POSTED",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.1LM_POSTED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.CIS",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.CIS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.PMM_MEMMODE_ACCEPT",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.PMM_MEMMODE_ACCEPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.SQUASHED",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.SQUASHED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Occupancy - Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M2M_PREFCAM_RxC_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) : AD Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M2M_RxC_AD_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M2M_RxC_AD_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Clean NearMem Read Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_CLEAN",
"PerPkg": "1",
@@ -1715,6 +2114,7 @@
},
{
"BriefDescription": "Dirty NearMem Read Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_DIRTY",
"PerPkg": "1",
@@ -1724,8 +2124,10 @@
},
{
"BriefDescription": "Tag Hit : Clean NearMem Underfill Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TAG_HIT.NM_UFILL_HIT_CLEAN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tag Hit indicates when a request sent to the iMC hit in Near Memory. : Counts clean underfill hits due to a partial write",
"UMask": "0x4",
@@ -1733,8 +2135,10 @@
},
{
"BriefDescription": "Tag Hit : Dirty NearMem Underfill Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TAG_HIT.NM_UFILL_HIT_DIRTY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tag Hit indicates when a request sent to the iMC hit in Near Memory. : Counts dirty underfill read hits due to a partial write",
"UMask": "0x8",
@@ -1742,230 +2146,288 @@
},
{
"BriefDescription": "UNC_M2M_TAG_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2M_TAG_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "Number AD Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M2M_TGR_AD_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number BL Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_M2M_TGR_BL_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x104",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x204",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "WPQ Flush : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2M_WPQ_FLUSH.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "WPQ Flush : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2M_WPQ_FLUSH.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_WPQ_NO_SPEC_CRD.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_WPQ_NO_SPEC_CRD.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_WR_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_WR_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M2M_WR_TRACKER_NE.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M2M_WR_TRACKER_NE.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty : Mirror",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M2M_WR_TRACKER_NE.MIRR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M2M_WR_TRACKER_NE.MIRR_NONTGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M2M_WR_TRACKER_NE.MIRR_PWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "CBox AD Credits Empty : Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : Requests : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x4",
@@ -1973,8 +2435,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty : Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : Snoops : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x8",
@@ -1982,8 +2446,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty : VNA Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : VNA Messages : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x1",
@@ -1991,8 +2457,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty : Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : Writebacks : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x2",
@@ -2000,6 +2468,7 @@
},
{
"BriefDescription": "M3UPI Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M3UPI_CLOCKTICKS",
"PerPkg": "1",
@@ -2008,31 +2477,39 @@
},
{
"BriefDescription": "M3UPI CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_M3UPI_CMS_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "D2C Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_M3UPI_D2C_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "D2C Sent : Count cases BL sends direct to core",
"Unit": "M3UPI"
},
{
"BriefDescription": "D2U Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_M3UPI_D2U_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "D2U Sent : Cases where SMI3 sends D2U command",
"Unit": "M3UPI"
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -2040,8 +2517,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -2049,8 +2528,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO0 and IIO1 share the same ring destination. (1 VN0 credit only)",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO0 and IIO1 share the same ring destination. (1 VN0 credit only) : No vn0 and vna credits available to send to M2",
"UMask": "0x1",
@@ -2058,8 +2539,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO2",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO2 : No vn0 and vna credits available to send to M2",
"UMask": "0x2",
@@ -2067,8 +2550,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO3",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO3 : No vn0 and vna credits available to send to M2",
"UMask": "0x4",
@@ -2076,8 +2561,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO4",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO4 : No vn0 and vna credits available to send to M2",
"UMask": "0x8",
@@ -2085,8 +2572,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO5",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO5 : No vn0 and vna credits available to send to M2",
"UMask": "0x10",
@@ -2094,8 +2583,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : All IIO targets for NCS are in single mask. ORs them together",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : All IIO targets for NCS are in single mask. ORs them together : No vn0 and vna credits available to send to M2",
"UMask": "0x40",
@@ -2103,8 +2594,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : Selected M2p BL NCS credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS_SEL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : Selected M2p BL NCS credits : No vn0 and vna credits available to send to M2",
"UMask": "0x80",
@@ -2112,8 +2605,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO5",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.UBOX_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO5 : No vn0 and vna credits available to send to M2",
"UMask": "0x20",
@@ -2121,8 +2616,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AD - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3e",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AD - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x1",
@@ -2130,8 +2627,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AD - Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x3e",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AD - Slot 1 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x2",
@@ -2139,8 +2638,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AD - Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3e",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AD - Slot 2 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x4",
@@ -2148,8 +2649,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AK - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3e",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AK - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x10",
@@ -2157,8 +2660,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AK - Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3e",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AK - Slot 2 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x20",
@@ -2166,8 +2671,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : BL - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3e",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.BL_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : BL - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x8",
@@ -2175,8 +2682,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : REQ on AD",
+ "Counter": "0",
"EventCode": "0x4b",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : REQ on AD : VN0 message requested but lost arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -2184,8 +2693,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : RSP on AD",
+ "Counter": "0",
"EventCode": "0x4b",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : RSP on AD : VN0 message requested but lost arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -2193,8 +2704,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : SNP on AD",
+ "Counter": "0",
"EventCode": "0x4b",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : SNP on AD : VN0 message requested but lost arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -2202,8 +2715,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : NCB on BL",
+ "Counter": "0",
"EventCode": "0x4b",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : NCB on BL : VN0 message requested but lost arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -2211,8 +2726,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : NCS on BL",
+ "Counter": "0",
"EventCode": "0x4b",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : NCS on BL : VN0 message requested but lost arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -2220,8 +2737,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : RSP on BL",
+ "Counter": "0",
"EventCode": "0x4b",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : RSP on BL : VN0 message requested but lost arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -2229,8 +2748,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : WB on BL",
+ "Counter": "0",
"EventCode": "0x4b",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : WB on BL : VN0 message requested but lost arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -2238,8 +2759,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : REQ on AD",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : REQ on AD : VN1 message requested but lost arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -2247,8 +2770,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : RSP on AD",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : RSP on AD : VN1 message requested but lost arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -2256,8 +2781,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : SNP on AD",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : SNP on AD : VN1 message requested but lost arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -2265,8 +2792,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : NCB on BL",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : NCB on BL : VN1 message requested but lost arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -2274,8 +2803,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : NCS on BL",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : NCS on BL : VN1 message requested but lost arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -2283,8 +2814,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : RSP on BL",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : RSP on BL : VN1 message requested but lost arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -2292,8 +2825,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : WB on BL",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : WB on BL : VN1 message requested but lost arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -2301,8 +2836,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : AD, BL Parallel Win VN0",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : AD, BL Parallel Win VN0 : AD and BL messages won arbitration concurrently / in parallel",
"UMask": "0x10",
@@ -2310,8 +2847,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : AD, BL Parallel Win VN1",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : AD, BL Parallel Win VN1 : AD and BL messages won arbitration concurrently / in parallel",
"UMask": "0x20",
@@ -2319,8 +2858,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : Max Parallel Win",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.ALL_PARALLEL_WIN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : Max Parallel Win : VN0 and VN1 arbitration sub-pipelines both produced AD and BL winners (maximum possible parallel winners)",
"UMask": "0x80",
@@ -2328,8 +2869,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending AD VN0",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending AD VN0 : Arbitration stage made no progress on pending ad vn0 messages because slotting stage cannot accept new message",
"UMask": "0x1",
@@ -2337,8 +2880,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending AD VN1",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending AD VN1 : Arbitration stage made no progress on pending ad vn1 messages because slotting stage cannot accept new message",
"UMask": "0x2",
@@ -2346,8 +2891,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending BL VN0",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending BL VN0 : Arbitration stage made no progress on pending bl vn0 messages because slotting stage cannot accept new message",
"UMask": "0x4",
@@ -2355,8 +2902,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending BL VN1",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending BL VN1 : Arbitration stage made no progress on pending bl vn1 messages because slotting stage cannot accept new message",
"UMask": "0x8",
@@ -2364,8 +2913,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : VN0, VN1 Parallel Win",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.VN01_PARALLEL_WIN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : VN0, VN1 Parallel Win : VN0 and VN1 arbitration sub-pipelines had parallel winners (at least one AD or BL on each side)",
"UMask": "0x40",
@@ -2373,8 +2924,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : REQ on AD",
+ "Counter": "0",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : REQ on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -2382,8 +2935,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : RSP on AD",
+ "Counter": "0",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : RSP on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -2391,8 +2946,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : SNP on AD",
+ "Counter": "0",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : SNP on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -2400,8 +2957,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : NCB on BL",
+ "Counter": "0",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : NCB on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -2409,8 +2968,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : NCS on BL",
+ "Counter": "0",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : NCS on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -2418,8 +2979,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : RSP on BL",
+ "Counter": "0",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : RSP on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -2427,8 +2990,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : WB on BL",
+ "Counter": "0",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : WB on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -2436,8 +3001,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : REQ on AD",
+ "Counter": "0",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : REQ on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -2445,8 +3012,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : RSP on AD",
+ "Counter": "0",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : RSP on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -2454,8 +3023,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : SNP on AD",
+ "Counter": "0",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : SNP on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -2463,8 +3034,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : NCB on BL",
+ "Counter": "0",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : NCB on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -2472,8 +3045,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : NCS on BL",
+ "Counter": "0",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : NCS on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -2481,8 +3056,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : RSP on BL",
+ "Counter": "0",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : RSP on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -2490,8 +3067,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : WB on BL",
+ "Counter": "0",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : WB on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -2499,8 +3078,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : REQ on AD",
+ "Counter": "0",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : REQ on AD : VN0 message was not able to request arbitration while some other message won arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -2508,8 +3089,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : RSP on AD",
+ "Counter": "0",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : RSP on AD : VN0 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -2517,8 +3100,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : SNP on AD",
+ "Counter": "0",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : SNP on AD : VN0 message was not able to request arbitration while some other message won arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -2526,8 +3111,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : NCB on BL",
+ "Counter": "0",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : NCB on BL : VN0 message was not able to request arbitration while some other message won arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -2535,8 +3122,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : NCS on BL",
+ "Counter": "0",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : NCS on BL : VN0 message was not able to request arbitration while some other message won arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -2544,8 +3133,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : RSP on BL",
+ "Counter": "0",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : RSP on BL : VN0 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -2553,8 +3144,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : WB on BL",
+ "Counter": "0",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : WB on BL : VN0 message was not able to request arbitration while some other message won arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -2562,8 +3155,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : REQ on AD",
+ "Counter": "0",
"EventCode": "0x4a",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : REQ on AD : VN1 message was not able to request arbitration while some other message won arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -2571,8 +3166,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : RSP on AD",
+ "Counter": "0",
"EventCode": "0x4a",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : RSP on AD : VN1 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -2580,8 +3177,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : SNP on AD",
+ "Counter": "0",
"EventCode": "0x4a",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : SNP on AD : VN1 message was not able to request arbitration while some other message won arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -2589,8 +3188,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : NCB on BL",
+ "Counter": "0",
"EventCode": "0x4a",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : NCB on BL : VN1 message was not able to request arbitration while some other message won arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -2598,8 +3199,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : NCS on BL",
+ "Counter": "0",
"EventCode": "0x4a",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : NCS on BL : VN1 message was not able to request arbitration while some other message won arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -2607,8 +3210,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : RSP on BL",
+ "Counter": "0",
"EventCode": "0x4a",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : RSP on BL : VN1 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -2616,8 +3221,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : WB on BL",
+ "Counter": "0",
"EventCode": "0x4a",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : WB on BL : VN1 message was not able to request arbitration while some other message won arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -2625,8 +3232,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD to Slot 0 on BL Arb",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_BL_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD to Slot 0 on BL Arb : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to slot 0 of independent flit while bl message is in arbitration",
"UMask": "0x2",
@@ -2634,8 +3243,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD to Slot 0 on Idle",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD to Slot 0 on Idle : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to slot 0 of independent flit while pipeline is idle",
"UMask": "0x1",
@@ -2643,8 +3254,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD + BL to Slot 1",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S1_BL_SLOT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD + BL to Slot 1 : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to flit slot 1 while merging with bl message in same flit",
"UMask": "0x4",
@@ -2652,8 +3265,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD + BL to Slot 2",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S2_BL_SLOT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD + BL to Slot 2 : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to flit slot 2 while merging with bl message in same flit",
"UMask": "0x8",
@@ -2661,8 +3276,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events : Any In BGF FIFO",
+ "Counter": "0",
"EventCode": "0x5f",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : Any In BGF FIFO : Indication that at least one packet (flit) is in the bgf (fifo only)",
"UMask": "0x1",
@@ -2670,8 +3287,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events : Any in BGF Path",
+ "Counter": "0",
"EventCode": "0x5f",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_PATH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : Any in BGF Path : Indication that at least one packet (flit) is in the bgf path (i.e. pipe to fifo)",
"UMask": "0x2",
@@ -2679,8 +3298,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events",
+ "Counter": "0",
"EventCode": "0x5f",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.LT1_FOR_D2K",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : d2k credit count is less than 1",
"UMask": "0x10",
@@ -2688,8 +3309,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events",
+ "Counter": "0",
"EventCode": "0x5f",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.LT2_FOR_D2K",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : d2k credit count is less than 2",
"UMask": "0x20",
@@ -2697,8 +3320,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events : No D2K For Arb",
+ "Counter": "0",
"EventCode": "0x5f",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.VN0_NO_D2K_FOR_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : No D2K For Arb : VN0 BL RSP message was blocked from arbitration request due to lack of D2K CMP credit",
"UMask": "0x4",
@@ -2706,8 +3331,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events",
+ "Counter": "0",
"EventCode": "0x5f",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.VN1_NO_D2K_FOR_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : VN1 BL RSP message was blocked from arbitration request due to lack of D2K CMP credits",
"UMask": "0x8",
@@ -2715,8 +3342,10 @@
},
{
"BriefDescription": "Credit Occupancy : Credits Consumed",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.CONSUMED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Credits Consumed : number of remote vna credits consumed per cycle",
"UMask": "0x80",
@@ -2724,8 +3353,10 @@
},
{
"BriefDescription": "Credit Occupancy : D2K Credits",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.D2K_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : D2K Credits : D2K completion fifo credit occupancy (credits in use), accumulated across all cycles",
"UMask": "0x10",
@@ -2733,8 +3364,10 @@
},
{
"BriefDescription": "Credit Occupancy : Packets in BGF FIFO",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Packets in BGF FIFO : Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in fifo",
"UMask": "0x2",
@@ -2742,8 +3375,10 @@
},
{
"BriefDescription": "Credit Occupancy : Packets in BGF Path",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_PATH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Packets in BGF Path : Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in path (i.e. pipe to fifo or fifo)",
"UMask": "0x4",
@@ -2751,8 +3386,10 @@
},
{
"BriefDescription": "Credit Occupancy",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : count of bl messages in pump-1-pending state, in completion fifo only",
"UMask": "0x40",
@@ -2760,8 +3397,10 @@
},
{
"BriefDescription": "Credit Occupancy",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_TOTAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : count of bl messages in pump-1-pending state, in marker table and in fifo",
"UMask": "0x20",
@@ -2769,8 +3408,10 @@
},
{
"BriefDescription": "Credit Occupancy : Transmit Credits",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.TxQ_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Transmit Credits : Link layer transmit queue credit occupancy (credits in use), accumulated across all cycles",
"UMask": "0x8",
@@ -2778,8 +3419,10 @@
},
{
"BriefDescription": "Credit Occupancy : VNA In Use",
+ "Counter": "0",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.VNA_IN_USE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : VNA In Use : Remote UPI VNA credit occupancy (number of credits in use), accumulated across all cycles",
"UMask": "0x1",
@@ -2787,8 +3430,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD",
+ "Counter": "0",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -2796,8 +3441,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD",
+ "Counter": "0",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -2805,8 +3452,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD",
+ "Counter": "0",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -2814,8 +3463,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL",
+ "Counter": "0",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -2823,8 +3474,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL",
+ "Counter": "0",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -2832,8 +3485,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL",
+ "Counter": "0",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -2841,8 +3496,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL",
+ "Counter": "0",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -2850,8 +3507,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : All",
+ "Counter": "0",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : All : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but could not be sent for any reason, e.g. low credits, low tsv, stall injection",
"UMask": "0x1",
@@ -2859,8 +3518,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : No BGF Credits",
+ "Counter": "0",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.NO_BGF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : No BGF Credits : Data flit is ready for transmission but could not be sent",
"UMask": "0x8",
@@ -2868,8 +3529,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : No TxQ Credits",
+ "Counter": "0",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.NO_TXQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : No TxQ Credits : Data flit is ready for transmission but could not be sent",
"UMask": "0x10",
@@ -2877,8 +3540,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : TSV High",
+ "Counter": "0",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.TSV_HI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : TSV High : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but was not sent while tsv high",
"UMask": "0x2",
@@ -2886,8 +3551,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : Cycle valid for Flit",
+ "Counter": "0",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.VALID_FOR_FLIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : Cycle valid for Flit : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but was not sent while cycle is valid for flit transmission",
"UMask": "0x4",
@@ -2895,8 +3562,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence : Wait on Pump 0",
+ "Counter": "0",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P0_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : Wait on Pump 0 : generating bl data flit sequence; waiting for data pump 0",
"UMask": "0x1",
@@ -2904,8 +3573,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_AT_LIMIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is at capacity (pending table plus completion fifo at limit)",
"UMask": "0x10",
@@ -2913,8 +3584,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_BUSY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is tracking at least one message",
"UMask": "0x8",
@@ -2922,8 +3595,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_FIFO_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending completion fifo is full",
"UMask": "0x40",
@@ -2931,8 +3606,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_HOLD_P0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is at or near capacity, such that pump-0-only bl messages are getting stalled in slotting stage",
"UMask": "0x20",
@@ -2940,8 +3617,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_TO_LIMBO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : a bl message finished but is in limbo and moved to pump-1-pending logic",
"UMask": "0x4",
@@ -2949,8 +3628,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence : Wait on Pump 1",
+ "Counter": "0",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : Wait on Pump 1 : generating bl data flit sequence; waiting for data pump 1",
"UMask": "0x2",
@@ -2958,8 +3639,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_HOLDOFF",
+ "Counter": "0",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_HOLDOFF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request naturally serviced during hold-off period",
"UMask": "0x4",
@@ -2967,8 +3650,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_SERVICE",
+ "Counter": "0",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_SERVICE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request forcibly serviced during service window",
"UMask": "0x8",
@@ -2976,8 +3661,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_RECEIVED",
+ "Counter": "0",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_RECEIVED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request received from link layer while idle (with no slot 2 request active immediately prior)",
"UMask": "0x1",
@@ -2985,8 +3672,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_WITHDRAWN",
+ "Counter": "0",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_WITHDRAWN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request withdrawn during hold-off period or service window",
"UMask": "0x2",
@@ -2994,16 +3683,20 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : All",
+ "Counter": "0",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Needs Data Flit",
+ "Counter": "0",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.NEED_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Needs Data Flit : BL message requires data flit sequence",
"UMask": "0x2",
@@ -3011,8 +3704,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Wait on Pump 0",
+ "Counter": "0",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P0_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Wait on Pump 0 : Waiting for header pump 0",
"UMask": "0x4",
@@ -3020,8 +3715,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1",
+ "Counter": "0",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 : Header pump 1 is not required for flit",
"UMask": "0x10",
@@ -3029,8 +3726,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Bubble",
+ "Counter": "0",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_BUT_BUBBLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Bubble : Header pump 1 is not required for flit but flit transmission delayed",
"UMask": "0x20",
@@ -3038,8 +3737,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Not Avail",
+ "Counter": "0",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_NOT_AVAIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Not Avail : Header pump 1 is not required for flit and not available",
"UMask": "0x40",
@@ -3047,8 +3748,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Wait on Pump 1",
+ "Counter": "0",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Wait on Pump 1 : Waiting for header pump 1",
"UMask": "0x8",
@@ -3056,8 +3759,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Accumulate",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Accumulate : Events related to Header Flit Generation - Set 1 : Header flit slotting control state machine is in any accumulate state; multi-message flit may be assembled over multiple cycles",
"UMask": "0x1",
@@ -3065,8 +3770,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Accumulate Ready",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Accumulate Ready : Events related to Header Flit Generation - Set 1 : header flit slotting control state machine is in accum_ready state; flit is ready to send but transmission is blocked; more messages may be slotted into flit",
"UMask": "0x2",
@@ -3074,8 +3781,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Accumulate Wasted",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_WASTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Accumulate Wasted : Events related to Header Flit Generation - Set 1 : Flit is being assembled over multiple cycles, but no additional message is being slotted into flit in current cycle; accumulate cycle is wasted",
"UMask": "0x4",
@@ -3083,8 +3792,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Run-Ahead - Blocked",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_BLOCKED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Run-Ahead - Blocked : Events related to Header Flit Generation - Set 1 : Header flit slotting entered run-ahead state; new header flit is started while transmission of prior, fully assembled flit is blocked",
"UMask": "0x8",
@@ -3092,8 +3803,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG1_AFTER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: message was slotted only after run-ahead was over; run-ahead mode definitely wasted",
"UMask": "0x80",
@@ -3101,8 +3814,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Run-Ahead - Message",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG1_DURING",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Run-Ahead - Message : Events related to Header Flit Generation - Set 1 : run-ahead mode: one message slotted during run-ahead",
"UMask": "0x10",
@@ -3110,8 +3825,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG2_AFTER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: second message slotted immediately after run-ahead; potential run-ahead success",
"UMask": "0x20",
@@ -3119,8 +3836,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1",
+ "Counter": "0",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG2_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: two (or three) message flit sent immediately after run-ahead; complete run-ahead success",
"UMask": "0x40",
@@ -3128,8 +3847,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Parallel Ok",
+ "Counter": "0",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Parallel Ok : Events related to Header Flit Generation - Set 2 : new header flit construction may proceed in parallel with data flit sequence",
"UMask": "0x4",
@@ -3137,8 +3858,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Parallel Flit Finished",
+ "Counter": "0",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR_FLIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Parallel Flit Finished : Events related to Header Flit Generation - Set 2 : header flit finished assembly in parallel with data flit sequence",
"UMask": "0x10",
@@ -3146,8 +3869,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Parallel Message",
+ "Counter": "0",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Parallel Message : Events related to Header Flit Generation - Set 2 : message is slotted into header flit in parallel with data flit sequence",
"UMask": "0x8",
@@ -3155,8 +3880,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Rate-matching Stall",
+ "Counter": "0",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Rate-matching Stall : Events related to Header Flit Generation - Set 2 : Rate-matching stall injected",
"UMask": "0x1",
@@ -3164,8 +3891,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Rate-matching Stall - No Message",
+ "Counter": "0",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL_NOMSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Rate-matching Stall - No Message : Events related to Header Flit Generation - Set 2 : Rate matching stall injected, but no additional message slotted during stall cycle",
"UMask": "0x2",
@@ -3173,8 +3902,10 @@
},
{
"BriefDescription": "Sent Header Flit : One Message",
+ "Counter": "0",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.1_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : One Message : One message in flit; VNA or non-VNA flit",
"UMask": "0x1",
@@ -3182,8 +3913,10 @@
},
{
"BriefDescription": "Sent Header Flit : One Message in non-VNA",
+ "Counter": "0",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.1_MSG_VNX",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : One Message in non-VNA : One message in flit; non-VNA flit",
"UMask": "0x8",
@@ -3191,8 +3924,10 @@
},
{
"BriefDescription": "Sent Header Flit : Two Messages",
+ "Counter": "0",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.2_MSGS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : Two Messages : Two messages in flit; VNA flit",
"UMask": "0x2",
@@ -3200,8 +3935,10 @@
},
{
"BriefDescription": "Sent Header Flit : Three Messages",
+ "Counter": "0",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.3_MSGS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : Three Messages : Three messages in flit; VNA flit",
"UMask": "0x4",
@@ -3209,32 +3946,40 @@
},
{
"BriefDescription": "Sent Header Flit : One Slot Taken",
+ "Counter": "0",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sent Header Flit : Two Slots Taken",
+ "Counter": "0",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sent Header Flit : All Slots Taken",
+ "Counter": "0",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
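Taken together, the .1_MSG, .2_MSGS and .3_MSGS umasks of UNC_M3UPI_RxC_HDR_FLITS_SENT describe how densely messages are packed into sent header flits, so a simple slotting-efficiency figure can be derived from raw counts. A minimal sketch with placeholder counts; treating .1_MSG_VNX as a subset of .1_MSG rather than an extra bucket, and taking the sum of the three buckets as the total of sent header flits, are assumptions based on the descriptions above:

# Placeholder raw counts for UNC_M3UPI_RxC_HDR_FLITS_SENT umasks
# (in practice these would come from a perf counting session on the M3UPI PMU).
one_msg    = 120_000   # .1_MSG  : flits carrying exactly one message (VNA or non-VNA)
two_msgs   =  45_000   # .2_MSGS : flits carrying two messages
three_msgs =   8_000   # .3_MSGS : flits carrying three messages

total_flits = one_msg + two_msgs + three_msgs
total_msgs  = 1 * one_msg + 2 * two_msgs + 3 * three_msgs

print(f"avg messages per sent header flit: {total_msgs / total_flits:.2f}")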
{
"BriefDescription": "Header Not Sent : All",
+ "Counter": "0",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : All : header flit is ready for transmission but could not be sent : header flit is ready for transmission but could not be sent for any reason, e.g. no credits, low tsv, stall injection",
"UMask": "0x1",
@@ -3242,8 +3987,10 @@
},
{
"BriefDescription": "Header Not Sent : No BGF Credits",
+ "Counter": "0",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No BGF Credits : header flit is ready for transmission but could not be sent : No BGF credits available",
"UMask": "0x8",
@@ -3251,8 +3998,10 @@
},
{
"BriefDescription": "Header Not Sent : No BGF Credits + No Extra Message Slotted",
+ "Counter": "0",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_BGF_NO_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No BGF Credits + No Extra Message Slotted : header flit is ready for transmission but could not be sent : No BGF credits available; no additional message slotted into flit",
"UMask": "0x20",
@@ -3260,8 +4009,10 @@
},
{
"BriefDescription": "Header Not Sent : No TxQ Credits",
+ "Counter": "0",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_TXQ_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No TxQ Credits : header flit is ready for transmission but could not be sent : No TxQ credits available",
"UMask": "0x10",
@@ -3269,8 +4020,10 @@
},
{
"BriefDescription": "Header Not Sent : No TxQ Credits + No Extra Message Slotted",
+ "Counter": "0",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_TXQ_NO_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No TxQ Credits + No Extra Message Slotted : header flit is ready for transmission but could not be sent : No TxQ credits available; no additional message slotted into flit",
"UMask": "0x40",
@@ -3278,8 +4031,10 @@
},
{
"BriefDescription": "Header Not Sent : TSV High",
+ "Counter": "0",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.TSV_HI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : TSV High : header flit is ready for transmission but could not be sent : header flit is ready for transmission but was not sent while tsv high",
"UMask": "0x2",
@@ -3287,8 +4042,10 @@
},
{
"BriefDescription": "Header Not Sent : Cycle valid for Flit",
+ "Counter": "0",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.VALID_FOR_FLIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : Cycle valid for Flit : header flit is ready for transmission but could not be sent : header flit is ready for transmission but was not sent while cycle is valid for flit transmission",
"UMask": "0x4",
@@ -3296,8 +4053,10 @@
},
{
"BriefDescription": "Message Held : Can't Slot AD",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Can't Slot AD : some AD message could not be slotted (logical OR of all AD events under INGR_SLOT_CANT_MC_VN{0,1})",
"UMask": "0x10",
@@ -3305,8 +4064,10 @@
},
{
"BriefDescription": "Message Held : Can't Slot BL",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Can't Slot BL : some BL message could not be slotted (logical OR of all BL events under INGR_SLOT_CANT_MC_VN{0,1})",
"UMask": "0x20",
@@ -3314,8 +4075,10 @@
},
{
"BriefDescription": "Message Held : Parallel Attempt",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_ATTEMPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Parallel Attempt : ad and bl messages attempted to slot into the same flit in parallel",
"UMask": "0x4",
@@ -3323,8 +4086,10 @@
},
{
"BriefDescription": "Message Held : Parallel Success",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_SUCCESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Parallel Success : ad and bl messages were actually slotted into the same flit in parallel",
"UMask": "0x8",
@@ -3332,8 +4097,10 @@
},
{
"BriefDescription": "Message Held : VN0",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : VN0 : vn0 message(s) that couldn't be slotted into last vn0 flit are held in slotting stage while processing vn1 flit",
"UMask": "0x1",
@@ -3341,8 +4108,10 @@
},
{
"BriefDescription": "Message Held : VN1",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : VN1 : vn1 message(s) that couldn't be slotted into last vn1 flit are held in slotting stage while processing vn0 flit",
"UMask": "0x2",
@@ -3350,8 +4119,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4e",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : REQ on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -3359,8 +4130,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4e",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : RSP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -3368,8 +4141,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4e",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : SNP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -3377,8 +4152,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4e",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : NCB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -3386,8 +4163,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4e",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : NCS on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -3395,8 +4174,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4e",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : RSP on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -3404,8 +4185,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4e",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : WB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -3413,8 +4196,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4f",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : REQ on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -3422,8 +4207,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4f",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : RSP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -3431,8 +4218,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4f",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : SNP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -3440,8 +4229,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4f",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : NCB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -3449,8 +4240,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4f",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : NCS on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -3458,8 +4251,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4f",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : RSP on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -3467,8 +4262,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4f",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : WB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -3476,8 +4273,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Any In Use",
+ "Counter": "0",
"EventCode": "0x5a",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.ANY_IN_USE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Any In Use : At least one remote vna credit is in use",
"UMask": "0x20",
@@ -3485,8 +4284,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Corrected",
+ "Counter": "0",
"EventCode": "0x5a",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.CORRECTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Corrected : Number of remote vna credits corrected (local return) per cycle",
"UMask": "0x1",
@@ -3494,8 +4295,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 1",
+ "Counter": "0",
"EventCode": "0x5a",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 1 : Remote vna credit level is less than 1 (i.e. no vna credits available)",
"UMask": "0x2",
@@ -3503,8 +4306,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 10",
+ "Counter": "0",
"EventCode": "0x5a",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 10 : remote vna credit level is less than 10; parallel vn0/vn1 arb not possible",
"UMask": "0x10",
@@ -3512,8 +4317,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 4",
+ "Counter": "0",
"EventCode": "0x5a",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 4 : Remote vna credit level is less than 4; bl (or ad requiring 4 vna) cannot arb on vna",
"UMask": "0x4",
@@ -3521,8 +4328,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 5",
+ "Counter": "0",
"EventCode": "0x5a",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 5 : Remote vna credit level is less than 5; parallel ad/bl arb on vna not possible",
"UMask": "0x8",
@@ -3530,8 +4339,10 @@
},
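The LT1/LT4/LT5/LT10 umasks of UNC_M3UPI_RxC_VNA_CRD encode successively tighter remote VNA credit thresholds, each of which removes a specific arbitration capability. A small sketch mapping a credit level to the restrictions named in the event text above; this is purely illustrative, with the threshold semantics taken from the descriptions rather than from hardware documentation:

def vna_restrictions(credits: int) -> list[str]:
    """Map a remote VNA credit level to the arb capabilities lost,
    following the LT1/LT4/LT5/LT10 event descriptions."""
    lost = []
    if credits < 10:
        lost.append("parallel vn0/vn1 arbitration not possible (LT10)")
    if credits < 5:
        lost.append("parallel ad/bl arbitration on vna not possible (LT5)")
    if credits < 4:
        lost.append("bl (or ad needing 4 vna credits) cannot arb on vna (LT4)")
    if credits < 1:
        lost.append("no vna credits available (LT1)")
    return lost

for level in (12, 7, 4, 2, 0):
    print(level, vna_restrictions(level))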
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_ADBL_ALLOC_L5",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_ADBL_ALLOC_L5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credit count was less than 5 and allocation to ad or bl messages was required",
"UMask": "0x2",
@@ -3539,8 +4350,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_VN01_ALLOC_LT10",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_VN01_ALLOC_LT10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credit count was less than 10 and allocation to vn0 or vn1 was required",
"UMask": "0x1",
@@ -3548,8 +4361,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_AD",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn0, remote vna credits were allocated only to ad messages, not to bl",
"UMask": "0x10",
@@ -3557,8 +4372,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_BL",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn0, remote vna credits were allocated only to bl messages, not to ad",
"UMask": "0x20",
@@ -3566,8 +4383,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_ONLY",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credits were allocated only to vn0, not to vn1",
"UMask": "0x4",
@@ -3575,8 +4394,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_AD",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn1, remote vna credits were allocated only to ad messages, not to bl",
"UMask": "0x40",
@@ -3584,8 +4405,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_BL",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn1, remote vna credits were allocated only to bl messages, not to ad",
"UMask": "0x80",
@@ -3593,8 +4416,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_ONLY",
+ "Counter": "0",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credits were allocated only to vn1, not to vn0",
"UMask": "0x8",
@@ -3602,8 +4427,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 REQ Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 REQ Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x1",
@@ -3611,8 +4438,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 RSP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x4",
@@ -3620,8 +4449,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 SNP Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 SNP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x2",
@@ -3629,8 +4460,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 WB Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x8",
@@ -3638,8 +4471,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 REQ Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 REQ Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x10",
@@ -3647,8 +4482,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 RSP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x40",
@@ -3656,8 +4493,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 SNP Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 SNP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x20",
@@ -3665,8 +4504,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 WB Messages",
+ "Counter": "0",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 WB Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x80",
@@ -3674,8 +4515,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -3684,8 +4527,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x1",
@@ -3693,8 +4538,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x2",
@@ -3702,8 +4549,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x4",
@@ -3711,8 +4560,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.BL_EARLY_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x8",
@@ -3720,8 +4571,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 REQ Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x1",
@@ -3729,8 +4582,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 RSP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x4",
@@ -3738,8 +4593,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 SNP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x2",
@@ -3747,8 +4604,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 WB Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x8",
@@ -3756,8 +4615,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 REQ Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x10",
@@ -3765,8 +4626,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 RSP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x40",
@@ -3774,8 +4637,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 SNP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x20",
@@ -3783,8 +4648,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 WB Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x80",
@@ -3792,8 +4659,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 REQ Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x1",
@@ -3801,8 +4670,10 @@
},
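The description above spells out the usual occupancy/inserts derivation: dividing the accumulated FlowQ occupancy by the number of FlowQ inserts over the same interval yields an average queue residency in cycles, assuming the occupancy event sums the number of queued entries each cycle, as the "Occupancy Accumulator" wording suggests. A minimal sketch with placeholder counts, pairing this VN0 REQ inserts umask with the corresponding UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_REQ umask listed later in this file:

# Placeholder counts over one measurement interval
# (in practice, collected with a perf counting session on the M3UPI PMU).
flq_occupancy_sum = 9_600_000   # UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_REQ: queued entries summed per cycle
flq_inserts       =   320_000   # UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_REQ: allocations into the FlowQ

# Average time an entry spends in the AD FlowQ, in M3UPI clock cycles.
avg_queue_latency_cycles = flq_occupancy_sum / flq_inserts
print(f"average AD FlowQ latency: {avg_queue_latency_cycles:.1f} cycles")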
{
"BriefDescription": "AD Flow Q Inserts : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x4",
@@ -3810,8 +4681,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 SNP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x2",
@@ -3819,8 +4692,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x8",
@@ -3828,8 +4703,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN1 REQ Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x10",
@@ -3837,8 +4714,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN1 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x40",
@@ -3846,8 +4725,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN1 SNP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x20",
@@ -3855,78 +4736,98 @@
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 REQ Messages",
+ "Counter": "0",
"EventCode": "0x1c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 SNP Messages",
+ "Counter": "0",
"EventCode": "0x1c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN1 REQ Messages",
+ "Counter": "0",
"EventCode": "0x1c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN1 SNP Messages",
+ "Counter": "0",
"EventCode": "0x1c",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "AK Flow Q Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_M3UPI_TxC_AK_FLQ_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "AK Flow Q Occupancy",
+ "Counter": "0",
"EventCode": "0x1e",
"EventName": "UNC_M3UPI_TxC_AK_FLQ_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Failed ARB for BL : VN0 NCB Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 NCB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x4",
@@ -3934,8 +4835,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN0 NCS Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 NCS Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x8",
@@ -3943,8 +4846,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 RSP Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x1",
@@ -3952,8 +4857,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 WB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x2",
@@ -3961,8 +4868,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 NCS Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 NCS Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x40",
@@ -3970,8 +4879,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 NCB Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 NCB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x80",
@@ -3979,8 +4890,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 RSP Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x10",
@@ -3988,8 +4901,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 WB Messages",
+ "Counter": "0",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 WB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x20",
@@ -3997,8 +4912,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 REQ Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x1",
@@ -4006,8 +4923,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 RSP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x4",
@@ -4015,8 +4934,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 SNP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x2",
@@ -4024,8 +4945,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 WB Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x8",
@@ -4033,8 +4956,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 REQ Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x10",
@@ -4042,8 +4967,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 RSP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x40",
@@ -4051,8 +4978,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 SNP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x20",
@@ -4060,8 +4989,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 WB Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x80",
@@ -4069,8 +5000,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x1",
@@ -4078,8 +5011,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x2",
@@ -4087,8 +5022,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 NCS Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 NCS Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x8",
@@ -4096,8 +5033,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 NCB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 NCB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x4",
@@ -4105,8 +5044,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x10",
@@ -4114,8 +5055,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x20",
@@ -4123,8 +5066,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1_NCB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1_NCB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x80",
@@ -4132,8 +5077,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1_NCS Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1_NCS Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x40",
@@ -4141,120 +5088,150 @@
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 NCB Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 NCS Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1_NCS Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1_NCB Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 WB Messages",
+ "Counter": "0",
"EventCode": "0x1d",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1f",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1f",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_THROUGH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 NCB Messages",
+ "Counter": "0",
"EventCode": "0x1f",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_WRPULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1f",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 WB Messages",
+ "Counter": "0",
"EventCode": "0x1f",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_THROUGH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1_NCS Messages",
+ "Counter": "0",
"EventCode": "0x1f",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_WRPULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN0 REQ Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x2",
@@ -4262,8 +5239,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN0 RSP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x8",
@@ -4271,8 +5250,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN0 SNP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x4",
@@ -4280,8 +5261,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN1 REQ Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x10",
@@ -4289,8 +5272,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN1 RSP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x40",
@@ -4298,8 +5283,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN1 SNP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x20",
@@ -4307,8 +5294,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VNA",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VNA : No credits available to send to UPIs on the AD Ring",
"UMask": "0x1",
@@ -4316,8 +5305,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_NCS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN0 RSP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x4",
@@ -4325,8 +5316,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN0 REQ Messages",
+ "Counter": "0",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN0 REQ Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x2",
@@ -4334,8 +5327,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN0 SNP Messages",
+ "Counter": "0",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN0 SNP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x8",
@@ -4343,8 +5338,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_NCS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN1 RSP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x20",
@@ -4352,8 +5349,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN1 REQ Messages",
+ "Counter": "0",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN1 REQ Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x10",
@@ -4361,8 +5360,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN1 SNP Messages",
+ "Counter": "0",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN1 SNP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x40",
@@ -4370,8 +5371,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VNA",
+ "Counter": "0",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VNA : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x1",
@@ -4379,16 +5382,20 @@
},
{
"BriefDescription": "FlowQ Generated Prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_M3UPI_UPI_PREFETCH_SPAWN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "FlowQ Generated Prefetch : Count cases where FlowQ causes spawn of Prefetch to iMC/SMI3 target",
"Unit": "M3UPI"
},
{
"BriefDescription": "VN0 Credit Used : WB on BL",
+ "Counter": "0",
"EventCode": "0x5b",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : WB on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -4396,8 +5403,10 @@
},
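The long description here contrasts the two credit pools: VNA consumption scales with the number of flit buffers a transfer occupies, while a VN0 fallback consumes exactly one reserved credit regardless of transfer size. A toy accounting model of that rule, purely illustrative, with the 9-buffer example taken directly from the event text:

def credits_consumed(flit_buffers: int, pool: str) -> int:
    """Toy model of UPI credit accounting as described in the event text:
    VNA charges one credit per flit buffer, VN0 charges one credit per transfer."""
    if pool == "VNA":
        return flit_buffers
    if pool == "VN0":
        return 1
    raise ValueError(f"unknown credit pool: {pool}")

# A transfer occupying 9 flit buffers, as in the description above.
print("VNA:", credits_consumed(9, "VNA"))  # 9 credits from the shared, high-performance pool
print("VN0:", credits_consumed(9, "VN0"))  # 1 credit from the reserved, deadlock-avoidance pool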
{
"BriefDescription": "VN0 Credit Used : NCB on BL",
+ "Counter": "0",
"EventCode": "0x5b",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : NCB on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -4405,8 +5414,10 @@
},
{
"BriefDescription": "VN0 Credit Used : REQ on AD",
+ "Counter": "0",
"EventCode": "0x5b",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : REQ on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -4414,8 +5425,10 @@
},
{
"BriefDescription": "VN0 Credit Used : RSP on AD",
+ "Counter": "0",
"EventCode": "0x5b",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : RSP on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -4423,8 +5436,10 @@
},
{
"BriefDescription": "VN0 Credit Used : SNP on AD",
+ "Counter": "0",
"EventCode": "0x5b",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : SNP on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -4432,8 +5447,10 @@
},
{
"BriefDescription": "VN0 Credit Used : RSP on BL",
+ "Counter": "0",
"EventCode": "0x5b",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : RSP on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -4441,8 +5458,10 @@
},
{
"BriefDescription": "VN0 No Credits : WB on BL",
+ "Counter": "0",
"EventCode": "0x5d",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : WB on BL : Number of Cycles there were no VN0 Credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -4450,8 +5469,10 @@
},
{
"BriefDescription": "VN0 No Credits : NCB on BL",
+ "Counter": "0",
"EventCode": "0x5d",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : NCB on BL : Number of Cycles there were no VN0 Credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -4459,8 +5480,10 @@
},
{
"BriefDescription": "VN0 No Credits : REQ on AD",
+ "Counter": "0",
"EventCode": "0x5d",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : REQ on AD : Number of Cycles there were no VN0 Credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -4468,8 +5491,10 @@
},
{
"BriefDescription": "VN0 No Credits : RSP on AD",
+ "Counter": "0",
"EventCode": "0x5d",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : RSP on AD : Number of Cycles there were no VN0 Credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -4477,8 +5502,10 @@
},
{
"BriefDescription": "VN0 No Credits : SNP on AD",
+ "Counter": "0",
"EventCode": "0x5d",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : SNP on AD : Number of Cycles there were no VN0 Credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -4486,8 +5513,10 @@
},
{
"BriefDescription": "VN0 No Credits : RSP on BL",
+ "Counter": "0",
"EventCode": "0x5d",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : RSP on BL : Number of Cycles there were no VN0 Credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -4495,8 +5524,10 @@
},
{
"BriefDescription": "VN1 Credit Used : WB on BL",
+ "Counter": "0",
"EventCode": "0x5c",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : WB on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -4504,8 +5535,10 @@
},
{
"BriefDescription": "VN1 Credit Used : NCB on BL",
+ "Counter": "0",
"EventCode": "0x5c",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : NCB on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -4513,8 +5546,10 @@
},
{
"BriefDescription": "VN1 Credit Used : REQ on AD",
+ "Counter": "0",
"EventCode": "0x5c",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : REQ on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -4522,8 +5557,10 @@
},
{
"BriefDescription": "VN1 Credit Used : RSP on AD",
+ "Counter": "0",
"EventCode": "0x5c",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : RSP on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -4531,8 +5568,10 @@
},
{
"BriefDescription": "VN1 Credit Used : SNP on AD",
+ "Counter": "0",
"EventCode": "0x5c",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : SNP on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -4540,8 +5579,10 @@
},
{
"BriefDescription": "VN1 Credit Used : RSP on BL",
+ "Counter": "0",
"EventCode": "0x5c",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : RSP on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -4549,8 +5590,10 @@
},
{
"BriefDescription": "VN1 No Credits : WB on BL",
+ "Counter": "0",
"EventCode": "0x5e",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : WB on BL : Number of Cycles there were no VN1 Credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -4558,8 +5601,10 @@
},
{
"BriefDescription": "VN1 No Credits : NCB on BL",
+ "Counter": "0",
"EventCode": "0x5e",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : NCB on BL : Number of Cycles there were no VN1 Credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -4567,8 +5612,10 @@
},
{
"BriefDescription": "VN1 No Credits : REQ on AD",
+ "Counter": "0",
"EventCode": "0x5e",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : REQ on AD : Number of Cycles there were no VN1 Credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -4576,8 +5623,10 @@
},
{
"BriefDescription": "VN1 No Credits : RSP on AD",
+ "Counter": "0",
"EventCode": "0x5e",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : RSP on AD : Number of Cycles there were no VN1 Credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -4585,8 +5634,10 @@
},
{
"BriefDescription": "VN1 No Credits : SNP on AD",
+ "Counter": "0",
"EventCode": "0x5e",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : SNP on AD : Number of Cycles there were no VN1 Credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -4594,8 +5645,10 @@
},
{
"BriefDescription": "VN1 No Credits : RSP on BL",
+ "Counter": "0",
"EventCode": "0x5e",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : RSP on BL : Number of Cycles there were no VN1 Credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -4603,168 +5656,210 @@
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN0",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x82",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN1",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa0",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN0",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x81",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN1",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x90",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN0",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x84",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN1",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc0",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN0",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN1",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN0",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN1",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN0",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN1",
+ "Counter": "0",
"EventCode": "0x7e",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN0",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN1",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN0",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN1",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN0",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN1",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN0",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN1",
+ "Counter": "0",
"EventCode": "0x7d",
"EventName": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.ARB",
+ "Counter": "0",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message is making arbitration request",
"UMask": "0x4",
@@ -4772,8 +5867,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.ARRIVED",
+ "Counter": "0",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.ARRIVED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message arrived in ingress pipeline",
"UMask": "0x1",
@@ -4781,8 +5878,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.BYPASS",
+ "Counter": "0",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.BYPASS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message took bypass path",
"UMask": "0x2",
@@ -4790,8 +5889,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.FLITTED",
+ "Counter": "0",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.FLITTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message was slotted into flit (non bypass)",
"UMask": "0x10",
@@ -4799,8 +5900,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_ARB",
+ "Counter": "0",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.LOST_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message lost arbitration",
"UMask": "0x8",
@@ -4808,8 +5911,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_OLD",
+ "Counter": "0",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.LOST_OLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message was dropped because it became too old",
"UMask": "0x20",
@@ -4817,8 +5922,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_QFULL",
+ "Counter": "0",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.LOST_QFULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message was dropped because it was overwritten by new message while prefetch queue was full",
"UMask": "0x40",
@@ -4826,8 +5933,10 @@
},
{
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AD Bounceable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_MDF_CRS_TxR_INSERTS.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Bounceable : Number of allocations into the CRS Egress",
"UMask": "0x1",
@@ -4835,8 +5944,10 @@
},
{
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AD credited)",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_MDF_CRS_TxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD credited : Number of allocations into the CRS Egress",
"UMask": "0x2",
@@ -4844,8 +5955,10 @@
},
{
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AK)",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_MDF_CRS_TxR_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AK : Number of allocations into the CRS Egress",
"UMask": "0x10",
@@ -4853,8 +5966,10 @@
},
{
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (AKC)",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_MDF_CRS_TxR_INSERTS.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AKC : Number of allocations into the CRS Egress",
"UMask": "0x40",
@@ -4862,8 +5977,10 @@
},
{
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (BL Bounceable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_MDF_CRS_TxR_INSERTS.BL_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Bounceable : Number of allocations into the CRS Egress",
"UMask": "0x4",
@@ -4871,8 +5988,10 @@
},
{
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (BL credited)",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_MDF_CRS_TxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL credited : Number of allocations into the CRS Egress",
"UMask": "0x8",
@@ -4880,53 +5999,65 @@
},
{
"BriefDescription": "Number of allocations into the CRS Egress used to queue up requests destined to the mesh (IV)",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_MDF_CRS_TxR_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IV : Number of allocations into the CRS Egress",
"UMask": "0x20",
"Unit": "MDF"
},
{
- "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO\r\nIngress (V-EMIB) (AD)",
+ "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (AD)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD : Number of cycles incoming messages from the vertical ring that are bounced at the SBO",
"UMask": "0x1",
"Unit": "MDF"
},
{
- "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO\r\nIngress (V-EMIB) (AK)",
+ "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (AK)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AK : Number of cycles incoming messages from the vertical ring that are bounced at the SBO",
"UMask": "0x4",
"Unit": "MDF"
},
{
- "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO\r\nIngress (V-EMIB) (AKC)",
+ "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (AKC)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AKC : Number of cycles incoming messages from the vertical ring that are bounced at the SBO",
"UMask": "0x10",
"Unit": "MDF"
},
{
- "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO\r\nIngress (V-EMIB) (BL)",
+ "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (BL)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL : Number of cycles incoming messages from the vertical ring that are bounced at the SBO",
"UMask": "0x2",
"Unit": "MDF"
},
{
- "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO\r\nIngress (V-EMIB) (IV)",
+ "BriefDescription": "Number of cycles incoming messages from the vertical ring that are bounced at the SBO Ingress (V-EMIB) (IV)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_MDF_CRS_TxR_V_BOUNCES.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IV : Number of cycles incoming messages from the vertical ring that are bounced at the SBO",
"UMask": "0x8",
@@ -4934,8 +6065,10 @@
},
{
"BriefDescription": "Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_MDF_FAST_ASSERTED.AD_BNC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD bnc : Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold",
"UMask": "0x1",
@@ -4943,8 +6076,10 @@
},
{
"BriefDescription": "Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_MDF_FAST_ASSERTED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL bnc : Counts the number of cycles when the distress signals are asserted based on SBO Ingress threshold",
"UMask": "0x2",
@@ -4952,6 +6087,7 @@
},
{
"BriefDescription": "UPI Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_UPI_CLOCKTICKS",
"PerPkg": "1",
@@ -4960,8 +6096,10 @@
},
{
"BriefDescription": "Direct packet attempts : D2C",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2C",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Direct packet attempts : D2C : Counts the number of DRS packets that we attempted to do direct2core/direct2UPI on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"UMask": "0x1",
@@ -4969,8 +6107,10 @@
},
{
"BriefDescription": "Direct packet attempts : D2K",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2K",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Direct packet attempts : D2K : Counts the number of DRS packets that we attempted to do direct2core/direct2UPI on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"UMask": "0x2",
@@ -4978,70 +6118,87 @@
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L1",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_UPI_L1_POWER_CYCLES",
"PerPkg": "1",
@@ -5050,282 +6207,352 @@
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_CRD_RETURN_BLOCKED",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_UPI_M3_CRD_RETURN_BLOCKED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles where phy is not in L0, L0c, L0p, L1",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_UPI_PHY_INIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "L1 Req Nack",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_UPI_POWER_L1_NACK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "L1 Req Nack : Counts the number of times a link sends/receives a LinkReqNAck. When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. This requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqNAck refers to receiving an NAck (meaning this agent's Tx originally requested the power change). A Tx LinkReqNAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).",
"Unit": "UPI"
},
{
"BriefDescription": "L1 Req (same as L1 Ack).",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_UPI_POWER_L1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "L1 Req (same as L1 Ack). : Counts the number of times a link sends/receives a LinkReqAck. When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. This requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqAck refers to receiving an Ack (meaning this agent's Tx originally requested the power change). A Tx LinkReqAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_UPI_RxL0P_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles in L0p : Number of UPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the UPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize UPI for snoops and their responses. Use edge detect to count the number of instances when the UPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_UPI_RxL0_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles in L0 : Number of UPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Unit": "UPI"
},
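UNC_UPI_RxL0P_POWER_CYCLES and UNC_UPI_RxL0_POWER_CYCLES above (and UNC_UPI_L1_POWER_CYCLES earlier) are plain cycle counters, so link power-state residency falls out by dividing by UNC_UPI_CLOCKTICKS from the same file. A small sketch with hypothetical counts; treating the ratio as residency is an interpretation of the event descriptions, not a formula defined here.

# Sketch: receive-side link power-state residency (hypothetical counts).
# Per the descriptions, Tx and Rx directions can be in different states.

upi_clockticks = 5_600_000_000     # UNC_UPI_CLOCKTICKS
cycles = {
    "L0":  4_900_000_000,          # UNC_UPI_RxL0_POWER_CYCLES
    "L0p":   600_000_000,          # UNC_UPI_RxL0P_POWER_CYCLES
    "L1":     50_000_000,          # UNC_UPI_L1_POWER_CYCLES
}

for state, c in cycles.items():
    print(f"{state} residency: {c / upi_clockticks:.1%}")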
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.DATA",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.LLCRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.NULL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.NULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.PROTHDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.SLOT0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.SLOT1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_ANY_FLITS.SLOT2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_UPI_RxL_ANY_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xe",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10e",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xf",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10f",
"Unit": "UPI"
},
{
"BriefDescription": "RxQ Flit Buffer Bypassed : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Bypassed : Slot 0 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.",
"UMask": "0x1",
@@ -5333,8 +6560,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Bypassed : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Bypassed : Slot 1 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.",
"UMask": "0x2",
@@ -5342,8 +6571,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Bypassed : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Bypassed : Slot 2 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.",
"UMask": "0x4",
@@ -5351,40 +6582,50 @@
},
{
"BriefDescription": "CRC Errors Detected",
+ "Counter": "0,1,2,3",
"EventCode": "0x0b",
"EventName": "UNC_UPI_RxL_CRC_ERRORS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CRC Errors Detected : Number of CRC errors detected in the UPI Agent. Each UPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the UPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).",
"Unit": "UPI"
},
{
"BriefDescription": "LLR Requests Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "UNC_UPI_RxL_CRC_LLR_REQ_TRANSMIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "LLR Requests Sent : Number of LLR Requests were transmitted. This should generally be <= the number of CRC errors detected. If multiple errors are detected before the Rx side receives a LLC_REQ_ACK from the Tx side, there is no need to send more LLR_REQ_NACKs..",
"Unit": "UPI"
},
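UNC_UPI_RxL_CRC_ERRORS counts flits whose 8-bit CRC check failed, and each detected error triggers a link-level retry request, so a per-flit error rate is a natural health check. A sketch that also sanity-checks the LLR request count against the description above; all counts are hypothetical, and the total-flit figure is assumed to come from UNC_UPI_RxL_FLITS with the slot masks.

# Sketch: CRC error rate per received flit on one UPI link (hypothetical counts).

crc_errors = 12                    # UNC_UPI_RxL_CRC_ERRORS
llr_requests = 9                   # UNC_UPI_RxL_CRC_LLR_REQ_TRANSMIT
flits_received = 3_400_000_000     # e.g. UNC_UPI_RxL_FLITS, slot 0-2 masks (assumed)

print(f"CRC errors per flit: {crc_errors / flits_received:.3e}")
# Per the description, LLR requests should generally not exceed detected CRC errors.
print(f"LLR requests per detected error: {llr_requests / max(crc_errors, 1):.2f}")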
{
"BriefDescription": "VN0 Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Consumed : Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "VN1 Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x3a",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Consumed : Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "VNA Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VNA",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -5393,6 +6634,7 @@
},
{
"BriefDescription": "Valid Flits Received : All Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.ALL_DATA",
"PerPkg": "1",
@@ -5402,6 +6644,7 @@
},
{
"BriefDescription": "Null FLITs received from any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.ALL_NULL",
"PerPkg": "1",
@@ -5410,8 +6653,10 @@
},
{
"BriefDescription": "Valid Flits Received : Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Data : Shows legal flit time (hides impact of L0p and L0c). : Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting..",
"UMask": "0x8",
@@ -5419,8 +6664,10 @@
},
{
"BriefDescription": "Valid Flits Received : Idle",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Idle : Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x47",
@@ -5428,8 +6675,10 @@
},
{
"BriefDescription": "Valid Flits Received : LLCRD Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
"UMask": "0x10",
@@ -5437,8 +6686,10 @@
},
{
"BriefDescription": "Valid Flits Received : LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
"UMask": "0x40",
@@ -5446,6 +6697,7 @@
},
{
"BriefDescription": "Valid Flits Received : All Non Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.NON_DATA",
"PerPkg": "1",
@@ -5455,8 +6707,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot NULL or LLCRD Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.NULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL. Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.",
"UMask": "0x20",
@@ -5464,8 +6718,10 @@
},
{
"BriefDescription": "Valid Flits Received : Protocol Header",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
"UMask": "0x80",
@@ -5473,8 +6729,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.",
"UMask": "0x1",
@@ -5482,8 +6740,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.",
"UMask": "0x2",
@@ -5491,8 +6751,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot 2 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 2 - Other mask bits determine types of headers to count.",
"UMask": "0x4",
@@ -5500,8 +6762,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Allocations : Slot 0 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x1",
@@ -5509,8 +6773,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Allocations : Slot 1 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x2",
@@ -5518,8 +6784,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Allocations : Slot 2 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x4",
@@ -5527,8 +6795,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Occupancy - All Packets : Slot 0 : Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x1",
@@ -5536,8 +6806,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Occupancy - All Packets : Slot 1 : Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x2",
@@ -5545,8 +6817,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Occupancy - All Packets : Slot 2 : Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x4",
@@ -5554,256 +6828,319 @@
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles in L0p : Number of UPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the UPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize UPI for snoops and their responses. Use edge detect to count the number of instances when the UPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_UPI_TxL0_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles in L0 : Number of UPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.DATA",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.DATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.LLCRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.NULL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.NULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.PROTHDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.SLOT0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.SLOT1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL_ANY_FLITS.SLOT2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_UPI_TxL_ANY_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xe",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10e",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xf",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10f",
"Unit": "UPI"
},
{
"BriefDescription": "Tx Flit Buffer Bypassed",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_UPI_TxL_BYPASSED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tx Flit Buffer Bypassed : Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the UPI Link. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.",
"Unit": "UPI"
},
{
"BriefDescription": "Valid Flits Sent : All Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.ALL_DATA",
"PerPkg": "1",
@@ -5813,8 +7150,10 @@
},
{
"BriefDescription": "Valid Flits Sent : All LLCRD Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.ALL_LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : All Data : Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x17",
@@ -5822,8 +7161,10 @@
},
{
"BriefDescription": "Valid Flits Sent : All LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.ALL_LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : All LLCTRL : Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x47",
@@ -5831,6 +7172,7 @@
},
{
"BriefDescription": "All Null Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.ALL_NULL",
"PerPkg": "1",
@@ -5839,8 +7181,10 @@
},
{
"BriefDescription": "Valid Flits Sent : All Protocol Header",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.ALL_PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : All ProtDDR : Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x87",
@@ -5848,8 +7192,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Data : Shows legal flit time (hides impact of L0p and L0c). : Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting..",
"UMask": "0x8",
@@ -5857,8 +7203,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Idle",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Idle : Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x47",
@@ -5866,8 +7214,10 @@
},
{
"BriefDescription": "Valid Flits Sent : LLCRD Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
"UMask": "0x10",
@@ -5875,8 +7225,10 @@
},
{
"BriefDescription": "Valid Flits Sent : LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
"UMask": "0x40",
@@ -5884,6 +7236,7 @@
},
{
"BriefDescription": "Valid Flits Sent : All Non Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.NON_DATA",
"PerPkg": "1",
@@ -5893,8 +7246,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot NULL or LLCRD Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.NULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL. Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.",
"UMask": "0x20",
@@ -5902,8 +7257,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Protocol Header",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
"UMask": "0x80",
@@ -5911,8 +7268,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.",
"UMask": "0x1",
@@ -5920,8 +7279,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.",
"UMask": "0x2",
@@ -5929,8 +7290,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot 2 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 2 - Other mask bits determine types of headers to count.",
"UMask": "0x4",
@@ -5938,47 +7301,59 @@
},
{
"BriefDescription": "Tx Flit Buffer Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_UPI_TxL_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tx Flit Buffer Allocations : Number of allocations into the UPI Tx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"Unit": "UPI"
},
{
"BriefDescription": "Tx Flit Buffer Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_UPI_TxL_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tx Flit Buffer Occupancy : Accumulates the number of flits in the TxQ. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "VNA Credits Pending Return - Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_UPI_VNA_CREDIT_RETURN_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VNA Credits Pending Return - Occupancy : Number of VNA credits in the Rx side that are waitng to be returned back across the link.",
"Unit": "UPI"
},
{
"BriefDescription": "Message Received : Doorbell",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UBOX"
},
{
"BriefDescription": "Message Received : Interrupt",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.INT_PRIO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : Interrupt : Interrupts",
"UMask": "0x10",
@@ -5986,8 +7361,10 @@
},
{
"BriefDescription": "Message Received : IPI",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.IPI_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : IPI : Inter Processor Interrupts",
"UMask": "0x4",
@@ -5995,8 +7372,10 @@
},
{
"BriefDescription": "Message Received : MSI",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.MSI_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : MSI : Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)",
"UMask": "0x2",
@@ -6004,8 +7383,10 @@
},
{
"BriefDescription": "Message Received : VLW",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.VLW_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : VLW : Virtual Logical Wire (legacy) message were received from Uncore.",
"UMask": "0x1",
@@ -6013,152 +7394,190 @@
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCB",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCS",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCB",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCS",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCB",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCS",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCB",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCS",
+ "Counter": "0",
"EventCode": "0x4d",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.RxC_CYCLES_EMPTY_BL",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.RxC_CYCLES_EMPTY_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.RxC_CYCLES_FULL_BL",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.RxC_CYCLES_FULL_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCB",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCS",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AK",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AKC",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_BL",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_FULL_BL",
+ "Counter": "0",
"EventCode": "0x4e",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_FULL_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AK",
+ "Counter": "0",
"EventCode": "0x4f",
"EventName": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AKC",
+ "Counter": "0",
"EventCode": "0x4f",
"EventName": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "Cycles PHOLD Assert to Ack : Assert to ACK",
+ "Counter": "0,1",
"EventCode": "0x45",
"EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles PHOLD Assert to Ack : Assert to ACK : PHOLD cycles.",
"UMask": "0x1",
@@ -6166,32 +7585,40 @@
},
{
"BriefDescription": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_RACU_DRNG.RDRAND",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_U_RACU_DRNG.RDRAND",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_RACU_DRNG.RDSEED",
+ "Counter": "0",
"EventCode": "0x4c",
"EventName": "UNC_U_RACU_DRNG.RDSEED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "RACU Request",
+ "Counter": "0,1",
"EventCode": "0x46",
"EventName": "UNC_U_RACU_REQUESTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RACU Request : Number outstanding register requests within message channel tracker",
"Unit": "UBOX"
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-io.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-io.json
index 0761980c34a0..d4cf2199d46b 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-io.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-io.json
@@ -1,70 +1,87 @@
[
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "1",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART0_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "2",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART1_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x21",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "3",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART2_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x22",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "4",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART3_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x23",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "5",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART4_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x24",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "6",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART5_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x25",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "7",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART6_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x26",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "8",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART7_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x27",
"Unit": "iio_free_running"
},
{
"BriefDescription": "IIO Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_IIO_CLOCKTICKS",
"PerPkg": "1",
@@ -74,6 +91,7 @@
},
{
"BriefDescription": "Free running counter that increments for IIO clocktick",
+ "Counter": "0",
"EventCode": "0xff",
"EventName": "UNC_IIO_CLOCKTICKS_FREERUN",
"PerPkg": "1",
@@ -82,8 +100,10 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0-7",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xff",
@@ -93,96 +113,114 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0001",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 0 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7001004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0002",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 1 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 1",
- "UMask": "0x7002004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0004",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 2 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 2",
- "UMask": "0x7004004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0008",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 2 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 3",
- "UMask": "0x7008004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0010",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 0 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 4",
- "UMask": "0x7010004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0020",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 1 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 5",
- "UMask": "0x7020004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0040",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 2 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 6",
- "UMask": "0x7040004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0080",
"PublicDescription": "PCIe Completion Buffer Inserts of completions with data : Part 2 : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 7",
- "UMask": "0x7080004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"UMask": "0xff",
@@ -190,96 +228,114 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 0",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7000001",
+ "UMask": "0x1",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 1",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x4 card is plugged in to slot 1",
- "UMask": "0x7000002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 2",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7000004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 3",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x4 card is plugged in to slot 3",
- "UMask": "0x7000008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 4",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7000010",
+ "UMask": "0x10",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 5",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x4 card is plugged in to slot 1",
- "UMask": "0x7000020",
+ "UMask": "0x20",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 6",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7000040",
+ "UMask": "0x40",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy : Part 7",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "x4 card is plugged in to slot 3",
- "UMask": "0x7000080",
+ "UMask": "0x80",
"Unit": "IIO"
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part0-7",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.ALL_PARTS",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00ff",
@@ -288,96 +344,106 @@
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part0",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0001",
"PublicDescription": "Data requested by the CPU : Core reading from Card's MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7001004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part1",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0002",
"PublicDescription": "Data requested by the CPU : Core reading from Card's MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1",
- "UMask": "0x7002004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part2",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0004",
"PublicDescription": "Data requested by the CPU : Core reading from Card's MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7004004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Read request for 4 bytes made by the CPU to IIO Part3",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0008",
"PublicDescription": "Data requested by the CPU : Core reading from Card's MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3",
- "UMask": "0x7008004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART4",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0010",
"PublicDescription": "Data requested by the CPU : Core reading from Cards MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7010004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART5",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0020",
"PublicDescription": "Data requested by the CPU : Core reading from Cards MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1",
- "UMask": "0x7020004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART6",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0040",
"PublicDescription": "Data requested by the CPU : Core reading from Cards MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7040004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART7",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0080",
"PublicDescription": "Data requested by the CPU : Core reading from Cards MMIO space : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3",
- "UMask": "0x7080004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part0-7 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.ALL_PARTS",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00ff",
@@ -386,8 +452,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0100",
@@ -397,8 +465,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0200",
@@ -408,6 +478,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part0 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -419,6 +490,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part1 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -430,6 +502,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part2 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -441,6 +514,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made to IIO Part3 by the CPU",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -452,6 +526,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -463,6 +538,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -474,6 +550,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -485,6 +562,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -496,184 +574,218 @@
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0001",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7001008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0002",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1",
- "UMask": "0x7002008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0004",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7004008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Peer to peer read request for 4 bytes made by a different IIO unit to IIO Part0",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0008",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3",
- "UMask": "0x7008008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0010",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7010008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0020",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1",
- "UMask": "0x7020008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0040",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7040008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0080",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3",
- "UMask": "0x7080008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part0 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0001",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7001002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part0 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0002",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1",
- "UMask": "0x7002002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part0 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0004",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7004002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made to IIO Part0 by a different IIO unit",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0008",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3",
- "UMask": "0x7008002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0010",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x16 card plugged in to stack, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7010002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0020",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1",
- "UMask": "0x7020002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0040",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 1",
- "UMask": "0x7040002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0080",
"PublicDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card. : Number of DWs (4 bytes) requested by the main die. Includes all requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3",
- "UMask": "0x7080002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.ALL_PARTS",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xff",
@@ -683,6 +795,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART0",
"FCMask": "0x07",
@@ -694,6 +807,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART1",
"FCMask": "0x07",
@@ -705,6 +819,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART2",
"FCMask": "0x07",
@@ -716,6 +831,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART3",
"FCMask": "0x07",
@@ -727,6 +843,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART4",
"FCMask": "0x07",
@@ -738,6 +855,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART5",
"FCMask": "0x07",
@@ -749,6 +867,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART6",
"FCMask": "0x07",
@@ -760,6 +879,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART7",
"FCMask": "0x07",
@@ -771,6 +891,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by IIO Part0-7 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.ALL_PARTS",
"FCMask": "0x07",
@@ -781,6 +902,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by IIO Part0 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -792,6 +914,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by IIO Part1 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -803,6 +926,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by IIO Part2 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -814,6 +938,7 @@
},
{
"BriefDescription": "Read request for 4 bytes made by IIO Part3 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -825,6 +950,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART4",
"FCMask": "0x07",
@@ -836,6 +962,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART5",
"FCMask": "0x07",
@@ -847,6 +974,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART6",
"FCMask": "0x07",
@@ -858,6 +986,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART7",
"FCMask": "0x07",
@@ -869,6 +998,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made by IIO Part0-7 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.ALL_PARTS",
"FCMask": "0x07",
@@ -879,6 +1009,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made by IIO Part0 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -890,6 +1021,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made by IIO Part1 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -901,6 +1033,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made by IIO Part2 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -912,6 +1045,7 @@
},
{
"BriefDescription": "Write request of 4 bytes made by IIO Part3 to Memory",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -923,6 +1057,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -934,6 +1069,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -945,6 +1081,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -956,6 +1093,7 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -967,8 +1105,10 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0001",
@@ -978,8 +1118,10 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0002",
@@ -989,8 +1131,10 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0004",
@@ -1000,8 +1144,10 @@
},
{
"BriefDescription": "Peer to peer write request of 4 bytes made by IIO Part0 to an IIO target",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0008",
@@ -1011,8 +1157,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0010",
@@ -1022,8 +1170,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0020",
@@ -1033,8 +1183,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0040",
@@ -1044,8 +1196,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0080",
@@ -1055,19 +1209,23 @@
},
{
"BriefDescription": "Incoming arbitration requests : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests : Passing data to be written : How often different queues (e.g. channel / fc) ask to send request into pipeline : Only for posted requests",
- "UMask": "0x70ff020",
+ "UMask": "0x20",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1077,8 +1235,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Processing response from IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.IOMMU_HIT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1088,8 +1248,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Issuing to IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.IOMMU_REQ",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1099,96 +1261,114 @@
},
{
"BriefDescription": "Incoming arbitration requests : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests : Request Ownership : How often different queues (e.g. channel / fc) ask to send request into pipeline : Only for posted requests",
- "UMask": "0x70ff004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests : Writing line : How often different queues (e.g. channel / fc) ask to send request into pipeline : Only for posted requests",
- "UMask": "0x70ff010",
+ "UMask": "0x10",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests granted : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests granted : Passing data to be written : How often different queues (e.g. channel / fc) are allowed to send request into pipeline : Only for posted requests",
- "UMask": "0x70ff020",
+ "UMask": "0x20",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests granted : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests granted : Issuing final read or write of line : How often different queues (e.g. channel / fc) are allowed to send request into pipeline",
- "UMask": "0x70ff008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests granted : Processing response from IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.IOMMU_HIT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests granted : Processing response from IOMMU : How often different queues (e.g. channel / fc) are allowed to send request into pipeline",
- "UMask": "0x70ff002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests granted : Issuing to IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.IOMMU_REQ",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests granted : Issuing to IOMMU : How often different queues (e.g. channel / fc) are allowed to send request into pipeline",
- "UMask": "0x70ff001",
+ "UMask": "0x1",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests granted : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests granted : Request Ownership : How often different queues (e.g. channel / fc) are allowed to send request into pipeline : Only for posted requests",
- "UMask": "0x70ff004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Incoming arbitration requests granted : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Incoming arbitration requests granted : Writing line : How often different queues (e.g. channel / fc) are allowed to send request into pipeline : Only for posted requests",
- "UMask": "0x70ff010",
+ "UMask": "0x10",
"Unit": "IIO"
},
{
"BriefDescription": ": IOTLB Hits to a 1G Page",
+ "Counter": "0",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.1G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": ": IOTLB Hits to a 1G Page : Counts if a transaction to a 1G page, on its first lookup, hits the IOTLB.",
@@ -1197,8 +1377,10 @@
},
{
"BriefDescription": ": IOTLB Hits to a 2M Page",
+ "Counter": "0",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.2M_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": ": IOTLB Hits to a 2M Page : Counts if a transaction to a 2M page, on its first lookup, hits the IOTLB.",
@@ -1207,8 +1389,10 @@
},
{
"BriefDescription": ": IOTLB Hits to a 4K Page",
+ "Counter": "0",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.4K_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": ": IOTLB Hits to a 4K Page : Counts if a transaction to a 4K page, on its first lookup, hits the IOTLB.",
@@ -1217,8 +1401,10 @@
},
{
"BriefDescription": ": Context cache hits",
+ "Counter": "0",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": ": Context cache hits : Counts each time a first look up of the transaction hits the RCC.",
@@ -1227,8 +1413,10 @@
},
{
"BriefDescription": ": Context cache lookups",
+ "Counter": "0",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": ": Context cache lookups : Counts each time a transaction looks up root context cache.",
@@ -1237,8 +1425,10 @@
},
{
"BriefDescription": ": IOTLB lookups first",
+ "Counter": "0",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.FIRST_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": ": IOTLB lookups first : Some transactions have to look up IOTLB multiple times. Counts the first time a request looks up IOTLB.",
@@ -1247,8 +1437,10 @@
},
{
"BriefDescription": "IOTLB Fills (same as IOTLB miss)",
+ "Counter": "0",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.MISSES",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "IOTLB Fills (same as IOTLB miss) : When a transaction misses IOTLB, it does a page walk to look up memory and bring in the relevant page translation. Counts when this page translation is written to IOTLB.",
@@ -1257,8 +1449,10 @@
},
{
"BriefDescription": ": IOMMU memory access",
+ "Counter": "0",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.NUM_MEM_ACCESSES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOMMU memory access : IOMMU sends out memory fetches when it misses the cache look up which is indicated by this signal. M2IOSF only uses low priority channel",
"UMask": "0xc0",
@@ -1266,8 +1460,10 @@
},
{
"BriefDescription": ": PWC Hit to a 2M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_1G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 2M page : Counts each time a transaction's first look up hits the SLPWC at the 2M level",
"UMask": "0x4",
@@ -1275,8 +1471,10 @@
},
{
"BriefDescription": ": PWT Hit to a 256T page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_256T_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWT Hit to a 256T page : Counts each time a transaction's first look up hits the SLPWC at the 512G level",
"UMask": "0x10",
@@ -1284,8 +1482,10 @@
},
{
"BriefDescription": ": PWC Hit to a 4K page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_2M_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 4K page : Counts each time a transaction's first look up hits the SLPWC at the 4K level",
"UMask": "0x2",
@@ -1293,8 +1493,10 @@
},
{
"BriefDescription": ": PWC Hit to a 1G page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_512G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 1G page : Counts each time a transaction's first look up hits the SLPWC at the 1G level",
"UMask": "0x8",
@@ -1302,8 +1504,10 @@
},
{
"BriefDescription": ": PageWalk cache fill",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_CACHE_FILLS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PageWalk cache fill : When a transaction misses SLPWC, it does a page walk to look up memory and bring in the relevant page translation. When this page translation is written to SLPWC, ObsPwcFillValid_nnnH is asserted.",
"UMask": "0x20",
@@ -1311,8 +1515,10 @@
},
{
"BriefDescription": ": PageWalk cache lookup",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWT_CACHE_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PageWalk cache lookup : Counts each time a transaction looks up second level page walk cache.",
"UMask": "0x1",
@@ -1320,8 +1526,10 @@
},
{
"BriefDescription": ": PWC Hit to a 2M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.SLPWC_1G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 2M page : Counts each time a transaction's first look up hits the SLPWC at the 2M level",
"UMask": "0x4",
@@ -1329,8 +1537,10 @@
},
{
"BriefDescription": ": PWC Hit to a 2M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.SLPWC_256T_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 2M page : Counts each time a transaction's first look up hits the SLPWC at the 2M level",
"UMask": "0x10",
@@ -1338,8 +1548,10 @@
},
{
"BriefDescription": ": PWC Hit to a 1G page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.SLPWC_512G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 1G page : Counts each time a transaction's first look up hits the SLPWC at the 1G level",
"UMask": "0x8",
@@ -1347,8 +1559,10 @@
},
{
"BriefDescription": ": Global IOTLB invalidation cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.PWT_OCCUPANCY_MSB",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": ": Global IOTLB invalidation cycles : Indicates that IOMMU is doing global invalidation.",
@@ -1357,8 +1571,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus : Asserted if all bits specified by mask match",
@@ -1367,8 +1583,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus and PCIE bus : Asserted if all bits specified by mask match",
@@ -1377,8 +1595,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus and !(PCIE bus) : Asserted if all bits specified by mask match",
@@ -1387,8 +1607,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "AND Mask/match for debug bus : PCIE bus : Asserted if all bits specified by mask match",
@@ -1397,8 +1619,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus : Asserted if all bits specified by mask match",
@@ -1407,8 +1631,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus) : Asserted if all bits specified by mask match",
@@ -1417,8 +1643,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus : Asserted if any bits specified by mask match",
@@ -1427,8 +1655,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus and PCIE bus : Asserted if any bits specified by mask match",
@@ -1437,8 +1667,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus and !(PCIE bus) : Asserted if any bits specified by mask match",
@@ -1447,8 +1679,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "OR Mask/match for debug bus : PCIE bus : Asserted if any bits specified by mask match",
@@ -1457,8 +1691,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus : Asserted if any bits specified by mask match",
@@ -1467,8 +1703,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus) : Asserted if any bits specified by mask match",
@@ -1477,6 +1715,7 @@
},
{
"BriefDescription": "Number requests PCIe makes of the main die : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU.COMMIT.ALL",
"FCMask": "0x07",
@@ -1488,8 +1727,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Abort",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.ABORT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1498,8 +1739,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Confined P2P",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.CONFINED_P2P",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1508,8 +1751,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Local P2P",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.LOC_P2P",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1518,8 +1763,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Multi-cast",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MCAST",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1528,8 +1775,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MEM",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1538,8 +1787,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : MsgB",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MSGB",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1548,8 +1799,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Remote P2P",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.REM_P2P",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1558,8 +1811,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Ubox",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1567,9 +1822,23 @@
"Unit": "IIO"
},
{
+ "BriefDescription": "Posted requests sent by the integrated IO (IIO) controller to the Ubox, useful for counting message signaled interrupts (MSI).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX_POSTED",
+ "Experimental": "1",
+ "FCMask": "0x01",
+ "PerPkg": "1",
+ "PortMask": "0x00FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
"BriefDescription": "ITC address map 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8f",
"EventName": "UNC_IIO_NUM_TGT_MATCHED_REQ_OF_CPU",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "UNC_IIO_NUM_TGT_MATCHED_REQ_OF_CPU",
@@ -1577,8 +1846,10 @@
},
{
"BriefDescription": "Outbound cacheline requests issued : 64B requests issued to device",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_IIO_OUTBOUND_CL_REQS_ISSUED.TO_IO",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1588,8 +1859,10 @@
},
{
"BriefDescription": "Outbound TLP (transaction layer packet) requests issued : To device",
+ "Counter": "0,1,2,3",
"EventCode": "0xd1",
"EventName": "UNC_IIO_OUTBOUND_TLP_REQS_ISSUED.TO_IO",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1599,8 +1872,10 @@
},
{
"BriefDescription": "PWT occupancy. Does not include 9th bit of occupancy (will undercount if PWT is greater than 255 per cycle).",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_IIO_PWT_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PortMask": "0x0000",
"PublicDescription": "PWT occupancy : Indicates how many page walks are outstanding at any point in time.",
@@ -1609,106 +1884,126 @@
},
{
"BriefDescription": "Request Ownership : PCIe Request complete",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Request Ownership : PCIe Request complete : Only for posted requests : Each PCIe request is broken down into a series of cacheline granular requests and each cacheline size request may need to make multiple passes through the pipeline (e.g. for posted interrupts or multi-cast). Each time a single PCIe request completes all its cacheline granular requests, it advances pointer.",
- "UMask": "0x70ff020",
+ "UMask": "0x20",
"Unit": "IIO"
},
{
"BriefDescription": "Request Ownership : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Request Ownership : Writing line : Only for posted requests : Only for posted requests",
- "UMask": "0x70ff008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Request Ownership : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Request Ownership : Issuing final read or write of line : Only for posted requests",
- "UMask": "0x70ff004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "Request Ownership : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Request Ownership : Passing data to be written : Only for posted requests : Only for posted requests",
- "UMask": "0x70ff010",
+ "UMask": "0x10",
"Unit": "IIO"
},
{
"BriefDescription": "Processing response from IOMMU : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Processing response from IOMMU : Passing data to be written : Only for posted requests",
- "UMask": "0x70ff008",
+ "UMask": "0x8",
"Unit": "IIO"
},
{
"BriefDescription": "Processing response from IOMMU : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.IOMMU_HIT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
- "UMask": "0x70ff002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Processing response from IOMMU : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.IOMMU_REQ",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Processing response from IOMMU : Request Ownership : Only for posted requests",
- "UMask": "0x70ff001",
+ "UMask": "0x1",
"Unit": "IIO"
},
{
"BriefDescription": "Processing response from IOMMU : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "Processing response from IOMMU : Writing line : Only for posted requests",
- "UMask": "0x70ff004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Request - pass complete : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "PCIe Request - pass complete : Passing data to be written : Each PCIe request is broken down into a series of cacheline granular requests and each cacheline size request may need to make multiple passes through the pipeline (e.g. for posted interrupts or multi-cast). Each time a cacheline completes a single pass (e.g. posts a write to single multi-cast target) it advances state : Only for posted requests",
- "UMask": "0x70ff020",
+ "UMask": "0x20",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Request - pass complete : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
@@ -1718,28 +2013,33 @@
},
{
"BriefDescription": "PCIe Request - pass complete : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "PCIe Request - pass complete : Request Ownership : Each PCIe request is broken down into a series of cacheline granular requests and each cacheline size request may need to make multiple passes through the pipeline (e.g. for posted interrupts or multi-cast). Each time a cacheline completes a single pass (e.g. posts a write to single multi-cast target) it advances state : Only for posted requests",
- "UMask": "0x70ff004",
+ "UMask": "0x4",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Request - pass complete : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x00FF",
"PublicDescription": "PCIe Request - pass complete : Writing line : Each PCIe request is broken down into a series of cacheline granular requests and each cacheline size request may need to make multiple passes through the pipeline (e.g. for posted interrupts or multi-cast). Each time a cacheline completes a single pass (e.g. posts a write to single multi-cast target) it advances state : Only for posted requests",
- "UMask": "0x70ff010",
+ "UMask": "0x10",
"Unit": "IIO"
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part0",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -1751,6 +2051,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part1",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -1762,6 +2063,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part2",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -1773,6 +2075,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by the CPU to IIO Part3",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -1784,6 +2087,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART4",
"FCMask": "0x07",
@@ -1795,6 +2099,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART5",
"FCMask": "0x07",
@@ -1806,6 +2111,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART6",
"FCMask": "0x07",
@@ -1817,6 +2123,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART7",
"FCMask": "0x07",
@@ -1828,6 +2135,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part0 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -1839,6 +2147,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part1 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -1850,6 +2159,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part2 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -1861,6 +2171,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made to IIO Part3 by the CPU",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -1872,6 +2183,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -1883,6 +2195,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -1894,6 +2207,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -1905,6 +2219,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -1916,94 +2231,111 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0001",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x16 card plugged in to Lane 0/1/2/3, Or x8 card plugged in to Lane 0/1, Or x4 card is plugged in to slot 0",
- "UMask": "0x7001002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0002",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 1",
- "UMask": "0x7002002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0004",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 2/3, Or x4 card is plugged in to slot 2",
- "UMask": "0x7004002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0008",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 3",
- "UMask": "0x7008002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0010",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x16 card plugged in to Lane 4/5/6/7, Or x8 card plugged in to Lane 4/5, Or x4 card is plugged in to slot 4",
- "UMask": "0x7010002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0020",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 5",
- "UMask": "0x7020002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0040",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x8 card plugged in to Lane 6/7, Or x4 card is plugged in to slot 6",
- "UMask": "0x7040002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0080",
"PublicDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card. : Also known as Outbound. Number of requests initiated by the main die, including reads and writes. : x4 card is plugged in to slot 7",
- "UMask": "0x7080002",
+ "UMask": "0x2",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART0",
"FCMask": "0x07",
@@ -2015,6 +2347,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART1",
"FCMask": "0x07",
@@ -2026,6 +2359,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART2",
"FCMask": "0x07",
@@ -2037,6 +2371,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART3",
"FCMask": "0x07",
@@ -2048,6 +2383,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART4",
"FCMask": "0x07",
@@ -2059,6 +2395,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART5",
"FCMask": "0x07",
@@ -2070,6 +2407,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART6",
"FCMask": "0x07",
@@ -2081,6 +2419,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART7",
"FCMask": "0x07",
@@ -2092,6 +2431,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part0 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -2103,6 +2443,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part1 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -2114,6 +2455,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part2 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -2125,6 +2467,7 @@
},
{
"BriefDescription": "Read request for up to a 64 byte transaction is made by IIO Part3 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -2136,6 +2479,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART4",
"FCMask": "0x07",
@@ -2147,6 +2491,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART5",
"FCMask": "0x07",
@@ -2158,6 +2503,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART6",
"FCMask": "0x07",
@@ -2169,6 +2515,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART7",
"FCMask": "0x07",
@@ -2180,6 +2527,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part0 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -2191,6 +2539,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part1 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -2202,6 +2551,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part2 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -2213,6 +2563,7 @@
},
{
"BriefDescription": "Write request of up to a 64 byte transaction is made by IIO Part3 to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -2224,6 +2575,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -2235,6 +2587,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -2246,6 +2599,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -2257,6 +2611,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -2268,8 +2623,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0001",
@@ -2279,8 +2636,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0002",
@@ -2290,8 +2649,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0004",
@@ -2301,8 +2662,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0008",
@@ -2312,8 +2675,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0010",
@@ -2323,8 +2688,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0020",
@@ -2334,8 +2701,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0040",
@@ -2345,8 +2714,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x0080",
@@ -2356,6 +2727,7 @@
},
{
"BriefDescription": "M2P Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M2P_CLOCKTICKS",
"PerPkg": "1",
@@ -2364,6 +2736,7 @@
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_M2P_CMS_CLOCKTICKS",
"PerPkg": "1",
@@ -2371,8 +2744,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2P_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -2380,8 +2755,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2P_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -2389,8 +2766,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.DRS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : DRS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x1",
@@ -2398,8 +2777,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.DRS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : DRS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x2",
@@ -2407,8 +2788,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCB_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCB : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x4",
@@ -2416,8 +2799,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCB_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCB : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x8",
@@ -2425,8 +2810,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCS message class.",
"UMask": "0x10",
@@ -2434,8 +2821,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credit for transfer through CMS Port 0s to the IIO for the NCS message class.",
"UMask": "0x20",
@@ -2443,8 +2832,10 @@
},
{
"BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_M2P_IIO_CREDITS_REJECT.DRS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : DRS : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the DRS message class.",
"UMask": "0x8",
@@ -2452,8 +2843,10 @@
},
{
"BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_M2P_IIO_CREDITS_REJECT.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : NCB : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the NCB message class.",
"UMask": "0x10",
@@ -2461,8 +2854,10 @@
},
{
"BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_M2P_IIO_CREDITS_REJECT.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : NCS : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the NCS message class.",
"UMask": "0x20",
@@ -2470,8 +2865,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.DRS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x1",
@@ -2479,8 +2876,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.DRS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x2",
@@ -2488,8 +2887,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCB_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x4",
@@ -2497,8 +2898,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCB_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x8",
@@ -2506,8 +2909,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCS message class.",
"UMask": "0x10",
@@ -2515,8 +2920,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credit for transfer through CMS Port 0s to the IIO for the NCS message class.",
"UMask": "0x20",
@@ -2524,896 +2931,1120 @@
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Shared Credits Returned : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Shared Credits Returned : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Shared Credits Returned : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent3",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent4",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent5",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : All",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Local NCB",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.LOCAL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Local NCS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.LOCAL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Remote NCB",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.REMOTE_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Remote NCS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.REMOTE_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Local NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.LOCAL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Local NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.LOCAL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Remote NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.REMOTE_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Remote NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.REMOTE_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Local NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.LOCAL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Local NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.LOCAL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Remote NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.REMOTE_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Remote NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.REMOTE_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Shared Credits Returned : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Shared Credits Returned : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Shared Credits Returned : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x80",
@@ -3421,8 +4052,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_IDI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x1",
@@ -3430,8 +4063,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x2",
@@ -3439,8 +4074,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x4",
@@ -3448,8 +4085,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.IIO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x20",
@@ -3457,8 +4096,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.IIO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x40",
@@ -3466,8 +4107,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x8",
@@ -3475,8 +4118,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x10",
@@ -3484,8 +4129,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x80",
@@ -3493,8 +4140,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.CHA_IDI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x1",
@@ -3502,8 +4151,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.CHA_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x2",
@@ -3511,8 +4162,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.CHA_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x4",
@@ -3520,8 +4173,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.IIO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x20",
@@ -3529,8 +4184,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.IIO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x40",
@@ -3538,8 +4195,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x8",
@@ -3547,8 +4206,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x10",
@@ -3556,24 +4217,30 @@
},
{
"BriefDescription": "UNC_M2P_TxC_CREDITS.PMM",
+ "Counter": "0,1",
"EventCode": "0x2d",
"EventName": "UNC_M2P_TxC_CREDITS.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "UNC_M2P_TxC_CREDITS.PRQ",
+ "Counter": "0,1",
"EventCode": "0x2d",
"EventName": "UNC_M2P_TxC_CREDITS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.PMM_BLOCK_0",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -3583,8 +4250,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.PMM_BLOCK_1",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -3594,8 +4263,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.PMM_DISTRESS_0",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -3605,8 +4276,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.PMM_DISTRESS_1",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json
index 3ff9e9b722c8..30044177ccf8 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles - at UCLK",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M2HBM_CLOCKTICKS",
"PerPkg": "1",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_M2HBM_CMS_CLOCKTICKS",
"PerPkg": "1",
@@ -15,16 +17,20 @@
},
{
"BriefDescription": "Cycles when direct to core mode (which bypasses the CHA) was disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2HBM_DIRECT2CORE_NOT_TAKEN_DIRSTATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2HBM"
},
{
"BriefDescription": "Cycles when direct to core mode, which bypasses the CHA, was disabled : Non Cisgress",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2HBM_DIRECT2CORE_NOT_TAKEN_DIRSTATE.NON_CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of time non cisgress D2C was not honoured by egress due to directory state constraints",
"UMask": "0x2",
@@ -32,47 +38,59 @@
},
{
"BriefDescription": "Counts the time when FM didn't do d2c for fill reads (cross tile case)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2HBM_DIRECT2CORE_NOT_TAKEN_NOTFORKED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Number of reads in which direct to core transaction were overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2HBM_DIRECT2CORE_TXN_OVERRIDE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2HBM"
},
{
"BriefDescription": "Number of reads in which direct to core transaction was overridden : Cisgress",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2HBM_DIRECT2CORE_TXN_OVERRIDE.CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Number of reads in which direct to Intel UPI transactions were overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2HBM"
},
{
"BriefDescription": "Cycles when direct to Intel UPI was disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2HBM"
},
{
"BriefDescription": "Cycles when Direct2UPI was Disabled : Cisgress D2U Ignored",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE.CISGRESS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -82,8 +100,10 @@
},
{
"BriefDescription": "Cycles when Direct2UPI was Disabled : Egress Ignored D2U",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE.EGRESS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -93,8 +113,10 @@
},
{
"BriefDescription": "Cycles when Direct2UPI was Disabled : Non Cisgress D2U Ignored",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M2HBM_DIRECT2UPI_NOT_TAKEN_DIRSTATE.NON_CISGRESS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -104,86 +126,107 @@
},
{
"BriefDescription": "Number of reads that a message sent direct2 Intel UPI was overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x1c",
"EventName": "UNC_M2HBM_DIRECT2UPI_TXN_OVERRIDE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2HBM"
},
{
"BriefDescription": "Number of times a direct to UPI transaction was overridden.",
+ "Counter": "0,1,2,3",
"EventCode": "0x1c",
"EventName": "UNC_M2HBM_DIRECT2UPI_TXN_OVERRIDE.CISGRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1d",
"EventName": "UNC_M2HBM_DIRECTORY_HIT.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory lookups (any state found)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.ANY",
"PerPkg": "1",
@@ -193,6 +236,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookups (cacheline found in A state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.STATE_A",
"PerPkg": "1",
@@ -202,6 +246,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in I state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.STATE_I",
"PerPkg": "1",
@@ -211,6 +256,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory lookup (cacheline found in S state)",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2HBM_DIRECTORY_LOOKUP.STATE_S",
"PerPkg": "1",
@@ -220,86 +266,107 @@
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2HBM"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x1e",
"EventName": "UNC_M2HBM_DIRECTORY_MISS.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from A to I",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A2I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x320",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from A to S",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A2S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x340",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from/to Any state",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.ANY",
"PerPkg": "1",
@@ -308,8 +375,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_I_HIT_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -319,8 +388,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_I_MISS_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -330,8 +401,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_S_HIT_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -341,8 +414,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.A_TO_S_MISS_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -352,8 +427,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.HIT_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -363,24 +440,30 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from I to A",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I2A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x304",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from I to S",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I2S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x302",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_A_HIT_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -390,8 +473,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_A_MISS_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -401,8 +486,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_S_HIT_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -412,8 +499,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.I_TO_S_MISS_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -423,8 +512,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.MISS_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -434,24 +525,30 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory update from S to A",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S2A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x310",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory update from S to I",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S2I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x308",
"Unit": "M2HBM"
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S_TO_A_HIT_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -461,8 +558,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S_TO_A_MISS_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -472,8 +571,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S_TO_I_HIT_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -483,8 +584,10 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2HBM_DIRECTORY_UPDATE.S_TO_I_MISS_NON_PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -494,64 +597,80 @@
},
{
"BriefDescription": "Count distress signalled on AkAd cmp message",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_M2HBM_DISTRESS.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2HBM"
},
{
"BriefDescription": "Count distress signalled on any packet type",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_M2HBM_DISTRESS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Count distress signalled on Bl Cmp message",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_M2HBM_DISTRESS.BL_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2HBM"
},
{
"BriefDescription": "Count distress signalled on NM fill write message",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_M2HBM_DISTRESS.CROSSTILE_NMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2HBM"
},
{
"BriefDescription": "Count distress signalled on D2Cha message",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_M2HBM_DISTRESS.D2CHA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2HBM"
},
{
"BriefDescription": "Count distress signalled on D2c message",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_M2HBM_DISTRESS.D2CORE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Count distress signalled on D2k message",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_M2HBM_DISTRESS.D2UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2HBM"
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2HBM_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x80000004",
@@ -559,8 +678,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2HBM_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x80000001",
@@ -568,8 +689,10 @@
},
{
"BriefDescription": "Count when Starve Glocab counter is at 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2HBM_IGR_STARVE_WINNER.MASK7",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -578,32 +701,40 @@
},
{
"BriefDescription": "Reads to iMC issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x304",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH0.ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH0.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x104",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH0.NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH0.NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x101",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH0_ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH0_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -612,24 +743,30 @@
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH0_FROM_TGR",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH0_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x140",
"Unit": "M2HBM"
},
{
"BriefDescription": "Critical Priority - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH0_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x102",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH0_NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH0_NORMAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -638,24 +775,30 @@
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH1.ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH1.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x204",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH1.NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH1.NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x201",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH1_ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH1_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -664,24 +807,30 @@
},
{
"BriefDescription": "From TGR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH1_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x240",
"Unit": "M2HBM"
},
{
"BriefDescription": "Critical Priority - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH1_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x202",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.CH1_NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.CH1_NORMAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -690,64 +839,80 @@
},
{
"BriefDescription": "From TGR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x340",
"Unit": "M2HBM"
},
{
"BriefDescription": "Critical Priority - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x302",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_READS.NORMAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2HBM_IMC_READS.NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x301",
"Unit": "M2HBM"
},
{
"BriefDescription": "All Writes - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1810",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0.ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x810",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0.FULL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0.FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x801",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0.PARTIAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x802",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0_ALL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -756,15 +921,19 @@
},
{
"BriefDescription": "From TGR - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0_FULL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_FULL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -773,16 +942,20 @@
},
{
"BriefDescription": "ISOCH Full Line - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x804",
"Unit": "M2HBM"
},
{
"BriefDescription": "Non-Inclusive - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_NI",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -790,8 +963,10 @@
},
{
"BriefDescription": "Non-Inclusive Miss - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_NI_MISS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -799,8 +974,10 @@
},
{
"BriefDescription": "UNC_M2HBM_IMC_WRITES.CH0_PARTIAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_PARTIAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -809,40 +986,50 @@
},
{
"BriefDescription": "ISOCH Partial - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH0_PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x808",
"Unit": "M2HBM"
},
{
"BriefDescription": "All Writes - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1010",
"Unit": "M2HBM"
},
{
"BriefDescription": "Full Line Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1.FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1001",
"Unit": "M2HBM"
},
{
"BriefDescription": "Partial Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1002",
"Unit": "M2HBM"
},
{
"BriefDescription": "All Writes - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_ALL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -851,15 +1038,19 @@
},
{
"BriefDescription": "From TGR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Full Line Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_FULL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -868,16 +1059,20 @@
},
{
"BriefDescription": "ISOCH Full Line - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1004",
"Unit": "M2HBM"
},
{
"BriefDescription": "Non-Inclusive - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_NI",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -885,8 +1080,10 @@
},
{
"BriefDescription": "Non-Inclusive Miss - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_NI_MISS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -894,8 +1091,10 @@
},
{
"BriefDescription": "Partial Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_PARTIAL",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -904,39 +1103,49 @@
},
{
"BriefDescription": "ISOCH Partial - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.CH1_PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1008",
"Unit": "M2HBM"
},
{
"BriefDescription": "From TGR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Full Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1801",
"Unit": "M2HBM"
},
{
"BriefDescription": "ISOCH Full Line - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1804",
"Unit": "M2HBM"
},
{
"BriefDescription": "Non-Inclusive - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.NI",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -944,8 +1153,10 @@
},
{
"BriefDescription": "Non-Inclusive Miss - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.NI_MISS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -953,159 +1164,199 @@
},
{
"BriefDescription": "Partial Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1802",
"Unit": "M2HBM"
},
{
"BriefDescription": "ISOCH Partial - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2HBM_IMC_WRITES.PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1808",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_PREFCAM_CIS_DROPS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5c",
"EventName": "UNC_M2HBM_PREFCAM_CIS_DROPS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH0_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH0_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH1_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2HBM"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.CH1_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2HBM"
},
{
"BriefDescription": "Data Prefetches Dropped : UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "M2HBM"
},
{
"BriefDescription": "Data Prefetches Dropped",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_DROPS.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "M2HBM"
},
{
"BriefDescription": ": UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_MERGE.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "M2HBM"
},
{
"BriefDescription": ": XPT - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_MERGE.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "M2HBM"
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x5e",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_NO_MERGE.RD_MERGED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2HBM"
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x5e",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_NO_MERGE.WR_MERGED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2HBM"
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x5e",
"EventName": "UNC_M2HBM_PREFCAM_DEMAND_NO_MERGE.WR_SQUASHED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH0_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH0_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH1_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2HBM_PREFCAM_INSERTS.CH1_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2HBM_PREFCAM_INSERTS.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2HBM_PREFCAM_INSERTS.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Prefetch CAM Inserts : XPT -All Channels",
"UMask": "0x5",
@@ -1113,80 +1364,100 @@
},
{
"BriefDescription": "Prefetch CAM Occupancy : All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M2HBM_PREFCAM_OCCUPANCY.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M2HBM_PREFCAM_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Prefetch CAM Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M2HBM_PREFCAM_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_M2HBM_PREFCAM_RESP_MISS.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2HBM"
},
{
"BriefDescription": ": Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_M2HBM_PREFCAM_RESP_MISS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": ": Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5f",
"EventName": "UNC_M2HBM_PREFCAM_RESP_MISS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.1LM_POSTED",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.1LM_POSTED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.CIS",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.CIS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.SQUASHED",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2HBM_PREFCAM_RxC_DEALLOCS.SQUASHED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "UNC_M2HBM_PREFCAM_RxC_OCCUPANCY",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M2HBM_PREFCAM_RxC_OCCUPANCY",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1194,8 +1465,10 @@
},
{
"BriefDescription": "AD Ingress (from CMS) : AD Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M2HBM_RxC_AD.INSERTS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1204,23 +1477,29 @@
},
{
"BriefDescription": "AD Ingress (from CMS) : AD Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M2HBM_RxC_AD_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "AD Ingress (from CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M2HBM_RxC_AD_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "BL Ingress (from CMS) : BL Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M2HBM_RxC_BL.INSERTS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1230,8 +1509,10 @@
},
{
"BriefDescription": "BL Ingress (from CMS) : BL Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M2HBM_RxC_BL_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts anytime a BL packet is added to Ingress",
"UMask": "0x1",
@@ -1239,61 +1520,77 @@
},
{
"BriefDescription": "BL Ingress (from CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M2HBM_RxC_BL_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Number AD Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M2HBM_TGR_AD_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Number BL Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_M2HBM_TGR_BL_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2HBM_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x104",
"Unit": "M2HBM"
},
{
"BriefDescription": "Tracker Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2HBM_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x204",
"Unit": "M2HBM"
},
{
"BriefDescription": "Tracker Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2HBM_TRACKER_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Tracker Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2HBM_TRACKER_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "AD Egress (to CMS) : AD Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M2HBM_TxC_AD.INSERTS",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1303,8 +1600,10 @@
},
{
"BriefDescription": "AD Egress (to CMS) : AD Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M2HBM_TxC_AD_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts anytime a AD packet is added to Egress",
"UMask": "0x1",
@@ -1312,15 +1611,19 @@
},
{
"BriefDescription": "AD Egress (to CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "UNC_M2HBM_TxC_AD_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2HBM"
},
{
"BriefDescription": "BL Egress (to CMS) : Inserts - CMS0 - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UNC_M2HBM_TxC_BL.INSERTS_CMS0",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1330,8 +1633,10 @@
},
{
"BriefDescription": "BL Egress (to CMS) : Inserts - CMS1 - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UNC_M2HBM_TxC_BL.INSERTS_CMS1",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -1341,160 +1646,200 @@
},
{
"BriefDescription": "BL Egress (to CMS) Occupancy : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x0f",
"EventName": "UNC_M2HBM_TxC_BL_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2HBM"
},
{
"BriefDescription": "BL Egress (to CMS) Occupancy : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x0f",
"EventName": "UNC_M2HBM_TxC_BL_OCCUPANCY.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "BL Egress (to CMS) Occupancy : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x0f",
"EventName": "UNC_M2HBM_TxC_BL_OCCUPANCY.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "WPQ Flush : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2HBM_WPQ_FLUSH.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "WPQ Flush : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2HBM_WPQ_FLUSH.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Regular : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2HBM_WPQ_NO_REG_CRD.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Regular : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2HBM_WPQ_NO_REG_CRD.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Special : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2HBM_WPQ_NO_SPEC_CRD.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "M2M and iMC WPQ Cycles w/Credits - Special : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2HBM_WPQ_NO_SPEC_CRD.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2HBM_WR_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2HBM_WR_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Non-Posted Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2HBM_WR_TRACKER_NONPOSTED_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Non-Posted Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2HBM_WR_TRACKER_NONPOSTED_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2HBM_WR_TRACKER_NONPOSTED_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2HBM_WR_TRACKER_NONPOSTED_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Posted Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2HBM_WR_TRACKER_POSTED_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Posted Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2HBM_WR_TRACKER_POSTED_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Posted Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2HBM_WR_TRACKER_POSTED_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2HBM"
},
{
"BriefDescription": "Write Tracker Posted Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2HBM_WR_TRACKER_POSTED_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2HBM"
},
{
"BriefDescription": "Activate due to read, write, underfill, or bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0xff",
@@ -1502,8 +1847,10 @@
},
{
"BriefDescription": "Activate due to read",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x11",
@@ -1511,8 +1858,10 @@
},
{
"BriefDescription": "HBM Activate Count : Activate due to Read in PCH0",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.RD_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x1",
@@ -1520,8 +1869,10 @@
},
{
"BriefDescription": "HBM Activate Count : Activate due to Read in PCH1",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.RD_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x10",
@@ -1529,8 +1880,10 @@
},
{
"BriefDescription": "HBM Activate Count : Underfill Read transaction on Page Empty or Page Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.UFILL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x44",
@@ -1538,8 +1891,10 @@
},
{
"BriefDescription": "HBM Activate Count",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.UFILL_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x4",
@@ -1547,8 +1902,10 @@
},
{
"BriefDescription": "HBM Activate Count",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.UFILL_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x40",
@@ -1556,8 +1913,10 @@
},
{
"BriefDescription": "Activate due to write",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.WR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x22",
@@ -1565,8 +1924,10 @@
},
{
"BriefDescription": "HBM Activate Count : Activate due to Write in PCH0",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.WR_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x2",
@@ -1574,8 +1935,10 @@
},
{
"BriefDescription": "HBM Activate Count : Activate due to Write in PCH1",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_MCHBM_ACT_COUNT.WR_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Activate commands sent on this channel. Activate commands are issued to open up a page on the HBM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x20",
@@ -1583,16 +1946,20 @@
},
{
"BriefDescription": "All CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xff",
"Unit": "MCHBM"
},
{
"BriefDescription": "Pseudo Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "HBM RD_CAS and WR_CAS Commands",
"UMask": "0x40",
@@ -1600,8 +1967,10 @@
},
{
"BriefDescription": "Pseudo Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "HBM RD_CAS and WR_CAS Commands",
"UMask": "0x80",
@@ -1609,134 +1978,167 @@
},
{
"BriefDescription": "Read CAS commands issued (regular and underfill)",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xcf",
"Unit": "MCHBM"
},
{
"BriefDescription": "Regular read CAS commands with precharge",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.RD_PRE_REG",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc2",
"Unit": "MCHBM"
},
{
"BriefDescription": "Underfill read CAS commands with precharge",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.RD_PRE_UNDERFILL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc8",
"Unit": "MCHBM"
},
{
"BriefDescription": "Regular read CAS commands issued (does not include underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.RD_REG",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc1",
"Unit": "MCHBM"
},
{
"BriefDescription": "Underfill read CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.RD_UNDERFILL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc4",
"Unit": "MCHBM"
},
{
"BriefDescription": "Write CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf0",
"Unit": "MCHBM"
},
{
"BriefDescription": "HBM RD_CAS and WR_CAS Commands. : HBM WR_CAS commands w/o auto-pre",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.WR_NONPRE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd0",
"Unit": "MCHBM"
},
{
"BriefDescription": "Write CAS commands with precharge",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_MCHBM_CAS_COUNT.WR_PRE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe0",
"Unit": "MCHBM"
},
{
"BriefDescription": "Pseudo Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "MCHBM"
},
{
"BriefDescription": "Pseudo Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "MCHBM"
},
{
"BriefDescription": "Read CAS Command in Interleaved Mode (32B)",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_32B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc8",
"Unit": "MCHBM"
},
{
"BriefDescription": "Read CAS Command in Regular Mode (64B) in Pseudochannel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_64B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc1",
"Unit": "MCHBM"
},
{
"BriefDescription": "Underfill Read CAS Command in Interleaved Mode (32B)",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_UFILL_32B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd0",
"Unit": "MCHBM"
},
{
"BriefDescription": "Underfill Read CAS Command in Regular Mode (64B) in Pseudochannel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.RD_UFILL_64B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc2",
"Unit": "MCHBM"
},
{
"BriefDescription": "Write CAS Command in Interleaved Mode (32B)",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.WR_32B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe0",
"Unit": "MCHBM"
},
{
"BriefDescription": "Write CAS Command in Regular Mode (64B) in Pseudochannel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_MCHBM_CAS_ISSUED_REQ_LEN.WR_64B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc4",
"Unit": "MCHBM"
},
{
"BriefDescription": "IMC Clockticks at DCLK frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_MCHBM_CLOCKTICKS",
"PerPkg": "1",
@@ -1744,9 +2146,21 @@
"Unit": "MCHBM"
},
{
+ "BriefDescription": "ECC Correctable Errors",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x09",
+ "EventName": "UNC_MCHBM_ECC_CORRECTABLE_ERRORS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "ECC Correctable Errors. Counts the number of ECC errors detected and corrected by the iMC on this channel. This counter is only useful with ECC devices. This count will increment one time for each correction regardless of the number of bits corrected. The iMC can correct up to 4 bit errors in independent channel mode and 8 bit errors in lockstep mode.",
+ "Unit": "MCHBM"
+ },
+ {
"BriefDescription": "HBM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_MCHBM_HBM_PREALL.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times that the precharge all command was sent.",
"UMask": "0x1",
@@ -1754,8 +2168,10 @@
},
{
"BriefDescription": "HBM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_MCHBM_HBM_PREALL.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times that the precharge all command was sent.",
"UMask": "0x2",
@@ -1763,8 +2179,10 @@
},
{
"BriefDescription": "All Precharge Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_MCHBM_HBM_PRE_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Precharge All Commands: Counts the number of times that the precharge all command was sent.",
"UMask": "0x3",
@@ -1772,15 +2190,19 @@
},
{
"BriefDescription": "IMC Clockticks at HCLK frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_MCHBM_HCLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "MCHBM"
},
{
"BriefDescription": "All precharge events",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0xff",
@@ -1788,8 +2210,10 @@
},
{
"BriefDescription": "Precharge from MC page table",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.PGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x88",
@@ -1797,8 +2221,10 @@
},
{
"BriefDescription": "HBM Precharge commands. : Precharges from Page Table",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.PGT_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel. : Equivalent to PAGE_EMPTY",
"UMask": "0x8",
@@ -1806,8 +2232,10 @@
},
{
"BriefDescription": "HBM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.PGT_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x80",
@@ -1815,8 +2243,10 @@
},
{
"BriefDescription": "Precharge due to read on page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.RD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x11",
@@ -1824,8 +2254,10 @@
},
{
"BriefDescription": "HBM Precharge commands. : Precharge due to read",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.RD_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel. : Precharge from read bank scheduler",
"UMask": "0x1",
@@ -1833,8 +2265,10 @@
},
{
"BriefDescription": "HBM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.RD_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x10",
@@ -1842,8 +2276,10 @@
},
{
"BriefDescription": "HBM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.UFILL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x44",
@@ -1851,8 +2287,10 @@
},
{
"BriefDescription": "HBM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.UFILL_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x4",
@@ -1860,8 +2298,10 @@
},
{
"BriefDescription": "HBM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.UFILL_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x40",
@@ -1869,8 +2309,10 @@
},
{
"BriefDescription": "Precharge due to write on page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.WR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x22",
@@ -1878,8 +2320,10 @@
},
{
"BriefDescription": "HBM Precharge commands. : Precharge due to write",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.WR_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel. : Precharge from write bank scheduler",
"UMask": "0x2",
@@ -1887,8 +2331,10 @@
},
{
"BriefDescription": "HBM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_MCHBM_PRE_COUNT.WR_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of HBM Precharge commands sent on this channel.",
"UMask": "0x20",
@@ -1896,46 +2342,58 @@
},
{
"BriefDescription": "Counts the number of cycles where the read buffer has greater than UMASK elements. NOTE: Umask must be set to the maximum number of elements in the queue (24 entries for SPR).",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_MCHBM_RDB_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "MCHBM"
},
{
"BriefDescription": "Counts the number of inserts into the read buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_MCHBM_RDB_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "MCHBM"
},
{
"BriefDescription": "Read Data Buffer Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_MCHBM_RDB_INSERTS.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "MCHBM"
},
{
"BriefDescription": "Read Data Buffer Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_MCHBM_RDB_INSERTS.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "MCHBM"
},
{
"BriefDescription": "Counts the number of elements in the read buffer per cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_MCHBM_RDB_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "MCHBM"
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_MCHBM_RPQ_INSERTS.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Allocations: Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
"UMask": "0x1",
@@ -1943,8 +2401,10 @@
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_MCHBM_RPQ_INSERTS.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Allocations: Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
"UMask": "0x2",
@@ -1952,24 +2412,30 @@
},
{
"BriefDescription": "Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_MCHBM_RPQ_OCCUPANCY_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Occupancy: Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory.",
"Unit": "MCHBM"
},
{
"BriefDescription": "Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_MCHBM_RPQ_OCCUPANCY_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Occupancy: Accumulates the occupancies of the Read Pending Queue each cycle. This can then be used to calculate both the average occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory.",
"Unit": "MCHBM"
},
{
"BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_MCHBM_WPQ_INSERTS.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Allocations: Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC.",
"UMask": "0x1",
@@ -1977,8 +2443,10 @@
},
{
"BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_MCHBM_WPQ_INSERTS.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Allocations: Counts the number of allocations into the Write Pending Queue. This can then be used to calculate the average queuing latency (in conjunction with the WPQ occupancy count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC.",
"UMask": "0x2",
@@ -1986,24 +2454,30 @@
},
{
"BriefDescription": "Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_MCHBM_WPQ_OCCUPANCY_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Occupancy: Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to memory. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies. So, we provide filtering based on if the request has posted or not. By using the not posted filter, we can track how long writes spent in the iMC before completions were sent to the HA. The posted filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts.",
"Unit": "MCHBM"
},
{
"BriefDescription": "Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_MCHBM_WPQ_OCCUPANCY_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Occupancy: Accumulates the occupancies of the Write Pending Queue each cycle. This can then be used to calculate both the average queue occupancy (in conjunction with the number of cycles not empty) and the average latency (in conjunction with the number of allocations). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after being issued to memory. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies. So, we provide filtering based on if the request has posted or not. By using the not posted filter, we can track how long writes spent in the iMC before completions were sent to the HA. The posted filter, on the other hand, provides information about how much queueing is actually happening in the iMC for writes before they are actually issued to memory. High average occupancies will generally coincide with high write major mode counts.",
"Unit": "MCHBM"
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_MCHBM_WPQ_READ_HIT",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -2012,8 +2486,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_MCHBM_WPQ_READ_HIT.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x1",
@@ -2021,8 +2497,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_MCHBM_WPQ_READ_HIT.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x2",
@@ -2030,8 +2508,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_MCHBM_WPQ_WRITE_HIT",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -2040,8 +2520,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_MCHBM_WPQ_WRITE_HIT.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x1",
@@ -2049,8 +2531,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_MCHBM_WPQ_WRITE_HIT.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match: Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x2",
@@ -2058,6 +2542,7 @@
},
{
"BriefDescription": "Activate due to read, write, underfill, or bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_ACT_COUNT.ALL",
"PerPkg": "1",
@@ -2067,6 +2552,7 @@
},
{
"BriefDescription": "All DRAM CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
@@ -2076,8 +2562,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 0 : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x40",
@@ -2085,8 +2573,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : Pseudo Channel 1 : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x80",
@@ -2094,6 +2584,7 @@
},
{
"BriefDescription": "All DRAM read CAS commands issued (including underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.RD",
"PerPkg": "1",
@@ -2103,8 +2594,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.RD_PRE_REG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0xc2",
@@ -2112,8 +2605,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.RD_PRE_UNDERFILL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0xc8",
@@ -2121,8 +2616,10 @@
},
{
"BriefDescription": "All DRAM read CAS commands issued (does not include underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.RD_REG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS commands w/out auto-pre : DRAM RD_CAS and WR_CAS Commands : Counts the total number or DRAM Read CAS commands issued on this channel. This includes both regular RD CAS commands as well as those with implicit Precharge. We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).",
"UMask": "0xc1",
@@ -2130,8 +2627,10 @@
},
{
"BriefDescription": "DRAM underfill read CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : Underfill Read Issued : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0xc4",
@@ -2139,6 +2638,7 @@
},
{
"BriefDescription": "All DRAM write CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.WR",
"PerPkg": "1",
@@ -2148,8 +2648,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/o auto-pre",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.WR_NONPRE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/o auto-pre : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0xd0",
@@ -2157,8 +2659,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M_CAS_COUNT.WR_PRE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0xe0",
@@ -2166,70 +2670,87 @@
},
{
"BriefDescription": "Pseudo Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Pseudo Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Read CAS Command in Interleaved Mode (32B)",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_32B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc8",
"Unit": "iMC"
},
{
"BriefDescription": "Read CAS Command in Regular Mode (64B) in Pseudochannel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_64B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc1",
"Unit": "iMC"
},
{
"BriefDescription": "Underfill Read CAS Command in Interleaved Mode (32B)",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_UFILL_32B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xd0",
"Unit": "iMC"
},
{
"BriefDescription": "Underfill Read CAS Command in Regular Mode (64B) in Pseudochannel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.RD_UFILL_64B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc2",
"Unit": "iMC"
},
{
"BriefDescription": "Write CAS Command in Interleaved Mode (32B)",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.WR_32B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xe0",
"Unit": "iMC"
},
{
"BriefDescription": "Write CAS Command in Regular Mode (64B) in Pseudochannel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M_CAS_ISSUED_REQ_LEN.WR_64B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc4",
"Unit": "iMC"
},
{
"BriefDescription": "IMC Clockticks at DCLK frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
@@ -2239,15 +2760,110 @@
},
{
"BriefDescription": "DRAM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M_DRAM_PRE_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge All Commands : Counts the number of times that the precharge all command was sent.",
"UMask": "0x3",
"Unit": "iMC"
},
{
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.HIGH",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Number of DRAM Refreshes Issued : Counts the number of refreshes issued.",
+ "UMask": "0x24",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.HIGH_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x24",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.HIGH_PCH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.HIGH_PCH1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.PANIC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Number of DRAM Refreshes Issued : Counts the number of refreshes issued.",
+ "UMask": "0x12",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.PANIC_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x12",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.PANIC_PCH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x45",
+ "EventName": "UNC_M_DRAM_REFRESH.PANIC_PCH1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "ECC Correctable Errors",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x09",
+ "EventName": "UNC_M_ECC_CORRECTABLE_ERRORS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "ECC Correctable Errors : Counts the number of ECC errors detected and corrected by the iMC on this channel. This counter is only useful with ECC DRAM devices. This count will increment one time for each correction regardless of the number of bits corrected. The iMC can correct up to 4 bit errors in independent channel mode and 8 bit errors in lockstep mode.",
+ "Unit": "iMC"
+ },
+ {
"BriefDescription": "IMC Clockticks at HCLK frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M_HCLOCKTICKS",
"PerPkg": "1",
@@ -2256,30 +2872,37 @@
},
{
"BriefDescription": "UNC_M_PCLS.RD",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M_PCLS.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.TOTAL",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M_PCLS.TOTAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xf",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.WR",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M_PCLS.WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Pending Queue inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M_PMM_RPQ_INSERTS",
"PerPkg": "1",
@@ -2288,6 +2911,7 @@
},
{
"BriefDescription": "PMM Read Pending Queue occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.ALL_SCH0",
"PerPkg": "1",
@@ -2297,6 +2921,7 @@
},
{
"BriefDescription": "PMM Read Pending Queue occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.ALL_SCH1",
"PerPkg": "1",
@@ -2306,8 +2931,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.GNT_WAIT_SCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x10",
@@ -2315,8 +2942,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.GNT_WAIT_SCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x20",
@@ -2324,8 +2953,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.NO_GNT_SCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x4",
@@ -2333,8 +2964,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.NO_GNT_SCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x8",
@@ -2342,13 +2975,16 @@
},
{
"BriefDescription": "PMM (for IXP) Write Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xe5",
"EventName": "UNC_M_PMM_WPQ_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Write Pending Queue inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xe7",
"EventName": "UNC_M_PMM_WPQ_INSERTS",
"PerPkg": "1",
@@ -2357,6 +2993,7 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xe4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -2366,6 +3003,7 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL_SCH0",
"PerPkg": "1",
@@ -2375,6 +3013,7 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL_SCH1",
"PerPkg": "1",
@@ -2384,8 +3023,10 @@
},
{
"BriefDescription": "PMM (for IXP) Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xe4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.CAS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM (for IXP) Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the Write Pending Queue to the IXP DIMM.",
"UMask": "0xc",
@@ -2393,8 +3034,10 @@
},
{
"BriefDescription": "PMM (for IXP) Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xe4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.PWR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM (for IXP) Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the Write Pending Queue to the IXP DIMM.",
"UMask": "0x30",
@@ -2402,16 +3045,20 @@
},
{
"BriefDescription": "Channel PPD Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M_POWER_CHANNEL_PPD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Channel PPD Cycles : Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this can count the number of cycles when that could have been taken advantage of.",
"Unit": "iMC"
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x1",
@@ -2419,8 +3066,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x2",
@@ -2428,8 +3077,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x4",
@@ -2437,8 +3088,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x8",
@@ -2446,8 +3099,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. : Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.",
"UMask": "0x1",
@@ -2455,8 +3110,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x2",
@@ -2464,14 +3121,39 @@
},
{
"BriefDescription": "Clock-Enabled Self-Refresh",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M_POWER_SELF_REFRESH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Clock-Enabled Self-Refresh : Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.",
"Unit": "iMC"
},
{
+ "BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. : Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.",
+ "UMask": "0x1",
+ "Unit": "iMC"
+ },
+ {
+ "BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
+ "UMask": "0x2",
+ "Unit": "iMC"
+ },
+ {
"BriefDescription": "Precharge due to read, write, underfill, or PGT.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.ALL",
"PerPkg": "1",
@@ -2481,6 +3163,7 @@
},
{
"BriefDescription": "DRAM Precharge commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.PGT",
"PerPkg": "1",
@@ -2490,8 +3173,10 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharges from Page Table",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.PGT_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Precharges from Page Table : Counts the number of DRAM Precharge commands sent on this channel. : Equivalent to PAGE_EMPTY",
"UMask": "0x8",
@@ -2499,8 +3184,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.PGT_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x80",
@@ -2508,6 +3195,7 @@
},
{
"BriefDescription": "Precharge due to read on page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.RD",
"PerPkg": "1",
@@ -2517,8 +3205,10 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to read",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.RD_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Precharge due to read : Counts the number of DRAM Precharge commands sent on this channel. : Precharge from read bank scheduler",
"UMask": "0x1",
@@ -2526,8 +3216,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.RD_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x10",
@@ -2535,8 +3227,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.UFILL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x44",
@@ -2544,8 +3238,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.UFILL_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x4",
@@ -2553,8 +3249,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.UFILL_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x40",
@@ -2562,6 +3260,7 @@
},
{
"BriefDescription": "Precharge due to write on page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.WR",
"PerPkg": "1",
@@ -2571,8 +3270,10 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to write",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.WR_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Precharge due to write : Counts the number of DRAM Precharge commands sent on this channel. : Precharge from write bank scheduler",
"UMask": "0x2",
@@ -2580,8 +3281,10 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M_PRE_COUNT.WR_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
"UMask": "0x20",
@@ -2589,21 +3292,26 @@
},
{
"BriefDescription": "Counts the number of cycles where the read buffer has greater than UMASK elements. This includes reads to both DDR and PMEM. NOTE: Umask must be set to the maximum number of elements in the queue (24 entries for SPR).",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M_RDB_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Counts the number of inserts into the read buffer destined for DDR. Does not count reads destined for PMEM.",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M_RDB_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M_RDB_INSERTS.PCH0",
"PerPkg": "1",
@@ -2612,6 +3320,7 @@
},
{
"BriefDescription": "Read Data Buffer Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M_RDB_INSERTS.PCH1",
"PerPkg": "1",
@@ -2620,45 +3329,56 @@
},
{
"BriefDescription": "Counts the number of cycles where there's at least one element in the read buffer. This includes reads to both DDR and PMEM.",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M_RDB_NE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M_RDB_NE.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M_RDB_NE.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Counts the number of cycles where there's at least one element in the read buffer. This includes reads to both DDR and PMEM.",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M_RDB_NOT_EMPTY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "iMC"
},
{
"BriefDescription": "Counts the number of elements in the read buffer, including reads to both DDR and PMEM.",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M_RDB_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH0",
"PerPkg": "1",
@@ -2668,6 +3388,7 @@
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH1",
"PerPkg": "1",
@@ -2677,6 +3398,7 @@
},
{
"BriefDescription": "Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH0",
"PerPkg": "1",
@@ -2685,6 +3407,7 @@
},
{
"BriefDescription": "Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH1",
"PerPkg": "1",
@@ -2693,294 +3416,368 @@
},
{
"BriefDescription": "Scoreboard accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Write Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.FM_RD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Write Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.FM_WR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : FM read completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.NM_RD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : FM write completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.NM_WR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Read Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.RD_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Read Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.RD_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : NM read completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.WR_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : NM write completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.WR_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Alloc",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.ALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Dealloc",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.DEALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FM_RD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FM_TGR_WR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FM_WR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": ": Valid",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.NM_RD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.NM_WR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.VLD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xd1",
"EventName": "UNC_M_SB_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Cycles Not-Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M_SB_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Block region reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Block region writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Persistent Mem reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M_SB_INSERTS.PMM_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Persistent Mem writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M_SB_INSERTS.PMM_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M_SB_INSERTS.RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M_SB_INSERTS.WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Block region reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Block region writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Persistent Mem reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Persistent Mem writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M_SB_OCCUPANCY.RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Inserts : All",
+ "Counter": "0,1,2,3",
"EventCode": "0xda",
"EventName": "UNC_M_SB_PREF_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Inserts : DDR4",
+ "Counter": "0,1,2,3",
"EventCode": "0xda",
"EventName": "UNC_M_SB_PREF_INSERTS.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Inserts : PMM",
+ "Counter": "0,1,2,3",
"EventCode": "0xda",
"EventName": "UNC_M_SB_PREF_INSERTS.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Occupancy : All",
+ "Counter": "0,1,2,3",
"EventCode": "0xdb",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Occupancy : DDR4",
+ "Counter": "0,1,2,3",
"EventCode": "0xdb",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Occupancy : Persistent Mem",
+ "Counter": "0,1,2,3",
"EventCode": "0xDB",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.PMM",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -2989,230 +3786,287 @@
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M_SB_REJECT.CANARY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M_SB_REJECT.DDR_EARLY_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected : FM requests rejected due to full address conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M_SB_REJECT.FM_ADDR_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected : NM requests rejected due to set conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M_SB_REJECT.NM_SET_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected : Patrol requests rejected due to set conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M_SB_REJECT.PATROL_SET_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.NM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.NM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.NM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.NM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read",
+ "Counter": "0,1,2,3",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write",
+ "Counter": "0,1,2,3",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read",
+ "Counter": "0,1,2,3",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.NM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write",
+ "Counter": "0,1,2,3",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.NM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.DDR4_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.DDR4_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.NEW",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.NEW",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.OCC",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.OCC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM0_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.PMM0_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM1_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.PMM1_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM2_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.PMM2_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.RD_HIT",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.RD_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.RD_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xdd",
"EventName": "UNC_M_SB_TAGGED.RD_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "2LM Tag check hit in near memory cache (DDR4)",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M_TAGCHK.HIT",
"PerPkg": "1",
@@ -3221,6 +4075,7 @@
},
{
"BriefDescription": "2LM Tag check miss, no data at this line",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M_TAGCHK.MISS_CLEAN",
"PerPkg": "1",
@@ -3229,6 +4084,7 @@
},
{
"BriefDescription": "2LM Tag check miss, existing data may be evicted to PMM",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M_TAGCHK.MISS_DIRTY",
"PerPkg": "1",
@@ -3237,6 +4093,7 @@
},
{
"BriefDescription": "2LM Tag check hit due to memory read",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M_TAGCHK.NM_RD_HIT",
"PerPkg": "1",
@@ -3245,6 +4102,7 @@
},
{
"BriefDescription": "2LM Tag check hit due to memory write",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M_TAGCHK.NM_WR_HIT",
"PerPkg": "1",
@@ -3253,6 +4111,7 @@
},
{
"BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH0",
"PerPkg": "1",
@@ -3262,6 +4121,7 @@
},
{
"BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH1",
"PerPkg": "1",
@@ -3271,6 +4131,7 @@
},
{
"BriefDescription": "Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH0",
"PerPkg": "1",
@@ -3279,6 +4140,7 @@
},
{
"BriefDescription": "Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH1",
"PerPkg": "1",
@@ -3287,8 +4149,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
@@ -3297,8 +4161,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT",
+ "Experimental": "1",
"FCMask": "0x00000000",
"PerPkg": "1",
"PortMask": "0x00000000",
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json
index 8948e85074f0..71c35b165a3e 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/uncore-power.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "PCU PCLK Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_P_CLOCKTICKS",
"PerPkg": "1",
@@ -9,137 +10,172 @@
},
{
"BriefDescription": "UNC_P_CORE_TRANSITION_CYCLES",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_P_CORE_TRANSITION_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_DEMOTIONS",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_P_DEMOTIONS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 0 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "UNC_P_FIVR_PS_PS0_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 0 Cycles : Cycles spent in phase-shedding power state 0",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 1 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UNC_P_FIVR_PS_PS1_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 1 Cycles : Cycles spent in phase-shedding power state 1",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 2 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x77",
"EventName": "UNC_P_FIVR_PS_PS2_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 2 Cycles : Cycles spent in phase-shedding power state 2",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 3 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x78",
"EventName": "UNC_P_FIVR_PS_PS3_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 3 Cycles : Cycles spent in phase-shedding power state 3",
"Unit": "PCU"
},
{
"BriefDescription": "AVX256 Frequency Clipping",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_P_FREQ_CLIP_AVX256",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "AVX512 Frequency Clipping",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_P_FREQ_CLIP_AVX512",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Thermal Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Thermal Strongest Upper Limit Cycles : Number of cycles any frequency is reduced due to a thermal limit. Count only if throttling is occurring.",
"Unit": "PCU"
},
{
"BriefDescription": "Power Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Power Strongest Upper Limit Cycles : Counts the number of cycles when power is the upper limit on frequency.",
"Unit": "PCU"
},
{
"BriefDescription": "IO P Limit Strongest Lower Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IO P Limit Strongest Lower Limit Cycles : Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs to the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW. This is necessary for when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidth.",
"Unit": "PCU"
},
{
"BriefDescription": "Cycles spent changing Frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_P_FREQ_TRANS_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles spent changing Frequency : Counts the number of cycles when the system is changing frequency. This can not be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.",
"Unit": "PCU"
},
{
"BriefDescription": "Memory Phase Shedding Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x2f",
"EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Memory Phase Shedding Cycles : Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2a",
"EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Package C State Residency - C0 : Counts the number of cycles when the package was in C0. This event can be used in conjunction with edge detect to count C0 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C2E",
+ "Counter": "0,1,2,3",
"EventCode": "0x2b",
"EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Package C State Residency - C2E : Counts the number of cycles when the package was in C2E. This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C6",
+ "Counter": "0,1,2,3",
"EventCode": "0x2d",
"EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Package C State Residency - C6 : Counts the number of cycles when the package was in C6. This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_PMAX_THROTTLED_CYCLES",
+ "Counter": "0",
"EventCode": "0x06",
"EventName": "UNC_P_PMAX_THROTTLED_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C0",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C0",
"PerPkg": "1",
@@ -148,14 +184,17 @@
},
{
"BriefDescription": "Number of cores in C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cores in C3 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C6",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C6",
"PerPkg": "1",
@@ -164,32 +203,40 @@
},
{
"BriefDescription": "External Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x0a",
"EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "External Prochot : Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU"
},
{
"BriefDescription": "Internal Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x09",
"EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Internal Prochot : Counts the number of cycles that we are in Internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU"
},
{
"BriefDescription": "Total Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_P_TOTAL_TRANSITION_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Total Core C State Transition Cycles : Number of cycles spent performing core C state transitions across all cores.",
"Unit": "PCU"
},
{
"BriefDescription": "VR Hot",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_P_VR_HOT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VR Hot : Number of cycles that a CPU SVID VR is hot. Does not cover DRAM VRs",
"Unit": "PCU"
diff --git a/tools/perf/pmu-events/arch/x86/emeraldrapids/virtual-memory.json b/tools/perf/pmu-events/arch/x86/emeraldrapids/virtual-memory.json
index a1e3b8d2ebe7..609a9549cbf3 100644
--- a/tools/perf/pmu-events/arch/x86/emeraldrapids/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/emeraldrapids/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -83,6 +93,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -99,6 +111,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -107,6 +120,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.",
@@ -115,6 +129,7 @@
},
{
"BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).",
@@ -123,6 +138,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_ACTIVE",
@@ -132,6 +148,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.",
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/cache.json b/tools/perf/pmu-events/arch/x86/goldmont/cache.json
index ee47a09172a1..1a121ef47a99 100644
--- a/tools/perf/pmu-events/arch/x86/goldmont/cache.json
+++ b/tools/perf/pmu-events/arch/x86/goldmont/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Requests rejected by the L2Q",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "CORE_REJECT_L2Q.ALL",
"PublicDescription": "Counts the number of demand and L1 prefetcher requests rejected by the L2Q due to a full or nearly full condition which likely indicates back pressure from L2Q. It also counts requests that would have gone directly to the XQ, but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to ensure fairness between cores, or to delay a core's dirty eviction when the address conflicts with incoming external snoops.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "L1 Cache evictions for dirty data",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "DL1.DIRTY_EVICTION",
"PublicDescription": "Counts when a modified (dirty) cache line is evicted from the data L1 cache and needs to be written back to memory. No count will occur if the evicted line is clean, and hence does not require a writeback.",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Cycles code-fetch stalled due to an outstanding ICache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "FETCH_STALL.ICACHE_FILL_PENDING_CYCLES",
"PublicDescription": "Counts cycles that fetch is stalled due to an outstanding ICache miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ICache miss. Note: this event is not the same as the total number of cycles spent retrieving instruction cache lines from the memory hierarchy.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Requests rejected by the XQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "L2_REJECT_XQ.ALL",
"PublicDescription": "Counts the number of demand and prefetch transactions that the L2 XQ rejects due to a full or near full condition which likely indicates back pressure from the intra-die interconnect (IDI) fabric. The XQ may reject transactions from the L2Q (non-cacheable requests), L2 misses and L2 write-back victims.",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "L2 cache request misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts memory requests originating from the core that miss in the L2 cache.",
@@ -39,6 +44,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts memory requests originating from the core that reference a cache line in the L2 cache.",
@@ -47,6 +53,7 @@
},
{
"BriefDescription": "Loads retired that came from DRAM (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Memory uop retired where cross core or cross module HITM occurred (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HITM",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Load uops retired that hit L1 data cache (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
@@ -77,6 +86,7 @@
},
{
"BriefDescription": "Load uops retired that missed L1 data cache (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
@@ -87,6 +97,7 @@
},
{
"BriefDescription": "Load uops retired that hit L2 (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
@@ -97,6 +108,7 @@
},
{
"BriefDescription": "Load uops retired that missed L2 (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
@@ -107,6 +119,7 @@
},
{
"BriefDescription": "Loads retired that hit WCB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.WCB_HIT",
@@ -117,6 +130,7 @@
},
{
"BriefDescription": "Memory uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL",
@@ -127,6 +141,7 @@
},
{
"BriefDescription": "Load uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
@@ -137,6 +152,7 @@
},
{
"BriefDescription": "Store uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
@@ -147,6 +163,7 @@
},
{
"BriefDescription": "Locked load uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
@@ -157,6 +174,7 @@
},
{
"BriefDescription": "Memory uops retired that split a cache-line (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT",
@@ -167,6 +185,7 @@
},
{
"BriefDescription": "Load uops retired that split a cache-line (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
@@ -177,6 +196,7 @@
},
{
"BriefDescription": "Stores uops retired that split a cache-line (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
@@ -187,6 +207,7 @@
},
{
"BriefDescription": "Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE",
"SampleAfterValue": "100007",
@@ -194,6 +215,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -204,6 +226,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -214,6 +237,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -224,6 +248,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -234,6 +259,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -244,6 +270,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -254,6 +281,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -264,6 +292,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -274,6 +303,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -284,6 +314,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -294,6 +325,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -304,6 +336,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -314,6 +347,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -324,6 +358,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -334,6 +369,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -344,6 +380,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem that have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -354,6 +391,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -364,6 +402,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -374,6 +413,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -384,6 +424,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -394,6 +435,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -404,6 +446,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -414,6 +457,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -424,6 +468,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -434,6 +479,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -444,6 +490,7 @@
},
{
"BriefDescription": "Counts bus lock and split lock requests that have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.BUS_LOCKS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -454,6 +501,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_HIT",
"MSRIndex": "0x1a6",
@@ -464,6 +512,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.ANY",
"MSRIndex": "0x1a6",
@@ -474,6 +523,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6",
@@ -484,6 +534,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6",
@@ -494,6 +545,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6",
@@ -504,6 +556,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -514,6 +567,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -524,6 +578,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -534,6 +589,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -544,6 +600,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache that are outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -554,6 +611,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -564,6 +622,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -574,6 +633,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -584,6 +644,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -594,6 +655,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -604,6 +666,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines that are outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -614,6 +677,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -624,6 +688,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -634,6 +699,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -644,6 +710,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -654,6 +721,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -664,6 +732,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line that are outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -674,6 +743,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -684,6 +754,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -694,6 +765,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -704,6 +776,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -714,6 +787,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -724,6 +798,7 @@
},
{
"BriefDescription": "Counts demand data partial reads, including data in uncacheable (UC) or uncacheable write combining (USWC) memory types that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PARTIAL_READS.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -734,6 +809,7 @@
},
{
"BriefDescription": "Counts partial cache line data writes to uncacheable write combining (USWC) memory region that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PARTIAL_STREAMING_STORES.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -744,6 +820,7 @@
},
{
"BriefDescription": "Counts partial cache line data writes to uncacheable write combining (USWC) memory region that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PARTIAL_STREAMING_STORES.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -754,6 +831,7 @@
},
{
"BriefDescription": "Counts partial cache line data writes to uncacheable write combining (USWC) memory region that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PARTIAL_STREAMING_STORES.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -764,6 +842,7 @@
},
{
"BriefDescription": "Counts partial cache line data writes to uncacheable write combining (USWC) memory region that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PARTIAL_STREAMING_STORES.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -774,6 +853,7 @@
},
{
"BriefDescription": "Counts partial cache line data writes to uncacheable write combining (USWC) memory region that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PARTIAL_STREAMING_STORES.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -784,6 +864,7 @@
},
{
"BriefDescription": "Counts the number of demand write requests (RFO) generated by a write to partial data cache line, including the writes to uncacheable (UC) and write through (WT), and write protected (WP) types of memory that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PARTIAL_WRITES.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -794,6 +875,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -804,6 +886,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -814,6 +897,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -824,6 +908,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -834,6 +919,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -844,6 +930,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -854,6 +941,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -864,6 +952,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -874,6 +963,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -884,6 +974,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -894,6 +985,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -904,6 +996,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -914,6 +1007,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -924,6 +1018,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -934,6 +1029,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -944,6 +1040,7 @@
},
{
"BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -954,6 +1051,7 @@
},
{
"BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -964,6 +1062,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions that hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -974,6 +1073,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions that miss the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -984,6 +1084,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions that miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -994,6 +1095,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions that miss the L2 cache with a snoop hit in the other processor module, no data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -1004,6 +1106,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions that true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/counter.json b/tools/perf/pmu-events/arch/x86/goldmont/counter.json
new file mode 100644
index 000000000000..aa443347b694
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/goldmont/counter.json
@@ -0,0 +1,7 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ }
+]
\ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/floating-point.json b/tools/perf/pmu-events/arch/x86/goldmont/floating-point.json
index a3f03855ca05..0141e214ff39 100644
--- a/tools/perf/pmu-events/arch/x86/goldmont/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/goldmont/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles the FP divide unit is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0xCD",
"EventName": "CYCLES_DIV_BUSY.FPDIV",
"PublicDescription": "Counts core cycles the floating point divide unit is busy.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Machine clears due to FP assists",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.FP_ASSIST",
"PublicDescription": "Counts machine clears due to floating point (FP) operations needing assists. For instance, if the result was a floating point denormal, the hardware clears the pipeline and reissues uops to produce the correct IEEE compliant denormal result.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Floating point divide uops retired. (Precise Event Capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.FPDIV",
"PEBS": "2",
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/frontend.json b/tools/perf/pmu-events/arch/x86/goldmont/frontend.json
index ace2a114b546..249a97cf3f4c 100644
--- a/tools/perf/pmu-events/arch/x86/goldmont/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/goldmont/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "BACLEARs asserted for any branch type",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.ALL",
"PublicDescription": "Counts the number of times a BACLEAR is signaled for any reason, including, but not limited to indirect branch/call, Jcc (Jump on Conditional Code/Jump if Condition is Met) branch, unconditional branch/call, and returns.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "BACLEARs asserted for conditional branch",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.COND",
"PublicDescription": "Counts BACLEARS on Jcc (Jump on Conditional Code/Jump if Condition is Met) branches.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "BACLEARs asserted for return branch",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.RETURN",
"PublicDescription": "Counts BACLEARS on return instructions.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Decode restrictions due to predicting wrong instruction length",
+ "Counter": "0,1,2,3",
"EventCode": "0xE9",
"EventName": "DECODE_RESTRICTION.PREDECODE_WRONG",
"PublicDescription": "Counts the number of times the prediction (from the predecode cache) for instruction length is incorrect.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "References per ICache line. This event counts differently than Intel processors based on Silvermont microarchitecture",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line. The event strives to count on a cache line basis, so that multiple fetches to a single cache line count as one ICACHE.ACCESS. Specifically, the event counts when accesses from straight line code crosses the cache line boundary, or when a branch target is to a new line.\r\nThis event counts differently than Intel processors based on Silvermont microarchitecture.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "References per ICache line that are available in the ICache (hit). This event counts differently than Intel processors based on Silvermont microarchitecture",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is in the ICache (hit). The event strives to count on a cache line basis, so that multiple accesses which hit in a single cache line count as one ICACHE.HIT. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is in the ICache. This event counts differently than Intel processors based on Silvermont microarchitecture.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "References per ICache line that are not available in the ICache (miss). This event counts differently than Intel processors based on Silvermont microarchitecture",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is not in the ICache (miss). The event strives to count on a cache line basis, so that multiple accesses which miss in a single cache line count as one ICACHE.MISS. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is not in the ICache. This event counts differently than Intel processors based on Silvermont microarchitecture.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "MS decode starts",
+ "Counter": "0,1,2,3",
"EventCode": "0xE7",
"EventName": "MS_DECODED.MS_ENTRY",
"PublicDescription": "Counts the number of times the Microcode Sequencer (MS) starts a flow of uops from the MSROM. It does not count every time a uop is read from the MSROM. The most common case that this counts is when a micro-coded instruction is encountered by the front end of the machine. Other cases include when an instruction encounters a fault, trap, or microcode assist of any sort that initiates a flow of uops. The event will count MS startups for uops that are speculative, and subsequently cleared by branch mispredict or a machine clear.",
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/memory.json b/tools/perf/pmu-events/arch/x86/goldmont/memory.json
index b97642a109ee..abd0fc02afac 100644
--- a/tools/perf/pmu-events/arch/x86/goldmont/memory.json
+++ b/tools/perf/pmu-events/arch/x86/goldmont/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Machine clears due to memory ordering issue",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "Counts machine clears due to memory ordering issues. This occurs when a snoop request happens and the machine is uncertain if memory ordering will be preserved as another core is in the process of modifying the data.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Load uops that split a page (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "MISALIGN_MEM_REF.LOAD_PAGE_SPLIT",
"PEBS": "2",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Store uops that split a page (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "MISALIGN_MEM_REF.STORE_PAGE_SPLIT",
"PEBS": "2",
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/other.json b/tools/perf/pmu-events/arch/x86/goldmont/other.json
index c4fd0acb15bc..260b10740644 100644
--- a/tools/perf/pmu-events/arch/x86/goldmont/other.json
+++ b/tools/perf/pmu-events/arch/x86/goldmont/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles code-fetch stalled due to any reason.",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "FETCH_STALL.ALL",
"PublicDescription": "Counts cycles that fetch is stalled due to any reason. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes. This will include cycles due to an ITLB miss, ICache miss and other events.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Cycles code-fetch stalled due to an outstanding ITLB miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "FETCH_STALL.ITLB_FILL_PENDING_CYCLES",
"PublicDescription": "Counts cycles that fetch is stalled due to an outstanding ITLB miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ITLB miss. Note: this event is not the same as page walk cycles to retrieve an instruction translation.",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Cycles hardware interrupts are masked",
+ "Counter": "0,1,2,3",
"EventCode": "0xCB",
"EventName": "HW_INTERRUPTS.MASKED",
"PublicDescription": "Counts the number of core cycles during which interrupts are masked (disabled). Increments by 1 each core cycle that EFLAGS.IF is 0, regardless of whether interrupts are pending or not.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Cycles pending interrupts are masked",
+ "Counter": "0,1,2,3",
"EventCode": "0xCB",
"EventName": "HW_INTERRUPTS.PENDING_AND_MASKED",
"PublicDescription": "Counts core cycles during which there are pending interrupts, but interrupts are masked (EFLAGS.IF = 0).",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Hardware interrupts received",
+ "Counter": "0,1,2,3",
"EventCode": "0xCB",
"EventName": "HW_INTERRUPTS.RECEIVED",
"PublicDescription": "Counts hardware interrupts received by the processor.",
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/pipeline.json b/tools/perf/pmu-events/arch/x86/goldmont/pipeline.json
index acb897483a87..013d2d5b50df 100644
--- a/tools/perf/pmu-events/arch/x86/goldmont/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/goldmont/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Retired branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PEBS": "2",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Retired taken branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_TAKEN_BRANCHES",
"PEBS": "2",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Retired near call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CALL",
"PEBS": "2",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Retired far branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
"PEBS": "2",
@@ -36,6 +40,7 @@
},
{
"BriefDescription": "Retired near indirect call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.IND_CALL",
"PEBS": "2",
@@ -45,6 +50,7 @@
},
{
"BriefDescription": "Retired conditional branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.JCC",
"PEBS": "2",
@@ -54,6 +60,7 @@
},
{
"BriefDescription": "Retired instructions of near indirect Jmp or call (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NON_RETURN_IND",
"PEBS": "2",
@@ -63,6 +70,7 @@
},
{
"BriefDescription": "Retired near relative call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.REL_CALL",
"PEBS": "2",
@@ -72,6 +80,7 @@
},
{
"BriefDescription": "Retired near return instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.RETURN",
"PEBS": "2",
@@ -81,6 +90,7 @@
},
{
"BriefDescription": "Retired conditional branch instructions that were taken (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.TAKEN_JCC",
"PEBS": "2",
@@ -90,6 +100,7 @@
},
{
"BriefDescription": "Retired mispredicted branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PEBS": "2",
@@ -98,6 +109,7 @@
},
{
"BriefDescription": "Retired mispredicted near indirect call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.IND_CALL",
"PEBS": "2",
@@ -107,6 +119,7 @@
},
{
"BriefDescription": "Retired mispredicted conditional branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.JCC",
"PEBS": "2",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "Retired mispredicted instructions of near indirect Jmp or near indirect call. (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NON_RETURN_IND",
"PEBS": "2",
@@ -125,6 +139,7 @@
},
{
"BriefDescription": "Retired mispredicted near return instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.RETURN",
"PEBS": "2",
@@ -134,6 +149,7 @@
},
{
"BriefDescription": "Retired mispredicted conditional branch instructions that were taken (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.TAKEN_JCC",
"PEBS": "2",
@@ -143,6 +159,7 @@
},
{
"BriefDescription": "Core cycles when core is not halted (Fixed event)",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.CORE",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1. You cannot collect a PEBs record for this event.",
"SampleAfterValue": "2000003",
@@ -150,6 +167,7 @@
},
{
"BriefDescription": "Core cycles when core is not halted",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.CORE_P",
"PublicDescription": "Core cycles when core is not halted. This event uses a (_P)rogrammable general purpose performance counter.",
@@ -157,6 +175,7 @@
},
{
"BriefDescription": "Reference cycles when core is not halted",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF",
"PublicDescription": "Reference cycles when core is not halted. This event uses a programmable general purpose performance counter.",
@@ -165,6 +184,7 @@
},
{
"BriefDescription": "Reference cycles when core is not halted (Fixed event)",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time. This event is not affected by core frequency changes but counts as if the core is running at the maximum frequency all the time. This event uses fixed counter 2. You cannot collect a PEBs record for this event.",
"SampleAfterValue": "2000003",
@@ -172,6 +192,7 @@
},
{
"BriefDescription": "Cycles a divider is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0xCD",
"EventName": "CYCLES_DIV_BUSY.ALL",
"PublicDescription": "Counts core cycles if either divide unit is busy.",
@@ -179,6 +200,7 @@
},
{
"BriefDescription": "Cycles the integer divide unit is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0xCD",
"EventName": "CYCLES_DIV_BUSY.IDIV",
"PublicDescription": "Counts core cycles the integer divide unit is busy.",
@@ -187,6 +209,7 @@
},
{
"BriefDescription": "Instructions retired (Fixed event)",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
"PublicDescription": "Counts the number of instructions that retire execution. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0. You cannot collect a PEBs record for this event.",
"SampleAfterValue": "2000003",
@@ -194,6 +217,7 @@
},
{
"BriefDescription": "Instructions retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
"PEBS": "2",
@@ -202,6 +226,7 @@
},
{
"BriefDescription": "Unfilled issue slots per cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "ISSUE_SLOTS_NOT_CONSUMED.ANY",
"PublicDescription": "Counts the number of issue slots per core cycle that were not consumed by the backend due to either a full resource in the backend (RESOURCE_FULL) or due to the processor recovering from some event (RECOVERY).",
@@ -209,6 +234,7 @@
},
{
"BriefDescription": "Unfilled issue slots per cycle to recover",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "ISSUE_SLOTS_NOT_CONSUMED.RECOVERY",
"PublicDescription": "Counts the number of issue slots per core cycle that were not consumed by the backend because allocation is stalled waiting for a mispredicted jump to retire or other branch-like conditions (e.g. the event is relevant during certain microcode flows). Counts all issue slots blocked while within this window including slots where uops were not available in the Instruction Queue.",
@@ -217,6 +243,7 @@
},
{
"BriefDescription": "Unfilled issue slots per cycle because of a full resource in the backend",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "ISSUE_SLOTS_NOT_CONSUMED.RESOURCE_FULL",
"PublicDescription": "Counts the number of issue slots per core cycle that were not consumed because of a full resource in the backend. Including but not limited to resources such as the Re-order Buffer (ROB), reservation stations (RS), load/store buffers, physical registers, or any other needed machine resource that is currently unavailable. Note that uops must be available for consumption in order for this event to fire. If a uop is not available (Instruction Queue is empty), this event will not count.",
@@ -225,6 +252,7 @@
},
{
"BriefDescription": "Loads blocked because address has 4k partial address false dependence (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.4K_ALIAS",
"PEBS": "2",
@@ -234,6 +262,7 @@
},
{
"BriefDescription": "Loads blocked (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.ALL_BLOCK",
"PEBS": "2",
@@ -243,6 +272,7 @@
},
{
"BriefDescription": "Loads blocked due to store data not ready (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.DATA_UNKNOWN",
"PEBS": "2",
@@ -252,6 +282,7 @@
},
{
"BriefDescription": "Loads blocked due to store forward restriction (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PEBS": "2",
@@ -261,6 +292,7 @@
},
{
"BriefDescription": "Loads blocked because address in not in the UTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.UTLB_MISS",
"PEBS": "2",
@@ -270,6 +302,7 @@
},
{
"BriefDescription": "All machine clears",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.ALL",
"PublicDescription": "Counts machine clears for any reason.",
@@ -277,6 +310,7 @@
},
{
"BriefDescription": "Machine clears due to memory disambiguation",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.DISAMBIGUATION",
"PublicDescription": "Counts machine clears due to memory disambiguation. Memory disambiguation happens when a load which has been issued conflicts with a previous unretired store in the pipeline whose address was not known at issue time, but is later resolved to be the same as the load address.",
@@ -285,6 +319,7 @@
},
{
"BriefDescription": "Self-Modifying Code detected",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts the number of times that the processor detects that a program is writing to a code section and has to perform a machine clear because of that modification. Self-modifying code (SMC) causes a severe penalty in all Intel(R) architecture processors.",
@@ -293,6 +328,7 @@
},
{
"BriefDescription": "Uops issued to the back end per cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts uops issued by the front end and allocated into the back end of the machine. This event counts uops that retire as well as uops that were speculatively executed but didn't retire. The sort of speculative uops that might be counted includes, but is not limited to those uops issued in the shadow of a miss-predicted branch, those uops that are inserted during an assist (such as for a denormal floating point result), and (previously allocated) uops that might be canceled during a machine clear.",
@@ -300,6 +336,7 @@
},
{
"BriefDescription": "Uops requested but not-delivered to the back-end per cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UOPS_NOT_DELIVERED.ANY",
"PublicDescription": "This event used to measure front-end inefficiencies. I.e. when front-end of the machine is not delivering uops to the back-end and the back-end has is not stalled. This event can be used to identify if the machine is truly front-end bound. When this event occurs, it is an indication that the front-end of the machine is operating at less than its theoretical peak performance. Background: We can think of the processor pipeline as being divided into 2 broader parts: Front-end and Back-end. Front-end is responsible for fetching the instruction, decoding into uops in machine understandable format and putting them into a uop queue to be consumed by back end. The back-end then takes these uops, allocates the required resources. When all resources are ready, uops are executed. If the back-end is not ready to accept uops from the front-end, then we do not want to count these as front-end bottlenecks. However, whenever we have bottlenecks in the back-end, we will have allocation unit stalls and eventually forcing the front-end to wait until the back-end is ready to receive more uops. This event counts only when back-end is requesting more uops and front-end is not able to provide them. When 3 uops are requested and no uops are delivered, the event counts 3. When 3 are requested, and only 1 is delivered, the event counts 2. When only 2 are delivered, the event counts 1. Alternatively stated, the event will not count if 3 uops are delivered, or if the back end is stalled and not requesting any uops at all. Counts indicate missed opportunities for the front-end to deliver a uop to the back end. Some examples of conditions that cause front-end efficiencies are: ICache misses, ITLB misses, and decoder restrictions that limit the front-end bandwidth. Known Issues: Some uops require multiple allocation slots. These uops will not be charged as a front end 'not delivered' opportunity, and will be regarded as a back end problem. For example, the INC instruction has one uop that requires 2 issue slots. A stream of INC instructions will not count as UOPS_NOT_DELIVERED, even though only one instruction can be issued per clock. The low uop issue rate for a stream of INC instructions is considered to be a back end issue.",
@@ -307,6 +344,7 @@
},
{
"BriefDescription": "Uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ANY",
"PEBS": "2",
@@ -315,6 +353,7 @@
},
{
"BriefDescription": "Integer divide uops retired. (Precise Event Capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.IDIV",
"PEBS": "2",
@@ -324,6 +363,7 @@
},
{
"BriefDescription": "MS uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.MS",
"PEBS": "2",
diff --git a/tools/perf/pmu-events/arch/x86/goldmont/virtual-memory.json b/tools/perf/pmu-events/arch/x86/goldmont/virtual-memory.json
index 8c4929a517fa..13b23bbe4226 100644
--- a/tools/perf/pmu-events/arch/x86/goldmont/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/goldmont/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "ITLB misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "ITLB.MISS",
"PublicDescription": "Counts the number of times the machine was unable to find a translation in the Instruction Translation Lookaside Buffer (ITLB) for a linear address of an instruction fetch. It counts when new translation are filled into the ITLB. The event is speculative in nature, but will not count translations (page walks) that are begun and not finished, or translations that are finished but not filled into the ITLB.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Memory uops retired that missed the DTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Load uops retired that missed the DTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS_LOADS",
@@ -29,6 +32,7 @@
},
{
"BriefDescription": "Store uops retired that missed the DTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS_STORES",
@@ -39,6 +43,7 @@
},
{
"BriefDescription": "Duration of page-walks in cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "PAGE_WALKS.CYCLES",
"PublicDescription": "Counts every core cycle a page-walk is in progress due to either a data memory operation or an instruction fetch.",
@@ -47,6 +52,7 @@
},
{
"BriefDescription": "Duration of D-side page-walks in cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "PAGE_WALKS.D_SIDE_CYCLES",
"PublicDescription": "Counts every core cycle when a Data-side (walks due to a data operation) page walk is in progress.",
@@ -55,6 +61,7 @@
},
{
"BriefDescription": "Duration of I-side pagewalks in cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "PAGE_WALKS.I_SIDE_CYCLES",
"PublicDescription": "Counts every core cycle when a Instruction-side (walks due to an instruction fetch) page walk is in progress.",
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/cache.json b/tools/perf/pmu-events/arch/x86/goldmontplus/cache.json
index a7f80fd1b1df..92086758e7ce 100644
--- a/tools/perf/pmu-events/arch/x86/goldmontplus/cache.json
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Requests rejected by the L2Q",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "CORE_REJECT_L2Q.ALL",
"PublicDescription": "Counts the number of demand and L1 prefetcher requests rejected by the L2Q due to a full or nearly full condition which likely indicates back pressure from L2Q. It also counts requests that would have gone directly to the XQ, but are rejected due to a full or nearly full condition, indicating back pressure from the IDI link. The L2Q may also reject transactions from a core to insure fairness between cores, or to delay a core's dirty eviction when the address conflicts with incoming external snoops.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "L1 Cache evictions for dirty data",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "DL1.REPLACEMENT",
"PublicDescription": "Counts when a modified (dirty) cache line is evicted from the data L1 cache and needs to be written back to memory. No count will occur if the evicted line is clean, and hence does not require a writeback.",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Cycles code-fetch stalled due to an outstanding ICache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "FETCH_STALL.ICACHE_FILL_PENDING_CYCLES",
"PublicDescription": "Counts cycles that fetch is stalled due to an outstanding ICache miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ICache miss. Note: this event is not the same as the total number of cycles spent retrieving instruction cache lines from the memory hierarchy.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Requests rejected by the XQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "L2_REJECT_XQ.ALL",
"PublicDescription": "Counts the number of demand and prefetch transactions that the L2 XQ rejects due to a full or near full condition which likely indicates back pressure from the intra-die interconnect (IDI) fabric. The XQ may reject transactions from the L2Q (non-cacheable requests), L2 misses and L2 write-back victims.",
@@ -31,6 +35,7 @@
},
{
"BriefDescription": "L2 cache request misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts memory requests originating from the core that miss in the L2 cache.",
@@ -39,6 +44,7 @@
},
{
"BriefDescription": "L2 cache requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts memory requests originating from the core that reference a cache line in the L2 cache.",
@@ -47,6 +53,7 @@
},
{
"BriefDescription": "Loads retired that came from DRAM (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.DRAM_HIT",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Memory uop retired where cross core or cross module HITM occurred (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HITM",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Load uops retired that hit L1 data cache (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
@@ -77,6 +86,7 @@
},
{
"BriefDescription": "Load uops retired that missed L1 data cache (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
@@ -87,6 +97,7 @@
},
{
"BriefDescription": "Load uops retired that hit L2 (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
@@ -97,6 +108,7 @@
},
{
"BriefDescription": "Load uops retired that missed L2 (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
@@ -107,6 +119,7 @@
},
{
"BriefDescription": "Loads retired that hit WCB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.WCB_HIT",
@@ -117,6 +130,7 @@
},
{
"BriefDescription": "Memory uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL",
@@ -127,6 +141,7 @@
},
{
"BriefDescription": "Load uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
@@ -137,6 +152,7 @@
},
{
"BriefDescription": "Store uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
@@ -147,6 +163,7 @@
},
{
"BriefDescription": "Locked load uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
@@ -157,6 +174,7 @@
},
{
"BriefDescription": "Memory uops retired that split a cache-line (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT",
@@ -167,6 +185,7 @@
},
{
"BriefDescription": "Load uops retired that split a cache-line (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
@@ -177,6 +196,7 @@
},
{
"BriefDescription": "Stores uops retired that split a cache-line (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
@@ -187,6 +207,7 @@
},
{
"BriefDescription": "Requires MSR_OFFCORE_RESP[0,1] to specify request type and response. (duplicated for both MSRs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE",
"SampleAfterValue": "100007",
@@ -194,6 +215,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -204,6 +226,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -214,6 +237,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -224,6 +248,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -234,6 +259,7 @@
},
{
"BriefDescription": "Counts data reads (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_DATA_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -244,6 +270,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -254,6 +281,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -264,6 +292,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -274,6 +303,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -284,6 +314,7 @@
},
{
"BriefDescription": "Counts data reads generated by L1 or L2 prefetchers outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_PF_DATA_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -294,6 +325,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -304,6 +336,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -314,6 +347,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -324,6 +358,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -334,6 +369,7 @@
},
{
"BriefDescription": "Counts data read, code read, and read for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_READ.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -344,6 +380,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -354,6 +391,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -364,6 +402,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -374,6 +413,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -384,6 +424,7 @@
},
{
"BriefDescription": "Counts requests to the uncore subsystem outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_REQUEST.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -394,6 +435,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -404,6 +446,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -414,6 +457,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -424,6 +468,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -434,6 +479,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests (demand & prefetch) outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.ANY_RFO.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -444,6 +490,7 @@
},
{
"BriefDescription": "Counts bus lock and split lock requests have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.BUS_LOCKS.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -454,6 +501,7 @@
},
{
"BriefDescription": "Counts bus lock and split lock requests hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.BUS_LOCKS.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -464,6 +512,7 @@
},
{
"BriefDescription": "Counts bus lock and split lock requests miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.BUS_LOCKS.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -474,6 +523,7 @@
},
{
"BriefDescription": "Counts bus lock and split lock requests true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.BUS_LOCKS.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -484,6 +534,7 @@
},
{
"BriefDescription": "Counts bus lock and split lock requests outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.BUS_LOCKS.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -494,6 +545,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -504,6 +556,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -514,6 +567,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -524,6 +578,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -534,6 +589,7 @@
},
{
"BriefDescription": "Counts the number of writeback transactions caused by L1 or L2 cache evictions outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.COREWB.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -544,6 +600,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -554,6 +611,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -564,6 +622,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -574,6 +633,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -584,6 +644,7 @@
},
{
"BriefDescription": "Counts demand instruction cacheline and I-side prefetch requests that miss the instruction cache outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -594,6 +655,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -604,6 +666,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -614,6 +677,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -624,6 +688,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -634,6 +699,7 @@
},
{
"BriefDescription": "Counts demand cacheable data reads of full cache lines outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -644,6 +710,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -654,6 +721,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -664,6 +732,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -674,6 +743,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -684,6 +754,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests generated by a write to full data cache line outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -694,6 +765,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -704,6 +776,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -714,6 +787,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -724,6 +798,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -734,6 +809,7 @@
},
{
"BriefDescription": "Counts full cache line data writes to uncacheable write combining (USWC) memory region and full cache-line non-temporal writes outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.FULL_STREAMING_STORES.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -744,6 +820,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -754,6 +831,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -764,6 +842,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -774,6 +853,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -784,6 +864,7 @@
},
{
"BriefDescription": "Counts data cache line reads generated by hardware L1 data cache prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L1_DATA_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -794,6 +875,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -804,6 +886,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -814,6 +897,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -824,6 +908,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -834,6 +919,7 @@
},
{
"BriefDescription": "Counts data cacheline reads generated by hardware L2 cache prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -844,6 +930,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -854,6 +941,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -864,6 +952,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -874,6 +963,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -884,6 +974,7 @@
},
{
"BriefDescription": "Counts reads for ownership (RFO) requests generated by L2 prefetcher outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -894,6 +985,7 @@
},
{
"BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -904,6 +996,7 @@
},
{
"BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -914,6 +1007,7 @@
},
{
"BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -924,6 +1018,7 @@
},
{
"BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -934,6 +1029,7 @@
},
{
"BriefDescription": "Counts any data writes to uncacheable write combining (USWC) memory region outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.OUTSTANDING",
"MSRIndex": "0x1a6",
@@ -944,6 +1040,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions have any transaction responses from the uncore subsystem.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.ANY_RESPONSE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -954,6 +1051,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions hit the L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_HIT",
"MSRIndex": "0x1a6, 0x1a7",
@@ -964,6 +1062,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions miss the L2 cache with a snoop hit in the other processor module, data forwarding is required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.HITM_OTHER_CORE",
"MSRIndex": "0x1a6, 0x1a7",
@@ -974,6 +1073,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions true miss for the L2 cache with a snoop miss in the other processor module.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.L2_MISS.SNOOP_MISS_OR_NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6, 0x1a7",
@@ -984,6 +1084,7 @@
},
{
"BriefDescription": "Counts data cache lines requests by software prefetch instructions outstanding, per cycle, from the time of the L2 miss to when any response is received.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "OFFCORE_RESPONSE.SW_PREFETCH.OUTSTANDING",
"MSRIndex": "0x1a6",
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/counter.json b/tools/perf/pmu-events/arch/x86/goldmontplus/counter.json
new file mode 100644
index 000000000000..aa443347b694
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/counter.json
@@ -0,0 +1,7 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ }
+] \ No newline at end of file
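
(Editorial aside, not part of the patch.) The new counter.json declares how many fixed and general-purpose counters the PMU exposes, and the "Counter" fields added throughout this series name which of those counters each event may be scheduled on. A minimal sketch of how one could cross-check that relationship with the standard library json module is shown below; the file paths and the check itself are illustrative assumptions, not part of perf's jevents tooling.

    # Illustrative sketch (assumed paths): verify that every event's "Counter"
    # field in a pmu-events JSON file is consistent with the counters declared
    # in the accompanying counter.json.
    import json

    def load_events(path):
        # Each pmu-events file is a JSON array of objects.
        with open(path) as f:
            return json.load(f)

    def check_counters(events_path, counter_path):
        counters = load_events(counter_path)[0]            # single "core" unit entry
        num_generic = int(counters["CountersNumGeneric"])  # e.g. 4 -> counters 0..3
        valid_generic = {str(i) for i in range(num_generic)}

        for event in load_events(events_path):
            spec = event.get("Counter", "")
            if spec.startswith("Fixed"):
                continue                                   # fixed-counter events
            used = {c.strip() for c in spec.split(",") if c.strip()}
            if not used <= valid_generic:
                print(f'{event["EventName"]}: Counter "{spec}" outside {sorted(valid_generic)}')

    check_counters("goldmontplus/pipeline.json", "goldmontplus/counter.json")
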
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/floating-point.json b/tools/perf/pmu-events/arch/x86/goldmontplus/floating-point.json
index 822a7a6bcaeb..3d06ac1ee0cf 100644
--- a/tools/perf/pmu-events/arch/x86/goldmontplus/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles the FP divide unit is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0xCD",
"EventName": "CYCLES_DIV_BUSY.FPDIV",
"PublicDescription": "Counts core cycles the floating point divide unit is busy.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Machine clears due to FP assists",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.FP_ASSIST",
"PublicDescription": "Counts machine clears due to floating point (FP) operations needing assists. For instance, if the result was a floating point denormal, the hardware clears the pipeline and reissues uops to produce the correct IEEE compliant denormal result.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Floating point divide uops retired (Precise Event Capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.FPDIV",
"PEBS": "2",
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json b/tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json
index ace2a114b546..249a97cf3f4c 100644
--- a/tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "BACLEARs asserted for any branch type",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.ALL",
"PublicDescription": "Counts the number of times a BACLEAR is signaled for any reason, including, but not limited to indirect branch/call, Jcc (Jump on Conditional Code/Jump if Condition is Met) branch, unconditional branch/call, and returns.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "BACLEARs asserted for conditional branch",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.COND",
"PublicDescription": "Counts BACLEARS on Jcc (Jump on Conditional Code/Jump if Condition is Met) branches.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "BACLEARs asserted for return branch",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.RETURN",
"PublicDescription": "Counts BACLEARS on return instructions.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Decode restrictions due to predicting wrong instruction length",
+ "Counter": "0,1,2,3",
"EventCode": "0xE9",
"EventName": "DECODE_RESTRICTION.PREDECODE_WRONG",
"PublicDescription": "Counts the number of times the prediction (from the predecode cache) for instruction length is incorrect.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "References per ICache line. This event counts differently than Intel processors based on Silvermont microarchitecture",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line. The event strives to count on a cache line basis, so that multiple fetches to a single cache line count as one ICACHE.ACCESS. Specifically, the event counts when accesses from straight line code crosses the cache line boundary, or when a branch target is to a new line.\r\nThis event counts differently than Intel processors based on Silvermont microarchitecture.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "References per ICache line that are available in the ICache (hit). This event counts differently than Intel processors based on Silvermont microarchitecture",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is in the ICache (hit). The event strives to count on a cache line basis, so that multiple accesses which hit in a single cache line count as one ICACHE.HIT. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is in the ICache. This event counts differently than Intel processors based on Silvermont microarchitecture.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "References per ICache line that are not available in the ICache (miss). This event counts differently than Intel processors based on Silvermont microarchitecture",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "Counts requests to the Instruction Cache (ICache) for one or more bytes in an ICache Line and that cache line is not in the ICache (miss). The event strives to count on a cache line basis, so that multiple accesses which miss in a single cache line count as one ICACHE.MISS. Specifically, the event counts when straight line code crosses the cache line boundary, or when a branch target is to a new line, and that cache line is not in the ICache. This event counts differently than Intel processors based on Silvermont microarchitecture.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "MS decode starts",
+ "Counter": "0,1,2,3",
"EventCode": "0xE7",
"EventName": "MS_DECODED.MS_ENTRY",
"PublicDescription": "Counts the number of times the Microcode Sequencer (MS) starts a flow of uops from the MSROM. It does not count every time a uop is read from the MSROM. The most common case that this counts is when a micro-coded instruction is encountered by the front end of the machine. Other cases include when an instruction encounters a fault, trap, or microcode assist of any sort that initiates a flow of uops. The event will count MS startups for uops that are speculative, and subsequently cleared by branch mispredict or a machine clear.",
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/memory.json b/tools/perf/pmu-events/arch/x86/goldmontplus/memory.json
index 7038873a5c8d..72bc2155ed00 100644
--- a/tools/perf/pmu-events/arch/x86/goldmontplus/memory.json
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Machine clears due to memory ordering issue",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "Counts machine clears due to memory ordering issues. This occurs when a snoop request happens and the machine is uncertain if memory ordering will be preserved - as another core is in the process of modifying the data.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Load uops that split a page (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "MISALIGN_MEM_REF.LOAD_PAGE_SPLIT",
"PEBS": "2",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Store uops that split a page (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "MISALIGN_MEM_REF.STORE_PAGE_SPLIT",
"PEBS": "2",
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/other.json b/tools/perf/pmu-events/arch/x86/goldmontplus/other.json
index ec0ce9078c98..96bbc4fc82a1 100644
--- a/tools/perf/pmu-events/arch/x86/goldmontplus/other.json
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles code-fetch stalled due to any reason.",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "FETCH_STALL.ALL",
"PublicDescription": "Counts cycles that fetch is stalled due to any reason. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes. This will include cycles due to an ITLB miss, ICache miss and other events.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Cycles the code-fetch stalls and an ITLB miss is outstanding.",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "FETCH_STALL.ITLB_FILL_PENDING_CYCLES",
"PublicDescription": "Counts cycles that fetch is stalled due to an outstanding ITLB miss. That is, the decoder queue is able to accept bytes, but the fetch unit is unable to provide bytes due to an ITLB miss. Note: this event is not the same as page walk cycles to retrieve an instruction translation.",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Cycles hardware interrupts are masked",
+ "Counter": "0,1,2,3",
"EventCode": "0xCB",
"EventName": "HW_INTERRUPTS.MASKED",
"PublicDescription": "Counts the number of core cycles during which interrupts are masked (disabled). Increments by 1 each core cycle that EFLAGS.IF is 0, regardless of whether interrupts are pending or not.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Cycles pending interrupts are masked",
+ "Counter": "0,1,2,3",
"EventCode": "0xCB",
"EventName": "HW_INTERRUPTS.PENDING_AND_MASKED",
"PublicDescription": "Counts core cycles during which there are pending interrupts, but interrupts are masked (EFLAGS.IF = 0).",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Hardware interrupts received",
+ "Counter": "0,1,2,3",
"EventCode": "0xCB",
"EventName": "HW_INTERRUPTS.RECEIVED",
"PublicDescription": "Counts hardware interrupts received by the processor.",
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json b/tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json
index 33ef331e77e0..8cbf253d0c30 100644
--- a/tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Retired branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PEBS": "2",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Retired taken branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_TAKEN_BRANCHES",
"PEBS": "2",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Retired near call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CALL",
"PEBS": "2",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Retired far branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
"PEBS": "2",
@@ -36,6 +40,7 @@
},
{
"BriefDescription": "Retired near indirect call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.IND_CALL",
"PEBS": "2",
@@ -45,6 +50,7 @@
},
{
"BriefDescription": "Retired conditional branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.JCC",
"PEBS": "2",
@@ -54,6 +60,7 @@
},
{
"BriefDescription": "Retired instructions of near indirect Jmp or call (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NON_RETURN_IND",
"PEBS": "2",
@@ -63,6 +70,7 @@
},
{
"BriefDescription": "Retired near relative call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.REL_CALL",
"PEBS": "2",
@@ -72,6 +80,7 @@
},
{
"BriefDescription": "Retired near return instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.RETURN",
"PEBS": "2",
@@ -81,6 +90,7 @@
},
{
"BriefDescription": "Retired conditional branch instructions that were taken (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.TAKEN_JCC",
"PEBS": "2",
@@ -90,6 +100,7 @@
},
{
"BriefDescription": "Retired mispredicted branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PEBS": "2",
@@ -98,6 +109,7 @@
},
{
"BriefDescription": "Retired mispredicted near indirect call instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.IND_CALL",
"PEBS": "2",
@@ -107,6 +119,7 @@
},
{
"BriefDescription": "Retired mispredicted conditional branch instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.JCC",
"PEBS": "2",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "Retired mispredicted instructions of near indirect Jmp or near indirect call (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NON_RETURN_IND",
"PEBS": "2",
@@ -125,6 +139,7 @@
},
{
"BriefDescription": "Retired mispredicted near return instructions (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.RETURN",
"PEBS": "2",
@@ -134,6 +149,7 @@
},
{
"BriefDescription": "Retired mispredicted conditional branch instructions that were taken (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.TAKEN_JCC",
"PEBS": "2",
@@ -143,6 +159,7 @@
},
{
"BriefDescription": "Core cycles when core is not halted (Fixed event)",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.CORE",
"PublicDescription": "Counts the number of core cycles while the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time to time. For this reason this event may have a changing ratio with regards to time. This event uses fixed counter 1. You cannot collect a PEBs record for this event.",
"SampleAfterValue": "2000003",
@@ -150,6 +167,7 @@
},
{
"BriefDescription": "Core cycles when core is not halted",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.CORE_P",
"PublicDescription": "Core cycles when core is not halted. This event uses a (_P)rogrammable general purpose performance counter.",
@@ -157,6 +175,7 @@
},
{
"BriefDescription": "Reference cycles when core is not halted",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF",
"PublicDescription": "Reference cycles when core is not halted. This event uses a (_P)rogrammable general purpose performance counter.",
@@ -165,6 +184,7 @@
},
{
"BriefDescription": "Reference cycles when core is not halted (Fixed event)",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. In mobile systems the core frequency may change from time. This event is not affected by core frequency changes but counts as if the core is running at the maximum frequency all the time. This event uses fixed counter 2. You cannot collect a PEBs record for this event.",
"SampleAfterValue": "2000003",
@@ -172,6 +192,7 @@
},
{
"BriefDescription": "Cycles a divider is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0xCD",
"EventName": "CYCLES_DIV_BUSY.ALL",
"PublicDescription": "Counts core cycles if either divide unit is busy.",
@@ -179,6 +200,7 @@
},
{
"BriefDescription": "Cycles the integer divide unit is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0xCD",
"EventName": "CYCLES_DIV_BUSY.IDIV",
"PublicDescription": "Counts core cycles the integer divide unit is busy.",
@@ -187,6 +209,7 @@
},
{
"BriefDescription": "Instructions retired (Fixed event)",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
"PEBS": "2",
"PublicDescription": "Counts the number of instructions that retire execution. For instructions that consist of multiple uops, this event counts the retirement of the last uop of the instruction. The counter continues counting during hardware interrupts, traps, and inside interrupt handlers. This event uses fixed counter 0. You cannot collect a PEBs record for this event.",
@@ -195,6 +218,7 @@
},
{
"BriefDescription": "Instructions retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
"PEBS": "2",
@@ -203,6 +227,7 @@
},
{
"BriefDescription": "Instructions retired - using Reduced Skid PEBS feature",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
"PEBS": "2",
@@ -211,6 +236,7 @@
},
{
"BriefDescription": "Unfilled issue slots per cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "ISSUE_SLOTS_NOT_CONSUMED.ANY",
"PublicDescription": "Counts the number of issue slots per core cycle that were not consumed by the backend due to either a full resource in the backend (RESOURCE_FULL) or due to the processor recovering from some event (RECOVERY).",
@@ -218,6 +244,7 @@
},
{
"BriefDescription": "Unfilled issue slots per cycle to recover",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "ISSUE_SLOTS_NOT_CONSUMED.RECOVERY",
"PublicDescription": "Counts the number of issue slots per core cycle that were not consumed by the backend because allocation is stalled waiting for a mispredicted jump to retire or other branch-like conditions (e.g. the event is relevant during certain microcode flows). Counts all issue slots blocked while within this window including slots where uops were not available in the Instruction Queue.",
@@ -226,6 +253,7 @@
},
{
"BriefDescription": "Unfilled issue slots per cycle because of a full resource in the backend",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "ISSUE_SLOTS_NOT_CONSUMED.RESOURCE_FULL",
"PublicDescription": "Counts the number of issue slots per core cycle that were not consumed because of a full resource in the backend. Including but not limited to resources such as the Re-order Buffer (ROB), reservation stations (RS), load/store buffers, physical registers, or any other needed machine resource that is currently unavailable. Note that uops must be available for consumption in order for this event to fire. If a uop is not available (Instruction Queue is empty), this event will not count.",
@@ -234,6 +262,7 @@
},
{
"BriefDescription": "Loads blocked because address has 4k partial address false dependence (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.4K_ALIAS",
"PEBS": "2",
@@ -243,6 +272,7 @@
},
{
"BriefDescription": "Loads blocked (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.ALL_BLOCK",
"PEBS": "2",
@@ -252,6 +282,7 @@
},
{
"BriefDescription": "Loads blocked due to store data not ready (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.DATA_UNKNOWN",
"PEBS": "2",
@@ -261,6 +292,7 @@
},
{
"BriefDescription": "Loads blocked due to store forward restriction (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PEBS": "2",
@@ -270,6 +302,7 @@
},
{
"BriefDescription": "Loads blocked because address in not in the UTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.UTLB_MISS",
"PEBS": "2",
@@ -279,6 +312,7 @@
},
{
"BriefDescription": "All machine clears",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.ALL",
"PublicDescription": "Counts machine clears for any reason.",
@@ -286,6 +320,7 @@
},
{
"BriefDescription": "Machine clears due to memory disambiguation",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.DISAMBIGUATION",
"PublicDescription": "Counts machine clears due to memory disambiguation. Memory disambiguation happens when a load which has been issued conflicts with a previous unretired store in the pipeline whose address was not known at issue time, but is later resolved to be the same as the load address.",
@@ -294,6 +329,7 @@
},
{
"BriefDescription": "Machines clear due to a page fault",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.PAGE_FAULT",
"PublicDescription": "Counts the number of times that the machines clears due to a page fault. Covers both I-side and D-side(Loads/Stores) page faults. A page fault occurs when either page is not present, or an access violation",
@@ -302,6 +338,7 @@
},
{
"BriefDescription": "Self-Modifying Code detected",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts the number of times that the processor detects that a program is writing to a code section and has to perform a machine clear because of that modification. Self-modifying code (SMC) causes a severe penalty in all Intel(R) architecture processors.",
@@ -310,6 +347,7 @@
},
{
"BriefDescription": "Uops issued to the back end per cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts uops issued by the front end and allocated into the back end of the machine. This event counts uops that retire as well as uops that were speculatively executed but didn't retire. The sort of speculative uops that might be counted includes, but is not limited to those uops issued in the shadow of a miss-predicted branch, those uops that are inserted during an assist (such as for a denormal floating point result), and (previously allocated) uops that might be canceled during a machine clear.",
@@ -317,6 +355,7 @@
},
{
"BriefDescription": "Uops requested but not-delivered to the back-end per cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UOPS_NOT_DELIVERED.ANY",
"PublicDescription": "This event used to measure front-end inefficiencies. I.e. when front-end of the machine is not delivering uops to the back-end and the back-end has is not stalled. This event can be used to identify if the machine is truly front-end bound. When this event occurs, it is an indication that the front-end of the machine is operating at less than its theoretical peak performance. Background: We can think of the processor pipeline as being divided into 2 broader parts: Front-end and Back-end. Front-end is responsible for fetching the instruction, decoding into uops in machine understandable format and putting them into a uop queue to be consumed by back end. The back-end then takes these uops, allocates the required resources. When all resources are ready, uops are executed. If the back-end is not ready to accept uops from the front-end, then we do not want to count these as front-end bottlenecks. However, whenever we have bottlenecks in the back-end, we will have allocation unit stalls and eventually forcing the front-end to wait until the back-end is ready to receive more uops. This event counts only when back-end is requesting more uops and front-end is not able to provide them. When 3 uops are requested and no uops are delivered, the event counts 3. When 3 are requested, and only 1 is delivered, the event counts 2. When only 2 are delivered, the event counts 1. Alternatively stated, the event will not count if 3 uops are delivered, or if the back end is stalled and not requesting any uops at all. Counts indicate missed opportunities for the front-end to deliver a uop to the back end. Some examples of conditions that cause front-end efficiencies are: ICache misses, ITLB misses, and decoder restrictions that limit the front-end bandwidth. Known Issues: Some uops require multiple allocation slots. These uops will not be charged as a front end 'not delivered' opportunity, and will be regarded as a back end problem. For example, the INC instruction has one uop that requires 2 issue slots. A stream of INC instructions will not count as UOPS_NOT_DELIVERED, even though only one instruction can be issued per clock. The low uop issue rate for a stream of INC instructions is considered to be a back end issue.",
@@ -324,6 +363,7 @@
},
{
"BriefDescription": "Uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ANY",
"PEBS": "2",
@@ -332,6 +372,7 @@
},
{
"BriefDescription": "Integer divide uops retired (Precise Event Capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.IDIV",
"PEBS": "2",
@@ -341,6 +382,7 @@
},
{
"BriefDescription": "MS uops retired (Precise event capable)",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.MS",
"PEBS": "2",
diff --git a/tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json b/tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json
index 3d6feb45a50b..09208af2e0ba 100644
--- a/tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/goldmontplus/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Page walk completed due to a demand load to a 1GB page",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1GB",
"PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 1GB pages. The page walks can end with or without a page fault.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand load to a 2M or 4M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 2M or 4M pages. The page walks can end with or without a page fault.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand load to a 4K page",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts page walks completed due to demand data loads (including SW prefetches) whose address translations missed in all TLB levels and were mapped to 4K pages. The page walks can end with or without a page fault.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Page walks outstanding due to a demand load every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts once per cycle for each page walk occurring due to a load (demand data loads or SW prefetches). Includes cycles spent traversing the Extended Page Table (EPT). Average cycles per walk can be calculated by dividing by the number of walks.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data store to a 1GB page",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1GB",
"PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 1GB pages. The page walks can end with or without a page fault.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data store to a 2M or 4M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 2M or 4M pages. The page walks can end with or without a page fault.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Page walk completed due to a demand data store to a 4K page",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts page walks completed due to demand data stores whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Page walks outstanding due to a demand data store every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts once per cycle for each page walk occurring due to a demand data store. Includes cycles spent traversing the Extended Page Table (EPT). Average cycles per walk can be calculated by dividing by the number of walks.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Page walks outstanding due to walking the EPT every cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "EPT.WALK_PENDING",
"PublicDescription": "Counts once per cycle for each page walk only while traversing the Extended Page Table (EPT), and does not count during the rest of the translation. The EPT is used for translating Guest-Physical Addresses to Physical Addresses for Virtual Machine Monitors (VMMs). Average cycles per walk can be calculated by dividing the count by number of walks.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "ITLB misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "ITLB.MISS",
"PublicDescription": "Counts the number of times the machine was unable to find a translation in the Instruction Translation Lookaside Buffer (ITLB) for a linear address of an instruction fetch. It counts when new translation are filled into the ITLB. The event is speculative in nature, but will not count translations (page walks) that are begun and not finished, or translations that are finished but not filled into the ITLB.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Page walk completed due to an instruction fetch in a 1GB page",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1GB",
"PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 1GB pages. The page walks can end with or without a page fault.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Page walk completed due to an instruction fetch in a 2M or 4M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 2M or 4M pages. The page walks can end with or without a page fault.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Page walk completed due to an instruction fetch in a 4K page",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts page walks completed due to instruction fetches whose address translations missed in the TLB and were mapped to 4K pages. The page walks can end with or without a page fault.",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "Page walks outstanding due to an instruction fetch every cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_PENDING",
"PublicDescription": "Counts once per cycle for each page walk occurring due to an instruction fetch. Includes cycles spent traversing the Extended Page Table (EPT). Average cycles per walk can be calculated by dividing by the number of walks.",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Memory uops retired that missed the DTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS",
@@ -123,6 +138,7 @@
},
{
"BriefDescription": "Load uops retired that missed the DTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS_LOADS",
@@ -133,6 +149,7 @@
},
{
"BriefDescription": "Store uops retired that missed the DTLB (Precise event capable)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.DTLB_MISS_STORES",
@@ -143,6 +160,7 @@
},
{
"BriefDescription": "STLB flushes",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSHES.STLB_ANY",
"PublicDescription": "Counts STLB flushes. The TLBs are flushed on instructions like INVLPG and MOV to CR3.",
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/cache.json b/tools/perf/pmu-events/arch/x86/grandridge/cache.json
index 7f0dc65a55d2..9abddb06a837 100644
--- a/tools/perf/pmu-events/arch/x86/grandridge/cache.json
+++ b/tools/perf/pmu-events/arch/x86/grandridge/cache.json
@@ -1,155 +1,532 @@
[
{
+ "BriefDescription": "Counts the number of L1D cacheline (dirty) evictions caused by load misses, stores, and prefetches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x51",
+ "EventName": "DL1.DIRTY_EVICTION",
+ "PublicDescription": "Counts the number of L1D cacheline (dirty) evictions caused by load misses, stores, and prefetches. Does not count evictions or dirty writebacks caused by snoops. Does not count a replacement unless a (dirty) line was written back.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Exclusive state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.E",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Exclusive state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Forward state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.F",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Forward state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Modified state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.M",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Modified state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines filled into the L2 cache that are in Shared state",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.S",
+ "PublicDescription": "Counts the number of cache lines filled into the L2 cache that are in Shared state. Counts on a per core basis.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 cache lines that are evicted due to an L2 cache fill",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.NON_SILENT",
+ "PublicDescription": "Counts the number of L2 cache lines that are evicted due to an L2 cache fill. Increments on the core that brought the line in originally.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 cache lines that are silently dropped due to an L2 cache fill",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.SILENT",
+ "PublicDescription": "Counts the number of L2 cache lines that are silently dropped due to an L2 cache fill. Increments on the core that brought the line in originally.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 Cache Accesses that resulted in a Hit from a front door request only (does not include rejects or recycles), per core event",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of total L2 Cache Accesses that resulted in a Miss from a front door request only (does not include rejects or recycles), per core event",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of L2 Cache Accesses that miss the L2 and get BBL reject short and long rejects, per core event",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.REJECTS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
- "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"UMask": "0x41"
},
{
"BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
- "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
+ "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the core has access to an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
"SampleAfterValue": "200003",
"UMask": "0x4f"
},
{
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to an instruction cache or TLB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7f"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to an instruction cache or TLB miss which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.L2_HIT",
+ "PublicDescription": "Counts the number of cycles the core is stalled due to an instruction cache or Translation Lookaside Buffer (TLB) miss which hit in the L2 cache.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to an ICACHE or ITLB miss which hit in the LLC. If the core has access to an L3 cache, an LLC hit refers to an L3 cache hit, otherwise it counts zeros.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.LLC_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to an ICACHE or ITLB miss which missed all the caches. If the core has access to an L3 cache, an LLC miss refers to an L3 cache miss, otherwise it is an L2 cache miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x35",
+ "EventName": "MEM_BOUND_STALLS_IFETCH.LLC_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x78"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to an L1 demand load miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7f"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a demand load which hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.L2_HIT",
+ "PublicDescription": "Counts the number of cycles a core is stalled due to a demand load which hit in the L2 cache.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a demand load miss which hit in the LLC. If the core has access to an L3 cache, an LLC hit refers to an L3 cache hit, otherwise it counts zeros.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.LLC_HIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a demand load miss which missed all the local caches. If the core has access to an L3 cache, an LLC miss refers to an L3 cache miss, otherwise it is an L2 cache miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.LLC_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x78"
+ },
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a store buffer full condition",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.SBFULL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss the L3 cache and hit in DRAM",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit the L1 data cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the L1 data cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss in the L2 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that hit in the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.L3_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1c"
+ },
+ {
+ "BriefDescription": "Counts the number of loads that hit in a write combining buffer (WCB), excluding the first load that caused the WCB to allocate.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_UOPS_RETIRED.WCB_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked for any of the following reasons: load buffer, store buffer or RSV full.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.ALL",
+ "SampleAfterValue": "20003",
+ "UMask": "0x7"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to a load buffer full condition.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.LD_BUF",
+ "SampleAfterValue": "20003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to an RSV full condition.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.RSV",
+ "SampleAfterValue": "20003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that uops are blocked due to a store buffer full condition.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x04",
+ "EventName": "MEM_SCHEDULER_BLOCK.ST_BUF",
+ "SampleAfterValue": "20003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts the number of load ops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x81"
},
{
"BriefDescription": "Counts the number of store ops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
- "PEBS": "1",
"SampleAfterValue": "200003",
"UMask": "0x82"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_1024",
"MSRIndex": "0x3F6",
"MSRValue": "0x400",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_2048",
"MSRIndex": "0x3F6",
"MSRValue": "0x800",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x5"
},
{
+ "BriefDescription": "Counts the number of load uops retired that performed one or more locks",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x21"
+ },
+ {
+ "BriefDescription": "Counts the number of memory uops retired that were splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x43"
+ },
+ {
+ "BriefDescription": "Counts the number of retired split load uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x41"
+ },
+ {
+ "BriefDescription": "Counts the number of retired split store uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x42"
+ },
+ {
+ "BriefDescription": "Counts the number of memory uops retired that missed in the second level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x13"
+ },
+ {
+ "BriefDescription": "Counts the number of load uops retired that miss in the second Level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x11"
+ },
+ {
+ "BriefDescription": "Counts the number of store uops retired that miss in the second level TLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
+ "SampleAfterValue": "200003",
+ "UMask": "0x12"
+ },
+ {
"BriefDescription": "Counts the number of stores uops retired same as MEM_UOPS_RETIRED.ALL_STORES",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
- "PEBS": "2",
"SampleAfterValue": "1000003",
"UMask": "0x6"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C0001",
+ "PublicDescription": "Counts demand data reads that were supplied by the L3 cache where a snoop was sent, the snoop hit, and non-modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xB7",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were supplied by the L3 cache where a snoop was sent, the snoop hit, and modified data was forwarded. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to an icache miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ICACHE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
}
]
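
Most events in this file are described by an EventCode/UMask pair (the OCR entries additionally carry MSRIndex/MSRValue settings for the off-core response facility), and those two fields map onto the architectural IA32_PERFEVTSELx layout: event select in bits 0-7, unit mask in bits 8-15, counter mask in bits 24-31. A minimal encoding sketch in Python, with LONGEST_LAT_CACHE.MISS from above as the worked value; the helper name is made up and the user/OS/enable bits are deliberately left out:

def perfevtsel_config(event_code, umask, cmask=0):
    # Pack EventCode/UMask/CounterMask into the architectural bit positions.
    return (event_code & 0xff) | ((umask & 0xff) << 8) | ((cmask & 0xff) << 24)

# LONGEST_LAT_CACHE.MISS above: EventCode 0x2e, UMask 0x41 -> 0x412e
print(hex(perfevtsel_config(0x2e, 0x41)))
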
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/counter.json b/tools/perf/pmu-events/arch/x86/grandridge/counter.json
new file mode 100644
index 000000000000..d9ac3aca5bd5
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/counter.json
@@ -0,0 +1,42 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "8"
+ },
+ {
+ "Unit": "B2CMI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CHA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IMC",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IIO",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CHACMS",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ }
+]
\ No newline at end of file
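
The new counter.json declares how many fixed and general-purpose counters each PMU unit has (three fixed plus eight generic on the core), which is what the "Counter": "0,1,2,3,4,5,6,7" fields added throughout the event files must stay within. A small consistency-check sketch in Python; the checkout-relative paths and the check itself are illustrative, not part of the patch:

import json

base = "tools/perf/pmu-events/arch/x86/grandridge"  # assumed path to the files above

core = next(u for u in json.load(open(f"{base}/counter.json")) if u["Unit"] == "core")
num_generic = int(core["CountersNumGeneric"])

for event in json.load(open(f"{base}/cache.json")):
    counters = event.get("Counter", "")
    if counters.startswith("Fixed"):
        continue  # fixed-counter events are bounded by CountersNumFixed instead
    used = [int(c) for c in counters.split(",") if c.strip().isdigit()]
    assert all(c < num_generic for c in used), event["EventName"]
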
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/floating-point.json b/tools/perf/pmu-events/arch/x86/grandridge/floating-point.json
new file mode 100644
index 000000000000..5266eed969be
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/floating-point.json
@@ -0,0 +1,110 @@
+[
+ {
+ "BriefDescription": "Counts the number of cycles when any of the floating point dividers are active.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.FPDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of all types of floating point operations per uop with all default weighting",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to FP_FLOPS_RETIRED.FP64]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations that produce 32 bit single precision results [This event is alias to FP_FLOPS_RETIRED.SP]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.FP32",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations that produce 64 bit double precision results [This event is alias to FP_FLOPS_RETIRED.DP]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.FP64",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to FP_FLOPS_RETIRED.FP32]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc8",
+ "EventName": "FP_FLOPS_RETIRED.SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the total number of floating point retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.128B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 128 bit single precision floating point. This may be SSE or AVX.128 operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.128B_SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a packed 256 bit double precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.256B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a scalar 32bit single precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.32B_SP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of retired instructions whose sources are a scalar 64 bit double precision floating point.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_INST_RETIRED.64B_DP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point operations retired that required microcode assist.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.FP_ASSIST",
+ "PublicDescription": "Counts the number of floating point operations retired that required microcode assist, which is not a reflection of the number of FP operations, instructions or uops.",
+ "SampleAfterValue": "20003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of floating point divide uops retired (x87 and sse, including x87 sqrt).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.FPDIV",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ }
+]
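
Two entries in this file exist only as deprecated aliases (FP_FLOPS_RETIRED.DP and .SP point at .FP64 and .FP32). A short Python sketch that lists such pairs by parsing the "[This event is alias to NAME]" phrasing used in the descriptions above; the path is assumed:

import json, re

path = "tools/perf/pmu-events/arch/x86/grandridge/floating-point.json"  # assumed path
alias_re = re.compile(r"alias to ([A-Z0-9_.]+)")

for event in json.load(open(path)):
    if event.get("Deprecated") == "1":
        m = alias_re.search(event.get("BriefDescription", ""))
        print(f"{event['EventName']} is deprecated; use {m.group(1) if m else '<unknown>'}")
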
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/frontend.json b/tools/perf/pmu-events/arch/x86/grandridge/frontend.json
index be8f1c7e195c..fef5cba533bb 100644
--- a/tools/perf/pmu-events/arch/x86/grandridge/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/grandridge/frontend.json
@@ -1,6 +1,24 @@
[
{
+ "BriefDescription": "Counts the total number of BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe6",
+ "EventName": "BACLEARS.ANY",
+ "PublicDescription": "Counts the total number of BACLEARS, which occur when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of instructions retired that were tagged because empty issue slots were seen before the uop due to ITLB miss",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ITLB_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
"BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x80",
"EventName": "ICACHE.ACCESSES",
"SampleAfterValue": "200003",
@@ -8,9 +26,18 @@
},
{
"BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump and the instruction cache registers bytes are not present. -",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"SampleAfterValue": "200003",
"UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the micro-sequencer is busy.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "MS_DECODED.MS_BUSY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
}
]
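
ICACHE.ACCESSES and ICACHE.MISSES above share event code 0x80 and differ only in unit mask, so an instruction-cache miss ratio is simply their quotient; the grr-metrics.json file added next expresses the related per-instruction form (ICACHE.MISSES / INST_RETIRED.ANY). A minimal sketch with illustrative counts:

# Illustrative counts; in practice these come from perf stat on the events above.
icache_accesses = 9_500_000   # ICACHE.ACCESSES
icache_misses = 120_000       # ICACHE.MISSES
inst_retired = 48_000_000     # INST_RETIRED.ANY

print(f"icache miss ratio: {icache_misses / icache_accesses:.2%}")
print(f"icache misses per kilo-instruction: {1000 * icache_misses / inst_retired:.2f}")
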
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/grr-metrics.json b/tools/perf/pmu-events/arch/x86/grandridge/grr-metrics.json
new file mode 100644
index 000000000000..a0d637a24c1b
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/grr-metrics.json
@@ -0,0 +1,789 @@
+[
+ {
+ "BriefDescription": "C10 residency percent per package",
+ "MetricExpr": "cstate_pkg@c10\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C10_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C1 residency percent per core",
+ "MetricExpr": "cstate_core@c1\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C1_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C2 residency percent per package",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C2_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C3 residency percent per package",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C3_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per core",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per package",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C7 residency percent per core",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C7_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C8 residency percent per package",
+ "MetricExpr": "cstate_pkg@c8\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C8_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY",
+ "MetricName": "cpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "CPU operating frequency (in GHz)",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC * #SYSTEM_TSC_FREQ / 1e9",
+ "MetricName": "cpu_operating_frequency",
+ "ScaleUnit": "1GHz"
+ },
+ {
+ "BriefDescription": "Percentage of time spent in the active CPU power state C0",
+ "MetricExpr": "tma_info_system_cpu_utilization",
+ "MetricName": "cpu_utilization",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions",
+ "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_2mb_large_page_load_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions",
+ "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_load_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions",
+ "MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_store_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "The percent of inbound full cache line writes initiated by IO that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM / UNC_CHA_TOR_INSERTS.IO_ITOM",
+ "MetricName": "io_full_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Message Signaled Interrupts (MSI) per second sent by the integrated I/O traffic controller (IIO) to System Configuration Controller (Ubox)",
+ "MetricExpr": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX_POSTED / duration_time",
+ "MetricName": "io_msi",
+ "ScaleUnit": "1per_sec"
+ },
+ {
+ "BriefDescription": "The percent of inbound partial writes initiated by IO that miss the L3 cache",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_RFO) / (UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_RFO)",
+ "MetricName": "io_partial_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "The percent of inbound reads initiated by IO that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR / UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
+ "MetricName": "io_read_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions",
+ "MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
+ "MetricName": "itlb_2nd_level_large_page_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions",
+ "MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "itlb_2nd_level_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read requests missing in L1 instruction cache (includes prefetches) to the total number of completed instructions",
+ "MetricExpr": "ICACHE.MISSES / INST_RETIRED.ANY",
+ "MetricName": "l1_i_code_read_misses_with_prefetches_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of demand load requests hitting in L1 data cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_UOPS_RETIRED.L1_HIT / INST_RETIRED.ANY",
+ "MetricName": "l1d_demand_data_read_hits_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed demand load requests hitting in L2 cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_UOPS_RETIRED.L2_HIT / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_data_read_hits_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed data read request missing L2 cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_data_read_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of requests missing L2 cache (includes code+data+rfo w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "LONGEST_LAT_CACHE.REFERENCE / INST_RETIRED.ANY",
+ "MetricName": "l2_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_CRD + UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF) / INST_RETIRED.ANY",
+ "MetricName": "llc_code_read_mpi_demand_plus_prefetch",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF + UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA) / INST_RETIRED.ANY",
+ "MetricName": "llc_data_read_mpi_demand_plus_prefetch",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Load operations retired per instruction",
+ "MetricExpr": "MEM_UOPS_RETIRED.ALL_LOADS / INST_RETIRED.ANY",
+ "MetricName": "loads_retired_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "DDR memory read bandwidth (MB/sec)",
+ "MetricExpr": "(UNC_M_CAS_COUNT_SCH0.RD + UNC_M_CAS_COUNT_SCH1.RD) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "DDR memory bandwidth (MB/sec)",
+ "MetricExpr": "(UNC_M_CAS_COUNT_SCH0.RD + UNC_M_CAS_COUNT_SCH1.RD + UNC_M_CAS_COUNT_SCH0.WR + UNC_M_CAS_COUNT_SCH1.WR) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_total",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "DDR memory write bandwidth (MB/sec)",
+ "MetricExpr": "(UNC_M_CAS_COUNT_SCH0.WR + UNC_M_CAS_COUNT_SCH1.WR) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
+ "MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
+ "MetricGroup": "smi",
+ "MetricName": "smi_cycles",
+ "MetricThreshold": "smi_cycles > 0.1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Number of SMI interrupts.",
+ "MetricExpr": "msr@smi@",
+ "MetricGroup": "smi",
+ "MetricName": "smi_num",
+ "ScaleUnit": "1SMI#"
+ },
+ {
+ "BriefDescription": "Store operations retired per instruction",
+ "MetricExpr": "MEM_UOPS_RETIRED.ALL_STORES / INST_RETIRED.ANY",
+ "MetricName": "stores_retired_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to certain allocation restrictions",
+ "MetricExpr": "tma_core_bound",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_allocation_restriction",
+ "MetricThreshold": "tma_allocation_restriction > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls",
+ "MetricExpr": "TOPDOWN_BE_BOUND.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL1;tma_L1_group",
+ "MetricName": "tma_backend_bound",
+ "MetricThreshold": "tma_backend_bound > 0.1",
+ "MetricgroupNoGroup": "TopdownL1",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend due to backend stalls. Note that uops must be available for consumption in order for this event to count. If a uop is not available (IQ is empty), this event will not count",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL1;tma_L1_group",
+ "MetricName": "tma_bad_speculation",
+ "MetricThreshold": "tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL1",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the instruction queue (IQ). Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend",
+ "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_DETECT / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_branch_detect",
+ "MetricThreshold": "tma_branch_detect > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "PublicDescription": "Counts the number of issue slots that were not delivered by the frontend due to BACLEARS, which occurs when the Branch Target Buffer (BTB) prediction or lack thereof, was corrected by a later branch predictor in the frontend. Includes BACLEARS due to all branch types including conditional and unconditional jumps, returns, and indirect branches.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to branch mispredicts",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.MISPREDICT / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
+ "MetricName": "tma_branch_mispredicts",
+ "MetricThreshold": "tma_branch_mispredicts > 0.05 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to BTCLEARS, which occurs when the Branch Target Buffer (BTB) predicts a taken branch.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.BRANCH_RESTEER / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_branch_resteer",
+ "MetricThreshold": "tma_branch_resteer > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to the microcode sequencer (MS).",
+ "MetricExpr": "TOPDOWN_FE_BOUND.CISC / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_cisc",
+ "MetricThreshold": "tma_cisc > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles due to backend bound stalls that are bounded by core restrictions and not attributed to an outstanding load or stores, or resource limitation",
+ "MetricExpr": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_core_bound",
+ "MetricThreshold": "tma_core_bound > 0.1 & tma_backend_bound > 0.1",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to decode stalls.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.DECODE / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_decode",
+ "MetricThreshold": "tma_decode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that does not require the use of microcode, classified as a fast nuke, due to memory ordering, memory disambiguation and memory renaming",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.FASTNUKE / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_fast_nuke",
+ "MetricThreshold": "tma_fast_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to frontend stalls.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL1;tma_L1_group",
+ "MetricName": "tma_frontend_bound",
+ "MetricThreshold": "tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to instruction cache misses.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.ICACHE / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_icache_misses",
+ "MetricThreshold": "tma_icache_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
+ "MetricName": "tma_ifetch_bandwidth",
+ "MetricThreshold": "tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to frontend latency restrictions due to icache misses, itlb misses, branch detection, and resteer limitations.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_frontend_bound_group",
+ "MetricName": "tma_ifetch_latency",
+ "MetricThreshold": "tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Instructions per Floating Point (FP) Operation",
+ "MetricExpr": "INST_RETIRED.ANY / FP_FLOPS_RETIRED.ALL",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipflop"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_INST_RETIRED.128B_DP + FP_INST_RETIRED.128B_SP)",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith_avx128"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction",
+ "MetricExpr": "INST_RETIRED.ANY / FP_INST_RETIRED.64B_DP",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith_scalar_dp"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction",
+ "MetricExpr": "INST_RETIRED.ANY / FP_INST_RETIRED.32B_SP",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_arith_inst_mix_ipfparith_scalar_sp"
+ },
+ {
+ "BriefDescription": "Percentage of time that retirement is stalled due to a first level data TLB miss",
+ "MetricExpr": "100 * (LD_HEAD.DTLB_MISS_AT_RET + LD_HEAD.PGWALK_AT_RET) / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_bottleneck_%_dtlb_miss_bound_cycles"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_IFETCH.ALL / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Ifetch",
+ "MetricName": "tma_info_bottleneck_%_ifetch_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that allocation and retirement is stalled by the Frontend Cluster due to an Ifetch Miss, either Icache or ITLB Miss. See Info.Ifetch_Bound"
+ },
+ {
+ "BriefDescription": "Percentage of time that retirement is stalled due to an L1 miss",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_LOAD.ALL / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Load_Store_Miss",
+ "MetricName": "tma_info_bottleneck_%_load_miss_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled due to an L1 miss. See Info.Load_Miss_Bound"
+ },
+ {
+ "BriefDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall",
+ "MetricExpr": "100 * LD_HEAD.ANY_AT_RET / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Mem_Exec",
+ "MetricName": "tma_info_bottleneck_%_mem_exec_bound_cycles",
+ "PublicDescription": "Percentage of time that retirement is stalled by the Memory Cluster due to a pipeline stall. See Info.Mem_Exec_Bound"
+ },
+ {
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricName": "tma_info_br_inst_mix_ipbranch"
+ },
+ {
+ "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_CALL",
+ "MetricName": "tma_info_br_inst_mix_ipcall"
+ },
+ {
+ "BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u",
+ "MetricName": "tma_info_br_inst_mix_ipfarbranch"
+ },
+ {
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was not taken",
+ "MetricExpr": "INST_RETIRED.ANY / (BR_MISP_RETIRED.COND - BR_MISP_RETIRED.COND_TAKEN)",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_ntaken"
+ },
+ {
+ "BriefDescription": "Instructions per retired conditional Branch Misprediction where the branch was taken",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_cond_taken"
+ },
+ {
+ "BriefDescription": "Instructions per retired indirect call or jump Branch Misprediction",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_indirect"
+ },
+ {
+ "BriefDescription": "Instructions per retired return Branch Misprediction",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.RETURN",
+ "MetricName": "tma_info_br_inst_mix_ipmisp_ret"
+ },
+ {
+ "BriefDescription": "Instructions per retired Branch Misprediction",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
+ "MetricName": "tma_info_br_inst_mix_ipmispredict"
+ },
+ {
+ "BriefDescription": "Ratio of all branches which mispredict",
+ "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_ratio"
+ },
+ {
+ "BriefDescription": "Ratio between Mispredicted branches and unknown branches",
+ "MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / BACLEARS.ANY",
+ "MetricName": "tma_info_br_mispredict_bound_branch_mispredict_to_unknown_branch_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to load buffer full",
+ "MetricExpr": "100 * MEM_SCHEDULER_BLOCK.LD_BUF / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_buffer_stalls_%_load_buffer_stall_cycles"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to memory reservation stations full",
+ "MetricExpr": "100 * MEM_SCHEDULER_BLOCK.RSV / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_buffer_stalls_%_mem_rsv_stall_cycles"
+ },
+ {
+ "BriefDescription": "Percentage of time that allocation is stalled due to store buffer full",
+ "MetricExpr": "100 * MEM_SCHEDULER_BLOCK.ST_BUF / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_buffer_stalls_%_store_buffer_stall_cycles"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE / INST_RETIRED.ANY",
+ "MetricName": "tma_info_core_cpi"
+ },
+ {
+ "BriefDescription": "Floating Point Operations Per Cycle",
+ "MetricExpr": "FP_FLOPS_RETIRED.ALL / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_core_flopc"
+ },
+ {
+ "BriefDescription": "Instructions Per Cycle",
+ "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_core_ipc"
+ },
+ {
+ "BriefDescription": "Uops Per Instruction",
+ "MetricExpr": "TOPDOWN_RETIRING.ALL_P / INST_RETIRED.ANY",
+ "MetricName": "tma_info_core_upi"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L2",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_IFETCH.L2_HIT / MEM_BOUND_STALLS_IFETCH.ALL",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2hit"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss doesn't hit in the L2",
+ "MetricExpr": "100 * (MEM_BOUND_STALLS_IFETCH.LLC_HIT + MEM_BOUND_STALLS_IFETCH.LLC_MISS) / MEM_BOUND_STALLS_IFETCH.ALL",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l2miss"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss hits in the L3",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_IFETCH.LLC_HIT / MEM_BOUND_STALLS_IFETCH.ALL",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3hit"
+ },
+ {
+ "BriefDescription": "Percentage of ifetch miss bound stalls, where the ifetch miss subsequently misses in the L3",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_IFETCH.LLC_MISS / MEM_BOUND_STALLS_IFETCH.ALL",
+ "MetricName": "tma_info_ifetch_miss_bound_%_ifetchmissbound_with_l3miss"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L2",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_LOAD.L2_HIT / MEM_BOUND_STALLS_LOAD.ALL",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2hit"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses in the L2",
+ "MetricExpr": "100 * (MEM_BOUND_STALLS_LOAD.LLC_HIT + MEM_BOUND_STALLS_LOAD.LLC_MISS) / MEM_BOUND_STALLS_LOAD.ALL",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l2miss"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that hit the L3",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_LOAD.LLC_HIT / MEM_BOUND_STALLS_LOAD.ALL",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3hit"
+ },
+ {
+ "BriefDescription": "Percentage of memory bound stalls where retirement is stalled due to an L1 miss that subsequently misses the L3",
+ "MetricExpr": "100 * MEM_BOUND_STALLS_LOAD.LLC_MISS / MEM_BOUND_STALLS_LOAD.ALL",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_miss_bound_%_loadmissbound_with_l3miss"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement due to a pipeline block",
+ "MetricExpr": "100 * LD_HEAD.L1_BOUND_AT_RET / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_l1_bound"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the oldest load of the load buffer is stalled at retirement",
+ "MetricExpr": "100 * (LD_HEAD.L1_BOUND_AT_RET + MEM_BOUND_STALLS_LOAD.ALL) / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_load_bound"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to store buffer full",
+ "MetricExpr": "100 * (MEM_SCHEDULER_BLOCK.ST_BUF / MEM_SCHEDULER_BLOCK.ALL) * tma_mem_scheduler",
+ "MetricGroup": "load_store_bound",
+ "MetricName": "tma_info_load_store_bound_store_bound"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to floating point assists",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.FP_ASSIST / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_fp_assist_pki"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to page faults",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.PAGE_FAULT / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_page_fault_pki"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears relative to thousands of instructions retired, due to self-modifying code",
+ "MetricExpr": "1e3 * MACHINE_CLEARS.SMC / INST_RETIRED.ANY",
+ "MetricName": "tma_info_machine_clear_bound_machine_clears_smc_pki"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads with an address aliasing block",
+ "MetricExpr": "100 * LD_BLOCKS.ADDRESS_ALIAS / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_adressaliasing"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads with a store forward or unknown store address block",
+ "MetricExpr": "100 * LD_BLOCKS.DATA_UNKNOWN / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_exec_blocks_%_loads_with_storefwdblk"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a first level data cache miss",
+ "MetricExpr": "100 * LD_HEAD.L1_MISS_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_l1miss"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to other block cases, such as pipeline conflicts, fences, etc",
+ "MetricExpr": "100 * LD_HEAD.OTHER_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_otherpipelineblks"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a pagewalk",
+ "MetricExpr": "100 * LD_HEAD.PGWALK_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_pagewalk"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a second level TLB miss",
+ "MetricExpr": "100 * LD_HEAD.DTLB_MISS_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_stlbhit"
+ },
+ {
+ "BriefDescription": "Percentage of Memory Execution Bound due to a store forward address match",
+ "MetricExpr": "100 * LD_HEAD.ST_ADDR_AT_RET / LD_HEAD.ANY_AT_RET",
+ "MetricName": "tma_info_mem_exec_bound_%_loadhead_with_storefwding"
+ },
+ {
+ "BriefDescription": "Instructions per Load",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_mix_ipload"
+ },
+ {
+ "BriefDescription": "Instructions per Store",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_UOPS_RETIRED.ALL_STORES",
+ "MetricName": "tma_info_mem_mix_ipstore"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads that perform one or more locks",
+ "MetricExpr": "100 * MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_mix_load_locks_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of total non-speculative loads that are splits",
+ "MetricExpr": "100 * MEM_UOPS_RETIRED.SPLIT_LOADS / MEM_UOPS_RETIRED.ALL_LOADS",
+ "MetricName": "tma_info_mem_mix_load_splits_ratio"
+ },
+ {
+ "BriefDescription": "Ratio of mem load uops to all uops",
+ "MetricExpr": "1e3 * MEM_UOPS_RETIRED.ALL_LOADS / TOPDOWN_RETIRING.ALL_P",
+ "MetricName": "tma_info_mem_mix_memload_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of time that the core is stalled due to a TPAUSE or UMWAIT instruction",
+ "MetricExpr": "100 * SERIALIZATION.C01_MS_SCB / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricName": "tma_info_serialization_%_tpause_cycles"
+ },
+ {
+ "BriefDescription": "Average CPU Utilization",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricName": "tma_info_system_cpu_utilization"
+ },
+ {
+ "BriefDescription": "Giga Floating Point Operations Per Second",
+ "MetricExpr": "FP_FLOPS_RETIRED.ALL / (duration_time * 1e9)",
+ "MetricGroup": "Flops",
+ "MetricName": "tma_info_system_gflops",
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
+ },
+ {
+ "BriefDescription": "Fraction of cycles spent in Kernel mode",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE_P:k / CPU_CLK_UNHALTED.CORE",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_kernel_utilization"
+ },
+ {
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE_P / CPU_CLK_UNHALTED.CORE",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
+ "BriefDescription": "Average Frequency Utilization relative nominal frequency",
+ "MetricExpr": "CPU_CLK_UNHALTED.CORE / CPU_CLK_UNHALTED.REF_TSC",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_system_turbo_utilization"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are FPDiv uops",
+ "MetricExpr": "100 * UOPS_RETIRED.FPDIV / TOPDOWN_RETIRING.ALL_P",
+ "MetricName": "tma_info_uop_mix_fpdiv_uop_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are IDiv uops",
+ "MetricExpr": "100 * UOPS_RETIRED.IDIV / TOPDOWN_RETIRING.ALL_P",
+ "MetricName": "tma_info_uop_mix_idiv_uop_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are microcode ops",
+ "MetricExpr": "100 * UOPS_RETIRED.MS / TOPDOWN_RETIRING.ALL_P",
+ "MetricName": "tma_info_uop_mix_microcode_uop_ratio"
+ },
+ {
+ "BriefDescription": "Percentage of all uops which are x87 uops",
+ "MetricExpr": "100 * UOPS_RETIRED.X87 / TOPDOWN_RETIRING.ALL_P",
+ "MetricName": "tma_info_uop_mix_x87_uop_ratio"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to Instruction Table Lookaside Buffer (ITLB) misses.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.ITLB_MISS / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_latency_group",
+ "MetricName": "tma_itlb_misses",
+ "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_ifetch_latency > 0.15 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_bad_speculation_group",
+ "MetricName": "tma_machine_clears",
+ "MetricThreshold": "tma_machine_clears > 0.05 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to memory reservation stalls in which a scheduler is not able to accept uops",
+ "MetricExpr": "TOPDOWN_BE_BOUND.MEM_SCHEDULER / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_mem_scheduler",
+ "MetricThreshold": "tma_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to IEC or FPC RAT stalls, which can be due to FIQ or IEC reservation stalls in which the integer, floating point or SIMD scheduler is not able to accept uops",
+ "MetricExpr": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_non_mem_scheduler",
+ "MetricThreshold": "tma_non_mem_scheduler > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to a machine clear that requires the use of microcode (slow nuke)",
+ "MetricExpr": "TOPDOWN_BAD_SPECULATION.NUKE / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_nuke",
+ "MetricThreshold": "tma_nuke > 0.05 & (tma_machine_clears > 0.05 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to other common frontend stalls not categorized.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.OTHER / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_other_fb",
+ "MetricThreshold": "tma_other_fb > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not delivered by the frontend due to wrong predecodes.",
+ "MetricExpr": "TOPDOWN_FE_BOUND.PREDECODE / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_ifetch_bandwidth_group",
+ "MetricName": "tma_predecode",
+ "MetricThreshold": "tma_predecode > 0.05 & (tma_ifetch_bandwidth > 0.1 & tma_frontend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the physical register file unable to accept an entry (marble stalls)",
+ "MetricExpr": "TOPDOWN_BE_BOUND.REGISTER / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_register",
+ "MetricThreshold": "tma_register > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to the reorder buffer being full (ROB stalls)",
+ "MetricExpr": "TOPDOWN_BE_BOUND.REORDER_BUFFER / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_reorder_buffer",
+ "MetricThreshold": "tma_reorder_buffer > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles the core is stalled due to a resource limitation",
+ "MetricExpr": "tma_backend_bound - tma_core_bound",
+ "MetricGroup": "TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_resource_bound",
+ "MetricThreshold": "tma_resource_bound > 0.2 & tma_backend_bound > 0.1",
+ "MetricgroupNoGroup": "TopdownL2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that result in retirement slots",
+ "MetricExpr": "TOPDOWN_RETIRING.ALL_P / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL1;tma_L1_group",
+ "MetricName": "tma_retiring",
+ "MetricThreshold": "tma_retiring > 0.75",
+ "MetricgroupNoGroup": "TopdownL1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend due to scoreboards from the instruction queue (IQ), jump execution unit (JEU), or microcode sequencer (MS)",
+ "MetricExpr": "TOPDOWN_BE_BOUND.SERIALIZATION / (6 * CPU_CLK_UNHALTED.CORE)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_resource_bound_group",
+ "MetricName": "tma_serialization",
+ "MetricThreshold": "tma_serialization > 0.1 & (tma_resource_bound > 0.2 & tma_backend_bound > 0.1)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uncore operating frequency in GHz",
+ "MetricExpr": "UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_CLOCKTICKS) * #num_packages) / 1e9 / duration_time",
+ "MetricName": "uncore_frequency",
+ "ScaleUnit": "1GHz"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/memory.json b/tools/perf/pmu-events/arch/x86/grandridge/memory.json
index 79d8af45100c..48b6301e7696 100644
--- a/tools/perf/pmu-events/arch/x86/grandridge/memory.json
+++ b/tools/perf/pmu-events/arch/x86/grandridge/memory.json
@@ -1,19 +1,96 @@
[
{
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to any number of reasons, including an L1 miss, WCB full, pagewalk, store address block or store data block, on a load that retires.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.ANY_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xff"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer is stalled due to a core bound stall including a store address match, a DTLB miss or a page walk that detains the load from retiring.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.L1_BOUND_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xf4"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DL1 miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.L1_MISS_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x81"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.OTHER_AT_RET",
+ "PublicDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to other block cases such as pipeline conflicts, fences, etc.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc0"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a pagewalk.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.PGWALK_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xa0"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a store address match.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.ST_ADDR_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x84"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to memory ordering caused by a snoop from an external agent. Does not count internally generated machine clears such as those due to memory disambiguation.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
+ "SampleAfterValue": "20003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts misaligned loads that are 4K page splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x13",
+ "EventName": "MISALIGN_MEM_REF.LOAD_PAGE_SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts misaligned stores that are 4K page splits.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x13",
+ "EventName": "MISALIGN_MEM_REF.STORE_PAGE_SPLIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3FBFC00001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xB7",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3FBFC00002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
}
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/metricgroups.json b/tools/perf/pmu-events/arch/x86/grandridge/metricgroups.json
new file mode 100644
index 000000000000..40984c23a6c9
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/metricgroups.json
@@ -0,0 +1,23 @@
+{
+ "Flops": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Ifetch": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Load_Store_Miss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Mem_Exec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Summary": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TopdownL1": "Metrics for top-down breakdown at level 1",
+ "TopdownL2": "Metrics for top-down breakdown at level 2",
+ "TopdownL3": "Metrics for top-down breakdown at level 3",
+ "load_store_bound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "tma_L1_group": "Metrics for top-down breakdown at level 1",
+ "tma_L2_group": "Metrics for top-down breakdown at level 2",
+ "tma_L3_group": "Metrics for top-down breakdown at level 3",
+ "tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
+ "tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
+ "tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
+ "tma_ifetch_bandwidth_group": "Metrics contributing to tma_ifetch_bandwidth category",
+ "tma_ifetch_latency_group": "Metrics contributing to tma_ifetch_latency category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
+ "tma_resource_bound_group": "Metrics contributing to tma_resource_bound category"
+}
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/other.json b/tools/perf/pmu-events/arch/x86/grandridge/other.json
index 2414f6ff53b0..ea34103a8292 100644
--- a/tools/perf/pmu-events/arch/x86/grandridge/other.json
+++ b/tools/perf/pmu-events/arch/x86/grandridge/other.json
@@ -1,19 +1,21 @@
[
{
- "BriefDescription": "Counts demand data reads that have any type of response.",
- "EventCode": "0xB7",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
+ "BriefDescription": "This event is deprecated. [This event is alias to MISC_RETIRED.LBR_INSERTS]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xe4",
+ "EventName": "LBR_INSERTS.ANY",
+ "SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
- "BriefDescription": "Counts demand reads for ownership (RFO) and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xB7",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10002",
+ "MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
}
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/pipeline.json b/tools/perf/pmu-events/arch/x86/grandridge/pipeline.json
index 41212957ef21..f56d8d816e53 100644
--- a/tools/perf/pmu-events/arch/x86/grandridge/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/grandridge/pipeline.json
@@ -1,40 +1,206 @@
[
{
+ "BriefDescription": "Counts the number of cycles when any of the floating point or integer dividers are active.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xcd",
+ "EventName": "ARITH.DIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
"BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
"SampleAfterValue": "200003"
},
{
+ "BriefDescription": "Counts the number of retired JCC (Jump on Conditional Code) branch instructions retired, includes both taken and not taken branches.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND",
+ "SampleAfterValue": "200003",
+ "UMask": "0x7e"
+ },
+ {
+ "BriefDescription": "Counts the number of taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfe"
+ },
+ {
+ "BriefDescription": "Counts the number of far branch instructions retired, includes far jump, far call and return, and interrupt call and return.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.FAR_BRANCH",
+ "SampleAfterValue": "200003",
+ "UMask": "0xbf"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT",
+ "SampleAfterValue": "200003",
+ "UMask": "0xeb"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfb"
+ },
+ {
+ "BriefDescription": "Counts the number of near indirect JMP branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT_JMP",
+ "SampleAfterValue": "200003",
+ "UMask": "0xef"
+ },
+ {
+ "BriefDescription": "This event is deprecated. Refer to new event BR_INST_RETIRED.INDIRECT_CALL",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.IND_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfb"
+ },
+ {
+ "BriefDescription": "Counts the number of near CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf9"
+ },
+ {
+ "BriefDescription": "Counts the number of near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_RETURN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf7"
+ },
+ {
+ "BriefDescription": "Counts the number of near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xc0"
+ },
+ {
+ "BriefDescription": "Counts the number of near relative CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.REL_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfd"
+ },
+ {
+ "BriefDescription": "Counts the number of near relative JMP branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.REL_JMP",
+ "SampleAfterValue": "200003",
+ "UMask": "0xdf"
+ },
+ {
"BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
"SampleAfterValue": "200003"
},
{
+ "BriefDescription": "Counts the number of mispredicted JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND",
+ "SampleAfterValue": "200003",
+ "UMask": "0x7e"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted taken JCC (Jump on Conditional Code) branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfe"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect JMP and near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT",
+ "SampleAfterValue": "200003",
+ "UMask": "0xeb"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect CALL branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
+ "SampleAfterValue": "200003",
+ "UMask": "0xfb"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near indirect JMP branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_JMP",
+ "SampleAfterValue": "200003",
+ "UMask": "0xef"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
+ "SampleAfterValue": "200003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Counts the number of mispredicted near RET branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.RETURN",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf7"
+ },
+ {
"BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.CORE",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.CORE_P",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Fixed Counter: Counts the number of unhalted reference clock cycles",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"SampleAfterValue": "2000003",
"UMask": "0x3"
},
{
"BriefDescription": "Counts the number of unhalted reference clock cycles at TSC frequency.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
"PublicDescription": "Counts the number of reference cycles that the core is not in a halt state. The core enters the halt state when it is running the HLT instruction. This event is not affected by core frequency changes and increments at a fixed frequency that is also used for the Time Stamp Counter (TSC). This event uses a programmable general purpose performance counter.",
@@ -43,54 +209,362 @@
},
{
"BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Fixed Counter: Counts the number of instructions retired",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
+ "PublicDescription": "Fixed Counter: Counts the number of instructions retired Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts the number of instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"SampleAfterValue": "2000003"
},
{
- "BriefDescription": "Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
+ "BriefDescription": "Counts the number of retired loads that are blocked because it initially appears to be store forward blocked, but subsequently is shown not to be blocked based on 4K alias check.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.ADDRESS_ALIAS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of retired loads that are blocked because its address exactly matches an older store whose data is not ready.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.DATA_UNKNOWN",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of retired loads that are blocked because its address partially overlapped with an older store.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.STORE_FORWARD",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to memory ordering in which an internal load passes an older store within the same CPU.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.DISAMBIGUATION",
+ "SampleAfterValue": "20003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to a page fault. Counts both I-Side and D-Side (Loads/Stores) page faults. A page fault occurs when either the page is not present, or an access violation occurs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.PAGE_FAULT",
+ "SampleAfterValue": "20003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.SLOW",
+ "SampleAfterValue": "20003",
+ "UMask": "0x6f"
+ },
+ {
+ "BriefDescription": "Counts the number of machine clears due to program modifying data (self modifying code) within 1K of a recently fetched code page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.SMC",
+ "SampleAfterValue": "20003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of Last Branch Record (LBR) entries. Requires LBRs to be enabled and configured in IA32_LBR_CTL. [This event is alias to LBR_INSERTS.ANY]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe4",
+ "EventName": "MISC_RETIRED.LBR_INSERTS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots in a UMWAIT or TPAUSE instruction where no uop issues due to the instruction putting the CPU into the C0.1 activity state.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x75",
+ "EventName": "SERIALIZATION.C01_MS_SCB",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. [This event is alias to TOPDOWN_BAD_SPECULATION.ALL_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x73",
"EventName": "TOPDOWN_BAD_SPECULATION.ALL",
- "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows, and while uops are not yet available in the instruction queue (IQ) or until an FE_BOUND event occurs besides OTHER and CISC. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows, and while uops are not yet available in the instruction queue (IQ) or until an FE_BOUND event occurs besides OTHER and CISC. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear. [This event is alias to TOPDOWN_BAD_SPECULATION.ALL_P]",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. [This event is alias to TOPDOWN_BAD_SPECULATION.ALL]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.ALL_P",
+ "PublicDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Only issue slots wasted due to fast nukes such as memory ordering nukes are counted. Other nukes are not accounted for. Counts all issue slots blocked during this recovery window, including relevant microcode flows, and while uops are not yet available in the instruction queue (IQ) or until an FE_BOUND event occurs besides OTHER and CISC. Also includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear. [This event is alias to TOPDOWN_BAD_SPECULATION.ALL]",
"SampleAfterValue": "1000003"
},
{
- "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls",
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to Fast Nukes such as Memory Ordering Machine clears and MRN nukes",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.FASTNUKE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the total number of issue slots that were not consumed by the backend because allocation is stalled due to a machine clear (nuke) of any kind including memory ordering and memory disambiguation.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.MACHINE_CLEARS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to Branch Mispredict",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.MISPREDICT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to a machine clear (nuke).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x73",
+ "EventName": "TOPDOWN_BAD_SPECULATION.NUKE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls [This event is alias to TOPDOWN_BE_BOUND.ALL_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x74",
"EventName": "TOPDOWN_BE_BOUND.ALL",
"SampleAfterValue": "1000003"
},
{
- "BriefDescription": "Counts the number of retirement slots not consumed due to front end stalls",
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to due to certain allocation restrictions",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.ALLOC_RESTRICTIONS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls [This event is alias to TOPDOWN_BE_BOUND.ALL]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.ALL_P",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to memory reservation stall (scheduler not being able to accept another uop). This could be caused by RSV full or load/store buffer block.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.MEM_SCHEDULER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to IEC and FPC RAT stalls - which can be due to the FIQ and IEC reservation station stall (integer, FP and SIMD scheduler not being able to accept another uop. )",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.NON_MEM_SCHEDULER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to mrbl stall. A 'marble' refers to a physical register file entry, also known as the physical destination (PDST).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.REGISTER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to ROB full",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.REORDER_BUFFER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not consumed by the backend due to iq/jeu scoreboards or ms scb",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x74",
+ "EventName": "TOPDOWN_BE_BOUND.SERIALIZATION",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of retirement slots not consumed due to front end stalls [This event is alias to TOPDOWN_FE_BOUND.ALL_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x71",
"EventName": "TOPDOWN_FE_BOUND.ALL",
"SampleAfterValue": "1000003"
},
{
- "BriefDescription": "Counts the number of consumed retirement slots. Similar to UOPS_RETIRED.ALL",
+ "BriefDescription": "Counts the number of retirement slots not consumed due to front end stalls [This event is alias to TOPDOWN_FE_BOUND.ALL]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ALL_P",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BAClear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.BRANCH_DETECT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to BTClear",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.BRANCH_RESTEER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to ms",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.CISC",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to decode stall",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.DECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to frontend bandwidth restrictions due to decode, predecode, cisc, and other limitations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.FRONTEND_BANDWIDTH",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8d"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to latency related stalls including BACLEARs, BTCLEARs, ITLB misses, and ICache misses.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.FRONTEND_LATENCY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x72"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to TOPDOWN_FE_BOUND.ITLB_MISS]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "Deprecated": "1",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ITLB",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to itlb miss [This event is alias to TOPDOWN_FE_BOUND.ITLB]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.ITLB_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend that do not categorize into any other common frontend stall",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.OTHER",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Counts the number of issue slots every cycle that were not delivered by the frontend due to predecode wrong",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x71",
+ "EventName": "TOPDOWN_FE_BOUND.PREDECODE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of consumed retirement slots. [This event is alias to TOPDOWN_RETIRING.ALL_P]",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x72",
"EventName": "TOPDOWN_RETIRING.ALL",
- "PEBS": "1",
"SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Counts the number of consumed retirement slots. [This event is alias to TOPDOWN_RETIRING.ALL]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x72",
+ "EventName": "TOPDOWN_RETIRING.ALL_P",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Counts the number of uops issued by the front end every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x0e",
+ "EventName": "UOPS_ISSUED.ANY",
+ "PublicDescription": "Counts the number of uops issued by the front end every cycle. When 4-uops are requested and only 2-uops are delivered, the event counts 2. Uops_issued correlates to the number of ROB entries. If uop takes 2 ROB slots it counts as 2 uops_issued.",
+ "SampleAfterValue": "1000003"
+ },
+ {
+ "BriefDescription": "Counts the total number of uops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.ALL",
+ "SampleAfterValue": "2000003"
+ },
+ {
+ "BriefDescription": "Counts the number of integer divide uops retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.IDIV",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of uops that are from the complex flows issued by the micro-sequencer (MS). This includes uops from flows due to complex instructions, faults, assists, and inserted flows.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.MS",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of x87 uops retired, includes those in ms flows",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.X87",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
}
]
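The core-event definitions above are consumed by perf's pmu-events tables, but each EventCode/UMask pair can also be programmed directly. Below is a minimal illustrative sketch, not part of the patch, assuming the generic Intel core PMU encoding (event select in config bits 0-7, unit mask in bits 8-15); the event used, TOPDOWN_BE_BOUND.MEM_SCHEDULER (EventCode 0x74, UMask 0x2), is taken from the hunk above, and the perf invocation it prints is only an example of the raw-config syntax.

    # Illustrative sketch: pack an EventCode/UMask pair from the JSON above into
    # the raw config value used by the generic Intel core PMU encoding
    # (event select in bits 0-7, unit mask in bits 8-15).
    def raw_config(event_code: int, umask: int = 0) -> int:
        return (umask << 8) | event_code

    # TOPDOWN_BE_BOUND.MEM_SCHEDULER from the hunk above: EventCode 0x74, UMask 0x2.
    cfg = raw_config(0x74, 0x2)
    print(f"config = {cfg:#x}")                          # prints: config = 0x274
    print(f"perf stat -e cpu/config={cfg:#x}/ -a sleep 1")

This is the same packing that perf's symbolic syntax cpu/event=0x74,umask=0x2/ expands to, so either form should select the same counter, assuming the running CPU actually implements this event.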
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/uncore-cache.json b/tools/perf/pmu-events/arch/x86/grandridge/uncore-cache.json
new file mode 100644
index 000000000000..b89ab6e5cfb5
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/uncore-cache.json
@@ -0,0 +1,2117 @@
+[
+ {
+ "BriefDescription": "Clockticks for CMS units attached to CHA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_CHACMS_CLOCKTICKS",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "CHACMS"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles FAST trigger is received from the global FAST distress wire.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHACMS_RING_SRC_THRTL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "CHACMS"
+ },
+ {
+ "BriefDescription": "Number of CHA clock cycles while the event is enabled",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_CHA_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "Clockticks of the uncore caching and home agent (CHA)",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Distress signal assertion for dynamic prefetch throttle (DPT). Threshold for distress signal assertion reached in TOR or IRQ (immediate cause for triggering).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x59",
+ "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_ANY",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x3",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Distress signal assertion for dynamic prefetch throttle (DPT). Threshold for distress signal assertion reached in IRQ (immediate cause for triggering).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x59",
+ "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_IRQ",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Distress signal assertion for dynamic prefetch throttle (DPT). Threshold for distress signal assertion reached in TOR (immediate cause for triggering).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x59",
+ "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_TOR",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts when a normal (Non-Isochronous) full line write is issued from the CHA to the any of the memory controller channels.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line : Counts the total number of full line writes issued from the HA into the memory controller.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH : Counts the total number of full line writes issued from the HA into the memory controller.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial : Counts the total number of full line writes issued from the HA into the memory controller.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: CRd Requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x1bd0ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests and Read Prefetches",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x1bc1ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests, Read Prefetches, and Snoops",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Data Reads",
+ "UMask": "0x1fc1ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Demand Data Reads, Core and LLC prefetches",
+ "UMask": "0x841ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests, Read Prefetches, and Snoops which miss the Cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Data Read Misses",
+ "UMask": "0x1fc101",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCALLY_HOMED_ADDRESS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Transactions homed locally",
+ "UMask": "0xbdfff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Code Read Requests and Code Read Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x19d0ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests and Read Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x19c1ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Code Read Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x1850ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x1841ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : RFO Requests",
+ "UMask": "0x1848ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: LLC Prefetch Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_LLC_PF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x189dff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x199dff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Code Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x1910ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x1981ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : RFO Requests",
+ "UMask": "0x1908ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Requests and RFO Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : RFO Requests",
+ "UMask": "0x19c8ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All RFO and RFO Prefetches",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : All RFOs - Demand and Prefetches",
+ "UMask": "0x1bc8ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Requests and RFO Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.RFO_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Locally HOMed RFOs - Demand and Prefetches",
+ "UMask": "0x9c8ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Writes to Locally Homed Memory (includes writebacks from L1/L2)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.WRITE_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Writes",
+ "UMask": "0x842ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : All Lines Victimized",
+ "UMask": "0xf",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : IA traffic : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : IO traffic : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - All Lines",
+ "UMask": "0x200f",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_E",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in E State",
+ "UMask": "0x2002",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_F",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in F State",
+ "UMask": "0x2008",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_M",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in M State",
+ "UMask": "0x2001",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in S State",
+ "UMask": "0x2004",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_E",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Lines in E state",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_M",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Lines in M state",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Lines in S State",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts when a RFO (the Read for Ownership issued before a write) request hit a cacheline in the S (Shared) state.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x39",
+ "EventName": "UNC_CHA_MISC.RFO_HIT_S",
+ "PerPkg": "1",
+ "PublicDescription": "Cbo Misc : RFO HitS",
+ "UMask": "0x8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : Local InvItoE : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.LOCAL_INVITOE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : Local Rd : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.LOCAL_READ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : Off : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.OFF_PWRHEURISTIC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadcast : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.RFO_HITS_SNP_BCAST",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.INVITOE",
+ "PerPkg": "1",
+ "PublicDescription": "HA Read and Write Requests : InvalItoE",
+ "UMask": "0x30",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.INVITOE_LOCAL",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts read requests made into this CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write) .",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.READS",
+ "PerPkg": "1",
+ "PublicDescription": "HA Read and Write Requests : Reads",
+ "UMask": "0x3",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts read requests coming from a unit on this socket made into this CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.READS_LOCAL",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts write requests made into the CHA, including streaming, evictions, HitM (Reads from another core to a Modified cacheline), etc.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.WRITES",
+ "PerPkg": "1",
+ "PublicDescription": "HA Read and Write Requests : Writes",
+ "UMask": "0xc",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts write requests coming from a unit on this socket made into this CHA, including streaming, evictions, HitM (Reads from another core to a Modified cacheline), etc.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.WRITES_LOCAL",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Ingress (from CMS) Allocations : IRQ : Counts number of allocations per cycle into the specified Ingress queue.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_CHA_RxC_INSERTS.IRQ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Ingress (from CMS) Occupancy : IRQ : Counts number of entries in the specified Ingress queue in each cycle.",
+ "Counter": "0",
+ "EventCode": "0x11",
+ "EventName": "UNC_CHA_RxC_OCCUPANCY.IRQ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR Inserts",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All",
+ "UMask": "0xc001ffff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests from IA Cores",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from iA Cores",
+ "UMask": "0xc001ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CLFlush events that are initiated from the Core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CLFlushes issued by iA Cores",
+ "UMask": "0xc8c7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CLFlushOpt events that are initiated from the Core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSHOPT",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CLFlushOpts issued by iA Cores",
+ "UMask": "0xc8d7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRDs issued by iA Cores",
+ "UMask": "0xc80fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts; Code read prefetch from local IA that misses in the snoop filter",
+ "UMask": "0xc88fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read opt from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Opts issued by iA Cores",
+ "UMask": "0xc827ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read opt prefetch from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores",
+ "UMask": "0xc8a7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests from IA Cores which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from iA Cores that Hit the LLC",
+ "UMask": "0xc001fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Hit the LLC",
+ "UMask": "0xc80ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read prefetch from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that hit the LLC",
+ "UMask": "0xc88ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read opt from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Opts issued by iA Cores that hit the LLC",
+ "UMask": "0xc827fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read opt prefetch from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores that hit the LLC",
+ "UMask": "0xc8a7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM requests from local IA cores that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by iA Cores that Hit LLC",
+ "UMask": "0xcc47fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch code read from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that hit the LLC",
+ "UMask": "0xcccffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch data read from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefData issued by iA Cores that hit the LLC",
+ "UMask": "0xccd7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch read for ownership from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "UMask": "0xccc7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc807fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc887fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM events that are initiated from the Core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by iA Cores",
+ "UMask": "0xcc47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNear requests from local IA cores",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears issued by iA Cores",
+ "UMask": "0xcd47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch code read from local IA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores",
+ "UMask": "0xcccfff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch data read from local IA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefData issued by iA Cores",
+ "UMask": "0xccd7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch read for ownership from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores",
+ "UMask": "0xccc7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests from IA Cores which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC",
+ "UMask": "0xc001fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Missed the LLC",
+ "UMask": "0xc80ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CRDs from local IA cores to locally homed memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc80efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc88ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CRD Prefetches from local IA cores to locally homed memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc88efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read opt from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Opt issued by iA Cores that missed the LLC",
+ "UMask": "0xc827fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Inserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRd_Opt, and which target local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Opt issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc826fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read opt prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores that missed the LLC",
+ "UMask": "0xc8a7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Inserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRD_PREF_OPT, and target local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : Data read opt prefetch from local iA that missed the LLC targeting local memory",
+ "UMask": "0xc8a6fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM requests from local IA cores that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by iA Cores that Missed LLC",
+ "UMask": "0xcc47fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch code read from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that missed the LLC",
+ "UMask": "0xcccffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch data read from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefData issued by iA Cores that missed the LLC",
+ "UMask": "0xccd7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch read for ownership from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "UMask": "0xccc7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc8668601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc86e8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc807fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA that miss the LLC targeting local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc806fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc887fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA that miss the LLC targeting local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc886fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UCRDF requests from local IA cores that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_UCRDF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : UCRdFs issued by iA Cores that Missed LLC",
+ "UMask": "0xc877de01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from a local IA core that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc86ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA core that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLF issued by iA Cores that Missed the LLC",
+ "UMask": "0xc867fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc8678601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc86f8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WIL requests from local IA cores that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WiLs issued by iA Cores that Missed LLC",
+ "UMask": "0xc87fde01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores",
+ "UMask": "0xc807ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores",
+ "UMask": "0xc887ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "SpecItoM events that are initiated from the Core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_SPECITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : SpecItoMs issued by iA Cores",
+ "UMask": "0xcc57ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbEFtoEs issued by iA Cores. (Non Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc3fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbEFtoIs issued by iA Cores . (Non Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc37ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbMtoEs issued by iA Cores . (Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc2fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbMtoI requests from local IA cores",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WbMtoIs issued by iA Cores",
+ "UMask": "0xcc27ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbStoIs issued by iA Cores . (Non Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBSTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc67ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from a local IA core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores",
+ "UMask": "0xc86fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLF issued by iA Cores",
+ "UMask": "0xc867ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR inserts from local IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from IO Devices",
+ "UMask": "0xc001ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CLFlush requests from IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CLFlushes issued by IO Devices",
+ "UMask": "0xc8c3ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR inserts from local IO devices which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from IO Devices that hit the LLC",
+ "UMask": "0xc001fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMs from local IO devices which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "UMask": "0xcd43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCURs issued by IO devices which hit the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that hit the LLC",
+ "UMask": "0xc8f3fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "RFOs from local IO devices which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by IO Devices that hit the LLC",
+ "UMask": "0xc803fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR ItoM inserts from local IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices",
+ "UMask": "0xcc43ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "UMask": "0xcd43ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR inserts from local IO devices which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from IO Devices that missed the LLC",
+ "UMask": "0xc001fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR ItoM inserts from local IO devices which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that missed the LLC",
+ "UMask": "0xcc43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "UMask": "0xcd43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCURs issued by IO devices which miss the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that missed the LLC",
+ "UMask": "0xc8f3fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR RFO inserts from local IO devices which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by IO Devices that missed the LLC",
+ "UMask": "0xc803fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCURs issued by IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices",
+ "UMask": "0xc8f3ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "RFOs from local IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by IO Devices",
+ "UMask": "0xc803ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WBMtoI requests from IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WbMtoIs issued by IO Devices",
+ "UMask": "0xcc23ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Inserts for SF or LLC Evictions",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LLC_OR_SF_EVICTIONS",
+ "PerPkg": "1",
+ "PublicDescription": "TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
+ "UMask": "0xc001ff02",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LOC_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All from Local iA and IO",
+ "UMask": "0xc000ff05",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All from Local iA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LOC_IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All from Local iA",
+ "UMask": "0xc000ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All from Local IO",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LOC_IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All from Local IO",
+ "UMask": "0xc000ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Occupancy for all TOR entries",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All",
+ "UMask": "0xc001ffff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests from IA Cores",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from iA Cores",
+ "UMask": "0xc001ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CLFlush events that are initiated from the Core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CLFlushes issued by iA Cores",
+ "UMask": "0xc8c7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CLFlushOpt events that are initiated from the Core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSHOPT",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CLFlushOpts issued by iA Cores",
+ "UMask": "0xc8d7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRDs issued by iA Cores",
+ "UMask": "0xc80fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy; Code read prefetch from local IA that misses in the snoop filter",
+ "UMask": "0xc88fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read opt from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Opts issued by iA Cores",
+ "UMask": "0xc827ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read opt prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT_PREF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores",
+ "UMask": "0xc8a7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests from IA Cores which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from iA Cores that Hit the LLC",
+ "UMask": "0xc001fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC",
+ "UMask": "0xc80ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read prefetch from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that hit the LLC",
+ "UMask": "0xc88ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read opt from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Opts issued by iA Cores that hit the LLC",
+ "UMask": "0xc827fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read opt prefetch from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT_PREF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that hit the LLC",
+ "UMask": "0xc8a7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM requests from local IA cores that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC",
+ "UMask": "0xcc47fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch code read from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that hit the LLC",
+ "UMask": "0xcccffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch data read from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that hit the LLC",
+ "UMask": "0xccd7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch read for ownership from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "UMask": "0xccc7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc807fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc887fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM events that are initiated from the Core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores",
+ "UMask": "0xcc47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNear requests from local IA cores",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores",
+ "UMask": "0xcd47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch code read from local IA.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores",
+ "UMask": "0xcccfff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch data read from local IA.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores",
+ "UMask": "0xccd7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch read for ownership from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores",
+ "UMask": "0xccc7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests from IA Cores which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from iA Cores that Missed the LLC",
+ "UMask": "0xc001fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Missed the LLC",
+ "UMask": "0xc80ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CRDs from local IA cores to locally homed memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc80efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc88ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CRD Prefetches from local IA cores to locally homed memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc88efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read opt from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Opt issued by iA Cores that missed the LLC",
+ "UMask": "0xc827fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read opt prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that missed the LLC",
+ "UMask": "0xc8a7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM requests from local IA cores that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC",
+ "UMask": "0xcc47fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch code read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that missed the LLC",
+ "UMask": "0xcccffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch data read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that missed the LLC",
+ "UMask": "0xccd7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch read for ownership from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "UMask": "0xccc7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc8668601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc86e8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc807fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc806fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc887fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc886fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for UCRDF requests from local IA cores that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_UCRDF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC",
+ "UMask": "0xc877de01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from a local IA core that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc86ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA core that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC",
+ "UMask": "0xc867fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc8678601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc86f8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WIL requests from local IA cores that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC",
+ "UMask": "0xc87fde01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores",
+ "UMask": "0xc807ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores",
+ "UMask": "0xc887ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for SpecItoM events that are initiated from the Core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_SPECITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : SpecItoMs issued by iA Cores",
+ "UMask": "0xcc57ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WbMtoI requests from local IA cores",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WbMtoIs issued by iA Cores",
+ "UMask": "0xcc27ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from a local IA core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores",
+ "UMask": "0xc86fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores",
+ "UMask": "0xc867ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All TOR inserts from local IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from IO Devices",
+ "UMask": "0xc001ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CLFlush requests from IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CLFlushes issued by IO Devices",
+ "UMask": "0xc8c3ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All TOR inserts from local IO devices which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from IO Devices that hit the LLC",
+ "UMask": "0xc001fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMs from local IO devices which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "UMask": "0xcd43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCURs issued by IO devices which hit the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that hit the LLC",
+ "UMask": "0xc8f3fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RFOs from local IO devices which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that hit the LLC",
+ "UMask": "0xc803fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All TOR ItoM inserts from local IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices",
+ "UMask": "0xcc43ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "UMask": "0xcd43ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All TOR inserts from local IO devices which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from IO Devices that missed the LLC",
+ "UMask": "0xc001fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All TOR ItoM inserts from local IO devices which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that missed the LLC",
+ "UMask": "0xcc43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "UMask": "0xcd43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCURs issued by IO devices which miss the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that missed the LLC",
+ "UMask": "0xc8f3fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All TOR RFO inserts from local IO devices which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that missed the LLC",
+ "UMask": "0xc803fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCURs issued by IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices",
+ "UMask": "0xc8f3ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RFOs from local IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices",
+ "UMask": "0xc803ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WBMtoI requests from IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WbMtoIs issued by IO Devices",
+ "UMask": "0xcc23ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All from Local iA and IO",
+ "UMask": "0xc000ff05",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All from Local iA",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All from Local iA",
+ "UMask": "0xc000ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All from Local IO",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All from Local IO",
+ "UMask": "0xc000ff04",
+ "Unit": "CHA"
+ }
+]
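The array closed above is one flat list of event records; perf's pmu-events build step turns such
files into C lookup tables, but the fields can also be inspected directly. A minimal Python sketch,
assuming the file lives at tools/perf/pmu-events/arch/x86/grandridge/uncore-cache.json in a kernel
checkout (the path is an assumption; the event chosen for the lookup comes from the entries above):

import json

# Assumed location inside a kernel source tree; adjust for your checkout.
path = "tools/perf/pmu-events/arch/x86/grandridge/uncore-cache.json"

with open(path) as f:
    events = json.load(f)          # the file is a single flat JSON array of event objects

# Index the records by EventName for quick lookup.
by_name = {e["EventName"]: e for e in events}

# One of the CHA TOR events defined above: RFOs issued by IO devices.
ev = by_name["UNC_CHA_TOR_INSERTS.IO_RFO"]
print(ev["EventCode"], ev["UMask"], ev["Unit"])    # 0x35 0xc803ff04 CHA
print(ev["BriefDescription"])

perf resolves the symbolic event name to this EventCode/UMask encoding when the event is requested,
so these fields define what each named counter measures.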
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/grandridge/uncore-interconnect.json
new file mode 100644
index 000000000000..c7250332d8aa
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/uncore-interconnect.json
@@ -0,0 +1,275 @@
+[
+ {
+ "BriefDescription": "Clockticks of the mesh to memory (B2CMI)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_B2CMI_CLOCKTICKS",
+ "PerPkg": "1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of times B2CMI egress did D2C (direct to core)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x16",
+ "EventName": "UNC_B2CMI_DIRECT2CORE_TAKEN",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of times D2C wasn't honoured even though the incoming request had d2c set for non cisgress txn",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x18",
+ "EventName": "UNC_B2CMI_DIRECT2CORE_TXN_OVERRIDE",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts any read",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "UNC_B2CMI_IMC_READS.ALL",
+ "PerPkg": "1",
+ "UMask": "0x104",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts normal reads issue to CMI",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "UNC_B2CMI_IMC_READS.NORMAL",
+ "PerPkg": "1",
+ "UMask": "0x101",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts reads to 1lm non persistent memory regions",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "UNC_B2CMI_IMC_READS.TO_DDR_AS_MEM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x108",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "All Writes - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.ALL",
+ "PerPkg": "1",
+ "UMask": "0x110",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Full Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.FULL",
+ "PerPkg": "1",
+ "UMask": "0x101",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Partial Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.PARTIAL",
+ "PerPkg": "1",
+ "UMask": "0x102",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "DDR - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.TO_DDR_AS_MEM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x120",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Prefetch CAM Inserts : XPT - Ch 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x56",
+ "EventName": "UNC_B2CMI_PREFCAM_INSERTS.CH0_XPT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Prefetch CAM Inserts : XPT -All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x56",
+ "EventName": "UNC_B2CMI_PREFCAM_INSERTS.XPT_ALLCH",
+ "PerPkg": "1",
+ "PublicDescription": "Prefetch CAM Inserts : XPT - All Channels",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Prefetch CAM Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x54",
+ "EventName": "UNC_B2CMI_PREFCAM_OCCUPANCY.CH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "UNC_B2CMI_TRACKER_INSERTS.CH0",
+ "PerPkg": "1",
+ "UMask": "0x104",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Tracker Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x33",
+ "EventName": "UNC_B2CMI_TRACKER_OCCUPANCY.CH0",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Write Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_B2CMI_WR_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Total Write Cache Occupancy : Mem",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x0F",
+ "EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.MEM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "IRP Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_I_CLOCKTICKS",
+ "PerPkg": "1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Inbound read requests received by the IRP and inserted into the FAF queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x18",
+ "EventName": "UNC_I_FAF_INSERTS",
+ "PerPkg": "1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "FAF occupancy",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x19",
+ "EventName": "UNC_I_FAF_OCCUPANCY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Counts Timeouts - Set 0 : Fastpath Rejects",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_I_MISC0.FAST_REJ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Counts Timeouts - Set 0 : Fastpath Requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_I_MISC0.FAST_REQ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Misc Events - Set 1 : Lost Forward : Snoop pulled away ownership before a write was committed",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_I_MISC1.LOST_FWD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop Hit E/S responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_ES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x74",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop Hit I responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x72",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop Hit M responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_M",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop miss responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x71",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Inbound write (fast path) requests to coherent memory, received by the IRP resulting in write ownership requests issued by IRP to the mesh.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x11",
+ "EventName": "UNC_I_TRANSACTIONS.WR_PREF",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Message Received : MSI",
+ "Counter": "0,1",
+ "EventCode": "0x42",
+ "EventName": "UNC_U_EVENT_MSG.MSI_RCVD",
+ "PerPkg": "1",
+ "PublicDescription": "Message Received : MSI : Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)",
+ "UMask": "0x2",
+ "Unit": "UBOX"
+ }
+]
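The interconnect file above includes the B2CMI read/write counters, which can be used to estimate
memory traffic. A small Python sketch; the counter values and interval are hypothetical, and the
64-byte scaling assumes full cache-line transfers (the PARTIAL umask above shows that not every
write is a full line, so treat the write figure as an approximation):

# Hypothetical raw counts for two of the B2CMI events defined above.
imc_reads_all = 1_200_000    # UNC_B2CMI_IMC_READS.ALL  (EventCode 0x24, UMask 0x104)
imc_writes_all = 800_000     # UNC_B2CMI_IMC_WRITES.ALL (EventCode 0x25, UMask 0x110)
interval_s = 1.0             # measurement window in seconds (assumed)

CACHE_LINE_BYTES = 64        # assumes every counted access moves a full cache line

read_gbps = imc_reads_all * CACHE_LINE_BYTES / interval_s / 1e9
write_gbps = imc_writes_all * CACHE_LINE_BYTES / interval_s / 1e9
print(f"reads: {read_gbps:.3f} GB/s  writes: {write_gbps:.3f} GB/s")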
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/uncore-io.json b/tools/perf/pmu-events/arch/x86/grandridge/uncore-io.json
new file mode 100644
index 000000000000..764cf2f0b4a8
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/uncore-io.json
@@ -0,0 +1,1380 @@
+[
+ {
+ "BriefDescription": "IIO Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_IIO_CLOCKTICKS",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0xff",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x02",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x04",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x08",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x10",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x20",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x40",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x80",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x02",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x04",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x08",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x10",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x20",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x40",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x80",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Hits to a 1G Page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.1G_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Hits to a 2M Page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.2M_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Hits to a 4K Page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.4K_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Context cache hits",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Context cache lookups",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_LOOKUPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB lookups first",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.FIRST_LOOKUPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Fills (same as IOTLB miss)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.MISSES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOMMU memory access (both low and high priority)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.NUM_MEM_ACCESSES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0xc0",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache Hit to a 1G page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_1G_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache Hit to a 256T page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_256T_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache Hit to a 512G page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_512G_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.ABORT",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.CONFINED_P2P",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.LOC_P2P",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MCAST",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MEM",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MSGB",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Posted requests sent by the integrated IO (IIO) controller to the Ubox, useful for counting message signaled interrupts (MSI).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX_POSTED",
+ "FCMask": "0x01",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "PublicDescription": "-",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "All 9 bits of Page Walk Tracker Occupancy",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x42",
+ "EventName": "UNC_IIO_PWT_OCCUPANCY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ }
+]
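
Note on the per-part events above: the PART0 through PART7 variants of UNC_IIO_TXN_REQ_BY_CPU.* and UNC_IIO_TXN_REQ_OF_CPU.* differ only in their PortMask, which is a one-hot encoding (PART0 = 0x001 ... PART7 = 0x080), while the aggregate UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.* events use PortMask 0x0FF. A minimal Python sketch that verifies this mapping straight from the JSON; the file path is an assumption inferred from the surrounding diff headers, not stated in this hunk:

import json

# Assumed location of the grandridge IIO event file; adjust to your tree.
PATH = "tools/perf/pmu-events/arch/x86/grandridge/uncore-io.json"

with open(PATH) as f:
    events = json.load(f)

for ev in events:
    name = ev.get("EventName", "")
    if ".PART" not in name:
        continue
    part = int(name.rsplit(".PART", 1)[1])
    portmask = int(ev["PortMask"], 16)
    # Each PARTn event selects exactly one IIO part via a one-hot PortMask.
    assert portmask == 1 << part, (name, ev["PortMask"])
    print(f"{name:55s} PortMask={ev['PortMask']} Counter={ev['Counter']}")
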
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/uncore-memory.json b/tools/perf/pmu-events/arch/x86/grandridge/uncore-memory.json
new file mode 100644
index 000000000000..6a11e5505957
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/uncore-memory.json
@@ -0,0 +1,789 @@
+[
+ {
+ "BriefDescription": "DRAM Activate Count : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.ALL",
+ "PerPkg": "1",
+ "UMask": "0xf7",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Activate Count : Read transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Activate Count : Underfill Read transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.UFILL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Activate Count : Write transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.WR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0, all CAS operations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.ALL",
+ "PerPkg": "1",
+ "UMask": "0xff",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0, all reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD",
+ "PerPkg": "1",
+ "UMask": "0xcf",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_REG",
+ "PerPkg": "1",
+ "UMask": "0xc1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0, all writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.WR",
+ "PerPkg": "1",
+ "UMask": "0xf0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 regular writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.WR_NONPRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xd0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 auto-precharge writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.WR_PRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xe0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1, all CAS operations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.ALL",
+ "PerPkg": "1",
+ "UMask": "0xff",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1, all reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD",
+ "PerPkg": "1",
+ "UMask": "0xcf",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_REG",
+ "PerPkg": "1",
+ "UMask": "0xc1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1, all writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.WR",
+ "PerPkg": "1",
+ "UMask": "0xf0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 regular writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.WR_NONPRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xd0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 auto-precharge writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.WR_PRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xe0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM DCLK clock cycles while the event is enabled. DCLK is 1/4 of DRAM data rate.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_M_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "DRAM Clockticks",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM HCLK clock cycles while the event is enabled",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_M_HCLOCKTICKS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "DRAM Clockticks",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH0_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH0_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH1_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH1_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH0_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH0_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH1_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH1_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK3",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x10",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x20",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK3",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode and all pages are closed",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_M_POWER_CHANNEL_PPD_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM and throttle level is zero.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x89",
+ "EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM and throttle level is zero.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x89",
+ "EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "MR4 temp reading is throttling",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.MR4BLKEN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "RAPL is throttling",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RAPLBLK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.ALL",
+ "PerPkg": "1",
+ "UMask": "0xff",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Precharge due to (?) : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.PGT",
+ "PerPkg": "1",
+ "UMask": "0xf8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.UFILL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.WR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer inserts on subchannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x17",
+ "EventName": "UNC_M_RDB_INSERTS.SCH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer inserts on subchannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x17",
+ "EventName": "UNC_M_RDB_INSERTS.SCH1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer occupancy on subchannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1a",
+ "EventName": "UNC_M_RDB_OCCUPANCY_SCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer occupancy on subchannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1b",
+ "EventName": "UNC_M_RDB_OCCUPANCY_SCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.PCH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x50",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.PCH1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xa0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH0_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH0_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH1_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH1_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x80",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH0_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x81",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH0_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x82",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH1_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x83",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH1_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_M_THROTTLE_CRIT_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_M_THROTTLE_CRIT_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at High level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8d",
+ "EventName": "UNC_M_THROTTLE_HIGH_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at High level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8d",
+ "EventName": "UNC_M_THROTTLE_HIGH_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Normal level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8b",
+ "EventName": "UNC_M_THROTTLE_LOW_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Normal level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8b",
+ "EventName": "UNC_M_THROTTLE_LOW_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Mid level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8c",
+ "EventName": "UNC_M_THROTTLE_MID_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Mid level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8c",
+ "EventName": "UNC_M_THROTTLE_MID_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "-",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.PCH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x50",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.PCH1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xa0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH0_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH0_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH1_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH1_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x84",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH0_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x85",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH0_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x86",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH1_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x87",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH1_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ }
+]
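
A usage sketch for the IMC events defined above: per-channel DRAM bandwidth follows from the sub-channel CAS counts, and UNC_M_CLOCKTICKS provides the time base (its description states that DCLK is 1/4 of the DRAM data rate). The 64-bytes-per-CAS figure and the DDR5-4800 speed grade below are assumptions for illustration, not part of this file:

# Rough per-channel DRAM bandwidth from the IMC events defined above.
# Assumptions (not stated in the JSON): 64 bytes transferred per CAS
# command, and a DDR5-4800 part, i.e. DCLK = 4800 MT/s / 4 = 1200 MHz.
BYTES_PER_CAS = 64
DCLK_HZ = 4800e6 / 4

def dram_bandwidth_gbs(cas_rd_sch0, cas_wr_sch0, cas_rd_sch1, cas_wr_sch1, clockticks):
    """Return (read GB/s, write GB/s) for one channel over the sampled interval."""
    seconds = clockticks / DCLK_HZ
    rd_bytes = (cas_rd_sch0 + cas_rd_sch1) * BYTES_PER_CAS
    wr_bytes = (cas_wr_sch0 + cas_wr_sch1) * BYTES_PER_CAS
    return rd_bytes / seconds / 1e9, wr_bytes / seconds / 1e9

# Example with made-up counter readings, in this order:
# UNC_M_CAS_COUNT_SCH0.RD, UNC_M_CAS_COUNT_SCH0.WR,
# UNC_M_CAS_COUNT_SCH1.RD, UNC_M_CAS_COUNT_SCH1.WR, UNC_M_CLOCKTICKS
print(dram_bandwidth_gbs(3_000_000, 1_000_000, 2_900_000, 1_100_000, 1_200_000_000))
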
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/uncore-power.json b/tools/perf/pmu-events/arch/x86/grandridge/uncore-power.json
new file mode 100644
index 000000000000..02e59f64a544
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/grandridge/uncore-power.json
@@ -0,0 +1,11 @@
+[
+ {
+ "BriefDescription": "PCU Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_P_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "PCU Clockticks: The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
+ "Unit": "PCU"
+ }
+]
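
Since the description above states that the PCU counts pclk cycles at a fixed 1 GHz, UNC_P_CLOCKTICKS can double as a wall-clock reference for the other uncore counters. A trivial sketch:

# The PCU pclk runs at a fixed 1 GHz per the event description, so the
# elapsed wall time of a sampling interval is simply ticks / 1e9.
def pcu_ticks_to_seconds(unc_p_clockticks: int) -> float:
    return unc_p_clockticks / 1e9

print(pcu_ticks_to_seconds(2_500_000_000))  # -> 2.5 (seconds)
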
diff --git a/tools/perf/pmu-events/arch/x86/grandridge/virtual-memory.json b/tools/perf/pmu-events/arch/x86/grandridge/virtual-memory.json
index bd5f2b634c98..35cc5b6d41f2 100644
--- a/tools/perf/pmu-events/arch/x86/grandridge/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/grandridge/virtual-memory.json
@@ -1,24 +1,148 @@
[
{
- "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 1G page.",
+ "BriefDescription": "Counts the number of first level TLB misses but second level hits due to a demand load that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.STLB_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
- "SampleAfterValue": "1000003",
+ "SampleAfterValue": "200003",
"UMask": "0xe"
},
{
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for Loads (demand or SW prefetch) in PMH every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of first level TLB misses but second level hits due to stores that did not start a page walk. Accounts for all page sizes. Will result in a DTLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.STLB_HIT",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20"
+ },
+ {
"BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 1G page.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
- "SampleAfterValue": "1000003",
+ "SampleAfterValue": "2000003",
"UMask": "0xe"
},
{
+ "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding in the page miss handler (PMH) for stores every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks initiated by an instruction fetch that missed the first and second level TLBs.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.MISS_CAUSED_WALK",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of first level TLB misses but second level hits due to an instruction fetch that did not start a page walk. Accounts for all page sizes. Will result in an ITLB write from STLB.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.STLB_HIT",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20"
+ },
+ {
"BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
"SampleAfterValue": "200003",
"UMask": "0xe"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 2M or 4M page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 2M or 4M pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to a 4K page.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to 4K pages. Includes page walks that page fault.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of page walks outstanding for iside in PMH every cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x85",
+ "EventName": "ITLB_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for iside in PMH every cycle. A PMH page walk is outstanding from page walk start till PMH becomes idle again (ready to serve next walk). Includes EPT-walk intervals. Walks could be counted by edge detecting on this event, but would count restarted suspended walks.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles that the head (oldest load) of the load buffer and retirement are both stalled due to a DTLB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x05",
+ "EventName": "LD_HEAD.DTLB_MISS_AT_RET",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x90"
}
]
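
The WALK_PENDING events above accumulate the number of outstanding walks each cycle (an occupancy), so dividing by the matching WALK_COMPLETED count gives an average page-walk latency. This is only an estimate; as the ITLB_MISSES.WALK_PENDING description notes, restarted or suspended walks can inflate it. A hedged sketch with made-up counter values:

def avg_walk_cycles(walk_pending: int, walk_completed: int) -> float:
    # Mean walk residency in cycles = occupancy-cycles / completed walks.
    return walk_pending / walk_completed if walk_completed else 0.0

# Example: DTLB_LOAD_MISSES.WALK_PENDING vs DTLB_LOAD_MISSES.WALK_COMPLETED
print(avg_walk_cycles(walk_pending=4_200_000, walk_completed=120_000))  # ~35.0
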
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/cache.json b/tools/perf/pmu-events/arch/x86/graniterapids/cache.json
index 56212827870c..7edb73583b07 100644
--- a/tools/perf/pmu-events/arch/x86/graniterapids/cache.json
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/cache.json
@@ -1,6 +1,135 @@
[
{
+ "BriefDescription": "L1D.HWPF_MISS",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x51",
+ "EventName": "L1D.HWPF_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Counts the number of cache lines replaced in L1 data cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x51",
+ "EventName": "L1D.REPLACEMENT",
+ "PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x48",
+ "EventName": "L1D_PEND_MISS.FB_FULL",
+ "PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0x48",
+ "EventName": "L1D_PEND_MISS.FB_FULL_PERIODS",
+ "PublicDescription": "Counts number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x48",
+ "EventName": "L1D_PEND_MISS.L2_STALLS",
+ "PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Number of L1D misses that are outstanding",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x48",
+ "EventName": "L1D_PEND_MISS.PENDING",
+ "PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x48",
+ "EventName": "L1D_PEND_MISS.PENDING_CYCLES",
+ "PublicDescription": "Counts duration of L1D miss outstanding in cycles.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "L2_LINES_IN.ALL",
+ "PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1f"
+ },
+ {
+ "BriefDescription": "Modified cache lines that are evicted by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.NON_SILENT",
+ "PublicDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines are in Modified state. Modified lines are written back to L3",
+ "SampleAfterValue": "200003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.SILENT",
+ "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache. These lines are typically in Shared or Exclusive state. A non-threaded event.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cache lines that have been L2 hardware prefetched but not used by demand accesses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x26",
+ "EventName": "L2_LINES_OUT.USELESS_HWPF",
+ "PublicDescription": "Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cache",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "All accesses to L2 cache [This event is alias to L2_RQSTS.REFERENCES]",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.ALL",
+ "PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.REFERENCES]",
+ "SampleAfterValue": "200003",
+ "UMask": "0xff"
+ },
+ {
+ "BriefDescription": "All requests that hit L2 cache. [This event is alias to L2_RQSTS.HIT]",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.HIT",
+ "PublicDescription": "Counts all requests that hit L2 cache. [This event is alias to L2_RQSTS.HIT]",
+ "SampleAfterValue": "200003",
+ "UMask": "0xdf"
+ },
+ {
+ "BriefDescription": "Read requests with true-miss in L2 cache [This event is alias to L2_RQSTS.MISS]",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_REQUEST.MISS",
+ "PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_RQSTS.MISS]",
+ "SampleAfterValue": "200003",
+ "UMask": "0x3f"
+ },
+ {
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts the total number of L2 code requests.",
@@ -9,6 +138,7 @@
},
{
"BriefDescription": "Demand Data Read access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts Demand Data Read requests accessing the L2 cache. These requests may hit or miss L2 cache. True-miss exclude misses that were merged with ongoing L2 misses. An access is counted once.",
@@ -16,7 +146,159 @@
"UMask": "0xe1"
},
{
+ "BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.ALL_DEMAND_MISS",
+ "PublicDescription": "Counts demand requests that miss L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x27"
+ },
+ {
+ "BriefDescription": "Demand requests to L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
+ "PublicDescription": "Counts demand requests to L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xe7"
+ },
+ {
+ "BriefDescription": "L2_RQSTS.ALL_HWPF",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.ALL_HWPF",
+ "SampleAfterValue": "200003",
+ "UMask": "0xf0"
+ },
+ {
+ "BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.ALL_RFO",
+ "PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xe2"
+ },
+ {
+ "BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.CODE_RD_HIT",
+ "PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xc4"
+ },
+ {
+ "BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.CODE_RD_MISS",
+ "PublicDescription": "Counts L2 cache misses when fetching instructions.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x24"
+ },
+ {
+ "BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
+ "PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xc1"
+ },
+ {
+ "BriefDescription": "Demand Data Read miss L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
+ "PublicDescription": "Counts demand Data Read requests with true-miss in the L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. An access is counted once.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x21"
+ },
+ {
+ "BriefDescription": "All requests that hit L2 cache. [This event is alias to L2_REQUEST.HIT]",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.HIT",
+ "PublicDescription": "Counts all requests that hit L2 cache. [This event is alias to L2_REQUEST.HIT]",
+ "SampleAfterValue": "200003",
+ "UMask": "0xdf"
+ },
+ {
+ "BriefDescription": "L2_RQSTS.HWPF_MISS",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.HWPF_MISS",
+ "SampleAfterValue": "200003",
+ "UMask": "0x30"
+ },
+ {
+ "BriefDescription": "Read requests with true-miss in L2 cache [This event is alias to L2_REQUEST.MISS]",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.MISS",
+ "PublicDescription": "Counts read requests of any type with true-miss in the L2 cache. True-miss excludes L2 misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.MISS]",
+ "SampleAfterValue": "200003",
+ "UMask": "0x3f"
+ },
+ {
+ "BriefDescription": "All accesses to L2 cache [This event is alias to L2_REQUEST.ALL]",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.REFERENCES",
+ "PublicDescription": "Counts all requests that were hit or true misses in L2 cache. True-miss excludes misses that were merged with ongoing L2 misses. [This event is alias to L2_REQUEST.ALL]",
+ "SampleAfterValue": "200003",
+ "UMask": "0xff"
+ },
+ {
+ "BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.RFO_HIT",
+ "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xc2"
+ },
+ {
+ "BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.RFO_MISS",
+ "PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x22"
+ },
+ {
+ "BriefDescription": "SW prefetch requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.SWPF_HIT",
+ "PublicDescription": "Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
+ "SampleAfterValue": "200003",
+ "UMask": "0xc8"
+ },
+ {
+ "BriefDescription": "SW prefetch requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "L2_RQSTS.SWPF_MISS",
+ "PublicDescription": "Counts Software prefetch requests that miss the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x28"
+ },
+ {
+ "BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x23",
+ "EventName": "L2_TRANS.L2_WB",
+ "PublicDescription": "Counts L2 writebacks that access L2 cache.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x40"
+ },
+ {
"BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -25,6 +307,7 @@
},
{
"BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -33,22 +316,915 @@
},
{
"BriefDescription": "Retired load instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_LOADS",
- "PEBS": "1",
- "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW.",
+ "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x81"
},
{
"BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_STORES",
- "PEBS": "1",
- "PublicDescription": "Counts all retired store instructions.",
+ "PublicDescription": "Counts all retired store instructions. Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x82"
+ },
+ {
+ "BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.ANY",
+ "PublicDescription": "Counts all retired memory instructions - loads and stores. Available PDIST counters: 0",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x83"
+ },
+ {
+ "BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.LOCK_LOADS",
+ "PublicDescription": "Counts retired load instructions with locked access. Available PDIST counters: 0",
+ "RetirementLatencyMax": 5156,
+ "RetirementLatencyMean": 63.76,
+ "RetirementLatencyMin": 15,
+ "SampleAfterValue": "100007",
+ "UMask": "0x21"
+ },
+ {
+ "BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
+ "PublicDescription": "Counts retired load instructions that split across a cacheline boundary. Available PDIST counters: 0",
+ "RetirementLatencyMax": 4704,
+ "RetirementLatencyMean": 3.97,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100003",
+ "UMask": "0x41"
+ },
+ {
+ "BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.SPLIT_STORES",
+ "PublicDescription": "Counts retired store instructions that split across a cacheline boundary. Available PDIST counters: 0",
+ "RetirementLatencyMax": 65535,
+ "RetirementLatencyMean": 19.0,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100003",
+ "UMask": "0x42"
+ },
+ {
+ "BriefDescription": "Retired load instructions that hit the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_HIT_LOADS",
+ "PublicDescription": "Number of retired load instructions with a clean hit in the 2nd-level TLB (STLB). Available PDIST counters: 0",
+ "RetirementLatencyMax": 3424,
+ "RetirementLatencyMean": 1.57,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100003",
+ "UMask": "0x9"
+ },
+ {
+ "BriefDescription": "Retired store instructions that hit the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_HIT_STORES",
+ "PublicDescription": "Number of retired store instructions that hit in the 2nd-level TLB (STLB). Available PDIST counters: 0",
+ "RetirementLatencyMax": 65535,
+ "RetirementLatencyMean": 5.24,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100003",
+ "UMask": "0xa"
+ },
+ {
+ "BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
+ "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x11"
+ },
+ {
+ "BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd0",
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
+ "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x12"
+ },
+ {
+ "BriefDescription": "Completed demand load uops that miss the L1 d-cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "MEM_LOAD_COMPLETED.L1_MISS_ANY",
+ "PublicDescription": "Number of completed demand load requests that missed the L1 data cache including shadow misses (FB hits, merge to an ongoing L1D miss)",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xfd"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
+ "PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3. Available PDIST counters: 0",
+ "RetirementLatencyMax": 4472,
+ "RetirementLatencyMean": 353.04,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "20011",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
+ "PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache. Available PDIST counters: 0",
+ "RetirementLatencyMax": 830,
+ "RetirementLatencyMean": 125.27,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "20011",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE",
+ "PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd2",
+ "EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
+ "PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache. Available PDIST counters: 0",
+ "RetirementLatencyMax": 3939,
+ "RetirementLatencyMean": 289.9,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "20011",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Retired load instructions which data sources missed L3 but serviced from local dram",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
+ "PublicDescription": "Retired load instructions which data sources missed L3 but serviced from local DRAM. Available PDIST counters: 0",
+ "RetirementLatencyMax": 4146,
+ "RetirementLatencyMean": 115.83,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Retired load instructions with remote cxl mem as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_CXL_MEM",
+ "PublicDescription": "Counts retired load instructions with remote cxl mem as the data source and the data request missed L3. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM",
+ "PublicDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM Available PDIST counters: 0",
+ "RetirementLatencyMax": 3572,
+ "RetirementLatencyMean": 430.22,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Retired load instructions whose data sources was forwarded from a remote cache",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD",
+ "PublicDescription": "Retired load instructions whose data sources was forwarded from a remote cache. Available PDIST counters: 0",
+ "RetirementLatencyMax": 8552,
+ "RetirementLatencyMean": 125.36,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM",
+ "PublicDescription": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM Available PDIST counters: 0",
+ "RetirementLatencyMax": 2580,
+ "RetirementLatencyMean": 135.29,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Retired instructions with at least 1 uncacheable load or lock.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd4",
+ "EventName": "MEM_LOAD_MISC_RETIRED.UC",
+ "PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock). Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.FB_HIT",
+ "PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L1_HIT",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source. Available PDIST counters: 0",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L1_MISS",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "200003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L2_HIT",
+ "PublicDescription": "Counts retired load instructions with L2 cache hits as data sources. Available PDIST counters: 0",
+ "RetirementLatencyMax": 7140,
+ "RetirementLatencyMean": 5.71,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "200003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L2_MISS",
+ "PublicDescription": "Counts retired load instructions missed L2 cache as data sources. Available PDIST counters: 0",
+ "SampleAfterValue": "100021",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L3_HIT",
+ "PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache. Available PDIST counters: 0",
+ "RetirementLatencyMax": 5630,
+ "RetirementLatencyMean": 57.64,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100021",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.L3_MISS",
+ "PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache. Available PDIST counters: 0",
+ "SampleAfterValue": "50021",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Retired load instructions with local cxl mem as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
+ "Data_LA": "1",
+ "EventCode": "0xd1",
+ "EventName": "MEM_LOAD_RETIRED.LOCAL_CXL_MEM",
+ "PublicDescription": "Counts retired load instructions with local cxl mem as the data source and the data request missed L3. Available PDIST counters: 0",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "MEM_STORE_RETIRED.L2_HIT",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x44",
+ "EventName": "MEM_STORE_RETIRED.L2_HIT",
+ "SampleAfterValue": "200003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Retired memory uops for any access",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe5",
+ "EventName": "MEM_UOP_RETIRED.ANY",
+ "PublicDescription": "Number of retired micro-operations (uops) for load or store memory accesses",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F803C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C0004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "PublicDescription": "Counts demand data reads that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by CXL MEM (Type 2 or Type 3).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703C00001",
+ "PublicDescription": "Counts demand data reads that were supplied by CXL MEM (Type 2 or Type 3). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F803C0001",
+ "PublicDescription": "Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x4003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x8003C0001",
+ "PublicDescription": "Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by CXL MEM (Type 2 and Type 3) attached to local socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.LOCAL_CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x700C00001",
+ "PublicDescription": "Counts demand data reads that were supplied by CXL MEM (Type 2 and Type 3) attached to local socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1030000001",
+ "PublicDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x830000001",
+ "PublicDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by CXL MEM (Type 2 or Type 3) attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.REMOTE_CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703000001",
+ "PublicDescription": "Counts demand data reads that were supplied by CXL MEM (Type 2 or Type 3) attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1008000001",
+ "PublicDescription": "Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x808000001",
+ "PublicDescription": "Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F3FFC0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by CXL MEM (Type 2 or Type 3).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703C00002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by CXL MEM (Type 2 or Type 3). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F803C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C0002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by CXL MEM (Type 2 and Type 3) attached to local socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.LOCAL_CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x700C00002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by CXL MEM (Type 2 and Type 3) attached to local socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by CXL MEM (Type 2 or Type 3) attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.REMOTE_CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by CXL MEM (Type 2 or Type 3) attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts writebacks of modified cachelines and streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.MODIFIED_WRITE.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10808",
+ "PublicDescription": "Counts writebacks of modified cachelines and streaming stores that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F3FFC4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by CXL MEM (Type 2 or Type 3).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703C04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by CXL MEM (Type 2 or Type 3). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.L3_HIT",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F003C4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10003C4477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by CXL MEM (Type 2 and Type 3) attached to local socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x700C04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by CXL MEM (Type 2 and Type 3) attached to local socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F33004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1830004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified). Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1030004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x830004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by CXL MEM (Type 2 or Type 3) attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_CXL_MEM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by CXL MEM (Type 2 or Type 3) attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.SNC_CACHE.HITM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1008004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.SNC_CACHE.HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x808004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO), hardware prefetch RFOs (which bring data to L2), and software prefetches for exclusive ownership (PREFETCHW) that hit to a (M)odified cacheline in the L3 or snoop filter.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.RFO_TO_CORE.L3_HIT_M",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x1F80040022",
+ "PublicDescription": "Counts demand reads for ownership (RFO), hardware prefetch RFOs (which bring data to L2), and software prefetches for exclusive ownership (PREFETCHW) that hit to a (M)odified cacheline in the L3 or snoop filter. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Any memory transaction that reached the SQ.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
+ "PublicDescription": "Counts memory transactions reached the super queue including requests initiated by the core, all L3 prefetches, page walks, etc..",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DATA_RD",
+ "PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Cacheable and Non-Cacheable code read requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
+ "PublicDescription": "Counts both cacheable and Non-Cacheable code read requests.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
+ "PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
+ "PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Offcore Uncacheable memory data read transactions.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.MEM_UC",
+ "PublicDescription": "This event counts noncacheable memory data read transactions.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
+ "PublicDescription": "Counts cycles when offcore outstanding cacheable Core Data Read transactions are present in the super queue. A transaction is considered to be in the Offcore outstanding state between L2 miss and transaction completion sent to requestor (SQ de-allocation). See corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Cycles with offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles where at least 1 outstanding demand data read request is pending.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles with offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
+ "PublicDescription": "Counts the number of offcore outstanding demand rfo Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Offcore outstanding Code Reads transactions in the SuperQueue (SQ), queue to uncore, every cycle.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
+ "PublicDescription": "Counts the number of offcore outstanding Code Reads transactions in the super queue every cycle. The 'Offcore outstanding' state of the transaction lasts from the L2 miss until the sending transaction completion to requestor (SQ deallocation). See the corresponding Umask under OFFCORE_REQUESTS.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
+ "PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Store Read transactions pending for off-core. Highly correlated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
+ "PublicDescription": "Counts the number of off-core outstanding read-for-ownership (RFO) store transactions every cycle. An RFO transaction is considered to be in the Off-core outstanding state between L2 cache miss and transaction completion.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2c",
+ "EventName": "SQ_MISC.BUS_LOCK",
+ "PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.ANY",
+ "SampleAfterValue": "100003",
+ "UMask": "0xf"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHNTA instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.NTA",
+ "PublicDescription": "Counts the number of PREFETCHNTA instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHW instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
+ "PublicDescription": "Counts the number of PREFETCHW instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHT0 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.T0",
+ "PublicDescription": "Counts the number of PREFETCHT0 instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "SW_PREFETCH_ACCESS.T1_T2",
+ "PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
}
]
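The OFFCORE_REQUESTS.DEMAND_DATA_RD description above notes that it can be used together with OFFCORE_REQUESTS_OUTSTANDING to determine the average latency in the uncore. Below is a minimal sketch of that calculation, not part of the patch; the function name and the sample counts are hypothetical, and it assumes the two event totals were collected over the same interval (for example with perf stat):

def avg_uncore_read_latency(outstanding_cycles_sum, demand_data_rd_requests):
    """Estimate the average demand data read latency in core cycles.

    outstanding_cycles_sum: total of OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD,
        which increments every cycle by the number of outstanding requests.
    demand_data_rd_requests: total of OFFCORE_REQUESTS.DEMAND_DATA_RD,
        the number of demand data reads sent to the uncore.
    """
    if demand_data_rd_requests == 0:
        return 0.0
    # Occupancy divided by throughput (Little's law): total request-cycles
    # over the number of requests gives the mean cycles each request waited.
    return outstanding_cycles_sum / demand_data_rd_requests

# Hypothetical counts, for illustration only:
print(avg_uncore_read_latency(1_250_000, 10_000))  # -> 125.0 cycles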
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/counter.json b/tools/perf/pmu-events/arch/x86/graniterapids/counter.json
new file mode 100644
index 000000000000..d97211a0227e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/counter.json
@@ -0,0 +1,82 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "4",
+ "CountersNumGeneric": "8"
+ },
+ {
+ "Unit": "B2CMI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CHA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IMC",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CXLCM",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": 8
+ },
+ {
+ "Unit": "CXLDP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": 4
+ },
+ {
+ "Unit": "B2HOT",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": 4
+ },
+ {
+ "Unit": "IIO",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "B2UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": 4
+ },
+ {
+ "Unit": "B2CXL",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": 4
+ },
+ {
+ "Unit": "UBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CHACMS",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "MDF",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ }
+] \ No newline at end of file
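The new counter.json above records, per PMU unit, how many fixed and generic counters are available; these figures pair with the per-event "Counter" lists in the event files (for example "0,1,2,3,4,5,6,7" on the core events). The following sketch, not part of the patch, shows one consistency check that could be run locally; the function name and file paths are hypothetical, and it assumes local copies of the two JSON files added by this patch:

import json

def check_core_events(counter_path, events_path):
    """Flag core events whose "Counter" indices exceed the core unit's
    generic counter count declared in counter.json."""
    with open(counter_path) as f:
        units = {u["Unit"]: u for u in json.load(f)}
    # int() copes with the value whether it is stored as a quoted string
    # or a bare number.
    n_generic = int(units["core"]["CountersNumGeneric"])
    with open(events_path) as f:
        events = json.load(f)
    for ev in events:
        counters = [int(c) for c in ev.get("Counter", "").split(",") if c]
        bad = [c for c in counters if c >= n_generic]
        if bad:
            print(f'{ev["EventName"]}: counters {bad} outside 0..{n_generic - 1}')

# Hypothetical paths matching the files added by this patch:
# check_core_events("graniterapids/counter.json", "graniterapids/cache.json")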
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/floating-point.json b/tools/perf/pmu-events/arch/x86/graniterapids/floating-point.json
new file mode 100644
index 000000000000..59789eee060c
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/floating-point.json
@@ -0,0 +1,242 @@
+[
+ {
+ "BriefDescription": "This event counts the cycles the floating point divider is busy.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xb0",
+ "EventName": "ARITH.FPDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all microcode FP assists.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.FP",
+ "PublicDescription": "Counts all microcode Floating Point assists.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "ASSISTS.SSE_AVX_MIX",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.SSE_AVX_MIX",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.PORT_0 [This event is alias to FP_ARITH_DISPATCHED.V0]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.PORT_0",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.PORT_1 [This event is alias to FP_ARITH_DISPATCHED.V1]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.PORT_1",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.PORT_5 [This event is alias to FP_ARITH_DISPATCHED.V2]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.PORT_5",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.V0 [This event is alias to FP_ARITH_DISPATCHED.PORT_0]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V0",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.V1 [This event is alias to FP_ARITH_DISPATCHED.PORT_1]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V1",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "FP_ARITH_DISPATCHED.V2 [This event is alias to FP_ARITH_DISPATCHED.PORT_5]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb3",
+ "EventName": "FP_ARITH_DISPATCHED.V2",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
+ "PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
+ "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
+ "PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
+ "PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
+ "PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x18"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
+ "PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
+ "PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.8_FLOPS",
+ "PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x60"
+ },
+ {
+ "BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 RANGE SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR",
+ "PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
+ "PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
+ "PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc7",
+ "EventName": "FP_ARITH_INST_RETIRED.VECTOR",
+ "PublicDescription": "Number of any Vector retired FP arithmetic instructions. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xfc"
+ },
+ {
+ "BriefDescription": "FP_ARITH_INST_RETIRED2.128B_PACKED_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcf",
+ "EventName": "FP_ARITH_INST_RETIRED2.128B_PACKED_HALF",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "FP_ARITH_INST_RETIRED2.256B_PACKED_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcf",
+ "EventName": "FP_ARITH_INST_RETIRED2.256B_PACKED_HALF",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "FP_ARITH_INST_RETIRED2.512B_PACKED_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcf",
+ "EventName": "FP_ARITH_INST_RETIRED2.512B_PACKED_HALF",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcf",
+ "EventName": "FP_ARITH_INST_RETIRED2.COMPLEX_SCALAR_HALF",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of all Scalar Half-Precision FP arithmetic instructions(1) retired - regular and complex.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcf",
+ "EventName": "FP_ARITH_INST_RETIRED2.SCALAR",
+ "PublicDescription": "FP_ARITH_INST_RETIRED2.SCALAR",
+ "SampleAfterValue": "100003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "FP_ARITH_INST_RETIRED2.SCALAR_HALF",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcf",
+ "EventName": "FP_ARITH_INST_RETIRED2.SCALAR_HALF",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Number of all Vector (also called packed) Half-Precision FP arithmetic instructions(1) retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcf",
+ "EventName": "FP_ARITH_INST_RETIRED2.VECTOR",
+ "PublicDescription": "FP_ARITH_INST_RETIRED2.VECTOR",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1c"
+ }
+]
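Taken together, the FP_ARITH_INST_RETIRED events above give what is needed to turn raw counter values into a floating-point operation count: each event's description states how many operations one count represents (1 for the scalar events, 8 for 512-bit packed double, 16 for 512-bit packed single, and so on), with the FMA/DPP double counting already folded into the events. A minimal Python sketch of that conversion follows; the 128-bit and 256-bit multipliers are not spelled out in this hunk and are assumed here from the usual element counts, and the counter values in the example are hypothetical.

# Minimal sketch: convert FP_ARITH_INST_RETIRED.* counts into total FLOPs.
# Multipliers for the scalar and 512-bit events follow the descriptions
# above; the 128-bit/256-bit multipliers are assumed from element counts.
FLOPS_PER_COUNT = {
    "fp_arith_inst_retired.scalar_single":      1,
    "fp_arith_inst_retired.scalar_double":      1,
    "fp_arith_inst_retired.128b_packed_double": 2,   # assumed: 2 DP elements
    "fp_arith_inst_retired.128b_packed_single": 4,   # assumed: 4 SP elements
    "fp_arith_inst_retired.256b_packed_double": 4,   # assumed: 4 DP elements
    "fp_arith_inst_retired.256b_packed_single": 8,   # assumed: 8 SP elements
    "fp_arith_inst_retired.512b_packed_double": 8,   # 8 elements, per description
    "fp_arith_inst_retired.512b_packed_single": 16,  # 16 elements, per description
}

def total_flops(counts: dict[str, int]) -> int:
    """Sum retired FP operations from raw event counts (missing events count as 0)."""
    return sum(counts.get(event, 0) * mult for event, mult in FLOPS_PER_COUNT.items())

if __name__ == "__main__":
    # Hypothetical counter readings collected over some measurement interval.
    sample = {
        "fp_arith_inst_retired.scalar_double": 1_000_000,
        "fp_arith_inst_retired.512b_packed_double": 2_000_000,
    }
    flops = total_flops(sample)
    print(f"{flops} FLOPs (~{flops / 1e9:.3f} GFLOP over the interval)")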
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/frontend.json b/tools/perf/pmu-events/arch/x86/graniterapids/frontend.json
index c6d5016e7337..d580d305c926 100644
--- a/tools/perf/pmu-events/arch/x86/graniterapids/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/frontend.json
@@ -1,9 +1,475 @@
[
{
- "BriefDescription": "This event counts a subset of the Topdown Slots event that were no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.",
+ "BriefDescription": "Clears due to Unknown Branches.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x60",
+ "EventName": "BACLEARS.ANY",
+ "PublicDescription": "Number of times the front-end is resteered when it finds a branch instruction in a fetch line. This is called Unknown Branch which occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x87",
+ "EventName": "DECODE.LCP",
+ "PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles the Microcode Sequencer is busy.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x87",
+ "EventName": "DECODE.MS_BUSY",
+ "SampleAfterValue": "500009",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "DSB-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x61",
+ "EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
+ "PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Retired ANT branches",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ANY_ANT",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x9",
+ "PublicDescription": "Always Not Taken (ANT) conditional retired branches (no BTB entry and not mispredicted) Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x1",
+ "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Available PDIST counters: 0",
+ "RetirementLatencyMax": 65535,
+ "RetirementLatencyMean": 2.46,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.DSB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x11",
+ "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced iTLB true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.ITLB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x14",
+ "PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss. Available PDIST counters: 0",
+ "RetirementLatencyMax": 980,
+ "RetirementLatencyMean": 41.96,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.L1I_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x12",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss. Available PDIST counters: 0",
+ "RetirementLatencyMax": 1785,
+ "RetirementLatencyMean": 9.83,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.L2_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x13",
+ "PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss. Available PDIST counters: 0",
+ "RetirementLatencyMax": 2854,
+ "RetirementLatencyMean": 137.41,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_1",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x600106",
+ "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x608006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x601006",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x600206",
+ "PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x610006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x100206",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x602006",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x600406",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x620006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x604006",
+ "PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x600806",
+ "PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "I-Cache miss too close to Code Prefetch Instruction",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.LATE_SWPF",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0xA",
+ "PublicDescription": "Number of Instruction Cache demand miss in shadow of an on-going i-fetch cache-line triggered by PREFETCHIT0/1 instructions Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Mispredicted Retired ANT branches",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.MISP_ANT",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x9",
+ "PublicDescription": "ANT retired branches that got just mispredicted Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "FRONTEND_RETIRED.MS_FLOWS",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.MS_FLOWS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x8",
+ "PublicDescription": "FRONTEND_RETIRED.MS_FLOWS Available PDIST counters: 0",
+ "RetirementLatencyMax": 65535,
+ "RetirementLatencyMean": 77.14,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.STLB_MISS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x15",
+ "PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss. Available PDIST counters: 0",
+ "RetirementLatencyMax": 754,
+ "RetirementLatencyMean": 206.85,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc6",
+ "EventName": "FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x17",
+ "PublicDescription": "FRONTEND_RETIRED.UNKNOWN_BRANCH Available PDIST counters: 0",
+ "RetirementLatencyMax": 532,
+ "RetirementLatencyMean": 3.85,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100007",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x80",
+ "EventName": "ICACHE_DATA.STALLS",
+ "PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The decode pipeline works at a 32 Byte granularity.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "ICACHE_DATA.STALL_PERIODS",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0x80",
+ "EventName": "ICACHE_DATA.STALL_PERIODS",
+ "SampleAfterValue": "500009",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x83",
+ "EventName": "ICACHE_TAG.STALLS",
+ "PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss.",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.DSB_CYCLES_ANY",
+ "PublicDescription": "Counts the number of cycles uops were delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Cycles DSB is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
+ "CounterMask": "6",
+ "EventCode": "0x79",
+ "EventName": "IDQ.DSB_CYCLES_OK",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the DSB (Decode Stream Buffer) path. Count includes uops that may 'bypass' the IDQ.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x79",
+ "EventName": "IDQ.DSB_UOPS",
+ "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MITE_CYCLES_ANY",
+ "PublicDescription": "Counts the number of cycles uops were delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Cycles MITE is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
+ "CounterMask": "6",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MITE_CYCLES_OK",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MITE_UOPS",
+ "PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Cycles when uops are being delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MS_CYCLES_ANY",
+ "PublicDescription": "Counts cycles during which uops are being delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Uops maybe initiated by Decode Stream Buffer (DSB) or MITE.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Number of switches from DSB or MITE to the MS",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MS_SWITCHES",
+ "PublicDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Uops initiated by MITE or Decode Stream Buffer (DSB) and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x79",
+ "EventName": "IDQ.MS_UOPS",
+ "PublicDescription": "Counts the number of uops initiated by MITE or Decode Stream Buffer (DSB) and delivered to Instruction Decode Queue (IDQ) while the Microcode Sequencer (MS) is busy. Counting includes uops that may 'bypass' the IDQ.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "This event counts a subset of the Topdown Slots event that when no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c",
"EventName": "IDQ_BUBBLES.CORE",
- "PublicDescription": "This event counts a subset of the Topdown Slots event that were no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.\nThe count may be distributed among unhalted logical processors (hyper-threads) who share the same physical core, in processors that support Intel Hyper-Threading Technology. Software can use this event as the numerator for the Frontend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
+ "PublicDescription": "This event counts a subset of the Topdown Slots event that when no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations. The count may be distributed among unhalted logical processors (hyper-threads) who share the same physical core, in processors that support Intel Hyper-Threading Technology. Software can use this event as the numerator for the Frontend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "6",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE",
+ "PublicDescription": "Counts the number of cycles when no uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE]",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_BUBBLES.CYCLES_FE_WAS_OK",
+ "Invert": "1",
+ "PublicDescription": "Counts the number of cycles when the optimal number of uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK]",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
+ "PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "6",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
+ "PublicDescription": "Counts the number of cycles when no uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE]",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled [This event is alias to IDQ_BUBBLES.CYCLES_FE_WAS_OK]",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0x9c",
+ "EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
+ "Invert": "1",
+ "PublicDescription": "Counts the number of cycles when the optimal number of uops were delivered by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle. [This event is alias to IDQ_BUBBLES.CYCLES_FE_WAS_OK]",
"SampleAfterValue": "1000003",
"UMask": "0x1"
}
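The IDQ.DSB_UOPS, IDQ.MITE_UOPS and IDQ.MS_UOPS events above are the inputs to the percent_uops_delivered_* metrics added in gnr-metrics.json below, which divide each source's uops by the sum IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS. A minimal Python sketch of that breakdown follows, assuming the counts have already been collected; LSD.UOPS is defined outside this hunk and the example values are hypothetical.

# Minimal sketch: uop-source breakdown for the IDQ, mirroring the
# percent_uops_delivered_* metric expressions in gnr-metrics.json:
#   source_uops / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)
def uop_source_breakdown(dsb: int, mite: int, ms: int, lsd: int) -> dict[str, float]:
    """Return the fraction of IDQ uops delivered by each front-end source."""
    total = dsb + mite + ms + lsd
    if total == 0:
        return {"dsb": 0.0, "mite": 0.0, "ms": 0.0, "lsd": 0.0}
    return {
        "dsb": dsb / total,    # decoded i-cache (Decode Stream Buffer)
        "mite": mite / total,  # legacy decode pipeline
        "ms": ms / total,      # microcode sequencer
        "lsd": lsd / total,    # loop stream detector (event not in this hunk)
    }

if __name__ == "__main__":
    # Hypothetical counter readings.
    shares = uop_source_breakdown(dsb=8_000_000, mite=1_500_000, ms=300_000, lsd=200_000)
    for source, fraction in shares.items():
        print(f"{source:>4}: {fraction:6.1%}")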
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/gnr-metrics.json b/tools/perf/pmu-events/arch/x86/graniterapids/gnr-metrics.json
new file mode 100644
index 000000000000..cc3c834ca286
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/gnr-metrics.json
@@ -0,0 +1,2383 @@
+[
+ {
+ "BriefDescription": "C1 residency percent per core",
+ "MetricExpr": "cstate_core@c1\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C1_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C2 residency percent per package",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C2_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per core",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Core_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "C6 residency percent per package",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
+ "MetricGroup": "Power",
+ "MetricName": "C6_Pkg_Residency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uncore frequency per die [GHZ]",
+ "MetricExpr": "tma_info_system_socket_clks / #num_dies / duration_time / 1e9",
+ "MetricGroup": "SoC",
+ "MetricName": "UNCORE_FREQ"
+ },
+ {
+ "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY",
+ "MetricName": "cpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "The average number of cores that are in cstate C0 as observed by the power control unit (PCU)",
+ "MetricExpr": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C0 / pcu_0@UNC_P_CLOCKTICKS@ * #num_packages",
+ "MetricGroup": "cpu_cstate",
+ "MetricName": "cpu_cstate_c0"
+ },
+ {
+ "BriefDescription": "The average number of cores are in cstate C6 as observed by the power control unit (PCU)",
+ "MetricExpr": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C6 / pcu_0@UNC_P_CLOCKTICKS@ * #num_packages",
+ "MetricGroup": "cpu_cstate",
+ "MetricName": "cpu_cstate_c6"
+ },
+ {
+ "BriefDescription": "CPU operating frequency (in GHz)",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC * #SYSTEM_TSC_FREQ / 1e9",
+ "MetricName": "cpu_operating_frequency",
+ "ScaleUnit": "1GHz"
+ },
+ {
+ "BriefDescription": "Percentage of time spent in the active CPU power state C0",
+ "MetricExpr": "tma_info_system_cpus_utilized",
+ "MetricName": "cpu_utilization",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions",
+ "MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_load_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions",
+ "MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "dtlb_2nd_level_store_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Bandwidth observed by the integrated I/O traffic controller (IIO) of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
+ "MetricExpr": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.ALL_PARTS * 4 / 1e6 / duration_time",
+ "MetricName": "iio_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth observed by the integrated I/O traffic controller (IIO) of IO writes that are initiated by end device controllers that are writing memory to the CPU",
+ "MetricExpr": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.ALL_PARTS * 4 / 1e6 / duration_time",
+ "MetricName": "iio_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of inbound IO reads that are initiated by end device controllers that are requesting memory from the CPU and miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_l3_miss",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the local CPU socket",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_local",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from a remote CPU socket",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_remote",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of inbound IO writes that are initiated by end device controllers that are writing memory to the CPU",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_l3_miss",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the local CPU socket",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_local",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to a remote CPU socket",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_remote",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "The percent of inbound full cache line writes initiated by IO that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM / UNC_CHA_TOR_INSERTS.IO_ITOM",
+ "MetricName": "io_full_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Message Signaled Interrupts (MSI) per second sent by the integrated I/O traffic controller (IIO) to System Configuration Controller (Ubox)",
+ "MetricExpr": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX_POSTED / duration_time",
+ "MetricName": "io_msi",
+ "ScaleUnit": "1per_sec"
+ },
+ {
+ "BriefDescription": "The percent of inbound partial writes initiated by IO that miss the L3 cache",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_RFO) / (UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_RFO)",
+ "MetricName": "io_partial_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "The percent of inbound reads initiated by IO that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR / UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
+ "MetricName": "io_read_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions",
+ "MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricName": "itlb_2nd_level_mpi",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read requests missing in L1 instruction cache (includes prefetches) to the total number of completed instructions",
+ "MetricExpr": "L2_RQSTS.ALL_CODE_RD / INST_RETIRED.ANY",
+ "MetricName": "l1_i_code_read_misses_with_prefetches_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of demand load requests hitting in L1 data cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_RETIRED.L1_HIT / INST_RETIRED.ANY",
+ "MetricName": "l1d_demand_data_read_hits_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of requests missing L1 data cache (includes data+rfo w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "L1D.REPLACEMENT / INST_RETIRED.ANY",
+ "MetricName": "l1d_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read request missing L2 cache to the total number of completed instructions",
+ "MetricExpr": "L2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_code_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed demand load requests hitting in L2 cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_RETIRED.L2_HIT / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_data_read_hits_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of completed data read request missing L2 cache to the total number of completed instructions",
+ "MetricExpr": "MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
+ "MetricName": "l2_demand_data_read_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of requests missing L2 cache (includes code+data+rfo w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "L2_LINES_IN.ALL / INST_RETIRED.ANY",
+ "MetricName": "l2_mpi",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of code read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD / INST_RETIRED.ANY",
+ "MetricName": "llc_code_read_mpi_demand_plus_prefetch",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Ratio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA + UNC_CHA_TOR_INSERTS.IA_MISS_DRD + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF) / INST_RETIRED.ANY",
+ "MetricName": "llc_data_read_mpi_demand_plus_prefetch",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_latency",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to local memory in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_latency_for_local_requests",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to remote memory in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_latency_for_remote_requests",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to DRAM in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_to_dram_latency",
+ "ScaleUnit": "1ns"
+ },
+ {
+ "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to local memory",
+ "MetricExpr": "UNC_CHA_REQUESTS.READS_LOCAL * 64 / 1e6 / duration_time",
+ "MetricName": "llc_miss_local_memory_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to local memory",
+ "MetricExpr": "UNC_CHA_REQUESTS.WRITES_LOCAL * 64 / 1e6 / duration_time",
+ "MetricName": "llc_miss_local_memory_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to remote memory",
+ "MetricExpr": "UNC_CHA_REQUESTS.READS_REMOTE * 64 / 1e6 / duration_time",
+ "MetricName": "llc_miss_remote_memory_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to remote memory",
+ "MetricExpr": "UNC_CHA_REQUESTS.WRITES_REMOTE * 64 / 1e6 / duration_time",
+ "MetricName": "llc_miss_remote_memory_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "The ratio of number of completed memory load instructions to the total number completed instructions",
+ "MetricExpr": "MEM_INST_RETIRED.ALL_LOADS / INST_RETIRED.ANY",
+ "MetricName": "loads_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "DDR memory read bandwidth (MB/sec)",
+ "MetricExpr": "(UNC_M_CAS_COUNT_SCH0.RD + UNC_M_CAS_COUNT_SCH1.RD) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "DDR memory bandwidth (MB/sec)",
+ "MetricExpr": "(UNC_M_CAS_COUNT_SCH0.RD + UNC_M_CAS_COUNT_SCH1.RD + UNC_M_CAS_COUNT_SCH0.WR + UNC_M_CAS_COUNT_SCH1.WR) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_total",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "DDR memory write bandwidth (MB/sec)",
+ "MetricExpr": "(UNC_M_CAS_COUNT_SCH0.WR + UNC_M_CAS_COUNT_SCH1.WR) * 64 / 1e6 / duration_time",
+ "MetricName": "memory_bandwidth_write",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE)",
+ "MetricName": "numa_reads_addressed_to_local_dram",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE)",
+ "MetricName": "numa_reads_addressed_to_remote_dram",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uops delivered from decoded instruction cache (decoded stream buffer or DSB) as a percent of total uops delivered to Instruction Decode Queue",
+ "MetricExpr": "IDQ.DSB_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)",
+ "MetricName": "percent_uops_delivered_from_decoded_icache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uops delivered from legacy decode pipeline (Micro-instruction Translation Engine or MITE) as a percent of total uops delivered to Instruction Decode Queue",
+ "MetricExpr": "IDQ.MITE_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)",
+ "MetricName": "percent_uops_delivered_from_legacy_decode_pipeline",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uops delivered from microcode sequencer (MS) as a percent of total uops delivered to Instruction Decode Queue",
+ "MetricExpr": "IDQ.MS_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS + LSD.UOPS)",
+ "MetricName": "percent_uops_delivered_from_microcode_sequencer",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Percentage of cycles spent in System Management Interrupts.",
+ "MetricExpr": "((msr@aperf@ - cycles) / msr@aperf@ if msr@smi@ > 0 else 0)",
+ "MetricGroup": "smi",
+ "MetricName": "smi_cycles",
+ "MetricThreshold": "smi_cycles > 0.1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Number of SMI interrupts.",
+ "MetricExpr": "msr@smi@",
+ "MetricGroup": "smi",
+ "MetricName": "smi_num",
+ "ScaleUnit": "1SMI#"
+ },
+ {
+ "BriefDescription": "The ratio of number of completed memory store instructions to the total number completed instructions",
+ "MetricExpr": "MEM_INST_RETIRED.ALL_STORES / INST_RETIRED.ANY",
+ "MetricName": "stores_per_instr",
+ "ScaleUnit": "1per_instr"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
+ "MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5_11 + UOPS_DISPATCHED.PORT_6) / (5 * tma_info_core_core_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_alu_op_utilization",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the Advanced Matrix eXtensions (AMX) execution engine was busy with tile (arithmetic) operations",
+ "MetricExpr": "EXE.AMX_BUSY / tma_info_core_core_clks",
+ "MetricGroup": "BvCB;Compute;HPC;Server;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_amx_busy",
+ "MetricThreshold": "tma_amx_busy > 0.5 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
+ "MetricExpr": "78 * ASSISTS.ANY / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricName": "tma_assists",
+ "MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
+ "PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU retired uops as a result of handing SSE to AVX* or AVX* to SSE transition Assists.",
+ "MetricExpr": "63 * ASSISTS.SSE_AVX_MIX / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_avx_assists",
+ "MetricThreshold": "tma_avx_assists > 0.1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_backend_bound",
+ "MetricThreshold": "tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. Sample with: TOPDOWN.BACKEND_BOUND_SLOTS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "max(1 - (tma_frontend_bound + tma_backend_bound + tma_retiring), 0)",
+ "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_bad_speculation",
+ "MetricThreshold": "tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
+ "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
+ "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
+ "MetricName": "tma_bottleneck_big_code",
+ "MetricThreshold": "tma_bottleneck_big_code > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+ "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Ret",
+ "MetricName": "tma_bottleneck_branching_overhead",
+ "MetricThreshold": "tma_bottleneck_branching_overhead > 5",
+ "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)"
+ },
+ {
+ "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
+ "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * tma_amx_busy / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
+ "MetricGroup": "BvCB;Cor;tma_issueComp",
+ "MetricName": "tma_bottleneck_compute_bound_est",
+ "MetricThreshold": "tma_bottleneck_compute_bound_est > 20",
+ "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: "
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+ "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
+ "MetricName": "tma_bottleneck_data_cache_memory_bandwidth",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_bandwidth > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + 0 / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency) + tma_memory_bound * (tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_dependency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_lock_latency / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_loads / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_stores / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
+ "MetricName": "tma_bottleneck_data_cache_memory_latency",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_latency > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
+ "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - (1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms)) - tma_bottleneck_big_code",
+ "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
+ "MetricName": "tma_bottleneck_instruction_fetch_bw",
+ "MetricThreshold": "tma_bottleneck_instruction_fetch_bw > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of irregular execution (e.g",
+ "MetricExpr": "100 * ((1 - INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.MS\\,cmask\\=1@) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + tma_mispredicts_resteers * tma_other_mispredicts / tma_branch_mispredicts) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + RS.EMPTY_RESOURCE / tma_info_thread_clks * tma_ports_utilized_0) / (tma_amx_busy + tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
+ "MetricName": "tma_bottleneck_irregular_overhead",
+ "MetricThreshold": "tma_bottleneck_irregular_overhead > 10",
+ "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / (tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
+ "MetricName": "tma_bottleneck_memory_data_tlbs",
+ "MetricThreshold": "tma_bottleneck_memory_data_tlbs > 20",
+ "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
+ "MetricGroup": "BvMS;LockCont;Mem;Offcore;tma_issueSyncxn",
+ "MetricName": "tma_bottleneck_memory_synchronization",
+ "MetricThreshold": "tma_bottleneck_memory_synchronization > 10",
+ "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+ "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
+ "MetricName": "tma_bottleneck_mispredictions",
+ "MetricThreshold": "tma_bottleneck_mispredictions > 20",
+ "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+ "MetricExpr": "100 - (tma_bottleneck_big_code + tma_bottleneck_instruction_fetch_bw + tma_bottleneck_mispredictions + tma_bottleneck_data_cache_memory_bandwidth + tma_bottleneck_data_cache_memory_latency + tma_bottleneck_memory_data_tlbs + tma_bottleneck_memory_synchronization + tma_bottleneck_compute_bound_est + tma_bottleneck_irregular_overhead + tma_bottleneck_branching_overhead + tma_bottleneck_useful_work)",
+ "MetricGroup": "BvOB;Cor;Offcore",
+ "MetricName": "tma_bottleneck_other_bottlenecks",
+ "MetricThreshold": "tma_bottleneck_other_bottlenecks > 20",
+ "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
+ },
+ {
+ "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+ "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "BvUW;Ret",
+ "MetricName": "tma_bottleneck_useful_work",
+ "MetricThreshold": "tma_bottleneck_useful_work > 20"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-br\\-mispredict / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;Default;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricName": "tma_branch_mispredicts",
+ "MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: TOPDOWN.BR_MISPREDICT_SLOTS. Related metrics: tma_bottleneck_mispredictions, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers",
+ "MetricExpr": "INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks + tma_unknown_branches",
+ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_branch_resteers",
+ "MetricThreshold": "tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers. Branch Resteers estimates the Frontend delay in fetching operations from corrected path; following all sorts of miss-predicted branches. For example; branchy code with lots of miss-predictions might get categorized under Branch Resteers. Note the value of this node may overlap with its siblings. Sample with: BR_MISP_RETIRED.ALL_BRANCHES",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.1 power-performance optimized state (Faster wakeup time; Smaller power savings).",
+ "MetricExpr": "CPU_CLK_UNHALTED.C01 / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c01_wait",
+ "MetricThreshold": "tma_c01_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due staying in C0.2 power-performance optimized state (Slower wakeup time; Larger power savings).",
+ "MetricExpr": "CPU_CLK_UNHALTED.C02 / tma_info_thread_clks",
+ "MetricGroup": "C0Wait;TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_c02_wait",
+ "MetricThreshold": "tma_c02_wait > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
+ "MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricName": "tma_cisc",
+ "MetricThreshold": "tma_cisc > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction. A CISC instruction has multiple uops that are required to perform the instruction's functionality as in the case of read-modify-write as an example. Since these instructions require multiple uops they may or may not imply sub-optimal use of machine resources.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears",
+ "MetricExpr": "(1 - tma_branch_mispredicts / tma_bad_speculation) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "BadSpec;MachineClears;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueMC",
+ "MetricName": "tma_clears_resteers",
+ "MetricThreshold": "tma_clears_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Machine Clears. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that hit in the L2 cache.",
+ "MetricExpr": "max(0, FRONTEND_RETIRED.L1I_MISS * FRONTEND_RETIRED.L1I_MISS:R / tma_info_thread_clks - tma_code_l2_miss)",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_hit",
+ "MetricThreshold": "tma_code_l2_hit > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that miss in the L2 cache.",
+ "MetricExpr": "FRONTEND_RETIRED.L2_MISS * FRONTEND_RETIRED.L2_MISS:R / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_miss",
+ "MetricThreshold": "tma_code_l2_miss > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) ITLB was missed by instructions fetches, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, FRONTEND_RETIRED.ITLB_MISS * FRONTEND_RETIRED.ITLB_MISS:R / tma_info_thread_clks - tma_code_stlb_miss)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_hit",
+ "MetricThreshold": "tma_code_stlb_hit > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by instruction fetches, performing a hardware page walk",
+ "MetricExpr": "FRONTEND_RETIRED.STLB_MISS * FRONTEND_RETIRED.STLB_MISS:R / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_miss",
+ "MetricThreshold": "tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for (instruction) code accesses.",
+ "MetricExpr": "ITLB_MISSES.WALK_ACTIVE / tma_info_thread_clks * ITLB_MISSES.WALK_COMPLETED_2M_4M / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_2m",
+ "MetricThreshold": "tma_code_stlb_miss_2m > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for (instruction) code accesses.",
+ "MetricExpr": "ITLB_MISSES.WALK_ACTIVE / tma_info_thread_clks * ITLB_MISSES.WALK_COMPLETED_4K / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_4k",
+ "MetricThreshold": "tma_code_stlb_miss_4k > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by non-taken conditional branches.",
+ "MetricExpr": "BR_MISP_RETIRED.COND_NTAKEN_COST * BR_MISP_RETIRED.COND_NTAKEN_COST:R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_cond_nt_mispredicts",
+ "MetricThreshold": "tma_cond_nt_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to misprediction by taken conditional branches.",
+ "MetricExpr": "BR_MISP_RETIRED.COND_TAKEN_COST * BR_MISP_RETIRED.COND_TAKEN_COST:R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_cond_tk_mispredicts",
+ "MetricThreshold": "tma_cond_tk_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
+ "MetricExpr": "(MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS * min(MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS:R, 74.6 * tma_info_system_core_frequency) + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * min(MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD:R, 76.6 * tma_info_system_core_frequency) * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricName": "tma_contested_accesses",
+ "MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS. Related metrics: tma_bottleneck_memory_synchronization, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_backend_bound - tma_memory_bound)",
+ "MetricGroup": "Backend;Compute;Default;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_core_bound",
+ "MetricThreshold": "tma_core_bound > 0.1 & tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots where Core non-memory issues were of a bottleneck. Shortage in hardware compute resources; or dependencies in software's instructions are both categorized under Core Bound. Hence it may indicate the machine ran out of an out-of-order resource; certain execution units are overloaded or dependencies in program's data- or instruction-flow are limiting the performance (e.g. FP-chained long-latency arithmetic operations).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external CXL Memory by loads (e.g",
+ "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_CXL_MEM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_CXL_MEM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0))) if #has_pmem > 0 else 1)) * (MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_CXL_MEM + MEM_LOAD_RETIRED.LOCAL_CXL_MEM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_cxl_mem_bound",
+ "MetricThreshold": "tma_cxl_mem_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external CXL Memory by loads (e.g. 3D-Xpoint (Crystal Ridge, a.k.a. IXP) memory, PMM - Persistent Memory Module [from CLX to SPR] or any other CXL Type3 Memory [EMR onwards]).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "(MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD * min(MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD:R, 74.6 * tma_info_system_core_frequency) + MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD * min(MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD:R, 74.6 * tma_info_system_core_frequency) * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricName": "tma_data_sharing",
+ "MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder",
+ "MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=1@ - cpu@INST_DECODED.DECODERS\\,cmask\\=2@) / tma_info_core_core_clks / 2",
+ "MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group",
+ "MetricName": "tma_decoder0_alone",
+ "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
+ "MetricExpr": "ARITH.DIV_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_divider",
+ "MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIV_ACTIVE",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
+ "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks - tma_cxl_mem_bound if #has_pmem > 0 else MEMORY_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks)",
+ "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_dram_bound",
+ "MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
+ "MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / tma_info_core_core_clks / 2",
+ "MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_dsb",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
+ "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines",
+ "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
+ "MetricName": "tma_dsb_switches",
+ "MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
+ "MetricExpr": "MEM_INST_RETIRED.STLB_HIT_LOADS * min(MEM_INST_RETIRED.STLB_HIT_LOADS:R, 7) / tma_info_thread_clks + tma_load_stlb_miss",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricName": "tma_dtlb_load",
+ "MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_store",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
+ "MetricExpr": "MEM_INST_RETIRED.STLB_HIT_STORES * min(MEM_INST_RETIRED.STLB_HIT_STORES:R, 7) / tma_info_thread_clks + tma_store_stlb_miss",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricName": "tma_dtlb_store",
+ "MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_load",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
+ "MetricExpr": "(170 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_MISS@offcore_rsp\\=0x103b800002@ + 81 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricName": "tma_false_sharing",
+ "MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
+ "MetricExpr": "L1D_PEND_MISS.FB_FULL / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricName": "tma_fb_full",
+ "MetricThreshold": "tma_fb_full > 0.3",
+ "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)",
+ "MetricGroup": "Default;FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
+ "MetricName": "tma_fetch_bandwidth",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-fetch\\-lat / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
+ "MetricGroup": "Default;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group",
+ "MetricName": "tma_fetch_latency",
+ "MetricThreshold": "tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend latency issues. For example; instruction-cache misses; iTLB misses or fetch stalls after a branch misprediction are categorized under Frontend Latency. In such cases; the Frontend eventually delivers no uops for some period. Sample with: FRONTEND_RETIRED.LATENCY_GE_16_PS;FRONTEND_RETIRED.LATENCY_GE_8_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops",
+ "MetricExpr": "max(0, tma_heavy_operations - tma_microcode_sequencer)",
+ "MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0",
+ "MetricName": "tma_few_uops_instructions",
+ "MetricThreshold": "tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired)",
+ "MetricExpr": "tma_x87_use + tma_fp_scalar + tma_fp_vector",
+ "MetricGroup": "HPC;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_fp_arith",
+ "MetricThreshold": "tma_fp_arith > 0.2 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents overall arithmetic floating-point (FP) operations fraction the CPU has executed (retired). Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain and FMA double-counting.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists",
+ "MetricExpr": "30 * ASSISTS.FP / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_fp_assists",
+ "MetricThreshold": "tma_fp_assists > 0.1",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Floating-Point Divider unit was active.",
+ "MetricExpr": "ARITH.FPDIV_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_fp_divider",
+ "MetricThreshold": "tma_fp_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
+ "MetricName": "tma_fp_scalar",
+ "MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
+ "MetricName": "tma_fp_vector",
+ "MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.128B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_128b",
+ "MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.256B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_256b",
+ "MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.512B_PACKED_HALF) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_512b",
+ "MetricThreshold": "tma_fp_vector_512b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 512-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
+ "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_frontend_bound",
+ "MetricThreshold": "tma_frontend_bound > 0.15",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Pipeline_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. Sample with: FRONTEND_RETIRED.LATENCY_GE_4_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions",
+ "MetricExpr": "tma_light_operations * INST_RETIRED.MACRO_FUSED / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_fused_instructions",
+ "MetricThreshold": "tma_fused_instructions > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring fused instructions -- where one uop can represent multiple contiguous instructions. CMP+JCC or DEC+JCC are common examples of legacy fusions. {([MTL] Note new MOV+OP and Load+OP fusions appear under Other_Light_Ops in MTL!)}",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-heavy\\-ops / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "Default;Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group",
+ "MetricName": "tma_heavy_operations",
+ "MetricThreshold": "tma_heavy_operations > 0.1",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+]). Sample with: UOPS_RETIRED.HEAVY",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
+ "MetricExpr": "ICACHE_DATA.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_icache_misses",
+ "MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by indirect CALL instructions.",
+ "MetricExpr": "BR_MISP_RETIRED.INDIRECT_CALL_COST * BR_MISP_RETIRED.INDIRECT_CALL_COST:R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_ind_call_mispredicts",
+ "MetricThreshold": "tma_ind_call_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by indirect JMP instructions.",
+ "MetricExpr": "max((BR_MISP_RETIRED.INDIRECT_COST * BR_MISP_RETIRED.INDIRECT_COST:R - BR_MISP_RETIRED.INDIRECT_CALL_COST * BR_MISP_RETIRED.INDIRECT_CALL_COST:R) / tma_info_thread_clks, 0)",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_ind_jump_mispredicts",
+ "MetricThreshold": "tma_ind_jump_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+ "MetricExpr": "tma_bottleneck_mispredictions * tma_info_thread_slots / 6 / BR_MISP_RETIRED.ALL_BRANCHES / 100",
+ "MetricGroup": "Bad;BrMispredicts;tma_issueBM",
+ "MetricName": "tma_info_bad_spec_branch_misprediction_cost",
+ "PublicDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_mispredicts_resteers"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_NTAKEN",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_cond_ntaken",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_cond_ntaken < 200"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for conditional taken branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_cond_taken",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_cond_taken < 200"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_indirect",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
+ },
+ {
+ "BriefDescription": "Instructions per retired Mispredicts for return branches (lower number means higher occurrence rate).",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.RET",
+ "MetricGroup": "Bad;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmisp_ret",
+ "MetricThreshold": "tma_info_bad_spec_ipmisp_ret < 500"
+ },
+ {
+ "BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear) (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts",
+ "MetricName": "tma_info_bad_spec_ipmispredict",
+ "MetricThreshold": "tma_info_bad_spec_ipmispredict < 200"
+ },
+ {
+ "BriefDescription": "Speculative to Retired ratio of all clears (covering Mispredicts and nukes)",
+ "MetricExpr": "INT_MISC.CLEARS_COUNT / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)",
+ "MetricGroup": "BrMispredicts",
+ "MetricName": "tma_info_bad_spec_spec_clears_ratio"
+ },
+ {
+ "BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
+ "MetricExpr": "(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
+ "MetricGroup": "Cor;SMT",
+ "MetricName": "tma_info_botlnk_l0_core_bound_likely",
+ "MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite + tma_ms)))",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite + tma_ms))",
+ "MetricGroup": "DSBmiss;Fed;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_misses",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_icache_misses / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Fed;FetchLat;IcMiss;tma_issueFL",
+ "MetricName": "tma_info_botlnk_l2_ic_misses",
+ "MetricThreshold": "tma_info_botlnk_l2_ic_misses > 5",
+ "PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: "
+ },
+ {
+ "BriefDescription": "Fraction of branches that are CALL or RET",
+ "MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_callret"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are non-taken conditionals",
+ "MetricExpr": "BR_INST_RETIRED.COND_NTAKEN / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches;CodeGen;PGO",
+ "MetricName": "tma_info_branches_cond_nt"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are taken conditionals",
+ "MetricExpr": "BR_INST_RETIRED.COND_TAKEN / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches;CodeGen;PGO",
+ "MetricName": "tma_info_branches_cond_tk"
+ },
+ {
+ "BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
+ "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_jump"
+ },
+ {
+ "BriefDescription": "Fraction of branches of other types (not individually covered by other metrics in Info.Branches group)",
+ "MetricExpr": "1 - (tma_info_branches_cond_nt + tma_info_branches_cond_tk + tma_info_branches_callret + tma_info_branches_jump)",
+ "MetricGroup": "Bad;Branches",
+ "MetricName": "tma_info_branches_other_branches"
+ },
+ {
+ "BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
+ "MetricExpr": "(CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else tma_info_thread_clks)",
+ "MetricGroup": "SMT",
+ "MetricName": "tma_info_core_core_clks"
+ },
+ {
+ "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
+ "MetricExpr": "INST_RETIRED.ANY / tma_info_core_core_clks",
+ "MetricGroup": "Ret;SMT;TmaL1;tma_L1_group",
+ "MetricName": "tma_info_core_coreipc"
+ },
+ {
+ "BriefDescription": "uops Executed per Cycle",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / tma_info_thread_clks",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_core_epc"
+ },
+ {
+ "BriefDescription": "Floating Point Operations Per Cycle",
+ "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
+ "MetricGroup": "Flops;Ret",
+ "MetricName": "tma_info_core_flopc"
+ },
+ {
+ "BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
+ "MetricExpr": "(FP_ARITH_DISPATCHED.PORT_0 + FP_ARITH_DISPATCHED.PORT_1 + FP_ARITH_DISPATCHED.PORT_5) / (2 * tma_info_core_core_clks)",
+ "MetricGroup": "Cor;Flops;HPC",
+ "MetricName": "tma_info_core_fp_arith_utilization",
+ "PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
+ },
+ {
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
+ "MetricName": "tma_info_core_ilp"
+ },
+ {
+ "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
+ "MetricExpr": "IDQ.DSB_UOPS / UOPS_ISSUED.ANY",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_frontend_dsb_coverage",
+ "MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 6 > 0.35",
+ "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
+ "BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
+ "MetricExpr": "DSB2MITE_SWITCHES.PENALTY_CYCLES / cpu@DSB2MITE_SWITCHES.PENALTY_CYCLES\\,cmask\\=1\\,edge@",
+ "MetricGroup": "DSBmiss",
+ "MetricName": "tma_info_frontend_dsb_switch_cost"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to retired DSB misses",
+ "MetricExpr": "FRONTEND_RETIRED.ANY_DSB_MISS * FRONTEND_RETIRED.ANY_DSB_MISS:R / tma_info_thread_clks",
+ "MetricGroup": "DSBmiss;Fed;FetchLat",
+ "MetricName": "tma_info_frontend_dsb_switches_ret",
+ "MetricThreshold": "tma_info_frontend_dsb_switches_ret > 0.05"
+ },
+ {
+ "BriefDescription": "Average number of Uops issued by front-end when it issued something",
+ "MetricExpr": "UOPS_ISSUED.ANY / cpu@UOPS_ISSUED.ANY\\,cmask\\=1@",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_frontend_fetch_upc"
+ },
+ {
+ "BriefDescription": "Average Latency for L1 instruction cache misses",
+ "MetricExpr": "ICACHE_DATA.STALLS / ICACHE_DATA.STALL_PERIODS",
+ "MetricGroup": "Fed;FetchLat;IcMiss",
+ "MetricName": "tma_info_frontend_icache_miss_latency"
+ },
+ {
+ "BriefDescription": "Instructions per non-speculative DSB miss (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS",
+ "MetricGroup": "DSBmiss;Fed",
+ "MetricName": "tma_info_frontend_ipdsb_miss_ret",
+ "MetricThreshold": "tma_info_frontend_ipdsb_miss_ret < 50"
+ },
+ {
+ "BriefDescription": "Instructions per speculative Unknown Branch Misprediction (BAClear) (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / BACLEARS.ANY",
+ "MetricGroup": "Fed",
+ "MetricName": "tma_info_frontend_ipunknown_branch"
+ },
+ {
+ "BriefDescription": "L2 cache true code cacheline misses per kilo instruction",
+ "MetricExpr": "1e3 * FRONTEND_RETIRED.L2_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "IcMiss",
+ "MetricName": "tma_info_frontend_l2mpki_code"
+ },
+ {
+ "BriefDescription": "L2 cache speculative code cacheline misses per kilo instruction",
+ "MetricExpr": "1e3 * L2_RQSTS.CODE_RD_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "IcMiss",
+ "MetricName": "tma_info_frontend_l2mpki_code_all"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to retired operations that invoke the Microcode Sequencer",
+ "MetricExpr": "FRONTEND_RETIRED.MS_FLOWS * FRONTEND_RETIRED.MS_FLOWS:R / tma_info_thread_clks",
+ "MetricGroup": "Fed;FetchLat;MicroSeq",
+ "MetricName": "tma_info_frontend_ms_latency_ret",
+ "MetricThreshold": "tma_info_frontend_ms_latency_ret > 0.05"
+ },
+ {
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
+ "BriefDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection",
+ "MetricExpr": "INT_MISC.UNKNOWN_BRANCH_CYCLES / cpu@INT_MISC.UNKNOWN_BRANCH_CYCLES\\,cmask\\=1\\,edge@",
+ "MetricGroup": "Fed",
+ "MetricName": "tma_info_frontend_unknown_branch_cost",
+ "PublicDescription": "Average number of cycles the front-end was delayed due to an Unknown Branch detection. See Unknown_Branches node."
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to retired branches who got branch address clears",
+ "MetricExpr": "FRONTEND_RETIRED.UNKNOWN_BRANCH * FRONTEND_RETIRED.UNKNOWN_BRANCH:R / tma_info_thread_clks",
+ "MetricGroup": "Fed;FetchLat",
+ "MetricName": "tma_info_frontend_unknown_branches_ret"
+ },
+ {
+ "BriefDescription": "Branch instructions per taken branch.",
+ "MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
+ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "tma_info_inst_mix_bptkbranch"
+ },
+ {
+ "BriefDescription": "Total number of retired Instructions",
+ "MetricExpr": "INST_RETIRED.ANY",
+ "MetricGroup": "Summary;TmaL1;tma_L1_group",
+ "MetricName": "tma_info_inst_mix_instructions",
+ "PublicDescription": "Total number of retired Instructions. Sample with: INST_RETIRED.PREC_DIST"
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED2.SCALAR + (FP_ARITH_INST_RETIRED.VECTOR + FP_ARITH_INST_RETIRED2.VECTOR))",
+ "MetricGroup": "Flops;InsType",
+ "MetricName": "tma_info_inst_mix_iparith",
+ "MetricThreshold": "tma_info_inst_mix_iparith < 10",
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.128B_PACKED_HALF)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx128",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.256B_PACKED_HALF)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx256",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE + FP_ARITH_INST_RETIRED2.512B_PACKED_HALF)",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_avx512",
+ "MetricThreshold": "tma_info_inst_mix_iparith_avx512 < 10",
+ "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
+ "MetricGroup": "Flops;FpScalar;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_dp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Half-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED2.SCALAR",
+ "MetricGroup": "Flops;FpScalar;InsType;Server",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_hp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_hp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Half-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
+ "MetricGroup": "Flops;FpScalar;InsType",
+ "MetricName": "tma_info_inst_mix_iparith_scalar_sp",
+ "MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
+ },
+ {
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
+ "MetricGroup": "Branches;Fed;InsType",
+ "MetricName": "tma_info_inst_mix_ipbranch",
+ "MetricThreshold": "tma_info_inst_mix_ipbranch < 8"
+ },
+ {
+ "BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_CALL",
+ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "tma_info_inst_mix_ipcall",
+ "MetricThreshold": "tma_info_inst_mix_ipcall < 200"
+ },
+ {
+ "BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
+ "MetricGroup": "Flops;InsType",
+ "MetricName": "tma_info_inst_mix_ipflop",
+ "MetricThreshold": "tma_info_inst_mix_ipflop < 10"
+ },
+ {
+ "BriefDescription": "Instructions per Load (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_LOADS",
+ "MetricGroup": "InsType",
+ "MetricName": "tma_info_inst_mix_ipload",
+ "MetricThreshold": "tma_info_inst_mix_ipload < 3"
+ },
+ {
+ "BriefDescription": "Instructions per PAUSE (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / CPU_CLK_UNHALTED.PAUSE_INST",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_ippause"
+ },
+ {
+ "BriefDescription": "Instructions per Store (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_STORES",
+ "MetricGroup": "InsType",
+ "MetricName": "tma_info_inst_mix_ipstore",
+ "MetricThreshold": "tma_info_inst_mix_ipstore < 8"
+ },
+ {
+ "BriefDescription": "Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / SW_PREFETCH_ACCESS.ANY",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_inst_mix_ipswpf",
+ "MetricThreshold": "tma_info_inst_mix_ipswpf < 100"
+ },
+ {
+ "BriefDescription": "Instructions per taken branch",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
+ "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
+ "MetricName": "tma_info_inst_mix_iptb",
+ "MetricThreshold": "tma_info_inst_mix_iptb < 13",
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
+ },
+ {
+ "BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
+ "MetricExpr": "1e3 * L2_LINES_OUT.NON_SILENT / tma_info_inst_mix_instructions",
+ "MetricGroup": "L2Evicts;Mem;Server",
+ "MetricName": "tma_info_memory_core_l2_evictions_nonsilent_pki"
+ },
+ {
+ "BriefDescription": "Rate of silent evictions from the L2 cache per Kilo instruction where the evicted lines are dropped (no writeback to L3 or memory)",
+ "MetricExpr": "1e3 * L2_LINES_OUT.SILENT / tma_info_inst_mix_instructions",
+ "MetricGroup": "L2Evicts;Mem;Server",
+ "MetricName": "tma_info_memory_core_l2_evictions_silent_pki"
+ },
+ {
+ "BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l3_cache_access_bw",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_core_l3_cache_access_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_fb_hpki"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l1mpki"
+ },
+ {
+ "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l1mpki_load"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1e3 * (L2_RQSTS.REFERENCES - L2_RQSTS.MISS) / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2hpki_all"
+ },
+ {
+ "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2hpki_load"
+ },
+ {
+ "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Backend;CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2mpki"
+ },
+ {
+ "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_all"
+ },
+ {
+ "BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
+ "MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheHits;Mem",
+ "MetricName": "tma_info_memory_l2mpki_load"
+ },
+ {
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
+ },
+ {
+ "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_l3_cache_access_bw"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
+ },
+ {
+ "BriefDescription": "Average Parallel L2 cache miss data reads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
+ "MetricGroup": "Memory_BW;Offcore",
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
+ },
+ {
+ "BriefDescription": "Average Latency for L2 cache miss demand Loads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
+ },
+ {
+ "BriefDescription": "Average Parallel L2 cache miss demand Loads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=1@",
+ "MetricGroup": "Memory_BW;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
+ },
+ {
+ "BriefDescription": "Average Latency for L3 cache miss demand Loads",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
+ "MetricGroup": "Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l3_miss_latency"
+ },
+ {
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / MEM_LOAD_COMPLETED.L1_MISS_ANY",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
+ },
+ {
+ "BriefDescription": "\"Bus lock\" per kilo instruction",
+ "MetricExpr": "1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_bus_lock_pki"
+ },
+ {
+ "BriefDescription": "Off-core accesses per kilo instruction for modified write requests",
+ "MetricExpr": "1e3 * OCR.MODIFIED_WRITE.ANY_RESPONSE / tma_info_inst_mix_instructions",
+ "MetricGroup": "Offcore;Server",
+ "MetricName": "tma_info_memory_mix_offcore_mwrite_any_pki"
+ },
+ {
+ "BriefDescription": "Off-core accesses per kilo instruction for reads-to-core requests (speculative; including in-core HW prefetches)",
+ "MetricExpr": "1e3 * OCR.READS_TO_CORE.ANY_RESPONSE / tma_info_inst_mix_instructions",
+ "MetricGroup": "CacheHits;Offcore;Server",
+ "MetricName": "tma_info_memory_mix_offcore_read_any_pki"
+ },
+ {
+ "BriefDescription": "L3 cache misses per kilo instruction for reads-to-core requests (speculative; including in-core HW prefetches)",
+ "MetricExpr": "1e3 * OCR.READS_TO_CORE.L3_MISS / tma_info_inst_mix_instructions",
+ "MetricGroup": "Offcore;Server",
+ "MetricName": "tma_info_memory_mix_offcore_read_l3m_pki"
+ },
+ {
+ "BriefDescription": "Un-cacheable retired load per kilo instruction",
+ "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_uc_load_pki"
+ },
+ {
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ },
+ {
+ "BriefDescription": "Rate of L2 HW prefetched lines that were not used by demand accesses",
+ "MetricExpr": "L2_LINES_OUT.USELESS_HWPF / (L2_LINES_OUT.SILENT + L2_LINES_OUT.NON_SILENT)",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_memory_prefetches_useless_hwpf",
+ "MetricThreshold": "tma_info_memory_prefetches_useless_hwpf > 0.15"
+ },
+ {
+ "BriefDescription": "Average DRAM BW for Reads-to-Core (R2C) covering for memory attached to local- and remote-socket",
+ "MetricExpr": "64 * OCR.READS_TO_CORE.DRAM / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;Mem;MemoryBW;Offcore;Server",
+ "MetricName": "tma_info_memory_soc_r2c_dram_bw",
+ "PublicDescription": "Average DRAM BW for Reads-to-Core (R2C) covering for memory attached to local- and remote-socket. See R2C_Offcore_BW."
+ },
+ {
+ "BriefDescription": "Average L3-cache miss BW for Reads-to-Core (R2C)",
+ "MetricExpr": "64 * OCR.READS_TO_CORE.L3_MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;Mem;MemoryBW;Offcore;Server",
+ "MetricName": "tma_info_memory_soc_r2c_l3m_bw",
+ "PublicDescription": "Average L3-cache miss BW for Reads-to-Core (R2C). This covering going to DRAM or other memory off-chip memory tears. See R2C_Offcore_BW."
+ },
+ {
+ "BriefDescription": "Average Off-core access BW for Reads-to-Core (R2C)",
+ "MetricExpr": "64 * OCR.READS_TO_CORE.ANY_RESPONSE / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;Mem;MemoryBW;Offcore;Server",
+ "MetricName": "tma_info_memory_soc_r2c_offcore_bw",
+ "PublicDescription": "Average Off-core access BW for Reads-to-Core (R2C). R2C account for demand or prefetch load/RFO/code access that fill data into the Core caches."
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricGroup": "Fed;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_code_stlb_mpki"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to STLB misses by demand loads",
+ "MetricExpr": "MEM_INST_RETIRED.STLB_MISS_LOADS * MEM_INST_RETIRED.STLB_MISS_LOADS:R / tma_info_thread_clks",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_load_stlb_miss_ret",
+ "MetricThreshold": "tma_info_memory_tlb_load_stlb_miss_ret > 0.05"
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) data load speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_load_stlb_mpki"
+ },
+ {
+ "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
+ "MetricExpr": "(ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING) / (4 * tma_info_core_core_clks)",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_page_walks_utilization",
+ "MetricThreshold": "tma_info_memory_tlb_page_walks_utilization > 0.5"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU retirement was stalled likely due to STLB misses by demand stores",
+ "MetricExpr": "MEM_INST_RETIRED.STLB_MISS_STORES * MEM_INST_RETIRED.STLB_MISS_STORES:R / tma_info_thread_clks",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_store_stlb_miss_ret",
+ "MetricThreshold": "tma_info_memory_tlb_store_stlb_miss_ret > 0.05"
+ },
+ {
+ "BriefDescription": "STLB (2nd level TLB) data store speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
+ "MetricExpr": "1e3 * DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
+ "MetricGroup": "Mem;MemoryTLB",
+ "MetricName": "tma_info_memory_tlb_store_stlb_mpki"
+ },
+ {
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@)",
+ "MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
+ "MetricName": "tma_info_pipeline_execute"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from DSB per cycle",
+ "MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_dsb"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MITE per cycle",
+ "MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_mite"
+ },
+ {
+ "BriefDescription": "Instructions per a microcode Assist invocation",
+ "MetricExpr": "INST_RETIRED.ANY / ASSISTS.ANY",
+ "MetricGroup": "MicroSeq;Pipeline;Ret;Retire",
+ "MetricName": "tma_info_pipeline_ipassist",
+ "MetricThreshold": "tma_info_pipeline_ipassist < 100e3",
+ "PublicDescription": "Instructions per a microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate)"
+ },
+ {
+ "BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+ "MetricGroup": "Pipeline;Ret",
+ "MetricName": "tma_info_pipeline_retire"
+ },
+ {
+ "BriefDescription": "Estimated fraction of retirement-cycles dealing with repeat instructions",
+ "MetricExpr": "INST_RETIRED.REP_ITERATION / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
+ "MetricGroup": "MicroSeq;Pipeline;Ret",
+ "MetricName": "tma_info_pipeline_strings_cycles",
+ "MetricThreshold": "tma_info_pipeline_strings_cycles > 0.1"
+ },
+ {
+ "BriefDescription": "Fraction of cycles the processor is waiting yet unhalted; covering legacy PAUSE instruction, as well as C0.1 / C0.2 power-performance optimized states",
+ "MetricExpr": "CPU_CLK_UNHALTED.C0_WAIT / tma_info_thread_clks",
+ "MetricGroup": "C0Wait",
+ "MetricName": "tma_info_system_c0_wait",
+ "MetricThreshold": "tma_info_system_c0_wait > 0.05"
+ },
+ {
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
+ "MetricGroup": "Power;Summary",
+ "MetricName": "tma_info_system_core_frequency"
+ },
+ {
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
+ "MetricGroup": "HPC;Summary",
+ "MetricName": "tma_info_system_cpu_utilization"
+ },
+ {
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [GB / sec]",
+ "MetricExpr": "(64 * UNC_CXLCM_RxC_PACK_BUF_INSERTS.MEM_DATA / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_read_bw"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes [GB / sec]",
+ "MetricExpr": "(64 * UNC_CXLDP_TxC_AGF_INSERTS.M2S_DATA / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_write_bw"
+ },
+ {
+ "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
+ "MetricExpr": "64 * (UNC_M_CAS_COUNT_SCH0.RD + UNC_M_CAS_COUNT_SCH1.RD + UNC_M_CAS_COUNT_SCH0.WR + UNC_M_CAS_COUNT_SCH1.WR) / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
+ "MetricName": "tma_info_system_dram_bw_use",
+ "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_mem_bandwidth, tma_sq_full"
+ },
+ {
+ "BriefDescription": "Giga Floating Point Operations Per Second",
+ "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / tma_info_system_time",
+ "MetricGroup": "Cor;Flops;HPC",
+ "MetricName": "tma_info_system_gflops",
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
+ },
+ {
+ "BriefDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_read_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]. Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU"
+ },
+ {
+ "BriefDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR) * 64 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_write_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]. Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU"
+ },
+ {
+ "BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.FAR_BRANCH:u",
+ "MetricGroup": "Branches;OS",
+ "MetricName": "tma_info_system_ipfarbranch",
+ "MetricThreshold": "tma_info_system_ipfarbranch < 1e6"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction for the Operating System (OS) Kernel mode",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / INST_RETIRED.ANY_P:k",
+ "MetricGroup": "OS",
+ "MetricName": "tma_info_system_kernel_cpi"
+ },
+ {
+ "BriefDescription": "Fraction of cycles spent in the Operating System (OS) Kernel mode",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "OS",
+ "MetricName": "tma_info_system_kernel_utilization",
+ "MetricThreshold": "tma_info_system_kernel_utilization > 0.05"
+ },
+ {
+ "BriefDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / uncore_cha_0@event\\=0x1@",
+ "MetricGroup": "MemOffcore;MemoryLat;Server;SoC",
+ "MetricName": "tma_info_system_mem_dram_read_latency",
+ "PublicDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches"
+ },
+ {
+ "BriefDescription": "Average number of parallel data read requests to external memory",
+ "MetricExpr": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD@thresh\\=1@",
+ "MetricGroup": "Mem;MemoryBW;SoC",
+ "MetricName": "tma_info_system_mem_parallel_reads",
+ "PublicDescription": "Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches"
+ },
+ {
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "(power@energy\\-pkg@ * 61 + 15.6 * power@energy\\-ram@) / (duration_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
+ },
+ {
+ "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
+ "MetricExpr": "(1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_DISTRIBUTED if #SMT_on else 0)",
+ "MetricGroup": "SMT",
+ "MetricName": "tma_info_system_smt_2t_utilization"
+ },
+ {
+ "BriefDescription": "Socket actual clocks when any core is active on that socket",
+ "MetricExpr": "uncore_cha_0@event\\=0x1@",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_socket_clks"
+ },
+ {
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
+ "BriefDescription": "Average Frequency Utilization relative nominal frequency",
+ "MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_system_turbo_utilization"
+ },
+ {
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency"
+ },
+ {
+ "BriefDescription": "Cross-socket Ultra Path Interconnect (UPI) data transmit bandwidth for data only [MB / sec]",
+ "MetricExpr": "UNC_UPI_TxL_FLITS.ALL_DATA * 64 / 9 / 1e6",
+ "MetricGroup": "Server;SoC",
+ "MetricName": "tma_info_system_upi_data_transmit_bw"
+ },
+ {
+ "BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Pipeline",
+ "MetricName": "tma_info_thread_clks"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
+ "MetricExpr": "1 / tma_info_thread_ipc",
+ "MetricGroup": "Mem;Pipeline",
+ "MetricName": "tma_info_thread_cpi"
+ },
+ {
+ "BriefDescription": "The ratio of Executed- by Issued-Uops",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY",
+ "MetricGroup": "Cor;Pipeline",
+ "MetricName": "tma_info_thread_execute_per_issue",
+ "PublicDescription": "The ratio of Executed- by Issued-Uops. Ratio > 1 suggests high rate of uop micro-fusions. Ratio < 1 suggest high rate of \"execute\" at rename stage."
+ },
+ {
+ "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
+ "MetricExpr": "INST_RETIRED.ANY / tma_info_thread_clks",
+ "MetricGroup": "Ret;Summary",
+ "MetricName": "tma_info_thread_ipc"
+ },
+ {
+ "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
+ "MetricExpr": "TOPDOWN.SLOTS",
+ "MetricGroup": "TmaL1;tma_L1_group",
+ "MetricName": "tma_info_thread_slots"
+ },
+ {
+ "BriefDescription": "Fraction of Physical Core issue-slots utilized by this Logical Processor",
+ "MetricExpr": "(tma_info_thread_slots / (TOPDOWN.SLOTS / 2) if #SMT_on else 1)",
+ "MetricGroup": "SMT;TmaL1;tma_L1_group",
+ "MetricName": "tma_info_thread_slots_utilization"
+ },
+ {
+ "BriefDescription": "Uops Per Instruction",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / INST_RETIRED.ANY",
+ "MetricGroup": "Pipeline;Ret;Retire",
+ "MetricName": "tma_info_thread_uoppi",
+ "MetricThreshold": "tma_info_thread_uoppi > 1.05"
+ },
+ {
+ "BriefDescription": "Uops per taken branch",
+ "MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN",
+ "MetricGroup": "Branches;Fed;FetchBW",
+ "MetricName": "tma_info_thread_uptb",
+ "MetricThreshold": "tma_info_thread_uptb < 9"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Integer Divider unit was active.",
+ "MetricExpr": "tma_divider - tma_fp_divider",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_int_divider",
+ "MetricThreshold": "tma_int_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents overall Integer (Int) select operations fraction the CPU has executed (retired)",
+ "MetricExpr": "tma_int_vector_128b + tma_int_vector_256b",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_int_operations",
+ "MetricThreshold": "tma_int_operations > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents overall Integer (Int) select operations fraction the CPU has executed (retired). Vector/Matrix Int operations and shuffles are counted. Note this metric's value may exceed its parent due to use of \"Uops\" CountDomain.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
+ "MetricExpr": "(INT_VEC_RETIRED.ADD_128 + INT_VEC_RETIRED.VNNI_128) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P",
+ "MetricName": "tma_int_vector_128b",
+ "MetricThreshold": "tma_int_vector_128b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents 128-bit vector Integer ADD/SUB/SAD or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired",
+ "MetricExpr": "(INT_VEC_RETIRED.ADD_256 + INT_VEC_RETIRED.MUL_256 + INT_VEC_RETIRED.VNNI_256) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Compute;IntVector;Pipeline;TopdownL4;tma_L4_group;tma_int_operations_group;tma_issue2P",
+ "MetricName": "tma_int_vector_256b",
+ "MetricThreshold": "tma_int_vector_256b > 0.1 & (tma_int_operations > 0.1 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents 256-bit vector Integer ADD/SUB/SAD/MUL or VNNI (Vector Neural Network Instructions) uops fraction the CPU has retired. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
+ "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricName": "tma_itlb_misses",
+ "MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
+ "MetricExpr": "max((EXE_ACTIVITY.BOUND_ON_LOADS - MEMORY_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricName": "tma_l1_bound",
+ "MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache",
+ "MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - MEMORY_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_dependency",
+ "MetricThreshold": "tma_l1_latency_dependency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache. The short latency of the L1D cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
+ "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L1D_MISS - MEMORY_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_l2_bound",
+ "MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited)",
+ "MetricExpr": "MEM_LOAD_RETIRED.L2_HIT * min(MEM_LOAD_RETIRED.L2_HIT:R, 4.4 * tma_info_system_core_frequency) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_l2_bound_group",
+ "MetricName": "tma_l2_hit_latency",
+ "MetricThreshold": "tma_l2_hit_latency > 0.05 & (tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited). Avoiding L1 cache misses (i.e. L1 misses/L2 hits) will improve the latency. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
+ "MetricExpr": "(MEMORY_ACTIVITY.STALLS_L2_MISS - MEMORY_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_l3_bound",
+ "MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "MetricExpr": "MEM_LOAD_RETIRED.L3_HIT * min(MEM_LOAD_RETIRED.L3_HIT:R, 32.6 * tma_info_system_core_frequency) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricName": "tma_l3_hit_latency",
+ "MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_bottleneck_data_cache_memory_latency, tma_mem_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
+ "MetricExpr": "DECODE.LCP / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
+ "MetricName": "tma_lcp",
+ "MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation)",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_retiring - tma_heavy_operations)",
+ "MetricGroup": "Default;Retire;TmaL2;TopdownL2;tma_L2_group;tma_retiring_group",
+ "MetricName": "tma_light_operations",
+ "MetricThreshold": "tma_light_operations > 0.6",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_2_3_10 / (3 * tma_info_core_core_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_load_op_utilization",
+ "MetricThreshold": "tma_load_op_utilization > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations. Sample with: UOPS_DISPATCHED.PORT_2_3_10",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) DTLB was missed by load accesses, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_dtlb_load - tma_load_stlb_miss)",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group",
+ "MetricName": "tma_load_stlb_hit",
+ "MetricThreshold": "tma_load_stlb_hit > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by load accesses, performing a hardware page walk",
+ "MetricExpr": "DTLB_LOAD_MISSES.WALK_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_load_group",
+ "MetricName": "tma_load_stlb_miss",
+ "MetricThreshold": "tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_1G / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_1g",
+ "MetricThreshold": "tma_load_stlb_miss_1g > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_2m",
+ "MetricThreshold": "tma_load_stlb_miss_2m > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_4K / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_4k",
+ "MetricThreshold": "tma_load_stlb_miss_4k > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory",
+ "MetricExpr": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM:R * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group",
+ "MetricName": "tma_local_mem",
+ "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "MEM_INST_RETIRED.LOCK_LOADS * MEM_INST_RETIRED.LOCK_LOADS:R / tma_info_thread_clks",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricName": "tma_lock_latency",
+ "MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
+ "MetricGroup": "BadSpec;BvMS;Default;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricName": "tma_machine_clears",
+ "MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_bottleneck_memory_synchronization, tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to memory bandwidth Allocation feature (RDT's memory bandwidth throttling).",
+ "MetricExpr": "INT_MISC.MBA_STALLS / tma_info_thread_clks",
+ "MetricGroup": "MemoryBW;Offcore;Server;TopdownL5;tma_L5_group;tma_mem_bandwidth_group",
+ "MetricName": "tma_mba_stalls",
+ "MetricThreshold": "tma_mba_stalls > 0.1 & (tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
+ "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricName": "tma_mem_bandwidth",
+ "MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
+ "MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricName": "tma_mem_latency",
+ "MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_latency, tma_l3_hit_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
+ "DefaultMetricgroupName": "TopdownL2",
+ "MetricExpr": "topdown\\-mem\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "Backend;Default;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
+ "MetricName": "tma_memory_bound",
+ "MetricThreshold": "tma_memory_bound > 0.2 & tma_backend_bound > 0.2",
+ "MetricgroupNoGroup": "TopdownL2;Default",
+ "PublicDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck. Memory Bound estimates fraction of slots where pipeline is likely stalled due to demand load or store instructions. This accounts mainly for (1) non-completed in-flight memory demand loads which coincides with execution units starvation; in addition to (2) cases where stores could impose backpressure on the pipeline when many of them get buffered at the same time (less common out of the two).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to LFENCE Instructions.",
+ "MetricExpr": "13 * MISC2_RETIRED.LFENCE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_memory_fence",
+ "MetricThreshold": "tma_memory_fence > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring memory operations -- uops for memory load or store accesses.",
+ "MetricExpr": "tma_light_operations * MEM_UOP_RETIRED.ANY / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_memory_operations",
+ "MetricThreshold": "tma_memory_operations > 0.1 & tma_light_operations > 0.6",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
+ "MetricExpr": "UOPS_RETIRED.MS / tma_info_thread_slots",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS",
+ "MetricName": "tma_microcode_sequencer",
+ "MetricThreshold": "tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1",
+ "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: UOPS_RETIRED.MS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
+ "MetricExpr": "tma_branch_mispredicts / tma_bad_speculation * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricName": "tma_mispredicts_resteers",
+ "MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)",
+ "MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / tma_info_core_core_clks / 2",
+ "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_mite",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
+ "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles)",
+ "MetricExpr": "160 * ASSISTS.SSE_AVX_MIX / tma_info_thread_clks",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group",
+ "MetricName": "tma_mixing_vectors",
+ "MetricThreshold": "tma_mixing_vectors > 0.05",
+ "PublicDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the Microcode Sequencer (MS) unit - see Microcode_Sequencer node for details.",
+ "MetricExpr": "max(IDQ.MS_CYCLES_ANY, cpu@UOPS_RETIRED.MS\\,cmask\\=1@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY)) / tma_info_core_core_clks / 2.4",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_ms",
+ "MetricThreshold": "tma_ms > 0.05 & tma_fetch_bandwidth > 0.2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS)",
+ "MetricExpr": "3 * cpu@UOPS_RETIRED.MS\\,cmask\\=1\\,edge@ / (UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY) / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO",
+ "MetricName": "tma_ms_switches",
+ "MetricThreshold": "tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
+ "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused",
+ "MetricExpr": "tma_light_operations * (BR_INST_RETIRED.ALL_BRANCHES - INST_RETIRED.MACRO_FUSED) / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_non_fused_branches",
+ "MetricThreshold": "tma_non_fused_branches > 0.1 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions that were not fused. Non-conditional branches like direct JMP or CALL would count here. Can be used to examine fusible conditional jumps that were not fused.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
+ "MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
+ "MetricName": "tma_nop_instructions",
+ "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_int_operations + tma_memory_operations + tma_fused_instructions + tma_non_fused_branches))",
+ "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricName": "tma_other_light_ops",
+ "MetricThreshold": "tma_other_light_ops > 0.3 & tma_light_operations > 0.6",
+ "PublicDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes. May undercount due to FMA double counting",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
+ "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)",
+ "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_other_mispredicts",
+ "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
+ "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)",
+ "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_other_nukes",
+ "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Page Faults",
+ "MetricExpr": "99 * ASSISTS.PAGE_FAULT / tma_info_thread_slots",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_page_faults",
+ "MetricThreshold": "tma_page_faults > 0.05",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Page Faults. A Page Fault may apply on first application access to a memory page. Note operating system handling of page faults accounts for the majority of its cost.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch)",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_0 / tma_info_core_core_clks",
+ "MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
+ "MetricName": "tma_port_0",
+ "MetricThreshold": "tma_port_0 > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU)",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_1 / tma_info_core_core_clks",
+ "MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
+ "MetricName": "tma_port_1",
+ "MetricThreshold": "tma_port_1 > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
+ "MetricExpr": "UOPS_DISPATCHED.PORT_6 / tma_info_core_core_clks",
+ "MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
+ "MetricName": "tma_port_6",
+ "MetricThreshold": "tma_port_6 > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
+ "MetricConstraint": "NO_GROUP_EVENTS_NMI",
+ "MetricExpr": "((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_3_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIV_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - EXE_ACTIVITY.BOUND_ON_LOADS else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_3_PORTS_UTIL) / tma_info_thread_clks)",
+ "MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricName": "tma_ports_utilization",
+ "MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related). Two distinct categories can be attributed into this metric: (1) heavy data-dependency among contiguous instructions would manifest in this metric - such cases are often referred to as low Instruction Level Parallelism (ILP). (2) Contention on some hardware execution unit other than Divider. For example; when there are too many multiply operations.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
+ "MetricExpr": "max(EXE_ACTIVITY.EXE_BOUND_0_PORTS - RESOURCE_STALLS.SCOREBOARD, 0) / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_0",
+ "MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise). Long-latency instructions like divides may contribute to this metric.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricConstraint": "NO_THRESHOLD_AND_NMI",
+ "MetricExpr": "EXE_ACTIVITY.1_PORTS_UTIL / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_1",
+ "MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). This can be due to heavy data-dependency among software instructions; or over oversubscribing a particular hardware resource. In some other cases with high 1_Port_Utilized and L1_Bound; this metric can point to L1 data-cache latency bottleneck that may not necessarily manifest with complete execution starvation (due to the short L1 latency e.g. walking a linked list) - looking at the assembly can be helpful. Sample with: EXE_ACTIVITY.1_PORTS_UTIL. Related metrics: tma_l1_bound",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "EXE_ACTIVITY.2_PORTS_UTIL / tma_info_thread_clks",
+ "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_2",
+ "MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Sample with: EXE_ACTIVITY.2_PORTS_UTIL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_int_vector_128b, tma_int_vector_256b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
+ "MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricName": "tma_ports_utilized_3m",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues",
+ "MetricExpr": "(MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * PEBS + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * PEBS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group",
+ "MetricName": "tma_remote_cache",
+ "MetricThreshold": "tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory",
+ "MetricExpr": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * PEBS * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group",
+ "MetricName": "tma_remote_mem",
+ "MetricThreshold": "tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to retired misprediction by (indirect) RET instructions.",
+ "MetricExpr": "BR_MISP_RETIRED.RET_COST * BR_MISP_RETIRED.RET_COST:R / tma_info_thread_clks",
+ "MetricGroup": "BrMispredicts;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_ret_mispredicts",
+ "MetricThreshold": "tma_ret_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricName": "tma_retiring",
+ "MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
+ "MetricgroupNoGroup": "TopdownL1;Default",
+ "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. Sample with: UOPS_RETIRED.SLOTS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
+ "MetricExpr": "RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks + tma_c02_wait",
+ "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
+ "MetricName": "tma_serializing_operation",
+ "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer)",
+ "MetricExpr": "tma_light_operations * INT_VEC_RETIRED.SHUFFLES / (tma_retiring * tma_info_thread_slots)",
+ "MetricGroup": "HPC;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
+ "MetricName": "tma_shuffles_256b",
+ "MetricThreshold": "tma_shuffles_256b > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring Shuffle operations of 256-bit vector size (FP or Integer). Shuffles may incur slow cross \"vector lane\" data transfers.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
+ "MetricExpr": "CPU_CLK_UNHALTED.PAUSE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
+ "MetricName": "tma_slow_pause",
+ "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: CPU_CLK_UNHALTED.PAUSE_INST",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary",
+ "MetricExpr": "MEM_INST_RETIRED.SPLIT_LOADS * min(MEM_INST_RETIRED.SPLIT_LOADS:R, tma_info_memory_load_miss_real_latency) / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_split_loads",
+ "MetricThreshold": "tma_split_loads > 0.3",
+ "PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents rate of split store accesses",
+ "MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES * min(MEM_INST_RETIRED.SPLIT_STORES:R, 1) / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group",
+ "MetricName": "tma_split_stores",
+ "MetricThreshold": "tma_split_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents rate of split store accesses. Consider aligning your data to the 64-byte cache line granularity. Sample with: MEM_INST_RETIRED.SPLIT_STORES_PS. Related metrics: tma_port_4",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
+ "MetricExpr": "(XQ.FULL_CYCLES + L1D_PEND_MISS.L2_STALLS) / tma_info_thread_clks",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricName": "tma_sq_full",
+ "MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write",
+ "MetricExpr": "EXE_ACTIVITY.BOUND_ON_STORES / tma_info_thread_clks",
+ "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_store_bound",
+ "MetricThreshold": "tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric estimates how often CPU was stalled due to RFO store memory accesses; RFO store issue a read-for-ownership request before the write. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should RFO stores be a bottleneck. Sample with: MEM_INST_RETIRED.ALL_STORES_PS",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
+ "MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_store_fwd_blk",
+ "MetricThreshold": "tma_store_fwd_blk > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores. To streamline memory operations in the pipeline; a load can avoid waiting for memory if a prior in-flight store is writing the data that the load wants to read (store forwarding process). However; in some cases the load may be blocked for a significant time pending the store forward. For example; when the prior store is writing a smaller region than the load is reading.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
+ "MetricExpr": "(MEM_STORE_RETIRED.L2_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricName": "tma_store_latency",
+ "MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations",
+ "MetricExpr": "(UOPS_DISPATCHED.PORT_4_9 + UOPS_DISPATCHED.PORT_7_8) / (4 * tma_info_core_core_clks)",
+ "MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
+ "MetricName": "tma_store_op_utilization",
+ "MetricThreshold": "tma_store_op_utilization > 0.6",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Store operations. Sample with: UOPS_DISPATCHED.PORT_7_8",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the TLB was missed by store accesses, hitting in the second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_dtlb_store - tma_store_stlb_miss)",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group",
+ "MetricName": "tma_store_stlb_hit",
+ "MetricThreshold": "tma_store_stlb_hit > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the STLB was missed by store accesses, performing a hardware page walk",
+ "MetricExpr": "DTLB_STORE_MISSES.WALK_ACTIVE / tma_info_core_core_clks",
+ "MetricGroup": "MemoryTLB;TopdownL5;tma_L5_group;tma_dtlb_store_group",
+ "MetricName": "tma_store_stlb_miss",
+ "MetricThreshold": "tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_1G / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_1g",
+ "MetricThreshold": "tma_store_stlb_miss_1g > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_2m",
+ "MetricThreshold": "tma_store_stlb_miss_2m > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_4K / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_4k",
+ "MetricThreshold": "tma_store_stlb_miss_4k > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores",
+ "MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / tma_info_thread_clks",
+ "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group",
+ "MetricName": "tma_streaming_stores",
+ "MetricThreshold": "tma_streaming_stores > 0.2 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores. Even though store accesses do not typically stall out-of-order CPUs; there are few cases where stores can lead to actual stalls. This metric will be flagged should Streaming stores be a bottleneck. Sample with: OCR.STREAMING_WR.ANY_RESPONSE. Related metrics: tma_fb_full",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
+ "MetricExpr": "INT_MISC.UNKNOWN_BRANCH_CYCLES / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricName": "tma_unknown_branches",
+ "MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: FRONTEND_RETIRED.UNKNOWN_BRANCH",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
+ "MetricExpr": "tma_retiring * UOPS_EXECUTED.X87 / UOPS_EXECUTED.THREAD",
+ "MetricGroup": "Compute;TopdownL4;tma_L4_group;tma_fp_arith_group",
+ "MetricName": "tma_x87_use",
+ "MetricThreshold": "tma_x87_use > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
+ "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "Uncore operating frequency in GHz",
+ "MetricExpr": "UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_CLOCKTICKS) * #num_packages) / 1e9 / duration_time",
+ "MetricName": "uncore_frequency",
+ "ScaleUnit": "1GHz"
+ },
+ {
+ "BriefDescription": "Intel(R) Ultra Path Interconnect (UPI) data receive bandwidth (MB/sec)",
+ "MetricExpr": "UNC_UPI_RxL_FLITS.ALL_DATA * 7.111111111111111 / 1e6 / duration_time",
+ "MetricName": "upi_data_receive_bw",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Intel(R) Ultra Path Interconnect (UPI) data transmit bandwidth (MB/sec)",
+ "MetricExpr": "UNC_UPI_TxL_FLITS.ALL_DATA * 7.111111111111111 / 1e6 / duration_time",
+ "MetricName": "upi_data_transmit_bw",
+ "ScaleUnit": "1MB/s"
+ }
+]
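For context, the metric definitions added above are compiled into the perf tool and exposed through perf stat's -M/--metrics option; a ScaleUnit of "100%" means the value is reported as a percentage of the relevant Topdown slots or cycles. A minimal usage sketch, assuming a perf build that includes these Granite Rapids JSON files and a hypothetical workload named ./my_workload:

    # list the metrics and metric groups known to this perf build
    perf list metric
    # measure one of the Topdown metrics defined above around a workload (./my_workload is a placeholder)
    perf stat -M tma_memory_bound -- ./my_workload
    # or sample system-wide for a few seconds, e.g. the DRAM bandwidth estimate
    perf stat -M tma_mem_bandwidth -a -- sleep 5
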
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/memory.json b/tools/perf/pmu-events/arch/x86/graniterapids/memory.json
index 1c0e0e86e58e..96f40390becf 100644
--- a/tools/perf/pmu-events/arch/x86/graniterapids/memory.json
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/memory.json
@@ -1,138 +1,483 @@
[
{
+ "BriefDescription": "Cycles while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "2",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.CYCLES_L3_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "6",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x6"
+ },
+ {
+ "BriefDescription": "Number of machine clears due to memory ordering conflicts.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
+ "PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "2",
+ "EventCode": "0x47",
+ "EventName": "MEMORY_ACTIVITY.CYCLES_L1D_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "3",
+ "EventCode": "0x47",
+ "EventName": "MEMORY_ACTIVITY.STALLS_L1D_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "Execution stalls while L2 cache miss demand cacheable load request is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "5",
+ "EventCode": "0x47",
+ "EventName": "MEMORY_ACTIVITY.STALLS_L2_MISS",
+ "PublicDescription": "Execution stalls while L2 cache miss demand cacheable load request is outstanding (will not count for uncacheable demand requests e.g. bus lock).",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Execution stalls while L3 cache miss demand cacheable load request is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "9",
+ "EventCode": "0x47",
+ "EventName": "MEMORY_ACTIVITY.STALLS_L3_MISS",
+ "PublicDescription": "Execution stalls while L3 cache miss demand cacheable load request is outstanding (will not count for uncacheable demand requests e.g. bus lock).",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x9"
+ },
+ {
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x400",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
+ "SampleAfterValue": "53",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "1009",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "20011",
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
+ "Data_LA": "1",
+ "EventCode": "0xcd",
+ "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_2048",
+ "MSRIndex": "0x3F6",
+ "MSRValue": "0x800",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
+ "SampleAfterValue": "23",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "503",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "101",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "2003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
+ "Counter": "1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
- "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
+ "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency. Available PDIST counters: 0",
"SampleAfterValue": "50021",
"UMask": "0x1"
},
{
"BriefDescription": "Retired memory store access operations. A PDist event for PEBS Store Latency Facility.",
+ "Counter": "0",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.STORE_SAMPLE",
- "PEBS": "2",
- "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8",
+ "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8 Available PDIST counters: 0",
"SampleAfterValue": "1000003",
"UMask": "0x2"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3FBFC00004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000004",
+ "PublicDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3FBFC00001",
+ "PublicDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x730000001",
+ "PublicDescription": "Counts demand data reads that were supplied by DRAM attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x3F3FC00002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000002",
+ "PublicDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.L3_MISS",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F3FC04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F04C04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster. It does not count misses to the L3 which go to Local CXL Type 2 Memory or Local Non DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL_SOCKET",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x70CC04477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster. It does not count misses to the L3 which go to Local CXL Type 2 Memory or Local Non DRAM. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x70C004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x730004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_MEMORY",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x733004477",
+ "PublicDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2A,0x2B",
+ "EventName": "OCR.WRITE_ESTIMATE.MEMORY",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0xFBFF80822",
+ "PublicDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM) Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data read requests that miss the L3 cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Cycles where data return is pending for a Demand Data Read request who miss L3 cache.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD",
+ "PublicDescription": "Cycles with at least 1 Demand Data Read requests who miss L3 cache in the superQ.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD",
+ "PublicDescription": "For every cycle, increments by the number of demand data read requests pending that are known to have missed the L3 cache. Note that this does not capture all elapsed cycles while requests are outstanding - only cycles from when the requests were known by the requesting core to have missed the L3 cache.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
"BriefDescription": "Number of times an RTM execution aborted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
- "PublicDescription": "Counts the number of times RTM abort was triggered.",
+ "PublicDescription": "Counts the number of times RTM abort was triggered. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x4"
},
{
+ "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "RTM_RETIRED.ABORTED_EVENTS",
+ "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "RTM_RETIRED.ABORTED_MEM",
+ "PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "RTM_RETIRED.ABORTED_MEMTYPE",
+ "PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc9",
+ "EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY",
+ "PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
"BriefDescription": "Number of times an RTM execution successfully committed",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Counts the number of times RTM commit succeeded.",
@@ -141,6 +486,7 @@
},
{
"BriefDescription": "Number of times an RTM execution started.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.START",
"PublicDescription": "Counts the number of times we entered an RTM region. Does not count nested transactions.",
@@ -149,6 +495,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_READ",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads",
@@ -157,6 +504,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional writes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes.",
@@ -165,6 +513,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Counts the number of times a TSX line had a cache conflict.",
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/metricgroups.json b/tools/perf/pmu-events/arch/x86/graniterapids/metricgroups.json
new file mode 100644
index 000000000000..9129fb7b7ce4
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/metricgroups.json
@@ -0,0 +1,145 @@
+{
+ "Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "C0Wait": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DSB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DSBmiss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "DataSharing": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Fed": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FetchBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FetchLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Flops": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FpScalar": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "FpVector": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Frontend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "HPC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IcMiss": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IntVector": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "IoBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemoryTLB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Memory_BW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Memory_Lat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MicroSeq": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "OS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Offcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "PGO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Server": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Snoop": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "SoC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Summary": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL1": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL2": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TmaL3mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "TopdownL1": "Metrics for top-down breakdown at level 1",
+ "TopdownL2": "Metrics for top-down breakdown at level 2",
+ "TopdownL3": "Metrics for top-down breakdown at level 3",
+ "TopdownL4": "Metrics for top-down breakdown at level 4",
+ "TopdownL5": "Metrics for top-down breakdown at level 5",
+ "TopdownL6": "Metrics for top-down breakdown at level 6",
+ "tma_L1_group": "Metrics for top-down breakdown at level 1",
+ "tma_L2_group": "Metrics for top-down breakdown at level 2",
+ "tma_L3_group": "Metrics for top-down breakdown at level 3",
+ "tma_L4_group": "Metrics for top-down breakdown at level 4",
+ "tma_L5_group": "Metrics for top-down breakdown at level 5",
+ "tma_L6_group": "Metrics for top-down breakdown at level 6",
+ "tma_alu_op_utilization_group": "Metrics contributing to tma_alu_op_utilization category",
+ "tma_assists_group": "Metrics contributing to tma_assists category",
+ "tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
+ "tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
+ "tma_branch_mispredicts_group": "Metrics contributing to tma_branch_mispredicts category",
+ "tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
+ "tma_code_stlb_miss_group": "Metrics contributing to tma_code_stlb_miss category",
+ "tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
+ "tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
+ "tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
+ "tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
+ "tma_fetch_bandwidth_group": "Metrics contributing to tma_fetch_bandwidth category",
+ "tma_fetch_latency_group": "Metrics contributing to tma_fetch_latency category",
+ "tma_fp_arith_group": "Metrics contributing to tma_fp_arith category",
+ "tma_fp_vector_group": "Metrics contributing to tma_fp_vector category",
+ "tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
+ "tma_heavy_operations_group": "Metrics contributing to tma_heavy_operations category",
+ "tma_icache_misses_group": "Metrics contributing to tma_icache_misses category",
+ "tma_int_operations_group": "Metrics contributing to tma_int_operations category",
+ "tma_issue2P": "Metrics related by the issue $issue2P",
+ "tma_issueBM": "Metrics related by the issue $issueBM",
+ "tma_issueBW": "Metrics related by the issue $issueBW",
+ "tma_issueComp": "Metrics related by the issue $issueComp",
+ "tma_issueD0": "Metrics related by the issue $issueD0",
+ "tma_issueFB": "Metrics related by the issue $issueFB",
+ "tma_issueFL": "Metrics related by the issue $issueFL",
+ "tma_issueL1": "Metrics related by the issue $issueL1",
+ "tma_issueLat": "Metrics related by the issue $issueLat",
+ "tma_issueMC": "Metrics related by the issue $issueMC",
+ "tma_issueMS": "Metrics related by the issue $issueMS",
+ "tma_issueMV": "Metrics related by the issue $issueMV",
+ "tma_issueRFO": "Metrics related by the issue $issueRFO",
+ "tma_issueSL": "Metrics related by the issue $issueSL",
+ "tma_issueSO": "Metrics related by the issue $issueSO",
+ "tma_issueSmSt": "Metrics related by the issue $issueSmSt",
+ "tma_issueSpSt": "Metrics related by the issue $issueSpSt",
+ "tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
+ "tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
+ "tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_l2_bound_group": "Metrics contributing to tma_l2_bound category",
+ "tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
+ "tma_light_operations_group": "Metrics contributing to tma_light_operations category",
+ "tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_load_stlb_miss_group": "Metrics contributing to tma_load_stlb_miss category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
+ "tma_mem_bandwidth_group": "Metrics contributing to tma_mem_bandwidth category",
+ "tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
+ "tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
+ "tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
+ "tma_mite_group": "Metrics contributing to tma_mite category",
+ "tma_other_light_ops_group": "Metrics contributing to tma_other_light_ops category",
+ "tma_ports_utilization_group": "Metrics contributing to tma_ports_utilization category",
+ "tma_ports_utilized_0_group": "Metrics contributing to tma_ports_utilized_0 category",
+ "tma_ports_utilized_3m_group": "Metrics contributing to tma_ports_utilized_3m category",
+ "tma_retiring_group": "Metrics contributing to tma_retiring category",
+ "tma_serializing_operation_group": "Metrics contributing to tma_serializing_operation category",
+ "tma_store_bound_group": "Metrics contributing to tma_store_bound category",
+ "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category",
+ "tma_store_stlb_miss_group": "Metrics contributing to tma_store_stlb_miss category"
+}
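(Aside, not part of the patch: metricgroups.json added above is a flat name-to-description map; in the perf build it is compiled into C tables by the pmu-events tooling rather than read at runtime. As a minimal sketch of how one might inspect it directly, assuming a checkout-relative path:)

#!/usr/bin/env python3
# Illustrative sketch only: load the metric-group descriptions added above and
# print the top-down level groupings. The path below is an assumption about
# where the kernel tree is checked out; adjust as needed.
import json

path = "tools/perf/pmu-events/arch/x86/graniterapids/metricgroups.json"  # assumed path
with open(path) as f:
    groups = json.load(f)

for name, desc in sorted(groups.items()):
    if name.startswith("TopdownL") or name.startswith("tma_L"):
        print(f"{name}: {desc}")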
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/other.json b/tools/perf/pmu-events/arch/x86/graniterapids/other.json
index 5e799bae03ea..c0747750b1a8 100644
--- a/tools/perf/pmu-events/arch/x86/graniterapids/other.json
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/other.json
@@ -1,29 +1,65 @@
[
{
- "BriefDescription": "Counts demand data reads that have any type of response.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
+ "BriefDescription": "Count all other hardware assists or traps that are not necessarily architecturally exposed (through a software handler) beyond FP; SSE-AVX mix and A/D assists who are counted by dedicated sub-events. the event also counts for Machine Ordering count.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.HARDWARE",
+ "PublicDescription": "Count all other hardware assists or traps that are not necessarily architecturally exposed (through a software handler) beyond FP; SSE-AVX mix and A/D assists who are counted by dedicated sub-events. This includes, but not limited to, assists at EXE or MEM uop writeback like AVX* load/store/gather/scatter (non-FP GSSE-assist ) , assists generated by ROB like PEBS and RTIT, Uncore trap, RAR (Remote Action Request) and CET (Control flow Enforcement Technology) assists. the event also counts for Machine Ordering count.",
"SampleAfterValue": "100003",
- "UMask": "0x1"
+ "UMask": "0x4"
},
{
- "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000001",
+ "BriefDescription": "ASSISTS.PAGE_FAULT",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.PAGE_FAULT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "HW_INTERRUPTS.MASKED",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcb",
+ "EventName": "HW_INTERRUPTS.MASKED",
"SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "HW_INTERRUPTS.PENDING_AND_MASKED",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcb",
+ "EventName": "HW_INTERRUPTS.PENDING_AND_MASKED",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Number of hardware interrupts received by the processor.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcb",
+ "EventName": "HW_INTERRUPTS.RECEIVED",
+ "PublicDescription": "Counts the number of hardware interruptions received by the processor.",
+ "SampleAfterValue": "203",
"UMask": "0x1"
},
{
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A,0x2B",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F3FFC0002",
+ "MSRValue": "0x10800",
+ "PublicDescription": "Counts streaming stores that have any type of response. Available PDIST counters: 0",
"SampleAfterValue": "100003",
"UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles the uncore cannot take further requests",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x2d",
+ "EventName": "XQ.FULL_CYCLES",
+ "PublicDescription": "number of cycles when the thread is active and the uncore cannot take any further requests (for example prefetches, loads or stores initiated by the Core that miss the L2 cache).",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
}
]
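(Aside, not part of the patch: throughout these files, EventCode and UMask together form the hardware event selector, which is what perf's raw rXXXX syntax encodes; on x86 the low byte is the event code and the next byte the unit mask. A minimal sketch, using the values of OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD above; note the OCR.* events additionally require the MSRIndex/MSRValue offcore-response MSRs, which a plain raw encoding does not express.)

# Illustrative sketch only: derive the perf "raw" event encoding from the
# EventCode/UMask fields used in these JSON files.
def raw_encoding(event_code: int, umask: int, cmask: int = 0) -> str:
    # x86 PERF_TYPE_RAW layout (low bits): event[7:0], umask[15:8], cmask[31:24].
    config = (cmask << 24) | (umask << 8) | event_code
    return f"r{config:x}"

# OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD: EventCode 0x21, UMask 0x10.
print(raw_encoding(0x21, 0x10))  # -> r1021, usable as: perf stat -e r1021 ...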
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/pipeline.json b/tools/perf/pmu-events/arch/x86/graniterapids/pipeline.json
index 764c0435d1d2..0fef8fd61974 100644
--- a/tools/perf/pmu-events/arch/x86/graniterapids/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/pipeline.json
@@ -1,22 +1,345 @@
[
{
+ "BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xb0",
+ "EventName": "ARITH.DIV_ACTIVE",
+ "PublicDescription": "Counts cycles when divide unit is busy executing divide or square root operations. Accounts for integer and floating-point operations.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x9"
+ },
+ {
+ "BriefDescription": "This event counts the cycles the integer divider is busy.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xb0",
+ "EventName": "ARITH.IDIV_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc1",
+ "EventName": "ASSISTS.ANY",
+ "PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware. Examples include AD (page Access Dirty), FP and AVX related assists.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1b"
+ },
+ {
"BriefDescription": "All branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
- "PublicDescription": "Counts all branch instructions retired.",
+ "PublicDescription": "Counts all branch instructions retired. Available PDIST counters: 0",
"SampleAfterValue": "400009"
},
{
+ "BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND",
+ "PublicDescription": "Counts conditional branch instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x11"
+ },
+ {
+ "BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_NTAKEN",
+ "PublicDescription": "Counts not taken branch instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.COND_TAKEN",
+ "PublicDescription": "Counts taken conditional branch instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.FAR_BRANCH",
+ "PublicDescription": "Counts far branch instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Indirect near branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.INDIRECT",
+ "PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_CALL",
+ "PublicDescription": "Counts both direct and indirect near call instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_RETURN",
+ "PublicDescription": "Counts return instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc4",
+ "EventName": "BR_INST_RETIRED.NEAR_TAKEN",
+ "PublicDescription": "Counts taken branch instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x20"
+ },
+ {
"BriefDescription": "All mispredicted branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
- "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
+ "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path. Available PDIST counters: 0",
"SampleAfterValue": "400009"
},
{
+ "BriefDescription": "All mispredicted branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.ALL_BRANCHES_COST",
+ "PublicDescription": "All mispredicted branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x44"
+ },
+ {
+ "BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND",
+ "PublicDescription": "Counts mispredicted conditional branch instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x11"
+ },
+ {
+ "BriefDescription": "Mispredicted conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_COST",
+ "PublicDescription": "Mispredicted conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x51"
+ },
+ {
+ "BriefDescription": "Mispredicted non-taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_NTAKEN",
+ "PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Mispredicted non-taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_NTAKEN_COST",
+ "PublicDescription": "Mispredicted non-taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "RetirementLatencyMax": 888,
+ "RetirementLatencyMean": 6.11,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "400009",
+ "UMask": "0x50"
+ },
+ {
+ "BriefDescription": "number of branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN",
+ "PublicDescription": "Counts taken conditional mispredicted branch instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Mispredicted taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.COND_TAKEN_COST",
+ "PublicDescription": "Mispredicted taken conditional branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "RetirementLatencyMax": 2750,
+ "RetirementLatencyMean": 5.09,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "400009",
+ "UMask": "0x41"
+ },
+ {
+ "BriefDescription": "Miss-predicted near indirect branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT",
+ "PublicDescription": "Counts miss-predicted near indirect branch instructions retired excluding returns. TSX abort is an indirect branch. Available PDIST counters: 0",
+ "SampleAfterValue": "100003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Mispredicted indirect CALL retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
+ "PublicDescription": "Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Mispredicted indirect CALL retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_CALL_COST",
+ "PublicDescription": "Mispredicted indirect CALL retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "RetirementLatencyMax": 703,
+ "RetirementLatencyMean": 15.56,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "400009",
+ "UMask": "0x42"
+ },
+ {
+ "BriefDescription": "Mispredicted near indirect branch instructions retired (excluding returns). This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.INDIRECT_COST",
+ "PublicDescription": "Mispredicted near indirect branch instructions retired (excluding returns). This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "RetirementLatencyMax": 1562,
+ "RetirementLatencyMean": 11.07,
+ "RetirementLatencyMin": 0,
+ "SampleAfterValue": "100003",
+ "UMask": "0xc0"
+ },
+ {
+ "BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
+ "PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Mispredicted taken near branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.NEAR_TAKEN_COST",
+ "PublicDescription": "Mispredicted taken near branch instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "SampleAfterValue": "400009",
+ "UMask": "0x60"
+ },
+ {
+ "BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.RET",
+ "PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired. Available PDIST counters: 0",
+ "SampleAfterValue": "100007",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Mispredicted ret instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc5",
+ "EventName": "BR_MISP_RETIRED.RET_COST",
+ "PublicDescription": "Mispredicted ret instructions retired. This precise event may be used to get the misprediction cost via the Retire_Latency field of PEBS. It fires on the instruction that immediately follows the mispredicted branch. Available PDIST counters: 0",
+ "RetirementLatencyMax": 1082,
+ "RetirementLatencyMean": 32.37,
+ "RetirementLatencyMin": 9,
+ "SampleAfterValue": "100007",
+ "UMask": "0x48"
+ },
+ {
+ "BriefDescription": "Core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.C01",
+ "PublicDescription": "Counts core clocks when the thread is in the C0.1 light-weight slower wakeup time but more power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.C02",
+ "PublicDescription": "Counts core clocks when the thread is in the C0.2 light-weight faster wakeup time but less power saving optimized state. This state can be entered via the TPAUSE or UMWAIT instructions.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Core clocks when the thread is in the C0.1 or C0.2 or running a PAUSE in C0 ACPI state.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.C0_WAIT",
+ "PublicDescription": "Counts core clocks when the thread is in the C0.1 or C0.2 power saving optimized states (TPAUSE or UMWAIT instructions) or running the PAUSE instruction.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x70"
+ },
+ {
+ "BriefDescription": "Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.DISTRIBUTED",
+ "PublicDescription": "This event distributes cycle counts between active hyperthreads, i.e., those in C0. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If all other hyperthreads are inactive (or disabled or do not exist), all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
+ "PublicDescription": "Counts Core crystal clock cycles when current thread is unhalted and the other thread is halted.",
+ "SampleAfterValue": "25003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "CPU_CLK_UNHALTED.PAUSE",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.PAUSE",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "CPU_CLK_UNHALTED.PAUSE_INST",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xec",
+ "EventName": "CPU_CLK_UNHALTED.PAUSE_INST",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Core crystal clock cycles. Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x3c",
+ "EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED",
+ "PublicDescription": "This event distributes Core crystal clock cycle counts between active hyperthreads, i.e., those in C0 sleep-state. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If one thread is active in a core, all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
@@ -24,6 +347,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. It is counted on a dedicated fixed counter, leaving the four (eight when Hyperthreading is disabled) programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
@@ -32,6 +356,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -39,29 +364,348 @@
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
"SampleAfterValue": "2000003"
},
{
+ "BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "8",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "16",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "12",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc"
+ },
+ {
+ "BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "5",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x5"
+ },
+ {
+ "BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "4",
+ "EventCode": "0xa3",
+ "EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Counts the cycles where the AMX (Advance Matrix Extension) unit is busy performing an operation.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb7",
+ "EventName": "EXE.AMX_BUSY",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
+ "PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles total of 2 or 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.2_3_PORTS_UTIL",
+ "SampleAfterValue": "2000003",
+ "UMask": "0xc"
+ },
+ {
+ "BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
+ "PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
+ "PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
+ "PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "5",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.BOUND_ON_LOADS",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x21"
+ },
+ {
+ "BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "2",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
+ "PublicDescription": "Counts cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Cycles no uop executed while RS was not empty, the SB was not full and there was no outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa6",
+ "EventName": "EXE_ACTIVITY.EXE_BOUND_0_PORTS",
+ "PublicDescription": "Number of cycles total of 0 uops executed on all ports, Reservation Station (RS) was not empty, the Store Buffer (SB) was not full and there was no outstanding load.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Instruction decoders utilized in a cycle",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x75",
+ "EventName": "INST_DECODED.DECODERS",
+ "PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
- "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
+ "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter. Available PDIST counters: 32",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
"SampleAfterValue": "2000003"
},
{
+ "BriefDescription": "INST_RETIRED.MACRO_FUSED",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.MACRO_FUSED",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Retired NOP instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.NOP",
+ "PublicDescription": "Counts all retired NOP or ENDBR32/64 or PREFETCHIT0/1 instructions",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Precise instruction retired with PEBS precise-distribution",
+ "Counter": "Fixed counter 0",
+ "EventName": "INST_RETIRED.PREC_DIST",
+ "PublicDescription": "A version of INST_RETIRED that allows for a precise distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR++) feature to fix bias in how retired instructions get sampled. Use on Fixed Counter 0. Available PDIST counters: 32",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Iterations of Repeat string retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc0",
+ "EventName": "INST_RETIRED.REP_ITERATION",
+ "PublicDescription": "Number of iterations of Repeat (REP) string retired instructions such as MOVS, CMPS, and SCAS. Each has a byte, word, and doubleword version and string instructions can be repeated using a repetition prefix, REP, that allows their architectural execution to be repeated a number of times as specified by the RCX register. Note the number of iterations is implementation-dependent.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Clears speculative count",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.CLEARS_COUNT",
+ "PublicDescription": "Counts the number of speculative clears due to any type of branch misprediction or machine clears",
+ "SampleAfterValue": "500009",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
+ "PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "INT_MISC.MBA_STALLS",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.MBA_STALLS",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.RECOVERY_CYCLES",
+ "PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
+ "SampleAfterValue": "500009",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Bubble cycles of BAClear (Unknown Branch).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.UNKNOWN_BRANCH_CYCLES",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x7",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "TMA slots where uops got dropped",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xad",
+ "EventName": "INT_MISC.UOP_DROPPING",
+ "PublicDescription": "Estimated number of Top-down Microarchitecture Analysis slots that got dropped due to non front-end reasons",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.128BIT",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.128BIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x13"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.256BIT",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.256BIT",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xac"
+ },
+ {
+ "BriefDescription": "integer ADD, SUB, SAD 128-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.ADD_128",
+ "PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 128-bit vector instructions.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x3"
+ },
+ {
+ "BriefDescription": "integer ADD, SUB, SAD 256-bit vector instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.ADD_256",
+ "PublicDescription": "Number of retired integer ADD/SUB (regular or horizontal), SAD 256-bit vector instructions.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xc"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.MUL_256",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.MUL_256",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.SHUFFLES",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.SHUFFLES",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.VNNI_128",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.VNNI_128",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "INT_VEC_RETIRED.VNNI_256",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe7",
+ "EventName": "INT_VEC_RETIRED.VNNI_256",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "False dependencies in MOB due to partial compare on address.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.ADDRESS_ALIAS",
+ "PublicDescription": "Counts the number of times a load got blocked due to false dependencies in MOB due to partial compare on address.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "LD_BLOCKS.NO_SR",
+ "PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x88"
+ },
+ {
"BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
@@ -69,15 +713,165 @@
"UMask": "0x82"
},
{
+ "BriefDescription": "Counts the number of demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4c",
+ "EventName": "LOAD_HIT_PREFETCH.SWPF",
+ "PublicDescription": "Counts all software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xa8",
+ "EventName": "LSD.CYCLES_ACTIVE",
+ "PublicDescription": "Counts the cycles when at least one uop is delivered by the LSD (Loop-stream detector).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "6",
+ "EventCode": "0xa8",
+ "EventName": "LSD.CYCLES_OK",
+ "PublicDescription": "Counts the cycles when optimal number of uops is delivered by the LSD (Loop-stream detector).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa8",
+ "EventName": "LSD.UOPS",
+ "PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.COUNT",
+ "PublicDescription": "Counts the number of machine clears (nukes) of any type.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc3",
+ "EventName": "MACHINE_CLEARS.SMC",
+ "PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "LFENCE instructions retired",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xe0",
+ "EventName": "MISC2_RETIRED.LFENCE",
+ "PublicDescription": "number of LFENCE retired instructions",
+ "SampleAfterValue": "400009",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Increments whenever there is an update to the LBR array.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xcc",
+ "EventName": "MISC_RETIRED.LBR_INSERTS",
+ "PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR enable via IA32_DEBUGCTL MSR and branch type selection via MSR_LBR_SELECT.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa2",
+ "EventName": "RESOURCE_STALLS.SB",
+ "PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa2",
+ "EventName": "RESOURCE_STALLS.SCOREBOARD",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY",
+ "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x7"
+ },
+ {
+ "BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EdgeDetect": "1",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_COUNT",
+ "Invert": "1",
+ "PublicDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to closely sample on front-end latency issues (see the FRONTEND_RETIRED event of designated precise events)",
+ "SampleAfterValue": "100003",
+ "UMask": "0x7"
+ },
+ {
+ "BriefDescription": "Cycles when RS was empty and a resource allocation stall is asserted",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa5",
+ "EventName": "RS.EMPTY_RESOURCE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
- "PublicDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions.\nThe count is distributed among unhalted logical processors (hyper-threads) who share the same physical core, in processors that support Intel Hyper-Threading Technology. Software can use this event as the numerator for the Backend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
+ "PublicDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core, in processors that support Intel Hyper-Threading Technology. Software can use this event as the numerator for the Backend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
"SampleAfterValue": "10000003",
"UMask": "0x2"
},
{
+ "BriefDescription": "TMA slots wasted due to incorrect speculations.",
+ "Counter": "0",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.BAD_SPEC_SLOTS",
+ "PublicDescription": "Number of slots of TMA method that were wasted due to incorrect speculation. It covers all types of control-flow or data-related mis-speculations.",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "TMA slots wasted due to incorrect speculation by branch mispredictions",
+ "Counter": "0",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.BR_MISPREDICT_SLOTS",
+ "PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of speculative operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "TOPDOWN.MEMORY_BOUND_SLOTS",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xa4",
+ "EventName": "TOPDOWN.MEMORY_BOUND_SLOTS",
+ "SampleAfterValue": "10000003",
+ "UMask": "0x10"
+ },
+ {
"BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
+ "Counter": "Fixed counter 3",
"EventName": "TOPDOWN.SLOTS",
"PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core. Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
"SampleAfterValue": "10000003",
@@ -85,6 +879,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.SLOTS_P",
"PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core.",
@@ -92,11 +887,259 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Number of non dec-by-all uops decoded by decoder",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x76",
+ "EventName": "UOPS_DECODED.DEC0_UOPS",
+ "PublicDescription": "This event counts the number of not dec-by-all uops decoded by decoder 0.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Uops executed on port 0",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.PORT_0",
+ "PublicDescription": "Number of uops dispatch to execution port 0.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Uops executed on port 1",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.PORT_1",
+ "PublicDescription": "Number of uops dispatch to execution port 1.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Uops executed on ports 2, 3 and 10",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.PORT_2_3_10",
+ "PublicDescription": "Number of uops dispatch to execution ports 2, 3 and 10",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Uops executed on ports 4 and 9",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.PORT_4_9",
+ "PublicDescription": "Number of uops dispatch to execution ports 4 and 9",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Uops executed on ports 5 and 11",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.PORT_5_11",
+ "PublicDescription": "Number of uops dispatch to execution ports 5 and 11",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Uops executed on port 6",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.PORT_6",
+ "PublicDescription": "Number of uops dispatch to execution port 6.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "Uops executed on ports 7 and 8",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb2",
+ "EventName": "UOPS_DISPATCHED.PORT_7_8",
+ "PublicDescription": "Number of uops dispatch to execution ports 7 and 8.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x80"
+ },
+ {
+ "BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CORE",
+ "PublicDescription": "Counts the number of uops executed from any thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
+ "PublicDescription": "Counts cycles when at least 1 micro-op is executed from any thread on physical core.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "2",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
+ "PublicDescription": "Counts cycles when at least 2 micro-ops are executed from any thread on physical core.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "3",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
+ "PublicDescription": "Counts cycles when at least 3 micro-ops are executed from any thread on physical core.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "4",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
+ "PublicDescription": "Counts cycles when at least 4 micro-ops are executed from any thread on physical core.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_1",
+ "PublicDescription": "Cycles where at least 1 uop was executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "2",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_2",
+ "PublicDescription": "Cycles where at least 2 uops were executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "3",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_3",
+ "PublicDescription": "Cycles where at least 3 uops were executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "4",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.CYCLES_GE_4",
+ "PublicDescription": "Cycles where at least 4 uops were executed per-thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.STALLS",
+ "Invert": "1",
+ "PublicDescription": "Counts cycles during which no uops were dispatched from the Reservation Station (RS) per thread.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.THREAD",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts the number of x87 uops dispatched.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xb1",
+ "EventName": "UOPS_EXECUTED.X87",
+ "PublicDescription": "Counts the number of x87 uops executed.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Uops that RAT issues to RS",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xae",
+ "EventName": "UOPS_ISSUED.ANY",
+ "PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "UOPS_ISSUED.CYCLES",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xae",
+ "EventName": "UOPS_ISSUED.CYCLES",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Cycles with retired uop(s).",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.CYCLES",
+ "PublicDescription": "Counts cycles where at least one uop has retired.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Retired uops except the last uop of each instruction.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.HEAVY",
+ "PublicDescription": "Counts the number of retired micro-operations (uops) except the last uop of each instruction. An instruction that is decoded into less than two uops does not contribute to the count.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "UOPS_RETIRED.MS",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.MS",
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x8",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance for example, as measured by the instructions-per-cycle metric.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.SLOTS",
- "PublicDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance for example, as measured by the instructions-per-cycle metric.\nSoftware can use this event as the numerator for the Retiring metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
+ "PublicDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance for example, as measured by the instructions-per-cycle metric. Software can use this event as the numerator for the Retiring metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
"SampleAfterValue": "2000003",
"UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0xc2",
+ "EventName": "UOPS_RETIRED.STALLS",
+ "Invert": "1",
+ "PublicDescription": "This event counts cycles without actually retired uops.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
}
]
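
The pipeline.json entries above spell out how the Top-down (TMA) slot events fit together: TOPDOWN.SLOTS is the denominator, while TOPDOWN.BACKEND_BOUND_SLOTS, TOPDOWN.BAD_SPEC_SLOTS and UOPS_RETIRED.SLOTS serve as numerators for the Backend Bound, Bad Speculation and Retiring level-1 metrics. The following is a minimal Python sketch of that breakdown, not part of the patch; the sample counts and the frontend-bound-as-remainder step are assumptions for illustration only.

# Level-1 Top-down (TMA) breakdown from the slot events described above.
# The remainder-based frontend_bound term and the sample counter values are
# illustrative assumptions, not taken from this patch.

def tma_level1(slots, backend_bound_slots, bad_spec_slots, retiring_slots):
    """Return level-1 TMA fractions from raw counter values.

    slots               -> TOPDOWN.SLOTS (denominator, fixed counter 3)
    backend_bound_slots -> TOPDOWN.BACKEND_BOUND_SLOTS
    bad_spec_slots      -> TOPDOWN.BAD_SPEC_SLOTS
    retiring_slots      -> UOPS_RETIRED.SLOTS
    """
    backend = backend_bound_slots / slots
    bad_spec = bad_spec_slots / slots
    retiring = retiring_slots / slots
    # Frontend Bound is taken here as the remaining slot fraction (assumption).
    frontend = max(0.0, 1.0 - backend - bad_spec - retiring)
    return {"retiring": retiring, "bad_speculation": bad_spec,
            "backend_bound": backend, "frontend_bound": frontend}

if __name__ == "__main__":
    # Made-up counter values, as they might be read back from perf stat.
    print(tma_level1(slots=1_000_000, backend_bound_slots=350_000,
                     bad_spec_slots=50_000, retiring_slots=450_000))
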
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/uncore-cache.json b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-cache.json
new file mode 100644
index 000000000000..b782f6d54fc2
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-cache.json
@@ -0,0 +1,3736 @@
+[
+ {
+ "BriefDescription": "Clockticks for CMS units attached to CHA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_CHACMS_CLOCKTICKS",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "PublicDescription": "UNC_CHACMS_CLOCKTICKS",
+ "Unit": "CHACMS"
+ },
+ {
+ "BriefDescription": "Counts the number of cycles FAST trigger is received from the global FAST distress wire.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHACMS_RING_SRC_THRTL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "CHACMS"
+ },
+ {
+ "BriefDescription": "Number of CHA clock cycles while the event is enabled",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_CHA_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "Clockticks of the uncore caching and home agent (CHA)",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts transactions that looked into the multi-socket cacheline Directory state, and therefore did not send a snoop because the Directory indicated it was not needed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x53",
+ "EventName": "UNC_CHA_DIR_LOOKUP.NO_SNP",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts transactions that looked into the multi-socket cacheline Directory state, and sent one or more snoops, because the Directory indicated it was needed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x53",
+ "EventName": "UNC_CHA_DIR_LOOKUP.SNP",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts only multi-socket cacheline Directory state updates memory writes issued from the HA pipe. This does not include memory write requests which are for I (Invalid) or E (Exclusive) cachelines.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x54",
+ "EventName": "UNC_CHA_DIR_UPDATE.HA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts only multi-socket cacheline Directory state updates due to memory writes issued from the TOR pipe which are the result of remote transaction hitting the SF/LLC and returning data Core2Core. This does not include memory write requests which are for I (Invalid) or E (Exclusive) cachelines.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x54",
+ "EventName": "UNC_CHA_DIR_UPDATE.TOR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Distress signal assertion for dynamic prefetch throttle (DPT). Threshold for distress signal assertion reached in TOR or IRQ (immediate cause for triggering).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x59",
+ "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_ANY",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x3",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Distress signal assertion for dynamic prefetch throttle (DPT). Threshold for distress signal assertion reached in IRQ (immediate cause for triggering).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x59",
+ "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_IRQ",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Distress signal assertion for dynamic prefetch throttle (DPT). Threshold for distress signal assertion reached in TOR (immediate cause for triggering).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x59",
+ "EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_TOR",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts when a normal (Non-Isochronous) full line write is issued from the CHA to the any of the memory controller channels.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line : Counts the total number of full line writes issued from the HA into the memory controller.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH : Counts the total number of full line writes issued from the HA into the memory controller.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial : Counts the total number of full line writes issued from the HA into the memory controller.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x5b",
+ "EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All Requests to Remotely Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.ALL_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : All transactions from Remote Agents",
+ "UMask": "0x17e0ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: CRd Requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x1bd0ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests and Read Prefetches",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x1bc1ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests, Read Prefetches, and Snoops",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Data Reads",
+ "UMask": "0x1fc1ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Demand Data Reads, Core and LLC prefetches",
+ "UMask": "0x841ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests, Read Prefetches, and Snoops which miss the Cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Data Read Misses",
+ "UMask": "0x1fc101",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCALLY_HOMED_ADDRESS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Transactions homed locally",
+ "UMask": "0xbdfff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Code Read Requests and Code Read Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x19d0ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests and Read Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x19c1ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Code Read Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x1850ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x1841ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_DMND_RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : RFO Requests",
+ "UMask": "0x1848ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: LLC Prefetch Requests to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_LLC_PF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x189dff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x199dff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Code Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x1910ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Read Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x1981ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_PF_RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : RFO Requests",
+ "UMask": "0x1908ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Requests and RFO Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : RFO Requests",
+ "UMask": "0x19c8ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All Requests to Remotely Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.REMOTELY_HOMED_ADDRESS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Transactions homed remotely : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Transaction whose address resides in a remote MC",
+ "UMask": "0x15dfff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Code Read/Prefetch Requests from a Remote Socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_CODE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : CRd Requests",
+ "UMask": "0x1a10ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Data Read/Prefetch Requests from a Remote Socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_DATA_RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
+ "UMask": "0x1a01ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Requests/Prefetches from a Remote Socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : RFO Requests",
+ "UMask": "0x1a08ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Snoop Requests from a Remote Socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNP",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of times the LLC was accessed",
+ "UMask": "0x1c19ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: All RFO and RFO Prefetches",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.RFO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : All RFOs - Demand and Prefetches",
+ "UMask": "0x1bc8ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: RFO Requests and RFO Prefetches to Locally Homed Memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.RFO_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Locally HOMed RFOs - Demand and Prefetches",
+ "UMask": "0x9c8ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Writes to Locally Homed Memory (includes writebacks from L1/L2)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.WRITE_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Writes",
+ "UMask": "0x842ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Cache Lookups: Writes to Remotely Homed Memory (includes writebacks from L1/L2)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x34",
+ "EventName": "UNC_CHA_LLC_LOOKUP.WRITE_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cache Lookups : Remote Writes",
+ "UMask": "0x17c2ff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : All Lines Victimized",
+ "UMask": "0xf",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : IA traffic : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : IO traffic : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - All Lines",
+ "UMask": "0x200f",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_E",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in E State",
+ "UMask": "0x2002",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_F",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in F State",
+ "UMask": "0x2008",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_M",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in M State",
+ "UMask": "0x2001",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Local - Lines in S State",
+ "UMask": "0x2004",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Remote - All Lines",
+ "UMask": "0x800f",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_E",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Remote - Lines in E State",
+ "UMask": "0x8002",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_M",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Remote - Lines in M State",
+ "UMask": "0x8001",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Lines Victimized : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Remote - Lines in S State",
+ "UMask": "0x8004",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_E",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Lines in E state",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_M",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Lines in M state",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_CHA_LLC_VICTIMS.TOTAL_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Lines Victimized : Lines in S State",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts when a RFO (the Read for Ownership issued before a write) request hit a cacheline in the S (Shared) state.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x39",
+ "EventName": "UNC_CHA_MISC.RFO_HIT_S",
+ "PerPkg": "1",
+ "PublicDescription": "Cbo Misc : RFO HitS",
+ "UMask": "0x8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : Local InvItoE : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.LOCAL_INVITOE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : Local Rd : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.LOCAL_READ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : Off : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.OFF_PWRHEURISTIC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : Remote Rd : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.REMOTE_READ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadcast : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x55",
+ "EventName": "UNC_CHA_OSB.RFO_HITS_SNP_BCAST",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.ALLOC_EXCLUSIVE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.ALLOC_EXCLUSIVE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.ALLOC_SHARED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.ALLOC_SHARED",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.DEALLOC_EVCTCLN",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.DEALLOC_EVCTCLN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.DIRBACKED_ONLY",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.DIRBACKED_ONLY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.HIT_EXCLUSIVE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.HIT_EXCLUSIVE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.HIT_SHARED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.HIT_SHARED",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.INCLUSIVE_ONLY",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.INCLUSIVE_ONLY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.MISS",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.MISS",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.UPDATE_EXCLUSIVE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.UPDATE_EXCLUSIVE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.UPDATE_SHARED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.UPDATE_SHARED",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.VICTIM_EXCLUSIVE",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.VICTIM_EXCLUSIVE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UNC_CHA_REMOTE_SF.VICTIM_SHARED",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x69",
+ "EventName": "UNC_CHA_REMOTE_SF.VICTIM_SHARED",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.INVITOE",
+ "PerPkg": "1",
+ "PublicDescription": "HA Read and Write Requests : InvalItoE",
+ "UMask": "0x30",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.INVITOE_LOCAL",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the total number of requests coming from a remote socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.INVITOE_REMOTE",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts read requests made into this CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write) .",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.READS",
+ "PerPkg": "1",
+ "PublicDescription": "HA Read and Write Requests : Reads",
+ "UMask": "0x3",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts read requests coming from a unit on this socket made into this CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.READS_LOCAL",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts read requests coming from a remote socket made into the CHA. Reads include all read opcodes (including RFO: the Read for Ownership issued before a write).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.READS_REMOTE",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts write requests made into the CHA, including streaming, evictions, HitM (Reads from another core to a Modified cacheline), etc.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.WRITES",
+ "PerPkg": "1",
+ "PublicDescription": "HA Read and Write Requests : Writes",
+ "UMask": "0xc",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts write requests coming from a unit on this socket made into this CHA, including streaming, evictions, HitM (Reads from another core to a Modified cacheline), etc.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.WRITES_LOCAL",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts the total number of read requests made into the Home Agent. Reads include all read opcodes (including RFO). Writes include all writes (streaming, evictions, HitM, etc).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x50",
+ "EventName": "UNC_CHA_REQUESTS.WRITES_REMOTE",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Ingress (from CMS) Allocations : IRQ : Counts number of allocations per cycle into the specified Ingress queue.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_CHA_RxC_INSERTS.IRQ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Ingress (from CMS) Occupancy : IRQ : Counts number of entries in the specified Ingress queue in each cycle.",
+ "Counter": "0",
+ "EventCode": "0x11",
+ "EventName": "UNC_CHA_RxC_OCCUPANCY.IRQ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts snoop filter capacity evictions for entries tracking exclusive lines in the core's cache. Snoop filter capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry. Does not count clean evictions such as when a core's cache replaces a tracked cacheline with a new cacheline.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x3d",
+ "EventName": "UNC_CHA_SF_EVICTION.E_STATE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Snoop Filter Capacity Evictions : E state",
+ "UMask": "0x2",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts snoop filter capacity evictions for entries tracking modified lines in the core's cache. Snoop filter capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry. Does not count clean evictions such as when a core's cache replaces a tracked cacheline with a new cacheline.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x3d",
+ "EventName": "UNC_CHA_SF_EVICTION.M_STATE",
+ "PerPkg": "1",
+ "PublicDescription": "Snoop Filter Capacity Evictions : M state",
+ "UMask": "0x1",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Counts snoop filter capacity evictions for entries tracking shared lines in the core's cache. Snoop filter capacity evictions occur when the snoop filter is full and evicts an existing entry to track a new entry. Does not count clean evictions such as when a core's cache replaces a tracked cacheline with a new cacheline.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x3d",
+ "EventName": "UNC_CHA_SF_EVICTION.S_STATE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Snoop Filter Capacity Evictions : S state",
+ "UMask": "0x4",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR Inserts",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All",
+ "UMask": "0xc001ffff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CLFlush transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_CLFLUSH",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8c7fd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "FsRdCur transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_FSRDCUR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8effd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "FsRdCurPtl transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_FSRDCURPTL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c9effd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_ITOM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc47fd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMWr transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_ITOMWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc4ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "MemPushWr transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_MEMPUSHWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc6ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCiL transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_WCIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c86ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WcilF transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_WCILF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c867fd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WiL transactions from a CXL device which hit in the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_HIT_WIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c87ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CLFlush transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_CLFLUSH",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8c7fe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "FsRdCur transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_FSRDCUR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8effe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "FsRdCurPtl transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_FSRDCURPTL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c9effe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_ITOM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc47fe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMWr transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_ITOMWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc4ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "MemPushWr transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_MEMPUSHWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc6ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCiL transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_WCIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c86ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WcilF transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_WCILF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c867fe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WiL transactions from a CXL device which miss the L3.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.CXL_MISS_WIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c87ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests from IA Cores",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from iA Cores",
+ "UMask": "0xc001ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CLFlush events that are initiated from the Core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CLFlushes issued by iA Cores",
+ "UMask": "0xc8c7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRDs issued by iA Cores",
+ "UMask": "0xc80fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts; Code read prefetch from local IA that misses in the snoop filter",
+ "UMask": "0xc88fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores",
+ "UMask": "0xc817ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd PTEs issued by iA Cores due to a page walk",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_DRDPTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRdPte issued by iA Cores due to a page walk",
+ "UMask": "0xc837ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read prefetch from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores",
+ "UMask": "0xc897ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests from IA Cores which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from iA Cores that Hit the LLC",
+ "UMask": "0xc001fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Hit the LLC",
+ "UMask": "0xc80ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read prefetch from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that hit the LLC",
+ "UMask": "0xc88ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All requests issued from IA cores to CXL accelerator memory regions that hit the LLC.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c0018101",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Hit the LLC",
+ "UMask": "0xc817fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd PTEs issued by iA Cores due to page walks that hit the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRDPTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRdPte issued by iA Cores due to a page walk that hit the LLC",
+ "UMask": "0xc837fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read prefetch from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc897fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM requests from local IA cores that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by iA Cores that Hit LLC",
+ "UMask": "0xcc47fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch code read from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that hit the LLC",
+ "UMask": "0xcccffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch data read from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefData issued by iA Cores that hit the LLC",
+ "UMask": "0xccd7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch read for ownership from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "UMask": "0xccc7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc807fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA that hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc887fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM events that are initiated from the Core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by iA Cores",
+ "UMask": "0xcc47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNear requests from local IA cores",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears issued by iA Cores",
+ "UMask": "0xcd47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch code read from local IA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores",
+ "UMask": "0xcccfff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch data read from local IA.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefData issued by iA Cores",
+ "UMask": "0xccd7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch read for ownership from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores",
+ "UMask": "0xccc7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests from IA Cores which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC",
+ "UMask": "0xc001fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRds issued by iA Cores that Missed the LLC",
+ "UMask": "0xc80ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CRDs from local IA cores to locally homed memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc80efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Code read prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc88ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CRD Prefetches from local IA cores to locally homed memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc88efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CRD Prefetches from local IA cores to remotely homed memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc88f7e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CRDs from local IA cores to remotely homed memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc80f7e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All requests issued from IA cores to CXL accelerator memory regions that miss the LLC.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c0018201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC",
+ "UMask": "0xc817fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd PTEs issued by iA Cores due to a page walk that missed the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRDPTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRdPte issued by iA Cores due to a page walk that missed the LLC",
+ "UMask": "0xc837fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 memory expander card.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_CXL_ACC",
+ "PerPkg": "1",
+ "PublicDescription": "DRds issued from an IA core which miss the L3 and target memory in a CXL type 2 memory expander card.",
+ "UMask": "0x10c8178201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRds issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "UMask": "0xc8178601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read from local IA that miss the cache and targets local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc816fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRds from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8168601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRds from local IA cores to locally homed PMM addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8168a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "UMask": "0xc8178a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc897fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "L2 data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c8978201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd Prefetches from local IA cores to DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "UMask": "0xc8978601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read prefetch from local IA that miss the cache and targets local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "Inserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRD_PREF, and target local memory",
+ "UMask": "0xc896fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd Prefetches from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8968601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd Prefetches from local IA cores to locally homed PMM addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8968a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd Prefetches from local IA cores to PMM addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "UMask": "0xc8978a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read prefetch from local IA that miss the cache and targets remote memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "Inserts into the TOR from local IA cores which miss the LLC and snoop filter with the opcode DRD_PREF, and target remote memory",
+ "UMask": "0xc8977e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd Prefetches from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8970601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRd Prefetches from local IA cores to remotely homed PMM addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8970a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Data read from local IA that miss the cache and targets remote memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8177e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRds from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8170601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "DRds from local IA cores to remotely homed PMM addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8170a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM requests from local IA cores that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by iA Cores that Missed LLC",
+ "UMask": "0xcc47fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch code read from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that missed the LLC",
+ "UMask": "0xcccffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch data read from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefData issued by iA Cores that missed the LLC",
+ "UMask": "0xccd7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "LLC data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10ccd78201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Last level cache prefetch read for ownership from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "UMask": "0xccc7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "L2 RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c8878201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc8668601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to locally homed PMM addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "UMask": "0xc8668a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc86e8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from local IA cores to locally homed PMM addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "UMask": "0xc86e8a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "UMask": "0xc8670601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to remotely homed PMM addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "UMask": "0xc8670a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "UMask": "0xc86f0601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from local IA cores to remotely homed PMM addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "UMask": "0xc86f0a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc807fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "RFOs issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c8078201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA that miss the LLC targeting local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc806fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc887fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "LLC RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10ccc78201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA that miss the LLC targeting local memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc886fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA that miss the LLC targeting remote memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8877e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA that miss the LLC targeting remote memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8077e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "UCRDF requests from local IA cores that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_UCRDF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : UCRdFs issued by iA Cores that Missed LLC",
+ "UMask": "0xc877de01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from a local IA core that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc86ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA core that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLF issued by iA Cores that Missed the LLC",
+ "UMask": "0xc867fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc8678601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA cores to PMM homed addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC",
+ "UMask": "0xc8678a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc86f8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from a local IA core to PMM homed addresses that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC",
+ "UMask": "0xc86f8a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WIL requests from local IA cores that miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WiLs issued by iA Cores that Missed LLC",
+ "UMask": "0xc87fde01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by iA Cores",
+ "UMask": "0xc807ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Read for ownership prefetch from local IA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFO_Prefs issued by iA Cores",
+ "UMask": "0xc887ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "SpecItoM events that are initiated from the Core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_SPECITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : SpecItoMs issued by iA Cores",
+ "UMask": "0xcc57ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbEFtoEs issued by iA Cores. (Non Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc3fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbEFtoIs issued by iA Cores . (Non Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc37ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbMtoEs issued by iA Cores . (Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc2fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbMtoI requests from local IA cores",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WbMtoIs issued by iA Cores",
+ "UMask": "0xcc27ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WbStoIs issued by iA Cores . (Non Modified Write Backs)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WBSTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc67ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCIL requests from a local IA core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLs issued by iA Cores",
+ "UMask": "0xc86fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WCILF requests from local IA core",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IA_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WCiLF issued by iA Cores",
+ "UMask": "0xc867ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR inserts from local IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from IO Devices",
+ "UMask": "0xc001ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "CLFlush requests from IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : CLFlushes issued by IO Devices",
+ "UMask": "0xc8c3ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR inserts from local IO devices which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from IO Devices that hit the LLC",
+ "UMask": "0xc001fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMs from local IO devices which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "UMask": "0xcd43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCURs issued by IO devices which hit the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that hit the LLC",
+ "UMask": "0xc8f3fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "RFOs from local IO devices which hit the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by IO Devices that hit the LLC",
+ "UMask": "0xc803fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR ItoM inserts from local IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices",
+ "UMask": "0xcc43ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "UMask": "0xcd43ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNear (partial write) transactions from an IO device that addresses memory on the local socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that address memory on the local socket",
+ "UMask": "0xcd42ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNear (partial write) transactions from an IO device that addresses memory on a remote socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that address memory on a remote socket",
+ "UMask": "0xcd437f04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM (write) transactions from an IO device that addresses memory on the local socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoM, indicating a write request, from IO Devices that address memory on the local socket",
+ "UMask": "0xcc42ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoM (write) transactions from an IO device that addresses memory on a remote socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoM, indicating a write request, from IO Devices that address memory on a remote socket",
+ "UMask": "0xcc437f04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR inserts from local IO devices which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All requests from IO Devices that missed the LLC",
+ "UMask": "0xc001fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR ItoM inserts from local IO devices which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMs issued by IO Devices that missed the LLC",
+ "UMask": "0xcc43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "UMask": "0xcd43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCURs issued by IO devices which miss the LLC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that missed the LLC",
+ "UMask": "0xc8f3fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All TOR RFO inserts from local IO devices which miss the cache",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by IO Devices that missed the LLC",
+ "UMask": "0xc803fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCURs issued by IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices",
+ "UMask": "0xc8f3ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCUR (read) transactions from an IO device that addresses memory on the local socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that addresses memory on the local socket",
+ "UMask": "0xc8f2ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCUR (read) transactions from an IO device that addresses memory on a remote socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that addresses memory on a remote socket",
+ "UMask": "0xc8f37f04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "RFOs from local IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : RFOs issued by IO Devices",
+ "UMask": "0xc803ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "WBMtoI requests from IO devices",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : WbMtoIs issued by IO Devices",
+ "UMask": "0xcc23ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Inserts for SF or LLC Evictions",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LLC_OR_SF_EVICTIONS",
+ "PerPkg": "1",
+ "PublicDescription": "TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
+ "UMask": "0xc001ff02",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All locally initiated requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LOC_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All from Local iA and IO",
+ "UMask": "0xc000ff05",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All from Local iA",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LOC_IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All from Local iA",
+ "UMask": "0xc000ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All from Local IO",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.LOC_IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All from Local IO",
+ "UMask": "0xc000ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All remote requests (e.g. snoops, writebacks) that came from remote sockets",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.REM_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All Remote Requests",
+ "UMask": "0xc001ffc8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "All snoops to this LLC that came from remote sockets",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.REM_SNPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : All Snoops from Remote",
+ "UMask": "0xc001ff08",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "Occupancy for all TOR entries",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All",
+ "UMask": "0xc001ffff",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CLFlush transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_CLFLUSH",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8c7fd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for FsRdCur transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_FSRDCUR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8effd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for FsRdCurPtl transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_FSRDCURPTL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c9effd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_ITOM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc47fd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMWr transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_ITOMWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc4ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for MemPushWr transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_MEMPUSHWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc6ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCiL transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_WCIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c86ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WcilF transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_WCILF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c867fd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WiL transactions from a CXL device which hit in the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_HIT_WIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c87ffd20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CLFlush transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_CLFLUSH",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8c7fe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for FsRdCur transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_FSRDCUR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c8effe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for FsRdCurPtl transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_FSRDCURPTL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c9effe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_ITOM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc47fe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMWr transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_ITOMWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc4ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for MemPushWr transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_MEMPUSHWR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78cc6ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCiL transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_WCIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c86ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WcilF transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_WCILF",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c867fe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WiL transactions from a CXL device which miss the L3.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.CXL_MISS_WIL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78c87ffe20",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests from IA Cores",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from iA Cores",
+ "UMask": "0xc001ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CLFlush events that are initiated from the Core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CLFlushes issued by iA Cores",
+ "UMask": "0xc8c7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRDs issued by iA Cores",
+ "UMask": "0xc80fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy; Code read prefetch from local IA that misses in the snoop filter",
+ "UMask": "0xc88fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores",
+ "UMask": "0xc817ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd PTEs issued by iA Cores due to a page walk",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRDPTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk",
+ "UMask": "0xc837ff01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Data read prefetch from local IA",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores",
+ "UMask": "0xc897ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests from IA Cores which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from iA Cores that Hit the LLC",
+ "UMask": "0xc001fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC",
+ "UMask": "0xc80ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read prefetch from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that hit the LLC",
+ "UMask": "0xc88ffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All requests issued from IA cores to CXL accelerator memory regions that hit the LLC.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c0018101",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Hit the LLC",
+ "UMask": "0xc817fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd PTEs issued by iA Cores due to page walks that hit the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRDPTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that hit the LLC",
+ "UMask": "0xc837fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read prefetch from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc897fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM requests from local IA cores that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC",
+ "UMask": "0xcc47fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch code read from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that hit the LLC",
+ "UMask": "0xcccffd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch data read from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that hit the LLC",
+ "UMask": "0xccd7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch read for ownership from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "UMask": "0xccc7fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc807fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA that hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Hit the LLC",
+ "UMask": "0xc887fd01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM events that are initiated from the Core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores",
+ "UMask": "0xcc47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNear requests from local IA cores",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores",
+ "UMask": "0xcd47ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch code read from local IA.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores",
+ "UMask": "0xcccfff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch data read from local IA.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores",
+ "UMask": "0xccd7ff01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Last level cache prefetch read for ownership from local IA",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores",
+ "UMask": "0xccc7ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests from IA Cores which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from iA Cores that Missed the LLC",
+ "UMask": "0xc001fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Missed the LLC",
+ "UMask": "0xc80ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CRDs from local IA cores to locally homed memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc80efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Code read prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc88ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CRD Prefetches from local IA cores to locally homed memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc88efe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CRD Prefetches from local IA cores to remotely homed memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc88f7e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CRDs from local IA cores to remotely homed memory",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc80f7e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All requests issued from IA cores to CXL accelerator memory regions that miss the LLC.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c0018201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC",
+ "UMask": "0xc817fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd PTEs issued by iA Cores due to a page walk that missed the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRDPTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that missed the LLC",
+ "UMask": "0xc837fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRds and equivalent opcodes issued from an IA core which miss the L3 and target memory in a CXL type 2 memory expander card.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c8178201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRds issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "UMask": "0xc8178601",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Data read from local IA to locally homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc816fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRds from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8168601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRds from local IA cores to locally homed PMM addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8168a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "UMask": "0xc8178a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Data read prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc897fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for L2 data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c8978201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd Prefetches from local IA cores to DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "UMask": "0xc8978601",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Data read prefetch from local IA to locally homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL",
+ "PerPkg": "1",
+        "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc896fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd Prefetches from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8968601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd Prefetches from local IA cores to locally homed PMM addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "UMask": "0xc8968a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd Prefetches from local IA cores to PMM addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "UMask": "0xc8978a01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Data read prefetch from local IA to remotely homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE",
+ "PerPkg": "1",
+        "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8977e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd Prefetches from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8970601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRd Prefetches from local IA cores to remotely homed PMM addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8970a01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Data read from local IA to remotely homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8177e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRds from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8170601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for DRds from local IA cores to remotely homed PMM addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8170a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM requests from local IA cores that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC",
+ "UMask": "0xcc47fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch code read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFCODE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that missed the LLC",
+ "UMask": "0xcccffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch data read from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that missed the LLC",
+ "UMask": "0xccd7fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for LLC data prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10ccd78201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Last level cache prefetch read for ownership from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "UMask": "0xccc7fe01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for LLC RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c8878201",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc8668601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to locally homed PMM addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "UMask": "0xc8668a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from local IA cores to locally homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "UMask": "0xc86e8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from local IA cores to locally homed PMM addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "UMask": "0xc86e8a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "UMask": "0xc8670601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to remotely homed PMM addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "UMask": "0xc8670a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from local IA cores to remotely homed DDR addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "UMask": "0xc86f0601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from local IA cores to remotely homed PMM addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "UMask": "0xc86f0a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc807fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RFOs issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10c8078201",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Read for ownership from local IA to locally homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc806fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc887fe01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for L2 RFO prefetches issued from an IA core which miss the L3 and target memory in a CXL type 2 accelerator.",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_CXL_ACC",
+ "PerPkg": "1",
+ "UMask": "0x10ccc78201",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA to locally homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "UMask": "0xc886fe01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA to remotely homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8877e01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Read for ownership from local IA to remotely homed memory that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "UMask": "0xc8077e01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for UCRDF requests from local IA cores that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_UCRDF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC",
+ "UMask": "0xc877de01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from a local IA core that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC",
+ "UMask": "0xc86ffe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA core that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC",
+ "UMask": "0xc867fe01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc8678601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA cores to PMM homed addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC",
+ "UMask": "0xc8678a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from local IA cores to DDR homed addresses which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_DDR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "UMask": "0xc86f8601",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from a local IA core to PMM homed addresses that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_PMM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC",
+ "UMask": "0xc86f8a01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WIL requests from local IA cores that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC",
+ "UMask": "0xc87fde01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Read for ownership from local IA",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by iA Cores",
+ "UMask": "0xc807ff01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for Read for ownership prefetch from local IA",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO_PREF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores",
+ "UMask": "0xc887ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for SpecItoM events that are initiated from the Core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_SPECITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : SpecItoMs issued by iA Cores",
+ "UMask": "0xcc57ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WbMtoI requests from local IA cores",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WbMtoIs issued by iA Cores",
+ "UMask": "0xcc27ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCIL requests from a local IA core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCIL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores",
+ "UMask": "0xc86fff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WCILF requests from local IA core",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCILF",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores",
+ "UMask": "0xc867ff01",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for All requests from local IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from IO Devices",
+ "UMask": "0xc001ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for CLFlush requests from IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_CLFLUSH",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : CLFlushes issued by IO Devices",
+ "UMask": "0xc8c3ff04",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for All requests from local IO devices which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from IO Devices that hit the LLC",
+ "UMask": "0xc001fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMs from local IO devices which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that Hit the LLC",
+ "UMask": "0xcc43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "UMask": "0xcd43fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCURs issued by IO devices which hit the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that hit the LLC",
+ "UMask": "0xc8f3fd04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RFOs from local IO devices which hit the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that hit the LLC",
+ "UMask": "0xc803fd04",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for ItoMs from local IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices",
+ "UMask": "0xcc43ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "UMask": "0xcd43ff04",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for All requests from local IO devices which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All requests from IO Devices that missed the LLC",
+ "UMask": "0xc001fe04",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for ItoMs from local IO devices which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that missed the LLC",
+ "UMask": "0xcc43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "UMask": "0xcd43fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNear transactions from an IO device on the local socket that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "UMask": "0xcd42fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoMCacheNear transactions from an IO device on a remote socket that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "UMask": "0xcd437e04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM transactions from an IO device on the local socket that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that missed the LLC",
+ "UMask": "0xcc42fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for ItoM transactions from an IO device on a remote socket that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that missed the LLC",
+ "UMask": "0xcc437e04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCURs issued by IO devices which miss the LLC",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that missed the LLC",
+ "UMask": "0xc8f3fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCUR transactions from an IO device on the local socket that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR_LOCAL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that missed the LLC",
+ "UMask": "0xc8f2fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCUR transactions from an IO device on a remote socket that miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR_REMOTE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that missed the LLC",
+ "UMask": "0xc8f37e04",
+ "Unit": "CHA"
+ },
+ {
+        "BriefDescription": "TOR Occupancy for RFOs from local IO devices which miss the cache",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that missed the LLC",
+ "UMask": "0xc803fe04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for PCIRDCURs issued by IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_PCIRDCUR",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices",
+ "UMask": "0xc8f3ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for RFOs from local IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_RFO",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : RFOs issued by IO Devices",
+ "UMask": "0xc803ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for WBMtoI requests from IO devices",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.IO_WBMTOI",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : WbMtoIs issued by IO Devices",
+ "UMask": "0xcc23ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All locally initiated requests",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All from Local iA and IO",
+ "UMask": "0xc000ff05",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All from Local iA",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All from Local iA",
+ "UMask": "0xc000ff01",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All from Local IO",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All from Local IO",
+ "UMask": "0xc000ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All remote requests (e.g. snoops, writebacks) that came from remote sockets",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.REM_ALL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All Remote Requests",
+ "UMask": "0xc001ffc8",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "TOR Occupancy for All snoops to this LLC that came from remote sockets",
+ "Counter": "0",
+ "EventCode": "0x36",
+ "EventName": "UNC_CHA_TOR_OCCUPANCY.REM_SNPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Occupancy : All Snoops from Remote",
+ "UMask": "0xc001ff08",
+ "Unit": "CHA"
+ }
+]
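
Each entry in the uncore-cache.json file above is a self-contained event description: EventCode, UMask and Unit identify the hardware event on its uncore PMU, while BriefDescription/PublicDescription become the help text for the event alias that perf builds from this file. As a quick way to inspect a file of this size, here is a minimal Python sketch (not part of the kernel's pmu-events tooling; the path is an assumption about where the tree is checked out) that groups the entries by Unit and lists the UNC_CHA_TOR_OCCUPANCY variants side by side:

#!/usr/bin/env python3
# Minimal sketch: summarise a pmu-events JSON file like the one added above.
# Assumes it is run from the top of a kernel tree containing this commit.
import json
from collections import defaultdict

PATH = "tools/perf/pmu-events/arch/x86/graniterapids/uncore-cache.json"

with open(PATH) as f:
    events = json.load(f)  # the file is a flat JSON array of event objects

by_unit = defaultdict(list)
for ev in events:
    by_unit[ev.get("Unit", "?")].append(ev)

for unit, evs in sorted(by_unit.items()):
    print(f"{unit}: {len(evs)} events")
    for ev in evs:
        name = ev.get("EventName", "")
        if name.startswith("UNC_CHA_TOR_OCCUPANCY."):
            # EventCode/UMask are the raw programming values for this event.
            print(f"  {name:55} code={ev['EventCode']} umask={ev['UMask']}")

Once the file is built into perf, the aliases should be usable directly, e.g. perf stat -e unc_cha_tor_occupancy.ia_miss_drd -a on a matching system (the lower-case alias spelling is how perf typically exposes these names).
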
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/uncore-cxl.json b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-cxl.json
new file mode 100644
index 000000000000..43e094c233cc
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-cxl.json
@@ -0,0 +1,29 @@
+[
+ {
+ "BriefDescription": "B2CXL Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_B2CXL_CLOCKTICKS",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "B2CXL"
+ },
+ {
+ "BriefDescription": "Number of Allocation to Mem Data Packing buffer",
+ "Counter": "4,5,6,7",
+ "EventCode": "0x41",
+ "EventName": "UNC_CXLCM_RxC_PACK_BUF_INSERTS.MEM_DATA",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "CXLCM"
+ },
+ {
+ "BriefDescription": "Number of Allocation to M2S Data AGF",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_CXLDP_TxC_AGF_INSERTS.M2S_DATA",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "CXLDP"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-interconnect.json
new file mode 100644
index 000000000000..5eb1145f204f
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-interconnect.json
@@ -0,0 +1,1979 @@
+[
+ {
+ "BriefDescription": "Clockticks of the mesh to memory (B2CMI)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_B2CMI_CLOCKTICKS",
+ "PerPkg": "1",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Counts the number of times D2C was not honoured by egress due to directory state constraints",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x17",
+ "EventName": "UNC_B2CMI_DIRECT2CORE_NOT_TAKEN_DIRSTATE",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of times B2CMI egress did D2C (direct to core)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x16",
+ "EventName": "UNC_B2CMI_DIRECT2CORE_TAKEN",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of times D2C wasn't honoured even though the incoming request had d2c set for non cisgress txn",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x18",
+ "EventName": "UNC_B2CMI_DIRECT2CORE_TXN_OVERRIDE",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Counts the number of times d2k wasn't done due to credit constraints",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1B",
+ "EventName": "UNC_B2CMI_DIRECT2UPI_NOT_TAKEN_CREDITS",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Direct to UPI Transactions - Ignored due to lack of credits : All : Counts the number of times d2k wasn't done due to credit constraints",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1B",
+ "EventName": "UNC_B2CMI_DIRECT2UPI_NOT_TAKEN_CREDITS.EGRESS",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Counts the number of times D2K was not honoured by egress due to directory state constraints",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1A",
+ "EventName": "UNC_B2CMI_DIRECT2UPI_NOT_TAKEN_DIRSTATE",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Cycles when Direct2UPI was Disabled : Egress Ignored D2U : Counts the number of times D2K was not honoured by egress due to directory state constraints",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1A",
+ "EventName": "UNC_B2CMI_DIRECT2UPI_NOT_TAKEN_DIRSTATE.EGRESS",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of times egress did D2K (Direct to KTI)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x19",
+ "EventName": "UNC_B2CMI_DIRECT2UPI_TAKEN",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of times D2K wasn't honoured even though the incoming request had d2k set for non cisgress txn",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1C",
+ "EventName": "UNC_B2CMI_DIRECT2UPI_TXN_OVERRIDE",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit Clean",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.CLEAN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x38",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.CLEAN_A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.CLEAN_I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.CLEAN_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit Dirty (modified)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.DIRTY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x7",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.DIRTY_A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.DIRTY_I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Hit : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_B2CMI_DIRECTORY_HIT.DIRTY_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of 1lm or 2lm hit read data returns to egress with any directory to non persistent memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "UNC_B2CMI_DIRECTORY_LOOKUP.ANY",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of 1lm or 2lm hit read data returns to egress with directory A to non persistent memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "UNC_B2CMI_DIRECTORY_LOOKUP.STATE_A",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of 1lm or 2lm hit read data returns to egress with directory I to non persistent memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "UNC_B2CMI_DIRECTORY_LOOKUP.STATE_I",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the number of 1lm or 2lm hit read data returns to egress with directory S to non persistent memory",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x20",
+ "EventName": "UNC_B2CMI_DIRECTORY_LOOKUP.STATE_S",
+ "PerPkg": "1",
+ "PublicDescription": "Counts the number of 1lm or 2lm hit read data returns to egress with directory S to non persistent memory",
+ "UMask": "0x4",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss Clean",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.CLEAN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x38",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.CLEAN_A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.CLEAN_I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.CLEAN_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss Dirty (modified)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.DIRTY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x7",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.DIRTY_A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.DIRTY_I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory Miss : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_B2CMI_DIRECTORY_MISS.DIRTY_S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Any A2I Transition",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.A2I",
+ "PerPkg": "1",
+ "UMask": "0x320",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Any A2S Transition",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.A2S",
+ "PerPkg": "1",
+ "UMask": "0x340",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts cisgress directory updates",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.ANY",
+ "PerPkg": "1",
+ "UMask": "0x301",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts any 1lm or 2lm hit data return that would result in directory update to non persistent memory (DRAM)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.HIT_ANY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x101",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update in near memory to the A state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.HIT_X2A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x114",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update in near memory to the I state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.HIT_X2I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x128",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update in near memory to the S state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.HIT_X2S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x142",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Any I2A Transition",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.I2A",
+ "PerPkg": "1",
+ "UMask": "0x304",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Any I2S Transition",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.I2S",
+ "PerPkg": "1",
+ "UMask": "0x302",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update in far memory to the A state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.MISS_X2A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x214",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update in far memory to the I state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.MISS_X2I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x228",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update in far memory to the S state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.MISS_X2S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x242",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Any S2A Transition",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.S2A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x310",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Any S2I Transition",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.S2I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x308",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update to the A state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.X2A",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x314",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update to the I state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.X2I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x328",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Directory update to the S state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_B2CMI_DIRECTORY_UPDATE.X2S",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x342",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts any read",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "UNC_B2CMI_IMC_READS.ALL",
+ "PerPkg": "1",
+ "UMask": "0x104",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Counts normal reads issued to CMI",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "UNC_B2CMI_IMC_READS.NORMAL",
+ "PerPkg": "1",
+ "UMask": "0x101",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Counts reads to NM region",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "UNC_B2CMI_IMC_READS.TO_DDR_AS_CACHE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x110",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts reads to 1lm non persistent memory regions",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x24",
+ "EventName": "UNC_B2CMI_IMC_READS.TO_DDR_AS_MEM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x108",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "All Writes - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.ALL",
+ "PerPkg": "1",
+ "UMask": "0x110",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Full Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.FULL",
+ "PerPkg": "1",
+ "UMask": "0x101",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Non-Inclusive - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.NI",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Non-Inclusive Miss - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.NI_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Partial Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.PARTIAL",
+ "PerPkg": "1",
+ "UMask": "0x102",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "DDR, acting as Cache - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.TO_DDR_AS_CACHE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x140",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "DDR - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x25",
+ "EventName": "UNC_B2CMI_IMC_WRITES.TO_DDR_AS_MEM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x120",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Prefetch CAM Inserts : UPI - Ch 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x56",
+ "EventName": "UNC_B2CMI_PREFCAM_INSERTS.CH0_UPI",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Prefetch CAM Inserts : XPT - Ch 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x56",
+ "EventName": "UNC_B2CMI_PREFCAM_INSERTS.CH0_XPT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Prefetch CAM Inserts : UPI - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x56",
+ "EventName": "UNC_B2CMI_PREFCAM_INSERTS.UPI_ALLCH",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "B2CMI"
+ },
+ {
+        "BriefDescription": "Prefetch CAM Inserts : XPT - All Channels",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x56",
+ "EventName": "UNC_B2CMI_PREFCAM_INSERTS.XPT_ALLCH",
+ "PerPkg": "1",
+ "PublicDescription": "Prefetch CAM Inserts : XPT - All Channels",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Prefetch CAM Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x54",
+ "EventName": "UNC_B2CMI_PREFCAM_OCCUPANCY.CH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm reads and WRNI which were a hit",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_B2CMI_TAG_HIT.ALL",
+ "PerPkg": "1",
+ "UMask": "0xf",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm reads which were a hit clean",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_B2CMI_TAG_HIT.RD_CLEAN",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm reads which were a hit dirty",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_B2CMI_TAG_HIT.RD_DIRTY",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm WRNI which were a hit clean",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_B2CMI_TAG_HIT.WR_CLEAN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm WRNI which were a hit dirty",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_B2CMI_TAG_HIT.WR_DIRTY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm second way read miss for a WrNI",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.CLEAN",
+ "PerPkg": "1",
+ "UMask": "0x5",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm second way read miss for a WrNI",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.DIRTY",
+ "PerPkg": "1",
+ "UMask": "0xa",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm second way read miss for a Rd",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.RD_2WAY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm reads which were a miss and the cache line is unmodified",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.RD_CLEAN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm reads which were a miss and the cache line is modified",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.RD_DIRTY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm second way read miss for a WrNI",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.WR_2WAY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm WRNI which were a miss and the cache line is unmodified",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.WR_CLEAN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Counts the 2lm WRNI which were a miss and the cache line is modified",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x4B",
+ "EventName": "UNC_B2CMI_TAG_MISS.WR_DIRTY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "UNC_B2CMI_TRACKER_INSERTS.CH0",
+ "PerPkg": "1",
+ "UMask": "0x104",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Tracker Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x33",
+ "EventName": "UNC_B2CMI_TRACKER_OCCUPANCY.CH0",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "Write Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_B2CMI_WR_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "B2CMI"
+ },
+ {
+ "BriefDescription": "UNC_B2HOT_CLOCKTICKS",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_B2HOT_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "Clockticks for the B2HOT unit",
+ "UMask": "0x1",
+ "Unit": "B2HOT"
+ },
+ {
+ "BriefDescription": "Number of uclks in domain",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_B2UPI_CLOCKTICKS",
+ "PerPkg": "1",
+ "Unit": "B2UPI"
+ },
+ {
+ "BriefDescription": "Total Write Cache Occupancy : Mem",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x0F",
+ "EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.MEM",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "IRP Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_I_CLOCKTICKS",
+ "PerPkg": "1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Inbound read requests received by the IRP and inserted into the FAF queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x18",
+ "EventName": "UNC_I_FAF_INSERTS",
+ "PerPkg": "1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "FAF occupancy",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x19",
+ "EventName": "UNC_I_FAF_OCCUPANCY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Counts Timeouts - Set 0 : Fastpath Rejects",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_I_MISC0.FAST_REJ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Counts Timeouts - Set 0 : Fastpath Requests",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_I_MISC0.FAST_REQ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Misc Events - Set 1 : Lost Forward : Snoop pulled away ownership before a write was committed",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_I_MISC1.LOST_FWD",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Misc Events - Set 1 : Received Invalid : Secondary received a transfer that did not have sufficient MESI state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop Hit E/S responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_ES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x74",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop Hit I responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_I",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x72",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop Hit M responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_HIT_M",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x78",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Snoop miss responses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_I_SNOOP_RESP.ALL_MISS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x71",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "Inbound write (fast path) requests to coherent memory, received by the IRP resulting in write ownership requests issued by IRP to the mesh.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x11",
+ "EventName": "UNC_I_TRANSACTIONS.WR_PREF",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "IRP"
+ },
+ {
+ "BriefDescription": "MDF Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_MDF_CLOCKTICKS",
+ "PerPkg": "1",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of packets bypassing the ingress queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x14",
+ "EventName": "UNC_MDF_RxR_BYPASS.AD_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of packets bypassing the ingress queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x14",
+ "EventName": "UNC_MDF_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of packets bypassing the ingress queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x14",
+ "EventName": "UNC_MDF_RxR_BYPASS.AK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of packets bypassing the ingress queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x14",
+ "EventName": "UNC_MDF_RxR_BYPASS.BL_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of packets bypassing the ingress queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x14",
+ "EventName": "UNC_MDF_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of packets bypassing the ingress queue",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x14",
+ "EventName": "UNC_MDF_RxR_BYPASS.IV",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of allocations into the Ingress used to queue up requests from the mesh (AD_BNC)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_MDF_RxR_INSERTS.AD_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of allocations into the Ingress used to queue up requests from the mesh (AD)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_MDF_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of allocations into the Ingress used to queue up requests from the mesh (AK)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_MDF_RxR_INSERTS.AK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of allocations into the Ingress used to queue up requests from the mesh (BL_BNC)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_MDF_RxR_INSERTS.BL_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of allocations into the Ingress used to queue up requests from the mesh (BL_CRD)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_MDF_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of allocations into the Ingress used to queue up requests from the mesh (IV)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "UNC_MDF_RxR_INSERTS.IV",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Occupancy counts for the Ingress buffer",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_MDF_RxR_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Occupancy counts for the Ingress buffer",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_MDF_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Occupancy counts for the Ingress buffer",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_MDF_RxR_OCCUPANCY.AK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Occupancy counts for the Ingress buffer",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_MDF_RxR_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Occupancy counts for the Ingress buffer",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_MDF_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Occupancy counts for the Ingress buffer",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "UNC_MDF_RxR_OCCUPANCY.IV",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress bypasses for AD_BNC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_MDF_TxR_BYPASS.AD_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress bypasses for AD_CRD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_MDF_TxR_BYPASS.AD_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress bypasses for AK",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_MDF_TxR_BYPASS.AK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress bypasses for BL_BNC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_MDF_TxR_BYPASS.BL_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress bypasses for BL_CRD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_MDF_TxR_BYPASS.BL_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress bypasses for IV",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1E",
+ "EventName": "UNC_MDF_TxR_BYPASS.IV",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of egress inserts for AD_BNC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1C",
+ "EventName": "UNC_MDF_TxR_INSERTS.AD_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of egress inserts for AD_CRD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1C",
+ "EventName": "UNC_MDF_TxR_INSERTS.AD_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of egress inserts for AK",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1C",
+ "EventName": "UNC_MDF_TxR_INSERTS.AK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of egress inserts for BL_BNC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1C",
+ "EventName": "UNC_MDF_TxR_INSERTS.BL_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of egress inserts for BL_CRD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1C",
+ "EventName": "UNC_MDF_TxR_INSERTS.BL_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of egress inserts for IV",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1C",
+ "EventName": "UNC_MDF_TxR_INSERTS.IV",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress occupancy for AD_BNC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_MDF_TxR_OCCUPANCY.AD_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress occupancy for AD_CRD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_MDF_TxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress occupancy for AK",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_MDF_TxR_OCCUPANCY.AK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress occupancy for BL_BNC",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_MDF_TxR_OCCUPANCY.BL_BNC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress occupancy for BL_CRD",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_MDF_TxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Egress occupancy for IV",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1D",
+ "EventName": "UNC_MDF_TxR_OCCUPANCY.IV",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "MDF"
+ },
+ {
+ "BriefDescription": "Number of UPI LL clock cycles while the event is enabled",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_UPI_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "Number of kfclks",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Cycles in L1 : Number of UPI qfclk cycles spent in L1 power mode. L1 is a mode that totally shuts down a UPI link. Use edge detect to count the number of instances when the UPI link entered L1. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. Because L1 totally shuts down the link, it takes a good amount of time to exit this mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x21",
+ "EventName": "UNC_UPI_L1_POWER_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xe",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10e",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10f",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Request",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Request, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x108",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Response - Conflict",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPCNFLT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1aa",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Response - Invalid",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPI",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x12a",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Response - Data",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xc",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Response - Data, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10c",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Response - No Data",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xa",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Response - No Data, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10a",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Snoop",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x9",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Snoop, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x109",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Writeback",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB",
+ "PerPkg": "1",
+ "UMask": "0xd",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Receive path of a UPI Port : Writeback, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10d",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : All Data : Shows legal flit time (hides impact of L0p and L0c).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.ALL_DATA",
+ "PerPkg": "1",
+ "UMask": "0xf",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Null FLITs received from any slot",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.ALL_NULL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Valid Flits Received : Null FLITs received from any slot",
+ "UMask": "0x27",
+ "Unit": "UPI"
+ },
+ {
+        "BriefDescription": "Valid Flits Received : Data : Shows legal flit time (hides impact of L0p and L0c). : Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.DATA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : Idle : Shows legal flit time (hides impact of L0p and L0c).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.IDLE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x47",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.LLCRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.LLCTRL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : All Non Data : Shows legal flit time (hides impact of L0p and L0c).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.NON_DATA",
+ "PerPkg": "1",
+ "UMask": "0x97",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL. Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.NULL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.PROTHDR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Received : Slot 2 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 2 - Other mask bits determine types of headers to count.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_UPI_RxL_FLITS.SLOT2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "RxQ Flit Buffer Allocations : Slot 0 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x30",
+ "EventName": "UNC_UPI_RxL_INSERTS.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "RxQ Flit Buffer Allocations : Slot 1 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x30",
+ "EventName": "UNC_UPI_RxL_INSERTS.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "RxQ Flit Buffer Allocations : Slot 2 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x30",
+ "EventName": "UNC_UPI_RxL_INSERTS.SLOT2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "RxQ Occupancy - All Packets : Slot 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "RxQ Occupancy - All Packets : Slot 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "RxQ Occupancy - All Packets : Slot 2",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x27",
+ "EventName": "UNC_UPI_TxL0P_POWER_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x28",
+ "EventName": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x29",
+ "EventName": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xe",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10e",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10f",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Request",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Request, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x108",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Response - Conflict",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPCNFLT",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1aa",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Response - Invalid",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPI",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x12a",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Response - Data",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xc",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Response - Data, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10c",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Response - No Data",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xa",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Response - No Data, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10a",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Snoop",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x9",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Snoop, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x109",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Writeback",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xd",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Matches on Transmit path of a UPI Port : Writeback, Match Opcode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB_OPC",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10d",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : All Data : Counts number of data flits across this UPI link.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.ALL_DATA",
+ "PerPkg": "1",
+ "UMask": "0xf",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "All Null Flits",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.ALL_NULL",
+ "PerPkg": "1",
+ "PublicDescription": "Valid Flits Sent : Idle",
+ "UMask": "0x27",
+ "Unit": "UPI"
+ },
+ {
+        "BriefDescription": "Valid Flits Sent : Data : Shows legal flit time (hides impact of L0p and L0c). : Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.DATA",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : Idle : Shows legal flit time (hides impact of L0p and L0c).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.IDLE",
+ "PerPkg": "1",
+ "UMask": "0x47",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.LLCRD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.LLCTRL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : All Non Data : Shows legal flit time (hides impact of L0p and L0c).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.NON_DATA",
+ "PerPkg": "1",
+ "PublicDescription": "Valid Flits Sent : Null FLITs transmitted to any slot",
+ "UMask": "0x97",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL. Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.NULL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.PROTHDR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Valid Flits Sent : Slot 2 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 2 - Other mask bits determine types of headers to count.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_UPI_TxL_FLITS.SLOT2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "UPI"
+ },
+ {
+ "BriefDescription": "Message Received : Doorbell",
+ "Counter": "0,1",
+ "EventCode": "0x42",
+ "EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "UBOX"
+ },
+ {
+ "BriefDescription": "Message Received : Interrupt",
+ "Counter": "0,1",
+ "EventCode": "0x42",
+ "EventName": "UNC_U_EVENT_MSG.INT_PRIO",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Message Received : Interrupt : Interrupts",
+ "UMask": "0x10",
+ "Unit": "UBOX"
+ },
+ {
+ "BriefDescription": "Message Received : IPI",
+ "Counter": "0,1",
+ "EventCode": "0x42",
+ "EventName": "UNC_U_EVENT_MSG.IPI_RCVD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Message Received : IPI : Inter Processor Interrupts",
+ "UMask": "0x4",
+ "Unit": "UBOX"
+ },
+ {
+ "BriefDescription": "Message Received : MSI",
+ "Counter": "0,1",
+ "EventCode": "0x42",
+ "EventName": "UNC_U_EVENT_MSG.MSI_RCVD",
+ "PerPkg": "1",
+ "PublicDescription": "Message Received : MSI : Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)",
+ "UMask": "0x2",
+ "Unit": "UBOX"
+ },
+ {
+ "BriefDescription": "Message Received : VLW",
+ "Counter": "0,1",
+ "EventCode": "0x42",
+ "EventName": "UNC_U_EVENT_MSG.VLW_RCVD",
+ "Experimental": "1",
+ "PerPkg": "1",
+        "PublicDescription": "Message Received : VLW : Virtual Logical Wire (legacy) messages were received from Uncore.",
+ "UMask": "0x1",
+ "Unit": "UBOX"
+ }
+]
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/uncore-io.json b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-io.json
new file mode 100644
index 000000000000..2ea2637df3fb
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-io.json
@@ -0,0 +1,1925 @@
+[
+ {
+ "BriefDescription": "IIO Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_IIO_CLOCKTICKS",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PCIE Completion Buffer Inserts. Counts once per 64 byte read issued from this PCIE device.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xC2",
+ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0xff",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Count of allocations in the completion buffer",
+ "Counter": "2,3",
+ "EventCode": "0xD5",
+ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.ALL_PARTS",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.ALL_PARTS",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
+ "EventCode": "0xC0",
+ "EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Counts once for every 4 bytes read from this card to memory. This event does include reads to IO.",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x02",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x04",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x08",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x10",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x20",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x40",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x80",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Counts once for every 4 bytes written from this card to memory. This event does include writes to IO.",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x02",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x04",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x08",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x10",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x20",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x40",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x80",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Counts once for every 4 bytes written from this card to a peer device's IO space.",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.ALL_PARTS",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x83",
+ "EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Hits to a 1G Page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.1G_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Hits to a 2M Page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.2M_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Hits to a 4K Page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.4K_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB lookups all",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.ALL_LOOKUPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Context cache hits",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Context cache lookups",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_LOOKUPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB lookups first",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.FIRST_LOOKUPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB Fills (same as IOTLB miss)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x40",
+ "EventName": "UNC_IIO_IOMMU0.MISSES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOMMU memory access (both low and high priority)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.NUM_MEM_ACCESSES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0xc0",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOMMU high priority memory access",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.NUM_MEM_ACCESSES_HIGH",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOMMU low priority memory access",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.NUM_MEM_ACCESSES_LOW",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache Hit to a 1G page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_1G_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache Hit to a 256T page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_256T_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache Hit to a 2M page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_2M_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache Hit to a 512G page",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_512G_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache fill",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_CACHE_FILLS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Second Level Page Walk Cache lookup",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x41",
+ "EventName": "UNC_IIO_IOMMU1.SLPWC_CACHE_LOOKUPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Cycles PWT full",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_IIO_IOMMU3.CYC_PWT_FULL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Interrupt Entry cache hit",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_IIO_IOMMU3.INT_CACHE_HITS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Interrupt Entry cache lookup",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_IIO_IOMMU3.INT_CACHE_LOOKUPS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Context Cache invalidation events",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_IIO_IOMMU3.NUM_INVAL_CTXT_CACHE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Interrupt Entry Cache invalidation events",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_IIO_IOMMU3.NUM_INVAL_INT_CACHE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "IOTLB invalidation events",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_IIO_IOMMU3.NUM_INVAL_IOTLB",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "PASID Cache invalidation events",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_IIO_IOMMU3.NUM_INVAL_PASID_CACHE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "This event is deprecated. [This event is alias to UNC_IIO_NUM_OUTSTANDING_REQ_FROM_CPU.TO_IO]",
+ "Counter": "2,3",
+ "Deprecated": "1",
+ "EventCode": "0xc5",
+ "EventName": "UNC_IIO_NUM_OUSTANDING_REQ_FROM_CPU.TO_IO",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Occupancy of outbound request queue : To device : Counts number of outbound requests/completions IIO is currently processing [This event is alias to UNC_IIO_NUM_OUSTANDING_REQ_FROM_CPU.TO_IO]",
+ "Counter": "2,3",
+ "EventCode": "0xc5",
+ "EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_FROM_CPU.TO_IO",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Passing data to be written",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.DATA",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Issuing final read or write of line",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.FINAL_RD_WR",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Processing response from IOMMU",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.IOMMU_HIT",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Issuing to IOMMU",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.IOMMU_REQ",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Request Ownership",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.REQ_OWN",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Writing line",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.WR",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.ABORT",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x80",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.CONFINED_P2P",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x40",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.LOC_P2P",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x20",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MCAST",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MEM",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MSGB",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.REM_P2P",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x10",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "-",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Posted requests sent by the integrated IO (IIO) controller to the Ubox, useful for counting message signaled interrupts (MSI).",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX_POSTED",
+ "FCMask": "0x01",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "All 9 bits of Page Walk Tracker Occupancy",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x42",
+ "EventName": "UNC_IIO_PWT_OCCUPANCY",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PortMask": "0x000",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core reading from Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Core writing to Cards MMIO space",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
+ "EventCode": "0xC1",
+ "EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.ALL_PARTS",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x0FF",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x4",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART4",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART5",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART6",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART7",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x1",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x8",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x001",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x002",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x004",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x008",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x010",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x020",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x040",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ },
+ {
+ "BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
+ "EventCode": "0x84",
+ "EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
+ "FCMask": "0x07",
+ "PerPkg": "1",
+ "PortMask": "0x080",
+ "UMask": "0x2",
+ "Unit": "IIO"
+ }
+]
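The UNC_IIO_DATA_REQ_OF_CPU MEM_READ/MEM_WRITE events in the file above count once per 4 bytes moved between a card and memory, so per-port PCIe bandwidth falls out as count * 4 / interval. A minimal sketch of that conversion (Python; the helper name and the sample counter value are illustrative assumptions, not taken from this file):

    # Convert a 4-byte-granularity UNC_IIO_DATA_REQ_OF_CPU count into MiB/s.
    def iio_bandwidth_mib_s(count_4byte_units: int, interval_s: float) -> float:
        """Each count represents 4 bytes transferred; scale to MiB per second."""
        return count_4byte_units * 4 / interval_s / (1 << 20)

    # e.g. 52,428,800 counts of UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.ALL_PARTS over a
    # 1-second sample is roughly 200 MiB/s written from the card to memory.
    print(iio_bandwidth_mib_s(52_428_800, 1.0))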
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/uncore-memory.json b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-memory.json
new file mode 100644
index 000000000000..f559e27e2815
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-memory.json
@@ -0,0 +1,890 @@
+[
+ {
+ "BriefDescription": "DRAM Activate Count : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.ALL",
+ "PerPkg": "1",
+ "UMask": "0xf7",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Activate Count : Read transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Activate Count : Underfill Read transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.UFILL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Activate Count : Write transaction on Page Empty or Page Miss : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x02",
+ "EventName": "UNC_M_ACT_COUNT.WR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0, all CAS operations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.ALL",
+ "PerPkg": "1",
+ "UMask": "0xff",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0, all reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD",
+ "PerPkg": "1",
+ "UMask": "0xcf",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_NON_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc3",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 auto-precharge reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_PRE_REG",
+ "PerPkg": "1",
+ "UMask": "0xc2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 auto-precharge underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_PRE_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_REG",
+ "PerPkg": "1",
+ "UMask": "0xc1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_UNDERFILL_ALL",
+ "PerPkg": "1",
+ "UMask": "0xcc",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0, all writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.WR",
+ "PerPkg": "1",
+ "UMask": "0xf0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 regular writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.WR_NONPRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xd0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 auto-precharge writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.WR_PRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xe0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1, all CAS operations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.ALL",
+ "PerPkg": "1",
+ "UMask": "0xff",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1, all reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD",
+ "PerPkg": "1",
+ "UMask": "0xcf",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_NON_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc3",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 auto-precharge reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_PRE_REG",
+ "PerPkg": "1",
+ "UMask": "0xc2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 auto-precharge underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_PRE_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_REG",
+ "PerPkg": "1",
+ "UMask": "0xc1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_UNDERFILL_ALL",
+ "PerPkg": "1",
+ "UMask": "0xcc",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1, all writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.WR",
+ "PerPkg": "1",
+ "UMask": "0xf0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 regular writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.WR_NONPRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xd0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 auto-precharge writes",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.WR_PRE",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xe0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM DCLK clock cycles while the event is enabled. DCLK is 1/4 of DRAM data rate.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_M_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "DRAM Clockticks",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Number of DRAM HCLK clock cycles while the event is enabled",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_M_HCLOCKTICKS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "DRAM Clockticks",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "PMMNT is sending REF* commands while being in specified Refresh rate",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x72",
+ "EventName": "UNC_M_MNTCMD_REFRATE.REFAB1X",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "PMMNT is sending REF* commands while being in specified Refresh rate",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x72",
+ "EventName": "UNC_M_MNTCMD_REFRATE.REFAB2X",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "PMMNT is sending REF* commands while being in specified Refresh rate",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x72",
+ "EventName": "UNC_M_MNTCMD_REFRATE.REFSB1X",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "PMMNT is sending REF* commands while being in specified Refresh rate",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x72",
+ "EventName": "UNC_M_MNTCMD_REFRATE.REFSB2X",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH0_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH0_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH1_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 temp readings forced 2x refresh",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA7",
+ "EventName": "UNC_M_MR4_2XREF_CYCLES.SCH1_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH0_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH0_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH1_DIMM0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles MR4 MRRs was triggered/running",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xA6",
+ "EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH1_DIMM1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK3",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK2",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x47",
+ "EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK3",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles a given rank is in Power Down Mode and all pages are closed",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x88",
+ "EventName": "UNC_M_POWER_CHANNEL_PPD_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM and throttle level is zero.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x89",
+ "EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM and throttle level is zero.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x89",
+ "EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.BW_SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "MR4 temp reading is throttling",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.MR4BLKEN",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "RAPL is throttling",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x46",
+ "EventName": "UNC_M_POWER_THROTTLE_CYCLES.RAPLBLK",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.ALL",
+ "PerPkg": "1",
+ "UMask": "0xff",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Precharge due to (?) : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.PGT",
+ "PerPkg": "1",
+ "UMask": "0xf8",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.RD",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.UFILL",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf4",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "DRAM Precharge commands. : Counts the number of DRAM Precharge commands sent on this channel.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x03",
+ "EventName": "UNC_M_PRE_COUNT.WR",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xf2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer inserts on subchannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x17",
+ "EventName": "UNC_M_RDB_INSERTS.SCH0",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer inserts on subchannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x17",
+ "EventName": "UNC_M_RDB_INSERTS.SCH1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer occupancy on subchannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1a",
+ "EventName": "UNC_M_RDB_OCCUPANCY_SCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read buffer occupancy on subchannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1b",
+ "EventName": "UNC_M_RDB_OCCUPANCY_SCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.PCH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x50",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue Allocations : Counts the number of allocations into the Read Pending Queue. This queue is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This includes both ISOCH and non-ISOCH requests.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.PCH1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xa0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH0_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH0_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH1_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read Pending Queue inserts for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x10",
+ "EventName": "UNC_M_RPQ_INSERTS.SCH1_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x80",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH0_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x81",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH0_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x82",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH1_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Read pending queue occupancy for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x83",
+ "EventName": "UNC_M_RPQ_OCCUPANCY_SCH1_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+    "BriefDescription": "subevent0 - # of cycles all ranks were in SR subevent1 - # of times all ranks went into SR subevent2 - # of times ps_sr_active asserted (SRE) subevent3 - # of times ps_sr_active deasserted (SRX) subevent4 - # of times PS->Refresh ps_sr_req asserted (SRE) subevent5 - # of times PS->Refresh ps_sr_req deasserted (SRX) subevent6 - # of cycles PSCtrlr FSM was in FATAL",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_M_SELF_REFRESH.ENTER_SUCCESS",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles all ranks were in SR",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x43",
+ "EventName": "UNC_M_SELF_REFRESH.ENTER_SUCCESS_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_M_THROTTLE_CRIT_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Critical level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8e",
+ "EventName": "UNC_M_THROTTLE_CRIT_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at High level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8d",
+ "EventName": "UNC_M_THROTTLE_HIGH_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at High level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8d",
+ "EventName": "UNC_M_THROTTLE_HIGH_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Normal level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8b",
+ "EventName": "UNC_M_THROTTLE_LOW_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Normal level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8b",
+ "EventName": "UNC_M_THROTTLE_LOW_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Mid level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8c",
+ "EventName": "UNC_M_THROTTLE_MID_CYCLES.SLOT0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "# of cycles Throttling at Mid level on specified DIMM",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x8c",
+ "EventName": "UNC_M_THROTTLE_MID_CYCLES.SLOT1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.PCH0",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0x50",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.PCH1",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "UMask": "0xa0",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH0_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x10",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH0_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH1_PCH0",
+ "PerPkg": "1",
+ "UMask": "0x40",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write Pending Queue inserts for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x22",
+ "EventName": "UNC_M_WPQ_INSERTS.SCH1_PCH1",
+ "PerPkg": "1",
+ "UMask": "0x80",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 0, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x84",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH0_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 0, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x85",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH0_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 1, pseudochannel 0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x86",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH1_PCH0",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "Write pending queue occupancy for subchannel 1, pseudochannel 1",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x87",
+ "EventName": "UNC_M_WPQ_OCCUPANCY_SCH1_PCH1",
+ "PerPkg": "1",
+ "Unit": "IMC"
+ }
+]
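
The RPQ and WPQ insert events above encode the DDR subchannel and pseudochannel in a single UMask bit each (0x10 = SCH0_PCH0, 0x20 = SCH0_PCH1, 0x40 = SCH1_PCH0, 0x80 = SCH1_PCH1). As a quick illustration, not part of the patch itself, the Python sketch below reads the file added above and lists those per-subchannel insert events with their decoded UMask; the path is taken from the diff and the script is assumed to run from the top of a kernel tree.

#!/usr/bin/env python3
# Illustrative sketch: list the per-subchannel/pseudochannel RPQ/WPQ
# insert events from the uncore-memory.json file added above.
import json

# Path as it appears in the diff; assumes a kernel tree as working directory.
PATH = "tools/perf/pmu-events/arch/x86/graniterapids/uncore-memory.json"

with open(PATH) as f:
    events = json.load(f)

for ev in events:
    name = ev.get("EventName", "")
    if name.startswith(("UNC_M_RPQ_INSERTS.SCH", "UNC_M_WPQ_INSERTS.SCH")):
        umask = int(ev["UMask"], 16)
        print(f"{name:32} EventCode={ev['EventCode']} UMask=0x{umask:02x}")
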
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/uncore-power.json b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-power.json
new file mode 100644
index 000000000000..9ea852ef190e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/uncore-power.json
@@ -0,0 +1,109 @@
+[
+ {
+ "BriefDescription": "PCU Clockticks",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x01",
+ "EventName": "UNC_P_CLOCKTICKS",
+ "PerPkg": "1",
+ "PublicDescription": "PCU Clockticks: The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Thermal Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x04",
+ "EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Thermal Strongest Upper Limit Cycles : Number of cycles any frequency is reduced due to a thermal limit. Count only if throttling is occurring.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Power Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Power Strongest Upper Limit Cycles : Counts the number of cycles when power is the upper limit on frequency.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Cycles spent changing Frequency",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x74",
+ "EventName": "UNC_P_FREQ_TRANS_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Cycles spent changing Frequency : Counts the number of cycles when the system is changing frequency. This can not be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Package C State Residency - C2E",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2b",
+ "EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Package C State Residency - C2E : Counts the number of cycles when the package was in C2E. This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert). Residency events do not include transition times.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Package C State Residency - C6",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x2d",
+ "EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Package C State Residency - C6 : Counts the number of cycles when the package was in C6. This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert). Residency events do not include transition times.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Number of cores in C0",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C0",
+ "PerPkg": "1",
+ "PublicDescription": "Number of cores in C0 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Number of cores in C3",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x36",
+ "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C3",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Number of cores in C3 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Number of cores in C6",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x37",
+ "EventName": "UNC_P_POWER_STATE_OCCUPANCY_CORES_C6",
+ "PerPkg": "1",
+ "PublicDescription": "Number of cores in C6 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "External Prochot",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x0a",
+ "EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "External Prochot : Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.",
+ "Unit": "PCU"
+ },
+ {
+ "BriefDescription": "Internal Prochot",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x09",
+ "EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
+ "Experimental": "1",
+ "PerPkg": "1",
+ "PublicDescription": "Internal Prochot : Counts the number of cycles that we are in Internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.",
+ "Unit": "PCU"
+ }
+]
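
Per the UNC_P_CLOCKTICKS description above, the PCU runs off a fixed 1 GHz clock, so a raw pclk count doubles as a wall-clock measure. The sketch below is illustrative only and relies on that stated 1 GHz figure: it converts a clocktick reading into seconds and turns a package C-state residency count (such as UNC_P_PKG_RESIDENCY_C6_CYCLES) into a residency fraction. The numbers in the example are made up.

# Illustrative only; assumes the fixed 1 GHz pclk stated in the
# UNC_P_CLOCKTICKS description above.
PCLK_HZ = 1_000_000_000

def pclk_to_seconds(clockticks: int) -> float:
    """Convert a UNC_P_CLOCKTICKS reading to elapsed wall time in seconds."""
    return clockticks / PCLK_HZ

def residency_fraction(residency_cycles: int, clockticks: int) -> float:
    """Fraction of pclk cycles spent in a package C-state, e.g.
    UNC_P_PKG_RESIDENCY_C6_CYCLES divided by UNC_P_CLOCKTICKS."""
    return residency_cycles / clockticks if clockticks else 0.0

# Example with made-up numbers: 2e9 pclk ticks is ~2 s of wall time,
# and 5e8 residency cycles over that window is 25% residency.
print(pclk_to_seconds(2_000_000_000))
print(residency_fraction(500_000_000, 2_000_000_000))
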
diff --git a/tools/perf/pmu-events/arch/x86/graniterapids/virtual-memory.json b/tools/perf/pmu-events/arch/x86/graniterapids/virtual-memory.json
index 8784c97b7534..609a9549cbf3 100644
--- a/tools/perf/pmu-events/arch/x86/graniterapids/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/graniterapids/virtual-memory.json
@@ -1,6 +1,26 @@
[
{
+ "BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.STLB_HIT",
+ "PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
+ "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a demand load.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -8,7 +28,63 @@
"UMask": "0xe"
},
{
+ "BriefDescription": "Page walks completed due to a demand data load to a 1G page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
+ "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data load to a 4K page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x12",
+ "EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.STLB_HIT",
+ "PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
+ "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a store.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
"BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -16,11 +92,94 @@
"UMask": "0xe"
},
{
+ "BriefDescription": "Page walks completed due to a demand data store to a 1G page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
+ "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Page walks completed due to a demand data store to a 4K page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x13",
+ "EventName": "DTLB_STORE_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.STLB_HIT",
+ "PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_ACTIVE",
+ "PublicDescription": "Counts cycles when at least one PMH (Page Miss Handler) is busy with a page walk for a code (instruction fetch) request.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
+ },
+ {
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
"SampleAfterValue": "100003",
"UMask": "0xe"
+ },
+ {
+ "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
+ "PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x4"
+ },
+ {
+ "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
+ "PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x11",
+ "EventName": "ITLB_MISSES.WALK_PENDING",
+ "PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x10"
}
]
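
In the TLB entries above, the all-page-size WALK_COMPLETED events use UMask 0xe, the union of the per-size bits (1G = 0x8, 2M/4M = 0x4, 4K = 0x2). The sketch below, illustrative and not part of the patch, prints that relationship for each event code in the file; it reports rather than asserts equality, since not every per-size breakdown is listed (for example, this hunk adds no ITLB 1G entry).

# Illustrative sketch: compare each *.WALK_COMPLETED UMask with the OR of
# its per-page-size sub-events, as seen in the entries above.
import json
from collections import defaultdict

# Path as it appears in the diff; assumes a kernel tree as working directory.
PATH = "tools/perf/pmu-events/arch/x86/graniterapids/virtual-memory.json"

with open(PATH) as f:
    events = json.load(f)

by_code = defaultdict(dict)
for ev in events:
    by_code[ev.get("EventCode", "")][ev.get("EventName", "")] = int(ev.get("UMask", "0"), 16)

for code, evs in by_code.items():
    for name, umask in evs.items():
        if name.endswith(".WALK_COMPLETED"):
            combined = 0
            for sub, sub_umask in evs.items():
                if sub.startswith(name + "_"):  # the _1G, _2M_4M, _4K variants
                    combined |= sub_umask
            print(f"{name}: UMask=0x{umask:x}, OR of listed sub-events=0x{combined:x}")
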
diff --git a/tools/perf/pmu-events/arch/x86/haswell/cache.json b/tools/perf/pmu-events/arch/x86/haswell/cache.json
index 0831f14b3cc6..29b408d036c2 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/cache.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "This event counts when new data lines are brought into the L1 Data cache, which cause other lines to be evicted from the cache.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles a demand request was blocked due to Fill Buffers unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "L1D miss outstanding duration in cycles",
+ "Counter": "2",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Increments the number of outstanding L1D misses every cycle. Set Cmask = 1 and Edge =1 to count occurrences.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -34,6 +38,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of times a request needed a FB entry but there was no entry available for it. That is the FB unavailability was dominant reason for blocking the request. A request includes cacheable/uncacheable demands that is load, store or SW prefetch. HWP are e.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.REQUEST_FB_FULL",
"SampleAfterValue": "2000003",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Not rejected writebacks that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_DEMAND_RQSTS.WB_HIT",
"PublicDescription": "Not rejected writebacks that hit L2 cache.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "This event counts the number of L2 cache lines brought into the L2 cache. Lines are filled into the L2 cache when there was an L2 miss.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "L2 cache lines in E state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.E",
"PublicDescription": "L2 cache lines in E state filling L2.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "L2 cache lines in I state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.I",
"PublicDescription": "L2 cache lines in I state filling L2.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "L2 cache lines in S state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.S",
"PublicDescription": "L2 cache lines in S state filling L2.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_CLEAN",
"PublicDescription": "Clean L2 cache lines evicted by demand.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_DIRTY",
"PublicDescription": "Dirty L2 cache lines evicted by demand.",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts all L2 code requests.",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Requests from L2 hardware prefetchers",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "Counts all L2 HW prefetcher requests.",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts all L2 store RFO requests.",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Number of instruction fetches that hit the L2 cache.",
@@ -164,6 +184,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Number of instruction fetches that missed the L2 cache.",
@@ -172,6 +193,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
@@ -181,6 +203,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
@@ -190,6 +213,7 @@
},
{
"BriefDescription": "L2 prefetch requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_HIT",
"PublicDescription": "Counts all L2 HW prefetcher requests that hit L2.",
@@ -198,6 +222,7 @@
},
{
"BriefDescription": "L2 prefetch requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_MISS",
"PublicDescription": "Counts all L2 HW prefetcher requests that missed L2.",
@@ -206,6 +231,7 @@
},
{
"BriefDescription": "All requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
@@ -215,6 +241,7 @@
},
{
"BriefDescription": "All L2 requests",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
@@ -224,6 +251,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the number of store RFO requests that hit the L2 cache.",
@@ -232,6 +260,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the number of store RFO requests that miss the L2 cache.",
@@ -240,6 +269,7 @@
},
{
"BriefDescription": "L2 or L3 HW prefetches that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.ALL_PF",
"PublicDescription": "Any MLC or L3 HW prefetch accessing L2, including rejects.",
@@ -248,6 +278,7 @@
},
{
"BriefDescription": "Transactions accessing L2 pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.ALL_REQUESTS",
"PublicDescription": "Transactions accessing L2 pipe.",
@@ -256,6 +287,7 @@
},
{
"BriefDescription": "L2 cache accesses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.CODE_RD",
"PublicDescription": "L2 cache accesses when fetching instructions.",
@@ -264,6 +296,7 @@
},
{
"BriefDescription": "Demand Data Read requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.DEMAND_DATA_RD",
"PublicDescription": "Demand data read requests that access L2 cache.",
@@ -272,6 +305,7 @@
},
{
"BriefDescription": "L1D writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.L1D_WB",
"PublicDescription": "L1D writebacks that access L2 cache.",
@@ -280,6 +314,7 @@
},
{
"BriefDescription": "L2 fill requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.L2_FILL",
"PublicDescription": "L2 fill requests that access L2 cache.",
@@ -288,6 +323,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "L2 writebacks that access L2 cache.",
@@ -296,6 +332,7 @@
},
{
"BriefDescription": "RFO requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.RFO",
"PublicDescription": "RFO requests that access L2 cache.",
@@ -304,6 +341,7 @@
},
{
"BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
"PublicDescription": "Cycles in which the L1D is locked.",
@@ -312,6 +350,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "This event counts each cache miss condition for references to the last level cache.",
@@ -320,6 +359,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "This event counts requests originating from the core that reference a cache line in the last level cache.",
@@ -328,6 +368,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -338,6 +379,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were HitM responses from shared L3.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -348,6 +390,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -358,6 +401,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -368,6 +412,7 @@
},
{
"BriefDescription": "Data from local DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM30",
"EventCode": "0xD3",
@@ -379,6 +424,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSM30",
"EventCode": "0xD1",
@@ -389,6 +435,7 @@
},
{
"BriefDescription": "Retired load uops with L1 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD1",
@@ -399,6 +446,7 @@
},
{
"BriefDescription": "Retired load uops misses in L1 cache as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSM30",
"EventCode": "0xD1",
@@ -410,6 +458,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD76, HSD29, HSM30",
"EventCode": "0xD1",
@@ -420,6 +469,7 @@
},
{
"BriefDescription": "Miss in mid-level (L2) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD1",
@@ -431,6 +481,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were data hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD1",
@@ -442,6 +493,7 @@
},
{
"BriefDescription": "Miss in last-level (L3) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD1",
@@ -453,6 +505,7 @@
},
{
"BriefDescription": "Retired load uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -464,6 +517,7 @@
},
{
"BriefDescription": "Retired store uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -475,6 +529,7 @@
},
{
"BriefDescription": "Retired load uops with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD76, HSD29, HSM30",
"EventCode": "0xD0",
@@ -485,6 +540,7 @@
},
{
"BriefDescription": "Retired load uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -495,6 +551,7 @@
},
{
"BriefDescription": "Retired store uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -505,6 +562,7 @@
},
{
"BriefDescription": "Retired load uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -515,6 +573,7 @@
},
{
"BriefDescription": "Retired store uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -525,6 +584,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Data read requests sent to uncore (demand and prefetch).",
@@ -533,6 +593,7 @@
},
{
"BriefDescription": "Cacheable and noncacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "Demand code read requests sent to uncore.",
@@ -541,6 +602,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
@@ -550,6 +612,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Demand RFO read requests sent to uncore, including regular RFOs, locks, ItoM.",
@@ -558,6 +621,7 @@
},
{
"BriefDescription": "Offcore requests buffer cannot take more entries for this thread core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
"SampleAfterValue": "2000003",
@@ -565,6 +629,7 @@
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
@@ -574,6 +639,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
@@ -583,6 +649,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD78, HSD62, HSD61, HSM63, HSM80",
"EventCode": "0x60",
@@ -592,6 +659,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
@@ -601,6 +669,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
@@ -610,6 +679,7 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSD62, HSD61, HSM63, HSM80",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
@@ -619,6 +689,7 @@
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"Errata": "HSD78, HSD62, HSD61, HSM63, HSM80",
"EventCode": "0x60",
@@ -628,6 +699,7 @@
},
{
"BriefDescription": "Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
@@ -637,6 +709,7 @@
},
{
"BriefDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE",
"SampleAfterValue": "100003",
@@ -644,6 +717,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -653,6 +727,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -662,6 +737,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -671,6 +747,7 @@
},
{
"BriefDescription": "hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -680,6 +757,7 @@
},
{
"BriefDescription": "hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -689,6 +767,7 @@
},
{
"BriefDescription": "Counts all requests hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_REQUESTS.L3_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -698,6 +777,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -707,6 +787,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -716,6 +797,7 @@
},
{
"BriefDescription": "Counts all demand code reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -725,6 +807,7 @@
},
{
"BriefDescription": "Counts all demand code reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -734,6 +817,7 @@
},
{
"BriefDescription": "Counts demand data reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -743,6 +827,7 @@
},
{
"BriefDescription": "Counts demand data reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -752,6 +837,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -761,6 +847,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -770,6 +857,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -779,6 +867,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -788,6 +877,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -797,6 +887,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -806,6 +897,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -815,6 +907,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -824,6 +917,7 @@
},
{
"BriefDescription": "Split locks in SQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"SampleAfterValue": "100003",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/counter.json b/tools/perf/pmu-events/arch/x86/haswell/counter.json
new file mode 100644
index 000000000000..1be6522e2bbc
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/haswell/counter.json
@@ -0,0 +1,22 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "ARB",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "cbox_0",
+ "CountersNumFixed": 1,
+ "CountersNumGeneric": "0"
+ }
+] \ No newline at end of file
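
The counter.json file above records, for each PMU unit, how many fixed and generic counters it exposes; the values are strings, matching the quoting convention used throughout these event files. A small illustrative loader, assuming the path from the diff, sums them per unit:

# Illustrative only: summarize per-unit counter counts from the
# counter.json added above.
import json

PATH = "tools/perf/pmu-events/arch/x86/haswell/counter.json"  # path from the diff

with open(PATH) as f:
    units = json.load(f)

for u in units:
    fixed = int(u["CountersNumFixed"])
    generic = int(u["CountersNumGeneric"])
    print(f"{u['Unit']:8} fixed={fixed} generic={generic} total={fixed + generic}")
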
diff --git a/tools/perf/pmu-events/arch/x86/haswell/floating-point.json b/tools/perf/pmu-events/arch/x86/haswell/floating-point.json
index 8fcc10f74ad9..a0b917306887 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Approximate counts of AVX & AVX2 256-bit instructions, including non-arithmetic instructions, loads, and stores. May count non-AVX instructions that employ 256-bit operations, including (but not necessarily limited to) rep string instructions that use 256-bit loads and stores for optimized performance, XSAVE* and XRSTOR*, and operations that transition the x87 FPU data registers between x87 and MMX.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "AVX_INSTS.ALL",
"PublicDescription": "Note that a whole rep string only counts AVX_INST.ALL once.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles with any input/output SSE or FP assist",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.ANY",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to input values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_INPUT",
"PublicDescription": "Number of SIMD FP assists due to input values.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to Output values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_OUTPUT",
"PublicDescription": "Number of SIMD FP assists due to output values.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Number of X87 assists due to input value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_INPUT",
"PublicDescription": "Number of X87 FP assists due to input values.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of X87 assists due to output value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_OUTPUT",
"PublicDescription": "Number of X87 FP assists due to output values.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_ELIMINATED",
"PublicDescription": "Number of SIMD move elimination candidate uops that were eliminated.",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_NOT_ELIMINATED",
"PublicDescription": "Number of SIMD move elimination candidate uops that were not eliminated.",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Number of transitions from AVX-256 to legacy SSE when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "HSD56, HSM57",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.AVX_TO_SSE",
@@ -74,6 +83,7 @@
},
{
"BriefDescription": "Number of transitions from SSE to AVX-256 when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "HSD56, HSM57",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.SSE_TO_AVX",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/frontend.json b/tools/perf/pmu-events/arch/x86/haswell/frontend.json
index 73d6d681dfa7..a9f81fd17925 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Number of front end re-steers due to BPU misprediction.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"SampleAfterValue": "2000003",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"SampleAfterValue": "2000003",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFDATA_STALL",
"SampleAfterValue": "2000003",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFETCH_STALL",
"SampleAfterValue": "2000003",
@@ -37,6 +42,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Misses. Includes Uncacheable accesses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "This event counts Instruction Cache (ICACHE) misses.",
@@ -45,6 +51,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS",
@@ -54,6 +61,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS",
@@ -63,6 +71,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Increment each cycle. # of uops delivered to IDQ from DSB path. Set Cmask = 1 to count cycles.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Instruction Decode Queue (IDQ) empty cycles",
+ "Counter": "0,1,2,3",
"Errata": "HSD135",
"EventCode": "0x79",
"EventName": "IDQ.EMPTY",
@@ -106,6 +119,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_ALL_UOPS",
"PublicDescription": "Number of uops delivered to IDQ from any path.",
@@ -114,6 +128,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ from MITE path. Set Cmask = 1 to count cycles.",
@@ -130,6 +146,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_CYCLES",
@@ -147,6 +165,7 @@
},
{
"BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ when MS_busy by DSB. Set Cmask = 1 to count cycles. Add Edge=1 to count # of delivery.",
@@ -164,6 +184,7 @@
},
{
"BriefDescription": "Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_MITE_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ when MS_busy by MITE. Set Cmask = 1 to count cycles.",
@@ -172,6 +193,7 @@
},
{
"BriefDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -181,6 +203,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "This event counts uops delivered by the Front-end with the assistance of the microcode sequencer. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance.",
@@ -189,6 +212,7 @@
},
{
"BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"Errata": "HSD135",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
@@ -198,6 +222,7 @@
},
{
"BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -208,6 +233,7 @@
},
{
"BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -218,6 +244,7 @@
},
{
"BriefDescription": "Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -227,6 +254,7 @@
},
{
"BriefDescription": "Cycles with less than 2 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -236,6 +264,7 @@
},
{
"BriefDescription": "Cycles with less than 3 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD135",
"EventCode": "0x9C",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json b/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json
index 79d89c263677..aebd82ced1cf 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json
@@ -1,49 +1,49 @@
[
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per core",
- "MetricExpr": "cstate_core@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
@@ -80,17 +80,16 @@
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slots",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY",
@@ -98,9 +97,8 @@
},
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1",
@@ -121,7 +119,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -139,7 +137,6 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_cisc",
@@ -151,7 +148,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS)))) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -172,7 +169,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -181,10 +178,10 @@
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "10 * ARITH.DIVIDER_UOPS / tma_info_core_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_UOPS",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
"ScaleUnit": "100%"
},
{
@@ -202,7 +199,7 @@
"MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -218,7 +215,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store",
@@ -227,7 +224,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load",
@@ -236,7 +233,7 @@
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
"MetricExpr": "60 * OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
@@ -246,7 +243,7 @@
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.REQUEST_FB_FULL\\,cmask\\=1@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
"PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
@@ -257,7 +254,7 @@
"MetricExpr": "tma_frontend_bound - tma_fetch_latency",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
@@ -275,7 +272,7 @@
{
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots",
- "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1",
@@ -289,20 +286,20 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses.",
"MetricExpr": "ICACHE.IFDATA_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * cpu@BR_MISP_EXEC.ALL_BRANCHES\\,umask\\=0xE4@)",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
@@ -327,7 +324,7 @@
"MetricName": "tma_info_core_coreipc"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
"MetricExpr": "(UOPS_EXECUTED.CORE / 2 / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@) if #SMT_on else UOPS_EXECUTED.CORE / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@))",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
@@ -347,6 +344,12 @@
"MetricName": "tma_info_frontend_ipunknown_branch"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -388,105 +391,105 @@
"MetricThreshold": "tma_info_inst_mix_ipstore < 8"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 9",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "0",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
"BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
@@ -502,21 +505,27 @@
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / tma_info_system_time / 1e3",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
"PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
@@ -541,17 +550,17 @@
"MetricThreshold": "tma_info_system_kernel_utilization > 0.05"
},
{
- "BriefDescription": "Average number of parallel requests to external memory",
- "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / UNC_ARB_TRK_OCCUPANCY.CYCLES_WITH_ANY_REQUEST",
- "MetricGroup": "Mem;SoC",
- "MetricName": "tma_info_system_mem_parallel_requests",
- "PublicDescription": "Average number of parallel requests to external memory. Accounts for all requests"
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
},
{
- "BriefDescription": "Average latency of all requests to external memory (in Uncore cycles)",
- "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / UNC_ARB_TRK_REQUESTS.ALL",
- "MetricGroup": "Mem;SoC",
- "MetricName": "tma_info_system_mem_request_latency"
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "power@energy\\-pkg@ * 61 / (tma_info_system_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
},
{
"BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
@@ -566,6 +575,13 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
@@ -603,7 +619,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
@@ -612,25 +628,25 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY.STALLS_L2_PENDING) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS",
@@ -640,20 +656,20 @@
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricConstraint": "NO_GROUP_EVENTS_SMT",
"MetricExpr": "MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.L3_MISS))) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
"ScaleUnit": "100%"
},
{
@@ -672,12 +688,11 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_load_op_utilization",
@@ -689,7 +704,7 @@
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
@@ -699,7 +714,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -707,27 +722,27 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=6@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound",
+ "MetricExpr": "(min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound",
"MetricGroup": "Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
"MetricName": "tma_memory_bound",
"MetricThreshold": "tma_memory_bound > 0.2 & tma_backend_bound > 0.2",
@@ -749,7 +764,7 @@
"MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.",
"ScaleUnit": "100%"
},
@@ -768,7 +783,7 @@
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_0",
"MetricThreshold": "tma_port_0 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED_PORT.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -777,7 +792,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_1",
"MetricThreshold": "tma_port_1 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -813,16 +828,16 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_5",
"MetricThreshold": "tma_port_5 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -837,7 +852,7 @@
{
"BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING)) / tma_info_thread_clks",
+ "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING)) / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_ports_utilization",
"MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
@@ -846,7 +861,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -855,7 +870,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_1",
"MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -864,25 +879,25 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_2",
"MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).",
"MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / tma_info_core_core_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1",
@@ -895,7 +910,7 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
@@ -911,7 +926,7 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
@@ -939,7 +954,7 @@
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 9 * (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) + (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -952,14 +967,5 @@
"MetricName": "tma_store_op_utilization",
"MetricThreshold": "tma_store_op_utilization > 0.6",
"ScaleUnit": "100%"
- },
- {
- "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
- "MetricExpr": "INST_RETIRED.X87 * tma_info_thread_uoppi / UOPS_RETIRED.RETIRE_SLOTS",
- "MetricGroup": "Compute;TopdownL4;tma_L4_group;tma_fp_arith_group",
- "MetricName": "tma_x87_use",
- "MetricThreshold": "tma_x87_use > 0.1",
- "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
- "ScaleUnit": "100%"
}
]
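
The hunks above retag several Haswell TMA metrics with the new bottleneck-view groups (BvCB, BvUW, BvMB, BvML) and retune a few thresholds (for example tma_ports_utilized_3m and tma_split_loads). A minimal sketch of how a consumer might list the metrics that now carry one of these groups is shown below; the file path is an assumption for illustration and should be replaced with the metrics JSON actually touched by these hunks.

    # Sketch only: filter TMA metrics by one of the new bottleneck-view groups.
    import json

    METRICS_JSON = "tools/perf/pmu-events/arch/x86/haswell/hsw-metrics.json"  # assumed path

    def metrics_in_group(path, group):
        """Return (name, threshold) pairs for metrics tagged with `group`."""
        with open(path) as f:
            entries = json.load(f)
        out = []
        for m in entries:
            groups = m.get("MetricGroup", "").split(";")
            if group in groups:
                out.append((m["MetricName"], m.get("MetricThreshold", "")))
        return out

    if __name__ == "__main__":
        for name, threshold in metrics_in_group(METRICS_JSON, "BvMB"):
            print(f"{name}\t{threshold}")
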
diff --git a/tools/perf/pmu-events/arch/x86/haswell/memory.json b/tools/perf/pmu-events/arch/x86/haswell/memory.json
index 2fc25e22a42a..a76f1862e5ea 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/memory.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of times an HLE execution aborted due to any reasons (multiple categories may count as one).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED",
"PEBS": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC1",
"SampleAfterValue": "2000003",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to uncommon conditions.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC2",
"SampleAfterValue": "2000003",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC3",
"SampleAfterValue": "2000003",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to incompatible memory type.",
+ "Counter": "0,1,2,3",
"Errata": "HSD65",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC4",
@@ -38,6 +43,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts).",
@@ -46,6 +52,7 @@
},
{
"BriefDescription": "Number of times an HLE execution successfully committed.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.COMMIT",
"SampleAfterValue": "2000003",
@@ -53,6 +60,7 @@
},
{
"BriefDescription": "Number of times an HLE execution started.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.START",
"SampleAfterValue": "2000003",
@@ -60,6 +68,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
@@ -68,6 +77,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 128.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -80,6 +90,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 16.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -92,6 +103,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 256.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -104,6 +116,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 32.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 4.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -128,6 +142,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 512.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -140,6 +155,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 64.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -152,6 +168,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 8.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -164,6 +181,7 @@
},
{
"BriefDescription": "Speculative cache line split load uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.LOADS",
"PublicDescription": "Speculative cache-line split load uops dispatched to L1D.",
@@ -172,6 +190,7 @@
},
{
"BriefDescription": "Speculative cache line split STA uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.STORES",
"PublicDescription": "Speculative cache-line split store-address uops dispatched to L1D.",
@@ -180,6 +199,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -189,6 +209,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.L3_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -198,6 +219,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -207,6 +229,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -216,6 +239,7 @@
},
{
"BriefDescription": "miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -225,6 +249,7 @@
},
{
"BriefDescription": "miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.L3_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -234,6 +259,7 @@
},
{
"BriefDescription": "Counts all requests miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_REQUESTS.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -243,6 +269,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -252,6 +279,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -261,6 +289,7 @@
},
{
"BriefDescription": "Counts all demand code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -270,6 +299,7 @@
},
{
"BriefDescription": "Counts all demand code reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -279,6 +309,7 @@
},
{
"BriefDescription": "Counts demand data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -288,6 +319,7 @@
},
{
"BriefDescription": "Counts demand data reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -297,6 +329,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -306,6 +339,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -315,6 +349,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -324,6 +359,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -333,6 +369,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -342,6 +379,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_CODE_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -351,6 +389,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -360,6 +399,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -369,6 +409,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to any reasons (multiple categories may count as one).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
"PEBS": "1",
@@ -377,6 +418,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
@@ -385,6 +427,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC2",
"SampleAfterValue": "2000003",
@@ -392,6 +435,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC3",
"SampleAfterValue": "2000003",
@@ -399,6 +443,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type.",
+ "Counter": "0,1,2,3",
"Errata": "HSD65",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC4",
@@ -407,6 +452,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).",
@@ -415,6 +461,7 @@
},
{
"BriefDescription": "Number of times an RTM execution successfully committed.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"SampleAfterValue": "2000003",
@@ -422,6 +469,7 @@
},
{
"BriefDescription": "Number of times an RTM execution started.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.START",
"SampleAfterValue": "2000003",
@@ -429,6 +477,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC1",
"SampleAfterValue": "2000003",
@@ -436,6 +485,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"SampleAfterValue": "2000003",
@@ -443,6 +493,7 @@
},
{
"BriefDescription": "Counts the number of times an instruction execution caused the transactional nest count supported to be exceeded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"SampleAfterValue": "2000003",
@@ -450,6 +501,7 @@
},
{
"BriefDescription": "Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC4",
"SampleAfterValue": "2000003",
@@ -457,6 +509,7 @@
},
{
"BriefDescription": "Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC5",
"SampleAfterValue": "2000003",
@@ -464,6 +517,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data capacity limitation for transactional writes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"SampleAfterValue": "2000003",
@@ -471,6 +525,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"SampleAfterValue": "2000003",
@@ -478,6 +533,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to XRELEASE lock not satisfying the address and value requirements in the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH",
"SampleAfterValue": "2000003",
@@ -485,6 +541,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY",
"SampleAfterValue": "2000003",
@@ -492,6 +549,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT",
"SampleAfterValue": "2000003",
@@ -499,6 +557,7 @@
},
{
"BriefDescription": "Number of times a HLE transactional region aborted due to a non XRELEASE prefixed instruction writing to an elided lock in the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK",
"SampleAfterValue": "2000003",
@@ -506,6 +565,7 @@
},
{
"BriefDescription": "Number of times HLE lock could not be elided due to ElisionBufferAvailable being zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL",
"SampleAfterValue": "2000003",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/metricgroups.json b/tools/perf/pmu-events/arch/x86/haswell/metricgroups.json
index f6a0258e3241..0863375bdead 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/metricgroups.json
@@ -2,9 +2,21 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -23,8 +35,11 @@
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -38,6 +53,7 @@
"Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -65,6 +81,7 @@
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -90,10 +107,12 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/other.json b/tools/perf/pmu-events/arch/x86/haswell/other.json
index 2395ebf112db..7d8769ef6d04 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/other.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Unhalted core cycles when the thread is in ring 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING0",
"PublicDescription": "Unhalted core cycles when the thread is in ring 0.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of intervals between processor halts while thread is in ring 0.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5C",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Unhalted core cycles when thread is in rings 1, 2, or 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING123",
"PublicDescription": "Unhalted core cycles when the thread is not in ring 0.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Cycles when L1 and L2 are locked due to UC or split lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION",
"PublicDescription": "Cycles in which the L1D and L2 are locked, due to a UC lock or split lock.",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/pipeline.json b/tools/perf/pmu-events/arch/x86/haswell/pipeline.json
index 540f4372623c..c00301fdb3d7 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Any uop executed by the Divider. (This includes all divide uops, sqrt, ...)",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "ARITH.DIVIDER_UOPS",
"SampleAfterValue": "2000003",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Speculative and retired branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_BRANCHES",
"PublicDescription": "Counts all near executed branches (not necessarily retired).",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Speculative and retired macro-conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Speculative and retired macro-unconditional branches excluding calls and indirects.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_JMP",
"SampleAfterValue": "200003",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Speculative and retired direct near calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -37,6 +42,7 @@
},
{
"BriefDescription": "Speculative and retired indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -44,6 +50,7 @@
},
{
"BriefDescription": "Speculative and retired indirect return branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN",
"SampleAfterValue": "200003",
@@ -51,6 +58,7 @@
},
{
"BriefDescription": "Not taken macro-conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.NONTAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -58,6 +66,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -65,6 +74,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branch instructions excluding calls and indirects.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_JUMP",
"SampleAfterValue": "200003",
@@ -72,6 +82,7 @@
},
{
"BriefDescription": "Taken speculative and retired direct near calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -79,6 +90,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -86,6 +98,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -93,6 +106,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches with return mnemonic.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN",
"SampleAfterValue": "200003",
@@ -100,6 +114,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PublicDescription": "Branch instructions at retirement.",
@@ -107,6 +122,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -115,6 +131,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -124,6 +141,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
"PublicDescription": "Number of far branches retired.",
@@ -132,6 +150,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -140,6 +159,7 @@
},
{
"BriefDescription": "Direct and indirect macro near call instructions retired (captured in ring 3).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL_R3",
"PEBS": "1",
@@ -148,6 +168,7 @@
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
"PEBS": "1",
@@ -157,6 +178,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -166,6 +188,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NOT_TAKEN",
"PublicDescription": "Counts the number of not taken branch instructions retired.",
@@ -174,6 +197,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_BRANCHES",
"PublicDescription": "Counts all near executed branches (not necessarily retired).",
@@ -182,6 +206,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -189,6 +214,7 @@
},
{
"BriefDescription": "Mispredicted indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -196,6 +222,7 @@
},
{
"BriefDescription": "Speculative mispredicted indirect branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.INDIRECT",
"PublicDescription": "Counts speculatively miss-predicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded).",
@@ -204,6 +231,7 @@
},
{
"BriefDescription": "Not taken speculative and retired mispredicted macro conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.NONTAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -211,6 +239,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted macro conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -218,6 +247,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -225,6 +255,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -232,6 +263,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches with return mnemonic.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_RETURN_NEAR",
"SampleAfterValue": "200003",
@@ -239,6 +271,7 @@
},
{
"BriefDescription": "All mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PublicDescription": "Mispredicted branch instructions at retirement.",
@@ -246,6 +279,7 @@
},
{
"BriefDescription": "Mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -255,6 +289,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -263,6 +298,7 @@
},
{
"BriefDescription": "number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -272,6 +308,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -279,6 +316,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK",
"PublicDescription": "Increments at the frequency of XCLK (100 MHz) when not halted.",
@@ -288,6 +326,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY",
"PublicDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
@@ -296,6 +335,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -303,6 +343,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state.",
"SampleAfterValue": "2000003",
@@ -310,6 +351,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Reference cycles when the thread is unhalted. (counts at 100 MHz rate)",
@@ -319,6 +361,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY",
"PublicDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
@@ -327,6 +370,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "This event counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.",
"SampleAfterValue": "2000003",
@@ -335,12 +379,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD_ANY",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "Counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.",
@@ -349,12 +395,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Cycles with pending L1 cache miss loads.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_PENDING",
@@ -364,6 +412,7 @@
},
{
"BriefDescription": "Cycles with pending L2 cache miss loads.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD78, HSM63, HSM80",
"EventCode": "0xa3",
@@ -374,6 +423,7 @@
},
{
"BriefDescription": "Cycles with pending memory loads.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_LDM_PENDING",
@@ -383,6 +433,7 @@
},
{
"BriefDescription": "This event increments by 1 for every cycle where there was no execute for this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_NO_EXECUTE",
@@ -392,6 +443,7 @@
},
{
"BriefDescription": "Execution stalls due to L1 data cache misses",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_PENDING",
@@ -401,6 +453,7 @@
},
{
"BriefDescription": "Execution stalls due to L2 cache misses.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"Errata": "HSM63, HSM80",
"EventCode": "0xa3",
@@ -411,6 +464,7 @@
},
{
"BriefDescription": "Execution stalls due to memory subsystem.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_LDM_PENDING",
@@ -420,6 +474,7 @@
},
{
"BriefDescription": "Stall cycles because IQ is full",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.IQ_FULL",
"PublicDescription": "Stall cycles due to IQ is full.",
@@ -428,6 +483,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "This event counts cycles where the decoder is stalled on an instruction with a length changing prefix (LCP).",
@@ -436,6 +492,7 @@
},
{
"BriefDescription": "Instructions retired from execution.",
+ "Counter": "Fixed counter 0",
"Errata": "HSD140, HSD143",
"EventName": "INST_RETIRED.ANY",
"PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. INST_RETIRED.ANY is counted by a designated fixed counter, leaving the programmable counters available for other events. Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
@@ -444,6 +501,7 @@
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3",
"Errata": "HSD11, HSD140",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
@@ -452,6 +510,7 @@
},
{
"BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
"Errata": "HSD140",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
@@ -462,6 +521,7 @@
},
{
"BriefDescription": "FP operations retired. X87 FP operations that have no exceptions: Counts also flows that have several X87 or flows that use X87 uops in the exception handling.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.X87",
"PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts FP operations retired. For X87 FP operations that have no exceptions counting also includes flows that have several X87, or flows that use X87 uops in the exception handling.",
@@ -470,6 +530,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
@@ -480,6 +541,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES_ANY",
@@ -489,6 +551,7 @@
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -497,6 +560,7 @@
},
{
"BriefDescription": "loads blocked by overlapping with store buffer that cannot be forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load. The most common reason why store forwarding would be blocked is when a load's address range overlaps with a preceding smaller uncompleted store. The penalty for blocked store forwarding is that the load must wait for the store to write its value to the cache before it can be issued.",
@@ -505,6 +569,7 @@
},
{
"BriefDescription": "False dependencies in MOB due to partial compare on address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "Aliasing occurs when a load is issued after a store and their memory addresses are offset by 4K. This event counts the number of loads that aliased with a preceding store, resulting in an extended address check in the pipeline which can have a performance impact.",
@@ -513,6 +578,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for hardware prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PRE.HW_PF",
"PublicDescription": "Non-SW-prefetch load dispatches that hit fill buffer allocated for H/W prefetch.",
@@ -521,6 +587,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for software prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PRE.SW_PF",
"PublicDescription": "Non-SW-prefetch load dispatches that hit fill buffer allocated for S/W prefetch.",
@@ -529,6 +596,7 @@
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_4_UOPS",
@@ -537,6 +605,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -545,6 +614,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xa8",
"EventName": "LSD.UOPS",
"PublicDescription": "Number of uops delivered by the LSD.",
@@ -553,6 +623,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xC3",
@@ -562,6 +633,7 @@
},
{
"BriefDescription": "Cycles there was a Nuke. Account for both thread-specific and All Thread Nukes.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.CYCLES",
"SampleAfterValue": "2000003",
@@ -569,6 +641,7 @@
},
{
"BriefDescription": "This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MASKMOV",
"SampleAfterValue": "100003",
@@ -576,6 +649,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "This event is incremented when self-modifying code (SMC) is detected, which causes a machine clear. Machine clears can have a significant performance impact if they are happening frequently.",
@@ -584,6 +658,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_ELIMINATED",
"PublicDescription": "Number of integer move elimination candidate uops that were eliminated.",
@@ -592,6 +667,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_NOT_ELIMINATED",
"PublicDescription": "Number of integer move elimination candidate uops that were not eliminated.",
@@ -600,6 +676,7 @@
},
{
"BriefDescription": "Number of times any microcode assist is invoked by HW upon uop writeback.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.ANY_WB_ASSIST",
"PublicDescription": "Number of microcode assists invoked by HW upon uop writeback.",
@@ -608,6 +685,7 @@
},
{
"BriefDescription": "Resource-related stall cycles",
+ "Counter": "0,1,2,3",
"Errata": "HSD135",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ANY",
@@ -617,6 +695,7 @@
},
{
"BriefDescription": "Cycles stalled due to re-order buffer full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ROB",
"SampleAfterValue": "2000003",
@@ -624,6 +703,7 @@
},
{
"BriefDescription": "Cycles stalled due to no eligible RS entry available.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.RS",
"SampleAfterValue": "2000003",
@@ -631,6 +711,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "This event counts cycles during which no instructions were allocated because no Store Buffers (SB) were available.",
@@ -639,6 +720,7 @@
},
{
"BriefDescription": "Count cases of saving new LBR",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.LBR_INSERTS",
"PublicDescription": "Count cases of saving new LBR records by hardware.",
@@ -647,6 +729,7 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
"PublicDescription": "This event counts cycles when the Reservation Station ( RS ) is empty for the thread. The RS is a structure that buffers allocated micro-ops from the Front-end. If there are many cycles when the RS is empty, it may represent an underflow of instructions delivered from the Front-end.",
@@ -655,6 +738,7 @@
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -665,6 +749,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0",
"SampleAfterValue": "2000003",
@@ -672,6 +757,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1",
"SampleAfterValue": "2000003",
@@ -679,6 +765,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2",
"SampleAfterValue": "2000003",
@@ -686,6 +773,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3",
"SampleAfterValue": "2000003",
@@ -693,6 +781,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4",
"SampleAfterValue": "2000003",
@@ -700,6 +789,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5",
"SampleAfterValue": "2000003",
@@ -707,6 +797,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_6",
"SampleAfterValue": "2000003",
@@ -714,6 +805,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_7",
"SampleAfterValue": "2000003",
@@ -721,6 +813,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3",
"Errata": "HSD30, HSM31",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
@@ -730,6 +823,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -739,6 +833,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -748,6 +843,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -757,6 +853,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -766,6 +863,7 @@
},
{
"BriefDescription": "Cycles with no micro-ops executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE",
@@ -775,6 +873,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -785,6 +884,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -795,6 +895,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -805,6 +906,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -814,6 +916,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -824,6 +927,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0",
"PublicDescription": "Cycles which a uop is dispatched on port 0 in this thread.",
@@ -833,6 +937,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0_CORE",
"SampleAfterValue": "2000003",
@@ -840,6 +945,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1",
"PublicDescription": "Cycles which a uop is dispatched on port 1 in this thread.",
@@ -849,6 +955,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 1.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1_CORE",
"SampleAfterValue": "2000003",
@@ -856,6 +963,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2",
"PublicDescription": "Cycles which a uop is dispatched on port 2 in this thread.",
@@ -865,6 +973,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2_CORE",
"SampleAfterValue": "2000003",
@@ -872,6 +981,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3",
"PublicDescription": "Cycles which a uop is dispatched on port 3 in this thread.",
@@ -881,6 +991,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3_CORE",
"SampleAfterValue": "2000003",
@@ -888,6 +999,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4",
"PublicDescription": "Cycles which a uop is dispatched on port 4 in this thread.",
@@ -897,6 +1009,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 4.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4_CORE",
"SampleAfterValue": "2000003",
@@ -904,6 +1017,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5",
"PublicDescription": "Cycles which a uop is dispatched on port 5 in this thread.",
@@ -913,6 +1027,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 5.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5_CORE",
"SampleAfterValue": "2000003",
@@ -920,6 +1035,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6",
"PublicDescription": "Cycles which a uop is dispatched on port 6 in this thread.",
@@ -929,6 +1045,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 6.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6_CORE",
"SampleAfterValue": "2000003",
@@ -936,6 +1053,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7",
"PublicDescription": "Cycles which a uop is dispatched on port 7 in this thread.",
@@ -945,6 +1063,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 7.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7_CORE",
"SampleAfterValue": "2000003",
@@ -952,6 +1071,7 @@
},
{
"BriefDescription": "Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "This event counts the number of uops issued by the Front-end of the pipeline to the Back-end. This event is counted at the allocation stage and will count both retired and non-retired uops.",
@@ -961,6 +1081,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for all threads.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.CORE_STALL_CYCLES",
@@ -970,6 +1091,7 @@
},
{
"BriefDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive; added by GSR u-arch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.FLAGS_MERGE",
"PublicDescription": "Number of flags-merge uops allocated. Such uops add delay.",
@@ -978,6 +1100,7 @@
},
{
"BriefDescription": "Number of Multiply packed/scalar single precision uops allocated",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SINGLE_MUL",
"PublicDescription": "Number of multiply packed/scalar single precision uops allocated.",
@@ -986,6 +1109,7 @@
},
{
"BriefDescription": "Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SLOW_LEA",
"PublicDescription": "Number of slow LEA or similar uops allocated. Such uop has 3 sources (for example, 2 sources + immediate) regardless of whether it is a result of LEA instruction or not.",
@@ -994,6 +1118,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -1003,6 +1128,7 @@
},
{
"BriefDescription": "Actually retired uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ALL",
"PEBS": "1",
@@ -1013,6 +1139,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.CORE_STALL_CYCLES",
@@ -1022,6 +1149,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.RETIRE_SLOTS",
"PEBS": "1",
@@ -1031,6 +1159,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -1040,6 +1169,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "16",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/uncore-cache.json b/tools/perf/pmu-events/arch/x86/haswell/uncore-cache.json
index be9a3ed1a940..fb116637e83e 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/uncore-cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L3 Lookup any request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_ES",
"PerPkg": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_I",
"PerPkg": "1",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_M",
"PerPkg": "1",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_MESI",
"PerPkg": "1",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_ES",
"PerPkg": "1",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_I",
"PerPkg": "1",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_M",
"PerPkg": "1",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_MESI",
"PerPkg": "1",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_ES",
"PerPkg": "1",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_I",
"PerPkg": "1",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_M",
"PerPkg": "1",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in any MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_MESI",
"PerPkg": "1",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_ES",
"PerPkg": "1",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_I",
"PerPkg": "1",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_M",
"PerPkg": "1",
@@ -121,6 +136,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_MESI",
"PerPkg": "1",
@@ -129,6 +145,7 @@
},
{
"BriefDescription": "A cross-core snoop resulted from L3 Eviction which hits a modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HITM_EVICTION",
"PerPkg": "1",
@@ -137,6 +154,7 @@
},
{
"BriefDescription": "An external snoop hits a modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HITM_EXTERNAL",
"PerPkg": "1",
@@ -145,6 +163,7 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which hits a modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HITM_XCORE",
"PerPkg": "1",
@@ -153,6 +172,7 @@
},
{
"BriefDescription": "A cross-core snoop resulted from L3 Eviction which hits a non-modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HIT_EVICTION",
"PerPkg": "1",
@@ -161,6 +181,7 @@
},
{
"BriefDescription": "An external snoop hits a non-modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HIT_EXTERNAL",
"PerPkg": "1",
@@ -169,6 +190,7 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which hits a non-modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HIT_XCORE",
"PerPkg": "1",
@@ -177,6 +199,7 @@
},
{
"BriefDescription": "A cross-core snoop resulted from L3 Eviction which misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_EVICTION",
"PerPkg": "1",
@@ -185,6 +208,7 @@
},
{
"BriefDescription": "An external snoop misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_EXTERNAL",
"PerPkg": "1",
@@ -193,10 +217,19 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_XCORE",
"PerPkg": "1",
"UMask": "0x41",
"Unit": "CBOX"
+ },
+ {
+ "BriefDescription": "This 48-bit fixed counter counts the UCLK cycles.",
+ "Counter": "FIXED",
+ "EventCode": "0xff",
+ "EventName": "UNC_CLOCK.SOCKET",
+ "PerPkg": "1",
+ "Unit": "cbox_0"
}
]
diff --git a/tools/perf/pmu-events/arch/x86/haswell/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/haswell/uncore-interconnect.json
index 8da28239ebf9..557b278e631d 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/uncore-interconnect.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Each cycle count number of valid entries in Coherency Tracker queue from allocation till deallocation. Aperture requests (snoops) appear as NC decoded internally and become coherent (snoop L3, access memory)",
+ "Counter": "0",
"EventCode": "0x83",
"EventName": "UNC_ARB_COH_TRK_OCCUPANCY.All",
"PerPkg": "1",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Number of entries allocated. Account for Any type: e.g. Snoop, Core aperture, etc.",
+ "Counter": "0,1",
"EventCode": "0x84",
"EventName": "UNC_ARB_COH_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Each cycle counts number of all Core outgoing valid entries. Such entry is defined as valid from its allocation till first of IDI0 or DRS0 messages is sent out. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Cycles with at least one request outstanding is waiting for data return from memory controller. Account for coherent and non-coherent requests initiated by IA Cores, Processor Graphics Unit, or LLC.",
+ "Counter": "0",
"CounterMask": "1",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.CYCLES_WITH_ANY_REQUEST",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "Total number of Core outgoing entries allocated. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Number of Writes allocated - any write transactions: full/partials writes and evictions.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.WRITES",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/haswell/virtual-memory.json b/tools/perf/pmu-events/arch/x86/haswell/virtual-memory.json
index 87a4ec1ee7d7..7cf00ae0e993 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Load misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Misses in all TLB levels that cause a page walk of any page size.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "DTLB demand load misses with low part of linear-to-physical address translation missed",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.PDE_CACHE_MISS",
"PublicDescription": "DTLB demand load misses with low part of linear-to-physical address translation missed.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Load operations that miss the first DTLB level but hit the second and do not cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Number of cache load STLB hits. No page walk.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (2M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_2M",
"PublicDescription": "This event counts load operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_4K",
"PublicDescription": "This event counts load operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Completed page walks in any TLB of any page size due to demand load misses.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"SampleAfterValue": "2000003",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (2M/4M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Completed page walks due to demand load misses that caused 2M/4M page walks in any TLB levels.",
@@ -64,6 +72,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Completed page walks due to demand load misses that caused 4K page walks in any TLB levels.",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_DURATION",
"PublicDescription": "This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB load misses.",
@@ -80,6 +90,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Miss in all TLB levels causes a page walk of any page size (4K/2M/4M/1G).",
@@ -88,6 +99,7 @@
},
{
"BriefDescription": "DTLB store misses with low part of linear-to-physical address translation missed",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.PDE_CACHE_MISS",
"PublicDescription": "DTLB store misses with low part of linear-to-physical address translation missed.",
@@ -96,6 +108,7 @@
},
{
"BriefDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks.",
@@ -104,6 +117,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (2M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_2M",
"PublicDescription": "This event counts store operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -112,6 +126,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_4K",
"PublicDescription": "This event counts store operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Completed page walks due to store miss in any TLB levels of any page size (4K/2M/4M/1G).",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks. (1G)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"SampleAfterValue": "100003",
@@ -135,6 +152,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Completed page walks due to store misses in one or more TLB levels of 2M/4M page structure.",
@@ -143,6 +161,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Completed page walks due to store misses in one or more TLB levels of 4K page structure.",
@@ -151,6 +170,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_DURATION",
"PublicDescription": "This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB store misses.",
@@ -159,6 +179,7 @@
},
{
"BriefDescription": "Cycle count for an Extended Page table walk.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.WALK_CYCLES",
"SampleAfterValue": "2000003",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+ "Counter": "0,1,2,3",
"EventCode": "0xae",
"EventName": "ITLB.ITLB_FLUSH",
"PublicDescription": "Counts the number of ITLB flushes, includes 4k/2M/4M pages.",
@@ -174,6 +196,7 @@
},
{
"BriefDescription": "Misses at all ITLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Misses in ITLB that causes a page walk of any page size.",
@@ -182,6 +205,7 @@
},
{
"BriefDescription": "Operations that miss the first ITLB level but hit the second and do not cause any page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "ITLB misses that hit STLB. No page walk.",
@@ -190,6 +214,7 @@
},
{
"BriefDescription": "Code misses that miss the DTLB and hit the STLB (2M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_2M",
"PublicDescription": "ITLB misses that hit STLB (2M).",
@@ -198,6 +223,7 @@
},
{
"BriefDescription": "Core misses that miss the DTLB and hit the STLB (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_4K",
"PublicDescription": "ITLB misses that hit STLB (4K).",
@@ -206,6 +232,7 @@
},
{
"BriefDescription": "Misses in all ITLB levels that cause completed page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Completed page walks in ITLB of any page size.",
@@ -214,6 +241,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
"SampleAfterValue": "100003",
@@ -221,6 +249,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Completed page walks due to misses in ITLB 2M/4M page entries.",
@@ -229,6 +258,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Completed page walks due to misses in ITLB 4K page entries.",
@@ -237,6 +267,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_DURATION",
"PublicDescription": "This event counts cycles when the page miss handler (PMH) is servicing page walks caused by ITLB misses.",
@@ -245,6 +276,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L1+FB",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L1",
"PublicDescription": "Number of DTLB page walker loads that hit in the L1+FB.",
@@ -253,6 +285,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L2",
"PublicDescription": "Number of DTLB page walker loads that hit in the L2.",
@@ -261,6 +294,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L3 + XSNP",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L3",
@@ -270,6 +304,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in Memory",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_MEMORY",
@@ -279,6 +314,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in the L1 and FB.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_L1",
"SampleAfterValue": "2000003",
@@ -286,6 +322,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in the L2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_L2",
"SampleAfterValue": "2000003",
@@ -293,6 +330,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in the L3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_L3",
"SampleAfterValue": "2000003",
@@ -300,6 +338,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in memory.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_MEMORY",
"SampleAfterValue": "2000003",
@@ -307,6 +346,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in the L1 and FB.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_L1",
"SampleAfterValue": "2000003",
@@ -314,6 +354,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in the L2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_L2",
"SampleAfterValue": "2000003",
@@ -321,6 +362,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in the L2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_L3",
"SampleAfterValue": "2000003",
@@ -328,6 +370,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in memory.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_MEMORY",
"SampleAfterValue": "2000003",
@@ -335,6 +378,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L1+FB",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L1",
"PublicDescription": "Number of ITLB page walker loads that hit in the L1+FB.",
@@ -343,6 +387,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L2",
"PublicDescription": "Number of ITLB page walker loads that hit in the L2.",
@@ -351,6 +396,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L3 + XSNP",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L3",
@@ -360,6 +406,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in Memory",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_MEMORY",
@@ -369,6 +416,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "DTLB flush attempts of the thread-specific entries.",
@@ -377,6 +425,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "Count number of STLB flush attempts.",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/cache.json b/tools/perf/pmu-events/arch/x86/haswellx/cache.json
index a6c81010b394..42f24cdbe6ae 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/cache.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "This event counts when new data lines are brought into the L1 Data cache, which cause other lines to be evicted from the cache.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles a demand request was blocked due to Fill Buffers unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "L1D miss outstanding duration in cycles",
+ "Counter": "2",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Increments the number of outstanding L1D misses every cycle. Set Cmask = 1 and Edge =1 to count occurrences.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -34,6 +38,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of times a request needed a FB entry but there was no entry available for it. That is the FB unavailability was dominant reason for blocking the request. A request includes cacheable/uncacheable demands that is load, store or SW prefetch. HWP are e.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.REQUEST_FB_FULL",
"SampleAfterValue": "2000003",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Not rejected writebacks that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_DEMAND_RQSTS.WB_HIT",
"PublicDescription": "Not rejected writebacks that hit L2 cache.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "This event counts the number of L2 cache lines brought into the L2 cache. Lines are filled into the L2 cache when there was an L2 miss.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "L2 cache lines in E state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.E",
"PublicDescription": "L2 cache lines in E state filling L2.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "L2 cache lines in I state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.I",
"PublicDescription": "L2 cache lines in I state filling L2.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "L2 cache lines in S state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.S",
"PublicDescription": "L2 cache lines in S state filling L2.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_CLEAN",
"PublicDescription": "Clean L2 cache lines evicted by demand.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_DIRTY",
"PublicDescription": "Dirty L2 cache lines evicted by demand.",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts all L2 code requests.",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Requests from L2 hardware prefetchers",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "Counts all L2 HW prefetcher requests.",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts all L2 store RFO requests.",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Number of instruction fetches that hit the L2 cache.",
@@ -164,6 +184,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Number of instruction fetches that missed the L2 cache.",
@@ -172,6 +193,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
@@ -181,6 +203,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
@@ -190,6 +213,7 @@
},
{
"BriefDescription": "L2 prefetch requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_HIT",
"PublicDescription": "Counts all L2 HW prefetcher requests that hit L2.",
@@ -198,6 +222,7 @@
},
{
"BriefDescription": "L2 prefetch requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.L2_PF_MISS",
"PublicDescription": "Counts all L2 HW prefetcher requests that missed L2.",
@@ -206,6 +231,7 @@
},
{
"BriefDescription": "All requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
@@ -215,6 +241,7 @@
},
{
"BriefDescription": "All L2 requests",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
@@ -224,6 +251,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the number of store RFO requests that hit the L2 cache.",
@@ -232,6 +260,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the number of store RFO requests that miss the L2 cache.",
@@ -240,6 +269,7 @@
},
{
"BriefDescription": "L2 or L3 HW prefetches that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.ALL_PF",
"PublicDescription": "Any MLC or L3 HW prefetch accessing L2, including rejects.",
@@ -248,6 +278,7 @@
},
{
"BriefDescription": "Transactions accessing L2 pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.ALL_REQUESTS",
"PublicDescription": "Transactions accessing L2 pipe.",
@@ -256,6 +287,7 @@
},
{
"BriefDescription": "L2 cache accesses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.CODE_RD",
"PublicDescription": "L2 cache accesses when fetching instructions.",
@@ -264,6 +296,7 @@
},
{
"BriefDescription": "Demand Data Read requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.DEMAND_DATA_RD",
"PublicDescription": "Demand data read requests that access L2 cache.",
@@ -272,6 +305,7 @@
},
{
"BriefDescription": "L1D writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.L1D_WB",
"PublicDescription": "L1D writebacks that access L2 cache.",
@@ -280,6 +314,7 @@
},
{
"BriefDescription": "L2 fill requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.L2_FILL",
"PublicDescription": "L2 fill requests that access L2 cache.",
@@ -288,6 +323,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "L2 writebacks that access L2 cache.",
@@ -296,6 +332,7 @@
},
{
"BriefDescription": "RFO requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xf0",
"EventName": "L2_TRANS.RFO",
"PublicDescription": "RFO requests that access L2 cache.",
@@ -304,6 +341,7 @@
},
{
"BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
"PublicDescription": "Cycles in which the L1D is locked.",
@@ -312,6 +350,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "This event counts each cache miss condition for references to the last level cache.",
@@ -320,6 +359,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to L3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "This event counts requests originating from the core that reference a cache line in the last level cache.",
@@ -328,6 +368,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 and cross-core snoop hits in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -338,6 +379,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were HitM responses from shared L3.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -348,6 +390,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -358,6 +401,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD2",
@@ -368,6 +412,7 @@
},
{
"BriefDescription": "Data from local DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM30",
"EventCode": "0xD3",
@@ -379,6 +424,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: remote DRAM either Snoop not needed or Snoop Miss (RspI)",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD3",
@@ -389,6 +435,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: forwarded from remote cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSM30",
"EventCode": "0xD3",
@@ -399,6 +446,7 @@
},
{
"BriefDescription": "Retired load uop whose Data Source was: Remote cache HITM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSM30",
"EventCode": "0xD3",
@@ -409,6 +457,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSM30",
"EventCode": "0xD1",
@@ -419,6 +468,7 @@
},
{
"BriefDescription": "Retired load uops with L1 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD1",
@@ -429,6 +479,7 @@
},
{
"BriefDescription": "Retired load uops misses in L1 cache as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSM30",
"EventCode": "0xD1",
@@ -440,6 +491,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD76, HSD29, HSM30",
"EventCode": "0xD1",
@@ -450,6 +502,7 @@
},
{
"BriefDescription": "Miss in mid-level (L2) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD1",
@@ -461,6 +514,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were data hits in L3 without snoops required.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD1",
@@ -472,6 +526,7 @@
},
{
"BriefDescription": "Miss in last-level (L3) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD74, HSD29, HSD25, HSM26, HSM30",
"EventCode": "0xD1",
@@ -483,6 +538,7 @@
},
{
"BriefDescription": "Retired load uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -494,6 +550,7 @@
},
{
"BriefDescription": "Retired store uops.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -505,6 +562,7 @@
},
{
"BriefDescription": "Retired load uops with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD76, HSD29, HSM30",
"EventCode": "0xD0",
@@ -515,6 +573,7 @@
},
{
"BriefDescription": "Retired load uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -525,6 +584,7 @@
},
{
"BriefDescription": "Retired store uops that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -535,6 +595,7 @@
},
{
"BriefDescription": "Retired load uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -545,6 +606,7 @@
},
{
"BriefDescription": "Retired store uops that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Errata": "HSD29, HSM30",
"EventCode": "0xD0",
@@ -555,6 +617,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Data read requests sent to uncore (demand and prefetch).",
@@ -563,6 +626,7 @@
},
{
"BriefDescription": "Cacheable and noncacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "Demand code read requests sent to uncore.",
@@ -571,6 +635,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSM80",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
@@ -580,6 +645,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Demand RFO read requests sent to uncore, including regular RFOs, locks, ItoM.",
@@ -588,6 +654,7 @@
},
{
"BriefDescription": "Offcore requests buffer cannot take more entries for this thread core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
"SampleAfterValue": "2000003",
@@ -595,6 +662,7 @@
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
@@ -604,6 +672,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
@@ -613,6 +682,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD78, HSD62, HSD61, HSM63, HSM80",
"EventCode": "0x60",
@@ -622,6 +692,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
@@ -631,6 +702,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
@@ -640,6 +712,7 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"Errata": "HSD78, HSD62, HSD61, HSM63, HSM80",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
@@ -649,6 +722,7 @@
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"Errata": "HSD78, HSD62, HSD61, HSM63, HSM80",
"EventCode": "0x60",
@@ -658,6 +732,7 @@
},
{
"BriefDescription": "Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"Errata": "HSD62, HSD61, HSM63",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
@@ -667,6 +742,7 @@
},
{
"BriefDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE",
"SampleAfterValue": "100003",
@@ -674,6 +750,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -683,6 +760,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -692,6 +770,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -701,6 +780,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -710,6 +790,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -719,6 +800,7 @@
},
{
"BriefDescription": "Counts all requests hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_REQUESTS.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -728,6 +810,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -737,6 +820,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -746,6 +830,7 @@
},
{
"BriefDescription": "Counts all demand code reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -755,6 +840,7 @@
},
{
"BriefDescription": "Counts all demand code reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -764,6 +850,7 @@
},
{
"BriefDescription": "Counts demand data reads hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -773,6 +860,7 @@
},
{
"BriefDescription": "Counts demand data reads hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -782,6 +870,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) hit in the L3 and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -791,6 +880,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) hit in the L3 and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -800,6 +890,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -809,6 +900,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -818,6 +910,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -827,6 +920,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -836,6 +930,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -845,6 +940,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs hit in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_RFO.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -854,6 +950,7 @@
},
{
"BriefDescription": "Split locks in SQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"SampleAfterValue": "100003",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/counter.json b/tools/perf/pmu-events/arch/x86/haswellx/counter.json
new file mode 100644
index 000000000000..84c01d8023f1
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/haswellx/counter.json
@@ -0,0 +1,57 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "HA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "QPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "R2PCIe",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "R3QPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "3"
+ },
+ {
+ "Unit": "SBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "iMC",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ }
+]
\ No newline at end of file
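The new counter.json above declares, per PMU unit, how many fixed and general-purpose counters are available, while the per-event "Counter" fields added throughout this patch list the specific counters each event may be programmed on. As a rough illustration of how the two fit together (a sketch only, not part of perf's jevents tooling), the following Python snippet cross-checks the haswellx cache.json events against those limits; it assumes Python 3 and that it is run from the root of a kernel tree with this patch applied, using the file paths shown in the diff headers.

#!/usr/bin/env python3
# Sketch only: cross-check the per-event "Counter" lists added in this patch
# against the generic-counter limits declared in the new counter.json.
# Assumes a kernel tree with this patch applied; the checking logic is
# illustrative and is not perf's own event-validation code.
import json
from pathlib import Path

arch_dir = Path("tools/perf/pmu-events/arch/x86/haswellx")

# Unit name -> number of general-purpose counters, e.g. "core" -> 4.
limits = {entry["Unit"].lower(): int(entry["CountersNumGeneric"])
          for entry in json.loads((arch_dir / "counter.json").read_text())}

for event in json.loads((arch_dir / "cache.json").read_text()):
    counters = event.get("Counter", "")
    if not counters or counters.upper() == "FIXED":
        continue  # fixed counters are not indexed against the generic budget
    unit = event.get("Unit", "core").lower()
    if unit not in limits:
        continue
    highest = max(int(c) for c in counters.split(","))
    if highest >= limits[unit]:
        print(f'{event["EventName"]}: counter {highest} is out of range for '
              f'unit "{unit}" ({limits[unit]} generic counters)')

For a consistent event list the script prints nothing; any output flags an event whose counter index exceeds the budget its unit declares in counter.json.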
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/floating-point.json b/tools/perf/pmu-events/arch/x86/haswellx/floating-point.json
index 8fcc10f74ad9..a0b917306887 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Approximate counts of AVX & AVX2 256-bit instructions, including non-arithmetic instructions, loads, and stores. May count non-AVX instructions that employ 256-bit operations, including (but not necessarily limited to) rep string instructions that use 256-bit loads and stores for optimized performance, XSAVE* and XRSTOR*, and operations that transition the x87 FPU data registers between x87 and MMX.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC6",
"EventName": "AVX_INSTS.ALL",
"PublicDescription": "Note that a whole rep string only counts AVX_INST.ALL once.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles with any input/output SSE or FP assist",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.ANY",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to input values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_INPUT",
"PublicDescription": "Number of SIMD FP assists due to input values.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to Output values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_OUTPUT",
"PublicDescription": "Number of SIMD FP assists due to output values.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Number of X87 assists due to input value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_INPUT",
"PublicDescription": "Number of X87 FP assists due to input values.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of X87 assists due to output value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_OUTPUT",
"PublicDescription": "Number of X87 FP assists due to output values.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_ELIMINATED",
"PublicDescription": "Number of SIMD move elimination candidate uops that were eliminated.",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_NOT_ELIMINATED",
"PublicDescription": "Number of SIMD move elimination candidate uops that were not eliminated.",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Number of transitions from AVX-256 to legacy SSE when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "HSD56, HSM57",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.AVX_TO_SSE",
@@ -74,6 +83,7 @@
},
{
"BriefDescription": "Number of transitions from SSE to AVX-256 when penalty applicable.",
+ "Counter": "0,1,2,3",
"Errata": "HSD56, HSM57",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.SSE_TO_AVX",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/frontend.json b/tools/perf/pmu-events/arch/x86/haswellx/frontend.json
index 73d6d681dfa7..a9f81fd17925 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Number of front end re-steers due to BPU misprediction.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"SampleAfterValue": "2000003",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"SampleAfterValue": "2000003",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFDATA_STALL",
"SampleAfterValue": "2000003",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFETCH_STALL",
"SampleAfterValue": "2000003",
@@ -37,6 +42,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Misses. Includes Uncacheable accesses.",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "This event counts Instruction Cache (ICACHE) misses.",
@@ -45,6 +51,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS",
@@ -54,6 +61,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS",
@@ -63,6 +71,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Increment each cycle. # of uops delivered to IDQ from DSB path. Set Cmask = 1 to count cycles.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Instruction Decode Queue (IDQ) empty cycles",
+ "Counter": "0,1,2,3",
"Errata": "HSD135",
"EventCode": "0x79",
"EventName": "IDQ.EMPTY",
@@ -106,6 +119,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_ALL_UOPS",
"PublicDescription": "Number of uops delivered to IDQ from any path.",
@@ -114,6 +128,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ from MITE path. Set Cmask = 1 to count cycles.",
@@ -130,6 +146,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_CYCLES",
@@ -147,6 +165,7 @@
},
{
"BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ when MS_busy by DSB. Set Cmask = 1 to count cycles. Add Edge=1 to count # of delivery.",
@@ -164,6 +184,7 @@
},
{
"BriefDescription": "Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_MITE_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ when MS_busy by MITE. Set Cmask = 1 to count cycles.",
@@ -172,6 +193,7 @@
},
{
"BriefDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -181,6 +203,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "This event counts uops delivered by the Front-end with the assistance of the microcode sequencer. Microcode assists are used for complex instructions or scenarios that can't be handled by the standard decoder. Using other instructions, if possible, will usually improve performance.",
@@ -189,6 +212,7 @@
},
{
"BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"Errata": "HSD135",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
@@ -198,6 +222,7 @@
},
{
"BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -208,6 +233,7 @@
},
{
"BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -218,6 +244,7 @@
},
{
"BriefDescription": "Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -227,6 +254,7 @@
},
{
"BriefDescription": "Cycles with less than 2 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"Errata": "HSD135",
"EventCode": "0x9C",
@@ -236,6 +264,7 @@
},
{
"BriefDescription": "Cycles with less than 3 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD135",
"EventCode": "0x9C",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json b/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json
index 5f451948c893..b8845f8a28b9 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/hsx-metrics.json
@@ -1,49 +1,49 @@
[
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per core",
- "MetricExpr": "cstate_core@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
@@ -55,7 +55,7 @@
"MetricName": "UNCORE_FREQ"
},
{
- "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles.",
+ "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY",
"MetricName": "cpi",
"ScaleUnit": "1per_instr"
@@ -68,7 +68,7 @@
},
{
"BriefDescription": "Percentage of time spent in the active CPU power state C0",
- "MetricExpr": "tma_info_system_cpu_utilization",
+ "MetricExpr": "tma_info_system_cpus_utilized",
"MetricName": "cpu_utilization",
"ScaleUnit": "100%"
},
@@ -76,24 +76,24 @@
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions",
"MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_load_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions",
"MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_store_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
- "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU.",
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
"MetricExpr": "cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=0x19e@ * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_read",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.",
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU",
"MetricExpr": "cbox@UNC_C_TOR_INSERTS.OPCODE\\,filter_opc\\=0x1c8\\,filter_tid\\=0x3e@ * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_write",
"ScaleUnit": "1MB/s"
@@ -102,14 +102,14 @@
"BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
"MetricName": "itlb_large_page_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "itlb_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
@@ -209,13 +209,13 @@
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_opc\\=0x182@ / (cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_opc\\=0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=0x182@)",
"MetricName": "numa_reads_addressed_to_local_dram",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=0x182@ / (cbox@UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE\\,filter_opc\\=0x182@ + cbox@UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE\\,filter_opc\\=0x182@)",
"MetricName": "numa_reads_addressed_to_remote_dram",
"ScaleUnit": "100%"
@@ -282,17 +282,16 @@
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5 + UOPS_DISPATCHED_PORT.PORT_6) / tma_info_thread_slots",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY",
@@ -300,9 +299,8 @@
},
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1",
@@ -323,7 +321,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -341,7 +339,6 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_cisc",
@@ -353,7 +350,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(60 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) + 43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD)))) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -374,7 +371,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "43 * (MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -383,10 +380,10 @@
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "10 * ARITH.DIVIDER_UOPS / tma_info_core_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_UOPS",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
"ScaleUnit": "100%"
},
{
@@ -404,7 +401,7 @@
"MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -420,7 +417,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "(8 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store",
@@ -429,7 +426,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(8 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load",
@@ -438,7 +435,7 @@
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
"MetricExpr": "(200 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_HITM + 60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
@@ -448,7 +445,7 @@
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.REQUEST_FB_FULL\\,cmask\\=1@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
"PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
@@ -459,7 +456,7 @@
"MetricExpr": "tma_frontend_bound - tma_fetch_latency",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
@@ -477,7 +474,7 @@
{
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots",
- "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1",
@@ -491,20 +488,20 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses.",
"MetricExpr": "ICACHE.IFDATA_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * cpu@BR_MISP_EXEC.ALL_BRANCHES\\,umask\\=0xE4@)",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
@@ -529,7 +526,7 @@
"MetricName": "tma_info_core_coreipc"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
"MetricExpr": "(UOPS_EXECUTED.CORE / 2 / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@) if #SMT_on else UOPS_EXECUTED.CORE / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@))",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
@@ -549,6 +546,12 @@
"MetricName": "tma_info_frontend_ipunknown_branch"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -590,105 +593,105 @@
"MetricThreshold": "tma_info_inst_mix_ipstore < 8"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 9",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "0",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
"BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
@@ -704,21 +707,27 @@
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_time",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
"PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
@@ -744,6 +753,7 @@
},
{
"BriefDescription": "Average number of parallel data read requests to external memory",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182@ / UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182\\,thresh\\=1@",
"MetricGroup": "Mem;MemoryBW;SoC",
"MetricName": "tma_info_system_mem_parallel_reads",
@@ -751,12 +761,25 @@
},
{
"BriefDescription": "Average latency of data read request to external memory (in nanoseconds)",
- "MetricExpr": "1e9 * (UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182@ / UNC_C_TOR_INSERTS.MISS_OPCODE@filter_opc\\=0x182@) / (tma_info_system_socket_clks / duration_time)",
+ "MetricExpr": "1e9 * (UNC_C_TOR_OCCUPANCY.MISS_OPCODE@filter_opc\\=0x182@ / UNC_C_TOR_INSERTS.MISS_OPCODE@filter_opc\\=0x182@) / (tma_info_system_socket_clks / tma_info_system_time)",
"MetricGroup": "Mem;MemoryLat;SoC",
"MetricName": "tma_info_system_mem_read_latency",
"PublicDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+]memory-controller only)"
},
{
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "(power@energy\\-pkg@ * 61 + 15.6 * power@energy\\-ram@) / (duration_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
+ },
+ {
"BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
"MetricExpr": "(1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / (CPU_CLK_UNHALTED.REF_XCLK_ANY / 2) if #SMT_on else 0)",
"MetricGroup": "SMT",
@@ -769,12 +792,25 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
"MetricName": "tma_info_system_turbo_utilization"
},
{
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency"
+ },
+ {
"BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD",
"MetricGroup": "Pipeline",
@@ -806,7 +842,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
@@ -815,25 +851,25 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "(14 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY.STALLS_L2_PENDING) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS",
@@ -843,20 +879,20 @@
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricConstraint": "NO_GROUP_EVENTS_SMT",
"MetricExpr": "MEM_LOAD_UOPS_RETIRED.L3_HIT / (MEM_LOAD_UOPS_RETIRED.L3_HIT + 7 * MEM_LOAD_UOPS_RETIRED.L3_MISS) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "41 * (MEM_LOAD_UOPS_RETIRED.L3_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
"ScaleUnit": "100%"
},
{
@@ -875,12 +911,11 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 + UOPS_DISPATCHED_PORT.PORT_7 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_load_op_utilization",
@@ -893,8 +928,8 @@
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "200 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
"MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_local_dram",
- "MetricThreshold": "tma_local_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "MetricName": "tma_local_mem",
+ "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM_PS",
"ScaleUnit": "100%"
},
@@ -902,7 +937,7 @@
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
@@ -912,7 +947,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -920,27 +955,27 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=6@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots the Memory subsystem within the Backend was a bottleneck",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound",
+ "MetricExpr": "(min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) + RESOURCE_STALLS.SB) / (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) * tma_backend_bound",
"MetricGroup": "Backend;TmaL2;TopdownL2;tma_L2_group;tma_backend_bound_group",
"MetricName": "tma_memory_bound",
"MetricThreshold": "tma_memory_bound > 0.2 & tma_backend_bound > 0.2",
@@ -962,7 +997,7 @@
"MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.",
"ScaleUnit": "100%"
},
@@ -981,7 +1016,7 @@
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_0",
"MetricThreshold": "tma_port_0 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED_PORT.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -990,7 +1025,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_1",
"MetricThreshold": "tma_port_1 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1026,16 +1061,16 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_5",
"MetricThreshold": "tma_port_5 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED_PORT.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -1050,7 +1085,7 @@
{
"BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING)) / tma_info_thread_clks",
+ "MetricExpr": "((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@)) / 2 - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) + cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - (cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ if tma_info_thread_ipc > 1.8 else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0) + RESOURCE_STALLS.SB) - RESOURCE_STALLS.SB - min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING)) / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_ports_utilization",
"MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
@@ -1059,7 +1094,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1068,7 +1103,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_1",
"MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1077,19 +1112,19 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_2",
"MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).",
"MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2 if #SMT_on else cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / tma_info_core_core_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"ScaleUnit": "100%"
},
{
@@ -1107,15 +1142,15 @@
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "310 * (MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.L3_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_L3_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_L3_MISS_RETIRED.LOCAL_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_HITM + MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD))) / tma_info_thread_clks",
"MetricGroup": "Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_remote_dram",
- "MetricThreshold": "tma_remote_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "MetricName": "tma_remote_mem",
+ "MetricThreshold": "tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_PS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1",
@@ -1128,7 +1163,7 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
@@ -1144,7 +1179,7 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
@@ -1172,7 +1207,7 @@
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 9 * (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) + (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -1187,15 +1222,6 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric serves as an approximation of legacy x87 usage",
- "MetricExpr": "INST_RETIRED.X87 * tma_info_thread_uoppi / UOPS_RETIRED.RETIRE_SLOTS",
- "MetricGroup": "Compute;TopdownL4;tma_L4_group;tma_fp_arith_group",
- "MetricName": "tma_x87_use",
- "MetricThreshold": "tma_x87_use > 0.1",
- "PublicDescription": "This metric serves as an approximation of legacy x87 usage. It accounts for instructions beyond X87 FP arithmetic operations; hence may be used as a thermometer to avoid X87 high usage and preferably upgrade to modern ISA. See Tip under Tuning Hint.",
- "ScaleUnit": "100%"
- },
- {
"BriefDescription": "Uncore operating frequency in GHz",
"MetricExpr": "UNC_C_CLOCKTICKS / (#num_cores / #num_packages * #num_packages) / 1e9 / duration_time",
"MetricName": "uncore_frequency",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/memory.json b/tools/perf/pmu-events/arch/x86/haswellx/memory.json
index 2d212cf59e92..be0108558103 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/memory.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of times an HLE execution aborted due to any reasons (multiple categories may count as one).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED",
"PEBS": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC1",
"SampleAfterValue": "2000003",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to uncommon conditions.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC2",
"SampleAfterValue": "2000003",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC3",
"SampleAfterValue": "2000003",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to incompatible memory type.",
+ "Counter": "0,1,2,3",
"Errata": "HSD65",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC4",
@@ -38,6 +43,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times an HLE execution aborted due to none of the previous 4 categories (e.g. interrupts).",
@@ -46,6 +52,7 @@
},
{
"BriefDescription": "Number of times an HLE execution successfully committed.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.COMMIT",
"SampleAfterValue": "2000003",
@@ -53,6 +60,7 @@
},
{
"BriefDescription": "Number of times an HLE execution started.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC8",
"EventName": "HLE_RETIRED.START",
"SampleAfterValue": "2000003",
@@ -60,6 +68,7 @@
},
{
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
@@ -68,6 +77,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 128.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -80,6 +90,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 16.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -92,6 +103,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 256.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -104,6 +116,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 32.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 4.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -128,6 +142,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 512.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -140,6 +155,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 64.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -152,6 +168,7 @@
},
{
"BriefDescription": "Randomly selected loads with latency value being above 8.",
+ "Counter": "3",
"Data_LA": "1",
"Errata": "HSD76, HSD25, HSM26",
"EventCode": "0xcd",
@@ -164,6 +181,7 @@
},
{
"BriefDescription": "Speculative cache line split load uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.LOADS",
"PublicDescription": "Speculative cache-line split load uops dispatched to L1D.",
@@ -172,6 +190,7 @@
},
{
"BriefDescription": "Speculative cache line split STA uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.STORES",
"PublicDescription": "Speculative cache-line split store-address uops dispatched to L1D.",
@@ -180,6 +199,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -189,6 +209,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -198,6 +219,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -207,6 +229,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -216,6 +239,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and the data is returned from remote dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.REMOTE_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -225,6 +249,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and the modified data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -234,6 +259,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads miss the L3 and clean or shared data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -243,6 +269,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -252,6 +279,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -261,6 +289,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and the data is returned from remote dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.REMOTE_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -270,6 +299,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and the modified data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -279,6 +309,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) miss the L3 and clean or shared data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.REMOTE_HIT_FORWARD",
"MSRIndex": "0x1a6,0x1a7",
@@ -288,6 +319,7 @@
},
{
"BriefDescription": "Counts all requests miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_REQUESTS.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -297,6 +329,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -306,6 +339,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -315,6 +349,7 @@
},
{
"BriefDescription": "Counts all demand code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -324,6 +359,7 @@
},
{
"BriefDescription": "Counts all demand code reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -333,6 +369,7 @@
},
{
"BriefDescription": "Counts demand data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -342,6 +379,7 @@
},
{
"BriefDescription": "Counts demand data reads miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -351,6 +389,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -360,6 +399,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) miss the L3 and the data is returned from local dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -369,6 +409,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) miss the L3 and the modified data is transferred from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_MISS.REMOTE_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -378,6 +419,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -387,6 +429,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -396,6 +439,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_RFO.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -405,6 +449,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) code reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_CODE_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -414,6 +459,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) data reads miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_DATA_RD.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -423,6 +469,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) RFOs miss in the L3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_RFO.LLC_MISS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -432,6 +479,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to any reasons (multiple categories may count as one).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
"PEBS": "1",
@@ -440,6 +488,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC1",
"PublicDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
@@ -448,6 +497,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC2",
"SampleAfterValue": "2000003",
@@ -455,6 +505,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC3",
"SampleAfterValue": "2000003",
@@ -462,6 +513,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type.",
+ "Counter": "0,1,2,3",
"Errata": "HSD65",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC4",
@@ -470,6 +522,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MISC5",
"PublicDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).",
@@ -478,6 +531,7 @@
},
{
"BriefDescription": "Number of times an RTM execution successfully committed.",
+ "Counter": "0,1,2,3",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"SampleAfterValue": "2000003",
@@ -485,6 +539,7 @@
},
{
"BriefDescription": "Number of times an RTM execution started.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC9",
"EventName": "RTM_RETIRED.START",
"SampleAfterValue": "2000003",
@@ -492,6 +547,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed. Since this is the count of execution, it may not always cause a transactional abort.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC1",
"SampleAfterValue": "2000003",
@@ -499,6 +555,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions (e.g., vzeroupper) that may cause a transactional abort was executed inside a transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"SampleAfterValue": "2000003",
@@ -506,6 +563,7 @@
},
{
"BriefDescription": "Counts the number of times an instruction execution caused the transactional nest count supported to be exceeded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"SampleAfterValue": "2000003",
@@ -513,6 +571,7 @@
},
{
"BriefDescription": "Counts the number of times a XBEGIN instruction was executed inside an HLE transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC4",
"SampleAfterValue": "2000003",
@@ -520,6 +579,7 @@
},
{
"BriefDescription": "Counts the number of times an HLE XACQUIRE instruction was executed inside an RTM transactional region.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC5",
"SampleAfterValue": "2000003",
@@ -527,6 +587,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data capacity limitation for transactional writes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"SampleAfterValue": "2000003",
@@ -534,6 +595,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"SampleAfterValue": "2000003",
@@ -541,6 +603,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to XRELEASE lock not satisfying the address and value requirements in the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH",
"SampleAfterValue": "2000003",
@@ -548,6 +611,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY",
"SampleAfterValue": "2000003",
@@ -555,6 +619,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT",
"SampleAfterValue": "2000003",
@@ -562,6 +627,7 @@
},
{
"BriefDescription": "Number of times a HLE transactional region aborted due to a non XRELEASE prefixed instruction writing to an elided lock in the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK",
"SampleAfterValue": "2000003",
@@ -569,6 +635,7 @@
},
{
"BriefDescription": "Number of times HLE lock could not be elided due to ElisionBufferAvailable being zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL",
"SampleAfterValue": "2000003",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/metricgroups.json b/tools/perf/pmu-events/arch/x86/haswellx/metricgroups.json
index f6a0258e3241..0863375bdead 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/metricgroups.json
@@ -2,9 +2,21 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -23,8 +35,11 @@
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -38,6 +53,7 @@
"Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -65,6 +81,7 @@
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -90,10 +107,12 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/other.json b/tools/perf/pmu-events/arch/x86/haswellx/other.json
index 2395ebf112db..7d8769ef6d04 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/other.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Unhalted core cycles when the thread is in ring 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING0",
"PublicDescription": "Unhalted core cycles when the thread is in ring 0.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of intervals between processor halts while thread is in ring 0.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5C",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Unhalted core cycles when thread is in rings 1, 2, or 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING123",
"PublicDescription": "Unhalted core cycles when the thread is not in ring 0.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Cycles when L1 and L2 are locked due to UC or split lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION",
"PublicDescription": "Cycles in which the L1D and L2 are locked, due to a UC lock or split lock.",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/pipeline.json b/tools/perf/pmu-events/arch/x86/haswellx/pipeline.json
index 540f4372623c..c00301fdb3d7 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Any uop executed by the Divider. (This includes all divide uops, sqrt, ...)",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "ARITH.DIVIDER_UOPS",
"SampleAfterValue": "2000003",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Speculative and retired branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_BRANCHES",
"PublicDescription": "Counts all near executed branches (not necessarily retired).",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Speculative and retired macro-conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -23,6 +26,7 @@
},
{
"BriefDescription": "Speculative and retired macro-unconditional branches excluding calls and indirects.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_JMP",
"SampleAfterValue": "200003",
@@ -30,6 +34,7 @@
},
{
"BriefDescription": "Speculative and retired direct near calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -37,6 +42,7 @@
},
{
"BriefDescription": "Speculative and retired indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -44,6 +50,7 @@
},
{
"BriefDescription": "Speculative and retired indirect return branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN",
"SampleAfterValue": "200003",
@@ -51,6 +58,7 @@
},
{
"BriefDescription": "Not taken macro-conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.NONTAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -58,6 +66,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -65,6 +74,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branch instructions excluding calls and indirects.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_JUMP",
"SampleAfterValue": "200003",
@@ -72,6 +82,7 @@
},
{
"BriefDescription": "Taken speculative and retired direct near calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -79,6 +90,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -86,6 +98,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -93,6 +106,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches with return mnemonic.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN",
"SampleAfterValue": "200003",
@@ -100,6 +114,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PublicDescription": "Branch instructions at retirement.",
@@ -107,6 +122,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -115,6 +131,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -124,6 +141,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
"PublicDescription": "Number of far branches retired.",
@@ -132,6 +150,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -140,6 +159,7 @@
},
{
"BriefDescription": "Direct and indirect macro near call instructions retired (captured in ring 3).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL_R3",
"PEBS": "1",
@@ -148,6 +168,7 @@
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
"PEBS": "1",
@@ -157,6 +178,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -166,6 +188,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NOT_TAKEN",
"PublicDescription": "Counts the number of not taken branch instructions retired.",
@@ -174,6 +197,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_BRANCHES",
"PublicDescription": "Counts all near executed branches (not necessarily retired).",
@@ -182,6 +206,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -189,6 +214,7 @@
},
{
"BriefDescription": "Mispredicted indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -196,6 +222,7 @@
},
{
"BriefDescription": "Speculative mispredicted indirect branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.INDIRECT",
"PublicDescription": "Counts speculatively miss-predicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded).",
@@ -204,6 +231,7 @@
},
{
"BriefDescription": "Not taken speculative and retired mispredicted macro conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.NONTAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -211,6 +239,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted macro conditional branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_CONDITIONAL",
"SampleAfterValue": "200003",
@@ -218,6 +247,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches excluding calls and returns.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"SampleAfterValue": "200003",
@@ -225,6 +255,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect calls.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"SampleAfterValue": "200003",
@@ -232,6 +263,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches with return mnemonic.",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_RETURN_NEAR",
"SampleAfterValue": "200003",
@@ -239,6 +271,7 @@
},
{
"BriefDescription": "All mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PublicDescription": "Mispredicted branch instructions at retirement.",
@@ -246,6 +279,7 @@
},
{
"BriefDescription": "Mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -255,6 +289,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -263,6 +298,7 @@
},
{
"BriefDescription": "number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -272,6 +308,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3c",
"EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -279,6 +316,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK",
"PublicDescription": "Increments at the frequency of XCLK (100 MHz) when not halted.",
@@ -288,6 +326,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY",
"PublicDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
@@ -296,6 +335,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "100003",
@@ -303,6 +343,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "This event counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state.",
"SampleAfterValue": "2000003",
@@ -310,6 +351,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Reference cycles when the thread is unhalted. (counts at 100 MHz rate)",
@@ -319,6 +361,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY",
"PublicDescription": "Reference cycles when the at least one thread on the physical core is unhalted (counts at 100 MHz rate).",
@@ -327,6 +370,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "This event counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.",
"SampleAfterValue": "2000003",
@@ -335,12 +379,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD_ANY",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "Counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.",
@@ -349,12 +395,14 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Cycles with pending L1 cache miss loads.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_PENDING",
@@ -364,6 +412,7 @@
},
{
"BriefDescription": "Cycles with pending L2 cache miss loads.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD78, HSM63, HSM80",
"EventCode": "0xa3",
@@ -374,6 +423,7 @@
},
{
"BriefDescription": "Cycles with pending memory loads.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_LDM_PENDING",
@@ -383,6 +433,7 @@
},
{
"BriefDescription": "This event increments by 1 for every cycle where there was no execute for this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_NO_EXECUTE",
@@ -392,6 +443,7 @@
},
{
"BriefDescription": "Execution stalls due to L1 data cache misses",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_PENDING",
@@ -401,6 +453,7 @@
},
{
"BriefDescription": "Execution stalls due to L2 cache misses.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"Errata": "HSM63, HSM80",
"EventCode": "0xa3",
@@ -411,6 +464,7 @@
},
{
"BriefDescription": "Execution stalls due to memory subsystem.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_LDM_PENDING",
@@ -420,6 +474,7 @@
},
{
"BriefDescription": "Stall cycles because IQ is full",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.IQ_FULL",
"PublicDescription": "Stall cycles due to IQ is full.",
@@ -428,6 +483,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "This event counts cycles where the decoder is stalled on an instruction with a length changing prefix (LCP).",
@@ -436,6 +492,7 @@
},
{
"BriefDescription": "Instructions retired from execution.",
+ "Counter": "Fixed counter 0",
"Errata": "HSD140, HSD143",
"EventName": "INST_RETIRED.ANY",
"PublicDescription": "This event counts the number of instructions retired from execution. For instructions that consist of multiple micro-ops, this event counts the retirement of the last micro-op of the instruction. Counting continues during hardware interrupts, traps, and inside interrupt handlers. INST_RETIRED.ANY is counted by a designated fixed counter, leaving the programmable counters available for other events. Faulting executions of GETSEC/VM entry/VM Exit/MWait will not count as retired instructions.",
@@ -444,6 +501,7 @@
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3",
"Errata": "HSD11, HSD140",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
@@ -452,6 +510,7 @@
},
{
"BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
"Errata": "HSD140",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
@@ -462,6 +521,7 @@
},
{
"BriefDescription": "FP operations retired. X87 FP operations that have no exceptions: Counts also flows that have several X87 or flows that use X87 uops in the exception handling.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.X87",
"PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts FP operations retired. For X87 FP operations that have no exceptions counting also includes flows that have several X87, or flows that use X87 uops in the exception handling.",
@@ -470,6 +530,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
@@ -480,6 +541,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES_ANY",
@@ -489,6 +551,7 @@
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -497,6 +560,7 @@
},
{
"BriefDescription": "loads blocked by overlapping with store buffer that cannot be forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load. The most common reason why store forwarding would be blocked is when a load's address range overlaps with a preceding smaller uncompleted store. The penalty for blocked store forwarding is that the load must wait for the store to write its value to the cache before it can be issued.",
@@ -505,6 +569,7 @@
},
{
"BriefDescription": "False dependencies in MOB due to partial compare on address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "Aliasing occurs when a load is issued after a store and their memory addresses are offset by 4K. This event counts the number of loads that aliased with a preceding store, resulting in an extended address check in the pipeline which can have a performance impact.",
@@ -513,6 +578,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for hardware prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PRE.HW_PF",
"PublicDescription": "Non-SW-prefetch load dispatches that hit fill buffer allocated for H/W prefetch.",
@@ -521,6 +587,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for software prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PRE.SW_PF",
"PublicDescription": "Non-SW-prefetch load dispatches that hit fill buffer allocated for S/W prefetch.",
@@ -529,6 +596,7 @@
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_4_UOPS",
@@ -537,6 +605,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -545,6 +614,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xa8",
"EventName": "LSD.UOPS",
"PublicDescription": "Number of uops delivered by the LSD.",
@@ -553,6 +623,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xC3",
@@ -562,6 +633,7 @@
},
{
"BriefDescription": "Cycles there was a Nuke. Account for both thread-specific and All Thread Nukes.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.CYCLES",
"SampleAfterValue": "2000003",
@@ -569,6 +641,7 @@
},
{
"BriefDescription": "This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MASKMOV",
"SampleAfterValue": "100003",
@@ -576,6 +649,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "This event is incremented when self-modifying code (SMC) is detected, which causes a machine clear. Machine clears can have a significant performance impact if they are happening frequently.",
@@ -584,6 +658,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_ELIMINATED",
"PublicDescription": "Number of integer move elimination candidate uops that were eliminated.",
@@ -592,6 +667,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_NOT_ELIMINATED",
"PublicDescription": "Number of integer move elimination candidate uops that were not eliminated.",
@@ -600,6 +676,7 @@
},
{
"BriefDescription": "Number of times any microcode assist is invoked by HW upon uop writeback.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.ANY_WB_ASSIST",
"PublicDescription": "Number of microcode assists invoked by HW upon uop writeback.",
@@ -608,6 +685,7 @@
},
{
"BriefDescription": "Resource-related stall cycles",
+ "Counter": "0,1,2,3",
"Errata": "HSD135",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ANY",
@@ -617,6 +695,7 @@
},
{
"BriefDescription": "Cycles stalled due to re-order buffer full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ROB",
"SampleAfterValue": "2000003",
@@ -624,6 +703,7 @@
},
{
"BriefDescription": "Cycles stalled due to no eligible RS entry available.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.RS",
"SampleAfterValue": "2000003",
@@ -631,6 +711,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "This event counts cycles during which no instructions were allocated because no Store Buffers (SB) were available.",
@@ -639,6 +720,7 @@
},
{
"BriefDescription": "Count cases of saving new LBR",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.LBR_INSERTS",
"PublicDescription": "Count cases of saving new LBR records by hardware.",
@@ -647,6 +729,7 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
"PublicDescription": "This event counts cycles when the Reservation Station ( RS ) is empty for the thread. The RS is a structure that buffers allocated micro-ops from the Front-end. If there are many cycles when the RS is empty, it may represent an underflow of instructions delivered from the Front-end.",
@@ -655,6 +738,7 @@
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -665,6 +749,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0",
"SampleAfterValue": "2000003",
@@ -672,6 +757,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1",
"SampleAfterValue": "2000003",
@@ -679,6 +765,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2",
"SampleAfterValue": "2000003",
@@ -686,6 +773,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3",
"SampleAfterValue": "2000003",
@@ -693,6 +781,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4",
"SampleAfterValue": "2000003",
@@ -700,6 +789,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5",
"SampleAfterValue": "2000003",
@@ -707,6 +797,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_6",
"SampleAfterValue": "2000003",
@@ -714,6 +805,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_7",
"SampleAfterValue": "2000003",
@@ -721,6 +813,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3",
"Errata": "HSD30, HSM31",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
@@ -730,6 +823,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -739,6 +833,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -748,6 +843,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -757,6 +853,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
@@ -766,6 +863,7 @@
},
{
"BriefDescription": "Cycles with no micro-ops executed from any thread on physical core.",
+ "Counter": "0,1,2,3",
"Errata": "HSD30, HSM31",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE",
@@ -775,6 +873,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -785,6 +884,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -795,6 +895,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -805,6 +906,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -814,6 +916,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"Errata": "HSD144, HSD30, HSM31",
"EventCode": "0xB1",
@@ -824,6 +927,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0",
"PublicDescription": "Cycles which a uop is dispatched on port 0 in this thread.",
@@ -833,6 +937,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_0_CORE",
"SampleAfterValue": "2000003",
@@ -840,6 +945,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1",
"PublicDescription": "Cycles which a uop is dispatched on port 1 in this thread.",
@@ -849,6 +955,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 1.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_1_CORE",
"SampleAfterValue": "2000003",
@@ -856,6 +963,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2",
"PublicDescription": "Cycles which a uop is dispatched on port 2 in this thread.",
@@ -865,6 +973,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_2_CORE",
"SampleAfterValue": "2000003",
@@ -872,6 +981,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3",
"PublicDescription": "Cycles which a uop is dispatched on port 3 in this thread.",
@@ -881,6 +991,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_3_CORE",
"SampleAfterValue": "2000003",
@@ -888,6 +999,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4",
"PublicDescription": "Cycles which a uop is dispatched on port 4 in this thread.",
@@ -897,6 +1009,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 4.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_4_CORE",
"SampleAfterValue": "2000003",
@@ -904,6 +1017,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5",
"PublicDescription": "Cycles which a uop is dispatched on port 5 in this thread.",
@@ -913,6 +1027,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 5.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_5_CORE",
"SampleAfterValue": "2000003",
@@ -920,6 +1035,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6",
"PublicDescription": "Cycles which a uop is dispatched on port 6 in this thread.",
@@ -929,6 +1045,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are executed in port 6.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_6_CORE",
"SampleAfterValue": "2000003",
@@ -936,6 +1053,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are executed in port 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7",
"PublicDescription": "Cycles which a uop is dispatched on port 7 in this thread.",
@@ -945,6 +1063,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 7.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_EXECUTED_PORT.PORT_7_CORE",
"SampleAfterValue": "2000003",
@@ -952,6 +1071,7 @@
},
{
"BriefDescription": "Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "This event counts the number of uops issued by the Front-end of the pipeline to the Back-end. This event is counted at the allocation stage and will count both retired and non-retired uops.",
@@ -961,6 +1081,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for all threads.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.CORE_STALL_CYCLES",
@@ -970,6 +1091,7 @@
},
{
"BriefDescription": "Number of flags-merge uops being allocated. Such uops considered perf sensitive; added by GSR u-arch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.FLAGS_MERGE",
"PublicDescription": "Number of flags-merge uops allocated. Such uops add delay.",
@@ -978,6 +1100,7 @@
},
{
"BriefDescription": "Number of Multiply packed/scalar single precision uops allocated",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SINGLE_MUL",
"PublicDescription": "Number of multiply packed/scalar single precision uops allocated.",
@@ -986,6 +1109,7 @@
},
{
"BriefDescription": "Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SLOW_LEA",
"PublicDescription": "Number of slow LEA or similar uops allocated. Such uop has 3 sources (for example, 2 sources + immediate) regardless of whether it is a result of LEA instruction or not.",
@@ -994,6 +1118,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -1003,6 +1128,7 @@
},
{
"BriefDescription": "Actually retired uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ALL",
"PEBS": "1",
@@ -1013,6 +1139,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.CORE_STALL_CYCLES",
@@ -1022,6 +1149,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.RETIRE_SLOTS",
"PEBS": "1",
@@ -1031,6 +1159,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -1040,6 +1169,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "16",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-cache.json b/tools/perf/pmu-events/arch/x86/haswellx/uncore-cache.json
index 9227cc226002..d664af16c1db 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "LLC prefetch misses for code reads. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.CODE_LLC_PREFETCH",
"Filter": "filter_opc=0x191",
@@ -12,6 +13,7 @@
},
{
"BriefDescription": "LLC prefetch misses for data reads. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.DATA_LLC_PREFETCH",
"Filter": "filter_opc=0x192",
@@ -23,6 +25,7 @@
},
{
"BriefDescription": "LLC misses - demand and prefetch data reads - excludes LLC prefetches. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.DATA_READ",
"Filter": "filter_opc=0x182",
@@ -34,6 +37,7 @@
},
{
"BriefDescription": "MMIO reads. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.MMIO_READ",
"Filter": "filter_opc=0x187,filter_nc=1",
@@ -45,6 +49,7 @@
},
{
"BriefDescription": "MMIO writes. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.MMIO_WRITE",
"Filter": "filter_opc=0x18f,filter_nc=1",
@@ -56,6 +61,7 @@
},
{
"BriefDescription": "PCIe write misses (full cache line). Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.PCIE_NON_SNOOP_WRITE",
"Filter": "filter_opc=0x1c8,filter_tid=0x3e",
@@ -67,6 +73,7 @@
},
{
"BriefDescription": "LLC misses for PCIe read current. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.PCIE_READ",
"Filter": "filter_opc=0x19e",
@@ -78,6 +85,7 @@
},
{
"BriefDescription": "ItoM write misses (as part of fast string memcpy stores) + PCIe full line writes. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.PCIE_WRITE",
"Filter": "filter_opc=0x1c8",
@@ -89,6 +97,7 @@
},
{
"BriefDescription": "LLC prefetch misses for RFO. Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.RFO_LLC_PREFETCH",
"Filter": "filter_opc=0x190",
@@ -100,6 +109,7 @@
},
{
"BriefDescription": "LLC misses - Uncacheable reads (from cpu) . Derived from unc_c_tor_inserts.miss_opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_MISSES.UNCACHEABLE",
"Filter": "filter_opc=0x187",
@@ -111,6 +121,7 @@
},
{
"BriefDescription": "L2 demand and L2 prefetch code references to LLC. Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.CODE_LLC_PREFETCH",
"Filter": "filter_opc=0x181",
@@ -122,6 +133,7 @@
},
{
"BriefDescription": "PCIe writes (partial cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.PCIE_NS_PARTIAL_WRITE",
"Filter": "filter_opc=0x180,filter_tid=0x3e",
@@ -132,6 +144,7 @@
},
{
"BriefDescription": "PCIe read current. Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.PCIE_READ",
"Filter": "filter_opc=0x19e",
@@ -143,6 +156,7 @@
},
{
"BriefDescription": "PCIe write references (full cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.PCIE_WRITE",
"Filter": "filter_opc=0x1c8,filter_tid=0x3e",
@@ -154,6 +168,7 @@
},
{
"BriefDescription": "Streaming stores (full cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.STREAMING_FULL",
"Filter": "filter_opc=0x18c",
@@ -165,6 +180,7 @@
},
{
"BriefDescription": "Streaming stores (partial cache line). Derived from unc_c_tor_inserts.opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "LLC_REFERENCES.STREAMING_PARTIAL",
"Filter": "filter_opc=0x18d",
@@ -176,6 +192,7 @@
},
{
"BriefDescription": "Bounce Control",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_C_BOUNCE_CONTROL",
"PerPkg": "1",
@@ -183,12 +200,14 @@
},
{
"BriefDescription": "Uncore Clocks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_C_CLOCKTICKS",
"PerPkg": "1",
"Unit": "CBOX"
},
{
"BriefDescription": "Counter 0 Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_C_COUNTER0_OCCUPANCY",
"PerPkg": "1",
@@ -197,6 +216,7 @@
},
{
"BriefDescription": "FaST wire asserted",
+ "Counter": "0,1",
"EventCode": "0x9",
"EventName": "UNC_C_FAST_ASSERTED",
"PerPkg": "1",
@@ -205,6 +225,7 @@
},
{
"BriefDescription": "All LLC Misses (code+ data rd + data wr - including demand and prefetch)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.ANY",
"Filter": "filter_state=0x1",
@@ -216,6 +237,7 @@
},
{
"BriefDescription": "Cache Lookups; Data Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.DATA_READ",
"PerPkg": "1",
@@ -225,6 +247,7 @@
},
{
"BriefDescription": "Cache Lookups; Lookups that Match NID",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.NID",
"PerPkg": "1",
@@ -234,6 +257,7 @@
},
{
"BriefDescription": "Cache Lookups; Any Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.READ",
"PerPkg": "1",
@@ -243,6 +267,7 @@
},
{
"BriefDescription": "Cache Lookups; External Snoop Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.REMOTE_SNOOP",
"PerPkg": "1",
@@ -252,6 +277,7 @@
},
{
"BriefDescription": "Cache Lookups; Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_C_LLC_LOOKUP.WRITE",
"PerPkg": "1",
@@ -261,6 +287,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.E_STATE",
"PerPkg": "1",
@@ -270,6 +297,7 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.F_STATE",
"PerPkg": "1",
@@ -279,6 +307,7 @@
},
{
"BriefDescription": "Lines Victimized; Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.I_STATE",
"PerPkg": "1",
@@ -288,6 +317,7 @@
},
{
"BriefDescription": "Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.MISS",
"PerPkg": "1",
@@ -297,6 +327,7 @@
},
{
"BriefDescription": "M line evictions from LLC (writebacks to memory)",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.M_STATE",
"PerPkg": "1",
@@ -307,6 +338,7 @@
},
{
"BriefDescription": "Lines Victimized; Victimized Lines that Match NID",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.NID",
"PerPkg": "1",
@@ -316,6 +348,7 @@
},
{
"BriefDescription": "Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_C_LLC_VICTIMS.S_STATE",
"PerPkg": "1",
@@ -325,6 +358,7 @@
},
{
"BriefDescription": "Cbo Misc; DRd hitting non-M with raw CV=0",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.CVZERO_PREFETCH_MISS",
"PerPkg": "1",
@@ -334,6 +368,7 @@
},
{
"BriefDescription": "Cbo Misc; Clean Victim with raw CV=0",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.CVZERO_PREFETCH_VICTIM",
"PerPkg": "1",
@@ -343,6 +378,7 @@
},
{
"BriefDescription": "Cbo Misc; RFO HitS",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.RFO_HIT_S",
"PerPkg": "1",
@@ -352,6 +388,7 @@
},
{
"BriefDescription": "Cbo Misc; Silent Snoop Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.RSPI_WAS_FSE",
"PerPkg": "1",
@@ -361,6 +398,7 @@
},
{
"BriefDescription": "Cbo Misc",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.STARTED",
"PerPkg": "1",
@@ -370,6 +408,7 @@
},
{
"BriefDescription": "Cbo Misc; Write Combining Aliasing",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_C_MISC.WC_ALIASING",
"PerPkg": "1",
@@ -379,6 +418,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE0",
"PerPkg": "1",
@@ -388,6 +428,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE1",
"PerPkg": "1",
@@ -397,6 +438,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE2",
"PerPkg": "1",
@@ -406,6 +448,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Age 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.AGE3",
"PerPkg": "1",
@@ -415,6 +458,7 @@
},
{
"BriefDescription": "LRU Queue; LRU Bits Decremented",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.LRU_DECREMENT",
"PerPkg": "1",
@@ -424,6 +468,7 @@
},
{
"BriefDescription": "LRU Queue; Non-0 Aged Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_C_QLRU.VICTIM_NON_ZERO",
"PerPkg": "1",
@@ -433,6 +478,7 @@
},
{
"BriefDescription": "AD Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.ALL",
"PerPkg": "1",
@@ -442,6 +488,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN",
"PerPkg": "1",
@@ -451,6 +498,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -460,6 +508,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.DOWN_ODD",
"PerPkg": "1",
@@ -469,6 +518,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP",
"PerPkg": "1",
@@ -478,6 +528,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP_EVEN",
"PerPkg": "1",
@@ -487,6 +538,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_C_RING_AD_USED.UP_ODD",
"PerPkg": "1",
@@ -496,6 +548,7 @@
},
{
"BriefDescription": "AK Ring In Use; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.ALL",
"PerPkg": "1",
@@ -505,6 +558,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN",
"PerPkg": "1",
@@ -514,6 +568,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -523,6 +578,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.DOWN_ODD",
"PerPkg": "1",
@@ -532,6 +588,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP",
"PerPkg": "1",
@@ -541,6 +598,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP_EVEN",
"PerPkg": "1",
@@ -550,6 +608,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_C_RING_AK_USED.UP_ODD",
"PerPkg": "1",
@@ -559,6 +618,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.ALL",
"PerPkg": "1",
@@ -568,6 +628,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN",
"PerPkg": "1",
@@ -577,6 +638,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -586,6 +648,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.DOWN_ODD",
"PerPkg": "1",
@@ -595,6 +658,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP",
"PerPkg": "1",
@@ -604,6 +668,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP_EVEN",
"PerPkg": "1",
@@ -613,6 +678,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_C_RING_BL_USED.UP_ODD",
"PerPkg": "1",
@@ -622,6 +688,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.AD",
"PerPkg": "1",
@@ -630,6 +697,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.AK",
"PerPkg": "1",
@@ -638,6 +706,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.BL",
"PerPkg": "1",
@@ -646,6 +715,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_C_RING_BOUNCES.IV",
"PerPkg": "1",
@@ -654,6 +724,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -663,6 +734,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.DN",
"PerPkg": "1",
@@ -672,6 +744,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.DOWN",
"PerPkg": "1",
@@ -681,6 +754,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_C_RING_IV_USED.UP",
"PerPkg": "1",
@@ -690,6 +764,7 @@
},
{
"BriefDescription": "UNC_C_RING_SINK_STARVED.AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.AD",
"PerPkg": "1",
@@ -698,6 +773,7 @@
},
{
"BriefDescription": "UNC_C_RING_SINK_STARVED.AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.AK",
"PerPkg": "1",
@@ -706,6 +782,7 @@
},
{
"BriefDescription": "UNC_C_RING_SINK_STARVED.BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.BL",
"PerPkg": "1",
@@ -714,6 +791,7 @@
},
{
"BriefDescription": "UNC_C_RING_SINK_STARVED.IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_C_RING_SINK_STARVED.IV",
"PerPkg": "1",
@@ -722,6 +800,7 @@
},
{
"BriefDescription": "Number of cycles the Cbo is actively throttling traffic onto the Ring in order to limit bounce traffic.",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_C_RING_SRC_THRTL",
"PerPkg": "1",
@@ -729,15 +808,17 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.IPQ",
"PerPkg": "1",
- "PublicDescription": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IPQ is externally startved and therefore we are blocking the IRQ.",
+ "PublicDescription": "Counts cycles in external starvation. This occurs when one of the ingress queues is being starved by the other queues.; IPQ is externally starved and therefore we are blocking the IRQ.",
"UMask": "0x2",
"Unit": "CBOX"
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.IRQ",
"PerPkg": "1",
@@ -747,6 +828,7 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; ISMQ_BID",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.ISMQ_BIDS",
"PerPkg": "1",
@@ -756,6 +838,7 @@
},
{
"BriefDescription": "Ingress Arbiter Blocking Cycles; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_C_RxR_EXT_STARVED.PRQ",
"PerPkg": "1",
@@ -765,6 +848,7 @@
},
{
"BriefDescription": "Ingress Allocations; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IPQ",
"PerPkg": "1",
@@ -774,6 +858,7 @@
},
{
"BriefDescription": "Ingress Allocations; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IRQ",
"PerPkg": "1",
@@ -783,6 +868,7 @@
},
{
"BriefDescription": "Ingress Allocations; IRQ Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.IRQ_REJ",
"PerPkg": "1",
@@ -792,6 +878,7 @@
},
{
"BriefDescription": "Ingress Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.PRQ",
"PerPkg": "1",
@@ -801,6 +888,7 @@
},
{
"BriefDescription": "Ingress Allocations; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_C_RxR_INSERTS.PRQ_REJ",
"PerPkg": "1",
@@ -810,6 +898,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.IPQ",
"PerPkg": "1",
@@ -819,6 +908,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.IRQ",
"PerPkg": "1",
@@ -828,6 +918,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; ISMQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.ISMQ",
"PerPkg": "1",
@@ -837,6 +928,7 @@
},
{
"BriefDescription": "Ingress Internal Starvation Cycles; PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_C_RxR_INT_STARVED.PRQ",
"PerPkg": "1",
@@ -846,6 +938,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Address Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.ADDR_CONFLICT",
"PerPkg": "1",
@@ -855,6 +948,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.ANY",
"PerPkg": "1",
@@ -864,6 +958,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.FULL",
"PerPkg": "1",
@@ -873,6 +968,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_C_RxR_IPQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -882,6 +978,7 @@
},
{
"BriefDescription": "Probe Queue Retries; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_C_RxR_IPQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -891,6 +988,7 @@
},
{
"BriefDescription": "Probe Queue Retries; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_C_RxR_IPQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -900,6 +998,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Address Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.ADDR_CONFLICT",
"PerPkg": "1",
@@ -909,6 +1008,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.ANY",
"PerPkg": "1",
@@ -918,6 +1018,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.FULL",
"PerPkg": "1",
@@ -927,6 +1028,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No IIO Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.IIO_CREDITS",
"PerPkg": "1",
@@ -936,6 +1038,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.NID",
"PerPkg": "1",
@@ -945,6 +1048,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -954,6 +1058,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No RTIDs",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_C_RxR_IRQ_RETRY.RTID",
"PerPkg": "1",
@@ -963,6 +1068,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -972,6 +1078,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; No BL Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.BL_SBO",
"PerPkg": "1",
@@ -981,6 +1088,7 @@
},
{
"BriefDescription": "Ingress Request Queue Rejects; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_C_RxR_IRQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -990,6 +1098,7 @@
},
{
"BriefDescription": "ISMQ Retries; Any Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.ANY",
"PerPkg": "1",
@@ -999,6 +1108,7 @@
},
{
"BriefDescription": "ISMQ Retries; No Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.FULL",
"PerPkg": "1",
@@ -1008,6 +1118,7 @@
},
{
"BriefDescription": "ISMQ Retries; No IIO Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.IIO_CREDITS",
"PerPkg": "1",
@@ -1017,6 +1128,7 @@
},
{
"BriefDescription": "ISMQ Retries",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.NID",
"PerPkg": "1",
@@ -1026,6 +1138,7 @@
},
{
"BriefDescription": "ISMQ Retries; No QPI Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.QPI_CREDITS",
"PerPkg": "1",
@@ -1035,6 +1148,7 @@
},
{
"BriefDescription": "ISMQ Retries; No RTIDs",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.RTID",
"PerPkg": "1",
@@ -1044,6 +1158,7 @@
},
{
"BriefDescription": "ISMQ Retries",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_C_RxR_ISMQ_RETRY.WB_CREDITS",
"PerPkg": "1",
@@ -1053,6 +1168,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; No AD Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.AD_SBO",
"PerPkg": "1",
@@ -1062,6 +1178,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; No BL Sbo Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.BL_SBO",
"PerPkg": "1",
@@ -1071,6 +1188,7 @@
},
{
"BriefDescription": "ISMQ Request Queue Rejects; Target Node Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_C_RxR_ISMQ_RETRY2.TARGET",
"PerPkg": "1",
@@ -1080,6 +1198,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IPQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IPQ",
"PerPkg": "1",
@@ -1089,6 +1208,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IRQ",
"PerPkg": "1",
@@ -1098,6 +1218,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IRQ Rejected",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.IRQ_REJ",
"PerPkg": "1",
@@ -1107,6 +1228,7 @@
},
{
"BriefDescription": "Ingress Occupancy; PRQ Rejects",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_C_RxR_OCCUPANCY.PRQ_REJ",
"PerPkg": "1",
@@ -1116,6 +1238,7 @@
},
{
"BriefDescription": "SBo Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_C_SBO_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -1125,6 +1248,7 @@
},
{
"BriefDescription": "SBo Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_C_SBO_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -1134,6 +1258,7 @@
},
{
"BriefDescription": "SBo Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x3E",
"EventName": "UNC_C_SBO_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -1143,6 +1268,7 @@
},
{
"BriefDescription": "SBo Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x3E",
"EventName": "UNC_C_SBO_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -1152,6 +1278,7 @@
},
{
"BriefDescription": "TOR Inserts; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.ALL",
"PerPkg": "1",
@@ -1161,6 +1288,7 @@
},
{
"BriefDescription": "TOR Inserts; Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.EVICTION",
"PerPkg": "1",
@@ -1170,6 +1298,7 @@
},
{
"BriefDescription": "TOR Inserts; Local Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOCAL",
"PerPkg": "1",
@@ -1179,6 +1308,7 @@
},
{
"BriefDescription": "TOR Inserts; Local Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.LOCAL_OPCODE",
"PerPkg": "1",
@@ -1188,6 +1318,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Local Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL",
"PerPkg": "1",
@@ -1197,6 +1328,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Local Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_LOCAL_OPCODE",
"PerPkg": "1",
@@ -1206,6 +1338,7 @@
},
{
"BriefDescription": "TOR Inserts; Miss Opcode Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_OPCODE",
"PerPkg": "1",
@@ -1215,6 +1348,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Remote Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE",
"PerPkg": "1",
@@ -1224,6 +1358,7 @@
},
{
"BriefDescription": "TOR Inserts; Misses to Remote Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.MISS_REMOTE_OPCODE",
"PerPkg": "1",
@@ -1233,6 +1368,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_ALL",
"PerPkg": "1",
@@ -1242,6 +1378,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_EVICTION",
"PerPkg": "1",
@@ -1251,6 +1388,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Miss All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_MISS_ALL",
"PerPkg": "1",
@@ -1260,6 +1398,7 @@
},
{
"BriefDescription": "TOR Inserts; NID and Opcode Matched Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_MISS_OPCODE",
"PerPkg": "1",
@@ -1269,6 +1408,7 @@
},
{
"BriefDescription": "TOR Inserts; NID and Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_OPCODE",
"PerPkg": "1",
@@ -1278,6 +1418,7 @@
},
{
"BriefDescription": "TOR Inserts; NID Matched Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.NID_WB",
"PerPkg": "1",
@@ -1287,6 +1428,7 @@
},
{
"BriefDescription": "TOR Inserts; Opcode Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.OPCODE",
"PerPkg": "1",
@@ -1296,6 +1438,7 @@
},
{
"BriefDescription": "TOR Inserts; Remote Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.REMOTE",
"PerPkg": "1",
@@ -1305,6 +1448,7 @@
},
{
"BriefDescription": "TOR Inserts; Remote Memory - Opcode Matched",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.REMOTE_OPCODE",
"PerPkg": "1",
@@ -1314,6 +1458,7 @@
},
{
"BriefDescription": "TOR Inserts; Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_C_TOR_INSERTS.WB",
"PerPkg": "1",
@@ -1323,6 +1468,7 @@
},
{
"BriefDescription": "TOR Occupancy; Any",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -1332,6 +1478,7 @@
},
{
"BriefDescription": "TOR Occupancy; Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.EVICTION",
"PerPkg": "1",
@@ -1341,6 +1488,7 @@
},
{
"BriefDescription": "Occupancy counter for LLC data reads (demand and L2 prefetch). Derived from unc_c_tor_occupancy.miss_opcode",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LLC_DATA_READ",
"Filter": "filter_opc=0x182",
@@ -1351,6 +1499,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -1360,6 +1509,7 @@
},
{
"BriefDescription": "TOR Occupancy; Local Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.LOCAL_OPCODE",
"PerPkg": "1",
@@ -1369,6 +1519,7 @@
},
{
"BriefDescription": "TOR Occupancy; Miss All",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_ALL",
"PerPkg": "1",
@@ -1378,6 +1529,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_LOCAL",
"PerPkg": "1",
@@ -1387,6 +1539,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses to Local Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_LOCAL_OPCODE",
"PerPkg": "1",
@@ -1396,6 +1549,7 @@
},
{
"BriefDescription": "TOR Occupancy; Miss Opcode Match",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_OPCODE",
"PerPkg": "1",
@@ -1405,6 +1559,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_REMOTE",
"PerPkg": "1",
@@ -1414,6 +1569,7 @@
},
{
"BriefDescription": "TOR Occupancy; Misses to Remote Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.MISS_REMOTE_OPCODE",
"PerPkg": "1",
@@ -1423,6 +1579,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_ALL",
"PerPkg": "1",
@@ -1432,6 +1589,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_EVICTION",
"PerPkg": "1",
@@ -1441,6 +1599,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_MISS_ALL",
"PerPkg": "1",
@@ -1450,6 +1609,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID and Opcode Matched Miss",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_MISS_OPCODE",
"PerPkg": "1",
@@ -1459,6 +1619,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID and Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_OPCODE",
"PerPkg": "1",
@@ -1468,6 +1629,7 @@
},
{
"BriefDescription": "TOR Occupancy; NID Matched Writebacks",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.NID_WB",
"PerPkg": "1",
@@ -1477,6 +1639,7 @@
},
{
"BriefDescription": "TOR Occupancy; Opcode Match",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.OPCODE",
"PerPkg": "1",
@@ -1486,6 +1649,7 @@
},
{
"BriefDescription": "TOR Occupancy",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -1495,6 +1659,7 @@
},
{
"BriefDescription": "TOR Occupancy; Remote Memory - Opcode Matched",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.REMOTE_OPCODE",
"PerPkg": "1",
@@ -1504,6 +1669,7 @@
},
{
"BriefDescription": "TOR Occupancy; Writebacks",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_C_TOR_OCCUPANCY.WB",
"PerPkg": "1",
@@ -1513,6 +1679,7 @@
},
{
"BriefDescription": "Onto AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.AD",
"PerPkg": "1",
@@ -1521,6 +1688,7 @@
},
{
"BriefDescription": "Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.AK",
"PerPkg": "1",
@@ -1529,6 +1697,7 @@
},
{
"BriefDescription": "Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_C_TxR_ADS_USED.BL",
"PerPkg": "1",
@@ -1537,6 +1706,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AD_CACHE",
"PerPkg": "1",
@@ -1546,6 +1716,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AD_CORE",
"PerPkg": "1",
@@ -1555,6 +1726,7 @@
},
{
"BriefDescription": "Egress Allocations; AK - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AK_CACHE",
"PerPkg": "1",
@@ -1564,6 +1736,7 @@
},
{
"BriefDescription": "Egress Allocations; AK - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.AK_CORE",
"PerPkg": "1",
@@ -1573,6 +1746,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Cacheno",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.BL_CACHE",
"PerPkg": "1",
@@ -1582,6 +1756,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Corebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.BL_CORE",
"PerPkg": "1",
@@ -1591,6 +1766,7 @@
},
{
"BriefDescription": "Egress Allocations; IV - Cachebo",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_C_TxR_INSERTS.IV_CACHE",
"PerPkg": "1",
@@ -1600,6 +1776,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AD Ring (to core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.AD_CORE",
"PerPkg": "1",
@@ -1609,6 +1786,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.AK_BOTH",
"PerPkg": "1",
@@ -1618,6 +1796,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.BL_BOTH",
"PerPkg": "1",
@@ -1627,6 +1806,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto IV Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_C_TxR_STARVED.IV",
"PerPkg": "1",
@@ -1636,6 +1816,7 @@
},
{
"BriefDescription": "BT Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_H_BT_CYCLES_NE",
"PerPkg": "1",
@@ -1644,6 +1825,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_BL_HAZARD",
"PerPkg": "1",
@@ -1653,6 +1835,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Snoop Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.INCOMING_SNP_HAZARD",
"PerPkg": "1",
@@ -1662,6 +1845,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.RSPACKCFLT_HAZARD",
"PerPkg": "1",
@@ -1671,6 +1855,7 @@
},
{
"BriefDescription": "BT to HT Not Issued; Incoming Data Hazard",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_H_BT_TO_HT_NOT_ISSUED.WBMDATA_HAZARD",
"PerPkg": "1",
@@ -1680,24 +1865,27 @@
},
{
"BriefDescription": "HA to iMC Bypass; Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_H_BYPASS_IMC.NOT_TAKEN",
"PerPkg": "1",
- "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filted by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass.",
+ "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that could not take the bypass.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "HA to iMC Bypass; Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_H_BYPASS_IMC.TAKEN",
"PerPkg": "1",
- "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filted by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the bypass.",
+ "PublicDescription": "Counts the number of times when the HA was able to bypass was attempted. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not.; Filter for transactions that succeeded in taking the bypass.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "uclks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_H_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Counts the number of uclks in the HA. This will be slightly different than the count in the Ubox because of enable/freeze delays. The HA is on the other side of the die from the fixed Ubox uclk counter, so the drift could be somewhat larger than in units that are closer like the QPI Agent.",
@@ -1705,6 +1893,7 @@
},
{
"BriefDescription": "Direct2Core Messages Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_H_DIRECT2CORE_COUNT",
"PerPkg": "1",
@@ -1713,6 +1902,7 @@
},
{
"BriefDescription": "Cycles when Direct2Core was Disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_H_DIRECT2CORE_CYCLES_DISABLED",
"PerPkg": "1",
@@ -1721,6 +1911,7 @@
},
{
"BriefDescription": "Number of Reads that had Direct2Core Overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_H_DIRECT2CORE_TXN_OVERRIDE",
"PerPkg": "1",
@@ -1729,6 +1920,7 @@
},
{
"BriefDescription": "Directory Lat Opt Return",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_H_DIRECTORY_LAT_OPT",
"PerPkg": "1",
@@ -1737,6 +1929,7 @@
},
{
"BriefDescription": "Directory Lookups; Snoop Not Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_H_DIRECTORY_LOOKUP.NO_SNP",
"PerPkg": "1",
@@ -1746,6 +1939,7 @@
},
{
"BriefDescription": "Directory Lookups; Snoop Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_H_DIRECTORY_LOOKUP.SNP",
"PerPkg": "1",
@@ -1755,6 +1949,7 @@
},
{
"BriefDescription": "Directory Updates; Any Directory Update",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.ANY",
"PerPkg": "1",
@@ -1764,6 +1959,7 @@
},
{
"BriefDescription": "Directory Updates; Directory Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.CLEAR",
"PerPkg": "1",
@@ -1773,6 +1969,7 @@
},
{
"BriefDescription": "Directory Updates; Directory Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_H_DIRECTORY_UPDATE.SET",
"PerPkg": "1",
@@ -1782,6 +1979,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1790,6 +1988,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ALL",
"PerPkg": "1",
@@ -1798,6 +1997,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.ALLOCS",
"PerPkg": "1",
@@ -1806,6 +2006,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.EVICTS",
"PerPkg": "1",
@@ -1814,6 +2015,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.HOM",
"PerPkg": "1",
@@ -1822,6 +2024,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; Invalidations",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.INVALS",
"PerPkg": "1",
@@ -1830,6 +2033,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.READ_OR_INVITOE",
"PerPkg": "1",
@@ -1838,6 +2042,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSP",
"PerPkg": "1",
@@ -1846,6 +2051,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -1854,6 +2060,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -1862,6 +2069,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.RSPFWDS",
"PerPkg": "1",
@@ -1870,6 +2078,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.WBMTOE_OR_S",
"PerPkg": "1",
@@ -1878,6 +2087,7 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_H_HITME_HIT.WBMTOI",
"PerPkg": "1",
@@ -1886,6 +2096,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1894,6 +2105,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.ALL",
"PerPkg": "1",
@@ -1902,6 +2114,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.HOM",
"PerPkg": "1",
@@ -1910,6 +2123,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.READ_OR_INVITOE",
"PerPkg": "1",
@@ -1918,6 +2132,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSP",
"PerPkg": "1",
@@ -1926,6 +2141,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -1934,6 +2150,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -1942,6 +2159,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.RSPFWDS",
"PerPkg": "1",
@@ -1950,6 +2168,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.WBMTOE_OR_S",
"PerPkg": "1",
@@ -1958,6 +2177,7 @@
},
{
"BriefDescription": "Accumulates Number of PV bits set on HitMe Cache Hits; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_H_HITME_HIT_PV_BITS_SET.WBMTOI",
"PerPkg": "1",
@@ -1966,6 +2186,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is AckCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ACKCNFLTWBI",
"PerPkg": "1",
@@ -1974,6 +2195,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ALL",
"PerPkg": "1",
@@ -1982,6 +2204,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.ALLOCS",
"PerPkg": "1",
@@ -1990,6 +2213,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; HOM Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.HOM",
"PerPkg": "1",
@@ -1998,6 +2222,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; Invalidations",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.INVALS",
"PerPkg": "1",
@@ -2006,6 +2231,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RdCode, RdData, RdDataMigratory, RdInvOwn, RdCur or InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.READ_OR_INVITOE",
"PerPkg": "1",
@@ -2014,6 +2240,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspI, RspIWb, RspS, RspSWb, RspCnflt or RspCnfltWbI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSP",
"PerPkg": "1",
@@ -2022,6 +2249,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDI_LOCAL",
"PerPkg": "1",
@@ -2030,6 +2258,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDI_REMOTE",
"PerPkg": "1",
@@ -2038,6 +2267,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is RsSFwd or RspSFwdWb",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.RSPFWDS",
"PerPkg": "1",
@@ -2046,6 +2276,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is WbMtoE or WbMtoS",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.WBMTOE_OR_S",
"PerPkg": "1",
@@ -2054,6 +2285,7 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed; op is WbMtoI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_H_HITME_LOOKUP.WBMTOI",
"PerPkg": "1",
@@ -2062,6 +2294,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; AD to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI0",
"PerPkg": "1",
@@ -2071,6 +2304,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; AD to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI1",
"PerPkg": "1",
@@ -2080,6 +2314,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.AD_QPI2",
"PerPkg": "1",
@@ -2089,6 +2324,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI0",
"PerPkg": "1",
@@ -2098,6 +2334,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI1",
"PerPkg": "1",
@@ -2107,6 +2344,7 @@
},
{
"BriefDescription": "Cycles without QPI Ingress Credits; BL to QPI Link 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_H_IGR_NO_CREDIT_CYCLES.BL_QPI2",
"PerPkg": "1",
@@ -2116,6 +2354,7 @@
},
{
"BriefDescription": "HA to iMC Normal Priority Reads Issued; Normal Priority",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_H_IMC_READS.NORMAL",
"PerPkg": "1",
@@ -2125,6 +2364,7 @@
},
{
"BriefDescription": "Retry Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_H_IMC_RETRY",
"PerPkg": "1",
@@ -2132,6 +2372,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; All Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.ALL",
"PerPkg": "1",
@@ -2141,6 +2382,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; Full Line Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.FULL",
"PerPkg": "1",
@@ -2150,6 +2392,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; ISOCH Full Line",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.FULL_ISOCH",
"PerPkg": "1",
@@ -2159,6 +2402,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; Partial Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.PARTIAL",
"PerPkg": "1",
@@ -2168,6 +2412,7 @@
},
{
"BriefDescription": "HA to iMC Full Line Writes Issued; ISOCH Partial",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_H_IMC_WRITES.PARTIAL_ISOCH",
"PerPkg": "1",
@@ -2177,6 +2422,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_H_IOT_BACKPRESSURE.HUB",
"PerPkg": "1",
@@ -2185,6 +2431,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0x61",
"EventName": "UNC_H_IOT_BACKPRESSURE.SAT",
"PerPkg": "1",
@@ -2193,6 +2440,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x64",
"EventName": "UNC_H_IOT_CTS_EAST_LO.CTS0",
"PerPkg": "1",
@@ -2202,6 +2450,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x64",
"EventName": "UNC_H_IOT_CTS_EAST_LO.CTS1",
"PerPkg": "1",
@@ -2211,6 +2460,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0x65",
"EventName": "UNC_H_IOT_CTS_HI.CTS2",
"PerPkg": "1",
@@ -2220,6 +2470,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0x65",
"EventName": "UNC_H_IOT_CTS_HI.CTS3",
"PerPkg": "1",
@@ -2229,6 +2480,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_H_IOT_CTS_WEST_LO.CTS0",
"PerPkg": "1",
@@ -2238,6 +2490,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0x62",
"EventName": "UNC_H_IOT_CTS_WEST_LO.CTS1",
"PerPkg": "1",
@@ -2247,6 +2500,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Cancelled",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.CANCELLED",
"PerPkg": "1",
@@ -2256,6 +2510,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Local InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2265,6 +2520,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Local Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.READS_LOCAL",
"PerPkg": "1",
@@ -2274,6 +2530,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Reads Local - Useful",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.READS_LOCAL_USEFUL",
"PerPkg": "1",
@@ -2283,6 +2540,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.REMOTE",
"PerPkg": "1",
@@ -2292,6 +2550,7 @@
},
{
"BriefDescription": "OSB Snoop Broadcast; Remote - Useful",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_H_OSB.REMOTE_USEFUL",
"PerPkg": "1",
@@ -2301,6 +2560,7 @@
},
{
"BriefDescription": "OSB Early Data Return; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.ALL",
"PerPkg": "1",
@@ -2310,6 +2570,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Local I",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_LOCAL_I",
"PerPkg": "1",
@@ -2319,6 +2580,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Local S",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_LOCAL_S",
"PerPkg": "1",
@@ -2328,6 +2590,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Remote I",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_REMOTE_I",
"PerPkg": "1",
@@ -2337,6 +2600,7 @@
},
{
"BriefDescription": "OSB Early Data Return; Reads to Remote S",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_H_OSB_EDR.READS_REMOTE_S",
"PerPkg": "1",
@@ -2346,6 +2610,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local InvItoEs",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2355,6 +2620,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote InvItoEs",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.INVITOE_REMOTE",
"PerPkg": "1",
@@ -2364,6 +2630,7 @@
},
{
"BriefDescription": "Read and Write Requests; Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS",
"PerPkg": "1",
@@ -2373,6 +2640,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS_LOCAL",
"PerPkg": "1",
@@ -2382,6 +2650,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.READS_REMOTE",
"PerPkg": "1",
@@ -2391,6 +2660,7 @@
},
{
"BriefDescription": "Read and Write Requests; Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES",
"PerPkg": "1",
@@ -2400,6 +2670,7 @@
},
{
"BriefDescription": "Read and Write Requests; Local Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES_LOCAL",
"PerPkg": "1",
@@ -2409,6 +2680,7 @@
},
{
"BriefDescription": "Read and Write Requests; Remote Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_H_REQUESTS.WRITES_REMOTE",
"PerPkg": "1",
@@ -2418,6 +2690,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -2427,6 +2700,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2436,6 +2710,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -2445,6 +2720,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW",
"PerPkg": "1",
@@ -2454,6 +2730,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -2463,6 +2740,7 @@
},
{
"BriefDescription": "HA AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_H_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -2472,6 +2750,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -2481,6 +2760,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2490,6 +2770,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -2499,6 +2780,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW",
"PerPkg": "1",
@@ -2508,6 +2790,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -2517,6 +2800,7 @@
},
{
"BriefDescription": "HA AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_H_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -2526,6 +2810,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -2535,6 +2820,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2544,6 +2830,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -2553,6 +2840,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW",
"PerPkg": "1",
@@ -2562,6 +2850,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -2571,6 +2860,7 @@
},
{
"BriefDescription": "HA BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_H_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -2580,42 +2870,47 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN0",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN2",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
"UMask": "0x4",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Regular; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_H_RPQ_CYCLES_NO_REG_CREDITS.CHN3",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting reads from the HA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's RPQ (read pending queue). This queue is broken into regular credits/buffers that are used by general reads, and special requests such as ISOCH reads. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
"UMask": "0x8",
"Unit": "HA"
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN0",
"PerPkg": "1",
@@ -2625,6 +2920,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN1",
"PerPkg": "1",
@@ -2634,6 +2930,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN2",
"PerPkg": "1",
@@ -2643,6 +2940,7 @@
},
{
"BriefDescription": "iMC RPQ Credits Empty - Special; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_H_RPQ_CYCLES_NO_SPEC_CREDITS.CHN3",
"PerPkg": "1",
@@ -2652,6 +2950,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_H_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2661,6 +2960,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_H_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2670,6 +2970,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_H_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2679,6 +2980,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_H_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2688,6 +2990,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_H_SBO1_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2697,6 +3000,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_H_SBO1_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2706,6 +3010,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_H_SBO1_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2715,6 +3020,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_H_SBO1_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2724,6 +3030,7 @@
},
{
"BriefDescription": "Data beat the Snoop Responses; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_H_SNOOPS_RSP_AFTER_DATA.LOCAL",
"PerPkg": "1",
@@ -2733,6 +3040,7 @@
},
{
"BriefDescription": "Data beat the Snoop Responses; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_H_SNOOPS_RSP_AFTER_DATA.REMOTE",
"PerPkg": "1",
@@ -2742,6 +3050,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -2751,6 +3060,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.LOCAL",
"PerPkg": "1",
@@ -2760,6 +3070,7 @@
},
{
"BriefDescription": "Cycles with Snoops Outstanding; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_H_SNOOP_CYCLES_NE.REMOTE",
"PerPkg": "1",
@@ -2769,6 +3080,7 @@
},
{
"BriefDescription": "Tracker Snoops Outstanding Accumulator; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_H_SNOOP_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -2778,6 +3090,7 @@
},
{
"BriefDescription": "Tracker Snoops Outstanding Accumulator; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_H_SNOOP_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -2787,6 +3100,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RSPCNFLCT*",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPCNFLCT",
"PerPkg": "1",
@@ -2796,6 +3110,7 @@
},
{
"BriefDescription": "Snoop Responses Received; RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPI",
"PerPkg": "1",
@@ -2805,6 +3120,7 @@
},
{
"BriefDescription": "M line forwarded from remote cache with no writeback to memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPIFWD",
"PerPkg": "1",
@@ -2815,6 +3131,7 @@
},
{
"BriefDescription": "Shared line response from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPS",
"PerPkg": "1",
@@ -2825,6 +3142,7 @@
},
{
"BriefDescription": "Shared line forwarded from remote cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSPSFWD",
"PerPkg": "1",
@@ -2835,6 +3153,7 @@
},
{
"BriefDescription": "M line forwarded from remote cache along with writeback to memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSP_FWD_WB",
"PerPkg": "1",
@@ -2845,6 +3164,7 @@
},
{
"BriefDescription": "Snoop Responses Received; Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_H_SNOOP_RESP.RSP_WB",
"PerPkg": "1",
@@ -2854,6 +3174,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Other",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.OTHER",
"PerPkg": "1",
@@ -2863,6 +3184,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspCnflct",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPCNFLCT",
"PerPkg": "1",
@@ -2872,6 +3194,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPI",
"PerPkg": "1",
@@ -2881,6 +3204,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPIFWD",
"PerPkg": "1",
@@ -2890,6 +3214,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPS",
"PerPkg": "1",
@@ -2899,6 +3224,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPSFWD",
"PerPkg": "1",
@@ -2908,6 +3234,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*FWD*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPxFWDxWB",
"PerPkg": "1",
@@ -2917,6 +3244,7 @@
},
{
"BriefDescription": "Snoop Responses Received Local; Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_H_SNP_RESP_RECV_LOCAL.RSPxWB",
"PerPkg": "1",
@@ -2926,6 +3254,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -2935,6 +3264,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -2944,6 +3274,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -2953,6 +3284,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_H_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -2962,6 +3294,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION0",
"PerPkg": "1",
@@ -2971,6 +3304,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION1",
"PerPkg": "1",
@@ -2980,6 +3314,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION2",
"PerPkg": "1",
@@ -2989,6 +3324,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION3",
"PerPkg": "1",
@@ -2998,6 +3334,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION4",
"PerPkg": "1",
@@ -3007,6 +3344,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION5",
"PerPkg": "1",
@@ -3016,6 +3354,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION6",
"PerPkg": "1",
@@ -3025,6 +3364,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 0; TAD Region 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_H_TAD_REQUESTS_G0.REGION7",
"PerPkg": "1",
@@ -3034,6 +3374,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION10",
"PerPkg": "1",
@@ -3043,6 +3384,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 11",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION11",
"PerPkg": "1",
@@ -3052,6 +3394,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION8",
"PerPkg": "1",
@@ -3061,6 +3404,7 @@
},
{
"BriefDescription": "HA Requests to a TAD Region - Group 1; TAD Region 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_H_TAD_REQUESTS_G1.REGION9",
"PerPkg": "1",
@@ -3070,6 +3414,7 @@
},
{
"BriefDescription": "Tracker Cycles Full; Cycles Completely Used",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_H_TRACKER_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3079,6 +3424,7 @@
},
{
"BriefDescription": "Tracker Cycles Full; Cycles GP Completely Used",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_H_TRACKER_CYCLES_FULL.GP",
"PerPkg": "1",
@@ -3088,33 +3434,37 @@
},
{
"BriefDescription": "Tracker Cycles Not Empty; All Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.ALL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; Requests coming from both local and remote sockets.",
"UMask": "0x3",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.LOCAL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from the local socket.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Cycles Not Empty; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_H_TRACKER_CYCLES_NE.REMOTE",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occpancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets.",
+ "PublicDescription": "Counts the number of cycles when the local HA tracker pool is not empty. This can be used with edge detect to identify the number of situations when the pool became empty. This should not be confused with RTID credit usage -- which must be tracked inside each cbo individually -- but represents the actual tracker buffer structure. In other words, this buffer could be completely empty, but there may still be credits in use by the CBos. This stat can be used in conjunction with the occupancy accumulation stat in order to calculate average queue occupancy. HA trackers are allocated as soon as a request enters the HA if an HT (Home Tracker) entry is available and is released after the snoop response and data return (or post in the case of a write) and the response is returned on the ring.; This filter includes only requests coming from remote sockets.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local InvItoE Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.INVITOE_LOCAL",
"PerPkg": "1",
@@ -3124,6 +3474,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote InvItoE Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.INVITOE_REMOTE",
"PerPkg": "1",
@@ -3133,6 +3484,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local Read Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.READS_LOCAL",
"PerPkg": "1",
@@ -3142,6 +3494,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote Read Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.READS_REMOTE",
"PerPkg": "1",
@@ -3151,6 +3504,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Local Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.WRITES_LOCAL",
"PerPkg": "1",
@@ -3160,6 +3514,7 @@
},
{
"BriefDescription": "Tracker Occupancy Accumulator; Remote Write Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_H_TRACKER_OCCUPANCY.WRITES_REMOTE",
"PerPkg": "1",
@@ -3169,6 +3524,7 @@
},
{
"BriefDescription": "Data Pending Occupancy Accumulator; Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_H_TRACKER_PENDING_OCCUPANCY.LOCAL",
"PerPkg": "1",
@@ -3178,6 +3534,7 @@
},
{
"BriefDescription": "Data Pending Occupancy Accumulator; Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_H_TRACKER_PENDING_OCCUPANCY.REMOTE",
"PerPkg": "1",
@@ -3187,6 +3544,7 @@
},
{
"BriefDescription": "Outbound NDR Ring Transactions; Non-data Responses",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_H_TxR_AD.HOM",
"PerPkg": "1",
@@ -3196,6 +3554,7 @@
},
{
"BriefDescription": "AD Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3205,6 +3564,7 @@
},
{
"BriefDescription": "AD Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3214,6 +3574,7 @@
},
{
"BriefDescription": "AD Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_H_TxR_AD_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3223,6 +3584,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3232,6 +3594,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3241,6 +3604,7 @@
},
{
"BriefDescription": "AD Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_H_TxR_AD_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3250,6 +3614,7 @@
},
{
"BriefDescription": "AD Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.ALL",
"PerPkg": "1",
@@ -3259,6 +3624,7 @@
},
{
"BriefDescription": "AD Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3268,6 +3634,7 @@
},
{
"BriefDescription": "AD Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_H_TxR_AD_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3277,6 +3644,7 @@
},
{
"BriefDescription": "AK Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3286,6 +3654,7 @@
},
{
"BriefDescription": "AK Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3295,6 +3664,7 @@
},
{
"BriefDescription": "AK Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_H_TxR_AK_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3304,6 +3674,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3313,6 +3684,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3322,6 +3694,7 @@
},
{
"BriefDescription": "AK Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_H_TxR_AK_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3331,6 +3704,7 @@
},
{
"BriefDescription": "AK Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.ALL",
"PerPkg": "1",
@@ -3340,6 +3714,7 @@
},
{
"BriefDescription": "AK Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3349,6 +3724,7 @@
},
{
"BriefDescription": "AK Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_H_TxR_AK_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3358,6 +3734,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_CACHE",
"PerPkg": "1",
@@ -3367,6 +3744,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to Core",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_CORE",
"PerPkg": "1",
@@ -3376,6 +3754,7 @@
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache; Data to QPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_H_TxR_BL.DRS_QPI",
"PerPkg": "1",
@@ -3385,6 +3764,7 @@
},
{
"BriefDescription": "BL Egress Full; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.ALL",
"PerPkg": "1",
@@ -3394,6 +3774,7 @@
},
{
"BriefDescription": "BL Egress Full; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.SCHED0",
"PerPkg": "1",
@@ -3403,6 +3784,7 @@
},
{
"BriefDescription": "BL Egress Full; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_H_TxR_BL_CYCLES_FULL.SCHED1",
"PerPkg": "1",
@@ -3412,6 +3794,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.ALL",
"PerPkg": "1",
@@ -3421,6 +3804,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.SCHED0",
"PerPkg": "1",
@@ -3430,6 +3814,7 @@
},
{
"BriefDescription": "BL Egress Not Empty; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_H_TxR_BL_CYCLES_NE.SCHED1",
"PerPkg": "1",
@@ -3439,6 +3824,7 @@
},
{
"BriefDescription": "BL Egress Allocations; All",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.ALL",
"PerPkg": "1",
@@ -3448,6 +3834,7 @@
},
{
"BriefDescription": "BL Egress Allocations; Scheduler 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.SCHED0",
"PerPkg": "1",
@@ -3457,6 +3844,7 @@
},
{
"BriefDescription": "BL Egress Allocations; Scheduler 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_H_TxR_BL_INSERTS.SCHED1",
"PerPkg": "1",
@@ -3466,6 +3854,7 @@
},
{
"BriefDescription": "Injection Starvation; For AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_H_TxR_STARVED.AK",
"PerPkg": "1",
@@ -3475,6 +3864,7 @@
},
{
"BriefDescription": "Injection Starvation; For BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_H_TxR_STARVED.BL",
"PerPkg": "1",
@@ -3484,42 +3874,47 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN0",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 0 only.",
"UMask": "0x1",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN1",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 1 only.",
"UMask": "0x2",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN2",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 2 only.",
"UMask": "0x4",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Regular; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_H_WPQ_CYCLES_NO_REG_CREDITS.CHN3",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high banwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
+ "PublicDescription": "Counts the number of cycles when there are no regular credits available for posting writes from the HA into the iMC. In order to send writes into the memory controller, the HA must first acquire a credit for the iMC's WPQ (write pending queue). This queue is broken into regular credits/buffers that are used by general writes, and special requests such as ISOCH writes. This count only tracks the regular credits Common high bandwidth workloads should be able to make use of all of the regular buffers, but it will be difficult (and uncommon) to make use of both the regular and special buffers at the same time. One can filter based on the memory controller channel. One or more channels can be tracked at a given time.; Filter for memory controller channel 3 only.",
"UMask": "0x8",
"Unit": "HA"
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN0",
"PerPkg": "1",
@@ -3529,6 +3924,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN1",
"PerPkg": "1",
@@ -3538,6 +3934,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN2",
"PerPkg": "1",
@@ -3547,6 +3944,7 @@
},
{
"BriefDescription": "HA iMC CHN0 WPQ Credits Empty - Special; Channel 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_H_WPQ_CYCLES_NO_SPEC_CREDITS.CHN3",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
index bef1f5ef6f31..8f73a8649b39 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
@@ -1,24 +1,7 @@
[
{
- "BriefDescription": "Number of non data (control) flits transmitted . Derived from unc_q_txl_flits_g0.non_data",
- "EventName": "QPI_CTL_BANDWIDTH_TX",
- "PerPkg": "1",
- "PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-data flits transmitted across QPI. This basically tracks the protocol overhead on the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This includes the header flits for data packets.",
- "ScaleUnit": "8Bytes",
- "UMask": "0x4",
- "Unit": "QPI"
- },
- {
- "BriefDescription": "Number of data flits transmitted . Derived from unc_q_txl_flits_g0.data",
- "EventName": "QPI_DATA_BANDWIDTH_TX",
- "PerPkg": "1",
- "PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flits transmitted over QPI. Each flit contains 64b of data. This includes both DRS and NCB data flits (coherent and non-coherent). This can be used to calculate the data bandwidth of the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This does not include the header flits that go in data packets.",
- "ScaleUnit": "8Bytes",
- "UMask": "0x2",
- "Unit": "QPI"
- },
- {
"BriefDescription": "Total Write Cache Occupancy; Any Source",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.ANY",
"PerPkg": "1",
@@ -28,6 +11,7 @@
},
{
"BriefDescription": "Total Write Cache Occupancy; Select Source",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.SOURCE",
"PerPkg": "1",
@@ -37,6 +21,7 @@
},
{
"BriefDescription": "Clocks in the IRP",
+ "Counter": "0,1",
"EventName": "UNC_I_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Number of clocks in the IRP.",
@@ -44,78 +29,87 @@
},
{
"BriefDescription": "Coherent Ops; CLFlush",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.CLFLUSH",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; CRd",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.CRD",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; DRd",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.DRD",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIDCAHin5t",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCIDCAHINT",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIRdCur",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCIRDCUR",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; PCIItoM",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.PCITOM",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; RFO",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.RFO",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Coherent Ops; WbMtoI",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_COHERENT_OPS.WBMTOI",
"PerPkg": "1",
- "PublicDescription": "Counts the number of coherency related operations servied by the IRP",
+ "PublicDescription": "Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Atomic Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_ATOMIC_INSERT",
"PerPkg": "1",
@@ -125,6 +119,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Read Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_RD_INSERT",
"PerPkg": "1",
@@ -134,6 +129,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Cache Inserts of Write Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.2ND_WR_INSERT",
"PerPkg": "1",
@@ -143,6 +139,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Rejects",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_REJ",
"PerPkg": "1",
@@ -152,6 +149,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Requests",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_REQ",
"PerPkg": "1",
@@ -161,6 +159,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Fastpath Transfers From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.FAST_XFER",
"PerPkg": "1",
@@ -170,6 +169,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Prefetch Ack Hints From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.PF_ACK_HINT",
"PerPkg": "1",
@@ -179,6 +179,7 @@
},
{
"BriefDescription": "Misc Events - Set 0; Prefetch TimeOut",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_MISC0.PF_TIMEOUT",
"PerPkg": "1",
@@ -188,6 +189,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Data Throttled",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.DATA_THROTTLE",
"PerPkg": "1",
@@ -197,6 +199,7 @@
},
{
"BriefDescription": "Misc Events - Set 1",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.LOST_FWD",
"PerPkg": "1",
@@ -206,6 +209,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Invalid",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
"PerPkg": "1",
@@ -215,6 +219,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Received Valid",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SEC_RCVD_VLD",
"PerPkg": "1",
@@ -224,6 +229,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of E Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_E",
"PerPkg": "1",
@@ -233,6 +239,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of I Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_I",
"PerPkg": "1",
@@ -242,6 +249,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of M Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_M",
"PerPkg": "1",
@@ -251,6 +259,7 @@
},
{
"BriefDescription": "Misc Events - Set 1; Slow Transfer of S Line",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_MISC1.SLOW_S",
"PerPkg": "1",
@@ -260,6 +269,7 @@
},
{
"BriefDescription": "AK Ingress Occupancy",
+ "Counter": "0,1",
"EventCode": "0xA",
"EventName": "UNC_I_RxR_AK_INSERTS",
"PerPkg": "1",
@@ -268,6 +278,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
@@ -276,6 +287,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - DRS",
+ "Counter": "0,1",
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
@@ -284,6 +296,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_DRS_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
@@ -292,6 +305,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
@@ -300,6 +314,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - NCB",
+ "Counter": "0,1",
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
@@ -308,6 +323,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCB_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
@@ -316,6 +332,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
+ "Counter": "0,1",
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
@@ -324,6 +341,7 @@
},
{
"BriefDescription": "BL Ingress Occupancy - NCS",
+ "Counter": "0,1",
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
@@ -332,6 +350,7 @@
},
{
"BriefDescription": "UNC_I_RxR_BL_NCS_OCCUPANCY",
+ "Counter": "0,1",
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
@@ -340,6 +359,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit E or S",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_ES",
"PerPkg": "1",
@@ -349,6 +369,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit I",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_I",
"PerPkg": "1",
@@ -358,6 +379,7 @@
},
{
"BriefDescription": "Snoop Responses; Hit M",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.HIT_M",
"PerPkg": "1",
@@ -367,6 +389,7 @@
},
{
"BriefDescription": "Snoop Responses; Miss",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.MISS",
"PerPkg": "1",
@@ -376,6 +399,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpCode",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPCODE",
"PerPkg": "1",
@@ -385,6 +409,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpData",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPDATA",
"PerPkg": "1",
@@ -394,6 +419,7 @@
},
{
"BriefDescription": "Snoop Responses; SnpInv",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_SNOOP_RESP.SNPINV",
"PerPkg": "1",
@@ -403,6 +429,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Atomic",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.ATOMIC",
"PerPkg": "1",
@@ -412,6 +439,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Other",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.OTHER",
"PerPkg": "1",
@@ -421,6 +449,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Read Prefetches",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.RD_PREF",
"PerPkg": "1",
@@ -430,6 +459,7 @@
},
{
"BriefDescription": "Inbound Transaction Count; Reads",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.READS",
"PerPkg": "1",
@@ -439,15 +469,17 @@
},
{
"BriefDescription": "Inbound Transaction Count; Writes",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.WRITES",
"PerPkg": "1",
- "PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Trackes only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.",
+ "PublicDescription": "Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID.; Tracks only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Inbound Transaction Count; Write Prefetches",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_TRANSACTIONS.WR_PREF",
"PerPkg": "1",
@@ -457,6 +489,7 @@
},
{
"BriefDescription": "No AD Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x18",
"EventName": "UNC_I_TxR_AD_STALL_CREDIT_CYCLES",
"PerPkg": "1",
@@ -465,6 +498,7 @@
},
{
"BriefDescription": "No BL Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x19",
"EventName": "UNC_I_TxR_BL_STALL_CREDIT_CYCLES",
"PerPkg": "1",
@@ -473,6 +507,7 @@
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xE",
"EventName": "UNC_I_TxR_DATA_INSERTS_NCB",
"PerPkg": "1",
@@ -481,6 +516,7 @@
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0xF",
"EventName": "UNC_I_TxR_DATA_INSERTS_NCS",
"PerPkg": "1",
@@ -489,6 +525,7 @@
},
{
"BriefDescription": "Outbound Request Queue Occupancy",
+ "Counter": "0,1",
"EventCode": "0xD",
"EventName": "UNC_I_TxR_REQUEST_OCCUPANCY",
"PerPkg": "1",
@@ -497,6 +534,7 @@
},
{
"BriefDescription": "Number of qfclks",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_Q_CLOCKTICKS",
"PerPkg": "1",
@@ -505,6 +543,7 @@
},
{
"BriefDescription": "Count of CTO Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_Q_CTO_COUNT",
"PerPkg": "1",
@@ -513,6 +552,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS",
"PerPkg": "1",
@@ -522,6 +562,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_MISS",
"PerPkg": "1",
@@ -531,6 +572,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress and RBT Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT",
"PerPkg": "1",
@@ -540,6 +582,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - Egress and RBT Miss, Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_CREDITS_RBT_MISS",
"PerPkg": "1",
@@ -549,6 +592,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - RBT Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_MISS",
"PerPkg": "1",
@@ -558,6 +602,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - RBT Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_RBT_HIT",
"PerPkg": "1",
@@ -567,6 +612,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Failure - RBT Miss and Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.FAILURE_RBT_MISS",
"PerPkg": "1",
@@ -576,6 +622,7 @@
},
{
"BriefDescription": "Direct 2 Core Spawning; Spawn Success",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_DIRECT2CORE.SUCCESS_RBT_HIT",
"PerPkg": "1",
@@ -585,6 +632,7 @@
},
{
"BriefDescription": "Cycles in L1",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_Q_L1_POWER_CYCLES",
"PerPkg": "1",
@@ -593,6 +641,7 @@
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_Q_RxL0P_POWER_CYCLES",
"PerPkg": "1",
@@ -601,6 +650,7 @@
},
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_Q_RxL0_POWER_CYCLES",
"PerPkg": "1",
@@ -609,6 +659,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Bypassed",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_Q_RxL_BYPASSED",
"PerPkg": "1",
@@ -617,6 +668,7 @@
},
{
"BriefDescription": "CRC Errors Detected; LinkInit",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_CRC_ERRORS.LINK_INIT",
"PerPkg": "1",
@@ -626,6 +678,7 @@
},
{
"BriefDescription": "CRC Errors Detected; Normal Operations",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_CRC_ERRORS.NORMAL_OP",
"PerPkg": "1",
@@ -635,6 +688,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.DRS",
"PerPkg": "1",
@@ -644,6 +698,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.HOM",
"PerPkg": "1",
@@ -653,6 +708,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCB",
"PerPkg": "1",
@@ -662,6 +718,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.NCS",
"PerPkg": "1",
@@ -671,6 +728,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.NDR",
"PerPkg": "1",
@@ -680,6 +738,7 @@
},
{
"BriefDescription": "VN0 Credit Consumed; SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN0.SNP",
"PerPkg": "1",
@@ -689,6 +748,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.DRS",
"PerPkg": "1",
@@ -698,6 +758,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.HOM",
"PerPkg": "1",
@@ -707,6 +768,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.NCB",
"PerPkg": "1",
@@ -716,6 +778,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.NCS",
"PerPkg": "1",
@@ -725,6 +788,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.NDR",
"PerPkg": "1",
@@ -734,6 +798,7 @@
},
{
"BriefDescription": "VN1 Credit Consumed; SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VN1.SNP",
"PerPkg": "1",
@@ -743,6 +808,7 @@
},
{
"BriefDescription": "VNA Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_Q_RxL_CREDITS_CONSUMED_VNA",
"PerPkg": "1",
@@ -751,6 +817,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_Q_RxL_CYCLES_NE",
"PerPkg": "1",
@@ -759,6 +826,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_Q_RxL_CYCLES_NE_DRS.VN0",
"PerPkg": "1",
@@ -768,6 +836,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xF",
"EventName": "UNC_Q_RxL_CYCLES_NE_DRS.VN1",
"PerPkg": "1",
@@ -777,6 +846,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_Q_RxL_CYCLES_NE_HOM.VN0",
"PerPkg": "1",
@@ -786,6 +856,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_Q_RxL_CYCLES_NE_HOM.VN1",
"PerPkg": "1",
@@ -795,6 +866,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCB.VN0",
"PerPkg": "1",
@@ -804,6 +876,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCB.VN1",
"PerPkg": "1",
@@ -813,6 +886,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCS.VN0",
"PerPkg": "1",
@@ -822,6 +896,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_Q_RxL_CYCLES_NE_NCS.VN1",
"PerPkg": "1",
@@ -831,6 +906,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_Q_RxL_CYCLES_NE_NDR.VN0",
"PerPkg": "1",
@@ -840,6 +916,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_Q_RxL_CYCLES_NE_NDR.VN1",
"PerPkg": "1",
@@ -849,6 +926,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_RxL_CYCLES_NE_SNP.VN0",
"PerPkg": "1",
@@ -858,6 +936,7 @@
},
{
"BriefDescription": "RxQ Cycles Not Empty - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_Q_RxL_CYCLES_NE_SNP.VN1",
"PerPkg": "1",
@@ -867,6 +946,7 @@
},
{
"BriefDescription": "Flits Received - Group 0; Idle and Null Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_RxL_FLITS_G0.IDLE",
"PerPkg": "1",
@@ -876,6 +956,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; DRS Flits (both Header and Data)",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.DRS",
"PerPkg": "1",
@@ -885,6 +966,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; DRS Data Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.DRS_DATA",
"PerPkg": "1",
@@ -894,6 +976,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; DRS Header Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.DRS_NONDATA",
"PerPkg": "1",
@@ -903,6 +986,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; HOM Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.HOM",
"PerPkg": "1",
@@ -912,6 +996,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; HOM Non-Request Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.HOM_NONREQ",
"PerPkg": "1",
@@ -921,6 +1006,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; HOM Request Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.HOM_REQ",
"PerPkg": "1",
@@ -930,6 +1016,7 @@
},
{
"BriefDescription": "Flits Received - Group 1; SNP Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_RxL_FLITS_G1.SNP",
"PerPkg": "1",
@@ -939,6 +1026,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCB",
"PerPkg": "1",
@@ -948,6 +1036,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent data Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCB_DATA",
"PerPkg": "1",
@@ -957,6 +1046,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent non-data Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCB_NONDATA",
"PerPkg": "1",
@@ -966,6 +1056,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Coherent standard Rx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NCS",
"PerPkg": "1",
@@ -975,6 +1066,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Data Response Rx Flits - AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NDR_AD",
"PerPkg": "1",
@@ -984,6 +1076,7 @@
},
{
"BriefDescription": "Flits Received - Group 2; Non-Data Response Rx Flits - AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_Q_RxL_FLITS_G2.NDR_AK",
"PerPkg": "1",
@@ -993,6 +1086,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_Q_RxL_INSERTS",
"PerPkg": "1",
@@ -1001,6 +1095,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_Q_RxL_INSERTS_DRS.VN0",
"PerPkg": "1",
@@ -1010,6 +1105,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_Q_RxL_INSERTS_DRS.VN1",
"PerPkg": "1",
@@ -1019,6 +1115,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_Q_RxL_INSERTS_HOM.VN0",
"PerPkg": "1",
@@ -1028,6 +1125,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_Q_RxL_INSERTS_HOM.VN1",
"PerPkg": "1",
@@ -1037,6 +1135,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_Q_RxL_INSERTS_NCB.VN0",
"PerPkg": "1",
@@ -1046,6 +1145,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_Q_RxL_INSERTS_NCB.VN1",
"PerPkg": "1",
@@ -1055,6 +1155,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_Q_RxL_INSERTS_NCS.VN0",
"PerPkg": "1",
@@ -1064,6 +1165,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_Q_RxL_INSERTS_NCS.VN1",
"PerPkg": "1",
@@ -1073,6 +1175,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xE",
"EventName": "UNC_Q_RxL_INSERTS_NDR.VN0",
"PerPkg": "1",
@@ -1082,6 +1185,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xE",
"EventName": "UNC_Q_RxL_INSERTS_NDR.VN1",
"PerPkg": "1",
@@ -1091,6 +1195,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_Q_RxL_INSERTS_SNP.VN0",
"PerPkg": "1",
@@ -1100,6 +1205,7 @@
},
{
"BriefDescription": "Rx Flit Buffer Allocations - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_Q_RxL_INSERTS_SNP.VN1",
"PerPkg": "1",
@@ -1109,6 +1215,7 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_Q_RxL_OCCUPANCY",
"PerPkg": "1",
@@ -1117,6 +1224,7 @@
},
{
"BriefDescription": "RxQ Occupancy - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_Q_RxL_OCCUPANCY_DRS.VN0",
"PerPkg": "1",
@@ -1126,6 +1234,7 @@
},
{
"BriefDescription": "RxQ Occupancy - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_Q_RxL_OCCUPANCY_DRS.VN1",
"PerPkg": "1",
@@ -1135,6 +1244,7 @@
},
{
"BriefDescription": "RxQ Occupancy - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_Q_RxL_OCCUPANCY_HOM.VN0",
"PerPkg": "1",
@@ -1144,6 +1254,7 @@
},
{
"BriefDescription": "RxQ Occupancy - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_Q_RxL_OCCUPANCY_HOM.VN1",
"PerPkg": "1",
@@ -1153,6 +1264,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCB.VN0",
"PerPkg": "1",
@@ -1162,6 +1274,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCB.VN1",
"PerPkg": "1",
@@ -1171,6 +1284,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCS.VN0",
"PerPkg": "1",
@@ -1180,6 +1294,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_Q_RxL_OCCUPANCY_NCS.VN1",
"PerPkg": "1",
@@ -1189,6 +1304,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_Q_RxL_OCCUPANCY_NDR.VN0",
"PerPkg": "1",
@@ -1198,6 +1314,7 @@
},
{
"BriefDescription": "RxQ Occupancy - NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_Q_RxL_OCCUPANCY_NDR.VN1",
"PerPkg": "1",
@@ -1207,6 +1324,7 @@
},
{
"BriefDescription": "RxQ Occupancy - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_Q_RxL_OCCUPANCY_SNP.VN0",
"PerPkg": "1",
@@ -1216,6 +1334,7 @@
},
{
"BriefDescription": "RxQ Occupancy - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_Q_RxL_OCCUPANCY_SNP.VN1",
"PerPkg": "1",
@@ -1225,6 +1344,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_DRS",
"PerPkg": "1",
@@ -1234,6 +1354,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_HOM",
"PerPkg": "1",
@@ -1243,6 +1364,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_NCB",
"PerPkg": "1",
@@ -1252,6 +1374,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_NCS",
"PerPkg": "1",
@@ -1261,6 +1384,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_NDR",
"PerPkg": "1",
@@ -1270,6 +1394,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; BGF Stall - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.BGF_SNP",
"PerPkg": "1",
@@ -1279,6 +1404,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; Egress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.EGRESS_CREDITS",
"PerPkg": "1",
@@ -1288,6 +1414,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN0; GV",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_Q_RxL_STALLS_VN0.GV",
"PerPkg": "1",
@@ -1297,6 +1424,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - HOM",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_DRS",
"PerPkg": "1",
@@ -1306,6 +1434,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_HOM",
"PerPkg": "1",
@@ -1315,6 +1444,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - SNP",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_NCB",
"PerPkg": "1",
@@ -1324,6 +1454,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_NCS",
"PerPkg": "1",
@@ -1333,6 +1464,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_NDR",
"PerPkg": "1",
@@ -1342,6 +1474,7 @@
},
{
"BriefDescription": "Stalls Sending to R3QPI on VN1; BGF Stall - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_Q_RxL_STALLS_VN1.BGF_SNP",
"PerPkg": "1",
@@ -1351,6 +1484,7 @@
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_Q_TxL0P_POWER_CYCLES",
"PerPkg": "1",
@@ -1359,6 +1493,7 @@
},
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_Q_TxL0_POWER_CYCLES",
"PerPkg": "1",
@@ -1367,6 +1502,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Bypassed",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_Q_TxL_BYPASSED",
"PerPkg": "1",
@@ -1375,6 +1511,7 @@
},
{
"BriefDescription": "Cycles Stalled with no LLR Credits; LLR is almost full",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_TxL_CRC_NO_CREDITS.ALMOST_FULL",
"PerPkg": "1",
@@ -1384,6 +1521,7 @@
},
{
"BriefDescription": "Cycles Stalled with no LLR Credits; LLR is full",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_Q_TxL_CRC_NO_CREDITS.FULL",
"PerPkg": "1",
@@ -1393,6 +1531,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Cycles not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_Q_TxL_CYCLES_NE",
"PerPkg": "1",
@@ -1401,6 +1540,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 0; Data Tx Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G0.DATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of data flits transmitted over QPI. Each flit contains 64b of data. This includes both DRS and NCB data flits (coherent and non-coherent). This can be used to calculate the data bandwidth of the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This does not include the header flits that go in data packets.",
@@ -1409,6 +1549,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 0; Non-Data protocol Tx Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G0.NON_DATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. It includes filters for Idle, protocol, and Data Flits. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time (for L0) or 4B instead of 8B for L0p.; Number of non-NULL non-data flits transmitted across QPI. This basically tracks the protocol overhead on the QPI link. One can get a good picture of the QPI-link characteristics by evaluating the protocol flits, data flits, and idle/null flits. This includes the header flits for data packets.",
@@ -1417,6 +1558,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; DRS Flits (both Header and Data)",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.DRS",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency.",
@@ -1425,6 +1567,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; DRS Data Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.DRS_DATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of data flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits transmitted over the NCB channel which transmits non-coherent data. This includes only the data flits (not the header).",
@@ -1433,6 +1576,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; DRS Header Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.DRS_NONDATA",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the total number of protocol flits transmitted over QPI on the DRS (Data Response) channel. DRS flits are used to transmit data with coherency. This does not count data flits transmitted over the NCB channel which transmits non-coherent data. This includes only the header flits (not the data). This includes extended headers.",
@@ -1441,6 +1585,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; HOM Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.HOM",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of flits transmitted over QPI on the home channel.",
@@ -1449,6 +1594,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; HOM Non-Request Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.HOM_NONREQ",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of non-request flits transmitted over QPI on the home channel. These are most commonly snoop responses, and this event can be used as a proxy for that.",
@@ -1457,6 +1603,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; HOM Request Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.HOM_REQ",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of data request transmitted over QPI on the home channel. This basically counts the number of remote memory requests transmitted over QPI. In conjunction with the local read count in the Home Agent, one can calculate the number of LLC Misses.",
@@ -1465,6 +1612,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 1; SNP Flits",
+ "Counter": "0,1,2,3",
"EventName": "UNC_Q_TxL_FLITS_G1.SNP",
"PerPkg": "1",
"PublicDescription": "Counts the number of flits transmitted across the QPI Link. This is one of three groups that allow us to track flits. It includes filters for SNP, HOM, and DRS message classes. Each flit is made up of 80 bits of information (in addition to some ECC data). In full-width (L0) mode, flits are made up of four fits, each of which contains 20 bits of data (along with some additional ECC data). In half-width (L0p) mode, the fits are only 10 bits, and therefore it takes twice as many fits to transmit a flit. When one talks about QPI speed (for example, 8.0 GT/s), the transfers here refer to fits. Therefore, in L0, the system will transfer 1 flit at the rate of 1/4th the QPI speed. One can calculate the bandwidth of the link by taking: flits*80b/time. Note that this is not the same as data bandwidth. For example, when we are transferring a 64B cacheline across QPI, we will break it into 9 flits -- 1 with header information and 8 with 64 bits of actual data and an additional 16 bits of other information. To calculate data bandwidth, one should therefore do: data flits * 8B / time.; Counts the number of snoop request flits transmitted over QPI. These requests are contained in the snoop channel. This does not include snoop responses, which are transmitted on the home channel.",
@@ -1473,6 +1621,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent Bypass Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCB",
"PerPkg": "1",
@@ -1482,6 +1631,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent data Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCB_DATA",
"PerPkg": "1",
@@ -1491,6 +1641,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent non-data Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCB_NONDATA",
"PerPkg": "1",
@@ -1500,6 +1651,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Coherent standard Tx Flits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NCS",
"PerPkg": "1",
@@ -1509,6 +1661,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Data Response Tx Flits - AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NDR_AD",
"PerPkg": "1",
@@ -1518,6 +1671,7 @@
},
{
"BriefDescription": "Flits Transferred - Group 2; Non-Data Response Tx Flits - AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_Q_TxL_FLITS_G2.NDR_AK",
"PerPkg": "1",
@@ -1527,6 +1681,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_Q_TxL_INSERTS",
"PerPkg": "1",
@@ -1535,6 +1690,7 @@
},
{
"BriefDescription": "Tx Flit Buffer Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_Q_TxL_OCCUPANCY",
"PerPkg": "1",
@@ -1543,6 +1699,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1552,6 +1709,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1561,6 +1719,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD HOM; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1570,6 +1729,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD HOM; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_Q_TxR_AD_HOM_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1579,6 +1739,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1588,6 +1749,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1597,6 +1759,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1606,6 +1769,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD NDR; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_Q_TxR_AD_NDR_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1615,6 +1779,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1624,6 +1789,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1633,6 +1799,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD SNP; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1642,6 +1809,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AD SNP; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_Q_TxR_AD_SNP_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1651,6 +1819,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AK NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_Q_TxR_AK_NDR_CREDIT_ACQUIRED",
"PerPkg": "1",
@@ -1659,6 +1828,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - AK NDR",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_Q_TxR_AK_NDR_CREDIT_OCCUPANCY",
"PerPkg": "1",
@@ -1667,6 +1837,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1676,6 +1847,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1685,6 +1857,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - DRS; for Shared VN",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_ACQUIRED.VN_SHR",
"PerPkg": "1",
@@ -1694,6 +1867,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL DRS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1703,6 +1877,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL DRS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1712,6 +1887,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL DRS; for Shared VN",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_Q_TxR_BL_DRS_CREDIT_OCCUPANCY.VN_SHR",
"PerPkg": "1",
@@ -1721,6 +1897,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1730,6 +1907,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1739,6 +1917,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCB; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1748,6 +1927,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCB; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_Q_TxR_BL_NCB_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1757,6 +1937,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_ACQUIRED.VN0",
"PerPkg": "1",
@@ -1766,6 +1947,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_ACQUIRED.VN1",
"PerPkg": "1",
@@ -1775,6 +1957,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCS; for VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_OCCUPANCY.VN0",
"PerPkg": "1",
@@ -1784,6 +1967,7 @@
},
{
"BriefDescription": "R3QPI Egress Credit Occupancy - BL NCS; for VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_Q_TxR_BL_NCS_CREDIT_OCCUPANCY.VN1",
"PerPkg": "1",
@@ -1793,6 +1977,7 @@
},
{
"BriefDescription": "VNA Credits Returned",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_Q_VNA_CREDIT_RETURNS",
"PerPkg": "1",
@@ -1801,6 +1986,7 @@
},
{
"BriefDescription": "VNA Credits Pending Return - Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_Q_VNA_CREDIT_RETURN_OCCUPANCY",
"PerPkg": "1",
@@ -1809,6 +1995,7 @@
},
{
"BriefDescription": "Number of uclks in domain",
+ "Counter": "0,1,2",
"EventCode": "0x1",
"EventName": "UNC_R3_CLOCKTICKS",
"PerPkg": "1",
@@ -1817,6 +2004,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO10",
"PerPkg": "1",
@@ -1826,6 +2014,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO11",
"PerPkg": "1",
@@ -1835,6 +2024,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO12",
"PerPkg": "1",
@@ -1844,6 +2034,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO13",
"PerPkg": "1",
@@ -1853,6 +2044,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO14_16",
"PerPkg": "1",
@@ -1862,6 +2054,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO8",
"PerPkg": "1",
@@ -1871,6 +2064,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO9",
"PerPkg": "1",
@@ -1880,6 +2074,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_R3_C_HI_AD_CREDITS_EMPTY.CBO_15_17",
"PerPkg": "1",
@@ -1889,6 +2084,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO0",
"PerPkg": "1",
@@ -1898,6 +2094,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO1",
"PerPkg": "1",
@@ -1907,6 +2104,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO2",
"PerPkg": "1",
@@ -1916,6 +2114,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO3",
"PerPkg": "1",
@@ -1925,6 +2124,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO4",
"PerPkg": "1",
@@ -1934,6 +2134,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO5",
"PerPkg": "1",
@@ -1943,6 +2144,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO6",
"PerPkg": "1",
@@ -1952,6 +2154,7 @@
},
{
"BriefDescription": "CBox AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_R3_C_LO_AD_CREDITS_EMPTY.CBO7",
"PerPkg": "1",
@@ -1961,6 +2164,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.HA0",
"PerPkg": "1",
@@ -1970,6 +2174,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.HA1",
"PerPkg": "1",
@@ -1979,6 +2184,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.R2_NCB",
"PerPkg": "1",
@@ -1988,6 +2194,7 @@
},
{
"BriefDescription": "HA/R2 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R3_HA_R2_BL_CREDITS_EMPTY.R2_NCS",
"PerPkg": "1",
@@ -1997,6 +2204,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0xB",
"EventName": "UNC_R3_IOT_BACKPRESSURE.HUB",
"PerPkg": "1",
@@ -2005,6 +2213,7 @@
},
{
"BriefDescription": "IOT Backpressure",
+ "Counter": "0,1,2",
"EventCode": "0xB",
"EventName": "UNC_R3_IOT_BACKPRESSURE.SAT",
"PerPkg": "1",
@@ -2013,6 +2222,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0xD",
"EventName": "UNC_R3_IOT_CTS_HI.CTS2",
"PerPkg": "1",
@@ -2022,6 +2232,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Hi",
+ "Counter": "0,1,2",
"EventCode": "0xD",
"EventName": "UNC_R3_IOT_CTS_HI.CTS3",
"PerPkg": "1",
@@ -2031,6 +2242,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0xC",
"EventName": "UNC_R3_IOT_CTS_LO.CTS0",
"PerPkg": "1",
@@ -2040,6 +2252,7 @@
},
{
"BriefDescription": "IOT Common Trigger Sequencer - Lo",
+ "Counter": "0,1,2",
"EventCode": "0xC",
"EventName": "UNC_R3_IOT_CTS_LO.CTS1",
"PerPkg": "1",
@@ -2049,6 +2262,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_HOM",
"PerPkg": "1",
@@ -2058,6 +2272,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_NDR",
"PerPkg": "1",
@@ -2067,6 +2282,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN0_SNP",
"PerPkg": "1",
@@ -2076,6 +2292,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2085,6 +2302,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2094,6 +2312,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2103,6 +2322,7 @@
},
{
"BriefDescription": "QPI0 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_R3_QPI0_AD_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2112,6 +2332,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2121,6 +2342,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2130,6 +2352,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2139,6 +2362,7 @@
},
{
"BriefDescription": "QPI0 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x21",
"EventName": "UNC_R3_QPI0_BL_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2148,6 +2372,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2157,6 +2382,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2166,6 +2392,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2175,6 +2402,7 @@
},
{
"BriefDescription": "QPI1 AD Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2E",
"EventName": "UNC_R3_QPI1_AD_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2184,6 +2412,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_HOM",
"PerPkg": "1",
@@ -2193,6 +2422,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_NDR",
"PerPkg": "1",
@@ -2202,6 +2432,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN0_SNP",
"PerPkg": "1",
@@ -2211,6 +2442,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_HOM",
"PerPkg": "1",
@@ -2220,6 +2452,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_NDR",
"PerPkg": "1",
@@ -2229,6 +2462,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VN1_SNP",
"PerPkg": "1",
@@ -2238,6 +2472,7 @@
},
{
"BriefDescription": "QPI1 BL Credits Empty",
+ "Counter": "0,1",
"EventCode": "0x2F",
"EventName": "UNC_R3_QPI1_BL_CREDITS_EMPTY.VNA",
"PerPkg": "1",
@@ -2247,6 +2482,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -2256,6 +2492,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2265,6 +2502,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -2274,6 +2512,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CW",
"PerPkg": "1",
@@ -2283,6 +2522,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -2292,6 +2532,7 @@
},
{
"BriefDescription": "R3 AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x7",
"EventName": "UNC_R3_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -2301,6 +2542,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -2310,6 +2552,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2319,6 +2562,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -2328,6 +2572,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CW",
"PerPkg": "1",
@@ -2337,6 +2582,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -2346,6 +2592,7 @@
},
{
"BriefDescription": "R3 AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x8",
"EventName": "UNC_R3_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -2355,6 +2602,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -2364,6 +2612,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -2373,6 +2622,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -2382,6 +2632,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CW",
"PerPkg": "1",
@@ -2391,6 +2642,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -2400,6 +2652,7 @@
},
{
"BriefDescription": "R3 BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2",
"EventCode": "0x9",
"EventName": "UNC_R3_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -2409,6 +2662,7 @@
},
{
"BriefDescription": "R3 IV Ring in Use; Any",
+ "Counter": "0,1,2",
"EventCode": "0xA",
"EventName": "UNC_R3_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -2418,6 +2672,7 @@
},
{
"BriefDescription": "R3 IV Ring in Use; Clockwise",
+ "Counter": "0,1,2",
"EventCode": "0xA",
"EventName": "UNC_R3_RING_IV_USED.CW",
"PerPkg": "1",
@@ -2427,6 +2682,7 @@
},
{
"BriefDescription": "Ring Stop Starved; AK",
+ "Counter": "0,1,2",
"EventCode": "0xE",
"EventName": "UNC_R3_RING_SINK_STARVED.AK",
"PerPkg": "1",
@@ -2436,6 +2692,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; HOM",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R3_RxR_CYCLES_NE.HOM",
"PerPkg": "1",
@@ -2445,6 +2702,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NDR",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R3_RxR_CYCLES_NE.NDR",
"PerPkg": "1",
@@ -2454,6 +2712,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; SNP",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R3_RxR_CYCLES_NE.SNP",
"PerPkg": "1",
@@ -2463,6 +2722,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; DRS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.DRS",
"PerPkg": "1",
@@ -2472,6 +2732,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; HOM",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.HOM",
"PerPkg": "1",
@@ -2481,6 +2742,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; NCB",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.NCB",
"PerPkg": "1",
@@ -2490,6 +2752,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; NCS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.NCS",
"PerPkg": "1",
@@ -2499,6 +2762,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; NDR",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.NDR",
"PerPkg": "1",
@@ -2508,6 +2772,7 @@
},
{
"BriefDescription": "VN1 Ingress Cycles Not Empty; SNP",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_R3_RxR_CYCLES_NE_VN1.SNP",
"PerPkg": "1",
@@ -2517,6 +2782,7 @@
},
{
"BriefDescription": "Ingress Allocations; DRS",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.DRS",
"PerPkg": "1",
@@ -2526,6 +2792,7 @@
},
{
"BriefDescription": "Ingress Allocations; HOM",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.HOM",
"PerPkg": "1",
@@ -2535,6 +2802,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCB",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.NCB",
"PerPkg": "1",
@@ -2544,6 +2812,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCS",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.NCS",
"PerPkg": "1",
@@ -2553,6 +2822,7 @@
},
{
"BriefDescription": "Ingress Allocations; NDR",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.NDR",
"PerPkg": "1",
@@ -2562,6 +2832,7 @@
},
{
"BriefDescription": "Ingress Allocations; SNP",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R3_RxR_INSERTS.SNP",
"PerPkg": "1",
@@ -2571,6 +2842,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; DRS",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.DRS",
"PerPkg": "1",
@@ -2580,6 +2852,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; HOM",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.HOM",
"PerPkg": "1",
@@ -2589,6 +2862,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; NCB",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.NCB",
"PerPkg": "1",
@@ -2598,6 +2872,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; NCS",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.NCS",
"PerPkg": "1",
@@ -2607,6 +2882,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; NDR",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.NDR",
"PerPkg": "1",
@@ -2616,6 +2892,7 @@
},
{
"BriefDescription": "VN1 Ingress Allocations; SNP",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_R3_RxR_INSERTS_VN1.SNP",
"PerPkg": "1",
@@ -2625,6 +2902,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; DRS",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.DRS",
"PerPkg": "1",
@@ -2634,6 +2912,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; HOM",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.HOM",
"PerPkg": "1",
@@ -2643,6 +2922,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; NCB",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.NCB",
"PerPkg": "1",
@@ -2652,6 +2932,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; NCS",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.NCS",
"PerPkg": "1",
@@ -2661,6 +2942,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; NDR",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.NDR",
"PerPkg": "1",
@@ -2670,6 +2952,7 @@
},
{
"BriefDescription": "VN1 Ingress Occupancy Accumulator; SNP",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R3_RxR_OCCUPANCY_VN1.SNP",
"PerPkg": "1",
@@ -2679,6 +2962,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R3_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2688,6 +2972,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R3_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2697,6 +2982,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R3_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2706,6 +2992,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R3_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2715,6 +3002,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For AD Ring",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "UNC_R3_SBO1_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -2724,6 +3012,7 @@
},
{
"BriefDescription": "SBo1 Credits Acquired; For BL Ring",
+ "Counter": "0,1",
"EventCode": "0x29",
"EventName": "UNC_R3_SBO1_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -2733,6 +3022,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x2B",
"EventName": "UNC_R3_SBO1_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -2742,6 +3032,7 @@
},
{
"BriefDescription": "SBo1 Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x2B",
"EventName": "UNC_R3_SBO1_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -2751,6 +3042,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -2760,6 +3052,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -2769,6 +3062,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -2778,6 +3072,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R3_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -2787,6 +3082,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AD CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.DN_AD",
"PerPkg": "1",
@@ -2796,6 +3092,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.DN_AK",
"PerPkg": "1",
@@ -2805,6 +3102,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.DN_BL",
"PerPkg": "1",
@@ -2814,6 +3112,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.UP_AD",
"PerPkg": "1",
@@ -2823,6 +3122,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.UP_AK",
"PerPkg": "1",
@@ -2832,6 +3132,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R3_TxR_NACK.UP_BL",
"PerPkg": "1",
@@ -2841,6 +3142,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.DRS",
"PerPkg": "1",
@@ -2850,6 +3152,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.HOM",
"PerPkg": "1",
@@ -2859,6 +3162,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.NCB",
"PerPkg": "1",
@@ -2868,6 +3172,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.NCS",
"PerPkg": "1",
@@ -2877,6 +3182,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.NDR",
"PerPkg": "1",
@@ -2886,6 +3192,7 @@
},
{
"BriefDescription": "VN0 Credit Acquisition Failed on DRS; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x37",
"EventName": "UNC_R3_VN0_CREDITS_REJECT.SNP",
"PerPkg": "1",
@@ -2895,6 +3202,7 @@
},
{
"BriefDescription": "VN0 Credit Used; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.DRS",
"PerPkg": "1",
@@ -2904,6 +3212,7 @@
},
{
"BriefDescription": "VN0 Credit Used; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.HOM",
"PerPkg": "1",
@@ -2913,6 +3222,7 @@
},
{
"BriefDescription": "VN0 Credit Used; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.NCB",
"PerPkg": "1",
@@ -2922,6 +3232,7 @@
},
{
"BriefDescription": "VN0 Credit Used; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.NCS",
"PerPkg": "1",
@@ -2931,6 +3242,7 @@
},
{
"BriefDescription": "VN0 Credit Used; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.NDR",
"PerPkg": "1",
@@ -2940,6 +3252,7 @@
},
{
"BriefDescription": "VN0 Credit Used; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x36",
"EventName": "UNC_R3_VN0_CREDITS_USED.SNP",
"PerPkg": "1",
@@ -2949,6 +3262,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.DRS",
"PerPkg": "1",
@@ -2958,6 +3272,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.HOM",
"PerPkg": "1",
@@ -2967,6 +3282,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.NCB",
"PerPkg": "1",
@@ -2976,6 +3292,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.NCS",
"PerPkg": "1",
@@ -2985,6 +3302,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.NDR",
"PerPkg": "1",
@@ -2994,6 +3312,7 @@
},
{
"BriefDescription": "VN1 Credit Acquisition Failed on DRS; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x39",
"EventName": "UNC_R3_VN1_CREDITS_REJECT.SNP",
"PerPkg": "1",
@@ -3003,6 +3322,7 @@
},
{
"BriefDescription": "VN1 Credit Used; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.DRS",
"PerPkg": "1",
@@ -3012,6 +3332,7 @@
},
{
"BriefDescription": "VN1 Credit Used; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.HOM",
"PerPkg": "1",
@@ -3021,6 +3342,7 @@
},
{
"BriefDescription": "VN1 Credit Used; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.NCB",
"PerPkg": "1",
@@ -3030,6 +3352,7 @@
},
{
"BriefDescription": "VN1 Credit Used; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.NCS",
"PerPkg": "1",
@@ -3039,6 +3362,7 @@
},
{
"BriefDescription": "VN1 Credit Used; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.NDR",
"PerPkg": "1",
@@ -3048,6 +3372,7 @@
},
{
"BriefDescription": "VN1 Credit Used; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x38",
"EventName": "UNC_R3_VN1_CREDITS_USED.SNP",
"PerPkg": "1",
@@ -3057,6 +3382,7 @@
},
{
"BriefDescription": "VNA credit Acquisitions; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R3_VNA_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -3066,6 +3392,7 @@
},
{
"BriefDescription": "VNA credit Acquisitions; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R3_VNA_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -3075,6 +3402,7 @@
},
{
"BriefDescription": "VNA Credit Reject; DRS Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.DRS",
"PerPkg": "1",
@@ -3084,6 +3412,7 @@
},
{
"BriefDescription": "VNA Credit Reject; HOM Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.HOM",
"PerPkg": "1",
@@ -3093,6 +3422,7 @@
},
{
"BriefDescription": "VNA Credit Reject; NCB Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.NCB",
"PerPkg": "1",
@@ -3102,6 +3432,7 @@
},
{
"BriefDescription": "VNA Credit Reject; NCS Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.NCS",
"PerPkg": "1",
@@ -3111,6 +3442,7 @@
},
{
"BriefDescription": "VNA Credit Reject; NDR Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.NDR",
"PerPkg": "1",
@@ -3120,6 +3452,7 @@
},
{
"BriefDescription": "VNA Credit Reject; SNP Message Class",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_R3_VNA_CREDITS_REJECT.SNP",
"PerPkg": "1",
@@ -3129,6 +3462,7 @@
},
{
"BriefDescription": "Bounce Control",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_S_BOUNCE_CONTROL",
"PerPkg": "1",
@@ -3136,12 +3470,14 @@
},
{
"BriefDescription": "Uncore Clocks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_S_CLOCKTICKS",
"PerPkg": "1",
"Unit": "SBOX"
},
{
"BriefDescription": "FaST wire asserted",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_S_FAST_ASSERTED",
"PerPkg": "1",
@@ -3150,6 +3486,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.DOWN",
"PerPkg": "1",
@@ -3159,6 +3496,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Event",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -3168,6 +3506,7 @@
},
{
"BriefDescription": "AD Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.DOWN_ODD",
"PerPkg": "1",
@@ -3177,6 +3516,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.UP",
"PerPkg": "1",
@@ -3186,6 +3526,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.UP_EVEN",
"PerPkg": "1",
@@ -3195,6 +3536,7 @@
},
{
"BriefDescription": "AD Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_S_RING_AD_USED.UP_ODD",
"PerPkg": "1",
@@ -3204,6 +3546,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.DOWN",
"PerPkg": "1",
@@ -3213,6 +3556,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Event",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -3222,6 +3566,7 @@
},
{
"BriefDescription": "AK Ring In Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.DOWN_ODD",
"PerPkg": "1",
@@ -3231,6 +3576,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.UP",
"PerPkg": "1",
@@ -3240,6 +3586,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.UP_EVEN",
"PerPkg": "1",
@@ -3249,6 +3596,7 @@
},
{
"BriefDescription": "AK Ring In Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_S_RING_AK_USED.UP_ODD",
"PerPkg": "1",
@@ -3258,6 +3606,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.DOWN",
"PerPkg": "1",
@@ -3267,6 +3616,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Event",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.DOWN_EVEN",
"PerPkg": "1",
@@ -3276,6 +3626,7 @@
},
{
"BriefDescription": "BL Ring in Use; Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.DOWN_ODD",
"PerPkg": "1",
@@ -3285,6 +3636,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.UP",
"PerPkg": "1",
@@ -3294,6 +3646,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.UP_EVEN",
"PerPkg": "1",
@@ -3303,6 +3656,7 @@
},
{
"BriefDescription": "BL Ring in Use; Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_S_RING_BL_USED.UP_ODD",
"PerPkg": "1",
@@ -3312,6 +3666,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.AD_CACHE",
"PerPkg": "1",
@@ -3320,6 +3675,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.AK_CORE",
"PerPkg": "1",
@@ -3328,6 +3684,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.BL_CORE",
"PerPkg": "1",
@@ -3336,6 +3693,7 @@
},
{
"BriefDescription": "Number of LLC responses that bounced on the Ring.; Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_S_RING_BOUNCES.IV_CORE",
"PerPkg": "1",
@@ -3344,6 +3702,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_S_RING_IV_USED.DN",
"PerPkg": "1",
@@ -3353,6 +3712,7 @@
},
{
"BriefDescription": "BL Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0x1E",
"EventName": "UNC_S_RING_IV_USED.UP",
"PerPkg": "1",
@@ -3362,6 +3722,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.AD_CACHE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.AD_CACHE",
"PerPkg": "1",
@@ -3370,6 +3731,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.AK_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.AK_CORE",
"PerPkg": "1",
@@ -3378,6 +3740,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.BL_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.BL_CORE",
"PerPkg": "1",
@@ -3386,6 +3749,7 @@
},
{
"BriefDescription": "UNC_S_RING_SINK_STARVED.IV_CORE",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_S_RING_SINK_STARVED.IV_CORE",
"PerPkg": "1",
@@ -3394,6 +3758,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.AD_BNC",
"PerPkg": "1",
@@ -3403,6 +3768,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.AD_CRD",
"PerPkg": "1",
@@ -3412,6 +3778,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.BL_BNC",
"PerPkg": "1",
@@ -3421,6 +3788,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_S_RxR_BUSY_STARVED.BL_CRD",
"PerPkg": "1",
@@ -3430,6 +3798,7 @@
},
{
"BriefDescription": "Bypass; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.AD_BNC",
"PerPkg": "1",
@@ -3439,6 +3808,7 @@
},
{
"BriefDescription": "Bypass; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.AD_CRD",
"PerPkg": "1",
@@ -3448,6 +3818,7 @@
},
{
"BriefDescription": "Bypass; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.AK",
"PerPkg": "1",
@@ -3457,6 +3828,7 @@
},
{
"BriefDescription": "Bypass; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.BL_BNC",
"PerPkg": "1",
@@ -3466,6 +3838,7 @@
},
{
"BriefDescription": "Bypass; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.BL_CRD",
"PerPkg": "1",
@@ -3475,6 +3848,7 @@
},
{
"BriefDescription": "Bypass; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_S_RxR_BYPASS.IV",
"PerPkg": "1",
@@ -3484,6 +3858,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.AD_BNC",
"PerPkg": "1",
@@ -3493,6 +3868,7 @@
},
{
"BriefDescription": "Injection Starvation; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.AD_CRD",
"PerPkg": "1",
@@ -3502,6 +3878,7 @@
},
{
"BriefDescription": "Injection Starvation; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.AK",
"PerPkg": "1",
@@ -3511,6 +3888,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.BL_BNC",
"PerPkg": "1",
@@ -3520,6 +3898,7 @@
},
{
"BriefDescription": "Injection Starvation; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.BL_CRD",
"PerPkg": "1",
@@ -3529,6 +3908,7 @@
},
{
"BriefDescription": "Injection Starvation; IVF Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.IFV",
"PerPkg": "1",
@@ -3538,6 +3918,7 @@
},
{
"BriefDescription": "Injection Starvation; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_S_RxR_CRD_STARVED.IV",
"PerPkg": "1",
@@ -3547,6 +3928,7 @@
},
{
"BriefDescription": "Ingress Allocations; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.AD_BNC",
"PerPkg": "1",
@@ -3556,6 +3938,7 @@
},
{
"BriefDescription": "Ingress Allocations; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.AD_CRD",
"PerPkg": "1",
@@ -3565,6 +3948,7 @@
},
{
"BriefDescription": "Ingress Allocations; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.AK",
"PerPkg": "1",
@@ -3574,6 +3958,7 @@
},
{
"BriefDescription": "Ingress Allocations; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.BL_BNC",
"PerPkg": "1",
@@ -3583,6 +3968,7 @@
},
{
"BriefDescription": "Ingress Allocations; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.BL_CRD",
"PerPkg": "1",
@@ -3592,6 +3978,7 @@
},
{
"BriefDescription": "Ingress Allocations; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_S_RxR_INSERTS.IV",
"PerPkg": "1",
@@ -3601,6 +3988,7 @@
},
{
"BriefDescription": "Ingress Occupancy; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.AD_BNC",
"PerPkg": "1",
@@ -3610,6 +3998,7 @@
},
{
"BriefDescription": "Ingress Occupancy; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.AD_CRD",
"PerPkg": "1",
@@ -3619,6 +4008,7 @@
},
{
"BriefDescription": "Ingress Occupancy; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.AK",
"PerPkg": "1",
@@ -3628,6 +4018,7 @@
},
{
"BriefDescription": "Ingress Occupancy; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.BL_BNC",
"PerPkg": "1",
@@ -3637,6 +4028,7 @@
},
{
"BriefDescription": "Ingress Occupancy; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.BL_CRD",
"PerPkg": "1",
@@ -3646,6 +4038,7 @@
},
{
"BriefDescription": "Ingress Occupancy; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_S_RxR_OCCUPANCY.IV",
"PerPkg": "1",
@@ -3655,6 +4048,7 @@
},
{
"BriefDescription": "UNC_S_TxR_ADS_USED.AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_S_TxR_ADS_USED.AD",
"PerPkg": "1",
@@ -3663,6 +4057,7 @@
},
{
"BriefDescription": "UNC_S_TxR_ADS_USED.AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_S_TxR_ADS_USED.AK",
"PerPkg": "1",
@@ -3671,6 +4066,7 @@
},
{
"BriefDescription": "UNC_S_TxR_ADS_USED.BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_S_TxR_ADS_USED.BL",
"PerPkg": "1",
@@ -3679,6 +4075,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.AD_BNC",
"PerPkg": "1",
@@ -3688,6 +4085,7 @@
},
{
"BriefDescription": "Egress Allocations; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.AD_CRD",
"PerPkg": "1",
@@ -3697,6 +4095,7 @@
},
{
"BriefDescription": "Egress Allocations; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.AK",
"PerPkg": "1",
@@ -3706,6 +4105,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.BL_BNC",
"PerPkg": "1",
@@ -3715,6 +4115,7 @@
},
{
"BriefDescription": "Egress Allocations; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.BL_CRD",
"PerPkg": "1",
@@ -3724,6 +4125,7 @@
},
{
"BriefDescription": "Egress Allocations; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_S_TxR_INSERTS.IV",
"PerPkg": "1",
@@ -3733,6 +4135,7 @@
},
{
"BriefDescription": "Egress Occupancy; AD - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.AD_BNC",
"PerPkg": "1",
@@ -3742,6 +4145,7 @@
},
{
"BriefDescription": "Egress Occupancy; AD - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.AD_CRD",
"PerPkg": "1",
@@ -3751,6 +4155,7 @@
},
{
"BriefDescription": "Egress Occupancy; AK",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.AK",
"PerPkg": "1",
@@ -3760,6 +4165,7 @@
},
{
"BriefDescription": "Egress Occupancy; BL - Bounces",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.BL_BNC",
"PerPkg": "1",
@@ -3769,6 +4175,7 @@
},
{
"BriefDescription": "Egress Occupancy; BL - Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.BL_CRD",
"PerPkg": "1",
@@ -3778,6 +4185,7 @@
},
{
"BriefDescription": "Egress Occupancy; IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_S_TxR_OCCUPANCY.IV",
"PerPkg": "1",
@@ -3787,6 +4195,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AD Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.AD",
"PerPkg": "1",
@@ -3796,6 +4205,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto AK Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.AK",
"PerPkg": "1",
@@ -3805,6 +4215,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto BL Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.BL",
"PerPkg": "1",
@@ -3814,6 +4225,7 @@
},
{
"BriefDescription": "Injection Starvation; Onto IV Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0x3",
"EventName": "UNC_S_TxR_STARVED.IV",
"PerPkg": "1",
@@ -3823,12 +4235,14 @@
},
{
"BriefDescription": "UNC_U_CLOCKTICKS",
+ "Counter": "0,1",
"EventName": "UNC_U_CLOCKTICKS",
"PerPkg": "1",
"Unit": "UBOX"
},
{
"BriefDescription": "VLW Received",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD",
"PerPkg": "1",
@@ -3838,6 +4252,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.DISABLE",
"PerPkg": "1",
@@ -3847,6 +4262,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.ENABLE",
"PerPkg": "1",
@@ -3856,6 +4272,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.U2C_DISABLE",
"PerPkg": "1",
@@ -3865,6 +4282,7 @@
},
{
"BriefDescription": "Filter Match",
+ "Counter": "0,1",
"EventCode": "0x41",
"EventName": "UNC_U_FILTER_MATCH.U2C_ENABLE",
"PerPkg": "1",
@@ -3874,6 +4292,7 @@
},
{
"BriefDescription": "Cycles PHOLD Assert to Ack; Assert to ACK",
+ "Counter": "0,1",
"EventCode": "0x45",
"EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK",
"PerPkg": "1",
@@ -3883,6 +4302,7 @@
},
{
"BriefDescription": "RACU Request",
+ "Counter": "0,1",
"EventCode": "0x46",
"EventName": "UNC_U_RACU_REQUESTS",
"PerPkg": "1",
@@ -3891,6 +4311,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Correctable Machine Check",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.CMC",
"PerPkg": "1",
@@ -3900,6 +4321,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Livelock",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.LIVELOCK",
"PerPkg": "1",
@@ -3909,6 +4331,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; LTError",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.LTERROR",
"PerPkg": "1",
@@ -3918,6 +4341,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Monitor T0",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.MONITOR_T0",
"PerPkg": "1",
@@ -3927,6 +4351,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Monitor T1",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.MONITOR_T1",
"PerPkg": "1",
@@ -3936,6 +4361,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Other",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.OTHER",
"PerPkg": "1",
@@ -3945,6 +4371,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Trap",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.TRAP",
"PerPkg": "1",
@@ -3954,6 +4381,7 @@
},
{
"BriefDescription": "Monitor Sent to T0; Uncorrectable Machine Check",
+ "Counter": "0,1",
"EventCode": "0x43",
"EventName": "UNC_U_U2C_EVENTS.UMC",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-io.json b/tools/perf/pmu-events/arch/x86/haswellx/uncore-io.json
index bd64a8a1625f..84d1d601ea95 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-io.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-io.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of uclks in domain",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_R2_CLOCKTICKS",
"PerPkg": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.ISOCH_QPI0",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.ISOCH_QPI0",
"PerPkg": "1",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.ISOCH_QPI1",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.ISOCH_QPI1",
"PerPkg": "1",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.PRQ_QPI0",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.PRQ_QPI0",
"PerPkg": "1",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "UNC_R2_IIO_CREDIT.PRQ_QPI1",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_R2_IIO_CREDIT.PRQ_QPI1",
"PerPkg": "1",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; DRS",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.DRS",
"PerPkg": "1",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; NCB",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.NCB",
"PerPkg": "1",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credit Acquired; NCS",
+ "Counter": "0,1",
"EventCode": "0x33",
"EventName": "UNC_R2_IIO_CREDITS_ACQUIRED.NCS",
"PerPkg": "1",
@@ -68,6 +76,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; DRS",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.DRS",
"PerPkg": "1",
@@ -77,6 +86,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; NCB",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.NCB",
"PerPkg": "1",
@@ -86,6 +96,7 @@
},
{
"BriefDescription": "R2PCIe IIO Credits in Use; NCS",
+ "Counter": "0,1",
"EventCode": "0x32",
"EventName": "UNC_R2_IIO_CREDITS_USED.NCS",
"PerPkg": "1",
@@ -95,6 +106,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW",
"PerPkg": "1",
@@ -104,6 +116,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW_EVEN",
"PerPkg": "1",
@@ -113,6 +126,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CCW_ODD",
"PerPkg": "1",
@@ -122,6 +136,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW",
"PerPkg": "1",
@@ -131,6 +146,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW_EVEN",
"PerPkg": "1",
@@ -140,6 +156,7 @@
},
{
"BriefDescription": "R2 AD Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_R2_RING_AD_USED.CW_ODD",
"PerPkg": "1",
@@ -149,6 +166,7 @@
},
{
"BriefDescription": "AK Ingress Bounced; Dn",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_R2_RING_AK_BOUNCES.DN",
"PerPkg": "1",
@@ -158,6 +176,7 @@
},
{
"BriefDescription": "AK Ingress Bounced; Up",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_R2_RING_AK_BOUNCES.UP",
"PerPkg": "1",
@@ -167,6 +186,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW",
"PerPkg": "1",
@@ -176,6 +196,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW_EVEN",
"PerPkg": "1",
@@ -185,6 +206,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CCW_ODD",
"PerPkg": "1",
@@ -194,6 +216,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW",
"PerPkg": "1",
@@ -203,6 +226,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW_EVEN",
"PerPkg": "1",
@@ -212,6 +236,7 @@
},
{
"BriefDescription": "R2 AK Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_R2_RING_AK_USED.CW_ODD",
"PerPkg": "1",
@@ -221,6 +246,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW",
"PerPkg": "1",
@@ -230,6 +256,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW_EVEN",
"PerPkg": "1",
@@ -239,6 +266,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Counterclockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CCW_ODD",
"PerPkg": "1",
@@ -248,6 +276,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW",
"PerPkg": "1",
@@ -257,6 +286,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW_EVEN",
"PerPkg": "1",
@@ -266,6 +296,7 @@
},
{
"BriefDescription": "R2 BL Ring in Use; Clockwise and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_R2_RING_BL_USED.CW_ODD",
"PerPkg": "1",
@@ -275,6 +306,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Any",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.ANY",
"PerPkg": "1",
@@ -284,6 +316,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Counterclockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.CCW",
"PerPkg": "1",
@@ -293,6 +326,7 @@
},
{
"BriefDescription": "R2 IV Ring in Use; Clockwise",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_R2_RING_IV_USED.CW",
"PerPkg": "1",
@@ -302,6 +336,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NCB",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R2_RxR_CYCLES_NE.NCB",
"PerPkg": "1",
@@ -311,6 +346,7 @@
},
{
"BriefDescription": "Ingress Cycles Not Empty; NCS",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_R2_RxR_CYCLES_NE.NCS",
"PerPkg": "1",
@@ -320,6 +356,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCB",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R2_RxR_INSERTS.NCB",
"PerPkg": "1",
@@ -329,6 +366,7 @@
},
{
"BriefDescription": "Ingress Allocations; NCS",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_R2_RxR_INSERTS.NCS",
"PerPkg": "1",
@@ -338,6 +376,7 @@
},
{
"BriefDescription": "Ingress Occupancy Accumulator; DRS",
+ "Counter": "0",
"EventCode": "0x13",
"EventName": "UNC_R2_RxR_OCCUPANCY.DRS",
"PerPkg": "1",
@@ -347,6 +386,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For AD Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R2_SBO0_CREDITS_ACQUIRED.AD",
"PerPkg": "1",
@@ -356,6 +396,7 @@
},
{
"BriefDescription": "SBo0 Credits Acquired; For BL Ring",
+ "Counter": "0,1",
"EventCode": "0x28",
"EventName": "UNC_R2_SBO0_CREDITS_ACQUIRED.BL",
"PerPkg": "1",
@@ -365,6 +406,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For AD Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R2_SBO0_CREDIT_OCCUPANCY.AD",
"PerPkg": "1",
@@ -374,6 +416,7 @@
},
{
"BriefDescription": "SBo0 Credits Occupancy; For BL Ring",
+ "Counter": "0",
"EventCode": "0x2A",
"EventName": "UNC_R2_SBO0_CREDIT_OCCUPANCY.BL",
"PerPkg": "1",
@@ -383,6 +426,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO0_AD",
"PerPkg": "1",
@@ -392,6 +436,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo0, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO0_BL",
"PerPkg": "1",
@@ -401,6 +446,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, AD Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO1_AD",
"PerPkg": "1",
@@ -410,6 +456,7 @@
},
{
"BriefDescription": "Stall on No Sbo Credits; For SBo1, BL Ring",
+ "Counter": "0,1",
"EventCode": "0x2C",
"EventName": "UNC_R2_STALL_NO_SBO_CREDIT.SBO1_BL",
"PerPkg": "1",
@@ -419,6 +466,7 @@
},
{
"BriefDescription": "Egress Cycles Full; AD",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.AD",
"PerPkg": "1",
@@ -428,6 +476,7 @@
},
{
"BriefDescription": "Egress Cycles Full; AK",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.AK",
"PerPkg": "1",
@@ -437,6 +486,7 @@
},
{
"BriefDescription": "Egress Cycles Full; BL",
+ "Counter": "0",
"EventCode": "0x25",
"EventName": "UNC_R2_TxR_CYCLES_FULL.BL",
"PerPkg": "1",
@@ -446,6 +496,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; AD",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.AD",
"PerPkg": "1",
@@ -455,6 +506,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; AK",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.AK",
"PerPkg": "1",
@@ -464,6 +516,7 @@
},
{
"BriefDescription": "Egress Cycles Not Empty; BL",
+ "Counter": "0",
"EventCode": "0x23",
"EventName": "UNC_R2_TxR_CYCLES_NE.BL",
"PerPkg": "1",
@@ -473,6 +526,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AD CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_AD",
"PerPkg": "1",
@@ -482,6 +536,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_AK",
"PerPkg": "1",
@@ -491,6 +546,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.DN_BL",
"PerPkg": "1",
@@ -500,6 +556,7 @@
},
{
"BriefDescription": "Egress CCW NACK; AK CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_AD",
"PerPkg": "1",
@@ -509,6 +566,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_AK",
"PerPkg": "1",
@@ -518,6 +576,7 @@
},
{
"BriefDescription": "Egress CCW NACK; BL CCW",
+ "Counter": "0,1",
"EventCode": "0x26",
"EventName": "UNC_R2_TxR_NACK_CW.UP_BL",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json b/tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json
index c005f5115722..9ef5eeba3ef4 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "read requests to memory controller. Derived from unc_m_cas_count.rd",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "LLC_MISSES.MEM_READ",
"PerPkg": "1",
@@ -11,6 +12,7 @@
},
{
"BriefDescription": "write requests to memory controller. Derived from unc_m_cas_count.wr",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "LLC_MISSES.MEM_WRITE",
"PerPkg": "1",
@@ -21,6 +23,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Write",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.BYP",
"PerPkg": "1",
@@ -30,6 +33,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Read",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.RD",
"PerPkg": "1",
@@ -39,6 +43,7 @@
},
{
"BriefDescription": "DRAM Activate Count; Activate due to Write",
+ "Counter": "0,1,2,3",
"EventCode": "0x1",
"EventName": "UNC_M_ACT_COUNT.WR",
"PerPkg": "1",
@@ -48,6 +53,7 @@
},
{
"BriefDescription": "ACT command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.ACT",
"PerPkg": "1",
@@ -56,6 +62,7 @@
},
{
"BriefDescription": "CAS command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.CAS",
"PerPkg": "1",
@@ -64,6 +71,7 @@
},
{
"BriefDescription": "PRE command issued by 2 cycle bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M_BYP_CMDS.PRE",
"PerPkg": "1",
@@ -72,6 +80,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR_CAS (w/ and w/out auto-pre)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
@@ -81,6 +90,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM Reads (RD_CAS + Underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD",
"PerPkg": "1",
@@ -90,6 +100,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM RD_CAS (w/ and w/out auto-pre)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_REG",
"PerPkg": "1",
@@ -99,6 +110,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Read CAS issued in RMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_RMM",
"PerPkg": "1",
@@ -107,6 +119,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Underfill Read Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL",
"PerPkg": "1",
@@ -116,6 +129,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; Read CAS issued in WMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.RD_WMM",
"PerPkg": "1",
@@ -124,6 +138,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; All DRAM WR_CAS (both Modes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR",
"PerPkg": "1",
@@ -133,6 +148,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_RMM",
"PerPkg": "1",
@@ -142,6 +158,7 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.; DRAM WR_CAS (w/ and w/out auto-pre) in Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_M_CAS_COUNT.WR_WMM",
"PerPkg": "1",
@@ -151,18 +168,21 @@
},
{
"BriefDescription": "DRAM Clockticks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM Clockticks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M_DCLOCKTICKS",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "DRAM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_M_DRAM_PRE_ALL",
"PerPkg": "1",
@@ -171,6 +191,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_M_DRAM_REFRESH.HIGH",
"PerPkg": "1",
@@ -180,6 +201,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_M_DRAM_REFRESH.PANIC",
"PerPkg": "1",
@@ -189,6 +211,7 @@
},
{
"BriefDescription": "ECC Correctable Errors",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_M_ECC_CORRECTABLE_ERRORS",
"PerPkg": "1",
@@ -197,6 +220,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Isoch Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.ISOCH",
"PerPkg": "1",
@@ -206,6 +230,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Partial Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.PARTIAL",
"PerPkg": "1",
@@ -215,6 +240,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Read Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.READ",
"PerPkg": "1",
@@ -224,6 +250,7 @@
},
{
"BriefDescription": "Cycles in a Major Mode; Write Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x7",
"EventName": "UNC_M_MAJOR_MODES.WRITE",
"PerPkg": "1",
@@ -233,6 +260,7 @@
},
{
"BriefDescription": "Channel DLLOFF Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M_POWER_CHANNEL_DLLOFF",
"PerPkg": "1",
@@ -241,6 +269,7 @@
},
{
"BriefDescription": "Channel PPD Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M_POWER_CHANNEL_PPD",
"PerPkg": "1",
@@ -249,6 +278,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK0",
"PerPkg": "1",
@@ -258,6 +288,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK1",
"PerPkg": "1",
@@ -267,6 +298,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK2",
"PerPkg": "1",
@@ -276,6 +308,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK3",
"PerPkg": "1",
@@ -285,6 +318,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK4",
"PerPkg": "1",
@@ -294,6 +328,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK5",
"PerPkg": "1",
@@ -303,6 +338,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK6",
"PerPkg": "1",
@@ -312,6 +348,7 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_POWER_CKE_CYCLES.RANK7",
"PerPkg": "1",
@@ -321,6 +358,7 @@
},
{
"BriefDescription": "Critical Throttle Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES",
"PerPkg": "1",
@@ -329,6 +367,7 @@
},
{
"BriefDescription": "UNC_M_POWER_PCU_THROTTLING",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M_POWER_PCU_THROTTLING",
"PerPkg": "1",
@@ -336,6 +375,7 @@
},
{
"BriefDescription": "Clock-Enabled Self-Refresh",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M_POWER_SELF_REFRESH",
"PerPkg": "1",
@@ -344,6 +384,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK0",
"PerPkg": "1",
@@ -353,6 +394,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK1",
"PerPkg": "1",
@@ -362,6 +404,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK2",
"PerPkg": "1",
@@ -371,6 +414,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK3",
"PerPkg": "1",
@@ -380,6 +424,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK4",
"PerPkg": "1",
@@ -389,6 +434,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK5",
"PerPkg": "1",
@@ -398,6 +444,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK6",
"PerPkg": "1",
@@ -407,6 +454,7 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0; DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RANK7",
"PerPkg": "1",
@@ -416,6 +464,7 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Read Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_RD",
"PerPkg": "1",
@@ -425,6 +474,7 @@
},
{
"BriefDescription": "Read Preemption Count; Read over Write Preemption",
+ "Counter": "0,1,2,3",
"EventCode": "0x8",
"EventName": "UNC_M_PREEMPTION.RD_PREEMPT_WR",
"PerPkg": "1",
@@ -434,6 +484,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.BYP",
"PerPkg": "1",
@@ -443,6 +494,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to timer expiration",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_CLOSE",
"PerPkg": "1",
@@ -452,6 +504,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharges due to page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.PAGE_MISS",
"PerPkg": "1",
@@ -461,6 +514,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to read",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.RD",
"PerPkg": "1",
@@ -470,6 +524,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.; Precharge due to write",
+ "Counter": "0,1,2,3",
"EventCode": "0x2",
"EventName": "UNC_M_PRE_COUNT.WR",
"PerPkg": "1",
@@ -479,6 +534,7 @@
},
{
"BriefDescription": "Read CAS issued with HIGH priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.HIGH",
"PerPkg": "1",
@@ -487,6 +543,7 @@
},
{
"BriefDescription": "Read CAS issued with LOW priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.LOW",
"PerPkg": "1",
@@ -495,6 +552,7 @@
},
{
"BriefDescription": "Read CAS issued with MEDIUM priority",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.MED",
"PerPkg": "1",
@@ -503,6 +561,7 @@
},
{
"BriefDescription": "Read CAS issued with PANIC NON ISOCH priority (starved)",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_RD_CAS_PRIO.PANIC",
"PerPkg": "1",
@@ -511,6 +570,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.ALLBANKS",
"PerPkg": "1",
@@ -520,6 +580,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK0",
"PerPkg": "1",
@@ -528,6 +589,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK1",
"PerPkg": "1",
@@ -537,6 +599,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK10",
"PerPkg": "1",
@@ -546,6 +609,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK11",
"PerPkg": "1",
@@ -555,6 +619,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK12",
"PerPkg": "1",
@@ -564,6 +629,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK13",
"PerPkg": "1",
@@ -573,6 +639,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK14",
"PerPkg": "1",
@@ -582,6 +649,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK15",
"PerPkg": "1",
@@ -591,6 +659,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK2",
"PerPkg": "1",
@@ -600,6 +669,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK3",
"PerPkg": "1",
@@ -609,6 +679,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK4",
"PerPkg": "1",
@@ -618,6 +689,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK5",
"PerPkg": "1",
@@ -627,6 +699,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK6",
"PerPkg": "1",
@@ -636,6 +709,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK7",
"PerPkg": "1",
@@ -645,6 +719,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK8",
"PerPkg": "1",
@@ -654,6 +729,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANK9",
"PerPkg": "1",
@@ -663,6 +739,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG0",
"PerPkg": "1",
@@ -672,6 +749,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG1",
"PerPkg": "1",
@@ -681,6 +759,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG2",
"PerPkg": "1",
@@ -690,6 +769,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M_RD_CAS_RANK0.BANKG3",
"PerPkg": "1",
@@ -699,6 +779,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.ALLBANKS",
"PerPkg": "1",
@@ -708,6 +789,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK0",
"PerPkg": "1",
@@ -716,6 +798,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK1",
"PerPkg": "1",
@@ -725,6 +808,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK10",
"PerPkg": "1",
@@ -734,6 +818,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK11",
"PerPkg": "1",
@@ -743,6 +828,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK12",
"PerPkg": "1",
@@ -752,6 +838,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK13",
"PerPkg": "1",
@@ -761,6 +848,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK14",
"PerPkg": "1",
@@ -770,6 +858,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK15",
"PerPkg": "1",
@@ -779,6 +868,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK2",
"PerPkg": "1",
@@ -788,6 +878,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK3",
"PerPkg": "1",
@@ -797,6 +888,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK4",
"PerPkg": "1",
@@ -806,6 +898,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK5",
"PerPkg": "1",
@@ -815,6 +908,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK6",
"PerPkg": "1",
@@ -824,6 +918,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK7",
"PerPkg": "1",
@@ -833,6 +928,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK8",
"PerPkg": "1",
@@ -842,6 +938,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANK9",
"PerPkg": "1",
@@ -851,6 +948,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG0",
"PerPkg": "1",
@@ -860,6 +958,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG1",
"PerPkg": "1",
@@ -869,6 +968,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG2",
"PerPkg": "1",
@@ -878,6 +978,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M_RD_CAS_RANK1.BANKG3",
"PerPkg": "1",
@@ -887,6 +988,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 2; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M_RD_CAS_RANK2.BANK0",
"PerPkg": "1",
@@ -895,6 +997,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.ALLBANKS",
"PerPkg": "1",
@@ -904,6 +1007,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK0",
"PerPkg": "1",
@@ -912,6 +1016,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK1",
"PerPkg": "1",
@@ -921,6 +1026,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK10",
"PerPkg": "1",
@@ -930,6 +1036,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK11",
"PerPkg": "1",
@@ -939,6 +1046,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK12",
"PerPkg": "1",
@@ -948,6 +1056,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK13",
"PerPkg": "1",
@@ -957,6 +1066,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK14",
"PerPkg": "1",
@@ -966,6 +1076,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK15",
"PerPkg": "1",
@@ -975,6 +1086,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK2",
"PerPkg": "1",
@@ -984,6 +1096,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK3",
"PerPkg": "1",
@@ -993,6 +1106,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK4",
"PerPkg": "1",
@@ -1002,6 +1116,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK5",
"PerPkg": "1",
@@ -1011,6 +1126,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK6",
"PerPkg": "1",
@@ -1020,6 +1136,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK7",
"PerPkg": "1",
@@ -1029,6 +1146,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK8",
"PerPkg": "1",
@@ -1038,6 +1156,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANK9",
"PerPkg": "1",
@@ -1047,6 +1166,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG0",
"PerPkg": "1",
@@ -1056,6 +1176,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG1",
"PerPkg": "1",
@@ -1065,6 +1186,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG2",
"PerPkg": "1",
@@ -1074,6 +1196,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M_RD_CAS_RANK4.BANKG3",
"PerPkg": "1",
@@ -1083,6 +1206,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.ALLBANKS",
"PerPkg": "1",
@@ -1092,6 +1216,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK0",
"PerPkg": "1",
@@ -1100,6 +1225,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK1",
"PerPkg": "1",
@@ -1109,6 +1235,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK10",
"PerPkg": "1",
@@ -1118,6 +1245,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK11",
"PerPkg": "1",
@@ -1127,6 +1255,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK12",
"PerPkg": "1",
@@ -1136,6 +1265,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK13",
"PerPkg": "1",
@@ -1145,6 +1275,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK14",
"PerPkg": "1",
@@ -1154,6 +1285,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK15",
"PerPkg": "1",
@@ -1163,6 +1295,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK2",
"PerPkg": "1",
@@ -1172,6 +1305,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK3",
"PerPkg": "1",
@@ -1181,6 +1315,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK4",
"PerPkg": "1",
@@ -1190,6 +1325,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK5",
"PerPkg": "1",
@@ -1199,6 +1335,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK6",
"PerPkg": "1",
@@ -1208,6 +1345,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK7",
"PerPkg": "1",
@@ -1217,6 +1355,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK8",
"PerPkg": "1",
@@ -1226,6 +1365,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANK9",
"PerPkg": "1",
@@ -1235,6 +1375,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG0",
"PerPkg": "1",
@@ -1244,6 +1385,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG1",
"PerPkg": "1",
@@ -1253,6 +1395,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG2",
"PerPkg": "1",
@@ -1262,6 +1405,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M_RD_CAS_RANK5.BANKG3",
"PerPkg": "1",
@@ -1271,6 +1415,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.ALLBANKS",
"PerPkg": "1",
@@ -1280,6 +1425,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK0",
"PerPkg": "1",
@@ -1288,6 +1434,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK1",
"PerPkg": "1",
@@ -1297,6 +1444,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK10",
"PerPkg": "1",
@@ -1306,6 +1454,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK11",
"PerPkg": "1",
@@ -1315,6 +1464,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK12",
"PerPkg": "1",
@@ -1324,6 +1474,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK13",
"PerPkg": "1",
@@ -1333,6 +1484,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK14",
"PerPkg": "1",
@@ -1342,6 +1494,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK15",
"PerPkg": "1",
@@ -1351,6 +1504,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK2",
"PerPkg": "1",
@@ -1360,6 +1514,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK3",
"PerPkg": "1",
@@ -1369,6 +1524,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK4",
"PerPkg": "1",
@@ -1378,6 +1534,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK5",
"PerPkg": "1",
@@ -1387,6 +1544,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK6",
"PerPkg": "1",
@@ -1396,6 +1554,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK7",
"PerPkg": "1",
@@ -1405,6 +1564,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK8",
"PerPkg": "1",
@@ -1414,6 +1574,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANK9",
"PerPkg": "1",
@@ -1423,6 +1584,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG0",
"PerPkg": "1",
@@ -1432,6 +1594,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG1",
"PerPkg": "1",
@@ -1441,6 +1604,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG2",
"PerPkg": "1",
@@ -1450,6 +1614,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M_RD_CAS_RANK6.BANKG3",
"PerPkg": "1",
@@ -1459,6 +1624,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.ALLBANKS",
"PerPkg": "1",
@@ -1468,6 +1634,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK0",
"PerPkg": "1",
@@ -1476,6 +1643,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK1",
"PerPkg": "1",
@@ -1485,6 +1653,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK10",
"PerPkg": "1",
@@ -1494,6 +1663,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK11",
"PerPkg": "1",
@@ -1503,6 +1673,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK12",
"PerPkg": "1",
@@ -1512,6 +1683,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK13",
"PerPkg": "1",
@@ -1521,6 +1693,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK14",
"PerPkg": "1",
@@ -1530,6 +1703,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK15",
"PerPkg": "1",
@@ -1539,6 +1713,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK2",
"PerPkg": "1",
@@ -1548,6 +1723,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK3",
"PerPkg": "1",
@@ -1557,6 +1733,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK4",
"PerPkg": "1",
@@ -1566,6 +1743,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK5",
"PerPkg": "1",
@@ -1575,6 +1753,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK6",
"PerPkg": "1",
@@ -1584,6 +1763,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK7",
"PerPkg": "1",
@@ -1593,6 +1773,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK8",
"PerPkg": "1",
@@ -1602,6 +1783,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANK9",
"PerPkg": "1",
@@ -1611,6 +1793,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG0",
"PerPkg": "1",
@@ -1620,6 +1803,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG1",
"PerPkg": "1",
@@ -1629,6 +1813,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG2",
"PerPkg": "1",
@@ -1638,6 +1823,7 @@
},
{
"BriefDescription": "RD_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M_RD_CAS_RANK7.BANKG3",
"PerPkg": "1",
@@ -1647,6 +1833,7 @@
},
{
"BriefDescription": "Read Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE",
"PerPkg": "1",
@@ -1655,6 +1842,7 @@
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS",
"PerPkg": "1",
@@ -1663,6 +1851,7 @@
},
{
"BriefDescription": "VMSE MXB write buffer occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M_VMSE_MXB_WR_OCCUPANCY",
"PerPkg": "1",
@@ -1670,6 +1859,7 @@
},
{
"BriefDescription": "VMSE WR PUSH issued; VMSE write PUSH issued in RMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M_VMSE_WR_PUSH.RMM",
"PerPkg": "1",
@@ -1678,6 +1868,7 @@
},
{
"BriefDescription": "VMSE WR PUSH issued; VMSE write PUSH issued in WMM",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M_VMSE_WR_PUSH.WMM",
"PerPkg": "1",
@@ -1686,6 +1877,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold; Transition from WMM to RMM because of starve counter",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.LOW_THRESH",
"PerPkg": "1",
@@ -1694,6 +1886,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.STARVE",
"PerPkg": "1",
@@ -1702,6 +1895,7 @@
},
{
"BriefDescription": "Transition from WMM to RMM because of low threshold",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "UNC_M_WMM_TO_RMM.VMSE_RETRY",
"PerPkg": "1",
@@ -1710,6 +1904,7 @@
},
{
"BriefDescription": "Write Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M_WPQ_CYCLES_FULL",
"PerPkg": "1",
@@ -1718,6 +1913,7 @@
},
{
"BriefDescription": "Write Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE",
"PerPkg": "1",
@@ -1726,6 +1922,7 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT",
"PerPkg": "1",
@@ -1734,6 +1931,7 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT",
"PerPkg": "1",
@@ -1742,6 +1940,7 @@
},
{
"BriefDescription": "Not getting the requested Major Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_M_WRONG_MM",
"PerPkg": "1",
@@ -1749,6 +1948,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.ALLBANKS",
"PerPkg": "1",
@@ -1758,6 +1958,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK0",
"PerPkg": "1",
@@ -1766,6 +1967,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK1",
"PerPkg": "1",
@@ -1775,6 +1977,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK10",
"PerPkg": "1",
@@ -1784,6 +1987,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK11",
"PerPkg": "1",
@@ -1793,6 +1997,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK12",
"PerPkg": "1",
@@ -1802,6 +2007,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK13",
"PerPkg": "1",
@@ -1811,6 +2017,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK14",
"PerPkg": "1",
@@ -1820,6 +2027,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK15",
"PerPkg": "1",
@@ -1829,6 +2037,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK2",
"PerPkg": "1",
@@ -1838,6 +2047,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK3",
"PerPkg": "1",
@@ -1847,6 +2057,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK4",
"PerPkg": "1",
@@ -1856,6 +2067,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK5",
"PerPkg": "1",
@@ -1865,6 +2077,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK6",
"PerPkg": "1",
@@ -1874,6 +2087,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK7",
"PerPkg": "1",
@@ -1883,6 +2097,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK8",
"PerPkg": "1",
@@ -1892,6 +2107,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANK9",
"PerPkg": "1",
@@ -1901,6 +2117,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG0",
"PerPkg": "1",
@@ -1910,6 +2127,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG1",
"PerPkg": "1",
@@ -1919,6 +2137,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG2",
"PerPkg": "1",
@@ -1928,6 +2147,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 0; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M_WR_CAS_RANK0.BANKG3",
"PerPkg": "1",
@@ -1937,6 +2157,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.ALLBANKS",
"PerPkg": "1",
@@ -1946,6 +2167,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK0",
"PerPkg": "1",
@@ -1954,6 +2176,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK1",
"PerPkg": "1",
@@ -1963,6 +2186,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK10",
"PerPkg": "1",
@@ -1972,6 +2196,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK11",
"PerPkg": "1",
@@ -1981,6 +2206,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK12",
"PerPkg": "1",
@@ -1990,6 +2216,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK13",
"PerPkg": "1",
@@ -1999,6 +2226,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK14",
"PerPkg": "1",
@@ -2008,6 +2236,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK15",
"PerPkg": "1",
@@ -2017,6 +2246,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK2",
"PerPkg": "1",
@@ -2026,6 +2256,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK3",
"PerPkg": "1",
@@ -2035,6 +2266,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK4",
"PerPkg": "1",
@@ -2044,6 +2276,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK5",
"PerPkg": "1",
@@ -2053,6 +2286,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK6",
"PerPkg": "1",
@@ -2062,6 +2296,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK7",
"PerPkg": "1",
@@ -2071,6 +2306,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK8",
"PerPkg": "1",
@@ -2080,6 +2316,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANK9",
"PerPkg": "1",
@@ -2089,6 +2326,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG0",
"PerPkg": "1",
@@ -2098,6 +2336,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG1",
"PerPkg": "1",
@@ -2107,6 +2346,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG2",
"PerPkg": "1",
@@ -2116,6 +2356,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 1; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M_WR_CAS_RANK1.BANKG3",
"PerPkg": "1",
@@ -2125,6 +2366,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.ALLBANKS",
"PerPkg": "1",
@@ -2134,6 +2376,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK0",
"PerPkg": "1",
@@ -2142,6 +2385,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK1",
"PerPkg": "1",
@@ -2151,6 +2395,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK10",
"PerPkg": "1",
@@ -2160,6 +2405,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK11",
"PerPkg": "1",
@@ -2169,6 +2415,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK12",
"PerPkg": "1",
@@ -2178,6 +2425,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK13",
"PerPkg": "1",
@@ -2187,6 +2435,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK14",
"PerPkg": "1",
@@ -2196,6 +2445,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK15",
"PerPkg": "1",
@@ -2205,6 +2455,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK2",
"PerPkg": "1",
@@ -2214,6 +2465,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK3",
"PerPkg": "1",
@@ -2223,6 +2475,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK4",
"PerPkg": "1",
@@ -2232,6 +2485,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK5",
"PerPkg": "1",
@@ -2241,6 +2495,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK6",
"PerPkg": "1",
@@ -2250,6 +2505,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK7",
"PerPkg": "1",
@@ -2259,6 +2515,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK8",
"PerPkg": "1",
@@ -2268,6 +2525,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANK9",
"PerPkg": "1",
@@ -2277,6 +2535,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG0",
"PerPkg": "1",
@@ -2286,6 +2545,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG1",
"PerPkg": "1",
@@ -2295,6 +2555,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG2",
"PerPkg": "1",
@@ -2304,6 +2565,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 4; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "UNC_M_WR_CAS_RANK4.BANKG3",
"PerPkg": "1",
@@ -2313,6 +2575,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.ALLBANKS",
"PerPkg": "1",
@@ -2322,6 +2585,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK0",
"PerPkg": "1",
@@ -2330,6 +2594,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK1",
"PerPkg": "1",
@@ -2339,6 +2604,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK10",
"PerPkg": "1",
@@ -2348,6 +2614,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK11",
"PerPkg": "1",
@@ -2357,6 +2624,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK12",
"PerPkg": "1",
@@ -2366,6 +2634,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK13",
"PerPkg": "1",
@@ -2375,6 +2644,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK14",
"PerPkg": "1",
@@ -2384,6 +2654,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK15",
"PerPkg": "1",
@@ -2393,6 +2664,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK2",
"PerPkg": "1",
@@ -2402,6 +2674,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK3",
"PerPkg": "1",
@@ -2411,6 +2684,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK4",
"PerPkg": "1",
@@ -2420,6 +2694,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK5",
"PerPkg": "1",
@@ -2429,6 +2704,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK6",
"PerPkg": "1",
@@ -2438,6 +2714,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK7",
"PerPkg": "1",
@@ -2447,6 +2724,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK8",
"PerPkg": "1",
@@ -2456,6 +2734,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANK9",
"PerPkg": "1",
@@ -2465,6 +2744,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG0",
"PerPkg": "1",
@@ -2474,6 +2754,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG1",
"PerPkg": "1",
@@ -2483,6 +2764,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG2",
"PerPkg": "1",
@@ -2492,6 +2774,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 5; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "UNC_M_WR_CAS_RANK5.BANKG3",
"PerPkg": "1",
@@ -2501,6 +2784,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.ALLBANKS",
"PerPkg": "1",
@@ -2510,6 +2794,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK0",
"PerPkg": "1",
@@ -2518,6 +2803,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK1",
"PerPkg": "1",
@@ -2527,6 +2813,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK10",
"PerPkg": "1",
@@ -2536,6 +2823,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK11",
"PerPkg": "1",
@@ -2545,6 +2833,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK12",
"PerPkg": "1",
@@ -2554,6 +2843,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK13",
"PerPkg": "1",
@@ -2563,6 +2853,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK14",
"PerPkg": "1",
@@ -2572,6 +2863,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK15",
"PerPkg": "1",
@@ -2581,6 +2873,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK2",
"PerPkg": "1",
@@ -2590,6 +2883,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK3",
"PerPkg": "1",
@@ -2599,6 +2893,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK4",
"PerPkg": "1",
@@ -2608,6 +2903,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK5",
"PerPkg": "1",
@@ -2617,6 +2913,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK6",
"PerPkg": "1",
@@ -2626,6 +2923,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK7",
"PerPkg": "1",
@@ -2635,6 +2933,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK8",
"PerPkg": "1",
@@ -2644,6 +2943,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANK9",
"PerPkg": "1",
@@ -2653,6 +2953,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG0",
"PerPkg": "1",
@@ -2662,6 +2963,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG1",
"PerPkg": "1",
@@ -2671,6 +2973,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG2",
"PerPkg": "1",
@@ -2680,6 +2983,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 6; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "UNC_M_WR_CAS_RANK6.BANKG3",
"PerPkg": "1",
@@ -2689,6 +2993,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; All Banks",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.ALLBANKS",
"PerPkg": "1",
@@ -2698,6 +3003,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK0",
"PerPkg": "1",
@@ -2706,6 +3012,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK1",
"PerPkg": "1",
@@ -2715,6 +3022,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK10",
"PerPkg": "1",
@@ -2724,6 +3032,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 11",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK11",
"PerPkg": "1",
@@ -2733,6 +3042,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 12",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK12",
"PerPkg": "1",
@@ -2742,6 +3052,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 13",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK13",
"PerPkg": "1",
@@ -2751,6 +3062,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 14",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK14",
"PerPkg": "1",
@@ -2760,6 +3072,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 15",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK15",
"PerPkg": "1",
@@ -2769,6 +3082,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK2",
"PerPkg": "1",
@@ -2778,6 +3092,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK3",
"PerPkg": "1",
@@ -2787,6 +3102,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK4",
"PerPkg": "1",
@@ -2796,6 +3112,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK5",
"PerPkg": "1",
@@ -2805,6 +3122,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK6",
"PerPkg": "1",
@@ -2814,6 +3132,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK7",
"PerPkg": "1",
@@ -2823,6 +3142,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK8",
"PerPkg": "1",
@@ -2832,6 +3152,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANK9",
"PerPkg": "1",
@@ -2841,6 +3162,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 0 (Banks 0-3)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG0",
"PerPkg": "1",
@@ -2850,6 +3172,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 1 (Banks 4-7)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG1",
"PerPkg": "1",
@@ -2859,6 +3182,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 2 (Banks 8-11)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG2",
"PerPkg": "1",
@@ -2868,6 +3192,7 @@
},
{
"BriefDescription": "WR_CAS Access to Rank 7; Bank Group 3 (Banks 12-15)",
+ "Counter": "0,1,2,3",
"EventCode": "0xBF",
"EventName": "UNC_M_WR_CAS_RANK7.BANKG3",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-power.json b/tools/perf/pmu-events/arch/x86/haswellx/uncore-power.json
index daebf1050acb..252415937680 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-power.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-power.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "pclk Cycles",
+ "Counter": "0,1,2,3",
"EventName": "UNC_P_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "The PCU runs off a fixed 800 MHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_P_CORE0_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_P_CORE10_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_P_CORE11_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_P_CORE12_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_P_CORE13_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_P_CORE14_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_P_CORE15_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -64,6 +72,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_P_CORE16_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_P_CORE17_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -80,6 +90,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_P_CORE1_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -88,6 +99,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_P_CORE2_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -96,6 +108,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_P_CORE3_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -104,6 +117,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_P_CORE4_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -112,6 +126,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_P_CORE5_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_P_CORE6_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_P_CORE7_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -136,6 +153,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x68",
"EventName": "UNC_P_CORE8_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -144,6 +162,7 @@
},
{
"BriefDescription": "Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x69",
"EventName": "UNC_P_CORE9_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -152,6 +171,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_P_DEMOTIONS_CORE0",
"PerPkg": "1",
@@ -160,6 +180,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_P_DEMOTIONS_CORE1",
"PerPkg": "1",
@@ -168,6 +189,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_P_DEMOTIONS_CORE10",
"PerPkg": "1",
@@ -176,6 +198,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3B",
"EventName": "UNC_P_DEMOTIONS_CORE11",
"PerPkg": "1",
@@ -184,6 +207,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "UNC_P_DEMOTIONS_CORE12",
"PerPkg": "1",
@@ -192,6 +216,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_P_DEMOTIONS_CORE13",
"PerPkg": "1",
@@ -200,6 +225,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_P_DEMOTIONS_CORE14",
"PerPkg": "1",
@@ -208,6 +234,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x3F",
"EventName": "UNC_P_DEMOTIONS_CORE15",
"PerPkg": "1",
@@ -216,6 +243,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_P_DEMOTIONS_CORE16",
"PerPkg": "1",
@@ -224,6 +252,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_P_DEMOTIONS_CORE17",
"PerPkg": "1",
@@ -232,6 +261,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_P_DEMOTIONS_CORE2",
"PerPkg": "1",
@@ -240,6 +270,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_P_DEMOTIONS_CORE3",
"PerPkg": "1",
@@ -248,6 +279,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_P_DEMOTIONS_CORE4",
"PerPkg": "1",
@@ -256,6 +288,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_P_DEMOTIONS_CORE5",
"PerPkg": "1",
@@ -264,6 +297,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_P_DEMOTIONS_CORE6",
"PerPkg": "1",
@@ -272,6 +306,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_P_DEMOTIONS_CORE7",
"PerPkg": "1",
@@ -280,6 +315,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_P_DEMOTIONS_CORE8",
"PerPkg": "1",
@@ -288,6 +324,7 @@
},
{
"BriefDescription": "Core C State Demotions",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_P_DEMOTIONS_CORE9",
"PerPkg": "1",
@@ -296,6 +333,7 @@
},
{
"BriefDescription": "Frequency Residency",
+ "Counter": "0,1,2,3",
"EventCode": "0xB",
"EventName": "UNC_P_FREQ_BAND0_CYCLES",
"PerPkg": "1",
@@ -304,6 +342,7 @@
},
{
"BriefDescription": "Frequency Residency",
+ "Counter": "0,1,2,3",
"EventCode": "0xC",
"EventName": "UNC_P_FREQ_BAND1_CYCLES",
"PerPkg": "1",
@@ -312,6 +351,7 @@
},
{
"BriefDescription": "Frequency Residency",
+ "Counter": "0,1,2,3",
"EventCode": "0xD",
"EventName": "UNC_P_FREQ_BAND2_CYCLES",
"PerPkg": "1",
@@ -320,6 +360,7 @@
},
{
"BriefDescription": "Frequency Residency",
+ "Counter": "0,1,2,3",
"EventCode": "0xE",
"EventName": "UNC_P_FREQ_BAND3_CYCLES",
"PerPkg": "1",
@@ -328,6 +369,7 @@
},
{
"BriefDescription": "Thermal Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x4",
"EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
"PerPkg": "1",
@@ -336,6 +378,7 @@
},
{
"BriefDescription": "OS Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x6",
"EventName": "UNC_P_FREQ_MAX_OS_CYCLES",
"PerPkg": "1",
@@ -344,6 +387,7 @@
},
{
"BriefDescription": "Power Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x5",
"EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
"PerPkg": "1",
@@ -352,6 +396,7 @@
},
{
"BriefDescription": "IO P Limit Strongest Lower Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES",
"PerPkg": "1",
@@ -360,6 +405,7 @@
},
{
"BriefDescription": "Cycles spent changing Frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_P_FREQ_TRANS_CYCLES",
"PerPkg": "1",
@@ -368,6 +414,7 @@
},
{
"BriefDescription": "Memory Phase Shedding Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES",
"PerPkg": "1",
@@ -376,6 +423,7 @@
},
{
"BriefDescription": "Package C State Residency - C0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES",
"PerPkg": "1",
@@ -384,6 +432,7 @@
},
{
"BriefDescription": "Package C State Residency - C1E",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_P_PKG_RESIDENCY_C1E_CYCLES",
"PerPkg": "1",
@@ -392,6 +441,7 @@
},
{
"BriefDescription": "Package C State Residency - C2E",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
"PerPkg": "1",
@@ -400,6 +450,7 @@
},
{
"BriefDescription": "Package C State Residency - C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES",
"PerPkg": "1",
@@ -408,6 +459,7 @@
},
{
"BriefDescription": "Package C State Residency - C6",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
"PerPkg": "1",
@@ -416,6 +468,7 @@
},
{
"BriefDescription": "Package C7 State Residency",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_P_PKG_RESIDENCY_C7_CYCLES",
"PerPkg": "1",
@@ -424,30 +477,37 @@
},
{
"BriefDescription": "Number of cores in C-State; C0 and C1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0",
+ "Filter": "occ_sel=1",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3",
+ "Filter": "occ_sel=2",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State; C6 and C7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6",
+ "Filter": "occ_sel=3",
"PerPkg": "1",
"PublicDescription": "This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
"Unit": "PCU"
},
{
"BriefDescription": "External Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0xA",
"EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
"PerPkg": "1",
@@ -456,6 +516,7 @@
},
{
"BriefDescription": "Internal Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x9",
"EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
"PerPkg": "1",
@@ -464,6 +525,7 @@
},
{
"BriefDescription": "Total Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_P_TOTAL_TRANSITION_CYCLES",
"PerPkg": "1",
@@ -472,6 +534,7 @@
},
{
"BriefDescription": "UNC_P_UFS_TRANSITIONS_NO_CHANGE",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "UNC_P_UFS_TRANSITIONS_NO_CHANGE",
"PerPkg": "1",
@@ -480,6 +543,7 @@
},
{
"BriefDescription": "UNC_P_UFS_TRANSITIONS_RING_GV",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "UNC_P_UFS_TRANSITIONS_RING_GV",
"PerPkg": "1",
@@ -488,6 +552,7 @@
},
{
"BriefDescription": "VR Hot",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_P_VR_HOT_CYCLES",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/virtual-memory.json b/tools/perf/pmu-events/arch/x86/haswellx/virtual-memory.json
index 87a4ec1ee7d7..7cf00ae0e993 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Load misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Misses in all TLB levels that cause a page walk of any page size.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "DTLB demand load misses with low part of linear-to-physical address translation missed",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.PDE_CACHE_MISS",
"PublicDescription": "DTLB demand load misses with low part of linear-to-physical address translation missed.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Load operations that miss the first DTLB level but hit the second and do not cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Number of cache load STLB hits. No page walk.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (2M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_2M",
"PublicDescription": "This event counts load operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Load misses that miss the DTLB and hit the STLB (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT_4K",
"PublicDescription": "This event counts load operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Completed page walks in any TLB of any page size due to demand load misses.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"SampleAfterValue": "2000003",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (2M/4M).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Completed page walks due to demand load misses that caused 2M/4M page walks in any TLB levels.",
@@ -64,6 +72,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes (4K).",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Completed page walks due to demand load misses that caused 4K page walks in any TLB levels.",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_DURATION",
"PublicDescription": "This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB load misses.",
@@ -80,6 +90,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Miss in all TLB levels causes a page walk of any page size (4K/2M/4M/1G).",
@@ -88,6 +99,7 @@
},
{
"BriefDescription": "DTLB store misses with low part of linear-to-physical address translation missed",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.PDE_CACHE_MISS",
"PublicDescription": "DTLB store misses with low part of linear-to-physical address translation missed.",
@@ -96,6 +108,7 @@
},
{
"BriefDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks.",
@@ -104,6 +117,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (2M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_2M",
"PublicDescription": "This event counts store operations from a 2M page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -112,6 +126,7 @@
},
{
"BriefDescription": "Store misses that miss the DTLB and hit the STLB (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT_4K",
"PublicDescription": "This event counts store operations from a 4K page that miss the first DTLB level but hit the second and do not cause page walks.",
@@ -120,6 +135,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Completed page walks due to store miss in any TLB levels of any page size (4K/2M/4M/1G).",
@@ -128,6 +144,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks. (1G)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"SampleAfterValue": "100003",
@@ -135,6 +152,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Completed page walks due to store misses in one or more TLB levels of 2M/4M page structure.",
@@ -143,6 +161,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Completed page walks due to store misses in one or more TLB levels of 4K page structure.",
@@ -151,6 +170,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_DURATION",
"PublicDescription": "This event counts cycles when the page miss handler (PMH) is servicing page walks caused by DTLB store misses.",
@@ -159,6 +179,7 @@
},
{
"BriefDescription": "Cycle count for an Extended Page table walk.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4f",
"EventName": "EPT.WALK_CYCLES",
"SampleAfterValue": "2000003",
@@ -166,6 +187,7 @@
},
{
"BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+ "Counter": "0,1,2,3",
"EventCode": "0xae",
"EventName": "ITLB.ITLB_FLUSH",
"PublicDescription": "Counts the number of ITLB flushes, includes 4k/2M/4M pages.",
@@ -174,6 +196,7 @@
},
{
"BriefDescription": "Misses at all ITLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Misses in ITLB that causes a page walk of any page size.",
@@ -182,6 +205,7 @@
},
{
"BriefDescription": "Operations that miss the first ITLB level but hit the second and do not cause any page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "ITLB misses that hit STLB. No page walk.",
@@ -190,6 +214,7 @@
},
{
"BriefDescription": "Code misses that miss the DTLB and hit the STLB (2M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_2M",
"PublicDescription": "ITLB misses that hit STLB (2M).",
@@ -198,6 +223,7 @@
},
{
"BriefDescription": "Core misses that miss the DTLB and hit the STLB (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT_4K",
"PublicDescription": "ITLB misses that hit STLB (4K).",
@@ -206,6 +232,7 @@
},
{
"BriefDescription": "Misses in all ITLB levels that cause completed page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Completed page walks in ITLB of any page size.",
@@ -214,6 +241,7 @@
},
{
"BriefDescription": "Store miss in all TLB levels causes a page walk that completes. (1G)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_1G",
"SampleAfterValue": "100003",
@@ -221,6 +249,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Completed page walks due to misses in ITLB 2M/4M page entries.",
@@ -229,6 +258,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Completed page walks due to misses in ITLB 4K page entries.",
@@ -237,6 +267,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_DURATION",
"PublicDescription": "This event counts cycles when the page miss handler (PMH) is servicing page walks caused by ITLB misses.",
@@ -245,6 +276,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L1+FB",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L1",
"PublicDescription": "Number of DTLB page walker loads that hit in the L1+FB.",
@@ -253,6 +285,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L2",
"PublicDescription": "Number of DTLB page walker loads that hit in the L2.",
@@ -261,6 +294,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in the L3 + XSNP",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_L3",
@@ -270,6 +304,7 @@
},
{
"BriefDescription": "Number of DTLB page walker hits in Memory",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.DTLB_MEMORY",
@@ -279,6 +314,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in the L1 and FB.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_L1",
"SampleAfterValue": "2000003",
@@ -286,6 +322,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in the L2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_L2",
"SampleAfterValue": "2000003",
@@ -293,6 +330,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in the L3.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_L3",
"SampleAfterValue": "2000003",
@@ -300,6 +338,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the DTLB that hit in memory.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_DTLB_MEMORY",
"SampleAfterValue": "2000003",
@@ -307,6 +346,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in the L1 and FB.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_L1",
"SampleAfterValue": "2000003",
@@ -314,6 +354,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in the L2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_L2",
"SampleAfterValue": "2000003",
@@ -321,6 +362,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in the L2.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_L3",
"SampleAfterValue": "2000003",
@@ -328,6 +370,7 @@
},
{
"BriefDescription": "Counts the number of Extended Page Table walks from the ITLB that hit in memory.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.EPT_ITLB_MEMORY",
"SampleAfterValue": "2000003",
@@ -335,6 +378,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L1+FB",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L1",
"PublicDescription": "Number of ITLB page walker loads that hit in the L1+FB.",
@@ -343,6 +387,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L2",
"PublicDescription": "Number of ITLB page walker loads that hit in the L2.",
@@ -351,6 +396,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in the L3 + XSNP",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_L3",
@@ -360,6 +406,7 @@
},
{
"BriefDescription": "Number of ITLB page walker hits in Memory",
+ "Counter": "0,1,2,3",
"Errata": "HSD25",
"EventCode": "0xBC",
"EventName": "PAGE_WALKER_LOADS.ITLB_MEMORY",
@@ -369,6 +416,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "DTLB flush attempts of the thread-specific entries.",
@@ -377,6 +425,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "Count number of STLB flush attempts.",
diff --git a/tools/perf/pmu-events/arch/x86/icelake/cache.json b/tools/perf/pmu-events/arch/x86/icelake/cache.json
index d26c4efe35f0..e7bb2ca6f183 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/cache.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of cache lines replaced in L1 data cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x48",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.L2_STALL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "Number of L1D misses that are outstanding",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -52,6 +58,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -60,6 +67,7 @@
},
{
"BriefDescription": "Modified cache lines that are evicted by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.NON_SILENT",
"PublicDescription": "Counts the number of lines that are evicted by L2 cache when triggered by an L2 cache fill. Those lines are in Modified state. Modified lines are written back to L3",
@@ -67,15 +75,17 @@
"UMask": "0x2"
},
{
- "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache when triggered by an L2 cache fill.",
+ "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.SILENT",
- "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill. These lines are typically in Shared or Exclusive state. A non-threaded event.",
+ "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache. These lines are typically in Shared or Exclusive state. A non-threaded event.",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
"BriefDescription": "Cache lines that have been L2 hardware prefetched but not used by demand accesses",
+ "Counter": "0,1,2,3",
"EventCode": "0xf2",
"EventName": "L2_LINES_OUT.USELESS_HWPF",
"PublicDescription": "Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cache",
@@ -84,6 +94,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts the total number of L2 code requests.",
@@ -92,6 +103,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.",
@@ -100,6 +112,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"PublicDescription": "Counts demand requests that miss L2 cache.",
@@ -108,6 +121,7 @@
},
{
"BriefDescription": "Demand requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_REFERENCES",
"PublicDescription": "Counts demand requests to L2 cache.",
@@ -116,6 +130,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -124,6 +139,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
@@ -132,6 +148,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Counts L2 cache misses when fetching instructions.",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "Counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "L2_RQSTS.MISS",
@@ -164,6 +184,7 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x24",
"EventName": "L2_RQSTS.REFERENCES",
@@ -172,6 +193,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
@@ -180,6 +202,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
@@ -188,6 +211,7 @@
},
{
"BriefDescription": "SW prefetch requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_HIT",
"PublicDescription": "Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -196,6 +220,7 @@
},
{
"BriefDescription": "SW prefetch requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_MISS",
"PublicDescription": "Counts Software prefetch requests that miss the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -204,6 +229,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "Counts L2 writebacks that access L2 cache.",
@@ -212,6 +238,7 @@
},
{
"BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -220,206 +247,217 @@
},
{
"BriefDescription": "Retired load instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW.",
"SampleAfterValue": "1000003",
"UMask": "0x81"
},
{
"BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_STORES",
- "PEBS": "1",
"PublicDescription": "Counts all retired store instructions.",
"SampleAfterValue": "1000003",
"UMask": "0x82"
},
{
"BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ANY",
- "PEBS": "1",
"PublicDescription": "Counts all retired memory instructions - loads and stores.",
"SampleAfterValue": "1000003",
"UMask": "0x83"
},
{
"BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.LOCK_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with locked access.",
"SampleAfterValue": "100007",
"UMask": "0x21"
},
{
"BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions that split across a cacheline boundary.",
"SampleAfterValue": "100003",
"UMask": "0x41"
},
{
"BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_STORES",
- "PEBS": "1",
"PublicDescription": "Counts retired store instructions that split across a cacheline boundary.",
"SampleAfterValue": "100003",
"UMask": "0x42"
},
{
"BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
- "PEBS": "1",
"PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB).",
"SampleAfterValue": "100003",
"UMask": "0x11"
},
{
"BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
- "PEBS": "1",
"PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB).",
"SampleAfterValue": "100003",
"UMask": "0x12"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache.",
"SampleAfterValue": "20011",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3.",
"SampleAfterValue": "20011",
"UMask": "0x4"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
- "PEBS": "1",
"PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
"SampleAfterValue": "20011",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required.",
"SampleAfterValue": "100003",
"UMask": "0x8"
},
{
"BriefDescription": "Retired instructions with at least 1 uncacheable load or Bus Lock.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd4",
"EventName": "MEM_LOAD_MISC_RETIRED.UC",
- "PEBS": "1",
"PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock).",
"SampleAfterValue": "100007",
"UMask": "0x4"
},
{
"BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.FB_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready.",
"SampleAfterValue": "100007",
"UMask": "0x40"
},
{
"BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source.",
"SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_MISS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache.",
"SampleAfterValue": "200003",
"UMask": "0x8"
},
{
"BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with L2 cache hits as data sources.",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_MISS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions missed L2 cache as data sources.",
"SampleAfterValue": "100021",
"UMask": "0x10"
},
{
"BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache.",
"SampleAfterValue": "100021",
"UMask": "0x4"
},
{
"BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_MISS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache.",
"SampleAfterValue": "50021",
"UMask": "0x20"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -429,6 +467,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a cacheline in the L3 where a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -438,6 +477,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a cacheline in the L3 where a snoop hit in another core, data forwarding is not required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -447,6 +487,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a cacheline in the L3 where a snoop was sent but no other cores had the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -456,6 +497,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a cacheline in the L3 where a snoop was not needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -465,6 +507,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a cacheline in the L3 where a snoop was sent.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_SENT",
"MSRIndex": "0x1a6,0x1a7",
@@ -473,7 +516,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -483,6 +537,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit a cacheline in the L3 where a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -492,6 +547,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit a cacheline in the L3 where a snoop hit in another core, data forwarding is not required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -501,6 +557,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit a cacheline in the L3 where a snoop was sent but no other cores had the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -510,6 +567,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit a cacheline in the L3 where a snoop was not needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -519,6 +577,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit a cacheline in the L3 where a snoop was sent.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_SENT",
"MSRIndex": "0x1a6,0x1a7",
@@ -527,7 +586,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -537,6 +607,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a cacheline in the L3 where a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -546,6 +617,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a cacheline in the L3 where a snoop hit in another core, data forwarding is not required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -555,6 +627,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a cacheline in the L3 where a snoop was sent but no other cores had the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -564,6 +637,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a cacheline in the L3 where a snoop was not needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -573,6 +647,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a cacheline in the L3 where a snoop was sent.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_SENT",
"MSRIndex": "0x1a6,0x1a7",
@@ -581,7 +656,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L1D_AND_SWPF.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10400",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -591,6 +677,7 @@
},
{
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that hit a cacheline in the L3 where a snoop was sent but no other cores had the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -600,6 +687,7 @@
},
{
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that hit a cacheline in the L3 where a snoop was not needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -608,7 +696,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L2_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10010",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -618,6 +717,7 @@
},
{
"BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that hit a cacheline in the L3 where a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -627,6 +727,7 @@
},
{
"BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that hit a cacheline in the L3 where a snoop hit in another core, data forwarding is not required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -636,6 +737,7 @@
},
{
"BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that hit a cacheline in the L3 where a snoop was sent but no other cores had the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -645,6 +747,7 @@
},
{
"BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that hit a cacheline in the L3 where a snoop was not needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -654,6 +757,7 @@
},
{
"BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that hit a cacheline in the L3 where a snoop was sent.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_HIT.SNOOP_SENT",
"MSRIndex": "0x1a6,0x1a7",
@@ -662,7 +766,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L2_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10020",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -672,6 +787,7 @@
},
{
"BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that hit a cacheline in the L3 where a snoop hit in another cores caches, data forwarding is required as the data is modified.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -681,6 +797,7 @@
},
{
"BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that hit a cacheline in the L3 where a snoop hit in another core, data forwarding is not required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -690,6 +807,7 @@
},
{
"BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that hit a cacheline in the L3 where a snoop was sent but no other cores had the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -699,6 +817,7 @@
},
{
"BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that hit a cacheline in the L3 where a snoop was not needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -708,6 +827,7 @@
},
{
"BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that hit a cacheline in the L3 where a snoop was sent.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_RFO.L3_HIT.SNOOP_SENT",
"MSRIndex": "0x1a6,0x1a7",
@@ -717,6 +837,7 @@
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -726,6 +847,7 @@
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that hit a cacheline in the L3 where a snoop hit in another core, data forwarding is not required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -735,6 +857,7 @@
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that hit a cacheline in the L3 where a snoop was sent but no other cores had the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -744,6 +867,7 @@
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that hit a cacheline in the L3 where a snoop was not needed to satisfy the request.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.SNOOP_NOT_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -753,6 +877,7 @@
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that hit a cacheline in the L3 where a snoop was sent.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_HIT.SNOOP_SENT",
"MSRIndex": "0x1a6,0x1a7",
@@ -762,6 +887,7 @@
},
{
"BriefDescription": "Counts streaming stores that hit a cacheline in the L3 where a snoop was sent or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_HIT.ANY",
"MSRIndex": "0x1a6,0x1a7",
@@ -771,6 +897,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -779,6 +906,7 @@
},
{
"BriefDescription": "Counts memory transactions sent to the uncore.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"PublicDescription": "Counts memory transactions sent to the uncore including requests initiated by the core, all L3 prefetches, reads resulting from page walks, and snoop responses.",
@@ -787,6 +915,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -795,6 +924,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
@@ -803,6 +933,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding data read requests pending. Data read requests include cacheable demand reads and L2 prefetches, but do not include RFOs, code reads or prefetches to the L3. Reads due to page walks resulting from any request type will also be counted. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -811,6 +942,7 @@
},
{
"BriefDescription": "Cycles where at least 1 outstanding data read request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
@@ -819,7 +951,18 @@
"UMask": "0x8"
},
{
+ "BriefDescription": "Cycles with outstanding code read requests pending.",
+ "Counter": "0,1,2,3",
+ "CounterMask": "1",
+ "EventCode": "0x60",
+ "EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
+ "PublicDescription": "Cycles with outstanding code read requests pending. Code Read requests include both cacheable and non-cacheable Code Reads. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
"BriefDescription": "Cycles where at least 1 outstanding Demand RFO request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
@@ -829,6 +972,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -837,6 +981,7 @@
},
{
"BriefDescription": "Store Read transactions pending for off-core. Highly correlated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
"PublicDescription": "Counts the number of off-core outstanding read-for-ownership (RFO) store transactions every cycle. An RFO transaction is considered to be in the Off-core outstanding state between L2 cache miss and transaction completion.",
@@ -845,6 +990,7 @@
},
{
"BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF4",
"EventName": "SQ_MISC.BUS_LOCK",
"PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.",
@@ -853,6 +999,7 @@
},
{
"BriefDescription": "Cycles the queue waiting for offcore responses is full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SQ_FULL",
"PublicDescription": "Counts the cycles for which the thread is active and the queue waiting for responses from the uncore cannot take any more entries.",
@@ -860,7 +1007,16 @@
"UMask": "0x4"
},
{
+ "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "SW_PREFETCH_ACCESS.ANY",
+ "SampleAfterValue": "100003",
+ "UMask": "0xf"
+ },
+ {
"BriefDescription": "Number of PREFETCHNTA instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.NTA",
"PublicDescription": "Counts the number of PREFETCHNTA instructions executed.",
@@ -869,6 +1025,7 @@
},
{
"BriefDescription": "Number of PREFETCHW instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
"PublicDescription": "Counts the number of PREFETCHW instructions executed.",
@@ -877,6 +1034,7 @@
},
{
"BriefDescription": "Number of PREFETCHT0 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T0",
"PublicDescription": "Counts the number of PREFETCHT0 instructions executed.",
@@ -885,6 +1043,7 @@
},
{
"BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T1_T2",
"PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.",
diff --git a/tools/perf/pmu-events/arch/x86/icelake/counter.json b/tools/perf/pmu-events/arch/x86/icelake/counter.json
new file mode 100644
index 000000000000..5a350072522a
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/icelake/counter.json
@@ -0,0 +1,17 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "4",
+ "CountersNumGeneric": "8"
+ },
+ {
+ "Unit": "ARB",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "CLOCK",
+ "CountersNumFixed": 1,
+ "CountersNumGeneric": "0"
+ }
+] \ No newline at end of file
diff --git a/tools/perf/pmu-events/arch/x86/icelake/floating-point.json b/tools/perf/pmu-events/arch/x86/icelake/floating-point.json
index 85c26c889088..61ddce0c8db6 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts all microcode FP assists.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.FP",
"PublicDescription": "Counts all microcode Floating Point assists.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.8_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"SampleAfterValue": "1000003",
diff --git a/tools/perf/pmu-events/arch/x86/icelake/frontend.json b/tools/perf/pmu-events/arch/x86/icelake/frontend.json
index 2b539a08d2bf..7afa2436d584 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to ILD_STALL.LCP]",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "DECODE.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to ILD_STALL.LCP]",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE transitions count.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xab",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "DSB-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xab",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.",
@@ -35,193 +39,194 @@
},
{
"BriefDescription": "Retired Instructions who experienced DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x1",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x11",
- "PEBS": "1",
"PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced iTLB true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ITLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x14",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L1I_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x12",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L2_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x13",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x500106",
- "PEBS": "1",
"PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
"MSRIndex": "0x3F7",
"MSRValue": "0x508006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
"MSRIndex": "0x3F7",
"MSRValue": "0x501006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
"MSRIndex": "0x3F7",
"MSRValue": "0x500206",
- "PEBS": "1",
"PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
"MSRIndex": "0x3F7",
"MSRValue": "0x510006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x100206",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
"MSRIndex": "0x3F7",
"MSRValue": "0x502006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
"MSRIndex": "0x3F7",
"MSRValue": "0x500406",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
"MSRIndex": "0x3F7",
"MSRValue": "0x520006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
"MSRIndex": "0x3F7",
"MSRValue": "0x504006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
"MSRIndex": "0x3F7",
"MSRValue": "0x500806",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.STLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x15",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_DATA.STALLS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE_16B.IFDATA_STALL",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_DATA.STALLS]",
@@ -230,6 +235,7 @@
},
{
"BriefDescription": "Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_HIT",
"PublicDescription": "Counts instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.",
@@ -238,6 +244,7 @@
},
{
"BriefDescription": "Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_MISS",
"PublicDescription": "Counts instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.",
@@ -246,6 +253,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_STALL",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]",
@@ -254,6 +262,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_16B.IFDATA_STALL]",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE_DATA.STALLS",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_16B.IFDATA_STALL]",
@@ -262,6 +271,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_TAG.STALLS",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]",
@@ -270,6 +280,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_ANY",
@@ -279,15 +290,17 @@
},
{
"BriefDescription": "Cycles DSB is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_OK",
- "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the DSB (Decode Stream Buffer) path. Count includes uops that may 'bypass' the IDQ.",
"SampleAfterValue": "2000003",
"UMask": "0x8"
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
@@ -296,6 +309,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_ANY",
@@ -305,6 +319,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_OK",
@@ -314,6 +329,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -322,6 +338,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES_ANY",
@@ -331,6 +348,7 @@
},
{
"BriefDescription": "Number of switches from DSB or MITE to the MS",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -341,6 +359,7 @@
},
{
"BriefDescription": "Uops delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MS.",
@@ -349,6 +368,7 @@
},
{
"BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
"PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle.",
@@ -357,6 +377,7 @@
},
{
"BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "5",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -366,6 +387,7 @@
},
{
"BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
diff --git a/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json b/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json
index b43a6c6d8b7f..cf9ed3edb694 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/icl-metrics.json
@@ -1,63 +1,63 @@
[
{
"BriefDescription": "C10 residency percent per package",
- "MetricExpr": "cstate_pkg@c10\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c10\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C10_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C8 residency percent per package",
- "MetricExpr": "cstate_pkg@c8\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c8\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C8_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C9 residency percent per package",
- "MetricExpr": "cstate_pkg@c9\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c9\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C9_Pkg_Residency",
"ScaleUnit": "100%"
@@ -85,7 +85,6 @@
},
{
"BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_4k_aliasing",
@@ -98,13 +97,13 @@
"MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5 + UOPS_DISPATCHED.PORT_6) / (4 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * ASSISTS.ANY / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "34 * ASSISTS.ANY / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
@@ -113,8 +112,8 @@
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=1\\,edge@ / tma_info_thread_slots",
- "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / tma_info_thread_slots",
+ "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -133,9 +132,113 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
+ "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
+ "MetricName": "tma_bottleneck_big_code",
+ "MetricThreshold": "tma_bottleneck_big_code > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+ "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Ret",
+ "MetricName": "tma_bottleneck_branching_overhead",
+ "MetricThreshold": "tma_bottleneck_branching_overhead > 5",
+ "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)"
+ },
+ {
+ "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
+ "MetricGroup": "BvCB;Cor;tma_issueComp",
+ "MetricName": "tma_bottleneck_compute_bound_est",
+ "MetricThreshold": "tma_bottleneck_compute_bound_est > 20",
+ "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: "
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+ "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
+ "MetricName": "tma_bottleneck_data_cache_memory_bandwidth",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_bandwidth > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_dependency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_lock_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_loads / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_stores / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
+ "MetricName": "tma_bottleneck_data_cache_memory_latency",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_latency > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_mispredicts_resteers) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms)) - tma_bottleneck_big_code",
+ "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
+ "MetricName": "tma_bottleneck_instruction_fetch_bw",
+ "MetricThreshold": "tma_bottleneck_instruction_fetch_bw > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of irregular execution (e.g",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_mispredicts_resteers) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + tma_core_bound * RS_EVENTS.EMPTY_CYCLES / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
+ "MetricName": "tma_bottleneck_irregular_overhead",
+ "MetricThreshold": "tma_bottleneck_irregular_overhead > 10",
+ "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
+ "MetricName": "tma_bottleneck_memory_data_tlbs",
+ "MetricThreshold": "tma_bottleneck_memory_data_tlbs > 20",
+ "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
+ "MetricGroup": "BvMS;LockCont;Mem;Offcore;tma_issueSyncxn",
+ "MetricName": "tma_bottleneck_memory_synchronization",
+ "MetricThreshold": "tma_bottleneck_memory_synchronization > 10",
+ "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
+ "MetricName": "tma_bottleneck_mispredictions",
+ "MetricThreshold": "tma_bottleneck_mispredictions > 20",
+ "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 - (tma_bottleneck_big_code + tma_bottleneck_instruction_fetch_bw + tma_bottleneck_mispredictions + tma_bottleneck_data_cache_memory_bandwidth + tma_bottleneck_data_cache_memory_latency + tma_bottleneck_memory_data_tlbs + tma_bottleneck_memory_synchronization + tma_bottleneck_compute_bound_est + tma_bottleneck_irregular_overhead + tma_bottleneck_branching_overhead + tma_bottleneck_useful_work)",
+ "MetricGroup": "BvOB;Cor;Offcore",
+ "MetricName": "tma_bottleneck_other_bottlenecks",
+ "MetricThreshold": "tma_bottleneck_other_bottlenecks > 20",
+ "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
+ },
+ {
+ "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "BvUW;Ret",
+ "MetricName": "tma_bottleneck_useful_work",
+ "MetricThreshold": "tma_bottleneck_useful_work > 20"
+ },
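The tma_bottleneck_* entries above form a Bottlenecks View: each is a percentage of pipeline slots, and tma_bottleneck_other_bottlenecks is defined as 100 minus the sum of the other entries, so the view totals roughly 100%. A minimal sketch of that accounting in Python, using hypothetical values that are illustrative only:

# Recombine Bottlenecks-View percentages the way tma_bottleneck_other_bottlenecks
# is defined in the expressions above. All numbers are hypothetical.
measured = {                      # per-workload values, in percent
    "big_code": 4.0,
    "instruction_fetch_bw": 9.5,
    "mispredictions": 12.0,
    "data_cache_memory_bandwidth": 18.0,
    "data_cache_memory_latency": 22.0,
    "memory_data_tlbs": 3.0,
    "memory_synchronization": 1.5,
    "compute_bound_est": 6.0,
    "irregular_overhead": 2.0,
    "branching_overhead": 4.0,
    "useful_work": 15.0,
}
other_bottlenecks = 100 - sum(measured.values())   # the remainder bucket
print(f"other_bottlenecks = {other_bottlenecks:.1f}%")
print(f"total = {sum(measured.values()) + other_bottlenecks:.1f}%")  # ~100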
+ {
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions.",
"MetricExpr": "tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_branch_instructions",
"MetricThreshold": "tma_branch_instructions > 0.1 & tma_light_operations > 0.6",
"ScaleUnit": "100%"
@@ -143,11 +246,11 @@
{
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_bottleneck_mispredictions, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
"ScaleUnit": "100%"
},
{
@@ -178,13 +281,61 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that hit in the L2 cache.",
+ "MetricExpr": "max(0, tma_icache_misses - tma_code_l2_miss)",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_hit",
+ "MetricThreshold": "tma_code_l2_hit > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that miss in the L2 cache.",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_miss",
+ "MetricThreshold": "tma_code_l2_miss > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) ITLB was missed by instructions fetches, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_itlb_misses - tma_code_stlb_miss)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_hit",
+ "MetricThreshold": "tma_code_stlb_hit > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by instruction fetches, performing a hardware page walk",
+ "MetricExpr": "ITLB_MISSES.WALK_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_miss",
+ "MetricThreshold": "tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_2M_4M / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_2m",
+ "MetricThreshold": "tma_code_stlb_miss_2m > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_4K / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_4k",
+ "MetricThreshold": "tma_code_stlb_miss_4k > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
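The two TopdownL5 children above split tma_code_stlb_miss by the page size of completed instruction-side page walks, and their weights sum to 1, so they add back up to the parent. A minimal sketch of that split, with hypothetical event counts:

# Mirror of the tma_code_stlb_miss breakdown above; event counts are hypothetical.
walk_active = 1.2e7    # ITLB_MISSES.WALK_ACTIVE
clks        = 1.0e9    # tma_info_thread_clks
walk_4k     = 8.0e4    # ITLB_MISSES.WALK_COMPLETED_4K
walk_2m_4m  = 2.0e4    # ITLB_MISSES.WALK_COMPLETED_2M_4M

code_stlb_miss    = walk_active / clks
code_stlb_miss_2m = code_stlb_miss * walk_2m_4m / (walk_4k + walk_2m_4m)
code_stlb_miss_4k = code_stlb_miss * walk_4k / (walk_4k + walk_2m_4m)
assert abs((code_stlb_miss_2m + code_stlb_miss_4k) - code_stlb_miss) < 1e-12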
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(29 * tma_info_system_average_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM + 23.5 * tma_info_system_average_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricExpr": "(29 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM + 23.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
@@ -200,11 +351,11 @@
{
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "23.5 * tma_info_system_average_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricExpr": "23.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
@@ -212,14 +363,14 @@
"MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=1@ - cpu@INST_DECODED.DECODERS\\,cmask\\=2@) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group",
"MetricName": "tma_decoder0_alone",
- "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35))",
+ "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.DIVIDER_ACTIVE / tma_info_thread_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
@@ -232,7 +383,7 @@
"MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_dram_bound",
"MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
"ScaleUnit": "100%"
},
{
@@ -240,7 +391,7 @@
"MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -250,43 +401,43 @@
"MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_dsb_switches",
"MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_store",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_load",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
- "MetricExpr": "32.5 * tma_info_system_average_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricExpr": "32.5 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricExpr": "L1D_PEND_MISS.FB_FULL / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
- "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
+ "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
"ScaleUnit": "100%"
},
{
@@ -294,9 +445,9 @@
"MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
},
{
@@ -310,12 +461,12 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or up to ([SNB+] four; [ADL+] five) uops",
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops",
"MetricExpr": "tma_heavy_operations - tma_microcode_sequencer",
"MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0",
"MetricName": "tma_few_uops_instructions",
"MetricThreshold": "tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that that are decoder into two or more uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
"ScaleUnit": "100%"
},
{
@@ -328,8 +479,25 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists",
+ "MetricExpr": "34 * ASSISTS.FP / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_fp_assists",
+ "MetricThreshold": "tma_fp_assists > 0.1",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handing Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Floating-Point Divider unit was active.",
+ "MetricExpr": "ARITH.FP_DIVIDER_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_fp_divider",
+ "MetricThreshold": "tma_fp_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -338,7 +506,7 @@
},
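The FP metrics in this hunk replace umask-override syntax such as cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\,umask\=0x03@ with the named aliases FP_ARITH_INST_RETIRED.SCALAR and .VECTOR. The cpu@...@ form ORs sub-event umasks into one counter; a small sketch of that encoding, assuming the conventional FP_ARITH_INST_RETIRED umask values (not taken from this file):

# Assumed sub-event umasks for FP_ARITH_INST_RETIRED (conventional encoding).
SCALAR_DOUBLE, SCALAR_SINGLE = 0x01, 0x02
PACKED = (0x04, 0x08, 0x10, 0x20, 0x40, 0x80)  # 128/256/512-bit, DP and SP

scalar_mask = SCALAR_DOUBLE | SCALAR_SINGLE    # 0x03 -> the .SCALAR alias
vector_mask = 0
for m in PACKED:
    vector_mask |= m                           # 0xfc -> the .VECTOR alias
print(hex(scalar_mask), hex(vector_mask))      # 0x3 0xfc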
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.VECTOR / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -351,7 +519,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_128b",
"MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -360,7 +528,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_256b",
"MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -376,7 +544,7 @@
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
- "MetricGroup": "Default;PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -390,49 +558,49 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
- "MetricExpr": "ICACHE_16B.IFDATA_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricExpr": "ICACHE_DATA.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+ "BriefDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "tma_bottleneck_mispredictions * tma_info_thread_slots / 5 / BR_MISP_RETIRED.ALL_BRANCHES / 100",
"MetricGroup": "Bad;BrMispredicts;tma_issueBM",
"MetricName": "tma_info_bad_spec_branch_misprediction_cost",
- "PublicDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers"
+ "PublicDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_mispredicts_resteers"
},
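The revised misprediction-cost expression converts the Bottlenecks-View percentage into cycles per retired misprediction: dividing slots by 5 assumes a 5-wide pipeline, so the formula reduces to (percentage / 100) * clocks / mispredictions. A worked example with hypothetical counts:

# Worked example of tma_info_bad_spec_branch_misprediction_cost (hypothetical inputs).
bottleneck_mispredictions = 24.0   # tma_bottleneck_mispredictions, percent
thread_clks = 2.0e9                # tma_info_thread_clks
slots       = 5 * thread_clks      # assumes 5 slots per cycle, as the /5 implies
mispredicts = 6.0e6                # BR_MISP_RETIRED.ALL_BRANCHES

cost = bottleneck_mispredictions * slots / 5 / mispredicts / 100
# equivalently: (bottleneck_mispredictions / 100) * thread_clks / mispredicts
print(f"~{cost:.0f} cycles per retired misprediction")   # ~80 here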
{
- "BriefDescription": "Instructions per retired mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_NTAKEN",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_cond_ntaken",
"MetricThreshold": "tma_info_bad_spec_ipmisp_cond_ntaken < 200"
},
{
- "BriefDescription": "Instructions per retired mispredicts for conditional taken branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for conditional taken branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_cond_taken",
"MetricThreshold": "tma_info_bad_spec_ipmisp_cond_taken < 200"
},
{
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
},
{
- "BriefDescription": "Instructions per retired mispredicts for return branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for return branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.RET",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_ret",
@@ -446,6 +614,12 @@
"MetricThreshold": "tma_info_bad_spec_ipmispredict < 200"
},
{
+ "BriefDescription": "Speculative to Retired ratio of all clears (covering Mispredicts and nukes)",
+ "MetricExpr": "INT_MISC.CLEARS_COUNT / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)",
+ "MetricGroup": "BrMispredicts",
+ "MetricName": "tma_info_bad_spec_spec_clears_ratio"
+ },
+ {
"BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
@@ -454,13 +628,22 @@
"MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5"
},
{
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_lsd + tma_mite + tma_ms)))",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ },
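tma_info_botlnk_l2_dsb_bandwidth weights Frontend-Bound slots by the fetch-bandwidth share and then by the DSB's share of the four fetch-bandwidth sources (DSB, LSD, MITE, MS). A minimal sketch of that weighting with hypothetical node values:

# Hypothetical TMA node values (fractions of slots), illustrative only.
frontend_bound  = 0.30
fetch_latency   = 0.12
fetch_bandwidth = 0.18
dsb, lsd, mite, ms = 0.10, 0.02, 0.05, 0.01   # fetch-bandwidth sources

dsb_bandwidth = 100 * (frontend_bound
                       * (fetch_bandwidth / (fetch_bandwidth + fetch_latency))
                       * (dsb / (dsb + lsd + mite + ms)))
print(f"tma_info_botlnk_l2_dsb_bandwidth ~= {dsb_bandwidth:.1f}")  # flagged if > 10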
+ {
"BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite))",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_lsd + tma_mite + tma_ms))",
"MetricGroup": "DSBmiss;Fed;tma_issueFB",
"MetricName": "tma_info_botlnk_l2_dsb_misses",
"MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
- "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
},
{
"BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
@@ -472,67 +655,6 @@
"PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: "
},
{
- "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
- "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB;tma_issueBC",
- "MetricName": "tma_info_bottleneck_big_code",
- "MetricThreshold": "tma_info_bottleneck_big_code > 20",
- "PublicDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses). Related metrics: tma_info_bottleneck_branching_overhead"
- },
- {
- "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
- "MetricExpr": "100 * ((BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL)) / tma_info_thread_slots)",
- "MetricGroup": "Ret;tma_issueBC",
- "MetricName": "tma_info_bottleneck_branching_overhead",
- "MetricThreshold": "tma_info_bottleneck_branching_overhead > 10",
- "PublicDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls). Related metrics: tma_info_bottleneck_big_code"
- },
- {
- "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code",
- "MetricGroup": "Fed;FetchBW;Frontend",
- "MetricName": "tma_info_bottleneck_instruction_fetch_bw",
- "MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20"
- },
- {
- "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))",
- "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW",
- "MetricName": "tma_info_bottleneck_memory_bandwidth",
- "MetricThreshold": "tma_info_bottleneck_memory_bandwidth > 20",
- "PublicDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
- "MetricGroup": "Mem;MemoryTLB;Offcore;tma_issueTLB",
- "MetricName": "tma_info_bottleneck_memory_data_tlbs",
- "MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20",
- "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound))",
- "MetricGroup": "Mem;MemoryLat;Offcore;tma_issueLat",
- "MetricName": "tma_info_bottleneck_memory_latency",
- "MetricThreshold": "tma_info_bottleneck_memory_latency > 20",
- "PublicDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches). Related metrics: tma_l3_hit_latency, tma_mem_latency"
- },
- {
- "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
- "MetricGroup": "Bad;BadSpec;BrMispredicts;tma_issueBM",
- "MetricName": "tma_info_bottleneck_mispredictions",
- "MetricThreshold": "tma_info_bottleneck_mispredictions > 20",
- "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
- },
- {
"BriefDescription": "Fraction of branches that are CALL or RET",
"MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
"MetricGroup": "Bad;Branches",
@@ -564,7 +686,7 @@
},
{
"BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
- "MetricExpr": "CPU_CLK_UNHALTED.DISTRIBUTED",
+ "MetricExpr": "(CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else tma_info_thread_clks)",
"MetricGroup": "SMT",
"MetricName": "tma_info_core_core_clks"
},
@@ -575,21 +697,27 @@
"MetricName": "tma_info_core_coreipc"
},
{
+ "BriefDescription": "uops Executed per Cycle",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / tma_info_thread_clks",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_core_epc"
+ },
+ {
"BriefDescription": "Floating Point Operations Per Cycle",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
"MetricGroup": "Flops;Ret",
"MetricName": "tma_info_core_flopc"
},
{
"BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR) / (2 * tma_info_core_core_clks)",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_core_fp_arith_utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
},
@@ -599,7 +727,7 @@
"MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
"MetricName": "tma_info_frontend_dsb_coverage",
"MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 5 > 0.35",
- "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
+ "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
},
{
"BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
@@ -651,6 +779,12 @@
"MetricName": "tma_info_frontend_lsd_coverage"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -665,11 +799,11 @@
},
{
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
@@ -677,7 +811,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx128",
"MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
@@ -685,7 +819,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx256",
"MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate)",
@@ -693,7 +827,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx512",
"MetricThreshold": "tma_info_inst_mix_iparith_avx512 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
@@ -701,7 +835,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_dp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
@@ -709,7 +843,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_sp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
@@ -727,7 +861,7 @@
},
{
"BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_ipflop",
"MetricThreshold": "tma_info_inst_mix_ipflop < 10"
@@ -740,6 +874,12 @@
"MetricThreshold": "tma_info_inst_mix_ipload < 3"
},
{
+ "BriefDescription": "Instructions per PAUSE (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / MISC_RETIRED.PAUSE_INST",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_ippause"
+ },
+ {
"BriefDescription": "Instructions per Store (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_STORES",
"MetricGroup": "InsType",
@@ -748,151 +888,170 @@
},
{
"BriefDescription": "Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / cpu@SW_PREFETCH_ACCESS.T0\\,umask\\=0xF@",
+ "MetricExpr": "INST_RETIRED.ANY / SW_PREFETCH_ACCESS.ANY",
"MetricGroup": "Prefetches",
"MetricName": "tma_info_inst_mix_ipswpf",
"MetricThreshold": "tma_info_inst_mix_ipswpf < 100"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 11",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_access_bw",
"MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_core_l3_cache_access_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_access_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
},
{
"BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_fb_hpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
+ },
+ {
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki_load"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_load"
},
{
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * (OFFCORE_REQUESTS.ALL_DATA_RD - OFFCORE_REQUESTS.DEMAND_DATA_RD + L2_RQSTS.ALL_DEMAND_MISS + L2_RQSTS.SWPF_MISS) / tma_info_inst_mix_instructions",
- "MetricGroup": "CacheMisses;Mem;Offcore",
+ "MetricGroup": "CacheHits;Mem;Offcore",
"MetricName": "tma_info_memory_l2mpki_all"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki_load"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_l3_cache_access_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=1@",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average Latency for L3 cache miss demand Loads",
- "MetricExpr": "cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,umask\\=0x10@ / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l3_miss_latency"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
+ "BriefDescription": "\"Bus lock\" per kilo instruction",
+ "MetricExpr": "1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_bus_lock_pki"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
+ "BriefDescription": "Un-cacheable retired load per kilo instruction",
+ "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_uc_load_pki"
},
{
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_access_bw",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Rate of L2 HW prefetched lines that were not used by demand accesses",
+ "MetricExpr": "L2_LINES_OUT.USELESS_HWPF / (L2_LINES_OUT.SILENT + L2_LINES_OUT.NON_SILENT)",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_memory_prefetches_useless_hwpf",
+ "MetricThreshold": "tma_info_memory_prefetches_useless_hwpf > 0.15"
},
{
"BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
@@ -920,42 +1079,80 @@
"MetricName": "tma_info_memory_tlb_store_stlb_mpki"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute"
},
{
+ "BriefDescription": "Average number of uops fetched from DSB per cycle",
+ "MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_dsb"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from LSD per cycle",
+ "MetricExpr": "LSD.UOPS / LSD.CYCLES_ACTIVE",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_lsd"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MITE per cycle",
+ "MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_mite"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MS per cycle",
+ "MetricExpr": "IDQ.MS_UOPS / cpu@IDQ.MS_UOPS\\,cmask\\=1@",
+ "MetricGroup": "Fed;FetchLat;MicroSeq",
+ "MetricName": "tma_info_pipeline_fetch_ms"
+ },
+ {
+ "BriefDescription": "Instructions per a microcode Assist invocation",
+ "MetricExpr": "INST_RETIRED.ANY / ASSISTS.ANY",
+ "MetricGroup": "MicroSeq;Pipeline;Ret;Retire",
+ "MetricName": "tma_info_pipeline_ipassist",
+ "MetricThreshold": "tma_info_pipeline_ipassist < 100e3",
+ "PublicDescription": "Instructions per a microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate)"
+ },
+ {
"BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
"MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
"MetricGroup": "Pipeline;Ret",
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / tma_info_system_time / 1e3",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
- "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_mem_bandwidth, tma_sq_full"
+ "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
{
"BriefDescription": "Giga Floating Point Operations Per Second",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / duration_time",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
},
{
"BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
@@ -978,6 +1175,19 @@
"MetricThreshold": "tma_info_system_kernel_utilization > 0.05"
},
{
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
+ },
+ {
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "power@energy\\-pkg@ * 61 / (tma_info_system_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
+ },
+ {
"BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0",
"MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / tma_info_core_core_clks",
"MetricGroup": "Power",
@@ -1013,6 +1223,13 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
@@ -1063,66 +1280,91 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
"MetricThreshold": "tma_info_thread_uptb < 7.5"
},
{
+ "BriefDescription": "This metric represents fraction of cycles where the Integer Divider unit was active.",
+ "MetricExpr": "tma_divider - tma_fp_divider",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_int_divider",
+ "MetricThreshold": "tma_int_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
- "MetricExpr": "ICACHE_64B.IFTAG_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache",
+ "MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_dependency",
+ "MetricThreshold": "tma_l1_latency_dependency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache. The short latency of the L1D cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + L1D_PEND_MISS.FB_FULL_PERIODS) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited)",
+ "MetricExpr": "3.5 * tma_info_system_core_frequency * MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_l2_bound_group",
+ "MetricName": "tma_l2_hit_latency",
+ "MetricThreshold": "tma_l2_hit_latency > 0.05 & (tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited). Avoiding L1 cache misses (i.e. L1 misses/L2 hits) will improve the latency. Sample with: MEM_LOAD_RETIRED.L2_HIT",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
- "MetricExpr": "9 * tma_info_system_average_frequency * MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "MetricExpr": "9 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_memory_latency, tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_bottleneck_data_cache_memory_latency, tma_mem_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
- "MetricExpr": "ILD_STALL.LCP / tma_info_thread_clks",
+ "MetricExpr": "DECODE.LCP / tma_info_thread_clks",
"MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_lcp",
"MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
+ "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
"ScaleUnit": "100%"
},
{
@@ -1132,7 +1374,7 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
@@ -1161,13 +1403,37 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_1G / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_1g",
+ "MetricThreshold": "tma_load_stlb_miss_1g > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_2m",
+ "MetricThreshold": "tma_load_stlb_miss_2m > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_4K / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_4k",
+ "MetricThreshold": "tma_load_stlb_miss_4k > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
"ScaleUnit": "100%"
},
{
@@ -1175,36 +1441,36 @@
"MetricExpr": "(LSD.CYCLES_ACTIVE - LSD.CYCLES_OK) / tma_info_core_core_clks / 2",
"MetricGroup": "FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_lsd",
- "MetricThreshold": "tma_lsd > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35)",
+ "MetricThreshold": "tma_lsd > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit. LSD typically does well sustaining Uop supply. However; in some rare cases; optimal uop-delivery could not be reached for small loops whose size (in terms of number of uops) does not suit well the LSD structure.",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_bottleneck_memory_synchronization, tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_memory_latency, tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_latency, tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
@@ -1228,20 +1494,20 @@
},
{
"BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
- "MetricExpr": "tma_retiring * tma_info_thread_slots / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slots",
+ "MetricExpr": "UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slots",
"MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS",
"MetricName": "tma_microcode_sequencer",
"MetricThreshold": "tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1",
- "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
+ "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
- "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
"ScaleUnit": "100%"
},
{
@@ -1249,7 +1515,7 @@
"MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
"ScaleUnit": "100%"
},
@@ -1258,16 +1524,24 @@
"MetricExpr": "(cpu@IDQ.MITE_UOPS\\,cmask\\=4@ - cpu@IDQ.MITE_UOPS\\,cmask\\=5@) / tma_info_thread_clks",
"MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_mite_group",
"MetricName": "tma_mite_4wide",
- "MetricThreshold": "tma_mite_4wide > 0.05 & (tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35))",
+ "MetricThreshold": "tma_mite_4wide > 0.05 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued",
+ "BriefDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles)",
"MetricExpr": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY",
"MetricGroup": "TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group",
"MetricName": "tma_mixing_vectors",
"MetricThreshold": "tma_mixing_vectors > 0.05",
- "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "PublicDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the Microcode Sequencer (MS) unit - see Microcode_Sequencer node for details.",
+ "MetricExpr": "cpu@IDQ.MS_UOPS\\,cmask\\=1@ / tma_info_core_core_clks / 3.3",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_ms",
+ "MetricThreshold": "tma_ms > 0.05 & tma_fetch_bandwidth > 0.2",
"ScaleUnit": "100%"
},
{
@@ -1276,22 +1550,22 @@
"MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO",
"MetricName": "tma_ms_switches",
"MetricThreshold": "tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
+ "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
"MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
"MetricName": "tma_nop_instructions",
- "MetricThreshold": "tma_nop_instructions > 0.1 & tma_light_operations > 0.6",
+ "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
"PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_branch_instructions + tma_nop_instructions))",
+ "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_branch_instructions))",
"MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_other_light_ops",
"MetricThreshold": "tma_other_light_ops > 0.3 & tma_light_operations > 0.6",
@@ -1299,6 +1573,22 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
+ "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)",
+ "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_other_mispredicts",
+ "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
+ "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)",
+ "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_other_nukes",
+ "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch)",
"MetricExpr": "UOPS_DISPATCHED.PORT_0 / tma_info_core_core_clks",
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
@@ -1326,17 +1616,17 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
- "MetricExpr": "((cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_thread_clks)",
+ "MetricExpr": "((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_thread_clks)",
"MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_ports_utilization",
"MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
@@ -1345,7 +1635,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ / tma_info_thread_clks + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / tma_info_thread_clks",
+ "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1373,17 +1663,17 @@
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
"MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0 * tma_info_thread_slots",
- "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -1393,18 +1683,18 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
"MetricExpr": "RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks",
- "MetricGroup": "PortsUtil;TopdownL5;tma_L5_group;tma_issueSO;tma_ports_utilized_0_group",
+ "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
"MetricName": "tma_serializing_operation",
- "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)))",
+ "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
"MetricExpr": "140 * MISC_RETIRED.PAUSE_INST / tma_info_thread_clks",
- "MetricGroup": "TopdownL6;tma_L6_group;tma_serializing_operation_group",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
"MetricName": "tma_slow_pause",
- "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))))",
+ "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: MISC_RETIRED.PAUSE_INST",
"ScaleUnit": "100%"
},
@@ -1413,13 +1703,12 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents rate of split store accesses",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group",
"MetricName": "tma_split_stores",
@@ -1430,10 +1719,10 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "L1D_PEND_MISS.L2_STALL / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth",
+ "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
"ScaleUnit": "100%"
},
{
@@ -1447,7 +1736,6 @@
},
{
"BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_store_fwd_blk",
@@ -1458,7 +1746,7 @@
{
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -1490,6 +1778,30 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_1G / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_1g",
+ "MetricThreshold": "tma_store_stlb_miss_1g > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_2m",
+ "MetricThreshold": "tma_store_stlb_miss_2m > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_4K / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_4k",
+ "MetricThreshold": "tma_store_stlb_miss_4k > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores",
"MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / tma_info_thread_clks",
"MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group",
@@ -1501,10 +1813,10 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "10 * BACLEARS.ANY / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit). Sample with: BACLEARS.ANY",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY",
"ScaleUnit": "100%"
},
{
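The level-1 tma_retiring expression above simply divides the retiring slot count by the sum of the four top-down slot categories. Below is a minimal standalone sketch of that arithmetic in Python; the slot counts are hypothetical placeholders standing in for a perf reading of the topdown-* events, and the helper function is only an illustration, not part of perf itself.

# Hypothetical slot counts standing in for a perf reading of the
# topdown-* events; these numbers are placeholders, not measured data.
slots = {
    "topdown-fe-bound": 1_200_000,
    "topdown-bad-spec": 300_000,
    "topdown-retiring": 2_100_000,
    "topdown-be-bound": 1_400_000,
}

def tma_retiring(slots):
    """Fraction of issue slots that retired useful uops (TopdownL1)."""
    total = sum(slots[k] for k in ("topdown-fe-bound", "topdown-bad-spec",
                                   "topdown-retiring", "topdown-be-bound"))
    return slots["topdown-retiring"] / total

# ScaleUnit is 100%, so the metric is reported as a percentage.
print(f"tma_retiring = {tma_retiring(slots) * 100:.1f}%")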
diff --git a/tools/perf/pmu-events/arch/x86/icelake/memory.json b/tools/perf/pmu-events/arch/x86/icelake/memory.json
index e8d2ec1c029b..1455aaac37b1 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/memory.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L3_MISS",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to any reasons (multiple categories may count as one).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED",
"PublicDescription": "Counts the number of times HLE abort was triggered.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to unfriendly events (such as interrupts).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_EVENTS",
"PublicDescription": "Counts the number of times an HLE execution aborted due to unfriendly events (such as interrupts).",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_MEM",
"PublicDescription": "Counts the number of times an HLE execution aborted due to various memory events (e.g., read/write capacity and conflicts).",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.ABORTED_UNFRIENDLY",
"PublicDescription": "Counts the number of times an HLE execution aborted due to HLE-unfriendly instructions and certain unfriendly events (such as AD assists etc.).",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Number of times an HLE execution successfully committed",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.COMMIT",
"PublicDescription": "Counts the number of times HLE commit succeeded.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Number of times an HLE execution started.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc8",
"EventName": "HLE_RETIRED.START",
"PublicDescription": "Counts the number of times we entered an HLE region. Does not count nested transactions.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Number of machine clears due to memory ordering conflicts.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture",
@@ -73,102 +82,113 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "1009",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "20011",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "503",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "101",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "2003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "50021",
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -177,7 +197,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -186,7 +227,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -195,7 +257,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L1D_AND_SWPF.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000400",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -204,7 +287,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L1D_AND_SWPF.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000400",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L2_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000010",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -213,7 +317,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L2_DATA_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000010",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L2_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000020",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L2_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -222,7 +347,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L2_RFO.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000020",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.OTHER.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184008000",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -231,7 +377,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.OTHER.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184008000",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts streaming stores that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.STREAMING_WR.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000800",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts streaming stores that was not supplied by the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -240,7 +407,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts streaming stores that DRAM supplied the request.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.STREAMING_WR.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x184000800",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data read requests that miss the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
"SampleAfterValue": "100003",
@@ -248,6 +426,7 @@
},
{
"BriefDescription": "Cycles where at least one demand data read request known to have missed the L3 cache is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD",
@@ -257,6 +436,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
"PublicDescription": "Counts the number of times RTM abort was triggered.",
@@ -264,15 +444,17 @@
"UMask": "0x4"
},
{
- "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_EVENTS",
- "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).",
+ "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt).",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEM",
"PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
@@ -281,6 +463,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEMTYPE",
"PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.",
@@ -289,6 +472,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY",
"PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.",
@@ -297,6 +481,7 @@
},
{
"BriefDescription": "Number of times an RTM execution successfully committed",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Counts the number of times RTM commit succeeded.",
@@ -305,6 +490,7 @@
},
{
"BriefDescription": "Number of times an RTM execution started.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.START",
"PublicDescription": "Counts the number of times we entered an RTM region. Does not count nested transactions.",
@@ -313,6 +499,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed inside a transactional region",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"PublicDescription": "Counts Unfriendly TSX abort triggered by a vzeroupper instruction.",
@@ -321,6 +508,7 @@
},
{
"BriefDescription": "Number of times an instruction execution caused the transactional nest count supported to be exceeded",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"PublicDescription": "Counts Unfriendly TSX abort triggered by a nest count that is too deep.",
@@ -329,6 +517,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_READ",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads",
@@ -337,6 +526,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional writes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes.",
@@ -345,6 +535,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Counts the number of times a TSX line had a cache conflict.",
@@ -353,6 +544,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to XRELEASE lock not satisfying the address and value requirements in the elision buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_MISMATCH",
"PublicDescription": "Counts the number of times a TSX Abort was triggered due to release/commit but data and address mismatch.",
@@ -361,6 +553,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to NoAllocatedElisionBuffer being non-zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_NOT_EMPTY",
"PublicDescription": "Counts the number of times a TSX Abort was triggered due to commit but Lock Buffer not empty.",
@@ -369,6 +562,7 @@
},
{
"BriefDescription": "Number of times an HLE transactional execution aborted due to an unsupported read alignment from the elision buffer.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_ELISION_BUFFER_UNSUPPORTED_ALIGNMENT",
"PublicDescription": "Counts the number of times a TSX Abort was triggered due to attempting an unsupported alignment from Lock Buffer.",
@@ -377,6 +571,7 @@
},
{
"BriefDescription": "Number of times a HLE transactional region aborted due to a non XRELEASE prefixed instruction writing to an elided lock in the elision buffer",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_HLE_STORE_TO_ELIDED_LOCK",
"PublicDescription": "Counts the number of times a TSX Abort was triggered due to a non-release/commit store to lock.",
@@ -385,6 +580,7 @@
},
{
"BriefDescription": "Number of times HLE lock could not be elided due to ElisionBufferAvailable being zero.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.HLE_ELISION_BUFFER_FULL",
"PublicDescription": "Counts the number of times we could not allocate Lock Buffer.",
diff --git a/tools/perf/pmu-events/arch/x86/icelake/metricgroups.json b/tools/perf/pmu-events/arch/x86/icelake/metricgroups.json
index a151ba9cccb0..80ca8021f2de 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/metricgroups.json
@@ -2,9 +2,22 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -24,8 +37,11 @@
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -63,10 +79,14 @@
"tma_L5_group": "Metrics for top-down breakdown at level 5",
"tma_L6_group": "Metrics for top-down breakdown at level 6",
"tma_alu_op_utilization_group": "Metrics contributing to tma_alu_op_utilization category",
+ "tma_assists_group": "Metrics contributing to tma_assists category",
"tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
+ "tma_branch_mispredicts_group": "Metrics contributing to tma_branch_mispredicts category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
+ "tma_code_stlb_miss_group": "Metrics contributing to tma_code_stlb_miss category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -76,10 +96,11 @@
"tma_fp_vector_group": "Metrics contributing to tma_fp_vector category",
"tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
"tma_heavy_operations_group": "Metrics contributing to tma_heavy_operations category",
+ "tma_icache_misses_group": "Metrics contributing to tma_icache_misses category",
"tma_issue2P": "Metrics related by the issue $issue2P",
- "tma_issueBC": "Metrics related by the issue $issueBC",
"tma_issueBM": "Metrics related by the issue $issueBM",
"tma_issueBW": "Metrics related by the issue $issueBW",
+ "tma_issueComp": "Metrics related by the issue $issueComp",
"tma_issueD0": "Metrics related by the issue $issueD0",
"tma_issueFB": "Metrics related by the issue $issueFB",
"tma_issueFL": "Metrics related by the issue $issueFL",
@@ -95,19 +116,25 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_l2_bound_group": "Metrics contributing to tma_l2_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_load_stlb_miss_group": "Metrics contributing to tma_load_stlb_miss category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
"tma_mite_group": "Metrics contributing to tma_mite category",
+ "tma_other_light_ops_group": "Metrics contributing to tma_other_light_ops category",
"tma_ports_utilization_group": "Metrics contributing to tma_ports_utilization category",
"tma_ports_utilized_0_group": "Metrics contributing to tma_ports_utilized_0 category",
"tma_ports_utilized_3m_group": "Metrics contributing to tma_ports_utilized_3m category",
"tma_retiring_group": "Metrics contributing to tma_retiring category",
"tma_serializing_operation_group": "Metrics contributing to tma_serializing_operation category",
"tma_store_bound_group": "Metrics contributing to tma_store_bound category",
- "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category"
+ "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category",
+ "tma_store_stlb_miss_group": "Metrics contributing to tma_store_stlb_miss category"
}
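The group names referenced by a metric's semicolon-separated MetricGroup string resolve to the descriptions kept in this metricgroups.json file. A short Python sketch of that lookup follows; the path is again an assumed local checkout, and the example string is copied from the tma_other_nukes entry earlier in this patch.

import json

# Assumed path into a local perf source tree; adjust as needed.
path = "tools/perf/pmu-events/arch/x86/icelake/metricgroups.json"

with open(path) as f:
    group_descriptions = json.load(f)

# MetricGroup values in the metrics file are semicolon-separated group names;
# this string is taken from the tma_other_nukes entry above.
metric_group = "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group"

for name in metric_group.split(";"):
    print(f'{name}: {group_descriptions.get(name, "(no description)")}')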
diff --git a/tools/perf/pmu-events/arch/x86/icelake/other.json b/tools/perf/pmu-events/arch/x86/icelake/other.json
index cfb590632918..141cd30a30af 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/other.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL0_TURBO_LICENSE",
"PublicDescription": "Counts Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL1_TURBO_LICENSE",
"PublicDescription": "Counts Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions.",
@@ -17,176 +19,16 @@
},
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX512 turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL2_TURBO_LICENSE",
- "PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchtecture). This includes high current AVX 512-bit instructions.",
+ "PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture). This includes high current AVX 512-bit instructions.",
"SampleAfterValue": "200003",
"UMask": "0x20"
},
{
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L1D_AND_SWPF.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10400",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L1D_AND_SWPF.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000400",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L1D_AND_SWPF.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000400",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L2_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10010",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L2_DATA_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000010",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetch data reads (which bring data to L2) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L2_DATA_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000010",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L2_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10020",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L2_RFO.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000020",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetch RFOs (which bring data to L2) that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L2_RFO.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000020",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -195,48 +37,13 @@
"UMask": "0x1"
},
{
- "BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.OTHER.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184008000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.OTHER.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184008000",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
"BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10800",
"SampleAfterValue": "100003",
"UMask": "0x1"
- },
- {
- "BriefDescription": "Counts streaming stores that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.STREAMING_WR.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000800",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts streaming stores that DRAM supplied the request.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.STREAMING_WR.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x184000800",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
}
]
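The "Counter" fields added throughout this update list which general-purpose counters an event may be scheduled on. A tiny hypothetical helper that parses that field is sketched below; the sample strings mirror the two values used in the hunks above.

def allowed_counters(counter_field):
    """Parse a pmu-events "Counter" string into a set of counter indices."""
    return {int(c) for c in counter_field.split(",")}

# The OCR.* offcore events in this update are limited to the first four GP
# counters, while the HLE/RTM *_RETIRED events may use any of the eight.
print(allowed_counters("0,1,2,3"))          # {0, 1, 2, 3}
print(allowed_counters("0,1,2,3,4,5,6,7"))  # {0, 1, 2, 3, 4, 5, 6, 7}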
diff --git a/tools/perf/pmu-events/arch/x86/icelake/pipeline.json b/tools/perf/pmu-events/arch/x86/icelake/pipeline.json
index 375b78044f14..44659f26cbab 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x14",
"EventName": "ARITH.DIVIDER_ACTIVE",
@@ -9,7 +10,17 @@
"UMask": "0x9"
},
{
+ "BriefDescription": "ARITH.FP_DIVIDER_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0x14",
+ "EventName": "ARITH.FP_DIVIDER_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.ANY",
"PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware Examples include AD (page Access Dirty), FP and AVX related assists.",
@@ -18,157 +29,158 @@
},
{
"BriefDescription": "All branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts all branch instructions retired.",
"SampleAfterValue": "400009"
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND",
- "PEBS": "1",
"PublicDescription": "Counts conditional branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x11"
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_NTAKEN",
- "PEBS": "1",
"PublicDescription": "Counts not taken branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x10"
},
{
"BriefDescription": "Taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts taken conditional branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x1"
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
- "PEBS": "1",
"PublicDescription": "Counts far branch instructions retired.",
"SampleAfterValue": "100007",
"UMask": "0x40"
},
{
"BriefDescription": "Indirect near branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT",
- "PEBS": "1",
"PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch.",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
- "PEBS": "1",
"PublicDescription": "Counts both direct and indirect near call instructions retired.",
"SampleAfterValue": "100007",
"UMask": "0x2"
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
- "PEBS": "1",
"PublicDescription": "Counts return instructions retired.",
"SampleAfterValue": "100007",
"UMask": "0x8"
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts taken branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x20"
},
{
"BriefDescription": "All mispredicted branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
"SampleAfterValue": "50021"
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND",
- "PEBS": "1",
"PublicDescription": "Counts mispredicted conditional branch instructions retired.",
"SampleAfterValue": "50021",
"UMask": "0x11"
},
{
"BriefDescription": "Mispredicted non-taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_NTAKEN",
- "PEBS": "1",
"PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken.",
"SampleAfterValue": "50021",
"UMask": "0x10"
},
{
"BriefDescription": "number of branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts taken conditional mispredicted branch instructions retired.",
"SampleAfterValue": "50021",
"UMask": "0x1"
},
{
"BriefDescription": "All miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT",
- "PEBS": "1",
"PublicDescription": "Counts all miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).",
"SampleAfterValue": "50021",
"UMask": "0x80"
},
{
"BriefDescription": "Mispredicted indirect CALL instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
"PublicDescription": "Counts retired mispredicted indirect (near taken) CALL instructions, including both register and memory indirect.",
"SampleAfterValue": "50021",
"UMask": "0x2"
},
{
"BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken.",
"SampleAfterValue": "50021",
"UMask": "0x20"
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RET",
- "PEBS": "1",
"PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired.",
"SampleAfterValue": "50021",
"UMask": "0x8"
},
{
"BriefDescription": "Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.DISTRIBUTED",
"PublicDescription": "This event distributes cycle counts between active hyperthreads, i.e., those in C0. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If all other hyperthreads are inactive (or disabled or do not exist), all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -177,6 +189,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"PublicDescription": "Counts Core crystal clock cycles when current thread is unhalted and the other thread is halted.",
@@ -185,6 +198,7 @@
},
{
"BriefDescription": "Core crystal clock cycles. Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED",
"PublicDescription": "This event distributes Core crystal clock cycle counts between active hyperthreads, i.e., those in C0 sleep-state. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If one thread is active in a core, all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -193,6 +207,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
@@ -200,6 +215,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when the thread is unhalted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Counts core crystal clock cycles when the thread is unhalted.",
@@ -208,6 +224,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -215,6 +232,7 @@
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -222,6 +240,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -230,6 +249,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -238,6 +258,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "16",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -246,6 +267,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -254,6 +276,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -262,6 +285,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "20",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -270,6 +294,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -278,6 +303,7 @@
},
{
"BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
@@ -286,6 +312,7 @@
},
{
"BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
@@ -294,6 +321,7 @@
},
{
"BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
"PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -302,6 +330,7 @@
},
{
"BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
"PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -310,6 +339,7 @@
},
{
"BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
@@ -319,6 +349,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to DECODE.LCP]",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to DECODE.LCP]",
@@ -327,6 +358,7 @@
},
{
"BriefDescription": "Instruction decoders utilized in a cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "INST_DECODED.DECODERS",
"PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
@@ -335,38 +367,39 @@
},
{
"BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
"PublicDescription": "Counts the number of instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"PublicDescription": "Counts the number of instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Number of all retired NOP instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.NOP",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Precise instruction retired event with a reduced effect of PEBS shadow in IP distribution",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.PREC_DIST",
- "PEBS": "1",
"PublicDescription": "A version of INST_RETIRED that allows for a more unbiased distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR) feature to mitigate some bias in how retired instructions get sampled. Use on Fixed Counter 0.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles without actually retired instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.STALL_CYCLES",
@@ -377,6 +410,7 @@
},
{
"BriefDescription": "Cycles the Backend cluster is recovering after a miss-speculation or a Store Buffer or Load Buffer drain stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.ALL_RECOVERY_CYCLES",
@@ -386,6 +420,7 @@
},
{
"BriefDescription": "Clears speculative count",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x0D",
@@ -396,6 +431,7 @@
},
{
"BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0d",
"EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
"PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
@@ -404,6 +440,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
"PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
@@ -412,6 +449,7 @@
},
{
"BriefDescription": "TMA slots where uops got dropped",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0d",
"EventName": "INT_MISC.UOP_DROPPING",
"PublicDescription": "Estimated number of Top-down Microarchitecture Analysis slots that got dropped due to non front-end reasons",
@@ -420,6 +458,7 @@
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -428,6 +467,7 @@
},
{
"BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
@@ -436,6 +476,7 @@
},
{
"BriefDescription": "False dependencies due to partial compare on address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "Counts the number of times a load got blocked due to false dependencies due to partial compare on address.",
@@ -444,6 +485,7 @@
},
{
"BriefDescription": "Counts the number of demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PREFETCH.SWPF",
"PublicDescription": "Counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
@@ -452,6 +494,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -461,6 +504,7 @@
},
{
"BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa8",
"EventName": "LSD.CYCLES_OK",
@@ -470,6 +514,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xa8",
"EventName": "LSD.UOPS",
"PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
@@ -478,6 +523,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xc3",
@@ -488,6 +534,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -496,6 +543,7 @@
},
{
"BriefDescription": "Increments whenever there is an update to the LBR array.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc",
"EventName": "MISC_RETIRED.LBR_INSERTS",
"PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR to be enabled properly.",
@@ -504,6 +552,7 @@
},
{
"BriefDescription": "Number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc",
"EventName": "MISC_RETIRED.PAUSE_INST",
"PublicDescription": "Counts number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.",
@@ -512,6 +561,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.",
@@ -520,6 +570,7 @@
},
{
"BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SCOREBOARD",
"SampleAfterValue": "100003",
@@ -527,14 +578,16 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5e",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
- "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into stravation periods (e.g. branch mispredictions or i-cache misses)",
+ "PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
"SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -546,6 +599,7 @@
},
{
"BriefDescription": "TMA slots where no uops were being issued due to lack of back-end resources.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
"PublicDescription": "Counts the number of Top-down Microarchitecture Analysis (TMA) method's slots where no micro-operations were being issued from front-end to back-end of the machine due to lack of back-end resources.",
@@ -553,15 +607,8 @@
"UMask": "0x2"
},
{
- "BriefDescription": "TMA slots wasted due to incorrect speculation by branch mispredictions",
- "EventCode": "0xa4",
- "EventName": "TOPDOWN.BR_MISPREDICT_SLOTS",
- "PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by branch mispredictions. This event estimates number of operations that were issued but not retired from the speculative path as well as the out-of-order engine recovery past a branch misprediction.",
- "SampleAfterValue": "10000003",
- "UMask": "0x8"
- },
- {
"BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
+ "Counter": "Fixed counter 3",
"EventName": "TOPDOWN.SLOTS",
"PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core. Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
"SampleAfterValue": "10000003",
@@ -569,6 +616,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.SLOTS_P",
"PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core.",
@@ -577,6 +625,7 @@
},
{
"BriefDescription": "Number of uops decoded out of instructions exclusively fetched by decoder 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UOPS_DECODED.DEC0",
"PublicDescription": "Uops exclusively fetched by decoder 0",
@@ -585,6 +634,7 @@
},
{
"BriefDescription": "Number of uops executed on port 0",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_0",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0.",
@@ -593,6 +643,7 @@
},
{
"BriefDescription": "Number of uops executed on port 1",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_1",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 1.",
@@ -601,6 +652,7 @@
},
{
"BriefDescription": "Number of uops executed on port 2 and 3",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_2_3",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 2 and 3.",
@@ -609,6 +661,7 @@
},
{
"BriefDescription": "Number of uops executed on port 4 and 9",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_4_9",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 5 and 9.",
@@ -617,6 +670,7 @@
},
{
"BriefDescription": "Number of uops executed on port 5",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_5",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 5.",
@@ -625,6 +679,7 @@
},
{
"BriefDescription": "Number of uops executed on port 6",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_6",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 6.",
@@ -633,6 +688,7 @@
},
{
"BriefDescription": "Number of uops executed on port 7 and 8",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_7_8",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 7 and 8.",
@@ -641,6 +697,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
"PublicDescription": "Counts the number of uops executed from any thread.",
@@ -649,6 +706,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -658,6 +716,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -667,6 +726,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -676,6 +736,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -685,6 +746,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1",
@@ -694,6 +756,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2",
@@ -703,6 +766,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3",
@@ -712,6 +776,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4",
@@ -721,6 +786,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -731,6 +797,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.THREAD",
"SampleAfterValue": "2000003",
@@ -738,6 +805,7 @@
},
{
"BriefDescription": "Counts the number of x87 uops dispatched.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.X87",
"PublicDescription": "Counts the number of x87 uops executed.",
@@ -746,6 +814,7 @@
},
{
"BriefDescription": "Uops that RAT issues to RS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0e",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
@@ -754,6 +823,7 @@
},
{
"BriefDescription": "Cycles when RAT does not issue Uops to RS for the thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -764,6 +834,7 @@
},
{
"BriefDescription": "Uops inserted at issue-stage in order to preserve upper bits of vector registers.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0e",
"EventName": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH",
"PublicDescription": "Counts the number of Blend Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed since every Intel SSE instruction executed in Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to 'Mixing Intel AVX and Intel SSE Code' section of the Optimization Guide.",
@@ -772,6 +843,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.SLOTS",
"PublicDescription": "Counts the retirement slots used each cycle.",
@@ -780,6 +852,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -790,6 +863,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "10",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/icelake/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/icelake/uncore-interconnect.json
index 8027590f1776..bee503eda18c 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/uncore-interconnect.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Number of entries allocated. Account for Any type: e.g. Snoop, etc.",
+ "Counter": "1",
"EventCode": "0x84",
"EventName": "UNC_ARB_COH_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -8,67 +9,12 @@
"Unit": "ARB"
},
{
- "BriefDescription": "Each cycle counts number of any coherent request at memory controller that were issued by any core. This event is not supported on ICL products but is supported on RKL products.",
- "EventCode": "0x85",
- "EventName": "UNC_ARB_DAT_OCCUPANCY.ALL",
- "PerPkg": "1",
- "UMask": "0x1",
- "Unit": "ARB"
- },
- {
- "BriefDescription": "Each cycle counts number of coherent reads pending on data return from memory controller that were issued by any core. This event is not supported on ICL products but is supported on RKL products.",
- "EventCode": "0x85",
- "EventName": "UNC_ARB_DAT_OCCUPANCY.RD",
- "PerPkg": "1",
- "UMask": "0x2",
- "Unit": "ARB"
- },
- {
- "BriefDescription": "Each cycle count number of 'valid' coherent Data Read entries . Such entry is defined as valid when it is allocated till deallocation. Doesn't include prefetches. This event is not supported on ICL products but is supported on RKL products.",
- "EventCode": "0x80",
- "EventName": "UNC_ARB_REQ_TRK_OCCUPANCY.DRD",
- "PerPkg": "1",
- "UMask": "0x2",
- "Unit": "ARB"
- },
- {
- "BriefDescription": "Number of all coherent Data Read entries. Doesn't include prefetches",
- "EventCode": "0x81",
- "EventName": "UNC_ARB_REQ_TRK_REQUEST.DRD",
- "PerPkg": "1",
- "UMask": "0x2",
- "Unit": "ARB"
- },
- {
- "BriefDescription": "Each cycle counts number of all outgoing valid entries in ReqTrk. Such entry is defined as valid from its allocation in ReqTrk till deallocation. Accounts for Coherent and non-coherent traffic. This event is not supported on ICL products but is supported on RKL products.",
- "EventCode": "0x80",
- "EventName": "UNC_ARB_TRK_OCCUPANCY.ALL",
- "PerPkg": "1",
- "UMask": "0x1",
- "Unit": "ARB"
- },
- {
- "BriefDescription": "Each cycle count number of 'valid' coherent Data Read entries . Such entry is defined as valid when it is allocated till deallocation. Doesn't include prefetches. This event is not supported on ICL products but is supported on RKL products.",
- "EventCode": "0x80",
- "EventName": "UNC_ARB_TRK_OCCUPANCY.RD",
- "PerPkg": "1",
- "UMask": "0x2",
- "Unit": "ARB"
- },
- {
"BriefDescription": "Total number of all outgoing entries allocated. Accounts for Coherent and non-coherent traffic.",
+ "Counter": "1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.ALL",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "ARB"
- },
- {
- "BriefDescription": "Number of all coherent Data Read entries. Doesn't include prefetches. This event is not supported on ICL products but is supported on RKL products.",
- "EventCode": "0x81",
- "EventName": "UNC_ARB_TRK_REQUESTS.RD",
- "PerPkg": "1",
- "UMask": "0x2",
- "Unit": "ARB"
}
]
diff --git a/tools/perf/pmu-events/arch/x86/icelake/uncore-other.json b/tools/perf/pmu-events/arch/x86/icelake/uncore-other.json
index c6596ba09195..1ac5b5ef8094 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/uncore-other.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/uncore-other.json
@@ -1,6 +1,7 @@
[
{
- "BriefDescription": "UNC_CLOCK.SOCKET",
+ "BriefDescription": "This 48-bit fixed counter counts the UCLK cycles.",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_CLOCK.SOCKET",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json b/tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json
index b28f62ce1f39..9df790d4361f 100644
--- a/tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/icelake/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -25,7 +28,17 @@
"UMask": "0xe"
},
{
+ "BriefDescription": "Page walks completed due to a demand data load to a 1G page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x08",
+ "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
+ "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
"BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -34,6 +47,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -42,6 +56,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.",
@@ -50,6 +65,7 @@
},
{
"BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
@@ -58,6 +74,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
@@ -67,6 +84,7 @@
},
{
"BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -74,7 +92,17 @@
"UMask": "0xe"
},
{
+ "BriefDescription": "Page walks completed due to a demand data store to a 1G page.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x49",
+ "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
+ "PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
+ "SampleAfterValue": "100003",
+ "UMask": "0x8"
+ },
+ {
"BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -83,6 +111,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -91,6 +120,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.",
@@ -99,6 +129,7 @@
},
{
"BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).",
@@ -107,6 +138,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_ACTIVE",
@@ -116,6 +148,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -124,6 +157,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -132,6 +166,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -140,6 +175,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.",
@@ -148,6 +184,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "Counts the number of DTLB flush attempts of the thread-specific entries.",
@@ -156,6 +193,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.).",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/cache.json b/tools/perf/pmu-events/arch/x86/icelakex/cache.json
index 3bdc56a75097..e46fd6f91d6b 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/cache.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/cache.json
@@ -1,6 +1,70 @@
[
{
+ "BriefDescription": "Hit snoop reply with data, line invalidated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xef",
+ "EventName": "CORE_SNOOP_RESPONSE.I_FWD_FE",
+ "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's cache, after the data is forwarded back to the requestor and indicating the data was found unmodified in the (FE) Forward or Exclusive State in this cores caches cache. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x20"
+ },
+ {
+ "BriefDescription": "HitM snoop reply with data, line invalidated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xef",
+ "EventName": "CORE_SNOOP_RESPONSE.I_FWD_M",
+ "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's caches, after the data is forwarded back to the requestor, and indicating the data was found modified(M) in this cores caches cache (aka HitM response). A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x10"
+ },
+ {
+ "BriefDescription": "Hit snoop reply without sending the data, line invalidated.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xef",
+ "EventName": "CORE_SNOOP_RESPONSE.I_HIT_FSE",
+ "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated in this core's caches without being forwarded back to the requestor. The line was in Forward, Shared or Exclusive (FSE) state in this cores caches. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Line not found snoop reply",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xef",
+ "EventName": "CORE_SNOOP_RESPONSE.MISS",
+ "PublicDescription": "Counts responses to snoops indicating that the data was not found (IHitI) in this core's caches. A single snoop response from the core counts on all hyperthreads of the Core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Hit snoop reply with data, line kept in Shared state.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xef",
+ "EventName": "CORE_SNOOP_RESPONSE.S_FWD_FE",
+ "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (FS) Forward or Shared state. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x40"
+ },
+ {
+ "BriefDescription": "HitM snoop reply with data, line kept in Shared state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xef",
+ "EventName": "CORE_SNOOP_RESPONSE.S_FWD_M",
+ "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (M)odified state. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Hit snoop reply without sending the data, line kept in Shared state.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xef",
+ "EventName": "CORE_SNOOP_RESPONSE.S_HIT_FSE",
+ "PublicDescription": "Counts responses to snoops indicating the line was kept on this core in the (S)hared state, and that the data was found unmodified but not forwarded back to the requestor, initially the data was found in the cache in the (FSE) Forward, Shared state or Exclusive state. A single snoop response from the core counts on all hyperthreads of the core.",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "Counts the number of cache lines replaced in L1 data cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts L1D data line replacements including opportunistic replacements, and replacements that require stall-for-replace or block-for-replace.",
@@ -9,6 +73,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D Fill Buffer (FB) unavailability. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -17,6 +82,7 @@
},
{
"BriefDescription": "Number of phases a demand request has waited due to L1D Fill Buffer (FB) unavailability.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x48",
@@ -27,6 +93,7 @@
},
{
"BriefDescription": "Number of cycles a demand request has waited due to L1D due to lack of L2 resources.",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.L2_STALL",
"PublicDescription": "Counts number of cycles a demand request has waited due to L1D due to lack of L2 resources. Demand requests include cacheable/uncacheable demand load, store, lock or SW prefetch accesses.",
@@ -35,6 +102,7 @@
},
{
"BriefDescription": "Number of L1D misses that are outstanding",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Counts number of L1D misses that are outstanding in each cycle, that is each cycle the number of Fill Buffers (FB) outstanding required by Demand Reads. FB either is held by demand loads, or it is held by non-demand loads and gets hit at least once by demand. The valid outstanding interval is defined until the FB deallocation by one of the following ways: from FB allocation, if FB is allocated by demand from the demand Hit FB, if it is allocated by hardware or software prefetch. Note: In the L1D, a Demand Read contains cacheable or noncacheable demand loads, including ones causing cache-line splits and reads due to page walks resulted from any request type.",
@@ -43,6 +111,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -52,6 +121,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "Counts the number of L2 cache lines filling the L2. Counting does not cover rejects.",
@@ -60,6 +130,7 @@
},
{
"BriefDescription": "Cache lines that are evicted by L2 cache when triggered by an L2 cache fill.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.NON_SILENT",
"PublicDescription": "Counts the number of lines that are evicted by the L2 cache due to L2 cache fills. Evicted lines are delivered to the L3, which may or may not cache them, according to system load and priorities.",
@@ -67,15 +138,26 @@
"UMask": "0x2"
},
{
- "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache when triggered by an L2 cache fill.",
+ "BriefDescription": "Non-modified cache lines that are silently dropped by L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.SILENT",
- "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache when triggered by an L2 cache fill. These lines are typically in Shared or Exclusive state. A non-threaded event.",
+ "PublicDescription": "Counts the number of lines that are silently dropped by L2 cache. These lines are typically in Shared or Exclusive state. A non-threaded event.",
"SampleAfterValue": "200003",
"UMask": "0x1"
},
{
+ "BriefDescription": "Cache lines that have been L2 hardware prefetched but not used by demand accesses",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xf2",
+ "EventName": "L2_LINES_OUT.USELESS_HWPF",
+ "PublicDescription": "Counts the number of cache lines that have been prefetched by the L2 hardware prefetcher but not used by demand access when evicted from the L2 cache",
+ "SampleAfterValue": "200003",
+ "UMask": "0x4"
+ },
+ {
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts the total number of L2 code requests.",
@@ -84,6 +166,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts the number of demand Data Read requests (including requests from L1D hardware prefetchers). These loads may hit or miss L2 cache. Only non rejected loads are counted.",
@@ -92,6 +175,7 @@
},
{
"BriefDescription": "Demand requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_MISS",
"PublicDescription": "Counts demand requests that miss L2 cache.",
@@ -100,6 +184,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts the total number of RFO (read for ownership) requests to L2 cache. L2 RFO requests include both L1D demand RFO misses as well as L1D RFO prefetches.",
@@ -108,6 +193,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Counts L2 cache hits when fetching instructions, code reads.",
@@ -116,6 +202,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Counts L2 cache misses when fetching instructions.",
@@ -124,6 +211,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Counts the number of demand Data Read requests initiated by load instructions that hit L2 cache.",
@@ -132,6 +220,7 @@
},
{
"BriefDescription": "Demand Data Read miss L2, no rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_MISS",
"PublicDescription": "Counts the number of demand Data Read requests that miss L2 cache. Only not rejected loads are counted.",
@@ -140,6 +229,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that hit L2 cache.",
@@ -148,6 +238,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the RFO (Read-for-Ownership) requests that miss L2 cache.",
@@ -156,6 +247,7 @@
},
{
"BriefDescription": "SW prefetch requests that hit L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_HIT",
"PublicDescription": "Counts Software prefetch requests that hit the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -164,6 +256,7 @@
},
{
"BriefDescription": "SW prefetch requests that miss L2 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.SWPF_MISS",
"PublicDescription": "Counts Software prefetch requests that miss the L2 cache. Accounts for PREFETCHNTA and PREFETCHT0/1/2 instructions when FB is not full.",
@@ -172,6 +265,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "Counts L2 writebacks that access L2 cache.",
@@ -180,6 +274,7 @@
},
{
"BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -188,6 +283,7 @@
},
{
"BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x2e",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
@@ -196,285 +292,296 @@
},
{
"BriefDescription": "Retired load instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW.",
"SampleAfterValue": "1000003",
"UMask": "0x81"
},
{
"BriefDescription": "Retired store instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ALL_STORES",
- "PEBS": "1",
"PublicDescription": "Counts all retired store instructions.",
"SampleAfterValue": "1000003",
"UMask": "0x82"
},
{
"BriefDescription": "All retired memory instructions.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.ANY",
- "PEBS": "1",
"PublicDescription": "Counts all retired memory instructions - loads and stores.",
"SampleAfterValue": "1000003",
"UMask": "0x83"
},
{
"BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.LOCK_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with locked access.",
"SampleAfterValue": "100007",
"UMask": "0x21"
},
{
"BriefDescription": "Retired load instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_LOADS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions that split across a cacheline boundary.",
"SampleAfterValue": "100003",
"UMask": "0x41"
},
{
"BriefDescription": "Retired store instructions that split across a cacheline boundary.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.SPLIT_STORES",
- "PEBS": "1",
"PublicDescription": "Counts retired store instructions that split across a cacheline boundary.",
"SampleAfterValue": "100003",
"UMask": "0x42"
},
{
"BriefDescription": "Retired load instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
- "PEBS": "1",
"PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB).",
"SampleAfterValue": "100003",
"UMask": "0x11"
},
{
"BriefDescription": "Retired store instructions that miss the STLB.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
- "PEBS": "1",
"PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB).",
"SampleAfterValue": "100003",
"UMask": "0x12"
},
{
"BriefDescription": "Retired load instructions whose data sources were HitM responses from shared L3",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions whose data sources were HitM responses from shared L3.",
"SampleAfterValue": "20011",
"UMask": "0x4"
},
{
"BriefDescription": "This event is deprecated. Refer to new event MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Deprecated": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT",
- "PEBS": "1",
"SampleAfterValue": "20011",
"UMask": "0x2"
},
{
"BriefDescription": "This event is deprecated. Refer to new event MEM_LOAD_L3_HIT_RETIRED.XSNP_FWD",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"Deprecated": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM",
- "PEBS": "1",
"SampleAfterValue": "20011",
"UMask": "0x4"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS",
- "PEBS": "1",
"PublicDescription": "Counts the retired load instructions whose data sources were L3 hit and cross-core snoop missed in on-pkg core cache.",
"SampleAfterValue": "20011",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load instructions whose data sources were hits in L3 without snoops required",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NONE",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions whose data sources were hits in L3 without snoops required.",
"SampleAfterValue": "100003",
"UMask": "0x8"
},
{
"BriefDescription": "Retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd2",
"EventName": "MEM_LOAD_L3_HIT_RETIRED.XSNP_NO_FWD",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions whose data sources were L3 and cross-core snoop hits in on-pkg core cache.",
"SampleAfterValue": "20011",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from local dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
- "PEBS": "1",
"PublicDescription": "Retired load instructions which data sources missed L3 but serviced from local DRAM.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load instructions which data sources missed L3 but serviced from remote dram",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM",
- "PEBS": "1",
"SampleAfterValue": "100007",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions whose data sources was forwarded from a remote cache",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD",
- "PEBS": "1",
"PublicDescription": "Retired load instructions whose data sources was forwarded from a remote cache.",
"SampleAfterValue": "100007",
"UMask": "0x8"
},
{
"BriefDescription": "Retired load instructions whose data sources was remote HITM",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM",
- "PEBS": "1",
"PublicDescription": "Retired load instructions whose data sources was remote HITM.",
"SampleAfterValue": "100007",
"UMask": "0x4"
},
{
"BriefDescription": "Retired load instructions with remote Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd3",
"EventName": "MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with remote Intel(R) Optane(TM) DC persistent memory as the data source and the data request missed L3 (AppDirect or Memory Mode) and DRAM cache(Memory Mode).",
"SampleAfterValue": "100007",
"UMask": "0x10"
},
{
"BriefDescription": "Retired instructions with at least 1 uncacheable load or Bus Lock.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd4",
"EventName": "MEM_LOAD_MISC_RETIRED.UC",
- "PEBS": "1",
"PublicDescription": "Retired instructions with at least one load to uncacheable memory-type, or at least one cache-line split locked access (Bus Lock).",
"SampleAfterValue": "100007",
"UMask": "0x4"
},
{
"BriefDescription": "Number of completed demand load requests that missed the L1, but hit the FB(fill buffer), because a preceding miss to the same cacheline initiated the line to be brought into L1, but data is not yet ready in L1.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.FB_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop was load missed in L1 but hit FB (Fill Buffers) due to preceding miss to the same cache line with data not ready.",
"SampleAfterValue": "100007",
"UMask": "0x40"
},
{
"BriefDescription": "Retired load instructions with L1 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that hit in the L1 data cache. This event includes all SW prefetches and lock instructions regardless of the data source.",
"SampleAfterValue": "1000003",
"UMask": "0x1"
},
{
"BriefDescription": "Retired load instructions missed L1 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L1_MISS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that missed in the L1 cache.",
"SampleAfterValue": "200003",
"UMask": "0x8"
},
{
"BriefDescription": "Retired load instructions with L2 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with L2 cache hits as data sources.",
"SampleAfterValue": "200003",
"UMask": "0x2"
},
{
"BriefDescription": "Retired load instructions missed L2 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L2_MISS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions missed L2 cache as data sources.",
"SampleAfterValue": "100021",
"UMask": "0x10"
},
{
"BriefDescription": "Retired load instructions with L3 cache hits as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_HIT",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that hit in the L3 cache.",
"SampleAfterValue": "100021",
"UMask": "0x4"
},
{
"BriefDescription": "Retired load instructions missed L3 cache as data sources",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.L3_MISS",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with at least one uop that missed in the L3 cache.",
"SampleAfterValue": "50021",
"UMask": "0x20"
},
{
"BriefDescription": "Retired load instructions with local Intel(R) Optane(TM) DC persistent memory as the data source where the data request missed all caches.",
+ "Counter": "0,1,2,3",
"Data_LA": "1",
"EventCode": "0xd1",
"EventName": "MEM_LOAD_RETIRED.LOCAL_PMM",
- "PEBS": "1",
"PublicDescription": "Counts retired load instructions with local Intel(R) Optane(TM) DC persistent memory as the data source and the data request missed L3 (AppDirect or Memory Mode) and DRAM cache(Memory Mode).",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
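The MEM_INST_RETIRED.* and MEM_LOAD_RETIRED.* entries above are the precise (formerly PEBS-flagged) memory events, and the new "Counter" field pins them to general-purpose counters 0-3. As a minimal sketch, assuming the conventional Intel raw-event encoding that perf uses (event select in bits 0-7, umask in bits 8-15), this is how one of those JSON entries maps to a raw config value; the helper function and the inline dictionary are illustrative, not part of the kernel's jevents tooling.

```python
# Minimal sketch: build a perf-style raw config from a pmu-events JSON entry.
# Assumes the conventional Intel encoding (event select in bits 0-7, umask in
# bits 8-15); the dict below simply mirrors one entry from the hunk above.

def raw_config(event: dict) -> int:
    """Combine EventCode and UMask into a raw config value."""
    code = int(event["EventCode"], 16)
    umask = int(event["UMask"], 16)
    return code | (umask << 8)

mem_loads = {
    "EventName": "MEM_INST_RETIRED.ALL_LOADS",
    "EventCode": "0xd0",
    "UMask": "0x81",
    "Counter": "0,1,2,3",  # may only be scheduled on the first four general counters
}

print(hex(raw_config(mem_loads)))       # 0x81d0
print(mem_loads["Counter"].split(","))  # ['0', '1', '2', '3']
```

On the perf command line the same encoding would typically be written as cpu/event=0xd0,umask=0x81/.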
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -484,6 +591,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -493,6 +601,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -502,6 +611,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -510,7 +620,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
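The OCR.* entries added and touched here drive the off-core response facility: EventCode 0xB7/0xBB names the two off-core response events, MSRIndex points at the companion MSRs (0x1a6/0x1a7), and MSRValue is the mask written into that MSR. Reading the values in these hunks, the low bits appear to select the request type (0x1 demand data read, 0x4 demand code read, 0x2 RFO) and the upper bits the response qualification (0x10000 for "any response"). The sketch below splits a value on that assumption; the 16-bit boundary is inferred from these entries, not quoted from the architectural documentation.

```python
# Hedged sketch: split an OCR MSRValue into request-type bits (low) and
# response-qualification bits (high). The 16-bit boundary is an assumption
# read off the values in this file (0x10001, 0x10004, ...), not a spec quote.

REQUEST_MASK = 0xFFFF  # assumed: request-type selection lives in the low bits

def split_msr_value(msr_value: int) -> tuple[int, int]:
    request = msr_value & REQUEST_MASK
    response = msr_value >> 16
    return request, response

for name, value in [
    ("OCR.DEMAND_DATA_RD.ANY_RESPONSE", 0x10001),
    ("OCR.DEMAND_CODE_RD.ANY_RESPONSE", 0x10004),
    ("OCR.DEMAND_DATA_RD.PMM",          0x703C00001),
]:
    req, rsp = split_msr_value(value)
    print(f"{name}: request=0x{req:x} response=0x{rsp:x}")
```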
+ {
"BriefDescription": "Counts demand data reads that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -520,6 +641,7 @@
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -529,6 +651,7 @@
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop that hit in another core, which did not forward the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -538,6 +661,7 @@
},
{
"BriefDescription": "Counts demand data reads that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -546,7 +670,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.LOCAL_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100400001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by PMM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703C00001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -556,6 +701,7 @@
},
{
"BriefDescription": "Counts demand data reads that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -564,7 +710,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads that were supplied by PMM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.REMOTE_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703000001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -574,6 +731,7 @@
},
{
"BriefDescription": "Counts demand data reads that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -582,7 +740,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SNC_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x700800001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F3FFC0002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -592,6 +771,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -600,7 +780,38 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.LOCAL_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100400002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703C00002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.REMOTE_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703000002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -610,6 +821,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -618,7 +830,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SNC_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x700800002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -627,7 +850,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts hardware prefetch (which bring data to L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L2.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x10070",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts hardware prefetches to the L3 only that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L3.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x12380",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts hardware prefetches to the L3 only that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -636,7 +880,28 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L3.REMOTE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x90002380",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts full cacheline writes (ItoM) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.ITOM.REMOTE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x90000002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts hardware and software prefetches to all cache levels that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PREFETCHES.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -645,7 +910,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F3FFC0477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -655,6 +931,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -664,6 +941,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop that hit in another core, which did not forward the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -673,6 +951,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that resulted in a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_HIT.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -681,7 +960,38 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x100400477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM accesses that are controlled by the close or distant SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x700C00477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.REMOTE",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x3F33000477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop was sent and data was returned (Modified or Not Modified).",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -691,6 +1001,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit a modified line in another core's caches which forwarded the data.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -700,6 +1011,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by a cache on a remote socket where a snoop hit in another core's caches which forwarded the unmodified data to the requesting core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.REMOTE_CACHE.SNOOP_HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -708,7 +1020,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x703000477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that hit a modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.SNC_CACHE.HITM",
"MSRIndex": "0x1a6,0x1a7",
@@ -718,6 +1041,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that either hit a non-modified line in a distant L3 Cache or were snooped from a distant core's L1/L2 caches on this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.SNC_CACHE.HIT_WITH_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -726,7 +1050,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.SNC_PMM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x700800477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts streaming stores that hit in the L3 or were snooped from another core's caches on the same socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_HIT",
"MSRIndex": "0x1a6,0x1a7",
@@ -736,6 +1071,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Counts the demand and prefetch data reads. All Core Data Reads include cacheable 'Demands' and L2 prefetchers (not L3 prefetchers). Counting also covers reads due to page walks resulted from any request type.",
@@ -744,6 +1080,7 @@
},
{
"BriefDescription": "Counts memory transactions sent to the uncore.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_REQUESTS",
"PublicDescription": "Counts memory transactions sent to the uncore including requests initiated by the core, all L3 prefetches, reads resulting from page walks, and snoop responses.",
@@ -752,6 +1089,7 @@
},
{
"BriefDescription": "Counts cacheable and non-cacheable code reads to the core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "Counts both cacheable and non-cacheable code reads to the core.",
@@ -760,6 +1098,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Counts the Demand Data Read requests sent to uncore. Use it in conjunction with OFFCORE_REQUESTS_OUTSTANDING to determine average latency in the uncore.",
@@ -768,6 +1107,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Counts the demand RFO (read for ownership) requests including regular RFOs, locks, ItoM.",
@@ -776,6 +1116,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding data read requests pending. Data read requests include cacheable demand reads and L2 prefetches, but do not include RFOs, code reads or prefetches to the L3. Reads due to page walks resulting from any request type will also be counted. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -784,6 +1125,7 @@
},
{
"BriefDescription": "Cycles where at least 1 outstanding data read request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
@@ -793,6 +1135,7 @@
},
{
"BriefDescription": "Cycles with outstanding code read requests pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
@@ -802,6 +1145,7 @@
},
{
"BriefDescription": "Cycles where at least 1 outstanding Demand RFO request is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
@@ -811,6 +1155,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding code read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding code read requests pending. Code Read requests include both cacheable and non-cacheable Code Reads. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -819,6 +1164,7 @@
},
{
"BriefDescription": "For every cycle, increments by the number of outstanding demand data read requests pending.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "For every cycle, increments by the number of outstanding demand data read requests pending. Requests are considered outstanding from the time they miss the core's L2 cache until the transaction completion message is sent to the requestor.",
@@ -827,6 +1173,7 @@
},
{
"BriefDescription": "Counts bus locks, accounts for cache line split locks and UC locks.",
+ "Counter": "0,1,2,3",
"EventCode": "0xF4",
"EventName": "SQ_MISC.BUS_LOCK",
"PublicDescription": "Counts the more expensive bus lock needed to enforce cache coherency for certain memory accesses that need to be done atomically. Can be created by issuing an atomic instruction (via the LOCK prefix) which causes a cache line split or accesses uncacheable memory.",
@@ -835,6 +1182,7 @@
},
{
"BriefDescription": "Cycles the queue waiting for offcore responses is full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xf4",
"EventName": "SQ_MISC.SQ_FULL",
"PublicDescription": "Counts the cycles for which the thread is active and the queue waiting for responses from the uncore cannot take any more entries.",
@@ -842,7 +1190,16 @@
"UMask": "0x4"
},
{
+ "BriefDescription": "Counts the number of PREFETCHNTA, PREFETCHW, PREFETCHT0, PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x32",
+ "EventName": "SW_PREFETCH_ACCESS.ANY",
+ "SampleAfterValue": "100003",
+ "UMask": "0xf"
+ },
+ {
"BriefDescription": "Number of PREFETCHNTA instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.NTA",
"PublicDescription": "Counts the number of PREFETCHNTA instructions executed.",
@@ -851,6 +1208,7 @@
},
{
"BriefDescription": "Number of PREFETCHW instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.PREFETCHW",
"PublicDescription": "Counts the number of PREFETCHW instructions executed.",
@@ -859,6 +1217,7 @@
},
{
"BriefDescription": "Number of PREFETCHT0 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T0",
"PublicDescription": "Counts the number of PREFETCHT0 instructions executed.",
@@ -867,6 +1226,7 @@
},
{
"BriefDescription": "Number of PREFETCHT1 or PREFETCHT2 instructions executed.",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "SW_PREFETCH_ACCESS.T1_T2",
"PublicDescription": "Counts the number of PREFETCHT1 or PREFETCHT2 instructions executed.",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/counter.json b/tools/perf/pmu-events/arch/x86/icelakex/counter.json
new file mode 100644
index 000000000000..63657e0a51a3
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/icelakex/counter.json
@@ -0,0 +1,57 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "4",
+ "CountersNumGeneric": "8"
+ },
+ {
+ "Unit": "CHA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IIO",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "iMC",
+ "CountersNumFixed": "1",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M2M",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M2PCIe",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "M3UPI",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "PCU",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "UBOX",
+ "CountersNumFixed": 1,
+ "CountersNumGeneric": "2"
+ }
+] \ No newline at end of file
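The new counter.json records, per PMU unit, how many fixed and generic counters exist (4 fixed plus 8 generic on the core, 4 generic on most uncore units, 2 on IRP and UBOX). A hedged sketch of how a consumer might read it and sanity-check whether a set of simultaneous events fits on one unit; the one-event-per-generic-counter rule is a deliberate simplification of real event scheduling.

```python
# Sketch: read counter.json and check whether N simultaneous events fit on a
# unit's generic counters. One event per counter is a simplification; real
# scheduling also has to honour per-event constraints.
import json

def load_counters(path: str) -> dict[str, dict[str, int]]:
    with open(path) as f:
        units = json.load(f)
    return {
        u["Unit"]: {
            "fixed": int(u["CountersNumFixed"]),
            "generic": int(u["CountersNumGeneric"]),
        }
        for u in units
    }

counters = load_counters("tools/perf/pmu-events/arch/x86/icelakex/counter.json")
print(counters["core"])                 # {'fixed': 4, 'generic': 8}
print(counters["CHA"]["generic"] >= 5)  # False: five CHA events will not fit at once
```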
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/floating-point.json b/tools/perf/pmu-events/arch/x86/icelakex/floating-point.json
index 85c26c889088..61ddce0c8db6 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts all microcode FP assists.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.FP",
"PublicDescription": "Counts all microcode Floating Point assists.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 128-bit packed single and 256-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and packed double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.4_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 128-bit packed single precision and 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 or/and 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point and packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
"PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision FP instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, 1 for each element. Applies to SSE* and AVX* packed single precision and double precision FP instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.8_FLOPS",
"PublicDescription": "Number of SSE/AVX computational 256-bit packed single precision and 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision and double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RSQRT14 RCP RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "Number of SSE/AVX computational scalar floating-point instructions retired; some instructions will count twice as noted below. Applies to SSE* and AVX* scalar, double and single precision floating-point: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform multiple calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR",
"PublicDescription": "Number of SSE/AVX computational scalar single precision and double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
"PublicDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Counts number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
"PublicDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "Number of any Vector retired FP arithmetic instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc7",
"EventName": "FP_ARITH_INST_RETIRED.VECTOR",
"SampleAfterValue": "1000003",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/frontend.json b/tools/perf/pmu-events/arch/x86/icelakex/frontend.json
index f6edc4222f42..3b1bdcfaa276 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Counts the number of times the front-end is resteered when it finds a branch instruction in a fetch line. This occurs for the first time a branch instruction is fetched or when the branch is not tracked by the BPU (Branch Prediction Unit) anymore.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to ILD_STALL.LCP]",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "DECODE.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to ILD_STALL.LCP]",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE transitions count.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xab",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "DSB-to-MITE switch true penalty cycles.",
+ "Counter": "0,1,2,3",
"EventCode": "0xab",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"PublicDescription": "Decode Stream Buffer (DSB) is a Uop-cache that holds translations of previously fetched instructions that were decoded by the legacy x86 decode pipeline (MITE). This event counts fetch penalty cycles when a transition occurs from DSB to MITE.",
@@ -35,193 +39,194 @@
},
{
"BriefDescription": "Retired Instructions who experienced DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x1",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.DSB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x11",
- "PEBS": "1",
"PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced iTLB true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.ITLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x14",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions that experienced iTLB (Instruction TLB) true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L1 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L1I_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x12",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions who experienced Instruction L1 Cache true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced Instruction L2 Cache true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.L2_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x13",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions who experienced Instruction L2 Cache true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 1 cycle",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x500106",
- "PEBS": "1",
"PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 1 cycle which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_128",
"MSRIndex": "0x3F7",
"MSRValue": "0x508006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 128 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 16 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_16",
"MSRIndex": "0x3F7",
"MSRValue": "0x501006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 16 cycles. During this period the front-end delivered no uops.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions after front-end starvation of at least 2 cycles",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2",
"MSRIndex": "0x3F7",
"MSRValue": "0x500206",
- "PEBS": "1",
"PublicDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of at least 2 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_256",
"MSRIndex": "0x3F7",
"MSRValue": "0x510006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 256 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end had at least 1 bubble-slot for a period of 2 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1",
"MSRIndex": "0x3F7",
"MSRValue": "0x100206",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after the front-end had at least 1 bubble-slot for a period of 2 cycles. A bubble-slot is an empty issue-pipeline slot while there was no RAT stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 32 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_32",
"MSRIndex": "0x3F7",
"MSRValue": "0x502006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 32 cycles. During this period the front-end delivered no uops.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_4",
"MSRIndex": "0x3F7",
"MSRValue": "0x500406",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 4 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_512",
"MSRIndex": "0x3F7",
"MSRValue": "0x520006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 512 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_64",
"MSRIndex": "0x3F7",
"MSRValue": "0x504006",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 64 cycles which was not interrupted by a back-end stall.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired instructions that are fetched after an interval where the front-end delivered no uops for a period of 8 cycles which was not interrupted by a back-end stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.LATENCY_GE_8",
"MSRIndex": "0x3F7",
"MSRValue": "0x500806",
- "PEBS": "1",
"PublicDescription": "Counts retired instructions that are delivered to the back-end after a front-end stall of at least 8 cycles. During this period the front-end delivered no uops.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Retired Instructions who experienced STLB (2nd level TLB) true miss.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc6",
"EventName": "FRONTEND_RETIRED.STLB_MISS",
"MSRIndex": "0x3F7",
"MSRValue": "0x15",
- "PEBS": "1",
"PublicDescription": "Counts retired Instructions that experienced STLB (2nd level TLB) true miss.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_DATA.STALLS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE_16B.IFDATA_STALL",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_DATA.STALLS]",
@@ -230,6 +235,7 @@
},
{
"BriefDescription": "Instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_HIT",
"PublicDescription": "Counts instruction fetch tag lookups that hit in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.",
@@ -238,6 +244,7 @@
},
{
"BriefDescription": "Instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity.",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_MISS",
"PublicDescription": "Counts instruction fetch tag lookups that miss in the instruction cache (L1I). Counts at 64-byte cache-line granularity. Accounts for both cacheable and uncacheable accesses.",
@@ -246,6 +253,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_64B.IFTAG_STALL",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_TAG.STALLS]",
@@ -254,6 +262,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache miss. [This event is alias to ICACHE_16B.IFDATA_STALL]",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE_DATA.STALLS",
"PublicDescription": "Counts cycles where a code line fetch is stalled due to an L1 instruction cache miss. The legacy decode pipeline works at a 16 Byte granularity. [This event is alias to ICACHE_16B.IFDATA_STALL]",
@@ -262,6 +271,7 @@
},
{
"BriefDescription": "Cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "ICACHE_TAG.STALLS",
"PublicDescription": "Counts cycles where a code fetch is stalled due to L1 instruction cache tag miss. [This event is alias to ICACHE_64B.IFTAG_STALL]",
@@ -270,6 +280,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_ANY",
@@ -279,15 +290,17 @@
},
{
"BriefDescription": "Cycles DSB is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES_OK",
- "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the MITE (legacy decode pipeline) path. During these cycles uops are not being delivered from the Decode Stream Buffer (DSB).",
+ "PublicDescription": "Counts the number of cycles where optimal number of uops was delivered to the Instruction Decode Queue (IDQ) from the DSB (Decode Stream Buffer) path. Count includes uops that may 'bypass' the IDQ.",
"SampleAfterValue": "2000003",
"UMask": "0x8"
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path.",
@@ -296,6 +309,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_ANY",
@@ -305,6 +319,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering optimal number of Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES_OK",
@@ -314,6 +329,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Counts the number of uops delivered to Instruction Decode Queue (IDQ) from the MITE path. This also means that uops are not being delivered from the Decode Stream Buffer (DSB).",
@@ -322,6 +338,7 @@
},
{
"BriefDescription": "Number of switches from DSB or MITE to the MS",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -332,6 +349,7 @@
},
{
"BriefDescription": "Uops delivered to IDQ while MS is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "Counts the total number of uops delivered by the Microcode Sequencer (MS). Any instruction over 4 uops will be delivered by the MS. Some instructions such as transcendentals may additionally generate uops from the MS.",
@@ -340,6 +358,7 @@
},
{
"BriefDescription": "Uops not delivered by IDQ when backend of the machine is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
"PublicDescription": "Counts the number of uops not delivered to by the Instruction Decode Queue (IDQ) to the back-end of the pipeline when there was no back-end stalls. This event counts for one SMT thread in a given cycle.",
@@ -348,6 +367,7 @@
},
{
"BriefDescription": "Cycles when no uops are not delivered by the IDQ when backend of the machine is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "5",
"EventCode": "0x9c",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -357,6 +377,7 @@
},
{
"BriefDescription": "Cycles when optimal number of uops was delivered to the back-end when the back-end is not stalled",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json b/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json
index 71d78a7841ea..f58eec2a1788 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/icx-metrics.json
@@ -1,28 +1,28 @@
[
{
"BriefDescription": "C1 residency percent per core",
- "MetricExpr": "cstate_core@c1\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c1\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C1_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
@@ -34,7 +34,7 @@
"MetricName": "UNCORE_FREQ"
},
{
- "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles.",
+ "BriefDescription": "Cycles per instruction retired; indicating how much time each executed instruction took; in units of cycles",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD / INST_RETIRED.ANY",
"MetricName": "cpi",
"ScaleUnit": "1per_instr"
@@ -47,7 +47,7 @@
},
{
"BriefDescription": "Percentage of time spent in the active CPU power state C0",
- "MetricExpr": "tma_info_system_cpu_utilization",
+ "MetricExpr": "tma_info_system_cpus_utilized",
"MetricName": "cpu_utilization",
"ScaleUnit": "100%"
},
@@ -55,41 +55,101 @@
"BriefDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions",
"MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
"MetricName": "dtlb_2nd_level_2mb_large_page_load_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the Data Translation Lookaside Buffer (DTLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions",
"MetricExpr": "DTLB_LOAD_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_2nd_level_load_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data loads to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions",
"MetricExpr": "DTLB_STORE_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "dtlb_2nd_level_store_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by demand data stores to the total number of completed instructions. This implies it missed in the DTLB and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
- "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU.",
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_HIT_PCIRDCUR + UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of inbound IO reads that are initiated by end device controllers that are requesting memory from the CPU and miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_l3_miss",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the local CPU socket",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_local",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from a remote CPU socket",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_read_remote",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU",
"MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
"MetricName": "io_bandwidth_write",
"ScaleUnit": "1MB/s"
},
{
+ "BriefDescription": "Bandwidth of inbound IO writes that are initiated by end device controllers that are writing memory to the CPU",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_l3_miss",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the local CPU socket",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_local",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "Bandwidth of IO writes that are initiated by end device controllers that are writing memory to a remote CPU socket",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE + UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE) * 64 / 1e6 / duration_time",
+ "MetricName": "io_bandwidth_write_remote",
+ "ScaleUnit": "1MB/s"
+ },
+ {
+ "BriefDescription": "The percent of inbound full cache line writes initiated by IO that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM / UNC_CHA_TOR_INSERTS.IO_ITOM",
+ "MetricName": "io_full_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "The percent of inbound partial writes initiated by IO that miss the L3 cache",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_RFO) / (UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_RFO)",
+ "MetricName": "io_partial_write_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "The percent of inbound reads initiated by IO that miss the L3 cache",
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR / UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
+ "MetricName": "io_read_l3_miss",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED_2M_4M / INST_RETIRED.ANY",
"MetricName": "itlb_2nd_level_large_page_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for 2 megabyte and 4 megabyte page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the Instruction Translation Lookaside Buffer (ITLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
"BriefDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions",
"MetricExpr": "ITLB_MISSES.WALK_COMPLETED / INST_RETIRED.ANY",
"MetricName": "itlb_2nd_level_mpi",
- "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB.",
+ "PublicDescription": "Ratio of number of completed page walks (for all page sizes) caused by a code fetch to the total number of completed instructions. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB",
"ScaleUnit": "1per_instr"
},
{
@@ -141,6 +201,12 @@
"ScaleUnit": "1per_instr"
},
{
+ "BriefDescription": "Ratio of number of data read requests missing last level core cache (includes demand w/ prefetches) to the total number of completed instructions",
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA + UNC_CHA_TOR_INSERTS.IA_MISS_DRD + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF) / INST_RETIRED.ANY",
+ "MetricName": "llc_data_read_mpi_demand_plus_prefetch",
+ "ScaleUnit": "1per_instr"
+ },
+ {
"BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) in nano seconds",
"MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD) * #num_packages)) * duration_time",
"MetricName": "llc_demand_data_read_miss_latency",
@@ -159,31 +225,37 @@
"ScaleUnit": "1ns"
},
{
+ "BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to DRAM in nano seconds",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR) * #num_packages)) * duration_time",
+ "MetricName": "llc_demand_data_read_miss_to_dram_latency",
+ "ScaleUnit": "1ns"
+ },
+ {
"BriefDescription": "Average latency of a last level cache (LLC) demand data read miss (read memory access) addressed to Intel(R) Optane(TM) Persistent Memory(PMEM) in nano seconds",
"MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM) / (UNC_CHA_CLOCKTICKS / (source_count(UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM) * #num_packages)) * duration_time",
"MetricName": "llc_demand_data_read_miss_to_pmem_latency",
"ScaleUnit": "1ns"
},
{
- "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to local memory.",
+ "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to local memory",
"MetricExpr": "UNC_CHA_REQUESTS.READS_LOCAL * 64 / 1e6 / duration_time",
"MetricName": "llc_miss_local_memory_bandwidth_read",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to local memory.",
+ "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to local memory",
"MetricExpr": "UNC_CHA_REQUESTS.WRITES_LOCAL * 64 / 1e6 / duration_time",
"MetricName": "llc_miss_local_memory_bandwidth_write",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to remote memory.",
+ "BriefDescription": "Bandwidth (MB/sec) of read requests that miss the last level cache (LLC) and go to remote memory",
"MetricExpr": "UNC_CHA_REQUESTS.READS_REMOTE * 64 / 1e6 / duration_time",
"MetricName": "llc_miss_remote_memory_bandwidth_read",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to remote memory.",
+ "BriefDescription": "Bandwidth (MB/sec) of write requests that miss the last level cache (LLC) and go to remote memory",
"MetricExpr": "UNC_CHA_REQUESTS.WRITES_REMOTE * 64 / 1e6 / duration_time",
"MetricName": "llc_miss_remote_memory_bandwidth_write",
"ScaleUnit": "1MB/s"
@@ -213,19 +285,19 @@
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Memory write bandwidth (MB/sec) caused by directory updates; includes DDR and Intel(R) Optane(TM) Persistent Memory(PMEM).",
+ "BriefDescription": "Memory write bandwidth (MB/sec) caused by directory updates; includes DDR and Intel(R) Optane(TM) Persistent Memory(PMEM)",
"MetricExpr": "(UNC_CHA_DIR_UPDATE.HA + UNC_CHA_DIR_UPDATE.TOR + UNC_M2M_DIRECTORY_UPDATE.ANY) * 64 / 1e6 / duration_time",
"MetricName": "memory_extra_write_bw_due_to_directory_updates",
"ScaleUnit": "1MB/s"
},
{
- "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory read that miss the last level cache (LLC) addressed to local DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE)",
"MetricName": "numa_reads_addressed_to_local_dram",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches.",
+ "BriefDescription": "Memory reads that miss the last level cache (LLC) addressed to remote DRAM as a percentage of total memory read accesses, does not include LLC prefetches",
"MetricExpr": "(UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE) / (UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE + UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE)",
"MetricName": "numa_reads_addressed_to_remote_dram",
"ScaleUnit": "100%"
@@ -289,7 +361,6 @@
},
{
"BriefDescription": "This metric estimates how often memory load accesses were aliased by preceding stores (in program order) with a 4K address offset",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_4k_aliasing",
@@ -302,13 +373,13 @@
"MetricExpr": "(UOPS_DISPATCHED.PORT_0 + UOPS_DISPATCHED.PORT_1 + UOPS_DISPATCHED.PORT_5 + UOPS_DISPATCHED.PORT_6) / (4 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * ASSISTS.ANY / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "34 * ASSISTS.ANY / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: ASSISTS.ANY",
@@ -317,8 +388,8 @@
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * cpu@INT_MISC.RECOVERY_CYCLES\\,cmask\\=1\\,edge@ / tma_info_thread_slots",
- "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricExpr": "topdown\\-be\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 5 * INT_MISC.CLEARS_COUNT / tma_info_thread_slots",
+ "MetricGroup": "BvOB;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -337,9 +408,113 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
+ "MetricGroup": "BigFootprint;BvBC;Fed;Frontend;IcMiss;MemoryTLB",
+ "MetricName": "tma_bottleneck_big_code",
+ "MetricThreshold": "tma_bottleneck_big_code > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA",
+ "MetricExpr": "100 * ((BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots)",
+ "MetricGroup": "BvBO;Ret",
+ "MetricName": "tma_bottleneck_branching_overhead",
+ "MetricThreshold": "tma_bottleneck_branching_overhead > 5",
+ "PublicDescription": "Total pipeline cost of instructions used for program control-flow - a subset of the Retiring category in TMA. Examples include function calls; loops and alignments. (A lower bound)"
+ },
+ {
+ "BriefDescription": "Total pipeline cost when the execution is compute-bound - an estimation",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_core_bound * tma_divider / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_core_bound * (tma_ports_utilization / (tma_divider + tma_ports_utilization + tma_serializing_operation)) * (tma_ports_utilized_3m / (tma_ports_utilized_0 + tma_ports_utilized_1 + tma_ports_utilized_2 + tma_ports_utilized_3m)))",
+ "MetricGroup": "BvCB;Cor;tma_issueComp",
+ "MetricName": "tma_bottleneck_compute_bound_est",
+ "MetricThreshold": "tma_bottleneck_compute_bound_est > 20",
+ "PublicDescription": "Total pipeline cost when the execution is compute-bound - an estimation. Covers Core Bound when High ILP as well as when long-latency execution units are busy. Related metrics: "
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)))",
+ "MetricGroup": "BvMB;Mem;MemoryBW;Offcore;tma_issueBW",
+ "MetricName": "tma_bottleneck_data_cache_memory_bandwidth",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_bandwidth > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_memory_bound * (tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_memory_bound * tma_l2_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_l1_latency_dependency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_lock_latency / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_l1_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_loads / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_split_stores / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_store_latency / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvML;Mem;MemoryLat;Offcore;tma_issueLat",
+ "MetricName": "tma_bottleneck_data_cache_memory_latency",
+ "MetricThreshold": "tma_bottleneck_data_cache_memory_latency > 20",
+ "PublicDescription": "Total pipeline cost of external Memory- or Cache-Latency related bottlenecks. Related metrics: tma_l3_hit_latency, tma_mem_latency"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks (when the front-end could not sustain operations delivery to the back-end)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_frontend_bound - (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_mispredicts_resteers) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms)) - tma_bottleneck_big_code",
+ "MetricGroup": "BvFB;Fed;FetchBW;Frontend",
+ "MetricName": "tma_bottleneck_instruction_fetch_bw",
+ "MetricThreshold": "tma_bottleneck_instruction_fetch_bw > 20"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of irregular execution (e.g",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * (tma_fetch_latency * (tma_ms_switches + tma_branch_resteers * (tma_clears_resteers + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_mispredicts_resteers) / (tma_clears_resteers + tma_mispredicts_resteers + tma_unknown_branches)) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_ms) + 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts * tma_branch_mispredicts + tma_machine_clears * tma_other_nukes / tma_other_nukes + tma_core_bound * (tma_serializing_operation + tma_core_bound * RS_EVENTS.EMPTY_CYCLES / tma_info_thread_clks * tma_ports_utilized_0) / (tma_divider + tma_ports_utilization + tma_serializing_operation) + tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "Bad;BvIO;Cor;Ret;tma_issueMS",
+ "MetricName": "tma_bottleneck_irregular_overhead",
+ "MetricThreshold": "tma_bottleneck_irregular_overhead > 10",
+ "PublicDescription": "Total pipeline cost of irregular execution (e.g. FP-assists in HPC, Wait time with work imbalance multithreaded workloads, overhead in system services or virtualized environments). Related metrics: tma_microcode_sequencer, tma_ms_switches"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_l1_latency_dependency + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_memory_bound * (tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound)) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
+ "MetricGroup": "BvMT;Mem;MemoryTLB;Offcore;tma_issueTLB",
+ "MetricName": "tma_bottleneck_memory_data_tlbs",
+ "MetricThreshold": "tma_bottleneck_memory_data_tlbs > 20",
+ "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_memory_bound * (tma_dram_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) * tma_remote_cache / (tma_local_mem + tma_remote_cache + tma_remote_mem) + tma_l3_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * (tma_contested_accesses + tma_data_sharing) / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full) + tma_store_bound / (tma_cxl_mem_bound + tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_store_bound) * tma_false_sharing / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores - tma_store_latency)) + tma_machine_clears * (1 - tma_other_nukes / tma_other_nukes))",
+ "MetricGroup": "BvMS;LockCont;Mem;Offcore;tma_issueSyncxn",
+ "MetricName": "tma_bottleneck_memory_synchronization",
+ "MetricThreshold": "tma_bottleneck_memory_synchronization > 10",
+ "PublicDescription": "Total pipeline cost of Memory Synchronization related bottlenecks (data transfers and coherency updates across processors). Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (1 - 10 * tma_microcode_sequencer * tma_other_mispredicts / tma_branch_mispredicts) * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
+ "MetricGroup": "Bad;BadSpec;BrMispredicts;BvMP;tma_issueBM",
+ "MetricName": "tma_bottleneck_mispredictions",
+ "MetricThreshold": "tma_bottleneck_mispredictions > 20",
+ "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
+ },
+ {
+ "BriefDescription": "Total pipeline cost of remaining bottlenecks in the back-end",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 - (tma_bottleneck_big_code + tma_bottleneck_instruction_fetch_bw + tma_bottleneck_mispredictions + tma_bottleneck_data_cache_memory_bandwidth + tma_bottleneck_data_cache_memory_latency + tma_bottleneck_memory_data_tlbs + tma_bottleneck_memory_synchronization + tma_bottleneck_compute_bound_est + tma_bottleneck_irregular_overhead + tma_bottleneck_branching_overhead + tma_bottleneck_useful_work)",
+ "MetricGroup": "BvOB;Cor;Offcore",
+ "MetricName": "tma_bottleneck_other_bottlenecks",
+ "MetricThreshold": "tma_bottleneck_other_bottlenecks > 20",
+ "PublicDescription": "Total pipeline cost of remaining bottlenecks in the back-end. Examples include data-dependencies (Core Bound when Low ILP) and other unlisted memory-related stalls."
+ },
+ {
+ "BriefDescription": "Total pipeline cost of \"useful operations\" - the portion of Retiring category not covered by Branching_Overhead nor Irregular_Overhead.",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_retiring - (BR_INST_RETIRED.ALL_BRANCHES + 2 * BR_INST_RETIRED.NEAR_CALL + INST_RETIRED.NOP) / tma_info_thread_slots - tma_microcode_sequencer / (tma_few_uops_instructions + tma_microcode_sequencer) * (tma_assists / tma_microcode_sequencer) * tma_heavy_operations)",
+ "MetricGroup": "BvUW;Ret",
+ "MetricName": "tma_bottleneck_useful_work",
+ "MetricThreshold": "tma_bottleneck_useful_work > 20"
+ },
+ {
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring branch instructions.",
"MetricExpr": "tma_light_operations * BR_INST_RETIRED.ALL_BRANCHES / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "Branches;BvBO;Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_branch_instructions",
"MetricThreshold": "tma_branch_instructions > 0.1 & tma_light_operations > 0.6",
"ScaleUnit": "100%"
@@ -347,11 +522,11 @@
{
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction. These slots are either wasted by uops fetched from an incorrectly speculated program path; or stalls when the out-of-order part of the machine needs to recover its state from a speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES. Related metrics: tma_bottleneck_mispredictions, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers",
"ScaleUnit": "100%"
},
{
@@ -382,13 +557,61 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that hit in the L2 cache.",
+ "MetricExpr": "max(0, tma_icache_misses - tma_code_l2_miss)",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_hit",
+ "MetricThreshold": "tma_code_l2_hit > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates fraction of cycles the CPU was stalled due to instruction cache misses that miss in the L2 cache.",
+ "MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;IcMiss;Offcore;TopdownL4;tma_L4_group;tma_icache_misses_group",
+ "MetricName": "tma_code_l2_miss",
+ "MetricThreshold": "tma_code_l2_miss > 0.05 & (tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric roughly estimates the fraction of cycles where the (first level) ITLB was missed by instructions fetches, that later on hit in second-level TLB (STLB)",
+ "MetricExpr": "max(0, tma_itlb_misses - tma_code_stlb_miss)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_hit",
+ "MetricThreshold": "tma_code_stlb_hit > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles where the Second-level TLB (STLB) was missed by instruction fetches, performing a hardware page walk",
+ "MetricExpr": "ITLB_MISSES.WALK_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL4;tma_L4_group;tma_itlb_misses_group",
+ "MetricName": "tma_code_stlb_miss",
+ "MetricThreshold": "tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_2M_4M / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_2m",
+ "MetricThreshold": "tma_code_stlb_miss_2m > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for (instruction) code accesses.",
+ "MetricExpr": "tma_code_stlb_miss * ITLB_MISSES.WALK_COMPLETED_4K / (ITLB_MISSES.WALK_COMPLETED_4K + ITLB_MISSES.WALK_COMPLETED_2M_4M)",
+ "MetricGroup": "FetchLat;MemoryTLB;TopdownL5;tma_L5_group;tma_code_stlb_miss_group",
+ "MetricName": "tma_code_stlb_miss_4k",
+ "MetricThreshold": "tma_code_stlb_miss_4k > 0.05 & (tma_code_stlb_miss > 0.05 & (tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(44 * tma_info_system_average_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 43.5 * tma_info_system_average_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricExpr": "(44 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) + 43.5 * tma_info_system_core_frequency * MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
@@ -402,13 +625,22 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external CXL Memory by loads (e.g",
+ "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0))) if #has_pmem > 0 else 1)) * (CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricName": "tma_cxl_mem_bound",
+ "MetricThreshold": "tma_cxl_mem_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
+ "PublicDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external CXL Memory by loads (e.g. 3D-Xpoint (Crystal Ridge, a.k.a. IXP) memory, PMM - Persistent Memory Module [from CLX to SPR] or any other CXL Type3 Memory [EMR onwards]).",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "43.5 * tma_info_system_average_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricExpr": "43.5 * tma_info_system_core_frequency * (MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT + MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM * (1 - OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM / (OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HITM + OCR.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD))) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
@@ -416,14 +648,14 @@
"MetricExpr": "(cpu@INST_DECODED.DECODERS\\,cmask\\=1@ - cpu@INST_DECODED.DECODERS\\,cmask\\=2@) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_issueD0;tma_mite_group",
"MetricName": "tma_decoder0_alone",
- "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35))",
+ "MetricThreshold": "tma_decoder0_alone > 0.1 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where decoder-0 was the only active decoder. Related metrics: tma_few_uops_instructions",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.DIVIDER_ACTIVE / tma_info_thread_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
@@ -432,11 +664,11 @@
{
"BriefDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound - tma_pmm_bound if #has_pmem > 0 else CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound)",
+ "MetricExpr": "(CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound - tma_cxl_mem_bound if #has_pmem > 0 else CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound)",
"MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_dram_bound",
"MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
"ScaleUnit": "100%"
},
{
@@ -444,7 +676,7 @@
"MetricExpr": "(IDQ.DSB_CYCLES_ANY - IDQ.DSB_CYCLES_OK) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -454,43 +686,43 @@
"MetricGroup": "DSBmiss;FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_dsb_switches",
"MetricThreshold": "tma_dsb_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to switches from DSB to MITE pipelines. The DSB (decoded i-cache) is a Uop Cache where the front-end directly delivers Uops (micro operations) avoiding heavy x86 decoding. The DSB pipeline has shorter latency and delivered higher bandwidth than the MITE (legacy instruction decode pipeline). Switching between the two pipelines can cause penalties hence this metric measures the exposed penalty. Sample with: FRONTEND_RETIRED.DSB_MISS_PS. Related metrics: tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "min(7 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_INST_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_store",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(7 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE) / tma_info_core_core_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load, tma_info_bottleneck_memory_data_tlbs",
+ "PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_INST_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_bottleneck_memory_data_tlbs, tma_dtlb_load",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
- "MetricExpr": "48 * tma_info_system_average_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricExpr": "(120 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_MISS@offcore_rsp\\=0x103b800002@ + 48 * tma_info_system_core_frequency * OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM) / tma_info_thread_clks",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
+ "PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: OCR.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricExpr": "L1D_PEND_MISS.FB_FULL / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
- "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
+ "PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
"ScaleUnit": "100%"
},
{
@@ -498,9 +730,9 @@
"MetricExpr": "max(0, tma_frontend_bound - tma_fetch_latency)",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_1_PS;FRONTEND_RETIRED.LATENCY_GE_2_PS. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
+ "PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Sample with: FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1;FRONTEND_RETIRED.LATENCY_GE_1;FRONTEND_RETIRED.LATENCY_GE_2. Related metrics: tma_dsb_switches, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
},
{
@@ -514,12 +746,12 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops",
+ "BriefDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or more uops",
"MetricExpr": "tma_heavy_operations - tma_microcode_sequencer",
"MetricGroup": "TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueD0",
"MetricName": "tma_few_uops_instructions",
"MetricThreshold": "tma_few_uops_instructions > 0.05 & tma_heavy_operations > 0.1",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or up to ([SNB+] four; [ADL+] five) uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring instructions that are decoded into two or more uops. This highly-correlates with the number of uops in such instructions. Related metrics: tma_decoder0_alone",
"ScaleUnit": "100%"
},
{
@@ -532,8 +764,25 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists",
+ "MetricExpr": "34 * ASSISTS.FP / tma_info_thread_slots",
+ "MetricGroup": "HPC;TopdownL5;tma_L5_group;tma_assists_group",
+ "MetricName": "tma_fp_assists",
+ "MetricThreshold": "tma_fp_assists > 0.1",
+ "PublicDescription": "This metric roughly estimates fraction of slots the CPU retired uops as a result of handling Floating Point (FP) Assists. FP Assist may apply when working with very small floating point values (so-called Denormals).",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles where the Floating-Point Divider unit was active.",
+ "MetricExpr": "ARITH.FP_DIVIDER_ACTIVE / tma_info_thread_clks",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_fp_divider",
+ "MetricThreshold": "tma_fp_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.SCALAR / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -542,7 +791,7 @@
},
{
"BriefDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths",
- "MetricExpr": "cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@ / (tma_retiring * tma_info_thread_slots)",
+ "MetricExpr": "FP_ARITH_INST_RETIRED.VECTOR / (tma_retiring * tma_info_thread_slots)",
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
@@ -555,7 +804,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_128b",
"MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -564,7 +813,7 @@
"MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
"MetricName": "tma_fp_vector_256b",
"MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
- "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -580,7 +829,7 @@
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "topdown\\-fe\\-bound / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) - INT_MISC.UOP_DROPPING / tma_info_thread_slots",
- "MetricGroup": "Default;PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;Default;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -594,49 +843,49 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences. ([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses",
- "MetricExpr": "ICACHE_16B.IFDATA_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricExpr": "ICACHE_DATA.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses. Sample with: FRONTEND_RETIRED.L2_MISS_PS;FRONTEND_RETIRED.L1I_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
+ "BriefDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "(tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) * tma_info_thread_slots / BR_MISP_RETIRED.ALL_BRANCHES",
+ "MetricExpr": "tma_bottleneck_mispredictions * tma_info_thread_slots / 5 / BR_MISP_RETIRED.ALL_BRANCHES / 100",
"MetricGroup": "Bad;BrMispredicts;tma_issueBM",
"MetricName": "tma_info_bad_spec_branch_misprediction_cost",
- "PublicDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_branch_mispredicts, tma_info_bottleneck_mispredictions, tma_mispredicts_resteers"
+ "PublicDescription": "Branch Misprediction Cost: Cycles representing fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear). Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_mispredicts_resteers"
},
{
- "BriefDescription": "Instructions per retired mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for conditional non-taken branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_NTAKEN",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_cond_ntaken",
"MetricThreshold": "tma_info_bad_spec_ipmisp_cond_ntaken < 200"
},
{
- "BriefDescription": "Instructions per retired mispredicts for conditional taken branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for conditional taken branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.COND_TAKEN",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_cond_taken",
"MetricThreshold": "tma_info_bad_spec_ipmisp_cond_taken < 200"
},
{
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.INDIRECT",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
},
{
- "BriefDescription": "Instructions per retired mispredicts for return branches (lower number means higher occurrence rate).",
+ "BriefDescription": "Instructions per retired Mispredicts for return branches (lower number means higher occurrence rate).",
"MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.RET",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_ret",
@@ -644,12 +893,18 @@
},
{
"BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear) (lower number means higher occurrence rate)",
- "MetricExpr": "tma_info_core_ipmispredict",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
"MetricGroup": "Bad;BadSpec;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmispredict",
"MetricThreshold": "tma_info_bad_spec_ipmispredict < 200"
},
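(Editorial aside.) The simplified MetricExpr above is a plain ratio, so it is easy to sanity-check by hand. Below is a small Python sketch with hypothetical counts, applying the documented threshold of 200; on a live system the same number can be produced with perf stat -M tma_info_bad_spec_ipmispredict, assuming a perf build that ships these metric definitions:

    # IpMispredict = INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES
    # (hypothetical counter values; lower results mean more frequent mispredicts)
    inst_retired = 50_000_000_000
    br_misp_retired = 400_000_000

    ipmispredict = inst_retired / br_misp_retired
    print(f"tma_info_bad_spec_ipmispredict = {ipmispredict:.0f}")
    if ipmispredict < 200:  # MetricThreshold from the entry above
        print("below threshold: branch misprediction is worth investigating")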
{
+ "BriefDescription": "Speculative to Retired ratio of all clears (covering Mispredicts and nukes)",
+ "MetricExpr": "INT_MISC.CLEARS_COUNT / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT)",
+ "MetricGroup": "BrMispredicts",
+ "MetricName": "tma_info_bad_spec_spec_clears_ratio"
+ },
+ {
"BriefDescription": "Probability of Core Bound bottleneck hidden by SMT-profiling artifacts",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(100 * (1 - tma_core_bound / tma_ports_utilization if tma_core_bound < tma_ports_utilization else 1) if tma_info_system_smt_2t_utilization > 0.5 else 0)",
@@ -658,13 +913,22 @@
"MetricThreshold": "tma_info_botlnk_l0_core_bound_likely > 0.5"
},
{
+ "BriefDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "100 * (tma_frontend_bound * (tma_fetch_bandwidth / (tma_fetch_bandwidth + tma_fetch_latency)) * (tma_dsb / (tma_dsb + tma_mite + tma_ms)))",
+ "MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
+ "MetricName": "tma_info_botlnk_l2_dsb_bandwidth",
+ "MetricThreshold": "tma_info_botlnk_l2_dsb_bandwidth > 10",
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) hits - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ },
+ {
"BriefDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite))",
+ "MetricExpr": "100 * (tma_fetch_latency * tma_dsb_switches / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches) + tma_fetch_bandwidth * tma_mite / (tma_dsb + tma_mite + tma_ms))",
"MetricGroup": "DSBmiss;Fed;tma_issueFB",
"MetricName": "tma_info_botlnk_l2_dsb_misses",
"MetricThreshold": "tma_info_botlnk_l2_dsb_misses > 10",
- "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
+ "PublicDescription": "Total pipeline cost of DSB (uop cache) misses - subset of the Instruction_Fetch_BW Bottleneck. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp"
},
{
"BriefDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck",
@@ -676,67 +940,6 @@
"PublicDescription": "Total pipeline cost of Instruction Cache misses - subset of the Big_Code Bottleneck. Related metrics: "
},
{
- "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_fetch_latency * (tma_itlb_misses + tma_icache_misses + tma_unknown_branches) / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)",
- "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB;tma_issueBC",
- "MetricName": "tma_info_bottleneck_big_code",
- "MetricThreshold": "tma_info_bottleneck_big_code > 20",
- "PublicDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses). Related metrics: tma_info_bottleneck_branching_overhead"
- },
- {
- "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
- "MetricExpr": "100 * ((BR_INST_RETIRED.COND + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - BR_INST_RETIRED.COND_TAKEN - 2 * BR_INST_RETIRED.NEAR_CALL)) / tma_info_thread_slots)",
- "MetricGroup": "Ret;tma_issueBC",
- "MetricName": "tma_info_bottleneck_branching_overhead",
- "MetricThreshold": "tma_info_bottleneck_branching_overhead > 10",
- "PublicDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls). Related metrics: tma_info_bottleneck_big_code"
- },
- {
- "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_frontend_bound - tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches)) - tma_info_bottleneck_big_code",
- "MetricGroup": "Fed;FetchBW;Frontend",
- "MetricName": "tma_info_bottleneck_instruction_fetch_bw",
- "MetricThreshold": "tma_info_bottleneck_instruction_fetch_bw > 20"
- },
- {
- "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_bandwidth / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_sq_full / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full))) + tma_l1_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_fb_full / (tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk))",
- "MetricGroup": "Mem;MemoryBW;Offcore;tma_issueBW",
- "MetricName": "tma_info_bottleneck_memory_bandwidth",
- "MetricThreshold": "tma_info_bottleneck_memory_bandwidth > 20",
- "PublicDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks. Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_l1_bound / max(tma_memory_bound, tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_dtlb_load / max(tma_l1_bound, tma_4k_aliasing + tma_dtlb_load + tma_fb_full + tma_lock_latency + tma_split_loads + tma_store_fwd_blk)) + tma_store_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_dtlb_store / (tma_dtlb_store + tma_false_sharing + tma_split_stores + tma_store_latency + tma_streaming_stores)))",
- "MetricGroup": "Mem;MemoryTLB;Offcore;tma_issueTLB",
- "MetricName": "tma_info_bottleneck_memory_data_tlbs",
- "MetricThreshold": "tma_info_bottleneck_memory_data_tlbs > 20",
- "PublicDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs). Related metrics: tma_dtlb_load, tma_dtlb_store"
- },
- {
- "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * tma_memory_bound * (tma_dram_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_mem_latency / (tma_mem_bandwidth + tma_mem_latency)) + tma_l3_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound) * (tma_l3_hit_latency / (tma_contested_accesses + tma_data_sharing + tma_l3_hit_latency + tma_sq_full)) + tma_l2_bound / (tma_dram_bound + tma_l1_bound + tma_l2_bound + tma_l3_bound + tma_pmm_bound + tma_store_bound))",
- "MetricGroup": "Mem;MemoryLat;Offcore;tma_issueLat",
- "MetricName": "tma_info_bottleneck_memory_latency",
- "MetricThreshold": "tma_info_bottleneck_memory_latency > 20",
- "PublicDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches). Related metrics: tma_l3_hit_latency, tma_mem_latency"
- },
- {
- "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "100 * (tma_branch_mispredicts + tma_fetch_latency * tma_mispredicts_resteers / (tma_branch_resteers + tma_dsb_switches + tma_icache_misses + tma_itlb_misses + tma_lcp + tma_ms_switches))",
- "MetricGroup": "Bad;BadSpec;BrMispredicts;tma_issueBM",
- "MetricName": "tma_info_bottleneck_mispredictions",
- "MetricThreshold": "tma_info_bottleneck_mispredictions > 20",
- "PublicDescription": "Total pipeline cost of Branch Misprediction related bottlenecks. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_mispredicts_resteers"
- },
- {
"BriefDescription": "Fraction of branches that are CALL or RET",
"MetricExpr": "(BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN) / BR_INST_RETIRED.ALL_BRANCHES",
"MetricGroup": "Bad;Branches",
@@ -768,7 +971,7 @@
},
{
"BriefDescription": "Core actual clocks when any Logical Processor is active on the Physical Core",
- "MetricExpr": "CPU_CLK_UNHALTED.DISTRIBUTED",
+ "MetricExpr": "(CPU_CLK_UNHALTED.DISTRIBUTED if #SMT_on else tma_info_thread_clks)",
"MetricGroup": "SMT",
"MetricName": "tma_info_core_core_clks"
},
@@ -779,38 +982,37 @@
"MetricName": "tma_info_core_coreipc"
},
{
+ "BriefDescription": "uops Executed per Cycle",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / tma_info_thread_clks",
+ "MetricGroup": "Power",
+ "MetricName": "tma_info_core_epc"
+ },
+ {
"BriefDescription": "Floating Point Operations Per Cycle",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / tma_info_core_core_clks",
"MetricGroup": "Flops;Ret",
"MetricName": "tma_info_core_flopc"
},
{
"BriefDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width)",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@) / (2 * tma_info_core_core_clks)",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR) / (2 * tma_info_core_core_clks)",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_core_fp_arith_utilization",
"PublicDescription": "Actual per-core usage of the Floating Point non-X87 execution units (regardless of precision or vector-width). Values > 1 are possible due to ([BDW+] Fused-Multiply Add (FMA) counting - common; [ADL+] use all of ADD/MUL/FMA in Scalar or 128/256-bit vectors - less common)."
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
},
{
- "BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear)",
- "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
- "MetricGroup": "Bad;BadSpec;BrMispredicts;TopdownL1;tma_L1_group",
- "MetricName": "tma_info_core_ipmispredict",
- "MetricgroupNoGroup": "TopdownL1"
- },
- {
"BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
"MetricExpr": "IDQ.DSB_UOPS / UOPS_ISSUED.ANY",
"MetricGroup": "DSB;Fed;FetchBW;tma_issueFB",
"MetricName": "tma_info_frontend_dsb_coverage",
"MetricThreshold": "tma_info_frontend_dsb_coverage < 0.7 & tma_info_thread_ipc / 5 > 0.35",
- "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
+ "PublicDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache). Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_inst_mix_iptb, tma_lcp"
},
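(Editorial aside.) DSB coverage is another straightforward ratio: uops delivered from the decoded-uop cache over all issued uops, flagged when coverage falls below 0.7 on a workload that is not IPC-starved. A minimal Python sketch with invented counts:

    # Coverage = IDQ.DSB_UOPS / UOPS_ISSUED.ANY (hypothetical counts)
    dsb_uops = 6_000_000_000
    issued_uops = 10_000_000_000
    ipc = 2.1                    # tma_info_thread_ipc, also hypothetical

    coverage = dsb_uops / issued_uops
    # Threshold from the entry: coverage < 0.7 while ipc / 5 > 0.35
    if coverage < 0.7 and ipc / 5 > 0.35:
        print(f"DSB coverage {coverage:.2f} is below the 0.7 threshold")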
{
"BriefDescription": "Average number of cycles of a switch from the DSB fetch-unit to MITE fetch unit - see DSB_Switches tree node for details.",
@@ -856,6 +1058,12 @@
"MetricName": "tma_info_frontend_l2mpki_code_all"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -870,11 +1078,11 @@
},
{
"BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + cpu@FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE\\,umask\\=0xfc@)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + FP_ARITH_INST_RETIRED.VECTOR)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
@@ -882,7 +1090,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx128",
"MetricThreshold": "tma_info_inst_mix_iparith_avx128 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
@@ -890,7 +1098,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx256",
"MetricThreshold": "tma_info_inst_mix_iparith_avx256 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate)",
@@ -898,7 +1106,7 @@
"MetricGroup": "Flops;FpVector;InsType",
"MetricName": "tma_info_inst_mix_iparith_avx512",
"MetricThreshold": "tma_info_inst_mix_iparith_avx512 < 10",
- "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
@@ -906,7 +1114,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_dp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_dp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
@@ -914,7 +1122,7 @@
"MetricGroup": "Flops;FpScalar;InsType",
"MetricName": "tma_info_inst_mix_iparith_scalar_sp",
"MetricThreshold": "tma_info_inst_mix_iparith_scalar_sp < 10",
- "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
+ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting."
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
@@ -932,7 +1140,7 @@
},
{
"BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / (cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
+ "MetricExpr": "INST_RETIRED.ANY / (FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE)",
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_ipflop",
"MetricThreshold": "tma_info_inst_mix_ipflop < 10"
@@ -945,6 +1153,12 @@
"MetricThreshold": "tma_info_inst_mix_ipload < 3"
},
{
+ "BriefDescription": "Instructions per PAUSE (lower number means higher occurrence rate)",
+ "MetricExpr": "tma_info_inst_mix_instructions / MISC_RETIRED.PAUSE_INST",
+ "MetricGroup": "Flops;FpVector;InsType",
+ "MetricName": "tma_info_inst_mix_ippause"
+ },
+ {
"BriefDescription": "Instructions per Store (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / MEM_INST_RETIRED.ALL_STORES",
"MetricGroup": "InsType",
@@ -953,30 +1167,30 @@
},
{
"BriefDescription": "Instructions per Software prefetch instruction (of any type: NTA/T0/T1/T2/Prefetch) (lower number means higher occurrence rate)",
- "MetricExpr": "INST_RETIRED.ANY / cpu@SW_PREFETCH_ACCESS.T0\\,umask\\=0xF@",
+ "MetricExpr": "INST_RETIRED.ANY / SW_PREFETCH_ACCESS.ANY",
"MetricGroup": "Prefetches",
"MetricName": "tma_info_inst_mix_ipswpf",
"MetricThreshold": "tma_info_inst_mix_ipswpf < 100"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 11",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
@@ -992,124 +1206,143 @@
},
{
"BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_access_bw",
"MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_core_l3_cache_access_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_access_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
},
{
"BriefDescription": "Fill Buffer (FB) hits per kilo instructions for retired demand loads (L1D misses that merge into ongoing miss-handling entries)",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_fb_hpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
+ },
+ {
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki_load"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2hpki_load"
},
{
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all request types (including speculative)",
"MetricExpr": "1e3 * (OFFCORE_REQUESTS.ALL_DATA_RD - OFFCORE_REQUESTS.DEMAND_DATA_RD + L2_RQSTS.ALL_DEMAND_MISS + L2_RQSTS.SWPF_MISS) / tma_info_inst_mix_instructions",
- "MetricGroup": "CacheMisses;Mem;Offcore",
+ "MetricGroup": "CacheHits;Mem;Offcore",
"MetricName": "tma_info_memory_l2mpki_all"
},
{
"BriefDescription": "L2 cache ([RKL+] true) misses per kilo instruction for all demand loads (including speculative)",
"MetricExpr": "1e3 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki_load"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * L2_RQSTS.RFO_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "tma_info_memory_l3_cache_access_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
+ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,cmask\\=1@",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average Latency for L3 cache miss demand Loads",
- "MetricExpr": "cpu@OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD\\,umask\\=0x10@ / OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l3_miss_latency"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
+ "BriefDescription": "\"Bus lock\" per kilo instruction",
+ "MetricExpr": "1e3 * SQ_MISC.BUS_LOCK / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_bus_lock_pki"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
+ "BriefDescription": "Un-cacheable retired load per kilo instruction",
+ "MetricExpr": "1e3 * MEM_LOAD_MISC_RETIRED.UC / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_mix_uc_load_pki"
},
{
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_access_bw",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Rate of L2 HW prefetched lines that were not used by demand accesses",
+ "MetricExpr": "L2_LINES_OUT.USELESS_HWPF / (L2_LINES_OUT.SILENT + L2_LINES_OUT.NON_SILENT)",
+ "MetricGroup": "Prefetches",
+ "MetricName": "tma_info_memory_prefetches_useless_hwpf",
+ "MetricThreshold": "tma_info_memory_prefetches_useless_hwpf > 0.15"
},
{
"BriefDescription": "STLB (2nd level TLB) code speculative misses per kilo instruction (misses of any page-size that complete the page walk)",
@@ -1137,54 +1370,100 @@
"MetricName": "tma_info_memory_tlb_store_stlb_mpki"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 if #SMT_on else cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute"
},
{
+ "BriefDescription": "Average number of uops fetched from DSB per cycle",
+ "MetricExpr": "IDQ.DSB_UOPS / IDQ.DSB_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_dsb"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MITE per cycle",
+ "MetricExpr": "IDQ.MITE_UOPS / IDQ.MITE_CYCLES_ANY",
+ "MetricGroup": "Fed;FetchBW",
+ "MetricName": "tma_info_pipeline_fetch_mite"
+ },
+ {
+ "BriefDescription": "Average number of uops fetched from MS per cycle",
+ "MetricExpr": "IDQ.MS_UOPS / cpu@IDQ.MS_UOPS\\,cmask\\=1@",
+ "MetricGroup": "Fed;FetchLat;MicroSeq",
+ "MetricName": "tma_info_pipeline_fetch_ms"
+ },
+ {
+ "BriefDescription": "Instructions per microcode Assist invocation",
+ "MetricExpr": "INST_RETIRED.ANY / ASSISTS.ANY",
+ "MetricGroup": "MicroSeq;Pipeline;Ret;Retire",
+ "MetricName": "tma_info_pipeline_ipassist",
+ "MetricThreshold": "tma_info_pipeline_ipassist < 100e3",
+ "PublicDescription": "Instructions per a microcode Assist invocation. See Assists tree node for details (lower number means higher occurrence rate)"
+ },
+ {
"BriefDescription": "Average number of Uops retired in cycles where at least one uop has retired.",
"MetricExpr": "tma_retiring * tma_info_thread_slots / cpu@UOPS_RETIRED.SLOTS\\,cmask\\=1@",
"MetricGroup": "Pipeline;Ret",
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [GB / sec]",
+ "MetricExpr": "(64 * UNC_M_PMM_RPQ_INSERTS / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_read_bw"
+ },
+ {
+ "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes [GB / sec]",
+ "MetricExpr": "(64 * UNC_M_PMM_WPQ_INSERTS / 1e9 / tma_info_system_time if #has_pmem > 0 else 0)",
+ "MetricGroup": "MemOffcore;MemoryBW;Server;SoC",
+ "MetricName": "tma_info_system_cxl_mem_write_bw"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / duration_time",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) / 1e9 / tma_info_system_time",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
- "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_mem_bandwidth, tma_sq_full"
+ "PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
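The DRAM bandwidth metric converts memory-controller CAS counts into bytes (64 bytes per CAS) and normalizes by run time, which in this revision is tma_info_system_time rather than duration_time. A sketch with assumed uncore counts:

# Hypothetical uncore counts and run time.
cas_rd = 2.5e9        # UNC_M_CAS_COUNT.RD
cas_wr = 1.0e9        # UNC_M_CAS_COUNT.WR
seconds = 10.0        # tma_info_system_time

dram_bw_gbs = 64 * (cas_rd + cas_wr) / 1e9 / seconds
print(f"tma_info_system_dram_bw_use = {dram_bw_gbs:.1f} GB/s")   # 22.4 GB/s with these numbers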
{
"BriefDescription": "Giga Floating Point Operations Per Second",
- "MetricExpr": "(cpu@FP_ARITH_INST_RETIRED.SCALAR_SINGLE\\,umask\\=0x03@ + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * cpu@FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE\\,umask\\=0x18@ + 8 * cpu@FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE\\,umask\\=0x60@ + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / duration_time",
+ "MetricExpr": "(FP_ARITH_INST_RETIRED.SCALAR + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * FP_ARITH_INST_RETIRED.4_FLOPS + 8 * FP_ARITH_INST_RETIRED.8_FLOPS + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
},
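The GFLOPs expression weights each FP_ARITH_INST_RETIRED sub-event by the FLOPs per retired instruction and divides by time. A worked sketch of that arithmetic, assuming the combined events used in the new expression are available and using invented counts:

# Hypothetical retired-instruction counts.
scalar      = 1.0e9   # FP_ARITH_INST_RETIRED.SCALAR             -> 1 FLOP each
packed128d  = 0.2e9   # FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE -> 2 FLOPs each
four_flops  = 0.5e9   # FP_ARITH_INST_RETIRED.4_FLOPS            -> 4 FLOPs each
eight_flops = 0.3e9   # FP_ARITH_INST_RETIRED.8_FLOPS            -> 8 FLOPs each
packed512s  = 0.1e9   # FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE -> 16 FLOPs each
seconds     = 10.0    # tma_info_system_time

flops = scalar + 2 * packed128d + 4 * four_flops + 8 * eight_flops + 16 * packed512s
print(f"tma_info_system_gflops = {flops / 1e9 / seconds:.2f}")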
{
"BriefDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]",
- "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e9 / duration_time",
- "MetricGroup": "IoBW;Mem;Server;SoC",
- "MetricName": "tma_info_system_io_read_bw"
+ "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_read_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]. Bandwidth of IO reads that are initiated by end device controllers that are requesting memory from the CPU"
},
{
"BriefDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]",
- "MetricExpr": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR * 64 / 1e9 / duration_time",
- "MetricGroup": "IoBW;Mem;Server;SoC",
- "MetricName": "tma_info_system_io_write_bw"
+ "MetricExpr": "(UNC_CHA_TOR_INSERTS.IO_HIT_ITOM + UNC_CHA_TOR_INSERTS.IO_MISS_ITOM + UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR + UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR) * 64 / 1e9 / tma_info_system_time",
+ "MetricGroup": "IoBW;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_io_write_bw",
+ "PublicDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]. Bandwidth of IO writes that are initiated by end device controllers that are writing memory to the CPU"
},
{
"BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
@@ -1209,12 +1488,20 @@
{
"BriefDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]",
"MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR) / cha_0@event\\=0x0@",
- "MetricGroup": "Mem;MemoryLat;Server;SoC",
+ "MetricGroup": "MemOffcore;MemoryLat;Server;SoC",
"MetricName": "tma_info_system_mem_dram_read_latency",
"PublicDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches"
},
{
+ "BriefDescription": "Fraction of Uncore cycles where requests got rejected due to duplicate address already in IRQ ingress queue in the cache homing agent",
+ "MetricExpr": "UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH / UNC_CHA_CLOCKTICKS",
+ "MetricGroup": "LockCont;MemOffcore;Server;SoC",
+ "MetricName": "tma_info_system_mem_irq_duplicate_address",
+ "MetricThreshold": "tma_info_system_mem_irq_duplicate_address > 0.1"
+ },
+ {
"BriefDescription": "Average number of parallel data read requests to external memory",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD@thresh\\=1@",
"MetricGroup": "Mem;MemoryBW;SoC",
"MetricName": "tma_info_system_mem_parallel_reads",
@@ -1223,28 +1510,29 @@
{
"BriefDescription": "Average latency of data read request to external 3D X-Point memory [in nanoseconds]",
"MetricExpr": "(1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM / UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM) / cha_0@event\\=0x0@ if #has_pmem > 0 else 0)",
- "MetricGroup": "Mem;MemoryLat;Server;SoC",
+ "MetricGroup": "MemOffcore;MemoryLat;Server;SoC",
"MetricName": "tma_info_system_mem_pmm_read_latency",
"PublicDescription": "Average latency of data read request to external 3D X-Point memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches"
},
{
"BriefDescription": "Average latency of data read request to external memory (in nanoseconds)",
- "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (tma_info_system_socket_clks / duration_time)",
+ "MetricExpr": "1e9 * (UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD / UNC_CHA_TOR_INSERTS.IA_MISS_DRD) / (tma_info_system_socket_clks / tma_info_system_time)",
"MetricGroup": "Mem;MemoryLat;SoC",
"MetricName": "tma_info_system_mem_read_latency",
"PublicDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches. ([RKL+]memory-controller only)"
},
{
- "BriefDescription": "Average 3DXP Memory Bandwidth Use for reads [GB / sec]",
- "MetricExpr": "(64 * UNC_M_PMM_RPQ_INSERTS / 1e9 / duration_time if #has_pmem > 0 else 0)",
- "MetricGroup": "Mem;MemoryBW;Server;SoC",
- "MetricName": "tma_info_system_pmm_read_bw"
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
},
{
- "BriefDescription": "Average 3DXP Memory Bandwidth Use for Writes [GB / sec]",
- "MetricExpr": "(64 * UNC_M_PMM_WPQ_INSERTS / 1e9 / duration_time if #has_pmem > 0 else 0)",
- "MetricGroup": "Mem;MemoryBW;Server;SoC",
- "MetricName": "tma_info_system_pmm_write_bw"
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "(power@energy\\-pkg@ * 61 + 15.6 * power@energy\\-ram@) / (duration_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
},
{
"BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0",
@@ -1282,12 +1570,25 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
"MetricName": "tma_info_system_turbo_utilization"
},
{
+ "BriefDescription": "Measured Average Uncore Frequency for the SoC [GHz]",
+ "MetricExpr": "tma_info_system_socket_clks / 1e9 / tma_info_system_time",
+ "MetricGroup": "SoC",
+ "MetricName": "tma_info_system_uncore_frequency"
+ },
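Uncore frequency is derived from socket clock ticks over run time, the same pattern that underlies the renamed core-frequency metric earlier in the hunk. A sketch with assumed inputs:

# Hypothetical inputs; socket_clks stands in for tma_info_system_socket_clks.
socket_clks = 18e9     # CHA clockticks summed for the socket
seconds = 10.0         # tma_info_system_time

uncore_ghz = socket_clks / 1e9 / seconds
print(f"tma_info_system_uncore_frequency = {uncore_ghz:.2f} GHz")   # 1.80 GHz with these numbers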
+ {
"BriefDescription": "Per-Logical Processor actual clocks when the Logical Processor is active.",
"MetricExpr": "CPU_CLK_UNHALTED.THREAD",
"MetricGroup": "Pipeline",
@@ -1332,66 +1633,91 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "tma_retiring * tma_info_thread_slots / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
"MetricThreshold": "tma_info_thread_uptb < 7.5"
},
{
+ "BriefDescription": "This metric represents fraction of cycles where the Integer Divider unit was active.",
+ "MetricExpr": "tma_divider - tma_fp_divider",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_divider_group",
+ "MetricName": "tma_int_divider",
+ "MetricThreshold": "tma_int_divider > 0.2 & (tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
- "MetricExpr": "ICACHE_64B.IFTAG_STALL / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricExpr": "ICACHE_TAG.STALLS / tma_info_thread_clks",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: FRONTEND_RETIRED.STLB_MISS_PS;FRONTEND_RETIRED.ITLB_MISS_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT_PS;MEM_LOAD_RETIRED.FB_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_RETIRED.L1_HIT. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache",
+ "MetricExpr": "min(2 * (MEM_INST_RETIRED.ALL_LOADS - MEM_LOAD_RETIRED.FB_HIT - MEM_LOAD_RETIRED.L1_MISS) * 20 / 100, max(CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS, 0)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_l1_bound_group",
+ "MetricName": "tma_l1_latency_dependency",
+ "MetricThreshold": "tma_l1_latency_dependency > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric ([SKL+] roughly; [LNL]) estimates fraction of cycles with demand load accesses that hit the L1D cache. The short latency of the L1D cache may be exposed in pointer-chasing memory access patterns as an example. Sample with: MEM_LOAD_RETIRED.L1_HIT",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / (MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + L1D_PEND_MISS.FB_FULL_PERIODS) * ((CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT_PS",
+ "PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L2_HIT",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited)",
+ "MetricExpr": "4 * tma_info_system_core_frequency * MEM_LOAD_RETIRED.L2_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_l2_bound_group",
+ "MetricName": "tma_l2_hit_latency",
+ "MetricThreshold": "tma_l2_hit_latency > 0.05 & (tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L2 cache under unloaded scenarios (possibly L2 latency limited). Avoiding L1 cache misses (i.e. L1 misses/L2 hits) will improve the latency. Sample with: MEM_LOAD_RETIRED.L2_HIT",
"ScaleUnit": "100%"
},
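tma_l2_hit_latency charges roughly 4 cycles per retired L2 hit, adjusted for fill-buffer hits, and normalizes by thread clocks. A sketch of the arithmetic, treating tma_info_system_core_frequency and tma_info_thread_clks as precomputed inputs and using invented counts:

# Assumed precomputed metrics and hypothetical event counts.
core_freq_ghz = 3.0          # tma_info_system_core_frequency
l2_hit  = 50e6               # MEM_LOAD_RETIRED.L2_HIT
fb_hit  = 20e6               # MEM_LOAD_RETIRED.FB_HIT
l1_miss = 80e6               # MEM_LOAD_RETIRED.L1_MISS
clks    = 30e9               # tma_info_thread_clks

l2_hit_latency = 4 * core_freq_ghz * l2_hit * (1 + fb_hit / l1_miss / 2) / clks
print(f"tma_l2_hit_latency = {l2_hit_latency:.2%} of cycles")   # 2.25% with these numbers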
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
- "MetricExpr": "19 * tma_info_system_average_frequency * MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "MetricExpr": "19 * tma_info_system_core_frequency * (MEM_LOAD_RETIRED.L3_HIT * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2)) / tma_info_thread_clks",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_info_bottleneck_memory_latency, tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_RETIRED.L3_HIT_PS. Related metrics: tma_bottleneck_data_cache_memory_latency, tma_mem_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs)",
- "MetricExpr": "ILD_STALL.LCP / tma_info_thread_clks",
+ "MetricExpr": "DECODE.LCP / tma_info_thread_clks",
"MetricGroup": "FetchLat;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueFB",
"MetricName": "tma_lcp",
"MetricThreshold": "tma_lcp > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
+ "PublicDescription": "This metric represents fraction of cycles CPU was stalled due to Length Changing Prefixes (LCPs). Using proper compiler flags or Intel Compiler by default will certainly avoid this. #Link: Optimization Guide about LCP BKMs. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_botlnk_l2_dsb_bandwidth, tma_info_botlnk_l2_dsb_misses, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb",
"ScaleUnit": "100%"
},
{
@@ -1401,7 +1727,7 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
@@ -1430,50 +1756,74 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_1G / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_1g",
+ "MetricThreshold": "tma_load_stlb_miss_1g > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_2m",
+ "MetricThreshold": "tma_load_stlb_miss_2m > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data load accesses.",
+ "MetricExpr": "tma_load_stlb_miss * DTLB_LOAD_MISSES.WALK_COMPLETED_4K / (DTLB_LOAD_MISSES.WALK_COMPLETED_4K + DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M + DTLB_LOAD_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_load_stlb_miss_group",
+ "MetricName": "tma_load_stlb_miss_4k",
+ "MetricThreshold": "tma_load_stlb_miss_4k > 0.05 & (tma_load_stlb_miss > 0.05 & (tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
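The three new tma_load_stlb_miss_* children split the parent STLB-miss cost by the page size of the completed walk; their expressions differ only in the numerator. A compact sketch of the apportioning, with the parent value assumed as a precomputed fraction and hypothetical walk counts:

# Assumed parent cost and hypothetical DTLB_LOAD_MISSES.WALK_COMPLETED_* counts.
tma_load_stlb_miss = 0.08
walks = {
    "4k": 900_000,
    "2m": 80_000,
    "1g": 20_000,
}

total = sum(walks.values())
for size, count in walks.items():
    print(f"tma_load_stlb_miss_{size} = {tma_load_stlb_miss * count / total:.4f}")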
+ {
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory",
- "MetricExpr": "43.5 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricExpr": "43.5 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "Server;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_local_dram",
- "MetricThreshold": "tma_local_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM_PS",
+ "MetricName": "tma_local_mem",
+ "MetricThreshold": "tma_local_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from local memory. Caching will improve the latency and increase performance. Sample with: MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(16 * max(0, MEM_INST_RETIRED.LOCK_LOADS - L2_RQSTS.ALL_RFO) + MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES * (10 * L2_RQSTS.RFO_HIT + min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
+ "PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_INST_RETIRED.LOCK_LOADS. Related metrics: tma_store_latency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricExpr": "max(0, tma_bad_speculation - tma_branch_mispredicts)",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
+ "PublicDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears. These slots are either wasted by uops fetched prior to the clear; or stalls the out-of-order portion of the machine needs to recover its state after the clear. For example; this can happen due to memory ordering Nukes (e.g. Memory Disambiguation) or Self-Modifying-Code (SMC) nukes. Sample with: MACHINE_CLEARS.COUNT. Related metrics: tma_bottleneck_memory_synchronization, tma_clears_resteers, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_l1_bound, tma_microcode_sequencer, tma_ms_switches, tma_remote_cache",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_info_bottleneck_memory_latency, tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_bottleneck_data_cache_memory_latency, tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
@@ -1497,20 +1847,20 @@
},
{
"BriefDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit",
- "MetricExpr": "tma_retiring * tma_info_thread_slots / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slots",
+ "MetricExpr": "UOPS_RETIRED.SLOTS / UOPS_ISSUED.ANY * IDQ.MS_UOPS / tma_info_thread_slots",
"MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_heavy_operations_group;tma_issueMC;tma_issueMS",
"MetricName": "tma_microcode_sequencer",
"MetricThreshold": "tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1",
- "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
+ "PublicDescription": "This metric represents fraction of slots the CPU was retiring uops fetched by the Microcode Sequencer (MS) unit. The MS is used for CISC instructions not supported by the default decoders (like repeat move strings; or CPUID); or by microcode assists used to address some operation modes (like in Floating Point assists). These cases can often be avoided. Sample with: IDQ.MS_UOPS. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_ms_switches",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * INT_MISC.CLEAR_RESTEER_CYCLES / tma_info_thread_clks",
- "MetricGroup": "BadSpec;BrMispredicts;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TopdownL4;tma_L4_group;tma_branch_resteers_group;tma_issueBM",
"MetricName": "tma_mispredicts_resteers",
"MetricThreshold": "tma_mispredicts_resteers > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost, tma_info_bottleneck_mispredictions",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Branch Resteers as a result of Branch Misprediction at execution stage. Sample with: INT_MISC.CLEAR_RESTEER_CYCLES. Related metrics: tma_bottleneck_mispredictions, tma_branch_mispredicts, tma_info_bad_spec_branch_misprediction_cost",
"ScaleUnit": "100%"
},
{
@@ -1518,7 +1868,7 @@
"MetricExpr": "(IDQ.MITE_CYCLES_ANY - IDQ.MITE_CYCLES_OK) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck. Sample with: FRONTEND_RETIRED.ANY_DSB_MISS",
"ScaleUnit": "100%"
},
@@ -1527,16 +1877,24 @@
"MetricExpr": "(cpu@IDQ.MITE_UOPS\\,cmask\\=4@ - cpu@IDQ.MITE_UOPS\\,cmask\\=5@) / tma_info_thread_clks",
"MetricGroup": "DSBmiss;FetchBW;TopdownL4;tma_L4_group;tma_mite_group",
"MetricName": "tma_mite_4wide",
- "MetricThreshold": "tma_mite_4wide > 0.05 & (tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 5 > 0.35))",
+ "MetricThreshold": "tma_mite_4wide > 0.05 & (tma_mite > 0.1 & tma_fetch_bandwidth > 0.2)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued",
+ "BriefDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles)",
"MetricExpr": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH / UOPS_ISSUED.ANY",
"MetricGroup": "TopdownL5;tma_L5_group;tma_issueMV;tma_ports_utilized_0_group",
"MetricName": "tma_mixing_vectors",
"MetricThreshold": "tma_mixing_vectors > 0.05",
- "PublicDescription": "The Mixing_Vectors metric gives the percentage of injected blend uops out of all uops issued. Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "PublicDescription": "This metric estimates penalty in terms of percentage of([SKL+] injected blend uops out of all Uops Issued -- the Count Domain; [ADL+] cycles). Usually a Mixing_Vectors over 5% is worth investigating. Read more in Appendix B1 of the Optimizations Guide for this topic. Related metrics: tma_ms_switches",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the Microcode Sequencer (MS) unit - see Microcode_Sequencer node for details.",
+ "MetricExpr": "cpu@IDQ.MS_UOPS\\,cmask\\=1@ / tma_info_core_core_clks / 3.3",
+ "MetricGroup": "MicroSeq;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
+ "MetricName": "tma_ms",
+ "MetricThreshold": "tma_ms > 0.05 & tma_fetch_bandwidth > 0.2",
"ScaleUnit": "100%"
},
{
@@ -1545,22 +1903,22 @@
"MetricGroup": "FetchLat;MicroSeq;TopdownL3;tma_L3_group;tma_fetch_latency_group;tma_issueMC;tma_issueMS;tma_issueMV;tma_issueSO",
"MetricName": "tma_ms_switches",
"MetricThreshold": "tma_ms_switches > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
- "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
+ "PublicDescription": "This metric estimates the fraction of cycles when the CPU was stalled due to switches of uop delivery to the Microcode Sequencer (MS). Commonly used instructions are optimized for delivery by the DSB (decoded i-cache) or MITE (legacy instruction decode) pipelines. Certain operations cannot be handled natively by the execution pipeline; and must be performed by microcode (small programs injected into the execution stream). Switching to the MS too often can negatively impact performance. The MS is designated to deliver long uop flows required by CISC instructions like CPUID; or uncommon conditions like Floating Point Assists when dealing with Denormals. Sample with: IDQ.MS_SWITCHES. Related metrics: tma_bottleneck_irregular_overhead, tma_clears_resteers, tma_l1_bound, tma_machine_clears, tma_microcode_sequencer, tma_mixing_vectors, tma_serializing_operation",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions",
"MetricExpr": "tma_light_operations * INST_RETIRED.NOP / (tma_retiring * tma_info_thread_slots)",
- "MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
+ "MetricGroup": "BvBO;Pipeline;TopdownL4;tma_L4_group;tma_other_light_ops_group",
"MetricName": "tma_nop_instructions",
- "MetricThreshold": "tma_nop_instructions > 0.1 & tma_light_operations > 0.6",
+ "MetricThreshold": "tma_nop_instructions > 0.1 & (tma_other_light_ops > 0.3 & tma_light_operations > 0.6)",
"PublicDescription": "This metric represents fraction of slots where the CPU was retiring NOP (no op) instructions. Compilers often use NOPs for certain address alignments - e.g. start address of a function or loop body. Sample with: INST_RETIRED.NOP",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents the remaining light uops fraction the CPU has executed - remaining means not covered by other sibling nodes",
"MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_branch_instructions + tma_nop_instructions))",
+ "MetricExpr": "max(0, tma_light_operations - (tma_fp_arith + tma_memory_operations + tma_branch_instructions))",
"MetricGroup": "Pipeline;TopdownL3;tma_L3_group;tma_light_operations_group",
"MetricName": "tma_other_light_ops",
"MetricThreshold": "tma_other_light_ops > 0.3 & tma_light_operations > 0.6",
@@ -1568,12 +1926,19 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external 3D-Xpoint (Crystal Ridge, a.k.a",
- "MetricExpr": "(((1 - ((19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS))) / (19 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + 10 * (MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) + MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS)) + (25 * (MEM_LOAD_RETIRED.LOCAL_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0) + 33 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) if #has_pmem > 0 else 0))) if #has_pmem > 0 else 0)) * (CYCLE_ACTIVITY.STALLS_L3_MISS / tma_info_thread_clks + (CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS) / tma_info_thread_clks - tma_l2_bound) if 1e6 * (MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM + MEM_LOAD_RETIRED.LOCAL_PMM) > MEM_LOAD_RETIRED.L1_MISS else 0) if #has_pmem > 0 else 0)",
- "MetricGroup": "MemoryBound;Server;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
- "MetricName": "tma_pmm_bound",
- "MetricThreshold": "tma_pmm_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric roughly estimates (based on idle latencies) how often the CPU was stalled on accesses to external 3D-Xpoint (Crystal Ridge, a.k.a. IXP) memory by loads, PMM stands for Persistent Memory Module.",
+ "BriefDescription": "This metric estimates fraction of slots the CPU was stalled due to other cases of misprediction (non-retired x86 branches or other types).",
+ "MetricExpr": "max(tma_branch_mispredicts * (1 - BR_MISP_RETIRED.ALL_BRANCHES / (INT_MISC.CLEARS_COUNT - MACHINE_CLEARS.COUNT)), 0.0001)",
+ "MetricGroup": "BrMispredicts;BvIO;TopdownL3;tma_L3_group;tma_branch_mispredicts_group",
+ "MetricName": "tma_other_mispredicts",
+ "MetricThreshold": "tma_other_mispredicts > 0.05 & (tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15)",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Nukes (Machine Clears) not related to memory ordering.",
+ "MetricExpr": "max(tma_machine_clears * (1 - MACHINE_CLEARS.MEMORY_ORDERING / MACHINE_CLEARS.COUNT), 0.0001)",
+ "MetricGroup": "BvIO;Machine_Clears;TopdownL3;tma_L3_group;tma_machine_clears_group",
+ "MetricName": "tma_other_nukes",
+ "MetricThreshold": "tma_other_nukes > 0.05 & (tma_machine_clears > 0.1 & tma_bad_speculation > 0.15)",
"ScaleUnit": "100%"
},
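Both of the new "other" breakdowns follow the same max(parent * (1 - known_fraction), 0.0001) pattern: they attribute the remainder of the parent node and clamp it so the child never vanishes entirely. A sketch of the tma_other_nukes arithmetic with assumed inputs:

# Assumed parent metric value and hypothetical machine-clear counts.
tma_machine_clears = 0.12
mem_ordering_clears = 3_000          # MACHINE_CLEARS.MEMORY_ORDERING
all_clears = 4_000                   # MACHINE_CLEARS.COUNT

tma_other_nukes = max(tma_machine_clears * (1 - mem_ordering_clears / all_clears), 0.0001)
print(f"tma_other_nukes = {tma_other_nukes:.4f}")   # 0.0300 here: clears not due to memory ordering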
{
@@ -1604,17 +1969,17 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU)",
+ "BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU)",
"MetricExpr": "UOPS_DISPATCHED.PORT_6 / tma_info_core_core_clks",
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_6",
"MetricThreshold": "tma_port_6 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+]Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_6. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 6 ([HSW+] Primary Branch and simple ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU performance was potentially limited due to Core computation issues (non divider-related)",
- "MetricExpr": "((cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_thread_clks)",
+ "MetricExpr": "((tma_ports_utilized_0 * tma_info_thread_clks + (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL)) / tma_info_thread_clks if ARITH.DIVIDER_ACTIVE < CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY else (EXE_ACTIVITY.1_PORTS_UTIL + tma_retiring * EXE_ACTIVITY.2_PORTS_UTIL) / tma_info_thread_clks)",
"MetricGroup": "PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_ports_utilization",
"MetricThreshold": "tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
@@ -1623,7 +1988,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ / tma_info_thread_clks + tma_serializing_operation * (CYCLE_ACTIVITY.STALLS_TOTAL - CYCLE_ACTIVITY.STALLS_MEM_ANY) / tma_info_thread_clks",
+ "MetricExpr": "cpu@EXE_ACTIVITY.3_PORTS_UTIL\\,umask\\=0x80@ / tma_info_thread_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -1651,35 +2016,35 @@
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
"MetricExpr": "UOPS_EXECUTED.CYCLES_GE_3 / tma_info_thread_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Sample with: UOPS_EXECUTED.CYCLES_GE_3",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues",
- "MetricExpr": "(97 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 97 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricExpr": "(97 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM + 97 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD) * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "Offcore;Server;Snoop;TopdownL5;tma_L5_group;tma_issueSyncxn;tma_mem_latency_group",
"MetricName": "tma_remote_cache",
"MetricThreshold": "tma_remote_cache > 0.05 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
- "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears",
+ "PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote cache in other sockets including synchronizations issues. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS;MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD_PS. Related metrics: tma_bottleneck_memory_synchronization, tma_contested_accesses, tma_data_sharing, tma_false_sharing, tma_machine_clears",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory",
- "MetricExpr": "108 * tma_info_system_average_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
+ "MetricExpr": "108 * tma_info_system_core_frequency * MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM * (1 + MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS / 2) / tma_info_thread_clks",
"MetricGroup": "Server;Snoop;TopdownL5;tma_L5_group;tma_mem_latency_group",
- "MetricName": "tma_remote_dram",
- "MetricThreshold": "tma_remote_dram > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
+ "MetricName": "tma_remote_mem",
+ "MetricThreshold": "tma_remote_mem > 0.1 & (tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling loads from remote memory. This is caused often due to non-optimal NUMA allocations. #link to NUMA article. Sample with: MEM_LOAD_L3_MISS_RETIRED.REMOTE_DRAM_PS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"DefaultMetricgroupName": "TopdownL1",
- "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound) + 0 * tma_info_thread_slots",
- "MetricGroup": "Default;TmaL1;TopdownL1;tma_L1_group",
+ "MetricExpr": "topdown\\-retiring / (topdown\\-fe\\-bound + topdown\\-bad\\-spec + topdown\\-retiring + topdown\\-be\\-bound)",
+ "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1;Default",
@@ -1689,18 +2054,18 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations",
"MetricExpr": "RESOURCE_STALLS.SCOREBOARD / tma_info_thread_clks",
- "MetricGroup": "PortsUtil;TopdownL5;tma_L5_group;tma_issueSO;tma_ports_utilized_0_group",
+ "MetricGroup": "BvIO;PortsUtil;TopdownL3;tma_L3_group;tma_core_bound_group;tma_issueSO",
"MetricName": "tma_serializing_operation",
- "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)))",
+ "MetricThreshold": "tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric represents fraction of cycles the CPU issue-pipeline was stalled due to serializing operations. Instructions like CPUID; WRMSR or LFENCE serialize the out-of-order execution which may limit performance. Sample with: RESOURCE_STALLS.SCOREBOARD. Related metrics: tma_ms_switches",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions",
"MetricExpr": "37 * MISC_RETIRED.PAUSE_INST / tma_info_thread_clks",
- "MetricGroup": "TopdownL6;tma_L6_group;tma_serializing_operation_group",
+ "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group",
"MetricName": "tma_slow_pause",
- "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))))",
+ "MetricThreshold": "tma_slow_pause > 0.05 & (tma_serializing_operation > 0.1 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to PAUSE Instructions. Sample with: MISC_RETIRED.PAUSE_INST",
"ScaleUnit": "100%"
},
@@ -1709,13 +2074,12 @@
"MetricExpr": "tma_info_memory_load_miss_real_latency * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_INST_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents rate of split store accesses",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "MEM_INST_RETIRED.SPLIT_STORES / tma_info_core_core_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_issueSpSt;tma_store_bound_group",
"MetricName": "tma_split_stores",
@@ -1726,10 +2090,10 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "L1D_PEND_MISS.L2_STALL / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_bottleneck_memory_bandwidth, tma_info_system_dram_bw_use, tma_mem_bandwidth",
+ "PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_bottleneck_data_cache_memory_bandwidth, tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
"ScaleUnit": "100%"
},
{
@@ -1743,7 +2107,6 @@
},
{
"BriefDescription": "This metric roughly estimates fraction of cycles when the memory subsystem had loads blocked since they could not forward data from earlier (in program order) overlapping stores",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "13 * LD_BLOCKS.STORE_FORWARD / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_store_fwd_blk",
@@ -1754,7 +2117,7 @@
{
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 10 * (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) + (1 - MEM_INST_RETIRED.LOCK_LOADS / MEM_INST_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
@@ -1786,6 +2149,30 @@
"ScaleUnit": "100%"
},
{
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 1 GB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_1G / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_1g",
+ "MetricThreshold": "tma_store_stlb_miss_1g > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 2 or 4 MB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_2m",
+ "MetricThreshold": "tma_store_stlb_miss_2m > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric estimates the fraction of cycles to walk the memory paging structures to cache translation of 4 KB pages for data store accesses.",
+ "MetricExpr": "tma_store_stlb_miss * DTLB_STORE_MISSES.WALK_COMPLETED_4K / (DTLB_STORE_MISSES.WALK_COMPLETED_4K + DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M + DTLB_STORE_MISSES.WALK_COMPLETED_1G)",
+ "MetricGroup": "MemoryTLB;TopdownL6;tma_L6_group;tma_store_stlb_miss_group",
+ "MetricName": "tma_store_stlb_miss_4k",
+ "MetricThreshold": "tma_store_stlb_miss_4k > 0.05 & (tma_store_stlb_miss > 0.05 & (tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))))",
+ "ScaleUnit": "100%"
+ },
+ {
"BriefDescription": "This metric estimates how often CPU was stalled due to Streaming store memory accesses; Streaming store optimize out a read request required by RFO stores",
"MetricExpr": "9 * OCR.STREAMING_WR.ANY_RESPONSE / tma_info_thread_clks",
"MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueSmSt;tma_store_bound_group",
@@ -1797,10 +2184,10 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears",
"MetricExpr": "10 * BACLEARS.ANY / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;TopdownL4;tma_L4_group;tma_branch_resteers_group",
"MetricName": "tma_unknown_branches",
"MetricThreshold": "tma_unknown_branches > 0.05 & (tma_branch_resteers > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15))",
- "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit). Sample with: BACLEARS.ANY",
+ "PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to new branch address clears. These are fetched branches the Branch Prediction Unit was unable to recognize (e.g. first time the branch is fetched or hitting BTB capacity limit) hence called Unknown Branches. Sample with: BACLEARS.ANY",
"ScaleUnit": "100%"
},
{
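Aside (not part of the patch): the tma_store_stlb_miss_{4k,2m,1g} metrics added in the hunk above simply split the parent tma_store_stlb_miss cost in proportion to the completed page walks of each page size (DTLB_STORE_MISSES.WALK_COMPLETED_4K / _2M_4M / _1G). A minimal sketch of that arithmetic follows; the function name and counter values are hypothetical, only the formula comes from the MetricExpr entries above.

# Sketch only: reproduces the page-size breakdown used by the
# tma_store_stlb_miss_{4k,2m,1g} MetricExpr entries above.
# All input values below are hypothetical.

def store_stlb_breakdown(parent_cost, walks_4k, walks_2m_4m, walks_1g):
    """Split the parent STLB-miss cost by completed page-walk counts."""
    total_walks = walks_4k + walks_2m_4m + walks_1g
    if total_walks == 0:
        return {"4k": 0.0, "2m": 0.0, "1g": 0.0}
    return {
        "4k": parent_cost * walks_4k / total_walks,
        "2m": parent_cost * walks_2m_4m / total_walks,
        "1g": parent_cost * walks_1g / total_walks,
    }

if __name__ == "__main__":
    # Hypothetical inputs: tma_store_stlb_miss = 8% of cycles, plus raw
    # DTLB_STORE_MISSES.WALK_COMPLETED_{4K,2M_4M,1G} counts.
    print(store_stlb_breakdown(0.08, 9000, 800, 200))
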
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/memory.json b/tools/perf/pmu-events/arch/x86/icelakex/memory.json
index f36ac04f8d76..ca7f68f67463 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/memory.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Execution stalls while L3 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L3_MISS",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of machine clears due to memory ordering conflicts.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"PublicDescription": "Counts the number of Machine Clears detected dye to memory ordering. Memory Ordering Machine Clears may apply when a memory read may not conform to the memory ordering rules of the x86 architecture",
@@ -17,102 +19,113 @@
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
"MSRValue": "0x80",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "1009",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
"MSRValue": "0x10",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "20011",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
"MSRValue": "0x100",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "503",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
"MSRValue": "0x20",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100007",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
"MSRValue": "0x4",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
"MSRValue": "0x200",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "101",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
"MSRValue": "0x40",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "2003",
"UMask": "0x1"
},
{
"BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xcd",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
"MSRValue": "0x8",
- "PEBS": "2",
"PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
"SampleAfterValue": "50021",
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -122,6 +135,7 @@
},
{
"BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_CODE_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -130,7 +144,38 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_CODE_RD.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708000004",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -140,6 +185,7 @@
},
{
"BriefDescription": "Counts demand data reads that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_DATA_RD.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -148,7 +194,48 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x730000001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_DATA_RD.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708000001",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -158,6 +245,7 @@
},
{
"BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by the local socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.DEMAND_RFO.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -166,7 +254,38 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.DEMAND_RFO.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708000002",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L1D_AND_SWPF.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000400",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -176,6 +295,7 @@
},
{
"BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L1D_AND_SWPF.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -184,7 +304,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.HWPF_L1D_AND_SWPF.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000400",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts hardware prefetches to the L3 only that missed the local socket's L1, L2, and L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -194,6 +325,7 @@
},
{
"BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.HWPF_L3.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -203,6 +335,7 @@
},
{
"BriefDescription": "Counts full cacheline writes (ItoM) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.ITOM.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -212,6 +345,7 @@
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -221,6 +355,7 @@
},
{
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -230,6 +365,7 @@
},
{
"BriefDescription": "Counts hardware and software prefetches to all cache levels that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.PREFETCHES.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -238,7 +374,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x73C000477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -248,6 +395,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by the local socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -257,6 +405,7 @@
},
{
"BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that missed the L3 Cache and were supplied by the local socket (DRAM or PMM), whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM or DRAM accesses that are controlled by the close or distant SNC Cluster.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.READS_TO_CORE.L3_MISS_LOCAL_SOCKET",
"MSRIndex": "0x1a6,0x1a7",
@@ -265,7 +414,58 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x104000477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x70C000477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x730000477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.REMOTE_MEMORY",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x731800477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
+ "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.READS_TO_CORE.SNC_DRAM",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0x708000477",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts streaming stores that missed the local socket's L1, L2, and L3 caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -275,6 +475,7 @@
},
{
"BriefDescription": "Counts streaming stores that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline is homed locally.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.L3_MISS_LOCAL",
"MSRIndex": "0x1a6,0x1a7",
@@ -283,7 +484,18 @@
"UMask": "0x1"
},
{
+ "BriefDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM)",
+ "Counter": "0,1,2,3",
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OCR.WRITE_ESTIMATE.MEMORY",
+ "MSRIndex": "0x1a6,0x1a7",
+ "MSRValue": "0xFBFF80822",
+ "SampleAfterValue": "100003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Counts demand data read requests that miss the L3 cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "OFFCORE_REQUESTS.L3_MISS_DEMAND_DATA_RD",
"SampleAfterValue": "100003",
@@ -291,6 +503,7 @@
},
{
"BriefDescription": "Cycles where at least one demand data read request known to have missed the L3 cache is pending.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_L3_MISS_DEMAND_DATA_RD",
@@ -300,6 +513,7 @@
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD",
@@ -308,6 +522,7 @@
},
{
"BriefDescription": "Cycles where the core is waiting on at least 6 outstanding demand data read requests known to have missed the L3 cache.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.L3_MISS_DEMAND_DATA_RD_GE_6",
@@ -317,6 +532,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED",
"PublicDescription": "Counts the number of times RTM abort was triggered.",
@@ -324,15 +540,17 @@
"UMask": "0x4"
},
{
- "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt)",
+ "BriefDescription": "Number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_EVENTS",
- "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 4 categories (e.g. interrupt).",
+ "PublicDescription": "Counts the number of times an RTM execution aborted due to none of the previous 3 categories (e.g. interrupt).",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
{
"BriefDescription": "Number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEM",
"PublicDescription": "Counts the number of times an RTM execution aborted due to various memory events (e.g. read/write capacity and conflicts).",
@@ -341,6 +559,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to incompatible memory type",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_MEMTYPE",
"PublicDescription": "Counts the number of times an RTM execution aborted due to incompatible memory type.",
@@ -349,6 +568,7 @@
},
{
"BriefDescription": "Number of times an RTM execution aborted due to HLE-unfriendly instructions",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.ABORTED_UNFRIENDLY",
"PublicDescription": "Counts the number of times an RTM execution aborted due to HLE-unfriendly instructions.",
@@ -357,6 +577,7 @@
},
{
"BriefDescription": "Number of times an RTM execution successfully committed",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.COMMIT",
"PublicDescription": "Counts the number of times RTM commit succeeded.",
@@ -365,6 +586,7 @@
},
{
"BriefDescription": "Number of times an RTM execution started.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc9",
"EventName": "RTM_RETIRED.START",
"PublicDescription": "Counts the number of times we entered an RTM region. Does not count nested transactions.",
@@ -373,6 +595,7 @@
},
{
"BriefDescription": "Counts the number of times a class of instructions that may cause a transactional abort was executed inside a transactional region",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC2",
"PublicDescription": "Counts Unfriendly TSX abort triggered by a vzeroupper instruction.",
@@ -381,6 +604,7 @@
},
{
"BriefDescription": "Number of times an instruction execution caused the transactional nest count supported to be exceeded",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5d",
"EventName": "TX_EXEC.MISC3",
"PublicDescription": "Counts Unfriendly TSX abort triggered by a nest count that is too deep.",
@@ -389,6 +613,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_READ",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional reads",
@@ -397,6 +622,7 @@
},
{
"BriefDescription": "Speculatively counts the number of TSX aborts due to a data capacity limitation for transactional writes.",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CAPACITY_WRITE",
"PublicDescription": "Speculatively counts the number of Transactional Synchronization Extensions (TSX) aborts due to a data capacity limitation for transactional writes.",
@@ -405,6 +631,7 @@
},
{
"BriefDescription": "Number of times a transactional abort was signaled due to a data conflict on a transactionally accessed address",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "TX_MEM.ABORT_CONFLICT",
"PublicDescription": "Counts the number of times a TSX line had a cache conflict.",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/metricgroups.json b/tools/perf/pmu-events/arch/x86/icelakex/metricgroups.json
index bc6a9a4d27a9..4a28187c2896 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/metricgroups.json
@@ -2,9 +2,22 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CodeGen": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -25,8 +38,11 @@
"IoBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -64,10 +80,14 @@
"tma_L5_group": "Metrics for top-down breakdown at level 5",
"tma_L6_group": "Metrics for top-down breakdown at level 6",
"tma_alu_op_utilization_group": "Metrics contributing to tma_alu_op_utilization category",
+ "tma_assists_group": "Metrics contributing to tma_assists category",
"tma_backend_bound_group": "Metrics contributing to tma_backend_bound category",
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
+ "tma_branch_mispredicts_group": "Metrics contributing to tma_branch_mispredicts category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
+ "tma_code_stlb_miss_group": "Metrics contributing to tma_code_stlb_miss category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -77,10 +97,11 @@
"tma_fp_vector_group": "Metrics contributing to tma_fp_vector category",
"tma_frontend_bound_group": "Metrics contributing to tma_frontend_bound category",
"tma_heavy_operations_group": "Metrics contributing to tma_heavy_operations category",
+ "tma_icache_misses_group": "Metrics contributing to tma_icache_misses category",
"tma_issue2P": "Metrics related by the issue $issue2P",
- "tma_issueBC": "Metrics related by the issue $issueBC",
"tma_issueBM": "Metrics related by the issue $issueBM",
"tma_issueBW": "Metrics related by the issue $issueBW",
+ "tma_issueComp": "Metrics related by the issue $issueComp",
"tma_issueD0": "Metrics related by the issue $issueD0",
"tma_issueFB": "Metrics related by the issue $issueFB",
"tma_issueFL": "Metrics related by the issue $issueFL",
@@ -96,19 +117,25 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
+ "tma_l2_bound_group": "Metrics contributing to tma_l2_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_load_stlb_miss_group": "Metrics contributing to tma_load_stlb_miss category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
"tma_mite_group": "Metrics contributing to tma_mite category",
+ "tma_other_light_ops_group": "Metrics contributing to tma_other_light_ops category",
"tma_ports_utilization_group": "Metrics contributing to tma_ports_utilization category",
"tma_ports_utilized_0_group": "Metrics contributing to tma_ports_utilized_0 category",
"tma_ports_utilized_3m_group": "Metrics contributing to tma_ports_utilized_3m category",
"tma_retiring_group": "Metrics contributing to tma_retiring category",
"tma_serializing_operation_group": "Metrics contributing to tma_serializing_operation category",
"tma_store_bound_group": "Metrics contributing to tma_store_bound category",
- "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category"
+ "tma_store_op_utilization_group": "Metrics contributing to tma_store_op_utilization category",
+ "tma_store_stlb_miss_group": "Metrics contributing to tma_store_stlb_miss category"
}
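Aside (not part of the patch): metricgroups.json above is a flat map from group name to a human-readable description, while each metric definition carries a semicolon-separated "MetricGroup" field (see the icelakex metrics earlier in this patch). A minimal sketch, independent of the perf tooling itself, of selecting metrics by group from JSON shaped that way; the two inline entries are trimmed, illustrative examples.

# Sketch only: filter metric definitions by group name, following the
# semicolon-separated "MetricGroup" convention used above.
import json

metrics = json.loads("""
[
  {"MetricName": "tma_retiring", "MetricGroup": "BvUW;Default;TmaL1;TopdownL1;tma_L1_group"},
  {"MetricName": "tma_slow_pause", "MetricGroup": "TopdownL4;tma_L4_group;tma_serializing_operation_group"}
]
""")

def metrics_in_group(metric_list, group):
    """Return metric names whose MetricGroup list contains `group`."""
    return [m["MetricName"] for m in metric_list
            if group in m.get("MetricGroup", "").split(";")]

print(metrics_in_group(metrics, "TopdownL1"))  # ['tma_retiring']
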
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/other.json b/tools/perf/pmu-events/arch/x86/icelakex/other.json
index 11810daaf150..141cd30a30af 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/other.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the Non-AVX turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL0_TURBO_LICENSE",
"PublicDescription": "Counts Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX2 turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL1_TURBO_LICENSE",
"PublicDescription": "Counts Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Core cycles where the core was running in a manner where Turbo may be clipped to the AVX512 turbo schedule.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "CORE_POWER.LVL2_TURBO_LICENSE",
"PublicDescription": "Core cycles where the core was running with power-delivery for license level 2 (introduced in Skylake Server microarchitecture). This includes high current AVX 512-bit instructions.",
@@ -24,306 +27,8 @@
"UMask": "0x20"
},
{
- "BriefDescription": "Hit snoop reply with data, line invalidated.",
- "EventCode": "0xef",
- "EventName": "CORE_SNOOP_RESPONSE.I_FWD_FE",
- "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's cache, after the data is forwarded back to the requestor and indicating the data was found unmodified in the (FE) Forward or Exclusive State in this cores caches cache. A single snoop response from the core counts on all hyperthreads of the core.",
- "SampleAfterValue": "1000003",
- "UMask": "0x20"
- },
- {
- "BriefDescription": "HitM snoop reply with data, line invalidated.",
- "EventCode": "0xef",
- "EventName": "CORE_SNOOP_RESPONSE.I_FWD_M",
- "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated: removed from this core's caches, after the data is forwarded back to the requestor, and indicating the data was found modified(M) in this cores caches cache (aka HitM response). A single snoop response from the core counts on all hyperthreads of the core.",
- "SampleAfterValue": "1000003",
- "UMask": "0x10"
- },
- {
- "BriefDescription": "Hit snoop reply without sending the data, line invalidated.",
- "EventCode": "0xef",
- "EventName": "CORE_SNOOP_RESPONSE.I_HIT_FSE",
- "PublicDescription": "Counts responses to snoops indicating the line will now be (I)nvalidated in this core's caches without being forwarded back to the requestor. The line was in Forward, Shared or Exclusive (FSE) state in this cores caches. A single snoop response from the core counts on all hyperthreads of the core.",
- "SampleAfterValue": "1000003",
- "UMask": "0x2"
- },
- {
- "BriefDescription": "Line not found snoop reply",
- "EventCode": "0xef",
- "EventName": "CORE_SNOOP_RESPONSE.MISS",
- "PublicDescription": "Counts responses to snoops indicating that the data was not found (IHitI) in this core's caches. A single snoop response from the core counts on all hyperthreads of the Core.",
- "SampleAfterValue": "1000003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Hit snoop reply with data, line kept in Shared state.",
- "EventCode": "0xef",
- "EventName": "CORE_SNOOP_RESPONSE.S_FWD_FE",
- "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (FS) Forward or Shared state. A single snoop response from the core counts on all hyperthreads of the core.",
- "SampleAfterValue": "1000003",
- "UMask": "0x40"
- },
- {
- "BriefDescription": "HitM snoop reply with data, line kept in Shared state",
- "EventCode": "0xef",
- "EventName": "CORE_SNOOP_RESPONSE.S_FWD_M",
- "PublicDescription": "Counts responses to snoops indicating the line may be kept on this core in the (S)hared state, after the data is forwarded back to the requestor, initially the data was found in the cache in the (M)odified state. A single snoop response from the core counts on all hyperthreads of the core.",
- "SampleAfterValue": "1000003",
- "UMask": "0x8"
- },
- {
- "BriefDescription": "Hit snoop reply without sending the data, line kept in Shared state.",
- "EventCode": "0xef",
- "EventName": "CORE_SNOOP_RESPONSE.S_HIT_FSE",
- "PublicDescription": "Counts responses to snoops indicating the line was kept on this core in the (S)hared state, and that the data was found unmodified but not forwarded back to the requestor, initially the data was found in the cache in the (FSE) Forward, Shared state or Exclusive state. A single snoop response from the core counts on all hyperthreads of the core.",
- "SampleAfterValue": "1000003",
- "UMask": "0x4"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand instruction fetches and L1 instruction cache prefetches that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_CODE_RD.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708000004",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.LOCAL_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100400001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by PMM.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x703C00001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM attached to another socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.REMOTE_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x730000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by PMM attached to another socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.REMOTE_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x703000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708000001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand data reads that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_DATA_RD.SNC_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x700800001",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F3FFC0002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.LOCAL_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100400002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x703C00002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM attached to another socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.REMOTE_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x703000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts demand reads for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.DEMAND_RFO.SNC_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x700800002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were supplied by DRAM.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L1D_AND_SWPF.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000400",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts L1 data cache prefetch requests and software prefetches (except PREFETCHW) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L1D_AND_SWPF.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000400",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetch (which bring data to L2) that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L2.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x10070",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetches to the L3 only that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L3.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x12380",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts hardware prefetches to the L3 only that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.HWPF_L3.REMOTE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x90002380",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts full cacheline writes (ItoM) that were not supplied by the local socket's L1, L2, or L3 caches and the cacheline was homed in a remote socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.ITOM.REMOTE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x90000002",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
"BriefDescription": "Counts miscellaneous requests, such as I/O and un-cacheable accesses that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.OTHER.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -332,129 +37,13 @@
"UMask": "0x1"
},
{
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that have any type of response.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.ANY_RESPONSE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F3FFC0477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x73C000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those DRAM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.LOCAL_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x104000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to this socket, unless in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts only those PMM accesses that are controlled by the close SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.LOCAL_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x100400477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts DRAM accesses that are controlled by the close or distant SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x70C000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to this socket, whether or not in Sub NUMA Cluster(SNC) Mode. In SNC Mode counts PMM accesses that are controlled by the close or distant SNC Cluster.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.LOCAL_SOCKET_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x700C00477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were not supplied by the local socket's L1, L2, or L3 caches and were supplied by a remote socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.REMOTE",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x3F33000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM attached to another socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.REMOTE_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x730000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM or PMM attached to another socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.REMOTE_MEMORY",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x731800477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM attached to another socket.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.REMOTE_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x703000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by DRAM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.SNC_DRAM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x708000477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
- "BriefDescription": "Counts all (cacheable) data read, code read and RFO requests including demands and prefetches to the core caches (L1 or L2) that were supplied by PMM on a distant memory controller of this socket when the system is in SNC (sub-NUMA cluster) mode.",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.READS_TO_CORE.SNC_PMM",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0x700800477",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
- },
- {
"BriefDescription": "Counts streaming stores that have any type of response.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OCR.STREAMING_WR.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
"MSRValue": "0x10800",
"SampleAfterValue": "100003",
"UMask": "0x1"
- },
- {
- "BriefDescription": "Counts Demand RFOs, ItoM's, PREFECTHW's, Hardware RFO Prefetches to the L1/L2 and Streaming stores that likely resulted in a store to Memory (DRAM or PMM)",
- "EventCode": "0xB7, 0xBB",
- "EventName": "OCR.WRITE_ESTIMATE.MEMORY",
- "MSRIndex": "0x1a6,0x1a7",
- "MSRValue": "0xFBFF80822",
- "SampleAfterValue": "100003",
- "UMask": "0x1"
}
]
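
The hunks above show the shape of the change applied to every event in this file: a "Counter" field listing the programmable counters the event may be scheduled on (here "0,1,2,3" for the OCR.* off-core response events) is added next to the existing EventCode/UMask/MSRValue fields. As a rough illustration of how that field can be read back, the sketch below parses one of these per-architecture JSON files and groups events by their counter constraint. The file path is an assumption (any of the icelakex event files uses the same flat-array layout), and the script is illustrative only, not part of perf's jevents tooling.

import json
from collections import defaultdict

# Assumed path: any of the icelakex event files touched by this patch
# (other.json, pipeline.json, ...) has the same flat JSON array layout.
EVENT_FILE = "tools/perf/pmu-events/arch/x86/icelakex/other.json"

def load_events(path):
    # Each pmu-events file is a flat JSON array of event objects.
    with open(path) as f:
        return json.load(f)

def group_by_counter(events):
    # "Counter" is either a list like "0,1,2,3" or "Fixed counter N".
    groups = defaultdict(list)
    for ev in events:
        groups[ev.get("Counter", "unspecified")].append(ev["EventName"])
    return groups

if __name__ == "__main__":
    for constraint, names in group_by_counter(load_events(EVENT_FILE)).items():
        print(f"{constraint}: {len(names)} events, e.g. {names[0]}")
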
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/pipeline.json b/tools/perf/pmu-events/arch/x86/icelakex/pipeline.json
index 45ee6bceba7f..f3a0d7f49af4 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles when divide unit is busy executing divide or square root operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x14",
"EventName": "ARITH.DIVIDER_ACTIVE",
@@ -9,7 +10,17 @@
"UMask": "0x9"
},
{
+ "BriefDescription": "ARITH.FP_DIVIDER_ACTIVE",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "CounterMask": "1",
+ "EventCode": "0x14",
+ "EventName": "ARITH.FP_DIVIDER_ACTIVE",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x1"
+ },
+ {
"BriefDescription": "Number of occurrences where a microcode assist is invoked by hardware.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc1",
"EventName": "ASSISTS.ANY",
"PublicDescription": "Counts the number of occurrences where a microcode assist is invoked by hardware Examples include AD (page Access Dirty), FP and AVX related assists.",
@@ -18,157 +29,158 @@
},
{
"BriefDescription": "All branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts all branch instructions retired.",
"SampleAfterValue": "400009"
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND",
- "PEBS": "1",
"PublicDescription": "Counts conditional branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x11"
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_NTAKEN",
- "PEBS": "1",
"PublicDescription": "Counts not taken branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x10"
},
{
"BriefDescription": "Taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.COND_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts taken conditional branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x1"
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
- "PEBS": "1",
"PublicDescription": "Counts far branch instructions retired.",
"SampleAfterValue": "100007",
"UMask": "0x40"
},
{
"BriefDescription": "Indirect near branch instructions retired (excluding returns)",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.INDIRECT",
- "PEBS": "1",
"PublicDescription": "Counts near indirect branch instructions retired excluding returns. TSX abort is an indirect branch.",
"SampleAfterValue": "100003",
"UMask": "0x80"
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
- "PEBS": "1",
"PublicDescription": "Counts both direct and indirect near call instructions retired.",
"SampleAfterValue": "100007",
"UMask": "0x2"
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
- "PEBS": "1",
"PublicDescription": "Counts return instructions retired.",
"SampleAfterValue": "100007",
"UMask": "0x8"
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts taken branch instructions retired.",
"SampleAfterValue": "400009",
"UMask": "0x20"
},
{
"BriefDescription": "All mispredicted branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
- "PEBS": "1",
"PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
"SampleAfterValue": "50021"
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND",
- "PEBS": "1",
"PublicDescription": "Counts mispredicted conditional branch instructions retired.",
"SampleAfterValue": "50021",
"UMask": "0x11"
},
{
"BriefDescription": "Mispredicted non-taken conditional branch instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_NTAKEN",
- "PEBS": "1",
"PublicDescription": "Counts the number of conditional branch instructions retired that were mispredicted and the branch direction was not taken.",
"SampleAfterValue": "50021",
"UMask": "0x10"
},
{
"BriefDescription": "number of branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.COND_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts taken conditional mispredicted branch instructions retired.",
"SampleAfterValue": "50021",
"UMask": "0x1"
},
{
"BriefDescription": "All miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT",
- "PEBS": "1",
"PublicDescription": "Counts all miss-predicted indirect branch instructions retired (excluding RETs. TSX aborts is considered indirect branch).",
"SampleAfterValue": "50021",
"UMask": "0x80"
},
{
"BriefDescription": "Mispredicted indirect CALL instructions retired.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.INDIRECT_CALL",
- "PEBS": "1",
"PublicDescription": "Counts retired mispredicted indirect (near taken) calls, including both register and memory indirect.",
"SampleAfterValue": "50021",
"UMask": "0x2"
},
{
"BriefDescription": "Number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
- "PEBS": "1",
"PublicDescription": "Counts number of near branch instructions retired that were mispredicted and taken.",
"SampleAfterValue": "50021",
"UMask": "0x20"
},
{
"BriefDescription": "This event counts the number of mispredicted ret instructions retired. Non PEBS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc5",
"EventName": "BR_MISP_RETIRED.RET",
- "PEBS": "1",
"PublicDescription": "This is a non-precise version (that is, does not use PEBS) of the event that counts mispredicted return instructions retired.",
"SampleAfterValue": "50021",
"UMask": "0x8"
},
{
"BriefDescription": "Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xec",
"EventName": "CPU_CLK_UNHALTED.DISTRIBUTED",
"PublicDescription": "This event distributes cycle counts between active hyperthreads, i.e., those in C0. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If all other hyperthreads are inactive (or disabled or do not exist), all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -177,6 +189,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"PublicDescription": "Counts Core crystal clock cycles when current thread is unhalted and the other thread is halted.",
@@ -185,6 +198,7 @@
},
{
"BriefDescription": "Core crystal clock cycles. Cycle counts are evenly distributed between active threads in the Core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3c",
"EventName": "CPU_CLK_UNHALTED.REF_DISTRIBUTED",
"PublicDescription": "This event distributes Core crystal clock cycle counts between active hyperthreads, i.e., those in C0 sleep-state. A hyperthread becomes inactive when it executes the HLT or MWAIT instructions. If one thread is active in a core, all counts are attributed to this hyperthread. To obtain the full count when the Core is active, sum the counts from each hyperthread.",
@@ -193,6 +207,7 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. This event has a constant ratio with the CPU_CLK_UNHALTED.REF_XCLK event. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
"SampleAfterValue": "2000003",
@@ -200,6 +215,7 @@
},
{
"BriefDescription": "Core crystal clock cycles when the thread is unhalted.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Counts core crystal clock cycles when the thread is unhalted.",
@@ -208,6 +224,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
"SampleAfterValue": "2000003",
@@ -215,6 +232,7 @@
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
@@ -222,6 +240,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -230,6 +249,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -238,6 +258,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "16",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -246,6 +267,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -254,6 +276,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss demand load is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -262,6 +285,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "20",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -270,6 +294,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xa3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -278,6 +303,7 @@
},
{
"BriefDescription": "Cycles total of 1 uop is executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.1_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 1 uop was executed on all ports and Reservation Station (RS) was not empty.",
@@ -286,6 +312,7 @@
},
{
"BriefDescription": "Cycles total of 2 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.2_PORTS_UTIL",
"PublicDescription": "Counts cycles during which a total of 2 uops were executed on all ports and Reservation Station (RS) was not empty.",
@@ -294,6 +321,7 @@
},
{
"BriefDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.3_PORTS_UTIL",
"PublicDescription": "Cycles total of 3 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -302,6 +330,7 @@
},
{
"BriefDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station was not empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa6",
"EventName": "EXE_ACTIVITY.4_PORTS_UTIL",
"PublicDescription": "Cycles total of 4 uops are executed on all ports and Reservation Station (RS) was not empty.",
@@ -310,6 +339,7 @@
},
{
"BriefDescription": "Cycles where the Store Buffer was full and no loads caused an execution stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xA6",
"EventName": "EXE_ACTIVITY.BOUND_ON_STORES",
@@ -319,6 +349,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction. [This event is alias to DECODE.LCP]",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"PublicDescription": "Counts cycles that the Instruction Length decoder (ILD) stalls occurred due to dynamically changing prefix length of the decoded instruction (by operand size prefix instruction 0x66, address size prefix instruction 0x67 or REX.W for Intel64). Count is proportional to the number of prefixes in a 16B-line. This may result in a three-cycle penalty for each LCP (Length changing prefix) in a 16-byte chunk. [This event is alias to DECODE.LCP]",
@@ -327,6 +358,7 @@
},
{
"BriefDescription": "Instruction decoders utilized in a cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "INST_DECODED.DECODERS",
"PublicDescription": "Number of decoders utilized in a cycle when the MITE (legacy decode pipeline) fetches instructions.",
@@ -335,38 +367,39 @@
},
{
"BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
- "PEBS": "1",
"PublicDescription": "Counts the number of instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.ANY_P",
- "PEBS": "1",
"PublicDescription": "Counts the number of instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
"SampleAfterValue": "2000003"
},
{
"BriefDescription": "Number of all retired NOP instructions.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc0",
"EventName": "INST_RETIRED.NOP",
- "PEBS": "1",
"SampleAfterValue": "2000003",
"UMask": "0x2"
},
{
"BriefDescription": "Precise instruction retired event with a reduced effect of PEBS shadow in IP distribution",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.PREC_DIST",
- "PEBS": "1",
"PublicDescription": "A version of INST_RETIRED that allows for a more unbiased distribution of samples across instructions retired. It utilizes the Precise Distribution of Instructions Retired (PDIR) feature to mitigate some bias in how retired instructions get sampled. Use on Fixed Counter 0.",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles the Backend cluster is recovering after a miss-speculation or a Store Buffer or Load Buffer drain stall.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.ALL_RECOVERY_CYCLES",
@@ -376,6 +409,7 @@
},
{
"BriefDescription": "Clears speculative count",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x0D",
@@ -386,6 +420,7 @@
},
{
"BriefDescription": "Counts cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0d",
"EventName": "INT_MISC.CLEAR_RESTEER_CYCLES",
"PublicDescription": "Cycles after recovery from a branch misprediction or machine clear till the first uop is issued from the resteered path.",
@@ -394,6 +429,7 @@
},
{
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for this thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
"PublicDescription": "Counts core cycles when the Resource allocator was stalled due to recovery from an earlier branch misprediction or machine clear event.",
@@ -402,6 +438,7 @@
},
{
"BriefDescription": "TMA slots where uops got dropped",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0d",
"EventName": "INT_MISC.UOP_DROPPING",
"PublicDescription": "Estimated number of Top-down Microarchitecture Analysis slots that got dropped due to non front-end reasons",
@@ -410,6 +447,7 @@
},
{
"BriefDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "Counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -418,6 +456,7 @@
},
{
"BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
@@ -426,6 +465,7 @@
},
{
"BriefDescription": "False dependencies due to partial compare on address.",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "Counts the number of times a load got blocked due to false dependencies due to partial compare on address.",
@@ -434,14 +474,16 @@
},
{
"BriefDescription": "Counts the number of demand load dispatches that hit L1D fill buffer (FB) allocated for software prefetch.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "LOAD_HIT_PREFETCH.SWPF",
- "PublicDescription": "Counts all not software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
+ "PublicDescription": "Counts all software-prefetch load dispatches that hit the fill buffer (FB) allocated for the software prefetch. It can also be incremented by some lock instructions. So it should only be used with profiling so that the locks can be excluded by ASM (Assembly File) inspection of the nearby instructions.",
"SampleAfterValue": "100003",
"UMask": "0x1"
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -451,6 +493,7 @@
},
{
"BriefDescription": "Cycles optimal number of Uops delivered by the LSD, but did not come from the decoder.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xa8",
"EventName": "LSD.CYCLES_OK",
@@ -460,6 +503,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xa8",
"EventName": "LSD.UOPS",
"PublicDescription": "Counts the number of uops delivered to the back-end by the LSD(Loop Stream Detector).",
@@ -468,6 +512,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xc3",
@@ -478,6 +523,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Counts self-modifying code (SMC) detected, which causes a machine clear.",
@@ -486,6 +532,7 @@
},
{
"BriefDescription": "Increments whenever there is an update to the LBR array.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc",
"EventName": "MISC_RETIRED.LBR_INSERTS",
"PublicDescription": "Increments when an entry is added to the Last Branch Record (LBR) array (or removed from the array in case of RETURNs in call stack mode). The event requires LBR to be enabled properly.",
@@ -494,6 +541,7 @@
},
{
"BriefDescription": "Number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xcc",
"EventName": "MISC_RETIRED.PAUSE_INST",
"PublicDescription": "Counts number of retired PAUSE instructions. This event is not supported on first SKL and KBL products.",
@@ -502,6 +550,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "Counts allocation stall cycles caused by the store buffer (SB) being full. This counts cycles that the pipeline back-end blocked uop delivery from the front-end.",
@@ -510,6 +559,7 @@
},
{
"BriefDescription": "Counts cycles where the pipeline is stalled due to serializing operations.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa2",
"EventName": "RESOURCE_STALLS.SCOREBOARD",
"SampleAfterValue": "100003",
@@ -517,6 +567,7 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x5e",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
"PublicDescription": "Counts cycles during which the reservation station (RS) is empty for this logical processor. This is usually caused when the front-end pipeline runs into starvation periods (e.g. branch mispredictions or i-cache misses)",
@@ -525,6 +576,7 @@
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -536,6 +588,7 @@
},
{
"BriefDescription": "TMA slots where no uops were being issued due to lack of back-end resources.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
"PublicDescription": "Counts the number of Top-down Microarchitecture Analysis (TMA) method's slots where no micro-operations were being issued from front-end to back-end of the machine due to lack of back-end resources.",
@@ -544,6 +597,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
+ "Counter": "Fixed counter 3",
"EventName": "TOPDOWN.SLOTS",
"PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core. Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
"SampleAfterValue": "10000003",
@@ -551,6 +605,7 @@
},
{
"BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa4",
"EventName": "TOPDOWN.SLOTS_P",
"PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method. The count is distributed among unhalted logical processors (hyper-threads) who share the same physical core.",
@@ -559,6 +614,7 @@
},
{
"BriefDescription": "Number of uops decoded out of instructions exclusively fetched by decoder 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UOPS_DECODED.DEC0",
"PublicDescription": "Uops exclusively fetched by decoder 0",
@@ -567,6 +623,7 @@
},
{
"BriefDescription": "Number of uops executed on port 0",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_0",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 0.",
@@ -575,6 +632,7 @@
},
{
"BriefDescription": "Number of uops executed on port 1",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_1",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 1.",
@@ -583,6 +641,7 @@
},
{
"BriefDescription": "Number of uops executed on port 2 and 3",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_2_3",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 2 and 3.",
@@ -591,6 +650,7 @@
},
{
"BriefDescription": "Number of uops executed on port 4 and 9",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_4_9",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 5 and 9.",
@@ -599,6 +659,7 @@
},
{
"BriefDescription": "Number of uops executed on port 5",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_5",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 5.",
@@ -607,6 +668,7 @@
},
{
"BriefDescription": "Number of uops executed on port 6",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_6",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to port 6.",
@@ -615,6 +677,7 @@
},
{
"BriefDescription": "Number of uops executed on port 7 and 8",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xa1",
"EventName": "UOPS_DISPATCHED.PORT_7_8",
"PublicDescription": "Counts, on the per-thread basis, cycles during which at least one uop is dispatched from the Reservation Station (RS) to ports 7 and 8.",
@@ -623,6 +686,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -632,6 +696,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -641,6 +706,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -650,6 +716,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -659,6 +726,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1",
@@ -668,6 +736,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "2",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2",
@@ -677,6 +746,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "3",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3",
@@ -686,6 +756,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "4",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4",
@@ -695,6 +766,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -705,6 +777,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xb1",
"EventName": "UOPS_EXECUTED.THREAD",
"SampleAfterValue": "2000003",
@@ -712,6 +785,7 @@
},
{
"BriefDescription": "Counts the number of x87 uops dispatched.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.X87",
"PublicDescription": "Counts the number of x87 uops executed.",
@@ -720,6 +794,7 @@
},
{
"BriefDescription": "Uops that RAT issues to RS",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0e",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Counts the number of uops that the Resource Allocation Table (RAT) issues to the Reservation Station (RS).",
@@ -728,6 +803,7 @@
},
{
"BriefDescription": "Cycles when RAT does not issue Uops to RS for the thread",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -738,6 +814,7 @@
},
{
"BriefDescription": "Uops inserted at issue-stage in order to preserve upper bits of vector registers.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0x0e",
"EventName": "UOPS_ISSUED.VECTOR_WIDTH_MISMATCH",
"PublicDescription": "Counts the number of Blend Uops issued by the Resource Allocation Table (RAT) to the reservation station (RS) in order to preserve upper bits of vector registers. Starting with the Skylake microarchitecture, these Blend uops are needed since every Intel SSE instruction executed in Dirty Upper State needs to preserve bits 128-255 of the destination register. For more information, refer to 'Mixing Intel AVX and Intel SSE Code' section of the Optimization Guide.",
@@ -746,6 +823,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3,4,5,6,7",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.SLOTS",
"PublicDescription": "Counts the retirement slots used each cycle.",
@@ -754,6 +832,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "1",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -764,6 +843,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3,4,5,6,7",
"CounterMask": "10",
"EventCode": "0xc2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/uncore-cache.json b/tools/perf/pmu-events/arch/x86/icelakex/uncore-cache.json
index b6ce14ebf844..6f84ad47276d 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/uncore-cache.json
@@ -1,80 +1,98 @@
[
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x65",
"EventName": "UNC_CHA_2LM_NM_INVITOX.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x65",
"EventName": "UNC_CHA_2LM_NM_INVITOX.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x65",
"EventName": "UNC_CHA_2LM_NM_INVITOX.SETCONFLICT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.LLC",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x64",
"EventName": "UNC_CHA_2LM_NM_SETCONFLICTS.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.SF",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x64",
"EventName": "UNC_CHA_2LM_NM_SETCONFLICTS.SF",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.TOR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x64",
"EventName": "UNC_CHA_2LM_NM_SETCONFLICTS.TOR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x70",
"EventName": "UNC_CHA_2LM_NM_SETCONFLICTS2.MEMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x70",
"EventName": "UNC_CHA_2LM_NM_SETCONFLICTS2.MEMWRNI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -82,8 +100,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -91,8 +111,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -100,8 +122,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -109,8 +133,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -118,8 +144,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -127,8 +155,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -136,8 +166,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -145,8 +177,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -154,8 +188,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -163,8 +199,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_CHA_AG0_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -172,8 +210,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -181,8 +221,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -190,8 +232,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -199,8 +243,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -208,8 +254,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -217,8 +265,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -226,8 +276,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -235,8 +287,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -244,8 +298,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -253,8 +309,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -262,8 +320,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_CHA_AG0_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -271,8 +331,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -280,8 +342,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -289,8 +353,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -298,8 +364,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -307,8 +375,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -316,8 +386,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -325,8 +397,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -334,8 +408,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -343,8 +419,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -352,8 +430,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -361,8 +441,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_CHA_AG0_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -370,8 +452,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -379,8 +463,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -388,8 +474,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -397,8 +485,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -406,8 +496,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -415,8 +507,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -424,8 +518,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -433,8 +529,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -442,8 +540,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -451,8 +551,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -460,8 +562,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_CHA_AG0_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -469,8 +573,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -478,8 +584,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -487,8 +595,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -496,8 +606,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -505,8 +617,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -514,8 +628,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -523,8 +639,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -532,8 +650,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -541,8 +661,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -550,8 +672,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -559,8 +683,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_CHA_AG1_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -568,8 +694,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -577,8 +705,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -586,8 +716,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -595,8 +727,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -604,8 +738,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -613,8 +749,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -622,8 +760,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -631,8 +771,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -640,8 +782,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -649,8 +793,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -658,8 +804,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_CHA_AG1_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -667,8 +815,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -676,8 +826,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -685,8 +837,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -694,8 +848,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -703,8 +859,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -712,8 +870,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -721,8 +881,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -730,8 +892,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -739,8 +903,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -748,8 +914,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -757,8 +925,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_CHA_AG1_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -766,8 +936,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -775,8 +947,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -784,8 +958,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -793,8 +969,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -802,8 +980,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -811,8 +991,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -820,8 +1002,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -829,8 +1013,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -838,8 +1024,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -847,8 +1035,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -856,8 +1046,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_CHA_AG1_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -865,8 +1057,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass : Intermediate bypass Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.INTERMEDIATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Bypass : Intermediate bypass Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that succeeded in taking the intermediate bypass.",
"UMask": "0x2",
@@ -874,8 +1068,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass : Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.NOT_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Bypass : Not Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that could not take the bypass, and issues a read to memory. Note that transactions that did not take the bypass but did not issue read to memory will not be counted.",
"UMask": "0x4",
@@ -883,8 +1079,10 @@
},
{
"BriefDescription": "CHA to iMC Bypass : Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_CHA_BYPASS_CHA_IMC.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Bypass : Taken : Counts the number of times when the CHA was able to bypass HA pipe on the way to iMC. This is a latency optimization for situations when there is light loadings on the memory subsystem. This can be filtered by when the bypass was taken and when it was not. : Filter for transactions that succeeded in taking the full bypass.",
"UMask": "0x1",
@@ -892,12 +1090,14 @@
},
{
"BriefDescription": "Clockticks of the uncore caching and home agent (CHA)",
+ "Counter": "0,1,2,3",
"EventName": "UNC_CHA_CLOCKTICKS",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_CHA_CMS_CLOCKTICKS",
"PerPkg": "1",
@@ -905,8 +1105,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Any Cycle with Multiple Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.ANY_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Any Cycle with Multiple Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0xf2",
@@ -914,8 +1116,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Any Single Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.ANY_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Any Single Snoop : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0xf1",
@@ -923,8 +1127,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple Core Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.CORE_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple Core Requests : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x42",
@@ -932,8 +1138,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single Core Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.CORE_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single Core Requests : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x41",
@@ -941,8 +1149,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EVICT_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple Eviction : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x82",
@@ -950,8 +1160,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EVICT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single Eviction : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x81",
@@ -959,8 +1171,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple External Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EXT_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple External Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x22",
@@ -968,8 +1182,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single External Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.EXT_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single External Snoops : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x21",
@@ -977,8 +1193,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Multiple Snoop Targets from Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.REMOTE_GTONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Multiple Snoop Targets from Remote : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x12",
@@ -986,8 +1204,10 @@
},
{
"BriefDescription": "Core Cross Snoops Issued : Single Snoop Target from Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_CHA_CORE_SNP.REMOTE_ONE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Core Cross Snoops Issued : Single Snoop Target from Remote : Counts the number of transactions that trigger a configurable number of cross snoops. Cores are snooped if the transaction looks up the cache and determines that it is necessary based on the operation type and what CoreValid bits are set. For example, if 2 CV bits are set on a data read, the cores must have the data in S state so it is not necessary to snoop them. However, if only 1 CV bit is set the core my have modified the data. If the transaction was an RFO, it would need to invalidate the lines. This event can be filtered based on who triggered the initial snoop(s).",
"UMask": "0x11",
@@ -995,104 +1215,130 @@
},
{
"BriefDescription": "Counter 0 Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_CHA_COUNTER0_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counter 0 Occupancy : Since occupancy counts can only be captured in the Cbo's 0 counter, this event allows a user to capture occupancy related information by filtering the Cb0 occupancy count captured in Counter 0. The filtering available is found in the control register - threshold, invert and edge detect. E.g. setting threshold to 1 can effectively monitor how many cycles the monitored queue has an entry.",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_CHA_DIRECT_GO.HA_SUPPRESS_DRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_CHA_DIRECT_GO.HA_SUPPRESS_NO_D2C",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_CHA_DIRECT_GO.HA_TOR_DEALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.EXTCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.FAST_GO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.FAST_GO_PULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.GO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.GO_PULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.IDLE_DUE_SUPPRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.NOP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "Direct GO",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_CHA_DIRECT_GO_OPC.PULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Multi-socket cacheline directory state lookups : Snoop Not Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_CHA_DIR_LOOKUP.NO_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi-socket cacheline directory state lookups : Snoop Not Needed : Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to. : Filters for transactions that did not have to send any snoops because the directory was clean.",
"UMask": "0x2",
@@ -1100,8 +1346,10 @@
},
{
"BriefDescription": "Multi-socket cacheline directory state lookups : Snoop Needed",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_CHA_DIR_LOOKUP.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi-socket cacheline directory state lookups : Snoop Needed : Counts the number of transactions that looked up the directory. Can be filtered by requests that had to snoop and those that did not have to. : Filters for transactions that had to send one or more snoops because the directory was not clean.",
"UMask": "0x1",
@@ -1109,6 +1357,7 @@
},
{
"BriefDescription": "Multi-socket cacheline directory state updates; memory write due to directory update from the home agent (HA) pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_CHA_DIR_UPDATE.HA",
"PerPkg": "1",
@@ -1118,6 +1367,7 @@
},
{
"BriefDescription": "Multi-socket cacheline directory state updates; memory write due to directory update from (table of requests) TOR pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_CHA_DIR_UPDATE.TOR",
"PerPkg": "1",
@@ -1127,8 +1377,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Local : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle triggered by this tile",
"UMask": "0x4",
@@ -1136,8 +1388,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle received by this tile",
"UMask": "0x8",
@@ -1145,8 +1399,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_STALL_IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - IV : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while regular IVs were received, causing DPT to be stalled",
"UMask": "0x40",
@@ -1154,8 +1410,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - No Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.DPT_STALL_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - No Credit : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while credit not available causing DPT to be stalled",
"UMask": "0x80",
@@ -1163,8 +1421,10 @@
},
{
"BriefDescription": "Distress signal asserted : Horizontal",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.HORZ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Horizontal : Counts the number of cycles either the local or incoming distress signals are asserted. : If TGR egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x2",
@@ -1172,8 +1432,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.PMM_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Local : Counts the number of cycles either the local or incoming distress signals are asserted. : If the CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x10",
@@ -1181,8 +1443,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.PMM_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : If another CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x20",
@@ -1190,8 +1454,10 @@
},
{
"BriefDescription": "Distress signal asserted : Vertical",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_CHA_DISTRESS_ASSERTED.VERT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Vertical : Counts the number of cycles either the local or incoming distress signals are asserted. : If IRQ egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x1",
@@ -1199,8 +1465,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -1208,8 +1476,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_CHA_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -1217,8 +1487,10 @@
},
{
"BriefDescription": "Read request from a remote socket which hit in the HitMe Cache to a line In the E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.EX_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts read requests from a remote socket which hit in the HitME cache (used to cache the multi-socket Directory state) to a line in the E(Exclusive) state. This includes the following read opcodes (RdCode, RdData, RdDataMigratory, RdCur, RdInv*, Inv*).",
"UMask": "0x1",
@@ -1226,8 +1498,10 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache : Remote socket ownership read requests that hit in S state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.SHARED_OWNREQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts Number of Hits in HitMe Cache : Remote socket ownership read requests that hit in S state. : Shared hit and op is RdInvOwn, RdInv, Inv*",
"UMask": "0x4",
@@ -1235,16 +1509,20 @@
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache : Remote socket WBMtoE requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Counts Number of Hits in HitMe Cache : Remote socket writeback to I or S requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_CHA_HITME_HIT.WBMTOI_OR_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts Number of Hits in HitMe Cache : Remote socket writeback to I or S requests : op is WbMtoI, WbPushMtoI, WbFlush, or WbMtoS",
"UMask": "0x10",
@@ -1252,8 +1530,10 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed : Remote socket read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_CHA_HITME_LOOKUP.READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts Number of times HitMe Cache is accessed : Remote socket read requests : op is RdCode, RdData, RdDataMigratory, RdCur, RdInvOwn, RdInv, Inv*",
"UMask": "0x1",
@@ -1261,8 +1541,10 @@
},
{
"BriefDescription": "Counts Number of times HitMe Cache is accessed : Remote socket write (i.e. writeback) requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_CHA_HITME_LOOKUP.WRITE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts Number of times HitMe Cache is accessed : Remote socket write (i.e. writeback) requests : op is WbMtoE, WbMtoI, WbPushMtoI, WbFlush, or WbMtoS",
"UMask": "0x2",
@@ -1270,8 +1552,10 @@
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache : Remote socket RdInvOwn requests that are not to shared line",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.NOTSHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts Number of Misses in HitMe Cache : Remote socket RdInvOwn requests that are not to shared line : No SF/LLC HitS/F and op is RdInvOwn",
"UMask": "0x40",
@@ -1279,8 +1563,10 @@
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache : Remote socket read or invalidate requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.READ_OR_INV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts Number of Misses in HitMe Cache : Remote socket read or invalidate requests : op is RdCode, RdData, RdDataMigratory, RdCur, RdInv, Inv*",
"UMask": "0x80",
@@ -1288,8 +1574,10 @@
},
{
"BriefDescription": "Counts Number of Misses in HitMe Cache : Remote socket RdInvOwn requests to shared line",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_CHA_HITME_MISS.SHARED_RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts Number of Misses in HitMe Cache : Remote socket RdInvOwn requests to shared line : SF/LLC HitS/F and op is RdInvOwn",
"UMask": "0x20",
@@ -1297,16 +1585,20 @@
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Deallocate HitME$ on Reads without RspFwdI*",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.DEALLOCATE_RSPFWDI_LOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a local request : Received RspFwdI* for a local request, but converted HitME$ to SF entry",
"UMask": "0x1",
@@ -1314,16 +1606,20 @@
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Update HitMe Cache on RdInvOwn even if not RspFwdI*",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.RDINVOWN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote request",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.RSPFWDI_REM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of Allocate/Update to HitMe Cache : op is RspIFwd or RspIFwdWb for a remote request : Updated HitME$ on RspFwdI* or local HitM/E received for a remote request",
"UMask": "0x2",
@@ -1331,16 +1627,20 @@
},
{
"BriefDescription": "Counts the number of Allocate/Update to HitMe Cache : Update HitMe Cache to SHARed",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_CHA_HITME_UPDATE.SHARED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1348,8 +1648,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1357,8 +1659,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1366,8 +1670,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_CHA_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1375,8 +1681,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_CHA_HORZ_RING_AKC_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1384,8 +1692,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_CHA_HORZ_RING_AKC_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1393,8 +1703,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_CHA_HORZ_RING_AKC_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1402,8 +1714,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_CHA_HORZ_RING_AKC_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1411,8 +1725,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1420,8 +1736,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1429,8 +1747,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1438,8 +1758,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_CHA_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1447,8 +1769,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1456,8 +1780,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1465,8 +1791,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1474,8 +1802,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_CHA_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1483,8 +1813,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Left",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_CHA_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Left : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -1492,8 +1824,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Right",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_CHA_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Right : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -1501,6 +1835,7 @@
},
{
"BriefDescription": "Normal priority reads issued to the memory controller from the CHA",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_CHA_IMC_READS_COUNT.NORMAL",
"PerPkg": "1",
@@ -1510,8 +1845,10 @@
},
{
"BriefDescription": "HA to iMC Reads Issued : ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_CHA_IMC_READS_COUNT.PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "HA to iMC Reads Issued : ISOCH : Count of the number of reads issued to any of the memory controller channels. This can be filtered by the priority of the reads.",
"UMask": "0x2",
@@ -1519,6 +1856,7 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued : Full Line Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL",
"PerPkg": "1",
@@ -1528,8 +1866,10 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.FULL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Full Line Writes Issued : ISOCH Full Line : Counts the total number of full line writes issued from the HA into the memory controller.",
"UMask": "0x4",
@@ -1537,8 +1877,10 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Full Line Writes Issued : Partial Non-ISOCH : Counts the total number of full line writes issued from the HA into the memory controller.",
"UMask": "0x2",
@@ -1546,8 +1888,10 @@
},
{
"BriefDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_CHA_IMC_WRITES_COUNT.PARTIAL_PRIORITY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA to iMC Full Line Writes Issued : ISOCH Partial : Counts the total number of full line writes issued from the HA into the memory controller.",
"UMask": "0x8",
@@ -1555,8 +1899,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Any Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ. This does not include lookups originating from the ISMQ.",
"UMask": "0x1fffff",
@@ -1564,8 +1910,10 @@
},
{
"BriefDescription": "Cache Lookups : All transactions from Remote Agents",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.ALL_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : All transactions from Remote Agents : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1e20ff",
@@ -1573,34 +1921,42 @@
},
{
"BriefDescription": "Cache Lookups : All Request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.ANY_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : All Request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Any local or remote transaction to the LLC, including prefetch.",
"Unit": "CHA"
},
{
- "BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.CODE_READ",
+ "BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1bd0ff",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.CODE_READ_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x19d0ff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Code Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Code Reads : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1bd0ff",
@@ -1608,16 +1964,20 @@
},
{
"BriefDescription": "Cache Lookups : CRd Request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_READ_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : CRd Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_READ_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"UMask": "0x19d0ff",
@@ -1625,8 +1985,10 @@
},
{
"BriefDescription": "Cache Lookups : Code Read Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_READ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Code Read Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1bd001",
@@ -1634,8 +1996,10 @@
},
{
"BriefDescription": "Cache Lookups : CRd Requests that come from a Remote socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_READ_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : CRd Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote CRd transactions to the LLC. This includes CRd prefetch.",
"UMask": "0x1a10ff",
@@ -1643,32 +2007,39 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.CODE_READ_REMOTE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.CODE_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1a10ff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Local request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.COREPREF_OR_DMND_LOCAL_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Local request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Any local transaction to the LLC, including prefetches from the Core",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.DATA_READ",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1bc1ff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ",
"PerPkg": "1",
@@ -1677,26 +2048,32 @@
"Unit": "CHA"
},
{
- "BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.DATA_READ",
+ "BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1fc1ff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Data Read Request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Data Read Request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Read transactions.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Request that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x19c1ff",
@@ -1704,8 +2081,10 @@
},
{
"BriefDescription": "Cache Lookups : Data Read Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Data Read Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1bc101",
@@ -1713,8 +2092,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Data Read Requests that come from a Remote socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DATA_READ_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x1a01ff",
@@ -1722,17 +2103,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.DATA_READ_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.DMND_READ_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x841ff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : E State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Hit Exclusive State",
"UMask": "0x20",
@@ -1740,8 +2125,10 @@
},
{
"BriefDescription": "Cache Lookups : F State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : F State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Hit Forward State",
"UMask": "0x80",
@@ -1749,8 +2136,10 @@
},
{
"BriefDescription": "Cache Lookups : Flush or Invalidate Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_INV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.",
"UMask": "0x1a44ff",
@@ -1758,8 +2147,10 @@
},
{
"BriefDescription": "Cache Lookups : Flush or Invalidate Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_INV_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.",
"UMask": "0x1844ff",
@@ -1767,8 +2158,10 @@
},
{
"BriefDescription": "Cache Lookups : Flush or Invalidate requests that come from a Remote socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_INV_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing.",
"UMask": "0x1a04ff",
@@ -1776,16 +2169,20 @@
},
{
"BriefDescription": "Cache Lookups : Flush or Invalidate Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.FLUSH_OR_INV_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Flush or Invalidate Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : I State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Miss",
"UMask": "0x1",
@@ -1793,8 +2190,10 @@
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Prefetch requests to the LLC that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LLCPREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state. Read transactions",
"UMask": "0x189dff",
@@ -1802,42 +2201,52 @@
},
{
"BriefDescription": "Cache Lookups : Local LLC prefetch requests (from LLC) Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LLCPREF_LOCAL_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Local LLC prefetch requests (from LLC) Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Any local LLC prefetch to the LLC",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.LLCPREF_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LLC_PF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x189dff",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.LOC_HOM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCALLY_HOMED_ADDRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xbdfff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Transactions homed locally Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOCAL_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed locally Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Transaction whose address resides in the local MC.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Transactions homed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.LOC_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed locally : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Transaction whose address resides in the local MC.",
"UMask": "0xbdfff",
@@ -1845,8 +2254,10 @@
},
{
"BriefDescription": "Cache Lookups : M State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : M State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Hit Modified State",
"UMask": "0x40",
@@ -1854,8 +2265,10 @@
},
{
"BriefDescription": "Cache Lookups : All Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.MISS_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : All Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1fe001",
@@ -1863,24 +2276,30 @@
},
{
"BriefDescription": "Cache Lookups : Write Request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.OTHER_REQ_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Write Request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Writeback transactions to the LLC This includes all write transactions -- both Cacheable and UC.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Remote non-snoop request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.PREF_OR_DMND_REMOTE_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Remote non-snoop request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Non-snoop transactions to the LLC from remote agent",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Reads : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1bd9ff",
@@ -1888,8 +2307,10 @@
},
{
"BriefDescription": "Cache Lookups : Locally Requested Reads that are Locally HOMed",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_LOCAL_LOC_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Locally Requested Reads that are Locally HOMed : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x9d9ff",
@@ -1897,8 +2318,10 @@
},
{
"BriefDescription": "Cache Lookups : Locally Requested Reads that are Remotely HOMed",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_LOCAL_REM_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Locally Requested Reads that are Remotely HOMed : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x11d9ff",
@@ -1906,8 +2329,10 @@
},
{
"BriefDescription": "Cache Lookups : Read Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Read Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1bd901",
@@ -1915,8 +2340,10 @@
},
{
"BriefDescription": "Cache Lookups : Locally HOMed Read Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_MISS_LOC_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Locally HOMed Read Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0xbd901",
@@ -1924,8 +2351,10 @@
},
{
"BriefDescription": "Cache Lookups : Remotely HOMed Read Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_MISS_REM_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Remotely HOMed Read Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x13d901",
@@ -1933,8 +2362,10 @@
},
{
"BriefDescription": "Cache Lookups : Remotely requested Read or Snoop Misses that are Remotely HOMed",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_OR_SNOOP_REMOTE_MISS_REM_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Remotely requested Read or Snoop Misses that are Remotely HOMed : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x161901",
@@ -1942,8 +2373,10 @@
},
{
"BriefDescription": "Cache Lookups : Remotely Requested Reads that are Locally HOMed",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_REMOTE_LOC_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Remotely Requested Reads that are Locally HOMed : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0xa19ff",
@@ -1951,8 +2384,10 @@
},
{
"BriefDescription": "Cache Lookups : Reads that Hit the Snoop Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.READ_SF_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Reads that Hit the Snoop Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1bd90e",
@@ -1960,33 +2395,41 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.REM_HOM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTELY_HOMED_ADDRESS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x15dfff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Transactions homed remotely Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed remotely Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Transaction whose address resides in a remote MC",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : Remote snoop request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNOOP_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Remote snoop request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Snoop transactions to the LLC from remote agent",
"Unit": "CHA"
},
{
"BriefDescription": "Cache and Snoop Filter Lookups; Snoop Requests from a Remote Socket",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REMOTE_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. CHAFilter0[24:21,17] bits correspond to [FMESI] state.; Filters for any transaction originating from the IPQ or IRQ. This does not include lookups originating from the ISMQ.",
"UMask": "0x1c19ff",
@@ -1994,8 +2437,10 @@
},
{
"BriefDescription": "Cache Lookups : Transactions homed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.REM_HOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Transactions homed remotely : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Transaction whose address resides in a remote MC",
"UMask": "0x15dfff",
@@ -2003,8 +2448,10 @@
},
{
"BriefDescription": "Cache Lookups : RFO Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x1bc8ff",
@@ -2012,16 +2459,20 @@
},
{
"BriefDescription": "Cache Lookups : RFO Request Filter",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO_F",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Request Filter : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : RFO Requests that come from the local socket (usually the core)",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x19c8ff",
@@ -2029,8 +2480,10 @@
},
{
"BriefDescription": "Cache Lookups : RFO Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Misses : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing.",
"UMask": "0x1bc801",
@@ -2038,17 +2491,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.RFO_LOCAL",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x888ff",
"Unit": "CHA"
},
{
"BriefDescription": "Cache Lookups : RFO Requests that come from a Remote socket.",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.RFO_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : RFO Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Local or remote RFO transactions to the LLC. This includes RFO prefetch.",
"UMask": "0x1a08ff",
@@ -2056,8 +2513,10 @@
},
{
"BriefDescription": "Cache Lookups : S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : S State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : Hit Shared State",
"UMask": "0x10",
@@ -2065,8 +2524,10 @@
},
{
"BriefDescription": "Cache Lookups : SnoopFilter - E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.SF_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : SnoopFilter - E State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : SF Hit Exclusive State",
"UMask": "0x4",
@@ -2074,8 +2535,10 @@
},
{
"BriefDescription": "Cache Lookups : SnoopFilter - H State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.SF_H",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : SnoopFilter - H State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : SF Hit HitMe State",
"UMask": "0x8",
@@ -2083,8 +2546,10 @@
},
{
"BriefDescription": "Cache Lookups : SnoopFilter - S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.SF_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : SnoopFilter - S State : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS select a state or states (in the umask field) to match. Otherwise, the event will count nothing. : SF Hit Shared State",
"UMask": "0x2",
@@ -2092,8 +2557,10 @@
},
{
"BriefDescription": "Cache Lookups : Filters Requests for those that write info into the cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.WRITES_AND_OTHER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cache Lookups : Write Requests : Counts the number of times the LLC was accessed - this includes code, data, prefetches and hints coming from L2. This has numerous filters available. Note the non-standard filtering equation. This event will count requests that lookup the cache multiple times with multiple increments. One must ALWAYS set umask bit 0 and select a state or states to match. Otherwise, the event will count nothing. : Writeback transactions from L2 to the LLC This includes all write transactions -- both Cacheable and UC.",
"UMask": "0x1a42ff",
@@ -2101,24 +2568,29 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.WRITES_AND_OTHER",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.WRITE_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x842ff",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_LLC_LOOKUP.WRITES_AND_OTHER",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x34",
"EventName": "UNC_CHA_LLC_LOOKUP.WRITE_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x17c2ff",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized : All Lines Victimized",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.ALL",
"PerPkg": "1",
@@ -2128,8 +2600,10 @@
},
{
"BriefDescription": "Lines Victimized : Lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.E_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Lines in E state : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2",
@@ -2137,8 +2611,10 @@
},
{
"BriefDescription": "Lines Victimized : Local - All Lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Local - All Lines : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x200f",
@@ -2146,8 +2622,10 @@
},
{
"BriefDescription": "Lines Victimized : Local - Lines in E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Local - Lines in E State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2002",
@@ -2155,8 +2633,10 @@
},
{
"BriefDescription": "Lines Victimized : Local - Lines in M State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Local - Lines in M State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2001",
@@ -2164,16 +2644,20 @@
},
{
"BriefDescription": "Lines Victimized : Local Only",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Local Only : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized : Local - Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.LOCAL_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Local - Lines in S State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x2004",
@@ -2181,8 +2665,10 @@
},
{
"BriefDescription": "Lines Victimized : Lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.M_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Lines in M state : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x1",
@@ -2190,8 +2676,10 @@
},
{
"BriefDescription": "Lines Victimized : Remote - All Lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Remote - All Lines : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x800f",
@@ -2199,8 +2687,10 @@
},
{
"BriefDescription": "Lines Victimized : Remote - Lines in E State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Remote - Lines in E State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x8002",
@@ -2208,8 +2698,10 @@
},
{
"BriefDescription": "Lines Victimized : Remote - Lines in M State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Remote - Lines in M State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x8001",
@@ -2217,16 +2709,20 @@
},
{
"BriefDescription": "Lines Victimized : Remote Only",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Remote Only : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"Unit": "CHA"
},
{
"BriefDescription": "Lines Victimized : Remote - Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.REMOTE_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Remote - Lines in S State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x8004",
@@ -2234,8 +2730,10 @@
},
{
"BriefDescription": "Lines Victimized : Lines in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_CHA_LLC_VICTIMS.S_STATE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lines Victimized : Lines in S State : Counts the number of lines that were victimized on a fill. This can be filtered by the state that the line was in.",
"UMask": "0x4",
@@ -2243,8 +2741,10 @@
},
{
"BriefDescription": "Cbo Misc : CV0 Prefetch Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.CV0_PREF_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : CV0 Prefetch Miss : Miscellaneous events in the Cbo.",
"UMask": "0x20",
@@ -2252,8 +2752,10 @@
},
{
"BriefDescription": "Cbo Misc : CV0 Prefetch Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.CV0_PREF_VIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : CV0 Prefetch Victim : Miscellaneous events in the Cbo.",
"UMask": "0x10",
@@ -2261,8 +2763,10 @@
},
{
"BriefDescription": "Number of times that an RFO hit in S state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.RFO_HIT_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a RFO (the Read for Ownership issued before a write) request hit a cacheline in the S (Shared) state.",
"UMask": "0x8",
@@ -2270,8 +2774,10 @@
},
{
"BriefDescription": "Cbo Misc : Silent Snoop Eviction",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.RSPI_WAS_FSE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : Silent Snoop Eviction : Miscellaneous events in the Cbo. : Counts the number of times when a Snoop hit in FSE states and triggered a silent eviction. This is useful because this information is lost in the PRE encodings.",
"UMask": "0x1",
@@ -2279,8 +2785,10 @@
},
{
"BriefDescription": "Cbo Misc : Write Combining Aliasing",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_CHA_MISC.WC_ALIASING",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cbo Misc : Write Combining Aliasing : Miscellaneous events in the Cbo. : Counts the number of times that a USWC write (WCIL(F)) transaction hit in the LLC in M state, triggering a WBMtoI followed by the USWC write. This occurs when there is WC aliasing.",
"UMask": "0x2",
@@ -2288,24 +2796,30 @@
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI0",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_CHA_MISC_EXTERNAL.MBE_INST0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI1",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_CHA_MISC_EXTERNAL.MBE_INST1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "OSB Snoop Broadcast : Local InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.LOCAL_INVITOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Local InvItoE : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x1",
@@ -2313,8 +2827,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Local Rd",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.LOCAL_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Local Rd : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x2",
@@ -2322,8 +2838,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Off",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.OFF_PWRHEURISTIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Off : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x20",
@@ -2331,8 +2849,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Remote Rd",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.REMOTE_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Remote Rd : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x4",
@@ -2340,8 +2860,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : Remote Rd InvItoE",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.REMOTE_READINVITOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : Remote Rd InvItoE : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x8",
@@ -2349,8 +2871,10 @@
},
{
"BriefDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadcast",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_CHA_OSB.RFO_HITS_SNP_BCAST",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OSB Snoop Broadcast : RFO HitS Snoop Broadcast : Count of OSB snoop broadcasts. Counts by 1 per request causing OSB snoops to be broadcast. Does not count all the snoops generated by OSB.",
"UMask": "0x10",
@@ -2358,48 +2882,60 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.ADEGRCREDIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.AKEGRCREDIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.ALLRSFWAYS_RES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.BLEGRCREDIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.FSF_VICP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.GOTRACK_ALLOWSNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x4",
@@ -2407,8 +2943,10 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.GOTRACK_ALLWAYRSV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x10",
@@ -2416,8 +2954,10 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.GOTRACK_PAMATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x2",
@@ -2425,8 +2965,10 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.GOTRACK_WAYMATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x8",
@@ -2434,32 +2976,40 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.HACREDIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.IDX_INPIPE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.IPQ_SETMATCH_VICP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.IRQ_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x20",
@@ -2467,80 +3017,100 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.IRQ_SETMATCH_VICP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.ISMQ_SETMATCH_VICP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.IVEGRCREDIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.LLC_WAYS_RES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.NOTALLOWSNOOP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.ONE_FSF_VIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.ONE_RSP_CON",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.PMM_MEMMODE_TORMATCH_MULTI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.PMM_MEMMODE_TOR_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.PRQ_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x40",
@@ -2548,8 +3118,10 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.PTL_INPIPE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x80",
@@ -2557,8 +3129,10 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.RMW_SETMATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"UMask": "0x1",
@@ -2566,128 +3140,130 @@
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.RRQ_SETMATCH_VICP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.SETMATCHENTRYWSCT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.SF_WAYS_RES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.TOPA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.TORID_MATCH_GO_P",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.VN_AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.VN_AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
- "EventCode": "0x42",
- "EventName": "UNC_CHA_PIPE_REJECT.VN_BL_NCB",
- "PerPkg": "1",
- "PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
- "Unit": "CHA"
- },
- {
- "BriefDescription": "Pipe Rejects",
- "EventCode": "0x42",
- "EventName": "UNC_CHA_PIPE_REJECT.VN_BL_NCS",
- "PerPkg": "1",
- "PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
- "Unit": "CHA"
- },
- {
- "BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.VN_BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "Pipe Rejects",
- "EventCode": "0x42",
- "EventName": "UNC_CHA_PIPE_REJECT.VN_BL_WB",
- "PerPkg": "1",
- "PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
- "Unit": "CHA"
- },
- {
- "BriefDescription": "Pipe Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_CHA_PIPE_REJECT.WAY_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Pipe Rejects : More Miscellaneous events in the Cbo.",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_INVITOX.SETCONFLICT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "PMM Memory Mode related events : Counts the number of times CHA saw NM Set conflict in SF/LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Memory Mode related events : Counts the number of times CHA saw NM Set conflict in SF/LLC : NM evictions due to another read to the same near memory set in the LLC.",
"UMask": "0x2",
@@ -2695,8 +3271,10 @@
},
{
"BriefDescription": "PMM Memory Mode related events : Counts the number of times CHA saw NM Set conflict in SF/LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.SF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Memory Mode related events : Counts the number of times CHA saw NM Set conflict in SF/LLC : NM evictions due to another read to the same near memory set in the SF.",
"UMask": "0x1",
@@ -2704,8 +3282,10 @@
},
{
"BriefDescription": "PMM Memory Mode related events : Counts the number of times CHA saw NM Set conflict in TOR",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS.TOR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Memory Mode related events : Counts the number of times CHA saw NM Set conflict in TOR : No Reject in the CHA due to a pending read to the same near memory set in the TOR.",
"UMask": "0x4",
@@ -2713,88 +3293,110 @@
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.IODC",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.IODC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_CHA_PMM_MEMMODE_NM_SETCONFLICTS2.MEMWRNI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.DDR4_FAST_INSERT",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.DDR4_FAST_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.REJ_IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.REJ_IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.SLOWTORQ_SKIP",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.SLOWTORQ_SKIP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.SLOW_INSERT",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.SLOW_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.THROTTLE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE_IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.THROTTLE_IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS.THROTTLE_PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x66",
"EventName": "UNC_CHA_PMM_QOS.THROTTLE_PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_FAST_FIFO",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_FAST_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": count # of FAST TOR Request inserted to ha_tor_req_fifo",
"UMask": "0x2",
@@ -2802,8 +3404,10 @@
},
{
"BriefDescription": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_SLOW_FIFO",
+ "Counter": "0,1,2,3",
"EventCode": "0x67",
"EventName": "UNC_CHA_PMM_QOS_OCCUPANCY.DDR_SLOW_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": count # of SLOW TOR Request inserted to ha_pmm_tor_req_fifo",
"UMask": "0x1",
@@ -2811,8 +3415,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC0",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC0 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 0 only.",
"UMask": "0x1",
@@ -2820,8 +3426,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC1",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC1 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 1 only.",
"UMask": "0x2",
@@ -2829,40 +3437,50 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC10",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC10 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 10 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC11",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC11",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC11 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 11 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC12",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC12",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC12 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 12 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC13",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC13",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC13 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 13 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC2",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC2 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 2 only.",
"UMask": "0x4",
@@ -2870,8 +3488,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC3",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC3 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 3 only.",
"UMask": "0x8",
@@ -2879,8 +3499,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC4",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC4 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 4 only.",
"UMask": "0x10",
@@ -2888,8 +3510,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC5",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC5 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 5 only.",
"UMask": "0x20",
@@ -2897,8 +3521,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC6",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC6 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 6 only.",
"UMask": "0x40",
@@ -2906,8 +3532,10 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC7",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC7 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 7 only.",
"UMask": "0x80",
@@ -2915,24 +3543,30 @@
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC8",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC8 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 8 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx READ Credits Empty : MC9",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_CHA_READ_NO_CREDITS.MC9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx READ Credits Empty : MC9 : Counts the number of times when there are no credits available for sending reads from the CHA into the iMC. In order to send reads into the memory controller, the HA must first acquire a credit for the iMC's AD Ingress queue. : Filter for memory controller 9 only.",
"Unit": "CHA"
},
{
"BriefDescription": "Local INVITOE requests (exclusive ownership of a cache line without receiving data) that miss the SF/LLC and remote INVITOE requests sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of requests coming from a unit on this socket for exclusive ownership of a cache line without receiving data (INVITOE) to the CHA.",
"UMask": "0x30",
@@ -2940,6 +3574,7 @@
},
{
"BriefDescription": "Local INVITOE requests (exclusive ownership of a cache line without receiving data) that miss the SF/LLC and are sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE_LOCAL",
"PerPkg": "1",
@@ -2949,6 +3584,7 @@
},
{
"BriefDescription": "Remote INVITOE requests (exclusive ownership of a cache line without receiving data) sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.INVITOE_REMOTE",
"PerPkg": "1",
@@ -2958,6 +3594,7 @@
},
{
"BriefDescription": "Local read requests that miss the SF/LLC and remote read requests sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS",
"PerPkg": "1",
@@ -2967,6 +3604,7 @@
},
{
"BriefDescription": "Local read requests that miss the SF/LLC and are sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS_LOCAL",
"PerPkg": "1",
@@ -2976,6 +3614,7 @@
},
{
"BriefDescription": "Remote read requests sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.READS_REMOTE",
"PerPkg": "1",
@@ -2985,6 +3624,7 @@
},
{
"BriefDescription": "Local write requests that miss the SF/LLC and remote write requests sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES",
"PerPkg": "1",
@@ -2994,6 +3634,7 @@
},
{
"BriefDescription": "Local write requests that miss the SF/LLC and are sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES_LOCAL",
"PerPkg": "1",
@@ -3003,6 +3644,7 @@
},
{
"BriefDescription": "Remote write requests sent to the CHA's home agent",
+ "Counter": "0,1,2,3",
"EventCode": "0x50",
"EventName": "UNC_CHA_REQUESTS.WRITES_REMOTE",
"PerPkg": "1",
@@ -3012,8 +3654,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AD : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -3021,8 +3665,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AK : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -3030,8 +3676,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : BL : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -3039,8 +3687,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_CHA_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : IV : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -3048,8 +3698,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : AD : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -3057,8 +3709,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -3066,8 +3720,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x10",
@@ -3075,8 +3731,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -3084,8 +3742,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_CHA_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -3093,95 +3753,119 @@
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : Acknowledgements to Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_CHA_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_CHA_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Source Throttle",
+ "Counter": "0,1,2,3",
"EventCode": "0xae",
"EventName": "UNC_CHA_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Allocations : IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : IPQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x4",
@@ -3189,8 +3873,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : IRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : IRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x1",
@@ -3198,8 +3884,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : IRQ Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.IRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : IRQ Rejected : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x2",
@@ -3207,8 +3895,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : PRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x10",
@@ -3216,8 +3906,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : PRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.PRQ_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : PRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x20",
@@ -3225,8 +3917,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : RRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : RRQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x40",
@@ -3234,8 +3928,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Allocations : WBQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_CHA_RxC_INSERTS.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Allocations : WBQ : Counts number of allocations per cycle into the specified Ingress queue.",
"UMask": "0x80",
@@ -3243,8 +3939,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -3252,8 +3950,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -3261,8 +3961,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request : Can't inject AK ring message",
"UMask": "0x40",
@@ -3270,8 +3972,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0 : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -3279,8 +3983,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0 : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -3288,8 +3994,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0 : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -3297,8 +4005,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0 : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -3306,8 +4016,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_CHA_RxC_IPQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request : Can't inject IV ring message",
"UMask": "0x80",
@@ -3315,16 +4027,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : ANY0 : Any condition listed in the IPQ0 Reject counter was true",
"UMask": "0x1",
@@ -3332,16 +4048,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -3349,16 +4069,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -3366,8 +4090,10 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IPQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -3375,16 +4101,20 @@
},
{
"BriefDescription": "IPQ Requests (from CMS) Rejected - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_CHA_RxC_IPQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -3392,8 +4122,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -3401,8 +4133,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request : Can't inject AK ring message",
"UMask": "0x40",
@@ -3410,8 +4144,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0 : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -3419,8 +4155,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0 : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -3428,8 +4166,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0 : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -3437,8 +4177,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0 : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -3446,8 +4188,10 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_CHA_RxC_IRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request : Can't inject IV ring message",
"UMask": "0x80",
@@ -3455,16 +4199,20 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : ANY0 : Any condition listed in the IRQ0 Reject counter was true",
"UMask": "0x1",
@@ -3472,16 +4220,20 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LLC or SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LLC or SF Way : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -3489,24 +4241,30 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Ingress (from CMS) Request Queue Rejects; PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "CHA"
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IRQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -3514,16 +4272,20 @@
},
{
"BriefDescription": "IRQ Requests (from CMS) Rejected - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_CHA_RxC_IRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -3531,8 +4293,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -3540,8 +4304,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject AK ring message",
"UMask": "0x40",
@@ -3549,8 +4315,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -3558,8 +4326,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -3567,8 +4337,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -3576,8 +4348,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -3585,8 +4359,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_CHA_RxC_ISMQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject IV ring message",
"UMask": "0x80",
@@ -3594,8 +4370,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -3603,8 +4381,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -3612,8 +4392,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject AK ring message",
"UMask": "0x40",
@@ -3621,8 +4403,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -3630,8 +4414,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -3639,8 +4425,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -3648,8 +4436,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -3657,8 +4447,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_CHA_RxC_ISMQ0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Can't inject IV ring message",
"UMask": "0x80",
@@ -3666,8 +4458,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_CHA_RxC_ISMQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 1 : ANY0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Any condition listed in the ISMQ0 Reject counter was true",
"UMask": "0x1",
@@ -3675,8 +4469,10 @@
},
{
"BriefDescription": "ISMQ Rejects - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_CHA_RxC_ISMQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Rejects - Set 1 : HA : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -3684,8 +4480,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_CHA_RxC_ISMQ1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 1 : ANY0 : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores. : Any condition listed in the ISMQ0 Reject counter was true",
"UMask": "0x1",
@@ -3693,8 +4491,10 @@
},
{
"BriefDescription": "ISMQ Retries - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_CHA_RxC_ISMQ1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "ISMQ Retries - Set 1 : HA : Number of times a transaction flowing through the ISMQ had to retry. Transaction pass through the ISMQ as responses for requests that already exist in the Cbo. Some examples include: when data is returned or when snoop responses come back from the cores.",
"UMask": "0x2",
@@ -3702,8 +4502,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy : IPQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Occupancy : IPQ : Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x4",
@@ -3711,8 +4513,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy : IRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.IRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Occupancy : IRQ : Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x1",
@@ -3720,8 +4524,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy : RRQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Occupancy : RRQ : Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x40",
@@ -3729,8 +4535,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Occupancy : WBQ",
+ "Counter": "0",
"EventCode": "0x11",
"EventName": "UNC_CHA_RxC_OCCUPANCY.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Occupancy : WBQ : Counts number of entries in the specified Ingress queue in each cycle.",
"UMask": "0x80",
@@ -3738,8 +4546,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : AD REQ on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -3747,8 +4557,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : AD RSP on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -3756,8 +4568,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : Non UPI AK Request : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Can't inject AK ring message",
"UMask": "0x40",
@@ -3765,8 +4579,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL NCB on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -3774,8 +4590,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL NCS on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -3783,8 +4601,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL RSP on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -3792,8 +4612,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : BL WB on VN0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -3801,8 +4623,10 @@
},
{
"BriefDescription": "Other Retries - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_CHA_RxC_OTHER0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 0 : Non UPI IV Request : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Can't inject IV ring message",
"UMask": "0x80",
@@ -3810,8 +4634,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : Allow Snoop : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x40",
@@ -3819,8 +4645,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : ANY0 : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Any condition listed in the Other0 Reject counter was true",
"UMask": "0x1",
@@ -3828,8 +4656,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : HA : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x2",
@@ -3837,8 +4667,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : LLC OR SF Way : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -3846,8 +4678,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : LLC Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x4",
@@ -3855,8 +4689,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : PhyAddr Match : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -3864,8 +4700,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : SF Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject) : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -3873,8 +4711,10 @@
},
{
"BriefDescription": "Other Retries - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_CHA_RxC_OTHER1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Other Retries - Set 1 : Victim : Retry Queue Inserts of Transactions that were already in another Retry Q (sub-events encode the reason for the next reject)",
"UMask": "0x10",
@@ -3882,8 +4722,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD REQ on VN0 : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -3891,8 +4733,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : AD RSP on VN0 : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -3900,8 +4744,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI AK Request : Can't inject AK ring message",
"UMask": "0x40",
@@ -3909,8 +4755,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCB on VN0 : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -3918,8 +4766,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL NCS on VN0 : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -3927,8 +4777,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL RSP on VN0 : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -3936,8 +4788,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : BL WB on VN0 : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -3945,8 +4799,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_CHA_RxC_PRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 0 : Non UPI IV Request : Can't inject IV ring message",
"UMask": "0x80",
@@ -3954,16 +4810,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "CHA"
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : ANY0 : Any condition listed in the PRQ0 Reject counter was true",
"UMask": "0x1",
@@ -3971,16 +4831,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LLC OR SF Way : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -3988,16 +4852,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : PhyAddr Match : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -4005,8 +4873,10 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PRQ Requests (from CMS) Rejected - Set 1 : SF Victim : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -4014,16 +4884,20 @@
},
{
"BriefDescription": "PRQ Requests (from CMS) Rejected - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_CHA_RxC_PRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Request Queue Retries - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : AD REQ on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -4031,8 +4905,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : AD RSP on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -4040,8 +4916,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : Non UPI AK Request : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Can't inject AK ring message",
"UMask": "0x40",
@@ -4049,8 +4927,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL NCB on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -4058,8 +4938,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL NCS on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -4067,8 +4949,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL RSP on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -4076,8 +4960,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : BL WB on VN0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -4085,8 +4971,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_CHA_RxC_REQ_Q0_RETRY.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 0 : Non UPI IV Request : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Can't inject IV ring message",
"UMask": "0x80",
@@ -4094,8 +4982,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : Allow Snoop : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x40",
@@ -4103,8 +4993,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : ANY0 : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Any condition listed in the WBQ0 Reject counter was true",
"UMask": "0x1",
@@ -4112,8 +5004,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : HA : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x2",
@@ -4121,8 +5015,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : LLC OR SF Way : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -4130,8 +5026,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : LLC Victim : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x4",
@@ -4139,8 +5037,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : PhyAddr Match : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -4148,8 +5048,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : SF Victim : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ) : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -4157,8 +5059,10 @@
},
{
"BriefDescription": "Request Queue Retries - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_CHA_RxC_REQ_Q1_RETRY.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Request Queue Retries - Set 1 : Victim : REQUESTQ includes: IRQ, PRQ, IPQ, RRQ, WBQ (everything except for ISMQ)",
"UMask": "0x10",
@@ -4166,8 +5070,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -4175,8 +5081,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -4184,8 +5092,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Can't inject AK ring message",
"UMask": "0x40",
@@ -4193,8 +5103,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -4202,8 +5114,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -4211,8 +5125,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -4220,8 +5136,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -4229,8 +5147,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_CHA_RxC_RRQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Can't inject IV ring message",
"UMask": "0x80",
@@ -4238,8 +5158,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : Allow Snoop : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x40",
@@ -4247,8 +5169,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : ANY0 : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Any condition listed in the RRQ0 Reject counter was true",
"UMask": "0x1",
@@ -4256,8 +5180,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : HA : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x2",
@@ -4265,8 +5191,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : LLC OR SF Way : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -4274,8 +5202,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : LLC Victim : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x4",
@@ -4283,8 +5213,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : PhyAddr Match : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -4292,8 +5224,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : SF Victim : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry. : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -4301,8 +5235,10 @@
},
{
"BriefDescription": "RRQ Rejects - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_CHA_RxC_RRQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RRQ Rejects - Set 1 : Victim : Number of times a transaction flowing through the RRQ (Remote Response Queue) had to retry.",
"UMask": "0x10",
@@ -4310,8 +5246,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : AD REQ on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_REQ_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : AD REQ on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No AD VN0 credit for generating a request",
"UMask": "0x1",
@@ -4319,8 +5257,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : AD RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AD_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : AD RSP on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No AD VN0 credit for generating a response",
"UMask": "0x2",
@@ -4328,8 +5268,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : Non UPI AK Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.AK_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : Non UPI AK Request : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Can't inject AK ring message",
"UMask": "0x40",
@@ -4337,8 +5279,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL NCB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL NCB on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for NCB",
"UMask": "0x10",
@@ -4346,8 +5290,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL NCS on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_NCS_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL NCS on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for NCS",
"UMask": "0x20",
@@ -4355,8 +5301,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL RSP on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_RSP_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL RSP on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for generating a response",
"UMask": "0x4",
@@ -4364,8 +5312,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : BL WB on VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.BL_WB_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : BL WB on VN0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : No BL VN0 credit for generating a writeback",
"UMask": "0x8",
@@ -4373,8 +5323,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 0 : Non UPI IV Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_CHA_RxC_WBQ0_REJECT.IV_NON_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 0 : Non UPI IV Request : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Can't inject IV ring message",
"UMask": "0x80",
@@ -4382,8 +5334,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : Allow Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.ALLOW_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : Allow Snoop : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x40",
@@ -4391,8 +5345,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : ANY0",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.ANY0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : ANY0 : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Any condition listed in the WBQ0 Reject counter was true",
"UMask": "0x1",
@@ -4400,8 +5356,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : HA",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.HA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : HA : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x2",
@@ -4409,8 +5367,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : LLC OR SF Way",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_OR_SF_WAY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : LLC OR SF Way : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Way conflict with another request that caused the reject",
"UMask": "0x20",
@@ -4418,8 +5378,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : LLC Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.LLC_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : LLC Victim : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x4",
@@ -4427,8 +5389,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : PhyAddr Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.PA_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : PhyAddr Match : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Address match with an outstanding request that was rejected.",
"UMask": "0x80",
@@ -4436,8 +5400,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : SF Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.SF_VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : SF Victim : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry. : Requests did not generate Snoop filter victim",
"UMask": "0x8",
@@ -4445,8 +5411,10 @@
},
{
"BriefDescription": "WBQ Rejects - Set 1 : Victim",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_CHA_RxC_WBQ1_REJECT.VICTIM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WBQ Rejects - Set 1 : Victim : Number of times a transaction flowing through the WBQ (Writeback Queue) had to retry.",
"UMask": "0x10",
@@ -4454,8 +5422,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x11",
@@ -4463,8 +5433,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x10",
@@ -4472,8 +5444,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x1",
@@ -4481,8 +5455,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x44",
@@ -4490,8 +5466,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x40",
@@ -4499,8 +5477,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_CHA_RxR_BUSY_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x4",
@@ -4508,8 +5488,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x11",
@@ -4517,8 +5499,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x10",
@@ -4526,8 +5510,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x1",
@@ -4535,8 +5521,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AK : Number of packets bypassing the CMS Ingress",
"UMask": "0x2",
@@ -4544,8 +5532,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AKC - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x80",
@@ -4553,8 +5543,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x44",
@@ -4562,8 +5554,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x40",
@@ -4571,8 +5565,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x4",
@@ -4580,8 +5576,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_CHA_RxR_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : IV : Number of packets bypassing the CMS Ingress",
"UMask": "0x8",
@@ -4589,8 +5587,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -4598,8 +5598,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x10",
@@ -4607,8 +5609,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x1",
@@ -4616,8 +5620,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AK : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x2",
@@ -4625,8 +5631,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -4634,8 +5642,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x40",
@@ -4643,8 +5653,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x4",
@@ -4652,8 +5664,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IFV - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IFV - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x80",
@@ -4661,8 +5675,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_CHA_RxR_CRD_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IV : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x8",
@@ -4670,16 +5686,20 @@
},
{
"BriefDescription": "Transgress Injection Starvation",
+ "Counter": "0,1,2,3",
"EventCode": "0xe4",
"EventName": "UNC_CHA_RxR_CRD_STARVED_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"Unit": "CHA"
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -4687,8 +5707,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -4696,8 +5718,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -4705,8 +5729,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AK : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -4714,8 +5740,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AKC - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -4723,8 +5751,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -4732,8 +5762,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -4741,8 +5773,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -4750,8 +5784,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_CHA_RxR_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : IV : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -4759,8 +5795,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -4768,8 +5806,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -4777,8 +5817,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -4786,8 +5828,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AK : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -4795,8 +5839,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AKC - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -4804,8 +5850,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -4813,8 +5861,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x20",
@@ -4822,8 +5872,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -4831,8 +5883,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_CHA_RxR_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : IV : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -4840,6 +5894,7 @@
},
{
"BriefDescription": "Snoop filter capacity evictions for E-state entries.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_CHA_SF_EVICTION.E_STATE",
"PerPkg": "1",
@@ -4849,6 +5904,7 @@
},
{
"BriefDescription": "Snoop filter capacity evictions for M-state entries.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_CHA_SF_EVICTION.M_STATE",
"PerPkg": "1",
@@ -4858,6 +5914,7 @@
},
{
"BriefDescription": "Snoop filter capacity evictions for S-state entries.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3D",
"EventName": "UNC_CHA_SF_EVICTION.S_STATE",
"PerPkg": "1",
@@ -4867,8 +5924,10 @@
},
{
"BriefDescription": "Snoops Sent : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : All : Counts the number of snoops issued by the HA.",
"UMask": "0x1",
@@ -4876,8 +5935,10 @@
},
{
"BriefDescription": "Snoops Sent : Broadcast snoops for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.BCST_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Broadcast snoops for Local Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast snoops issued by the HA responding to local requests",
"UMask": "0x10",
@@ -4885,8 +5946,10 @@
},
{
"BriefDescription": "Snoops Sent : Broadcast snoops for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.BCST_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Broadcast snoops for Remote Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast snoops issued by the HA responding to remote requests",
"UMask": "0x20",
@@ -4894,8 +5957,10 @@
},
{
"BriefDescription": "Snoops Sent : Directed snoops for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Directed snoops for Local Requests : Counts the number of snoops issued by the HA. : Counts the number of directed snoops issued by the HA responding to local requests",
"UMask": "0x40",
@@ -4903,8 +5968,10 @@
},
{
"BriefDescription": "Snoops Sent : Directed snoops for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.DIRECT_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Directed snoops for Remote Requests : Counts the number of snoops issued by the HA. : Counts the number of directed snoops issued by the HA responding to remote requests",
"UMask": "0x80",
@@ -4912,8 +5979,10 @@
},
{
"BriefDescription": "Snoops Sent : Snoops sent for Local Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Snoops sent for Local Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast or directed snoops issued by the HA responding to local requests",
"UMask": "0x4",
@@ -4921,8 +5990,10 @@
},
{
"BriefDescription": "Snoops Sent : Snoops sent for Remote Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_CHA_SNOOPS_SENT.REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoops Sent : Snoops sent for Remote Requests : Counts the number of snoops issued by the HA. : Counts the number of broadcast or directed snoops issued by the HA responding to remote requests",
"UMask": "0x8",
@@ -4930,8 +6001,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RSPCNFLCT*",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPCNFLCT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : RSPCNFLCT* : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for snoops responses of RspConflict. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent. This triggers conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI.",
"UMask": "0x40",
@@ -4939,8 +6012,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RspFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : RspFwd : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for a snoop response of RspFwd to a CA request. This snoop response is only possible for RdCur when a snoop HITM/E in a remote caching agent and it directly forwards data to a requestor without changing the requestor's cache line state.",
"UMask": "0x80",
@@ -4948,8 +6023,10 @@
},
{
"BriefDescription": "Snoop Responses Received : Rsp*Fwd*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPFWDWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : Rsp*Fwd*WB : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for a snoop response of Rsp*Fwd*WB. This snoop response is only used in 4s systems. It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory.",
"UMask": "0x20",
@@ -4957,8 +6034,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a transaction with the opcode type RspI Snoop Response was received which indicates the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO: the Read for Ownership issued before a write hits non-modified data).",
"UMask": "0x1",
@@ -4966,8 +6045,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPIFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a a transaction with the opcode type RspIFwd Snoop Response was received which indicates a remote caching agent forwarded the data and the requesting agent is able to acquire the data in E (Exclusive) or M (modified) states. This is commonly returned with RFO (the Read for Ownership issued before a write) transactions. The snoop could have either been to a cacheline in the M,E,F (Modified, Exclusive or Forward) states.",
"UMask": "0x4",
@@ -4975,8 +6056,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a transaction with the opcode type RspS Snoop Response was received which indicates when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with S RspS.",
"UMask": "0x2",
@@ -4984,8 +6067,10 @@
},
{
"BriefDescription": "Snoop Responses Received : RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPSFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts when a a transaction with the opcode type RspSFwd Snoop Response was received which indicates a remote caching agent forwarded the data but held on to its current copy. This is common for data and code reads that hit in a remote socket in E (Exclusive) or F (Forward) state.",
"UMask": "0x8",
@@ -4993,8 +6078,10 @@
},
{
"BriefDescription": "Snoop Responses Received : Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_CHA_SNOOP_RESP.RSPWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received : Rsp*WB : Counts the total number of RspI snoop responses received. Whenever a snoops are issued, one or more snoop responses will be returned depending on the topology of the system. In systems larger than 2s, when multiple snoops are returned this will count all the snoops that are received. For example, if 3 snoops were issued and returned RspI, RspS, and RspSFwd; then each of these sub-events would increment by 1. : Filters for a snoop response of RspIWB or RspSWB. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.",
"UMask": "0x10",
@@ -5002,8 +6089,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspCnflct",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPCNFLCT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspCnflct : Number of snoop responses received for a Local request : Filters for snoops responses of RspConflict to local CA requests. This is returned when a snoop finds an existing outstanding transaction in a remote caching agent when it CAMs that caching agent. This triggers conflict resolution hardware. This covers both RspCnflct and RspCnflctWbI.",
"UMask": "0x40",
@@ -5011,8 +6100,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspFwd : Number of snoop responses received for a Local request : Filters for a snoop response of RspFwd to local CA requests. This snoop response is only possible for RdCur when a snoop HITM/E in a remote caching agent and it directly forwards data to a requestor without changing the requestor's cache line state.",
"UMask": "0x80",
@@ -5020,8 +6111,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : Rsp*FWD*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPFWDWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : Rsp*FWD*WB : Number of snoop responses received for a Local request : Filters for a snoop response of Rsp*Fwd*WB to local CA requests. This snoop response is only used in 4s systems. It is used when a snoop HITM's in a remote caching agent and it directly forwards data to a requestor, and simultaneously returns data to the home to be written back to memory.",
"UMask": "0x20",
@@ -5029,8 +6122,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspI",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspI : Number of snoop responses received for a Local request : Filters for snoops responses of RspI to local CA requests. RspI is returned when the remote cache does not have the data, or when the remote cache silently evicts data (such as when an RFO hits non-modified data).",
"UMask": "0x1",
@@ -5038,8 +6133,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspIFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPIFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspIFwd : Number of snoop responses received for a Local request : Filters for snoop responses of RspIFwd to local CA requests. This is returned when a remote caching agent forwards data and the requesting agent is able to acquire the data in E or M states. This is commonly returned with RFO transactions. It can be either a HitM or a HitFE.",
"UMask": "0x4",
@@ -5047,8 +6144,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspS",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspS : Number of snoop responses received for a Local request : Filters for snoop responses of RspS to local CA requests. RspS is returned when a remote cache has data but is not forwarding it. It is a way to let the requesting socket know that it cannot allocate the data in E state. No data is sent with S RspS.",
"UMask": "0x2",
@@ -5056,8 +6155,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : RspSFwd",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPSFWD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : RspSFwd : Number of snoop responses received for a Local request : Filters for a snoop response of RspSFwd to local CA requests. This is returned when a remote caching agent forwards data but holds on to its currently copy. This is common for data and code reads that hit in a remote socket in E or F state.",
"UMask": "0x8",
@@ -5065,8 +6166,10 @@
},
{
"BriefDescription": "Snoop Responses Received Local : Rsp*WB",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_CHA_SNOOP_RESP_LOCAL.RSPWB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Snoop Responses Received Local : Rsp*WB : Number of snoop responses received for a Local request : Filters for a snoop response of RspIWB or RspSWB to local CA requests. This is returned when a non-RFO request hits in M state. Data and Code Reads can return either RspIWB or RspSWB depending on how the system has been configured. InvItoE transactions will also return RspIWB because they must acquire ownership.",
"UMask": "0x10",
@@ -5074,56 +6177,70 @@
},
{
"BriefDescription": "Misc Snoop Responses Received : MtoI RspIDataM",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.MTOI_RSPDATAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : MtoI RspIFwdM",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.MTOI_RSPIFWDM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : Pull Data Partial - Hit LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.PULLDATAPTL_HITLLC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : Pull Data Partial - Hit SF",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.PULLDATAPTL_HITSF",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : RspIFwdPtl Hit LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.RSPIFWDMPTL_HITLLC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "CHA"
},
{
"BriefDescription": "Misc Snoop Responses Received : RspIFwdPtl Hit SF",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_CHA_SNOOP_RSP_MISC.RSPIFWDMPTL_HITSF",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "CHA"
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5131,8 +6248,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5140,8 +6259,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5149,8 +6270,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -5158,8 +6281,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -5167,8 +6292,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -5176,8 +6303,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -5185,8 +6314,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -5194,8 +6325,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5203,8 +6336,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5212,8 +6347,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5221,8 +6358,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -5230,8 +6369,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -5239,8 +6380,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -5248,8 +6391,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -5257,8 +6402,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -5266,8 +6413,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5275,8 +6424,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5284,8 +6435,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5293,8 +6446,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -5302,8 +6457,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -5311,8 +6468,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -5320,8 +6479,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -5329,8 +6490,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -5338,8 +6501,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5347,8 +6512,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5356,8 +6523,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5365,8 +6534,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -5374,8 +6545,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -5383,8 +6556,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -5392,8 +6567,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -5401,8 +6578,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_CHA_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -5410,8 +6589,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5419,8 +6600,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5428,8 +6611,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5437,8 +6622,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5446,8 +6633,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5455,8 +6644,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5464,8 +6655,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5473,8 +6666,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5482,8 +6677,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5491,8 +6688,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -5500,8 +6699,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -5509,8 +6710,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_CHA_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -5518,8 +6721,10 @@
},
{
"BriefDescription": "TOR Inserts : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc001ffff",
@@ -5527,24 +6732,30 @@
},
{
"BriefDescription": "TOR Inserts : DDR4 Access",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DDR4 Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.DDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.DDR4",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : SF/LLC Evictions",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : SF/LLC Evictions : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
"UMask": "0x2",
@@ -5552,14 +6763,17 @@
},
{
"BriefDescription": "TOR Inserts : Just Hits",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Hits : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : All requests from iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA",
"PerPkg": "1",
@@ -5569,6 +6783,7 @@
},
{
"BriefDescription": "TOR Inserts : CLFlushes issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSH",
"PerPkg": "1",
@@ -5578,8 +6793,10 @@
},
{
"BriefDescription": "TOR Inserts : CLFlushOpts issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CLFLUSHOPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : CLFlushOpts issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8d7ff01",
@@ -5587,6 +6804,7 @@
},
{
"BriefDescription": "TOR Inserts : CRDs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CRD",
"PerPkg": "1",
@@ -5596,8 +6814,10 @@
},
{
"BriefDescription": "TOR Inserts; CRd Pref from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_CRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Code read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc88fff01",
@@ -5605,8 +6825,10 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRds issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc817ff01",
@@ -5614,8 +6836,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores due to a page walk : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837ff01",
@@ -5623,8 +6847,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Opts issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Opts issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc827ff01",
@@ -5632,8 +6858,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8a7ff01",
@@ -5641,6 +6869,7 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_DRD_PREF",
"PerPkg": "1",
@@ -5650,6 +6879,7 @@
},
{
"BriefDescription": "TOR Inserts : All requests from iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT",
"PerPkg": "1",
@@ -5659,6 +6889,7 @@
},
{
"BriefDescription": "TOR Inserts : CRds issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD",
"PerPkg": "1",
@@ -5668,6 +6899,7 @@
},
{
"BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_CRD_PREF",
"PerPkg": "1",
@@ -5677,6 +6909,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD",
"PerPkg": "1",
@@ -5686,8 +6919,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores due to page walks that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837fd01",
@@ -5695,8 +6930,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Opts issued by iA Cores that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Opts issued by iA Cores that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc827fd01",
@@ -5704,8 +6941,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8a7fd01",
@@ -5713,6 +6952,7 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_DRD_PREF",
"PerPkg": "1",
@@ -5722,8 +6962,10 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by iA Cores that Hit LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : ItoMs issued by iA Cores that Hit LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fd01",
@@ -5731,8 +6973,10 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcccffd01",
@@ -5740,17 +6984,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCODE",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xcccffd01",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : LLCPrefData issued by iA Cores that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : LLCPrefData issued by iA Cores that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xccd7fd01",
@@ -5758,15 +7006,18 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDATA",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFDRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xccd7fd01",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_LLCPREFRFO",
"PerPkg": "1",
@@ -5776,6 +7027,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO",
"PerPkg": "1",
@@ -5785,6 +7037,7 @@
},
{
"BriefDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_RFO_PREF",
"PerPkg": "1",
@@ -5794,8 +7047,10 @@
},
{
"BriefDescription": "TOR Inserts : SpecItoMs issued by iA Cores that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_HIT_SPECITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : SpecItoMs issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc57fd01",
@@ -5803,8 +7058,10 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : ItoMs issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47ff01",
@@ -5812,8 +7069,10 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_ITOMCACHENEAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : ItoMCacheNears issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcd47ff01",
@@ -5821,8 +7080,10 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefCode issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcccfff01",
@@ -5830,6 +7091,7 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefData issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFDATA",
"PerPkg": "1",
@@ -5839,6 +7101,7 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_LLCPREFRFO",
"PerPkg": "1",
@@ -5848,6 +7111,7 @@
},
{
"BriefDescription": "TOR Inserts : All requests from iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS",
"PerPkg": "1",
@@ -5857,6 +7121,7 @@
},
{
"BriefDescription": "TOR Inserts : CRds issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD",
"PerPkg": "1",
@@ -5866,8 +7131,10 @@
},
{
"BriefDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80efe01",
@@ -5875,6 +7142,7 @@
},
{
"BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF",
"PerPkg": "1",
@@ -5884,8 +7152,10 @@
},
{
"BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88efe01",
@@ -5893,8 +7163,10 @@
},
{
"BriefDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_PREF_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88f7e01",
@@ -5902,8 +7174,10 @@
},
{
"BriefDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_CRD_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : CRd issued by iA Cores that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80f7e01",
@@ -5911,6 +7185,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD",
"PerPkg": "1",
@@ -5920,8 +7195,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd PTEs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd PTEs issued by iA Cores due to a page walk that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837fe01",
@@ -5929,6 +7206,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_DDR",
"PerPkg": "1",
@@ -5938,6 +7216,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL",
"PerPkg": "1",
@@ -5947,6 +7226,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_DDR",
"PerPkg": "1",
@@ -5956,6 +7236,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_LOCAL_PMM",
"PerPkg": "1",
@@ -5965,8 +7246,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Opt issued by iA Cores that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Opt issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc827fe01",
@@ -5974,8 +7257,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Opt_Prefs issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8a7fe01",
@@ -5983,6 +7268,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PMM",
"PerPkg": "1",
@@ -5992,6 +7278,7 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF",
"PerPkg": "1",
@@ -6001,8 +7288,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978601",
@@ -6010,6 +7299,7 @@
},
{
"BriefDescription": "TOR Inserts; DRd Pref misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL",
"PerPkg": "1",
@@ -6019,8 +7309,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968601",
@@ -6028,8 +7320,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968a01",
@@ -6037,8 +7331,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978a01",
@@ -6046,6 +7342,7 @@
},
{
"BriefDescription": "TOR Inserts; DRd Pref misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE",
"PerPkg": "1",
@@ -6055,8 +7352,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970601",
@@ -6064,8 +7363,10 @@
},
{
"BriefDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_PREF_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970a01",
@@ -6073,6 +7374,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE",
"PerPkg": "1",
@@ -6082,6 +7384,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_DDR",
"PerPkg": "1",
@@ -6091,6 +7394,7 @@
},
{
"BriefDescription": "TOR Inserts : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_DRD_REMOTE_PMM",
"PerPkg": "1",
@@ -6100,6 +7404,7 @@
},
{
"BriefDescription": "TOR Inserts; WCiLF misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR",
"PerPkg": "1",
@@ -6109,8 +7414,10 @@
},
{
"BriefDescription": "TOR Inserts; WCiLF misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8678601",
@@ -6118,17 +7425,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_DDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_DRAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc8678601",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; WCiLF misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8668601",
@@ -6136,17 +7447,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_DDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_LOCAL_DRAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc8668601",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; WCiLF misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8668a01",
@@ -6154,8 +7469,10 @@
},
{
"BriefDescription": "TOR Inserts; WCiLF misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8678a01",
@@ -6163,8 +7480,10 @@
},
{
"BriefDescription": "TOR Inserts; WCiLF misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8670601",
@@ -6172,17 +7491,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_DDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_REMOTE_DRAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc8670601",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; WCiLF misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_FULL_STREAMING_WR_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8670a01",
@@ -6190,8 +7513,10 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by iA Cores that Missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : ItoMs issued by iA Cores that Missed LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fe01",
@@ -6199,8 +7524,10 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : LLCPrefCode issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcccffe01",
@@ -6208,6 +7535,7 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefData issued by iA Cores that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFDATA",
"PerPkg": "1",
@@ -6217,6 +7545,7 @@
},
{
"BriefDescription": "TOR Inserts : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LLCPREFRFO",
"PerPkg": "1",
@@ -6226,8 +7555,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668601",
@@ -6235,8 +7566,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668a01",
@@ -6244,8 +7577,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8601",
@@ -6253,8 +7588,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8a01",
@@ -6262,6 +7599,7 @@
},
{
"BriefDescription": "TOR Inserts; WCiL misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR",
"PerPkg": "1",
@@ -6271,8 +7609,10 @@
},
{
"BriefDescription": "TOR Inserts; WCiL misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f8601",
@@ -6280,17 +7620,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_DDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_DRAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc86f8601",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; WCiL misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86e8601",
@@ -6298,17 +7642,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_LOCAL_WCIL_DDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_LOCAL_DRAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc86e8601",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; WCiL misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86e8a01",
@@ -6316,8 +7664,10 @@
},
{
"BriefDescription": "TOR Inserts; WCiL misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f8a01",
@@ -6325,8 +7675,10 @@
},
{
"BriefDescription": "TOR Inserts; WCiL misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f0601",
@@ -6334,17 +7686,21 @@
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_DDR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_REMOTE_DRAM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc86f0601",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts; WCiL misses from local IA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_PARTIAL_STREAMING_WR_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f0a01",
@@ -6352,8 +7708,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670601",
@@ -6361,8 +7719,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remote memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670a01",
@@ -6370,8 +7730,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0601",
@@ -6379,8 +7741,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_REMOTE_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0a01",
@@ -6388,6 +7752,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO",
"PerPkg": "1",
@@ -6397,6 +7762,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_LOCAL",
"PerPkg": "1",
@@ -6406,6 +7772,7 @@
},
{
"BriefDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF",
"PerPkg": "1",
@@ -6415,6 +7782,7 @@
},
{
"BriefDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_LOCAL",
"PerPkg": "1",
@@ -6424,6 +7792,7 @@
},
{
"BriefDescription": "TOR Inserts : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_PREF_REMOTE",
"PerPkg": "1",
@@ -6433,6 +7802,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_RFO_REMOTE",
"PerPkg": "1",
@@ -6442,8 +7812,10 @@
},
{
"BriefDescription": "TOR Inserts : SpecItoMs issued by iA Cores that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_SPECITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : SpecItoMs issued by iA Cores that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc57fe01",
@@ -6451,8 +7823,10 @@
},
{
"BriefDescription": "TOR Inserts : UCRdFs issued by iA Cores that Missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_UCRDF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : UCRdFs issued by iA Cores that Missed LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc877de01",
@@ -6460,8 +7834,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86ffe01",
@@ -6469,8 +7845,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLF issued by iA Cores that Missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLF issued by iA Cores that Missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867fe01",
@@ -6478,8 +7856,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting DDR that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678601",
@@ -6487,8 +7867,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLFs issued by iA Cores targeting PMM that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678a01",
@@ -6496,8 +7878,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting DDR that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8601",
@@ -6505,8 +7889,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores targeting PMM that missed the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8a01",
@@ -6514,8 +7900,10 @@
},
{
"BriefDescription": "TOR Inserts : WiLs issued by iA Cores that Missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_MISS_WIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WiLs issued by iA Cores that Missed LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc87fde01",
@@ -6523,6 +7911,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_RFO",
"PerPkg": "1",
@@ -6532,6 +7921,7 @@
},
{
"BriefDescription": "TOR Inserts : RFO_Prefs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_RFO_PREF",
"PerPkg": "1",
@@ -6541,6 +7931,7 @@
},
{
"BriefDescription": "TOR Inserts : SpecItoMs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_SPECITOM",
"PerPkg": "1",
@@ -6550,8 +7941,10 @@
},
{
"BriefDescription": "TOR Inserts : WBEFtoEs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbEFtoEs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc3fff01",
@@ -6559,8 +7952,10 @@
},
{
"BriefDescription": "TOR Inserts : WBEFtoIs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBEFTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbEFtoIs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc37ff01",
@@ -6568,8 +7963,10 @@
},
{
"BriefDescription": "TOR Inserts : WBMtoEs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbMtoEs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc2fff01",
@@ -6577,8 +7974,10 @@
},
{
"BriefDescription": "TOR Inserts : WbMtoIs issued by an iA Cores. Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbMtoIs issued by iA Cores . (Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc27ff01",
@@ -6586,8 +7985,10 @@
},
{
"BriefDescription": "TOR Inserts : WBStoIs issued by an IA Core. Non Modified Write Backs",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WBSTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbStoIs issued by iA Cores . (Non Modified Write Backs) :Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc67ff01",
@@ -6595,8 +7996,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLs issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLs issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86fff01",
@@ -6604,8 +8007,10 @@
},
{
"BriefDescription": "TOR Inserts : WCiLF issued by iA Cores",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IA_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WCiLF issued by iA Cores : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867ff01",
@@ -6613,6 +8018,7 @@
},
{
"BriefDescription": "TOR Inserts : All requests from IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO",
"PerPkg": "1",
@@ -6622,8 +8028,10 @@
},
{
"BriefDescription": "TOR Inserts : CLFlushes issued by IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_CLFLUSH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : CLFlushes issued by IO Devices : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8c3ff04",
@@ -6631,6 +8039,7 @@
},
{
"BriefDescription": "TOR Inserts : All requests from IO Devices that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT",
"PerPkg": "1",
@@ -6640,6 +8049,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by IO Devices that Hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOM",
"PerPkg": "1",
@@ -6649,6 +8059,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_ITOMCACHENEAR",
"PerPkg": "1",
@@ -6658,6 +8069,7 @@
},
{
"BriefDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_PCIRDCUR",
"PerPkg": "1",
@@ -6667,8 +8079,10 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by IO Devices that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_HIT_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : RFOs issued by IO Devices that hit the LLC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc803fd04",
@@ -6676,6 +8090,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM",
"PerPkg": "1",
@@ -6685,6 +8100,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR",
"PerPkg": "1",
@@ -6694,6 +8110,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices to locally HOMed memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_LOCAL",
"PerPkg": "1",
@@ -6703,6 +8120,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices to remotely HOMed memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOMCACHENEAR_REMOTE",
"PerPkg": "1",
@@ -6712,6 +8130,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by IO Devices to locally HOMed memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_LOCAL",
"PerPkg": "1",
@@ -6721,6 +8140,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by IO Devices to remotely HOMed memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_ITOM_REMOTE",
"PerPkg": "1",
@@ -6730,6 +8150,7 @@
},
{
"BriefDescription": "TOR Inserts : All requests from IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS",
"PerPkg": "1",
@@ -6739,6 +8160,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMs issued by IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOM",
"PerPkg": "1",
@@ -6748,6 +8170,7 @@
},
{
"BriefDescription": "TOR Inserts : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_ITOMCACHENEAR",
"PerPkg": "1",
@@ -6757,6 +8180,7 @@
},
{
"BriefDescription": "TOR Inserts : PCIRdCurs issued by IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_PCIRDCUR",
"PerPkg": "1",
@@ -6766,6 +8190,7 @@
},
{
"BriefDescription": "TOR Inserts : RFOs issued by IO Devices that missed the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_MISS_RFO",
"PerPkg": "1",
@@ -6775,6 +8200,7 @@
},
{
"BriefDescription": "TOR Inserts : PCIRdCurs issued by IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR",
"PerPkg": "1",
@@ -6783,7 +8209,28 @@
"Unit": "CHA"
},
{
+ "BriefDescription": "PCIRDCUR (read) transactions from an IO device that addresses memory on the local socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_LOCAL",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices and targets local memory : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xc8f2ff04",
+ "Unit": "CHA"
+ },
+ {
+ "BriefDescription": "PCIRDCUR (read) transactions from an IO device that addresses memory on a remote socket",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x35",
+ "EventName": "UNC_CHA_TOR_INSERTS.IO_PCIRDCUR_REMOTE",
+ "PerPkg": "1",
+ "PublicDescription": "TOR Inserts : PCIRdCurs issued by IO Devices and targets remote memory : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
+ "UMask": "0xc8f37f04",
+ "Unit": "CHA"
+ },
+ {
"BriefDescription": "TOR Inserts : RFOs issued by IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_RFO",
"PerPkg": "1",
@@ -6793,8 +8240,10 @@
},
{
"BriefDescription": "TOR Inserts : WbMtoIs issued by IO Devices",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IO_WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WbMtoIs issued by IO Devices : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc23ff04",
@@ -6802,8 +8251,10 @@
},
{
"BriefDescription": "TOR Inserts : IPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : IPQ : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x8",
@@ -6811,8 +8262,10 @@
},
{
"BriefDescription": "TOR Inserts : IRQ - iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IRQ_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : IRQ - iA : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : From an iA Core",
"UMask": "0x1",
@@ -6820,8 +8273,10 @@
},
{
"BriefDescription": "TOR Inserts : IRQ - Non iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.IRQ_NON_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : IRQ - Non iA : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x10",
@@ -6829,24 +8284,30 @@
},
{
"BriefDescription": "TOR Inserts : Just ISOC",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.ISOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just ISOC : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just Local Targets",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOCAL_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Local Targets : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : All from Local iA and IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All from Local iA and IO : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : All locally initiated requests",
"UMask": "0xc000ff05",
@@ -6854,8 +8315,10 @@
},
{
"BriefDescription": "TOR Inserts : All from Local iA",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOC_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All from Local iA : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : All locally initiated requests from iA Cores",
"UMask": "0xc000ff01",
@@ -6863,8 +8326,10 @@
},
{
"BriefDescription": "TOR Inserts : All from Local IO",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.LOC_IO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : All from Local IO : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : All locally generated IO traffic",
"UMask": "0xc000ff04",
@@ -6872,72 +8337,90 @@
},
{
"BriefDescription": "TOR Inserts : Match the Opcode in b[29:19] of the extended umask field",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MATCH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Match the Opcode in b[29:19] of the extended umask field : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just Misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Misses : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : MMCFG Access",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.MMCFG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : MMCFG Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just NearMem",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just NearMem : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just NonCoherent",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.NONCOH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just NonCoherent : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Just NotNearMem",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.NOT_NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just NotNearMem : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : PMM Access",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : PMM Access : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : Match the PreMorphed Opcode in b[29:19] of the extended umask field",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PREMORPH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Match the PreMorphed Opcode in b[29:19] of the extended umask field : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : PRQ - IOSF",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PRQ_IOSF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : PRQ - IOSF : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : From a PCIe Device",
"UMask": "0x4",
@@ -6945,8 +8428,10 @@
},
{
"BriefDescription": "TOR Inserts : PRQ - Non IOSF",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.PRQ_NON_IOSF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : PRQ - Non IOSF : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x20",
@@ -6954,16 +8439,20 @@
},
{
"BriefDescription": "TOR Inserts : Just Remote Targets",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.REMOTE_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : Just Remote Targets : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Inserts : RRQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.RRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : RRQ : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x40",
@@ -6971,8 +8460,10 @@
},
{
"BriefDescription": "TOR Inserts : WBQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_CHA_TOR_INSERTS.WBQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Inserts : WBQ : Counts the number of entries successfully inserted into the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x80",
@@ -6980,16 +8471,20 @@
},
{
"BriefDescription": "TOR Occupancy : DDR4 Access",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DDR4 Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : SF/LLC Evictions",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.EVICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : SF/LLC Evictions : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : TOR allocation occurred as a result of SF/LLC evictions (came from the ISMQ)",
"UMask": "0x2",
@@ -6997,14 +8492,17 @@
},
{
"BriefDescription": "TOR Occupancy : Just Hits",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Hits : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : All requests from iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA",
"PerPkg": "1",
@@ -7014,8 +8512,10 @@
},
{
"BriefDescription": "TOR Occupancy : CLFlushes issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CLFlushes issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8c7ff01",
@@ -7023,8 +8523,10 @@
},
{
"BriefDescription": "TOR Occupancy : CLFlushOpts issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CLFLUSHOPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CLFlushOpts issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8d7ff01",
@@ -7032,6 +8534,7 @@
},
{
"BriefDescription": "TOR Occupancy : CRDs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD",
"PerPkg": "1",
@@ -7041,8 +8544,10 @@
},
{
"BriefDescription": "TOR Occupancy; CRd Pref from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_CRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Code read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc88fff01",
@@ -7050,6 +8555,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD",
"PerPkg": "1",
@@ -7059,8 +8565,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837ff01",
@@ -7068,8 +8576,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Opts issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Opts issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc827ff01",
@@ -7077,8 +8587,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8a7ff01",
@@ -7086,8 +8598,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_DRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc897ff01",
@@ -7095,6 +8609,7 @@
},
{
"BriefDescription": "TOR Occupancy : All requests from iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT",
"PerPkg": "1",
@@ -7104,8 +8619,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRds issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80ffd01",
@@ -7113,8 +8630,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_CRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88ffd01",
@@ -7122,8 +8641,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRds issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc817fd01",
@@ -7131,8 +8652,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837fd01",
@@ -7140,8 +8663,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Opts issued by iA Cores that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Opts issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc827fd01",
@@ -7149,8 +8674,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8a7fd01",
@@ -7158,8 +8685,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_DRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc897fd01",
@@ -7167,8 +8696,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Hit LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fd01",
@@ -7176,8 +8707,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcccffd01",
@@ -7185,8 +8718,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFDATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xccd7fd01",
@@ -7194,8 +8729,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_LLCPREFRFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xccc7fd01",
@@ -7203,8 +8740,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc807fd01",
@@ -7212,8 +8751,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_HIT_RFO_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc887fd01",
@@ -7221,8 +8762,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47ff01",
@@ -7230,8 +8773,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_ITOMCACHENEAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMCacheNears issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcd47ff01",
@@ -7239,8 +8784,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcccfff01",
@@ -7248,8 +8795,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefData issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFDATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xccd7ff01",
@@ -7257,8 +8806,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_LLCPREFRFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xccc7ff01",
@@ -7266,6 +8817,7 @@
},
{
"BriefDescription": "TOR Occupancy : All requests from iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS",
"PerPkg": "1",
@@ -7275,6 +8827,7 @@
},
{
"BriefDescription": "TOR Occupancy : CRds issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD",
"PerPkg": "1",
@@ -7284,8 +8837,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80efe01",
@@ -7293,8 +8848,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88ffe01",
@@ -7302,8 +8859,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88efe01",
@@ -7311,8 +8870,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_PREF_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd_Prefs issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc88f7e01",
@@ -7320,8 +8881,10 @@
},
{
"BriefDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_CRD_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CRd issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc80f7e01",
@@ -7329,6 +8892,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD",
"PerPkg": "1",
@@ -7338,8 +8902,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRDPTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRdPte issued by iA Cores due to a page walk that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc837fe01",
@@ -7347,6 +8913,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_DDR",
"PerPkg": "1",
@@ -7356,6 +8923,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL",
"PerPkg": "1",
@@ -7365,8 +8933,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8168601",
@@ -7374,8 +8944,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8168a01",
@@ -7383,8 +8955,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Opt issued by iA Cores that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Opt issued by iA Cores that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc827fe01",
@@ -7392,8 +8966,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_OPT_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Opt_Prefs issued by iA Cores that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8a7fe01",
@@ -7401,6 +8977,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PMM",
"PerPkg": "1",
@@ -7410,8 +8987,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc897fe01",
@@ -7419,8 +8998,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978601",
@@ -7428,8 +9009,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Pref misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc896fe01",
@@ -7437,8 +9020,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968601",
@@ -7446,8 +9031,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8968a01",
@@ -7455,8 +9042,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8978a01",
@@ -7464,8 +9053,10 @@
},
{
"BriefDescription": "TOR Occupancy; DRd Pref misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read prefetch from local IA that misses in the snoop filter",
"UMask": "0xc8977e01",
@@ -7473,8 +9064,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970601",
@@ -7482,8 +9075,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_PREF_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRd_Prefs issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8970a01",
@@ -7491,6 +9086,7 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE",
"PerPkg": "1",
@@ -7500,8 +9096,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting DDR Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8170601",
@@ -7509,8 +9107,10 @@
},
{
"BriefDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_DRD_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : DRds issued by iA Cores targeting PMM Mem that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8170a01",
@@ -7518,8 +9118,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiLF misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_FULL_STREAMING_WR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc867fe01",
@@ -7527,8 +9129,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiLF misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_FULL_STREAMING_WR_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8678601",
@@ -7536,8 +9140,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiLF misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_FULL_STREAMING_WR_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8668601",
@@ -7545,8 +9151,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiLF misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_FULL_STREAMING_WR_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8668a01",
@@ -7554,8 +9162,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiLF misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_FULL_STREAMING_WR_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8678a01",
@@ -7563,8 +9173,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiLF misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_FULL_STREAMING_WR_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8670601",
@@ -7572,8 +9184,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiLF misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_FULL_STREAMING_WR_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc8670a01",
@@ -7581,8 +9195,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc47fe01",
@@ -7590,8 +9206,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFCODE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefCode issued by iA Cores that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcccffe01",
@@ -7599,8 +9217,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFDATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefData issued by iA Cores that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xccd7fe01",
@@ -7608,8 +9228,10 @@
},
{
"BriefDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LLCPREFRFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : LLCPrefRFO issued by iA Cores that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xccc7fe01",
@@ -7617,8 +9239,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668601",
@@ -7626,8 +9250,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8668a01",
@@ -7635,8 +9261,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8601",
@@ -7644,8 +9272,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_LOCAL_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86e8a01",
@@ -7653,8 +9283,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiL misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_PARTIAL_STREAMING_WR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86ffe01",
@@ -7662,8 +9294,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiL misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_PARTIAL_STREAMING_WR_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f8601",
@@ -7671,8 +9305,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiL misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_PARTIAL_STREAMING_WR_LOCAL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86e8601",
@@ -7680,8 +9316,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiL misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_PARTIAL_STREAMING_WR_LOCAL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86e8a01",
@@ -7689,8 +9327,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiL misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_PARTIAL_STREAMING_WR_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f8a01",
@@ -7698,8 +9338,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiL misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_PARTIAL_STREAMING_WR_REMOTE_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f0601",
@@ -7707,8 +9349,10 @@
},
{
"BriefDescription": "TOR Occupancy; WCiL misses from local IA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_PARTIAL_STREAMING_WR_REMOTE_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy; Data read from local IA that misses in the snoop filter",
"UMask": "0xc86f0a01",
@@ -7716,8 +9360,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670601",
@@ -7725,8 +9371,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8670a01",
@@ -7734,8 +9382,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0601",
@@ -7743,8 +9393,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_REMOTE_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f0a01",
@@ -7752,6 +9404,7 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO",
"PerPkg": "1",
@@ -7761,8 +9414,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc806fe01",
@@ -7770,8 +9425,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc887fe01",
@@ -7779,8 +9436,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed locally : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc886fe01",
@@ -7788,8 +9447,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_PREF_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8877e01",
@@ -7797,8 +9458,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed remotely",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_RFO_REMOTE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by iA Cores that Missed the LLC - HOMed remotely : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8077e01",
@@ -7806,8 +9469,10 @@
},
{
"BriefDescription": "TOR Occupancy : SpecItoMs issued by iA Cores that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_SPECITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : SpecItoMs issued by iA Cores that missed the LLC: For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc57fe01",
@@ -7815,8 +9480,10 @@
},
{
"BriefDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_UCRDF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : UCRdFs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc877de01",
@@ -7824,8 +9491,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86ffe01",
@@ -7833,8 +9502,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores that Missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867fe01",
@@ -7842,8 +9513,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting DDR that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678601",
@@ -7851,8 +9524,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCILF_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLFs issued by iA Cores targeting PMM that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8678a01",
@@ -7860,8 +9535,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_DDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting DDR that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8601",
@@ -7869,8 +9546,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WCIL_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores targeting PMM that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86f8a01",
@@ -7878,8 +9557,10 @@
},
{
"BriefDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_MISS_WIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WiLs issued by iA Cores that Missed LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc87fde01",
@@ -7887,6 +9568,7 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO",
"PerPkg": "1",
@@ -7896,8 +9578,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_RFO_PREF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFO_Prefs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc887ff01",
@@ -7905,8 +9589,10 @@
},
{
"BriefDescription": "TOR Occupancy : SpecItoMs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_SPECITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : SpecItoMs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc57ff01",
@@ -7914,8 +9600,10 @@
},
{
"BriefDescription": "TOR Occupancy : WbMtoIs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WbMtoIs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc27ff01",
@@ -7923,8 +9611,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLs issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLs issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc86fff01",
@@ -7932,8 +9622,10 @@
},
{
"BriefDescription": "TOR Occupancy : WCiLF issued by iA Cores",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IA_WCILF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WCiLF issued by iA Cores : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc867ff01",
@@ -7941,6 +9633,7 @@
},
{
"BriefDescription": "TOR Occupancy : All requests from IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO",
"PerPkg": "1",
@@ -7950,8 +9643,10 @@
},
{
"BriefDescription": "TOR Occupancy : CLFlushes issued by IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_CLFLUSH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : CLFlushes issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8c3ff04",
@@ -7959,6 +9654,7 @@
},
{
"BriefDescription": "TOR Occupancy : All requests from IO Devices that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT",
"PerPkg": "1",
@@ -7968,8 +9664,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by IO Devices that Hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that Hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc43fd04",
@@ -7977,8 +9675,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_ITOMCACHENEAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcd43fd04",
@@ -7986,8 +9686,10 @@
},
{
"BriefDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_PCIRDCUR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc8f3fd04",
@@ -7995,8 +9697,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by IO Devices that hit the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_HIT_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that hit the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc803fd04",
@@ -8004,8 +9708,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc43ff04",
@@ -8013,8 +9719,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_ITOMCACHENEAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcd43ff04",
@@ -8022,6 +9730,7 @@
},
{
"BriefDescription": "TOR Occupancy : All requests from IO Devices that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS",
"PerPkg": "1",
@@ -8031,8 +9740,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMs issued by IO Devices that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMs issued by IO Devices that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc43fe04",
@@ -8040,8 +9751,10 @@
},
{
"BriefDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_ITOMCACHENEAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : ItoMCacheNears, indicating a partial write request, from IO Devices that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcd43fe04",
@@ -8049,6 +9762,7 @@
},
{
"BriefDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_PCIRDCUR",
"PerPkg": "1",
@@ -8058,8 +9772,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by IO Devices that missed the LLC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_MISS_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by IO Devices that missed the LLC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc803fe04",
@@ -8067,6 +9783,7 @@
},
{
"BriefDescription": "TOR Occupancy : PCIRdCurs issued by IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_PCIRDCUR",
"PerPkg": "1",
@@ -8076,8 +9793,10 @@
},
{
"BriefDescription": "TOR Occupancy : RFOs issued by IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : RFOs issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xc803ff04",
@@ -8085,8 +9804,10 @@
},
{
"BriefDescription": "TOR Occupancy : WbMtoIs issued by IO Devices",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IO_WBMTOI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : WbMtoIs issued by IO Devices : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0xcc23ff04",
@@ -8094,8 +9815,10 @@
},
{
"BriefDescription": "TOR Occupancy : IPQ",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IPQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : IPQ : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x8",
@@ -8103,8 +9826,10 @@
},
{
"BriefDescription": "TOR Occupancy : IRQ - iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : IRQ - iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : From an iA Core",
"UMask": "0x1",
@@ -8112,8 +9837,10 @@
},
{
"BriefDescription": "TOR Occupancy : IRQ - Non iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.IRQ_NON_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : IRQ - Non iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x10",
@@ -8121,24 +9848,30 @@
},
{
"BriefDescription": "TOR Occupancy : Just ISOC",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.ISOC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just ISOC : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just Local Targets",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOCAL_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Local Targets : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : All from Local iA and IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All from Local iA and IO : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : All locally initiated requests",
"UMask": "0xc000ff05",
@@ -8146,8 +9879,10 @@
},
{
"BriefDescription": "TOR Occupancy : All from Local iA",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All from Local iA : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : All locally initiated requests from iA Cores",
"UMask": "0xc000ff01",
@@ -8155,8 +9890,10 @@
},
{
"BriefDescription": "TOR Occupancy : All from Local IO",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.LOC_IO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : All from Local IO : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : All locally generated IO traffic",
"UMask": "0xc000ff04",
@@ -8164,72 +9901,90 @@
},
{
"BriefDescription": "TOR Occupancy : Match the Opcode in b[29:19] of the extended umask field",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MATCH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Match the Opcode in b[29:19] of the extended umask field : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just Misses",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Misses : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : MMCFG Access",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.MMCFG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : MMCFG Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just NearMem",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just NearMem : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just NonCoherent",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.NONCOH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just NonCoherent : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Just NotNearMem",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.NOT_NEARMEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just NotNearMem : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : PMM Access",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : PMM Access : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : Match the PreMorphed Opcode in b[29:19] of the extended umask field",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PREMORPH_OPC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Match the PreMorphed Opcode in b[29:19] of the extended umask field : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "TOR Occupancy : PRQ - IOSF",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : PRQ - IOSF : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts. : From a PCIe Device",
"UMask": "0x4",
@@ -8237,8 +9992,10 @@
},
{
"BriefDescription": "TOR Occupancy : PRQ - Non IOSF",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.PRQ_NON_IOSF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : PRQ - Non IOSF : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"UMask": "0x20",
@@ -8246,16 +10003,20 @@
},
{
"BriefDescription": "TOR Occupancy : Just Remote Targets",
+ "Counter": "0",
"EventCode": "0x36",
"EventName": "UNC_CHA_TOR_OCCUPANCY.REMOTE_TGT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "TOR Occupancy : Just Remote Targets : For each cycle, this event accumulates the number of valid entries in the TOR that match qualifications specified by the subevent. Does not include addressless requests such as locks and interrupts.",
"Unit": "CHA"
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8263,8 +10024,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8272,8 +10035,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8281,8 +10046,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8290,8 +10057,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8299,8 +10068,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_CHA_TxR_HORZ_ADS_USED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8308,8 +10079,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8317,8 +10090,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8326,8 +10101,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8335,8 +10112,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AK : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -8344,8 +10123,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AKC - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x80",
@@ -8353,8 +10134,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8362,8 +10145,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8371,8 +10156,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8380,8 +10167,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_CHA_TxR_HORZ_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : IV : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -8389,8 +10178,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8398,8 +10189,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8407,8 +10200,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8416,8 +10211,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AK : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8425,8 +10222,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8434,8 +10233,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8443,8 +10244,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8452,8 +10255,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8461,8 +10266,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_FULL.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : IV : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8470,8 +10277,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8479,8 +10288,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8488,8 +10299,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8497,8 +10310,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8506,8 +10321,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8515,8 +10332,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8524,8 +10343,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8533,8 +10354,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8542,8 +10365,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_CHA_TxR_HORZ_CYCLES_NE.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8551,8 +10376,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8560,8 +10387,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8569,8 +10398,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8578,8 +10409,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AK : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8587,8 +10420,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8596,8 +10431,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8605,8 +10442,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8614,8 +10453,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8623,8 +10464,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_CHA_TxR_HORZ_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : IV : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8632,8 +10475,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8641,8 +10486,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x10",
@@ -8650,8 +10497,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x1",
@@ -8659,8 +10508,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AK : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x2",
@@ -8668,8 +10519,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x80",
@@ -8677,8 +10530,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8686,8 +10541,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x40",
@@ -8695,8 +10552,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x4",
@@ -8704,8 +10563,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_CHA_TxR_HORZ_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x8",
@@ -8713,8 +10574,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8722,8 +10585,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8731,8 +10596,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8740,8 +10607,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AK : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8749,8 +10618,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8758,8 +10629,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8767,8 +10640,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8776,8 +10651,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8785,8 +10662,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_CHA_TxR_HORZ_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : IV : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8794,8 +10673,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x1",
@@ -8803,8 +10684,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x1",
@@ -8812,8 +10695,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AK : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x2",
@@ -8821,8 +10706,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x80",
@@ -8830,8 +10717,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x4",
@@ -8839,8 +10728,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x4",
@@ -8848,8 +10739,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_CHA_TxR_HORZ_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x8",
@@ -8857,8 +10750,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8866,8 +10761,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8875,8 +10772,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8884,8 +10783,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_CHA_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8893,8 +10794,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8902,8 +10805,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8911,8 +10816,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -8920,8 +10827,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -8929,8 +10838,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8938,8 +10849,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8947,8 +10860,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : IV - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_CHA_TxR_VERT_BYPASS.IV_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : IV - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -8956,8 +10871,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS_1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8965,8 +10882,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_CHA_TxR_VERT_BYPASS_1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -8974,8 +10893,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8983,8 +10904,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -8992,8 +10915,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9001,8 +10926,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9010,8 +10937,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9019,8 +10948,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9028,8 +10959,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9037,8 +10970,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9046,8 +10981,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_FULL1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9055,8 +10992,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9064,8 +11003,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -9073,8 +11014,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9082,8 +11025,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9091,8 +11036,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9100,8 +11047,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9109,8 +11058,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9118,8 +11069,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9127,8 +11080,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_CHA_TxR_VERT_CYCLES_NE1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9136,8 +11091,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_INSERTS0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9145,8 +11102,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_INSERTS0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -9154,8 +11113,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_INSERTS0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9163,8 +11124,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_INSERTS0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9172,8 +11135,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_INSERTS0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9181,8 +11146,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_INSERTS0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9190,8 +11157,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_CHA_TxR_VERT_INSERTS0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : IV - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9199,8 +11168,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_INSERTS1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9208,8 +11179,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_CHA_TxR_VERT_INSERTS1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9217,8 +11190,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -9226,8 +11201,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x10",
@@ -9235,8 +11212,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -9244,8 +11223,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x20",
@@ -9253,8 +11234,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x4",
@@ -9262,8 +11245,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x40",
@@ -9271,8 +11256,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_CHA_TxR_VERT_NACK0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x8",
@@ -9280,8 +11267,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_VERT_NACK1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -9289,8 +11278,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_CHA_TxR_VERT_NACK1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -9298,8 +11289,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9307,8 +11300,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -9316,8 +11311,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9325,8 +11322,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -9334,8 +11333,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -9343,8 +11344,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -9352,8 +11355,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : IV - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -9361,8 +11366,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -9370,8 +11377,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_CHA_TxR_VERT_OCCUPANCY1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -9379,8 +11388,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -9388,8 +11399,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x10",
@@ -9397,8 +11410,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -9406,8 +11421,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x20",
@@ -9415,8 +11432,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -9424,8 +11443,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x40",
@@ -9433,8 +11454,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_CHA_TxR_VERT_STARVED0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x8",
@@ -9442,8 +11465,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_CHA_TxR_VERT_STARVED1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -9451,8 +11476,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_CHA_TxR_VERT_STARVED1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -9460,8 +11487,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_CHA_TxR_VERT_STARVED1.TGC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -9469,8 +11498,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9478,8 +11509,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9487,8 +11520,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9496,8 +11531,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_CHA_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9505,8 +11542,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_VERT_RING_AKC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9514,8 +11553,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_VERT_RING_AKC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9523,8 +11564,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_VERT_RING_AKC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9532,8 +11575,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_CHA_VERT_RING_AKC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9541,8 +11586,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9550,8 +11597,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9559,8 +11608,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9568,8 +11619,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_CHA_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9577,8 +11630,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9586,8 +11641,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9595,8 +11652,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9604,8 +11663,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_CHA_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9613,8 +11674,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Down : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -9622,8 +11685,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_CHA_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Up : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -9631,8 +11696,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_CHA_VERT_RING_TGC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9640,8 +11707,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_CHA_VERT_RING_TGC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9649,8 +11718,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_CHA_VERT_RING_TGC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9658,8 +11729,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_CHA_VERT_RING_TGC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9667,8 +11740,10 @@
},
{
"BriefDescription": "WbPushMtoI : Pushed to LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_CHA_WB_PUSH_MTOI.LLC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbPushMtoI : Pushed to LLC : Counts the number of times when the CHA was received WbPushMtoI : Counts the number of times when the CHA was able to push WbPushMToI to LLC",
"UMask": "0x1",
@@ -9676,8 +11751,10 @@
},
{
"BriefDescription": "WbPushMtoI : Pushed to Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_CHA_WB_PUSH_MTOI.MEM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "WbPushMtoI : Pushed to Memory : Counts the number of times when the CHA was received WbPushMtoI : Counts the number of times when the CHA was unable to push WbPushMToI to LLC (hence pushed it to MEM)",
"UMask": "0x2",
@@ -9685,8 +11762,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC0 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 0 only.",
"UMask": "0x1",
@@ -9694,8 +11773,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC1 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 1 only.",
"UMask": "0x2",
@@ -9703,40 +11784,50 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC10",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC10 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 10 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC11",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC11",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC11 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 11 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC12",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC12",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC12 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 12 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC13",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC13",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC13 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 13 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC2",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC2 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 2 only.",
"UMask": "0x4",
@@ -9744,8 +11835,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC3 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 3 only.",
"UMask": "0x8",
@@ -9753,8 +11846,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC4",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC4 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 4 only.",
"UMask": "0x10",
@@ -9762,8 +11857,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC5",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC5 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 5 only.",
"UMask": "0x20",
@@ -9771,8 +11868,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC6",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC6 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 6 only.",
"UMask": "0x40",
@@ -9780,8 +11879,10 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC7",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC7 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 7 only.",
"UMask": "0x80",
@@ -9789,24 +11890,30 @@
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC8",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC8 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 8 only.",
"Unit": "CHA"
},
{
"BriefDescription": "CHA iMC CHNx WRITE Credits Empty : MC9",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_CHA_WRITE_NO_CREDITS.MC9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CHA iMC CHNx WRITE Credits Empty : MC9 : Counts the number of times when there are no credits available for sending WRITEs from the CHA into the iMC. In order to send WRITEs into the memory controller, the HA must first acquire a credit for the iMC's BL Ingress queue. : Filter for memory controller 9 only.",
"Unit": "CHA"
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 0?) - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP0_CONFLICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 0?) - Conflict : Number of XPT prefetches dropped due to AD CMS write port contention",
"UMask": "0x8",
@@ -9814,8 +11921,10 @@
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 0?) - No Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP0_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 0?) - No Credits : Number of XPT prefetches dropped due to lack of XPT AD egress credits",
"UMask": "0x4",
@@ -9823,8 +11932,10 @@
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 1?) - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP1_CONFLICT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 1?) - Conflict : Number of XPT prefetches dropped due to AD CMS write port contention",
"UMask": "0x80",
@@ -9832,8 +11943,10 @@
},
{
"BriefDescription": "XPT Prefetches : Dropped (on 1?) - No Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.DROP1_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Dropped (on 1?) - No Credits : Number of XPT prefetches dropped due to lack of XPT AD egress credits",
"UMask": "0x40",
@@ -9841,8 +11954,10 @@
},
{
"BriefDescription": "XPT Prefetches : Sent (on 0?)",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.SENT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Sent (on 0?) : Number of XPT prefetches sent",
"UMask": "0x1",
@@ -9850,8 +11965,10 @@
},
{
"BriefDescription": "XPT Prefetches : Sent (on 1?)",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_CHA_XPT_PREF.SENT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "XPT Prefetches : Sent (on 1?) : Number of XPT prefetches sent",
"UMask": "0x10",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/icelakex/uncore-interconnect.json
index a066a009c511..97bec6cfc79c 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/uncore-interconnect.json
@@ -1,8 +1,10 @@
[
{
"BriefDescription": "Total Write Cache Occupancy : Any Source",
+ "Counter": "0,1",
"EventCode": "0x0F",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.ANY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Total Write Cache Occupancy : Any Source : Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events. : Tracks all requests from any source port.",
"UMask": "0x1",
@@ -10,8 +12,10 @@
},
{
"BriefDescription": "Total Write Cache Occupancy : Snoops",
+ "Counter": "0,1",
"EventCode": "0x0F",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.IV_Q",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Total Write Cache Occupancy : Snoops : Accumulates the number of reads and writes that are outstanding in the uncore in each cycle. This is effectively the sum of the READ_OCCUPANCY and WRITE_OCCUPANCY events.",
"UMask": "0x2",
@@ -19,6 +23,7 @@
},
{
"BriefDescription": "Total IRP occupancy of inbound read and write requests to coherent memory.",
+ "Counter": "0,1",
"EventCode": "0x0f",
"EventName": "UNC_I_CACHE_TOTAL_OCCUPANCY.MEM",
"PerPkg": "1",
@@ -28,6 +33,7 @@
},
{
"BriefDescription": "Clockticks of the IO coherency tracker (IRP)",
+ "Counter": "0,1",
"EventCode": "0x01",
"EventName": "UNC_I_CLOCKTICKS",
"PerPkg": "1",
@@ -35,8 +41,10 @@
},
{
"BriefDescription": "Coherent Ops : CLFlush",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.CLFLUSH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Coherent Ops : CLFlush : Counts the number of coherency related operations serviced by the IRP",
"UMask": "0x80",
@@ -44,6 +52,7 @@
},
{
"BriefDescription": "PCIITOM request issued by the IRP unit to the mesh with the intention of writing a full cacheline.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.PCITOM",
"PerPkg": "1",
@@ -53,8 +62,10 @@
},
{
"BriefDescription": "RFO request issued by the IRP unit to the mesh with the intention of writing a partial cacheline.",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.RFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RFO request issued by the IRP unit to the mesh with the intention of writing a partial cacheline to coherent memory. RFO is a Read For Ownership command that requests ownership of the cacheline and moves data from the mesh to IRP cache.",
"UMask": "0x8",
@@ -62,6 +73,7 @@
},
{
"BriefDescription": "Coherent Ops : WbMtoI",
+ "Counter": "0,1",
"EventCode": "0x10",
"EventName": "UNC_I_COHERENT_OPS.WBMTOI",
"PerPkg": "1",
@@ -71,6 +83,7 @@
},
{
"BriefDescription": "FAF RF full",
+ "Counter": "0,1",
"EventCode": "0x17",
"EventName": "UNC_I_FAF_FULL",
"PerPkg": "1",
@@ -78,6 +91,7 @@
},
{
"BriefDescription": "Inbound read requests received by the IRP and inserted into the FAF queue.",
+ "Counter": "0,1",
"EventCode": "0x18",
"EventName": "UNC_I_FAF_INSERTS",
"PerPkg": "1",
@@ -86,6 +100,7 @@
},
{
"BriefDescription": "Occupancy of the IRP FAF queue.",
+ "Counter": "0,1",
"EventCode": "0x19",
"EventName": "UNC_I_FAF_OCCUPANCY",
"PerPkg": "1",
@@ -94,6 +109,7 @@
},
{
"BriefDescription": "FAF allocation -- sent to ADQ",
+ "Counter": "0,1",
"EventCode": "0x16",
"EventName": "UNC_I_FAF_TRANSACTIONS",
"PerPkg": "1",
@@ -101,14 +117,17 @@
},
{
"BriefDescription": ": All Inserts Outbound (BL, AK, Snoops)",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_I_IRP_ALL.EVICTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": ": All Inserts Inbound (p2p + faf + cset)",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_I_IRP_ALL.INBOUND_INSERTS",
"PerPkg": "1",
@@ -117,78 +136,97 @@
},
{
"BriefDescription": ": All Inserts Outbound (BL, AK, Snoops)",
+ "Counter": "0,1",
"EventCode": "0x20",
"EventName": "UNC_I_IRP_ALL.OUTBOUND_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Atomic Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1E",
"EventName": "UNC_I_MISC0.2ND_ATOMIC_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Read Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.2ND_RD_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Cache Inserts of Write Transactions as Secondary",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.2ND_WR_INSERT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Fastpath Rejects",
+ "Counter": "0,1",
"EventCode": "0x1E",
"EventName": "UNC_I_MISC0.FAST_REJ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Fastpath Requests",
+ "Counter": "0,1",
"EventCode": "0x1e",
"EventName": "UNC_I_MISC0.FAST_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Fastpath Transfers From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x1E",
"EventName": "UNC_I_MISC0.FAST_XFER",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Prefetch Ack Hints From Primary to Secondary",
+ "Counter": "0,1",
"EventCode": "0x1E",
"EventName": "UNC_I_MISC0.PF_ACK_HINT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Counts Timeouts - Set 0 : Slow path fwpf didn't find prefetch",
+ "Counter": "0,1",
"EventCode": "0x1E",
"EventName": "UNC_I_MISC0.SLOWPATH_FWPF_NO_PRF",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "Misc Events - Set 1 : Lost Forward",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_I_MISC1.LOST_FWD",
"PerPkg": "1",
@@ -198,8 +236,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Received Invalid",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Received Invalid : Secondary received a transfer that did not have sufficient MESI state",
"UMask": "0x20",
@@ -207,8 +247,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Received Valid",
+ "Counter": "0,1",
"EventCode": "0x1F",
"EventName": "UNC_I_MISC1.SEC_RCVD_VLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Received Valid : Secondary received a transfer that did have sufficient MESI state",
"UMask": "0x40",
@@ -216,8 +258,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of E Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_E",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of E Line : Secondary received a transfer that did have sufficient MESI state",
"UMask": "0x4",
@@ -225,8 +269,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of I Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of I Line : Snoop took cacheline ownership before write from data was committed.",
"UMask": "0x1",
@@ -234,8 +280,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of M Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_M",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of M Line : Snoop took cacheline ownership before write from data was committed.",
"UMask": "0x8",
@@ -243,8 +291,10 @@
},
{
"BriefDescription": "Misc Events - Set 1 : Slow Transfer of S Line",
+ "Counter": "0,1",
"EventCode": "0x1f",
"EventName": "UNC_I_MISC1.SLOW_S",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Misc Events - Set 1 : Slow Transfer of S Line : Secondary received a transfer that did not have sufficient MESI state",
"UMask": "0x2",
@@ -252,88 +302,110 @@
},
{
"BriefDescription": "P2P Requests",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_I_P2P_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "P2P Requests : P2P requests from the ITC",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Occupancy",
+ "Counter": "0,1",
"EventCode": "0x15",
"EventName": "UNC_I_P2P_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "P2P Occupancy : P2P B & S Queue Occupancy",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : P2P completions",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.CMPL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : match if local only",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.LOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : match if local and target matches",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.LOC_AND_TGT_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : P2P Message",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.MSG",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : P2P reads",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : Match if remote only",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.REM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : match if remote and target matches",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.REM_AND_TGT_MATCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "P2P Transactions : P2P Writes",
+ "Counter": "0,1",
"EventCode": "0x13",
"EventName": "UNC_I_P2P_TRANSACTIONS.WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Responses to snoops of any type that hit M, E, S or I line in the IIO",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit M, E, S or I line in the IIO",
"UMask": "0x7e",
@@ -341,8 +413,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit E or S line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_ES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit E or S line in the IIO cache",
"UMask": "0x74",
@@ -350,8 +424,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit I line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_I",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that hit I line in the IIO cache",
"UMask": "0x72",
@@ -359,6 +435,7 @@
},
{
"BriefDescription": "Responses to snoops of any type that hit M line in the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_HIT_M",
"PerPkg": "1",
@@ -368,8 +445,10 @@
},
{
"BriefDescription": "Responses to snoops of any type that miss the IIO cache",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.ALL_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Responses to snoops of any type (code, data, invalidate) that miss the IIO cache",
"UMask": "0x71",
@@ -377,64 +456,80 @@
},
{
"BriefDescription": "Snoop Responses : Hit E or S",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_ES",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : Hit I",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : Hit M",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.HIT_M",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : Miss",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : SnpCode",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPCODE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : SnpData",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPDATA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "IRP"
},
{
"BriefDescription": "Snoop Responses : SnpInv",
+ "Counter": "0,1",
"EventCode": "0x12",
"EventName": "UNC_I_SNOOP_RESP.SNPINV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "IRP"
},
{
"BriefDescription": "Inbound Transaction Count : Atomic",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.ATOMIC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Inbound Transaction Count : Atomic : Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. : Tracks the number of atomic transactions",
"UMask": "0x10",
@@ -442,8 +537,10 @@
},
{
"BriefDescription": "Inbound Transaction Count : Other",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.OTHER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Inbound Transaction Count : Other : Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. : Tracks the number of 'other' kinds of transactions.",
"UMask": "0x20",
@@ -451,8 +548,10 @@
},
{
"BriefDescription": "Inbound Transaction Count : Writes",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.WRITES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Inbound Transaction Count : Writes : Counts the number of Inbound transactions from the IRP to the Uncore. This can be filtered based on request type in addition to the source queue. Note the special filtering equation. We do OR-reduction on the request type. If the SOURCE bit is set, then we also do AND qualification based on the source portID. : Tracks only write requests. Each write request should have a prefetch, so there is no need to explicitly track these requests. For writes that are tickled and have to retry, the counter will be incremented for each retry.",
"UMask": "0x2",
@@ -460,6 +559,7 @@
},
{
"BriefDescription": "Inbound write (fast path) requests received by the IRP.",
+ "Counter": "0,1",
"EventCode": "0x11",
"EventName": "UNC_I_TRANSACTIONS.WR_PREF",
"PerPkg": "1",
@@ -469,134 +569,170 @@
},
{
"BriefDescription": "AK Egress Allocations",
+ "Counter": "0,1",
"EventCode": "0x0B",
"EventName": "UNC_I_TxC_AK_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x05",
"EventName": "UNC_I_TxC_BL_DRS_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_I_TxC_BL_DRS_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL DRS Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x08",
"EventName": "UNC_I_TxC_BL_DRS_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x06",
"EventName": "UNC_I_TxC_BL_NCB_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_I_TxC_BL_NCB_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCB Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x09",
"EventName": "UNC_I_TxC_BL_NCB_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Cycles Full",
+ "Counter": "0,1",
"EventCode": "0x07",
"EventName": "UNC_I_TxC_BL_NCS_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Inserts",
+ "Counter": "0,1",
"EventCode": "0x04",
"EventName": "UNC_I_TxC_BL_NCS_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "BL NCS Egress Occupancy",
+ "Counter": "0,1",
"EventCode": "0x0A",
"EventName": "UNC_I_TxC_BL_NCS_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IRP"
},
{
"BriefDescription": "UNC_I_TxR2_AD01_STALL_CREDIT_CYCLES",
+ "Counter": "0,1",
"EventCode": "0x1C",
"EventName": "UNC_I_TxR2_AD01_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Counts the number times when it is not possible to issue a request to the M2PCIe because there are no Egress Credits available on AD0, A1 or AD0&AD1 both. Stalls on both AD0 and AD1 will count as 2",
"Unit": "IRP"
},
{
"BriefDescription": "No AD0 Egress Credits Stalls",
+ "Counter": "0,1",
"EventCode": "0x1A",
"EventName": "UNC_I_TxR2_AD0_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No AD0 Egress Credits Stalls : Counts the number times when it is not possible to issue a request to the M2PCIe because there are no AD0 Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "No AD1 Egress Credits Stalls",
+ "Counter": "0,1",
"EventCode": "0x1B",
"EventName": "UNC_I_TxR2_AD1_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No AD1 Egress Credits Stalls : Counts the number times when it is not possible to issue a request to the M2PCIe because there are no AD1 Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "No BL Egress Credit Stalls",
+ "Counter": "0,1",
"EventCode": "0x1D",
"EventName": "UNC_I_TxR2_BL_STALL_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No BL Egress Credit Stalls : Counts the number times when it is not possible to issue data to the R2PCIe because there are no BL Egress Credits available.",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0x0D",
"EventName": "UNC_I_TxS_DATA_INSERTS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outbound Read Requests : Counts the number of requests issued to the switch (towards the devices).",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Read Requests",
+ "Counter": "0,1",
"EventCode": "0x0E",
"EventName": "UNC_I_TxS_DATA_INSERTS_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outbound Read Requests : Counts the number of requests issued to the switch (towards the devices).",
"Unit": "IRP"
},
{
"BriefDescription": "Outbound Request Queue Occupancy",
+ "Counter": "0,1",
"EventCode": "0x0C",
"EventName": "UNC_I_TxS_REQUEST_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Outbound Request Queue Occupancy : Accumulates the number of outstanding outbound requests from the IRP to the switch (towards the devices). This can be used in conjunction with the allocations event in order to calculate average latency of outbound requests.",
"Unit": "IRP"
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -604,8 +740,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -613,8 +751,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -622,8 +762,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -631,8 +773,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -640,8 +784,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -649,8 +795,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -658,8 +806,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -667,8 +817,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -676,8 +828,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -685,8 +839,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M2M_AG0_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -694,8 +850,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -703,8 +861,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -712,8 +872,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -721,8 +883,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -730,8 +894,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -739,8 +905,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -748,8 +916,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -757,8 +927,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -766,8 +938,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -775,8 +949,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -784,8 +960,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M2M_AG0_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -793,8 +971,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -802,8 +982,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -811,8 +993,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -820,8 +1004,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -829,8 +1015,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -838,8 +1026,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -847,8 +1037,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -856,8 +1048,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -865,8 +1059,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -874,8 +1070,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -883,8 +1081,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M2M_AG0_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -892,8 +1092,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -901,8 +1103,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -910,8 +1114,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -919,8 +1125,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -928,8 +1136,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -937,8 +1147,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -946,8 +1158,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -955,8 +1169,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -964,8 +1180,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -973,8 +1191,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -982,8 +1202,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_M2M_AG0_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -991,8 +1213,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -1000,8 +1224,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -1009,8 +1235,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -1018,8 +1246,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -1027,8 +1257,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -1036,8 +1268,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -1045,8 +1279,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -1054,8 +1290,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -1063,8 +1301,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -1072,8 +1312,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -1081,8 +1323,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M2M_AG1_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -1090,8 +1334,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -1099,8 +1345,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -1108,8 +1356,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -1117,8 +1367,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -1126,8 +1378,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -1135,8 +1389,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -1144,8 +1400,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -1153,8 +1411,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -1162,8 +1422,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -1171,8 +1433,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -1180,8 +1444,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M2M_AG1_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -1189,8 +1455,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -1198,8 +1466,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -1207,8 +1477,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -1216,8 +1488,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -1225,8 +1499,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -1234,8 +1510,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -1243,8 +1521,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -1252,8 +1532,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -1261,8 +1543,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -1270,8 +1554,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -1279,8 +1565,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_M2M_AG1_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -1288,8 +1576,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -1297,8 +1587,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -1306,8 +1598,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -1315,8 +1609,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -1324,8 +1620,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -1333,8 +1631,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -1342,8 +1642,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -1351,8 +1653,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -1360,8 +1664,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -1369,8 +1675,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -1378,8 +1686,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_M2M_AG1_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -1387,44 +1697,54 @@
},
{
"BriefDescription": "M2M to iMC Bypass : Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M2M_BYPASS_M2M_EGRESS.NOT_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC Bypass : Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M2M_BYPASS_M2M_EGRESS.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC Bypass : Not Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_BYPASS_M2M_INGRESS.NOT_TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC Bypass : Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M2M_BYPASS_M2M_INGRESS.TAKEN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Clockticks of the mesh to memory (M2M)",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M2M_CLOCKTICKS",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_M2M_CMS_CLOCKTICKS",
"PerPkg": "1",
@@ -1432,113 +1752,142 @@
},
{
"BriefDescription": "Cycles when direct to core mode, which bypasses the CHA, was disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_DIRSTATE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_NOTFORKED",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M2M_DIRECT2CORE_NOT_TAKEN_NOTFORKED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number of reads in which direct to core transaction was overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2M_DIRECT2CORE_TXN_OVERRIDE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number of reads in which direct to Intel UPI transactions were overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles when Direct2UPI was Disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M2M_DIRECT2UPI_NOT_TAKEN_DIRSTATE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number of reads that a message sent direct2 Intel UPI was overridden",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_M2M_DIRECT2UPI_TXN_OVERRIDE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Clockticks of the mesh to PCI (M2P)",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Hit : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M2M_DIRECTORY_HIT.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory Lookups : Found in any state",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.ANY",
"PerPkg": "1",
@@ -1547,6 +1896,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Lookups : Found in A state",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_A",
"PerPkg": "1",
@@ -1555,6 +1905,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Lookups : Found in I state",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_I",
"PerPkg": "1",
@@ -1563,6 +1914,7 @@
},
{
"BriefDescription": "Multi-socket cacheline Directory Lookups : Found in S state",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M2M_DIRECTORY_LOOKUP.STATE_S",
"PerPkg": "1",
@@ -1571,70 +1923,87 @@
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On NonDirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.CLEAN_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in A State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_A",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in I State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_I",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in L State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_P",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Directory Miss : On Dirty Line in S State",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M2M_DIRECTORY_MISS.DIRTY_S",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Multi-socket cacheline Directory Updates : From/to any state. Note: event counts are incorrect in 2LM mode.",
+ "Counter": "0,1,2,3",
"EventCode": "0x2e",
"EventName": "UNC_M2M_DIRECTORY_UPDATE.ANY",
"PerPkg": "1",
@@ -1643,8 +2012,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.DPT_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Local : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle triggered by this tile",
"UMask": "0x4",
@@ -1652,8 +2023,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.DPT_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle received by this tile",
"UMask": "0x8",
@@ -1661,8 +2034,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.DPT_STALL_IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - IV : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while regular IVs were received, causing DPT to be stalled",
"UMask": "0x40",
@@ -1670,8 +2045,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - No Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.DPT_STALL_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - No Credit : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while credit not available causing DPT to be stalled",
"UMask": "0x80",
@@ -1679,8 +2056,10 @@
},
{
"BriefDescription": "Distress signal asserted : Horizontal",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.HORZ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Horizontal : Counts the number of cycles either the local or incoming distress signals are asserted. : If TGR egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x2",
@@ -1688,8 +2067,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.PMM_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Local : Counts the number of cycles either the local or incoming distress signals are asserted. : If the CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x10",
@@ -1697,8 +2078,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.PMM_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : If another CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x20",
@@ -1706,8 +2089,10 @@
},
{
"BriefDescription": "Distress signal asserted : Vertical",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2M_DISTRESS_ASSERTED.VERT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Vertical : Counts the number of cycles either the local or incoming distress signals are asserted. : If IRQ egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x1",
@@ -1715,22 +2100,28 @@
},
{
"BriefDescription": "UNC_M2M_DISTRESS_PMM",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "UNC_M2M_DISTRESS_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_DISTRESS_PMM_MEMMODE",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "UNC_M2M_DISTRESS_PMM_MEMMODE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -1738,8 +2129,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M2M_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -1747,8 +2140,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1756,8 +2151,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1765,8 +2162,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1774,8 +2173,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M2M_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1783,8 +2184,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M2M_HORZ_RING_AKC_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1792,8 +2195,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M2M_HORZ_RING_AKC_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1801,8 +2206,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M2M_HORZ_RING_AKC_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1810,8 +2217,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M2M_HORZ_RING_AKC_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1819,8 +2228,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1828,8 +2239,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1837,8 +2250,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1846,8 +2261,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M2M_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1855,8 +2272,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -1864,8 +2283,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -1873,8 +2294,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -1882,8 +2305,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M2M_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -1891,8 +2316,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Left",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M2M_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Left : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -1900,8 +2327,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Right",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M2M_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Right : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -1909,64 +2338,80 @@
},
{
"BriefDescription": "M2M Reads Issued to iMC : All, regardless of priority. - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x704",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : All, regardless of priority. - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH0_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x104",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : From TGR - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH0_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x140",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : Critical Priority - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH0_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x102",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : Normal Priority - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH0_NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x101",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : DDR, acting as Cache - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x110",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : DDR - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH0_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x108",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : PMM - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH0_TO_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2M Reads Issued to iMC : PMM - Ch0 : Counts all PMM dimm read requests(full line) sent from M2M to iMC",
"UMask": "0x120",
@@ -1974,56 +2419,70 @@
},
{
"BriefDescription": "M2M Reads Issued to iMC : All, regardless of priority. - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH1_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x204",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : From TGR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH1_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x240",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : Critical Priority - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH1_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x202",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : Normal Priority - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH1_NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x201",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : DDR, acting as Cache - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x210",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : DDR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH1_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x208",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : PMM - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH1_TO_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2M Reads Issued to iMC : PMM - Ch1 : Counts all PMM dimm read requests(full line) sent from M2M to iMC",
"UMask": "0x220",
@@ -2031,54 +2490,67 @@
},
{
"BriefDescription": "M2M Reads Issued to iMC : From TGR - Ch2",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.CH2_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x440",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : From TGR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x740",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : Critical Priority - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x702",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : Normal Priority - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.NORMAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x701",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : DDR, acting as Cache - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x710",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : DDR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x708",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Reads Issued to iMC : PMM - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x37",
"EventName": "UNC_M2M_IMC_READS.TO_PMM",
"PerPkg": "1",
@@ -2087,93 +2559,117 @@
},
{
"BriefDescription": "M2M Writes Issued to iMC : All Writes - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1c10",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : All Writes - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x410",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : From TGR - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Full Line Non-ISOCH - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x401",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : ISOCH Full Line - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x404",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Non-Inclusive - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Non-Inclusive Miss - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_NI_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Partial Non-ISOCH - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x402",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : ISOCH Partial - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x408",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : DDR, acting as Cache - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x440",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : DDR - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x420",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : PMM - Ch0",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH0_TO_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2M Writes Issued to iMC : PMM - Ch0 : Counts all PMM dimm writes requests(full line and partial) sent from M2M to iMC",
"UMask": "0x480",
@@ -2181,85 +2677,107 @@
},
{
"BriefDescription": "M2M Writes Issued to iMC : All Writes - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x810",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : From TGR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Full Line Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x801",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : ISOCH Full Line - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x804",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Non-Inclusive - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Non-Inclusive Miss - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_NI_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Partial Non-ISOCH - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x802",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : ISOCH Partial - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x808",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : DDR, acting as Cache - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x840",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : DDR - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x820",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : PMM - Ch1",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.CH1_TO_PMM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2M Writes Issued to iMC : PMM - Ch1 : Counts all PMM dimm writes requests(full line and partial) sent from M2M to iMC",
"UMask": "0x880",
@@ -2267,75 +2785,94 @@
},
{
"BriefDescription": "M2M Writes Issued to iMC : From TGR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.FROM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Full Line Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1c01",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : ISOCH Full Line - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.FULL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1c04",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Non-Inclusive - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.NI",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Non-Inclusive Miss - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.NI_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : Partial Non-ISOCH - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.PARTIAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1c02",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : ISOCH Partial - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.PARTIAL_ISOCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1c08",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : DDR, acting as Cache - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.TO_DDR_AS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1c40",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : DDR - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.TO_DDR_AS_MEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1c20",
"Unit": "M2M"
},
{
"BriefDescription": "M2M Writes Issued to iMC : PMM - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_M2M_IMC_WRITES.TO_PMM",
"PerPkg": "1",
@@ -2344,281 +2881,353 @@
},
{
"BriefDescription": "Write Tracker Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x64",
"EventName": "UNC_M2M_MIRR_WRQ_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x65",
"EventName": "UNC_M2M_MIRR_WRQ_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI0",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_M2M_MISC_EXTERNAL.MBE_INST0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI1",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_M2M_MISC_EXTERNAL.MBE_INST1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Number Packet Header Matches : MC Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M2M_PKT_MATCH.MC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Number Packet Header Matches : Mesh Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M2M_PKT_MATCH.MESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_CIS_DROPS",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "UNC_M2M_PREFCAM_CIS_DROPS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Full : All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_M2M_PREFCAM_CYCLES_FULL.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Full : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_M2M_PREFCAM_CYCLES_FULL.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Full : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_M2M_PREFCAM_CYCLES_FULL.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Full : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x6B",
"EventName": "UNC_M2M_PREFCAM_CYCLES_FULL.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Not Empty : All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_M2M_PREFCAM_CYCLES_NE.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Not Empty : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_M2M_PREFCAM_CYCLES_NE.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Not Empty : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_M2M_PREFCAM_CYCLES_NE.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Cycles Not Empty : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x6C",
"EventName": "UNC_M2M_PREFCAM_CYCLES_NE.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH0_HITA0_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH0_HITA1_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH0_MISS_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH0_RSP_PDRESET",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH1_HITA0_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH1_HITA1_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH1_MISS_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH1_RSP_PDRESET",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH2_HITA0_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH2_HITA1_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH2_MISS_INVAL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Deallocs",
+ "Counter": "0,1,2,3",
"EventCode": "0x6E",
"EventName": "UNC_M2M_PREFCAM_DEALLOCS.CH2_RSP_PDRESET",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : UPI - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH0_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : XPT - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH0_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : UPI - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH1_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : XPT - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH1_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : UPI - Ch 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH2_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : XPT - Ch 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x6F",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.CH2_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2a",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped : XPT - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x6f",
"EventName": "UNC_M2M_PREFCAM_DEMAND_DROPS.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x15",
"Unit": "M2M"
},
{
"BriefDescription": "Demands Merged with CAMed Prefetches : XPT & UPI- Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.CH0_XPTUPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Demands Merged with CAMed Prefetches : XPT & UPI - Ch 0",
"UMask": "0x1",
@@ -2626,8 +3235,10 @@
},
{
"BriefDescription": "Demands Merged with CAMed Prefetches : XPT & UPI - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.CH1_XPTUPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Demands Merged with CAMed Prefetches : XPT & UPI- Ch 1",
"UMask": "0x4",
@@ -2635,8 +3246,10 @@
},
{
"BriefDescription": "Demands Merged with CAMed Prefetches : XPT & UPI- Ch 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.CH2_XPTUPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Demands Merged with CAMed Prefetches : XPT & UPI - Ch 2",
"UMask": "0x10",
@@ -2644,8 +3257,10 @@
},
{
"BriefDescription": "Demands Merged with CAMed Prefetches : XPT & UPI- All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_M2M_PREFCAM_DEMAND_MERGE.XPTUPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Demands Merged with CAMed Prefetches : XPT & UPI - All Channels",
"UMask": "0x15",
@@ -2653,8 +3268,10 @@
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches : XPT & UPI - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.CH0_XPTUPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Demands Not Merged with CAMed Prefetches : XPT & UPI- Ch 0",
"UMask": "0x1",
@@ -2662,8 +3279,10 @@
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches : XPT & UPI - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.CH1_XPTUPI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Demands Not Merged with CAMed Prefetches : XPT & UPI- Ch 1",
"UMask": "0x4",
@@ -2671,460 +3290,578 @@
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches : XPT & UPI - Ch 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.CH2_XPTUPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Demands Not Merged with CAMed Prefetches : XPT & UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "UNC_M2M_PREFCAM_DEMAND_NO_MERGE.XPTUPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x15",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.ERRORBLK_RxC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.NOT_PF_SAD_REGION",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.PF_AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.PF_CAM_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.PF_CAM_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.PF_SECURE_DROP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.RPQ_PROXY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.STOP_B2B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.UPI_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.WPQ_PROXY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch0 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x70",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH0.XPT_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.ERRORBLK_RxC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.NOT_PF_SAD_REGION",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.PF_AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.PF_CAM_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.PF_CAM_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.PF_SECURE_DROP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.RPQ_PROXY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.STOP_B2B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.UPI_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.WPQ_PROXY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch1 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x71",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH1.XPT_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.ERRORBLK_RxC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.NOT_PF_SAD_REGION",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.PF_AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.PF_CAM_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.PF_CAM_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.PF_SECURE_DROP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.RPQ_PROXY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.STOP_B2B",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.UPI_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.WPQ_PROXY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2M"
},
{
"BriefDescription": "Data Prefetches Dropped Ch2 - Reasons",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_M2M_PREFCAM_DROP_REASONS_CH2.XPT_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH0_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - Ch 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH0_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH1_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - Ch 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH1_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - Ch 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH2_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - Ch 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_M2M_PREFCAM_INSERTS.CH2_XPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : UPI - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x6d",
"EventName": "UNC_M2M_PREFCAM_INSERTS.UPI_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2a",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Inserts : XPT - All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x6D",
"EventName": "UNC_M2M_PREFCAM_INSERTS.XPT_ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x15",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Occupancy : All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Prefetch CAM Occupancy : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x6A",
"EventName": "UNC_M2M_PREFCAM_OCCUPANCY.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": ": All Channels",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UNC_M2M_PREFCAM_RESP_MISS.ALLCH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x7",
"Unit": "M2M"
},
{
"BriefDescription": ": Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UNC_M2M_PREFCAM_RESP_MISS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": ": Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UNC_M2M_PREFCAM_RESP_MISS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": ": Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UNC_M2M_PREFCAM_RESP_MISS.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_CYCLES_NE",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "UNC_M2M_PREFCAM_RxC_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.1LM_POSTED",
+ "Counter": "0,1,2,3",
"EventCode": "0x7A",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.1LM_POSTED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.CIS",
+ "Counter": "0,1,2,3",
"EventCode": "0x7A",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.CIS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.PMM_MEMMODE_ACCEPT",
+ "Counter": "0,1,2,3",
"EventCode": "0x7A",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.PMM_MEMMODE_ACCEPT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_DEALLOCS.SQUASHED",
+ "Counter": "0,1,2,3",
"EventCode": "0x7A",
"EventName": "UNC_M2M_PREFCAM_RxC_DEALLOCS.SQUASHED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_INSERTS",
+ "Counter": "0,1,2,3",
"EventCode": "0x78",
"EventName": "UNC_M2M_PREFCAM_RxC_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_PREFCAM_RxC_OCCUPANCY",
+ "Counter": "0,1,2,3",
"EventCode": "0x77",
"EventName": "UNC_M2M_PREFCAM_RxC_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AD : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -3132,8 +3869,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AK : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -3141,8 +3880,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : BL : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -3150,8 +3891,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M2M_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : IV : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -3159,8 +3902,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : AD : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -3168,8 +3913,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -3177,8 +3924,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x10",
@@ -3186,8 +3935,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -3195,8 +3946,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M2M_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -3204,237 +3957,299 @@
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : Acknowledgements to Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M2M_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M2M_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Source Throttle",
+ "Counter": "0,1,2,3",
"EventCode": "0xae",
"EventName": "UNC_M2M_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2M_RPQ_NO_REG_CRD.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2M_RPQ_NO_REG_CRD.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Regular : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2M_RPQ_NO_REG_CRD.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC RPQ Cycles w/Credits - PMM : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "UNC_M2M_RPQ_NO_REG_CRD_PMM.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC RPQ Cycles w/Credits - PMM : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "UNC_M2M_RPQ_NO_REG_CRD_PMM.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC RPQ Cycles w/Credits - PMM : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "UNC_M2M_RPQ_NO_REG_CRD_PMM.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_NO_SPEC_CRD.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_NO_SPEC_CRD.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M to iMC RPQ Cycles w/Credits - Special : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2M_RPQ_NO_SPEC_CRD.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M2M_RxC_AD_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_M2M_RxC_AD_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M2M_RxC_AD_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M2M_RxC_AD_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Ingress (from CMS) Occupancy - Prefetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x77",
"EventName": "UNC_M2M_RxC_AD_PREF_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_M2M_RxC_AK_WR_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Ingress (from CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "UNC_M2M_RxC_BL_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Ingress (from CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "UNC_M2M_RxC_BL_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Ingress (from CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_M2M_RxC_BL_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Ingress (from CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_M2M_RxC_BL_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x11",
@@ -3442,8 +4257,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x10",
@@ -3451,8 +4268,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x1",
@@ -3460,8 +4279,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x44",
@@ -3469,8 +4290,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x40",
@@ -3478,8 +4301,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M2M_RxR_BUSY_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x4",
@@ -3487,8 +4312,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x11",
@@ -3496,8 +4323,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x10",
@@ -3505,8 +4334,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x1",
@@ -3514,8 +4345,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AK : Number of packets bypassing the CMS Ingress",
"UMask": "0x2",
@@ -3523,8 +4356,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AKC - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x80",
@@ -3532,8 +4367,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x44",
@@ -3541,8 +4378,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x40",
@@ -3550,8 +4389,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x4",
@@ -3559,8 +4400,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M2M_RxR_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : IV : Number of packets bypassing the CMS Ingress",
"UMask": "0x8",
@@ -3568,8 +4411,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -3577,8 +4422,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x10",
@@ -3586,8 +4433,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x1",
@@ -3595,8 +4444,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AK : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x2",
@@ -3604,8 +4455,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -3613,8 +4466,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x40",
@@ -3622,8 +4477,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x4",
@@ -3631,8 +4488,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IFV - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IFV - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x80",
@@ -3640,8 +4499,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M2M_RxR_CRD_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IV : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x8",
@@ -3649,16 +4510,20 @@
},
{
"BriefDescription": "Transgress Injection Starvation",
+ "Counter": "0,1,2,3",
"EventCode": "0xe4",
"EventName": "UNC_M2M_RxR_CRD_STARVED_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"Unit": "M2M"
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -3666,8 +4531,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -3675,8 +4542,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -3684,8 +4553,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AK : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -3693,8 +4564,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AKC - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -3702,8 +4575,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -3711,8 +4586,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -3720,8 +4597,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -3729,8 +4608,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M2M_RxR_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : IV : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -3738,8 +4619,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -3747,8 +4630,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -3756,8 +4641,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -3765,8 +4652,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AK : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -3774,8 +4663,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AKC - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -3783,8 +4674,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -3792,8 +4685,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x20",
@@ -3801,8 +4696,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -3810,8 +4707,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M2M_RxR_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : IV : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -3819,64 +4718,82 @@
},
{
"BriefDescription": "UNC_M2M_SCOREBOARD_AD_RETRY_ACCEPTS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2M_SCOREBOARD_AD_RETRY_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "UNC_M2M_SCOREBOARD_AD_RETRY_REJECTS",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_M2M_SCOREBOARD_AD_RETRY_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Retry - Mem Mirroring Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M2M_SCOREBOARD_BL_RETRY_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Retry - Mem Mirroring Mode",
+ "Counter": "0,1,2,3",
"EventCode": "0x36",
"EventName": "UNC_M2M_SCOREBOARD_BL_RETRY_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Scoreboard Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_M2M_SCOREBOARD_RD_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Scoreboard Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M2M_SCOREBOARD_RD_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Scoreboard Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_M2M_SCOREBOARD_WR_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Scoreboard Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2M_SCOREBOARD_WR_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -3884,8 +4801,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -3893,8 +4812,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -3902,8 +4823,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -3911,8 +4834,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -3920,8 +4845,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -3929,8 +4856,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -3938,8 +4867,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -3947,8 +4878,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -3956,8 +4889,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -3965,8 +4900,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -3974,8 +4911,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -3983,8 +4922,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -3992,8 +4933,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -4001,8 +4944,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -4010,8 +4955,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -4019,8 +4966,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -4028,8 +4977,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -4037,8 +4988,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -4046,8 +4999,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -4055,8 +5010,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -4064,8 +5021,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -4073,8 +5032,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -4082,8 +5043,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -4091,8 +5054,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -4100,8 +5065,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -4109,8 +5076,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -4118,8 +5087,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -4127,8 +5098,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -4136,8 +5109,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -4145,8 +5120,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -4154,8 +5131,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M2M_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -4163,8 +5142,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -4172,8 +5153,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -4181,8 +5164,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -4190,8 +5175,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -4199,8 +5186,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -4208,8 +5197,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -4217,8 +5208,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -4226,8 +5219,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -4235,8 +5230,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -4244,8 +5241,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -4253,8 +5252,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -4262,8 +5263,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M2M_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -4271,6 +5274,7 @@
},
{
"BriefDescription": "Tag Hit : Clean NearMem Read Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_CLEAN",
"PerPkg": "1",
@@ -4280,6 +5284,7 @@
},
{
"BriefDescription": "Tag Hit : Dirty NearMem Read Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_RD_HIT_DIRTY",
"PerPkg": "1",
@@ -4289,8 +5294,10 @@
},
{
"BriefDescription": "Tag Hit : Clean NearMem Underfill Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_UFILL_HIT_CLEAN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tag Hit : Clean NearMem Underfill Hit : Tag Hit indicates when a request sent to the iMC hit in Near Memory. : Counts clean underfill hits due to a partial write",
"UMask": "0x4",
@@ -4298,8 +5305,10 @@
},
{
"BriefDescription": "Tag Hit : Dirty NearMem Underfill Hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M2M_TAG_HIT.NM_UFILL_HIT_DIRTY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tag Hit : Dirty NearMem Underfill Hit : Tag Hit indicates when a request sent to the iMC hit in Near Memory. : Counts dirty underfill read hits due to a partial write",
"UMask": "0x8",
@@ -4307,620 +5316,778 @@
},
{
"BriefDescription": "Tag Miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M2M_TAG_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number AD Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2M_TGR_AD_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Number BL Ingress Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2M_TGR_BL_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Full : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2M_TRACKER_FULL.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Full : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2M_TRACKER_FULL.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Full : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2M_TRACKER_FULL.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Inserts : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2M_TRACKER_INSERTS.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Not Empty : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2M_TRACKER_NE.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Not Empty : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2M_TRACKER_NE.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Cycles Not Empty : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2M_TRACKER_NE.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Tracker Occupancy : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2M_TRACKER_OCCUPANCY.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Credit Acquired",
+ "Counter": "0,1,2,3",
"EventCode": "0x0d",
"EventName": "UNC_M2M_TxC_AD_CREDITS_ACQUIRED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Credits Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x0e",
"EventName": "UNC_M2M_TxC_AD_CREDIT_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x0c",
"EventName": "UNC_M2M_TxC_AD_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x0b",
"EventName": "UNC_M2M_TxC_AD_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x09",
"EventName": "UNC_M2M_TxC_AD_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No AD Egress (to CMS) Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x0f",
"EventName": "UNC_M2M_TxC_AD_NO_CREDIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No AD Egress (to CMS) Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2M_TxC_AD_NO_CREDIT_STALLED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AD Egress (to CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x0A",
"EventName": "UNC_M2M_TxC_AD_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound Ring Transactions on AK : CRD Transactions to Cbo",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_M2M_TxC_AK.CRD_CBO",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound Ring Transactions on AK : NDR Transactions",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_M2M_TxC_AK.NDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AKC Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M2M_TxC_AKC_CREDITS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Credit Acquired : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_M2M_TxC_AK_CREDITS_ACQUIRED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Credit Acquired : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1D",
"EventName": "UNC_M2M_TxC_AK_CREDITS_ACQUIRED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.RDCRD0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.RDCRD1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x88",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCMP0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCMP1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa0",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCRD0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_M2M_TxC_AK_CYCLES_FULL.WRCRD1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x90",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.RDCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.WRCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x13",
"EventName": "UNC_M2M_TxC_AK_CYCLES_NE.WRCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.PREF_RD_CAM_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.RDCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.WRCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2M_TxC_AK_INSERTS.WRCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No AK Egress (to CMS) Credits : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_CYCLES.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No AK Egress (to CMS) Credits : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_CYCLES.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No AK Egress (to CMS) Credits : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_STALLED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No AK Egress (to CMS) Credits : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M2M_TxC_AK_NO_CREDIT_STALLED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.RDCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.WRCMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "AK Egress (to CMS) Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M2M_TxC_AK_OCCUPANCY.WRCRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache : Data to Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_TxC_BL.DRS_CACHE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache : Data to Core",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_TxC_BL.DRS_CORE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Outbound DRS Ring Transactions to Cache : Data to QPI",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2M_TxC_BL.DRS_UPI",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Credit Acquired : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2M_TxC_BL_CREDITS_ACQUIRED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Credit Acquired : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2M_TxC_BL_CREDITS_ACQUIRED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Full : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Full : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Full : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2M_TxC_BL_CYCLES_FULL.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Not Empty : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_TxC_BL_CYCLES_NE.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Not Empty : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_TxC_BL_CYCLES_NE.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Not Empty : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2M_TxC_BL_CYCLES_NE.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Allocations : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2M_TxC_BL_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x3",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Allocations : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2M_TxC_BL_INSERTS.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "BL Egress (to CMS) Allocations : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2M_TxC_BL_INSERTS.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No BL Egress (to CMS) Credits : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_CYCLES.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles with No BL Egress (to CMS) Credits : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1B",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_CYCLES.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No BL Egress (to CMS) Credits : Common Mesh Stop - Near Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_STALLED.CMS0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Cycles Stalled with No BL Egress (to CMS) Credits : Common Mesh Stop - Far Side",
+ "Counter": "0,1,2,3",
"EventCode": "0x1C",
"EventName": "UNC_M2M_TxC_BL_NO_CREDIT_STALLED.CMS1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -4928,8 +6095,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -4937,8 +6106,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -4946,8 +6117,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -4955,8 +6128,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -4964,8 +6139,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M2M_TxR_HORZ_ADS_USED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -4973,8 +6150,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -4982,8 +6161,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -4991,8 +6172,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -5000,8 +6183,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AK : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -5009,8 +6194,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AKC - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x80",
@@ -5018,8 +6205,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -5027,8 +6216,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -5036,8 +6227,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -5045,8 +6238,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M2M_TxR_HORZ_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : IV : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -5054,8 +6249,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -5063,8 +6260,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -5072,8 +6271,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -5081,8 +6282,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AK : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -5090,8 +6293,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -5099,8 +6304,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -5108,8 +6315,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -5117,8 +6326,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -5126,8 +6337,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_FULL.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : IV : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -5135,8 +6348,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -5144,8 +6359,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -5153,8 +6370,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -5162,8 +6381,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -5171,8 +6392,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -5180,8 +6403,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -5189,8 +6414,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -5198,8 +6425,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -5207,8 +6436,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M2M_TxR_HORZ_CYCLES_NE.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -5216,8 +6447,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -5225,8 +6458,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -5234,8 +6469,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -5243,8 +6480,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AK : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -5252,8 +6491,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -5261,8 +6502,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -5270,8 +6513,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -5279,8 +6524,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -5288,8 +6535,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M2M_TxR_HORZ_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : IV : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -5297,8 +6546,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x11",
@@ -5306,8 +6557,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x10",
@@ -5315,8 +6568,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x1",
@@ -5324,8 +6579,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AK : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x2",
@@ -5333,8 +6590,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x80",
@@ -5342,8 +6601,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x44",
@@ -5351,8 +6612,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x40",
@@ -5360,8 +6623,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x4",
@@ -5369,8 +6634,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M2M_TxR_HORZ_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x8",
@@ -5378,8 +6645,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -5387,8 +6656,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -5396,8 +6667,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -5405,8 +6678,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AK : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -5414,8 +6689,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -5423,8 +6700,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -5432,8 +6711,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -5441,8 +6722,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -5450,8 +6733,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M2M_TxR_HORZ_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : IV : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -5459,8 +6744,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x1",
@@ -5468,8 +6755,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x1",
@@ -5477,8 +6766,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AK : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x2",
@@ -5486,8 +6777,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x80",
@@ -5495,8 +6788,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x4",
@@ -5504,8 +6799,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x4",
@@ -5513,8 +6810,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M2M_TxR_HORZ_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x8",
@@ -5522,8 +6821,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -5531,8 +6832,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -5540,8 +6843,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -5549,8 +6854,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M2M_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -5558,8 +6865,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -5567,8 +6876,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -5576,8 +6887,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -5585,8 +6898,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -5594,8 +6909,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -5603,8 +6920,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -5612,8 +6931,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : IV - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M2M_TxR_VERT_BYPASS.IV_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : IV - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -5621,8 +6942,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS_1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -5630,8 +6953,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M2M_TxR_VERT_BYPASS_1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -5639,8 +6964,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5648,8 +6975,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -5657,8 +6986,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5666,8 +6997,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -5675,8 +7008,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -5684,8 +7019,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -5693,8 +7030,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -5702,8 +7041,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5711,8 +7052,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_FULL1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5720,8 +7063,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5729,8 +7074,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -5738,8 +7085,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5747,8 +7096,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -5756,8 +7107,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -5765,8 +7118,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -5774,8 +7129,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -5783,8 +7140,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5792,8 +7151,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2M_TxR_VERT_CYCLES_NE1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5801,8 +7162,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_INSERTS0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5810,8 +7173,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_INSERTS0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -5819,8 +7184,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_INSERTS0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5828,8 +7195,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_INSERTS0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -5837,8 +7206,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_INSERTS0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -5846,8 +7217,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_INSERTS0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -5855,8 +7228,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2M_TxR_VERT_INSERTS0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : IV - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -5864,8 +7239,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_INSERTS1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5873,8 +7250,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2M_TxR_VERT_INSERTS1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5882,8 +7261,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -5891,8 +7272,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x10",
@@ -5900,8 +7283,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -5909,8 +7294,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x20",
@@ -5918,8 +7305,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x4",
@@ -5927,8 +7316,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x40",
@@ -5936,8 +7327,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2M_TxR_VERT_NACK0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x8",
@@ -5945,8 +7338,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_VERT_NACK1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -5954,8 +7349,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2M_TxR_VERT_NACK1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -5963,8 +7360,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -5972,8 +7371,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -5981,8 +7382,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -5990,8 +7393,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -5999,8 +7404,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -6008,8 +7415,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -6017,8 +7426,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : IV - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -6026,8 +7437,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -6035,8 +7448,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2M_TxR_VERT_OCCUPANCY1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -6044,8 +7459,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -6053,8 +7470,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x10",
@@ -6062,8 +7481,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -6071,8 +7492,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x20",
@@ -6080,8 +7503,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -6089,8 +7514,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x40",
@@ -6098,8 +7525,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M2M_TxR_VERT_STARVED0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x8",
@@ -6107,8 +7536,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M2M_TxR_VERT_STARVED1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -6116,8 +7547,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M2M_TxR_VERT_STARVED1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -6125,8 +7558,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M2M_TxR_VERT_STARVED1.TGC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -6134,8 +7569,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -6143,8 +7580,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -6152,8 +7591,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -6161,8 +7602,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M2M_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -6170,8 +7613,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_VERT_RING_AKC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -6179,8 +7624,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_VERT_RING_AKC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -6188,8 +7635,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_VERT_RING_AKC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -6197,8 +7646,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M2M_VERT_RING_AKC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -6206,8 +7657,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -6215,8 +7668,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -6224,8 +7679,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -6233,8 +7690,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M2M_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -6242,8 +7701,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -6251,8 +7712,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -6260,8 +7723,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -6269,8 +7734,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M2M_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -6278,8 +7745,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Down : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -6287,8 +7756,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M2M_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Up : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -6296,8 +7767,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M2M_VERT_RING_TGC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -6305,8 +7778,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M2M_VERT_RING_TGC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -6314,8 +7789,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M2M_VERT_RING_TGC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -6323,8 +7800,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M2M_VERT_RING_TGC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -6332,352 +7811,440 @@
},
{
"BriefDescription": "WPQ Flush : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_WPQ_FLUSH.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "WPQ Flush : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_WPQ_FLUSH.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "WPQ Flush : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M2M_WPQ_FLUSH.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Regular : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - PMM : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD_PMM.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - PMM : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD_PMM.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - PMM : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M2M_WPQ_NO_REG_CRD_PMM.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_M2M_WPQ_NO_SPEC_CRD.CHN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_M2M_WPQ_NO_SPEC_CRD.CHN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "M2M->iMC WPQ Cycles w/Credits - Special : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4E",
"EventName": "UNC_M2M_WPQ_NO_SPEC_CRD.CHN2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Full : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M2M_WR_TRACKER_FULL.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Full : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M2M_WR_TRACKER_FULL.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Full : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M2M_WR_TRACKER_FULL.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Full : Mirror",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M2M_WR_TRACKER_FULL.MIRR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_WR_TRACKER_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_WR_TRACKER_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Inserts : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M2M_WR_TRACKER_INSERTS.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WR_TRACKER_NE.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WR_TRACKER_NE.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WR_TRACKER_NE.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty : Mirror",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WR_TRACKER_NE.MIRR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WR_TRACKER_NE.MIRR_NONTGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M2M_WR_TRACKER_NE.MIRR_PWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Inserts : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_INSERTS.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Non-Posted Occupancy : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x62",
"EventName": "UNC_M2M_WR_TRACKER_NONPOSTED_OCCUPANCY.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M2M_WR_TRACKER_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M2M_WR_TRACKER_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M2M_WR_TRACKER_OCCUPANCY.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy : Mirror",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M2M_WR_TRACKER_OCCUPANCY.MIRR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M2M_WR_TRACKER_OCCUPANCY.MIRR_NONTGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M2M_WR_TRACKER_OCCUPANCY.MIRR_PWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Inserts : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_INSERTS.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Inserts : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_INSERTS.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Inserts : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_INSERTS.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Occupancy : Channel 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_OCCUPANCY.CH0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Occupancy : Channel 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_OCCUPANCY.CH1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2M"
},
{
"BriefDescription": "Write Tracker Posted Occupancy : Channel 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M2M_WR_TRACKER_POSTED_OCCUPANCY.CH2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2M"
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -6685,8 +8252,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -6694,8 +8263,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -6703,8 +8274,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -6712,8 +8285,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -6721,8 +8296,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -6730,8 +8307,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -6739,8 +8318,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -6748,8 +8329,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -6757,8 +8340,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -6766,8 +8351,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M3UPI_AG0_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -6775,8 +8362,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -6784,8 +8373,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -6793,8 +8384,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -6802,8 +8395,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -6811,8 +8406,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -6820,8 +8417,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -6829,8 +8428,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -6838,8 +8439,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -6847,8 +8450,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -6856,8 +8461,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -6865,8 +8472,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M3UPI_AG0_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -6874,8 +8483,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -6883,8 +8494,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -6892,8 +8505,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -6901,8 +8516,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -6910,8 +8527,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -6919,8 +8538,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -6928,8 +8549,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -6937,8 +8560,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -6946,8 +8571,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -6955,8 +8582,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -6964,8 +8593,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M3UPI_AG0_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -6973,8 +8604,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -6982,8 +8615,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -6991,8 +8626,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -7000,8 +8637,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -7009,8 +8648,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -7018,8 +8659,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -7027,8 +8670,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -7036,8 +8681,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8A",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -7045,8 +8692,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -7054,8 +8703,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -7063,8 +8714,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8B",
"EventName": "UNC_M3UPI_AG0_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -7072,8 +8725,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -7081,8 +8736,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -7090,8 +8747,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -7099,8 +8758,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -7108,8 +8769,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -7117,8 +8780,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -7126,8 +8791,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -7135,8 +8802,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -7144,8 +8813,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -7153,8 +8824,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -7162,8 +8835,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M3UPI_AG1_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -7171,8 +8846,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -7180,8 +8857,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -7189,8 +8868,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -7198,8 +8879,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -7207,8 +8890,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -7216,8 +8901,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -7225,8 +8912,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -7234,8 +8923,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -7243,8 +8934,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -7252,8 +8945,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -7261,8 +8956,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M3UPI_AG1_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -7270,8 +8967,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -7279,8 +8978,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -7288,8 +8989,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -7297,8 +9000,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -7306,8 +9011,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -7315,8 +9022,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -7324,8 +9033,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -7333,8 +9044,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8C",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -7342,8 +9055,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -7351,8 +9066,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -7360,8 +9077,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8D",
"EventName": "UNC_M3UPI_AG1_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -7369,8 +9088,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -7378,8 +9099,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -7387,8 +9110,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -7396,8 +9121,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -7405,8 +9132,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -7414,8 +9143,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -7423,8 +9154,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -7432,8 +9165,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -7441,8 +9176,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -7450,8 +9187,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -7459,8 +9198,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_M3UPI_AG1_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -7468,8 +9209,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty : Requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : Requests : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x4",
@@ -7477,8 +9220,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty : Snoops",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : Snoops : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x8",
@@ -7486,8 +9231,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty : VNA Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : VNA Messages : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x1",
@@ -7495,8 +9242,10 @@
},
{
"BriefDescription": "CBox AD Credits Empty : Writebacks",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M3UPI_CHA_AD_CREDITS_EMPTY.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CBox AD Credits Empty : Writebacks : No credits available to send to Cbox on the AD Ring (covers higher CBoxes)",
"UMask": "0x2",
@@ -7504,6 +9253,7 @@
},
{
"BriefDescription": "Clockticks of the mesh to UPI (M3UPI)",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M3UPI_CLOCKTICKS",
"PerPkg": "1",
@@ -7512,31 +9262,39 @@
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_M3UPI_CMS_CLOCKTICKS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "D2C Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_M3UPI_D2C_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "D2C Sent : Count cases BL sends direct to core",
"Unit": "M3UPI"
},
{
"BriefDescription": "D2U Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_M3UPI_D2U_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "D2U Sent : Cases where SMI3 sends D2U command",
"Unit": "M3UPI"
},
{
"BriefDescription": "Distress signal asserted : DPT Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.DPT_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Local : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle triggered by this tile",
"UMask": "0x4",
@@ -7544,8 +9302,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.DPT_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle received by this tile",
"UMask": "0x8",
@@ -7553,8 +9313,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.DPT_STALL_IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - IV : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while regular IVs were received, causing DPT to be stalled",
"UMask": "0x40",
@@ -7562,8 +9324,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - No Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.DPT_STALL_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - No Credit : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while credit not available causing DPT to be stalled",
"UMask": "0x80",
@@ -7571,8 +9335,10 @@
},
{
"BriefDescription": "Distress signal asserted : Horizontal",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.HORZ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Horizontal : Counts the number of cycles either the local or incoming distress signals are asserted. : If TGR egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x2",
@@ -7580,8 +9346,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.PMM_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Local : Counts the number of cycles either the local or incoming distress signals are asserted. : If the CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x10",
@@ -7589,8 +9357,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.PMM_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : If another CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x20",
@@ -7598,8 +9368,10 @@
},
{
"BriefDescription": "Distress signal asserted : Vertical",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M3UPI_DISTRESS_ASSERTED.VERT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Vertical : Counts the number of cycles either the local or incoming distress signals are asserted. : If IRQ egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x1",
@@ -7607,8 +9379,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -7616,8 +9390,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xBA",
"EventName": "UNC_M3UPI_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -7625,8 +9401,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -7634,8 +9412,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -7643,8 +9423,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -7652,8 +9434,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB6",
"EventName": "UNC_M3UPI_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -7661,8 +9445,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M3UPI_HORZ_RING_AKC_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -7670,8 +9456,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M3UPI_HORZ_RING_AKC_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -7679,8 +9467,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M3UPI_HORZ_RING_AKC_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -7688,8 +9478,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xBB",
"EventName": "UNC_M3UPI_HORZ_RING_AKC_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -7697,8 +9489,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -7706,8 +9500,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -7715,8 +9511,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -7724,8 +9522,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7",
"EventName": "UNC_M3UPI_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -7733,8 +9533,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -7742,8 +9544,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -7751,8 +9555,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -7760,8 +9566,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB8",
"EventName": "UNC_M3UPI_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -7769,8 +9577,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Left",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M3UPI_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Left : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -7778,8 +9588,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Right",
+ "Counter": "0,1,2,3",
"EventCode": "0xB9",
"EventName": "UNC_M3UPI_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Right : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -7787,8 +9599,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO0 and IIO1 share the same ring destination. (1 VN0 credit only)",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO0 and IIO1 share the same ring destination. (1 VN0 credit only) : No vn0 and vna credits available to send to M2",
"UMask": "0x1",
@@ -7796,8 +9610,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO2",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO2 : No vn0 and vna credits available to send to M2",
"UMask": "0x2",
@@ -7805,8 +9621,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO3",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO3 : No vn0 and vna credits available to send to M2",
"UMask": "0x4",
@@ -7814,8 +9632,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO4",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO4 : No vn0 and vna credits available to send to M2",
"UMask": "0x8",
@@ -7823,8 +9643,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO5",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.IIO5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO5 : No vn0 and vna credits available to send to M2",
"UMask": "0x10",
@@ -7832,8 +9654,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : All IIO targets for NCS are in single mask. ORs them together",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : All IIO targets for NCS are in single mask. ORs them together : No vn0 and vna credits available to send to M2",
"UMask": "0x40",
@@ -7841,8 +9665,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : Selected M2p BL NCS credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.NCS_SEL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : Selected M2p BL NCS credits : No vn0 and vna credits available to send to M2",
"UMask": "0x80",
@@ -7850,8 +9676,10 @@
},
{
"BriefDescription": "M2 BL Credits Empty : IIO5",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M3UPI_M2_BL_CREDITS_EMPTY.UBOX_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2 BL Credits Empty : IIO5 : No vn0 and vna credits available to send to M2",
"UMask": "0x20",
@@ -7859,24 +9687,30 @@
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI0",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_M3UPI_MISC_EXTERNAL.MBE_INST0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI1",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_M3UPI_MISC_EXTERNAL.MBE_INST1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "Multi Slot Flit Received : AD - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AD - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x1",
@@ -7884,8 +9718,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AD - Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AD - Slot 1 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x2",
@@ -7893,8 +9729,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AD - Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AD_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AD - Slot 2 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x4",
@@ -7902,8 +9740,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AK - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AK - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x10",
@@ -7911,8 +9751,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : AK - Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.AK_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : AK - Slot 2 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x20",
@@ -7920,8 +9762,10 @@
},
{
"BriefDescription": "Multi Slot Flit Received : BL - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x3E",
"EventName": "UNC_M3UPI_MULTI_SLOT_RCVD.BL_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Multi Slot Flit Received : BL - Slot 0 : Multi slot flit received - S0, S1 and/or S2 populated (can use AK S0/S1 masks for AK allocations)",
"UMask": "0x8",
@@ -7929,8 +9773,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AD : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -7938,8 +9784,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AK : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -7947,8 +9795,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : BL : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -7956,8 +9806,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "UNC_M3UPI_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : IV : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -7965,8 +9817,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : AD : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -7974,8 +9828,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -7983,8 +9839,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x10",
@@ -7992,8 +9850,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -8001,8 +9861,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAA",
"EventName": "UNC_M3UPI_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -8010,95 +9872,119 @@
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : Acknowledgements to Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xAD",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "UNC_M3UPI_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "Source Throttle",
+ "Counter": "0,1,2,3",
"EventCode": "0xae",
"EventName": "UNC_M3UPI_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Lost Arb for VN0 : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : REQ on AD : VN0 message requested but lost arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8106,8 +9992,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : RSP on AD : VN0 message requested but lost arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8115,8 +10003,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : SNP on AD : VN0 message requested but lost arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8124,8 +10014,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : NCB on BL : VN0 message requested but lost arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8133,8 +10025,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : NCS on BL : VN0 message requested but lost arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8142,8 +10036,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : RSP on BL : VN0 message requested but lost arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8151,8 +10047,10 @@
},
{
"BriefDescription": "Lost Arb for VN0 : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4B",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN0 : WB on BL : VN0 message requested but lost arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8160,8 +10058,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : REQ on AD : VN1 message requested but lost arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8169,8 +10069,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : RSP on AD : VN1 message requested but lost arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8178,8 +10080,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : SNP on AD : VN1 message requested but lost arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8187,8 +10091,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : NCB on BL : VN1 message requested but lost arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8196,8 +10102,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : NCS on BL : VN1 message requested but lost arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8205,8 +10113,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : RSP on BL : VN1 message requested but lost arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8214,8 +10124,10 @@
},
{
"BriefDescription": "Lost Arb for VN1 : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "UNC_M3UPI_RxC_ARB_LOST_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Lost Arb for VN1 : WB on BL : VN1 message requested but lost arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8223,8 +10135,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : AD, BL Parallel Win VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : AD, BL Parallel Win VN0 : AD and BL messages won arbitration concurrently / in parallel",
"UMask": "0x10",
@@ -8232,8 +10146,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : AD, BL Parallel Win VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.ADBL_PARALLEL_WIN_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : AD, BL Parallel Win VN1 : AD and BL messages won arbitration concurrently / in parallel",
"UMask": "0x20",
@@ -8241,8 +10157,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : Max Parallel Win",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.ALL_PARALLEL_WIN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : Max Parallel Win : VN0 and VN1 arbitration sub-pipelines both produced AD and BL winners (maximum possible parallel winners)",
"UMask": "0x80",
@@ -8250,8 +10168,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending AD VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending AD VN0 : Arbitration stage made no progress on pending ad vn0 messages because slotting stage cannot accept new message",
"UMask": "0x1",
@@ -8259,8 +10179,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending AD VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_AD_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending AD VN1 : Arbitration stage made no progress on pending ad vn1 messages because slotting stage cannot accept new message",
"UMask": "0x2",
@@ -8268,8 +10190,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending BL VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending BL VN0 : Arbitration stage made no progress on pending bl vn0 messages because slotting stage cannot accept new message",
"UMask": "0x4",
@@ -8277,8 +10201,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : No Progress on Pending BL VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.NO_PROG_BL_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : No Progress on Pending BL VN1 : Arbitration stage made no progress on pending bl vn1 messages because slotting stage cannot accept new message",
"UMask": "0x8",
@@ -8286,8 +10212,10 @@
},
{
"BriefDescription": "Arb Miscellaneous : VN0, VN1 Parallel Win",
+ "Counter": "0,1,2,3",
"EventCode": "0x4D",
"EventName": "UNC_M3UPI_RxC_ARB_MISC.VN01_PARALLEL_WIN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Arb Miscellaneous : VN0, VN1 Parallel Win : VN0 and VN1 arbitration sub-pipelines had parallel winners (at least one AD or BL on each side)",
"UMask": "0x40",
@@ -8295,8 +10223,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : REQ on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8304,8 +10234,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : RSP on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8313,8 +10245,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : SNP on AD : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8322,8 +10256,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : NCB on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8331,8 +10267,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : NCS on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8340,8 +10278,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : RSP on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8349,8 +10289,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN0 : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN0 : WB on BL : VN0 message is blocked from requesting arbitration due to lack of remote UPI credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8358,8 +10300,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : REQ on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8367,8 +10311,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : RSP on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8376,8 +10322,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : SNP on AD : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8385,8 +10333,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : NCB on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8394,8 +10344,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : NCS on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8403,8 +10355,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : RSP on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8412,8 +10366,10 @@
},
{
"BriefDescription": "No Credits to Arb for VN1 : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M3UPI_RxC_ARB_NOCRD_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "No Credits to Arb for VN1 : WB on BL : VN1 message is blocked from requesting arbitration due to lack of remote UPI credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8421,8 +10377,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : REQ on AD : VN0 message was not able to request arbitration while some other message won arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8430,8 +10388,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : RSP on AD : VN0 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8439,8 +10399,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : SNP on AD : VN0 message was not able to request arbitration while some other message won arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8448,8 +10410,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : NCB on BL : VN0 message was not able to request arbitration while some other message won arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8457,8 +10421,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : NCS on BL : VN0 message was not able to request arbitration while some other message won arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8466,8 +10432,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : RSP on BL : VN0 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8475,8 +10443,10 @@
},
{
"BriefDescription": "Can't Arb for VN0 : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN0 : WB on BL : VN0 message was not able to request arbitration while some other message won arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8484,8 +10454,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : REQ on AD : VN1 message was not able to request arbitration while some other message won arbitration : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8493,8 +10465,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : RSP on AD : VN1 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8502,8 +10476,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : SNP on AD : VN1 message was not able to request arbitration while some other message won arbitration : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8511,8 +10487,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : NCB on BL : VN1 message was not able to request arbitration while some other message won arbitration : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8520,8 +10498,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : NCS on BL : VN1 message was not able to request arbitration while some other message won arbitration : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8529,8 +10509,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : RSP on BL : VN1 message was not able to request arbitration while some other message won arbitration : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8538,8 +10520,10 @@
},
{
"BriefDescription": "Can't Arb for VN1 : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x4A",
"EventName": "UNC_M3UPI_RxC_ARB_NOREQ_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Can't Arb for VN1 : WB on BL : VN1 message was not able to request arbitration while some other message won arbitration : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8547,8 +10531,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD to Slot 0 on BL Arb",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_BL_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD to Slot 0 on BL Arb : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to slot 0 of independent flit while bl message is in arbitration",
"UMask": "0x2",
@@ -8556,8 +10542,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD to Slot 0 on Idle",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S0_IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD to Slot 0 on Idle : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to slot 0 of independent flit while pipeline is idle",
"UMask": "0x1",
@@ -8565,8 +10553,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD + BL to Slot 1",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S1_BL_SLOT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD + BL to Slot 1 : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to flit slot 1 while merging with bl message in same flit",
"UMask": "0x4",
@@ -8574,8 +10564,10 @@
},
{
"BriefDescription": "Ingress Queue Bypasses : AD + BL to Slot 2",
+ "Counter": "0,1,2",
"EventCode": "0x40",
"EventName": "UNC_M3UPI_RxC_BYPASSED.AD_S2_BL_SLOT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress Queue Bypasses : AD + BL to Slot 2 : Number of times message is bypassed around the Ingress Queue : AD is taking bypass to flit slot 2 while merging with bl message in same flit",
"UMask": "0x8",
@@ -8583,8 +10575,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events : Any In BGF FIFO",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : Any In BGF FIFO : Indication that at least one packet (flit) is in the bgf (fifo only)",
"UMask": "0x1",
@@ -8592,8 +10586,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events : Any in BGF Path",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.ANY_BGF_PATH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : Any in BGF Path : Indication that at least one packet (flit) is in the bgf path (i.e. pipe to fifo)",
"UMask": "0x2",
@@ -8601,8 +10597,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.LT1_FOR_D2K",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : d2k credit count is less than 1",
"UMask": "0x10",
@@ -8610,8 +10608,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.LT2_FOR_D2K",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : d2k credit count is less than 2",
"UMask": "0x20",
@@ -8619,8 +10619,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events : No D2K For Arb",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.VN0_NO_D2K_FOR_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : No D2K For Arb : VN0 BL RSP message was blocked from arbitration request due to lack of D2K CMP credit",
"UMask": "0x4",
@@ -8628,8 +10630,10 @@
},
{
"BriefDescription": "Miscellaneous Credit Events",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "UNC_M3UPI_RxC_CRD_MISC.VN1_NO_D2K_FOR_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Miscellaneous Credit Events : VN1 BL RSP message was blocked from arbitration request due to lack of D2K CMP credits",
"UMask": "0x8",
@@ -8637,8 +10641,10 @@
},
{
"BriefDescription": "Credit Occupancy : Credits Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.CONSUMED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Credits Consumed : number of remote vna credits consumed per cycle",
"UMask": "0x80",
@@ -8646,8 +10652,10 @@
},
{
"BriefDescription": "Credit Occupancy : D2K Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.D2K_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : D2K Credits : D2K completion fifo credit occupancy (credits in use), accumulated across all cycles",
"UMask": "0x10",
@@ -8655,8 +10663,10 @@
},
{
"BriefDescription": "Credit Occupancy : Packets in BGF FIFO",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Packets in BGF FIFO : Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in fifo",
"UMask": "0x2",
@@ -8664,8 +10674,10 @@
},
{
"BriefDescription": "Credit Occupancy : Packets in BGF Path",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.FLITS_IN_PATH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Packets in BGF Path : Occupancy of m3upi ingress -> upi link layer bgf; packets (flits) in path (i.e. pipe to fifo or fifo)",
"UMask": "0x4",
@@ -8673,8 +10685,10 @@
},
{
"BriefDescription": "Credit Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_FIFO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : count of bl messages in pump-1-pending state, in completion fifo only",
"UMask": "0x40",
@@ -8682,8 +10696,10 @@
},
{
"BriefDescription": "Credit Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.P1P_TOTAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : count of bl messages in pump-1-pending state, in marker table and in fifo",
"UMask": "0x20",
@@ -8691,8 +10707,10 @@
},
{
"BriefDescription": "Credit Occupancy : Transmit Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.TxQ_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : Transmit Credits : Link layer transmit queue credit occupancy (credits in use), accumulated across all cycles",
"UMask": "0x8",
@@ -8700,8 +10718,10 @@
},
{
"BriefDescription": "Credit Occupancy : VNA In Use",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_M3UPI_RxC_CRD_OCC.VNA_IN_USE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Credit Occupancy : VNA In Use : Remote UPI VNA credit occupancy (number of credits in use), accumulated across all cycles",
"UMask": "0x1",
@@ -8709,8 +10729,10 @@
},
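Several of the CRD_OCC descriptions above accumulate "credits in use" once per cycle, so the natural derived metric is the average number of credits in flight over the measurement interval. A minimal sketch, assuming a cycle count such as UNC_M3UPI_CLOCKTICKS is collected over the same interval; the sample readings are made up.

def average_credits_in_use(occupancy_accum: int, m3upi_cycles: int) -> float:
    """Average credits held per cycle over the measurement interval."""
    if m3upi_cycles == 0:
        return 0.0
    return occupancy_accum / m3upi_cycles

# Made-up sample values:
vna_in_use_accum = 1_250_000   # UNC_M3UPI_RxC_CRD_OCC.VNA_IN_USE reading
clockticks = 500_000           # assumed UNC_M3UPI_CLOCKTICKS reading
print(f"avg VNA credits in use: {average_credits_in_use(vna_in_use_accum, clockticks):.2f}")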
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8718,8 +10740,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8727,8 +10751,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8736,8 +10762,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8745,8 +10773,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8754,8 +10784,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8763,8 +10795,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL : Counts the number of cycles when the UPI Ingress is not empty. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple ingress buffers can be tracked at a given time using multiple counters. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8772,8 +10806,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : REQ on AD : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -8781,8 +10817,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : RSP on AD : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -8790,8 +10828,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : SNP on AD : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -8799,8 +10839,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : NCB on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -8808,8 +10850,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : NCS on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -8817,8 +10861,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : RSP on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -8826,8 +10872,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M3UPI_RxC_CYCLES_NE_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Cycles Not Empty : WB on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -8835,8 +10883,10 @@
},
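As the VN0 and VN1 descriptions above note, these not-empty cycle counts pair with an occupancy accumulator to give average queue occupancy, and with an allocation count to give average queue latency via Little's law. A minimal sketch of both derived metrics; the companion event names in the comments are assumptions for illustration, not taken from this file, and the sample readings are made up.

def avg_queue_occupancy(occupancy_accum: int, not_empty_cycles: int) -> float:
    """Average queue depth while the ingress queue holds at least one entry."""
    return occupancy_accum / not_empty_cycles if not_empty_cycles else 0.0

def avg_queue_latency(occupancy_accum: int, inserts: int) -> float:
    """Average residency (in cycles) per allocation, by Little's law."""
    return occupancy_accum / inserts if inserts else 0.0

# Made-up sample readings for one VN0 AD REQ queue:
occupancy_accum = 3_600_000   # assumed occupancy accumulator reading
not_empty = 1_200_000         # UNC_M3UPI_RxC_CYCLES_NE_VN0.AD_REQ reading
inserts = 300_000             # assumed inserts/allocations reading
print(f"avg depth while busy : {avg_queue_occupancy(occupancy_accum, not_empty):.2f}")
print(f"avg latency (cycles) : {avg_queue_latency(occupancy_accum, inserts):.2f}")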
{
"BriefDescription": "Data Flit Not Sent : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : All : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but could not be sent for any reason, e.g. low credits, low tsv, stall injection",
"UMask": "0x1",
@@ -8844,8 +10894,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : No BGF Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.NO_BGF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : No BGF Credits : Data flit is ready for transmission but could not be sent",
"UMask": "0x8",
@@ -8853,8 +10905,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : No TxQ Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.NO_TXQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : No TxQ Credits : Data flit is ready for transmission but could not be sent",
"UMask": "0x10",
@@ -8862,8 +10916,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : TSV High",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.TSV_HI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : TSV High : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but was not sent while tsv high",
"UMask": "0x2",
@@ -8871,8 +10927,10 @@
},
{
"BriefDescription": "Data Flit Not Sent : Cycle valid for Flit",
+ "Counter": "0,1,2,3",
"EventCode": "0x55",
"EventName": "UNC_M3UPI_RxC_DATA_FLITS_NOT_SENT.VALID_FOR_FLIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Data Flit Not Sent : Cycle valid for Flit : Data flit is ready for transmission but could not be sent : data flit is ready for transmission but was not sent while cycle is valid for flit transmission",
"UMask": "0x4",
@@ -8880,8 +10938,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence : Wait on Pump 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P0_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : Wait on Pump 0 : generating bl data flit sequence; waiting for data pump 0",
"UMask": "0x1",
@@ -8889,8 +10949,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_AT_LIMIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is at capacity (pending table plus completion fifo at limit)",
"UMask": "0x10",
@@ -8898,8 +10960,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_BUSY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is tracking at least one message",
"UMask": "0x8",
@@ -8907,8 +10971,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_FIFO_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending completion fifo is full",
"UMask": "0x40",
@@ -8916,8 +10982,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_HOLD_P0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : pump-1-pending logic is at or near capacity, such that pump-0-only bl messages are getting stalled in slotting stage",
"UMask": "0x20",
@@ -8925,8 +10993,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1P_TO_LIMBO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : a bl message finished but is in limbo and moved to pump-1-pending logic",
"UMask": "0x4",
@@ -8934,8 +11004,10 @@
},
{
"BriefDescription": "Generating BL Data Flit Sequence : Wait on Pump 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x57",
"EventName": "UNC_M3UPI_RxC_FLITS_GEN_BL.P1_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Generating BL Data Flit Sequence : Wait on Pump 1 : generating bl data flit sequence; waiting for data pump 1",
"UMask": "0x2",
@@ -8943,8 +11015,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_HOLDOFF",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_HOLDOFF",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request naturally serviced during hold-off period",
"UMask": "0x4",
@@ -8952,8 +11026,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_SERVICE",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_IN_SERVICE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request forcibly serviced during service window",
"UMask": "0x8",
@@ -8961,8 +11037,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_RECEIVED",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_RECEIVED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request received from link layer while idle (with no slot 2 request active immediately prior)",
"UMask": "0x1",
@@ -8970,8 +11048,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_WITHDRAWN",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "UNC_M3UPI_RxC_FLITS_MISC.S2REQ_WITHDRAWN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": slot 2 request withdrawn during hold-off period or service window",
"UMask": "0x2",
@@ -8979,16 +11059,20 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Needs Data Flit",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.NEED_DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Needs Data Flit : BL message requires data flit sequence",
"UMask": "0x2",
@@ -8996,8 +11080,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Wait on Pump 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P0_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Wait on Pump 0 : Waiting for header pump 0",
"UMask": "0x4",
@@ -9005,8 +11091,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 : Header pump 1 is not required for flit",
"UMask": "0x10",
@@ -9014,8 +11102,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Bubble",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_BUT_BUBBLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Bubble : Header pump 1 is not required for flit but flit transmission delayed",
"UMask": "0x20",
@@ -9023,8 +11113,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Not Avail",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_NOT_REQ_NOT_AVAIL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Don't Need Pump 1 - Not Avail : Header pump 1 is not required for flit and not available",
"UMask": "0x40",
@@ -9032,8 +11124,10 @@
},
{
"BriefDescription": "Slotting BL Message Into Header Flit : Wait on Pump 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x56",
"EventName": "UNC_M3UPI_RxC_FLITS_SLOT_BL.P1_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Slotting BL Message Into Header Flit : Wait on Pump 1 : Waiting for header pump 1",
"UMask": "0x8",
@@ -9041,8 +11135,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Accumulate",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Accumulate : Events related to Header Flit Generation - Set 1 : Header flit slotting control state machine is in any accumulate state; multi-message flit may be assembled over multiple cycles",
"UMask": "0x1",
@@ -9050,8 +11146,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Accumulate Ready",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_READ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Accumulate Ready : Events related to Header Flit Generation - Set 1 : header flit slotting control state machine is in accum_ready state; flit is ready to send but transmission is blocked; more messages may be slotted into flit",
"UMask": "0x2",
@@ -9059,8 +11157,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Accumulate Wasted",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.ACCUM_WASTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Accumulate Wasted : Events related to Header Flit Generation - Set 1 : Flit is being assembled over multiple cycles, but no additional message is being slotted into flit in current cycle; accumulate cycle is wasted",
"UMask": "0x4",
@@ -9068,8 +11168,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Run-Ahead - Blocked",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_BLOCKED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Run-Ahead - Blocked : Events related to Header Flit Generation - Set 1 : Header flit slotting entered run-ahead state; new header flit is started while transmission of prior, fully assembled flit is blocked",
"UMask": "0x8",
@@ -9077,8 +11179,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG1_AFTER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: message was slotted only after run-ahead was over; run-ahead mode definitely wasted",
"UMask": "0x80",
@@ -9086,8 +11190,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1 : Run-Ahead - Message",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG1_DURING",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Run-Ahead - Message : Events related to Header Flit Generation - Set 1 : run-ahead mode: one message slotted during run-ahead",
"UMask": "0x10",
@@ -9095,8 +11201,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG2_AFTER",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: second message slotted immediately after run-ahead; potential run-ahead success",
"UMask": "0x20",
@@ -9104,8 +11212,10 @@
},
{
"BriefDescription": "Flit Gen - Header 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR1.AHEAD_MSG2_SENT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 1 : Events related to Header Flit Generation - Set 1 : run-ahead mode: two (or three) message flit sent immediately after run-ahead; complete run-ahead success",
"UMask": "0x40",
@@ -9113,8 +11223,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Parallel Ok",
+ "Counter": "0,1,2,3",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Parallel Ok : Events related to Header Flit Generation - Set 2 : new header flit construction may proceed in parallel with data flit sequence",
"UMask": "0x4",
@@ -9122,8 +11234,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Parallel Flit Finished",
+ "Counter": "0,1,2,3",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR_FLIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Parallel Flit Finished : Events related to Header Flit Generation - Set 2 : header flit finished assembly in parallel with data flit sequence",
"UMask": "0x10",
@@ -9131,8 +11245,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Parallel Message",
+ "Counter": "0,1,2,3",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.PAR_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Parallel Message : Events related to Header Flit Generation - Set 2 : message is slotted into header flit in parallel with data flit sequence",
"UMask": "0x8",
@@ -9140,8 +11256,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Rate-matching Stall",
+ "Counter": "0,1,2,3",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Rate-matching Stall : Events related to Header Flit Generation - Set 2 : Rate-matching stall injected",
"UMask": "0x1",
@@ -9149,8 +11267,10 @@
},
{
"BriefDescription": "Flit Gen - Header 2 : Rate-matching Stall - No Message",
+ "Counter": "0,1,2,3",
"EventCode": "0x52",
"EventName": "UNC_M3UPI_RxC_FLIT_GEN_HDR2.RMSTALL_NOMSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Flit Gen - Header 2 : Rate-matching Stall - No Message : Events related to Header Flit Generation - Set 2 : Rate matching stall injected, but no additional message slotted during stall cycle",
"UMask": "0x2",
@@ -9158,8 +11278,10 @@
},
{
"BriefDescription": "Sent Header Flit : One Message",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.1_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : One Message : One message in flit; VNA or non-VNA flit",
"UMask": "0x1",
@@ -9167,8 +11289,10 @@
},
{
"BriefDescription": "Sent Header Flit : One Message in non-VNA",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.1_MSG_VNX",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : One Message in non-VNA : One message in flit; non-VNA flit",
"UMask": "0x8",
@@ -9176,8 +11300,10 @@
},
{
"BriefDescription": "Sent Header Flit : Two Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.2_MSGS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : Two Messages : Two messages in flit; VNA flit",
"UMask": "0x2",
@@ -9185,8 +11311,10 @@
},
{
"BriefDescription": "Sent Header Flit : Three Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.3_MSGS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Sent Header Flit : Three Messages : Three messages in flit; VNA flit",
"UMask": "0x4",
@@ -9194,32 +11322,40 @@
},
{
"BriefDescription": "Sent Header Flit : One Slot Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sent Header Flit : Two Slots Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "Sent Header Flit : All Slots Taken",
+ "Counter": "0,1,2,3",
"EventCode": "0x54",
"EventName": "UNC_M3UPI_RxC_HDR_FLITS_SENT.SLOTS_3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "Header Not Sent : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : All : header flit is ready for transmission but could not be sent : header flit is ready for transmission but could not be sent for any reason, e.g. no credits, low tsv, stall injection",
"UMask": "0x1",
@@ -9227,8 +11363,10 @@
},
{
"BriefDescription": "Header Not Sent : No BGF Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No BGF Credits : header flit is ready for transmission but could not be sent : No BGF credits available",
"UMask": "0x8",
@@ -9236,8 +11374,10 @@
},
{
"BriefDescription": "Header Not Sent : No BGF Credits + No Extra Message Slotted",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_BGF_NO_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No BGF Credits + No Extra Message Slotted : header flit is ready for transmission but could not be sent : No BGF credits available; no additional message slotted into flit",
"UMask": "0x20",
@@ -9245,8 +11385,10 @@
},
{
"BriefDescription": "Header Not Sent : No TxQ Credits",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_TXQ_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No TxQ Credits : header flit is ready for transmission but could not be sent : No TxQ credits available",
"UMask": "0x10",
@@ -9254,8 +11396,10 @@
},
{
"BriefDescription": "Header Not Sent : No TxQ Credits + No Extra Message Slotted",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.NO_TXQ_NO_MSG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : No TxQ Credits + No Extra Message Slotted : header flit is ready for transmission but could not be sent : No TxQ credits available; no additional message slotted into flit",
"UMask": "0x40",
@@ -9263,8 +11407,10 @@
},
{
"BriefDescription": "Header Not Sent : TSV High",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.TSV_HI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : TSV High : header flit is ready for transmission but could not be sent : header flit is ready for transmission but was not sent while tsv high",
"UMask": "0x2",
@@ -9272,8 +11418,10 @@
},
{
"BriefDescription": "Header Not Sent : Cycle valid for Flit",
+ "Counter": "0,1,2,3",
"EventCode": "0x53",
"EventName": "UNC_M3UPI_RxC_HDR_FLIT_NOT_SENT.VALID_FOR_FLIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Header Not Sent : Cycle valid for Flit : header flit is ready for transmission but could not be sent : header flit is ready for transmission but was not sent while cycle is valid for flit transmission",
"UMask": "0x4",
@@ -9281,8 +11429,10 @@
},
{
"BriefDescription": "Message Held : Can't Slot AD",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Can't Slot AD : some AD message could not be slotted (logical OR of all AD events under INGR_SLOT_CANT_MC_VN{0,1})",
"UMask": "0x10",
@@ -9290,8 +11440,10 @@
},
{
"BriefDescription": "Message Held : Can't Slot BL",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.CANT_SLOT_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Can't Slot BL : some BL message could not be slotted (logical OR of all BL events under INGR_SLOT_CANT_MC_VN{0,1})",
"UMask": "0x20",
@@ -9299,8 +11451,10 @@
},
{
"BriefDescription": "Message Held : Parallel Attempt",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_ATTEMPT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Parallel Attempt : ad and bl messages attempted to slot into the same flit in parallel",
"UMask": "0x4",
@@ -9308,8 +11462,10 @@
},
{
"BriefDescription": "Message Held : Parallel Success",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.PARALLEL_SUCCESS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : Parallel Success : ad and bl messages were actually slotted into the same flit in parallel",
"UMask": "0x8",
@@ -9317,8 +11473,10 @@
},
{
"BriefDescription": "Message Held : VN0",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : VN0 : vn0 message(s) that couldn't be slotted into last vn0 flit are held in slotting stage while processing vn1 flit",
"UMask": "0x1",
@@ -9326,8 +11484,10 @@
},
{
"BriefDescription": "Message Held : VN1",
+ "Counter": "0,1,2",
"EventCode": "0x50",
"EventName": "UNC_M3UPI_RxC_HELD.VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Held : VN1 : vn1 message(s) that couldn't be slotted into last vn1 flit are held in slotting stage while processing vn0 flit",
"UMask": "0x2",
@@ -9335,8 +11495,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Inserts : REQ on AD : Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9344,8 +11506,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Inserts : RSP on AD : Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9353,8 +11517,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Inserts : SNP on AD : Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9362,8 +11528,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Inserts : NCB on BL : Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9371,8 +11539,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Inserts : NCS on BL : Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -9380,8 +11550,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Inserts : RSP on BL : Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9389,8 +11561,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Inserts : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Inserts : WB on BL : Counts the number of allocations into the UPI Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9398,8 +11572,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Inserts : REQ on AD : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9407,8 +11583,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Inserts : RSP on AD : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9416,8 +11594,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Inserts : SNP on AD : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9425,8 +11605,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Inserts : NCB on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9434,8 +11616,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Inserts : NCS on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -9443,8 +11627,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Inserts : RSP on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9452,8 +11638,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Inserts : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M3UPI_RxC_INSERTS_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Inserts : WB on BL : Counts the number of allocations into the UPI VN1 Ingress. This tracks one of the three rings that are used by the UPI agent. This can be used in conjunction with the UPI VN1 Ingress Occupancy Accumulator event in order to calculate average queue latency. Multiple ingress buffers can be tracked at a given time using multiple counters. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9461,8 +11649,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Occupancy : REQ on AD : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9470,8 +11660,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Occupancy : RSP on AD : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9479,8 +11671,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Occupancy : SNP on AD : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9488,8 +11682,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Occupancy : NCB on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9497,8 +11693,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Occupancy : NCS on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -9506,8 +11704,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Occupancy : RSP on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9515,8 +11715,10 @@
},
{
"BriefDescription": "VN0 Ingress (from CMS) Queue - Occupancy : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Ingress (from CMS) Queue - Occupancy : WB on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9524,8 +11726,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Occupancy : REQ on AD : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9533,8 +11737,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Occupancy : RSP on AD : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9542,8 +11748,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Occupancy : SNP on AD : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9551,8 +11759,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Occupancy : NCB on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9560,8 +11770,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy : NCS on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Occupancy : NCS on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -9569,8 +11781,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Occupancy : RSP on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9578,8 +11792,10 @@
},
{
"BriefDescription": "VN1 Ingress (from CMS) Queue - Occupancy : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M3UPI_RxC_OCCUPANCY_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Ingress (from CMS) Queue - Occupancy : WB on BL : Accumulates the occupancy of a given UPI VN1 Ingress queue in each cycle. This tracks one of the three ring Ingress buffers. This can be used with the UPI VN1 Ingress Not Empty event to calculate average occupancy or the UPI VN1 Ingress Allocations event in order to calculate average queuing latency. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9587,8 +11803,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : REQ on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9596,8 +11814,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : RSP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9605,8 +11825,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : SNP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9614,8 +11836,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : NCB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9623,8 +11847,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : NCS on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -9632,8 +11858,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : RSP on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9641,8 +11869,10 @@
},
{
"BriefDescription": "VN0 message can't slot into flit : WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4E",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN0.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 message can't slot into flit : WB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9650,8 +11880,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : REQ on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : REQ on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -9659,8 +11891,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : RSP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : RSP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -9668,8 +11902,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : SNP on AD",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.AD_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : SNP on AD : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -9677,8 +11913,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : NCB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : NCB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -9686,8 +11924,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : NCS on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : NCS on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Non-Coherent Standard (NCS) messages on BL.",
"UMask": "0x40",
@@ -9695,8 +11935,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : RSP on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : RSP on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -9704,8 +11946,10 @@
},
{
"BriefDescription": "VN1 message can't slot into flit : WB on BL",
+ "Counter": "0,1,2",
"EventCode": "0x4F",
"EventName": "UNC_M3UPI_RxC_PACKING_MISS_VN1.BL_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 message can't slot into flit : WB on BL : Count cases where Ingress has packets to send but did not have time to pack into flit before sending to Agent so slot was left NULL which could have been used. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -9713,8 +11957,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Any In Use",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.ANY_IN_USE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Any In Use : At least one remote vna credit is in use",
"UMask": "0x20",
@@ -9722,8 +11968,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Corrected",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.CORRECTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Corrected : Number of remote vna credits corrected (local return) per cycle",
"UMask": "0x1",
@@ -9731,8 +11979,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 1 : Remote vna credit level is less than 1 (i.e. no vna credits available)",
"UMask": "0x2",
@@ -9740,8 +11990,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 10 : remote vna credit level is less than 10; parallel vn0/vn1 arb not possible",
"UMask": "0x10",
@@ -9749,8 +12001,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 4 : Remote vna credit level is less than 4; bl (or ad requiring 4 vna) cannot arb on vna",
"UMask": "0x4",
@@ -9758,8 +12012,10 @@
},
{
"BriefDescription": "Remote VNA Credits : Level < 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x5A",
"EventName": "UNC_M3UPI_RxC_VNA_CRD.LT5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Remote VNA Credits : Level < 5 : Remote vna credit level is less than 5; parallel ad/bl arb on vna not possible",
"UMask": "0x8",
@@ -9767,8 +12023,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_ADBL_ALLOC_L5",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_ADBL_ALLOC_L5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credit count was less than 5 and allocation to ad or bl messages was required",
"UMask": "0x2",
@@ -9776,8 +12034,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_VN01_ALLOC_LT10",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.REQ_VN01_ALLOC_LT10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credit count was less than 10 and allocation to vn0 or vn1 was required",
"UMask": "0x1",
@@ -9785,8 +12045,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn0, remote vna credits were allocated only to ad messages, not to bl",
"UMask": "0x10",
@@ -9794,8 +12056,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_JUST_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn0, remote vna credits were allocated only to bl messages, not to ad",
"UMask": "0x20",
@@ -9803,8 +12067,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_ONLY",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN0_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credits were allocated only to vn0, not to vn1",
"UMask": "0x4",
@@ -9812,8 +12078,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn1, remote vna credits were allocated only to ad messages, not to bl",
"UMask": "0x40",
@@ -9821,8 +12089,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_JUST_BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": on vn1, remote vna credits were allocated only to bl messages, not to ad",
"UMask": "0x80",
@@ -9830,8 +12100,10 @@
},
{
"BriefDescription": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_ONLY",
+ "Counter": "0,1,2,3",
"EventCode": "0x59",
"EventName": "UNC_M3UPI_RxC_VNA_CRD_MISC.VN1_ONLY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": remote vna credits were allocated only to vn1, not to vn0",
"UMask": "0x8",
@@ -9839,8 +12111,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x11",
@@ -9848,8 +12122,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x10",
@@ -9857,8 +12133,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x1",
@@ -9866,8 +12144,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x44",
@@ -9875,8 +12155,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x40",
@@ -9884,8 +12166,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M3UPI_RxR_BUSY_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x4",
@@ -9893,8 +12177,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x11",
@@ -9902,8 +12188,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x10",
@@ -9911,8 +12199,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x1",
@@ -9920,8 +12210,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AK : Number of packets bypassing the CMS Ingress",
"UMask": "0x2",
@@ -9929,8 +12221,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AKC - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x80",
@@ -9938,8 +12232,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x44",
@@ -9947,8 +12243,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x40",
@@ -9956,8 +12254,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x4",
@@ -9965,8 +12265,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M3UPI_RxR_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : IV : Number of packets bypassing the CMS Ingress",
"UMask": "0x8",
@@ -9974,8 +12276,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -9983,8 +12287,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x10",
@@ -9992,8 +12298,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x1",
@@ -10001,8 +12309,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AK : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x2",
@@ -10010,8 +12320,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -10019,8 +12331,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x40",
@@ -10028,8 +12342,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x4",
@@ -10037,8 +12353,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IFV - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IFV - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x80",
@@ -10046,8 +12364,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IV : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x8",
@@ -10055,16 +12375,20 @@
},
{
"BriefDescription": "Transgress Injection Starvation",
+ "Counter": "0,1,2,3",
"EventCode": "0xe4",
"EventName": "UNC_M3UPI_RxR_CRD_STARVED_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"Unit": "M3UPI"
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -10072,8 +12396,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -10081,8 +12407,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -10090,8 +12418,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AK : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -10099,8 +12429,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AKC - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -10108,8 +12440,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -10117,8 +12451,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -10126,8 +12462,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -10135,8 +12473,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M3UPI_RxR_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : IV : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -10144,8 +12484,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -10153,8 +12495,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -10162,8 +12506,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -10171,8 +12517,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AK : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -10180,8 +12528,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AKC - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -10189,8 +12539,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -10198,8 +12550,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x20",
@@ -10207,8 +12561,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -10216,8 +12572,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M3UPI_RxR_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : IV : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -10225,8 +12583,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10234,8 +12594,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10243,8 +12605,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10252,8 +12616,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -10261,8 +12627,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -10270,8 +12638,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -10279,8 +12649,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -10288,8 +12660,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -10297,8 +12671,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10306,8 +12682,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10315,8 +12693,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10324,8 +12704,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -10333,8 +12715,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -10342,8 +12726,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -10351,8 +12737,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -10360,8 +12748,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -10369,8 +12759,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10378,8 +12770,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10387,8 +12781,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10396,8 +12792,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -10405,8 +12803,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -10414,8 +12814,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -10423,8 +12825,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -10432,8 +12836,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -10441,8 +12847,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10450,8 +12858,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10459,8 +12869,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10468,8 +12880,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -10477,8 +12891,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -10486,8 +12902,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -10495,8 +12913,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -10504,8 +12924,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M3UPI_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -10513,8 +12935,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10522,8 +12946,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10531,8 +12957,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10540,8 +12968,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10549,8 +12979,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10558,8 +12990,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10567,8 +13001,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10576,8 +13012,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10585,8 +13023,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10594,8 +13034,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -10603,8 +13045,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -10612,8 +13056,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M3UPI_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -10621,8 +13067,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 REQ Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x1",
@@ -10630,8 +13078,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 RSP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x4",
@@ -10639,8 +13089,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 SNP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x2",
@@ -10648,8 +13100,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN0 WB Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x8",
@@ -10657,8 +13111,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 REQ Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x10",
@@ -10666,8 +13122,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 RSP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x40",
@@ -10675,8 +13133,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 SNP Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x20",
@@ -10684,8 +13144,10 @@
},
{
"BriefDescription": "Failed ARB for AD : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_M3UPI_TxC_AD_ARB_FAIL.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for AD : VN1 WB Messages : AD arb but no win; arb request asserted but not won",
"UMask": "0x80",
@@ -10693,8 +13155,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x1",
@@ -10702,8 +13166,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x2",
@@ -10711,8 +13177,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.AD_SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x4",
@@ -10720,8 +13188,10 @@
},
{
"BriefDescription": "AD FlowQ Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_BYPASS.BL_EARLY_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD FlowQ Bypass : Counts cases when the AD flowQ is bypassed (S0, S1 and S2 indicate which slot was bypassed with S0 having the highest priority and S2 the least)",
"UMask": "0x8",
@@ -10729,8 +13199,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 REQ Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x1",
@@ -10738,8 +13210,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 RSP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x4",
@@ -10747,8 +13221,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 SNP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x2",
@@ -10756,8 +13232,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN0 WB Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x8",
@@ -10765,8 +13243,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 REQ Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x10",
@@ -10774,8 +13254,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 RSP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x40",
@@ -10783,8 +13265,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 SNP Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x20",
@@ -10792,8 +13276,10 @@
},
{
"BriefDescription": "AD Flow Q Not Empty : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_CYCLES_NE.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Not Empty : VN1 WB Messages : Number of cycles the AD Egress queue is Not Empty",
"UMask": "0x80",
@@ -10801,8 +13287,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 REQ Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x1",
@@ -10810,8 +13298,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x4",
@@ -10819,8 +13309,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 SNP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x2",
@@ -10828,8 +13320,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN0 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x8",
@@ -10837,8 +13331,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN1 REQ Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x10",
@@ -10846,8 +13342,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN1 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x40",
@@ -10855,8 +13353,10 @@
},
{
"BriefDescription": "AD Flow Q Inserts : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_INSERTS.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AD Flow Q Inserts : VN1 SNP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x20",
@@ -10864,78 +13364,98 @@
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 REQ Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 SNP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN1 REQ Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "AD Flow Q Occupancy : VN1 SNP Messages",
+ "Counter": "0",
"EventCode": "0x1C",
"EventName": "UNC_M3UPI_TxC_AD_FLQ_OCCUPANCY.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "AK Flow Q Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_M3UPI_TxC_AK_FLQ_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "AK Flow Q Occupancy",
+ "Counter": "0",
"EventCode": "0x1E",
"EventName": "UNC_M3UPI_TxC_AK_FLQ_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M3UPI"
},
{
"BriefDescription": "Failed ARB for BL : VN0 NCB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 NCB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x4",
@@ -10943,8 +13463,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN0 NCS Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 NCS Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x8",
@@ -10952,8 +13474,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 RSP Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x1",
@@ -10961,8 +13485,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN0 WB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x2",
@@ -10970,8 +13496,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 NCS Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 NCS Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x40",
@@ -10979,8 +13507,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 NCB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 NCB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x80",
@@ -10988,8 +13518,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 RSP Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x10",
@@ -10997,8 +13529,10 @@
},
{
"BriefDescription": "Failed ARB for BL : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x35",
"EventName": "UNC_M3UPI_TxC_BL_ARB_FAIL.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Failed ARB for BL : VN1 WB Messages : BL arb but no win; arb request asserted but not won",
"UMask": "0x20",
@@ -11006,8 +13540,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 REQ Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x1",
@@ -11015,8 +13551,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 RSP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x4",
@@ -11024,8 +13562,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 SNP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x2",
@@ -11033,8 +13573,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN0 WB Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x8",
@@ -11042,8 +13584,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 REQ Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x10",
@@ -11051,8 +13595,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 RSP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x40",
@@ -11060,8 +13606,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 SNP Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x20",
@@ -11069,8 +13617,10 @@
},
{
"BriefDescription": "BL Flow Q Not Empty : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_CYCLES_NE.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Not Empty : VN1 WB Messages : Number of cycles the BL Egress queue is Not Empty",
"UMask": "0x80",
@@ -11078,8 +13628,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x1",
@@ -11087,8 +13639,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x2",
@@ -11096,8 +13650,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 NCS Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 NCS Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x8",
@@ -11105,8 +13661,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN0 NCB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN0 NCB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x4",
@@ -11114,8 +13672,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1 RSP Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x10",
@@ -11123,8 +13683,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1 WB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1 WB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x20",
@@ -11132,8 +13694,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1_NCB Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1_NCB Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x80",
@@ -11141,8 +13705,10 @@
},
{
"BriefDescription": "BL Flow Q Inserts : VN1_NCS Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_INSERTS.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "BL Flow Q Inserts : VN1_NCS Messages : Counts the number of allocations into the QPI FlowQ. This can be used in conjunction with the QPI FlowQ Occupancy Accumulator event in order to calculate average queue latency. Only a single FlowQ queue can be tracked at any given time. It is not possible to filter based on direction or polarity.",
"UMask": "0x40",
@@ -11150,120 +13716,150 @@
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 NCB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 NCS Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1_NCS Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1_NCB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 WB Messages",
+ "Counter": "0",
"EventCode": "0x1D",
"EventName": "UNC_M3UPI_TxC_BL_FLQ_OCCUPANCY.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1F",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 WB Messages",
+ "Counter": "0",
"EventCode": "0x1F",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_THROUGH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN0 NCB Messages",
+ "Counter": "0",
"EventCode": "0x1F",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN0_WRPULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 RSP Messages",
+ "Counter": "0",
"EventCode": "0x1F",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1 WB Messages",
+ "Counter": "0",
"EventCode": "0x1F",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_THROUGH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "BL Flow Q Occupancy : VN1_NCS Messages",
+ "Counter": "0",
"EventCode": "0x1F",
"EventName": "UNC_M3UPI_TxC_BL_WB_FLQ_OCCUPANCY.VN1_WRPULL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -11271,8 +13867,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -11280,8 +13878,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -11289,8 +13889,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -11298,8 +13900,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -11307,8 +13911,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA6",
"EventName": "UNC_M3UPI_TxR_HORZ_ADS_USED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -11316,8 +13922,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -11325,8 +13933,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -11334,8 +13944,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -11343,8 +13955,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AK : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -11352,8 +13966,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AKC - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x80",
@@ -11361,8 +13977,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -11370,8 +13988,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -11379,8 +13999,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -11388,8 +14010,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA7",
"EventName": "UNC_M3UPI_TxR_HORZ_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : IV : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -11397,8 +14021,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -11406,8 +14032,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -11415,8 +14043,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -11424,8 +14054,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AK : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -11433,8 +14065,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -11442,8 +14076,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -11451,8 +14087,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -11460,8 +14098,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -11469,8 +14109,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_FULL.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : IV : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -11478,8 +14120,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -11487,8 +14131,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -11496,8 +14142,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -11505,8 +14153,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -11514,8 +14164,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -11523,8 +14175,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -11532,8 +14186,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -11541,8 +14197,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -11550,8 +14208,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA3",
"EventName": "UNC_M3UPI_TxR_HORZ_CYCLES_NE.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -11559,8 +14219,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -11568,8 +14230,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -11577,8 +14241,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -11586,8 +14252,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AK : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -11595,8 +14263,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -11604,8 +14274,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -11613,8 +14285,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -11622,8 +14296,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -11631,8 +14307,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UNC_M3UPI_TxR_HORZ_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : IV : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -11640,8 +14318,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x11",
@@ -11649,8 +14329,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x10",
@@ -11658,8 +14340,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x1",
@@ -11667,8 +14351,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AK : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x2",
@@ -11676,8 +14362,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x80",
@@ -11685,8 +14373,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x44",
@@ -11694,8 +14384,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x40",
@@ -11703,8 +14395,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x4",
@@ -11712,8 +14406,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA4",
"EventName": "UNC_M3UPI_TxR_HORZ_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x8",
@@ -11721,8 +14417,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -11730,8 +14428,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -11739,8 +14439,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -11748,8 +14450,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AK : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -11757,8 +14461,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -11766,8 +14472,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -11775,8 +14483,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -11784,8 +14494,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -11793,8 +14505,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M3UPI_TxR_HORZ_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : IV : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -11802,8 +14516,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x1",
@@ -11811,8 +14527,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x1",
@@ -11820,8 +14538,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AK : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x2",
@@ -11829,8 +14549,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x80",
@@ -11838,8 +14560,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x4",
@@ -11847,8 +14571,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x4",
@@ -11856,8 +14582,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xA5",
"EventName": "UNC_M3UPI_TxR_HORZ_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x8",
@@ -11865,8 +14593,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -11874,8 +14604,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -11883,8 +14615,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -11892,8 +14626,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "UNC_M3UPI_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -11901,8 +14637,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -11910,8 +14648,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -11919,8 +14659,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -11928,8 +14670,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -11937,8 +14681,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -11946,8 +14692,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -11955,8 +14703,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : IV - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9D",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS.IV_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : IV - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -11964,8 +14714,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS_1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -11973,8 +14725,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9E",
"EventName": "UNC_M3UPI_TxR_VERT_BYPASS_1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -11982,8 +14736,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -11991,8 +14747,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -12000,8 +14758,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12009,8 +14769,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -12018,8 +14780,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -12027,8 +14791,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -12036,8 +14802,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -12045,8 +14813,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -12054,8 +14824,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_FULL1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12063,8 +14835,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -12072,8 +14846,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -12081,8 +14857,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12090,8 +14868,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -12099,8 +14879,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -12108,8 +14890,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -12117,8 +14901,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -12126,8 +14912,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -12135,8 +14923,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M3UPI_TxR_VERT_CYCLES_NE1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12144,8 +14934,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -12153,8 +14945,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -12162,8 +14956,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12171,8 +14967,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -12180,8 +14978,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -12189,8 +14989,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -12198,8 +15000,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : IV - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -12207,8 +15011,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -12216,8 +15022,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M3UPI_TxR_VERT_INSERTS1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12225,8 +15033,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -12234,8 +15044,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x10",
@@ -12243,8 +15055,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -12252,8 +15066,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x20",
@@ -12261,8 +15077,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x4",
@@ -12270,8 +15088,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x40",
@@ -12279,8 +15099,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M3UPI_TxR_VERT_NACK0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x8",
@@ -12288,8 +15110,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_VERT_NACK1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -12297,8 +15121,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M3UPI_TxR_VERT_NACK1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -12306,8 +15132,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -12315,8 +15143,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -12324,8 +15154,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12333,8 +15165,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -12342,8 +15176,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -12351,8 +15187,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -12360,8 +15198,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : IV - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -12369,8 +15209,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -12378,8 +15220,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M3UPI_TxR_VERT_OCCUPANCY1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -12387,8 +15231,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -12396,8 +15242,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x10",
@@ -12405,8 +15253,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -12414,8 +15264,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x20",
@@ -12423,8 +15275,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -12432,8 +15286,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x40",
@@ -12441,8 +15297,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9A",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x8",
@@ -12450,8 +15308,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -12459,8 +15319,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -12468,8 +15330,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9B",
"EventName": "UNC_M3UPI_TxR_VERT_STARVED1.TGC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -12477,8 +15341,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN0 REQ Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x2",
@@ -12486,8 +15352,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN0 RSP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x8",
@@ -12495,8 +15363,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN0_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN0 SNP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x4",
@@ -12504,8 +15374,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN1 REQ Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x10",
@@ -12513,8 +15385,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN1 RSP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x40",
@@ -12522,8 +15396,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VN1_SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VN1 SNP Messages : No credits available to send to UPIs on the AD Ring",
"UMask": "0x20",
@@ -12531,8 +15407,10 @@
},
{
"BriefDescription": "UPI0 AD Credits Empty : VNA",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M3UPI_UPI_PEER_AD_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 AD Credits Empty : VNA : No credits available to send to UPIs on the AD Ring",
"UMask": "0x1",
@@ -12540,8 +15418,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN0 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_NCS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN0 RSP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x4",
@@ -12549,8 +15429,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN0 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN0 REQ Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x2",
@@ -12558,8 +15440,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN0 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN0_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN0 SNP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x8",
@@ -12567,8 +15451,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN1 RSP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_NCS_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN1 RSP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x20",
@@ -12576,8 +15462,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN1 REQ Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN1 REQ Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x10",
@@ -12585,8 +15473,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VN1 SNP Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VN1_WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VN1 SNP Messages : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x40",
@@ -12594,8 +15484,10 @@
},
{
"BriefDescription": "UPI0 BL Credits Empty : VNA",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M3UPI_UPI_PEER_BL_CREDITS_EMPTY.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "UPI0 BL Credits Empty : VNA : No credits available to send to UPI on the BL Ring (diff between non-SMI and SMI mode)",
"UMask": "0x1",
@@ -12603,16 +15495,20 @@
},
{
"BriefDescription": "FlowQ Generated Prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_M3UPI_UPI_PREFETCH_SPAWN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "FlowQ Generated Prefetch : Count cases where FlowQ causes spawn of Prefetch to iMC/SMI3 target",
"Unit": "M3UPI"
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -12620,8 +15516,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -12629,8 +15527,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -12638,8 +15538,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "UNC_M3UPI_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -12647,8 +15549,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_VERT_RING_AKC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -12656,8 +15560,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_VERT_RING_AKC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -12665,8 +15571,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_VERT_RING_AKC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -12674,8 +15582,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB4",
"EventName": "UNC_M3UPI_VERT_RING_AKC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -12683,8 +15593,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -12692,8 +15604,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -12701,8 +15615,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -12710,8 +15626,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UNC_M3UPI_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -12719,8 +15637,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -12728,8 +15648,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -12737,8 +15659,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -12746,8 +15670,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "UNC_M3UPI_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -12755,8 +15681,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Down : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -12764,8 +15692,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xB3",
"EventName": "UNC_M3UPI_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Up : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -12773,8 +15703,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M3UPI_VERT_RING_TGC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -12782,8 +15714,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M3UPI_VERT_RING_TGC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -12791,8 +15725,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M3UPI_VERT_RING_TGC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -12800,8 +15736,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xB5",
"EventName": "UNC_M3UPI_VERT_RING_TGC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -12809,8 +15747,10 @@
},
{
"BriefDescription": "VN0 Credit Used : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : WB on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -12818,8 +15758,10 @@
},
{
"BriefDescription": "VN0 Credit Used : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : NCB on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -12827,8 +15769,10 @@
},
{
"BriefDescription": "VN0 Credit Used : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : REQ on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -12836,8 +15780,10 @@
},
{
"BriefDescription": "VN0 Credit Used : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : RSP on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -12845,8 +15791,10 @@
},
{
"BriefDescription": "VN0 Credit Used : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : SNP on AD : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -12854,8 +15802,10 @@
},
{
"BriefDescription": "VN0 Credit Used : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5B",
"EventName": "UNC_M3UPI_VN0_CREDITS_USED.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Used : RSP on BL : Number of times a VN0 credit was used on the DRS message channel. In order for a request to be transferred across UPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN0. VNA is a shared pool used to achieve high performance. The VN0 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN0 if they fail. This counts the number of times a VN0 credit was used. Note that a single VN0 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN0 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -12863,8 +15813,10 @@
},
{
"BriefDescription": "VN0 No Credits : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : WB on BL : Number of Cycles there were no VN0 Credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -12872,8 +15824,10 @@
},
{
"BriefDescription": "VN0 No Credits : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : NCB on BL : Number of Cycles there were no VN0 Credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -12881,8 +15835,10 @@
},
{
"BriefDescription": "VN0 No Credits : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : REQ on AD : Number of Cycles there were no VN0 Credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -12890,8 +15846,10 @@
},
{
"BriefDescription": "VN0 No Credits : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : RSP on AD : Number of Cycles there were no VN0 Credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -12899,8 +15857,10 @@
},
{
"BriefDescription": "VN0 No Credits : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : SNP on AD : Number of Cycles there were no VN0 Credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -12908,8 +15868,10 @@
},
{
"BriefDescription": "VN0 No Credits : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5D",
"EventName": "UNC_M3UPI_VN0_NO_CREDITS.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 No Credits : RSP on BL : Number of Cycles there were no VN0 Credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -12917,8 +15879,10 @@
},
{
"BriefDescription": "VN1 Credit Used : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : WB on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -12926,8 +15890,10 @@
},
{
"BriefDescription": "VN1 Credit Used : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : NCB on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -12935,8 +15901,10 @@
},
{
"BriefDescription": "VN1 Credit Used : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : REQ on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -12944,8 +15912,10 @@
},
{
"BriefDescription": "VN1 Credit Used : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : RSP on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -12953,8 +15923,10 @@
},
{
"BriefDescription": "VN1 Credit Used : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : SNP on AD : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -12962,8 +15934,10 @@
},
{
"BriefDescription": "VN1 Credit Used : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "UNC_M3UPI_VN1_CREDITS_USED.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Used : RSP on BL : Number of times a VN1 credit was used on the WB message channel. In order for a request to be transferred across QPI, it must be guaranteed to have a flit buffer on the remote socket to sink into. There are two credit pools, VNA and VN1. VNA is a shared pool used to achieve high performance. The VN1 pool has reserved entries for each message class and is used to prevent deadlock. Requests first attempt to acquire a VNA credit, and then fall back to VN1 if they fail. This counts the number of times a VN1 credit was used. Note that a single VN1 credit holds access to potentially multiple flit buffers. For example, a transfer that uses VNA could use 9 flit buffers and in that case uses 9 credits. A transfer on VN1 will only count a single credit even though it may use multiple buffers. : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -12971,8 +15945,10 @@
},
{
"BriefDescription": "VN1 No Credits : WB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : WB on BL : Number of Cycles there were no VN1 Credits : Data Response (WB) messages on BL. WB is generally used to transmit data with coherency. For example, remote reads and writes, or cache to cache transfers will transmit their data using WB.",
"UMask": "0x10",
@@ -12980,8 +15956,10 @@
},
{
"BriefDescription": "VN1 No Credits : NCB on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : NCB on BL : Number of Cycles there were no VN1 Credits : Non-Coherent Broadcast (NCB) messages on BL. NCB is generally used to transmit data without coherency. For example, non-coherent read data returns.",
"UMask": "0x20",
@@ -12989,8 +15967,10 @@
},
{
"BriefDescription": "VN1 No Credits : REQ on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : REQ on AD : Number of Cycles there were no VN1 Credits : Home (REQ) messages on AD. REQ is generally used to send requests, request responses, and snoop responses.",
"UMask": "0x1",
@@ -12998,8 +15978,10 @@
},
{
"BriefDescription": "VN1 No Credits : RSP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.RSP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : RSP on AD : Number of Cycles there were no VN1 Credits : Response (RSP) messages on AD. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x4",
@@ -13007,8 +15989,10 @@
},
{
"BriefDescription": "VN1 No Credits : SNP on AD",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.SNP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : SNP on AD : Number of Cycles there were no VN1 Credits : Snoops (SNP) messages on AD. SNP is used for outgoing snoops.",
"UMask": "0x2",
@@ -13016,8 +16000,10 @@
},
{
"BriefDescription": "VN1 No Credits : RSP on BL",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "UNC_M3UPI_VN1_NO_CREDITS.WB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 No Credits : RSP on BL : Number of Cycles there were no VN1 Credits : Response (RSP) messages on BL. RSP packets are used to transmit a variety of protocol flits including grants and completions (CMP).",
"UMask": "0x8",
@@ -13025,168 +16011,210 @@
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x82",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_EQ_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa0",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x81",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_GT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x90",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x84",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.BOTHNONZERO_RT_LT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xc0",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_EQ_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_GT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7E",
"EventName": "UNC_M3UPI_WB_OCC_COMPARE.RT_LT_LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.LOCALDEST_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.LOCAL_AND_RT_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.ROUTETHRU_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x7D",
"EventName": "UNC_M3UPI_WB_PENDING.WAITING4PULL_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M3UPI"
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.ARB",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message is making arbitration request",
"UMask": "0x4",
@@ -13194,8 +16222,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.ARRIVED",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.ARRIVED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message arrived in ingress pipeline",
"UMask": "0x1",
@@ -13203,8 +16233,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.BYPASS",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.BYPASS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message took bypass path",
"UMask": "0x2",
@@ -13212,8 +16244,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.FLITTED",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.FLITTED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message was slotted into flit (non bypass)",
"UMask": "0x10",
@@ -13221,8 +16255,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_ARB",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.LOST_ARB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message lost arbitration",
"UMask": "0x8",
@@ -13230,8 +16266,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_OLD",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.LOST_OLD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message was dropped because it became too old",
"UMask": "0x20",
@@ -13239,8 +16277,10 @@
},
{
"BriefDescription": "UNC_M3UPI_XPT_PFTCH.LOST_QFULL",
+ "Counter": "0,1,2,3",
"EventCode": "0x61",
"EventName": "UNC_M3UPI_XPT_PFTCH.LOST_QFULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": xpt prefetch message was dropped because it was overwritten by new message while prefetch queue was full",
"UMask": "0x20",
@@ -13248,6 +16288,7 @@
},
{
"BriefDescription": "Number of kfclks",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_UPI_CLOCKTICKS",
"PerPkg": "1",
@@ -13256,8 +16297,10 @@
},
{
"BriefDescription": "Direct packet attempts : D2C",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2C",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Direct packet attempts : D2C : Counts the number of DRS packets that we attempted to do direct2core/direct2UPI on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"UMask": "0x1",
@@ -13265,8 +16308,10 @@
},
{
"BriefDescription": "Direct packet attempts : D2K",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_UPI_DIRECT_ATTEMPTS.D2K",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Direct packet attempts : D2K : Counts the number of DRS packets that we attempted to do direct2core/direct2UPI on. There are 4 mutually exclusive filters. Filter [0] can be used to get successful spawns, while [1:3] provide the different failure cases. Note that this does not count packets that are not candidates for Direct2Core. The only candidates for Direct2Core are DRS packets destined for Cbos.",
"UMask": "0x2",
@@ -13274,70 +16319,87 @@
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AD_VNA_EQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.AK_VNA_EQ3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_UPI_FLOWQ_NO_VNA_CRD.BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L1",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_UPI_L1_POWER_CYCLES",
"PerPkg": "1",
@@ -13346,326 +16408,404 @@
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "UNC_UPI_M3_BYP_BLOCKED.GV_BLOCK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_CRD_RETURN_BLOCKED",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_UPI_M3_CRD_RETURN_BLOCKED",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.BGF_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_BTW_2_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AD_VNA_LE2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_AK_VNA_LE3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_BTW_0_THRESH",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.FLOWQ_BL_VNA_EQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_UPI_M3_RXQ_BLOCKED.GV_BLOCK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles where phy is not in L0, L0c, L0p, L1",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_UPI_PHY_INIT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "L1 Req Nack",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_UPI_POWER_L1_NACK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "L1 Req Nack : Counts the number of times a link sends/receives a LinkReqNAck. When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. This requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqNAck refers to receiving an NAck (meaning this agent's Tx originally requested the power change). A Tx LinkReqNAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).",
"Unit": "UPI"
},
{
"BriefDescription": "L1 Req (same as L1 Ack).",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_UPI_POWER_L1_REQ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "L1 Req (same as L1 Ack). : Counts the number of times a link sends/receives a LinkReqAck. When the UPI links would like to change power state, the Tx side initiates a request to the Rx side requesting to change states. This requests can either be accepted or denied. If the Rx side replies with an Ack, the power mode will change. If it replies with NAck, no change will take place. This can be filtered based on Rx and Tx. An Rx LinkReqAck refers to receiving an Ack (meaning this agent's Tx originally requested the power change). A Tx LinkReqAck refers to sending this command (meaning the peer agent's Tx originally requested the power change and this agent accepted it).",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.ACK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VN1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_UPI_REQ_SLOT2_FROM_M3.VNA",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_UPI_RxL0P_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles in L0p : Number of UPI qfclk cycles spent in L0p power mode. L0p is a mode where we disable 1/2 of the UPI lanes, decreasing our bandwidth in order to save power. It increases snoop and data transfer latencies and decreases overall bandwidth. This mode can be very useful in NUMA optimized workloads that largely only utilize UPI for snoops and their responses. Use edge detect to count the number of instances when the UPI link entered L0p. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another.",
"Unit": "UPI"
},
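As a hedged illustration of how the L0p counter above is typically read: dividing it by the link clock count gives an approximate L0p residency, assuming UNC_UPI_RxL0P_POWER_CYCLES and UNC_UPI_CLOCKTICKS advance on the same clock over the sampling interval (the events are defined in qfclk/kfclk terms here, so treat the ratio as an approximation).

# Illustrative only: approximate fraction of link cycles spent in L0p,
# under the assumption stated above that both counters share one clock.
def l0p_residency(l0p_cycles: int, clockticks: int) -> float:
    return l0p_cycles / clockticks if clockticks else 0.0

# Hypothetical counter readings for one measurement window.
print(l0p_residency(2_500_000, 10_000_000))  # 0.25 -> ~25% of cycles in L0p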
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_UPI_RxL0_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles in L0 : Number of UPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xe",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10e",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xf",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10f",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Request : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Request : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Request, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.REQ_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Request, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Request, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x108",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Response - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPCNFLT",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Response - Conflict : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Response - Conflict : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x1aa",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Response - Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Response - Invalid : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Response - Invalid : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x12a",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Response - Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Response - Data : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Response - Data : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xc",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Response - Data, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_DATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Response - Data, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Response - Data, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10c",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Response - No Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Response - No Data : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Response - No Data : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xa",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Response - No Data, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.RSP_NODATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Response - No Data, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Response - No Data, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10a",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Snoop : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Snoop : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x9",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Snoop, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.SNP_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Snoop, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Snoop, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x109",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Writeback",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Writeback : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Writeback : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xd",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Receive path of a UPI Port : Writeback, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_UPI_RxL_BASIC_HDR_MATCH.WB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Receive path of a UPI Port : Writeback, Match Opcode : Matches on Receive path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Receive path of a UPI Port : Writeback, Match Opcode : Matches on Receive path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10d",
"Unit": "UPI"
},
{
"BriefDescription": "RxQ Flit Buffer Bypassed : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Bypassed : Slot 0 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.",
"UMask": "0x1",
@@ -13673,8 +16813,10 @@
},
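The bypass description above implies a simple health check: when flits increasingly have to be buffered rather than bypass, the receive path is seeing back-pressure. The ratio below is an illustrative derived metric built from two events in this file, not a metric the file itself defines.

# Illustrative only: share of slot-0 flits that were buffered instead of
# bypassing, assuming both counters cover the same interval
# (UNC_UPI_RxL_INSERTS.SLOT0 and UNC_UPI_RxL_BYPASSED.SLOT0).
def buffered_share(inserts_slot0: int, bypassed_slot0: int) -> float:
    total = inserts_slot0 + bypassed_slot0
    return inserts_slot0 / total if total else 0.0

# Hypothetical readings: most flits bypass, ~1.2% see the flit buffer.
print(buffered_share(1_200, 98_800))  # 0.012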
{
"BriefDescription": "RxQ Flit Buffer Bypassed : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Bypassed : Slot 1 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.",
"UMask": "0x2",
@@ -13682,8 +16824,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Bypassed : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x31",
"EventName": "UNC_UPI_RxL_BYPASSED.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Bypassed : Slot 2 : Counts the number of times that an incoming flit was able to bypass the flit buffer and pass directly across the BGF and into the Egress. This is a latency optimization, and should generally be the common case. If this value is less than the number of flits transferred, it implies that there was queueing getting onto the ring, and thus the transactions saw higher latency.",
"UMask": "0x4",
@@ -13691,46 +16835,57 @@
},
{
"BriefDescription": "CRC Errors Detected",
+ "Counter": "0,1,2,3",
"EventCode": "0x0B",
"EventName": "UNC_UPI_RxL_CRC_ERRORS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CRC Errors Detected : Number of CRC errors detected in the UPI Agent. Each UPI flit incorporates 8 bits of CRC for error detection. This counts the number of flits where the CRC was able to detect an error. After an error has been detected, the UPI agent will send a request to the transmitting socket to resend the flit (as well as any flits that came after it).",
"Unit": "UPI"
},
{
"BriefDescription": "LLR Requests Sent",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "UNC_UPI_RxL_CRC_LLR_REQ_TRANSMIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "LLR Requests Sent : Number of LLR Requests were transmitted. This should generally be <= the number of CRC errors detected. If multiple errors are detected before the Rx side receives a LLC_REQ_ACK from the Tx side, there is no need to send more LLR_REQ_NACKs.",
"Unit": "UPI"
},
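The two descriptions above state an invariant worth checking when post-processing these counters: LLR requests sent should not exceed CRC errors detected, because one request can cover several back-to-back errors. A minimal sketch of that check, assuming both counts come from the same port and interval:

# Illustrative consistency check for UNC_UPI_RxL_CRC_ERRORS versus
# UNC_UPI_RxL_CRC_LLR_REQ_TRANSMIT, per the relationship described above.
def llr_consistent(crc_errors: int, llr_requests: int) -> bool:
    return llr_requests <= crc_errors

print(llr_consistent(crc_errors=14, llr_requests=9))  # True
print(llr_consistent(crc_errors=3, llr_requests=7))   # False -> re-check the collection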
{
"BriefDescription": "VN0 Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x39",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN0 Credit Consumed : Counts the number of times that an RxQ VN0 credit was consumed (i.e. message uses a VN0 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "VN1 Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x3A",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VN1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VN1 Credit Consumed : Counts the number of times that an RxQ VN1 credit was consumed (i.e. message uses a VN1 credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "VNA Credit Consumed",
+ "Counter": "0,1,2,3",
"EventCode": "0x38",
"EventName": "UNC_UPI_RxL_CREDITS_CONSUMED_VNA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VNA Credit Consumed : Counts the number of times that an RxQ VNA credit was consumed (i.e. message uses a VNA credit for the Rx Buffer). This includes packets that went through the RxQ and those that were bypasssed.",
"Unit": "UPI"
},
{
"BriefDescription": "Valid Flits Received : All Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.ALL_DATA",
"PerPkg": "1",
@@ -13740,6 +16895,7 @@
},
{
"BriefDescription": "Valid Flits Received : Null FLITs received from any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.ALL_NULL",
"PerPkg": "1",
@@ -13749,8 +16905,10 @@
},
{
"BriefDescription": "Valid Flits Received : Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Data : Shows legal flit time (hides impact of L0p and L0c). : Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting..",
"UMask": "0x8",
@@ -13758,8 +16916,10 @@
},
{
"BriefDescription": "Valid Flits Received : Null FLITs received from any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Null FLITs received from any slot : Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x47",
@@ -13767,8 +16927,10 @@
},
{
"BriefDescription": "Valid Flits Received : LLCRD Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
"UMask": "0x10",
@@ -13776,8 +16938,10 @@
},
{
"BriefDescription": "Valid Flits Received : LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
"UMask": "0x40",
@@ -13785,6 +16949,7 @@
},
{
"BriefDescription": "Valid Flits Received : All Non Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.NON_DATA",
"PerPkg": "1",
@@ -13794,8 +16959,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot NULL or LLCRD Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.NULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL. Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.",
"UMask": "0x20",
@@ -13803,8 +16970,10 @@
},
{
"BriefDescription": "Valid Flits Received : Protocol Header",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
"UMask": "0x80",
@@ -13812,8 +16981,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.",
"UMask": "0x1",
@@ -13821,8 +16992,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.",
"UMask": "0x2",
@@ -13830,8 +17003,10 @@
},
{
"BriefDescription": "Valid Flits Received : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "UNC_UPI_RxL_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Received : Slot 2 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 2 - Other mask bits determine types of headers to count.",
"UMask": "0x4",
@@ -13839,8 +17014,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Allocations : Slot 0 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x1",
@@ -13848,8 +17025,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Allocations : Slot 1 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x2",
@@ -13857,8 +17036,10 @@
},
{
"BriefDescription": "RxQ Flit Buffer Allocations : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_UPI_RxL_INSERTS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Flit Buffer Allocations : Slot 2 : Number of allocations into the UPI Rx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"UMask": "0x4",
@@ -13866,8 +17047,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Occupancy - All Packets : Slot 0 : Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x1",
@@ -13875,8 +17058,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Occupancy - All Packets : Slot 1 : Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x2",
@@ -13884,8 +17069,10 @@
},
{
"BriefDescription": "RxQ Occupancy - All Packets : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_UPI_RxL_OCCUPANCY.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RxQ Occupancy - All Packets : Slot 2 : Accumulates the number of elements in the UPI RxQ in each cycle. Generally, when data is transmitted across UPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Not Empty event to calculate average occupancy, or with the Flit Buffer Allocations event to track average lifetime.",
"UMask": "0x4",
@@ -13893,118 +17080,147 @@
},
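(Editorial aside: the UNC_UPI_RxL_INSERTS.SLOTn and UNC_UPI_RxL_OCCUPANCY.SLOTn events above are intended to be combined into derived metrics such as average occupancy and average flit lifetime. A minimal Python sketch, assuming you have already collected the raw counter deltas for one slot over a measurement interval; the numbers below are placeholders, not real readings.)

def rxq_flit_buffer_stats(occupancy, inserts, not_empty_cycles):
    # occupancy        : UNC_UPI_RxL_OCCUPANCY.SLOTn delta (element-cycles)
    # inserts          : UNC_UPI_RxL_INSERTS.SLOTn delta (allocations)
    # not_empty_cycles : cycles the Rx flit buffer was not empty
    avg_occupancy = occupancy / not_empty_cycles if not_empty_cycles else 0.0
    avg_lifetime = occupancy / inserts if inserts else 0.0  # cycles per flit
    return avg_occupancy, avg_lifetime

occ, life = rxq_flit_buffer_stats(occupancy=120000, inserts=4000, not_empty_cycles=30000)
print(f"avg occupancy {occ:.1f} flits, avg lifetime {life:.1f} cycles")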
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S0_RXQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S1_RXQ2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_UPI_RxL_SLOT_BYPASS.S2_RXQ1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.CFG_CTL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.DFX",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RETRY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_BYPASS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.RXQ_CRED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.SPARE",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_UPI_TxL0P_CLK_ACTIVE.TXQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0p",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES",
"PerPkg": "1",
@@ -14013,180 +17229,221 @@
},
{
"BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES_LL_ENTER",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Counter": "0,1,2,3",
"EventCode": "0x29",
"EventName": "UNC_UPI_TxL0P_POWER_CYCLES_M3_EXIT",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "Cycles in L0",
+ "Counter": "0,1,2,3",
"EventCode": "0x26",
"EventName": "UNC_UPI_TxL0_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles in L0 : Number of UPI qfclk cycles spent in L0 power mode in the Link Layer. L0 is the default mode which provides the highest performance with the most power. Use edge detect to count the number of instances that the link entered L0. Link power states are per link and per direction, so for example the Tx direction could be in one state while Rx was in another. The phy layer sometimes leaves L0 for training, which will not be captured by this event.",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xe",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Bypass, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10e",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xf",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.NCS_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Non-Coherent Standard, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10f",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Request",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Request : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Request : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x8",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Request, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.REQ_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Request, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Request, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x108",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Response - Conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPCNFLT",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Conflict : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Conflict : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x1aa",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Response - Invalid",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSPI",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Invalid : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Invalid : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x12a",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Response - Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Data : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Data : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xc",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Response - Data, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_DATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Data, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Response - Data, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10c",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Response - No Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Response - No Data : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Response - No Data : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xa",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Response - No Data, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.RSP_NODATA_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Response - No Data, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Response - No Data, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10a",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Snoop",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Snoop : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Snoop : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x9",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Snoop, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.SNP_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Snoop, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Snoop, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x109",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Writeback",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Writeback : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Writeback : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0xd",
"Unit": "UPI"
},
{
"BriefDescription": "Matches on Transmit path of a UPI Port : Writeback, Match Opcode",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_UPI_TxL_BASIC_HDR_MATCH.WB_OPC",
+ "Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "Matches on Transmit path of a UPI Port : Writeback, Match Opcode : Matches on Transmit path of a UPI port.\r\nMatch based on UMask specific bits:\r\nZ: Message Class (3-bit)\r\nY: Message Class Enable\r\nW: Opcode (4-bit)\r\nV: Opcode Enable\r\nU: Local Enable\r\nT: Remote Enable\r\nS: Data Hdr Enable\r\nR: Non-Data Hdr Enable\r\nQ: Dual Slot Hdr Enable\r\nP: Single Slot Hdr Enable\r\nLink Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases.\r\nNote: If Message Class is disabled, we expect opcode to also be disabled.",
+ "PublicDescription": "Matches on Transmit path of a UPI Port : Writeback, Match Opcode : Matches on Transmit path of a UPI port. Match based on UMask specific bits: Z: Message Class (3-bit) Y: Message Class Enable W: Opcode (4-bit) V: Opcode Enable U: Local Enable T: Remote Enable S: Data Hdr Enable R: Non-Data Hdr Enable Q: Dual Slot Hdr Enable P: Single Slot Hdr Enable Link Layer control types are excluded (LL CTRL, slot NULL, LLCRD) even under specific opcode match_en cases. Note: If Message Class is disabled, we expect opcode to also be disabled.",
"UMask": "0x10d",
"Unit": "UPI"
},
{
"BriefDescription": "Tx Flit Buffer Bypassed",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_UPI_TxL_BYPASSED",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tx Flit Buffer Bypassed : Counts the number of times that an incoming flit was able to bypass the Tx flit buffer and pass directly out the UPI Link. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link.",
"Unit": "UPI"
},
{
"BriefDescription": "Valid Flits Sent : All Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.ALL_DATA",
"PerPkg": "1",
@@ -14196,6 +17453,7 @@
},
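(Editorial aside: the UNC_UPI_TxL_FLITS counts in this group are usually consumed as bandwidth estimates rather than raw values. A hedged sketch follows, assuming the commonly quoted 64/9 bytes-per-ALL_DATA-count scaling for this uncore generation; treat that constant as an assumption, it is not stated in this file.)

def upi_tx_data_bw(all_data_flits, interval_sec):
    # all_data_flits : UNC_UPI_TxL_FLITS.ALL_DATA delta over the interval
    # returns an approximate data bandwidth in bytes per second
    return all_data_flits * (64.0 / 9.0) / interval_sec

print(f"{upi_tx_data_bw(9_000_000, 1.0) / 1e6:.1f} MB/s of UPI data transmitted")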
{
"BriefDescription": "Valid Flits Sent : Null FLITs transmitted to any slot",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.ALL_NULL",
"PerPkg": "1",
@@ -14205,8 +17463,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.DATA",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Data : Shows legal flit time (hides impact of L0p and L0c). : Count Data Flits (which consume all slots), but how much to count is based on Slot0-2 mask, so count can be 0-3 depending on which slots are enabled for counting..",
"UMask": "0x8",
@@ -14214,8 +17474,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Idle",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.IDLE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Idle : Shows legal flit time (hides impact of L0p and L0c).",
"UMask": "0x47",
@@ -14223,8 +17485,10 @@
},
{
"BriefDescription": "Valid Flits Sent : LLCRD Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.LLCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : LLCRD Not Empty : Shows legal flit time (hides impact of L0p and L0c). : Enables counting of LLCRD (with non-zero payload). This only applies to slot 2 since LLCRD is only allowed in slot 2",
"UMask": "0x10",
@@ -14232,8 +17496,10 @@
},
{
"BriefDescription": "Valid Flits Sent : LLCTRL",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.LLCTRL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : LLCTRL : Shows legal flit time (hides impact of L0p and L0c). : Equivalent to an idle packet. Enables counting of slot 0 LLCTRL messages.",
"UMask": "0x40",
@@ -14241,6 +17507,7 @@
},
{
"BriefDescription": "Valid Flits Sent : All Non Data",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.NON_DATA",
"PerPkg": "1",
@@ -14250,8 +17517,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot NULL or LLCRD Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.NULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot NULL or LLCRD Empty : Shows legal flit time (hides impact of L0p and L0c). : LLCRD with all zeros is treated as NULL. Slot 1 is not treated as NULL if slot 0 is a dual slot. This can apply to slot 0,1, or 2.",
"UMask": "0x20",
@@ -14259,8 +17528,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Protocol Header",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.PROTHDR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Protocol Header : Shows legal flit time (hides impact of L0p and L0c). : Enables count of protocol headers in slot 0,1,2 (depending on slot uMask bits)",
"UMask": "0x80",
@@ -14268,8 +17539,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot 0 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 0 - Other mask bits determine types of headers to count.",
"UMask": "0x1",
@@ -14277,8 +17550,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot 1 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 1 - Other mask bits determine types of headers to count.",
"UMask": "0x2",
@@ -14286,8 +17561,10 @@
},
{
"BriefDescription": "Valid Flits Sent : Slot 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_UPI_TxL_FLITS.SLOT2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Valid Flits Sent : Slot 2 : Shows legal flit time (hides impact of L0p and L0c). : Count Slot 2 - Other mask bits determine types of headers to count.",
"UMask": "0x4",
@@ -14295,37 +17572,46 @@
},
{
"BriefDescription": "Tx Flit Buffer Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_UPI_TxL_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tx Flit Buffer Allocations : Number of allocations into the UPI Tx Flit Buffer. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This event can be used in conjunction with the Flit Buffer Occupancy event in order to calculate the average flit buffer lifetime.",
"Unit": "UPI"
},
{
"BriefDescription": "Tx Flit Buffer Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_UPI_TxL_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Tx Flit Buffer Occupancy : Accumulates the number of flits in the TxQ. Generally, when data is transmitted across UPI, it will bypass the TxQ and pass directly to the link. However, the TxQ will be used with L0p and when LLR occurs, increasing latency to transfer out to the link. This can be used with the cycles not empty event to track average occupancy, or the allocations event to track average lifetime in the TxQ.",
"Unit": "UPI"
},
{
"BriefDescription": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_UPI_VNA_CREDIT_RETURN_BLOCKED_VN01",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "UPI"
},
{
"BriefDescription": "VNA Credits Pending Return - Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_UPI_VNA_CREDIT_RETURN_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VNA Credits Pending Return - Occupancy : Number of VNA credits in the Rx side that are waitng to be returned back across the link.",
"Unit": "UPI"
},
{
"BriefDescription": "Clockticks in the UBOX using a dedicated 48-bit Fixed Counter",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_U_CLOCKTICKS",
"PerPkg": "1",
@@ -14333,16 +17619,20 @@
},
{
"BriefDescription": "Message Received : Doorbell",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.DOORBELL_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UBOX"
},
{
"BriefDescription": "Message Received : Interrupt",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.INT_PRIO",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : Interrupt : Interrupts",
"UMask": "0x10",
@@ -14350,8 +17640,10 @@
},
{
"BriefDescription": "Message Received : IPI",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.IPI_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : IPI : Inter Processor Interrupts",
"UMask": "0x4",
@@ -14359,8 +17651,10 @@
},
{
"BriefDescription": "Message Received : MSI",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.MSI_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : MSI : Message Signaled Interrupts - interrupts sent by devices (including PCIe via IOxAPIC) (Socket Mode only)",
"UMask": "0x2",
@@ -14368,8 +17662,10 @@
},
{
"BriefDescription": "Message Received : VLW",
+ "Counter": "0,1",
"EventCode": "0x42",
"EventName": "UNC_U_EVENT_MSG.VLW_RCVD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Message Received : VLW : Virtual Logical Wire (legacy) message were received from Uncore.",
"UMask": "0x1",
@@ -14377,160 +17673,200 @@
},
{
"BriefDescription": "IDI Lock/SplitLock Cycles",
+ "Counter": "0,1",
"EventCode": "0x44",
"EventName": "UNC_U_LOCK_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IDI Lock/SplitLock Cycles : Number of times an IDI Lock/SplitLock sequence was started",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCB",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCS",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_CBO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCB",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCS",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.RxC_CYCLES_NE_UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCB",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCS",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_CBO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCB",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCS",
+ "Counter": "0,1",
"EventCode": "0x4D",
"EventName": "UNC_U_M2U_MISC1.TxC_CYCLES_CRD_OVF_UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.RxC_CYCLES_EMPTY_BL",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.RxC_CYCLES_EMPTY_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.RxC_CYCLES_FULL_BL",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.RxC_CYCLES_FULL_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCB",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCS",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_CRD_OVF_VN0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AK",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AKC",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_BL",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_EMPTY_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC2.TxC_CYCLES_FULL_BL",
+ "Counter": "0,1",
"EventCode": "0x4E",
"EventName": "UNC_U_M2U_MISC2.TxC_CYCLES_FULL_BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AK",
+ "Counter": "0,1",
"EventCode": "0x4F",
"EventName": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AKC",
+ "Counter": "0,1",
"EventCode": "0x4F",
"EventName": "UNC_U_M2U_MISC3.TxC_CYCLES_FULL_AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "Cycles PHOLD Assert to Ack : Assert to ACK",
+ "Counter": "0,1",
"EventCode": "0x45",
"EventName": "UNC_U_PHOLD_CYCLES.ASSERT_TO_ACK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles PHOLD Assert to Ack : Assert to ACK : PHOLD cycles.",
"UMask": "0x1",
@@ -14538,32 +17874,40 @@
},
{
"BriefDescription": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY",
+ "Counter": "0,1",
"EventCode": "0x4C",
"EventName": "UNC_U_RACU_DRNG.PFTCH_BUF_EMPTY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_RACU_DRNG.RDRAND",
+ "Counter": "0,1",
"EventCode": "0x4C",
"EventName": "UNC_U_RACU_DRNG.RDRAND",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "UBOX"
},
{
"BriefDescription": "UNC_U_RACU_DRNG.RDSEED",
+ "Counter": "0,1",
"EventCode": "0x4C",
"EventName": "UNC_U_RACU_DRNG.RDSEED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "UBOX"
},
{
"BriefDescription": "RACU Request",
+ "Counter": "0,1",
"EventCode": "0x46",
"EventName": "UNC_U_RACU_REQUESTS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "RACU Request : Number outstanding register requests within message channel tracker",
"Unit": "UBOX"
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/uncore-io.json b/tools/perf/pmu-events/arch/x86/icelakex/uncore-io.json
index 9cef8862c428..3c3c2cf51e1d 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/uncore-io.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/uncore-io.json
@@ -1,70 +1,87 @@
[
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "1",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART0_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "2",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART1_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x21",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "3",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART2_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x22",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "4",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART3_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x23",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "5",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART4_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x24",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "6",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART5_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x25",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "7",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART6_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x26",
"Unit": "iio_free_running"
},
{
"BriefDescription": "Free running counter that increments for every 32 bytes of data sent from the IO agent to the SOC",
+ "Counter": "8",
"EventCode": "0xff",
"EventName": "UNC_IIO_BANDWIDTH_IN.PART7_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x27",
"Unit": "iio_free_running"
},
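(Editorial aside: the eight free-running UNC_IIO_BANDWIDTH_IN.PARTn_FREERUN counters above tick once per 32 bytes of inbound data, so converting a counter delta into traffic volume is a single multiplication. A minimal sketch; the start/end values are placeholders.)

def iio_inbound_bytes(count_start, count_end):
    # UNC_IIO_BANDWIDTH_IN.PARTn_FREERUN increments once per 32 bytes received
    return (count_end - count_start) * 32

print(f"{iio_inbound_bytes(1_000_000, 1_500_000) / 1e6:.1f} MB received on this IIO part")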
{
"BriefDescription": "Clockticks of the integrated IO (IIO) traffic controller",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_IIO_CLOCKTICKS",
"PerPkg": "1",
@@ -73,6 +90,7 @@
},
{
"BriefDescription": "Free running counter that increments for IIO clocktick",
+ "Counter": "0",
"EventCode": "0xff",
"EventName": "UNC_IIO_CLOCKTICKS_FREERUN",
"PerPkg": "1",
@@ -82,8 +100,10 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts : All Ports",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL",
+ "Experimental": "1",
"FCMask": "0x04",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -92,6 +112,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0-7",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS",
"FCMask": "0x04",
@@ -103,6 +124,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART0",
"FCMask": "0x04",
@@ -114,6 +136,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART1",
"FCMask": "0x04",
@@ -125,6 +148,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART2",
"FCMask": "0x04",
@@ -136,6 +160,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART3",
"FCMask": "0x04",
@@ -147,6 +172,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART4",
"FCMask": "0x04",
@@ -158,6 +184,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART5",
"FCMask": "0x04",
@@ -169,6 +196,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART6",
"FCMask": "0x04",
@@ -180,6 +208,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.PART7",
"FCMask": "0x04",
@@ -191,8 +220,10 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 0-7",
+ "Counter": "2,3",
"EventCode": "0xD5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL",
+ "Experimental": "1",
"FCMask": "0x04",
"PerPkg": "1",
"PublicDescription": "PCIe Completion Buffer Occupancy : Part 0-7",
@@ -201,6 +232,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 0-7",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
"FCMask": "0x04",
@@ -211,6 +243,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 0",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART0",
"FCMask": "0x04",
@@ -221,6 +254,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 1",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART1",
"FCMask": "0x04",
@@ -231,6 +265,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 2",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART2",
"FCMask": "0x04",
@@ -241,6 +276,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 3",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART3",
"FCMask": "0x04",
@@ -251,6 +287,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 4",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART4",
"FCMask": "0x04",
@@ -261,6 +298,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 5",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART5",
"FCMask": "0x04",
@@ -271,6 +309,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 6",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART6",
"FCMask": "0x04",
@@ -281,6 +320,7 @@
},
{
"BriefDescription": "PCIe Completion Buffer Occupancy of completions with data : Part 7",
+ "Counter": "2,3",
"EventCode": "0xd5",
"EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.PART7",
"FCMask": "0x04",
@@ -291,8 +331,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -302,8 +344,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -313,8 +357,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -324,8 +370,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -335,8 +383,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -346,8 +396,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -357,8 +409,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -368,8 +422,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -379,8 +435,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -390,8 +448,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -401,8 +461,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -412,8 +474,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -423,8 +487,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -434,8 +500,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -445,8 +513,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -456,8 +526,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -467,8 +539,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -478,8 +552,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -489,8 +565,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -500,8 +578,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.CFG_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -511,8 +591,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -522,8 +604,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -533,8 +617,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -544,8 +630,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -555,8 +643,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -566,8 +656,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -577,8 +669,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -588,8 +682,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -599,8 +695,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -610,8 +708,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reading from Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -621,8 +721,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -632,8 +734,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -643,8 +747,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -654,8 +760,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -665,8 +773,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -676,8 +786,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -687,8 +799,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -698,8 +812,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -709,8 +825,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -720,8 +838,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's IO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.IO_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -731,8 +851,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -742,8 +864,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -753,6 +877,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -764,6 +889,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -775,6 +901,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -786,6 +913,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -797,6 +925,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART4",
"FCMask": "0x07",
@@ -808,6 +937,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART5",
"FCMask": "0x07",
@@ -819,6 +949,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART6",
"FCMask": "0x07",
@@ -830,6 +961,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core reporting completion of Card read from Core DRAM",
+ "Counter": "2,3",
"EventCode": "0xc0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_READ.PART7",
"FCMask": "0x07",
@@ -841,8 +973,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -852,8 +986,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -863,6 +999,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -874,6 +1011,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -885,6 +1023,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -896,6 +1035,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -907,6 +1047,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -918,6 +1059,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -929,6 +1071,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -940,6 +1083,7 @@
},
{
"BriefDescription": "Data requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -951,8 +1095,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -962,8 +1108,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -973,8 +1121,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -984,8 +1134,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -995,8 +1147,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -1006,8 +1160,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -1017,8 +1173,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1028,8 +1186,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1039,8 +1199,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -1050,8 +1212,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -1061,8 +1225,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1072,8 +1238,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1083,8 +1251,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -1094,8 +1264,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -1105,8 +1277,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -1116,8 +1290,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -1127,8 +1303,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1138,8 +1316,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1149,8 +1329,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -1160,8 +1342,10 @@
},
{
"BriefDescription": "Data requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "2,3",
"EventCode": "0xC0",
"EventName": "UNC_IIO_DATA_REQ_BY_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -1171,8 +1355,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1182,8 +1368,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1193,8 +1381,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -1204,8 +1394,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -1215,8 +1407,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -1226,8 +1420,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -1237,8 +1433,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1248,8 +1446,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1259,8 +1459,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -1270,8 +1472,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.ATOMIC.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -1281,8 +1485,10 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1292,8 +1498,10 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1303,6 +1511,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART0",
"FCMask": "0x07",
@@ -1314,6 +1523,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART1",
"FCMask": "0x07",
@@ -1325,6 +1535,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART2",
"FCMask": "0x07",
@@ -1336,6 +1547,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART3",
"FCMask": "0x07",
@@ -1347,6 +1559,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART4",
"FCMask": "0x07",
@@ -1358,6 +1571,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART5",
"FCMask": "0x07",
@@ -1369,6 +1583,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART6",
"FCMask": "0x07",
@@ -1380,6 +1595,7 @@
},
{
"BriefDescription": "Data requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.CMPD.PART7",
"FCMask": "0x07",
@@ -1391,8 +1607,10 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1402,8 +1620,10 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1413,6 +1633,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -1424,6 +1645,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -1435,6 +1657,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -1446,6 +1669,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -1457,6 +1681,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART4",
"FCMask": "0x07",
@@ -1468,6 +1693,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART5",
"FCMask": "0x07",
@@ -1479,6 +1705,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART6",
"FCMask": "0x07",
@@ -1490,6 +1717,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card reading from DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART7",
"FCMask": "0x07",
@@ -1501,8 +1729,10 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1512,8 +1742,10 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1523,6 +1755,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -1534,6 +1767,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -1545,6 +1779,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -1556,6 +1791,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -1567,6 +1803,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -1578,6 +1815,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -1589,6 +1827,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -1600,6 +1839,7 @@
},
{
"BriefDescription": "Four byte data request of the CPU : Card writing to DRAM",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -1611,8 +1851,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1622,8 +1864,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1633,8 +1877,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -1644,8 +1890,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -1655,8 +1903,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -1666,8 +1916,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -1677,8 +1929,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1688,8 +1942,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1699,8 +1955,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -1710,8 +1968,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Messages",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.MSG.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -1721,8 +1981,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1732,8 +1994,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1743,8 +2007,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -1754,8 +2020,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -1765,8 +2033,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -1776,8 +2046,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -1787,8 +2059,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1798,8 +2072,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1809,8 +2085,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -1820,8 +2098,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -1831,8 +2111,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -1842,8 +2124,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -1853,8 +2137,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -1864,8 +2150,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -1875,8 +2163,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -1886,8 +2176,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -1897,8 +2189,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -1908,8 +2202,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -1919,8 +2215,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -1930,8 +2228,10 @@
},
{
"BriefDescription": "Data requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1",
"EventCode": "0x83",
"EventName": "UNC_IIO_DATA_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -1941,8 +2241,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -1952,8 +2254,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -1963,8 +2267,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Processing response from IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.IOMMU_HIT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -1974,8 +2280,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Issuing to IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.IOMMU_REQ",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -1985,8 +2293,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -1996,8 +2306,10 @@
},
{
"BriefDescription": "Incoming arbitration requests : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_IIO_INBOUND_ARB_REQ.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2007,8 +2319,10 @@
},
{
"BriefDescription": "Incoming arbitration requests granted : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2018,8 +2332,10 @@
},
{
"BriefDescription": "Incoming arbitration requests granted : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2029,8 +2345,10 @@
},
{
"BriefDescription": "Incoming arbitration requests granted : Processing response from IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.IOMMU_HIT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2040,8 +2358,10 @@
},
{
"BriefDescription": "Incoming arbitration requests granted : Issuing to IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.IOMMU_REQ",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2051,8 +2371,10 @@
},
{
"BriefDescription": "Incoming arbitration requests granted : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2062,8 +2384,10 @@
},
{
"BriefDescription": "Incoming arbitration requests granted : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_IIO_INBOUND_ARB_WON.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2073,8 +2397,10 @@
},
{
"BriefDescription": ": IOTLB Hits to a 1G Page",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.1G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOTLB Hits to a 1G Page : Counts if a transaction to a 1G page, on its first lookup, hits the IOTLB.",
"UMask": "0x10",
@@ -2082,8 +2408,10 @@
},
{
"BriefDescription": ": IOTLB Hits to a 2M Page",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.2M_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOTLB Hits to a 2M Page : Counts if a transaction to a 2M page, on its first lookup, hits the IOTLB.",
"UMask": "0x8",
@@ -2091,8 +2419,10 @@
},
{
"BriefDescription": ": IOTLB Hits to a 4K Page",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.4K_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOTLB Hits to a 4K Page : Counts if a transaction to a 4K page, on its first lookup, hits the IOTLB.",
"UMask": "0x4",
@@ -2100,8 +2430,10 @@
},
{
"BriefDescription": ": IOTLB lookups all",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.ALL_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOTLB lookups all : Some transactions have to look up IOTLB multiple times. Counts every time a request looks up IOTLB.",
"UMask": "0x2",
@@ -2109,8 +2441,10 @@
},
{
"BriefDescription": ": Context cache hits",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Context cache hits : Counts each time a first look up of the transaction hits the RCC.",
"UMask": "0x80",
@@ -2118,8 +2452,10 @@
},
{
"BriefDescription": ": Context cache lookups",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.CTXT_CACHE_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Context cache lookups : Counts each time a transaction looks up root context cache.",
"UMask": "0x40",
@@ -2127,8 +2463,10 @@
},
{
"BriefDescription": ": IOTLB lookups first",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.FIRST_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOTLB lookups first : Some transactions have to look up IOTLB multiple times. Counts the first time a request looks up IOTLB.",
"UMask": "0x1",
@@ -2136,8 +2474,10 @@
},
{
"BriefDescription": ": IOTLB Fills (same as IOTLB miss)",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_IIO_IOMMU0.MISSES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOTLB Fills (same as IOTLB miss) : When a transaction misses IOTLB, it does a page walk to look up memory and bring in the relevant page translation. Counts when this page translation is written to IOTLB.",
"UMask": "0x20",
@@ -2145,8 +2485,10 @@
},
{
"BriefDescription": ": Cycles PWT full",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.CYC_PWT_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Cycles PWT full : Counts cycles the IOMMU has reached its maximum limit for outstanding page walks.",
"UMask": "0x80",
@@ -2154,8 +2496,10 @@
},
{
"BriefDescription": ": IOMMU memory access",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.NUM_MEM_ACCESSES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": IOMMU memory access : IOMMU sends out memory fetches when it misses the cache look up which is indicated by this signal. M2IOSF only uses low priority channel",
"UMask": "0x40",
@@ -2163,8 +2507,10 @@
},
{
"BriefDescription": ": PWC Hit to a 1G page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_1G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 1G page : Counts each time a transaction's first look up hits the SLPWC at the 1G level",
"UMask": "0x8",
@@ -2172,8 +2518,10 @@
},
{
"BriefDescription": ": PWC Hit to a 2M page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_2M_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 2M page : Counts each time a transaction's first look up hits the SLPWC at the 2M level",
"UMask": "0x4",
@@ -2181,8 +2529,10 @@
},
{
"BriefDescription": ": PWC Hit to a 4K page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_4K_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWC Hit to a 4K page : Counts each time a transaction's first look up hits the SLPWC at the 4K level",
"UMask": "0x2",
@@ -2190,8 +2540,10 @@
},
{
"BriefDescription": ": PWT Hit to a 256T page",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_512G_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PWT Hit to a 256T page : Counts each time a transaction's first look up hits the SLPWC at the 512G level",
"UMask": "0x10",
@@ -2199,8 +2551,10 @@
},
{
"BriefDescription": ": PageWalk cache fill",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWC_CACHE_FILLS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PageWalk cache fill : When a transaction misses SLPWC, it does a page walk to look up memory and bring in the relevant page translation. When this page translation is written to SLPWC, ObsPwcFillValid_nnnH is asserted.",
"UMask": "0x20",
@@ -2208,8 +2562,10 @@
},
{
"BriefDescription": ": PageWalk cache lookup",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_IIO_IOMMU1.PWT_CACHE_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": PageWalk cache lookup : Counts each time a transaction looks up second level page walk cache.",
"UMask": "0x1",
@@ -2217,8 +2573,10 @@
},
{
"BriefDescription": ": Interrupt Entry cache hit",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.INT_CACHE_HITS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Interrupt Entry cache hit : Counts each time a transaction's first look up hits the IEC.",
"UMask": "0x80",
@@ -2226,8 +2584,10 @@
},
{
"BriefDescription": ": Interrupt Entry cache lookup",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.INT_CACHE_LOOKUPS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Interrupt Entry cache lookup : Counts the number of transaction looks up that interrupt remapping cache.",
"UMask": "0x40",
@@ -2235,8 +2595,10 @@
},
{
"BriefDescription": ": Device-selective Context cache invalidation cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.NUM_CTXT_CACHE_INVAL_DEVICE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Device-selective Context cache invalidation cycles : Counts number of Device selective context cache invalidation events",
"UMask": "0x20",
@@ -2244,8 +2606,10 @@
},
{
"BriefDescription": ": Domain-selective Context cache invalidation cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.NUM_CTXT_CACHE_INVAL_DOMAIN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Domain-selective Context cache invalidation cycles : Counts number of Domain selective context cache invalidation events",
"UMask": "0x10",
@@ -2253,8 +2617,10 @@
},
{
"BriefDescription": ": Context cache global invalidation cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.NUM_CTXT_CACHE_INVAL_GBL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Context cache global invalidation cycles : Counts number of Context Cache global invalidation events",
"UMask": "0x8",
@@ -2262,8 +2628,10 @@
},
{
"BriefDescription": ": Domain-selective IOTLB invalidation cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.NUM_INVAL_DOMAIN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Domain-selective IOTLB invalidation cycles : Counts number of Domain selective invalidation events",
"UMask": "0x2",
@@ -2271,8 +2639,10 @@
},
{
"BriefDescription": ": Global IOTLB invalidation cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.NUM_INVAL_GBL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Global IOTLB invalidation cycles : Indicates that IOMMU is doing global invalidation.",
"UMask": "0x1",
@@ -2280,8 +2650,10 @@
},
{
"BriefDescription": ": Page-selective IOTLB invalidation cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_IIO_IOMMU3.NUM_INVAL_PAGE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": ": Page-selective IOTLB invalidation cycles : Counts number of Page-selective within Domain Invalidation events",
"UMask": "0x4",
@@ -2289,8 +2661,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus : Asserted if all bits specified by mask match",
"UMask": "0x1",
@@ -2298,8 +2672,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus and PCIE bus : Asserted if all bits specified by mask match",
"UMask": "0x8",
@@ -2307,8 +2683,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : Non-PCIE bus and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AND Mask/match for debug bus : Non-PCIE bus and !(PCIE bus) : Asserted if all bits specified by mask match",
"UMask": "0x4",
@@ -2316,8 +2694,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AND Mask/match for debug bus : PCIE bus : Asserted if all bits specified by mask match",
"UMask": "0x2",
@@ -2325,8 +2705,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus : Asserted if all bits specified by mask match",
"UMask": "0x10",
@@ -2334,8 +2716,10 @@
},
{
"BriefDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x02",
"EventName": "UNC_IIO_MASK_MATCH_AND.NOT_BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "AND Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus) : Asserted if all bits specified by mask match",
"UMask": "0x20",
@@ -2343,8 +2727,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus : Asserted if any bits specified by mask match",
"UMask": "0x1",
@@ -2352,8 +2738,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus and PCIE bus : Asserted if any bits specified by mask match",
"UMask": "0x8",
@@ -2361,8 +2749,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : Non-PCIE bus and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OR Mask/match for debug bus : Non-PCIE bus and !(PCIE bus) : Asserted if any bits specified by mask match",
"UMask": "0x4",
@@ -2370,8 +2760,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OR Mask/match for debug bus : PCIE bus : Asserted if any bits specified by mask match",
"UMask": "0x2",
@@ -2379,8 +2771,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and PCIE bus : Asserted if any bits specified by mask match",
"UMask": "0x10",
@@ -2388,8 +2782,10 @@
},
{
"BriefDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus)",
+ "Counter": "0,1",
"EventCode": "0x03",
"EventName": "UNC_IIO_MASK_MATCH_OR.NOT_BUS0_NOT_BUS1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "OR Mask/match for debug bus : !(Non-PCIE bus) and !(PCIE bus) : Asserted if any bits specified by mask match",
"UMask": "0x20",
@@ -2397,15 +2793,19 @@
},
{
"BriefDescription": "Counting disabled",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_IIO_NOTHING",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IIO"
},
{
"BriefDescription": "Occupancy of outbound request queue : To device",
+ "Counter": "2,3",
"EventCode": "0xC5",
"EventName": "UNC_IIO_NUM_OUSTANDING_REQ_FROM_CPU.TO_IO",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2415,8 +2815,10 @@
},
{
"BriefDescription": ": Passing data to be written",
+ "Counter": "2,3",
"EventCode": "0x88",
"EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2426,8 +2828,10 @@
},
{
"BriefDescription": ": Issuing final read or write of line",
+ "Counter": "2,3",
"EventCode": "0x88",
"EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2436,8 +2840,10 @@
},
{
"BriefDescription": ": Processing response from IOMMU",
+ "Counter": "2,3",
"EventCode": "0x88",
"EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.IOMMU_HIT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2446,8 +2852,10 @@
},
{
"BriefDescription": ": Issuing to IOMMU",
+ "Counter": "2,3",
"EventCode": "0x88",
"EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.IOMMU_REQ",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2456,8 +2864,10 @@
},
{
"BriefDescription": ": Request Ownership",
+ "Counter": "2,3",
"EventCode": "0x88",
"EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2467,8 +2877,10 @@
},
{
"BriefDescription": ": Writing line",
+ "Counter": "2,3",
"EventCode": "0x88",
"EventName": "UNC_IIO_NUM_OUTSTANDING_REQ_OF_CPU.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2477,20 +2889,11 @@
"Unit": "IIO"
},
{
- "BriefDescription": "Number requests sent to PCIe from main die : From IRP",
- "EventCode": "0xC2",
- "EventName": "UNC_IIO_NUM_REQ_FROM_CPU.IRP",
- "FCMask": "0x07",
- "PerPkg": "1",
- "PortMask": "0xFF",
- "PublicDescription": "Number requests sent to PCIe from main die : From IRP : Captures Posted/Non-posted allocations from IRP. i.e. either non-confined P2P traffic or from the CPU",
- "UMask": "0x1",
- "Unit": "IIO"
- },
- {
"BriefDescription": "Number requests sent to PCIe from main die : From ITC",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UNC_IIO_NUM_REQ_FROM_CPU.ITC",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2500,8 +2903,10 @@
},
{
"BriefDescription": "Number requests sent to PCIe from main die : Completion allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0xc2",
"EventName": "UNC_IIO_NUM_REQ_FROM_CPU.PREALLOC",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2510,8 +2915,10 @@
},
{
"BriefDescription": "Number requests PCIe makes of the main die : Drop request",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU.ALL.DROP",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2521,6 +2928,7 @@
},
{
"BriefDescription": "Number requests PCIe makes of the main die : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU.COMMIT.ALL",
"FCMask": "0x07",
@@ -2532,8 +2940,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Abort",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.ABORT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2542,8 +2952,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Confined P2P",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.CONFINED_P2P",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2552,8 +2964,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Local P2P",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.LOC_P2P",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2562,8 +2976,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Multi-cast",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MCAST",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2572,8 +2988,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MEM",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2582,8 +3000,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : MsgB",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.MSGB",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2592,8 +3012,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Remote P2P",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.REM_P2P",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2602,8 +3024,10 @@
},
{
"BriefDescription": "Num requests sent by PCIe - by target : Ubox",
+ "Counter": "0,1,2,3",
"EventCode": "0x8E",
"EventName": "UNC_IIO_NUM_REQ_OF_CPU_BY_TGT.UBOX",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2612,15 +3036,19 @@
},
{
"BriefDescription": "ITC address map 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8F",
"EventName": "UNC_IIO_NUM_TGT_MATCHED_REQ_OF_CPU",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "IIO"
},
{
"BriefDescription": "Outbound cacheline requests issued : 64B requests issued to device",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_IIO_OUTBOUND_CL_REQS_ISSUED.TO_IO",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2630,8 +3058,10 @@
},
{
"BriefDescription": "Outbound TLP (transaction layer packet) requests issued : To device",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_IIO_OUTBOUND_TLP_REQS_ISSUED.TO_IO",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2641,16 +3071,20 @@
},
{
"BriefDescription": "PWT occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_IIO_PWT_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PWT occupancy : Indicates how many page walks are outstanding at any point in time.",
"Unit": "IIO"
},
{
"BriefDescription": "PCIe Request - cacheline complete : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2660,8 +3094,10 @@
},
{
"BriefDescription": "PCIe Request - cacheline complete : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2671,8 +3107,10 @@
},
{
"BriefDescription": "PCIe Request - cacheline complete : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2682,8 +3120,10 @@
},
{
"BriefDescription": "PCIe Request - cacheline complete : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CL_CMPL.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2693,8 +3133,10 @@
},
{
"BriefDescription": "PCIe Request complete : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2704,8 +3146,10 @@
},
{
"BriefDescription": "PCIe Request complete : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2715,8 +3159,10 @@
},
{
"BriefDescription": "PCIe Request complete : Processing response from IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.IOMMU_HIT",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2726,8 +3172,10 @@
},
{
"BriefDescription": "PCIe Request complete : Issuing to IOMMU",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.IOMMU_REQ",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2737,8 +3185,10 @@
},
{
"BriefDescription": "PCIe Request complete : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2748,8 +3198,10 @@
},
{
"BriefDescription": "PCIe Request complete : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_IIO_REQ_FROM_PCIE_CMPL.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2759,8 +3211,10 @@
},
{
"BriefDescription": "PCIe Request - pass complete : Passing data to be written",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.DATA",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2770,8 +3224,10 @@
},
{
"BriefDescription": "PCIe Request - pass complete : Issuing final read or write of line",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.FINAL_RD_WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2781,8 +3237,10 @@
},
{
"BriefDescription": "PCIe Request - pass complete : Request Ownership",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.REQ_OWN",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2792,8 +3250,10 @@
},
{
"BriefDescription": "PCIe Request - pass complete : Writing line",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_IIO_REQ_FROM_PCIE_PASS_CMPL.WR",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0xFF",
@@ -2803,16 +3263,20 @@
},
{
"BriefDescription": "Symbol Times on Link",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_IIO_SYMBOL_TIMES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Symbol Times on Link : Gen1 - increment once every 4nS, Gen2 - increment once every 2nS, Gen3 - increment once every 1nS",
"Unit": "IIO"
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -2822,8 +3286,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -2833,8 +3299,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -2844,8 +3312,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -2855,8 +3325,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -2866,8 +3338,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -2877,8 +3351,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2888,8 +3364,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -2899,8 +3377,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -2910,8 +3390,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -2921,8 +3403,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -2932,8 +3416,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -2943,8 +3429,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -2954,8 +3442,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -2965,8 +3455,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -2976,8 +3468,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -2987,8 +3481,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -2998,8 +3494,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3009,8 +3507,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -3020,8 +3520,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's PCICFG space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.CFG_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -3031,8 +3533,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3042,8 +3546,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3053,8 +3559,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3064,8 +3572,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3075,8 +3585,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3086,8 +3598,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3097,8 +3611,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3108,8 +3624,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3119,8 +3637,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -3130,8 +3650,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -3141,8 +3663,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3152,8 +3676,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3163,8 +3689,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3174,8 +3702,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3185,8 +3715,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3196,8 +3728,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3207,8 +3741,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3218,8 +3754,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3229,8 +3767,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -3240,8 +3780,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's IO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.IO_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -3251,8 +3793,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3262,8 +3806,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3273,6 +3819,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -3284,6 +3831,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -3295,6 +3843,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -3306,6 +3855,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -3317,6 +3867,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART4",
"FCMask": "0x07",
@@ -3328,6 +3879,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART5",
"FCMask": "0x07",
@@ -3339,6 +3891,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART6",
"FCMask": "0x07",
@@ -3350,6 +3903,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core reading from Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_READ.PART7",
"FCMask": "0x07",
@@ -3361,8 +3915,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3372,8 +3928,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3383,6 +3941,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -3394,6 +3953,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -3405,6 +3965,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -3416,6 +3977,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -3427,6 +3989,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -3438,6 +4001,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -3449,6 +4013,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -3460,6 +4025,7 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Core writing to Card's MMIO space",
+ "Counter": "0,1,2,3",
"EventCode": "0xc1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -3471,8 +4037,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3482,8 +4050,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3493,8 +4063,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3504,8 +4076,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3515,8 +4089,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3526,8 +4102,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3537,8 +4115,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3548,8 +4128,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3559,8 +4141,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -3570,8 +4154,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) reading from this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -3581,8 +4167,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3592,8 +4180,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3603,8 +4193,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3614,8 +4206,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3625,8 +4219,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3636,8 +4232,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3647,8 +4245,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3658,8 +4258,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -3669,8 +4271,10 @@
},
{
"BriefDescription": "Number Transactions requested by the CPU : Another card (different IIO stack) writing to this card.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "UNC_IIO_TXN_REQ_BY_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -3680,8 +4284,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3691,8 +4297,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3702,8 +4310,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -3713,8 +4323,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -3724,8 +4336,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -3735,8 +4349,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -3746,8 +4362,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -3757,8 +4375,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -3768,8 +4388,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -3779,8 +4401,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Atomic requests targeting DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.ATOMIC.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -3790,8 +4414,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3801,8 +4427,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3812,6 +4440,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART0",
"FCMask": "0x07",
@@ -3823,6 +4452,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART1",
"FCMask": "0x07",
@@ -3834,6 +4464,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART2",
"FCMask": "0x07",
@@ -3845,6 +4476,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART3",
"FCMask": "0x07",
@@ -3856,6 +4488,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART4",
"FCMask": "0x07",
@@ -3867,6 +4500,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART5",
"FCMask": "0x07",
@@ -3878,6 +4512,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART6",
"FCMask": "0x07",
@@ -3889,6 +4524,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : CmpD - device sending completion to CPU request",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.CMPD.PART7",
"FCMask": "0x07",
@@ -3900,8 +4536,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -3911,8 +4549,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -3922,6 +4562,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART0",
"FCMask": "0x07",
@@ -3933,6 +4574,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART1",
"FCMask": "0x07",
@@ -3944,6 +4586,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART2",
"FCMask": "0x07",
@@ -3955,6 +4598,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART3",
"FCMask": "0x07",
@@ -3966,6 +4610,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART4",
"FCMask": "0x07",
@@ -3977,6 +4622,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART5",
"FCMask": "0x07",
@@ -3988,6 +4634,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART6",
"FCMask": "0x07",
@@ -3999,6 +4646,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_READ.PART7",
"FCMask": "0x07",
@@ -4010,8 +4658,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -4021,8 +4671,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -4032,6 +4684,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART0",
"FCMask": "0x07",
@@ -4043,6 +4696,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART1",
"FCMask": "0x07",
@@ -4054,6 +4708,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART2",
"FCMask": "0x07",
@@ -4065,6 +4720,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART3",
"FCMask": "0x07",
@@ -4076,6 +4732,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART4",
"FCMask": "0x07",
@@ -4087,6 +4744,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART5",
"FCMask": "0x07",
@@ -4098,6 +4756,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART6",
"FCMask": "0x07",
@@ -4109,6 +4768,7 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to DRAM",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MEM_WRITE.PART7",
"FCMask": "0x07",
@@ -4120,8 +4780,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -4131,8 +4793,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -4142,8 +4806,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -4153,8 +4819,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -4164,8 +4832,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -4175,8 +4845,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -4186,8 +4858,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -4197,8 +4871,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -4208,8 +4884,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -4219,8 +4897,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Messages",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.MSG.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -4230,8 +4910,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -4241,8 +4923,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -4252,8 +4936,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -4263,8 +4949,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -4274,8 +4962,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -4285,8 +4975,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -4296,8 +4988,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -4307,8 +5001,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -4318,8 +5014,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -4329,8 +5027,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card reading from another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_READ.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -4340,8 +5040,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.IOMMU0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x100",
@@ -4351,8 +5053,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.IOMMU1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x200",
@@ -4362,8 +5066,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART0",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x01",
@@ -4373,8 +5079,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART1",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x02",
@@ -4384,8 +5092,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART2",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x04",
@@ -4395,8 +5105,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART3",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x08",
@@ -4406,8 +5118,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART4",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x10",
@@ -4417,8 +5131,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART5",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x20",
@@ -4428,8 +5144,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART6",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x40",
@@ -4439,8 +5157,10 @@
},
{
"BriefDescription": "Number Transactions requested of the CPU : Card writing to another Card (same or different stack)",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_IIO_TXN_REQ_OF_CPU.PEER_WRITE.PART7",
+ "Experimental": "1",
"FCMask": "0x07",
"PerPkg": "1",
"PortMask": "0x80",
@@ -4450,8 +5170,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4459,8 +5181,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4468,8 +5192,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4477,8 +5203,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -4486,8 +5214,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -4495,8 +5225,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -4504,8 +5236,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -4513,8 +5247,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -4522,8 +5258,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4531,8 +5269,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4540,8 +5280,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M2P_AG0_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 0 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4549,8 +5291,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4558,8 +5302,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4567,8 +5313,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4576,8 +5324,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -4585,8 +5335,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -4594,8 +5346,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -4603,8 +5357,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -4612,8 +5368,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -4621,8 +5379,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4630,8 +5390,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4639,8 +5401,10 @@
},
{
"BriefDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M2P_AG0_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4648,8 +5412,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4657,8 +5423,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4666,8 +5434,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4675,8 +5445,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -4684,8 +5456,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -4693,8 +5467,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -4702,8 +5478,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 6 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -4711,8 +5489,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 7 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -4720,8 +5500,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4729,8 +5511,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4738,8 +5522,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "UNC_M2P_AG0_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 0 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4747,8 +5533,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4756,8 +5544,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4765,8 +5555,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4774,8 +5566,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -4783,8 +5577,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -4792,8 +5588,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -4801,8 +5599,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -4810,8 +5610,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8a",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -4819,8 +5621,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8b",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4828,8 +5632,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8b",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4837,8 +5643,10 @@
},
{
"BriefDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8b",
"EventName": "UNC_M2P_AG0_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent0 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 0 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4846,8 +5654,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 0 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4855,8 +5665,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 1 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4864,8 +5676,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 2 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4873,8 +5687,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 3 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -4882,8 +5698,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 4 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -4891,8 +5709,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 5 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -4900,8 +5720,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 6 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -4909,8 +5731,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x84",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 7 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -4918,8 +5742,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 10 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -4927,8 +5753,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 8 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -4936,8 +5764,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M2P_AG1_AD_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Acquired : For Transgress 9 : Number of CMS Agent 1 AD credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -4945,8 +5775,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -4954,8 +5786,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -4963,8 +5797,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -4972,8 +5808,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -4981,8 +5819,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -4990,8 +5830,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -4999,8 +5841,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -5008,8 +5852,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -5017,8 +5863,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -5026,8 +5874,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -5035,8 +5885,10 @@
},
{
"BriefDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "UNC_M2P_AG1_AD_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 AD Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 AD credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -5044,8 +5896,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 0 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -5053,8 +5907,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 1 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -5062,8 +5918,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 2 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -5071,8 +5929,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 3 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x8",
@@ -5080,8 +5940,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x10",
@@ -5089,8 +5951,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x20",
@@ -5098,8 +5962,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 4 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x40",
@@ -5107,8 +5973,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8c",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 5 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x80",
@@ -5116,8 +5984,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8d",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 10 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x4",
@@ -5125,8 +5995,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8d",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 8 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x1",
@@ -5134,8 +6006,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8d",
"EventName": "UNC_M2P_AG1_BL_CRD_ACQUIRED1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Acquired : For Transgress 9 : Number of CMS Agent 1 BL credits acquired in a given cycle, per transgress.",
"UMask": "0x2",
@@ -5143,8 +6017,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 0 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -5152,8 +6028,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 1 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -5161,8 +6039,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 2 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -5170,8 +6050,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 3 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x8",
@@ -5179,8 +6061,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 4 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x10",
@@ -5188,8 +6072,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 5 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x20",
@@ -5197,8 +6083,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 6 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x40",
@@ -5206,8 +6094,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0x8e",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 7 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x80",
@@ -5215,8 +6105,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0x8f",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 10 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x4",
@@ -5224,8 +6116,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0x8f",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 8 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x1",
@@ -5233,8 +6127,10 @@
},
{
"BriefDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0x8f",
"EventName": "UNC_M2P_AG1_BL_CRD_OCCUPANCY1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Agent1 BL Credits Occupancy : For Transgress 9 : Number of CMS Agent 1 BL credits in use in a given cycle, per transgress",
"UMask": "0x2",
@@ -5242,6 +6138,7 @@
},
{
"BriefDescription": "Clockticks of the mesh to PCI (M2P)",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M2P_CLOCKTICKS",
"PerPkg": "1",
@@ -5250,6 +6147,7 @@
},
{
"BriefDescription": "CMS Clockticks",
+ "Counter": "0,1,2,3",
"EventCode": "0xc0",
"EventName": "UNC_M2P_CMS_CLOCKTICKS",
"PerPkg": "1",
@@ -5257,8 +6155,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xaf",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.DPT_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Local : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle triggered by this tile",
"UMask": "0x4",
@@ -5266,8 +6166,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xaf",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.DPT_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : Dynamic Prefetch Throttle received by this tile",
"UMask": "0x8",
@@ -5275,8 +6177,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xaf",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.DPT_STALL_IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - IV : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while regular IVs were received, causing DPT to be stalled",
"UMask": "0x40",
@@ -5284,8 +6188,10 @@
},
{
"BriefDescription": "Distress signal asserted : DPT Stalled - No Credit",
+ "Counter": "0,1,2,3",
"EventCode": "0xaf",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.DPT_STALL_NOCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : DPT Stalled - No Credit : Counts the number of cycles either the local or incoming distress signals are asserted. : DPT occurred while credit not available causing DPT to be stalled",
"UMask": "0x80",
@@ -5293,8 +6199,10 @@
},
{
"BriefDescription": "Distress signal asserted : Horizontal",
+ "Counter": "0,1,2,3",
"EventCode": "0xaf",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.HORZ",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Horizontal : Counts the number of cycles either the local or incoming distress signals are asserted. : If TGR egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x2",
@@ -5302,8 +6210,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Local",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.PMM_LOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Local : Counts the number of cycles either the local or incoming distress signals are asserted. : If the CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x10",
@@ -5311,8 +6221,10 @@
},
{
"BriefDescription": "Distress signal asserted : PMM Remote",
+ "Counter": "0,1,2,3",
"EventCode": "0xAF",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.PMM_NONLOCAL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : PMM Remote : Counts the number of cycles either the local or incoming distress signals are asserted. : If another CHA TOR has too many PMM transactions, this signal will throttle outgoing MS2IDI traffic",
"UMask": "0x20",
@@ -5320,8 +6232,10 @@
},
{
"BriefDescription": "Distress signal asserted : Vertical",
+ "Counter": "0,1,2,3",
"EventCode": "0xaf",
"EventName": "UNC_M2P_DISTRESS_ASSERTED.VERT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Distress signal asserted : Vertical : Counts the number of cycles either the local or incoming distress signals are asserted. : If IRQ egress is full, then agents will throttle outgoing AD IDI transactions",
"UMask": "0x1",
@@ -5329,8 +6243,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2P_EGRESS_ORDERING.IV_SNOOPGO_DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Down : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x4",
@@ -5338,8 +6254,10 @@
},
{
"BriefDescription": "Egress Blocking due to Ordering requirements : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xba",
"EventName": "UNC_M2P_EGRESS_ORDERING.IV_SNOOPGO_UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress Blocking due to Ordering requirements : Up : Counts number of cycles IV was blocked in the TGR Egress due to SNP/GO Ordering requirements",
"UMask": "0x1",
@@ -5347,8 +6265,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb6",
"EventName": "UNC_M2P_HORZ_RING_AD_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5356,8 +6276,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb6",
"EventName": "UNC_M2P_HORZ_RING_AD_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5365,8 +6287,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb6",
"EventName": "UNC_M2P_HORZ_RING_AD_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5374,8 +6298,10 @@
},
{
"BriefDescription": "Horizontal AD Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb6",
"EventName": "UNC_M2P_HORZ_RING_AD_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AD Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5383,8 +6309,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xbb",
"EventName": "UNC_M2P_HORZ_RING_AKC_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5392,8 +6320,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xbb",
"EventName": "UNC_M2P_HORZ_RING_AKC_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5401,8 +6331,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xbb",
"EventName": "UNC_M2P_HORZ_RING_AKC_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5410,8 +6342,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xbb",
"EventName": "UNC_M2P_HORZ_RING_AKC_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5419,8 +6353,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb7",
"EventName": "UNC_M2P_HORZ_RING_AK_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5428,8 +6364,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb7",
"EventName": "UNC_M2P_HORZ_RING_AK_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Left and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5437,8 +6375,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb7",
"EventName": "UNC_M2P_HORZ_RING_AK_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Even : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5446,8 +6386,10 @@
},
{
"BriefDescription": "Horizontal AK Ring In Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb7",
"EventName": "UNC_M2P_HORZ_RING_AK_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal AK Ring In Use : Right and Odd : Counts the number of cycles that the Horizontal AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5455,8 +6397,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb8",
"EventName": "UNC_M2P_HORZ_RING_BL_IN_USE.LEFT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -5464,8 +6408,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Left and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb8",
"EventName": "UNC_M2P_HORZ_RING_BL_IN_USE.LEFT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Left and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -5473,8 +6419,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb8",
"EventName": "UNC_M2P_HORZ_RING_BL_IN_USE.RIGHT_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Even : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -5482,8 +6430,10 @@
},
{
"BriefDescription": "Horizontal BL Ring in Use : Right and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb8",
"EventName": "UNC_M2P_HORZ_RING_BL_IN_USE.RIGHT_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal BL Ring in Use : Right and Odd : Counts the number of cycles that the Horizontal BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -5491,8 +6441,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Left",
+ "Counter": "0,1,2,3",
"EventCode": "0xb9",
"EventName": "UNC_M2P_HORZ_RING_IV_IN_USE.LEFT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Left : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -5500,8 +6452,10 @@
},
{
"BriefDescription": "Horizontal IV Ring in Use : Right",
+ "Counter": "0,1,2,3",
"EventCode": "0xb9",
"EventName": "UNC_M2P_HORZ_RING_IV_IN_USE.RIGHT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Horizontal IV Ring in Use : Right : Counts the number of cycles that the Horizontal IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -5509,8 +6463,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.DRS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : DRS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x1",
@@ -5518,8 +6474,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.DRS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : DRS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x2",
@@ -5527,8 +6485,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCB_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCB : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x4",
@@ -5536,8 +6496,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCB_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCB : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x8",
@@ -5545,8 +6507,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCS message class.",
"UMask": "0x10",
@@ -5554,8 +6518,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credit Acquired : NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x33",
"EventName": "UNC_M2P_IIO_CREDITS_ACQUIRED.NCS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credit Acquired : NCS : Counts the number of credits that are acquired in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credit for transfer through CMS Port 0s to the IIO for the NCS message class.",
"UMask": "0x20",
@@ -5563,8 +6529,10 @@
},
{
"BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_M2P_IIO_CREDITS_REJECT.DRS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : DRS : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the DRS message class.",
"UMask": "0x8",
@@ -5572,8 +6540,10 @@
},
{
"BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_M2P_IIO_CREDITS_REJECT.NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : NCB : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the NCB message class.",
"UMask": "0x10",
@@ -5581,8 +6551,10 @@
},
{
"BriefDescription": "M2PCIe IIO Failed to Acquire a Credit : NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x34",
"EventName": "UNC_M2P_IIO_CREDITS_REJECT.NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Failed to Acquire a Credit : NCS : Counts the number of times that a request pending in the BL Ingress attempted to acquire either a NCB or NCS credit to transmit into the IIO, but was rejected because no credits were available. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits to the IIO for the NCS message class.",
"UMask": "0x20",
@@ -5590,8 +6562,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.DRS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x1",
@@ -5599,8 +6573,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.DRS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : DRS to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the DRS message class.",
"UMask": "0x2",
@@ -5608,8 +6584,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCB_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x4",
@@ -5617,8 +6595,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCB_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCB to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCB message class.",
"UMask": "0x8",
@@ -5626,8 +6606,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 0 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credits for transfer through CMS Port 0 to the IIO for the NCS message class.",
"UMask": "0x10",
@@ -5635,8 +6617,10 @@
},
{
"BriefDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x32",
"EventName": "UNC_M2P_IIO_CREDITS_USED.NCS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "M2PCIe IIO Credits in Use : NCS to CMS Port 1 : Counts the number of cycles when one or more credits in the M2PCIe agent for sending transactions into the IIO on either NCB or NCS are in use. Transactions from the BL ring going into the IIO Agent must first acquire a credit. These credits are for either the NCB or NCS message classes. NCB, or non-coherent bypass messages are used to transmit data without coherency (and are common). NCS is used for reads to PCIe (and should be used sparingly). : Credit for transfer through CMS Port 0s to the IIO for the NCS message class.",
"UMask": "0x20",
@@ -5644,912 +6628,1140 @@
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_0.M2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Dedicated P2P Credit Taken - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M2P_LOCAL_DED_P2P_CRD_TAKEN_1.M2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_0.MS2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Dedicated Credits Returned - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1a",
"EventName": "UNC_M2P_LOCAL_P2P_DED_RETURNED_1.MS2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Shared Credits Returned : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Shared Credits Returned : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local P2P Shared Credits Returned : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M2P_LOCAL_P2P_SHAR_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent3",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_3",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent4",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_4",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Returned to credit ring : Agent5",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_RETURNED.AGENT_5",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x40",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_0.M2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Local Shared P2P Credit Taken - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x41",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_TAKEN_1.M2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF3 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF3_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 0 : M2IOSF3 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_0.M2IOSF3_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF4 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF4_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF4 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF4_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF5 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF5_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Local Shared P2P Credit - 1 : M2IOSF5 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4b",
"EventName": "UNC_M2P_LOCAL_SHAR_P2P_CRD_WAIT_1.M2IOSF5_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI0",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "UNC_M2P_MISC_EXTERNAL.MBE_INST0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Miscellaneous Events (mostly from MS2IDI) : Number of cycles MBE is high for MS2IDI1",
+ "Counter": "0,1,2,3",
"EventCode": "0xe6",
"EventName": "UNC_M2P_MISC_EXTERNAL.MBE_INST1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : All",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Local NCB",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.LOCAL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Local NCS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.LOCAL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Remote NCB",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.REMOTE_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "P2P Credit Occupancy : Remote NCS",
+ "Counter": "0,1",
"EventCode": "0x14",
"EventName": "UNC_M2P_P2P_CRD_OCCUPANCY.REMOTE_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Local NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.LOCAL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Local NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.LOCAL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Remote NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.REMOTE_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Dedicated Credits Received : Remote NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M2P_P2P_DED_RECEIVED.REMOTE_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : All",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Local NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.LOCAL_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Local NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.LOCAL_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Remote NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.REMOTE_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Shared Credits Received : Remote NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M2P_P2P_SHAR_RECEIVED.REMOTE_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 0 : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x48",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_0.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Dedicated P2P Credit Taken - 1 : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_M2P_REMOTE_DED_P2P_CRD_TAKEN_1.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Dedicated Credits Returned : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x1b",
"EventName": "UNC_M2P_REMOTE_P2P_DED_RETURNED.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Shared Credits Returned : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Shared Credits Returned : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote P2P Shared Credits Returned : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M2P_REMOTE_P2P_SHAR_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent0",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent1",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Returned to credit ring : Agent2",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_RETURNED.AGENT_2",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 0 : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_0.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Remote Shared P2P Credit Taken - 1 : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_TAKEN_1.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI0 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI0_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 0 : UPI1 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4c",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_0.UPI1_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - DRS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_DRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - NCB",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Waiting on Remote Shared P2P Credit - 1 : UPI2 - NCS",
+ "Counter": "0,1,2,3",
"EventCode": "0x4d",
"EventName": "UNC_M2P_REMOTE_SHAR_P2P_CRD_WAIT_1.UPI2_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xac",
"EventName": "UNC_M2P_RING_BOUNCES_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AD : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -6557,8 +7769,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xac",
"EventName": "UNC_M2P_RING_BOUNCES_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : AK : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -6566,8 +7780,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xac",
"EventName": "UNC_M2P_RING_BOUNCES_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : BL : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -6575,8 +7791,10 @@
},
{
"BriefDescription": "Messages that bounced on the Horizontal Ring. : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xac",
"EventName": "UNC_M2P_RING_BOUNCES_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Horizontal Ring. : IV : Number of cycles incoming messages from the Horizontal ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -6584,8 +7802,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xaa",
"EventName": "UNC_M2P_RING_BOUNCES_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : AD : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x1",
@@ -6593,8 +7813,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xaa",
"EventName": "UNC_M2P_RING_BOUNCES_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Acknowledgements to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x2",
@@ -6602,8 +7824,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring.",
+ "Counter": "0,1,2,3",
"EventCode": "0xaa",
"EventName": "UNC_M2P_RING_BOUNCES_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x10",
@@ -6611,8 +7835,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xaa",
"EventName": "UNC_M2P_RING_BOUNCES_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Data Responses to core : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x4",
@@ -6620,8 +7846,10 @@
},
{
"BriefDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xaa",
"EventName": "UNC_M2P_RING_BOUNCES_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Messages that bounced on the Vertical Ring. : Snoops of processor's cache. : Number of cycles incoming messages from the Vertical ring that were bounced, by ring type.",
"UMask": "0x8",
@@ -6629,95 +7857,119 @@
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xad",
"EventName": "UNC_M2P_RING_SINK_STARVED_HORZ.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xad",
"EventName": "UNC_M2P_RING_SINK_STARVED_HORZ.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : Acknowledgements to Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xad",
"EventName": "UNC_M2P_RING_SINK_STARVED_HORZ.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : BL",
+ "Counter": "0,1,2,3",
"EventCode": "0xad",
"EventName": "UNC_M2P_RING_SINK_STARVED_HORZ.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Horizontal Ring : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xad",
"EventName": "UNC_M2P_RING_SINK_STARVED_HORZ.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : AD",
+ "Counter": "0,1,2,3",
"EventCode": "0xab",
"EventName": "UNC_M2P_RING_SINK_STARVED_VERT.AD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Acknowledgements to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xab",
"EventName": "UNC_M2P_RING_SINK_STARVED_VERT.AK",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring",
+ "Counter": "0,1,2,3",
"EventCode": "0xab",
"EventName": "UNC_M2P_RING_SINK_STARVED_VERT.AKC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Data Responses to core",
+ "Counter": "0,1,2,3",
"EventCode": "0xab",
"EventName": "UNC_M2P_RING_SINK_STARVED_VERT.BL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Sink Starvation on Vertical Ring : Snoops of processor's cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xab",
"EventName": "UNC_M2P_RING_SINK_STARVED_VERT.IV",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Source Throttle",
+ "Counter": "0,1,2,3",
"EventCode": "0xae",
"EventName": "UNC_M2P_RING_SRC_THRTL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x80",
@@ -6725,8 +7977,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_IDI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x1",
@@ -6734,8 +7988,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x2",
@@ -6743,8 +7999,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.CHA_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x4",
@@ -6752,8 +8010,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.IIO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x20",
@@ -6761,8 +8021,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.IIO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x40",
@@ -6770,8 +8032,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x8",
@@ -6779,8 +8043,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M2P_RxC_CYCLES_NE.UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Cycles Not Empty : Counts the number of cycles when the M2PCIe Ingress is not empty.",
"UMask": "0x10",
@@ -6788,8 +8054,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x80",
@@ -6797,8 +8065,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.CHA_IDI",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x1",
@@ -6806,8 +8076,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.CHA_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x2",
@@ -6815,8 +8087,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.CHA_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x4",
@@ -6824,8 +8098,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.IIO_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x20",
@@ -6833,8 +8109,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.IIO_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x40",
@@ -6842,8 +8120,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.UPI_NCB",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x8",
@@ -6851,8 +8131,10 @@
},
{
"BriefDescription": "Ingress (from CMS) Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M2P_RxC_INSERTS.UPI_NCS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Ingress (from CMS) Queue Inserts : Counts the number of entries inserted into the M2PCIe Ingress Queue. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue latency.",
"UMask": "0x10",
@@ -6860,8 +8142,10 @@
},
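The inserts descriptions above point at the usual occupancy/inserts calculation for average queue latency. A minimal sketch of that arithmetic, assuming a matching Ingress occupancy accumulator event (referred to here as UNC_M2P_RxC_OCCUPANCY.<class>, a name not shown in this hunk and therefore an assumption) has been collected alongside UNC_M2P_RxC_INSERTS.<class> for the same umask:

```python
def avg_ingress_latency_cycles(occupancy_sum: int, inserts: int) -> float:
    """Average M2PCIe Ingress queue latency in uncore cycles.

    occupancy_sum: total of the Ingress occupancy accumulator event
        (assumed name: UNC_M2P_RxC_OCCUPANCY.<class>), which accumulates
        the current queue depth every cycle, i.e. entry-cycles.
    inserts: total of UNC_M2P_RxC_INSERTS.<class> for the same class.
    By Little's law, entry-cycles divided by the number of entries gives
    the average number of cycles each entry spent in the queue.
    """
    return occupancy_sum / inserts if inserts else 0.0


# Example with made-up counts.
print(f"avg ingress latency: {avg_ingress_latency_cycles(2_400_000, 150_000):.1f} cycles")
```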
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe5",
"EventName": "UNC_M2P_RxR_BUSY_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x11",
@@ -6869,8 +8153,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe5",
"EventName": "UNC_M2P_RxR_BUSY_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x10",
@@ -6878,8 +8164,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe5",
"EventName": "UNC_M2P_RxR_BUSY_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x1",
@@ -6887,8 +8175,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe5",
"EventName": "UNC_M2P_RxR_BUSY_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority : All == Credited + Uncredited",
"UMask": "0x44",
@@ -6896,8 +8186,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe5",
"EventName": "UNC_M2P_RxR_BUSY_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x40",
@@ -6905,8 +8197,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe5",
"EventName": "UNC_M2P_RxR_BUSY_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, because a message from the other queue has higher priority",
"UMask": "0x4",
@@ -6914,8 +8208,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x11",
@@ -6923,8 +8219,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x10",
@@ -6932,8 +8230,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AD - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x1",
@@ -6941,8 +8241,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AK : Number of packets bypassing the CMS Ingress",
"UMask": "0x2",
@@ -6950,8 +8252,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : AKC - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x80",
@@ -6959,8 +8263,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - All : Number of packets bypassing the CMS Ingress : All == Credited + Uncredited",
"UMask": "0x44",
@@ -6968,8 +8274,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Credited : Number of packets bypassing the CMS Ingress",
"UMask": "0x40",
@@ -6977,8 +8285,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : BL - Uncredited : Number of packets bypassing the CMS Ingress",
"UMask": "0x4",
@@ -6986,8 +8296,10 @@
},
{
"BriefDescription": "Transgress Ingress Bypass : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xe2",
"EventName": "UNC_M2P_RxR_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Bypass : IV : Number of packets bypassing the CMS Ingress",
"UMask": "0x8",
@@ -6995,8 +8307,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -7004,8 +8318,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x10",
@@ -7013,8 +8329,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AD - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x1",
@@ -7022,8 +8340,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : AK : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x2",
@@ -7031,8 +8351,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - All : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -7040,8 +8362,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x40",
@@ -7049,8 +8373,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : BL - Uncredited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x4",
@@ -7058,8 +8384,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IFV - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.IFV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IFV - Credited : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x80",
@@ -7067,8 +8395,10 @@
},
{
"BriefDescription": "Transgress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xe3",
"EventName": "UNC_M2P_RxR_CRD_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : IV : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"UMask": "0x8",
@@ -7076,16 +8406,20 @@
},
{
"BriefDescription": "Transgress Injection Starvation",
+ "Counter": "0,1,2,3",
"EventCode": "0xe4",
"EventName": "UNC_M2P_RxR_CRD_STARVED_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Injection Starvation : Counts cycles under injection starvation mode. This starvation is triggered when the CMS Ingress cannot send a transaction onto the mesh for a long period of time. In this case, the Ingress is unable to forward to the Egress due to a lack of credit.",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -7093,8 +8427,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -7102,8 +8438,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AD - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -7111,8 +8449,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AK : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -7120,8 +8460,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : AKC - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -7129,8 +8471,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - All : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -7138,8 +8482,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Credited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x40",
@@ -7147,8 +8493,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : BL - Uncredited : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -7156,8 +8504,10 @@
},
{
"BriefDescription": "Transgress Ingress Allocations : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xe1",
"EventName": "UNC_M2P_RxR_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Allocations : IV : Number of allocations into the CMS Ingress The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -7165,8 +8515,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x11",
@@ -7174,8 +8526,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x10",
@@ -7183,8 +8537,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AD - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x1",
@@ -7192,8 +8548,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AK : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x2",
@@ -7201,8 +8559,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : AKC - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x80",
@@ -7210,8 +8570,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - All : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh : All == Credited + Uncredited",
"UMask": "0x44",
@@ -7219,8 +8581,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Credited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x20",
@@ -7228,8 +8592,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : BL - Uncredited : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x4",
@@ -7237,8 +8603,10 @@
},
{
"BriefDescription": "Transgress Ingress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xe0",
"EventName": "UNC_M2P_RxR_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Transgress Ingress Occupancy : IV : Occupancy event for the Ingress buffers in the CMS The Ingress is used to queue up requests received from the mesh",
"UMask": "0x8",
@@ -7246,8 +8614,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7255,8 +8625,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7264,8 +8636,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7273,8 +8647,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7282,8 +8658,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7291,8 +8669,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7300,8 +8680,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -7309,8 +8691,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xd0",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -7318,8 +8702,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 0 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7327,8 +8713,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 1 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7336,8 +8724,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 2 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7345,8 +8735,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 3 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7354,8 +8746,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 4 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7363,8 +8757,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 5 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7372,8 +8768,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 6 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -7381,8 +8779,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xd2",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_AD_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 7 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -7390,8 +8790,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7399,8 +8801,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7408,8 +8812,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7417,8 +8823,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7426,8 +8834,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7435,8 +8845,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7444,8 +8856,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -7453,8 +8867,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xd4",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG0.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -7462,8 +8878,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 0 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7471,8 +8889,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 1 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7480,8 +8900,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 2 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7489,8 +8911,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 3 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x8",
@@ -7498,8 +8922,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR4",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 4 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x10",
@@ -7507,8 +8933,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR5",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 5 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x20",
@@ -7516,8 +8944,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 6 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x40",
@@ -7525,8 +8955,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7",
+ "Counter": "0,1,2,3",
"EventCode": "0xd6",
"EventName": "UNC_M2P_STALL0_NO_TxR_HORZ_CRD_BL_AG1.TGR7",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 7 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x80",
@@ -7534,8 +8966,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xd1",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7543,8 +8977,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xd1",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7552,8 +8988,10 @@
},
{
"BriefDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xd1",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_AD_AG0.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent0 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7561,8 +8999,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 10 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7570,8 +9010,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 8 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7579,8 +9021,10 @@
},
{
"BriefDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xd3",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_AD_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No AD Agent1 Transgress Credits : For Transgress 9 : Number of cycles the AD Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7588,8 +9032,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7597,8 +9043,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7606,8 +9054,10 @@
},
{
"BriefDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xd5",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_BL_AG0_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent0 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 0 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7615,8 +9065,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR10",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 10 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x4",
@@ -7624,8 +9076,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR8",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 8 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x1",
@@ -7633,8 +9087,10 @@
},
{
"BriefDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9",
+ "Counter": "0,1,2,3",
"EventCode": "0xd7",
"EventName": "UNC_M2P_STALL1_NO_TxR_HORZ_CRD_BL_AG1_1.TGR9",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Stall on No BL Agent1 Transgress Credits : For Transgress 9 : Number of cycles the BL Agent 1 Egress Buffer is stalled waiting for a TGR credit to become available, per transgress.",
"UMask": "0x2",
@@ -7642,24 +9098,30 @@
},
{
"BriefDescription": "UNC_M2P_TxC_CREDITS.PMM",
+ "Counter": "0,1",
"EventCode": "0x2D",
"EventName": "UNC_M2P_TxC_CREDITS.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "M2PCIe"
},
{
"BriefDescription": "UNC_M2P_TxC_CREDITS.PRQ",
+ "Counter": "0,1",
"EventCode": "0x2d",
"EventName": "UNC_M2P_TxC_CREDITS.PRQ",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "M2PCIe"
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.AD_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x1",
@@ -7667,8 +9129,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.AD_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x10",
@@ -7676,8 +9140,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.AK_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x2",
@@ -7685,8 +9151,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.AK_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x20",
@@ -7694,8 +9162,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.BL_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x4",
@@ -7703,8 +9173,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.BL_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x40",
@@ -7712,8 +9184,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.PMM_BLOCK_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x80",
@@ -7721,8 +9195,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x25",
"EventName": "UNC_M2P_TxC_CYCLES_FULL.PMM_BLOCK_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Full : Counts the number of cycles when the M2PCIe Egress is full. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent.",
"UMask": "0x8",
@@ -7730,8 +9206,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.AD_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x1",
@@ -7739,8 +9217,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.AD_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x10",
@@ -7748,8 +9228,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.AK_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x2",
@@ -7757,8 +9239,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.AK_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x20",
@@ -7766,8 +9250,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.BL_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x4",
@@ -7775,8 +9261,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.BL_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x40",
@@ -7784,8 +9272,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.PMM_DISTRESS_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x80",
@@ -7793,8 +9283,10 @@
},
{
"BriefDescription": "Egress (to CMS) Cycles Not Empty",
+ "Counter": "0,1",
"EventCode": "0x23",
"EventName": "UNC_M2P_TxC_CYCLES_NE.PMM_DISTRESS_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Cycles Not Empty : Counts the number of cycles when the M2PCIe Egress is not empty. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy. Multiple egress buffers can be tracked at a given time using multiple counters.",
"UMask": "0x8",
@@ -7802,8 +9294,10 @@
},
{
"BriefDescription": "Egress (to CMS) Ingress",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2P_TxC_INSERTS.AD_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Ingress : Counts the number of number of messages inserted into the the M2PCIe Egress queue. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.",
"UMask": "0x1",
@@ -7811,8 +9305,10 @@
},
{
"BriefDescription": "Egress (to CMS) Ingress",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2P_TxC_INSERTS.AD_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Ingress : Counts the number of number of messages inserted into the the M2PCIe Egress queue. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.",
"UMask": "0x10",
@@ -7820,8 +9316,10 @@
},
{
"BriefDescription": "Egress (to CMS) Ingress",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2P_TxC_INSERTS.AK_CRD_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Ingress : Counts the number of number of messages inserted into the the M2PCIe Egress queue. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.",
"UMask": "0x8",
@@ -7829,8 +9327,10 @@
},
{
"BriefDescription": "Egress (to CMS) Ingress",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2P_TxC_INSERTS.AK_CRD_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Ingress : Counts the number of number of messages inserted into the the M2PCIe Egress queue. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.",
"UMask": "0x80",
@@ -7838,8 +9338,10 @@
},
{
"BriefDescription": "Egress (to CMS) Ingress",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2P_TxC_INSERTS.BL_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Ingress : Counts the number of number of messages inserted into the the M2PCIe Egress queue. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.",
"UMask": "0x4",
@@ -7847,8 +9349,10 @@
},
{
"BriefDescription": "Egress (to CMS) Ingress",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M2P_TxC_INSERTS.BL_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Egress (to CMS) Ingress : Counts the number of number of messages inserted into the the M2PCIe Egress queue. This tracks messages for one of the two CMS ports that are used by the M2PCIe agent. This can be used in conjunction with the M2PCIe Ingress Occupancy Accumulator event in order to calculate average queue occupancy.",
"UMask": "0x40",
@@ -7856,8 +9360,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa6",
"EventName": "UNC_M2P_TxR_HORZ_ADS_USED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -7865,8 +9371,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa6",
"EventName": "UNC_M2P_TxR_HORZ_ADS_USED.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -7874,8 +9382,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa6",
"EventName": "UNC_M2P_TxR_HORZ_ADS_USED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : AD - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -7883,8 +9393,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa6",
"EventName": "UNC_M2P_TxR_HORZ_ADS_USED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - All : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -7892,8 +9404,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa6",
"EventName": "UNC_M2P_TxR_HORZ_ADS_USED.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Credited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -7901,8 +9415,10 @@
},
{
"BriefDescription": "CMS Horizontal ADS Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa6",
"EventName": "UNC_M2P_TxR_HORZ_ADS_USED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal ADS Used : BL - Uncredited : Number of packets using the Horizontal Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -7910,8 +9426,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -7919,8 +9437,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -7928,8 +9448,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AD - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -7937,8 +9459,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AK : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -7946,8 +9470,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : AKC - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x80",
@@ -7955,8 +9481,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - All : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -7964,8 +9492,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Credited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -7973,8 +9503,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : BL - Uncredited : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -7982,8 +9514,10 @@
},
{
"BriefDescription": "CMS Horizontal Bypass Used : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xa7",
"EventName": "UNC_M2P_TxR_HORZ_BYPASS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Bypass Used : IV : Number of packets bypassing the Horizontal Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -7991,8 +9525,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8000,8 +9536,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8009,8 +9547,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8018,8 +9558,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AK : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8027,8 +9569,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8036,8 +9580,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8045,8 +9591,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8054,8 +9602,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8063,8 +9613,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Full : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xa2",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_FULL.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Full : IV : Cycles the Transgress buffers in the Common Mesh Stop are Full. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8072,8 +9624,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8081,8 +9635,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8090,8 +9646,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AD - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8099,8 +9657,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AK : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8108,8 +9668,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : AKC - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8117,8 +9679,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - All : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8126,8 +9690,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Credited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8135,8 +9701,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : BL - Uncredited : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8144,8 +9712,10 @@
},
{
"BriefDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xa3",
"EventName": "UNC_M2P_TxR_HORZ_CYCLES_NE.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Horizontal Egress Queue is Not Empty : IV : Cycles the Transgress buffers in the Common Mesh Stop are Not-Empty. The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8153,8 +9723,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8162,8 +9734,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8171,8 +9745,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AD - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8180,8 +9756,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AK : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8189,8 +9767,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : AKC - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8198,8 +9778,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - All : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8207,8 +9789,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Credited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8216,8 +9800,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : BL - Uncredited : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8225,8 +9811,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Inserts : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xa1",
"EventName": "UNC_M2P_TxR_HORZ_INSERTS.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Inserts : IV : Number of allocations into the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8234,8 +9822,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8243,8 +9833,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x10",
@@ -8252,8 +9844,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AD - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x1",
@@ -8261,8 +9855,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AK : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x2",
@@ -8270,8 +9866,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : AKC - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x80",
@@ -8279,8 +9877,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - All : Counts number of Egress packets NACK'ed on to the Horizontal Ring : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8288,8 +9888,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Credited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x40",
@@ -8297,8 +9899,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : BL - Uncredited : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x4",
@@ -8306,8 +9910,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xa4",
"EventName": "UNC_M2P_TxR_HORZ_NACK.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Horizontal Ring",
"UMask": "0x8",
@@ -8315,8 +9921,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x11",
@@ -8324,8 +9932,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.AD_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x10",
@@ -8333,8 +9943,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AD - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x1",
@@ -8342,8 +9954,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AK : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x2",
@@ -8351,8 +9965,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : AKC - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x80",
@@ -8360,8 +9976,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - All : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh. : All == Credited + Uncredited",
"UMask": "0x44",
@@ -8369,8 +9987,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Credited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.BL_CRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Credited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x40",
@@ -8378,8 +9998,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : BL - Uncredited : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x4",
@@ -8387,8 +10009,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Occupancy : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xa0",
"EventName": "UNC_M2P_TxR_HORZ_OCCUPANCY.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Occupancy : IV : Occupancy event for the Transgress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Horizontal Ring on the Mesh.",
"UMask": "0x8",
@@ -8396,8 +10020,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa5",
"EventName": "UNC_M2P_TxR_HORZ_STARVED.AD_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x1",
@@ -8405,8 +10031,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa5",
"EventName": "UNC_M2P_TxR_HORZ_STARVED.AD_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AD - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x1",
@@ -8414,8 +10042,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AK",
+ "Counter": "0,1,2,3",
"EventCode": "0xa5",
"EventName": "UNC_M2P_TxR_HORZ_STARVED.AK",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AK : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x2",
@@ -8423,8 +10053,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa5",
"EventName": "UNC_M2P_TxR_HORZ_STARVED.AKC_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : AKC - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x80",
@@ -8432,8 +10064,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - All",
+ "Counter": "0,1,2,3",
"EventCode": "0xa5",
"EventName": "UNC_M2P_TxR_HORZ_STARVED.BL_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - All : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time. : All == Credited + Uncredited",
"UMask": "0x4",
@@ -8441,8 +10075,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited",
+ "Counter": "0,1,2,3",
"EventCode": "0xa5",
"EventName": "UNC_M2P_TxR_HORZ_STARVED.BL_UNCRD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : BL - Uncredited : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x4",
@@ -8450,8 +10086,10 @@
},
{
"BriefDescription": "CMS Horizontal Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0xa5",
"EventName": "UNC_M2P_TxR_HORZ_STARVED.IV",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Horizontal Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Transgress buffer cannot send a transaction onto the Horizontal ring for a long period of time.",
"UMask": "0x8",
@@ -8459,8 +10097,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9c",
"EventName": "UNC_M2P_TxR_VERT_ADS_USED.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8468,8 +10108,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9c",
"EventName": "UNC_M2P_TxR_VERT_ADS_USED.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8477,8 +10119,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9c",
"EventName": "UNC_M2P_TxR_VERT_ADS_USED.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8486,8 +10130,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9c",
"EventName": "UNC_M2P_TxR_VERT_ADS_USED.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets using the Vertical Anti-Deadlock Slot, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8495,8 +10141,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9d",
"EventName": "UNC_M2P_TxR_VERT_BYPASS.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8504,8 +10152,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9d",
"EventName": "UNC_M2P_TxR_VERT_BYPASS.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AD - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x10",
@@ -8513,8 +10163,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9d",
"EventName": "UNC_M2P_TxR_VERT_BYPASS.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -8522,8 +10174,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9d",
"EventName": "UNC_M2P_TxR_VERT_BYPASS.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AK - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x20",
@@ -8531,8 +10185,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9d",
"EventName": "UNC_M2P_TxR_VERT_BYPASS.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x4",
@@ -8540,8 +10196,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9d",
"EventName": "UNC_M2P_TxR_VERT_BYPASS.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : BL - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x40",
@@ -8549,8 +10207,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : IV - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9d",
"EventName": "UNC_M2P_TxR_VERT_BYPASS.IV_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : IV - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x8",
@@ -8558,8 +10218,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9e",
"EventName": "UNC_M2P_TxR_VERT_BYPASS_1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 0 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x1",
@@ -8567,8 +10229,10 @@
},
{
"BriefDescription": "CMS Vertical ADS Used : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9e",
"EventName": "UNC_M2P_TxR_VERT_BYPASS_1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical ADS Used : AKC - Agent 1 : Number of packets bypassing the Vertical Egress, broken down by ring type and CMS Agent.",
"UMask": "0x2",
@@ -8576,8 +10240,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8585,8 +10251,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -8594,8 +10262,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8603,8 +10273,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -8612,8 +10284,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -8621,8 +10295,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -8630,8 +10306,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x94",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -8639,8 +10317,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8648,8 +10328,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x95",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_FULL1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Full : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Full. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8657,8 +10339,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8666,8 +10350,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AD - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -8675,8 +10361,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8684,8 +10372,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AK - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -8693,8 +10383,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -8702,8 +10394,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : BL - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -8711,8 +10405,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x96",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : IV - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -8720,8 +10416,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 0 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8729,8 +10427,10 @@
},
{
"BriefDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x97",
"EventName": "UNC_M2P_TxR_VERT_CYCLES_NE1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles CMS Vertical Egress Queue Is Not Empty : AKC - Agent 1 : Number of cycles the Common Mesh Stop Egress was Not Empty. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8738,8 +10438,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2P_TxR_VERT_INSERTS0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8747,8 +10449,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2P_TxR_VERT_INSERTS0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AD - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -8756,8 +10460,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2P_TxR_VERT_INSERTS0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8765,8 +10471,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2P_TxR_VERT_INSERTS0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AK - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -8774,8 +10482,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2P_TxR_VERT_INSERTS0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -8783,8 +10493,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2P_TxR_VERT_INSERTS0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : BL - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -8792,8 +10504,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x92",
"EventName": "UNC_M2P_TxR_VERT_INSERTS0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : IV - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -8801,8 +10515,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2P_TxR_VERT_INSERTS1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 0 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8810,8 +10526,10 @@
},
{
"BriefDescription": "CMS Vert Egress Allocations : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x93",
"EventName": "UNC_M2P_TxR_VERT_INSERTS1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Allocations : AKC - Agent 1 : Number of allocations into the Common Mesh Stop Egress. The Egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8819,8 +10537,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2P_TxR_VERT_NACK0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -8828,8 +10548,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2P_TxR_VERT_NACK0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AD - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x10",
@@ -8837,8 +10559,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2P_TxR_VERT_NACK0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -8846,8 +10570,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2P_TxR_VERT_NACK0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AK - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x20",
@@ -8855,8 +10581,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2P_TxR_VERT_NACK0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x4",
@@ -8864,8 +10592,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2P_TxR_VERT_NACK0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : BL - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x40",
@@ -8873,8 +10603,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x98",
"EventName": "UNC_M2P_TxR_VERT_NACK0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : IV : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x8",
@@ -8882,8 +10614,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2P_TxR_VERT_NACK1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 0 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x1",
@@ -8891,8 +10625,10 @@
},
{
"BriefDescription": "CMS Vertical Egress NACKs : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x99",
"EventName": "UNC_M2P_TxR_VERT_NACK1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress NACKs : AKC - Agent 1 : Counts number of Egress packets NACK'ed on to the Vertical Ring",
"UMask": "0x2",
@@ -8900,8 +10636,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8909,8 +10647,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AD - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AD ring. This is commonly used for outbound requests.",
"UMask": "0x10",
@@ -8918,8 +10658,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8927,8 +10669,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AK - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the AK ring.",
"UMask": "0x20",
@@ -8936,8 +10680,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the BL ring. This is commonly used to send data from the cache to various destinations.",
"UMask": "0x4",
@@ -8945,8 +10691,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : BL - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 1 destined for the BL ring. This is commonly used for transferring writeback data to the cache.",
"UMask": "0x40",
@@ -8954,8 +10702,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : IV - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x90",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : IV - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the IV ring. This is commonly used for snoops to the cores.",
"UMask": "0x8",
@@ -8963,8 +10713,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 0 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AD ring. Some example include outbound requests, snoop requests, and snoop responses.",
"UMask": "0x1",
@@ -8972,8 +10724,10 @@
},
{
"BriefDescription": "CMS Vert Egress Occupancy : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x91",
"EventName": "UNC_M2P_TxR_VERT_OCCUPANCY1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vert Egress Occupancy : AKC - Agent 1 : Occupancy event for the Egress buffers in the Common Mesh Stop The egress is used to queue up requests destined for the Vertical Ring on the Mesh. : Ring transactions from Agent 0 destined for the AK ring. This is commonly used for credit returns and GO responses.",
"UMask": "0x2",
@@ -8981,8 +10735,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9a",
"EventName": "UNC_M2P_TxR_VERT_STARVED0.AD_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -8990,8 +10746,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9a",
"EventName": "UNC_M2P_TxR_VERT_STARVED0.AD_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AD - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x10",
@@ -8999,8 +10757,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9a",
"EventName": "UNC_M2P_TxR_VERT_STARVED0.AK_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -9008,8 +10768,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9a",
"EventName": "UNC_M2P_TxR_VERT_STARVED0.AK_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AK - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x20",
@@ -9017,8 +10779,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9a",
"EventName": "UNC_M2P_TxR_VERT_STARVED0.BL_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -9026,8 +10790,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9a",
"EventName": "UNC_M2P_TxR_VERT_STARVED0.BL_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : BL - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x40",
@@ -9035,8 +10801,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : IV",
+ "Counter": "0,1,2,3",
"EventCode": "0x9a",
"EventName": "UNC_M2P_TxR_VERT_STARVED0.IV_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : IV : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x8",
@@ -9044,8 +10812,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9b",
"EventName": "UNC_M2P_TxR_VERT_STARVED1.AKC_AG0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x1",
@@ -9053,8 +10823,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1",
+ "Counter": "0,1,2,3",
"EventCode": "0x9b",
"EventName": "UNC_M2P_TxR_VERT_STARVED1.AKC_AG1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 1 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x2",
@@ -9062,8 +10834,10 @@
},
{
"BriefDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x9b",
"EventName": "UNC_M2P_TxR_VERT_STARVED1.TGC",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CMS Vertical Egress Injection Starvation : AKC - Agent 0 : Counts injection starvation. This starvation is triggered when the CMS Egress cannot send a transaction onto the Vertical ring for a long period of time.",
"UMask": "0x4",
@@ -9071,8 +10845,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "UNC_M2P_VERT_RING_AD_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9080,8 +10856,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "UNC_M2P_VERT_RING_AD_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9089,8 +10867,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "UNC_M2P_VERT_RING_AD_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Even : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9098,8 +10878,10 @@
},
{
"BriefDescription": "Vertical AD Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb0",
"EventName": "UNC_M2P_VERT_RING_AD_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AD Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AD ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9107,8 +10889,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb4",
"EventName": "UNC_M2P_VERT_RING_AKC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9116,8 +10900,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb4",
"EventName": "UNC_M2P_VERT_RING_AKC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9125,8 +10911,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb4",
"EventName": "UNC_M2P_VERT_RING_AKC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Even : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9134,8 +10922,10 @@
},
{
"BriefDescription": "Vertical AKC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb4",
"EventName": "UNC_M2P_VERT_RING_AKC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AKC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AKC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9143,8 +10933,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb1",
"EventName": "UNC_M2P_VERT_RING_AK_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9152,8 +10944,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb1",
"EventName": "UNC_M2P_VERT_RING_AK_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Down and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9161,8 +10955,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb1",
"EventName": "UNC_M2P_VERT_RING_AK_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Even : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9170,8 +10966,10 @@
},
{
"BriefDescription": "Vertical AK Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb1",
"EventName": "UNC_M2P_VERT_RING_AK_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical AK Ring In Use : Up and Odd : Counts the number of cycles that the Vertical AK ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9179,8 +10977,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "UNC_M2P_VERT_RING_BL_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9188,8 +10988,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "UNC_M2P_VERT_RING_BL_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Down and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9197,8 +10999,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "UNC_M2P_VERT_RING_BL_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Even : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9206,8 +11010,10 @@
},
{
"BriefDescription": "Vertical BL Ring in Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb2",
"EventName": "UNC_M2P_VERT_RING_BL_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical BL Ring in Use : Up and Odd : Counts the number of cycles that the Vertical BL ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
@@ -9215,8 +11021,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Down",
+ "Counter": "0,1,2,3",
"EventCode": "0xb3",
"EventName": "UNC_M2P_VERT_RING_IV_IN_USE.DN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Down : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x4",
@@ -9224,8 +11032,10 @@
},
{
"BriefDescription": "Vertical IV Ring in Use : Up",
+ "Counter": "0,1,2,3",
"EventCode": "0xb3",
"EventName": "UNC_M2P_VERT_RING_IV_IN_USE.UP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical IV Ring in Use : Up : Counts the number of cycles that the Vertical IV ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop. There is only 1 IV ring. Therefore, if one wants to monitor the Even ring, they should select both UP_EVEN and DN_EVEN. To monitor the Odd ring, they should select both UP_ODD and DN_ODD.",
"UMask": "0x1",
@@ -9233,8 +11043,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb5",
"EventName": "UNC_M2P_VERT_RING_TGC_IN_USE.DN_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x4",
@@ -9242,8 +11054,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Down and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb5",
"EventName": "UNC_M2P_VERT_RING_TGC_IN_USE.DN_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Down and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x8",
@@ -9251,8 +11065,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Even",
+ "Counter": "0,1,2,3",
"EventCode": "0xb5",
"EventName": "UNC_M2P_VERT_RING_TGC_IN_USE.UP_EVEN",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Even : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x1",
@@ -9260,8 +11076,10 @@
},
{
"BriefDescription": "Vertical TGC Ring In Use : Up and Odd",
+ "Counter": "0,1,2,3",
"EventCode": "0xb5",
"EventName": "UNC_M2P_VERT_RING_TGC_IN_USE.UP_ODD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Vertical TGC Ring In Use : Up and Odd : Counts the number of cycles that the Vertical TGC ring is being used at this ring stop. This includes when packets are passing by and when packets are being sunk, but does not include when packets are being sent from the ring stop.We really have two rings in JKT -- a clockwise ring and a counter-clockwise ring. On the left side of the ring, the UP direction is on the clockwise ring and DN is on the counter-clockwise ring. On the right side of the ring, this is reversed. The first half of the CBos are on the left side of the ring, and the 2nd half are on the right side of the ring. In other words (for example), in a 4c part, Cbo 0 UP AD is NOT the same ring as CBo 2 UP AD because they are on opposite sides of the ring.",
"UMask": "0x2",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/uncore-memory.json b/tools/perf/pmu-events/arch/x86/icelakex/uncore-memory.json
index 814d9599474d..87604c953c0f 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/uncore-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "DRAM Activate Count : All Activates",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M_ACT_COUNT.ALL",
"PerPkg": "1",
@@ -10,8 +11,10 @@
},
{
"BriefDescription": "DRAM Activate Count : Activate due to Bypass",
+ "Counter": "0,1,2,3",
"EventCode": "0x01",
"EventName": "UNC_M_ACT_COUNT.BYP",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Activate Count : Activate due to Bypass : Counts the number of DRAM Activate commands sent on this channel. Activate commands are issued to open up a page on the DRAM devices so that it can be read or written to with a CAS. One can calculate the number of Page Misses by subtracting the number of Page Miss precharges from the number of Activates.",
"UMask": "0x8",
@@ -19,6 +22,7 @@
},
{
"BriefDescription": "All DRAM CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.ALL",
"PerPkg": "1",
@@ -28,6 +32,7 @@
},
{
"BriefDescription": "All DRAM read CAS commands issued (including underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD",
"PerPkg": "1",
@@ -37,8 +42,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS commands w/auto-pre",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_PRE_REG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS commands w/auto-pre : DRAM RD_CAS and WR_CAS Commands : Counts the total number or DRAM Read CAS commands issued on this channel. This includes both regular RD CAS commands as well as those with explicit Precharge. AutoPre is only used in systems that are using closed page policy. We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).",
"UMask": "0x2",
@@ -46,8 +53,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_PRE_UNDERFILL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x8",
@@ -55,8 +64,10 @@
},
{
"BriefDescription": "All DRAM read CAS commands issued (does not include underfills)",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_REG",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total number of DRAM Read CAS commands issued on this channel. This includes both regular RD CAS commands as well as those with implicit Precharge. We do not filter based on major mode, as RD_CAS is not issued during WMM (with the exception of underfills).",
"UMask": "0x1",
@@ -64,8 +75,10 @@
},
{
"BriefDescription": "DRAM underfill read CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.RD_UNDERFILL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Counts the total of DRAM Read CAS commands issued due to an underfill",
"UMask": "0x4",
@@ -73,6 +86,7 @@
},
{
"BriefDescription": "All DRAM write CAS commands issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.WR",
"PerPkg": "1",
@@ -82,8 +96,10 @@
},
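
Since each read or write CAS command moves one cache line, a common back-of-the-envelope use of the two events above is estimating DRAM bandwidth as (UNC_M_CAS_COUNT.RD + UNC_M_CAS_COUNT.WR) * 64 bytes over the measurement interval. A short sketch of that arithmetic, with the 64-byte line size stated as an assumption rather than something these files spell out:

# Rough DRAM bandwidth from deltas of the two CAS counters over 'seconds'.
# Assumes 64 bytes per CAS command; the inputs below are placeholders.
def dram_bw_gbps(rd_cas, wr_cas, seconds):
    return (rd_cas + wr_cas) * 64 / seconds / 1e9

print(dram_bw_gbps(rd_cas=1.2e9, wr_cas=0.4e9, seconds=1.0))  # ~102.4 GB/s
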
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/o auto-pre",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.WR_NONPRE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/o auto-pre : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x10",
@@ -91,8 +107,10 @@
},
{
"BriefDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/ auto-pre",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_M_CAS_COUNT.WR_PRE",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM RD_CAS and WR_CAS Commands. : DRAM WR_CAS commands w/ auto-pre : DRAM RD_CAS and WR_CAS Commands",
"UMask": "0x20",
@@ -100,28 +118,34 @@
},
{
"BriefDescription": "DRAM Clockticks",
+ "Counter": "0,1,2,3",
"EventName": "UNC_M_CLOCKTICKS",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Free running counter that increments for the Memory Controller",
+ "Counter": "4",
"EventCode": "0xff",
"EventName": "UNC_M_CLOCKTICKS_FREERUN",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "imc_free_running"
},
{
"BriefDescription": "DRAM Precharge All Commands",
+ "Counter": "0,1,2,3",
"EventCode": "0x44",
"EventName": "UNC_M_DRAM_PRE_ALL",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge All Commands : Counts the number of times that the precharge all command was sent.",
"Unit": "iMC"
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.HIGH",
"PerPkg": "1",
@@ -131,6 +155,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.OPPORTUNISTIC",
"PerPkg": "1",
@@ -140,6 +165,7 @@
},
{
"BriefDescription": "Number of DRAM Refreshes Issued",
+ "Counter": "0,1,2,3",
"EventCode": "0x45",
"EventName": "UNC_M_DRAM_REFRESH.PANIC",
"PerPkg": "1",
@@ -149,6 +175,7 @@
},
{
"BriefDescription": "Half clockticks for IMC",
+ "Counter": "FIXED",
"EventCode": "0xff",
"EventName": "UNC_M_HCLOCKTICKS",
"PerPkg": "1",
@@ -156,37 +183,46 @@
},
{
"BriefDescription": "UNC_M_PARITY_ERRORS",
+ "Counter": "0,1,2,3",
"EventCode": "0x2c",
"EventName": "UNC_M_PARITY_ERRORS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.RD",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_PCLS.RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.TOTAL",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_PCLS.TOTAL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PCLS.WR",
+ "Counter": "0,1,2,3",
"EventCode": "0xA0",
"EventName": "UNC_M_PCLS.WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : All",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.ALL",
"PerPkg": "1",
@@ -196,22 +232,27 @@
},
{
"BriefDescription": "PMM Commands : Misc Commands (error, flow ACKs)",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.MISC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Misc GNTs",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.MISC_GNT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Reads - RPQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.RD",
"PerPkg": "1",
@@ -221,14 +262,17 @@
},
{
"BriefDescription": "PMM Commands : RPQ GNTs",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.RPQ_GNTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Underfill reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.UFILL_RD",
"PerPkg": "1",
@@ -238,14 +282,17 @@
},
{
"BriefDescription": "PMM Commands : Underfill GNTs",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.WPQ_GNTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands : Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xEA",
"EventName": "UNC_M_PMM_CMD1.WR",
"PerPkg": "1",
@@ -255,84 +302,105 @@
},
{
"BriefDescription": "PMM Commands - Part 2 : Expected No data packet (ERID matched NDP encoding)",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.NODATA_EXP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Unexpected No data packet (ERID matched a Read, but data was a NDP)",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.NODATA_UNEXP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Opportunistic Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.OPP_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : ECC Errors",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ECC_ERROR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : ERID detectable parity error",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ERID_ERROR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.PMM_ERID_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Read Requests - Slot 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.REQS_SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Commands - Part 2 : Read Requests - Slot 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xEB",
"EventName": "UNC_M_PMM_CMD2.REQS_SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Queue Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xE2",
"EventName": "UNC_M_PMM_RPQ_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xE1",
"EventName": "UNC_M_PMM_RPQ_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Read Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xE3",
"EventName": "UNC_M_PMM_RPQ_INSERTS",
"PerPkg": "1",
@@ -341,6 +409,7 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -350,8 +419,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.GNT_WAIT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x4",
@@ -359,8 +430,10 @@
},
{
"BriefDescription": "PMM Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE0",
"EventName": "UNC_M_PMM_RPQ_OCCUPANCY.NO_GNT",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Read Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Read Pending Queue.",
"UMask": "0x2",
@@ -368,34 +441,43 @@
},
{
"BriefDescription": "PMM Write Queue Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "UNC_M_PMM_WPQ_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Write Queue Cycles Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xE5",
"EventName": "UNC_M_PMM_WPQ_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PMM_WPQ_FLUSH",
+ "Counter": "0,1,2,3",
"EventCode": "0xe8",
"EventName": "UNC_M_PMM_WPQ_FLUSH",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_PMM_WPQ_FLUSH_CYC",
+ "Counter": "0,1,2,3",
"EventCode": "0xe9",
"EventName": "UNC_M_PMM_WPQ_FLUSH_CYC",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "PMM Write Queue Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0xE7",
"EventName": "UNC_M_PMM_WPQ_INSERTS",
"PerPkg": "1",
@@ -404,6 +486,7 @@
},
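
The same 64-bytes-per-command arithmetic is commonly applied to the PMM queue-insert events above (UNC_M_PMM_RPQ_INSERTS and UNC_M_PMM_WPQ_INSERTS) to approximate persistent-memory read and write bandwidth; that per-insert size is an assumption carried over from the DRAM sketch earlier, not something these entries state.

# Sketch: persistent-memory traffic estimate, assuming 64 bytes per insert.
def pmm_bw_gbps(rpq_inserts, wpq_inserts, seconds):
    return (rpq_inserts + wpq_inserts) * 64 / seconds / 1e9
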
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -413,8 +496,10 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.CAS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Write Pending Queue.",
"UMask": "0x2",
@@ -422,8 +507,10 @@
},
{
"BriefDescription": "PMM Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0xE4",
"EventName": "UNC_M_PMM_WPQ_OCCUPANCY.PWR",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "PMM Write Pending Queue Occupancy : Accumulates the per cycle occupancy of the PMM Write Pending Queue.",
"UMask": "0x4",
@@ -431,16 +518,20 @@
},
{
"BriefDescription": "Channel PPD Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "UNC_M_POWER_CHANNEL_PPD",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Channel PPD Cycles : Number of cycles when all the ranks in the channel are in PPD mode. If IBT=off is enabled, then this can be used to count those cycles. If it is not enabled, then this can count the number of cycles when that could have been taken advantage of.",
"Unit": "iMC"
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x1",
@@ -448,8 +539,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x2",
@@ -457,8 +550,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_2",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x4",
@@ -466,8 +561,10 @@
},
{
"BriefDescription": "CKE_ON_CYCLES by Rank : DIMM ID",
+ "Counter": "0,1,2,3",
"EventCode": "0x47",
"EventName": "UNC_M_POWER_CKE_CYCLES.LOW_3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "CKE_ON_CYCLES by Rank : DIMM ID : Number of cycles spent in CKE ON mode. The filter allows you to select a rank to monitor. If multiple ranks are in CKE ON mode at one time, the counter will ONLY increment by one rather than doing accumulation. Multiple counters will need to be used to track multiple ranks simultaneously. There is no distinction between the different CKE modes (APD, PPDS, PPDF). This can be determined based on the system programming. These events should commonly be used with Invert to get the number of cycles in power saving mode. Edge Detect is also useful here. Make sure that you do NOT use Invert with Edge Detect (this just confuses the system and is not necessary).",
"UMask": "0x8",
@@ -475,8 +572,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. : Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.",
"UMask": "0x1",
@@ -484,8 +583,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x86",
"EventName": "UNC_M_POWER_CRIT_THROTTLE_CYCLES.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x2",
@@ -493,16 +594,20 @@
},
{
"BriefDescription": "Clock-Enabled Self-Refresh",
+ "Counter": "0,1,2,3",
"EventCode": "0x43",
"EventName": "UNC_M_POWER_SELF_REFRESH",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Clock-Enabled Self-Refresh : Counts the number of cycles when the iMC is in self-refresh and the iMC still has a clock. This happens in some package C-states. For example, the PCU may ask the iMC to enter self-refresh even though some of the cores are still processing. One use of this is for Monroe technology. Self-refresh is required during package C3 and C6, but there is no clock in the iMC at this time, so it is not possible to count these cases.",
"Unit": "iMC"
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1. : Thermal throttling is performed per DIMM. We support 3 DIMMs per channel. This ID allows us to filter by ID.",
"UMask": "0x1",
@@ -510,8 +615,10 @@
},
{
"BriefDescription": "Throttle Cycles for Rank 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x46",
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.SLOT1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Throttle Cycles for Rank 0 : Counts the number of cycles while the iMC is being throttled by either thermal constraints or by the PCU throttling. It is not possible to distinguish between the two. This can be filtered by rank. If multiple ranks are selected and are being throttled at the same time, the counter will only increment by 1.",
"UMask": "0x2",
@@ -519,6 +626,7 @@
},
{
"BriefDescription": "DRAM Precharge commands.",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.ALL",
"PerPkg": "1",
@@ -528,8 +636,10 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to page miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.PAGE_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "DRAM Precharge commands. : Precharge due to page miss : Counts the number of DRAM Precharge commands sent on this channel. : Pages Misses are due to precharges from bank scheduler (rd/wr requests)",
"UMask": "0xc",
@@ -537,6 +647,7 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to page table",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.PGT",
"PerPkg": "1",
@@ -546,6 +657,7 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to read",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.RD",
"PerPkg": "1",
@@ -555,6 +667,7 @@
},
{
"BriefDescription": "DRAM Precharge commands. : Precharge due to write",
+ "Counter": "0,1,2,3",
"EventCode": "0x02",
"EventName": "UNC_M_PRE_COUNT.WR",
"PerPkg": "1",
@@ -564,52 +677,66 @@
},
{
"BriefDescription": "Read Data Buffer Full",
+ "Counter": "0,1,2,3",
"EventCode": "0x19",
"EventName": "UNC_M_RDB_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Inserts",
+ "Counter": "0,1,2,3",
"EventCode": "0x17",
"EventName": "UNC_M_RDB_INSERTS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x18",
"EventName": "UNC_M_RDB_NOT_EMPTY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Data Buffer Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x1A",
"EventName": "UNC_M_RDB_OCCUPANCY",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x12",
"EventName": "UNC_M_RPQ_CYCLES_FULL_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Full Cycles : Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. This count should be similar count in the HA which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see RPQ become full except for potentially during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries.",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x15",
"EventName": "UNC_M_RPQ_CYCLES_FULL_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Full Cycles : Counts the number of cycles when the Read Pending Queue is full. When the RPQ is full, the HA will not be able to issue any additional read requests into the iMC. This count should be similar count in the HA which tracks the number of cycles that the HA has no RPQ credits, just somewhat smaller to account for the credit return overhead. We generally do not expect to see RPQ become full except for potentially during Write Major Mode or while running with slow DRAM. This event only tracks non-ISOC queue entries.",
"Unit": "iMC"
},
{
"BriefDescription": "Read Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Not Empty : Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.",
"UMask": "0x1",
@@ -617,8 +744,10 @@
},
{
"BriefDescription": "Read Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "UNC_M_RPQ_CYCLES_NE.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Read Pending Queue Not Empty : Counts the number of cycles that the Read Pending Queue is not empty. This can then be used to calculate the average occupancy (in conjunction with the Read Pending Queue Occupancy count). The RPQ is used to schedule reads out to the memory controller and to track the requests. Requests allocate into the RPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the HA to the iMC. They deallocate after the CAS command has been issued to memory. This filter is to be used in conjunction with the occupancy filter so that one can correctly track the average occupancies for schedulable entries and scheduled requests.",
"UMask": "0x2",
@@ -626,6 +755,7 @@
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH0",
"PerPkg": "1",
@@ -635,6 +765,7 @@
},
{
"BriefDescription": "Read Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "UNC_M_RPQ_INSERTS.PCH1",
"PerPkg": "1",
@@ -644,6 +775,7 @@
},
{
"BriefDescription": "Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH0",
"PerPkg": "1",
@@ -652,6 +784,7 @@
},
{
"BriefDescription": "Read Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x81",
"EventName": "UNC_M_RPQ_OCCUPANCY_PCH1",
"PerPkg": "1",
@@ -660,749 +793,930 @@
},
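
The read-pending-queue occupancy events accumulate the queue depth every cycle, so paired with the insert counts above they give an average queue residency by Little's law: occupancy divided by inserts is the mean number of memory-controller clock cycles a read spends in the RPQ. A sketch of that calculation; the 1.6 GHz clock is only an illustrative value, not taken from these files:

# Average read-pending-queue residency via Little's law.
# occupancy = sum of per-cycle queue depth, inserts = number of allocations.
def avg_rpq_latency_ns(occupancy, inserts, dclk_hz=1.6e9):
    return (occupancy / inserts) / dclk_hz * 1e9
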
{
"BriefDescription": "Scoreboard Accesses : Scoreboard Accesses Accepted",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x5",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.FMRD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.FMWR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Write Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.FM_RD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Write Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.FM_WR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.NMRD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated.",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd2",
"EventName": "UNC_M_SB_ACCESSES.NMWR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : FM read completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.NM_RD_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : FM write completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.NM_WR_CMPS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Read Accepts",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.RD_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Read Rejects",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.RD_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : Scoreboard Accesses Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0xa",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : NM read completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.WR_ACCEPTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Accesses : NM write completions",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "UNC_M_SB_ACCESSES.WR_REJECTS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Alloc",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.ALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Dealloc",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.DEALLOC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_RD_STARVED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FMRD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_TGR_WR_STARVED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FMTGRWR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.FM_WR_STARVED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.FMWR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FM_RD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FM_TGR_WR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.FM_WR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.NM_RD_STARVED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.NMRD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_CANARY.NM_WR_STARVED",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd9",
"EventName": "UNC_M_SB_CANARY.NMWR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Valid",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.NM_RD_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read Starved",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.NM_WR_STARVED",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Reject",
+ "Counter": "0,1,2,3",
"EventCode": "0xD9",
"EventName": "UNC_M_SB_CANARY.VLD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Cycles Full",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "UNC_M_SB_CYCLES_FULL",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Cycles Not-Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "UNC_M_SB_CYCLES_NE",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Block region reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Block region writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.BLOCK_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Persistent Mem reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.PMM_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Persistent Mem writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.PMM_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Inserts : Writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD6",
"EventName": "UNC_M_SB_INSERTS.WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Block region reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Block region writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.BLOCK_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Persistent Mem reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Persistent Mem writes",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.PMM_WRS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Occupancy : Reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xD5",
"EventName": "UNC_M_SB_OCCUPANCY.RDS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Inserts : All",
+ "Counter": "0,1,2,3",
"EventCode": "0xDA",
"EventName": "UNC_M_SB_PREF_INSERTS.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Inserts : DDR4",
+ "Counter": "0,1,2,3",
"EventCode": "0xDA",
"EventName": "UNC_M_SB_PREF_INSERTS.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Inserts : Persistent Mem",
+ "Counter": "0,1,2,3",
"EventCode": "0xDA",
"EventName": "UNC_M_SB_PREF_INSERTS.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Occupancy : All",
+ "Counter": "0,1,2,3",
"EventCode": "0xDB",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.ALL",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Occupancy : DDR4",
+ "Counter": "0,1,2,3",
"EventCode": "0xDB",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.DDR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_PREF_OCCUPANCY.PMM",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xdb",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.PMEM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Scoreboard Prefetch Occupancy : Persistent Mem",
+ "Counter": "0,1,2,3",
"EventCode": "0xdb",
"EventName": "UNC_M_SB_PREF_OCCUPANCY.PMM",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.CANARY",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.DDR_EARLY_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected : FM requests rejected due to full address conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.FM_ADDR_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected : NM requests rejected due to set conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.NM_SET_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "Number of Scoreboard Requests Rejected : Patrol requests rejected due to set conflict",
+ "Counter": "0,1,2,3",
"EventCode": "0xD4",
"EventName": "UNC_M_SB_REJECT.PATROL_SET_CNFLT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_TGR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMTGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.FM_WR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.FMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.FM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.NM_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_ALLOC.NM_WR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd7",
"EventName": "UNC_M_SB_STRV_ALLOC.NMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xD7",
"EventName": "UNC_M_SB_STRV_ALLOC.NM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_TGR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FMTGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.FM_WR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.FMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.FM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.NM_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.NMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_DEALLOC.NM_WR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xde",
"EventName": "UNC_M_SB_STRV_DEALLOC.NMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.NM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write - Set",
+ "Counter": "0,1,2,3",
"EventCode": "0xDE",
"EventName": "UNC_M_SB_STRV_DEALLOC.NM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_TGR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FMTGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.FM_WR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.FMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Read",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read - Clear",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FM_TGR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": ": Far Mem Write",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.FM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.NM_RD",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.NMRD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "This event is deprecated. Refer to new event UNC_M_SB_STRV_OCC.NM_WR",
+ "Counter": "0,1,2,3",
"Deprecated": "1",
"EventCode": "0xd8",
"EventName": "UNC_M_SB_STRV_OCC.NMWR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Read",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.NM_RD",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": ": Near Mem Write",
+ "Counter": "0,1,2,3",
"EventCode": "0xD8",
"EventName": "UNC_M_SB_STRV_OCC.NM_WR",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.DDR4_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.DDR4_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x8",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.NEW",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.NEW",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x1",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.OCC",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.OCC",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x80",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM0_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM0_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM1_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM1_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x20",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.PMM2_CMP",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.PMM2_CMP",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x40",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.RD_HIT",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.RD_HIT",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x2",
"Unit": "iMC"
},
{
"BriefDescription": "UNC_M_SB_TAGGED.RD_MISS",
+ "Counter": "0,1,2,3",
"EventCode": "0xDD",
"EventName": "UNC_M_SB_TAGGED.RD_MISS",
+ "Experimental": "1",
"PerPkg": "1",
"UMask": "0x4",
"Unit": "iMC"
},
{
"BriefDescription": "2LM Tag Check : Hit in Near Memory Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.HIT",
"PerPkg": "1",
@@ -1411,6 +1725,7 @@
},
{
"BriefDescription": "2LM Tag Check : Miss, no data in this line",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.MISS_CLEAN",
"PerPkg": "1",
@@ -1419,6 +1734,7 @@
},
{
"BriefDescription": "2LM Tag Check : Miss, existing data may be evicted to Far Memory",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.MISS_DIRTY",
"PerPkg": "1",
@@ -1427,6 +1743,7 @@
},
{
"BriefDescription": "2LM Tag Check : Read Hit in Near Memory Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.NM_RD_HIT",
"PerPkg": "1",
@@ -1435,6 +1752,7 @@
},
{
"BriefDescription": "2LM Tag Check : Write Hit in Near Memory Cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "UNC_M_TAGCHK.NM_WR_HIT",
"PerPkg": "1",
@@ -1443,24 +1761,30 @@
},
{
"BriefDescription": "Write Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x22",
"EventName": "UNC_M_WPQ_CYCLES_FULL_PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Full Cycles : Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overhead.",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Full Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x16",
"EventName": "UNC_M_WPQ_CYCLES_FULL_PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Full Cycles : Counts the number of cycles when the Write Pending Queue is full. When the WPQ is full, the HA will not be able to issue any additional write requests into the iMC. This count should be similar count in the CHA which tracks the number of cycles that the CHA has no WPQ credits, just somewhat smaller to account for the credit return overhead.",
"Unit": "iMC"
},
{
"BriefDescription": "Write Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Not Empty : Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies.",
"UMask": "0x1",
@@ -1468,8 +1792,10 @@
},
{
"BriefDescription": "Write Pending Queue Not Empty",
+ "Counter": "0,1,2,3",
"EventCode": "0x21",
"EventName": "UNC_M_WPQ_CYCLES_NE.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue Not Empty : Counts the number of cycles that the Write Pending Queue is not empty. This can then be used to calculate the average queue occupancy (in conjunction with the WPQ Occupancy Accumulation count). The WPQ is used to schedule write out to the memory controller and to track the writes. Requests allocate into the WPQ soon after they enter the memory controller, and need credits for an entry in this buffer before being sent from the CHA to the iMC. They deallocate after being issued to DRAM. Write requests themselves are able to complete (from the perspective of the rest of the system) as soon they have posted to the iMC. This is not to be confused with actually performing the write to DRAM. Therefore, the average latency for this queue is actually not useful for deconstruction intermediate write latencies.",
"UMask": "0x2",
@@ -1477,6 +1803,7 @@
},
{
"BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH0",
"PerPkg": "1",
@@ -1486,6 +1813,7 @@
},
{
"BriefDescription": "Write Pending Queue Allocations",
+ "Counter": "0,1,2,3",
"EventCode": "0x20",
"EventName": "UNC_M_WPQ_INSERTS.PCH1",
"PerPkg": "1",
@@ -1495,6 +1823,7 @@
},
{
"BriefDescription": "Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x82",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH0",
"PerPkg": "1",
@@ -1503,6 +1832,7 @@
},
{
"BriefDescription": "Write Pending Queue Occupancy",
+ "Counter": "0,1,2,3",
"EventCode": "0x83",
"EventName": "UNC_M_WPQ_OCCUPANCY_PCH1",
"PerPkg": "1",
@@ -1511,8 +1841,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x1",
@@ -1520,8 +1852,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x23",
"EventName": "UNC_M_WPQ_READ_HIT.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x2",
@@ -1529,8 +1863,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT.PCH0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x1",
@@ -1538,8 +1874,10 @@
},
{
"BriefDescription": "Write Pending Queue CAM Match",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "UNC_M_WPQ_WRITE_HIT.PCH1",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Write Pending Queue CAM Match : Counts the number of times a request hits in the WPQ (write-pending queue). The iMC allows writes and reads to pass up other writes to different addresses. Before a read or a write is issued, it will first CAM the WPQ to see if there is a write pending to that address. When reads hit, they are able to directly pull their data from the WPQ instead of going to memory. Writes that hit will overwrite the existing data. Partial writes that hit will not need to do underfill reads and will simply update their relevant sections.",
"UMask": "0x2",
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/uncore-power.json b/tools/perf/pmu-events/arch/x86/icelakex/uncore-power.json
index ee4dac6fc797..03984d61ab29 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/uncore-power.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/uncore-power.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Clockticks of the power control unit (PCU)",
+ "Counter": "0,1,2,3",
"EventName": "UNC_P_CLOCKTICKS",
"PerPkg": "1",
"PublicDescription": "Clockticks of the power control unit (PCU) : The PCU runs off a fixed 1 GHz clock. This event counts the number of pclk cycles measured while the counter was enabled. The pclk, like the Memory Controller's dclk, counts at a constant rate making it a good measure of actual wall time.",
@@ -8,195 +9,248 @@
},
{
"BriefDescription": "UNC_P_CORE_TRANSITION_CYCLES",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "UNC_P_CORE_TRANSITION_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_DEMOTIONS",
+ "Counter": "0,1,2,3",
"EventCode": "0x30",
"EventName": "UNC_P_DEMOTIONS",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 0 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x75",
"EventName": "UNC_P_FIVR_PS_PS0_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 0 Cycles : Cycles spent in phase-shedding power state 0",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 1 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x76",
"EventName": "UNC_P_FIVR_PS_PS1_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 1 Cycles : Cycles spent in phase-shedding power state 1",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 2 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x77",
"EventName": "UNC_P_FIVR_PS_PS2_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 2 Cycles : Cycles spent in phase-shedding power state 2",
"Unit": "PCU"
},
{
"BriefDescription": "Phase Shed 3 Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x78",
"EventName": "UNC_P_FIVR_PS_PS3_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Phase Shed 3 Cycles : Cycles spent in phase-shedding power state 3",
"Unit": "PCU"
},
{
"BriefDescription": "AVX256 Frequency Clipping",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "UNC_P_FREQ_CLIP_AVX256",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "AVX512 Frequency Clipping",
+ "Counter": "0,1,2,3",
"EventCode": "0x4a",
"EventName": "UNC_P_FREQ_CLIP_AVX512",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Thermal Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x04",
"EventName": "UNC_P_FREQ_MAX_LIMIT_THERMAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Thermal Strongest Upper Limit Cycles : Number of cycles any frequency is reduced due to a thermal limit. Count only if throttling is occurring.",
"Unit": "PCU"
},
{
"BriefDescription": "Power Strongest Upper Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "UNC_P_FREQ_MAX_POWER_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Power Strongest Upper Limit Cycles : Counts the number of cycles when power is the upper limit on frequency.",
"Unit": "PCU"
},
{
"BriefDescription": "IO P Limit Strongest Lower Limit Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x73",
"EventName": "UNC_P_FREQ_MIN_IO_P_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "IO P Limit Strongest Lower Limit Cycles : Counts the number of cycles when IO P Limit is preventing us from dropping the frequency lower. This algorithm monitors the needs to the IO subsystem on both local and remote sockets and will maintain a frequency high enough to maintain good IO BW. This is necessary for when all the IA cores on a socket are idle but a user still would like to maintain high IO Bandwidth.",
"Unit": "PCU"
},
{
"BriefDescription": "Cycles spent changing Frequency",
+ "Counter": "0,1,2,3",
"EventCode": "0x74",
"EventName": "UNC_P_FREQ_TRANS_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Cycles spent changing Frequency : Counts the number of cycles when the system is changing frequency. This can not be filtered by thread ID. One can also use it with the occupancy counter that monitors number of threads in C0 to estimate the performance impact that frequency transitions had on the system.",
"Unit": "PCU"
},
{
"BriefDescription": "Memory Phase Shedding Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x2F",
"EventName": "UNC_P_MEMORY_PHASE_SHEDDING_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Memory Phase Shedding Cycles : Counts the number of cycles that the PCU has triggered memory phase shedding. This is a mode that can be run in the iMC physicals that saves power at the expense of additional latency.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C0",
+ "Counter": "0,1,2,3",
"EventCode": "0x2A",
"EventName": "UNC_P_PKG_RESIDENCY_C0_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Package C State Residency - C0 : Counts the number of cycles when the package was in C0. This event can be used in conjunction with edge detect to count C0 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C2E",
+ "Counter": "0,1,2,3",
"EventCode": "0x2B",
"EventName": "UNC_P_PKG_RESIDENCY_C2E_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Package C State Residency - C2E : Counts the number of cycles when the package was in C2E. This event can be used in conjunction with edge detect to count C2E entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x2C",
"EventName": "UNC_P_PKG_RESIDENCY_C3_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Package C State Residency - C3 : Counts the number of cycles when the package was in C3. This event can be used in conjunction with edge detect to count C3 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "Package C State Residency - C6",
+ "Counter": "0,1,2,3",
"EventCode": "0x2D",
"EventName": "UNC_P_PKG_RESIDENCY_C6_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Package C State Residency - C6 : Counts the number of cycles when the package was in C6. This event can be used in conjunction with edge detect to count C6 entrances (or exits using invert). Residency events do not include transition times.",
"Unit": "PCU"
},
{
"BriefDescription": "UNC_P_PMAX_THROTTLED_CYCLES",
+ "Counter": "0,1,2,3",
"EventCode": "0x06",
"EventName": "UNC_P_PMAX_THROTTLED_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State : C0 and C1",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C0",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cores in C-State : C0 and C1 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "UMask": "0x40",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State : C3",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C3",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cores in C-State : C3 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "UMask": "0x80",
"Unit": "PCU"
},
{
"BriefDescription": "Number of cores in C-State : C6 and C7",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "UNC_P_POWER_STATE_OCCUPANCY.CORES_C6",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Number of cores in C-State : C6 and C7 : This is an occupancy event that tracks the number of cores that are in the chosen C-State. It can be used by itself to get the average number of cores in that C-state with thresholding to generate histograms, or with other PCU events and occupancy triggering to capture other details.",
+ "UMask": "0xc0",
"Unit": "PCU"
},
{
"BriefDescription": "External Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x0A",
"EventName": "UNC_P_PROCHOT_EXTERNAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "External Prochot : Counts the number of cycles that we are in external PROCHOT mode. This mode is triggered when a sensor off the die determines that something off-die (like DRAM) is too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU"
},
{
"BriefDescription": "Internal Prochot",
+ "Counter": "0,1,2,3",
"EventCode": "0x09",
"EventName": "UNC_P_PROCHOT_INTERNAL_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Internal Prochot : Counts the number of cycles that we are in Internal PROCHOT mode. This mode is triggered when a sensor on the die determines that we are too hot and must throttle to avoid damaging the chip.",
"Unit": "PCU"
},
{
"BriefDescription": "Total Core C State Transition Cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x72",
"EventName": "UNC_P_TOTAL_TRANSITION_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "Total Core C State Transition Cycles : Number of cycles spent performing core C state transitions across all cores.",
"Unit": "PCU"
},
{
"BriefDescription": "VR Hot",
+ "Counter": "0,1,2,3",
"EventCode": "0x42",
"EventName": "UNC_P_VR_HOT_CYCLES",
+ "Experimental": "1",
"PerPkg": "1",
"PublicDescription": "VR Hot : Number of cycles that a CPU SVID VR is hot. Does not cover DRAM VRs",
"Unit": "PCU"
diff --git a/tools/perf/pmu-events/arch/x86/icelakex/virtual-memory.json b/tools/perf/pmu-events/arch/x86/icelakex/virtual-memory.json
index e3227c7f2fe9..9df790d4361f 100644
--- a/tools/perf/pmu-events/arch/x86/icelakex/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/icelakex/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Loads that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Counts loads that miss the DTLB (Data TLB) and hit the STLB (Second level TLB).",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a demand load.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_ACTIVE",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data load to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data loads. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a demand load in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a demand load in the PMH (Page Miss Handler) each cycle.",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Stores that miss the DTLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Counts stores that miss the DTLB (Data TLB) and hit the STLB (2nd Level TLB).",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for a store.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_ACTIVE",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -83,6 +93,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 1G page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_1G",
"PublicDescription": "Counts completed page walks (1G sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 2M/4M page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -99,6 +111,7 @@
},
{
"BriefDescription": "Page walks completed due to a demand data store to a 4K page.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K sizes) caused by demand data stores. This implies address translations missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
@@ -107,6 +120,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for a store in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for a store in the PMH (Page Miss Handler) each cycle.",
@@ -115,6 +129,7 @@
},
{
"BriefDescription": "Instruction fetch requests that miss the ITLB and hit the STLB.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "Counts instruction fetch requests that miss the ITLB (Instruction TLB) and hit the STLB (Second-level TLB).",
@@ -123,6 +138,7 @@
},
{
"BriefDescription": "Cycles when at least one PMH is busy with a page walk for code (instruction fetch) request.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_ACTIVE",
@@ -132,6 +148,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -140,6 +157,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (2M/4M)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_2M_4M",
"PublicDescription": "Counts completed page walks (2M/4M page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -148,6 +166,7 @@
},
{
"BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (4K)",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED_4K",
"PublicDescription": "Counts completed page walks (4K page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
@@ -156,6 +175,7 @@
},
{
"BriefDescription": "Number of page walks outstanding for an outstanding code request in the PMH each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_PENDING",
"PublicDescription": "Counts the number of page walks outstanding for an outstanding code (instruction fetch) request in the PMH (Page Miss Handler) each cycle.",
@@ -164,6 +184,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "Counts the number of DTLB flush attempts of the thread-specific entries.",
@@ -172,6 +193,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "Counts the number of any STLB flush attempts (such as entire, VPID, PCID, InvPage, CR3 write, etc.).",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/cache.json b/tools/perf/pmu-events/arch/x86/ivybridge/cache.json
index 46570b522095..563ec3f71c5a 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/cache.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts the number of lines brought into the L1 data cache.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles a demand request was blocked due to Fill Buffers unavailability",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "L1D miss outstanding duration in cycles",
+ "Counter": "2",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Increments the number of outstanding L1D misses every cycle. Set Cmask = 1 and Edge =1 to count occurrences.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -35,6 +39,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -44,6 +49,7 @@
},
{
"BriefDescription": "Not rejected writebacks from L1D to L2 cache lines in any state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.ALL",
"SampleAfterValue": "200003",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "Not rejected writebacks from L1D to L2 cache lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.HIT_E",
"PublicDescription": "Not rejected writebacks from L1D to L2 cache lines in E state.",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "Not rejected writebacks from L1D to L2 cache lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.HIT_M",
"PublicDescription": "Not rejected writebacks from L1D to L2 cache lines in M state.",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Count the number of modified Lines evicted from L1 and missed L2. (Non-rejected WBs from the DCU.)",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.MISS",
"PublicDescription": "Not rejected writebacks that missed LLC.",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "L2 cache lines filling L2.",
@@ -83,6 +93,7 @@
},
{
"BriefDescription": "L2 cache lines in E state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.E",
"PublicDescription": "L2 cache lines in E state filling L2.",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "L2 cache lines in I state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.I",
"PublicDescription": "L2 cache lines in I state filling L2.",
@@ -99,6 +111,7 @@
},
{
"BriefDescription": "L2 cache lines in S state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.S",
"PublicDescription": "L2 cache lines in S state filling L2.",
@@ -107,6 +120,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_CLEAN",
"PublicDescription": "Clean L2 cache lines evicted by demand.",
@@ -115,6 +129,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_DIRTY",
"PublicDescription": "Dirty L2 cache lines evicted by demand.",
@@ -123,6 +138,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines filling the L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DIRTY_ALL",
"PublicDescription": "Dirty L2 cache lines filling the L2.",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by L2 prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.PF_CLEAN",
"PublicDescription": "Clean L2 cache lines evicted by the MLC prefetcher.",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines evicted by L2 prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.PF_DIRTY",
"PublicDescription": "Dirty L2 cache lines evicted by the MLC prefetcher.",
@@ -147,6 +165,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts all L2 code requests.",
@@ -155,6 +174,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts any demand and L1 HW prefetch data load requests to L2.",
@@ -163,6 +183,7 @@
},
{
"BriefDescription": "Requests from L2 hardware prefetchers",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "Counts all L2 HW prefetcher requests.",
@@ -171,6 +192,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts all L2 store RFO requests.",
@@ -179,6 +201,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Number of instruction fetches that hit the L2 cache.",
@@ -187,6 +210,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Number of instruction fetches that missed the L2 cache.",
@@ -195,6 +219,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Demand Data Read requests that hit L2 cache.",
@@ -203,6 +228,7 @@
},
{
"BriefDescription": "Requests from the L2 hardware prefetchers that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.PF_HIT",
"PublicDescription": "Counts all L2 HW prefetcher requests that hit L2.",
@@ -211,6 +237,7 @@
},
{
"BriefDescription": "Requests from the L2 hardware prefetchers that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.PF_MISS",
"PublicDescription": "Counts all L2 HW prefetcher requests that missed L2.",
@@ -219,6 +246,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "RFO requests that hit L2 cache.",
@@ -227,6 +255,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the number of store RFO requests that miss the L2 cache.",
@@ -235,6 +264,7 @@
},
{
"BriefDescription": "RFOs that access cache lines in any state",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_STORE_LOCK_RQSTS.ALL",
"PublicDescription": "RFOs that access cache lines in any state.",
@@ -243,6 +273,7 @@
},
{
"BriefDescription": "RFOs that hit cache lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_STORE_LOCK_RQSTS.HIT_M",
"PublicDescription": "RFOs that hit cache lines in M state.",
@@ -251,6 +282,7 @@
},
{
"BriefDescription": "RFOs that miss cache lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_STORE_LOCK_RQSTS.MISS",
"PublicDescription": "RFOs that miss cache lines.",
@@ -259,6 +291,7 @@
},
{
"BriefDescription": "L2 or LLC HW prefetches that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_PF",
"PublicDescription": "Any MLC or LLC HW prefetch accessing L2, including rejects.",
@@ -267,6 +300,7 @@
},
{
"BriefDescription": "Transactions accessing L2 pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_REQUESTS",
"PublicDescription": "Transactions accessing L2 pipe.",
@@ -275,6 +309,7 @@
},
{
"BriefDescription": "L2 cache accesses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.CODE_RD",
"PublicDescription": "L2 cache accesses when fetching instructions.",
@@ -283,6 +318,7 @@
},
{
"BriefDescription": "Demand Data Read requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.DEMAND_DATA_RD",
"PublicDescription": "Demand Data Read requests that access L2 cache.",
@@ -291,6 +327,7 @@
},
{
"BriefDescription": "L1D writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L1D_WB",
"PublicDescription": "L1D writebacks that access L2 cache.",
@@ -299,6 +336,7 @@
},
{
"BriefDescription": "L2 fill requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_FILL",
"PublicDescription": "L2 fill requests that access L2 cache.",
@@ -307,6 +345,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "L2 writebacks that access L2 cache.",
@@ -315,6 +354,7 @@
},
{
"BriefDescription": "RFO requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.RFO",
"PublicDescription": "RFO requests that access L2 cache.",
@@ -323,6 +363,7 @@
},
{
"BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
"PublicDescription": "Cycles in which the L1D is locked.",
@@ -331,6 +372,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "This event counts each cache miss condition for references to the last level cache.",
@@ -339,6 +381,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "This event counts requests originating from the core that reference a cache line in the last level cache.",
@@ -347,6 +390,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were LLC and cross-core snoop hits in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT",
"PEBS": "1",
@@ -355,6 +399,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were HitM responses from shared LLC.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM",
"PEBS": "1",
@@ -363,6 +408,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were LLC hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS",
"PEBS": "1",
@@ -371,6 +417,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were hits in LLC without snoops required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_NONE",
"PEBS": "1",
@@ -379,6 +426,7 @@
},
{
"BriefDescription": "Retired load uops which data sources missed LLC but serviced from local dram.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM",
"PublicDescription": "Retired load uops whose data source was local memory (cross-socket snoop not needed or missed).",
@@ -387,6 +435,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HIT_LFB",
"PEBS": "1",
@@ -395,6 +444,7 @@
},
{
"BriefDescription": "Retired load uops with L1 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"PEBS": "1",
@@ -403,6 +453,7 @@
},
{
"BriefDescription": "Retired load uops which data sources following L1 data-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
"PEBS": "1",
@@ -411,6 +462,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
"PEBS": "1",
@@ -419,6 +471,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache misses as data sources.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
"PEBS": "1",
@@ -427,6 +480,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were data hits in LLC without snoops required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.LLC_HIT",
"PEBS": "1",
@@ -435,6 +489,7 @@
},
{
"BriefDescription": "Miss in last-level (L3) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.LLC_MISS",
"PEBS": "1",
@@ -443,6 +498,7 @@
},
{
"BriefDescription": "All retired load uops. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
"PEBS": "1",
@@ -451,6 +507,7 @@
},
{
"BriefDescription": "All retired store uops. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
"PEBS": "1",
@@ -459,6 +516,7 @@
},
{
"BriefDescription": "Retired load uops with locked access. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
"PEBS": "1",
@@ -467,6 +525,7 @@
},
{
"BriefDescription": "Retired load uops that split across a cacheline boundary. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
"PEBS": "1",
@@ -475,6 +534,7 @@
},
{
"BriefDescription": "Retired store uops that split across a cacheline boundary. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
"PEBS": "1",
@@ -483,6 +543,7 @@
},
{
"BriefDescription": "Retired load uops that miss the STLB. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
"PEBS": "1",
@@ -491,6 +552,7 @@
},
{
"BriefDescription": "Retired store uops that miss the STLB. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
"PEBS": "1",
@@ -499,6 +561,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Data read requests sent to uncore (demand and prefetch).",
@@ -507,6 +570,7 @@
},
{
"BriefDescription": "Cacheable and noncacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "Demand code read requests sent to uncore.",
@@ -515,6 +579,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Demand data read requests sent to uncore.",
@@ -523,6 +588,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Demand RFO read requests sent to uncore, including regular RFOs, locks, ItoM.",
@@ -531,6 +597,7 @@
},
{
"BriefDescription": "Cases when offcore requests buffer cannot take more entries for core",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
"PublicDescription": "Cases when offcore requests buffer cannot take more entries for core.",
@@ -539,6 +606,7 @@
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
"PublicDescription": "Offcore outstanding cacheable data read transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -547,6 +615,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
@@ -556,6 +625,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
@@ -565,6 +635,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
@@ -574,6 +645,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
@@ -583,6 +655,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
"PublicDescription": "Offcore outstanding Demand Code Read transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -591,6 +664,7 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "Offcore outstanding Demand Data Read transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -599,6 +673,7 @@
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_GE_6",
@@ -608,6 +683,7 @@
},
{
"BriefDescription": "Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
"PublicDescription": "Offcore outstanding RFO store transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -616,6 +692,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -625,6 +702,7 @@
},
{
"BriefDescription": "Counts demand & prefetch code reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -634,6 +712,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -643,6 +722,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -652,6 +732,7 @@
},
{
"BriefDescription": "Counts demand & prefetch data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -661,6 +742,7 @@
},
{
"BriefDescription": "Counts demand & prefetch data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -670,6 +752,7 @@
},
{
"BriefDescription": "Counts demand & prefetch data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -679,6 +762,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo references (demand & prefetch)",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -688,6 +772,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch prefetch RFOs",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -697,6 +782,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch RFOs that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -706,6 +792,7 @@
},
{
"BriefDescription": "Counts demand & prefetch RFOs that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_RFO.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -715,6 +802,7 @@
},
{
"BriefDescription": "Counts all writebacks from the core to the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -724,6 +812,7 @@
},
{
"BriefDescription": "Counts all demand code reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -733,6 +822,7 @@
},
{
"BriefDescription": "Counts all demand code reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -742,6 +832,7 @@
},
{
"BriefDescription": "Counts demand code reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -751,6 +842,7 @@
},
{
"BriefDescription": "Counts all demand data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -760,6 +852,7 @@
},
{
"BriefDescription": "Counts all demand data reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -769,6 +862,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -778,6 +872,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -787,6 +882,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -796,6 +892,7 @@
},
{
"BriefDescription": "Counts all demand rfo's",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -805,6 +902,7 @@
},
{
"BriefDescription": "Counts all demand data writes (RFOs) that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -814,6 +912,7 @@
},
{
"BriefDescription": "Counts demand data writes (RFOs) that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -823,6 +922,7 @@
},
{
"BriefDescription": "Counts demand data writes (RFOs) that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -832,6 +932,7 @@
},
{
"BriefDescription": "Counts miscellaneous accesses that include port i/o, MMIO and uncacheable memory accesses. It also includes L2 hints sent to LLC to keep a line from being evicted out of the core caches",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -841,6 +942,7 @@
},
{
"BriefDescription": "Counts requests where the address of an atomic lock instruction spans a cache line boundary or the lock instruction is executed on uncacheable address",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.SPLIT_LOCK_UC_LOCK.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -850,6 +952,7 @@
},
{
"BriefDescription": "Counts non-temporal stores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -859,6 +962,7 @@
},
{
"BriefDescription": "Split locks in SQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xF4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"SampleAfterValue": "100003",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/counter.json b/tools/perf/pmu-events/arch/x86/ivybridge/counter.json
new file mode 100644
index 000000000000..35bb154900d7
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/counter.json
@@ -0,0 +1,17 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "ARB",
+ "CountersNumFixed": "1",
+ "CountersNumGeneric": "2"
+ },
+ {
+ "Unit": "CBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"
+ }
+] \ No newline at end of file
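
The new counter.json declares how many fixed and generic counters each PMU unit exposes (four generic core counters on Ivy Bridge), which is what the "Counter": "0,1,2,3" fields added throughout this patch have to stay within. A hypothetical consistency check in Python, assuming the file layout shown in this diff (this helper is not part of the kernel tree):

import json

# Assumed paths, matching the files touched by this patch.
ARCH = "tools/perf/pmu-events/arch/x86/ivybridge"

with open(f"{ARCH}/counter.json") as f:
    generic = {u["Unit"]: int(u["CountersNumGeneric"]) for u in json.load(f)}

with open(f"{ARCH}/cache.json") as f:
    events = json.load(f)

limit = generic["core"]  # 4 generic counters per core
for ev in events:
    idxs = [int(c) for c in ev.get("Counter", "").split(",") if c.strip().isdigit()]
    if any(i >= limit for i in idxs):
        print(f'{ev["EventName"]}: "Counter" exceeds the {limit} generic counters')
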
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/floating-point.json b/tools/perf/pmu-events/arch/x86/ivybridge/floating-point.json
index 89c6d47cc077..336fa00ad006 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/floating-point.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles with any input/output SSE or FP assist",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.ANY",
@@ -10,6 +11,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to input values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_INPUT",
"PublicDescription": "Number of SIMD FP assists due to input values.",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "Number of SIMD FP assists due to Output values",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.SIMD_OUTPUT",
"PublicDescription": "Number of SIMD FP assists due to output values.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Number of X87 assists due to input value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_INPUT",
"PublicDescription": "Number of X87 FP assists due to input values.",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Number of X87 assists due to output value.",
+ "Counter": "0,1,2,3",
"EventCode": "0xCA",
"EventName": "FP_ASSIST.X87_OUTPUT",
"PublicDescription": "Number of X87 FP assists due to output values.",
@@ -42,6 +47,7 @@
},
{
"BriefDescription": "Number of SSE* or AVX-128 FP Computational packed double-precision uops issued this cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE",
"PublicDescription": "Number of SSE* or AVX-128 FP Computational packed double-precision uops issued this cycle.",
@@ -50,6 +56,7 @@
},
{
"BriefDescription": "Number of SSE* or AVX-128 FP Computational packed single-precision uops issued this cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "FP_COMP_OPS_EXE.SSE_PACKED_SINGLE",
"PublicDescription": "Number of SSE* or AVX-128 FP Computational packed single-precision uops issued this cycle.",
@@ -58,6 +65,7 @@
},
{
"BriefDescription": "Number of SSE* or AVX-128 FP Computational scalar double-precision uops issued this cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE",
"PublicDescription": "Counts number of SSE* or AVX-128 double precision FP scalar uops executed.",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Number of SSE* or AVX-128 FP Computational scalar single-precision uops issued this cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE",
"PublicDescription": "Number of SSE* or AVX-128 FP Computational scalar single-precision uops issued this cycle.",
@@ -74,6 +83,7 @@
},
{
"BriefDescription": "Number of FP Computational Uops Executed this cycle. The number of FADD, FSUB, FCOM, FMULs, integer MULs and IMULs, FDIVs, FPREMs, FSQRTS, integer DIVs, and IDIVs. This event does not distinguish an FADD used in the middle of a transcendental flow from a s",
+ "Counter": "0,1,2,3",
"EventCode": "0x10",
"EventName": "FP_COMP_OPS_EXE.X87",
"PublicDescription": "Counts number of X87 uops executed.",
@@ -82,6 +92,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "Number of SIMD Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.SIMD_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -96,6 +108,7 @@
},
{
"BriefDescription": "Number of GSSE memory assist for stores. GSSE microcode assist is being invoked whenever the hardware is unable to properly handle GSSE-256b operations.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.AVX_STORE",
"PublicDescription": "Number of assists associated with 256-bit AVX store operations.",
@@ -104,6 +117,7 @@
},
{
"BriefDescription": "Number of transitions from AVX-256 to legacy SSE when penalty applicable.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.AVX_TO_SSE",
"SampleAfterValue": "100003",
@@ -111,6 +125,7 @@
},
{
"BriefDescription": "Number of transitions from SSE to AVX-256 when penalty applicable.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.SSE_TO_AVX",
"SampleAfterValue": "100003",
@@ -118,6 +133,7 @@
},
{
"BriefDescription": "number of AVX-256 Computational FP double precision uops issued this cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "SIMD_FP_256.PACKED_DOUBLE",
"PublicDescription": "Counts 256-bit packed double-precision floating-point instructions.",
@@ -126,6 +142,7 @@
},
{
"BriefDescription": "number of GSSE-256 Computational FP single precision uops issued this cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x11",
"EventName": "SIMD_FP_256.PACKED_SINGLE",
"PublicDescription": "Counts 256-bit packed single-precision floating-point instructions.",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/frontend.json b/tools/perf/pmu-events/arch/x86/ivybridge/frontend.json
index 4ee100024ca9..0d6c829a6023 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/frontend.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/frontend.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the total number when the front end is resteered, mainly when the BPU cannot provide a correct prediction and this is corrected by other branch handling mechanisms at the front end.",
+ "Counter": "0,1,2,3",
"EventCode": "0xE6",
"EventName": "BACLEARS.ANY",
"PublicDescription": "Number of front end re-steers due to BPU misprediction.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switches",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.COUNT",
"PublicDescription": "Number of DSB to MITE switches.",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Decode Stream Buffer (DSB)-to-MITE switch true penalty cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0xAB",
"EventName": "DSB2MITE_SWITCHES.PENALTY_CYCLES",
"PublicDescription": "Cycles DSB to MITE switches caused delay.",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Cycles when Decode Stream Buffer (DSB) fill encounter more than 3 Decode Stream Buffer (DSB) lines",
+ "Counter": "0,1,2,3",
"EventCode": "0xAC",
"EventName": "DSB_FILL.EXCEED_DSB_LINES",
"PublicDescription": "DSB Fill encountered > 3 DSB lines.",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.HIT",
"PublicDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Reads. both cacheable and noncacheable, including UC fetches.",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "Cycles where a code-fetch stalled due to L1 instruction-cache miss or an iTLB miss",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.IFETCH_STALL",
"PublicDescription": "Cycles where a code-fetch stalled due to L1 instruction-cache miss or an iTLB miss.",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "Instruction cache, streaming buffer and victim cache misses",
+ "Counter": "0,1,2,3",
"EventCode": "0x80",
"EventName": "ICACHE.MISSES",
"PublicDescription": "Number of Instruction Cache, Streaming Buffer and Victim Cache Misses. Includes UC accesses.",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_4_UOPS",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Cycles Decode Stream Buffer (DSB) is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_DSB_CYCLES_ANY_UOPS",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering 4 Uops",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_4_UOPS",
@@ -84,6 +94,7 @@
},
{
"BriefDescription": "Cycles MITE is delivering any Uop",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.ALL_MITE_CYCLES_ANY_UOPS",
@@ -93,6 +104,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.DSB_CYCLES",
@@ -102,6 +114,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from the Decode Stream Buffer (DSB) path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.DSB_UOPS",
"PublicDescription": "Increment each cycle. # of uops delivered to IDQ from DSB path. Set Cmask = 1 to count cycles.",
@@ -110,6 +123,7 @@
},
{
"BriefDescription": "Instruction Decode Queue (IDQ) empty cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.EMPTY",
"PublicDescription": "Counts cycles the IDQ is empty.",
@@ -118,6 +132,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_ALL_UOPS",
"PublicDescription": "Number of uops delivered to IDQ from any path.",
@@ -126,6 +141,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MITE_CYCLES",
@@ -135,6 +151,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) from MITE path",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MITE_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ from MITE path. Set Cmask = 1 to count cycles.",
@@ -143,6 +160,7 @@
},
{
"BriefDescription": "Cycles when uops are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_CYCLES",
@@ -152,6 +170,7 @@
},
{
"BriefDescription": "Cycles when uops initiated by Decode Stream Buffer (DSB) are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_CYCLES",
@@ -161,6 +180,7 @@
},
{
"BriefDescription": "Deliveries to Instruction Decode Queue (IDQ) initiated by Decode Stream Buffer (DSB) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -171,6 +191,7 @@
},
{
"BriefDescription": "Uops initiated by Decode Stream Buffer (DSB) that are being delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_DSB_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ when MS_busy by DSB. Set Cmask = 1 to count cycles. Add Edge=1 to count # of delivery.",
@@ -179,6 +200,7 @@
},
{
"BriefDescription": "Uops initiated by MITE and delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_MITE_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ when MS_busy by MITE. Set Cmask = 1 to count cycles.",
@@ -187,6 +209,7 @@
},
{
"BriefDescription": "Number of switches from DSB (Decode Stream Buffer) or MITE (legacy decode pipeline) to the Microcode Sequencer",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x79",
@@ -197,6 +220,7 @@
},
{
"BriefDescription": "Uops delivered to Instruction Decode Queue (IDQ) while Microcode Sequencer (MS) is busy",
+ "Counter": "0,1,2,3",
"EventCode": "0x79",
"EventName": "IDQ.MS_UOPS",
"PublicDescription": "Increment each cycle # of uops delivered to IDQ from MS by either DSB or MITE. Set Cmask = 1 to count cycles.",
@@ -205,6 +229,7 @@
},
{
"BriefDescription": "Uops not delivered to Resource Allocation Table (RAT) per thread when backend of the machine is not stalled",
+ "Counter": "0,1,2,3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CORE",
"PublicDescription": "Count issue pipeline slots where no uop was delivered from the front end to the back end when there is no back-end stall.",
@@ -213,6 +238,7 @@
},
{
"BriefDescription": "Cycles per thread when 4 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE",
@@ -221,6 +247,7 @@
},
{
"BriefDescription": "Counts cycles FE delivered 4 uops or Resource Allocation Table (RAT) was stalling FE.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_FE_WAS_OK",
@@ -230,6 +257,7 @@
},
{
"BriefDescription": "Cycles per thread when 3 or more uops are not delivered to Resource Allocation Table (RAT) when backend of the machine is not stalled.",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_1_UOP_DELIV.CORE",
@@ -238,6 +266,7 @@
},
{
"BriefDescription": "Cycles with less than 2 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_2_UOP_DELIV.CORE",
@@ -246,6 +275,7 @@
},
{
"BriefDescription": "Cycles with less than 3 uops delivered by the front end.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x9C",
"EventName": "IDQ_UOPS_NOT_DELIVERED.CYCLES_LE_3_UOP_DELIV.CORE",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json b/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json
index 33fe555252b2..969cb519eec1 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/ivb-metrics.json
@@ -1,49 +1,49 @@
[
{
"BriefDescription": "C2 residency percent per package",
- "MetricExpr": "cstate_pkg@c2\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c2\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C2_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per core",
- "MetricExpr": "cstate_core@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C3 residency percent per package",
- "MetricExpr": "cstate_pkg@c3\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c3\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C3_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per core",
- "MetricExpr": "cstate_core@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C6 residency percent per package",
- "MetricExpr": "cstate_pkg@c6\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c6\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C6_Pkg_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per core",
- "MetricExpr": "cstate_core@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_core@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Core_Residency",
"ScaleUnit": "100%"
},
{
"BriefDescription": "C7 residency percent per package",
- "MetricExpr": "cstate_pkg@c7\\-residency@ / TSC",
+ "MetricExpr": "cstate_pkg@c7\\-residency@ / msr@tsc@",
"MetricGroup": "Power",
"MetricName": "C7_Pkg_Residency",
"ScaleUnit": "100%"
@@ -80,17 +80,16 @@
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution ports for ALU operations.",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_0 + UOPS_DISPATCHED_PORT.PORT_1 + UOPS_DISPATCHED_PORT.PORT_5) / (3 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_alu_op_utilization",
- "MetricThreshold": "tma_alu_op_utilization > 0.6",
+ "MetricThreshold": "tma_alu_op_utilization > 0.4",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists",
- "MetricExpr": "100 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
- "MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
+ "MetricExpr": "66 * OTHER_ASSISTS.ANY_WB_ASSIST / tma_info_thread_slots",
+ "MetricGroup": "BvIO;TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_assists",
"MetricThreshold": "tma_assists > 0.1 & (tma_microcode_sequencer > 0.05 & tma_heavy_operations > 0.1)",
"PublicDescription": "This metric estimates fraction of slots the CPU retired uops delivered by the Microcode_Sequencer as a result of Assists. Assists are long sequences of uops that are required in certain corner-cases for operations that cannot be handled natively by the execution pipeline. For example; when working with very small floating point values (so-called Denormals); the FP units are not set up to perform these operations natively. Instead; a sequence of instructions to perform the computation on the Denormals is injected into the pipeline. Since these microcode sequences might be dozens of uops long; Assists can be extremely deleterious to performance and they can be avoided in many cases. Sample with: OTHER_ASSISTS.ANY",
@@ -98,9 +97,8 @@
},
{
"BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "1 - (tma_frontend_bound + tma_bad_speculation + tma_retiring)",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvOB;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_backend_bound",
"MetricThreshold": "tma_backend_bound > 0.2",
"MetricgroupNoGroup": "TopdownL1",
@@ -121,7 +119,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Branch Misprediction",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "BR_MISP_RETIRED.ALL_BRANCHES / (BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT) * tma_bad_speculation",
- "MetricGroup": "BadSpec;BrMispredicts;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
+ "MetricGroup": "BadSpec;BrMispredicts;BvMP;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueBM",
"MetricName": "tma_branch_mispredicts",
"MetricThreshold": "tma_branch_mispredicts > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -139,7 +137,6 @@
},
{
"BriefDescription": "This metric estimates fraction of cycles the CPU retired uops originated from CISC (complex instruction set computer) instruction",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "max(0, tma_microcode_sequencer - tma_assists)",
"MetricGroup": "TopdownL4;tma_L4_group;tma_microcode_sequencer_group",
"MetricName": "tma_cisc",
@@ -151,7 +148,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(60 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS))) + 43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS)))) / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_contested_accesses",
"MetricThreshold": "tma_contested_accesses > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to contested accesses. Contested accesses occur when data written by one Logical Processor are read by another Logical Processor on a different Physical Core. Examples of contested accesses include synchronizations such as locks; true data sharing such as modified locked variables; and false sharing. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS. Related metrics: tma_data_sharing, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -172,7 +169,7 @@
"BriefDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "43 * (MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS))) / tma_info_thread_clks",
- "MetricGroup": "Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
+ "MetricGroup": "BvMS;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_l3_bound_group",
"MetricName": "tma_data_sharing",
"MetricThreshold": "tma_data_sharing > 0.05 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles while the memory subsystem was handling synchronizations due to data-sharing accesses. Data shared by multiple Logical Processors (even just read shared) may cause increased access latency due to cache coherency. Excessive data sharing can drastically harm multithreaded performance. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS. Related metrics: tma_contested_accesses, tma_false_sharing, tma_machine_clears, tma_remote_cache",
@@ -181,10 +178,10 @@
{
"BriefDescription": "This metric represents fraction of cycles where the Divider unit was active",
"MetricExpr": "ARITH.FPU_DIV_ACTIVE / tma_info_core_core_clks",
- "MetricGroup": "TopdownL3;tma_L3_group;tma_core_bound_group",
+ "MetricGroup": "BvCB;TopdownL3;tma_L3_group;tma_core_bound_group",
"MetricName": "tma_divider",
"MetricThreshold": "tma_divider > 0.2 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_UOPS",
+ "PublicDescription": "This metric represents fraction of cycles where the Divider unit was active. Divide and square root instructions are performed by the Divider unit and can take considerably longer latency than integer or Floating Point addition; subtraction; or multiplication. Sample with: ARITH.DIVIDER_ACTIVE",
"ScaleUnit": "100%"
},
{
@@ -202,7 +199,7 @@
"MetricExpr": "(IDQ.ALL_DSB_CYCLES_ANY_UOPS - IDQ.ALL_DSB_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_dsb",
- "MetricThreshold": "tma_dsb > 0.15 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
"ScaleUnit": "100%"
},
@@ -218,7 +215,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses",
"MetricExpr": "(7 * DTLB_LOAD_MISSES.STLB_HIT + DTLB_LOAD_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_l1_bound_group",
"MetricName": "tma_dtlb_load",
"MetricThreshold": "tma_dtlb_load > 0.1 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles where the Data TLB (DTLB) was missed by load accesses. TLBs (Translation Look-aside Buffers) are processor caches for recently used entries out of the Page Tables that are used to map virtual- to physical-addresses by the operating system. This metric approximates the potential delay of demand loads missing the first-level data TLB (assuming worst case scenario with back to back misses to different pages). This includes hitting in the second-level TLB (STLB) as well as performing a hardware page walk on an STLB miss. Sample with: MEM_UOPS_RETIRED.STLB_MISS_LOADS_PS. Related metrics: tma_dtlb_store",
@@ -227,7 +224,7 @@
{
"BriefDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses",
"MetricExpr": "(7 * DTLB_STORE_MISSES.STLB_HIT + DTLB_STORE_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
+ "MetricGroup": "BvMT;MemoryTLB;TopdownL4;tma_L4_group;tma_issueTLB;tma_store_bound_group",
"MetricName": "tma_dtlb_store",
"MetricThreshold": "tma_dtlb_store > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates the fraction of cycles spent handling first-level data TLB store misses. As with ordinary data caching; focus on improving data locality and reducing working-set size to reduce DTLB overhead. Additionally; consider using profile-guided optimization (PGO) to collocate frequently-used data on the same page. Try using larger page sizes for large amounts of frequently-used data. Sample with: MEM_UOPS_RETIRED.STLB_MISS_STORES_PS. Related metrics: tma_dtlb_load",
@@ -236,7 +233,7 @@
{
"BriefDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing",
"MetricExpr": "60 * OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE / tma_info_thread_clks",
- "MetricGroup": "DataSharing;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
+ "MetricGroup": "BvMS;DataSharing;LockCont;Offcore;Snoop;TopdownL4;tma_L4_group;tma_issueSyncxn;tma_store_bound_group",
"MetricName": "tma_false_sharing",
"MetricThreshold": "tma_false_sharing > 0.05 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric roughly estimates how often CPU was handling synchronizations due to False Sharing. False Sharing is a multithreading hiccup; where multiple Logical Processors contend on different data-elements mapped into the same cache line. Sample with: MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS;OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HITM. Related metrics: tma_contested_accesses, tma_data_sharing, tma_machine_clears, tma_remote_cache",
@@ -246,7 +243,7 @@
"BriefDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_info_memory_load_miss_real_latency * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;TopdownL4;tma_L4_group;tma_issueBW;tma_issueSL;tma_issueSmSt;tma_l1_bound_group",
"MetricName": "tma_fb_full",
"MetricThreshold": "tma_fb_full > 0.3",
"PublicDescription": "This metric does a *rough estimation* of how often L1D Fill Buffer unavailability limited additional L1D miss memory access requests to proceed. The higher the metric value; the deeper the memory hierarchy level the misses are satisfied from (metric values >1 are valid). Often it hints on approaching bandwidth limits (to L2 cache; L3 cache or external memory). Related metrics: tma_info_system_dram_bw_use, tma_mem_bandwidth, tma_sq_full, tma_store_latency, tma_streaming_stores",
@@ -257,7 +254,7 @@
"MetricExpr": "tma_frontend_bound - tma_fetch_latency",
"MetricGroup": "FetchBW;Frontend;TmaL2;TopdownL2;tma_L2_group;tma_frontend_bound_group;tma_issueFB",
"MetricName": "tma_fetch_bandwidth",
- "MetricThreshold": "tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35",
+ "MetricThreshold": "tma_fetch_bandwidth > 0.2",
"MetricgroupNoGroup": "TopdownL2",
"PublicDescription": "This metric represents fraction of slots the CPU was stalled due to Frontend bandwidth issues. For example; inefficiencies at the instruction decoders; or restrictions for caching in the DSB (decoded uops cache) are categorized under Fetch Bandwidth. In such cases; the Frontend typically delivers suboptimal amount of uops to the Backend. Related metrics: tma_dsb_switches, tma_info_frontend_dsb_coverage, tma_info_inst_mix_iptb, tma_lcp",
"ScaleUnit": "100%"
@@ -287,7 +284,7 @@
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_scalar",
"MetricThreshold": "tma_fp_scalar > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
- "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) scalar uops fraction the CPU has retired. May overcount due to FMA double counting. Related metrics: tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -296,13 +293,31 @@
"MetricGroup": "Compute;Flops;TopdownL4;tma_L4_group;tma_fp_arith_group;tma_issue2P",
"MetricName": "tma_fp_vector",
"MetricThreshold": "tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6)",
- "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric approximates arithmetic floating-point (FP) vector uops fraction the CPU has retired aggregated across all vector widths. May overcount due to FMA double counting. Related metrics: tma_fp_scalar, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors",
+ "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE + FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE) / UOPS_EXECUTED.THREAD",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_128b",
+ "MetricThreshold": "tma_fp_vector_128b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 128-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "ScaleUnit": "100%"
+ },
+ {
+ "BriefDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors",
+ "MetricExpr": "(SIMD_FP_256.PACKED_DOUBLE + SIMD_FP_256.PACKED_SINGLE) / UOPS_EXECUTED.THREAD",
+ "MetricGroup": "Compute;Flops;TopdownL5;tma_L5_group;tma_fp_vector_group;tma_issue2P",
+ "MetricName": "tma_fp_vector_256b",
+ "MetricThreshold": "tma_fp_vector_256b > 0.1 & (tma_fp_vector > 0.1 & (tma_fp_arith > 0.2 & tma_light_operations > 0.6))",
+ "PublicDescription": "This metric approximates arithmetic FP vector uops fraction the CPU has retired for 256-bit wide vectors. May overcount due to FMA double counting prior to LNL. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
"MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / tma_info_thread_slots",
- "MetricGroup": "PGO;TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvFB;BvIO;PGO;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_frontend_bound",
"MetricThreshold": "tma_frontend_bound > 0.15",
"MetricgroupNoGroup": "TopdownL1",
@@ -316,20 +331,20 @@
"MetricName": "tma_heavy_operations",
"MetricThreshold": "tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring heavy-weight operations -- instructions that require two or more uops or micro-coded sequences. This highly-correlates with the uop length of these instructions/sequences.([ICL+] Note this may overcount due to approximation using indirect events; [ADL+])",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to instruction cache misses.",
"MetricExpr": "ICACHE.IFETCH_STALL / tma_info_thread_clks - tma_itlb_misses",
- "MetricGroup": "BigFoot;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;IcMiss;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_icache_misses",
"MetricThreshold": "tma_icache_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Instructions per retired mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
- "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * cpu@BR_MISP_EXEC.ALL_BRANCHES\\,umask\\=0xE4@)",
+ "BriefDescription": "Instructions per retired Mispredicts for indirect CALL or JMP branches (lower number means higher occurrence rate).",
+ "MetricExpr": "tma_info_inst_mix_instructions / (UOPS_RETIRED.RETIRE_SLOTS / UOPS_ISSUED.ANY * BR_MISP_EXEC.INDIRECT)",
"MetricGroup": "Bad;BrMispredicts",
"MetricName": "tma_info_bad_spec_ipmisp_indirect",
"MetricThreshold": "tma_info_bad_spec_ipmisp_indirect < 1e3"
@@ -360,8 +375,8 @@
"MetricName": "tma_info_core_flopc"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-core",
- "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
"MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
"MetricName": "tma_info_core_ilp"
},
@@ -380,6 +395,12 @@
"MetricName": "tma_info_frontend_ipunknown_branch"
},
{
+ "BriefDescription": "Taken Branches retired Per Cycle",
+ "MetricExpr": "BR_INST_RETIRED.NEAR_TAKEN / tma_info_thread_clks",
+ "MetricGroup": "Branches;FetchBW",
+ "MetricName": "tma_info_frontend_tbpc"
+ },
+ {
"BriefDescription": "Branch instructions per taken branch.",
"MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;PGO",
@@ -398,7 +419,7 @@
"MetricGroup": "Flops;InsType",
"MetricName": "tma_info_inst_mix_iparith",
"MetricThreshold": "tma_info_inst_mix_iparith < 10",
- "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
+ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). Values < 1 are possible due to intentional FMA double counting. Approximated prior to BDW."
},
{
"BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
@@ -429,105 +450,105 @@
"MetricThreshold": "tma_info_inst_mix_ipstore < 8"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Instructions per taken branch",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO;tma_issueFB",
"MetricName": "tma_info_inst_mix_iptb",
"MetricThreshold": "tma_info_inst_mix_iptb < 9",
- "PublicDescription": "Instruction per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
+ "PublicDescription": "Instructions per taken branch. Related metrics: tma_dsb_switches, tma_fetch_bandwidth, tma_info_frontend_dsb_coverage, tma_lcp"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l1d_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l1d_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l1d_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l2_cache_fill_bw",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l2_cache_fill_bw"
+ "MetricName": "tma_info_memory_core_l2_cache_fill_bw_2t"
},
{
"BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / duration_time",
+ "MetricExpr": "tma_info_memory_l3_cache_fill_bw",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_core_l3_cache_fill_bw_2t"
+ },
+ {
+ "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1e9 / tma_info_system_time",
"MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_core_l3_cache_fill_bw"
+ "MetricName": "tma_info_memory_l1d_cache_fill_bw"
},
{
"BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L1_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
+ "MetricGroup": "CacheHits;Mem",
"MetricName": "tma_info_memory_l1mpki"
},
{
+ "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l2_cache_fill_bw"
+ },
+ {
"BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
"MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.L2_MISS / INST_RETIRED.ANY",
- "MetricGroup": "Backend;CacheMisses;Mem",
+ "MetricGroup": "Backend;CacheHits;Mem",
"MetricName": "tma_info_memory_l2mpki"
},
{
- "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
- "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.LLC_MISS / INST_RETIRED.ANY",
- "MetricGroup": "CacheMisses;Mem",
- "MetricName": "tma_info_memory_l3mpki"
+ "BriefDescription": "Offcore requests (L2 cache miss) per kilo instruction for demand RFOs",
+ "MetricExpr": "1e3 * OFFCORE_REQUESTS.DEMAND_RFO / INST_RETIRED.ANY",
+ "MetricGroup": "CacheMisses;Offcore",
+ "MetricName": "tma_info_memory_l2mpki_rfo"
},
{
- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
- "MetricGroup": "Mem;MemoryBound;MemoryLat",
- "MetricName": "tma_info_memory_load_miss_real_latency"
+ "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1e9 / tma_info_system_time",
+ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "tma_info_memory_l3_cache_fill_bw"
},
{
- "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
- "MetricConstraint": "NO_GROUP_EVENTS",
- "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
- "MetricGroup": "Mem;MemoryBW;MemoryBound",
- "MetricName": "tma_info_memory_mlp",
- "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1e3 * MEM_LOAD_UOPS_RETIRED.LLC_MISS / INST_RETIRED.ANY",
+ "MetricGroup": "Mem",
+ "MetricName": "tma_info_memory_l3mpki"
},
{
"BriefDescription": "Average Parallel L2 cache miss data reads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_data_l2_mlp"
+ "MetricName": "tma_info_memory_latency_data_l2_mlp"
},
{
"BriefDescription": "Average Latency for L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS.DEMAND_DATA_RD",
- "MetricGroup": "Memory_Lat;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_miss_latency"
+ "MetricGroup": "LockCont;Memory_Lat;Offcore",
+ "MetricName": "tma_info_memory_latency_load_l2_miss_latency"
},
{
"BriefDescription": "Average Parallel L2 cache miss demand Loads",
"MetricExpr": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD / OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
"MetricGroup": "Memory_BW;Offcore",
- "MetricName": "tma_info_memory_oro_load_l2_mlp"
- },
- {
- "BriefDescription": "Average per-thread data fill bandwidth to the L1 data cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l1d_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l1d_cache_fill_bw_1t"
+ "MetricName": "tma_info_memory_latency_load_l2_mlp"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L2 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l2_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l2_cache_fill_bw_1t"
- },
- {
- "BriefDescription": "Average per-thread data access bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "0",
- "MetricGroup": "Mem;MemoryBW;Offcore",
- "MetricName": "tma_info_memory_thread_l3_cache_access_bw_1t"
+ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load operations (in core cycles)",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / (MEM_LOAD_UOPS_RETIRED.L1_MISS + MEM_LOAD_UOPS_RETIRED.HIT_LFB)",
+ "MetricGroup": "Mem;MemoryBound;MemoryLat",
+ "MetricName": "tma_info_memory_load_miss_real_latency"
},
{
- "BriefDescription": "Average per-thread data fill bandwidth to the L3 cache [GB / sec]",
- "MetricExpr": "tma_info_memory_core_l3_cache_fill_bw",
- "MetricGroup": "Mem;MemoryBW",
- "MetricName": "tma_info_memory_thread_l3_cache_fill_bw_1t"
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss",
+ "MetricConstraint": "NO_GROUP_EVENTS",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+ "MetricGroup": "Mem;MemoryBW;MemoryBound",
+ "MetricName": "tma_info_memory_mlp",
+ "PublicDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)"
},
{
"BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
@@ -537,8 +558,8 @@
"MetricThreshold": "tma_info_memory_tlb_page_walks_utilization > 0.5"
},
{
- "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is execution) per-thread",
- "MetricExpr": "UOPS_EXECUTED.THREAD / cpu@UOPS_EXECUTED.THREAD\\,cmask\\=1@",
+ "BriefDescription": "Mem;Backend;CacheHits",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC)",
"MetricGroup": "Cor;Pipeline;PortsUtil;SMT",
"MetricName": "tma_info_pipeline_execute"
},
@@ -549,30 +570,36 @@
"MetricName": "tma_info_pipeline_retire"
},
{
- "BriefDescription": "Measured Average Frequency for unhalted processors [GHz]",
- "MetricExpr": "tma_info_system_turbo_utilization * TSC / 1e9 / duration_time",
+ "BriefDescription": "Measured Average Core Frequency for unhalted processors [GHz]",
+ "MetricExpr": "tma_info_system_turbo_utilization * msr@tsc@ / 1e9 / tma_info_system_time",
"MetricGroup": "Power;Summary",
- "MetricName": "tma_info_system_average_frequency"
+ "MetricName": "tma_info_system_core_frequency"
},
{
- "BriefDescription": "Average CPU Utilization",
- "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / TSC",
+ "BriefDescription": "Average CPU Utilization (percentage)",
+ "MetricExpr": "tma_info_system_cpus_utilized / #num_cpus_online",
"MetricGroup": "HPC;Summary",
"MetricName": "tma_info_system_cpu_utilization"
},
{
+ "BriefDescription": "Average number of utilized CPUs",
+ "MetricExpr": "CPU_CLK_UNHALTED.REF_TSC / msr@tsc@",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_cpus_utilized"
+ },
+ {
"BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
- "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / duration_time / 1e3",
- "MetricGroup": "HPC;Mem;MemoryBW;SoC;tma_issueBW",
+ "MetricExpr": "64 * (UNC_ARB_TRK_REQUESTS.ALL + UNC_ARB_COH_TRK_REQUESTS.ALL) / 1e6 / tma_info_system_time / 1e3",
+ "MetricGroup": "HPC;MemOffcore;MemoryBW;SoC;tma_issueBW",
"MetricName": "tma_info_system_dram_bw_use",
"PublicDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]. Related metrics: tma_fb_full, tma_mem_bandwidth, tma_sq_full"
},
{
"BriefDescription": "Giga Floating Point Operations Per Second",
- "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_256.PACKED_SINGLE) / 1e9 / duration_time",
+ "MetricExpr": "(FP_COMP_OPS_EXE.SSE_SCALAR_SINGLE + FP_COMP_OPS_EXE.SSE_SCALAR_DOUBLE + 2 * FP_COMP_OPS_EXE.SSE_PACKED_DOUBLE + 4 * (FP_COMP_OPS_EXE.SSE_PACKED_SINGLE + SIMD_FP_256.PACKED_DOUBLE) + 8 * SIMD_FP_256.PACKED_SINGLE) / 1e9 / tma_info_system_time",
"MetricGroup": "Cor;Flops;HPC",
"MetricName": "tma_info_system_gflops",
- "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width and AMX engine."
+ "PublicDescription": "Giga Floating Point Operations Per Second. Aggregate across all supported options of: FP precisions, scalar and vector instructions, vector-width"
},
{
"BriefDescription": "Instructions per Far Branch ( Far Branches apply upon transition from application to operating system, handling interrupts, exceptions) [lower number means higher occurrence rate]",
@@ -595,17 +622,17 @@
"MetricThreshold": "tma_info_system_kernel_utilization > 0.05"
},
{
- "BriefDescription": "Average number of parallel requests to external memory",
- "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / UNC_ARB_TRK_OCCUPANCY.CYCLES_WITH_ANY_REQUEST",
- "MetricGroup": "Mem;SoC",
- "MetricName": "tma_info_system_mem_parallel_requests",
- "PublicDescription": "Average number of parallel requests to external memory. Accounts for all requests"
+ "BriefDescription": "PerfMon Event Multiplexing accuracy indicator",
+ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P / CPU_CLK_UNHALTED.THREAD",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_mux",
+ "MetricThreshold": "tma_info_system_mux > 1.1 | tma_info_system_mux < 0.9"
},
{
- "BriefDescription": "Average latency of all requests to external memory (in Uncore cycles)",
- "MetricExpr": "UNC_ARB_TRK_OCCUPANCY.ALL / UNC_ARB_TRK_REQUESTS.ALL",
- "MetricGroup": "Mem;SoC",
- "MetricName": "tma_info_system_mem_request_latency"
+ "BriefDescription": "Total package Power in Watts",
+ "MetricExpr": "power@energy\\-pkg@ * 15.6 / (tma_info_system_time * 1e6)",
+ "MetricGroup": "Power;SoC",
+ "MetricName": "tma_info_system_power"
},
{
"BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
@@ -620,6 +647,13 @@
"MetricName": "tma_info_system_socket_clks"
},
{
+ "BriefDescription": "Run duration time in seconds",
+ "MetricExpr": "duration_time",
+ "MetricGroup": "Summary",
+ "MetricName": "tma_info_system_time",
+ "MetricThreshold": "tma_info_system_time < 1"
+ },
+ {
"BriefDescription": "Average Frequency Utilization relative nominal frequency",
"MetricExpr": "tma_info_thread_clks / CPU_CLK_UNHALTED.REF_TSC",
"MetricGroup": "Power",
@@ -664,7 +698,7 @@
"MetricThreshold": "tma_info_thread_uoppi > 1.05"
},
{
- "BriefDescription": "Instruction per taken branch",
+ "BriefDescription": "Uops per taken branch",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
"MetricGroup": "Branches;Fed;FetchBW",
"MetricName": "tma_info_thread_uptb",
@@ -673,25 +707,25 @@
{
"BriefDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses",
"MetricExpr": "(12 * ITLB_MISSES.STLB_HIT + ITLB_MISSES.WALK_DURATION) / tma_info_thread_clks",
- "MetricGroup": "BigFoot;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
+ "MetricGroup": "BigFootprint;BvBC;FetchLat;MemoryTLB;TopdownL3;tma_L3_group;tma_fetch_latency_group",
"MetricName": "tma_itlb_misses",
"MetricThreshold": "tma_itlb_misses > 0.05 & (tma_fetch_latency > 0.1 & tma_frontend_bound > 0.15)",
"PublicDescription": "This metric represents fraction of cycles the CPU was stalled due to Instruction TLB (ITLB) misses. Sample with: ITLB_MISSES.WALK_COMPLETED",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache",
+ "BriefDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache",
"MetricExpr": "max((min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.STALLS_LDM_PENDING) - CYCLE_ACTIVITY.STALLS_L1D_PENDING) / tma_info_thread_clks, 0)",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_issueL1;tma_issueMC;tma_memory_bound_group",
"MetricName": "tma_l1_bound",
"MetricThreshold": "tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
- "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 data cache. The L1 data cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS;MEM_LOAD_UOPS_RETIRED.HIT_LFB_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
+ "PublicDescription": "This metric estimates how often the CPU was stalled without loads missing the L1 Data (L1D) cache. The L1D cache typically has the shortest latency. However; in certain cases like loads blocked on older stores; a load might suffer due to high latency even though it is being satisfied by the L1D. Another example is loads who miss in the TLB. These cases are characterized by execution unit stalls; while some non-completed demand load lives in the machine without having that demand load missing the L1 cache. Sample with: MEM_LOAD_UOPS_RETIRED.L1_HIT_PS. Related metrics: tma_clears_resteers, tma_machine_clears, tma_microcode_sequencer, tma_ms_switches, tma_ports_utilized_1",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads",
"MetricExpr": "(CYCLE_ACTIVITY.STALLS_L1D_PENDING - CYCLE_ACTIVITY.STALLS_L2_PENDING) / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "BvML;CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l2_bound",
"MetricThreshold": "tma_l2_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to L2 cache accesses by loads. Avoiding cache misses (i.e. L1 misses/L2 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L2_HIT_PS",
@@ -701,20 +735,20 @@
"BriefDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core",
"MetricConstraint": "NO_GROUP_EVENTS_SMT",
"MetricExpr": "MEM_LOAD_UOPS_RETIRED.LLC_HIT / (MEM_LOAD_UOPS_RETIRED.LLC_HIT + 7 * MEM_LOAD_UOPS_RETIRED.LLC_MISS) * CYCLE_ACTIVITY.STALLS_L2_PENDING / tma_info_thread_clks",
- "MetricGroup": "CacheMisses;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
+ "MetricGroup": "CacheHits;MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
"MetricName": "tma_l3_bound",
"MetricThreshold": "tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
"PublicDescription": "This metric estimates how often the CPU was stalled due to loads accesses to L3 cache or contended with a sibling Core. Avoiding cache misses (i.e. L2 misses/L3 hits) can improve the latency and increase performance. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
+ "BriefDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited)",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "29 * (MEM_LOAD_UOPS_RETIRED.LLC_HIT * (1 + MEM_LOAD_UOPS_RETIRED.HIT_LFB / (MEM_LOAD_UOPS_RETIRED.L2_HIT + MEM_LOAD_UOPS_RETIRED.LLC_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS + MEM_LOAD_UOPS_RETIRED.LLC_MISS))) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
+ "MetricGroup": "BvML;MemoryLat;TopdownL4;tma_L4_group;tma_issueLat;tma_l3_bound_group",
"MetricName": "tma_l3_hit_latency",
"MetricThreshold": "tma_l3_hit_latency > 0.1 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
+ "PublicDescription": "This metric estimates fraction of cycles with demand load accesses that hit the L3 cache under unloaded scenarios (possibly L3 latency limited). Avoiding private cache misses (i.e. L2 misses/L3 hits) will improve the latency; reduce contention with sibling physical cores and increase performance. Note the value of this node may overlap with its siblings. Sample with: MEM_LOAD_UOPS_RETIRED.L3_HIT_PS. Related metrics: tma_mem_latency",
"ScaleUnit": "100%"
},
{
@@ -733,12 +767,11 @@
"MetricName": "tma_light_operations",
"MetricThreshold": "tma_light_operations > 0.6",
"MetricgroupNoGroup": "TopdownL2",
- "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized software running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. Sample with: INST_RETIRED.PREC_DIST",
+ "PublicDescription": "This metric represents fraction of slots where the CPU was retiring light-weight operations -- instructions that require no more than one uop (micro-operation). This correlates with total number of instructions used by the program. A uops-per-instruction (see UopPI metric) ratio of 1 or less should be expected for decently optimized code running on Intel Core/Xeon products. While this often indicates efficient X86 instructions were executed; high value does not necessarily mean better performance cannot be achieved. ([ICL+] Note this may undercount due to approximation using indirect events; [ADL+] .). Sample with: INST_RETIRED.PREC_DIST",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port for Load operations",
- "MetricConstraint": "NO_GROUP_EVENTS_NMI",
"MetricExpr": "(UOPS_DISPATCHED_PORT.PORT_2 + UOPS_DISPATCHED_PORT.PORT_3 - UOPS_DISPATCHED_PORT.PORT_4) / (2 * tma_info_core_core_clks)",
"MetricGroup": "TopdownL5;tma_L5_group;tma_ports_utilized_3m_group",
"MetricName": "tma_load_op_utilization",
@@ -750,7 +783,7 @@
"BriefDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO) / tma_info_thread_clks",
- "MetricGroup": "Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
+ "MetricGroup": "LockCont;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_l1_bound_group",
"MetricName": "tma_lock_latency",
"MetricThreshold": "tma_lock_latency > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric represents fraction of cycles the CPU spent handling cache misses due to lock operations. Due to the microarchitecture handling of locks; they are classified as L1_Bound regardless of what memory source satisfied them. Sample with: MEM_UOPS_RETIRED.LOCK_LOADS_PS. Related metrics: tma_store_latency",
@@ -760,7 +793,7 @@
"BriefDescription": "This metric represents fraction of slots the CPU has wasted due to Machine Clears",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "tma_bad_speculation - tma_branch_mispredicts",
- "MetricGroup": "BadSpec;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
+ "MetricGroup": "BadSpec;BvMS;MachineClears;TmaL2;TopdownL2;tma_L2_group;tma_bad_speculation_group;tma_issueMC;tma_issueSyncxn",
"MetricName": "tma_machine_clears",
"MetricThreshold": "tma_machine_clears > 0.1 & tma_bad_speculation > 0.15",
"MetricgroupNoGroup": "TopdownL2",
@@ -768,21 +801,21 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=6@) / tma_info_thread_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueBW",
"MetricName": "tma_mem_bandwidth",
"MetricThreshold": "tma_mem_bandwidth > 0.2 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory (DRAM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
+ "PublicDescription": "This metric estimates fraction of cycles where the core's performance was likely hurt due to approaching bandwidth limits of external memory - DRAM ([SPR-HBM] and/or HBM). The underlying heuristic assumes that a similar off-core traffic is generated by all IA cores. This metric does not aggregate non-data-read requests by this logical processor; requests from other IA Logical Processors/Physical Cores/sockets; or other non-IA devices like GPU; hence the maximum external memory bandwidth limits may or may not be approached when this metric is flagged (see Uncore counters for that). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_sq_full",
"ScaleUnit": "100%"
},
{
- "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM)",
+ "BriefDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM)",
"MetricExpr": "min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD) / tma_info_thread_clks - tma_mem_bandwidth",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
+ "MetricGroup": "BvML;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_dram_bound_group;tma_issueLat",
"MetricName": "tma_mem_latency",
"MetricThreshold": "tma_mem_latency > 0.1 & (tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory (DRAM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
+ "PublicDescription": "This metric estimates fraction of cycles where the performance was likely hurt due to latency from external memory - DRAM ([SPR-HBM] and/or HBM). This metric does not aggregate requests from other Logical Processors/Physical Cores/sockets (see Uncore counters for that). Related metrics: tma_l3_hit_latency",
"ScaleUnit": "100%"
},
{
@@ -810,7 +843,7 @@
"MetricExpr": "(IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS) / tma_info_core_core_clks / 2",
"MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
"MetricName": "tma_mite",
- "MetricThreshold": "tma_mite > 0.1 & (tma_fetch_bandwidth > 0.1 & tma_frontend_bound > 0.15 & tma_info_thread_ipc / 4 > 0.35)",
+ "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
"PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline). This pipeline is used for code that was not pre-cached in the DSB or LSD. For example; inefficiencies due to asymmetric decoders; use of long immediate or LCP can manifest as MITE fetch bandwidth bottleneck.",
"ScaleUnit": "100%"
},
@@ -829,7 +862,7 @@
"MetricGroup": "Compute;TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_0",
"MetricThreshold": "tma_port_0 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED_PORT.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 0 ([SNB+] ALU; [HSW+] ALU and 2nd branch). Sample with: UOPS_DISPATCHED.PORT_0. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_1, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -838,7 +871,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_1",
"MetricThreshold": "tma_port_1 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED_PORT.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 1 (ALU). Sample with: UOPS_DISPATCHED.PORT_1. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_5, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -874,7 +907,7 @@
"MetricGroup": "TopdownL6;tma_L6_group;tma_alu_op_utilization_group;tma_issue2P",
"MetricName": "tma_port_5",
"MetricThreshold": "tma_port_5 > 0.6",
- "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2",
+ "PublicDescription": "This metric represents Core fraction of cycles CPU dispatched uops on execution port 5 ([SNB+] Branches and ALU; [HSW+] ALU). Sample with: UOPS_DISPATCHED.PORT_5. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_6, tma_ports_utilized_2",
"ScaleUnit": "100%"
},
{
@@ -889,7 +922,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed no uops on any execution port (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else (min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks)",
+ "MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,inv\\,cmask\\=1@ / 2 if #SMT_on else min(CPU_CLK_UNHALTED.THREAD, CYCLE_ACTIVITY.CYCLES_NO_EXECUTE) - (RS_EVENTS.EMPTY_CYCLES if tma_fetch_latency > 0.1 else 0)) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_0",
"MetricThreshold": "tma_ports_utilized_0 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -898,7 +931,7 @@
},
{
"BriefDescription": "This metric represents fraction of cycles where the CPU executed total of 1 uop per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=1@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC - UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issueL1;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_1",
"MetricThreshold": "tma_ports_utilized_1 > 0.2 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
@@ -907,25 +940,25 @@
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise)",
- "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else (UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks)",
+ "MetricExpr": "((cpu@UOPS_EXECUTED.CORE\\,cmask\\=2@ - cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@) / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC - UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
"MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_issue2P;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_2",
"MetricThreshold": "tma_ports_utilized_2 > 0.15 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
- "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
+ "PublicDescription": "This metric represents fraction of cycles CPU executed total of 2 uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise). Loop Vectorization -most compilers feature auto-Vectorization options today- reduces pressure on the execution ports as multiple elements are calculated with same uop. Related metrics: tma_fp_scalar, tma_fp_vector, tma_fp_vector_128b, tma_fp_vector_256b, tma_fp_vector_512b, tma_port_0, tma_port_1, tma_port_5, tma_port_6",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This metric represents fraction of cycles CPU executed total of 3 or more uops per cycle on all execution ports (Logical Processor cycles since ICL, Physical Core cycles otherwise).",
"MetricExpr": "(cpu@UOPS_EXECUTED.CORE\\,cmask\\=3@ / 2 if #SMT_on else UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC) / tma_info_core_core_clks",
- "MetricGroup": "PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
+ "MetricGroup": "BvCB;PortsUtil;TopdownL4;tma_L4_group;tma_ports_utilization_group",
"MetricName": "tma_ports_utilized_3m",
- "MetricThreshold": "tma_ports_utilized_3m > 0.7 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_ports_utilized_3m > 0.4 & (tma_ports_utilization > 0.15 & (tma_core_bound > 0.1 & tma_backend_bound > 0.2))",
"ScaleUnit": "100%"
},
{
"BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
"MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / tma_info_thread_slots",
- "MetricGroup": "TmaL1;TopdownL1;tma_L1_group",
+ "MetricGroup": "BvUW;TmaL1;TopdownL1;tma_L1_group",
"MetricName": "tma_retiring",
"MetricThreshold": "tma_retiring > 0.7 | tma_heavy_operations > 0.1",
"MetricgroupNoGroup": "TopdownL1",
@@ -938,7 +971,7 @@
"MetricExpr": "13 * LD_BLOCKS.NO_SR / tma_info_thread_clks",
"MetricGroup": "TopdownL4;tma_L4_group;tma_l1_bound_group",
"MetricName": "tma_split_loads",
- "MetricThreshold": "tma_split_loads > 0.2 & (tma_l1_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
+ "MetricThreshold": "tma_split_loads > 0.3",
"PublicDescription": "This metric estimates fraction of cycles handling memory load split accesses - load that cross 64-byte cache line boundary. Sample with: MEM_UOPS_RETIRED.SPLIT_LOADS_PS",
"ScaleUnit": "100%"
},
@@ -954,7 +987,7 @@
{
"BriefDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors)",
"MetricExpr": "(OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 if #SMT_on else OFFCORE_REQUESTS_BUFFER.SQ_FULL) / tma_info_core_core_clks",
- "MetricGroup": "MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
+ "MetricGroup": "BvMB;MemoryBW;Offcore;TopdownL4;tma_L4_group;tma_issueBW;tma_l3_bound_group",
"MetricName": "tma_sq_full",
"MetricThreshold": "tma_sq_full > 0.3 & (tma_l3_bound > 0.05 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric measures fraction of cycles where the Super Queue (SQ) was full taking into account all request-types and both hardware SMT threads (Logical Processors). Related metrics: tma_fb_full, tma_info_system_dram_bw_use, tma_mem_bandwidth",
@@ -982,7 +1015,7 @@
"BriefDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses",
"MetricConstraint": "NO_GROUP_EVENTS",
"MetricExpr": "(L2_RQSTS.RFO_HIT * 9 * (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) + (1 - MEM_UOPS_RETIRED.LOCK_LOADS / MEM_UOPS_RETIRED.ALL_STORES) * min(CPU_CLK_UNHALTED.THREAD, OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO)) / tma_info_thread_clks",
- "MetricGroup": "MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
+ "MetricGroup": "BvML;LockCont;MemoryLat;Offcore;TopdownL4;tma_L4_group;tma_issueRFO;tma_issueSL;tma_store_bound_group",
"MetricName": "tma_store_latency",
"MetricThreshold": "tma_store_latency > 0.1 & (tma_store_bound > 0.2 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2))",
"PublicDescription": "This metric estimates fraction of cycles the CPU spent handling L1D store misses. Store accesses usually less impact out-of-order core performance; however; holding resources for longer time can lead into undesired implications (e.g. contention on L1D fill-buffer entries - see FB_Full). Related metrics: tma_fb_full, tma_lock_latency",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/memory.json b/tools/perf/pmu-events/arch/x86/ivybridge/memory.json
index fd1fe491c577..40f40384d58b 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/memory.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
"SampleAfterValue": "100003",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Loads with latency value being above 128",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
"MSRIndex": "0x3F6",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Loads with latency value being above 16",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
"MSRIndex": "0x3F6",
@@ -30,6 +33,7 @@
},
{
"BriefDescription": "Loads with latency value being above 256",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
"MSRIndex": "0x3F6",
@@ -41,6 +45,7 @@
},
{
"BriefDescription": "Loads with latency value being above 32",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
"MSRIndex": "0x3F6",
@@ -52,6 +57,7 @@
},
{
"BriefDescription": "Loads with latency value being above 4",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
"MSRIndex": "0x3F6",
@@ -63,6 +69,7 @@
},
{
"BriefDescription": "Loads with latency value being above 512",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
"MSRIndex": "0x3F6",
@@ -74,6 +81,7 @@
},
{
"BriefDescription": "Loads with latency value being above 64",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
"MSRIndex": "0x3F6",
@@ -85,6 +93,7 @@
},
{
"BriefDescription": "Loads with latency value being above 8",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
"MSRIndex": "0x3F6",
@@ -96,6 +105,7 @@
},
{
"BriefDescription": "Sample stores and collect precise store operation via PEBS record. PMC3 only.",
+ "Counter": "3",
"EventCode": "0xCD",
"EventName": "MEM_TRANS_RETIRED.PRECISE_STORE",
"PEBS": "2",
@@ -104,6 +114,7 @@
},
{
"BriefDescription": "Speculative cache line split load uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.LOADS",
"PublicDescription": "Speculative cache-line split load uops dispatched to L1D.",
@@ -112,6 +123,7 @@
},
{
"BriefDescription": "Speculative cache line split STA uops dispatched to L1 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x05",
"EventName": "MISALIGN_MEM_REF.STORES",
"PublicDescription": "Speculative cache-line split Store-address uops dispatched to L1D.",
@@ -120,6 +132,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch code reads that miss the LLC and the data returned from dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_CODE_RD.LLC_MISS.DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -129,6 +142,7 @@
},
{
"BriefDescription": "Counts all demand & prefetch data reads that miss the LLC and the data returned from dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_MISS.DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -138,6 +152,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) that miss the LLC and the data returned from dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_MISS.DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -147,6 +162,7 @@
},
{
"BriefDescription": "Counts LLC replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DATA_IN_SOCKET.LLC_MISS.LOCAL_DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -156,6 +172,7 @@
},
{
"BriefDescription": "Counts demand code reads that miss the LLC and the data returned from dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_MISS.DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -165,6 +182,7 @@
},
{
"BriefDescription": "Counts demand data reads that miss the LLC and the data returned from dram",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_MISS.DRAM",
"MSRIndex": "0x1a6,0x1a7",
@@ -174,6 +192,7 @@
},
{
"BriefDescription": "Number of any page walk that had a miss in LLC.",
+ "Counter": "0,1,2,3",
"EventCode": "0xBE",
"EventName": "PAGE_WALKS.LLC_MISS",
"SampleAfterValue": "100003",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/metricgroups.json b/tools/perf/pmu-events/arch/x86/ivybridge/metricgroups.json
index f6a0258e3241..0863375bdead 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/metricgroups.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/metricgroups.json
@@ -2,9 +2,21 @@
"Backend": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Bad": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BadSpec": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
- "BigFoot": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BigFootprint": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"BrMispredicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Branches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvBC": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvCB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvFB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvIO": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvML": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMP": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMS": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvOB": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "BvUW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "CacheHits": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"CacheMisses": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Compute": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Cor": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -23,8 +35,11 @@
"InsType": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"L2Evicts": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"LSD": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "LockCont": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MachineClears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Machine_Clears": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Mem": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "MemOffcore": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBW": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryBound": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"MemoryLat": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -38,6 +53,7 @@
"Pipeline": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"PortsUtil": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Power": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
+ "Prefetches": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Ret": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"Retire": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
"SMT": "Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet",
@@ -65,6 +81,7 @@
"tma_bad_speculation_group": "Metrics contributing to tma_bad_speculation category",
"tma_branch_resteers_group": "Metrics contributing to tma_branch_resteers category",
"tma_core_bound_group": "Metrics contributing to tma_core_bound category",
+ "tma_divider_group": "Metrics contributing to tma_divider category",
"tma_dram_bound_group": "Metrics contributing to tma_dram_bound category",
"tma_dtlb_load_group": "Metrics contributing to tma_dtlb_load category",
"tma_dtlb_store_group": "Metrics contributing to tma_dtlb_store category",
@@ -90,10 +107,12 @@
"tma_issueSpSt": "Metrics related by the issue $issueSpSt",
"tma_issueSyncxn": "Metrics related by the issue $issueSyncxn",
"tma_issueTLB": "Metrics related by the issue $issueTLB",
+ "tma_itlb_misses_group": "Metrics contributing to tma_itlb_misses category",
"tma_l1_bound_group": "Metrics contributing to tma_l1_bound category",
"tma_l3_bound_group": "Metrics contributing to tma_l3_bound category",
"tma_light_operations_group": "Metrics contributing to tma_light_operations category",
"tma_load_op_utilization_group": "Metrics contributing to tma_load_op_utilization category",
+ "tma_machine_clears_group": "Metrics contributing to tma_machine_clears category",
"tma_mem_latency_group": "Metrics contributing to tma_mem_latency category",
"tma_memory_bound_group": "Metrics contributing to tma_memory_bound category",
"tma_microcode_sequencer_group": "Metrics contributing to tma_microcode_sequencer category",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/other.json b/tools/perf/pmu-events/arch/x86/ivybridge/other.json
index e80e99d064ba..2e796d533c13 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/other.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/other.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Unhalted core cycles when the thread is in ring 0",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING0",
"PublicDescription": "Unhalted core cycles when the thread is in ring 0.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of intervals between processor halts while thread is in ring 0",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5C",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Unhalted core cycles when thread is in rings 1, 2, or 3",
+ "Counter": "0,1,2,3",
"EventCode": "0x5C",
"EventName": "CPL_CYCLES.RING123",
"PublicDescription": "Unhalted core cycles when the thread is not in ring 0.",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Cycles when L1 and L2 are locked due to UC or split lock",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.SPLIT_LOCK_UC_LOCK_DURATION",
"PublicDescription": "Cycles in which the L1D and L2 are locked, due to a UC lock or split lock.",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/pipeline.json b/tools/perf/pmu-events/arch/x86/ivybridge/pipeline.json
index 30a3da9cd22b..da05eaaae22c 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/pipeline.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Divide operations executed",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x14",
@@ -11,6 +12,7 @@
},
{
"BriefDescription": "Cycles when divider is busy executing divide operations",
+ "Counter": "0,1,2,3",
"EventCode": "0x14",
"EventName": "ARITH.FPU_DIV_ACTIVE",
"PublicDescription": "Cycles that the divider is active, includes INT and FP. Set 'edge =1, cmask=1' to count the number of divides.",
@@ -19,6 +21,7 @@
},
{
"BriefDescription": "Speculative and retired branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_BRANCHES",
"PublicDescription": "Counts all near executed branches (not necessarily retired).",
@@ -27,6 +30,7 @@
},
{
"BriefDescription": "Speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_CONDITIONAL",
"PublicDescription": "Speculative and retired macro-conditional branches.",
@@ -35,6 +39,7 @@
},
{
"BriefDescription": "Speculative and retired macro-unconditional branches excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_JMP",
"PublicDescription": "Speculative and retired macro-unconditional branches excluding calls and indirects.",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_DIRECT_NEAR_CALL",
"PublicDescription": "Speculative and retired direct near calls.",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "Speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "Speculative and retired indirect branches excluding calls and returns.",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "Speculative and retired indirect return branches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.ALL_INDIRECT_NEAR_RETURN",
"SampleAfterValue": "200003",
@@ -66,6 +74,7 @@
},
{
"BriefDescription": "Not taken macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "Not taken macro-conditional branches.",
@@ -74,6 +83,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "Taken speculative and retired macro-conditional branches.",
@@ -82,6 +92,7 @@
},
{
"BriefDescription": "Taken speculative and retired macro-conditional branch instructions excluding calls and indirects",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_JUMP",
"PublicDescription": "Taken speculative and retired macro-conditional branch instructions excluding calls and indirects.",
@@ -90,6 +101,7 @@
},
{
"BriefDescription": "Taken speculative and retired direct near calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_DIRECT_NEAR_CALL",
"PublicDescription": "Taken speculative and retired direct near calls.",
@@ -98,6 +110,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "Taken speculative and retired indirect branches excluding calls and returns.",
@@ -106,6 +119,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"PublicDescription": "Taken speculative and retired indirect calls.",
@@ -114,6 +128,7 @@
},
{
"BriefDescription": "Taken speculative and retired indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x88",
"EventName": "BR_INST_EXEC.TAKEN_INDIRECT_NEAR_RETURN",
"PublicDescription": "Taken speculative and retired indirect branches with return mnemonic.",
@@ -122,6 +137,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES",
"PublicDescription": "Branch instructions at retirement.",
@@ -129,6 +145,7 @@
},
{
"BriefDescription": "All (macro) branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -137,6 +154,7 @@
},
{
"BriefDescription": "Conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -145,6 +163,7 @@
},
{
"BriefDescription": "Far branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.FAR_BRANCH",
"PublicDescription": "Number of far branches retired.",
@@ -153,6 +172,7 @@
},
{
"BriefDescription": "Direct and indirect near call instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL",
"PEBS": "1",
@@ -161,6 +181,7 @@
},
{
"BriefDescription": "Direct and indirect macro near call instructions retired (captured in ring 3).",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_CALL_R3",
"PEBS": "1",
@@ -169,6 +190,7 @@
},
{
"BriefDescription": "Return instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_RETURN",
"PEBS": "1",
@@ -177,6 +199,7 @@
},
{
"BriefDescription": "Taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -185,6 +208,7 @@
},
{
"BriefDescription": "Not taken branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC4",
"EventName": "BR_INST_RETIRED.NOT_TAKEN",
"PublicDescription": "Counts the number of not taken branch instructions retired.",
@@ -193,6 +217,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_BRANCHES",
"PublicDescription": "Counts all near executed branches (not necessarily retired).",
@@ -201,6 +226,7 @@
},
{
"BriefDescription": "Speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_CONDITIONAL",
"PublicDescription": "Speculative and retired mispredicted macro conditional branches.",
@@ -209,6 +235,7 @@
},
{
"BriefDescription": "Mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.ALL_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "Mispredicted indirect branches excluding calls and returns.",
@@ -217,6 +244,7 @@
},
{
"BriefDescription": "Speculative mispredicted indirect branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.INDIRECT",
"PublicDescription": "Counts speculatively miss-predicted indirect branches at execution time. Counts for indirect near CALL or JMP instructions (RET excluded).",
@@ -225,6 +253,7 @@
},
{
"BriefDescription": "Not taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.NONTAKEN_CONDITIONAL",
"PublicDescription": "Not taken speculative and retired mispredicted macro conditional branches.",
@@ -233,6 +262,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted macro conditional branches",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_CONDITIONAL",
"PublicDescription": "Taken speculative and retired mispredicted macro conditional branches.",
@@ -241,6 +271,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches excluding calls and returns",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_JUMP_NON_CALL_RET",
"PublicDescription": "Taken speculative and retired mispredicted indirect branches excluding calls and returns.",
@@ -249,6 +280,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect calls",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_INDIRECT_NEAR_CALL",
"PublicDescription": "Taken speculative and retired mispredicted indirect calls.",
@@ -257,6 +289,7 @@
},
{
"BriefDescription": "Taken speculative and retired mispredicted indirect branches with return mnemonic",
+ "Counter": "0,1,2,3",
"EventCode": "0x89",
"EventName": "BR_MISP_EXEC.TAKEN_RETURN_NEAR",
"PublicDescription": "Taken speculative and retired mispredicted indirect branches with return mnemonic.",
@@ -265,6 +298,7 @@
},
{
"BriefDescription": "All mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
"PublicDescription": "Mispredicted branch instructions at retirement.",
@@ -272,6 +306,7 @@
},
{
"BriefDescription": "Mispredicted macro branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.ALL_BRANCHES_PEBS",
"PEBS": "2",
@@ -280,6 +315,7 @@
},
{
"BriefDescription": "Mispredicted conditional branch instructions retired.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.CONDITIONAL",
"PEBS": "1",
@@ -288,6 +324,7 @@
},
{
"BriefDescription": "number of near branch instructions retired that were mispredicted and taken.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC5",
"EventName": "BR_MISP_RETIRED.NEAR_TAKEN",
"PEBS": "1",
@@ -296,6 +333,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "2000003",
@@ -303,6 +341,7 @@
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK",
"PublicDescription": "Increments at the frequency of XCLK (100 MHz) when not halted.",
@@ -312,6 +351,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted. (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_THREAD_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "2000003",
@@ -319,6 +359,7 @@
},
{
"BriefDescription": "Count XClk pulses when this thread is unhalted and the other thread is halted.",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE",
"SampleAfterValue": "2000003",
@@ -326,12 +367,14 @@
},
{
"BriefDescription": "Reference cycles when the core is not in halt state.",
+ "Counter": "Fixed counter 2",
"EventName": "CPU_CLK_UNHALTED.REF_TSC",
"SampleAfterValue": "2000003",
"UMask": "0x3"
},
{
"BriefDescription": "Reference cycles when the thread is unhalted (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK",
"PublicDescription": "Reference cycles when the thread is unhalted. (counts at 100 MHz rate)",
@@ -341,6 +384,7 @@
{
"AnyThread": "1",
"BriefDescription": "Reference cycles when the at least one thread on the physical core is unhalted. (counts at 100 MHz rate)",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.REF_XCLK_ANY",
"SampleAfterValue": "2000003",
@@ -348,6 +392,7 @@
},
{
"BriefDescription": "Core cycles when the thread is not in halt state.",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD",
"SampleAfterValue": "2000003",
"UMask": "0x2"
@@ -355,6 +400,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state",
+ "Counter": "Fixed counter 1",
"EventName": "CPU_CLK_UNHALTED.THREAD_ANY",
"PublicDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
"SampleAfterValue": "2000003",
@@ -362,6 +408,7 @@
},
{
"BriefDescription": "Thread cycles when thread is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P",
"PublicDescription": "Counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling.",
@@ -370,6 +417,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles when at least one thread on the physical core is not in halt state",
+ "Counter": "0,1,2,3",
"EventCode": "0x3C",
"EventName": "CPU_CLK_UNHALTED.THREAD_P_ANY",
"PublicDescription": "Core cycles when at least one thread on the physical core is not in halt state.",
@@ -377,6 +425,7 @@
},
{
"BriefDescription": "Cycles while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_MISS",
@@ -385,6 +434,7 @@
},
{
"BriefDescription": "Cycles with pending L1 cache miss loads.",
+ "Counter": "2",
"CounterMask": "8",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L1D_PENDING",
@@ -394,6 +444,7 @@
},
{
"BriefDescription": "Cycles while L2 cache miss load* is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_MISS",
@@ -402,6 +453,7 @@
},
{
"BriefDescription": "Cycles with pending L2 cache miss loads.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_L2_PENDING",
@@ -411,6 +463,7 @@
},
{
"BriefDescription": "Cycles with pending memory loads.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_LDM_PENDING",
@@ -420,6 +473,7 @@
},
{
"BriefDescription": "Cycles while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_MEM_ANY",
@@ -428,6 +482,7 @@
},
{
"BriefDescription": "This event increments by 1 for every cycle where there was no execute for this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.CYCLES_NO_EXECUTE",
@@ -437,6 +492,7 @@
},
{
"BriefDescription": "Execution stalls while L1 cache miss demand load is outstanding.",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_MISS",
@@ -445,6 +501,7 @@
},
{
"BriefDescription": "Execution stalls due to L1 data cache misses",
+ "Counter": "2",
"CounterMask": "12",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L1D_PENDING",
@@ -454,6 +511,7 @@
},
{
"BriefDescription": "Execution stalls while L2 cache miss load* is outstanding.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_MISS",
@@ -462,6 +520,7 @@
},
{
"BriefDescription": "Execution stalls due to L2 cache misses.",
+ "Counter": "0,1,2,3",
"CounterMask": "5",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_L2_PENDING",
@@ -471,6 +530,7 @@
},
{
"BriefDescription": "Execution stalls due to memory subsystem.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_LDM_PENDING",
@@ -479,6 +539,7 @@
},
{
"BriefDescription": "Execution stalls while memory subsystem has an outstanding load.",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_MEM_ANY",
@@ -487,6 +548,7 @@
},
{
"BriefDescription": "Total execution stalls.",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA3",
"EventName": "CYCLE_ACTIVITY.STALLS_TOTAL",
@@ -495,6 +557,7 @@
},
{
"BriefDescription": "Stall cycles because IQ is full",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.IQ_FULL",
"PublicDescription": "Stall cycles due to IQ is full.",
@@ -503,6 +566,7 @@
},
{
"BriefDescription": "Stalls caused by changing prefix length of the instruction.",
+ "Counter": "0,1,2,3",
"EventCode": "0x87",
"EventName": "ILD_STALL.LCP",
"SampleAfterValue": "2000003",
@@ -510,12 +574,14 @@
},
{
"BriefDescription": "Instructions retired from execution.",
+ "Counter": "Fixed counter 0",
"EventName": "INST_RETIRED.ANY",
"SampleAfterValue": "2000003",
"UMask": "0x1"
},
{
"BriefDescription": "Number of instructions retired. General Counter - architectural event",
+ "Counter": "0,1,2,3",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.ANY_P",
"PublicDescription": "Number of instructions at retirement.",
@@ -523,6 +589,7 @@
},
{
"BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
"EventCode": "0xC0",
"EventName": "INST_RETIRED.PREC_DIST",
"PEBS": "2",
@@ -532,6 +599,7 @@
},
{
"BriefDescription": "Number of cycles waiting for the checkpoints in Resource Allocation Table (RAT) to be recovered after Nuke due to all other cases except JEClear (e.g. whenever a ucode assist is needed like SSE exception, memory disambiguation, etc.)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES",
@@ -541,6 +609,7 @@
{
"AnyThread": "1",
"BriefDescription": "Core cycles the allocator was stalled due to recovery from earlier clear event for any thread running on the physical core (e.g. misprediction or memory nuke).",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0D",
"EventName": "INT_MISC.RECOVERY_CYCLES_ANY",
@@ -549,6 +618,7 @@
},
{
"BriefDescription": "Number of occurrences waiting for the checkpoints in Resource Allocation Table (RAT) to be recovered after Nuke due to all other cases except JEClear (e.g. whenever a ucode assist is needed like SSE exception, memory disambiguation, etc.)",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x0D",
@@ -558,6 +628,7 @@
},
{
"BriefDescription": "This event counts the number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.NO_SR",
"PublicDescription": "The number of times that split load operations are temporarily blocked because all resources for handling the split accesses are in use.",
@@ -566,6 +637,7 @@
},
{
"BriefDescription": "Cases when loads get true Block-on-Store blocking code preventing store forwarding",
+ "Counter": "0,1,2,3",
"EventCode": "0x03",
"EventName": "LD_BLOCKS.STORE_FORWARD",
"PublicDescription": "Loads blocked by overlapping with store buffer that cannot be forwarded.",
@@ -574,6 +646,7 @@
},
{
"BriefDescription": "False dependencies in MOB due to partial compare on address",
+ "Counter": "0,1,2,3",
"EventCode": "0x07",
"EventName": "LD_BLOCKS_PARTIAL.ADDRESS_ALIAS",
"PublicDescription": "False dependencies in MOB due to partial compare on address.",
@@ -582,6 +655,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for hardware prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "LOAD_HIT_PRE.HW_PF",
"PublicDescription": "Non-SW-prefetch load dispatches that hit fill buffer allocated for H/W prefetch.",
@@ -590,6 +664,7 @@
},
{
"BriefDescription": "Not software-prefetch load dispatches that hit FB allocated for software prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0x4C",
"EventName": "LOAD_HIT_PRE.SW_PF",
"PublicDescription": "Non-SW-prefetch load dispatches that hit fill buffer allocated for S/W prefetch.",
@@ -598,6 +673,7 @@
},
{
"BriefDescription": "Cycles 4 Uops delivered by the LSD, but didn't come from the decoder",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_4_UOPS",
@@ -607,6 +683,7 @@
},
{
"BriefDescription": "Cycles Uops delivered by the LSD, but didn't come from the decoder",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xA8",
"EventName": "LSD.CYCLES_ACTIVE",
@@ -616,6 +693,7 @@
},
{
"BriefDescription": "Number of Uops delivered by the LSD.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA8",
"EventName": "LSD.UOPS",
"SampleAfterValue": "2000003",
@@ -623,6 +701,7 @@
},
{
"BriefDescription": "Number of machine clears (nukes) of any type.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0xC3",
@@ -632,6 +711,7 @@
},
{
"BriefDescription": "This event counts the number of executed Intel AVX masked load operations that refer to an illegal address range with the mask bits set to 0.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MASKMOV",
"PublicDescription": "Counts the number of executed AVX masked load operations that refer to an illegal address range with the mask bits set to 0.",
@@ -640,6 +720,7 @@
},
{
"BriefDescription": "Self-modifying code (SMC) detected.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.SMC",
"PublicDescription": "Number of self-modifying-code machine clears detected.",
@@ -648,6 +729,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -655,6 +737,7 @@
},
{
"BriefDescription": "Number of integer Move Elimination candidate uops that were not eliminated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x58",
"EventName": "MOVE_ELIMINATION.INT_NOT_ELIMINATED",
"SampleAfterValue": "1000003",
@@ -662,6 +745,7 @@
},
{
"BriefDescription": "Number of times any microcode assist is invoked by HW upon uop writeback.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC1",
"EventName": "OTHER_ASSISTS.ANY_WB_ASSIST",
"SampleAfterValue": "100003",
@@ -669,6 +753,7 @@
},
{
"BriefDescription": "Resource-related stall cycles",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ANY",
"PublicDescription": "Cycles Allocation is stalled due to Resource Related reason.",
@@ -677,6 +762,7 @@
},
{
"BriefDescription": "Cycles stalled due to re-order buffer full.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.ROB",
"SampleAfterValue": "2000003",
@@ -684,6 +770,7 @@
},
{
"BriefDescription": "Cycles stalled due to no eligible RS entry available.",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.RS",
"SampleAfterValue": "2000003",
@@ -691,6 +778,7 @@
},
{
"BriefDescription": "Cycles stalled due to no store buffers available. (not including draining form sync).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA2",
"EventName": "RESOURCE_STALLS.SB",
"PublicDescription": "Cycles stalled due to no store buffers available (not including draining form sync).",
@@ -699,6 +787,7 @@
},
{
"BriefDescription": "Count cases of saving new LBR",
+ "Counter": "0,1,2,3",
"EventCode": "0xCC",
"EventName": "ROB_MISC_EVENTS.LBR_INSERTS",
"PublicDescription": "Count cases of saving new LBR records by hardware.",
@@ -707,6 +796,7 @@
},
{
"BriefDescription": "Cycles when Reservation Station (RS) is empty for the thread",
+ "Counter": "0,1,2,3",
"EventCode": "0x5E",
"EventName": "RS_EVENTS.EMPTY_CYCLES",
"PublicDescription": "Cycles the RS is empty for the thread.",
@@ -715,6 +805,7 @@
},
{
"BriefDescription": "Counts end of periods where the Reservation Station (RS) was empty. Could be useful to precisely locate Frontend Latency Bound issues.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EdgeDetect": "1",
"EventCode": "0x5E",
@@ -725,6 +816,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are dispatched to port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0",
"PublicDescription": "Cycles which a Uop is dispatched on port 0.",
@@ -734,6 +826,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 0",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_0_CORE",
"PublicDescription": "Cycles per core when uops are dispatched to port 0.",
@@ -742,6 +835,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are dispatched to port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1",
"PublicDescription": "Cycles which a Uop is dispatched on port 1.",
@@ -751,6 +845,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 1",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_1_CORE",
"PublicDescription": "Cycles per core when uops are dispatched to port 1.",
@@ -759,6 +854,7 @@
},
{
"BriefDescription": "Cycles per thread when load or STA uops are dispatched to port 2",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2",
"PublicDescription": "Cycles which a Uop is dispatched on port 2.",
@@ -768,6 +864,7 @@
{
"AnyThread": "1",
"BriefDescription": "Uops dispatched to port 2, loads and stores per core (speculative and retired).",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_2_CORE",
"SampleAfterValue": "2000003",
@@ -775,6 +872,7 @@
},
{
"BriefDescription": "Cycles per thread when load or STA uops are dispatched to port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3",
"PublicDescription": "Cycles which a Uop is dispatched on port 3.",
@@ -784,6 +882,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when load or STA uops are dispatched to port 3",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_3_CORE",
"PublicDescription": "Cycles per core when load or STA uops are dispatched to port 3.",
@@ -792,6 +891,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are dispatched to port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4",
"PublicDescription": "Cycles which a Uop is dispatched on port 4.",
@@ -801,6 +901,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 4",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_4_CORE",
"PublicDescription": "Cycles per core when uops are dispatched to port 4.",
@@ -809,6 +910,7 @@
},
{
"BriefDescription": "Cycles per thread when uops are dispatched to port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5",
"PublicDescription": "Cycles which a Uop is dispatched on port 5.",
@@ -818,6 +920,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles per core when uops are dispatched to port 5",
+ "Counter": "0,1,2,3",
"EventCode": "0xA1",
"EventName": "UOPS_DISPATCHED_PORT.PORT_5_CORE",
"PublicDescription": "Cycles per core when uops are dispatched to port 5.",
@@ -826,6 +929,7 @@
},
{
"BriefDescription": "Number of uops executed on the core.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE",
"PublicDescription": "Counts total number of uops to be executed per-core each cycle.",
@@ -834,6 +938,7 @@
},
{
"BriefDescription": "Cycles at least 1 micro-op is executed from any thread on physical core",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_1",
@@ -843,6 +948,7 @@
},
{
"BriefDescription": "Cycles at least 2 micro-op is executed from any thread on physical core",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_2",
@@ -852,6 +958,7 @@
},
{
"BriefDescription": "Cycles at least 3 micro-op is executed from any thread on physical core",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_3",
@@ -861,6 +968,7 @@
},
{
"BriefDescription": "Cycles at least 4 micro-op is executed from any thread on physical core",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_GE_4",
@@ -870,6 +978,7 @@
},
{
"BriefDescription": "Cycles with no micro-ops executed from any thread on physical core",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CORE_CYCLES_NONE",
"Invert": "1",
@@ -879,6 +988,7 @@
},
{
"BriefDescription": "Cycles where at least 1 uop was executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_1_UOP_EXEC",
@@ -888,6 +998,7 @@
},
{
"BriefDescription": "Cycles where at least 2 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "2",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_2_UOPS_EXEC",
@@ -897,6 +1008,7 @@
},
{
"BriefDescription": "Cycles where at least 3 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_3_UOPS_EXEC",
@@ -906,6 +1018,7 @@
},
{
"BriefDescription": "Cycles where at least 4 uops were executed per-thread",
+ "Counter": "0,1,2,3",
"CounterMask": "4",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.CYCLES_GE_4_UOPS_EXEC",
@@ -915,6 +1028,7 @@
},
{
"BriefDescription": "Counts number of cycles no uops were dispatched to be executed on this thread.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.STALL_CYCLES",
@@ -924,6 +1038,7 @@
},
{
"BriefDescription": "Counts the number of uops to be executed per-thread each cycle.",
+ "Counter": "0,1,2,3",
"EventCode": "0xB1",
"EventName": "UOPS_EXECUTED.THREAD",
"PublicDescription": "Counts total number of uops to be executed per-thread each cycle. Set Cmask = 1, INV =1 to count stall cycles.",
@@ -932,6 +1047,7 @@
},
{
"BriefDescription": "Uops that Resource Allocation Table (RAT) issues to Reservation Station (RS)",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.ANY",
"PublicDescription": "Increments each cycle the # of Uops issued by the RAT to RS. Set Cmask = 1, Inv = 1, Any= 1to count stalled cycles of this core.",
@@ -941,6 +1057,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for all threads",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.CORE_STALL_CYCLES",
@@ -951,6 +1068,7 @@
},
{
"BriefDescription": "Number of flags-merge uops being allocated.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.FLAGS_MERGE",
"PublicDescription": "Number of flags-merge uops allocated. Such uops adds delay.",
@@ -959,6 +1077,7 @@
},
{
"BriefDescription": "Number of Multiply packed/scalar single precision uops allocated",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SINGLE_MUL",
"PublicDescription": "Number of multiply packed/scalar single precision uops allocated.",
@@ -967,6 +1086,7 @@
},
{
"BriefDescription": "Number of slow LEA uops being allocated. A uop is generally considered SlowLea if it has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
+ "Counter": "0,1,2,3",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.SLOW_LEA",
"PublicDescription": "Number of slow LEA or similar uops allocated. Such uop has 3 sources (e.g. 2 sources + immediate) regardless if as a result of LEA instruction or not.",
@@ -975,6 +1095,7 @@
},
{
"BriefDescription": "Cycles when Resource Allocation Table (RAT) does not issue Uops to Reservation Station (RS) for the thread",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x0E",
"EventName": "UOPS_ISSUED.STALL_CYCLES",
@@ -985,6 +1106,7 @@
},
{
"BriefDescription": "Retired uops.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.ALL",
"PEBS": "1",
@@ -994,6 +1116,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.CORE_STALL_CYCLES",
@@ -1003,6 +1126,7 @@
},
{
"BriefDescription": "Retirement slots used.",
+ "Counter": "0,1,2,3",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.RETIRE_SLOTS",
"PEBS": "1",
@@ -1011,6 +1135,7 @@
},
{
"BriefDescription": "Cycles without actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.STALL_CYCLES",
@@ -1020,6 +1145,7 @@
},
{
"BriefDescription": "Cycles with less than 10 actually retired uops.",
+ "Counter": "0,1,2,3",
"CounterMask": "10",
"EventCode": "0xC2",
"EventName": "UOPS_RETIRED.TOTAL_CYCLES",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/uncore-cache.json b/tools/perf/pmu-events/arch/x86/ivybridge/uncore-cache.json
index be9a3ed1a940..8379dae91be4 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/uncore-cache.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/uncore-cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L3 Lookup any request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_ES",
"PerPkg": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_I",
"PerPkg": "1",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_M",
"PerPkg": "1",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "L3 Lookup any request that access cache and found line in MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.ANY_MESI",
"PerPkg": "1",
@@ -33,6 +37,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_ES",
"PerPkg": "1",
@@ -41,6 +46,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_I",
"PerPkg": "1",
@@ -49,6 +55,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_M",
"PerPkg": "1",
@@ -57,6 +64,7 @@
},
{
"BriefDescription": "L3 Lookup external snoop request that access cache and found line in MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.EXTSNP_MESI",
"PerPkg": "1",
@@ -65,6 +73,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_ES",
"PerPkg": "1",
@@ -73,6 +82,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_I",
"PerPkg": "1",
@@ -81,6 +91,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_M",
"PerPkg": "1",
@@ -89,6 +100,7 @@
},
{
"BriefDescription": "L3 Lookup read request that access cache and found line in any MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.READ_MESI",
"PerPkg": "1",
@@ -97,6 +109,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in E or S-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_ES",
"PerPkg": "1",
@@ -105,6 +118,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in I-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_I",
"PerPkg": "1",
@@ -113,6 +127,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in M-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_M",
"PerPkg": "1",
@@ -121,6 +136,7 @@
},
{
"BriefDescription": "L3 Lookup write request that access cache and found line in MESI-state.",
+ "Counter": "0,1",
"EventCode": "0x34",
"EventName": "UNC_CBO_CACHE_LOOKUP.WRITE_MESI",
"PerPkg": "1",
@@ -129,6 +145,7 @@
},
{
"BriefDescription": "A cross-core snoop resulted from L3 Eviction which hits a modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HITM_EVICTION",
"PerPkg": "1",
@@ -137,6 +154,7 @@
},
{
"BriefDescription": "An external snoop hits a modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HITM_EXTERNAL",
"PerPkg": "1",
@@ -145,6 +163,7 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which hits a modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HITM_XCORE",
"PerPkg": "1",
@@ -153,6 +172,7 @@
},
{
"BriefDescription": "A cross-core snoop resulted from L3 Eviction which hits a non-modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HIT_EVICTION",
"PerPkg": "1",
@@ -161,6 +181,7 @@
},
{
"BriefDescription": "An external snoop hits a non-modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HIT_EXTERNAL",
"PerPkg": "1",
@@ -169,6 +190,7 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which hits a non-modified line in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.HIT_XCORE",
"PerPkg": "1",
@@ -177,6 +199,7 @@
},
{
"BriefDescription": "A cross-core snoop resulted from L3 Eviction which misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_EVICTION",
"PerPkg": "1",
@@ -185,6 +208,7 @@
},
{
"BriefDescription": "An external snoop misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_EXTERNAL",
"PerPkg": "1",
@@ -193,6 +217,7 @@
},
{
"BriefDescription": "A cross-core snoop initiated by this Cbox due to processor core memory request which misses in some processor core.",
+ "Counter": "0,1",
"EventCode": "0x22",
"EventName": "UNC_CBO_XSNP_RESPONSE.MISS_XCORE",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/ivybridge/uncore-interconnect.json
index c3252c094a9c..ba340e858ed4 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/uncore-interconnect.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Cycles weighted by number of requests pending in Coherency Tracker.",
+ "Counter": "0",
"EventCode": "0x83",
"EventName": "UNC_ARB_COH_TRK_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Number of requests allocated in Coherency Tracker.",
+ "Counter": "0,1",
"EventCode": "0x84",
"EventName": "UNC_ARB_COH_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -17,6 +19,7 @@
},
{
"BriefDescription": "Counts cycles weighted by the number of requests waiting for data returning from the memory controller. Accounts for coherent and non-coherent requests initiated by IA cores, processor graphic units, or LLC.",
+ "Counter": "0",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.ALL",
"PerPkg": "1",
@@ -25,6 +28,7 @@
},
{
"BriefDescription": "Cycles with at least half of the requests outstanding are waiting for data return from memory controller. Account for coherent and non-coherent requests initiated by IA Cores, Processor Graphics Unit, or LLC.",
+ "Counter": "0,1",
"CounterMask": "10",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.CYCLES_OVER_HALF_FULL",
@@ -34,6 +38,7 @@
},
{
"BriefDescription": "Cycles with at least one request outstanding is waiting for data return from memory controller. Account for coherent and non-coherent requests initiated by IA Cores, Processor Graphics Unit, or LLC.",
+ "Counter": "0,1",
"CounterMask": "1",
"EventCode": "0x80",
"EventName": "UNC_ARB_TRK_OCCUPANCY.CYCLES_WITH_ANY_REQUEST",
@@ -43,6 +48,7 @@
},
{
"BriefDescription": "Counts the number of coherent and in-coherent requests initiated by IA cores, processor graphic units, or LLC.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.ALL",
"PerPkg": "1",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "Counts the number of LLC evictions allocated.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.EVICTIONS",
"PerPkg": "1",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "Counts the number of allocated write entries, include full, partial, and LLC evictions.",
+ "Counter": "0,1",
"EventCode": "0x81",
"EventName": "UNC_ARB_TRK_REQUESTS.WRITES",
"PerPkg": "1",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "This 48-bit fixed counter counts the UCLK cycles.",
+ "Counter": "Fixed",
"EventCode": "0xff",
"EventName": "UNC_CLOCK.SOCKET",
"PerPkg": "1",
diff --git a/tools/perf/pmu-events/arch/x86/ivybridge/virtual-memory.json b/tools/perf/pmu-events/arch/x86/ivybridge/virtual-memory.json
index b97f15cb20fc..8c6128eff958 100644
--- a/tools/perf/pmu-events/arch/x86/ivybridge/virtual-memory.json
+++ b/tools/perf/pmu-events/arch/x86/ivybridge/virtual-memory.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "Page walk for a large page completed for Demand load.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.LARGE_PAGE_WALK_COMPLETED",
"SampleAfterValue": "100003",
@@ -8,6 +9,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes an page walk of any page size.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Misses in all TLB levels that cause a page walk of any page size from demand loads.",
@@ -16,6 +18,7 @@
},
{
"BriefDescription": "Load operations that miss the first DTLB level but hit the second and do not cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x5F",
"EventName": "DTLB_LOAD_MISSES.STLB_HIT",
"PublicDescription": "Counts load operations that missed 1st level DTLB but hit the 2nd level.",
@@ -24,6 +27,7 @@
},
{
"BriefDescription": "Demand load Miss in all translation lookaside buffer (TLB) levels causes a page walk that completes of any page size.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
"PublicDescription": "Misses in all TLB levels that caused page walk completed of any size by demand loads.",
@@ -32,6 +36,7 @@
},
{
"BriefDescription": "Demand load cycles page miss handler (PMH) is busy with this walk.",
+ "Counter": "0,1,2,3",
"EventCode": "0x08",
"EventName": "DTLB_LOAD_MISSES.WALK_DURATION",
"PublicDescription": "Cycle PMH is busy with a walk due to demand loads.",
@@ -40,6 +45,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Miss in all TLB levels causes a page walk of any page size (4K/2M/4M/1G).",
@@ -48,6 +54,7 @@
},
{
"BriefDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.STLB_HIT",
"PublicDescription": "Store operations that miss the first TLB level but hit the second and do not cause page walks.",
@@ -56,6 +63,7 @@
},
{
"BriefDescription": "Store misses in all DTLB levels that cause completed page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
"PublicDescription": "Miss in all TLB levels causes a page walk that completes of any page size (4K/2M/4M/1G).",
@@ -64,6 +72,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x49",
"EventName": "DTLB_STORE_MISSES.WALK_DURATION",
"PublicDescription": "Cycles PMH is busy with this walk.",
@@ -72,6 +81,7 @@
},
{
"BriefDescription": "Cycle count for an Extended Page table walk. The Extended Page Directory cache is used by Virtual Machine operating systems while the guest operating systems use the standard TLB caches.",
+ "Counter": "0,1,2,3",
"EventCode": "0x4F",
"EventName": "EPT.WALK_CYCLES",
"SampleAfterValue": "2000003",
@@ -79,6 +89,7 @@
},
{
"BriefDescription": "Flushing of the Instruction TLB (ITLB) pages, includes 4k/2M/4M pages.",
+ "Counter": "0,1,2,3",
"EventCode": "0xAE",
"EventName": "ITLB.ITLB_FLUSH",
"PublicDescription": "Counts the number of ITLB flushes, includes 4k/2M/4M pages.",
@@ -87,6 +98,7 @@
},
{
"BriefDescription": "Completed page walks in ITLB due to STLB load misses for large pages",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.LARGE_PAGE_WALK_COMPLETED",
"PublicDescription": "Completed page walks in ITLB due to STLB load misses for large pages.",
@@ -95,6 +107,7 @@
},
{
"BriefDescription": "Misses at all ITLB levels that cause page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.MISS_CAUSES_A_WALK",
"PublicDescription": "Misses in all ITLB levels that cause page walks.",
@@ -103,6 +116,7 @@
},
{
"BriefDescription": "Operations that miss the first ITLB level but hit the second and do not cause any page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.STLB_HIT",
"PublicDescription": "Number of cache load STLB hits. No page walk.",
@@ -111,6 +125,7 @@
},
{
"BriefDescription": "Misses in all ITLB levels that cause completed page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_COMPLETED",
"PublicDescription": "Misses in all ITLB levels that cause completed page walks.",
@@ -119,6 +134,7 @@
},
{
"BriefDescription": "Cycles when PMH is busy with page walks",
+ "Counter": "0,1,2,3",
"EventCode": "0x85",
"EventName": "ITLB_MISSES.WALK_DURATION",
"PublicDescription": "Cycle PMH is busy with a walk.",
@@ -127,6 +143,7 @@
},
{
"BriefDescription": "DTLB flush attempts of the thread-specific entries",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.DTLB_THREAD",
"PublicDescription": "DTLB flush attempts of the thread-specific entries.",
@@ -135,6 +152,7 @@
},
{
"BriefDescription": "STLB flush attempts",
+ "Counter": "0,1,2,3",
"EventCode": "0xBD",
"EventName": "TLB_FLUSH.STLB_ANY",
"PublicDescription": "Count number of STLB flush attempts.",
diff --git a/tools/perf/pmu-events/arch/x86/ivytown/cache.json b/tools/perf/pmu-events/arch/x86/ivytown/cache.json
index 0e8e77253978..4b2128f1a765 100644
--- a/tools/perf/pmu-events/arch/x86/ivytown/cache.json
+++ b/tools/perf/pmu-events/arch/x86/ivytown/cache.json
@@ -1,6 +1,7 @@
[
{
"BriefDescription": "L1D data line replacements",
+ "Counter": "0,1,2,3",
"EventCode": "0x51",
"EventName": "L1D.REPLACEMENT",
"PublicDescription": "Counts the number of lines brought into the L1 data cache.",
@@ -9,6 +10,7 @@
},
{
"BriefDescription": "Cycles a demand request was blocked due to Fill Buffers unavailability",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.FB_FULL",
@@ -18,6 +20,7 @@
},
{
"BriefDescription": "L1D miss outstanding duration in cycles",
+ "Counter": "2",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING",
"PublicDescription": "Increments the number of outstanding L1D misses every cycle. Set Cmask = 1 and Edge =1 to count occurrences.",
@@ -26,6 +29,7 @@
},
{
"BriefDescription": "Cycles with L1D load Misses outstanding.",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES",
@@ -35,6 +39,7 @@
{
"AnyThread": "1",
"BriefDescription": "Cycles with L1D load Misses outstanding from any thread on physical core",
+ "Counter": "2",
"CounterMask": "1",
"EventCode": "0x48",
"EventName": "L1D_PEND_MISS.PENDING_CYCLES_ANY",
@@ -44,6 +49,7 @@
},
{
"BriefDescription": "Not rejected writebacks from L1D to L2 cache lines in any state.",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.ALL",
"SampleAfterValue": "200003",
@@ -51,6 +57,7 @@
},
{
"BriefDescription": "Not rejected writebacks from L1D to L2 cache lines in E state",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.HIT_E",
"PublicDescription": "Not rejected writebacks from L1D to L2 cache lines in E state.",
@@ -59,6 +66,7 @@
},
{
"BriefDescription": "Not rejected writebacks from L1D to L2 cache lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.HIT_M",
"PublicDescription": "Not rejected writebacks from L1D to L2 cache lines in M state.",
@@ -67,6 +75,7 @@
},
{
"BriefDescription": "Count the number of modified Lines evicted from L1 and missed L2. (Non-rejected WBs from the DCU.)",
+ "Counter": "0,1,2,3",
"EventCode": "0x28",
"EventName": "L2_L1D_WB_RQSTS.MISS",
"PublicDescription": "Not rejected writebacks that missed LLC.",
@@ -75,6 +84,7 @@
},
{
"BriefDescription": "L2 cache lines filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.ALL",
"PublicDescription": "L2 cache lines filling L2.",
@@ -83,6 +93,7 @@
},
{
"BriefDescription": "L2 cache lines in E state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.E",
"PublicDescription": "L2 cache lines in E state filling L2.",
@@ -91,6 +102,7 @@
},
{
"BriefDescription": "L2 cache lines in I state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.I",
"PublicDescription": "L2 cache lines in I state filling L2.",
@@ -99,6 +111,7 @@
},
{
"BriefDescription": "L2 cache lines in S state filling L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF1",
"EventName": "L2_LINES_IN.S",
"PublicDescription": "L2 cache lines in S state filling L2.",
@@ -107,6 +120,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_CLEAN",
"PublicDescription": "Clean L2 cache lines evicted by demand.",
@@ -115,6 +129,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines evicted by demand",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DEMAND_DIRTY",
"PublicDescription": "Dirty L2 cache lines evicted by demand.",
@@ -123,6 +138,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines filling the L2",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.DIRTY_ALL",
"PublicDescription": "Dirty L2 cache lines filling the L2.",
@@ -131,6 +147,7 @@
},
{
"BriefDescription": "Clean L2 cache lines evicted by L2 prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.PF_CLEAN",
"PublicDescription": "Clean L2 cache lines evicted by the MLC prefetcher.",
@@ -139,6 +156,7 @@
},
{
"BriefDescription": "Dirty L2 cache lines evicted by L2 prefetch",
+ "Counter": "0,1,2,3",
"EventCode": "0xF2",
"EventName": "L2_LINES_OUT.PF_DIRTY",
"PublicDescription": "Dirty L2 cache lines evicted by the MLC prefetcher.",
@@ -147,6 +165,7 @@
},
{
"BriefDescription": "L2 code requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_CODE_RD",
"PublicDescription": "Counts all L2 code requests.",
@@ -155,6 +174,7 @@
},
{
"BriefDescription": "Demand Data Read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_DEMAND_DATA_RD",
"PublicDescription": "Counts any demand and L1 HW prefetch data load requests to L2.",
@@ -163,6 +183,7 @@
},
{
"BriefDescription": "Requests from L2 hardware prefetchers",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_PF",
"PublicDescription": "Counts all L2 HW prefetcher requests.",
@@ -171,6 +192,7 @@
},
{
"BriefDescription": "RFO requests to L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.ALL_RFO",
"PublicDescription": "Counts all L2 store RFO requests.",
@@ -179,6 +201,7 @@
},
{
"BriefDescription": "L2 cache hits when fetching instructions, code reads.",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_HIT",
"PublicDescription": "Number of instruction fetches that hit the L2 cache.",
@@ -187,6 +210,7 @@
},
{
"BriefDescription": "L2 cache misses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.CODE_RD_MISS",
"PublicDescription": "Number of instruction fetches that missed the L2 cache.",
@@ -195,6 +219,7 @@
},
{
"BriefDescription": "Demand Data Read requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.DEMAND_DATA_RD_HIT",
"PublicDescription": "Demand Data Read requests that hit L2 cache.",
@@ -203,6 +228,7 @@
},
{
"BriefDescription": "Requests from the L2 hardware prefetchers that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.PF_HIT",
"PublicDescription": "Counts all L2 HW prefetcher requests that hit L2.",
@@ -211,6 +237,7 @@
},
{
"BriefDescription": "Requests from the L2 hardware prefetchers that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.PF_MISS",
"PublicDescription": "Counts all L2 HW prefetcher requests that missed L2.",
@@ -219,6 +246,7 @@
},
{
"BriefDescription": "RFO requests that hit L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_HIT",
"PublicDescription": "RFO requests that hit L2 cache.",
@@ -227,6 +255,7 @@
},
{
"BriefDescription": "RFO requests that miss L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0x24",
"EventName": "L2_RQSTS.RFO_MISS",
"PublicDescription": "Counts the number of store RFO requests that miss the L2 cache.",
@@ -235,6 +264,7 @@
},
{
"BriefDescription": "RFOs that access cache lines in any state",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_STORE_LOCK_RQSTS.ALL",
"PublicDescription": "RFOs that access cache lines in any state.",
@@ -243,6 +273,7 @@
},
{
"BriefDescription": "RFOs that hit cache lines in M state",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_STORE_LOCK_RQSTS.HIT_M",
"PublicDescription": "RFOs that hit cache lines in M state.",
@@ -251,6 +282,7 @@
},
{
"BriefDescription": "RFOs that miss cache lines",
+ "Counter": "0,1,2,3",
"EventCode": "0x27",
"EventName": "L2_STORE_LOCK_RQSTS.MISS",
"PublicDescription": "RFOs that miss cache lines.",
@@ -259,6 +291,7 @@
},
{
"BriefDescription": "L2 or LLC HW prefetches that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_PF",
"PublicDescription": "Any MLC or LLC HW prefetch accessing L2, including rejects.",
@@ -267,6 +300,7 @@
},
{
"BriefDescription": "Transactions accessing L2 pipe",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.ALL_REQUESTS",
"PublicDescription": "Transactions accessing L2 pipe.",
@@ -275,6 +309,7 @@
},
{
"BriefDescription": "L2 cache accesses when fetching instructions",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.CODE_RD",
"PublicDescription": "L2 cache accesses when fetching instructions.",
@@ -283,6 +318,7 @@
},
{
"BriefDescription": "Demand Data Read requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.DEMAND_DATA_RD",
"PublicDescription": "Demand Data Read requests that access L2 cache.",
@@ -291,6 +327,7 @@
},
{
"BriefDescription": "L1D writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L1D_WB",
"PublicDescription": "L1D writebacks that access L2 cache.",
@@ -299,6 +336,7 @@
},
{
"BriefDescription": "L2 fill requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_FILL",
"PublicDescription": "L2 fill requests that access L2 cache.",
@@ -307,6 +345,7 @@
},
{
"BriefDescription": "L2 writebacks that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.L2_WB",
"PublicDescription": "L2 writebacks that access L2 cache.",
@@ -315,6 +354,7 @@
},
{
"BriefDescription": "RFO requests that access L2 cache",
+ "Counter": "0,1,2,3",
"EventCode": "0xF0",
"EventName": "L2_TRANS.RFO",
"PublicDescription": "RFO requests that access L2 cache.",
@@ -323,6 +363,7 @@
},
{
"BriefDescription": "Cycles when L1D is locked",
+ "Counter": "0,1,2,3",
"EventCode": "0x63",
"EventName": "LOCK_CYCLES.CACHE_LOCK_DURATION",
"PublicDescription": "Cycles in which the L1D is locked.",
@@ -331,6 +372,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests missed LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.MISS",
"PublicDescription": "This event counts each cache miss condition for references to the last level cache.",
@@ -339,6 +381,7 @@
},
{
"BriefDescription": "Core-originated cacheable demand requests that refer to LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0x2E",
"EventName": "LONGEST_LAT_CACHE.REFERENCE",
"PublicDescription": "This event counts requests originating from the core that reference a cache line in the last level cache.",
@@ -347,6 +390,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were LLC and cross-core snoop hits in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT",
"PEBS": "1",
@@ -355,6 +399,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were HitM responses from shared LLC.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM",
"PEBS": "1",
@@ -363,6 +408,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were LLC hit and cross-core snoop missed in on-pkg core cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_MISS",
"PEBS": "1",
@@ -371,6 +417,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were hits in LLC without snoops required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD2",
"EventName": "MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_NONE",
"PEBS": "1",
@@ -379,6 +426,7 @@
},
{
"BriefDescription": "Retired load uops whose data source was local DRAM (Snoop not needed, Snoop Miss, or Snoop Hit data not forwarded).",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_UOPS_LLC_MISS_RETIRED.LOCAL_DRAM",
"SampleAfterValue": "100007",
@@ -386,6 +434,7 @@
},
{
"BriefDescription": "Retired load uops whose data source was remote DRAM (Snoop not needed, Snoop Miss, or Snoop Hit data not forwarded).",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_DRAM",
"SampleAfterValue": "100007",
@@ -393,6 +442,7 @@
},
{
"BriefDescription": "Data forwarded from remote cache.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_FWD",
"SampleAfterValue": "100007",
@@ -400,6 +450,7 @@
},
{
"BriefDescription": "Remote cache HITM.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD3",
"EventName": "MEM_LOAD_UOPS_LLC_MISS_RETIRED.REMOTE_HITM",
"SampleAfterValue": "100007",
@@ -407,6 +458,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were load uops missed L1 but hit FB due to preceding miss to the same cache line with data not ready.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.HIT_LFB",
"PEBS": "1",
@@ -415,6 +467,7 @@
},
{
"BriefDescription": "Retired load uops with L1 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_HIT",
"PEBS": "1",
@@ -423,6 +476,7 @@
},
{
"BriefDescription": "Retired load uops which data sources following L1 data-cache miss.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L1_MISS",
"PEBS": "1",
@@ -431,6 +485,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache hits as data sources.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_HIT",
"PEBS": "1",
@@ -439,6 +494,7 @@
},
{
"BriefDescription": "Retired load uops with L2 cache misses as data sources.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.L2_MISS",
"PEBS": "1",
@@ -447,6 +503,7 @@
},
{
"BriefDescription": "Retired load uops which data sources were data hits in LLC without snoops required.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.LLC_HIT",
"PEBS": "1",
@@ -455,6 +512,7 @@
},
{
"BriefDescription": "Miss in last-level (L3) cache. Excludes Unknown data-source.",
+ "Counter": "0,1,2,3",
"EventCode": "0xD1",
"EventName": "MEM_LOAD_UOPS_RETIRED.LLC_MISS",
"PEBS": "1",
@@ -463,6 +521,7 @@
},
{
"BriefDescription": "All retired load uops. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
"PEBS": "1",
@@ -471,6 +530,7 @@
},
{
"BriefDescription": "All retired store uops. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.ALL_STORES",
"PEBS": "1",
@@ -479,6 +539,7 @@
},
{
"BriefDescription": "Retired load uops with locked access. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.LOCK_LOADS",
"PEBS": "1",
@@ -487,6 +548,7 @@
},
{
"BriefDescription": "Retired load uops that split across a cacheline boundary. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_LOADS",
"PEBS": "1",
@@ -495,6 +557,7 @@
},
{
"BriefDescription": "Retired store uops that split across a cacheline boundary. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.SPLIT_STORES",
"PEBS": "1",
@@ -503,6 +566,7 @@
},
{
"BriefDescription": "Retired load uops that miss the STLB. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_LOADS",
"PEBS": "1",
@@ -511,6 +575,7 @@
},
{
"BriefDescription": "Retired store uops that miss the STLB. (Precise Event)",
+ "Counter": "0,1,2,3",
"EventCode": "0xD0",
"EventName": "MEM_UOPS_RETIRED.STLB_MISS_STORES",
"PEBS": "1",
@@ -519,6 +584,7 @@
},
{
"BriefDescription": "Demand and prefetch data reads",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.ALL_DATA_RD",
"PublicDescription": "Data read requests sent to uncore (demand and prefetch).",
@@ -527,6 +593,7 @@
},
{
"BriefDescription": "Cacheable and noncacheable code read requests",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_CODE_RD",
"PublicDescription": "Demand code read requests sent to uncore.",
@@ -535,6 +602,7 @@
},
{
"BriefDescription": "Demand Data Read requests sent to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_DATA_RD",
"PublicDescription": "Demand data read requests sent to uncore.",
@@ -543,6 +611,7 @@
},
{
"BriefDescription": "Demand RFO requests including regular RFOs, locks, ItoM",
+ "Counter": "0,1,2,3",
"EventCode": "0xB0",
"EventName": "OFFCORE_REQUESTS.DEMAND_RFO",
"PublicDescription": "Demand RFO read requests sent to uncore, including regular RFOs, locks, ItoM.",
@@ -551,6 +620,7 @@
},
{
"BriefDescription": "Cases when offcore requests buffer cannot take more entries for core",
+ "Counter": "0,1,2,3",
"EventCode": "0xB2",
"EventName": "OFFCORE_REQUESTS_BUFFER.SQ_FULL",
"PublicDescription": "Cases when offcore requests buffer cannot take more entries for core.",
@@ -559,6 +629,7 @@
},
{
"BriefDescription": "Offcore outstanding cacheable Core Data Read transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD",
"PublicDescription": "Offcore outstanding cacheable data read transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -567,6 +638,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding cacheable Core Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD",
@@ -576,6 +648,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_CODE_RD",
@@ -585,6 +658,7 @@
},
{
"BriefDescription": "Cycles when offcore outstanding Demand Data Read transactions are present in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_DATA_RD",
@@ -594,6 +668,7 @@
},
{
"BriefDescription": "Offcore outstanding demand rfo reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"CounterMask": "1",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO",
@@ -603,6 +678,7 @@
},
{
"BriefDescription": "Offcore outstanding code reads transactions in SuperQueue (SQ), queue to uncore, every cycle",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_CODE_RD",
"PublicDescription": "Offcore outstanding Demand Code Read transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -611,6 +687,7 @@
},
{
"BriefDescription": "Offcore outstanding Demand Data Read transactions in uncore queue.",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD",
"PublicDescription": "Offcore outstanding Demand Data Read transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -619,6 +696,7 @@
},
{
"BriefDescription": "Cycles with at least 6 offcore outstanding Demand Data Read transactions in uncore queue",
+ "Counter": "0,1,2,3",
"CounterMask": "6",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_DATA_RD_GE_6",
@@ -628,6 +706,7 @@
},
{
"BriefDescription": "Offcore outstanding RFO store transactions in SuperQueue (SQ), queue to uncore",
+ "Counter": "0,1,2,3",
"EventCode": "0x60",
"EventName": "OFFCORE_REQUESTS_OUTSTANDING.DEMAND_RFO",
"PublicDescription": "Offcore outstanding RFO store transactions in SQ to uncore. Set Cmask=1 to count cycles.",
@@ -636,6 +715,7 @@
},
{
"BriefDescription": "Counts demand & prefetch data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -645,6 +725,7 @@
},
{
"BriefDescription": "Counts demand & prefetch data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -654,6 +735,7 @@
},
{
"BriefDescription": "Counts demand & prefetch data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -663,6 +745,7 @@
},
{
"BriefDescription": "Counts demand & prefetch data reads that hit in the LLC and sibling core snoop returned a clean response",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.LLC_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -672,6 +755,7 @@
},
{
"BriefDescription": "Counts all prefetch data reads that hit the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -681,6 +765,7 @@
},
{
"BriefDescription": "Counts prefetch data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -690,6 +775,7 @@
},
{
"BriefDescription": "Counts prefetch data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -699,6 +785,7 @@
},
{
"BriefDescription": "Counts prefetch data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -708,6 +795,7 @@
},
{
"BriefDescription": "Counts prefetch data reads that hit in the LLC and sibling core snoop returned a clean response",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.LLC_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -717,6 +805,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -726,6 +815,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -735,6 +825,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -744,6 +835,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -753,6 +845,7 @@
},
{
"BriefDescription": "Counts all data/code/rfo reads (demand & prefetch) that hit in the LLC and sibling core snoop returned a clean response",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.ALL_READS.LLC_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -762,6 +855,7 @@
},
{
"BriefDescription": "Counts all writebacks from the core to the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.COREWB.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -771,6 +865,7 @@
},
{
"BriefDescription": "Counts all demand code reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -780,6 +875,7 @@
},
{
"BriefDescription": "Counts all demand data reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -789,6 +885,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -798,6 +895,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -807,6 +905,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -816,6 +915,7 @@
},
{
"BriefDescription": "Counts demand data reads that hit in the LLC and sibling core snoop returned a clean response",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.LLC_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -825,6 +925,7 @@
},
{
"BriefDescription": "Counts demand data writes (RFOs) that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.DEMAND_RFO.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -834,6 +935,7 @@
},
{
"BriefDescription": "Counts L2 hints sent to LLC to keep a line from being evicted out of the core caches",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.LRU_HINTS",
"MSRIndex": "0x1a6,0x1a7",
@@ -843,6 +945,7 @@
},
{
"BriefDescription": "Counts miscellaneous accesses that include port i/o, MMIO and uncacheable memory accesses",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.OTHER.PORTIO_MMIO_UC",
"MSRIndex": "0x1a6,0x1a7",
@@ -852,6 +955,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to L2) code reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -861,6 +965,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -870,6 +975,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -879,6 +985,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -888,6 +995,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -897,6 +1005,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to L2) data reads that hit in the LLC and the snoops sent to sibling cores return clean response",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.LLC_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -906,6 +1015,7 @@
},
{
"BriefDescription": "Counts all prefetch (that bring data to LLC only) code reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_CODE_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -915,6 +1025,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) data reads that hit in the LLC",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_DATA_RD.LLC_HIT.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -924,6 +1035,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and the snoop to one of the sibling cores hits the line in M state and the line is forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_DATA_RD.LLC_HIT.HITM_OTHER_CORE",
"MSRIndex": "0x1a6,0x1a7",
@@ -933,6 +1045,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and the snoops to sibling cores hit in either E/S state and the line is not forwarded",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_DATA_RD.LLC_HIT.HIT_OTHER_CORE_NO_FWD",
"MSRIndex": "0x1a6,0x1a7",
@@ -942,6 +1055,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and sibling core snoops are not needed as either the core-valid bit is not set or the shared line is present in multiple cores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_DATA_RD.LLC_HIT.NO_SNOOP_NEEDED",
"MSRIndex": "0x1a6,0x1a7",
@@ -951,6 +1065,7 @@
},
{
"BriefDescription": "Counts prefetch (that bring data to LLC only) data reads that hit in the LLC and the snoops sent to sibling cores return clean response",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.PF_LLC_DATA_RD.LLC_HIT.SNOOP_MISS",
"MSRIndex": "0x1a6,0x1a7",
@@ -960,6 +1075,7 @@
},
{
"BriefDescription": "Counts requests where the address of an atomic lock instruction spans a cache line boundary or the lock instruction is executed on uncacheable address",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.SPLIT_LOCK_UC_LOCK.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -969,6 +1085,7 @@
},
{
"BriefDescription": "Counts non-temporal stores",
+ "Counter": "0,1,2,3",
"EventCode": "0xB7, 0xBB",
"EventName": "OFFCORE_RESPONSE.STREAMING_STORES.ANY_RESPONSE",
"MSRIndex": "0x1a6,0x1a7",
@@ -978,6 +1095,7 @@
},
{
"BriefDescription": "Split locks in SQ",
+ "Counter": "0,1,2,3",
"EventCode": "0xF4",
"EventName": "SQ_MISC.SPLIT_LOCK",
"SampleAfterValue": "100003",
diff --git a/tools/perf/pmu-events/arch/x86/ivytown/counter.json b/tools/perf/pmu-events/arch/x86/ivytown/counter.json
new file mode 100644
index 000000000000..b4e46a693f7e
--- /dev/null
+++ b/tools/perf/pmu-events/arch/x86/ivytown/counter.json
@@ -0,0 +1,52 @@
+[
+ {
+ "Unit": "core",
+ "CountersNumFixed": "3",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "CBOX",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "HA",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "iMC",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "4"
+ },
+ {
+ "Unit": "IRP",
+ "CountersNumFixed": "0",
+ "CountersNumGeneric": "2"